
The Matlab neural network toolbox style training

 

mlbp Create and train a multi-layer perceptron network using the Matlab neural network toolbox style training algorithm
-di <i-data> name of the input data frame for training the network
-do <o-data> name of the target data frame for training the network
-net <nlay> <nn1> ... <nnN> network configuration: the number of layers followed by the number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] use a sigmoid, tanh or linear activation function in each layer, default is sigmoid
[-ti <ti-data>] input data frame for test set
[-to <to-data>] target data frame for test set
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
-nout <wdata> data frame for saving the trained network weights
[-ef <edata>] output error to a frame
[-bs <tstep>] initial training step length (default is 0.01)
[-em <epochs>] maximum number of training epochs (default is 200)
[-mdm <mdown>] training step multiplier downwards (default is 0.7)
[-mup <mup>] training step multiplier upwards (default is 1.2)
[-mom <moment>] momentum parameter (default is 0.95)
[-ac <adapt-cr>] adaptation criterion for the training step (default is 1.01)
[-one] force one neuron per input
[-penalty <penalty>] regularization coefficient

This command trains a backpropagation network using the Matlab-style training algorithm. The method combines a globally adapted learning rate (training step) with a momentum term: the step is multiplied upwards after successful epochs and downwards when the error grows beyond the adaptation criterion. By default, one neuron per input is used.

Example (ex5.9): Train a three-layer (input + hidden + output layer) MLP network with sine data, using tanh activation functions in the neurons. After training, save the network output.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> mlbp -di sinx -do siny -net 3 1 15 1 -types t t t -em 100
      -nout wei -ef virhe -bs 0.1 -ac 1.1 -mup 1.1 -mdm 0.7
      -mom 0.95
NDA> fbp -d sinx -dout out -win wei
NDA> select output -f sin.x out.0
NDA> save output
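
For readers without NDA, the pipeline above can be approximated in plain NumPy. Everything here is an illustrative sketch under stated assumptions: the data generation stands in for sin.dat, the initialisation and fixed-step momentum updates are not mlbp's (the adaptive step is omitted for brevity), and only the 1-15-1 tanh architecture of the example is reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)   # stand-in for sin.x
y = np.sin(x)                                        # stand-in for sin.y

# 1-15-1 network with tanh in both layers (-net 3 1 15 1 -types t t t)
W1 = rng.normal(0.0, 0.5, (1, 15)); b1 = np.zeros(15)
W2 = rng.normal(0.0, 0.5, (15, 1)); b2 = np.zeros(1)

lr, mom = 0.05, 0.5                  # fixed step; mlbp would adapt it
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

mse0 = np.mean((np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2) - y) ** 2)
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                # hidden layer
    out = np.tanh(h @ W2 + b2)              # output layer (tanh)
    d2 = (out - y) * (1.0 - out ** 2)       # backpropagated deltas
    d1 = (d2 @ W2.T) * (1.0 - h ** 2)
    n = len(x)
    vW2 = mom * vW2 - lr * (h.T @ d2) / n;  W2 += vW2
    vb2 = mom * vb2 - lr * d2.mean(0);      b2 += vb2
    vW1 = mom * vW1 - lr * (x.T @ d1) / n;  W1 += vW1
    vb1 = mom * vb1 - lr * d1.mean(0);      b1 += vb1

mse = np.mean((np.tanh(np.tanh(x @ W1 + b1) @ W2 + b2) - y) ** 2)
```

The final mean squared error should be well below the error of the untrained network, analogous to inspecting the error frame (-ef virhe) after mlbp.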




Anssi Lensu
Tue Jul 23 11:58:18 EET DST 2002