
Global adaptive training step length

 

abp Create a multi-layer perceptron network using a global adaptive training step length
-di <i-data> name of the input data frame for training
-do <o-data> name of the target data frame for training
-net <nlay> <nn1> ... <nnN> network configuration: the number of layers followed by the number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] activation function for each layer: sigmoid, tanh or linear (default is sigmoid)
[-ti <ti-data>] input data frame for test set
[-to <to-data>] target data frame for test set
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
-nout <wdata> data frame for saving the trained network weights
[-ef <edata>] data frame for saving the training error
[-bs <tstep>] initial training step length (default is 0.01)
[-em <epochs>] maximum number of training epochs (default is 200)
[-mdn <mdown>] training step multiplier downwards (default is 0.8)
[-mup <mup>] training step multiplier upwards (default is 1.1)
[-ac <adapt-cr>] adaptation criterion (default is 1.01)
[-one] force one neuron per input
[-penalty <penalty>] regularization coefficient

This command trains a backpropagation network using a Matlab-style training algorithm based on a global adaptive learning rate parameter. By default, one neuron is used for each input.
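The manual does not spell out the adaptation rule itself, but the -mup, -mdn and -ac options suggest a rule similar to Matlab's traingda: the step length grows while the training error keeps falling and shrinks once the error grows by more than the adaptation criterion. The following Python sketch illustrates this assumed rule; the function name and the exact comparison are illustrative guesses, not NDA's verified implementation.

def adapt_step(step, err, prev_err, mup=1.1, mdown=0.8, adapt_cr=1.01):
    # Assumed rule reconstructed from the -mup/-mdn/-ac options:
    # if this epoch's error grew by more than the adaptation
    # criterion, back the step length off; otherwise grow it.
    if err > prev_err * adapt_cr:
        return step * mdown
    return step * mup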

Example (ex5.8): Train a three-layer (input + hidden + output layer) MLP network on the sine data, using sigmoid activation functions in the neurons. After training, feed the inputs through the network and save its output.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> abp -di sinx -do siny -net 3 1 10 1 -types s s s -em 350
      -nout wei -ef virhe -bs 1.2 -ac 1.04 -mup 1.1 -mdn 0.8
NDA> fbp -d sinx -dout out -win wei
NDA> select output -f sin.x out.0
NDA> save output
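
For readers without NDA at hand, the following Python sketch mirrors the workflow above. It is a minimal illustration under stated assumptions, not NDA's implementation: the sine targets are scaled into (0, 1) to suit the sigmoid output neuron, and an epoch whose error grows is not reverted, only penalized through the step length.

import numpy as np

rng = np.random.default_rng(0)

# Sine data, scaled into (0, 1) because the output neuron is a sigmoid.
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = (np.sin(x) + 1.0) / 2.0

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# A 1-10-1 network, matching -net 3 1 10 1 in the example above.
W1 = rng.normal(scale=0.5, size=(1, 10)); b1 = np.zeros(10)
W2 = rng.normal(scale=0.5, size=(10, 1)); b2 = np.zeros(1)

step, prev_err = 1.2, np.inf             # -bs 1.2
mup, mdown, adapt_cr = 1.1, 0.8, 1.04    # -mup, -mdn, -ac

for epoch in range(350):                 # -em 350
    # Forward pass.
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    err = np.mean((out - y) ** 2)

    # Global adaptive step length (assumed traingda-style rule).
    step *= mdown if err > prev_err * adapt_cr else mup
    prev_err = err

    # Backward pass for mean squared error with sigmoid derivatives.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= step * h.T @ d_out / len(x); b2 -= step * d_out.mean(axis=0)
    W1 -= step * x.T @ d_h / len(x);   b1 -= step * d_h.mean(axis=0)

print("final MSE:", prev_err)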

Figure: Output of the trained MLP network for the sine data.

