
The basic learning rule

 

bp Create and train a multi-layer perceptron network using the basic learning rule
-di <i-data> name of the input data frame for training
-do <o-data> name of the target data frame for training
-net <nlayers> <nneu1> ... <nneuN> network configuration, number of layers and number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] use a sigmoid, tanh, or linear activation function in each layer; the default is sigmoid
[-ti <ti-data>] input data frame for test set
[-to <to-data>] target data frame for test set
-nout <wdata> data frame for saving the trained network weights
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
[-ef <edata>] output error to a frame
[-bs <tstep>] training step length (default is 0.1)
[-em <epochs>] maximum number of training epochs (default is 200)
[-one] forces one neuron / one input
[-penalty <penalty>] regularization coefficient
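
As a rough illustration of what these options control, the NumPy sketch below implements plain gradient-descent backpropagation with per-layer activations (s = sigmoid, t = tanh, l = linear), a step length in the role of -bs and an L2 weight penalty in the role of -penalty. The sketch and all names in it are hypothetical and are not part of NDA.

import numpy as np

# Purely illustrative: activation functions and their derivatives (with respect
# to the activation value), keyed the same way as the -types letters.
ACTS = {
    's': (lambda z: 1.0 / (1.0 + np.exp(-z)), lambda a: a * (1.0 - a)),    # sigmoid
    't': (np.tanh,                            lambda a: 1.0 - a ** 2),     # tanh
    'l': (lambda z: z,                        lambda a: np.ones_like(a)),  # linear
}

def init_weights(sizes, rng):
    # One weight matrix (with an extra bias row) per layer transition,
    # e.g. sizes = [1, 10, 1] for a 1-10-1 network.
    return [rng.normal(0.0, 0.1, size=(m + 1, n)) for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, types, x):
    # Return the activations of all layers for the input matrix x (rows = samples).
    acts = [x]
    for w, t in zip(weights, types):
        z = np.hstack([acts[-1], np.ones((len(x), 1))]) @ w    # append a bias input
        acts.append(ACTS[t][0](z))
    return acts

def train_epoch(weights, types, x, y, eta=0.1, penalty=0.0):
    # One batch gradient-descent step: w <- w - eta * (dE/dw + penalty * w),
    # with eta playing the role of -bs and penalty the role of -penalty.
    acts = forward(weights, types, x)
    delta = (acts[-1] - y) * ACTS[types[-1]][1](acts[-1])      # output-layer error signal
    for i in reversed(range(len(weights))):
        inp = np.hstack([acts[i], np.ones((len(x), 1))])
        grad = inp.T @ delta / len(x)
        if i > 0:                                              # propagate the error backwards
            delta = (delta @ weights[i][:-1].T) * ACTS[types[i - 1]][1](acts[i])
        weights[i] -= eta * (grad + penalty * weights[i])
    return 0.5 * np.mean((acts[-1] - y) ** 2)                  # training error before the update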

The test data set (-ti, -to) acts as a stopping criterion. Training does not actually stop early: after every successful training epoch the learning algorithm checks whether the test-set error is smaller than at any earlier point of the training and, if so, stores the current network weights. When the maximum number of training epochs has been reached, these best weights are saved to the namespace.
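
Continuing the hypothetical sketch above, this behaviour amounts to remembering the weights of the epoch with the smallest test-set error seen so far. The data split and parameter values below are invented for illustration.

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0 * np.pi, size=(300, 1))
y = np.sin(x)
x_train, y_train = x[:200], y[:200]          # training frames, like -di / -do
x_test,  y_test  = x[200:], y[200:]          # test frames, like -ti / -to

types = ['t', 'l']                           # tanh hidden layer, linear output (illustrative choice)
w = init_weights([1, 10, 1], rng)            # roughly -net 3 1 10 1
best_err, best_w = np.inf, None
for epoch in range(2000):                    # roughly -em 2000
    train_epoch(w, types, x_train, y_train, eta=0.01)
    err = 0.5 * np.mean((forward(w, types, x_test)[-1] - y_test) ** 2)
    if err < best_err:                       # test error smaller than ever before
        best_err, best_w = err, [wi.copy() for wi in w]
# after the last epoch, best_w corresponds to the weights that are saved to the namespace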

Example (ex5.6): Train a three-layer (input + hidden + output layer) MLP network and save the network output.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> bp -di sinx -do siny -net 3 1 10 1 -types s s s -em 2000
      -nout wei -ef virhe -bs 0.01 
NDA> fbp -d sinx -dout out -win wei
NDA> select output -f sin.x out.0
NDA> save output




Anssi Lensu
Thu May 17 15:00:44 EET DST 2001