
The basic learning rule with a momentum term

 

mbp Create and train a multi-layer perceptron network using the basic learning rule with a momentum term
-di <i-data> name of the input data frame for training
-do <o-data> name of the target data frame for training
-net <nlay> <nn1> ... <nnN> network configuration: the number of layers, followed by the number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] use a sigmoid, tanh, or linear activation function in each layer (default is sigmoid)
[-ti <ti-data>] input data frame for test set
[-to <to-data>] target data frame for test set
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
-nout <wdata> data frame for saving the trained network weights
[-ef <edata>] output error to a frame
[-bs <tstep>] training step length (default is 0.01)
[-em <epochs>] maximum number of training epochs (default is 200)
[-mom <moment>] momentum parameter (default is 0.1)
[-one] force one neuron per input
[-penalty <penalty>] regularization coefficient

This command trains an MLP network using a Matlab-style training algorithm. The method is based on a global learning rate parameter combined with a momentum term. By default, one neuron is used for each input.
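
The basic rule with momentum corresponds to the standard momentum form of gradient descent; as a sketch (the manual does not spell out NDA's exact formulation), with training step \eta (-bs) and momentum \mu (-mom) each weight is updated as

\[
  \Delta w_{ij}(t) \;=\; -\eta\,\frac{\partial E}{\partial w_{ij}} \;+\; \mu\,\Delta w_{ij}(t-1),
  \qquad
  w_{ij}(t+1) \;=\; w_{ij}(t) + \Delta w_{ij}(t),
\]

where E is the training error. The momentum term reuses a fraction \mu of the previous step, which damps oscillations and accelerates progress along directions where the gradient stays consistent.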

Example (ex5.7): Train a three-layer (input + hidden + output layer) MLP network on sine data, using sigmoid activation functions in all neurons. After training, feed the inputs through the network with fbp and save its output.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> mbp -di sinx -do siny -net 3 1 10 1 -types s s s -em 1000
      -nout wei -ef virhe -bs 0.2 -mom 0.1
NDA> fbp -d sinx -dout out -win wei
NDA> select output -f sin.x out.0
NDA> save output
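
For readers without NDA at hand, the following is a minimal NumPy sketch of what this example computes: a 1-10-1 sigmoid MLP trained by gradient descent with momentum. The data generation and target rescaling are illustrative assumptions (sin.dat is not reproduced here, and since the output neuron is a sigmoid the targets are mapped into (0, 1)), and the gradient is averaged over the batch for stability, since NDA's exact batching convention is not documented here.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sin.dat (illustrative): inputs in [0, 2*pi], targets
# rescaled to (0, 1) because the output neuron is also a sigmoid.
x = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)
y = (np.sin(x) + 1.0) / 2.0
n = x.shape[0]

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# -net 3 1 10 1: one input, 10 hidden sigmoid neurons, one output.
W1 = rng.uniform(-0.5, 0.5, (1, 10)); b1 = np.zeros(10)
W2 = rng.uniform(-0.5, 0.5, (10, 1)); b2 = np.zeros(1)

eta, mu, epochs = 0.2, 0.1, 1000   # -bs 0.2 -mom 0.1 -em 1000
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for _ in range(epochs):
    # Forward pass.
    h = sigmoid(x @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagate the squared error, averaged over the batch.
    d_out = (out - y) * out * (1.0 - out) / n
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # Momentum update: v <- mu*v - eta*dE/dw, then w <- w + v.
    vW2 = mu * vW2 - eta * (h.T @ d_out); W2 += vW2
    vb2 = mu * vb2 - eta * d_out.sum(0);  b2 += vb2
    vW1 = mu * vW1 - eta * (x.T @ d_h);   W1 += vW1
    vb1 = mu * vb1 - eta * d_h.sum(0);    b1 += vb1

# Equivalent of fbp: feed the inputs through the trained network.
out = sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)
print("mean squared error:", np.mean((out - y) ** 2))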

[Figure: output of the trained network plotted against the sine data]

