next up previous contents
Next: The Levenberg-Marquardt training Up: Backpropagation Previous: The Silva & Almeida

The RPROP training

 

rprop Create and train a multi-layer perceptron network using the RPROP training algorithm
-di <i-data> name of the input data frame for training the network
-do <o-data> name of the target data frame for training the network
-net <nlayers> <nneu1> ... <nneuN> network configuration, number of layers, number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] use a sigmoid, tanh or linear network in each layer, default is sigmoid
[-ti <ti-data>] input data frame for test set
[-to <to-data>] output data frame for test set
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
-nout <wdata> data frame for saving the trained network weights
[-ef <edata>] output error to a frame
[-bs <tstep>] initial training step length (default is 0.01)
[-em <epochs>] maximum number of training epochs (default is 200)
[-mdm <mdown>] training step multiplier downwards (default is 0.8)
[-mup <mup>] training step multiplier upwards (default is 1.1)
[-smin <min-tsl>] minimum training step length (default is 0.0001)
[-smax <max-tsl>] maximum training step length (default is 10.0)
[-one] force one neuron per input

This command trains a backpropagation network using the RPROP training algorithm. RPROP maintains an individual, adaptive step length for each weight: the step is increased when the error gradient for that weight keeps its sign from one epoch to the next, and decreased when the sign changes. By default, one neuron is used for each input.
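The per-weight update rule can be sketched as follows. This is a minimal NumPy illustration of the sign-based step adaptation described above, not NDA's implementation; the function name `rprop_step` is hypothetical, and the default multipliers and bounds mirror the command's defaults (-mup 1.1, -mdm 0.8, -smin 0.0001, -smax 10.0).

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               mup=1.1, mdown=0.8, smin=1e-4, smax=10.0):
    """One simplified RPROP update over a weight array.

    Steps grow by `mup` where the gradient kept its sign since the
    previous epoch, shrink by `mdown` where the sign flipped, and are
    clipped to [smin, smax].
    """
    sign_change = grad * prev_grad
    step = np.where(sign_change > 0, np.minimum(step * mup, smax), step)
    step = np.where(sign_change < 0, np.maximum(step * mdown, smin), step)
    # Each weight moves against its gradient by its own step length;
    # only the sign of the gradient is used, not its magnitude.
    w = w - np.sign(grad) * step
    return w, step
```

Because only the gradient's sign enters the update, RPROP is insensitive to the scale of the error surface, which is what makes the per-weight step lengths effective.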

Example (ex5.11): Train a three-layer (input, hidden, and output layer) MLP network on sine data, using sigmoid activation functions in all neurons. After training is done, save the network output and error.

NDA> load sink.dat
NDA> load sint.dat
NDA> select sinx -f sink.ox
NDA> select siny -f sink.oy
NDA> select sinox -f sink.ox
NDA> select sintx -f sint.tx
NDA> select sinty -f sint.ty
NDA> rprop -di sinx -do siny -net 3 1 3 1 -types s s s -em 40
      -bs 0.05 -mup 1.1 -mdm 0.8 -nout wei -ti sintx -to sinty
      -ef virhe
NDA> fbp -d sinox -dout out -win wei
NDA> select test -f virhe.TrainError virhe.TestError
NDA> select output -f sink.ox out.0
NDA> save output
NDA> save test
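The whole example can be approximated outside NDA as follows. This is a simplified sketch, assuming synthetic sine data in place of sink.dat and a basic sign-only RPROP variant; it trains the same 1-3-1 all-sigmoid network with the same hyperparameters (-bs 0.05, -em 40, -mup 1.1, -mdm 0.8) and reports the training error before and after.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sine training set standing in for sink.dat (assumption);
# targets are scaled into (0, 1) to suit the sigmoid output neuron.
x = np.linspace(0.0, 2.0 * np.pi, 40).reshape(-1, 1)
y = 0.5 + 0.4 * np.sin(x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1-3-1 network with sigmoid layers, as in -net 3 1 3 1 -types s s s.
W1 = rng.standard_normal((1, 3)); b1 = np.zeros(3)
W2 = rng.standard_normal((3, 1)); b2 = np.zeros(1)

params = [W1, b1, W2, b2]
steps  = [np.full_like(p, 0.05) for p in params]   # -bs 0.05
prev_g = [np.zeros_like(p) for p in params]

def forward(x):
    h = sigmoid(x @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

mse_before = float(np.mean((forward(x)[1] - y) ** 2))

for epoch in range(40):                            # -em 40
    h, o = forward(x)
    err = o - y
    # Backpropagate the squared-error gradient through both layers.
    d_o = err * o * (1.0 - o)
    d_h = (d_o @ W2.T) * h * (1.0 - h)
    grads = [x.T @ d_h, d_h.sum(0), h.T @ d_o, d_o.sum(0)]
    for p, g, s, pg in zip(params, grads, steps, prev_g):
        sign = g * pg
        s[sign > 0] = np.minimum(s[sign > 0] * 1.1, 10.0)  # -mup, -smax
        s[sign < 0] = np.maximum(s[sign < 0] * 0.8, 1e-4)  # -mdm, -smin
        p -= np.sign(g) * s                # update weights in place
        pg[...] = g                        # remember gradient signs

mse_after = float(np.mean((forward(x)[1] - y) ** 2))
```

The saved output frame in the NDA session corresponds to `forward(x)[1]` here, and the error frame to the per-epoch training error.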




Anssi Lensu
Wed Oct 6 12:57:48 EET DST 1999