
The Silva & Almeida training

 

sabp Create a multi-layer perceptron network using the Silva & Almeida training
-di <i-data> name of the input data frame for training the network
-do <o-data> name of the target data frame for training the network
-net <nlayers> <nneu1> ... <nneuN> network configuration, number of layers, number of neurons in each layer
[-types <s | t | l> ... <s | t | l>] use a sigmoid, tanh or linear activation function in each layer, default is sigmoid
[-ti <ti-data>] input data frame for test set
[-to <to-data>] output data frame for test set
[-vi <vi-data>] input data frame for a validation set
[-vo <vo-data>] target data frame for a validation set
-nout <wdata> data frame for saving the trained network weights
[-ef <edata>] output error to a frame
[-bs <tstep>] initial training step length (default is 0.01)
[-em <epochs>] maximum number of training epochs (default is 200)
[-mdm <mdown>] training step multiplier downwards (default is 0.8)
[-mup <mup>] training step multiplier upwards (default is 1.1)
[-one] forces one neuron per input

This command trains an MLP network using the Silva & Almeida training algorithm. The method maintains an adaptive learning-rate parameter for each individual weight of the network. Unlike in the RPROP algorithm, no maximum or minimum is defined for the learning rates. By default, one neuron is used for each input.
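The adaptation rule itself is simple. Below is a minimal NumPy sketch (an illustration, not NDA source code) of one Silva & Almeida step; u and d play the roles of the -mup and -mdm multipliers, and the initial per-weight step sizes correspond to -bs:

    import numpy as np

    def silva_almeida_step(w, grad, prev_grad, step, u=1.1, d=0.8):
        """One Silva & Almeida update with a separate step size per weight.

        While a gradient component keeps its sign, its step grows by the
        factor u; when the sign flips (the minimum was overshot), the
        step shrinks by d.  Unlike RPROP, the step sizes are not clipped
        to a [min, max] range.
        """
        sign_product = grad * prev_grad
        step = np.where(sign_product > 0.0, step * u, step)
        step = np.where(sign_product < 0.0, step * d, step)
        w = w - step * grad   # plain gradient descent with per-weight steps
        return w, step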

Example (ex5.10): Train a three-layer (input + hidden + output layer) MLP network on sine data, using sigmoid activation functions in all neurons. After training, save the network output and plot the training error graph.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> sabp -di sinx -do siny -net 3 1 10 1 -types s s s -em 100
      -nout wei -ef virhe -bs 1.0 -mup 1.1 -mdm 0.8
NDA> fbp -d sinx -dout out -win wei
NDA> select train -f virhe.TrainError
NDA> select output -f sin.x out.0
NDA> save output
NDA> mkgrp xxx
NDA> ldgrv xxx -f virhe.TrainError -co black
NDA> show xxx

[Figure: training error graph of the network trained in example ex5.10]
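
For reference, the computation in this example can be approximated in plain NumPy. The sketch below assumes batch training on the summed squared error and scales the sine targets into (0, 1) so a sigmoid output neuron can represent them; NDA's own data handling may differ:

    import numpy as np

    rng = np.random.default_rng(0)
    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    # Sine data, targets scaled into (0, 1) for the sigmoid output neuron
    # (an assumption; NDA's own scaling may differ)
    x = np.linspace(0.0, 2.0 * np.pi, 64).reshape(-1, 1)
    y = 0.5 * (np.sin(x) + 1.0)

    # 1-10-1 network with sigmoid layers, cf. '-net 3 1 10 1 -types s s s'
    W1 = rng.normal(0.0, 0.5, (1, 10)); b1 = np.zeros(10)
    W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)
    params = [W1, b1, W2, b2]
    steps = [np.full_like(p, 1.0) for p in params]   # -bs 1.0
    prev = [np.zeros_like(p) for p in params]
    u, d = 1.1, 0.8                                  # -mup 1.1, -mdm 0.8

    for epoch in range(100):                         # -em 100
        # Forward pass through both sigmoid layers
        h = sigmoid(x @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        err = o - y
        # Backward pass: batch gradients of the summed squared error
        do = err * o * (1.0 - o)
        dh = (do @ W2.T) * h * (1.0 - h)
        grads = [x.T @ dh, dh.sum(0), h.T @ do, do.sum(0)]
        # Silva & Almeida adaptation of the per-weight steps, then the move
        for p, g, s, pg in zip(params, grads, steps, prev):
            s *= np.where(g * pg > 0.0, u, np.where(g * pg < 0.0, d, 1.0))
            p -= s * g
            pg[:] = g
        print(epoch, 0.5 * (err ** 2).sum())         # cf. virhe.TrainError

The error printed per epoch corresponds to the virhe.TrainError column plotted in the figure above.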


