
The Silva & Almeida training

 

[Table: sabp command syntax and parameters]

This command trains a backpropagation network using the Silva & Almeida training algorithm. The method maintains an adaptive learning rate parameter for each weight: the rate is increased while the corresponding partial derivative keeps its sign and decreased when the sign changes. Unlike in the RPROP algorithm, no maximum or minimum learning rate is defined. By default, one neuron is used for each input.
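The per-weight update rule can be sketched as follows. This is a minimal illustration in NumPy, not NDA's implementation; the function name `silva_almeida_step` and the default factors are assumptions (1.1 and 0.8 mirror the `-mup` and `-mdm` values used in the example below).

```python
import numpy as np

def silva_almeida_step(w, grad, prev_grad, lr, up=1.1, down=0.8):
    """One Silva & Almeida update with a per-weight learning rate.

    Each rate grows by `up` while its gradient component keeps the
    same sign, and shrinks by `down` when the sign flips.  Unlike
    RPROP, there is no minimum/maximum clamp on the rates, and the
    gradient magnitude (not just its sign) drives the step.
    """
    s = grad * prev_grad
    # Grow on same sign, shrink on sign change, keep otherwise.
    lr = lr * np.where(s > 0, up, np.where(s < 0, down, 1.0))
    w = w - lr * grad          # gradient-descent step, per-weight rates
    return w, lr, grad.copy()

# Tiny demo: minimise f(w) = 0.5 * w**2, whose gradient is w itself.
w = np.array([5.0])
lr = np.full(1, 0.1)
prev = np.zeros(1)
for _ in range(20):
    w, lr, prev = silva_almeida_step(w, w.copy(), prev, lr)
```

In the demo the gradient keeps its sign, so the rate grows every step and the weight converges quickly; without the clamp of RPROP, a sign flip is the only brake on the rates.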

Example (ex5.10): Train a three-layer network (input, hidden, and output layer) to approximate a sine function, using the sigmoid activation function in the neurons. After training, save the network output and plot the training error graph.

NDA> load sin.dat
NDA> select sinx -f sin.x
NDA> select siny -f sin.y
NDA> sabp -di sinx -do siny -net 3 1 10 1 -types s s s -em 100
 -nout wei -ef virhe -bs 1.0 -mup 1.1 -mdm 0.8
NDA> fbp -di sinx -do out -win wei
NDA> select train -f virhe.TrainError
NDA> select output -f sin.x out.0
NDA> save output
NDA> mkgrp xxx
NDA> ldgrv xxx -f virhe.TrainError -co black
NDA> show xxx
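The NDA session above corresponds roughly to the following sketch: a 1-10-1 sigmoid network trained on a sine target with per-weight Silva & Almeida rates. The `sin.dat` file is not available here, so synthetic data on [0, pi] (where sin stays inside the sigmoid's output range) stands in for it; the base step 0.05 is an assumption chosen for stability rather than the `-bs 1.0` of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for sin.dat: targets in (0, 1) suit sigmoid output.
x = np.linspace(0.0, np.pi, 50).reshape(-1, 1)
y = np.sin(x)

# 1-10-1 network, matching "-net 3 1 10 1 -types s s s".
W1 = rng.normal(0.0, 0.5, (1, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)

params = [W1, b1, W2, b2]
lrs    = [np.full_like(p, 0.05) for p in params]  # assumed base step
prev_g = [np.zeros_like(p) for p in params]
UP, DOWN = 1.1, 0.8                               # -mup 1.1 -mdm 0.8

errors = []                                       # cf. virhe.TrainError
for epoch in range(100):                          # -em 100
    # Forward pass
    h = sigmoid(x @ W1 + b1)
    o = sigmoid(h @ W2 + b2)
    err = o - y
    errors.append(float(np.mean(err ** 2)))

    # Backward pass for the squared error
    do = err * o * (1 - o)
    dh = (do @ W2.T) * h * (1 - h)
    grads = [x.T @ dh, dh.sum(0), h.T @ do, do.sum(0)]

    # Silva & Almeida: per-weight rates, no min/max clamp
    for p, lr, g, pg in zip(params, lrs, grads, prev_g):
        s = g * pg
        lr *= np.where(s > 0, UP, np.where(s < 0, DOWN, 1.0))
        p -= lr * g
        pg[...] = g
```

After the loop, `errors` plays the role of the training error curve that the example plots with `ldgrv`/`show`.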

[Figure: training error graph of the Silva & Almeida training run]



Erkki Hakkinen
Thu Sep 24 11:51:34 EET DST 1998