Next: Classical scaling and Sammon's Up: Backpropagation Previous: Importing a new data

Visualizing the MLP network

The trained MLP network can be explored through the TS-SOM structure. The basic idea is to generate an artificial data space for the inputs such that each neuron of the TS-SOM is assigned a point of this data space. The trained MLP network is then used to compute output values for the generated point of each neuron. The TS-SOM structure can then be used to present the data points in the input and output spaces. See also the example ex5.18.
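As a sketch of this idea outside NDA, the following Python fragment (illustrative only; the grid size, the number of fields and the MLP weights are hypothetical, not taken from NDA) maps each neuron of a two-dimensional grid to a point in the input space and computes an MLP output for it:

```python
import math

def neuron_points(n, x_range, y_range, const_fields):
    # Map each neuron (i, j) of an n x n grid to an input-space point:
    # the first coordinate scales with i, the second with j, and the
    # remaining fields are held at the given constant values.
    pts = []
    for i in range(n):
        for j in range(n):
            x = x_range[0] + (x_range[1] - x_range[0]) * i / (n - 1)
            y = y_range[0] + (y_range[1] - y_range[0]) * j / (n - 1)
            pts.append([x, y] + list(const_fields))
    return pts

def mlp_output(point, w_hidden, w_out):
    # Toy one-hidden-layer MLP with tanh units (weights are hypothetical).
    hidden = [math.tanh(sum(w * p for w, p in zip(ws, point)))
              for ws in w_hidden]
    return sum(w * h for w, h in zip(w_out, hidden))

# Example: a 4 x 4 neuron grid, two varied inputs plus one constant field.
points = neuron_points(4, (0.0, 1.0), (0.0, 1.0), [0.5])
w_hidden = [[0.2, -0.4, 0.1], [0.3, 0.5, -0.2]]
w_out = [1.0, -1.0]
outputs = [mlp_output(p, w_hidden, w_out) for p in points]
```

Each neuron thus gets both an input point and a network output, which is what the TS-SOM displays below visualize.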

In the following example, the MLP network is trained with the Boston data to model the relationships between some background variables and the prices of the apartments. New values of these background variables are then generated, and the output values are computed by the trained network for each neuron of the TS-SOM.

Firstly, the MLP network is trained as follows:

NDA> load boston.dat
NDA> select inputflds -f boston.crim boston.zn boston.ptratio
   boston.b boston.chas boston.dis boston.indus
NDA> select trg -f boston.rate
NDA> prepro -d inputflds -dout src2 -e
NDA> prepro -d trg -dout trg2 -e
NDA> prop -di src2 -do trg2 -net 2 15 1 -full -types s s -em 300
  -bs 0.01 -mup 1.1 -mdm 0.8 -nout wei -ef virhe
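For readers unfamiliar with the training step itself, the following pure-Python sketch shows the kind of computation a backpropagation training command performs. It is not NDA's implementation; the network size, learning rate and data are hypothetical:

```python
import math
import random

def train_mlp(data, targets, hidden=3, epochs=500, lr=0.1):
    # Minimal one-hidden-layer MLP (tanh units, linear output) trained
    # by plain stochastic backpropagation. Illustrative only.
    random.seed(0)
    nin = len(data[0])
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(nin)] for _ in range(hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    for _ in range(epochs):
        for x, t in zip(data, targets):
            h = [math.tanh(sum(w * xi for w, xi in zip(ws, x))) for ws in w1]
            y = sum(w * hi for w, hi in zip(w2, h))
            err = y - t
            for j in range(hidden):
                # Gradient through tanh uses the pre-update output weight.
                grad_h = err * w2[j] * (1 - h[j] ** 2)
                w2[j] -= lr * err * h[j]
                for i in range(nin):
                    w1[j][i] -= lr * grad_h * x[i]
    return w1, w2

def predict(x, w1, w2):
    h = [math.tanh(sum(w * xi for w, xi in zip(ws, x))) for ws in w1]
    return sum(w * hi for w, hi in zip(w2, h))
```

After training, the weight matrices play the role of the `wei` structure saved by `-nout wei` above: they are all that is needed to compute outputs for new input points.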

The following commands show how the input and output data of the MLP network can be visualized. An empty TS-SOM structure is created, and data points are generated so that each neuron represents one point in the input space (see the command somgrid in Sect. 5.1.10). In this first example, the field "crim" increases according to the x-coordinates of the neurons and the field "zn" according to their y-coordinates. The remaining fields ("ptratio", "b", "chas", "dis" and "indus") are held constant at their average values.

NDA> select x -f inputflds.crim
NDA> select y -f inputflds.zn
NDA> select z -f inputflds.ptratio inputflds.b inputflds.chas
  inputflds.dis inputflds.indus
NDA> fldstat -d x -dout xsta -min -max
NDA> fldstat -d y -dout ysta -min -max
NDA> fldstat -d z -dout zsta -avg
#
# Create an empty TS-SOM and generate data points for its neurons
#
NDA> somtr build -sout s1 -l 6 -D 2
NDA> somgrid -s s1 -min xsta.min -max xsta.max -d x -dout x1
   -sca vec -dim 0
NDA> somgrid -s s1 -min ysta.min -max ysta.max -d y -dout y1
   -sca vec -dim 1
NDA> somgrid -s s1 -min zsta.avg -max zsta.avg -d z -dout z1
   -sca vec -dim 0
#
# Compute output values by the MLP network
#
NDA> select data1 -f x1.crim y1.zn z1.ptratio z1.b z1.chas z1.dis z1.indus
NDA> fbp -d data1 -nin wei -dout trgout
NDA> select data1 -d trgout
#
#
NDA> mkgrp win1 -s /crim_zn/s1
NDA> setgdat win1 -d /crim_zn/data1
...
NDA> ngray /crim_zn/win1 -f /crim_zn/data1.0 -sca som 
NDA> bar /crim_zn/win1 -inx 0 -f /crim_zn/data1.zn -co green -sca som 
NDA> bar /crim_zn/win1 -inx 1 -f /crim_zn/data1.crim -co red -sca som
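The ngray command above maps field values to gray levels on the SOM grid. A minimal sketch of such a mapping (hypothetical; not NDA's actual scaling) linearly rescales the output values to integer gray levels:

```python
def to_gray(values):
    # Linearly rescale a list of values to integer gray levels 0..255.
    lo, hi = min(values), max(values)
    if hi == lo:
        return [128 for _ in values]  # flat data: mid-gray everywhere
    return [round(255 * (v - lo) / (hi - lo)) for v in values]

grays = to_gray([2.0, 3.0, 4.0])  # → [0, 128, 255]
```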

By using similar command sequences, we can visualize the relationships between different combinations of the background variables and the outputs of the MLP network. The results are presented below.

[Figure: four TS-SOM visualizations (a-d) of the MLP input-output relationships, described below.]

In the first three figures, the gray level of the boxes depicts the output values, and two background variables are mapped to the x and y axes. Figure a: the background variables are "crim" and "zn". Figure b: the background variables are "ptratio" and "chas". Figure c: the background variables are "dis" and "indus". In the last figure (d), the variables "crim", "b" and "indus" increase according to the x-coordinates and the variables "zn", "chas" and "dis" according to the y-coordinates, and the predicted variable "rate" is mapped to the z axis. Note that the last example also uses other background variables in the computation, but only these have been displayed.



Erkki Hakkinen
Thu Sep 24 11:51:34 EET DST 1998