next up previous contents
Next: Classical scaling and Sammon's Up: Backpropagation Previous: Importing a new data

Visualizing an MLP network

In order to explore the trained MLP network, one can use the TS-SOM structure. The basic idea is to generate an artificial data space for the inputs such that each neuron of the TS-SOM corresponds to one point of this space. The trained MLP network is then used to compute output values for the generated points, and the TS-SOM structure is used to present the data points in the input and output spaces. See also the example ex5.18.
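Purely as an illustration of this idea (this is not NDA code), the following Python sketch lays a regular grid over two chosen input variables, holds the remaining inputs at their averages, and evaluates a trained network at every grid node. The network here is a hypothetical stand-in for the trained MLP; all names and values are assumptions.

```python
# Sketch of the visualization idea: vary two inputs on a grid,
# keep the rest constant, and record the network's output per node.

def trained_mlp(x):
    # Hypothetical stand-in for a trained network:
    # any function mapping an input vector to one output value.
    return 0.5 * x[0] - 0.2 * x[1] + 0.1 * sum(x[2:])

def grid_outputs(n, ranges, const_means, net):
    """Evaluate `net` on an n-by-n grid over the first two inputs.

    ranges      -- [(min, max), (min, max)] for the two varied inputs
    const_means -- average values for the remaining (constant) inputs
    """
    outputs = []
    for i in range(n):
        for j in range(n):
            # Linear ramps along the two grid coordinates.
            x0 = ranges[0][0] + (ranges[0][1] - ranges[0][0]) * i / (n - 1)
            x1 = ranges[1][0] + (ranges[1][1] - ranges[1][0]) * j / (n - 1)
            outputs.append(net([x0, x1] + const_means))
    return outputs

out = grid_outputs(8, [(0.0, 1.0), (0.0, 1.0)], [0.3, 0.3, 0.3], trained_mlp)
print(len(out))  # one network output per grid node
```

Plotting these outputs over the grid (for example as gray levels) then shows how the network's prediction responds to the two varied inputs, which is exactly what the TS-SOM visualization below does for each neuron.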

In the following example, the MLP network is trained with the Boston data to model the relationships between some background variables and the prices of the apartments. New values of these background variables are then generated, and the output values are computed by the trained network for each neuron of the TS-SOM.

First, the MLP network is trained as follows:

NDA> load boston.dat
NDA> select inputflds -f boston.crim boston.zn boston.ptratio
      boston.b boston.chas boston.dis boston.indus
NDA> select trg -f boston.rate
NDA> prepro -d trg -dout trg2 -e
# Preprocess the input fields in the same way (assumed step; the
# prop command below reads the preprocessed set src2)
NDA> prepro -d inputflds -dout src2 -e
NDA> prop -di src2 -do trg2 -net 2 15 1 -types s s -em 300
      -bs 0.01 -mup 1.1 -mdm 0.8 -nout wei -ef virhe

The following commands show an example of how the input and output data of the MLP network can be visualized. An empty TS-SOM structure is created, and a data point is then generated for each neuron to represent one point in the input space (see somgrid in section 5.1.10). In this first example, the field crim increases along the x-coordinates of the neurons and the field zn along their y-coordinates. The rest of the fields (ptratio, b, chas, dis and indus) are held constant at their average values.

NDA> select x -f inputflds.crim
NDA> select y -f inputflds.zn
NDA> select z -f inputflds.ptratio inputflds.b inputflds.chas
      inputflds.dis inputflds.indus
NDA> fldstat -d x -dout xsta -min -max
NDA> fldstat -d y -dout ysta -min -max
NDA> fldstat -d z -dout zsta -avg
# Create an empty TS-SOM and generate data points for its neurons
NDA> somtr build -sout s1 -l 6 -D 2
...
NDA> somgrid -s s1 -min xsta.min -max xsta.max -d x -dout x1
      -sca vec -dim 0
NDA> somgrid -s s1 -min ysta.min -max ysta.max -d y -dout y1
      -sca vec -dim 1
NDA> somgrid -s s1 -min zsta.avg -max zsta.avg -d z -dout z1
      -sca vec -dim 0
# Compute output values by the MLP network
NDA> select data1 -f x1.crim y1.zn z1.ptratio z1.b z1.chas
      z1.dis z1.indus
NDA> fbp -d data1 -nin wei -dout trgout
NDA> select data1 -d trgout
NDA> mkgrp win1 -s /crim_zn/s1
NDA> setgdat win1 -d /crim_zn/data1
...
NDA> ngray /crim_zn/win1 -f /crim_zn/data1.0 -sca som 
NDA> bar /crim_zn/win1 -inx 0 -f /crim_zn/data1.zn -co green
NDA> bar /crim_zn/win1 -inx 1 -f /crim_zn/data1.crim -co red
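Purely for illustration (this is not NDA code), the effect of the somgrid calls above can be sketched in Python: each neuron (i, j) of a 2-D map layer receives an input vector whose first field grows with the neuron's x-coordinate and whose second grows with its y-coordinate. The grid size and the value ranges used here are assumptions, not values produced by NDA.

```python
# Sketch of the somgrid idea: ramp one field linearly along one
# grid dimension of a 2-D map layer.

def somgrid_value(vmin, vmax, coord, size):
    # Linear ramp from vmin (coord 0) to vmax (coord size-1).
    return vmin + (vmax - vmin) * coord / (size - 1)

size = 64  # assumed side length of the 2-D map layer
crim = [[somgrid_value(0.0, 89.0, i, size) for j in range(size)]
        for i in range(size)]   # varies along the x-coordinate only
zn   = [[somgrid_value(0.0, 100.0, j, size) for j in range(size)]
        for i in range(size)]   # varies along the y-coordinate only

print(crim[0][0], crim[size - 1][0])  # min at one edge, max at the other
```

Feeding such per-neuron vectors through the trained network (as fbp does above) yields one output value per neuron, which ngray can then display as a gray level on the map.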

By using similar command sequences, we can visualize the relationships between different combinations of the background variables and the outputs of the MLP network. The results are presented below.

[Figure: visualizations of the MLP outputs, panels a)-d)]

In the three grayscale figures, the gray level of the boxes describes the output values and two background variables are mapped to the x and y axes; in the last figure, the output is mapped to the z axis. In figure a) the background variables are crim and zn; in figure b) they are ptratio and chas; in figure c) they are dis and indus. In figure d) the variables crim, b and indus increase along the x-axis and the variables zn, chas and dis along the y-axis, and the predicted variable rate is mapped to the z-axis. Note that the last example also uses the other background variables in the computation, but only these two have been displayed.


Anssi Lensu
Tue Jul 23 11:58:18 EET DST 2002