Help in viewing the output of a neural network

I used nftool in MATLAB 2012 and trained a network. I gave the training inputs as x=[250:1] and targets as t=[250:1]. I used 10 hidden layers. I trained the network, got the results, and exported the data to the workspace, where I finally got net as the variable. I want to realize this network in hardware, so I want the exact weight and bias values. I searched the net and typed net.IW and got 10 values, net.LW and got another 10 values, and net.b and got 11 values. My network diagram is as shown below:
[Diagram: Output Neural Net]
If IW is the Input Weight and LW is the Layer Weight, I realized the network diagram as shown below:
[Diagram: My realization of the output Neural Network]
I got 10 values for IW, which would fit into 10 hidden layers, and 10 values for LW, which will fit into 10 hidden layers according to my realization. I got 11 values for bias, which will fit into my network, since there are 11 (b) blocks in the network. But I am missing one IW and one LW value.
I want to know whether there is a mistake in my realization of the network's output, or whether I missed any of the values. Please help.

Accepted Answer

Greg Heath on 13 Feb 2014
> Help in viewing the output of a neural network
> Asked by sundar on 2 Feb 2014 at 11:01
> I used nftool in MATLAB 2012 and trained a network. I gave the training inputs as x=[250:1] and targets as t=[250:1].
% Syntax error: the colon operator's default increment is +1, so 250:1 yields an empty matrix
x = [250:1]
x = Empty matrix: 1-by-0
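% The intended input was probably an increasing range, e.g.
x = 1:250 % 1-by-250 row of scalar samples
t = 1:250 % matching 1-by-250 targets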
> I used 10 hidden layers.
% Terminology error:
You used 1 hidden layer with 10 hidden nodes.
> I trained the network, got the results, and exported the data to the workspace, where I finally got net as the variable. I want to realize this network in hardware, so I want the exact weight and bias values. I searched the net and typed net.IW and got 10 values, net.LW and got another 10 values, and net.b and got 11 values. My network diagram is as shown below:
> Output Neural Net
% Diagram Omitted. See above.
> If IW is the Input Weight and LW is the Layer Weight, I realized the network diagram as shown below:
% Diagram Omitted. See above.
The 4-block diagram is somewhat mislabeled and misleading:
1. Unfortunately, the standard terminology for this type of NN
is two-layer Multilayer Perceptron.
a. It corresponds to the two weight layers or, equivalently, the
two neuron (i.e., activation-function) layers.
b. In particular, it is called a two-layer network even though
there are three layers of nodes: input, hidden, and output.
c. Since the signals are represented by the nodes, I would
prefer to call it a three (node) layer net. So, to avoid confusion,
I simply refer to the net as a single-hidden-layer net.
2. The 1st box represents an input fan-in unit (node layer) with as
many input nodes as the dimensionality of the input vector.
3. The hidden neuron layer signal is represented by the output
of the 2nd box.
4. The output neuron layer signal is represented by the output
of the 3rd box.
5. The existence of the 4th box is misleading. So, just imagine it
does not exist and label the arrow coming from the 3rd box as the
output signal.
6. The weight and bias subboxes in the hidden layer box are
usually labeled IW and b1.
7. The weight and bias subboxes in the output layer box are
usually labeled LW and b2 (see the sketch after this list).
8. My personal preference for diagramming would be to replace
the 1st and 4th boxes with arrows labeled x (input) and y (output).
The arrow between the 2nd and 3rd boxes would be labeled
h (hidden).
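To make items 6 and 7 concrete, here is a minimal sketch. It assumes fitnet, the fitting network that nftool wraps; the variable names are just for illustration. It builds the same 1-10-1 net and prints the shapes of the four weight/bias arrays:
x = 1:250; t = 1:250; % increasing ranges, as guessed above
net = fitnet(10); % one hidden layer with 10 nodes, as in nftool
net = train(net, x, t);
size(net.IW{1,1}) % [10 1] : input-to-hidden weights (IW)
size(net.b{1}) % [10 1] : hidden biases (b1)
size(net.LW{2,1}) % [1 10] : hidden-to-output weights (LW)
size(net.b{2}) % [1 1] : output bias (b2)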
> My realization of output Neural Network
> I got 10 values for IW, which would fit into 10 hidden layers, and 10 values for LW, which will fit into 10 hidden layers according to my realization. I got 11 values for bias, which will fit into my network, since there are 11 (b) blocks in the network. But I am missing one IW and one LW value.
> I want to know whether there is a mistake in my realization of the network's output, or whether I missed any of the values.
You used the term "layers" instead of "nodes".
% Runnable form, pulling the arrays out of net (x here is a single scalar input):
IW = net.IW{1,1} % [ 10 1 ] = size(IW) : connects the 1-D input to the 10-D hidden layer signal
b1 = net.b{1} % [ 10 1 ] = size(b1) : connects the 1-D input bias to the 10-D hidden layer signal
h = tansig( b1 + IW*x ); % [ 10 1 ] = size(h)
LW = net.LW{2,1} % [ 1 10 ] = size(LW) : connects the 10-D hidden layer signal to the 1-D output
b2 = net.b{2} % [ 1 1 ] = size(b2) : connects the 1-D output bias to the 1-D output layer signal
y = purelin( b2 + LW*h ); % [ 1 1 ] = size(y); purelin(z) = z
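One caveat for a hardware realization: nets created by nftool apply mapminmax pre/post-processing by default, so the raw equations above act on the normalized signals, not the raw ones. A minimal check, under the assumption that you strip the default processing and retrain so the raw equations reproduce the toolbox output exactly:
x = 1:250; t = 1:250;
net = fitnet(10);
net.inputs{1}.processFcns = {}; % assumption: drop default mapminmax etc.
net.outputs{2}.processFcns = {};
net = train(net, x, t);
xtest = x(17); % any single scalar input
ymanual = purelin( net.b{2} + net.LW{2,1}*tansig( net.b{1} + net.IW{1,1}*xtest ) );
ytoolbox = sim(net, xtest);
err = abs(ymanual - ytoolbox) % should be near machine precision
If you keep the default processing instead, you must apply the stored mapminmax mappings before and after these equations.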
Hope this helps.
Thank you for formally accepting my answer
Greg
