How to use narxnet for a new set of data?

I trained a narxnet network with 4 inputs and 2 targets for system identification. The training performance (RMS) looks quite good, but the problem is that I don't know how to use this net on a new set of data. According to the MATLAB Help, I should use the closed-loop form (netc) for this:
netc = closeloop(net);
view(netc);
[Xs,Xi,Ai,Ts] = preparets(netc,X,{},T);
y = netc(Xs,Xi,Ai);
In this process the target values (T) are required, which doesn't make sense, because targets are not available for system identification. How should I use this network on a new set of input data when I don't have target values?
  2 Comments
abhilasha singh
abhilasha singh on 21 Jun 2021
I also have the same problem. I even tried setting the initial targets to ones, but the prediction is still wrong. Could someone please suggest how to evaluate a NARX network on a new input sequence?


Accepted Answer

Greg Heath
Greg Heath on 28 Aug 2014
1. The best way to solve a problem is to use the MATLAB example data with which we are familiar
help nndata
2. It doesn't make sense to guess at what the delays should be. Find the statistically significant feedback delays indicated by the target autocorrelation function and the statistically significant input delays indicated by the input/target cross-correlation function. If you don't have a correlation-function algorithm in another toolbox, use nncorr. However, it has a bug that yields symmetric cross-correlations, so you have to combine nncorr(x,t,...) with nncorr(t,x,...), as illustrated in many of my posts (a sketch follows this list). Search using
greg narxnet nncorr
3. It doesn't make sense to use the default data-division setting 'dividerand', because it ruins the correlations found above. Use either 'divideblock' or 'divideind'.
4. Use as few hidden nodes as possible. I typically design 10 nets for each trial value of H below the upper bound Hub, which is determined from the number of weights (including delays).
5. See what I wrote previously regarding (a) retraining and (b) using Xf and Af for new data.
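A minimal sketch of items 2 and 3, assuming single input and target series x and t stored as 1-by-N double row vectors, the nncorr(a,b,maxlag,'biased') calling form used in Greg's other posts, a hypothetical cap of 50 candidate delays, and a rough large-N 95% significance threshold of 0.21 (better: estimate your own threshold from correlations of random noise):

% Standardize the series so the correlations are comparable
N  = length(t);
zt = (t - mean(t)) / std(t);
zx = (x - mean(x)) / std(x);
% Correlations for lags -(N-1):(N-1); index N corresponds to lag 0
autocorrT   = nncorr(zt, zt, N-1, 'biased');
crosscorrXT = nncorr(zx, zt, N-1, 'biased');  % also check nncorr(zt,zx,...) because of the symmetry bug
% Statistically significant nonnegative lags among the first few candidates
maxcand   = 50;    % hypothetical cap on candidate delays
sigthresh = 0.21;  % rough 95% threshold (assumption)
FD = find(abs(autocorrT(N+1:N+maxcand))  >= sigthresh);      % feedback delays (lags >= 1)
ID = find(abs(crosscorrXT(N:N+maxcand))  >= sigthresh) - 1;  % input delays (lag 0 allowed)
% Item 3: build the net with those delays (hidden size 10 is a placeholder)
% and overwrite the random data division
net = narxnet(ID, FD, 10);
net.divideFcn = 'divideblock';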
Greg
  4 Comments
Gabriel Theberge
Gabriel Theberge on 1 May 2016
Edited: Gabriel Theberge on 1 May 2016
Hi Greg H. In this last reply you clarified a lot of things that I know are misunderstood by most MATLAB NARX users. From what I understand now, a well-trained NARX model cannot be used in open-loop form to generate predictions, even for ONE-STEP-AHEAD prediction. Am I correct?
A lot of your comments should be added to the MATLAB documentation of the NN Toolbox, because many details are not clear at all in that documentation; the proof is in the number of users who ask questions about the subject. Even though it's clear that we must use the closed-loop form for multi-step predictions on new data, I thought I could make one prediction (the very next step of the Y time series) in the open-loop form. But from what you said, it seems that even for one step we need the closed loop and the feedback delay buffer. I think this is what is called:
Pf = Final input delay conditions
Af = Final layer delay conditions
from [net,tr,Y,E,Pf,Af] = train(net,P,T,Pi,Ai)
in the MATLAB documentation of the train function.
Muhammad Adil Raja
Muhammad Adil Raja on 16 Mar 2020
Thanks, Greg, for your response; I find it quite useful. I have a concern, though, about using preparets to prepare data for testing an open-loop or closed-loop network. What if we are doing a blind test in which we do not have target values to test the network? How will preparets prepare the data in that case, especially the layer states, which it presumably builds from the target data? The absence of target data breaks the whole workflow. What is the workaround for this, please?


More Answers (2)

Greg Heath
Greg Heath on 25 Aug 2014
1. Test netc with the original data. If the performance is lousy, retrain it on the original data, starting with the existing weights from the open-loop design.
2. Use the form
[net, tr, Ys, Es, Xf, Af] = train(netc, Xs, Ts, Xi, Ai);
to get Xf and Af, which will become Xi and Ai for the new input data (see the sketch after item 3).
3. For more examples, search
greg narxnet
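A hedged sketch of item 2, assuming the original open-loop series X and T (cell arrays from con2seq) are still in the workspace, that the new inputs continue the original series in time, and that input_new is a hypothetical matrix with one column per new time step:

% Close the loop and (re)train it on the original data, keeping the
% final input and layer delay states Xf and Af.
netc = closeloop(net);
[Xcs, Xci, Aci, Tcs] = preparets(netc, X, {}, T);
[netc, tr, Ycs, Ecs, Xf, Af] = train(netc, Xcs, Tcs, Xci, Aci);
% Xf and Af hold the last few inputs and fed-back outputs, so they can seed
% the delay buffers for data that follows the original series.  No targets
% are needed now: the closed loop feeds its own predictions back.
Xnew = con2seq(input_new);   % hypothetical new inputs, same row count as the training inputs
Ynew = netc(Xnew, Xf, Af);   % multistep-ahead predictions for the new inputs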
Hope this helps.
Thank you for formally accepting my answer
Greg
  4 Comments
Sam136
Sam136 on 28 Aug 2014
Edited: Sam136 on 28 Aug 2014
Here is my code:
%% Training with a set of data
inputdelays = 0;
feedbackdelays = 1:2;
hiddensizes = 5;
net = narxnet(inputdelays, feedbackdelays, hiddensizes);
X = con2seq(input_train);
T = con2seq(target_train);
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);
[net, tr] = train(net, Xs, Ts, Xi, Ai);
% Training performance is pretty good.
%% Testing on a new set of data
X_test = con2seq(input_test);
T_test = ???  % There are no target values for testing the net!
[Xs, Xi, Ai, Ts] = preparets(net, X_test, {}, T_test);  % Again, there are no target values
Y = net(X_test, Xi, Ai)  % This fails with a size error: X is 4x10000,
% Xs is 6x10000, and X_test is 4x800.
I have been stuck on this part of my project for a long time. Can anyone help me use the net on a new set of data? I have searched many topics, but there is no clear answer. P.S. My goal in using a NN is system identification (my system is dynamic, not static); that's why I use the open-loop narxnet. I think the closed loop is for multi-step-ahead prediction, which is not my case.
Greg Heath
Greg Heath on 30 Aug 2014
An open-loop narxnet is a design step, not a deployable net. The open loop requires an additional input that contains the target data. See the diagrams obtained via the commands
view(net)
view(netc)
Back to essentials:
1. You did not use nncorr to determine the statistically significant input and feedback delays.
2. You did not overwrite the dividerand default. Therefore:
a. The correlations between the output and the delayed inputs and feedback are probably severely compromised.
b. You didn't realize that train automatically divides the data into random training, validation, and test subsets.
I am surprised your training turned out pretty good. How much of the mean target variance is explained by your model?
There are so many problems that I can only refer you to my posts. Search using
greg narxnet closeloop
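For reference, a hedged sketch of the deployable closed-loop path for the code above, assuming input_test immediately follows input_train in time, reusing X and T from the training script, and relying on the fact that a network call can return its final delay states as second and third outputs:

% Close the loop, simulate once over the training span to fill the delay
% buffers, and keep the final states.
netc = closeloop(net);
[Xcs, Xci, Aci] = preparets(netc, X, {}, T);
[Ycs, Xcf, Acf] = netc(Xcs, Xci, Aci);
% New inputs only -- no targets.  The final states seed the delays and the
% loop feeds its own 2-element output back in place of the missing targets.
X_test = con2seq(input_test);
Y_test = netc(X_test, Xcf, Acf);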



Sam136
Sam136 on 4 Sep 2014
Edited: Sam136 on 4 Sep 2014
Thanks, Greg, for the clarification. I used your code for the closed-loop narxnet:
X = con2seq(input_train);
T= con2seq(target_train);
net = narxnet(1:2,1:2,[5]);
net.divideFcn = 'divideblock';
[ Xs, Xi, Ai, Ts ] = preparets(net, X,{},T );
ts = cell2mat(Ts);
MSE00s = mean(var(ts',1));
[ net, tr, Ys, Es, Xsf, Asf ] = train( net, Xs, Ts, Xi, Ai );
Ys = net(Xs,Xi,Ai);
Es = gsubtract(Ts,Ys);
view(net)
R2s = 1 - perform(net,Ts,Ys)/MSE00s
netc = closeloop(net);
view(netc)
[Xcs,Xci,Aci,Tcs] = preparets(netc,X,{},T);
Ycs = netc( Xcs, Xci, Aci );
R2cs1 = 1 - perform(netc,Tcs,Ycs)/MSE00s
if R2cs1 < 0.95*R2s
    [net_new, tr, Ycs_new, Ecs, Xcf, Acf] = train(netc, Xcs, Tcs, Xci, Aci);
    view(net_new)
    R2cs2 = 1 - perform(net_new, Tcs, Ycs)/MSE00s;
    Ycs_new = net_new(Xcs, Xcf, Acf);
end
First of all, the closed-loop results for the same data that I used to train the open-loop net are not good at all. Then I trained the closed-loop net with the procedure you described, and after a few iterations training stops because the maximum mu is reached, with no improvement in the results. Based on the code, do you have any idea how to improve the results?
Here are the results: R2s = 1, R2cs1 = -1.5078, R2cs2 = -1.5078
  1 Comment
Greg Heath
Greg Heath on 5 Sep 2014
I don't see any coding errors.
Suggestions:
1. Choose ID using the target/input cross-correlation function and FD using the target autocorrelation function. Search using
greg narxnet nncorr
2. Choose H by trial and error:
for h = Hmin:dH:Hmax
3. Choose the initial random weights by trial and error:
for i = 1:Ntrials
I have posted many examples; a rough sketch of the combined loop follows.
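A rough sketch of suggestions 2 and 3 combined, assuming hypothetical values for Hmin, dH, Hmax and Ntrials, ID and FD from the correlation analysis, and Xs, Ts, Xi, Ai from the open-loop preparets call:

Hmin = 1;  dH = 1;  Hmax = 10;   % candidate hidden-layer sizes (assumption)
Ntrials = 10;                    % random weight initializations per size (assumption)
ts = cell2mat(Ts);
MSE00s = mean(var(ts', 1));      % reference MSE of a constant-output model
bestR2 = -Inf;
rng(0)                           % reproducible random initializations
for h = Hmin:dH:Hmax
    for i = 1:Ntrials
        net = narxnet(ID, FD, h);        % fresh net => new random weights when train configures it
        net.divideFcn = 'divideblock';
        [net, tr, Ys] = train(net, Xs, Ts, Xi, Ai);
        R2 = 1 - perform(net, Ts, Ys)/MSE00s;
        if R2 > bestR2
            bestR2 = R2;  bestnet = net;  bestH = h;
        end
    end
end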
