open-loop to closed-loop Narx NN in Matlab Help

Dear all, 

In the Matlab NN Toolbox help (2010b), there is a section on dynamic NNs using the NARX NN. The help shows that after we have successfully trained the NARX NN in the open-loop manner (i.e., as a feedforward NN), we can close it (using a command) so that it becomes a closed-loop NARX NN (i.e., a recurrent NN). I have some confusion regarding this:

1- Is this technique acceptable? As far as I know, to develop a recurrent NN you have to train it in closed-loop form, not just train it open loop and then close the loop.

2- Now that the NARX NN is in closed loop, what do we call it afterwards: a feedforward or a recurrent NN? If it is a recurrent NN, the training done earlier was in open loop, not closed loop. If it is a feedforward NN, the network is now in a closed-loop configuration.

3- Training the closed-loop NARX takes very long. Is there any way to make the training go faster (except for reducing the amount of data)?


Thanks in advance for your help.
annursi
10/16/2012 2:32:12 AM

"Dinie Muhammad" <annursi@gmail.com> wrote in message <k5igvc$399$1@newscl01ah.mathworks.com>...
> Dear all, 
> 
> In the Matlab NN Toolbox help (2010b), there is a section on dynamic NNs using the NARX NN. The help shows that after we have successfully trained the NARX NN in the open-loop manner (i.e., as a feedforward NN), we can close it (using a command) so that it becomes a closed-loop NARX NN (i.e., a recurrent NN). I have some confusion regarding this:
> 
> 1- Is this technique acceptable? 

Anything that works is acceptable.

> As far as I know, to develop a recurrent NN you have to train it in closed-loop form, not just train it open loop and then close the loop.

The documentation does point out that closed loop performance degrades over time because of the accumulation of errors between target and output but doesn't offer any remedy.

I'm no expert in time-series NNs. However, in order to understand what is going on, I would first use the NARXNET examples in the documentation, try to use information from the significant lags determined from correlation functions, and minimize the number of hidden nodes to mitigate overtraining an overfit net. For example, use the same demo/example data for all three of the designs below (a rough code sketch follows the list):

1. The input-target crosscorrelation function and a resulting TIMEDELAYNET design.
2.  The target autocorrelation function and a NARNET design.
3.  Combine the information from 1 and 2 to design a NARXNET.
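
For a first feel, a bare-bones sketch of the three designs on the toolbox's simplenarx_dataset might look like this (the delay ranges and the 10 hidden nodes are only placeholder choices, not recommendations):

[X,T] = simplenarx_dataset;                 % example input/target series (cell arrays)

% 1. TIMEDELAYNET: driven by delayed inputs only
netx = timedelaynet(1:2,10);                % input delays 1:2, 10 hidden nodes (placeholders)
[Xs,Xi,Ai,Ts] = preparets(netx,X,T);
netx = train(netx,Xs,Ts,Xi,Ai);

% 2. NARNET: driven by delayed targets only
nett = narnet(1:2,10);                      % feedback delays 1:2
[Xs,Xi,Ai,Ts] = preparets(nett,{},{},T);
nett = train(nett,Xs,Ts,Xi,Ai);

% 3. NARXNET: delayed inputs and delayed targets combined
netxt = narxnet(1:2,1:2,10);                % input delays, feedback delays, hidden nodes
[Xs,Xi,Ai,Ts] = preparets(netxt,X,{},T);
netxt = train(netxt,Xs,Ts,Xi,Ai);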

> 2- Now that the NARX NN is in closed loop, what do we call it afterwards: a feedforward or a recurrent NN? If it is a recurrent NN, the training done earlier was in open loop, not closed loop. If it is a feedforward NN, the network is now in a closed-loop configuration.

Before closing the loop it is a feedforward net.
After closing the loop it is a recurrent (I like the term feedback) net.
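
For reference, the conversion itself is just the CLOSELOOP call described in the documentation. A minimal sketch, continuing from the NARXNET sketch above (netxt, X, T):

netc = closeloop(netxt);                    % convert the trained open-loop NARX to closed loop
[Xc,Xci,Aci,Tc] = preparets(netc,X,{},T);   % re-prepare the data for the closed-loop topology
Yc = netc(Xc,Xci,Aci);                      % multi-step prediction: outputs are fed back, not targets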

> 3- Training the closed-loop NARX takes very long. Is there any way to make the training go faster (except for reducing the amount of data)?

How do you train the closed loop NARX using the NN TBX? I saw no explanation or examples in the documentation. 

Obviously, knowledge of the significant inputs and significant lags would help. However, different delays for different inputs and different outputs are not allowed by the current MATLAB functions.

Minimization of the number of hidden nodes is also useful for generalizing to unseen 
future data.

Again, I think using the same data (e.g., simplenarx_dataset)  for all 3 designs will help understanding.

> Thanks in advance for your help.

You are welcome. However, currently, my advice is just conjecture.

Hope this helps.

Greg
heath
10/19/2012 10:13:08 AM
"Greg Heath" <heath@alumni.brown.edu> wrote in message <k5r93k$3tg$1@newscl01ah.mathworks.com>...
> "Dinie Muhammad" <annursi@gmail.com> wrote in message <k5igvc$399$1@newscl01ah.mathworks.com>...
> > 1- Is this technique acceptable? 
> 
> > Anything that works is acceptable.
> 

I really like the idea, but some references from journals and books would make it more convincing and concrete. From what I have read so far, I still haven't found any authors who have used this kind of technique in their work. Maybe the MathWorks people should comment on this.

> > As far as I know, to develop a recurrent NN you have to train it in closed-loop form, not just train it open loop and then close the loop.
> 
> The documentation does point out that closed loop performance degrades over time because of the accumulation of errors between target and output but doesn't offer any remedy.
> 
> I'm no expert in time-series NNs. However, in order to understand what is going on I would first use the NARXNET examples in the documentation and try to use information from significant lags determined from correlation functions and minimize the number of hidden nodes to mitigate overtraining an overfit net. For example, use the same demo/example data for
> 
> 1. The input-target crosscorrelation function and a resulting TIMEDELAYNET design.

So this is where we find the significant number of input delays?

> 2.  The target autocorrelation function and a NARNET design.

And this is where we find the significant number of hidden nodes or lags?

> 3.  Combine the information from 1 and 2 to design a NARXNET.

Finally, we combine the information about the best inputs and the number of hidden nodes/lags in the real design? Please correct me if I am wrong.

How do you go from the significant lags to choosing the best number of hidden neurons?

> 
> > 2- Now that the NARX NN is in closed loop, what do we call it afterwards: a feedforward or a recurrent NN? If it is a recurrent NN, the training done earlier was in open loop, not closed loop. If it is a feedforward NN, the network is now in a closed-loop configuration.
> 
> Before closing the loop it is a feedforward net.
> > After closing the loop it is a recurrent (I like the term feedback) net.
> 

Answered clearly. Thank you

> > 3- Training the closed-loop NARX takes very long. Is there any way to make the training go faster (except for reducing the amount of data)?
> 
> How do you train the closed loop NARX using the NN TBX? I saw no explanation or examples in the documentation. 
> 
> Obviously, knowledge of the significant inputs and significant lags would help. However,  different delays for different inputs and different outputs is not allowed by the current MATLAB functions.
> 
> Minimization of the number of hidden nodes is also useful for generalizing to unseen 
> future data.
> 
> Again, I think using the same data (e.g., simplenarx_dataset)  for all 3 designs will help understanding.
> 

Well, I actually just use the train command after the NARX NN is closed. The training time afterwards is much longer than in the open-loop scheme. Based on your experience, do you have any rule of thumb or technique for selecting the best number of hidden neurons? Recently, I tried to use the validation error as the basis for selecting the best number of hidden nodes, but I still do not know where to start and where to end. Should I start from 1 hidden neuron and increase until I find the NN with the lowest validation error? And how do I know that a given NN has the lowest validation error? Maybe by adding 1-2 more hidden neurons I could get a lower error. This could take a lot of time and effort.
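
Roughly, what I do is this (assuming net is the NARX already trained open loop on the series X and T; the names are only for illustration):

netc = closeloop(net);                      % net was first trained open loop
[Xc,Xci,Aci,Tc] = preparets(netc,X,{},T);
netc = train(netc,Xc,Tc,Xci,Aci);           % retraining with the loop closed -- much slower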

> > Thanks in advance for your help.
> 
> You are welcome. However, currently, my advice is just conjecture.
> 
> Hope this helps.
> 
> Greg

Dear Dr Heath,

Thank you for your comments and answers. I am looking forward to your next reply.

Thank you.

Dinie
annursi
10/20/2012 6:31:08 PM
"Dinie Muhammad" <annursi@gmail.com> wrote in message <k5uqlc$8dn$1@newscl01ah.mathworks.com>...
> "Greg Heath" <heath@alumni.brown.edu> wrote in message <k5r93k$3tg$1@newscl01ah.mathworks.com>...
> > "Dinie Muhammad" <annursi@gmail.com> wrote in message <k5igvc$399$1@newscl01ah.mathworks.com>...
> > > 1- Is this technique acceptable? 
> > 
> > Anything that works is acceptable.
> > 
> 
> I really like the idea, but some references from journals and books would make it more convincing and concrete. From what I have read so far, I still haven't found any authors who have used this kind of technique in their work. Maybe the MathWorks people should comment on this.

I didn't recommend the approach for publication, only to give a "feel" for a division of significance between the effects of input delay and output feedback delay. Since the input and output are significantly correlated, the ranking is suboptimal. Search algorithms like STEPWISEFIT and SEQUENTIALFS would do a better job of this.

A good technique would be to put the input, candidate delayed versions of the input, and candidate delayed versions of the target output into a combined input matrix for a feedforward net. Then use a stepwise search to rank the inputs.
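
For example, a rough (untested) sketch using STEPWISEFIT; the lag range and the variable names are only placeholders:

[X,T] = simplenarx_dataset;
x = cell2mat(X);  t = cell2mat(T);             % numeric row vectors of length N
N = length(t);  maxlag = 5;                    % candidate lags 1:maxlag (placeholder choice)
n = maxlag+1:N;                                % time steps for which every candidate lag exists
P = [];                                        % combined candidate-input matrix
for d = 1:maxlag
    P = [ P , x(n-d)' , t(n-d)' ];             % columns: x(n-d) and t(n-d)
end
[b,se,pval,inmodel] = stepwisefit(P,t(n)');    % stepwise ranking of the candidate columns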

> > > As far as I know, to develop a recurrent NN you have to train it in closed-loop form, not just train it open loop and then close the loop.
> > 
> > The documentation does point out that closed loop performance degrades over time because of the accumulation of errors between target and output but doesn't offer any remedy.
> > 
> > I'm no expert in time-series NNs. However, in order to understand what is going on I would first use the NARXNET examples in the documentation and try to use information from significant lags determined from correlation functions and minimize the number of hidden nodes to mitigate overtraining a overfit net. For example, use the same demo/example data for
> > 
> > 1. The input-target crosscorrelation function and a resulting TIMEDELAYNET design.
> 
> So this is where we find the significant number of input delays?

.... the number of significant input delays:

crosscorrXT = nncorr(zscore(X,1),zscore(T,1),N-1);

Use only nonnegative lags

[ sigxcorr indx ] = find(abs(crosscorrXT(N:end)) > 0.21)

See my recent NEWSGROUP post on Determining Significant Correlations.

> > 2.  The target autocorrelation function and a NARNET design.
> 
> And this is where we find the significant number of hidden nodes or lags?

Just the significant positive lags.

autocorrT = nncorr(zscore(T,1),zscore(T,1),N-1);

Use only positive lags

[ sigacorr inda ] = find(abs(autocorrT(N+1:end)) > 0.21)

 
> > 3.  Combine the information from 1 and 2 to design a NARXNET.
> 
> Finally, we combine the information about the best inputs and the number of hidden nodes/lags in the real design? Please correct me if I am wrong. 
>
No.

Use the combined knowledge of significant input and feedback lags to determine the smallest good value for the number of hidden nodes, H. In other words, the feedback problem is essentially reduced to a feedforward problem.

In the feedforward problem I typically loop over numH = 10 candidate values for H using Ntrials = 10 trials with random initial weights. The "optimum" choice for H can be estimated from the Ntrials x numH matrix tabulation of the normalized MSE. To see example code, search in the NEWSGROUP and ANSWERS using some subset of the following keywords:

heath close clear Neq Nw Hub
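
A stripped-down version of that double loop for the open-loop NARX design might look like this (the delay choices, the Hub value, and the names are placeholders; see the referenced posts for the real thing):

[X,T] = simplenarx_dataset;                    % same example data as above
net0  = narxnet(1:2,1:2,10);                   % template net, only used to prepare the data
[Xs,Xi,Ai,Ts] = preparets(net0,X,{},T);
MSE00 = mean(var(cell2mat(Ts),1));             % reference MSE of the naive constant model
Hub   = 20;                                    % placeholder upper bound for H (see the Hub discussion below)
Hvec  = round(linspace(1,Hub,10));             % numH = 10 candidate values of H
Ntrials = 10;
NMSE = zeros(Ntrials,length(Hvec));
for j = 1:length(Hvec)
    for i = 1:Ntrials                          % new random initial weights every trial
        neti = narxnet(1:2,1:2,Hvec(j));
        neti = train(neti,Xs,Ts,Xi,Ai);
        Y = neti(Xs,Xi,Ai);
        E = cell2mat(gsubtract(Ts,Y));
        NMSE(i,j) = mean(E(:).^2)/MSE00;       % tabulate the normalized MSE
    end
end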
 
> How do you go from the significant lags to choosing the best number of hidden neurons?

See above. 

> > > 2- Now that the NARX NN is in closed loop, what do we call it afterwards: a feedforward or a recurrent NN? If it is a recurrent NN, the training done earlier was in open loop, not closed loop. If it is a feedforward NN, the network is now in a closed-loop configuration.
> > 
> > Before closing the loop it is a feedforward net.
> > After closing the loop it is a recurrent (I like the term feedback) net.
> 
> Answered clearly. Thank you
> 
> > > 3- Training the closed-loop NARX takes very long. Is there any way to make the training go faster (except for reducing the amount of data)?
> > 
> > How do you train the closed loop NARX using the NN TBX? I saw no explanation or examples in the documentation. 
> > 
> > Obviously, knowledge of the significant inputs and significant lags would help. However,  different delays for different inputs and different outputs is not allowed by the current MATLAB functions.
> > 
> > Minimization of the number of hidden nodes is also useful for generalizing to unseen 
> > future data.
> > 
> > Again, I think using the same data (e.g., simplenarx_dataset)  for all 3 designs will help understanding.
> > 
> 
> Well, I actually just use the train command after the NARX NN is closed. The training time afterwards is much longer than in the open-loop scheme. Based on your experience, do you have any rule of thumb or technique for selecting the best number of hidden neurons?

Yes. See above.

> Recently, I tried to use the validation error as the basis for selecting the best number of hidden nodes, but I still do not know where to start and where to end. Should I start from 1 hidden neuron and increase until I find the NN with the lowest validation error? 

No. See my previous posts. I find Hub, the upper bound for H such that the number of unknown weights, Nw, is <= the number of training equations. Then I choose 10 values and perform the 100 trials above. If it appears that the "optimum" is between two candidates, I may or may not refine the search.
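
Roughly, for the open-loop NARXNET in the loop sketch above (delay counts here are placeholders):

[I,~]  = size(cell2mat(X));                    % raw input dimension
[O,N]  = size(cell2mat(Ts));                   % output dimension and number of target vectors
Ntrneq = N*O;                                  % number of training equations (ignoring the trn/val/tst split)
NID = 2;  NFD = 2;                             % numbers of input and feedback delays (placeholders)
% Unknown weights: Nw = (NID*I+NFD*O+1)*H + (H+1)*O.  Requiring Nw <= Ntrneq gives
Hub = -1 + ceil( (Ntrneq-O)/(NID*I+NFD*O+O+1) )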

> And how do I know that a given NN has the lowest validation error? Maybe by adding 1-2 more hidden neurons I could get a lower error. This could take a lot of time and effort. 

I haven't run into too many designs that can't  be accomplished in  100 trials. 

I use a degree-of-freedom-modified NMSE instead of a validation set. See my posts.
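
In the same placeholder notation as the sketches above, that is something like:

E      = cell2mat(gsubtract(Ts,Y));            % errors of the trained candidate net (Y from the loop above)
SSE    = sum(E(:).^2);
Nw     = length(getwb(neti));                  % number of estimated weights and biases
MSE00  = mean(var(cell2mat(Ts),1));            % naive constant-model reference
MSEa   = SSE/(Ntrneq-Nw);                      % MSE adjusted for the Nw estimated weights
NMSEa  = MSEa/MSE00;                           % degree-of-freedom modified normalized MSE
R2a    = 1 - NMSEa                             % adjusted R^2: want this close to 1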
 
> > > Thanks in advance for you help.
> > 
> > You are welcome. However, currently, my advice is just conjecture.
> > 
> > Hope this helps.
> > 
> > Greg
> 
> Dear Dr Heath,
> 
> Thank you for your comments and answers. I am looking forward to your next reply.

Whoomp! Here it is.

Enjoy
heath
10/21/2012 12:17:08 AM