Gradient descent with layer recurrent network in parallel mode exits after 1 epoch with performance goal (0) met.

I am using the Neural Network Toolbox and the Parallel Computing Toolbox.
I have reproduced the problem with MATLAB's sample data (magmulseq) rather than my own data, so that it is easy for others to test:
load magmulseq;
% Concatenate the three sequences into one multi-sample dataset, padding to equal length.
input = catsamples(y1, y2, y3, 'pad');
target_output = catsamples(u1, u2, u3, 'pad');
% Layer recurrent network: delays 1:2, 4 hidden neurons, trained with traingdx.
lrn_net = layrecnet(1:2, 4, 'traingdx');
% preparets shifts the time series and returns the initial input and layer delay states.
[input_s, input_delay, layer_delay, target_s] = preparets(lrn_net, input, target_output);
[lrn_net, lrn_trainingrecord] = train(lrn_net, input_s, target_s, input_delay, layer_delay, ...
    'useParallel', 'yes', 'useGPU', 'yes', 'showResources', 'yes');
This gives me the following result:
Training with TRAINGDX.
Calculation mode: Parallel with MEX Workers
Epoch 0/1000, Time 0.087754, Performance 5.5405/0, Gradient NaN/1e-05, Validation Checks 0/6
Epoch 1/1000, Time 0.18154, Performance 0/0, Gradient NaN/1e-05, Validation Checks 0/6
Training with TRAINGDX completed: Performance goal met.
If I then do this:
net_output = lrn_net(input_s);
I see that net_output is a 1x1484 cell array in which every cell contains three NaN values (one per concatenated sequence).
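A quick check along the following lines (a sketch; getwb returns the network's full weight/bias vector) can confirm whether the weights themselves have become NaN after training:
% Count NaN entries in the trained network's weight/bias vector.
wb = getwb(lrn_net);
fprintf('NaN weights/biases: %d of %d\n', sum(isnan(wb)), numel(wb));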
The same problem occurs with traingda, but it does not occur with trainlm or trainscg. It happens with both my own data and the magmulseq sample data.
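For reference, the only change needed to get a run that completes normally is the training function in the network creation line, e.g.:
% Same pipeline, but with scaled conjugate gradient instead of gradient descent.
lrn_net = layrecnet(1:2, 4, 'trainscg');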
Is there an extra step that I need to take if I want to use gradient descent with a layer recurrent network?
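In case it helps narrow this down, a serial run for comparison (same architecture and data, just dropping the parallel/GPU flags) would look like the sketch below; I have not isolated whether the parallel path alone is the trigger:
% Sketch: serial comparison, starting from a freshly initialized network.
net_serial = layrecnet(1:2, 4, 'traingdx');
[Xs, Xi, Ai, Ts] = preparets(net_serial, input, target_output);
net_serial = train(net_serial, Xs, Ts, Xi, Ai);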
Thanks very much for your time,
V

