Why is the number of epochs, using trainlm, so low?

Hi, I'm a new user of this community, and I apologize for my English. I'm using nntool to develop a neural network that can predict solar irradiance, and I'm trying to learn the tool. I use a feedforward backpropagation network with 2 input neurons, 1 output, and 1 hidden layer formed by one neuron, with the trainlm algorithm and the mse error function. The transfer function is tansig for both the hidden layer and the output layer. The training parameters are: epochs 100, goal 0, max_fail 5, mem_reduc 1, min_grad 1e-010, mu 0.001, mu_dec 0.1, mu_inc 10, mu_max 1e10, show 25, time inf.

I initialized the weights and trained my neural network, but after 11 epochs the training stopped with an error value of about 1e-012. So the results are OK, but I don't understand why training stops. In fact, the slope of the error function still looks good when training stops. This also happens with an XOR network.

Related to that, I have another question: after training, the output is [-1 1 1 -1], but it should be [0 1 1 0]. Yet the simulation works well. In this case I used a feedforward backpropagation network with trainlm and the same training parameters; the network was 2-2-1 with logsig as the transfer function for the output and hidden layers. I hope for your explanation. Thanks.
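[Editor's note: the setup described above can be sketched at the command line roughly as follows. This is a reconstruction from the post, not the asker's actual script; the data is a placeholder.]

```matlab
% Sketch of the described setup, assuming feedforwardnet from the
% Neural Network Toolbox. Input/target data here is random placeholder
% data standing in for the solar irradiance set.
x = rand(2, 100);                     % 2 inputs
t = rand(1, 100);                     % 1 target
net = feedforwardnet(1, 'trainlm');   % 1 hidden neuron, Levenberg-Marquardt
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.performFcn = 'mse';
net.trainParam.epochs   = 100;        % parameters listed in the post
net.trainParam.goal     = 0;
net.trainParam.max_fail = 5;
net.trainParam.min_grad = 1e-10;
[net, tr] = train(net, x, t);         % tr records why training stopped
```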

Accepted Answer

Greg Heath
Greg Heath on 5 Oct 2014
% Hi, I'm a new user of this community and I apologize for my English.
% I'm using nntool to develop a neural network that can predict solar irradiance.
1. I assume this is not the MATLAB timeseries solar_dataset. Correct?
% Now I'm trying to use this tool. I use a feedforward backpropagation
% with 2 input neurons, 1 output and 1 hidden layer formed by one neuron.
2. fitnet or feedforwardnet?
3. The input node layer contains FAN-IN-UNITS, not neurons.
% I use the trainlm algorithm and the mse error function. The transfer
% function is tansig for hidden layer and output layer. The training
% parameters are: epochs 100, goal 0, max_fail 5, mem_reduc 1,
% min_grad 1e-010, mu 0.001, mu_dec 0.1, mu_inc 10, mu_max 1e10,
% show 25, time inf.
4. To avoid confusion, just list the parameter settings that are not defaults.
% I initialized the weights and trained my neural network, but after 11
% epochs the training stopped with an error value of about 1e-012. So the
% results are OK, but I don't understand why training stops. In fact the
% slope of the error function is good when training stops. This also
% happens with the XOR network.
5. Use the command-line form and check which stopping criterion fired:

[ net tr output error ] = train(net,input,target);
stoppingcriterion = tr.stop % No semicolon. Also try tr = tr

With min_grad = 1e-10 and an error near 1e-12, the gradient has almost certainly dropped below the minimum, so tr.stop will most likely report that the minimum gradient was reached. Training stops when it can no longer improve meaningfully, not only at the maximum epoch count.
% About that I have another question: after the training, the output is
% [-1 1 1 -1] but it should be [0 1 1 0]. But the simulation works well.
% In this case I have used a ffbp with trainlm and the same training
% parameters. The network was 2-2-1 with logsig as the transfer function
% for output and hidden layers.
6. You have missed something. There is no way to get negative output from logsig. Therefore you must be using the default normalization mapminmax.
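[Editor's note: the normalization point can be seen directly in a minimal sketch, using the XOR targets from the question.]

```matlab
% With the default mapminmax pre/post-processing, targets [0 1 1 0] are
% internally rescaled to the range [-1, 1], i.e. to [-1 1 1 -1]. Network
% output is mapped back on simulation, which is why sim still looks right.
t = [0 1 1 0];                           % XOR targets from the question
[tn, settings] = mapminmax(t);           % tn is [-1 1 1 -1]
y = mapminmax('reverse', tn, settings);  % recovers [0 1 1 0]
```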
7. Although you have given a pretty good explanation, it would help immensely if you posted your code.
Hope this helps.
Thank you for formally accepting my answer
Greg
  1 Comment
Image Analyst
Image Analyst on 8 Oct 2014
Antonio's reply moved here since it's not an Answer to the original question:
Hi Greg, thanks for asking.
1. You assume correctly.
2. I think it's feedforwardnet, because I selected that type of network from the GUI.
4. Ok. I have used default parameters.
5. Sorry, but I don't understand what you mean by that code. Unfortunately, I'm using the GUI.
6. I used tansig as the transfer function; I got confused.
7. I don't know how to get the code from the GUI.


