DIFFERENT RESULTS EACH TIME THE NEURAL NETWORK IS TRAINED

Hi
I hope someone can help me with my question.
When I run the backprop neural network more than once on the same data set, I get a different set of results (the predicted results are different each time). Is there a way to train the neural network so that it outputs the same (lowest-error) predictions when the code is run more than once on the same data set? When I enter the test image the first time it is classified as the first class, but when I rerun the program with the same test image it is classified as the second class. How can I solve this problem? This is my file. Thanks

Accepted Answer

Greg Heath
Greg Heath on 19 Jun 2014
If you are using a saved net that has been previously trained, this should not happen.
If you are setting the RNG to the same initial state before retraining, this should not happen.
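A minimal sketch of both approaches, assuming a pattern recognition net (patternnet); x, t, and testimage are illustrative placeholders, not variables from the original post:

% (a) Fix the random seed before training so that weight initialization
%     and the random train/val/test split repeat exactly on every run.
rng(0);
net = patternnet(10);
[net, tr] = train(net, x, t);

% (b) Or train once, save the trained net, and reuse it later, so the
%     classifier itself never changes between runs.
save('trainedNet.mat', 'net');
S = load('trainedNet.mat');
scores = S.net(testimage);          % same prediction every time
[~, predictedClass] = max(scores);  % index of the winning class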
Hope this helps.
Thank you for formally accepting my answer
Greg
  6 Comments
Greg Heath
Greg Heath on 21 Jun 2014
What are the sizes of your training, validation and test sets?
What range of hidden node values are you searching over?
How many random initial weight initializations for each hidden node value?
What are the trn/val/test R-squared values for the "best" (i.e. max(R2val)) design?
primrose khaleed
primrose khaleed on 21 Jun 2014
Edited: Cedric on 21 Jun 2014
Thank you, Greg. The sizes of the training, validation, and test sets are:
mynet.divideParam.trainRatio = 70/100;
mynet.divideParam.valRatio = 15/100;
mynet.divideParam.testRatio = 15/100;
For the hidden nodes I used the default (= 10).
But I have a basic question: I don't understand "How many random initial weight initializations for each hidden node value?"
or "What are the trn/val/test R-squared values for the 'best' (i.e. max(R2val)) design?"
How can I randomize the initial weights? Doesn't MATLAB do that by itself?
Please help me, Greg.
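A sketch of the search Greg is describing: for each candidate number of hidden nodes, train several nets from different random initial weights and keep the design with the highest validation R-squared. This assumes patternnet with input matrix x and one-hot target matrix t; those names and the candidate ranges are illustrative only.

Hvec    = 2:2:20;       % candidate hidden node counts
Ntrials = 10;           % random weight initializations per candidate
bestR2val = -Inf;

for H = Hvec
    for trial = 1:Ntrials
        net = patternnet(H);     % weights are re-initialized randomly at train time
        net.divideParam.trainRatio = 0.70;
        net.divideParam.valRatio   = 0.15;
        net.divideParam.testRatio  = 0.15;
        [net, tr] = train(net, x, t);

        y    = net(x);
        tval = t(:, tr.valInd);
        yval = y(:, tr.valInd);
        MSEval   = mean((tval(:) - yval(:)).^2);
        MSEval00 = mean(var(tval', 1));   % reference: constant-mean model
        R2val    = 1 - MSEval / MSEval00; % validation R-squared

        if R2val > bestR2val
            bestR2val = R2val;
            bestnet   = net;
        end
    end
end

Each call to train starts from fresh random weights (unless the RNG is fixed), which is why several trials per hidden node count are needed before picking the best design.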


