- Find pseudocode for the Levenberg-Marquardt algorithm. If you're doing this as part of a class assignment, your textbook may include such pseudocode.
- Implement that pseudocode. If you have difficulty translating one or more of the steps into code, break the step down until you reach substeps you do know how to implement.
- Test your implementation on a few problems whose answers you know. Again, if you're doing this as part of a class assignment, your textbook may have worked examples you can use as test cases.
- Fix any bugs your tests in step 3 uncovered.
- If step 3 did uncover bugs, return to step 3 after fixing them and rerun the previous test cases along with some new ones.
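The steps above might look something like the following sketch, fitting y = a*exp(b*x) to synthetic data whose answer is known. This is not textbook pseudocode, just one minimal illustration of the core Levenberg-Marquardt loop (damped Gauss-Newton step, accept/reject with damping adjustment); the variable names and tolerances are illustrative choices.

```matlab
% Minimal Levenberg-Marquardt sketch for fitting y = a*exp(b*x).
% The problem, starting point, and damping schedule are illustrative.
xdata = (0:0.1:1)';
ydata = 2*exp(1.5*xdata);            % synthetic data with known answer a=2, b=1.5
p = [1; 1];                          % initial guess for [a; b]
lambda = 1e-3;                       % damping factor
for iter = 1:100
    r = ydata - p(1)*exp(p(2)*xdata);                       % residual vector
    J = [-exp(p(2)*xdata), -p(1)*xdata.*exp(p(2)*xdata)];   % Jacobian of r w.r.t. [a; b]
    delta = -(J'*J + lambda*eye(2)) \ (J'*r);               % damped Gauss-Newton step
    rNew = ydata - (p(1)+delta(1))*exp((p(2)+delta(2))*xdata);
    if rNew'*rNew < r'*r             % step reduced the error:
        p = p + delta;               %   accept it and trust the model more
        lambda = lambda/10;
    else
        lambda = lambda*10;          % step failed: increase damping and retry
    end
    if norm(delta) < 1e-10, break; end
end
% p should converge to approximately [2; 1.5]
```

A known-answer problem like this is exactly the kind of test case step 3 calls for: if p does not approach the values used to generate the data, the implementation has a bug.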
Request for Simplified Version of the Levenberg-Marquardt Algorithm
The following is simplified, manual code for the sigmoid function of the back-propagation method.
How can I generate similar code for the Levenberg-Marquardt algorithm?
tt=(t-min(t))/(max(t)-min(t)); %Scaling Target from 0 to 1
x(1,:)=(x(1,:)-min(x(1,:)))/(max(x(1,:))-min(x(1,:))); %scaling inputs from 0 to 1
x(2,:)=(x(2,:)-min(x(2,:)))/(max(x(2,:))-min(x(2,:)));
x(3,:)=(x(3,:)-min(x(3,:)))/(max(x(3,:))-min(x(3,:)));
n=length(t);
%
dw=[0;0;0]; %initial weight update for momentum
mu=0.9; %momentum
w=[-0.2,-0.3,-0.1]; %initial random weights
beta=0.8; %learning rate
for k=1:4000 %k is epoch number (number of times the whole dataset is processed by the network)
for j= 1:n % j is the input vector number in the dataset
u=w*x(:,j); %weighted sum w1x1+w2x2+w3x3 in vector form w*x. x(:,j) is all rows (i.e., 3 inputs) of the jth column of the data matrix
weight(j,:)=w;
y(j)=1/(1+exp(-u)); %model output is sigmoid output y
e(j)=tt(j)-y(j); %error for input vector j. Since the sigmoid output ranges from 0 to 1, the target is scaled to [0,1] and named tt to match
mse(j)=(e(j)^2)/2; %squared error for the jth input
p=-e(j)*y(j)*(1-y(j))*x(:,j); %gradient of error for the 3 weights, this is a column vector of 3 error gradients.
dw=mu*dw+(1-mu)*beta*(-p); %weight change with learning rate and momentum
%dw=beta*(-p); %weight change for the 3 weights; dw is a column vector
w=w+dw'; % new weight after adding dw. dw is transposed to a row vector before adding to w.
end
mseEpoch(k)=sum(mse)/n; %mean squared error for epoch k (divide by n, not a hardcoded dataset size)
end
msetotal=mean(mseEpoch); %average mean squared error over all epochs
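One possible way to adapt this code to Levenberg-Marquardt (not an official MATLAB routine, just a hedged sketch) is to replace the per-sample gradient-descent update with a batch step computed from the Jacobian of the residuals. The sketch below assumes x (3-by-n inputs), tt (1-by-n scaled targets), and n already exist as in the code above; here mu is the LM damping factor, not the momentum term used earlier, and the schedule (divide/multiply by 10) is an illustrative choice.

```matlab
% Hypothetical batch Levenberg-Marquardt update for the single sigmoid unit above.
% Assumes x (3-by-n), tt (1-by-n row vector), and n are in the workspace.
w = [-0.2, -0.3, -0.1];  %initial weights, as in the question
mu = 0.01;               %LM damping factor (not momentum)
for k = 1:200
    u = w*x;                           %1-by-n weighted sums for the whole batch
    y = 1./(1+exp(-u));                %sigmoid outputs
    e = (tt - y)';                     %n-by-1 residual vector
    J = -(y.*(1-y))' .* x';            %n-by-3 Jacobian: de_j/dw_i = -y_j*(1-y_j)*x(i,j)
    dw = -(J'*J + mu*eye(3)) \ (J'*e); %LM step for the 3 weights
    wNew = w + dw';
    eNew = (tt - 1./(1+exp(-wNew*x)))';
    if eNew'*eNew < e'*e
        w = wNew; mu = mu/10;          %step reduced the error: accept, reduce damping
    else
        mu = mu*10;                    %step failed: increase damping and retry
    end
    mseEpoch(k) = (e'*e)/(2*n);        %pre-step error, comparable to the loop above
end
```

Compared with the back-propagation loop, each epoch here solves a small 3-by-3 linear system instead of taking n separate gradient steps, which is the essential structural change Levenberg-Marquardt introduces.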
2 Comments
Steven Lord on 6 May 2021
Answers (0)