
# Thread Subject: optimization result not right

**From: xueqi | Date: 17 Jun, 2012 18:03:06 | Message: 1 of 12**

Hi all,

I want to optimize a function using `fmincon`, but the result shown is not right and changes with the initial point. Could you tell me how to deal with it? Really appreciate it. :)

Here is my code:

```matlab
function f = eu(x)
f = 0.4*exp(-(1.2*x(1)+0.8*x(2)+200-x(1)-x(2))*0.091) + ...
    0.3*exp(-(0.6*x(1)+1.3*x(2)+200-x(1)-x(2))*0.091) + ...
    0.3*exp(-(1.6*x(1)+1.8*x(2)+200-x(1)-x(2))*0.091);
```

```matlab
options = optimset('Algorithm','interior-point','Display','iter', ...
    'MaxIter',10000,'MaxFunEvals',300000, ...
    'TolFun',1e-10,'TolX',1e-10,'TolCon',1e-10);
A  = [1,1];
b  = 200;
lb = zeros(2,1);
[x,fval,exitflag,output] = fmincon(@eu,[100,100],A,b,[],[],lb,[],[],options);
```

The result is

```
x =
   33.1890   33.1891

fval =
   1.0102e-008
```

But when I use Maple to do the same thing, it clearly gives a better answer:

```
x =
   16.1114055242944   25.7712836854408

fval =
  -9.51154566365912526e-9
```
**From: Sargondjani | Date: 18 Jun, 2012 06:47:06 | Message: 2 of 12**

First of all, `fmincon` can only find a local minimum, so if your objective is not well behaved it is not guaranteed to converge to the global minimum. That might explain why you get different results with different starting values.

Second, you should supply the analytical gradient; this will help a lot.

Third, try lowering `TolFun`. Since your result is of order 1e-8 and the 'good' result is of order 1e-9, it may be that the change in the objective fell below 1e-10 while the iterate was still far from the minimum.
**From: Johan Löfberg | Date: 18 Jun, 2012 07:43:06 | Message: 3 of 12**

The function is convex and very nicely behaved. However, it is numerically very small, so what you see is simply a termination criterion kicking in: very large changes in x give very small changes in the objective.

For instance, if you scale the objective by 1000, you get the solution

```
x =
   11.966133405720440
   21.093212575892892
```

with objective 9.5593e-009.

You should take a step back and think of what the numbers represent. Is this really a reasonable model? Does it make sense that you can make the variables twice as large but only get a change in the tenth decimal? Are you using the correct scales in your model?
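Johan's rescaling can be sketched as follows (a minimal sketch, assuming the objective and constraints from the original post; the factor 1000 is arbitrary, since scaling by any positive constant leaves the minimizer unchanged):

```matlab
% Scale the objective by a constant so that termination tolerances such as
% TolFun operate on numbers of reasonable magnitude.
eu_scaled = @(x) 1000*( ...
    0.4*exp(-(1.2*x(1)+0.8*x(2)+200-x(1)-x(2))*0.091) + ...
    0.3*exp(-(0.6*x(1)+1.3*x(2)+200-x(1)-x(2))*0.091) + ...
    0.3*exp(-(1.6*x(1)+1.8*x(2)+200-x(1)-x(2))*0.091));

opts = optimset('Algorithm','interior-point','TolFun',1e-10,'TolX',1e-10);
[x,fval] = fmincon(eu_scaled,[100,100],[1,1],200,[],[],zeros(2,1),[],[],opts);
```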
**From: Matt J | Date: 18 Jun, 2012 09:43:09 | Message: 4 of 12**

Similar to what Johan said, the presence of the +200 terms in your function is unnecessary. All they do is scale your function by a very small positive number. A much better scaled expression of your function would probably be

```matlab
A = -.0091*[0.2 -0.2; -0.4 0.3; 0.6 0.8];
d = [0.4, 0.3, 0.3];

f = @(x) d*exp(A*x(:));
```
**From: Matt J | Date: 18 Jun, 2012 10:04:07 | Message: 5 of 12**

Also, the result returned by Maple looks suspicious. First, it has returned a negative number for the objective function, when your function should be positive valued everywhere. Sure, it's a very small negative number, but it makes one wonder what numerically non-robust things Maple might be doing.

Second, the solution returned by MATLAB appears to be better once you scale the objective function as in my last post:

```matlab
x1 = [33.1890 33.1891];
x2 = [16.1114055242944 25.7712836854408];
>> f(x1), f(x2)
ans =
    0.9058
ans =
    0.9313
```
**From: Matt J | Date: 18 Jun, 2012 10:12:06 | Message: 6 of 12**

Small correction: the matrix in my last post should be

```matlab
A = -.091*[0.2 -0.2; -0.4 0.3; 0.6 0.8];
```

and with that fixed the Maple solution does indeed give a lower objective function value.
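The corrected comparison can be reproduced with a short check (a sketch, combining the corrected matrix above with the rescaled objective from message 4; variable names follow those posts):

```matlab
% Rescaled objective with the corrected factor -.091.
A = -.091*[0.2 -0.2; -0.4 0.3; 0.6 0.8];
d = [0.4, 0.3, 0.3];
f = @(x) d*exp(A*x(:));

f([33.1890 33.1891])                    % MATLAB's answer from message 1
f([16.1114055242944 25.7712836854408])  % Maple's answer
```

With this scaling the two candidate solutions differ at a magnitude the solver's tolerances can actually resolve, rather than in the tenth decimal.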
**From: xueqi | Date: 18 Jun, 2012 13:08:06 | Message: 7 of 12**

Yeah, it actually is a bad model for exactly this reason! It took me a while to figure that out. I am now modifying the model, but meanwhile I still want the answers from this version of it. :)
**From: xueqi | Date: 18 Jun, 2012 13:10:07 | Message: 8 of 12**

Hi, by mentioning the analytical gradient, do you suggest I solve the function using the first-order condition?
**From: Matt J | Date: 18 Jun, 2012 13:36:07 | Message: 9 of 12**

No, Sargondjani's suggestion was to use the `GradObj` option. The first-order condition wouldn't be that easy to solve; even in the unconstrained case you would probably need `fsolve`.
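For completeness, solving the first-order condition directly would look something like this (a sketch, assuming the rescaled objective from messages 4 and 6; note that `fsolve` finds a root of the gradient, which for a convex function is the unconstrained stationary point, and it ignores the constraints entirely):

```matlab
% Gradient of f(x) = d*exp(A*x) is g(x) = A.'*(d(:).*exp(A*x(:))).
A = -.091*[0.2 -0.2; -0.4 0.3; 0.6 0.8];
d = [0.4, 0.3, 0.3];
g = @(x) A.'*(d(:).*exp(A*x(:)));

x0 = [100; 100];
xstar = fsolve(g, x0);   % stationary point of the unconstrained problem
```

This is why `GradObj` is the better route here: `fmincon` keeps handling the constraints while using the exact gradient.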
**From: Sargondjani | Date: 18 Jun, 2012 13:42:07 | Message: 10 of 12**

You don't need to solve it, but you can supply a gradient function to MATLAB so that it does not have to approximate the gradient/Jacobian with finite differences. MATLAB can then use the analytical gradient in the first-order conditions (and also to calculate the Hessian).

I suspect that in your case the finite differences are going to be very inaccurate. The default step in x is 1e-6, I think, so the resulting difference in the objective function will be close to machine precision (about 1e-16 in double), and can therefore easily be very inaccurate.
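Supplying the gradient looks roughly like this (a sketch, assuming the original objective from message 1 with the affine terms multiplied out; the gradient below is derived by hand, so it is worth verifying it with the `DerivativeCheck` option before trusting it):

```matlab
function [f,g] = eu_grad(x)
% Objective from the original post and its hand-derived gradient.
t1 = exp(-( 0.2*x(1) - 0.2*x(2) + 200)*0.091);
t2 = exp(-(-0.4*x(1) + 0.3*x(2) + 200)*0.091);
t3 = exp(-( 0.6*x(1) + 0.8*x(2) + 200)*0.091);
f  = 0.4*t1 + 0.3*t2 + 0.3*t3;
if nargout > 1   % solver asked for the gradient too
    g = -0.091*[ 0.2*0.4*t1 - 0.4*0.3*t2 + 0.6*0.3*t3;
                -0.2*0.4*t1 + 0.3*0.3*t2 + 0.8*0.3*t3];
end
end
```

With `eu_grad.m` on the path, set `optimset('GradObj','on','DerivativeCheck','on')` and pass `@eu_grad` to `fmincon` in place of `@eu`.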
**From: Matt J | Date: 18 Jun, 2012 14:14:07 | Message: 11 of 12**

I don't see how the scaling of the objective function alone should affect precision. In the test below I get very close agreement between the analytical and finite-difference gradients:

```matlab
f = @(x) 0.4*exp(-(0.2*x(1)-0.2*x(2)+200)*0.091) + ...
    0.3*exp(-(-0.4*x(1)+0.3*x(2)+200)*0.091) + ...
    0.3*exp(-(0.6*x(1)+0.8*x(2)+200)*0.091);

A = -.091*[0.2 -0.2; -0.4 0.3; 0.6 0.8];
d = [0.4, 0.3, 0.3];
g = @(x) A.'*(d(:).*exp(A*x(:)))*exp(-.091*200);  % analytical gradient

x1 = [33.1890 33.1891];
dx = 1e-6;                                        % finite-difference step
analytical = g(x1);
findiff = ([f(x1+[dx,0]); f(x1+[0,dx])] - f(x1))/dx;

>> graderror = norm(analytical-findiff)/norm(analytical)*100
graderror =
   1.4640e-005
```

(The error is expressed in percent.)
**From: xueqi | Date: 18 Jun, 2012 14:51:07 | Message: 12 of 12**

Hi, I got rid of the constant factor coming from the +200 terms and now the result is quite promising! Really helpful, thanks! :)

About the suspicious Maple result: it is because I was minimizing the negated target function in MATLAB while maximizing it in Maple...
