fminimax fails to minimize objective functions

Background:
I have written Matlab code that calculates wind-turbine blade energy production and mass for a given blade shape. The input variables are control points that change the shape of Bezier curves (i.e. the shape of the blade). The models that compute the objective functions are Matlab codes based on theoretical equations and experimental data; the experimental data is used in the energy calculation, while the mass calculation involves a set of x- and y-coordinates describing the blade's outer contour.
I want to simultaneously maximize the energy production and minimize the mass using the fminimax command in Matlab, subject to nonlinear inequality constraints.
Note that maximization of the energy production is performed by minimizing its inverse. I have a total of three objectives I want to minimize: 1/energy production, mass, and a third one not described here.
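For concreteness, the objective vector handed to fminimax would look something like the following sketch (energyProd, bladeMass, and thirdObj are placeholder names for the models described above, not names from my actual code):
% Conceptual sketch only: maximize energy by minimizing its inverse,
% minimize blade mass, and minimize a third quantity.
objVec = @(x) [1./energyProd(x); bladeMass(x); thirdObj(x)];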
Code:
function [x,maxfval,fxmin,fe] = localS(x0,w,z,bnds,wtdata)
    N = wtdata.N;
    %Sort chord and airfoil locations
    %Twists
    x0(1:N) = sort(x0(1:N),'descend');
    %Chords
    x0((N+1):(2*N)+1) = sort(x0((N+1):(2*N)+1),'descend');
    %Thickness
    x0(((2*N)+2):end-2) = sort(x0(((2*N)+2):end-2),'descend');
    %Root Transition
    x0((end-1):end) = sort(x0((end-1):end),'ascend');
    opt = optimset('MaxIter',4,'Display','iter-detailed',...
        'FinDiffType','forward','FunValCheck','on','MeritFunction','multiobj');
    fx = [];
    history = [];
    %Normalize upper & lower bounds
    ub_dim = bnds(1,1:((2*N)+1));
    lb = bnds(2,1:((2*N)+1))./ub_dim;
    ub = ub_dim./ub_dim;
    %Normalize decision variables
    x0dim = x0(((2*N)+2):end);
    x0st = x0(1:((2*N)+1))./ub_dim;
    objective = @(x)objasf(w,z,x,lb,ub,wtdata,ub_dim,x0dim);
    constraint = @(x)subcon(x,N);

    function [al] = objasf(w,z,x,lb,ub,wtdata,ub_dim,x0dim)
        %Make sure finite differencing does not exceed bounds
        if (any(x<lb)) || (any(x>ub)) || (isreal(x)~=1)
            x(x<lb) = lb(x<lb);
            x(x>ub) = ub(x>ub);
        end
        x_in = cat(2,(x.*ub_dim),x0dim);
        fx = objF(x_in,wtdata);   %<-- Matlab code that computes a vector of objectives plus a value for constraint violation
        fx_o = fx(1:(end-1));     %<-- vector of objectives
        al = (w.*(fx_o-z))+((10^(-6))*sum(w.*(fx_o-z)));   %<-- normalize objectives & convert multi-obj. problem to single-obj.
        history = cat(1,history,cat(2,cat(2,al,fx_o(end)),max(al)));   %<-- store info regarding each function evaluation
    end

    function [c,ceq] = subcon(x,N)
        x_twists = x(1:N);             %<-- variables must be decreasing
        x_chords = x((N+1):(2*N)+1);   %<-- variables must be decreasing
        %Make sure variables are sorted from largest to smallest (decreasing in magnitude)
        c = cat(2,(x_twists(2:end)-x_twists(1:(end-1))),(x_chords(2:end)-x_chords(1:(end-1))));
        ceq = fx(end);
    end

    %Perform optimization
    [x,fval,maxfval,exitflag,output] = fminimax(objective,x0st,[],[],[],[],lb,ub,constraint,opt);
    %Make sure final answer for decision vector does not exceed bounds
    if (any(x<lb)) || (any(x>ub)) || (isreal(x)~=1)
        x(x<lb) = lb(x<lb);
        x(x>ub) = ub(x>ub);
    end
    %Find best solution from list of function evaluations
    [~, loc] = min(abs(history(:,end)-maxfval));
    fxmin = history(loc(end),1:(end-1));
    fe = output.funcCount;
end
Output:
(Each line represents one function evaluation)
history =
Objective 1 Objective 2 Objective 3 Max Objective
-0.080349 0.0559330 0.0026559 0.055933
-0.080349 0.0559330 0.0026559 0.055933
-0.080349 0.0559330 0.0026565 0.055933
-0.080349 0.0559330 0.0026565 0.055933
-0.080349 0.0559330 0.0026559 0.055933
-0.080349 0.0559330 0.0026567 0.055933
-0.080349 0.0559330 0.0026559 0.055933
-0.080349 0.0559330 0.0026559 0.055933
-0.080349 0.0559330 0.0026565 0.055933
-0.080349 0.0559330 0.0026559 0.055933
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.060749 -0.083450 0.0045672 0.0045672
-0.045087 -0.113910 0.3759200 0.37592
-0.073846 -0.046424 0.0520220 0.052022
-0.067983 -0.067152 0.0198970 0.019897
-0.062777 -0.086817 0.0098890 0.0098886
-0.061437 -0.084987 0.0070430 0.007043
-0.059923 -0.090505 0.0048910 0.0048914
-0.060327 -0.086974 0.0048100 0.00481
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.060610 -0.085211 0.0045994 0.0045994
-0.050964 -0.067548 0.2397900 0.23979
-0.072312 -0.050078 0.0489470 0.048947
-0.066237 -0.072061 0.0190200 0.01902
-0.062984 -0.084199 0.0106240 0.010624
-0.060629 -0.090942 0.0067559 0.0067559
-0.060249 -0.088056 0.0055264 0.0055264
-0.060385 -0.086628 0.0051663 0.0051663
-0.060449 -0.085918 0.0048842 0.0048842
-0.060584 -0.085564 0.0049129 0.0049129
-0.060597 -0.085388 0.0048086 0.0048086
-0.060603 -0.085299 0.0048012 0.0048012
-0.060606 -0.085255 0.0047159 0.0047159
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.060608 -0.085233 0.0045725 0.0045725
-0.064592 -0.065096 0.0547880 0.054788
-0.063221 -0.074569 0.0223980 0.022398
-0.062350 -0.077450 0.0115940 0.011594
-0.059119 -0.093555 0.0068829 0.0068829
-0.059666 -0.089426 0.0055556 0.0055556
-0.060094 -0.087322 0.0051376 0.0051376
-0.060303 -0.086276 0.0049866 0.0049866
-0.060407 -0.085754 0.0048042 0.0048042
-0.060458 -0.085493 0.0047930 0.004793
-0.060584 -0.085363 0.0047229 0.0047229
-0.060596 -0.085298 0.0047678 0.0047678
-0.060602 -0.085266 0.0047412 0.0047412
-0.060605 -0.085249 0.0046034 0.0046034
-0.060606 -0.085241 0.0045916 0.0045916
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
-0.060609 -0.085229 0.0045668 0.0045668
Problem:
I'm trying to improve the convergence of my blade optimization by combining fminimax with a genetic algorithm; however, fminimax does not appear to be minimizing the objectives. Instead, as shown in the output above, it jumps around and only worsens the original design (Max Objective is increasing when it should be decreasing monotonically towards negative values).
  1. What is causing fminimax to do this?
  2. How can I fix fminimax such that it minimizes all objectives appropriately?
Things that I tried:
  1. Changing the step size
  2. Reducing the number of variables
I applied fminimax to analytical test functions and it works perfectly fine. Could the difference be that my objective functions involve many lines of code and may exhibit noise?
Thanks for your help and guidance.
UPDATE 1:
This is what it should look like when fminimax is working: http://www.2shared.com/photo/nOgiTiHk/ResultsMathWorksHelp.html
For some reason it worked for this particular blade as an initial solution, but I can't get it to work consistently for other blades.
UPDATE 2:
I noticed that I had incorrectly normalized the decision variables. Now that each variable spans between 0 and 1, where 0 and 1 are the lower and upper bounds respectively, fminimax is able to minimize all three objectives if an appropriate step size is selected. The step size that I have chosen is 0.05. Nevertheless, it still looks like there is a lot of noise in the objective functions and I was hoping someone could provide further suggestions on how to improve the gradient performance, if any. Thank you.
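For reference, a minimal sketch of the normalization I mean (variable names here are illustrative; lb_dim and ub_dim denote the dimensional lower and upper bounds):
% Map each decision variable onto [0,1] using its own bounds
x0st = (x0_dim - lb_dim)./(ub_dim - lb_dim);   % normalized starting point
lb   = zeros(size(x0st));                      % normalized lower bounds
ub   = ones(size(x0st));                       % normalized upper bounds
% Inside the objective, map back before calling the models:
% x_dim = lb_dim + x.*(ub_dim - lb_dim);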
UPDATE 3:
I have uploaded in the following link an Excel file with all the relevant information from each function evaluation: http://dc654.2shared.com/download/WlXDsN2Z/results.xls?tsid=20130621-172718-ed27ac49
fminimax is violating my third constraint. I've been reading some of the other threads (e.g. http://www.mathworks.com/matlabcentral/newsreader/view_thread/318297) describing a similar problem with fmincon. Is there a way or a parameter setting that will keep fminimax from spending function evaluations in the infeasible region, or simply terminate it once it starts violating a constraint?
  1 Comment
Matt J on 21 Jun 2013 (edited 21 Jun 2013)
Is there a way or a parameter setting that will keep fminimax from spending function evaluations in the infeasible region
No. And in fact, equality constraints, linear or nonlinear, can never be precisely satisfied due to floating point precision limits.
or simply terminate it once it starts violating a constraint
You could use OutputFcn to send a stop flag, but it's hard to see why it would be useful to stop anytime the constraints are violated. It could stop with a highly sub-optimal x.
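For illustration, a minimal sketch of such an output function (this assumes it is written as a nested function inside localS so it can see the constraint handle, and that tol is a violation tolerance you choose, e.g. 1e-3):
function stop = stopOnViolation(x, optimValues, state)
    stop = false;
    if strcmp(state,'iter')
        [c,ceq] = constraint(x);               % evaluate the nonlinear constraints
        viol = max([0; c(:); abs(ceq(:))]);    % worst violation (0 when feasible)
        if viol > tol
            stop = true;                       % ask fminimax to halt early
        end
    end
end
Registering it with opt = optimset(opt,'OutputFcn',@stopOnViolation); makes fminimax call it at every iteration.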


Answers (3)

Matt J on 17 Jun 2013 (edited 17 Jun 2013)
Hard to say, partly because we can't see what objF is doing.
Just as a remark, though, the following seems like an inadvisable idea, since it effectively makes your functions non-differentiable:
%Make sure finite differencing does not exceed bounds
if (any(x<lb)) || (any(x>ub)) || (isreal(x)~=1)
    x(x<lb) = lb(x<lb);
    x(x>ub) = ub(x>ub);
end
Is there a reason you don't just let the algorithm worry about satisfying constraints and bounds? It uses the SQP algorithm, which is supposed to be able to do that pretty well.
  2 Comments
Matias on 17 Jun 2013 (edited 17 Jun 2013)
You're right, SQP does satisfy bounds pretty well. However, at times it exceeds them by a tiny amount (e.g. 1e-7, depending on the step size), particularly when the objective function is near a minimum and the decision variables are near the bounds.
Unfortunately the current models in objF will crash if the decision variables exceed the bounds, even by a very small amount.
Matt J on 20 Jun 2013 (edited 20 Jun 2013)
Unfortunately the current models in objF will crash if the decision variables exceed the bounds, even by a very small amount.
This sounds rather suspicious. Basically, you're saying that your objective function has a closed domain. I've never heard of such a function. Often you encounter functions like log(x), tan(x), etc... whose domains do not fill all of R^1, but their domains are open.
One adverse consequence of what you are saying is that your function can never be differentiable at the lb,ub boundaries. To be differentiable at a point, a function has to be defined throughout a neighbourhood surrounding that point.
Because your function is not differentiable at these boundaries, it breaks assumptions that FMINIMAX relies on. FMINIMAX cannot be guaranteed to converge properly to a point at the boundaries, if that's where the solution lies.
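As a simple illustration of why: if the clamping effectively evaluates g(x) = f(max(x, lb)), then just below lb the function g is constant, so its left-hand derivative is 0 while its right-hand derivative is f'(lb); unless f'(lb) happens to be 0, g has a kink at the boundary, and a finite-difference step that crosses lb returns a misleading slope.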



Roger Stafford on 17 Jun 2013
I don't have the time to carefully analyze your code, Matias, but my impression from the way you have worded your problem is that you may have misunderstood what the 'fminimax' function accomplishes. It requires a finite set of objective functions (you appear to have only three of them) and varies the parameters you furnish it with so as to minimize the largest of these objective functions over all possible parameter values. However, I see you apparently attempting to compare the reciprocal of the energy provided, the mass of the blades, and an unnamed third quantity. I don't think taking the largest among these three quantities as you vary the blade shape makes any sense.
Suppose for the sake of discussion that you are able to vary the blade shape parameters in such a way as to leave the blade mass constant. For each set of parameters you could determine the energy produced as a function of the wind speed. Your problem then is to compare these different energy versus wind speed curves and seek the one that is overall the best for your purposes. I can conceive of some ratio of energy produced versus wind energy per unit area that would be a good measure for making such an overall decision. In that case you would have a problem for 'fminimax'. For each set of shape parameters compute such a magic ratio for each of a finite set of wind speeds and select the worst performing one among them. Then choose among all the shape parameters that set which minimizes this worst case performance. The key here is computing quantities in the various objective functions which are in some sense meaningfully comparable. Thereby you are in effect defining what you mean by "worst case".
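A rough sketch of that formulation (all names below are placeholders for illustration, not code from this thread):
% One objective per wind speed: fminimax then minimizes the worst-case
% (negated) performance ratio over the whole set of speeds.
windSpeeds = 4:2:24;   % an assumed finite set of wind speeds [m/s]
perfVec = @(shape) arrayfun(@(v) -energyRatio(shape,v), windSpeeds);
% energyRatio is a placeholder: energy produced at wind speed v divided
% by the available wind energy per unit area at that speed.
shapeOpt = fminimax(perfVec, shape0, [], [], [], [], lbShape, ubShape);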
If you are counting on having a constant wind speed, then you would need some measure of energy versus mass combinations suitable for choosing the best among them. In that case all you need is a straight minimization (or maximization) function like 'fminunc' or 'fmincon'. It would not be appropriate to use 'fminimax' here. Again the key here is devising measures of comparable performance.

Matt J on 20 Jun 2013 (edited 20 Jun 2013)
Max Objective is increasing when it should be decreasing monotonically towards negative values
The Max Objective data that you show doesn't appear to be increasing perpetually. It has a few upward jumps at certain iterations, but the overall trend is downward.
Interestingly, I see nothing in the description of the SQP algorithm (which fminimax uses) that guarantees monotonic descent of the objective at all iterations. In fact, the line search method described here says that it is not the objective that is pushed downward at each iteration, but rather a modified merit function that takes the constraint violation into consideration as well.
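Schematically (the exact form used internally may differ), such a merit function looks like

Psi(x) = f(x) + sum_i r_i*max(0, g_i(x)) + sum_j s_j*|h_j(x)|

where f is the (reformulated minimax) objective, g_i <= 0 and h_j = 0 are the constraints, and r_i, s_j are penalty weights. Psi can therefore decrease from one iteration to the next even when the objective itself jumps up, provided the constraint violation drops enough.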
The step size that I have chosen is 0.05. Nevertheless, it still looks like there is a lot of noise in the objective functions and I was hoping someone could provide further suggestions on how to improve the gradient performance, if any
One way to improve gradient performance would be to make sure your objective has a gradient! I still think your pre-transformation
x(x<lb)=lb(x<lb);
x(x>ub)=ub(x>ub);
is creating mischief. In particular, it renders your objective function non-differentiable at the lb,ub boundaries. The step size 0.05 seems huge to me. It is probably allowing you in some cases to find a descent direction at the boundaries, in spite of the non-differentiability, but not reliably, and at the cost of a very poor gradient and Hessian approximation.
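As a concrete, purely illustrative example (these values are not recommendations from the thread): once the variables are normalized to [0,1], the legacy optimset options that fminimax accepts can bound the finite-difference perturbation directly:
opt = optimset(opt, ...
    'DiffMinChange', 1e-8, ...   % smallest allowed forward-difference perturbation
    'DiffMaxChange', 1e-2, ...   % largest allowed forward-difference perturbation
    'TolFun', 1e-6, ...
    'TolX', 1e-6);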
  2 Comments
Matias on 21 Jun 2013 (edited 21 Jun 2013)
First of all, thanks a lot for helping me. I think you are correct. It seems like the "noise" is actually fminimax trying to minimize the constraint violation plus the objective functions (i.e. merit function). I looked back at the constraint violation values in Update 1 and they are all zero.
I replaced the pre-transformation
if (any(x<lb)) || (any(x>ub)) || (isreal(x)~=1)
    x(x<lb) = lb(x<lb);
    x(x>ub) = ub(x>ub);
end
with
k1 = x((N+1):end);
k2 = lb((N+1):end);
if (any(k1<k2)) || (isreal(k1)~=1)
    al = ones(size(z))*Inf;
    return
end
In the above code I applied the boundary check only to the variables that caused problems, returning Inf if they went below the lower bound. No problems were found when the upper bound was exceeded, so I removed that check. However, I ran an optimization and this occurred:
??? Error using ==> optimfcnchk>checkfun at 338
User function '@(x)objasf(w,z,x,lb,wtdata,ub_dim,lb_dim,x0dim,N)' returned Inf or -Inf when evaluated;
GOALCON cannot continue.
Error in ==> goalcon at 24
f = feval(funfcn{3},x,varargin{:});
Error in ==> fminimax>@(y,varargin)feval(cfun{3},y,neqgoals,funfcn,confcn,WEIGHT,GOAL,x,errCheck,varargin{:}) at 462
cfun{3} = @(y,varargin) feval(cfun{3},y,neqgoals,funfcn,confcn,WEIGHT,GOAL,x,errCheck,varargin{:});
Error in ==> nlconst at 822
[nctmp,nceqtmp] = feval(confcn{3},x,varargin{:});
Error in ==> fminimax at 469
[xnew,gamma,LAMBDA,EXITFLAG,OUTPUT]=...
Error in ==> localS at 66
[x,fval,maxfval,exitflag,output]=fminimax(objective,x0st,[],[],[],[],lb,ub,constraint,opt);
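The immediate trigger for this error is the Inf return: with 'FunValCheck' set to 'on' in the optimset call, the internal wrapper (optimfcnchk/GOALCON) treats an Inf or -Inf objective value as fatal. A workaround sketch, which still leaves the closed-domain concern raised in the reply below, is to return a large finite penalty instead:
if (any(k1<k2)) || (isreal(k1)~=1)
    al = 1e6*ones(size(z));   % large but finite penalty instead of Inf
    return
end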
I also have another question. Please see UPDATE 3.
Matt J on 21 Jun 2013 (edited 22 Jun 2013)
Your modified transformation does not solve the issue. The objective function is required to have an open domain, and to be at least twice continuously differentiable. Your function's domain is still closed, with boundaries at {x(i) = lb(i) | i > N}.
The question you should be answering for us is: why does your function become undefined for x(i) < lb(i)? Also, why do you have isreal(k1)~=1? There is no reason FMINIMAX should ever iterate over complex x.
