Using Optimization Toolbox with an unknown objective function.

I want to use Optimization Toolbox to optimize my input parameters, but I don't have an objective function. The objective function has no explicit form; its value is measured by simulation (running a .exe created from MATLAB code).
How can I give Optimization Toolbox's functions the input argument fun so that they work?
I have a main function Simulation.m (function Simulation), and I have created the .exe based on this function.
I changed my main function to function y = Simulation(x) and used [x] = fminunc(@Simulation, x0), but I get an error with the input function.
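For reference, one common pattern is to wrap the external executable in a MATLAB function that fminunc can call. This is only a sketch: the file names and the .exe's input/output convention are assumptions, since the question doesn't say how the executable receives its parameters or reports its result.

```matlab
% Hypothetical wrapper: fminunc needs a MATLAB function handle, so wrap
% the external simulation in a function that passes parameters to the
% .exe and reads back a scalar score. It is assumed here that
% Simulation.exe reads params.txt and writes its result to result.txt.
function y = simulationObjective(x)
    dlmwrite('params.txt', x(:));        % hand the parameters to the .exe
    status = system('Simulation.exe');   % run the black-box simulation
    if status ~= 0
        error('Simulation.exe returned a nonzero exit status');
    end
    y = dlmread('result.txt');           % scalar value to minimize
end
```

With such a wrapper, fminunc(@simulationObjective, x0) at least has a callable objective, whatever the solver then makes of it.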

Answers (3)

Star Strider on 5 Oct 2014
You must have an objective function!
If you wanted me to optimise your function, what would you tell me about it? Think of what you want your ‘Simulation’ function to do (for instance, what output you want from it), and write an objective function to optimise the parameters to match that output.
The solver and approach you use depend on your problem.
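A concrete illustration of "optimise the parameters to match that output" could look like the following sketch; target, runMySimulation, and the starting point are all placeholders, not anything taken from the question.

```matlab
% Hypothetical sketch: the objective is the squared mismatch between
% the simulation's output and the output you want it to produce.
target = 42;                                       % desired output (made up)
objective = @(x) (runMySimulation(x) - target)^2;  % runMySimulation is a placeholder
x0 = zeros(10, 1);                                 % illustrative starting point
xBest = fminunc(objective, x0);
```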
  2 Comments
John on 5 Oct 2014
It's impossible to have an explicit form of the objective function; that's why I use simulation to get the function's value for the input parameters I give.
And I want to optimize my system by choosing the best input parameters. I have a matrix of 6*54 parameters to be optimized.
Can I use Optimization Toolbox to find new input parameters that give a better result after I run simulations with the new parameters?
What will I use as the function fun in fminunc(fun, x0)?
Star Strider on 5 Oct 2014
It will likely not be possible to optimise 6*54 parameters, even with an objective function.



Titus Edelhofer on 5 Oct 2014
Hi,
as Star Strider said, you must have an objective function. This does not mean you have to have an explicit function, but you must have some measure of saying "this set of parameters is good" or "this set of parameters is bad", where the measure is a single value with "the smaller the value, the better the set of parameters".
One example of this is curve fitting: the parameters belong to some function (e.g. a * exp(b*x)), and the measure of "goodness" is the difference between the fitted values and some measured values.
For you this means: write an objective function that accepts a vector of size 6*54 = 324, takes those 324 values, runs the simulation, and returns a value describing "good" or "bad" as stated above.
Then call the appropriate optimizer with this objective function: e.g. fminunc if you have no constraints, or fmincon if you have constraints.
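This recipe might be sketched as follows; runSimulation stands in for the black-box call and the starting guess is illustrative, so treat this as a shape to copy rather than working code.

```matlab
% Sketch: accept all 6*54 = 324 values as one vector, reshape them into
% the matrix the simulator expects, and return a single scalar score.
v0 = rand(6*54, 1);                  % illustrative starting guess
vBest = fminunc(@objective324, v0);
PBest = reshape(vBest, 6, 54);       % recover the optimized matrix

function score = objective324(v)
    P = reshape(v, 6, 54);           % 324-vector -> 6-by-54 matrix
    score = runSimulation(P);        % placeholder for the black box
end
```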
Titus
  2 Comments
Matt J on 5 Oct 2014
Edited: Matt J on 5 Oct 2014
This does not mean you have to have an explicit function
@John,
You indeed do not have to have a closed-form expression for the objective function's value. Any mechanism that generates that value will do. However, you have to pre-analyze the objective function and be sure it has certain properties that make it "legal" for use with fminunc. In particular, you must be sure that the objective function is twice continuously differentiable. This means, for example, that it must contain no quantizing operations like ceil(), floor(), round(), no nearest-neighbor or linear interpolation, no abs() operations, etc.
Also, since you have over 300 parameters and you plan to let fminunc compute approximate derivatives using finite differences, it could be very slow if your function has to run a fresh simulation for every single difference delta that it needs. If there is a way your simulator can also generate the gradient of the objective function, it may well be worth investing in that, and then using the 'GradObj' option.
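The 'GradObj' suggestion would look roughly like this; runSimulationWithGradient is a hypothetical stand-in for a simulator that can return derivatives, and the starting point is made up.

```matlab
% Sketch: if the simulator can return the gradient along with the
% objective value, supply both so fminunc avoids hundreds of extra
% simulation runs per finite-difference step.
opts = optimset('GradObj', 'on');    % option name in 2014-era MATLAB
x0 = zeros(324, 1);                  % illustrative starting point
xBest = fminunc(@objectiveWithGrad, x0, opts);

function [f, g] = objectiveWithGrad(x)
    [f, g] = runSimulationWithGradient(x);  % hypothetical simulator call
end
```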
John on 6 Oct 2014
Thanks for your suggestions. I am trying to find something like (Pest - Pobs)^2 for my objective function, where Pest will be the simulated value and Pobs the observed value.
The problem is that I don't have any observed values, and if I put in random observed values there may be no solution or no convergence.
As for the optimization algorithm, I will try several in order to study convergence and/or fitness of the solution. But first of all I have to find which objective function to use.



Dorsa Haji Ghaffari on 16 Sep 2020
Hi! I am having a similar problem using fmincon. I get my objective function value from real-time experiments and can't define a formula for it; I just know I need to get it close to zero. I basically input two parameters into my experiment and get the objective function value out, and my goal is to eventually find the best two parameters for minimizing it. Does anyone have a suggestion?
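For a two-parameter experimental objective, one hedged sketch is below; runExperiment, the bounds, and the starting point are all placeholders.

```matlab
% Sketch: wrap the real-time experiment in a function handle and give
% fmincon simple bounds; the empty arguments mean "no linear constraints".
lb = [0 0];  ub = [10 10];           % illustrative parameter bounds
x0 = [1 1];                          % illustrative starting point
xBest = fmincon(@(x) runExperiment(x), x0, [], [], [], [], lb, ub);
```

Since an experimental objective is usually noisy and its derivatives are unavailable, derivative-free solvers such as fminsearch (base MATLAB) or patternsearch (Global Optimization Toolbox) are often suggested as alternatives in this situation.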
Thank you!
  4 Comments
omid rastkhiz paydar on 15 Feb 2021
Edited: omid rastkhiz paydar on 15 Feb 2021
Hi Dorsa,
I just want to know: did you find out how to handle this issue? I have the same problem.
My black box is ANSYS Mechanical, which gives me a result for the natural frequency, and I can't write an objective function for it, so I need to use Optimization Toolbox with a column of numbers as input instead of an objective function.
Please let me know if you have solved this issue.
Walter Roberson on 15 Feb 2021
It is not possible to optimize under the circumstances described -- not in any meaningful way.
Suppose you give me a finite list of inputs and corresponding outputs, each written to finite precision, and suppose in return I could give you back a function that perfectly matched the values to within round-off error. Is that something that is theoretically possible? Yes: Lagrange interpolation and Chebyshev polynomials show that it is possible. So you give me some data, and I give you back a function that is mathematically perfect, so it must be the right function, no?
No. If you give me the outputs 1, 5, 17 at x = [1,2,3], sure, I can give you back a quadratic function that fits them perfectly. I can also give you that quadratic plus a 347 Hz sine wave of amplitude 342303: sin(2*pi*347*x) is exactly 0 at every integer x, so the sine wave contributes nothing at all at the points you measured. The quadratic plus the sine wave is therefore also a perfect fit. So is the variant at 346 Hz. Or with amplitude -654321. Or -654322...
After a moment, you will realize that a literally infinite number of functions perfectly fit any finite data set to within round-off error, and that they can give extremely different results at points in between the ones you gave explicit data for.
...And in any case where there are an infinite number of solutions, the probability that any one of them is the "right" solution for the occasion is literally 0. As in 1/infinity --> 0.
Therefore, if all you have is data, and you do not know what form the function needs to take, you can never get the "right" function in order to optimize in any meaningful way.
The situation is very different if you have a finite list of possible forms of functions, with the associated parameters unknown: in such a case, you can do fitting of each form against the data to arrive at potential functions to optimize.
... Sadly, it turns out that even with a finite list of possible forms, much of the time you still cannot decide, or you may even discover that a different form fits better than the one you know is right. Noise does nasty things to curve fitting. The good news, though, is that with the list of fitted forms in hand, you can proceed to make predictions, and those can really help weed out the crop of possibilities.
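That last step, fitting a known candidate form against data, might be sketched like this, using the made-up values from the example above:

```matlab
% Sketch: fit a candidate quadratic form to the three data points from
% the example above with lsqcurvefit (Optimization Toolbox).
xdata = [1 2 3];
ydata = [1 5 17];
model = @(p, x) p(1)*x.^2 + p(2)*x + p(3);   % candidate form, unknown p
pFit = lsqcurvefit(model, [1 1 1], xdata, ydata);
% An exact fit exists here: p = [4 -8 5], i.e. y = 4x^2 - 8x + 5.
```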

