# Thread Subject: linprog constraints of type non-equal to 0

### Message 1 of 15: Biljana (11 Jul, 2012 09:45:12)

Hi,

I have to solve a constrained optimisation problem with the simplex method, using linprog. However, I have around 10 constraints of the type:

w1*x1 - w2*x2 ~= 0.

Can someone tell me how to transfer them to inequalities, without examining all the possible combinations that would occur if I did the following:

w1*x1 - w2*x2 < 0 or
w1*x1 - w2*x2 > 0.

Thanks.
### Message 2 of 15: Torsten (11 Jul, 2012 10:53:55)

> I have around 10 constraints of the type w1*x1 - w2*x2 ~= 0. Can someone tell me how to transfer them to inequalities without examining all the possible combinations?

First, try to solve your problem without caring about the 10 constraints. If all of them are satisfied, you are done. Otherwise, add some small rounding error to the solution variables (about +/- 1e-20); nobody will then be able to tell whether w1*x1 - w2*x2 = 0 or w1*x1 - w2*x2 ~= 0.

Best wishes
Torsten.
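A minimal MATLAB sketch of this solve-then-check-then-perturb recipe (the cost vector f, the inequality data A, b, and the rows of C encoding the "~= 0" expressions are all hypothetical placeholders):

```matlab
% Hypothetical LP: minimize f'*x subject to A*x <= b.
f = [1; 2; 3];
A = -eye(3);                 % encodes x >= 0 as -x <= 0
b = zeros(3, 1);

% Rows of C encode the "w1*x(i) - w2*x(j) ~= 0" expressions, so the
% requirement is C*x ~= 0 row-wise. The values here are made up.
C = [2 -3 0;
     0  1 -1];

% Step 1: solve the LP, ignoring the ~= constraints.
x = linprog(f, A, b);

% Step 2: check which expressions came out (numerically) equal to zero.
tol = 1e-12;
hit = abs(C * x) < tol;

% Step 3: if any did, nudge the solution slightly, as Torsten suggests.
% (The perturbation size is a judgment call; it must be large enough to
% survive double precision but small enough not to matter otherwise.)
if any(hit)
    x = x + 1e-10 * randn(size(x));
end
```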
### Message 3 of 15: Bruno Luong (11 Jul, 2012 11:09:20)

> Can someone tell me how to transfer them to inequalities without examining all the possible combinations?

This is an ill-defined constraint. In the majority of optimization problems, the feasible set must be closed: http://en.wikipedia.org/wiki/Closed_set

Otherwise the problem is mathematically not solvable (the solution converges towards a point that does not belong to the set).

Bruno
### Message 4 of 15: Bruno Luong (11 Jul, 2012 11:48:12)

What is the solution of the problem

min x, x in R
s.t. x >= 0
x ~= 0

? None! (The infimum is 0, but no feasible point attains it.)

Bruno
### Message 5 of 15: Matt J (11 Jul, 2012 15:37:13)

> This is an ill-defined constraint. In the majority of optimization problems, the feasible set must be closed. Otherwise the problem is mathematically not solvable.

Hmmm. I don't know if this is true for the "majority of optimization problems". What about

min x - log(x)
s.t. x >= 0

The explicit constraint set {x >= 0} is closed, but the feasible set is {x > 0}.
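A short sketch of Matt J's example (my illustration, not code from the thread): the objective tends to +inf as x approaches 0 from the right, so a solver started at an interior point finds the interior minimizer x = 1 without the boundary ever needing to be excluded explicitly:

```matlab
% min x - log(x) subject to x >= 0. The objective is undefined at
% x = 0 but blows up as x -> 0+, so the minimum sits in the interior
% at x = 1 (where the derivative 1 - 1/x vanishes).
fobj = @(x) x - log(x);
x0   = 0.5;                                        % interior start
xopt = fmincon(fobj, x0, [], [], [], [], 0, []);   % expect xopt near 1
```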
### Message 6 of 15: Bruno Luong (11 Jul, 2012 16:38:29)

> min x - log(x)
> s.t. x >= 0

Let's agree about the vocabulary. The feasible set is {x >= 0}, not {x >= 0} intersected with {x : f(x) < inf}. When an optimization problem is formulated, the objective function must be defined and finite everywhere in the feasible set A = {x >= 0}: http://en.wikipedia.org/wiki/Mathematical_optimization

This is how I learned it in school too. That is not the case in the example you give.

Bruno
### Message 7 of 15: Matt J (11 Jul, 2012 17:47:07)

> Let's agree about the vocabulary. The feasible set is {x >= 0}, not {x >= 0} intersected with {x : f(x) < inf}. ... This is how I learned it in school too.

Well, the conventions you learned in school are unfamiliar to me. I think you have to allow the feasible set to be defined by things like "{x >= 0} intersected with {x : f(x) < inf}", because objective functions are typically only defined and finite on open sets (and to be differentiable, they have to be). I don't see why someone would insist that explicit closed constraints like {x >= 0} define the feasible set all by themselves. That is unnecessary for the theory of optimality conditions to work.

Also, it means that in order to obey your conventions, I have to formulate mathematical programs like the one in my example on ugly-looking closed subsets of the domain of the objective function:

min x - log(x)
s.t. x >= epsilon > 0

Yuck!

In any case, I agree that this is irrelevant to the OP. In the OP's particular situation, where the objective is linear, the feasible set has to be closed.
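For comparison, a sketch of the epsilon-bounded variant Matt J finds ugly (same hedged setup as the earlier sketch); the only change is the lower bound, but now a value for epsilon has to be picked up front:

```matlab
% Same objective, now over the closed set {x >= eps0} with eps0 > 0.
fobj = @(x) x - log(x);
eps0 = 1e-6;                                          % must be chosen a priori
xopt = fmincon(fobj, 0.5, [], [], [], [], eps0, []);  % still xopt near 1
```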
### Message 8 of 15: Bruno Luong (11 Jul, 2012 19:48:12)

> min f(x) := x - log(x)
> s.t. x >= epsilon > 0
>
> Yuck!

There is nothing wrong with such a formulation. Since f(x) -> infinity as x -> 0, the above is strictly equivalent to your original problem. But formulated with epsilon, it becomes a standard optimization problem as defined by the Wikipedia page.

Besides Wikipedia: if you read attentively, when people talk about constrained optimization theory (for example chapter 12 of the book you cited here: http://www.mathworks.com/matlabcentral/newsreader/view_thread/321542#881982), the constraints are formulated with the closed inequalities ">=" or "<=", and never with the open ones ">" or "<". Why? Because:

First: a problem arises when one formulates an optimization over a non-closed set (note that non-closed does not necessarily mean open), such as:

min g(x) := x (or any strictly increasing function)
s.t. x > 0

Such a problem has no solution, so there is no interest whatsoever in studying it.

Second: when a numerical method finds a sequence {xk} in the feasible set that converges towards xopt, with f(xk) -> inf(f), at the very least it would be nice if xopt were still guaranteed to be a feasible point. That can only be assured if the feasible set is closed.

Bruno
### Message 9 of 15: Matt J (11 Jul, 2012 20:18:30)

> There is nothing wrong with such a formulation. ... But formulated with epsilon, it becomes a standard optimization problem as defined by the Wikipedia page.

There is nothing technically wrong with bounding above epsilon, but IMO it's ugly, because, for example, you then have to analyze what epsilon value to use when formulating the problem. This is usually unnecessary: standard algorithms will implicitly avoid a boundary where the function tends to infinity, without it being explicitly constrained above an epsilon > 0.

Also, I don't see where the Wikipedia page you cited demands that the feasible set be closed. All I see is:

"Typically, A is some subset of the Euclidean space Rn, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions."

> ... the constraints are formulated with the closed inequalities ">=" or "<=", and never with the open ones ">" or "<".

Yes, "constraints" are formulated this way, but the feasible set and the constraints are not the same thing. The feasible set is the intersection of the constraint set and the open domain of the objective function.

> First: a problem arises when one formulates an optimization over a non-closed set ... Such a problem has no solution, so there is no interest whatsoever in studying it.

That happens for some objective functions, but not all (for example, mine). You certainly need conditions for a minimizer to exist and be attained in the feasible set, but closure of the feasible set is only one possible such condition. There are standard alternatives to it, for example that the objective function is coercive: http://en.wikipedia.org/wiki/Coercive_function. Optimization textbooks do mention these.

> Second: when a numerical method finds a sequence {xk} ... That can only be assured if the feasible set is closed.

Again, no. Closure of the feasible set is a sufficient condition, not a necessary one.
### Message 10 of 15: Bruno Luong (11 Jul, 2012 21:23:07)

> ... Standard algorithms will implicitly avoid a boundary where the function tends to infinity, without it being explicitly constrained above an epsilon > 0.

But a standard algorithm is not exclusively designed to handle functions that go to infinity at the border; it is designed to handle all the other cases as well, including when the infimum is reached at the border.

> Also, I don't see where the Wikipedia page you cited demands that the feasible set be closed.

I never said Wikipedia states that the feasible set needs to be closed. What I said is: Wikipedia defines the *feasible set* as the set of points that satisfy the constraints, and that definition is unrelated to the objective function. *You* define the feasible set as the set where the objective function is finite (within the constraints). That definition is NOT standard in my book. (... Let's agree about the vocabulary.)

> Yes, "constraints" are formulated this way, but the feasible set and the constraints are not the same thing. The feasible set is the intersection of the constraint set and the open domain of the objective function.

Again, this is YOUR definition of "feasible set", which is NOT in my book (and mine is also the one in Nocedal's book, which you obviously know, and in Wikipedia). To me, feasible set = the set defined by the constraints. It is imaginable that the cost function is not well defined everywhere in the feasible set. The minimizer is allowed to pick any point in the feasible set and ask for the objective function value at that point, and the objective function is prohibited from returning infinity, as you might know.

> Again, no. Closure of the feasible set is a sufficient condition, not a necessary one.

Sure, but it's necessary in many cases, as I showed with my example of g(x) = x. Minimizing g(x) over x > 0 is nonsense, because every numerical solution will converge towards a point that CANNOT be taken as the solution. A total paradox.

When closure is not necessary (as with your example of x - log(x)), one can discard the open boundary and replace it with an epsilon-spaced closed boundary, as in Torsten's suggestion.

So, in summary: a closed boundary is generic enough to handle all cases, and it is even necessary in many of them. If the infimum is never reached at the border, then removing a small open set around that border is certainly OK. Cf. Torsten's suggestion.

Bruno
### Message 11 of 15: Bruno Luong (11 Jul, 2012 21:46:13)

From Wikipedia:

[ Given: a function f : A -> R from some set A to the real numbers.
Sought: an element x0 in A such that f(x0) <= f(x) for all x in A ("minimization"). ...
The domain A of f is called the search space or the choice set, while the elements of A are called candidate solutions or feasible solutions. ]

Allow me to read the relevant parts for you, Matt:

- [ f : A -> R, ... to the real numbers ]: the set of real numbers is R. Infinity is NOT a real number, meaning f(A) < infinity, meaning f cannot take an infinite value on A.
- [ ... elements of A are ... feasible solutions ]: meaning A (where f is defined) is the feasible set.

Agree or not?

Bruno
### Message 12 of 15: Matt J (12 Jul, 2012 21:24:31)

> *You* define the feasible set as the set where the objective function is finite (within the constraints).

Yes!

> That definition is NOT standard in my book. (... Let's agree about the vocabulary.)

Well, we've read different books, I guess. I don't expect all books to opt for the same semantics. I'll take your word for it that Nocedal opts for yours, whereas Bertsekas opts for mine. I suspect, though, that some authors might require the constraint set to be the same as the feasible set just to simplify the presentation: since the constraint set is always closed, you don't have to do much pre-analysis of whether the minimum is attained. However, this formulation has its disadvantages implementation-wise, too (see below).

> To me, feasible set = the set defined by the constraints. It is imaginable that the cost function is not well defined everywhere in the feasible set. ... the objective function is prohibited from returning infinity, as you might know.

If the objective function is "prohibited from returning infinity", then why, in your conventions, is it "imaginable that the cost function is not well defined everywhere"? In my conventions this never occurs, because the feasible set is always a subset of the domain of the objective (and the objective is therefore finite at every feasible point).

> Sure, but it's necessary in many cases, as I showed with my example of g(x) = x.

Sure, but it's also unnecessary in many cases, as I showed with my example of g(x) = x - log(x). :-)

> When closure is not necessary (as with your example of x - log(x)), one can discard the open boundary and replace it with an epsilon-spaced closed boundary, as in Torsten's suggestion.

Yes, assuming it's easy to find a value of epsilon. Maybe a better example of why I don't like doing this is an n-dimensional extension of my problem, which arises in tomographic imaging:

min f(x) = sum( A*x - log(A*x) )
s.t. x >= 0

where A is a large matrix. So the domain of f(x) is {x : A*x > 0}. Because A is large and hard to analyze, it can be hard to find a rectangular subset {x >= epsilon} of this domain which is close enough to the boundary to avoid changing the minimum over {x >= 0}. You also don't want epsilon to be too small, because then numerical precision might not let you distinguish between x = epsilon and x = 0.
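A small sketch of this tomographic-style problem (my illustration; the sizes and data of A are made up), showing that the domain {x : A*x > 0} is shaped by A rather than by simple box bounds on x:

```matlab
% Sketch of Matt J's n-dimensional example with hypothetical data.
% f(x) = sum(A*x - log(A*x)) is only defined where A*x > 0 row-wise.
rng(0);
A = rand(50, 10);                 % hypothetical nonnegative system matrix
fobj = @(x) sum(A*x - log(A*x));

% Minimize over the closed constraint set {x >= 0}: the objective blows
% up as any component of A*x approaches 0, so a solver started at an
% interior point stays inside the open domain {A*x > 0}.
x0 = ones(10, 1);
lb = zeros(10, 1);
xopt = fmincon(fobj, x0, [], [], [], [], lb, []);
```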
### Message 13 of 15: Matt J (12 Jul, 2012 21:34:21)

> Allow me to read the relevant parts for you, Matt: ... Agree or not?

Agree, but I think it should already be obvious to you that I agree with this. You know that I define the feasible set as the intersection of the constraint set and the domain of the objective function. Therefore any point x in the feasible set is in the domain of the objective function, and therefore satisfies f(x) < infinity.
### Message 14 of 15: Bruno Luong (12 Jul, 2012 22:42:32)

> Yes, assuming it's easy to find a value of epsilon. ... Because A is large and hard to analyze, it can be hard to find a rectangular subset {x >= epsilon} of this domain which is close enough to the boundary to avoid changing the minimum over {x >= 0}.

You could choose the constraints

x >= 0
|A*x| >= epsilon.

That would make the selection of epsilon easier, the feasible set is closed, and f(x) is defined everywhere on it.

> You also don't want epsilon to be too small, because then numerical precision might not let you distinguish between x = epsilon and x = 0.

Wait... you just admitted that a problem could arise when x = 0, and that epsilon must be bigger. ;-)

Bruno
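A sketch of Bruno's proposed constraint set (my reading of it: since the solution of interest lies where A*x > 0, the constraint |A*x| >= epsilon reduces there to the linear inequalities A*x >= epsilon, i.e. -A*x <= -epsilon, which fmincon accepts directly; the data are the same hypothetical placeholders as above):

```matlab
% Bruno's suggestion written as linear constraints (hypothetical data).
% On the branch where A*x > 0, |A*x| >= eps0 becomes A*x >= eps0,
% i.e. the standard form -A*x <= -eps0.
rng(0);
A    = rand(50, 10);
eps0 = 1e-8;                           % hypothetical epsilon
fobj = @(x) sum(A*x - log(A*x));

Aineq = -A;
bineq = -eps0 * ones(size(A, 1), 1);   % enforces A*x >= eps0 row-wise
lb    = zeros(10, 1);

x0   = ones(10, 1);
xopt = fmincon(fobj, x0, Aineq, bineq, [], [], lb, []);
```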
### Message 15 of 15: Matt J (12 Jul, 2012 23:59:23)

> You could choose the constraints
> x >= 0
> |A*x| >= epsilon.
>
> That would make the selection of epsilon easier, the feasible set is closed, and f(x) is defined everywhere on it.

I'm not sure it's so easy to choose this epsilon. The only way I can see to deduce bounds in the space of A*x is to start with an initial point x0 and compute f(x0). You know that xmin, the original minimum over {x >= 0}, satisfies f(x0) >= f(xmin). The best you can then say is that every term of the objective function is bounded at the optimum:

A(i,:)*xmin - log(A(i,:)*xmin) <= f(x0) for all i = 1...N = size(A,1).

So you would want to choose each epsilon(i) so that

epsilon(i) - log(epsilon(i)) = f(x0) for all i = 1...N.

But in practical scenarios f(x0) is a sum over many terms and can be quite a large value, and the solution will then be approximately epsilon(i) = exp(-f(x0)), which may be too small to matter within numerical precision.

> Wait... you just admitted that a problem could arise when x = 0, and that epsilon must be bigger. ;-)

I said YOU don't want epsilon to be too small...
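A quick numeric check of this last point (my illustration): for even a moderate f(x0), the root of epsilon - log(epsilon) = f(x0) on (0, 1) is essentially exp(-f(x0)), which falls below double precision almost immediately:

```matlab
% For small t the equation t - log(t) = c is dominated by -log(t),
% so its root on (0, 1) is close to exp(-c). Check numerically:
c = 50;                                   % a modest objective value
t = fzero(@(t) t - log(t) - c, [1e-30, 1]);
fprintf('root = %g, exp(-c) = %g, eps = %g\n', t, exp(-c), eps);
% Both come out near 2e-22, far below machine epsilon (~2.2e-16),
% so such an epsilon is numerically indistinguishable from zero.
```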
