
Scaling input data to make QP problem stable

Posted: April 5th, 2021, 8:04 pm
by Olga1597
Is there any theory about how input data should be scaled to make a QP problem numerically stable?
I have written a QP solver. To make it work stably for different input data, I scale the input values. I use several heuristics, and they almost always work. But I cannot find books or papers with proofs of stability for the different cases, and that makes me unhappy.

Re: Scaling input data to make QP problem stable

Posted: April 6th, 2021, 2:13 pm
by Alan
Pretty vague description. In any event, you might google for discussions about "making a covariance matrix positive definite" (when it is only positive semidefinite) and "preconditioning a matrix".
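For instance, the usual recipe in those discussions is to clip the eigenvalues from below and reassemble. A minimal sketch in Python/NumPy (one common approach, not the only one):

import numpy as np

def nudge_to_positive_definite(S, floor=1e-10):
    """Return a symmetric matrix close to S with all eigenvalues >= floor."""
    S = 0.5 * (S + S.T)                  # symmetrize first
    vals, vecs = np.linalg.eigh(S)       # eigendecomposition of the symmetric matrix
    vals = np.clip(vals, floor, None)    # lift zero/negative eigenvalues to the floor
    return (vecs * vals) @ vecs.T        # reassemble V diag(vals) V^T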

A more complete answer (by anybody) would require:
1. Explain your QP problem
2. Explain your algorithm to solve it.
3. Explain your 'stability' issues and exactly what you mean by 'scaling your data'.
4. Give an example.

Please use LaTeX.

Re: Scaling input data to make QP problem stable

Posted: April 6th, 2021, 7:26 pm
by Cuchulainn
0. What is "QP"?
quadratic programming?

Re: Scaling input data to make QP problem stable

Posted: April 7th, 2021, 11:49 pm
by Olga1597
Cuchulainn wrote:
0. What is "QP"?
quadratic programming?

Yes. This problem: https://en.wikipedia.org/wiki/Quadratic_programming

Re: Scaling input data to make QP problem stable

Posted: April 8th, 2021, 12:26 pm
by Cuchulainn
Olga1597 wrote:
Yes. This problem: https://en.wikipedia.org/wiki/Quadratic_programming
I see. And now questions 1 to 4.

Re: Scaling input data to make QP problem stable

Posted: April 10th, 2021, 6:37 pm
by Olga1597
1. I want to find the minimum of the function

$$0.5y^TWy - a^TWy$$
with constraints
$$C_1 B^{-1} y \geq b_1, \quad C_2 y \geq b_2$$
Input values: x (from which the matrix B is built), a (the initial approximation), and W, a diagonal matrix of positive weights.

2. It's an iterative dual-space algorithm with an active constraint set. I use a single tolerance constant for all cases (1e-7) to detect whether all constraints are satisfied to that precision.
3. I'm looking for a solution that is not far from the initial approximation (a). The dimension of the problem (n) can vary a lot (from 5 to 3000), the number of constraints is fairly large (maybe ~5n), and the input values can be very small (~1e-5) or very big (~1e5). Usually the values inside x and a had the same order of magnitude, and I normalized the initial problem so that the norms of the initial vectors were near 1. Without such normalization the algorithm frequently behaves unstably and can return a solution that is very far from the initial approximation and obviously wrong.
Now I have inputs where the x vector has values of order 1e5 and a has values of order 1e-5. I'm thinking about scaling x and a separately, but I'm looking for some theory about this stuff. Rough sketches of the setup and of the scaling follow below.
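For concreteness, here is a rough sketch of the problem from point 1 in Python, with made-up data and scipy's SLSQP standing in for my own solver:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5
W = np.diag(rng.uniform(0.5, 2.0, n))            # diagonal matrix of positive weights
a = rng.normal(size=n)                           # initial approximation
B = np.eye(n) + 0.1 * rng.normal(size=(n, n))    # stand-in for the matrix built from x
Binv = np.linalg.inv(B)
C1 = rng.normal(size=(2 * n, n))
b1 = -np.abs(rng.normal(size=2 * n))
C2 = rng.normal(size=(2 * n, n))
b2 = -np.abs(rng.normal(size=2 * n))

def objective(y):                                # 0.5 y^T W y - a^T W y
    return 0.5 * y @ W @ y - a @ W @ y

constraints = [
    {"type": "ineq", "fun": lambda y: C1 @ Binv @ y - b1},   # C1 B^{-1} y >= b1
    {"type": "ineq", "fun": lambda y: C2 @ y - b2},          # C2 y >= b2
]
res = minimize(objective, x0=a, constraints=constraints, method="SLSQP")
print(res.x)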
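And the scaling from point 3 amounts to the substitution [$]y = s y'[$] with [$]s = \|a\|[$]: the objective is only multiplied by [$]s^2[$] (which does not move the argmin), while [$]a[$], [$]b_1[$] and [$]b_2[$] are divided by [$]s[$]. Roughly (solve_qp is a stand-in for my solver):

import numpy as np

def solve_with_scaling(solve_qp, W, a, Binv, C1, b1, C2, b2):
    s = np.linalg.norm(a)               # scale so that a / s has unit norm
    if s == 0.0:
        s = 1.0
    y_scaled = solve_qp(W, a / s, Binv, C1, b1 / s, C2, b2 / s)
    return s * y_scaled                 # undo the substitution y = s * y'

# The fixed tolerance from point 2 is where the scale sneaks back in:
def constraints_satisfied(C, y, b, tol=1e-7):
    return np.all(C @ y - b >= -tol)    # absolute test; its meaning depends on the scale of b

A residual of 1e-7 means something completely different when b is of order 1e-5 than when it is of order 1e5, which is presumably part of why the unscaled problem misbehaves.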

Re: Scaling input data to make QP problem stable

Posted: April 11th, 2021, 8:47 am
by Cuchulainn
Let me paraphrase: you are working on a problem with very large and very small data, yes?

When these two worlds clash we can probably expect all kinds of problems, such as

. Catastrophic cancellation in subtraction
. Machine overflow/underflow. One brute-force solution is to use a multiprecision library.
. Ill-conditioned systems

Maybe this is an area to start looking.
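A couple of lines of NumPy show the flavour of the first and third items (illustrative only):

import numpy as np

M = np.diag([1e5, 1.0, 1e-5])   # mixed magnitudes in one matrix
print(np.linalg.cond(M))        # 1e10: small input errors can be amplified enormously

x = 1e16
print((x + 1.0) - x)            # 0.0: the 1.0 is absorbed, then the subtraction cancels it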

Re: Scaling input data to make QP problem stable

Posted: April 12th, 2021, 1:12 pm
by Alan
With general non-linear constrained optimization, it sometimes helps to move to [$]z = \log y[$] for some or all of the unknowns. You might try that by moving to a general non-linear optimizer. For example, Mathematica has FindMinimum, with lots of documentation online. In the end, it's somewhat of a black box but, in my experience, a pretty reliable one.
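A minimal sketch of that substitution, with made-up quadratic data and scipy's general optimizer standing in for FindMinimum:

import numpy as np
from scipy.optimize import minimize

n = 5
W = np.diag(np.ones(n))               # toy weights
a = np.full(n, 1e-3)                  # tiny-magnitude target; unconstrained optimum is y = a

def objective_z(z):
    y = np.exp(z)                     # z = log y, so y = exp(z) > 0 automatically
    return 0.5 * y @ W @ y - a @ W @ y

res = minimize(objective_z, np.zeros(n), method="Nelder-Mead")  # start at y = 1, far from 1e-3
print(np.exp(res.x))                  # should land near y = a = 1e-3

The log variables also tame the dynamic range: 1e-5 and 1e5 end up a comfortable ~23 units apart instead of ten orders of magnitude.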