April 10th, 2021, 6:37 pm

1. I want to find the minimum of the function

$$0.5y^TWy - a^TWy$$

with constraints

$$C_1B^{-1}y \ge b_1, \quad C_2y \ge b_2$$

Input values: *x* (used to build the matrix B), *a* (the initial approximation), and W, a diagonal matrix of positive weights.
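Since W is diagonal, the quadratic form reduces to elementwise products. A minimal sketch of the objective, assuming W is stored as a 1-D vector of its diagonal entries (a representation choice of mine, not stated in the question):

```python
import numpy as np

def objective(y, W, a):
    """Quadratic objective 0.5 y^T W y - a^T W y, with W given as the
    1-D vector of positive diagonal weights (assumed storage format)."""
    return 0.5 * np.dot(y, W * y) - np.dot(a, W * y)
```

Completing the square, the objective equals $0.5(y-a)^TW(y-a) - 0.5a^TWa$, so the unconstrained minimizer is $y = a$; this is consistent with wanting a solution "not far from the initial approximation".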

2. It's an iterative dual-space algorithm with an active-constraint set. I use a single precision constant (1e-7 for all cases) to detect whether all constraints are satisfied to that tolerance.
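The tolerance check might be sketched like this (not the questioner's actual code; the names C1, Binv, C2, b1, b2 are taken from the formulation above, and the tolerance is applied as an absolute bound):

```python
import numpy as np

TOL = 1e-7  # single absolute tolerance constant used for all cases

def constraints_satisfied(C1, Binv, C2, y, b1, b2, tol=TOL):
    """Check C1 B^{-1} y >= b1 and C2 y >= b2 up to an absolute tolerance."""
    r1 = C1 @ (Binv @ y) - b1   # residuals of the first constraint block
    r2 = C2 @ y - b2            # residuals of the second constraint block
    return bool(np.all(r1 >= -tol) and np.all(r2 >= -tol))
```

Note that an *absolute* tolerance like this is exactly what makes the check scale-dependent: with data of order 1e-5 the residuals may sit entirely below 1e-7, and with data of order 1e5 the tolerance is effectively zero. That is one reason normalization matters here.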

3. I'm looking for a solution that is not far from the initial approximation (*a*). The dimension of the problem (n) varies widely (from 5 to 3000), the number of constraints is fairly large (maybe ~5*n), and the input values can be very small (~1e-5) or very large (~1e5). Usually the values inside *x* and *a* have the same order of magnitude, and I normalized the initial problem so that the norms of the initial vectors are close to 1. Without such normalization the algorithm frequently behaves unstably and can return a solution that is very far from the initial approximation and obviously wrong.
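A hypothetical sketch of that normalization, assuming a single shared factor is used for both vectors so that their relative magnitudes (and the problem geometry) are preserved; the solution of the scaled problem then maps back as y = s * y_scaled, provided b1 and b2 are scaled by the same factor:

```python
import numpy as np

def normalize_problem(x, a):
    """Rescale x and a by one shared factor so their norms are near 1.

    Returns the scaled vectors and the factor s needed to map the
    solution of the scaled problem back to the original units.
    """
    s = max(np.linalg.norm(x), np.linalg.norm(a))
    if s == 0.0:
        return x, a, 1.0
    return x / s, a / s, s
```

With a shared factor, at least one of the two vectors ends up with norm exactly 1 and the other with norm at most 1; the question's difficulty arises precisely when the two norms differ by many orders of magnitude, so one factor cannot bring both near 1.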

Now I have inputs where the *x* vector has values of order 1e5 and *a* has values of order 1e-5. I'm thinking about scaling *x* and *a* separately, but I'm looking for some theory about this.
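One relevant observation, sketched below under the assumption that the variable y lives on the scale of *a* (the unconstrained minimizer of the objective is y = a): substituting y = s·z with s = ||a|| turns the problem into one of the same form with *a* replaced by the unit-norm vector a/s, and the solution maps back as y = s·z. Scaling *x* (and hence B) is a separate question, since B enters only through the constraint $C_1B^{-1}y \ge b_1$.

```python
import numpy as np

# Demonstrate the substitution y = s * z with s = ||a|| on the
# unconstrained problem, where the minimizer is known to be y = a.
rng = np.random.default_rng(0)
n = 5
W = np.diag(rng.uniform(1.0, 2.0, n))   # positive diagonal weights
a = 1e-5 * rng.standard_normal(n)       # a on a very small scale, as in the question

s = np.linalg.norm(a)
a_scaled = a / s                        # now ||a_scaled|| = 1

# Solve the scaled (unconstrained) problem: its minimizer is z = a_scaled.
z = a_scaled
y = s * z                               # map back to the original units
```

The general keyword for this topic is *matrix/problem equilibration* (diagonal scaling to reduce condition numbers); another is choosing a *relative* rather than absolute feasibility tolerance, so the 1e-7 constant adapts to the data scale.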