Examples

Here is a simple optimization problem formulated with Convex.jl:

julia> using SDPAFamily, LinearAlgebra
julia> using Convex
julia> y = Semidefinite(3)
Variable
size: (3, 3)
sign: real
vexity: affine
id: 919…187

julia> p = maximize(eigmin(y), tr(y) <= 5; numeric_type = BigFloat)
maximize
└─ eigmin (concave; real)
   └─ 3×3 real variable (id: 919…187)
subject to
└─ <= constraint (affine)
   ├─ sum (affine; real)
   │  └─ diag (affine; real)
   │     └─ …
   └─ 5

status: `solve!` not called yet
julia> solve!(p, () -> SDPAFamily.Optimizer(presolve=true))
julia> p.optval
1.666666666666666666666666666666757179710741031466357020049203471966738613424135

Optimal guessing probability for a pair of quantum states

In physics, a state represents a possible configuration of a physical system. In quantum mechanical systems with finitely many degrees of freedom, states are represented by density matrices, which are $d\times d$ matrices with complex entries that are positive semi-definite and have trace equal to one. States can be measured; mathematically, a measurement with $n$ possible outcomes is represented by a set of measurement operators $\{E_j\}_{j=1}^n$, where each $E_j$ is a $d\times d$ matrix. For example, imagine an experiment in which a charged particle is released in a magnetic field such that it will hit either a detector on the left or a detector on the right. This corresponds to a measurement of the particle with two outcomes, and hence two measurement operators $\{E_1, E_2\}$, which correspond to the left and right detectors, respectively.

In order for $\{E_j\}_{j=1}^n$ to be a valid set of measurement operators, each $E_j$ must be positive semi-definite, and the family $\{E_j\}_{j=1}^n$ must have the property that $\sum_{j=1}^n E_j = I_d$, the $d\times d$ identity matrix. If the state of the system is represented by $\rho$, and a measurement with measurement operators $\{E_j\}_{j=1}^n$ is performed, then outcome $j$ is obtained with probability $\operatorname{tr}[\rho E_j]$.
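
To make these conditions concrete, here is a minimal sketch (the measurement and state below are illustrative choices, not part of the original example) that checks validity and computes outcome probabilities for a qubit measurement:

using LinearAlgebra

# An illustrative two-outcome qubit measurement: projectors onto the basis states.
E₁ = ComplexF64[1 0; 0 0]
E₂ = ComplexF64[0 0; 0 1]

# Validity: each Eⱼ is positive semi-definite, and the family sums to the identity.
@assert all(>=(0), eigvals(Hermitian(E₁)))
@assert all(>=(0), eigvals(Hermitian(E₂)))
@assert E₁ + E₂ ≈ Matrix{ComplexF64}(I, 2, 2)

# Outcome probabilities pⱼ = tr(ρ Eⱼ) for an example state ρ.
ρ = ComplexF64[0.5 -0.5im; 0.5im 0.5]
real(tr(ρ * E₁)), real(tr(ρ * E₂))   # (0.5, 0.5); the probabilities sum to 1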

Consider the case where $d=2$ (i.e. the states are qubits), and the state of the system is either represented by $\rho_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ or by $\rho_2 = \frac{1}{2}\begin{pmatrix} 1 & -i \\ i & 1 \end{pmatrix}$, but we don't know which; let's say there is a referee who flipped a fair coin, and then prepared the system in either $\rho_1$ or $\rho_2$. We will perform a measurement of the system, and then use the outcome to make a guess about the state of the system. What is the measurement that gives the highest probability of correctly determining which state the system is in, and what's the optimal probability?

We will perform a measurement with measurement operators $E_1$ and $E_2$. If we obtain outcome $1$, we will guess the system is in state $\rho_1$, and if we obtain outcome $2$, we will guess the system is in state $\rho_2$. Then the probability of guessing correctly is

\[p_\text{guess}(E_1, E_2) = \frac{1}{2}\operatorname{tr}(\rho_1 E_1) + \frac{1}{2}\operatorname{tr}(\rho_2 E_2)\]

since there is a 50% chance of the system being in state $\rho_1$, in which case we guess correctly when we get outcome 1 (which occurs with probability $\operatorname{tr}(\rho_1 E_1)$), and a 50% chance of the system being in state $\rho_2$, in which case we guess correctly when we get outcome $2$.
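
For intuition, a quick sketch (illustrative, not part of the original example): plugging the trivial measurement $E_1 = E_2 = \frac{1}{2}I$ into this expression recovers the coin-flip probability $\frac{1}{2}$, since that measurement carries no information about the state:

using LinearAlgebra

ρ₁ = ComplexF64[1 0; 0 0]
ρ₂ = ComplexF64[0.5 -0.5im; 0.5im 0.5]

# Guessing probability for a candidate pair of measurement operators.
p_guess(E₁, E₂) = real(tr(ρ₁ * E₁) + tr(ρ₂ * E₂)) / 2

p_guess(Matrix(0.5I, 2, 2), Matrix(0.5I, 2, 2))   # 0.5: no better than a coin flip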

Our goal now is to choose the optimal measurement operators to have the best chance of guessing correctly. That is, we aim to maximize the above expression over all choices of $E_1$ and $E_2$ such that $\{E_1, E_2\}$ is a valid set of measurement operators. This is a semidefinite program, which can be solved e.g. with SDPAFamily.jl. In this simple example with only two states to discriminate between, the problem can be solved analytically, and the solution is related to the trace distance between the two states. This problem specifically is Example 3.2.1 of the edX Quantum Cryptography notes by Thomas Vidick. It can be seen that the optimal guessing probability is

\[p_\text{guess} = \frac{1}{2} + \frac{1}{2 \sqrt{2}}\]
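
This closed form follows from the trace-distance characterization mentioned above: for two equally likely states, the optimal guessing probability is $\frac{1}{2} + \frac{1}{4}\lVert\rho_1 - \rho_2\rVert_1$ (the Holevo–Helstrom bound). A minimal numerical check in machine precision (eigvals needs LAPACK-supported element types):

using LinearAlgebra

ρ₁ = ComplexF64[1 0; 0 0]
ρ₂ = ComplexF64[0.5 -0.5im; 0.5im 0.5]

# Trace norm ‖ρ₁ - ρ₂‖₁ = sum of the absolute eigenvalues of the difference.
trace_norm = sum(abs, eigvals(Hermitian(ρ₁ - ρ₂)))

1/2 + trace_norm/4   # ≈ 0.8535533905932737, i.e. 1/2 + 1/(2√2)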

Let us see to what accuracy we can recover that result using the SDPA solvers.

julia> using Convex, SDPAFamily, Printf

julia> ρ₁ = Complex{BigFloat}[1 0; 0 0]
2×2 Matrix{Complex{BigFloat}}:
 1.0+0.0im  0.0+0.0im
 0.0+0.0im  0.0+0.0im

julia> ρ₂ = (1//2)*Complex{BigFloat}[1 -im; im 1]
2×2 Matrix{Complex{BigFloat}}:
 0.50+0.0im   0.0-0.50im
  0.0+0.50im  0.50+0.0im

julia> E₁ = ComplexVariable(2, 2)
Variable
size: (2, 2)
sign: complex
vexity: affine
id: 934…955

julia> E₂ = ComplexVariable(2, 2)
Variable
size: (2, 2)
sign: complex
vexity: affine
id: 345…901

julia> problem = maximize( real((1//2)*tr(ρ₁*E₁) + (1//2)*tr(ρ₂*E₂)),
                           [E₁ ⪰ 0, E₂ ⪰ 0, E₁ + E₂ == Diagonal(ones(2))];
                           numeric_type = BigFloat );

julia> p_guess = 1//2 + 1/(2*sqrt(big(2)))
0.8535533905932737622004221810524245196424179688442370182941699344976831196155298

julia> for variant in (:sdpa, :sdpa_dd, :sdpa_qd, :sdpa_gmp)
           solve!(problem, () -> SDPAFamily.Optimizer(silent = true, presolve = true, variant = variant))
           error = abs(problem.optval - p_guess)
           print("$variant solved the problem with an absolute error of ")
           @printf("%.2e.\n", error)
       end
sdpa solved the problem with an absolute error of 4.44e-08.
sdpa_dd solved the problem with an absolute error of 3.02e-16.
sdpa_qd solved the problem with an absolute error of 3.13e-17.
sdpa_gmp solved the problem with an absolute error of 2.19e-31.

Here, we have solved the problem four times, once with each variant of the SDPA family of optimizers supported by this package. We can see that SDPA-GMP has solved the problem to an accuracy of $\sim 10^{-31}$, far exceeding machine precision.

As usual with semidefinite programs, we can recover a set of optimal measurements:

julia> evaluate(E₁)
2×2 Matrix{Complex{BigFloat}}:
 0.853553+0.0im            0.0+0.353553im
      0.0-0.353553im  0.146447+0.0im

julia> evaluate(E₂)
2×2 Matrix{Complex{BigFloat}}:
 0.146447+0.0im            0.0-0.353553im
      0.0+0.353553im  0.853553+0.0im
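
A quick follow-up check (an illustrative post-solve sketch; the conversion to ComplexF64 is only so that the LAPACK eigensolver applies) confirms these form a valid measurement achieving the optimal guessing probability:

using LinearAlgebra

E₁v = Matrix{ComplexF64}(evaluate(E₁))
E₂v = Matrix{ComplexF64}(evaluate(E₂))

E₁v + E₂v ≈ Matrix{ComplexF64}(I, 2, 2)      # completeness: E₁ + E₂ = I
minimum(eigvals(Hermitian(E₁v))) ≥ -1e-8     # E₁ ⪰ 0, up to solver tolerance
minimum(eigvals(Hermitian(E₂v))) ≥ -1e-8     # E₂ ⪰ 0, up to solver tolerance
real(tr(ρ₁*E₁v) + tr(ρ₂*E₂v))/2              # ≈ 0.8535533905932737…, the optimal p_guess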

Note that this is an example where the presolve routine is essential to getting good results:

julia> for variant in (:sdpa, :sdpa_dd, :sdpa_qd, :sdpa_gmp)
           solve!(problem, () -> SDPAFamily.Optimizer(silent = true, presolve = false, variant = variant))
           error = abs(problem.optval - p_guess)
           print("$variant solved the problem with an absolute error of ")
           @printf("%.2e.\n", error)
       end

Without the presolve routine, the true solution is only recovered up to errors of size $\sim 10^{-1}$ for the :sdpa variant. All other variants fail to produce a result due to the redundant constraints and return the default value 0.

This problem is revisited at very high precision in Changing parameters & solving at very high precision.

Polynomial optimization

The following example is adapted from an example in the SumOfSquares.jl documentation to use SDPAFamily.jl. Even though the problem is only specified in Float64 precision, since the entries are integers they can be sent to SDPA-GMP without loss of precision.

julia> using SumOfSquares
julia> using DynamicPolynomials
julia> using SDPAFamily
julia> @polyvar x1 x2 # Create symbolic variables (not JuMP decision variables)
(x1, x2)

julia> # Create a Sum of Squares JuMP model with the SDPAFamily solver
       model = SOSModel(() -> SDPAFamily.Optimizer{Float64}( # JuMP only supports Float64
                          variant = :sdpa_gmp,      # use the arbitrary precision variant
                          params = ( epsilonStar = 1e-30, # constraint tolerance
                                     epsilonDash = 1e-30, # normalized duality gap tolerance
                                     precision = 200      # arithmetic precision used in sdpa_gmp
                          )))
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: EMPTY_OPTIMIZER
Solver name: SDPAFamily

julia> @variable(model, γ) # Create a JuMP decision variable for the lower bound
γ

julia> # f(x) is the Goldstein-Price function
       f1 = x1 + x2 + 1
x1 + x2 + 1

julia> f2 = 19 - 14 * x1 + 3 * x1^2 - 14 * x2 + 6 * x1 * x2 + 3 * x2^2
3x1² + 6x1x2 + 3x2² - 14x1 - 14x2 + 19

julia> f3 = 2 * x1 - 3 * x2
2x1 - 3x2

julia> f4 = 18 - 32 * x1 + 12 * x1^2 + 48 * x2 - 36 * x1 * x2 + 27 * x2^2
12x1² - 36x1x2 + 27x2² - 32x1 + 48x2 + 18

julia> f = (1 + f1^2 * f2) * (30 + f3^2 * f4)
144x1⁸ - 288x1⁷x2 - 648x1⁶x2² + 1224x1⁵x2³ + 1305x1⁴x2⁴ - 1836x1³x2⁵ - 1458x1²x2⁶ + 972x1x2⁷ + 729x2⁸ - 768x1⁷ + 1344x1⁶x2 + 2592x1⁵x2² - 4080x1⁴x2³ - 3480x1³x2⁴ + 3672x1²x2⁵ + 1944x1x2⁶ - 648x2⁷ + 952x1⁶ - 168x1⁵x2 - 5370x1⁴x2² + 1240x1³x2³ + 8730x1²x2⁴ - 1188x1x2⁵ - 4428x2⁶ + 1344x1⁵ - 7680x1⁴x2 + 9840x1³x2² + 5040x1²x2³ - 11880x1x2⁴ + 1944x2⁵ - 2454x1⁴ + 5784x1³x2 + 7776x1²x2² - 23616x1x2³ + 14346x2⁴ - 1072x1³ + 7344x1²x2 - 19296x1x2² + 12288x2³ + 1260x1² - 4680x1x2 + 3060x2² + 720x1 + 720x2 + 600

julia> @constraint(model, f >= γ) # Constrains f(x) - γ to be sum of squares
(144)x1⁸ + (-288)x1⁷x2 + (-648)x1⁶x2² + (1224)x1⁵x2³ + (1305)x1⁴x2⁴ + (-1836)x1³x2⁵ + (-1458)x1²x2⁶ + (972)x1x2⁷ + (729)x2⁸ + (-768)x1⁷ + (1344)x1⁶x2 + (2592)x1⁵x2² + (-4080)x1⁴x2³ + (-3480)x1³x2⁴ + (3672)x1²x2⁵ + (1944)x1x2⁶ + (-648)x2⁷ + (952)x1⁶ + (-168)x1⁵x2 + (-5370)x1⁴x2² + (1240)x1³x2³ + (8730)x1²x2⁴ + (-1188)x1x2⁵ + (-4428)x2⁶ + (1344)x1⁵ + (-7680)x1⁴x2 + (9840)x1³x2² + (5040)x1²x2³ + (-11880)x1x2⁴ + (1944)x2⁵ + (-2454)x1⁴ + (5784)x1³x2 + (7776)x1²x2² + (-23616)x1x2³ + (14346)x2⁴ + (-1072)x1³ + (7344)x1²x2 + (-19296)x1x2² + (12288)x2³ + (1260)x1² + (-4680)x1x2 + (3060)x2² + (720)x1 + (720)x2 + (-γ + 600) is SOS

julia> @objective(model, Max, γ)
γ

julia> optimize!(model)

julia> println(objective_value(model))
3.0

Let's check the input file that is used to specify the problem for the SDPA-GMP binary. The following command uses some implementation details of how JuMP stores the underlying optimizer and so may not work in later JuMP versions. However, the following path is always printed when setting verbose = SDPAFamily.VERBOSE.

julia> path = joinpath(MOI.get(model, SDPAFamily.TemporaryDirectory()), "input.dat-s")
"/home/runner/.julia/scratchspaces/bfe18334-aefd-11e9-1109-4bf2b15a5b91/solves/jl_Rmkqvu/input.dat-s"

julia> readlines(path)[1:10] .|> println;
121
3
-45 -45 15
-1.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0 -0.0
0 1 1 1 -144.0
0 1 2 2 288.0
0 1 3 3 648.0
0 1 4 4 -1224.0
0 1 5 5 -1305.0
0 1 6 6 1836.0

The full file is longer, but what gets passed to the optimizer for this problem are floating point numbers that can be faithfully read by SDPA-GMP at the 200 bits of precision it uses internally. Thus, in this case, the fact that JuMP restricts the Julia model to storing the numbers at machine precision does not affect the precision of the data that SDPA-GMP receives. It does, however, affect the precision of the data that JuMP can recover from the output file. Here, JuMP receives the correct answer to full machine precision ($\sim 10^{-16}$), but the answer printed by SDPA-GMP (which can be seen in the file output.dat-s) is in fact correct to $\sim 10^{-30}$.
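
One can also peek at SDPA-GMP's own result file mentioned above (a hypothetical inspection sketch: the file sits in the same scratch directory as the input file, but its exact layout depends on the SDPA variant and version):

# The solver writes its high-precision result to "output.dat-s" in the same
# scratch directory as the input file inspected above.
out = joinpath(MOI.get(model, SDPAFamily.TemporaryDirectory()), "output.dat-s")
foreach(println, Iterators.take(eachline(out), 5))   # show the first few lines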

For problems like this one that are formulated via JuMP, the precision advantage of SDPA-GMP over other solvers is that SDPA-GMP should be able to solve the problem to the full $\sim 10^{-16}$ precision representable by 64-bit floating point numbers, while solvers working in machine precision can typically only recover the result to $\sim 10^{-8}$.
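
As a rough rule of thumb (an illustrative aside, not a claim from the original text), interior-point methods running in a given arithmetic tend to stall once the duality gap approaches the square root of that arithmetic's machine epsilon, which is where the two scales above come from:

# Machine-epsilon scales behind the two precision figures quoted above.
sqrt(eps(Float64))   # ≈ 1.49e-8: typical duality-gap floor for a Float64 interior-point solve
eps(Float64)         # ≈ 2.22e-16: resolution of the Float64 value JuMP reports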

SDP relaxation of a polynomial optimization problem

We consider a polynomial optimization problem (POP) in which we wish to find $p^* = \inf \lbrace x \mid x \geq 0,\ x^2 \geq 1 \rbrace$. Writing $y_k$ for the relaxation variable standing in for the monomial $x^k$ (with $y_0 = 1$), its SDP relaxation of order $r$, as constructed in the code below, is

\[\begin{aligned} \min\quad & y_1 \\ \text{s.t.}\quad & (y_{i+j})_{0 \leq i,j \leq r} \succeq 0, \\ & (y_{i+j+1})_{0 \leq i,j \leq r-1} \succeq 0, \\ & (y_{i+j+2} - y_{i+j})_{0 \leq i,j \leq r-1} \succeq 0, \end{aligned}\]

where the three constraints involve the moment matrix of $y$ and the localizing matrices for $x \geq 0$ and $x^2 - 1 \geq 0$, respectively.

This is an example where most solvers fail due to numerical errors when using Float64. It can be shown that for all $r \geq 1$, the optimal value of the SDP relaxation is $y_1^* = 0$. However, most solvers will report $y_1^* = 1$, which is in fact the optimal value $p^*$ of the original problem. The details are discussed in [1]. Such problems can be overcome by using appropriate parameters for SDPA-GMP. We now demonstrate this using Convex.jl.

julia> using SDPAFamily, SCS, Convex
julia> function relaxed_pop(r::Int, T)
           v = Variable(2*r)
           M1 = v[1:1+r]'
           for i in 2:r
               M1 = vcat(M1, v[i:i+r]')
           end
           t = [1 v[1:r]']
           M1 = vcat(t, M1)
           c1 = M1 in :SDP
           M2 = M1[2:end, 1:end-1]
           c2 = M2 in :SDP
           M3 = M1[2:end, 2:end] - M1[1:end-1, 1:end-1]
           c3 = M3 in :SDP
           return Problem{T}(:minimize, v[1], [c1, c2, c3])
       end
relaxed_pop (generic function with 1 method)
julia> p1 = relaxed_pop(5, Float64);
julia> solve!(p1, () -> SCS.Optimizer(max_iters = 10000, verbose = 0));
ERROR: MethodError: no method matching SCS.Optimizer(; max_iters=10000, verbose=0)
Closest candidates are:
  SCS.Optimizer() at /home/runner/.julia/packages/SCS/Y2mH0/src/MOI_wrapper/MOI_wrapper.jl:133 got unsupported keyword arguments "max_iters", "verbose"
julia> p2 = relaxed_pop(5, BigFloat);
julia> solve!(p2, () -> SDPAFamily.Optimizer(
           presolve = true,
           verbose = SDPAFamily.SILENT,
           params = ( epsilonStar = 1e-90,
                      epsilonDash = 1e-90,
                      precision = 5000,
                      betaStar = 0.5,
                      betaBar = 0.5,
                      gammaStar = 0.5,
                      lambdaStar = 1e5,
                      omegaStar = 2.0,
                      maxIteration = 10000 )));
┌ Warning: Problem status INFEASIBLE_OR_UNBOUNDED; solution may be inaccurate.
└ @ Convex ~/.julia/packages/Convex/tSTAW/src/solution.jl:342
julia> p1.status
OPTIMIZE_NOT_CALLED::TerminationStatusCode = 0
julia> p1.optval
julia> p2.status
INFEASIBLE_OR_UNBOUNDED::TerminationStatusCode = 6

julia> p2.optval
0.04949918918831336472764681101625896134523760638926332698732731226858940703623707
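
Note that the SCS call above errors because recent versions of SCS.jl no longer accept solver options as keyword arguments to SCS.Optimizer; options are instead attached via MathOptInterface attributes. A hedged sketch of the equivalent call (the option names follow SCS's conventions and may vary between SCS.jl versions):

using MathOptInterface
const MOI = MathOptInterface

# Pass SCS options as raw optimizer attributes rather than keyword arguments.
scs_opt = MOI.OptimizerWithAttributes(SCS.Optimizer,
                                      "max_iters" => 10000,
                                      "verbose" => 0)
solve!(p1, () -> MOI.instantiate(scs_opt))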

[1] H. Waki, M. Nakata, and M. Muramatsu, ‘Strange behaviors of interior-point methods for solving semidefinite programming problems in polynomial optimization’, Comput Optim Appl, vol. 53, no. 3, pp. 823–844, Dec. 2012.