1

For the complex polynomial $$P_n(z) := 1+z+\frac{z^2}{2} + \sum_{j=3}^n \gamma_j z^j,\quad z \in \mathbb C,$$ I want to solve the following minimax optimization problem: $$\min_{\gamma_j} \max_{\lambda \in \sigma(A) \subset \mathbb C} \big \vert P_n(\lambda \Delta t) \big \vert -1, \quad \Delta t \in \mathbb R_+ \text{ given.}$$ This problem arises from optimizing the stability region of explicit Runge-Kutta methods, with $P_n(z)$ being the corresponding stability polynomial. The minimization over the $\gamma_j$ is convex, since the objective is linear in these coefficients. What gives me trouble is that the coefficients have to be determined for the worst case, hence the maximization over the eigenvalues of the linear right-hand-side operator $A$.

I already took a look at different questions addressing minimax, e.g. 1, 2, 3, but was not able to find anything transferable to my problem.

I tried nesting optimization routine calls like

ConvexOptimization[
  NMaximize[{Abs[Pn[omega*deltaT, gamma]] - 1, omega <= 0}, omega],
  {}, gamma];

but ran into trouble, since inside NMaximize the symbolic coefficients gamma are not yet bound to numeric values:

NMaximize::nnum: The function value 1-Abs[0.607297 -0.522152 Subscript[gamma, 1]] is not a number at {omega} = {-1.82905}.

If I switch from ConvexOptimization to NMinimize and specify some custom initial values,

ip = Table[{i}, {i, 0.1, 1, 0.1}];
NMinimize[
  NMaximize[{Abs[Pn[omega*deltaT, gamma]] - 1, omega <= 0}, omega],
  gamma, Method -> {"Automatic", "InitialPoints" -> ip}]

the problem persists:

NMaximize::nnum: The function value 1-Abs[0.607297 -0.522152 Subscript[gamma, 1]] is not a number at {omega} = {-1.82905}.

Setting the optimization variable gamma[[1]] = 42 prior to calling the optimizer routines gives, for both NMinimize as well as ConvexOptimization, the (not surprising) error

NMinimize::ivar: 42 is not a valid variable.

ConvexOptimization::nvar: 42 is not a valid variable.

J. M.'s missing motivation
Dan Doe

2 Answers

4

The problem you ultimately wish to solve, namely the one in which you would not need to enumerate the eigenvalues, may not be that easy to solve.

However, the one you stated is indeed a convex problem, more specifically a second-order cone program, and you can solve it in Mathematica.
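Since the spectrum is a finite set, the minimax can be written in epigraph form, which exposes the SOCP structure:

```latex
\min_{\gamma_3,\dots,\gamma_n,\;\rho}\ \rho
\quad \text{subject to} \quad
\bigl|P_n(\lambda_k\,\Delta t)\bigr| \le \rho
\quad \text{for each } \lambda_k \in \sigma(A).
```

Each constraint is a second-order cone constraint, because $\bigl|P_n(\lambda_k\,\Delta t)\bigr| = \bigl\Vert\big(\operatorname{Re} P_n(\lambda_k\,\Delta t),\,\operatorname{Im} P_n(\lambda_k\,\Delta t)\big)\bigr\Vert_2$ and both components are affine in the $\gamma_j$.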

Take for example

n = 8;
SeedRandom[3];
A = RandomReal[{-1, 1}, {n, n}];

and its eigenvalues

eigs = List /@ Thread[\[Lambda] -> Eigenvalues[A]] // Chop

Now define the polynomial $P_e$

Pe[z_, n_] := 1 + z + z^2/2 + Sum[Subscript[\[Gamma], i] z^i, {i, 3, n}]

and evaluate $P_e(\lambda \, dT)$ for each eigenvalue

dT = 1/10;
pes = Expand[Pe[\[Lambda] dT, n] /. eigs];

Next, assuming that the $\gamma$'s are real, then

vars = Append[Variables[Pe[1, n]], \[Rho]];
npes = Map[Norm, Transpose[
    Simplify[{(pes + Conjugate[pes])/2, (pes - Conjugate[pes])/(2 I)},
             Element[vars, Reals]] // Chop]];

contains the absolute value of each complex entry in pes, expressed as the Euclidean norm of its real and imaginary parts (each linear in the $\gamma$'s) so that the second-order cone structure is explicit, and

SecondOrderConeOptimization[\[Rho], Thread[npes < \[Rho]], vars, MaxIterations -> Infinity]

produces

{Subscript[\[Gamma], 3] -> -333.118, Subscript[\[Gamma], 4] -> -3165.22, Subscript[\[Gamma], 5] -> 12340.3, Subscript[\[Gamma], 6] -> -258577., Subscript[\[Gamma], 7] -> -28418.7, Subscript[\[Gamma], 8] -> -1512.65, \[Rho] -> 0.908534}

which is a solution to the problem you stated, because the value of $\rho$ is an upper bound on $\max_{\lambda \in \sigma(A)} |P_e(\lambda \, dT)|$. That is, it solves the equivalent convex problem

$ \min_{\gamma,\,\rho} \{ \rho : |P_e(\lambda \, dT)| \leq \rho \quad \forall \lambda \in \sigma(A) \} $

The constant offset of $1$ does not affect the optimal solution.
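As a quick sanity check (a sketch, assuming the symbols npes and vars defined above are still in scope; sol is a hypothetical name for the returned rules), one can store the result and compare the largest modulus against the optimal $\rho$:

```mathematica
(* store the optimal rules, then verify that max |Pe(λ dT)| matches ρ *)
sol = SecondOrderConeOptimization[\[Rho], Thread[npes < \[Rho]], vars,
   MaxIterations -> Infinity];
Max[npes /. sol] - (\[Rho] /. sol)  (* should be numerically close to zero *)
```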

Mauricio de Oliveira
  • Great work, thanks a lot! One thing that is currently not working for me is the special case when one of the eigenvalues is purely real. In that case, the optimizer reports that the curvature cannot be determined. Do you immediately see why that is / what a possible remedy could be? – Dan Doe May 21 '22 at 11:36
  • The nature of the eigenvalues should not affect anything. Can you post the matrix you’re having trouble with? – Mauricio de Oliveira May 21 '22 at 14:00
  • So for instance for eigenvalues {{\[Lambda]->1.44297},{\[Lambda]->-0.702045}} (but it is really for any set of eigenvalues where one is purely real) I get the error that the curvature of the constraining polynomials cannot be determined. – Dan Doe May 21 '22 at 14:24
  • Perhaps because in your example above $n = 2$, in which case the solution to the problem is trivial? I do not have any problem if $n \geq 3$. – Mauricio de Oliveira May 21 '22 at 18:15
  • I found the issue - I simply glossed over the line vars = Append[Variables[Pe[1, n]], \[Rho]]; which went for purely real eigenvalues unnoticed. – Dan Doe May 22 '22 at 08:07
  • Glad to hear you found the problem. – Mauricio de Oliveira May 22 '22 at 14:55
0

Based on the existing answer, I found a different way to formulate the optimization problem.

Define the polynomial $P_n(z)$ as before, only now with the coefficients passed explicitly as an argument

Pe[z_, n_, gamma_] := 1 + z + z^2/2 + Sum[gamma[[i]]*z^i, {i, 3, n}]

Then, for a list of eigenvalue rules eigs (see the previous answer), list the polynomials:

Pnoms[deltaT_, n_, gamma_] := Expand[Pe[\[Lambda] *deltaT, n, gamma] /. eigs];

compute their absolute values

AbsPnoms[deltaT_, n_, gamma_] := Module[{pnoms = Pnoms[deltaT, n, gamma]},
  Map[Norm, Transpose[
    Simplify[{(pnoms + Conjugate[pnoms])/2, (pnoms - Conjugate[pnoms])/(2 I)},
      Element[gamma, Reals]] // Chop]]]

Then an equivalent way to solve the optimization problem is

optVars = Array[Indexed[gam, #] &, n - 2];
deltaT = 1;
optRule = ConvexOptimization[Max[AbsPnoms[deltaT, n, optVars]], {}, optVars];
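The coefficients and the achieved worst-case modulus can then be read off from the returned rules (a sketch, assuming optRule holds the rules from ConvexOptimization above):

```mathematica
(* substitute the optimal coefficients back in *)
gamOpt = optVars /. optRule;
Max[AbsPnoms[deltaT, n, gamOpt]]  (* worst-case |Pn(λ deltaT)| over the spectrum *)
```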
Dan Doe