I have a rather complex optimization problem of the following general form: $$f(a_1, \dots, a_k) = \min_{(b_1, \dots, b_m) \in \Omega_b} \left\{\max_{(c_1, \dots, c_n) \in \Omega_c} g(a_1, \dots, a_k, b_1, \dots, b_m, c_1, \dots, c_n)\right\}.$$ Or, in vector form, $$f(\mathbf{a}) = \min_{\mathbf{b} \in \Omega_b} \Big\{\max_{\mathbf{c} \in \Omega_c} \, g(\mathbf{a}, \mathbf{b}, \mathbf{c})\Big\}.$$
In other words, I first want to maximize a given function over one set of variables, then minimize the result over a different set of variables, and finally evaluate the resulting solution "function" of the remaining parameters $\mathbf{a}$ at points of interest.
The function $g$ is so messy that I do not expect to find an explicit form for $f(\mathbf{a})$, but I would still like to be able to (numerically) evaluate $f$ at some points, to get an idea of how $f$ behaves.
In terms of Mathematica, the inner maximization can be done numerically with FindMaximum, but this does not work if the other variables are not explicitly instantiated: FindMinimum hands a still-symbolic $\mathbf{b}$ to the inner call, so a nested expression of the form FindMinimum[FindMaximum[g[a, b, c], c], b] unfortunately does not work. For given $\mathbf{a}$ I could manually build a list of values FindMaximum[g[a, b, c], c] over a whole grid of values $\mathbf{b} \in \Omega_b$ and take the minimum of these, but I was wondering: doesn't Mathematica have some cleaner, more direct way to numerically compute minimax solutions?
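To make the brute-force idea concrete, here is a one-dimensional toy sketch of what I have in mind (the function g below is a made-up stand-in for my actual, much messier g, and the grid range and step are arbitrary):

```mathematica
(* toy stand-in for my actual g; the real one is much messier *)
g[a_, b_, c_] := (a - b)^2 - (b - c)^2 + Sin[a b c];

(* brute-force version of what I described: for fixed a, scan a grid
   of b values, run the inner maximization over c at each grid point,
   then take the minimum over the resulting maxima *)
fApprox[a_?NumericQ] :=
  Min[
    Table[
      First@FindMaximum[g[a, b, c], {c, 0}],
      {b, -2, 2, 0.1}
    ]
  ]

fApprox[1.0]
```

Even in this toy setting the grid approach feels wasteful, and with $\mathbf{b} \in \mathbb{R}^6$ a grid scan is clearly infeasible, which is why I am hoping for something more direct.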
(In my application, $(k, m, n) = (3, 6, 6)$ and $\Omega_b, \Omega_c$ are continuous subsets of $\mathbb{R}^6$, so this is not a discrete optimization problem where all possible solutions can just be enumerated. Also, I expect $g$ to be reasonably smooth, so I'm looking for a saddle-point solution in $\mathbb{R}^{12}$.)
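To illustrate the saddle-point remark: for a smooth g, an interior saddle satisfies the first-order conditions $\nabla_{\mathbf{b}} g = 0$ and $\nabla_{\mathbf{c}} g = 0$, which one could in principle hand to FindRoot. A minimal one-dimensional sketch with a made-up g (not my real function):

```mathematica
(* toy g with an obvious interior saddle; my real g is 12-dimensional *)
g[a_, b_, c_] := (b - a)^2 - (c - b)^2;

(* solve the first-order saddle conditions D[g,b] == 0, D[g,c] == 0
   numerically for fixed a; starting values are guesses *)
With[{a = 1},
  FindRoot[
    {D[g[a, b, c], b] == 0, D[g[a, b, c], c] == 0},
    {{b, 0}, {c, 0}}
  ]
]
```

Of course, a root of the gradient need not be the min-max solution (it could be a spurious stationary point, or the optimum could lie on the boundary of $\Omega_b \times \Omega_c$), so I'd still want a more principled minimax routine if one exists.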