
Given some $d \in \Bbb{Z}_{>0}$, $t \in [0,1]$, and $0 < a < b < 1$, fix $I = [a,b]$. I'm trying to compute $$ \min_n \min_{E_n(t)} f_d(\mathbf{x}) $$ where $n \in \Bbb{Z}_{>0}$, $$ f_d(\mathbf{x}) = \begin{cases} d^2 \tan\bigl(\tfrac{\pi}{2} x\bigr) + 8d & \text{if } \mathbf{x} = x \in I\\ f_d(x_1) + (1 - x_1)\, f_d(x_2, \dotsc, x_n) & \text{if } \mathbf{x} = (x_1, \dotsc, x_n) \in I^n,\ n \geq 2 \end{cases} $$ and $E_n(t) = \{\mathbf{x} \in I^n \mid g_n(\mathbf{x}) \geq t\}$ with $$ g_n(\mathbf{x}) = 1 - \prod_{k = 1}^n (1 - x_k). $$ My current setup is

Clear[ff, gf, cf, min]

{a, b} = {1/10, 95/100};
(*Target function*)
ff[n_?IntegerQ, d_?IntegerQ] := 
  Fold[d^2*Tan[#2*Pi/2] + 8*d + (1 - #2) #1 &, 0, 
   Reverse[Array[x, n]]];

(*Constraints*)
gf[n_?IntegerQ] := (1 - Product[(1 - x[i]), {i, n}]);
cf[n_?IntegerQ, t_?NumericQ] := 
  Join[{gf[n] >= t}, Table[a <= x[i] <= b, {i, n}]];

min[n_, d_, t_] := NMinimize[{ff[n, d], cf[n, t]}, Array[x, n]];
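
For a fixed integer n the definitions above behave as intended; for instance, with n = 2 the Fold unrolls into the recursion $f_d(x_1) + (1 - x_1) f_d(x_2)$ (outputs shown up to Mathematica's term ordering):

ff[2, 25]
(* 200 + 625 Tan[(Pi x[1])/2] + (1 - x[1]) (200 + 625 Tan[(Pi x[2])/2]) *)

cf[2, 0.75]
(* {1 - (1 - x[1]) (1 - x[2]) >= 0.75, 1/10 <= x[1] <= 19/20, 1/10 <= x[2] <= 19/20} *)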

I thought that I could just run NMinimize on min, but it spits out a bunch of errors:

NMinimize[{First @ min[n, 25, 0.75], n \[Element] Integers, 100 >= n >= 1}, n]

(*
Array::ilsmn: Single or list of non-negative machine-sized integers expected at position 2 of Array[x,n]. >>
NMinimize::bcons: The following constraints are not valid: {cf[n,0.75]}. Constraints should be equalities, inequalities, or domain specifications involving the variables. >>
NMinimize::objfs: The objective function {ff[n,25],cf[n,0.75]} should be scalar-valued. >>
NMinimize[{{ff[n, 25], cf[n, 0.75]}, n \[Element] Integers, n >= 1},
  n]
*)

I assume that this happens because First operates on the (unevaluated) result of min before the minimization algorithm gets a chance to substitute anything for n; in other words, First @ min[n, 25, 0.75] is evaluated eagerly, while n is still symbolic. I tried wrapping this call in a Function, but that didn't work either, I think because NMinimize works by substituting values for symbols rather than by calling an opaque function.
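
This is easy to confirm (a diagnostic, not a fix): with a symbolic n, min[n, 25, 0.75] evaluates right away, fires the Array message, and returns an unevaluated NMinimize expression, so First merely extracts its first argument, which is exactly the list that reappears as the "objective" in the echoed call above:

Quiet @ First @ min[n, 25, 0.75]
(* {ff[n, 25], cf[n, 0.75]} *)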

On the other hand, the brute-force approach seems to work, but it is very inefficient:

MinimalBy[#, First] & @ Map[min[#, 25, 0.75] &, Range[20]] // Timing

(*{67.768000, {{1234.95, {x[1] -> 0.402552, x[2] -> 0.376923, 
x[3] -> 0.32842}}}}*)
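
Since the minimizations for different n are independent, one possible (still brute-force) speedup is to run them on parallel subkernels. A sketch, assuming the definitions live in the Global` context; recent versions should distribute the needed definitions automatically, but an explicit DistributeDefinitions does no harm, and AbsoluteTiming is used because Timing only counts the master kernel's CPU time:

DistributeDefinitions["Global`"];  (* push ff, gf, cf, min and the values of a, b to the subkernels *)
MinimalBy[ParallelMap[min[#, 25, 0.75] &, Range[20]], First] // AbsoluteTiming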

Truth be told, pretty soon I noticed that $\min_{E_n(t)} f_d(\mathbf{x})$ starts out as a decreasing function of $n$, reaches its minimum at a rather small $n$, and then keeps increasing. My instinctive approach would have been something like

takeWhile2[#, Function[{prev, cur}, First[prev] > First[cur]]] & @
  lazyMap[min[#, 25, 0.75]&, lazyRange[100]]

which sadly hinges on three functions that I don't know how (or whether) to implement in Mathematica, namely

  • lazyRange[n]: A lazy list containing the elements of Range[n]. This isn't strictly necessary in this example, but I would need it to remove the artificial bound $n \leq 100$.

  • lazyMap[f, expr]: Just a lazy version of Map[f, expr]. This is where the main performance improvement would come from.

  • takeWhile2[list, crit]: Just like TakeWhile, but with a binary criterion. It always gives the first element of list and then keeps giving elements as long as crit is True when applied to the last element given and the current one (a non-lazy sketch follows right after this list).
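
For an ordinary (eager) list, takeWhile2 itself can at least be written with built-ins. A sketch, assuming the semantics described above; it does nothing about laziness:

takeWhile2[list_List, crit_] := 
  Take[list, 1 + LengthWhile[Partition[list, 2, 1], crit @@ # &]]

takeWhile2[{5, 3, 2, 4, 1}, #1 > #2 &]
(* {5, 3, 2} *)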

A.P.
    Before someone points it out: there are two questions on this site with deceptively very similar titles to this one (the first being almost identical), but those are about expressing functions of a dynamic number of variables — which I already know how to do, as you can see from my code above. – A.P. Dec 05 '15 at 01:18

1 Answer

Here's a straightforward procedural implementation of the functional stub I wrote at the end of the question:

minN::itmax = "Maximum of `1` iterations reached.";
minN[d_, t_, nmax_] := Module[{cur = min[1, d, t], next},
    Catch[
        Do[
            next = min[i, d, t];
            (* stop at the first n whose minimum is no better than the previous one *)
            If[First[next] >= First[cur], Throw[{i - 1, cur}], cur = next],
            {i, 2, nmax}];
        Message[minN::itmax, nmax];
        (* returning cur (rather than next) also covers nmax = 1, where next is never set *)
        {nmax, cur}]];
minN[d_, t_] := minN[d, t, d];  (* default bound: at most d iterations *)
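
Usage: with the parameters from the question and the decreasing-then-increasing behaviour observed there, this stops at the same n = 3 solution that the brute-force scan returned:

minN[25, 0.75]
(* {3, {1234.95, {x[1] -> 0.402552, x[2] -> 0.376923, x[3] -> 0.32842}}} *)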

Note that the default upper bound of $d$ iterations is arbitrary, but it seems to be more than sufficient. More importantly, while the empirical evidence suggests that $\min_{E_n(t)} f_d(\mathbf{x})$, as a function of $n$, has exactly one local minimum (which is therefore the global one), I don't know how to prove this beyond a reasonable doubt.


While this piece of code is enough for my purposes — which is why I'm posting it as an answer — I would prefer a more general and/or idiomatic solution.
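
One candidate for such a solution (a sketch that relies on the same single-minimum assumption and, unlike minN, has no iteration cap): NestWhile can play the role of lazyMap plus takeWhile2 by carrying the pair {n, min[n, d, t]} and stepping n until the objective value stops decreasing; the final argument -1 backs up to the last improving step. The name minNestN is introduced here purely for illustration.

minNestN[d_, t_] := NestWhile[
    With[{n = First[#] + 1}, {n, min[n, d, t]}] &,  (* step from n to n + 1 *)
    {1, min[1, d, t]},                              (* start at n = 1 *)
    First[Last[#2]] < First[Last[#1]] &,            (* continue while the minimum still decreases *)
    2, Infinity, -1];                               (* test the two most recent results; return the last improving pair *)

minNestN[25, 0.75]
(* should agree with minN[25, 0.75] above *)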

A.P.