6

I have a question similar to this post.

I am trying to maximize a function that is a black box to me. I have gradient-free methods at my disposal; in fact, the problem is smooth enough that I can even use numerical approximations of the gradient most of the time. However, there is one last problem: the domain on which the function is defined is unknown. I cannot know whether the function is defined at a point before I evaluate it there.

Therefore, what I did was this: whenever an evaluation revealed that the function is undefined, I assigned it a value of $-\infty$ or NaN. However, the program sometimes reached such a point and tried to evaluate derivatives there, and then it simply failed.

I don't think this is a rare problem, so I searched the web, but with no luck. I would like to bring the question up here, and would really appreciate any ideas that might help.
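For concreteness, a minimal sketch of the kind of wrapper described above (the function and helper names here are illustrative, not the poster's actual code): undefined evaluations are caught and mapped to a penalty value.

```python
import math

def safe_f(f, x, penalty=-math.inf):
    """Evaluate f(x); return a penalty value if f is undefined there.

    Treats exceptions and NaN results as "undefined". The default
    penalty is -inf, matching the maximization setting above.
    """
    try:
        y = f(x)
    except (ValueError, ArithmeticError):
        return penalty
    if math.isnan(y):
        return penalty
    return y

# Hypothetical black box, defined only for x > 0:
f = lambda x: math.log(x) - x
print(safe_f(f, 2.0))    # finite value
print(safe_f(f, -1.0))   # -inf: log(-1) raises ValueError
```

The failure mode in the question then follows directly: a finite-difference stencil through this wrapper produces `inf` or `nan` as soon as one stencil point falls outside the (unknown) domain.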

Anton Menshov
  • 8,672
  • 7
  • 38
  • 94
Shawn Wang
  • 163
  • 4
  • 1
    There are several other things that you really should consider here. First, is the feasible region of your problem convex? Second, is the function that you're trying to maximize concave? If either of these fails, then your optimization routine could easily end up trapped at a local maximum. – Brian Borchers Sep 08 '13 at 16:01
  • @BrianBorchers Thanks for the reminder! I should've included this in the post. The problem is neither convex nor concave, so yes, it is easy to get trapped at a local maximum. I hadn't thought about this point, and in that case I should really use a global algorithm like simulated annealing or a genetic algorithm. Thanks again! – Shawn Wang Sep 08 '13 at 18:14
  • There is no explicit question. A good question has a list of knowns, a list of constraints, and the desired form and nature of the answer. You do not have a single question mark in your question. – EngrStudent Feb 07 '14 at 16:20
  • How to optimize when a global optimization problem is unbounded? –  Feb 07 '14 at 14:59
  • @mudassar: Are you asking a new question or are you trying to clarify the OP's question? – Paul Feb 07 '14 at 16:38
  • @ShawnWang: What exactly is your question? – Paul Feb 07 '14 at 16:40

2 Answers

3

Pick a fixed point $x_0$ with a finite function value.

Then, at points where $f(x)$ is undefined (i.e., where you would return $-\infty$), return as the gradient the gradient at an auxiliary point $x_0 + t(x - x_0)$ with a finite function value, where $t < 1$ is chosen by backtracking.

This should fix your problems, at least when using a line search method.

Anton Menshov
  • 8,672
  • 7
  • 38
  • 94
Arnold Neumaier
  • 11,318
  • 20
  • 47
1

Judging from your description, your problem occurs when using derivative-based methods.

So, I suggest you try to stay in feasible regions by adjusting, if necessary, the length of the step in the current gradient direction, so that you always land at a valid point in your search space.

For derivative-free methods like the Nelder-Mead simplex or genetic algorithms, sporadic evaluation failures do not make the algorithm fail.
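To illustrate why derivative-free methods tolerate this, here is a toy random-search maximizer (a sketch of my own, not the poster's setup): undefined points simply score $-\infty$ and are never accepted, so the search routes around them.

```python
import math
import random

def maximize_random_search(f, x, step=0.5, iters=500, seed=0):
    """Toy derivative-free maximizer: propose Gaussian perturbations of
    the incumbent and keep the best; undefined evaluations count as -inf."""
    rng = random.Random(seed)

    def val(x):
        try:
            y = f(x)
            return y if not math.isnan(y) else -math.inf
        except (ValueError, ArithmeticError):
            return -math.inf

    best_x, best_y = x, val(x)
    for _ in range(iters):
        cand = best_x + rng.gauss(0.0, step)
        y = val(cand)
        if y > best_y:
            best_x, best_y = cand, y
    return best_x, best_y

# Hypothetical f defined only for x > 0, with its maximum at x = 1:
f = lambda x: math.log(x) - x
x_best, y_best = maximize_random_search(f, x=2.0)
```

Candidates outside the domain are rejected like any other bad point; no derivative is ever taken, so the $-\infty$ values are harmless.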

Disclaimer: I have only basic insight into the field of optimization.

Jan
  • 3,418
  • 22
  • 37