
Finding a global minimum for this problem (non-linear optimization by the Nelder-Mead downhill simplex method) may not be possible, but by finding a local minimum I expect the value of the function at the minimum to be around 1 or (far) less than 1 (the lower the value, the better the answer).

The "NelderMead" method is available for NMinimize in Mathematica. The problem I am facing is that even though I change the method options for Nelder-Mead (for example, "ShrinkRatio", "ContractRatio", "ReflectRatio", and so on) and also initialize the calculation with different starting values (chosen via "RandomSeed"), I get minima that are far greater than 1.

I have found that people deal with these kinds of problems by writing code in C++ or Fortran and get good results, but I know neither language, so I am trying to do it in Mathematica using the built-in functions.

Here is what I am trying (a simplified version of the original problem I have to deal with, which has even more parameters):

A = ({{a11, 0},{0, a22}});
B = ({{b11, b13},{b13, b22}});
M1 = A + Exp[I phi] B;
M2 = a A + b Exp[I theta] B;
M3 = a A - 3 b Exp[I theta] B;
m1 = Abs[Eigenvalues[M1]];
m2 = Abs[Eigenvalues[M2]];
m3 = Abs[Eigenvalues[M3]];


t1 = T1[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, phi_?NumberQ] = m1[[1]];
t2 = T2[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, phi_?NumberQ] = m1[[2]];

t3 = T3[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, theta_?NumberQ, a_?NumberQ, b_?NumberQ] = m2[[1]];
t4 = T4[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, theta_?NumberQ, a_?NumberQ, b_?NumberQ] = m2[[2]];

t5 = T5[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, theta_?NumberQ, a_?NumberQ, b_?NumberQ] = m3[[1]];
t6 = T6[a11_?NumberQ, a22_?NumberQ, b11_?NumberQ, b22_?NumberQ, 
b13_?NumberQ, theta_?NumberQ, a_?NumberQ, b_?NumberQ] = m3[[2]];

O2 = 0.235; O1 = 72;
E2 = 0.031; E1 = 0.87;

O4 = 0.0222196; O3 = 0.958157;
E4 = 0.006198; E3 = 0.018862;

O6 = 0.0941698; O5 = 1.60087;
E6 = 3.6079*^-7; E5 = 0.00026;

function[a11, a22, b11, b22, b13, a, b, phi, theta] = 
  ((t1 - O1)/E1)^2 + ((t2 - O2)/E2)^2 + ((t3 - O3)/E3)^2 + ((t4 - O4)/E4)^2 +
  ((t5 - O5)/E5)^2 + ((t6 - O6)/E6)^2;

Do[Print[NMinimize[
   function[a11, a22, b11, b22, b13, a, b, phi, theta],
   {a11, a22, b11, b22, b13, a, b, phi, theta},
   Method -> {"NelderMead", "ShrinkRatio" -> 0.5, "ContractRatio" -> 0.7,
     "ReflectRatio" -> 1, "ExpandRatio" -> 2, "RandomSeed" -> j}]], {j, 5}]

Five random iterations (with different initial conditions chosen randomly) find different minima, but all are much greater than the expected result. Here they are:

(* {9081.63,
 {a11->0.0585825,a22->1.22145,b11->-0.26137,b22->0.327416,b13->-0.158818,
  a->-1.2025,b->0.168387,phi->-1.01436,theta->0.698082}}

{32448.8,
 {a11->-0.542868,a22->0.845826,b11->1.10494,b22->0.739838,b13->-0.16513,
  a->0.795721,b->0.346771,phi->-0.369352,theta->-0.077746}}

{7122.85,
 {a11->0.0846693,a22->-0.643122,b11->-0.0834931,b22->-1.14742,b13->-0.37268,
  a->1.36398,b->-0.208855,phi->-0.367912,theta->-0.359771}}

{14919,
 {a11->-0.14745,a22->-0.0507363,b11->0.414601,b22->0.306933,b13->0.420321,
  a->0.364225,b->0.663577,phi->-0.625049,theta->-0.33245}}

{13582.6,
 {a11->-0.182997,a22->0.431253,b11->0.201061,b22->0.507661,b13->0.258584,
  a->-0.471931,b->0.741867,phi->-0.5953,theta->-0.0911202}} *)

I tried different combinations of the Nelder-Mead options with no improvement. The default value of MaxIterations is 100, and I also tried increasing the number of iterations using the Nelder-Mead package from the excellent post Shaving the last 50 ms off NMinimize by Oleksandr R., but again got no improvement. Can anyone tell me what I am missing?
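For reference, when I say increasing the number of iterations, I mean a call of the following kind (the particular seed and cap here are only placeholders):

NMinimize[
 function[a11, a22, b11, b22, b13, a, b, phi, theta],
 {a11, a22, b11, b22, b13, a, b, phi, theta},
 Method -> {"NelderMead", "RandomSeed" -> 1},
 MaxIterations -> 5000]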

SAS
  • Yes, I have seen that, and at first I forgot to mention it, so I edited my question a few minutes ago to include the fact that I also used that package and, even after changing MaxIterations, did not get any improved result. Maybe I will still be able to get a good result using that package, but I could not figure out what I am missing. @rm-rf – SAS Jun 28 '13 at 20:12
  • @MichaelE2, Oh, really?! I am using version 8. So you are saying version 9 is giving good results! Btw, are you saying you ran my code without any modification? If so, then I have to try to get Mathematica 9. Thanks for your useful reply. – SAS Jun 28 '13 at 21:40
  • @MichaelE2, thanks! Though I am using version 8, it is the "student" version, which I am guessing could be another reason. Let me get some other versions and then I will let you know whether I get good results. – SAS Jun 28 '13 at 21:47
  • @MichaelE2 the values you quote are not proper minima; you have a copy/paste error in the definition of the function. It isn't easy to find good minima for this function--using many thousands of different initial simplexes, the lowest I found was about 1500, but values around 6500 are more common. I didn't see any less than 1. I will write an answer later giving some advice on this problem... too hung over at the moment to concentrate properly. – Oleksandr R. Jun 29 '13 at 15:43
  • @OleksandrR. Thanks, I appreciate your concern and await your valuable suggestions. And as you mentioned, unfortunately most of the time I am also getting a minimum of about 6506.35 (which is too big for my purpose). – SAS Jun 29 '13 at 15:47
  • I hope the function can be edited and corrected -- still, when I copy and paste it, I get the same results. (And there's still an extra ].) – Michael E2 Jun 29 '13 at 15:51
  • I found the error. Fixed. I'll delete my erroneous results. – Michael E2 Jun 29 '13 at 16:07
  • As an idea, if you have an algorithm which solves it in C or Fortran, you could always implement it in Mathematica and ask for help speeding it up. But it seems nontrivial to minimize this function so I am guessing it's an algorithm problem. – acl Jun 29 '13 at 16:17
  • @acl I agree. This function seems particularly difficult, so I doubt if any ordinary minimizer in any language could solve it easily. Differential evolution may have a fighting chance of doing it, with appropriate choice of the mutation and selection operators and meta-optimized tuning parameters. The best result I have seen so far using this approach is 34.7, which at least is a lot closer. – Oleksandr R. Jun 29 '13 at 20:14
  • No doubt that it is a highly non-trivial problem (especially the original problem that I have to deal with, which has more parameters). It may not be possible to find a "global" minimum, but my target is to find a really good "local" minimum. As I said previously, I am not familiar with C++, but people get good results using it; here is one such package to handle these kinds of problems: http://seal.web.cern.ch/seal/snapshot/work-packages/mathlibs/minuit/ – SAS Jun 29 '13 at 20:28
  • I have just seen a value of 2.30 from differential evolution, so things are improving. It seems to me though that Nelder-Mead is not a good choice for this problem. Are others really getting good results using it? How long do they have to wait? (I'm wondering if maybe they just choose huge numbers of starting simplices and pick the best result.) Do you know anything about the expected values of the parameters? And what is the physical meaning of this problem, if any? – Oleksandr R. Jun 29 '13 at 20:49
  • @OleksandrR., I am not bound to use the Nelder-Mead method, just following people working in this area, who mainly use this method so that even if the function is not differentiable you are safe. BTW, the answer to your question is yes, and the number you got (2.30), which is close to 1, looks far better. But I do not really know how long it takes them to get the result, as none of them mention the timing in their papers; they only mention that it is indeed a tedious job to do. And there is no clue for the expected values of the parameters. – SAS Jun 29 '13 at 21:03
  • @OleksandrR., about the physical meaning: each of the eigenvalues represents an observable, so in doing the fitting you have to fix the parameters in such a way that the eigenvalues are almost equal to the experimentally observed values (which in my code are the Oi's). BTW, finding a good local minimum is the first step; after getting it, further fine tuning of the parameters is involved to match the eigenvalues. – SAS Jun 29 '13 at 21:08
  • Well, I've just found a value of 2.54×10^-11. This at least gives me confidence that the problem is not absolutely impossible. :) It will take a bit longer for me to write an answer, though. – Oleksandr R. Jun 29 '13 at 21:20
  • @OleksandrR. , Thanks for your effort. Take time to write your answer :) – SAS Jun 29 '13 at 21:22

1 Answer


It seems to me that there are two possible approaches one might use to solve this problem. Either we can follow others and use the Nelder-Mead method, or we can try to use another, better suited method, such as differential evolution. This problem is very strongly multimodal, with a huge number of deceptive local minima, and this is exactly the type of function for which the Nelder-Mead method performs least well. To overcome this difficulty, it will be necessary to try the minimization repeatedly with enormously many initial simplices in the hope of falling into a favorable part of the parameter landscape.

I will show the approach using the Nelder-Mead method first, since this is what the question principally concerns. It is also the approach most relevant to Mathematica: although NMinimize does support differential evolution (Method -> "DifferentialEvolution"), for the second approach I didn't actually use Mathematica, but rather my own differential evolution minimizer written in Python.

Sampling so many initial simplices will take a long time, and if we are going to minimize the same function repeatedly, it makes sense to use my compiled Nelder-Mead implementation. This will probably not be quite as fast as MINUIT or other native C++/Fortran codes, but when compiled using a C compiler, it will not be too far away from their performance, either.
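Assuming that package is installed somewhere on $Path (the context name is the one appearing in the code below), it is loaded in the usual way:

Needs["NelderMeadMinimize`"]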

First, let's write your problem down a bit more cleanly:

A = {{a11, 0}, {0, a22}}; B = {{b11, b13}, {b13, b22}};

M1 = A + Exp[I phi] B;
M2 = a A + b Exp[I theta] B;
M3 = a A - 3 b Exp[I theta] B;

m1 = FullSimplify@Abs@Eigenvalues[M1]; t1 = m1[[1]]; t2 = m1[[2]];
m2 = FullSimplify@Abs@Eigenvalues[M2]; t3 = m2[[1]]; t4 = m2[[2]];
m3 = FullSimplify@Abs@Eigenvalues[M3]; t5 = m3[[1]]; t6 = m3[[2]];

problem = Simplify[
   ((t1 - O1)/E1)^2 + ((t2 - O2)/E2)^2 +
    ((t3 - O3)/E3)^2 + ((t4 - O4)/E4)^2 +
    ((t5 - O5)/E5)^2 + ((t6 - O6)/E6)^2
  ];
parameters = {a11, a22, b11, b22, b13, a, b, phi, theta};

function = Block[{
    O2 = 0.235, O1 = 72,
    E2 = 0.031, E1 = 0.87,

    O4 = 0.0222196, O3 = 0.958157,
    E4 = 0.006198, E3 = 0.018862,

    O6 = 0.0941698, O5 = 1.60087,
    E6 = 3.6079*^-7, E5 = 0.00026
    },
   Function @@ {parameters, problem}
  ];
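As a quick, optional sanity check, the resulting pure function should return a machine number when applied to an arbitrary numeric point, for example:

(* optional check: evaluate the objective at an arbitrary numeric point *)
function @@ ConstantArray[0.5, Length[parameters]]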

Next, we compile the minimizer with which we will attempt to solve it. Here we use a looser tolerance than usual (Sqrt[$MachineEpsilon] vs. $MachineEpsilon), since if we have already fallen into a poor local minimum there is not much point in expending further effort to find the very lowest function value in this part of parameter space. It is better to give up and try again somewhere else.

minimizer = With[{
    nelderMead = NelderMeadMinimize`Dump`CompiledNelderMead[function, parameters],
    bounds = {-1, 1}, dimension = Length[parameters], tolerance = Sqrt[$MachineEpsilon]
   },
   Compile[{{dummy, _Integer, 0}},
    nelderMead[RandomReal[bounds, {dimension + 1, dimension}], tolerance, -1],
    CompilationOptions ->
     {"ExpressionOptimization" -> True, "InlineCompiledFunctions" -> True}, 
    RuntimeOptions ->
     {"CompareWithTolerance" -> False, "EvaluateSymbolically" -> False},
    RuntimeAttributes -> Listable, "Parallelization" -> True, CompilationTarget -> "C"
   ]
  ];
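For reference, the looser tolerance used here is roughly $1.5 \times 10^{-8}$, versus a machine epsilon of about $2.2 \times 10^{-16}$:

{Sqrt[$MachineEpsilon], $MachineEpsilon}
(* -> {1.49012*10^-8, 2.22045*10^-16} *)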

Now it's just a matter of running the minimization sufficiently many times to get a decent answer. For present purposes we will make only 1 000 trials, although in practice many more will be required in order to get anywhere near a function value of 1. Beware: this code is very CPU-intensive, and can freeze your computer while it runs.

{First[#], Thread[parameters -> Rest[#]]} & /@ Take[minimizer@Range[1000] // Sort, 10]

I must admit, I don't know how many attempts are likely to be needed for this, as I stopped after a few hundred thousand tries without getting close to the target range. Since others claim to use this method successfully, I assume that it will eventually succeed, but this is presumably the main reason why minimizing this type of problem is reputed to be such a tedious task.
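If you want to monitor progress during such a campaign, one possibility (just a sketch, relying on the value-first layout of the vectors returned by the compiled minimizer, as used above) is to keep the raw results and tally how many restarts get below a few thresholds:

(* sketch: store the raw results (function value first, parameters after) and
   count how many restarts reach below each threshold *)
results = minimizer @ Range[1000];
Count[results[[All, 1]], v_ /; v < #] & /@ {10000, 5000, 1000, 1}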


Alternatively, we might expect more rapid success using differential evolution, which is much more suited to this kind of problem. The main difficulty with this method is that it contains two tuning parameters, the values of which are critical to its success or otherwise, and (unlike the Nelder-Mead parameters) cannot generally be chosen using a priori theoretical arguments. Therefore, it is usually necessary to tune these according to some sort of meta-optimization scheme. More prosaically, NMinimize is quite slow, and only implements one mutation method (the classical rand/1 operator), although other, dramatically more efficient, possibilities have appeared in the literature. For these reasons, I found it preferable to use my own code rather than Mathematica's, and this happens to be written in Python.
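For completeness, if you do want to stay inside Mathematica, the built-in version looks roughly like this; the option values shown are purely illustrative (not the meta-optimized ones found below), and remember that only the classical rand/1 operator is available:

(* illustrative only: NMinimize's built-in differential evolution, with the two
   tuning parameters exposed as "ScalingFactor" (F) and "CrossProbability" (C) *)
NMinimize[function @@ parameters, parameters,
 Method -> {"DifferentialEvolution",
   "ScalingFactor" -> 0.85, "CrossProbability" -> 0.975,
   "SearchPoints" -> 200, "RandomSeed" -> 1},
 MaxIterations -> 2000]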

Your function can be programmed in Python as follows:

from numpy import (exp, sqrt, abs)

def function(a11, a22, b11, b22, b13, a, b, phi, theta):
    O2 = 0.235; O1 = 72;
    E2 = 0.031; E1 = 0.87;

    O4 = 0.0222196; O3 = 0.958157;
    E4 = 0.006198; E3 = 0.018862;

    O6 = 0.0941698; O5 = 1.60087;
    E6 = 3.6079e-7; E5 = 0.00026;

    var3 = E1**2; var4 = 1/var3; var5 = -2*O1; var6 = b11 + b22; var7 = 1j*phi;
    var8 = exp(var7); var9 = var6*var8; var10 = -a22; var11 = a11 + var10; 
    var12 = var11**2; var13 = -b22; var14 = b11 + var13; var15 = 2*var11*var14*var8;
    var16 = b13**2; var17 = 4*var16; var18 = var14**2; var19 = var17 + var18; 
    var20 = 2j*phi; var21 = exp(var20); var22 = var19*var21;
    var23 = var12 + var15 + var22; var24 = sqrt(var23); var25 = -var24; 
    var26 = a11 + a22 + var9 + var25; var27 = abs(var26); var28 = var5 + var27;
    var29 = var28**2; var31 = E2**2; var32 = 1/var31; var33 = -2*O2; 
    var34 = a11 + a22 + var9 + var24; var35 = abs(var34); var36 = var33 + var35;
    var37 = var36**2; var39 = E3**2; var40 = 1/var39; var41 = -2*O3; 
    var42 = a11 + a22; var43 = a*var42; var44 = 1j*theta; var45 = exp(var44);
    var46 = b*var6*var45; var47 = a**2; var48 = var47*var12;
    var49 = 2*a*var11*b*var14*var45; var50 = b**2; var51 = 2j*theta;
    var52 = exp(var51); var53 = var50*var19*var52; var54 = var48 + var49 + var53; 
    var55 = sqrt(var54); var56 = -var55; var57 = var43 + var46 + var56;
    var58 = abs(var57); var59 = var41 + var58; var60 = var59**2; var62 = E4**2; 
    var63 = 1/var62; var64 = -2*O4; var65 = var43 + var46 + var55;
    var66 = abs(var65); var67 = var64 + var66; var68 = var67**2; var70 = E5**2; 
    var71 = 1/var70; var72 = -2*O5; var73 = -3*b*var6*var45;
    var74 = -6*a*var11*b*var14*var45; var75 = 9*var50*var19*var52;
    var76 = var48 + var74 + var75; var77 = sqrt(var76); var78 = -var77;
    var79 = var43 + var73 + var78; var80 = abs(var79); var81 = var72 + var80;
    var82 = var81**2; var84 = E6**2; var85 = 1/var84; var86 = -2*O6;
    var87 = var43 + var73 + var77; var88 = abs(var87); var89 = var86 + var88;
    var90 = var89**2; 

    return (var4*var29 + var32*var37 + var40*var60 +
            var63*var68 + var71*var82 + var85*var90)/4

As you might guess, this is not hand-written code, but rather translated from the original using Mathematica. Perhaps it isn't the most efficient way to write it, but it's not too bad, taking advantage of Mathematica's simplifications and common subexpression elimination. (It may not matter very much anyway, since Python isn't a high performance language to start with. Its performance is good enough, but I use it mainly because it's widely available and very pleasant to work with.)
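Roughly speaking, the translation step looks like the following (a sketch only: Experimental`OptimizeExpression is undocumented, and the transcription of the optimized expression into Python syntax is still done by hand):

(* common subexpression elimination on the symbolic objective defined earlier;
   the resulting nested assignments are then transcribed into Python by hand *)
optimized = Experimental`OptimizeExpression[problem];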

For obvious reasons, I don't think it's a particularly good idea to post large amounts of Python code here on this site, so I'll let the definition of the function stand on its own. You can download the remaining code, i.e. the differential evolution implementation, the meta-optimization script, and an example using meta-optimized tuning parameters, from the git repository. You'll need Python 2.5 and NumPy 1.3 (or newer) to run them. If you aren't familiar with Python, it's important to note that the language has undergone some incompatible changes in Python 3, and so, although automated conversion using 2to3 should be possible, if you want to use the files unmodified, you will need to use Python 2.

I should say first of all that I did try the rand/1 mutation operator, but found it ineffective for this problem. Instead, I used the MDE5 mutator described in R. Thangaraj, M. Pant, and A. Abraham, Appl. Math. Comp. 216 (2), 532 (2010), which has often proven to be a more powerful alternative to the classical method. It's possible that other schemes might be even better, but meta-optimization takes a long time, so I didn't try any others. A variety of methods are implemented in the code that I found to have advantages for particular problems, and many more are available in the literature. Before you implement any of the latter, though, you should be aware that many publications in this area fail to employ appropriate methodology or arrive at robust conclusions, so quite often a method will be described as effective that actually doesn't work well for real problems. My advice would be just to use the ones already provided, and not waste your time with this.
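For readers unfamiliar with the terminology, the classical rand/1 operator mentioned above works roughly as follows (a minimal Mathematica sketch, not the Python code used for the actual runs; the usual guarantee that at least one donor component survives crossover is omitted for brevity):

(* rand/1 mutation: combine three distinct population members other than the target i *)
mutateRand1[pop_, i_, f_] :=
 Module[{r = RandomSample[Delete[Range[Length[pop]], i], 3]},
  pop[[r[[1]]]] + f (pop[[r[[2]]]] - pop[[r[[3]]]])]

(* binomial crossover: take each component from the donor with probability cr *)
crossoverBin[target_, donor_, cr_] :=
 MapThread[If[RandomReal[] < cr, #2, #1] &, {target, donor}]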

Meta-optimization appears to be absolutely vital for good results on your particular problem, as is frequently the case in general. Here, we choose the tuning parameters $0 < F \le 2$ (mixing coefficient) and $0 < C \le 1$ (crossover probability) by running differential evolution on itself, aiming to minimize the objective function as robustly as possible while also limiting the number of iterations required to do so. This is somewhat difficult due to the tendency of black-box numerical minimizers to optimize exactly what you've asked for, yet not at all what you actually meant. For example, a meta-optimization scheme designed to maximize the initial rate of decrease of the function value can often result in poor performance for problems that have different characteristics at different scales. This problem becomes significantly more difficult after reaching function values of about 6000, so it's important that we also sample results from beyond this threshold.

You can run the meta-optimization code (optimization_function.py) for yourself in about a day, but the results I obtained were that $F \approx 0.850$ and $C = 0.975$. This is still a difficult problem, so even using these for the actual minimization (function.py) results in success (i.e., a function value far less than unity) only about 40-50% of the time. Nonetheless, this is a lot better than the Nelder-Mead results. After a run time of about two or three minutes, the result (if it succeeds) will be around $10^{-20}$, as shown by the following example solutions:

solutions = {
  {0.006302962181370445, 0.16279423867218476, -9.548089292946795,
   -62.2208122770191, 24.715755884121787, -6.7915955413380225,
   -0.002710346806452485, -7.127353886016603*^10, 0.43823648438746504}, 
  {-3.7513268290908126, -0.10760731343238998, -57.860129251744794,
   -10.517744414643426, 25.254637364611664, 0.2936728267065106,
   -0.003026492897390265, 0.04259960531247162, -0.5451053025455311}, 
  {-0.2708690475702807, -69.15868531322194, -0.11698443104717468,
   4.8543712324002435, 0.9233797217848526, 0.009694619653952973,
   -0.10947502700209849, 8.451868538832148, -4.973378159025613}
 };
function @@@ solutions
(* -> {1.71426*10^-21, 3.81267*10^-19, 1.80781*10^-19} *)

It's interesting to note that these results are significant up to the last place--in fact, just importing the output of the Python program into Mathematica introduced a small rounding error that increased these significantly from the original values of $\approx 2 × 10^{-21}$. Here we see that rounding them off to the nearest multiple of $10^{-12}$ can produce an increase of around 10 orders of magnitude:

function @@@ Round[solutions, 1*^-12]
(* -> {5.20078*10^-10, 2.19968*10^-10, 1.38377*10^-13} *)
Oleksandr R.
  • Wow! Differential evolution seems far better at minimizing the particular function at hand. And it is a good finding that the problem is so sensitive to the number of significant digits, so I have to be really careful about it! I appreciate your help, thanks :) – SAS Jun 30 '13 at 14:46
  • @Taarchira please see the updated answer including the links to the Python code, which you are free to use as you see fit. Sorry for the delay in finishing the job. – Oleksandr R. Jul 02 '13 at 23:41
  • Thank you for the update. – SAS Jul 06 '13 at 20:55
  • Instead of using "Pythonika", is there any easier way to translate Mathematica code/functions to Python (2.6 or 2.7)? – SAS Jul 10 '13 at 14:18
  • @Taarchira not as far as I know. Your function just happens to be simple enough (not using any high-level functions) that one can easily write it directly in Python. I used Experimental`OptimizeExpression to do the common subexpression elimination and then just changed a few of the operators from Mathematica to Python (e.g. parentheses are used in place of brackets, ^ becomes **, I becomes 1j, and so on). – Oleksandr R. Jul 10 '13 at 16:30
  • When I run the actual minimization (function.py), each run performs a single minimization. But if I want to compute it, let's say, 1000 times (which I tried by defining a function like repeat(), but could not figure out), could you tell me what the modification would be? The reason is so that I can plot a graph of min_of_function vs. number_of_repetitions, which will give me a better understanding of the best possible local minimum.

    A sample of how I tried:

    def repeat(function, n):
        for i in range(n):
            function()

    def f():
        (what we have.........)

    repeat(function, 1000)

    – SAS Jul 24 '13 at 00:01
  • @Taarchira since I can't really teach you how to write Python in comments, it's probably best if you figure this one out for yourself. I'm sure there are good books on Python out there if you want to learn it. Anyway, please be careful with this approach because the convergence tolerances I used aren't necessarily tight enough to distinguish one minimum as better than another in the $10^{-20}$ range. It may be better to come back into Mathematica and use FindMinimum at high precision to do this sort of thing, using the Python results as initial guesses. – Oleksandr R. Jul 26 '13 at 14:40
  • @Oleksandr R., in this example of minimization, all the parameters are free to take any value (x in [-Infinity, +Infinity]) to optimize the function. But is it possible to use this differential evolution method to optimize a function when the parameters are restricted to a certain interval (let's say x in [x_min, x_max])? – SAS Jul 12 '14 at 16:45
  • @SAS yes, it's possible. But you have to build your constraints into the objective function somehow. A simple way to do it is to return a function value of infinity if the constraints are violated--but this excludes large parts of the complete search space and may not be appropriate for soft constraints. What you seek to do is not easy in a general and abstract sense, which is why I didn't address it in the algorithm itself. But for a specific problem, modifying the objective function is normally not too difficult. – Oleksandr R. Jul 12 '14 at 17:38
  • @SAS One way to deal with simple bounds as constraints is to treat the boundaries as reflective, i.e. if the value lies outside of this range then it is pushed back inside according to the amount of constraint violation. This can be appropriate for some problems and inappropriate for others; you might prefer e.g. to randomly reassign values that violate constraints. Everything in global optimization is so dependent on the problem, that I'm not sure there is any general advice on this topic. – Oleksandr R. Jul 12 '14 at 17:43
  • @Oleksandr R., sometimes during computation using the differential evolution package, the actual minimization (let's say in a loop of 100 repetitions) produces NaN and terminates further computation (whereas I want all 100 repetitions/minimizations). The problem may be to do with the NumPy/SciPy packages. It happens sometimes, and not for all kinds of problems. Is there a way that, when it finds NaN, it does not terminate the computation but completes the full loop, ignoring the times it gets NaN? – SAS Jul 18 '14 at 00:30
  • @Oleksandr R., as an example: I have a Python file named test.txt, and I am using/importing the differential evolution package inside test.txt for the minimization. I get errors like:

    numpy.linalg.linalg.LinAlgError: Array must not contain infs or NaNs

    and the loop terminates immediately. The details of the error are in the following comment.

    – SAS Jul 18 '14 at 00:31
  • Traceback (most recent call last):
      File "test.txt", line 182, in
        output_function=None
      File "/home/differential_evolution.py", line 259, in minimize
        fitness_monitor=update_cm,
        generation_monitor=output_function
      File "/home/differential_evolution.py", line 115, in select
        map(objective_function, trial_population)
      File "test.txt", line 137, in function
        eigenvalues_of_M, eigenvectors_of_M = numpy.linalg.eig(M);

    – SAS Jul 18 '14 at 00:43
  •   File "/anaconda/1.6.1/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 1018, in eig
        _assertFinite(a)
      File "/opt/anaconda/1.6.1/lib/python2.7/site-packages/numpy/linalg/linalg.py", line 165, in _assertFinite
        raise LinAlgError("Array must not contain infs or NaNs")
    numpy.linalg.linalg.LinAlgError: Array must not contain infs or NaNs
    – SAS Jul 18 '14 at 00:44
  • If you could help me with this, that would be really great; your differential evolution package is indeed splendid and is really helping me with many problems :-) – SAS Jul 18 '14 at 00:50
  • Glad to hear you're making progress! I'm sorry, but I really don't think I can help with this problem. My code never produces any inf or nan values, and nor does it perform any linear algebra. So, these values and the error are being produced entirely by your own code, which I haven't seen (and this isn't the place for it anyway). At a guess I would say that, because differential evolution evaluates the objective function with random parameters, perhaps for certain values of these parameters your function is not well behaved. If so, you need to check for that before evaluating it. – Oleksandr R. Jul 18 '14 at 19:28