
Question:

I have been trying to solve this coupled ODE set.

\begin{align} \left( \frac{\mu^2}{B} + 1 \right) \Phi^2 + \frac{1}{A} \Phi^{\prime 2} + \frac{1}{2}\lambda \Phi^4 - \frac{A'}{r A^2} + \frac{1}{r^2 A} - \frac{1}{r^2} &= 0, \\ \left( \frac{\mu^2}{B} - 1 \right) \Phi^2 + \frac{1}{A} \Phi^{\prime 2} - \frac{1}{2}\lambda \Phi^4 - \frac{B'}{r A B} - \frac{1}{r^2 A} + \frac{1}{r^2} &= 0, \\ \frac{1}{A} \Phi^{\prime\prime} + \left( \frac{\mu^2}{B} - 1 \right) \Phi + \Phi' \left( \frac{B'}{2 A B} - \frac{A'}{2 A^2} + \frac{2}{A r} \right) - \lambda \Phi^3 &= 0, \end{align} where $\Phi(r), A(r), B(r)$ are the functions to be solved for, with boundary conditions \begin{align} \Phi(0) &= 0.1, \\ \Phi'(0) &= 0, \\ \Phi(\infty) &= 0, \\ \Phi'(\infty) &= 0, \end{align} for various $(\mu, \lambda)$ pairs. A further requirement is that $\Phi$ has no zeros except the one at infinity. When $\lambda$ is small, say $\mu = 2, \lambda = 300$, I can solve the system easily with the shooting method. However, when $\lambda$ is large, such as $\mu = 2, \lambda = 10^9$, the sensitivity to the initial guess becomes so severe that shooting is no longer practical. What is an efficient method for large $\lambda$?

Possible (partial) solutions

Below I list possible ways to attack the problem. Some of them are analyses valid only in certain regions of parameter space or of the domain, such as small $\lambda$ or large $r$.

Shooting Method (small $\lambda$, working)

The following sample code sets $A(0)=1, \Phi(0)=0.1, \Phi'(0)=0$ and shoots for $B(0)$ by bisection until the boundary condition at infinity is met.

rStart1 = 10^(-5); 
rEnd1 = 50; 
epsilon = 10^(-6); 
WorkingprecisionVar = 23; 
accuracyVar = Automatic; 
precisionGoalVar = Automatic; 
boolUnderShoot = -1; 
Clear[GRHunterPhi4Shooting]; 
GRHunterPhi4Shooting[μ_, λ_, ϕ0_] := Module[{}, eq1 = (μ^2/B[r] + 1)*ϕ[r]^2 + (1/A[r])*Derivative[1][ϕ][r]^2 + (1/2)*λ*ϕ[r]^4 - Derivative[1][A][r]/r/A[r]^2 + 
  1/r^2/A[r] - 1/r^2; eq2 = (μ^2/B[r] - 1)*ϕ[r]^2 + (1/A[r])*Derivative[1][ϕ][r]^2 - (1/2)*λ*ϕ[r]^4 - Derivative[1][B][r]/r/A[r]/B[r] - 1/r^2/A[r] + 1/r^2; 
eq3 = (1/A[r])*Derivative[2][ϕ][r] + (μ^2/B[r] - 1)*ϕ[r] + Derivative[1][ϕ][r]*(Derivative[1][B][r]/2/A[r]/B[r] - Derivative[1][A][r]/2/A[r]^2 + 2/A[r]/r) - λ*ϕ[r]^3; 
bc = {A[rStart1] == 1, B[rStart1] == B0, ϕ[rStart1] == ϕ0, Derivative[1][ϕ][rStart1] == epsilon}; 
sollst1 = ParametricNDSolveValue[Flatten[{eq1 == 0, eq2 == 0, eq3 == 0, bc, WhenEvent[ϕ[r] < 0, {boolUnderShoot = 1, "StopIntegration"}], 
    WhenEvent[Derivative[1][ϕ][r] > 0, {boolUnderShoot = 0, "StopIntegration"}]}], {A, B, ϕ}, {r, rStart1, rEnd1}, {B0}, WorkingPrecision -> WorkingprecisionVar, 
  AccuracyGoal -> accuracyVar, PrecisionGoal -> precisionGoalVar, MaxSteps -> 10000]]

funcPar = GRHunterPhi4Shooting[2, 300, 1/10]; 
boolUnderShoot =. ; 
bl = 0; 
bu = 10; 
Do[boolUnderShoot = -1; bmiddle = (bl + bu)/2; WorkingprecisionVar = Max[Log10[2^i] + 5, 23]; Quiet[funcPar[bmiddle]]; If[boolUnderShoot == 1, bl = bmiddle, bu = bmiddle]; ,   {i, 80}]
{bl, bu}
SetPrecision[%, 30]
funcInst = funcPar[bl][[3]]
Plot[{funcInst[r]}, {r, rStart1, rEnd1}, PlotRange -> All]

(*output: 
{296772652924728041363665/302231454903657293676544, 593545305849456082727335/604462909807314587353088}
{0.981938339341054485604558577553, 0.981938339341054485604566849359} *)

(figure: plot of the shooting solution $\Phi(r)$ for $\mu = 2$, $\lambda = 300$)

The dependence on $B(0)$ can be seen from

Show[Plot[funcPar[B0][[3]]["Domain"][[1, 2]], {B0, 0.97, 1}
,Frame -> True, FrameLabel -> {B[0], r}]]

(figure: the maximal integration distance in $r$ as a function of $B(0)$)

However, the case I am really after is large $\lambda$, say $\lambda \sim 10^9$. I observe that once $\lambda$ exceeds about $10^4$, the sensitivity to $B(0)$ increases sharply, and so does the number of bisection steps required, until it becomes computationally infeasible. For example, for $\lambda \sim 300$ it takes about 80 steps, for $\lambda \sim 3\times 10^4$ about 400 steps, for $\lambda \sim 10^6$ more than 2000 steps, and so on. So I wonder whether there is a better way to deal with these large $\lambda$'s.
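
A rough bookkeeping of this growth, assuming each bisection step pins down about one binary digit of $B(0)$, translates the observed step counts into the number of decimal digits of $B(0)$ that must be resolved:

Table[{steps, Ceiling[steps Log10[2.]]}, {steps, {80, 400, 2000}}]
(* {{80, 25}, {400, 121}, {2000, 603}} *)

This is consistent with the WorkingPrecision growth hard-coded in the bisection loop above (WorkingprecisionVar = Max[Log10[2^i] + 5, 23]).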

FDM + Equilibrium Method (for large $\lambda$, not working yet)

I have been trying relaxation/equilibrium methods as well. The issues with this approach are that (1) it is not clear which terms I should lag, and (2) after linearizing the system, only the trivial solution $\Phi(r)=0$ gets picked out. A sample relaxation code follows; it does not yet produce the desired solution.

epsilon = 10^(-6); 
rStart1 = 10^(-5); 
rEnd1 = 50; 
pts = 100; 
intIterMax = 15; 
r = Range[rStart1, rEnd1, (rEnd1 - rStart1)/(pts - 1)]; 
rulePar = {μ -> 2, λ -> 300 (*3*10^9*)}; 
boundOfϕLeft = 1/10; 
boundOfϕRight = epsilon; 
boundOfϕdLeft = -epsilon; 
boundOfϕdRight = -epsilon; 
funcInitialϕ[rr_] := ((1 + rr/(rEnd1/10))*boundOfϕLeft)/E^(rr/(rEnd1/10)); 
funcInitialB[r_] := 1; 
funcInitialA[r_] := 1; 
A = {}; 
B = {}; 
ϕ = {}; 
ϕd = {}; 
A = Append[A, (funcInitialA[r[[#1]]] & ) /@ Range[Length[r]]]; 
B = Append[B, (funcInitialB[r[[#1]]] & ) /@ Range[Length[r]]]; 
ϕ = Append[ϕ, (funcInitialϕ[r[[#1]]] & ) /@ Range[Length[r]]]; 
ϕd = Append[ϕd, (D[funcInitialϕ[x], x] /. {x -> r[[#1]]} & ) /@ Range[Length[r]]]; 
dfdr = NDSolve`FiniteDifferenceDerivative[1, r, "DifferenceOrder" -> 1]["DifferentiationMatrix"]; 
dfdr2 = NDSolve`FiniteDifferenceDerivative[2, r, "DifferenceOrder" -> 1]["DifferentiationMatrix"]; 
eq1 = ConstantArray[{}, intIterMax + 1]; 
eq2 = ConstantArray[{}, intIterMax + 1]; 
eq3 = ConstantArray[{}, intIterMax + 1]; 
eq4 = ConstantArray[{}, intIterMax + 1]; 
sol = {}; 
Do[A = Append[A, Table[Unique[fA], Length[r]]]; B = Append[B, Table[Unique[fB], Length[r]]]; ϕ = Append[ϕ, Table[Unique[fϕ], Length[r]]]; 
ϕd = Append[ϕd, Table[Unique[fϕd], Length[r]]]; Evaluate[First[ϕ[[k + 1]]]] = boundOfϕLeft; Evaluate[Last[ϕ[[k + 1]]]] = boundOfϕRight; 
Evaluate[First[ϕd[[k + 1]]]] = boundOfϕdLeft; Evaluate[Last[ϕd[[k + 1]]]] = boundOfϕdRight; 
eq1[[k + 1]] = (μ^2*B[[k + 1]]*ϕ[[k]]*ϕ[[k]] + ϕ[[k]]*ϕ[[k + 1]]) + A[[k + 1]]*ϕd[[k]]*ϕd[[k]] + (1/2)*λ*ϕ[[k]]^3*ϕ[[k + 1]] + dfdr . A[[k + 1]]/r + 
   A[[k + 1]]/r^2 - 1/r^2 /. rulePar; eq2[[k + 1]] = (μ^2*B[[k + 1]]*ϕ[[k]]*ϕ[[k]] - ϕ[[k]]*ϕ[[k + 1]]) + A[[k + 1]]*ϕd[[k]]*ϕd[[k]] - 
   (1/2)*λ*ϕ[[k]]^3*ϕ[[k + 1]] + (A[[k]]/r)*(dfdr . B[[k + 1]]/B[[k]]) - A[[k + 1]]/r^2 + 1/r^2 /. rulePar; 
eq3[[k + 1]] = A[[k]]*dfdr . ϕd[[k + 1]] + (μ^2*B[[k]] - 1)*ϕ[[k + 1]] + ϕd[[k]]*((-A[[k]]/2)*(dfdr . B[[k + 1]]/B[[k]]) + dfdr . A[[k + 1]]/2) + 
   ϕd[[k + 1]]*(2*(A[[k]]/r)) - λ*ϕ[[k + 1]]*ϕ[[k]]^2 /. rulePar; eq4[[k + 1]] = dfdr . ϕ[[k + 1]] - ϕd[[k + 1]] /. rulePar; Off[FindRoot::precw]; 
sol = Append[sol, FindRoot[Flatten[{eq1[[k + 1,2 ;; All]], eq2[[k + 1,2 ;; All]], eq3[[k + 1,2 ;; All]], eq4[[k + 1,2 ;; All]]}] /. If[k > 1, sol[[k - 1]], {}], 
   Join[Table[{A[[k + 1,i]], A[[k,i]]/2}, {i, 1, Length[r]}], Table[{B[[k + 1,i]], B[[k,i]]/2}, {i, 1, Length[r]}], Table[{ϕ[[k + 1,i]], ϕ[[k,i]]/2}, 
      {i, 2, Length[r] - 1}], Table[{ϕd[[k + 1,i]], ϕd[[k,i]]/2}, {i, 2, Length[r] - 1}]] /. If[k > 1, sol[[k - 1]], {}], MaxIterations -> 500]]; , {k, intIterMax}]; 
solϕ = Interpolation[Transpose[{r, Last[ϕ] /. Last[sol]}], InterpolationOrder -> 2]; 
Plot[{funcInitialϕ[r], solϕ[r]}, {r, rStart1, rEnd1}, PlotRange -> All]

Please note that in the above code I replaced $1/A(r)$ with $A(r)$ and $1/B(r)$ with $B(r)$. The output is the figure below (blue: initial guess, yellow: my solution). Increasing the number of iterations drives the solution (yellow curve) to the constant zero. Also note that the iteration code and the shooting code have conflicting definitions; you may want to put them into different contexts if you play with both.

(figure: initial guess (blue) and relaxation output (yellow))

Small $r$, Large $r$ analysis

To address @bbgodfrey's comment: in the shooting method, the value of $A(0)$ was chosen based on the small-$r$ analysis. I expanded everything around $r=0$.

eq1test = (μ^2/B[r] + 1)*ϕ[r]^2 + 1/A[r]*ϕ'[r]^2 + 1/2*λ*ϕ[r]^4 - A'[r]/r/A[r]^2 + 1/r^2/A[r] - 1/r^2;
eq2test = (μ^2/B[r] - 1)*ϕ[r]^2 + 1/A[r]*ϕ'[r]^2 - 1/2*λ*ϕ[r]^4 - B'[r]/r/A[r]/B[r] - 1/r^2/A[r] + 1/r^2;
eq3test = 1/A[r]*ϕ''[r] + (μ^2/B[r] - 1) ϕ[r] + ϕ'[r] (B'[r]/2/A[r]/B[r] - A'[r]/2/A[r]^2 + 2/A[r]/r) - λ*ϕ[r]^3;

Flatten@{Thread[
CoefficientList[Series[eq1test*r^2, {r, 0, 2}], r] == 0], 
Thread[CoefficientList[Series[eq2test*r^2, {r, 0, 2}], r] == 0], 
Thread[CoefficientList[Series[eq3test*r, {r, 0, 1}], r] == 0]};
Solve[%, {A[0], A'[0], A''[0], B[0], B'[0], B''[0], ϕ'[0]}]//Simplify

Output: $\left\{\left\{A(0)\to 1,A'(0)\to 0,A''(0)\to \lambda \phi (0)^4-2 \phi (0) \phi ''(0)+\frac{4 \phi (0)^2}{3},B(0)\to \frac{\mu ^2 \phi (0)}{\lambda \phi (0)^3-3 \phi ''(0)+\phi (0)},B'(0)\to 0,B''(0)\to \frac{\mu ^2 \phi (0)^2 \left(3 \lambda \phi (0)^3-12 \phi ''(0)+2 \phi (0)\right)}{3 \left(\lambda \phi (0)^3-3 \phi ''(0)+\phi (0)\right)},\phi '(0)\to 0\right\}\right\}$

Just for completeness, for large $r$:

intExpansOrder = 6;
ruleExpans[intExpansOrder_] := {A[r] -> Sum[a[i]/r^i, {i, 0, intExpansOrder}],
B[r] -> Sum[b[i]/r^i, {i, 0, intExpansOrder}], ϕ[r] -> 
Sum[f[i]/r^i, {i, 0, intExpansOrder}]};
(*rulePar={μ\[Rule]3,λ\[Rule]1000000};*)
rulePar = {μ -> 2, λ -> 300};

Unevaluated[(μ^2/B[r] + 1)*ϕ[r]^2 + 
 1/(A[r])*D[ϕ[r], r]^2 + 1/2*λ*ϕ[r]^4 - 
 D[A[r], r]/r/A[r]^2 + 1/r^2/A[r] - 1/r^2] /. 
ruleExpans[intExpansOrder] /. rulePar;
lstCoeffLargeReq1 = CoefficientList[Series[%, {r, Infinity, intExpansOrder}] // Normal,  1/r];

Unevaluated[(μ^2/B[r] - 1)*ϕ[r]^2 + 
 1/(A[r])*D[ϕ[r], r]^2 - 1/2*λ*ϕ[r]^4 - 
 D[B[r], r]/r/A[r]/B[r] - 1/r^2/A[r] + 1/r^2] /.ruleExpans[intExpansOrder] /. rulePar; 
lstCoeffLargeReq2 = CoefficientList[Series[%, {r, Infinity, intExpansOrder}] // Normal,    1/r];

Unevaluated[
1/A[r]*D[ϕ[r], {r, 2}] + (μ^2/B[r] - 1) ϕ[r] + 
 D[ϕ[r], 
   r] (D[B[r], r]/2/A[r]/B[r] - D[A[r], r]/2/A[r]^2 + 
    2/A[r]/r) - λ*ϕ[r]^3] /.ruleExpans[intExpansOrder] /. rulePar;
lstCoeffLargeReq3 =  CoefficientList[Series[%, {r, Infinity, intExpansOrder}] // Normal,    1/r];

solLargeR =   Solve[Thread[Flatten@{lstCoeffLargeReq1, lstCoeffLargeReq2, 
    lstCoeffLargeReq3} == 0], 
Flatten@Table[{a[i], b[i], f[i]}, {i, 0, 
   intExpansOrder - 2}] ]; // AbsoluteTiming

rulePar
A[r] /. ruleExpans[intExpansOrder - 2] /. solLargeR
B[r] /. ruleExpans[intExpansOrder - 2] /. solLargeR
ϕ[r] /. ruleExpans[intExpansOrder - 2] /. solLargeR

Output: $\left\{1,-\frac{1044}{7 r^4}-\frac{4}{r^2}+1,-\frac{1044}{7 r^4}-\frac{4}{r^2}+1\right\}$, $\left\{4,\frac{1072}{7 r^4}+\frac{8}{r^2}+4,\frac{1072}{7 r^4}+\frac{8}{r^2}+4\right\}$, $\left\{0,-\frac{430 \sqrt{2}}{7 r^4}-\frac{\sqrt{2}}{r^2},\frac{430 \sqrt{2}}{7 r^4}+\frac{\sqrt{2}}{r^2}\right\}$.

Based on the requirement $\Phi>0$, the last solution is what I want.

(B-)Spline Method (update: 5/22/2018)

As @user21 pointed out, imposing conditions on both $\phi$ and $\phi'$ at both ends can be tricky for FEM. As an alternative, I am posting my attempt at using a spline basis to attack the problem.

Set up grid:

epsilon = 10^-6;
rStart1 = 10^-5;
rEnd1 = 50;
Nmax = 300;
spacing = (rEnd1 - rStart1)/(Nmax);
Clear[r];
Do[r[i] = rStart1 + spacing*i, {i, -3, Nmax + 3}];
rulePar = {μ -> 2, λ -> 0(*300*)};

Define cubic spline:

B3[x_, i_, h_] := 1/h^3*Piecewise[
    {{(x - r[i - 2])^3, r[i - 2] <= x <= r[i - 1]},
    {h^3 + 3 h^2 (x - r[i - 1]) + 3 h (x - r[i - 1])^2 -  3 (x - r[i - 1])^3, r[i - 1] <= x <= r[i]},
    {h^3 + 3 h^2 (r[i + 1] - x) + 3 h (r[i + 1] - x)^2 -  3 (r[i + 1] - x)^3, r[i] <= x <= r[i + 1]},
    {(r[i + 2] - x)^3,  r[i + 1] <= x <= r[i + 2]}}
   , 0]

Then I linearize the ODE's using Newton's method:

A[x_, k_] := Sum[ca[i, k]*B3[x, i, spacing], {i, -1, Nmax + 1}]
B[x_, k_] := Sum[cb[i, k]*B3[x, i, spacing], {i, -1, Nmax + 1}]
ϕ[x_, k_] := Sum[cϕ[i, k]*B3[x, i, spacing], {i, -1, Nmax + 1}]

dϕ[x0_, k_] := D[ϕ[x, k], x] /. {x -> x0};
d2ϕ[x0_, k_] := D[ϕ[x, k], {x, 2}] /. {x -> x0};
dA[x0_, k_] := D[A[x, k], x] /. {x -> x0};
dB[x0_, k_] := D[B[x, k], x] /. {x -> x0};

eq1[x_, k_] := μ^2*(B[x, k] ϕ[x, k]^2 + 
   2 B[x, k] ϕ[x, k] ϕ[x, k + 1] + 
   B[x, k + 1] ϕ[x, k]^2) + ϕ[x, k]^2 + 
   2 ϕ[x, k] ϕ[x, k + 1] + A[x, k] dϕ[x, k]^2 + 
   2 A[x, k] dϕ[x, k] dϕ[x, k + 1] + 
   A[x, k + 1] dϕ[x, k]^2 + 
   1/2*λ*(ϕ[x, k]^4 + 
   4 ϕ[x, k]^3 ϕ[x, k + 1]) + 
  (dA[x, k] + dA[x, k + 1])/x + 
  (A[x, k] + A[x, k + 1])/x^2 - 1/x^2 /. rulePar; 

eq2[x_, k_] := μ^2*(B[x, k] ϕ[x, k]^2 + 
   2 B[x, k] ϕ[x, k] ϕ[x, k + 1] + 
   B[x, k + 1] ϕ[x, k]^2) - (ϕ[x, k]^2 + 
   2 ϕ[x, k] ϕ[x, k + 1]) + (A[x, k] dϕ[x, k]^2 + 
   2 A[x, k] dϕ[x, k] dϕ[x, k + 1] + 
   A[x, k + 1] dϕ[x, k]^2) - 
   1/2*λ*(ϕ[x, k]^4 + 
   4 ϕ[x, k]^3 ϕ[x, k + 1]) + 
   1/x*(A[x, k] dB[x, k]/B[x, k] + 
   A[x, k + 1] dB[x, k]/B[x, k] + 
   A[x, k] dB[x, k + 1]/B[x, k] - 
   A[x, k] dB[x, k]/B[x, k]^2*B[x, k + 1]) - 
   (A[x, k] + A[x, k + 1])/x^2 + 
   1/x^2 /. rulePar; 

eq3[x_, k_] := (A[x, k]*d2ϕ[x, k] + A[x, k + 1]*d2ϕ[x, k] + 
  A[x, k]*d2ϕ[x, k + 1]) + μ^2*(B[x, k] ϕ[x, k] + 
   B[x, k + 1] ϕ[x, k] + B[x, k] ϕ[x, k + 1]) - (ϕ[
   x, k] + ϕ[x, k + 1]) + 
   -1/2*(dϕ[x, k] A[x, k] dB[x, k]/B[x, k] + 
   dϕ[x, k + 1] A[x, k] dB[x, k]/B[x, k] + 
   dϕ[x, k] A[x, k + 1] dB[x, k]/B[x, k] + 
   dϕ[x, k] A[x, k] dB[x, k + 1]/B[x, k] - 
   dϕ[x, k] A[x, k] dB[x, k]/B[x, k]^2*B[x, k + 1]) + 
   1/2*(dϕ[x, k] dA[x, k] + dϕ[x, k + 1] dA[x, k] + 
   dϕ[x, k] dA[x, k + 1]) + 
   2/x*(dϕ[x, k] A[x, k] + dϕ[x, k + 1] A[x, k] + 
   dϕ[x, k] A[x, k + 1]) - 
   λ*(ϕ[x, k]^3 +  3 ϕ[x, k]^2*ϕ[x, k + 1]) /. rulePar;

I take an initial guess and use NonlinearModelFit to get the expansion coefficients of the cubic spline:

initialDataϕ = Table[{x, (1 + x/(rEnd1/10))*E^(-x/(rEnd1/10))*1/10}, {x, rStart1, rEnd1, spacing/2}];
inifitϕ = NonlinearModelFit[initialDataϕ, Sum[cϕ[i, 0]*B3[x, i, spacing], {i, -1, Nmax + 1}], Table[cϕ[i, 0], {i, -1, Nmax + 1}], x]
Show[ListPlot[initialDataϕ], Plot[inifitϕ[x], {x, rStart1, rEnd1}, PlotStyle -> Orange]]

initialDataA = Table[{x, 1}, {x, rStart1, rEnd1, spacing}];
inifitA = NonlinearModelFit[initialDataA, Sum[ca[i, 0]*B3[x, i, spacing], {i, -1, Nmax + 1}], Table[ca[i, 0], {i, -1, Nmax + 1}], x]
(*Show[ListPlot[initialDataA],Plot[inifitA[x],{x,rStart1,rEnd1}, PlotStyle -> Orange]]*)

initialDataB = Table[{x, 1}, {x, rStart1, rEnd1, spacing}];
inifitB = NonlinearModelFit[initialDataB, Sum[cb[i, 0]*B3[x, i, spacing], {i, -1, Nmax + 1}], Table[cb[i, 0], {i, -1, Nmax + 1}], x]
(*Show[ListPlot[initialDataB],Plot[inifitB[x],{x,rStart1,rEnd1}, PlotStyle -> Orange]]*)

My initial guess and spline fitting of that.

Then I start the iteration

sol = {};(*The correction of each order*)
ruleCorrection = {};(*The base + correction of each order*)
ruleCorrection = Append[ruleCorrection, 
Join[inifitϕ["BestFitParameters"], 
inifitA["BestFitParameters"], 
inifitB["BestFitParameters"]]]; 
(*The zeroth order, i.e. the guess*)

Do[
  bc = {
    dϕ[rStart1, k] == 0(*epsilon*), 
    ϕ[rStart1, k] == 0(*1/10*), 
    dϕ[rEnd1, k] == 0(*epsilon*), 
    ϕ[rEnd1, k] == 0(*epsilon*),
    A[rStart1, k] ==(*1*)0, 
    dB[rStart1, k] == 0(*epsilon*)
    };
  varUndeter = Flatten[{{ca[#, k], 0}, {cb[#, k], 0}, {cϕ[#, k], 0}} & /@ Range[-1, Nmax + 1], 1];
  eqset =  Flatten@Table[
  Evaluate@{eq1[x, k - 1] == 0, eq2[x, k - 1] == 0, 
    eq3[x, k - 1] == 0}, {x, rStart1, rEnd1, spacing}] /.   ruleCorrection[[-1]];

  sol = Append[sol, FindRoot[Join[eqset, bc], varUndeter, WorkingPrecision -> Automatic]];
  (*calculate the correction*)
  ruleCorrection = Append[ruleCorrection, 
  Thread[Flatten@Table[{ca[i, k], cb[i, k], cϕ[i, k]}, {i, -1,  Nmax + 1}] 
  -> (Flatten@Table[{ca[i, k - 1], cb[i, k - 1],  cϕ[i, k - 1]}, {i, -1, Nmax + 1}] /. ruleCorrection[[-1]]) 
  + (Flatten@Table[{ca[i, k], cb[i, k], cϕ[i, k]}, {i, -1,  Nmax + 1}] /. sol[[-1]])]];
  (*combine the correction to the base*)
  , {k, 1, 2}] // AbsoluteTiming

(*output: `{816.48, Null}`*)

(figures: the spline solution after the first and second iterations)

Unfortunately, the first iteration looks about right but the second iteration completely destroys the function. I suspect there is something wrong with my way of linearization. Any idea?

Update 5/23/2018

It seems the issue is related to the poor convergence of Newton's method combined with my poor initial guess. I added under-relaxation to the iteration step, updating $\phi \rightarrow \phi + \tfrac{1}{10}\,\Delta \phi$ at each iteration.

weight = 1/10;
ruleCorrection = Append[ruleCorrection,
    Thread[Flatten[Table[{ca[i, k], cb[i, k], cϕ[i, k]}, {i, -1, Nmax + 1}]] ->
    (Flatten[Table[{ca[i, k - 1], cb[i, k - 1], cϕ[i, k - 1]}, {i, -1, Nmax + 1}]] /. ruleCorrection[[-1]])
    + weight*(Flatten[Table[{ca[i, k], cb[i, k], cϕ[i, k]}, {i, -1, Nmax + 1}]] /. sol[[-1]])]];

Now the convergence looks a bit better in the first few steps, but at the 4th iteration there is a sudden large change. I am starting to doubt whether this iteration converges at all.

Plot[{1/10 (x/(rEnd1/10) + 1) E^(-(x/(rEnd1/10))), ϕ[x, 2] /. ruleCorrection[[3]], 
  ϕ[x, 3] /. ruleCorrection[[4]], ϕ[x, 4] /. ruleCorrection[[5]]}, 
 {x, rStart1, rEnd1}, PlotRange -> All, AxesLabel -> {"r", "ϕ(r)"}, 
 PlotLegends -> Placed[{"Initial Guess", "3rd Iteration", "4th Iteration", "5th Iteration"}, {Right, Top}]]

(figure: the initial guess and the 3rd–5th iterations of ϕ(r))

Update:

Using Nmax = 500 grid and MachinePrecision, I get the following plot.

(figure: the result with Nmax = 500 and MachinePrecision)

It seems the iteration does fine in regions where the derivative is small, but becomes unstable wherever the derivative gets large. I suspect this has something to do with the way I linearize the derivative terms. I am also thinking about non-uniform splines, but I am not sure of the best way to implement them in Mathematica; one option is sketched below.
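
One possible way to build a non-uniform cubic spline basis in Mathematica, as an alternative to the hand-coded uniform B3 above, is the built-in BSplineBasis with a clamped, stretched knot vector. This is only a sketch and is not wired into the iteration; the names nNodes, nodes, knots, and basis are mine.

(* Sketch of a non-uniform cubic B-spline basis; here the nodes are clustered toward 
   small r as an example. The clustering would need to be adapted to wherever the 
   derivatives are actually large. *)
nNodes = 60;
nodes = rStart1 + (rEnd1 - rStart1) Subdivide[nNodes]^2;  (* quadratic clustering near rStart1 *)
knots = Join[ConstantArray[First[nodes], 3], nodes, ConstantArray[Last[nodes], 3]];
basis[j_, x_] := BSplineBasis[{3, knots}, j, x];          (* j runs from 0 to Length[knots] - 5 *)
Plot[Evaluate@Table[basis[j, x], {j, 0, 8}], {x, rStart1, 5}, PlotRange -> All]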

Update: Now it's clear that the divergence is caused by $A(r)$ and $B(r)$. The solution oscillates like crazy. I am amazed that $\phi(r)$ still looks okay somehow.

(figures: the oscillating solutions for A(r) and B(r))

I wonder if I should calculate $\phi$, $A$, $B$ in different iterations.

Update: I think the issue is related to the fact that I use a cubic B-spline expansion for all three functions $A, B, \phi$, which requires too many boundary conditions. Let's count the constraints and the independent variables (the coefficients of the spline expansion).

We have $x_0, x_1, ..., x_N$, a total of $N+1$ nodal points.

For each function to be solved, using cubic B-splines we need $B_{-1}, B_{0}, ..., B_{N+1}$, a total of $N+3$ splines per function, hence $N+3$ coefficients to be determined. Collocating at the $N+1$ nodes gives $N+1$ equations per function, so each function needs $(N+3) - (N+1) = 2$ additional boundary conditions. This suggests that a cubic B-spline basis is naturally suited to second-order differential equations, which come with two boundary conditions. If I expand all three variables in cubic splines, I need six boundary conditions, which is what I did in the sample code by throwing in 'redundant' boundary conditions. However, I don't think this is correct, as the system should be solvable with only four boundary conditions. I have tried expanding $A$ with linear B-splines to reduce the required boundary conditions by two, but then FindRoot starts to complain about a singular Jacobian. This doesn't look legitimate either: since both $A$ and $B$ appear only through first derivatives, it makes little sense to expand $A$ with linear splines and $B$ with cubic splines just so that the total number of degrees of freedom comes out right.

So the question now is: what is the right B-spline basis to use for expanding this coupled ODE system? (The counting is made concrete in the short snippet below.)
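
Just to make the counting explicit (illustration only, with $N$ taken to be the Nmax of the grid above):

(* N + 1 collocation points versus N + 3 cubic-B-spline coefficients per function,
   for the three functions A, B, ϕ. *)
With[{n = Nmax},
 {"coefficients" -> 3 (n + 3), "collocation equations" -> 3 (n + 1),
  "extra conditions needed" -> 3 (n + 3) - 3 (n + 1),
  "physical boundary conditions available" -> 4}]
(* the mismatch, 6 needed versus 4 available, is the problem described above *)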

Boson Bear
  • Well, this type of problem is just… hard. Have you tried using the solution for small $\lambda$ (or small rEnd1) as an initial guess and increasing its value slowly? Here's an example. – xzczd May 20 '18 at 03:45
  • @xzczd thanks for the comment. I did start with small $\lambda$, but due to the demanding precision of the initial guess, I don't think the initial value from the small-$\lambda$ solution can be used for the large-$\lambda$ case. As for lowering rEnd1, I tried that too. The problem is that lowering rEnd1 requires good knowledge of the asymptotic behavior of $\Phi$ at large $r$, which I couldn't find. If I lower rEnd1 and demand $\Phi(rEnd1) = 0$ by brute force, the shape of the solution changes a lot, unfortunately. – Boson Bear May 20 '18 at 04:27
  • Why are you assuming A[rStart1] == 1 in the λ == 300 computation? – bbgodfrey May 20 '18 at 13:24
  • @bbgodfrey there are two pieces of evidence: 1) the small-$r$ expansion, 2) numerically, if I set $A(0)$ to a different value, say 1.3, it is pulled down to 1 immediately. I have added some code for the expansion. – Boson Bear May 20 '18 at 14:46
  • @Michael E2 thanks a lot for the edit! I always have trouble typing up the greek letters in my code... – Boson Bear May 20 '18 at 16:01
  • There are some useful tools for formatting posts here and here. You may also find this helpful. – Michael E2 May 20 '18 at 16:06
  • Solving this with FEM would need some thought as the boundary conditions on both sides involve y and y', which is currently not possible with FEM. – user21 May 21 '18 at 04:59
  • @user21 exactly! I was having a hard time deciding which equation to replace with the boundary condition. If I use a forward difference it's the last equation, and with a backward difference it's the first, but y' makes it more complicated. Do you think increasing the difference order would help by any chance? – Boson Bear May 21 '18 at 12:40
  • Hard to say, typically, I'd think that a higher order gives a better solution but that a solution with a lower order can not be found at all is less common I would think. – user21 May 22 '18 at 04:54
  • @user21 I have switched to the cubic bspline method with which I can impose the boundary conditions now, but I am still struggling with the linearization. Any idea? – Boson Bear May 22 '18 at 22:03
  • What precisely are you seeking from this calculation? Is it just to see the curves at some enormous value of λ, or something else. – bbgodfrey May 23 '18 at 01:59
  • @bbgodfrey yes, I need the curves at large $\lambda$. It's okay if the solution is approximate. That is the main reason I ditched the shooting method: it's hit or miss, and it gets harder and harder to hit at large $\lambda$. When it misses, I don't get any meaningful solution at all. Naively, I'd think the relaxation method would serve this purpose better, as it at least gives me a curve, up to a certain precision. – Boson Bear May 23 '18 at 02:25
  • @BosonBear, For how to do linearization for nonlinear PDEs you can have a look at chapter 4.2.4 of this dissertation. I hope that helps. It's a bit complicated though. Good luck! – user21 May 23 '18 at 05:46
  • @user21 Thanks! It looks very relevant. I'll give it a try. – Boson Bear May 23 '18 at 12:58
  • @user21 I checked the reference. It is very nice and generic but I think the method is basically the same as what I use here. In the example, $u\partial_x u + u \partial_y u$ is linearized to $(u_0 \partial_x u_0 + u_0 \partial_y u_0) + (u_0 \partial_x (u-u_0) + u_0 \partial_y (u-u_0)) +((u-u_0) \partial_x u_0 + (u-u_0 )\partial_y u_0)$, which is exactly what I did here. Still, the non-convergence indicates maybe there is something wrong with my linearization. – Boson Bear May 23 '18 at 21:03
  • @user21 I have posted my code of implementing cubic FEM. However, after I get the bilinear form and load vector, the linear system is either not solvable (with complaints of singular, badly conditioned, etc.) or solved with great numeric noise. Would you mind taking a quick look? – Boson Bear May 28 '18 at 18:42

2 Answers


Edit: I have reorganized my earlier answer and added a significant amount of new material.

Transformed Equations

As suggested in the question, the equations are simpler, if A and B are replaced by their reciprocals.

eq1 = (μ^2 B[r] + 1) ϕ[r]^2 + A[r] ϕ'[r]^2 + λ ϕ[r]^4/2 + A'[r]/r + (A[r] - 1)/r^2; 
eq2 = (μ^2 B[r] - 1) ϕ[r]^2 + A[r] ϕ'[r]^2 - λ ϕ[r]^4/2 + B'[r] (A[r]/B[r])/r - 
    (A[r] - 1)/r^2;
eq3 = A[r] ϕ''[r] + (μ^2 B[r] - 1) ϕ[r] + ϕ'[r] (A'[r]/2 - B'[r] (A[r]/B[r])/2 + 
    2 A[r]/r) - λ ϕ[r]^3;

Next, rescale B by μ^2/λ and r by Sqrt[λ] to eliminate the parameter μ and cause λ to appear only as 1/λ. The latter is useful, because the OP is seeking solutions for very large λ. Other advantages will become apparent below.

eq1 = (B[r] + 1/λ) ϕ[r]^2 + A[r] ϕ'[r]^2 + ϕ[r]^4/2 + A'[r]/r + (A[r] - 1)/r^2; 
eq2 = (B[r] - 1/λ) ϕ[r]^2 + A[r] ϕ'[r]^2 - ϕ[r]^4/2 + 
    B'[r] (A[r]/B[r])/r - (A[r] - 1)/r^2;
eq3 = A[r] ϕ''[r] + (B[r] - 1/λ) ϕ[r] + ϕ'[r] 
    (A'[r]/2 - B'[r] (A[r]/B[r])/2 + 2 A[r]/r) - ϕ[r]^3; 
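
A quick way to verify this rescaling: writing the old B(r) as (λ/μ^2) b(Sqrt[λ] r) and the old r as ρ/Sqrt[λ], with A and ϕ unchanged, turns the reciprocal-form equations into λ times the equations above. The symbols a, b, f, ρ below are placeholders for the rescaled A, B, ϕ, r; this check is not used in the computations that follow.

(* The reciprocal-form equations, under names that do not clobber eq1, eq2, eq3. *)
eq1o = (μ^2 B[r] + 1) ϕ[r]^2 + A[r] ϕ'[r]^2 + λ ϕ[r]^4/2 + A'[r]/r + (A[r] - 1)/r^2;
eq2o = (μ^2 B[r] - 1) ϕ[r]^2 + A[r] ϕ'[r]^2 - λ ϕ[r]^4/2 + B'[r] (A[r]/B[r])/r - (A[r] - 1)/r^2;
eq3o = A[r] ϕ''[r] + (μ^2 B[r] - 1) ϕ[r] + ϕ'[r] (A'[r]/2 - B'[r] (A[r]/B[r])/2 + 2 A[r]/r) - λ ϕ[r]^3;
ruleScale = {A -> Function[r, a[Sqrt[λ] r]], B -> Function[r, (λ/μ^2) b[Sqrt[λ] r]],
   ϕ -> Function[r, f[Sqrt[λ] r]]};
scaled = Simplify[({eq1o, eq2o, eq3o} /. ruleScale /. r -> ρ/Sqrt[λ])/λ, λ > 0];
target = {eq1, eq2, eq3} /. {A -> a, B -> b, ϕ -> f, r -> ρ};
Simplify[scaled - target]
(* {0, 0, 0} *)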

Incidentally, eq3 can be replaced by

Collect[eq3 - ϕ'[r] (eq1 - eq2) r/2, {ϕ''[r], ϕ'[r]}, Simplify]
(* ϕ[r] (-(1/λ) + B[r] - ϕ[r]^2) + (1/r + A[r]/r - 
   (r ϕ[r]^2)/λ - 1/2 r ϕ[r]^4) ϕ'[r] + A[r] ϕ''[r] *)

although I have not experimented with the consequences of doing so.

Numerical Solutions

The equations above can be solved, up to a point, by the straightforward method described here and elsewhere. In essence, we seek to find the value of B[rStart1] that maximizes the distance in r to which the equations can be integrated. Since this is a separatrix problem, integration is terminated when the solution begins moving away from the separatrix in the positive direction, estimated by ϕ'[r] == 0, or the negative direction, estimated by ϕ[r] == 0. If values of B[rStart1] corresponding to each of these cases are known, successive integrations each can reduce the range of B[rStart1] in which the separatrix lies by a factor of two, extending the integration distance further and further. The code used to do so in this case is,

rStart1 = 10^-5; rEnd1 = 50000; epsilon = 0; ϕ0 = 1/10; wp = 180; imax = 700;
eq1 = (B[r] + 1/λ) ϕ[r]^2 + A[r] ϕ'[r]^2 + ϕ[r]^4/2 + A'[r]/r + ( A[r] - 1)/r^2; 
eq2 = (B[r] - 1/λ) ϕ[r]^2 + A[r] ϕ'[r]^2 - ϕ[r]^4/2 + B'[r] (A[r]/B[r])/r - 
    (A[r] - 1)/r^2;
eq3 = A[r] ϕ''[r] + (B[r] - 1/λ) ϕ[r] + ϕ'[r] (A'[r]/2 - B'[r] (A[r]/B[r])/2 + 
    2 A[r]/r) - ϕ[r]^3; 
bc = {A[rStart1] == 1, B[rStart1] == B0, ϕ[rStart1] == ϕ0, ϕ'[rStart1] == epsilon}; 
sp = ParametricNDSolveValue[{eq1 == 0, eq2 == 0, eq3 == 0, bc,
    WhenEvent[ϕ[r] < 0, {bool = 0, "StopIntegration"}, "LocationMethod" -> "StepEnd"], 
    WhenEvent[ϕ'[r] > 0 || ϕ[r] > ϕ0, {bool = 1, "StopIntegration"}, 
        "LocationMethod" -> "StepEnd"]} /. λ -> λ0, 
    {A, B, ϕ}, {r, rStart1, rEnd1}, {B0, λ0, wp0}, 
        WorkingPrecision -> wp0, MaxSteps -> Infinity, 
        Method -> "StiffnessSwitching", Method -> {"ParametricSensitivity" -> None}];
bl = 1/100; bu = 1/80; 
stats = ConstantArray[0, imax];
Row[{ProgressIndicator[Dynamic[ip], {0, imax}], "  ", Dynamic[ip], 
    "  ", Dynamic[bdyn], "  ", Dynamic[N@rm], "  ", Dynamic[bmiddle]}]
Do[bool = -1; bmiddle = (bl + bu)/2; s = sp[bmiddle, 100000, wp]; 
    rm = s[[3]]["Domain"][[1, 2]]; bdyn = bool; stats[[i]] = {i, rm, bool, bmiddle}; 
    If[bool == 1, bl = bmiddle, bu = bmiddle]; ip = i; 
    If[bool == -1, Return[]], {i, imax}] // AbsoluteTiming

The optimized value of B0 obtained in this way for λ == 100000 is 0.0101603190096846487154430833597419576494441536137841024338916374804789611324584810092661586403343272251720256418629576501213248819017483069285571201404204397892103445395443769068014. The computation ran out of WorkingPrecision (set to 180) for i near 620, at which point it had reached an r of about 30000. It took over 26 hours.

Superimposed plots of the results for several values of λ appear below.

(figures: superimposed plots of ϕ, A, and B for several values of λ)

Dashed line segments represent approximate symbolic solutions, described below. inf corresponds to λ == Infinity.

Approximate Solutions for r Much Greater than λ.

When ϕ[r] is exponentially small, which occurs for r much greater than λ, ϕ[r] can be ignored in eq1 and eq2, and ϕ[r]^3 can be ignored in eq3.

eq1a = A'[r]/r + (A[r] - 1)/r^2;
eq2a = B'[r] (A[r]/B[r])/r - (A[r] - 1)/r^2;

which can be solved by

DSolve[{eq1a == 0, eq2a == 0}, {A[r], B[r]}, r] // Flatten // Simplify
(* {A[r] -> (r + C[1])/r, B[r] -> (r C[2])/(r + C[1])} *)

Inserting these into the linearized eq3 then gives

eq3a = Simplify[(A[r] ϕ''[r] + (B[r] - 1/λ) ϕ[r] + ϕ'[r] (A'[r]/2 - B'[r] 
    (A[r]/B[r])/2 + 2 A[r]/r)) /. {A -> Function[r, (r + C[1])/r], 
    B -> Function[r, (r C[2])/(r + C[1])]}]
(* (-(1/λ) + (r C[2])/(r + C[1])) ϕ[r] + ((2 r + C[1]) ϕ'[r] + r (r + C[1]) ϕ''[r])/r^2 *)

The further simplification C[1] -> 0, which typically is reasonable in this regime, then leads to

DSolve[Simplify[eq3a /. C[1] -> 0] == 0, ϕ[r], r] // Flatten
(* {ϕ[r] -> (E^(-((r Sqrt[1 - λ C[2]])/Sqrt[λ])) C[3])/r + 
   (E^((r Sqrt[1 - λ C[2]])/Sqrt[λ]) Sqrt[λ] C[4])/(2 r Sqrt[1 - λ C[2]])} *)

The second, exponentially growing term, vanishes by assumption, requiring C[4] == 0. So,

 (* ϕ[r] -> E^(-r Sqrt[1/λ - C[2]]) C[3]/r *)

Unfortunately, there is no obvious way to determine the three constants {C[1], C[2], C[3]} apart from fitting the approximate symbolic solutions to their numerical counterparts. Still, the symbolic approximations give additional insights into the asymptotic behavior of the solutions. Incidentally, it is possible to solve eq3a numerically without setting C[1] to zero, if good estimates can be made for ϕ[r] and ϕ'[r] at some large value of r, perhaps based on the approximate symbolic solution just above. For instance, with λ == 1000, plotted earlier,

sa = NDSolveValue[{eq3a == 0, ϕ[rm - 10] == s[[3]][rm - 10], ϕ'[rm - 10] == 
    s[[3]]'[rm - 10]}, ϕ, {r, 20, rm}, WorkingPrecision -> 30];

Plotting this improved approximation together with the numerical curve for ϕ yields very good agreement for r as small as 25.

Quiet@LogPlot[{s[[3]][r], If[r > 20, sa[r]]}, {r, rStart1, rm}, 
    AxesLabel -> {r, ϕ}, opt // Evaluate, PlotRange -> {10^-16, 1}]

(figure: log plot of the numerical ϕ together with the improved approximation sa)

The dashed colored curves in the plots of A and B above are the symbolic approximations to A and B derived in this section, with C[1] and C[2] obtained from the corresponding numerical solutions near their maximum values in r. A table of {λ, C[1], C[2], C[1]/λ, C[2] λ} follows.

(table: {λ, C[1], C[2], C[1]/λ, C[2] λ})
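
For reference, this is roughly how such constants can be read off from a numerical solution s near its largest computed radius rm, using the asymptotic forms A[r] == (r + C[1])/r and B[r] == r C[2]/(r + C[1]) from above. This is a sketch, not the code that produced the table, and rFit is just an arbitrary evaluation point:

rFit = rm - 10;
c1 = rFit (s[[1]][rFit] - 1);       (* from A[r] == (r + C[1])/r *)
c2 = s[[2]][rFit] (rFit + c1)/rFit; (* from B[r] == r C[2]/(r + C[1]) *)
{c1, c2}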

Approximate Solutions for r Somewhat Less than λ but Larger than about 1000.

For r somewhat smaller than λ but not too small, the dominant terms in eq3 are (B[r] - 1/λ) ϕ[r] and ϕ[r]^3, so they must approximately cancel. In other words, ϕ[r] == Sqrt[B[r] - 1/λ]. Inserting this expression into eq1 and eq2 and again keeping only dominant terms then gives A[r] == 4/7 and B[r] == Sqrt[2/7]/r. These are plotted as black dashed lines in the first three figures. Visibly, they approximate well the curves for large λ (and for not so large λ, but only over small ranges of r). It is interesting to ask, therefore, over what ranges of r and λ is ϕ[r] == Sqrt[B[r] - 1/λ] a good approximation numerically. The following plot of ϕ[r]^2/(B[r] - 1/λ) illustrates the answer.

(figure: ϕ[r]^2/(B[r] - 1/λ) as a function of r)
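
As a quick consistency check of this dominant balance, substituting A == 4/7, B == Sqrt[2/7]/r, and ϕ == Sqrt[B[r]] (the λ -> Infinity limit of Sqrt[B[r] - 1/λ]) into the rescaled equations makes the O(1/r^2) terms cancel, so the residuals times r^2 vanish as r -> Infinity:

(* Check of the dominant balance in the λ -> Infinity limit. *)
ruleDB = {A -> Function[r, 4/7], B -> Function[r, Sqrt[2/7]/r],
   ϕ -> Function[r, (2/7)^(1/4)/Sqrt[r]]};
Limit[r^2 ({eq1, eq2, eq3} /. λ -> Infinity /. ruleDB), r -> Infinity]
(* {0, 0, 0} *)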

The approximation seems good roughly for 1000 < r < λ/3. Substituting ϕ[r] == Sqrt[B[r] - 1/λ] into eq1 and eq2, solving for A'[r] and B'[r], and making no further approximations yields,

Simplify[{eq1, eq2} /. ϕ -> Function[r, Sqrt[B[r] - 1/λ]]];
eqλ = Last[Simplify[Solve[% == 0, {A'[r], B'[r]}], r > 0] /. Rule -> Equal]
(* {A'[r] == 1/(r^2 λ^3 B[r]^2) (r λ (r^2 + 2 λ^2) B[r]^2 - r^3 λ^3 B[r]^4 
    - 2 r λ^2 A[r] (-1 + λ B[r] + λ B[r]^2) + Sqrt[2] r Sqrt[-λ^3 A[r] 
    (-1 + λ B[r]) (-2 λ A[r] (-1 + λ B[r] + λ B[r]^2) + 
    B[r]^2 (r^2 + 2 λ^2 - 2 r^2 λ B[r] + r^2 λ^2 B[r]^2))]), 
    B'[r] == (-2 λ^2 A[r] (-1 + λ B[r]) + Sqrt[2] Sqrt[λ^3 A[r] (-1 + λ B[r]) 
    (2 λ A[r] (-1 + λ B[r] + λ B[r]^2) - B[r]^2 (r^2 + 2 λ^2 - 2 r^2 λ B[r]
    + r^2 λ^2 B[r]^2))])/(r λ^3 A[r] B[r]) *)

These equations can be solved in seconds for λ == 100000 (for instance) using

rStart1 = 10^-5; rEnd1 = 30000;
NDSolveValue[{eqλ /. λ -> 100000, A[rStart1] == 1, B[rStart1] == 1/100}, {A, B}, {r, 
    rStart1, rEnd1}, Method -> "StiffnessSwitching", WorkingPrecision -> 120]

Results are indistinguishable to the eye over the range of r shown in the first three figures, except near r == 30000. Indeed, this computation can be performed for λ == 10^8 out to about r == 10^8/3.

In summary, the system of ODEs can be solved numerically for all large λ out to r == 30000 or moderately more, although slowly. It also can be solved symbolically at very large λ for r < λ/3 and for r > 2 λ (rough limits), up to the values of three constants, two of which can be estimated from the table above. The solution between these two regions can only be guessed but probably looks much like curves of, say, λ == 30000 in the first three figures.

bbgodfrey
  • thanks a lot! I'll study the answer first thing this morning. – Boson Bear May 21 '18 at 12:41
  • That's a very interesting analysis of large $\lambda$. I tried to rescale $\lambda$ away but didn't think of this $\sqrt \lambda$ rescaling. That's brilliant. Still, I have two questions: 1) why do you need the third WhenEvent for $\phi>2\phi(0)$? Isn't that captured by the $\phi'>0$ condition? 2) I don't understand how the code at the beginning helps with the shooting for large $\lambda$ when it's as large as $\sim 10^9$. Do you think there's a way to combine your asymptotic analysis into the shooting? One last comment: indeed the rescaling of $B$ gets rid of $\mu$. Sharp eye! – Boson Bear May 21 '18 at 12:56
  • Ah, I got it. You just verified that the large $\lambda$ approximation works. So for large $\lambda$ one just needs to solve the 'approximation' equations. Let me try it. Thanks a million. – Boson Bear May 21 '18 at 13:06
  • I tried to numerically solve your eq1a, eq2a, and eq3 with rescaled $\phi$ but didn't have much success. It seems the shooting strategy fails for the approximated equation set and only $\phi = $ constant are picked out. Any idea? – Boson Bear May 21 '18 at 16:14
  • The approximated equations break down at small r. Nonetheless, the asymptotic solutions may provide useful insight. Also, remember that, if you use the rescaled ϕ, you must rescale the boundary conditions too. – bbgodfrey May 21 '18 at 16:22
  • Indeed, I think your answer provides great insight into the large-$\lambda$, large-$r$ solution. Now I am a bit confused about what happens at small $r$ in this large-$\lambda$ approximation. It seems the only reason it breaks down at small $r$ is the step eq3a /.{C[0]->0}; before that line everything is a leading-order expansion in $\lambda$. However, even if I keep C[0] and solve {eq1a, eq2a, eq3} numerically, it doesn't work. So I agree that the large-$\lambda$ approximation breaks down at small $r$, but I don't see why it should. – Boson Bear May 21 '18 at 16:27
  • At small r, the rescaled ϕ is large for large λ, so all the terms in all three equations must be retained. For λ == 1000, small r means r < 30 for A and B and r < 70 for ϕ. – bbgodfrey May 21 '18 at 16:35
  • I think I see why the approximation fails at small $r$. The rescaling substitutes $\phi_{new} = \sqrt{\lambda} \phi$. At small $r$, $\phi\sim 1/10$ becomes $\phi_{new} \sim \sqrt{\lambda}/10$. So after the rescaling, the $\phi$ terms in eq1 and eq2 are still large at small $r$ and cannot be thrown away. At large $r$, on the other hand, $\phi$ was small anyway, so it's safe to throw them away after the rescaling, I guess. – Boson Bear May 21 '18 at 16:36
  • do you think there's a way to combine your large $r$ analysis with the cubic bspline approach? I have switched to the cubic bspline method with which I can impose the boundary conditions on both ends now, but I am still struggling with the linearization. Any idea? – Boson Bear May 22 '18 at 22:05
  • Possibly, although the equations are quite nonlinear. I am considering options. – bbgodfrey May 22 '18 at 22:19
  • A quick thought: due to Newton method the convergence for the linearization is bad if the initial guess is too far away from the true solution. That might be the reason that it breaks down in the second iteration. Maybe we can use the large r expression as part of the initial guess? Let me try that. – Boson Bear May 23 '18 at 01:57
  • That's a very nice improvement! Maybe now I can try to reduce my rEnd1 in the shooting using your new result. Hmm. Let me try with large $\lambda$. – Boson Bear May 23 '18 at 13:00
  • I tried to use the large-$r$ result to decrease the distance required for shooting. Unfortunately, it doesn't seem to change it much. I think that's because for large $\lambda$, $\phi$ asymptotes to zero too slowly, which pushes the point beyond which $r$ can be treated as "large" farther out than in the small-$\lambda$ case. – Boson Bear May 23 '18 at 21:08
  • Thanks for the update. Just looking by eye, I can tell the shape of the small-$r$ solution does look right, because I tried cranking up $\lambda$ by brute force. – Boson Bear May 24 '18 at 13:10
  • @BosonBear I have the rStart = 10000 result and am computing rStart = 20000 now. Note that Method -> "StiffnessSwitching" is essential. – bbgodfrey May 24 '18 at 13:44
  • Sorry for letting the bounty expire @bbgodfrey. I'm very new to the site and thought the bounty would automatically apply to the top voted answer. I apologize. – Boson Bear May 30 '18 at 18:19
  • @BosonBear Actually, it goes to the top new answer, which was yours. But, you cannot give yourself a bounty, so it simply disappeared. Please do not feel bad about this. I still plan to provide a better answer soon. – bbgodfrey May 30 '18 at 18:30
  • Thank you for the nice words and the explanation. Actually, I am now very puzzled by this ODE set. Besides being very new to numerical analysis - which I'm sure contributes a lot to the struggle - I can't get either the FDM or the FEM to work, which implies something tricky is going on that happens to be avoided in the shooting method. If we really follow the principle 'shooting first, relaxation after that', I suppose one should be able to reproduce the answer with relaxation. Anyway, I'll be very curious to see your improvement. – Boson Bear May 30 '18 at 18:40
  • @BosonBear I have not forgotten about this interesting problem and have made further progress. I hope to update my answer in a few days. – bbgodfrey Jun 04 '18 at 04:09
  • don't wanna rush you but would you like to share your progress when you have time? I'm very curious about what method you're working on now. Thanks a lot. – Boson Bear Jun 12 '18 at 14:12
  • @BosonBear Please accept my apologies. I shall try to have something by the end of the week. – bbgodfrey Jun 12 '18 at 14:58
  • no worries. Please don't feel sorry. We are all busy and we know that. I am already very grateful for the help you have given. – Boson Bear Jun 12 '18 at 16:24
  • @BosonBear I have almost developed a good approximation to the λ = 10^8 calculation. I can approximately integrate the equations until ϕ switches from a roughly t^(-1/2) dependence to the asymptotic behavior already given in my answer. Also, I can splice the asymptotic expressions for A and B to this result. However, I have not yet determined how to splice the asymptotic expression for ϕ to my new approximation. One way or another, I shall lay out what I have very soon. – bbgodfrey Jun 18 '18 at 04:07
  • thank you for the update. Now I'm thinking about converting the semi-infinite interval to a finite one. I learned from a BVP package, BVPSUITE, that it seems possible to split $(a,\infty)$ into $(a,1]$ and $[1,\infty)$ and then do an $x\rightarrow 1/x$ transformation on the second interval. Some sort of matching at 1 is necessary. However, I've not been able to implement it yet. Any ideas? – Boson Bear Jun 18 '18 at 23:20
  • Also, I have rewritten the cubic FEM code and got rid of some bugs. Now the crazy oscillation is mostly gone, except near xEnd1. That's why I suspect setting xEnd1 large isn't enough for the relaxation method. – Boson Bear Jun 18 '18 at 23:22
  • @BosonBear I have reorganized my earlier material and added new results. This still is not the answer you are seeking but is closer. I do not have additional ideas at the moment. Best wishes. – bbgodfrey Jul 02 '18 at 14:09
  • Thank you for the update. This is very helpful. I'm still working on the collocation method in the context of singular BVP problem and hopefully will update soon. Your results can be used as a cross check once I get some solutions using collocation. Again, thanks for the effort and input! Speak soon. – Boson Bear Jul 06 '18 at 17:22

Here I provide a half answer to the question using cubic FEM. The idea is to use piecewise cubic polynomials to approximate the solution and to solve the weak form, which converts the ODE system into an algebraic system. There is still an issue in coping with the resulting linear-algebra system, which I comment on at the end. I follow the procedure outlined in this book by Zhilin Li.

First of all, linearize the system using Newton-Raphson.

Clear[eq1mix, eq2mix, eq3mix]; 
eq1mix = Derivative[1][A][x]/x + A[x]/x^2 + A[x]*Derivative[1][ϕ][x]^2 + ϕ[x]^2*(μ^2*B[x] + 1) - 1/x^2 +  (1/2)*λ*ϕ[x]^4
eq2mix = (A[x]*Derivative[1][B][x])/(x*B[x]) - A[x]/x^2 + A[x]*Derivative[1][ϕ][x]^2 + ϕ[x]^2*(μ^2*B[x] - 1) +    1/x^2 - (1/2)*λ*ϕ[x]^4
eq3mix = Derivative[1][ϕ][x]*(Derivative[1][A][x]/2 - (A[x]*Derivative[1][B][x])/(2*B[x]) + (2*A[x])/x) +    A[x]*Derivative[2][ϕ][x] + ϕ[x]*(μ^2*B[x] - 1) - λ*ϕ[x]^3
rulePerturb = {
    ϕ[x] -> ϵ*Hold[ϕ[x, k + 1]] + Hold[ϕ[x, k]], 
    Derivative[1][ϕ][x] -> ϵ*Hold[dϕ[x, k + 1]] + Hold[dϕ[x, k]], 
    Derivative[2][ϕ][x] -> ϵ*Hold[d2ϕ[x, k + 1]] +  Hold[d2ϕ[x, k]], 
    A[x] -> ϵ*Hold[A[x, k + 1]] + Hold[A[x, k]],
    Derivative[1][A][x] -> ϵ*Hold[dA[x, k + 1]] + Hold[dA[x, k]], 
    Derivative[2][A][x] -> ϵ*Hold[d2A[x, k + 1]] + Hold[d2A[x, k]], 
    B[x] -> ϵ*Hold[B[x, k + 1]] + Hold[B[x, k]], 
    Derivative[1][B][x] -> ϵ*Hold[dB[x, k + 1]] + Hold[dB[x, k]], 
    Derivative[2][B][x] -> ϵ*Hold[d2B[x, k + 1]] + Hold[d2B[x, k]]}; 
Clear[A, dA, d2A, B, dB, d2B, ϕ, dϕ, d2ϕ]; 
eq1Perturb = Normal[Series[Expand[Simplify[eq1mix]] /. rulePerturb, {ϵ, 0, 1}]] /. {ϵ -> 1}; 
eq2Perturb = Normal[Series[Expand[Simplify[eq2mix]] /. rulePerturb, {ϵ, 0, 1}]] /. {ϵ -> 1}; 
eq3Perturb = Normal[Series[Expand[Simplify[eq3mix]] /. rulePerturb, {ϵ, 0, 1}]] /. {ϵ -> 1}; 
ReleaseHold[eq1Perturb]
ReleaseHold[eq2Perturb]
ReleaseHold[eq3Perturb]

Next, build up the cubic basis. Please note that, unlike the basis I used in the code posted earlier, which is Lagrange interpolation (i.e., inserting more control points in an interval to get four intervals and five control points), here I am using Hermite interpolation (characterizing the basis by its function value and derivative value at the control points, i.e., building up double nodes).

epsilon = 10^(-6); 
rStart1 = 10^(-5); 
rEnd1 = 50; 
Nmax = 120; 
spacing = (rEnd1 - rStart1)/Nmax; 
Clear[r]; 
Do[r[i] = rStart1 + spacing*i, {i, -1, Nmax + 1}]; 
rulePar = {μ -> 2, λ -> 1000}; 
B1[x_, i_] := Piecewise[{
    {((x - r[i - 1])^2*((2*(x - r[i]))/(r[i - 1] - r[i]) + 1))/(r[i] - r[i - 1])^2, r[i - 1] <= x <= r[i]}, 
    {((x - r[i + 1])^2*((2*(x - r[i]))/(r[i + 1] - r[i]) + 1))/(r[i] - r[i + 1])^2, r[i] <= x <= r[i + 1]}}, 0]
B2[x_, i_] := Piecewise[{
    {((x - r[i])*(x - r[i - 1])^2)/(r[i] - r[i - 1])^2, r[i - 1] <= x <= r[i]}, 
    {((x - r[i])*(x - r[i + 1])^2)/(r[i + 1] - r[i])^2, 
 r[i] <= x <= r[i + 1]}}, 0]
Plot[B1[x, 5], {x, r[4], r[6]}]
Plot[B2[x, 5], {x, r[4], r[6]}]

(figures: the Hermite basis functions B1 and B2)

The local stiffness matrix is then assembled.

b[localindex_, x_, i_] := Piecewise[{{B1[x, i], localindex == 1}, {B1[x, i + 1], localindex == 2}, {B2[x, i], localindex == 3}, {B2[x, i + 1], localindex == 4}}]; 
mtxLocalStiffnessbb[i_] := Table[Integrate[b[j, x, i]*b[k, x, i], {x, r[i], r[i + 1]}], {j, 4}, {k, 4}]
mtxLSConstbb = mtxLocalStiffnessbb[0]; 
N[MatrixForm[%]]
mtxLocalStiffnessbdb[i_] := Table[Integrate[b[j, x, i]*D[b[k, x, i], x], {x, r[i], r[i + 1]}], {j, 4}, {k, 4}]
mtxLSConstbdb = mtxLocalStiffnessbdb[0]; 
N[MatrixForm[%]]
mtxLocalStiffnessdbdb[i_] := Table[Integrate[D[b[j, x, i], x]*D[b[k, x, i], x], {x, r[i], r[i + 1]}], {j, 4}, {k, 4}]
mtxLSConstdbdb = mtxLocalStiffnessdbdb[0]; 
N[MatrixForm[%]]

Next, we construct the global bilinear form and load vector element by element.

ruleInnerProductB1[j_] := {
    A[x, k + 1] -> cA1[j - 1, k + 1]*mtxLSConstbb[[1,2]] + cA1[j, k + 1]*mtxLSConstbb[[1,1]] + cA1[j + 1, k + 1]*mtxLSConstbb[[2,1]] +  cA2[j - 1, k + 1]*mtxLSConstbb[[3,2]] + cA2[j, k + 1]*mtxLSConstbb[[3,1]] + cA2[j + 1, k + 1]*mtxLSConstbb[[4,1]], 
    dA[x, k + 1] -> cA1[j - 1, k + 1]*mtxLSConstbdb[[1,2]] + cA1[j, k + 1]*mtxLSConstbdb[[1,1]] + cA1[j + 1, k + 1]*mtxLSConstbdb[[2,1]] + 
 cA2[j - 1, k + 1]*mtxLSConstbdb[[3,2]] + cA2[j, k + 1]*mtxLSConstbdb[[3,1]] + cA2[j + 1, k + 1]*mtxLSConstbdb[[4,1]], 
    B[x, k + 1] -> cB1[j - 1, k + 1]*mtxLSConstbb[[1,2]] + cB1[j, k + 1]*mtxLSConstbb[[1,1]] + cB1[j + 1, k + 1]*mtxLSConstbb[[2,1]] + 
 cB2[j - 1, k + 1]*mtxLSConstbb[[3,2]] + cB2[j, k + 1]*mtxLSConstbb[[3,1]] + cB2[j + 1, k + 1]*mtxLSConstbb[[4,1]], 
    dB[x, k + 1] -> cB1[j - 1, k + 1]*mtxLSConstbdb[[1,2]] + cB1[j, k + 1]*mtxLSConstbdb[[1,1]] + cB1[j + 1, k + 1]*mtxLSConstbdb[[2,1]] + 
 cB2[j - 1, k + 1]*mtxLSConstbdb[[3,2]] + cB2[j, k + 1]*mtxLSConstbdb[[3,1]] + cB2[j + 1, k + 1]*mtxLSConstbdb[[4,1]], 
    ϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstbb[[1,2]] + cϕ1[j, k + 1]*mtxLSConstbb[[1,1]] + cϕ1[j + 1, k + 1]*mtxLSConstbb[[2,1]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstbb[[3,2]] + cϕ2[j, k + 1]*mtxLSConstbb[[3,1]] + cϕ2[j + 1, k + 1]*mtxLSConstbb[[4,1]], 
    dϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstbdb[[1,2]] + cϕ1[j, k + 1]*mtxLSConstbdb[[1,1]] + cϕ1[j + 1, k + 1]*mtxLSConstbdb[[2,1]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstbdb[[3,2]] + cϕ2[j, k + 1]*mtxLSConstbdb[[3,1]] + cϕ2[j + 1, k + 1]*mtxLSConstbdb[[4,1]], 
    d2ϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstdbdb[[1,2]] + cϕ1[j, k + 1]*mtxLSConstdbdb[[1,1]] + cϕ1[j + 1, k + 1]*mtxLSConstdbdb[[2,1]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstdbdb[[3,2]] + cϕ2[j, k + 1]*mtxLSConstdbdb[[3,1]] + cϕ2[j + 1, k + 1]*mtxLSConstdbdb[[4,1]]}

ruleInnerProductB2[j_] := {
    A[x, k + 1] -> cA1[j - 1, k + 1]*mtxLSConstbb[[1,4]] + cA1[j, k + 1]*mtxLSConstbb[[1,3]] + cA1[j + 1, k + 1]*mtxLSConstbb[[2,3]] + 
 cA2[j - 1, k + 1]*mtxLSConstbb[[3,4]] + cA2[j, k + 1]*mtxLSConstbb[[3,3]] + cA2[j + 1, k + 1]*mtxLSConstbb[[4,3]], 
    dA[x, k + 1] -> cA1[j - 1, k + 1]*mtxLSConstbdb[[1,4]] + cA1[j, k + 1]*mtxLSConstbdb[[1,3]] + cA1[j + 1, k + 1]*mtxLSConstbdb[[2,3]] + 
 cA2[j - 1, k + 1]*mtxLSConstbdb[[3,4]] + cA2[j, k + 1]*mtxLSConstbdb[[3,3]] + cA2[j + 1, k + 1]*mtxLSConstbdb[[4,3]], 
    B[x, k + 1] -> cB1[j - 1, k + 1]*mtxLSConstbb[[1,4]] + cB1[j, k + 1]*mtxLSConstbb[[1,3]] + cB1[j + 1, k + 1]*mtxLSConstbb[[2,3]] + 
 cB2[j - 1, k + 1]*mtxLSConstbb[[3,4]] + cB2[j, k + 1]*mtxLSConstbb[[3,3]] + cB2[j + 1, k + 1]*mtxLSConstbb[[4,3]], 
    dB[x, k + 1] -> cB1[j - 1, k + 1]*mtxLSConstbdb[[1,4]] + cB1[j, k + 1]*mtxLSConstbdb[[1,3]] + cB1[j + 1, k + 1]*mtxLSConstbdb[[2,3]] + 
 cB2[j - 1, k + 1]*mtxLSConstbdb[[3,4]] + cB2[j, k + 1]*mtxLSConstbdb[[3,3]] + cB2[j + 1, k + 1]*mtxLSConstbdb[[4,3]], 
    ϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstbb[[1,4]] + cϕ1[j, k + 1]*mtxLSConstbb[[1,3]] + cϕ1[j + 1, k + 1]*mtxLSConstbb[[2,3]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstbb[[3,4]] + cϕ2[j, k + 1]*mtxLSConstbb[[3,3]] + cϕ2[j + 1, k + 1]*mtxLSConstbb[[4,3]], 
    dϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstbdb[[1,4]] + cϕ1[j, k + 1]*mtxLSConstbdb[[1,3]] + cϕ1[j + 1, k + 1]*mtxLSConstbdb[[2,3]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstbdb[[3,4]] + cϕ2[j, k + 1]*mtxLSConstbdb[[3,3]] + cϕ2[j + 1, k + 1]*mtxLSConstbdb[[4,3]], 
    d2ϕ[x, k + 1] -> cϕ1[j - 1, k + 1]*mtxLSConstdbdb[[1,4]] + cϕ1[j, k + 1]*mtxLSConstdbdb[[1,3]] + cϕ1[j + 1, k + 1]*mtxLSConstdbdb[[2,3]] + 
 cϕ2[j - 1, k + 1]*mtxLSConstdbdb[[3,4]] + cϕ2[j, k + 1]*mtxLSConstdbdb[[3,3]] + cϕ2[j + 1, k + 1]*mtxLSConstdbdb[[4,3]]}

    ruleCutOff = {cϕ1[-1, k + 1] -> 0,
     cϕ2[-1, k + 1] -> 0, cA1[-1, k + 1] -> 0,
     cA2[-1, k + 1] -> 0, cB1[-1, k + 1] -> 0,
     cB2[-1, k + 1] -> 0, cϕ1[Nmax + 1, k + 1] -> 0, 
     cϕ2[Nmax + 1, k + 1] -> 0, cA1[Nmax + 1, k + 1] -> 0,
     cA2[Nmax + 1, k + 1] -> 0, cB1[Nmax + 1, k + 1] -> 0, 
     cB2[Nmax + 1, k + 1] -> 0}; 

I take the initial guess to be the solution at $\lambda=300$.

WorkingprecisionVar = 23; 
accuracyVar = Automatic; 
precisionGoalVar = Automatic; 
boolUnderShoot = -1; 
Clear[GRHunterPhi4Shooting]; 
GRHunterPhi4Shooting[μ_, λ_, ϕ0_] := Module[{eq1, eq2, eq3}, 
eq1 = (μ^2/B[r] + 1)*ϕ[r]^2 + (1/A[r])*Derivative[1][ϕ][r]^2 + (1/2)*λ*ϕ[r]^4 - Derivative[1][A][r]/r/A[r]^2 + 1/r^2/A[r] - 1/r^2; 
eq2 = (μ^2/B[r] - 1)*ϕ[r]^2 + (1/A[r])*Derivative[1][ϕ][r]^2 - (1/2)*λ*ϕ[r]^4 - Derivative[1][B][r]/r/A[r]/B[r] - 1/r^2/A[r] + 1/r^2; 
eq3 = (1/A[r])*Derivative[2][ϕ][r] + (μ^2/B[r] - 1)*ϕ[r] + Derivative[1][ϕ][r]*(Derivative[1][B][r]/2/A[r]/B[r] - Derivative[1][A][r]/2/A[r]^2 + 
    2/A[r]/r) - λ*ϕ[r]^3; bc = {A[rStart1] == 1, B[rStart1] == B0, ϕ[rStart1] == ϕ0, Derivative[1][ϕ][rStart1] == epsilon}; 
sollst1 = ParametricNDSolveValue[Flatten[{eq1 == 0, eq2 == 0, eq3 == 0, bc, WhenEvent[ϕ[r] < 0, {boolUnderShoot = 1, "StopIntegration"}], 
    WhenEvent[Derivative[1][ϕ][r] > 0, {boolUnderShoot = 0, "StopIntegration"}]}], {A, B, ϕ}, {r, rStart1, rEnd1}, {B0}, 
  WorkingPrecision -> WorkingprecisionVar, AccuracyGoal -> accuracyVar, PrecisionGoal -> precisionGoalVar, MaxSteps -> 10000]]
funcPar = GRHunterPhi4Shooting[μ /. rulePar, 300, 1/10]; 
boolUnderShoot =. ; 
bl = 0; 
bu = 10; 
Do[boolUnderShoot = -1; bmiddle = (bl + bu)/2; 
WorkingprecisionVar = Max[Log10[2^i] + 5, 23]; 
Quiet[funcPar[bmiddle]]; 
If[boolUnderShoot == 1, bl = bmiddle, bu = bmiddle]; , {i, 80}]
funcInst = funcPar[bl][[3]]
Plot[{funcInst[r]}, {r, rStart1, rEnd1}, PlotRange -> All]

I plan to fit it with the cubic basis later, but so far I am only focusing on one step of the iteration.

I prepare the algebraic equations for the linearized system by taking the inner product of each equation with the $2(N_{max}+1)$ basis functions. The inner product is defined as integration over the whole domain, since the test functions form a basis of $H^2(\Omega)$.

funcNonNeg[x_] := Max[x, 0]; 
funcNonMax[x_] := Min[x, Nmax]; 
arryLoadVec = SparseArray[ConstantArray[0, {6*Nmax + 6}]]
arryBilinearForm = SparseArray[ConstantArray[0, {6*Nmax + 6, 6*Nmax + 6}]]
SetSharedVariable[arryBilinearForm, arryLoadVec]; 
varMaxRecursion = 20; 
varAccuracyGoal = Automatic; 
varWorkingPrecision = 35; 
AbsoluteTiming[
ParallelDo[On[NIntegrate::izero]; 
    kthOrder = 0; 
    lastIterRes = {
    A[x, 0] -> funcPar[bl][[1]][x], 
    dA[x, 0] -> Derivative[1][funcPar[bl][[1]]][x], 
    B[x, 0] -> funcPar[bl][[2]][x], 
    dB[x, 0] -> Derivative[1][funcPar[bl][[2]]][x], 
    ϕ[x, 0] -> funcPar[bl][[3]][x], 
    dϕ[x, 0] -> Derivative[1][funcPar[bl][[3]]][x], 
    d2ϕ[x, 0] -> Derivative[2][funcPar[bl][[3]]][x]}; 
    lstCoeff = Flatten[({cA1[#1, k + 1], cA2[#1, k + 1], cB1[#1, k + 1], cB2[#1, k + 1], cϕ1[#1, k + 1], cϕ2[#1, k + 1]} & ) /@ Range[funcNonNeg[nodept - 1], funcNonMax[nodept + 1]]] /. {k -> kthOrder}; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq1Perturb] /. rulePar /. ruleInnerProductB1[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B1[x, nodept], lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]}, MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 1]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 1]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep, ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq1Perturb] /. rulePar /. ruleInnerProductB2[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B2[x, nodept],  lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]}, MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 2]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 2]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep,  ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq2Perturb] /. rulePar /. ruleInnerProductB1[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B1[x, nodept], lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]},  MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 3]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 3]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep, ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq2Perturb] /. rulePar /. ruleInnerProductB2[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B2[x, nodept], lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]},  MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 4]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 4]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep,  ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq3Perturb] /. rulePar /. ruleInnerProductB1[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B1[x, nodept], lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]}, MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 5]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 5]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep, ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; 
    arryCoeff = CoefficientArrays[(ReleaseHold[eq3Perturb] /. rulePar /. ruleInnerProductB2[nodept] /. ruleCutOff /. {k -> kthOrder} /. lastIterRes)*B2[x, nodept], lstCoeff]; 
    {arryLoadVecStep, arryBilinearFormStep} = NIntegrate[arryCoeff, {x, r[funcNonNeg[nodept - 1]], r[funcNonMax[nodept + 1]]}, MaxRecursion -> varMaxRecursion, AccuracyGoal -> varAccuracyGoal, WorkingPrecision -> varWorkingPrecision]; 
    arryLoadVec[[nodept*3*2 + 6]] = arryLoadVecStep; 
    arryBilinearForm[[nodept*3*2 + 6]] = Flatten[{ConstantArray[0, {funcNonNeg[6*(nodept - 1)]}], arryBilinearFormStep, ConstantArray[0, {funcNonNeg[6*(Nmax + 1) - funcNonNeg[6*(nodept - 1)] - Length[arryBilinearFormStep]]}]}]; , {nodept, 0, Nmax}]]

Adding boundary conditions:

(*ϕ(0)*)
arryBilinearForm = Insert[arryBilinearForm,  SparseArray[{5} -> 1, {Length@arryLoadVec}], 5];
(*ϕ'(0)*)
arryBilinearForm = Insert[arryBilinearForm,  SparseArray[{6} -> 1, {Length@arryLoadVec}], 6];
(*ϕ(∞)*)
arryBilinearForm = Insert[arryBilinearForm, SparseArray[{-2} -> 1, {Length@arryLoadVec}], -1];
(*ϕ'(∞)*)
arryBilinearForm = Insert[arryBilinearForm,  SparseArray[{-1} -> 1, {Length@arryLoadVec}], -2];

(*ϕ(0)*)arryLoadVec = Insert[arryLoadVec, 0(*-1/10*), 5];
(*ϕ'(0)*)arryLoadVec = Insert[arryLoadVec, 0, 6];
(*ϕ(∞)*)arryLoadVec = Insert[arryLoadVec, 0, -1];
(*ϕ'(∞)*)arryLoadVec = Insert[arryLoadVec, 0, -2];

Now we end up with a $(6N_{max} + 10) \times (6N_{max} + 6)$ matrix, and a $(6N_{max} + 10)$ vector. Apparently not all $(6N_{max} + 10)$ constraints are linearly independent.

So far, I have tried RowReduce, SingularValueDecomposition, and LinearSolve. It seems I either get large numerical noise or a trivial solution. I hope there will be ways to improve. Please comment if you have suggestions or ideas.
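
One further option for the rectangular system, which I have not verified on the full problem, is to solve it in a least-squares sense rather than exactly. The sign of the right-hand side below follows from CoefficientArrays returning {constant part, linear part}, i.e. residual == load + matrix . coefficients; the names mat, rhs, coeffs are mine.

(* Sketch: least-squares solve of the (6 Nmax + 10) x (6 Nmax + 6) system assembled above. *)
mat = N[Normal /@ Normal[arryBilinearForm]];
rhs = -N[Normal[arryLoadVec]];
coeffs = LeastSquares[mat, rhs];

RowReduce and LinearSolve expect a consistent (or square) system, so a least-squares formulation may be less sensitive to the extra boundary rows, although it does not by itself cure an ill-conditioned matrix.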

(figure: current output of the cubic FEM attempt)

Boson Bear
  • Here are a few tips: Are you sure the system can be solved with the parameters you want; try to check with another software if possible. Before implementing a non-linear solver try to solve some simple problems first to see that the discretization works. Once that works try to solve some non-linear problem for which you have the solution. Perhaps "Advanced Topics in Finite Element Analysis of Structures: With Mathematica and MATLAB; Bhatti" is helpful. – user21 May 29 '18 at 05:17
  • @user21 That's a good idea. I shall try it out and get back. Thanks. – Boson Bear May 29 '18 at 20:38