19

Dear Mathematica users,

I would like to numerically solve, as the title says, a Poisson equation with pure Neumann boundary conditions:

$-\nabla^2 \psi = f$
$\nabla \psi \cdot \mathbf{n} = g$

Is it possible?

As an example I will use the demo from the FEniCS project:

$f = 10 \exp\!\left(-\dfrac{(x - 0.5)^2 + (y - 0.5)^2}{0.02}\right)$
$g = -\sin(5x)$

In Mathematica

f = 10*Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02]
g = -Sin[5*x];
Needs["NDSolve`FEM`"]
bmesh = ToBoundaryMesh[
   "Coordinates" -> {{0., 0.}, {1, 0.}, {1, 1}, {0., 1}, {0.5, 0.5}},
   "BoundaryElements" -> {LineElement[{{1, 2}, {2, 3}, {3, 4}, {4, 1}}]}];
mesh = ToElementMesh[bmesh, "MaxCellMeasure" -> 0.001];

m[x_, y_] = 
 NDSolveValue[{-Laplacian[u[x, y], {x, y}] == f + NeumannValue[g, True]
(*,DirichletCondition[u[x,y]==0, x==0.5&&y==0.5]*)}, u, {x, y} \[Element] mesh][x, y]

ddfdx[x_, y_] := Evaluate[Derivative[1, 0][m][x, y]];
ddfdy[x_, y_] := Evaluate[Derivative[0, 1][m][x, y]];
Show[ContourPlot[m[x, y], {x, y} \[Element] mesh, PlotLegends -> Automatic, Contours -> 50], VectorPlot[{ddfdx[x, y], ddfdy[x, y]}, {x, y} \[Element] mesh, 
  VectorColorFunction -> Hue, VectorScale -> {Small, 0.6, None}]]

Trying to solve this in Mathematica gives a clear and understandable warning:

NDSolveValue::femibcnd: No DirichletCondition or Robin-type NeumannValue was specified; the result may be off by a constant value.

However, the result is not so clear and understandable.


The answer one would like to get should look something like the reference solution from the FEniCS demo.

I tried equation elimination from this post, i.e. using

DirichletCondition[u[x, y] == 0, x == 0.5 && y == 0.5]

to get a result. Now it looks almost decent, if one doesn't care about the sink that appears (and the wrong gradient around it). Sadly, I do care, as I'm interested in the gradient of $\psi$, so such an approach leads me nowhere.


So then the question: is it possible to numerically solve a Poisson equation with pure Neumann boundary conditions in Mathematica? Can anyone suggest some steps for how to do this?

To add: sadly, I am not a mathematician, so I lack the ability to implement such a routine on my own. Maybe something can be done using the weak formulation as in the example, but before trying to implement that (would it actually be possible?) I would like to learn whether there is another way.

Mefistofelis

6 Answers

20

That's a typical problem; it is caused by the matrix of the discretized system having a one-dimensional kernel (and cokernel). One can stabilize the system by adding a row and a column that represent a homogeneous mean-value constraint. I don't know whether NDSolve can do that (user21 will be able to tell us), but one can do that with low-level FEM-programming:

Needs["NDSolve`FEM`"]
bmesh = ToBoundaryMesh[
   "Coordinates" -> {{0., 0.}, {1, 0.}, {1, 1}, {0., 1}, {0.5, 0.5}}, 
   "BoundaryElements" -> {LineElement[{{1, 2}, {2, 3}, {3, 4}, {4, 1}}]}];
mesh = ToElementMesh[bmesh, "MaxCellMeasure" -> 0.001];

vd = NDSolve`VariableData[{"DependentVariables", "Space"} -> {{u}, {x, y}}];
sd = NDSolve`SolutionData[{"Space"} -> {mesh}];
cdata = InitializePDECoefficients[vd, sd, 
   "DiffusionCoefficients" -> {{-IdentityMatrix[2]}}, 
   "MassCoefficients" -> {{1}}, 
   "LoadCoefficients" -> {{f}}
   ];
bcdata = InitializeBoundaryConditions[vd, sd, {{NeumannValue[g, True]}}];
mdata = InitializePDEMethodData[vd, sd];

(*Discretization*)
dpde = DiscretizePDE[cdata, mdata, sd];
dbc = DiscretizeBoundaryConditions[bcdata, mdata, sd];
{load, stiffness, damping, mass} = dpde["All"];
mass0 = mass;
DeployBoundaryConditions[{load, stiffness}, dbc];


This is where the warning message is created. We ignore it, because we augment the stiffness matrix in the following way:

a = SparseArray[{Total[mass0]}];
L = ArrayFlatten[{{stiffness, Transpose[a]}, {a, 0.}}];
b = Flatten[Join[load, {0.}]];
v = LinearSolve[L, b, Method -> "Pardiso"][[1 ;; Length[mass]]];
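To see the one-dimensional kernel mentioned at the top, here is a quick check (a sketch, using the matrices assembled above): the constant vector is annihilated by the unaugmented stiffness matrix, which is exactly what the extra mean-value row and column in L compensate for.

(* the constant vector lies in the (numerical) kernel of the singular stiffness matrix *)
Max[Abs[stiffness . ConstantArray[1., Length[load]]]]
(* should be at round-off level *)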

Now we can plot the solution:

solfun = ElementMeshInterpolation[{mesh}, v];
DensityPlot[solfun[x, y], {x, y} ∈ mesh, 
 ColorFunction -> "SunsetColors"]

enter image description here

I leave the cosmetics to you. Beware that the derivatives of these finite-element solutions are guaranteed to be close to the actual solution only in the $L^2$-norm. So it may happen that the gradient vector field looks much rougher than you would expect.
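If you do want to look at the gradient field, one way to inspect it is (a sketch, assuming solfun and mesh from above):

VectorPlot[Evaluate[Grad[solfun[x, y], {x, y}]], {x, y} \[Element] mesh,
 VectorColorFunction -> Hue, VectorScale -> {Small, 0.6, None}]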

Henrik Schumacher
  • @user21 You know probably of a better solution for this... – Henrik Schumacher Feb 13 '19 at 17:08
  • Thank you for this really insightful answer! A lot to learn about the Finite Element Programming to understand everything here though. Interesting! Still curious and eager to learn something from @user21.. – Mefistofelis Feb 18 '19 at 10:32
  • You're welcome! Have also a look at the extensive tutorial on FEM written by user21. – Henrik Schumacher Feb 18 '19 at 10:33
  • @Mefistofelis, I only saw this now; for some reason I did not get a @ notification. I'll try to have a look at this in the next few days if still needed. – user21 Feb 18 '19 at 14:01
  • @user21 Thank you. =D – Henrik Schumacher Feb 20 '19 at 15:44
  • Oh, thank you! Your code gave me an idea for an improvement of mine. Boundary integral constraints (weak boundary constraints) could probably be done in a similar fashion. One would use a pure Robin bc (no "Neumann" part like NeumannValue[u[x],...]) and use DiscretizeBoundaryConditions... I cannot try that right now, but it is perhaps a future idea. – user21 Feb 21 '19 at 05:55
  • Sounds interesting. I am always glad to serve as inspiration! – Henrik Schumacher Feb 21 '19 at 09:03
  • Please pardon my, probably ridiculously silly, question - $\nabla^2(u)+f \rightarrow 1.3$ which is the value that comes from the modified load matrix. I find it rather.. random(?), but I suppose it would take too long to explain why exactly this value. However, in layman terms, does this mean that the Poisson equation is solved as $-\nabla^2(u)=f+1.3$? – Mefistofelis Feb 22 '19 at 13:25
  • @Mefistofelis I am afraid that I don't understand your question. Where exactly does 1.3 pop up? – Henrik Schumacher Feb 22 '19 at 13:49
  • Apologies, I guess I'm really straying away from Mathematica and this is not a place where to discuss simple math problems . The solver itself works great and that is the main point, thank you for that! – Mefistofelis Feb 22 '19 at 14:18
  • I rethought my question - $-\nabla^2(\psi) \neq f$, but instead $-\nabla^2(\psi)=f+1.3$. I am wondering how exactly this 'constant' is 'chosen'/found? Is this the value of a 'homogeneous mean-value constraint'? I clearly understand that the Neumann boundary problem has a solution up to an additive constant, but I'm not entirely sure about this value. I understand that we redefine and solve a slightly different problem, so can it maybe be said that we solve $-\nabla^2(\psi)=f$ where $f=f+k$ and in this case $k$ happens to be $1.3$? – Mefistofelis Feb 22 '19 at 15:02
  • Ah, now I start to understand. Okay, the problem with the Laplacian with pure Neumann conditions is not only that it has the constant functions in its kernel. It is also not surjective. The image of the operator is precisely the space of functions of mean value 0. If f does not satisfy this, the equation $-\Delta \psi = f$ is not solvable! What the constrained approach does is to compute the $L^2$-Moore-Penrose pseudoinverse. So it first projects f $L^2$-orthogonally onto the space of 0-mean functions and then solves the equation with this new right hand side (which is now solvable). [...] – Henrik Schumacher Feb 22 '19 at 15:12
  • $L^2$-orthogonal projection onto the space of 0-mean functions is done by just subtracting the mean of f. So I guess f has mean equal to -1.3, right? – Henrik Schumacher Feb 22 '19 at 15:13 (a numerical check of this compatibility defect is sketched after these comments)
  • @HenrikSchumacher Hello, I'm trying to understand the basic idea of the clever substitution you made here: given a singular system K.u==r, you add a condition um=m.u==0 for the mean value of the list u. I tried the extended system {{K,0},{m,0}}.{u,um}=={r,0}. Is that the extension you made? Thanks! – Ulrich Neumann Dec 07 '22 at 11:54
  • Hi Ulrich. Almost. The matrix m is to be chosen such that the condition m.u == 0 characterizes the orthogonal complement of the nullspace of K. And if we are looking for the least squares solution of K.u = r in this orthogonal complement, the KKT equations are precisely {{K,Transpose[m]},{m,0}}.{u,um}=={r,0}, where um is the Lagrange multiplier. u should then be identical to PseudoInverse[K].r -- but it is easier to compute because the saddle point system is sparse while PseudoInverse[K] is dense. – Henrik Schumacher Dec 09 '22 at 20:22
  • @HenrikSchumacher Thanks for your helpful explanation. I tried to find this extended set of equations with a penalty ansatz but failed. Still, I'm trying to understand the singularity of the underlying problem. K comes from integration of dot products of gradients of mesh functions, I think. – Ulrich Neumann Dec 11 '22 at 11:56
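A quick numerical check of the compatibility defect discussed in the comments above: for the pure Neumann problem $-\nabla^2\psi = f$, $\nabla\psi\cdot\text{n} = g$ to be solvable, one needs $\int_\Omega f\,dx + \oint_{\partial\Omega} g\,ds = 0$. A sketch, assuming f and g as defined in the question:

intF = NIntegrate[f, {x, 0, 1}, {y, 0, 1}];
intG = 2 NIntegrate[g, {x, 0, 1}] +         (* bottom and top edges, y = 0 and y = 1 *)
   NIntegrate[g /. x -> 1, {y, 0, 1}] +     (* right edge, x = 1 *)
   NIntegrate[g /. x -> 0, {y, 0, 1}];      (* left edge, x = 0 *)
intF + intG  (* ≈ 1.3; this is the incompatibility that the mean-value constraint projects out *)
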
16

We can use the method of the false transient:

f = 10*Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02];
g = -Sin[5*x];
Needs["NDSolve`FEM`"]
bmesh = ToBoundaryMesh[
   "Coordinates" -> {{0., 0.}, {1, 0.}, {1, 1}, {0., 1}, {0.5, 0.5}}, 
   "BoundaryElements" -> {LineElement[{{1, 2}, {2, 3}, {3, 4}, {4, 
        1}}]}];
mesh = ToElementMesh[bmesh, "MaxCellMeasure" -> 0.001];
t0 = 300; dif = 1000;
m = NDSolveValue[{dif*D[u[t, x, y], t] - f - 
     Laplacian[u[t, x, y], {x, y}] == NeumannValue[g, True], 
   u[0, x, y] == 0}, u, {t, 0, t0}, {x, y} \[Element] mesh]

Visualisation

{Show[
  ContourPlot[m[t0, x, y], {x, y} \[Element] mesh, PlotLegends -> Automatic,
   Contours -> 50, ColorFunction -> "BlueGreenYellow"],
  VectorPlot[Evaluate[Grad[m[t, x, y], {x, y}] /. t -> t0], {x, y} \[Element] mesh,
   VectorColorFunction -> Hue, VectorScale -> {Small, 0.6, None}]],
 DensityPlot[m[t0, x, y], {x, y} \[Element] mesh, PlotLegends -> Automatic,
  ColorFunction -> "SunsetColors"]}

Figure 1
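As a quick check that the false transient has actually reached a steady state, one can compare the solution at t0 with a somewhat earlier time (a sketch, assuming m and t0 from above):

Max[Table[Abs[m[t0, x, y] - m[0.9 t0, x, y]], {x, 0, 1, 0.05}, {y, 0, 1, 0.05}]]
(* should be small if the pseudo-time integration has converged *)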

Update 1. Thanks to the nice post by @ConvexHull illustrating the discontinuous Galerkin method, I have added a new numerical method for linear and nonlinear Poisson equations, described in our paper and on my page here. The method is based on Euler wavelets and Newton's iterative method. Using a Lagrange multiplier with an additional constraint, we have

f[x_, y_] := 10*Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02];
g[x_] := -Sin[5*x];
UE[m_, t_] := EulerE[m, t];
psi[k_, n_, m_, t_] := 
 Piecewise[{{2^(k/2) Sqrt[2/Pi] UE[m, 2^k t - 2 n + 1], (n - 1)/
      2^(k - 1) <= t < n/2^(k - 1)}, {0, True}}];
PsiE[k_, M_, t_] := 
 Flatten[Table[psi[k, n, m, t], {n, 1, 2^(k - 1)}, {m, 0, M - 1}]]
k0 = 3; M0 = 6; With[{k = k0, M = M0}, 
 nn = Length[Flatten[Table[1, {n, 1, 2^(k - 1)}, {m, 0, M - 1}]]]];
dx = 1/(nn);  xl = Table[ l*dx, {l, 0, nn}]; zcol = 
 xcol = Table[(xl[[l - 1]] + xl[[l]])/2, {l, 2, nn + 1}]; Psijk = 
 With[{k = k0, M = M0}, PsiE[k, M, t1]]; Int1 = 
 With[{k = k0, M = M0}, Integrate[PsiE[k, M, t1], t1]];
Int2 = Integrate[Int1, t1]; Int3 = Integrate[Int2, t1]; 
Psi[y_] := Psijk /. t1 -> y; int1[y_] := Int1 /. t1 -> y; 
int2[y_] := Int2 /. t1 -> y; int3[y_] := Int3 /. t1 -> y;
M = nn;

U1 = Array[a, {M, M}]; U2 = Array[b, {M, M}]; G1 = Array[g1, {M}]; G2 = Array[g2, {M}]; G3 = Array[g3, {M}]; G4 = Array[g4, {M}];

uint1 = int3[1] . U1 . int1[1] + 1/2 G1 . int1[1] + G4 . int1[1];
uint2 = int1[1] . U2 . int3[1] + 1/2 G2 . int1[1] + G3 . int1[1];
u1[x_, z_] := int2[x] . U1 . Psi[z] + x G1 . Psi[z] + G4 . Psi[z];
u2[x_, z_] := Psi[x] . U2 . int2[z] + z G2 . Psi[x] + G3 . Psi[x];
uz[x_, z_] := Psi[x] . U2 . int1[z] + G2 . Psi[x];
ux[x_, z_] := int1[x] . U1 . Psi[z] + G1 . Psi[z];
uxx[x_, z_] := Psi[x] . U1 . Psi[z];
uzz[x_, z_] := Psi[x] . U2 . Psi[z];

Equations and solution

eqn = Join[
  Flatten[Table[(uxx[xcol[[i]], zcol[[j]]] + 
       uzz[xcol[[i]], zcol[[j]]]) + f[xcol[[i]], zcol[[j]]] + 
     lambda , {i, M}, {j, M}]], 
  Flatten[Table[
    u1[xcol[[i]], zcol[[j]]] - u2[xcol[[i]], zcol[[j]]], {i, M}, {j, 
     M}]]]; bc = 
 Join[Flatten[Table[ux[1, zcol[[j]]] - g[1] == 0, {j, M}]], 
  Flatten[Table[ux[0, zcol[[j]]] + g[0] == 0, {j, 1, M}]], 
  Flatten[Table[uz[xcol[[j]], 0] + g[xcol[[j]]] == 0, {j, M}]], 
  Flatten[Table[
    uz[xcol[[j]], 1] - g[xcol[[j]]] == 0, {j, M}]], {uint2 == 
    0}]; var = 
 Join[Flatten[U1], Flatten[U2], G1, G2, G3, G4, {lambda}];

eq = Join[Table[eqn[[i]] == 0, {i, Length[eqn]}], bc];

{v, m} = CoefficientArrays[eq, var]; sol1 = LinearSolve[m, -v];

Visualization

rul = Table[var[[i]] -> sol1[[i]], {i, Length[var]}];

Plot3D[Evaluate[u1[x, y] /. rul], {x, 0, 1}, {y, 0, 1}, PlotLegends -> Automatic, ColorFunction -> "SunsetColors", PlotRange -> All, Exclusions -> None]

Plot3D[Evaluate[u2[x, y] /. rul], {x, 0, 1}, {y, 0, 1}, PlotLegends -> Automatic, ColorFunction -> "SunsetColors", PlotRange -> All, Exclusions -> None, PlotTheme -> "Marketing", MeshStyle -> White]

Figure 2

We can compare sol1 with solfun by Henrik Schumacher as follows

Needs["NDSolve`FEM`"]
bmesh = ToBoundaryMesh[
   "Coordinates" -> {{0., 0.}, {1, 0.}, {1, 1}, {0., 1}, {0.5, 0.5}}, 
   "BoundaryElements" -> {LineElement[{{1, 2}, {2, 3}, {3, 4}, {4, 
        1}}]}];
mesh = ToElementMesh[bmesh, "MaxCellMeasure" -> 0.001];
f0 = 10*Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02];
g0 = -Sin[5*x];

vd = NDSolve`VariableData[{"DependentVariables", "Space"} -> {{u}, {x, y}}];
sd = NDSolve`SolutionData[{"Space"} -> {mesh}];
cdata = InitializePDECoefficients[vd, sd,
   "DiffusionCoefficients" -> {{-IdentityMatrix[2]}},
   "MassCoefficients" -> {{1}}, "LoadCoefficients" -> {{f0}}];
bcdata = InitializeBoundaryConditions[vd, sd, {{NeumannValue[g0, True]}}];
mdata = InitializePDEMethodData[vd, sd];

(*Discretization*)
dpde = DiscretizePDE[cdata, mdata, sd];
dbc = DiscretizeBoundaryConditions[bcdata, mdata, sd];
{load, stiffness, damping, mass} = dpde["All"];
mass0 = mass;
DeployBoundaryConditions[{load, stiffness}, dbc];

a = SparseArray[{Total[mass0]}];
L = ArrayFlatten[{{stiffness, Transpose[a]}, {a, 0.}}];
b = Flatten[Join[load, {0.}]];
v = LinearSolve[L, b, Method -> "Pardiso"][[1 ;; Length[mass]]];

solfun = ElementMeshInterpolation[{mesh}, v];

{Plot3D[solfun[x, y], {x, y} \[Element] mesh, ColorFunction -> "SunsetColors"],
 Plot[{solfun[1, y], Evaluate[(u1[1, y]) /. rul]}, {y, 0, 1},
  PlotStyle -> {{Blue}, {Yellow, Dashed}}, Frame -> True],
 Plot[{solfun[0, y], Evaluate[(u1[0, y]) /. rul]}, {y, 0, 1},
  PlotStyle -> {{Blue}, {Yellow, Dashed}}, Frame -> True]}

Figure 3

The absolute maximal difference between the two solutions is about $4\times 10^{-4}$. The constraint is satisfied:

{uint1, uint2} /. rul

Out[]= {1.02132*10^-8, 2.55143*10^-14}
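The maximal difference quoted above can be estimated, for example, on a uniform grid of sample points (a sketch, assuming solfun, u1, and rul as defined above; the grid is coarse and can be refined):

Max[Abs[Table[solfun[x, y] - (u1[x, y] /. rul),
   {x, 0.025, 0.975, 0.05}, {y, 0.025, 0.975, 0.05}]]]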

Note that the Tikhonov regularization method gives a solution with a constant shift relative to sol1.

Update 2. In my answer here the local discontinuous Galerkin (LDG) method has been used to solve a system of ODEs. Let us consider an LDG application to solving an elliptic PDE. The theory is discussed here. The implementation is very straightforward. In this example we use Euler polynomials and Gaussian quadrature for integration:

Get["NumericalDifferentialEquationAnalysis`"];

UT[m_, t_] := EulerE[m, t];

M0 = 3; nn = 8; ns = 32; h = 1/(ns - 1); np = 5; tmax = 1;
x[t_] = Table[Symbol["x" <> ToString[i]][t], {i, 1, ns}];
v[t_] = Table[Symbol["v" <> ToString[i]][t], {i, 1, ns}];
g0[x_] := -Sin[5 x];
xgrid = Table[k h, {k, 0, ns - 1}];
fddf = NDSolve`FiniteDifferenceDerivative[Derivative[2], xgrid, DifferenceOrder -> 2];
M = fddf@"DifferentiationMatrix";
x0 = Table[-g0[0], {i, ns}];
F = 10*Table[Exp[-(Power[y - 1/2, 2] + Power[t - 1/2, 2]) 50], {y, xgrid}];
force[t_] := mu;
v0 = Table[0, {i, ns}];
eqs = {x''[t] == -M . x[t] + force[t] - F, x'[1] == g[1], x'[0] == -g[0]};
f = Join[-M . x[t] + force[t] - F, v[t]];
B = Array[b, {ns}];
ini = Join[x0, B];

LDGODEs[M0_, nn_, ns_, np_, f_, ini_, tmax_] := Module[{dx = tmax/nn, A = Array[a, {M0 + 1, nn, 2 ns}]}, xl = Table[ l*dx, {l, 0, nn}]; psi[m_, k_, t_] := Piecewise[{{UT[ m, (2 t - xl[[k + 1]] - xl[[k]])/(xl[[k + 1]] - xl[[k]])], xl[[k]] <= t <= xl[[k + 1]]}, {0, True}}]; g = Table[ GaussianQuadratureWeights[np, xl[[i]], xl[[i + 1]]], {i, nn}]; xc = Table[(xl[[l - 1]] + xl[[l]])/2, {l, 2, nn + 1}]; dp = Table[ D[UT[m, (2 t - xl[[k + 1]] - xl[[k]])/(xl[[k + 1]] - xl[[k]])], t], {k, nn}]; rul = Join[ Table[x[t][[i]] -> Sum[a[m, k, i + ns] psi[m - 1, k, t], {m, 1, M0 + 1}, {k, 1, nn}], {i, ns}], Table[v[t][[i]] -> Sum[a[m, k, i] psi[m - 1, k, t], {m, 1, M0 + 1}, {k, 1, nn}], {i, 1, ns}]]; eq = Flatten[ Table[-Sum[ a[i + 1, n, ks] (Table[(psi[i, n, t] If[j == 0, 0, dp[[n]] /. m -> j]), {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {i, 0, M0}] - (Table[(f[[ks]] /. rul) psi[j, n, t], {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {j, 0, M0}, {n, 1, nn}, {ks, 2 ns}] + Table[Sum[ a[i + 1, n, ks] (psi[i, n, xl[[n + 1]]] psi[j, n, xl[[n + 1]]]), {i, 0, M0}] - Sum[ If[n < 2, ini[[ks]]/(M0 + 1), a[i + 1, n - 1, ks] psi[i, n - 1, xl[[n]]]] psi[j, n, xl[[n]]], {i, 0, M0}], {j, 0, M0}, {n, 1, nn}, {ks, 2 ns}]]; bc1 = Table[ Sum[a[i + 1, k, s] psi[i, k, 1.], {i, 0, M0}, {k, nn}] == g0[1.], {s, ns}]; bc2 = Table[ Sum[a[i + 1, k, ns + 2] psi[i, k, xc[[n]]], {i, 0, M0}, {k, nn}] - Sum[a[i + 1, k, ns + 1] psi[i, k, xc[[n]]], {i, 0, M0}, {k, nn}] == -h g0[xc[[n]]], {n, 1, nn}]; bc3 = Table[ Sum[a[i + 1, k, 2 ns] psi[i, k, xc[[n]]], {i, 0, M0}, {k, nn}] - Sum[a[i + 1, k, 2 ns - 1] psi[i, k, xc[[n]]], {i, 0, M0}, {k, nn}] == h g0[xc[[n]]], {n, 1, nn}]; bc4 = {Sum[ a[i + 1, n, ks] (Table[(psi[i, n, t]), {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {i, 0, M0}, {n, nn}, {ks, ns + 1, 2 ns}] == 0}; eqn = Table[eq[[k]] == 0, {k, Length[eq]}]; var = Join[Flatten[A], B, {mu}]; {vec, mat} = CoefficientArrays[Join[eqn, bc1, bc2, bc3, bc4], var]; sln = LeastSquares[mat, -vec]; sol = Table[var[[i]] -> sln[[i]], {i, Length[var]}]; sol]

Note that the final matrix mat is not square (mat // Dimensions gives {2097, 2081}), so it needs to be solved with LeastSquares. Solution:

ldgsol = LDGODEs[M0, nn, ns, np, f, ini, tmax]; // AbsoluteTiming

It takes about 30 s on my laptop. Visualization

lst = Table[{{x, s h}, 
    Evaluate[
     Sum[a[i + 1, k, ns + 1 + s] psi[i, k, x], {i, 0, M0}, {k, 
        nn}] /. sol]}, {x, 0, 1, .03333}, {s, 0, ns - 1}];

u = Interpolation[Flatten[lst, 1], InterpolationOrder -> 4];
{Plot3D[u[x, y], {x, 0, 1}, {y, 0, 1}, ColorFunction -> "SunsetColors",
  PlotLegends -> Automatic, AxesLabel -> Automatic],
 DensityPlot[u[x, y], {x, 0, 1}, {y, 0, 1}, ColorFunction -> "SunsetColors",
  PlotLegends -> Automatic]}

Figure 4

Update 3. We can improve the discontinuous Galerkin (LDG) code above by defining a square matrix and using LinearSolve as follows:

Clear["Global`*"]

Get["NumericalDifferentialEquationAnalysis`"];

UT[m_, t_] := EulerE[m, t];

M0 = 3; nn = 8; ns = 32; h = 1/(ns - 1); np = 5; tmax = 1;
x[t_] = Table[Symbol["x" <> ToString[i]][t], {i, 1, ns}];
v[t_] = Table[Symbol["v" <> ToString[i]][t], {i, 1, ns}];
g0[x_] := -Sin[5 x];
xgrid = Table[k h, {k, 0, ns - 1}];
M2 = NDSolve`FiniteDifferenceDerivative[Derivative[2], xgrid,
    DifferenceOrder -> Round[ns/2]]@"DifferentiationMatrix";
M1 = NDSolve`FiniteDifferenceDerivative[Derivative[1], xgrid,
    DifferenceOrder -> Round[ns/2]]@"DifferentiationMatrix";
v0 = Table[-g0[0], {i, ns}];
F = 10*Table[Exp[-(Power[y - 1/2, 2] + Power[t - 1/2, 2]) 50], {y, xgrid}];
force[t_] := mu;
f = Join[-M2 . x[t] + force[t] - F, v[t]];
B = Array[b, {ns}];
ini = Join[v0, B];
LDGODEs[M0_, nn_, ns_, np_, f_, ini_, tmax_] :=
  Module[{dx = tmax/nn, A = Array[a, {M0 + 1, nn, 2 ns}], B1 = Array[b1, {M0 + 1, nn}], B2 = Array[b2, {M0 + 1, nn}]},
   xl = Table[l*dx, {l, 0, nn}];
   psi[m_, k_, t_] := Piecewise[{{UT[m, (2 t - xl[[k + 1]] - xl[[k]])/(xl[[k + 1]] - xl[[k]])], xl[[k]] <= t <= xl[[k + 1]]}, {0, True}}];
   g = Table[GaussianQuadratureWeights[np, xl[[i]], xl[[i + 1]]], {i, nn}];
   xc = Table[(xl[[l - 1]] + xl[[l]])/2, {l, 2, nn + 1}];
   dp = Table[D[UT[m, (2 t - xl[[k + 1]] - xl[[k]])/(xl[[k + 1]] - xl[[k]])], t], {k, nn}];
   rul = Join[Table[x[t][[i]] -> Sum[a[m, k, i + ns] psi[m - 1, k, t], {m, 1, M0 + 1}, {k, 1, nn}], {i, ns}], Table[v[t][[i]] -> Sum[a[m, k, i] psi[m - 1, k, t], {m, 1, M0 + 1}, {k, 1, nn}], {i, 1, ns}]];
   eq1 = Flatten[Table[-Sum[a[i + 1, n, ks] (Table[(psi[i, n, t] If[j == 0, 0, dp[[n]] /. m -> j]), {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {i, 0, M0}] - (Table[(f[[ks]] /. rul) psi[j, n, t], {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {j, 0, M0}, {n, 1, nn}, {ks, ns}] + Table[Sum[a[i + 1, n, ks] (psi[i, n, xl[[n + 1]]] psi[j, n, xl[[n + 1]]]), {i, 0, M0}] - Sum[If[n < 2, ini[[ks]]/(M0 + 1), a[i + 1, n - 1, ks] psi[i, n - 1, xl[[n]]]] psi[j, n, xl[[n]]], {i, 0, M0}], {j, 0, M0}, {n, 1, nn}, {ks, ns}]];
   eq2 = Flatten[Table[-Sum[a[i + 1, n, ks] (Table[(psi[i, n, t] If[j == 0, 0, dp[[n]] /. m -> j]), {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {i, 0, M0}] - (Table[(f[[ks]] /. rul) psi[j, n, t], {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {j, 0, M0}, {n, 1, nn}, {ks, ns + 2, 2 ns - 1}] + Table[Sum[a[i + 1, n, ks] (psi[i, n, xl[[n + 1]]] psi[j, n, xl[[n + 1]]]), {i, 0, M0}] - Sum[If[n < 2, ini[[ks]]/(M0 + 1), a[i + 1, n - 1, ks] psi[i, n - 1, xl[[n]]]] psi[j, n, xl[[n]]], {i, 0, M0}], {j, 0, M0}, {n, 1, nn}, {ks, ns + 2, 2 ns - 1}]];

bc1 = Table[ Table[((M1 . x[t] /. rul)[[1]] + g0[t]) psi[j, n, t], {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]], {j, 0, M0}, {n, 1, nn}]; bc2 = Table[ Table[((M1 . x[t] /. rul)[[ns]] - g0[t]) psi[j, n, t], {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]], {j, 0, M0}, {n, 1, nn}]; bc3 = Table[ Sum[a[i + 1, k, s] psi[i, k, 1.], {i, 0, M0}, {k, nn}] - g0[1.], {s, ns}]; bc4 = {Sum[(a[i + 1, n, ks] + a[i + 1, n, ks + 1])/ 2 (Table[(psi[i, n, t]), {t, g[[n]][[All, 1]]}] . g[[n]][[All, 2]]), {i, 0, M0}, {n, nn}, {ks, ns + 1, 2 ns - 1}]};

var = Join[Flatten[A], B, {mu}]; eq = Join[eq1, eq2, Flatten[bc1], Flatten[bc2], bc3, bc4]; eqn = Table[eq[[k]] == var[[k]] 10^-16, {k, Length[eq]}]; {vec, mat} = CoefficientArrays[eqn, var]; sln = LinearSolve[mat, -vec, Method -> "Multifrontal"]; sol = Table[var[[i]] -> sln[[i]], {i, Length[var]}]; sol];

Numerical solution

ldgsol = LDGODEs[M0, nn, ns, np, f, ini, tmax]; // AbsoluteTiming

Compared to FEM, the solution on the boundaries at x = 0 and x = 1 is shown in Figure 5.

Therefore, the maximal error is about $4\times 10^{-4}$, the same as for the Euler wavelet collocation method.

Alex Trounev
  • I think your source has the wrong sign. – ConvexHull Jul 27 '22 at 10:09
  • Wrong sign compare to what? – Alex Trounev Jul 27 '22 at 11:16
  • Compared to the other solutions presented here. I recover your solution if the sign of f is changed. To be honest, the derivative in the normal direction is also not uniquely defined in this FEniCS example. How is the orientation of the normal vector defined? Always pointing outward? – ConvexHull Jul 27 '22 at 11:48
  • You are right, thank you very much; the post has been corrected with a new picture. – Alex Trounev Jul 27 '22 at 12:02
  • You're welcome! ;) – ConvexHull Jul 27 '22 at 12:03
  • Can you elaborate on why your matrix is not square? Generally LDG is implemented in the flux form and transformed via the Schur complement to the primal form. Do you solve the primal or the flux system? How did you choose the penalty parameter? I am not able to find a related parameter in your code. – ConvexHull Sep 23 '22 at 20:33
  • @ConvexHull The matrix mat happens not to be square, since my solver independently uses Euler polynomials, the piecewise function psi, and the collocation points xgrid. To make a square matrix we need some relation between M0, nn and ns. Please note that my solver uses a Lagrange multiplier mu in the line force[t_] := mu. – Alex Trounev Sep 23 '22 at 21:13
  • Thank you for the reply. However, I still have problems following your implementation. The choice of polynomial space should not have an influence on the overall system to be solved; Euler polynomials are no different from other commonly used polynomials. Anyway, nice work! – ConvexHull Sep 24 '22 at 08:15
  • @ConvexHull You can check that the LDG solution does not differ much from the FEM solution or from the Euler wavelet collocation method. Also, please note that my solver uses Gaussian quadrature instead of exact integrals. – Alex Trounev Sep 24 '22 at 10:44
  • Sure the methods are related but not similar. For example standard FE methods are $C^0$ and $H^1$ conform, whereas DG methods are not. Could you give me a hint where an important part of DG, the surface integral contribution, is calculated? Thank you! – ConvexHull Sep 24 '22 at 13:17
  • @ConvexHull Please see Update 3 of my answer, with LDG code generating a square matrix. The surface contribution is calculated in the second Table in eq1 and eq2. – Alex Trounev Oct 06 '22 at 15:34
  • Thank you for your effort. – ConvexHull Oct 06 '22 at 21:27
14

I have an approach that is similar to Henrik's but is a bit more streamlined and possibly more efficient.

Let's start with some definitions and the mesh:

f = 10*Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02];
g = -Sin[5*x];

Needs["NDSolve`FEM`"]
mesh = ToElementMesh[Rectangle[]];

Then there is an undocumented utility function that we can use here. Its purpose is to take a PDE, BCs, ICs, and a region and generate the discretized versions from them. I'll use the same names as before, so it should be clear what the function does.

{dpde, dbc, vd, sd, mdata} = 
  ProcessPDEEquations[{-Laplacian[u[x, y], {x, y}]
     == f + NeumannValue[g, True]}, u, {x, y} \[Element] mesh];
{load, stiffness, damping, mass} = dpde["All"];
DeployBoundaryConditions[{load, stiffness}, dbc];

This generates the same data as in Henrik's post. Now, we write a helper function that implements the integral constraint. We need, loosely speaking, the contribution of each node to the total area of the region. There is mesh["MeshElementMeasure"], which gives the area contribution of each element, but what we are looking for is the 'area' contribution 'surrounding' each mesh node, figuratively speaking. Henrik extracted that information from the mass matrix; I use the load vector. That should be a bit more efficient and needs less data massaging, but it is the same principle.

To get those values we use the discretization mechanism we already have and set the load coefficient to 1. If you were to sum over the discretized load vector, you'd get the area of the region. The constraint matrix contains the value we would like this integral to be, 0 in this case. Then I fill in the data structure that DeployBoundaryConditions needs.

FEMIntegralConstraint[value_, methodData_, vd_, sd_] := Module[
  {prec, dof, loadContribution, stiffnessContribution, initCoeffs, 
   constraintMatrix, constraintRows, constraintValueMatrix},      

  prec = methodData["Precision"];
  dof = methodData["DegreesOfFreedom"];

  (* no values are added to the load vector and stiffness matrix *)
  loadContribution = SparseArray[{}, {dof, 1}, N[0, prec]];
  stiffnessContribution = SparseArray[{}, {dof, dof}, N[0, prec]];

  (* init the load coefficient to 1 *)
  initCoeffs = InitializePDECoefficients[vd, sd, "LoadCoefficients" -> {{1}}];

  (* values inserted into the stiffness matrix, overwriting what was there *)
  constraintMatrix = Transpose[
    DiscretizePDE[initCoeffs, methodData, sd]["LoadVector"]];

  constraintRows = {1};
  (* values inserted into the load vector, overwriting what was there *)
    constraintValueMatrix = SparseArray[{{N[value, prec]}}];

  DiscretizedBoundaryConditionData[{loadContribution, 
    stiffnessContribution, constraintMatrix, constraintRows, 
    constraintValueMatrix,
    (* DOF, hanging nodes DOF, 
    lagrangian multipliers DOF *)
    {dof, 0, Length[constraintRows]}},
   (* scale factor value *)
   1]
  ]
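Before using the helper, one can verify the claim above that, with the load coefficient set to 1, the discretized load vector sums to the area of the region (a sketch using vd, sd, and mdata from above; unitLoad is just an illustrative name):

unitLoad = DiscretizePDE[
    InitializePDECoefficients[vd, sd, "LoadCoefficients" -> {{1}}],
    mdata, sd]["LoadVector"];
Total[unitLoad, 2]  (* ≈ 1. for the unit square *)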

With this in place we can now construct the integral constraint and deploy it. "Append" means these constraints are appended to the system matrices (as Lagrange multipliers):

dbc = FEMIntegralConstraint[0, mdata, vd, sd]
DeployBoundaryConditions[{load, stiffness}, dbc, 
 "ConstraintMethod" -> "Append" ]

The rest is the same:

v = LinearSolve[stiffness, load];
solfun3 = 
  ElementMeshInterpolation[{mesh}, Take[v, mdata["DegreesOfFreedom"]]];

We can check that the constraint worked:

NIntegrate[solfun3[x, y], {x, y} \[Element] mesh]
1.1645204900769326`*^-7

Which is quite acceptable, I think. Also there is no difference to the result Henrik provided.

Show[ContourPlot[solfun3[x, y], {x, y} \[Element] mesh, 
  PlotLegends -> Automatic, Contours -> 50, 
  ColorFunction -> "BlueGreenYellow"], 
 VectorPlot[
  Evaluate[Grad[solfun3[x, y], {x, y}]], {x, y} \[Element] mesh, 
  VectorColorFunction -> Hue, VectorScale -> {Small, 0.6, None}]]


One thing is that the result on your linked page seems to have a somewhat higher maximum value, but I may be wrong. If you have it installed, I'd check the values.
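One quick way to inspect the maximum nodal value of the solution (a sketch, using v and mdata from above):

Max[Take[v, mdata["DegreesOfFreedom"]]]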

Update:

For version 10 you may be lucky and can use this as a replacement for ProcessPDEEquations. If the vd does not work, you could try to get it from methodData["VariableData"] instead.

PDEtoMatrix[{pde_, \[CapitalGamma]___}, u_, r__] := 
 Module[{ndstate, feData, sd, vd, bcData, methodData, pdeData},
  {ndstate} =
   NDSolve`ProcessEquations[Flatten[{pde, \[CapitalGamma]}], u, 
    Sequence @@ {r}];
  sd = ndstate["SolutionData"][[1]]; vd = ndstate["VariableData"];
  feData = ndstate["FiniteElementData"];
  pdeData = feData["PDECoefficientData"];
  bcData = feData["BoundaryConditionData"];
  methodData = feData["FEMMethodData"];
  {DiscretizePDE[pdeData, methodData, sd], 
   DiscretizeBoundaryConditions[bcData, methodData, sd], vd, sd, 
   methodData}
  ]

An explanation for PDEtoMatrix can be found in the talk here.
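A usage sketch, mirroring the ProcessPDEEquations call above (f, g, and mesh are assumed to be defined as in the question):

{dpde, dbc, vd, sd, mdata} =
  PDEtoMatrix[{-Laplacian[u[x, y], {x, y}] == f + NeumannValue[g, True]},
   u, {x, y} \[Element] mesh];
{load, stiffness, damping, mass} = dpde["All"];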

user21
  • Is there any chance that some of this might not work in Mathematica 10 (or 10.2; I suppose I should have mentioned this at the start, my apologies!)? Direct usage of your code is not possible, e.g. the first list assignment does not have the same length on both sides. – Mefistofelis Feb 19 '19 at 18:18
  • @Mefistofelis, I no longer have a version 10 installed. Does Henrik's version work for you? – user21 Feb 20 '19 at 05:46
  • Correct. As you used the same names as Henrik (and thank you for that!), it is clear that the 'utility function' ProcessPDEEquations doesn't work for older versions, i.e. nothing is returned. However, I suppose one can combine both answers, taking your implementation of the integral constraint and using Henrik's way of doing the discretization. And then your way works perfectly! Thank you! – Mefistofelis Feb 20 '19 at 14:11
  • @Mefistofelis, yes, this should indeed work. Or, alternatively, you could roll your own function ProcessPDEEquations or call it something else; see the update. – user21 Feb 20 '19 at 14:35
  • @user21 Could you please provide the complete function call PDEtoMatrix[...] for this example? Thanks°! – Ulrich Neumann Feb 29 '24 at 09:39
  • @UlrichNeumann, just replace the head NDSolve[....] with PDEToMatrix[....]. Does that not work? – user21 Feb 29 '24 at 14:37
  • @user21 Thanks! I tried sol=PDEtoMatrix[{-Laplacian[u[x, y], {x, y}] == f + NeumannValue[g, True]}, u, {x, y} \[Element] mesh] and got a result, which doesn't give detailed information about the FEM-matrices. – Ulrich Neumann Feb 29 '24 at 14:54
  • @UlrichNeumann You should get a DiscretizedPDEData among other things. And you can extract the system matrices from that. See also the FEM Programming – user21 Feb 29 '24 at 20:42
  • @user21 Thanks again, I'll try it out – Ulrich Neumann Mar 01 '24 at 08:29
10

I'd like to add a finite difference method (FDM) based solution. I'll use pdetoae for the generation of difference equations.

f = 10 Exp[-(Power[x - 0.5, 2] + Power[y - 0.5, 2])/0.02];
g = -Sin[5 x];

With[{u = u[x, y]},
  eq = -Laplacian[u, {x, y}] == f;
  bc = {Table[D[u, var] == g /. var -> 1, {var, {x, y}}],
        Table[D[u, var] == - g /. var -> 0, {var, {x, y}}]}
  ];
points = 25; domain = {0, 1};
grid = Array[# &, points, domain];
difforder = 2;
ptoafunc = pdetoae[u[x, y], {grid, grid}, difforder];
(* Definition of pdetoae isn't included in this post,
   please find it in the link above. *)
ae = ptoafunc@eq;
aebc = ptoafunc@bc;
var = Outer[u, grid, grid, 1];
constbc = u[0, 0] == 0;

At this point we have points × points + points × 4 + 1 equations at hand, but only points × points unknown variables therein, so the equation system is over-determined. Usually the solution is to remove some of the equations from the system as I've done many times before, but this method doesn't work well in this case:

del = #[[2 ;; -2]] &;
{b, mat} = CoefficientArrays[{del /@ del@ae, 
      Delete[MapAt[del, aebc, {1, All}], {2, 2, 1}], constbc} // Flatten, 
    var // Flatten]; // AbsoluteTiming

sollst = LinearSolve[N@mat, -b]; // AbsoluteTiming

LinearSolve::nosol

To be honest, I'm not sure why this happens, but anyway, I've found a workaround. We just need to turn to LeastSquares:

{b, mat} = CoefficientArrays[{ae, aebc, constbc} // Flatten, 
    var // Flatten]; // AbsoluteTiming

sollst = LeastSquares[N@mat, -b]; // AbsoluteTiming    
solmat = Partition[sollst, points, points];
solfunc = ListInterpolation[solmat, {grid, grid}];

Show[ContourPlot[solfunc[x, y], {x, ##}, {y, ##}, PlotLegends -> Automatic, 
    Contours -> 50, ColorFunction -> "AvocadoColors"], 
   VectorPlot[Evaluate[Grad[solfunc[x, y], {x, y}]], {x, ##}, {y, ##}, 
    VectorColorFunction -> "TemperatureMap"]] & @@ domain


xzczd
  • Hi. Looking at your legend it seems the range is from 0 to 1.0 - can you check what the Integral over you solution is? Should be zero I think. – user21 Feb 20 '19 at 06:23
  • @user21 For points = 50 it's about 0.495172, but I think it's reasonable, because I'm not using constant integral as the constraint here, mine is constbc = u[0, 0] == 0;. – xzczd Feb 20 '19 at 06:28
  • I see; However, u[0,0]==0 is a Dirichlet condition or am I missing something? – user21 Feb 20 '19 at 06:53
  • @user21 Yeah, a Dirichlet b.c.. Since the analytic solution of this problem involves one constant, I think this should be enough to determine a particular solution? – xzczd Feb 20 '19 at 06:56
  • Yes, I think that is correct. I am not sure why the LinearSolve fails. To get to the integral you could use an iterative process to adjust the Dirichlet bc in such a way that the integral becomes 0. – user21 Feb 20 '19 at 07:05
  • One could also simply subtract the solution's average from the solution... – Henrik Schumacher Feb 20 '19 at 16:28 (a sketch follows below)
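A sketch of the mean subtraction suggested in the last comment, assuming solfunc from the answer above (solZeroMean is just an illustrative name):

mean = NIntegrate[solfunc[x, y], {x, 0, 1}, {y, 0, 1}]; (* the domain has area 1, so this is the average *)
solZeroMean[x_, y_] := solfunc[x, y] - mean;
NIntegrate[solZeroMean[x, y], {x, 0, 1}, {y, 0, 1}]  (* should now be close to 0 *)
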
2

Although the question is already a little older, I want to share an answer with two solution strategies using a Discontinuous Galerkin method (LDG):

  1. Modifying the equation

$$-\Delta u = f$$

to

$$-\Delta u + \varepsilon u = f$$

for some small $\varepsilon>0$, which can be interpreted as a regularization (e.g. Tikhonov). This corresponds roughly to adding $\varepsilon$ to the diagonal of your system matrix (jitter). A Mathematica sketch of this strategy is given at the end of this answer.

  2. Imposing an additional constraint

$$\int_\Omega u \,\mathrm{d}x=0,$$

using Lagrange multiplier. Then your system would be $$\begin{bmatrix} A & 1 \\ 1^T & 0 \end{bmatrix} \begin{bmatrix} u \\ \lambda \end{bmatrix} = \begin{bmatrix} f \\ 0 \end{bmatrix}.$$

See also: Stack Computational Science

Results

  • Mesh with 4x4 elements
  • Polynomial degree of 6
  • $\epsilon = 1\times 10^{-6}$

First method


Second method

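Since no Mathematica code accompanies this answer, here is a minimal sketch of the first (regularization) strategy using the built-in FEM solver rather than the author's LDG implementation; f, g, and mesh are assumed to be defined as in the question. The femibcnd warning may still be issued, but the $\varepsilon u$ term removes the constant-mode singularity. With incompatible data the regularized solution carries a large constant offset (of the order of the compatibility defect divided by $\varepsilon$), which can be removed by subtracting its mean:

eps = 10^-6;  (* regularization parameter, as in the answer *)
ureg = NDSolveValue[{-Laplacian[u[x, y], {x, y}] + eps*u[x, y] ==
     f + NeumannValue[g, True]}, u, {x, y} \[Element] mesh];
(* remove the constant offset for comparison with the other answers *)
mean = NIntegrate[ureg[x, y], {x, y} \[Element] mesh];
ContourPlot[ureg[x, y] - mean, {x, y} \[Element] mesh, Contours -> 50,
 PlotLegends -> Automatic]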

ConvexHull
  • Welcome. It would be nice if you could add the code for 1.; 2. is essentially the answer Henrik has given, no? – user21 Aug 05 '22 at 01:10
  • @user21 My contribution focuses more on the numerical and mathematical part. It is not directly done in Mathematica, so unfortunately the answer is no. The second method is more or less well described by Henrik. – ConvexHull Aug 05 '22 at 11:08
  • You understand that the point of this site is to share Mathematica code. If the OP wanted a numerical or mathematical description he/she would have asked elsewhere. – user21 Aug 05 '22 at 15:06
  • To put it in other words: showing plots created in something other than Mathematica is not useful on this site, because no one can replicate that. You could use the online version of the Wolfram Language and share your code. – user21 Aug 05 '22 at 15:13
  • Can you elaborate on how your version is different from Henrik's? How is it advantageous? – user21 Aug 05 '22 at 15:15
  • @user21 I understand that this is not directly useful for Mathematica users. However, the question is asked quite generally, and the old answers here are really vague and inaccurate (from a mathematical point of view). Note that I have just recently pointed out that the second best contribution with 14 likes was answered incorrectly, see the comments. Moreover, for Mathematica users the link to "Scicomp" may be helpful. If such contributions are not welcome here, then good luck. – ConvexHull Aug 05 '22 at 15:50
  • The first method is more advantageous in the sense that it can be generalized to other regularizations. Method two is simply a Lagrange multiplier version. There is simply nothing to compare or elaborate. I do not understand your point. – ConvexHull Aug 05 '22 at 16:00
  • @ConvexHull I looked up samples of your calculations on YouTube. Could you explain why a discontinuous Galerkin method implementation in MATLAB is preferable to one in Mathematica? – Alex Trounev Aug 19 '22 at 09:46
  • Actually, I have no preferences regarding the programming language. The current implementation in MATLAB has historical reasons. If I would have more time I would reimplement all my stuff in Julia. – ConvexHull Aug 19 '22 at 16:03
  • @AlexTrounev Why do you prefer Mathematica? – ConvexHull Aug 19 '22 at 16:43
  • @ConvexHull Actually, I am a Fortran and MATLAB user, while Mathematica is my hobby. Could you add code for the first and second method? I will translate the code for Mathematica users. – Alex Trounev Aug 20 '22 at 03:37
  • @AlexTrounev I would like to leave it as a mathematical contribution. The code is currently not encapsulated and part of a larger non-public framework. – ConvexHull Aug 20 '22 at 07:07
  • @ConvexHull Ok! Thank you. I will try to do it myself. – Alex Trounev Aug 20 '22 at 07:55
2

Addendum (a little bit late) to Henrik Schumacher's remarkable answer.

Henrik showed that LinearSolve[stiffness, load] (the system resulting from the NDSolve discretization) has no solution.

If we consider the equivalent symmetric (normal-equation) problem

LinearSolve[Transpose[stiffness] . stiffness, Transpose[stiffness] . load]

Mathematica evaluates a solution of the underlying problem:

ui = LinearSolve[Transpose[stiffness] . stiffness ,Transpose[stiffness] . load];
solfun = ElementMeshInterpolation[{mesh}, ui ];
Plot3D[solfun[x, y], {x, y} \[Element] mesh ]


Offset correction might be incorporated later.

Perhaps choosing the symmetrized equation system is a workaround to avoid the singularity issue of stiffness?

Ulrich Neumann