
The associated Legendre ODE is given by

$$ \left( (1-x^2) f'(x) \right)' - \frac{m^2}{1-x^2} f(x) = \lambda f(x)$$

The eigenfunctions have certain properties that I would like to understand by looking NOT at the eigenfunctions and eigenvalues but only by referring to the ODE itself. So please pretend that you are not aware of the fact that you can construct the two things analytically.

Then, we have that the eigenfunctions are all $C^{\infty}$ if we exclude the Legendre functions of the second kind. Can we see this directly from the differential equation? For the Legendre polynomials this probably follows if we require boundedness of the solutions at the endpoints of the interval, which excludes the Legendre functions of the second kind. But how does this follow for the associated Legendre polynomials, where the ODE has an additional singular term that could cause additional trouble? Nevertheless, all the solutions are proper, well-behaved functions, so we should somehow be able to see this from the ODE itself.

Furthermore, if we increase $m$, then the ground-state eigenvalue gets higher: if we start with $m=0$, the ground state is $\lambda=0$; for $m=1$ we have $\lambda = 1(1+1) = 2$; and for $m=2$ we have $\lambda = 2(2+1) = 6$. Can we see directly from the differential equation that this is the case?
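A small script can illustrate this pattern (my own sketch, not part of the question). Using the positive sign convention $L_m f = -\left((1-x^2)f'\right)' + \frac{m^2}{1-x^2}f$, the eigenfunctions turn out to be $(1-x^2)^{m/2}\frac{d^m}{dx^m}P_n$ with eigenvalue $n(n+1)$; since $\frac{d^m}{dx^m}P_n = 0$ for $n < m$, the first surviving eigenvalue for each $m$ is $m(m+1)$:

```python
# Sketch (not from the original post): compute Legendre polynomials P_n
# exactly with Bonnet's recurrence, take m-th derivatives, and record the
# first n for which d^m/dx^m P_n is non-zero.  That n equals m, so the
# lowest eigenvalue n(n+1) for each m is m(m+1): 0, 2, 6, ...
from fractions import Fraction

def legendre(n):
    """Coefficient list of P_n, lowest degree first."""
    p_prev, p = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p_prev
    for k in range(1, n):
        # Bonnet: (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}
        x_p = [Fraction(0)] + p                      # x * P_k
        prev = p_prev + [Fraction(0)] * (len(x_p) - len(p_prev))
        p_prev, p = p, [((2 * k + 1) * a - k * b) / Fraction(k + 1)
                        for a, b in zip(x_p, prev)]
    return p

def deriv(poly):
    return [Fraction(i) * c for i, c in enumerate(poly)][1:] or [Fraction(0)]

ground = []
for m in range(3):
    n = 0
    while True:
        g = legendre(n)
        for _ in range(m):
            g = deriv(g)
        if any(c != 0 for c in g):   # first non-vanishing m-th derivative
            ground.append((m, n * (n + 1)))
            break
        n += 1

print(ground)  # lowest eigenvalue for m = 0, 1, 2
```

Exact rational arithmetic avoids any floating-point question about whether a derivative is "really" zero.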

1 Answer


The equation must be considered in the context of the Hilbert space $L^{2}(-1,1)$, because the filter of requiring eigenfunctions to be in $L^{2}$ is what eliminates the non-regular solutions, and it is what determines the eigenvalues. For $m = 1,2,3,\cdots$, the operators $$ L_{m}f = -\frac{d}{dx}\left[(1-x^{2})\frac{d}{dx}f\right] + \frac{m^{2}}{1-x^{2}}f $$ are selfadjoint on the domain $\mathcal{D}(L_{m})$ consisting of all twice absolutely continuous functions $f$ for which $L_{m}f \in L^{2}$. No endpoint conditions are needed or even possible.

So you can integrate by parts, assured that all of the evaluation terms vanish, to obtain $$ (L_m f,f) = \|\sqrt{1-x^{2}}f'\|^{2}+\left\|\frac{m}{\sqrt{1-x^{2}}}f\right\|^{2}. $$ It is automatically true that $f \in \mathcal{D}(L_{m})$ implies $$ \sqrt{1-x^{2}}f',\; \frac{1}{\sqrt{1-x^{2}}}f \in L^{2}(-1,1). $$ Hence, the product of these two expressions is in $L^{1}(-1,1)$, which gives $ff' \in L^{1}$ and thereby guarantees the existence of the endpoint limits in $$ \int_{-1}^{1}ff'\,dx = \left.\frac{f^{2}(x)}{2}\right|_{-1}^{1}. $$ These endpoint limits are also $0$ because there are no non-zero boundary functionals; alternatively, you can appeal to the fact that $f/\sqrt{1-x^{2}}\in L^{2}$ in order to conclude that $f^{2}(\pm 1)$ cannot be non-zero.

So quite a lot can be said just knowing that $f \in \mathcal{D}(L_m)$. And you can carry the analysis further by assuming $f \in \mathcal{D}(L_m)$ is also a solution of $L_m f = \lambda f$ for some $\lambda$.
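As a concrete sanity check of the integration-by-parts identity (my own sketch, not part of the original answer): for the test function $f=(1-x^2)^2$, which lies in $\mathcal{D}(L_m)$, every integrand in the identity is a polynomial, so both sides can be compared exactly with rational arithmetic.

```python
# Sketch (my own check): verify
#   (L_m f, f) = ||sqrt(1-x^2) f'||^2 + m^2 ||f / sqrt(1-x^2)||^2
# exactly for f = (1-x^2)^2, where every integrand is a polynomial.
from fractions import Fraction

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pderiv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def pint(p):
    """Exact integral of the polynomial over [-1, 1]; odd powers drop out."""
    return sum(Fraction(2, i + 1) * c for i, c in enumerate(p) if i % 2 == 0)

w = [Fraction(1), Fraction(0), Fraction(-1)]  # 1 - x^2
f = pmul(w, w)                                # f = (1-x^2)^2, so f/(1-x^2) = w
fp = pderiv(f)

for m in range(4):
    # L_m f = -((1-x^2) f')' + m^2 * f/(1-x^2)
    Lf = padd([-c for c in pderiv(pmul(w, fp))],
              [Fraction(m * m) * c for c in w])
    lhs = pint(pmul(Lf, f))                                  # (L_m f, f)
    rhs = pint(pmul(w, pmul(fp, fp))) \
          + Fraction(m * m) * pint(pmul(w, pmul(w, w)))      # the two norms
    assert lhs == rhs

print("quadratic-form identity holds exactly for m = 0..3")
```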

Another important part of classical analysis for solving ODEs is the Method of Frobenius: http://en.wikipedia.org/wiki/Frobenius_method . This classical method gives series solutions for equations with regular singular points, which nearly all of the classical Sturm-Liouville eigenvalue problems have, at least in the finite plane. For example, $x=0$ is a regular singular point of $$ p(x)y''+q(x)y'+r(x)y = 0 $$ if $p$, $q$, $r$ have power series expansions around $x=0$, and in the normalized equation $$ y''+\frac{q}{p}y'+\frac{r}{p}y = 0 $$ the coefficient $q/p$ has no worse than an order 1 pole and $r/p$ no worse than an order 2 pole. Then you can get an approximation for the behavior of at least one solution by solving Euler's equation $x^{2}y''+axy'+by=0$, where $a$ and $b$ are the coefficients of the highest order singular terms of $q/p$ and $r/p$, respectively. This leads to at least one solution of the form $x^{\mu}\sum_{n=0}^{\infty}a_{n}x^{n}$, where $\mu$ is the root of the indicial equation $\mu(\mu-1)+a\mu+b=0$ with the largest real part. More generally, the substitution $y=x^{\mu}w$ leads to a new equation for $w$ which has a power series solution at $x=0$. So that's the classical method that was invented about 140 years ago.
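A minimal computational companion (my addition, not from the original answer): the indicial equation attached to Euler's model equation can be solved directly, and plugging in the values $a=1$, $b=-m^2/4$ that arise for the associated Legendre equation near $x=1$ (as derived further down in this answer) recovers the roots $\pm m/2$.

```python
# Sketch (my addition): roots of the indicial equation
#   mu(mu - 1) + a*mu + b = 0,  i.e.  mu^2 + (a - 1)*mu + b = 0,
# attached to Euler's equation x^2 y'' + a x y' + b y = 0.
import cmath

def indicial_roots(a, b):
    disc = cmath.sqrt((a - 1) ** 2 - 4 * b)
    return ((1 - a) + disc) / 2, ((1 - a) - disc) / 2

# Associated Legendre near x = 1: a = 1, b = -m^2/4  ->  roots +- m/2
for m in range(4):
    r1, r2 = indicial_roots(1.0, -m * m / 4.0)
    print(m, r1.real, r2.real)
```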

The power of the Method of Frobenius in this case is two-fold:

  1. It motivates a substitution $f=(1-x^{2})^{m/2}g$ which greatly simplifies the equation, and from which it can be seen directly that differentiating a solution $y=g$ for some $m$ and $\lambda$ gives a solution $y=g'$ for $m+1$ and the same $\lambda$.

  2. The substitution leads to an equation which admits power series solutions. Because of the symmetry at $\pm 1$, the new equation admits entire solutions which happen to be polynomials for specific $\lambda$. A direct power series analysis shows that only the polynomial ones are acceptable solutions in $\mathcal{D}(L_m)$.

To carry out the Method of Frobenius, start with the eigenfunction equation: $$ (1-x^{2})f''-2xf'-\frac{m^{2}}{1-x^{2}}f+\lambda f = 0 \\ (x^{2}-1)f''+2xf'-\frac{m^{2}}{x^{2}-1}f-\lambda f = 0 \\ f''+\frac{2x}{(x-1)(x+1)}f'-\left[\frac{m^{2}}{(x-1)^{2}(x+1)^{2}}+\frac{\lambda}{(x-1)(x+1)}\right]f = 0. $$ (Note: I have negated your eigenvalue parameter because $L_{m}$ is a positive operator; so $L_{m}f=\lambda f$ leads to $\lambda > 0$ using the negative of your $\lambda$.) Only the highest order terms are initially considered in this method. For example, consider the equation near $x=1$: $$ f'' + \left[\frac{1}{x-1}+\cdots\right]f'+\left[-\frac{m^{2}}{4(x-1)^{2}}+\cdots\right]f = 0. $$ This determines a form of solution $f=(x-1)^{\alpha}g$ where $\alpha$ satisfies the indicial equation $$ \alpha(\alpha-1)+\alpha - \frac{m^{2}}{4} = 0 \\ \alpha^{2}-\frac{m^{2}}{4} = 0 \\ \alpha = \pm \frac{m}{2}. $$ Because the difference of these roots is an integer, only the one with the largest real part (i.e., $\alpha=m/2$) is guaranteed in general to lead to a solution of the form $$ f(x)= (1-x)^{m/2}\sum_{n=0}^{\infty}a_{n}(1-x)^{n}. $$ So classical considerations suggest a substitution of the form $$ f(x) = (1-x^{2})^{m/2}g(x). $$ This substitution leads to a simpler equation, which is also sometimes called the Associated Legendre Equation: $$ (1-x^{2})g''-2x(m+1)g'-m(m+1)g + \lambda g = 0. $$ I believe it was in this form of the equation that it was discovered that differentiating the equation leads to another equation of the same form, but with a different $m$. For example, differentiate once and you get a new equation in $h=g'$: $$ (1-x^{2})h''-2x(m+2)h'-(m+1)(m+2)h + \lambda h = 0. $$ You'll notice that the new equation is the same as the original, but with $m$ replaced by $m+1$.
So I think you can see how taking derivatives of solutions of the base Legendre equation with $m=0$, $$ (1-x^{2})g''-2xg'+\lambda g = 0, $$ leads to solutions of all of the higher-order equations. All you have to do is multiply the derivatives by the factor $(1-x^{2})^{m/2}$ in order to obtain full solutions of your original equation. Explicitly, if $P_{n}$ is the Legendre polynomial of degree $n$, then $$ (1-x^{2})^{m/2}\frac{d^{m}}{dx^{m}}P_{n}(x) $$ is a solution of your equation.
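The two claims above — that $g=\frac{d^m}{dx^m}P_n$ solves the substituted equation with $\lambda=n(n+1)$, and that differentiating a solution for $m$ gives one for $m+1$ — can be verified exactly with rational arithmetic. This is my own sketch, not part of the original answer:

```python
# Sketch (my check): g = d^m/dx^m P_n solves the substituted equation
#   (1-x^2) g'' - 2(m+1) x g' + [n(n+1) - m(m+1)] g = 0
# for every 0 <= m <= n.  All arithmetic is exact (Fractions).
from fractions import Fraction

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pderiv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def legendre(n):
    """Coefficient list of P_n via Bonnet's recurrence, lowest degree first."""
    p_prev, p = [Fraction(1)], [Fraction(0), Fraction(1)]
    if n == 0:
        return p_prev
    for k in range(1, n):
        x_p = [Fraction(0)] + p
        prev = p_prev + [Fraction(0)] * (len(x_p) - len(p_prev))
        p_prev, p = p, [((2 * k + 1) * a - k * b) / Fraction(k + 1)
                        for a, b in zip(x_p, prev)]
    return p

w = [Fraction(1), Fraction(0), Fraction(-1)]  # 1 - x^2
for n in range(6):
    g = legendre(n)
    lam = n * (n + 1)
    for m in range(n + 1):
        gp, gpp = pderiv(g), pderiv(pderiv(g))
        resid = padd(pmul(w, gpp),
                     padd([Fraction(-2 * (m + 1)) * c
                           for c in [Fraction(0)] + gp],       # -2(m+1) x g'
                          [Fraction(lam - m * (m + 1)) * c for c in g]))
        assert all(c == 0 for c in resid)
        g = gp  # a solution for m differentiates to a solution for m+1

print("substituted equation verified for n <= 5, 0 <= m <= n")
```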

Disintegrating By Parts
  • @TobiasHurth : The details I gave in the last part are for the equation where $m^{2}/(1-x^{2})$ is present, and the operator version for $L_{m}$ includes the same term. The operator version shows you that the eigenvalues of $L_{m}$ are positive. What additional eigenvalues are you thinking of? – Disintegrating By Parts Dec 22 '14 at 01:52
  • @TobiasHurth : The solutions of $L_0 f=\lambda f$ are $P_0,P_1,P_2,P_3,\cdots$ with eigenvalues $0,1(2),2(3),3(4),\cdots$. The solutions of $L_1 f=\lambda f$ are $(1-x^{2})^{1/2}P_{n}'$ with eigenvalues $0,1(2),2(3),3(4),\cdots$, except that the first one drops out because $P_0 ' =0$. Similarly, $P_0''=P_1''=0$, and so on. – Disintegrating By Parts Dec 22 '14 at 02:10
  • @TobiasHurth : I only looked at $P_{n}$ at the very end. Everything else did not look at specific eigenvalues and eigenfunctions, but you will need to look at solutions at some point. That said, $\mathcal{D}(L_{m})$ is invariant for $m \ge 1$, which comes from the form domain. And you can see that $(L_{m+1}f,f)=(L_{m}f,f)+(2m+1)\|(1-x^{2})^{-1/2}f\|^{2}$. That alone tells you that the lowest eigenvalue increases with $m$, because the eigenfunction corresponding to the minimum eigenvalue minimizes $(L_m f,f)/(f,f)$. I think that's right? – Disintegrating By Parts Dec 22 '14 at 02:38
  • @TobiasHurth : Ah, and that last fact I gave you then implies that the derivative of the eigenfunction with lowest eigenvalue for $L_m$ must be $0$, or it would violate the operator identity. So, I guess you can deduce that you must have polynomials! Very strange. Of course, you have to start by assuming there is some eigenvalue. – Disintegrating By Parts Dec 22 '14 at 02:45
  • If $\lambda$ is the minimum point of the spectrum and is an eigenvalue, then it minimizes the form. So, yes, when you have discrete spectrum, Rayleigh-Ritz still applies. More generally, if the spectrum is bounded below, the infimum of $(Lf,f)/(f,f)$ is the smallest point of the spectrum. And you can say that if the minimum of $(Lf,f)/(f,f)$ is achieved at some $f_0$ for a selfadjoint $L$, then $Lf_0=\lambda f_0$ where $\lambda=(Lf_0,f_0)/(f_0,f_0)$. – Disintegrating By Parts Dec 22 '14 at 03:01
  • @TobiasHurth : Peel them off, one at a time by minimizing over the things orthogonal to the first eigenfunctions. This works where you have discretely separated eigenvalues at the bottom, which is the usual case for Quantum where you have bound states at the bottom energy levels. The eigenvalues may cluster as you approach the continuous spectrum, but they're discretely separated from below. – Disintegrating By Parts Dec 22 '14 at 03:06
  • @TobiasHurth : I have pointed you to this before, but I posted this because of so much confusion about $L_0$. I show that the boundary functionals $A_{\pm}$ $B_{\pm}$ are asymptotic limits, and every $f \in \mathcal{D}(L_0)$ has the asymptotic expansion: $f(x) = A_{\pm}(f)\frac{1}{2}\ln\left(\frac{1+x}{1-x}\right)+B_{\pm}(f)+o\left(\sqrt{1-x^{2}}\ln\left(\frac{1+x}{1-x}\right)\right)$. So, yes, boundedness is equivalent to the two endpoint conditions $A_{\pm}(f)=0$, which gives a selfadjoint Sturm-Liouville problem. – Disintegrating By Parts Dec 22 '14 at 03:26
  • The place where I give you the asymptotics for the domain: http://math.stackexchange.com/questions/886775/selfadjoint-restrictions-of-legendre-operator-fracddx1-x2-fracddx – Disintegrating By Parts Dec 22 '14 at 03:29
  • @TobiasHurth : And you as well. – Disintegrating By Parts Dec 22 '14 at 03:33
  • @TobiasHurth : I was stating equivalents, not proving anything. Essentially selfadjoint: The closure of the minimal operator equals the maximal operator. So you can close from $\mathcal{C}_{0}^{\infty}(-1,1)$ to get the unique selfadjoint extension, which is what enables you to integrate by parts freely. Equivalently, these operators are in the limit point case at each endpoint. Adding the condition $\sqrt{1-x^{2}}f'\in L^{2}(-1,1)$ for $L_0$ makes $L_0$ selfadjoint because it also defines the Friedrichs extension. The Friedrichs extension for $L_m$, $m > 1$ must be the unique s.a. extension. – Disintegrating By Parts Dec 22 '14 at 13:51
  • @TobiasHurth : I happened to just be checking online. Yes, you integrate by parts with $C_{0}^{\infty}(-1,1)$ functions and you get the identity. Then the identity extends to everything in the graph of the maximal operator because the closure of the minimal is the maximal when it is essentially selfadjoint. That means convergence in the graph implies convergence of $L_{m}f_{n}$, of $\sqrt{1-x^{2}}f_{n}'$ and of $f_{n}/\sqrt{1-x^{2}}$, and the final identity persists. – Disintegrating By Parts Dec 23 '14 at 00:02
  • @TobiasHurth : It follows because $(L_m f,f)=\|\sqrt{1-x^{2}}f'\|^{2}+m^{2}\|f/\sqrt{1-x^{2}}\|^{2}$ for $f \in \mathcal{C}_{0}^{\infty}(-1,1)$. Hence, $\|\sqrt{1-x^{2}}f'\|^{2} \le (L_m f,f) \le \|L_m f\|\,\|f\| \le \frac{1}{2}\left(\|L_m f\|^{2}+\|f\|^{2}\right)$. – Disintegrating By Parts Dec 23 '14 at 16:10
  • @TobiasHurth : $\sqrt{1-x^{2}}f_n'$ converges in $L^{2}$ implies $f_n'$ converges in $L^{2}[-1+\delta,1-\delta]$ for any $\delta > 0$. $f_n(x)-f_n(0)=\int_{0}^{x}f_{n}'(t)\,dt$. You have convergence of the second derivative, too. This is why ODEs are soooo much nicer than PDEs. The maximal operator consists of all twice absolutely continuous functions with $L_m f \in L^{2}(-1,1)$. Differentiation is closed on its natural domain. $L_m$ is closed on its natural domain. – Disintegrating By Parts Dec 23 '14 at 16:22
  • @TobiasHurth : Differentiation is closed on $L^{2}[a,b]$ for all $[a,b]\subset (-1,1)$. In fact, second differentiation is closed on its natural domain, which means you don't even need a go-between. You use integration to show it, and you should go through it once for yourself. – Disintegrating By Parts Dec 23 '14 at 16:42
  • @TobiasHurth : You have more than what you wrote. You have $\sqrt{1-x^{2}}f_n'$ converges in $L^{2}$ to, say, $h$. Then $f_{n}'$ converges in $L^{2}[-1+\delta,1-\delta]$ to $h/\sqrt{1-x^{2}}$. And $f_n$ converges to $f$ in $L^{2}[-1+\delta,1-\delta]$. That gives you what you want: $h/\sqrt{1-x^{2}}=f'$ or $h=\sqrt{1-x^{2}}f'$. – Disintegrating By Parts Dec 23 '14 at 18:19
  • @TobiasHurth : I'll repeat. $\sqrt{1-x^{2}}f_{n}'$ converges to some $h \in L^{2}$. And $f_{n}$ converges in $L^{2}$ to some $f$. So $(1/\sqrt{1-x^{2}})\sqrt{1-x^{2}}f_{n}'=f_{n}'$ converges in $L^{2}[-1+\delta,1-\delta]$ to $h/\sqrt{1-x^{2}}$ while $f_{n} \rightarrow f$ in $L^{2}[-1+\delta,1-\delta]$. Because differentiation is closed, $f$ is a.c. with $f'=h/\sqrt{1-x^{2}}$ on every interval $[-1+\delta,1-\delta]$. So $f'=h/\sqrt{1-x^{2}}$. Hence, $\sqrt{1-x^{2}}f_{n}'\rightarrow h=\sqrt{1-x^{2}}f'$. – Disintegrating By Parts Dec 23 '14 at 18:30
  • @TobiasHurth : You're ignoring the first statement: $\sqrt{1-x^{2}}f_{n}'$ converges in $L^{2}[-1,1]$ to some $h$. And we showed that $h=\sqrt{1-x^{2}}f'$ must hold. So $\sqrt{1-x^{2}}f_{n}'$ converges in $L^{2}[-1,1]$ to $\sqrt{1-x^{2}}f'$. – Disintegrating By Parts Dec 23 '14 at 18:35
  • @TobiasHurth : Suppose $f_{k}\in\mathcal{D}(L_{m})\rightarrow f$ and $L_{m}f_{k} \rightarrow g$. Then $(L_{m}(f_{k}-f_{j}),f_{k}-f_{j})=|\sqrt{1-x^{2}}(f_{k}'-f_{j}')|^{2}+|\frac{m}{\sqrt{1-x^{2}}}(f_{k}-f_{j})|^{2}$ converges to $0$ as $k,j\rightarrow\infty$. – Disintegrating By Parts Mar 02 '15 at 00:29
  • @TobiasHurth : $\int_{-1+\delta}^{1-\delta}|f_{n}'|^{2}dx \le C(\delta)\int_{-1}^{1}|\sqrt{1-x^{2}}f_{n}'|^{2}dx$. – Disintegrating By Parts Mar 02 '15 at 01:24
  • @TobiasHurth : Define $L = \frac{d}{dx} : \mathcal{D}(L)\subset L^{2}[a,b]\rightarrow L^{2}[a,b]$, where $\mathcal{D}(L)$ consists of all $f \in L^{2}[a,b]$ which are absolutely continuous with $f' \in L^{2}[a,b]$. Then $L$ is a closed linear operator (i.e., has closed graph). – Disintegrating By Parts Mar 02 '15 at 01:28
  • @TobiasHurth : Knowing that $L$ is closed as I just described gives you this result: If $\{ f_{n} \} \subset \mathcal{D}(L)$ with $f_{n} \rightarrow f$ and $f_{n}'=Lf_{n}\rightarrow g$, then $f \in \mathcal{D}(L)$ and $Lf=g$. – Disintegrating By Parts Mar 02 '15 at 01:30
  • @TobiasHurth It's a useful way to bootstrap by starting with a basic knowledge of $\frac{d}{dx}$ on $L^{2}[a,b]$. You can start with $L$ on $\mathcal{C}^{\infty}_{c}$, show $L$ is symmetric, conclude that it's closable, and then use the adjoint relation as a weak equation to prove $\mathcal{D}(L^{\star})$ is the set of absolutely continuous $f \in L^{2}[a,b]$ for which $f' \in L^{2}$. I like clean rigorous arguments like that because they extend so well. – Disintegrating By Parts Mar 02 '15 at 01:40
  • I usually define $L_{0}$ to have domain consisting of $C^{\infty}_{0}$ functions, $L_{\min}$ its closure, and $L_{\max}$ its adjoint. My notation is probably non-standard. – Disintegrating By Parts Mar 02 '15 at 01:48
  • Did you look at my derivation of asymptotics for the ordinary Legendre operator? I pointed you to that a couple of times. The assumption that $Lf = -((1-x^{2})f')' +m^{2}/(1-x^{2})f$ and $f$ are both in $L^{2}$ forces the absolute convergence of $(Lf)g-f(Lg)$ for $f,g\in\mathcal{D}(L)$, and of $(Lf)f$, etc., which forces the existence of limits of $\int_{-1+\delta}^{1-\delta}(Lf)f\,dx$ as $\delta\downarrow 0$. Legendre's identity $(Lf)g-f(Lg) = \frac{d}{dx}[(1-x^{2})(fg'-f'g)]$ gives the existence of the limits $\lim_{x\rightarrow\pm 1}(1-x^{2})(fg'-f'g)$ for all $f,g\in\mathcal{D}(L)$, just as one example. – Disintegrating By Parts Mar 06 '15 at 06:01
  • @TobiasHurth : (See previous remark also.) At some point, you need to spend a little time studying endpoint conditions because that's where all the information is concerning limit point and limit circle cases, as well as the information for formulating well-posed selfadjoint operators. – Disintegrating By Parts Mar 06 '15 at 06:04
  • @TobiasHurth : I showed the existence of various endpoint limits in this post (look at the solution I posted and how I proved existence of limits.) This is always the case for Sturm-Liouville. http://math.stackexchange.com/questions/886775/selfadjoint-restrictions-of-legendre-operator-fracddx1-x2-fracddx . – Disintegrating By Parts Mar 06 '15 at 14:33
  • @TobiasHurth : If a limit of $f$ exists and that limit depends continuously on the graph norm of the maximal operator $\mathcal{D}(L_{0}^{\star})$, then that limit would always have to be $0$. Why? That's part of the general theory of endpoint conditions for an essentially selfadjoint Sturm-Liouville operator such as $Lf =-((1-x^{2})f')'+m^{2}f/(1-x^{2})$. So, when you start multiplying $L^{2}$ things together because of graph properties ... you get limits that are going to be $0$ at endpoints. – Disintegrating By Parts Mar 06 '15 at 16:43
  • @TobiasHurth : You definitely get the existence of limits $(1-x^{2})(f'g-fg')$ at $\pm 1$ for all $f,g \in \mathcal{D}(L_{\max})$, and those limits are continuous linear functionals on the graph of $L_{\max}$ which vanish on the domain of $L_{0}$. So, those limits must be $0$ because, if not, they would give rise to 2-dimensional $L^{2}(-1,0]$ or $L^{2}[0,1)$ eigenspaces with non-real eigenvalues, and we know that does not happen. – Disintegrating By Parts Mar 06 '15 at 16:53
  • (1) You know about $L_{0}$, $L_{\min}=\overline{L_{0}}$ and $L_{\max}=L_{0}^{\star}$. $L_{\min} \preceq L_{\max}$ and $L_{\min}$ is symmetric. The domain of $L_{\max}$ is all twice absolutely cont. $f \in L^{2}(-1,1)$ for which $L_{\max}f \in L^{2}$. (2) Linear functionals $\Phi$ on $\mathcal{G}(L_{\max})$, continuous and vanishing on $\mathcal{G}(L_{\min})$, correspond to $\mathcal{N}(L_{\max}^{2}+I)$. (3) The dimension in (2) is about limit-point/circle. (4) You're in l.c. at both ends with one $L^{2}$ solution $f_{m}=(1-x^{2})^{m/2}$ at $L_{m}f_{m}=m(m+1)f_{m}$. So $L_{\min}=L_{\max}$. So $\Phi=0$. – Disintegrating By Parts Mar 06 '15 at 18:40
  • @TobiasHurth : Typo : You're in l.p. (limit point) case at both endpoints. – Disintegrating By Parts Mar 06 '15 at 18:47
  • @TobiasHurth : First take a look here at my answer. http://math.stackexchange.com/questions/1012104/classification-of-operators/1012535#1012535 – Disintegrating By Parts Mar 06 '15 at 19:43
  • I think the problem is the following: In this answer you said that the minimal operator is the operator where all functions in its domain satisfy $f(a)=f(b)=f'(a)=f'(b)=0$. If you look in the book by Teschl that I quoted, there he does not say that the minimal operator satisfies $f(a)=f(b)=f'(a)=f'(b)=0$, but rather contains all functions that satisfy $\lim_{x \rightarrow a,b} p(f'(x)g(x)-f(x)g'(x))=0$. With your definition it is clear that, if we are in the l.p. case, then since $L_{min}= L_{max}$ all functions need to satisfy this boundary condition, but with this other definition it is not clear. –  Mar 06 '15 at 20:00
  • @TobiasHurth : I was using that as an example for the regular case. It is regular because $q\in L^{1}$, and you do get such an equivalence for the regular case. This problem here is a singular problem, but the ideas are identical. And, no, I did not get any of this from Teschl. Cordes does a much more thorough job. $L_{\min}$ here is the closure from $C^{\infty}_{0}$. But, the theory of functionals as related to deficiency indices is only one of symmetric operators. That's also in Cordes. – Disintegrating By Parts Mar 06 '15 at 21:10
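Since the comments lean repeatedly on Legendre's identity $(Lf)g - f(Lg) = \frac{d}{dx}\left[(1-x^2)(fg'-f'g)\right]$, here is an exact polynomial verification of it (my own sketch, not part of the thread). It suffices to take $m=0$, because for $m \ge 1$ the $m^{2}/(1-x^{2})$ terms cancel in the difference $(Lf)g - f(Lg)$:

```python
# Sketch (my check): for L f = -((1-x^2) f')', verify Legendre's identity
#   (Lf) g - f (Lg) = d/dx [ (1-x^2) (f g' - f' g) ]
# exactly for polynomial f, g.  (For m >= 1 the m^2/(1-x^2) terms cancel
# in the difference, so this already captures the identity.)
from fractions import Fraction

def pmul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def padd(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pneg(p):
    return [-c for c in p]

def pderiv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def L(p, w):
    """L p = -((1-x^2) p')'."""
    return pneg(pderiv(pmul(w, pderiv(p))))

w = [Fraction(1), Fraction(0), Fraction(-1)]              # 1 - x^2
f = [Fraction(1), Fraction(2), Fraction(0), Fraction(3)]  # arbitrary polynomial
g = [Fraction(0), Fraction(-1), Fraction(4)]              # arbitrary polynomial

lhs = padd(pmul(L(f, w), g), pneg(pmul(f, L(g, w))))      # (Lf)g - f(Lg)
wronskian = padd(pmul(f, pderiv(g)), pneg(pmul(pderiv(f), g)))
rhs = pderiv(pmul(w, wronskian))                          # d/dx[(1-x^2)(fg'-f'g)]

assert lhs == rhs
print("Legendre's identity verified for polynomial f, g")
```

This is exactly the identity used above to extract the endpoint limits $\lim_{x\to\pm 1}(1-x^{2})(fg'-f'g)$.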