
Let $\Omega$ be a bounded domain in $\mathbb{R}^{n}$ and let $u_{0} \in C(\bar{\Omega})$. Suppose $u \in C_{t}^{1} C_{x}^{2}(\Omega \times(0,+\infty)) \cap C(\bar{\Omega} \times[0,+\infty))$ satisfies:

$$ \left\{\begin{array}{lc} \partial_{t} u-\Delta u=0, & (t, x) \in(0,+\infty) \times \Omega \\ u(0, x)=u_{0}(x), & x \in \Omega \\ u(t, x)=0, & (t, x) \in(0,+\infty) \times \partial \Omega \end{array}\right. $$

Then prove:

$$ \sup _{\Omega}|u(t, \cdot)| \leq C e^{-\mu t} \sup _{\Omega}\left|u_{0}\right|, \quad t>0 $$

where $\mu, C$ are positive constants depending only on $n$ and $\Omega$.

I think the method should be to (1) define an auxiliary function and (2) use this result, but I cannot figure out the details.

Also, the dependence on $n$ and $\Omega$ seems strange to me. How should I use these two constants? Should I use something like Bochner's technique?

robothead

1 Answer


Suppose that $B_R$ is a ball such that $\Omega \subset \subset B_R$. Let $\varphi$ solve the eigenvalue problem $ - \Delta \varphi = \lambda_1 \varphi$ in $B_R$, $\varphi =0$ on $\partial B_R$, where $\lambda_1$ is the first Dirichlet eigenvalue of the Laplacian on $B_R$. It is well known that a solution is given by $$\varphi (x) = \vert x \vert^{\frac{2-n} 2 } J_{\frac{n-2} 2} \big ( \frac{\alpha}{R} \vert x \vert \big) $$ where $J_{\frac{n-2} 2}$ is a Bessel function, $\alpha$ is the first positive zero of $J_{\frac{n-2} 2}$, and $\lambda_1=\alpha^2/R^2$. As $\alpha$ is the first zero of $J_{\frac{n-2} 2}$, we have $\varphi >0$ in $B_R$. Note that $\alpha$ depends only on $n$ (through the order of the Bessel function) and $R$ depends only on $\Omega$; this is exactly where the dependence of $\mu$ and $C$ on $n$ and $\Omega$ comes from.
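As a quick numerical sanity check (not part of the proof, with $n = 2$ chosen so the Bessel order $\frac{n-2}{2}$ is $0$), one can compute $\alpha$ and $\lambda_1 = \alpha^2/R^2$ using nothing but the power series of $J_0$ and bisection; the bracketing interval $(2,3)$, the tolerances, and $R = 1$ below are illustrative choices.

```python
import math

def J0(x):
    """J_0(x) = sum_k (-1)^k (x/2)^{2k} / (k!)^2, truncated once terms are tiny."""
    term, total, k = 1.0, 1.0, 0
    while abs(term) > 1e-16:
        k += 1
        term *= -(x / 2) ** 2 / (k * k)
        total += term
    return total

# The first positive zero of J_0 lies in (2, 3): J0(2) > 0 > J0(3).
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J0(lo) * J0(mid) <= 0:   # sign change in [lo, mid]
        hi = mid
    else:
        lo = mid
alpha = 0.5 * (lo + hi)

R = 1.0                          # illustrative ball radius
lambda_1 = (alpha / R) ** 2      # first Dirichlet eigenvalue of B_R
print(alpha, lambda_1)
```

The computed $\alpha \approx 2.4048$ matches the tabulated first zero of $J_0$, and $\lambda_1$ scales like $1/R^2$: enlarging the ball slows the guaranteed decay rate.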

Let $w(x,t):= \beta(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} \varphi(x)- u(x,t) $, with $\beta>0$ to be chosen later. Then \begin{align*}\partial_tw-\Delta w &=-\beta\lambda_1(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} \varphi +\beta\lambda_1(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} \varphi\\&=0 \end{align*}in $\Omega\times(0,\infty)$, and $w=\beta(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} \varphi\geqslant0$ on $\partial \Omega \times (0,\infty)$ since $u=0$ there. Since $\Omega \subset\subset B_R$ and $\varphi>0$ in $B_R$, we have $\varphi \geqslant c_0>0$ on $\bar\Omega$ for some constant $c_0$. By setting $\beta = 1/c_0$, at $t=0$ we have $$w= \beta(\sup_\Omega \vert u_0 \vert) \varphi- u_0\geqslant \sup_\Omega \vert u_0 \vert - u_0\geqslant 0. $$ Thus, the parabolic maximum principle implies that $$u(x,t) \leqslant \beta(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} \varphi(x)\leqslant C(\sup_\Omega \vert u_0 \vert)e^{-\lambda_1 t} $$ using that $\varphi$ is bounded, with $C = \beta\sup_{B_R}\varphi$. Applying the same argument to $-u$ gives the matching lower bound, so the claimed estimate holds with $\mu = \lambda_1$.
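To see the estimate in action (again only a sanity check, not part of the proof), here is a finite-difference sketch of the simplest case $n=1$, $\Omega = (0,\pi)$, where the first Dirichlet eigenvalue is $\lambda_1 = 1$ and the estimate predicts $\sup|u(t)| \leqslant C e^{-t} \sup|u_0|$; the grid sizes are illustrative choices.

```python
import math

# Explicit finite differences for u_t = u_xx on (0, pi), u = 0 at both endpoints.
N = 50                      # interior grid points
h = math.pi / (N + 1)       # mesh width
dt = 0.4 * h * h            # stable explicit step (needs dt <= h^2 / 2)
T = 2.0
steps = int(T / dt)

u = [1.0] * N               # u0 = 1 in the interior, so sup|u0| = 1
for _ in range(steps):
    left = [0.0] + u[:-1]   # Dirichlet boundary value u = 0
    right = u[1:] + [0.0]
    u = [u[i] + dt / (h * h) * (left[i] - 2 * u[i] + right[i]) for i in range(N)]

sup_u = max(abs(v) for v in u)
t_final = steps * dt
print(sup_u, math.exp(-t_final))   # sup|u(T)| versus e^{-T}
```

The first printed value stays a bounded multiple of the second (the multiple is close to $4/\pi$, the first Fourier sine coefficient of $u_0 \equiv 1$), which plays the role of the constant $C$ in the estimate.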

JackT
  • Brilliant! How do you come up with the construction of $w(x,t)$? What is the intuition behind it? – robothead Dec 21 '21 at 03:05
  • Well I knew I wanted to look at something like $w(x,t) = e^{-\mu t} \varphi(x) (\sup_\Omega \vert u_0 \vert ) - u(x,t)$ because if $w \geqslant 0 $ and $\varphi$ is bounded then I'm done. – JackT Dec 21 '21 at 05:42
  • My mind was drawn to $\varphi$ being an eigenvector because I remembered a very similar question in Evans (if you are curious it is this one: https://math.stackexchange.com/questions/1159499/exponential-decay-estimate) – JackT Dec 21 '21 at 05:42
  • Once I landed on $\varphi$ being an eigenvector it was all just a matter of checking details. For example, I needed $\varphi$ to be an eigenvector for $\lambda_1$ (as opposed to any eigenvalue) because that was the only way to guarantee $\varphi>0$ – JackT Dec 21 '21 at 05:44
  • @JackT Thank you for the explanation! Great answer! – robothead Dec 21 '21 at 07:47