
Suppose $f$ is continuous on $[a,b]$, that $f(x) \geq 0 \ \forall x \in [a,b]$, and that $\int_{a}^{b} f(x)\,dx = 0$. Prove that $f(x) = 0 \ \forall x \in [a,b]$.

I have seen various posts on this:

If the Riemann Integral of a continuous nonnegative function is 0, then that function is identically 0.

$f\geq 0$, continuous and $\int_a^b f=0$ implies $f=0$ everywhere on $[a,b]$

and many more. I follow these proofs, but I want to understand why mine is incorrect. I will explain how I know it is flawed after showing it, since without context it won't make sense.

Since $f$ is continuous, the mean value theorem for integrals tells me $\exists c \in (a,b)$ s.t.

$f(c) = \frac{\int_{a}^{b} f(x)dx}{b-a} = \frac{0}{b-a} = 0$

Now, since $f$ is continuous on $[a,b]$, specifically, it is continuous at $c$, I know $\forall x \in [a,b], \forall \epsilon > 0, \exists \delta > 0$ s.t. whenever $|x - c| < \delta, |f(x) - f(c)| < \epsilon$.

But, $f(c) = 0$. So, $|f(x) - f(c)| = |f(x)| < \epsilon$

Therefore, $f(x) = 0$

Now, the reason I know my proof is flawed is that I never used the fact that $f(x) \geq 0$. This assumption cannot be dropped, as there is a plethora of counterexamples (for one of many, take $\sin(x)$ on $[0,2\pi]$). Despite knowing this, and reading the other proofs, I am still struggling to see where the invalid step is in my proof. I'd like to understand which part of my proof is incorrect so that I won't make a similar mistake on other proofs. Does anyone see where I made a false move? Thanks in advance!

Nolan P
    Hi: maybe the problem is that you are showing that $f(x) = 0$ when it's very close to $c$, but you aren't showing that it's equal to $0$ everywhere? I'm not sure if that's the problem, but it's a very interesting question. – mark leeds Jul 13 '22 at 20:33
    I think what I said is related to what JonathanZ wrote below (I should have said there was dependence on $\epsilon$ rather than $c$, or that $c$ needs to be arbitrary), but he explained it in a much clearer way. Thanks JonathanZ. – mark leeds Jul 13 '22 at 23:33

2 Answers

3

As $\epsilon$ gets smaller, the $\delta$-interval around $c$ where $|f(x)| \lt \epsilon$ is true gets smaller.

In order for "$\forall \epsilon \gt 0, |f(x)| \lt \epsilon$" to imply "$f(x) = 0$", it has to be true for some fixed $x$. In your argument, the location of $x$ "moves around" as you change $\epsilon$.

It's possible that $x$ may be "fixed but arbitrary", and the argument still works, but your $x$ has a (hidden) dependence on $\epsilon$.

If you want to see this in action, let $f(x) = x-\frac{1}{2}$ on $[0,1]$. Then $c = \frac{1}{2}$, and you can explicitly compute $\delta$ in terms of $\epsilon$. Fix a value of $x$ near (but not at) $\frac{1}{2}$, and as you let $\epsilon$ get smaller, your fixed $x$ value will at some point no longer lie in $(c-\delta, c+\delta)$, and $|f(x)| \lt \epsilon$ will stop being true.
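For concreteness, here is the computation this example alludes to, spelled out (my addition, not in the original answer): for this $f$ we have $f(c) = 0$, and

$$|f(x) - f(c)| = \left|x - \tfrac{1}{2}\right| < \epsilon \iff x \in \left(\tfrac{1}{2} - \epsilon,\ \tfrac{1}{2} + \epsilon\right),$$

so the largest admissible $\delta$ is $\delta = \epsilon$. The interval $(c - \delta, c + \delta)$ therefore shrinks to the single point $c$ as $\epsilon \to 0$, and any fixed $x \neq \tfrac{1}{2}$ is excluded as soon as $\epsilon \leq |x - \tfrac{1}{2}|$.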

(Incidentally, this problem of using variables to represent values, without tracking whether each one represents a fixed value or ranges over some set, and whether or not it depends on another variable, happens all the time. Our notation is bad at encoding it, and teachers would rather not get bogged down spending time making it explicit. Making mistakes with it, so that you learn not to make them again in the future, is part of the infamous "mathematical maturity".)

JonathanZ
    I see. This makes sense! Thank you for clarifying, and yes, now that I've seen it, I hope I won't make this mistake again! – Nolan P Jul 13 '22 at 21:46
3

It is not true that $|f(x)|<\epsilon$ for all $x$ in the domain. It is only true that $|x-c|<\delta\implies |f(x)|<\epsilon$ for all $x$ in the domain.

Your argument relies on considering the limit as $\epsilon\to 0$, but $\delta$ is dependent on $\epsilon$, and so whether the premise $|x-c|<\delta$ holds depends on $\epsilon$. If $\delta\to 0$ as $\epsilon\to 0$, then for any $x\neq c$ the statement $|x-c|<\delta$ will stop being true for sufficiently small $\epsilon$, so we cannot conclude $|f(x)|<\epsilon$ for all $\epsilon$. (This doesn't mean that $|f(x)|\ge \epsilon$, just that we cannot make a conclusion one way or the other.)

It would be too cumbersome to enforce variable names that emphasise the dependence on other variables, e.g. writing $\delta_{\epsilon_x,x}$ or $\delta(\epsilon(x), x)$ in place of $\delta$ (as $\epsilon$ depends on $x$, and $\delta$ on both $x$ and $\epsilon$). But it can be a useful exercise to use a variable like $\delta_\epsilon$ here and there when you are uncertain whether a limiting argument is correct, what the order of quantifiers should be, or in which order the process of "fixing a value of $\epsilon$; then finding a $\delta$" takes place.

A.M.
    So, I agree quite solidly with everything you said, and like the idea of making dependence explicit. But I've been thinking more about how this mistake happened, and I don't see how even being painfully explicit could help. I think the OP's key error was looking at $x$ in $|f(x)| < \epsilon$ and thinking of it as a fixed value, when really it ranges over an interval of values that depends on $\delta$. I suppose if we annotated it as $x_{(c-\delta_{\epsilon}, c+\delta_{\epsilon})}$ it might slow a person down from making that mistake, but I don't really think so. – JonathanZ Jul 14 '22 at 14:47
    If I were doing a post-mortem on this "fatal" error, I would say the key mistake was "taking stuff out of context". I.e. they looked at "$\forall x \in [a,b], \forall \epsilon > 0, \exists \delta > 0$ s.t. whenever $|x - c| < \delta, |f(x) - f(c)| < \epsilon$", and pulled out "$\forall \epsilon > 0, |f(x) - f(c)| < \epsilon$", which Just Isn't Allowed. – JonathanZ Jul 14 '22 at 14:54
    I could go for an argument that the context is just the words and symbols we use to describe the dependence, so they are the same thing. Maybe I'm using "dependence" to describe the abstract relationship and "context" to refer to the representation as symbols. But I do like to sometimes look at people doing math as just symbol processors, and I think the OP saw $|f(x) - f(c)| < \epsilon$ and thought "Yippee!", and pulled it out of its context as they recognized it as a fragment that is used for proving things equal to $0$. – JonathanZ Jul 14 '22 at 15:00