On page 104 of *Calculus* (4th edition, 2008), Michael Spivak proves that the product of the limits of two functions equals the limit of the product of those functions, as follows.
If $\epsilon>0$ there are $\delta_1,\delta_2>0$ such that, for all $x$,
$$\text{if }0 <|x-a|<\delta_1,\text{ then }|f(x)-l|<\min\left(1,\frac{\epsilon}{2(|m|+1)}\right),$$
$$\text{ and if }0 <|x-a|<\delta_2,\text{ then }|g(x)-m|<\frac{\epsilon}{2(|l|+1)}.$$
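(For context, these two bounds are then combined via the triangle inequality; this is my reconstruction of the standard continuation, not a quotation from the book. Taking $\delta=\min(\delta_1,\delta_2)$, for $0<|x-a|<\delta$ one has $|f(x)|<|l|+1$, so
$$|f(x)g(x)-lm|\le|f(x)|\,|g(x)-m|+|m|\,|f(x)-l|<(|l|+1)\cdot\frac{\epsilon}{2(|l|+1)}+(|m|+1)\cdot\frac{\epsilon}{2(|m|+1)}=\epsilon.)$$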
This is supposedly due to the fact that $\lim_{x\to a}f(x)=l$ and $\lim_{x\to a}g(x)=m$. There are two things I don't understand about this argument.
First, shouldn't you start from $\epsilon$ when making such a proof, not $\frac{\epsilon}{2(|m|+1)}$ or something like that?
Second, does the first line really hold? All we are given (if I've understood it correctly) is that
$$\lim_{x\to a}f(x)=l.$$
The definition of a limit says that for any $\epsilon$ we can find a corresponding $\delta$. But it seems like you can't in this case. What if $\frac{\epsilon}{2(|m|+1)}>1$? Then we can't know for sure that the first statement holds, since for all we know, $|f(x)-l|$ could lie anywhere between $1$ and $\frac{\epsilon}{2(|m|+1)}$.
What am I misunderstanding about his proof?