You can use a proof by contradiction.
Assume $f$ is injective but not surjective; that is, there exists an element of $R$ that is not in the image $f(R)$.
Since $R$ is finite and at least one element of $R$ is not in the image of $f$, the image $f(R)$ has size at most $|R|-1$. So $f$ maps the $|R|$ elements of $R$ onto an image of size at most $|R|-1$, and by the pigeonhole principle at least one element of $f(R)$ must be the image of two distinct elements of $R$. This contradicts the injectivity of $f$.
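If you want to see the pigeonhole argument in action, here is a small brute-force check (my own illustration, not part of the proof): for a small finite set $R$ it enumerates every function $f\colon R \to R$ and confirms that injectivity and surjectivity coincide.

```python
from itertools import product

R = range(4)  # a small finite set standing in for a general finite R

checked = 0
# Each tuple f encodes a function: f[x] is the image of x.
for f in product(R, repeat=len(R)):
    injective = len(set(f)) == len(f)   # no two elements share an image
    surjective = set(f) == set(R)       # every element of R is hit
    assert injective == surjective      # they coincide on a finite set
    checked += 1
```

Running this checks all $4^4 = 256$ functions; on an infinite set the equivalence fails (e.g. $n \mapsto n+1$ on $\mathbb{N}$ is injective but not surjective), which is why finiteness matters.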
Here's another (more roundabout) way to get a contradiction: divide $R$ into two disjoint subsets: $A$, the set of elements of $R$ that lie in $f(R)$, and $B$, the set of elements of $R$ that do not. Then $A$ and $B$ are disjoint and $R=A \cup B$. Now,
\begin{align}|R|&=|A|+|B|-\underbrace{|A \cap B|}_{=0}\\
&=|A|+|B|\\
&> |A|,
\end{align}
since we assumed that $f$ was not surjective, so $B$ is nonempty. But $f$ is injective, so $|A|=|f(R)|=|R|$, and this gives us the contradiction $$|R|>|R|.$$
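The partition-counting step can be illustrated concretely. Below is a sketch with a deliberately non-surjective map on a small set (the map itself is a made-up example): computing $A$ and $B$ shows $|R| = |A| + |B| > |A|$, while an injective $f$ would force $|f(R)| = |A| = |R|$.

```python
R = set(range(6))

# A deliberately non-surjective map: round every element down to an even value.
f = {x: (x // 2) * 2 for x in R}   # image is {0, 2, 4}

image = set(f.values())            # f(R)
A = R & image                      # elements of R that are in f(R)
B = R - image                      # elements of R that are not in f(R)

assert len(R) == len(A) + len(B)   # A and B partition R
assert len(R) > len(A)             # B is nonempty, so |R| > |A|
# An injective f would give |f(R)| = |R|, contradicting |f(R)| = |A| < |R|:
assert len(image) < len(R)
```

Note that since $f(R) \subseteq R$, the set $A$ is exactly $f(R)$, which is why $|A| = |f(R)|$ in the argument above.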