The usual method is slow for double roots because of the following Taylor series argument. Assume $f$ is twice continuously differentiable and $r$ is a simple root, i.e. $f(r)=0$ and $f'(r) \neq 0$. Then
\begin{eqnarray*}\left ( x - \frac{f(x)}{f'(x)} \right ) - r & = & x - r - \left ( x - r - \frac{f''(r)}{2 f'(r)} (x-r)^2 + o((x-r)^2) \right ) \\
& = & \frac{f''(r)}{2 f'(r)} (x-r)^2 + o((x-r)^2)
\end{eqnarray*}
What I did here was Taylor expand $\frac{f(x)}{f'(x)}$ to second order about $x=r$, and then substitute in the fact that $f(r)=0$, which cancels a lot of terms. The result means that when $r$ is a simple root, the method converges quadratically. Roughly speaking, this means the number of correct digits doubles at each step, once the error is small enough. On the other hand, if $r$ is a double root (i.e. $f(r)=f'(r)=0$ and $f''(r) \neq 0$), the situation is different. Here we cannot Taylor expand $f(x)/f'(x)$ about $x=r$, because this function is not defined there. Instead we must do the following:
\begin{eqnarray*}\left ( x - \frac{f(x)}{f'(x)} \right ) - r & = & x-r - \frac{\tfrac{1}{2} f''(r) (x-r)^2 + o((x-r)^2)}{f''(r)(x-r) + o(x-r)} \\
& = & x-r -\frac{\tfrac{1}{2} (x-r) + o(x-r)}{1 + o(1)} \\
& = & x-r - \left ( \tfrac{1}{2} (x-r) + o(x-r) \right ) (1+o(1)) \\
& = & \tfrac{1}{2} (x-r) + o(x-r)
\end{eqnarray*}
This means the method converges linearly with rate $1/2$: once the error is small enough, it is approximately halved at each step, so you gain roughly one correct binary digit per step. This gets even worse for higher order roots: for a root of order $n$ the coefficient is $1-\frac{1}{n}$.
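To see both behaviors numerically, here is a minimal Python sketch; the test functions $f(x) = x^2-2$ (simple root at $\sqrt{2}$) and $f(x) = (x-1)^2$ (double root at $1$), along with all the names, are my own illustrative choices:

```python
# Newton's method on a simple root vs. a double root.

def newton(f, fprime, x0, steps):
    """Run `steps` Newton iterations from x0 and return all iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

# Simple root: f(x) = x^2 - 2, r = sqrt(2), f'(r) != 0.
simple = newton(lambda x: x**2 - 2, lambda x: 2 * x, 2.0, 6)

# Double root: f(x) = (x - 1)^2, r = 1, f(1) = f'(1) = 0.
double = newton(lambda x: (x - 1)**2, lambda x: 2 * (x - 1), 2.0, 6)

r = 2 ** 0.5
for k, (x_s, x_d) in enumerate(zip(simple, double)):
    print(f"step {k}: simple-root error {abs(x_s - r):.2e}, "
          f"double-root error {abs(x_d - 1):.2e}")
```

On the simple root the number of correct digits roughly doubles each step, while for $f(x) = (x-1)^2$ the iteration reduces exactly to $x \mapsto x - (x-1)/2$, so the error is halved every step.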
There are modified Newton methods which recover quadratic convergence at a non-simple root. If you know that the order of the root is $n$, then you can use
$$x_{k+1} = x_k - n \frac{f(x_k)}{f'(x_k)}.$$
A similar argument to the first one shows that this converges quadratically. If you don't know the order of the root, there is a method which can estimate it: see http://mathfaculty.fullerton.edu/mathews/n2003/NewtonAccelerateMod.html
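Here is a minimal sketch of the known-order version; the test function $f(x) = (x-1)^2(x+2)$, which has a double root at $r = 1$, and the names are again my own illustrative choices:

```python
# Modified Newton for a root of known order n: x_{k+1} = x_k - n*f(x_k)/f'(x_k).

def modified_newton(f, fprime, x0, n, steps):
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - n * f(x) / fprime(x))
    return xs

def f(x):
    return (x - 1)**2 * (x + 2)   # double root at x = 1, simple root at x = -2

def fp(x):
    return (x - 1) * (3 * x + 3)  # f'(x) in factored form

for n in (1, 2):  # n = 1 is plain Newton; n = 2 uses the known order of the root
    errors = [abs(x - 1) for x in modified_newton(f, fp, 2.0, n, 5)]
    print(f"n = {n}:", ", ".join(f"{e:.1e}" for e in errors))
```

With $n = 2$ the error is roughly squared at each step, while plain Newton ($n = 1$) only shrinks it by a constant factor, consistent with the analysis above.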
Edit: the symbol $o(f(x))$, called "little oh notation", is a standard shorthand. It means an unspecified function $g(x)$ such that $g(x)/f(x) \to 0$ as $x$ tends to something specified by context (usually $0$ or $\infty$, in this case $0$). So for example, calculus tells us that
$$\lim_{h \to 0} \frac{f(x+h) - f(x) - h f'(x)}{h} = 0$$
which is the same as
$$f(x+h) - f(x) - h f'(x) = o(h)$$
Little oh notation has a weaker counterpart, called "big oh notation": the symbol $O(f(x))$ means an unspecified function $g(x)$ such that $g(x)/f(x)$ remains bounded as $x$ tends to something specified by context.
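For example, big oh with a higher power can express a sharper bound: if $f$ is twice continuously differentiable near $x$, Taylor's theorem strengthens the display above to
$$f(x+h) - f(x) - h f'(x) = O(h^2)$$
as $h \to 0$, which implies the $o(h)$ statement, since anything that is $O(h^2)$ is automatically $o(h)$.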