As Yuval comments, "misconception" isn't the right term for the notation. It's well-defined notation, which I agree is somewhat unfortunate, but it's valid. It makes sense once we adopt the convention that, whenever $O(\cdot)$ appears in a statement, the statement asserts there exists a choice from the set $O(\cdot)$ that makes it true. So you can read $f(n) = O(g(n))$ as "there exists $h(n) \in O(g(n))$ such that $f(n) = h(n)$."
This may seem silly for such a simple example (we could just write $f(n) \in O(g(n))$), but it comes in handy in more complex statements. I am thinking of proving a big-O bound through a chain of steps:
\begin{align}
f(n) &= ... \\
&\leq ... \\
&= O(...) \\
&= O(g(n)).
\end{align}
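As a concrete instance of such a chain (the specific function here is my own example, not from the original question), one might bound $f(n) = 3n^2 + 5n$ as follows:

```latex
\begin{align}
f(n) &= 3n^2 + 5n \\
     &\leq 3n^2 + 5n^2 \\
     &= 8n^2 \\
     &= O(n^2).
\end{align}
```

Under the "there exists" reading, each line with an $O(\cdot)$ asserts that some member of that set equals the preceding expression, so the chain composes transitively to give $f(n) = O(n^2)$.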
This is arguably cleaner than switching between $\in$ and $\subseteq$ on each line.
I would guess that this, along with the fact that $=$ is slightly quicker to write than $\in$ and easier to type in e.g. plain-text environments, accounts for the popularity of the notation. But a historical perspective would be interesting.