The probability of having all $n$ desired selections after $r$ "draws" can be computed by matrix multiplication. In fact we will do a little more than this and compute the chance of having $k$ out of $n$ of the desired selections for each "turn".
At the beginning, before any selection is made, the chance of having all $n$ is of course zero. More to the point, the chance of having none of the desired selections is $1$. We will set up a row vector $\vec{p}_r$ whose entries give the probability that, after $r$ selections, we have $k=0,1,\ldots,n$ of the desired items:
$$ \vec{p}_r = (p_r^{(0)}, p_r^{(1)}, \ldots, p_r^{(n)} ) $$
where $p_r^{(k)}$ is the chance after $r$ turns that we have $k$ of the desired selections.
In the language of Markov chains we say that having all $n$ desired selections is an absorbing state, because once you have all of them, further selections cannot change that: you still hold at least one of each of the desired items.
The initial state is having none of the desired items with probability $1$:
$$ \vec{p}_0 = (1,0,\ldots,0) $$
We can update the distribution of probabilities by multiplying that row vector on the right by a state transition probability matrix of size $(n+1)\times (n+1)$. In this case the chance of going from having $k$ desired items to having $k+1$ desired items is $\frac{n-k}{m}$, and this probability is the same for each turn at selecting a new item. The only other transition possible is to stay at having exactly $k$ desired items, either because we draw one of the undesired items or because we draw one we already have. The matrix therefore has at most two nonzero entries in each row, with only one nonzero entry in the last row (the "absorbing" state).
The resulting expression after $r$ selections is this:
$$ \vec{p}_r = \vec{p}_0
\begin{pmatrix} 1- \frac{n}{m} & \frac{n}{m} & 0 & \dots & 0 \\
0 & 1- \frac{n-1}{m} & \frac{n-1}{m} & \dots & 0 \\
\vdots & \; & \ddots & \ddots & \vdots \\
\vdots & \; & \; & 1- \frac{1}{m} & \frac{1}{m} \\
0 & \dots & \dots & 0 & 1 \end{pmatrix}^r $$
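As a sketch of this computation (the function names and the use of NumPy are my own choices, not anything prescribed by the problem), one can build the $(n+1)\times(n+1)$ matrix above and repeatedly multiply the row vector by it until the last entry crosses the desired threshold $X$:

```python
import numpy as np

def transition_matrix(n, m):
    """(n+1) x (n+1) matrix: from state k, move to k+1 with probability
    (n-k)/m, otherwise stay at k; state n is absorbing."""
    P = np.zeros((n + 1, n + 1))
    for k in range(n):
        P[k, k + 1] = (n - k) / m
        P[k, k] = 1 - (n - k) / m
    P[n, n] = 1.0  # absorbing state: all n desired items held
    return P

def first_r_exceeding(n, m, X):
    """Smallest r with p_r^(n) > X, found by successive
    vector-matrix products (assumes 0 <= X < 1)."""
    P = transition_matrix(n, m)
    p = np.zeros(n + 1)
    p[0] = 1.0  # initial state: none of the desired items
    r = 0
    while p[n] <= X:
        p = p @ P  # one more selection
        r += 1
    return r
```

For example, with $n=1$ desired item out of $m=2$, the chance of success after $r$ draws is $1-(1/2)^r$, so the first $r$ exceeding $X=0.5$ is $r=2$, which the loop above reproduces.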
For a problem of modest size $n,m$, you can simply multiply the initial state vector by successive powers of the state transition probability matrix until you find the first $r$ for which the final component $p_r^{(n)}$ exceeds the $X$ prescribed in the Question. For larger sizes, linear algebra tricks (binary exponentiation, diagonalization) can speed up the computation.
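To illustrate the binary-exponentiation trick just mentioned (again only a sketch; the helper name is mine), `np.linalg.matrix_power` computes $P^r$ by repeated squaring, so evaluating $p_r^{(n)}$ for a single large $r$ costs $O(\log r)$ matrix products rather than $r$ vector updates:

```python
import numpy as np

def p_all_after_r(n, m, r):
    """p_r^(n): probability of holding all n desired items after r
    draws, via binary exponentiation of the transition matrix."""
    P = np.zeros((n + 1, n + 1))
    for k in range(n):
        P[k, k + 1] = (n - k) / m
        P[k, k] = 1 - (n - k) / m
    P[n, n] = 1.0
    # matrix_power uses repeated squaring, so this is O(log r) products
    return np.linalg.matrix_power(P, r)[0, n]
```

Since $p_r^{(n)}$ is nondecreasing in $r$, one can combine this with a doubling search and bisection to locate the threshold-crossing $r$ in $O(\log^2 r)$ matrix products overall.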