> In general, is it possible to design such a filter?
If by "in general" you mean "for any arbitrary IIR filter", then the answer is no. In fact, for any real filter, retrieving the actual $x(k)$ in a useful way is problematic, at best.
The basic problem is that a filter filters information out, and leaves you with a residue. At some point, the initial information is simply lost, and you cannot retrieve it.
About the only time that an inverse filter will work is if you're dealing with some natural phenomenon that shapes the spectrum of $x(k)$ but does not significantly remove any information that you need.
> If not, then under what conditions is it impossible to retrieve the input $x(k)$?
Mathematically, if you have a filter $H(\omega)$, and it is zero for one or more values of $\omega$, then you simply cannot reconstruct any components of $x(k)$ at those frequencies.
Moreover, if the filter defined by $y(k) = h\left ( x(k) \right)$ has pure delay or zeros outside the unit circle (i.e., if it is not minimum phase), then $x(k)$ cannot be reconstructed exactly by any causal, stable filter. There are techniques for coming up with a best guess of the signal, however.
Practically, the filtering problem is never $y(k) = h\left ( x(k) \right)$. It is always $y(k) = h\left ( x(k) \right) + n(k)$, where $n(k)$ is some noise process. So wherever the filter's response comes close to zero in the frequency domain, you cannot practically reconstruct $x(k)$: any inverse gain large enough to undo the attenuation amplifies the noise there just as much.
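To see that concretely, here's a quick numerical sketch (mine, not part of the answer above): take a hypothetical notch filter, whose response is essentially zero at the notch frequency. The ideal inverse $1/H(\omega)$ is unbounded there, so whatever noise lives at that frequency gets amplified without limit.

```python
import numpy as np
from scipy import signal

# Hypothetical example: a notch filter has H(omega) ~ 0 at the notch frequency,
# so the x(k) content there is gone and the ideal inverse 1/H(omega) blows up.
b, a = signal.iirnotch(w0=0.25, Q=30, fs=1.0)   # notch at 1/4 of the sample rate
w, H = signal.freqz(b, a, worN=2048, fs=1.0)

print("min |H(omega)|  :", np.abs(H).min())                # ~0 at the notch
with np.errstate(divide="ignore"):
    print("max |1/H(omega)|:", (1.0 / np.abs(H)).max())    # enormous (or inf)
```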
> I have a known discrete-time IIR filter. I pass a signal $x(n)$ through the filter and obtain the output $y(n)$. How would I design a filter to obtain $x(n)$ from $y(n)$?
In increasing order of both difficulty and possibility of success:
- Cheat, and just measure $x(k)$ directly.
- Calculate the transfer function, $H(\omega)$, for your filter. If the filter is minimum phase, then $\dfrac{1}{H(\omega)}$ will be realizable and stable. Implement that (a minimal sketch follows this list). Your output will be $\hat X(\omega) = \dfrac{Y(\omega)}{H(\omega)} = X(\omega) + \dfrac{N(\omega)}{H(\omega)}$ -- which means that at any frequency where the original filter really attenuates the signal, your estimate of $x(k)$ will be noisy.
- Study up, and implement a Kalman filter to estimate $\hat x(k) \simeq x(k)$ from $y(k)$. If you know the filter structure exactly, this is probably the best way to get an estimate $\hat x(k)$ that is optimal in the least-squares sense. Assuming your Kalman filter is stable, and the turn-on transient is acceptable, you can derive a steady-state Kalman filter from it (a sketch of that appears after this list, too).
- A corollary: there's probably some variant of the Wiener filter that would do this, too -- but it's been a very long time since I've needed to understand those critters, so I can't say for sure. Certainly, a steady-state Kalman and a properly constructed Wiener filter -- if it exists -- would basically be the same thing, since they're solving the same problem.
- If mean-squared error is not the thing you need to minimize, or if you don't know the filter structure exactly, then some other optimal state estimation method will be your friend. I'd start with H-infinity filters, but I'm not sure that's where I'd end up.
- Yet another wrinkle in this is that -- especially if the system isn't minimum-phase -- if you can get by with a delayed estimate (i.e., you're estimating $x(k - k_0)$ for some delay $k_0 > 0$), then that delayed estimate can be better (possibly much better) than any estimate of $x(k)$, and this, too, drops out of a steady-state Kalman or Wiener filter design exercise.
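Here's a minimal sketch of the inverse-filter option, assuming a hypothetical minimum-phase filter given by coefficients `(b, a)`; the specific coefficients and noise level are made up for illustration. The inverse is just the same difference equation with numerator and denominator swapped.

```python
import numpy as np
from scipy import signal

# Hypothetical minimum-phase example filter: zero at 0.4, poles at 0.5 and 0.3.
b = np.array([1.0, -0.4])
a = np.array([1.0, -0.8, 0.15])

# Minimum-phase check: every zero of H(z) must lie strictly inside the unit
# circle, otherwise 1/H(z) is unstable and direct inversion fails.
assert np.all(np.abs(np.roots(b)) < 1.0), "not minimum phase -- don't invert directly"

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                   # the signal we pretend not to know
noise = 1e-3 * rng.standard_normal(x.size)      # measurement noise n(k)
y = signal.lfilter(b, a, x) + noise             # y(k) = h(x(k)) + n(k)

x_hat = signal.lfilter(a, b, y)                 # 1/H(z): numerator and denominator swapped
print("RMS reconstruction error:", np.sqrt(np.mean((x - x_hat) ** 2)))
```

With a quiet measurement this recovers $x(k)$ almost exactly; crank up the noise and the error grows fastest at the frequencies the original filter attenuates most.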
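And a sketch of the steady-state Kalman route. The modeling choices here are mine, not the answer's: put the known filter in state-space form, append the unknown input $x(k)$ to the state as a random walk, and get the stationary gain from the discrete Riccati equation. The example filter and the variances `q_x` and `r` are placeholders you'd replace with your own.

```python
import numpy as np
from scipy import signal, linalg

b, a = signal.butter(2, 0.2)            # hypothetical stand-in for the known filter
A, B, C, D = signal.tf2ss(b, a)         # state-space form: s(k+1) = A s(k) + B x(k)
nstate = A.shape[0]

# Augment the state with the unknown input x(k), modeled as a random walk.
A_aug = np.block([[A, B],
                  [np.zeros((1, nstate)), np.ones((1, 1))]])
C_aug = np.hstack([C, D])               # measurement: y(k) = C s(k) + D x(k) + noise

q_x = 1e-2                              # assumed variance of the input's random-walk step
r = 1e-4                                # assumed measurement-noise variance
Q = np.zeros((nstate + 1, nstate + 1))
Q[-1, -1] = q_x
R = np.array([[r]])

# Steady-state Kalman gain from the discrete algebraic Riccati equation.
P = linalg.solve_discrete_are(A_aug.T, C_aug.T, Q, R)
K = P @ C_aug.T @ np.linalg.inv(C_aug @ P @ C_aug.T + R)

def estimate_input(y):
    """Run the steady-state Kalman filter over y(k); return x_hat(k)."""
    z = np.zeros((nstate + 1, 1))
    x_hat = np.empty(len(y))
    for k, yk in enumerate(y):
        z = A_aug @ z                                   # predict
        z = z + K @ (np.atleast_2d(yk) - C_aug @ z)     # update with the new sample
        x_hat[k] = z[-1, 0]                             # input component of the state
    return x_hat
```

Tuning `q_x` against `r` trades tracking speed against noise rejection; if you actually know the noise statistics, use them.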
If you go that delayed-estimate route, note that, especially if you're dealing with a non-minimum-phase filter, you will be able to get much better estimates of delayed versions of $x(k)$. I.e., if there's some delay $\kappa$ that is perfectly acceptable, then you can make a Kalman estimator that finds $\hat x(k - \kappa) \simeq x(k - \kappa)$ with less (possibly much less) error than any filter could find $\hat x(k) \simeq x(k)$. The batch Wiener-style sketch below is one crude way to see this effect.
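For completeness, a batch (hence non-causal, i.e. effectively delayed) Wiener-style deconvolution sketch. The assumptions are mine: a hypothetical example filter, white $x(k)$, white measurement noise, and a known (or guessed) noise-to-signal power ratio; the FFT treats the record as circular, so expect some edge effects.

```python
import numpy as np
from scipy import signal

b, a = signal.butter(2, 0.2)                        # hypothetical known filter
rng = np.random.default_rng(1)
N = 4096
x = rng.standard_normal(N)                          # white input (assumed)
noise_std = 0.05
y = signal.lfilter(b, a, x) + noise_std * rng.standard_normal(N)

h = signal.lfilter(b, a, signal.unit_impulse(N))    # truncated impulse response of H(z)
H = np.fft.rfft(h)
Y = np.fft.rfft(y)

nsr = noise_std ** 2 / 1.0                          # noise power / signal power (assumed known)
X_hat = np.conj(H) / (np.abs(H) ** 2 + nsr) * Y     # Wiener deconvolution
x_hat = np.fft.irfft(X_hat, n=N)

print("Wiener RMS error:", np.sqrt(np.mean((x_hat - x) ** 2)))
```

Where $|H(\omega)|$ is large this behaves like the plain inverse; where it is small, the `nsr` term rolls the gain off instead of amplifying noise, which is the tradeoff a minimum-mean-square estimator makes.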