I do not know if the LaTeX code (which is auto-generated) is invalid, or if pdflatex is supposed to be able to handle it, but I need to find the correct package. I looked at entering-unicode-characters-in-latex and added the package mentioned there, but I still get the error. Maybe I need to define a macro? Do I need to add a suitable \DeclareUnicodeCharacter to handle this? Is there a way to use \DeclareUnicodeCharacter to turn the problematic Unicode characters below into a single white space, so that pdflatex and latex can process the file?
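Something along these lines is what I have in mind (the replacement text \  is just a guess, and I am not even sure \DeclareUnicodeCharacter applies here, since the generated code contains \unicode{...} macro calls rather than literal UTF-8 characters):

% Sketch (assumption): map the private-use codepoints that
% Mathematica emits to a single space so inputenc can process them.
% Note: this only affects literal UTF-8 characters in the .tex file,
% not \unicode{...} macro calls.
\usepackage[utf8]{inputenc}
\DeclareUnicodeCharacter{F817}{\ }
\DeclareUnicodeCharacter{F818}{\ }
\DeclareUnicodeCharacter{F4A1}{\ }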
Mathematica generates LaTeX code for one of its expressions, which uses special mathematical symbols (a small dot above and below letters), as shown in this link.
Here is a screenshot from my notebook showing the generated LaTeX code:

When I use this LaTeX code in my document, pdflatex.exe actually crashed (MiKTeX), and TeX Live on Linux gave the error:
(/usr/local/texlive/2014/texmf-dist/tex/latex/amsfonts/umsa.fd)
(/usr/local/texlive/2014/texmf-dist/tex/latex/amsfonts/umsb.fd)
! Undefined control sequence.
l.11 ...to \text{DifferentialRoot}\left(\{\unicode
{f818},\unicode{f817}\}
?
The generated code is complicated and I can't edit it by hand; if I start moving parts around, I will break the generated LaTeX code. (I also need to run this many times, so I need a one-time fix.)
Here is an MWE of one actual case. I have many such cases, so I am looking for a macro I can put at the top of the LaTeX document that solves all these \unicode problems. A solution that simply removes all those Unicode characters is fine as well. The symbols do not have to have those dots above or below; that is just how Mathematica generates them, and I do not really need them in the PDF.
\documentclass[12pt]{article}
\usepackage{amsmath,mathtools}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage[mathletters]{ucs}
\usepackage[utf8]{inputenc}
\begin{document}
$\left\{\left\{y(x)\to \text{DifferentialRoot}\left(\{\unicode{f818},
\unicode{f817}\}
\unicode{f4a1}\left\{\left(a^2 \unicode{f817}^3+a\right)
\unicode{f818}(\unicode{f817})+\left(2 \unicode{f817}^3 a-1\right)
\unicode{f818}'(\unicode{f817})+\unicode{f817}
\unicode{f818}''(\unicode{f817})=0,
\unicode{f818}(1)=c_1,\unicode{f818}'(1)=c_2\right\}\right)(x)\right\}\right\}$
\end{document}
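One workaround I am considering, assuming the \unicode symbols carry no meaning I need in the PDF, is to make the macro expand to an empty group. (A completely empty definition is not enough: the generated code contains things like \unicode{f817}^3, and with nothing left behind the ^3 triggers a "Double superscript" error, so the empty group {} gives the superscript something to attach to.)

% Sketch: gobble the argument but leave an empty group,
% so constructions like \unicode{f817}^3 still typeset as {}^3.
% If a loaded package (e.g. ucs) already defines \unicode,
% \renewcommand would be needed instead of \newcommand.
\newcommand{\unicode}[1]{{}}

I am not sure this is the right approach, which is why I am asking.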


Comments:

\documentclass{article} \newcommand{\unicode}[1]{} \begin{document} $\unicode{f817} \unicode{f4a1} \unicode{f818}$ \end{document} – Johannes_B Jul 22 '14 at 07:26

Double superscript! Double superscript. l.37724 ...code{f4a1}\left\{\left(a^2 \unicode{ }^ 3+a\right) \unicode{ }(\un...? – Nasser Jul 22 '14 at 07:31

\newcommand{\unicode}[1]{\begingroup \endgroup} instead. But to be honest, I don't see the point in doing so. The error will disappear, but the whole meaning will drop dead. Not a good idea, imho. – Johannes_B Jul 22 '14 at 07:39

A solution that removes all those unicodes is just fine as well; the symbols do not have to have those dots above or below. I just happened to try the first answer before you posted yours, and it worked; then I saw your answer after that. I wish the rules allowed one to accept more than one answer. Thanks again for your help and time. – Nasser Jul 22 '14 at 18:14