I am not familiar with an infinite-horizon formulation for time-varying LQR, but I can supply an answer for a finite-horizon formulation.
For a finite-horizon formulation, the Riccati equation for a time-varying LQR problem (i.e. the desired state $x_d$ changes with time) is called the "Differential Riccati Equation".
Derivation: Let's assume you want to implement an LQR controller that stabilizes the system:
$$\dot{x} = f(x,u)$$
around some trajectory $\left(x_d(t),u_d(t)\right)$. If you linearize the system around this trajectory you get:
\begin{align}\dot{\bar{x}} &= f(x,u)-\dot{x}_d \\ &\approx A(t)\bar{x}(t)+B(t)\bar{u}(t)+f(x_d,u_d)-\dot{x}_d \\ &\approx A(t)\bar{x}(t)+B(t)\bar{u}(t)\end{align}
where $$\bar{x}(t) = x(t)-x_d(t)$$ $$\bar{u}(t) = u(t)-u_d(t)$$
and
$$A(t) = \frac{\partial f}{\partial x}|_{(x,u)=(x_d,u_d)}$$ $$B(t) = \frac{\partial f}{\partial u}|_{(x,u)=(x_d,u_d)}$$
The approximations made in the equation for $\dot{\bar{x}}$ (the first-order linearization of $f$, and dropping the residual $f(x_d,u_d)-\dot{x}_d$, which vanishes exactly when the nominal trajectory is dynamically feasible) introduce an error into your control law, but yield a tractable problem. The magnitude of the error will depend on the problem, but this approach works in many applications (see the reference at the end for an example).
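To make the linearization step concrete, here is a minimal Python sketch (not from the reference) that computes $A(t)$ and $B(t)$ by finite-difference linearization; the pendulum model, its parameters, and the nominal trajectory are all illustrative assumptions:

```python
# Sketch: finite-difference linearization of f(x, u) along a nominal
# trajectory (x_d(t), u_d(t)).  The pendulum model, its parameters, and the
# nominal trajectory below are illustrative assumptions, not from the reference.
import numpy as np

def f(x, u):
    # damped pendulum: x = [theta, theta_dot], u = [torque]
    g, l, b = 9.81, 1.0, 0.1
    return np.array([x[1], -(g / l) * np.sin(x[0]) - b * x[1] + u[0]])

def jacobians(x_d, u_d, eps=1e-6):
    """A(t) = df/dx and B(t) = df/du at (x_d, u_d), via central differences."""
    n, m = x_d.size, u_d.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x_d + dx, u_d) - f(x_d - dx, u_d)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_d, u_d + du) - f(x_d, u_d - du)) / (2 * eps)
    return A, B

# Evaluate the time-varying linearization at a few points of a hypothetical
# nominal trajectory (x_d(t), u_d(t)):
for t in np.linspace(0.0, 1.0, 3):
    x_d = np.array([np.sin(t), np.cos(t)])   # placeholder nominal state
    u_d = np.array([0.0])                     # placeholder nominal input
    A_t, B_t = jacobians(x_d, u_d)
```

If you have an analytical model, you can of course differentiate $f$ symbolically instead; the point is only that $A$ and $B$ are re-evaluated along the trajectory, so they become functions of time.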
Now we want to solve the LQR problem for a finite-horizon cost function given by:
$$\int_{0}^T \left( \bar{x}^T Q \bar{x} + \bar{u}^T R \bar{u} \right) dt,~~~\text{where}~~Q=Q^T\geq 0,~\text{and} ~R=R^T>0$$
We choose a quadratic cost-to-go function of the form:
$$ J(\bar{x},t) = \bar{x}(t)^TS(t)\bar{x}(t),~~~\text{where}~~S(t)=S(t)^T\geq 0$$
such that the Hamilton-Jacobi-Bellman condition for the optimality of our chosen $J(\bar{x},t)$ becomes:
$$0 = \min_{\bar{u}}\left[ \bar{x}^T Q \bar{x} + \bar{u}^T R \bar{u} + \frac{\partial J}{\partial \bar{x}}\left(A(t)\bar{x}(t)+B(t)\bar{u}(t)\right) + \frac{\partial J}{\partial t} \right] $$
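For this quadratic cost-to-go, the partial derivatives appearing in the HJB condition are
$$\frac{\partial J}{\partial \bar{x}} = 2\bar{x}^TS(t), \qquad \frac{\partial J}{\partial t} = \bar{x}^T\dot{S}(t)\bar{x}$$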
The control law that minimizes this expression is found by solving for $\bar{u}$ in
$$0 = \frac{\partial}{\partial \bar{u}}\left[ \bar{x}^T Q \bar{x} + \bar{u}^T R \bar{u} + \frac{\partial J}{\partial \bar{x}}\left(A(t)\bar{x}(t)+B(t)\bar{u}(t)\right) + \frac{\partial J}{\partial t} \right]$$
Hence, the $\bar{u}$ that minimizes the cost-to-go $J(\bar{x},t)$ is:
$$\bar u(t) = -R^{-1}B^T(t)S(t)\bar{x}(t)$$
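In terms of the original state and input (using $\bar{x}=x-x_d$ and $\bar{u}=u-u_d$), this is a time-varying state-feedback law around the nominal trajectory:
$$u(t) = u_d(t) - K(t)\left(x(t)-x_d(t)\right), \qquad K(t) = R^{-1}B^T(t)S(t)$$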
If we insert this expression for $\bar{u}$ into the Hamilton-Jacobi-Bellman condition, the result is a quadratic form in $\bar{x}$ that must vanish for all $\bar{x}$, which requires:
$$0 = Q-S(t)B(t)R^{-1}B^T(t)S(t) + S(t)A(t)+A^T(t)S(t)+\dot{S}(t)$$
This equation is called the Differential Riccati equation, and it consists of the standard LQR algebraic Riccati equation and an additional time-dependent term $\dot{S}(t)$.
You solve for $S(t)$, and consequently $\bar{u}(t)$ and $J(\bar{x},t)$, by setting $S(T) = 0$ (since the cost function has no terminal cost term) and solving the differential equation:
$$-\dot{S}(t) = Q-S(t)B(t)R^{-1}B^T(t)S(t) + S(t)A(t)+A^T(t)S(t)$$
backwards in time.
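As an illustration of this backward integration, here is a minimal Python sketch using `scipy.integrate.solve_ivp`; the dimensions, weights, and the placeholder Jacobian functions `A_of_t` and `B_of_t` are assumptions you would replace with the linearization of your own system along its nominal trajectory:

```python
# Sketch: integrate the differential Riccati equation backwards in time and
# build the time-varying gain K(t).  A_of_t and B_of_t are placeholders for
# the Jacobians A(t), B(t) evaluated along your nominal trajectory.
import numpy as np
from scipy.integrate import solve_ivp

n, m = 2, 1                      # state and input dimensions (example values)
T = 5.0                          # horizon length
Q = np.eye(n)                    # state cost weight, Q = Q^T >= 0
R = np.eye(m)                    # input cost weight, R = R^T > 0
Rinv = np.linalg.inv(R)

def A_of_t(t):
    # placeholder for df/dx evaluated at (x_d(t), u_d(t))
    return np.array([[0.0, 1.0], [-np.cos(t), -0.1]])

def B_of_t(t):
    # placeholder for df/du evaluated at (x_d(t), u_d(t))
    return np.array([[0.0], [1.0]])

def riccati_rhs(tau, s_flat):
    # Integrate in reversed time tau = T - t, so the terminal condition
    # S(T) = 0 becomes an initial condition at tau = 0.  From
    # -dS/dt = Q - S B R^{-1} B^T S + S A + A^T S it follows that
    # dS/dtau = -dS/dt = Q - S B R^{-1} B^T S + S A + A^T S.
    t = T - tau
    S = s_flat.reshape(n, n)
    A, B = A_of_t(t), B_of_t(t)
    dS_dtau = Q - S @ B @ Rinv @ B.T @ S + S @ A + A.T @ S
    return dS_dtau.flatten()

sol = solve_ivp(riccati_rhs, (0.0, T), np.zeros(n * n), dense_output=True)

def K_of_t(t):
    """Time-varying feedback gain K(t) = R^{-1} B(t)^T S(t)."""
    S = sol.sol(T - t).reshape(n, n)
    return Rinv @ B_of_t(t).T @ S

# Closed-loop control along the trajectory: u(t) = u_d(t) - K_of_t(t) @ (x - x_d(t))
```

The substitution $\tau = T - t$ turns the terminal condition $S(T)=0$ into an initial condition, so a standard forward ODE solver can be used; the gain $K(t)$ is then evaluated by interpolating the stored solution.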
Reference: Section 3.A in http://groups.csail.mit.edu/robotics-center/public_papers/Tedrake09a.pdf