This is probably a stupid question, but I have a doubt.
How can we solve the differential equation $y'(t)=y(t-1)$?
I have no assumptions on $y$, only that it is a function from $\mathbb{R}$ to itself.
Your equation is a first-order homogeneous linear delay differential equation (DDE). There are several ways to deal with it. Let's consider the slightly more general problem $\dot{y}(t) = y(t-\tau)$, with a delay $\tau>0$.
The first one is a bit naive: in the same spirit as for its undelayed analog, one uses the exponential Ansatz $y(t) = e^{\lambda t}$ to translate the problem into an algebraic equation (the analog of the characteristic polynomial, here a transcendental characteristic equation). We get $$ \lambda e^{\lambda t} = e^{\lambda(t-\tau)} \quad\Leftrightarrow\quad \lambda = e^{-\lambda\tau} \quad\Leftrightarrow\quad \lambda\tau\, e^{\lambda\tau} = \tau \quad\Leftrightarrow\quad \lambda_k = \frac{W_k(\tau)}{\tau} $$ where $W_k$ is the $k^{\mathrm{th}}$ branch of the Lambert $W$ function; it has an infinite number of branches (only $W_0$ and $W_{-1}$ ever take real values, and for $\tau>0$ only $W_0(\tau)$ is real). That is why the general solution is given by $y(t) = \displaystyle\sum_{k\in\mathbb{Z}}A_ke^{\lambda_kt}$, with $\lambda_k = \frac{W_k(\tau)}{\tau}$; consequently, you need an infinite (but countable) number of initial conditions to fix the constants $A_k$, although the constraint that $y$ be real relates many of them at once (the roots come in complex-conjugate pairs).
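As a quick numerical check (a minimal sketch; it assumes SciPy, whose `scipy.special.lambertw` computes the branches $W_k$, and the range of $k$ shown is an arbitrary choice), one can verify that each $\lambda_k = W_k(\tau)/\tau$ indeed satisfies the characteristic equation:

```python
import numpy as np
from scipy.special import lambertw

tau = 1.0  # the delay (tau = 1 recovers the original question)

# Characteristic roots lambda_k = W_k(tau)/tau for a few branches.
for k in range(-3, 4):
    lam = lambertw(tau, k) / tau
    # Each root must satisfy lambda = exp(-lambda*tau), so the residual
    # below should vanish up to floating-point error.
    print(f"k={k:+d}  lambda={lam:.6f}  residual={abs(lam - np.exp(-lam * tau)):.2e}")
```

For $\tau=1$, the $k=0$ root is the real number $W_0(1)\approx 0.567$ (the omega constant), giving the unique purely exponential real solution $y(t)=e^{W_0(1)\,t}$.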
The second method makes use of the Fourier transform. Recalling that $\mathfrak{F}\left[\frac{\mathrm{d}}{\mathrm{d}t}\right] \equiv i\omega$ and $\mathfrak{F}[y(t-\tau)] = \hat{y}(\omega)e^{-i\omega\tau}$, one finds that $$ i\omega\hat{y}(\omega) = \hat{y}(\omega)e^{-i\omega\tau} \quad\Leftrightarrow\quad \left(i\omega-e^{-i\omega\tau}\right)\hat{y}(\omega) = 0. $$ One then formally introduces the resolvent $\frac{1}{i\omega-e^{-i\omega\tau}}$ and inverts the Fourier transform: $$ y(t) = \int_{-\infty}^{\infty}\frac{e^{i\omega t}}{i\omega-e^{-i\omega\tau}}\frac{\mathrm{d}\omega}{2\pi i}. $$ This integral can be computed using the residues of the integrand; its poles $\omega_k$ satisfy $i\omega_k-e^{-i\omega_k\tau} = 0$, which, upon setting $\lambda = i\omega$, is exactly the characteristic equation above, so that $\omega_k = \frac{W_k(\tau)}{i\tau}$. From there, one understands that this leads to the same result as the Ansatz method.
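For completeness (and glossing over how the contour is to be closed, which depends on the sign of $t$ and on the location of the roots), the residue theorem at these simple poles formally gives $$ y(t) = \sum_{k}\operatorname{Res}_{\omega=\omega_k}\frac{e^{i\omega t}}{i\omega-e^{-i\omega\tau}} = \sum_{k}\frac{e^{i\omega_k t}}{i+i\tau e^{-i\omega_k\tau}} = \sum_{k}\frac{e^{\lambda_k t}}{i(1+\lambda_k\tau)}, \qquad \lambda_k = i\omega_k, $$ where $e^{-i\omega_k\tau} = i\omega_k$ was used in the last step: again a superposition of the exponentials $e^{\lambda_k t}$, here with one particular choice of the coefficients $A_k$.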
$\mathrm{\underline{N.B.}}$: those who are at ease with distributional equations will see that the equation $(i\omega-e^{-i\omega\tau})\hat{y}(\omega) = 0$ in Fourier space directly gives $\hat{y}(\omega) = \displaystyle\sum_{k\in\mathbb{Z}} A_k\delta\left(\omega-\frac{W_k(\tau)}{i\tau}\right)$.
A third method, known as the method of steps and mostly used for numerical solving, reformulates the initial equation in the following way: $$ y(t) = y(t_0) + \int_{t_0}^t y(t'-\tau) \,\mathrm{d}t'. $$ Given an initial condition on a whole interval, say $y(t) = \phi(t)$ $\forall t\in[t_0-\tau,t_0]$, the solution can be reconstructed piecewise in the following manner: $$ y(t) = \begin{cases} \displaystyle \phi(t) & t\in[t_0-\tau,t_0] \\ \displaystyle y(t_0) + \int_{t_0}^t y(t'-\tau) \,\mathrm{d}t' & t\in[t_0,t_0+\tau] \\ \displaystyle y(t_0+\tau) + \int_{t_0+\tau}^t y(t'-\tau) \,\mathrm{d}t' & t\in[t_0+\tau,t_0+2\tau] \\ \displaystyle \cdots & \cdots \\ \displaystyle y(t_0+k\tau) + \int_{t_0+k\tau}^t y(t'-\tau) \,\mathrm{d}t' & t\in[t_0+k\tau,t_0+(k+1)\tau] \\ \displaystyle \cdots & \cdots \end{cases} $$ such that each line feeds $y$ into the next one (e.g. $y(t'-\tau)$ is replaced by $\phi(t'-\tau)$ in the second line, because $t'\in[t_0,t_0+\tau] \Rightarrow t'-\tau\in[t_0-\tau,t_0]$). It is in fact a recursive definition; a small numerical sketch is given after the N.B. below.
$\mathrm{\underline{N.B.}}$: this formulation is overconstrained, because the choice $y(t) = \phi(t)$ $\forall t\in[t_0-\tau,t_0]$ amounts to an uncountably infinite number of initial conditions, whereas only a countable number of them is necessary (cf. addendum infra).
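Here is a minimal numerical sketch of this piecewise reconstruction (the function name `solve_dde_steps`, the trapezoidal quadrature and the grid size are illustrative choices, not part of the argument above):

```python
import numpy as np

def solve_dde_steps(phi, t0, tau, n_steps, pts=200):
    """Method of steps for y'(t) = y(t - tau), given the history y = phi
    on [t0 - tau, t0]. On [t0 + k*tau, t0 + (k+1)*tau] the delayed term
    y(t' - tau) is already known from the previous interval, so the DDE
    reduces to a plain quadrature on each step."""
    t_hist = np.linspace(t0 - tau, t0, pts)
    y_prev = phi(t_hist)                    # y on the previous interval
    ts, ys = [t_hist], [y_prev]
    for k in range(n_steps):
        t = np.linspace(t0 + k * tau, t0 + (k + 1) * tau, pts)
        # Cumulative trapezoidal integral of the known delayed values;
        # the grids are aligned, so y(t[i] - tau) == y_prev[i].
        increments = 0.5 * (y_prev[1:] + y_prev[:-1]) * np.diff(t)
        y = y_prev[-1] + np.concatenate(([0.0], np.cumsum(increments)))
        ts.append(t[1:])
        ys.append(y[1:])
        y_prev = y
    return np.concatenate(ts), np.concatenate(ys)

# Example: tau = 1, constant history phi(t) = 1. Exactly, y(t) = 1 + t
# on [0, 1] and y(t) = 2 + (t - 1) + (t - 1)**2 / 2 on [1, 2].
t, y = solve_dde_steps(lambda s: np.ones_like(s), t0=0.0, tau=1.0, n_steps=2)
print(y[np.searchsorted(t, 1.0)], y[np.searchsorted(t, 2.0)])  # 2.0, 3.5
```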
$\mathrm{\underline{Addendum}}$: let's check that the equation is indeed linear in the first place. Taking $y_{1,2}$ solutions of the equation and $\alpha_{1,2}\in\mathbb{C}$, one has $$ \begin{array}{rcl} \displaystyle\frac{\mathrm{d}}{\mathrm{d}t}(\alpha_1y_1(t) + \alpha_2y_2(t)) &=& \alpha_1\dot{y}_1(t) + \alpha_2\dot{y}_2(t) \\ &=& \alpha_1y_1(t-\tau) + \alpha_2y_2(t-\tau) \\ &=& (\alpha_1y_1 + \alpha_2y_2)(t-\tau) \end{array} $$ and thus $\alpha_1y_1 + \alpha_2y_2$ is also a solution.
It was to be expected, since $y(t-\tau) = T_{\tau}\,y(t)$ where $$ T_{\tau} = e^{-\tau\frac{\mathrm{d}}{\mathrm{d}t}} = \sum_{n=0}^\infty\frac{(-\tau)^n}{n!}\frac{\mathrm{d}^n}{\mathrm{d}t^n} $$ is the translation operator. It contains derivatives of every order, which is why a first-order linear delay differential equation can be seen as a linear ordinary differential equation of infinite order, and needs a countably infinite number of initial/boundary conditions.
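A quick way to see this identity at work (a small sketch; $f(t)=e^t$ is chosen because all its derivatives are known exactly): truncating the series for $T_\tau$ at finite order and applying it to $f$ should converge to $f(t-\tau)$, which is nothing but the Taylor expansion of $f(t-\tau)$ around $t$, valid here because $e^t$ is entire.

```python
import numpy as np
from math import factorial

tau, t = 1.0, 0.3
target = np.exp(t - tau)        # f(t - tau) for f = exp

# Partial sums of T_tau f(t) = sum_n (-tau)^n / n! * f^(n)(t);
# for f = exp, every derivative f^(n)(t) equals e^t.
partial = 0.0
for n in range(20):
    partial += (-tau) ** n / factorial(n) * np.exp(t)
print(partial, target)          # both ≈ e^{-0.7} ≈ 0.4966
```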
Note that other delay differential equations, even linear second-order ones, can become quite a nightmare to solve, but people more expert than I am can introduce you to the delights of this rather obscure area of mathematics.