In digital communications systems, the received signal is commonly modeled as
$$y(t)= x(t)+ n(t)\tag1$$
where $x(t)$ is the transmitted signal and $n(t)$ is the noise.
I am wondering whether a better model would be
$$y(t)= x(t-t_0)+ n(t)\tag2$$
to account for the propagation delay introduced by the channel. Is my reasoning correct?
If so, what assumptions are made when using (1), and what does (2) additionally account for? That is, what types of errors are we modeling (delay spread, timing-acquisition error, ...)?
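To make the difference between the two models concrete, here is a minimal sketch (all parameter values, the bipolar test sequence, and the noise level are illustrative assumptions, not from any particular standard). Under model (1) a cross-correlating receiver finds its peak at lag zero, i.e. perfect timing is assumed; under model (2) the peak moves to $t_0$, which is exactly the timing offset a receiver must estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000
x = rng.choice([-1.0, 1.0], size=N)   # random bipolar (BPSK-like) test sequence
t0_samples = 37                        # hypothetical channel delay, in samples
noise = 0.1 * rng.standard_normal(N)

# Model (1): y = x + n  (channel introduces no delay)
y1 = x + noise

# Model (2): y = x delayed by t0, plus n (leading samples are empty)
y2 = np.concatenate([np.zeros(t0_samples), x])[:N] + noise

def estimate_delay(y, ref):
    """Estimate the delay from the peak of the cross-correlation with the known signal."""
    corr = np.correlate(y, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

print(estimate_delay(y1, x))  # 0  -> model (1): timing is implicitly perfect
print(estimate_delay(y2, x))  # 37 -> model (2): the receiver sees a timing offset
```

The point of the sketch is only that (1) silently assumes synchronization has already been achieved, while (2) makes the timing uncertainty explicit as a parameter to be estimated.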
I look forward to your responses and discussion. Thanks.