I asked a very similar question: Prove that $E([c-U]) = c-1$. There, I had missed a simple trick. I've since been trying to extend the result to $E([c-xU])$, but haven't been able to wrap it up with the same approach. Here, $c$ and $x$ are real scalars, $U$ is a uniform random number between $0$ and $1$, and $[\cdot]$ denotes the floor function.
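As a quick numerical sanity check of both quantities (not part of any proof; the values of $c$ and $x$ below are arbitrary), a Monte Carlo estimate in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
U = rng.random(1_000_000)            # U ~ Uniform(0, 1)
c, x = 3.7, 2.3                      # arbitrary test values

# Known result: E[floor(c - U)] = c - 1
print(np.floor(c - U).mean())        # ~ 2.7 = c - 1

# Quantity in question: E[floor(c - x*U)], no closed form assumed here
print(np.floor(c - x * U).mean())
```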
My attempt
Let:
$$c=n+u_1$$ $$x=m+u_2$$
where $n=\lfloor c\rfloor$ and $m=\lfloor x\rfloor$ are the integer parts and $u_1,u_2\in[0,1)$ are the fractional parts (I take $x>0$ throughout).
We know that the smallest value $[c-xU]$ can take is $(n-m-1)$ and the largest value it can take is $n$.
In general we get:
$$[c-xU] = n-m+i \quad\text{when}\quad (n-m)+i < c-xU<(n-m)+i+1$$
$$\iff \frac{m+u_1-i-1}{x} < U < \frac{m+u_1-i}{x}$$
$$\forall \; i \in \{-1,0,1,\dots,m\}$$
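To spell out that step (substituting $c=n+u_1$; since $x>0$, dividing by $x$ keeps the direction of the inequalities):
$$(n-m)+i < n+u_1-xU < (n-m)+i+1$$
$$\iff\; m+u_1-i-1 < xU < m+u_1-i$$
$$\iff\; \frac{m+u_1-i-1}{x} < U < \frac{m+u_1-i}{x}$$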
At first glance, each of those intervals has length $\frac{1}{x}$, so the probability of $U$ falling into each of them should also be $\frac{1}{x}$.
However, this misses the possibility that either end of an interval lies below $0$ or above $1$. Clipping the interval to $(0,1)$ can make the probability of $U$ falling into it $0$, or positive but less than $\frac{1}{x}$.
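Written out (with $p_i$ just my shorthand for the clipped length, and still assuming $x>0$), the probability attached to the value $n-m+i$ is
$$p_i \;=\; \max\!\left(0,\;\min\!\left(1,\tfrac{m+u_1-i}{x}\right)-\max\!\left(0,\tfrac{m+u_1-i-1}{x}\right)\right),$$
so the quantity I want is $\sum_{i=-1}^{m}(n-m+i)\,p_i$.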
For instance, when $m=1$ and $u_2=0$ (meaning $x=1$), we recover the simpler version of the problem linked in the question, with two intervals of lengths $u_1$ and $1-u_1$.
In particular, we get that when $i>m+u_1-1$, the start of the interval must be clipped to $0$, and when $i<u_1-u_2$, the end of the interval must be clipped to $1$. But I can't wrap this up into a nice expression for the overall summation.
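To check that this interval bookkeeping is at least consistent (only a numerical sanity check, not progress towards a closed form), here is a short Python sketch; `expectation_by_intervals` is just an illustrative name and the test values of $c$ and $x$ are arbitrary:

```python
import numpy as np


def expectation_by_intervals(c, x):
    """E[floor(c - x*U)] via the clipped-interval sum described above (x > 0)."""
    n, u1 = int(np.floor(c)), c - np.floor(c)
    m = int(np.floor(x))
    total = 0.0
    for i in range(-1, m + 1):
        lo = max(0.0, (m + u1 - i - 1) / x)   # start of the U-interval, clipped to 0
        hi = min(1.0, (m + u1 - i) / x)       # end of the U-interval, clipped to 1
        total += (n - m + i) * max(0.0, hi - lo)
    return total


rng = np.random.default_rng(1)
U = rng.random(2_000_000)
for c, x in [(3.7, 2.3), (5.2, 0.6), (1.1, 4.9)]:
    mc = np.floor(c - x * U).mean()           # Monte Carlo estimate for comparison
    print(c, x, expectation_by_intervals(c, x), round(mc, 4))
```

The two columns agree to Monte Carlo accuracy, so the clipping itself seems right; the problem is purely in simplifying the sum.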
This has been the thorn in my side: I haven't been able to properly account for these shrinking intervals and arrive at a final closed-form expression.