Your "$\text{timestretch}(\text{signal}, k)$" is what we call "interpolation by $k$", usually. (if you don't believe it: try for yourself!)
Let us adopt a sensible notation (and not name functions with English words, which typically leads to confusion).
- $s\in \mathbb C^N$: discrete time input signal of length $N$
- $m\in\mathbb N$: interpolation ratio
- $r\in \mathbb C^{mN}$: the output signal, $m$ times as long, subject to:
- $r[mn] = s[n],\ n = 0,\dots,N-1$ (i.e. the stretched signal still goes through the same points as the original: "interpolation criterion")
- $\left|\text{DFT}\{r\}[f]\right| < \epsilon \text{ for } |f| > N/2,\ \epsilon > 0$, where $f$ is the signed frequency bin of the length-$mN$ DFT (image suppression to at most a small $\epsilon$)
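As a quick numerical sanity check of those two criteria (a minimal sketch: `scipy.signal.resample` is an off-the-shelf Fourier-domain resampler, and $N=32$, $m=4$ and the test signal are arbitrary choices of mine):

```python
import numpy as np
from scipy.signal import resample

N, m = 32, 4                                                # arbitrary test parameters
rng = np.random.default_rng(0)
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # some complex test signal

r = resample(s, m * N)                                      # Fourier-domain interpolation to length mN

# interpolation criterion: the long signal passes through the original samples
print(np.allclose(r[::m], s))                               # True (up to floating point)

# image suppression: the spectrum above the original Nyquist frequency is numerically zero
f = np.fft.fftfreq(m * N, d=1 / (m * N))                    # signed bin frequencies of the long DFT
print(np.max(np.abs(np.fft.fft(r)[np.abs(f) > N / 2])))     # tiny, i.e. below any sane epsilon
```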
So, there are many resampling / interpolation algorithms that you could employ.
Probably the simplest is zero padding in the frequency domain: you take $\text{DFT}\{s\}$, which is inherently $N$ values long, insert $(m-1)N$ zeros in the middle of the spectrum (between the positive- and the negative-frequency half), which makes it $mN$ values long, and do the inverse transform (times $m$, so the amplitudes come out right).
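A minimal numpy sketch of that recipe (the function name is mine; the only fiddly bit is splitting the Nyquist bin when $N$ is even):

```python
import numpy as np

def interpolate_by_zero_padding(s, m):
    """Interpolate s by an integer factor m via zero padding of its DFT."""
    N = len(s)
    S = np.fft.fft(s)                    # length-N spectrum of the input
    R = np.zeros(m * N, dtype=complex)   # length-mN spectrum, zeros everywhere else
    kpos = (N + 1) // 2                  # DC + positive-frequency bins below Nyquist
    R[:kpos] = S[:kpos]                  # copy DC and positive frequencies
    R[m * N - (N - kpos):] = S[kpos:]    # copy negative frequencies to the top end
    if N % 2 == 0:                       # even N: split the Nyquist bin over +N/2 and -N/2
        R[N // 2] = S[N // 2] / 2
        R[m * N - N // 2] = S[N // 2] / 2
    return m * np.fft.ifft(R)            # factor m so that r[m*n] == s[n]

# quick check of the interpolation criterion with a single complex tone
s = np.exp(2j * np.pi * 3 * np.arange(16) / 16)
r = interpolate_by_zero_padding(s, 5)
print(np.allclose(r[::5], s))            # True
```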
That frequency-domain zero padding amounts to sinc interpolation, due to the convolution theorem of the DFT: the zero-padded spectrum mathematically "looks" like you've taken an $N$-periodic spectrum (namely that of the signal with $m-1$ zeros stuffed between its samples) and multiplied it with an $N$-bin-long rectangular window, and a rectangular window in frequency is a periodic sinc in time.
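Spelled out (with $u\in\mathbb C^{mN}$ as my shorthand for the zero-stuffed signal, i.e. $u[mn]=s[n]$ and zero everywhere else):

$$\text{DFT}\{u\}[f]=\sum_{n=0}^{N-1}s[n]\,e^{-j2\pi f m n/(mN)}=\sum_{n=0}^{N-1}s[n]\,e^{-j2\pi f n/N}=\text{DFT}\{s\}[f \bmod N],$$

so $\text{DFT}\{u\}$ is just $\text{DFT}\{s\}$ repeated $m$ times, i.e. $N$-periodic in $f$. The zero-padded spectrum keeps exactly one of those periods, $\text{DFT}\{r\}=\text{DFT}\{u\}\cdot W$ with $W$ an $N$-bin rectangular window (and a factor $m$ for the amplitude), so by the convolution theorem $r = u\circledast \text{IDFT}\{W\}$, and the inverse DFT of a rectangular window is a periodic sinc (Dirichlet kernel), hence "sinc interpolation".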
So, your $\text{operation}$ is just plain boring "stuffing the spectrum with zeros until you hit the target length". Often, DSP is that easy!