I just worked it out on my own. Assuming that $X \in \mathbb{R}^{m \times n}$, we define the difference matrices $\Delta_I X = [ X_{i,j} - X_{i-1,j} ]$ and $\Delta_J X = [X_{i,j} - X_{i,j-1}]$, where out-of-range indices wrap around, i.e., $i-1$ is taken mod $m$ and $j-1$ mod $n$. I first noticed by inspection that both $\Delta_I X$ and $\Delta_J X$ can be obtained by multiplying $X$ with circulant matrices $R_I$ and $R_J$ given by
\begin{align}
R_I &= \begin{pmatrix}
1 & 0 & 0 &\dots & 0 & -1 \\
-1 & 1 & 0 & \dots & 0 & 0 \\
0 & -1 & 1 & \dots & 0 & 0 \\
& & & \vdots & & \\
0 & 0 & 0 & \dots & -1 & 1
\end{pmatrix} \in \mathbb{R}^{m \times m} \\
\end{align}
and
\begin{align}
R_J &= \begin{pmatrix}
1 & -1 & 0 & \dots & 0 & 0 \\
0 & 1 & -1 & \dots & 0 & 0 \\
0 & 0 & 1 & \dots & 0 & 0 \\
& & & \vdots & & \\
0 & 0 & 0 & \dots & 1 & -1 \\
-1 & 0 & 0 & \dots & 0 & 1
\end{pmatrix} \in \mathbb{R}^{n \times n}.
\end{align}
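As a quick sanity check of this construction, here is a NumPy sketch (the variable names are mine; `np.roll` implements the cyclic shift):

```python
import numpy as np

m, n = 4, 5
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n))

# Cyclic forward differences, computed directly by shifting and subtracting.
dI = X - np.roll(X, 1, axis=0)   # Delta_I X: X[i,j] - X[i-1,j], i-1 taken mod m
dJ = X - np.roll(X, 1, axis=1)   # Delta_J X: X[i,j] - X[i,j-1], j-1 taken mod n

# The circulant matrices above: ones on the diagonal, minus ones on the
# (cyclically wrapped) sub-/superdiagonal.
R_I = np.eye(m) - np.roll(np.eye(m), 1, axis=0)
R_J = np.eye(n) - np.roll(np.eye(n), 1, axis=1)

assert np.allclose(dI, R_I @ X)   # Delta_I X = R_I X
assert np.allclose(dJ, X @ R_J)   # Delta_J X = X R_J
```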
Precisely, we have that $\Delta_I X = R_I X$ and $\Delta_J X = X R_J$. Let $x = \text{vec}(X) \in \mathbb{R}^{mn}$ denote the vector obtained by stacking the rows of $X$ (this row-major convention is used throughout). Using the Kronecker product $\otimes$, we can then write these expressions in vector form as
\begin{align}
\text{vec}(\Delta_I X) &= (R_I \otimes I_n) \text{vec}(X) = (R_I \otimes I_n) x,\\
\text{vec}(\Delta_J X) &= (I_m \otimes R_J^T) \text{vec}(X) = (I_m \otimes R_J^T) x.
\end{align}
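Continuing the snippet above, these identities are easy to verify numerically: with the row-stacking convention, $\text{vec}(X)$ is simply `X.ravel()` in NumPy's default (C) order:

```python
x = X.ravel()                    # vec(X): the rows of X stacked into a vector
I_m, I_n = np.eye(m), np.eye(n)

assert np.allclose(np.kron(R_I, I_n) @ x, (R_I @ X).ravel())    # vec(Delta_I X)
assert np.allclose(np.kron(I_m, R_J.T) @ x, (X @ R_J).ravel())  # vec(Delta_J X)
```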
Stacking $R_I \otimes I_n$ on top of $I_m \otimes R_J^T$, we obtain the desired matrix
\begin{align}
R &= \begin{pmatrix} R_I \otimes I_n \\ I_m \otimes R_J^T \end{pmatrix} \in \mathbb{R}^{2mn \times mn}.
\end{align}
Obviously, the matrix-vector product $Rx$ can be implemented matrix-free, that is, without ever storing $R$ in memory: it amounts to shifting and subtracting, as sketched below. In any case, this matrix representation of the "pre-TV" operator is very interesting from an analytical standpoint. Since transposition distributes over the Kronecker product, we have that
\begin{align}
R^T &= \begin{pmatrix} R_I^T \otimes I_n & I_m \otimes R_J \end{pmatrix} \in \mathbb{R}^{mn \times 2mn}.
\end{align}
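Here is the shift-and-subtract implementation of $Rx$ mentioned above, continuing the same snippet (the helper name `R_mv` is mine), checked against the explicit Kronecker construction:

```python
def R_mv(x, m, n):
    """Matrix-free R @ x: devec, shift-and-subtract, re-vec."""
    X = x.reshape(m, n)                # devec
    dI = X - np.roll(X, 1, axis=0)     # devec of (R_I kron I_n) x
    dJ = X - np.roll(X, 1, axis=1)     # devec of (I_m kron R_J^T) x
    return np.concatenate([dI.ravel(), dJ.ravel()])

R = np.vstack([np.kron(R_I, I_n), np.kron(I_m, R_J.T)])   # 2mn x mn
assert np.allclose(R_mv(x, m, n), R @ x)
```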
Writing $\text{devec}$ for the inverse of $\text{vec}$ (i.e., reshaping an $mn$-vector back into an $m \times n$ matrix), I also noticed by inspection that
\begin{align}
\text{devec}[(R_I^T \otimes I_n)x] &= R_I^T X = \text{flip}_J \left( R_I \text{flip}_J (X) \right), \tag{$1$}
\end{align}
where $\text{flip}_J(M)$ is the matrix $M$ with the entries of each column written in reverse order (i.e., $M$ flipped upside-down). Similarly, we have that
\begin{align}
\text{devec}[(I_m \otimes R_J)x] &= X R_J^T = \text{flip}_I \left(\text{flip}_I(X) R_J \right), \tag{$2$}
\end{align}
where $\text{flip}_I(M)$ is the matrix $M$ with the entries of each row written in reverse order (i.e., the columns of $M$ in reverse order). Both identities follow from the fact that conjugating a circulant matrix by the reversal permutation yields its transpose. Therefore, using $(1)$ and $(2)$, the matrix-vector product $R^T x$ can be implemented with flips, shifts, and subtractions, just as for $Rx$, without storing either $R$ or $R^T$ in memory.
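For completeness, here is a sketch of the matrix-free adjoint built from $(1)$ and $(2)$, continuing the snippets above (`np.flipud` reverses each column, i.e., $\text{flip}_J$; `np.fliplr` reverses each row, i.e., $\text{flip}_I$; the helper name `Rt_mv` is mine):

```python
def Rt_mv(y, m, n):
    """Matrix-free R.T @ y via identities (1) and (2)."""
    Y1 = y[:m * n].reshape(m, n)   # devec of the Delta_I block of y
    Y2 = y[m * n:].reshape(m, n)   # devec of the Delta_J block of y
    F1 = np.flipud(Y1)                             # flip_J
    Z1 = np.flipud(F1 - np.roll(F1, 1, axis=0))    # R_I^T Y1, by (1)
    F2 = np.fliplr(Y2)                             # flip_I
    Z2 = np.fliplr(F2 - np.roll(F2, 1, axis=1))    # Y2 R_J^T, by (2)
    return (Z1 + Z2).ravel()

y = rng.standard_normal(2 * m * n)
assert np.allclose(Rt_mv(y, m, n), R.T @ y)
```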