There is no problem in defining an "entrywise" matrix norm as follows:
$$
\|A\|_1=\|\operatorname{vec}(A)\|_1=\sum_{i,j}|A_{i,j}|
$$
see Wikipedia, Matrix norms. It is a norm, and the induced distance is (if $A$ and $B$ have the same dimensions):
$$
d(A,B)=\sum_{i,j}|A_{i,j}-B_{i,j}|
$$
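For illustration, here is a minimal numpy sketch of this norm and its induced distance (the helper name `entrywise_norm1` is just a label chosen here, not a library function):

```python
import numpy as np

# Entrywise 1-norm: the vector 1-norm of vec(A), i.e. the sum of all |A_ij|.
def entrywise_norm1(A):
    return np.abs(A).sum()

A = np.array([[1.0, -2.0], [3.0, 0.5]])
B = np.array([[0.0, 1.0], [-1.0, 2.0]])

print(entrywise_norm1(A))      # 6.5 = |1| + |-2| + |3| + |0.5|
print(entrywise_norm1(A - B))  # 9.5 = induced distance d(A, B)
```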
This norm is sub-multiplicative (see here):
$$
\|AB\|_1\leq\|A\|_1\|B\|_1
$$
but beware, in general:
$$
\|A^tA\|_1\neq\|AA^t\|_1
$$
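Both facts are easy to check numerically; a small sketch on a rectangular matrix (values chosen arbitrarily for illustration):

```python
import numpy as np

# Entrywise 1-norm, as above: sum of all |A_ij|.
def entrywise_norm1(A):
    return np.abs(A).sum()

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])          # a 3x2 (rectangular) example

# Sub-multiplicativity: ||A^t A||_1 <= ||A^t||_1 * ||A||_1
print(entrywise_norm1(A.T @ A) <= entrywise_norm1(A.T) * entrywise_norm1(A))
# True (6.0 <= 16.0)

# ...but the two Gram products have different entrywise 1-norms here:
print(entrywise_norm1(A.T @ A))   # 6.0
print(entrywise_norm1(A @ A.T))   # 8.0
```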
Another example of an "entrywise" matrix norm often encountered in practice is the Frobenius norm. It is defined as follows:
$$
\|A\|_F = \sqrt{\text{tr}(A^tA)}=\sqrt{\sum_{i,j}|A_{i,j}|^2}
$$
The Frobenius norm also fulfills the sub-multiplicative property (Cauchy-Schwarz in action, see here):
$$
\|AB\|_F\leq \|A\|_F\|B\|_F
$$
In contrast to the previous case $\|.\|_1$, because the trace is invariant under cyclic permutations (in particular $\text{tr}(A^tA)=\text{tr}(AA^t)$), the Frobenius norm also fulfills the property:
$$
\|AA^t\|_F=\|A^tA\|_F
$$
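Both formulas for $\|A\|_F$, and this identity, can be checked with numpy's built-in `'fro'` norm; a short sketch:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0, 0.5],
              [0.0, 4.0]])                                 # 3x2

# The two equivalent definitions of the Frobenius norm:
f_trace = np.sqrt(np.trace(A.T @ A))
f_sum   = np.sqrt((np.abs(A) ** 2).sum())
print(np.isclose(f_trace, f_sum))                          # True
print(np.isclose(f_trace, np.linalg.norm(A, 'fro')))       # True

# ||A A^t||_F = ||A^t A||_F, even though the two products
# do not even have the same size (3x3 versus 2x2):
print(np.isclose(np.linalg.norm(A @ A.T, 'fro'),
                 np.linalg.norm(A.T @ A, 'fro')))          # True
```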
To be complete, one must also say a word about matrix norms induced by vector norms. One can define:
$$
\|A\|_1=\sup_{x\neq 0}\frac{\|Ax\|_1}{\|x\|_1}
$$
(Attention: despite the identical notation, this norm is different from the previously defined $\|\operatorname{vec}(.)\|_1$; here we have $\|A\|_1=\max_j \sum_i|A_{i,j}|$, the maximum absolute column sum.)
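As a sketch of the difference, numpy's `np.linalg.norm(A, 1)` computes precisely this induced norm; note that the random sampling below only probes the supremum, it does not compute it exactly:

```python
import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])

# numpy's ord=1 matrix norm is the induced norm: maximum absolute column sum.
induced = np.linalg.norm(A, 1)
print(induced, np.abs(A).sum(axis=0).max())   # 4.0 4.0

# Random vectors never exceed the supremum:
rng = np.random.default_rng(0)
X = rng.standard_normal((2, 1000))            # 1000 candidate vectors x
ratios = np.abs(A @ X).sum(axis=0) / np.abs(X).sum(axis=0)
print(ratios.max() <= induced + 1e-12)        # True

# Contrast with the entrywise 1-norm of the same matrix:
print(np.abs(A).sum())                        # 6.5
```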
An immediate generalization, valid for any $1\leq p \leq \infty$, is:
$$
\|A\|_p=\sup_{x\neq 0}\frac{\|Ax\|_p}{\|x\|_p}
$$
Such norms are called matrix norms induced by vector norms (also known as operator norms); they automatically fulfill the sub-multiplicative property:
$$
\|AB\|_p \leq \|A\|_p \|B\|_p
$$
Attention: here too, in general,
$$
\|AA^t\|_p \neq \|A^tA\|_p
$$
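A small sketch checking sub-multiplicativity for $p\in\{1,2,\infty\}$ (the values of $p$ for which numpy implements the induced matrix norm) and exhibiting the inequality above for $p=1$ on a rectangular matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                    # 3x2
B = np.array([[2.0, -1.0],
              [0.5, 3.0]])                    # 2x2

# Induced norms are sub-multiplicative for every p:
for p in (1, 2, np.inf):
    lhs = np.linalg.norm(A @ B, p)
    rhs = np.linalg.norm(A, p) * np.linalg.norm(B, p)
    print(p, lhs <= rhs + 1e-12)              # True for all three

# ||A A^t||_1 != ||A^t A||_1 for this rectangular A (4.0 versus 3.0);
# note that for p = 2 (the spectral norm) the two always coincide.
print(np.linalg.norm(A @ A.T, 1), np.linalg.norm(A.T @ A, 1))
```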
Also note this important fact: since we are in finite dimension, all these previously defined matrix norms are equivalent, in the sense that for any two matrix norms $\|.\|_\alpha$ and $\|.\|_\beta$ there exist constants $r,s>0$ such that:
$$
\forall A,\ r\|A\|_\alpha\leq \|A\|_\beta \leq s\|A\|_\alpha
$$
In particular, if a sequence $n\mapsto A_n$ is convergent for a given matrix norm, it is also convergent for all the other norms.
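As a concrete instance of this equivalence: for an $m\times n$ matrix, $\|A\|_F\leq\|\operatorname{vec}(A)\|_1\leq\sqrt{mn}\,\|A\|_F$, which is just the usual equivalence of the vector 1- and 2-norms applied to $\operatorname{vec}(A)$. A quick empirical check:

```python
import numpy as np

# Check ||A||_F <= ||vec(A)||_1 <= sqrt(m*n) * ||A||_F on random matrices.
rng = np.random.default_rng(1)
m, n = 4, 3
for _ in range(1000):
    A = rng.standard_normal((m, n))
    n1 = np.abs(A).sum()              # entrywise 1-norm
    nf = np.linalg.norm(A, 'fro')     # Frobenius norm
    assert nf <= n1 <= np.sqrt(m * n) * nf + 1e-12
print("equivalence bounds hold on all 1000 samples")
```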