I think this question is a little more low-level than the one it's being marked as a duplicate of, so I'm going to answer it.
The basic thing that you need to know is that an inner product on the 4-vector space need not have the form it has in Euclidean coordinates. That is, defining $w = ct$, it is not necessarily the case for all 4-D vector spaces that:$$\vec a \cdot \vec b = a_w b_w + a_x b_x + a_y b_y + a_z b_z.$$Probably the easiest one to wrap your head around is a skewed coordinate system where your basis vectors $\hat e_{w,x,y,z}$ are not orthogonal: yes $\vec a = \sum_i a_i \hat e_i$ for some basis vectors, but $\hat e_i \cdot \hat e_j \ne \delta_{ij}$ (where $\delta$ is the Kronecker delta symbol). Then it's obvious that if $C_{ij} = \hat e_i \cdot \hat e_j$ the result has instead a matrix hiding inside of it: $$\vec a \cdot \vec b = \sum_{ij} a_i C_{ij} b_j = \mathbf a^T ~\mathbf C~ \mathbf b.$$This matrix has a special name and it is called the metric or the metric tensor.
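You can check this skewed-basis bookkeeping numerically. Here's a minimal NumPy sketch (the basis vectors and components are arbitrary choices for illustration, not anything from the physics): the ordinary Cartesian dot product of two vectors agrees with the metric-weighted sum of their components in the skewed basis.

```python
import numpy as np

# An arbitrarily chosen non-orthogonal basis for R^2 (2D keeps it small).
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])  # not orthogonal to e1
basis = np.array([e1, e2])

# Metric tensor: C_ij = e_i . e_j
C = basis @ basis.T

# Components of two vectors in this skewed basis.
a_comp = np.array([2.0, -1.0])
b_comp = np.array([0.5, 3.0])

# Reconstruct the actual Cartesian vectors and dot them the usual way...
a_cart = a_comp @ basis
b_cart = b_comp @ basis
dot_cartesian = a_cart @ b_cart

# ...which must equal the metric-weighted expression a^T C b.
dot_metric = a_comp @ C @ b_comp
assert np.isclose(dot_cartesian, dot_metric)
```

The matrix `C` here is symmetric by construction, which is the "usual stipulation" mentioned below.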
In turn, we can imagine an "inner product" for vectors in $\mathbb R^4$ with any other metric, and see where it goes. The only usual stipulation is that the matrix be symmetric, $\mathbf C^T = \mathbf C,$ and usually invertible, so that the "dual space" is "isomorphic to" the original vector space -- we'll talk about what these "dual vectors" are in a second.
So in special relativity we have this "Lorentz group" of coordinate transformations, and if we're going to go whole-hog with this relativity business, then everything physical as we know it must depend on 4-vectors and other quantities which are unchanged by the Lorentz group; otherwise your experimental predictions will nontrivially depend on which coordinates you use.
Well, it happens to be the case that the Lorentz group preserves "dot products" for a different metric, which is given as either $\pm$ (depending on convention) of the matrix:$$\mathbf {g} =\begin{bmatrix}1&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1\end{bmatrix},$$sometimes also called by the symbol $\eta.$ Whether you use $+$ or $-$ depends essentially on whether you like to think of time as an imaginary dimension of space or space as an imaginary dimension of time; either way some factors of $\sqrt{-1}$ appear in some expressions but not others. I prefer $+$ because it means that trajectories which stay "inside" a light cone have a positive spacetime interval and the "proper time" is just the square root of the spacetime interval (divided by $c$, in these units where $w = ct$), but other people may have other conventions.
Now the Lorentz group has three sorts of "generators" (different things that can happen that build up the whole group). These are the parity transforms (multiplying the w-component or the whole 4-vector by -1), the rotations of the 3D subspace (x, y, z), and the "Lorentz boosts" of the form $$\mathcal L_x(\beta) = \frac{1}{\sqrt{1 - \beta^2}} ~ \begin{bmatrix}1&-\beta&0&0\\
-\beta&1&0&0\\
0&0&1&0\\
0&0&0&1\end{bmatrix}.$$The exact derivation of these boosts I will leave to other tutorials. Since rotations in 3D preserve the Euclidean metric $\mathbf C = \mathbf I$, it shouldn't be too hard to see that they also preserve the 3D block $\mathbf C = -\mathbf I$ while doing nothing to the time coordinate; hence they preserve $\mathbf g$, and we'll skip that proof. Let's also ignore the $y$- and $z$-directions and focus on the $wx$-mixing Lorentz boost. (There is no loss of generality here: any transform in the Lorentz group can be written as a rotation, followed by a Lorentz boost in the $x$-direction, followed by another rotation.)
First off, you can confirm a basic mathematical consistency of these boosts by checking that $\mathcal L_x(-\beta)\,\mathcal L_x(\beta) = I$: boosting by $-\beta$ undoes boosting by $\beta$, which is always a great way to start.
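That consistency check is easy to run numerically. A small NumPy sketch (the helper name `boost_x` is mine, just for illustration):

```python
import numpy as np

def boost_x(beta):
    """The w-x Lorentz boost matrix from the text (illustrative helper)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

# A boost by -beta undoes a boost by +beta:
beta = 0.6
product = boost_x(-beta) @ boost_x(beta)
assert np.allclose(product, np.eye(4))
```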
Now our boost $\mathcal L = \mathcal L_x(\beta)$ maps $\mathbf a \mapsto \mathcal L~\mathbf a$ and $\mathbf b \mapsto \mathcal L~\mathbf b$, so our inner product between these two becomes:$$\mathbf a^T ~\mathbf g ~ \mathbf b \mapsto (\mathcal L~\mathbf a)^T ~\mathbf g~(\mathcal L~\mathbf b) = \mathbf a^T (\mathcal L^T ~ \mathbf g ~ \mathcal L) \mathbf b.$$ For this to be a scalar it must be unchanged by the Lorentz boost, hence we need $\mathcal L^T ~ \mathbf g ~ \mathcal L = \mathbf g.$ You can confirm that the following matrix product works out:$$\begin{align}\mathcal L(\beta)^T ~\mathbf g~ \mathcal L(\beta) &= \frac{1}{1 - \beta^2}\begin{bmatrix}1&-\beta\\-\beta&1\end{bmatrix}
\begin{bmatrix}1&0\\0&-1\end{bmatrix}
\begin{bmatrix}1&-\beta\\-\beta&1\end{bmatrix}\\
&=\frac{1}{1 - \beta^2}\begin{bmatrix}1 - \beta^2&0\\0&\beta^2 - 1\end{bmatrix}\\
&=\begin{bmatrix}1&0\\0&-1\end{bmatrix} = \mathbf g \end{align}$$which proves that Lorentz boosts preserve the dot product defined by this particular metric, for all 4-vectors.
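If you'd rather not multiply the matrices by hand, the full $4\times 4$ version of $\mathcal L^T \mathbf g\, \mathcal L = \mathbf g$ can be checked numerically for a range of speeds (a NumPy sketch; `boost_x` is again an illustrative helper name):

```python
import numpy as np

# The (+,-,-,-) metric used in the text.
g = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """The w-x Lorentz boost matrix from the text (illustrative helper)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

# L^T g L = g for every beta with |beta| < 1:
for beta in (0.1, 0.5, 0.9, -0.99):
    L = boost_x(beta)
    assert np.allclose(L.T @ g @ L, g)
```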
Similar arguments about $\mathbf A^T ~ \mathbf g ~ \mathbf A$ apply for the parity transforms and for the rotations, of course. Usually we demand an even wider invariance of all of our physical predictions under the "Poincaré group", which takes the Lorentz group of coordinate transformations and adds spacetime-translations to it, but this just means that we always talk about differences in positions, say by explicitly including our spacetime "origin" point in our expressions.
This metric, therefore, is the way that we produce scalar numbers out of 4-vectors that are "invariant" in special relativity, which helps for making physical theories that are "manifestly covariant" -- their predictions do not change with respect to Lorentz boosts or rotations or translations of coordinates.
One more point of notation: when we have a metric which is not the trivial Euclidean metric, people often write the column vector $\mathbf b$ with "upper" indices $b^i$ and the row-vector $\mathbf a^T ~\mathbf g$, often called the "dual" of $\mathbf a$, with "lower" indices $a_i$. This preserves the appearance of the summation formula above; we can always state:$$\vec a \cdot \vec b = \sum_i a_i b^i = \sum_i a^i b_i.$$ It becomes in turn very common to just implicitly sum whenever you see the same symbol used for one lowered and one raised index, the so-called "Einstein summation convention." With the above metric this becomes very easy: whenever you have a 4-vector $(A, \vec b)$ (time component plus space component), the dual vector is $(A, -\vec b)$, and the Lorentz-covariant inner product between two such things is $A_1 A_2 - \vec b_1 \cdot \vec b_2$ for the "ordinary" Euclidean definition of the dot product.
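A quick numerical illustration of this index lowering (a NumPy sketch; the component values are arbitrary): applying $\mathbf g$ flips the sign of the spatial components, and the inner product comes out the same whichever vector carries the lowered index.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

a = np.array([2.0, 1.0, -1.0, 0.5])   # components a^i, time first
b = np.array([3.0, 0.0, 2.0, -1.0])   # components b^i

# Lowering an index: (A, b_vec) -> (A, -b_vec).
a_lower = g @ a
assert np.allclose(a_lower, [2.0, -1.0, 1.0, -0.5])

# The covariant inner product, three equivalent ways:
ip1 = a @ g @ b                       # a^T g b
ip2 = a_lower @ b                     # sum_i a_i b^i
ip3 = a[0] * b[0] - a[1:] @ b[1:]     # A1*A2 - (Euclidean 3D dot product)
assert np.isclose(ip1, ip2) and np.isclose(ip1, ip3)
```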
In turn, doing this with the 4-displacement $(c~\Delta t, \Delta \vec r)$ gives a Poincaré-invariant quantity $c^2 (\Delta t)^2 - |\Delta \vec r|^2$. This is a quantity which Lorentz boosts preserve, and it can be thought of as the "dot product" (really it should be called the "Lorentz-covariant inner product") of the spacetime 4-displacement with itself.
If it is positive, its square root divided by $c$ is called the proper time between the two events whose spacetime displacement it measures; it is the time that elapses in the inertial reference frames which think that both of the events happened "at the same place." If it is negative, then $\sqrt{-\sum_i r_i r^i}$ is a "proper distance" between the two events, the distance seen by the reference frames which think that both of the events happened "at the same time." In relativity these are mutually exclusive: events which are objectively spacelike-separated are not objectively timelike-separated, and vice versa.
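This trichotomy is easy to sketch in code (NumPy; `classify` is an illustrative helper of mine, not standard terminology, and the displacement components are made-up numbers):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])

def interval(d):
    """Lorentz-invariant interval of a 4-displacement (c*dt, dx, dy, dz)."""
    return d @ g @ d

def classify(d):
    """Illustrative helper: sort a displacement into the three cases."""
    s2 = interval(d)
    if s2 > 0:
        return "timelike", np.sqrt(s2)    # c times the proper time
    elif s2 < 0:
        return "spacelike", np.sqrt(-s2)  # the proper distance
    return "lightlike", 0.0

# c*dt = 5, dx = 3: timelike, with c*tau = sqrt(25 - 9) = 4.
kind, ctau = classify(np.array([5.0, 3.0, 0.0, 0.0]))
assert kind == "timelike" and np.isclose(ctau, 4.0)

# c*dt = 3, dx = 5: spacelike, with proper distance 4.
kind, dist = classify(np.array([3.0, 5.0, 0.0, 0.0]))
assert kind == "spacelike" and np.isclose(dist, 4.0)
```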