It's not clear how you want to normalize the function. If you want to normalize the spectrum so that the amplitude of the spectrum in the frequency domain matches the amplitude of the function in the time domain, the answer depends on whether the signal you are analyzing is real or complex. I'll demonstrate with a cosine wave.
Let
\begin{equation}X[k] = \sum_{n=0}^{N-1}A\cos(\frac{2\pi f_{0}n}{N})e^{-j2\pi\frac{kn}{N}}\end{equation}
Using Euler's formula, we can rewrite this as
\begin{equation}X[k] = A\sum_{n=0}^{N-1}(\frac{e^{j2\pi\frac{f_{0}n}{N}}+e^{-j2\pi\frac{f_{0}n}{N}}}{2})e^{-j2\pi\frac{kn}{N}}\end{equation}
Expanding this, we get
\begin{equation}X[k] = \frac{A}{2}\sum_{n=0}^{N-1}e^{j2\pi\frac{f_{0}n}{N}}e^{-j2\pi\frac{kn}{N}}+\frac{A}{2}\sum_{n=0}^{N-1}e^{-j2\pi\frac{f_{0}n}{N}}e^{-j2\pi\frac{kn}{N}}\end{equation}
Simplifying, this becomes
\begin{equation}X[k] = \frac{A}{2}\sum_{n=0}^{N-1}e^{-j2\pi\frac{(k-f_{0})n}{N}}+\frac{A}{2}\sum_{n=0}^{N-1}e^{-j2\pi\frac{(k+f_{0})n}{N}}\end{equation}
Each complex exponential has a magnitude of one, so
\begin{equation}\sum_{n=0}^{N-1}1 = N\end{equation}
This means the amplitude of each delta after the Fourier transform is applied is $\frac{AN}{2}$, so to normalize back to A, you need to multiply by $\frac{2}{N}$. A similar proof shows that for complex signals you multiply by $\frac{1}{N}$.
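As a quick numerical check of the $\frac{AN}{2}$ result, here is a NumPy sketch (parameters are arbitrary; the MATLAB version appears in the edit at the end):

```python
import numpy as np

# Arbitrary parameters for the check
N = 256          # signal length
A = 3.0          # amplitude
k0 = 10          # tone frequency, in bins

n = np.arange(N)
x_real = A * np.cos(2 * np.pi * k0 * n / N)       # real cosine
x_cplx = A * np.exp(1j * 2 * np.pi * k0 * n / N)  # complex exponential

X_real = np.fft.fft(x_real)
X_cplx = np.fft.fft(x_cplx)

# Cosine: peak magnitude is A*N/2, so multiplying by 2/N recovers A
assert np.isclose(np.abs(X_real[k0]), A * N / 2)
assert np.isclose(np.abs(X_real[k0]) * 2 / N, A)

# Complex exponential: peak magnitude is A*N, so multiplying by 1/N recovers A
assert np.isclose(np.abs(X_cplx[k0]), A * N)
assert np.isclose(np.abs(X_cplx[k0]) / N, A)
```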
This leads into a discussion of the power spectral density (PSD) as mentioned by the other commenter. That definition of a PSD seems to be incorrect, assuming $f_{s}$ is the sample rate, which it appears to be. There are two definitions given by Stoica and Moses in "Spectral Analysis of Signals". They are
\begin{equation} \phi_{1}(\omega) = \sum_{k=-\infty}^{\infty}r(k)e^{-j\omega k}\end{equation}
and
\begin{equation} \phi_{2}(\omega) = \lim_{N \to \infty}E\{{\frac{1}{N}\lvert\sum_{t=0}^{N-1}y(t)e^{-j\omega t}\rvert^{2}}\}\end{equation}
where $r(k)$ is the autocorrelation sequence (see eqs 1.3.7 and 1.3.10). These are shown to be equivalent under the assumption that $r(k)$ decays sufficiently quickly, ie
\begin{equation}\lim_{N \to \infty}\frac{1}{N}\sum_{k=-N}^{N}|k|\,|r(k)| = 0\end{equation}
(see equations 1.3.11-1.3.17).
The definition of the PSD listed by the other user (a) assumes the signal being analyzed is real (ie sines and cosines, not complex exponentials), and (b) repeats a common misconception about how to normalize the PSD. Dividing by the sample rate just scales the whole spectrum down by a constant. The correct way to normalize the PSD is by the bin width.
Notice that both definitions of the PSD are defined for continuous spectra, ie, the PSD is the DTFT of the autocorrelation sequence, not the DFT, which means the PSD is a continuous spectrum. However, it is obviously impossible to compute a true continuous spectrum, so we approximate it using the DFT. If our sample rate is $f_{s}$ and our signal is length $N$, then our sample points in the spectrum will lie at integer multiples of the fundamental frequency $\delta_{f} = \frac{f_{s}}{N}$. For a frequency $f_{0}$ at one of these integer multiples, the power spectrum accumulates all of the power in the range $[f_{0}-\frac{\delta_{f}}{2},f_{0}+\frac{\delta_{f}}{2})$ into a single frequency bin, much like an FFT accumulates the amplitude into a single frequency bin. This gives the total power of the spectrum across a finite frequency band.
To normalize to get a PSD, we have to divide by the bandwidth (ie the bin width $\delta_{f}$) across which the power has been accumulated. This gives us an average power spectral density, ie the average power contained across a finite bandwidth. This is because the assumption is that random signals have finite average power. Random signals do not have finite energy as they are assumed second order stationary, and therefore do not have DTFTs that converge. However, second order stationary random signals have finite average power, hence the need for a PSD estimate via the DTFT of the autocorrelation sequence. This is also the reason for the limit in the Periodogram estimate (see the discussion at the beginning of section 1.3 in the Stoica and Moses book mentioned above).
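To make the bin-width normalization concrete, here is a NumPy sketch (white noise is chosen because its average power is known; the parameters are arbitrary). Integrating the bin-width-normalized PSD over frequency recovers the signal's average power:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0    # sample rate (arbitrary)
N = 4096      # signal length
df = fs / N   # bin width

y = rng.standard_normal(N)       # white noise, average power ~1
Y = np.fft.fft(y)

# Power accumulated in each frequency bin (the power spectrum)
P_bin = np.abs(Y / N) ** 2

# Normalize by the bin width to get a PSD (power per Hz)
psd = P_bin / df

# Integrating the PSD over frequency recovers the average power (Parseval)
avg_power = np.mean(np.abs(y) ** 2)
assert np.isclose(np.sum(psd) * df, avg_power)
```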
Hopefully this clarifies some things!
EDIT:
Here is some MATLAB code to show the normalization in the first part of my post.
% Define Frequency location
f1 = 1;
f2 = 3;
% Sample rate
fs = 10;
% Signal length
N = 100;
% Frequency resolution
df = fs/N;
% Define frequency locations
w = -fs/2:df:fs/2-df;
% Define signals
t = 0:N-1;
A = 10*cos(2*pi*f1/fs*t);
B = 10*exp(1i*2*pi*f2/fs*t);
% Plot signals
figure; plot(w,abs(fftshift(fft(A))))
hold on;
plot(w,abs(fftshift(fft(B))))
hold off;
legend('Cosine','Complex Exponential')
figure; plot(w,abs(fftshift(fft(A*2/N))))
hold on;
plot(w,abs(fftshift(fft(B/N))))
hold off;
legend('Normalized Cosine','Normalized Complex Exponential')

Additionally, if you want to normalize to 0 dB, calculate the FFT, divide by the maximum absolute value, and then take $20\log_{10}$ of the magnitude and plot.
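For readers without MATLAB, the 0 dB normalization looks like this in NumPy (a sketch with an arbitrary test tone):

```python
import numpy as np

fs, N = 100.0, 1024
t = np.arange(N) / fs
x = 5.0 * np.cos(2 * np.pi * 10.0 * t)   # arbitrary test tone

X = np.abs(np.fft.fft(x))
X_db = 20 * np.log10(X / X.max())        # peak is now at exactly 0 dB

assert np.isclose(X_db.max(), 0.0)
```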
EDIT 2:
I had to go back and review some notation for this. It turns out I had a couple misunderstandings as well. As previously, "Spectral Analysis of Signals" will be the reference for all the derivations.
TL;DR: The PSD does not (in general) give the power content of a signal at frequency $\omega$. It instead gives the power at $\omega$ in the signal's autocorrelation sequence.
Full breakdown:
It's not a super long proof but an important one. Let's define the autocorrelation as
\begin{equation}r(k) = E\{y(t)y^{*}(t-k)\}\end{equation}
The PSD is then defined (as noted previously) as
\begin{equation}\phi(\omega) = \sum_{k=-\infty}^{\infty}r(k)e^{-j\omega k}\end{equation}
Now let's define a new autocorrelation sequence as follows
\begin{equation}r'(k) = E\{y(t)y^{*}(t+k)\}\end{equation}
This is very similar to the definition of $r(k)$, but the difference is important. Now, if we define a new PSD based on this definition of the autocorrelation sequence, we get
\begin{equation}\phi'(\omega) = \sum_{k=-\infty}^{\infty}r'(k)e^{-j\omega k} = \sum_{k=-\infty}^{\infty}r(k)e^{j\omega k} = \phi(-\omega)\end{equation}
If the PSD represented the power in the signal, we would have gotten $\phi(\omega) = \phi'(\omega)$, because the signal's spectral content shouldn't depend on which convention we use for its autocorrelation sequence. However, the PSD clearly does depend on the definition of the autocorrelation sequence, so it cannot represent the power in the signal. It may be enough, in the case that the original signal is real, to say that the PSD relates to the power in the signal, and therefore has a relationship with the power spectrum. However, this is not generally true, as $\phi(\omega) \neq \phi(-\omega)$ for complex valued signals. I also noted before that we wanted to compute an average power spectral density, as random signals have finite average power. This is given by the second definition of the PSD originally written.
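The $\phi'(\omega) = \phi(-\omega)$ relationship can be checked numerically. Here is a NumPy sketch using circular (finite-length) autocorrelations in place of the expectations, which is enough to exhibit the frequency reversal:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # complex signal

# Circular analogues of the two autocorrelation conventions:
# r[k]  ~ E{y(t)y*(t-k)},  rp[k] ~ E{y(t)y*(t+k)}
r  = np.array([np.sum(y * np.conj(np.roll(y,  k))) for k in range(N)])
rp = np.array([np.sum(y * np.conj(np.roll(y, -k))) for k in range(N)])

phi  = np.fft.fft(r)
phip = np.fft.fft(rp)

# phi'(omega) is the frequency-reversed phi: phip[m] == phi[(N-m) % N]
phi_reversed = np.concatenate(([phi[0]], phi[1:][::-1]))
assert np.allclose(phip, phi_reversed)

# For a complex signal the two PSDs do NOT coincide bin by bin
assert not np.allclose(phi, phip)
```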
Therefore, the frequency averaged squared magnitude spectrum is a PSD estimate, not a power spectrum estimate. The proof as I see it is as follows. Let the frequency averaged squared magnitude spectrum be
\begin{equation} \phi(\omega) = \frac{1}{N}\lvert Y(\omega)\rvert^{2} = \frac{1}{N}Y(\omega)Y^{*}(\omega)\end{equation}
Taking the inverse Fourier transform, we get something proportional to
\begin{equation}y(t)\ast y^{*}(-t)\end{equation}
This is a simplified definition of the autocorrelation sequence. Therefore, the average squared magnitude spectrum is a scaled PSD, that again describes the power contained in the autocorrelation sequence, not the signal itself. I'm not sure if there is a proof for the relation between the power spectrum and the power spectral density for real valued signals, but it might only need to be that $\phi(\omega) = \phi(-\omega)$ for real spectra.
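The finite-length (circular) version of this identity can be checked directly: the periodogram $\frac{1}{N}\lvert Y(\omega)\rvert^{2}$ is exactly the DFT of the scaled circular autocorrelation. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
y = rng.standard_normal(N) + 1j * rng.standard_normal(N)

Y = np.fft.fft(y)
periodogram = np.abs(Y) ** 2 / N        # (1/N)|Y(w)|^2

# Circular autocorrelation of y with itself, scaled by 1/N
r = np.array([np.sum(y * np.conj(np.roll(y, k))) for k in range(N)]) / N

# The periodogram is exactly the DFT of this autocorrelation sequence
assert np.allclose(np.fft.fft(r), periodogram)
```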
I edited my earlier response in light of this new information.
1 Stoica, P., & Moses, R. L. (2005). Spectral analysis of signals (Vol. 452, pp. 25-26). Upper Saddle River, NJ: Pearson Prentice Hall.