
Motivation: I was excited by X-Rui's answer in this thread, so I tried to generalize it.

I tried to obtain the distribution in $[0,1]$ of the fractional parts of the numbers $n/1, n/2, \ldots, n/n$ as $n$ tends to $\infty$.

There may be many applications, such as computing the limit of ${1\over n}\big(\sum_k [an/k] - \sum_k n/k\big)$ as $n$ tends to $\infty$.

My overall question is: do you think the following informal argument makes sense? (Of course, much more work would be needed to make it a formal proof.)

Fix an integer $n>1$, which is in fact assumed to be large. Let $\alpha$ be another large positive number.

Building upon the argument of X-Rui, we have

for ${\alpha\over \alpha+1} n < k \leq n$, there holds $1\leq {n\over k} < 1 + {1\over \alpha}$.

for ${\alpha\over \alpha+2}n < k \leq {\alpha\over \alpha+1}n$, there holds $1+{1\over \alpha}\leq {n\over k} < 1 + {2\over \alpha}$.

for ${\alpha\over \alpha+3}n < k \leq {\alpha\over \alpha+2}n$, there holds $1+{2\over \alpha}\leq {n\over k} < 1 + {3\over \alpha}$.

. . .

for ${\alpha\over 2\alpha}n < k \leq {\alpha\over 2\alpha-1}n$, there holds $2 - {1\over \alpha} \leq {n\over k} < 2$.

for ${\alpha\over 2\alpha+1} n < k \leq {\alpha \over 2\alpha}n$, there holds $2\leq {n\over k} < 2 + {1\over \alpha}$.

and so on.

Hence, if $q$ is an integer between $1$ and $\alpha$, the fractional part of $n/k$ will lie between ${q-1\over \alpha}$ and ${q\over \alpha}$ whenever ${n\alpha \over m\alpha + q} < k \leq {n\alpha\over m\alpha+q-1}$, with $m = 1, 2, 3, \ldots$.

So, as $n$ becomes very large, the proportion of these fractional parts among the fractional parts of $n/1, n/2, \ldots, n/n$ will tend to $$ S = \alpha \sum_{m=1}^\infty \bigg({1\over m\alpha+q-1} - {1\over m\alpha + q}\bigg) = \sum_m \bigg({1\over m + {q\over \alpha}-{1\over \alpha}} - {1\over m + {q\over \alpha}}\bigg).$$ Since $\alpha$ has been assumed to be large, we can use the approximation ${1\over x-\varepsilon}-{1\over x} \approx {\varepsilon\over x^2}$ to get $$ S \approx \sum_m \bigg({1\over m+{q\over \alpha}}\bigg)^2 {1\over \alpha}.$$ Now, in order to obtain a continuous distribution, we replace ${q\over \alpha}$ by $x$ and ${1\over \alpha}$ by $dx$, to obtain $$d\phi = \sum_m {1\over (x+m)^2}\, dx, $$ where $\phi$ is the cumulative distribution function of the desired distribution. In other words, the density of this distribution is $$\varphi(x) = \sum_m {1\over (m+x)^2}.$$
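As an informal numerical sanity check (a sketch, not part of the argument), one can count the fractional parts $\{n/k\}$ falling below a threshold $y$ and compare with the candidate CDF obtained by integrating the density term by term: $\int_0^y {dx\over (m+x)^2} = {1\over m} - {1\over y+m}$. In Python:

```python
# Empirical sanity check (a sketch, not part of the argument): for large n,
# the proportion of k <= n with {n/k} < y should approach
# Phi(y) = sum_m (1/m - 1/(y+m)), obtained by integrating the density
# term by term over [0, y].
n = 10**6
y = 0.5
empirical = sum(1 for k in range(1, n + 1) if (n / k) % 1 < y) / n

M = 10**6  # truncation of the series; the tail is O(y/M)
Phi_y = sum(1 / m - 1 / (y + m) for m in range(1, M + 1))
print(empirical, Phi_y)
```

For $n = 10^6$ the two numbers agree to a few decimal places, which is at least consistent with the heuristic above.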

My secondary question is: does the above sum have a known analytic form?

Note: the above function is indeed a probability density (in fact it is the folded distribution of the density $1/(1+x)^2$ on $[0, \infty)$): to see this, we only have to check that $\int_0^1 \varphi(x)\, dx = 1$. We have $$S_m = \int_0^1 {dx\over (m+x)^2} = {1\over m} - {1\over m+1} = {1\over m(m+1)}.$$ Hence $\sum_{m\geq 1} S_m < \infty$ and $\int_0^1\varphi(x)\,dx = \sum_m S_m$.

Now $$\sum_{m=1}^N S_m = \sum_{m=1}^N \bigg({1\over m} - {1\over m+1}\bigg) = 1 - {1\over N+1} \longrightarrow 1, \quad {\rm as}\ N\to \infty.$$

MikeTeX
  • First of all, I’m glad my answer can be your inspiration! Coincidentally, on $[0, \infty)$, $\varphi$ is the same as the electric field strength when you put a unit positive charge at each negative integer. I guess the cumulative is then related to the potential? Don’t know how this physical connection helps if at all. Also, before the second question, $d\phi$ might be a typo? Should be $d\Phi$ right? – X-Rui Dec 21 '23 at 12:25
  • Yes, thx for the typo I've just corrected. – MikeTeX Dec 21 '23 at 12:28
  • Your link about the electric potential is interesting. Not sure if it helps much though. – MikeTeX Dec 21 '23 at 12:38

3 Answers


Answer to the overall question. I can neither assert the correctness of the reasoning nor find flaws in it. But if we start over with the core idea in a cleaner way, the conclusion does hold.

Think of $\frac{k}{n} = \frac{1}{n/k}$ as uniformly spaced points on $(0, 1]$. As $n \to \infty$, it is reasonable to think that we are just sampling the uniform distribution on $(0, 1]$. Asking for the distribution of $\left\{\frac{n}{k}\right\}$ then translates into asking for the distribution of $Y = \left\{\frac{1}{X}\right\}$ where $X \sim U(0, 1)$.

As usual, we investigate the cumulative distribution function of $Y = \{\frac{1}{X}\}$. For $y \in [0, 1)$, $$\begin{gather*} Y = \left\{\frac{1}{X}\right\} \leq y \iff \frac{1}{X} \in \bigcup_{m=1}^\infty [m, m+y] \iff X \in \bigcup_{m=1}^\infty \left[\frac{1}{y+m}, \frac{1}{m}\right], \\ \Phi(y) = P(Y \leq y) = \sum_{m=1}^\infty P\left(\frac{1}{y+m} \leq X \leq \frac{1}{m}\right) = \sum_{m=1}^\infty \left(\frac{1}{m} - \frac{1}{y+m}\right). \end{gather*}$$ This CDF agrees with the PDF given in the description: $$\Phi'(y)=\sum_{m=1}^\infty \frac{1}{(y+m)^2} = \varphi(y).$$
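This derivation is easy to check by simulation. A minimal Python sketch samples $X \sim U(0,1]$, forms $Y = \{1/X\}$, and compares the empirical CDF of $Y$ with the series for $\Phi$:

```python
import random

# Monte Carlo sketch of the derivation above: sample X ~ U(0, 1], set
# Y = {1/X}, and compare the empirical CDF of Y at y with
# Phi(y) = sum_m (1/m - 1/(y+m)).
random.seed(0)
N = 10**6
y = 0.3
hits = 0
for _ in range(N):
    x = 1.0 - random.random()   # uniform on (0, 1], avoids division by zero
    if (1.0 / x) % 1 <= y:
        hits += 1
empirical_cdf = hits / N

M = 10**6  # truncation of the series
Phi_y = sum(1 / m - 1 / (y + m) for m in range(1, M + 1))
print(empirical_cdf, Phi_y)
```

With $10^6$ samples the Monte Carlo standard error is about $5\times 10^{-4}$, so the two values should agree to roughly three decimal places.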

Answer to the secondary question. We can extend the formula of $\Phi$ to all real numbers except negative integers. It would not be the CDF of $Y$ any more, but it agrees with the CDF on $[0, 1]$, which is the part that matters anyway. The extended $\Phi$ is actually a continuous extension of the partial sums of the harmonic series. For all $x \neq -1, -2, -3, \dots$, $$\begin{split} \Phi(x+1) &= \sum_{m=1}^\infty \left(\frac{1}{m} - \frac{1}{x+1+m}\right) \\ &= \sum_{m=1}^\infty \left(\frac{1}{m} - \frac{1}{m+1}\right) + \sum_{m=1}^\infty\left(\frac{1}{m+1} - \frac{1}{x+m+1}\right) \\ &= 1 + \sum_{m=2}^\infty\left(\frac{1}{m} - \frac{1}{x+m}\right) \\ &= \frac{1}{x+1} + \sum_{m=1}^\infty\left(\frac{1}{m} - \frac{1}{x+m}\right) \\ &= \Phi(x) + \frac{1}{x+1}. \end{split}$$ And since $\Phi(0)=0$, we have $\Phi(n)=H_n=\sum_{m=1}^n \frac{1}{m}$ for every natural number $n$.

There is a known continuous formula (ref 1 and 2) for the harmonic numbers which is $$H(x) = \psi(x+1) + \gamma = \frac{\Gamma'(x+1)}{\Gamma(x+1)} + \gamma,$$ where $\psi(x) = \frac{\Gamma'(x)}{\Gamma(x)}$ is the digamma function and $\gamma \approx 0.577$ is the Euler-Mascheroni constant. In fact, this $H$ is identical to $\Phi$ (ref 2, 3, and 4). This gives us a nice formula for $\Phi$ using $\psi$, $$\Phi(x) = \psi(x+1) + \gamma = \psi(x) + \frac{1}{x} + \gamma.$$ The latter equality is from the recurrence formula of $\psi$ (ref 1 and 2). Then, with Gauss's digamma theorem (ref 2), it is possible to calculate the exact value of $\Phi$ for any rational number in $[0, 1]$ using only elementary functions.
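The identity $\Phi(x) = \psi(x+1) + \gamma$ can be tested numerically using only the defining series, with no special-function library: the known special value $\psi(3/2) = 2 - \gamma - 2\ln 2$ gives $\Phi(1/2) = 2 - 2\ln 2$, and at integers $\Phi(n)$ should equal $H_n$. A sketch:

```python
import math

# Check of Phi(x) = psi(x+1) + gamma using only the defining series:
# the special value psi(3/2) = 2 - gamma - 2 ln 2 gives Phi(1/2) = 2 - 2 ln 2,
# and at integers Phi(n) should equal the harmonic number H_n.
def Phi(x, M=10**6):
    # truncated series; the tail is O(x/M)
    return sum(1 / m - 1 / (x + m) for m in range(1, M + 1))

closed_half = 2 - 2 * math.log(2)        # Phi(1/2) from psi(3/2)
H5 = sum(1 / m for m in range(1, 6))     # harmonic number H_5
print(Phi(0.5), closed_half)
print(Phi(5), H5)
```

Both pairs agree to about five decimal places with the series truncated at $10^6$ terms.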

References

  1. This MSE answer by Simply Beautiful Art gave a continuous formula for harmonic numbers using gamma function.
  2. This wiki page on digamma function provides many properties of the digamma function useful to this answer, notably in the sections Relation to harmonic numbers, Series formula, Recurrence formula and characterization, and Gauss's digamma theorem.
  3. Equation 6.3.16 in Handbook of Mathematical Functions by M. Abramowitz and I. A. Stegun is the reference Wiki used. It states the identity between $H$ and $\Phi$ without a proof.
  4. This Desmos approximation of mine was a test I did when I first had the idea.
X-Rui
  • Very nice. I think this will be the accepted answer, as it should not be too difficult now to make it a formal proof. Your insight regarding $\Phi$ is bright too. I will wait some time, though, in order to encourage others to give their answer. – MikeTeX Dec 22 '23 at 12:57
  • I've come back from the weekend and have just seen your updates. I'm happy you were able to establish the relation between $\psi$ and $\Phi$. It is interesting, also, to compare this answer with the expression for $\Phi$ I gave in my answer. This gives a possibly new integral expression for $\psi$. Maybe you or me, or both of us, should write an article about the distribution of the fractional part. Do you think this is worthy of that? – MikeTeX Dec 23 '23 at 18:01
  • I have recovered your nice result in another way. See the "edit" in my answer. – MikeTeX Dec 23 '23 at 20:53
  • @MikeTex It’s interesting to see how we arrive at the same result via very different routes. However I don’t believe we are discovering anything new (call me pessimistic haha). I feel digamma function is interesting enough that people have figured out the obvious things, and the question in this post itself holds both purely mathematical and physical value. In particular, if I understand correctly, I think the integral formula you found in your answer has been recorded in the handbook in ref 3 (eq. 6.3.21 and .22). If you do write an article on this just give me a shoutout and I’d be glad! – X-Rui Dec 27 '23 at 00:51
  • I admit it would be a bit surprising that no one has already tackled the problem of the distribution of the integral part of $n/i$. Especially considering the extensive literature about the distribution of sequences and other random variables modulo 1. – MikeTeX Dec 27 '23 at 16:17

After some research, I am able to elaborate on the secondary question. So, rather than editing my question, which is already very long, I post the partial answers here.

With the notation of the question, it turns out that $\varphi(x) = \zeta(2, x) - {1\over x^2}$ for all $x>0$, where $\zeta(s,z)$ is the Hurwitz zeta function (for $x=0$, the formula for $\varphi$ in the question immediately gives $\varphi(0)=\pi^2/6$, by the Basel problem). This function has an integral representation, which allows us to write $$\varphi(x) = \int_0^\infty {t e^{-tx} \over 1-e^{-t}}\, dt - {1\over x^2},\quad (x > 0).\quad\quad (*)$$
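A quick numerical sketch of $(*)$ at $x = 1/2$: the series $\sum_{m\geq 1} 1/(m+x)^2$ and the integral representation should agree. The improper integral is truncated at $T = 80$ (the tail is negligible for $x = 1/2$) and evaluated by a midpoint rule; the integrand tends to $1$ as $t \to 0$, so there is no singularity. At $x = 0$ the series should give $\pi^2/6$.

```python
import math

# Sketch check of (*) at x = 1/2: the series sum_m 1/(m+x)^2 and the
# integral form of the Hurwitz zeta function should agree. The improper
# integral is truncated at T = 80 and evaluated by a midpoint rule; the
# integrand tends to 1 as t -> 0, so there is no singularity there.
def phi_series(x, M=10**6):
    return sum(1 / (m + x) ** 2 for m in range(1, M + 1))

def hurwitz_integral(x, T=80.0, N=200000):
    h = T / N
    total = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        total += t * math.exp(-t * x) / (1.0 - math.exp(-t))
    return total * h  # this approximates zeta(2, x)

x = 0.5
print(phi_series(x), hurwitz_integral(x) - 1 / x**2)   # two forms of phi(1/2)
print(phi_series(0.0), math.pi**2 / 6)                 # phi(0) vs pi^2/6
```

Since $\zeta(2, 1/2) = \pi^2/2$, both computations of $\varphi(1/2)$ should land near $\pi^2/2 - 4 \approx 0.9348$.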

It does not seem that this density can be expressed with the usual analytic functions, but fortunately, we usually need its cumulative distribution $\phi$ between two bounds $a$ and $b$, with $0< a,b\leq 1$, that is, $$\int_a^b dx\int_0^\infty {t e^{-tx} \over 1-e^{-t}}\, dt - {1\over a} + {1\over b} = \int_0^\infty dt \int_a^b (\ldots)\,dx - {1\over a} + {1\over b} = \int_0^\infty {e^{-ta}-e^{-tb}\over 1-e^{-t}}\, dt - {1\over a} + {1\over b}. $$ The change of variables $y = e^{-t}$ leads to $$ \phi(b) - \phi(a) + {1\over a} - {1\over b} = \int_1^0 {y^a - y^b\over 1-y}{(-1)\over y}\, dy = \int_0^1 {y^{a-1} - y^{b-1}\over 1 - y}\,dy. $$ We can change variables again with $z = 1-y$ to get $$ \phi(b) - \phi(a) + {1\over a} - {1\over b} = \int_0^1 {(1-z)^{a-1} - (1-z)^{b-1}\over z}\, dz. $$ Thus, denoting $$f(t) = \int_0^1 {(1-z)^{t-1} - 1\over z}\, dz,$$ we get $$ \phi(b) - \phi(a) = f(a) - f(b) - {1\over a} + {1\over b}. $$ The integral for $f(t)$ can sometimes be computed.

For example, if $a =1/2$ and $b = 1$, we find $$ \phi(1) - \phi(1/2) = 2\ln(2) - 0 - 2 + 1 = 2\ln(2) - 1. $$ This is the same result as discussed in the aforementioned thread.

Another example: if $a = 1/3$ and $b = 1$, an online integral calculator gives $f(1/3)$, which leads to $$ \phi(1) - \phi(1/3) = \bigg( {3\ln 3\over 2} + {\sqrt{3}\pi\over 6} \bigg)- 0 - 3 + 1 = {3\ln 3\over 2} + {\sqrt{3}\pi\over 6} - 2 \approx 0.5548. $$
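Both worked examples can be verified numerically (a sketch). The integrand of $f$ has an integrable singularity at $z=1$ when $t<1$; a midpoint rule never samples $z=1$ exactly, but its accuracy is limited near the singularity, hence the looser tolerance for $t = 1/3$:

```python
import math

# Numerical check of the two worked examples, via a midpoint-rule
# approximation of f(t) = int_0^1 ((1-z)^(t-1) - 1)/z dz.
def f(t, N=2 * 10**6):
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        z = (i + 0.5) * h
        total += ((1.0 - z) ** (t - 1.0) - 1.0) / z
    return total * h

f1 = 0.0  # f(1) = 0: the integrand vanishes identically for t = 1

val_half = f(0.5) - f1 - 2 + 1            # phi(1) - phi(1/2)
val_third = f(1.0 / 3.0) - f1 - 3 + 1     # phi(1) - phi(1/3)
print(val_half, 2 * math.log(2) - 1)
print(val_third, 1.5 * math.log(3) + math.sqrt(3) * math.pi / 6 - 2)
```

The first value should land near $2\ln 2 - 1 \approx 0.3863$ and the second near $0.5548$, matching the closed forms above.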

EDIT: We deduce the nice result of X-Rui in his answer from formula (*) above.

If the real part of $z$ is positive then the digamma function has the following integral representation due to Gauss (see Wikipedia): $$\psi(z) = \int_0^\infty \left(\frac{e^{-t}}{t} - \frac{e^{-zt}}{1-e^{-t}}\right)\,dt.$$ Therefore $$\psi'(z) = \int_0^\infty t\frac{e^{-zt}}{1-e^{-t}}\,dt.$$ Thus, from (*), $$\varphi(x) = \psi'(x) - {1\over x^2}.$$ This gives the desired density.

Regarding the cumulative distribution function $\phi$, we deduce that $$\phi(x) = \psi(x) + {1\over x} + C.$$ The constant $C$ has to be determined with the condition $$\phi(1) = 1 = \psi(1) + {1\over 1} + C = -\gamma + 1 + C,$$ where $\gamma$ is the Euler-Mascheroni constant. It follows that $C = \gamma$, hence $$\phi(x) = \psi(x) + {1\over x} + \gamma.$$ Also, from the known Laurent expansion of the digamma function $\psi$, it follows that $$ \phi(x)=-\sum_{k\geq 1}\zeta(k+1)(-x)^k . $$
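As a check of the last expansion (a sketch), one can compare partial sums of $-\sum_{k\geq 1}\zeta(k+1)(-x)^k$, with the $\zeta$ values themselves computed by truncated Dirichlet series, against the series expression $\sum_m\big({1\over m}-{1\over x+m}\big)$ for $\phi$ at, say, $x=1/2$:

```python
# Check of the expansion phi(x) = -sum_{k>=1} zeta(k+1) (-x)^k against the
# series expression phi(x) = sum_m (1/m - 1/(x+m)), at x = 1/2.
def zeta(s):
    # truncated Dirichlet series; s = 2 converges slowly and needs more terms
    M = 10**6 if s == 2 else 20000
    return sum(m ** -s for m in range(1, M + 1))

x = 0.5
power_series = -sum(zeta(k + 1) * (-x) ** k for k in range(1, 60))
direct = sum(1 / m - 1 / (x + m) for m in range(1, 10**6 + 1))
print(power_series, direct)
```

For $|x| < 1$ the alternating series converges geometrically, so 60 terms are more than enough here.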

MikeTeX

If $\{a_m\}$ is a sequence of real numbers such that $\sum_{m \leq x} a_m = O(x)$ and $\sum_{m \leq x} a_m/m = r\log x + c + o(1)$ as $x \to \infty$, for some constants $r$ and $c$, then $$ \frac{1}{n}\sum_{m \leq n} a_m\left\{\frac{n}{m}\right\} \to r(1-\gamma) $$ as $n \to \infty$. EDIT: $\gamma$ is Euler's constant.

Example. When $a_m = 1$ for all $m$, the above hypotheses are met using $r = 1$ and $c = \gamma$ (Euler's constant), so $$ \frac{1}{n}\sum_{m \leq n} \left\{\frac{n}{m}\right\} \to 1-\gamma $$ as $n \to \infty$.
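The example is easy to test directly (a sketch; for $n = 10^6$ the average should already be within about $10^{-3}$ of the limit):

```python
# Direct check of the example: the average of the fractional parts {n/k},
# k = 1..n, should approach 1 - gamma (Euler's constant, ~0.5772).
GAMMA = 0.5772156649015329
n = 10**6
avg = sum((n / k) % 1 for k in range(1, n + 1)) / n
print(avg, 1 - GAMMA)
```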

KCd
  • Is $\gamma$ the Euler constant in the first statement? (You only define it in the example.) – Gary Dec 22 '23 at 04:28
  • @Gary yes. I edited the answer. – KCd Dec 22 '23 at 05:44
  • That looks very interesting, though not directly related to the question. But I don't know the theorem at the beginning of your answer. Can you indicate some source or some hint about the proof? – MikeTeX Dec 22 '23 at 06:35