
This is also related to an older thread on MSE ("what is the half derivative of zeta at zero?").

One possible step in the problem of that thread was to evaluate the series

$$s_a=\eta^{(0.5)}(0) = î \left(\sqrt{\ln(1)}-\sqrt{\ln(2)}+\sqrt{\ln(3)}-\sqrt{\ln(4)}+ \cdots \right) \underset{\mathfrak E}{\approx} - 0.347006596200 \, î $$ as the value (regularized by Euler summation $\mathfrak E$) of the half-derivative of the alternating zeta (or "Dirichlet eta") function. Note that the additional imaginary factor $î$ appears because the values under the square roots were originally negative. In the following I'll leave this factor out for convenience.
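
As a numerical cross-check one can try Pari/GP's built-in `sumalt` (which assigns values to many divergent alternating series); a minimal sketch, under the assumption that its regularization agrees with the Euler summation $\mathfrak E$ used above for this slowly growing alternating series:

```
\\ regularized alternating sum  sqrt(log(1)) - sqrt(log(2)) + sqrt(log(3)) - ...
\\ the n=1 term is 0, so the alternating summation can start at n=2
default(realprecision, 50);
sa = sumalt(n=2, (-1)^(n+1) * sqrt(log(n)))
\\ if the two regularizations agree, this should be close to -0.347006596200
```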

Q: But how can I do the non-alternating series $$ s_p=\zeta^{(0.5)}(0)= î \left( \sqrt{\ln(1)}+\sqrt{\ln(2)}+\sqrt{\ln(3)}+\sqrt{\ln(4)}+ \cdots \right) \underset{?}{\approx}\ ??? $$ $\qquad \qquad$ So far I don't see any possibility, for instance in the sense of L. Euler's famous $\eta() \to \zeta()$ conversion.
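
For reference, the conversion meant here is the standard relation
$$\eta(s) = \left(1 - 2^{1-s}\right)\zeta(s),$$
whose integer-order derivatives expand by the ordinary Leibniz rule into finitely many terms; a half-derivative of this product would instead need a fractional (infinite) Leibniz-type expansion involving $\zeta$ and all its derivatives, so it gives no simple one-term conversion for $\zeta^{(0.5)}(0)$.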


[update]: What I have additionally done is to apply a procedure that finds a formal power series for the problem of finite sums of consecutive terms.

The procedure approximates the Neumann series of the Carleman matrix for the function $f(x) = \sqrt{\ln(\exp(x^2)+1)}$, which is the transfer function that produces the term of the series at index $n+1$ from the term at index $n$.

I'll explain this now in more detail:

For the procedure, which is also known under the name "indefinite summation", we first need a function that generates the terms of the series to be summed as iterates of itself. What function transfers $\sqrt{\ln(x)}$ to $\sqrt{\ln(x+1)}$? It is the $f(x)$ given above: for instance, for $z= \sqrt{\ln(5)}$ it gives $f( z) = \sqrt{\ln(6)}$, then $f°^2( z) = \sqrt{\ln(7)}$, and so on.
So we can formally write the series $$ s_p =î \cdot ( z + f(z) + f°^2(z) + f°^3(z) + ... ) \qquad \qquad \text{ with } z=\sqrt{\ln(1)}=0 $$ An approach which I've used a couple of times is to implement $f()$ by a Carleman matrix based on $f()$'s formal power series. That power series is $$ \mathcal {\text{Taylor}} (f(x)) \approx \small 0.83255461 + 0.30028060 x^2 + 0.020918484 x^4 - 0.0075447481 x^6 + ... $$
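
Both the transfer property and these coefficients can be checked directly in Pari/GP; a small sketch (the precision and the series order are arbitrary choices):

```
default(realprecision, 38);
f(x) = sqrt(log(exp(x^2) + 1));

\\ transfer property: f maps sqrt(log(n)) to sqrt(log(n+1))
f(sqrt(log(5))) - sqrt(log(6))       \\ should be ~ 0
f(f(sqrt(log(5)))) - sqrt(log(7))    \\ two iterations, should be ~ 0

\\ formal power series of f at x=0 (only even powers occur);
\\ compare with the coefficients 0.83255461, 0.30028060, 0.020918484, ... quoted above
fser = sqrt(log(exp(x^2 + O(x^10)) + 1))
```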

Let now $ C = \text{carleman}(f) $ be the Carleman matrix for $f(x)$. Then the dot product of the vector $V(x)=[1,x,x^2,x^3,x^4,...]$ with $C$ gives $V(x) \cdot C= V(f(x))$ by definition, and if we look at the second column of $C$ alone we have $V(x) \cdot C_{0..\infty,1} = f(x)$, at least as a formal power series; if it converges for small $x$ we can also evaluate it numerically.
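
A minimal sketch of such a truncated Carleman matrix in Pari/GP, using the convention above that column $j$ holds the power-series coefficients of $f(x)^j$ (the truncation size N and the variable names are arbitrary choices):

```
N = 24;                                        \\ truncation size (arbitrary choice)
fser = sqrt(log(exp(x^2 + O(x^(2*N))) + 1));   \\ power series of f at 0

\\ C[i,j] = coefficient of x^(i-1) in f(x)^(j-1)  (1-based Pari indices;
\\ polcoef is called polcoeff in older Pari versions)
C = matrix(N, N, i, j, polcoef(fser^(j-1), i-1));

\\ check V(z).C ~ V(f(z)) on the second column, for a small argument
z  = sqrt(log(2));
Vz = vector(N, i, z^(i-1));
Vz * C[,2] - sqrt(log(3))                      \\ truncation error, should be small
```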

Now the idea of the Neumann series comes into play. As $V(x) \cdot C_{0..\infty,1} = f(x)$, we should formally also have $V(x) \cdot C^2_{0..\infty,1} = f°^2(x)$, $V(x) \cdot C^3_{0..\infty,1} = f°^3(x)$ and so on, so we make the ansatz $$ V(z) \cdot ( C^0 + C^1 + C^2 + C^3 + ... )_{0..\infty,1} \overset?= z + f°(z) + f°^2(z) + f°^3(z) + ... $$ The key observation is that the parenthesis on the lhs contains the geometric series of the matrix $C$ (such a construct is also called a "Neumann series"). Of course this is no proper sum; but in some examples with alternating geometric series I could get meaningful approximations by using empirical approximations to $B = (I+C)^{-1}$ and then approximating, for instance, $V(z) \underset{\mathfrak E}\cdot B_{0..\infty,1} \approx z - f(z) + f°^2(z) - ... + ...$, where $\mathfrak E$ means Euler summation in the dot product if needed.
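
Continuing the sketch, the alternating variant can be tried numerically: invert the truncated $(I+C)$ and read off the entry belonging to $V(z)$ with $z=\sqrt{\ln(1)}=0$, i.e. the first row. How well a finite truncation reproduces the regularized value is exactly the empirical question; this is only a sketch:

```
\\ alternating Neumann series: B ~ (I + C)^(-1) = I - C + C^2 - C^3 + ...
B = (matid(N) + C)^(-1);

\\ with z = sqrt(log(1)) = 0 only the first row of B matters, and its second
\\ entry should approximate 0 - sqrt(log(2)) + sqrt(log(3)) - ...
B[1,2]    \\ compare with the value -0.347006596200 quoted at the top
```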

This is not so simple and straightforward for the non-alternating geometric series. Since the Carleman matrix of any function has the eigenvalue $1$ (at least once) by construction, we would run into $\frac 10$ and cannot immediately try the approximation with finitely truncated matrices $A = ( I - C)^{-1}$. One workaround, which sometimes gives meaningful results, is to omit the empty first column of $(I - C)$; this allows an inversion, but the first row of the result is then systematically missing/unknown.

Remark: a case where this workaround is successful is the problem of sums of like powers, where the Carleman matrix is the upper-triangular Pascal matrix $P$. The removal of the empty first column in $ Q = (I - P)_{0..(n-1),1..n} $ allows inversion and provides the matrix $Q^{-1}$ of coefficients with which Faulhaber solved the summing-of-like-powers problem. The same ansatz was also tried, as far as I know, by two authors for extending tetration to real iteration heights. I've similarly attempted some other series problems with such "iteration series" of iterated functions $f°^h(x)$ and their corresponding Neumann series, with meaningful approximations.
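
This Pascal-matrix case is small enough to reproduce directly and serves as a sanity check for the workaround (a sketch; the truncation size is an arbitrary choice):

```
m = 8;    \\ small truncation, enough to see the pattern
\\ Carleman matrix of f(x) = x+1: column j holds the coefficients of (x+1)^(j-1),
\\ i.e. the upper-triangular Pascal matrix
P = matrix(m, m, i, j, binomial(j-1, i-1));

\\ drop the empty first column (and the last row) of I - P, then invert
AP = ((matid(m) - P)[1..m-1, 2..m])^(-1);

\\ second column: a_1(x) = x/2 - x^2/2 + const, the Faulhaber formula with
\\ a_1(n1) - a_1(n2) = n1 + (n1+1) + ... + (n2-1);
\\ third column: a_2(x) = -x/6 + x^2/2 - x^3/3 + const, giving the sums of squares
AP[,2]
AP[,3]
```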

So I've now tested $_n Q = (I - C)_{0..n-1,1..n}$ with increasing $n$, computed $\,_nA^* = \,_n Q^{-1} $, and made $\,_nA$ by inserting the unknown first row into $\,_nA^*$. The top-left of the heuristically approximated matrix $\,_nA$ is $$\small \,_nA_{0..16,0..1}=\begin{bmatrix} ?? & ?? \\ 0 & 1 \\ -1 & -0.662055270527 \\ 0 & 0 \\ -\frac1{2!} & -0.561866397242 \\ 0 & 0 \\ -\frac1{3!} & -0.249581408503 \\ 0 & 0 \\ -\frac1{4!} & -0.0755503260124 \\ 0 & 0 \\ -\frac1{5!} & -0.0172091887343 \\ 0 & 0 \\ -\frac1{6!} & -0.00315760368955 \\ 0 & 0 \\ -\frac1{7!} & -0.000499047959470 \\ 0 & 0 \\ -\frac1{8!} & -0.0000687607442729 \\ ... & ... \end{bmatrix} $$ Using the coefficients of the first column for a power series in $x$, building the function $a_0(x)$, we get $$ a_0( \sqrt{\ln(n_1)})-a_0( \sqrt{\ln(n_2)}) = n_2 - n_1 $$ which indicates the sum of the $\left(f°^k(\sqrt{\ln(n_1)})\right)^0$, i.e. it is just counting the terms.

Using the coefficients $a_{k,1}$ of the second column for a power series in $x$, building the function $a_1(x)$, we get the finite sum of the function at consecutive arguments: $$ a_1( \sqrt{\ln(n_1)})-a_1( \sqrt{\ln(n_2)}) = \sum_{k=1}^\infty (\sqrt{\ln(n_1)}^k - \sqrt{\ln(n_2)}^k )a_{k,1} = \sum_{k=n_1}^{n_2-1} \sqrt{\ln (k)} $$ which indicates the sum of the $f°^k(\sqrt{\ln(n_1)})$, i.e. the desired finite partial sums of $s_p$, to a very good (and seemingly arbitrarily accurate) numerical approximation.
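
Putting the pieces together, the whole heuristic can be sketched in Pari/GP, reusing the matrix C (and N) from the sketch above; the test indices n1, n2 are arbitrary, and how many digits the truncated inversion actually reproduces has to be judged empirically:

```
\\ same workaround as in the Pascal-matrix example, now for C
Q     = matid(N) - C;
Astar = (Q[1..N-1, 2..N])^(-1);    \\ = nA with its unknown first row missing

\\ power series built from the first two columns (constant terms are unknown,
\\ but they cancel in the differences below)
a0(x) = sum(k=1, N-1, Astar[k,1] * x^k);
a1(x) = sum(k=1, N-1, Astar[k,2] * x^k);

n1 = 2; n2 = 6;
a0(sqrt(log(n1))) - a0(sqrt(log(n2)))    \\ should approximate n2 - n1 = 4
a1(sqrt(log(n1))) - a1(sqrt(log(n2)))    \\ should approximate the finite sum below
sum(k=n1, n2-1, sqrt(log(k)))
```
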
------ End of lengthy explanation

Q: However, I'm missing the first coefficient for the second power series $a_1(x)$. That constant should just contain the representative value for the infinite sum of $\sqrt{\ln(k)} $ with $k=1 ... \infty$.

  • Very interesting sum, but I can only half understand what is going on. – Simply Beautiful Art Jun 16 '16 at 15:50
  • What do you need? What a "half-derivative" is? – Gottfried Helms Jun 16 '16 at 19:41
  • No, I understand what fractional derivatives are, but I'm not familiar with anything starting from the UPDATE section, which is half the post. – Simply Beautiful Art Jun 17 '16 at 11:42
  • I see... Well, this is a method which needs some more background, I'm afraid. The idea is to derive a solution by converting the problem of a power series into one containing iterates of a function, which I've applied several times to similar problems. A related link is http://go.helms-net.de/math/divers/BernoulliForLogSums.pdf where I try to explain what I'm also doing here. Perhaps a more basic introduction is this, which deals with the sums-of-like-powers problem with this method: http://go.helms-net.de/math/binomial_new/04_3_SummingOfLikePowers.pdf , the $\eta()$-function. – Gottfried Helms Jun 17 '16 at 12:23
  • @simpleArt : This is not a completely "private" method; one can find applications of this under the term "indefinite summation", and there are questions & answers here on MSE as well as on MO which cover this subject. Unfortunately I've no good idea for a good historical text; maybe the Euler-Maclaurin formula is a usable example, but I've nothing at the top of my head at the moment... – Gottfried Helms Jun 17 '16 at 12:25
  • @SimpleArt: actually no time due to courses. Let's see next week. – Gottfried Helms Jan 19 '17 at 17:55
  • :-P ok. :D can't wait to take the classes you're probably taking. At least the math ones. – Simply Beautiful Art Jan 19 '17 at 17:57
  • @SimpleArt: :-) (It's introductory empirical statistics for social workers which I have to give; I could only make math a hobby... ) – Gottfried Helms Jan 19 '17 at 18:06

2 Answers


One form of Ramanujan summation is as follows:

$$\sum_{n\ge1}^\Re f(n)=\lim_{N\to\infty}\sum_{n=1}^Nf(n)-\int_1^Nf(t)\ dt$$

For our case,

$$\sum_{n\ge1}^\Re\sqrt{\ln(n)}=\lim_{N\to\infty}\sum_{n=1}^N\sqrt{\ln(n)}-\int_1^N\sqrt{\ln(t)}\ dt\approx1.6$$
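
A direct Pari/GP sketch of the truncated expression (the helper R and the truncation values are arbitrary choices; as the comments below indicate, the value settles only slowly):

```
default(realprecision, 57);
R(N) = sum(k=1, N, sqrt(log(k))) - intnum(t=1, N, sqrt(log(t)));
[R(10^3), R(10^4), R(10^5)]
```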

  • Thank you for the input. I'll look at it next week. ( Hmm, doing this in Pari/GP suggests a result $\gt 1.6$: N=1000000;sum(k=1,N,sqrt(log(k)))-intnum(x=1,N,sqrt(log(x))) (with 200 digits internal precision) gives 1.65764..., and going from $N=10^2$ to $N=10^6$ adds around 0.2 for each step in the exponent, with slowly decreasing tendency) – Gottfried Helms Jan 19 '17 at 18:03
  • @GottfriedHelms with my low tech, I could only manage like $n=60$k or so... – Simply Beautiful Art Jan 19 '17 at 18:08
  • I think it grows around the rate of about $\sqrt{\ln(n)}$, so I might fix this to the constant term in the Euler-Maclaurin expansion. – Simply Beautiful Art Jan 19 '17 at 18:13
  • Sorry, forgot completely this question/your answer after the courses... I'll look at your answer again from tomorrow (It's night already here, my pillow is waiting...) – Gottfried Helms Mar 13 '17 at 00:01
  • XD Same, forgetting and looking at my bed. :D – Simply Beautiful Art Mar 13 '17 at 00:02
  • Also totally random, but you might like this PDF I just finished making. – Simply Beautiful Art Mar 13 '17 at 00:03
  • Hmm, after many experiments with this I still don't see any path to more conclusive results than in my first comment ... – Gottfried Helms Mar 13 '17 at 13:42
  • :-( That sucks. Well, I hope we get results eventually. – Simply Beautiful Art Mar 13 '17 at 13:51
  • Late comment: I tried your sum/integral formula again, and only after changing the upper limit of the integral to $N+1/2$ instead of $N$ do I seem to get convergence; tried it in steps of $2^j$ : N=100*2^10; su=sum(k=1,N,sqrt(log(k)))-intnum(t=1,N+1/2,sqrt(log(t))) $\to$ su = -0.200824720791, with seeming convergence as the exponent in 100*2^j is increased... But I don't have any idea what to make of this... – Gottfried Helms Feb 14 '21 at 11:33
  • Another late comment: yesterday I looked again at your linked pdf. I got the impression you might like this article of mine about summation of *really* diverging series; the short form is http://go.helms-net.de/math/tetdocs/10_4_Powertower_article.pdf and a more explanatory form is http://go.helms-net.de/math/tetdocs/10_4_Powertower.pdf (a bit more of this on my tetration pages http://go.helms-net.de/math/tetdocs). The articles are very early discussions of mine where I often lacked even correct naming of things, but you might like the explorative style. Perhaps I'll improve them some day... – Gottfried Helms Feb 14 '21 at 13:24
  • Thanks for the interesting reads, will check them out. – Simply Beautiful Art Feb 14 '21 at 14:17
  • Looking back on it, it's probably because the summation and integral line up best if we use $f(n)\approx\int_{n-1/2}^{n+1/2}f(x)~\mathrm dx$, which explains the convergence. – Simply Beautiful Art Jun 13 '21 at 03:52

There is no problem with this series; it has a pole at $z=1$.