
Let $\mathrm{Exp}_t^{[y]} (x)$ denote the $y$th iteration of the exponential function with base $t$: $t^x.$

For example $\mathrm{Exp}_t^{[1]} (x) = t^x$.
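For integer $y$ this is just repeated application of $x \mapsto t^x$; a minimal Python sketch of that integer case (the function name is mine, not from the post):

```python
def exp_iter(t: float, x: float, y: int) -> float:
    """Integer case of Exp_t^[y](x): apply x -> t**x exactly y times."""
    for _ in range(y):
        x = t ** x
    return x
```

So `exp_iter(t, x, 1)` is $t^x$ and `exp_iter(t, x, 2)` is $t^{t^x}$; the question below is about the fractional case $y = 1/2$, which this sketch does not cover.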

Let $\sim\sim$ denote best fit.

Now, as $x$ goes to positive infinity and a pair $(a,b)$ with $e<a<b$ is given, I wonder how to find the best-fit base value $C$ such that

$$\sqrt { \mathrm{Exp}_a^{[1/2]} (x) \cdot \mathrm{Exp}_b^{[1/2]} (x) } \sim\sim \mathrm{Exp}_C^{[1/2]} (x). $$

Let us then define $C = f(a,b)$, assuming $a< f(a,b) < \sqrt{ab} < b $.

How to improve those bounds?

How to find the value $C$?

Below: Edit

There are many solutions to tetration, but I am talking here about solutions where $x>1$ and $b > a > e$ imply $\mathrm{Exp}_b^{[1/2]}(x) > \mathrm{Exp}_a^{[1/2]} (x)$.

Notice that in that case $\mathrm{Exp}_t^{[1/2]} (x) $ is asymptotic to $2 \sinh_t^{[1/2]} (x) $, where

$$ 2\sinh_t (x) = t^x - t^{-x} $$

and $^{[1/2]}$ means half-iterate, as usual.

Notice that $2\sinh_t$ has a hyperbolic fixed point at $x=0$. So by using the Koenigs function we get a solution from that fixed point.
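Concretely, the multiplier of $2\sinh_t$ at the fixed point $0$ is $\lambda = 2\ln t > 2$ for $t > e$, and the Koenigs half-iterate can be approximated by pulling $x$ back toward the linearization region, scaling by $\sqrt{\lambda}$, and pushing forward again. A rough numerical sketch in Python (function names are mine; $n$ truncates the Koenigs limit, and `math.sinh`/`math.asinh` keep the iteration accurate near $0$):

```python
import math

def two_sinh(t, x):
    """2sinh_t(x) = t^x - t^(-x) = 2*sinh(x*ln(t))."""
    return 2.0 * math.sinh(x * math.log(t))

def two_sinh_inv(t, y):
    """Inverse of 2sinh_t on the reals."""
    return math.asinh(0.5 * y) / math.log(t)

def half_iterate(t, x, n=60):
    """Approximate the Koenigs half-iterate 2sinh_t^[1/2](x):
    pull x back n times toward the hyperbolic fixed point 0 (where
    the multiplier is lambda = 2*ln(t)), scale the Koenigs coordinate
    by sqrt(lambda), then push forward n times."""
    lam = 2.0 * math.log(t)
    z = x
    for _ in range(n):
        z = two_sinh_inv(t, z)
    z *= math.sqrt(lam)
    for _ in range(n):
        z = two_sinh(t, z)
    return z
```

As a sanity check, composing the half-iterate with itself should recover one full step: `half_iterate(t, half_iterate(t, x))` should be close to `two_sinh(t, x)`.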

Also notice this implies the entire post can be reformulated by replacing every $\mathrm{Exp}_t$ with $2\sinh_t$.


So we get the possibly easier:

Let $2\sinh_t^{[y]} (x) $ denote the $y$th iteration of the two-times-$\sinh$ function with base $t$: $t^x - t^{-x}.$

For example $2\sinh_t^{[1]} (x) = t^x - t^{-x} $.

Now, as $x$ goes to positive infinity and a pair $(a,b)$ with $e<a<b$ is given, I wonder how to find the best-fit base value $C$ such that

$$\sqrt { 2\sinh_a^{[1/2]} (x) \cdot 2\sinh_b^{[1/2]} (x) } \sim\sim 2\sinh_C^{[1/2]} (x)~. $$
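Under the Koenigs solution from the fixed point at $0$, this can at least be probed numerically: compute the geometric mean of the two half-iterates at some fixed $x$ and bisect for the base $C$ whose half-iterate matches it. A self-contained Python sketch (all names are mine; it estimates $C$ at one finite $x$ only, assuming monotonicity in the base there, and is not the true asymptotic best fit):

```python
import math

def two_sinh(t, x):
    """2sinh_t(x) = t^x - t^(-x) = 2*sinh(x*ln(t))."""
    return 2.0 * math.sinh(x * math.log(t))

def two_sinh_inv(t, y):
    """Inverse of 2sinh_t on the reals."""
    return math.asinh(0.5 * y) / math.log(t)

def half_iterate(t, x, n=60):
    """Koenigs half-iterate of 2sinh_t: pull back n times toward the
    fixed point 0, scale by sqrt(lambda) with lambda = 2*ln(t),
    push forward n times."""
    lam = 2.0 * math.log(t)
    z = x
    for _ in range(n):
        z = two_sinh_inv(t, z)
    z *= math.sqrt(lam)
    for _ in range(n):
        z = two_sinh(t, z)
    return z

def best_fit_base(a, b, x):
    """Bisect for C such that 2sinh_C^[1/2](x) equals the geometric
    mean of the base-a and base-b half-iterates, at this one x."""
    target = math.sqrt(half_iterate(a, x) * half_iterate(b, x))
    lo, hi = a, b
    for _ in range(80):
        c = 0.5 * (lo + hi)
        if half_iterate(c, x) < target:
            lo = c
        else:
            hi = c
    return 0.5 * (lo + hi)
```

For example, `best_fit_base(3.0, 5.0, 4.0)` returns some $C$ strictly between the two bases; tracking it against $\sqrt{ab}$ as $x$ grows is one way to test the conjectured bounds.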

This edit might be helpful to solve the problem, to clarify the (goal of the) question, to avoid confusion, and to address Sheldon's comments.

See Koenigs function, tetration, and the strongly related

End of Edit

mick
  • see http://math.stackexchange.com/questions/208996/half-iterate-of-x2c the same thing happens with exponentials, where the half iterates are not always ordered. There are cases with $a<b$ where $\exp_a^{[0.5]}>\exp_b^{[0.5]}$ – Sheldon L Oct 20 '16 at 18:11
  • Yes, I was aware of Sheldon's result. I want to point out that on average intuition is correct; USUALLY $\exp_b^{[0.5]} > \exp_a^{[0.5]}$ if $ a < b$. This justifies the idea of "best fit" and therefore the entire OP. I'm not saying Sheldon denied the correctness of the OP, but some people might have gotten that impression. On the other hand, Sheldon's comment is to be taken seriously; it might be the warning for a false (dis)proof / computation. – mick Oct 20 '16 at 19:13
  • About Sheldon's post: I wonder if an order can be achieved by playing around with different solutions to tetration? Is Tommy's 2sinh method ordered? Is the base change ordered? (I think so!?) – mick Oct 20 '16 at 19:17
  • Consider base 2 and base $e$. The first counterexample occurs at $x_1 \approx 4.78924742892085\mathrm{E}72$, where $\exp_e^{0.5}(x_1)=\exp_2^{0.5}(x_1)$. The counterexample marks the beginning of a region where $\exp_2^{0.5}(x)>\exp_e^{0.5}(x)$. The first region ends at approximately $\exp(9972.06)$. The next region begins at $x_2 \approx \exp(2.075305\mathrm{E}20)$ and ends at $\exp_e^{0.5}(x_2)$; the next region begins at $x_3=\exp(x_2)$. This non-ordering is (conjectured to be) the case for all analytic exp half iterates; Tommy's 2sinh is nowhere analytic (conjectured) but is ordered. – Sheldon L Oct 20 '16 at 20:50
  • May I ask how what is called the "$y$th iteration" is defined? More precisely, what is the meaning of the word "iteration" in this context? A web resource maybe? – Jean Marie Oct 21 '16 at 07:28
  • I disagreed with the removal of the tag dynamical systems. As a consequence JeanMarie is confused. As for JeanMarie: see the wiki on tetration. – mick Oct 21 '16 at 18:46
  • Is the basechange ordered ? Sheldon ? – mick Oct 21 '16 at 18:59
  • The base change solution generates tetration for larger bases from base $\exp(1/e)$, and is ordered and (conjectured) $C_\infty$ nowhere analytic. – Sheldon L Oct 22 '16 at 14:11
  • My prediction is that $t^x - t^{-x}$ for different bases, with $t=a,b$, does not resolve the ordering problem, and you wind up with the same 50% duty cycle as with $\exp_{a,b}^{0.5}$. I haven't computed counterexamples, but it wouldn't be difficult. I think only if you use $\exp(x)-\exp(-x)$ for all bases $a,b$, even if $a,b \ne e$, then in a way analogous to the basechange, you can generate half iterates for bases other than $e$ that would be "ordered" as desired by the op. These 2sinh half iterates would be conjectured $C_\infty$ and nowhere analytic except for base $e$. – Sheldon L Oct 27 '16 at 11:04
  • Your prediction must be false, Sheldon. Notice that if it is ordered in the interval $[0,t]$ for any $t>0$, then by induction it is ordered in $[t,\infty]$. Example: $b>e$; if ordered in $[0,1]$ (so $2\sinh^{[k]} \le 2\sinh_b^{[k]}$ on $[0,1]$), then ordered in $[0,2\sinh(1)]$, since $2\sinh_b(2\sinh_b^{[k]}([0,1])) > 2\sinh(2\sinh^{[k]}([0,1]))$. And then induction. Besides, Tommy1729 agrees with me. – mick Nov 03 '16 at 02:36
  • Sounds like a great mathstack question! There are cases for $a<b$ where $\exp_a^{0.5}(x)>\exp_b^{0.5}(x)$ as $x$ grows arbitrarily large. Can we show that this also occurs, or prove that it does not occur, for $a<b$, $\text{2sinh}_a^{0.5}(x)>\text{2sinh}_b^{0.5}(x)$ as $x$ gets arbitrarily large? We define $\text{2sinh}_t(x)=t^x-t^{-x}$ and we generate the half iterates via the formal half iterate at the fixed point of zero. – Sheldon L Nov 03 '16 at 13:30
  • Sheldon, I'm confused. You say the 2sinh method of Tommy is ordered, and proven so if I understand correctly. But you doubt 2sinh is ordered? I thought the argument for the ordered 2sinh method came from assuming ordered 2sinh. But apparently not? It is easy to show the 2sinh method is ordered if 2sinh is, but apparently you went another way of thinking to claim the 2sinh method is ordered. I'm fascinated and surprised by that! Please clarify, because I am confused. Thank you. – mick Nov 08 '16 at 01:16

2 Answers


Consider the function $$g(z) = \text{slog}_e(\text{sexp}_2(z+0.5))-\text{slog}_e(\text{sexp}_2(z))-0.5$$

If $g(z)=0\;$ then $\;\exp_e^{0.5}(\text{sexp}_2(z))=\exp_2^{0.5}(\text{sexp}_2(z))$, since $\;\exp_2^{0.5}(\text{sexp}_2(z))=\text{sexp}_2(z+0.5)\;$ always holds; so we are comparing the base-$e$ and base-2 half iterates at the point $\text{sexp}_2(z)$.

$g(z)$ applies for base 2 and base $e$, but any bases can work. The op says "USUALLY $\exp_e^{0.5}>\exp_2^{0.5}$", which would imply $g(z)<0$, but as $z$ gets arbitrarily large, $g(z)$ spends half of its time positive and half of its time negative. If $z$ is large enough, we can easily show that $g(z+1) \approx g(z)$ and $g(z+0.5) \approx -g(z)$, where the approximation gets arbitrarily good as $z$ increases.

First we show that, if $z$ is large enough, $\text{slog}_e(\text{sexp}_2(z+1)) \approx \text{slog}_e(\text{sexp}_2(z))+1$.

Step 1: $$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e(2^{\text{sexp}_2(z-1)})$$ $$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e(\ln(2^{\text{sexp}_2(z-1)}))+1$$ $$\text{slog}_e(\text{sexp}_2(z)) = \text{slog}_e(\ln(2) \cdot \text{sexp}_2(z-1))+1$$

Similarly we can write an equation for $\text{slog}_e(\text{sexp}_2(z+1))$ in terms of $\text{sexp}_2(z-1)$: $$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e(\ln(2) \cdot \text{sexp}_2(z))+1$$ $$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e(\ln(2) \cdot 2^{\text{sexp}_2(z-1)})+1$$ $$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e(\ln(2) \cdot \text{sexp}_2(z-1)+\ln(\ln(2)))+2$$

If $z$ is large enough, then $\text{sexp}_2(z-1)$ is large enough to make the $\ln(\ln(2))$ term completely insignificant: $$\text{slog}_e(\text{sexp}_2(z+1)) = \text{slog}_e(\ln(2) \cdot \text{sexp}_2(z-1))+2+O\!\left(\frac{1}{\text{sexp}_2(z-1)}\right) = \text{slog}_e(\text{sexp}_2(z))+1+O\!\left(\frac{1}{\text{sexp}_2(z-1)}\right)$$

With a little bit of algebra, $g(z+1) = g(z)+O\!\left(\frac{1}{\text{sexp}_2(z-1)}\right)$, so $g(z+1)$ approaches $g(z)$ as $z$ increases. With a little more algebra, we can also show that $g(z+0.5)=-g(z) + O\!\left(\frac{1}{\text{sexp}_2(z-1)}\right)$; therefore if $g(z)=0$, then $g(z+0.5)$ also approaches zero as $z$ increases. Therefore, unless $g(z)$ goes to zero for all $z$ as $z$ gets arbitrarily large, $g(z)$ will spend half of its time positive and half of its time negative.

Here are two graphs of $g(z)$, one from $-1$ to $8$, and another showing the asymptotic behavior. The first zero crossing occurs at $x_1 \approx 4.61986470857217$, $\text{sexp}_2(x_1) \approx 4.78924742892085\mathrm{E}72$, followed by $x_2 \approx 4.91660$. Subsequent zero crossings occur at $x \approx 5.41812556847432+0.5n$ for integers $n$.

g(z) from -1 to 8

g(z) from 4.5 to 8

Sheldon L
  • Dear Sheldon, +1, but no accept: you did not answer the question. Sure, the bases intersect infinitely often, but still base 3 is larger than base 2 in the average sense. I think an integral might show it. Anyway, in the ordered case the question is certainly not resolved. But thanks for the details. – mick Oct 21 '16 at 18:55
  • 1
    @mick bonus: define f(x) to be the Op's function: $$f(x)=\sqrt{\exp_e^{0.5}(x)\cdot \exp_2^{0.5}(x)}$$ Now what does this function look like as z gets arbitrarily large?? And what does this graph say about what Op's question? Hint: I think in the limit h always follows the larger of the two functions ... $$h(x) = \text{slog}(f(\text{sexp(x)}))-0.5$$ – Sheldon L Oct 21 '16 at 19:19

Mick, the Op, commented: "Your prediction must be false Sheldon. Notice that if in the interval [0,t] for any t>0 it is ordered then by induction it is ordered in [t,oo]...."

Mick was seeking to change from the half iterates of $\exp_a,\;\exp_b$, which are not ordered as $x$ gets arbitrarily large, to the half iterates of $\text{2sinh}_a,\;\text{2sinh}_b$, which Mick thought would be ordered. That doesn't match my results. Define $S_e$ as the superfunction of 2sinh for base $e$, $\text{2sinh}_e(z)=e^z-e^{-z}$, and define $S_2$ as the superfunction of 2sinh for base 2, $\text{2sinh}_2(z)=2^z-2^{-z}$. These half iterates are generated from the fixed point at zero by Koenigs' method, using the Schröder equation to generate the two analytic superfunctions below:

$$S_e(z) = \text{2sinh}_e^{[z]}\;\;\;S_2(z) = \text{2sinh}_2^{[z]}$$ $$S_e(z+1) = \text{2sinh}_e(S_e(z));\;\;\;S_2(z+1) = \text{2sinh}_2(S_2(z))$$
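Numerically, such a superfunction can be approximated from the Schröder/Koenigs linearization at $0$: start at $\lambda^{z-n}$, deep in the region where $2\sinh_t(w) \approx \lambda w$ with $\lambda = 2\ln t$, and push forward $n$ times. A sketch in Python (function names and the normalization constant are my own choices, not from the answer):

```python
import math

def two_sinh(t, x):
    """2sinh_t(x) = t^x - t^(-x) = 2*sinh(x*ln(t))."""
    return 2.0 * math.sinh(x * math.log(t))

def superfunction(t, z, n=40):
    """Approximate S_t(z) satisfying S_t(z+1) = 2sinh_t(S_t(z)),
    built from the Koenigs coordinate at the fixed point 0 with
    multiplier lambda = 2*ln(t). The normalization (constant 1 in
    the Koenigs coordinate) is an arbitrary choice."""
    lam = 2.0 * math.log(t)
    w = lam ** (z - n)          # deep in the linear region near 0
    for _ in range(n):
        w = two_sinh(t, w)
    return w
```

A quick check of the functional equation: `superfunction(t, z + 1)` should agree with `two_sinh(t, superfunction(t, z))` to high accuracy.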

Exactly analogous to before, consider the function $$g(x) = S^{-1}_2(S_e(x+0.5))-S^{-1}_2(S_e(x))-0.5$$

If $g(x)=0\;$ then $\;\text{2sinh}_e^{0.5}(S_e(x))=\text{2sinh}_2^{0.5}(S_e(x))$, since $\;\text{2sinh}_e^{0.5}(S_e(x))=S_e(x+0.5)\;$ always holds; so we are comparing the half iterate base 2 with the half iterate base $e$.

$g(x)$ applies for base 2 and base $e$, but any bases can work. The op hopes that $\forall x: \; \text{2sinh}_e^{0.5}(x)>\text{2sinh}_2^{0.5}(x)$, which would imply $\forall x:\; g(x)>0$; but computationally, as $x \to \infty$, $g(x)$ spends half of its time positive and half of its time negative. If $x$ is large enough, we can easily show that $g(x+1) \approx g(x)$ and $g(x+0.5) \approx -g(x)$, where the approximation gets arbitrarily good as $x$ increases.

First we show that, if $x$ is large enough, $S^{-1}_2(S_e(x+1)) \approx S^{-1}_2(S_e(x))+1$; then we show that if $x$ is large enough, $g(x+0.5)=-g(x)$. Therefore, unless $g(x)$ goes to $0+\epsilon\;\forall x$ as $x\to \infty$, $g(x)$ will spend half of its time positive and half of its time negative.

One can write a basechange-like equation for $S_2(x)$ as the limit of $\text{2sinh}_2^{[-n]}(S_e(x+n))$, which I conjecture would be the only solution (except for a constant) for which $g(x)$ goes to $0+\epsilon \; \forall x$ as $x\to\infty$. Basechange-type equations converge beautifully at the real axis, but they don't converge in any size radius in the complex plane; so the basechange is conjectured $C_\infty$ nowhere analytic. That is why I expected that $\text{basechange}S_2(x) \ne S_2(x)$, since we know $S_2(z)$ is analytic. And therefore I wouldn't expect $g(x)$ to go to $0+\epsilon\;\forall x$ as $x \to \infty$. Computations agree. The first "zero" crossing corresponds to $x=8.92760980698518338019\mathrm{E}59$, for which $\text{2sinh}^{0.5}_e(x)=\text{2sinh}^{0.5}_2(x).$ And once again, from the graph below: $$\lim_{x \to \infty} g(x) \ne 0 \;\forall x$$

This is a graph of $g(x)$ with $x$ ranging from 3 to 6, showing the 50% duty cycle as $x$ gets arbitrarily large.

graph of g(x) from 3..6

For the remaining steps, we assume, without being rigorous, that if $x$ is large enough then $\text{2sinh}_e(x) \approx e^x\;$ and likewise $\text{2sinh}_2(x) \approx 2^x$, so that $\epsilon$ is insignificantly small in the equations below, provided $S_e(x-1)$ is large enough. Then, following the same steps as in the earlier answer, one can conclude: $$S^{-1}_2(S_e(x+1)) = S^{-1}_2\left(\frac{S_e(x-1)}{\ln(2)} -\ln(\ln(2)) +\epsilon\right)+2$$

If $x$ is large enough, then $S_e(x-1)$ is large enough to make the $\ln(\ln(2))$ term completely insignificant, and $\epsilon$ is even more insignificant.

Continuing on as before, with a little bit of algebra, $g(x+1) = g(x)+O\!\left(\frac{1}{S_e(x-1)}\right)$, so $g(x+1)$ approaches $g(x)$ as $x$ increases. With a little more algebra, we can also show that $g(x+0.5)=-g(x) + O\!\left(\frac{1}{S_e(x-1)}\right)$; therefore if $g(x)=0$, then $g(x+0.5)$ also approaches zero as $x$ increases.

Sheldon L
  • I'm not convinced. We cannot ignore small epsilons in combination with superexponentials; for instance, $\text{sexp}(x + \epsilon)$ may lie between the square and the cube of $\text{sexp}(x)$. You seem to work only with "growth", forgetting non-exp functions. – mick Nov 08 '16 at 01:40
  • Computations agree. The first "zero" crossing corresponds to x=8.92760980698518338019E59, for which $\text{2sinh}^{0.5}_e(x)=\text{2sinh}^{0.5}_2(x)$, and for numbers a little bigger, $\text{2sinh}^{0.5}_2(x)>\text{2sinh}^{0.5}_e(x);$ by your logic, such a number should not exist. – Sheldon L Nov 08 '16 at 03:08
  • I'm sorry, you are right. And I misunderstood Tommy's comment. – mick Dec 13 '16 at 12:16