23

Richard Laver proved that there is a unique binary operation $*$ on $\{1,\ldots,2^n\}$ which satisfies $$a*1 \equiv a+1 \mod 2^n$$ $$a* (b* c) = (a* b) * (a * c).$$ This is the $n$th Laver table $(A_n,*)$.

There is an algorithm for computing $a * b$ in $A_n$, but in general (and especially for small values of $a$), this requires one to compute much of the rest of $A_n$. What is the largest value of $n$ for which someone can, in a modest amount of time, compute an arbitrary entry in $A_n$? I am able to compute entries in $A_{27}$.
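For concreteness, here is a minimal sketch of the naive approach in Python (an illustration only, not my actual code). It rests on the derived recursion $a * (b+1) = a * (b * 1) = (a * b) * (a * 1) = (a * b) * (a+1)$, together with $2^n * b = b$, and fills the table from the bottom row up; this works because every entry in row $a$ is strictly greater than $a$ when $a < 2^n$. Both time and memory are exponential in $n$, which is exactly the up-front cost at issue.

```python
def laver_table(n):
    """Compute the full n-th Laver table A_n (2^n x 2^n), 1-indexed.

    Uses 2^n * b = b, a * 1 = a + 1, and a * (b+1) = (a * b) * (a + 1),
    filling rows from a = 2^n - 1 down to 1.  Every entry in row a is
    greater than a (for a < 2^n), so each lookup hits a row already built.
    Feasible only for small n: it stores all 4^n entries.
    """
    N = 1 << n
    T = [None] * (N + 1)
    T[N] = list(range(N + 1))               # row 2^n acts as the identity
    for a in range(N - 1, 0, -1):
        row = [0] * (N + 1)
        row[1] = a + 1                      # a * 1 = a + 1
        for b in range(1, N):
            row[b + 1] = T[row[b]][a + 1]   # a * (b+1) = (a*b) * (a+1)
        T[a] = row
    return T

# Example: in A_2, 1 * 2 = 4.
assert laver_table(2)[1][2] == 4
```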

I should note that the map which sends $a$ to $a\ \mathrm{mod}\ 2^m$ defines a homomorphism from $A_n$ to $A_m$ for $m < n$ and hence the problem becomes strictly harder for larger $n$.
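As a quick illustrative check of this projection (using the hypothetical `laver_table` sketch above, and taking $2^m$ as the representative of $0$ mod $2^m$):

```python
def reduce_mod(a, m):
    """Send a to the representative of a mod 2^m lying in {1, ..., 2^m}."""
    return (a - 1) % (1 << m) + 1

def is_projection_hom(n, m):
    """Check that a -> a mod 2^m is a homomorphism from A_n to A_m."""
    Tn, Tm = laver_table(n), laver_table(m)
    N = 1 << n
    return all(
        reduce_mod(Tn[a][b], m) == Tm[reduce_mod(a, m)][reduce_mod(b, m)]
        for a in range(1, N + 1) for b in range(1, N + 1)
    )

assert is_projection_hom(4, 2)   # A_4 -> A_2
```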

Edit: I have actually been able to compute $A_{28}$, not just $A_{27}$.

Justin Moore
  • 3,637
  • Do you have a reference you could give for the algorithm? Thanks, – Apollo Mar 09 '11 at 22:52
  • 2
@Apollo: The one I used to get to $A_{27}$ is based on ideas in Dehornoy's book "Braids and Self-Distributivity", where he discusses the function $\theta$. The basic idea for computing in $A_n$ is given by the following identities: $a * k = (a+1)_{[k]}$ for $a < 2^n$, and $2^n * k = k$. Here $x_{[k]}$ is the $k$th left-associated power of $x$, i.e. $x_{[1]} = x$ and $x_{[k+1]} = x_{[k]} * x$. This allows you to start at the bottom of the table and work up (see the illustrative check after these comments). Implementing this directly allows me to get to $A_{19}$ (there are problems with both time and memory for $A_{20}$). Contact me offlist for code, if you like. – Justin Moore Mar 10 '11 at 00:00
Thanks. I'd be surprised if there were a faster method to compute arbitrary entries, given the (possible) need for very powerful large cardinals (at least more than PRA) to prove facts about the periodicity of the top row... – Apollo Mar 10 '11 at 23:42
The speed of the algorithm is not so bad, actually, when one considers that $A_n$ has $2^n$ rows. I don't expect to get to $A_{1000}$, but I'm curious whether there are tricks that allow for a single computation in, say, $A_{40}$ on a desktop computer with a typical amount of memory and 24 hours. The naive algorithm makes single computations very fast (just a memory lookup), but there is a large up-front price. The revised algorithm makes single computations a little more expensive, but with less paid up front. I'm asking for even more of this sort of trade-off. – Justin Moore Mar 11 '11 at 02:18
  • 1
It might be that you can run the distributive law "backwards". Do you know how to find $a$, $b$, and $c$, given $m$ and $n$, such that $a*b = m$ and $a*c = n$? Also, can you give a reference for Laver's result that left self-distributive $*$ is unique up to isomorphism? Gerhard "Ask Me About System Design" Paseman, 2011.03.11 – Gerhard Paseman Mar 11 '11 at 20:56
  • 1
    @Gerhard: Dehornoy's book is perhaps the best reference for non set theorists. – Andrés E. Caicedo Mar 11 '11 at 21:08
@Gerhard: I misspoke on the unique-up-to-isomorphism statement. You have to add the hypothesis that some non-trivial left-associated power of 1 is 1. Otherwise there is a counterexample: define $*$ on $\{1,2,3\}$ by $1*1 = 2$ and $a*b = 3$ if $a$ and $b$ are not both 1. One can check that this is an LD system. I edited my question appropriately. – Justin Moore Mar 11 '11 at 21:31
Thanks Andres. Having had some exposure to set theory and foundations, I wonder: is there a reference for Laver's result aimed at set theorists? (Being on the structure theory mailing list, it's possible I might be able to handle such a reference.) Gerhard "Also Know Some Universal Algebra" Paseman, 2011.03.11 – Gerhard Paseman Mar 11 '11 at 21:51
  • 2
@Gerhard: Dehornoy's book is good for both set theorists and non-set-theorists. It won an award. Also read Laver's original papers in Advances in Math. (the '90s, I think). They are well written. Mostly they concern the algebra of elementary embeddings, but there is something at the end about the Laver tables. – Justin Moore Mar 12 '11 at 01:25
Thanks Justin. I hope I can get to Dehornoy's book soon, after I finish some projects. Are there other homomorphisms between the $A_i$? Perhaps one or two of those can help in the computations. Gerhard "Ask Me About System Design" Paseman, 2011.03.12 – Gerhard Paseman Mar 13 '11 at 05:23
@Gerhard: Every row $a$ of $A_n$ defines a monomorphism of some $A_p$ into $A_n$: $b \mapsto a * b$. Here $p$ is such that $2^p$ is the period of row $a$. That this is a homomorphism is nothing more than the LD law: $a * (b * c) = (a * b) * (a * c)$. Your idea is played out to a certain degree in Dehornoy's book. (The check after these comments also illustrates this embedding.) – Justin Moore Mar 13 '11 at 19:29
I am also interested out of curiosity in knowing what is the largest Laver table that has ever been computed by hand. For instance, has anyone ever bothered to compute a $512\times 512$ or a $1024\times 1024$ Laver table by hand? It is not too hard to compute such Laver tables if one uses a hexadecimal or similar number system instead of the usual decimal number system (it just takes a little bit of time). – Joseph Van Name Oct 05 '15 at 01:01
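For concreteness, here is a short, purely illustrative check of the identities discussed in the comments above, building on the hypothetical `laver_table` sketch in the question; it is not the code of Moore or Dehornoy. It verifies on small tables that $a * k$ is the $k$th left-associated power of $a+1$ (with $2^n * k = k$), and that $b \mapsto a * b$ maps $A_p$ into $A_n$, where $2^p$ is the period of row $a$.

```python
def left_power(T, x, k):
    """k-th left-associated power: x_[1] = x, x_[k] = x_[k-1] * x."""
    result = x
    for _ in range(k - 1):
        result = T[result][x]
    return result

def row_period(T, a, N):
    """Smallest power of two P with T[a][b] == T[a][(b-1) % P + 1] for all b."""
    P = 1
    while any(T[a][b] != T[a][(b - 1) % P + 1] for b in range(1, N + 1)):
        P *= 2
    return P

n = 4
N = 1 << n
T = laver_table(n)

# a * k is the k-th left-associated power of a + 1, and row 2^n is trivial.
for a in range(1, N):
    for k in range(1, N + 1):
        assert T[a][k] == left_power(T, a + 1, k)
assert all(T[N][b] == b for b in range(1, N + 1))

# b -> a * b maps A_p into A_n, where 2^p is the period of row a; that it
# is a homomorphism is exactly the LD law a*(b*c) = (a*b)*(a*c).
for a in range(1, N + 1):
    P = row_period(T, a, N)
    Tp = laver_table(P.bit_length() - 1)   # A_p, where 2^p = P
    for b in range(1, P + 1):
        for c in range(1, P + 1):
            assert T[a][Tp[b][c]] == T[T[a][b]][T[a][c]]

print("all identities verified for A_%d" % n)
```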

2 Answers

16

On Azimuth, on May 6, 2016, Joseph van Name wrote:

The largest classical Laver table computed is actually $A_{48}$. The 48th table was computed by Dougherty and the algorithm was originally described in Dougherty's paper here. With today's technology I could imagine that one could compute $A_{96}$ if one has access to a sufficiently powerful computer.

One can compute the classical Laver tables up to the 48th table on your computer here at my website.

John Baez
  • 21,373
  • 1
    What does it mean to "compute" $A_{48}$? The naive interpretation would be to write down all the entries explicitly, but $A_{48}$ has $2^{96}$ entries, which I think is about a billion times the world's data storage capacity. Maybe it just means that someone has code that computes the $(p,q)$ entry of $A_{48}$ in a "reasonable" amount of time? An even weaker interpretation would be that we're just trying to compute the period of the first row of $A_{48}$. – Timothy Chow Oct 28 '20 at 04:16
  • Reading the question more carefully, I see that the OP asks for the second interpretation I suggested above (computing the $(p,q)$ entry for given $p$ and $q$). In that case, my followup question is how one is able to determine an upper bound on the time and space needed to compute the worst-case entry (which is what would seem to be needed to substantiate a claim that $A_n$ "has been computed"). – Timothy Chow Oct 28 '20 at 13:19
@TimothyChow Computing the first row in $A_{n}$ is not that hard (though I have no proof that my calculation is correct). In hexadecimal, for $2^{8}<n\leq 3\cdot 2^{8}$, the first row of $A_{n}$ is $(2,2^{n}-2^{100}+C,2^{n}-FFF2,2^{n}-FF10,2^{n}-FF0E,2^{n}-FF04,2^{n}-FF02,2^{n}-100,2^{n}-FE,2^{n}-F4,2^{n}-F2,2^{n}-10,2^{n}-E,2^{n}-4,2^{n}-2,2^{n})$. This pattern will probably continue much further than $3\cdot 2^{8}$. Computing $A_{n}$ seems to be much harder than simply computing the first row. (For tiny $n$, the sketch after these comments prints first rows in hex.) – Joseph Van Name Apr 22 '21 at 17:10
Dougherty's algorithm, which produces the output $p*q$ in $A_{n}$ on input $(p,q)$, provably takes a very small amount of time on any input (this is clear from inspecting the algorithm). In fact, if one has the pre-computed data (in a compressed format), one can run this algorithm by hand to compute $A_{n}$ for $n\leq 3\cdot 2^{4}$. – Joseph Van Name Apr 22 '21 at 17:13
I have made it up to $A(3\cdot 4^4)$ (I am now working on $A(4^5)$), but I have no proof that my calculation is correct, nor can I formulate a succinct, natural-looking conjecture that implies that my calculations of $A(3\cdot 4^4)$ are free from errors (my algorithm consisted of a lot of searching for and correcting non-distributivity, combined with Dougherty's algorithm). – Joseph Van Name Apr 22 '21 at 22:07
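To make the numbers in the comments above concrete, here is a small illustrative sketch (assuming the hypothetical `laver_table` helper from the question): it prints first rows in hexadecimal for tiny tables, and reproduces the back-of-envelope arithmetic behind the objection that $A_{48}$ has $2^{96}$ entries.

```python
# First rows in hex for tiny tables (the hex pattern quoted above concerns
# much larger n, far beyond what this naive table-builder can reach).
for n in range(1, 6):
    T, N = laver_table(n), 1 << n
    hex_row = " ".join(format(T[1][b], "X") for b in range(1, N + 1))
    print(f"A_{n} first row: {hex_row}")

# Naive storage for A_48: 2^96 entries at 48 bits (6 bytes) each.
entries = 2 ** 96
print(f"~{entries * 6:.1e} bytes uncompressed")   # ~4.8e+29 bytes
```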
8

I've been in contact with Patrick Dehornoy and Ales Drapal and both thought that $A_{28}$ is likely the current record for a Laver table computation.

Justin Moore
  • 3,637
  • 1
    Well, in view of John Baez's answer, this answer now seems very outdated. :-) – Todd Trimble May 07 '17 at 01:01
  • More likely, I think, is that people are using different definitions of what it means to "compute" a Laver table. – Timothy Chow Oct 28 '20 at 04:19
  • 1
But storing the entire Laver table on a drive would take too much space. For $A_{7\cdot 4}$, and without compressing the mostly repeating data at all, you would need about half a million terabytes of storage space (see the arithmetic after these comments). Anyone who computes the Laver tables beyond about $A_{2\cdot 7}$ uses some sort of compression, and Dougherty's algorithm is simply a more advanced form of compression. – Joseph Van Name Apr 22 '21 at 17:25
  • 1
    In any case, I agree that there are a few different notions of what it means to compute a Laver table that are practically applicable to Laver table computation. For example, if an algorithm returns the correct output on 99.9999% of all inputs, can we say that we have computed the Laver table (I say no because the 0.0001% of all inputs are the most important ones)? Does the algorithm have to come with a proof of its correctness? What if the proof requires strong large cardinal hypotheses? – Joseph Van Name Apr 22 '21 at 17:57
  • There are also issues about how to represent elements of $A_{n}$ and what it means to compute the fundamental operation of $A_{n}$. For example, in generalizations of Laver tables such as multigenic Laver tables and endomorphic Laver tables, one generally cannot fully compute the output $t(x,y,z)$ or $xy$ but one can still compute arbitrary bits of information of $xy$ and $t(x,y,z)$ (there just may be over a googol bits of information). – Joseph Van Name Apr 22 '21 at 18:03
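As a sanity check of the storage figure quoted above (illustrative arithmetic only; the byte count per entry is a modeling choice):

```python
# Uncompressed size of A_28: 2^56 entries, each 28 bits, rounded to 4 bytes.
entries = (2 ** 28) ** 2
terabytes = entries * 4 / 1e12
print(f"~{terabytes:,.0f} TB")  # ~288,230 TB, i.e. hundreds of thousands of
                                # terabytes, matching the order of magnitude
                                # of the "half a million terabytes" above
```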