
I need to create a list that, for a given integer $d$, holds the elements $1,\ldots,d,d+2,\ldots,2d,2d+3,\ldots,3d,3d+4,\ldots,d^2$, where $\ldots$ denotes incrementing by 1 until the next written value is reached (as usual).

My naive attempt is this:

ClearAll[list];
list[d_] := Module[{tmp = {}, n = 0}, While[(n + 1)*d <= d^2, AppendTo[tmp, Range[n*(d + 1) + 1, (n + 1)*d]]; n++]; Flatten@tmp];

which produces what I want, e.g.

list[3]
(* {1,2,3,5,6,9} *)
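(Editorial note: the sequence is exactly the upper-triangular part, diagonal included, of the $d\times d$ grid $1,\ldots,d^2$ read row by row: row $i$ (0-based) contributes $i\,d+i+1$ through $(i+1)d$. A small Python sketch of that description, for cross-checking; the function name is my own.)

```python
def upper_triangle_list(d):
    # Row i (0-based) of the d x d grid 1..d^2 keeps columns i+1..d,
    # i.e. the values i*d + j for j in i+1..d (upper triangle incl. diagonal).
    return [i * d + j for i in range(d) for j in range(i + 1, d + 1)]

print(upper_triangle_list(3))  # [1, 2, 3, 5, 6, 9]
```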

However, I would be very interested in how (or whether) this can be achieved without constructs like While and For. I guess there is a nice approach for this...

Update

Here are some timings for the current approaches on my machine:


At least for larger d, the second solution by @ciao scales best (although only marginally faster than the one by @gpap). Since ciao's approach is also faster than gpap's for small d, I decided to accept his solution. But all approaches are very nice, so it was a bit difficult to choose the "one" that will be accepted.

Lukas
  • @MartinBüttner I definitely prefer this over my attempt. Didn't think about using this feature of Table :) So, yes, this would be an option (+1) – Lukas Apr 22 '16 at 11:48

6 Answers


Whenever you're building a list with While or For, there's a good chance Table or Array can help. In this case, the solution with Table is quite simple: just use two iterators and make the bounds of the second dependent on the first iterator:

list[d_] := Join @@ Table[i*d + j, {i, 0, d}, {j, i + 1, d}]

The Join @@ is used to flatten the array. Flatten @ would also do, but I prefer the former when I know that I'm only flattening one level.

As you noted, this is quite a bit slower than your own solution. If performance is a concern, you can use this slightly less readable form that combines Table with Range and appears to be about 10 to 20 times faster than your code:

list[d_] := Join @@ Table[Range[i*d + i + 1, (i + 1) d], {i, 0, d}]
Martin Ender
    It is syntax-wise a lot shorter than my approach, but the timings of this approach are pretty bad for large d, e.g. for d=1000 it is 5 times slower than my approach whereas for d=10000 it is already a factor of 8. Do you know /Can you explain why the Table scales so poorly? – Lukas Apr 22 '16 at 12:26
  • @Lukas I assume your Range is a fair bit faster. It should be easily possible to combine Table with Range though, I'll run some timings myself. – Martin Ender Apr 22 '16 at 12:29
  • @Lukas See update. – Martin Ender Apr 22 '16 at 12:36
  • Now it is very competitive with the one by gpap. I have added a plot with timings to my question :) – Lukas Apr 22 '16 at 12:41

This works:

f[d_] := Join @@ MapThread[
   Range, 
   Transpose@Table[{(i - 1) d + i, i d}, {i, d}]
   ];

so

f[3]
(* {1, 2, 3, 5, 6, 9} *)

and it's pretty fast as well:

AbsoluteTiming[f[10000];]
{0.410494, Null}

The same caveat about Join @@ vs Flatten @ as in Martin Büttner's answer applies; thanks to his wise comment, this can be simplified to merely:

f[d_] := Join @@ Range @@@ Table[{(i - 1) d + i, i d}, {i, d}]
gpap
  • This is a really nice one. About 1.5 orders of magnitude faster than my approach. Does this have to do with PackedArray or so? I only have a vague idea about this, but have read about it in various answers here where it makes a huge difference in timings... – Lukas Apr 22 '16 at 12:19
    exactly. I am not clear on when Table with two indices produces a packed array but there is a relevant reference on this website – gpap Apr 22 '16 at 12:32
  • Thanks a lot for this reference! Very useful for this kind of thing – Lukas Apr 22 '16 at 12:42
    Range @ ## & is the same thing as Range (but a bit slower due to the overhead of calling the function). Also you can void all the MapThread and Transpose shenanigans and also get much better readability by using @@@ (Apply with level spec {1}), i.e. Join @@ Range @@@ Table[{(i - 1) d + i, i d}, {i, d}]. Interestingly, the latter optimisation doesn't actually seem to speed up the code any further. – Martin Ender Apr 22 '16 at 12:44
  • lol, you are right - I was too focused on getting the indices right that I missed the mountain for the finger! – gpap Apr 22 '16 at 12:47

different ... but slow :)

f1 = SparseArray[UpperTriangularize[Partition[Range[#^2], #]]]["NonzeroValues"] &

f1 /@ {3, 4}

{{1, 2, 3, 5, 6, 9},
{1, 2, 3, 4, 6, 7, 8, 11, 12, 16}}

Also different but slower:

f2 = Flatten[UpperTriangularize[Partition[Range[#^2], #]] /. 0 -> (## &[])] &

f3 = Sort[SparseArray[{i_, j_} /; i <= j :> (i - 1) # + j, {#, #}]["NonzeroValues"]] &;
f1 /@ {3, 4} == f2 /@ {3, 4} == f3 /@ {3, 4}

True

kglr

Another way without Table:

listN[d_] := Join @@ NestList[d + Rest@# &, Range[d], d - 1]

It does not perform badly, but it is slower than the fastest methods.
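(Editorial note: the mechanism is that NestList starts from Range[d] and, at each of the d - 1 steps, drops the first element and adds d, so step $i$ produces the row $\{i\,d+i+1,\ldots,(i+1)d\}$. A rough Python sketch of the same iteration; the function name is my own.)

```python
def nest_list_rows(d):
    rows = []
    row = list(range(1, d + 1))          # Range[d]
    rows.append(row)
    for _ in range(d - 1):               # NestList[..., d - 1]
        row = [x + d for x in row[1:]]   # d + Rest@# &
        rows.append(row)
    return [x for r in rows for x in r]  # Join @@
```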

SquareOne

This seems pretty quick...

Block[{base = ConstantArray[1, Binomial[# + 1, 2]]},
  base[[Accumulate@Range[#, 2, -1] + 1]] += Range[# - 1];
  Accumulate@base] &

and this seems faster...

Block[{r = Range[#, #^2, #]}, Join @@ Range[Subtract[r, Range[# - 1, 0, -1]], r]] &
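(Editorial note: the second form relies on Range being listable: given a list of starts and a list of ends, it returns one range per pair. A Python sketch of the same index arithmetic, with explicit loops standing in for listability; names are my own.)

```python
def per_row_ranges(d):
    ends = list(range(d, d * d + 1, d))              # Range[d, d^2, d]
    offsets = list(range(d - 1, -1, -1))             # Range[d - 1, 0, -1]
    starts = [e - o for e, o in zip(ends, offsets)]  # Subtract[r, offsets]
    # Join @@ Range[starts, ends]: one inclusive range per (start, end) pair
    return [x for s, e in zip(starts, ends) for x in range(s, e + 1)]
```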
ciao

This is a different approach, replacing Table with ConstantArray and Accumulate:

sieve[d_] := 
Module[{u = ConstantArray[1, d (d + 1)/2], o = Range[2, d], index},
  index = 1 + Accumulate[Reverse[o]];
  u[[index]] = o;
  Accumulate[u]
]
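(Editorial note: the trick is a difference array: within a row consecutive kept values differ by 1, and at the jump after row $i$ they differ by $i+1$ (e.g. 3 → 5 and 6 → 9 for d = 3), so accumulating a vector of 1s with those bumps written in reproduces the list. A Python sketch of the same steps, using 0-based indices; names are my own.)

```python
from itertools import accumulate

def sieve(d):
    # Difference array: mostly 1s, with jump i+1 at each row boundary.
    u = [1] * (d * (d + 1) // 2)
    o = list(range(2, d + 1))              # Range[2, d]
    index = list(accumulate(reversed(o)))  # Accumulate[Reverse[o]]; 0-based here
    for pos, jump in zip(index, o):        # u[[index]] = o
        u[pos] = jump
    return list(accumulate(u))             # Accumulate[u]
```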

However, it does not seem to perform better than the other algorithms in my notebook:

AbsoluteTiming[sieve[10000];]
{0.923706, Null}

The procedural approach using the index table is faster:

proc[d_] := Module[{u = Range[d (d + 1)/2], filling, index},
  filling = Accumulate[ u[[1 ;; d]] ];
  index = 1 + Accumulate[ Reverse[ u[[1 ;; d]] ] ];
  Do[ u[[index[[i]] ;; index[[i + 1]] - 1 ]] += filling[[i]], {i,Length[index] - 1} ];
  u
  ]

AbsoluteTiming[ proc[10000];]
{0.588404, Null}
Vito Vanin