Many experienced users on this site tend to use Map (and its variants, MapAt, MapIndexed, etc.) rather than Table. When applying the same operation to every element of an array, Map does seem more semantically direct. For instance:
test2D = {{a1, a2, a3}, {b1, b2}, {c1, c2, c3, c4}};
MatrixForm[
 Table[g[test2D[[row, col]]] + 1, {row, 1, Length@test2D}, {col, 1, Length@test2D[[row]]}],
 TableAlignments -> Left]
MatrixForm[Map[g[#] + 1 &, test2D, {2}], TableAlignments -> Left]
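(As a quick sanity check, using the definitions above, the two forms produce identical ragged arrays:)

```mathematica
(* the Table and Map versions agree element-for-element *)
Table[g[test2D[[row, col]]] + 1, {row, 1, Length@test2D},
   {col, 1, Length@test2D[[row]]}] ===
 Map[g[#] + 1 &, test2D, {2}]
(* True *)
```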
But when I need to carry out index-specific (i.e., position-specific) operations on higher-dimensional (≥ 2D) arrays, I find Map and its variants more challenging to use than Table.
For instance, suppose I want to raise each element in a 1D array to a power equal to its position. That I can do with either Table or MapIndexed:
test1D = {a1, a2, a3};
Table[test1D[[col]]^col, {col, 1, Length@test1D}]
Flatten[MapIndexed[#1^#2 &, test1D], 1]
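(A detail worth noting, as I understand MapIndexed's behavior: the second argument #2 is the one-element position list {col}, and since Power is Listable, #1^#2 yields a one-element list, which is why the Flatten[..., 1] is needed:)

```mathematica
MapIndexed[#1^#2 &, test1D]
(* {{a1}, {a2^2}, {a3^3}} -- each power is wrapped in a list *)
```

(Writing #1^First[#2] instead would avoid the Flatten.)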
But suppose I want to raise each element in a 2D array to a power equal to the product of its row and column numbers. With Table that's conceptually straightforward:
Table[test2D[[row, col]]^(row*col), {row, 1, Length@test2D}, {col, 1, Length@test2D[[row]]}]
But how would one do that with MapIndexed? It would be nice if it were just something like:
MapIndexed[#1^(#2*#3) &, test2D]
where #2 were the column index and #3 were the row index, but it doesn't work like that.
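(For what it's worth, one can see what MapIndexed actually supplies by mapping at level {2}, where #2 is the full position list {row, col} rather than two separate arguments:)

```mathematica
MapIndexed[#2 &, test2D, {2}]
(* {{{1, 1}, {1, 2}, {1, 3}}, {{2, 1}, {2, 2}},
   {{3, 1}, {3, 2}, {3, 3}, {3, 4}}} *)
```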
Finally, suppose you have more detailed index-specific operations on a 2D array. That seems to be where Table really shines, but I'd be interested to hear of alternatives.

E.g., suppose that, from each successive 4-element block of data in a row, you need to extract the 2nd and 4th elements, but only when all four elements are present. Thus, from a row of {a1, a2, a3, a4, a5, a6, a7, a8, a9, a10}, you need {{a2, a4}, {a6, a8}}. And you need to do this for each row, where the rows have variable lengths. With Table, this does the job:
test2Dx = {{a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13,
a14, a15, a16, a17, a18, a19, a20}, {b1, b2, b3, b4, b5, b6, b7,
b8, b9, b10, b11}, {c1, c2, c3, c4, c5, c6, c7}, {d1, d2, d3, d4,
d5, d6, d7, d8, d9, d10, d11, d12, d13, d14, d15, d16, d17}};
MatrixForm[
 Table[{test2Dx[[row, 2 + col*4]], test2Dx[[row, 4 + col*4]]}, {row, 1, Length@test2Dx},
  {col, 0, Floor[Length[test2Dx[[row]]]/4] - 1}],
 TableAlignments -> Left]
Is there a semantically straightforward way to do this using other functions (e.g., Map or its variants and a pure function)—or is this a use case for which Table makes more sense?
Comments:

- myPower[x_, {n1_, n2_}] … should be avoided, because a function definition based on pattern matching can't be auto-compiled. Related: https://mathematica.stackexchange.com/a/705/1871 – xzczd Aug 03 '19 at 11:36
- … [would] Downsample still work easily, or might Part be preferred, as follows?: Map[{#[[3]], #[[7]]} &, Map[Partition[#, 11] &, test2Dy], {2}] or, equivalently, {#[[3]], #[[7]]} & /@ Partition[#, 11] & /@ test2Dy. – theorist Aug 04 '19 at 00:17
- … {#[[3]], #[[7]]} &. If, for example, I want to reduce the number of elements by a factor of four and avoid the elements which are too close to the ends of a list, then I will use something like Take[#, {3, -3, 4}] &. This way it will be easier to read and maintain in the future. – Ray Shadow Aug 04 '19 at 01:07
- With test2Dx = RandomReal[{1, 2}, {1000, 1000}], RepeatedTiming gave the following results: Table[{test2Dx[[row, 2 + col*4]], test2Dx[[row, 4 + col*4]]}, {row, 1, Length@test2Dx}, {col, 0, (Floor[N[Length[test2Dx[[row]]]/4]]) - 1}]: 0.4 s; Map[Downsample[#, 2, 2] &, Map[Partition[#, 4] &, test2Dx], {2}]: 20.2 s; {#[[2]], #[[4]]} & /@ Partition[#, 4] & /@ test2Dx: 0.03 s. Any idea why the Downsample code is so much slower? – theorist Aug 04 '19 at 17:50
- Downsample is a high-level function which internally uses Part. In addition, it performs a lot of checks (see GeneralUtilities`PrintDefinitions[Downsample]). Part, by contrast, is a built-in kernel function. That's why using Part directly is much faster. – Ray Shadow Aug 04 '19 at 23:04