Preamble
This is IMO a very good question. I will try to describe an approach based on code generation, which in my view allows one to get most of the benefits of declarative, rule-based-like definitions without essentially imposing any limitations or introducing any inconsistencies. The resulting functions can also be Compiled.
General solution via code generation and overloading
Code
As an alternative, one can overload Function in the following way:
Unprotect[Function];
Function[{left___, {syms__Symbol}, right___}, body_, atts_:{}] :=
  Module[{var},
    With[{
        rules = Thread[# :> Evaluate[Range[Length[#]]]] &@
            Thread[HoldPattern[{syms}]] /. ind_Integer :> var[[ind]]
      },
      Function @@ (Hold[{left, var, right}, body, atts] /. rules)
    ]
  ];
Protect[Function];
Since Function by itself does not have such extended syntax, this should be reasonably safe.
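For instance, ordinary pure functions should be unaffected, since the new definition only fires when one of the declared parameters is itself a list of symbols (a quick check of my own, assuming the overload above has been executed):
(* No parameter slot is a list of symbols here, so the built-in behavior is used *)
Function[{x, y}, x + y][1, 2]
(* 3 *)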
Usage
You can then call:
Function[{{x,y,z}},x+y-z]
and what you get will look like
(* Function[{var$28109},var$28109[[1]]+var$28109[[2]]-var$28109[[3]]] *)
so that the above code does the code generation for you. Your second example also works verbatim, without any modification:
Fold[
  Function[{{a, b}, y}, If[y~Mod~2 == 0, {a~Join~{y}, b}, {a, b~Join~{y}}]],
  {{}, {}},
  Range@10]
(* {{2, 4, 6, 8, 10}, {1, 3, 5, 7, 9}} *)
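As this example already suggests, the destructured list does not have to be the only parameter; it can be freely mixed with ordinary ones. Here is a small illustrative check of my own, again using the overload defined above:
Function[{{x, y}, z}, x + y + z][{1, 2}, 3]
(* 6 *)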
Function's attributes can also be handled. The following, in particular, will perform in-place modification of a list passed as the first argument:
lst = {0, 0, 0};
Function[{{a, b, c}, d, e, f}, a = d; b = e; c = f, HoldAll][lst, 1, 2, 3];
lst
(* {1, 2, 3} *)
Again, this is done with a transparent and easy-to-read syntax. Such behavior does not seem easy to reproduce with other approaches, at least not without some loss of clarity / readability.
Advantages of this scheme
In my opinion, this scheme has a number of advantages over other solutions, in particular those based on replacement rules. Some of them:
- Generality: it can handle all cases without any modifications to the body of the function, compared to how you would write it in a rule-based style.
- Support for Function attributes.
- No impedance mismatch with Function: since the result is a good old pure function, it does not have any limitations or inconsistencies in terms of how it can be used (e.g. in Compile, see the next item, but perhaps not only there).
- Such functions can be Compile-d rather straightforwardly, as described in the last section of the answer.
Making it safer with a dynamic environment
Since overloading built-in functions globally is generally a bad idea, you can make it safer by creating a local dynamic environment.
Code
This is done with Internal`InheritedBlock:
ClearAll[withAddedFunctionSyntax];
SetAttributes[withAddedFunctionSyntax, HoldAll];
withAddedFunctionSyntax[code_] :=
  Internal`InheritedBlock[{Function},
    Unprotect[Function];
    Function[{left___, {syms__Symbol}, right___}, body_, atts_:{}] :=
      Module[{var},
        With[{
            rules = Thread[# :> Evaluate[Range[Length[#]]]] &@
                Thread[HoldPattern[{syms}]] /. ind_Integer :> var[[ind]]
          },
          Function @@ (Hold[{left, var, right}, body, atts] /. rules)
        ]
      ];
    Protect[Function];
    code
  ];
Usage
You can now execute the code in this environment:
withAddedFunctionSyntax[Function[{{x,y,z}},x+y-z][{1,3,5}]]
(* -1 *)
Note that it is enough to execute only the part containing your function definition in that environment; you can then export the result to the global one:
fun= withAddedFunctionSyntax[Function[{{x,y,z}},x+y-z]]
(* Function[{var$357},var$357[[1]]+var$357[[2]]-var$357[[3]]] *)
fun[{1,3,5}]
(* -1 *)
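As a quick sanity check (assuming the global overload from the first section has not also been applied), Function remains untouched outside the environment:
(* Outside withAddedFunctionSyntax, the extended definition is not active,
   so the expression below stays unexpanded *)
Function[{{x, y, z}}, x + y - z]
(* Function[{{x, y, z}}, x + y - z] *)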
Certain special cases, speed-ups and compilation
As the OP rightly noted, using replacement rules also presents problems for speeding up and / or compiling the functions obtained that way. Here, I will use the OP's added example to show how one can speed up and also compile functions obtained via the procedure I proposed above. I will be using the global (less secure) version of the Function overloading for simplicity, but it is trivial to use the dynamic environment instead.
Problems with naive usage in loops etc
First, let us try to run the naive version of the code:
Do[
  Function[{{a, b, c, d, e, f, g, h, i}},
    If[a < d < g && a/(10 b + c) + d/(10 e + f) + g/(10 h + i) == 1,
      Print[{a, b, c, d, e, f, g, h, i}]
    ]][i],
  {i, Permutations[Range[9]]}
] // Timing
I had to Abort[] this code, since it was taking way too long. It is easy to understand why: since Do holds its arguments, the function expansion defined above was performed at every single invocation of the function.
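One can see the repeated work directly: each evaluation of the extended Function re-runs the code generation and produces a fresh Module variable. A small illustration of my own (Table, like Do, holds its body; the exact var$ names will differ between runs):
Table[Function[{{x, y}}, x + y], {2}]
(* two structurally identical pure functions, each with its own fresh var$... variable *)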
Simple work-around: store a pure function in a variable
The simplest way to deal with this problem is to define the function separately and store it in a variable, like so:
fn =
  Function[{{a, b, c, d, e, f, g, h, i}},
    If[a < d < g && a/(10 b + c) + d/(10 e + f) + g/(10 h + i) == 1,
      Print[{a, b, c, d, e, f, g, h, i}]]];
Do[fn[i],{i,Permutations[Range[9]]}]//Timing
During evaluation of In[42]:= {5,3,4,7,6,8,9,1,2}
(* {4.593750,Null} *)
This, however, is not the most general way.
General way out: adding function-expanding macro
Another way would be to write a function-expanding macro. Here it is:
ClearAll[functionExpand];
SetAttributes[functionExpand, HoldAll];
functionExpand[code_] :=
Unevaluated[code] /. f_Function :> With[{eval = f}, eval /; True]
It uses the Trott-Strzebonski in-place evaluation technique, described also here (see also the answer of WReach there) and here, to expand Function inside the code. With it, one can do:
functionExpand[
  Do[
    Function[{{a, b, c, d, e, f, g, h, i}},
      If[a < d < g && a/(10 b + c) + d/(10 e + f) + g/(10 h + i) == 1,
        Print[{a, b, c, d, e, f, g, h, i}]]
    ][i],
    {i, Permutations[Range[9]]}
  ]
] // Timing
During evaluation of In[43]:= {5,3,4,7,6,8,9,1,2}
(* {4.546875,Null} *)
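To see what the macro actually does, one can apply it to a held expression (an illustration of my own, assuming the Function overload is active; the generated var$ name will differ):
functionExpand[Hold[Function[{{x, y}}, x + y]]]
(* Hold[Function[{var$...}, var$...[[1]] + var$...[[2]]]] *)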
Compilation
Let me now show how one would compile the code obtained in this way. The recipe is very simple: use functionExpand again. So, for example:
compiled =
  functionExpand[
    Compile[{{p, _Integer, 1}},
      Function[{{a, b, c, d, e, f, g, h, i}},
        If[a < d < g && a/(10 b + c) + d/(10 e + f) + g/(10 h + i) == 1,
          {a, b, c, d, e, f, g, h, i},
          {}
        ]][p]
    ]]
where I slightly changed the code so that the result is returned rather than printed, and there are then no calls to the main evaluator. You can check that compiled contains only byte-code instructions (see the CompilePrint sketch after the timing below), so it all works. Now, this speeds things up quite a bit:
Do[If[compiled[p] != {}, Print[p]], {p, Permutations[Range[9]]}] // Timing
During evaluation of In[36]:= {5,3,4,7,6,8,9,1,2}
(* {0.890625,Null} *)
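One way to perform the byte-code check mentioned above is CompilePrint from the standard CompiledFunctionTools` package (shown here as a suggestion; the exact instruction listing depends on the version):
Needs["CompiledFunctionTools`"]
CompilePrint[compiled]
(* prints the instruction listing; the absence of MainEvaluate calls
   indicates that the function was fully compiled *)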
This is not the end of the story, however. One improvement would be to compile to C. Another is to compile the entire loop as well:
fullCompiled =
  Compile[{},
    Do[If[compiled[p] != {}, Print[p]], {p, Permutations[Range[9]]}],
    CompilationOptions -> {"InlineCompiledFunctions" -> True}
  ];
One can also test that, apart from Print, we get byte-code instructions again (note that the option "InlineCompiledFunctions" was used). So,
fullCompiled[]//Timing
During evaluation of In[49]:= {5,3,4,7,6,8,9,1,2}
(* {0.312500,Null} *)
Finally, we can also inline external definitions:
fullCompiledInlined =
  Compile[{},
    Do[If[compiled[p] != {}, Print[p]], {p, Permutations[Range[9]]}],
    CompilationOptions -> {
      "InlineCompiledFunctions" -> True,
      "InlineExternalDefinitions" -> True
    }];
which gives another 2x speedup in this case:
fullCompiledInlined[]//Timing
During evaluation of In[50]:= {5,3,4,7,6,8,9,1,2}
(* {0.187500,Null} *)
So, we got roughly a 20x speedup due to compilation. Compilation to the C target would likely bring further speed enhancements.
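For completeness, here is what the C-target variant might look like (a sketch only, not timed here; it assumes a C compiler is installed and available to Compile, and the variable name is my own):
fullCompiledInlinedC =
  Compile[{},
    Do[If[compiled[p] != {}, Print[p]], {p, Permutations[Range[9]]}],
    CompilationOptions -> {
      "InlineCompiledFunctions" -> True,
      "InlineExternalDefinitions" -> True
    },
    CompilationTarget -> "C"
  ];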
What I have shown here is a workflow involving the overloaded Function, and ways to speed up and / or compile code that uses it.