As a supplement to some of the already excellent answers, there's another factor to consider:
- Regardless of the costs, how are you going to pay for it?
I've encountered a non-trivial number of grants that will not, under any circumstances, pay for hardware expenses, but will pay for computing time on something like EC2. So under some funding circumstances, while you might be able to fund a small "testbed" cluster with unrestricted funds or a lab startup package, for larger-scale projects cloud computing may be the only way to have your computing costs funded.
Consider the NIH:
> ADP/Computer Services: The services you include here should be research-specific computer services, such as reserving computing time on supercomputers or getting specialized software to help run your statistics. This section should not include your standard desktop office computer, laptop, or the standard tech support provided by your institution. Those types of charges should come out of the F&A costs.
While it's possible to list cluster machines under the $5,000+ equipment heading, and you can make a good argument for doing so, I've found both reviewers who are skittish about it and universities that are hesitant about the ongoing costs of maintaining such a system.
Some grants are even more strict. One grant I currently have reads as follows:
> Funds may also not be used for computer hardware
It's often simply easier to get a cluster paid for out of direct costs if it's EC2-based (or one of EC2's many analogs) than to actually buy the hardware, especially if your institution is stingy with the indirect costs. This may not be the case for you, but it's the case for some.
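If you do end up budgeting cloud time as a direct cost, a rough side-by-side of amortized hardware cost versus rental cost can help frame the numbers in a budget justification. A minimal sketch follows; every figure here is an illustrative assumption, not a real quote or rate:

```python
# Hypothetical budget sketch: amortized owned cluster vs. renting cloud time.
# All figures below are illustrative assumptions, not real vendor pricing.

hardware_cost = 50_000            # assumed one-time cluster purchase
hardware_lifetime_years = 5       # assumed depreciation period
annual_maintenance = 5_000        # assumed share of sysadmin/power/cooling

cloud_rate_per_node_hour = 0.50   # assumed on-demand rate per node-hour
nodes = 16                        # assumed cluster size
hours_per_year = 2_000            # assumed actual annual utilization

# Amortize the purchase over its lifetime and add recurring costs.
hardware_annual = hardware_cost / hardware_lifetime_years + annual_maintenance
# Cloud cost scales directly with the hours you actually use.
cloud_annual = cloud_rate_per_node_hour * nodes * hours_per_year

print(f"Owned cluster, per year: ${hardware_annual:,.0f}")
print(f"Cloud rental, per year:  ${cloud_annual:,.0f}")
```

Under these made-up numbers the two come out comparable, which is part of why reviewers rarely object to cloud line items: the cost maps cleanly onto usage, with no residual asset for the university to maintain.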