
An earlier post, "Can I exit a Map over a list of data pairs?", described the outline of a processing-heavy research initiative I have under development.

Does Wolfram or anyone else offer a computing infrastructure platform to which I could upload a notebook and data and run across many computing cores?

Scope

I have:

  • 10 sets (batches) of initialization inputs (assumptions) - these can run in parallel,

  • 11,500 simulations to run on each set of the initialization inputs - these too can run in parallel, and

  • 8,000 sequential sets of data points to process within each simulation (a rough sketch of how these three levels nest follows this list).
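For concreteness, here is a minimal sketch of how these three levels might nest in the Wolfram Language. The names batchInputs, runSimulation, and processPoint are placeholders rather than my actual code, and the counts are reduced so the example evaluates quickly; the real values would be 10, 11,500, and 8,000.

    (* Placeholder definitions standing in for the real problem *)
    batchInputs = Range[3];                        (* stands in for the 10 input sets *)
    runSimulation[batch_, sim_] := N[batch + sim]; (* placeholder initial simulation state *)
    processPoint[state_, pt_] := state + pt;       (* placeholder sequential update *)
    dataPoints = Range[100];                       (* stands in for the 8,000 data points *)

    results = Map[
       Function[batch,
        ParallelTable[                             (* independent simulations in parallel *)
         Fold[processPoint, runSimulation[batch, sim], dataPoints], (* sequential points *)
         {sim, 50}]],                              (* stands in for the 11,500 simulations *)
       batchInputs];

With only 4 local kernels, the parallelism is probably best spent on the 11,500 independent simulations rather than on the outer loop over input sets.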

Current computing platform:

  • Mathematica 13.2.1.0 (home desktop license supporting 4 kernels)
  • Apple MacBook Pro w/M1 Pro 10 core processor
  • macOS Ventura 13.2.1

Timing estimate

I've run Timing[] and AbsoluteTiming[] on one set of the initialization inputs over a few subsets of the simulations.

This leaves me with an estimate of well over 100 hours of wall-clock time to run the simulations on a single set of inputs.
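Roughly, the extrapolation looks like this; runOneSimulation is a placeholder standing in for my actual per-simulation code, and sampleSize is an assumed sample count for illustration:

    runOneSimulation[i_] := Pause[0.01];   (* placeholder for the real simulation *)
    sampleSize = 20;
    sampleTime = First@AbsoluteTiming[Do[runOneSimulation[i], {i, sampleSize}]];
    (* projected wall-clock hours for one set of initialization inputs *)
    estimatedHours = (sampleTime/sampleSize)*11500/3600.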

It could take a couple of months to run all the sets of initialization inputs.

I may eventually come up with a more efficient way to cast the problem, but at this point, I need brute force computing.

I understand that Wolfram offers Wolfram Cloud, but it apparently limits computation to 5 minutes at a time.

So again, does Wolfram or anyone else offer a computing infrastructure platform to which I could upload a notebook and data and run across many more cores?

I'm happy to pay for such a service; I just need to evaluate it against simply purchasing more computers and Mathematica licenses.

Thoughts appreciated.

Jagra
