
Recently I came across a lack of information about Mathematica's performance on different machines. Even though a benchmark is already implemented, the WolframMark, the available data covers only 15 old machines, and I couldn't find more online.

As a lot of people consider Mathematica's performance when buying a computer, I think it would be very useful to create a database of benchmark results from different processors and operating systems.

It's very simple to ask people to run the benchmark on their computers. My suggestion is to ask them to restart the kernel (a two-click procedure), run

Needs["Benchmarking`"]
Benchmark[]

and send the output along with some key specs: processor model, number of cores, clock frequency, and RAM.
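To make the reports uniform, contributors could bundle the score and the key specs into a single structure. Here is a minimal sketch, assuming `Benchmark[]` returns the overall WolframMark score as a number (as I believe it does in recent versions); the output file name is just an example:

```mathematica
(* Run the WolframMark benchmark and bundle the score with key
   system specs. Assumes Benchmark[] returns the overall score. *)
Needs["Benchmarking`"]

result = <|
   "WolframMark" -> Benchmark[],
   "Version" -> $Version,      (* MMA version matters for comparisons *)
   "Architecture" -> $ProcessorType, (* architecture only; the exact
                                        CPU model must be added by hand *)
   "Cores" -> $ProcessorCount,
   "RAM" -> $SystemMemory,     (* total physical memory, in bytes *)
   "OS" -> $OperatingSystem
   |>;

(* Export as JSON so results are easy to collect and compare;
   the file name is illustrative. *)
Export["wolframmark-result.json", result, "JSON"]
```

Contributors would then only need to send one small file, and whoever maintains the database can parse the entries programmatically.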

If we create such a database, we can provide useful information about what one should consider when buying a machine; it could be hosted on a website or sent to Wolfram.

I think it's pretty easy to gather data, since we can ask our friends and even send institutional/corporate emails. So, to conclude: is anyone interested in contributing to this project?

Note: I think WolframMark is reliable enough to compare machines. After 100 benchmark runs, my 3rd-gen i3 scored 1.166(9) and a colleague's 5th-gen i7 scored 1.38(4).

Capivara Cometa
  • Perhaps it's relevant that other applications should be closed while performing the benchmark? – DrMrstheMonarch Aug 23 '19 at 16:42
  • I wonder if other applications have a significant effect, considering Mathematica prefers to use only one thread at a time (only alternating between them), though I agree it's a relevant factor to consider! –  Aug 23 '19 at 16:43
  • I think you (or an answer) should provide the code that gathers the benchmark and system information and posts it automatically to a public place easy to retrieve. Then we can all have fun doing stats with the data. – rhermans Aug 23 '19 at 16:57
  • What kind of answer do you expect here? I think asking for an opinion poll is off-topic. Probably asking for code to gather the data would be a better question? – rhermans Aug 23 '19 at 17:00
  • I expect suggestions of how we can create the database and estimate the number of people interested. StackExchange seems to be the best place to reach the community. – Capivara Cometa Aug 23 '19 at 17:09
  • Wolfram Community has this attempt https://community.wolfram.com/groups/-/m/t/1179687 but it flopped. Also, it would be great if someone came up with code that gathers the benchmark and system information and posts it automatically to a public place that is easy to retrieve from. – Capivara Cometa Aug 23 '19 at 17:11
  • One issue is that performance likely depends on the version of MMA used, so that needs to be specified. Another issue is that it would be nice to have both single-core and multi-core benchmarks (in the latter case, the number of cores available to the MMA license would need to be specified). WolframMark may be single-core only. Karl Unterkofler used to run a MMA benchmarking site, which was very useful because he collected detailed system info, had separate benchmark lists for each recent version of MMA, and had separate single-core and multi-core benchmarks. – theorist Aug 24 '19 at 04:36
  • WolframMark outputs the MMA version, but you're right about the cores issue. Anyway, unfortunately that is another flopped attempt at gathering such useful info. – Capivara Cometa Aug 24 '19 at 15:55
  • I might be able to help with hosting if needed. It could get a subdomain for packagedata.net (that I've created), something like wolframmark.packagedata.net. – C. E. Aug 24 '19 at 21:34
  • I would emphasise the need for (i) separate single-core and multi-core benchmarks, and whether the license restricts the number of cores - e.g. some licenses may not allow more than 4 or 8 cores --- but the computer may have more than that; and (ii) that I don't consider the multi-core Benchmark to be a practical or informative test of actual/practical multi-core performance --- it is more like a best case scenario, because the examples have been designed to suit problems that happen to be well-disposed to being broken up. That's nice if your problems fit that case - but usually most don't. – wolfies Aug 25 '19 at 05:21
  • This post is interesting, but I believe it is better suited as a discussion on meta. – halirutan Aug 25 '19 at 13:10
  • Relevant (but related to v9): https://mathematica.stackexchange.com/questions/28209/hardware-performance-metrics-for-mathematica – Sjoerd C. de Vries Aug 31 '19 at 14:18

0 Answers