I am wondering if you can answer a math question that I need answered for an evolutionary program I am writing.

With regard to the phenomenon known as the "space-time tradeoff" (wiki it): what are the raw numbers for this on modern computers? I.e., what advantages are gained by, say, using a 2 TB HDD to store frequently used variables vs. calculating them on the fly?

Cheers Martin

marscom
  • This is an extremely broad question. Perhaps you can reword the question with what you are doing in your program? If you can precompute quantities and are not memory bound, you will be better off storing them in memory. – James Custer Mar 12 '12 at 18:40
  • What does the time-space trade-off involve for your specific problem? – Emre Mar 12 '12 at 18:51
  • The specific subject is the question. I imagine there would be uses for this in modern computing, but just how useful would it be? Are we talking about a 10,000-fold increase at the optimal memory/processor balance compared to average? Or is it just, for example, 10-fold? Cheers - Martin – marscom Mar 12 '12 at 23:32
  • @marscom: The amount of time saved by increasing memory depends upon factors such as the problem you're working on, the algorithm you choose to solve the problem, the cache hierarchy, etc. and cannot be generalized for every possible problem. Do you have a specific problem in mind? – Paul Mar 13 '12 at 03:51
  • I am trying to estimate the benefit that the human brain gains by storing so much data. I have reasoned that if data storage were not somehow more useful than calculation, the brain would be a simple brute-force calculator – marscom Mar 13 '12 at 04:13
  • There has been a lot of thought invested in this topic. The upper bound is well understood, but the real number is likely a few orders of magnitude below that. The keywords "efficient coding" should get you reading in the right direction. Basically, you can upper-bound the information per neuron, but to be robust, there have to be significant correlates. I am not sure this is a well enough formed question to be posted here, sadly... – meawoppl Mar 13 '12 at 05:26

1 Answer

Whether saving information makes something faster depends on how much time it takes to retrieve it from storage versus how much time it takes to recalculate the values on the fly.

As a general rule, fetching something from cache or executing a CPU instruction takes a fraction of a nanosecond. Going to RAM can take anywhere from ~1 nanosecond to ~20 nanoseconds, depending on the speed of the memory and whether it's being accessed as a contiguous block.
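
To make the access-pattern point concrete, here is a minimal Python sketch (my addition, not part of the original answer) that times sequential versus random access over the same array. The absolute numbers are machine-dependent, and Python's interpreter overhead shrinks the gap you would see in a compiled language, but the ordering should hold:

    # Compare sequential vs. random access over the same data.
    # Sizes and timings are illustrative and machine-dependent.
    import random
    import time

    N = 5_000_000
    data = list(range(N))

    # Sequential access: cache lines and the hardware prefetcher help.
    start = time.perf_counter()
    total = 0
    for i in range(N):
        total += data[i]
    print(f"sequential: {time.perf_counter() - start:.2f}s")

    # Random access: the same work, but most lookups miss the cache.
    order = list(range(N))
    random.shuffle(order)
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    print(f"random:     {time.perf_counter() - start:.2f}s")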

Getting something from disk can take as much as a millisecond. This cost drops dramatically per byte if you read a large contiguous block at once.
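
In the same hedged spirit, this sketch contrasts one contiguous read with many scattered small reads of the same file. Note that the operating system's page cache can mask the difference here, since the file was just written; on a cold cache and a spinning disk the gap is far more dramatic:

    # One contiguous read vs. many scattered 4 KB reads of the same file.
    import os
    import random
    import tempfile
    import time

    CHUNK = 4096
    N_CHUNKS = 4096  # ~16 MB total

    # Create a temporary file filled with random bytes.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(CHUNK * N_CHUNKS))

    # Read the whole file in one contiguous pass.
    start = time.perf_counter()
    with open(path, "rb") as f:
        f.read()
    print(f"contiguous: {time.perf_counter() - start:.4f}s")

    # Fetch the same bytes as scattered 4 KB reads in random order.
    offsets = [i * CHUNK for i in range(N_CHUNKS)]
    random.shuffle(offsets)
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(CHUNK)
    print(f"scattered:  {time.perf_counter() - start:.4f}s")

    os.remove(path)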

Whether you can make a program faster by storing intermediate values and precalculating some parts depends on how much time it takes to calculate those values in the first place. There's no simple, general answer to "how much can I gain by storing intermediate values?".
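
As one concrete illustration of the tradeoff itself, here is a sketch that caches the results of a deliberately slow function. expensive() is a hypothetical stand-in for whatever your program recalculates; functools.lru_cache keeps the results in RAM:

    # Trade memory (a cache of results) for time (skipped recomputation).
    import time
    from functools import lru_cache

    def expensive(n):
        time.sleep(0.01)  # stand-in for a costly calculation
        return n * n

    @lru_cache(maxsize=None)
    def expensive_cached(n):
        time.sleep(0.01)
        return n * n

    start = time.perf_counter()
    for _ in range(100):
        expensive(42)  # recomputed on every call
    print(f"recompute: {time.perf_counter() - start:.2f}s")

    start = time.perf_counter()
    for _ in range(100):
        expensive_cached(42)  # computed once, then looked up
    print(f"cached:    {time.perf_counter() - start:.2f}s")

Whether the cached version wins in a real program comes down to exactly the ratio described above: lookup cost versus recomputation cost.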

Dan