How do we specify the precision we require from a numerical differential-equation solver? What quantity do I actually need to control, and how do I know when my time step is small enough?
For example, in programs like Mathematica and MATLAB, how does the differential-equation solver choose a suitable step size for your problem? How can that be determined automatically?
A full explanation of the approach would be nice (or at least a term I can google).
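
To make the question concrete, here is a rough Python sketch of the kind of loop I imagine might be going on inside such a solver: take a step with two methods of different order, use their difference as a local error estimate, and shrink or grow the step so the estimate stays near a tolerance. The function name `adaptive_heun`, the particular method pair (Euler vs. Heun), and the step-update formula are all my own guesses for illustration; I have no idea whether this resembles what Mathematica or MATLAB actually do.

```python
import math

def adaptive_heun(f, t0, y0, t_end, tol=1e-6, h=0.1):
    """Guess at an adaptive integrator for y' = f(t, y): take a Heun (RK2)
    step, compare it with an embedded Euler step to estimate the local
    error, and adjust h so that the estimate stays near tol."""
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < t_end:
        h = min(h, t_end - t)            # don't step past the end point
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_euler = y + h * k1             # 1st-order estimate
        y_heun = y + h * (k1 + k2) / 2   # 2nd-order estimate
        err = abs(y_heun - y_euler)      # local error estimate
        if err <= tol or h < 1e-12:      # accept the step (or give up shrinking)
            t, y = t + h, y_heun
            ts.append(t)
            ys.append(y)
        # Guessed step-size update: the estimate scales roughly like h^2,
        # so h_new ~ h * (tol/err)^(1/2), with a 0.9 safety factor and a
        # cap on how fast h may grow.
        h *= min(5.0, 0.9 * (tol / max(err, 1e-16)) ** 0.5)
    return ts, ys

# Example: y' = -y with y(0) = 1, whose exact solution is exp(-t).
ts, ys = adaptive_heun(lambda t, y: -y, 0.0, 1.0, 5.0, tol=1e-6)
print(f"steps taken: {len(ts) - 1}, final error: {abs(ys[-1] - math.exp(-5)):.2e}")
```

Is something along these lines what real solvers do, and if so, what is the proper name for it?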