I have a sequence of points which was obtained from an iterative algorithm, and I computed the order of convergence $p$ of the method using the formula

$$ p \approx \frac{\log({\rm err}(k+2))-\log({\rm err}(k+1))}{\log({\rm err}(k+1)) - \log({\rm err}(k))}. $$
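The ratio formula above can be evaluated over the whole error sequence at once. A minimal sketch in Python/NumPy (assuming a synthetic, exactly linearly convergent error sequence; replace `err` with your own values):

```python
import numpy as np

# Assumed synthetic data: errors halving at each step (linear convergence).
err = 0.5 ** np.arange(6)

log_err = np.log(err)
# p(k) ≈ [log err(k+2) - log err(k+1)] / [log err(k+1) - log err(k)]
p = (log_err[2:] - log_err[1:-1]) / (log_err[1:-1] - log_err[:-2])
print(p)
```

For this synthetic sequence every ratio equals 1 exactly; with real data the individual ratios fluctuate, which is why the first one can stand out.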

The first computed ratio is much larger than the others. Keeping this value in the complete list, I obtained $p = 1.2615$ as the order of convergence of my algorithm. However, after removing the outlier (the first calculated ratio), I got $p = 1.0495$ (see figure).

My question is: which of the two is the correct order of convergence of my algorithm?

  • I'd say the first one is an outlier. Is this data obtained from a one-time run or is each point an average of multiple measurements? In this case I would actually go for non-linear least-squares as a method of fitting to prevent outliers from polluting the results too much. – Nox Jan 28 '19 at 14:08
  • Why not use a least-squares approach to find the slope of your line? – nicoguaro Jan 28 '19 at 14:47
  • @Nox The data is a result of a one-time run and the above lines and slopes were obtained via the polyfit function in Matlab. So, which of the two should I take as the order of convergence for my algorithm? – Julienne Franz Jan 29 '19 at 04:58
  • @nicoguaro The above lines and slopes were actually obtained via the polyfit function in Matlab. – Julienne Franz Jan 29 '19 at 04:59
  • If possible, re-run and check if the outlier is still there. Honestly I think it's an outlier and can be ignored, making the 1.0495 value more likely to be correct. – Nox Jan 29 '19 at 14:02
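
The least-squares slope fit suggested in the comments can be sketched with NumPy's `polyfit` (analogous to the Matlab `polyfit` used by the asker). The data below are assumed synthetic errors with an artificially large first drop, to illustrate how one point shifts the fitted slope:

```python
import numpy as np

# Assumed synthetic data: a large first error drop, then exact halving.
err = np.array([1.0, 0.1, 0.05, 0.025, 0.0125, 0.00625])

# Fit log err(k+1) against log err(k); the slope estimates the order p.
x, y = np.log(err[:-1]), np.log(err[1:])
p_all  = np.polyfit(x, y, 1)[0]       # all points included
p_trim = np.polyfit(x[1:], y[1:], 1)[0]  # first ratio removed
print(p_all, p_trim)
```

Here `p_trim` is exactly 1 (the trimmed points lie on a line of slope 1), while `p_all` is pulled away from 1 by the single outlying first ratio, mirroring the discrepancy in the question.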

0 Answers