
This is a follow-up question to Convergence rate vs convergence order.

I guess my whole confusion about rate vs. order of convergence also came from the implementations I saw.

For example, for fixed-point iteration, I saw this (it was part of a fixed-point iteration):

import numpy as np

# Function to estimate the order of convergence from an error sequence e
def konvord(x, e):
    # per-step estimates p_k = log(e_{k+2}/e_{k+1}) / log(e_{k+1}/e_k)
    p = np.log(e[2:]/e[1:-1]) / np.log(e[1:-1]/e[:-2])
    mw = p.sum()/len(p)   # mean of the per-step estimates
    return mw

# Function to estimate the rate of convergence (assuming linear convergence)
def konvrate(x, e):
    n = len(e)
    k = np.arange(0, n)                # array with 0, 1, 2, ..., n-1 (iteration steps)
    fit = np.polyfit(k, np.log(e), 1)  # linear fit of log(e_k) against k
    L = np.exp(fit[0])                 # the slope is log(L)
    return L

But then I looked at code that does some quadrature, and there the order of convergence is calculated using this:

p = - polyfit(log(n), log(err), 1)[0]

These implementations confuse me. I think the last one makes sense, but I can't really see how konvord() gives us an order of convergence and how konvrate() gives us a rate of convergence. To me, konvord() rather looks like it gives the rate of convergence for $p=1$. The function konvrate() should probably give an exact value of the order of convergence, but it only works for $p$ very close to $1$ (hence only the error is inside a log in the polyfit).
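Just to make explicit how I read that one-liner: I assume it presupposes an algebraic error law $err \approx C\,n^{-p}$ (with $n$ e.g. the number of subintervals), so that $\log err \approx \log C - p\log n$ and minus the slope of the log-log fit is $p$. A minimal sketch of that reading (the array names n and err and the synthetic data are mine, just for illustration):

import numpy as np

n = np.array([4, 8, 16, 32, 64])   # e.g. number of subintervals
err = 3.0 * n**(-2.0)              # synthetic quadrature errors with known order p = 2

# log(err) = log(C) - p*log(n), so the slope of the log-log fit is -p
p = -np.polyfit(np.log(n), np.log(err), 1)[0]
print(p)                           # ~2.0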

What am I not getting? Can someone please explain all three implementations to me? What is the idea behind them?

Edit: I tried to play around with it using a fixed-point iteration: https://hastebin.com/zevuhiqaya.py (with output), and all three give different results.

Edit: I also wanted to say how I would do it, ignoring all of the above:

For the order of convergence, I have $p\approx\frac{\log(e_{k+1})-\log(e_{k})}{\log(e_k)-\log(e_{k-1})}$. I'd implement this using polyfit: I'd get a linear graph and its slope would be my $p$. (This is the last of the above implementations.)

For getting the rate of convergence for $p=1$, I'd just do the ratio test; I think that's what he tried in konvrate(). For an arbitrary $p$, I'd first calculate $p$ and then basically do the ratio test again, this time with $e_{k+1}/e_k^p$; see the sketch below.
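A minimal sketch of that two-step idea (the synthetic error sequence and the variable names are mine, just for illustration):

import numpy as np

# Synthetic error sequence with known behaviour e_{k+1} = C * e_k**p
C_true, p_true = 0.5, 2.0
e = [1e-1]
for _ in range(6):
    e.append(C_true * e[-1]**p_true)
e = np.array(e)

# Step 1: estimate the order p from successive log-ratios
p_steps = np.log(e[2:]/e[1:-1]) / np.log(e[1:-1]/e[:-2])
p_hat = np.median(p_steps)                  # ~2.0

# Step 2: "ratio test" with the estimated p to get the rate C
C_hat = np.median(e[1:] / e[:-1]**p_hat)    # ~0.5
print(p_hat, C_hat)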

Question 1: Aren't konvord() and the last implementation the same? If so, why do they give different results?

Question 2: Is konvrate() correctly implemented?

Question 3: I'm sure that konvord() and the last implementation should give the same result. What exactly should n be there? I think I didn't quite get the idea behind it.

xotix

1 Answer


Assuming that the error progression is exactly of the form $$e_{k+1} = C e_k^p,$$ we have $$e_{k+2} = C e_{k+1}^p,$$ and therefore $$\frac{e_{k+2}}{e_{k+1}} = \left(\frac{e_{k+1}}{e_{k}}\right)^p,$$ $$\log\frac{e_{k+2}}{e_{k+1}} = p\log\frac{e_{k+1}}{e_{k}},$$ $$p = \frac{\log\frac{e_{k+2}}{e_{k+1}}}{\log\frac{e_{k+1}}{e_{k}}}.$$

This is what konvord() computes. By averaging over all possible values of $k$ from $0$ to $k_{\max}-2$, the function attempts to average out inexact behaviour at low or high $k$. If there is too much deviation at either end, the averaging might not work, and the log-log fit might give a better result, as you propose. Perhaps it is better to take the median of the $p$ estimates instead of the average in mw; see the sketch below.
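A minimal sketch of that median variant (same interface as konvord() above; the function name is mine):

import numpy as np

def konvord_median(x, e):
    # per-step order estimates p_k = log(e_{k+2}/e_{k+1}) / log(e_{k+1}/e_k)
    p = np.log(e[2:]/e[1:-1]) / np.log(e[1:-1]/e[:-2])
    return np.median(p)   # the median is less sensitive to outliers at either end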

For konvrate(), assume linear convergence: $$e_{k+1} = L e_k$$ $$e_{k} = L^k e_0$$ $$\log e_k = \log e_0 + k \log L$$ Fitting $\log e_k$ vs. $k$ gives $\log L$ as the slope, which is why the code returns np.exp(fit[0]).
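A quick sanity check of this, using synthetic data with a known rate $L$ (the same fit that konvrate() performs):

import numpy as np

L_true = 0.3
k = np.arange(12)
e = L_true**k                       # exactly linear convergence, e_k = L^k * e_0 with e_0 = 1

fit = np.polyfit(k, np.log(e), 1)   # this is what konvrate() does internally
print(np.exp(fit[0]))               # ~0.3, the rate L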

I don't think you can find $p$ from a log-log plot here. For $p>1$, $p$ is more like a proportionality between $\log\lvert\log e_k\rvert$ and $k$, rather than between $\log e_k$ and $\log k$.
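A minimal numerical sketch of that last remark (synthetic errors with order $p=2$; the slope of $\log\lvert\log e_k\rvert$ against $k$ is roughly $\log p$, while a log-log fit in $k$ gives something unrelated):

import numpy as np

C, p_true = 0.7, 2.0
e = [1e-1]
for _ in range(6):
    e.append(C * e[-1]**p_true)     # e_{k+1} = C * e_k**p with p = 2
e = np.array(e)
k = np.arange(len(e))

# log|log e_k| grows roughly linearly in k with slope log(p) ...
slope = np.polyfit(k, np.log(-np.log(e)), 1)[0]
print(np.exp(slope))                # ~2, the order p

# ... whereas a log-log fit of e_k against k does not recover p
print(-np.polyfit(np.log(k[1:]), np.log(e[1:]), 1)[0])   # far from 2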

Raziman T V
  • 326
  • 1
  • 2
  • 8