""" ab4(ivp,n)Apply the Adams-Bashforth 4th order method to solve the given IVPusing `n` time steps. Returns a vector of times and a vector ofsolution values."""functionab4(ivp,n) # Time discretization. a,b = ivp.tspan h = (b-a)/n t = [ a + i*h for i in0:n ]# Constants in the AB4 method. k =4; σ = [55,-59,37,-9]/24;# Find starting values by RK4. u =fill(float(ivp.u0),n+1) rkivp =ODEProblem(ivp.f,ivp.u0,(a,a+(k-1)*h),ivp.p) ts,us =rk4(rkivp,k-1) u[1:k] .= us# Compute history of u' values, from newest to oldest. f = [ ivp.f(u[k-i],ivp.p,t[k-i]) for i in1:k-1 ]# Time stepping.for i in k:n f = [ ivp.f(u[i],ivp.p,t[i]), f[1:k-1]... ] # new value of du/dt u[i+1] = u[i] +h*sum(f[j]*σ[j] for j in1:k) # advance a stependreturn t,uend
Example
We study the convergence of AB4 using the IVP \(u'=\sin[(u+t)^2]\) over \(0\le t \le 4\), with \(u(0)=-1\). As usual, `solve` is called to give an accurate reference solution.
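For concreteness, the setup might look like the following sketch. It assumes the `OrdinaryDiffEq` package supplies `ODEProblem`, `solve`, and the `Tsit5` integrator, and that `Plots`, `PrettyTables`, and `LinearAlgebra` are loaded for the plotting, table, and norm calls used below; the tight tolerances are an illustrative choice for a reference solution.

Code

using OrdinaryDiffEq, Plots, PrettyTables, LinearAlgebra   # assumed environment

f = (u, p, t) -> sin((t + u)^2)           # right-hand side of the IVP
ivp = ODEProblem(f, -1.0, (0.0, 4.0))     # u(0) = -1 on 0 ≤ t ≤ 4
u_ref = solve(ivp, Tsit5(), reltol=1e-13, abstol=1e-13)   # reference solution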
Now we perform a convergence study of the AB4 code.
Code
n = [round(Int, 4*10^k) for k in 0:0.5:3]
errs = []
for n in n    # loop-local n shadows the vector of step counts
    t, u = ab4(ivp, n)
    push!(errs, norm(u_ref.(t) - u, Inf))
end
pretty_table([n errs], header=["n", "inf-norm error"])
The method should converge as \(O(h^4)\), so a log-log scale is appropriate for the errors.
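On such axes, an error behaving like \(C n^{-4}\) appears as a straight line of slope \(-4\), since \(\log(\text{error}) \approx \log C - 4\log n\).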
Code
plot(n, errs, m=3, label="AB4",
    xaxis=(:log10, "n"), yaxis=(:log10, "inf-norm error"),
    title="Convergence of AB4", leg=:bottomleft)
plot!(n, (n/n[1]).^(-4), l=:dash, label="O(n^{-4})")
Implicit methods
Implementing an implicit multistep method is more difficult. Consider the second-order implicit formula AM2, also known as the trapezoid method. To advance from step \(i\) to \(i+1\), we need to solve for \(\mathbf{z}\) in the equation
\[\mathbf{z} - \tfrac{1}{2} h \mathbf{f}(t_{i+1},\mathbf{z}) = \mathbf{u}_i + \tfrac{1}{2} h \mathbf{f}(t_i,\mathbf{u}_i)\]
This equation can be written as \(\mathbf{g}(\mathbf{z})=\boldsymbol{0}\), so the rootfinding methods of Chapter 4 can be used. The new value \(\mathbf{u}_{i+1}\) is equal to the root of this equation.
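Concretely, subtracting the right-hand side gives the function handed to the rootfinder:

\[\mathbf{g}(\mathbf{z}) = \mathbf{z} - \tfrac{1}{2} h\, \mathbf{f}(t_{i+1},\mathbf{z}) - \mathbf{u}_i - \tfrac{1}{2} h\, \mathbf{f}(t_i,\mathbf{u}_i).\]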
Code
""" am2(ivp,n)Apply the Adams-Moulton 2nd order method to solve given IVP using`n` time steps. Returns a vector of times and a vector ofsolution values."""functionam2(ivp,n)# Time discretization. a,b = ivp.tspan h = (b-a)/n t = [ a + i*h for i in0:n ]# Initialize output. u =fill(float(ivp.u0),n+1)# Time stepping.for i in1:n# Data that does not depend on the new value. known = u[i] + h/2*ivp.f(u[i],ivp.p,t[i])# Find a root for the new value. g = z -> z - h/2*ivp.f(z,ivp.p,t[i+1]) - known unew =levenberg(g,known) u[i+1] = unew[end]endreturn t,uend
Stiff problems
At each time step, an implicit IVP solver requires a rootfinding iteration, which in turn requires multiple evaluations of the function \(\mathbf{f}\). On a per-step basis, this makes an implicit method much more expensive than an explicit one.
Given this drawback, you are justified in wondering whether implicit methods are ever competitive!
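The comparison that follows was presumably run on a model problem with widely separated time scales; a representative setup might look like the following sketch. The flame-propagation equation \(u'=u^2-u^3\), the small initial value, and the time interval are assumptions for illustration, not given in this excerpt.

Code

# Assumed stiff test problem: long slow growth, then one fast transition.
ivp = ODEProblem((u, p, t) -> u^2 - u^3, 0.005, (0.0, 400.0))

tI, uI = am2(ivp, 200)    # implicit AM2 solution used in the plot below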
Once the solution starts to take off, the AB4 result goes catastrophically wrong.
We hope that AB4 will converge in the limit \(h\to 0\), so let’s try using more steps.
Code
plt = scatter(tI, uI, label="AM2, n=200", m=3,
    xlabel="t", ylabel="u(t)", leg=:bottomright)
for n in [1000, 1600]
    tE, uE = ab4(ivp, n)
    plot!(tE, uE, label="AB4, n=$n")
end
plt
So AB4, which is supposed to be more accurate than AM2, actually needs something like 8 times as many steps to get a reasonable-looking answer!
This may seem like a contradiction to our understanding of orders of accuracy, but it is not.
A fourth-order explicit formula is more accurate than a second-order implicit one, in the limit \(h\to 0\). But there is another limit to consider, \(t\to \infty\) with \(h\) fixed, and in this one the implicit method wins!
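A standard way to make this concrete (using Euler and backward Euler here rather than AB4 and AM2) is the scalar model problem \(u'=\lambda u\) with \(\operatorname{Re}\lambda < 0\). Euler's method gives \(u_{i+1}=(1+h\lambda)\,u_i\), which remains bounded as \(i\to\infty\) only when \(|1+h\lambda|\le 1\), i.e., only for sufficiently small \(h\); backward Euler gives \(u_{i+1}=(1-h\lambda)^{-1}u_i\), which is bounded for every \(h>0\).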
Problems for which implicit methods are much more efficient than their explicit counterparts are called stiff. A sure sign of stiffness is the presence of phenomena on widely different time scales.
In this example, there are two slow periods during which the solution changes very little, interrupted by a very fast transition in the state. An explicit method “thinks” that the step size must always be dictated by the time scale of the fast transition, whereas an implicit method can take large steps during the slow periods.
Zero-stability of multistep methods
For one-step methods such as Runge–Kutta, it is guaranteed that the method converges and that the global error is of the same order as the local truncation error.
For multistep methods, however, a new wrinkle is introduced.
Example
One can check that the two-step method LIAF, given by
\[\mathbf{u}_{i+1} = -4\mathbf{u}_i + 5\mathbf{u}_{i-1} + h\left(4\mathbf{f}_i + 2\mathbf{f}_{i-1}\right),\]
has order of accuracy equal to 3. Yet applying it to even a simple, smooth IVP produces wildly growing values within a few steps.
It’s clear that the solution is growing exponentially in time.
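Here is a minimal sketch of such an experiment, applying the LIAF formula above to the illustrative test problem \(u'=u\), \(u(0)=1\) (an assumption, not necessarily the problem used in the text), with exact values supplied for the two starting steps.

Code

# Apply LIAF to u' = u, u(0) = 1 on [0,1]; the exact solution is exp(t).
f = (u, t) -> u
n = 30;  h = 1/n
t = [i*h for i in 0:n]
u = zeros(n+1)
u[1] = 1.0
u[2] = exp(h)      # exact value for the second starting step
for i in 2:n
    u[i+1] = -4u[i] + 5u[i-1] + h*(4f(u[i], t[i]) + 2f(u[i-1], t[i-1]))
end
u[end]    # astronomically larger than the exact exp(1) ≈ 2.72

Even though the starting values are essentially exact, truncation and roundoff errors are amplified by a factor of about 5 at every step, because the characteristic polynomial \(\zeta^2 + 4\zeta - 5\) of the formula has the root \(\zeta=-5\).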
Zero-stability
Here is the crucial property that LIAF lacks.
Definition: Zero-stability of a multistep IVP method
A multistep method is zero-stable if, as \(h\to 0\), every numerical solution produced by the method remains bounded throughout \(a\le t_i \le b\).
Without zero-stability, any truncation or roundoff error will get exponentially amplified and eventually overwhelm convergence to the exact solution.
Dahlquist theorems
It turns out that lacking zero-stability is the only thing that can go wrong for a consistent multistep method.
Theorem: Dahlquist equivalence
A linear multistep method converges as \(h\to 0\) if and only if it is consistent and zero-stable.
The Dahlquist equivalence theorem is one of the most important and celebrated in the history of numerical analysis. It can be proved more precisely that a zero-stable, consistent method is convergent in the same sense as Euler’s method, with the error between numerical and exact solutions being of the same order as the local truncation error, for a wide class of problems.
Accuracy vs Stability
You may have noticed that the Adams and BD formulas use only about half of the available data from the past \(k\) steps, i.e., they have many possible coefficients set to zero. For instance, a \(k\)-step AB method uses only the \(f_j\)-values and has order \(k\).
The order could be made higher by also using \(u_j\)-values, like the LIAF method does for \(k=2\). Also like the LIAF method, however, such attempts are doomed by instability.
Theorem: First Dahlquist stability barrier
The order of accuracy \(p\) of a stable \(k\)-step linear multistep method satisfies
\[p \le
\begin{cases}
k+2 & \text{if $k$ is even},\\
k+1 & \text{if $k$ is odd},\\
k & \text{if the method is explicit.}
\end{cases}\]
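For example, the explicit \(k\)-step AB formula attains the maximum order \(p=k\) allowed for explicit methods, while the trapezoid formula AM2, viewed as a \(k=1\) multistep method, attains \(p=2=k+1\).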
The lesson of this result is that accuracy is not the only important feature of a method, and trying to optimize for it alone leads to failure.