One of the most common applications of interpolants is numerical differentiation – approximating the derivative of a function.
While a finite-difference formula yields a derivative estimate at a single point, in practice it is typically applied at many points simultaneously, using a large, evenly spaced grid of nodes.
The simplest examples are the forward difference \(f'(x) \approx \frac{f(x+h) - f(x)}{h}\) and the backward difference \(f'(x) \approx \frac{f(x) - f(x-h)}{h}\).
We evaluate \(f\) at five equally spaced nodes with spacing \(h = 1/4\): \(f(0) = 0\), \(f(1/4) = 1/16\), \(f(1/2) = 1/4\), \(f(3/4) = 9/16\), \(f(1) = 1\).
The forward difference estimates are \(f'(0) \approx 1/4\), \(f'(1/4) \approx 3/4\), \(f'(1/2) \approx 5/4\), \(f'(3/4) \approx 7/4\).
The backward difference estimates are \(f'(1/4) \approx 1/4\), \(f'(1/2) \approx 3/4\), \(f'(3/4) \approx 5/4\), \(f'(1) \approx 7/4\).
Notice that the two sets of difference quotients contain the same values, but they are interpreted as derivative estimates at different nodes.
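To make this concrete, here is a minimal sketch (my own, using only the sampled values listed above) that computes both sets of estimates:

y = [0, 1/16, 1/4, 9/16, 1]                  # f(0), f(1/4), f(1/2), f(3/4), f(1)
h = 1/4
fd = [ (y[i+1] - y[i]) / h for i in 1:4 ]    # forward estimates at x = 0, 1/4, 1/2, 3/4
bd = [ (y[i] - y[i-1]) / h for i in 2:5 ]    # backward estimates at x = 1/4, 1/2, 3/4, 1
@show fd bd;                                 # identical values, attached to different nodes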
The centered difference formula offers a symmetric alternative to forward and backward differences.
For simplicity, let’s set \(x = 0\). Note that in this case, the forward difference formula estimates \(f'(0)\) as the slope of the line through \((0,f(0))\) and \((h,f(h))\), and the backward difference formula estimates \(f'(0)\) as the slope of the line through \((-h,f(-h))\) and \((0,f(0))\).
That is, in both cases we differentiate the linear interpolant through two points and use its slope as the derivative estimate.
Consider now the shortest centered formula with \(p = q = 1\). Here, we have three function values (three nodes) to use, so one option is to interpolate all three points and differentiate this quadratic function, \[Q(x) = \frac{x(x-h)}{2h^2} f(-h) - \frac{x^2 - h^2}{h^2} f(0) + \frac{x(x+h)}{2h^2}f(h).\] This leads us to a centered difference formula \[f'(0) \approx Q'(0) = \frac{f(h) - f(-h)}{2h}\] which has \(a_{-1} = -1/2\), \(a_0 = 0\), and \(a_1 = 1/2\).
In principle, we can derive any finite-difference formula in this way: interpolate points on the graph of the function, then differentiate the interpolant exactly. The main motivation for increasing the number of nodes involved in a finite-difference formula is to improve the order of accuracy, which we will define soon.
For example, for \(f(x) = e^{\sin x}\) we have \(f'(0) = 1\) exactly, so the second-order and fourth-order centered formulas can be checked directly:

f = x -> exp(sin(x));
h = 0.05
CD2 = ( -f(-h) + f(h) ) / 2h
CD4 = (f(-2h) - 8f(-h) + 8f(h) - f(2h)) / 12h
@show (CD2, CD4);
(CD2, CD4) = (0.9999995835069508, 1.0000016631938748)
The corresponding first-order and second-order forward-difference estimates:

FD1 = ( -f(0) + f(h) ) / h
FD2 = (-3f(0) + 4f(h) - f(2h)) / 2h
@show (FD1, FD2);
(FD1, FD2) = (1.024983957209069, 1.0000996111012461)
And the first-order and second-order backward-difference estimates:

BD1 = ( -f(-h) + f(0) ) / h
BD2 = ( f(-2h) - 4f(-h) + 3f(0) ) / 2h
@show (BD1, BD2);
(BD1, BD2) = (0.9750152098048326, 0.9999120340342049)
To estimate a second (or higher) derivative, you might consider stacking finite differences of finite differences, e.g., \[f''(0) \approx \frac{f(-2h) - 2f(0) + f(2h)}{4h^2}.\]
However, this formula uses values at \(\pm 2h\) rather than the closer values at \(\pm h\). A better tactic is to return to the interpolant and compute its second (or higher) derivative exactly.
This yields \[f''(0) \approx \frac{f(-h) - 2f(0) + f(h)}{h^2},\] which is the simplest centered second-difference formula.
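As a quick check (my own sketch, not part of the original demos), we can compare the two formulas for \(f(x) = e^{\sin x}\), for which \(f''(0) = 1\) exactly:

f = x -> exp(sin(x))
h = 0.05
wide   = (f(-2h) - 2f(0) + f(2h)) / (4h^2)   # stacked differences, nodes at ±2h
narrow = (f(-h)  - 2f(0) + f(h))  / h^2      # from the quadratic interpolant, nodes at ±h
@show (wide, narrow);

Both estimates are second-order accurate, but the \(\pm h\) version has a leading error constant four times smaller.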
Equally spaced nodes are a common and convenient situation, but node locations can be arbitrary. The general form of a finite-difference formula is \[f^{(m)}(0) \approx \sum_{k=0}^r c_{k,m} f(t_k).\] The weights can be derived from the interpolate-and-differentiate process, but the algebra is complicated. However, there is a nice recursive procedure, known as Fornberg’s algorithm, that calculates the weights for any desired formula.
"""
fdweights(t,m)
Compute weights for the `m`th derivative of a function at zero using values at the nodes in vector `t`.
"""
function fdweights(t,m)
# Recursion for one weight.
function weight(t,m,r,k)
# Inputs
# t: vector of nodes
# m: order of derivative sought
# r: number of nodes to use from t
# k: index of node whose weight is found
if (m<0) || (m>r) # undefined coeffs must be zero
= 0
c elseif (m==0) && (r==0) # base case of one-point interpolation
= 1
c else
if k<r
= (t[r+1]*weight(t,m,r-1,k) - m*weight(t,m-1,r-1,k))/(t[r+1]-t[k+1])
c else
= r > 1 ? prod(t[r]-x for x in t[1:r-1]) : 1
numer = r > 0 ? prod(t[r+1]-x for x in t[1:r]) : 1
denom = numer/denom
β = β*(m*weight(t,m-1,r-1,r-1) - t[r]*weight(t,m,r-1,r-1))
c end
end
return c
end
= length(t) - 1
r = zeros(size(t))
w return [ weight(t,m,r,k) for k=0:r ]
end
fdweights
t = [ 0.35, 0.5, 0.57, 0.6, 0.75 ]   # nodes
f = x -> cos(x^2)
dfdx = x -> -2*x*sin(x^2)
exact_value = dfdx(0.5)
-0.24740395925452294
w = fdweights(t.-0.5, 1)
5-element Vector{Float64}:
-0.5303030303030298
-21.61904761904763
45.09379509379508
-23.333333333333307
0.38888888888888845
fd_value = dot(w, f.(t))
-0.247307422906135
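As a sanity check on the recursion (my addition, not in the original), the weights for a second or first derivative on the equispaced nodes \(-1, 0, 1\) should reproduce the familiar centered stencils, which get scaled by powers of \(1/h\) for spacing \(h\):

fdweights([-1.0, 0.0, 1.0], 2)    # expected: approximately [1.0, -2.0, 1.0]
fdweights([-1.0, 0.0, 1.0], 1)    # expected: approximately [-0.5, 0.0, 0.5]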
As the spacing \(h \rightarrow 0\), the finite-difference formulas converge to the correct derivative, but on a fixed interval the number of nodes (and function evaluations) grows without bound. We therefore want to choose \(h\) as large as possible while still achieving acceptable accuracy.
The forward difference formula \(f'(0) \approx \frac{f(h) - f(0)}{h}\) yields
\[\begin{align*} \tau_f(h) &= f'(0) - \frac{f(h) - f(0)}{h} \\&= f'(0) - h^{-1}\left[\left(f(0) + h f'(0) + \frac12 h^2 f''(0) + \cdots\right) - f(0)\right] \\&= -\frac12 h f''(0) + O(h^2). \end{align*}\]
The truncation error is \(O(h)\) as \(h \rightarrow 0\).
The forward-difference formula \(f'(0) \approx \frac{f(h) - f(0)}{h}\) is first-order accurate.
f = x -> sin(exp(x+1))
exact_value = exp(1)*cos(exp(1))    # exact derivative at x = 0
-2.478349732955235
h = [ 5/10^n for n in 1:6 ]
FD1 = []; FD2 = [];
for h in h
    push!(FD1, (f(h)-f(0)) / h )
    push!(FD2, (f(h)-f(-h)) / 2h )
end
pretty_table([h FD1 FD2], header=["h","FD1","FD2"])
┌────────┬──────────┬──────────┐
│ h │ FD1 │ FD2 │
├────────┼──────────┼──────────┤
│ 0.5 │ -2.76858 │ -1.97047 │
│ 0.05 │ -2.6128 │ -2.47552 │
│ 0.005 │ -2.49211 │ -2.47832 │
│ 0.0005 │ -2.47973 │ -2.47835 │
│ 5.0e-5 │ -2.47849 │ -2.47835 │
│ 5.0e-6 │ -2.47836 │ -2.47835 │
└────────┴──────────┴──────────┘
error_FD1 = @. exact_value - FD1
error_FD2 = @. exact_value - FD2
table = [h error_FD1 error_FD2]
pretty_table(table, header=["h","error in FD1","error in FD2"])
┌────────┬──────────────┬──────────────┐
│ h │ error in FD1 │ error in FD2 │
├────────┼──────────────┼──────────────┤
│ 0.5 │ 0.290226 │ -0.507878 │
│ 0.05 │ 0.134446 │ -0.00282948 │
│ 0.005 │ 0.0137555 │ -2.80378e-5 │
│ 0.0005 │ 0.00137813 │ -2.80353e-7 │
│ 5.0e-5 │ 0.000137838 │ -2.80297e-9 │
│ 5.0e-6 │ 1.37841e-5 │ 1.53291e-11 │
└────────┴──────────────┴──────────────┘
plot(h, abs.([error_FD1 error_FD2]), m=:o, label=["FD1" "FD2"],
    xflip=true, xaxis=(:log10,"h"), yaxis=(:log10,"error"),
    title="Convergence of finite differences", leg=:bottomleft)
plot!(h, [h, h.^2], l=:dash, label=["O(h)" "O(h^2)"])
In addition to the truncation error (the planned and understood error), there is round-off error. Consider the forward difference \[\delta(h) = \frac{f(x+h) - f(x)}{h}.\]
As \(h \rightarrow 0\), the numerator approaches zero even though the values involved are not necessarily near 0. This is the recipe for subtractive cancellation error! In fact, these formulas are inherently ill-conditioned as \(h \rightarrow 0\).
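A small illustration of the cancellation (my own example, not from the original demo): computing \(e^h - 1\) directly loses leading digits as \(h \to 0\), which we can see by comparing against the cancellation-free expm1:

for h in [1e-6, 1e-10, 1e-14]
    naive    = exp(h) - 1        # suffers subtractive cancellation
    accurate = expm1(h)          # evaluates the same quantity without cancellation
    println("h = $h:  relative error ≈ ", abs(naive - accurate) / accurate)
end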
Recall that the condition number for the problem of computing \(f(x+h) - f(x)\) is \[\kappa(h) = \frac{\max\{|f(x+h)|,|f(x)|\}}{|f(x+h)-f(x)|}.\]
Hence, the numerical value we actually compute for \(\delta\) is
\[\begin{align*} \tilde{\delta}(h) &= \frac{f(x+h) - f(x)}{h} (1 + \kappa(h)\epsilon_{\text{mach}}) \\&= \delta(h) + \frac{\max\{|f(x+h)|,|f(x)|\}}{|f(x+h)-f(x)|} \frac{f(x+h) - f(x)}{h} \epsilon_{\text{mach}} \end{align*}\]
As \(h \rightarrow 0\), \[|\tilde{\delta}(h) - \delta(h)| = \frac{\max\{|f(x+h)|,|f(x)|\}}{h} \epsilon_{\text{mach}} \sim |f(x)| \epsilon_{\text{mach}} h^{-1}.\]
Combining the truncation and roundoff errors yields \[|f'(x) - \tilde{\delta}(h)| \le |\tau_f(h)| + |f(x)| \epsilon_{\text{mach}} h^{-1}.\]
While the truncation error decreases as \(h\) decreases, the roundoff error actually increases due to subtractive cancellation.
At around \(h \approx K \sqrt{\epsilon_{\text{mach}}}\), these errors are roughly the same size. For this first-order finite-difference method, the optimal spacing between nodes is proportional to \(\sqrt{\epsilon_{\text{mach}}}\).
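A rough numerical check (my own sketch, assuming the constant \(K\) is of modest size): for \(f(x) = e^x\) at \(x = 0\), scan over \(h\) and locate where the forward-difference error is smallest; it should land within an order of magnitude of \(\sqrt{\epsilon_{\text{mach}}}\).

f = x -> exp(x)
hs = [ 1/10^n for n in 1:14 ]
errs = [ abs((f(h) - f(0))/h - 1) for h in hs ]    # exact f'(0) = 1
@show hs[argmin(errs)] sqrt(eps());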
f = x -> exp(-1.3*x);
exact = -1.3

h = [ 1/10^n for n in 1:12 ]
FD1, FD2, FD4 = [], [], []
for h in h
    nodes = h*(-2:2)
    vals = @. f(nodes)
    push!(FD1, dot([ 0     0   -1   1    0   ]/h, vals))
    push!(FD2, dot([ 0   -1/2   0   1/2  0   ]/h, vals))
    push!(FD4, dot([ 1/12 -2/3  0   2/3 -1/12]/h, vals))
end

table = [h FD1 FD2 FD4]
pretty_table(table[1:4,:], header=["h","FD1","FD2","FD4"])
┌────────┬──────────┬──────────┬──────────┐
│ h │ FD1 │ FD2 │ FD4 │
├────────┼──────────┼──────────┼──────────┤
│ 0.1 │ -1.21905 │ -1.30366 │ -1.29999 │
│ 0.01 │ -1.29159 │ -1.30004 │ -1.3 │
│ 0.001 │ -1.29916 │ -1.3 │ -1.3 │
│ 0.0001 │ -1.29992 │ -1.3 │ -1.3 │
└────────┴──────────┴──────────┴──────────┘
err = @. abs([FD1 FD2 FD4] - exact)

plot(h, err, m=:o, label=["FD1" "FD2" "FD4"],
    xaxis=(:log10,"h"), xflip=true, yaxis=(:log10,"error"),
    title="FD error with roundoff", legend=:bottomright)
plot!(h, 0.1*eps()./h, l=:dash, color=:black, label="O(h^{-1})")
The error is initially dominated by truncation error, but as \(h\) decreases the roundoff error eventually dominates.