# Chapter 7.  Numerical Methods for Initial Value Problems

7.1. Numerical Solution of Initial Value Problems
7.1.1. Explicit Euler Method (Forward Euler Method)
7.1.2. Numerical Stability and Stability Domain
7.1.3. Modified Euler Method (Trapezoidal Rule)
7.1.4. Implicit Euler method
7.1.5. Summary of Euler’s Method
7.1.6. General One-Step Procedures
7.1.6.1. Classical Runge-Kutta Methods
7.1.6.2. General Runge-Kutta Methods
7.1.6.3. Stability Area of Runge-Kutta Methods of Order 1≤p≤4
7.1.7. Step Size Control
7.1.7.1. Third-Order Runge-Kutta, Two Calculations of an Integration Step
7.1.7.2. Embedded Methods
7.1.8. Linear Multi-Step Methods
7.1.9. Activation of Linear Multi-Step Procedures
7.1.10. System of Differential Equations
7.1.11. BDF Methods
7.1.12. Remarks on Stiff Differential Equations
7.1.13. Implicit Runge-Kutta Methods
7.1.14. Comparison of Methods for Numerical Solution of Initial Value Problems (IVP)

## 7.1.  Numerical Solution of Initial Value Problems

The ordinary differential equations considered so far can, at least in the non-linear case, generally only be solved by numerical methods on a digital computer.

In order to derive appropriate methods, consider the numerical solution of a scalar differential equation of the form

 $\stackrel{˙}{x}=f\left(x,t\right)$ (7.1)

with the initial condition

 $x\left(0\right)={x}_{0}.$ (7.2)

The methods derived in the following sections can be extended to a differential equation of higher order by means of the previously introduced substitution method. Their application to systems of differential equations is also unproblematic.

Numerical methods for the solution of initial value problems determine approximations of the solution $x\left({t}_{n}\right)$ at a series of discrete points in time (grid points) ${t}_{n}$ with

 ${t}_{n}={t}_{n-1}+{h}_{n}.$ (7.3)

where ${h}_{n}$ is the time increment, which may in general change from step to step. For the sake of simplicity, it is referred to as the constant time increment $h$ from now on.

### 7.1.1.  Explicit Euler Method (Forward Euler Method)

One obtains the Forward Euler Method by substituting the differential term on the left-hand side of (7.1) with a difference term:

 ${\stackrel{˙}{x}}_{n}\approx \frac{{x}_{n+1}-{x}_{n}}{h}$ (7.4)

Solving for ${x}_{n+1}$ yields an approximate formula for the value of $x$ at ${t}_{n+1}$:

 ${x}_{n+1}={x}_{n}+h\text{\hspace{0.17em}}f\left({x}_{n},{t}_{n}\right)$ (7.5)

Thus, the solutions can be recursively calculated as follows:

 $\begin{array}{l}{x}_{1}={x}_{0}+hf\left({x}_{0},{t}_{0}\right)\\ {x}_{2}={x}_{1}+hf\left({x}_{1},{t}_{1}\right)\\ ⋮\\ {x}_{n}={x}_{n-1}+hf\left({x}_{n-1},{t}_{n-1}\right)\end{array}$ (7.6)
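The recursion (7.6) can be sketched in a few lines of Python; the function name `euler_explicit` and its arguments are illustrative, not part of the text:

```python
def euler_explicit(f, x0, t0, h, n_steps):
    """Forward Euler for x' = f(x, t): x_{n+1} = x_n + h*f(x_n, t_n), eq. (7.5)."""
    x, t = x0, t0
    xs = [x0]                    # collect the approximations x_0, x_1, ..., x_n
    for _ in range(n_steps):
        x = x + h * f(x, t)      # one explicit Euler step
        t = t + h
        xs.append(x)
    return xs

# Test equation x' = -10 x with x(0) = 1 and h = 0.05 (growth factor 0.5 per step)
xs = euler_explicit(lambda x, t: -10.0 * x, 1.0, 0.0, 0.05, 10)
```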

Hence, an easily applicable and understandable method is made available. But the practical application of this method faces various difficulties:

1. The error which emerges within the process of discretization (substitution of the differential by the difference quotient) strongly depends on the time increments $h$.

2. In some cases one can observe a tendency towards numerical instability. An analysis of this effect is based on the so-called test equation

 $\stackrel{˙}{x}=-\alpha x\text{ }\text{with}\text{ }x\left(0\right)={x}_{0}>0,\text{ }\alpha \in \Re \text{ }\text{and}\text{ }\alpha >0.$ (7.7)

The exact solution

 $x={x}_{0}{e}^{-\alpha t},$ (7.8)

is always positive and converges to zero as $t\to \infty$. The least requirement for a useful approximation method is that its solution also converges to zero and always remains positive.

Applying the forward Euler method to equation (7.7) yields the recursion

 ${x}_{n+1}=\left(1-\alpha h\right){x}_{n}$ (7.9)

which means that the approximate solution remains positive and vanishes as $t\to \infty$ only if $\alpha h<1$.

For $1<\alpha h<2$, the approximate solution still vanishes as $t\to \infty$, but it oscillates around zero. For $\alpha h>2$, the solution grows with each integration step and changes its algebraic sign at the same time. A meaningful numerical solution is therefore only obtained for $h<\frac{1}{\alpha }$. This characteristic of the Euler method is referred to as numerical instability.

Example 7.1: Numerical instability of the Euler method

The solution of the following equation

$\stackrel{˙}{x}=-10x$

should numerically be calculated.

The application of the Euler method in the interval $\left[0,\text{\hspace{0.17em}}16\right]$ with various increments yields the results shown in [Figure 7.2]. Obviously, the error increases drastically if the increment is doubled.

Furthermore, the solution oscillates for $h=0.16$ and finally diverges for $h=0.32$ [Figure 7.3].
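The qualitative behaviour follows directly from the growth factor $1-\alpha h$ of the Euler recursion; this small sketch (plain Python, names illustrative) classifies the step sizes used above:

```python
# For x' = -10*x the explicit Euler recursion is x_{n+1} = (1 - 10*h) * x_n,
# so the growth factor g = 1 - alpha*h decides the qualitative behaviour.
alpha = 10.0
regimes = {}
for h in (0.05, 0.16, 0.32):
    g = 1.0 - alpha * h
    if 0.0 < g < 1.0:
        regimes[h] = "monotone decay"        # alpha*h < 1
    elif -1.0 < g <= 0.0:
        regimes[h] = "oscillating decay"     # 1 <= alpha*h < 2
    else:
        regimes[h] = "divergence"            # alpha*h >= 2
```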

In order to analyze the characteristics of oscillating solutions, one frequently allows $\alpha$ to be specified as a complex number, see the following section.

### 7.1.2.  Numerical Stability and Stability Domain

For the stability analysis of a numerical integration method the linear test equation

 $\stackrel{˙}{x}=\lambda x\text{ }\text{with}\text{ }x\left(0\right)={x}_{0},\text{ }\lambda \in \mathbb{C}\text{ }\text{and}\text{ }\mathrm{Re}\left(\lambda \right)<0$ (7.10)

is applied. The exact solution of this initial value problem

 $x={x}_{0}{e}^{\lambda t}$ (7.11)

decays in magnitude and converges towards zero for $t\to \infty$, since $\mathrm{Re}\left(\lambda \right)<0$. If one tries to solve the test equation by an integration method, the least requirement for a useful approximate solution is that it likewise converges towards zero.

Definition 7.1

A numerical integration method is called (numerically) stable in a domain of the complex number field, if the series $\left\{{x}_{n}\right\}$ of the approximate solutions decreases absolutely at the points of time ${t}_{n}$ for ${t}_{n}\to \infty$ (according to the behaviour of the exact solution).

Let us analyze the explicit Euler method. By inserting the test equation (7.10) into (7.5), we obtain

 ${x}_{n+1}=\left(1+\lambda \text{\hspace{0.17em}}h\right){x}_{n}={\left(1+\lambda \text{\hspace{0.17em}}h\right)}^{n+1}{x}_{0}.$ (7.12)

According to the stability condition this method is numerically stable if

 $\underset{n\to \infty }{\mathrm{lim}}|{x}_{n+1}|=0.$ (7.13)

This is only valid for

 $\underset{R\left(z\right)=R\left(\lambda h\right)}{\underbrace{|1+\lambda h|}}<1.$ (7.14)

I.e., $\left(1+z\right)$ must lie inside the circle of radius $1.0$ around the origin (the so-called unit circle). Thus, $z$ must lie inside the circle of radius $1.0$ around the point $\left(-1,\text{\hspace{0.17em}}0\right)$ for the explicit Euler method to be numerically stable. The stability domain of the explicit Euler method is depicted in [Figure 7.4]. The explicit Euler method is neither A-stable nor F-stable.
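The stability condition translates directly into a membership test for the stability disc; a minimal sketch (the function name is illustrative):

```python
def euler_stable(z):
    """Explicit Euler applied to the test equation is stable iff |1 + z| < 1,
    where z = lambda * h; this is the disc of radius 1 around (-1, 0)."""
    return abs(1 + z) < 1

assert euler_stable(-1.0)        # centre of the disc (-1, 0)
assert not euler_stable(-3.0)    # real axis, left of the disc
assert not euler_stable(0.5j)    # purely imaginary z is never strictly inside
```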

Definition 7.2

A numerical integration method for differential equations is A-stable or Absolute-stable if and only if its stability domain covers at least the complete left z-half plane.

Definition 7.3

A numerical integration method for differential equations is F-stable or faithfully-stable if and only if its stability domain is exclusively the left z-half plane.

### 7.1.3.  Modified Euler Method (Trapezoidal Rule)

The modified Euler Method results from the application of the Trapezoidal Rule on the integration of the function $f\left(x,t\right)$.

Trapezoidal Rule:

 $\underset{a}{\overset{b}{\int }}f\left(x,t\right)dt\approx \frac{b-a}{2}\left(f\left(a\right)+f\left(b\right)\right)$ (7.15)

applied to $f\left(x,t\right)$, we obtain with $a={t}_{n},\text{ }b={t}_{n+1}$ and $h=b-a$:

 ${x}_{n+1}-{x}_{n}=\underset{{t}_{n}}{\overset{{t}_{n+1}}{\int }}f\left(t,x\right)dt\approx \frac{h}{2}\left(f\left({x}_{n+1},{t}_{n+1}\right)+f\left({x}_{n},{t}_{n}\right)\right)$ (7.16)

and therefore as a calculation rule:

 ${x}_{n+1}={x}_{n}+\frac{h}{2}\left(f\left({x}_{n+1},{t}_{n+1}\right)+f\left({x}_{n},{t}_{n}\right)\right)$ (7.17)

In contrast to the Forward Euler Method, the resulting calculation rule is not explicitly solved for the sought quantity ${x}_{n+1}$. Thus this method is said to be an implicit integration method. The question emerges how to solve (7.17) for ${x}_{n+1}$ in practice.

Case 1: f is linear in x

Obviously, (7.17) can easily be solved for ${x}_{n+1}$.

Example 7.2:

$\stackrel{˙}{x}=ax+\mathrm{cos}\left(t\right)$

We obtain a calculation rule:

${x}_{n+1}={x}_{n}+\frac{h}{2}\left(a{x}_{n+1}+\mathrm{cos}\left({t}_{n+1}\right)+a{x}_{n}+\mathrm{cos}\left({t}_{n}\right)\right)$

In this special case, (7.17) can be solved explicitly for ${x}_{n+1}$. We obtain the following formula

${x}_{n+1}=\frac{{x}_{n}+\frac{h}{2}\left(a{x}_{n}+\mathrm{cos}\left({t}_{n+1}\right)+\mathrm{cos}\left({t}_{n}\right)\right)}{1-\frac{ah}{2}}$.

Case 2: f is a non-linear function of x.

It is only possible in special cases to explicitly solve for ${x}_{n+1}$. Instead, we must generally resort to numerical methods for the solution of non-linear algebraic equation systems. For an iterative solution, a good initial value for ${x}_{n+1}$ is needed. One possibility is the method of successive substitution (fixed-point iteration, Section [Section 9.3]):

 ${x}_{n+1}^{\left(k\right)}={x}_{n}+\frac{h}{2}\left[f\left({x}_{n+1}^{\left(k-1\right)},{t}_{n+1}\right)+f\left({x}_{n},{t}_{n}\right)\right]$ (7.18)

where ${x}_{n+1}^{\left(k\right)}$ denotes the ${k}^{th}$ iterate of ${x}_{n+1}$ and ${x}_{n+1}^{\left(0\right)}$ represents an appropriate initial value for the iteration. We choose the explicit Euler step as initial value, so the first iteration increment is identical to the formula of the explicit Euler rule. As soon as the difference of two successive iterates satisfies $|{x}_{n+1}^{\left(k\right)}-{x}_{n+1}^{\left(k-1\right)}|<\epsilon$, the iteration is stopped. Here, $\epsilon$ is a given accuracy bound which should be chosen near the machine accuracy. However, the stability domain shrinks when the fixed-point iteration is applied, as depicted in [Figure 7.6]. For this reason, the Newton-Raphson iteration method (Section [Section 9.3]) is usually applied instead.

To sum up, we obtain the following calculation rule:

 $\begin{array}{l}{x}_{n+1}^{\left(0\right)}={x}_{n}+h\,f\left({x}_{n},{t}_{n}\right)\\ {x}_{n+1}^{\left(1\right)}={x}_{n}+\frac{h}{2}\left[f\left({x}_{n+1}^{\left(0\right)},{t}_{n+1}\right)+f\left({x}_{n},{t}_{n}\right)\right]\\ {x}_{n+1}^{\left(2\right)}={x}_{n}+\frac{h}{2}\left[f\left({x}_{n+1}^{\left(1\right)},{t}_{n+1}\right)+f\left({x}_{n},{t}_{n}\right)\right]\end{array}$ (7.19)

and so on until convergence is reached.
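The fixed-point iteration with the explicit Euler predictor as start value can be sketched as follows; the function name and tolerances are illustrative choices:

```python
def trapezoidal_step(f, x_n, t_n, h, eps=1e-12, max_iter=50):
    """One step of the implicit trapezoidal rule (7.17), solved by the
    fixed-point iteration (7.18)."""
    t_next = t_n + h
    x_new = x_n + h * f(x_n, t_n)      # x^{(0)}: explicit Euler predictor
    for _ in range(max_iter):
        x_old = x_new
        x_new = x_n + 0.5 * h * (f(x_old, t_next) + f(x_n, t_n))
        if abs(x_new - x_old) < eps:   # stop once successive iterates agree
            break
    return x_new
```

For the test equation $\dot x=-x$ with $h=0.1$ the iterates converge to the exact trapezoidal value $\frac{1-h/2}{1+h/2}\,x_n$, since the contraction factor $h/2$ is well below one.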

The question arises why the modified Euler method has a higher accuracy and an improved stability behavior than the explicit Euler method.

To answer it, we consult the test equation $\stackrel{˙}{x}=-\alpha x$ again. In this case, we obtain the following calculation rule

 ${x}_{n+1}={x}_{n}-\frac{\alpha h}{2}\left({x}_{n+1}+{x}_{n}\right)\text{ }⇒\text{ }{x}_{n+1}=\frac{1-\frac{\alpha h}{2}}{1+\frac{\alpha h}{2}}{x}_{n}$ (7.20)

If we expand the factor in front of ${x}_{n}$ into a Taylor series, we obtain

 $\frac{1-\frac{\alpha h}{2}}{1+\frac{\alpha h}{2}}=1-\alpha h+\frac{1}{2}{\left(\alpha h\right)}^{2}-\frac{1}{4}{\left(\alpha h\right)}^{3}+\dots$ (7.21)

If we apply the same analysis for the forward Euler-method, we obtain

 $1-\alpha h$ (7.22)

as represented above. If we compare the results (7.21) and (7.22) with the exact result

 ${e}^{-\alpha h}=1-\alpha h+\frac{1}{2}{\left(\alpha h\right)}^{2}-\frac{1}{6}{\left(\alpha h\right)}^{3}+\dots$ (7.23)

we obtain the following truncation error for the modified Euler formula

 $\frac{1}{12}{\left(\alpha h\right)}^{3}+\dots$ (7.24)

and for the explicit Euler formula as

 $\frac{1}{2}{\left(\alpha h\right)}^{2}+...$ (7.25)

Thus, the modified Euler formula locally reproduces the exact solution up to terms of second order (method of second order). In contrast, the forward Euler method only represents an approximation up to terms of first order (method of first order).

A further important advantage of the Trapezoidal Rule is its stability behavior. After $n+1$ integration steps one obtains:

 ${x}_{n+1}={\left(\frac{1-\frac{\alpha h}{2}}{1+\frac{\alpha h}{2}}\right)}^{n+1}{x}_{0}$ (7.26)

and the absolute value of the approximate solution decreases monotonically towards zero if

 $|1-\frac{\alpha h}{2}|<|1+\frac{\alpha h}{2}|$ (7.27)

But this is the case for all $\mathrm{Re}\left(\alpha \right)>0$ regardless of $h$, i.e. the trapezoidal rule is stable for arbitrary step sizes. The trapezoidal rule is $A$-stable as well as $F$-stable.

### 7.1.4.  Implicit Euler method

We obtain the implicit Euler method by substituting the forward difference quotient in the explicit Euler method by the backward difference quotient, which yields the calculation rule

 ${x}_{n+1}={x}_{n}+h\,f\left({x}_{n+1},{t}_{n+1}\right).$

The application of this method, too, requires an iteration to calculate ${x}_{n+1}$.
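A backward Euler step solves the implicit equation $x_{n+1}=x_n+h\,f(x_{n+1},t_{n+1})$ for $x_{n+1}$. A minimal sketch for a scalar equation using the Newton-Raphson method; the name `dfdx` (the derivative $\partial f/\partial x$) and the tolerances are assumptions for illustration:

```python
def implicit_euler_step(f, dfdx, x_n, t_n, h, tol=1e-12, max_iter=25):
    """One backward Euler step: solve g(x) = x - x_n - h*f(x, t_{n+1}) = 0
    with Newton-Raphson, starting from the explicit Euler value."""
    t_next = t_n + h
    x = x_n + h * f(x_n, t_n)                  # start value: explicit Euler
    for _ in range(max_iter):
        g = x - x_n - h * f(x, t_next)         # residual of the implicit equation
        dg = 1.0 - h * dfdx(x, t_next)         # its derivative w.r.t. x
        x_new = x - g / dg
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For the linear test equation $\dot x=-10x$, a single Newton step already yields the exact implicit Euler value $x_{n+1}=x_n/(1+10h)$, and the step remains stable for arbitrarily large $h$.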

### 7.1.5.  Summary of Euler’s Method

1. The explicit Euler method is a popular method which is easily applicable and easy to program, especially in real-time operation. The local discretization error behaves like ${h}^{2}$ $\left(O\left({h}^{2}\right)\right)$ as $h\to 0$; the global error behaves like $h$. A great disadvantage is the instability, which can, as the case may be, only be avoided by choosing extremely short increments.

2. The modified Euler method (Trapezoidal Rule) is $A$-stable, the local discretization error behaves like $O\left({h}^{3}\right)$, the global error like $O\left({h}^{2}\right)$. A disadvantage (which is similar to all implicit methods) is the necessity to solve a non-linear equation system within each integration section.

3. In respect to the discretization error, the implicit Euler method behaves like the explicit method, but in contrast to the explicit version, the implicit one has the advantage of being $A$-stable.

### 7.1.6. General One-Step Procedures

The Euler methods discussed above are the simplest methods of the great class of one-step procedures. Here, the approximation ${x}_{n+1}$ at ${t}_{n+1}={t}_{n}+{h}_{n}$ is calculated solely from the approximation ${x}_{n}$ at the point ${t}_{n}$ and the increment ${h}_{n}$.

One-step procedures can generally be written in the form

 ${x}_{n+1}={x}_{n}+{h}_{n}\Phi \left({x}_{n},{t}_{n},{h}_{n}\right)$ (7.28)

with the so-called increment function (method function) $\Phi$.

Example 7.3

For the explicit Euler method it is true:

$\Phi \left({x}_{n},{t}_{n},{h}_{n}\right)=f\left({x}_{n},{t}_{n}\right)$.

To measure the quality of one-step procedures, the following terms are used. Let $x\left(t\right)$ be the exact solution of the initial value problem

$\stackrel{˙}{x}=f\left(x,t\right)\text{\hspace{0.17em}},\text{\hspace{0.17em}}x\left({t}_{0}\right)={x}_{0}$.

Then

$\epsilon \left(h\right):=x\left({t}_{n}+h\right)-\left({x}_{n}+h\text{\hspace{0.17em}}\Phi \left({x}_{n},{t}_{n},{h}_{n}\right)\right)$

is the local error which emerges in one step of the method defined in (7.28).

The procedure is consistent if

$\epsilon \left(h\right)=o\left(h\right)$,

and it is of ${p}^{th}$ order if

$\epsilon \left(h\right)=O\left({h}^{p+1}\right)$.

Annotations 7.1:

If the approximation ${x}_{n}$ of $x\left({t}_{n}\right)$ is calculated with the one-step procedure

${x}_{n+1}={x}_{n}+{h}_{n}\Phi \left({x}_{n},{t}_{n},{h}_{n}\right),\text{ }\text{ }n=0,\dots ,N,$

if $\Phi$ satisfies a Lipschitz condition with respect to $x$ on $\left[a,b\right]×\Re$

$|\Phi \left(x,t,h\right)-\Phi \left(y,t,h\right)|\le L|x-y|$

and if the one-step procedure is consistent of order $p$,

$|x\left(t+h\right)-\left(x\left(t\right)+h\text{\hspace{0.17em}}\Phi \left(x\left(t\right),t,h\right)\right)|\le c{h}^{p+1}$,

then the global error satisfies

$|{\delta }_{n+1}|=|{x}_{n+1}-x\left({t}_{n+1}\right)|\le c\left(b-a\right){e}^{L\left(b-a\right)}{h}^{p}$

with $h:=\underset{j=0,1,...,n+1}{\mathrm{max}}{h}_{j}$.

Expressed in words, this means that the order of the global error is always one lower than that of the local error.

#### 7.1.6.1. Classical Runge-Kutta Methods

A basic disadvantage of Euler’s method is the low accuracy achieved. This demands very short integration increments $h$ and leads to high computing times and an accumulation of round-off errors during the calculation.

Very early endeavors have been made to increase the degree of accuracy. An option is to calculate the right-hand side of differential equations at additional interpolation points.

We consider again the differential equation (7.1). If ${x}_{n}$ is given, we can integrate (7.1) over the interval $\left[{t}_{n},{t}_{n+1}\right]$ in order to calculate the function value ${x}_{n+1}$ at ${t}_{n+1}={t}_{n}+h$:

 ${x}_{n+1}={x}_{n}+\underset{{t}_{n}}{\overset{{t}_{n+1}}{\int }}f\left(x,t\right)dt$ (7.29)

We obtain Runge-Kutta methods if we approximately integrate the right-hand side of (7.29).

Runge-Kutta Method of Second Order

The following statement results from the application of the Trapezoidal Rule to the approximate integration of the integral in (7.29):

 $\underset{{t}_{n}}{\overset{{t}_{n+1}}{\int }}f\left(x,t\right)dt\approx \frac{h}{2}\left[f\left({x}_{n},{t}_{n}\right)+f\left({x}_{n+1},{t}_{n+1}\right)\right]$ (7.30)

The value for ${x}_{n+1}$ is unknown, thus the term $f\left({x}_{n+1},{t}_{n+1}\right)$ will be approximated by the explicit Euler method. Thus, we obtain the following formula

 ${x}_{n+1}={x}_{n}+\frac{h}{2}\left[f\left({x}_{n},{t}_{n}\right)+f\left({x}_{n}+hf\left({x}_{n},{t}_{n}\right),{t}_{n+1}\right)\right]$ (7.31)

which will usually be formulated in the following calculation formula:

 $\begin{array}{l}{k}_{1}=hf\left({x}_{n},{t}_{n}\right),\\ {k}_{2}=hf\left({x}_{n}+{k}_{1},{t}_{n+1}\right),\\ {x}_{n+1}={x}_{n}+\frac{1}{2}\left[{k}_{1}+{k}_{2}\right].\end{array}$ (7.32)

The Runge-Kutta method at hand is also referred to as a predictor-corrector method on the basis of the Euler method. In this context, the explicit Euler method plays the role of the predictor whereas the Trapezoidal Rule takes the role of the corrector.
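The calculation formula above translates directly into code; a minimal sketch with illustrative names:

```python
def rk2_step(f, x_n, t_n, h):
    """Second-order Runge-Kutta step (7.32): explicit Euler predictor,
    trapezoidal corrector."""
    k1 = h * f(x_n, t_n)
    k2 = h * f(x_n + k1, t_n + h)
    return x_n + 0.5 * (k1 + k2)
```

For $\dot x=-x$, $x_0=1$, $h=0.1$ this returns $1-h+h^2/2=0.905$, matching the exact $e^{-0.1}\approx 0.9048$ up to the $h^3$ term.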

To determine the accuracy, with which this method discretizes the differential equation, we develop $x$ in the surrounding of ${t}_{n}$ with the Taylor series

 ${x}_{n+1}={x}_{n}+hf+\frac{{h}^{2}}{2}\left[{f}_{t}+{f}_{x}f\right]+\frac{{h}^{3}}{6}\left[{f}_{tt}+2{f}_{tx}f+{f}_{xx}{f}^{2}+{f}_{t}{f}_{x}+{f}_{x}{}^{2}f\right]+O\left({h}^{4}\right)$ (7.33)

at the same time, the partial derivatives of f will be shortened by:

${f}_{t}={\left[\frac{\partial f}{\partial t}\right]}_{\begin{array}{l}t={t}_{n}\\ x={x}_{n}\end{array}},\text{ }{f}_{x}={\left[\frac{\partial f}{\partial x}\right]}_{\begin{array}{l}t={t}_{n}\\ x={x}_{n}\end{array}},$ etc.

For comparison, we expand (7.31) into the corresponding Taylor series

 ${x}_{n+1}={x}_{n}+hf+\frac{{h}^{2}}{2}\left[{f}_{t}+{f}_{x}f\right]+\frac{{h}^{3}}{4}\left[{f}_{tt}+2{f}_{tx}f+{f}_{xx}{f}^{2}\right]+O\left({h}^{4}\right)$ (7.34)

By comparing (7.33) with (7.34), it becomes clear that the error which occurs with every integration step is proportional to ${h}^{3}$.

Hereafter, the remaining Runge-Kutta methods will only be specified by their error order and calculation formula, without derivation.

Runge-Kutta methods of third order

Calculation formula:

 $\begin{array}{l}{k}_{1}=hf\left({x}_{n},{t}_{n}\right)\\ {k}_{2}=hf\left({x}_{n}+\frac{{k}_{1}}{2},{t}_{n}+\frac{h}{2}\right)\\ {k}_{3}=hf\left({x}_{n}-{k}_{1}+2{k}_{2},{t}_{n}+h\right)\\ {x}_{n+1}={x}_{n}+\frac{1}{6}\left({k}_{1}+4{k}_{2}+{k}_{3}\right)\end{array}$ (7.35)

Local discretization error: $O\left({h}^{4}\right)$.

Runge-Kutta Method of Fourth Order

This method is based on Simpson’s $1/3$-Rule and yields

 $\begin{array}{l}{k}_{1}=hf\left({x}_{n},{t}_{n}\right)\\ {k}_{2}=hf\left({x}_{n}+\frac{{k}_{1}}{2},{t}_{n}+\frac{h}{2}\right)\\ {k}_{3}=hf\left({x}_{n}+\frac{{k}_{2}}{2},{t}_{n}+\frac{h}{2}\right)\\ {k}_{4}=hf\left({x}_{n}+{k}_{3},{t}_{n}+h\right)\\ {x}_{n+1}={x}_{n}+\frac{1}{6}\left({k}_{1}+2{k}_{2}+2{k}_{3}+{k}_{4}\right)\end{array}$ (7.36)

The local error order of this method is $O\left({h}^{5}\right)$.

This version of the Runge-Kutta method is also referred to as the classical Runge-Kutta method.
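The four-stage formula (7.36) as a sketch (names illustrative):

```python
def rk4_step(f, x_n, t_n, h):
    """One step of the classical fourth-order Runge-Kutta method (7.36)."""
    k1 = h * f(x_n, t_n)
    k2 = h * f(x_n + k1 / 2, t_n + h / 2)
    k3 = h * f(x_n + k2 / 2, t_n + h / 2)
    k4 = h * f(x_n + k3, t_n + h)
    return x_n + (k1 + 2 * k2 + 2 * k3 + k4) / 6
```

Applied to the linear test equation, one step reproduces exactly the first five terms of the exponential series, consistent with the local error order $O(h^5)$.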

Stability Observation

In order to analyze the Runge-Kutta methods, we consider again the test equation, at first with a real $\alpha$

 $\stackrel{˙}{x}=-\alpha x\text{ }\text{with}\text{ }\alpha >0.$ (7.37)

For a given value ${x}_{n}$, the exact solution at ${t}_{n+1}$ is

 ${x}_{n+1}={e}^{-\alpha h}{x}_{n}$ (7.38)

By applying the Runge-Kutta method of fourth order to equation (7.37), we obtain the following statement

 ${x}_{n+1}=\underset{\gamma }{\underbrace{\left[1-\alpha h+\frac{1}{2}{\left(\alpha h\right)}^{2}-\frac{1}{6}{\left(\alpha h\right)}^{3}+\frac{1}{24}{\left(\alpha h\right)}^{4}\right]}}\text{ }\text{\hspace{0.17em}}{x}_{n}$ (7.39)

It is obvious that the factor $\gamma$ contains just the first five summands of the power series expansion of ${e}^{-\alpha h}$. A comparison with the true value of the exponential function reveals that the deviation from the true value increases as $\alpha h$ grows, and that instability occurs for $\alpha h>2.785$, [Figure 7.8]. In that case the numerical solution increases with each integration step $\left(|\gamma |>1\right)$ while the true solution decreases $\left({e}^{-\alpha h}<1\right)$.

We obtain similar results when we apply the same analysis to the other Runge-Kutta methods.

As with the Euler method, we can more generally consider the case that $\alpha$ is a complex number, which also covers the behavior of oscillating solutions. In this case, we do not obtain an interval of the real axis as the stable or unstable region, but a region in the complex plane, [Figure 7.10]

#### 7.1.6.2. General Runge-Kutta Methods

The one-step methods discussed in section [Section 7.1.6.1] for the ODE

 $\stackrel{˙}{x}=f\left(x,t\right)$ (7.40)

can be generally represented for a method with $m$ function evaluations ($m$-stage method) as

 ${x}_{n+1}={x}_{n}+h\text{\hspace{0.17em}}\sum _{j=1}^{m}{\beta }_{j}{k}_{j}$ (7.41)

 ${k}_{j}=f\left({t}_{n}+{\varsigma }_{j}h,{x}_{n}+h\sum _{l=1}^{m}{\gamma }_{jl}{k}_{l}\right),\text{ }\text{ }j=1,..,m$ (7.42)

The values ${x}_{nj}={x}_{n}+h\sum _{l=1}^{m}{\gamma }_{jl}{k}_{l}$ can be interpreted as approximations of the solution at the intermediate points in time ${t}_{n}+{\varsigma }_{j}h$ within one integration step.

We choose the coefficients such that the exact solution is approximated as accurately as possible. A clear representation of the coefficients ${\beta }_{i},\text{\hspace{0.17em}}{\varsigma }_{i},\text{\hspace{0.17em}}{\gamma }_{ij},\text{ }1\le i,j\le m$ is frequently given by the following scheme (Butcher scheme)

 $\begin{array}{cccccc}{\varsigma }_{1}& {\gamma }_{11}& {\gamma }_{12}& \cdots & \cdots & {\gamma }_{1m}\\ {\varsigma }_{2}& {\gamma }_{21}& {\gamma }_{22}& \cdots & \cdots & {\gamma }_{2m}\\ ⋮& ⋮& ⋮& & & ⋮\\ ⋮& ⋮& ⋮& & & ⋮\\ {\varsigma }_{m}& {\gamma }_{m1}& {\gamma }_{m2}& \cdots & \cdots & {\gamma }_{mm}\\ & {\beta }_{1}& {\beta }_{2}& \cdots & \cdots & {\beta }_{m}\end{array}$ (7.43)

The nodes satisfy

 ${\varsigma }_{i}=\sum _{j=1}^{m}{\gamma }_{ij},\text{ }i=1,...,m.$ (7.44)

If

 ${\gamma }_{ij}=0\text{ }\text{ }\text{for}\text{ }j\ge i.$ (7.45)

the stage values ${k}_{j}$ can be calculated directly from already known quantities, i.e. we refer to an explicit method (e.g. the classical Runge-Kutta methods already discussed in section [Section 7.1.6.1]).

Example 7.4: Butcher Schema for Classical Runge-Kutta Methods

Euler’s method:

$\begin{array}{cc}0& 0\\ & 1\end{array}$

Runge-Kutta method of 4th order:

$\begin{array}{ccccc}0& 0& 0& 0& 0\\ \frac{1}{2}& \frac{1}{2}& 0& 0& 0\\ \frac{1}{2}& 0& \frac{1}{2}& 0& 0\\ 1& 0& 0& 1& 0\\ & \frac{1}{6}& \frac{1}{3}& \frac{1}{3}& \frac{1}{6}\end{array}$
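For explicit methods (strictly lower-triangular $\gamma$), equations (7.41) and (7.42) can be evaluated stage by stage. A minimal generic sketch, here fed with the classical RK4 tableau from Example 7.4 (all names illustrative):

```python
def explicit_rk_step(f, x_n, t_n, h, gamma, beta, zeta):
    """One step of a general explicit Runge-Kutta method, eqs. (7.41)-(7.42):
    gamma is strictly lower triangular, beta the weights, zeta the nodes."""
    m = len(beta)
    k = []
    for j in range(m):
        # stage value x_{nj} uses only the already computed k_0 .. k_{j-1}
        x_j = x_n + h * sum(gamma[j][l] * k[l] for l in range(j))
        k.append(f(x_j, t_n + zeta[j] * h))
    return x_n + h * sum(b * kj for b, kj in zip(beta, k))

# Butcher tableau of the classical RK4 method (Example 7.4)
gamma = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
beta = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
zeta = [0, 0.5, 0.5, 1]
```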

Annotations 7.2:

1. From section [Section 7.1.6.1] we infer that there is at least one Runge-Kutta method of order $p=m$ for $m\le 4$. The question emerges whether there also exists a method of order $p>m$, and whether there is always at least one method of order $p=m$. One must negate this in both cases.

2. As a matter of fact, we can prove the following connection ([Table 7.1]):

Table 7.1.  Number of stages and achievable order.

| Number of stages | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Achievable order | 1 | 2 | 3 | 4 | 4 | 5 | 6 | 6 | 7 | 7 |

i.e. the method of ${4}^{th}$ order represents a kind of optimum in respect to this observation.

#### 7.1.6.3. Stability Area of Runge-Kutta Methods of Order 1≤p≤4

In this case we can prove that the area of absolute stability of a method of order $p$ results from

 $|\underset{R\left(z\right)=R\left(h\lambda \right)}{\underbrace{1+h\lambda +\frac{{\left(h\lambda \right)}^{2}}{2!}+...+\frac{{\left(h\lambda \right)}^{p}}{p!}}}|\le 1$ (7.46)

This particularly means that all methods of order p have the same stability area.

The stability border results from the complex solution $z$ of the statement

$R\left(z\right)={e}^{i\theta }$

with arbitrary $\theta$ of the interval $\left(0,2\pi \right)$.

The stability areas of the Runge-Kutta method up to $p=4$ are represented in [Figure 7.10]

### 7.1.7.  Step Size Control

Errors are inevitable when numerical integration methods are applied, and the emerging error depends on the step size used. In practice, however, usually only a desired accuracy is known. Thus, one would like to provide the accuracy instead of the step size as input to the numerical integration method.

For this purpose, the algorithm is supposed to choose a step size automatically in every iteration step, such that the local discretization error lies below a desired value $\zeta$. Thus, the error which is made in an iteration step must be estimated in advance.

For this, two different possibilities exist. On the one hand, an integration step can be performed twice using different step sizes; the local error can then be estimated from the difference of the calculated solutions. On the other hand, two different solutions can also be obtained by using two different integration algorithms (embedded methods).

#### 7.1.7.1. Third-Order Runge-Kutta, Two Calculations of an Integration Step

For the local discretization error of the Runge-Kutta method of third order with an integration interval of length $h$ it holds that

 ${E}_{h}=B{h}^{4}.$ (7.47)

where the constant $B$ depends on the considered differential equation. If one integrates the same interval in two integration steps of length $\frac{h}{2}$, the following approximation of the discretization error at the end of the interval holds

 $2{E}_{\frac{h}{2}}=2B{\left(\frac{h}{2}\right)}^{4}=\frac{1}{8}B{h}^{4}$ (7.48)

Subtracting equation (7.48) from (7.47) yields

 ${E}_{h}-2{E}_{\frac{h}{2}}=B{h}^{4}-\frac{1}{8}B{h}^{4}=\frac{7}{8}B{h}^{4}$ (7.49)

The left-hand side of the equation (7.49) can be calculated by first accomplishing an integration step of length $h$ and subsequently repeating the calculation with the integration increment $\frac{h}{2}$. If the specific results are referred to as ${x}_{h}$ and ${x}_{\frac{h}{2}}$respectively, we obtain in this case

 ${E}_{h}-2{E}_{\frac{h}{2}}={x}_{h}-{x}_{\frac{h}{2}}$ (7.50)

By equating (7.49) with (7.50) and solving for $B$, one obtains

 $B=\frac{8}{7}\left({x}_{h}-{x}_{\frac{h}{2}}\right){h}^{-4}$ (7.51)

If $B$ is known, we can calculate an estimated value by means of equation (7.47)

 $h\approx {\left(\frac{\zeta }{B}\right)}^{1/4}.$ (7.52)
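Equations (7.51) and (7.52) amount to two lines of code; a sketch with illustrative names, where `x_h` is the result of one step of size $h$ and `x_h2` of two steps of size $h/2$:

```python
def propose_step(x_h, x_h2, h, zeta):
    """Estimate the error constant B from step doubling (7.51) and propose a
    step size for the tolerance zeta (7.52); third-order method assumed."""
    B = (8.0 / 7.0) * abs(x_h - x_h2) / h**4   # eq. (7.51)
    return (zeta / B) ** 0.25                  # eq. (7.52)
```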

#### 7.1.7.2. Embedded Methods

It is not efficient to calculate the same integration step twice in succession in order to estimate the local integration error. For this reason, embedded methods are more advantageous. Here, two different algorithms are applied which share the greatest possible number of function evaluations. The Runge-Kutta-Fehlberg $4/5$ algorithm (RKF4/5) is a possible choice for an embedded algorithm. The Butcher scheme has the following form:

 $\begin{array}{ccccccc}0& 0& 0& 0& 0& 0& 0\\ \frac{1}{2}& \frac{1}{4}& 0& 0& 0& 0& 0\\ \frac{3}{8}& \frac{3}{32}& \frac{9}{32}& 0& 0& 0& 0\\ \frac{12}{13}& \frac{1932}{2197}& -\frac{7200}{2197}& \frac{7296}{2197}& 0& 0& 0\\ 1& \frac{439}{216}& -8& \frac{3680}{513}& -\frac{845}{4104}& 0& 0\\ \frac{1}{2}& -\frac{8}{27}& 2& -\frac{3544}{2565}& \frac{1859}{4104}& -\frac{11}{40}& 0\\ {x}_{1}& \frac{25}{216}& 0& \frac{1408}{2565}& \frac{2197}{4104}& -\frac{1}{5}& 0\\ {x}_{2}& \frac{16}{135}& 0& \frac{6656}{12825}& \frac{28561}{56430}& -\frac{9}{50}& \frac{2}{55}\end{array}$ (7.53)

Thereby, ${x}_{1}$ is a Runge-Kutta solution of fourth order and ${x}_{2}$ is a Runge-Kutta solution of order five.

Therefore,

 $\epsilon ~{h}^{5}⇔h~\sqrt[5]{\epsilon }.$ (7.54)

The relative error can be obtained by

 ${\epsilon }_{rel}=\frac{|{x}_{1}-{x}_{2}|}{\mathrm{max}\left(|{x}_{1}|,|{x}_{2}|,\delta \right)}$ (7.55)

with $\delta ={10}^{-10}$.

The objective is to choose a new step size, such that the relative error lies near the relative tolerance.

We thus want the following equation to be fulfilled:

 $\varsigma =\frac{|{x}_{1}-{x}_{2}|}{\mathrm{max}\left(|{x}_{1}|,|{x}_{2}|,\delta \right)}.$ (7.56)

Therefore, the proposal for the choice of the step size is

 ${h}_{new}=\sqrt[5]{\frac{\varsigma ·\mathrm{max}\left(|{x}_{1}|,|{x}_{2}|,\delta \right)}{|{x}_{1}-{x}_{2}|}}·{h}_{old}.$ (7.57)

If the error is too great, the step size is decreased; if the error is too small, the step size is increased. Note that in this scheme steps are never repeated, even if the error is much too great. Therefore, this type of step size control is called optimistic. In contrast, a conservative step size control repeats a step with a new step size if the estimated error is greater than the tolerance.
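The update (7.57) is a one-line rule once the relative error (7.55) is in hand; a sketch with illustrative names, where `x1` and `x2` are the embedded fourth- and fifth-order results of the last step:

```python
def new_step_size(x1, x2, h_old, tol_rel, delta=1e-10):
    """Optimistic P-controller for the step size, eqs. (7.55) and (7.57)."""
    err_rel = abs(x1 - x2) / max(abs(x1), abs(x2), delta)  # eq. (7.55)
    return (tol_rel / err_rel) ** 0.2 * h_old              # fifth root, eq. (7.57)
```

If the estimated error exceeds the tolerance, the ratio is below one and the step shrinks; otherwise it grows.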

The idea of step size control can also be understood as a (feedback) control problem. In this case, the above given formula equals a P-controller ([Figure 7.11]).

Kjell Gustafsson developed a PI-controller in order to control the step size. The corresponding formula is:

 ${h}_{new}={\left(\frac{0.8·to{l}_{rel}}{{\epsilon }_{re{l}_{new}}}\right)}^{\frac{0.3}{n}}·{\left(\frac{{\epsilon }_{re{l}_{old}}}{{\epsilon }_{re{l}_{new}}}\right)}^{\frac{0.4}{n}}{h}_{old}.$ (7.58)

### 7.1.8.  Linear Multi-Step Methods

The previous integration methods all have in common that the approximation value ${x}_{n+1}$ results directly from its predecessor value ${x}_{n}$. Within one integration step, we exclusively use the information of the last step. Methods with this characteristic are referred to as one-step procedures.

The question emerges whether one can achieve an enhanced accuracy by consulting the calculated values of the previous steps ${x}_{n-k},{x}_{n-k+1},\dots ,{x}_{n-1}$. These methods are called multi-step procedures.

The derivation of such procedures starts from the formal integration of the initial value problem (7.1), (7.2) (compare the derivation of the Runge-Kutta methods in section [Section 7.1.6])

 $x\left({t}_{n+1}\right)=x\left({t}_{n-k+1}\right)+\underset{{t}_{n-k+1}}{\overset{{t}_{n+1}}{\int }}f\left(x\left(\tau \right),\tau \right)d\tau .$ (7.59)

The integrand is subsequently approximated by an interpolatory quadrature formula with grid points ${t}_{n-k+1},...,{t}_{n},{t}_{n+1}$. For $k=2$ and equidistant grid points with increment $h$, we obtain, e.g., Simpson’s Rule:

 ${x}_{n+1}-{x}_{n-1}=h\left[\frac{1}{3}f\left({x}_{n+1},{t}_{n+1}\right)+\frac{4}{3}f\left({x}_{n},{t}_{n}\right)+\frac{1}{3}f\left({x}_{n-1},{t}_{n-1}\right)\right].$ (7.60)

If, on the other hand, we start from the original differential equation

 $\stackrel{˙}{x}\left(t\right)={f\left(x\left(t\right),t\right)|}_{t={t}_{n+1}}$ (7.61)

and approximate the derivative directly by a numerical differentiation formula, e.g.

 $\stackrel{˙}{x}\left({t}_{n+1}\right)\approx \frac{1}{2h}\left[3x\left({t}_{n+1}\right)-4x\left({t}_{n}\right)+x\left({t}_{n-1}\right)\right]$ (7.62)

we obtain the following procedure

 $\frac{3}{2}{x}_{n+1}-2{x}_{n}+\frac{1}{2}{x}_{n-1}=h\text{\hspace{0.17em}}f\left({x}_{n+1},{t}_{n+1}\right)$ (7.63)
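For the linear test equation $\dot x=ax$, this implicit relation can be solved for $x_{n+1}$ in closed form; a minimal sketch (names illustrative):

```python
def bdf2_step_linear(a, x_n, x_nm1, h):
    """One step of the two-step formula (7.63) for x' = a*x:
    (3/2) x_{n+1} - 2 x_n + (1/2) x_{n-1} = h * a * x_{n+1}."""
    return (2.0 * x_n - 0.5 * x_nm1) / (1.5 - h * a)
```

For non-linear $f$, the same relation has to be solved iteratively, as with the other implicit methods.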

Both methods are examples of linear multi-step procedures. This will lead us to the following definition.

Definition 7.4

A linear multi-step procedure with $n$ steps (also: linear $n$-step procedure) for determining approximations ${x}_{n}$ to the solution $x\left(t\right)$ of the initial value problem (7.1), (7.2) is defined by the specification of $n$ initial values

 $x\left({t}_{j}\right)={x}_{j},\text{ }j=0,1,...,n-1$ (7.64)

and the calculation rule (difference equation)

 $\sum _{j=0}^{n}{\alpha }_{j}{x}_{n-j+1}=h\sum _{j=0}^{n}{\beta }_{j}f\left({x}_{n-j+1},{t}_{n-j+1}\right)$ (7.65)

with

 ${\alpha }_{j},{\beta }_{j}\in \Re ,\text{ }\text{ }{\alpha }_{0}\ne 0\text{ }\text{and}\text{ }|{\alpha }_{n}|+|{\beta }_{n}|>0.$ (7.66)

Annotations 7.3:

1. The linear multi-step procedure given in (7.64) and (7.65) is referred to as linear because the increment function (method function) depends linearly on the function values $f\left({x}_{n-j+1},{t}_{n-j+1}\right)$:

 $\Phi \left({x}_{n-k+1},...,{x}_{n+1},{t}_{i};h\right)\equiv \sum _{j=0}^{n}{\beta }_{j}f\left({x}_{n-j+1},{t}_{n-j+1}\right)$ (7.67)
2. The condition ${\alpha }_{0}\ne 0$ guarantees that the implicit difference equation (7.65) has a solution for ${x}_{n+1}$, at least for sufficiently small increments $h$.

3. The condition $|{\alpha }_{n}|+|{\beta }_{n}|>0$ ensures that the step number $n$ is exactly determined.

4. For explicit linear multi-step procedures, ${\beta }_{0}=0$ holds.

Example 7.5:

1. By inserting ${\alpha }_{0}=1$, ${\alpha }_{1}=-1$ and ${\alpha }_{j}=0$ for $j\ge 2$ in (7.65), we obtain an implicit procedure $\left({\beta }_{0}\ne 0\right)$

 ${x}_{n+1}={x}_{n}+h\sum _{j=0}^{n}{\beta }_{j}f\left({x}_{n-j+1},{t}_{n-j+1}\right),$ (7.68)

which is referred to as Adams-Moulton formula.

2. If we proceed as in 1., but set ${\beta }_{0}=0$, we obtain the following procedure

 ${x}_{n+1}={x}_{n}+h\sum _{j=1}^{n}{\beta }_{j}f\left({x}_{n-j+1},{t}_{n-j+1}\right)$ (7.69)

This procedure constitutes the class of Adams-Bashforth formulae.

3. If we approximate the derivative in the differential equation (7.61) by backward differences, we obtain a class of implicit integration procedures

 $\sum _{j=0}^{n}{\alpha }_{j}{x}_{n-j+1}=h{\beta }_{0}f\left({x}_{n+1},{t}_{n+1}\right)$ (7.70)

with representatives referred to as Backward Difference Formulae or, abbreviated, BDF formulae, cp. [Section 7.1.11]. This class of procedures plays an important role in the solution of stiff initial value problems and of DAEs.

A popular example of a multi-step procedure is the fourth-order Adams-Bashforth method, which is based on the following formula:

 ${x}_{n+1}={x}_{n}+\frac{h}{24}\left(55f\left({x}_{n},{t}_{n}\right)-59f\left({x}_{n-1},{t}_{n-1}\right)+37f\left({x}_{n-2},{t}_{n-2}\right)-9f\left({x}_{n-3},{t}_{n-3}\right)\right)$ (7.71)

The local method error is

 $\frac{251}{720}{h}^{5}{x}^{\left(5\right)}\left({\eta }_{n}\right)=O\left({h}^{5}\right)\text{ }\text{with}\text{ }{\eta }_{n}\in \left[{t}_{n},{t}_{n+1}\right].$ (7.72)
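Formula (7.71) is straightforward to implement. The sketch below is a minimal illustration (test problem and classical Runge-Kutta startup chosen here for demonstration, since the first three values must come from a one-step method):

```python
import numpy as np

def ab4(f, x0, t0, t_end, h):
    """4-step Adams-Bashforth method (7.71) for a scalar IVP x' = f(x, t)."""
    n_steps = int(round((t_end - t0) / h))
    t = t0 + h * np.arange(n_steps + 1)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(3):                      # startup x_1..x_3 via classical RK4
        k1 = f(x[n], t[n])
        k2 = f(x[n] + h / 2 * k1, t[n] + h / 2)
        k3 = f(x[n] + h / 2 * k2, t[n] + h / 2)
        k4 = f(x[n] + h * k3, t[n] + h)
        x[n + 1] = x[n] + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    for n in range(3, n_steps):
        # only f(x[n], t[n]) is new; older values are recomputed here for clarity
        x[n + 1] = x[n] + h / 24 * (55 * f(x[n], t[n]) - 59 * f(x[n - 1], t[n - 1])
                                    + 37 * f(x[n - 2], t[n - 2]) - 9 * f(x[n - 3], t[n - 3]))
    return t, x

t, x = ab4(lambda x, t: -x, 1.0, 0.0, 1.0, 0.01)
err = abs(x[-1] - np.exp(-1.0))
```

In a production implementation the old function values would be stored rather than recomputed, so that each step indeed costs only one new evaluation of $f$.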

In addition to explicit linear multi-step methods, one often utilizes implicit procedures in practice. Reasons are:

1. they are more accurate than comparable explicit procedures,

2. they have considerably better stability characteristics, and

3. they allow simple strategies for error estimation and step size control.

In order to obtain a good starting value for the iterative solution of the non-linear equation for ${x}_{n+1}$, multi-step procedures are often used in the form of a predictor-corrector method. One example is the Adams-Moulton procedure, which is defined by the predictor (7.38) and the corrector

 $\begin{array}{l}{x}_{n+1}={x}_{n}+\frac{h}{720}\left(251f\left({x}_{n+1},{t}_{n+1}\right)+646f\left({x}_{n},{t}_{n}\right)-264f\left({x}_{n-1},{t}_{n-1}\right)+\\ \text{ }\text{ }\text{ }\text{ }\text{ }\text{ }106f\left({x}_{n-2},{t}_{n-2}\right)-19f\left({x}_{n-3},{t}_{n-3}\right)\right)\end{array}$ (7.73)

Both formulae can also be used independently from each other.

In this case, the first formula is an explicit method (the right-hand side does not depend on ${x}_{n+1}$), whereas the second formula is an implicit method (the right-hand side depends on ${x}_{n+1}$). The first formula primarily serves to provide a good starting value for the iterative solution of the second. The second formula describes an implicit method which must be solved iteratively; in practice, one often uses only one or two iteration steps. The method error of the Adams-Moulton procedure is

 $-\frac{3}{160}{h}^{6}{x}^{\left(6\right)}\left({\eta }_{n}\right)=O\left({h}^{6}\right),\text{ }{\eta }_{n}\in \left[{t}_{n},{t}_{n+1}\right]$ (7.74)

In most cases, a predictor-corrector procedure consists of an explicit method (the predictor) and an implicit method (the corrector) whose error order is at least that of the predictor.
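The interplay of predictor (7.71) and corrector (7.73) can be sketched as follows, with a single corrector iteration; the startup values in the usage example are taken from the exact solution purely for illustration:

```python
import math

def abm_step(f, x, t, h):
    """One Adams-Bashforth/Adams-Moulton step. The lists x and t hold the last
    four values x_{n-3}..x_n and t_{n-3}..t_n; returns x_{n+1}."""
    fs = [f(x[i], t[i]) for i in range(4)]              # f_{n-3} .. f_n
    # predictor (7.71): explicit, supplies the starting value
    xp = x[3] + h / 24 * (55 * fs[3] - 59 * fs[2] + 37 * fs[1] - 9 * fs[0])
    # corrector (7.73): implicit, evaluated once with the predicted value
    return x[3] + h / 720 * (251 * f(xp, t[3] + h) + 646 * fs[3]
                             - 264 * fs[2] + 106 * fs[1] - 19 * fs[0])

# usage on x' = -x with startup values taken from the exact solution
ts = [0.0, 0.1, 0.2, 0.3]
xs = [math.exp(-t) for t in ts]
x4 = abm_step(lambda x, t: -x, xs, ts, 0.1)
err = abs(x4 - math.exp(-0.4))
```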

An error estimation (e.g. required for the step size control) results from an additional computation with doubled increment, as described for the Runge-Kutta methods:

 ${e}_{n,h}\approx \frac{1}{31}\left({f}_{h}\left({x}_{n},{t}_{n}\right)-{f}_{2h}\left({x}_{n},{t}_{n}\right)\right)$ (7.75)

In (7.71) and (7.73) we encountered a problem which is always involved in the application of multi-step procedures: in order to start the calculation, we already need solution values at several sampling points, here ${x}_{1},{x}_{2},{x}_{3}$ in addition to the given ${x}_{0}$. These values must be calculated by means of another method (e.g. a Runge-Kutta procedure). A further disadvantage is that step size variations during the integration process are more complicated than for one-step procedures.

An advantage is that the calculation of a new solution value merely requires one evaluation of the right-hand side of the differential equation. In contrast, Runge-Kutta methods of comparable error order require evaluations at several points per step.

A stability analysis can also be carried out for multi-step procedures, but this would go beyond the scope of this lecture.

### 7.1.9.  Activation of Linear Multi-Step Procedures

Because only the starting value ${x}_{0}=x\left(0\right)$ is given in an initial value problem, we must determine $n-1$ further initial values with sufficient accuracy in order to start a linear $n$-step procedure. As a general rule, there are two basic strategies:

1. We use a one-step procedure with automatic step size control and calculate approximations of given accuracy for the starting values at the points ${t}_{1},\dots ,{t}_{n-1}$. Here, the following problems might occur:

1. The grid points which are automatically determined by the step size control are not necessarily equidistant. In this case, we must obtain the required values at equidistant sampling points by interpolation between the calculated ones.

2. Even if the increment $h$ is constant, it need not be identical to the increment required by the linear multi-step procedure to comply with the accuracy requirements. Therefore, it is advantageous to use a one-step procedure with the same consistency order as the multi-step procedure.

2. We use a family of multi-step procedures of ascending consistency order: we start with a method of low order and successively increase the order. As discussed in 1., the grid points might then also not be equidistant.

### 7.1.10.  System of Differential Equations

The numerical integration of a system of differential equations proceeds analogously to the scalar case. This will be exemplified by a Runge-Kutta procedure. In this case, the application of (7.59) to the equation system

 $\stackrel{˙}{x}=f\left(t,x\right)$ (7.76)

yields the calculation rule

 $\begin{array}{l}{k}_{1}=hf\left({x}_{n}^{},{t}_{n}\right)\\ {k}_{2}=hf\left({x}_{n}+{k}_{1},{t}_{n+1}\right)\\ {x}_{n+1}={x}_{n}+\frac{1}{2}\left[{k}_{1}+{k}_{2}\right]\end{array}$ (7.77)

i.e. the scalar quantities must be replaced by vectors or, expressed differently, the numerical integration procedure is simply applied component-wise.
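With vectors in place of scalars, (7.77) translates directly into code. A minimal sketch, using the harmonic oscillator $\stackrel{¨}{x}=-x$ as an illustrative test system:

```python
import numpy as np

def heun_system(f, x0, t0, t_end, h):
    """Calculation rule (7.77) applied component-wise to a system x' = f(x, t)."""
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(int(round((t_end - t0) / h))):
        k1 = h * f(x, t)
        k2 = h * f(x + k1, t + h)
        x = x + 0.5 * (k1 + k2)
        t += h
    return x

# x'' = -x rewritten as a first-order system via the substitution method
f = lambda x, t: np.array([x[1], -x[0]])
x = heun_system(f, [1.0, 0.0], 0.0, 1.0, 0.001)   # exact solution: [cos 1, -sin 1]
```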

### 7.1.11.  BDF Methods

[10]

The multi-step procedures discussed so far are based on the numerical solution of the integral equation (7.59). The class of so-called “Backward Difference Formulae” (BDF methods) is instead constructed with the help of numerical differentiation.

In order to determine an approximation ${x}_{n+1}$ for $x\left({t}_{n+1}\right)$, we define an interpolation polynomial $q$ through the points

 $\begin{array}{l}\left({t}_{n-k+1},{x}_{n-k+1}\right)\\ \left({t}_{n-k+2},{x}_{n-k+2}\right)\\ \dots \\ \left({t}_{n},{x}_{n}\right)\\ \left({t}_{n+1},{x}_{n+1}\right)\end{array}$ (7.78)

The polynomial $q\left(\varsigma \right)$ can be written as follows

 $q\left(\varsigma \right)=x\left({t}_{n}+\varsigma h\right)=\sum _{j=0}^{k}{\left(-1\right)}^{j}\left(\begin{array}{c}-\varsigma +1\\ j\end{array}\right){\nabla }^{j}{x}_{n+1}$ (7.79)

with the so-called backward differences

 ${\nabla }^{0}{x}_{n}:={x}_{n},\text{ }\text{ }{\nabla }^{j+1}{x}_{n}:={\nabla }^{j}{x}_{n}-{\nabla }^{j}{x}_{n-1}.$ (7.80)

The unknown value ${x}_{n+1}$ will be determined in such a way that the polynomial satisfies the differential equation

 $\stackrel{˙}{q}\left({t}_{n+l}\right)=f\left({t}_{n+l},{x}_{n+l}\right)\text{\hspace{0.17em}},\text{ }l\in \left\{0,\text{\hspace{0.17em}}1,\text{\hspace{0.17em}}2,\text{\hspace{0.17em}}...\right\}$ (7.81)

at one of the points in (7.78).

For $l=0$ we obtain explicit formulae, namely for $k=1$ the explicit Euler method and for $k=2$ the Midpoint Method (cp. exercise). The formulae for $k\ge 3$ are unstable.

For $l=1$, we obtain implicit formulae, the so-called BDF methods:

 $\sum _{j=0}^{k}{\alpha }_{j}{\nabla }^{j}{x}_{n+1}=h{f}_{n+1}$ (7.82)

with the coefficients

 ${\alpha }_{j}={\left(-1\right)}^{j}{\frac{d}{d\varsigma }\left(\begin{array}{c}-\varsigma +1\\ j\end{array}\right)|}_{\varsigma =1}$ (7.83)

With

 ${\left(-1\right)}^{j}\left(\begin{array}{c}-\varsigma +1\\ j\end{array}\right)=\frac{1}{j!}\left(\varsigma -1\right)\text{\hspace{0.17em}}\varsigma \text{\hspace{0.17em}}\left(\varsigma +1\right)...\left(\varsigma +j-2\right)$ (7.84)

we obtain

 ${\alpha }_{0}=0,\text{ }\text{ }{\alpha }_{j}=\frac{1}{j},\text{ }j=1,\dots ,k$ (7.85)

thus,

 $\sum _{j=1}^{k}\frac{1}{j}{\nabla }^{j}{x}_{n+1}=h{f}_{n+1}$ (7.86)

For $k=4$, we obtain e.g.

 $25{x}_{n+1}-48{x}_{n}+36{x}_{n-1}-16{x}_{n-2}+3{x}_{n-3}=12h\text{\hspace{0.17em}}{f}_{n+1}$ (7.87)

The formulae (7.86) are stable for $k\le 6$; for $k>6$ they are unstable.
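The coefficients in (7.87) can be checked by expanding the backward differences in (7.86). A small sketch using exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def bdf_coefficients(k):
    """Coefficient of x_{n+1-i} in sum_{j=1}^{k} (1/j) * nabla^j x_{n+1},
    using nabla^j x_{n+1} = sum_i (-1)^i * C(j, i) * x_{n+1-i}."""
    a = [Fraction(0)] * (k + 1)
    for j in range(1, k + 1):
        for i in range(j + 1):
            a[i] += Fraction((-1) ** i * comb(j, i), j)
    return a

print([int(12 * c) for c in bdf_coefficients(4)])   # [25, -48, 36, -16, 3]
```

Multiplying by $12$ (the common denominator for $k=4$) reproduces exactly the coefficients of (7.87); $k=2$, scaled by $2$, reproduces formula (7.63).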

### 7.1.12.  Remarks on Stiff Differential Equations

The following example will help us to understand the phenomenon which is referred to in the literature as stiff differential equations.

Example 7.6:

We have the following equations

 ${\stackrel{˙}{x}}_{1}=\frac{{\lambda }_{1}+{\lambda }_{2}}{2}{x}_{1}+\frac{{\lambda }_{1}-{\lambda }_{2}}{2}{x}_{2},\text{ }{\stackrel{˙}{x}}_{2}=\frac{{\lambda }_{1}-{\lambda }_{2}}{2}{x}_{1}+\frac{{\lambda }_{1}+{\lambda }_{2}}{2}{x}_{2}$ (7.88)

with the constants ${\lambda }_{1},\text{\hspace{0.17em}}{\lambda }_{2}$ where ${\lambda }_{i}<0$.

The exact general solution of this equation is

 ${x}_{1}={c}_{1}{e}^{{\lambda }_{1}t}+{c}_{2}{e}^{{\lambda }_{2}t},\text{ }{x}_{2}={c}_{1}{e}^{{\lambda }_{1}t}-{c}_{2}{e}^{{\lambda }_{2}t}.$ (7.89)

Both solutions converge to zero as $t\to \infty$. Using the explicit Euler method, we obtain the numerical solution

 $\begin{array}{l}{x}_{1,n}={c}_{1}{\left(1+h{\lambda }_{1}\right)}^{n}+{c}_{2}{\left(1+h{\lambda }_{2}\right)}^{n}\\ {x}_{2,n}={c}_{1}{\left(1+h{\lambda }_{1}\right)}^{n}-{c}_{2}{\left(1+h{\lambda }_{2}\right)}^{n}\end{array}$ (7.90)

These approximate solutions evidently only converge to zero, if

 $|1+h{\lambda }_{1}|<1\text{ }\text{and}\text{ }|1+h{\lambda }_{2}|<1.$ (7.91)

Hence, $h$ must be chosen such that

 $h<\mathrm{min}\left(\frac{2}{|{\lambda }_{1}|},\frac{2}{|{\lambda }_{2}|}\right)$ (7.92)

For the technical process described by these equations, suppose that $|{\lambda }_{2}|\gg |{\lambda }_{1}|$.

In the analytical solution, the contribution of the term ${e}^{{\lambda }_{2}t}$ is negligibly small compared to the contribution of ${e}^{{\lambda }_{1}t}$. In the numerical solution, however, the solution component ${e}^{{\lambda }_{2}t}$ determines the admissible choice of $h$ due to (7.92).
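This behavior can be reproduced with the scalar test equation $\stackrel{˙}{x}=\lambda x$: for a step size violating (7.92), the explicit Euler method diverges while the implicit Euler method still decays (step size and step count chosen for illustration):

```python
lam, h = -1000.0, 0.005            # h > 2/|lam|, so condition (7.91) is violated
xe = xi = 1.0
for _ in range(20):
    xe = (1.0 + h * lam) * xe      # explicit Euler: amplification 1 + h*lam = -4
    xi = xi / (1.0 - h * lam)      # implicit Euler: amplification 1/(1 - h*lam) = 1/6
print(abs(xe), abs(xi))            # explicit blows up, implicit decays
```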

Suppose that ${\lambda }_{1}=-1$ and ${\lambda }_{2}=-1000$; then $h<0.002$ is required. If the second solution component did not exist, $h<2$ would be sufficient. Hence, although ${e}^{-1000t}$ is practically irrelevant for the solution, the factor $1000$ determines the choice of the integration increment. A differential equation system with this characteristic is referred to as stiff. If the system at hand is stiff, not only the $A$-stability of the integration method has to be considered but the damping behavior as well. For this purpose, the term $L$-stability is introduced.

Definition 7.5

A numerical integration method for differential equations is called $L$-stable if and only if it is $A$-stable and additionally

 $|{x}_{n}|\to 0\text{ }\text{for}\text{ }\text{Re}\left(h\lambda \right)\to -\infty .$ (7.93)

Annotation 7.4:

1. All $F$-stable methods are $A$-stable, but never $L$-stable.

2. While the solution portions including high damping are only weakly numerically damped when using $F$-stable methods, these portions are strongly numerically damped when using $L$-stable integration methods.

A principal difference between the characteristics of implicit and explicit integration methods will be discussed in the following.

Example 7.7: Integration of the Equation of Motion for Single Mass Pendulum [9]

The motion of a linear, unforced single-mass pendulum is described by the state equations

 $\begin{array}{l}{\stackrel{˙}{x}}_{1}={x}_{2}\\ {\stackrel{˙}{x}}_{2}=-\frac{c}{m}{x}_{1}-\frac{d}{m}{x}_{2}\end{array}$ (7.94)

For simplification, we write

$x:={x}_{1}$

$v:={x}_{2}$

Hence, equations (7.94) yield

 $\begin{array}{l}\stackrel{˙}{x}=v\\ \stackrel{˙}{v}=-\frac{c}{m}x-\frac{d}{m}v\end{array}$ (7.95)

Application of the explicit Euler method on (7.95) results in

 $\begin{array}{l}{x}_{k+1}={x}_{k}+h{v}_{k}\\ {v}_{k+1}={v}_{k}-h\left(\frac{c}{m}{x}_{k}+\frac{d}{m}{v}_{k}\right)\end{array}$ (7.96)

In the undamped case $\left(d=0\right)$, the pendulum oscillates with constant amplitude and the frequency

${f}_{0}=\frac{{\omega }_{0}}{2\pi }\text{ }\text{with}\text{ }{\omega }_{0}=\sqrt{\frac{c}{m}}$

The amplitude results from the initial condition.

The parameters

$m=1,\text{ }c=50,\text{ }d=0,\text{ }x\left(0\right)=1,\text{ }v\left(0\right)=0$

with the increments

$h=1\text{\hspace{0.17em}}ms,\text{\hspace{0.17em}}5\text{\hspace{0.17em}}ms,\text{\hspace{0.17em}}10\text{\hspace{0.17em}}ms$

result in the outcomes represented in [Figure 7.12].

It becomes obvious that the oscillating amplitude does not remain constant (in contrast to reality), but that it grows with increasing increment.

In order to describe this characteristic, we calculate the total energy of a pendulum:

$E\left(x,v\right)=\frac{1}{2}c{x}^{2}+\frac{1}{2}m{v}^{2}$

By means of the solutions by the Euler method, we obtain

 $\begin{array}{c}{E}_{k+1}=\frac{1}{2}c{x}_{k+1}^{2}+\frac{1}{2}m{v}_{k+1}^{2}\\ =\frac{1}{2}c{\left({x}_{k}+h{v}_{k}\right)}^{2}+\frac{1}{2}m{\left({v}_{k}-h\frac{c}{m}{x}_{k}\right)}^{2}\\ =\frac{1}{2}c{x}_{k}^{2}+ch{x}_{k}{v}_{k}+\frac{1}{2}c{h}^{2}{v}_{k}^{2}+\frac{1}{2}m{v}_{k}^{2}-ch{x}_{k}{v}_{k}+\frac{1}{2}\frac{{c}^{2}}{m}{h}^{2}{x}_{k}^{2}\\ =\frac{1}{2}c{x}_{k}^{2}+\frac{1}{2}m{v}_{k}^{2}+\frac{c{h}^{2}}{m}\left(\frac{1}{2}c{x}_{k}^{2}+\frac{1}{2}m{v}_{k}^{2}\right)\\ =\left(1+\frac{c{h}^{2}}{m}\right){E}_{k}\end{array}$ (7.97)

(7.97) shows that, in the undamped case, the total energy of the numerical solution increases by the factor

$1+\frac{c{h}^{2}}{m}=1+{\left(h{\omega }_{0}\right)}^{2}$

in every step.

With the time period

$T=\frac{1}{{f}_{0}}=\frac{2\pi }{{\omega }_{0}}$

and the number of integration steps per period,

$n=\frac{T}{h}=\frac{2\pi }{h{\omega }_{0}}$

the total energy increases per period with the factor of

${\left(1+{\left(h{\omega }_{0}\right)}^{2}\right)}^{n}={\left(1+{\left(h{\omega }_{0}\right)}^{2}\right)}^{\frac{2\pi }{h{\omega }_{0}}}>1$

Since $E=\frac{1}{2}c{x}_{\mathrm{max}}^{2}$ at $v=0$ (reversal point), the amplitude increases in each oscillation period by the square root of this factor,

${\left(1+{\left(h{\omega }_{0}\right)}^{2}\right)}^{\frac{\pi }{h{\omega }_{0}}}$.
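The per-step energy growth (7.97) can be verified numerically with the parameter values from above (undamped case, $d=0$):

```python
m, c, d = 1.0, 50.0, 0.0           # parameters as above, undamped case
h = 0.005                          # 5 ms
x, v = 1.0, 0.0
E0 = 0.5 * c * x * x + 0.5 * m * v * v
for _ in range(1000):
    # explicit Euler step (7.96); tuple assignment keeps the old x in the v-update
    x, v = x + h * v, v - h * (c / m * x + d / m * v)
E = 0.5 * c * x * x + 0.5 * m * v * v
print(E / E0, (1.0 + c * h * h / m) ** 1000)   # the two values agree
```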

Carrying out the same consideration for the implicit Euler method, we first obtain

 ${x}_{k+1}={x}_{k}+h{v}_{k+1}$ (7.98)

 ${v}_{k+1}={v}_{k}-h\left(\frac{c}{m}{x}_{k+1}+\frac{d}{m}{v}_{k+1}\right)$ (7.99)

If we insert equation (7.98) in (7.99), we obtain

${v}_{k+1}={v}_{k}-h\frac{c}{m}{x}_{k}-\left({h}^{2}\frac{c}{m}+h\frac{d}{m}\right){v}_{k+1}$

or after transformation

$\left(1+{h}^{2}\frac{c}{m}+h\frac{d}{m}\right){v}_{k+1}={v}_{k}-h\frac{c}{m}{x}_{k}$

and, after solving for ${v}_{k+1}$,

$\begin{array}{c}{v}_{k+1}=\frac{{v}_{k}-h\frac{c}{m}{x}_{k}}{1+{h}^{2}\frac{c}{m}+h\frac{d}{m}}=\frac{m{v}_{k}-hc{x}_{k}}{m+hd+{h}^{2}c}\\ ={v}_{k}-h\left(\frac{c}{m+hd+{h}^{2}c}{x}_{k}+\frac{ch+d}{m+hd+{h}^{2}c}{v}_{k}\right)\\ ={v}_{k}-h\left(\frac{c}{{m}_{h}}{x}_{k}+\frac{{d}_{h}}{{m}_{h}}{v}_{k}\right)\end{array}$

with the modified damping

${d}_{h}=d+h\text{\hspace{0.17em}}c$

and the modified mass

${m}_{h}=m+h\text{\hspace{0.17em}}d+{h}^{2}c=m+h\text{\hspace{0.17em}}{d}_{h}$

If we use the same parameter values as for the explicit Euler method, we obtain the results represented in [Figure 7.13].

Obviously, the oscillating amplitude decreases. An energy consideration analogous to the explicit case now yields a decrease in energy in each oscillation period by the factor

${\left(1+{\left(h{\omega }_{0}\right)}^{2}\right)}^{-\frac{2\pi }{h{\omega }_{0}}}<1$

The behavior of the total energy is represented in [Figure 7.14] in both cases.
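A numerical check, analogous to the explicit case: an energy calculation along the lines of (7.97) gives, for the implicit Euler method in the undamped case, the per-step factor $1/\left(1+c{h}^{2}/m\right)$ (same illustrative parameters as above):

```python
m, c, d = 1.0, 50.0, 0.0
h = 0.005
x, v = 1.0, 0.0
E0 = 0.5 * c * x * x + 0.5 * m * v * v
for _ in range(1000):
    # implicit Euler step (7.98), (7.99), solved in closed form for v_{k+1}
    v = (v - h * c / m * x) / (1.0 + h * h * c / m + h * d / m)
    x = x + h * v
E = 0.5 * c * x * x + 0.5 * m * v * v
print(E / E0, (1.0 + c * h * h / m) ** -1000)   # energy decays, matching the factor
```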

This example suggests an essential general characteristic of integration methods, which will be discussed later:

1. Explicit methods insert additional (purely numerical) excitation into the system.

2. Implicit methods introduce additional (purely numerical) damping into the system.

In fact, this behavior also reoccurs in complex applications in practice.

A-stable methods are best suited for the calculation of stable problems (cp. [Section 7.1.2]). So far, however, the only $A$-stable methods we have encountered are the implicit Euler method and the Trapezoidal Rule, with consistency orders $1$ and $2$, respectively. According to the theorem of Dahlquist, there is, however, no “better” method.

Theorem 7.1 (Dahlquist):

1. Explicit multi-step methods are never $A$-stable.

2. The order of an $A$-stable implicit multi-step method is at most $2$.

3. The Trapezoidal Rule is an $A$-stable method of second order with the lowest error constants.

Therefore, we introduce a further definition which, even though it weakens the $A$-stability requirement, can be expediently utilized in many applications.

Definition 7.6:

A method is $A\left(\alpha \right)$-stable if its stability area comprises a sector of the following form

 ${A}_{\alpha }:=\left\{z\in C:|\mathrm{arg}\left(-z\right)|\le \alpha \right\}.$ (7.100)

For $\alpha =90{}^{\circ }$, $A\left(\alpha \right)$-stability coincides with $A$-stability (see [Figure 7.15]).

The BDF-methods are e.g. $A\left(\alpha \right)$-stable with the opening angle $\alpha$ ([Table 7.2]).

Table 7.2.  Opening angle α for different k
| $k$ | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| $\alpha \left[{}^{\circ }\right]$ | 90 | 90 | 86 | 73.3 | 51.8 | 17.8 |

### 7.1.13.  Implicit Runge-Kutta Methods

In contrast to the explicit Runge-Kutta methods treated in [Section 7.1.6], the matrix $\Gamma$ in equation (7.43) may now be fully populated. This means that the values ${k}_{i}$ in equation (7.42) can no longer be calculated sequentially; instead, a non-linear equation system has to be solved in each integration step.

Example 7.8: Implicit Euler Method

The implicit Euler method

${x}_{n+1}={x}_{n}+h\text{\hspace{0.17em}}f\left({x}_{n+1},{t}_{n+1}\right)$

is obviously a one-stage implicit Runge-Kutta method.

The Trapezoidal Rule

${x}_{n+1}={x}_{n}+\frac{h}{2}\left(f\left({x}_{n},{t}_{n}\right)+f\left({x}_{n+1},{t}_{n+1}\right)\right)$

can be regarded as a two-stage implicit Runge-Kutta method:

${k}_{1}=f\left({x}_{n},{t}_{n}\right)$

${k}_{2}=f\left({x}_{n}+h\left(\frac{1}{2}{k}_{1}+\frac{1}{2}{k}_{2}\right),{t}_{n}+h\right)$

${x}_{n+1}={x}_{n}+h\left(\frac{1}{2}{k}_{1}+\frac{1}{2}{k}_{2}\right)$

Butcher’s scheme is according to (7.43) as follows

Implicit Euler method

$\begin{array}{cc}1& 1\\ & 1\end{array}$

Trapezoidal Rule

$\begin{array}{ccc}0& 0& 0\\ 1& 0.5& 0.5\\ & 0.5& 0.5\end{array}$

A great disadvantage of implicit Runge-Kutta methods is the necessity to solve a non-linear equation system for the $m·N$ variables ${k}_{i}$ (with $N$ the dimension of the differential equation system). On the other hand, the additional coefficients in (7.43) can be used in order to

1. achieve a higher consistency order with an identical number of stages $m$,

2. considerably improve the stability of the numerical method (up to “almost” $A$-stable behavior).

Example 7.9: The Midpoint Method

The “Midpoint Method”

${k}_{1}=f\left({x}_{n}+\frac{h}{2}{k}_{1},{t}_{n}+\frac{h}{2}\right)$

${x}_{n+1}={x}_{n}+h\text{\hspace{0.17em}}{k}_{1}$

is obviously a one-stage method which nevertheless has a consistency order of $2$.
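A sketch of the Midpoint Method, with the implicit stage equation solved here by simple fixed-point iteration (adequate for non-stiff problems and small $h$; for stiff problems one would use Newton's method instead):

```python
import math

def implicit_midpoint_step(f, x, t, h, iters=50):
    """One step of the implicit midpoint rule:
    k1 = f(x + h/2 * k1, t + h/2),  x_{n+1} = x + h * k1."""
    k1 = f(x, t)                               # starting guess
    for _ in range(iters):                     # fixed-point iteration
        k1 = f(x + 0.5 * h * k1, t + 0.5 * h)
    return x + h * k1

x, t, h = 1.0, 0.0, 0.01
for _ in range(100):                           # integrate x' = -x up to t = 1
    x = implicit_midpoint_step(lambda x, t: -x, x, t, h)
    t += h
err = abs(x - math.exp(-1.0))                  # small: consistency order 2
```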

Within the framework of this lecture, we will not address the construction and application of implicit Runge-Kutta methods in detail. Detailed information is given by Jumann, M. (2004), for example.

### 7.1.14.  Comparison of Methods for Numerical Solution of Initial Value Problems (IVP)

The described methods can be divided into the following four classes

1. Euler method,

2. Runge-Kutta method,

3. Multi-step procedure,

4. BDF method

Furthermore, there exist other classes (e.g. extrapolation technique) which will not be discussed in this lecture.

Runge-Kutta methods and multi-step procedures allow for error estimation and step size control. Additionally, multi-step procedures allow variation of the method order. Commercially distributed program libraries (e.g. IMSL, NAG) offer sophisticated computer programs for all classes. Furthermore, there are programs with published source code which are freely available (e.g. Shampine and Gordon, 1983).