Numerical Methods


LU decomposition

Matrix $A$ can be written as the product $A=LU$ where $L$ is a lower triangular matrix and $U$ is an upper triangular matrix of the form,

\[\begin{equation} \left[ \begin{array}{cccc} A_{11} & A_{12} & A_{13} & A_{14} \\ A_{21} & A_{22} & A_{23} & A_{24} \\ A_{31} & A_{32} & A_{33} & A_{34} \\ A_{41} & A_{42} & A_{43} & A_{44} \end{array} \right]= \left[ \begin{array}{cccc} 1 & 0 & 0 & 0 \\ L_{21} & 1 & 0 & 0 \\ L_{31} & L_{32} & 1 & 0 \\ L_{41} & L_{42} & L_{43} & 1 \end{array} \right]\left[ \begin{array}{cccc} U_{11} & U_{12} & U_{13} & U_{14} \\ 0 & U_{22} & U_{23} & U_{24} \\ 0 & 0 & U_{33} & U_{34} \\ 0 & 0 & 0 & U_{44} \end{array} \right]. \end{equation}\]

Matrix multiplication yields,

\[\begin{equation} \begin{array}{l} A_{11} = U_{11} \\ A_{21} = L_{21}U_{11} \\ A_{31} = L_{31}U_{11}\\ A_{41} = L_{41}U_{11}\\ \end{array} \qquad \begin{array}{l} A_{12} = U_{12} \\ A_{22} = L_{21}U_{12}+U_{22} \\ A_{32} = L_{31}U_{12}+L_{32}U_{22} \\ A_{42} = L_{41}U_{12}+L_{42}U_{22} \\ \end{array} \qquad \begin{array}{l} A_{13} = U_{13} \\ A_{23} = L_{21}U_{13}+U_{23} \\ A_{33} = L_{31}U_{13}+L_{32}U_{23} +U_{33}\\ A_{43} = L_{41}U_{13}+L_{42}U_{23} +L_{43}U_{33}\\ \end{array} \qquad \begin{array}{l} A_{14} = U_{14} \\ A_{24} = L_{21}U_{14}+U_{24} \\ A_{34} = L_{31}U_{14}+L_{32}U_{24} + U_{34}\\ A_{44} = L_{41}U_{14}+L_{42}U_{24} + L_{43}U_{34}+U_{44}\\ \end{array} \end{equation}\]

These equations can be solved starting in the upper-left corner: first for $U_{1j}$ in the first row, then for $L_{i1}$ in the first column, then for $U_{2j}$ in the second row, then for $L_{i2}$ in the second column, and so on. Code that performs LU decomposition is shown below.
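A minimal Python sketch of this procedure is given here for illustration (the function name lu_decompose, the list-of-lists storage, and the absence of pivoting are choices made for this sketch, not features of the app):

```python
# Doolittle LU decomposition without pivoting: A = L U with a unit diagonal in L.
# A is assumed to be a non-singular square matrix given as a list of lists.
def lu_decompose(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0                                  # unit diagonal of L
        for j in range(i, n):                          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):                      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U
```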

Since the matrix elements $A_{ij}$ are only used once in the calculation, it is possible to store the elements $L_{ij}$ and $U_{ij}$ in the same memory locations where $A_{ij}$ was stored. Code that generates the $LU$ matrix is shown below. It is more compact than the version above and uses less memory, but it is harder to read.
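Under the same assumptions, an in-place sketch overwrites $A$ with the elements of $L$ below the diagonal and the elements of $U$ on and above it (the unit diagonal of $L$ is not stored):

```python
# In-place LU decomposition: L (strictly below the diagonal) and U (on and
# above the diagonal) are stored in the memory of A; no pivoting is performed.
def lu_decompose_inplace(A):
    n = len(A)
    for i in range(n):
        for j in range(i, n):                          # row i of U
            A[i][j] -= sum(A[i][k] * A[k][j] for k in range(i))
        for j in range(i + 1, n):                      # column i of L
            A[j][i] = (A[j][i] - sum(A[j][k] * A[k][i] for k in range(i))) / A[i][i]
    return A
```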

LU-decomposition

This app performs the LU decomposition of a square matrix and solves a set of linear inhomogeneous equations $A\vec{x}=\vec{b}$.

[App input and output fields: the matrix $A$, its factorization $LU$, the solution vector $\vec{x}$, and the right-hand side $\vec{b}$.]
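Once the factorization $A=LU$ is known, the system $A\vec{x}=\vec{b}$ is solved in two steps: forward substitution for $L\vec{y}=\vec{b}$, followed by back substitution for $U\vec{x}=\vec{y}$. A minimal sketch using the factors from lu_decompose above (the name lu_solve is an illustrative choice, not taken from the app):

```python
# Solve A x = b from the factors L and U: forward substitution (L y = b),
# then back substitution (U x = y). L is assumed to have a unit diagonal.
def lu_solve(L, U, b):
    n = len(b)
    y = [0.0] * n
    for i in range(n):                                 # forward substitution
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                     # back substitution
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x
```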

Minimizing the rounding error through partial pivoting

The numerical results obtained are not affected by methodological error (LU decomposition is a direct method). All deviations from the 'true' results must therefore come from rounding errors. We now give a concrete example. Consider the following set of three equations:

\begin{eqnarray}\label{lin14} x_{1} + 5923181 x_{2} + 1608 x_{3} = 5924790 \nonumber\\ 5923181 x_{1} + 337116 x_{2} - 7 x_{3} = 6260290 \\ 6114 x_{1} + 2 x_{2} + 9101372 x_{3} = 9107488 \nonumber \end{eqnarray}

The exact solution of this system is: $x_{1}=x_{2}=x_{3}=1$. This can easily be checked by substituting this solution into the equations.

The numerical solution yields the following LU matrix and solution vector:

\begin{equation} LU =\begin{bmatrix}1&5923181&1608\\5923181&-35084072821645&-9524475055\\6114&0.001032215638591928&9101372.101149714\end{bmatrix}, \nonumber \end{equation} \begin{equation} \vec{x} =\begin{bmatrix}0.9999999998590283\\1\\1.0000000000000877\end{bmatrix}. \nonumber \end{equation}

We can see the effect of the rounding error. If we now change the order of the equations to 3/2/1, which in principle should have no influence on the result, the same program returns:

\begin{equation} LU =\begin{bmatrix}6114&2&9101372\\968.7898266274125&335178.4203467452&-8817316608.951586\\0.00016355904481517829&17.671725385975915&155817197873.98508\end{bmatrix}, \nonumber \end{equation} \begin{equation} \vec{x} =\begin{bmatrix}0.9999999999993907\\1.0000000000070886\\1.0000000000000004\end{bmatrix}. \nonumber \end{equation}

The result is now different because of the rounding errors. If we reorder the rows in the sequence 2/1/3, we obtain the correct solution up to machine precision:

\begin{equation} LU =\begin{bmatrix}5923181&337116&-7\\1.6882820227847166e-7&5923180.943085312&1608.0000011817974\\0.0010322156287305756&-0.000058410574861642146&9101372.101149714\end{bmatrix}, \nonumber \end{equation} \begin{equation} \vec{x} =\begin{bmatrix}1\\1\\1\end{bmatrix}. \nonumber \end{equation}

This means that the order in which the equations appear decisively affects the size of the rounding error and the quality of the numerical solution.
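This can be checked with the sketches above by feeding the rows to lu_decompose and lu_solve in different orders; the exact digits depend on the floating-point details of the implementation, but the computed solution changes with the ordering:

```python
# Hypothetical experiment: solve the example system with the rows in
# different orders and compare the computed solutions.
A = [[1.0,       5923181.0, 1608.0],
     [5923181.0, 337116.0,  -7.0],
     [6114.0,    2.0,       9101372.0]]
b = [5924790.0, 6260290.0, 9107488.0]

for order in ([0, 1, 2], [2, 1, 0], [1, 0, 2]):        # rows 1/2/3, 3/2/1, 2/1/3
    Ap = [A[i] for i in order]
    bp = [b[i] for i in order]
    L, U = lu_decompose(Ap)
    print(order, lu_solve(L, U, bp))
```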

What is the 'right' order? Round-off errors occur when large, nearly equal numbers are subtracted from each other. In the construction of the LU matrix we divide by the diagonal elements $U_{ii}$, so large elements appear if $U_{ii}$ is small. Even worse than the round-off error is the fact that if the upper-left element of the matrix $A$ is zero, our LU-decomposition routine divides by zero and fails. To prevent this from happening, we rearrange the rows of the equations so that the elements along the diagonal of the matrix are as large as possible. This is known as pivoting. It is possible to rearrange both the rows and the columns. If two rows are swapped, this just changes the order in which the equations appear and has no effect on the solution. If two columns are swapped, the variables are relabeled: for instance, if column 1 and column 3 are swapped, the variable that was named $x_1$ is now named $x_3$ and the old $x_3$ is now named $x_1$. This also has no effect on the solution. Often only the rows are exchanged during pivoting; this is known as partial pivoting. The function pivot(A) defined in the code below puts the largest elements of matrix $A$ along the diagonal. It looks down the first column for the element with the largest absolute value and swaps that row to the first row. Then it looks down the second column, starting from the second row, for the element with the largest absolute value and swaps that row with the second row, and so on.

The function pivot(A) accepts a square matrix as its argument and returns an object that contains two square matrices, $P$ and $PA$. $P$ is the permutation matrix that rearranges the rows of $A$. A permutation matrix has exactly one element equal to $1$ in every row and every column; all other elements are zero. The other matrix that pivot(A) returns is the matrix product of $P$ and $A$. This matrix is passed to the LU-decomposition routine.
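A minimal sketch of such a pivot function is given here (a dict stands in for the object described above; the behavior follows the description in the text, column by column of the original matrix):

```python
# Partial pivoting: reorder the rows of A so that, column by column, the
# element with the largest absolute value among the remaining rows is moved
# onto the diagonal. Returns the permutation matrix P and the product PA.
def pivot(A):
    n = len(A)
    PA = [row[:] for row in A]                         # work on a copy of A
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for j in range(n):
        # row index (>= j) of the largest |element| in column j
        p = max(range(j, n), key=lambda i: abs(PA[i][j]))
        if p != j:
            PA[j], PA[p] = PA[p], PA[j]                # swap the rows of PA
            P[j], P[p] = P[p], P[j]                    # record the swap in P
    return {"P": P, "PA": PA}
```

When $PA$ is decomposed instead of $A$, the right-hand side must be reordered in the same way, i.e. $\vec{b}$ is replaced by $P\vec{b}$.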

In general, especially for higher-order systems, there is little chance of obtaining a correct result without using some kind of pivoting. There are, however, sets of equations for which the LU decomposition is stable even without pivoting. This is the case if the matrix is symmetric, tridiagonal, cyclically tridiagonal, diagonally dominant, or positive definite.