# Numerical Analysis: Brief Overview


## Interpolation

### Lagrange polynomial

Given a set of $k+1$ data points

$\left({x}_{0},{y}_{0}\right),\dots ,\left({x}_{j},{y}_{j}\right),\dots ,\left({x}_{k},{y}_{k}\right)$

where no two $x_j$ are the same, the interpolation polynomial in the Lagrange form is a linear combination

$L\left(x\right):=\sum _{j=0}^{k}{y}_{j}{\ell }_{j}\left(x\right)$

of Lagrange basis polynomials

${\ell }_{j}\left(x\right):=\prod _{\begin{array}{c}0\le m\le k\\ m\ne j\end{array}}\frac{x-{x}_{m}}{{x}_{j}-{x}_{m}}=\frac{\left(x-{x}_{0}\right)}{\left({x}_{j}-{x}_{0}\right)}\cdots \frac{\left(x-{x}_{j-1}\right)}{\left({x}_{j}-{x}_{j-1}\right)}\frac{\left(x-{x}_{j+1}\right)}{\left({x}_{j}-{x}_{j+1}\right)}\cdots \frac{\left(x-{x}_{k}\right)}{\left({x}_{j}-{x}_{k}\right)},$
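The sum $L(x) = \sum_j y_j \ell_j(x)$ can be evaluated directly from these definitions; a minimal Python sketch (function name and sample data are my own, not from the text):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial L(x) at x."""
    total = 0.0
    for j, xj in enumerate(xs):
        # Lagrange basis polynomial l_j(x): product over all m != j
        lj = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        total += ys[j] * lj
    return total

# The interpolant reproduces each data point exactly:
xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
print(lagrange_eval(xs, ys, 1.0))  # 3.0
```

Each $\ell_j$ equals $1$ at $x_j$ and $0$ at every other node, which is why the data points are reproduced exactly.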

The remainder in the Lagrange interpolation formula is

$R(x) = f(x) - L(x)$

With

$\ell(x) = (x - x_0)(x - x_1) \cdots (x - x_k)$

the remainder can be expressed through divided differences as

$R(x) = f[x_0, \dots, x_k, x]\,\ell(x) = \ell(x)\,\frac{f^{(k+1)}(\xi)}{(k+1)!}, \qquad x_0 < \xi < x_k$

The remainder can be bounded as

$\frac{\ell \left(x\right)}{\left(k+1\right)!}\underset{{x}_{0}\le \xi \le {x}_{k}}{min}{f}^{\left(k+1\right)}\left(\xi \right)\le R\left(x\right)\le \frac{\ell \left(x\right)}{\left(k+1\right)!}\underset{{x}_{0}\le \xi \le {x}_{k}}{max}{f}^{\left(k+1\right)}\left(\xi \right)$

### Newton polynomial

#### Divided differences

The divided differences are built recursively in a triangular table; collected in an upper-triangular matrix, the first row holds the Newton coefficients:

$\begin{array}{cccc}{x}_{0}& f\left({x}_{0}\right)& & \\ & & \frac{f\left({x}_{1}\right)-f\left({x}_{0}\right)}{{x}_{1}-{x}_{0}}& \\ {x}_{1}& f\left({x}_{1}\right)& & \frac{\frac{f\left({x}_{2}\right)-f\left({x}_{1}\right)}{{x}_{2}-{x}_{1}}-\frac{f\left({x}_{1}\right)-f\left({x}_{0}\right)}{{x}_{1}-{x}_{0}}}{{x}_{2}-{x}_{0}}\\ & & \frac{f\left({x}_{2}\right)-f\left({x}_{1}\right)}{{x}_{2}-{x}_{1}}& \\ {x}_{2}& f\left({x}_{2}\right)& & ⋮\\ & & ⋮& \\ ⋮& & & ⋮\\ & & ⋮& \\ {x}_{n}& f\left({x}_{n}\right)& & \end{array}$
$\left(\begin{array}{ccccc}f\left[{x}_{0}\right]& f\left[{x}_{0},{x}_{1}\right]& f\left[{x}_{0},{x}_{1},{x}_{2}\right]& \dots & f\left[{x}_{0},\dots ,{x}_{n}\right]\\ 0& f\left[{x}_{1}\right]& f\left[{x}_{1},{x}_{2}\right]& \dots & f\left[{x}_{1},\dots ,{x}_{n}\right]\\ ⋮& ⋮& ⋮& \ddots & ⋮\\ 0& 0& 0& \dots & f\left[{x}_{n}\right]\end{array}\right)$
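The triangular scheme can be computed one column at a time; a sketch (the helper name is mine) returning the first-row entries $f[x_0], f[x_0, x_1], \dots$ that serve as Newton coefficients:

```python
def divided_differences(xs, ys):
    """Return f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n] from the table."""
    n = len(xs)
    column = list(ys)            # order-0 column: f[x_i]
    coeffs = [column[0]]
    for order in range(1, n):
        # next column of the table: f[x_i, ..., x_{i+order}]
        column = [(column[i + 1] - column[i]) / (xs[i + order] - xs[i])
                  for i in range(n - order)]
        coeffs.append(column[0])
    return coeffs

print(divided_differences([0.0, 1.0, 2.0], [1.0, 3.0, 2.0]))  # [1.0, 2.0, -1.5]
```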

Given a set of $k+1$ data points

$\left({x}_{0},{y}_{0}\right),\dots ,\left({x}_{j},{y}_{j}\right),\dots ,\left({x}_{k},{y}_{k}\right)$

where no two $x_j$ are the same, the Newton interpolation polynomial is a linear combination of Newton basis polynomials

$N\left(x\right):=\sum _{j=0}^{k}{a}_{j}{n}_{j}\left(x\right)$

with the Newton basis polynomials defined as

${n}_{j}\left(x\right):=\prod _{i=0}^{j-1}\left(x-{x}_{i}\right)$

for $j > 0$ and $n_0(x) = 1$.

The coefficients are defined as

${a}_{j}:=\left[{y}_{0},\dots ,{y}_{j}\right]$

where

$\left[{y}_{0},\dots ,{y}_{j}\right]$

is the notation for divided differences.

Thus the Newton polynomial can be written as

$N\left(x\right)=\left[{y}_{0}\right]+\left[{y}_{0},{y}_{1}\right]\left(x-{x}_{0}\right)+\cdots +\left[{y}_{0},\dots ,{y}_{k}\right]\left(x-{x}_{0}\right)\left(x-{x}_{1}\right)\cdots \left(x-{x}_{k-1}\right).$
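The nested structure of the Newton basis allows Horner-style evaluation; a sketch, where the coefficients $[1.0, 2.0, -1.5]$ used below are illustrative divided differences of my own sample data, not values from the text:

```python
def newton_eval(xs, coeffs, x):
    """Evaluate N(x) = a_0 + a_1 (x - x_0) + ... by nested multiplication."""
    result = coeffs[-1]
    for j in range(len(coeffs) - 2, -1, -1):
        # peel off one factor (x - x_j) per step, innermost first
        result = result * (x - xs[j]) + coeffs[j]
    return result

# a_j are divided differences of the data (0,1), (1,3), (2,2):
print(newton_eval([0.0, 1.0, 2.0], [1.0, 2.0, -1.5], 1.0))  # 3.0
```

Unlike the Lagrange form, adding a new data point only appends one coefficient; the earlier ones are unchanged.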

Since the Newton form produces the same interpolation polynomial as the Lagrange form, the remainder can be bounded in the same way:

$\frac{\ell \left(x\right)}{\left(k+1\right)!}\underset{{x}_{0}\le \xi \le {x}_{k}}{min}{f}^{\left(k+1\right)}\left(\xi \right)\le R\left(x\right)\le \frac{\ell \left(x\right)}{\left(k+1\right)!}\underset{{x}_{0}\le \xi \le {x}_{k}}{max}{f}^{\left(k+1\right)}\left(\xi \right)$

### Hermite interpolation

Hermite interpolation matches derivative values as well as function values. It is obtained from the divided-difference scheme by repeating each interpolation point once per derivative condition, with the coincident entries defined by $f[x_i, \dots, x_i] = \frac{f^{(m)}(x_i)}{m!}$ for a point repeated $m+1$ times.

### Runge's phenomenon

In the mathematical field of numerical analysis, Runge's phenomenon is a problem of oscillation at the edges of an interval that occurs when using polynomial interpolation with polynomials of high degree over a set of equispaced interpolation points. The discovery was important because it shows that going to higher degrees does not always improve accuracy.
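A small numeric illustration using the classic example $f(x) = 1/(1 + 25x^2)$ on $[-1, 1]$ (the standard demonstration function; the node counts and helper names are my own choices):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the interpolation polynomial through (xs, ys) at x."""
    total = 0.0
    for j, xj in enumerate(xs):
        lj = 1.0
        for m, xm in enumerate(xs):
            if m != j:
                lj *= (x - xm) / (xj - xm)
        total += ys[j] * lj
    return total

def runge_max_error(n):
    """Max |f - L| on a fine grid for n+1 equispaced nodes on [-1, 1]."""
    f = lambda v: 1.0 / (1.0 + 25.0 * v * v)
    xs = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    ys = [f(v) for v in xs]
    grid = [-1.0 + 2.0 * i / 1000 for i in range(1001)]
    return max(abs(f(v) - lagrange_eval(xs, ys, v)) for v in grid)

# The maximum error grows as the degree of the equispaced interpolant rises:
print([round(runge_max_error(n), 2) for n in (5, 10, 15)])
```

The oscillation concentrates near the endpoints; clustering the nodes there (e.g. Chebyshev nodes) avoids the phenomenon.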

### Linear interpolation

If the two known points are given by the coordinates $(x_0, y_0)$ and $(x_1, y_1)$, the linear interpolant is the straight line between these points. For a value $x$ in the interval $(x_0, x_1)$, the value $y$ along the straight line is given by the equation of slopes

$\frac{y-{y}_{0}}{x-{x}_{0}}=\frac{{y}_{1}-{y}_{0}}{{x}_{1}-{x}_{0}}$
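Solving the slope equation for $y$ gives the usual interpolation formula $y = y_0 + (y_1 - y_0)\frac{x - x_0}{x_1 - x_0}$; a minimal sketch (function name is mine):

```python
def lerp(x0, y0, x1, y1, x):
    """Linear interpolant through (x0, y0) and (x1, y1), evaluated at x."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

print(lerp(0.0, 0.0, 2.0, 4.0, 0.5))  # 1.0
```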

### Interpolation remainder

#### Polynomial remainder

Example: construct the polynomial matching the data

| $x$ | 0 | 1 | 2 |
|---|---|---|---|
| $f(x)$ | 0 | 1 | 1 |
| $f'(x)$ | 0 | 1 | |

With five conditions the interpolant has degree $\le 4$:

$P(x) = \frac{9}{4}x^2 - \frac{3}{2}x^3 + \frac{1}{4}x^4$
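A quick numeric check that $P(x) = \frac{9}{4}x^2 - \frac{3}{2}x^3 + \frac{1}{4}x^4$ (note the last term must have degree $4$: five conditions determine a polynomial of degree $\le 4$) reproduces all five values in the table:

```python
def P(x):
    """Candidate Hermite interpolant for the data above."""
    return 9 / 4 * x**2 - 3 / 2 * x**3 + 1 / 4 * x**4

def dP(x):
    """Derivative P'(x)."""
    return 9 / 2 * x - 9 / 2 * x**2 + x**3

print(P(0), P(1), P(2))  # 0.0 1.0 1.0
print(dP(0), dP(1))      # 0.0 1.0
```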

Remainder: assume

$R(x) = k(x)\,x^2(x-1)^2(x-2)$

and define the auxiliary function

$\phi(t) = R(t) - k(x)\,t^2(t-1)^2(t-2)$

Then $\phi(t)$ vanishes at $t = 0, 1, 2, x$, and $\phi'(t)$ also vanishes at $t = 0$ and $t = 1$ (the derivative conditions), so by Rolle's theorem there are at least $5$ points at which

$\phi'(t) = 0$

Applying Rolle's theorem repeatedly, there exists a $\xi_x$ with $\phi^{(5)}(\xi_x) = 0$. Differentiating five times,

$\begin{array}{rl}{\phi }^{\left(5\right)}\left(t\right)& ={R}^{\left(5\right)}\left(t\right)-k\left(x\right)\left[{t}^{2}\left(t-1{\right)}^{2}\left(t-2\right){\right]}^{\left(5\right)}\\ & ={f}^{\left(5\right)}\left(t\right)-{P}^{\left(5\right)}\left(t\right)-k\left(x\right)\cdot 5!\\ & ={f}^{\left(5\right)}\left(t\right)-k\left(x\right)\cdot 5!\end{array}$

Substituting $t = \xi_x$,

${\phi }^{\left(5\right)}\left({\xi }_{x}\right)={f}^{\left(5\right)}\left({\xi }_{x}\right)-k\left(x\right)\cdot 5!=0\phantom{\rule{0ex}{0ex}}⇓\phantom{\rule{0ex}{0ex}}k\left(x\right)=\frac{{f}^{\left(5\right)}\left({\xi }_{x}\right)}{5!}$

Finally,

$R(x) = \frac{f^{(5)}(\xi_x)}{5!}\,x^2(x-1)^2(x-2)$

#### Linear remainder

Here $f(x)$ is the original function and $I(x)$ is its piecewise linear interpolation polynomial on the partition $a = x_0 < x_1 < \cdots < x_n = b$.

Theorem:

$h = \sup_{0 \le k \le n-1} (x_{k+1} - x_k)$

$\max_{a \le x \le b} |f(x) - I(x)| \le \frac{h^2}{8} \max_{a \le x \le b} |f^{(2)}(x)|$

Proof:

By the polynomial interpolation remainder, on each subinterval $[x_k, x_{k+1}]$,

$|f(x) - I(x)| = \left|\frac{f^{(2)}(\xi)}{2!}\right| \cdot |(x - x_k)(x - x_{k+1})| \le \max_{x_k \le x \le x_{k+1}} \left|\frac{(x - x_k)(x - x_{k+1})}{2}\right| \cdot \max_{x_k \le \xi \le x_{k+1}} |f^{(2)}(\xi)|$

On one hand,

$\left\{\frac{(x - x_k)(x - x_{k+1})}{2}\right\}' = 0 \;\Rightarrow\; x = \frac{x_k + x_{k+1}}{2}$

so with $h_{k+1} = x_{k+1} - x_k$,

$\max_{x_k \le x \le x_{k+1}} \left|\frac{(x - x_k)(x - x_{k+1})}{2}\right| \le \frac{h_{k+1}^2}{8}$

On the other hand,

$\underset{{x}_{k}\le x\le {x}_{k+1}}{max}|{f}^{\left(2\right)}\left(\xi \right)|=|{f}^{\left(2\right)}\left(\xi \right)|\le \underset{{x}_{k}\le x\le {x}_{k+1}}{max}|{f}^{\left(2\right)}\left(x\right)|$

It follows that, on each subinterval,

$\max_{x_k \le x \le x_{k+1}} |f(x) - I(x)| \le \frac{h_{k+1}^2}{8} \max_{x_k \le x \le x_{k+1}} |f^{(2)}(x)|$

and taking the supremum over all subintervals gives the stated bound.
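The bound can be checked numerically; a sketch using $f(x) = \sin x$ on $[0, \pi]$ (my own choice of test function), for which $\max|f''| = 1$:

```python
import math

def piecewise_linear(xs, ys, x):
    """Evaluate the piecewise linear interpolant I(x)."""
    for k in range(len(xs) - 1):
        if xs[k] <= x <= xs[k + 1]:
            t = (x - xs[k]) / (xs[k + 1] - xs[k])
            return ys[k] + t * (ys[k + 1] - ys[k])
    return ys[-1]  # guard against floating-point rounding at the right endpoint

n = 10
xs = [math.pi * i / n for i in range(n + 1)]
ys = [math.sin(v) for v in xs]
h = math.pi / n  # equal spacing, so h = sup of subinterval lengths

grid = [math.pi * i / 2000 for i in range(2001)]
err = max(abs(math.sin(v) - piecewise_linear(xs, ys, v)) for v in grid)
bound = h * h / 8  # (h^2 / 8) * max|f''|, with max|f''| = max|sin| = 1

print(err <= bound)  # True: the observed error respects the bound
```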

## Approximation theory

### Approximation criteria

Minimax approximation

$‖f\left(x\right)-y\left(x\right){‖}_{\mathrm{\infty }}=\underset{a\le x\le b}{max}|f\left(x\right)-y\left(x\right)|$

Least-squares approximation

$\|f(x) - y(x)\|_2^2 = \int_a^b \rho(x)\left[f(x) - y(x)\right]^2 dx$

### Inner product space

Given a weight function $\rho(x)$, the inner product between $f(x)$ and $g(x)$ is

$\left(f,g\right)={\int }_{a}^{b}\rho \left(x\right)f\left(x\right)g\left(x\right)dx$
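The integral can be approximated by quadrature; a midpoint-rule sketch with $\rho(x) = 1$ on $[-1, 1]$ (the weight, interval, and function names here are my assumptions):

```python
def inner_product(f, g, a, b, rho=lambda x: 1.0, n=100_000):
    """Midpoint-rule approximation of (f, g) = integral of rho*f*g over [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h  # midpoint of the i-th subinterval
        total += rho(x) * f(x) * g(x) * h
    return total

# x and x^2 are orthogonal on [-1, 1] with weight 1 (odd integrand):
print(abs(inner_product(lambda x: x, lambda x: x * x, -1.0, 1.0)) < 1e-9)  # True
```

Orthogonality with respect to such an inner product is what underlies families like the Legendre and Chebyshev polynomials.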

Cauchy-Schwarz inequality

$|\left(f,g\right)|\le ‖f{‖}_{2}‖g{‖}_{2}$

Triangle inequality

$‖f+g{‖}_{2}\le ‖f{‖}_{2}+‖g{‖}_{2}$

Parallelogram law

$\|f + g\|_2^2 + \|f - g\|_2^2 = 2\left(\|f\|_2^2 + \|g\|_2^2\right)$

Linear independence: the functions

$\{\varphi_1(x), \varphi_2(x), \cdots, \varphi_n(x)\}$

are linearly independent if

$a_1\varphi_1(x) + a_2\varphi_2(x) + \cdots + a_n\varphi_n(x) = 0 \iff a_1 = a_2 = \cdots = a_n = 0$