# Matrix Analysis: A Brief Overview

## Introduction

### Image and Kernel (subspace)

Definition. The image of a function consists of all the values the function assumes. If $f:X\to Y$ is a function from $X$ to $Y$, then

$\operatorname{im}(f)=\{f(x):x\in X\}$

Notice that $\operatorname{im}(f)$ is a subset of $Y$.

Definition. The kernel of a function whose range is $\mathbb{R}^n$ consists of all the values in its domain at which the function assumes the value $0$. If $f:X\to\mathbb{R}^n$ is a function from $X$ to $\mathbb{R}^n$, then

$\ker(f)=\{x\in X: f(x)=0\}$

Notice that $\ker(f)$ is a subset of $X$. Also, if $T(x)=Ax$ is a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$, then $\ker(T)$ (also denoted $\ker(A)$) is the set of solutions to the equation $Ax=0$.

Theorem. Let $A$ be an $n\times n$ matrix. Then the following statements are equivalent.

1. $A$ is invertible
2. The linear system $Ax=b$ has a unique solution $x$ for every $b\in\mathbb{R}^n$
3. $\operatorname{rref}(A)=I_n$
4. $\operatorname{rank}(A)=n$
5. $\operatorname{im}(A)=\mathbb{R}^n$
6. $\ker(A)=\{0\}$
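As a quick numeric illustration of items 1, 2 and 6 (a sketch of my own, for the $2\times 2$ case only): when $\det(A)\neq 0$, Cramer's rule produces the unique solution of $Ax=b$, and $Ax=0$ forces $x=0$.

```python
# Sketch (not from the notes): for a 2x2 matrix, det(A) != 0 goes together
# with a unique solution of Ax = b and with ker(A) = {0}.

def det2(A):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def solve2(A, b):
    """Solve Ax = b by Cramer's rule; valid only when det2(A) != 0."""
    d = det2(A)
    x0 = (b[0] * A[1][1] - A[0][1] * b[1]) / d
    x1 = (A[0][0] * b[1] - b[0] * A[1][0]) / d
    return [x0, x1]

A = [[2.0, 1.0], [1.0, 3.0]]     # det = 5, so A is invertible
x = solve2(A, [3.0, 5.0])
# Check the solution by substituting back into Ax = b.
assert abs(A[0][0] * x[0] + A[0][1] * x[1] - 3.0) < 1e-12
assert abs(A[1][0] * x[0] + A[1][1] * x[1] - 5.0) < 1e-12
# The only solution of Ax = 0 is x = 0 (Cramer's rule gives 0/det).
assert solve2(A, [0.0, 0.0]) == [0.0, 0.0]
```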

### Invariant subspace

Consider a linear mapping $T$.

An invariant subspace $W$ of $T$ has the property that every vector $v\in W$ is transformed by $T$ into a vector also contained in $W$. This can be stated as

$T(v)\in W \quad\text{for all } v\in W, \text{ i.e., } T(W)\subseteq W.$

### Vector Space

A vector space over a field $F$ is a set $V$ equipped with two operations, vector addition and scalar multiplication. These two operations must satisfy the eight axioms listed below, where the equations must hold for every $u$, $v$ and $w$ in $V$, and every $a$ and $b$ in $F$.

1. Associativity of vector addition

$u+(v+w)=(u+v)+w$

2. Commutativity of vector addition

$u+v=v+u$

3. Identity element of vector addition

There exists an element $0\in V$, called the zero vector, such that $v+0=v$ for all $v\in V$.

4. Inverse elements of vector addition

For every $v\in V$, there exists an element $-v\in V$, called the additive inverse of $v$, such that $v+(-v)=0$.

5. Compatibility of scalar multiplication with field multiplication

$a(bv)=(ab)v$

6. Identity element of scalar multiplication

$1v=v$, where $1$ denotes the multiplicative identity in $F$.

7. Distributivity of scalar multiplication with respect to vector addition

$a(u+v)=au+av$

8. Distributivity of scalar multiplication with respect to field addition

$(a+b)v=av+bv$

Exercise:

Let $V$ be the set of positive real numbers, that is, $V=\{x\in\mathbb{R}\mid x>0\}$, and let $F=\mathbb{R}$, where vector addition is defined as $x\oplus y=xy$ and scalar multiplication as $\alpha\otimes x=x^{\alpha}$. Prove that this is a vector space.
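Before attempting the proof, the exercise can be spot-checked numerically (a sketch of my own; sampling values is evidence, not a proof):

```python
# Numeric spot-check: V = positive reals with x (+) y = x*y and a (x) x = x**a.
# We sample values and test each of the eight axioms.

import math
import random

def vadd(x, y):      # vector "addition" on V
    return x * y

def smul(a, x):      # scalar "multiplication" on V
    return x ** a

random.seed(0)
for _ in range(100):
    u, v, w = (random.uniform(0.1, 10.0) for _ in range(3))
    a, b = random.uniform(-3, 3), random.uniform(-3, 3)
    close = lambda p, q: math.isclose(p, q, rel_tol=1e-9)
    assert close(vadd(u, vadd(v, w)), vadd(vadd(u, v), w))   # associativity
    assert close(vadd(u, v), vadd(v, u))                     # commutativity
    assert close(vadd(v, 1.0), v)                            # zero vector is 1
    assert close(vadd(v, 1.0 / v), 1.0)                      # inverse is 1/v
    assert close(smul(a, smul(b, v)), smul(a * b, v))        # a(bv) = (ab)v
    assert close(smul(1.0, v), v)                            # 1v = v
    assert close(smul(a, vadd(u, v)), vadd(smul(a, u), smul(a, v)))
    assert close(smul(a + b, v), vadd(smul(a, v), smul(b, v)))
```

The zero vector is the number $1$ and the additive inverse of $x$ is $1/x$, which is why positivity of the elements matters.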

## Lambda matrices and the Jordan form

### Smith normal form

$\begin{pmatrix}\alpha_{1}& & & & & & \\ & \alpha_{2}& & & & & \\ & & \ddots & & & & \\ & & & \alpha_{r}& & & \\ & & & & 0& & \\ & & & & & \ddots & \\ & & & & & & 0\end{pmatrix}$

where $d_i(A)$ (called the $i$-th determinant divisor) equals the greatest common divisor of all $i\times i$ minors of the matrix $A$, and the invariant factors are $\alpha_i = d_i(A)/d_{i-1}(A)$ (with $d_0(A)=1$).
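A brute-force sketch of the determinant divisors for a small integer matrix (the helpers `det` and `determinant_divisor` are my own names; the invariant factors here come out as $2, 6, 12$, which I verified by computing the minors directly):

```python
# Compute the determinant divisors d_i(A) = gcd of all i x i minors, and from
# them the invariant factors alpha_i = d_i / d_{i-1} of the Smith normal form.

from itertools import combinations
from math import gcd
from functools import reduce

def det(M):
    """Cofactor-expansion determinant of a square integer matrix."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def determinant_divisor(A, i):
    """gcd of all i x i minors of A."""
    rows, cols = range(len(A)), range(len(A[0]))
    minors = [det([[A[r][c] for c in cs] for r in rs])
              for rs in combinations(rows, i) for cs in combinations(cols, i)]
    return reduce(gcd, (abs(m) for m in minors))

A = [[2, 4, 4],
     [-6, 6, 12],
     [10, -4, -16]]
d = [determinant_divisor(A, i) for i in (1, 2, 3)]    # d_1, d_2, d_3
alphas = [d[0]] + [d[i] // d[i - 1] for i in (1, 2)]  # invariant factors
```

This is exponential in the matrix size; real computer algebra systems obtain the Smith normal form by row and column operations instead.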

### Jordan normal form

In general, a square complex matrix $A$ is similar to a block diagonal matrix

$J=\begin{pmatrix}J_1 & & \\ & \ddots & \\ & & J_p\end{pmatrix},\qquad J_i=\begin{pmatrix}\lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1\\ & & & \lambda_i\end{pmatrix},$

where each Jordan block $J_i$ carries an eigenvalue $\lambda_i$ of $A$ on its diagonal and ones on its superdiagonal.

## Inner product

### Properties

Symmetry:

$\langle v_1,v_2\rangle=\langle v_2,v_1\rangle$

Linearity:

$\langle v_1,v_2k+v_3l\rangle=\langle v_1,v_2\rangle k+\langle v_1,v_3\rangle l$

Positive definiteness:

$\langle v,v\rangle\geqslant 0$, with equality if and only if $v=0$.

### Unitary spaces

Vectors now have complex entries: $v\in\mathbb{C}^{n}$.

Conjugate symmetry (Hermitian symmetry):

$\langle v_1,v_2\rangle=\overline{\langle v_2,v_1\rangle}$

Linearity (in the second argument):

$\langle v_1,v_2k+v_3l\rangle=\langle v_1,v_2\rangle k+\langle v_1,v_3\rangle l$

Positive definiteness:

$\langle v,v\rangle\geqslant 0$, with equality if and only if $v=0$.

Consequence (conjugate-linearity in the first argument):

$\langle v_1k,v_2l\rangle=\overline{k}\langle v_1,v_2\rangle l$

Standard inner product on $\mathbb{C}^n$:

$\langle v_1,v_2\rangle=\overline{v_1}^{T}v_2$
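These axioms can be spot-checked numerically for the standard inner product $\langle u,v\rangle=\overline{u}^{T}v$ (a sketch of my own; `inner` is a hypothetical helper name):

```python
# Numeric check of the unitary-space axioms for <u, v> = conj(u)^T v on C^2.

def inner(u, v):
    """Standard Hermitian inner product, conjugate-linear in the first slot."""
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

u = [1 + 2j, 3 - 1j]
v = [2 - 1j, 1j]
w = [0.5j, 4.0]
k, l = 2 - 3j, 1 + 1j

# Conjugate symmetry: <u, v> = conj(<v, u>)
assert inner(u, v) == inner(v, u).conjugate()
# Linearity in the second argument: <u, v*k + w*l> = <u,v>k + <u,w>l
lhs = inner(u, [vi * k + wi * l for vi, wi in zip(v, w)])
assert abs(lhs - (inner(u, v) * k + inner(u, w) * l)) < 1e-12
# Conjugate-linearity in the first argument: <u*k, v> = conj(k) <u, v>
assert abs(inner([ui * k for ui in u], v) - k.conjugate() * inner(u, v)) < 1e-12
# Positive definiteness: <v, v> is real and >= 0
assert inner(v, v).imag == 0 and inner(v, v).real > 0
```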

### Gram matrix

$G(x_1,\dots,x_n)=\begin{pmatrix}\langle x_1,x_1\rangle & \langle x_1,x_2\rangle & \dots & \langle x_1,x_n\rangle\\ \langle x_2,x_1\rangle & \langle x_2,x_2\rangle & \dots & \langle x_2,x_n\rangle\\ \vdots & \vdots & \ddots & \vdots\\ \langle x_n,x_1\rangle & \langle x_n,x_2\rangle & \dots & \langle x_n,x_n\rangle\end{pmatrix}$

Hermitian:

${\overline{G}}^{T}=G$

positive semidefinite:

$\overline{x}^{T}Gx=\sum_{i,j}\overline{x}_i\langle v_i,v_j\rangle x_j=\sum_{i,j}\langle v_ix_i,v_jx_j\rangle=\Big\langle\sum_i v_ix_i,\sum_j v_jx_j\Big\rangle=\Big\|\sum_i x_iv_i\Big\|^2\geqslant 0$

positive definite:

$G$ is positive definite if and only if the underlying vectors are linearly independent.

rank:

$\operatorname{rank}(G)$ equals the dimension of the span of the underlying vectors.
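The Hermitian and positive-semidefinite properties can be verified numerically for a concrete family of vectors (my own sketch; the vectors are arbitrary examples):

```python
# Build the Gram matrix of a few complex vectors under <u, v> = conj(u)^T v
# and spot-check the Hermitian and PSD properties.

def inner(u, v):
    return sum(ui.conjugate() * vi for ui, vi in zip(u, v))

def gram(vectors):
    return [[inner(vi, vj) for vj in vectors] for vi in vectors]

vs = [[1 + 0j, 1j], [2j, 1 - 1j], [1 + 1j, 0j]]
G = gram(vs)

# Hermitian: G[i][j] = conj(G[j][i])
assert all(abs(G[i][j] - G[j][i].conjugate()) < 1e-12
           for i in range(3) for j in range(3))

# Positive semidefinite: conj(x)^T G x = || sum_i x_i v_i ||^2 >= 0
def quad_form(G, x):
    return sum(x[i].conjugate() * G[i][j] * x[j]
               for i in range(len(x)) for j in range(len(x)))

for x in ([1, 1j, -2], [0.5 - 1j, 3, 1 + 1j]):
    q = quad_form(G, x)
    assert abs(q.imag) < 1e-12 and q.real >= -1e-12
```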

### Norms in geometric space

$\|x\|=\sqrt{\langle x,x\rangle}$

### projection matrices

Properties (here $P_A=A(\overline{A}^{T}A)^{-1}\overline{A}^{T}$, the orthogonal projector onto $\operatorname{im}(A)$ for a matrix $A$ of full column rank):

$\begin{aligned}&1.\quad \overline{P_A}^{T}=P_A\\ &2.\quad P_A^{2}=P_A\\ &3.\quad \operatorname{rank}(P_A)=\operatorname{rank}(A)\end{aligned}$
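A real-valued sketch of the three properties (my own example, using $P_A=A(A^{T}A)^{-1}A^{T}$ for a full-column-rank real $A$):

```python
# Build the projector onto the column space of a 3x2 matrix and check that it
# is symmetric, idempotent, and has rank 2 (the trace of a projector = rank).

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]]
At = transpose(A)
P = matmul(matmul(A, inv2(matmul(At, A))), At)          # 3x3 projector

close = lambda X, Y: all(abs(X[i][j] - Y[i][j]) < 1e-12
                         for i in range(len(X)) for j in range(len(X[0])))
assert close(P, transpose(P))          # 1. symmetric (Hermitian in real case)
assert close(matmul(P, P), P)          # 2. idempotent: P^2 = P
assert abs(sum(P[i][i] for i in range(3)) - 2.0) < 1e-12   # 3. rank = 2
```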

### Orthonormal basis

Fourier (inner product on continuous functions): for $f,g\in C([0,2\pi],\mathbb{R})$,

$\langle f,g\rangle=\int_0^{2\pi}f(t)\,g(t)\,dt$

### unitary matrix

${\overline{A}}^{T}A={I}_{n}$

Properties:

$\begin{aligned}&1.\quad \langle Ax,Ay\rangle=\langle x,y\rangle\\ &2.\quad \|Ax\|=\|x\|\\ &3.\quad |\det(A)|=1\end{aligned}$
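A numeric illustration with a rotation matrix, which is orthogonal (my own example):

```python
# A 2x2 rotation matrix preserves inner products and norms, and |det| = 1.

import math

t = 0.7
Q = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]

def apply(M, x):
    return [sum(M[i][j] * x[j] for j in range(2)) for i in range(2)]

def dot(x, y):
    return sum(xi * yi for xi, yi in zip(x, y))

x, y = [3.0, -1.0], [0.5, 2.0]
# 1. <Qx, Qy> = <x, y>
assert abs(dot(apply(Q, x), apply(Q, y)) - dot(x, y)) < 1e-12
# 2. ||Qx|| = ||x||
assert abs(math.sqrt(dot(apply(Q, x), apply(Q, x))) - math.sqrt(dot(x, x))) < 1e-12
# 3. |det(Q)| = 1
assert abs(abs(Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]) - 1.0) < 1e-12
```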

### QR decomposition

Any real square matrix A may be decomposed as

$A=QR$

where Q is an orthogonal matrix (its columns are orthonormal unit vectors) and R is an upper triangular matrix (also called a right triangular matrix); for complex A, Q is unitary. If A is invertible, then the factorization is unique if we require the diagonal elements of R to be positive.
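A minimal classical Gram-Schmidt sketch of the factorization (my own toy implementation; not numerically robust for ill-conditioned matrices):

```python
# QR of a real square matrix by classical Gram-Schmidt on the columns.

import math

def qr(A):
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]   # columns of A
    q_cols, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i in range(len(q_cols)):
            # R[i][j] = projection coefficient of column j on q_i
            R[i][j] = sum(q_cols[i][k] * cols[j][k] for k in range(n))
            v = [vk - R[i][j] * qk for vk, qk in zip(v, q_cols[i])]
        R[j][j] = math.sqrt(sum(vk * vk for vk in v))   # positive diagonal
        q_cols.append([vk / R[j][j] for vk in v])
    Q = [[q_cols[j][i] for j in range(n)] for i in range(n)]
    return Q, R

A = [[2.0, 1.0], [2.0, 3.0]]
Q, R = qr(A)
# Reconstruct A = QR and check R is upper triangular with positive diagonal.
recon = [[sum(Q[i][k] * R[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
assert all(abs(recon[i][j] - A[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert R[1][0] == 0.0 and R[0][0] > 0 and R[1][1] > 0
```

Forcing each $R_{jj}>0$ is exactly the normalization that makes the factorization of an invertible matrix unique.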

### Schur decomposition

The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as

$A=QUQ^{-1}=QU\overline{Q}^{T}$

where Q is a unitary matrix, and U is an upper triangular matrix called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U. If A is normal, i.e. $\overline{A}^{T}A=A\overline{A}^{T}$, then U is in fact diagonal.

### Hermitian matrix

Properties:

1. All eigenvalues of $A$ are real.

### positive semidefinite

$\begin{cases}A=\overline{A}^{T}\\ \overline{x}^{T}Ax\geqslant 0\ \text{for all } x\end{cases}$

Necessary and sufficient condition:

$\lambda_i\geqslant 0$ for every eigenvalue $\lambda_i$ of $A$.

Other:

$\lambda_{\max}(A)=\max_{\|x\|=1}\overline{x}^{T}Ax$
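The Rayleigh-quotient characterization can be illustrated numerically (my own example: a real symmetric $2\times 2$ matrix with eigenvalues $1$ and $3$):

```python
# For a symmetric 2x2 matrix, the maximum of x^T A x over unit vectors
# equals the largest eigenvalue.

import math

A = [[2.0, 1.0], [1.0, 2.0]]          # eigenvalues 1 and 3

def rayleigh(t):
    x = [math.cos(t), math.sin(t)]    # unit vector on the circle
    Ax = [A[0][0] * x[0] + A[0][1] * x[1], A[1][0] * x[0] + A[1][1] * x[1]]
    return x[0] * Ax[0] + x[1] * Ax[1]

samples = [rayleigh(2 * math.pi * k / 10000) for k in range(10000)]
assert all(s <= 3.0 + 1e-12 for s in samples)   # bounded by lambda_max = 3
assert abs(max(samples) - 3.0) < 1e-6           # maximum attained at eigenvector
```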

### SVD

Question: how can an arbitrary $m\times n$ matrix $A$ be factored as $A=U\Sigma\overline{V}^{T}$ with $U$, $V$ unitary and $\Sigma$ diagonal?

Singular values: $\sigma_i=\sqrt{\lambda_i}$, where $\lambda_1\geqslant\dots\geqslant\lambda_r>0$ are the nonzero eigenvalues of $\overline{A}^{T}A$.

Derivation:

$\overline{V}^{T}(\overline{A}^{T}A)V=\begin{bmatrix}\lambda_1 & & & \\ & \ddots & & \\ & & \lambda_r & \\ & & & 0\end{bmatrix}_{n\times n}=\overline{AV}^{T}AV$

Diagonalizing $\overline{A}^{T}A$ (which is Hermitian and positive semidefinite) gives the unitary eigenvector matrix $V$ and the eigenvalues $\lambda_i$. Now write $AV=\begin{bmatrix}B_1 & B_2\end{bmatrix}$, where $B_1$ holds the first $r$ columns.

$\overline{AV}^{T}AV=\begin{bmatrix}\overline{B_1}^{T}\\ \overline{B_2}^{T}\end{bmatrix}\cdot\begin{bmatrix}B_1 & B_2\end{bmatrix}=\begin{bmatrix}\overline{B_1}^{T}B_1 & \overline{B_1}^{T}B_2\\ \overline{B_2}^{T}B_1 & \overline{B_2}^{T}B_2\end{bmatrix}$

$\because \begin{bmatrix}\overline{B_1}^{T}B_1 & \overline{B_1}^{T}B_2\\ \overline{B_2}^{T}B_1 & \overline{B_2}^{T}B_2\end{bmatrix}=\begin{bmatrix}\lambda_1 & & & \\ & \ddots & & \\ & & \lambda_r & \\ & & & 0\end{bmatrix}_{n\times n}\qquad \therefore B_2=0,\quad \overline{B_1}^{T}B_1=\begin{bmatrix}\lambda_1 & & \\ & \ddots & \\ & & \lambda_r\end{bmatrix}_{r\times r}$

Normalization (each column $b_i$ of $B_1$ has $\|b_i\|=\sqrt{\lambda_i}$):

$\tilde{b}_i=\frac{b_i}{\|b_i\|}=\frac{b_i}{\sqrt{\lambda_i}}$

Expand: extend $\{\tilde{b}_1,\dots,\tilde{b}_r\}$ to an orthonormal basis of the codomain to obtain the unitary matrix $U$; then $A=U\Sigma\overline{V}^{T}$, with $\Sigma$ carrying the singular values $\sigma_i=\sqrt{\lambda_i}$ on its diagonal.

Applications: linear mappings, decoupling.
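The first step of the derivation can be traced numerically for a real $2\times 2$ example (my own sketch: singular values as square roots of the eigenvalues of $A^{T}A$):

```python
# Singular values of A from the eigenvalues of B = A^T A, in closed form
# for the 2x2 case.

import math

A = [[3.0, 0.0], [4.0, 5.0]]
# Form B = A^T A (symmetric, positive semidefinite).
B = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
     [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2]]
# Eigenvalues of the symmetric 2x2 matrix B, via trace and determinant.
tr, det = B[0][0] + B[1][1], B[0][0] * B[1][1] - B[0][1] * B[1][0]
disc = math.sqrt(tr * tr / 4 - det)
eigs = [tr / 2 + disc, tr / 2 - disc]
sigmas = [math.sqrt(l) for l in eigs]          # singular values, descending
# Consistency check: |det(A)| equals the product of the singular values.
assert abs(sigmas[0] * sigmas[1] - 15.0) < 1e-9
```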