Introduction
Basic rank inequalities (here n is the number of columns of A, which equals the number of rows of B):

- r(A + B) ⩽ r(A) + r(B)
- r(AB) ⩽ min{r(A), r(B)}
- if AB = O, then r(A) + r(B) ⩽ n
- r(AB) ⩾ r(A) + r(B) − n (Sylvester's rank inequality)
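A quick numerical sanity check of these inequalities (a sketch using numpy on random matrices, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.integers(-3, 4, (n, n)).astype(float)
B = rng.integers(-3, 4, (n, n)).astype(float)

r = np.linalg.matrix_rank
assert r(A + B) <= r(A) + r(B)        # subadditivity
assert r(A @ B) <= min(r(A), r(B))    # product rank bound
assert r(A @ B) >= r(A) + r(B) - n    # Sylvester's inequality

# the AB = O case: r(A) + r(B) <= n
A0 = np.diag([1., 1., 0., 0., 0.])
B0 = np.diag([0., 0., 0., 1., 1.])
assert np.allclose(A0 @ B0, 0) and r(A0) + r(B0) <= n
```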
Image and Kernel (subspace)
Definition. The image of a function consists of all the values the function assumes. If f:X→Y is a function from X to Y, then
im(f)={f(x):x∈X}
Notice that im(f) is a subset of Y.
Definition. The kernel of a function whose range is Rn consists of all the values in its domain at which the function assumes the value 0. If f:X→Rn is a function from X to Rn, then
ker(f)={x∈X:f(x)=0}
Notice that ker(f) is a subset of X. Also, if T(x)=Ax is a linear transformation from Rm to Rn, then ker(T) (also denoted ker(A)) is the set of solutions to the equation Ax=0.

Theorem. Let A be an n×n matrix. Then the following statements are equivalent.
- A is invertible
- The linear system Ax=b has a unique solution x for every b∈Rn
- rref(A) = In
- rank(A) = n
- im(A) = Rn
- ker(A) = {0}
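A small sympy sketch (a hypothetical 2×2 example) checking several of these equivalent statements at once:

```python
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [1, 1]])

assert A.det() != 0              # A is invertible
assert A.rref()[0] == eye(2)     # rref(A) = In
assert A.rank() == 2             # rank(A) = n
assert A.nullspace() == []       # ker(A) = {0}
```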
Linear mapping
Let V be an m-dimensional linear space with a basis (not necessarily the standard one) V′ = {v1, v2, …, vm}, and let W be an n-dimensional linear space with a basis (again, not necessarily standard) W′ = {w1, w2, …, wn}. It is convenient to collect the basis vectors into matrices V′ = [v1, …, vm] and W′ = [w1, …, wn]. A linear mapping F: V ⟼ W maps vectors of V to vectors of W.
$$F(v_i) = [w_1, w_2, \ldots, w_n] \, [a_{1i}, a_{2i}, \ldots, a_{ni}]^T$$

Matrix representation: A ∈ R^{n×m}, defined column by column as above, so that F(V′) = W′A.

Hence F(V′x) = F(V′)x = W′Ax.

Coordinate mapping: x (coordinates in V) ⟼ Ax (coordinates in W)
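A numpy sketch of this coordinate mapping, with hand-picked (hypothetical) non-standard bases for V = R^2 and W = R^3 and a mapping F given by a matrix M in standard coordinates:

```python
import numpy as np

# F in standard coordinates (hypothetical example): R^2 -> R^3
M = np.array([[1., 2.],
              [0., 1.],
              [3., 0.]])

Vp = np.array([[1., 1.],      # columns = basis of V (not standard)
               [0., 1.]])
Wp = np.array([[1., 0., 0.],  # columns = basis of W (not standard)
               [1., 1., 0.],
               [0., 1., 1.]])

# matrix representation A (n x m) w.r.t. the bases V', W':  F(V') = W' A
A = np.linalg.solve(Wp, M @ Vp)

x = np.array([2., -1.])                          # coordinates in V'
assert np.allclose(Wp @ (A @ x), M @ (Vp @ x))   # same vector in W
```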
Matrix equivalence
A, B ∈ R^{m×n}. Matrix equivalence: B = Q^{−1}AP, where Q and P are invertible matrices.
Matrix similarity
A, B ∈ R^{n×n}. Similar: ∃ invertible P such that B = P^{−1}AP.
Invariant subspace
Consider a linear mapping T: R^n → R^n and a subspace W of R^n.
An invariant subspace W of T has the property that every vector v ∈ W is transformed by T into a vector also contained in W. This can be stated as
Invariant subspace: ∀ v ∈ W ⟹ T(v) ∈ W
For example, every eigenspace of T is an invariant subspace.
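A tiny numpy illustration of the eigenspace example (a hypothetical 2×2 mapping): T maps v to a multiple of itself, so span{v} is invariant:

```python
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])
v = np.array([1., 1.])     # eigenvector of T: T @ v = 3 * v
w = T @ v

# T(v) stays in W = span{v}: the matrix [v, w] has rank 1
assert np.linalg.matrix_rank(np.column_stack([v, w])) == 1
```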
Vector Space
To form a vector space, the two operations (vector addition and scalar multiplication) must satisfy the eight axioms listed below, where each equation must hold for every u, v and w in V, and every a and b in F.
- Associativity of vector addition: u + (v + w) = (u + v) + w
- Commutativity of vector addition: u + v = v + u
- Identity element of vector addition: there exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
- Inverse elements of vector addition: for every v ∈ V, there exists an element −v ∈ V, called the additive inverse of v, such that v + (−v) = 0.
- Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
- Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
- Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
- Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
Exercise:
Let V be the set of positive real numbers, that is V = {x ∈ R | x > 0}, and F = R, where vector addition is defined as x ⊕ y = xy and scalar multiplication as α ⊗ x = x^α. Prove that this is a vector space.
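As a sanity check (not a proof), the axioms can be tested numerically. Note that the zero vector here is 1 and the additive inverse of x is 1/x:

```python
import math

def vadd(x, y):      # vector addition: x ⊕ y = xy
    return x * y

def smul(a, x):      # scalar multiplication: a ⊗ x = x**a
    return x ** a

x, y, a, b = 2.0, 5.0, 3.0, -1.5
assert math.isclose(vadd(x, y), vadd(y, x))               # commutativity
assert math.isclose(vadd(x, 1.0), x)                      # zero vector is 1
assert math.isclose(vadd(x, 1.0 / x), 1.0)                # inverse of x is 1/x
assert math.isclose(smul(a, smul(b, x)), smul(a * b, x))  # compatibility
assert math.isclose(smul(a, vadd(x, y)), vadd(smul(a, x), smul(a, y)))
assert math.isclose(smul(a + b, x), vadd(smul(a, x), smul(b, x)))
```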
Lambda matrix and Jordan form
Lambda matrix
A polynomial in λ: F(λ) = a_0 λ^0 + a_1 λ^1 + ⋯

A λ-matrix is a matrix whose entries are such polynomials:

$$A(\lambda)_{m \times n} = \begin{bmatrix} F_{11}(\lambda) & \cdots & F_{1n}(\lambda) \\ \vdots & \ddots & \vdots \\ F_{m1}(\lambda) & \cdots & F_{mn}(\lambda) \end{bmatrix}$$
unitary matrix
U^∗U = UU^∗ = I
U is invertible with U^{−1} = U^∗
|det(U)| = 1
Smith normal form
$$\begin{pmatrix}
\alpha_1 & & & & & \\
& \ddots & & & & \\
& & \alpha_r & & & \\
& & & 0 & & \\
& & & & \ddots & \\
& & & & & 0
\end{pmatrix}$$

α_i = d_i(A) / d_{i−1}(A), where d_i(A) is the i-th determinant divisor and d_0(A) := 1.
Here d_i(A) (the i-th determinant divisor) equals the greatest common divisor of all i × i minors of the matrix A.
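A short sympy sketch for an integer matrix (smith_normal_form lives in sympy's normalforms module; the expected diagonal below was computed by hand from the determinant divisors d1 = 2, d2 = 4, d3 = 624):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[ 2, 4,  4],
            [-6, 6, 12],
            [10, 4, 16]])

S = smith_normal_form(A, domain=ZZ)
print(S)   # expected diag(2, 2, 156): alpha_i = d_i / d_{i-1}
```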
Jordan normal form
$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}, \qquad \text{Jordan normal form: } J = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_p \end{bmatrix}$$

In general, a square complex matrix A is similar to such a block diagonal matrix: A = PJP^{−1} for some invertible P.
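A sympy sketch (a hypothetical 4×4 example) that computes the Jordan normal form and verifies the similarity:

```python
from sympy import Matrix

A = Matrix([[ 5,  4,  2,  1],
            [ 0,  1, -1, -1],
            [-1, -1,  3,  0],
            [ 1,  1, -1,  2]])

P, J = A.jordan_form()      # returns P and J with A = P * J * P**(-1)
assert A == P * J * P.inv()
print(J)                    # Jordan blocks on the diagonal
```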
Inner product
Property:
- Symmetry: ⟨v1, v2⟩ = ⟨v2, v1⟩
- Linearity (in the second argument): ⟨v1, v2 k + v3 l⟩ = ⟨v1, v2⟩ k + ⟨v1, v3⟩ l
- Positive definiteness: if v ≠ 0, then the inner product ⟨v, v⟩ > 0
Unitary spaces
- Conjugate (Hermitian) symmetry: $\langle v_1, v_2 \rangle = \overline{\langle v_2, v_1 \rangle}$
- Linearity (in the second argument): ⟨v1, v2 k + v3 l⟩ = ⟨v1, v2⟩ k + ⟨v1, v3⟩ l
- Positive definiteness: if v ≠ 0, then the inner product ⟨v, v⟩ > 0

Consequence (conjugate-linearity in the first argument): $\langle v_1 k, v_2 l \rangle = \bar{k} \langle v_1, v_2 \rangle l$

Standard inner product on C^n: $\langle v_1, v_2 \rangle = \overline{v_1}^T v_2$
Gram matrix
$$G(x_1, \ldots, x_n) = \begin{bmatrix} \langle x_1, x_1 \rangle & \langle x_1, x_2 \rangle & \cdots & \langle x_1, x_n \rangle \\ \langle x_2, x_1 \rangle & \langle x_2, x_2 \rangle & \cdots & \langle x_2, x_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle x_n, x_1 \rangle & \langle x_n, x_2 \rangle & \cdots & \langle x_n, x_n \rangle \end{bmatrix}$$
Hermitian: $\overline{G}^T = G$, by conjugate symmetry of the inner product.

Positive semidefinite:

$$\bar{c}^T G c = \sum_{i,j} \bar{c}_i \langle x_i, x_j \rangle c_j = \sum_{i,j} \langle x_i c_i, x_j c_j \rangle = \Big\langle \sum_i x_i c_i, \sum_j x_j c_j \Big\rangle = \Big\| \sum_i c_i x_i \Big\|^2 \geq 0$$

Positive definite:

$$\forall c \neq 0,\; x_1 c_1 + x_2 c_2 + \cdots + x_n c_n \neq 0 \;\Rightarrow\; \bar{c}^T G c > 0$$

i.e. G is positive definite exactly when x_1, …, x_n are linearly independent.

Rank:
rank(G(x_1, …, x_n)) = rank([x_1, …, x_n])
(Recall A's image: {Ax | x ∈ R^n} ⊆ R^m.)
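A numpy sketch checking these three properties on a random real example (the columns of X play the role of x_1, …, x_n):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 3))      # columns x1, x2, x3 in R^5

G = X.conj().T @ X                   # Gram matrix: G[i, j] = <x_i, x_j>
assert np.allclose(G, G.conj().T)    # Hermitian (here: symmetric)
assert np.all(np.linalg.eigvalsh(G) >= -1e-12)               # PSD
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(X)  # same rank
```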
Working in geometric space

$$\|x\| = \sqrt{\langle x, x \rangle}$$

1. x = 0 ⟹ ∥x∥ = 0; x ≠ 0 ⟹ ∥x∥ ≠ 0
2. ∥λx∥ = |λ| · ∥x∥
3. ∥x + y∥ ⩽ ∥x∥ + ∥y∥ (triangle inequality)
4. |⟨x, y⟩| ⩽ ∥x∥ · ∥y∥ (Cauchy–Schwarz inequality)
5. ⟨x, y⟩ = 0 ⟺ x ⊥ y
projection matrices
β ∈ V^n, W an s-dimensional subspace of V^n, n ⩾ s.
Question: α = argmin_{x ∈ W} ∥β − x∥

Let β_1, …, β_s be a basis of W (so β_1 k_1 + ⋯ + β_s k_s = 0 only for k = 0). Then α = β_1 k_1 + ⋯ + β_s k_s = [β_1 β_2 ⋯ β_s] k with

$$k = G(\beta_1, \beta_2, \ldots, \beta_s)^{-1}_{s \times s} \cdot \begin{bmatrix} \langle \beta_1, \beta \rangle \\ \langle \beta_2, \beta \rangle \\ \vdots \\ \langle \beta_s, \beta \rangle \end{bmatrix}_{s \times 1}$$

If W = im(A), then α = A(\bar{A}^T A)^{−1} \bar{A}^T β.
Projection matrix: $P_A = A(\bar{A}^T A)^{-1} \bar{A}^T$

Property:
1. $\overline{P_A}^T = P_A$
2. $P_A^2 = P_A$
3. rank(P_A) = rank(A)
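A numpy sketch of the real case: build P_A, project, and verify all three properties plus the orthogonality of the residual:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))              # columns span W = im(A)
beta = rng.standard_normal(6)

P = A @ np.linalg.inv(A.T @ A) @ A.T         # projection onto im(A)
alpha = P @ beta                             # closest point in W to beta

assert np.allclose(P, P.T)                   # P^T = P
assert np.allclose(P @ P, P)                 # P^2 = P
assert np.linalg.matrix_rank(P) == np.linalg.matrix_rank(A)
assert np.allclose(A.T @ (beta - alpha), 0)  # residual orthogonal to W
```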
Orthonormal basis
Fourier:
f, g ∈ C([0, 2π], R), with inner product

$$\langle f, g \rangle = \int_0^{2\pi} f(t) \, g(t) \, dt$$

With respect to this inner product, the functions 1, cos(nt), sin(nt) (n = 1, 2, …) form an orthogonal system, which underlies Fourier series.
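A quick numerical check of this orthogonality using scipy quadrature (a sketch, not part of the derivation):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # <f, g> = integral of f(t) g(t) over [0, 2*pi]
    return quad(lambda t: f(t) * g(t), 0, 2 * np.pi)[0]

assert abs(inner(np.sin, np.cos)) < 1e-10           # sin ⟂ cos
assert abs(inner(np.sin, np.sin) - np.pi) < 1e-10   # <sin, sin> = pi
```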
unitary matrix
Property:
1. ⟨Ax, Ay⟩ = ⟨x, y⟩
2. ∥Ax∥ = ∥x∥
3. |det(A)| = 1
QR decomposition
Any real square matrix A may be decomposed as A = QR, where Q is an orthogonal matrix (its columns are orthonormal vectors; in the complex case Q is unitary) and R is an upper triangular matrix (also called a right triangular matrix). If A is invertible, then the factorization is unique if we require the diagonal elements of R to be positive.
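A numpy sketch computing QR and then flipping signs to obtain the unique factorization with a positive diagonal of R:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)
assert np.allclose(Q.T @ Q, np.eye(4))   # Q orthogonal
assert np.allclose(np.tril(R, -1), 0)    # R upper triangular
assert np.allclose(Q @ R, A)

# enforce the unique factorization: make diag(R) positive
s = np.sign(np.diag(R))
Q, R = Q * s, s[:, None] * R
assert np.all(np.diag(R) > 0) and np.allclose(Q @ R, A)
```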
Schur decomposition
The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as
$$A = QUQ^{-1}$$

(If moreover A is normal, i.e. $\bar{A}^T A = A \bar{A}^T$, then U can be taken diagonal.)
where Q is a unitary matrix, and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.
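A scipy sketch of the complex Schur form, checking both the factorization and the eigenvalues on the diagonal:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

U, Q = schur(A, output='complex')    # A = Q @ U @ Q^H, U upper triangular
assert np.allclose(Q @ U @ Q.conj().T, A)
assert np.allclose(np.tril(U, -1), 0)
# eigenvalues of A are the diagonal entries of U
assert np.allclose(np.sort_complex(np.diag(U)),
                   np.sort_complex(np.linalg.eigvals(A)))
```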
Hermitian matrix
$$A = \bar{A}^T \;\Rightarrow\; \exists \text{ unitary } U, \quad U^{-1} A U = \bar{U}^T A U = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$
Property:
- The eigenvalues of A are all real numbers.
positive semidefinite

$$A = \bar{A}^T, \qquad \bar{x}^T A x \geq 0 \ \text{for all } x$$

Necessary and sufficient condition: all eigenvalues of A satisfy λ_i ⩾ 0.
Other:

$$\lambda_{\max}(A) = \max_{\|x\| = 1} \{ \bar{x}^T A x \}$$
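A numpy sketch confirming both properties for a random Hermitian matrix (the Rayleigh quotient attains λ_max at the top eigenvector):

```python
import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                     # Hermitian: A = conj(A).T

w = np.linalg.eigvals(A)
assert np.allclose(w.imag, 0)          # eigenvalues are real

lam, V = np.linalg.eigh(A)             # real eigenvalues, ascending
x = V[:, -1]                           # unit eigenvector for lambda_max
assert np.isclose((x.conj() @ A @ x).real, lam[-1])
```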
SVD
Question:
A ∈ C^{m×n}. Find unitary V ∈ C^{n×n} and unitary U ∈ C^{m×m} such that AV = UQ, making Q as simple as possible.
$$\text{SVD: } Q = \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix}_{m \times n}$$

Singular values:

$$\sigma_i > 0, \; i = 1, 2, \ldots, r, \qquad r = \mathrm{rank}(A), \qquad \sigma_i = \sqrt{\lambda_i(\bar{A}^T A)}$$
Derivation:

$$A = [a_1, a_2, \cdots, a_n], \; a_i \in C^{m \times 1}, \qquad \bar{A}^T A = G(a_1, a_2, \ldots, a_n). \quad \text{Let } H = \bar{A}^T A.$$

$$\bar{V}^T (\bar{A}^T A) V = \begin{bmatrix} \lambda_1 & & & \\ & \ddots & & \\ & & \lambda_r & \\ & & & 0 \end{bmatrix}_{n \times n} = \overline{AV}^T \, AV$$

Since H is Hermitian (and positive semidefinite), we can obtain its eigenvectors (the columns of the unitary V) and eigenvalues λ_1, …, λ_r > 0 directly from A.
Let:

$$B = AV = [b_1, b_2, \ldots, b_r \mid b_{r+1}, \ldots, b_n], \qquad B_1 = [b_1, \ldots, b_r], \qquad B_2 = [b_{r+1}, \ldots, b_n]$$

$$\overline{AV}^T \, AV = \begin{bmatrix} \bar{B}_1^T \\ \bar{B}_2^T \end{bmatrix} \cdot [B_1 \; B_2] = \begin{bmatrix} \bar{B}_1^T B_1 & \bar{B}_1^T B_2 \\ \bar{B}_2^T B_1 & \bar{B}_2^T B_2 \end{bmatrix}$$

Since this block matrix equals diag(λ_1, …, λ_r, 0, …, 0), we get

$$B_2 = 0, \qquad \bar{B}_1^T B_1 = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_r \end{bmatrix}_{r \times r}$$

Therefore b_1, b_2, …, b_r are pairwise orthogonal (in particular, linearly independent).
Normalization:

$$\tilde{b}_i = b_i \frac{1}{\|b_i\|} = b_i \frac{1}{\sqrt{\lambda_i}}$$
Extension:
Find β_{r+1}, …, β_m such that $\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_r, \beta_{r+1}, \ldots, \beta_m$ form an orthonormal basis of C^m. Then

$$U = [\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_r, \beta_{r+1}, \ldots, \beta_m]$$
End:
Since B_2 = 0, we have B = [b_1, b_2, …, b_r | 0], and by the normalization above

$$U \cdot \begin{bmatrix} \sqrt{\lambda_1} & & & \\ & \ddots & & \\ & & \sqrt{\lambda_r} & \\ & & & 0 \end{bmatrix}_{m \times n} = [b_1, b_2, \ldots, b_r \mid 0] = B = AV$$

so AV = UQ with σ_i = √λ_i, as claimed.

Applications: linear mapping, decoupling.
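A numpy sketch that mirrors this construction for a full-rank tall real matrix (eigendecomposition of \bar{A}^T A gives V and the λ_i, then b_i / √λ_i gives the columns of U), compared against the library SVD:

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 5, 3
A = rng.standard_normal((m, n))        # assume full column rank, r = n

H = A.T @ A                            # H = conj(A).T @ A, the Gram matrix
lam, V = np.linalg.eigh(H)             # eigenvalues ascending
lam, V = lam[::-1], V[:, ::-1]         # reorder descending

B = A @ V                              # columns b_i are pairwise orthogonal
sigma = np.sqrt(lam)                   # singular values sqrt(lambda_i)
U_r = B / sigma                        # normalize columns: b_i / sqrt(lambda_i)

assert np.allclose(U_r.T @ U_r, np.eye(n))          # orthonormal columns
assert np.allclose(U_r @ np.diag(sigma) @ V.T, A)   # A = U Sigma V^T (thin)
assert np.allclose(sigma, np.linalg.svd(A, compute_uv=False))
```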