Matrix Analysis: Brief Overview

Introduction

Basic rank inequalities (with $n$ the number of columns of $A$ and rows of $B$):

$$r(A+B) \le r(A) + r(B)$$

$$r(AB) \le \min\{r(A),\ r(B)\}$$

$$AB = O \implies r(A) + r(B) \le n$$

$$r(AB) \ge r(A) + r(B) - n$$
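A quick numerical sanity check of these inequalities (a minimal sketch with NumPy; the matrices are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
r = np.linalg.matrix_rank

assert r(A + B) <= r(A) + r(B)          # subadditivity
assert r(A @ B) <= min(r(A), r(B))      # rank of a product
assert r(A @ B) >= r(A) + r(B) - n      # Sylvester's inequality

# If AB = O, then r(A) + r(B) <= n: here A keeps only the first
# coordinate, B keeps only the last, and their product is zero.
A0 = np.diag([1.0, 0, 0, 0, 0])
B0 = np.diag([0, 0, 0, 0, 1.0])
assert np.allclose(A0 @ B0, 0) and r(A0) + r(B0) <= n
```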

Image and Kernel (subspace)

Definition. The image of a function consists of all the values the function assumes. If $f: X \to Y$ is a function from $X$ to $Y$, then

$$\mathrm{im}(f) = \{f(x) : x \in X\}$$

Notice that im(f) is a subset of Y.


Definition. The kernel of a function whose codomain is $\mathbb{R}^n$ consists of all the values in its domain at which the function assumes the value $0$. If $f: X \to \mathbb{R}^n$ is a function from $X$ to $\mathbb{R}^n$, then

$$\ker(f) = \{x \in X : f(x) = 0\}$$

Notice that $\ker(f)$ is a subset of $X$. Also, if $T(x) = Ax$ is a linear transformation from $\mathbb{R}^m$ to $\mathbb{R}^n$, then $\ker(T)$ (also denoted $\ker(A)$) is the set of solutions of the equation $Ax = 0$.
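For instance, the kernel of a concrete matrix can be computed numerically; a minimal sketch using SciPy's null_space (the matrix is an arbitrary rank-1 example):

```python
import numpy as np
from scipy.linalg import null_space

# T(x) = Ax maps R^3 to R^2; ker(A) is the solution set of Ax = 0.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so ker(A) has dimension 2

N = null_space(A)                      # columns: orthonormal basis of ker(A)
print(N.shape)                         # (3, 2)
print(np.allclose(A @ N, 0))           # True: every basis vector solves Ax = 0
```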


Kernel and Image

Theorem. Let A be an n×n matrix. Then the following statements are equivalent.

  1. A is invertible
  2. The linear system $Ax = b$ has a unique solution $x$ for every $b \in \mathbb{R}^n$
  3. $\mathrm{rref}(A) = I_n$
  4. $\mathrm{rank}(A) = n$
  5. $\mathrm{im}(A) = \mathbb{R}^n$
  6. $\ker(A) = \{0\}$
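A small sketch checking several of these equivalent conditions on one (arbitrary) matrix with NumPy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
n = A.shape[0]

print(np.linalg.matrix_rank(A) == n)    # rank(A) = n
print(np.linalg.det(A) != 0)            # A is invertible
b = np.array([1.0, 3.0])
x = np.linalg.solve(A, b)               # the unique solution of Ax = b
print(np.allclose(A @ x, b))            # True
```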

Linear mapping

Let $V$ be an $m$-dimensional linear space and fix a basis (not necessarily the standard one) $V_{m\times m} = [v_1, v_2, \dots, v_m]$, where each $v_i$ is a basis vector. Likewise, let $W$ be an $n$-dimensional linear space with a basis (again, not necessarily standard) $W_{n\times n} = [w_1, w_2, \dots, w_n]$. A linear mapping $F: V \to W$, $F(v_{m\times 1}) = w_{n\times 1}$, maps vectors of $V$ to vectors of $W$.
$$F(v_i) = [w_1, w_2, \dots, w_n]\,[a_{1i}, a_{2i}, \dots, a_{ni}]^T$$

Matrix representation: $A_{n\times m}$, so that

$$F(V_{m\times m}) = W_{n\times n} A_{n\times m}$$

and therefore

$$F(V_{m\times m})\,x = F(V_{m\times m}\,x) = W_{n\times n} A_{n\times m}\,x$$

Coordinate mapping: $x$ (coordinates in $V$) $\mapsto$ $A_{n\times m}\,x$ (coordinates in $W$).
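As a concrete sketch (assuming for illustration that both $V$ and $W$ are $\mathbb{R}^2$ with the non-standard bases given by the columns of the matrices below, and that $F$ is given by a matrix in standard coordinates):

```python
import numpy as np

V = np.array([[1.0, 1.0],       # columns: basis of V (non-standard)
              [0.0, 1.0]])
W = np.array([[2.0, 0.0],       # columns: basis of W (non-standard)
              [1.0, 1.0]])
F = np.array([[1.0, 2.0],       # the linear mapping in standard coordinates
              [3.0, 4.0]])

# F(V) = W A  =>  A = W^{-1} F V  is the matrix representation w.r.t. these bases.
A = np.linalg.solve(W, F @ V)

# Coordinate mapping: coordinates x in basis V  ->  coordinates A x in basis W.
x = np.array([1.0, 2.0])
v = V @ x                                   # the actual vector
print(np.allclose(F @ v, W @ (A @ x)))      # True
```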

Matrix equivalence

$A, B \in \mathbb{R}^{m\times n}$. Matrix equivalence: $B = Q^{-1} A P$, where $Q$ and $P$ are invertible matrices.

Matrix similarity

$A, B \in \mathbb{R}^{n\times n}$ are similar if there exists an invertible matrix $P$ such that $B = P^{-1} A P$.
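Similar matrices represent the same linear mapping in different bases, so they share their eigenvalues; a minimal numerical check (arbitrary $A$ and $P$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))        # generically invertible
B = np.linalg.inv(P) @ A @ P           # B is similar to A

spectrum = lambda M: np.sort_complex(np.linalg.eigvals(M))
print(np.allclose(spectrum(A), spectrum(B)))   # True: same eigenvalues
```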

Invariant subspace

Consider a linear mapping T.

$$T: \mathbb{R}^n \to \mathbb{R}^n, \qquad W \text{ is a subspace of } \mathbb{R}^n$$

An invariant subspace $W$ of $T$ has the property that all vectors $v \in W$ are transformed by $T$ into vectors also contained in $W$. This can be stated as

$$v \in W \implies T(v) \in W$$

Vector Space

A vector space over a field $F$ is a set $V$ equipped with two operations, vector addition and scalar multiplication. To form a vector space, these two operations must satisfy the eight axioms listed below, where the equations must hold for every $u$, $v$, and $w$ in $V$ and every $a$ and $b$ in $F$.

  1. Associativity of vector addition

    u+(v+w)=(u+v)+w

  2. Commutativity of vector addition

    u+v=v+u

  3. Identity element of vector addition

    There exists an element $0 \in V$, called the zero vector, such that $v + 0 = v$ for all $v \in V$.

  4. Inverse elements of vector addition

    For every $v \in V$, there exists an element $-v \in V$, called the additive inverse of $v$, such that $v + (-v) = 0$.

  5. Compatibility of scalar multiplication with field multiplication

    a(bv)=(ab)v

  6. Identity element of scalar multiplication

    $1v = v$, where $1$ denotes the multiplicative identity in $F$.

  7. Distributivity of scalar multiplication with respect to vector addition

    a(u+v)=au+av

  8. Distributivity of scalar multiplication with respect to field addition

    (a+b)v=av+bv

Exercise:

$V$ is the set of positive real numbers, that is $V = \{x \in \mathbb{R} \mid x > 0\}$, and $F = \mathbb{R}$, where vector addition is defined as $x \oplus y = xy$ and scalar multiplication is defined as $\alpha \odot x = x^{\alpha}$. Prove that this is a vector space.
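A sketch of the verification for a few of the axioms, writing $\oplus$ and $\odot$ for the operations defined above (the remaining axioms follow the same pattern):

  1. Zero vector: the number $1$, since $x \oplus 1 = x \cdot 1 = x$.
  2. Additive inverse of $x$: $1/x$, since $x \oplus \frac{1}{x} = x \cdot \frac{1}{x} = 1$.
  3. Distributivity over field addition: $(a + b) \odot x = x^{a+b} = x^{a}\, x^{b} = (a \odot x) \oplus (b \odot x)$.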

Lambda matrix and Jordan form

Lambda matrix

$$F(\lambda) = a_0\lambda^0 + a_1\lambda^1 + \cdots$$

$\lambda$-matrix:

$$A(\lambda)_{m\times n} = \begin{bmatrix} F_{11}(\lambda) & \cdots & F_{1n}(\lambda) \\ \vdots & \ddots & \vdots \\ F_{m1}(\lambda) & \cdots & F_{mn}(\lambda) \end{bmatrix}$$

unitary matrix

$$\bar{U}^T U = U\bar{U}^T = I$$

$U$ is invertible with $U^{-1} = \bar{U}^T$, and $|\det(U)| = 1$.

Smith normal form

$$\begin{pmatrix}
\alpha_1 & 0 & \cdots & & & 0\\
0 & \alpha_2 & & & & \\
\vdots & & \ddots & & & \\
 & & & \alpha_r & & \\
 & & & & 0 & \\
0 & & & & & \ddots
\end{pmatrix}$$

$$\alpha_i = \frac{d_i(A)}{d_{i-1}(A)}, \qquad d_0(A) := 1$$

where $d_i(A)$ (called the $i$-th determinant divisor) equals the greatest common divisor of all $i \times i$ minors of the matrix $A$.
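A sketch computing determinant divisors and the resulting invariant factors for a small integer matrix with SymPy (the matrix is an arbitrary example; for a $\lambda$-matrix the same idea applies with polynomial entries):

```python
import sympy as sp
from functools import reduce
from itertools import combinations

A = sp.Matrix([[2, 4, 4],
               [-6, 6, 12],
               [10, 4, 16]])

def det_divisor(M, i):
    """d_i(A): greatest common divisor of all i x i minors of M."""
    minors = [M.extract(list(r), list(c)).det()
              for r in combinations(range(M.rows), i)
              for c in combinations(range(M.cols), i)]
    return reduce(sp.gcd, minors)

d = [sp.Integer(1)] + [det_divisor(A, i) for i in range(1, 4)]  # d_0 .. d_3
alpha = [d[i] / d[i - 1] for i in range(1, 4)]                  # invariant factors
print(d, alpha)   # expected: [1, 2, 4, 624] and [2, 2, 156]
```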

Jordan normal form

$$J_i = \begin{bmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{bmatrix}, \qquad \text{Jordan normal form:}\quad J = \begin{bmatrix} J_1 & & \\ & \ddots & \\ & & J_p \end{bmatrix}$$

In general, a square complex matrix A is similar to a block diagonal matrix

$$\exists\, P \text{ invertible},\quad P^{-1} A P = J$$
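A sketch using SymPy's jordan_form (the example matrix is arbitrary, chosen so that one eigenvalue gives a $2\times 2$ Jordan block):

```python
import sympy as sp

A = sp.Matrix([[5, 4, 2, 1],
               [0, 1, -1, -1],
               [-1, -1, 3, 0],
               [1, 1, -1, 2]])

P, J = A.jordan_form()                     # A = P J P^{-1}, i.e. P^{-1} A P = J
print(J)                                   # block-diagonal Jordan form
print(sp.simplify(P.inv() * A * P - J))    # zero matrix
```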

Inner product

property

symmetry:

$$\langle v_1, v_2\rangle = \langle v_2, v_1\rangle$$

Linear:

$$\langle v_1,\ v_2 k + v_3 l\rangle = \langle v_1, v_2\rangle k + \langle v_1, v_3\rangle l$$

positive definiteness:

$$v \ne 0 \implies \langle v, v\rangle > 0$$

unitary spaces

Vectors have complex entries: $v \in \mathbb{C}^n$.

Conjugate symmetry or Hermitian symmetry:

$$\langle v_1, v_2\rangle = \overline{\langle v_2, v_1\rangle}$$

Linear:

$$\langle v_1,\ v_2 k + v_3 l\rangle = \langle v_1, v_2\rangle k + \langle v_1, v_3\rangle l$$

positive definiteness:

$$v \ne 0 \implies \langle v, v\rangle > 0$$

example:

$$\langle v_1 k,\ v_2 l\rangle = \bar{k}\,\langle v_1, v_2\rangle\, l$$

standard unitary space:

$$\langle v_1, v_2\rangle = \bar{v}_1^{\,T} v_2$$
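A minimal sketch of this standard inner product with NumPy (np.vdot conjugates its first argument, matching $\bar{v}_1^T v_2$):

```python
import numpy as np

v1 = np.array([1 + 2j, 3 - 1j])
v2 = np.array([2 - 1j, 0 + 1j])

ip = np.vdot(v1, v2)                                     # conj(v1) . v2
print(np.isclose(ip, np.conj(v1) @ v2))                  # matches the formula above
print(np.isclose(ip, np.conj(np.vdot(v2, v1))))          # conjugate symmetry
print(np.vdot(v1, v1).real > 0)                          # positive definiteness (v1 != 0)
```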

Gram matrix

$$G(x_1, \dots, x_n) = \begin{bmatrix}
\langle x_1, x_1\rangle & \langle x_1, x_2\rangle & \cdots & \langle x_1, x_n\rangle\\
\langle x_2, x_1\rangle & \langle x_2, x_2\rangle & \cdots & \langle x_2, x_n\rangle\\
\vdots & \vdots & \ddots & \vdots\\
\langle x_n, x_1\rangle & \langle x_n, x_2\rangle & \cdots & \langle x_n, x_n\rangle
\end{bmatrix}$$

Hermite:

$$\bar{G}^T = G$$

positive semidefinite:

$$\bar{x}^T G x = \sum_{i,j} \bar{x}_i\,\langle v_i, v_j\rangle\, x_j = \sum_{i,j} \langle v_i x_i,\ v_j x_j\rangle = \Big\langle \sum_i v_i x_i,\ \sum_j v_j x_j\Big\rangle = \Big\|\sum_i x_i v_i\Big\|^2 \ge 0$$

positive definite:

$$x \ne 0 \text{ and } v_1 x_1 + v_2 x_2 + \cdots + v_n x_n \ne 0 \implies \bar{x}^T G x > 0$$

rank:

$$\mathrm{rank}\big(G(x_1, \dots, x_n)\big) = \mathrm{rank}\big([x_1, \dots, x_n]\big)$$

As image: $\{Ax \mid x \in \mathbb{R}^n\} \subseteq \mathbb{R}^m$.
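A sketch checking these Gram-matrix properties numerically (the columns of $A$ are arbitrary complex vectors, with the third made linearly dependent on the first two):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
A[:, 2] = A[:, 0] + A[:, 1]               # force linear dependence

G = A.conj().T @ A                        # Gram matrix of the columns of A
print(np.allclose(G, G.conj().T))                            # Hermitian
print(np.all(np.linalg.eigvalsh(G) > -1e-10))                # positive semidefinite
print(np.linalg.matrix_rank(G) == np.linalg.matrix_rank(A))  # same rank (here 2)
```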

Norm on a geometric space

$$\|x\| = \sqrt{\langle x, x\rangle}$$

  1. $\|x\| = 0 \iff x = 0$; if $x \ne 0$, then $\|x\| > 0$
  2. $\|\lambda x\| = |\lambda|\,\|x\|$
  3. $\|x + y\| \le \|x\| + \|y\|$
  4. $|\langle x, y\rangle| \le \|x\|\,\|y\|$
  5. $\langle x, y\rangle = 0 \iff x \perp y$

projection matrices

$\beta \in V_n$, $W$ a subspace of $V_n$ spanned by $s$ vectors ($n \ge s$). Question: $\alpha = \arg\min_{x \in W} \|\beta - x\|$.

For all $v \ne 0$: $\beta_1 v_1 + \beta_2 v_2 + \cdots + \beta_s v_s \ne 0$ (i.e. $\beta_1, \dots, \beta_s$ form a basis of $W$), and

$$\alpha = \beta_1 k_1 + \cdots + \beta_s k_s = [\beta_1\ \beta_2\ \cdots\ \beta_s]\,k, \qquad
k = G(\beta_1, \beta_2, \dots, \beta_s)_{s\times s}^{-1}
\begin{bmatrix} \langle \beta_1, \beta\rangle \\ \langle \beta_2, \beta\rangle \\ \vdots \\ \langle \beta_s, \beta\rangle \end{bmatrix}_{s\times 1}$$

If $W = \mathrm{im}\,A$, then $\alpha = A(\bar{A}^T A)^{-1}\bar{A}^T \beta$, and the projection matrix is

$$P_A = A(\bar{A}^T A)^{-1}\bar{A}^T$$

property:

  1. $\bar{P}_A^{\,T} = P_A$
  2. $P_A^2 = P_A$
  3. $\mathrm{rank}(P_A) = \mathrm{rank}(A)$
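A sketch of the projection matrix and these properties with NumPy (arbitrary real $A$ and $\beta$; least squares gives the same projection as $P_A\beta$):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 3))            # columns span the subspace W = im(A)
beta = rng.standard_normal(6)

P = A @ np.linalg.inv(A.T @ A) @ A.T       # P_A = A (A^T A)^{-1} A^T (real case)
alpha = P @ beta                           # closest point to beta inside W

print(np.allclose(P.T, P))                 # 1. P_A is symmetric/Hermitian
print(np.allclose(P @ P, P))               # 2. P_A is idempotent
print(np.linalg.matrix_rank(P) == np.linalg.matrix_rank(A))   # 3. equal ranks

# Same answer as least squares: alpha = A x*, with x* = argmin ||beta - A x||.
x_star, *_ = np.linalg.lstsq(A, beta, rcond=None)
print(np.allclose(alpha, A @ x_star))
```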

Orthonormal basis

Fourier:

$$f, g \in C([0, 2\pi], \mathbb{R}^1), \qquad \langle f, g\rangle = \int_0^{2\pi} f(t)\,g(t)\,dt$$
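A sketch checking orthogonality of a few Fourier basis functions under this inner product, via numerical integration (scipy.integrate.quad):

```python
import numpy as np
from scipy.integrate import quad

def inner(f, g):
    """<f, g> = integral of f(t) g(t) dt over [0, 2*pi]."""
    value, _ = quad(lambda t: f(t) * g(t), 0.0, 2.0 * np.pi)
    return value

print(np.isclose(inner(np.sin, np.cos), 0.0, atol=1e-9))                 # orthogonal
print(np.isclose(inner(np.sin, np.sin), np.pi))                          # ||sin||^2 = pi
print(np.isclose(inner(np.sin, lambda t: np.sin(2 * t)), 0.0, atol=1e-9))
```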

unitary matrix

$$\bar{A}^T A = I_n$$

property:

  1. $\langle Ax, Ay\rangle = \langle x, y\rangle$
  2. $\|Ax\| = \|x\|$
  3. $|\det(A)| = 1$

QR decomposition

Any real square matrix A may be decomposed as

$$A = QR$$

where $Q$ is an orthogonal matrix (its columns are orthonormal vectors; for a complex matrix, $Q$ is unitary) and $R$ is an upper triangular matrix (also called a right triangular matrix). If $A$ is invertible, then the factorization is unique if we require the diagonal elements of $R$ to be positive.
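A minimal sketch with NumPy's QR factorization (arbitrary square matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))

Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(4)))     # Q is orthogonal
print(np.allclose(np.triu(R), R))          # R is upper triangular
print(np.allclose(Q @ R, A))               # A = QR
```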

Schur decomposition

The Schur decomposition reads as follows: if A is an n × n square matrix with complex entries, then A can be expressed as

$$A = QUQ^{-1}$$

(When $A$ is normal, i.e. $\bar{A}^T A = A\bar{A}^T$, the triangular factor $U$ is in fact diagonal.)

where Q is a unitary matrix, and U is an upper triangular matrix, which is called a Schur form of A. Since U is similar to A, it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of U.
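A sketch with SciPy's Schur decomposition (complex form, arbitrary real matrix); the diagonal of the triangular factor carries the eigenvalues:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))

U, Q = schur(A, output='complex')          # SciPy returns (triangular form, unitary factor)
print(np.allclose(Q @ U @ Q.conj().T, A))  # A = Q U Q^{-1}
print(np.allclose(np.sort_complex(np.diag(U)),
                  np.sort_complex(np.linalg.eigvals(A))))   # same spectrum
```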

Hermitian matrix

$$A = \bar{A}^T \implies \exists\, U \text{ unitary},\quad U^{-1} A U = \bar{U}^T A U = \begin{bmatrix} \lambda_1 & & & \\ & \lambda_2 & & \\ & & \ddots & \\ & & & \lambda_n \end{bmatrix}$$

property:

  1. The eigenvalues of $A$ are real numbers.
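A sketch with np.linalg.eigh, which exploits the Hermitian structure and returns real eigenvalues together with a unitary eigenvector matrix (arbitrary Hermitian example):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = (M + M.conj().T) / 2                   # Hermitian by construction

w, U = np.linalg.eigh(A)                   # w: real eigenvalues, U: unitary
print(np.isrealobj(w))                     # True
print(np.allclose(U.conj().T @ A @ U, np.diag(w)))   # U^{-1} A U is diagonal
```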

positive semidefinite

$$\begin{cases} A = \bar{A}^T \\ \bar{x}^T A x \ge 0 \quad \forall x \end{cases}$$

necessary and sufficient condition:

$$\lambda_i \ge 0$$

Other:

$$\lambda_{\max}(A) = \max_{\|x\| = 1}\{\bar{x}^T A x\}$$

SVD

Question:

Given $A \in \mathbb{C}^{m\times n}$, find unitary $V \in \mathbb{C}^{n\times n}$ and unitary $U \in \mathbb{C}^{m\times m}$ such that $AV = UQ$, making $Q$ as simple as possible.

$$\text{SVD:}\quad Q = \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & 0\\ & & \sigma_r & \\ & 0 & & 0 \end{bmatrix}_{m\times n}$$

singular value:

$$\sigma_i > 0, \quad i = 1, 2, \dots, r, \qquad r = \mathrm{rank}(A)$$

$$\sigma_i = \sqrt{\lambda_i(\bar{A}^T A)}, \quad i = 1, 2, \dots, r$$

Derivation:

$$A = [a_1, a_2, \dots, a_n], \quad a_i \in \mathbb{C}^{m\times 1}, \qquad \bar{A}^T A = G(a_1, a_2, \dots, a_n)$$

Let $H = \bar{A}^T A$.

$$\bar{V}^T(\bar{A}^T A)V = \begin{bmatrix} \lambda_1 & & & \\ & \ddots & & \\ & & \lambda_r & \\ & & & 0 \end{bmatrix}_{n\times n} = \overline{(AV)}^T (AV)$$

We obtain the eigenvectors (the columns of $V$) and the eigenvalues $\lambda_i$ from $\bar{A}^T A$.

Let $B = AV = [b_1, b_2, \dots, b_r,\ b_{r+1}, \dots, b_n]$, with $B_1 = [b_1, b_2, \dots, b_r]$ and $B_2 = [b_{r+1}, \dots, b_n]$.

$$\overline{(AV)}^T (AV) = \begin{bmatrix} \bar{B}_1^T \\ \bar{B}_2^T \end{bmatrix}\,[B_1\ \ B_2] = \begin{bmatrix} \bar{B}_1^T B_1 & \bar{B}_1^T B_2 \\ \bar{B}_2^T B_1 & \bar{B}_2^T B_2 \end{bmatrix}$$

$$\begin{bmatrix} \bar{B}_1^T B_1 & \bar{B}_1^T B_2 \\ \bar{B}_2^T B_1 & \bar{B}_2^T B_2 \end{bmatrix} = \begin{bmatrix} \lambda_1 & & & \\ & \ddots & & \\ & & \lambda_r & \\ & & & 0 \end{bmatrix}_{n\times n} \implies B_2 = 0, \quad \bar{B}_1^T B_1 = \begin{bmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_r \end{bmatrix}_{r\times r}$$
$b_1, b_2, \dots, b_r$ are mutually orthogonal (hence linearly independent).

normalization:

$$\tilde{b}_i = \frac{b_i}{\|b_i\|} = \frac{b_i}{\sqrt{\lambda_i}}$$

Expand:

Find $\beta_{r+1}, \dots, \beta_m$ such that $\tilde{b}_1, \tilde{b}_2, \dots, \tilde{b}_r, \beta_{r+1}, \dots, \beta_m$ are orthonormal, and let

$$U = [\tilde{b}_1, \tilde{b}_2, \dots, \tilde{b}_r, \beta_{r+1}, \dots, \beta_m]$$

End:

$$B_2 = 0 \implies B = [b_1, b_2, \dots, b_r,\ 0]$$

$$U \begin{bmatrix} \sqrt{\lambda_1} & & & \\ & \ddots & & 0\\ & & \sqrt{\lambda_r} & \\ & 0 & & 0 \end{bmatrix}_{m\times n} = [b_1, b_2, \dots, b_r,\ 0]$$

$$B = AV \implies AV = U \begin{bmatrix} \sqrt{\lambda_1} & & & \\ & \ddots & & 0\\ & & \sqrt{\lambda_r} & \\ & 0 & & 0 \end{bmatrix}_{m\times n} = UQ$$
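A sketch confirming the construction with NumPy's SVD: the singular values are the square roots of the nonzero eigenvalues of $\bar{A}^T A$ (arbitrary complex $A$):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))

U, s, Vh = np.linalg.svd(A)                # A = U Q V^H with Q = diag(s) padded to 5x3
lam = np.linalg.eigvalsh(A.conj().T @ A)   # eigenvalues of conj(A)^T A
print(np.allclose(np.sort(s**2), np.sort(lam[lam > 1e-12])))   # sigma_i^2 = lambda_i

Q = np.zeros((5, 3), dtype=complex)
np.fill_diagonal(Q, s)
print(np.allclose(U @ Q @ Vh, A))          # A V = U Q  <=>  A = U Q V^H
```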

Applications: linear mapping, decoupling.