Transcription

Linear Algebra: Graduate Level Problems and Solutions
Igor Yanovsky

Linear Algebra, Igor Yanovsky, 2005

Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I cannot be made responsible for any inaccuracies contained in this handbook.

Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability
2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram-Schmidt procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections
3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability
4 Determinants
  4.1 Characteristic Polynomial
5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps
6 Problems

1 Basic Theory

1.1 Linear Maps

Lemma. If $A \in \mathrm{Mat}_{m \times n}(\mathbb{F})$ and $B \in \mathrm{Mat}_{n \times m}(\mathbb{F})$, then
\[ \operatorname{tr}(AB) = \operatorname{tr}(BA). \]

Proof. Note that the $(i,i)$ entry in $AB$ is $\sum_{j=1}^{n} \alpha_{ij}\beta_{ji}$, while the $(j,j)$ entry in $BA$ is $\sum_{i=1}^{m} \beta_{ji}\alpha_{ij}$. Thus
\[ \operatorname{tr}(AB) = \sum_{i=1}^{m}\sum_{j=1}^{n} \alpha_{ij}\beta_{ji} = \sum_{j=1}^{n}\sum_{i=1}^{m} \beta_{ji}\alpha_{ij} = \operatorname{tr}(BA). \]

1.2 Linear Maps as Matrices

Example. Let $P_n = \{\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n : \alpha_0, \alpha_1, \ldots, \alpha_n \in \mathbb{F}\}$ be the space of polynomials of degree $\le n$ and $D : V \to V$ (with $V = P_n$) the differential map
\[ D(\alpha_0 + \alpha_1 t + \cdots + \alpha_n t^n) = \alpha_1 + \cdots + n\alpha_n t^{n-1}. \]
If we use the basis $1, t, \ldots, t^n$ for $V$, then we see that $D(t^k) = k t^{k-1}$, and the $(n+1) \times (n+1)$ matrix representation is computed via
\[ [D(1)\; D(t)\; D(t^2)\; \cdots\; D(t^n)] = [0\; 1\; 2t\; \cdots\; n t^{n-1}] = [1\; t\; t^2\; \cdots\; t^n]
\begin{pmatrix}
0 & 1 & 0 & \cdots & 0 \\
0 & 0 & 2 & \cdots & 0 \\
\vdots & & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & n \\
0 & 0 & \cdots & 0 & 0
\end{pmatrix}. \]

1.3 Dimension and Isomorphism

A linear map $L : V \to W$ is an isomorphism if we can find $K : W \to V$ such that $LK = I_W$ and $KL = I_V$.

[Commutative diagram: $L$ and $K$ between $V$ and $W$, with $I_V$ and $I_W$ as the identity arrows on $V$ and $W$.]
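The trace identity in the lemma above is easy to check numerically. Below is a minimal pure-Python sketch; the helper functions `matmul` and `trace` are our own, not from the handbook.

```python
# Numerical check of the lemma tr(AB) = tr(BA) for rectangular matrices;
# the helpers matmul and trace are our own minimal implementations.

def matmul(A, B):
    """Multiply matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def trace(M):
    """Sum of the diagonal entries of a square matrix."""
    return sum(M[i][i] for i in range(len(M)))

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 x 3, so AB is 2 x 2 and BA is 3 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]

print(trace(matmul(A, B)), trace(matmul(B, A)))  # 212 212
```

Note that $AB$ and $BA$ have different sizes, yet their traces agree, exactly as the double-sum argument predicts.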

Theorem. $V$ and $W$ are isomorphic $\iff$ there is a bijective linear map $L : V \to W$.

Proof. ($\Rightarrow$) If $V$ and $W$ are isomorphic, we can find linear maps $L : V \to W$ and $K : W \to V$ so that $LK = I_W$ and $KL = I_V$. Then for any $y$ we have $y = I_W(y) = L(K(y))$, so we can let $x = K(y)$, which means $L$ is onto. If $L(x_1) = L(x_2)$, then $x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2$, which means $L$ is 1-1.
($\Leftarrow$) Assume $L : V \to W$ is linear and a bijection. Then we have an inverse map $L^{-1}$ which satisfies $L \circ L^{-1} = I_W$ and $L^{-1} \circ L = I_V$. In order for this inverse map to be allowable as $K$ we need to check that it is linear. Select $\alpha_1, \alpha_2 \in \mathbb{F}$ and $y_1, y_2 \in W$. Let $x_i = L^{-1}(y_i)$ so that $L(x_i) = y_i$. Then we have
\[ L^{-1}(\alpha_1 y_1 + \alpha_2 y_2) = L^{-1}(\alpha_1 L(x_1) + \alpha_2 L(x_2)) = L^{-1}(L(\alpha_1 x_1 + \alpha_2 x_2)) = I_V(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 x_1 + \alpha_2 x_2 = \alpha_1 L^{-1}(y_1) + \alpha_2 L^{-1}(y_2). \]

Theorem. If $\mathbb{F}^m$ and $\mathbb{F}^n$ are isomorphic over $\mathbb{F}$, then $n = m$.

Proof. Suppose we have $L : \mathbb{F}^m \to \mathbb{F}^n$ and $K : \mathbb{F}^n \to \mathbb{F}^m$ such that $LK = I_{\mathbb{F}^n}$ and $KL = I_{\mathbb{F}^m}$. Then $L \in \mathrm{Mat}_{n \times m}(\mathbb{F})$ and $K \in \mathrm{Mat}_{m \times n}(\mathbb{F})$. Thus
\[ n = \operatorname{tr}(I_{\mathbb{F}^n}) = \operatorname{tr}(LK) = \operatorname{tr}(KL) = \operatorname{tr}(I_{\mathbb{F}^m}) = m. \]

Define the dimension of a vector space $V$ over $\mathbb{F}$ as $\dim_{\mathbb{F}} V = n$ if $V$ is isomorphic to $\mathbb{F}^n$.

Remark. $\dim_{\mathbb{C}} \mathbb{C} = 1$, $\dim_{\mathbb{R}} \mathbb{C} = 2$, $\dim_{\mathbb{Q}} \mathbb{R} = \infty$.

The set of all linear maps $\{L : V \to W\}$ over $\mathbb{F}$ forms a vector space, denoted $\hom_{\mathbb{F}}(V, W)$.

Corollary. If $V$ and $W$ are finite dimensional vector spaces over $\mathbb{F}$, then $\hom_{\mathbb{F}}(V, W)$ is also finite dimensional and
\[ \dim_{\mathbb{F}} \hom_{\mathbb{F}}(V, W) = (\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V). \]

Proof. By choosing bases for $V$ and $W$ there is a natural map
\[ \hom_{\mathbb{F}}(V, W) \to \mathrm{Mat}_{(\dim_{\mathbb{F}} W) \times (\dim_{\mathbb{F}} V)}(\mathbb{F}) \cong \mathbb{F}^{(\dim_{\mathbb{F}} W) \cdot (\dim_{\mathbb{F}} V)}. \]
This map is both 1-1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.
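The corollary's dimension count can be illustrated concretely: under the matrix identification, the standard matrices $E_{ij}$ (a $1$ in entry $(i,j)$, zeros elsewhere) span, and every matrix is a unique combination of them. A small sketch in pure Python; the helper names `std_basis` and `recombine` are our own.

```python
# Illustration of the corollary: identifying hom_F(V, W) with n x m matrices,
# the standard matrices E_ij (1 in entry (i, j), 0 elsewhere) span, and every
# matrix is recovered as sum_{i,j} A[i][j] * E_ij, so the dimension is n * m.

def std_basis(i, j, n, m):
    """Standard basis matrix with a single 1 at position (i, j)."""
    return [[1 if (r, c) == (i, j) else 0 for c in range(m)]
            for r in range(n)]

def recombine(A):
    """Rebuild A as the combination sum_{i,j} A[i][j] * E_ij."""
    n, m = len(A), len(A[0])
    out = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            B = std_basis(i, j, n, m)
            for r in range(n):
                for c in range(m):
                    out[r][c] += A[i][j] * B[r][c]
    return out

A = [[2, -1, 0],
     [3,  5, 7]]              # a map F^3 -> F^2 written in coordinates
print(recombine(A) == A)      # True
print(len(A) * len(A[0]))     # 6 = (dim_F W) * (dim_F V)
```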

1.4 Matrix Representations Redux

$L : V \to W$, bases $x_1, \ldots, x_m$ for $V$ and $y_1, \ldots, y_n$ for $W$. The matrix for $L$ interpreted as a linear map is $[L] : \mathbb{F}^m \to \mathbb{F}^n$. The basis isomorphisms defined by the choices of basis for $V$ and $W$ are
\[ [x_1 \cdots x_m] : \mathbb{F}^m \to V, \qquad [y_1 \cdots y_n] : \mathbb{F}^n \to W, \]
where $[x_1 \cdots x_m] : \mathbb{F}^m \to V$ denotes the map $(\alpha_1, \ldots, \alpha_m) \mapsto \alpha_1 x_1 + \cdots + \alpha_m x_m$.

[Commutative diagram: $L : V \to W$ on top, $[L] : \mathbb{F}^m \to \mathbb{F}^n$ on the bottom, basis isomorphisms as the vertical arrows.] The diagram commutes:
\[ L \circ [x_1 \cdots x_m] = [y_1 \cdots y_n] \circ [L]. \]

1.5 Subspaces

A nonempty subset $M \subset V$ is a subspace if for all $\alpha, \beta \in \mathbb{F}$ and $x, y \in M$ we have $\alpha x + \beta y \in M$. In particular, $0 \in M$.

If $M, N \subset V$ are subspaces, then we can form two new subspaces, the sum and the intersection:
\[ M + N = \{x + y : x \in M,\ y \in N\}, \qquad M \cap N = \{x : x \in M,\ x \in N\}. \]
$M$ and $N$ have trivial intersection if $M \cap N = \{0\}$.
$M$ and $N$ are transversal if $M + N = V$.
Two subspaces are complementary if they are transversal and have trivial intersection.
$M, N$ form a direct sum of $V$ if $M \cap N = \{0\}$ and $M + N = V$. Write $V = M \oplus N$.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the $x$-axis, and $N = \{(0, y) : y \in \mathbb{R}\}$, the $y$-axis.

Example. $V = \mathbb{R}^2$, $M = \{(x, 0) : x \in \mathbb{R}\}$, the $x$-axis, and $N = \{(y, y) : y \in \mathbb{R}\}$, a diagonal. Note $(x, y) = (x - y, 0) + (y, y)$, which gives $V = M \oplus N$.

If we have a direct sum decomposition $V = M \oplus N$, then we can construct the projection of $V$ onto $M$ along $N$. The map $E : V \to V$ is defined using that each $z = x + y$ with $x \in M$, $y \in N$, and mapping $z$ to $x$. Then $E(z) = E(x + y) = E(x) + E(y) = E(x) = x$. Thus $\operatorname{im}(E) = M$ and $\ker(E) = N$.

Definition. If $V$ is a vector space, a projection of $V$ is a linear operator $E$ on $V$ such that $E^2 = E$.
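The projection construction above can be made concrete for the diagonal example: since $(x, y) = (x - y, 0) + (y, y)$, the projection onto the $x$-axis along the diagonal is $E(x, y) = (x - y, 0)$. A minimal sketch (the function name `E` mirrors the text's operator):

```python
# The projection of R^2 onto M (the x-axis) along N (the diagonal), from the
# second example above: since (x, y) = (x - y, 0) + (y, y), the projection is
# E(x, y) = (x - y, 0).

def E(v):
    x, y = v
    return (x - y, 0)

v = (5, 2)
print(E(v))              # (3, 0): the M-component of v
print(E(E(v)) == E(v))   # True: E^2 = E, so E is a projection
print(E((7, 7)))         # (0, 0): the diagonal N is exactly ker(E)
```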

1.6 Linear Maps and Subspaces

$L : V \to W$ is a linear map over $\mathbb{F}$.
The kernel or nullspace of $L$ is
\[ \ker(L) = N(L) = \{x \in V : L(x) = 0\}. \]
The image or range of $L$ is
\[ \operatorname{im}(L) = R(L) = L(V) = \{L(x) \in W : x \in V\}. \]

Lemma. $\ker(L)$ is a subspace of $V$ and $\operatorname{im}(L)$ is a subspace of $W$.

Proof. Assume that $\alpha_1, \alpha_2 \in \mathbb{F}$ and that $x_1, x_2 \in \ker(L)$; then $L(\alpha_1 x_1 + \alpha_2 x_2) = \alpha_1 L(x_1) + \alpha_2 L(x_2) = 0$, so $\alpha_1 x_1 + \alpha_2 x_2 \in \ker(L)$.
Assume $\alpha_1, \alpha_2 \in \mathbb{F}$ and $x_1, x_2 \in V$; then $\alpha_1 L(x_1) + \alpha_2 L(x_2) = L(\alpha_1 x_1 + \alpha_2 x_2) \in \operatorname{im}(L)$.

Lemma. $L$ is 1-1 $\iff$ $\ker(L) = \{0\}$.

Proof. ($\Rightarrow$) We know that $L(0) = 0$, so if $L$ is 1-1 then $L(x) = 0 = L(0)$ implies that $x = 0$. Hence $\ker(L) = \{0\}$.
($\Leftarrow$) Assume that $\ker(L) = \{0\}$. If $L(x_1) = L(x_2)$, then linearity of $L$ tells us that $L(x_1 - x_2) = 0$. Then $\ker(L) = \{0\}$ implies $x_1 - x_2 = 0$, which shows that $x_1 = x_2$ as desired.

Lemma. Suppose $L : V \to W$ and $\dim V = \dim W$. Then $L$ is 1-1 $\iff$ $L$ is onto $\iff$ $\dim \operatorname{im}(L) = \dim V$.

Proof. From the dimension formula, we have
\[ \dim V = \dim \ker(L) + \dim \operatorname{im}(L). \]
$L$ is 1-1 $\iff$ $\ker(L) = \{0\}$ $\iff$ $\dim \ker(L) = 0$ $\iff$ $\dim \operatorname{im}(L) = \dim V = \dim W$ $\iff$ $\operatorname{im}(L) = W$, that is, $L$ is onto.

1.7 Dimension Formula

Theorem. Let $V$ be finite dimensional and $L : V \to W$ a linear map, all over $\mathbb{F}$. Then $\operatorname{im}(L)$ is finite dimensional and
\[ \dim_{\mathbb{F}} V = \dim_{\mathbb{F}} \ker(L) + \dim_{\mathbb{F}} \operatorname{im}(L). \]

Proof. We know that $\dim \ker(L) \le \dim V$ and that it has a complement $M$ of dimension $k = \dim V - \dim \ker(L)$. Since $M \cap \ker(L) = \{0\}$, the linear map $L$ must be 1-1 when restricted to $M$. Thus $L|_M : M \to \operatorname{im}(L)$ is an isomorphism, i.e. $\dim \operatorname{im}(L) = \dim M = k$.

1.8 Matrix Calculations

Change of Basis Matrix. Given the two bases of $\mathbb{R}^2$, $\beta_1 = \{x_1 = (1, 1),\ x_2 = (1, 0)\}$ and $\beta_2 = \{y_1 = (4, 3),\ y_2 = (3, 2)\}$, we find the change-of-basis matrix $P$ from $\beta_1$ to $\beta_2$.
Write $y_1$ as a linear combination of $x_1$ and $x_2$: $y_1 = a x_1 + b x_2$. $(4, 3) = a(1, 1) + b(1, 0)$ $\Rightarrow$ $a = 3$, $b = 1$ $\Rightarrow$ $y_1 = 3x_1 + x_2$.
Write $y_2$ as a linear combination of $x_1$ and $x_2$: $y_2 = a x_1 + b x_2$. $(3, 2) = a(1, 1) + b(1, 0)$ $\Rightarrow$ $a = 2$, $b = 1$ $\Rightarrow$ $y_2 = 2x_1 + x_2$.
Write the coordinates of $y_1$ and $y_2$ as columns of $P$:
\[ P = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}. \]
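The change-of-basis computation above can be verified directly: the columns of $P$ are the $\beta_1$-coordinates of $y_1$ and $y_2$. A short pure-Python check (the helper `comb` is our own):

```python
# Check of the change-of-basis computation above: the columns of P are the
# beta_1-coordinates of y1 and y2, i.e. y1 = 3*x1 + 1*x2 and y2 = 2*x1 + 1*x2.

def comb(a, b, u, v):
    """The linear combination a*u + b*v of two 2-vectors."""
    return (a * u[0] + b * v[0], a * u[1] + b * v[1])

x1, x2 = (1, 1), (1, 0)
y1, y2 = (4, 3), (3, 2)
P = [[3, 2],
     [1, 1]]

print(comb(P[0][0], P[1][0], x1, x2) == y1)  # True
print(comb(P[0][1], P[1][1], x1, x2) == y2)  # True
```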

1.9 Diagonalizability

Definition. Let $T$ be a linear operator on the finite-dimensional space $V$. $T$ is diagonalizable if there is a basis for $V$ consisting of eigenvectors of $T$.

Theorem. Let $v_1, \ldots, v_n$ be nonzero eigenvectors of distinct eigenvalues $\lambda_1, \ldots, \lambda_n$. Then $\{v_1, \ldots, v_n\}$ is linearly independent.

Alternative Statement. If $L$ has $n$ distinct eigenvalues $\lambda_1, \ldots, \lambda_n$, then $L$ is diagonalizable. (Proof is in the exercises.)

Definition. Let $L$ be a linear operator on a finite-dimensional vector space $V$, and let $\lambda$ be an eigenvalue of $L$. Define $E_\lambda = \{x \in V : L(x) = \lambda x\} = \ker(L - \lambda I_V)$. The set $E_\lambda$ is called the eigenspace of $L$ corresponding to the eigenvalue $\lambda$.

The algebraic multiplicity $m$ of $\lambda$ is defined to be the multiplicity of $\lambda$ as a root of the characteristic polynomial of $L$, while the geometric multiplicity of $\lambda$ is defined to be the dimension of its eigenspace, $\dim E_\lambda = \dim(\ker(L - \lambda I_V))$. Also,
\[ \dim(\ker(L - \lambda I_V)) \le m. \]

Eigenspaces. A vector $v \ne 0$ with $(A - \lambda I)v = 0$ is an eigenvector for $\lambda$.

Generalized Eigenspaces. Let $\lambda$ be an eigenvalue of $A$ with algebraic multiplicity $m$. A vector $v$ with $(A - \lambda I)^m v = 0$ is a generalized eigenvector for $\lambda$.

2 Inner Product Spaces

2.1 Inner Products

The three important properties for complex inner products are:
1) $(x|x) = \|x\|^2 > 0$ unless $x = 0$.
2) $(x|y) = \overline{(y|x)}$.
3) For each $y \in V$ the map $x \mapsto (x|y)$ is linear.

The inner product on $\mathbb{C}^n$ is defined by
\[ (x|y) = x^t \bar{y}. \]
Consequences:
\[ (\alpha_1 x_1 + \alpha_2 x_2 | y) = \alpha_1 (x_1|y) + \alpha_2 (x_2|y), \]
\[ (x | \beta_1 y_1 + \beta_2 y_2) = \bar{\beta}_1 (x|y_1) + \bar{\beta}_2 (x|y_2), \]
\[ (\alpha x | \alpha x) = \alpha \bar{\alpha} (x|x) = |\alpha|^2 (x|x). \]

2.2 Orthonormal Bases

Lemma. Let $e_1, \ldots, e_n$ be orthonormal. Then $e_1, \ldots, e_n$ are linearly independent, and any element $x \in \operatorname{span}\{e_1, \ldots, e_n\}$ has the expansion
\[ x = (x|e_1)e_1 + \cdots + (x|e_n)e_n. \]
Proof. Note that if $x = \alpha_1 e_1 + \cdots + \alpha_n e_n$, then
\[ (x|e_i) = (\alpha_1 e_1 + \cdots + \alpha_n e_n | e_i) = \alpha_1 (e_1|e_i) + \cdots + \alpha_n (e_n|e_i) = \alpha_1 \delta_{1i} + \cdots + \alpha_n \delta_{ni} = \alpha_i. \]
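As a concrete instance of section 1.9, here is a small numeric sketch with a sample matrix of our own choosing: it has distinct eigenvalues $1$ and $3$ with eigenvectors $(1, -1)$ and $(1, 1)$, so by the theorem above those eigenvectors are independent and the matrix is diagonalizable.

```python
# Sketch for section 1.9: a sample matrix (our own choice) with distinct
# eigenvalues 1 and 3 and eigenvectors (1, -1) and (1, 1); by the theorem
# above, the eigenvectors are independent and A is diagonalizable.

def matvec(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(A)))

A = [[2, 1],
     [1, 2]]
pairs = [(1, (1, -1)), (3, (1, 1))]   # (eigenvalue, eigenvector)

for lam, v in pairs:
    print(matvec(A, v) == tuple(lam * c for c in v))  # True, then True
```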

2.2.1 Gram-Schmidt procedure

Given a linearly independent set $x_1, \ldots, x_m$ in an inner product space $V$, it is possible to construct an orthonormal collection $e_1, \ldots, e_m$ such that $\operatorname{span}\{x_1, \ldots, x_m\} = \operatorname{span}\{e_1, \ldots, e_m\}$:
\[ e_1 = \frac{x_1}{\|x_1\|}, \]
\[ z_2 = x_2 - \mathrm{proj}_{x_1}(x_2) = x_2 - \mathrm{proj}_{e_1}(x_2) = x_2 - (x_2|e_1)e_1, \qquad e_2 = \frac{z_2}{\|z_2\|}, \]
\[ z_{k+1} = x_{k+1} - (x_{k+1}|e_1)e_1 - \cdots - (x_{k+1}|e_k)e_k, \qquad e_{k+1} = \frac{z_{k+1}}{\|z_{k+1}\|}. \]

2.2.2 QR Factorization

\[ A = [x_1 \cdots x_m] = [e_1 \cdots e_m]
\begin{pmatrix}
(x_1|e_1) & (x_2|e_1) & \cdots & (x_m|e_1) \\
0 & (x_2|e_2) & \cdots & (x_m|e_2) \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & (x_m|e_m)
\end{pmatrix} = QR. \]

Example. Consider the vectors $x_1 = (1, 1, 0)$, $x_2 = (1, 0, 1)$, $x_3 = (0, 1, 1)$ in $\mathbb{R}^3$. Perform Gram-Schmidt:
\[ e_1 = \frac{x_1}{\|x_1\|} = \frac{(1,1,0)}{\sqrt{2}} = \Big(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\Big). \]
\[ z_2 = (1,0,1) - \tfrac{1}{\sqrt{2}}\Big(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\Big) = \Big(\tfrac{1}{2}, -\tfrac{1}{2}, 1\Big), \qquad e_2 = \frac{z_2}{\|z_2\|} = \frac{(\tfrac{1}{2}, -\tfrac{1}{2}, 1)}{\sqrt{3/2}} = \Big(\tfrac{1}{\sqrt{6}}, -\tfrac{1}{\sqrt{6}}, \tfrac{2}{\sqrt{6}}\Big). \]
\[ z_3 = x_3 - (x_3|e_1)e_1 - (x_3|e_2)e_2 = (0,1,1) - \tfrac{1}{\sqrt{2}}\Big(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\Big) - \tfrac{1}{\sqrt{6}}\Big(\tfrac{1}{\sqrt{6}}, -\tfrac{1}{\sqrt{6}}, \tfrac{2}{\sqrt{6}}\Big) = \Big(-\tfrac{2}{3}, \tfrac{2}{3}, \tfrac{2}{3}\Big), \]
\[ e_3 = \frac{z_3}{\|z_3\|} = \Big(-\tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}\Big). \]

2.3 Orthogonal Complements and Projections

The orthogonal projection of a vector $x$ onto a nonzero vector $y$ is defined by
\[ \mathrm{proj}_y(x) = \Big( x \,\Big|\, \frac{y}{\|y\|} \Big) \frac{y}{\|y\|} = \frac{(x|y)}{(y|y)}\, y. \]
The length of this projection is $\|\mathrm{proj}_y(x)\| = \frac{|(x|y)|}{\|y\|}$.
The definition of $\mathrm{proj}_y(x)$ immediately implies that it is linear from the linearity of the inner product.

The map $x \mapsto \mathrm{proj}_y(x)$ is a projection.
Proof. We need to show $\mathrm{proj}_y(\mathrm{proj}_y(x)) = \mathrm{proj}_y(x)$:
\[ \mathrm{proj}_y(\mathrm{proj}_y(x)) = \mathrm{proj}_y\Big( \frac{(x|y)}{(y|y)}\, y \Big) = \frac{(x|y)}{(y|y)}\, \mathrm{proj}_y(y) = \frac{(x|y)}{(y|y)} \cdot \frac{(y|y)}{(y|y)}\, y = \frac{(x|y)}{(y|y)}\, y = \mathrm{proj}_y(x). \]
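The Gram-Schmidt steps above translate directly into code. A minimal pure-Python sketch on the example vectors (the helpers `dot` and `gram_schmidt` are ours); we verify orthonormality and the hand-computed $e_3$:

```python
import math

# Gram-Schmidt on the example vectors x1, x2, x3 from the text; we then
# verify that the resulting e1, e2, e3 are orthonormal and that e3 matches
# the hand computation (-1/sqrt(3), 1/sqrt(3), 1/sqrt(3)).

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for x in vectors:
        z = list(x)
        for e in basis:
            c = dot(z, e)                      # component along e
            z = [zi - c * ei for zi, ei in zip(z, e)]
        norm = math.sqrt(dot(z, z))
        basis.append([zi / norm for zi in z])  # normalize
    return basis

e1, e2, e3 = gram_schmidt([(1, 1, 0), (1, 0, 1), (0, 1, 1)])
print(all(abs(dot(e, e) - 1) < 1e-12 for e in (e1, e2, e3)))   # True
print(abs(dot(e1, e2)) < 1e-12 and abs(dot(e2, e3)) < 1e-12)   # True
```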

Cauchy-Schwarz Inequality. $V$ complex inner product space:
\[ |(x|y)| \le \|x\|\,\|y\|, \qquad x, y \in V. \]
Proof. First show $\mathrm{proj}_y(x) \perp x - \mathrm{proj}_y(x)$:
\[ (\mathrm{proj}_y(x) \,|\, x - \mathrm{proj}_y(x)) = \Big( \frac{(x|y)}{\|y\|^2}\, y \,\Big|\, x - \frac{(x|y)}{\|y\|^2}\, y \Big) = \frac{(x|y)}{\|y\|^2}(y|x) - \frac{(x|y)}{\|y\|^2} \cdot \frac{\overline{(x|y)}}{\|y\|^2}(y|y) = \frac{(x|y)(y|x)}{\|y\|^2} - \frac{(x|y)(y|x)}{\|y\|^2} = 0. \]
Hence, applying the Pythagorean relation to $x = \mathrm{proj}_y(x) + (x - \mathrm{proj}_y(x))$,
\[ \|x\| \ge \|\mathrm{proj}_y(x)\| = \frac{|(x|y)|}{\|y\|} \quad\Longrightarrow\quad |(x|y)| \le \|x\|\,\|y\|. \]

Triangle Inequality. $V$ complex inner product space: $\|x + y\| \le \|x\| + \|y\|$.
Proof.
\[ \|x+y\|^2 = (x+y \,|\, x+y) = \|x\|^2 + 2\,\mathrm{Re}(x|y) + \|y\|^2 \le \|x\|^2 + 2|(x|y)| + \|y\|^2 \le \|x\|^2 + 2\|x\|\|y\| + \|y\|^2 = (\|x\| + \|y\|)^2. \]

Let $M \subset V$ be a finite dimensional subspace of an inner product space, and $e_1, \ldots, e_m$ an orthonormal basis for $M$. Using that basis, define $E : V \to V$ by
\[ E(x) = (x|e_1)e_1 + \cdots + (x|e_m)e_m. \]
Note that $E(x) \in M$ and that if $x \in M$, then $E(x) = x$. Thus $E^2(x) = E(x)$, implying that $E$ is a projection whose image is $M$. If $x \in \ker(E)$, then
\[ 0 = E(x) = (x|e_1)e_1 + \cdots + (x|e_m)e_m \iff (x|e_1) = \cdots = (x|e_m) = 0. \]
This is equivalent to the condition $(x|z) = 0$ for all $z \in M$. The set of all such vectors is the orthogonal complement to $M$ in $V$, denoted
\[ M^\perp = \{x \in V : (x|z) = 0 \text{ for all } z \in M\}. \]

Theorem. Let $V$ be an inner product space. Assume $V = M \oplus M^\perp$. Then $\operatorname{im}(\mathrm{proj}_M) = M$ and $\ker(\mathrm{proj}_M) = M^\perp$. If $M \subset V$ is finite dimensional then $V = M \oplus M^\perp$ and
\[ \mathrm{proj}_M(x) = (x|e_1)e_1 + \cdots + (x|e_m)e_m \]
for any orthonormal basis $e_1, \ldots, e_m$ for $M$.
Proof. For $E$ defined as above, $\ker(E) = M^\perp$. Also $x - E(x) = (I - E)(x)$ and $(I - E)(x) \in \ker(E) = M^\perp$. Choose $z \in M$. Then
\[ \|x - \mathrm{proj}_M(x)\|^2 \le \|x - \mathrm{proj}_M(x)\|^2 + \|\mathrm{proj}_M(x) - z\|^2 = \|x - z\|^2, \]
where equality holds exactly when $\|\mathrm{proj}_M(x) - z\|^2 = 0$, i.e., $\mathrm{proj}_M(x)$ is the unique closest point to $x$ among the points in $M$.
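The facts just proved about $\mathrm{proj}_y$ (idempotence, the orthogonality of the residual, and Cauchy-Schwarz) can be checked numerically. A short pure-Python sketch; the sample vectors `x`, `y` and the helper names are our own.

```python
import math

# Numeric sketch for section 2.3 and Cauchy-Schwarz: proj_y is idempotent,
# x - proj_y(x) is orthogonal to proj_y(x), and |(x|y)| <= ||x|| ||y||.
# The sample vectors x, y are our own choice.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(y, x):
    """Orthogonal projection of x onto the line spanned by y."""
    c = dot(x, y) / dot(y, y)
    return tuple(c * yi for yi in y)

x, y = (1.0, 2.0, 3.0), (4.0, 0.0, -1.0)
p = proj(y, x)
r = tuple(a - b for a, b in zip(x, p))       # the residual x - proj_y(x)

print(all(abs(a - b) < 1e-12 for a, b in zip(proj(y, p), p)))  # idempotent
print(abs(dot(r, p)) < 1e-12)                                  # orthogonal
print(abs(dot(x, y)) <= math.sqrt(dot(x, x)) * math.sqrt(dot(y, y)))  # True
```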

Theorem. Let $E : V \to V$ be a projection onto $M \subset V$ with the property that $V = \ker(E) \oplus \ker(E)^\perp$. Then the following conditions are equivalent.
1) $E = \mathrm{proj}_M$.
2) $\operatorname{im}(E) = \ker(E)^\perp$.
3)