
LINEAR ALGEBRA 
AND MATRIX 
THEORY 
JIMMIE GILBERT 
LINDA GILBERT 
University of South Carolina at Spartanburg 
Spartanburg, South Carolina 
ACADEMIC PRESS 
San Diego New York Boston London Sydney Tokyo Toronto 
This book is printed on acid-free paper.
Copyright © 1995, 1970 by ACADEMIC PRESS, INC. 
All Rights Reserved. 
No part of this publication may be reproduced or transmitted in any form or by any 
means, electronic or mechanical, including photocopy, recording, or any information 
storage and retrieval system, without permission in writing from the publisher. 
Academic Press, Inc. 
A Division of Harcourt Brace & Company 
525 B Street, Suite 1900, San Diego, California 92101-4495 
United Kingdom Edition published by 
Academic Press Limited 
24-28 Oval Road, London NW1 7DX 
Library of Congress Cataloging-in-Publication Data 
Gilbert, Jimmie, date. 
Linear algebra and matrix theory / by Jimmie Gilbert, Linda Gilbert. 
p. cm. 
Includes indexes. 
ISBN 0-12-282970-0 
1. Algebras, Linear. I. Gilbert, Linda. II. Title. 
QA184.G525 1994 
512'.5--dc20 94-38932 
CIP 
PRINTED IN THE UNITED STATES OF AMERICA 
95 96 97 98 99 00 EB 9 8 7 6 5 4 3 2 1 
Preface 
This text was planned for use with one of the following two options: 
• The complete text would be suitable for a one-year undergraduate course for 
mathematics majors and would provide a strong foundation for more abstract 
courses at higher levels of instruction. 
• A one-semester or one-quarter course could be taught from the first five chapters 
together with selections from the remaining chapters. The selections could be 
chosen so as to meet the requirements of students in the fields of business, science, 
economics, or engineering. 
The presentation of material presumes the knowledge and maturity gained from one 
calculus course, and a second calculus course is a desirable prerequisite. 
It is our opinion that linear algebra is well suited to provide the transition from the intuitive developments of courses at a lower level to the more abstract treatments encountered later. Throughout the treatment here, material is presented from a structural point of view: fundamental algebraic properties of the entities involved are emphasized. This approach is particularly important because the mathematical systems encountered in linear algebra furnish a wealth of examples for the structures studied in more advanced courses.
The unifying concept for the first five chapters is that of elementary operations. This 
concept provides the pivot for a concise and efficient development of the basic theory 
of vector spaces, linear transformations, matrix multiplication, and the fundamental 
equivalence relations on matrices. 
A rigorous treatment of determinants from the traditional viewpoint is presented in 
Chapter 6. For a class already familiar with this material, the chapter can be omitted. 
In Chapters 7 through 10, the central theme of the development is the change in 
the matrix representing a vector function when only certain types of basis changes are 
admitted. It is from this approach that the classical canonical forms for matrices are 
derived. 
Numerous examples and exercises are provided to illustrate the theory. Exercises are 
included of both computational and theoretical nature. Those of a theoretical nature 
amplify the treatment and provide experience in constructing deductive arguments, 
while those of a computational nature illustrate fundamental techniques. The amount 
of labor in the computational problems is kept to a minimum. Even so, many of them 
provide opportunities to utilize current technology, if that is the wish of the instructor. 
Answers are provided for about half of the computational problems. 
The exercises are intended to develop confidence and deepen understanding. It is 
assumed that students grow in maturity as they progress through the text and the 
proportion of theoretical problems increases in later chapters. 
Since much of the interest in linear algebra is due to its applications, the solution 
of systems of linear equations and the study of eigenvalue problems appear early in the 
text. Chapters 4 and 7 contain the most important applications of the theory. 
ACKNOWLEDGMENTS 
We wish to express our appreciation for the support given us by the University of 
South Carolina at Spartanburg during the writing of this book, since much of the work 
was done while we were on sabbatical leave. We would especially like to thank Sharon 
Hahs, Jimmie Cook, and Olin Sansbury for their approval and encouragement of the 
project. 
This entire text was produced using Scientific Word and Scientific Workplace, software packages from TCI Software Research, Inc. Special thanks are due to Christopher Casey and Fred Osborne for their invaluable assistance throughout the project.
We would like to acknowledge with thanks the helpful suggestions made by the 
following reviewers of the text: 
Ed Dixon, Tennessee Technological University 
Jack Garner, University of Arkansas at Little Rock 
Edward Hinson, University of New Hampshire 
Melvyn Jeter, Illinois Wesleyan University 
Bob Jones, Belhaven College 
We also wish to express our thanks to Dave Pallai for initiating the project at 
Academic Press, to Peter Renz for his editorial guidance in developing the book, and to 
Michael Early for his patience and encouragement while supervising production of the 
book. 
Jimmie Gilbert 
Linda Gilbert 
Chapter 1 
Real Coordinate Spaces 
1.1 Introduction 
There are various approaches to that part of mathematics known as linear algebra. 
Different approaches emphasize different aspects of the subject such as matrices, applications, or computational methods. As presented in this text, linear algebra is in
essence a study of vector spaces, and this study of vector spaces is primarily devoted 
to finite-dimensional vector spaces. The real coordinate spaces, in addition to being 
important in many applications, furnish excellent intuitive models of abstract finite-
dimensional vector spaces. For these reasons, we begin our study of linear algebra with 
a study of the real coordinate spaces. Later it will be found that many of the results 
and techniques employed here will easily generalize to more abstract settings. 
1.2 The Vector Spaces R^n
Throughout this text the symbol R will denote the set of all real numbers. We assume 
a knowledge of the algebraic properties of R, and begin with the following definition of 
real coordinate spaces. 
Definition 1.1 For each positive integer n, R^n will denote the set of all ordered n-tuples (u1, u2, ..., un) of real numbers ui. Two n-tuples (u1, u2, ..., un) and (v1, v2, ..., vn) are equal if and only if ui = vi for i = 1, 2, ..., n. The set R^n is referred to as an n-dimensional real coordinate space. The elements of R^n are called n-dimensional real coordinate vectors, or simply vectors. The numbers ui in a vector (u1, u2, ..., un) will be called the components of the vector. The elements of R will be referred to as scalars.¹

¹The terms "vector" and "scalar" are later extended to more general usage, but this will cause no confusion since the context will make the meaning clear.
The real coordinate spaces and the related terminology described in this definition 
are easily seen to be generalizations and extensions of the two- and three-dimensional 
vector spaces studied in the calculus. 
When we use a single letter to represent a vector, the letter will be printed in boldface lower case Roman, such as v, or written with an arrow over it, such as v⃗. In handwritten work with vectors, the arrow notation v⃗ is commonly used. Scalars will be represented by letters printed in lower case italics.
Definition 1.2 Addition in R^n is defined as follows: for any u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) in R^n, the sum u + v is given by

u + v = (u1 + v1, u2 + v2, ..., un + vn).

For any scalar a and any vector u = (u1, u2, ..., un) in R^n, the product au is defined by

au = (au1, au2, ..., aun).

The operation that combines the scalar a and the vector u to yield au is referred to as multiplication of the vector u by the scalar a, or simply as scalar multiplication. Also, the product au is called a scalar multiple of u.
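Because both operations act componentwise, they are immediate to carry out by machine. The following is a minimal sketch of ours (not from the text), assuming Python with NumPy, illustrating Definition 1.2 in R^4:

```python
import numpy as np

# Vectors in R^4, stored as arrays of components.
u = np.array([1.0, 0.0, 2.0, 1.0])
v = np.array([0.0, -2.0, 2.0, 3.0])

# Definition 1.2: addition and scalar multiplication are componentwise.
print(u + v)   # [ 1. -2.  4.  4.] = (u1 + v1, ..., u4 + v4)
print(3 * u)   # [ 3.  0.  6.  3.] = (3u1, ..., 3u4)
```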
The following theorem gives the basic properties of the two operations that we have 
defined. 
Theorem 1.3 The following properties are valid for any scalars a and b, and any vectors u, v, w in R^n:
1. u + v ∈ R^n. (Closure under addition)
2. (u + v) + w = u + (v + w). (Associative property of addition)
3. There is a vector 0 in R^n such that u + 0 = u for all u ∈ R^n. (Additive identity)
4. For each u ∈ R^n, there is a vector -u in R^n such that u + (-u) = 0. (Additive inverses)
5. u + v = v + u. (Commutative property of addition)
6. au ∈ R^n. (Absorption under scalar multiplication)
7. a(bu) = (ab)u. (Associative property of scalar multiplication)
8. a(u + v) = au + av. (Distributive property, vector addition)
9. (a + b)u = au + bu. (Distributive property, scalar addition)
10. 1u = u.
The proofs of these properties are easily carried out using the definitions of vector 
addition and scalar multiplication, as well as the properties of real numbers. As typical 
examples, properties 3, 4, and 8 will be proved here. The remaining proofs are left as 
an exercise. 
Proof of Property 3. The vector 0 = (0, 0, ..., 0) is in R^n, and if u = (u1, u2, ..., un),
u + 0 = (u1 + 0, u2 + 0, ..., un + 0) = u.

Proof of Property 4. If u = (u1, u2, ..., un) ∈ R^n, then v = (-u1, -u2, ..., -un) is in R^n since all real numbers have additive inverses. And since
u + v = (u1 + (-u1), u2 + (-u2), ..., un + (-un)) = 0,
v is the vector -u as required by property (4).

Proof of Property 8. Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn). Then
a(u + v) = a(u1 + v1, u2 + v2, ..., un + vn)
= (a(u1 + v1), a(u2 + v2), ..., a(un + vn))
= (au1 + av1, au2 + av2, ..., aun + avn)
= (au1, au2, ..., aun) + (av1, av2, ..., avn)
= au + av. ■
The associative property of addition in R^n can be generalized so that the terms in a sum such as a1u1 + a2u2 + ··· + akuk can be grouped with parentheses in any way without changing the value of the result. Such a sum can be written in the compact form Σ_{i=1}^k aiui, or Σi aiui if the number of terms in the sum is not important. It is always understood that only a finite number of nonzero terms is involved in such a sum.
Definition 1.4 Let A be a nonempty set of vectors in R^n. A vector v in R^n is linearly dependent on the set A if there exist vectors u1, u2, ..., uk in A and scalars a1, a2, ..., ak such that v = Σ_{i=1}^k aiui. A vector of the form Σ_{i=1}^k aiui is called a linear combination of the ui.

If linear dependence on a finite set B = {v1, v2, ..., vr} is under consideration, the statement in Definition 1.4 is equivalent to requiring that all of the vectors in B be involved in the linear combination. That is, a vector v is linearly dependent on B = {v1, v2, ..., vr} if and only if there are scalars b1, b2, ..., br such that v = Σ_{i=1}^r bivi.
Example 1 □ As an illustration, let B = {v1, v2, v3}, where
v1 = (1,0,2,1), v2 = (0,-2,2,3), and v3 = (1,-2,4,4),
and let v = (3,-4,10,9). Now v can be written as
v = 1·v1 + 0·v2 + 2·v3,
or
v = 3·v1 + 2·v2 + 0·v3.
Either of these combinations shows that v is linearly dependent on B.
To consider a situation involving an infinite set, let A be the set of all vectors in R^3 that have integral components, and let v = (√2, 5/3, 0). Now u1 = (1,0,1), u2 = (0,1,0), and u3 = (0,0,1) are in A, and
v = √2 u1 + (5/3) u2 - √2 u3.
Thus v is linearly dependent on A. It should be noted that other choices of vectors ui can be made in order to exhibit this dependence. ■
In order to decide whether a certain vector is linearly dependent on a given set in R^n, it is usually necessary to solve a system of equations. This is illustrated in the following example.
Example 2 □ Consider the question as to whether (6,0,-1) is linearly dependent on the set A = {(2,-1,1), (0,1,-1), (-2,1,0)}. To answer the question, we investigate the conditions on a1, a2, and a3 that are required by the equation
a1(2,-1,1) + a2(0,1,-1) + a3(-2,1,0) = (6,0,-1).
Performing the indicated scalar multiplications and additions in the left member of this equation leads to
(2a1 - 2a3, -a1 + a2 + a3, a1 - a2) = (6,0,-1).
This vector equation is equivalent to the system of equations
2a1 - 2a3 = 6
-a1 + a2 + a3 = 0
a1 - a2 = -1.
We decide to work toward the solution of this system by eliminating a1 from two of the equations in the system. As steps toward this goal, we multiply the first equation by 1/2 and we add the second equation to the third equation. These steps yield the system
a1 - a3 = 3
-a1 + a2 + a3 = 0
a3 = -1.
Adding the first equation to the second now results in
a1 - a3 = 3
a2 = 3
a3 = -1.
The solution a1 = 2, a2 = 3, a3 = -1 is now readily obtained. Thus the vector (6,0,-1) is linearly dependent on the set A. ■
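The elimination carried out by hand above amounts to solving a 3 × 3 linear system, and it can be checked mechanically. A sketch of that check (our addition, assuming Python with NumPy), with the vectors of A placed as the columns of the coefficient matrix:

```python
import numpy as np

# Columns are the vectors of A = {(2,-1,1), (0,1,-1), (-2,1,0)}.
A = np.array([[ 2,  0, -2],
              [-1,  1,  1],
              [ 1, -1,  0]], dtype=float)
b = np.array([6, 0, -1], dtype=float)

# Solve for the coefficients (a1, a2, a3) in a1*u1 + a2*u2 + a3*u3 = b.
a = np.linalg.solve(A, b)
print(a)   # [ 2.  3. -1.], agreeing with the solution found above
```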
Another important type of dependence is given in the definition below. This time, 
the phrase linearly dependent involves only a set instead of involving both a vector and 
a set. 
Definition 1.5 A set A of vectors in R^n is linearly dependent if there is a collection of vectors u1, u2, ..., uk in A and scalars c1, c2, ..., ck, not all of which are zero, such that c1u1 + c2u2 + ··· + ckuk = 0. If no such collection of vectors exists in A, then A is called linearly independent.
Again, the case involving a finite set of vectors is of special interest. It is readily seen that a finite set B = {v1, v2, ..., vr} is linearly dependent if and only if there are scalars b1, b2, ..., br, not all zero, such that Σ_{i=1}^r bivi = 0.
Example 3 □ The set B in Example 1 is linearly dependent since v1 + v2 - v3 = 0. The set A is also linearly dependent. For if u1, u2, and u3 are as before and u4 = (-3,-2,-4), then 3u1 + 2u2 + u3 + u4 = 0. ■
To determine the linear dependence or linear independence of a set {u1, u2, ..., uk} of vectors in R^n, it is necessary to investigate the conditions on the cj which are imposed by requiring that Σ_{j=1}^k cjuj = 0. If uj = (u1j, u2j, ..., unj), we have
Σ_{j=1}^k cjuj = Σ_{j=1}^k cj(u1j, u2j, ..., unj)
= (Σ_{j=1}^k cju1j, Σ_{j=1}^k cju2j, ..., Σ_{j=1}^k cjunj).
Thus Σ_{j=1}^k cjuj = 0 if and only if Σ_{j=1}^k cjuij = 0 for each i = 1, 2, ..., n. This shows that the problem of determining the conditions on the cj is equivalent to investigating the solutions of a system of n equations in k unknowns. If Σ_{j=1}^k cjuj = 0 implies c1 = c2 = ··· = ck = 0, then {u1, u2, ..., uk} is linearly independent.
The discussion in the preceding paragraph is illustrated in the next example. 
Example 4 □ Consider the set of vectors
{(1,1,8,1), (1,0,3,0), (3,1,14,1)}
in R^4. To determine the linear dependence or linear independence of this set, we investigate the solutions c1, c2, c3 to
c1(1,1,8,1) + c2(1,0,3,0) + c3(3,1,14,1) = (0,0,0,0).
Performing the indicated vector operations gives the equation
(c1 + c2 + 3c3, c1 + c3, 8c1 + 3c2 + 14c3, c1 + c3) = (0,0,0,0).
This vector equation is equivalent to
c1 + c2 + 3c3 = 0
c1 + c3 = 0
8c1 + 3c2 + 14c3 = 0
c1 + c3 = 0.
To solve this system, we first interchange the first two equations to place the equation c1 + c3 = 0 at the top.
c1 + c3 = 0
c1 + c2 + 3c3 = 0
8c1 + 3c2 + 14c3 = 0
c1 + c3 = 0
By adding suitable multiples of the first equation to each of the other equations, we then eliminate c1 from all but the first equation. This yields the system
c1 + c3 = 0
c2 + 2c3 = 0
3c2 + 6c3 = 0
0 = 0.
Eliminating c2 from the third equation gives
c1 + c3 = 0
c2 + 2c3 = 0
0 = 0
0 = 0.
It is now clear that there are many solutions to the system, and they are given by
c1 = -c3
c2 = -2c3
c3 is arbitrary.
In particular, it is not necessary that c3 be zero, so the original set of vectors is linearly dependent. ■
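In matrix terms, the cj above form a nontrivial solution of a homogeneous system, which shows up as a rank deficiency. A sketch of ours (not from the text, assuming NumPy) confirming Example 4:

```python
import numpy as np

# Columns are the vectors being tested, so M c = 0 encodes
# c1*(1,1,8,1) + c2*(1,0,3,0) + c3*(3,1,14,1) = 0.
M = np.array([[1, 1,  3],
              [1, 0,  1],
              [8, 3, 14],
              [1, 0,  1]], dtype=float)

print(np.linalg.matrix_rank(M))   # 2 < 3 columns: linearly dependent
print(M @ [-1, -2, 1])            # c = (-1,-2,1), i.e. c3 = 1, gives the zero vector
```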
The following theorem gives an alternative description of linear dependence for a 
certain type of set. 
Theorem 1.6 A set A = {u1, u2, ..., uk} in R^n that contains at least two vectors is linearly dependent if and only if some uj is a linear combination of the remaining vectors in the set.

Proof. If the set A = {u1, u2, ..., uk} is linearly dependent, then there are scalars a1, a2, ..., ak such that Σ_{i=1}^k aiui = 0 with at least one ai, say aj, not zero. This implies that
ajuj = -a1u1 - ··· - a_{j-1}u_{j-1} - a_{j+1}u_{j+1} - ··· - akuk
so that
uj = (-a1/aj)u1 + ··· + (-a_{j-1}/aj)u_{j-1} + (-a_{j+1}/aj)u_{j+1} + ··· + (-ak/aj)uk.
Thus uj can be written as a linear combination of the remaining vectors in the set.

Now assume that some uj is a linear combination of the remaining vectors in the set, i.e.,
uj = b1u1 + b2u2 + ··· + b_{j-1}u_{j-1} + b_{j+1}u_{j+1} + ··· + bkuk.
Then
b1u1 + ··· + b_{j-1}u_{j-1} + (-1)uj + b_{j+1}u_{j+1} + ··· + bkuk = 0,
and since the coefficient of uj in this linear combination is not zero, the set is linearly dependent. ■
The different meanings of the word "dependent" in Definitions 1.4 and 1.5 should be noted carefully. These meanings, though different, are closely related. The preceding theorem, for example, could be restated as follows: "A set {u1, ..., uk} is linearly dependent if and only if some ui is linearly dependent on the remaining vectors." This relation is further illustrated in some of the exercises at the end of this section.
In the last section of this chapter, the following definition and theorem are of primary 
importance. Both are natural extensions of Definition 1.4. 
Definition 1.7 A set A ⊆ R^n is linearly dependent on the set B ⊆ R^n if each u ∈ A is linearly dependent on B.

Thus A is linearly dependent on B if and only if each vector in A is a linear combination of vectors that are contained in B.
Other types of dependence are considered in some situations, but linear dependence 
is the only type that we will use. For this reason, we will frequently use the term 
"dependent" in place of "linearly dependent." 
In the proof of the next theorem, it is convenient to have available the notation Σ_{i=1}^k (Σ_{j=1}^m uij) for the sum of the form Σ_{j=1}^m u1j + Σ_{j=1}^m u2j + ··· + Σ_{j=1}^m ukj. The associative and commutative properties for vector addition [Theorem 1.3, (2) and (5)] imply that
Σ_{i=1}^k (Σ_{j=1}^m uij) = Σ_{j=1}^m (Σ_{i=1}^k uij).
Theorem 1.8 Let A, B, and C be subsets of R^n. If A is dependent on B and B is dependent on C, then A is dependent on C.

Proof. Suppose that A is dependent on B and B is dependent on C, and let u be an arbitrary vector in A. Since A is dependent on B, there are vectors v1, v2, ..., vm in B and scalars a1, a2, ..., am such that
u = Σ_{j=1}^m ajvj.
Since B is dependent on C, each vj can be written as a linear combination of certain vectors in C. In general, for different vectors vj, different sets of vectors from C would be involved in these linear combinations. But each of these m linear combinations (one for each vj) would involve only a finite number of vectors. Hence the set of all vectors in C that appear in a term of at least one of these linear combinations is a finite set, say {w1, w2, ..., wk}, and each vj can be written in the form vj = Σ_{i=1}^k bijwi. Replacing the vj's in the above expression for u by these linear combinations, we obtain
u = Σ_{j=1}^m ajvj
= Σ_{j=1}^m aj (Σ_{i=1}^k bijwi)
= Σ_{j=1}^m (Σ_{i=1}^k ajbijwi)
= Σ_{i=1}^k (Σ_{j=1}^m ajbijwi)
= Σ_{i=1}^k (Σ_{j=1}^m ajbij) wi.
Letting ci = Σ_{j=1}^m ajbij for i = 1, 2, ..., k, we have u = Σ_{i=1}^k ciwi. Thus u is dependent on C.
Since u was arbitrarily chosen in A, A is dependent on C and the theorem is proved. ■
Exercises 1.2 
1. For any pair of positive integers i and j, the symbol δij is defined by δij = 0 if i ≠ j and δij = 1 if i = j. This symbol is known as the Kronecker delta.
(a) Find the value of Σ_{i=1}^n (Σ_{j=1}^n δij).
(b) Find the value of Σ_{i=1}^n (Σ_{j=1}^n (1 - δij)).
(c) Find the value of Σ_{i=1}^n (Σ_{j=1}^n (-1)δij).
(d) Find the value of Σ_{j=1}^n δij δjk.
2. Prove the remaining parts of Theorem 1.3. 
3. In each case, determine whether or not the given vector v is linearly dependent on the given set A.
(a) v = (-2,1,4), A = {(1,1,1), (0,1,1), (0,0,1)}
(b) v = (-4,4,2), A = {(2,1,-3), (1,-1,3)}
(c) v = (1,2,1), A = {(1,0,-2), (0,2,1), (1,2,-1), (-1,2,3)}
(d) v = (2,13,-5), A = {(1,2,-1), (3,6,-3), (-1,1,0), (0,6,-2), (2,4,-2)}
(e) v = (0,1,2,0), A = {(1,0,-1,1), (-2,0,2,-2), (1,1,1,1), (2,1,0,2)}
(f) v = (2,3,5,5), A = {(0,1,1,1), (1,0,1,1), (1,-1,0,0), (1,1,2,2)}
4. Assuming the properties stated in Theorem 1.3, prove the following statements.
(a) The vector 0 in property (3) is unique.
(b) The vector -u in R^n which satisfies u + (-u) = 0 is unique.
(c) -u = (-1)u.
(d) The vector u - v is by definition the vector w such that v + w = u. Prove that u - v = u + (-1)v.
5. Determine whether or not the given set A is linearly dependent.
(a) A = {(1,0,-2), (0,2,1), (-1,2,3)}
(b) A = {(1,4,3), (2,12,6), (5,21,15), (0,2,-1)}
(c) A = {(1,2,-1), (-1,1,0), (1,3,-1)}
(d) A = {(1,0,1,2), (2,1,0,0), (4,5,6,0), (1,1,1,0)}
6. Show that the given set is linearly dependent and write one of the vectors as a linear combination of the remaining vectors.
(a) {(2,1,0), (1,1,0), (0,1,1), (-1,1,1)}
(b) {(1,2,1,0), (3,-4,5,6), (2,-1,3,3), (-2,6,-4,-6)}
7. Show that any vector in R^3 is dependent on the set {e1, e2, e3} where e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1).
8. Show that every vector in R^3 is dependent on the set {u1, u2, u3} where u1 = (1,0,0), u2 = (1,1,0), and u3 = (1,1,1).
9. Show that there is one vector in the set 
{(1,1,0),(0,1,1),(1,0,-1),(1,0,1)} 
that cannot be written as a linear combination of the other vectors in the set. 
10. Prove that if the set {u1, u2, ..., uk} of vectors in R^n contains the zero vector, it is linearly dependent.
11. Prove that a set consisting of exactly one nonzero vector is linearly independent. 
12. Prove that a set of two vectors in R^n is linearly dependent if and only if one of the vectors is a scalar multiple of the other.
13. Prove that a set of nonzero vectors {u1, u2, ..., uk} in R^n is linearly dependent if and only if some ur is a linear combination of the preceding vectors.
14. Let A = {u1, u2, u3} be a linearly independent set of vectors in R^n.
(a) Prove that the set {u1 - u2, u2 - u3, u1 + u3} is linearly independent.
(b) Prove that the set {u1 - u2, u2 - u3, u1 - u3} is linearly dependent.
15. Let {u1, ..., u_{r-1}, ur, u_{r+1}, ..., uk} be a linearly independent set of k vectors in R^n, and let u′r = Σ_{j=1}^r ajuj with ar ≠ 0. Prove that {u1, ..., u_{r-1}, u′r, u_{r+1}, ..., uk} is linearly independent.
16. Let ∅ denote the empty set of vectors in R^n. Determine whether or not ∅ is linearly dependent, and justify your conclusion.
17. Prove that any subset of a linearly independent set A ⊆ R^n is linearly independent.
18. Let A ⊆ R^n. Prove that if A contains a linearly dependent subset, then A is linearly dependent.
1.3 Subspaces of R^n
There are many subsets of R^n that possess the properties stated in Theorem 1.3. A study of these subsets furnishes a great deal of insight into the structure of the spaces R^n, and is of vital importance in subsequent material.
Definition 1.9 A set W is a subspace of R^n if W is contained in R^n and if the properties of Theorem 1.3 are valid in W. That is,
1. u + v ∈ W for all u, v in W.
2. (u + v) + w = u + (v + w) for all u, v, w in W.
3. 0 ∈ W.
4. For each u ∈ W, -u is in W.
5. u + v = v + u for all u, v in W.
6. au ∈ W for all a ∈ R and all u ∈ W.
7. a(bu) = (ab)u for all a, b ∈ R and all u ∈ W.
8. a(u + v) = au + av for all a ∈ R and all u, v ∈ W.
9. (a + b)u = au + bu for all a, b ∈ R and all u ∈ W.
10. 1u = u for all u ∈ W.
Before considering some examples of subspaces, we observe that the list of properties in Definition 1.9 can be shortened a great deal. For example, properties (2), (5), (7), (8), (9), and (10) are valid throughout R^n, and hence are automatically satisfied in any subset of R^n. Thus a subset W of R^n is a subspace if and only if properties (1), (3), (4), and (6) hold in W. This reduces the amount of labor necessary in order to determine whether or not a given subset is a subspace, but an even more practical test is given in the following theorem.
Theorem 1.10 Let W be a subset of R^n. Then W is a subspace of R^n if and only if the following conditions hold:
(i) W is nonempty;
(ii) for any a, b ∈ R and any u, v ∈ W, au + bv ∈ W.

Proof. Suppose that W is a subspace of R^n. Then W is nonempty, since 0 ∈ W by property (3). Let a and b be any elements of R, and let u and v be elements of W. By property (6), au and bv are in W. Hence au + bv ∈ W by property (1), and condition (ii) is satisfied.
Suppose, on the other hand, that conditions (i) and (ii) are satisfied in W. From our discussion above, it is necessary only to show that properties (1), (3), (4), and (6) are valid in W. Since W is nonempty, there is at least one vector u ∈ W. By condition (ii), 1·u + (-1)·u = 0 ∈ W, and property (3) is valid. Again by condition (ii), (-1)u = -u ∈ W, so property (4) is valid. For any u, v in W, 1·u + 1·v = u + v ∈ W, and property (1) is valid. For any a ∈ R and any u ∈ W, au + 0·u = au ∈ W. Thus property (6) is valid, and W is a subspace of R^n. ■
Example 1 □ The following list gives several examples of subspaces of R^n. We will prove that the third set in the list forms a subspace and leave it as an exercise to verify that the remaining sets are subspaces of R^n.
1. The set {0}, called the zero subspace of R^n.
2. The set of all scalar multiples of a fixed vector u ∈ R^n.
3. The set of all vectors that are dependent on a given set {u1, u2, ..., uk} of vectors in R^n.
4. The set of all vectors (x1, x2, ..., xn) in R^n that satisfy a fixed equation
a1x1 + a2x2 + ··· + anxn = 0.
For n = 2 or n = 3, this example has a simple geometric interpretation, as we shall see later.
5. The set of all vectors (x1, x2, ..., xn) in R^n that satisfy the system of equations
a11x1 + a12x2 + ··· + a1nxn = 0
a21x1 + a22x2 + ··· + a2nxn = 0
⋮
am1x1 + am2x2 + ··· + amnxn = 0.
Proof for the third set. Let W be the set of all vectors that are dependent on the set A = {u1, u2, ..., uk} of vectors in R^n. From the discussion in the paragraph following Definition 1.4, we know that W is the set of all vectors that can be written in the form Σ_{i=1}^k aiui. The set W is nonempty since
0·u1 + 0·u2 + ··· + 0·uk = 0 is in W.
Let u, v be arbitrary vectors in W, and let a, b be arbitrary scalars. From the definition of W, there are scalars ci and di such that
u = Σ_{i=1}^k ciui and v = Σ_{i=1}^k diui.
Thus we have
au + bv = a(Σ_{i=1}^k ciui) + b(Σ_{i=1}^k diui)
= Σ_{i=1}^k aciui + Σ_{i=1}^k bdiui
= Σ_{i=1}^k (aciui + bdiui)
= Σ_{i=1}^k (aci + bdi)ui.
The last expression is a linear combination of the elements in A, and consequently au + bv is an element of W since W contains all vectors that are dependent on A. Both conditions in Theorem 1.10 have been verified, and therefore W is a subspace of R^n. ■
Our next theorem has a connection with the sets listed as 4 and 5 in Example 1 that should be investigated by the student. In this theorem, we are confronted with a situation which involves a collection that is not necessarily finite. In situations such as this, it is desirable to have available a notational convenience known as indexing.
Let L and T be nonempty sets. Suppose that with each λ ∈ L there is associated a unique element tλ of T, and that each element of T is associated with at least one λ ∈ L. (That is, suppose that there is given a function with domain L and range T.) Then we say that the set T is indexed by the set L, and refer to L as an index set. We write {tλ | λ ∈ L} to denote that the collection of tλ's is indexed by L.
If {Mλ | λ ∈ L} is a collection of sets Mλ indexed by L, then ⋃_{λ∈L} Mλ indicates the union of this collection of sets. Thus ⋃_{λ∈L} Mλ is the set of all elements that are contained in at least one Mλ. Similarly, ⋂_{λ∈L} Mλ denotes the intersection of the sets Mλ, and consists of all elements that are in every Mλ.
Theorem 1.11 The intersection of any nonempty collection of subspaces of R^n is a subspace of R^n.

Proof. Let {Sλ | λ ∈ L} be any nonempty collection of subspaces Sλ of R^n, and let W = ⋂_{λ∈L} Sλ. Now 0 ∈ Sλ for each λ ∈ L, so 0 ∈ W and W is nonempty. Let a, b ∈ R, and let u, v ∈ W. Since each of u and v is in Sλ for every λ ∈ L, au + bv ∈ Sλ for every λ ∈ L. Hence au + bv ∈ W, and W is a subspace by Theorem 1.10. ■
Thus the operation of intersection can be used to construct new subspaces from given subspaces.
There are all sorts of subsets in a given subspace W of R^n. Some of these have the important property of being spanning sets for W, or sets that span W. The following definition describes this property.
Definition 1.12 Let W be a subspace of R^n. A nonempty set A of vectors in R^n spans W if A ⊆ W and if every vector in W is a linear combination of vectors in A. By definition, the empty set ∅ spans the zero subspace.

Intuitively, the word span is a natural choice in Definition 1.12 because a spanning set A reaches across (hence spans) the entire subspace when all linear combinations of A are formed.
Example 2 □ We shall show that each of the following sets spans R^3:
E3 = {(1,0,0), (0,1,0), (0,0,1)},
A = {(1,1,1), (0,1,1), (0,0,1)}.
The set E3 spans R^3 since an arbitrary (x, y, z) in R^3 can be written as
(x, y, z) = x(1,0,0) + y(0,1,0) + z(0,0,1).
In calculus texts, the vectors in E3 are labeled with the standard notation
i = (1,0,0), j = (0,1,0), k = (0,0,1)
and this is used to write (x, y, z) = xi + yj + zk.
We can take advantage of the way zeros are placed in the vectors of A and write
(x, y, z) = x(1,1,1) + (y - x)(0,1,1) + (z - y)(0,0,1).
The coefficients in this equation can be found by inspection if we start with the first component and work from left to right. ■
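The "left to right" computation works because the vectors of A, written as columns, form a triangular matrix, so the coefficients fall out by forward substitution. A sketch of ours (not from the text, assuming NumPy) for a sample (x, y, z):

```python
import numpy as np

# Columns are the vectors of A = {(1,1,1), (0,1,1), (0,0,1)}:
# a lower triangular matrix with nonzero diagonal.
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)

x, y, z = 4.0, 7.0, 9.0
c = np.linalg.solve(A, [x, y, z])
print(c)   # [4. 3. 2.] = (x, y - x, z - y), the coefficients found by inspection
```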
We shall see in Theorem 1.15 that the concept of a spanning set is closely related to the set ⟨A⟩ defined as follows.

Definition 1.13 For any nonempty set A of vectors in R^n, ⟨A⟩ is the set of all vectors in R^n that are dependent on A. By definition, ⟨∅⟩ is the zero subspace {0}.

Thus, for A ≠ ∅, ⟨A⟩ is the set of all vectors u that can be written as u = Σ_{j=1}^k ajuj with aj in R and uj in A. Since any u in A is dependent on A, the subset relation A ⊆ ⟨A⟩ always holds.
In Example 1 of this section, the third set listed is ⟨A⟩ where A = {u1, u2, ..., uk} in R^n. When the notation ⟨A⟩ is combined with the set notation for this A, the result is a somewhat cumbersome notation:
⟨A⟩ = ⟨{u1, u2, ..., uk}⟩.
We make a notational agreement for situations like this to simply write
⟨A⟩ = ⟨u1, u2, ..., uk⟩.
For example, if A = {(1,3,7), (2,0,6)}, we would write
⟨A⟩ = ⟨(1,3,7), (2,0,6)⟩
instead of ⟨A⟩ = ⟨{(1,3,7), (2,0,6)}⟩ to indicate the set of all vectors that are dependent on A.
It is proved in Example 1 that, for a finite subset A = {u1, u2, ..., uk} of R^n, the set ⟨A⟩ is a subspace of R^n. The next theorem generalizes this result to an arbitrary subset A of R^n.
Theorem 1.14 For any subset A of R^n, ⟨A⟩ is a subspace of R^n.
Proof. If A is empty, then ⟨A⟩ is the zero subspace by definition.
Suppose A is nonempty. Then ⟨A⟩ is nonempty, since A ⊆ ⟨A⟩. Let a, b ∈ R, and let u, v ∈ ⟨A⟩. Now au + bv is dependent on {u, v} and each of u and v is dependent on A. Hence {au + bv} is dependent on {u, v} and {u, v} is dependent on A. Therefore {au + bv} is dependent on A by Theorem 1.8. Thus au + bv ∈ ⟨A⟩, and ⟨A⟩ is a subspace. ■
We state the relation between Definitions 1.12 and 1.13 as a theorem, even though the proof is almost trivial.

Theorem 1.15 Let W be a subspace of R^n, and let A be a subset of R^n. Then A spans W if and only if ⟨A⟩ = W.

Proof. The statement is trivial in case A = ∅. Suppose, then, that A is nonempty.
If A spans W, then every vector in W is dependent on A, so W ⊆ ⟨A⟩. Now A ⊆ W, and repeated application of condition (ii), Theorem 1.10, yields the fact that any linear combination of vectors in A is again a vector in W. Thus ⟨A⟩ ⊆ W, and we have ⟨A⟩ = W.
If ⟨A⟩ = W, it follows at once that A spans W, and this completes the proof. ■

We will refer to ⟨A⟩ as the subspace spanned by A. Some of the notations used in various texts for this same subspace are
span(A), sp(A), lin(A), Sp(A), and S[A].
We will use span(A) or ⟨A⟩ in this book.
Example 3 □ With A and W as follows, we shall determine whether or not A spans W.
A = {(1,0,1,0), (0,1,0,1), (2,3,2,3)}
W = ⟨(1,0,1,0), (1,1,1,1), (0,1,1,1)⟩
We first check to see whether A ⊆ W. That is, we check to see if each vector in A is a linear combination of the vectors listed in the spanning set for W. By inspection, we see that (1,0,1,0) is listed in the spanning set, and
(0,1,0,1) = (-1)(1,0,1,0) + (1)(1,1,1,1) + (0)(0,1,1,1).
The linear combination is not as apparent with (2,3,2,3), so we place unknown coefficients in the equation
a1(1,0,1,0) + a2(1,1,1,1) + a3(0,1,1,1) = (2,3,2,3).
Using the same procedure as in Example 2 of Section 1.2, we obtain the system of equations
a1 + a2 = 2
a2 + a3 = 3
a1 + a2 + a3 = 2
a2 + a3 = 3.
It is then easy to find the solution a1 = -1, a2 = 3, a3 = 0. That is,
(-1)(1,0,1,0) + 3(1,1,1,1) + 0(0,1,1,1) = (2,3,2,3).
Thus we have A ⊆ W.
We must now decide if every vector in W is linearly dependent on A, and Theorem 1.8 is of help here. We have W dependent on the set
B = {(1,0,1,0), (1,1,1,1), (0,1,1,1)}.
If B is dependent on A, then W is dependent on A, by Theorem 1.8. On the other hand, if B is not dependent on A, then W is not dependent on A since B ⊆ W. Thus we need only check to see if each vector in B is a linear combination of vectors in A. We see that (1,0,1,0) is listed in A, and
(1,1,1,1) = (1,0,1,0) + (0,1,0,1).
Considering (0,1,1,1), we set up the equation
b1(1,0,1,0) + b2(0,1,0,1) + b3(2,3,2,3) = (0,1,1,1).
This is equivalent to the system
b1 + 2b3 = 0
b2 + 3b3 = 1
b1 + 2b3 = 1
b2 + 3b3 = 1.
The first and third equations contradict each other, so there is no solution. Hence W is not dependent on A, and the set A does not span W. ■
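Each step of this example asks whether one vector is a linear combination of others, i.e., whether a linear system is consistent. A sketch of ours (not from the text, assuming NumPy) that automates Example 3 by testing least-squares residuals:

```python
import numpy as np

A = [(1, 0, 1, 0), (0, 1, 0, 1), (2, 3, 2, 3)]   # the candidate spanning set
B = [(1, 0, 1, 0), (1, 1, 1, 1), (0, 1, 1, 1)]   # generators of W

def in_span(vectors, target):
    # target lies in the span iff the least-squares solution has zero residual
    M = np.column_stack([np.array(v, dtype=float) for v in vectors])
    coef, *_ = np.linalg.lstsq(M, np.array(target, dtype=float), rcond=None)
    return np.allclose(M @ coef, np.array(target, dtype=float))

print(all(in_span(B, v) for v in A))   # True:  A is contained in W
print(all(in_span(A, w) for w in B))   # False: (0,1,1,1) is not dependent on A
```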
We saw earlier in this section that the operation of intersection can be used to generate new subspaces from known subspaces. There is another operation, given in the following definition, that also can be used to form new subspaces from given subspaces.

Definition 1.16 Let S1 and S2 be nonempty subsets of R^n. Then the sum S1 + S2 is the set of all vectors u ∈ R^n that can be expressed in the form u = u1 + u2, where u1 ∈ S1 and u2 ∈ S2.
Although this definition applies to nonempty subsets of R^n generally, the more interesting situations are those in which both subsets are subspaces.

Theorem 1.17 If W1 and W2 are subspaces of R^n, then W1 + W2 is a subspace of R^n.

Proof. Clearly 0 + 0 = 0 is in W1 + W2, so that W1 + W2 is nonempty.
Let a, b ∈ R, and let u and v be arbitrary elements in W1 + W2. From the definition of W1 + W2, it follows that there are vectors u1, v1 in W1 and u2, v2 in W2 such that u = u1 + u2 and v = v1 + v2. Now au1 + bv1 ∈ W1 and au2 + bv2 ∈ W2 since W1 and W2 are subspaces. This gives
au + bv = a(u1 + u2) + b(v1 + v2)
= (au1 + bv1) + (au2 + bv2),
which clearly is an element of W1 + W2. Therefore, W1 + W2 is a subspace of R^n by Theorem 1.10. ■
Example 4 □ Consider the subspaces W1 and W2 as follows:
W1 = ⟨(1,-1,0,0), (0,0,0,1)⟩,
W2 = ⟨(2,-2,0,0), (0,0,1,0)⟩.
Then W1 is the set of all vectors of the form
a1(1,-1,0,0) + a2(0,0,0,1) = (a1, -a1, 0, a2),
W2 is the set of all vectors of the form
b1(2,-2,0,0) + b2(0,0,1,0) = (2b1, -2b1, b2, 0),
and W1 + W2 is the set of all vectors of the form
a1(1,-1,0,0) + a2(0,0,0,1) + b1(2,-2,0,0) + b2(0,0,1,0) = (a1 + 2b1, -a1 - 2b1, b2, a2).
The last equation describes the vectors in W1 + W2, but it is not the most efficient description possible. Since a1 + 2b1 can take on any real number c1 as a value, we see that W1 + W2 is the set of all vectors of the form
(c1, -c1, c2, c3). ■
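Since every vector of W1 + W2 is a sum of linear combinations of the two spanning sets, the union of those sets spans W1 + W2; the redundancy noted above appears as a drop in rank. A sketch of ours (not from the text, assuming NumPy):

```python
import numpy as np

# The generators of W1 and W2, stacked together, span W1 + W2.
gens = np.array([[1, -1, 0, 0],
                 [0,  0, 0, 1],
                 [2, -2, 0, 0],
                 [0,  0, 1, 0]], dtype=float)

# Rank 3 < 4 generators: (2,-2,0,0) = 2*(1,-1,0,0) is redundant, and
# W1 + W2 is the three-parameter family of vectors (c1, -c1, c2, c3).
print(np.linalg.matrix_rank(gens))   # 3
```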
Exercises 1.3 
1. Prove that each of the sets listed as 1, 2, 4, and 5 in Example 1 of this section is a subspace of R^n.
2. Explain the connection between the sets listed as 4 and 5 in Example 1 and 
Theorem 1.11. 
3. Let L denote the set of all real numbers λ such that 0 < λ < 1. For each λ ∈ L, let Mλ be the set of all x ∈ R such that |x| < λ. Find ⋃_{λ∈L} Mλ and ⋂_{λ∈L} Mλ.
4. Let V = R^3.
(a) Exhibit a set of three vectors that spans V.
(b) Exhibit a set of four vectors that spans V.
5. Formulate Definition 1.4 and Definition 1.5 for an indexed set A = {uλ | λ ∈ L} of vectors in R^n.
6. For each given set A and subspace W, determine whether or not A spans W. 
(a) A = {(1,0,2), (-1,1,-3)}, W = ⟨(1,0,2)⟩
(b) A = {(1,0,2), (-1,1,-3)}, W = ⟨(1,1,1), (2,-1,5)⟩
(c) A = {(1,-2), (-1,3)}, W = R^2
(d) A = {(2,3,0,-1), (2,1,-1,2)}, W = ⟨(0,-2,-1,3), (6,7,-1,0)⟩
(e) A = {(3,-1,2,1), (4,0,1,0)}, W = ⟨(3,-1,2,1), (4,0,1,0), (0,-1,0,1)⟩
(f) A = {(3,-1,2,1), (4,0,1,0)}, W = ⟨(3,-1,2,1), (4,0,1,0), (1,1,-1,-1)⟩
7. Let W1 = ⟨(2,-1,5)⟩ and W2 = ⟨(3,-2,10)⟩. Determine whether or not the given vector u is in W1 + W2.
(a) u = (-4,1,-5)
(b) u = (3,2,-6)
(c) u = (-5,3,-2)
(d) u = (3,0,0)
8. Let A = {(1,2,0), (1,1,1)}, and let B = {(0,1,-1), (0,2,2)}. By direct verification of the conditions of Theorem 1.10 in each case, show that ⟨A⟩, ⟨B⟩, and ⟨A⟩ + ⟨B⟩ are subspaces of R^n.
9. Let A1 = {u, v} and A2 = {u, v, w} be subsets of R^n with w = u + v. Show that ⟨A1⟩ = ⟨A2⟩ by use of Definition 1.13.
10. Prove or disprove: ⟨A + B⟩ = ⟨A⟩ + ⟨B⟩ for any nonempty subsets A and B of R^n.
11. Let W be a subspace of R^n. Use condition (ii) of Theorem 1.10 and mathematical induction to show that any linear combination of vectors in W is again a vector in W.
12. If A ⊆ R^n, prove that ⟨A⟩ is the intersection of all of the subspaces of R^n that contain A.
13. Let W be a subspace of R^n. Prove that ⟨W⟩ = W.
14. Prove that ⟨A⟩ = ⟨B⟩ if and only if every vector in A is dependent on B and every vector in B is dependent on A.
1.4 Geometric Interpretations of R^2 and R^3
For n = 1, 2, or 3, the vector space R^n has a useful geometric interpretation in which a vector is identified with a directed line segment. This procedure is no doubt familiar to the student from the study of the calculus. In this section, we briefly review this interpretation of vectors and relate the geometric concepts to our work. The procedure can be described as follows.
For n = 1, the vector v = (x) is identified with the directed line segment on the real 
line that has its initial point (tail) at the origin and its terminal point (head) at x. This 
is shown in Figure 1.1. 
Figure 1.1 [a directed line segment on the real line with tail at the origin and head at x]
For n = 2 or n = 3, the vector v = (x, y) or v = (x, y, z) is identified with the directed 
line segment that has initial point at the origin and terminal point with rectangular 
coordinates given by the components of v. This is shown in Figure 1.2. 
Figure 1.2 [the vectors (x, y) in R^2 and (x, y, z) in R^3 drawn as directed segments from the origin; panels for n = 2 and n = 3]
In making identifications of vectors with directed line segments, we shall follow the 
convention that any line segment with the same direction and the same length as the 
one we have described may be used to represent the same vector v. 
In the remainder of this section, we shall concentrate our attention on R^3. However, it should be observed that corresponding results are valid in R^2.
If u = (u1, u2, u3) and v = (v1, v2, v3), then u + v = (u1 + v1, u2 + v2, u3 + v3). Thus, in the identification above, u + v is the diagonal of a parallelogram which has u and v as two adjacent sides. This is illustrated in Figure 1.3. The vector u + v can be drawn by placing the initial point of v at the terminal point of u and then drawing the directed line segment from the initial point of u to the terminal point of v. The "heads to tails" construction shown in Figure 1.3 is called the parallelogram rule for adding vectors.
Figure 1.3 [the parallelogram rule: u + v = (u1 + v1, u2 + v2, u3 + v3) is the diagonal of the parallelogram with adjacent sides u and v]
Now u - v is the vector w satisfying v + w = u, so that u - v is the directed line segment from the terminal point of v to the terminal point of u, as shown in Figure 1.4. Since u - v has its head at u and its tail at v, this construction is sometimes referred to as the "heads minus tails" rule.
Figure 1.4 
In approaching the geometric interpretation of subspaces, it is convenient to consider 
only those line segments with initial point at the origin. We shall do this in the following 
four paragraphs. 
As mentioned earlier, the set of all scalar multiples of a fixed nonzero vector v in R^3 is a subspace of R^3. From our interpretation above, it is clear that this subspace ⟨v⟩ consists of all vectors that lie on a line passing through the origin. This is shown in Figure 1.5.
Figure 1.5 
If A = {v1, v2} is independent, then v1 and v2 are not collinear. If P is any point in the plane determined by v1 and v2, then the vector OP from the origin to P is the diagonal of a parallelogram with sides parallel to v1 and v2, as shown in Figure 1.6. In this case, the subspace ⟨A⟩ consists of all vectors in the plane through the origin that contains v1 and v2.
Figure 1.6 
If A = {v1, v2, v3} is linearly independent, then v1 and v2 are not collinear and v3 does not lie in the plane of v1 and v2. Vectors v1, v2, and v3 of this type are shown in Figure 1.7. An arbitrary vector OP in R^3 is the diagonal of a parallelepiped with adjacent edges a1v1, a2v2, and a3v3 as shown in Figure 1.7(a). The "heads to tails" construction along the edges of the parallelepiped indicated in Figure 1.7(b) shows that
OP = a1v1 + a2v2 + a3v3.
Figure 1.7 [panels (a) and (b): OP as the diagonal of a parallelepiped with adjacent edges a1v1, a2v2, a3v3]
We shall prove in the next section that a subset of R^3 cannot contain more than three linearly independent vectors. Thus the subspaces of R^3 fall into one of four categories:
1. the origin;
2. a line through the origin;
3. a plane through the origin;
4. the entire space R^3.
It is shown in calculus courses that a plane in R^3 consists of all points with rectangular coordinates (x, y, z) that satisfy a linear equation
ax + by + cz = d
in which at least one of a, b, c is not zero. A connection is made in the next example between this fact and our classification of subspaces.
Example 1 □ Consider the problem of finding an equation of the plane ⟨A⟩ if A = {(1,2,3), (3,5,1)}.
Now the line segments² extending from the origin (0,0,0) to (1,2,3) and from the origin to (3,5,1) lie in the plane ⟨A⟩, so the three points with coordinates (0,0,0), (1,2,3), and (3,5,1) must all lie in the plane. This is shown in Figure 1.8.

²Note that ordered triples such as (1,2,3) are doing double duty here. Sometimes they are coordinates of points, and sometimes they are vectors.

Figure 1.8 [the plane through the origin containing the points (1,2,3) and (3,5,1)]

Since the points lie in the plane, their coordinates must satisfy the equation
ax + by + cz = d
of the plane. Substituting in order for (0,0,0), (1,2,3), and (3,5,1), we obtain
0 = d
a + 2b + 3c = d
3a + 5b + c = d.
Using d = 0 and subtracting 3 times the second equation from the last one leads to
a + 2b + 3c = 0
-b - 8c = 0.
Solving for a and b in terms of c, we obtain the following solutions:
a = 13c
b = -8c
c is arbitrary.
With c = 1, we have
13x - 8y + z = 0
as the equation of the plane ⟨A⟩. ■
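As a numerical cross-check on this example (our addition, assuming NumPy): a normal to the plane spanned by two independent vectors in R^3 can be taken as their cross product, which is introduced in Exercise 17 of this section.

```python
import numpy as np

v1 = np.array([1, 2, 3])
v2 = np.array([3, 5, 1])

# The cross product is normal to the plane <A>; the plane through the
# origin with normal (n1, n2, n3) has equation n1*x + n2*y + n3*z = 0.
n = np.cross(v1, v2)
print(n)   # [-13   8  -1], a scalar multiple of (13, -8, 1)

# Both spanning vectors satisfy 13x - 8y + z = 0.
print(np.dot([13, -8, 1], v1), np.dot([13, -8, 1], v2))   # 0 0
```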
In the remainder of our discussion, we shall need the following definition, which applies to real coordinate spaces in general.

Definition 1.18 For any two vectors u = (u1, u2, ..., un) and v = (v1, v2, ..., vn), the inner product (dot product, or scalar product) of u and v is defined by
u · v = u1v1 + u2v2 + ··· + unvn = Σ_{k=1}^n ukvk.

The inner product defined in this way is a natural extension of the following definitions that are used in the calculus:
(x1, y1) · (x2, y2) = x1x2 + y1y2,
(x1, y1, z1) · (x2, y2, z2) = x1x2 + y1y2 + z1z2.
The distance formulas used in the calculus lead to formulas for the length ||v|| of a vector v in R^2 or R^3 as follows:
||(x, y)|| = √(x^2 + y^2),
||(x, y, z)|| = √(x^2 + y^2 + z^2).
We extend these formulas for length to more general use in the next definition.

Definition 1.19 For any v = (v1, v2, ..., vn) in R^n, the length (or norm) of v is denoted by ||v|| and is defined by
||v|| = √(v1^2 + v2^2 + ··· + vn^2).
The following properties are direct consequences of the definitions involved, and are presented as a theorem for convenient reference.

Theorem 1.20 For any u, v, w in R^n and any a in R:
(i) u · v = v · u
(ii) (au) · v = u · (av) = a(u · v)
(iii) u · (v + w) = u · v + u · w
(iv) ||u|| = √(u · u), or u · u = ||u||^2
(v) ||au|| = |a| ||u||.
Our next theorem gives a geometric interpretation of u · v in R^2 or R^3. In the proof, we use the Law of Cosines from trigonometry: If the sides and angles of an arbitrary triangle are labeled according to the pattern in Figure 1.9, then
cos C = (a^2 + b^2 - c^2) / (2ab).
We state and prove the theorem for R^3, but the same result holds in R^2 with a similar proof.
Figure 1.9 [a triangle with sides a, b, c and angle C opposite side c]
Theorem 1.21 For any two nonzero vectors u = (u1, u2, u3) and v = (v1, v2, v3) in R^3, u · v = ||u|| ||v|| cos θ, where θ is the angle between the directions of u and v and 0° ≤ θ ≤ 180°.

Proof. Suppose first that θ = 0° or θ = 180°. Then v = cu, where the scalar c is positive if θ = 0° and negative if θ = 180°. We have
||u|| ||v|| cos θ = ||u|| (|c| · ||u||) cos θ
= |c| cos θ ||u||^2
= c ||u||^2
and
u · v = (u1, u2, u3) · (cu1, cu2, cu3)
= c ||u||^2.
Thus the theorem is true for θ = 0° or θ = 180°.
Suppose now that 0° < θ < 180°. If u - v is drawn from the head of v to the head of u, the vectors u, v and u - v form a triangle with u - v as the side opposite θ. (See Figure 1.10.)

Figure 1.10 [triangle formed by u = (u1, u2, u3), v = (v1, v2, v3), and u - v = (u1 - v1, u2 - v2, u3 - v3) joining the head of v to the head of u]

From the Law of Cosines, we have
cos θ = (||u||^2 + ||v||^2 - ||u - v||^2) / (2 ||u|| ||v||).
Thus
||u|| ||v|| cos θ = (1/2)(||u||^2 + ||v||^2 - ||u - v||^2)
= (1/2)[u1^2 + u2^2 + u3^2 + v1^2 + v2^2 + v3^2 - ((u1 - v1)^2 + (u2 - v2)^2 + (u3 - v3)^2)]
= u1v1 + u2v2 + u3v3
= u · v. ■
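Rearranged, Theorem 1.21 gives θ = arccos(u · v / (||u|| ||v||)), the usual way of computing the angle between two vectors. A sketch of ours (not from the text, assuming NumPy):

```python
import numpy as np

u = np.array([1.0, 0.0, 1.0])
v = np.array([0.0, 1.0, 1.0])

# Theorem 1.21 rearranged: cos(theta) = u . v / (||u|| ||v||).
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))  # clip guards rounding
print(theta)   # 60.0, since cos(theta) = 1/2 for these vectors
```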
Corollary 1.22 In R^2 or R^3, two nonzero vectors u and v are perpendicular (or orthogonal) if and only if u · v = 0.

Proof. This follows at once from the fact that u · v = 0 if and only if cos θ = 0. ■
Suppose that u and v are vectors in R^2 or R^3 represented by directed line segments with the same initial point, as shown in Figure 1.11. The vector labeled Proj_u v in the figure is called the vector projection of v onto u. In order to construct Proj_u v, we first draw the straight line that contains u. Next we draw a perpendicular segment joining the head of v to the line containing u. The vector from the initial point of u to the foot of the perpendicular segment is Proj_u v. The vector Proj_u v is also called the vector component of v along u.

Figure 1.11 [the vector projection Proj_u v of v onto u, shown for an acute and an obtuse angle θ]

Let θ (0° ≤ θ ≤ 180°) denote the angle between the directions of u and v as labeled in Figure 1.11. The number
d = ||v|| cos θ = (u · v) / ||u||
is called the scalar projection of v onto u or the scalar component of v along u. From Figure 1.11, it is clear that d is the length of Proj_u v if 0° ≤ θ ≤ 90° and d is the negative of the length of Proj_u v if θ > 90°. Thus d can be regarded as the directed length of Proj_u v.
The geometry involved in having line segments perpendicular to each other breaks down in R^n if n > 3. Even so, we extend the use of the word orthogonal to all R^n. Two vectors u, v in R^n are called orthogonal if u · v = 0. A set {uλ | λ ∈ L} of vectors in R^n is an orthogonal set if uλ1 · uλ2 = 0 whenever λ1 ≠ λ2.
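These constructions reduce to formulas: d = u · v / ||u||, and Proj_u v = (u · v / (u · u)) u, which is d times the unit vector along u. A sketch of ours (not from the text, assuming NumPy), using the data of Exercise 8(b) below:

```python
import numpy as np

u = np.array([2.0, 3.0, -6.0])
v = np.array([-3.0, 4.0, 12.0])

d = np.dot(u, v) / np.linalg.norm(u)       # scalar projection of v onto u
proj = (np.dot(u, v) / np.dot(u, u)) * u   # vector projection Proj_u v
print(d)      # -66/7, about -9.43: negative, so theta exceeds 90 degrees
print(proj)   # d times the unit vector u/||u||
```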
Exercises 1.4 
1. Use Figures 1.2 and 1.3 as patterns and illustrate the parallelogram rule with the vectors u = (1,6), v = (4,-4), and u + v in an xy-coordinate system.
2. Use Figures 1.2 and 1.4 as patterns and sketch the vectors u = (5,6), v = (2,-3), and u - v in an xy-coordinate system.
3. For each λ ∈ R, let Mλ be the set of all points in the plane with rectangular coordinates (x, y) that satisfy y = λx. Find ⋂_{λ∈R} Mλ and ⋃_{λ∈R} Mλ.
4. Find the equation of the plane ⟨A⟩ for the given set A.
(a) A = {(1,0,2), (2,-1,1)}
(b) A = {(1,0,2), (2,1,5)}
5. Find the lengths of the following vectors.
(a) (3,-4,-12)
(b) (2,3,6)
(c) (1,-2,4,2)
(d) (2,6,0,-3)
(e) (1,-2,-4,3)
(f) (3,0,-5,8)
6. Determine x so that (x,2) is perpendicular to (-3,9).
7. A vector of length 1 is called a unit vector.
(a) Find a unit vector that has the same direction as (3,-4,12).
(b) Find a vector in the direction of u = (2,-3,6) that has length 4 units.
8. Find each of the following scalar projections.
(a) The scalar projection of (2,3,1) onto (-1,-2,4).
(b) The scalar projection of (-3,4,12) onto (2,3,-6).
9. Find the length of the projection of the vector (3,4) onto a vector contained in the line x - 2y = 0.
10. Use projections to write the vector (19,22) as a linear combination of (3,4) and (4,-3). (Note that (3,4) and (4,-3) are perpendicular.)
11. Let A = {(1,0,2), (2,-1,1)} and let B = {(1,1,-1), (2,1,1)}.
(a) Find a set of vectors that spans ⟨A⟩ ∩ ⟨B⟩.
(b) Find a set of vectors that spans ⟨A⟩ + ⟨B⟩.
12. Work problem 11 with A = {(3,1,-2), (-2,1,3)} and B = {(0,1,0), (1,1,1)}.
13. Give an example of an orthogonal set of vectors in R^n.
14. Prove that an orthogonal set of nonzero vectors in R^n is linearly independent.
15. Let u and v be vectors in R^n. Prove that ||u|| = ||v|| if and only if u + v and u - v are orthogonal.
16. Prove Theorem 1.20.
17. The cross product u × v of two vectors u = (u1, u2, u3) and v = (v1, v2, v3) is given by
u × v = (u2v3 - u3v2, u3v1 - u1v3, u1v2 - u2v1)
= |u2 u3; v2 v3| e1 + |u3 u1; v3 v1| e2 + |u1 u2; v1 v2| e3,
where e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1), and |a b; c d| denotes the 2 × 2 determinant ad - bc. The symbolic determinant below is frequently used as a memory aid, since "expansion" about the first row yields the value of u × v.
        | e1 e2 e3 |
u × v = | u1 u2 u3 |
        | v1 v2 v3 |
Prove the following facts concerning the cross product.
(a) u × v is perpendicular to each of u, v.
(b) u × u = 0.
(c) u × v = -(v × u).
(d) (au) × v = a(u × v) = u × (av).
(e) u × (v + w) = (u × v) + (u × w).
(f) ||u × v||^2 = ||u||^2 ||v||^2 - (u · v)^2.
(g) ||u × v|| = ||u|| ||v|| sin θ, where θ is the angle between the directions of u and v, and 0° ≤ θ ≤ 180°.
(h) ||u × v|| is the area of a parallelogram with u and v as adjacent sides.
1.5 Bases and Dimension 
We have seen in Section 1.3 that a subset A of the subspace W of R^n may be a spanning set for W and also that A may be a linearly independent set. When both of these conditions are imposed, they form the requirements necessary for the subset to be a basis of W.

Definition 1.23 A set B of vectors is a basis of the subspace W if (i) B spans W and (ii) B is linearly independent.

The empty set ∅ is regarded as being linearly independent since the condition for linear dependence in Definition 1.5 cannot be satisfied. Thus ∅ is a basis of the zero subspace of R^n.
Example 1 □ Some of our earlier work helps in providing examples concerning bases. We saw in Example 2 of Section 1.3 that each of the sets
E3 = {(1,0,0), (0,1,0), (0,0,1)},
A = {(1,1,1), (0,1,1), (0,0,1)}
spans R^3. To show that the set A is linearly independent, we set up the equation
c1(1,1,1) + c2(0,1,1) + c3(0,0,1) = (0,0,0).
This equation leads directly to the following system of equations:
c1 = 0
c1 + c2 = 0
c1 + c2 + c3 = 0.
The only solution to this system is c1 = 0, c2 = 0, c3 = 0, and therefore A is linearly independent. It is even easier to see that E3 is linearly independent. Thus both E3 and A are bases for R^3.
We saw in Example 4 of Section 1.2 that the set
A = {(1,1,8,1), (1,0,3,0), (3,1,14,1)}
is linearly dependent. It follows that this set A is not a basis for the subspace
W = ⟨(1,1,8,1), (1,0,3,0), (3,1,14,1)⟩
that it spans. ■
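For n vectors in R^n, both conditions of Definition 1.23 can be settled at once by a determinant or rank computation. A sketch of ours (not from the text, assuming NumPy) checking the sets of this example:

```python
import numpy as np

# Columns are the vectors of A = {(1,1,1), (0,1,1), (0,0,1)}.
A = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)

# Three vectors in R^3 form a basis exactly when this matrix is invertible.
print(np.linalg.det(A))   # 1.0, nonzero: A is a basis of R^3

# The dependent set from Example 4 of Section 1.2 fails the rank test.
W = np.array([(1, 1, 8, 1), (1, 0, 3, 0), (3, 1, 14, 1)], dtype=float).T
print(np.linalg.matrix_rank(W))   # 2 < 3: not linearly independent
```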
We are concerned in much of our future work with indexed sets of vectors, and we use a restricted type of equality for this type of set. Two indexed sets A and B are equal if and only if they are indexed A = {uλ | λ ∈ L} and B = {vλ | λ ∈ L} by the same index set L such that uλ = vλ for each λ ∈ L. In particular, two finite sets A = {u1, u2, ..., uk} and B = {v1, v2, ..., vk} are equal if and only if they consist of the same vectors in the same order.
The equality described in the preceding paragraph is the one we shall use in the remainder of this book. For finite sets A = {u1, u2, ..., uk} of vectors, this equality is actually an equality of ordered sets. For example, if u1 ≠ u2, then
{u1, u2, ..., uk} ≠ {u2, u1, ..., uk}.
When we write
A = {u1, u2, ..., uk},
this notation is meant to imply that A is an ordered set with u1 as the first vector, u2 as the second vector, and so on. Moreover, we make a notational agreement for the remainder of this book that when we list the vectors in a set, this listing from left to right specifies their order. For instance, if we write
A = {(5,-1,0,2), (-4,0,3,7), (1,-1,3,9)}
this means that (5,-1,0,2) is the first vector in A, (-4,0,3,7) is the second vector in A, and (1,-1,3,9) is the third vector in A. That is, the vectors in A are automatically indexed with positive integers 1, 2, 3, ... from left to right without this being stated.
Suppose now that B = {v1, v2, ..., vk} is a basis of the subspace W. Then B spans W, so that any v ∈ W can be written as v = Σ_{i=1}^k aivi. As a matter of fact, this expression is unique. For if v = Σ_{i=1}^k bivi as well, we have
Σ_{i=1}^k aivi = Σ_{i=1}^k bivi
and therefore
Σ_{i=1}^k (ai - bi)vi = 0.
Since B is linearly independent, this requires that ai - bi = 0 and ai = bi for each i. In particular, we observe that if v = 0, then all ai are zero. This uniqueness of coefficients is not valid for spanning sets in general. The set B in Example 1 of Section 1.2 furnishes an illustration of this fact.
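Uniqueness of the coefficients means that computing them relative to a basis is a well-posed problem: the associated square system has exactly one solution. A sketch of ours (assuming NumPy; the basis chosen here is our own illustrative example, not one from the text):

```python
import numpy as np

# Columns are the vectors of the basis B = {(1,0,2), (0,1,1), (1,1,0)} of R^3.
B = np.array([[1, 0, 1],
              [0, 1, 1],
              [2, 1, 0]], dtype=float)
v = np.array([0.0, 1.0, 4.0])

# Because the columns form a basis, this coefficient vector is unique.
a = np.linalg.solve(B, v)
print(a)       # [ 1.  2. -1.]: v = 1*b1 + 2*b2 - 1*b3, and no other combination
print(B @ a)   # reproduces v
```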
Although a given subspace usually has many different bases, it happens that the number of vectors in different bases of the same subspace is always the same. The derivation of this result is the principal objective of this section.

Theorem 1.24 Let W be a subspace of R^n. Suppose that a finite set A = {u1, u2, ..., ur} spans W, and let B be a linearly independent set of vectors in W. Then B contains at most r vectors.
Proof. Let W, A, and B be as described in the statement of the theorem. If B
contains fewer than r vectors, the theorem is true. Suppose then that B contains at least
r vectors, say {v_1, v_2, ..., v_r} ⊆ B.
Our proof of the theorem follows this plan: We shall show that each of the vectors
v_i in B can be used in turn to replace a suitably chosen vector in A, with A dependent
on the set obtained after each replacement. The replacement process finally leads to
the fact that A is dependent on the set {v_1, v_2, ..., v_r}. We then prove that this set of
r vectors must, in fact, be equal to B.
Since A spans W, v_1 = Σ_{i=1}^{r} a_{i1} u_i with at least one a_{i1} ≠ 0 because v_1 ≠ 0.
Without loss of generality, we may assume that a_{11} ≠ 0 in the equation
v_1 = a_{11} u_1 + a_{21} u_2 + ... + a_{r1} u_r.
(This assumption is purely for notational convenience. We are assuming that the "suitably
chosen" vector in A is the first vector listed in A.) The equation for v_1 implies
that
a_{11} u_1 = v_1 - a_{21} u_2 - ... - a_{r1} u_r
and therefore
u_1 = (1/a_{11}) v_1 - (a_{21}/a_{11}) u_2 - ... - (a_{r1}/a_{11}) u_r.
Thus u_1 is dependent on {v_1, u_2, ..., u_r}, and this clearly implies that A is dependent
on {v_1, u_2, ..., u_r}.
Assume now that A is dependent on {v_1, v_2, ..., v_k, u_{k+1}, ..., u_r}, where 1 ≤ k < r.
Since W is dependent on A, then W is dependent on {v_1, v_2, ..., v_k, u_{k+1}, ..., u_r} by
Theorem 1.8. In particular,
v_{k+1} = Σ_{i=1}^{k} b_{i,k+1} v_i + Σ_{i=k+1}^{r} a_{i,k+1} u_i.
At least one of the coefficients a_{i,k+1} of the u_i must be nonzero. For if they were all zero,
then v_{k+1} would be a linear combination of v_1, v_2, ..., v_k, and this would contradict the
linear independence of B. Without loss of generality, we may assume that a_{k+1,k+1} ≠ 0.
Hence we obtain
v_{k+1} = Σ_{i=1}^{k} b_{i,k+1} v_i + Σ_{i=k+1}^{r} a_{i,k+1} u_i
where a_{k+1,k+1} ≠ 0. This gives
u_{k+1} = Σ_{i=1}^{k} (-b_{i,k+1}/a_{k+1,k+1}) v_i + (1/a_{k+1,k+1}) v_{k+1} + Σ_{i=k+2}^{r} (-a_{i,k+1}/a_{k+1,k+1}) u_i.
Thus
{v_1, v_2, ..., v_k, u_{k+1}, ..., u_r}
is dependent on
{v_1, v_2, ..., v_k, v_{k+1}, u_{k+2}, ..., u_r}.
Since A is dependent on {v_1, v_2, ..., v_k, u_{k+1}, ..., u_r}, Theorem 1.8 implies that A is
dependent on {v_1, v_2, ..., v_k, v_{k+1}, u_{k+2}, ..., u_r}.
Letting k = 1, 2, ..., r - 1 in the iterative argument above, we see that each v_i in
B can be used to replace a suitably chosen vector in A until we obtain the fact that A
is dependent on {v_1, v_2, ..., v_r}. But B is dependent on A, so we have B dependent on
{v_1, v_2, ..., v_r}. In particular, if B had more than r elements, any v_{r+1} in B would be
dependent on {v_1, v_2, ..., v_r}. But this is impossible since B is independent. Therefore,
B has r elements, and this completes the proof. ■
Corollary 1.25 Any linearly independent set of vectors in R^n contains at most n vectors.
Proof. The set of n vectors e_1 = (1,0,...,0), e_2 = (0,1,...,0), ..., e_n = (0,0,...,1)
spans R^n, since any v = (v_1, v_2, ..., v_n) can be written as v = Σ_{i=1}^{n} v_i e_i. The corollary
follows at once from the theorem. ■
If we think in terms of geometric models as presented in Section 1.4, the next theorem
seems intuitively obvious. It certainly seems obvious in R^2 and R^3, and there is no
reason to suspect the situation to be different in R^n for other values of n. On the
other hand, there is no compelling reason to suspect that the situation would not be
different in R^n for other values of n. At any rate, we refuse to accept such an important
statement on faith or intuition, and insist that this result be validated by a logical
argument based upon our development up to this point. This attitude or frame of mind
is precisely what is meant when one refers to the "axiomatic method" of mathematics.
Theorem 1.26 Every subspace of R^n has a basis with a finite number of elements.
Proof. Let W be a subspace of R^n. If W = {0}, then ∅ is a basis of W, and the
theorem is true.
Suppose W ≠ {0}. Then there is at least one nonzero v_1 in W. The set {v_1} is
linearly independent by Problem 11 of Exercises 1.2. Thus, there are nonempty subsets
of W that are linearly independent, and Corollary 1.25 shows that each of these subsets
contains at most n elements. Let T be the set of all positive integers t such that W
contains a set of t linearly independent vectors. Then any t in T satisfies the inequality
1 ≤ t ≤ n. Let r be the largest integer in T, and let B = {v_1, v_2, ..., v_r} be a linearly
independent set of r vectors in W. We shall show that B spans W.
Let v be any vector in W. Then {v_1, v_2, ..., v_r, v} is linearly dependent, from the
choice of r. Thus, there are scalars c_1, c_2, ..., c_r, c_{r+1}, not all zero, such that
Σ_{i=1}^{r} c_i v_i + c_{r+1} v = 0.
Now c_{r+1} ≠ 0 since B is independent. Therefore, v = Σ_{i=1}^{r} (-c_i/c_{r+1}) v_i. This shows
that B spans W, and hence is a basis of W. ■
This brings us to the main result of this section. 
Theorem 1.27 Let W be a subspace of R^n, and let A and B be any two bases for W.
Then A and B have the same number of elements.
Proof. If W = {0}, then each of A and B must be the empty set ∅, and the number
of elements in both A and B is 0.
Suppose W ≠ {0}. From Corollary 1.25, A and B are both finite. Let A =
{u_1, u_2, ..., u_r} and B = {v_1, v_2, ..., v_t}. Since A spans W and B is linearly independent,
t ≤ r by Theorem 1.24. But B spans W and A is linearly independent, so r ≤ t by the
same theorem. Thus, t = r. ■
Definition 1.28 If W is any subspace of R^n, the number of vectors in a basis of W
is called the dimension of W and is abbreviated as dim(W).
The following theorem is somewhat trivial, but it serves to confirm that the preceding 
definition of dimension is consistent with our prior experience. 
Theorem 1.29 The dimension of R^n is n.
Proof. Consider the set E_n = {e_1, e_2, ..., e_n}, where e_1 = (1,0,...,0), e_2 = (0,1,...,0),
..., e_n = (0,0,...,1) are the same as in the proof of Corollary 1.25. It was noted in
that proof that an arbitrary vector v = (v_1, v_2, ..., v_n) can be written as
v = Σ_{i=1}^{n} v_i e_i,
and therefore E_n spans R^n.
The set E_n is linearly independent since
Σ_{i=1}^{n} c_i e_i = 0 implies (c_1, c_2, ..., c_n) = (0, 0, ..., 0)
and therefore all c_i = 0. Thus E_n is a basis of R^n with n elements, and it follows that
R^n has dimension n. ■
Definition 1.30 The set E_n = {e_1, e_2, ..., e_n} used in the proof of Theorem 1.29 is
called the standard basis of R^n.
The discussion of coefficients just before Theorem 1.24 explains why the coefficients
c_1, c_2, ..., c_n in v = Σ_{i=1}^{n} c_i v_i are unique whenever B = {v_1, v_2, ..., v_n} is a basis of
R^n. The scalars c_i are called the coordinates of v relative to B. For the special basis
E_n = {e_1, e_2, ..., e_n}, the components of v are the same as the coordinates relative to
E_n.
Example 2 □ With the basis
E_3 = {(1,0,0), (0,1,0), (0,0,1)},
the coordinates of v = (x, y, z) relative to E_3 are the same as the components x, y, z,
respectively. But for the basis
B = {(1,1,1), (0,1,1), (0,0,1)},
the coordinates of v = (x, y, z) relative to B are the numbers x, y - x, z - y because
(x, y, z) = x(1,1,1) + (y - x)(0,1,1) + (z - y)(0,0,1). ■
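Coordinates relative to a basis can be computed by solving the linear system whose
coefficient columns are the basis vectors. The short Python/NumPy sketch below is our
own illustration, not part of the original text; the sample vector (4, 7, 9) is an arbitrary
choice used to check the formula of Example 2.

    import numpy as np

    # Columns of M are the basis vectors (1,1,1), (0,1,1), (0,0,1) of B.
    M = np.array([(1, 1, 1), (0, 1, 1), (0, 0, 1)], dtype=float).T
    v = np.array([4.0, 7.0, 9.0])   # v = (x, y, z) with x = 4, y = 7, z = 9

    coords = np.linalg.solve(M, v)  # unique solution since B is a basis
    print(coords)                   # [4. 3. 2.] = (x, y - x, z - y)

The unique solvability of the system here is precisely the uniqueness of coefficients
established before Theorem 1.24.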
There are several types of problems involving "basis" and "dimension" that occur
often in linear algebra. In dealing with a certain subspace W, it may be necessary to
find the dimension of W, to find a basis of W, or to determine whether or not a given
set is a basis of W. Frequently it is desirable to find a basis of W that has certain
specified properties. The fundamental techniques for attacking problems such as these
are developed in the remainder of this section.
Theorem 1.31 Every spanning set of a subspace W of R^n contains a basis of W.
Proof. Suppose that W is a subspace of R^n and that A is a spanning set for W. If A
is independent, then A is a basis and the theorem is true. Consider now the possibilities
when A is dependent.
If A = {0}, then W is the zero subspace. But we have already seen that ∅ is
a basis for the zero subspace, and ∅ ⊆ {0}. Thus the theorem is true if A = {0}. If
A ≠ {0}, then there exists a v_1 in A that is nonzero. If A is dependent on {v_1},
we have a basis of W. If A is not dependent on {v_1}, then there is a v_2 in A such
that {v_1, v_2} is linearly independent. This procedure can be repeated until we obtain
a set B = {v_1, v_2, ..., v_r}, r ≤ n, that is linearly independent and spans W. For if
we did not obtain such a linearly independent set, we could continue until we had a
linearly independent set in W containing more than n vectors, and this would contradict
Corollary 1.25 since W ⊆ R^n. The set B thus obtained is the required basis of W. ■
Although the details of the work would vary, the procedure given in the proof above 
provides a method for "refining" a basis from a given spanning set. This refinement 
procedure is demonstrated in the next example. 
Example 3 □ With
A = {(1,2,1,0), (3,-4,5,6), (2,-1,3,3), (-2,6,-4,-6)},
we shall use the procedure in the proof of Theorem 1.31 to find a basis of W = (A)
that is contained in A.
It is natural to start the procedure by choosing v_1 = (1,2,1,0). We see that A is
not dependent on {v_1} because the second vector in A, (3,-4,5,6), is not a multiple
of v_1. If we let v_2 = (3,-4,5,6), then {v_1, v_2} is linearly independent.
We need to check now to see if A is dependent on {v_1, v_2}. When we set up the
equation
c_1(1,2,1,0) + c_2(3,-4,5,6) = (2,-1,3,3),
this leads to the system of equations
c_1 + 3c_2 = 2
2c_1 - 4c_2 = -1
c_1 + 5c_2 = 3
6c_2 = 3.
The solution to this system is easily found to be c_1 = c_2 = 1/2. Thus the third vector in
A is dependent on {v_1, v_2}. In similar fashion, we find that
(1)(1,2,1,0) + (-1)(3,-4,5,6) = (-2,6,-4,-6).
Thus A is dependent on {v_1, v_2}, and
{(1,2,1,0), (3,-4,5,6)}
is a basis for (A).
The work we have done shows that (A) has dimension 2. After checking to see that
no vector in A is a multiple of another vector in A, we can then see that any pair of
vectors from A forms a basis of (A). ■
In the proof of Theorem 1.31, we have seen how a basis of a subspace W can be
refined or extracted from an arbitrary spanning set. The spanning set is not required
to be a finite set, but it could happen to be finite, of course. If a spanning set is
finite, the natural refining procedure demonstrated in Example 3 can be given a simpler
description: A basis of W can be obtained by deleting all vectors in the spanning set
that are linear combinations of the preceding vectors. Problem 13 of Exercises 1.2
assures us this will lead to an independent set, and Theorem 1.8 assures us this will
lead to a spanning set. Thus a basis will result from the deletion of all vectors in a finite
spanning set that are linear combinations of preceding vectors as listed in the spanning
set.
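This deletion procedure is easy to mechanize. The following Python/NumPy sketch is
our own illustration, not part of the original text; it uses a numerical rank test (with a
hypothetical tolerance tol) in place of the exact dependence checks of Example 3.

    import numpy as np

    def refine_to_basis(vectors, tol=1e-10):
        # Keep a vector exactly when appending it increases the rank,
        # that is, when it is not a linear combination of the vectors
        # already kept.
        kept = []
        for v in vectors:
            trial = kept + [v]
            if np.linalg.matrix_rank(np.array(trial), tol=tol) == len(trial):
                kept.append(v)
        return kept

    # The spanning set A of Example 3:
    A = [np.array(v, dtype=float)
         for v in [(1, 2, 1, 0), (3, -4, 5, 6), (2, -1, 3, 3), (-2, 6, -4, -6)]]
    basis = refine_to_basis(A)   # keeps (1,2,1,0) and (3,-4,5,6), as in Example 3

Note that a zero vector is always deleted, since appending it never increases the rank.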
Our next theorem looks at a procedure that in a sense is opposite to refining: It 
considers extending a linearly independent set to a basis. 
Theorem 1.32 Any linearly independent set in a subspace W of R^n can be extended
to a basis of W.
Proof. Let A = {u_1, u_2, ..., u_r} be a basis of W, and let B = {v_1, v_2, ..., v_t} be
a linearly independent set in W. If every u_i is in (B), then A is dependent on B. By
Theorem 1.8, W is dependent on B, and B is a basis of W.
Suppose that some u_i ∉ (B). Let k_1 be the smallest integer such that u_{k_1} ∉ (B).
Then B_1 = {v_1, v_2, ..., v_t, u_{k_1}} is linearly independent. If each u_i ∈ (B_1), then B_1
spans W and forms a basis. If some u_i ∉ (B_1), we repeat the process. After p steps
(1 ≤ p ≤ r), we arrive at a set B_p = {v_1, v_2, ..., v_t, u_{k_1}, u_{k_2}, ..., u_{k_p}} such that all
vectors of A are dependent on B_p. Thus B_p spans W. Since no vector in B_p is a
linear combination of the preceding vectors, B_p is linearly independent by Problem 13
of Exercises 1.2. Therefore B_p is a basis of W. ■
In the next example, we follow the preceding proof to extend a linearly independent 
set to a basis. 
Example 4 □ Given that
A = {(1,2,1,3), (1,0,0,0), (0,0,1,0), (0,1,0,1)}
is a basis of R^4 and that B = {(1,0,1,0), (0,2,0,3)} is linearly independent, we shall
extend B to a basis of R^4.
In keeping with the notational agreement made earlier in this section about indexing,
we assume that A and B are indexed with positive integers from left to right so that
the notation in the proof of Theorem 1.32 applies with
u_1 = (1,2,1,3), u_2 = (1,0,0,0), u_3 = (0,0,1,0), u_4 = (0,1,0,1)
and
v_1 = (1,0,1,0), v_2 = (0,2,0,3).
Following the proof of the theorem, we find that
(1,2,1,3) = (1,0,1,0) + (0,2,0,3)
and thus u_1 is dependent on B. By inspection, we see that
(1,0,0,0) = c_1(1,0,1,0) + c_2(0,2,0,3)
has no solution. Thus u_2 = (1,0,0,0) is not in (B), and k_1 = 2 is the smallest integer
such that u_{k_1} ∉ (B). Using the notation in the proof of the theorem, the set
B_1 = {v_1, v_2, u_2}
    = {(1,0,1,0), (0,2,0,3), (1,0,0,0)}
is linearly independent. We check now for a vector u_{k_2} in A that is not in (B_1). We
find that
(0,0,1,0) = (1)(1,0,1,0) + (0)(0,2,0,3) + (-1)(1,0,0,0)
and u_3 = (0,0,1,0) is in (B_1), but the equation
(0,1,0,1) = c_1(1,0,1,0) + c_2(0,2,0,3) + c_3(1,0,0,0)
has no solution. Thus u_4 = (0,1,0,1) is not in (B_1).
After two steps we have arrived at the set
B_2 = {v_1, v_2, u_2, u_4}
such that all vectors in A are dependent on B_2. According to the proof of Theorem 1.32,
this set B_2 is a basis of R^4. ■
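The extension process in the proof of Theorem 1.32 can be mechanized with the same
rank test used above. The Python/NumPy sketch below is our own illustration, not part
of the original text; it appends each u_i, in order, that is not in the span of the vectors
accumulated so far, and reproduces the result of Example 4.

    import numpy as np

    def extend_to_basis(B, A, tol=1e-10):
        # Append each vector of A (in order) that is not a linear
        # combination of the vectors accumulated so far.
        current = [np.array(v, dtype=float) for v in B]
        for u in A:
            u = np.array(u, dtype=float)
            trial = current + [u]
            if np.linalg.matrix_rank(np.array(trial), tol=tol) == len(trial):
                current.append(u)
        return current

    A = [(1, 2, 1, 3), (1, 0, 0, 0), (0, 0, 1, 0), (0, 1, 0, 1)]
    B = [(1, 0, 1, 0), (0, 2, 0, 3)]
    basis = extend_to_basis(B, A)
    # basis: v_1, v_2, u_2 = (1,0,0,0), u_4 = (0,1,0,1), as in Example 4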
Our last two theorems in this section apply to the very special situations where the 
number of vectors in a set is the same as the dimension r of the subspace involved. For 
sets of this special type, only one of the conditions for a basis needs to be checked. This 
is the substance of the following two theorems. 
Theorem 1.33 Let W be a subspace of R^n of dimension r. Then a set of r vectors in
W is a basis of W if and only if it is linearly independent.
Proof. If a set of r vectors in W is a basis, then it is linearly independent (and spans 
W as well) according to Definition 1.23. 
Let B = {v_1, v_2, ..., v_r} be a set of r linearly independent vectors in W. Then B
can be extended to a basis of W, by Theorem 1.32. Now W has a basis of r elements
since it is of dimension r, and hence all bases of W have r elements. In particular, the
basis to which B is extended has r elements, and therefore is the same as B. ■
Theorem 1.34 Let W be a subspace of R^n of dimension r. Then a set of r vectors in
W is a basis if and only if it spans W.
Proof. If a set of r vectors in W is a basis, then it spans W (and is linearly 
independent as well) by Definition 1.23. 
Suppose A = {v_1, v_2, ..., v_r} is a set of r vectors which spans W. According to Theorem
1.31, A contains a basis of W. But any basis of W contains r vectors. Therefore,
the basis contained in A is not a proper subset of A, and A is a basis of W. ■
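For W = R^n itself, these two theorems reduce the basis test for n given vectors to a
single independence check. The brief Python/NumPy check below is our own illustration,
not part of the original text, applied to the set of Problem 3(a) in the exercises that
follow.

    import numpy as np

    # Rows are the three vectors of Problem 3(a); by Theorem 1.33 they
    # form a basis of R^3 exactly when they are linearly independent,
    # that is, when the 3 x 3 matrix has full rank.
    A = np.array([(1, -2, 3), (0, 1, -2), (1, -1, 2)], dtype=float)
    print(np.linalg.matrix_rank(A) == 3)   # True, so A is a basis of R^3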
Exercises 1.5 
1. Given that each set A below spans R^3, find a basis of R^3 that is contained in A.
(Hint: Follow the proof of Theorem 1.31.)
(a) A = {(2,6,-3), (5,15,-8), (3,9,-5), (1,3,-2), (5,3,-2)}
(b) A = {(1,0,2), (0,1,1), (2,1,5), (1,1,3), (1,2,1)}
(c) A = {(1,1,0), (2,2,0), (2,4,1), (5,9,2), (7,13,3), (1,2,1)}
(d) A = {(1,1,2), (2,2,4), (1,-1,1), (2,0,3), (3,1,5), (1,1,1)}
2. Given that each set A is a basis of R^4 and that each B is linearly independent,
follow the proof of Theorem 1.32 to extend B to a basis of R^4.
(a) A = {(1,1,0,0), (0,1,1,0), (0,0,0,1), (0,1,0,1)},
B = {(1,0,2,3), (0,1,-2,-3)}
(b) A = {(1,0,0,0), (0,0,1,0), (5,1,11,0), (-4,0,-6,1)},
B = {(1,0,1,0), (0,2,0,3)}
(c) A = {(1,1,1,1), (1,1,-1,-1), (1,0,1,0), (0,1,0,-1)},
B = {(1,1,0,0), (0,0,1,1)}
(d) A = {(1,1,0,0), (1,0,4,6), (0,0,0,1), (0,1,0,1)},
B = {(1,0,2,3), (0,1,-2,-3)}
3. Show that A is a basis of R^3 by using Theorem 1.33.
(a) A = {(1,-2,3), (0,1,-2), (1,-1,2)}
(b) A = {(1,0,0), (1,1,0), (1,1,1)}
(c) A = {(2,0,0), (4,1,0), (3,3,1)}
(d) A = {(2,-1,1), (0,1,-1), (-2,1,0)}
4. Show that each of the sets A in Problem 3 is a basis of R^3 by using Theorem 1.34.
5. By direct use of the definition of a basis, show that each of the sets A in Problem
3 is a basis of R^3.
6. Which of the following sets of vectors in R^3 are linearly dependent?
(a) {(1,3,1), (1,3,0)}
(b) {(1,-1,0), (0,1,1), (1,1,1), (0,0,1)}
(c) {(1,1,0), (0,1,1), (1,2,1), (1,0,-1)}
(d) {(1,0,1), (0,1,1), (2,1,3)}
(e) {(1,0,0), (1,1,0), (1,1,1)}
(f) {(1,1,0), (0,1,-1), (1,0,0)}
7. Which of the sets of vectors in Problem 6 span R^3?
8. Which of the following sets are bases for R^3?
(a) {(1,0,0), (0,1,0), (0,0,1), (1,1,1)}
(b) {(1,0,0), (0,1,1)}
(c) {(1,0,0), (1,0,1), (1,1,1)}
(d) {(1,0,0), (0,1,0), (1,1,0)}
9. Determine whether or not A is a basis of R^4.
(a) A = {(1,2,1,0), (3,-4,5,6), (2,-1,3,3), (-2,6,-4,-6)}
(b) A = {(1,-1,2,-3), (1,1,2,0), (3,-1,6,-6), (0,2,0,3)}
(c) A = {(1,0,2,-1), (0,1,1,2), (1,2,1,4), (2,2,3,0)}
(d) A = {(1,2,1,-1), (0,1,2,3), (1,4,5,5), (2,7,0,2)}
10. Show that the sets A and B span the same subspace of R^4.
(a) A = {(1,1,3,-1), (1,0,-2,0)}, B = {(3,2,4,-2), (0,1,5,-1)}
(b) A = {(2,3,0,-1), (2,1,-1,2)}, B = {(0,-2,-1,3), (6,7,-1,0)}
(c) A = {(1,0,3,0), (1,1,8,4)}, B = {(1,-1,-2,-4), (1,1,8,4), (3,-1,4,-4)}
(d) A = {(1,-1,-1,-2), (1,-5,-1,0)},
B = {(0,2,0,-1), (1,-3,-1,-1), (3,-5,-3,-5)}
11. Find a basis of (A) that is contained in A.
(a) A = {(1,0,1,-1), (3,-2,3,5), (2,-1,2,2), (5,-2,5,3)}
(b) A = {(1,0,1,2), (3,1,0,3), (2,1,-1,1), (1,-1,4,5)}
(c) A = {(2,-1,0,1), (1,2,1,0), (3,-4,-1,2), (5,3,2,1)}
(d) A = {(1,0,1,1), (0,1,-1,1), (1,-1,2,0), (2,1,1,7)}
12. Find the dimension of (A).
(a) A = {(1,2,1,0), (3,-4,5,6), (2,-1,3,3), (-2,6,-4,-6)}
(b) A = {(4,3,2,-1), (5,4,3,-1), (-2,-2,-1,2), (11,6,4,1)}
(c) A = {(1,0,1,2,-1), (0,1,-2,1,3), (2,1,0,5,1), (1,-1,3,1,-4)}
(d) A = {(1,2,0,1,0), (2,4,1,4,3), (1,2,2,5,-2), (-1,-2,3,5,4)}
13. Prove that if W_1 and W_2 are subspaces of R^n, then
dim(W_1 + W_2) = dim(W_1) + dim(W_2) - dim(W_1 ∩ W_2).
(Hint: Let C = {w_1, ..., w_r} be a basis of W_1 ∩ W_2, and extend C to bases
A = {w_1, ..., w_r, u_1, ..., u_s} and B = {w_1, ..., w_r, v_1, ..., v_t} of W_1 and W_2, respectively.
Prove that {w_1, ..., w_r, u_1, ..., u_s, v_1, ..., v_t} is a basis of W_1 + W_2.)
14. The sum W_1 + W_2 + ... + W_k = Σ_{j=1}^{k} W_j of subspaces W_j of R^n is defined to
be the set of all vectors of the form v_1 + v_2 + ... + v_k, with v_j in W_j. The sum
Σ_{j=1}^{k} W_j is called direct if
W_i ∩ (Σ_{j=1, j≠i}^{k} W_j) = {0}
for i = 1, 2, ..., k. A direct sum is written as W_1 ⊕ W_2 ⊕ ... ⊕ W_k. Prove that
dim(W_1 ⊕ W_2 ⊕ ... ⊕ W_k) = Σ_{j=1}^{k} dim(W_j).
Chapter 2
Elementary Operations on Vectors
2.1 Introduction 
The elementary operations are as fundamental in linear algebra as the operations of 
differentiation and integration are in the calculus. These elementary operations are 
indispensable both in the development of the theory of linear algebra and in the
applications of this theory.
In many treatments of linear algebra, the elementary operations are introduced after 
the development of a certain amount of matrix theory, and the matrix theory is used as 
a tool in establishing the properties of the elementary operations. In the presentation 
here, this procedure is reversed somewhat. The elementary operations are introduced 
as operations on sets of vectors and many of the results in matrix theory are developed 
with the aid of our knowledge of elementary operations. This approach has two main 
advantages. The material in Chapter 1 can be used to efficiently develop several of the 
properties of elementary operations, and the statements of many of these properties are 
simpler when formulated in vector terminology. 
2.2 Elementary Operations and Their Inverses
We saw in the proof of Theorem 1.29 that R^n has the standard basis E_n = {e_1, e_2, ..., e_n}
in which each vector e_i has a very simple form. The Kronecker delta that was introduced
in Exercises 1.2 can be used to describe the vectors e_i in a concise way. Using the fact
that
δ_ij = 1 if i = j, and δ_ij = 0 if i ≠ j,
the vectors e_i can be written as
e_i = (δ_i1, δ_i2, ..., δ_in)
for i = 1, 2, ..., n.
Example 1 □ The vectors in the standard basis E_4 = {e_1, e_2, e_3, e_4} are given by
e_1 = (δ_11, δ_12, δ_13, δ_14) = (1,0,0,0), e_2 = (δ_21, δ_22, δ_23, δ_24) = (0,1,0,0),
e_3 = (δ_31, δ_32, δ_33, δ_34) = (0,0,1,0), e_4 = (δ_41, δ_42, δ_43, δ_44) = (0,0,0,1). ■
We shall see in this chapter that every subspace W of R^n has a basis with a certain
simple form, and that particular basis is called the standard basis of W. In order to
develop the concept of this standard basis, we shall need certain operations that change
one spanning set of W into another spanning set of W.
More specifically, we define three types of elementary operations on nonempty ordered
finite sets of vectors. These types of elementary operations will be referred to
hereafter as types I, II, or III. Let A = {v_1, v_2, ..., v_k} be a set of vectors in R^n.
(i) An elementary operation of type I multiplies one of the v_i in A by a nonzero scalar.
(ii) An elementary operation of type II replaces one of the vectors v_s by the sum of
v_s and a scalar multiple of a v_t (s ≠ t) in A.
(iii) An elementary operation of type III interchanges two vectors in A.
If the number 1 is used as the scalar in an elementary operation of type I, the 
resulting elementary operation is called the identity operation. That is, the identity 
operation on a set is that operation that leaves the set unchanged. 
Example 2 □ Consider the set of vectors
A = {v_1 = (1,0,2), v_2 = (2,3,19), v_3 = (0,1,5)}.
Multiplication of the first vector in A by 2 is an elementary operation of type I that
yields the set
A_1 = {v_11 = (2,0,4), v_12 = (2,3,19), v_13 = (0,1,5)}.
If the vector v_12 in A_1 is replaced by v_12 + (-3)v_13, we have an elementary operation
of type II that yields the set
A_2 = {v_21 = (2,0,4), v_22 = (2,0,4), v_23 = (0,1,5)}.
An interchange of v_22 and v_23 is an elementary operation of type III that produces
A_3 = {v_31 = (2,0,4), v_32 = (0,1,5), v_33 = (2,0,4)}.
Application of an elementary operation of type II that replaces v_33 by v_33 + (-1)v_31
gives
A_4 = {v_41 = (2,0,4), v_42 = (0,1,5), v_43 = (0,0,0)}. ■
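The three types of elementary operations are simple enough to state directly in code.
The Python/NumPy sketch below is our own illustration, not part of the original text;
it uses 0-based indices and reproduces the sequence of operations in Example 2.

    import numpy as np

    def op_type_I(vectors, i, c):
        # Type I: multiply the i-th vector by the nonzero scalar c.
        assert c != 0
        out = [v.copy() for v in vectors]
        out[i] = c * out[i]
        return out

    def op_type_II(vectors, s, t, c):
        # Type II: replace v_s by v_s + c * v_t, where s != t.
        assert s != t
        out = [v.copy() for v in vectors]
        out[s] = out[s] + c * out[t]
        return out

    def op_type_III(vectors, s, t):
        # Type III: interchange v_s and v_t.
        out = [v.copy() for v in vectors]
        out[s], out[t] = out[t], out[s]
        return out

    A = [np.array(v, dtype=float) for v in [(1, 0, 2), (2, 3, 19), (0, 1, 5)]]
    A1 = op_type_I(A, 0, 2)        # multiply the first vector by 2
    A2 = op_type_II(A1, 1, 2, -3)  # replace v_12 by v_12 + (-3)v_13
    A3 = op_type_III(A2, 1, 2)     # interchange v_22 and v_23
    A4 = op_type_II(A3, 2, 0, -1)  # replace v_33 by v_33 + (-1)v_31
    # A4 consists of (2,0,4), (0,1,5), (0,0,0), as in Example 2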
As Example 2 above suggests, the application of a sequence of elementary operations
can be used to replace a given set A by a set A′ which has a simpler appearance.
Later in this chapter, we shall turn to an investigation of the properties that A and A′
have in common. When these properties are known, we shall see that the elementary
operations can be chosen
