Vector Space Study Guide
📖 Core Concepts
Vector space – a non‑empty set \(V\) with vector addition \((u,v)\mapsto u+v\) and scalar multiplication \((a,v)\mapsto av\) satisfying the eight axioms (closure, associativity, commutativity, zero vector, additive inverses, compatibility, identity, distributivity).
Field \(F\) – the set of scalars (e.g., \(\mathbb R\) for real spaces, \(\mathbb C\) for complex spaces).
Subspace – a non‑empty subset \(W\subseteq V\) closed under addition and scalar multiplication; \((W,+)\) is itself a vector space.
Span – all finite linear combinations of a set \(S\); the smallest subspace containing \(S\).
Basis – a linearly independent set whose span is the whole space; coordinates of any vector are the unique scalars in its linear combination of basis vectors.
Dimension – the cardinality of a basis; finite if some basis is finite (all bases of a space have the same cardinality), otherwise infinite.
Linear map (homomorphism) – \(T:V\to W\) with \(T(u+v)=T(u)+T(v)\) and \(T(av)=aT(v)\).
Isomorphism – bijective linear map; two spaces are isomorphic ⇔ they have the same dimension.
Matrix representation – once bases of \(V\) and \(W\) are fixed, \(T\) corresponds to a unique matrix \(A\) with \([T(v)]_{\mathcal B_W} = A\,[v]_{\mathcal B_V}\).
Determinant & invertibility – \(\det(A)\neq0\) ⇔ the associated linear map is an isomorphism (invertible).
Eigenpair – non‑zero \(v\) with \(T(v)=\lambda v\); the set of all such \(v\) (plus \(0\)) is the eigenspace for \(\lambda\).
Normed space – a vector space with a norm \(\|\cdot\|\) satisfying positivity, scalability \(\|av\|=|a|\|v\|\), and the triangle inequality.
Inner product space – equipped with \(\langle\cdot,\cdot\rangle\) giving a norm \(\|v\|=\sqrt{\langle v,v\rangle}\) and satisfying linearity, symmetry (or conjugate symmetry), and positivity.
Hilbert space – a complete inner-product space; here a basis (orthonormal basis) is a set whose span is dense, i.e. the closure of its span equals the whole space.
---
📌 Must Remember
Vector‑space axioms (list them quickly): closure, associativity, commutativity, zero, additive inverse, scalar‑multiplication compatibility, identity \(1\cdot v=v\), distributivity over both vector and scalar addition.
Dimension theorem: two finite‑dimensional spaces are isomorphic iff they have the same dimension.
Rank‑nullity: \(\displaystyle \dim V = \operatorname{rank}(T)+\operatorname{nullity}(T)\) for a linear map \(T:V\to W\) with finite‑dimensional domain \(V\); useful for computing dimensions of subspaces.
Determinant test: \(\det A\neq0 \iff A\) invertible \(\iff\) linear map is an isomorphism.
Eigenvalue criterion: \(\lambda\) eigenvalue ⇔ \(\det(T-\lambda I)=0\).
Gram–Schmidt: converts any linearly independent set \(\{v_1,\dots,v_k\}\) into an orthogonal (or orthonormal) set \(\{u_1,\dots,u_k\}\) by successive orthogonalization.
Quotient space \(V/W\): elements are cosets \(v+W\); \(\dim(V/W)=\dim V-\dim W\) (finite‑dimensional case).
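As a concrete instance of the dimension formula (a small worked example, not from the text):

```latex
% Collapse the z-axis in R^3: each coset is a vertical line,
% determined by its first two coordinates.
\[
V=\mathbb R^{3},\qquad W=\operatorname{span}\{e_{3}\},\qquad
v+W=\{(v_{1},\,v_{2},\,t):t\in\mathbb R\},
\]
\[
\dim(V/W)=\dim V-\dim W=3-1=2.
\]
```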
---
🔄 Key Processes
Checking a Subspace
Verify non‑emptiness (usually \(0\in W\)).
Show closed under addition: if \(u,v\in W\) then \(u+v\in W\).
Show closed under scalar multiplication: \(a u\in W\) for any \(a\in F\).
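The three checks can be spot-checked numerically. A minimal Python sketch, assuming the plane \(x+y-2z=0\) in \(\mathbb R^3\) as the candidate subspace (random sampling is evidence, not a proof — an actual proof needs the algebraic argument):

```python
import random

# Assumed example: W = {(x, y, z) in R^3 : x + y - 2z = 0}.
def in_W(v):
    return abs(v[0] + v[1] - 2 * v[2]) < 1e-9

def random_W_vector():
    # Parametrize W: pick y, z freely, solve x = 2z - y.
    y, z = random.uniform(-5, 5), random.uniform(-5, 5)
    return (2 * z - y, y, z)

assert in_W((0.0, 0.0, 0.0))          # non-empty: contains the zero vector
for _ in range(1000):                 # spot-check the two closure conditions
    u, v = random_W_vector(), random_W_vector()
    a = random.uniform(-5, 5)
    assert in_W(tuple(ui + vi for ui, vi in zip(u, v)))   # u + v in W
    assert in_W(tuple(a * ui for ui in u))                # a*u in W
print("closure spot-checks passed")
```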
Finding a Basis
Start with a spanning set.
Apply row reduction (Gaussian elimination); the pivot columns identify a linearly independent subset.
The original vectors in those pivot positions form a basis.
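A sketch of this pivot-column procedure in Python, using exact rational arithmetic; the spanning set `S` is an illustrative choice:

```python
from fractions import Fraction

def pivot_columns(vectors):
    """Row-reduce the matrix whose COLUMNS are the given vectors;
    the pivot columns index a linearly independent subset."""
    n = len(vectors[0])  # ambient dimension
    m = [[Fraction(vectors[j][i]) for j in range(len(vectors))] for i in range(n)]
    pivots, r = [], 0
    for c in range(len(vectors)):
        piv = next((i for i in range(r, n) if m[i][c] != 0), None)
        if piv is None:
            continue          # no pivot in this column: vector is redundant
        m[r], m[piv] = m[piv], m[r]
        for i in range(n):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

# Spanning set for a subspace of R^3; the third vector = first + second.
S = [(1, 0, 1), (0, 1, 1), (1, 1, 2), (0, 0, 1)]
cols = pivot_columns(S)
basis = [S[c] for c in cols]
print(basis)   # [(1, 0, 1), (0, 1, 1), (0, 0, 1)]
```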
Gram–Schmidt Orthogonalization
Set \(u_1 = v_1\).
For \(k\ge2\): \(u_k = v_k - \sum_{j=1}^{k-1}\frac{\langle v_k,u_j\rangle}{\langle u_j,u_j\rangle}\,u_j\).
Optional: normalize \(e_k = u_k/\|u_k\|\) for an orthonormal basis.
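The recursion above, sketched in plain Python (the two input vectors are an assumed example):

```python
def gram_schmidt(vectors):
    """Orthogonalize a linearly independent list of vectors."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    us = []
    for v in vectors:
        w = list(v)
        for u in us:
            # subtract the projection of v onto each earlier u_j
            coeff = dot(v, u) / dot(u, u)
            w = [wi - coeff * ui for wi, ui in zip(w, u)]
        us.append(w)
    return us

u1, u2 = gram_schmidt([(1.0, 1.0), (1.0, 0.0)])
print(u1, u2)                    # [1.0, 1.0] [0.5, -0.5]
norm = lambda u: sum(x * x for x in u) ** 0.5
e = [[x / norm(u) for x in u] for u in (u1, u2)]   # optional normalization
```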
Matrix of a Linear Map
Choose bases \(\mathcal B_V=\{v_1,\dots,v_n\}\) of \(V\) and \(\mathcal B_W=\{w_1,\dots,w_m\}\) of \(W\).
Compute \(T(v_j)\) for each \(j\) and express it in \(\mathcal B_W\); column \(j\) of the matrix \(A\) is the coordinate vector of \(T(v_j)\).
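A small illustration in Python, assuming \(T = d/dx\) on polynomials of degree \(\le 2\) with basis \(\{1, x, x^2\}\) for both domain and codomain:

```python
# A polynomial a + b*x + c*x^2 is stored as its coordinate vector (a, b, c).
def T(coords):
    a, b, c = coords
    return (b, 2 * c, 0)   # derivative of a + b x + c x^2 is b + 2c x

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]   # coordinates of 1, x, x^2
columns = [T(v) for v in basis]             # column j = [T(v_j)] in the basis
A = [[columns[j][i] for j in range(3)] for i in range(3)]  # assemble by rows
for row in A:
    print(row)
# [0, 1, 0]
# [0, 0, 2]
# [0, 0, 0]
```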
Eigenvalue/Eigenspace Computation
Solve \(\det(T-\lambda I)=0\) for \(\lambda\).
For each \(\lambda\), solve \((T-\lambda I)v=0\) to get the eigenspace.
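For a \(2\times2\) matrix the characteristic equation is the quadratic \(\lambda^2-\operatorname{tr}(A)\lambda+\det(A)=0\); a Python sketch with an assumed symmetric example:

```python
import math

A = [[2.0, 1.0],
     [1.0, 2.0]]
tr = A[0][0] + A[1][1]
dt = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr * tr - 4 * dt)          # assumes real eigenvalues
lams = sorted([(tr - disc) / 2, (tr + disc) / 2])
print(lams)                                  # [1.0, 3.0]

# Eigenspace for lam: solve (A - lam*I)v = 0.  For a 2x2 matrix,
# v = (-b, a - lam) works whenever row (a - lam, b) of A - lam*I is nonzero.
for lam in lams:
    v = (-A[0][1], A[0][0] - lam)
    Av = (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])
    assert abs(Av[0] - lam * v[0]) < 1e-9 and abs(Av[1] - lam * v[1]) < 1e-9
```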
Constructing a Quotient Space
Identify subspace \(W\).
Form cosets \(v+W\).
Define addition and scalar multiplication on cosets by \((u+W)+(v+W)=(u+v)+W\) and \(a(v+W)=av+W\); these are well defined because \(W\) is a subspace.
---
🔍 Key Comparisons
Vector space vs. Module
Vector space: scalars form a field (every non‑zero scalar invertible).
Module: scalars form a ring (may lack inverses); bases need not exist.
Real vs. Complex vector space
Real: scalars \(\mathbb R\); inner products are symmetric.
Complex: scalars \(\mathbb C\); inner product is conjugate‑symmetric \(\langle u,v\rangle=\overline{\langle v,u\rangle}\).
Subspace vs. Affine subspace
Subspace: contains the zero vector, closed under addition and scalar multiplication.
Affine subspace: a translate \(x+W\) of a subspace \(W\); it contains the origin only when the translation vector \(x\) lies in \(W\).
Basis (finite) vs. Hilbert‑space basis
Finite: every vector is a finite linear combination of basis vectors.
Hilbert: closure of linear span (including limits of infinite series) equals the whole space.
Isomorphism vs. Similarity (matrices)
Isomorphism: existence of a bijective linear map between spaces (dimension equality).
Similarity: two matrices represent the same linear map in different bases; \(A = P^{-1}BP\).
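A quick numerical check that similar matrices share trace and determinant, the coefficients of the characteristic polynomial (the matrices `B` and `P` are illustrative choices):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

B = [[2, 1], [0, 3]]
P = [[1, 1], [1, 2]]              # change-of-basis matrix, det = 1
Pinv = [[2, -1], [-1, 1]]         # inverse of P
A = matmul(Pinv, matmul(B, P))    # A = P^{-1} B P

trace = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(trace(A), det(A))           # 5 6 — same as trace(B), det(B)
```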
---
⚠️ Common Misunderstandings
“Any spanning set is a basis.”
Missing linear independence; need both properties.
“Zero vector can be an eigenvector.”
Eigenvectors are non‑zero by definition; the zero vector always satisfies \(T(0)=\lambda 0\) trivially but is excluded.
“Determinant zero ⇒ matrix not invertible ⇒ linear map not injective.”
Correct, but remember the converse: non‑zero determinant guarantees both injective and surjective (isomorphism) for square matrices.
“All modules have bases.”
Only free modules have bases; many modules are non‑free.
“Normed space automatically inner‑product space.”
False; a norm need not arise from an inner product (e.g., \(\ell^1\) norm).
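One standard way to see this: every inner-product-induced norm satisfies the parallelogram law \(\|u+v\|^2+\|u-v\|^2=2\|u\|^2+2\|v\|^2\), and the \(\ell^1\) norm fails it. A Python check:

```python
# l^1 norm on R^2: sum of absolute values of the coordinates.
l1 = lambda v: sum(abs(x) for x in v)

u, v = (1, 0), (0, 1)
lhs = l1((1, 1)) ** 2 + l1((1, -1)) ** 2   # ||u+v||^2 + ||u-v||^2 = 4 + 4
rhs = 2 * l1(u) ** 2 + 2 * l1(v) ** 2      # 2*1 + 2*1 = 4
print(lhs, rhs)   # 8 4 — law fails, so no inner product induces l^1
```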
---
🧠 Mental Models / Intuition
Vector space = “playground” where you can add arrows and stretch/compress them with scalars—everything obeys the same simple rules (the eight axioms).
Basis = coordinate axes: just like \((x,y,z)\) in \(\mathbb R^3\), a basis lets you encode any vector as a list of numbers (coordinates).
Quotient space = “collapsing” a subspace to a single point; think of gluing all vectors of \(W\) together, leaving cosets as new points.
Eigenvectors = “directions that don’t turn” under a transformation; the transformation only scales them by \(\lambda\).
Gram–Schmidt = “orthogonalizing” a messy set of directions step by step, like adjusting a set of skewed axes to become perpendicular.
---
🚩 Exceptions & Edge Cases
Infinite‑dimensional spaces may lack a finite basis; concepts like dimension become cardinal numbers.
Norms not induced by inner products (e.g., \(\|x\|_1\) on \(\mathbb R^n\)).
Non‑free modules have no basis; the structure theorem for finitely generated modules over a PID provides a decomposition instead.
Determinant of non‑square matrix is undefined; invertibility only makes sense for square matrices.
A quotient \(V/W\) can be infinite‑dimensional even when the subspace \(W\) is infinite‑dimensional; the formula \(\dim(V/W)=\dim V-\dim W\) applies only in the finite‑dimensional case.
---
📍 When to Use Which
To prove two vector spaces are “the same” → check dimensions; construct an explicit isomorphism if needed.
To solve a system of linear equations → use row‑reduction to find a basis for the solution subspace (homogeneous) or a particular solution + nullspace (inhomogeneous).
When you need orthogonal coordinates → apply Gram–Schmidt, especially in inner‑product or Hilbert spaces.
To decide if a linear map is invertible → compute \(\det\) (square matrix) or check rank = dimension of domain.
For spectral analysis (eigenvalues/eigenvectors) → use characteristic polynomial \(\det(T-\lambda I)=0\).
If a problem involves “directions up to translation” → model with an affine subspace rather than a subspace.
When working over a ring rather than a field → treat the structure as a module; avoid assuming existence of bases.
---
👀 Patterns to Recognize
Zero vector appearing in a set → the set is automatically linearly dependent.
Closed under addition & scalar multiplication → hallmark of a subspace; check quickly in multiple‑choice.
Determinant zero in a square matrix → expect non‑invertibility, non‑trivial kernel, possible eigenvalue \(0\).
Characteristic polynomial factoring → each linear factor \((\lambda-\lambda_i)\) signals an eigenvalue \(\lambda_i\).
Repeated columns/rows → hint at linear dependence ⇒ dimension drop.
Inner product symmetry/positivity violation → not a valid inner product (common distractor).
---
🗂️ Exam Traps
“All norms come from an inner product.” – False; only norms satisfying the parallelogram law are inner‑product‑induced.
“If a set spans \(V\) then it is a basis.” – Missing independence; many spanning sets are larger than needed.
Choosing the wrong basis for a matrix representation – forgetting to express \(T(v_j)\) in the target basis leads to swapped rows/columns.
Confusing eigenvectors with eigenvalues – answer choices may list a scalar where a vector is required.
Assuming a quotient space \(V/W\) has dimension \(\dim V - \dim W\) for infinite‑dimensional spaces – the formula holds only when \(\dim V\) is finite.
Misreading “orthogonal basis” as “orthonormal basis.” – orthogonal need not be unit length; extra normalization step required.
Treating a module over a non‑field as a vector space – e.g., \(\mathbb Z\)-module \(\mathbb Z^n\) lacks scalar inverses, so concepts like basis behave differently.
---