Linear Algebra Study Guide
📖 Core Concepts
Vector space – a set with addition & scalar multiplication satisfying the eight axioms (abelian group for addition, distributivity, etc.).
Linear map (transformation) – preserves addition and scalar multiplication: $f(u+v)=f(u)+f(v)$, $f(av)=a\,f(v)$.
Subspace – a subset closed under the same addition & scalar multiplication as its parent space.
Span – all linear combinations of a set $S$; the smallest subspace containing $S$.
Linear independence – no non‑trivial combination of the vectors equals the zero vector.
Basis & dimension – a linearly independent spanning set; its size = the dimension, unique for a given space.
Matrix representation – once a basis is chosen, vectors ↔ coordinate columns; a linear map ↔ a matrix whose columns are images of basis vectors.
Gaussian elimination – row‑operations → reduced row‑echelon form (RREF) → rank, kernel, inverse.
Determinant – scalar $\det(M)$; $\det(M)\neq0$ ⇔ $M$ invertible.
Eigenpair – $f(v)=\lambda v$ with $v\neq0$; $\lambda$ solves $\det(M-\lambda I)=0$.
Diagonalizable – there exists a basis of eigenvectors ⇔ $M$ similar to a diagonal matrix of eigenvalues (equivalently, the minimal polynomial splits into distinct linear factors; the characteristic polynomial may have repeated roots).
Inner product – $\langle u,v\rangle$ bilinear (or sesquilinear) satisfying symmetry, linearity, positive‑definiteness; yields norm $\|v\|=\sqrt{\langle v,v\rangle}$.
Orthonormal basis – basis vectors have unit length and are mutually orthogonal; constructed by Gram–Schmidt.
---
📌 Must Remember
Vector‑space axioms (closure, associativity, zero vector, inverses, distributivity, etc.).
Linear map test: $f(u+v)=f(u)+f(v)$ and $f(av)=a f(v)$.
Basis ⇔ span + independence; all bases have the same cardinality = dimension.
Rank–nullity theorem (implicit): $\dim(\operatorname{Im}f)+\dim(\ker f)=\dim V$.
Invertibility: $M$ invertible ↔ $\det(M)\neq0$ ↔ $M$ has full rank ↔ unique solution $\mathbf{x}=M^{-1}\mathbf{b}$.
Cramer’s rule: $x_i=\dfrac{\det(M_i)}{\det(M)}$, where $M_i$ replaces column $i$ of $M$ with $\mathbf{b}$.
Characteristic polynomial: $p(\lambda)=\det(M-\lambda I)$.
Diagonalizable ⇔ enough independent eigenvectors (i.e., geometric multiplicity = algebraic multiplicity for each eigenvalue).
Gram–Schmidt steps: orthogonalize, then normalize.
Normal operator: $T T^{*}=T^{*} T$ → admits an orthonormal eigenbasis (spectral theorem).
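The rank–nullity theorem above is easy to check numerically; a minimal NumPy sketch (the matrix `A` is an illustrative example, with the null-space dimension read off the singular values):

```python
import numpy as np

# A maps R^4 -> R^3; rank-nullity says rank(A) + dim(ker A) = 4.
A = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first + second

rank = np.linalg.matrix_rank(A)

# dim(ker A) = number of columns minus the count of non-zero singular values.
n = A.shape[1]
s = np.linalg.svd(A, compute_uv=False)
nullity = n - int(np.sum(s > 1e-10))

print(rank, nullity)      # 2 2
assert rank + nullity == n
```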
---
🔄 Key Processes
Gaussian elimination to RREF
Swap rows, multiply a row by non‑zero scalar, add multiple of one row to another.
Continue until each leading entry is 1 and is the only non‑zero entry in its column.
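These row operations are exactly what SymPy's `Matrix.rref()` performs; a quick illustration on an example matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, -1],
               [2, 4,  0],
               [3, 6,  1]])

# rref() returns the reduced row-echelon form and the pivot column indices.
R, pivots = A.rref()
print(R)        # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivots)   # (0, 2) -> rank 2
```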
Finding kernel & image
Reduce $[A\mid 0]$ to RREF; free variables → basis for $\ker A$.
Pivot columns of original matrix → basis for $\operatorname{Im}A$.
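The free-variable / pivot-column recipe above is automated by SymPy's `nullspace()` and `columnspace()`; a sketch on an example matrix:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 1, 1]])

ker = A.nullspace()      # basis of ker A (one vector per free variable)
img = A.columnspace()    # pivot columns of the original A

print(len(ker), len(img))   # 1 2  (nullity and rank)
print(ker[0].T)             # kernel basis vector, e.g. [1, -2, 1]
```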
Cramer's rule (for $n\times n$ system)
Compute $\det(A)$.
For each variable $x_i$, form $A_i$ (replace the $i$‑th column with $\mathbf{b}$) and compute $\det(A_i)$.
$x_i=\det(A_i)/\det(A)$.
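The three steps translate directly into code; a minimal NumPy sketch (the helper name `cramer_solve` and the test system are illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule (assumes det(A) != 0)."""
    d = np.linalg.det(A)
    n = len(b)
    x = np.empty(n)
    for i in range(n):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i with b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2., 1.], [1., 3.]])
b = np.array([3., 5.])
print(cramer_solve(A, b))   # [0.8 1.4]
```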
Eigenvalue/eigenvector computation
Form $M-\lambda I$, compute $\det(M-\lambda I)=0$ → eigenvalues.
For each $\lambda$, solve $(M-\lambda I)\mathbf{v}=0$ → eigenvectors.
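In practice both steps are bundled into one library call; a sketch with `np.linalg.eig` on an example matrix, verifying $Mv=\lambda v$ for each pair:

```python
import numpy as np

M = np.array([[4., 1.],
              [2., 3.]])

# eig returns the eigenvalues and unit eigenvectors (as columns).
eigvals, eigvecs = np.linalg.eig(M)

# Verify M v = lambda v for each eigenpair.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(M @ v, lam * v)

# Characteristic polynomial l^2 - 7l + 10 = (l - 5)(l - 2).
print(np.sort(eigvals))   # [2. 5.]
```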
Diagonalization
Find all eigenvalues & eigenvectors.
If you obtain $n$ independent eigenvectors, assemble $P$ (columns = eigenvectors).
Compute $P^{-1}MP = \operatorname{diag}(\lambda_1,\dots,\lambda_n)$.
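Continuing the same example matrix, assembling $P$ from eigenvector columns and conjugating gives the diagonal form:

```python
import numpy as np

M = np.array([[4., 1.],
              [2., 3.]])

eigvals, P = np.linalg.eig(M)   # columns of P are the eigenvectors

# With n independent eigenvectors, P is invertible and P^{-1} M P is diagonal.
D = np.linalg.inv(P) @ M @ P
assert np.allclose(D, np.diag(eigvals))

# Equivalently, M factors as P D P^{-1}.
assert np.allclose(P @ np.diag(eigvals) @ np.linalg.inv(P), M)
```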
Gram–Schmidt (for $\{v_1,\dots,v_k\}$)
$u_1 = v_1$; $e_1 = u_1/\|u_1\|$.
$u_2 = v_2 - \langle v_2,e_1\rangle e_1$; $e_2 = u_2/\|u_2\|$, etc.
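The recursion above can be sketched in a few lines of NumPy (the helper name `gram_schmidt` is illustrative; dependent inputs are skipped rather than producing a zero vector):

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Orthonormalize a list of vectors; dependent vectors are skipped."""
    basis = []
    for v in vectors:
        # Subtract the projections onto all previously accepted directions.
        u = v - sum(np.dot(v, e) * e for e in basis)
        norm = np.linalg.norm(u)
        if norm > tol:                    # dependent input -> u is (near) zero
            basis.append(u / norm)
    return basis

vs = [np.array([1., 1., 0.]), np.array([1., 0., 1.])]
e1, e2 = gram_schmidt(vs)
assert abs(np.dot(e1, e2)) < 1e-12          # orthogonal
assert np.isclose(np.linalg.norm(e1), 1.0)  # unit length
```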
---
🔍 Key Comparisons
Linear independence vs. spanning
Independence: no vector is a combination of the others.
Spanning: every vector in the space can be expressed as a combination of the set.
Homogeneous vs. non‑homogeneous system
Homogeneous: $\mathbf{b}=0$ → solution set = kernel, always at least the trivial solution.
Non‑homogeneous: $\mathbf{b}\neq0$ → solution = particular solution + kernel.
Similarity vs. Equality of matrices
Similarity: $B = P^{-1}AP$ for invertible $P$ → same linear transformation in different bases.
Equality: same entries, same basis.
Diagonalizable vs. defective matrix
Diagonalizable: enough eigenvectors → $A = PDP^{-1}$.
Defective: missing eigenvectors → cannot be diagonalized; may need Jordan form (outside this outline).
Determinant vs. rank
$\det(A)\neq0$ ⇔ $\operatorname{rank}(A)=n$ (full rank).
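Both sides of this equivalence are one call away in NumPy; a quick check on an illustrative singular matrix:

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.]])   # second row is twice the first

print(np.linalg.det(A))           # 0 (up to floating-point error)
print(np.linalg.matrix_rank(A))   # 1 < 2, so A is singular
```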
---
⚠️ Common Misunderstandings
“If $\det(M)=0$, the matrix has no inverse and no eigenvalues.” – The first part is true, but singular matrices do have eigenvalues: $0$ is always an eigenvalue of a singular matrix.
“Similar matrices have the same eigenvectors.” – False; similarity preserves eigenvalues, but eigenvectors change with the basis: if $B=P^{-1}AP$ and $Av=\lambda v$, then $B(P^{-1}v)=\lambda(P^{-1}v)$.
“A basis must consist of unit vectors.” – Unit length is not required; orthonormal bases are a special case.
“Gaussian elimination always gives the inverse.” – Reducing $[A\mid I]$ yields $[I\mid A^{-1}]$ only when $A$ is invertible; otherwise a zero row appears on the left.
“If the characteristic polynomial has repeated roots, the matrix is not diagonalizable.” – Repeated roots can be diagonalizable if there are enough independent eigenvectors (geometric multiplicity equals algebraic multiplicity).
---
🧠 Mental Models / Intuition
Vector space as a “playground”: any linear combination of allowed moves stays inside the playground.
Kernel = “lost directions”: vectors that get flattened to zero by the map – think of directions that disappear under a projection.
Image = “reachable destinations”: outputs you can actually get from the map.
Determinant as “volume scaling”: $\det(M)$ tells how the unit cube’s volume changes; zero volume → collapse → non‑invertible.
Eigenvectors as “steady directions”: the map stretches/compresses but does not rotate these directions.
Similarity = “change of glasses”: you see the same transformation, just described in a different coordinate system.
---
🚩 Exceptions & Edge Cases
Zero matrix: determinant $0$, every non‑zero vector is an eigenvector with eigenvalue $0$ (eigenvectors must be non‑zero by definition), and the matrix is not invertible.
Repeated eigenvalues: diagonalizable only if the geometric multiplicity equals the algebraic multiplicity.
Non‑square matrices: have no determinant, no eigenvalues; only concepts of rank, kernel, image apply.
Non‑invertible homogeneous system: may have infinitely many solutions (kernel dimension > 0).
Gram–Schmidt with dependent inputs: if the original set is linearly dependent, some step produces the zero vector (which cannot be normalized); skipping those vectors yields an orthonormal basis of the span, with fewer vectors than the input.
---
📍 When to Use Which
Solve $A\mathbf{x}=\mathbf{b}$
If $A$ is small (at most $3\times3$) and $\det(A)\neq0$: use Cramer’s rule for quick hand calculation.
If $A$ is larger or $\det(A)=0$: apply Gaussian elimination to determine existence/uniqueness.
Find a basis for a subspace
Use row reduction on the spanning set’s matrix; pivot columns give a basis.
Determine diagonalizability
Compute characteristic polynomial → factor → count eigenvalues.
For each eigenvalue, find eigenvectors; compare total count to $n$.
Orthonormalize vectors
Apply Gram–Schmidt when you need an orthonormal basis (e.g., for projections, QR factorization).
Check invertibility quickly
Compute $\det(A)$ (if feasible) or reduce to RREF and look for a pivot in every row/column.
---
👀 Patterns to Recognize
Row of zeros in RREF → dependent equations → either no solution or infinitely many (look at augmented column).
Pivot in every column of $A$ → full column rank → columns form a basis of $\operatorname{Im}A$.
Zero determinant ⇔ row reduction produces a zero row, i.e. the rows (equivalently, columns) are linearly dependent.
Characteristic polynomial with factor $(\lambda-\lambda_i)^k$ → eigenvalue $\lambda_i$ with algebraic multiplicity $k$.
Inner‑product zero between two vectors → orthogonal; often appears in projection problems.
---
🗂️ Exam Traps
Choosing the wrong matrix for Cramer’s rule – forgetting to replace the correct column with $\mathbf{b}$.
Confusing similarity with equality – a matrix similar to a diagonal one is not itself diagonal unless the change‑of‑basis matrix is the identity.
Assuming any $n$ eigenvectors guarantee diagonalizability – they must be linearly independent; for an $n\times n$ matrix, $n$ independent eigenvectors automatically form a basis.
Misreading “orthogonal” vs. “orthonormal.” – orthogonal vectors may not have unit length; forgetting to normalize leads to incorrect projection formulas.
Overlooking the homogeneous part – when solving $A\mathbf{x}=\mathbf{b}$, forgetting to add the kernel (general solution) after finding a particular solution.
Determinant sign errors – swapping rows changes sign; missing this can flip the answer in Cramer’s rule.