RemNote Community

Vector space - Extended Structures and Applications

Learn how vector spaces generalize to normed and inner-product spaces, modules and algebras, and geometric structures such as affine/projective spaces and vector bundles.


Summary

Common Examples of Vector Spaces

Vector spaces aren't just abstract algebraic objects: they arise naturally across mathematics and science, and recognizing concrete examples helps you see when you're working with one.

Function Spaces

One of the most important classes of vector spaces consists of functions. Fix any set $\Omega$ and a field $F$ (typically the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$). The set of all functions $f:\Omega\to F$ forms a vector space under two operations:

- Pointwise addition: $(f+g)(\omega)=f(\omega)+g(\omega)$ for each $\omega \in \Omega$
- Pointwise scalar multiplication: $(af)(\omega)=a\,f(\omega)$ for each $\omega \in \Omega$

The key insight is that you add and scale functions by performing these operations at each point separately. Many practical spaces arise this way: continuous functions on $[0,1]$, differentiable functions on $\mathbb{R}$, polynomial functions, and many others. All of them satisfy the vector space axioms under pointwise operations.

Solution Sets of Homogeneous Linear Systems

Another critical example: for a homogeneous linear system $A\mathbf{x}=0$, where $A$ is an $m \times n$ matrix, the set of all solution vectors is a vector subspace of $F^n$. This is one reason understanding vector subspaces matters: the solution set of such a system automatically carries the structure of a vector space, so you can analyze it using the tools of linear algebra.

Vector Spaces with Additional Structure

Pure vector spaces let us add vectors and scale them, but sometimes we need more structure: we often want to measure how "long" a vector is, or how much two vectors "agree" with each other. This is where norms and inner products come in.

Normed Vector Spaces

A normed vector space adds a notion of length to a vector space. A norm $\|\cdot\|$ is a function that assigns each vector a non-negative number representing its magnitude.
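The pointwise operations on function spaces described earlier can be sketched in a few lines of Python; the helper names `add_fns` and `scale_fn` are made up for this illustration:

```python
# Minimal sketch of pointwise operations on a space of functions f: R -> R.
# The helper names add_fns and scale_fn are illustrative, not from any library.

def add_fns(f, g):
    """Pointwise addition: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale_fn(a, f):
    """Pointwise scalar multiplication: (a f)(x) = a * f(x)."""
    return lambda x: a * f(x)

f = lambda x: x ** 2      # a polynomial function
g = lambda x: 3.0 * x     # another one

h = add_fns(f, scale_fn(2.0, g))   # h(x) = x^2 + 6x
print(h(1.0))  # 7.0
```

Each combination of functions is again a function, evaluated point by point, which is exactly why the vector space axioms carry over.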
For a norm to be useful, it must satisfy three properties:

- Positivity: $\|v\| \ge 0$, with $\|v\| = 0$ only when $v = 0$
- Scalability: $\|av\| = |a|\,\|v\|$ (scaling a vector scales its length proportionally)
- Triangle inequality: $\|u+v\| \le \|u\|+\|v\|$ (the direct route between two points is shortest)

A familiar example: on $\mathbb{R}^n$, the Euclidean norm is $\|v\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}$. But there are many other norms: the max norm $\|v\| = \max_i |v_i|$ and the taxicab norm $\|v\| = |v_1| + |v_2| + \cdots + |v_n|$ also work.

Inner Product Spaces

An inner product space goes further by describing how two vectors relate to each other through an inner product $\langle\cdot,\cdot\rangle$, which takes two vectors and returns a scalar (a number). It must satisfy:

- Linearity in the first argument: $\langle au + bv, w\rangle = a\langle u,w\rangle + b\langle v,w\rangle$
- Symmetry: $\langle u,v\rangle = \overline{\langle v,u\rangle}$ (the bar denotes the complex conjugate for complex spaces)
- Positivity: $\langle v,v\rangle > 0$ whenever $v \neq 0$

Crucially, an inner product automatically gives you a norm: $\|v\| = \sqrt{\langle v,v\rangle}$. This is why inner product spaces are so powerful: they encode both length and angle information. The most familiar example is the dot product on $\mathbb{R}^n$: $\langle u,v\rangle = u_1 v_1 + u_2 v_2 + \cdots + u_n v_n$.

Bases in Hilbert Spaces

When we work with infinite-dimensional spaces such as function spaces, the notion of a basis becomes more subtle. A Hilbert space is a special inner product space that is also complete, meaning Cauchy sequences converge: a technical condition that makes infinite-dimensional analysis work smoothly.

What Is a Basis in a Hilbert Space?

In finite dimensions, a basis is a set of linearly independent vectors that span the space.
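The Euclidean, max, and taxicab norms above, and the norm induced by the dot product, can all be checked numerically; a small self-contained sketch (function names are illustrative only):

```python
import math

# Sketch: three norms on R^n and the norm induced by the dot product.

def euclidean(v):
    """Euclidean norm: sqrt(v_1^2 + ... + v_n^2)."""
    return math.sqrt(sum(x * x for x in v))

def max_norm(v):
    """Max norm: max_i |v_i|."""
    return max(abs(x) for x in v)

def taxicab(v):
    """Taxicab norm: |v_1| + ... + |v_n|."""
    return sum(abs(x) for x in v)

def dot(u, v):
    """Standard inner product on R^n."""
    return sum(a * b for a, b in zip(u, v))

u, v = [3.0, 4.0], [1.0, -2.0]
print(euclidean(u), max_norm(u), taxicab(u))  # 5.0 4.0 7.0

# The dot product induces the Euclidean norm: ||v|| = sqrt(<v, v>).
assert math.isclose(euclidean(u), math.sqrt(dot(u, u)))

# The triangle inequality holds for all three norms.
s = [a + b for a, b in zip(u, v)]
for norm in (euclidean, max_norm, taxicab):
    assert norm(s) <= norm(u) + norm(v)
```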
In infinite dimensions, we need a more sophisticated definition: a set of vectors $\{v_1, v_2, v_3, \ldots\}$ is a basis of a Hilbert space $H$ when the closure of its linear span equals $H$ itself.

What does "closure of the span" mean? It includes:

- All finite linear combinations of the basis vectors
- All limits of sequences of such combinations

This subtle difference matters because in infinite dimensions we can have infinite sums such as $\sum_{i=1}^{\infty} a_i v_i$ that converge (approach a limit) but aren't themselves finite combinations. The closure captures these. The dimension of a Hilbert space is the cardinality (size) of its basis.

Orthogonal Bases

Working with an orthogonal basis, one in which distinct basis vectors are perpendicular ($\langle v_i, v_j\rangle = 0$ for $i \neq j$), is much easier than working with a general basis. The Gram–Schmidt process takes any linearly independent set and systematically orthogonalizes it, producing an orthogonal basis.

In a Hilbert space, orthogonal bases play the same role as coordinate axes in ordinary $\mathbb{R}^n$: they let you decompose any vector into independent components. When you have an orthonormal basis $\{e_1, e_2, e_3, \ldots\}$ (each basis vector normalized to length 1), you can express any vector $v$ as:

$$v = \sum_{i=1}^{\infty} \langle v, e_i\rangle \, e_i$$

This is remarkable: the coefficient for each basis vector is simply the inner product of $v$ with that basis vector. No system of equations is needed.

Approximation Using Bases

One major application of basis functions: if you can find a good basis for your problem, you can approximate complicated functions as linear combinations of simpler basis functions. The coefficients $c_i$ in $f \approx \sum_i c_i \phi_i(x)$, where the $\phi_i$ are basis functions, tell you how much of each basis function you need. When the basis is orthogonal, finding these coefficients is simple: $c_i = \frac{\langle f, \phi_i\rangle}{\|\phi_i\|^2}$.
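Both the Gram–Schmidt process and the orthogonal-coefficient formula $c_i = \langle v, e_i\rangle / \|e_i\|^2$ fit in a few lines of numpy; a sketch written for this summary, not a library routine:

```python
import numpy as np

# Sketch: orthogonalize a linearly independent set with Gram-Schmidt, then
# expand a vector using c_i = <v, e_i> / ||e_i||^2 (no linear solve needed).

def gram_schmidt(vectors):
    """Return an orthogonal basis spanning the same space."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for e in basis:
            w -= (w @ e) / (e @ e) * e   # subtract the projection onto e
        basis.append(w)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# Pairwise orthogonality: <e_i, e_j> = 0 for i != j.
assert abs(es[0] @ es[1]) < 1e-12 and abs(es[0] @ es[2]) < 1e-12

# Expand an arbitrary vector in the orthogonal basis via inner products.
v = np.array([2.0, -1.0, 3.0])
coeffs = [(v @ e) / (e @ e) for e in es]
reconstructed = sum(c * e for c, e in zip(coeffs, es))
assert np.allclose(reconstructed, v)
```

The final assertion is the finite-dimensional version of the expansion formula above: the inner-product coefficients rebuild the vector exactly.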
Applications to Differential Equations and Quantum Mechanics

Many important differential equations have solution spaces that naturally sit inside Hilbert spaces, and the eigenfunctions of the associated operators serve as natural basis functions. A key example from quantum mechanics:

- The time-dependent Schrödinger equation describes how quantum wavefunctions evolve
- Solutions are wavefunctions that live in a Hilbert space
- Physical observables (such as energy) correspond to eigenvalues of linear differential operators
- The eigenstates (eigenvector functions) form a basis for the state space

The spectral theorem is a powerful result: under certain conditions, a linear operator can be written as a sum of projections onto its eigenstates, weighted by the corresponding eigenvalues. This decomposition shows that the eigenfunctions "diagonalize" the operator, much like the eigendecomposition of a matrix in finite dimensions.

<extrainfo>
Modules

Definition and Comparison with Vector Spaces

A module over a ring $R$ is a generalization of a vector space: a set equipped with addition and scalar multiplication by elements of $R$, satisfying the same axioms as a vector space. The key difference is that in a vector space the scalars come from a field (so every non-zero element has a multiplicative inverse), while in a module the scalars come from a ring (which may not have inverses). When the underlying ring is a field, a module is exactly a vector space. The distinction becomes important in abstract algebra when working with rings that aren't fields, such as the integers $\mathbb{Z}$.

Free Modules and Non-free Modules

A free module is one that possesses a basis. Every vector space is automatically a free module over its field. However, some modules have no basis at all; these are called non-free modules. This cannot happen for vector spaces.
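A standard illustration (added here as a worked example, not from the original notes): $\mathbb{Z}/2\mathbb{Z} = \{0, 1\}$ is a non-free module over $\mathbb{Z}$. Every element satisfies

$$2x = 0 \quad \text{for all } x \in \mathbb{Z}/2\mathbb{Z}, \qquad \text{even though } 2 \neq 0 \text{ in } \mathbb{Z},$$

so no nonempty subset is linearly independent over $\mathbb{Z}$, and therefore no basis can exist.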
</extrainfo>

<extrainfo>
Algebras Over Fields

Definition and Basic Examples

An algebra over a field is a vector space with an additional operation: a multiplication that takes two vectors and produces another vector. The multiplication must be bilinear, meaning it distributes over both vector addition and scalar multiplication. A fundamental example is the polynomial ring $F[x]$ of all polynomials in one variable with coefficients in the field $F$: you can add polynomials (the vector space operation) and multiply them (the algebra operation).

Lie Algebras

A Lie algebra is a special algebra whose multiplication, called the Lie bracket $[\cdot,\cdot]$, must satisfy three properties:

- Bilinearity: the bracket is linear in both arguments
- Antisymmetry: $[v,v] = 0$, which implies $[u,v] = -[v,u]$
- Jacobi identity: $[u,[v,w]] + [v,[w,u]] + [w,[u,v]] = 0$

Common examples include:

- Matrix Lie algebras: $n \times n$ matrices with bracket $[A,B]=AB-BA$ (the commutator)
- Three-dimensional space with the cross product as the bracket: $[u,v] = u \times v$

Lie algebras are fundamental in physics and geometry because they describe infinitesimal symmetries.

Tensor Algebras and Derived Structures

The tensor algebra of a vector space $V$ is generated by taking formal products of vectors from $V$; these products are called simple tensors. Two important variations:

- Symmetric algebra: impose the relation that tensors commute, so $u \otimes v = v \otimes u$
- Exterior algebra (also called the Grassmann algebra): impose antisymmetry, so $u \otimes v = -v \otimes u$

These derived algebras are essential in differential geometry and algebraic topology.
</extrainfo>

<extrainfo>
Affine and Projective Spaces

Affine Spaces

An affine space is a set of points with a special property: a vector space acts on it in a "free and transitive" way. What does this mean practically?
- You can add a vector to a point to get another point: $p + v$
- This action is associative: $(p + v) + w = p + (v + w)$
- There is no distinguished "origin" point (unlike vector spaces, where the zero vector is special)

Affine spaces formalize the intuition of a "bare space" without a coordinate system. Once you pick an origin, an affine space becomes a vector space.

Affine Subspaces and Linear Equations

An affine subspace is obtained by translating a linear subspace: if $V$ is a linear subspace and $x$ is a fixed vector, then $x + V = \{x + v \mid v \in V\}$ is an affine subspace. This connects to a fundamental fact: the solution set of an inhomogeneous linear system $A\mathbf{x} = \mathbf{b}$ is an affine subspace. Specifically, it equals $\mathbf{x}_p + \ker(A)$, where $\mathbf{x}_p$ is any particular solution and $\ker(A)$ is the null space of $A$.

Projective Space and Its Generalizations

Projective space $\mathbb{P}(V)$ is the collection of all one-dimensional linear subspaces of a finite-dimensional vector space $V$. Each point of projective space represents a "direction", a line through the origin in $V$. Why is this useful? Projective space formalizes the idea from perspective drawing that parallel lines meet at a point at infinity, which is crucial in computer graphics and algebraic geometry.

Grassmannians generalize this idea: the Grassmannian $\text{Gr}(k,n)$ is the set of all $k$-dimensional linear subspaces of an $n$-dimensional vector space. Projective space is the special case $\text{Gr}(1,n)$. An even finer generalization: flag manifolds parametrize chains of nested subspaces $V_0 \subset V_1 \subset V_2 \subset \cdots \subset V$, giving a detailed classification of subspace configurations.
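The decomposition of the solution set as a particular solution plus the kernel is easy to verify numerically; a sketch with numpy, using an arbitrary example system:

```python
import numpy as np

# Sketch: the solution set of A x = b is the affine subspace x_p + ker(A).
# Here A maps R^3 -> R^2 with rank 2, so ker(A) is one-dimensional.

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([6.0, 2.0])

# One particular solution (least squares returns an exact one here,
# since b lies in the range of A).
x_p, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(A @ x_p, b)

# A basis for ker(A) from the SVD: the right singular vectors
# beyond the rank span the null space.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
kernel = Vt[rank:]            # rows form a basis of ker(A)

# Every translate x_p + t * k still solves A x = b.
for t in (-2.0, 0.5, 3.0):
    assert np.allclose(A @ (x_p + t * kernel[0]), b)
```

Shifting along the kernel never changes $A\mathbf{x}$, which is exactly why the solution set is a translated linear subspace rather than a linear subspace itself.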
</extrainfo>

<extrainfo>
Vector Bundles

Definition and Structure

A vector bundle over a base space $X$ consists of:

- A total space $E$
- A continuous projection map $\pi: E \to X$
- For each point $x \in X$, a fiber $\pi^{-1}(x)$ that is a vector space

You can think of a vector bundle as attaching a vector space to each point of the base space $X$, varying continuously.

Trivial and Non-trivial Bundles

The simplest example is a trivial bundle: $E = X \times V$, where $V$ is a fixed vector space and $\pi$ projects onto the first coordinate. This is just the product of $X$ with a vector space. More generally, a bundle is locally trivial if every point of $X$ has a neighborhood $U$ such that $\pi^{-1}(U)$ looks like $U \times V$ (with the fiber-preserving structure respected). Local triviality is part of the definition of a vector bundle, and it is what makes them interesting: they may fail to be globally trivial.

Topological Consequences

Vector bundles have surprising topological implications. The hairy ball theorem is a classic result: there is no continuous, everywhere non-zero tangent vector field on the 2-sphere $S^2$. In other words, you cannot comb the hair on a sphere flat; there must be at least one "bald spot" where the vector field vanishes. This theorem follows from studying the tangent bundle of the sphere.

The cotangent bundle of a differentiable manifold is another important example: at each point it contains the dual space of the tangent space. Sections of the cotangent bundle are differential one-forms, which are fundamental in differential geometry and physics.
</extrainfo>
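By contrast with $S^2$, the circle $S^1$ can be "combed": the field $v(x, y) = (-y, x)$ is continuous, tangent to the circle, and nowhere zero (its tangent bundle is trivial). A quick numerical check of those two properties:

```python
import numpy as np

# The field v(x, y) = (-y, x) on the unit circle S^1 is tangent and
# nowhere zero. (On S^2 the hairy ball theorem rules such a field out.)

theta = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
points = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # samples on S^1
field = np.stack([-points[:, 1], points[:, 0]], axis=1)     # v(x, y) = (-y, x)

# Tangency: v(p) is orthogonal to the radius vector at p.
assert np.allclose(np.sum(points * field, axis=1), 0.0)

# Nowhere zero: |v(p)| = 1 at every sampled point.
assert np.allclose(np.linalg.norm(field, axis=1), 1.0)
```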
Flashcards
How are the operations of addition and scalar multiplication defined for a vector space of functions $f: \Omega \to F$?
Pointwise
What algebraic structure is formed by the set of all solutions to the equation $A\mathbf{x}=0$?
A vector subspace of $F^n$
What three properties must the norm $\| \cdot \|$ satisfy in a normed vector space?
Positivity; scalability ($\|av\| = |a|\,\|v\|$); triangle inequality ($\|u+v\| \le \|u\| + \|v\|$)
What properties must an inner product $\langle \cdot, \cdot \rangle$ satisfy?
Linearity in the first argument Symmetry ($\langle u,v \rangle = \overline{\langle v,u \rangle}$) Positivity ($\langle v,v \rangle > 0$ for $v \neq 0$)
How is the norm $\|v\|$ derived from the inner product in an inner product space?
$\|v\| = \sqrt{\langle v,v \rangle}$
When is a set of vectors considered a basis of a Hilbert space?
When the closure of its linear span equals the entire space
What elements are included in the closure of a linear span in a Hilbert space?
Finite linear combinations and all limits of such combinations
What term refers to the cardinality of a basis in a Hilbert space?
Dimension
Which process is used to convert a linearly independent set into an orthogonal basis?
Gram–Schmidt process
If the basis functions of a Hilbert space are orthogonal, how are the approximation coefficients obtained?
Via inner products
How does the spectral theorem express a compact linear operator?
As a sum of projections onto its eigenstates, weighted by the corresponding eigenvalues
How does the definition of a module over a ring differ from a vector space?
It does not require multiplicative inverses for scalars
What is a module called if it does not possess a basis?
Non-free module
What is the definition of a free module?
A module that has a basis
What extra structure does an algebra over a field have compared to a standard vector space?
A bilinear multiplication
What specific algebra is formed by the set of all polynomials in one variable?
Polynomial ring
What three properties must the Lie bracket multiplication satisfy?
Bilinearity Antisymmetry Jacobi identity
What is the standard Lie bracket for the vector space of $n \times n$ matrices?
The commutator $[A,B]=AB-BA$
Which algebra is derived from the tensor algebra by imposing the relation that two tensors commute?
Symmetric algebra
Which algebra is derived from the tensor algebra by imposing antisymmetry?
Exterior algebra
What distinguishes an affine space from a vector space regarding its origin?
It has no distinguished origin
How is an affine subspace geometrically related to a linear subspace $V$?
It is the translation of $V$ by a fixed vector $x$ ($x+V$)
What two components make up the solution set (an affine subspace) of an inhomogeneous linear system?
A particular solution and the nullspace of the coefficient matrix
What is the definition of a projective space in terms of linear subspaces?
The collection of all one-dimensional linear subspaces of a finite-dimensional vector space
What geometric notion does projective space formalize regarding parallel lines?
That parallel lines meet at a point at infinity
What do Grassmannians parametrize?
All $k$-dimensional linear subspaces of a fixed vector space
What do flag manifolds parametrize?
Chains of nested subspaces
What are the three core components of a vector bundle?
Total space $E$ Base space $X$ Continuous projection $\pi: E \to X$
In a vector bundle, what algebraic structure is the fiber $\pi^{-1}(x)$ for each point $x$?
A vector space
What is the definition of a trivial vector bundle?
The product $X \times V$ with projection onto the first factor
What does the hairy ball theorem state regarding continuous tangent vector fields on $S^2$?
No such field can be everywhere non-zero
What type of mathematical objects are the sections of a cotangent bundle?
Differential one-forms

Key Concepts
Vector Spaces and Extensions
Vector space
Normed vector space
Inner product space
Hilbert space
Module (mathematics)
Algebra over a field
Lie algebra
Tensor algebra
Geometric Structures
Affine space
Projective space
Grassmannian
Vector bundle