RemNote Community

Introduction to Discrete Mathematics

Understand the fundamentals of discrete mathematics, covering logic and proofs, set theory and functions, combinatorial counting, graph theory basics, and introductory number theory and algorithmic concepts.

Summary

Introduction to Discrete Mathematics

Discrete mathematics is the study of mathematical structures that consist of separate and distinct objects rather than continuous quantities. Unlike calculus, which deals with functions and rates of change over intervals, discrete mathematics focuses on objects that are fundamentally countable. This field underpins computer science, cryptography, and many practical applications in computing. You'll find that discrete mathematics provides the logical foundation and counting tools necessary to analyze algorithms, design networks, and solve combinatorial problems.

Logic and Proof Techniques

Propositional Logic

Propositional logic is the formal system for working with statements and their logical relationships. A proposition is a declarative statement that is either true or false, with no ambiguity allowed. Logical connectives let you build complex statements from simpler ones:

Negation (NOT): Flips the truth value. If $P$ is true, then $\neg P$ (not $P$) is false.
Conjunction (AND): $P \land Q$ is true only when both $P$ and $Q$ are true.
Disjunction (OR): $P \lor Q$ is true when at least one of $P$ or $Q$ is true.
Implication (IF-THEN): $P \rightarrow Q$ is false only when $P$ is true and $Q$ is false. This is crucial: a true statement implying a false statement is the only case where an implication fails.
Biconditional (IF AND ONLY IF): $P \leftrightarrow Q$ is true when $P$ and $Q$ have the same truth value.

The key thing to remember about implication: $P \rightarrow Q$ doesn't mean $P$ causes $Q$ in real life; it's purely a logical relationship. When $P$ is false, the implication is automatically true regardless of $Q$'s value.

Truth Tables

A truth table systematically lists all possible combinations of truth values for a statement's components and shows the resulting truth value of the whole expression.
For example, here's the truth table for $P \land Q$:

| $P$ | $Q$ | $P \land Q$ |
|-----|-----|-------------|
| T | T | T |
| T | F | F |
| F | T | F |
| F | F | F |

Truth tables become increasingly important as you combine multiple propositions. With $n$ propositional variables, you'll have $2^n$ rows. Truth tables allow you to verify logical equivalences: two formulas are equivalent if they have identical truth table columns. A particularly useful pair of equivalences is De Morgan's Laws:

$\neg(P \land Q) \equiv \neg P \lor \neg Q$
$\neg(P \lor Q) \equiv \neg P \land \neg Q$

These laws tell you how to push a negation through conjunctions and disjunctions by flipping the connective.

Predicate Logic

While propositional logic works with complete statements, predicate logic allows you to reason about properties and relationships of specific objects. A predicate is a statement whose truth value depends on one or more variables. For example, "$x > 5$" is a predicate; its truth value depends on what $x$ is. In predicate logic, you use quantifiers to make general statements:

Universal quantifier $\forall$ (for all): "$\forall x, P(x)$" means the predicate $P(x)$ is true for every possible value of $x$. To prove a universal statement false, you only need one counterexample.
Existential quantifier $\exists$ (there exists): "$\exists x, P(x)$" means there is at least one value of $x$ for which $P(x)$ is true.

These quantifiers are related by negation:

$\neg(\forall x, P(x)) \equiv \exists x, \neg P(x)$
$\neg(\exists x, P(x)) \equiv \forall x, \neg P(x)$

The order of quantifiers matters significantly. "$\forall x, \exists y, P(x,y)$" (for every $x$, there exists some $y$, possibly depending on $x$) means something different from "$\exists y, \forall x, P(x,y)$" (there exists a single $y$ that works for all $x$).

Rules of Inference

Rules of inference are logical patterns that let you derive new true statements from premises you know are true.
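As a quick illustration (the Python below is an addition for this summary, not part of the original notes), you can enumerate every truth assignment to verify De Morgan's laws mechanically, exactly as a truth table would:

```python
from itertools import product

def equivalent(f, g, num_vars=2):
    """Two Boolean formulas are equivalent iff they agree on every
    combination of truth values (i.e., identical truth table columns)."""
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=num_vars))

# De Morgan's laws: push negation through AND/OR by flipping the connective.
print(equivalent(lambda p, q: not (p and q),
                 lambda p, q: (not p) or (not q)))   # True
print(equivalent(lambda p, q: not (p or q),
                 lambda p, q: (not p) and (not q)))  # True
```

With 2 variables the check covers $2^2 = 4$ rows; in general it covers $2^n$, which is exactly why truth-table checking becomes impractical for large formulas.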
Rather than checking every possible truth assignment (which becomes impractical quickly), these rules provide shortcuts. The most important rules include:

Modus Ponens: If you know $P$ is true and $P \rightarrow Q$ is true, then $Q$ must be true.
Modus Tollens: If you know $Q$ is false and $P \rightarrow Q$ is true, then $P$ must be false.
Hypothetical Syllogism: If $P \rightarrow Q$ and $Q \rightarrow R$ are both true, then $P \rightarrow R$ is true.
Disjunctive Syllogism: If $P \lor Q$ is true and $P$ is false, then $Q$ must be true.

These rules form the basis of formal proofs, where you start with given premises and apply rules of inference step by step to reach a conclusion. Each step must be justified by a rule, making the logic completely transparent.

Set Theory and Functions

Sets and Basic Operations

A set is simply a collection of distinct objects, called elements or members. Sets are foundational to all of discrete mathematics. We write $a \in S$ to mean "$a$ is an element of set $S$," and $a \notin S$ to mean "$a$ is not in $S$." Sets can be defined by listing elements, $S = \{1, 2, 3\}$, or by a property: $S = \{x \mid x \text{ is even and } 0 < x < 10\} = \{2, 4, 6, 8\}$.

Basic set operations combine or modify sets:

Union $A \cup B$: All elements in $A$ or $B$ (or both).
Intersection $A \cap B$: Only elements in both $A$ and $B$.
Complement $A^c$: All elements (in some universal set) that are not in $A$.
Difference $A \setminus B$: Elements in $A$ but not in $B$.

Important to remember: the empty set $\emptyset$ contains no elements, and it is a subset of every set. Two useful properties are:

Commutativity: $A \cup B = B \cup A$ and $A \cap B = B \cap A$
Distributivity: $A \cup (B \cap C) = (A \cup B) \cap (A \cup C)$

Cardinality and Size Comparison

The cardinality of a set, written $|S|$ or $\#S$, is the number of elements it contains. For finite sets, this is straightforward: if $S = \{1, 2, 3\}$, then $|S| = 3$.
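The set operations just listed map directly onto Python's built-in set type; a small sketch (the particular sets are made up for illustration):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}
C = {2, 5, 7}
U = set(range(10))  # a universal set, needed for the complement

print(A | B)   # union:        {1, 2, 3, 4, 5}
print(A & B)   # intersection: {3, 4}
print(U - A)   # complement of A relative to U
print(A - B)   # difference:   {1, 2}

# Distributivity: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
assert A | (B & C) == (A | B) & (A | C)
```

Note that a Python `set` enforces distinctness automatically, matching the mathematical definition.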
For infinite sets, cardinality becomes more subtle. Two sets are said to have the same cardinality if there exists a bijection (a one-to-one correspondence) between them. This might seem strange, but the set of all integers and the set of all even integers have the same cardinality, both countably infinite, because you can pair up every integer with exactly one even integer ($n \leftrightarrow 2n$). The cardinality of a set matters when counting: if you're choosing elements from disjoint sets, you add their cardinalities; if making a sequence of independent choices, you multiply.

Functions and Mappings

A function $f: A \to B$ is a rule that assigns to each element $a \in A$ (the domain) exactly one element $f(a) \in B$ (the codomain). This "exactly one" requirement is essential: if an input maps to two different outputs, it's not a function. Key terminology:

Image of $f$: The set of all outputs actually produced, written $\text{Im}(f) = \{f(a) \mid a \in A\}$. This may be a proper subset of the codomain.
Injective (one-to-one): Different inputs always produce different outputs. If $f(a) = f(b)$, then $a = b$.
Surjective (onto): Every element of the codomain is actually produced. The image equals the entire codomain.
Bijective: Both injective and surjective, a perfect one-to-one correspondence.

Why does this matter? If $f: A \to B$ is a bijection between finite sets, then $|A| = |B|$. Bijections are the foundation for counting arguments.

Relations and Equivalence Classes

A relation on a set $A$ is any subset $R$ of $A \times A$ (the set of all ordered pairs from $A$). We write $aRb$ to mean $(a, b) \in R$. Relations describe how elements are connected.
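For finite domains, injectivity and surjectivity can be checked by brute force. The helper names below are illustrative (not standard library functions), but the logic follows the definitions above directly:

```python
def is_injective(f, domain):
    """Injective: no two domain elements share an output."""
    outputs = [f(a) for a in domain]
    return len(outputs) == len(set(outputs))

def is_surjective(f, domain, codomain):
    """Surjective: the image equals the entire codomain."""
    return {f(a) for a in domain} == set(codomain)

domain = [0, 1, 2, 3]
print(is_injective(lambda x: 2 * x, domain))           # True: doubling never collides
print(is_surjective(lambda x: x % 2, domain, [0, 1]))  # True: both 0 and 1 are hit
print(is_injective(lambda x: x % 2, domain))           # False: 0 and 2 both map to 0
```

A function passing both checks is a bijection, which (per the text) forces the domain and codomain to have equal cardinality.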
An equivalence relation is special: it must satisfy three properties:

Reflexive: $aRa$ for all $a$ (every element relates to itself)
Symmetric: If $aRb$, then $bRa$ (the relation works in both directions)
Transitive: If $aRb$ and $bRc$, then $aRc$ (relations can chain)

Equality ($=$) is an equivalence relation: everything equals itself, equality is symmetric, and it's transitive. Given an equivalence relation, an equivalence class $[a]$ is the set of all elements related to $a$: $[a] = \{x \in A \mid xRa\}$. These equivalence classes partition the set: they're disjoint and cover all of $A$. This is powerful because it organizes elements into groups of mutually related items, which is fundamental to modular arithmetic and many counting arguments.

Combinatorics

Basic Counting Principles

Combinatorics is about counting objects, and two fundamental principles underpin most counting:

The Addition Principle (or sum rule): If you're choosing from disjoint categories, add the counts. If set $A$ has $m$ elements, set $B$ has $n$ elements, and $A$ and $B$ are disjoint (no overlap), then $|A \cup B| = m + n$.
The Multiplication Principle (or product rule): If you're making a sequence of independent choices, multiply the counts. If you choose one element from $A$ (say, a shirt) and one from $B$ (say, pants), there are $|A| \times |B|$ total outfits possible.

These principles seem simple, but they're the basis for all counting. The key is recognizing when to add (disjoint cases) versus multiply (independent choices in sequence).

Permutations and Combinations

When counting arrangements, the distinction between permutations (where order matters) and combinations (where order doesn't) is crucial. A permutation is an ordered arrangement. The number of ways to arrange $n$ distinct objects in order is $n! = n \times (n-1) \times \cdots \times 1$ (pronounced "$n$ factorial").
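A minimal sketch of how an equivalence relation partitions a set, using congruence mod 3 as the relation (the `equivalence_classes` helper is hypothetical, written for this illustration):

```python
from collections import defaultdict

def equivalence_classes(elements, key):
    """Partition elements into classes where x ~ y iff key(x) == key(y).
    The classes are pairwise disjoint and together cover all elements."""
    classes = defaultdict(list)
    for x in elements:
        classes[key(x)].append(x)
    return list(classes.values())

# Congruence mod 3 splits 0..8 into three disjoint classes by remainder.
parts = equivalence_classes(range(9), key=lambda x: x % 3)
print(parts)  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

This is exactly the structure modular arithmetic builds on: $[0]$, $[1]$, and $[2]$ are the residue classes mod 3.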
If you're choosing and arranging $k$ objects from $n$ total, that's
$$P(n, k) = \frac{n!}{(n-k)!}$$
because you have $n$ choices for the first position, $n-1$ for the second, and so on. A combination is an unordered selection. When order doesn't matter, many permutations correspond to the same combination (since rearranging doesn't create a new combination). The number of ways to choose $k$ objects from $n$ is
$$C(n, k) = \binom{n}{k} = \frac{n!}{k!(n-k)!}$$
The division by $k!$ accounts for the fact that all $k!$ orderings of the same $k$ objects are counted as one combination.

Mistake to avoid: don't confuse when to use which. If the problem asks "in how many ways can we arrange" or "how many ordered lists," use permutations. If it asks "how many ways can we choose" or "how many groups," use combinations.

Pigeonhole Principle

The pigeonhole principle is deceptively simple but powerful: if you place more than $n$ objects into $n$ containers, at least one container must hold more than one object. Formally: if $n+1$ or more objects are distributed among $n$ sets, at least one set contains two or more objects. This principle is often used to prove existence without explicit construction. For example: in any group of 13 people, at least two must share the same birth month (13 people into 12 months means at least one month has 2 or more people). The generalized version states: if you distribute $nk + 1$ objects into $n$ containers, at least one container has at least $k+1$ objects.

Inclusion–Exclusion Principle

When you want to count a union of overlapping sets, simply adding the sizes overcounts the overlaps. The inclusion–exclusion principle corrects for this. For two sets:
$$|A \cup B| = |A| + |B| - |A \cap B|$$
For three sets:
$$|A \cup B \cup C| = |A| + |B| + |C| - |A \cap B| - |A \cap C| - |B \cap C| + |A \cap B \cap C|$$
The pattern: add all individual sizes, subtract all pairwise intersections, add back all triple intersections, and continue alternating.
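Python's standard library exposes both formulas directly (`math.perm` and `math.comb`), which makes the $k!$ relationship between them easy to check:

```python
import math

n, k = 5, 2
# Ordered arrangements of k objects from n: P(n, k) = n! / (n-k)!
print(math.perm(n, k))   # 20
# Unordered selections: C(n, k) = n! / (k! (n-k)!)
print(math.comb(n, k))   # 10
# Every combination corresponds to exactly k! permutations:
assert math.perm(n, k) == math.comb(n, k) * math.factorial(k)
```

So choosing and arranging 2 of 5 objects gives 20 ordered lists, but only 10 distinct groups, because each pair can be ordered in $2! = 2$ ways.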
This ensures each element is counted exactly once. Why this matters: when counting things with multiple properties, this principle lets you handle overlaps systematically. For instance, counting numbers up to 100 divisible by 2 or 3 requires including both sets but subtracting the overlap (numbers divisible by 6).

Recurrence Relations

A recurrence relation defines a sequence where each term depends on previous terms. Instead of a closed formula, you describe how to build the next term. The classic example is the Fibonacci sequence: $F(n) = F(n-1) + F(n-2)$ with $F(1) = 1$, $F(2) = 1$. Each term is the sum of the two before it. To solve a recurrence relation (find a closed formula), you can:

Compute terms directly by applying the recurrence repeatedly
Guess a formula and verify it, then use induction to prove it correct
Use generating functions or other advanced techniques

Recurrence relations appear everywhere in computer science: analyzing algorithm complexity, counting problems with recursive structure, and dynamic programming all use them.

Graph Theory

Basic Terminology

A graph $G = (V, E)$ consists of a set of vertices (or nodes) $V$ and a set of edges $E$, where each edge connects two vertices. Graphs model relationships and networks. If an edge between vertices $u$ and $v$ is unordered, the graph is undirected: the edge is simply $\{u, v\}$. If the edge is ordered (pointing from $u$ to $v$), the graph is directed: the edge is $(u, v)$. Key terminology:

Adjacent vertices: Two vertices connected by an edge
Degree: The number of edges incident to a vertex. In directed graphs, we distinguish in-degree (edges pointing in) from out-degree (edges pointing out)
Isolated vertex: A vertex with degree 0

One fundamental fact: the sum of all degrees in an undirected graph equals twice the number of edges, since each edge contributes 1 to the degree of each of its two endpoints.
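Both ideas above fit in a short sketch (my addition for illustration): computing a recurrence bottom-up instead of by repeated recursion, and checking the divisible-by-2-or-3 count with inclusion–exclusion:

```python
def fib(n):
    """F(1) = F(2) = 1; F(n) = F(n-1) + F(n-2), built bottom-up
    by applying the recurrence repeatedly."""
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 2):
        a, b = b, a + b
    return b if n >= 2 else a

print([fib(n) for n in range(1, 9)])  # [1, 1, 2, 3, 5, 8, 13, 21]

# Inclusion–exclusion: numbers in 1..100 divisible by 2 or 3.
# Add both counts, subtract the overlap (divisible by 6).
count = 100 // 2 + 100 // 3 - 100 // 6
print(count)  # 67
assert count == sum(1 for x in range(1, 101) if x % 2 == 0 or x % 3 == 0)
```

The brute-force assertion at the end plays the role of "guess and verify": the formula agrees with direct enumeration.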
Paths, Cycles, and Connectivity

A path is a sequence of vertices in which consecutive vertices are connected by edges. In a simple path, no vertex is repeated (which also rules out repeated edges). A cycle is a path that starts and ends at the same vertex (with at least 3 vertices in a simple graph). A graph is acyclic if it contains no cycles. A graph is connected if there's a path between any two vertices. A connected component is a maximal connected subgraph: a connected piece of the graph. These concepts help characterize graph structure. For example, a graph with no cycles is a forest, and a connected forest is a tree.

Trees and Forests

A tree is a connected, acyclic graph. Trees are among the most important structures in computer science because of their simple but powerful properties. Key properties of a tree with $n$ vertices:

It has exactly $n - 1$ edges
There is a unique simple path between any two vertices
Removing any edge disconnects it

A forest is a collection of disjoint trees. If a forest has $n$ vertices and $c$ connected components (trees), it has exactly $n - c$ edges. In many applications, you designate one vertex as the root, creating a rooted tree where edges point away from the root. This adds hierarchy: vertices have parents (predecessors) and children (successors), and leaves are vertices with no children.

Planar Graphs and Euler's Formula

A planar graph is one that can be drawn on a plane without edge crossings. This is important for applications like circuit design and map coloring. Euler's formula for connected planar graphs states
$$V - E + F = 2$$
where $V$ is the number of vertices, $E$ is the number of edges, and $F$ is the number of faces (regions bounded by edges, including the infinite outer region). This remarkable formula connects three seemingly different quantities. From it, you can derive that any connected planar graph with $V \geq 3$ satisfies $E \leq 3V - 6$, which tells you roughly how many edges a planar graph can have relative to its vertices.
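The tree characterization above suggests a simple test: a graph on $n$ vertices is a tree iff it has exactly $n - 1$ edges and is connected. A sketch using breadth-first search (the `is_tree` helper is my own illustration):

```python
from collections import deque

def is_tree(n, edges):
    """Graph on vertices 0..n-1 given as an edge list.
    Tree iff exactly n - 1 edges AND connected."""
    if len(edges) != n - 1:
        return False
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # BFS from vertex 0; a connected graph reaches every vertex.
    seen = {0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == n

print(is_tree(4, [(0, 1), (1, 2), (1, 3)]))  # True: a star-shaped tree
print(is_tree(4, [(0, 1), (1, 2), (2, 0)]))  # False: a triangle plus an isolated vertex
```

Note how the two conditions interact: 3 edges on 4 vertices passes the edge count, but the cycle leaves vertex 3 disconnected, so BFS exposes it.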
<extrainfo> For additional insight: planarity is a deep concept in graph theory. Not all graphs are planar; for instance, the complete graph $K_5$ (where every vertex connects to every other vertex) cannot be drawn without crossings. Determining whether an arbitrary graph is planar is a classic problem in computer science. </extrainfo>

Graph Coloring

Graph coloring assigns colors to vertices such that no two adjacent vertices share the same color. The minimum number of colors needed is the chromatic number of the graph. The chromatic number has practical applications: scheduling exams (vertices are exams, edges connect conflicting exams) requires assigning time slots (colors) so no conflicts occur in the same slot, and register allocation in compilers uses graph coloring to assign limited processor registers to variables. The Four Color Theorem (a famous result, proven with computer assistance) states that any planar graph is 4-colorable: you never need more than 4 colors. For general graphs, determining the chromatic number is NP-hard (computationally very difficult), but upper bounds exist. For instance, a graph whose maximum degree is $d$ can always be colored with $d + 1$ colors (by a greedy algorithm).

Number Theory and Algorithmic Thinking

Divisibility and Prime Numbers

Divisibility is fundamental to number theory. We say $a$ divides $b$ (written $a \mid b$) if there exists an integer $k$ such that $b = ak$. For example, $3 \mid 12$ because $12 = 3 \times 4$. Prime numbers are integers greater than 1 with exactly two positive divisors: 1 and themselves. The first few primes are 2, 3, 5, 7, 11, ...; composite numbers have additional divisors. The Fundamental Theorem of Arithmetic states that every integer greater than 1 can be uniquely expressed as a product of primes (up to order): $12 = 2^2 \times 3$. The greatest common divisor $\gcd(a, b)$ is the largest positive integer dividing both $a$ and $b$.
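The greedy argument behind the $d + 1$ bound can be sketched directly: when a vertex is colored, at most $d$ neighbors are already colored, so some color in $\{0, \dots, d\}$ must be free. A minimal illustration (not an optimal-coloring algorithm):

```python
def greedy_coloring(adj):
    """Visit vertices in order; give each the smallest color not used
    by its already-colored neighbors. Never needs more than d + 1
    colors, where d is the maximum degree."""
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 4-cycle (max degree 2): greedy uses at most 3 colors here.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
coloring = greedy_coloring(adj)
print(coloring)  # {0: 0, 1: 1, 2: 0, 3: 1}
```

Greedy coloring respects the adjacency rule but may use more colors than the chromatic number on unluckier vertex orderings, which is why computing the chromatic number itself remains NP-hard.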
The Euclidean algorithm computes this efficiently by repeatedly applying $\gcd(a, b) = \gcd(b, a \bmod b)$ until the remainder is zero.

Modular Arithmetic

Modular arithmetic works with remainders. We write $a \equiv b \pmod{n}$ (read "$a$ is congruent to $b$ modulo $n$") when $a$ and $b$ have the same remainder upon division by $n$, or equivalently, when $n$ divides $a - b$. For example, $17 \equiv 5 \pmod{12}$ because both leave remainder 5 when divided by 12, or because $12 \mid (17 - 5)$. Key properties:

Reflexive: $a \equiv a \pmod{n}$
Symmetric: If $a \equiv b \pmod{n}$, then $b \equiv a \pmod{n}$
Transitive: If $a \equiv b \pmod{n}$ and $b \equiv c \pmod{n}$, then $a \equiv c \pmod{n}$

Congruence respects arithmetic: if $a \equiv b \pmod{n}$ and $c \equiv d \pmod{n}$, then

$a + c \equiv b + d \pmod{n}$
$a \times c \equiv b \times d \pmod{n}$

This is one reason modular arithmetic is used in cryptography: you can reduce intermediate results modulo $n$ at every step, keeping the numbers small, and still get the correct answer modulo $n$.

Big-O Notation

Big-O notation $O(f(n))$ describes the asymptotic growth rate of a function: how fast it grows as the input size $n$ increases. Formally, $g(n) = O(f(n))$ means there exist constants $c > 0$ and $n_0 > 0$ such that $g(n) \leq c \cdot f(n)$ for all $n \geq n_0$. In practice, this means $g(n)$ grows no faster than $f(n)$ (ignoring constant factors and small inputs). For example:

Linear search takes $O(n)$ time (checking up to $n$ elements)
Binary search takes $O(\log n)$ time (halving the search space each step)
Bubble sort takes $O(n^2)$ time

Common growth rates in increasing order: $O(1) < O(\log n) < O(n) < O(n \log n) < O(n^2) < O(2^n)$. Why this matters: when $n$ is large, $O(\log n)$ is vastly better than $O(n)$, which is vastly better than $O(n^2)$. This notation lets you compare algorithms without worrying about constant factors or implementation details.
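To make the gap between $O(n)$ and $O(\log n)$ concrete, here is an illustrative sketch (my addition, not from the notes) that counts the comparisons each search strategy makes on the same sorted list:

```python
def linear_search_steps(data, target):
    """Scan sequentially; return (index, comparisons made)."""
    for steps, value in enumerate(data, start=1):
        if value == target:
            return steps - 1, steps
    return -1, len(data)

def binary_search_steps(data, target):
    """Repeatedly halve a sorted list; return (index, comparisons made)."""
    lo, hi, steps = 0, len(data) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if data[mid] == target:
            return mid, steps
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999)[1])  # 1_000_000 comparisons: O(n)
print(binary_search_steps(data, 999_999)[1])  # roughly 20 comparisons: O(log n)
```

On a million elements the worst-case linear scan makes a million comparisons, while binary search needs only about $\log_2 10^6 \approx 20$.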
Other related notations:

$\Omega(f(n))$: Lower bound (grows at least as fast as $f(n)$)
$\Theta(f(n))$: Tight bound (grows at exactly the rate of $f(n)$)

<extrainfo>
Basic Sorting Algorithms

Sorting arranges data in increasing or decreasing order. While sorting is practical and important, the specific algorithms (insertion sort, selection sort, bubble sort) are implementation details rather than core discrete mathematics concepts typically emphasized on exams focused on theoretical foundations.

Insertion Sort: Builds a sorted array one element at a time, inserting each new element into its correct position. Time complexity: $O(n^2)$ in the worst case, but $O(n)$ if the array is nearly sorted.
Selection Sort: Repeatedly finds the minimum remaining element and moves it into its correct position. Time complexity: always $O(n^2)$, even in the best case, because it examines all remaining elements regardless of their order.

Basic Searching Algorithms

Linear Search: Scans elements sequentially until it finds the target. Time complexity: $O(n)$ in the worst case (the target might be last or absent).
Binary Search: Works on sorted data by repeatedly halving the search space: each comparison against the middle element eliminates half of the remaining candidates. Time complexity: $O(\log n)$, making it dramatically faster than linear search for large datasets.

The trade-off: binary search requires the data to be sorted first, which takes $O(n \log n)$ time with efficient algorithms; linear search works on unsorted data.
</extrainfo>

Conclusion

You now have an overview of the major areas of discrete mathematics: logical reasoning with formal proofs, set theory and functions, combinatorial counting, graph theory structures, and number-theoretic foundations. These topics form the mathematical backbone of computer science and are essential for analyzing algorithms, designing systems, and solving computational problems.
As you go deeper into each area, remember that these topics frequently intersect: a counting problem might involve graph structures, or a proof might use logical inference about sets. The connections between topics are where the real depth lies.
Flashcards
Which common logical connectives are used to formalize statements in propositional logic?
“And” “Or” “Not”
Which quantifiers does predicate logic add to propositional logic?
“For all” “There exists”
What is the purpose of a truth table in logic?
To evaluate the truth value of logical expressions for every possible combination of truth values of their components.
What is the function of rules of inference within a proof?
To provide systematic steps for deriving new statements from given premises.
What is the definition of a set?
A collection of distinct objects.
What does the cardinality of a set measure?
The size of the set, i.e., the number of elements it contains.
How does a function map elements between a domain and a codomain?
It assigns each element of the domain to exactly one element of the codomain.
What is the purpose of equivalence classes in set theory?
To partition a set into groups of mutually related elements.
In counting principles, when is the addition principle typically applied?
When counting objects in disjoint sets.
When is the multiplication principle used in basic counting?
When counting ordered pairs from independent choices.
What is the primary difference between a permutation and a combination?
Permutations count ordered arrangements, while combinations count unordered selections.
What does the pigeonhole principle conclude if more objects are placed into fewer containers?
At least one container must hold multiple objects.
What are the two basic components that make up a graph?
Vertices (nodes) Edges (links)
In graph theory, what is a cycle?
A closed path (a sequence of edges connecting vertices that returns to the start).
How is a tree defined in graph theory?
A connected acyclic graph.
What is a forest in the context of graph theory?
A collection of disjoint trees.
What is Euler’s formula for planar graphs?
$V - E + F = 2$ (where $V$ is vertices, $E$ is edges, and $F$ is faces).
What is the core rule of graph coloring?
Adjacent vertices must be assigned different colors.
What is the defining characteristic of a prime number?
It is an integer greater than 1 whose only positive divisors are 1 and itself.
What does the notation $a \equiv b \pmod{n}$ signify in modular arithmetic?
That $a$ and $b$ have the same remainder when divided by $n$.
What does Big-O notation $O(f(n))$ describe regarding an algorithm?
An upper bound on the growth rate of its running time or space usage.
How does a linear search locate a target value?
By scanning each element in a list sequentially.
How does a binary search locate a target value in a sorted list?
By repeatedly halving the list.

Key Concepts
Mathematical Foundations
Discrete mathematics
Set theory
Number theory
Logic and Counting
Propositional logic
Combinatorics
Pigeonhole principle
Inclusion–exclusion principle
Graph Theory and Algorithms
Graph theory
Euler’s formula (planar graphs)
Big‑O notation