Introduction to Algorithms
Understand what algorithms are, their key qualities and efficiency analysis, and common examples such as search, sorting, and graph traversal.
Summary
Understanding Algorithms
What Is an Algorithm?
An algorithm is a step-by-step recipe or procedure for solving a problem or performing a task. Think of it like a cooking recipe: just as a recipe tells you the exact order of ingredients and actions to bake a cake, an algorithm tells a computer the exact sequence of operations needed to solve a problem.
Algorithms are fundamental to computer science because they're the bridge between a problem we want to solve and the code that actually solves it. When you search for a word in a document, sort a list of names, or get directions on a map, an algorithm is doing the work behind the scenes.
How Are Algorithms Represented?
Algorithms are typically written in pseudocode or in an actual programming language like Python, Java, or C++. Pseudocode is a simplified, human-readable way of writing algorithms that looks like code but isn't tied to any specific programming language. This makes it easier to focus on the logic of the algorithm without getting bogged down in language syntax.
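To illustrate the difference, here is one small task — finding the largest value in a list — written first as pseudocode and then in Python. The function name `find_largest` is just an illustration, not part of any standard library:

```python
# Pseudocode:
#   set largest to the first element
#   for each remaining element x:
#       if x is greater than largest, set largest to x
#   return largest

def find_largest(values):
    """Return the largest value in a non-empty list."""
    largest = values[0]
    for x in values[1:]:
        if x > largest:
            largest = x
    return largest

print(find_largest([4, 1, 9, 3]))  # 9
```

Notice that the pseudocode captures the logic without any Python-specific syntax; translating it into a real language is then mostly mechanical.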
Where Do Algorithms Show Up?
Algorithms are everywhere in the software you use daily. Here are some real-world examples:
Searching: Finding a name in a contact list
Sorting: Arranging emails by date
Routing: Computing the shortest route on a GPS
Compression: Reducing image file sizes
Recommendation: Suggesting movies or products based on your preferences
Understanding algorithms helps you appreciate how these systems work and why some are faster than others.
Key Qualities of Algorithms
When we evaluate an algorithm, we ask three critical questions:
Does It Work? (Correctness)
A correct algorithm produces the right answer for all valid inputs. This might sound obvious, but it's easy to write code that works for some cases but fails on others. A correct algorithm must handle edge cases (unusual or extreme inputs) properly.
For example, a sorting algorithm must work whether the list is already sorted, completely reversed, contains duplicates, or is empty.
Does It Finish? (Finiteness)
A finite algorithm must always terminate—that is, it must finish after a limited number of steps. An algorithm that loops forever or gets stuck is useless. This quality ensures that the algorithm is guaranteed to produce an answer eventually, not run indefinitely.
Does It Use Resources Wisely? (Efficiency)
An efficient algorithm uses as little time (speed) and memory (space) as reasonably possible. Two algorithms might both be correct and finite, but one could take hours while the other takes seconds. Efficiency matters because computing resources are limited, and users want fast, responsive software.
Measuring Efficiency: Big-O Notation
Since we care deeply about efficiency, we need a way to measure and compare it. Big-O notation describes how an algorithm's running time or memory usage grows as the input size increases.
The key insight is that we don't need exact numbers—we need to understand the trend. For example:
If an algorithm takes the same amount of time regardless of input size, it's $O(1)$ (constant time)
If time doubles when the input doubles, it's $O(n)$ (linear time)
If time quadruples when the input doubles, it's $O(n^2)$ (quadratic time)
If time barely changes when the input doubles, it might be $O(\log n)$ (logarithmic time)
By convention, Big-O is usually stated for the worst-case scenario—we describe how long an algorithm might take in the unluckiest situation. This gives us a reliable upper bound on performance.
Why Big-O matters: An $O(n)$ algorithm is vastly faster than an $O(n^2)$ algorithm when the input is large. Understanding Big-O lets you choose better algorithms before you even write code.
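To make the trend concrete, here is a small sketch of our own (the functions `linear_steps` and `quadratic_steps` are illustrative, not standard) that tabulates roughly how many operations a linear versus a quadratic algorithm performs as the input size grows:

```python
def linear_steps(n):
    return n        # e.g., scanning a list once

def quadratic_steps(n):
    return n * n    # e.g., comparing every element with every element

# As n grows by 100x, the linear count grows 100x but the
# quadratic count grows 10,000x.
for n in [10, 100, 1000]:
    print(f"n={n:>5}  linear={linear_steps(n):>7}  quadratic={quadratic_steps(n):>9}")
```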
Common Algorithm Patterns
Linear Search: Scanning from the Start
Linear search is the simplest searching algorithm. It starts at the beginning of a list and checks each element one by one until it finds the target value (or reaches the end).
How it works: Imagine looking for a specific name in an unsorted phone book by checking every entry sequentially.
Performance: Linear search has $O(n)$ time complexity, where $n$ is the number of elements. In the worst case, you might need to check every single element.
When to use it: When the list is unsorted or very small. For large unsorted lists it still works correctly, but it is slow compared with approaches that exploit sorted order.
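The scan-until-found idea can be sketched in a few lines of Python (the name `linear_search` is our own; returning -1 for "not found" is just one common convention):

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent. O(n) time."""
    for i, value in enumerate(items):
        if value == target:  # check each element in turn
            return i
    return -1                # reached the end without finding it

print(linear_search([8, 3, 5, 1], 5))  # 2
print(linear_search([8, 3, 5, 1], 7))  # -1
```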
Binary Search: Divide and Conquer
Binary search is a much faster searching algorithm, but it requires the list to already be sorted. It works by repeatedly splitting the search interval in half.
How it works: Think of finding a word in a dictionary by guessing whether the word is in the first or second half, then repeating with whichever half you choose. Each guess eliminates half the remaining possibilities.
Performance: Binary search has $O(\log n)$ time complexity. Even with a million elements, you'd need at most about 20 comparisons! This is drastically faster than linear search.
Key requirement: The list must be sorted before using binary search.
Example: Searching for 7 in [1, 3, 5, 7, 9, 11, 13]
Compare with the middle element (7): Found! → Done in 1 step
In a larger list, you'd keep halving until you found the target or ran out of elements.
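The halving loop looks like this in Python (a minimal sketch; the name `binary_search` and the -1-for-absent convention are our own choices):

```python
def binary_search(sorted_items, target):
    """Return an index of target in a sorted list, or -1 if absent. O(log n) time."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2               # middle of the current interval
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                   # discard the left half
        else:
            hi = mid - 1                   # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```

Each loop iteration discards half of the remaining interval, which is where the $O(\log n)$ bound comes from.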
Sorting: Arranging Data
Simple Sorting Algorithms
Bubble sort and insertion sort are straightforward sorting algorithms that arrange elements into order. They work by comparing adjacent or nearby elements and swapping them if they're in the wrong order.
Performance: Both run in $O(n^2)$ time, which means if you double the list size, it takes about four times as long. For small lists, this is acceptable, but for large lists, it becomes very slow.
When to use them: These are primarily useful for learning or for very small datasets. For real-world problems, use faster algorithms.
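As a learning aid, here is a minimal bubble sort in Python (illustrative only — real code should normally use the language's built-in sort):

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs. O(n^2)."""
    n = len(items)
    for i in range(n - 1):            # after pass i, the last i+1 items are in place
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

The nested loops over the input are exactly where the $O(n^2)$ cost comes from.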
Advanced Sorting: Divide-and-Conquer
Merge sort and quicksort are faster sorting algorithms that use a divide-and-conquer strategy: break the problem into smaller pieces, solve those pieces, then combine the results.
Performance: Both achieve $O(n \log n)$ time complexity on average. This is dramatically faster than $O(n^2)$ for large lists. For a list of one million elements:
Bubble sort: 1 trillion operations
Merge sort: 20 million operations
The difference is staggering.
When to use them: For real-world sorting tasks, especially with large datasets, these are the standard choice.
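The divide-and-conquer pattern is easiest to see in merge sort. Here is a compact sketch (our own illustration; production code would use the built-in `sorted`):

```python
def merge_sort(items):
    """Sort by splitting in half, sorting each half recursively, then merging. O(n log n)."""
    if len(items) <= 1:                    # base case: already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])         # divide and conquer each half
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```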
<extrainfo>
Graph Traversal Algorithms
Breadth-First Search (BFS) and Dijkstra's shortest-path algorithm are used to explore connections between nodes in a graph (a network of connected points). These algorithms answer questions like "Is there a path from A to B?" or "What's the shortest route?"
Graph algorithms are important in applications like GPS navigation, social networks, and network routing, but they may not be heavily emphasized in an introductory algorithms course.
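For the curious, answering "Is there a path from A to B?" with BFS takes only a few lines of Python (the function name, the dictionary-of-neighbors graph representation, and the example graph are all our own illustrative choices):

```python
from collections import deque

def bfs_path_exists(graph, start, goal):
    """Breadth-first search: return True if goal is reachable from start."""
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()          # explore nodes in discovery order
        if node == goal:
            return True
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return False

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(bfs_path_exists(graph, "A", "D"))  # True
```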
</extrainfo>
Analyzing Algorithms
Time and Space Complexity
When you develop an algorithm, you need to analyze two things:
Time Complexity: How long does the algorithm take as the input grows larger? You express this in Big-O notation. An $O(n)$ algorithm is "linear" (it takes roughly twice as long for twice the input), while an $O(n^2)$ algorithm is "quadratic" (it takes roughly four times as long for twice the input).
Space Complexity: How much memory does the algorithm use? Some algorithms use extra memory to store intermediate results, while others work "in place" using minimal extra space. This matters when working with huge datasets.
How to analyze: Count the number of basic operations (comparisons, assignments, etc.) the algorithm performs in the worst case. As the input size grows very large, the dominant term determines the Big-O complexity.
Example: A simple algorithm that compares each element to every other element performs roughly $n \times n = n^2$ comparisons, so it's $O(n^2)$.
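That counting argument can be checked directly. In this sketch (the function is our own illustration), we count the comparisons made when each element is compared against every other element:

```python
def count_pair_comparisons(items):
    """Count comparisons when each element is compared with every other element."""
    count = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            if i != j:
                count += 1   # one comparison per ordered pair
    return count

# For n = 4: n*(n-1) = 12 comparisons, which grows like n^2.
print(count_pair_comparisons([1, 2, 3, 4]))  # 12
```

The exact count is $n(n-1)$, but as $n$ grows the dominant term is $n^2$, so the algorithm is $O(n^2)$.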
Flashcards
What is the general definition of an algorithm?
A step‑by‑step recipe for solving a problem or performing a task.
What does it mean for an algorithm to be considered "correct"?
It produces the right answer for all valid inputs.
What is the definition of finiteness in the context of algorithms?
The algorithm finishes after a limited number of steps.
Which two primary resources does an efficient algorithm aim to minimize?
Time (speed) and space (memory).
What does Big‑O notation describe regarding an algorithm's performance?
How running time or memory usage grows as the input size grows.
How does a linear search locate a target element in a list?
It scans the list one element at a time until the target is found.
What is the running time complexity of a linear search?
$O(n)$ (where $n$ is the number of elements).
What is the prerequisite for a list to be searched using binary search?
The list must be sorted.
How does binary search narrow down the location of a target?
It repeatedly cuts the search interval in half.
What is the running time complexity of binary search?
$O(\log n)$ (where $n$ is the number of elements).
Quiz
Introduction to Algorithms Quiz Question 1: Which statement best defines an algorithm?
- A step‑by‑step recipe for solving a problem (correct)
- A high‑level programming language
- A data structure for storing information
- A hardware component that executes code
Introduction to Algorithms Quiz Question 2: What is the time‑complexity of binary search on a sorted list?
- O(log n) (correct)
- O(n)
- O(n log n)
- O(n²)
Introduction to Algorithms Quiz Question 3: When analyzing an algorithm, which two aspects are typically quantified using basic counting arguments and Big‑O notation?
- Running time and memory usage (correct)
- Number of source‑code lines and comment density
- User‑interface responsiveness and power consumption
- Network latency and bandwidth
Introduction to Algorithms Quiz Question 4: Which scenario indicates that an algorithm is not correct?
- It returns a wrong result for some valid input (correct)
- It runs longer than expected for large inputs
- It uses more memory than necessary
- It terminates after a fixed number of steps
Introduction to Algorithms Quiz Question 5: Big‑O notation characterizes how what aspect of an algorithm changes as input size grows?
- Its running time or memory usage (correct)
- The number of lines of source code
- The programmer's effort
- The hardware cost
Introduction to Algorithms Quiz Question 6: What is the time‑complexity of linear search in the worst case?
- $O(n)$ (correct)
- $O(\log n)$
- $O(n \log n)$
- $O(1)$
Introduction to Algorithms Quiz Question 7: Which of the following best describes the time‑complexity of bubble sort?
- $O(n^{2})$ (correct)
- $O(n)$
- $O(n \log n)$
- $O(\log n)$
Introduction to Algorithms Quiz Question 8: Which of the following is NOT a typical way to write an algorithm for a computer to execute?
- In raw binary code without abstraction (correct)
- In a high‑level programming language
- As pseudocode
- As a detailed flowchart
Introduction to Algorithms Quiz Question 9: Which of the following operations typically depends on an algorithm in software?
- Sorting a list of contacts alphabetically (correct)
- Adjusting screen brightness manually
- Plugging in a USB device
- Turning off the computer with a power button
Introduction to Algorithms Quiz Question 10: What characterizes a finite algorithm?
- It halts after a limited number of steps (correct)
- It runs in constant time for all inputs
- It never uses more than a fixed amount of memory
- It can process an infinite stream of data
Introduction to Algorithms Quiz Question 11: Which description best reflects algorithmic efficiency?
- It uses as little time and memory as reasonable for the task (correct)
- It always executes in O(1) time
- It requires no memory during execution
- It guarantees the correct result regardless of resource use
Introduction to Algorithms Quiz Question 12: In a divide‑and‑conquer sorting algorithm, what is the purpose of the “divide” step?
- To split the input list into smaller sublists (correct)
- To merge previously sorted sublists into a final list
- To choose a pivot element for partitioning
- To swap adjacent elements repeatedly
Key Concepts
Algorithm Fundamentals
Algorithm
Pseudocode
Big‑O notation
Search Algorithms
Linear search
Binary search
Breadth‑first search
Dijkstra’s algorithm
Sorting Algorithms
Merge sort
Quick sort
Divide‑and‑conquer
Definitions
Algorithm
A step‑by‑step procedure for solving a problem or performing a task.
Pseudocode
A high‑level, language‑independent description of an algorithm used to convey its logic.
Big‑O notation
A mathematical notation that describes the upper bound of an algorithm’s running time or space usage as a function of input size.
Linear search
A simple algorithm that checks each element of a list sequentially until the target value is found, with O(n) time complexity.
Binary search
An efficient algorithm for finding an item in a sorted list by repeatedly halving the search interval, with O(log n) time complexity.
Merge sort
A divide‑and‑conquer sorting algorithm that recursively splits a list and merges sorted sublists, achieving O(n log n) performance.
Quick sort
A divide‑and‑conquer sorting algorithm that selects a pivot to partition the list and recursively sorts the partitions, also O(n log n) on average.
Breadth‑first search
A graph traversal algorithm that explores all neighbor nodes at the present depth before moving to nodes at the next depth level.
Dijkstra’s algorithm
A graph algorithm that computes the shortest path from a source node to all other nodes in a weighted graph with non‑negative edge weights.
Divide‑and‑conquer
An algorithmic paradigm that solves a problem by recursively breaking it into smaller subproblems, solving each subproblem, and combining the results.