
Introduction to Algorithms

Understand what algorithms are, their key qualities and efficiency analysis, and common examples such as search, sorting, and graph traversal.


Summary

Understanding Algorithms

What Is an Algorithm?

An algorithm is a step-by-step recipe or procedure for solving a problem or performing a task. Think of it like a cooking recipe: just as a recipe tells you the exact order of ingredients and actions to bake a cake, an algorithm tells a computer the exact sequence of operations needed to solve a problem.

Algorithms are fundamental to computer science because they're the bridge between a problem we want to solve and the code that actually solves it. When you search for a word in a document, sort a list of names, or get directions on a map, an algorithm is doing the work behind the scenes.

How Are Algorithms Represented?

Algorithms are typically written in pseudocode or in an actual programming language like Python, Java, or C++. Pseudocode is a simplified, human-readable way of writing algorithms that looks like code but isn't tied to any specific programming language. This makes it easier to focus on the logic of the algorithm without getting bogged down in language syntax.

Where Do Algorithms Show Up?

Algorithms are everywhere in the software you use daily. Here are some real-world examples:

Searching: Finding a name in a contact list
Sorting: Arranging emails by date
Routing: Computing the shortest route on a GPS
Compression: Reducing image file sizes
Recommendation: Suggesting movies or products based on your preferences

Understanding algorithms helps you appreciate how these systems work and why some are faster than others.

Key Qualities of Algorithms

When we evaluate an algorithm, we ask three critical questions:

Does It Work? (Correctness)

A correct algorithm produces the right answer for all valid inputs. This might sound obvious, but it's easy to write code that works for some cases but fails on others. A correct algorithm must handle edge cases (unusual or extreme inputs) properly.
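To make the edge-case point concrete, here is a minimal Python sketch; the `insertion_sort` function is just a hypothetical stand-in for whatever sorting routine is being checked:

```python
def insertion_sort(items):
    """Sort a list of comparable items; returns a new sorted list."""
    result = list(items)  # copy so the caller's list is not mutated
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        # Shift larger elements one slot right to make room for `key`.
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    return result

# A correct sort must handle edge cases, not just "typical" input.
assert insertion_sort([3, 1, 2]) == [1, 2, 3]   # typical case
assert insertion_sort([1, 2, 3]) == [1, 2, 3]   # already sorted
assert insertion_sort([3, 2, 1]) == [1, 2, 3]   # completely reversed
assert insertion_sort([2, 2, 1]) == [1, 2, 2]   # duplicates
assert insertion_sort([]) == []                 # empty list
```

If any of these assertions fails, the algorithm is not correct, even if it happens to work on everyday input.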
For example, a sorting algorithm must work whether the list is already sorted, completely reversed, contains duplicates, or is empty.

Does It Finish? (Finiteness)

A finite algorithm must always terminate—that is, it must finish after a limited number of steps. An algorithm that loops forever or gets stuck is useless. This quality ensures that the algorithm is guaranteed to produce an answer eventually, not run indefinitely.

Does It Use Resources Wisely? (Efficiency)

An efficient algorithm uses as little time (speed) and memory (space) as reasonably possible. Two algorithms might both be correct and finite, but one could take hours while the other takes seconds. Efficiency matters because computing resources are limited, and users want fast, responsive software.

Measuring Efficiency: Big-O Notation

Since we care deeply about efficiency, we need a way to measure and compare it. Big-O notation describes how an algorithm's running time or memory usage grows as the input size increases. The key insight is that we don't need exact numbers—we need to understand the trend. For example:

If an algorithm takes the same amount of time regardless of input size, it's $O(1)$ (constant time)
If time doubles when the input doubles, it's $O(n)$ (linear time)
If time quadruples when the input doubles, it's $O(n^2)$ (quadratic time)
If time barely changes when the input doubles, it might be $O(\log n)$ (logarithmic time)

Big-O notation uses the worst-case scenario—we describe how long an algorithm might take in the unluckiest situation. This gives us a reliable upper bound on performance.

Why Big-O matters: An $O(n)$ algorithm is vastly faster than an $O(n^2)$ algorithm when the input is large. Understanding Big-O lets you choose better algorithms before you even write code.

Common Algorithm Patterns

Linear Search: Scanning from the Start

Linear search is the simplest searching algorithm.
It starts at the beginning of a list and checks each element one by one until it finds the target value (or reaches the end).

How it works: Imagine looking for a specific name in an unsorted phone book by checking every entry sequentially.

Performance: Linear search has $O(n)$ time complexity, where $n$ is the number of elements. In the worst case, you might need to check every single element.

When to use it: When the list is unsorted or very small. For large, unsorted lists, linear search is acceptable but slow.

Binary Search: Divide and Conquer

Binary search is a much faster searching algorithm, but it requires the list to already be sorted. It works by repeatedly splitting the search interval in half.

How it works: Think of finding a word in a dictionary by guessing whether the word is in the first or second half, then repeating with whichever half you choose. Each guess eliminates half the remaining possibilities.

Performance: Binary search has $O(\log n)$ time complexity. Even with a million elements, you'd need at most about 20 comparisons! This is drastically faster than linear search.

Key requirement: The list must be sorted before using binary search.

Example: Searching for 7 in [1, 3, 5, 7, 9, 11, 13]. Compare with the middle element (7): found! Done in one step. In a larger list, or when the target isn't at the middle, you'd keep halving the interval until you found it.

Sorting: Arranging Data

Simple Sorting Algorithms

Bubble sort and insertion sort are straightforward sorting algorithms that arrange elements into order. They work by comparing adjacent or nearby elements and swapping them if they're in the wrong order.

Performance: Both run in $O(n^2)$ time, which means if you double the list size, it takes about four times as long. For small lists, this is acceptable, but for large lists, it becomes very slow.

When to use them: These are primarily useful for learning or for very small datasets. For real-world problems, use faster algorithms.
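Both search patterns can be sketched in a few lines of Python (a minimal teaching illustration, not a library implementation):

```python
def linear_search(items, target):
    """Scan left to right; return the index of target, or -1 if absent. O(n)."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Repeatedly halve the search interval; input must be sorted. O(log n)."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1
```

For instance, `binary_search([1, 3, 5, 7, 9, 11, 13], 7)` returns index 3 after a single comparison, because 7 happens to sit exactly in the middle.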
Advanced Sorting: Divide-and-Conquer

Merge sort and quicksort are faster sorting algorithms that use a divide-and-conquer strategy: break the problem into smaller pieces, solve those pieces, then combine the results.

Performance: Both achieve $O(n \log n)$ time complexity on average. This is dramatically faster than $O(n^2)$ for large lists. For a list of one million elements:

Bubble sort: about 1 trillion operations
Merge sort: about 20 million operations

The difference is staggering.

When to use them: For real-world sorting tasks, especially with large datasets, these are the standard choice.

<extrainfo>
Graph Traversal Algorithms

Breadth-First Search (BFS) and Dijkstra's shortest-path algorithm are used to explore connections between nodes in a graph (a network of connected points). These algorithms answer questions like "Is there a path from A to B?" or "What's the shortest route?" Graph algorithms are important in applications like GPS navigation, social networks, and network routing, but they may not be heavily emphasized in an introductory algorithms course.
</extrainfo>

Analyzing Algorithms

Time and Space Complexity

When you develop an algorithm, you need to analyze two things:

Time Complexity: How long does the algorithm take as the input grows larger? You express this in Big-O notation. An $O(n)$ algorithm is "linear" (takes twice as long for twice the input), while an $O(n^2)$ algorithm is "quadratic" (takes four times as long for twice the input).

Space Complexity: How much memory does the algorithm use? Some algorithms use extra memory to store intermediate results, while others work "in place" using minimal extra space. This matters when working with huge datasets.

How to analyze: Count the number of basic operations (comparisons, assignments, etc.) the algorithm performs in the worst case. As the input size grows very large, the dominant term determines the Big-O complexity.

Example: A simple algorithm that compares each element to every other element performs roughly $n \times n = n^2$ comparisons, so it's $O(n^2)$.
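As one concrete instance of the divide-and-conquer pattern described above, here is a merge sort sketch in Python (a teaching version that favors clarity over memory efficiency):

```python
def merge_sort(items):
    """Divide-and-conquer sort: split, sort each half, merge. O(n log n)."""
    if len(items) <= 1:
        return list(items)            # base case: 0 or 1 elements are sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # divide: recursively sort each half
    right = merge_sort(items[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])           # one of these is empty; the other
    merged.extend(right[j:])          # holds the remaining sorted tail
    return merged
```

The halving gives $\log n$ levels of recursion, and each level does $O(n)$ merge work, which is where the $O(n \log n)$ total comes from.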
Flashcards
What is the general definition of an algorithm?
A step‑by‑step recipe for solving a problem or performing a task.
What does it mean for an algorithm to be considered "correct"?
It produces the right answer for all valid inputs.
What is the definition of finiteness in the context of algorithms?
The algorithm finishes after a limited number of steps.
Which two primary resources does an efficient algorithm aim to minimize?
Time (speed) and space (memory).
What does Big‑O notation describe regarding an algorithm's performance?
How running time or memory usage grows as the input size grows.
How does a linear search locate a target element in a list?
It scans the list one element at a time until the target is found.
What is the running time complexity of a linear search?
$O(n)$ (where $n$ is the number of elements).
What is the prerequisite for a list to be searched using binary search?
The list must be sorted.
How does binary search narrow down the location of a target?
It repeatedly cuts the search interval in half.
What is the running time complexity of binary search?
$O(\log n)$ (where $n$ is the number of elements).

Key Concepts
Algorithm Fundamentals
Algorithm
Pseudocode
Big‑O notation
Search Algorithms
Linear search
Binary search
Breadth‑first search
Dijkstra’s algorithm
Sorting Algorithms
Merge sort
Quick sort
Divide‑and‑conquer