Computer Programming Study Guide
📖 Core Concepts
Programming – writing instruction sequences (programs) that tell a computer what to do.
Algorithm – a precise step‑by‑step procedure that solves a problem; implemented by code.
High‑level vs. Machine Language – high‑level languages are human‑readable; machine code is binary, executed directly by the CPU.
Readability – how easily a human can grasp a program’s purpose, control flow, and operation.
Quality Requirements – reliability, robustness, usability, portability, maintainability, and efficiency.
Big‑O Notation – describes how time or memory usage grows with input size.
Methodologies – structured ways to develop software (requirements analysis, testing, Agile, OOAD, etc.).
Debugging – process of reproducing, isolating, and fixing defects using tools and systematic techniques.
---
📌 Must Remember
Reliability = correctness of results; depends on algorithmic correctness & minimal logic/resource errors.
Robustness = graceful handling of bad data, missing resources, user mistakes, power loss.
Maintainability hinges on readability (indentation, comments, naming, decomposition).
Efficiency = low consumption of CPU time, memory, I/O, network bandwidth.
Big‑O classes guide algorithm choice: e.g., $O(n)$ vs. $O(n^2)$ for given input limits.
Agile cycles are short (weeks) and combine requirements, design, coding, testing.
Debugging first step: reproduce the bug reliably.
Static analysis finds potential issues without running the program.
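The Big‑O point above can be made concrete by counting basic operations (a toy Python sketch, not a benchmark; the function names are invented for illustration):

```python
def linear_scan(items, target):
    """O(n): examines each element at most once."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            break
    return steps

def all_pairs(items):
    """O(n^2): visits every pair of elements."""
    steps = 0
    for a in items:
        for b in items:
            steps += 1
    return steps

# Doubling the input doubles linear work but quadruples quadratic work.
print(linear_scan(list(range(100)), -1))   # 100 steps
print(all_pairs(list(range(100))))         # 10000 steps
```

This is why input limits matter when choosing between algorithms: at n = 100 the gap is 100×; at n = 10 it is only 10×.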
---
🔄 Key Processes
Developing a Program
Analyze requirements → design (OOAD/UML or ER modeling) → choose language/paradigm → implement code → refactor for readability → test → debug → maintain.
Refactoring for Readability
Identify tangled code → apply decomposition (functions/classes) → improve naming → adjust indentation/comments → run tests to ensure behavior unchanged.
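The steps above might look like this in Python (a hypothetical before/after; names such as `total_positive` are invented for illustration):

```python
# Before: tangled logic, opaque names.
def f(d):
    t = 0
    for k in d:
        if d[k] > 0:
            t = t + d[k]
    return t

# After: decomposed into small functions with descriptive names.
def positive_values(amounts):
    """Yield only the positive amounts from a name -> amount mapping."""
    return (v for v in amounts.values() if v > 0)

def total_positive(amounts):
    """Sum the positive amounts."""
    return sum(positive_values(amounts))

# The "run tests" step: old and new versions must agree.
sample = {"a": 3, "b": -1, "c": 5}
assert f(sample) == total_positive(sample) == 8
```

The final assertion is the safety net: refactoring is only done when behavior is provably unchanged.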
Debugging Workflow
Reproduce bug → simplify test case → divide‑and‑conquer (remove parts) → set breakpoints / step through → inspect variables → fix and verify.
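The divide‑and‑conquer step can be sketched as input shrinking (a toy Python sketch; `reproduces` stands in for whatever check makes the bug appear):

```python
def shrink_failing_input(data, fails):
    """Divide and conquer: keep only the half of the input that still
    reproduces the failure, until no smaller failing slice exists."""
    while len(data) > 1:
        half = len(data) // 2
        if fails(data[:half]):
            data = data[:half]
        elif fails(data[half:]):
            data = data[half:]
        else:
            break  # the failure needs elements from both halves
    return data

# Hypothetical bug: the code under test crashes whenever 0 is present.
reproduces = lambda chunk: 0 in chunk
print(shrink_failing_input([7, 3, 0, 9, 4, 2], reproduces))  # -> [0]
```

A minimal failing input makes the later breakpoint/step‑through phase far faster, because there is less state to inspect.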
Choosing an Algorithm
Assess input size & constraints → list candidate algorithms → compare Big‑O complexities → pick the lowest‑order candidate that fits the resource limits.
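As a sketch of the comparison step, the same membership query can be served by two candidate data structures with different complexities (Python, illustrative only):

```python
def contains_list(haystack, needle):
    """Linear scan: O(n) comparisons per lookup, no extra memory."""
    return needle in haystack          # list membership scans element by element

def contains_set(haystack_set, needle):
    """Hash lookup: O(1) average per lookup, O(n) extra memory for the set."""
    return needle in haystack_set

data = list(range(10_000))
lookup = set(data)                     # build once (O(n)), then query cheaply
assert contains_list(data, 9_999) == contains_set(lookup, 9_999) == True
```

With one query the list wins (no build cost); with many queries the set wins. "Lowest order" is always judged against the actual usage pattern and resource limits.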
---
🔍 Key Comparisons
High‑level language vs. Machine code – human‑friendly syntax vs. binary CPU instructions.
Reliability vs. Robustness – correctness of results vs. ability to survive bad inputs/environment.
Maintainability vs. Efficiency – ease of change (readability, modularity) vs. resource consumption (speed, memory).
Static analysis vs. Dynamic debugging – finds issues without execution vs. isolates bugs while program runs.
---
⚠️ Common Misunderstandings
“Fast code is always better.” – Speed matters only when resource constraints demand it; readability and maintainability often outweigh marginal gains.
“Big‑O tells the exact runtime.” – It describes growth trends, not absolute time; constant factors and hardware matter.
“Testing eliminates bugs.” – Testing can show the presence of bugs, but it can never prove their absence.
“Refactoring changes behavior.” – Proper refactoring reorganizes code without altering external functionality.
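A minimal Python sketch of why passing tests say nothing about untested inputs (the `halve` function is invented for illustration):

```python
def halve(n):
    """Intended: truncate n/2 toward zero. Hidden bug: Python's //
    rounds toward negative infinity, so negative inputs come out wrong."""
    return n // 2

# These tests all pass, yet the bug is still there:
assert halve(8) == 4
assert halve(7) == 3
# Untested edge case: halve(-7) returns -4, not the intended -3.
```

Every test here is green, but the suite simply never exercises the negative case, which is exactly the "presence, not absence" point.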
---
🧠 Mental Models / Intuition
“Code is a conversation.” – Treat source as dialogue with future readers; clear naming and spacing are the tone.
“Algorithmic cost is a slope.” – Visualize Big‑O as the steepness of a hill; flatter (e.g., $O(n)$) means slower growth than steep (e.g., $O(n^2)$).
“Debugging is detective work.” – Reproduction = establishing the crime scene; simplification = narrowing suspects; divide‑and‑conquer = interrogating each suspect.
---
🚩 Exceptions & Edge Cases
Robustness edge – Power outages or hardware failures are rarely simulated in tests but must be considered for critical systems.
Portability – Some high‑level languages still rely on platform‑specific libraries; code may compile everywhere but fail at runtime.
Big‑O caveat – An $O(n \log n)$ algorithm can be slower than an $O(n^2)$ algorithm for tiny inputs due to larger constant factors.
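This caveat can be demonstrated by counting comparisons instead of trusting asymptotics (a Python sketch; exact counts depend on the input):

```python
def insertion_sort_comps(a):
    """Return (sorted copy, comparison count) for insertion sort: O(n^2)."""
    a, comps = list(a), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comps += 1
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return a, comps

def merge_sort_comps(a):
    """Return (sorted copy, comparison count) for merge sort: O(n log n),
    but with recursion and allocation overhead on every call."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, lc = merge_sort_comps(a[:mid])
    right, rc = merge_sort_comps(a[mid:])
    merged, comps, i, j = [], lc + rc, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]
    return merged, comps

_, ins = insertion_sort_comps([3, 1, 2, 4])
_, mrg = merge_sort_comps([3, 1, 2, 4])
print(ins, mrg)   # insertion sort does fewer comparisons on this tiny input
```

This is why many production sorts switch to insertion sort below a small size threshold.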
---
📍 When to Use Which
Choose a language – Use domain‑specific languages (e.g., COBOL for legacy finance, FORTRAN for scientific computing) when existing ecosystems dominate; otherwise pick a modern, well‑supported language.
Select a paradigm – Imperative (procedural) for straightforward step‑wise tasks; functional for stateless transformations; logic for rule‑based inference.
Apply refactoring – When code readability scores (indentation, naming) are low or when maintenance tasks become error‑prone.
Pick debugging tool – IDE built‑in debuggers for quick breakpoints; GDB for low‑level inspection; static analysis for large codebases or CI pipelines.
---
👀 Patterns to Recognize
Repeated code blocks → candidate for function extraction.
Deeply nested conditionals → sign of poor decomposition; consider early returns or strategy pattern.
Performance hot spots → loops with $O(n^2)$ or higher complexity; look for algorithmic improvements.
Error‑handling gaps → missing checks for null/invalid inputs → robustness issue.
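The nested‑conditional pattern and its early‑return fix, sketched in Python (the `discount` functions are hypothetical):

```python
# Deeply nested conditionals: the happy path is buried three levels deep.
def discount_nested(user):
    if user is not None:
        if user.get("active"):
            if user.get("member_years", 0) >= 2:
                return 0.10
            else:
                return 0.05
        else:
            return 0.0
    else:
        return 0.0

# Same logic with early returns (guard clauses): flat and readable.
def discount_guarded(user):
    if user is None or not user.get("active"):
        return 0.0
    if user.get("member_years", 0) >= 2:
        return 0.10
    return 0.05

# Behavior is unchanged across all cases:
for u in (None, {"active": False}, {"active": True, "member_years": 3}):
    assert discount_nested(u) == discount_guarded(u)
```

Guard clauses dispose of invalid cases first, so the remaining code reads as a straight line instead of a pyramid.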
---
🗂️ Exam Traps
Confusing “reliability” with “robustness.” – Reliability = correct output; robustness = graceful handling of bad situations.
Selecting the highest‑order algorithm – An $O(n^3)$ answer may seem “thorough” but is penalized if a lower‑order solution exists.
Assuming readability is optional. – Questions on maintainability will penalize lack of indentation/comments.
Choosing static analysis over testing. – Static tools catch certain bugs but cannot replace runtime validation; exam may ask which method finds runtime errors.
Over‑generalizing language popularity. – Remember that COBOL and FORTRAN remain dominant in specific niches; “scripting languages dominate everywhere” is too broad.