
Study Guide

📖 Core Concepts

- Search Engine Optimization (SEO) – Improving the quality and quantity of organic (unpaid) traffic from search engines.
- Organic traffic – Visits from SERPs (image, video, news, academic, vertical, LLM-generated answers).
- Search engine components – Crawling (discovery), indexing (storing), and ranking algorithms (ordering results).
- On-page factors – Keyword usage, meta tags, headings, internal links, site structure, title tags, meta descriptions, canonical tags, 301 redirects.
- Off-page factors – Inbound link quantity, quality, and PageRank (link-equity score).
- White-hat vs. black-hat – White-hat follows guidelines and serves users; black-hat uses deceptive tactics (hidden text, cloaking, paid links).

---

📌 Must Remember

- SEO = organic (unpaid) traffic; SEM = paid ads.
- PageRank = probability that a random surfer reaches a page via links → depends on the number and strength of inbound links.
- Major algorithm updates:
  - Panda (2011) → penalizes low-quality content.
  - Penguin (2012) → targets link spam.
  - Hummingbird (2013) → adds natural-language processing.
  - BERT (2019) → improves query understanding.
- Robots.txt = hint for crawlers; meta robots "noindex" = reliable block.
- Mobile-first indexing (2016) → the mobile version is the primary source for indexing.
- Canonical tag / 301 redirect = consolidate duplicate-URL equity.
- Nofollow (2020) = hint, not a hard directive.

---

🔄 Key Processes

Getting indexed
- Submit an XML sitemap via Google Search Console / Bing Webmaster Tools.
- Ensure internal links allow crawlers to discover new pages.
- Mobile-first: verify the mobile version is complete.

Preventing crawling/indexing
- Add disallowed paths in robots.txt.
- Use `<meta name="robots" content="noindex">` on pages you don't want indexed.

Building link equity
- Create high-quality, shareable content → earn inbound backlinks.
- Use internal cross-linking to pass equity to important pages.
- Consolidate duplicates with canonical tags or 301 redirects.
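The random-surfer idea behind PageRank ("probability a random surfer reaches a page via links") can be sketched in a few lines of Python. The tiny link graph and the damping factor of 0.85 are illustrative assumptions, not part of any real search engine:

```python
# Minimal PageRank sketch (random-surfer model) over a hypothetical
# three-page site. d = 0.85 is the commonly cited damping factor.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}          # start uniform
    for _ in range(iterations):
        # (1 - d)/n models the surfer jumping to a random page
        new_rank = {p: (1 - d) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                    # dangling page: spread evenly
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:                               # split equity across outlinks
                share = d * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
}
scores = pagerank(graph)
```

Note how "home", which receives the most inbound link equity (a full share from "about" plus half of "blog"'s), ends up with the highest score, while "blog", linked only once, ends up lowest.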
Content optimization cycle
- Research high-search-volume keywords.
- Place keywords in the title tag, meta description, headings, and body.
- Publish, then update regularly to invite fresh crawls.

---

🔍 Key Comparisons

White-hat vs. black-hat SEO
- White-hat: follows guidelines, focuses on user value, low penalty risk.
- Black-hat: hidden text, cloaking, paid links; high penalty risk.

SEO vs. SEM
- SEO: unpaid; builds relevance and authority over time.
- SEM: paid ads; immediate visibility, costs per click.

Robots.txt vs. meta robots
- Robots.txt: hint for crawlers; may be ignored.
- Meta robots: explicit "noindex" directive; reliably blocks indexing.

On-page vs. off-page factors
- On-page: keyword placement, tags, structure, internal links.
- Off-page: inbound links, PageRank, external authority signals.

---

⚠️ Common Misunderstandings

- "Nofollow removes PageRank" – Since 2020 it is only a hint; it no longer fully blocks link equity.
- "More keywords = higher rank" – Keyword stuffing can be penalized; relevance and natural usage matter.
- "Robots.txt blocks indexing" – It only blocks crawling; pages may still be indexed via external links.
- "SEO guarantees traffic" – Algorithm updates can cause sudden rank drops; ROI is not guaranteed.

---

🧠 Mental Models / Intuition

- Link equity as water flow – Think of inbound links as pipes delivering water (PageRank). A few high-quality pipes deliver more water than many low-quality ones.
- Crawl budget = search engine time – Search engines allocate limited "time" per site; make it count with clean URLs, good internal linking, and updated content.
- Mobile-first = "front door" – The mobile version is the front door for indexing; if it's broken, the whole house stays hidden.

---

🚩 Exceptions & Edge Cases

- Meta robots "noindex, follow" – Allows crawlers to follow links (passing equity) while keeping the page out of SERPs.
- Canonical on duplicate content – Must point to the preferred URL; mis-pointed canonicals can waste equity.
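The two edge cases above can be illustrated with minimal, hypothetical markup (the URL is a placeholder); note that the two tags belong on different pages: "noindex, follow" goes on a page you want excluded, while the canonical tag goes on a duplicate of a page you want kept.

```html
<!-- On a page to exclude from SERPs while still passing link equity: -->
<meta name="robots" content="noindex, follow">

<!-- On a duplicate page, pointing at the preferred URL: -->
<link rel="canonical" href="https://www.example.com/preferred-page/">
```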
- International targeting – hreflang tags are needed only when you have multiple language/region versions; otherwise they're ignored.

---

📍 When to Use Which

- Use an XML sitemap when you have many pages or a deep site architecture.
- Use robots.txt to block large sections (e.g., admin panels) but keep meta robots for precise "noindex".
- Choose white-hat tactics for long-term stability; reserve black-hat only if you accept high penalty risk (not recommended for exams).
- Apply mobile-first checks to any new site or major redesign.
- Deploy hreflang when serving the same content in multiple languages/regions.

---

👀 Patterns to Recognize

- Algorithm update clues – Sudden traffic drop + low-quality content → Panda; sudden link-profile loss → Penguin.
- High bounce rate + low dwell time – Signals to search engines that page relevance is low → may hurt rankings.
- Duplicate URL patterns – Presence of `?session=` or tracking parameters → need a canonical tag or redirect.

---

🗂️ Exam Traps

- "Nofollow always blocks PageRank" – The 2020 change makes it a hint only.
- "Robots.txt guarantees a page won't appear in SERPs" – It only blocks crawling; external links can still cause indexing.
- "More backlinks always improve rank" – Quality matters more than sheer quantity; spammy links trigger Penguin penalties.
- "Mobile-first indexing only matters for responsive sites" – It matters for all sites; a non-responsive mobile version can be ignored entirely.

---
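A minimal hreflang setup for the multi-language case might look like the following sketch (the domain and paths are placeholders). Each language version should carry the full set of alternates, including a link to itself, and `x-default` names the fallback for unmatched locales:

```html
<!-- In the <head> of every language version of the page: -->
<link rel="alternate" hreflang="en" href="https://www.example.com/en/page/">
<link rel="alternate" hreflang="de" href="https://www.example.com/de/page/">
<link rel="alternate" hreflang="x-default" href="https://www.example.com/page/">
```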