Questionnaire Study Guide
📖 Core Concepts
Questionnaire – a research tool consisting of a set of written questions used to collect data from respondents.
Variable‑based questionnaire – measures separate variables (e.g., behaviours, facts).
Scale‑based questionnaire – aggregates items into a composite score that taps a latent trait (e.g., attitude).
Structured vs. Unstructured – Structured: identical item order for everyone; Unstructured: free‑form text, no fixed format.
Closed‑ended vs. Open‑ended – Closed: respondent chooses from given options; Open: respondent generates their own answer, later coded.
Response scales – dichotomous (yes/no), nominal‑polytomous (unordered categories), ordinal‑polytomous (ordered categories), bounded continuous (numeric range).
Multi‑item scale – ≥3 items measuring the same construct, usually on a 5‑ to 7‑point Likert‑type rating scale.
Reliability & validity – reliability (internal consistency, test‑retest) and validity (content, construct, criterion) together ensure a scale measures what it should, and does so consistently.
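The four response formats above imply different coding schemes once data are collected. A minimal sketch (the category labels and the 0–100 range are invented for illustration):

```python
# Illustrative coding schemes for the four response-scale types (hypothetical labels).

dichotomous = {"no": 0, "yes": 1}                     # two mutually exclusive options

nominal = {"bus": 1, "car": 2, "bike": 3, "walk": 4}  # unordered categories: codes are
                                                      # arbitrary labels, not quantities

ordinal = {                                           # ordered categories: codes preserve
    "strongly disagree": 1,                           # rank, but intervals between codes
    "disagree": 2,                                    # are not guaranteed to be equal
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def bounded_continuous(value, lo=0.0, hi=100.0):
    """Numeric answer restricted to a fixed range (bounded continuous)."""
    if not (lo <= value <= hi):
        raise ValueError(f"answer must lie in [{lo}, {hi}]")
    return value

print(dichotomous["yes"], nominal["bike"], ordinal["agree"], bounded_continuous(72.5))
```

Note that only the ordinal and continuous codes carry quantitative meaning; averaging nominal codes would be a category error.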
📌 Must Remember
Standardized answers → easy coding & analysis.
Exhaustive & mutually exclusive options are required for closed‑ended items.
One construct per item – avoid double‑barreled questions.
Positive wording preferred; avoid negatives & double negatives.
Logical flow: screening → warm‑up → main items → sensitive/difficult → demographics.
Multi‑item scales need ≥3–5 items, balanced verbal anchors, and a consistent rating scale.
Reliability types: internal (item‑item consistency) and test‑retest (stability over time).
Validity hierarchy: content → construct → criterion.
TRAPD translation steps – Translation, Review, Adjudication, Pretest, Documentation.
🔄 Key Processes
Designing a questionnaire
Define constructs → write one‑construct items → choose wording (positive, clear).
Decide questionnaire type (variable‑based vs. scale‑based; structured vs. mixed).
Select response format (dichotomous, nominal, ordinal, continuous).
Order items using logical and sensitivity progression.
Building a multi‑item scale
Draft ≥5 items per construct.
Apply a consistent 5‑ to 7‑point Likert rating (e.g., 1 = Strongly disagree, 7 = Strongly agree).
Conduct a pilot, run a factor analysis → keep items that load on the intended factor, drop weak items.
Compute internal reliability (e.g., Cronbach’s α).
Translation (TRAPD)
Translation: multiple translators produce drafts.
Review: compare drafts, resolve discrepancies.
Adjudication: expert panel decides final wording.
Pretest: administer to native speakers, check comprehension.
Documentation: record decisions for future reference.
🔍 Key Comparisons
Variable‑based vs. Scale‑based – measures separate facts vs. aggregates into a latent score.
Structured vs. Unstructured – fixed order & options vs. free‑form text.
Open‑ended vs. Closed‑ended – respondent creates answer vs. selects from list.
Dichotomous vs. Nominal‑polytomous – 2 mutually exclusive options vs. >2 unordered categories.
Ordinal‑polytomous vs. Bounded continuous – ordered categories (e.g., “agree” levels) vs. numeric interval (e.g., 0–100).
⚠️ Common Misunderstandings
“Open‑ended = no coding needed.” – responses must still be coded for analysis.
“More items always improve reliability.” – irrelevant or poorly worded items can lower internal consistency.
“Translation is word‑for‑word.” – cultural adaptation is essential; literal translation may change meaning.
“All respondents understand every item.” – never assume universal literacy or comprehension; pilot test items for clarity.
🧠 Mental Models / Intuition
“One‑construct, one‑item” – picture each question as a single spotlight shining on one trait.
“Scale as a ruler” – a multi‑item scale is like a ruler: each tick (item) adds precision to the measurement of an invisible length (latent trait).
“Flow as a story arc” – start easy (intro), build tension (core), climax (sensitive), wrap up (demographics).
🚩 Exceptions & Edge Cases
Low‑literacy populations – may require pictorial or interviewer‑administered modes.
Highly sensitive topics – sometimes better placed early if respondents’ engagement (“response mode”) cannot be guaranteed later in the questionnaire.
Online surveys – risk of extremely low response rates; may need incentives or follow‑ups.
📍 When to Use Which
Variable‑based questionnaire → when you need distinct, factual data points (behaviour, demographics).
Scale‑based questionnaire → when measuring attitudes, socioeconomic status, or other latent constructs.
Structured format → large samples, need for comparability.
Unstructured/pictorial → exploratory work, low‑literacy or visual‑learning groups.
Dichotomous → simple presence/absence decisions, screening.
Ordinal‑polytomous → attitudinal intensity (e.g., Likert).
Bounded continuous → precise numeric input (e.g., temperature, rating 0–100).
👀 Patterns to Recognize
Mutual exclusivity & exhaustiveness → every closed‑ended set should cover all possible answers without overlap.
Balanced anchors → Likert response options should be symmetric around the midpoint, with roughly equal intervals between points (e.g., “Strongly disagree” ↔ “Strongly agree”).
Skip‑logic branches → look for “If = yes, go to Q5; else go to Q6.”
Item‑total correlation → low correlation flags a problematic item in a scale.
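The item‑total pattern can be sketched directly: correlate each item with the sum of the *other* items (the corrected item‑total correlation), and a low or negative value flags a misfit. The scores below are hypothetical, with one item deliberately inconsistent:

```python
# Corrected item-total correlation: each item vs. the sum of the remaining items.
# A low or negative value flags an item that does not fit the scale.

def pearson(xs, ys):
    """Pearson correlation of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def item_total_correlations(items):
    """items: one list of respondent scores per item (all equal length)."""
    k, n = len(items), len(items[0])
    corrs = []
    for i in range(k):
        rest = [sum(items[j][r] for j in range(k) if j != i) for r in range(n)]
        corrs.append(pearson(items[i], rest))
    return corrs

items = [
    [5, 6, 4, 7, 5],   # item 1
    [4, 6, 5, 7, 4],   # item 2
    [5, 7, 4, 6, 5],   # item 3
    [3, 2, 6, 2, 5],   # item 4 — deliberately inconsistent with the others
]
corrs = item_total_correlations(items)
weakest = corrs.index(min(corrs))
print(f"weakest item: {weakest + 1}")   # → weakest item: 4
```

In scale development, an item whose corrected item‑total correlation is low (a common rule of thumb is below ~0.3) or negative is a candidate for rewording or removal.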
🗂️ Exam Traps
“All closed‑ended questions are dichotomous.” – false; closed‑ended includes nominal and ordinal polytomous scales.
“A multi‑item scale must have exactly 5 items.” – not required; ≥3 is sufficient if reliability is high.
“Translation guarantees equivalence.” – without pretesting, cultural nuance can be lost, leading to measurement error.
“Higher response rate always means better data.” – if non‑respondents differ systematically, bias remains.
“Skip logic can be ignored in analysis.” – it changes the sample composition for downstream items; must be accounted for.