
Chapter 7 - Research Methods

Got it. I'll slow it down and actually explain things so you can understand, not just memorize.

🔬 Research Methods – Full Study Guide (Clear + Complete)

1. Variables (this is foundational)

In any study, you are dealing with variables.

The independent variable (IV) is what the researcher changes or controls. Think: the cause.

The dependent variable (DV) is what gets measured. Think: the effect.

Example:

If a study tests whether exercise lowers heart rate:

Exercise = independent variable

Heart rate = dependent variable

If a question says the dependent variable is manipulated, that is wrong: only the independent variable is manipulated; the dependent variable is measured.

2. Levels of Evidence (which studies are strongest)

Not all studies are equal. Some are more reliable than others.

From strongest to weakest:

Systematic reviews and meta-analyses (they combine multiple studies)

Randomized Controlled Trials (RCTs)

Cohort studies

Case-control studies

Cross-sectional studies

Case studies or reports

Key idea:

If a question claims a cohort study is the highest level of evidence, that is false. Systematic reviews and meta-analyses rank highest.

3. Study Designs (you need to recognize these instantly)

Case Study

This focuses on one person or a very small group. It gives deep detail but cannot be generalized to everyone.

Cross-Sectional Study

This looks at data at a single point in time.

Think of it as a snapshot.

Example: surveying people today about their diet and weight.

It tells you what is happening, not what caused it.

Cohort Study

This follows a group of people over time.

You start with an exposure (like smoking) and track what happens.

Example: follow smokers vs non-smokers for 10 years and see who develops lung disease.

Case-Control Study

This starts with the outcome, then looks backward.

Example:

Group 1: people with cancer

Group 2: people without cancer

Then you look back to see differences (like smoking history)

Your exam cancer question = case-control study

4. Validity (this is heavily tested)

Validity asks: are we measuring what we think we're measuring?

Internal Validity

This is about cause and effect.

It asks: did the independent variable actually cause the change?

High internal validity means:

Few confounding variables

Strong control

External Validity

This is about generalization.

It asks: can we apply these results to the real world?

Example:

If a study only used college athletes, can you apply it to older adults? Probably not.

Construct Validity (most important type)

This asks whether the test truly measures the concept.

Example:

If you're measuring intelligence, does your test actually measure intelligence, or just memory?

Face Validity

This is the weakest type.

It just means the test looks like it works.

No deep proof.

5. Reliability (consistency)

Reliability asks: are the results consistent?

Intrarater Reliability

Same person measures something multiple times and gets the same result.

Interrater Reliability

Different people measure the same thing and get the same result.
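Interrater agreement is often quantified with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A minimal sketch (the two rater lists below are hypothetical):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored the same
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's label frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical: two clinicians rating the same 10 patients
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohens_kappa(a, b), 2))  # → 0.58
```

A kappa of 1.0 means perfect agreement; 0 means no better than chance.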

6. Bias (common mistakes in research)

Bias is anything that skews results.

Sampling Bias

Your sample does not represent the population.

Example: only surveying athletes when studying all adults.

Hawthorne Effect

People change behavior because they know they are being watched.

Experimenter Bias

The researcher unintentionally influences results.

Placebo Effect

People improve because they believe they are receiving treatment.

7. Peer Review

Before research is published, experts evaluate it.

This is called the peer review process.

It helps catch errors and improves quality.

8. Primary Sources

A primary source is original research.

Always better than summaries when possible.

9. MAXMINCON Principle

This principle (sometimes abbreviated MAXICON) is about managing variance: MAXimize systematic (true) variance, MINimize error variance, CONtrol extraneous variance.

If a statement says it minimizes true variance, that is wrong.

It minimizes error variance, not real differences.

10. Research Proposal

A proposal includes:

Introduction

Literature review

Methods

It does NOT include results yet.

11. Sampling Methods

Random Sampling

Everyone has an equal chance.

Stratified Random Sampling

You divide people into groups (like age or gender), then sample from each.

This improves representation.

Convenience Sampling

You take whoever is easy to access.

This is weak and often biased.
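The three sampling strategies above can be sketched with Python's random module (the population and strata here are made up for illustration):

```python
import random

rng = random.Random(42)  # seeded so the example is reproducible

# Hypothetical population: 60 younger adults, 40 older adults
population = [{"id": i, "group": "young" if i < 60 else "old"}
              for i in range(100)]

# Simple random sampling: every person has an equal chance of selection
simple = rng.sample(population, 10)

# Stratified random sampling: sample each stratum separately,
# in proportion to its share of the population (6 young, 4 old)
young = [p for p in population if p["group"] == "young"]
old = [p for p in population if p["group"] == "old"]
stratified = rng.sample(young, 6) + rng.sample(old, 4)

# Convenience sampling: whoever is easiest to reach (here, the first 10).
# Note the bias: this "sample" contains only young participants.
convenience = population[:10]
```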

12. Inclusion vs Exclusion Criteria

Inclusion criteria = who is allowed in the study

Exclusion criteria = who is not allowed
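Applying these criteria is just filtering. A sketch with made-up criteria (ages 18-65, non-smokers):

```python
# Hypothetical participant records
volunteers = [
    {"name": "A", "age": 25, "smoker": False},
    {"name": "B", "age": 70, "smoker": False},  # fails inclusion: too old
    {"name": "C", "age": 40, "smoker": True},   # hits exclusion: smoker
    {"name": "D", "age": 55, "smoker": False},
]

# Inclusion: 18-65 years old. Exclusion: current smokers.
eligible = [v for v in volunteers
            if 18 <= v["age"] <= 65 and not v["smoker"]]

print([v["name"] for v in eligible])  # → ['A', 'D']
```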

13. Power Analysis

This determines how many participants you need.

Too few = underpowered (you may miss a real effect)

Too many = waste of resources
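A rough sample-size estimate for comparing two groups can be computed with the standard normal approximation; the effect size, alpha, and power below are illustrative defaults, not values from this guide:

```python
from statistics import NormalDist
import math

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sample comparison of means
    (normal approximation; effect_size is Cohen's d)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-tailed critical value
    z_beta = NormalDist().inv_cdf(power)           # value for desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(sample_size_per_group(0.5))  # medium effect → 63 per group
```

Smaller effects need far more participants: halving the effect size roughly quadruples the required n.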

14. Likert Scale

This is a rating scale.

Example:

Strongly agree → Agree → Neutral → Disagree → Strongly disagree
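Likert responses are ordinal, so to summarize them you typically map each label to a numeric code first (a minimal sketch with hypothetical responses):

```python
# Map Likert labels to numeric codes so responses can be summarized
SCALE = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
         "Agree": 4, "Strongly agree": 5}

responses = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]
scores = [SCALE[r] for r in responses]
print(sum(scores) / len(scores))  # → 3.6
```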

15. Controlling Bias (important short answer)

Ways to reduce bias:

Randomization (assign groups randomly)

Blinding (participants or researchers don't know groups)

Control groups

Standard procedures
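Randomization from the list above can be sketched as a shuffle-and-split (the helper name is my own, not a standard API):

```python
import random

def random_assign(participants, seed=None):
    """Shuffle participants, then split them into treatment and control."""
    rng = random.Random(seed)          # seed only for reproducible demos
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (treatment, control)

treatment, control = random_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=1)
```

Because assignment ignores every participant characteristic, known and unknown confounders tend to balance out across groups.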

16. Hypothesis

A hypothesis is a testable prediction.

It must clearly connect variables.

Example:

"People who sleep more will have faster reaction times."

17. Research Process (know this order exactly)

Identify the problem → Review literature → Form hypothesis → Design study → Collect data → Analyze data → Draw conclusions → Communicate results

18. Research Protocol (what goes into a study plan)

A protocol includes:

The research question

Study design

Sample/population

Inclusion and exclusion criteria

Variables and how they are measured

Data collection methods

Data analysis plan

Ethical considerations

🧠 What You Actually Need to Be Ready For

Most questions are pattern recognition. Here's how to think:

"At one point in time" → cross-sectional

"Looking backward" → case-control

"Following over time" → cohort

"Generalize results" → external validity

"Cause and effect control" → internal validity

"Same tester repeating" → intrarater reliability

"People act different when watched" → Hawthorne effect

If you want to lock this in fast, I can run you through a realistic practice test that matches your exam exactly.
