# Research Methods {#sec-researchMethods}
## Getting Started {#sec-researchMethodsGettingStarted}
### Load Packages {#sec-researchMethodsLoadPackages}
```{r}
```
## Sample vs Population {#sec-sampleVsPopulation}
In research, it is important to distinguish between the sample and the target population.
The target *population* is who you want your study's findings to generalize to.
For instance, if we want our findings to lead to inferences we can draw regarding all current NFL players, then NFL players are our target population.
However, despite our best efforts to recruit all NFL players into our study, we may not succeed in doing that.
The participants (i.e., people or players) who we successfully recruit to be in our study represent our *sample*.
The number of participants in the study is our *sample size*.
It is rare for the sample to include all people who are in the target population.
It can be costly to recruit large samples, and many potential participants may decline to participate for a variety of reasons (insufficient time, lack of interest in the study, distrust of scientists, etc.).
Thus, our goals are (a) to recruit as many people from the population as possible and (b) for the sample to be as *representative* of the population as possible.
To increase the representativeness of the sample (with respect to the population), we might draw a *random sample*, in which each person in the population (i.e., each NFL player) has an equal likelihood of being selected.
For instance, we might randomly select 250 players to recruit to the study.
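As an illustration, here is a minimal sketch of drawing such a random sample in R, using hypothetical player IDs (the roster size and seed below are arbitrary):

```{r}
# Hypothetical IDs for every player in the target population
populationIDs <- paste0("player", 1:1696)

# Draw a simple random sample of 250 players; each player has an equal
# likelihood of being selected
set.seed(52242)
sampledIDs <- sample(populationIDs, size = 250)

head(sampledIDs)
```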
True random samples, though strong in aspiration, are difficult and costly to achieve.
In reality, many researchers conduct convenience sampling.
A convenience sample is recruited because it is convenient (i.e., less costly and time-consuming).
For instance, many studies examine college students—in part, because they are easy to recruit.
If our target population is NFL players but we are unable to recruit NFL players into our study, we could easily recruit a large sample of college students.
Although the convenience sample may afford a very large sample, the college student sample may not be representative of the target population (NFL players).
Thus, the findings in our study may not *generalize* to NFL players—that is, what we learn in college students may not apply in the same way among NFL players.
For instance, if we learn that consumption of sports drinks (compared to drinking only water) improves running speed among college students, that may not be the case among NFL players.
## Research Questions, Hypotheses, and Predictions {#sec-hypothesisVsPrediction}
A research question is a question that the investigator (you!) wants to know the answer to.
For example, a research question might be: "Does consumption of sports drink improve player performance?"
A hypothesis is a proposed explanation.
A prediction is "the expected result of a test that is derived, by deduction, from a hypothesis or theory" [@Eastwell2014, p. 17; archived at <https://perma.cc/8EX4-8JYN>].
Here is an example of a hypothesis and the resulting prediction:
> The present study evaluates whether consumption of sports drink improves player performance.
> I <u>*hypothesize*</u> that consumption of sports drink leads football players to perform better in games because of greater endurance owing to restoration of electrolytes.
> If the hypothesis is true, I <u>*predict*</u> that players who consume sports drink during a game will score more fantasy points than players who do not consume sports drink during the game.
## Research Designs {#sec-researchDesigns}
There are three broad types of research designs:
- experiment
- correlational/observational study
- case study
### Experiment {#sec-experiment}
In an *experiment*, there are one or more things (i.e., variables) that we manipulate to see how the manipulation influences the process of interest.
The variable that we manipulate is the *independent variable*.
By contrast, the *dependent variable* is the variable that we evaluate to determine whether it was influenced by the manipulation (i.e., by the independent variable).
Besides the independent and dependent variables, the researcher attempts to hold everything else constant through processes including standardization and random assignment.
*Standardization* involves using the same procedures to assess each participant, so that scores can be fairly compared across participants (and groups).
*Random assignment* involves randomly assigning participants to conditions of the independent variable, so the people in each condition are comparable and do not differ systematically.
#### Intervention Study {#sec-intervention}
An intervention study is a study that involves some modification (e.g., a treatment) with the intent to improve people's standing on the dependent variable (e.g., depression).
Some intervention studies have a control group, whereas other intervention studies do not.
Inclusion of a control group is valuable; without a control group, you do not know whether any apparent gains in the treatment condition were due to the treatment per se versus just the mere passage of time, regression effects, or other things that were going on in the participants' lives.
An intervention that includes random assignment (e.g., to the intervention or control group) is an experiment.
A randomized controlled trial (RCT) is an example of an experiment because it is an intervention with random assignment.
For instance, we may be interested in evaluating whether players perform better (e.g., run faster) when they drink a sports drink compared to when they drink only water.
Our hypothesis might be that players will perform better when they drink a sports drink (compared to when they drink only water), for the reasons specified in @sec-hypothesisVsPrediction.
To test this research question and hypothesis, we might conduct an experiment by randomly assigning some players during practice to receive a sports drink and some players to receive only water.
In this case, our independent variable is whether the player receives a sports drink.
Our dependent variable might be their 40-yard dash time during practice.
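Below is a minimal sketch of how such an experiment could be analyzed in R, using simulated (hypothetical) data: players are randomly assigned to condition, and the 40-yard dash times of the two conditions are then compared:

```{r}
set.seed(52242)

# Randomly assign 100 hypothetical players to the two conditions
condition <- sample(rep(c("sports drink", "water"), each = 50))

# Simulate 40-yard dash times (in seconds); this example builds in a small
# benefit of the sports drink
dashTime <- rnorm(100, mean = 4.8, sd = 0.2) - 0.05 * (condition == "sports drink")

# Compare the average dash time of the two conditions
t.test(dashTime ~ condition)
```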
### Correlational/Observational Study {#sec-correlationalStudy}
In a correlational (aka observational) study, we do not manipulate a variable to see how the manipulation influences another variable.
Instead, we examine how two variables, a predictor and an outcome variable, are associated.
The hypothesized cause is called the predictor variable.
The hypothesized effect is called the outcome variable.
In this way, the predictor variable is similar to the independent variable, and the outcome variable is similar to the dependent variable.
However, unlike the independent and dependent variables in an experiment, the predictor and outcome variables in a correlational study are not manipulated.
For instance, to use a correlational study to test the possibility that players who drink sports drinks perform better than players who drink only water, we could examine whether the players who drink sports drinks during a game score more fantasy points than players who drink only water during the game.
In this case, our predictor variable is whether the player drinks sports drinks during a game.
Our outcome variable is the number of fantasy points the player scored.
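As a minimal sketch with simulated (hypothetical) data, the association between the (dichotomous) predictor and the outcome could be estimated with a point-biserial correlation:

```{r}
set.seed(52242)

# Simulated (hypothetical) observational data: whether each player drank a
# sports drink during the game (1 = yes, 0 = no) and their fantasy points
drankSportsDrink <- rbinom(100, size = 1, prob = 0.6)
fantasyPoints <- rnorm(100, mean = 10, sd = 5) + 2 * drankSportsDrink

# Association between the predictor and the outcome
cor.test(drankSportsDrink, fantasyPoints)
```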
#### Correlation Does Not Imply Causation {#sec-correlationCausation}
As the maxim goes, "correlation does not imply causation"—just because two variables are associated does not necessarily mean that they are causally related.
Just because `X` is associated with `Y` does not mean that `X` causes `Y`.
Consider that you find an association between variables `X` and `Y`.
There are several reasons why you might observe an association between `X` and `Y`:
- `X` causes `Y`
- `Y` causes `X`
- `X` and `Y` are bidirectional: `X` causes `Y` and `Y` causes `X`
- a third variable (i.e., confound), `Z`, influences both `X` and `Y`
- the association between `X` and `Y` is spurious
For instance, one possibility is that the association we observed reflects our hypothesis that `X` causes `Y`, as depicted in @fig-correlationAndCausation1.
That is, consumption of more sports drink may improve players' performance.
::: {#fig-correlationAndCausation1}
![](images/correlationAndCausation_1.png){fig-alt="Hypothesized Causal Effect Based on an Observed Association Between `X` and `Y`, Such That `X` Causes `Y`."}
Hypothesized Causal Effect Based on an Observed Association Between `X` and `Y`, Such That `X` Causes `Y`.
:::
However, a second possibility is that the association reflects the opposite direction of effect, where `Y` actually causes `X`, as depicted in @fig-correlationAndCausation2.
For instance, greater performance may lead players to drink more sports drink (rather than the reverse).
::: {#fig-correlationAndCausation2}
![](images/correlationAndCausation_2.png){fig-alt="Reverse (Opposite) Direction of Effect From the Hypothesized Effect, Where `Y` Causes `X`."}
Reverse (Opposite) Direction of Effect From the Hypothesized Effect, Where `Y` Causes `X`.
:::
A third possibility is that the association reflects a bidirectional effect, where `X` causes `Y` and `Y` causes `X`, as depicted in @fig-correlationAndCausation3.
For instance, consumption of more sports drink may improve players' performance, and greater performance in turn may lead players to drink more sports drink.
::: {#fig-correlationAndCausation3}
![](images/correlationAndCausation_3.png){fig-alt="Bidirectional Effect Between `X` and `Y`, such that `X` causes `Y` and `Y` causes `X`."}
Bidirectional Effect Between `X` and `Y`, such that `X` causes `Y` and `Y` causes `X`.
:::
A fourth possibility is that the association could reflect the influence of a third variable.
If a third variable is a common cause of each and accounts for their association, it is a *confound*.
An observed association between `X` and `Y` could reflect a confound—i.e., a cause (`Z`) that influences both `X` and `Y`, which explains why `X` and `Y` are correlated even though they are not causally related.
A third variable confound that is a common cause of both `X` and `Y` is depicted in @fig-correlationAndCausation4.
For instance, it may not be that sport drink consumption per se influences player performance; rather, it may be that players who are more intelligent or have more financial resources tend to drink more sports drinks and also tend to perform better.
In this case, intelligence or financial resources may be a confound that influences both sports drink consumption and player performance; sports drink consumption—though correlated with player performance—does not itself influence player performance.
For another example, consider that ice cream sales are associated with shark attacks.
It is unlikely that more people eating ice cream leads to more shark attacks.
There is likely a third variable—heat waves—that is a confound because it influences both ice cream sales and shark attacks and explains their association.
::: {#fig-correlationAndCausation4}
![](images/correlationAndCausation_4.png){fig-alt="Confounded Association Between `X` and `Y` due to a Common Cause, `Z`."}
Confounded Association Between `X` and `Y` due to a Common Cause, `Z`.
:::
Lastly, the association might be spurious.
It might just reflect random variation (i.e., chance), such that what appeared to be an association in the original dataset does not hold when the association is tested in a new, independent sample.
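To make the third-variable (confound) possibility concrete, here is a minimal simulation sketch in which a confound (`Z`) causes both `X` and `Y`; `X` and `Y` are correlated even though neither influences the other, and the association shrinks once `Z` is included as a covariate:

```{r}
set.seed(52242)

# Simulate a confound (Z) that causes both X and Y; X has no effect on Y
Z <- rnorm(1000)
X <- 0.7 * Z + rnorm(1000)
Y <- 0.7 * Z + rnorm(1000)

# X and Y are correlated despite not being causally related
cor(X, Y)

# The association between X and Y is much weaker after controlling for Z
coef(summary(lm(Y ~ X)))
coef(summary(lm(Y ~ X + Z)))
```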
### Case Study {#sec-caseStudy}
In a case study, we assess a small sample of individuals (commonly only one person or a few people), often with rich qualitative information.
Themes may be coded from the qualitative information, which may help inform inferences about whether some process may have played a role in influencing the outcome of interest.
The inferences are then drawn in a subjective, qualitative way.
Testimonials and anecdotes are examples of case studies.
For instance, to use a case study to evaluate the possibility that players who drink sports drinks perform better than players who drink only water, we could conduct an in-depth interview with a player.
In the interview, we might ask the player how they performed in games with versus without a sports drink and have them discuss whether they believe the sports drink improved their performance (and if so, how).
Then, based on the player's responses, we might code the responses to extract themes and to make a qualitative judgement of whether or not the player likely performed better during games in which they had a sports drink.
### Other Features of the Research Design {#sec-researchDesignOtherFeatures}
#### Number of Timepoints {#sec-researchDesignTimepoints}
In addition to whether the research design is an [experiment](#sec-experiment), [correlational/observational study](#sec-correlationalStudy), or a [case study](#sec-caseStudy), a research design can also have one or multiple timepoints.
The number of timepoints allows studies to be characterized as one of the following:
- cross-sectional
- longitudinal
##### Cross-Sectional {#sec-crossSectional}
A *cross-sectional study* is a study with one timepoint.
For instance, in a cross-sectional study evaluating whether having a sports drink improves player performance, we might assess players' drinking behavior and performance during only game 1.
Cross-sectional studies are more common than longitudinal studies because cross-sectional studies are less costly and time-consuming.
They can provide a helpful starting point to test findings more rigorously in subsequent longitudinal studies.
##### Longitudinal Design {#sec-longitudinal}
A *longitudinal study* is a study with more than one timepoint.
When the <u>same measures</u> are assessed at each of multiple timepoints, we refer to this as a "repeated measures" design.
In a longitudinal study evaluating whether having a sports drink improves player performance, we might assess players' drinking behavior and performance during each game of the season, and possibly across multiple seasons.
Longitudinal studies are less common than cross-sectional studies because longitudinal studies are more costly and time-consuming.
Nevertheless, longitudinal studies can allow us to test our hypotheses more rigorously, because they can allow us to test whether changes in the predictor/independent variable lead to changes in the outcome/dependent variable.
Thus, compared to cross-sectional studies, longitudinal studies can provide greater confidence in causal inferences.
#### Within- or Between-Subject {#sec-withinBetweenSubject}
A research design can also be within-subject, between-subject, or both.
A study can involve both within-subject and between-subject comparisons if one predictor/independent variable is within-subject and another predictor/independent variable is between-subject.
##### Within-Subject Design {#sec-withinSubject}
A *within-subject design* is one in which each participant (i.e., person or player) receives multiple levels of the independent variable (or predictor).
For instance, in an experiment evaluating whether having a sports drink improves player performance, we might assign players to drink the sports drink in the first half of the game and to drink only water in the second half of the game.
Or we could assign some of the players to drink sports drink in the first half and water in the second half, and assign the other players to drink water in the first half and sports drink in the second half.
In a correlational study evaluating whether having a sports drink improves player performance, we might evaluate how within-person changes in sports drink consumption are associated with within-person changes in performance.
That is, we could evaluate, when a given player has a sports drink (or more sports drinks), do they perform better than <u>when the same individual</u> has only water (or fewer sports drinks)?
Within-subject designs tend to have greater statistical power than between-subject designs.
However, within-subject designs often have *carryover effects*.
For instance, consider a study in which we assign players to drink only water in the first and third quarters and to drink sports drink in the second and fourth quarters (an A-B-A-B design).
Drinking sports drink in the second quarter could increase how hydrated a player is throughout the rest of the game, which could lead to altered performance in the third and fourth quarters that is not due to what they drink in the third and fourth quarters.
##### Between-Subject Design {#sec-betweenSubject}
A *between-subject design* is one in which each participant (i.e., person or player) receives only one level of the independent variable.
For instance, in an experiment evaluating whether having a sports drink improves player performance, we might assign some players to drink the sports drink but the other players to drink only water.
In a correlational study evaluating whether having a sports drink improves player performance, we might evaluate whether people who drink sports drinks tend to perform better than players who drink only water.
Or, we could evaluate whether players who drink more sports drinks perform better than players who drink fewer sports drinks (i.e., whether the number of sports drinks consumed during a game is correlated with player performance).
## Research Design Validity {#sec-researchDesignValidity}
Research design validity involves the accuracy of inferences from a study.
There are three types of research design validity:
- internal validity
- external validity
- conclusion validity
### Internal Validity {#sec-internalValidity}
Internal validity is the extent to which we can be confident that the associations identified in the study are causal.
### External Validity {#sec-externalValidity}
External validity is the extent to which we can be confident that findings from the study play out similarly in the real world—that is, the findings generalize to the target population.
### Tradeoffs Between Internal and External Validity {#sec-internalExternalValidityTradeoffs}
There is a tradeoff between [internal](#sec-internalValidity) and [external](#sec-externalValidity) validity—a single research design cannot have both high [internal](#sec-internalValidity) and high [external validity](#sec-externalValidity).
Each study and design has weaknesses.
Some research designs are better suited for making causal inferences, whereas other designs tend to be better suited for making inferences that generalize to the real world.
The research design that is best suited to making causal inferences is an [experiment](#sec-experiment) because it is the design in which the researcher has the greatest control over the variables.
Thus, [experiments](#sec-experiment) tend to have higher [internal validity](#sec-internalValidity) than other research designs.
However, by manipulating one variable and holding everything else constant, the research takes place in a very standardized fashion that can become like studying a process in a vacuum.
So, even if a process is theoretically causal in a vacuum, it may act differently in the real world when it interacts with other processes.
[Correlational designs](#sec-correlationalStudy) have greater capacity for [external validity](#sec-externalValidity) than [experimental designs](#sec-experiment) because the participants can be observed in their natural environments to evaluate how variables are related in the real world.
However, the greater [external validity](#sec-externalValidity) comes at a cost of lower [internal validity](#sec-internalValidity).
[Correlational designs](#sec-correlationalStudy) are not well-positioned to make causal inferences.
[Correlational studies](#sec-correlationalStudy) can account for potential confounds using *covariates* or for the reverse direction of effect using longitudinal designs, but the researcher has less control over the variables than in an [experiment](#sec-experiment).
As the [internal validity](#sec-internalValidity) of a study's design increases, its [external validity](#sec-externalValidity) tends to decrease.
The greater the control we have over the variables (and, therefore, the greater our confidence in causal inferences), the lower the likelihood that the findings reflect what happens in the real world, because we are studying things in a metaphorical vacuum.
Because no single research design can have both high [internal](#sec-internalValidity) and [external](#sec-externalValidity) validity, scientific inquiry needs a combination of many different research designs so we can be more confident in our inferences—[experimental designs](#sec-experiment) for making causal inferences and [correlational designs](#sec-correlationalStudy) for making inferences that are more likely to reflect the real world.
[Case studies](#sec-caseStudy), because they have smaller sample sizes and their inferences are drawn in a subjective, qualitative way, tend to have lower [external validity](#sec-externalValidity) than both [experimental](#sec-experiment) and [correlational](#sec-correlationalStudy) studies.
[Case studies](#sec-caseStudy) also tend to have lower [internal validity](#sec-internalValidity) because they have less control over variables, and thus fail to remove the possibility of illusory correlations, potential confounds, or the reverse direction of effect.
Thus, [case studies](#sec-caseStudy) are among the weakest forms of evidence.
Nevertheless, case studies can still be useful for generating hypotheses that can then be tested empirically with a larger sample in [experimental](#sec-experiment) or [correlational](#sec-correlationalStudy) studies.
### Conclusion Validity {#sec-conclusionValidity}
Conclusion validity is the extent to which a study's conclusions about the associations among variables are reasonable given the data.
That is, were the correct statistical analyses performed, and are the interpretations of the findings from those analyses correct?
## Mediation vs Moderation {#sec-mediationModeration}
Both mediation and moderation involve (at least) three variables:
1. An independent/predictor variable, which will be labeled as `X`.
1. A dependent/outcome variable, which will be labeled as `Y`.
1. The mediator or moderator variable, which will be labeled as `M`.
A mnemonic to help remember the difference between [mediation](#sec-mediation) and [moderation](#sec-moderation) is in @fig-mediationModerationMnemonic.
::: {#fig-mediationModerationMnemonic}
![](images/mediationModerationMnemonic.png){fig-alt="Mediation Versus Moderation Mnemonic."}
Mediation Versus Moderation Mnemonic.
:::
### Mediation {#sec-mediation}
#### Overview {#sec-mediationOverview}
**Mediation** is a <u>causal chain of events</u>, where one variable (a mediator variable) at least partially <u>explains (or accounts for) the association between two other variables</u> (the predictor variable and the outcome variable).
In mediation, a predictor (`X`) leads to a mediator (`M`), which leads to an outcome (`Y`).
Mediation answers the question, "<u>**Why (or how)** does `X` influence `Y`?</u>"
A <u>mediator</u> (`M`) is a variable that helps explain the association between two other variables, and it answers the question of why/how `X` influences `Y`.
That is, the mediator is the variable that helps explain how/why `X` is related to `Y`.
In other words, you can think of the mediator as the <u>mechanism</u> that helps explain why `X` has an impact on `Y`.
The association between `X` and `Y` <u>gets smaller</u> when accounting for `M`.
Visually this can be written as in @fig-mediation:
::: {#fig-mediation}
![](images/mediation.png){fig-alt="Mediation."}
Mediation.
:::
where `X` is causing `M`, which in turn is causing `Y`.
In other words, `X` leads to `M`, and `M` leads to `Y`.
For instance, if we determine that consuming sports drinks improves player performance, we may want to know how/why.
That is, what is the mechanism that leads consumption of sports drinks <u>to improve</u> player performance?
We might hypothesize that consumption of sports drink helps increase a player's hydration, which in turn will improve the player's performance.
In this case, increased hydration <u>mediates</u> (i.e., helps explain or account for) the effect of the sports drink consumption on improved player performance.
Question: <u>Why/how</u> does sports drink consumption lead players to perform better?\
Answer: increased hydration
As a picture, we can draw this association as in @fig-mediationExample:
::: {#fig-mediationExample}
![](images/mediationExample.png){fig-alt="Mediation Example."}
Mediation Example.
:::
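As a rough illustration of how such a model might be examined, here is a minimal sketch using simulated (hypothetical) data and the product-of-coefficients logic from two regression models; dedicated tools (e.g., the `lavaan` package) are commonly used for formal tests of the indirect effect:

```{r}
set.seed(52242)

# Simulate data consistent with the hypothesized causal chain:
# sports drink consumption -> hydration -> performance
sportsDrinks <- rnorm(500, mean = 2, sd = 1)
hydration <- 0.5 * sportsDrinks + rnorm(500)
performance <- 0.6 * hydration + rnorm(500)

# Path a: effect of the predictor (X) on the mediator (M)
pathA <- coef(lm(hydration ~ sportsDrinks))["sportsDrinks"]

# Path b: effect of the mediator (M) on the outcome (Y), controlling for X
pathB <- coef(lm(performance ~ hydration + sportsDrinks))["hydration"]

# The indirect (mediated) effect is the product of paths a and b
pathA * pathB
```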
#### Types of Mediation {#sec-mediationTypes}
##### Full Mediation {#sec-fullMediation}
When one mechanism fully accounts for the effect of the predictor variable on the outcome variable, this is known as **full mediation**, as depicted in @fig-fullMediation:
::: {#fig-fullMediation}
![](images/mediation.png){fig-alt="Full Mediation."}
Full Mediation.
:::
##### Partial Mediation {#sec-partialMediation}
When a single process partially—but not fully—accounts for the effect of the predictor variable on the outcome variable, this is known as **partial mediation**, as depicted in @fig-partialMediation:
::: {#fig-partialMediation}
![](images/mediationPartial.png){fig-alt="Partial Mediation."}
Partial Mediation.
:::
##### Multiple Mediators {#sec-multipleMediators}
In addition, there can be multiple mediators/mechanisms that account for the effect of a predictor variable on an outcome variable, as depicted in @fig-multipleMediators:
::: {#fig-multipleMediators}
![](images/mediationMultiple.png){fig-alt="Multiple Mediators."}
Multiple Mediators.
:::
### Moderation (i.e., Interaction) {#sec-moderation}
#### Overview {#sec-moderationOverview}
**Moderation** (sometimes called an "interaction"), on the other hand, occurs when there is a variable or condition (`M`; called a "moderator") that <u>changes the association</u> between `X` and `Y`.
That is, the effect of the predictor variable on the outcome variable differs at different levels of the moderator variable.
In these cases, `X` and `M` <u>work together</u> to have an effect on `Y`; here `X` <u>does not have a direct effect</u> on `M`.
Moderation answers the question of, "<u>**For whom** does `X` influence `Y`?</u>"
If `X` influences `Y` more strongly for some people or in some circumstances, we would say that there is an interaction such that the <u>effect of `X` on `Y` depends on `M`</u>, as depicted in @fig-moderation:
::: {#fig-moderation}
![](images/moderation.png){fig-alt="Moderation."}
Moderation.
:::
For example, if the effect of consuming sports drinks on player performance differs for Quarterbacks and Wide Receivers, the interaction could be depicted in Figures [-@fig-moderationExamplePathDiagram] and [-@fig-moderationExampleInteractionGraph]:
::: {#fig-moderationExamplePathDiagram}
![](images/moderationExample1.png){fig-alt="Moderation Example: Path Diagram."}
Moderation Example: Path Diagram.
:::
::: {#fig-moderationExampleInteractionGraph}
![](images/moderationExample2.png){fig-alt="Moderation Example: Interaction Graph."}
Moderation Example: Interaction Graph.
:::
An interaction can be identified visually by <u>non-parallel lines</u> at different levels of the moderator.
In this example, the player's position moderates the effect of consuming sports drinks on player performance.
In particular, there is a <u>strong positive association</u> between consuming sports drinks and player performance for Wide Receivers (as evidenced by the upward slope of the best-fit regression line), whereas there is <u>no association</u> between consuming sports drinks and player performance for Quarterbacks (as evidenced by the flat line).
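As a rough sketch, a moderation (interaction) effect like this could be examined in R by including a product term in a regression model; the example below uses simulated (hypothetical) data:

```{r}
set.seed(52242)

# Simulated (hypothetical) data: sports drink consumption, player position,
# and performance, where the effect of sports drinks differs by position
sportsDrinks <- rnorm(200, mean = 2, sd = 1)
position <- sample(c("Quarterback", "Wide Receiver"), size = 200, replace = TRUE)
performance <- 10 + 2 * sportsDrinks * (position == "Wide Receiver") + rnorm(200)

# The sports drinks x position product term estimates the interaction (moderation)
summary(lm(performance ~ sportsDrinks * position))
```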
## Levels of Measurement {#sec-levelsOfMeasurement}
It is important to know the levels of measurement of your data, because the level(s) of measurement of your data constrain the types of comparisons and analyses that you can meaningfully perform.
There are four levels of measurement that any variable can have:
- nominal
- ordinal
- interval
- ratio
Each is described below:
### Nominal {#sec-nominal}
A variable is considered nominal if it is composed of qualitative classifications.
You cannot meaningfully evaluate whether one number in the variable is larger than another number in the variable because higher numbers do not reflect higher levels of the concept.
Examples of nominal variables include:
- sex (e.g., 1 = male; 2 = female)
- race (e.g., 1 = American Indian; 2 = Asian; 3 = Black; 4 = Pacific Islander; 5 = White)
- ethnicity (e.g., 0 = Non-Hispanic/Latino; 1 = Hispanic/Latino)
- zip code
- jersey number
A football player's jersey number is an example of a nominal variable.
A jersey number of 7 is not higher than a jersey number of 6 on any underlying concept of interest.
To examine the central tendency of a nominal variable, you can determine the mode, but you cannot calculate a mean or median.
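Base R does not have a built-in function for the statistical mode, but it can be obtained from a frequency table; here is a minimal sketch using hypothetical jersey numbers:

```{r}
# Hypothetical jersey numbers (a nominal variable)
jerseyNumber <- c(7, 12, 12, 80, 7, 12, 99, 7, 7)

# The mode is the most frequent value
frequencies <- table(jerseyNumber)
names(frequencies)[which.max(frequencies)]
```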
### Ordinal {#sec-ordinal}
A variable is considered ordinal if the classifications are ordered.
However, ordinal variables do not have equally spaced intervals.
Examples of ordinal variables include:
- Likert response scales (e.g., 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree)
- educational attainment (e.g., 1 = no formal education; 2 = elementary school; 3 = middle school; 4 = high school; 5 = college; 6 = graduate degree)
- academic grades on A–F scale (e.g., 1 = A; 2 = B; 3 = C; 4 = D; 5 = F)
- player rank (1 = 1st; 2 = 2nd; 3 = 3rd, etc.)
A football player's fantasy rank is an example of an ordinal variable.
A player with a fantasy rank of 1 has a higher rank than a player with a rank of 2, but it is not known how far apart each player is—i.e., the intervals do not all reflect the same distance.
For instance, the distance between the top-ranked player and the 2nd-best player might be 30 points, whereas the distance between the 2nd-best player and the 3rd-best player might be 2 points.
To examine the central tendency of ordinal data, the median and mode are most appropriate; however, the mean may be used (unlike for nominal data).
### Interval {#sec-interval}
A variable is considered interval if the classifications are ordered (similar to ordinal data) and have equally spaced intervals (unlike ordinal data).
However, interval variables do not have a meaningful zero that reflects absence.
Examples of interval data include:
- temperature on the Fahrenheit or Celsius scale
- time of day
For instance, the temperature difference between 80 and 90 degrees Fahrenheit is the same as the temperature difference between 90 and 100 degrees Fahrenheit.
However, 0 degrees Fahrenheit does not reflect absence of temperature/heat.
Interval data can be meaningfully added or subtracted.
For instance, if a game starts at 4 pm and ends at 7 pm, you know the game lasted 3 hours ($7 - 4 = 3$).
However, interval data cannot be meaningfully multiplied or divided.
For instance, 100 degrees Fahrenheit is not twice as hot as 50 degrees Fahrenheit.
To examine the central tendency of interval data, you can compute the mean, median, or mode.
### Ratio {#sec-ratio}
A variable is considered ratio if the classifications are ordered (similar to ordinal data), have equally spaced intervals (like interval data), and have an absolute zero point that reflects absence of the concept.
Examples of ratio data include:
- temperature on the Kelvin scale
- height
- weight
- age
- distance
- speed
- volume
- time elapsed
- income
- stock price
- years of formal education
- points in football
For instance, points in football have order, equally spaced intervals, and an absolute zero—a team cannot score fewer than zero points, and zero points reflects the absence of points (though it could be argued to be interval data because zero points does not reflect absence of skill).
Ratio data can be meaningfully added, subtracted, multiplied, or divided.
A player who weighs 350 pounds weighs twice as much as someone who weighs 175 pounds.
To examine the central tendency of ratio data, you can compute the mean, median, or mode.
## Psychometrics {#sec-psychometrics}
Below, I provide brief discussions of various aspects of measurement reliability and validity.
For more information on these and other aspects of psychometrics, see @Petersen2024a and @PetersenPrinciplesPsychAssessment.
### Measurement Reliability {#sec-reliability}
The *reliability* of a measure's scores deals with the *consistency* of measurement.
This book focuses on the following types of reliability:
- [test–retest reliability](#sec-testRetestReliability)
- [inter-rater reliability](#sec-interraterReliability)
- [intra-rater reliability](#sec-intraraterReliability)
- [internal consistency](#sec-internalConsistency)
- [parallel-forms reliability](#sec-parallelFormsReliability)
For more information on these and other aspects of reliability, see <https://isaactpetersen.github.io/Principles-Psychological-Assessment/reliability.html> [@Petersen2024a; @PetersenPrinciplesPsychAssessment].
#### Test–Retest Reliability {#sec-testRetestReliability}
Test–retest reliability evaluates the consistency of scores across time.
For a construct that is expected to be stable across time (e.g., hand size in adults), we would expect our measurements to be consistent across time.
The consistency of scores across time can be examined in terms of relative or absolute test–retest reliability.
Relative test–retest reliability—i.e., the consistency of individual differences across time—is commonly evaluated using the coefficient of stability (i.e., the Pearson correlation coefficient).
Absolute test–retest reliability—i.e., the absolute consistency of people's scores across time—is commonly evaluated using the coefficient of repeatability.
```{r}
```
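For instance, a minimal sketch with simulated (hypothetical) scores at two timepoints might look like the following; the coefficient of repeatability is approximated here (under common assumptions) as 1.96 times the standard deviation of the difference scores:

```{r}
set.seed(52242)

# Simulated (hypothetical) scores on the same measure at two timepoints
scoreT1 <- rnorm(100, mean = 50, sd = 10)
scoreT2 <- scoreT1 + rnorm(100, mean = 0, sd = 5)

# Relative test-retest reliability: coefficient of stability
# (the Pearson correlation between timepoints)
cor(scoreT1, scoreT2)

# Absolute test-retest reliability: coefficient of repeatability
# (approximately 1.96 times the SD of the difference scores)
1.96 * sd(scoreT2 - scoreT1)
```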
#### Inter-Rater Reliability {#sec-interraterReliability}
Inter-rater reliability evaluates the consistency of scores across raters.
For instance, if we have a strong measure for assessing college players' aptitude to succeed in the NFL, the measure should yield a similar score for a given player regardless of which (trained) rater (e.g., coach or talent scout) uses it to rate the player.
The consistency of scores across raters is commonly evaluated using the intraclass correlation coefficient (for continuous variables) and Cohen's kappa ($\kappa$; for categorical variables).
```{r}
```
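As a rough sketch, and assuming the `psych` package is installed, inter-rater reliability could be estimated from simulated (hypothetical) ratings as follows:

```{r}
library("psych")

set.seed(52242)

# Simulated (hypothetical) continuous aptitude ratings of 30 players by 3 raters
trueAptitude <- rnorm(30, mean = 50, sd = 10)
ratings <- data.frame(
  rater1 = trueAptitude + rnorm(30, sd = 5),
  rater2 = trueAptitude + rnorm(30, sd = 5),
  rater3 = trueAptitude + rnorm(30, sd = 5))

# Intraclass correlation coefficients (for continuous ratings)
ICC(ratings)

# Simulated (hypothetical) categorical ratings (e.g., 1 = draft, 0 = do not draft)
# from 2 raters who usually (but not always) agree
rater1 <- rbinom(30, size = 1, prob = 0.5)
rater2 <- ifelse(rbinom(30, size = 1, prob = 0.8) == 1, rater1, 1 - rater1)

# Cohen's kappa (for categorical ratings)
cohen.kappa(cbind(rater1, rater2))
```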
#### Intra-Rater Reliability {#sec-intraraterReliability}
Intra-rater reliability evaluates the consistency of scores within a given rater.
If we have a strong measure for assessing college players' aptitude to succeed in the NFL, the measure should yield a similar score for a given player from the same (trained) rater (e.g., coach or talent scout) each time they rate the same player (assuming the player's aptitude has not changed).
The consistency of scores within raters can be evaluated using similar approaches as those evaluating [inter-rater reliability](#sec-interraterReliability).
```{r}
```
#### Internal Consistency {#sec-internalConsistency}
Internal consistency evaluates the consistency of scores across items within a measure.
If we develop a strong questionnaire measure to assess a college player's aptitude to succeed in the NFL, the scores should be relatively consistent across items.
The consistency of scores across items within a measure is commonly evaluated using Cronbach's alpha ($\alpha$) or McDonald's omega ($\omega$).
```{r}
```
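As a rough sketch, and assuming the `psych` package is installed, Cronbach's alpha could be estimated from simulated (hypothetical) item-level data as follows (McDonald's omega can be estimated similarly with `psych::omega()`):

```{r}
library("psych")

set.seed(52242)

# Simulated (hypothetical) responses to 5 questionnaire items that assess the
# same construct (aptitude to succeed in the NFL)
aptitude <- rnorm(200)
items <- data.frame(
  item1 = aptitude + rnorm(200, sd = 0.8),
  item2 = aptitude + rnorm(200, sd = 0.8),
  item3 = aptitude + rnorm(200, sd = 0.8),
  item4 = aptitude + rnorm(200, sd = 0.8),
  item5 = aptitude + rnorm(200, sd = 0.8))

# Cronbach's alpha (internal consistency across the items)
alpha(items)
```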
#### Parallel-Forms Reliability {#sec-parallelFormsReliability}
Parallel-forms reliability evaluates the consistency of scores across different but equivalent forms of a measure.
If we develop two equivalent versions of the Wonderlic Contemporary Cognitive Ability Test (Form A and Form B) so that players sitting next to each other do not receive the same items, we would expect a player's score on Form A would be similar to their score on Form B.
Parallel-forms reliability is commonly evaluated using the coefficient of equivalence (i.e., the Pearson correlation coefficient).
```{r}
```
### Measurement Validity {#sec-validity}
The *validity* of a measure's scores deals with the *accuracy* of measurement.
This book focuses on the following types of validity:
- [face validity](#sec-faceValidity)
- [content validity](#sec-contentValidity)
- [criterion-related validity](#sec-criterionValidity)
- [concurrent (criterion-related) validity](#sec-concurrentValidity)
- [predictive (criterion-related) validity](#sec-predictiveValidity)
- [construct validity](#sec-constructValidity)
- [convergent validity](#sec-convergentValidity)
- [discriminant validity](#sec-discriminantValidity)
- [incremental validity](#sec-incrementalValidity)
- [ecological validity](#sec-ecologicalValidity)
For more information on these and other aspects of validity, see <https://isaactpetersen.github.io/Principles-Psychological-Assessment/validity.html> [@Petersen2024a; @PetersenPrinciplesPsychAssessment].
#### Face Validity {#sec-faceValidity}
*Face validity* evaluates the extent to which a measure "looks like" (on its face) it assesses the construct of interest.
For instance, if a measure is developed to assess aptitude of Wide Receivers for the position, it would be considered to have face validity if everyday (lay) people believe that it assesses aptitude for being a successful Wide Receiver.
#### Content Validity {#sec-contentValidity}
*Content validity* evaluates the extent to which the measure assesses the full breadth of the content, as determined by content experts.
For the measure to have content validity, it should not have gaps (missing content facets) or intrusions (facets of other constructs).
For instance, a strong measure for assessing a player's aptitude to succeed in the NFL might need to include a player's speed, strength, size, lateral quickness, etc.
If the measure is missing their speed, this would be a content gap.
If the measure assesses a construct-irrelevant facet (e.g., their attractiveness), this would be a content intrusion.
#### Criterion-Related Validity {#sec-criterionValidity}
*Criterion-related validity* evaluates the extent to which the measure's scores are related to meaningful variables of interest.
Criterion-related validity is commonly evaluated using a Pearson correlation or some form of regression.
There are two types of criterion-related validity:
- concurrent (criterion-related) validity
- predictive (criterion-related) validity
##### Concurrent (Criterion-Related) Validity {#sec-concurrentValidity}
*Concurrent criterion-related validity* (aka concurrent validity) evaluates the extent to which the measure's scores are related to meaningful variables of interest assessed at the same point in time.
For example, concurrent validity could evaluate whether players' current statistics (e.g., passing yards) are associated with their current fantasy points.
```{r}
```
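For instance, a minimal sketch with simulated (hypothetical) data might correlate players' passing yards with their fantasy points from the same season:

```{r}
set.seed(52242)

# Simulated (hypothetical) same-season passing yards and fantasy points
passingYards <- rnorm(100, mean = 3500, sd = 800)
fantasyPoints <- 0.04 * passingYards + rnorm(100, sd = 30)

# Concurrent validity: association between measures assessed at the same time
cor.test(passingYards, fantasyPoints)
```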
##### Predictive (Criterion-Related) Validity {#sec-predictiveValidity}
*Predictive criterion-related validity* (aka predictive validity) evaluates the extent to which the measure's scores are related to meaningful variables of interest that are assessed at a later point in time.
For example, predictive validity could evaluate whether scores on the measure we developed to assess a player's aptitude to succeed in the NFL predict later performance in the NFL.
```{r}
```
#### Construct Validity {#sec-constructValidity}
*Construct validity* evaluates the extent to which the measure's scores accurately assess the construct of interest.
If we develop a measure with intent to assess aptitude for being a successful Running Back, and it appears to more accurately assess aptitude for being a successful Wide Receiver, then our measure has poor construct validity for assessing aptitude for being a successful Running Back.
Construct validity subsumes [convergent](#sec-convergentValidity) and [discriminant](#sec-discriminantValidity) validity, in addition to all of the other forms of measurement validity.
#### Convergent Validity {#sec-convergentValidity}
*Convergent validity* evaluates the extent to which the measure's scores are related to other measures of the same construct.
For instance, if we develop a new measure to assess intelligence, its scores should be related to scores from other measures designed to assess intelligence (e.g., Wonderlic Contemporary Cognitive Ability Test).
```{r}
```
#### Discriminant Validity {#sec-discriminantValidity}
*Discriminant validity* evaluates the extent to which the measure's scores are unrelated to measures of different constructs.
For instance, if we develop a new measure to assess intelligence, its scores should be less strongly associated with measures of other constructs (e.g., measures of happiness).
```{r}
```
#### Incremental Validity {#sec-incrementalValidity}
*Incremental validity* evaluates the extent to which the measure's scores provide an increase in predictive accuracy compared to other information that is easily and cheaply available.
That is, in order to be useful, a strong measure should tell us something that we did not already know.
For instance, if we develop a strong measure of intelligence, it should result in increased predictive accuracy (for success in the NFL) compared to relying on just the Wonderlic Contemporary Cognitive Ability Test.
```{r}
```
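As a rough sketch with simulated (hypothetical) data, incremental validity could be examined by comparing nested regression models to test whether the new measure explains additional variance in the criterion beyond the existing measure:

```{r}
set.seed(52242)

# Simulated (hypothetical) data: an existing measure (Wonderlic score), a new
# intelligence measure, and a criterion (later performance in the NFL)
wonderlic <- rnorm(200, mean = 21, sd = 7)
newMeasure <- 0.5 * as.numeric(scale(wonderlic)) + rnorm(200)
nflPerformance <- 0.3 * as.numeric(scale(wonderlic)) + 0.3 * newMeasure + rnorm(200)

# Does the new measure improve prediction beyond the existing measure?
model1 <- lm(nflPerformance ~ wonderlic)
model2 <- lm(nflPerformance ~ wonderlic + newMeasure)
anova(model1, model2)
```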
#### Ecological Validity {#sec-ecologicalValidity}
*Ecological validity* evaluates the extent to which the measure's scores are indicative of a person's behavior in the natural environment.
For instance, a measure of a player's speed during a game has higher ecological validity (and is more predictive of their performance) than a measure of their speed during the NFL Combine [@Lyons2011].
Indeed, compared to tests of speed, power, and agility at the NFL Combine, collegiate performance is a stronger predictor of performance in the NFL [@Lyons2011].
That is, previous sports performance is the best predictor of future performance [for a review, see @DenHartigh2018].
### Reliability vs Validity {#sec-reliabilityVsValidity}
Reliability and validity are different but related.
Reliability refers to the *consistency* of scores, whereas validity refers to the *accuracy* of scores.
Validity depends on reliability.
Reliability is necessary—but insufficient for—validity.
That is, consistency is necessary—but insufficient for—accuracy.
As depicted in @fig-reliabilityVsValidity, a measure can be no more valid than it is reliable.
A measure can be consistent but inaccurate; however, a measure cannot be accurate but inconsistent.
::: {#fig-reliabilityVsValidity}
![](images/reliabilityVsValidity.png){fig-alt="Reliability Versus Validity."}
Reliability Versus Validity.
:::
## Conclusion {#sec-researchMethodsConclusion}
There are various types of research designs.
Each type of research design differs in the extent to which it supports the ability to draw causal inferences ([internal validity](#sec-internalValidity)) versus the extent to which it supports the ability to identify processes that generalize to the real world ([external validity](#sec-externalValidity)).
In addition, it is important to understand the distinction between [sample](#sec-sampleVsPopulation) and [population](#sec-sampleVsPopulation), and the distinction between [mediation](#sec-mediation) and [moderation](#sec-moderation).
It is also important to consider the [levels of measurement](#sec-levelsOfMeasurement) used because they constrain the types of analyses that may be performed.
In addition, it is important to consider the [psychometrics](#sec-psychometrics) of measurements, including multiple aspects of [reliability](#sec-reliability) (consistency) and [validity](#sec-validity) (accuracy).
::: {.content-visible when-format="html"}
## Session Info {#sec-researchMethodsSessionInfo}
```{r}
sessionInfo()
```
:::