The Schmidt (1992) Case Study
The Illusion of Conflict
The Scenario
Imagine you are reviewing 21 studies on the validity of an employment test.
- True Population Correlation (\(\rho\)): 0.22
- Sample Size per Study (\(N\)): 68
- Number of Studies (\(k\)): 21
Since we know the “Truth” (\(\rho = 0.22\)), we can see how well individual studies capture it.
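To make the sampling error concrete, here is a minimal simulation sketch of this scenario (illustrative, not Schmidt's original data): it draws \(k = 21\) studies of \(N = 68\) pairs from a bivariate normal population with \(\rho = 0.22\) and records each study's \(r\) and p-value. The function name `simulate_study` and the seed are arbitrary choices; the sketch assumes `numpy` and `scipy` are available.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)   # arbitrary seed for reproducibility
RHO, N, K = 0.22, 68, 21          # true correlation, per-study N, number of studies

def simulate_study(rho: float, n: int) -> tuple[float, float]:
    """Draw one study: n pairs from a bivariate normal with correlation rho;
    return the sample correlation r and its two-tailed p-value."""
    cov = [[1.0, rho], [rho, 1.0]]
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    r, p = stats.pearsonr(x, y)
    return r, p

studies = [simulate_study(RHO, N) for _ in range(K)]
for i, (r, p) in enumerate(studies, start=1):
    flag = "significant" if p < 0.05 else "n.s."
    print(f"Study {i:2d}: r = {r:+.3f}, p = {p:.3f}  ({flag})")
```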
The “Conflicting” Results
If we look at the p-values of these 21 studies, we see a confusing picture.
Taken together, the 21 results show the full “conflicting” literature: some studies reach significance, many do not.
- Significant studies (\(p < .05\)): these are the studies that get published easily. Note that their effect sizes are often inflated (higher than 0.22).
- Non-significant studies: these often end up in the “File Drawer”. Note that many have effect sizes near the truth (0.22); they simply lacked the power to reach significance (see the snippet below).
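Continuing the simulation sketch above, splitting the studies at \(p < .05\) illustrates both points. The exact numbers will vary with the random draw, but the significant subset typically overestimates \(\rho\), because only large sample correlations clear the significance bar at this sample size.

```python
# Continuing the sketch above: split the simulated studies at p < .05.
sig_r = [r for r, p in studies if p < 0.05]
ns_r = [r for r, p in studies if p >= 0.05]

# On average the significant subset overestimates rho (it had to clear the
# significance bar), while the non-significant subset sits nearer the truth.
print(f"Significant studies:     mean r = {np.mean(sig_r):.3f} (k = {len(sig_r)})")
print(f"Non-significant studies: mean r = {np.mean(ns_r):.3f} (k = {len(ns_r)})")
print(f"True rho:                {RHO}")
```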
The Vote Counting Trap
A narrative reviewer would look at this table and say:

> “Only about 30-40% of studies found a significant effect. The results are inconsistent.”
This conclusion is False. Every single one of these studies was drawn from the exact same population (\(\rho = 0.22\)). The “variation” is purely Sampling Error.
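In fact, “30-40% significant” is almost exactly what sampling theory predicts here. A back-of-the-envelope power calculation using the standard Fisher-z approximation (a textbook approximation, not taken from the original paper) shows that each individual study has well under 50% power:

```python
import numpy as np
from scipy.stats import norm

RHO, N, ALPHA = 0.22, 68, 0.05

# Fisher-z approximation: atanh(r) is approximately normal with SE = 1/sqrt(N - 3).
z_rho = np.arctanh(RHO)           # population effect on the z scale
se = 1 / np.sqrt(N - 3)           # standard error of a single study's z
z_crit = norm.ppf(1 - ALPHA / 2)  # two-tailed critical value (about 1.96)

# Power = probability a study's test statistic clears the critical value.
power = norm.sf(z_crit - z_rho / se) + norm.cdf(-z_crit - z_rho / se)
print(f"Power of each N = 68 study: {power:.2f}")   # roughly 0.44
```

With per-study power around 0.44, a literature where roughly 40% of studies “find the effect” is not inconsistent at all; it is the expected face of a single, stable effect observed through underpowered studies.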
The Lesson
Small studies (\(N = 68\)) have low Power to detect small-to-medium effects (\(r = 0.22\)). Meta-analysis solves this by weighting each study by its precision (the inverse of its sampling variance), allowing us to see the true mean (0.22) through the noise of sampling error.
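As a minimal sketch of that idea, here is a fixed-effect, inverse-variance-weighted pooling of the simulated studies on the Fisher-z scale. This is one common pooling approach; Schmidt's own psychometric meta-analysis method works directly on \(r\) with artifact corrections and differs in its details.

```python
# Fisher-transform each observed r; on this scale each study's sampling
# variance is 1 / (N - 3), so the inverse-variance weight is N - 3.
z = np.arctanh([r for r, _ in studies])
w = np.full(K, N - 3)

z_pooled = float(np.sum(w * z) / np.sum(w))   # precision-weighted mean
se_pooled = 1 / np.sqrt(np.sum(w))            # SE of the pooled estimate

r_pooled = np.tanh(z_pooled)                  # back-transform to the r scale
lo, hi = np.tanh([z_pooled - 1.96 * se_pooled, z_pooled + 1.96 * se_pooled])
print(f"Pooled r = {r_pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Because every study here has the same \(N\), the weights are all equal and the pooled estimate reduces to the mean of the Fisher-z values; with unequal sample sizes the weighting matters, since more precise studies get proportionally more influence. The pooled estimate lands close to 0.22 with a narrow confidence interval, which no single \(N = 68\) study could deliver.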