Handling Dependencies

The Unit-of-Analysis Error

Author

A. C. Del Re

The “Double-Counting” Trap

The Cardinal Rule of Meta-Analysis

One Study = One Effect Size. If you include multiple outcomes from the same subjects in the same meta-analysis, you violate the assumption of independence.

Why Good Researchers Make Bad Errors

Imagine Study A measures depression using correlations (\(r\)) and reports two outcomes from the same subjects:

1. Beck Depression Inventory (BDI): \(r = 0.50\)
2. Hamilton Rating Scale for Depression (HAM-D): \(r = 0.60\)

If you treat these as two separate studies (\(k = 2\)), you are cheating: you are effectively counting the sample size (\(N\)) twice. This deflates your standard errors and inflates the Type I error rate (false positives).
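A quick numeric sketch makes the problem concrete. Python is used here purely for illustration (MAd itself is R), and the sample size \(N = 100\) is an assumption, not a number from the slides:

```python
import math

# Hypothetical scenario: Study A reports two outcomes (BDI, HAM-D) measured
# on the SAME subjects. N = 100 is an assumed sample size for the demo.
n = 100

# On Fisher's z scale, a correlation has sampling variance 1 / (n - 3).
v = 1 / (n - 3)

# WRONG: treating the two outcomes as k = 2 independent studies halves the
# variance of the mean, as if we had recruited 2N distinct subjects.
var_wrong = v / 2

print(f"SE of one outcome:      {math.sqrt(v):.4f}")
print(f"SE when double-counted: {math.sqrt(var_wrong):.4f}  (too small)")
```

The double-counted standard error is artificially small, which is exactly what drives the false positives described above.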

Interactive Simulator: The Effect of Correlation

The solution is Aggregation. We combine the effects into a single composite score. But how do we calculate the variance of that composite? We need to know how correlated the outcomes are.

\[ \mathrm{Var}_{\text{composite}} = \left(\frac{1}{m}\right)^2 \left[\sum_{i=1}^{m} V_i + \sum_{i \neq j} r_{ij}\sqrt{V_i}\sqrt{V_j}\right] \]

where \(m\) is the number of outcomes being combined, \(V_i\) their sampling variances, and \(r_{ij}\) the correlation between outcomes \(i\) and \(j\).

Change the assumed between-outcome correlation \(r_{ij}\) below.

  • 0.00 = Low Variance (treats the outcomes as fully independent information, so precision is gained)
  • 1.00 = High Variance (treats the outcomes as fully redundant information, so no precision is gained)

See how the Confidence Interval shrinks or expands based on your correlation assumption.

The MAd Solution: agg()

The MAd package automates this aggregation with its agg() function.
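As a language-neutral illustration of what such an aggregation routine computes, here is a hedged Python sketch (not MAd's own code; the sample size and between-outcome correlation are assumed for the demo):

```python
import math

def aggregate(rs, n, r_between):
    """Combine correlated r's from one study into a single composite r."""
    zs = [math.atanh(r) for r in rs]      # Fisher's z for each outcome
    v = 1 / (n - 3)                       # common sampling variance of z
    m = len(zs)
    z_comp = sum(zs) / m                  # composite effect: mean of the z's
    # Composite variance with a single assumed between-outcome correlation.
    var_comp = (1 / m) ** 2 * (m * v + m * (m - 1) * r_between * v)
    return math.tanh(z_comp), var_comp    # back-transform the effect to r

# Hypothetical inputs: the two r's from "Study A", assumed N and correlation.
es, var = aggregate([0.50, 0.60], n=100, r_between=0.50)
print(f"composite r = {es:.3f}, composite variance = {var:.5f}")
```

The study now contributes exactly one effect size and one variance to the meta-analysis, restoring the independence assumption.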


