# Handling Dependencies: The Composite Effect Size

## The Independence Assumption
Standard meta-analytic methods assume that all effect sizes are independent (much as standard ANOVA assumes independent errors). In practice, studies often report:

1. Multiple Outcomes (e.g., depression measured by both the BDI and the HAM-D).
2. Multiple Timepoints (e.g., post-test and follow-up).

If we treat these as separate studies (\(k = 2\)), we double count the sample size (\(N\)):

* This artificially lowers the Standard Error.
* It inflates the Type I Error rate (False Positives).
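A minimal base-R sketch of why double counting matters (the effect size and variance values are illustrative assumptions): treating one study's two outcomes as two independent studies halves the pooled variance under inverse-variance weighting, as if we had sampled twice as many participants.

```r
# Suppose one study reports the same effect (variance V = 0.04)
# on two correlated outcomes. Treating them as k = 2 independent
# studies gives a fixed-effect pooled variance of V / 2.
V <- 0.04
w <- 1 / V                       # inverse-variance weight per "study"
var_pooled_naive <- 1 / (2 * w)  # pooled variance, pretending independence

se_single <- sqrt(V)                # 0.2
se_naive  <- sqrt(var_pooled_naive) # 0.2 / sqrt(2), about 0.141

c(se_single = se_single, se_naive = se_naive)
```

The naive standard error shrinks by a factor of \(\sqrt{2}\), so confidence intervals narrow and the Type I error rate rises.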
## The Solution: Aggregation
We can combine dependent effects into a single Composite Effect Size. To do this correctly, we need to know the correlation between the outcomes (\(r\)).
## The Formula (Borenstein et al., 2009)
The variance of a composite effect is:
\[ Var_{comp} = \left(\frac{1}{k}\right)^2 \left[ \sum_{i=1}^{k} V_i + \sum_{i \ne j} r_{ij} \sqrt{V_i}\sqrt{V_j} \right] \]
- \(k\): Number of outcomes being aggregated.
- \(V_i\): Variance of individual effect \(i\).
- \(r_{ij}\): Correlation between outcome \(i\) and \(j\).
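The formula can be written as a short base-R function (the name `composite_var` is ours, not from any package): the bracketed term is just the sum of the full covariance matrix of the outcomes, since \(r_{ii} = 1\) recovers each \(V_i\) on the diagonal.

```r
# Variance of the mean of k correlated effect sizes
# (Borenstein et al., 2009).
#   v: vector of k sampling variances V_i
#   r: k x k correlation matrix between outcomes (r_ij)
composite_var <- function(v, r) {
  k <- length(v)
  s <- sqrt(v)
  cov_mat <- r * (s %o% s)  # covariance: r_ij * sqrt(V_i) * sqrt(V_j)
  sum(cov_mat) / k^2        # (1/k)^2 * [sum V_i + sum_{i != j} r_ij * s_i * s_j]
}

# Two outcomes with V = 0.04 each:
composite_var(c(0.04, 0.04), matrix(c(1, 0.7, 0.7, 1), 2))  # r = 0.7 -> 0.034
composite_var(c(0.04, 0.04), diag(2))                       # r = 0.0 -> 0.02
```

With \(r = 0\) the composite variance is \(V/k\) (maximum information); as \(r\) rises toward 1 it climbs back toward \(V\) (no new information), matching the sensitivity pattern described below.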
Sensitivity:

* If \(r = 1.0\), the outcomes are redundant: \(Var_{comp}\) is higher (no new information).
* If \(r = 0.0\), the outcomes are independent: \(Var_{comp}\) is lower (maximum information).

## The Gleser & Olkin (1994, 2009) Procedure
The gold standard for aggregation:

- Simple Mean: fails because its variance calculation ignores the correlation (\(\rho\)), underestimating the error of the composite.
- Gleser & Olkin: computes a weighted composite whose variance properly accounts for \(\rho\).
## Using `MAd::agg()`
The MAd package implements Gleser & Olkin's procedure automatically.
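A sketch of the call, assuming a long-format data frame with columns `id`, `es`, and `var` (the column names, effect values, and the \(r = 0.5\) choice are illustrative assumptions; check `?agg` for the exact arguments in your version of MAd):

```r
library(MAd)  # install.packages("MAd") if needed

# Hypothetical data: study 1 reports two dependent outcomes.
dat <- data.frame(
  id  = c(1, 1, 2),          # study identifier
  es  = c(0.40, 0.60, 0.30), # standardized mean differences
  var = c(0.04, 0.05, 0.03)  # sampling variances
)

# Aggregate dependent effects within each study, assuming a
# within-study correlation of r = 0.5 between outcomes.
# The default method ("BHHR") follows Borenstein et al. (2009);
# "GO1"/"GO2" request Gleser & Olkin procedures (these may
# additionally require group sizes n.1 and n.2).
agg(id = id, es = es, var = var, cor = 0.5, data = dat)
```

The result is one composite effect size and variance per `id`, ready for a standard meta-analysis.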
## Alternative: Hierarchical Modeling
Instead of aggregating, we can use Multi-Level Meta-Analysis (3-Level Models):

* Level 1: Participants (Sampling Variance)
* Level 2: Outcomes within Studies
* Level 3: Between-Study Variance

This preserves the data structure but requires advanced packages like metafor (`rma.mv`).
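A minimal three-level sketch with `metafor::rma.mv()` (the data frame and its column names are hypothetical; the nesting is expressed through the `random` formula):

```r
library(metafor)  # install.packages("metafor") if needed

# Hypothetical long-format data: one row per effect size,
# with `study` and `esid` identifying the nesting structure.
dat <- data.frame(
  study = c(1, 1, 2, 2, 3),
  esid  = 1:5,
  yi    = c(0.40, 0.60, 0.30, 0.20, 0.50),  # effect sizes
  vi    = c(0.04, 0.05, 0.03, 0.03, 0.06)   # sampling variances
)

# Three-level model: sampling error (level 1), outcomes within
# studies (level 2), and between-study heterogeneity (level 3).
res <- rma.mv(yi, vi, random = ~ 1 | study/esid, data = dat)
summary(res)
```

Unlike aggregation, this keeps every outcome in the model and estimates separate variance components for the within-study and between-study levels.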