“The problems are solved, not by giving new information, but by arranging what we have known since long.”
– Ludwig Wittgenstein, Philosophical Investigations
Reference: Harrer, M., Cuijpers, P., Furukawa, T.A., & Ebert, D.D. (2021). Doing Meta-Analysis with R: A Hands-On Guide. Boca Raton, FL and London: Chapman & Hall/CRC Press. ISBN 978-0-367-61007-4.
To conduct a meta-analysis, we need an effect size that can summarize every included study. Effect sizes can be extracted directly from the literature or computed from other reported data. A suitable effect size must be comparable, computable, reliable, and interpretable. Common examples include:
Mean
Proportion
Correlation
\[MD_{between} = \bar{x}_1 - \bar{x}_2\] \[SE_{\text{MD}_{\text{between}}} = s_{\text{pooled}}\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}\] \[s_{\text{pooled}} = \sqrt{\frac{(n_1-1)s^2_1+(n_2-1)s^2_2}{(n_1-1)+(n_2-1)}}\]
\[\text{SMD}_{\text{between}} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\text{pooled}}}\] Rule of thumb (Cohen, 1988): 0.2 is a small, 0.5 a medium, and 0.8 a large effect.
\[SE_{\text{SMD}_{\text{between}}} = \sqrt{\frac{n_1+n_2}{n_1n_2} + \frac{\text{SMD}^2_{\text{between}}}{2(n_1+n_2)}}\]
# Load esc package
library(esc)
# Define the data we need to calculate SMD/d
# This is just some example data that we made up
grp1m <- 50 # mean of group 1
grp2m <- 60 # mean of group 2
grp1sd <- 10 # sd of group 1
grp2sd <- 10 # sd of group 2
grp1n <- 100 # n of group1
grp2n <- 100 # n of group2
# Calculate effect size
esc_mean_sd(grp1m = grp1m, grp2m = grp2m,
grp1sd = grp1sd, grp2sd = grp2sd,
grp1n = grp1n, grp2n = grp2n)
##
## Effect Size Calculation for Meta Analysis
##
## Conversion: mean and sd to effect size d
## Effect Size: -1.0000
## Standard Error: 0.1500
## Variance: 0.0225
## Lower CI: -1.2940
## Upper CI: -0.7060
## Weight: 44.4444
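These formulas can be checked by hand. The following is a minimal plain-Python sketch (illustrative only, not part of the R workflow) that reproduces the esc output from the example numbers above:

```python
import math

# Example numbers from the esc_mean_sd() call above
m1, m2 = 50, 60      # group means
s1, s2 = 10, 10      # group standard deviations
n1, n2 = 100, 100    # group sample sizes

# Pooled standard deviation
s_pooled = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2)
                     / ((n1 - 1) + (n2 - 1)))

# Standardized mean difference (Cohen's d) and its standard error
smd = (m1 - m2) / s_pooled
se_smd = math.sqrt((n1 + n2) / (n1 * n2) + smd**2 / (2 * (n1 + n2)))

print(smd, round(se_smd, 4))  # -1.0 0.15, matching the esc output
```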
Applying a small-sample correction to the SMD (Cohen's d) yields Hedges' g.
\[\text{MD}_{\text{within}} = \bar{x}_{\text{t}_2} - \bar{x}_{\text{t}_1}\] \[\text{SMD}_{\text{within}} = \frac{\bar{x}_{\text{t}_2} - \bar{x}_{\text{t}_1}}{s_{\text{pooled}}}\] \[SE_{\text{MD}_{\text{within}}}=\sqrt{\dfrac{s^2_{\text{t}_1}+s^2_{\text{t}_2}-(2r_{\text{t}_1\text{t}_2}s_{\text{t}_1}s_{\text{t}_2})}{n}}\] \[SE_{\text{SMD}_{\text{within}}} = \sqrt{\frac{2(1-r_{\text{t}_1\text{t}_2})}{n}+\frac{\text{SMD}^2_{\text{within}}}{2n}}\]
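For within-group (pre/post) designs, the correlation r between the two time points is needed. Here is a plain-Python sketch of the formulas above; all numbers, the value of r, and the averaging of the two SDs into a pooled SD are illustrative assumptions:

```python
import math

# Hypothetical pre/post data; every number here is an assumption
m_t1, m_t2 = 20.0, 15.0   # means at t1 and t2
s_t1, s_t2 = 6.0, 5.0     # standard deviations at t1 and t2
n = 50                    # sample size
r = 0.6                   # assumed pre-post correlation

# Mean difference and its standard error (requires r)
md = m_t2 - m_t1
se_md = math.sqrt((s_t1**2 + s_t2**2 - 2 * r * s_t1 * s_t2) / n)

# Standardized version; pooling the two SDs this way is one common convention
s_pooled = math.sqrt((s_t1**2 + s_t2**2) / 2)
smd = md / s_pooled
se_smd = math.sqrt(2 * (1 - r) / n + smd**2 / (2 * n))

print(round(md, 3), round(se_md, 3), round(smd, 3), round(se_smd, 3))
```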
|           | Event | No event | Total     |
|-----------|-------|----------|-----------|
| Treatment | a     | b        | n_treat   |
| Control   | c     | d        | n_control |
\[{p_{E}}_{\text{treat}} = \frac{a}{a+b} = \frac{a}{n_{\text{treat}}}\] \[{p_{E}}_{\text{control}} = \frac{c}{c+d} = \frac{c}{n_{\text{control}}}\] \[\text{RR} = \frac{{p_{E}}_{\text{treat}}}{{p_{E}}_{\text{control}}}\] The RR is usually log-transformed: \[\log \text{RR} = \ln\text{RR}\] \[SE_{\log \text{RR}} = \sqrt{\frac{1}{a}+\frac{1}{c} - \frac{1}{a+b} - \frac{1}{c+d}}\] Because no cell may be zero in the denominators, a continuity correction (adding 0.5 to every cell) is sometimes applied. The continuity correction introduces bias, however, so fixed-effect meta-analyses usually pool the effect sizes with the Mantel-Haenszel method instead.
\[\text{Odds}_{\text{treat}} = \frac{a}{b}\] \[\text{Odds}_{\text{control}} = \frac{c}{d}\] \[\text{OR} = \frac{a/b}{c/d}\] \[\log \text{OR} = \ln\text{OR}\] \[SE_{\log \text{OR}} = \sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}}\]
library(esc)
# Define data
grp1yes <- 45 # events in the treatment group
grp1no <- 98 # non-events in the treatment group
grp2yes <- 67 # events in the control group
grp2no <- 76 # non-events in the control group
# Calculate OR by setting es.type to "or"
esc_2x2(grp1yes = grp1yes, grp1no = grp1no,
grp2yes = grp2yes, grp2no = grp2no,
es.type = "or")
##
## Effect Size Calculation for Meta Analysis
##
## Conversion: 2x2 table (OR) coefficient to effect size odds ratio
## Effect Size: 0.5209
## Standard Error: 0.2460
## Variance: 0.0605
## Lower CI: 0.3216
## Upper CI: 0.8435
## Weight: 16.5263
# Calculate logOR by setting es.type to "logit"
esc_2x2(grp1yes = grp1yes, grp1no = grp1no,
grp2yes = grp2yes, grp2no = grp2no,
es.type = "logit")
##
## Effect Size Calculation for Meta Analysis
##
## Conversion: 2x2 table (OR) to effect size logits
## Effect Size: -0.6523
## Standard Error: 0.2460
## Variance: 0.0605
## Lower CI: -1.1344
## Upper CI: -0.1701
## Weight: 16.5263
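The OR formulas can be verified by hand against the esc_2x2 output; a plain-Python check using the same 2x2 counts:

```python
import math

# 2x2 cell counts from the esc_2x2() example above
a, b = 45, 98   # treatment group: events, non-events
c, d = 67, 76   # control group: events, non-events

odds_ratio = (a / b) / (c / d)
log_or = math.log(odds_ratio)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)

print(round(odds_ratio, 4), round(log_or, 4), round(se_log_or, 4))
# 0.5209 -0.6523 0.246, matching the esc output
```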
\[\text{RR} = \frac{\text{OR}}{\left(1-\dfrac{c}{n_{\text{control}}}\right)+ \left(\dfrac{c}{n_{\text{control}}}\times \text{OR} \right)}\]

### Incidence Rate Ratios

Incidence rate ratios take the time at risk into account.
\[\text{IR} = \frac{E}{T}\] \[\text{IRR} = \frac{ E_{\text{treat}}/T_{\text{treat}} }{E_{\text{control}}/T_{\text{control}}}\] \[\log \text{IRR} = \log_{e}(\text{IRR})\] \[SE_{\log \text{IRR}} = \sqrt{\frac{1}{E_{\text{treat}}}+\frac{1}{E_{\text{control}}}}\] For comparing groups, the hazard ratio is used more often; see survival analysis.
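A plain-Python sketch of the IRR formulas; the event counts and person-time values below are made up for illustration:

```python
import math

# Hypothetical event counts and person-years at risk (assumed numbers)
e_treat, t_treat = 28, 3025.0
e_control, t_control = 51, 2983.0

ir_treat = e_treat / t_treat        # incidence rate, treatment group
ir_control = e_control / t_control  # incidence rate, control group

irr = ir_treat / ir_control
log_irr = math.log(irr)
se_log_irr = math.sqrt(1 / e_treat + 1 / e_control)

print(round(irr, 3), round(log_irr, 3), round(se_log_irr, 3))
```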
Some of the effect size calculations described above are oversimplifications that can introduce systematic error (bias), so corrections are sometimes needed.
Small samples (generally n < 20) inflate the effect size; for the SMD, Hedges' g can be used as a correction:
\[g = \text{SMD} \times (1-\frac{3}{4n-9})\]
# Load esc package
library(esc)
# Define uncorrected SMD and sample size n
SMD <- 0.5
n <- 30
# Convert to Hedges g
g <- hedges_g(SMD, n)
g
## [1] 0.4864865
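The same correction can be done by hand, as a plain-Python check of the hedges_g() result above:

```python
# Small-sample correction: Cohen's d (SMD) -> Hedges' g,
# using the same numbers as the hedges_g() call above
smd = 0.5
n = 30  # total sample size

g = smd * (1 - 3 / (4 * n - 9))
print(round(g, 7))  # 0.4864865, matching the R output
```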
Low reliability of the measurement instrument (unreliability) introduces error into observed correlations.
This can be addressed with attenuation-corrected correlation coefficients:
\[{r_{xy}}_{c} = \frac{r_{xy}}{\sqrt{r_{xx}}}\] \[{r_{xy}}_{c} = \frac{r_{xy}}{\sqrt{r_{xx}}\sqrt{r_{yy}}}\] \[SE_c = \frac{SE}{\sqrt{r_{xx}}}\] \[SE_c = \frac{SE}{\sqrt{r_{xx}}\sqrt{r_{yy}}}\]
Range restriction typically occurs when a study recruits a highly selective sample, because such a sample may not represent the entire population.
\[U = \frac{s_{\text{unrestricted}}}{s_{\text{restricted}}}\] \[{r_{xy}}_c = \frac{U\times r_{xy}}{\sqrt{(U^2-1)r_{xy}^2+1}}\] \[\text{SMD}_c = \frac{U\times \text{SMD}}{\sqrt{(U^2-1)\text{SMD}^2+1}}\] \[SE_{{r_{xy}}_c} = \frac{{r_{xy}}_c}{r_{xy}}SE_{r_{xy}}\] \[SE_{{\text{SMD}}_c} = \frac{{\text{SMD}}_c}{\text{SMD}}SE_{\text{SMD}}\]
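A plain-Python sketch of the range-restriction correction; the observed correlation, the two standard deviations, and the standard error below are all illustrative assumptions:

```python
import math

# Hypothetical range-restriction correction (assumed numbers)
r_xy = 0.34           # correlation observed in the restricted sample
s_unrestricted = 12.0 # SD in the unrestricted (reference) population
s_restricted = 9.0    # SD in the restricted study sample
se_r = 0.08           # assumed standard error of r_xy

U = s_unrestricted / s_restricted
r_c = (U * r_xy) / math.sqrt((U**2 - 1) * r_xy**2 + 1)
se_c = (r_c / r_xy) * se_r  # the SE is scaled by the same factor

print(round(U, 3), round(r_c, 3), round(se_c, 3))
```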
The fixed-effect model assumes that all effect sizes come from a single, homogeneous population; every study shares the same true effect size. This true effect size is the overall effect we want to estimate in the meta-analysis, denoted \(\theta\).
\[\hat\theta_k = \theta + \epsilon_k\]
The idea behind the fixed-effect model is that observed effect sizes may vary from study to study, but only because of sampling error; in reality, the true effect sizes are all the same: they are fixed.
Studies with smaller standard errors have less sampling error, so their estimates are likely to lie closer to the truth. Although every observed effect size estimates the true effect, some estimates are better than others. When pooling effect sizes in a meta-analysis, we should therefore give greater weight to more precise effect sizes (those with smaller standard errors). To calculate the pooled effect under the fixed-effect model, we simply take a weighted average across all studies.
The weight of the k-th study's effect size is:
\[w_k = \frac{1}{s^2_k}\]
The overall effect is then computed as follows; this approach is known as inverse-variance weighting:
\[\hat\theta = \frac{\sum^{K}_{k=1} \hat\theta_kw_k}{\sum^{K}_{k=1} w_k}\]
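Inverse-variance weighting can be sketched in a few lines of plain Python; the effect sizes and standard errors below are made up:

```python
# Fixed-effect (inverse-variance weighted) pooling of made-up effect sizes
effects = [0.30, 0.52, 0.41]  # observed effect sizes (illustrative)
ses     = [0.10, 0.20, 0.15]  # their standard errors

weights = [1 / se**2 for se in ses]
pooled = sum(w * th for w, th in zip(weights, effects)) / sum(weights)

print(round(pooled, 4))
```

The most precise study (SE = 0.10) dominates, pulling the pooled estimate toward its effect size.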
For binary effect sizes, there are other ways to compute the weighted average, including the Mantel-Haenszel method, the Peto method, and the sample-size weighting method of Bakbergenuly and colleagues (2020).
The example below uses the SuicidePrevention data set from the {dmetar} package.
# Load dmetar, esc and tidyverse (for pipe)
library(dmetar)
## Extensive documentation for the dmetar package can be found at:
## www.bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/
library(esc)
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.2 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
# Load data set from dmetar
data(SuicidePrevention)
# Calculate Hedges' g and the Standard Error
# - We save the study names in "study".
# - We use the pmap_dfr function to calculate the effect size
# for each row.
SP_calc <- pmap_dfr(SuicidePrevention,
function(mean.e, sd.e, n.e, mean.c,
sd.c, n.c, author, ...){
esc_mean_sd(grp1m = mean.e,
grp1sd = sd.e,
grp1n = n.e,
grp2m = mean.c,
grp2sd = sd.c,
grp2n = n.c,
study = author,
es.type = "g") %>%
as.data.frame()})
# Let us catch a glimpse of the data
# The data set contains Hedges' g ("es") and standard error ("se")
glimpse(SP_calc)
## Rows: 9
## Columns: 9
## $ study <chr> "Berry et al.", "DeVries et al.", "Fleming et al.", "Hunt …
## $ es <dbl> -0.14279447, -0.60770928, -0.11117965, -0.12698011, -0.392…
## $ weight <dbl> 46.09784, 34.77314, 14.97625, 32.18243, 24.52054, 54.50431…
## $ sample.size <dbl> 185, 146, 60, 129, 100, 220, 120, 80, 107
## $ se <dbl> 0.1472854, 0.1695813, 0.2584036, 0.1762749, 0.2019459, 0.1…
## $ var <dbl> 0.02169299, 0.02875783, 0.06677240, 0.03107286, 0.04078214…
## $ ci.lo <dbl> -0.4314686, -0.9400826, -0.6176413, -0.4724727, -0.7882811…
## $ ci.hi <dbl> 0.145879624, -0.275335960, 0.395282029, 0.218512440, 0.003…
## $ measure <chr> "g", "g", "g", "g", "g", "g", "g", "g", "g"
# Calculate the inverse variance-weights for each study
SP_calc$w <- 1/SP_calc$se^2
# Then, we use the weights to calculate the pooled effect
pooled_effect <- sum(SP_calc$w*SP_calc$es)/sum(SP_calc$w)
pooled_effect
## [1] -0.2311121
The fixed-effect model assumes that all studies come from one homogeneous population and that sampling error is the only source of variation. In practice this is rarely the case; countless factors can make the true effect sizes differ between studies.
In the random-effects model, we want to account for the fact that effect sizes show more variation than we would expect if they were drawn from a single homogeneous population. We therefore assume that an individual study's effect size deviates from the overall effect not only because of sampling error, but also because of a second source of variability:
\[\hat\theta_k = \mu + \zeta_k + \epsilon_k\]
\(\mu\) is the pooled overall effect, \(\zeta_k\) is the deviation of study k's true effect from the overall true effect in the distribution of true effects, and \(\epsilon_k\) is the sampling error.
In many fields, including medicine and the social sciences, it is standard practice to always use a random-effects model, since some degree of between-study heterogeneity can practically always be anticipated. A fixed-effect model may only be used when we cannot detect between-study heterogeneity (Chapter 5 covers how to detect it) and we have very good reasons to assume that the true effect is fixed.
The random-effects model introduces \(\zeta_k\), whose variance is denoted \(\tau^2\). The weight of the k-th study is:
\[w^*_k = \frac{1}{s^2_k+\tau^2}\]
The pooled effect size is then:
\[\hat\theta = \frac{\sum^{K}_{k=1} \hat\theta_kw^*_k}{\sum^{K}_{k=1} w^*_k}\]
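The full random-effects calculation can be sketched end to end in plain Python. This sketch uses the DerSimonian-Laird estimator for \(\tau^2\) because it has a closed form (estimators such as REML require iterative fitting); all data are made up:

```python
# Random-effects pooling with the DerSimonian-Laird tau^2 estimator
# (made-up effect sizes and standard errors, for illustration)
effects = [0.10, 0.45, 0.60, 0.25]
ses     = [0.12, 0.15, 0.20, 0.10]

# Fixed-effect (inverse-variance) pooling first
w = [1 / se**2 for se in ses]
fixed = sum(wi * th for wi, th in zip(w, effects)) / sum(w)

# Cochran's Q, then the DL estimate of the between-study variance
k = len(effects)
q = sum(wi * (th - fixed)**2 for wi, th in zip(w, effects))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights add tau^2 to each study's variance
w_star = [1 / (se**2 + tau2) for se in ses]
pooled_re = sum(wi * th for wi, th in zip(w_star, effects)) / sum(w_star)

print(round(tau2, 4), round(pooled_re, 4))
```

Because \(\tau^2\) is added to every study's variance, the weights become more similar to each other, so the random-effects estimate is less dominated by the most precise study than the fixed-effect estimate.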
There are many ways to estimate \(\tau^2\); the methods available in the {meta} package are:
The DerSimonian-Laird (“DL”) estimator (DerSimonian & Laird, 1986)
Restricted maximum likelihood (“REML”) or maximum likelihood (“ML”) (Viechtbauer, 2005)
The Paule-Mandel (“PM”) method (Paule & Mandel, 1982)
The empirical Bayes (“EB”) method (Sidik & Jonkman, 2019), which is essentially identical to Paule-Mandel
The Sidik-Jonkman (“SJ”) estimator (Sidik & Jonkman, 2005)
The most commonly used estimator is DerSimonian-Laird, the default in Cochrane's RevMan as well as in {meta}. It is biased, however, particularly when the number of studies is small and heterogeneity is high. Veroniki and colleagues (2016) recommend the Paule-Mandel method for effect sizes based on binary and continuous data, and restricted maximum likelihood for continuous outcomes. Restricted maximum likelihood is also the default in {metafor}.
Some suggestions for choosing an estimator:
For effect sizes based on continuous data, restricted maximum likelihood is a good first choice.
For binary effect sizes, Paule-Mandel is a good option provided the sample sizes do not vary extremely.
When there is good reason to believe that heterogeneity is very large and avoiding false positives has very high priority, the Sidik-Jonkman method can be used.
If you want others to be able to replicate your results as precisely as possible outside of R, the DerSimonian-Laird estimator is the method of choice.
The Knapp-Hartung adjustment tries to control for the uncertainty in our estimate of between-study heterogeneity and can reduce the rate of false positives. Significance tests of the pooled effect usually assume a normal distribution (so-called Wald-type tests), whereas the Knapp-Hartung method is based on a t-distribution, so confidence intervals widen accordingly. The Knapp-Hartung adjustment can only be used in random-effects models.
The {meta} package contains many functions; the most generic is metagen (generic inverse variance meta-analysis). When raw data are available, {meta} offers a dedicated function for each effect size type. We can use metamean, metacont, and metacor to pool means, (standardized) mean differences, and correlations, respectively; metarate, metaprop, and metainc to pool rates, proportions, and incidence rate ratios; and metabin when dealing with ORs or RRs.
library(tidyverse) # needed for 'glimpse'
library(dmetar)
library(meta)
## Loading 'meta' package (version 6.5-0).
## Type 'help(meta)' for a brief overview.
## Readers of 'Meta-Analysis with R (Use R!)' should install
## older version of 'meta' package: https://tinyurl.com/dt4y5drs
data(ThirdWave) # effect sizes are Hedges' g
glimpse(ThirdWave)
## Rows: 18
## Columns: 8
## $ Author <chr> "Call et al.", "Cavanagh et al.", "DanitzOrsillo"…
## $ TE <dbl> 0.7091362, 0.3548641, 1.7911700, 0.1824552, 0.421…
## $ seTE <dbl> 0.2608202, 0.1963624, 0.3455692, 0.1177874, 0.144…
## $ RiskOfBias <chr> "high", "low", "high", "low", "low", "low", "high…
## $ TypeControlGroup <chr> "WLC", "WLC", "WLC", "no intervention", "informat…
## $ InterventionDuration <chr> "short", "short", "short", "short", "short", "sho…
## $ InterventionType <chr> "mindfulness", "mindfulness", "ACT", "mindfulness…
## $ ModeOfDelivery <chr> "group", "online", "group", "group", "online", "g…
m.gen <- metagen(TE = TE,
seTE = seTE,
studlab = Author,
data = ThirdWave,
sm = "SMD",
fixed = FALSE,
random = TRUE,
method.tau = "REML",
hakn = TRUE,
title = "Third Wave Psychotherapies")
summary(m.gen)
## Review: Third Wave Psychotherapies
##
## SMD 95%-CI %W(random)
## Call et al. 0.7091 [ 0.1979; 1.2203] 5.0
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 6.3
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 3.8
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 7.9
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 7.3
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 6.3
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 5.7
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 6.0
## Hintz et al. 0.2840 [-0.0453; 0.6133] 6.9
## Kang et al. 1.2751 [ 0.6142; 1.9360] 3.9
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 6.3
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 5.6
## Phang et al. 0.5407 [ 0.0619; 1.0196] 5.3
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 5.1
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 3.7
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 4.2
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 5.7
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 5.2
##
## Number of studies: k = 18
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.5771 [0.3782; 0.7760] 6.12 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0820 [0.0295; 0.3533]; tau = 0.2863 [0.1717; 0.5944]
## I^2 = 62.6% [37.9%; 77.5%]; H = 1.64 [1.27; 2.11]
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
m.gen$TE.random # pooled effect size
## [1] 0.5771158
m.gen$TE.fixed
## [1] 0.4805045
# Switch to the Paule-Mandel estimator
m.gen_update <- update.meta(m.gen,
method.tau = "PM")
# Get pooled effect
m.gen_update$TE.random
## [1] 0.5873544
# Get tau^2 estimate
m.gen_update$tau2
## [1] 0.1104957
#save(m.gen, file = "path/to/my/meta-analysis.rda") # example path
# Make sure meta and dmetar are already loaded
library(meta)
library(dmetar)
# Load dataset from dmetar (or download and open manually)
data(SuicidePrevention)
# Use metacont to pool results.
m.cont <- metacont(n.e = n.e,
mean.e = mean.e,
sd.e = sd.e,
n.c = n.c,
mean.c = mean.c,
sd.c = sd.c,
studlab = author,
data = SuicidePrevention,
sm = "SMD",
method.smd = "Hedges",
fixed = FALSE,
random = TRUE,
method.tau = "REML",
hakn = TRUE,
title = "Suicide Prevention")
summary(m.cont)
## Review: Suicide Prevention
##
## SMD 95%-CI %W(random)
## Berry et al. -0.1428 [-0.4315; 0.1459] 15.6
## DeVries et al. -0.6077 [-0.9402; -0.2752] 12.3
## Fleming et al. -0.1112 [-0.6177; 0.3953] 5.7
## Hunt & Burke -0.1270 [-0.4725; 0.2185] 11.5
## McCarthy et al. -0.3925 [-0.7884; 0.0034] 9.0
## Meijer et al. -0.2676 [-0.5331; -0.0021] 17.9
## Rivera et al. 0.0124 [-0.3454; 0.3703] 10.8
## Watkins et al. -0.2448 [-0.6848; 0.1952] 7.4
## Zaytsev et al. -0.1265 [-0.5062; 0.2533] 9.7
##
## Number of studies: k = 9
## Number of observations: o = 1147
##
## SMD 95%-CI t p-value
## Random effects model -0.2304 [-0.3734; -0.0874] -3.71 0.0059
##
## Quantifying heterogeneity:
## tau^2 = 0.0044 [0.0000; 0.0924]; tau = 0.0661 [0.0000; 0.3040]
## I^2 = 7.4% [0.0%; 67.4%]; H = 1.04 [1.00; 1.75]
##
## Test of heterogeneity:
## Q d.f. p-value
## 8.64 8 0.3738
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 8)
## - Hedges' g (bias corrected standardised mean difference; using exact formulae)
Effect sizes from binary data are pooled with the metabin function. The conventional inverse-variance method is not ideal here, so several alternative methods exist.
The Mantel-Haenszel method is commonly used as an alternative for weighting binary data, and it is the default in metabin. It determines a study's weight from the event and non-event counts in the experimental and control groups.
Weight formula for the RR:
\[w_k = \frac{(a_k+b_k) c_k}{n_k}\]
Weight formula for the OR:
\[w_k = \frac{b_kc_k}{n_k}\]
The Peto method uses a special effect size, the Peto odds ratio \(\hat{\psi}_k\):
\[O_k = a_k\] \[E_k = \frac{(a_k+b_k)(a_k+c_k)}{a_k+b_k+c_k+d_k}\] \[V_k = \frac{(a_k+b_k)(c_k+d_k)(a_k+c_k)(b_k+d_k)}{{(a_k+b_k+c_k+d_k)}^2(a_k+b_k+c_k+d_k-1)}\] \[\log\hat\psi_k = \frac{O_k-E_k}{V_k}\]

##### Bakbergenuly Sample-Size Method
Recently, Bakbergenuly and colleagues (2020) proposed another approach in which study weights are determined solely by sample size, and showed that it may outperform the Mantel-Haenszel method. We call it the sample-size method. Its formula is very simple:
\[w_k = \frac{n_{\text{treat}_k}n_{\text{control}_k}}{n_{\text{treat}_k} + n_{\text{control}_k} }\]
When this method is used in metabin, the weights and the overall effect are identical under the fixed-effect and random-effects models; only the p-values differ.
In most cases, following Cochrane's general assessment, the Mantel-Haenszel method (without continuity correction) is preferable. The Peto method can be used when the OR is the desired effect size and the event of interest is expected to be rare.
library(dmetar)
library(tidyverse)
library(meta)
data(DepressionMortality)
glimpse(DepressionMortality)
## Rows: 18
## Columns: 6
## $ author <chr> "Aaroma et al., 1994", "Black et al., 1998", "Bruce et al., 19…
## $ event.e <dbl> 25, 65, 5, 26, 32, 1, 24, 15, 15, 173, 37, 41, 29, 61, 15, 21,…
## $ n.e <dbl> 215, 588, 46, 67, 407, 44, 60, 61, 29, 1015, 105, 120, 258, 38…
## $ event.c <dbl> 171, 120, 107, 1168, 269, 87, 200, 437, 227, 250, 66, 9, 24, 3…
## $ n.c <dbl> 3088, 1901, 2479, 3493, 6256, 1520, 882, 2603, 853, 3375, 409,…
## $ country <chr> "Finland", "USA", "USA", "USA", "Sweden", "USA", "Canada", "Ne…
m.bin <- metabin(event.e = event.e,
n.e = n.e,
event.c = event.c,
n.c = n.c,
studlab = author,
data = DepressionMortality,
sm = "RR",
method = "MH",
MH.exact = TRUE,
fixed = FALSE,
random = TRUE,
method.tau = "PM",
hakn = TRUE,
title = "Depression and Mortality")
summary(m.bin)
## Review: Depression and Mortality
##
## RR 95%-CI %W(random)
## Aaroma et al., 1994 2.0998 [1.4128; 3.1208] 6.0
## Black et al., 1998 1.7512 [1.3139; 2.3341] 6.6
## Bruce et al., 1989 2.5183 [1.0785; 5.8802] 3.7
## Bruce et al., 1994 1.1605 [0.8560; 1.5733] 6.5
## Enzell et al., 1984 1.8285 [1.2853; 2.6014] 6.3
## Fredman et al., 1989 0.3971 [0.0566; 2.7861] 1.2
## Murphy et al., 1987 1.7640 [1.2644; 2.4610] 6.4
## Penninx et al., 1999 1.4647 [0.9361; 2.2919] 5.8
## Pulska et al., 1998 1.9436 [1.3441; 2.8107] 6.2
## Roberts et al., 1990 2.3010 [1.9206; 2.7567] 7.0
## Saz et al., 1999 2.1837 [1.5533; 3.0700] 6.3
## Sharma et al., 1998 2.0500 [1.0744; 3.9114] 4.7
## Takeida et al., 1997 6.9784 [4.1303; 11.7902] 5.3
## Takeida et al., 1999 5.8124 [3.8816; 8.7035] 6.0
## Thomas et al., 1992 1.3303 [0.7780; 2.2745] 5.3
## Thomas et al., 1992 1.7722 [1.1073; 2.8363] 5.6
## Weissman et al., 1986 1.2500 [0.6678; 2.3398] 4.8
## Zheng et al., 1997 1.9803 [1.4001; 2.8011] 6.3
##
## Number of studies: k = 18
## Number of observations: o = 94770
## Number of events: e = 5439
##
## RR 95%-CI t p-value
## Random effects model 2.0217 [1.5786; 2.5892] 6.00 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.1865 [0.0739; 0.5568]; tau = 0.4319 [0.2718; 0.7462]
## I^2 = 77.2% [64.3%; 85.4%]; H = 2.09 [1.67; 2.62]
##
## Test of heterogeneity:
## Q d.f. p-value
## 74.49 17 < 0.0001
##
## Details on meta-analytical method:
## - Inverse variance method
## - Paule-Mandel estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
m.bin_update <- update.meta(m.bin,
method.tau = "REML")
exp(m.bin_update$TE.random)
## [1] 2.02365
m.bin_update$tau2
## [1] 0.1647315
m.bin_or <- update.meta(m.bin, sm = "OR")
m.bin_or
## Review: Depression and Mortality
##
## Number of studies: k = 18
## Number of observations: o = 94770
## Number of events: e = 5439
##
## OR 95%-CI t p-value
## Random effects model 2.2901 [1.7512; 2.9949] 6.52 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.2032 [0.0744; 0.6314]; tau = 0.4508 [0.2728; 0.7946]
## I^2 = 72.9% [56.7%; 83.0%]; H = 1.92 [1.52; 2.43]
##
## Test of heterogeneity:
## Q d.f. p-value
## 62.73 17 < 0.0001
##
## Details on meta-analytical method:
## - Inverse variance method
## - Paule-Mandel estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
When the data consist of pre-calculated effect sizes:
DepressionMortality$TE <- m.bin$TE
DepressionMortality$seTE <- m.bin$seTE
# Set seTE of study 7 to NA
DepressionMortality$seTE[7] <- NA
# Create empty columns 'lower' and 'upper'
DepressionMortality[,"lower"] <- NA
DepressionMortality[,"upper"] <- NA
# Fill in values for 'lower' and 'upper' in study 7
# As always, binary effect sizes need to be log-transformed
DepressionMortality$lower[7] <- log(1.26)
DepressionMortality$upper[7] <- log(2.46)
DepressionMortality[,c("author", "TE", "seTE", "lower", "upper")]
## author TE seTE lower upper
## 1 Aaroma et al., 1994 0.7418532 0.20217061 NA NA
## 2 Black et al., 1998 0.5603039 0.14659887 NA NA
## 3 Bruce et al., 1989 0.9235782 0.43266994 NA NA
## 4 Bruce et al., 1994 0.1488720 0.15526121 NA NA
## 5 Enzell et al., 1984 0.6035076 0.17986279 NA NA
## 6 Fredman et al., 1989 -0.9236321 0.99403676 NA NA
## 7 Murphy et al., 1987 0.5675840 NA 0.2311117 0.9001613
## 8 Penninx et al., 1999 0.3816630 0.22842369 NA NA
## 9 Pulska et al., 1998 0.6645639 0.18819368 NA NA
## 10 Roberts et al., 1990 0.8333374 0.09218909 NA NA
## 11 Saz et al., 1999 0.7810180 0.17380951 NA NA
## 12 Sharma et al., 1998 0.7178398 0.32962024 NA NA
## 13 Takeida et al., 1997 1.9428138 0.26758609 NA NA
## 14 Takeida et al., 1999 1.7599912 0.20599112 NA NA
## 15 Thomas et al., 1992 0.2853747 0.27366749 NA NA
## 16 Thomas et al., 1992 0.5721946 0.23995571 NA NA
## 17 Weissman et al., 1986 0.2231436 0.31986687 NA NA
## 18 Zheng et al., 1997 0.6832705 0.17690650 NA NA
m.gen_bin <- metagen(TE = TE,
seTE = seTE,
lower = lower,
upper = upper,
studlab = author,
data = DepressionMortality,
sm = "RR",
method.tau = "PM",
fixed = FALSE,
random = TRUE,
title = "Depression Mortality (Pre-calculated)")
summary(m.gen_bin)
## Review: Depression Mortality (Pre-calculated)
##
## RR 95%-CI %W(random)
## Aaroma et al., 1994 2.0998 [1.4128; 3.1208] 6.0
## Black et al., 1998 1.7512 [1.3139; 2.3341] 6.6
## Bruce et al., 1989 2.5183 [1.0785; 5.8802] 3.7
## Bruce et al., 1994 1.1605 [0.8560; 1.5733] 6.5
## Enzell et al., 1984 1.8285 [1.2853; 2.6014] 6.3
## Fredman et al., 1989 0.3971 [0.0566; 2.7861] 1.2
## Murphy et al., 1987 1.7640 [1.2600; 2.4600] 6.4
## Penninx et al., 1999 1.4647 [0.9361; 2.2919] 5.8
## Pulska et al., 1998 1.9436 [1.3441; 2.8107] 6.2
## Roberts et al., 1990 2.3010 [1.9206; 2.7567] 7.1
## Saz et al., 1999 2.1837 [1.5533; 3.0700] 6.3
## Sharma et al., 1998 2.0500 [1.0744; 3.9114] 4.7
## Takeida et al., 1997 6.9784 [4.1303; 11.7902] 5.3
## Takeida et al., 1999 5.8124 [3.8816; 8.7035] 6.0
## Thomas et al., 1992 1.3303 [0.7780; 2.2745] 5.3
## Thomas et al., 1992 1.7722 [1.1073; 2.8363] 5.6
## Weissman et al., 1986 1.2500 [0.6678; 2.3398] 4.8
## Zheng et al., 1997 1.9803 [1.4001; 2.8011] 6.3
##
## Number of studies: k = 18
##
## RR 95%-CI z p-value
## Random effects model 2.0218 [1.6066; 2.5442] 6.00 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.1865 [0.0739; 0.5568]; tau = 0.4319 [0.2718; 0.7462]
## I^2 = 77.2% [64.3%; 85.4%]; H = 2.09 [1.67; 2.62]
##
## Test of heterogeneity:
## Q d.f. p-value
## 74.49 17 < 0.0001
##
## Details on meta-analytical method:
## - Inverse variance method
## - Paule-Mandel estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
library(dmetar)
library(tidyverse)
library(meta)
data(EatingDisorderPrevention)
glimpse(EatingDisorderPrevention)
## Rows: 5
## Columns: 5
## $ Author <chr> "Stice et al., 2013", "Stice et al., 2017a", "Stice et al., 20…
## $ event.e <dbl> 6, 22, 6, 8, 22
## $ time.e <dbl> 362, 235, 394, 224, 160
## $ event.c <dbl> 16, 8, 9, 13, 29
## $ time.c <dbl> 356, 74, 215, 221, 159
m.inc <- metainc(event.e = event.e,
time.e = time.e,
event.c = event.c,
time.c = time.c,
studlab = Author,
data = EatingDisorderPrevention,
sm = "IRR",
method = "MH",
fixed = FALSE,
random = TRUE,
method.tau = "PM",
hakn = TRUE,
title = "Eating Disorder Prevention")
summary(m.inc)
## Review: Eating Disorder Prevention
##
## IRR 95%-CI %W(random)
## Stice et al., 2013 0.3688 [0.1443; 0.9424] 13.9
## Stice et al., 2017a 0.8660 [0.3855; 1.9450] 18.7
## Stice et al., 2017b 0.3638 [0.1295; 1.0221] 11.5
## Taylor et al., 2006 0.6071 [0.2516; 1.4648] 15.8
## Taylor et al., 2016 0.7539 [0.4332; 1.3121] 40.0
##
## Number of studies: k = 5
## Number of events: e = 139
##
## IRR 95%-CI t p-value
## Random effects model 0.6223 [0.3955; 0.9791] -2.91 0.0439
##
## Quantifying heterogeneity:
## tau^2 = 0 [0.0000; 1.1300]; tau = 0 [0.0000; 1.0630]
## I^2 = 0.0% [0.0%; 79.2%]; H = 1.00 [1.00; 2.19]
##
## Test of heterogeneity:
## Q d.f. p-value
## 3.34 4 0.5033
##
## Details on meta-analytical method:
## - Inverse variance method
## - Paule-Mandel estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 4)
Correlations are pooled with the metacor function, which uses inverse-variance weighting. Before pooling, metacor applies Fisher's z transformation to the correlations:
\[z = 0.5\log_{e}\left(\frac{1+r}{1-r}\right)\]
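Fisher's z and its back-transformation in plain Python; z is the same as atanh(r), and tanh(z) recovers r:

```python
import math

# Fisher's z transformation of a correlation, and the back-transformation
r = 0.62
z = 0.5 * math.log((1 + r) / (1 - r))  # same as math.atanh(r)
r_back = math.tanh(z)                  # inverse transformation

print(round(z, 4), round(r_back, 2))  # 0.725 0.62
```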
library(dmetar)
library(tidyverse)
library(meta)
data(HealthWellbeing)
glimpse(HealthWellbeing)
## Rows: 29
## Columns: 5
## $ author <chr> "An, 2008", "Angner, 2013", "Barger, 2009", "Doherty, 2013"…
## $ cor <dbl> 0.620, 0.372, 0.290, 0.333, 0.730, 0.405, 0.292, 0.388, 0.3…
## $ n <dbl> 121, 383, 350000, 1764, 42331, 112, 899, 870, 70, 67, 246, …
## $ population <chr> "general population", "chronic condition", "general populat…
## $ country <chr> "South Korea", "USA", "USA", "Ireland", "Poland", "Australi…
m.cor <- metacor(cor = cor,
n = n,
studlab = author,
data = HealthWellbeing,
fixed = FALSE,
random = TRUE,
method.tau = "REML",
hakn = TRUE,
title = "Health and Wellbeing")
summary(m.cor)
## Review: Health and Wellbeing
##
## COR 95%-CI %W(random)
## An, 2008 0.6200 [0.4964; 0.7189] 2.8
## Angner, 2013 0.3720 [0.2823; 0.4552] 3.4
## Barger, 2009 0.2900 [0.2870; 0.2930] 3.8
## Doherty, 2013 0.3330 [0.2908; 0.3739] 3.7
## Dubrovina, 2012 0.7300 [0.7255; 0.7344] 3.8
## Fisher, 2010 0.4050 [0.2373; 0.5493] 2.8
## Gana, 2013 0.2920 [0.2310; 0.3507] 3.6
## Garrido, 2013 0.3880 [0.3300; 0.4430] 3.6
## Goldbeck, 2001 0.3600 [0.1366; 0.5486] 2.4
## Jacobsson, 2010 0.3080 [0.0732; 0.5104] 2.3
## Kim, 2012 0.3500 [0.2352; 0.4551] 3.3
## Koots-Ausmees, 2015 0.3400 [0.3367; 0.3432] 3.8
## Kulczycka, 2010 0.4200 [0.2247; 0.5829] 2.5
## Lacruz, 2012 0.1900 [0.1532; 0.2263] 3.7
## Liang, 2014 0.4900 [0.4791; 0.5007] 3.8
## Matthews, 2002 0.4700 [0.4059; 0.5295] 3.6
## Mukuria, 2013 0.2940 [0.2794; 0.3085] 3.8
## Mukuria, 2015 0.3900 [0.3697; 0.4100] 3.8
## Ngamaba, 2016 0.4980 [0.4707; 0.5243] 3.8
## Ngamaba, 2017 0.2900 [0.2838; 0.2961] 3.8
## Patten, 2010 0.1600 [0.0354; 0.2797] 3.3
## Sabatini, 2014 0.2200 [0.1537; 0.2843] 3.6
## Takeyachi, 2003 0.2020 [0.1352; 0.2669] 3.6
## Tuchtenhagen, 2015 0.2900 [0.2358; 0.3424] 3.7
## Wang, 2002 0.3200 [0.1639; 0.4605] 2.9
## Wang, 2015 0.3200 [0.2968; 0.3428] 3.8
## Yildirim, 2013 0.3900 [0.3031; 0.4705] 3.4
## Zajacova, 2014 0.2440 [0.2135; 0.2740] 3.8
## Zagorski, 2013 0.3600 [0.3494; 0.3705] 3.8
##
## Number of studies: k = 29
## Number of observations: o = 853794
##
## COR 95%-CI t p-value
## Random effects model 0.3632 [0.3092; 0.4148] 12.81 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0241 [0.0141; 0.0436]; tau = 0.1554 [0.1186; 0.2088]
## I^2 = 99.8%; H = 24.14
##
## Test of heterogeneity:
## Q d.f. p-value
## 16320.87 28 0
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 28)
## - Fisher's z transformation of correlations
Means can be meta-analyzed with the metamean function, which pools them using the generic inverse-variance method. When using it, you must first decide whether to analyze raw or log-transformed means.
library(dmetar)
library(tidyverse)
library(meta)
data(BdiScores)
# We only need the first four columns
glimpse(BdiScores[,1:4])
## Rows: 6
## Columns: 4
## $ author <chr> "DeRubeis, 2005", "Dimidjian, 2006", "Dozois, 2009", "Lesperanc…
## $ n <dbl> 180, 145, 48, 142, 301, 104
## $ mean <dbl> 32.6, 31.9, 28.6, 30.3, 31.9, 29.8
## $ sd <dbl> 9.4, 7.4, 9.9, 9.1, 9.2, 8.6
m.mean <- metamean(n = n,
mean = mean,
sd = sd,
studlab = author,
data = BdiScores,
sm = "MRAW", # 若经过log转换则为MLN
fixed = FALSE,
random = TRUE,
method.tau = "REML",
hakn = TRUE,
title = "BDI-II Scores")
summary(m.mean)
## Review: BDI-II Scores
##
## mean 95%-CI %W(random)
## DeRubeis, 2005 32.6000 [31.2268; 33.9732] 18.0
## Dimidjian, 2006 31.9000 [30.6955; 33.1045] 19.4
## Dozois, 2009 28.6000 [25.7993; 31.4007] 9.1
## Lesperance, 2007 30.3000 [28.8033; 31.7967] 17.0
## McBride, 2007 31.9000 [30.8607; 32.9393] 20.7
## Quilty, 2014 29.8000 [28.1472; 31.4528] 15.8
##
## Number of studies: k = 6
## Number of observations: o = 920
##
## mean 95%-CI
## Random effects model 31.1221 [29.6656; 32.5786]
##
## Quantifying heterogeneity:
## tau^2 = 1.0937 [0.0603; 12.9913]; tau = 1.0458 [0.2456; 3.6043]
## I^2 = 64.3% [13.8%; 85.2%]; H = 1.67 [1.08; 2.60]
##
## Test of heterogeneity:
## Q d.f. p-value
## 14.00 5 0.0156
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 5)
## - Untransformed (raw) means
The metaprop function pools proportions. If we specify sm = "PLOGIT", metaprop performs the logit transformation automatically. To pool raw proportions we can use sm = "PRAW", but this is discouraged.
The default pooling approach of metaprop is somewhat special. With logit-transformed values, the function does not use the inverse-variance method; instead, it fits a generalized linear mixed-effects model (GLMM). Essentially, it fits a logistic regression model to our data that includes a random effect to account for the fact that true effect sizes differ between studies.
library(dmetar)
library(meta)
library(tidyverse)
data(OpioidMisuse)
glimpse(OpioidMisuse)
## Rows: 15
## Columns: 3
## $ author <chr> "Becker, 2008", "Boyd, 2009", "Boyd, 2007", "Cerda, 2014", "Fie…
## $ event <dbl> 2186, 91, 126, 543, 6496, 10850, 86, 668, 843, 647, 11521, 1111…
## $ n <dbl> 21826, 912, 1084, 7646, 55215, 114783, 527, 9403, 11274, 8888, …
m.prop <- metaprop(event = event,
n = n,
studlab = author,
data = OpioidMisuse,
method = "GLMM", #逆方差法设置为Inverse
sm = "PLOGIT",
fixed = FALSE,
random = TRUE,
hakn = TRUE,
title = "Opioid Misuse")
summary(m.prop)
## Review: Opioid Misuse
##
## proportion 95%-CI
## Becker, 2008 0.1002 [0.0962; 0.1042]
## Boyd, 2009 0.0998 [0.0811; 0.1211]
## Boyd, 2007 0.1162 [0.0978; 0.1368]
## Cerda, 2014 0.0710 [0.0654; 0.0770]
## Fiellin, 2013 0.1176 [0.1150; 0.1204]
## Jones, 2013 0.0945 [0.0928; 0.0962]
## Lord, 2011 0.1632 [0.1327; 0.1976]
## McCabe, 2005 0.0710 [0.0659; 0.0764]
## McCabe, 2012 0.0748 [0.0700; 0.0798]
## McCabe, 2013 0.0728 [0.0675; 0.0784]
## Nakawai, 2012 0.0909 [0.0893; 0.0925]
## Sung, 2005 0.0962 [0.0908; 0.1017]
## Tetrault, 2007 0.1259 [0.1209; 0.1311]
## Wu, 2008 0.0873 [0.0838; 0.0908]
## Zullig, 2012 0.0840 [0.0804; 0.0876]
##
## Number of studies: k = 15
## Number of observations: o = 434385
## Number of events: e = 41364
##
## proportion 95%-CI
## Random effects model 0.0944 [0.0836; 0.1066]
##
## Quantifying heterogeneity:
## tau^2 = 0.0558; tau = 0.2362; I^2 = 98.3% [97.9%; 98.7%]; H = 7.74 [6.92; 8.66]
##
## Test of heterogeneity:
## Q d.f. p-value
## Wald 838.21 14 < 0.0001
## LRT 826.87 14 < 0.0001
##
## Details on meta-analytical method:
## - Random intercept logistic regression model
## - Maximum-likelihood estimator for tau^2
## - Random effects confidence interval based on t-distribution (df = 14)
## - Logit transformation
## - Clopper-Pearson confidence interval for individual studies
In meta-analysis, the degree to which true effect sizes vary is called between-study heterogeneity. We already touched on this concept in the random-effects model of the previous chapter, which assumes that between-study heterogeneity causes studies' true effect sizes to differ. A meta-analysis should report not only the overall effect but also how trustworthy that estimate is, and an important part of this is quantifying and analyzing between-study heterogeneity.
According to the random-effects model, between-study variation has two sources: sampling error \(\epsilon\) and between-study heterogeneity \(\zeta\). The difficulty in quantifying heterogeneity lies in determining how much of the variation is attributable to sampling error and how much to genuine differences in effect sizes.
Traditional meta-analysis uses Cochran's Q to distinguish true between-study heterogeneity from sampling error. Cochran's Q is a weighted sum of squares:
\[Q = \sum^K_{k=1}w_k(\hat\theta_k-\hat\theta)^2\] where \(w_k\) are the inverse-variance weights and \(\hat{\theta}\) is the pooled effect from a fixed-effect model.
Q approximately follows a chi-squared distribution with K-1 degrees of freedom, which permits a test of between-study heterogeneity: Cochran's Q test.
In meta-analytic practice, however, Q does not exactly follow a chi-squared distribution with K-1 degrees of freedom, and it is sensitive to the number and size of the included studies, so the Q test is often not recommended.
I^2 is another between-study heterogeneity statistic, derived from Q. It is defined as the percentage of variability in the effect sizes not caused by sampling error:
\[I^2 = \frac{Q-(K-1)}{Q}\]
I^2 cannot be smaller than 0; if Q is smaller than K-1, then I^2 = 0.
I^2 can be interpreted as follows (Higgins and Thompson, 2002):
I^2 = 25%: low heterogeneity
I^2 = 50%: moderate heterogeneity
I^2 = 75%: substantial heterogeneity
\[H^2 = \frac{Q}{K-1}\] H^2 is used less often than I^2.
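I^2 and H follow directly from Q and K. A plain-Python check using the values reported for the Third Wave analysis earlier (Q = 45.50 with K = 18):

```python
import math

# Heterogeneity statistics from Q and the number of studies K,
# using the values from the m.gen output above
Q, K = 45.50, 18

i2 = (Q - (K - 1)) / Q      # share of variability beyond sampling error
h = math.sqrt(Q / (K - 1))  # H statistic (square root of H^2)

print(round(i2 * 100, 1), round(h, 2))  # 62.6 1.64, matching the meta output
```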
\(\tau^2\) is the variance of the true effect sizes between studies, and \(\tau\) is their standard deviation; it can be used to compute intervals for the true effect sizes.
Because all of the statistics above have limitations, we can also use prediction intervals (PIs) to express the range into which the effect size of a future study is expected to fall:
\[\hat{\mu} \pm t_{K-1, 0.975}\sqrt{SE_{\hat\mu}^2+\hat\tau^2}\]
m.gen <- update.meta(m.gen, prediction = TRUE)
summary(m.gen)
## Review: Third Wave Psychotherapies
##
## SMD 95%-CI %W(random)
## Call et al. 0.7091 [ 0.1979; 1.2203] 5.0
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 6.3
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 3.8
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 7.9
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 7.3
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 6.3
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 5.7
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 6.0
## Hintz et al. 0.2840 [-0.0453; 0.6133] 6.9
## Kang et al. 1.2751 [ 0.6142; 1.9360] 3.9
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 6.3
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 5.6
## Phang et al. 0.5407 [ 0.0619; 1.0196] 5.3
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 5.1
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 3.7
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 4.2
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 5.7
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 5.2
##
## Number of studies: k = 18
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.5771 [ 0.3782; 0.7760] 6.12 < 0.0001
## Prediction interval [-0.0572; 1.2115]
##
## Quantifying heterogeneity:
## tau^2 = 0.0820 [0.0295; 0.3533]; tau = 0.2863 [0.1717; 0.5944]
## I^2 = 62.6% [37.9%; 77.5%]; H = 1.64 [1.27; 2.11]
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
## - Prediction interval based on t-distribution (df = 16)
Reporting format:
“The between-study heterogeneity variance was estimated at \(\tau^2\) = 0.08 (95%CI: 0.03-0.35), with an \(I^2\) value of 63% (95%CI: 38-78%). The prediction interval ranged from \(g\) = -0.06 to 1.21, indicating that negative intervention effects cannot be ruled out for future studies.”
These metrics tell us that there is moderate to substantial heterogeneity in our data. The effects in our meta-analysis are not completely heterogeneous, but there are clearly some differences in the true effect sizes between studies. It is therefore worth exploring what causes this heterogeneity. Perhaps one or two studies do not really "fit in" because their effect sizes are much higher; this may inflate the heterogeneity in our analysis and, worse, lead to an overestimation of the true effect. On the other hand, our pooled effect could also be heavily influenced by a single study with a very large sample size that reports an unexpectedly small effect, which could mean the pooled effect underestimates the true benefit of the treatment.
To address these concerns, we now turn to procedures for assessing the robustness of our pooled results: outlier analysis and influence analysis.
As mentioned before, between-study heterogeneity can be caused by one or more studies with extreme effect sizes that do not quite "fit in". This can distort our pooled effect estimate, so it is good practice to recheck the pooled effect after removing such outliers from the analysis.
On the other hand, we also want to know whether the pooled effect estimate we found is robust, meaning that it does not depend heavily on any single study. We therefore also want to know whether any study pushes the results of our analysis strongly in one direction. Such studies are called influential cases.
A simple way to flag outliers is to check whether a study's 95% confidence interval overlaps with the confidence interval of the pooled effect. The rationale is straightforward: studies with large sampling error are expected to deviate substantially from the pooled effect, but because their confidence intervals are also wide, those intervals are still likely to overlap with the pooled effect's confidence interval.
If, however, a study has a low standard error but still deviates (unexpectedly) far from the pooled effect, its confidence interval is likely not to overlap, and the study is classified as an outlier.
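This rule can be sketched in a few lines; the effect sizes, standard errors, and pooled CI below are hypothetical stand-ins, not taken from m.gen:

```r
# Minimal sketch of the CI-overlap outlier rule with made-up numbers:
# a study is flagged when its 95% CI lies entirely outside the pooled CI.
TE        <- c(0.71, 0.35, 1.79, 0.18)  # hypothetical study effects
seTE      <- c(0.26, 0.20, 0.35, 0.12)  # hypothetical standard errors
pooled.ci <- c(0.38, 0.78)              # hypothetical pooled 95% CI
lower <- TE - 1.96 * seTE
upper <- TE + 1.96 * seTE
outlier <- (lower > pooled.ci[2]) | (upper < pooled.ci[1])
outlier  # only the third (extreme) study is flagged
```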
The {dmetar} package contains a function called find.outliers that implements this simple outlier-removal algorithm. It searches a {meta} object for outlying studies, removes them, and recalculates the results.
find.outliers(m.gen)
## Identified outliers (random-effects model)
## ------------------------------------------
## "DanitzOrsillo", "Shapiro et al."
##
## Results with outliers removed
## -----------------------------
## Review: Third Wave Psychotherapies
##
## Number of studies: k = 16
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.4528 [0.3257; 0.5800] 7.59 < 0.0001
## Prediction interval [0.1687; 0.7369]
##
## Quantifying heterogeneity:
## tau^2 = 0.0139 [0.0000; 0.1032]; tau = 0.1180 [0.0000; 0.3213]
## I^2 = 24.8% [0.0%; 58.7%]; H = 1.15 [1.00; 1.56]
##
## Test of heterogeneity:
## Q d.f. p-value
## 19.95 15 0.1739
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 15)
## - Prediction interval based on t-distribution (df = 14)
Some studies can exert a very large influence on our overall result even if their effect sizes are not particularly high or low. For example, we might find an overall effect in the meta-analysis whose significance hinges on a single large study: once that influential study is removed, the pooled effect is no longer statistically significant. Outliers are, of course, often influential too, as the example in the previous section shows, but they do not have to be.
We can use the leave-one-out method to identify highly influential cases. Based on these data, we can compute various influence diagnostics, which allow us to spot the studies that most strongly affect the overall estimate of the meta-analysis and to assess whether this large influence distorts our pooled effect.
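The leave-one-out idea itself is easy to sketch by hand (here with made-up effects and, for simplicity, fixed-effect pooling; the InfluenceAnalysis function used below does this for the full random-effects model):

```r
# Hand-rolled leave-one-out with hypothetical data and fixed-effect pooling.
TE   <- c(0.5, 0.6, 1.8, 0.4)  # made-up effect sizes
seTE <- c(0.2, 0.2, 0.3, 0.2)  # made-up standard errors
w    <- 1 / seTE^2             # inverse-variance weights
pool <- function(idx) sum(w[idx] * TE[idx]) / sum(w[idx])
loo  <- sapply(seq_along(TE), function(k) pool(-k))
round(loo, 3)  # the pooled effect drops most when the extreme study 3 is out
```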
m.gen.inf <- InfluenceAnalysis(m.gen, random = TRUE)
## [===========================================================================] DONE
InfluenceAnalysis draws a Baujat plot for us to detect influential studies.
plot(m.gen.inf, "baujat")
## Warning: ggrepel: 12 unlabeled data points (too many overlaps). Consider
## increasing max.overlaps
InfluenceAnalysis also produces a set of influence-diagnostic plots.
plot(m.gen.inf, "influence")
The first plot shows the externally standardized residuals of each study. The second shows the DFFITS values, which express how much the pooled effect changes when the study is removed. The third shows Cook's distance, which is similar to DFFITS. The fourth shows the covariance ratio; values below 1 indicate that removing the study yields a more precise estimate of the pooled effect. The fifth and sixth show the leave-one-out \(\tau^2\) and Q values. The last two show the hat values and weights, both of which express a study's weight in the analysis.
\[t_{k} = \frac{\hat\theta_{k}-\hat\mu_{\setminus k}}{\sqrt{\mathrm{Var}(\hat\mu_{\setminus k})+\hat\tau^2_{\setminus k}+s^2_k}}\] \[\mathrm{DFFITS}_k = \dfrac{\hat\mu-\hat\mu_{\setminus k}}{\sqrt{\dfrac{w_k^{(*)}}{\sum^{K}_{k=1}w_k^{(*)}}(s^2_k+\hat\tau^2_{\setminus k})}}\] \[D_k = \frac{(\hat\mu-\hat\mu_{\setminus k})^2}{\sqrt{s^2_k+\hat\tau^2}}\] \[\mathrm{CovRatio}_k = \frac{\mathrm{Var}(\hat\mu_{\setminus k})}{\mathrm{Var}(\hat\mu)}\] The following rules of thumb apply to these metrics:
\[\mathrm{DFFITS}_k > 3\sqrt{\frac{1}{K-1}} \qquad D_k > 0.45 \qquad \mathrm{hat}_k > 3\frac{1}{K}\]
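For our K = 18 studies, the rule-of-thumb cutoffs evaluate to:

```r
# Rule-of-thumb influence cutoffs for K = 18 studies (formulas above).
K <- 18
dffits.cut <- 3 * sqrt(1 / (K - 1))  # DFFITS cutoff
hat.cut    <- 3 * (1 / K)            # hat value cutoff
round(dffits.cut, 3)  # 0.728
round(hat.cut, 3)     # 0.167
```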
In our example, this is only the case for Dan, i.e., the DanitzOrsillo study. However, although only this study is flagged as an influential case, most of the plots actually show two spikes. We could also define Sha (Shapiro et al.) as an influential case, since the values of this study are also quite extreme. We therefore find that the DanitzOrsillo and Shapiro et al. studies are potentially influential. This is an interesting finding, since we selected the same studies based on the Baujat plot and when focusing purely on statistical outliers. It further corroborates that these two studies may distort our pooled effect estimate and account for part of the between-study heterogeneity we found in our initial meta-analysis.
We can also examine this using forest plots.
plot(m.gen.inf, "es")
plot(m.gen.inf, "i2")
In these two forest plots, we see the recalculated pooled effects, with one study omitted each time. Both plots contain a shaded area with a dashed line at its center, representing the 95% confidence interval of the original summary effect size and the estimated pooled effect itself.
The first plot is sorted by effect size (low to high). Here we can see how the overall effect estimate changes as different studies are removed. Because the two outlying and influential studies, DanitzOrsillo and Shapiro et al., have very high effect sizes, the overall effect is smallest when these two studies are removed.
Another way to explore patterns of heterogeneity in our data is the so-called Graphic Display of Heterogeneity (GOSH) plot.
Unlike the leave-one-out method, this approach fits a model to every possible subset of the studies, i.e., \(2^K-1\) combinations. Because this is computationally very demanding, at most 1,000,000 models are fitted in R.
library(metafor)
## 载入需要的程辑包:Matrix
##
## 载入程辑包:'Matrix'
## The following objects are masked from 'package:tidyr':
##
## expand, pack, unpack
## 载入需要的程辑包:metadat
## 载入需要的程辑包:numDeriv
##
## Loading the 'metafor' package (version 4.2-0). For an
## introduction to the package please type: help(metafor)
m.rma <- rma(yi = m.gen$TE,
sei = m.gen$seTE,
method = m.gen$method.tau,
test = "knha")
res.gosh <- gosh(m.rma)
## Fitting 262143 models (based on all possible subsets).
We can plug the res.gosh object into the plot function to display the graph. The additional alpha argument controls how transparent the points in the plot are, with 1 meaning completely opaque. Since the plot contains many data points, a small alpha value is sensible so that the places where values "pile up" become more visible.
plot(res.gosh, alpha = 0.01)
We see an interesting pattern in the data: while most values cluster in a region of relatively high effects and high heterogeneity, the distribution of \(I^2\) is far from uniform, being heavily right-skewed and bimodal. There seem to be some subsets of studies for which the estimated heterogeneity is much lower, but whose pooled effect size is also smaller, creating a comet-like tail shape.
After seeing this effect size-heterogeneity pattern, the really important question is: which studies cause this shape? To answer it, we can use the gosh.diagnostics function.
This function uses the k-means, DBSCAN (density reachability and connectivity clustering), and Gaussian mixture model algorithms to detect clusters in the GOSH plot data. Based on the identified clusters, the function automatically determines which studies contribute most to each cluster. For example, if we find that one or several studies are over-represented in a cluster with high heterogeneity, this indicates that these studies, alone or jointly, may be causing the high heterogeneity.
res.gosh.diag <- gosh.diagnostics(res.gosh,
km.params = list(centers = 2),
db.params = list(eps = 0.08,
MinPts = 50))
##
## Perform Clustering...
## |==========================================================================================| DONE
res.gosh.diag
## GOSH Diagnostics
## ================================
##
## - Number of K-means clusters detected: 2
## - Number of DBSCAN clusters detected: 4
## - Number of GMM clusters detected: 7
##
## Identification of potential outliers
## ---------------------------------
##
## - K-means: Study 3, Study 16
## - DBSCAN: Study 3, Study 4, Study 16
## - Gaussian Mixture Model: Study 3, Study 4, Study 11, Study 16
plot(res.gosh.diag)
update.meta(m.gen, exclude = c(3, 4, 16)) %>%
summary()
## Review: Third Wave Psychotherapies
##
## SMD 95%-CI %W(random) exclude
## Call et al. 0.7091 [ 0.1979; 1.2203] 4.6
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 8.1
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 0.0 *
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 0.0 *
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 14.8
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 8.1
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 6.2
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 7.0
## Hintz et al. 0.2840 [-0.0453; 0.6133] 11.0
## Kang et al. 1.2751 [ 0.6142; 1.9360] 2.7
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 8.2
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 5.8
## Phang et al. 0.5407 [ 0.0619; 1.0196] 5.2
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 4.7
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 2.5
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 0.0 *
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 6.1
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 5.0
##
## Number of studies: k = 15
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.4819 [0.3595; 0.6043] 8.44 < 0.0001
## Prediction interval [0.3614; 0.6024]
##
## Quantifying heterogeneity:
## tau^2 < 0.0001 [0.0000; 0.0955]; tau = 0.0012 [0.0000; 0.3091]
## I^2 = 4.6% [0.0%; 55.7%]; H = 1.02 [1.00; 1.50]
##
## Test of heterogeneity:
## Q d.f. p-value
## 14.67 14 0.4011
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 14)
## - Prediction interval based on t-distribution (df = 13)
The most common way to visualize a meta-analysis is a forest plot. This plot graphically displays the observed effect sizes, confidence intervals, and usually also the weight of each study, along with the pooled effect calculated in the meta-analysis. It allows others to quickly examine the precision and distribution of the included studies, and how the pooled effect relates to each individual effect size.
forest.meta(m.gen,
sortvar = TE,
prediction = TRUE,
print.tau2 = FALSE,
leftcols = c("studlab", "TE", "seTE", "RiskOfBias"),
leftlabs = c("Author", "g", "SE", "Risk of Bias"))
# JAMA
forest.meta(m.gen, layout = "JAMA")
# Cochrane’s Review Manager 5
forest.meta(m.gen, layout = "RevMan5")
Saving the plot:
# PDF
pdf(file = "forestplot.pdf", width = 8, height = 7)
forest.meta(m.gen,
sortvar = TE,
prediction = TRUE,
print.tau2 = FALSE,
leftlabs = c("Author", "g", "SE"))
dev.off()
## png
## 2
# PNG
png(file = "C:/Users/29136/Desktop/forestplot.png", width = 2800, height = 2400, res = 300)
forest.meta(m.gen,
sortvar = TE,
prediction = TRUE,
print.tau2 = FALSE,
leftlabs = c("Author", "g", "SE"))
dev.off()
## png
## 2
The forest plot is the most common way to visualize a meta-analysis, but not the only one. Forest plots are built around the p < 0.05 criterion, yet in the wake of the replication crisis many publications no longer rely on p < 0.05 alone. In such cases, the drapery plot is a good alternative.
A drapery plot is based on a p-value function and displays the width of the confidence interval for varying p-values.
drapery(m.gen,
labels = "studlab",
type = "pval",
legend = FALSE)
Through outlier and influence analyses, we were able to detect potential sources of between-study heterogeneity. Now we need a different kind of approach: one that lets us determine why we find a particular amount of heterogeneity in our data. Subgroup analysis (also called moderator analysis) is one way to achieve this. It allows us to test specific hypotheses describing why some types of studies produce smaller or larger effects than others.
For example, we could investigate whether one drug produces larger effects than another, compare studies with shorter follow-up periods to those with longer ones, or examine whether the observed effects differ depending on the cultural region in which a study was conducted.
The idea behind subgroup analysis is that a meta-analysis is not just about calculating an average effect size: it can also be a tool for investigating variation in the evidence. In subgroup analysis, we view heterogeneity not merely as a nuisance but as interesting variation that may or may not be explainable by a scientific hypothesis.
In subgroup analysis, we assume that the studies in our meta-analysis do not stem from one overall population, but belong to different subgroups, each with its own true overall effect. Our aim is to reject the null hypothesis that there is no difference in effect sizes between subgroups.
The calculation of a subgroup analysis consists of two parts: first, we pool the effect in each subgroup; subsequently, the subgroup effects are compared using a statistical test (Borenstein & Higgins, 2013).
If we assume that all studies within a subgroup stem from the same population and share one common true effect, we can use a fixed-effect model. In practice, however, this assumption is often unrealistic even when we divide the studies into smaller groups.
An alternative is therefore to use a random-effects model. This approach assumes that the studies within a subgroup are drawn from a population whose mean we want to estimate. Unlike an ordinary meta-analysis, we run several independent random-effects meta-analyses, one per subgroup. Each subgroup thus has its own pooled effect \(\hat{\mu}_g\) and heterogeneity \(\hat{\tau}^2_g\), although in practice the between-study variance is often assumed to be equal across subgroups, i.e., a common \(\tau^2\). When a subgroup contains fewer than five studies (k < 5), the subgroup-specific estimates are often biased, which is why a pooled \(\tau^2\) is used.
When we want to test whether the effects of G subgroups differ, we can pretend that a subgroup's pooled effect is simply the observed effect size of one large study. We then only need to examine whether the differences between these effect sizes are due to sampling error, or to genuine differences in the effects.
Since we use a random-effects model within subgroups and a fixed-effect model between subgroups, the whole subgroup-analysis procedure is called a fixed-effects (plural) model.
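This idea can be illustrated by recomputing the between-subgroup Q statistic by hand from the subgroup results printed further below (pooled effects 0.8126 and 0.4300). The subgroup SEs are back-calculated from the printed CIs assuming Hartung-Knapp t quantiles with \(k_g - 1\) df; this is a hedged sketch, not {meta}'s internal code:

```r
# Hedged sketch: between-subgroup Q, treating each subgroup's pooled effect
# as one observed "study". SEs back-calculated from the printed subgroup CIs.
mu <- c(high = 0.8126, low = 0.4300)
se <- c((1.3417 - 0.2835) / (2 * qt(0.975, df = 7 - 1)),   # k = 7 studies
        (0.5830 - 0.2770) / (2 * qt(0.975, df = 11 - 1)))  # k = 11 studies
w  <- 1 / se^2
mu.pool   <- sum(w * mu) / sum(w)
Q.between <- sum(w * (mu - mu.pool)^2)
round(Q.between, 2)  # ~2.84, matching the subgroup test printed below
```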
Subgroup analysis is in fact a special case of meta-regression.
Because subgroup analysis increases the number of tests, statistical power may be insufficient:
If there is substantial between-study heterogeneity within subgroups, the standard errors of the pooled subgroup effects increase, meaning that their confidence intervals may overlap considerably. This makes it hard to find significant differences between subgroups even when they exist;
Differences between subgroups are usually small, which requires even greater statistical power;
If we do not find a difference in effect sizes between subgroups, this does not automatically mean that the subgroups produce equivalent results;
A power analysis for the subgroup analysis can be carried out beforehand. A common rule of thumb is that each subgroup should contain at least K = 10 studies.
Subgroup analyses are purely observational, so we should always keep in mind that effect differences may also be caused by confounding variables.
In addition, the definition of the groups can itself become a problem: using aggregate information can lead to ecological bias.
In short, subgroup analyses must always be viewed critically.
library(dmetar)
library(meta)
data("ThirdWave")
# Show first entries of study name and 'RiskOfBias' column
head(ThirdWave[,c("Author", "RiskOfBias")])
## Author RiskOfBias
## 1 Call et al. high
## 2 Cavanagh et al. low
## 3 DanitzOrsillo high
## 4 de Vibe et al. low
## 5 Frazier et al. low
## 6 Frogeli et al. low
update.meta(m.gen,
subgroup = RiskOfBias,
tau.common = FALSE)
## Review: Third Wave Psychotherapies
##
## Number of studies: k = 18
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.5771 [ 0.3782; 0.7760] 6.12 < 0.0001
## Prediction interval [-0.0572; 1.2115]
##
## Quantifying heterogeneity:
## tau^2 = 0.0820 [0.0295; 0.3533]; tau = 0.2863 [0.1717; 0.5944]
## I^2 = 62.6% [37.9%; 77.5%]; H = 1.64 [1.27; 2.11]
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Results for subgroups (random effects model (HK)):
## k SMD 95%-CI tau^2 tau Q I^2
## RiskOfBias = high 7 0.8126 [0.2835; 1.3417] 0.2423 0.4922 25.89 76.8%
## RiskOfBias = low 11 0.4300 [0.2770; 0.5830] 0.0099 0.0997 13.42 25.5%
##
## Test for subgroup differences (random effects model (HK)):
## Q d.f. p-value
## Between groups 2.84 1 0.0917
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
## - Prediction interval based on t-distribution (df = 16)
update.meta(m.gen, subgroup = RiskOfBias, tau.common = TRUE)
## Review: Third Wave Psychotherapies
##
## Number of studies: k = 18
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.5771 [ 0.3782; 0.7760] 6.12 < 0.0001
## Prediction interval [-0.0572; 1.2115]
##
## Quantifying heterogeneity:
## tau^2 = 0.0820 [0.0295; 0.3533]; tau = 0.2863 [0.1717; 0.5944]
## I^2 = 62.6% [37.9%; 77.5%]; H = 1.64 [1.27; 2.11]
##
## Quantifying residual heterogeneity:
## tau^2 = 0.0691 [0.0208; 0.3268]; tau = 0.2630 [0.1441; 0.5717]
## I^2 = 59.3% [30.6%; 76.1%]; H = 1.57 [1.20; 2.05]
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Results for subgroups (random effects model (HK)):
## k SMD 95%-CI tau^2 tau Q I^2
## RiskOfBias = high 7 0.7691 [0.2533; 1.2848] 0.0691 0.2630 25.89 76.8%
## RiskOfBias = low 11 0.4698 [0.3015; 0.6382] 0.0691 0.2630 13.42 25.5%
##
## Test for subgroup differences (random effects model (HK)):
## Q d.f. p-value
## Between groups 1.79 1 0.1814
## Within groups 39.31 16 0.0010
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## (assuming common tau^2 in subgroups)
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 17)
## - Prediction interval based on t-distribution (df = 16)
# subgroup-specific p-values
subgroup <- update.meta(m.gen,
subgroup = RiskOfBias,
tau.common = FALSE)
subgroup$pval.random.w
## high low
## 9.418657e-03 9.369461e-05
Meta-regression uses a mixed-effects model to predict variation in effect sizes from a covariate x:
\[\hat\theta_k = \theta + \beta x_{k} + \epsilon_k+\zeta_k\]
Dummy coding:
\[\begin{equation}D_g=\begin{cases} 0: & \text{Subgroup A}\\ 1: & \text{Subgroup B} \end{cases}\end{equation}\] \[\hat\theta_k = \theta + \beta D_g +\epsilon_k+\zeta_k\] \[\begin{equation} D_g=\begin{cases} 0: & \hat\theta_k = \theta_A + \epsilon_k+\zeta_k\\ 1: & \hat\theta_k = \theta_A + \theta_{\Delta} +\epsilon_k+\zeta_k \end{cases}\end{equation}\]
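R performs this dummy coding automatically for factor moderators. A small self-contained illustration (the factor values are made up):

```r
# Sketch: R's dummy coding of a two-level factor, as used implicitly when a
# categorical moderator is passed to metareg(). 'subgroup' is made-up data.
subgroup <- factor(c("A", "A", "B", "B"))
mm <- model.matrix(~ subgroup)
mm  # the second column 'subgroupB' is the dummy D_g: 0 for A, 1 for B
```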
Meta-regression models are fitted using weighted least squares (WLS).
Coefficient of determination: \[R^2_* = 1- \frac{\hat\tau^2_{\text{unexplained}}}{\hat\tau^2_{\text{(total)}}}\\R^2_* = \frac{\hat\tau^2_{\text{REM}}-\hat\tau^2_{\text{MEM}}}{\hat\tau^2_{\text{REM}}}\]
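With the \(\hat\tau^2\) values from our example (total \(\hat\tau^2\) = 0.0820 from m.gen; residual \(\hat\tau^2\) = 0.0188 from the metareg output below), the formula can be checked by hand:

```r
# Checking R^2_* by hand: total tau^2 from m.gen (0.0820) versus the residual
# tau^2 from the metareg model with 'year' shown below (0.0188).
tau2.total       <- 0.0820
tau2.unexplained <- 0.0188
R2 <- (tau2.total - tau2.unexplained) / tau2.total
round(R2 * 100, 1)  # ~77.1%, matching the reported R^2 of 77.08%
```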
Wald test: \[z = \frac{\hat\beta}{SE_{\hat\beta}}\]
library(meta)
year <- c(2014, 1998, 2010, 1999, 2005, 2014,
2019, 2010, 1982, 2020, 1978, 2001,
2018, 2002, 2009, 2011, 2011, 2013)
m.gen.reg <- metareg(m.gen, ~year)
m.gen.reg
##
## Mixed-Effects Model (k = 18; tau^2 estimator: REML)
##
## tau^2 (estimated amount of residual heterogeneity): 0.0188 (SE = 0.0226)
## tau (square root of estimated tau^2 value): 0.1371
## I^2 (residual heterogeneity / unaccounted variability): 29.26%
## H^2 (unaccounted variability / sampling variability): 1.41
## R^2 (amount of heterogeneity accounted for): 77.08%
##
## Test for Residual Heterogeneity:
## QE(df = 16) = 27.8273, p-val = 0.0332
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 16) = 9.3755, p-val = 0.0075
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt -36.1546 11.9800 -3.0179 16 0.0082 -61.5510 -10.7582 **
## year 0.0183 0.0060 3.0619 16 0.0075 0.0056 0.0310 **
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# bubble plot
bubble(m.gen.reg, studlab = TRUE)
# subgroup
metareg(m.gen, RiskOfBias)
##
## Mixed-Effects Model (k = 18; tau^2 estimator: REML)
##
## tau^2 (estimated amount of residual heterogeneity): 0.0691 (SE = 0.0424)
## tau (square root of estimated tau^2 value): 0.2630
## I^2 (residual heterogeneity / unaccounted variability): 60.58%
## H^2 (unaccounted variability / sampling variability): 2.54
## R^2 (amount of heterogeneity accounted for): 15.66%
##
## Test for Residual Heterogeneity:
## QE(df = 16) = 39.3084, p-val = 0.0010
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 16) = 2.5066, p-val = 0.1329
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.7691 0.1537 5.0022 16 0.0001 0.4431 1.0950 ***
## RiskOfBiaslow -0.2992 0.1890 -1.5832 16 0.1329 -0.6999 0.1014
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
\[\hat \theta_k = \theta + \beta_1x_{1k} + ... + \beta_nx_{nk} + \epsilon_k + \zeta_k\]
\[\hat \theta_k = \theta + \beta_1x_{1k} + \beta_2x_{2k} + \beta_3x_{1k}x_{2k}+ \epsilon_k + \zeta_k\]
Overfitting
Multicollinearity
Fitting steps
library(metafor)
library(tidyverse)
library(dmetar)
data(MVRegressionData)
glimpse(MVRegressionData)
## Rows: 36
## Columns: 6
## $ yi <dbl> 0.09437543, 0.09981923, 0.16931607, 0.17511107, 0.27301641,…
## $ sei <dbl> 0.1959031, 0.1918510, 0.1193179, 0.1161592, 0.1646946, 0.17…
## $ reputation <dbl> -11, 0, -11, 4, -10, -9, -8, -8, -8, 0, -5, -5, -4, -4, -3,…
## $ quality <dbl> 6, 9, 5, 9, 2, 10, 6, 3, 10, 3, 1, 5, 10, 2, 1, 2, 4, 1, 8,…
## $ pubyear <dbl> -0.85475360, -0.75277184, -0.66048349, -0.56304843, -0.4308…
## $ continent <fct> 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1,…
# Checking for Multi-Collinearity
MVRegressionData[,c("reputation", "quality", "pubyear")] %>% cor()
## reputation quality pubyear
## reputation 1.0000000 0.3015694 0.3346594
## quality 0.3015694 1.0000000 -0.1551123
## pubyear 0.3346594 -0.1551123 1.0000000
library(PerformanceAnalytics)
## 载入需要的程辑包:xts
## 载入需要的程辑包:zoo
##
## 载入程辑包:'zoo'
## The following objects are masked from 'package:base':
##
## as.Date, as.Date.numeric
##
## ################################### WARNING ###################################
## # We noticed you have dplyr installed. The dplyr lag() function breaks how #
## # base R's lag() function is supposed to work, which breaks lag(my_xts). #
## # #
## # Calls to lag(my_xts) that you enter or source() into this session won't #
## # work correctly. #
## # #
## # All package code is unaffected because it is protected by the R namespace #
## # mechanism. #
## # #
## # Set `options(xts.warn_dplyr_breaks_lag = FALSE)` to suppress this warning. #
## # #
## # You can use stats::lag() to make sure you're not using dplyr::lag(), or you #
## # can add conflictRules('dplyr', exclude = 'lag') to your .Rprofile to stop #
## # dplyr from breaking base R's lag() function. #
## ################################### WARNING ###################################
##
## 载入程辑包:'xts'
## The following objects are masked from 'package:dplyr':
##
## first, last
##
## 载入程辑包:'PerformanceAnalytics'
## The following object is masked from 'package:graphics':
##
## legend
MVRegressionData[,c("reputation", "quality", "pubyear")] %>%
chart.Correlation()
## Warning in par(usr): argument 1 does not name a graphical parameter
## Warning in par(usr): argument 1 does not name a graphical parameter
## Warning in par(usr): argument 1 does not name a graphical parameter
# fit a multiple meta-regression model
m.qual <- rma(yi = yi,
sei = sei,
data = MVRegressionData,
method = "ML", # It is advisable to use "ML", because this allows one to compare different meta-regression models later on
mods = ~ quality,
test = "knha")
m.qual
##
## Mixed-Effects Model (k = 36; tau^2 estimator: ML)
##
## tau^2 (estimated amount of residual heterogeneity): 0.0667 (SE = 0.0275)
## tau (square root of estimated tau^2 value): 0.2583
## I^2 (residual heterogeneity / unaccounted variability): 60.04%
## H^2 (unaccounted variability / sampling variability): 2.50
## R^2 (amount of heterogeneity accounted for): 7.37%
##
## Test for Residual Heterogeneity:
## QE(df = 34) = 88.6130, p-val < .0001
##
## Test of Moderators (coefficient 2):
## F(df1 = 1, df2 = 34) = 3.5330, p-val = 0.0688
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.3429 0.1354 2.5318 34 0.0161 0.0677 0.6181 *
## quality 0.0356 0.0189 1.8796 34 0.0688 -0.0029 0.0740 .
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
m.qual.rep <- rma(yi = yi,
sei = sei,
data = MVRegressionData,
method = "ML",
mods = ~ quality + reputation,
test = "knha")
m.qual.rep
##
## Mixed-Effects Model (k = 36; tau^2 estimator: ML)
##
## tau^2 (estimated amount of residual heterogeneity): 0.0238 (SE = 0.0161)
## tau (square root of estimated tau^2 value): 0.1543
## I^2 (residual heterogeneity / unaccounted variability): 34.62%
## H^2 (unaccounted variability / sampling variability): 1.53
## R^2 (amount of heterogeneity accounted for): 66.95%
##
## Test for Residual Heterogeneity:
## QE(df = 33) = 58.3042, p-val = 0.0042
##
## Test of Moderators (coefficients 2:3):
## F(df1 = 2, df2 = 33) = 12.2476, p-val = 0.0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.5005 0.1090 4.5927 33 <.0001 0.2788 0.7222 ***
## quality 0.0110 0.0151 0.7312 33 0.4698 -0.0197 0.0417
## reputation 0.0343 0.0075 4.5435 33 <.0001 0.0189 0.0496 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# model comparison
anova(m.qual, m.qual.rep)
##
## df AIC BIC AICc logLik LRT pval QE tau^2
## Full 4 19.4816 25.8157 20.7720 -5.7408 58.3042 0.0238
## Reduced 3 36.9808 41.7314 37.7308 -15.4904 19.4992 <.0001 88.6130 0.0667
## R^2
## Full
## Reduced 64.3197%
# modeling interactions
# Add factor labels to 'continent'
# 0 = Europe
# 1 = North America
levels(MVRegressionData$continent) = c("Europe", "North America")
# Fit the meta-regression model
m.qual.rep.int <- rma(yi = yi,
sei = sei,
data = MVRegressionData,
method = "REML",
mods = ~ pubyear * continent,
test = "knha")
m.qual.rep.int
##
## Mixed-Effects Model (k = 36; tau^2 estimator: REML)
##
## tau^2 (estimated amount of residual heterogeneity): 0 (SE = 0.0098)
## tau (square root of estimated tau^2 value): 0
## I^2 (residual heterogeneity / unaccounted variability): 0.00%
## H^2 (unaccounted variability / sampling variability): 1.00
## R^2 (amount of heterogeneity accounted for): 100.00%
##
## Test for Residual Heterogeneity:
## QE(df = 32) = 24.8408, p-val = 0.8124
##
## Test of Moderators (coefficients 2:4):
## F(df1 = 3, df2 = 32) = 28.7778, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb
## intrcpt 0.3892 0.0421 9.2472 32 <.0001 0.3035
## pubyear 0.1683 0.0834 2.0184 32 0.0520 -0.0015
## continentNorth America 0.3986 0.0658 6.0539 32 <.0001 0.2645
## pubyear:continentNorth America 0.6323 0.1271 4.9754 32 <.0001 0.3734
## ci.ub
## intrcpt 0.4750 ***
## pubyear 0.3380 .
## continentNorth America 0.5327 ***
## pubyear:continentNorth America 0.8911 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The model's \(R^2\) reaches 100%. This is because the data set was constructed purely for illustration; such a result almost never occurs in practice, and if it did we would even have to consider overfitting.
The permutation test is a special resampling method. It allows us to better assess whether the coefficients in our model really capture a true pattern underlying the data, or whether we have overfitted the model and thereby falsely assumed patterns that are actually statistical noise. Permutation tests can be used to assess the robustness of a meta-regression model.
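The core idea can be shown with a self-contained toy example (using ordinary lm(), not metafor's permutest): shuffle the moderator many times, refit, and count how often a slope as extreme as the observed one arises by chance.

```r
# Toy permutation test of a regression slope (illustration only; the
# permutest() call below does the real thing for the meta-regression model).
set.seed(42)
x <- c(1, 2, 3, 4, 5, 6)              # made-up moderator
y <- c(1.1, 1.9, 3.2, 3.9, 5.1, 6.2)  # made-up outcome, strongly related to x
obs  <- coef(lm(y ~ x))[2]            # observed slope
perm <- replicate(1000, coef(lm(y ~ sample(x)))[2])
p.perm <- mean(abs(perm) >= abs(obs)) # permutation p-value
p.perm  # very small: the observed slope is unlikely under random shuffling
```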
permutest(m.qual.rep)
## Running 1000 iterations for an approximate permutation test.
##
## Test of Moderators (coefficients 2:3):¹
## F(df1 = 2, df2 = 33) = 12.2476, p-val = 0.0010
##
## Model Results:
##
## estimate se tval df pval¹ ci.lb ci.ub
## intrcpt 0.5005 0.1090 4.5927 33 0.1750 0.2788 0.7222
## quality 0.0110 0.0151 0.7312 33 0.4310 -0.0197 0.0417
## reputation 0.0343 0.0075 4.5435 33 0.0010 0.0189 0.0496 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## 1) p-values based on permutation testing
We can also try to model all possible combinations of predictors, a procedure known as multimodel inference. This lets us examine which possible combination of predictors provides the best fit, and which predictors are the most important ones.
multimodel.inference(TE = "yi",
seTE = "sei",
data = MVRegressionData,
predictors = c("pubyear", "quality",
"reputation", "continent"),
interaction = FALSE)
##
##
## Multimodel Inference: Final Results
## --------------------------
##
## - Number of fitted models: 16
## - Full formula: ~ pubyear + quality + reputation + continent
## - Coefficient significance test: knha
## - Interactions modeled: no
## - Evaluation criterion: AICc
##
##
## Best 5 Models
## --------------------------
##
##
## Global model call: metafor::rma(yi = TE, sei = seTE, mods = form, data = glm.data,
## method = method, test = test)
## ---
## Model selection table
## (Intrc) cntnn pubyr qulty rpttn df logLik AICc delta weight
## 12 + + 0.3533 0.02160 5 2.981 6.0 0.00 0.536
## 16 + + 0.4028 0.02210 0.01754 6 4.071 6.8 0.72 0.375
## 8 + + 0.4948 0.03574 5 0.646 10.7 4.67 0.052
## 11 + 0.2957 0.02725 4 -1.750 12.8 6.75 0.018
## 15 + 0.3547 0.02666 0.02296 5 -0.395 12.8 6.75 0.018
## Models ranked by AICc(x)
##
##
## Multimodel Inference Coefficients
## --------------------------
##
##
## Estimate Std. Error z value Pr(>|z|)
## intrcpt 0.38614661 0.106983583 3.6094006 0.0003069
## continentNorth America 0.24743836 0.083113174 2.9771256 0.0029096
## pubyear 0.37816796 0.083045572 4.5537402 0.0000053
## reputation 0.01899347 0.007420427 2.5596198 0.0104787
## quality 0.01060060 0.014321158 0.7402055 0.4591753
##
##
## Predictor Importance
## --------------------------
##
##
## model importance
## 1 pubyear 0.9988339
## 2 continent 0.9621839
## 3 reputation 0.9428750
## 4 quality 0.4432826
# AICc is default, small sample-corrected Akaike’s information criterion
Note that multimodel inference is an exploratory method. When we build a model based on its results, we are not relying on prior knowledge, so the multimodel-inference results must be reported whenever a model is constructed this way.
As early as the 1990s, it had become consensus that antidepressants (such as selective serotonin reuptake inhibitors, or SSRIs) are effective in treating patients with depression. Most of this evidence was provided by meta-analyses of published drug trials in which antidepressants were compared to placebo. Given that the antidepressant market is worth billions of dollars and still growing steadily, the efficacy of antidepressants is an important question.
In 2002, Irving Kirsch and colleagues published an article called "The Emperor's New Drugs". Invoking the Freedom of Information Act, Kirsch and colleagues obtained previously unpublished antidepressant trial data that pharmaceutical companies had provided to the US Food and Drug Administration. They found that when these unpublished data were also considered, the benefit of antidepressants compared to placebo was marginal at best and clinically negligible. Kirsch and colleagues argued that this was because drug companies only published studies with favorable findings, while studies with "disappointing" evidence were withheld.
Publication bias occurs when the probability that a study gets published is affected by its results. There is ample evidence that a study is more likely to be published if its findings are statistically significant or confirm the initial hypothesis.
Publication bias is only one of several non-reporting biases; others include citation bias, time-lag bias, multiple publication bias, language bias, and outcome reporting bias.
Besides non-reporting biases, researchers may also engage in questionable research practices (QRPs), such as p-hacking and HARKing (hypothesizing after the results are known).
These problems can be countered both by adopting appropriate strategies during the literature search and by using statistical methods.
Various small-study effect methods are available to assess and correct for publication bias in meta-analyses. As their name suggests, these methods pay particular attention to small studies. From a statistical standpoint, this means studies with high standard errors. Small-study effect methods assume that small studies are more susceptible to publication bias.
This assumption rests on three core ideas:
Because large studies require substantial resources and time, they are likely to get published whether or not their results are significant.
Moderately sized studies are at greater risk of remaining unpublished. However, even moderate statistical power is usually sufficient to produce significant results, which means that only some of these studies will go unpublished because they yielded "undesirable" (i.e., non-significant) results.
Small studies are at the greatest risk of producing non-significant results and are therefore the most likely to end up in the "file drawer". In small studies, only very large effects become significant, which means that only small studies with very large effects get published.
As we can see, the assumed mechanism behind these ideas is very simple. Fundamentally, publication bias exists because only significant effects get published. Since the probability of obtaining significant results grows with sample size, publication bias disproportionately affects small studies.
Inspecting funnel plots for small-study effects is a standard approach. A funnel plot is a scatter plot of the studies' observed effect sizes on the x-axis against their standard errors on the y-axis. The y-axis of a funnel plot is usually inverted (i.e., "higher" positions on the y-axis represent lower standard errors).
When there is no publication bias, the data points should form a roughly symmetrical, upside-down funnel, which is why they are called funnel plots. In the upper part of the plot (studies with low standard errors), the results should lie close together and not far from the pooled effect size. In the lower part, as standard errors increase, the funnel "opens up" and the effect sizes scatter more to the left and right of the pooled effect.
# Load 'meta' package
library(meta)
# Produce funnel plot
funnel.meta(m.gen,
xlim = c(-0.5, 2),
studlab = TRUE)
# Add title
title("Funnel Plot (Third Wave Psychotherapies)")
Our data set shows an asymmetrical pattern in the funnel plot, which may indicate publication bias. It could be that the three small studies were lucky enough to find effects large enough to reach significance, while some unpublished studies with similar standard errors but smaller, non-significant effects did not make it into our analysis.
A good way to inspect the relationship between this asymmetrical pattern and statistical significance is to generate contour-enhanced funnel plots. Such plots help distinguish publication bias from other forms of asymmetry by adding colors that represent the significance level of each study in the plot.
# Define fill colors for contour
col.contour = c("gray75", "gray85", "gray95")
# Generate funnel plot (we do not include study labels here)
funnel.meta(m.gen, xlim = c(-0.5, 2),
contour = c(0.9, 0.95, 0.99),
col.contour = col.contour)
# Add a legend
legend(x = 1.6, y = 0.01,
legend = c("p < 0.1", "p < 0.05", "p < 0.01"),
fill = col.contour)
# Add a title
title("Contour-Enhanced Funnel Plot (Third Wave Psychotherapies)")
Interpreting funnel plots by visual inspection alone has its limits, however. There is no clear rule for when our results are "too asymmetrical", meaning that inferences from funnel plots always carry some subjectivity. It is therefore helpful to assess the presence of funnel plot asymmetry quantitatively.
\[\frac{\hat\theta_k}{SE_{\hat\theta_k}} = \beta_0 + \beta_1 \frac{1}{SE_{\hat\theta_k}}\]
Here we are interested in the intercept \(\beta_0\): in Egger's regression test, an intercept that deviates significantly from zero indicates funnel plot asymmetry.
# Load required package
library(tidyverse)
m.gen$data %>%
mutate(y = TE/seTE, x = 1/seTE) %>%
lm(y ~ x, data = .) %>%
summary()
##
## Call:
## lm(formula = y ~ x, data = .)
##
## Residuals:
## Min 1Q Median 3Q Max
## -1.82920 -0.55227 -0.03299 0.66857 2.05815
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 4.1111 0.8790 4.677 0.000252 ***
## x -0.3407 0.1837 -1.855 0.082140 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 1.096 on 16 degrees of freedom
## Multiple R-squared: 0.177, Adjusted R-squared: 0.1255
## F-statistic: 3.44 on 1 and 16 DF, p-value: 0.08214
metabias(m.gen, method.bias = "linreg")
## Review: Third Wave Psychotherapies
##
## Linear regression test of funnel plot asymmetry
##
## Test result: t = 4.68, df = 16, p-value = 0.0003
##
## Sample estimates:
## bias se.bias intercept se.intercept
## 4.1111 0.8790 -0.3407 0.1837
##
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 1.2014)
## - predictor: standard error
## - weight: inverse variance
## - reference: Egger et al. (1997), BMJ
Pustejovsky and Rodgers (2019) recommend using a modified version of the standard error when testing for funnel plot asymmetry of standardized mean differences:
\[SE^*_{\text{SMD}_{\text{between}}}= \sqrt{\frac{n_1+n_2}{n_1n_2}}\]
# Add experimental (n1) and control group (n2) sample size
n1 <- c(62, 72, 44, 135, 103, 71, 69, 68, 95,
43, 79, 61, 62, 60, 43, 42, 64, 63)
n2 <- c(51, 78, 41, 115, 100, 79, 62, 72, 80,
44, 72, 67, 59, 54, 41, 51, 66, 55)
# Calculate modified SE
ThirdWave$seTE_c <- sqrt((n1+n2)/(n1*n2))
# Re-run 'metagen' with modified SE to get meta-analysis object
m.gen.c <- metagen(TE = TE, seTE = seTE_c,
studlab = Author, data = ThirdWave, sm = "SMD",
fixed = FALSE, random = TRUE,
method.tau = "REML", hakn = TRUE,
title = "Third Wave Psychotherapies")
# Egger's test
metabias(m.gen.c, method = "linreg")
## Review: Third Wave Psychotherapies
##
## Linear regression test of funnel plot asymmetry
##
## Test result: t = 4.36, df = 16, p-value = 0.0005
##
## Sample estimates:
## bias se.bias intercept se.intercept
## 11.1903 2.5667 -1.3535 0.4432
##
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 2.5334)
## - predictor: standard error
## - weight: inverse variance
## - reference: Egger et al. (1997), BMJ
# another way
m.gen$n.e = n1; m.gen$n.c = n2
metabias(m.gen, method.bias = "Pustejovsky")
## Review: Third Wave Psychotherapies
##
## Linear regression test of funnel plot asymmetry
##
## Test result: t = 4.41, df = 16, p-value = 0.0004
##
## Sample estimates:
## bias se.bias intercept se.intercept
## 9.8243 2.2254 -1.1220 0.3673
##
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 1.2821)
## - predictor: square root of the sum of the inverse group sample sizes
## - weight: inverse variance
## - reference: Pustejovsky & Rodgers (2019), RSM
To avoid the risk of false positives when working with effect sizes based on dichotomous outcomes, we can use an alternative regression test proposed by Peters and colleagues. To obtain the results of Peters' test, the log-transformed effect size is regressed on the inverse of the total sample size:
\[\log\psi_k = \beta_0 + \beta_1\frac{1}{n_k}\]
The regression model is fitted as a weighted linear model, with weights:
\[w_k = \frac{1}{\left(\dfrac{1}{a_k+c_k}+\dfrac{1}{b_k+d_k}\right)}\]
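To make the formula above concrete, here is a minimal numeric sketch (not the {meta} implementation) of the predictor and weight for one study, assuming a 2×2 table with cells a (events, treatment), b (non-events, treatment), c (events, control), d (non-events, control):

```python
def peters_weight(a, b, c, d):
    """Weight for Peters' regression test: inverse of
    1/(total events) + 1/(total non-events)."""
    return 1.0 / (1.0 / (a + c) + 1.0 / (b + d))

def peters_predictor(a, b, c, d):
    """Predictor: inverse of the total sample size."""
    return 1.0 / (a + b + c + d)

# Hypothetical study: 10/90 events vs. non-events in each group
w = peters_weight(10, 90, 10, 90)      # 1/(1/20 + 1/180) = 18
x = peters_predictor(10, 90, 10, 90)   # 1/200
print(round(w, 6), x)  # 18.0 0.005
```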
metabias(m.bin, method.bias = "peters")
## Review: Depression and Mortality
##
## Linear regression test of funnel plot asymmetry
##
## Test result: t = -0.08, df = 16, p-value = 0.9368
##
## Sample estimates:
## bias se.bias intercept se.intercept
## -11.1728 138.6121 0.5731 0.1076
##
## Details:
## - multiplicative residual heterogeneity variance (tau^2 = 40.2747)
## - predictor: inverse of total sample size
## - weight: inverse variance of average event probability
## - reference: Peters et al. (2006), JAMA
We recommend testing for funnel plot asymmetry only when the meta-analysis contains a sufficiently large number of studies. With few studies, the statistical power of Egger's or Peters' test may be too low to detect genuine asymmetry. A common recommendation is K > 10.
Once we know that publication bias exists in our data, we need a way to calculate a bias-corrected estimate of the true effect size. However, as we have seen, publication bias cannot be measured directly. We can only use small-study effects as a proxy for it. Consequently, we can only adjust for small-study effects to obtain a corrected effect estimate, not for publication bias itself.
One of the most common methods to adjust for funnel plot asymmetry is Duval and Tweedie's trim-and-fill method. Its idea is simple: "missing" effects are imputed until the funnel plot is symmetric. The pooled effect size of the resulting "extended" data set then serves as the estimate corrected for small-study effects. This is achieved by a simple algorithm that involves "trimming" and "filling" of effects.
Trimming: all outlying studies in the funnel plot are identified. Once identified, these studies are trimmed: they are removed from the analysis, and the pooled effect is recalculated, usually with a fixed-effect model.
Filling: the recalculated pooled effect is then assumed to be the center of all effect sizes. For each trimmed study, an additional effect size is added that mirrors its result on the other side of the funnel. For example, if the recalculated mean effect is 0.5 and a trimmed study has an effect of 0.8, the mirrored study is assigned an effect of 0.2. After this has been done for all trimmed studies, the funnel plot looks roughly symmetric. Based on all data, including the trimmed and imputed effect sizes, the mean effect is recalculated once more (typically using a random-effects model). Its result serves as the estimate of the corrected pooled effect size.
An important caveat is that this method does not produce reliable results when between-study heterogeneity is large. When studies do not share one true effect, even large studies may deviate substantially from the average effect. This means such studies also get trimmed and filled, even though they are unlikely to be affected by publication bias. It is easy to see that this can lead to invalid results.
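The fill step described above can be sketched as follows. This is only a toy illustration of the mirroring logic, with made-up effect sizes; the actual trimfill L-estimator also iterates the trimming step:

```python
def mirror_effects(trimmed_effects, recalculated_mean):
    """Impute mirror-image effects around the recalculated pooled mean:
    an effect of 0.8 around a mean of 0.5 yields a mirrored effect of 0.2."""
    return [2 * recalculated_mean - es for es in trimmed_effects]

filled = mirror_effects([0.8, 1.1], recalculated_mean=0.5)
print([round(es, 3) for es in filled])  # [0.2, -0.1]
```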
m.gen$I2
## [1] 0.6263947
# Using all studies
tf <- trimfill(m.gen)
# Analyze with outliers removed
tf.no.out <- trimfill(update(m.gen,
subset = -c(3, 16)))
summary(tf)
## Review: Third Wave Psychotherapies
##
## SMD 95%-CI %W(random)
## Call et al. 0.7091 [ 0.1979; 1.2203] 3.8
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 4.2
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 3.3
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 4.5
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 4.4
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 4.2
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 4.0
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 4.1
## Hintz et al. 0.2840 [-0.0453; 0.6133] 4.3
## Kang et al. 1.2751 [ 0.6142; 1.9360] 3.3
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 4.2
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 4.0
## Phang et al. 0.5407 [ 0.0619; 1.0196] 3.9
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 3.8
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 3.2
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 3.4
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 4.0
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 3.8
## Filled: Warnecke et al. 0.0520 [-0.4360; 0.5401] 3.8
## Filled: Song & Lindquist 0.0395 [-0.4048; 0.4837] 4.0
## Filled: Frogeli et al. 0.0220 [-0.3621; 0.4062] 4.2
## Filled: Call et al. -0.0571 [-0.5683; 0.4541] 3.8
## Filled: Gallego et al. -0.0729 [-0.5132; 0.3675] 4.0
## Filled: Kang et al. -0.6230 [-1.2839; 0.0379] 3.3
## Filled: Shapiro et al. -0.8277 [-1.4456; -0.2098] 3.4
## Filled: DanitzOrsillo -1.1391 [-1.8164; -0.4618] 3.3
##
## Number of studies: k = 26 (with 8 added studies)
## Number of observations: o = 3330
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.3428 [ 0.1015; 0.5841] 2.93 0.0072
## Prediction interval [-0.7254; 1.4110]
##
## Quantifying heterogeneity:
## tau^2 = 0.2557 [0.1456; 0.6642]; tau = 0.5056 [0.3816; 0.8150]
## I^2 = 76.2% [65.4%; 83.7%]; H = 2.05 [1.70; 2.47]
##
## Test of heterogeneity:
## Q d.f. p-value
## 105.15 25 < 0.0001
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 25)
## - Prediction interval based on t-distribution (df = 24)
## - Trim-and-fill method to adjust for funnel plot asymmetry (L-estimator)
summary(tf.no.out)
## Review: Third Wave Psychotherapies
##
## SMD 95%-CI %W(random)
## Call et al. 0.7091 [ 0.1979; 1.2203] 3.8
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 5.2
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 7.5
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 6.7
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 5.2
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 4.5
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 4.9
## Hintz et al. 0.2840 [-0.0453; 0.6133] 6.0
## Kang et al. 1.2751 [ 0.6142; 1.9360] 2.7
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 5.3
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 4.4
## Phang et al. 0.5407 [ 0.0619; 1.0196] 4.1
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 3.9
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 2.5
## Song & Lindquist 0.6126 [ 0.1683; 1.0569] 4.5
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 4.0
## Filled: Warnecke et al. 0.0520 [-0.4360; 0.5401] 4.0
## Filled: Song & Lindquist 0.0395 [-0.4048; 0.4837] 4.5
## Filled: Frogeli et al. 0.0220 [-0.3621; 0.4062] 5.2
## Filled: Call et al. -0.0571 [-0.5683; 0.4541] 3.8
## Filled: Gallego et al. -0.0729 [-0.5132; 0.3675] 4.5
## Filled: Kang et al. -0.6230 [-1.2839; 0.0379] 2.7
##
## Number of studies: k = 22 (with 6 added studies)
## Number of observations: o = 2974
##
## SMD 95%-CI t p-value
## Random effects model (HK) 0.3391 [ 0.1904; 0.4878] 4.74 0.0001
## Prediction interval [-0.1100; 0.7882]
##
## Quantifying heterogeneity:
## tau^2 = 0.0421 [0.0116; 0.2181]; tau = 0.2053 [0.1079; 0.4671]
## I^2 = 50.5% [19.1%; 69.7%]; H = 1.42 [1.11; 1.82]
##
## Test of heterogeneity:
## Q d.f. p-value
## 42.42 21 0.0037
##
## Details on meta-analytical method:
## - Inverse variance method
## - Restricted maximum-likelihood estimator for tau^2
## - Q-Profile method for confidence interval of tau^2 and tau
## - Hartung-Knapp adjustment for random effects model (df = 21)
## - Prediction interval based on t-distribution (df = 20)
## - Trim-and-fill method to adjust for funnel plot asymmetry (L-estimator)
# Define fill colors for contour
contour <- c(0.9, 0.95, 0.99)
col.contour <- c("gray75", "gray85", "gray95")
ld <- c("p < 0.1", "p < 0.05", "p < 0.01")
# Use 'par' to create two plots in one row (row, columns)
par(mfrow=c(1,2))
# Contour-enhanced funnel plot (full data)
funnel.meta(tf,
xlim = c(-1.5, 2), contour = contour,
col.contour = col.contour)
legend(x = 1.1, y = 0.01,
legend = ld, fill = col.contour)
title("Funnel Plot (Trim & Fill Method)")
# Contour-enhanced funnel plot (outliers removed)
funnel.meta(tf.no.out,
xlim = c(-1.5, 2), contour = contour,
col.contour = col.contour)
legend(x = 1.1, y = 0.01,
legend = ld, fill = col.contour)
title("Funnel Plot (Trim & Fill Method) - Outliers Removed")
PET-PEESE is actually a combination of two methods: the precision-effect test (PET) and the precision-effect estimate with standard error (PEESE). Let us begin with the former. The PET method is based on a simple regression model in which a study's effect size is regressed on its standard error:
\[\theta_k = \beta_0 + \beta_1SE_{\theta_k}\]
PET quantifies asymmetry via \(\beta_1\), but our focus is on \(\beta_0\). In the formula above, the intercept represents the so-called limit effect: the expected effect size of a study whose standard error is zero. This is equivalent to the observed effect size measured in the absence of sampling error. The idea behind PET is to control for the influence of small studies by including the standard error as a predictor.
\[\hat\theta_{\text{PET}} = \hat\beta_{0_{\mathrm{PET}}}\]
The formula for the PEESE method is very similar. The only difference is that the squared standard error is used as the predictor:
\[\theta_k = \beta_0 + \beta_1SE_{\theta_k}^2\]
The rationale behind squaring the standard error is that small studies are especially prone to reporting inflated effects, whereas this problem is presumably negligible in studies with high statistical power.
\[\hat\theta_{\text{PET-PEESE}}=\begin{cases} \mathrm{P}(\beta_{0_{\text{PET}}} = 0) <0.1~\mathrm{and}~\hat\beta_{0_{\text{PET}}} > 0: & \hat\beta_{0_{\text{PEESE}}}\\ \text{else}: & \hat\beta_{0_{\text{PET}}} \end{cases}\]
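The conditional rule above can be sketched as a small helper function (a sketch with a hypothetical function name; in practice both intercepts and the PET p-value come from weighted lm fits):

```python
def pet_peese_estimate(b0_pet, p_pet, b0_peese):
    """PET-PEESE conditional estimator: use the PEESE intercept only if
    the PET intercept is significantly (p < 0.1) greater than zero;
    otherwise fall back to the PET intercept."""
    if p_pet < 0.1 and b0_pet > 0:
        return b0_peese
    return b0_pet

# With a negative PET intercept (-1.35, p = 0.0076), the
# condition fails and the PET estimate is retained
print(pet_peese_estimate(-1.35, 0.0076, -0.44))  # -1.35
```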
There is currently no direct implementation of PET-PEESE in {meta}, so we write our own code using the linear model function lm.
# Build data set, starting with the effect size
dat.petpeese <- data.frame(TE = m.gen$TE)
# Experimental (n1) and control group (n2) sample size
n1 <- c(62, 72, 44, 135, 103, 71, 69, 68, 95,
43, 79, 61, 62, 60, 43, 42, 64, 63)
n2 <- c(51, 78, 41, 115, 100, 79, 62, 72, 80,
44, 72, 67, 59, 54, 41, 51, 66, 55)
# Calculate modified SE
dat.petpeese$seTE_c <- sqrt((n1+n2)/(n1*n2))
# Add squared modified SE (= variance)
dat.petpeese$seTE_c2 <- dat.petpeese$seTE_c^2
dat.petpeese$w_k <- 1/dat.petpeese$seTE_c^2
# PET
pet <- lm(TE ~ seTE_c, weights = w_k, data = dat.petpeese)
summary(pet)$coefficients
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -1.353476 0.443164 -3.054119 0.0075732464
## seTE_c 11.190288 2.566719 4.359764 0.0004862409
# PEESE
peese <- lm(TE ~ seTE_c2, weights = w_k, data = dat.petpeese)
summary(peese)$coefficients
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) -0.4366495 0.2229352 -1.958639 0.0678222117
## seTE_c2 33.3609862 7.1784369 4.647389 0.0002683009
Note that PET-PEESE may overestimate the bias and thus overcorrect the pooled effect.
The idea behind Rücker's method is to build a meta-analysis model that explicitly accounts for bias due to small-study effects. It takes into account that, when small-study effects are present, a study's effect size and standard error are not independent. This is assumed because we know that publication bias particularly affects small studies, whose effect sizes will therefore be larger than those of large studies.
\[\hat\theta_k = \mu_* + \theta_{\text{Bias}}(\epsilon_k+\zeta_k)\\\mathrm{E}(\hat\theta_k) \rightarrow \mu_{*} + \theta_{\text{Bias}}\zeta_k ~ ~ ~ ~ \text{as} ~ ~ ~ ~ \epsilon_k \rightarrow 0\\\hat\theta_{*} = \mu_* + \theta_{\mathrm{Bias}}\tau\\\hat\theta_{{*}_k} = \mu_* + \sqrt{\dfrac{\tau^2}{SE^2_k + \tau^2}}(\hat\theta_k - \mu_*)\] An advantage of Rücker's method over PET-PEESE is that it incorporates the between-study heterogeneity \(\tau^2\).
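The last line above, the study-level shrinkage formula, can be illustrated numerically. This is only a sketch with made-up values; limitmeta estimates \(\mu_*\) and \(\tau^2\) from the data:

```python
def shrunken_estimate(theta_k, se_k, mu_star, tau2):
    """Shrink an observed effect toward the global mean mu_star;
    studies with large standard errors are shrunken more strongly."""
    shrink = (tau2 / (se_k**2 + tau2)) ** 0.5
    return mu_star + shrink * (theta_k - mu_star)

# Same observed effect (0.8), but a less precise study (SE = 0.4)
# is pulled much closer to mu_star = 0.1 than a precise one (SE = 0.05)
precise = shrunken_estimate(0.8, 0.05, mu_star=0.1, tau2=0.08)
imprecise = shrunken_estimate(0.8, 0.40, mu_star=0.1, tau2=0.08)
print(precise > imprecise)  # True
```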
# Install 'metasens', then load from library
library(metasens)
## Loading 'metasens' package (version 1.5-2).
## Readers of 'Meta-Analysis with R (Use R!)' should install
## older version of 'metasens' package: https://tinyurl.com/jhv33bfm
# Run limit meta-analysis
limitmeta(m.gen)
## Review: Third Wave Psychotherapies
##
## Result of limit meta-analysis:
##
## Random effects model SMD 95%-CI z pval
## Adjusted estimate -0.0345 [-0.4243; 0.3552] -0.17 0.8621
## Unadjusted estimate 0.5771 [ 0.3782; 0.7760] 6.12 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.0820; I^2 = 62.6% [37.9%; 77.5%]; G^2 = 99.0%
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Test of small-study effects:
## Q-Q' d.f. p-value
## 26.28 1 < 0.0001
##
## Test of residual heterogeneity beyond small-study effects:
## Q' d.f. p-value
## 19.22 16 0.2573
##
## Details on adjustment method:
## - expectation (beta0)
# Create limitmeta object
lmeta <- limitmeta(m.gen)
# Funnel with curve
funnel.limitmeta(lmeta, xlim = c(-0.5, 2))
# Funnel with curve and shrunken study estimates
funnel.limitmeta(lmeta, xlim = c(-0.5, 2), shrunken = TRUE)
# binary data
limitmeta(m.bin)
## Review: Depression and Mortality
##
## Result of limit meta-analysis:
##
## Random effects model RR 95%-CI z pval
## Adjusted estimate 2.2604 [1.6805; 3.0405] 5.39 < 0.0001
## Unadjusted estimate 2.0217 [1.5786; 2.5892] 6.00 < 0.0001
##
## Quantifying heterogeneity:
## tau^2 = 0.1865; I^2 = 77.2% [64.3%; 85.4%]; G^2 = 61.9%
##
## Test of heterogeneity:
## Q d.f. p-value
## 74.49 17 < 0.0001
##
## Test of small-study effects:
## Q-Q' d.f. p-value
## 0.34 1 0.5605
##
## Test of residual heterogeneity beyond small-study effects:
## Q' d.f. p-value
## 74.15 16 < 0.0001
##
## Details on adjustment method:
## - expectation (beta0)
Publication bias is, in many cases, driven by p-values. P-curve focuses on the distribution of p-values and can also deal with p-hacking. The p-curve method rests on the fact that the shape of a p-value histogram depends on the sample sizes of the studies and, more importantly, on the true effect size underlying our data.
When p-hacking is present, p-values close to 0.05 occur more frequently. P-curve analysis is not concerned with publication bias itself but with evidential value: we want to make sure that the effect we estimate is not spurious, i.e., not an artifact of selective reporting.
To assess the presence of evidential value, p-curve uses two types of tests: the test for right-skewness and the test for 33% power.
If the p-value distribution is right-skewed, this indicates that our results are driven by a true effect. The right-skewness test uses a binomial test.
k <- 7 # number of studies p<0.025
n <- 8 # total number of significant studies
p <- 0.5 # assumed probability of k (null hypothesis)
binom.test(k, n, p, alternative = "greater")$p.value
## [1] 0.03515625
This test, however, discards a lot of information, so we use another approach: for each p-value, we calculate the probability of the p-value (the pp-value). A pp-value expresses how likely it is to obtain this value when the p-values follow a uniform distribution (i.e., when the p-curve is flat). The pp-values are then combined using Fisher's method:
\[\chi^2_{2K} = -2 \sum^K_{k=1} \log(pp_k)\]
p <- c(0.001, 0.002, 0.003, 0.004, 0.03)
# pp-value: probability of p under a uniform distribution
# on [0, 0.05], i.e. p/0.05
pp <- p*20
# Show pp values
pp
## [1] 0.02 0.04 0.06 0.08 0.60
chi2 <- -2*sum(log(pp))
chi2
## [1] 25.96173
pchisq(chi2, df = 10, lower.tail = FALSE)
## [1] 0.003792
The test above depends on the statistical power of our data. A non-significant right-skewness test therefore does not automatically mean that there is no evidential value. The solution is the test for 33% power which, put simply, tests whether the curve is flat.
library(dmetar)
library(meta)
# Update m.gen to exclude outliers
m.gen_update <- update.meta(m.gen, subset = -c(3, 16))
# Run p-curve analysis
pcurve(m.gen_update)
## P-curve analysis
## -----------------------
## - Total number of provided studies: k = 16
## - Total number of p<0.05 studies included into the analysis: k = 9 (56.25%)
## - Total number of studies with p<0.025: k = 8 (50%)
##
## Results
## -----------------------
## pBinomial zFull pFull zHalf pHalf
## Right-skewness test 0.020 -3.797 0.000 -2.743 0.003
## Flatness test 0.952 1.540 0.938 3.422 1.000
## Note: p-values of 0 or 1 correspond to p<0.001 and p>0.999, respectively.
## Power Estimate: 66% (31.1%-87.8%)
##
## Evidential value
## -----------------------
## - Evidential value present: yes
## - Evidential value absent/inadequate: no
These results indicate the presence of evidential value and of a true non-zero effect. We still cannot rule out that publication bias influenced the meta-analysis results. Interestingly, this finding is not consistent with the results of some of the small-study effect methods we applied earlier.
Effect size estimation with p-curve
# Add experimental (n1) and control group (n2) sample size
# Sample sizes of study 3 and 16 removed
n1 <- c(62, 72, 135, 103, 71, 69, 68, 95,
43, 79, 61, 62, 60, 43, 64, 63)
n2 <- c(51, 78, 115, 100, 79, 62, 72, 80,
44, 72, 67, 59, 54, 41, 66, 55)
# Run p-curve analysis with effect estimation
pcurve(m.gen_update,
effect.estimation = TRUE,
N = n1+n2,
dmin = 0,
dmax = 1)
## P-curve analysis
## -----------------------
## - Total number of provided studies: k = 16
## - Total number of p<0.05 studies included into the analysis: k = 9 (56.25%)
## - Total number of studies with p<0.025: k = 8 (50%)
##
## Results
## -----------------------
## pBinomial zFull pFull zHalf pHalf
## Right-skewness test 0.020 -3.797 0.000 -2.743 0.003
## Flatness test 0.952 1.540 0.938 3.422 1.000
## Note: p-values of 0 or 1 correspond to p<0.001 and p>0.999, respectively.
## Power Estimate: 66% (31.1%-87.8%)
##
## Evidential value
## -----------------------
## - Evidential value present: yes
## - Evidential value absent/inadequate: no
##
## P-curve's estimate of the true effect size: d=0.389
Selection models can be seen as a generalized version of the previous methods. They can model any process by which we assume publication bias to have affected our results.
The idea behind all selection models is to specify a distribution that predicts, usually in a highly idealized way, how likely it is that a study is published (i.e., "selected") depending on its results. Typically, that result is the study's p-value, so a selection model can be seen as a function that returns the publication probability for different p-values. Once such a selection function has been defined, it can be used to "remove" the assumed bias due to selective publication and derive a corrected estimate of the true effect size.
Selection models are a very versatile method and can model various publication bias processes. However, they only provide valid results when the assumed model is adequate, and they usually require a large number of studies. A very simple selection model, the three-parameter model, can also be used with smaller data sets.
\[f^*(x_k) = \frac{w(p_k)f(x_k)}{\int w(p_k) f(x_k) dx_k} \\ w(p_k) =\begin{cases} \omega_1~~~\text{if}~~~0 \leq p_k \leq a_1 \\ \omega_2~~~\text{if}~~~a_1 < p_k \leq a_2 \\ \omega_3~~~\text{if}~~~a_2 < p_k \leq a_3 \\ \omega_4~~~\text{if}~~~a_3 < p_k \leq a_4~~~(\text{where}~~~a_4=1)\end{cases}\]
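The step weight function \(w(p_k)\) can be sketched directly. Here the cut-point at 0.025 matches the three-parameter model fitted below, while the weight values are purely hypothetical (in a real analysis they are estimated from the data):

```python
def step_weights(p, cuts, omegas):
    """Return the selection weight for a p-value, given ascending
    cut-points (the last one being 1) and one weight per interval."""
    for cut, omega in zip(cuts, omegas):
        if p <= cut:
            return omega
    return omegas[-1]

# Three-parameter model: one cut at 0.025, weights (1, omega2)
cuts = [0.025, 1.0]
omegas = [1.0, 0.4]   # hypothetical selection likelihoods
print(step_weights(0.01, cuts, omegas))  # 1.0
print(step_weights(0.30, cuts, omegas))  # 0.4
```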
library(metafor)
# Three-Parameter Selection Model
# We name the new object 'm.rma'
m.rma <- rma(yi = TE,
sei = seTE,
data = ThirdWave,
slab = Author,
method = "REML",
test = "knha")
selmodel(m.rma,
type = "stepfun",
steps = 0.025)
##
## Random-Effects Model (k = 18; tau^2 estimator: ML)
##
## tau^2 (estimated amount of total heterogeneity): 0.0714 (SE = 0.0478)
## tau (square root of estimated tau^2 value): 0.2672
##
## Test for Heterogeneity:
## LRT(df = 1) = 7.5647, p-val = 0.0060
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.5893 0.1274 4.6260 <.0001 0.3396 0.8390 ***
##
## Test for Selection Model Parameters:
## LRT(df = 1) = 0.0337, p-val = 0.8544
##
## Selection Model Results:
##
## k estimate se zval pval ci.lb ci.ub
## 0 < p <= 0.025 11 1.0000 --- --- --- --- ---
## 0.025 < p <= 1 7 1.1500 0.8755 0.1714 0.8639 0.0000 2.8660
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
selmodel(m.rma,
type = "stepfun",
steps = 0.05)
##
## Random-Effects Model (k = 18; tau^2 estimator: ML)
##
## tau^2 (estimated amount of total heterogeneity): 0.1038 (SE = 0.0772)
## tau (square root of estimated tau^2 value): 0.3222
##
## Test for Heterogeneity:
## LRT(df = 1) = 7.4587, p-val = 0.0063
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.3661 0.1755 2.0863 0.0370 0.0222 0.7100 *
##
## Test for Selection Model Parameters:
## LRT(df = 1) = 3.9970, p-val = 0.0456
##
## Selection Model Results:
##
## k estimate se zval pval ci.lb ci.ub
## 0 < p <= 0.05 15 1.0000 --- --- --- --- ---
## 0.05 < p <= 1 3 0.1786 0.1665 -4.9327 <.0001 0.0000 0.5050 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Fixed Weights Selection Model
# Define the cut-points
a <- c(0.005, 0.01, 0.05, 0.10, 0.25, 0.35, 0.50,
0.65, 0.75, 0.90, 0.95, 0.99, 0.995)
# Define the selection likelihood for each interval
# (moderate/severe selection)
w.moderate <- c(1, 0.99, 0.95, 0.80, 0.75, 0.65, 0.60,
0.55, 0.50, 0.50, 0.50, 0.50, 0.50, 0.50)
w.severe <- c(1, 0.99, 0.90, 0.75, 0.60, 0.50, 0.40, 0.35,
0.30, 0.25, 0.10, 0.10, 0.10, 0.10)
# Fit model assuming moderate selection
selmodel(m.rma, type = "stepfun", steps = a, delta = w.moderate)
##
## Random-Effects Model (k = 18; tau^2 estimator: ML)
##
## tau^2 (estimated amount of total heterogeneity): 0.0847 (SE = 0.0589)
## tau (square root of estimated tau^2 value): 0.2910
##
## Test for Heterogeneity:
## LRT(df = 1) = 7.9631, p-val = 0.0048
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.5212 0.0935 5.5741 <.0001 0.3380 0.7045 ***
##
## Test for Selection Model Parameters:
## LRT(df = 0) = 2.0263, p-val = NA
##
## Selection Model Results:
##
## k estimate se zval pval ci.lb ci.ub
## 0 < p <= 0.005 8 1.0000 --- --- --- --- ---
## 0.005 < p <= 0.01 2 0.9900 --- --- --- --- ---
## 0.01 < p <= 0.05 5 0.9500 --- --- --- --- ---
## 0.05 < p <= 0.1 2 0.8000 --- --- --- --- ---
## 0.1 < p <= 0.25 0 0.7500 --- --- --- --- ---
## 0.25 < p <= 0.35 1 0.6500 --- --- --- --- ---
## 0.35 < p <= 0.5 0 0.6000 --- --- --- --- ---
## 0.5 < p <= 0.65 0 0.5500 --- --- --- --- ---
## 0.65 < p <= 0.75 0 0.5000 --- --- --- --- ---
## 0.75 < p <= 0.9 0 0.5000 --- --- --- --- ---
## 0.9 < p <= 0.95 0 0.5000 --- --- --- --- ---
## 0.95 < p <= 0.99 0 0.5000 --- --- --- --- ---
## 0.99 < p <= 0.995 0 0.5000 --- --- --- --- ---
## 0.995 < p <= 1 0 0.5000 --- --- --- --- ---
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
# Fit model assuming severe selection
selmodel(m.rma, type = "stepfun", steps = a, delta = w.severe)
##
## Random-Effects Model (k = 18; tau^2 estimator: ML)
##
## tau^2 (estimated amount of total heterogeneity): 0.1118 (SE = 0.0849)
## tau (square root of estimated tau^2 value): 0.3344
##
## Test for Heterogeneity:
## LRT(df = 1) = 8.4657, p-val = 0.0036
##
## Model Results:
##
## estimate se zval pval ci.lb ci.ub
## 0.4601 0.1211 3.8009 0.0001 0.2229 0.6974 ***
##
## Test for Selection Model Parameters:
## LRT(df = 0) = 3.8589, p-val = NA
##
## Selection Model Results:
##
## k estimate se zval pval ci.lb ci.ub
## 0 < p <= 0.005 8 1.0000 --- --- --- --- ---
## 0.005 < p <= 0.01 2 0.9900 --- --- --- --- ---
## 0.01 < p <= 0.05 5 0.9000 --- --- --- --- ---
## 0.05 < p <= 0.1 2 0.7500 --- --- --- --- ---
## 0.1 < p <= 0.25 0 0.6000 --- --- --- --- ---
## 0.25 < p <= 0.35 1 0.5000 --- --- --- --- ---
## 0.35 < p <= 0.5 0 0.4000 --- --- --- --- ---
## 0.5 < p <= 0.65 0 0.3500 --- --- --- --- ---
## 0.65 < p <= 0.75 0 0.3000 --- --- --- --- ---
## 0.75 < p <= 0.9 0 0.2500 --- --- --- --- ---
## 0.9 < p <= 0.95 0 0.1000 --- --- --- --- ---
## 0.95 < p <= 0.99 0 0.1000 --- --- --- --- ---
## 0.99 < p <= 0.995 0 0.1000 --- --- --- --- ---
## 0.995 < p <= 1 0 0.1000 --- --- --- --- ---
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The different publication bias methods each have strengths and weaknesses, and their results can diverge considerably. The best approach is to apply several methods and interpret their results with caution. It is important to remember that the best way to control for publication bias is an adequate search for unpublished evidence.
In fact, the meta-analysis model we discussed before is itself a "multilevel" model.
This structure can be extended further to better capture certain mechanisms that generated our data. This leads to three-level models.
Statistical independence is one of the core assumptions when we pool effect sizes in a meta-analysis. If there are dependencies between effect sizes (i.e., the effect sizes are correlated), heterogeneity is artificially reduced, which can lead to false-positive results. This problem is known as the unit-of-analysis error, which we have discussed before. Dependence between effect sizes can stem from different sources:
Dependence introduced by the authors of the individual studies. For example, the scientists conducting a study may have collected data at multiple sites, compared multiple interventions to a single control group, or measured the same outcome with different questionnaires. In all these cases, we can assume that some dependence was introduced into the reported data.
Dependence introduced by the meta-analyst. For example, a meta-analysis may focus on a certain psychological mechanism and include studies conducted in different cultural regions of the world (e.g., East Asian and Western European societies). Depending on the type of psychological mechanism, results of studies from the same cultural region may be more similar to each other than to those from other regions.
We can account for such dependencies by adding a third level to the structure of our meta-analysis model. For example, we could nest effect sizes based on different questionnaires within studies, or we could create a model in which studies are nested within cultural regions. This results in a three-level meta-analysis model, as illustrated in the figure below.
The three-level model is as follows:
\[\hat\theta_{ij} = \theta_{ij} + \epsilon_{ij}\\\theta_{ij} = \kappa_{j} + \zeta_{(2)ij}\\\kappa_{j} = \mu + \zeta_{(3)j}\]
\[\hat\theta_{ij} = \mu + \zeta_{(2)ij} + \zeta_{(3)j} + \epsilon_{ij}\]
Below, we fit a three-level (hierarchical) model using the {metafor} package.
library(metafor)
# Load data set from 'dmetar'
library(dmetar)
data("Chernobyl")
head(Chernobyl)
## author cor n z se.z var.z
## 1 Aghajanyan & Suskov (2009) 0.2061 91 0.2090949 0.10660036 0.011363636
## 2 Aghajanyan & Suskov (2009) 0.2687 91 0.2754621 0.10660036 0.011363636
## 3 Aghajanyan & Suskov (2009) 0.2049 92 0.2078420 0.10599979 0.011235955
## 4 Aghajanyan & Suskov (2009) 0.2672 92 0.2738461 0.10599979 0.011235955
## 5 Alexanin et al. (2010) 0.9317 559 1.6711230 0.04240945 0.001798561
## 6 Alexanin et al. (2010) 0.4429 559 0.4758327 0.04240945 0.001798561
## radiation es.id
## 1 low id_1
## 2 low id_2
## 3 low id_3
## 4 low id_4
## 5 low id_5
## 6 low id_6
A peculiarity of this data set is that it contains repeated entries for some authors. This is because most studies in this meta-analysis contributed more than one observed effect size. Given this structure, the effect sizes in our data set are clearly not independent: they follow a nested structure, in which several effect sizes are nested within one study. To adequately model these dependencies, it therefore seems like a good idea to fit a three-level meta-analysis.
full.model <- rma.mv(yi = z,
V = var.z,
slab = author,
data = Chernobyl,
random = ~ 1 | author/es.id, #~ 1 | cluster/effects_within_cluster
test = "t",
method = "REML")
summary(full.model)
##
## Multivariate Meta-Analysis Model (k = 33; method: REML)
##
## logLik Deviance AIC BIC AICc
## -21.1229 42.2458 48.2458 52.6430 49.1029
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.1788 0.4229 14 no author
## sigma^2.2 0.1194 0.3455 33 no author/es.id
##
## Test for Heterogeneity:
## Q(df = 32) = 4195.8268, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## 0.5231 0.1341 3.9008 32 0.0005 0.2500 0.7963 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
library(esc)
convert_z2r(0.52)
## [1] 0.4777
We can answer this question by calculating a multilevel version of I². In a conventional meta-analysis, I² represents the amount of variation not attributable to sampling error, i.e., the between-study heterogeneity. In three-level models, this heterogeneity variance is split into two parts: one attributable to true effect size differences within clusters, and one attributable to between-cluster differences. Accordingly, there are two I² values, quantifying the percentage of total variation associated with level 2 or level 3.
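The decomposition can be sketched numerically. Using the variance components printed by the model above (σ²₃ = 0.1788 between studies, σ²₂ = 0.1194 within studies) and a "typical" within-study sampling variance ṽ (here ≈ 0.0038, back-calculated from the var.comp output and therefore an assumption), the level shares are:

```python
def multilevel_i2(sigma2_l3, sigma2_l2, v_typical):
    """Split total variance into sampling-error (level 1),
    within-cluster (level 2), and between-cluster (level 3) percentages."""
    total = sigma2_l3 + sigma2_l2 + v_typical
    return (100 * v_typical / total,
            100 * sigma2_l2 / total,
            100 * sigma2_l3 / total)

l1, l2, l3 = multilevel_i2(0.1788, 0.1194, 0.0038)
print(round(l2, 1), round(l3, 1))  # 39.5 59.2
```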
i2 <- var.comp(full.model)
summary(i2)
## % of total variance I2
## Level 1 1.254966 ---
## Level 2 39.525499 39.53
## Level 3 59.219534 59.22
## Total I2: 98.75%
plot(i2)
Following Occam's razor, we should choose the simplest model that adequately fits our data.
The {metafor} package allows us to compare our three-level model to a model with one level removed. To do this, we use the rma.mv function again, but this time fix the variance component of one level at zero.
l3.removed <- rma.mv(yi = z,
V = var.z,
slab = author,
data = Chernobyl,
random = ~ 1 | author/es.id,
test = "t",
method = "REML",
sigma2 = c(0, NA)) # fix the level 3 (author) variance at zero
summary(l3.removed)
##
## Multivariate Meta-Analysis Model (k = 33; method: REML)
##
## logLik Deviance AIC BIC AICc
## -29.1742 58.3483 62.3483 65.2798 62.7621
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.0000 0.0000 14 yes author
## sigma^2.2 0.3550 0.5959 33 no author/es.id
##
## Test for Heterogeneity:
## Q(df = 32) = 4195.8268, p-val < .0001
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## 0.5985 0.1051 5.6938 32 <.0001 0.3844 0.8126 ***
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(full.model, l3.removed)
##
## df AIC BIC AICc logLik LRT pval QE
## Full 3 48.2458 52.6430 49.1029 -21.1229 4195.8268
## Reduced 2 62.3483 65.2798 62.7621 -29.1742 16.1025 <.0001 4195.8268
Once the three-level model is set up, it is possible to assess putative moderators of the overall effect. Previously, we saw that subgroup analyses can be expressed as a meta-regression model with a dummy-coded predictor. Analogously, we can add regression terms to our "multilevel" model, which leads to a three-level mixed-effects model:
\[\hat\theta_{ij} = \theta + \beta x_i + \zeta_{(2)ij} + \zeta_{(3)j} + \epsilon_{ij}\]
mod.model <- rma.mv(yi = z, V = var.z,
slab = author, data = Chernobyl,
random = ~ 1 | author/es.id,
test = "t", method = "REML",
mods = ~ radiation)
## Warning: 2 rows with NAs omitted from model fitting.
summary(mod.model)
##
## Multivariate Meta-Analysis Model (k = 31; method: REML)
##
## logLik Deviance AIC BIC AICc
## -18.0031 36.0063 46.0063 52.6673 48.7335
##
## Variance Components:
##
## estim sqrt nlvls fixed factor
## sigma^2.1 0.2209 0.4700 12 no author
## sigma^2.2 0.1220 0.3493 31 no author/es.id
##
## Test for Residual Heterogeneity:
## QE(df = 28) = 4064.4837, p-val < .0001
##
## Test of Moderators (coefficients 2:3):
## F(df1 = 2, df2 = 28) = 0.4512, p-val = 0.6414
##
## Model Results:
##
## estimate se tval df pval ci.lb ci.ub
## intrcpt 0.5898 0.3606 1.6357 28 0.1131 -0.1488 1.3284
## radiationlow -0.1967 0.4075 -0.4827 28 0.6330 -1.0315 0.6381
## radiationmedium 0.2035 0.5410 0.3761 28 0.7097 -0.9047 1.3116
##
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Compared to a "conventional" meta-analysis, the hierarchical model we fitted previously clearly represents our data better, since a standard meta-analysis assumes that all effect sizes are completely independent. Yet this is still a simplification of reality. In practice, dependencies between effect sizes are often more complex than our nested model allows.
We already saw this with the Chernobyl data set. In these data, most studies provide more than one effect size, but for different reasons across studies. Some studies examined the effect of radiation in different target populations and therefore report more than one effect size. Other studies applied different methods to the same sample, which also means that the study provides more than one effect size.
When multiple effect sizes in one study are based on the same sample, we expect their sampling errors to be correlated. Our three-level model, however, does not yet account for this. The model above assumes that, within clusters/studies, the correlation (and thus the covariance) between sampling errors is zero; or, put differently, it assumes that effect size estimates within a cluster or study are independent.
We will therefore spend some time on an extended three-level structure, the so-called Correlated and Hierarchical Effects (CHE) model. Like our previous three-level model, the CHE model allows multiple effect sizes to be combined into larger clusters based on some commonality (e.g., because they stem from the same study, working group, cultural region, etc.).
In addition, this model explicitly accounts for the fact that some effect sizes within a cluster are based on the same sample (e.g., because several measurements were taken), so that their sampling errors are correlated. In many real-world situations, the CHE model should therefore be a good starting point, especially when the dependence structure of our data is complex or only partially known.
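The "correlated" part of the CHE model is usually operationalized by imputing a working covariance matrix with a constant correlation ρ between effect sizes from the same study, i.e., Cov(eᵢ, eⱼ) = ρ√(vᵢvⱼ). A sketch of this construction for one cluster (mirroring what clubSandwich's impute_covariance_matrix does, with made-up variances):

```python
def impute_block(variances, rho):
    """Working covariance matrix for one study: sampling variances on
    the diagonal, rho * sqrt(v_i * v_j) off the diagonal."""
    k = len(variances)
    return [[variances[i] if i == j
             else rho * (variances[i] * variances[j]) ** 0.5
             for j in range(k)] for i in range(k)]

block = impute_block([0.04, 0.09], rho=0.6)
print([[round(x, 6) for x in row] for row in block])
# [[0.04, 0.036], [0.036, 0.09]]
```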
Besides the CHE model, we will also discuss robust variance estimation (RVE) in meta-analysis. This is a set of methods that has frequently been used in the past to handle dependent effect sizes in meta-analyses. At the core of RVE lies the so-called sandwich estimator. It can be combined with the CHE model, as well as other models, to obtain robust confidence intervals and p-values, even when our chosen model does not fully capture the intricate dependence structure of the data.
The robust variance estimator we cover here is a special version of the original method used in "normal" regression models. Hedges, Tipton, and Jackson (2010) proposed an adapted type of RVE for meta-regression models with dependent effect sizes, an approach that has been developed further over the past years.
To understand it, we first take another look at the meta-regression formula:
\[\boldsymbol{T}_{j}=\boldsymbol{X}_{j}\boldsymbol{\beta} + \boldsymbol{u}_j +\boldsymbol{e}_j\]
\[\begin{bmatrix}\boldsymbol{T}_1 \\\boldsymbol{T}_2 \\\vdots \\\boldsymbol{T}_J\end{bmatrix}=\begin{bmatrix}\boldsymbol{X}_1 \\\boldsymbol{X}_2 \\\vdots \\\boldsymbol{X}_J\end{bmatrix}\boldsymbol{\beta}+\begin{bmatrix}\boldsymbol{u}_1 \\\boldsymbol{u}_2 \\\vdots \\\boldsymbol{u}_J\end{bmatrix}+\begin{bmatrix}\boldsymbol{e}_1 \\\boldsymbol{e}_2 \\\vdots \\\boldsymbol{e}_J\end{bmatrix}\]
\[\boldsymbol{V}^{\text{R}}_{\boldsymbol{\hat\beta}} = \left(\sum^J_{j=1}\boldsymbol{X}_j^\top\boldsymbol{W}_j\boldsymbol{X}_j \right)^{-1} \left(\sum^J_{j=1}\boldsymbol{X}_j^\top\boldsymbol{W}_j \boldsymbol{A}_j\Phi_j \boldsymbol{A}_j \boldsymbol{W}_j \boldsymbol{X}_j \right) \left(\sum^J_{j=1}\boldsymbol{X}_j^\top\boldsymbol{W}_j\boldsymbol{X}_j \right)^{-1}\]
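The "bread-meat-bread" structure of this formula can be sketched for a toy intercept-only model with scalar weights per effect size. This is a sketch of the algebra only, using cluster residuals in place of the Φⱼ matrices (i.e., the naive CR0 estimator); clubSandwich additionally applies the CR2 small-sample adjustment via the Aⱼ matrices:

```python
def cr0_variance(clusters):
    """CR0 sandwich variance of the pooled (intercept-only) estimate.
    clusters: list of (weights, residuals) tuples, one per study."""
    # bread: sum of X'WX, which reduces to the sum of all weights
    bread = sum(sum(w) for w, _ in clusters)
    # meat: per-cluster (X'W e)(e'W X), summed over clusters
    meat = sum(sum(wi * ei for wi, ei in zip(w, e)) ** 2
               for w, e in clusters)
    return meat / bread ** 2

# Two hypothetical studies with two effect sizes each
clusters = [([10, 10], [0.2, 0.3]), ([5, 20], [-0.1, 0.1])]
v = cr0_variance(clusters)
print(v > 0)  # True
```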
library(clubSandwich)
## Registered S3 method overwritten by 'clubSandwich':
## method from
## bread.mlm sandwich
# constant sampling correlation assumption
rho <- 0.6
# constant sampling correlation working model
V <- with(Chernobyl,
impute_covariance_matrix(vi = var.z,
cluster = author,
r = rho))
che.model <- rma.mv(z ~ 1 + radiation,
V = V,
random = ~ 1 | author/es.id,
data = Chernobyl,
sparse = TRUE)
## Warning: 2 rows with NAs omitted from model fitting.
conf_int(che.model, vcov = "CR2")
## Coef. Estimate SE d.f. Lower 95% CI Upper 95% CI
## intrcpt 0.584 0.578 1.00 -6.76 7.93
## radiationlow -0.190 0.605 1.60 -3.52 3.14
## radiationmedium 0.207 0.603 1.98 -2.41 2.83
coef_test(che.model, vcov = "CR2")
## Coef. Estimate SE t-stat d.f. (Satt) p-val (Satt) Sig.
## intrcpt 0.584 0.578 1.010 1.00 0.497
## radiationlow -0.190 0.605 -0.315 1.60 0.789
## radiationmedium 0.207 0.603 0.344 1.98 0.764
Another, sometimes advantageous, way to test the coefficients in our model is a bootstrapping procedure, in particular a variant known as cluster wild bootstrapping. This method is well suited when the total number of studies in the meta-analysis is small, especially compared to RVE, which can produce overly conservative results in small samples (as we saw in the Chernobyl example).
The wild bootstrap is based on the residuals of a null model. The basic algorithm is: (1) fit the full model and calculate the corresponding test statistic (e.g., t or F); (2) fit a null model to the original data and extract its residuals e; (3) for each cluster or study, draw a random value from a suitable distribution and multiply that cluster's residuals by it; (4) add the transformed residuals to the null-model predictions to generate new bootstrap effect sizes; (5) refit the full model using the bootstrap effect sizes and calculate the test statistic again.
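The steps above can be sketched as a loop. This is a conceptual sketch only, with a trivial mean statistic and Rademacher (±1) weights, not the {wildmeta} implementation:

```python
import random

def cluster_wild_bootstrap(null_preds, residuals, stat_fn,
                           reps=1000, seed=1):
    """Cluster wild bootstrap: flip the sign of each cluster's residuals
    at random, rebuild the outcomes, and recompute the test statistic.
    null_preds/residuals: dict mapping cluster -> list of values."""
    rng = random.Random(seed)
    stats = []
    for _ in range(reps):
        ys = []
        for cl in null_preds:
            eta = rng.choice([-1.0, 1.0])          # Rademacher weight
            ys.extend(p + eta * r
                      for p, r in zip(null_preds[cl], residuals[cl]))
        stats.append(stat_fn(ys))
    return stats

# Toy example: null-model predictions and residuals for two studies,
# with the plain mean as the "test statistic"
preds = {"study1": [0.5, 0.5], "study2": [0.5]}
resid = {"study1": [0.1, -0.2], "study2": [0.3]}
boot = cluster_wild_bootstrap(preds, resid, lambda ys: sum(ys) / len(ys))
print(len(boot))  # 1000
```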
# Make sure {wildmeta} and {tidyverse} is loaded
library(wildmeta)
library(tidyverse)
# Add year as extra variable
Chernobyl$year <- str_extract(Chernobyl$author,
"[0-9]{4}") %>% as.numeric()
che.model.bs <- rma.mv(z ~ 0 + radiation + scale(year),
V = V,
random = ~ 1 | author/es.id,
data = Chernobyl,
sparse = TRUE)
## Warning: 2 rows with NAs omitted from model fitting.
rad.constraints <- constrain_equal(constraints = 1:3,
coefs = coef(che.model.bs))
rad.constraints
## [,1] [,2] [,3] [,4]
## [1,] -1 1 0 0
## [2,] -1 0 1 0
cw.boot <- Wald_test_cwb(full_model = che.model.bs,
constraints = rad.constraints,
adjust = "CR2",
R = 100)
cw.boot
## Test Adjustment CR_type Statistic R p_val
## 1 CWB Adjusted CR2 CR0 Naive-F 100 0.28
# visualize the density of the test statistics
plot(cw.boot,
fill = "lightblue",
alpha = 0.5)
Multilevel models can be regarded as a special form of structural equation model (SEM). As we have learned, every meta-analysis is based on a multilevel model. Therefore, we can also treat a meta-analysis as a structural equation model in which the pooled effect size is treated as a latent variable. In short: meta-analyses are multilevel models and can therefore also be expressed as structural equation models.
Meta-analytic SEM allows us to build more complex models: for example, factor analysis models and multivariate meta-analyses.
Of course, a prerequisite for applying meta-analytic SEM techniques is a basic understanding of structural equation modeling.
SEM is a statistical technique used to test hypotheses about the relationships between observed and latent variables. In SEM, the assumed relationships (the "structure") between observed and latent variables are modeled via the observed variables while taking their measurement error into account.
Usually, a SEM is represented by a series of matrices. Visually, a SEM can be displayed as a path diagram. Such path diagrams are usually very intuitive and straightforward to interpret.
There are several ways to represent a SEM through matrices. Here, we focus on the Reticular Action Model (RAM), which requires three main matrices: A, S, and F.
We will now combine our knowledge of meta-analysis models and SEM to express meta-analysis as a structural equation model.
The random-effects model:
Level 1: \[\hat\theta_k = \theta_k + \epsilon_k\] Level 2: \[\theta_k = \mu + \zeta_k\]
To fully exploit the versatility of meta-analytic SEM, a two-stage approach is needed. In Two-Stage Structural Equation Modeling (TSSEM), we first pool the effect sizes of each study. Typically, these effect sizes are correlation coefficients between several variables that we want to use for modeling:
\[\begin{align}\boldsymbol{r_k} &= \boldsymbol{\rho} + \boldsymbol{\zeta_k} + \boldsymbol{\epsilon_k} \notag \\\begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_p \end{bmatrix} &=\begin{bmatrix} \rho_1 \\ \rho_2 \\ \vdots \\ \rho_p \end{bmatrix} +\begin{bmatrix} \zeta_1 \\ \zeta_2 \\ \vdots \\ \zeta_p \end{bmatrix} +\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\\epsilon_p\end{bmatrix}\end{align}\]
The first stage allows us to assess the between-study heterogeneity of the effects and to decide whether a random-effects model or subgroup analyses should be used. Because the {metaSEM} package uses a maximum likelihood-based approach, studies with partially missing data can also be included in this step.
In the second stage, we use weighted least squares to fit the structural equation model we specified:
\[F_{\text{WLS}}(\hat\theta) = (\boldsymbol{r} - \rho(\hat\theta))^\top \boldsymbol{V}^{-1} (\boldsymbol{r} - \rho(\hat\theta))\]
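The WLS discrepancy function can be sketched for a toy case in which V is diagonal, so the quadratic form reduces to a weighted sum of squared deviations (a sketch with made-up numbers; metaSEM handles full covariance matrices and model-implied correlations):

```python
def f_wls(r, rho_model, v_diag):
    """Weighted least squares discrepancy (r - rho)' V^-1 (r - rho)
    for a diagonal covariance matrix V."""
    return sum((ri - mi) ** 2 / vi
               for ri, mi, vi in zip(r, rho_model, v_diag))

# Stage-1 pooled correlations vs. model-implied correlations
r = [0.30, 0.45, 0.25]
rho = [0.32, 0.40, 0.28]
v = [0.004, 0.005, 0.004]
print(round(f_wls(r, rho, v), 3))  # 0.825
```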
It is time to dive into our first meta-analytic SEM example. We begin by using the SEM approach for a multivariate meta-analysis, something we have not covered yet. In a multivariate meta-analysis, we try to estimate more than one effect simultaneously. This type of meta-analysis is helpful when the studies of interest have several main outcomes rather than just one.
Imagine we are examining the effects of a certain treatment. For this treatment, there may be two types of outcomes that most experts consider important, so both are assessed in most studies. A multivariate meta-analysis addresses this by jointly estimating the effect sizes of both outcomes in a single model. This multivariate approach also allows us to account for the correlation between the two outcomes. This can be used to determine whether studies with a high effect size on one outcome also tend to have higher effect sizes on the other, or whether the two outcomes are negatively correlated or not correlated at all.
# install.packages('metaSEM')
library(metaSEM)
## Loading required package: OpenMx
##
## Attaching package: 'OpenMx'
## The following objects are masked from 'package:Matrix':
##
## %&%, expm
## Registered S3 methods overwritten by 'metaSEM':
## method from
## summary.meta meta
## print.meta meta
## print.summary.meta meta
## "SLSQP" is set as the default optimizer in OpenMx.
## mxOption(NULL, "Gradient algorithm") is set at "central".
## mxOption(NULL, "Optimality tolerance") is set at "6.3e-14".
## mxOption(NULL, "Gradient iterations") is set at "2".
##
## Attaching package: 'metaSEM'
## The following objects are masked from 'package:meta':
##
## print.summary.meta, summary.meta
## The following object is masked from 'package:purrr':
##
## rerun
library(tidyverse) # needed for 'glimpse'
library(dmetar)
library(meta)
data(ThirdWave)
glimpse(ThirdWave)
## Rows: 18
## Columns: 8
## $ Author <chr> "Call et al.", "Cavanagh et al.", "DanitzOrsillo"…
## $ TE <dbl> 0.7091362, 0.3548641, 1.7911700, 0.1824552, 0.421…
## $ seTE <dbl> 0.2608202, 0.1963624, 0.3455692, 0.1177874, 0.144…
## $ RiskOfBias <chr> "high", "low", "high", "low", "low", "low", "high…
## $ TypeControlGroup <chr> "WLC", "WLC", "WLC", "no intervention", "informat…
## $ InterventionDuration <chr> "short", "short", "short", "short", "short", "sho…
## $ InterventionType <chr> "mindfulness", "mindfulness", "ACT", "mindfulness…
## $ ModeOfDelivery <chr> "group", "online", "group", "group", "online", "g…
# Define vector with effects on anxiety (Hedges g)
Anxiety <- c(0.224,0.389,0.913,0.255,0.615,-0.021,0.201,
0.665,0.373,1.118,0.158,0.252,0.142,NA,
0.410,1.139,-0.002,1.084)
# Standard error of anxiety effects
Anxiety_SE <- c(0.193,0.194,0.314,0.165,0.270,0.233,0.159,
0.298,0.153,0.388,0.206,0.256,0.256,NA,
0.431,0.242,0.274,0.250)
# Covariance between stress and anxiety outcomes
Covariance <- c(0.023,0.028,0.065,0.008,0.018,0.032,0.026,
0.046,0.020,0.063,0.017,0.043,0.037,NA,
0.079,0.046,0.040,0.041)
ThirdWaveMV <- data.frame(Author = ThirdWave$Author,
Stress = ThirdWave$TE,
Stress_var = ThirdWave$seTE^2,
Anxiety = Anxiety,
Anxiety_var = Anxiety_SE^2,
Covariance = Covariance)
format(head(ThirdWaveMV), digits = 2)
## Author Stress Stress_var Anxiety Anxiety_var Covariance
## 1 Call et al. 0.71 0.068 0.224 0.037 0.023
## 2 Cavanagh et al. 0.35 0.039 0.389 0.038 0.028
## 3 DanitzOrsillo 1.79 0.119 0.913 0.099 0.065
## 4 de Vibe et al. 0.18 0.014 0.255 0.027 0.008
## 5 Frazier et al. 0.42 0.021 0.615 0.073 0.018
## 6 Frogeli et al. 0.63 0.038 -0.021 0.054 0.032
# If the covariance is not reported, assume a correlation between the
# two outcomes and run a sensitivity analysis
m.mv <- metaSEM::meta(y = cbind(Stress, Anxiety),
v = cbind(Stress_var, Covariance, Anxiety_var),
data = ThirdWaveMV)
# May throw an error if {meta} is loaded at the same time
summary(m.mv)
##
## Call:
## metaSEM::meta(y = cbind(Stress, Anxiety), v = cbind(Stress_var,
## Covariance, Anxiety_var), data = ThirdWaveMV)
##
## 95% confidence intervals: z statistic approximation (robust=FALSE)
## Coefficients:
## Estimate Std.Error lbound ubound z value Pr(>|z|)
## Intercept1 0.570202 0.087113 0.399463 0.740942 6.5455 0.00000000005929 ***
## Intercept2 0.407506 0.083154 0.244527 0.570485 4.9006 0.00000095539881 ***
## Tau2_1_1 0.073312 0.049331 -0.023375 0.169999 1.4861 0.1372
## Tau2_2_1 0.028943 0.035998 -0.041612 0.099499 0.8040 0.4214
## Tau2_2_2 0.057533 0.042169 -0.025117 0.140183 1.3643 0.1725
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Q statistic on the homogeneity of effect sizes: 92.05639
## Degrees of freedom of the Q statistic: 33
## P value of the Q statistic: 0.0000001752555
##
## Heterogeneity indices (based on the estimated Tau2):
## Estimate
## Intercept1: I2 (Q statistic) 0.6203
## Intercept2: I2 (Q statistic) 0.5292
##
## Number of studies (or clusters): 18
## Number of observed statistics: 35
## Number of estimated parameters: 5
## Degrees of freedom: 30
## -2 log likelihood: 24.3842
## OpenMx status1: 0 ("0" or "1": The optimization is considered fine.
## Other values may indicate problems.)
# If the OpenMx status is not 0 or 1, rerun the model:
#rerun(m.mv)
tau.coefs <- coef(m.mv, select = "random")
# Create matrix
tc.mat <- vec2symMat(tau.coefs)
# Label rows and columns
dimnames(tc.mat)[[1]] <- dimnames(tc.mat)[[2]] <- c("Stress",
"Anxiety")
tc.mat
## Stress Anxiety
## Stress 0.07331199 0.02894342
## Anxiety 0.02894342 0.05753271
cov2cor(tc.mat)
## Stress Anxiety
## Stress 1.0000000 0.4456613
## Anxiety 0.4456613 1.0000000
# Wald-type confidence intervals can be biased in small samples;
# set intervals.type = "LB" to use likelihood-based bounds instead
#m.mv <- meta(y = cbind(Stress, Anxiety), v = cbind(Stress_var, Covariance, Anxiety_var),data = ThirdWaveMV,intervals.type = "LB")
# Fit a fixed-effect model
#m.mv <- meta(y = cbind(Stress, Anxiety), v = cbind(Stress_var, Covariance, Anxiety_var), data = ThirdWaveMV, RE.constraints = matrix(0, nrow=2, ncol=2))
plot(m.mv,
axis.labels = c("Perceived Stress", "Anxiety"),
randeff.ellipse.col = "#014d64",
univariate.arrows.col = "gray40",
univariate.arrows.lwd = 9,
univariate.polygon.col = "gray40",
estimate.ellipse.col = "gray40",
estimate.col = "firebrick")
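The sensitivity analysis suggested in the code comments can be sketched as follows. The correlation of 0.6 and the two standard errors are illustrative assumptions, not values taken from the data; in practice one would vary the assumed correlation (e.g. between 0.4 and 0.8), refit the model with each resulting set of covariances, and check whether the pooled estimates change materially:

```r
# If a study reports the SEs of both outcomes but not their covariance,
# approximate Cov = r * SE_stress * SE_anxiety under an assumed correlation r.
r_assumed  <- 0.6     # assumed outcome correlation (sensitivity parameter)
se_stress  <- 0.261   # example SE of the stress effect (illustrative value)
se_anxiety <- 0.193   # example SE of the anxiety effect (illustrative value)
cov_assumed <- r_assumed * se_stress * se_anxiety
round(cov_assumed, 4)
# Then refit metaSEM::meta() with the assumed covariances in the v argument
# and compare the pooled results across the assumed correlations.
```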
Confirmatory factor analysis (CFA) is a popular SEM method in which the relationships between observed variables and presumed latent variables are specified explicitly. CFA is commonly used to evaluate the psychometric properties of questionnaires or other types of assessments. It allows researchers to determine whether the assessed variables indeed measure the latent variables they are intended to measure, and how several latent variables relate to each other.
In this example, we want to confirm the latent factor structure of a (fictitious) questionnaire on sleep problems. The questionnaire is assumed to measure two distinct latent variables characterizing sleep problems: insomnia and lassitude. Koffel and Watson (2009) argue that sleep complaints can indeed be described by these two latent factors. We simulated the results of 11 studies in which our hypothetical sleep questionnaire was assessed, and named this dataset SleepProblems. Each study contains the intercorrelations between sleep complaint symptoms measured directly by our questionnaire: sleep quality, sleep latency, sleep efficiency, daytime dysfunction, and hypersomnia. We assume that the first three symptoms are correlated because they all measure the latent variable insomnia, while daytime dysfunction and hypersomnia are correlated because they are symptoms of the lassitude factor.
data(SleepProblems)
names(SleepProblems)
## [1] "data" "n"
names(SleepProblems$data)
## [1] "Coleman et al. (2003)" "Salazar et al. (2008)" "Newman et al. (2016)"
## [4] "Delacruz et al. (2009)" "Wyatt et al. (2002)" "Pacheco et al. (2016)"
## [7] "Riggs et al. (2015)" "Guerrero et al. (2010)" "Brewer et al. (2007)"
## [10] "Bridges et al. (1999)" "Esparza et al. (2019)"
SleepProblems$data$`Coleman et al. (2003)`
## Quality Latency Efficiency DTDysf HypSomnia
## Quality 1.00 0.39 0.53 -0.30 -0.05
## Latency 0.39 1.00 0.59 0.07 0.44
## Efficiency 0.53 0.59 1.00 0.09 0.22
## DTDysf -0.30 0.07 0.09 1.00 0.45
## HypSomnia -0.05 0.44 0.22 0.45 1.00
# Stage 1
# pool our correlation matrices using the tssem1 function
cfa1 <- tssem1(SleepProblems$data,
SleepProblems$n,
method="REM",
RE.type = "Diag") #we assume that the random effects are independent
summary(cfa1)
##
## Call:
## meta(y = ES, v = acovR, RE.constraints = Diag(paste0(RE.startvalues,
## "*Tau2_", 1:no.es, "_", 1:no.es)), RE.lbound = RE.lbound,
## I2 = I2, model.name = model.name, suppressWarnings = TRUE,
## silent = silent, run = run)
##
## 95% confidence intervals: z statistic approximation (robust=FALSE)
## Coefficients:
## Estimate Std.Error lbound ubound z value
## Intercept1 0.444432236 0.057467776 0.331797464 0.557067007 7.7336
## Intercept2 0.478173063 0.042506059 0.394862720 0.561483407 11.2495
## Intercept3 0.032786305 0.071282431 -0.106924692 0.172497303 0.4599
## Intercept4 0.132711876 0.048151189 0.038337280 0.227086471 2.7561
## Intercept5 0.509594150 0.036490787 0.438073522 0.581114779 13.9650
## Intercept6 0.120881305 0.040915599 0.040688204 0.201074406 2.9544
## Intercept7 0.192545120 0.060739566 0.073497759 0.311592481 3.1700
## Intercept8 0.221265205 0.039608121 0.143634715 0.298895695 5.5864
## Intercept9 0.189786602 0.045589099 0.100433610 0.279139595 4.1630
## Intercept10 0.509204468 0.023983698 0.462197284 0.556211651 21.2313
## Tau2_1_1 0.032324879 0.015012416 0.002901083 0.061748674 2.1532
## Tau2_2_2 0.016349084 0.008326703 0.000029047 0.032669121 1.9635
## Tau2_3_3 0.049769970 0.023797989 0.003126768 0.096413171 2.0914
## Tau2_4_4 0.019828043 0.010038215 0.000153503 0.039502583 1.9753
## Tau2_5_5 0.010963532 0.006133040 -0.001057005 0.022984070 1.7876
## Tau2_6_6 0.012511016 0.007794973 -0.002766852 0.027788883 1.6050
## Tau2_7_7 0.034863896 0.016839497 0.001859088 0.067868704 2.0704
## Tau2_8_8 0.012156429 0.006571848 -0.000724157 0.025037015 1.8498
## Tau2_9_9 0.017532003 0.009481599 -0.001051590 0.036115595 1.8491
## Tau2_10_10 0.003543432 0.002549068 -0.001452649 0.008539512 1.3901
## Pr(>|z|)
## Intercept1 0.00000000000001044 ***
## Intercept2 < 0.00000000000000022 ***
## Intercept3 0.645553
## Intercept4 0.005849 **
## Intercept5 < 0.00000000000000022 ***
## Intercept6 0.003133 **
## Intercept7 0.001524 **
## Intercept8 0.00000002318788428 ***
## Intercept9 0.00003141180601918 ***
## Intercept10 < 0.00000000000000022 ***
## Tau2_1_1 0.031302 *
## Tau2_2_2 0.049594 *
## Tau2_3_3 0.036497 *
## Tau2_4_4 0.048239 *
## Tau2_5_5 0.073838 .
## Tau2_6_6 0.108491
## Tau2_7_7 0.038418 *
## Tau2_8_8 0.064346 .
## Tau2_9_9 0.064450 .
## Tau2_10_10 0.164502
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Q statistic on the homogeneity of effect sizes: 908.1438
## Degrees of freedom of the Q statistic: 100
## P value of the Q statistic: 0
##
## Heterogeneity indices (based on the estimated Tau2):
## Estimate
## Intercept1: I2 (Q statistic) 0.9316
## Intercept2: I2 (Q statistic) 0.8837
## Intercept3: I2 (Q statistic) 0.9336
## Intercept4: I2 (Q statistic) 0.8547
## Intercept5: I2 (Q statistic) 0.8315
## Intercept6: I2 (Q statistic) 0.7800
## Intercept7: I2 (Q statistic) 0.9093
## Intercept8: I2 (Q statistic) 0.7958
## Intercept9: I2 (Q statistic) 0.8366
## Intercept10: I2 (Q statistic) 0.6486
##
## Number of studies (or clusters): 11
## Number of observed statistics: 110
## Number of estimated parameters: 20
## Degrees of freedom: 90
## -2 log likelihood: -100.688
## OpenMx status1: 0 ("0" or "1": The optimization is considered fine.
## Other values may indicate problems.)
# Extract the fixed coefficients (correlations)
fixed.coefs <- coef(cfa1, "fixed")
# Make a symmetric matrix
fc.mat <- vec2symMat(fixed.coefs, diag = FALSE)
# Label rows and columns
dimnames(fc.mat)[[1]] <- c("Quality", "Latency",
"Efficiency", "DTDysf", "HypSomnia")
dimnames(fc.mat)[[2]] <- c("Quality", "Latency",
"Efficiency", "DTDysf", "HypSomnia")
# Print correlation matrix (3 digits)
round(fc.mat, 3)
## Quality Latency Efficiency DTDysf HypSomnia
## Quality 1.000 0.444 0.478 0.033 0.133
## Latency 0.444 1.000 0.510 0.121 0.193
## Efficiency 0.478 0.510 1.000 0.221 0.190
## DTDysf 0.033 0.121 0.221 1.000 0.509
## HypSomnia 0.133 0.193 0.190 0.509 1.000
# Stage 2
# Create vector of column/row names
dims <- c("Quality", "Latency", "Efficiency",
"DTDysf", "HypSomnia", "f_Insomnia", "f_Lassitude")
# Create 7x7 matrix of zeros
mat <- matrix(rep(0, 7*7), nrow = 7, ncol = 7)
# Label the rows and columns
dimnames(mat)[[1]] <- dimnames(mat)[[2]] <- dims
mat
## Quality Latency Efficiency DTDysf HypSomnia f_Insomnia f_Lassitude
## Quality 0 0 0 0 0 0 0
## Latency 0 0 0 0 0 0 0
## Efficiency 0 0 0 0 0 0 0
## DTDysf 0 0 0 0 0 0 0
## HypSomnia 0 0 0 0 0 0 0
## f_Insomnia 0 0 0 0 0 0 0
## f_Lassitude 0 0 0 0 0 0 0
A <- matrix(c(0, 0, 0, 0, 0, "0.3*Ins_Q", 0 ,
0, 0, 0, 0, 0, "0.3*Ins_L", 0 ,
0, 0, 0, 0, 0, "0.3*Ins_E", 0 ,
0, 0, 0, 0, 0, 0 , "0.3*Las_D",
0, 0, 0, 0, 0, 0 , "0.3*Las_H",
0, 0, 0, 0, 0, 0 , 0 ,
0, 0, 0, 0, 0, 0 , 0
), nrow = 7, ncol = 7, byrow=TRUE)
# Label columns and rows
dimnames(A)[[1]] <- dimnames(A)[[2]] <- dims
A <- as.mxMatrix(A)
# Make a diagonal matrix for the variances
Vars <- Diag(c("0.2*var_Q", "0.2*var_L",
"0.2*var_E", "0.2*var_D", "0.2*var_H"))
# Make the matrix for the latent variables
Cors <- matrix(c(1, "0.3*cor_InsLas",
"0.3*cor_InsLas", 1),
nrow=2, ncol=2)
# Combine
S <- bdiagMat(list(Vars, Cors))
# Label columns and rows
dimnames(S)[[1]] <- dimnames(S)[[2]] <- dims
S <- as.mxMatrix(S)
# Construct diagonal matrix
F <- Diag(c(1, 1, 1, 1, 1, 0, 0))
# Only select non-null rows
F <- F[1:5,]
# Specify row and column labels
dimnames(F)[[1]] <- dims[1:5]
dimnames(F)[[2]] <- dims
F <- as.mxMatrix(F)
# Model Fitting
cfa2 <- tssem2(cfa1,
Amatrix = A,
Smatrix = S,
Fmatrix = F,
diag.constraints = FALSE)
summary(cfa2)
##
## Call:
## wls(Cov = pooledS, aCov = aCov, n = tssem1.obj$total.n, RAM = RAM,
## Amatrix = Amatrix, Smatrix = Smatrix, Fmatrix = Fmatrix,
## diag.constraints = diag.constraints, cor.analysis = cor.analysis,
## intervals.type = intervals.type, mx.algebras = mx.algebras,
## mxModel.Args = mxModel.Args, subset.variables = subset.variables,
## model.name = model.name, suppressWarnings = suppressWarnings,
## silent = silent, run = run)
##
## 95% confidence intervals: z statistic approximation
## Coefficients:
## Estimate Std.Error lbound ubound z value Pr(>|z|)
## Las_D 0.688251 0.081845 0.527838 0.848665 8.4092 < 0.00000000000000022
## Ins_E 0.789438 0.060605 0.670654 0.908221 13.0260 < 0.00000000000000022
## Las_H 0.741372 0.088425 0.568063 0.914681 8.3842 < 0.00000000000000022
## Ins_L 0.658587 0.053650 0.553435 0.763739 12.2756 < 0.00000000000000022
## Ins_Q 0.613591 0.051384 0.512879 0.714303 11.9412 < 0.00000000000000022
## cor_InsLas 0.330274 0.045607 0.240886 0.419662 7.2418 0.0000000000004428
##
## Las_D ***
## Ins_E ***
## Las_H ***
## Ins_L ***
## Ins_Q ***
## cor_InsLas ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Goodness-of-fit indices:
## Value
## Sample size 3272.0000
## Chi-square of target model 5.2640
## DF of target model 4.0000
## p value of target model 0.2613
## Number of constraints imposed on "Smatrix" 0.0000
## DF manually adjusted 0.0000
## Chi-square of independence model 809.5367
## DF of independence model 10.0000
## RMSEA 0.0098
## RMSEA lower 95% CI 0.0000
## RMSEA upper 95% CI 0.0297
## SRMR 0.0413
## TLI 0.9960
## CFI 0.9984
## AIC -2.7360
## BIC -27.1086
## OpenMx status1: 0 ("0" or "1": The optimization is considered fine.
## Other values indicate problems.)
library(semPlot)
cfa.plot <- meta2semPlot(cfa2)
# Create Plot labels (left to right, bottom to top)
labels <- c("Sleep\nQuality",
"Sleep\nLatency",
"Sleep\nEfficiency",
"Daytime\nDysfunction",
"Hyper-\nsomnia","Insomnia",
"Lassitude")
# Plot
semPaths(cfa.plot,
whatLabels = "est",
edge.color = "black",
nodeLabels = labels,
sizeMan = 10,
sizeLat = 10,
edge.label.cex = 1)
In clinical research, to examine whether an intervention or drug works, we usually compare a treatment group with a control group (placebo, etc.). In many research fields, however, there is not just one "definitive" treatment but several. Migraine, for example, can be treated with various drugs, and non-pharmacological options also exist. Particularly in "mature" research fields, showing that some treatment is effective is often less important. Instead, we want to find out which treatment works best for a specific indication.
This raises new problems. In a conventional meta-analysis, assessing the comparative effects of several treatments requires a sufficient number of head-to-head comparisons between two treatments. Unfortunately, in many research fields only a handful of trials directly compare the effects of two treatments without a "weaker" control group. However, even when no direct comparison between two or more treatments exists, indirect evidence is usually available: different treatments may have been evaluated in different trials, but all of those trials may have used the same control group.
Network meta-analysis can incorporate such indirect comparisons, allowing us to compare the effects of several interventions simultaneously. Network meta-analysis is also known as mixed-treatment comparison meta-analysis, because it integrates multiple direct and indirect treatment comparisons into one model, which can be represented as a "network" of comparisons.
First, we have to understand what a "network" of treatments means. Suppose we extracted data from some randomized controlled trial i that compared the effect of treatment A with another condition B (e.g. a wait-list control group). We can illustrate this comparison graphically:
The line connecting the two is called an edge. This edge represents the relationship between A and B.
B is called the reference group, since B serves as the comparator in both comparisons.
B is said to form a bridge between A and C.
\[\hat\theta_{\text{A,C}}^{\text{indirect}} = \hat\theta_{\text{B,A}}^{\text{direct}} - \hat\theta_{\text{B,C}}^{\text{direct}}\]
\[\text{Var} \left(\hat\theta_{\text{A,C}}^{\text{indirect}} \right) = \text{Var} \left(\hat\theta_{\text{B,A}}^{\text{direct}} \right) + \text{Var} \left(\hat\theta_{\text{B,C}}^{\text{direct}} \right)\] These formulas rest on the assumption of transitivity, whose statistical expression is network consistency.
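Plugging made-up numbers into the two equations above shows how an indirect A vs C estimate and its variance are obtained from the two direct comparisons against B:

```r
# Indirect A vs C comparison via the common comparator B (toy values).
theta_BA <- 0.50; var_BA <- 0.04   # direct effect and variance, B vs A
theta_BC <- 0.30; var_BC <- 0.03   # direct effect and variance, B vs C
theta_AC_indirect <- theta_BA - theta_BC   # point estimate: difference
var_AC_indirect   <- var_BA + var_BC       # the variances add up
c(estimate = theta_AC_indirect, SE = sqrt(var_AC_indirect))
```

Note that the indirect estimate is always less precise than either direct comparison, since the two variances are summed.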
The core principle of the transitivity assumption is that we can combine direct evidence (e.g. from A-B and C-B comparisons) with indirect evidence for a related comparison (e.g. A-C). This assumption is related to the exchangeability of effects in the random-effects model.
The key assumption is that the effect of a comparison (e.g. A-B) is exchangeable across trials. Statistically, transitivity takes the form of consistency:
\[\theta_{\text{A,B}}^{\text{indirect}} = \theta_{\text{A,B}}^{\text{direct}}\]
In practice, treatment networks are often far more complex than this.
By simple combinatorics, with S treatments there are C = S(S-1)/2 possible pairwise comparisons. We therefore need a computational model that pools all the available network data in an efficient, internally consistent way. Several statistical approaches have been developed for network meta-analysis. In the following sections, we discuss a frequentist model and a Bayesian hierarchical model, and how to implement both in R.
This model builds on the frequentist interpretation of probability.
Suppose there are K trials containing M treatment comparisons in total. All effect sizes are \(\boldsymbol{\hat\theta} = (\hat\theta_1, \hat\theta_2, \dots, \hat\theta_M)\). The model is as follows:
\[\begin{align} \boldsymbol{\hat\theta} &= \boldsymbol{X}\boldsymbol{\theta}_{\text{treat}} + \boldsymbol{\epsilon} \notag \\\begin{bmatrix}\hat\theta_{1\text{,A,B}} \\\hat\theta_{2\text{,A,C}} \\\hat\theta_{3\text{,A,D}} \\\hat\theta_{4\text{,B,C}} \\\hat\theta_{5\text{,B,D}} \\\end{bmatrix}&=\begin{bmatrix}1 & -1 & 0 & 0 \\1 & 0 & -1 & 0 \\1 & 0 & 0 & -1 \\0 & 1 & -1 & 0 \\0 & 1 & 0 & -1 \\\end{bmatrix}\begin{bmatrix}\theta_{\text{A}} \\\theta_{\text{B}} \\\theta_{\text{C}} \\\theta_{\text{D}} \\\end{bmatrix}+\begin{bmatrix}\epsilon_{1} \\\epsilon_{2} \\\epsilon_{3} \\\epsilon_{4} \\\epsilon_{5} \\\end{bmatrix}\end{align}\]
Because X is not of full rank, the model is over-parameterized: the treatment-level parameters cannot all be estimated uniquely.
The graph-theoretical approach implemented in {netmeta} provides a solution. We will not dwell on the tedious mathematical details behind this method, especially since the {netmeta} package does the heavy lifting for us. Suffice it to say that the approach involves constructing a so-called Moore-Penrose pseudoinverse matrix, and then using weighted least squares to calculate the fitted values of the network model.
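A minimal sketch of the graph-theoretical idea, with toy numbers (three treatments A, B, C and three observed comparisons; none of these values come from the text): the Laplacian-type matrix t(B) %*% W %*% B is singular, so MASS::ginv() supplies the Moore-Penrose pseudoinverse where an ordinary solve() would fail:

```r
library(MASS)   # ginv(): Moore-Penrose pseudoinverse
# Edge-vertex incidence matrix for the comparisons A-B, A-C, B-C (toy example)
B <- rbind(c(1, -1,  0),
           c(1,  0, -1),
           c(0,  1, -1))
y <- c(0.5, 0.8, 0.3)                # observed comparison effects
W <- diag(1 / c(0.04, 0.05, 0.03))   # inverse-variance weights
L <- t(B) %*% W %*% B                # singular "Laplacian" matrix
theta <- ginv(L) %*% t(B) %*% W %*% y
round(drop(theta), 3)                # relative treatment effects, summing to zero
```

The pseudoinverse picks the unique minimum-norm solution of the over-parameterized system, which is why the resulting treatment effects sum to zero and only their differences are interpretable.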
The procedure also works for multi-arm studies, i.e. studies contributing more than one pairwise comparison (studies comparing more than two conditions).
The network model computes an I² statistic that represents inconsistency in the network:
\[I^2 = \text{max} \left(\frac{Q_{\text{total}}-\text{d.f.}} {Q_{\text{total}}}, 0 \right)\\\text{d.f.} = \left( \sum^K_{k=1}p_k-1 \right)- (n-1)\]
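Using made-up values for Q and the degrees of freedom, the formula translates directly into R:

```r
# I^2 from the network Q statistic (toy values, not from the analysis below)
Q_total <- 500
df      <- 180   # (sum over studies of arms - 1) minus (n treatments - 1)
I2 <- max((Q_total - df) / Q_total, 0)
I2   # 0.64
```

The max(..., 0) guard simply truncates negative values to zero when Q falls below its degrees of freedom.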
#install.packages('netmeta')
library(netmeta)
## Loading 'netmeta' package (version 2.8-2).
## Type 'help("netmeta-package")' for a brief overview.
## Readers of 'Meta-Analysis with R (Use R!)' should install
## older version of 'netmeta' package: https://tinyurl.com/kyz6wjbb
library(dmetar)
data(TherapyFormats) #CBT for depression
head(TherapyFormats[1:5])
## author TE seTE treat1 treat2
## 1 Ausbun, 1997 0.092 0.195 ind grp
## 2 Crable, 1986 -0.675 0.350 ind grp
## 3 Thiede, 2011 -0.107 0.198 ind grp
## 4 Bonertz, 2015 -0.090 0.324 ind grp
## 5 Joy, 2002 -0.135 0.453 ind grp
## 6 Jones, 2013 -0.217 0.289 ind grp
as.matrix(table(TherapyFormats$author))
## [,1]
## Aaron, 2004 1
## Allen, 2003 1
## Amsberry, 2010 1
## Ancmon, 2001 1
## Ausbun, 1997 1
## Austin, 1988 1
## Ayers, 2008 1
## Barker, 1997 1
## Barrett, 2005 1
## Belk, 1986 1
## Bengston, 2004 1
## Blevins, 2003 1
## Bond, 1988 1
## Bonertz, 2015 1
## Breiman, 2001 3
## Brown, 1985 1
## Brown, 2017 1
## Buchholtz, 2010 1
## Burgan, 2012 1
## Burges, 1993 1
## Campbell, 2000 1
## Carpenter, 1994 1
## Carrell, 2008 1
## Carvey, 2008 1
## Ceurvorst, 2011 1
## Chamberlain, 2011 1
## Chambers, 2010 1
## Christenson, 2018 1
## Ciccone, 1997 1
## Clear, 1989 1
## Congour, 1996 1
## Cook, 2013 1
## Cowan, 2010 1
## Crable, 1986 1
## Craft, 2014 1
## Crane, 1984 1
## Dahl, 2002 1
## De Venecia, 2011 1
## Debenedictis, 1993 1
## Delaney, 1995 1
## Dewitt, 1995 1
## Dobbs, 2000 1
## Duba, 1990 1
## Durgan, 1998 1
## Ebberts, 1992 1
## Edgar, 2015 1
## Edwards, 2010 1
## Eisenberg, 1995 1
## Englund, 2017 1
## Erickson, 1998 1
## Fair, 2003 1
## Fay, 2002 1
## Fergerson, 2014 1
## Fladland, 1997 1
## Fotenos, 1994 1
## Fourroux, 1999 1
## Frenzel, 2006 1
## Germuska, 1996 1
## Gilleland, 1989 1
## Gilly, 2014 1
## Gilstrap, 2015 1
## Gleave, 2004 1
## Godinez, 2005 1
## Graesch, 2000 1
## Graham, 1986 1
## Gruber, 2013 1
## Hall, 1997 1
## Hanrahan, 1999 1
## Harbert, 1999 1
## Harder, 2005 1
## Hassan, 2011 1
## Hayko, 1999 1
## Heath, 1998 1
## Hetherington, 2003 1
## Hiatt, 1985 1
## Homrighausen, 2002 1
## Hopkins, 2003 1
## Horton, 1985 1
## Huebner, 1997 1
## Hughey, 2012 1
## Hurla, 1985 1
## Isles, 2003 1
## James, 2012 1
## Jeffers, 1996 1
## Johnson, 1989 1
## Jones, 2013 1
## Joy, 2002 1
## Kilgore, 2003 1
## King, 1990 1
## Knickerbocker, 1987 1
## Koslosky, 1991 1
## Kramer, 2007 1
## Kurth, 1992 1
## Laird, 2008 1
## Lange, 1993 1
## Leach, 2014 1
## Ledbetter, 1984 1
## Lemon, 2005 1
## Leonard, 2010 1
## Lesley, 1999 1
## Lopez, 2011 1
## Lucero, 2001 1
## Mai, 2003 1
## Maranto, 1997 1
## Marinkovich, 2003 1
## Martin, 2017 1
## Maurath, 1994 1
## Mcmahon, 1998 1
## Mcmanus, 1987 1
## Menzies, 2003 1
## Meraz, 2014 1
## Metzger, 1988 1
## Mickey, 1989 1
## Miller, 1989 1
## Muhlenbruck, 1998 1
## Murray, 1996 1
## Narum, 1986 1
## Neil, 2015 1
## Nelson, 1987 1
## Nelson, 2006 1
## Nguyen, 1996 1
## Oberholser, 2008 1
## Ono, 1985 1
## Orman, 2012 1
## Parker, 2002 1
## Pauly, 2003 1
## Plewe, 1991 1
## Pylkka, 2008 1
## Quick, 2006 1
## Quinonez Dominguez, 1993 1
## Ralph, 1989 1
## Reeve, 2013 1
## Reynolds, 1989 1
## Riley, 2002 1
## Ristau, 1992 1
## Roberts, 1996 1
## Roberts, 1997 1
## Roberts, 2017 1
## Robinson, 2015 1
## Rodgers, 1993 1
## Runnels, 2004 1
## Rusciolelli, 2000 1
## Santistevan-Gettel, 2012 1
## Schlaver, 2004 1
## Schlegel, 1989 1
## Scholer, 2017 1
## Schulker, 2009 1
## Schwartz, 2012 1
## Scott, 1993 1
## Shea, 2011 1
## Sheehan, 2013 1
## Sheppard, 1994 1
## Sherratt, 2000 1
## Shorb, 1998 1
## Shrednik, 2000 1
## Silva, 1986 1
## Snider, 1998 1
## Staley, 1992 1
## Summerhays, 1987 1
## Tamburri, 2001 1
## Tenbarge, 2000 1
## Thiede, 2011 1
## Thompson, 1998 1
## Thompson, 2003 1
## Underwood, 2008 1
## Vaclav, 1984 1
## Vahlenkamp, 2006 1
## Villalobos-Valles, 2006 1
## Wagner, 1994 1
## Walker, 1990 1
## Warwick, 1986 1
## Watkins Jr, 2004 1
## Watson, 1997 1
## Weimer, 1999 1
## Whitaker, 2017 1
## White, 2006 1
## Whiting, 1990 1
## Wierenga, 2004 1
## Williams, 1989 1
## Woodard, 2013 1
## Wuertz, 2016 1
## Wyatt, 2003 1
# fit the model
m.netmeta <- netmeta(TE = TE,
seTE = seTE,
treat1 = treat1,
treat2 = treat2,
studlab = author,
data = TherapyFormats,
sm = "SMD", #type of effect size
fixed = TRUE,
random = FALSE,
reference.group = "cau",
details.chkmultiarm = TRUE,
sep.trts = " vs ") # separator for comparison labels
summary(m.netmeta)
## Original data (with adjusted standard errors for multi-arm studies):
##
## treat1 treat2 TE seTE seTE.adj narms multiarm
## Ausbun, 1997 grp ind -0.0920 0.1950 0.1950 2
## Crable, 1986 grp ind 0.6750 0.3500 0.3500 2
## Thiede, 2011 grp ind 0.1070 0.1980 0.1980 2
## Bonertz, 2015 grp ind 0.0900 0.3240 0.3240 2
## Joy, 2002 grp ind 0.1350 0.4530 0.4530 2
## Jones, 2013 grp ind 0.2170 0.2890 0.2890 2
## Aaron, 2004 grp ind -0.1030 0.4010 0.4010 2
## Breiman, 2001 gsh ind 0.0850 0.5160 0.6340 3 *
## Lucero, 2001 gsh ind 0.0520 0.5000 0.5000 2
## Amsberry, 2010 gsh ind 0.1090 0.4130 0.4130 2
## Robinson, 2015 gsh ind 0.1280 0.2560 0.2560 2
## Burgan, 2012 ind tel -0.3110 0.1390 0.1390 2
## Belk, 1986 ind tel -0.1770 0.0830 0.0830 2
## Ledbetter, 1984 ind tel -0.0080 0.2310 0.2310 2
## Narum, 1986 ind tel 0.0390 0.3380 0.3380 2
## Breiman, 2001 ind wlc -0.7500 0.5130 0.6267 3 *
## Wierenga, 2004 ind wlc -1.4380 0.3570 0.3570 2
## Eisenberg, 1995 ind wlc -0.5090 0.0600 0.0600 2
## Scholer, 2017 ind wlc -0.9650 0.4430 0.4430 2
## Schlaver, 2004 ind wlc -1.1560 0.3440 0.3440 2
## Riley, 2002 ind wlc -1.4020 0.2800 0.2800 2
## Barker, 1997 ind wlc -0.9830 0.1790 0.1790 2
## Dewitt, 1995 ind wlc -1.2070 0.3740 0.3740 2
## De Venecia, 2011 ind wlc -0.9210 0.3910 0.3910 2
## Hiatt, 1985 ind wlc -0.7560 0.1020 0.1020 2
## Congour, 1996 ind wlc -1.2910 0.2970 0.2970 2
## Gruber, 2013 ind wlc -0.2590 0.3220 0.3220 2
## Laird, 2008 ind wlc -1.4360 0.4840 0.4840 2
## Kramer, 2007 ind wlc -0.9970 0.3000 0.3000 2
## Germuska, 1996 ind wlc -1.3140 0.2100 0.2100 2
## Fair, 2003 ind wlc -1.1780 0.1750 0.1750 2
## Mickey, 1989 ind wlc -0.7870 0.2390 0.2390 2
## Shea, 2011 ind wlc -0.9170 0.1810 0.1810 2
## Leach, 2014 cau ind 0.9600 0.0710 0.0710 2
## Koslosky, 1991 cau ind 1.2210 0.1510 0.1510 2
## Chambers, 2010 cau ind 0.7880 0.4800 0.4800 2
## Gleave, 2004 cau ind 0.2410 0.3230 0.3230 2
## Englund, 2017 cau ind 0.5700 0.2280 0.2280 2
## Gilleland, 1989 cau ind 0.8790 0.4600 0.4600 2
## Parker, 2002 cau ind 0.7450 0.2240 0.2240 2
## Fergerson, 2014 cau ind 0.7060 0.3350 0.3350 2
## Craft, 2014 cau ind 1.0540 0.4590 0.4590 2
## Horton, 1985 cau ind 0.5620 0.4320 0.4320 2
## Brown, 2017 cau ind 0.3730 0.0760 0.0760 2
## Ancmon, 2001 cau ind 0.7350 0.1240 0.1240 2
## Roberts, 1997 cau ind 0.9380 0.2200 0.2200 2
## Erickson, 1998 cau ind 0.7360 0.2900 0.2900 2
## Summerhays, 1987 cau ind 0.6080 0.3150 0.3150 2
## King, 1990 cau ind 1.1540 0.2530 0.2530 2
## Muhlenbruck, 1998 cau ind 1.0030 0.1430 0.1430 2
## Murray, 1996 cau ind 0.8170 0.2960 0.2960 2
## Shorb, 1998 cau ind 0.9660 0.1240 0.1240 2
## Fotenos, 1994 cau ind 0.5290 0.4770 0.4770 2
## Pauly, 2003 cau ind 0.6040 0.0930 0.0930 2
## Johnson, 1989 cau ind 0.9770 0.1640 0.1640 2
## Whitaker, 2017 cau ind 0.6670 0.1990 0.1990 2
## Brown, 1985 cau ind 0.6630 0.1790 0.1790 2
## Ciccone, 1997 cau ind 0.6210 0.3710 0.3710 2
## Martin, 2017 cau ind 0.6360 0.1330 0.1330 2
## White, 2006 cau ind 0.0690 0.1450 0.1450 2
## Silva, 1986 cau ind 0.6080 0.5010 0.5010 2
## Thompson, 1998 cau ind 0.2230 0.1340 0.1340 2
## Snider, 1998 cau ind 1.1010 0.2290 0.2290 2
## Harder, 2005 grp gsh -0.1730 0.2770 0.2770 2
## Williams, 1989 grp gsh -0.3490 0.1050 0.1050 2
## Gilstrap, 2015 grp gsh -0.1560 0.0920 0.0920 2
## Kilgore, 2003 grp gsh -0.3370 0.3510 0.3510 2
## Hetherington, 2003 grp gsh -0.1500 0.2410 0.2410 2
## Austin, 1988 grp ush -0.6750 0.1080 0.1080 2
## Edwards, 2010 grp wlc -0.0260 0.2270 0.2270 2
## Sherratt, 2000 grp wlc 0.5730 0.5050 0.5050 2
## Ceurvorst, 2011 grp wlc -1.0430 0.0540 0.0540 2
## Graham, 1986 grp wlc 0.6730 0.2080 0.2080 2
## Hughey, 2012 grp wlc -1.6380 0.0720 0.0720 2
## Burges, 1993 grp wlc -0.3560 0.0840 0.0840 2
## Carpenter, 1994 grp wlc -2.7050 0.4350 0.4350 2
## Quick, 2006 grp wlc -1.5480 0.4770 0.4770 2
## Wuertz, 2016 grp wlc -1.6440 0.4880 0.4880 2
## Vaclav, 1984 grp wlc 0.2150 0.3030 0.3030 2
## Roberts, 1996 grp wlc -1.0170 0.4330 0.4330 2
## Heath, 1998 grp wlc 0.1900 0.3630 0.3630 2
## Dobbs, 2000 grp wlc -1.6030 0.5560 0.5560 2
## Carvey, 2008 grp wlc -0.3270 0.4840 0.4840 2
## Neil, 2015 grp wlc -1.2920 0.1160 0.1160 2
## Barrett, 2005 grp wlc -1.3020 0.3540 0.3540 2
## Edgar, 2015 grp wlc 0.5040 0.4350 0.4350 2
## Leonard, 2010 grp wlc 0.0310 0.4220 0.4220 2
## Miller, 1989 cau grp 0.4300 0.4080 0.4080 2
## Clear, 1989 cau grp 0.6560 0.4660 0.4660 2
## Plewe, 1991 cau grp 0.7080 0.1600 0.1600 2
## Hayko, 1999 cau grp 0.0010 0.4290 0.4290 2
## Knickerbocker, 1987 cau grp 0.6330 0.2450 0.2450 2
## Kurth, 1992 cau grp 0.2080 0.2310 0.2310 2
## Menzies, 2003 cau grp 0.8050 0.2850 0.2850 2
## Mcmahon, 1998 cau grp 0.3270 0.4080 0.4080 2
## Isles, 2003 cau grp 0.3890 0.2110 0.2110 2
## Santistevan-Gettel, 2012 cau grp -0.5810 0.2330 0.2330 2
## Frenzel, 2006 cau grp 0.7010 0.1520 0.1520 2
## Fladland, 1997 cau grp 0.7330 0.1510 0.1510 2
## Pylkka, 2008 cau grp 0.3390 0.0960 0.0960 2
## Metzger, 1988 cau grp 0.3250 0.4690 0.4690 2
## Debenedictis, 1993 cau grp 0.6410 0.2370 0.2370 2
## Bengston, 2004 cau grp 0.1490 0.0790 0.0790 2
## Schlegel, 1989 cau grp 1.4590 0.2660 0.2660 2
## Cook, 2013 cau grp 0.6260 0.1020 0.1020 2
## Sheehan, 2013 cau grp 0.0740 0.0990 0.0990 2
## Tenbarge, 2000 cau grp 0.3320 0.2780 0.2780 2
## Buchholtz, 2010 cau grp -0.4350 0.3810 0.3810 2
## Sheppard, 1994 gsh ush -0.1910 0.2610 0.2610 2
## Warwick, 1986 gsh ush -0.3190 0.3960 0.3960 2
## Weimer, 1999 gsh ush -0.4180 0.1190 0.1190 2
## Rusciolelli, 2000 gsh ush -0.4370 0.1300 0.1300 2
## Jeffers, 1996 gsh ush -0.4390 0.5290 0.5290 2
## Whiting, 1990 gsh wlc -0.2260 0.3750 0.3750 2
## Tamburri, 2001 gsh wlc 0.3730 0.3050 0.3050 2
## Hurla, 1985 gsh wlc -1.2430 0.4460 0.4460 2
## Cowan, 2010 gsh wlc 0.4730 0.1290 0.1290 2
## Wyatt, 2003 gsh wlc -1.8380 0.4090 0.4090 2
## Vahlenkamp, 2006 gsh wlc -0.5560 0.3130 0.3130 2
## Campbell, 2000 gsh wlc -2.9050 0.4850 0.4850 2
## Lange, 1993 gsh wlc -1.7480 0.0630 0.0630 2
## Shrednik, 2000 gsh wlc -1.8440 0.4980 0.4980 2
## Chamberlain, 2011 gsh wlc 0.0150 0.3300 0.3300 2
## Runnels, 2004 gsh wlc -1.2170 0.3650 0.3650 2
## Hall, 1997 gsh wlc -0.0100 0.2390 0.2390 2
## James, 2012 gsh wlc -1.8030 0.4110 0.4110 2
## Delaney, 1995 gsh wlc -0.5270 0.1850 0.1850 2
## Watkins Jr, 2004 gsh wlc -1.4920 0.4610 0.4610 2
## Harbert, 1999 gsh wlc -1.5020 0.4340 0.4340 2
## Rodgers, 1993 gsh wlc 0.3040 0.0650 0.0650 2
## Hopkins, 2003 gsh wlc -0.1690 0.2540 0.2540 2
## Ebberts, 1992 gsh wlc -0.2260 0.1070 0.1070 2
## Homrighausen, 2002 gsh wlc 0.3730 0.1620 0.1620 2
## Hanrahan, 1999 gsh wlc -1.2430 0.4350 0.4350 2
## Bond, 1988 gsh wlc 0.4730 0.1180 0.1180 2
## Lemon, 2005 gsh wlc -1.8380 0.3620 0.3620 2
## Blevins, 2003 gsh wlc -0.5560 0.0930 0.0930 2
## Reynolds, 1989 gsh wlc -2.9050 0.3670 0.3670 2
## Marinkovich, 2003 gsh wlc -1.7480 0.3330 0.3330 2
## Lesley, 1999 gsh wlc -1.8440 0.3180 0.3180 2
## Fourroux, 1999 gsh wlc 0.0150 0.2750 0.2750 2
## Orman, 2012 gsh wlc -1.2170 0.2260 0.2260 2
## Underwood, 2008 gsh wlc -0.0100 0.3050 0.3050 2
## Graesch, 2000 gsh wlc -1.8030 0.1630 0.1630 2
## Fay, 2002 gsh wlc -0.5270 0.2790 0.2790 2
## Ristau, 1992 gsh wlc -1.4920 0.2970 0.2970 2
## Nelson, 1987 gsh wlc -1.5020 0.1290 0.1290 2
## Watson, 1997 gsh wlc 0.3040 0.3090 0.3090 2
## Crane, 1984 cau gsh 0.7330 0.3480 0.3480 2
## Reeve, 2013 cau gsh 0.5110 0.2520 0.2520 2
## Christenson, 2018 cau gsh 0.7100 0.3590 0.3590 2
## Ayers, 2008 cau gsh 0.4290 0.3600 0.3600 2
## Schwartz, 2012 cau gsh 0.5730 0.2700 0.2700 2
## Gilly, 2014 cau gsh 0.6750 0.3990 0.3990 2
## Dahl, 2002 cau gsh 0.6620 0.4270 0.4270 2
## Allen, 2003 cau gsh 0.5560 0.0850 0.0850 2
## Scott, 1993 tel wlc -0.5580 0.0980 0.0980 2
## Nelson, 2006 cau tel 0.5140 0.1190 0.1190 2
## Staley, 1992 cau tel 0.5570 0.3630 0.3630 2
## Maurath, 1994 cau tel 0.9620 0.3010 0.3010 2
## Carrell, 2008 cau tel 1.0140 0.1770 0.1770 2
## Schulker, 2009 cau tel 1.3060 0.2650 0.2650 2
## Walker, 1990 cau tel 0.6880 0.2500 0.2500 2
## Wagner, 1994 ush wlc -0.4240 0.1780 0.1780 2
## Huebner, 1997 ush wlc -0.5490 0.2580 0.2580 2
## Mcmanus, 1987 ush wlc -0.5470 0.2510 0.2510 2
## Oberholser, 2008 ush wlc -0.6510 0.2390 0.2390 2
## Mai, 2003 ush wlc -0.5240 0.2550 0.2550 2
## Ralph, 1989 ush wlc -0.3770 0.4190 0.4190 2
## Roberts, 2017 ush wlc -0.4650 0.1430 0.1430 2
## Thompson, 2003 ush wlc -0.6080 0.2460 0.2460 2
## Hassan, 2011 ush wlc -0.6360 0.1590 0.1590 2
## Woodard, 2013 ush wlc -0.5050 0.5140 0.5140 2
## Meraz, 2014 ush wlc -0.2520 0.2950 0.2950 2
## Lopez, 2011 cau ush 0.3180 0.1050 0.1050 2
## Villalobos-Valles, 2006 cau ush -0.0580 0.3500 0.3500 2
## Godinez, 2005 cau ush 0.0140 0.5470 0.5470 2
## Quinonez Dominguez, 1993 cau ush 0.1560 0.3550 0.3550 2
## Ono, 1985 cau ush 0.2490 0.4540 0.4540 2
## Durgan, 1998 cau ush -0.0960 0.1950 0.1950 2
## Nguyen, 1996 cau ush 0.4230 0.3090 0.3090 2
## Maranto, 1997 cau ush 0.2310 0.2450 0.2450 2
## Duba, 1990 cau ush -0.0620 0.2430 0.2430 2
## Breiman, 2001 gsh wlc -0.6640 0.5140 0.6291 3 *
##
## Number of treatment arms (by study):
## narms
## Ausbun, 1997 2
## Crable, 1986 2
## Thiede, 2011 2
## Bonertz, 2015 2
## Joy, 2002 2
## Jones, 2013 2
## Aaron, 2004 2
## Breiman, 2001 3
## Lucero, 2001 2
## Amsberry, 2010 2
## Robinson, 2015 2
## Burgan, 2012 2
## Belk, 1986 2
## Ledbetter, 1984 2
## Narum, 1986 2
## Wierenga, 2004 2
## Eisenberg, 1995 2
## Scholer, 2017 2
## Schlaver, 2004 2
## Riley, 2002 2
## Barker, 1997 2
## Dewitt, 1995 2
## De Venecia, 2011 2
## Hiatt, 1985 2
## Congour, 1996 2
## Gruber, 2013 2
## Laird, 2008 2
## Kramer, 2007 2
## Germuska, 1996 2
## Fair, 2003 2
## Mickey, 1989 2
## Shea, 2011 2
## Leach, 2014 2
## Koslosky, 1991 2
## Chambers, 2010 2
## Gleave, 2004 2
## Englund, 2017 2
## Gilleland, 1989 2
## Parker, 2002 2
## Fergerson, 2014 2
## Craft, 2014 2
## Horton, 1985 2
## Brown, 2017 2
## Ancmon, 2001 2
## Roberts, 1997 2
## Erickson, 1998 2
## Summerhays, 1987 2
## King, 1990 2
## Muhlenbruck, 1998 2
## Murray, 1996 2
## Shorb, 1998 2
## Fotenos, 1994 2
## Pauly, 2003 2
## Johnson, 1989 2
## Whitaker, 2017 2
## Brown, 1985 2
## Ciccone, 1997 2
## Martin, 2017 2
## White, 2006 2
## Silva, 1986 2
## Thompson, 1998 2
## Snider, 1998 2
## Harder, 2005 2
## Williams, 1989 2
## Gilstrap, 2015 2
## Kilgore, 2003 2
## Hetherington, 2003 2
## Austin, 1988 2
## Edwards, 2010 2
## Sherratt, 2000 2
## Ceurvorst, 2011 2
## Graham, 1986 2
## Hughey, 2012 2
## Burges, 1993 2
## Carpenter, 1994 2
## Quick, 2006 2
## Wuertz, 2016 2
## Vaclav, 1984 2
## Roberts, 1996 2
## Heath, 1998 2
## Dobbs, 2000 2
## Carvey, 2008 2
## Neil, 2015 2
## Barrett, 2005 2
## Edgar, 2015 2
## Leonard, 2010 2
## Miller, 1989 2
## Clear, 1989 2
## Plewe, 1991 2
## Hayko, 1999 2
## Knickerbocker, 1987 2
## Kurth, 1992 2
## Menzies, 2003 2
## Mcmahon, 1998 2
## Isles, 2003 2
## Santistevan-Gettel, 2012 2
## Frenzel, 2006 2
## Fladland, 1997 2
## Pylkka, 2008 2
## Metzger, 1988 2
## Debenedictis, 1993 2
## Bengston, 2004 2
## Schlegel, 1989 2
## Cook, 2013 2
## Sheehan, 2013 2
## Tenbarge, 2000 2
## Buchholtz, 2010 2
## Sheppard, 1994 2
## Warwick, 1986 2
## Weimer, 1999 2
## Rusciolelli, 2000 2
## Jeffers, 1996 2
## Whiting, 1990 2
## Tamburri, 2001 2
## Hurla, 1985 2
## Cowan, 2010 2
## Wyatt, 2003 2
## Vahlenkamp, 2006 2
## Campbell, 2000 2
## Lange, 1993 2
## Shrednik, 2000 2
## Chamberlain, 2011 2
## Runnels, 2004 2
## Hall, 1997 2
## James, 2012 2
## Delaney, 1995 2
## Watkins Jr, 2004 2
## Harbert, 1999 2
## Rodgers, 1993 2
## Hopkins, 2003 2
## Ebberts, 1992 2
## Homrighausen, 2002 2
## Hanrahan, 1999 2
## Bond, 1988 2
## Lemon, 2005 2
## Blevins, 2003 2
## Reynolds, 1989 2
## Marinkovich, 2003 2
## Lesley, 1999 2
## Fourroux, 1999 2
## Orman, 2012 2
## Underwood, 2008 2
## Graesch, 2000 2
## Fay, 2002 2
## Ristau, 1992 2
## Nelson, 1987 2
## Watson, 1997 2
## Crane, 1984 2
## Reeve, 2013 2
## Christenson, 2018 2
## Ayers, 2008 2
## Schwartz, 2012 2
## Gilly, 2014 2
## Dahl, 2002 2
## Allen, 2003 2
## Scott, 1993 2
## Nelson, 2006 2
## Staley, 1992 2
## Maurath, 1994 2
## Carrell, 2008 2
## Schulker, 2009 2
## Walker, 1990 2
## Wagner, 1994 2
## Huebner, 1997 2
## Mcmanus, 1987 2
## Oberholser, 2008 2
## Mai, 2003 2
## Ralph, 1989 2
## Roberts, 2017 2
## Thompson, 2003 2
## Hassan, 2011 2
## Woodard, 2013 2
## Meraz, 2014 2
## Lopez, 2011 2
## Villalobos-Valles, 2006 2
## Godinez, 2005 2
## Quinonez Dominguez, 1993 2
## Ono, 1985 2
## Durgan, 1998 2
## Nguyen, 1996 2
## Maranto, 1997 2
## Duba, 1990 2
##
## Results (common effects model):
##
## treat1 treat2 SMD 95%-CI Q
## Ausbun, 1997 grp ind 0.0636 [ 0.0018; 0.1253] 0.64
## Crable, 1986 grp ind 0.0636 [ 0.0018; 0.1253] 3.05
## Thiede, 2011 grp ind 0.0636 [ 0.0018; 0.1253] 0.05
## Bonertz, 2015 grp ind 0.0636 [ 0.0018; 0.1253] 0.01
## Joy, 2002 grp ind 0.0636 [ 0.0018; 0.1253] 0.02
## Jones, 2013 grp ind 0.0636 [ 0.0018; 0.1253] 0.28
## Aaron, 2004 grp ind 0.0636 [ 0.0018; 0.1253] 0.17
## Breiman, 2001 gsh ind 0.2462 [ 0.1783; 0.3142] 0.06
## Lucero, 2001 gsh ind 0.2462 [ 0.1783; 0.3142] 0.15
## Amsberry, 2010 gsh ind 0.2462 [ 0.1783; 0.3142] 0.11
## Robinson, 2015 gsh ind 0.2462 [ 0.1783; 0.3142] 0.21
## Burgan, 2012 ind tel -0.1269 [-0.2194; -0.0344] 1.75
## Belk, 1986 ind tel -0.1269 [-0.2194; -0.0344] 0.36
## Ledbetter, 1984 ind tel -0.1269 [-0.2194; -0.0344] 0.26
## Narum, 1986 ind tel -0.1269 [-0.2194; -0.0344] 0.24
## Breiman, 2001 ind wlc -0.8986 [-0.9559; -0.8414] 0.06
## Wierenga, 2004 ind wlc -0.8986 [-0.9559; -0.8414] 2.28
## Eisenberg, 1995 ind wlc -0.8986 [-0.9559; -0.8414] 42.17
## Scholer, 2017 ind wlc -0.8986 [-0.9559; -0.8414] 0.02
## Schlaver, 2004 ind wlc -0.8986 [-0.9559; -0.8414] 0.56
## Riley, 2002 ind wlc -0.8986 [-0.9559; -0.8414] 3.23
## Barker, 1997 ind wlc -0.8986 [-0.9559; -0.8414] 0.22
## Dewitt, 1995 ind wlc -0.8986 [-0.9559; -0.8414] 0.68
## De Venecia, 2011 ind wlc -0.8986 [-0.9559; -0.8414] 0.00
## Hiatt, 1985 ind wlc -0.8986 [-0.9559; -0.8414] 1.96
## Congour, 1996 ind wlc -0.8986 [-0.9559; -0.8414] 1.75
## Gruber, 2013 ind wlc -0.8986 [-0.9559; -0.8414] 3.95
## Laird, 2008 ind wlc -0.8986 [-0.9559; -0.8414] 1.23
## Kramer, 2007 ind wlc -0.8986 [-0.9559; -0.8414] 0.11
## Germuska, 1996 ind wlc -0.8986 [-0.9559; -0.8414] 3.91
## Fair, 2003 ind wlc -0.8986 [-0.9559; -0.8414] 2.55
## Mickey, 1989 ind wlc -0.8986 [-0.9559; -0.8414] 0.22
## Shea, 2011 ind wlc -0.8986 [-0.9559; -0.8414] 0.01
## Leach, 2014 cau ind 0.6403 [ 0.5915; 0.6890] 20.28
## Koslosky, 1991 cau ind 0.6403 [ 0.5915; 0.6890] 14.79
## Chambers, 2010 cau ind 0.6403 [ 0.5915; 0.6890] 0.09
## Gleave, 2004 cau ind 0.6403 [ 0.5915; 0.6890] 1.53
## Englund, 2017 cau ind 0.6403 [ 0.5915; 0.6890] 0.10
## Gilleland, 1989 cau ind 0.6403 [ 0.5915; 0.6890] 0.27
## Parker, 2002 cau ind 0.6403 [ 0.5915; 0.6890] 0.22
## Fergerson, 2014 cau ind 0.6403 [ 0.5915; 0.6890] 0.04
## Craft, 2014 cau ind 0.6403 [ 0.5915; 0.6890] 0.81
## Horton, 1985 cau ind 0.6403 [ 0.5915; 0.6890] 0.03
## Brown, 2017 cau ind 0.6403 [ 0.5915; 0.6890] 12.37
## Ancmon, 2001 cau ind 0.6403 [ 0.5915; 0.6890] 0.58
## Roberts, 1997 cau ind 0.6403 [ 0.5915; 0.6890] 1.83
## Erickson, 1998 cau ind 0.6403 [ 0.5915; 0.6890] 0.11
## Summerhays, 1987 cau ind 0.6403 [ 0.5915; 0.6890] 0.01
## King, 1990 cau ind 0.6403 [ 0.5915; 0.6890] 4.12
## Muhlenbruck, 1998 cau ind 0.6403 [ 0.5915; 0.6890] 6.43
## Murray, 1996 cau ind 0.6403 [ 0.5915; 0.6890] 0.36
## Shorb, 1998 cau ind 0.6403 [ 0.5915; 0.6890] 6.90
## Fotenos, 1994 cau ind 0.6403 [ 0.5915; 0.6890] 0.05
## Pauly, 2003 cau ind 0.6403 [ 0.5915; 0.6890] 0.15
## Johnson, 1989 cau ind 0.6403 [ 0.5915; 0.6890] 4.22
## Whitaker, 2017 cau ind 0.6403 [ 0.5915; 0.6890] 0.02
## Brown, 1985 cau ind 0.6403 [ 0.5915; 0.6890] 0.02
## Ciccone, 1997 cau ind 0.6403 [ 0.5915; 0.6890] 0.00
## Martin, 2017 cau ind 0.6403 [ 0.5915; 0.6890] 0.00
## White, 2006 cau ind 0.6403 [ 0.5915; 0.6890] 15.52
## Silva, 1986 cau ind 0.6403 [ 0.5915; 0.6890] 0.00
## Thompson, 1998 cau ind 0.6403 [ 0.5915; 0.6890] 9.70
## Snider, 1998 cau ind 0.6403 [ 0.5915; 0.6890] 4.05
## Harder, 2005 grp gsh -0.1827 [-0.2434; -0.1220] 0.00
## Williams, 1989 grp gsh -0.1827 [-0.2434; -0.1220] 2.51
## Gilstrap, 2015 grp gsh -0.1827 [-0.2434; -0.1220] 0.08
## Kilgore, 2003 grp gsh -0.1827 [-0.2434; -0.1220] 0.19
## Hetherington, 2003 grp gsh -0.1827 [-0.2434; -0.1220] 0.02
## Austin, 1988 grp ush -0.4473 [-0.5334; -0.3613] 4.44
## Edwards, 2010 grp wlc -0.8351 [-0.8858; -0.7844] 12.70
## Sherratt, 2000 grp wlc -0.8351 [-0.8858; -0.7844] 7.77
## Ceurvorst, 2011 grp wlc -0.8351 [-0.8858; -0.7844] 14.83
## Graham, 1986 grp wlc -0.8351 [-0.8858; -0.7844] 52.57
## Hughey, 2012 grp wlc -0.8351 [-0.8858; -0.7844] 124.36
## Burges, 1993 grp wlc -0.8351 [-0.8858; -0.7844] 32.53
## Carpenter, 1994 grp wlc -0.8351 [-0.8858; -0.7844] 18.48
## Quick, 2006 grp wlc -0.8351 [-0.8858; -0.7844] 2.23
## Wuertz, 2016 grp wlc -0.8351 [-0.8858; -0.7844] 2.75
## Vaclav, 1984 grp wlc -0.8351 [-0.8858; -0.7844] 12.01
## Roberts, 1996 grp wlc -0.8351 [-0.8858; -0.7844] 0.18
## Heath, 1998 grp wlc -0.8351 [-0.8858; -0.7844] 7.97
## Dobbs, 2000 grp wlc -0.8351 [-0.8858; -0.7844] 1.91
## Carvey, 2008 grp wlc -0.8351 [-0.8858; -0.7844] 1.10
## Neil, 2015 grp wlc -0.8351 [-0.8858; -0.7844] 15.52
## Barrett, 2005 grp wlc -0.8351 [-0.8858; -0.7844] 1.74
## Edgar, 2015 grp wlc -0.8351 [-0.8858; -0.7844] 9.48
## Leonard, 2010 grp wlc -0.8351 [-0.8858; -0.7844] 4.21
## Miller, 1989 cau grp 0.5767 [ 0.5224; 0.6310] 0.13
## Clear, 1989 cau grp 0.5767 [ 0.5224; 0.6310] 0.03
## Plewe, 1991 cau grp 0.5767 [ 0.5224; 0.6310] 0.67
## Hayko, 1999 cau grp 0.5767 [ 0.5224; 0.6310] 1.80
## Knickerbocker, 1987 cau grp 0.5767 [ 0.5224; 0.6310] 0.05
## Kurth, 1992 cau grp 0.5767 [ 0.5224; 0.6310] 2.55
## Menzies, 2003 cau grp 0.5767 [ 0.5224; 0.6310] 0.64
## Mcmahon, 1998 cau grp 0.5767 [ 0.5224; 0.6310] 0.37
## Isles, 2003 cau grp 0.5767 [ 0.5224; 0.6310] 0.79
## Santistevan-Gettel, 2012 cau grp 0.5767 [ 0.5224; 0.6310] 24.69
## Frenzel, 2006 cau grp 0.5767 [ 0.5224; 0.6310] 0.67
## Fladland, 1997 cau grp 0.5767 [ 0.5224; 0.6310] 1.07
## Pylkka, 2008 cau grp 0.5767 [ 0.5224; 0.6310] 6.13
## Metzger, 1988 cau grp 0.5767 [ 0.5224; 0.6310] 0.29
## Debenedictis, 1993 cau grp 0.5767 [ 0.5224; 0.6310] 0.07
## Bengston, 2004 cau grp 0.5767 [ 0.5224; 0.6310] 29.31
## Schlegel, 1989 cau grp 0.5767 [ 0.5224; 0.6310] 11.00
## Cook, 2013 cau grp 0.5767 [ 0.5224; 0.6310] 0.23
## Sheehan, 2013 cau grp 0.5767 [ 0.5224; 0.6310] 25.79
## Tenbarge, 2000 cau grp 0.5767 [ 0.5224; 0.6310] 0.77
## Buchholtz, 2010 cau grp 0.5767 [ 0.5224; 0.6310] 7.05
## Sheppard, 1994 gsh ush -0.2646 [-0.3499; -0.1793] 0.08
## Warwick, 1986 gsh ush -0.2646 [-0.3499; -0.1793] 0.02
## Weimer, 1999 gsh ush -0.2646 [-0.3499; -0.1793] 1.66
## Rusciolelli, 2000 gsh ush -0.2646 [-0.3499; -0.1793] 1.76
## Jeffers, 1996 gsh ush -0.2646 [-0.3499; -0.1793] 0.11
## Whiting, 1990 gsh wlc -0.6524 [-0.7004; -0.6044] 1.29
## Tamburri, 2001 gsh wlc -0.6524 [-0.7004; -0.6044] 11.30
## Hurla, 1985 gsh wlc -0.6524 [-0.7004; -0.6044] 1.75
## Cowan, 2010 gsh wlc -0.6524 [-0.7004; -0.6044] 76.11
## Wyatt, 2003 gsh wlc -0.6524 [-0.7004; -0.6044] 8.40
## Vahlenkamp, 2006 gsh wlc -0.6524 [-0.7004; -0.6044] 0.09
## Campbell, 2000 gsh wlc -0.6524 [-0.7004; -0.6044] 21.57
## Lange, 1993 gsh wlc -0.6524 [-0.7004; -0.6044] 302.44
## Shrednik, 2000 gsh wlc -0.6524 [-0.7004; -0.6044] 5.73
## Chamberlain, 2011 gsh wlc -0.6524 [-0.7004; -0.6044] 4.09
## Runnels, 2004 gsh wlc -0.6524 [-0.7004; -0.6044] 2.39
## Hall, 1997 gsh wlc -0.6524 [-0.7004; -0.6044] 7.22
## James, 2012 gsh wlc -0.6524 [-0.7004; -0.6044] 7.84
## Delaney, 1995 gsh wlc -0.6524 [-0.7004; -0.6044] 0.46
## Watkins Jr, 2004 gsh wlc -0.6524 [-0.7004; -0.6044] 3.32
## Harbert, 1999 gsh wlc -0.6524 [-0.7004; -0.6044] 3.83
## Rodgers, 1993 gsh wlc -0.6524 [-0.7004; -0.6044] 216.49
## Hopkins, 2003 gsh wlc -0.6524 [-0.7004; -0.6044] 3.62
## Ebberts, 1992 gsh wlc -0.6524 [-0.7004; -0.6044] 15.88
## Homrighausen, 2002 gsh wlc -0.6524 [-0.7004; -0.6044] 40.06
## Hanrahan, 1999 gsh wlc -0.6524 [-0.7004; -0.6044] 1.84
## Bond, 1988 gsh wlc -0.6524 [-0.7004; -0.6044] 90.96
## Lemon, 2005 gsh wlc -0.6524 [-0.7004; -0.6044] 10.73
## Blevins, 2003 gsh wlc -0.6524 [-0.7004; -0.6044] 1.07
## Reynolds, 1989 gsh wlc -0.6524 [-0.7004; -0.6044] 37.67
## Marinkovich, 2003 gsh wlc -0.6524 [-0.7004; -0.6044] 10.83
## Lesley, 1999 gsh wlc -0.6524 [-0.7004; -0.6044] 14.04
## Fourroux, 1999 gsh wlc -0.6524 [-0.7004; -0.6044] 5.89
## Orman, 2012 gsh wlc -0.6524 [-0.7004; -0.6044] 6.24
## Underwood, 2008 gsh wlc -0.6524 [-0.7004; -0.6044] 4.44
## Graesch, 2000 gsh wlc -0.6524 [-0.7004; -0.6044] 49.83
## Fay, 2002 gsh wlc -0.6524 [-0.7004; -0.6044] 0.20
## Ristau, 1992 gsh wlc -0.6524 [-0.7004; -0.6044] 7.99
## Nelson, 1987 gsh wlc -0.6524 [-0.7004; -0.6044] 43.38
## Watson, 1997 gsh wlc -0.6524 [-0.7004; -0.6044] 9.58
## Crane, 1984 cau gsh 0.3940 [ 0.3292; 0.4588] 0.95
## Reeve, 2013 cau gsh 0.3940 [ 0.3292; 0.4588] 0.22
## Christenson, 2018 cau gsh 0.3940 [ 0.3292; 0.4588] 0.77
## Ayers, 2008 cau gsh 0.3940 [ 0.3292; 0.4588] 0.01
## Schwartz, 2012 cau gsh 0.3940 [ 0.3292; 0.4588] 0.44
## Gilly, 2014 cau gsh 0.3940 [ 0.3292; 0.4588] 0.50
## Dahl, 2002 cau gsh 0.3940 [ 0.3292; 0.4588] 0.39
## Allen, 2003 cau gsh 0.3940 [ 0.3292; 0.4588] 3.63
## Scott, 1993 tel wlc -0.7718 [-0.8701; -0.6734] 4.76
## Nelson, 2006 cau tel 0.5134 [ 0.4190; 0.6078] 0.00
## Staley, 1992 cau tel 0.5134 [ 0.4190; 0.6078] 0.01
## Maurath, 1994 cau tel 0.5134 [ 0.4190; 0.6078] 2.22
## Carrell, 2008 cau tel 0.5134 [ 0.4190; 0.6078] 8.00
## Schulker, 2009 cau tel 0.5134 [ 0.4190; 0.6078] 8.95
## Walker, 1990 cau tel 0.5134 [ 0.4190; 0.6078] 0.49
## Wagner, 1994 ush wlc -0.3878 [-0.4692; -0.3063] 0.04
## Huebner, 1997 ush wlc -0.3878 [-0.4692; -0.3063] 0.39
## Mcmanus, 1987 ush wlc -0.3878 [-0.4692; -0.3063] 0.40
## Oberholser, 2008 ush wlc -0.3878 [-0.4692; -0.3063] 1.21
## Mai, 2003 ush wlc -0.3878 [-0.4692; -0.3063] 0.29
## Ralph, 1989 ush wlc -0.3878 [-0.4692; -0.3063] 0.00
## Roberts, 2017 ush wlc -0.3878 [-0.4692; -0.3063] 0.29
## Thompson, 2003 ush wlc -0.3878 [-0.4692; -0.3063] 0.80
## Hassan, 2011 ush wlc -0.3878 [-0.4692; -0.3063] 2.44
## Woodard, 2013 ush wlc -0.3878 [-0.4692; -0.3063] 0.05
## Meraz, 2014 ush wlc -0.3878 [-0.4692; -0.3063] 0.21
## Lopez, 2011 cau ush 0.1294 [ 0.0439; 0.2149] 3.23
## Villalobos-Valles, 2006 cau ush 0.1294 [ 0.0439; 0.2149] 0.29
## Godinez, 2005 cau ush 0.1294 [ 0.0439; 0.2149] 0.04
## Quinonez Dominguez, 1993 cau ush 0.1294 [ 0.0439; 0.2149] 0.01
## Ono, 1985 cau ush 0.1294 [ 0.0439; 0.2149] 0.07
## Durgan, 1998 cau ush 0.1294 [ 0.0439; 0.2149] 1.34
## Nguyen, 1996 cau ush 0.1294 [ 0.0439; 0.2149] 0.90
## Maranto, 1997 cau ush 0.1294 [ 0.0439; 0.2149] 0.17
## Duba, 1990 cau ush 0.1294 [ 0.0439; 0.2149] 0.62
## Breiman, 2001 gsh wlc -0.6524 [-0.7004; -0.6044] 0.00
## leverage
## Ausbun, 1997 0.03
## Crable, 1986 0.01
## Thiede, 2011 0.03
## Bonertz, 2015 0.01
## Joy, 2002 0.00
## Jones, 2013 0.01
## Aaron, 2004 0.01
## Breiman, 2001 .
## Lucero, 2001 0.00
## Amsberry, 2010 0.01
## Robinson, 2015 0.02
## Burgan, 2012 0.12
## Belk, 1986 0.32
## Ledbetter, 1984 0.04
## Narum, 1986 0.02
## Breiman, 2001 .
## Wierenga, 2004 0.01
## Eisenberg, 1995 0.24
## Scholer, 2017 0.00
## Schlaver, 2004 0.01
## Riley, 2002 0.01
## Barker, 1997 0.03
## Dewitt, 1995 0.01
## De Venecia, 2011 0.01
## Hiatt, 1985 0.08
## Congour, 1996 0.01
## Gruber, 2013 0.01
## Laird, 2008 0.00
## Kramer, 2007 0.01
## Germuska, 1996 0.02
## Fair, 2003 0.03
## Mickey, 1989 0.01
## Shea, 2011 0.03
## Leach, 2014 0.12
## Koslosky, 1991 0.03
## Chambers, 2010 0.00
## Gleave, 2004 0.01
## Englund, 2017 0.01
## Gilleland, 1989 0.00
## Parker, 2002 0.01
## Fergerson, 2014 0.01
## Craft, 2014 0.00
## Horton, 1985 0.00
## Brown, 2017 0.11
## Ancmon, 2001 0.04
## Roberts, 1997 0.01
## Erickson, 1998 0.01
## Summerhays, 1987 0.01
## King, 1990 0.01
## Muhlenbruck, 1998 0.03
## Murray, 1996 0.01
## Shorb, 1998 0.04
## Fotenos, 1994 0.00
## Pauly, 2003 0.07
## Johnson, 1989 0.02
## Whitaker, 2017 0.02
## Brown, 1985 0.02
## Ciccone, 1997 0.00
## Martin, 2017 0.03
## White, 2006 0.03
## Silva, 1986 0.00
## Thompson, 1998 0.03
## Snider, 1998 0.01
## Harder, 2005 0.01
## Williams, 1989 0.09
## Gilstrap, 2015 0.11
## Kilgore, 2003 0.01
## Hetherington, 2003 0.02
## Austin, 1988 0.17
## Edwards, 2010 0.01
## Sherratt, 2000 0.00
## Ceurvorst, 2011 0.23
## Graham, 1986 0.02
## Hughey, 2012 0.13
## Burges, 1993 0.09
## Carpenter, 1994 0.00
## Quick, 2006 0.00
## Wuertz, 2016 0.00
## Vaclav, 1984 0.01
## Roberts, 1996 0.00
## Heath, 1998 0.01
## Dobbs, 2000 0.00
## Carvey, 2008 0.00
## Neil, 2015 0.05
## Barrett, 2005 0.01
## Edgar, 2015 0.00
## Leonard, 2010 0.00
## Miller, 1989 0.00
## Clear, 1989 0.00
## Plewe, 1991 0.03
## Hayko, 1999 0.00
## Knickerbocker, 1987 0.01
## Kurth, 1992 0.01
## Menzies, 2003 0.01
## Mcmahon, 1998 0.00
## Isles, 2003 0.02
## Santistevan-Gettel, 2012 0.01
## Frenzel, 2006 0.03
## Fladland, 1997 0.03
## Pylkka, 2008 0.08
## Metzger, 1988 0.00
## Debenedictis, 1993 0.01
## Bengston, 2004 0.12
## Schlegel, 1989 0.01
## Cook, 2013 0.07
## Sheehan, 2013 0.08
## Tenbarge, 2000 0.01
## Buchholtz, 2010 0.01
## Sheppard, 1994 0.03
## Warwick, 1986 0.01
## Weimer, 1999 0.13
## Rusciolelli, 2000 0.11
## Jeffers, 1996 0.01
## Whiting, 1990 0.00
## Tamburri, 2001 0.01
## Hurla, 1985 0.00
## Cowan, 2010 0.04
## Wyatt, 2003 0.00
## Vahlenkamp, 2006 0.01
## Campbell, 2000 0.00
## Lange, 1993 0.15
## Shrednik, 2000 0.00
## Chamberlain, 2011 0.01
## Runnels, 2004 0.00
## Hall, 1997 0.01
## James, 2012 0.00
## Delaney, 1995 0.02
## Watkins Jr, 2004 0.00
## Harbert, 1999 0.00
## Rodgers, 1993 0.14
## Hopkins, 2003 0.01
## Ebberts, 1992 0.05
## Homrighausen, 2002 0.02
## Hanrahan, 1999 0.00
## Bond, 1988 0.04
## Lemon, 2005 0.00
## Blevins, 2003 0.07
## Reynolds, 1989 0.00
## Marinkovich, 2003 0.01
## Lesley, 1999 0.01
## Fourroux, 1999 0.01
## Orman, 2012 0.01
## Underwood, 2008 0.01
## Graesch, 2000 0.02
## Fay, 2002 0.01
## Ristau, 1992 0.01
## Nelson, 1987 0.04
## Watson, 1997 0.01
## Crane, 1984 0.01
## Reeve, 2013 0.02
## Christenson, 2018 0.01
## Ayers, 2008 0.01
## Schwartz, 2012 0.01
## Gilly, 2014 0.01
## Dahl, 2002 0.01
## Allen, 2003 0.15
## Scott, 1993 0.26
## Nelson, 2006 0.16
## Staley, 1992 0.02
## Maurath, 1994 0.03
## Carrell, 2008 0.07
## Schulker, 2009 0.03
## Walker, 1990 0.04
## Wagner, 1994 0.05
## Huebner, 1997 0.03
## Mcmanus, 1987 0.03
## Oberholser, 2008 0.03
## Mai, 2003 0.03
## Ralph, 1989 0.01
## Roberts, 2017 0.08
## Thompson, 2003 0.03
## Hassan, 2011 0.07
## Woodard, 2013 0.01
## Meraz, 2014 0.02
## Lopez, 2011 0.17
## Villalobos-Valles, 2006 0.02
## Godinez, 2005 0.01
## Quinonez Dominguez, 1993 0.02
## Ono, 1985 0.01
## Durgan, 1998 0.05
## Nguyen, 1996 0.02
## Maranto, 1997 0.03
## Duba, 1990 0.03
## Breiman, 2001 .
##
## Number of studies: k = 182
## Number of pairwise comparisons: m = 184
## Number of treatments: n = 7
## Number of designs: d = 17
##
## Common effects model
##
## Treatment estimate (sm = 'SMD', comparison: other treatments vs 'cau'):
## SMD 95%-CI z p-value
## cau . . . .
## grp -0.5767 [-0.6310; -0.5224] -20.81 < 0.0001
## gsh -0.3940 [-0.4588; -0.3292] -11.92 < 0.0001
## ind -0.6403 [-0.6890; -0.5915] -25.74 < 0.0001
## tel -0.5134 [-0.6078; -0.4190] -10.65 < 0.0001
## ush -0.1294 [-0.2149; -0.0439] -2.97 0.0030
## wlc 0.2584 [ 0.2011; 0.3157] 8.84 < 0.0001
##
## Quantifying heterogeneity / inconsistency:
## tau^2 = 0.2682; tau = 0.5179; I^2 = 89.6% [88.3%; 90.7%]
##
## Tests of heterogeneity (within designs) and inconsistency (between designs):
## Q d.f. p-value
## Total 1696.94 177 < 0.0001
## Within designs 1595.04 165 < 0.0001
## Between designs 101.90 12 < 0.0001
Because of the significant inconsistency, we need to use the full design-by-treatment interaction random-effects model.
decomp.design(m.netmeta)
## Q statistics to assess homogeneity / consistency
##
## Q df p-value
## Total 1696.94 177 < 0.0001
## Within designs 1595.04 165 < 0.0001
## Between designs 101.90 12 < 0.0001
##
## Design-specific decomposition of within-designs Q statistic
##
## Design Q df p-value
## gsh vs wlc 1026.72 34 < 0.0001
## grp vs wlc 303.80 17 < 0.0001
## cau vs ind 100.00 29 < 0.0001
## cau vs grp 82.50 20 < 0.0001
## ind vs wlc 52.66 16 < 0.0001
## cau vs tel 11.40 5 0.0440
## ind vs tel 1.86 3 0.6026
## cau vs ush 5.94 8 0.6539
## grp vs ind 4.10 6 0.6636
## grp vs gsh 2.17 4 0.7037
## gsh vs ush 0.79 4 0.9395
## gsh vs ind 0.02 2 0.9909
## ush vs wlc 2.37 10 0.9927
## cau vs gsh 0.72 7 0.9982
##
## Between-designs Q statistic after detaching of single designs
## (influential designs have p-value markedly different from < 0.0001)
##
## Detached design Q df p-value
## cau vs grp 25.92 11 0.0067
## grp vs wlc 58.10 11 < 0.0001
## ind vs wlc 77.23 11 < 0.0001
## cau vs ind 86.12 11 < 0.0001
## cau vs tel 89.15 11 < 0.0001
## cau vs gsh 93.95 11 < 0.0001
## gsh vs wlc 94.97 11 < 0.0001
## tel vs wlc 95.45 11 < 0.0001
## ush vs wlc 95.81 11 < 0.0001
## grp vs ush 96.58 11 < 0.0001
## gsh vs ush 97.90 11 < 0.0001
## ind vs tel 100.37 11 < 0.0001
## cau vs ush 100.78 11 < 0.0001
## grp vs gsh 101.07 11 < 0.0001
## gsh vs ind 101.43 11 < 0.0001
## grp vs ind 101.76 11 < 0.0001
## gsh vs ind vs wlc 101.78 10 < 0.0001
##
## Q statistic to assess consistency under the assumption of
## a full design-by-treatment interaction random effects model
##
## Q df p-value tau.within tau2.within
## Between designs 3.82 12 0.9865 0.5403 0.2919
Visualization
# Replace with full name (see treat1.long and treat2.long)
long.labels <- c("Care As Usual", "Group",
"Guided Self-Help",
"Individual", "Telephone",
"Unguided Self-Help",
"Waitlist")
netgraph(m.netmeta,
labels = long.labels)
library(dmetar)
d.evidence <- direct.evidence.plot(m.netmeta)
plot(d.evidence)
According to König, Krahn, and Binder (2013), a mean path length > 2 means that particular caution is warranted when interpreting the corresponding comparison estimates.
Obtaining the effect estimates
result.matrix <- m.netmeta$TE.fixed
result.matrix <- round(result.matrix, 2)
result.matrix[lower.tri(result.matrix, diag = FALSE)] <- NA
result.matrix
## cau grp gsh ind tel ush wlc
## cau 0 0.58 0.39 0.64 0.51 0.13 -0.26
## grp NA 0.00 -0.18 0.06 -0.06 -0.45 -0.84
## gsh NA NA 0.00 0.25 0.12 -0.26 -0.65
## ind NA NA NA 0.00 -0.13 -0.51 -0.90
## tel NA NA NA NA 0.00 -0.38 -0.77
## ush NA NA NA NA NA 0.00 -0.39
## wlc NA NA NA NA NA NA 0.00
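In a consistent network, any indirect comparison can be recovered from two direct comparisons that share a comparator. As a quick arithmetic check on the rounded matrix above (a Python sketch, since only the numbers matter here):

```python
# Transitivity check: in a consistent network, the grp vs. gsh effect
# equals the difference of the two comparisons sharing the anchor cau.
# Values are taken from the rounded effect matrix above.
smd_cau_grp = 0.58  # cau vs. grp
smd_cau_gsh = 0.39  # cau vs. gsh

# Indirect grp vs. gsh estimate via the shared comparator cau
smd_grp_gsh = smd_cau_gsh - smd_cau_grp
print(round(smd_grp_gsh, 2))  # -0.19
```

The result (-0.19) matches the grp-gsh entry of the matrix (-0.18) up to the rounding of the inputs.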
# Produce effect table
netleague <- netleague(m.netmeta,
bracket = "(", # use round brackets
digits=2) # round to two digits
netleague
## League table (common effects model):
##
## cau 0.37 ( 0.30; 0.44) 0.57 ( 0.43; 0.71)
## 0.58 ( 0.52; 0.63) grp -0.23 (-0.36; -0.11)
## 0.39 ( 0.33; 0.46) -0.18 (-0.24; -0.12) gsh
## 0.64 ( 0.59; 0.69) 0.06 ( 0.00; 0.13) 0.25 ( 0.18; 0.31)
## 0.51 ( 0.42; 0.61) -0.06 (-0.17; 0.04) 0.12 ( 0.01; 0.22)
## 0.13 ( 0.04; 0.21) -0.45 (-0.53; -0.36) -0.26 (-0.35; -0.18)
## -0.26 (-0.32; -0.20) -0.84 (-0.89; -0.78) -0.65 (-0.70; -0.60)
##
## 0.70 ( 0.65; 0.76) 0.75 ( 0.59; 0.91) 0.19 ( 0.05; 0.34)
## 0.10 (-0.10; 0.30) . -0.67 (-0.89; -0.46)
## 0.11 (-0.26; 0.47) . -0.40 (-0.56; -0.24)
## ind -0.19 (-0.32; -0.05) .
## -0.13 (-0.22; -0.03) tel .
## -0.51 (-0.60; -0.42) -0.38 (-0.51; -0.26) ush
## -0.90 (-0.96; -0.84) -0.77 (-0.87; -0.67) -0.39 (-0.47; -0.31)
##
## .
## -0.98 (-1.05; -0.91)
## -0.61 (-0.67; -0.56)
## -0.76 (-0.84; -0.67)
## -0.56 (-0.75; -0.37)
## -0.52 (-0.65; -0.39)
## wlc
Ranking the treatments by efficacy using P-scores
netrank(m.netmeta, small.values = "good")
## P-score
## ind 0.9958
## grp 0.8184
## tel 0.6837
## gsh 0.5022
## ush 0.3331
## cau 0.1669
## wlc 0.0000
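A P-score is the average probability that a treatment is better than each of its competitors, computed from the network estimates and their standard errors. A minimal sketch of that idea with made-up numbers, not the data above (`p_scores` and the three-treatment example are hypothetical; smaller SMDs are treated as better):

```python
from math import erf, sqrt

def phi(x):
    # Standard normal cumulative distribution function
    return 0.5 * (1 + erf(x / sqrt(2)))

def p_scores(theta, se):
    """P-score of each treatment: the mean one-sided probability of
    being better (here: a more negative SMD) than every competitor.
    theta[j][k] and se[j][k] hold the network estimate and its SE."""
    scores = {}
    for j in theta:
        probs = [phi(-theta[j][k] / se[j][k]) for k in theta[j]]
        scores[j] = sum(probs) / len(probs)
    return scores

# Hypothetical three-treatment example (not the data above)
theta = {"A": {"B": -0.4, "C": -0.8},
         "B": {"A": 0.4, "C": -0.4},
         "C": {"A": 0.8, "B": 0.4}}
se = {j: {k: 0.2 for k in theta[j]} for j in theta}
print(p_scores(theta, se))
```

Treatment A, which beats both competitors, gets a score close to 1; C, beaten by both, gets a score close to 0.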
forest(m.netmeta,
reference.group = "cau",
sortvar = TE,
xlim = c(-1.3, 0.5),
smlab = paste("Therapy Formats vs. Care As Usual \n",
"(Depressive Symptoms)"),
drop.reference.group = TRUE,
label.left = "Favors Intervention",
label.right = "Favors Care As Usual",
labels = long.labels)
Assessing inconsistency in the network with a net heat plot.
netheat(m.netmeta)
# Gray boxes indicate how important a treatment comparison is for the estimation of another comparison;
# the colored background indicates how much of the inconsistency in the row design is attributable to the column design
# Use the random-effects model
netheat(m.netmeta, random = TRUE)
Assessing inconsistency using net splitting (node splitting).
netsplit(m.netmeta)
## Separate indirect from direct evidence (SIDE) using back-calculation method
##
## Common effects model:
##
## comparison k prop nma direct indir. Diff z p-value
## grp vs cau 21 0.58 -0.5767 -0.3727 -0.8628 0.4901 8.72 < 0.0001
## gsh vs cau 8 0.22 -0.3940 -0.5684 -0.3442 -0.2243 -2.82 0.0048
## ind vs cau 30 0.71 -0.6403 -0.7037 -0.4863 -0.2174 -3.97 < 0.0001
## tel vs cau 6 0.35 -0.5134 -0.7471 -0.3867 -0.3604 -3.57 0.0004
## ush vs cau 9 0.35 -0.1294 -0.1919 -0.0953 -0.0966 -1.06 0.2903
## wlc vs cau 0 0 0.2584 . 0.2584 . . .
## grp vs gsh 5 0.24 -0.1827 -0.2332 -0.1670 -0.0663 -0.91 0.3629
## grp vs ind 7 0.09 0.0636 0.1004 0.0598 0.0406 0.37 0.7099
## grp vs tel 0 0 -0.0633 . -0.0633 . . .
## grp vs ush 1 0.17 -0.4473 -0.6750 -0.4023 -0.2727 -2.31 0.0210
## grp vs wlc 18 0.58 -0.8351 -0.9817 -0.6352 -0.3465 -6.62 < 0.0001
## gsh vs ind 4 0.03 0.2462 0.1080 0.2512 -0.1432 -0.76 0.4496
## gsh vs tel 0 0 0.1194 . 0.1194 . . .
## gsh vs ush 5 0.29 -0.2646 -0.4001 -0.2086 -0.1915 -2.00 0.0454
## gsh vs wlc 36 0.73 -0.6524 -0.6134 -0.7594 0.1460 2.64 0.0083
## ind vs tel 4 0.50 -0.1269 -0.1854 -0.0684 -0.1170 -1.24 0.2153
## ind vs ush 0 0 -0.5109 . -0.5109 . . .
## ind vs wlc 18 0.51 -0.8986 -0.7552 -1.0475 0.2923 5.00 < 0.0001
## tel vs ush 0 0 -0.3840 . -0.3840 . . .
## tel vs wlc 1 0.26 -0.7718 -0.5580 -0.8476 0.2896 2.54 0.0111
## ush vs wlc 11 0.38 -0.3878 -0.5182 -0.3072 -0.2109 -2.47 0.0136
##
## Legend:
## comparison - Treatment comparison
## k - Number of studies providing direct evidence
## prop - Direct evidence proportion
## nma - Estimated treatment effect (SMD) in network meta-analysis
## direct - Estimated treatment effect (SMD) derived from direct evidence
## indir. - Estimated treatment effect (SMD) derived from indirect evidence
## Diff - Difference between direct and indirect treatment estimates
## z - z-value of test for disagreement (direct versus indirect)
## p-value - p-value of test for disagreement (direct versus indirect)
netsplit(m.netmeta) %>% forest()
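Node splitting tests, for each comparison, whether the direct and indirect estimates disagree: their difference is divided by its standard error, which yields the z-values and p-values in the table above. A sketch of that arithmetic with hypothetical inputs (the individual SEs are not reported in the output above):

```python
from math import erf, sqrt

def inconsistency_z(direct, se_direct, indirect, se_indirect):
    """z-test for disagreement between direct and indirect evidence:
    the difference of the two estimates divided by its standard error."""
    diff = direct - indirect
    se_diff = sqrt(se_direct**2 + se_indirect**2)
    z = diff / se_diff
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return diff, z, p

# Hypothetical inputs: direct and indirect estimates with their SEs
diff, z, p = inconsistency_z(direct=-0.37, se_direct=0.04,
                             indirect=-0.86, se_indirect=0.04)
print(round(diff, 2), round(z, 2))
```

A large |z| (small p-value) flags a comparison whose direct and indirect evidence conflict, as seen for several comparisons in the table.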
Comparison-Adjusted Funnel Plots
Comparison-adjusted funnel plots can be used to assess the risk of publication bias in a network meta-analysis. This type of funnel plot takes into account the assumption that studies of newer treatments are more likely to be published.
funnel(m.netmeta,
order = c("wlc", "cau", "ind", "grp", # from old to new
"tel", "ush", "gsh"),
pch = c(1:4, 5, 6, 8, 15:19, 21:24),
col = c("blue", "red", "purple", "forestgreen", "grey",
"green", "black", "brown", "orange", "pink",
"khaki", "plum", "aquamarine", "sandybrown",
"coral", "gold4"),
linreg = TRUE)
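The "comparison adjustment" consists of centering each study's effect on the pooled estimate of its own comparison before plotting it against the standard error, so that studies from different comparisons can share one funnel. A sketch of just the centering step with made-up numbers (`centered_effects` is a hypothetical helper, not a netmeta function):

```python
def centered_effects(effects, comparisons, pooled):
    """Center each study effect on the pooled estimate of its own
    comparison (the 'comparison adjustment' of the funnel plot)."""
    return [e - pooled[c] for e, c in zip(effects, comparisons)]

# Hypothetical studies from two different comparisons
effects = [-0.9, -0.8, -0.7, -0.3, -0.5]
comparisons = ["ind vs wlc", "ind vs wlc", "ind vs wlc",
               "ush vs wlc", "ush vs wlc"]
pooled = {"ind vs wlc": -0.8, "ush vs wlc": -0.4}
centered = centered_effects(effects, comparisons, pooled)
print([round(x, 2) for x in centered])  # [-0.1, 0.0, 0.1, 0.1, -0.1]
```

Under no small-study effects, the centered values scatter symmetrically around zero regardless of which comparison each study belongs to.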
Bayesian inference is a school of statistics distinct from the frequentist one. Its foundation is Bayes' theorem:
\[P(\text{A}|\text{B})=\frac{P(\text{B}|\text{A})\times P(\text{A})}{P(\text{B})}\]
Or, put differently: the posterior probability is proportional to the likelihood times the prior probability
\[P(\text{A}|\text{B}) \propto P(\text{B}|\text{A})\times P(\text{A})\]
\[P(\boldsymbol{\theta} | {\boldsymbol{Y}} ) \propto P( {\boldsymbol{Y}} | \boldsymbol{\theta} )\times P( \boldsymbol{\theta})\]
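For a single normally distributed effect with known variance and a conjugate normal prior, the posterior implied by this proportionality has a closed form: a precision-weighted average of the prior mean and the data. A small numeric sketch (the numbers are made up):

```python
def posterior_normal(mu0, tau0_sq, y, sigma_sq):
    """Posterior of a normal mean given one observation y with known
    variance sigma_sq and a conjugate normal prior N(mu0, tau0_sq):
    normal again, with a precision-weighted mean."""
    w_prior = 1 / tau0_sq  # prior precision
    w_data = 1 / sigma_sq  # data precision
    var_post = 1 / (w_prior + w_data)
    mu_post = var_post * (w_prior * mu0 + w_data * y)
    return mu_post, var_post

# Vague prior centered at 0; one observed effect of -0.5 with SE 0.1
mu, var = posterior_normal(mu0=0.0, tau0_sq=1.0, y=-0.5, sigma_sq=0.1**2)
print(round(mu, 3), round(var, 4))
```

Because the data are much more precise than the prior, the posterior mean (about -0.495) sits close to the observed effect, shrunk only slightly toward the prior mean of 0.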
The fixed-effect model of a classical meta-analysis is:
\[\hat\theta_k \sim \mathcal{N}(\theta,\sigma_k^2)\]
The random-effects model:
\[\hat\theta_k \sim \mathcal{N}(\theta_k,\sigma_k^2)\\\theta_k \sim \mathcal{N}(\mu,\tau^2)\]
In the network model:
\[\begin{align}\hat\theta_{k \text{,A,B}} &\sim \mathcal{N}(\theta_{k\text{,A,B}},\sigma_k^2) \notag \\\theta_{k \text{,A,B}} &\sim \mathcal{N}(\theta_{\text{A,B}},\tau^2) \end{align}\]
\[\begin{align}\begin{bmatrix}\hat\theta_{k\text{,A,E}} \\\hat\theta_{k\text{,B,E}} \\\hat\theta_{k\text{,C,E}} \\\hat\theta_{k\text{,D,E}}\end{bmatrix}&=\mathcal{N}\left(\begin{bmatrix}\theta_{\text{A,E}} \\\theta_{\text{B,E}} \\\theta_{\text{C,E}} \\\theta_{\text{D,E}}\end{bmatrix},\begin{bmatrix}\tau^2 & \tau^2/2 & \tau^2/2 & \tau^2/2 \\\tau^2/2 & \tau^2 & \tau^2/2 & \tau^2/2 \\\tau^2/2 & \tau^2/2 & \tau^2 & \tau^2/2 \\\tau^2/2 & \tau^2/2 & \tau^2/2 & \tau^2\end{bmatrix}\right)\end{align}\]
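The covariance matrix above has \(\tau^2\) on the diagonal and \(\tau^2/2\) everywhere off the diagonal, reflecting the arm E shared by all comparisons of the multi-arm study. Building it programmatically is straightforward (a Python sketch; `multiarm_cov` is a hypothetical helper):

```python
def multiarm_cov(tau_sq, n_comparisons):
    """Covariance matrix of the comparisons of a multi-arm study:
    tau^2 on the diagonal, tau^2/2 off the diagonal (shared arm)."""
    return [[tau_sq if i == j else tau_sq / 2
             for j in range(n_comparisons)]
            for i in range(n_comparisons)]

# Four comparisons against the shared arm E, as in the matrix above
cov = multiarm_cov(0.04, 4)
print(cov[0])  # [0.04, 0.02, 0.02, 0.02]
```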
#install.packages('gemtc')
library(gemtc)
## Loading required package: coda
##
## Attaching package: 'gemtc'
## The following object is masked from 'package:metafor':
##
## forest
## The following object is masked from 'package:meta':
##
## forest
## The following object is masked from 'package:dmetar':
##
## sucra
library(rjags)
## Linked to JAGS 4.3.1
## Loaded modules: basemod,bugs
library(dmetar)
data(TherapyFormatsGeMTC)
head(TherapyFormatsGeMTC$data)
## study diff std.err treatment
## 1 Ausbun, 1997 0.092 0.195 ind
## 2 Ausbun, 1997 NA NA grp
## 3 Crable, 1986 -0.675 0.350 ind
## 4 Crable, 1986 NA NA grp
## 5 Thiede, 2011 -0.107 0.198 ind
## 6 Thiede, 2011 NA NA grp
TherapyFormatsGeMTC$treat.codes
## id description
## 1 ind Individual
## 2 grp Group
## 3 gsh Guided Self-Help
## 4 tel Telephone
## 5 wlc Waitlist
## 6 cau Care As Usual
## 7 ush Unguided Self-Help
network <- mtc.network(data.re = TherapyFormatsGeMTC$data,
treatments = TherapyFormatsGeMTC$treat.codes)
summary(network)
## $Description
## [1] "MTC dataset: Network"
##
## $`Studies per treatment`
## ind grp gsh tel wlc cau ush
## 62 52 57 11 83 74 26
##
## $`Number of n-arm studies`
## 2-arm 3-arm
## 181 1
##
## $`Studies per treatment comparison`
## t1 t2 nr
## 1 ind tel 4
## 2 ind wlc 18
## 3 grp ind 7
## 4 grp gsh 5
## 5 grp wlc 18
## 6 grp ush 1
## 7 gsh ind 4
## 8 gsh wlc 36
## 9 gsh ush 5
## 10 tel wlc 1
## 11 cau ind 30
## 12 cau grp 21
## 13 cau gsh 8
## 14 cau tel 6
## 15 cau ush 9
## 16 ush wlc 11
plot(network,
use.description = TRUE) # Use full treatment names
#better visualization of our network using the Fruchterman-Reingold algorithm
library(igraph)
##
## Attaching package: 'igraph'
## The following objects are masked from 'package:lubridate':
##
## %--%, union
## The following objects are masked from 'package:dplyr':
##
## as_data_frame, groups, union
## The following objects are masked from 'package:purrr':
##
## compose, simplify
## The following object is masked from 'package:tidyr':
##
## crossing
## The following object is masked from 'package:tibble':
##
## as_data_frame
## The following objects are masked from 'package:stats':
##
## decompose, spectrum
## The following object is masked from 'package:base':
##
## union
set.seed(12345) # set seed for reproducibility
plot(network,
use.description = TRUE, # Use full treatment names
vertex.color = "white", # node color
vertex.label.color = "gray10", # treatment label color
vertex.shape = "sphere", # shape of the node
vertex.size = 20, # size of the node
vertex.label.dist = 2, # distance label-node center
vertex.label.cex = 1.5, # node label size
edge.curved = 0.2, # edge curvature
layout = layout.fruchterman.reingold)
# We give our compiled model the name `model`.
model <- mtc.model(network,
likelihood = "normal",
link = "identity",
linearModel = "random",
n.chain = 4) # number of Markov chains
Markov Chain Monte Carlo (MCMC) Sampling
Through MCMC simulation, we can estimate the posterior distributions of the parameters and thereby obtain the results of the network meta-analysis. Two goals are important in this process:
The first iterations of the MCMC simulation should not have an undue influence on the overall result, because early iterations may yield inadequate values (burn-in).
The Markov chains should run long enough for us to obtain accurate estimates of the model parameters (i.e., they should converge).
mcmc1 <- mtc.run(model, n.adapt = 50, n.iter = 1000, thin = 10) # deliberately short pilot run
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 182
## Unobserved stochastic nodes: 190
## Total graph size: 2606
##
## Initializing model
## Warning in rjags::jags.model(file.model, data = syntax[["data"]], inits =
## syntax[["inits"]], : Adaptation incomplete
## NOTE: Stopping adaptation
mcmc2 <- mtc.run(model, n.adapt = 5000, n.iter = 1e5, thin = 10)
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 182
## Unobserved stochastic nodes: 190
## Total graph size: 2606
##
## Initializing model
plot(mcmc1)
plot(mcmc2)
# Gelman-Rubin plot. This plot shows the so-called Potential Scale Reduction Factor (PSRF)
gelman.plot(mcmc1)
gelman.plot(mcmc2)
gelman.diag(mcmc1)$mpsrf
## [1] 1.037061
gelman.diag(mcmc2)$mpsrf
## [1] 1.000357
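The PSRF compares the between-chain variance B with the mean within-chain variance W; values close to 1 indicate convergence (here mcmc2 clearly outperforms mcmc1). A bare-bones sketch of the computation for a single parameter, simplified relative to what `gelman.diag` actually computes (the toy chains are made up):

```python
def psrf(chains):
    """Simplified potential scale reduction factor (Gelman-Rubin R-hat)
    for one parameter, from m chains of equal length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # Between-chain variance B and mean within-chain variance W
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n  # pooled variance estimate
    return (var_hat / W) ** 0.5

# Two short, well-mixed toy chains give an R-hat near 1
chain1 = [0.1, -0.2, 0.05, 0.15, -0.1, 0.0]
chain2 = [0.0, 0.1, -0.15, 0.05, 0.2, -0.05]
print(round(psrf([chain1, chain2]), 3))
```

If the chains had drifted apart (B large relative to W), the ratio would exceed 1 noticeably, as in the Gelman-Rubin plots for mcmc1.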
rank <- rank.probability(mcmc2, preferredDirection = -1)
plot(rank, beside=TRUE)
forest(relative.effect(mcmc2, t1 = "cau"),
use.description = TRUE, # Use long treatment names
xlim = c(-1.5, 0.5))
library(dmetar)
rank.probability <- rank.probability(mcmc2)
sucra <- dmetar::sucra(rank.probability, lower.is.better = TRUE)
sucra
## SUCRA
## ind 0.9221167
## tel 0.8515375
## gsh 0.6460000
## grp 0.5774417
## ush 0.3219333
## cau 0.1804208
## wlc 0.0005500
plot(sucra)
results <- relative.effect.table(mcmc2)
save(results, file = "results.rda") # save() writes R data, not CSV
Surface Under the Cumulative Ranking (SUCRA) score:
\[\text{SUCRA}_j = \frac{\sum_{b=1}^{a-1}\text{cum}_{jb}}{a-1}\]
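With \(a\) treatments, the score averages the cumulative rank probabilities \(\text{cum}_{jb}\) over the first \(a-1\) ranks, so a treatment that is certainly best scores 1 and one that is certainly worst scores 0. A direct transcription of the formula (a Python sketch):

```python
def sucra(rank_probs):
    """SUCRA score from one treatment's rank probabilities
    (rank_probs[b] = probability of holding rank b+1, best rank first):
    the mean of the cumulative probabilities over the first a-1 ranks."""
    a = len(rank_probs)
    cum, total = 0.0, 0.0
    for p in rank_probs[:a - 1]:
        cum += p
        total += cum
    return total / (a - 1)

# A treatment that is certainly best scores 1; certainly worst scores 0
print(sucra([1, 0, 0, 0]))  # 1.0
print(sucra([0, 0, 0, 1]))  # 0.0
print(round(sucra([0.5, 0.3, 0.1, 0.1]), 3))
```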
TherapyFormatsGeMTC$study.info
## study rob
## 1 Campbell, 2000 1
## 2 Reynolds, 1989 1
## 3 Carpenter, 1994 0
## 4 Shrednik, 2000 1
## 5 Lesley, 1999 1
## 6 Wyatt, 2003 0
## 7 Lemon, 2005 1
## 8 James, 2012 0
## 9 Graesch, 2000 0
## 10 Lange, 1993 1
## 11 Marinkovich, 2003 1
## 12 Wuertz, 2016 1
## 13 Hughey, 2012 1
## 14 Dobbs, 2000 1
## 15 Quick, 2006 1
## 16 Harbert, 1999 1
## 17 Nelson, 1987 1
## 18 Watkins Jr, 2004 1
## 19 Ristau, 1992 1
## 20 Schlegel, 1989 1
## 21 Wierenga, 2004 1
## 22 Laird, 2008 1
## 23 Riley, 2002 1
## 24 Germuska, 1996 0
## 25 Schulker, 2009 1
## 26 Barrett, 2005 1
## 27 Neil, 2015 1
## 28 Congour, 1996 1
## 29 Hurla, 1985 1
## 30 Hanrahan, 1999 1
## 31 Koslosky, 1991 1
## 32 Runnels, 2004 1
## 33 Orman, 2012 0
## 34 Dewitt, 1995 1
## 35 Fair, 2003 0
## 36 Schlaver, 2004 1
## 37 King, 1990 1
## 38 Snider, 1998 1
## 39 Craft, 2014 1
## 40 Ceurvorst, 2011 1
## 41 Roberts, 1996 1
## 42 Carrell, 2008 0
## 43 Muhlenbruck, 1998 1
## 44 Kramer, 2007 1
## 45 Barker, 1997 0
## 46 Johnson, 1989 1
## 47 Shorb, 1998 0
## 48 Scholer, 2017 1
## 49 Maurath, 1994 0
## 50 Leach, 2014 1
## 51 Roberts, 1997 1
## 52 De Venecia, 2011 1
## 53 Shea, 2011 1
## 54 Gilleland, 1989 1
## 55 Murray, 1996 1
## 56 Menzies, 2003 0
## 57 Chambers, 2010 1
## 58 Mickey, 1989 1
## 59 Hiatt, 1985 1
## 60 Breiman, 2001 1
## 61 Parker, 2002 1
## 62 Erickson, 1998 1
## 63 Ancmon, 2001 0
## 64 Fladland, 1997 1
## 65 Crane, 1984 1
## 66 Christenson, 2018 1
## 67 Plewe, 1991 1
## 68 Fergerson, 2014 1
## 69 Frenzel, 2006 0
## 70 Walker, 1990 1
## 71 Crable, 1986 1
## 72 Gilly, 2014 1
## 73 Austin, 1988 1
## 74 Whitaker, 2017 1
## 75 Breiman, 2001 1
## 76 Brown, 1985 1
## 77 Dahl, 2002 0
## 78 Clear, 1989 1
## 79 Oberholser, 2008 1
## 80 Debenedictis, 1993 0
## 81 Martin, 2017 1
## 82 Hassan, 2011 1
## 83 Knickerbocker, 1987 1
## 84 Cook, 2013 0
## 85 Ciccone, 1997 0
## 86 Summerhays, 1987 0
## 87 Silva, 1986 1
## 88 Thompson, 2003 0
## 89 Pauly, 2003 0
## 90 Schwartz, 2012 0
## 91 Englund, 2017 0
## 92 Horton, 1985 0
## 93 Scott, 1993 0
## 94 Staley, 1992 1
## 95 Allen, 2003 0
## 96 Vahlenkamp, 2006 1
## 97 Blevins, 2003 0
## 98 Huebner, 1997 0
## 99 Mcmanus, 1987 0
## 100 Fotenos, 1994 0
## 101 Delaney, 1995 0
## 102 Fay, 2002 0
## 103 Mai, 2003 0
## 104 Nelson, 2006 0
## 105 Reeve, 2013 0
## 106 Eisenberg, 1995 0
## 107 Woodard, 2013 0
## 108 Roberts, 2017 0
## 109 Jeffers, 1996 0
## 110 Rusciolelli, 2000 0
## 111 Miller, 1989 0
## 112 Ayers, 2008 0
## 113 Wagner, 1994 0
## 114 Nguyen, 1996 1
## 115 Weimer, 1999 0
## 116 Isles, 2003 0
## 117 Ralph, 1989 0
## 118 Brown, 2017 0
## 119 Burges, 1993 0
## 120 Williams, 1989 0
## 121 Pylkka, 2008 0
## 122 Kilgore, 2003 1
## 123 Tenbarge, 2000 1
## 124 Carvey, 2008 0
## 125 Mcmahon, 1998 0
## 126 Metzger, 1988 0
## 127 Warwick, 1986 0
## 128 Lopez, 2011 0
## 129 Burgan, 2012 0
## 130 Gruber, 2013 0
## 131 Meraz, 2014 0
## 132 Ono, 1985 0
## 133 Gleave, 2004 0
## 134 Whiting, 1990 0
## 135 Ebberts, 1992 0
## 136 Thompson, 1998 0
## 137 Jones, 2013 0
## 138 Kurth, 1992 0
## 139 Maranto, 1997 0
## 140 Sheppard, 1994 0
## 141 Belk, 1986 0
## 142 Harder, 2005 0
## 143 Hopkins, 2003 0
## 144 Gilstrap, 2015 1
## 145 Quinonez Dominguez, 1993 1
## 146 Hetherington, 2003 0
## 147 Bengston, 2004 0
## 148 Joy, 2002 0
## 149 Robinson, 2015 0
## 150 Amsberry, 2010 0
## 151 Thiede, 2011 1
## 152 Bonertz, 2015 1
## 153 Sheehan, 2013 0
## 154 White, 2006 0
## 155 Lucero, 2001 1
## 156 Edwards, 2010 0
## 157 Godinez, 2005 0
## 158 Hall, 1997 0
## 159 Underwood, 2008 0
## 160 Ledbetter, 1984 0
## 161 Hayko, 1999 0
## 162 Chamberlain, 2011 0
## 163 Fourroux, 1999 0
## 164 Leonard, 2010 0
## 165 Narum, 1986 1
## 166 Villalobos-Valles, 2006 0
## 167 Duba, 1990 0
## 168 Ausbun, 1997 0
## 169 Durgan, 1998 0
## 170 Aaron, 2004 0
## 171 Heath, 1998 0
## 172 Vaclav, 1984 0
## 173 Rodgers, 1993 0
## 174 Watson, 1997 0
## 175 Tamburri, 2001 0
## 176 Homrighausen, 2002 0
## 177 Buchholtz, 2010 0
## 178 Cowan, 2010 1
## 179 Bond, 1988 0
## 180 Edgar, 2015 0
## 181 Sherratt, 2000 0
## 182 Santistevan-Gettel, 2012 0
## 183 Graham, 1986 0
# Set up the network, including the study-level covariate information
network.mr <- mtc.network(data.re = TherapyFormatsGeMTC$data,
                          studies = TherapyFormatsGeMTC$study.info,
                          treatments = TherapyFormatsGeMTC$treat.codes)

# Define the regressor: one shared coefficient for the "rob" covariate,
# with "cau" (care as usual) as the control condition
regressor <- list(coefficient = "shared",
                  variable = "rob",
                  control = "cau")
# Build the network meta-regression model
model.mr <- mtc.model(network.mr,
                      likelihood = "normal",
                      link = "identity",
                      type = "regression",
                      regressor = regressor)

# Run the MCMC sampling (5000 adaptation steps, 100,000 iterations, thinning = 10)
mcmc3 <- mtc.run(model.mr,
                 n.adapt = 5000,
                 n.iter = 1e5,
                 thin = 10)
## Compiling model graph
## Resolving undeclared variables
## Allocating nodes
## Graph information:
## Observed stochastic nodes: 182
## Unobserved stochastic nodes: 191
## Total graph size: 2980
##
## Initializing model
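The retained posterior sample size follows directly from the settings passed to `mtc.run` above. A small base-R sanity check (the chain count of 4 is taken from the summary output; the rest is hand arithmetic, not gemtc output):

```r
# Sampler settings from the mtc.run() call above
n_iter   <- 1e5   # iterations per chain (after 5000 adaptation steps)
thin     <- 10    # keep every 10th draw
n_chains <- 4     # number of chains, as reported in the summary

samples_per_chain <- n_iter / thin             # 10000 retained draws per chain
total_samples <- samples_per_chain * n_chains  # 40000 posterior draws in total
```

The 10,000 draws per chain match the "Sample size per chain" reported in the summary.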
summary(mcmc3)
##
## Results on the Mean Difference scale
##
## Iterations = 5010:105000
## Thinning interval = 10
## Number of chains = 4
## Sample size per chain = 10000
##
## 1. Empirical mean and standard deviation for each variable,
## plus standard error of the mean:
##
## Mean SD Naive SE Time-series SE
## d.ind.cau 0.69927 0.07950 0.0003975 0.0004137
## d.ind.grp 0.19270 0.10006 0.0005003 0.0005323
## d.ind.gsh 0.16138 0.10559 0.0005280 0.0005542
## d.ind.tel 0.02281 0.16332 0.0008166 0.0008233
## d.ind.wlc 0.95112 0.09244 0.0004622 0.0005044
## d.wlc.ush -0.45540 0.11592 0.0005796 0.0005796
## sd.d 0.47017 0.03399 0.0001700 0.0001711
## B -0.33070 0.13045 0.0006522 0.0010454
##
## 2. Quantiles for each variable:
##
## 2.5% 25% 50% 75% 97.5%
## d.ind.cau 0.542991 0.64571 0.69974 0.7527 0.85438
## d.ind.grp -0.001792 0.12568 0.19236 0.2596 0.38900
## d.ind.gsh -0.047034 0.09108 0.16146 0.2324 0.36786
## d.ind.tel -0.299884 -0.08751 0.02415 0.1322 0.34312
## d.ind.wlc 0.769002 0.88909 0.95103 1.0140 1.13201
## d.wlc.ush -0.682570 -0.53312 -0.45506 -0.3778 -0.22928
## sd.d 0.407688 0.44691 0.46875 0.4919 0.54110
## B -0.585819 -0.41896 -0.33124 -0.2426 -0.07606
##
## -- Model fit (residual deviance):
##
## Dbar pD DIC
## 185.7100 75.0097 260.7197
##
## 183 data points, ratio 1.015, I^2 = 2%
##
## -- Regression settings:
##
## Regression on "rob", shared coefficients, "cau" as control
## Input standardized: x' = (rob - 0.4340659) / 1
## Estimates at the centering value: rob = 0.4340659
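All estimates in the summary are given at the centering value rob = 0.4340659. With a shared coefficient B, the effect at another covariate value x is shifted by B·(x − 0.4340659); this is the adjustment that `relative.effect(..., covariate = ...)` performs internally. A hand computation with the posterior means reported above, purely to illustrate the arithmetic (whether the shift applies to a given `d.X.Y` contrast depends on how it relates to the control treatment "cau"):

```r
# Posterior means copied from the summary above
d_centered <- 0.69927     # d.ind.cau at the centering value
B          <- -0.33070    # shared regression coefficient for rob
rob_center <- 0.4340659   # centering value reported by gemtc

# Shifted estimates at high and low risk of bias
d_high <- d_centered + B * (1 - rob_center)   # at rob = 1, approx. 0.512
d_low  <- d_centered + B * (0 - rob_center)   # at rob = 0, approx. 0.843
```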
# Forest plot of effects versus care as usual at rob = 1 (high risk of bias)
forest(relative.effect(mcmc3, t1 = "cau", covariate = 1),
       use.description = TRUE, xlim = c(-1.5, 1))
title("High Risk of Bias")

# Forest plot of effects versus care as usual at rob = 0 (low risk of bias)
forest(relative.effect(mcmc3, t1 = "cau", covariate = 0),
       use.description = TRUE, xlim = c(-1.5, 1))
title("Low Risk of Bias")
# Compare both models via the deviance information criterion (DIC)
summary(mcmc3)$DIC
## Dbar pD DIC data points
## 185.7100 75.0097 260.7197 183.0000
summary(mcmc2)$DIC
## Dbar pD DIC data points
## 185.4762 137.8364 323.3126 183.0000
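A lower DIC indicates better fit after penalizing model complexity, so the meta-regression model is clearly favored here. The size of the advantage can be read off the two summaries (hand computation from the values above):

```r
dic_regression <- 260.7197   # DIC of the meta-regression model (mcmc3)
dic_standard   <- 323.3126   # DIC of the model without the covariate (mcmc2)
dic_standard - dic_regression  # about 62.6 points in favor of mcmc3
```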