Browsing by Author "Naaktgeboren, Christiana A."
- Assessing variability in results in systematic reviews of diagnostic studies (BioMed Central, 2016)
  Naaktgeboren, Christiana A.; Ochodo, Eleanor A.; Van Enst, Wynanda A.; De Groot, Joris A. H.; Hooft, Lotty; Leeflang, Mariska M. G.; Bossuyt, Patrick M.; Moons, Karel G. M.; Reitsma, Johannes B.
  Background: To describe approaches used in systematic reviews of diagnostic test accuracy studies for assessing variability in estimates of accuracy between studies, and to provide guidance in this area.
  Methods: Meta-analyses of diagnostic test accuracy studies published between May and September 2012 were systematically identified. Information on how the variability in results was investigated was extracted.
  Results: Of the 53 meta-analyses included in the review, most (n=48; 91%) presented variability in diagnostic accuracy estimates visually, through forest plots or ROC plots, and the majority (n=40; 75%) presented a test or statistical measure for the variability. Twenty-eight reviews (53%) tested for variability beyond chance using Cochran’s Q test, and 31 reviews (58%) quantified it with I². Seven reviews (13%) presented between-study variance estimates (τ²) from random-effects models, and 3 of these presented a prediction interval or ellipse to facilitate interpretation. Half of all the meta-analyses (n=24; 49%) specified what was considered a significant amount of variability.
  Conclusions: Approaches to assessing variability in estimates of accuracy varied widely between diagnostic test accuracy reviews, and there is room for improvement. We provide initial guidance, complemented by an overview of the currently available approaches.
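The heterogeneity statistics named in this abstract (Cochran’s Q, I², and τ²) all derive from the same inverse-variance weights. A minimal sketch of how they are typically computed, in Python with hypothetical study data (the function name and the example log-DOR values are illustrative, not taken from the review):

```python
import numpy as np

def heterogeneity(y, v):
    """Cochran's Q, I^2 (%), and the DerSimonian-Laird tau^2 for
    study-level effect estimates y with within-study variances v."""
    w = 1.0 / v                       # inverse-variance weights
    mu = np.sum(w * y) / np.sum(w)    # fixed-effect pooled estimate
    q = np.sum(w * (y - mu) ** 2)     # Cochran's Q statistic
    df = len(y) - 1
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0  # variability beyond chance
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)           # scaling factor for tau^2
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    return q, i2, tau2

# Hypothetical log diagnostic odds ratios from four studies
log_dor = np.array([2.1, 1.4, 2.8, 1.9])
var = np.array([0.20, 0.10, 0.35, 0.15])
print(heterogeneity(log_dor, var))
```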
- Incorporating quality assessments of primary studies in the conclusions of diagnostic accuracy reviews: a cross-sectional study (BioMed Central, 2014-03-03)
  Ochodo, Eleanor A.; Van Enst, Wynanda A.; Naaktgeboren, Christiana A.; De Groot, Joris A. H.; Hooft, Lotty; Moons, Karel G. M.; Reitsma, Johannes B.; Bossuyt, Patrick M.; Leeflang, Mariska M. G.
  Background: Drawing conclusions from systematic reviews of test accuracy studies without considering the methodological quality (risk of bias) of the included studies may lead to unwarranted optimism about the value of the test(s) under study. We sought to identify to what extent the results of quality assessment of included studies are incorporated in the conclusions of diagnostic accuracy reviews.
  Methods: We searched MEDLINE and EMBASE for test accuracy reviews published between May and September 2012. We examined the abstracts and main texts of these reviews to see whether and how the results of quality assessment were linked to the accuracy estimates when drawing conclusions.
  Results: We included 65 reviews, of which 53 contained a meta-analysis. Sixty articles (92%) had formally assessed the methodological quality of included studies, most often using the original QUADAS tool (n=44; 68%). Quality assessment was mentioned in 28 abstracts (43%), the majority (n=21) mentioning it in the methods section. In only 5 abstracts (8%) were the results of quality assessment incorporated in the conclusions. Thirteen reviews (20%) presented results of quality assessment in the main text only, without further discussion. Forty-seven reviews (72%) discussed results of quality assessment, most frequently as limitations in assessing quality (n=28). Only 6 reviews (9%) further linked the results of quality assessment to their conclusions, 3 of which did not conduct a meta-analysis because of limitations in the quality of the included studies. Of the reviews with a meta-analysis, 19 (36%) incorporated quality in the analysis. Eight reported significant effects of quality on the pooled estimates; in none of them were these effects factored into the conclusions.
  Conclusion: While almost all recent diagnostic accuracy reviews evaluate the quality of included studies, very few consider the results of quality assessment when drawing conclusions. The reporting of systematic reviews of test accuracy should improve if readers are to be informed not only about the limitations of the available evidence, but also about the associated implications for the performance of the evaluated tests.
- Small-study effects and time trends in diagnostic test accuracy meta-analyses: a meta-epidemiological study (BioMed Central, 2015-05-09)
  Van Enst, Wynanda Annefloor; Naaktgeboren, Christiana A.; Ochodo, Eleanor A.; De Groot, Joris A. H.; Leeflang, Mariska M.; Reitsma, Johannes B.; Scholten, Rob J. P. M.; Moons, Karel G. M.; Zwinderman, Aeilko H.; Bossuyt, Patrick M. M.; Hooft, Lotty
  Background: Small-study effects and time trends have been identified in meta-analyses of randomized trials. We evaluated whether these effects are also present in meta-analyses of diagnostic test accuracy studies.
  Methods: A systematic search identified test accuracy meta-analyses published between May and September 2012. In each meta-analysis, the strength of the association between estimated accuracy of the test (diagnostic odds ratio (DOR), sensitivity, and specificity) and sample size, and between accuracy estimates and time since first publication, was evaluated using meta-regression models. The regression coefficients over all meta-analyses were summarized using random-effects meta-analysis.
  Results: Forty-six meta-analyses and their corresponding primary studies (N=859) were included. There was a non-significant relative change in the DOR of 1.01 per 100 additional participants (95% CI 1.00 to 1.03; P=0.07). In the subgroup of imaging studies, there was a relative increase in sensitivity of 1.13 per 100 additional diseased subjects (95% CI 1.05 to 1.22; P=0.002). The relative change in DOR with time since first publication was 0.94 per 5 years (95% CI 0.80 to 1.10; P=0.42). Sensitivity was lower in studies published later (relative change 0.89, 95% CI 0.80 to 0.99; P=0.04).
  Conclusions: Small-study effects and time trends do not seem to be as pronounced in meta-analyses of test accuracy studies as they are in meta-analyses of randomized trials. Small-study effects seem to be reversed in imaging, where larger studies tend to report higher sensitivity.
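As a rough illustration of the within-review meta-regression this abstract describes, the sketch below fits a weighted least-squares model of log DOR on sample size. The data, function name, and covariate scaling (hundreds of participants) are hypothetical, and the paper's full approach additionally pools the resulting slopes across meta-analyses with a random-effects model, which is not shown here:

```python
import numpy as np

def meta_regression(y, v, x):
    """Weighted least-squares meta-regression of effect estimates y
    (with within-study variances v) on a study-level covariate x."""
    w = 1.0 / v
    X = np.column_stack([np.ones_like(x), x])        # intercept + covariate
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta  # beta[1] is the slope: change in y per unit of x

# Hypothetical studies: log DOR regressed on sample size in hundreds
log_dor = np.array([2.3, 2.0, 1.8, 1.7, 1.6])
var = np.array([0.30, 0.20, 0.12, 0.08, 0.05])
n_hundreds = np.array([0.5, 1.0, 2.0, 3.5, 5.0])
slope = meta_regression(log_dor, var, n_hundreds)[1]
# Exponentiating the slope gives the relative change in DOR
# per 100 additional participants, the quantity reported above.
print(f"relative change in DOR per 100 participants: {np.exp(slope):.3f}")
```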