In our Cochrane Review,3 we assessed the same set of trials. However, only 4 of the 15 trials included in Bryant’s meta-analysis on mortality met our predefined eligibility criteria, and our conclusion, which incorporates careful grading of the certainty of evidence, reveals a less rosy picture. The bottom line is that it remains seriously uncertain whether ivermectin, compared with placebo or standard of care, reduces or increases mortality in moderately ill hospitalised patients (RR 0.60, 95% CI 0.14 to 2.51; two studies) and mildly ill outpatients (RR 0.33, 95% CI 0.01 to 8.05; two studies), owing to serious risk of bias and imprecision.

How do the different assessments come about? The answer lies partly in the baseline data of the included studies. Bryant et al pooled heterogeneous patient populations, interventions, comparators and outcomes. In other words, they compared apples and oranges, serving up a large bowl of colourful fruit salad. Usually, pooling heterogeneous studies increases the imprecision of effect estimates in meta-analyses. Why does this not apply to ivermectin? The alleged effect is driven by studies with extremely large effect sizes, which have also influenced the conclusions of other reviews. One of these studies reporting a huge effect has now been retracted over ethical concerns.4
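To illustrate how a single trial with an extremely large effect can dominate a pooled estimate, the following is a minimal sketch of fixed-effect inverse-variance pooling of log risk ratios. All trial names, effect sizes and standard errors are purely hypothetical and chosen for illustration; they are not data from any of the trials discussed here.

```python
import math

# Hypothetical trials: (label, log risk ratio, standard error of the log RR).
# These numbers are invented for illustration only.
trials = [
    ("Trial A", math.log(0.95), 0.40),
    ("Trial B", math.log(1.05), 0.45),
    ("Trial C (extreme effect)", math.log(0.10), 0.35),
]

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of log risk ratios."""
    weights = [1 / se**2 for _, _, se in studies]
    log_rr = sum(w * lr for w, (_, lr, _) in zip(weights, studies)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (math.exp(log_rr - 1.96 * se_pooled), math.exp(log_rr + 1.96 * se_pooled))
    return math.exp(log_rr), ci

rr_all, ci_all = pooled_rr(trials)
rr_wo, ci_wo = pooled_rr(trials[:2])  # exclude the extreme-effect trial
print(f"With extreme trial:    RR {rr_all:.2f} (95% CI {ci_all[0]:.2f} to {ci_all[1]:.2f})")
print(f"Without extreme trial: RR {rr_wo:.2f} (95% CI {ci_wo[0]:.2f} to {ci_wo[1]:.2f})")
```

With these invented inputs, the pooled risk ratio sits near 1.0 when only the two unremarkable trials are combined, but falls to roughly 0.4 once the single extreme-effect trial is added, showing how one outlying study can drive an apparently precise pooled benefit.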
Evidence syntheses must be of the highest trustworthiness. However, their reliability is at risk when researchers publish problematic trials or misuse established evidence assessment tools as a veneer of quality for evidence synthesis, in general but especially during a pandemic, thereby creating pseudotrustworthiness for substances that, at this stage, cannot be considered effective and safe treatment options, let alone game changers.