Evidence Based Practice

APPRAISE: Systematic Reviews

Consider the following factors when evaluating systematic reviews.

Key issues for Systematic Reviews:

  • a focused clinical question
  • a thorough literature search
  • inclusion of methodologically valid studies
  • reproducible selection of studies

Print and save the Systematic Review Worksheet.


Validity

  • Did the review explicitly address a sensible question?
    The systematic review should address a specific question that identifies the patient problem, the exposure or intervention, and one or more outcomes. General reviews, which usually do not address specific questions, may be too broad to answer the clinical question you are investigating.
     
  • Was the search for relevant studies detailed and exhaustive?
    Researchers should conduct a thorough search of appropriate bibliographic databases, and the databases and search strategies should be outlined in the methodology section. Researchers should also seek unpublished evidence, for example by contacting experts in the field, and should check the references cited at the end of retrieved articles.
     
  • Were the primary studies of high methodological quality?
    Researchers should evaluate the validity of each study included in the systematic review. The same EBP criteria used to critically appraise individual studies should be applied to each study considered for inclusion. Differences in study results may be explained by differences in methodology and study design.
     
  • Were selection and assessments of the included studies reproducible?
    More than one researcher should evaluate each study and make decisions about its validity and inclusion. Bias (systematic error) and mistakes (random error) are reduced when judgment is shared. A third reviewer should be available to resolve disagreements.
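
The guide does not name a statistic, but agreement between two independent reviewers is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch in Python, using hypothetical include/exclude decisions for ten candidate studies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of decisions on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label]
                   for label in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical inclusion decisions by two reviewers
reviewer_1 = ["include", "include", "exclude", "include", "exclude",
              "exclude", "include", "exclude", "include", "include"]
reviewer_2 = ["include", "include", "exclude", "exclude", "exclude",
              "exclude", "include", "exclude", "include", "exclude"]

kappa = cohens_kappa(reviewer_1, reviewer_2)
```

A kappa near 1 indicates strong agreement; values well below that suggest the selection criteria are too vague for reviewers to apply reproducibly.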

Results

  • Were the results similar from study to study?
    • How similar were the point estimates?
    • Do confidence intervals overlap between studies?
  • Were results weighted both quantitatively and qualitatively in summary estimates?
  • How precise were the results?
  • What is the confidence interval for the summary or cumulative effect size?
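
The questions above about weighting and precision correspond to how a meta-analysis pools individual study results into a summary estimate. A minimal fixed-effect, inverse-variance sketch with hypothetical effect sizes (larger studies, with smaller standard errors, get more weight), assuming a 95% normal-approximation confidence interval:

```python
import math

def pooled_effect(effects, std_errors):
    """Fixed-effect inverse-variance pooling of study effect sizes.

    Returns the pooled estimate and its 95% confidence interval.
    """
    # Each study is weighted by the inverse of its variance
    weights = [1.0 / se ** 2 for se in std_errors]
    total_weight = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_weight
    pooled_se = math.sqrt(1.0 / total_weight)
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci

# Hypothetical log risk ratios and standard errors from three studies
effects = [-0.30, -0.15, -0.25]
std_errors = [0.10, 0.12, 0.08]

estimate, (ci_low, ci_high) = pooled_effect(effects, std_errors)
```

The pooled interval is narrower than any single study's, which is the sense in which combining similar studies improves precision; if the individual point estimates diverged widely or their intervals failed to overlap, a single summary estimate would be harder to justify.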

Applying Results to Patient Care

  • Were all patient-important outcomes considered? Did the review omit outcomes that could change decisions?
  • Are any postulated subgroup effects credible?
    • Were subgroup differences postulated before data analysis?
    • Were subgroup differences consistent across studies?
  • What is the overall quality of the evidence? Were prevailing study design, size, and conduct reflected in a summary of the quality of evidence?
  • Are the benefits worth the costs and potential risks? Does the cumulative effect size cross a test or therapeutic threshold?

Source: Guyatt G, Rennie D, Meade MO, Cook DJ. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. 2nd ed. 2008.


Knowledge Check

Question 1) A researcher conducts a systematic review but only searches one database and includes only published studies. Which key principle of validity is most likely violated?

A) The review did not include studies of high methodological quality.
B) The selection of studies was not reproducible because only one reviewer made decisions.
C) The search for relevant studies was not detailed and exhaustive, increasing the risk of publication bias.


Question 2) In a systematic review, two independent reviewers assess which studies to include, and a third reviewer resolves disagreements. Why is this process important for the validity of the review?

A) It helps ensure that the inclusion process is objective and reproducible, reducing the chance of bias from individual judgment.
B) It increases the total number of studies included, improving statistical power.
C) It ensures that only studies with positive results are selected, strengthening the review’s conclusions.