Faculty of
Health Sciences
Capacity Enhancement Project (CEP)

Critical Appraisal



Centre for Evidence-Based Medicine (CEBM):

The CEBM was established with the aim of promoting evidence-based health care. It provides free support and resources to doctors, clinicians, teachers and others interested in learning more about evidence-based medicine.


Agency for Healthcare Research and Quality (AHRQ):

The AHRQ is a federal agency in the United States that works with the public and private sectors to build the knowledge base for what works, and what does not, in health care. It then translates this knowledge into everyday practice and policy making.


The National Health and Medical Research Council (NHMRC):

The NHMRC has a publication entitled How to review the evidence: systematic identification and review of the scientific literature, which describes how to appraise and select studies in two stages. It first presents overall literature search and assessment methods, and then describes methods specific to the type of research question (e.g., questions about an intervention, frequency and rate, or a diagnostic test).


The Equator Network:

The Equator Network is an international initiative that seeks to enhance the reliability of the research literature by promoting transparent and accurate reporting of research studies. Its website hosts a library of reporting guidelines for health research, listed by study type.



Tools for Assessing the Quality and Reporting of Research

Practice Guidelines

Assessing Quality and Reporting:

Appraisal of Guidelines for Research and Evaluation: AGREE II Instrument

Systematic Reviews and Meta-Analyses

Assessing Quality:

AMSTAR (Assessment of Multiple Systematic Reviews)


Systematic Review Critical Appraisal Sheet from the CEBM:


Scottish Intercollegiate Guidelines Network (SIGN): Critical Appraisal Notes and Checklist 1: Systematic Reviews and Meta-analyses.

Assessing Reporting:

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses)

  • The PRISMA statement is an update of the QUOROM guidelines for reporting systematic reviews and meta-analyses. It consists of a 27-item checklist and a four-phase flow diagram illustrating the flow of information through the different phases of a systematic review.
    • Moher et al. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ 2009;339:332-6.

MOOSE (Meta-analysis Of Observational Studies in Epidemiology)

  • It is common practice to pool the results of observational studies. However, there is a greater risk of confounding and bias with observational studies, and therefore results from these studies should be interpreted with caution. The MOOSE group provides a comprehensive checklist for reporting reviews of observational studies to increase the quality of reporting.
    • Stroup et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283:2008-12.

Primary Studies

Assessing Quality:

All Types of Studies:

CATmaker

  • CATmaker is a computer-assisted critical appraisal tool that helps you create Critically Appraised Topics (CATs) for the key articles you encounter about Therapy, Diagnosis, Prognosis, Aetiology/Harm and Systematic Reviews of Therapy.

GRADE (Grading of Recommendations Assessment, Development and Evaluation)

  • The GRADE approach is a system for grading the quality of evidence and the strength of recommendations that can be applied across a wide range of interventions and contexts. It grades the quality of evidence for each important outcome, weighing considerations of study design and study quality. In addition, it takes into account values and preferences and considers the trade-offs between benefits and harms.

Cochrane Collaboration's Risk of Bias Tool

  • The Cochrane Collaboration has developed a tool to assess the risk of bias in primary studies. The tool comprises six domains: sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting and other issues. Further information about the Risk of Bias tool is available in the Cochrane Handbook for Systematic Reviews of Interventions Version 5.

Randomized Controlled Trials (RCTs)

Jadad Scale

  • The Jadad Scale is a five-point scale designed to independently assess the methodological quality of a clinical trial.
    • Jadad et al. Assessing the quality of reports of randomized controlled trials: is blinding necessary? Control Clin Trials. 1996;17:1-12.

The Delphi List

  • The Delphi list is a set of generic core items for the quality assessment of RCTs and represents a starting point toward a minimum reference standard for RCTs across many different research topics.
    • Verhagen et al. The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol. 1998;51:1235-41.



Observational Studies



Diagnostic Studies



Assessing Reporting:

CONSORT (Consolidated Standards of Reporting Trials) 2010


STROBE (Strengthening the Reporting of Observational Studies in Epidemiology)

  • The STROBE Statement is a 22-item checklist for observational research (cohort studies, case-control studies and cross-sectional studies). It urges reporting of what was planned and what was not, how the research was conducted, what was found and what the results mean.
    • von Elm et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) Statement: guidelines for reporting observational studies. Ann Intern Med. 2007; 147:573-577.


CORE-14 (Completeness of Reporting-14)

  • The CORE-14 instrument is a 14-item checklist designed to assess the completeness of reporting for conference abstracts of observational studies.
    • Kho et al. The Completeness of Reporting (CORE) index identifies important deficiencies in observational study conference abstracts. J Clin Epidemiol. 2008;61:1241-9.

Related Presentations:

Several presentations related to critical appraisal were made at the Capacity Enhancement Project (CEP)'s Practice Guideline Winter Institute in March 2010.

Related Articles:

Juni et al. Assessing the quality of randomized trials. In: Egger M, Smith G, Altman D, editors. Systematic Reviews in Health Care: Meta-analysis in Context. Oxford: Wiley-Blackwell; 2008.

Brouwers et al. Evaluating the role of quality assessment of primary studies in systematic reviews of cancer practice guidelines. BMC Med Res Methodol. 2005;5:8.

Guyatt et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336:924-6.

Brouwers et al. A for effort: Learning from the application of the GRADE approach to cancer guideline development. J Clin Oncol. 2008;26:1025-6.

Fervers et al. Predictors of high quality clinical practice guidelines: examples in oncology. Int J Qual Health Care. 2005;17(2):123-32.

Krzyzanowska et al. Quality of abstracts describing randomized trials in the proceedings of American Society of Clinical Oncology Meetings: Guidelines for Improved Reporting. J Clin Oncol 2004;22:1993-9.

Burns et al. Abstract reporting in randomized clinical trials of acute lung injury: An audit and assessment of a quality of reporting score. Crit Care Med 2005;33(9):1937-45.