Appraise Study Quality

Overview

Quality appraisal is the systematic assessment of the methodological rigour of each study included after screening. It answers the question: how much confidence can we place in the findings of this study? Appraisal does not judge whether a study is interesting or relevant (screening already established relevance); it judges whether the study was conducted in a way that makes its findings trustworthy.

Quality appraisal is mandatory in a rigorous SLR. Omitting it means you treat a poorly designed survey and a well-designed longitudinal study as equally credible evidence, which undermines the validity of your synthesis.


What Quality Appraisal Assesses

Appraisal criteria vary by study type, but the core questions are consistent across tools:

  • Is the research question or aim clearly stated?
  • Is the methodology appropriate to the research question?
  • Is the sample or data source described and justified?
  • Are data collection procedures transparent and consistent?
  • Are the analysis methods appropriate and described in sufficient detail?
  • Are the findings clearly presented and supported by the data?
  • Are the limitations acknowledged by the authors?

No study is perfect. The goal of appraisal is not to exclude everything with weaknesses but to provide an honest account of the evidence base and to weight your synthesis accordingly.


Decide Before You Appraise

Two decisions must be made in your protocol and applied consistently throughout appraisal:

Will quality scores affect inclusion?

You have two options:

  • Threshold-based exclusion: studies scoring below a defined threshold are excluded from the review. This produces a higher-quality evidence base but risks excluding the only available evidence on niche topics. If you use a threshold, state it in your protocol before appraising (e.g., "studies scoring below 50% on the CASP checklist will be excluded"). A minimal scoring sketch follows this list.
  • Retain all, note quality in synthesis: all studies passing screening are included, but quality scores are reported alongside findings and used to qualify the strength of evidence in your discussion. This is the more common approach in business and management SLRs, where evidence bases are often smaller and more heterogeneous.

Either approach is defensible; what is not defensible is deciding after seeing the scores.
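
To make the threshold option concrete, here is a minimal scoring sketch in Python. Note that CASP does not itself prescribe a numerical scoring scheme, so the convention below (yes = 1, no or can't tell = 0) is an assumption you would need to define in your protocol, not a CASP recommendation.

```python
# Hypothetical threshold-based exclusion. CASP does not define a numerical
# score, so treating "yes" as 1 and "no"/"can't tell" as 0 is an assumption
# that must be stated in your protocol before appraisal begins.

THRESHOLD = 0.5  # exclude studies answering "yes" on fewer than 50% of criteria

def casp_score(responses: list[str]) -> float:
    """Proportion of checklist criteria answered 'yes'."""
    return sum(r == "yes" for r in responses) / len(responses)

# Ten responses, one per question on a CASP-style checklist (illustrative).
study = ["yes", "yes", "can't tell", "no", "yes",
         "yes", "yes", "no", "yes", "yes"]

score = casp_score(study)
decision = "include" if score >= THRESHOLD else "exclude"
print(f"Score: {score:.0%} -> {decision}")  # Score: 70% -> include
```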

Who will appraise?

As with screening, appraisal by two independent reviewers with a conflict resolution process is the gold standard. For a thesis-level review, solo appraisal is acceptable but should be stated as a limitation in your methods chapter.


Selecting an Appraisal Tool

Choose your tool based on the study types in your included set. All three tools listed below are freely available with no registration required.

CASP Checklists

Access: casp-uk.net (direct PDF download, no registration)

The Critical Appraisal Skills Programme checklists are the most accessible entry point for students new to quality appraisal. Each checklist is short (ten to twelve questions with yes/no/can't tell responses) and includes guidance notes. Separate checklists exist for:

  • Qualitative studies
  • Randomised controlled trials
  • Cohort studies
  • Case-control studies
  • Systematic reviews
  • Economic evaluations
  • Diagnostic test studies

For most business and management SLRs, the qualitative checklist will be the primary tool. If your included studies are methodologically mixed, you will need to apply different checklists to different study types and note which checklist was used for each study in your data extraction form.
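
One way to keep this consistent is to fix the methodology-to-checklist mapping before appraisal begins. A minimal sketch; the methodology labels are assumptions, so adapt them to the study types actually present in your included set:

```python
# Illustrative mapping from study methodology to the appropriate CASP
# checklist; the methodology labels are assumptions, not a fixed taxonomy.
CHECKLIST_FOR = {
    "qualitative": "CASP Qualitative Studies Checklist",
    "rct": "CASP Randomised Controlled Trial Checklist",
    "cohort": "CASP Cohort Study Checklist",
    "case-control": "CASP Case-Control Study Checklist",
}

def checklist_for(methodology: str) -> str:
    """Return the checklist to apply, failing loudly on unmapped types."""
    if methodology not in CHECKLIST_FOR:
        raise ValueError(f"No checklist mapped for methodology: {methodology!r}")
    return CHECKLIST_FOR[methodology]

print(checklist_for("qualitative"))  # CASP Qualitative Studies Checklist
```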

Mixed Methods Appraisal Tool (MMAT)

Access: mcgill.ca (free PDF download from McGill University)

The MMAT is the strongest choice when your included studies span multiple methodological types, since a single tool handles qualitative, quantitative descriptive, quantitative randomised, quantitative non-randomised, and mixed-methods studies consistently. Each category has five criteria, allowing cross-study comparison of quality scores within a heterogeneous dataset.

The MMAT does not produce a numerical score; instead, each criterion is rated yes, no, or can't tell. This is intentional: the authors explicitly caution against summing scores into a single quality number, as this can create false precision.

JBI Critical Appraisal Tools

Access: jbi.global (free PDF download, no registration)

The Joanna Briggs Institute tools are comparable in accessibility to CASP and provide thirteen separate checklists covering a wider range of study types, including prevalence studies, case reports, case series, and qualitative evidence synthesis. They are slightly more detailed than CASP and include more extensive guidance notes.


Conducting the Appraisal

Work through each included study using your chosen tool. For each study, complete the checklist and record:

  • The tool used and checklist type (where multiple types apply)
  • The response for each criterion (yes / no / can't tell)
  • Any notes on specific methodological concerns
  • The overall quality judgement (strong / moderate / weak, or equivalent)

A suggested format for recording appraisal results is a spreadsheet with one row per study and one column per checklist criterion, plus an overall rating column. This makes it easy to sort by quality rating and to identify patterns (for example, if most studies share a common weakness such as lack of reflexivity, this becomes a theme in your discussion).
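
A minimal sketch of this format in Python, using only the standard library; the study IDs, criterion columns, and ratings are hypothetical placeholders:

```python
import csv

# One row per study, one column per checklist criterion, plus notes and an
# overall rating. All study IDs, criterion columns, and ratings below are
# hypothetical placeholders.
FIELDNAMES = ["study_id", "tool", "q1", "q2", "q3", "notes", "overall"]
ROWS = [
    {"study_id": "Study_A", "tool": "CASP-Qualitative",
     "q1": "yes", "q2": "can't tell", "q3": "yes",
     "notes": "no reflexivity statement", "overall": "moderate"},
    {"study_id": "Study_B", "tool": "CASP-Qualitative",
     "q1": "yes", "q2": "yes", "q3": "yes",
     "notes": "", "overall": "strong"},
]

with open("appraisal.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerows(ROWS)

# Sorting by overall rating makes quality patterns easy to scan.
ORDER = {"strong": 0, "moderate": 1, "weak": 2}
for row in sorted(ROWS, key=lambda r: ORDER[r["overall"]]):
    print(row["study_id"], row["overall"])
```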

A Practical Tip

Read the methods section of each paper carefully before completing the checklist; do not rely on the abstract, and be aware that methodological detail sometimes surfaces only in the results section. Insufficient reporting is itself a quality concern, but it is worth distinguishing between a study that did not address a criterion and one that addressed it but failed to report it clearly.


Reporting Quality Appraisal in Your Thesis

Quality appraisal results must be reported transparently in your methods chapter and referenced in your discussion. Standard practice is to:

  1. Name the tool(s) used and cite the source
  2. Present results in a summary table, with one row per included study and columns for each criterion or an overall rating (a tallying sketch follows this list)
  3. Describe the overall quality of the evidence base in narrative: were most studies of moderate quality? Were there systematic weaknesses across studies (e.g., small sample sizes, single-country contexts)?
  4. Reference quality in your discussion: when interpreting conflicting findings, note whether higher-quality studies favour one conclusion over another
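
To support steps 2 and 3, a short tally over your appraisal spreadsheet can surface the distribution of ratings and any criteria most studies failed. A sketch, reusing the hypothetical appraisal.csv format from the recording step above:

```python
import csv
from collections import Counter

# Tally overall ratings and per-criterion shortfalls to inform the narrative
# summary; column names follow the hypothetical appraisal.csv sketched above.
with open("appraisal.csv", newline="") as f:
    rows = list(csv.DictReader(f))

print("Overall ratings:", dict(Counter(row["overall"] for row in rows)))

# Criteria that most studies failed to satisfy point to systematic weaknesses.
criterion_cols = [c for c in rows[0] if c.startswith("q")]
for criterion in criterion_cols:
    misses = sum(row[criterion] != "yes" for row in rows)
    if misses > len(rows) / 2:
        print(f"Systematic weakness on {criterion}: {misses}/{len(rows)} studies")
```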

Avoid the common error of completing a quality appraisal table and then never mentioning it again. The appraisal should inform how confidently you present your conclusions.


Common Mistakes to Avoid

  • Appraising after synthesis. Quality appraisal must precede synthesis; if you synthesise the findings first, your quality judgements will be coloured by whether you liked those findings.
  • Applying the wrong checklist. Using a qualitative checklist on a survey study, or vice versa, produces meaningless results. Identify each study's methodology before selecting the checklist.
  • Rating "can't tell" on everything. If most responses are "can't tell," the issue is usually that you are not reading the methods section carefully enough, or the reporting is genuinely poor (which is itself a quality concern worth noting).
  • Treating quality appraisal as a hurdle to clear. Its purpose is to characterise the evidence base, not to disqualify papers. A study with weaknesses can still contribute useful evidence if its limitations are acknowledged in the synthesis.
  • Forgetting to cite the appraisal tool. The tool is a published instrument and must be referenced in your methods chapter.