Screening the Results

Overview

Screening is the process of applying your pre-specified inclusion and exclusion criteria to the deduplicated set of references produced earlier, in order to identify the studies that will form the basis of your review. It proceeds in two sequential phases: first by title and abstract, then by full text. Each phase reduces the total set further; only studies that pass both phases are included in your final review.

Screening is the step most vulnerable to unconscious bias. The discipline of applying your criteria consistently, rather than on a case-by-case intuitive basis, is what separates a systematic review from an informal one.


Prepare for Screening

Before beginning, confirm the following:

  • Your inclusion and exclusion criteria are written out explicitly (not held in your head)
  • Your deduplicated reference set has been imported into your screening tool
  • You have conducted a short calibration exercise (see below)

Calibration

Before screening the full dataset, test your criteria against a small sample of twenty to thirty records drawn randomly from your reference set. If you are screening with a partner, apply the criteria independently and then compare decisions; if you are screening alone, re-screen the sample after a break and compare your two passes. This exercise surfaces edge cases and sharpens your application of the criteria.
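For reproducibility, the calibration sample can be drawn programmatically. The sketch below assumes your reference set has been exported to a CSV file (the filename and column names are illustrative, not prescribed by any tool); fixing the random seed means every screener draws the same records.

```python
import csv
import random

def draw_calibration_sample(path, k=25, seed=42):
    """Draw k records at random from a CSV export of the reference set.

    Fixing the seed makes the draw reproducible, so every screener
    calibrates against the same records. The CSV path and its columns
    are assumptions; adapt them to your own export.
    """
    with open(path, newline="", encoding="utf-8") as f:
        records = list(csv.DictReader(f))
    k = min(k, len(records))  # guard against very small reference sets
    return random.Random(seed).sample(records, k)
```

Each screener then applies the written criteria to the same twenty-five records before any comparison of decisions.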


Phase 1: Title and Abstract Screening

In the first phase, you review the title and abstract of every record in your deduplicated set and make a binary decision: include (proceed to full-text screening) or exclude (remove from the set, with reason recorded).

Decision Rules

  • Include any record where the title and abstract suggest the study could meet your criteria. When in doubt at this phase, include rather than exclude; you will assess more carefully in Phase 2.
  • Exclude only when you are confident the record does not meet one or more criteria. Record which criterion it fails.
  • Cannot determine: if the abstract is absent or too brief to judge, mark the record for full-text retrieval. Do not exclude on the basis of insufficient information.
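The three rules above can be encoded as a small decision helper. This is only an illustration: the judgement of whether a record confidently fails a criterion remains the screener's, and the twenty-word abstract threshold is an arbitrary assumption, not part of any standard.

```python
def phase1_decision(abstract, fails_criterion_confidently, failed_criterion=None):
    """Apply the Phase 1 rules: flag records whose abstract is missing or
    too brief, exclude only on a confident failure (with the reason), and
    otherwise include.

    `fails_criterion_confidently` and `failed_criterion` come from the
    screener's own judgement; code cannot infer them.
    """
    if not abstract or len(abstract.split()) < 20:  # threshold is an assumption
        return ("retrieve full text", None)
    if fails_criterion_confidently:
        return ("exclude", failed_criterion)  # the reason must be recorded
    return ("include", None)
```

Note that "when in doubt, include" falls out of the structure: exclusion requires an explicit, confident failure.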

Recording Exclusions

For every excluded record, note the primary reason for exclusion using the categories from your inclusion/exclusion criteria. Example reasons:

  • Outside date range
  • Not peer-reviewed
  • Not relevant to research question
  • Wrong population or context
  • Language not covered by criteria
  • Duplicate not caught in deduplication

These reasons feed directly into the PRISMA flow diagram. A free-text justification for each record is unnecessary at this phase: assigning every exclusion to one of your categories and reporting the counts per category is sufficient (e.g., "247 excluded: wrong topic; 43 excluded: outside date range").
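If your screening tool exports one row per record, the category counts can be tallied in a few lines. The "decision" and "reason" column names below are assumptions; adjust them to match your tool's export.

```python
from collections import Counter

def tally_exclusion_reasons(rows):
    """Count Phase 1 exclusions by reason category.

    `rows` is any iterable of dicts with 'decision' and 'reason' keys,
    e.g. csv.DictReader over a screening-tool export. Column names are
    assumptions, not a fixed schema.
    """
    return Counter(r["reason"] for r in rows if r["decision"] == "exclude")
```

The resulting counts transfer directly into the title/abstract box of the PRISMA flow diagram.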


Phase 2: Full-Text Screening

Records that pass Phase 1 are retrieved in full and assessed against the complete set of inclusion and exclusion criteria. Full-text screening is more demanding than title/abstract screening because you are working with the entire paper and must make a definitive inclusion decision.

Retrieving Full Texts

  • If you don't already have the full texts, search library databases
  • For items not available through the library databases: contact the library reference desk
  • For preprints or working papers, check SSRN (ssrn.com) or the author's institutional repository
  • If a full text genuinely cannot be obtained after reasonable effort, record it as "full text not retrievable" in your PRISMA count; do not exclude it silently

Assessing Full Texts

Read at minimum the abstract, introduction, methods section, and conclusion of each paper. You do not need to read every paper cover to cover at this stage; the goal is to confirm eligibility, not to extract data. Focus on:

  • Does the study actually investigate the population and phenomenon specified in your research question?
  • Does the methodology match the study types in your inclusion criteria?
  • Is the publication context (journal, conference, report type) within scope?
  • Does the date of data collection (not just publication) fall within your date range?

Record a clear reason for every full-text exclusion. At this phase, vague reasons such as "not relevant" are insufficient; specify which criterion was not met.


Screening Tools

Rayyan (Recommended for most students)

Rayyan (rayyan.ai) is a free, web-based screening tool designed specifically for systematic reviews. Key features:

  • Allows labelling of exclusion reasons
  • Exports screening decisions for the PRISMA diagram
  • No software installation required; browser-based
  • Free for individual and small team use

To get started: create a free account at rayyan.ai, create a new review, and upload your .ris export files. Rayyan flags likely duplicates on import for you to review and resolve.

Spreadsheet (Fallback Option)

An Excel or LibreOffice Calc spreadsheet with one row per reference, columns for title, abstract, Phase 1 decision, Phase 2 decision, and exclusion reason is a fully acceptable approach for smaller datasets (under 500 records). It requires more manual discipline but has no access barriers.
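If you take the spreadsheet route, it helps to generate the blank screening log programmatically so that every reference gets exactly one row and the decision columns are uniform from the start. The column names below are suggestions, not a standard.

```python
import csv

# Suggested layout: one row per reference, decision columns left blank.
COLUMNS = ["id", "title", "abstract", "phase1_decision",
           "phase1_reason", "phase2_decision", "phase2_reason"]

def write_screening_template(references, path="screening_log.csv"):
    """Write one row per reference with empty decision columns.

    `references` is an iterable of dicts with 'title' and 'abstract'
    keys (an assumption about your export format). Missing decision
    fields are left blank by DictWriter's default restval.
    """
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for i, ref in enumerate(references, 1):
            writer.writerow({"id": i,
                             "title": ref.get("title", ""),
                             "abstract": ref.get("abstract", "")})
```

The file opens directly in Excel or LibreOffice Calc, and the fixed column order keeps Phase 1 and Phase 2 decisions separate.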


Produce the PRISMA Flow Diagram

At the conclusion of screening, compile the following counts from your logbook and screening tool:

  1. Total records identified across all databases
  2. Total records after deduplication
  3. Records excluded at title/abstract screening (with reason categories)
  4. Full texts sought
  5. Full texts not retrievable
  6. Full texts excluded (with reason categories)
  7. Studies included in the final review

These numbers populate the PRISMA 2020 flow diagram, a standardised visual representation of the screening process. Pre-formatted templates of the PRISMA 2020 flow diagram, including a Word version, are available from the PRISMA website (prisma-statement.org). Complete it as you go; do not attempt to reconstruct the numbers from memory at the write-up stage.
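Before drawing the diagram, it is worth checking that the seven counts actually reconcile. The helper below is a sketch of the two arithmetic identities the flow implies: records after deduplication must equal Phase 1 exclusions plus full texts sought, and the full-text outcomes (not retrievable, excluded, included) must sum to the full texts sought.

```python
def check_prisma_counts(identified, after_dedup, excluded_p1,
                        sought, not_retrieved, excluded_ft, included):
    """Return a list of problems with the seven PRISMA counts
    (empty list means the numbers reconcile).

    Two identities must hold:
      after_dedup = excluded_p1 + sought
      sought      = not_retrieved + excluded_ft + included
    """
    problems = []
    if identified < after_dedup:
        problems.append("more records after deduplication than identified")
    if after_dedup != excluded_p1 + sought:
        problems.append("Phase 1 exclusions + full texts sought "
                        "!= records screened")
    if sought != not_retrieved + excluded_ft + included:
        problems.append("full-text outcomes do not sum to full texts sought")
    return problems
```

A non-empty result usually means an exclusion went unrecorded somewhere; fix the logbook before filling in the diagram.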


Common Mistakes to Avoid

  • Applying criteria inconsistently. If you find yourself making exceptions, revisit the written criteria rather than bending them for individual records.
  • Excluding at Phase 1 on a hunch. If an abstract is ambiguous, include it for full-text review rather than excluding it.
  • Not recording exclusion reasons. Without reasons, the PRISMA diagram cannot be completed and your methods section cannot be written.
  • Screener fatigue. For large datasets, screen in sessions of no more than ninety minutes. Fatigue measurably increases inconsistency.
  • Conflating screening with data extraction. Screening answers only one question: does this study meet the eligibility criteria? Deeper engagement with content comes later.