Critical Appraisal of Studies

Critical Appraisal

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context (Burls, 2009). Critical appraisal of studies involves checking the quality, reliability and relevance of the studies you've selected to help answer your review question. Depending on the type of study you are evaluating, you may use different evaluation tools. When evaluating studies, some questions to consider are:

  • Has the study's aim been clearly stated?
  • Does the sample accurately reflect the population?
  • Have the sampling method and size been described and justified?
  • Have exclusions been stated?
  • Is the control group easily identified?
  • Is the loss to follow-up detailed?
  • Can the results be replicated?
  • Are there confounding factors?
  • Are the conclusions logical?
  • Can the results be extrapolated to other populations?

Adapted from: University of Illinois Chicago Library

More on critical appraisal:

  • Critical Appraisal This article helps to define critical appraisal, identify its benefits, discuss conceptual issues influencing the adequacy of a critical appraisal, and detail procedures to help reviewers undertake critical appraisals (Tod et al., 2021).
  • Critical Appraisal Tools and Reporting Guidelines for Evidence‐Based Practice The primary purpose of this paper is to help nurses understand the difference between critical appraisal tools and reporting guidelines (Buccheri et al., 2017).
  • What is Critical Appraisal? An overview of how to critically appraise studies from Amanda Burls, Director of the Critical Appraisal Skills Programme, University of Oxford (Burls, 2009). From https://whatisseries.co.uk/product/what-is-critical-appraisal/

Critical Appraisal Tools

  • AMSTAR 2 - A Critical Appraisal Tool for Systematic Reviews (Shea et al., 2017).
  • BestBETs Critical Appraisal Worksheets Critical appraisal checklists for various study types.
  • CASP Checklists The Critical Appraisal Skills Programme (CASP) has appraisal checklists designed for use with Randomized Controlled Trials and other study types.
  • Critical Appraisal Tools Critical appraisal questions to ask and worksheets from the Centre for Evidence-Based Medicine at Oxford University.
  • Downs & Black- Checklist for Measuring Study Quality See Appendix (Downs & Black, 1998).
  • Downs & Black Checklist for Clinical Trial Quality Assessment (Downs and Black Checklist, 2013)
  • Joanna Briggs Institute (JBI) Critical Appraisal Tools JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers.
  • Johns Hopkins Evidence-Based Practice Model The Johns Hopkins Evidence-Based Practice model for Nurses and Healthcare Professionals is a problem-solving approach to clinical decision-making, accompanied by user-friendly tools to guide individuals or groups through the EBP process. You must fill out an online form to request permission to download the tools.
  • Mixed Methods Appraisal Tool (MMAT) The MMAT is intended to be used as a checklist for concomitantly appraising and/or describing studies included in systematic mixed studies reviews (reviews including original qualitative, quantitative and mixed methods studies) (Hong et al., 2018).
  • Repository of Quality Assessment and Risk of Bias Tools A handy resource from Duke University's Medical Center Library for finding and selecting a risk of bias or quality assessment tool for evidence synthesis projects. Download the spreadsheet for full functionality.

Tools for Specific Study Types

Integrative Reviews

  • Critical Appraisal Skills Programme (CASP) Checklists  Appraisal checklists designed for use with Systematic Reviews, Randomized Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative Studies and Clinical Prediction Rules.
  • Mixed Methods Appraisal Tool (MMAT)  The MMAT is a critical appraisal tool designed for the appraisal stage of systematic mixed studies reviews, i.e., reviews that include qualitative, quantitative and mixed methods studies. It permits appraisal of the methodological quality of five categories of studies: qualitative research, randomized controlled trials, non-randomized studies, quantitative descriptive studies, and mixed methods studies (Hong et al., 2018).

Randomized Controlled Trials

  • CASP Checklist for Randomized Controlled Trials
  • CASP Checklists  The Critical Appraisal Skills Programme (CASP) has appraisal checklists designed for use with Systematic Reviews, Randomized Controlled Trials, Cohort Studies, Case Control Studies, Economic Evaluations, Diagnostic Studies, Qualitative Studies and Clinical Prediction Rules.
  • JBI Critical Appraisal Tools  Joanna Briggs Institute (JBI) is an independent, international, not-for-profit research and development organization based at the University of Adelaide, South Australia. Contains a number of critical appraisal tools, including the Checklist for Randomized Controlled Trials.
  • RoB 2.0  A revised Cochrane risk of bias tool for randomized trials, suitable for individually randomized, parallel-group, and cluster-randomized trials.

Qualitative Studies

  • CASP Qualitative Studies Checklist  The most frequently recommended tool for qualitative study assessment.
  • JBI Critical Appraisal Checklist for Qualitative Research

Systematic Reviews

  • AMSTAR 2 - A Critical Appraisal Tool for Systematic Reviews  (Shea et al., 2017)
  • BMJ Framework for Assessing Systematic Reviews
  • CASP Systematic Review Checklist
  • JBI Checklist for Systematic Reviews
  • ROBIS A new tool for assessing the risk of bias in systematic reviews (rather than in primary studies). Here you can find the tool itself, information to help you complete a ROBIS assessment, and resources to help you present the results of your ROBIS assessment.

Scoping and Other Review Types

  • CAT HPPR: A critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention  (Heise et al., 2022).
  • CAT HPPR Critical Appraisal Tool for Health Promotion and Prevention Reviews
  • CAT HPPR Manual and Instructions Manual and instructions to reviewers for using the Critical Appraisal Tool for Health Promotion and Prevention Reviews (CAT HPPR). 2020.

References:

Buccheri, R. K., & Sharifi, C. (2017). Critical appraisal tools and reporting guidelines for evidence-based practice. Worldviews on Evidence-Based Nursing, 14(6), 463–472. https://doi.org/10.1111/wvn.12258

Burls, A. (2009). What is critical appraisal? Retrieved April 21, 2022, from www.whatisseries.co.uk

Downs, S. H., & Black, N. (1998). The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. Journal of Epidemiology and Community Health, 52(6), 377–384. https://doi.org/10.1136/jech.52.6.377

Downs and Black Checklist for Clinical Trial Quality Assessment. (2013). In Point-of-Care Testing of International Normalized Ratio for Patients on Oral Anticoagulant Therapy – Project Protocol [Internet]. Canadian Agency for Drugs and Technologies in Health. https://www.ncbi.nlm.nih.gov/books/NBK361373/

Heise, T. L., Seidler, A., Girbig, M., Freiberg, A., Alayli, A., Fischer, M., Haß, W., & Zeeb, H. (2022). CAT HPPR: A critical appraisal tool to assess the quality of systematic, rapid, and scoping reviews investigating interventions in health promotion and prevention. BMC Medical Research Methodology, 22(1), 334. https://doi.org/10.1186/s12874-022-01821-4

Hong, Q. N., Fàbregues, S., Bartlett, G., Boardman, F. K., Cargo, M., Dagenais, P., Gagnon, M., Griffiths, F. E., Nicolau, B., O'Cathain, A., Rousseau, M. C., Vedel, I., & Pluye, P. (2018). The Mixed Methods Appraisal Tool (MMAT) version 2018 for information professionals and researchers. Education for Information, 34(4), 285–291. https://doi.org/10.3233/EFI-180221

Ma, L.-L., Wang, Y.-Y., Yang, Z.-H., Huang, D., Weng, H., & Zeng, X.-T. (2020). Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: What are they and which is better? Military Medical Research, 7(1), 7.

Motheral, B., Brooks, J., Clark, M. A., Crown, W. H., Davey, P., Hutchins, D., Martin, B. C., & Stang, P. (2003). A checklist for retrospective database studies—Report of the ISPOR task force on retrospective databases. Value in Health, 6(2), 90–97. https://doi.org/10.1046/j.1524-4733.2003.00242.x

Shea, B. J., Reeves, B. C., Wells, G., Thuku, M., Hamel, C., Moran, J., Moher, D., Tugwell, P., Welch, V., Kristjansson, E., & Henry, D. A. (2017). AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ, 358, j4008. https://doi.org/10.1136/bmj.j4008

Tod, D., Booth, A., & Smith, B. (2021). Critical appraisal. International Review of Sport and Exercise Psychology, 15(1), 52–72.

  • Last Updated: May 3, 2024 5:52 PM
  • URL: https://libguides.adelphi.edu/Systematic_Reviews

Systematic Reviews: Critical Appraisal by Study Design


Tools for Critical Appraisal of Studies


“The purpose of critical appraisal is to determine the scientific merit of a research report and its applicability to clinical decision making.” 2 Conducting a critical appraisal of a study is imperative to any well-executed evidence review, but the process can be time-consuming and difficult. 3 A “methodological approach coupled with the right tools and skills to match these methods is essential for finding meaningful results.” 4 In short, critical appraisal is a method of differentiating good research from bad research.

Critical Appraisal by Study Design (featured tools)

Systematic Reviews

  • AMSTAR 2 (A MeaSurement Tool to Assess systematic Reviews)  The original AMSTAR was developed to assess the risk of bias in systematic reviews that included only randomized controlled trials. AMSTAR 2, published in 2017, allows researchers to “identify high quality systematic reviews, including those based on non-randomised studies of healthcare interventions.” 5
  • ROBIS (Risk of Bias in Systematic Reviews)  ROBIS is a tool designed specifically to assess the risk of bias in systematic reviews. “The tool is completed in three phases: (1) assess relevance (optional), (2) identify concerns with the review process, and (3) judge risk of bias in the review. Signaling questions are included to help assess specific concerns about potential biases with the review.” 6
  • BMJ Framework for Assessing Systematic Reviews  This framework provides a checklist used to evaluate the quality of a systematic review.
  • CASP Checklist for Systematic Reviews  This CASP (Critical Appraisal Skills Programme) checklist is not a scoring system, but rather a method of appraising systematic reviews by considering: 1. Are the results of the study valid? 2. What are the results? 3. Will the results help locally?
  • CEBM Systematic Reviews Critical Appraisal Sheet  The CEBM (Centre for Evidence-Based Medicine) critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.
  • JBI Critical Appraisal Tools, Checklist for Systematic Reviews  JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
  • NHLBI Study Quality Assessment of Systematic Reviews and Meta-Analyses  The NHLBI (National Heart, Lung, and Blood Institute) quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.

Randomized Controlled Trials

  • RoB 2 (revised tool to assess Risk of Bias in randomized trials)  RoB 2 “provides a framework for assessing the risk of bias in a single estimate of an intervention effect reported from a randomized trial,” rather than the entire trial. 7
  • CASP Randomised Controlled Trials Checklist  This CASP checklist considers various aspects of an RCT that require critical appraisal: 1. Is the basic study design valid for a randomized controlled trial? 2. Was the study methodologically sound? 3. What are the results? 4. Will the results help locally?
  • CONSORT Statement  The CONSORT (Consolidated Standards of Reporting Trials) checklist includes 25 items to determine the quality of randomized controlled trials. “Critical appraisal of the quality of clinical trials is possible only if the design, conduct, and analysis of RCTs are thoroughly and accurately described in the report.” 8
  • NHLBI Study Quality Assessment of Controlled Intervention Studies  The NHLBI quality assessment tools were designed to assist reviewers in focusing on concepts that are key for critical appraisal of the internal validity of a study.
  • JBI Critical Appraisal Tools Checklist for Randomized Controlled Trials  JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.

Non-RCTs or Observational Studies

  • ROBINS-I (Risk Of Bias in Non-randomized Studies – of Interventions)  ROBINS-I is a “tool for evaluating risk of bias in estimates of the comparative effectiveness… of interventions from studies that did not use randomization to allocate units… to comparison groups.” 9
  • NOS (Newcastle-Ottawa Scale)  This tool is used primarily to evaluate and appraise case-control or cohort studies.
  • AXIS (Appraisal tool for Cross-Sectional Studies)  Cross-sectional studies are frequently used as an evidence base for diagnostic testing, risk factors for disease, and prevalence studies. “The AXIS tool focuses mainly on the presented [study] methods and results.” 10
  • NHLBI Study Quality Assessment Tools for Non-Randomized Studies  Includes the Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies, the Quality Assessment of Case-Control Studies, the Quality Assessment Tool for Before-After (Pre-Post) Studies With No Control Group, and the Quality Assessment Tool for Case Series Studies.
  • Case Series Studies Quality Appraisal Checklist  Developed by the Institute of Health Economics (Canada), the checklist comprises 20 questions to assess “the robustness of the evidence of uncontrolled, [case series] studies.” 11
  • Methodological Quality and Synthesis of Case Series and Case Reports  In this paper, Dr. Murad and colleagues “present a framework for appraisal, synthesis and application of evidence derived from case reports and case series.” 12
  • MINORS (Methodological Index for Non-Randomized Studies)  The MINORS instrument contains 12 items and was developed for evaluating the quality of observational or non-randomized studies. 13 This tool may be of particular interest to researchers who would like to critically appraise surgical studies.
  • JBI Critical Appraisal Tools for Non-Randomized Trials  Includes the Checklist for Analytical Cross Sectional Studies, the Checklist for Case Control Studies, the Checklist for Case Reports, the Checklist for Case Series, and the Checklist for Cohort Studies.

Diagnostic Accuracy

  • QUADAS-2 (a revised tool for the Quality Assessment of Diagnostic Accuracy Studies)  The QUADAS-2 tool “is designed to assess the quality of primary diagnostic accuracy studies… [it] consists of 4 key domains that discuss patient selection, index test, reference standard, and flow of patients through the study and timing of the index tests and reference standard.” 14
  • JBI Critical Appraisal Tools Checklist for Diagnostic Test Accuracy Studies  JBI Critical Appraisal Tools help you assess the methodological quality of a study and determine the extent to which a study has addressed the possibility of bias in its design, conduct and analysis.
  • STARD 2015 (Standards for the Reporting of Diagnostic Accuracy Studies)  The authors of the standards note that “[e]ssential elements of [diagnostic accuracy] study methods are often poorly described and sometimes completely omitted, making both critical appraisal and replication difficult, if not impossible.” STARD 2015 was developed “to help… improve completeness and transparency in reporting of diagnostic accuracy studies.” 15
  • CASP Diagnostic Study Checklist  This CASP checklist considers various aspects of diagnostic test studies including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?
  • CEBM Diagnostic Critical Appraisal Sheet  The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance, and applicability of clinical evidence.

Animal Studies

  • SYRCLE’s RoB (SYstematic Review Center for Laboratory animal Experimentation’s Risk of Bias)  “[I]mplementation of [SYRCLE’s RoB tool] will facilitate and improve critical appraisal of evidence from animal studies. This may… enhance the efficiency of translating animal research into clinical practice and increase awareness of the necessity of improving the methodological quality of animal studies.” 16
  • ARRIVE 2.0 (Animal Research: Reporting of In Vivo Experiments)  “The [ARRIVE 2.0] guidelines are a checklist of information to include in a manuscript to ensure that publications [on in vivo animal studies] contain enough information to add to the knowledge base.” 17
  • Critical Appraisal of Studies Using Laboratory Animal Models  This article provides “an approach to critically appraising papers based on the results of laboratory animal experiments,” and discusses various “bias domains” in the literature that critical appraisal can identify. 18

Qualitative Research

  • CEBM Critical Appraisal of Qualitative Studies Sheet  The CEBM’s critical appraisal sheets are designed to help you appraise the reliability, importance and applicability of clinical evidence.
  • CASP Qualitative Studies Checklist  This CASP checklist considers various aspects of qualitative research studies including: 1. Are the results of the study valid? 2. What were the results? 3. Will the results help locally?

Tool Repository

  • Quality Assessment and Risk of Bias Tool Repository  Created by librarians at Duke University, this extensive listing contains over 100 commonly used risk of bias tools that may be sorted by study type.
  • Latitudes Network  A library of risk of bias tools for use in evidence syntheses that provides selection help and training videos.

References & Recommended Reading

1.     Kolaski K, Logan LR, Ioannidis JP.  Guidance to best tools and practices for systematic reviews.   British Journal of Pharmacology.  2024;181(1):180-210.

2.    Portney LG.  Foundations of clinical research: applications to evidence-based practice.  4th ed. Philadelphia: F.A. Davis; 2020.

3.     Fowkes FG, Fulton PM.  Critical appraisal of published research: introductory guidelines.   BMJ (Clinical research ed).  1991;302(6785):1136-1140.

4.     Singh S.  Critical appraisal skills programme.   Journal of Pharmacology and Pharmacotherapeutics.  2013;4(1):76-77.

5.     Shea BJ, Reeves BC, Wells G, et al.  AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both.   BMJ (Clinical research ed).  2017;358:j4008.

6.     Whiting P, Savovic J, Higgins JPT, et al.  ROBIS: A new tool to assess risk of bias in systematic reviews was developed.   Journal of clinical epidemiology.  2016;69:225-234.

7.     Sterne JAC, Savovic J, Page MJ, et al.  RoB 2: a revised tool for assessing risk of bias in randomised trials.  BMJ (Clinical research ed).  2019;366:l4898.

8.     Moher D, Hopewell S, Schulz KF, et al.  CONSORT 2010 Explanation and Elaboration: Updated guidelines for reporting parallel group randomised trials.  Journal of clinical epidemiology.  2010;63(8):e1-37.

9.     Sterne JA, Hernan MA, Reeves BC, et al.  ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions.  BMJ (Clinical research ed).  2016;355:i4919.

10.     Downes MJ, Brennan ML, Williams HC, Dean RS.  Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS).   BMJ open.  2016;6(12):e011458.

11.   Guo B, Moga C, Harstall C, Schopflocher D.  A principal component analysis is conducted for a case series quality appraisal checklist.   Journal of clinical epidemiology.  2016;69:199-207.e192.

12.   Murad MH, Sultan S, Haffar S, Bazerbachi F.  Methodological quality and synthesis of case series and case reports.  BMJ evidence-based medicine.  2018;23(2):60-63.

13.   Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J.  Methodological index for non-randomized studies (MINORS): development and validation of a new instrument.   ANZ journal of surgery.  2003;73(9):712-716.

14.   Whiting PF, Rutjes AWS, Westwood ME, et al.  QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies.   Annals of internal medicine.  2011;155(8):529-536.

15.   Bossuyt PM, Reitsma JB, Bruns DE, et al.  STARD 2015: an updated list of essential items for reporting diagnostic accuracy studies.   BMJ (Clinical research ed).  2015;351:h5527.

16.   Hooijmans CR, Rovers MM, de Vries RBM, Leenaars M, Ritskes-Hoitinga M, Langendam MW.  SYRCLE's risk of bias tool for animal studies.   BMC medical research methodology.  2014;14:43.

17.   Percie du Sert N, Ahluwalia A, Alam S, et al.  Reporting animal research: Explanation and elaboration for the ARRIVE guidelines 2.0.  PLoS biology.  2020;18(7):e3000411.

18.   O'Connor AM, Sargeant JM.  Critical appraisal of studies using laboratory animal models.   ILAR journal.  2014;55(3):405-417.

  • Last Updated: May 10, 2024 7:59 AM
  • URL: https://libraryguides.mayo.edu/systematicreviewprocess



Dissecting the literature: the importance of critical appraisal

08 Dec 2017

Kirsty Morrison

This post was updated in 2023.

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context.

Amanda Burls, What is Critical Appraisal?


Why is critical appraisal needed?

Literature searches using databases like Medline or EMBASE often return an overwhelming volume of results, which can vary in quality. Similarly, those who browse the medical literature for CPD or in response to a clinical query will know that there are vast amounts of content available. Critical appraisal helps to reduce this burden by allowing you to focus on articles that are relevant to the research question and that can reliably support or refute its claims with high-quality evidence, or to identify high-level research relevant to your practice.


Critical appraisal allows us to:

  • reduce information overload by eliminating irrelevant or weak studies
  • identify the most relevant papers
  • distinguish evidence from opinion, assumptions, misreporting, and belief
  • assess the validity of the study
  • assess the usefulness and clinical applicability of the study
  • recognise any potential for bias.

Critical appraisal helps to separate what is significant from what is not. One way we use critical appraisal in the Library is to prioritise the most clinically relevant content for our Current Awareness Updates.

How to critically appraise a paper

There are some general rules to help you, including a range of checklists highlighted at the end of this blog. Some key questions to consider when critically appraising a paper:

  • Is the study question relevant to my field?
  • Does the study add anything new to the evidence in my field?
  • What type of research question is being asked? A well-developed research question usually identifies three components: the group or population of patients, the studied parameter (e.g. a therapy or clinical intervention) and outcomes of interest.
  • Was the study design appropriate for the research question? You can learn more about different study types and the hierarchy of evidence here .
  • Did the methodology address important potential sources of bias? Bias can be attributed to chance (e.g. random error) or to the study methods (systematic bias).
  • Was the study performed according to the original protocol? Deviations from the planned protocol can affect the validity or relevance of a study, e.g. a decrease in the studied population over the course of a randomised controlled trial.
  • Does the study test a stated hypothesis? Is there a clear statement of what the investigators expect the study to find, which can be tested and then confirmed or refuted?
  • Were the statistical analyses performed correctly? The approach to dealing with missing data, and the statistical techniques that have been applied should be specified. Original data should be presented clearly so that readers can check the statistical accuracy of the paper.
  • Do the data justify the conclusions? Watch out for definite conclusions based on statistically insignificant results, generalised findings from a small sample size, and statistically significant associations being misinterpreted to imply a cause and effect.
  • Are there any conflicts of interest? Who has funded the study and can we trust their objectivity? Do the authors have any potential conflicts of interest, and have these been declared?

And an important consideration for surgeons:

  • Will the results help me manage my patients?

At the end of the appraisal process you should have a better appreciation of how strong the evidence is, and ultimately whether or not you should apply it to your patients.

Further resources:

  • How to Read a Paper by Trisha Greenhalgh
  • The Doctor’s Guide to Critical Appraisal by Narinder Kaur Gosall
  • CASP checklists
  • CEBM Critical Appraisal Tools
  • Critical Appraisal: a checklist
  • Critical Appraisal of a Journal Article (PDF)
  • Introduction to...Critical appraisal of literature
  • Reporting guidelines for the main study types

Kirsty Morrison, Information Specialist


  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students


Appraisal of a Qualitative Paper: Top Tips


Critical appraisal of a qualitative paper

This guide, aimed at health students, provides basic-level support for appraising qualitative research papers. It's designed for students who have already attended lectures on critical appraisal. One framework for appraising qualitative research (based on four aspects of trustworthiness) is provided, and there is an opportunity to practise the technique on a sample article.

Support Materials

  • Framework for reading qualitative papers
  • Critical appraisal of a qualitative paper PowerPoint

To practise following this framework for critically appraising a qualitative article, please look at the following article:

Schellekens, M. P. J. et al. (2016) 'A qualitative study on mindfulness-based stress reduction for breast cancer patients: how women experience participating with fellow patients', Support Care Cancer, 24(4), pp. 1813–1820.

Critical appraisal of a qualitative paper: practical example.

  • Credibility
  • Transferability
  • Dependability
  • Confirmability

How to use this practical example

Using the framework, you can have a go at appraising a qualitative paper. We will use the Schellekens et al. (2016) article cited above.

Step 1. Take a quick look at the article.

Step 2. Work through the credibility questions below, looking for the answers in the article.

Step 3. Repeat with the other aspects of trustworthiness: transferability, dependability and confirmability.

Questioning the credibility:

  • Who is the researcher? What has been their experience? How well do they know this research area?
  • Was the best method chosen? What method did they use? Was there any justification? Was the method scrutinised by peers? Is it a recognisable method? Was there triangulation (more than one method used)?
  • How was the data collected? Was data collected from the participants at more than one time point? How long were the interviews? Were questions asked to the participants in different ways?
  • Is the research reporting what the participants actually said? Were the participants shown transcripts/notes of the interviews/observations to 'check' for accuracy? Are direct quotes used from a variety of participants?
  • How would you rate the overall credibility?

Questioning the transferability:

  • Was a meaningful sample obtained? How many people were included? Is the sample diverse? How were they selected? Are the demographics given?
  • Does the research cover diverse viewpoints? Do the results include negative cases? Was data saturation reached?
  • What is the overall transferability? Can the research be transferred to other settings?

Questioning the dependability:

  • How transparent is the audit trail? Can you follow the research steps? Are the decisions made transparent? Is the whole process explained in enough detail? Did the researcher keep a field diary? Is there a clear limitations section?
  • Was there peer scrutiny of the research? Was the research plan shown to peers/colleagues for approval and/or feedback? Did two or more researchers independently judge the data?
  • How would you rate the overall dependability? Would the results be similar if the study was repeated? How consistent are the data and findings?

Questioning the confirmability:

  • Is the process of analysis described in detail? Is a method of analysis named or described? Is there sufficient detail?
  • Have any checks taken place? Was there cross-checking of themes? Was there a team of researchers?
  • Has the researcher reflected on possible bias? Is there a reflexive diary giving a detailed log of thoughts, ideas and assumptions?
  • How do you rate the overall confirmability? Has the researcher attempted to limit bias?

Questioning the overall trustworthiness:

  • Overall, how trustworthy is the research?

Further information

See Useful resources for links, books and LibGuides to help with critical appraisal.

  • Last Updated: Apr 30, 2024 4:47 PM
  • URL: https://libguides.tees.ac.uk/critical_appraisal


UCL Doctorate In Clinical Psychology


Guidelines for Writing and Presenting the Thesis

The DClinPsy thesis has two volumes. The major research project forms Volume 1; Volume 2 contains the four case reports and the service-related research report. These guidelines describe what goes into each part of the thesis and how it all fits together. They mostly focus on Volume 1, which is covered in the following section; the later section on layout and formatting covers both volumes.

What goes in Volume 1

Volume 1, the research component of the thesis, has a three-part structure, consisting of a literature review paper, an empirical paper and a critical appraisal. In addition, from June 2018 onwards, UCL regulations stipulate that the thesis should contain a brief (≤500 words)  Impact Statement , explaining how the work in the thesis could be put to beneficial use inside and outside of academia.

The first two parts (the literature review and the empirical paper) are in the form of papers that might be submitted to a peer-reviewed journal; the third part (the critical appraisal) is not intended for publication, but aims to give you an opportunity to reflect critically on the research that you carried out. Each part is described below.

There will inevitably be some overlap between each of the three parts: for example, the introduction section of the empirical paper may partly be condensed from the literature review paper, and the critical appraisal may address in greater detail some of the issues raised in the discussion section of the empirical paper. However, overlap should generally be minimal, and the same sentences should not normally be repeated in different parts of the thesis.

The regulations state that the length of the research thesis shall be approximately 25,000 words, with a maximum of 40,000 words; there is no minimum word count. We suggest that you aim for about 20,000 to 25,000 words. Conciseness of expression is greatly valued by the examiners, who may require overly wordy theses to be shortened.

We strongly encourage you to start writing drafts of your thesis early on, as this is an essential way to clarify your thoughts. It is a bad idea to leave a lot of the writing until late in the project, since this usually leads to a rushed, poor quality thesis.

Part 1. Review paper

The review paper (of approximately 8,000 words not including tables and references) is a focused review of a body of literature relevant to the research topic. It is not necessary to address the literature for every aspect of your empirical study (the introduction section of your empirical paper will provide the necessary background). The review paper should either be a stand-alone paper in its own right, which poses a question and then systematically examines the empirical literature that addresses that question, OR a Conceptual Introduction, which reviews the evidence in a more narrative fashion. Guidance for both formats is available on this website.

The structure that follows is for the stand-alone paper; for a conceptual introduction you are free to organise it as you wish (see suggestions in the more detailed guidance in the Literature Review section of the website here: https://www.ucl.ac.uk/clinical-psychology-doctorate/guidance-conceptual-... ):

  • A structured Abstract (of about 200 words), with headings of Aims, Method, Results, Conclusions. It should specify the number of papers reviewed.
  • The Introduction gives the background to the topic and ends with a clearly specified question that the review will address.
  • The Method section specifies the inclusion and exclusion criteria for the studies to be reviewed and the search strategy for locating them. The latter should indicate which databases you used, with which search terms, and specify other search limits, e.g. date or publication type. You should also describe how you narrowed down the studies from the initial (usually large) number of hits generated by the search to the final set of studies that you focus on. The steps in the narrowing down process are usually illustrated by a flowchart.
  • The Results section reviews the assembled studies. It is usually helpful to include a table listing their important characteristics and findings. The review should not be simply descriptive; it should weigh up the evidence, taking into account the methodological soundness of the studies, and take a critical perspective on the evidence base as a whole. It is often helpful to use a structured critical appraisal checklist -- there are several in the literature (see the list on Moodle).
  • The Discussion section addresses what can be concluded from the body of studies reviewed. It should draw on the methodological critique of the studies in order to evaluate the quality of the evidence. It should also address the limitations of the review, draw any clinical implications and make suggestions for further research (that may, by remarkable coincidence, bear considerable similarity to the empirical project reported in the second part of the thesis).
  • The References.
  • Any appendices are placed at the end of Volume 1 (see section below on layout).

One model for the stand-alone paper style of this part of the thesis is articles in  Clinical Psychology Review . You could also look at any theoretical or review article in other clinical psychology journals. However, these published review papers, particularly those in prestigious journals, are usually much more ambitious in terms of quantity, scope and method than is possible within the constraints of the DClinPsy.

Part 2. Empirical paper

The empirical paper (of approximately 8,000 words not including tables and references) reports on your study. Its structure follows the usual research article format, although the length of each section will vary according to the nature of the project, and additional detail may need to be provided in the Method or Results sections (or in an Appendix). You can model it on papers in any mainstream peer-reviewed clinical psychology journal, e.g. the  British Journal of Clinical Psychology  or the  Journal of Consulting and Clinical Psychology , or a specialist journal in your particular research area. As a rough guide, each of the four main sections is usually in the range of about 1,500 to 2,500 words, with the Results section usually being longer than the other three. The structure is as follows:

  • A structured  Abstract  (of about 200 words), with headings of Aims, Method, Results, Conclusions.
  • Introduction . A brief review of the literature, which shows the flow of ideas leading to your research questions. The rationale for the study should be clearly articulated. The Introduction ends with your research questions or hypotheses.
  • Method . A description of participants, procedures, design and measures. The methods should be described in sufficient detail to enable the reader to understand what was done and potentially to be able to replicate the study. For quantitative studies, the statistical power analysis should normally be reported. Descriptions of measures need to include sample items, response options, scoring methods and psychometric properties. There will also be a section on ethics, saying where approval was obtained and discussing any ethical issues in the study. For confidentiality reasons, no names of services where participants were recruited should be given.
  • Results . The findings and any statistical analyses should be presented with the aid of tables and, if necessary, figures. It should be possible for the reader to evaluate the data from which your conclusions are drawn. Qualitative papers will include quotes to illustrate each of the themes.
  • Discussion . An examination of the research questions in the light of the results obtained and the methods used. It will interpret the findings in the context of the research questions and the wider theoretical context in which the work was carried out, including a consideration of alternative explanations, methodological limitations and reasons for unexpected results. It will conclude with a discussion of the scientific and professional implications of the findings.
  • References . A list of all references cited.

Part 3. Critical appraisal

The final part of the thesis (of approximately 3,000 to 5,000 words not including tables and references) is intended to encourage critical reflection on the whole process of doing the research. Its structure and content are more flexible than those of the other two parts. You could, for example, discuss how your previous experiences or theoretical orientation might have influenced how you set about the study, how the process of doing the research might have modified your views (it is often helpful to draw on your research journal here), how you dealt with any dilemmas or methodological choices that arose during the course of the study, and what you might have done differently and why. You could also include an expanded discussion of the strengths and weaknesses of the study, its clinical and scientific implications, and future directions for research (depending on how extensively each of these areas is covered in the discussion section of the empirical paper). It is essential, however, to ensure that all important points are mentioned in your empirical paper first – this is not the place to introduce significant limitations of the study or different ways of interpreting the findings. Whilst it is less formal than the other two parts, the critical appraisal should not be overly personal; it should ideally be addressed to an audience of fellow researchers who might benefit from your considered thoughts about conducting the research.

All appendices are placed at the end of Volume 1. Include here any additional material related to the empirical study, or to the other two parts if needed. Essential material to append includes: the official letter giving ethical approval, sample letters to participants, participant information sheet, informed consent form, instruction sheets, questionnaires, interview schedules and any measures not in common use. Measures that are sensitive or copyrighted will eventually need to be removed. Raw data and computer printouts are not normally needed. However, for qualitative studies, examples of the procedures of analysis should be included.

Confidentiality and privacy

Once your thesis is completed it will effectively become a public document, available on the internet via UCL's e-thesis repository (UCL Discovery). Therefore it is essential when presenting your work that your participants' right to confidentiality and privacy be upheld. In particular, students writing up small-N and qualitative studies should be especially careful to ensure that no participants are identifiable from the thesis.

Layout and formatting

The text should be double-spaced on plain, white A4 paper. Both sides of the paper may be used - you can choose whether to print the thesis single-sided or double-sided. Margins at the binding edge should be 4cm. The other margins (i.e. top, bottom and unbound side) should be 2.5cm. Remember, if you include a table or figure that uses a landscape page setup then the margins need to be adjusted accordingly, i.e. 4cm becomes the top margin.

Number pages on the bottom right or bottom centre of the page. Page 1 is the title page (although it looks tidier if you suppress the page numbering for that page only).

For general guidance on formatting, follow  APA style , as set out in the  APA Publication Manual  (7th edition). It is essential to use APA citation and referencing style (see the course document on Moodle), and also to lay out tables in APA format. Heading formats can depart slightly from APA style (e.g. you can use italicised headings, or adopt a numbering system if you wish): what is important is to adopt a systematic hierarchy of headings within each part of the thesis. Look at recent theses for models of layout and formatting (ask your UCL supervisor to recommend one or two). Pay meticulous attention to spelling, grammar, punctuation and format: poorly presented theses give an impression of carelessness and will be referred for revision.

The thesis is more easily readable if you left justify the text and use a standard font. We recommend Times New Roman 12 point or Arial 11 point for the main body of the text, although tables and figures can be set in a smaller font size if necessary, as long as they are readable. In accordance with APA style, the best way to indicate a new paragraph in double-spaced text is to indent its first word; there is then no need to leave a blank line between paragraphs.

Tables and figures are numbered (Table 1 etc.) and usually placed on their own separate pages, although smaller ones can be embedded in the text, usually just below the paragraph that first refers to them (in contrast to APA format for submitted journal articles, where the tables and figures are at the end of the paper).

Volume 1 is laid out in the following order:

  • the  Title Page  gives the title (usually the same as that of the empirical paper), your name, and lower down on the page, the words "DClinPsy thesis (Volume 1), [year of submission]" and on the line below "University College London". The title page text is centred. You can use a slightly larger font if you wish.
  • a  Signed Declaration  that the work presented is your own. The professional doctorate regulations specify that this be inserted right after the title page of the thesis. There is a  declaration form  on the course website.
  • an  Overview  (up to 250 words), giving a summary of the contents of all three parts of the thesis. (Note that this will ultimately be used by the library to catalogue your thesis, and it will form part of the meta-data that will be seen first by people searching for your thesis.)
  • an  Impact Statement  that describes, in no more than 500 words, how the expertise, knowledge, analysis, discovery or insight presented in your thesis could be put to a beneficial use. Please see  guidance  from the UCL Doctoral School on this.
  • the  Table of Contents  covers all three parts of Volume 1, including the appendices, and gives a separate list of tables and figures.
  • the  Acknowledgements  page mentions everyone whose contribution to the work you wish to recognise.
  • Part 1  (the literature review) with a title page and abstract (both on separate pages) and references. The title page should say “Part 1: Literature Review” and then give the title of the review paper on a separate line.
  • Part 2  (the empirical paper) with a title page and abstract (both on separate pages) and references. The title page should say “Part 2: Empirical Paper” and then give the title of the empirical paper on a separate line. The text of the main body of the paper should run continuously: the main sections (Methods, Results, Discussion) should not start on new pages. Tables and figures should be numbered afresh for the empirical paper, so the first table in the empirical paper is Table 1, even if there is also a Table 1 in the literature review.
  • Part 3  (the critical appraisal) with a title page (just saying “Part 3: Critical Appraisal”), and references.
  • the  Appendices , each with their own title page. (There’s no need to number the pages within the appendices if this is fiddly.) There is only one set of appendices for all of Volume 1, placed at the end of the volume. They are numbered in the order in which they appear in the thesis. (If there is only one appendix, just call it Appendix, with no number.)

If your research is part of a joint project (e.g. with another trainee or with a PhD student), you must state this in the Overview and in the Method section of your empirical paper, and include an Appendix setting out each person’s contribution to the project. Please see the course document on  submission of joint theses .

Volume 2 (no longer submitted but you should assemble it as a document as follows)

Volume 2 begins with a title page, which says "Case Reports and Service-Related Research Project", then lists on separate lines your name, "D.Clin.Psy. thesis (Volume 2), [year of submission]" and "University College London". On the next page there is the table of contents, giving the full title, as below; there is no need to list tables and appendices. Then follows each of the four case reports and the service-related research report, in the order in which each was submitted. For case reports, the title page gives the submission number, your own title and the type of case report, e.g., Case report 4: "An angry young man" (Completed Clinical Intervention). For the service-related research it has the words "Service Related Research Report (submitted as Case Report x)"; the title of the report is then listed on a new line. Word counts and trainee code numbers should be omitted. After the title page comes the body of the report, its references, and then any appendices pertaining to that report. Each case report is a stand-alone entity, so tables and appendices are numbered afresh (i.e. each report could have a Table 1, etc.). As described above, Volume 2 is separately paginated.

Handing in before the viva

Electronic submission.

You need to submit an electronic version of Volume 1 in pdf format. Send it to the Research Administrator at  [email protected]   via the  Moodle submission link  with a file name of Thesis_submission_volume1_[yourlastname] (e.g. Thesis_submission_volume1_Smith).

NOTE -  Volume 2 does not need to be submitted at this point but must be made available on request.

Running Volume 1 through Turnitin.

In addition to the procedures outlined above for submission of the thesis, we require that Volume 1 of the thesis be submitted via Turnitin, a plagiarism-detection programme.

As with case reports, submission of Volume 1 of the thesis to Turnitin is done via Moodle. The link for thesis submission on your Moodle homepage is called ‘Thesis Volume 1 Submission’.

When uploading Volume 1 please call the file ‘Volume 1 [First name] [Family name]’. For example, ‘Volume 1 Ed Miliband’ or ‘Volume 1 Nicola Sturgeon’. You should upload your full Volume 1 (as outlined in the section above called ‘Volume 1’) as a Word document.

Turnitin is being used to promote good academic practice, not to catch students out. For this reason the system has been configured so that you can submit your Volume 1, look at the Turnitin report to identify any sections where there may be potential plagiarism, delete the submission and submit a revised report.

Resubmissions can be made up to 14.00 on the day on which theses are due, although in practice it is strongly recommended that Turnitin submissions are made well before then: it will be important to leave yourself time to submit to Turnitin before you submit your final version of Volume 1. Also, please note that Turnitin can take up to 24 hours to generate a similarity report for each submission, so you will need to factor this into any plans for checking and resubmission.

How to judge the Turnitin report and decide whether the thesis needs to be amended

Turnitin will give your Volume 1 an originality score, but this tells you very little about whether there are any problems with plagiarism in your thesis. That is because theses contain copies of measures, participant information sheets, references and so on, which inflate the Turnitin originality score.

Trainees need to use their own judgement to decide whether they should amend their thesis because of inadvertent plagiarism. The key principle is that ideas and quotations are appropriately referenced.  Please look at the guidance about plagiarism on the UCL  website , which is also reproduced in Section 23 of the Training Handbook.

If you have any queries about using Turnitin as part of the thesis submission, please contact Priya Dey, the Research Administrator, in the first instance. 

After the viva

Ongoing access to UCL library resources.

All DClinPsy students continue to have access to UCL library resources after the viva, whilst they work on any required thesis revisions. Once you have completed any revisions, had them approved and submitted your thesis, your access to the library as a UCL student will come to an end. However, the good news is that UCL alumni are entitled to library access after they complete their studies. You just need to re-register, following the instructions given on the  UCL library website .

You need to submit two electronic copies of Volume 1 in pdf format:

1. One e-copy to the Research Administrator with a filename of Thesis_final_volume1_[yourlastname] 

2. One e-copy to UCL's e-thesis repository (UCL Discovery) via the  Research Publication Service . The library have produced a useful document (available on the Project Support  Moodle  site) outlining the e-thesis submission procedure.

Once the Research Administrator can confirm that you have completed all other components of the course, they will inform the HCPC that you have satisfied all the course requirements. However, before the Research Administrator can report to UCL that you have completed the course, you also need to have submitted the e-thesis copy to UCL Discovery. Once this is done, you will get a letter from the Course Directors confirming that you have passed the DClinPsy.

University of Texas Libraries

Systematic Reviews & Evidence Synthesis Methods

Critical Appraisal


Some reviews require a critical appraisal for each study that makes it through the screening process. This involves a risk of bias assessment and/or a quality assessment. The goal of these reviews is not just to find all of the studies, but to determine their methodological rigor, and therefore, their credibility.

"Critical appraisal is the balanced assessment of a piece of research, looking for its strengths and weaknesses and then coming to a balanced judgement about its trustworthiness and its suitability for use in a particular context." 1

It's important to consider the impact that poorly designed studies could have on your findings and to rule out inaccurate or biased work.

Selection of a valid critical appraisal tool, testing the tool with several of the selected studies, and involving two or more reviewers in the appraisal are good practices to follow.

1. Purssell E, McCrae N. How to Perform a Systematic Literature Review: A Guide for Healthcare Researchers, Practitioners and Students. 1st ed. Springer; 2020.

Evaluation Tools

  • The Appraisal of Guidelines for Research & Evaluation Instrument (AGREE II) The Appraisal of Guidelines for Research & Evaluation Instrument (AGREE II) was developed to address the issue of variability in the quality of practice guidelines.
  • Centre for Evidence-Based Medicine (CEBM). Critical Appraisal Tools "contains useful tools and downloads for the critical appraisal of different types of medical evidence. Example appraisal sheets are provided together with several helpful examples."
  • Critical Appraisal Skills Programme (CASP) Checklists Critical Appraisal checklists for many different study types
  • Critical Review Form for Qualitative Studies Version 2, developed out of McMaster University
  • Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS) Downes MJ, Brennan ML, Williams HC, et al. Development of a critical appraisal tool to assess the quality of cross-sectional studies (AXIS). BMJ Open 2016;6:e011458. doi:10.1136/bmjopen-2016-011458
  • Downs & Black Checklist for Assessing Studies Downs, S. H., & Black, N. (1998). The Feasibility of Creating a Checklist for the Assessment of the Methodological Quality Both of Randomised and Non-Randomised Studies of Health Care Interventions. Journal of Epidemiology and Community Health (1979-), 52(6), 377–384.
  • GRADE The Grading of Recommendations Assessment, Development and Evaluation (GRADE) working group "has developed a common, sensible and transparent approach to grading quality (or certainty) of evidence and strength of recommendations."
  • Grade Handbook Full handbook on the GRADE method for grading quality of evidence.
  • MAGIC (Making GRADE the Irresistible Choice) Clear, succinct guidance on how to use GRADE
  • Joanna Briggs Institute. Critical Appraisal Tools "JBI’s critical appraisal tools assist in assessing the trustworthiness, relevance and results of published papers." Includes checklists for 13 types of articles.
  • Latitudes Network This is a searchable library of validity assessment tools for use in evidence syntheses. This website also provides access to training on the process of validity assessment.
  • Mixed Methods Appraisal Tool A tool that can be used to appraise a mix of studies that are included in a systematic review - qualitative research, RCTs, non-randomized studies, quantitative studies, mixed methods studies.
  • RoB 2 Tool Higgins JPT, Sterne JAC, Savović J, Page MJ, Hróbjartsson A, Boutron I, Reeves B, Eldridge S. A revised tool for assessing risk of bias in randomized trials In: Chandler J, McKenzie J, Boutron I, Welch V (editors). Cochrane Methods. Cochrane Database of Systematic Reviews 2016, Issue 10 (Suppl 1). dx.doi.org/10.1002/14651858.CD201601.
  • ROBINS-I Risk of Bias for non-randomized (observational) studies or cohorts of interventions Sterne J A, Hernán M A, Reeves B C, Savović J, Berkman N D, Viswanathan M et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions BMJ 2016; 355 :i4919 doi:10.1136/bmj.i4919
  • Scottish Intercollegiate Guidelines Network. Critical Appraisal Notes and Checklists "Methodological assessment of studies selected as potential sources of evidence is based on a number of criteria that focus on those aspects of the study design that research has shown to have a significant effect on the risk of bias in the results reported and conclusions drawn. These criteria differ between study types, and a range of checklists is used to bring a degree of consistency to the assessment process."
  • The TREND Statement (CDC) Des Jarlais DC, Lyles C, Crepaz N, and the TREND Group. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. Am J Public Health. 2004;94:361-366.
  • Assembling the Pieces of a Systematic Review, Chapter 8: Evaluating: Study Selection and Critical Appraisal.
  • How to Perform a Systematic Literature Review, Chapter: Critical Appraisal: Assessing the Quality of Studies.

Other library guides

  • Duke University Medical Center Library. Systematic Reviews: Assess for Quality and Bias
  • UNC Health Sciences Library. Systematic Reviews: Assess Quality of Included Studies
  • Last Updated: May 16, 2024 11:05 AM
  • URL: https://guides.lib.utexas.edu/systematicreviews


How To Write a Critical Appraisal


A critical appraisal is an academic approach that refers to the systematic identification of the strengths and weaknesses of a research article, with the intent of evaluating the usefulness and validity of the work’s research findings. As with all essays, you need to be clear, concise, and logical in your presentation of arguments, analysis, and evaluation. However, in a critical appraisal there are some specific sections which need to be considered, and these will form the main basis of your work.

Structure of a Critical Appraisal

Introduction.

Your introduction should introduce the work to be appraised and explain how you intend to proceed: set out how you will assess the article and the criteria you will use. Focusing your introduction on these areas will ensure that your readers understand your purpose and are interested to read on. It needs to be clear that you are undertaking a systematic dissection and examination of the indicated work to assess its validity and credibility, and this should be expressed in an engaging way.

Body of the Work

The body of the work should be separated into clear paragraphs that cover each section of the work, with sub-sections for each point being covered. In all paragraphs your perspectives should be backed up with hard evidence from credible sources (fully cited and referenced at the end), not expressed as opinion or your own personal point of view. Remember that this is a critical appraisal, a balanced evaluation, not simply a presentation of the negative parts of the work.

When appraising the introduction of the article, ask yourself whether the article answers the main question it poses. Alongside this, look at the date of publication: generally you want works to be within the past 5 years, unless they are seminal works which have strongly influenced subsequent developments in the field. Identify whether the journal in which the article was published is peer reviewed and, importantly, whether a hypothesis has been presented. Be objective, concise, and coherent in your presentation of this information.

Once you have appraised the introduction you can move on to the methods (or the body of the text if the work is not of a scientific or experimental nature). To effectively appraise the methods, you need to examine whether the approaches used to draw conclusions (i.e., the methodology) are appropriate for the research question or overall topic. If not, indicate why not in your appraisal, with evidence to back up your reasoning. Examine the sample population (if there is one), or the data gathered, and evaluate whether it is appropriate, sufficient, and viable, before considering the data collection methods and survey instruments used. Are they fit for purpose? Do they meet the needs of the paper? Again, your arguments should be backed up by strong, viable sources that have credible foundations and origins.

One of the most significant areas of appraisal is the results and conclusions presented by the authors of the work. In the case of the results, you need to identify whether facts and figures are presented to confirm findings, and assess whether any statistical tests used are viable, reliable, and appropriate to the work conducted, and whether they have been clearly explained and introduced during the work. Regarding the results presented by the authors, you need to present evidence that they have been unbiased and objective, and if not, present evidence of how they have been biased. In this section you should also dissect the results and identify whether any statistical significance reported is accurate and whether the results presented and discussed align with any tables or figures presented.

The final element of the body text is the appraisal of the discussion and conclusion sections. Here you need to identify whether the authors have drawn realistic conclusions from their available data, whether they have identified any clear limitations to their work, and whether the conclusions they have drawn are the same as those you would have drawn had you been presented with the findings.

The conclusion of the appraisal should not introduce any new information, but should be a concise summing up of the key points identified in the body text: a condensation (or précis) of all that you have already written. The aim is to bring the whole paper together and state an opinion, based on the evaluated evidence, of how valid and reliable the paper being appraised can be considered in its subject area. In all cases, you should reference and cite all sources used. To help you achieve a first-class critical appraisal, we have put together some key phrases that can help lift your work above that of others.

Key Phrases for a Critical Appraisal

  • Whilst the title might suggest…
  • The focus of the work appears to be…
  • The author challenges the notion that…
  • The author makes the claim that…
  • The article makes a strong contribution through…
  • The approach provides the opportunity to…
  • The authors consider…
  • The argument is not entirely convincing because…
  • However, whilst it can be agreed that… it should also be noted that…
  • Several crucial questions are left unanswered…
  • It would have been more appropriate to have stated that…
  • This framework extends and increases…
  • The authors correctly conclude that…
  • The authors' efforts can be considered as…
  • Less convincing is the generalisation that…
  • This appears to mislead readers by indicating that…
  • This research proves to be timely and particularly significant in the light of…

You may also like

How to Write a Critical Review of an Article

J Clin Diagn Res, v.11(5); 2017 May

Critical Appraisal of Clinical Research

Azzam Al-Jundi, Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.

Salah Sakka, Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.

Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and with patients' values and expectations, in the decision-making process for patient care. Being able to identify and appraise the best available evidence, and to integrate it with your own clinical experience and patients' values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.

Introduction

Decisions related to patient care are carefully made through an essential process of integrating the best existing evidence, clinical experience and patient preference. Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value and relevance, in order to guide professionals in their vital clinical decision making [ 1 ].

Critical appraisal is essential to:

  • Combat information overload;
  • Identify papers that are clinically relevant;
  • Support continuing professional development (CPD).

Carrying out Critical Appraisal:

Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.

Standard Common Questions:

  • What is the research question?
  • What is the study type (design)?
  • Selection issues.
  • What are the outcome factors and how are they measured?
  • What are the study factors and how are they measured?
  • What important potential confounders are considered?
  • What is the statistical method used in the study?
  • Statistical results.
  • What conclusions did the authors reach about the research question?
  • Are ethical issues considered?

The Critical Appraisal starts by double checking the following main sections:

I. Overview of the paper:

  • The publishing journal and the year
  • The article title: Does it state key trial objectives?
  • The author(s) and their institution(s)

The presence of a peer review process in a journal's acceptance protocols also adds robustness to the assessment criteria for research papers, and hence indicates a reduced likelihood of poor-quality research being published. Other areas to consider include the authors' declarations of interest and potential market bias. Attention should be paid to any declared funding or research grants, in order to check for conflicts of interest [ 2 ].

II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.

  • Aim of the study: it should be clearly stated.
  • Materials and Methods: the study design, the groups, the type of randomization process, the sample size, gender, age, the procedure rendered to each group and the measuring tool(s) should be clearly described.
  • Results: the measured variables with their statistical analysis and significance.
  • Conclusion: it must clearly answer the question of interest.

III. Introduction/Background section:

An excellent introduction will thoroughly reference earlier work related to the area under discussion and express the importance and limitations of what is already known [ 2 ].

-Why is this study considered necessary? What is its purpose? Was the purpose identified before the study, or is it a chance result revealed as part of 'data searching'?

-What has already been achieved, and how does this study differ?

-Does the scientific approach outline the advantages along with possible drawbacks associated with the intervention or observations?

IV. Methods and Materials section : Full details of how the study was actually carried out should be provided. Precise information should be given on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [ 3 ].

V. Results section : This section should clearly reveal what actually occurred to the subjects. The results may contain raw data and should explain the statistical analysis. These can be shown in related tables, diagrams and graphs.

VI. Discussion section : This section should include a thorough comparison of what is already known in the topic of interest with the clinical relevance of what has been newly established. Possible limitations and the need for further studies should also be discussed.

Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?

  • Does it address any source of potential bias?
  • Are interpretations consistent with the results?
  • How are null findings interpreted?
  • Does it mention how the findings of this study relate to previous work in the area?
  • Can the findings be generalized (external validity)?
  • Does it mention their clinical implications/applicability?
  • To what are the results/outcomes/findings applicable, and will they affect clinical practice?
  • Does the conclusion answer the study question?
  • Is the conclusion convincing?
  • Does the paper indicate ethics approval?
  • Can you identify potential ethical issues?
  • Do the results apply to the population in which you are interested?
  • Will you use the results of the study?

Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.

1-What is the research question?

For a study to be valuable, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Problem, Intervention, Comparison, Outcome (PICO) method [ 3 ].

P = Patient/Problem/Population: Does the research have a focused question, and what is the chief complaint? E.g., disease status, previous ailments, current medications.

I = Intervention: an appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment or adjunctive therapy.

C = Comparison: a suitable control or alternative, e.g., specific and limited to one alternative choice.

O = Outcomes: the desired results or patient-related consequences, e.g., eliminating symptoms, improving function, esthetics.

The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].

[Table/Fig-1]:

Categories of clinical questions and the related study designs.

2- What is the study type (design)?

The study design of the research is fundamental to the usefulness of the study.

In a clinical paper the methodology employed to generate the results should be fully explained. In general, all questions about the clinical query, the study design, the subjects and the measures taken to reduce bias and confounding should be adequately and thoroughly explored and answered.

Participants/Sample Population:

Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.

The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].

Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
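The arithmetic behind such a power calculation can be sketched in a few lines. The following Python snippet is purely illustrative (the event rates, significance level and power are invented), using the standard normal-approximation formula for comparing two proportions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size needed to detect a difference between two
    proportions, using the unpooled normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return ceil(n)

# Hypothetical trial: detect a rise in success rate from 50% to 60%.
print(sample_size_two_proportions(0.60, 0.50))  # 385 per group
```

Note how the required sample size grows rapidly as the expected difference shrinks, which is why underpowered trials so often miss real effects.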

  • Is the sample defined? Human or animal (type)? What population does it represent?
  • Does it mention eligibility criteria, with reasons?
  • Does it mention where and how the sample was recruited, selected and assessed?
  • Does it mention where the study was carried out?
  • Is the sample size justified and correctly calculated? Is it adequate to detect statistically and clinically significant results?
  • Does it mention a suitable study design/type?
  • Is the study type appropriate to the research question?
  • Is the study adequately controlled? Does it mention the type of randomization process? Does it mention the presence of a control group, or explain the lack of one?
  • Are the samples similar at baseline? Is sample attrition mentioned?
  • Does it report the number of participants/specimens at the start of the study, together with how many completed it and the reasons for any incomplete follow-up?
  • Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
  • Does it mention how the data were analysed?
  • Are any measurements taken likely to be valid?
Researchers use measuring techniques and instruments that have been shown to be valid and reliable.

Validity refers to the extent to which a test measures what it is supposed to measure (the extent to which the value obtained represents the object of interest).

  • Soundness and effectiveness of the measuring instrument;
  • What does the test measure?
  • Does it measure what it is supposed to measure?
  • How well and how accurately does it measure?

Reliability: In research, the term reliability means "repeatability" or "consistency".

Reliability refers to how consistent a test is on repeated measurements. It is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method used to assess the reliability of any measurements taken and what the intra-examiner reliability was [ 6 ].

3-Selection issues:

The following questions should be raised:

  • How were subjects chosen or recruited? If not randomly, are they representative of the population?
  • What type of blinding (masking) was used: single, double or triple?
  • Is there a control group? How was it chosen?
  • How are patients followed up? Who are the dropouts? Why and how many are there?
  • Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined and measured?
  • Is there a statement about sample size issues or statistical power (especially important in negative studies)?
  • If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
  • Are there selection biases?
  • In a case-control study, if exercise habits are to be compared:
    - Are the controls appropriate?
    - Were records of cases and controls reviewed blindly?
    - How were possible selection biases controlled (prevalence bias, admission rate bias, volunteer bias, recall bias, lead time bias, detection bias, etc.)?
  • In a cross-sectional study:
    - Was the sample selected in an appropriate manner (random, convenience, etc.)?
    - Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?
    - Were reliability (reproducibility) and validity reported?
  • In an intervention study, how were subjects recruited and assigned to groups?
  • In a cohort study, how many reached final follow-up?
    - Are the subjects representative of the population to which the findings are applied?
    - Is there evidence of volunteer bias? Was there adequate follow-up time?
    - What was the drop-out rate?

Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of these results, patients could be harmed.

Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].

Bias is the term used to describe an error, at any stage of the study, that is not due to chance. Bias leads to results that deviate systematically from the truth. As bias cannot be measured, researchers need to rely on good research design to minimize it [ 8 ]. To minimize bias within a study, the sample should be representative of the population. It is also imperative to consider the sample size and identify whether the study is adequately powered to produce statistically significant results, i.e., quoted p-values are <0.05 [ 9 ].

4-What are the outcome factors and how are they measured?

  • -Are all relevant outcomes assessed?
  • -Is measurement error an important source of bias?

5-What are the study factors and how are they measured?

  • -Are all the relevant study factors included in the study?
  • -Have the factors been measured using appropriate tools?

Data Analysis and Results:

- Were the tests appropriate for the data?

- Are confidence intervals or p-values given?

  • How strong is the association between intervention and outcome?
  • How precise is the estimate of the risk?
  • Does it clearly mention the main finding(s) and does the data support them?
  • Does it mention the clinical significance of the result?
  • Are adverse events, or the lack of them, mentioned?
  • Are all relevant outcomes assessed?
  • Was the sample size adequate to detect a clinically/socially significant result?
  • Are the results presented in a way that can inform health policy decisions?
  • Is there measurement error?
  • Is measurement error an important source of bias?

Confounding Factors:

A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].

6- What important potential confounders are considered?

  • -Are potential confounders examined and controlled for?
  • -Is confounding an important source of bias?

7- What is the statistical method in the study?

  • -Are the statistical methods described appropriate to compare participants for primary and secondary outcomes?
  • -Are statistical methods specified in sufficient detail (if I had access to the raw data, could I reproduce the analysis)?
  • -Were the tests appropriate for the data?
  • -Are confidence intervals or p-values given?
  • -Are results presented as absolute risk reduction as well as relative risk reduction?
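To see why it matters that results are reported as absolute as well as relative risk reduction, here is a small Python sketch with invented event rates:

```python
def risk_summary(control_rate, treatment_rate):
    """Absolute risk reduction (ARR), relative risk reduction (RRR)
    and number needed to treat (NNT) from two event rates."""
    arr = control_rate - treatment_rate
    rrr = arr / control_rate
    nnt = 1 / arr
    return arr, rrr, nnt

# Hypothetical trial: events fall from 20% (control) to 15% (treatment).
arr, rrr, nnt = risk_summary(0.20, 0.15)
print(f"ARR = {arr:.0%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
# ARR = 5%, RRR = 25%, NNT = 20
```

A "25% relative reduction" sounds impressive, yet the absolute reduction is only five percentage points: twenty patients must be treated to avoid one event.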

Interpretation of p-value:

The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. By convention, a p-value of less than 1 in 20 (p<0.05) is considered statistically significant.

  • When the p-value is less than the significance level, usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, we conclude that the result is not statistically significant and we fail to reject the null hypothesis.
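As an illustration of how such a p-value is computed, the following Python sketch runs a two-sided, two-proportion z-test (pooled normal approximation) on invented trial counts:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(x1, n1, x2, n2):
    """Two-sided p-value for a difference between two proportions,
    using the pooled normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical data: 45/100 events in one group vs 30/100 in the other.
p = two_proportion_z_test(45, 100, 30, 100)
print(f"p = {p:.4f}")  # below 0.05 => significant at the 5% level
```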

Confidence interval:

Repeating the same trial would not yield exactly the same results every time; on average, however, the results would fall within a certain range. Strictly, a 95% confidence interval means that if the study were repeated many times, about 95% of the intervals so constructed would contain the true effect size.
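This frequentist interpretation can be checked by simulation: draw many samples from a distribution with a known mean, build a 95% interval from each, and count how often the interval captures the truth. A minimal Python sketch with invented parameters:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)  # reproducible
true_mu, sigma, n, trials = 50.0, 10.0, 100, 2000

covered = 0
for _ in range(trials):
    sample = [random.gauss(true_mu, sigma) for _ in range(n)]
    m, se = mean(sample), stdev(sample) / sqrt(n)
    # 95% normal-approximation confidence interval for the mean
    covered += m - 1.96 * se <= true_mu <= m + 1.96 * se

print(f"coverage = {covered / trials:.3f}")  # close to 0.95
```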

8- Statistical results:

  • -Do statistical tests answer the research question?

Are statistical tests performed and comparisons made (data searching)?

Correct statistical analysis of results is crucial to the reliability of the conclusions drawn from a research paper. Depending on the study design and sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.

It is important to identify if this is appropriate for the study [ 9 ].

  • -Was the sample size adequate to detect a clinically/socially significant result?
  • -Are the results presented in a way that can inform health policy decisions?

Clinical significance:

Statistical significance as shown by a p-value is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful clinical improvement. The following questions should always be borne in mind:

  • -If the results are statistically significant, do they also have clinical significance?
  • -If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?

9- What conclusions did the authors reach about the study question?

Conclusions should ensure that any recommendations stated are supported by the results obtained, within the scope of the study. The authors should also address the limitations of the study, their effects on the outcomes, and suggestions for future studies [ 10 ].

  • -Are the questions posed in the study adequately addressed?
  • -Are the conclusions justified by the data?
  • -Do the authors extrapolate beyond the data?
  • -Are shortcomings of the study addressed and constructive suggestions given for future research?
Bibliography/References:

Do the citations follow one of the Council of Biological Editors’ (CBE) standard formats?

10- Are ethical issues considered?

If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].

Critical appraisal of RCTs: Factors to look for:

  • Allocation (randomization, stratification, confounders).
  • Follow up of participants (intention to treat).
  • Data collection (bias).
  • Sample size (power calculation).
  • Presentation of results (clear, precise).
  • Applicability to local population.

[ Table/Fig-2 ] summarizes the guidelines for Consolidated Standards of Reporting Trials CONSORT [ 12 ].

[Table/Fig-2]:

Summary of the CONSORT guidelines.

Critical appraisal of systematic reviews: systematic reviews provide an overview of all primary studies on a topic and try to obtain an overall picture of the results.

In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from selected studies may be included. Factors to look for:

  • Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
  • Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
  • Homogeneity of studies.

[ Table/Fig-3 ] summarizes the guidelines for Preferred Reporting Items for Systematic reviews and Meta-Analyses PRISMA [ 13 ].

[Table/Fig-3]:

Summary of PRISMA guidelines.

Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and indicating its relevance to the profession. It is a skill set, developed throughout a professional career, that, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.


Critical Appraisal Questionnaires

Critical appraisal is the process of carefully and systematically assessing the outcome of scientific research (evidence) to judge its trustworthiness, value and relevance in a particular context. Critical appraisal looks at the way a study is conducted and examines factors such as internal validity, generalizability and relevance.

Some initial appraisal questions you could ask are:

  • Is the evidence from a known, reputable source?
  • Has the evidence been evaluated in any way? If so, how and by whom?
  • How up-to-date is the evidence?

Second, you could look at the study itself and ask the following general appraisal questions:

  • How was the outcome measured?
  • Is that a reliable way to measure?
  • How large was the effect size?
  • What implications does the study have for your practice? Is it relevant?
  • Can the results be applied to your organization?

Questionnaires

If you would like to critically appraise a study, we strongly recommend using the app we have developed for iOS and Android:  CAT Manager App

You could also consider using the following appraisal questionnaires (checklists) for specific study designs, but we do not recommend this. 

Appraisal questionnaires are available for:

  • a meta-analysis or systematic review
  • a controlled study
  • a cohort or panel study
  • a case control study
  • a cross-sectional study (survey)
  • a qualitative study
  • a case study


Critically Analyzing Information Sources: Critical Appraisal and Analysis


Initial Appraisal: Reviewing the Source

A. Author

  • What are the author's credentials--institutional affiliation (where he or she works), educational background, past writings, or experience? Is the book or article written on a topic in the author's area of expertise? You can use the various Who's Who publications for the U.S. and other countries and for specific subjects and the biographical information located in the publication itself to help determine the author's affiliation and credentials.
  • Has your instructor mentioned this author? Have you seen the author's name cited in other sources or bibliographies? Respected authors are cited frequently by other scholars. For this reason, always note those names that appear in many different sources.
  • Is the author associated with a reputable institution or organization? What are the basic values or goals of the organization or institution?

B. Date of Publication

  • When was the source published? This date is often located on the face of the title page below the name of the publisher. If it is not there, look for the copyright date on the reverse of the title page. On Web pages, the date of the last revision is usually at the bottom of the home page, sometimes every page.
  • Is the source current or out-of-date for your topic? Topic areas of continuing and rapid development, such as the sciences, demand more current information. On the other hand, topics in the humanities often require material that was written many years ago. At the other extreme, some news sources on the Web now note the hour and minute that articles are posted on their site.

C. Edition or Revision

Is this a first edition of this publication or not? Further editions indicate a source has been revised and updated to reflect changes in knowledge, include omissions, and harmonize with its intended reader's needs. Also, many printings or editions may indicate that the work has become a standard source in the area and is reliable. If you are using a Web source, do the pages indicate revision dates?

D. Publisher

Note the publisher. If the source is published by a university press, it is likely to be scholarly. Although the fact that the publisher is reputable does not necessarily guarantee quality, it does show that the publisher may have high regard for the source being published.

E. Title of Journal

Is this a scholarly or a popular journal? This distinction is important because it indicates different levels of complexity in conveying ideas. If you need help in determining the type of journal, see Distinguishing Scholarly from Non-Scholarly Periodicals . Or you may wish to check your journal title in the latest edition of Katz's Magazines for Libraries (Olin Reference Z 6941 .K21, shelved at the reference desk) for a brief evaluative description.

Critical Analysis of the Content

Having made an initial appraisal, you should now examine the body of the source. Read the preface to determine the author's intentions for the book. Scan the table of contents and the index to get a broad overview of the material it covers. Note whether bibliographies are included. Read the chapters that specifically address your topic. Reading the article abstract and scanning the table of contents of a journal or magazine issue is also useful. As with books, the presence and quality of a bibliography at the end of the article may reflect the care with which the authors have prepared their work.

A. Intended Audience

What type of audience is the author addressing? Is the publication aimed at a specialized or a general audience? Is this source too elementary, too technical, too advanced, or just right for your needs?

B. Objective Reasoning

  • Is the information covered fact, opinion, or propaganda? It is not always easy to separate fact from opinion. Facts can usually be verified; opinions, though they may be based on factual information, evolve from the interpretation of facts. Skilled writers can make you think their interpretations are facts.
  • Does the information appear to be valid and well-researched, or is it questionable and unsupported by evidence? Assumptions should be reasonable. Note errors or omissions.
  • Are the ideas and arguments advanced more or less in line with other works you have read on the same topic? The more radically an author departs from the views of others in the same field, the more carefully and critically you should scrutinize his or her ideas.
  • Is the author's point of view objective and impartial? Is the language free of emotion-arousing words and bias?

C. Coverage

  • Does the work update other sources, substantiate other materials you have read, or add new information? Does it extensively or marginally cover your topic? You should explore enough sources to obtain a variety of viewpoints.
  • Is the material primary or secondary in nature? Primary sources are the raw material of the research process. Secondary sources are based on primary sources. For example, if you were researching Konrad Adenauer's role in rebuilding West Germany after World War II, Adenauer's own writings would be one of many primary sources available on this topic. Others might include relevant government documents and contemporary German newspaper articles. Scholars use this primary material to help generate historical interpretations--a secondary source. Books, encyclopedia articles, and scholarly journal articles about Adenauer's role are considered secondary sources. In the sciences, journal articles and conference proceedings written by experimenters reporting the results of their research are primary documents. Choose both primary and secondary sources when you have the opportunity.

D. Writing Style

Is the publication organized logically? Are the main points clearly presented? Do you find the text easy to read, or is it stilted or choppy? Is the author's argument repetitive?

E. Evaluative Reviews

  • Locate critical reviews of books in a reviewing source, such as Articles & Full Text, Book Review Index, Book Review Digest, and ProQuest Research Library. Is the review positive? Is the book under review considered a valuable contribution to the field? Does the reviewer mention other books that might be better? If so, locate these sources for more information on your topic.
  • Do the various reviewers agree on the value or attributes of the book, or has it aroused controversy among the critics?
  • For Web sites, consider consulting this evaluation source from UC Berkeley.

Permissions Information

If you wish to use or adapt any or all of the content of this Guide go to Cornell Library's Research Guides Use Conditions to review our use permissions and our Creative Commons license.

  • Last Updated: Apr 18, 2022 1:43 PM
  • URL: https://guides.library.cornell.edu/critically_analyzing


Best Practice for Literature Searching

  • Literature Search Best Practice
  • What is literature searching?
  • What are literature reviews?
  • Hierarchies of evidence
  • 1. Managing references
  • 2. Defining your research question
  • 3. Where to search
  • 4. Search strategy
  • 5. Screening results
  • 6. Paper acquisition
  • 7. Critical appraisal
  • Further resources
  • Training opportunities and videos

What is critical appraisal?

We critically appraise information constantly, formally or informally, to determine if something is going to be valuable for our purpose and whether we trust the content it provides.

In the context of a literature search, critical appraisal is the process of systematically evaluating and assessing the research you have found in order to determine its quality and validity. It is essential to evidence-based practice.

More formally, critical appraisal is a systematic evaluation of research papers in order to answer the following questions:

  • Does this study address a clearly focused question?
  • Did the study use valid methods to address this question?
  • Are there factors, based on the study type, that might have confounded its results?
  • Are the valid results of this study important?
  • What are the limits of what can be concluded from the study?
  • Are these valid, important, though possibly limited, results applicable to my own research?

What is quality and how do you assess it?

In research we commissioned in 2018, researchers told us that they define ‘high quality evidence’ by factors such as:

  • Publication in a journal they consider reputable or with a high Impact Factor.
  • The peer review process, coordinated by publishers and carried out by other researchers.
  • Research institutions and authors who undertake quality research, and with whom they are familiar.

In other words, researchers use their own experience and expertise to assess quality.

However, students and early career researchers are unlikely to have built up that level of experience, and no matter how experienced a researcher is, there are certain times (for instance, when conducting a systematic review) when they will need to take a very close look at the validity of research articles.

There are checklists available to help with critical appraisal. The checklists outline the key questions to ask for a specific study design. Examples can be found in the Critical Appraisal section of this guide, and in the Further Resources section.

You may also find it beneficial to discuss issues such as quality and reputation with:

  • Your principal investigator (PI)
  • Your supervisor or other senior colleagues
  • Journal clubs. These are sometimes held by faculty or within organisations to encourage researchers to work together to discover and critically appraise information.
  • Topic-specific working groups

The more you practice critical appraisal, the quicker and more confident you will become at it.

  • Last Updated: Sep 15, 2023 2:17 PM
  • URL: https://ifis.libguides.com/literature_search_best_practice

  • Volume 25, Issue 1
  • Critical appraisal of qualitative research: necessity, partialities and the issue of bias

  • Veronika Williams (http://orcid.org/0000-0001-5660-8224)
  • Anne-Marie Boylan
  • David Nunan (http://orcid.org/0000-0003-4597-1276)
  • Nuffield Department of Primary Care Health Sciences, University of Oxford, Radcliffe Observatory Quarter, Oxford, UK
  • Correspondence to Dr Veronika Williams, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford OX2 6GG, UK; veronika.williams{at}phc.ox.ac.uk

https://doi.org/10.1136/bmjebm-2018-111132


Introduction

Qualitative evidence allows researchers to analyse human experience and provides useful exploratory insights into experiential matters and meaning, often explaining the ‘how’ and ‘why’. As we have argued previously,1 qualitative research has an important place within evidence-based healthcare, contributing to, among other things, policy on patient safety,2 prescribing,3 4 and understanding chronic illness.5 Equally, it offers additional insight into quantitative studies, explaining contextual factors surrounding a successful intervention, or why an intervention might have ‘failed’ or ‘succeeded’, where effect sizes cannot. It is for these reasons that the MRC strongly recommends including qualitative evaluations when developing and evaluating complex interventions.6

Critical appraisal of qualitative research

Is it necessary?

Although the importance of qualitative research to improve health services and care is now increasingly widely supported (discussed in paper 1), the role of appraising the quality of qualitative health research is still debated.8 10 Despite a large body of literature focusing on appraisal and rigour,9 11–15 often referred to as ‘trustworthiness’16 in qualitative research, there remains debate about how, and even whether, to critically appraise qualitative research.8–10 17–19 However, if we are to make a case for qualitative research as integral to evidence-based healthcare, then any argument to omit a crucial element of evidence-based practice is difficult to justify. That said, simply applying the standards of rigour used to appraise studies based on the positivist paradigm would be misplaced, given the different epistemological underpinnings of the two types of data. (Positivism depends on quantifiable observations to test hypotheses and assumes that the researcher is independent of the study. Research situated within a positivist paradigm is based purely on facts, considers the world to be external and objective, and is concerned with validity, reliability and generalisability as measures of rigour.)

Given its scope and its place within health research, the robust and systematic appraisal of qualitative research to assess its trustworthiness is as paramount to its implementation in clinical practice as any other type of research. It is important to appraise different qualitative studies in relation to the specific methodology used because the methodological approach is linked to the ‘outcome’ of the research (eg, theory development, phenomenological understandings and credibility of findings). Moreover, appraisal needs to go beyond merely describing the specific details of the methods used (eg, how data were collected and analysed), with additional focus needed on the overarching research design and its appropriateness in accordance with the study remit and objectives.

Poorly conducted qualitative research has been described as ‘worthless, becomes fiction and loses its utility’.20 However, without a deep understanding of concepts of quality in qualitative research, or at least an appropriate means to assess its quality, good qualitative research also risks being dismissed, particularly in the context of evidence-based healthcare, where end users may not be well versed in this paradigm.

How is appraisal currently performed?

Appraising the quality of qualitative research is not a new concept: there are a number of published appraisal tools, frameworks and checklists in existence.21–23 An important and often overlooked point is the confusion between tools designed for appraising methodological quality and reporting guidelines designed to assess the quality of methods reporting. An example is the Consolidated Criteria for Reporting Qualitative Research (COREQ)24 checklist, which was designed to provide standards for authors when reporting qualitative research but is often mistaken for a methods appraisal tool.10

Broadly speaking, there are two types of critical appraisal approaches for qualitative research: checklists and frameworks. Checklists have often been criticised for confusing quality in qualitative research with ‘technical fixes’,21 25 resulting in the erroneous prioritisation of particular aspects of methodological processes over others (eg, multiple coding and triangulation). It could be argued that a checklist approach adopts the positivist paradigm, where the focus is on objectively assessing ‘quality’ and the assumption is that the researcher is independent of the research conducted. This may result in the application of quantitative understandings of bias in order to judge aspects of recruitment, sampling, data collection and analysis in qualitative research papers. One of the most widely used appraisal tools is the Critical Appraisal Skills Programme (CASP)26 tool, which, along with the JBI QARI (Joanna Briggs Institute Qualitative Assessment and Review Instrument),27 presents an example that tends to mimic the quantitative approach to appraisal. The CASP qualitative tool follows that of the other CASP appraisal tools for quantitative research designs developed in the 1990s. The similarities are therefore unsurprising given the status of qualitative research at that time.

Frameworks focus on the overarching concepts of quality in qualitative research, including transparency, reflexivity, dependability and transferability (see box 1).11–13 15 16 20 28 However, unless the reader is familiar with these concepts, their meaning and impact, and how to interpret them, they will have difficulty applying them when critically appraising a paper.

The main issue concerning currently available checklist and framework appraisal methods is that they take a broad-brush approach to ‘qualitative’ research as a whole, with few, if any, sufficiently differentiating between the different methodological approaches (eg, grounded theory, interpretative phenomenology, discourse analysis) or different methods of data collection (interviewing, focus groups and observations). In this sense, it is akin to taking the entire field of ‘quantitative’ study designs and applying a single method or tool for their quality appraisal. Checklists therefore offer only a blunt and arguably ineffective tool for qualitative research, and potentially promote an incomplete understanding of good ‘quality’ in qualitative research. Likewise, current framework methods do not take into account how concepts differ in their application across the variety of qualitative approaches and, like checklists, do not differentiate between different qualitative methodologies.

On the need for specific appraisal tools

Current approaches to the appraisal of the methodological rigour of the differing types of qualitative research converge towards checklists or frameworks. More importantly, the current tools do not explicitly acknowledge the prejudices that may be present in the different types of qualitative research.

Box 1. Concepts of rigour or trustworthiness within qualitative research31

Transferability: the extent to which the presented study allows readers to make connections between the study’s data and wider community settings, ie, transfer conceptual findings to other contexts.

Credibility: extent to which a research account is believable and appropriate, particularly in relation to the stories told by participants and the interpretations made by the researcher.

Reflexivity: refers to the researchers’ continuous examination and explanation of how they have influenced the research project, from choosing a research question to sampling, data collection, analysis and interpretation of data.

Transparency: making explicit the whole research process from sampling strategies, data collection to analysis. The rationale for decisions made is as important as the decisions themselves.

However, we often talk about these concepts in general terms, and it might be helpful to give some explicit examples of how the ‘technical processes’ affect these, for example, partialities related to:

Selection: recruiting participants via gatekeepers, such as healthcare professionals or clinicians, who may select them based on whether they believe them to be ‘good’ participants for interviews/focus groups.

Data collection: a poor interview guide, with closed questions that encourage yes/no answers and/or leading questions.

Reflexivity and transparency: where researchers may focus their analysis on preconceived ideas rather than ground their analysis in the data and do not reflect on the impact of this in a transparent way.

The lack of tailored, method-specific appraisal tools has potentially contributed to the poor uptake and use of qualitative research for informing evidence-based decision making. To improve this situation, we propose the need for more robust quality appraisal tools that explicitly encompass the core design aspects of all qualitative research (sampling, data collection, analysis) but also consider the specific partialities that can arise with different methodological approaches. Such tools might draw on the strengths of current frameworks and checklists while providing users with sufficient understanding of concepts of rigour in relation to the different types of qualitative methods. We provide an outline of such tools in the third and final paper in this series.

As qualitative research becomes ever more embedded in health science research, and in order for that research to have better impact on healthcare decisions, we need to rethink critical appraisal and develop tools that allow differentiated evaluations of the myriad of qualitative methodological approaches rather than continuing to treat qualitative research as a single unified approach.

References

  • CASP (Critical Appraisal Skills Programme). date unknown. http://www.phru.nhs.uk/Pages/PHD/CASP.htm
  • The Joanna Briggs Institute. JBI QARI Critical appraisal checklist for interpretive & critical research. Adelaide: The Joanna Briggs Institute, 2014.

Contributors VW and DN: conceived the idea for this article. VW: wrote the first draft. AMB and DN: contributed to the final draft. All authors approve the submitted article.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.

Correction notice This article has been updated since its original publication to include a new reference (reference 1.)
