Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Writing a Literature Review

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research (scholarship) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis are
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:
  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from fields that use different research methods, you can organize the review by approach: for example, qualitative versus quantitative research, empirical versus theoretical scholarship, or research divided by sociological, historical, or cultural sources.
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theoretical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources.

As you’re doing your research, create an annotated bibliography (see our page on this type of document). Much of the information used in an annotated bibliography can also be used in a literature review, so you’ll not only be partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

Duke University Libraries

Literature Reviews

What is a literature review?

Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject.

Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field.

Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in academic literature.

Identifying Gaps: Aims to pinpoint areas where there is a lack of research or unresolved questions, highlighting opportunities for further investigation.

Contextualization: Enables researchers to understand how their work fits into the broader academic conversation and contributes to the existing body of knowledge.

tl;dr  A literature review critically examines and synthesizes existing scholarly research and publications on a specific topic to provide a comprehensive understanding of the current state of knowledge in the field.

What is a literature review NOT?

❌ An annotated bibliography

❌ Original research

❌ A summary

❌ Something to be conducted at the end of your research

❌ An opinion piece

❌ A chronological compilation of studies

Literature Reviews: An Overview for Graduate Students

While this 9-minute video from NCSU is geared toward graduate students, it is useful for anyone conducting a literature review.

Writing the literature review: A practical guide

Available 3rd floor of Perkins

Writing literature reviews: A guide for students of the social and behavioral sciences

Available online!

So, you have to write a literature review: A guided workbook for engineers

Telling a research story: Writing a literature review

The literature review: Six steps to success

Systematic approaches to a successful literature review

Request from Duke Medical Center Library

Doing a systematic review: A student's guide

  • Last Updated: May 17, 2024 8:42 AM
  • URL: https://guides.library.duke.edu/litreviews

Research Methods

Literature Review

  • What is a Literature Review?
  • What is NOT a Literature Review?
  • Purposes of a Literature Review
  • Types of Literature Reviews
  • Literature Reviews vs. Systematic Reviews
  • Systematic vs. Meta-Analysis

Literature Review  is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.

Also, we can define a literature review as the collected body of scholarly works related to a topic:

  • Summarizes and analyzes previous research relevant to a topic
  • Includes scholarly books and articles published in academic journals
  • Can be a specific scholarly paper or a section in a research paper

The objective of a Literature Review is to find previously published scholarly works relevant to a specific topic in order to:

  • Help gather ideas or information
  • Keep up to date on current trends and findings
  • Help develop new questions

A literature review is important because it:

  • Explains the background of research on a topic.
  • Demonstrates why a topic is significant to a subject area.
  • Helps focus your own research questions or problems.
  • Discovers relationships between research studies/ideas.
  • Suggests unexplored ideas or populations.
  • Identifies major themes, concepts, and researchers on a topic.
  • Tests assumptions; may help counter preconceived ideas and remove unconscious bias.
  • Identifies critical gaps, points of disagreement, or potentially flawed methodology or theoretical approaches.
  • Indicates potential directions for future research.

All content in this section is from Literature Review Research from Old Dominion University 

Keep in mind that a literature review is NOT:

Not an essay 

Not an annotated bibliography  in which you summarize each article that you have reviewed.  A literature review goes beyond basic summarizing to focus on the critical analysis of the reviewed works and their relationship to your research question.

Not a research paper   where you select resources to support one side of an issue versus another.  A lit review should explain and consider all sides of an argument in order to avoid bias, and areas of agreement and disagreement should be highlighted.

A literature review serves several purposes. For example, it

  • provides thorough knowledge of previous studies; introduces seminal works.
  • helps focus one’s own research topic.
  • identifies a conceptual framework for one’s own research questions or problems; indicates potential directions for future research.
  • suggests previously unused or underused methodologies, designs, quantitative and qualitative strategies.
  • identifies gaps in previous studies; identifies flawed methodologies and/or theoretical approaches; avoids replication of mistakes.
  • helps the researcher avoid repetition of earlier research.
  • suggests unexplored populations.
  • determines whether past studies agree or disagree; identifies controversy in the literature.
  • tests assumptions; may help counter preconceived ideas and remove unconscious bias.

As Kennedy (2007) notes*, it is important to think of knowledge in a given field as consisting of three layers. First, there are the primary studies that researchers conduct and publish. Second are the reviews of those studies that summarize and offer new interpretations built from and often extending beyond the original studies. Third, there are the perceptions, conclusions, opinions, and interpretations that are shared informally and become part of the lore of the field. In composing a literature review, it is important to note that it is often this third layer of knowledge that is cited as "true" even though it often has only a loose relationship to the primary studies and secondary literature reviews.

Given this, while literature reviews are designed to provide an overview and synthesis of pertinent sources you have explored, there are several approaches to how they can be done, depending upon the type of analysis underpinning your study. Listed below are definitions of types of literature reviews:

Argumentative Review      This form examines literature selectively in order to support or refute an argument, deeply embedded assumption, or philosophical problem already established in the literature. The purpose is to develop a body of literature that establishes a contrarian viewpoint. Given the value-laden nature of some social science research [e.g., educational reform; immigration control], argumentative approaches to analyzing the literature can be a legitimate and important form of discourse. However, note that they can also introduce problems of bias when they are used to make summary claims of the sort found in systematic reviews.

Integrative Review      Considered a form of research that reviews, critiques, and synthesizes representative literature on a topic in an integrated way such that new frameworks and perspectives on the topic are generated. The body of literature includes all studies that address related or identical hypotheses. A well-done integrative review meets the same standards as primary research in regard to clarity, rigor, and replication.

Historical Review      Few things rest in isolation from historical precedent. Historical reviews are focused on examining research throughout a period of time, often starting with the first time an issue, concept, theory, or phenomenon emerged in the literature, then tracing its evolution within the scholarship of a discipline. The purpose is to place research in a historical context to show familiarity with state-of-the-art developments and to identify the likely directions for future research.

Methodological Review      A review does not always focus on what someone said [content], but on how they said it [method of analysis]. This approach provides a framework of understanding at different levels (i.e., theory, substantive fields, research approaches, and data collection and analysis techniques). It enables researchers to draw on a wide variety of knowledge, ranging from the conceptual level to practical documents for use in fieldwork, in areas such as ontological and epistemological considerations, quantitative and qualitative integration, sampling, interviewing, data collection, and data analysis. It also helps highlight many ethical issues that we should be aware of and consider as we go through our study.

Systematic Review      This form consists of an overview of existing evidence pertinent to a clearly formulated research question, which uses pre-specified and standardized methods to identify and critically appraise relevant research, and to collect, report, and analyse data from the studies that are included in the review. Typically it focuses on a very specific empirical question, often posed in a cause-and-effect form, such as "To what extent does A contribute to B?"

Theoretical Review      The purpose of this form is to concretely examine the corpus of theory that has accumulated in regard to an issue, concept, theory, or phenomenon. The theoretical literature review helps establish what theories already exist, the relationships between them, and to what degree the existing theories have been investigated, and it helps develop new hypotheses to be tested. Often this form is used to help establish a lack of appropriate theories or to reveal that current theories are inadequate for explaining new or emerging research problems. The unit of analysis can focus on a theoretical concept or a whole theory or framework.

* Kennedy, Mary M. "Defining a Literature."  Educational Researcher  36 (April 2007): 139-147.

All content in this section is from The Literature Review created by Dr. Robert Larabee USC

Robinson, P. and Lowe, J. (2015),  Literature reviews vs systematic reviews.  Australian and New Zealand Journal of Public Health, 39: 103-103. doi: 10.1111/1753-6405.12393

What's in the name? The difference between a Systematic Review and a Literature Review, and why it matters . By Lynn Kysh from University of Southern California

Systematic review or meta-analysis?

A  systematic review  answers a defined research question by collecting and summarizing all empirical evidence that fits pre-specified eligibility criteria.

A  meta-analysis  is the use of statistical methods to summarize the results of these studies.

Systematic reviews, just like other research articles, can be of varying quality. They are a significant piece of work (the Centre for Reviews and Dissemination at York estimates that a team will take 9-24 months), and to be useful to other researchers and practitioners they should have:

  • clearly stated objectives with pre-defined eligibility criteria for studies
  • explicit, reproducible methodology
  • a systematic search that attempts to identify all studies
  • assessment of the validity of the findings of the included studies (e.g. risk of bias)
  • systematic presentation, and synthesis, of the characteristics and findings of the included studies

Not all systematic reviews contain meta-analysis. 

Meta-analysis is the use of statistical methods to summarize the results of independent studies. By combining information from all relevant studies, meta-analysis can provide more precise estimates of the effects of health care than those derived from the individual studies included within a review.  More information on meta-analyses can be found in  Cochrane Handbook, Chapter 9 .

A meta-analysis goes beyond critique and integration and conducts secondary statistical analysis on the outcomes of similar studies.  It is a systematic review that uses quantitative methods to synthesize and summarize the results.

An advantage of a meta-analysis is that it evaluates research findings in a more objective, quantitative way. Not all topics, however, have sufficient research evidence to allow a meta-analysis to be conducted. In that case, an integrative review is an appropriate strategy.

Some of the content in this section is from Systematic reviews and meta-analyses: step by step guide created by Kate McAllister.

  • Last Updated: Aug 21, 2023 4:07 PM
  • URL: https://guides.lib.udel.edu/researchmethods

SWRK 330 - Social Work Research Methods

What is a Literature Review?

A literature review  summarizes and discusses previous publications  on a topic.

It should also:

  • explore past research and its strengths and weaknesses.
  • be used to validate the target and methods you have chosen for your proposed research.
  • consist of books and scholarly journals that provide research examples of populations or settings similar to your own, as well as community resources to document the need for your proposed research.
  • be completed in the correct citation format requested by your professor (see the Citations Tab).

The literature review does not present new primary scholarship.

Access Purdue  OWL's Social Work Literature Review Guidelines here .  

Empirical Research is research that is based on experimentation or observation, i.e., evidence. Such research is often conducted to answer a specific question or to test a hypothesis (educated guess).

How do you know if a study is empirical? Read the subheadings within the article, book, or report and look for a description of the research "methodology."  Ask yourself: Could I recreate this study and test these results?

These are some key features to look for when identifying empirical research.

NOTE: Not all of these features will be present in every empirical research article; some may be excluded. Use this only as a guide.

  • Statement of methodology
  • Research questions are clear and measurable
  • Individuals, groups, or subjects being studied are identified/defined
  • Data is presented regarding the findings
  • Controls or instruments such as surveys or tests were used
  • There is a literature review
  • There is discussion of the results included
  • Citations/references are included

See also Empirical Research Guide

  • Last Updated: Feb 6, 2024 8:38 AM
  • URL: https://libguides.csuchico.edu/SWRK330

Meriam Library | CSU, Chico

Module 2 Chapter 3: What is Empirical Literature & Where can it be Found?

In Module 1, you read about the problem of pseudoscience. Here, we revisit the issue in addressing how to locate and assess scientific or empirical literature. In this chapter you will read about:

  • distinguishing between what IS and IS NOT empirical literature
  • how and where to locate empirical literature for understanding diverse populations, social work problems, and social phenomena.

Probably the most important take-home lesson from this chapter is that one source is not sufficient for being well-informed on a topic. It is important to locate multiple sources of information and to critically appraise the points of convergence and divergence in the information acquired from different sources. This is especially true for emerging and poorly understood topics, as well as for answering complex questions.

What Is Empirical Literature

Social workers often need to locate valid, reliable information concerning the dimensions of a population group or subgroup, a social work problem, or social phenomenon. They might also seek information about the way specific problems or resources are distributed among the populations encountered in professional practice. Or, social workers might be interested in finding out about the way that certain people experience an event or phenomenon. Empirical literature resources may provide answers to many of these types of social work questions. In addition, resources containing data regarding social indicators may also prove helpful. Social indicators are the “facts and figures” statistics that describe the social, economic, and psychological factors that have an impact on the well-being of a community or other population group. The United Nations (UN) and the World Health Organization (WHO) are examples of organizations that monitor social indicators at a global level: dimensions of population trends (size, composition, growth/loss), health status (physical, mental, behavioral, life expectancy, maternal and infant mortality, fertility/child-bearing, and diseases like HIV/AIDS), housing and quality of sanitation (water supply, waste disposal), education and literacy, and work/income/unemployment/economics, for example.

Three characteristics stand out in empirical literature compared to other types of information available on a topic of interest: systematic observation and methodology, objectivity, and transparency/replicability/reproducibility. Let’s look a little more closely at these three features.

Systematic Observation and Methodology. The hallmark of empiricism is “repeated or reinforced observation of the facts or phenomena” (Holosko, 2006, p. 6). In empirical literature, established research methodologies and procedures are systematically applied to answer the questions of interest.

Objectivity. Gathering “facts,” whatever they may be, drives the search for empirical evidence (Holosko, 2006). Authors of empirical literature are expected to report the facts as observed, whether or not these facts support the investigators’ original hypotheses. Research integrity demands that the information be provided in an objective manner, reducing sources of investigator bias to the greatest possible extent.

Transparency and Replicability/Reproducibility.   Empirical literature is reported in such a manner that other investigators understand precisely what was done and what was found in a particular research study—to the extent that they could replicate the study to determine whether the findings are reproduced when repeated. The outcomes of an original and replication study may differ, but a reader could easily interpret the methods and procedures leading to each study’s findings.

What is NOT Empirical Literature

By now, it is probably obvious to you that literature based on “evidence” that is not developed in a systematic, objective, transparent manner is not empirical literature. On one hand, non-empirical types of professional literature may have great significance to social workers. For example, social work scholars may produce articles that are clearly identified as describing a new intervention or program without evaluative evidence, critiquing a policy or practice, or offering a tentative, untested theory about a phenomenon. These resources are useful in educating ourselves about possible issues or concerns. But, even if they are informed by evidence, they are not empirical literature. Here is a list of several sources of information that do not meet the standard of being called empirical literature:

  • your course instructor’s lectures
  • political statements
  • advertisements
  • newspapers & magazines (journalism)
  • television news reports & analyses (journalism)
  • many websites, Facebook postings, Twitter tweets, and blog postings
  • the introductory literature review in an empirical article

You may be surprised to see the last two included in this list. Like the other sources of information listed, these sources also might lead you to look for evidence. But, they are not themselves sources of evidence. They may summarize existing evidence, but in the process of summarizing (like your instructor’s lectures), information is transformed, modified, reduced, condensed, and otherwise manipulated in such a manner that you may not see the entire, objective story. These are called secondary sources, as opposed to the original, primary source of evidence. In relying solely on secondary sources, you sacrifice your own critical appraisal and thinking about the original work—you are “buying” someone else’s interpretation and opinion about the original work, rather than developing your own interpretation and opinion. What if they got it wrong? How would you know if you did not examine the primary source for yourself? Consider the following as an example of “getting it wrong” being perpetuated.

Example: Bullying and School Shootings . One result of the heavily publicized April 1999 school shooting incident at Columbine High School (Colorado) was a heavy emphasis placed on bullying as a causal factor in these incidents (Mears, Moon, & Thielo, 2017), “creating a powerful master narrative about school shootings” (Raitanen, Sandberg, & Oksanen, 2017, p. 3). Naturally, with an identified cause, a great deal of effort was devoted to anti-bullying campaigns and interventions for enhancing resilience among youth who experience bullying. However important these strategies might be for promoting positive mental health, preventing poor mental health, and possibly preventing suicide among school-aged children and youth, it is a mistaken belief that this can prevent school shootings (Mears, Moon, & Thielo, 2017). Many times the accounts of the perpetrators having been bullied come from potentially inaccurate third-party accounts, rather than the perpetrators themselves; bullying was not involved in all instances of school shooting; a perpetrator’s perception of being bullied/persecuted is not necessarily accurate; many who experience severe bullying do not perpetrate these incidents; bullies are the least targeted shooting victims; perpetrators of the shooting incidents were often bullying others; and, bullying is only one of many important factors associated with perpetrating such an incident (Ioannou, Hammond, & Simpson, 2015; Mears, Moon, & Thielo, 2017; Newman & Fox, 2009; Raitanen, Sandberg, & Oksanen, 2017). While mass media reports deliver bullying as a means of explaining the inexplicable, the reality is not so simple: “The connection between bullying and school shootings is elusive” (Langman, 2014), and “the relationship between bullying and school shooting is, at best, tenuous” (Mears, Moon, & Thielo, 2017, p. 940).
The point is, when a narrative becomes this publicly accepted, it is difficult to sort out truth and reality without going back to original sources of information and evidence.


What May or May Not Be Empirical Literature: Literature Reviews

Investigators typically engage in a review of existing literature as they develop their own research studies. The review informs them about where knowledge gaps exist, methods previously employed by other scholars, limitations of prior work, and previous scholars’ recommendations for directing future research. These reviews may appear as a published article, without new study data being reported (see Fields, Anderson, & Dabelko-Schoeny, 2014 for example). Or, the literature review may appear in the introduction to their own empirical study report. These literature reviews are not considered to be empirical evidence sources themselves, although they may be based on empirical evidence sources. One reason is that the authors of a literature review may or may not have engaged in a systematic search process, identifying a full, rich, multi-sided pool of evidence reports.

There is, however, a type of review that applies systematic methods and is, therefore, considered to be more strongly rooted in evidence: the systematic review .

Systematic review of literature. A systematic review is a type of literature report in which established methods have been systematically and objectively applied to locating and synthesizing a body of literature. The systematic review report is characterized by a great deal of transparency about the methods used and the decisions made in the review process, and it is replicable. Thus, it meets the criteria for empirical literature: systematic observation and methodology, objectivity, and transparency/reproducibility. We will work a great deal more with systematic reviews in the second course, SWK 3402, since they are important tools for understanding interventions. They are somewhat less common, but not unheard of, in helping us understand diverse populations, social work problems, and social phenomena.

Locating Empirical Evidence

Social workers have available a wide array of tools and resources for locating empirical evidence in the literature. These can be organized into four general categories.

Journal Articles. A number of professional journals publish articles where investigators report on the results of their empirical studies. However, it is important to know how to distinguish between empirical and non-empirical manuscripts in these journals. A key indicator, though not the only one, involves a peer review process . Many professional journals require that manuscripts undergo a process of peer review before they are accepted for publication. This means that the authors’ work is shared with scholars who provide feedback to the journal editor as to the quality of the submitted manuscript. The editor then makes a decision based on the reviewers’ feedback:

  • Accept as is
  • Accept with minor revisions
  • Request that a revision be resubmitted (no assurance of acceptance)

When a “revise and resubmit” decision is made, the piece will go back through the review process to determine if it is now acceptable for publication and that all of the reviewers’ concerns have been adequately addressed. Editors may also reject a manuscript because it is a poor fit for the journal, based on its mission and audience, rather than sending it for review consideration.


Indicators of journal relevance. Not all journals are equally relevant to every question being asked of the literature. Journals may overlap considerably in the topics they cover; in other words, a topic might appear in multiple different journals, depending on how it is being addressed. For example, articles that might help answer a question about the relationship between community poverty and violence exposure might appear in several different journals, some with a focus on poverty, others with a focus on violence, and still others on community development or public health. Journal titles are sometimes a good starting point but may not give a broad enough picture of what the journals cover in their contents.

In focusing a literature search, it also helps to review a journal’s mission and target audience. For example, at least four different journals focus specifically on poverty:

  • Journal of Children & Poverty
  • Journal of Poverty
  • Journal of Poverty and Social Justice
  • Poverty & Public Policy

Let’s look at an example using the Journal of Poverty and Social Justice. Information about this journal is located on the journal’s webpage: http://policy.bristoluniversitypress.co.uk/journals/journal-of-poverty-and-social-justice . In the section headed “About the Journal” you can see that it is an internationally focused research journal, and that it addresses social justice issues in addition to poverty alone. The research articles are peer-reviewed (there appear to be non-empirical discussions published, as well). These descriptions about a journal are almost always available, sometimes listed as “scope” or “mission.” These descriptions also indicate the sponsorship of the journal—sponsorship may be institutional (a particular university or agency, such as Smith College Studies in Social Work), a professional organization, such as the Council on Social Work Education (CSWE) or the National Association of Social Workers (NASW), or a publishing company (e.g., Taylor & Francis, Wiley, or Sage).

Indicators of journal caliber. Despite engaging in a peer review process, not all journals are equally rigorous. Some journals have very high rejection rates, meaning that many submitted manuscripts are rejected; others have fairly high acceptance rates, meaning that relatively few manuscripts are rejected. This is not necessarily the best indicator of quality, however, since newer journals may not be sufficiently familiar to authors with high-quality manuscripts, and some journals are very specific in terms of what they publish. Another index that is sometimes used is the journal’s impact factor. The impact factor is a quantitative indicator of how often articles published in the journal are cited in the reference lists of other journal articles—it is calculated as the average number of times each article published in a particular period was cited, divided by the number of articles published (the number that could be cited). For example, the impact factor for the Journal of Poverty and Social Justice in our list above was 0.70 in 2017, and for the Journal of Poverty it was 0.30. These are relatively low figures compared to a journal like the New England Journal of Medicine, with an impact factor of 59.56! This means that articles published in that journal were, on average, cited more than 59 times in the next year or two.
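The arithmetic behind the impact factor can be sketched in a few lines of Python. This is a hedged illustration only: the citation and article counts below are invented for the example and are not real data for any journal.

```python
# Two-year impact factor: average citations received this year by
# articles the journal published in the previous two years.
def impact_factor(citations_this_year: int, articles_prior_two_years: int) -> float:
    return citations_this_year / articles_prior_two_years

# Invented example: 84 citations in 2018 to the 120 articles a journal
# published across 2016-2017 yields an impact factor of 0.70.
print(round(impact_factor(84, 120), 2))
```

A journal whose typical article is cited less than once in the window (like the 0.70 and 0.30 figures above) thus has an impact factor below 1.0, while a heavily cited journal can reach double digits.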

Impact factors are not necessarily the best indicator of caliber, however, since many strong journals are geared toward practitioners rather than scholars; they are less likely to be cited by other scholars but may have a large impact on a large readership. This may be the case for a journal like Social Work, the official journal of the National Association of Social Workers. It is distributed free to all members: over 120,000 practitioners, educators, and students of social work world-wide. The journal has a recent impact factor of 0.790. Journals with social work-relevant content tend to have impact factors in the range of 1.0 to 3.0 according to Scimago Journal & Country Rank (SJR), particularly when they are interdisciplinary journals (for example, Child Development, Journal of Marriage and Family, Child Abuse and Neglect, Child Maltreatment, Social Service Review, and British Journal of Social Work). Once upon a time, a reader could locate different indexes comparing the “quality” of social work-related journals. However, the concept of “quality” is difficult to systematically define. These indexes have mostly been replaced by impact ratings, which are not necessarily the best, most robust indicators on which to rely in assessing journal quality. For example, new journals addressing cutting-edge topics have not been around long enough to have been evaluated using this particular tool, and it takes a few years for articles to begin to be cited in other, later publications.

Beware of pseudo-, illegitimate, misleading, deceptive, and suspicious journals . Another side effect of living in the Age of Information is that almost anyone can circulate almost anything and call it whatever they wish. This goes for “journal” publications, as well. With the advent of open-access publishing in recent years (electronic resources available without subscription), we have seen an explosion of what are called predatory or junk journals . These are publications calling themselves journals, often with titles very similar to legitimate publications and often with fake editorial boards. These “publications” lack the integrity of legitimate journals. This caution is reminiscent of the discussions earlier in the course about pseudoscience and “snake oil” sales. The predatory nature of many apparent information dissemination outlets has to do with how scientists and scholars may be fooled into submitting their work, often paying to have their work peer-reviewed and published. There exists a “thriving black-market economy of publishing scams,” and at least two “journal blacklists” exist to help identify and avoid these scam journals (Anderson, 2017).

This issue is important to information consumers, because it creates a challenge in terms of identifying legitimate sources and publications. The challenge is particularly important to address when information from on-line, open-access journals is being considered. Open-access is not necessarily a poor choice—legitimate scientists may pay sizeable fees to legitimate publishers to make their work freely available and accessible as open-access resources. On-line access is also not necessarily a poor choice—legitimate publishers often make articles available on-line to provide timely access to the content, especially when publishing the article in hard copy will be delayed by months or even a year or more. On the other hand, stating that a journal engages in a peer-review process is no guarantee of quality—this claim may or may not be truthful. Pseudo- and junk journals may engage in some quality control practices, but may lack attention to important quality control processes, such as managing conflict of interest, reviewing content for objectivity or quality of the research conducted, or otherwise failing to adhere to industry standards (Laine & Winker, 2017).

One resource designed to assist with the process of deciphering legitimacy is the Directory of Open Access Journals (DOAJ). The DOAJ is not a comprehensive listing of all possible legitimate open-access journals, and does not guarantee quality, but it does help identify legitimate sources of information that are openly accessible and meet basic legitimacy criteria. It also is about open-access journals, not the many journals published in hard copy.

An additional caution: Search for article corrections. Despite all of the careful manuscript review and editing, sometimes an error appears in a published article. Most journals have a practice of publishing corrections in future issues. When you locate an article, it is helpful to also search for updates. Here is an example where data presented in an article’s original tables were erroneous, and a correction appeared in a later issue.

  • Marchant, A., Hawton, K., Stewart A., Montgomery, P., Singaravelu, V., Lloyd, K., Purdy, N., Daine, K., & John, A. (2017). A systematic review of the relationship between internet use, self-harm and suicidal behaviour in young people: The good, the bad and the unknown. PLoS One, 12(8): e0181722. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5558917/
  • Marchant, A., Hawton, K., Stewart A., Montgomery, P., Singaravelu, V., Lloyd, K., Purdy, N., Daine, K., & John, A. (2018). Correction: A systematic review of the relationship between internet use, self-harm and suicidal behaviour in young people: The good, the bad and the unknown. PLoS One, 13(3): e0193937. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0193937

Search Tools. In this age of information, it is all too easy to find items—the problem lies in sifting, sorting, and managing the vast numbers of items that can be found. For example, a simple Google® search for the topic “community poverty and violence” returned about 15,600,000 results! As a means of simplifying the process of searching for journal articles on a specific topic, a variety of helpful tools have emerged. One type of search tool has already applied a filtering process for you: abstracting and indexing databases. These resources provide the user with search results whose records have already passed through one or more filters. For example, PsycINFO is managed by the American Psychological Association and is devoted to peer-reviewed literature in behavioral science. It contains almost 4.5 million records and grows every month. However, it may not be available to users who are not affiliated with a university library. Conducting a basic search for our topic of “community poverty and violence” in PsycINFO returned 1,119 articles—still a large number, but far more manageable. Additional filters can be applied, such as limiting the range of publication dates, selecting only peer-reviewed items, limiting the language of the published piece (English only, for example), and specifying types of documents (chapters, dissertations, or journal articles only, for example). Adding the filters for English-language, peer-reviewed journal articles published between 2010 and 2017 narrowed the list to 346 documents.
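The winnowing effect of stacking filters can be sketched with ordinary list filtering. This is a hypothetical illustration: the records and field names below are invented and do not reflect the actual PsycINFO record schema.

```python
# Invented bibliographic records; the fields mimic the kinds of filters
# an abstracting database exposes (year, peer review, language, type).
records = [
    {"year": 2015, "peer_reviewed": True,  "language": "English", "type": "journal article"},
    {"year": 2012, "peer_reviewed": True,  "language": "English", "type": "journal article"},
    {"year": 2016, "peer_reviewed": False, "language": "English", "type": "dissertation"},
    {"year": 2014, "peer_reviewed": True,  "language": "Spanish", "type": "journal article"},
]

# Stack the filters: English-language, peer-reviewed journal articles, 2010-2017.
filtered = [
    r for r in records
    if 2010 <= r["year"] <= 2017
    and r["peer_reviewed"]
    and r["language"] == "English"
    and r["type"] == "journal article"
]
print(len(filtered))  # 2 of the 4 invented records survive all four filters
```

Each added condition can only shrink the result set, which is why the example search above fell from 1,119 hits to 346 once the filters were applied.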

Just as was the case with journals, not all abstracting and indexing databases are equivalent. There may be overlap between them, but none is guaranteed to identify all relevant pieces of literature. Here are some examples to consider, depending on the nature of the questions asked of the literature:

  • Academic Search Complete—multidisciplinary index of 9,300 peer-reviewed journals
  • AgeLine—multidisciplinary index of aging-related content for over 600 journals
  • Campbell Collaboration—systematic reviews in education, crime and justice, social welfare, international development
  • Google Scholar—broad search tool for scholarly literature across many disciplines
  • MEDLINE/PubMed—National Library of Medicine, access to over 15 million citations
  • Oxford Bibliographies—annotated bibliographies, each is discipline specific (e.g., psychology, childhood studies, criminology, social work, sociology)
  • PsycINFO/PsycLIT—international literature on material relevant to psychology and related disciplines
  • SocINDEX—publications in sociology
  • Social Sciences Abstracts—multiple disciplines
  • Social Work Abstracts—many areas of social work are covered
  • Web of Science—a “meta” search tool that searches other search tools, multiple disciplines

Placing our search for information about “community violence and poverty” into the Social Work Abstracts tool with no additional filters resulted in a manageable 54-item list. Finally, abstracting and indexing databases are another way to gauge journal legitimacy: if a journal is indexed in one of these systems, it is likely a legitimate journal. However, the converse is not necessarily true: the fact that a journal is not indexed does not mean it is an illegitimate or pseudo-journal.

Government Sources. A great deal of information is gathered, analyzed, and disseminated by various governmental branches at the international, national, state, regional, county, and city levels. Searching websites that end in .gov is one way to identify this type of information, often presented in articles, news briefs, and statistical reports. These government sources gather information in two ways: they fund external investigations through grants and contracts, and they conduct research internally, through their own investigators. Here are some examples to consider, depending on the nature of the topic for which information is sought:

  • Agency for Healthcare Research and Quality (AHRQ) at https://www.ahrq.gov/
  • Bureau of Justice Statistics (BJS) at https://www.bjs.gov/
  • Census Bureau at https://www.census.gov
  • Morbidity and Mortality Weekly Report of the CDC (MMWR-CDC) at https://www.cdc.gov/mmwr/index.html
  • Child Welfare Information Gateway at https://www.childwelfare.gov
  • Children’s Bureau/Administration for Children & Families at https://www.acf.hhs.gov
  • Forum on Child and Family Statistics at https://www.childstats.gov
  • National Institutes of Health (NIH) at https://www.nih.gov , including (not limited to):
    • National Institute on Aging (NIA) at https://www.nia.nih.gov
    • National Institute on Alcohol Abuse and Alcoholism (NIAAA) at https://www.niaaa.nih.gov
    • National Institute of Child Health and Human Development (NICHD) at https://www.nichd.nih.gov
    • National Institute on Drug Abuse (NIDA) at https://www.nida.nih.gov
    • National Institute of Environmental Health Sciences at https://www.niehs.nih.gov
    • National Institute of Mental Health (NIMH) at https://www.nimh.nih.gov
    • National Institute on Minority Health and Health Disparities at https://www.nimhd.nih.gov
  • National Institute of Justice (NIJ) at https://www.nij.gov
  • Substance Abuse and Mental Health Services Administration (SAMHSA) at https://www.samhsa.gov/
  • United States Agency for International Development at https://usaid.gov

Each state and many counties or cities have similar data sources and analysis reports available, such as Ohio Department of Health at https://www.odh.ohio.gov/healthstats/dataandstats.aspx and Franklin County at https://statisticalatlas.com/county/Ohio/Franklin-County/Overview . Data are available from international/global resources (e.g., United Nations and World Health Organization), as well.

Other Sources. The Health and Medicine Division (HMD) of the National Academies—previously the Institute of Medicine (IOM)—is a nonprofit institution that aims to provide government and private-sector policy makers and other decision makers with objective analysis and advice for making informed health decisions. For example, in 2018 it produced reports on topics in substance use and mental health concerning the intersection of opioid use disorder and infectious disease, the legal implications of emerging neurotechnologies, and a global agenda concerning the identification and prevention of violence (see http://www.nationalacademies.org/hmd/Global/Topics/Substance-Abuse-Mental-Health.aspx ). The exciting aspect of this resource is that it addresses many topics of current concern, because it aims to help inform emerging policy. The caution to consider is that the evidence is often still emerging, as well.

Numerous “think tank” organizations exist, each with a specific mission. For example, the Rand Corporation is a nonprofit organization that has offered research and analysis to address global issues since 1948. The institution’s mission is to help improve policy and decision making “to help individuals, families, and communities throughout the world be safer and more secure, healthier and more prosperous,” addressing issues of energy, education, health care, justice, the environment, international affairs, and national security (https://www.rand.org/about/history.html). As another example, the Robert Wood Johnson Foundation is a philanthropic organization supporting research and research dissemination concerning health issues facing the United States. The foundation works to build a culture of health across systems of care (not only medical care) and communities (https://www.rwjf.org).

While many of these organizations have a great deal of helpful evidence to share, they also may have a strong political bias. Objectivity is often lacking in the information they provide: they provide evidence to support certain points of view. That is their purpose—to provide ideas on specific problems, many of which have a political component. Think tanks “are constantly researching solutions to a variety of the world’s problems, and arguing, advocating, and lobbying for policy changes at local, state, and federal levels” (quoted from https://thebestschools.org/features/most-influential-think-tanks/ ). Helpfully, that source identifies the political orientation of each of what it considers the 50 most influential U.S. think tanks. For example, The Heritage Foundation is identified as conservative, whereas Human Rights Watch is identified as liberal.

While not the same as think tanks, many mission-driven organizations also sponsor or report on research. For example, the National Association for Children of Alcoholics (NACOA) in the United States is a registered nonprofit organization. Its mission, pursued along with partnering organizations, private-sector groups, and federal agencies, is to promote policy and program development in research, prevention, and treatment in order to provide information to, for, and about children of alcoholics (of all ages). Based on this mission, the organization supports knowledge development and information gathering on the topic and disseminates information that serves the needs of this population. While this is a worthwhile mission, there is no guarantee that the information meets the criteria for evidence with which we have been working. Evidence reported by think tank and mission-driven sources must be utilized with a great deal of caution and critical analysis!

In many instances an empirical report has not appeared in the published literature, but in the form of a technical or final report to the agency or program providing the funding for the research that was conducted. One such example is presented by a team of investigators funded by the National Institute of Justice to evaluate a program for training professionals to collect strong forensic evidence in instances of sexual assault (Patterson, Resko, Pierce-Weeks, & Campbell, 2014): https://www.ncjrs.gov/pdffiles1/nij/grants/247081.pdf . Investigators may serve in the capacity of consultant to agencies, programs, or institutions, and provide empirical evidence to inform activities and planning. One such example is presented by Maguire-Jack (2014) as a report to a state’s child maltreatment prevention board: https://preventionboard.wi.gov/Documents/InvestmentInPreventionPrograming_Final.pdf .

When Direct Answers to Questions Cannot Be Found. Sometimes social workers are interested in finding answers to complex questions or questions related to an emerging, not-yet-understood topic. This does not mean giving up on empirical literature. Instead, it requires a bit of creativity in approaching the literature. A Venn diagram might help explain this process. Consider a scenario where a social worker wishes to locate literature to answer a question concerning issues of intersectionality. Intersectionality is a social justice term applied to situations where multiple categorizations or classifications come together to create overlapping, interconnected, or multiplied disadvantage. For example, women with a substance use disorder and who have been incarcerated face a triple threat in terms of successful treatment for a substance use disorder: intersectionality exists between being a woman, having a substance use disorder, and having been in jail or prison. After searching the literature, little or no empirical evidence might have been located on this specific triple-threat topic. Instead, the social worker will need to seek literature on each of the threats individually, and possibly will find literature on pairs of topics (see Figure 3-1). There exists some literature about women’s outcomes for treatment of a substance use disorder (a), some literature about women during and following incarceration (b), and some literature about substance use disorders and incarceration (c). Despite not having a direct line on the center of the intersecting spheres of literature (d), the social worker can develop at least a partial picture based on the overlapping literatures.

Figure 3-1. Venn diagram of intersecting literature sets.
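The Venn-diagram strategy described above can be sketched as simple set operations. This is a hedged illustration: the article IDs and topic tags below are invented for the example, not drawn from any real search.

```python
# Map invented article IDs to the topics each one covers.
articles = {
    "a1": {"women", "substance use disorder"},
    "a2": {"women", "substance use disorder"},
    "b1": {"women", "incarceration"},
    "c1": {"substance use disorder", "incarceration"},
}

def literature_on(topics):
    """Return the IDs of articles covering every topic in `topics`."""
    return {aid for aid, tags in articles.items() if topics <= tags}

# Pairwise literatures (regions a, b, and c in Figure 3-1) exist:
print(sorted(literature_on({"women", "substance use disorder"})))  # ['a1', 'a2']
# ...but no single article covers all three threats (region d is empty):
print(sorted(literature_on({"women", "substance use disorder", "incarceration"})))  # []
```

When the triple intersection comes back empty, the searcher falls back on the pairwise sets, piecing together a partial picture from the overlapping literatures.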



Social Work 3401 Coursebook Copyright © by Dr. Audrey Begun is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License , except where otherwise noted.


Penn State University Libraries

Empirical Research in the Social Sciences and Education

  • What is Empirical Research and How to Read It
  • Finding Empirical Research in Library Databases
  • Designing Empirical Research
  • Ethics, Cultural Responsiveness, and Anti-Racism in Research
  • Citing, Writing, and Presenting Your Work

Contact the Librarian at your campus for more help!

Ellysa Cahoy

Introduction: What is Empirical Research?

Empirical research is based on observed and measured phenomena and derives knowledge from actual experience rather than from theory or belief. 

How do you know if a study is empirical? Read the subheadings within the article, book, or report and look for a description of the research "methodology."  Ask yourself: Could I recreate this study and test these results?

Key characteristics to look for:

  • Specific research questions to be answered
  • Definition of the population, behavior, or phenomena being studied
  • Description of the process used to study this population or phenomena, including selection criteria, controls, and testing instruments (such as surveys)

Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have four components:

  • Introduction : sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous studies
  • Methodology: sometimes called "research design" -- how to recreate the study -- usually describes the population, research process, and analytical tools used in the present study
  • Results : sometimes called "findings" -- what was learned through the study -- usually appears as statistical data or as substantial quotations from research participants
  • Discussion : sometimes called "conclusion" or "implications" -- why the study is important -- usually describes how the research results influence professional practices or future studies

Reading and Evaluating Scholarly Materials

Reading research can be a challenge. However, the tutorials and videos below can help. They explain what scholarly articles look like, how to read them, and how to evaluate them:

  • CRAAP Checklist A frequently-used checklist that helps you examine the currency, relevance, authority, accuracy, and purpose of an information source.
  • IF I APPLY A newer model of evaluating sources which encourages you to think about your own biases as a reader, as well as concerns about the item you are reading.
  • Credo Video: How to Read Scholarly Materials (4 min.)
  • Credo Tutorial: How to Read Scholarly Materials
  • Credo Tutorial: Evaluating Information
  • Credo Video: Evaluating Statistics (4 min.)
  • Last Updated: Feb 18, 2024 8:33 PM
  • URL: https://guides.libraries.psu.edu/emp


What is a Literature Review? How to Write It (with Examples)


A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing how your work contributes to the ongoing conversation in the field. Learning how to write a literature review is a critical tool for successful research. Your ability to summarize and synthesize prior research pertaining to a certain topic demonstrates your grasp on the topic of study, and assists in the learning process. 


What is a literature review?

A well-conducted literature review demonstrates the researcher’s familiarity with the existing literature, establishes the context for their own research, and contributes to scholarly conversations on the topic. One of the purposes of a literature review is also to help researchers avoid duplicating previous work and ensure that their research is informed by and builds upon the existing body of knowledge.


What is the purpose of a literature review?

A literature review serves several important purposes within academic and research contexts. Here are some key objectives and functions of a literature review: 2  

1. Contextualizing the Research Problem: The literature review provides a background and context for the research problem under investigation. It helps to situate the study within the existing body of knowledge. 

2. Identifying Gaps in Knowledge: By identifying gaps, contradictions, or areas requiring further research, the researcher can shape the research question and justify the significance of the study. This is crucial for ensuring that the new research contributes something novel to the field. 


3. Understanding Theoretical and Conceptual Frameworks: Literature reviews help researchers gain an understanding of the theoretical and conceptual frameworks used in previous studies. This aids in the development of a theoretical framework for the current research. 

4. Providing Methodological Insights: Another purpose of a literature review is to let researchers learn about the methodologies employed in previous studies. This can help in choosing appropriate research methods for the current study and in avoiding pitfalls that others may have encountered. 

5. Establishing Credibility: A well-conducted literature review demonstrates the researcher’s familiarity with existing scholarship, establishing their credibility and expertise in the field. It also helps in building a solid foundation for the new research. 

6. Informing Hypotheses or Research Questions: The literature review guides the formulation of hypotheses or research questions by highlighting relevant findings and areas of uncertainty in existing literature. 

Literature review example

Let’s delve deeper with an example: say your literature review is about the impact of climate change on biodiversity. You might organize it into sections such as the effects of climate change on habitat loss and species extinction, phenological changes, and marine biodiversity. Each section would then summarize and analyze relevant studies in those areas, highlighting key findings and identifying gaps in the research. The review would conclude by emphasizing the need for further research on specific aspects of the relationship between climate change and biodiversity. The following literature review template provides a glimpse into the recommended literature review structure and content, demonstrating how research findings are organized around specific themes within a broader topic. 

Literature Review on Climate Change Impacts on Biodiversity:

Climate change is a global phenomenon with far-reaching consequences, including significant impacts on biodiversity. This literature review synthesizes key findings from various studies: 

a. Habitat Loss and Species Extinction:

Climate change-induced alterations in temperature and precipitation patterns contribute to habitat loss, affecting numerous species (Thomas et al., 2004). The review discusses how these changes increase the risk of extinction, particularly for species with specific habitat requirements. 

b. Range Shifts and Phenological Changes:

Observations of range shifts and changes in the timing of biological events (phenology) are documented in response to changing climatic conditions (Parmesan & Yohe, 2003). These shifts affect ecosystems and may lead to mismatches between species and their resources. 

c. Ocean Acidification and Coral Reefs:

The review explores the impact of climate change on marine biodiversity, emphasizing ocean acidification’s threat to coral reefs (Hoegh-Guldberg et al., 2007). Changes in pH levels negatively affect coral calcification, disrupting the delicate balance of marine ecosystems. 

d. Adaptive Strategies and Conservation Efforts:

Recognizing the urgency of the situation, the literature review discusses various adaptive strategies adopted by species and conservation efforts aimed at mitigating the impacts of climate change on biodiversity (Hannah et al., 2007). It emphasizes the importance of interdisciplinary approaches for effective conservation planning. 

Literature review format

Writing a literature review involves summarizing and synthesizing existing research on a particular topic. A good literature review format should include the following elements. 

Introduction: The introduction sets the stage for your literature review, providing context and introducing the main focus of your review. 

  • Opening Statement: Begin with a general statement about the broader topic and its significance in the field. 
  • Scope and Purpose: Clearly define the scope of your literature review. Explain the specific research question or objective you aim to address. 
  • Organizational Framework: Briefly outline the structure of your literature review, indicating how you will categorize and discuss the existing research. 
  • Significance of the Study: Highlight why your literature review is important and how it contributes to the understanding of the chosen topic. 
  • Thesis Statement: Conclude the introduction with a concise thesis statement that outlines the main argument or perspective you will develop in the body of the literature review. 

Body: The body of the literature review is where you provide a comprehensive analysis of existing literature, grouping studies based on themes, methodologies, or other relevant criteria. 

  • Organize by Theme or Concept: Group studies that share common themes, concepts, or methodologies. Discuss each theme or concept in detail, summarizing key findings and identifying gaps or areas of disagreement. 
  • Critical Analysis: Evaluate the strengths and weaknesses of each study. Discuss the methodologies used, the quality of evidence, and the overall contribution of each work to the understanding of the topic. 
  • Synthesis of Findings: Synthesize the information from different studies to highlight trends, patterns, or areas of consensus in the literature. 
  • Identification of Gaps: Discuss any gaps or limitations in the existing research and explain how your review contributes to filling these gaps. 
  • Transition between Sections: Provide smooth transitions between different themes or concepts to maintain the flow of your literature review. 

Conclusion: The conclusion of your literature review should summarize the main findings, highlight the contributions of the review, and suggest avenues for future research. 

  • Summary of Key Findings: Recap the main findings from the literature and restate how they contribute to your research question or objective. 
  • Contributions to the Field: Discuss the overall contribution of your literature review to the existing knowledge in the field. 
  • Implications and Applications: Explore the practical implications of the findings and suggest how they might impact future research or practice. 
  • Recommendations for Future Research: Identify areas that require further investigation and propose potential directions for future research in the field. 
  • Final Thoughts: Conclude with a final reflection on the importance of your literature review and its relevance to the broader academic community. 

Conducting a literature review

Conducting a literature review is an essential step in research that involves reviewing and analyzing existing literature on a specific topic. It’s important to know how to do a literature review effectively, so here are the steps to follow: 1  

Choose a Topic and Define the Research Question:

  • Select a topic that is relevant to your field of study. 
  • Clearly define your research question or objective. Determine what specific aspect of the topic you want to explore. 

Decide on the Scope of Your Review:

  • Determine the timeframe for your literature review. Are you focusing on recent developments, or do you want a historical overview? 
  • Consider the geographical scope. Is your review global, or are you focusing on a specific region? 
  • Define the inclusion and exclusion criteria. What types of sources will you include? Are there specific types of studies or publications you will exclude? 

Select Databases for Searches:

  • Identify relevant databases for your field. Examples include PubMed, IEEE Xplore, Scopus, Web of Science, and Google Scholar. 
  • Consider searching in library catalogs, institutional repositories, and specialized databases related to your topic. 

Conduct Searches and Keep Track:

  • Develop a systematic search strategy using keywords, Boolean operators (AND, OR, NOT), and other search techniques. 
  • Record and document your search strategy for transparency and replicability. 
  • Keep track of the articles, including publication details, abstracts, and links. Use citation management tools like EndNote, Zotero, or Mendeley to organize your references. 
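
The search-strategy advice above can be sketched in code. Below is a minimal Python helper that assembles a Boolean search string from required terms, synonyms, and exclusions; the function name and the quoting and grouping conventions are illustrative, and each database (PubMed, Scopus, etc.) has its own exact syntax:

```python
# Sketch: assemble a Boolean search string from term lists.
# Conventions (quoting phrases, AND/OR/NOT keywords) are illustrative;
# real databases differ in exact query syntax.
def build_query(must_have, any_of, exclude):
    parts = [" AND ".join(f'"{t}"' for t in must_have)]
    if any_of:
        # Synonyms are OR-ed together and grouped in parentheses.
        parts.append("(" + " OR ".join(f'"{t}"' for t in any_of) + ")")
    query = " AND ".join(p for p in parts if p)
    for term in exclude:
        query += f' NOT "{term}"'
    return query

query = build_query(
    must_have=["climate change"],
    any_of=["biodiversity", "species richness"],
    exclude=["paleoclimate"],
)
print(query)
# "climate change" AND ("biodiversity" OR "species richness") NOT "paleoclimate"
```

Recording the exact strings you ran, and when, alongside your citation manager keeps the search transparent and replicable, as the checklist above recommends.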

Review the Literature:

  • Evaluate the relevance and quality of each source. Consider the methodology, sample size, and results of studies. 
  • Organize the literature by themes or key concepts. Identify patterns, trends, and gaps in the existing research. 
  • Summarize key findings and arguments from each source. Compare and contrast different perspectives. 
  • Identify areas where there is a consensus in the literature and where there are conflicting opinions. 
  • Provide critical analysis and synthesis of the literature. What are the strengths and weaknesses of existing research? 

Organize and Write Your Literature Review:

  • Base your literature review outline on themes, chronological order, or methodological approaches. 
  • Write a clear and coherent narrative that synthesizes the information gathered. 
  • Use proper citations for each source and ensure consistency in your citation style (APA, MLA, Chicago, etc.). 
  • Conclude your literature review by summarizing key findings, identifying gaps, and suggesting areas for future research. 

Whether you’re exploring a new research field or finding new angles to develop an existing topic, sifting through hundreds of papers can take more time than you have to spare. But what if you could find science-backed insights with verified citations in seconds? That’s the power of Paperpal’s new Research feature!  

How to write a literature review faster with Paperpal?

Paperpal, an AI writing assistant, integrates powerful academic search capabilities within its writing platform. With the Research feature, you get 100% factual insights, with citations backed by 250M+ verified research articles, directly within your writing interface with the option to save relevant references in your Citation Library. By eliminating the need to switch tabs to find answers to all your research questions, Paperpal saves time and helps you stay focused on your writing.   

Here’s how to use the Research feature:  

  • Ask a question: Get started with a new document on paperpal.com. Click on the “Research” feature and type your question in plain English. Paperpal will scour over 250 million research articles, including conference papers and preprints, to provide you with accurate insights and citations. 
  • Review and Save: Paperpal summarizes the information, while citing sources and listing relevant reads. You can quickly scan the results to identify relevant references and save these directly to your built-in citations library for later access. 
  • Cite with Confidence: Paperpal makes it easy to incorporate relevant citations and references into your writing, ensuring your arguments are well-supported by credible sources. This translates to a polished, well-researched literature review. 

The literature review sample and detailed advice on writing and conducting a review will help you produce a well-structured report. But remember that a good literature review is an ongoing process, and it may be necessary to revisit and update it as your research progresses. By combining effortless research with an easy citation process, Paperpal Research streamlines the literature review process and empowers you to write faster and with more confidence. Try Paperpal Research now and see for yourself.  

Frequently asked questions

A literature review is a critical and comprehensive analysis of existing literature (published and unpublished works) on a specific topic or research question and provides a synthesis of the current state of knowledge in a particular field. A well-conducted literature review is crucial for researchers to build upon existing knowledge, avoid duplication of efforts, and contribute to the advancement of their field. It also helps researchers situate their work within a broader context and facilitates the development of a sound theoretical and conceptual framework for their studies.

A literature review is a crucial component of research writing, providing a solid background for a research paper’s investigation. The aim is to keep professionals up to date on ongoing developments within a specific field, including the research methods and experimental techniques used in that field, and to present that knowledge in the form of a written report. The depth and breadth of the literature review also underscore the credibility of the scholar in their field.  

Before writing a literature review, it’s essential to undertake several preparatory steps to ensure that your review is well-researched, organized, and focused. This includes choosing a topic of general interest to you and doing exploratory research on that topic, writing an annotated bibliography, and noting major points, especially those that relate to the position you have taken on the topic. 

Literature reviews and academic research papers are essential components of scholarly work but serve different purposes within the academic realm. 3 A literature review aims to provide a foundation for understanding the current state of research on a particular topic, identify gaps or controversies, and lay the groundwork for future research. Therefore, it draws heavily from existing academic sources, including books, journal articles, and other scholarly publications. In contrast, an academic research paper aims to present new knowledge, contribute to the academic discourse, and advance the understanding of a specific research question. Therefore, it involves a mix of existing literature (in the introduction and literature review sections) and original data or findings obtained through research methods. 

Literature reviews are essential components of academic and research papers, and various strategies can be employed to conduct them effectively. If you want to know how to write a literature review for a research paper, here are four common approaches that are often used by researchers. 

  • Chronological Review: This strategy involves organizing the literature based on the chronological order of publication. It helps to trace the development of a topic over time, showing how ideas, theories, and research have evolved. 
  • Thematic Review: Thematic reviews focus on identifying and analyzing themes or topics that cut across different studies. Instead of organizing the literature chronologically, it is grouped by key themes or concepts, allowing for a comprehensive exploration of various aspects of the topic. 
  • Methodological Review: This strategy involves organizing the literature based on the research methods employed in different studies. It helps to highlight the strengths and weaknesses of various methodologies and allows the reader to evaluate the reliability and validity of the research findings. 
  • Theoretical Review: A theoretical review examines the literature based on the theoretical frameworks used in different studies. This approach helps to identify the key theories that have been applied to the topic and assess their contributions to the understanding of the subject. 

It’s important to note that these strategies are not mutually exclusive, and a literature review may combine elements of more than one approach. The choice of strategy depends on the research question, the nature of the literature available, and the goals of the review. Additionally, other strategies, such as integrative reviews or systematic reviews, may be employed depending on the specific requirements of the research.

The literature review format can vary depending on the specific publication guidelines. However, there are some common elements and structures that are often followed. Here is a general guideline for the format of a literature review: 

Introduction: 
  • Provide an overview of the topic. 
  • Define the scope and purpose of the literature review. 
  • State the research question or objective. 

Body: 
  • Organize the literature by themes, concepts, or chronology. 
  • Critically analyze and evaluate each source. 
  • Discuss the strengths and weaknesses of the studies. 
  • Highlight any methodological limitations or biases. 
  • Identify patterns, connections, or contradictions in the existing research. 

Conclusion: 
  • Summarize the key points discussed in the literature review. 
  • Highlight the research gap. 
  • Address the research question or objective stated in the introduction. 
  • Highlight the contributions of the review and suggest directions for future research.

Both annotated bibliographies and literature reviews involve the examination of scholarly sources. While annotated bibliographies focus on individual sources with brief annotations, literature reviews provide a more in-depth, integrated, and comprehensive analysis of existing literature on a specific topic. 

References 

  • Denney, A. S., & Tewksbury, R. (2013). How to write a literature review. Journal of Criminal Justice Education, 24(2), 218-234. 
  • Pan, M. L. (2016). Preparing literature reviews: Qualitative and quantitative approaches. Taylor & Francis. 
  • Cantero, C. (2019). How to write a literature review. San José State University Writing Center. 

Paperpal is an AI writing assistant that helps academics write better and faster with real-time suggestions for in-depth language and grammar correction. Trained on millions of research manuscripts enhanced by professional academic editors, Paperpal delivers human precision at machine speed.  

Try it for free or upgrade to Paperpal Prime, which unlocks unlimited access to premium features like academic translation, paraphrasing, contextual synonyms, consistency checks and more. It’s like always having a professional academic editor by your side! Go beyond limitations and experience the future of academic writing. Get Paperpal Prime now at just US$19 a month!


PSYC 200 Lab in Experimental Methods (Atlanta)

Empirical vs. Review Articles

Know the difference between empirical and review articles.

Empirical article An empirical (research) article reports methods and findings of an original research study conducted by the authors of the article.  

Literature Review article A review article or "literature review" discusses past research studies on a given topic.

Definition of an empirical study:  An empirical research article reports the results of a study that uses data derived from actual observation or experimentation. Empirical research articles are examples of primary research.

Parts of a standard empirical research article:  (Articles will not necessarily use the exact terms listed below.)

  • Abstract  ... A paragraph length description of what the study includes.
  • Introduction ...Includes a statement of the hypotheses for the research and a review of other research on the topic.
  • Method ...Describes who the participants were, the design of the study, what the participants did, and what measures were used.
  • Results ...Describes the outcomes of the measures of the study.
  • Discussion ...Contains the interpretations and implications of the study.
  • References ...Contains citation information on the material cited in the report. (also called bibliography or works cited)

Characteristics of an Empirical Article:

  • Empirical articles will include charts, graphs, or statistical analysis.
  • Empirical research articles are usually substantial, often 8-30 pages long.
  • There is always a bibliography found at the end of the article.

Type of publications that publish empirical studies:

  • Empirical research articles are published in scholarly or academic journals
  • These journals are also called “peer-reviewed,” or “refereed” publications.

Examples of such publications include:

  • Computers in Human Behavior
  • Journal of Educational Psychology

Examples of databases that contain empirical research:  (selected list only)

  • Web of Science

This page is adapted from the Sociology Research Guide: Identify Empirical Articles page at Cal State Fullerton Pollak Library.

Know the difference between scholarly and non-scholarly articles.

"Scholarly" journal = "Peer-Reviewed" journal = "Refereed" journal

When researching your topic, you may come across many different types of sources and articles. When evaluating these sources, it is important to think about: 

  • Who is the author? 
  • Who is the audience or why was this written? 
  • Where was this published? 
  • Is this relevant to your research? 
  • When was this written? Has it been updated? 
  • Are there any citations? Who do they cite?  

Helpful Links and Guides

Here are helpful links and guides to check out for more information on scholarly sources: 

  • This database contains data on different types of serials and can be used to determine whether a periodical is peer-reviewed or not:  Ulrich's Periodicals Directory  
  • The UC Berkeley Library published this useful guide on evaluating resources, including the differences between scholarly and popular sources, as well as how to find primary sources:  UC Berkeley's Evaluating Resources LibGuide
  • Last Updated: Feb 14, 2024 3:32 PM
  • URL: https://guides.libraries.emory.edu/main/psyc200

KG-EmpiRE: A Community-Maintainable Knowledge Graph for a Sustainable Literature Review on the State and Evolution of Empirical Research in Requirements Engineering

In the last two decades, several researchers provided snapshots of the “current” state and evolution of empirical research in requirements engineering (RE) through literature reviews. However, these literature reviews were not sustainable, as none built on or updated previous works due to the unavailability of the extracted and analyzed data. KG-EmpiRE is a Knowledge Graph (KG) of empirical research in RE based on scientific data extracted from currently 680 papers published in the IEEE International Requirements Engineering Conference (1994-2022). KG-EmpiRE is maintained in the Open Research Knowledge Graph (ORKG), making all data openly and long-term available according to the FAIR data principles. Our long-term goal is to constantly maintain KG-EmpiRE with the research community to synthesize a comprehensive, up-to-date, and long-term available overview of the state and evolution of empirical research in RE. Besides KG-EmpiRE, we provide its analysis with all supplementary materials in a repository. This repository contains all files with instructions for replicating and (re-)using the analysis locally or via executable environments and for repeating the research approach. Since its first release based on 199 papers (2014-2022), KG-EmpiRE and its analysis have been updated twice, currently covering over 650 papers. KG-EmpiRE and its analysis demonstrate how innovative infrastructures, such as the ORKG, can be leveraged to make data from literature reviews FAIR, openly available, and maintainable for the research community in the long term. In this way, we can enable replicable, (re-)usable, and thus sustainable literature reviews to ensure the quality, reliability, and timeliness of their research results.

I Introduction

For 20 years, various researchers conducted literature reviews to examine the state and evolution of empirical research in requirements engineering (RE) with the shared goal of providing a comprehensive, up-to-date, and long-term available overview [1, 2]. However, these literature reviews were not sustainable, as none built on or updated previous ones, which are known challenges of literature reviews [3]. While recent research addresses these challenges by providing social and economic decision support and guidance [3], the underlying problem is the unavailability of the extracted and analyzed data. Researchers need technical support, i.e., infrastructures, to conduct sustainable literature reviews so that all data is openly and long-term available according to the FAIR data principles [3] and corresponding to open science in SE [4].

In their joint work, Wernlein [5] and Karras et al. [1, 6] examined the use of the Open Research Knowledge Graph (ORKG) [7] as such technical support by building, publishing, and analyzing a Knowledge Graph (KG) of empirical research in RE (KG-EmpiRE) based on currently 680 research track papers of the IEEE International Requirements Engineering Conference (1994-2022).

In this paper, we present KG-EmpiRE, available in the ORKG (https://orkg.org/observatory/Empirical_Software_Engineering), and its analysis, available on GitHub [8], Zenodo [9], and Binder (https://tinyurl.com/empire-analysis) for interactive replication and (re-)use.

KG-EmpiRE contains scientific data on six themes: research paradigm, research design, research method, data collection, data analysis, and bibliographic metadata. We plan to expand these themes in the long term. For more details on the themes, refer to the supplementary materials [9, 8]. Since its first release based on 199 papers (2014-2022) [5], KG-EmpiRE and its analysis have been updated twice. Karras et al. [1] published the first update with 570 papers (2000-2022) at the 17th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement 2023, where they received the best paper award. The second update is ongoing and covers 680 papers (1994-2022) so far. The goal for the second update is to cover all 748 research track papers from the IEEE International Requirements Engineering Conference (1993-2023).

The analysis provides answers to 16 out of 77 competency questions (cf. supplementary materials [9, 8]) regarding empirical research in RE that we derived from the vision of Sjøberg et al. [10] on the role of empirical methods in SE, including RE, for 2020-2025. While the number of competency questions answered reflects the coverage of the curated topic in KG-EmpiRE, the answers to these questions provide insights into the state and evolution of empirical research in RE. For each competency question answered, we provide all details of the analysis with its data, visualizations, explanations, and answers in a repository [9, 8] that is also hosted on Binder for interactive replication and (re-)use.

Overall, this repository contains all files with detailed explanations and instructions for replication and (re-)use of KG-EmpiRE and its analysis locally or via executable environments (Binder and GitHub Codespaces), as well as for repeating the research approach for sustainable literature reviews with the ORKG. The repository also contains all generated visualizations with their data, exported as PNG and CSV files, as well as supplementary materials on the themes, their structuring in the ORKG, and all 77 competency questions.

II Structure of KG-EmpiRE and the Repository

II-A KG-EmpiRE

We developed an ORKG template (https://orkg.org/template/R186491) to organize the scientific data extracted from the papers in the ORKG. ORKG templates implement a subset of the Shapes Constraint Language (SHACL) and allow specifying the underlying (graph) structure to organize the data in a structured manner [11]. In this way, we determined which data to extract and standardized their description to ensure they are FAIR, consistent, and comparable across all papers. The developed ORKG template covers the six themes investigated. For more details on the ORKG template, refer to the supplementary materials [9, 8].

By applying the ORKG template to the papers, KG-EmpiRE currently consists of almost 35,000 triples, which are made up of over 51,000 resources and almost 19,000 literals (see Table I). While these statistics reflect the efforts to provide a solid structured description of the extracted data, they also show that KG-EmpiRE is relatively small compared to the entire ORKG and other well-known knowledge graphs, e.g., Wikidata or DBpedia, which include millions of entities.
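
To make the triple/resource/literal terminology concrete, here is a toy Python sketch (the identifiers are hypothetical and not taken from KG-EmpiRE's actual data): each statement is a (subject, predicate, object) triple, subjects and URI-like objects count as resources, and plain string values count as literals.

```python
# Toy knowledge-graph fragment: each statement is a (subject, predicate, object)
# triple. Identifiers are hypothetical; KG-EmpiRE's real data lives in the ORKG.
triples = [
    ("paper:42", "has_research_method", "method:case_study"),    # object is a resource
    ("paper:42", "has_data_collection", "collection:interview"), # object is a resource
    ("paper:42", "title", "Eliciting requirements in practice"), # object is a literal
    ("paper:42", "year", "2019"),                                # object is a literal
]

def is_resource(term: str) -> bool:
    # Convention for this sketch only: resources carry a "namespace:" prefix.
    return ":" in term

resources = {s for s, _, o in triples} | {o for _, _, o in triples if is_resource(o)}
literals = [o for _, _, o in triples if not is_resource(o)]

print(len(triples), len(resources), len(literals))  # 4 3 2
```

KG-EmpiRE applies the same distinction at the scale of almost 35,000 triples, which is what the resource and literal counts in Table I report.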

II-B Repository

In the repository, there are three folders and six files, with the Jupyter Notebook empire-analysis.ipynb as the main file. The Jupyter Notebook encapsulates the entire analysis of KG-EmpiRE and provides visualizations, explanations, and answers for each of the 16 competency questions. The visualizations are exported as PNG files per competency question to the Figures folder. The data retrieved from KG-EmpiRE for analysis is stored as CSV files for each competency question in the SPARQL-Data folder by date. In this folder, we also provide CSV files of the latest release to replicate the results of the related publication [1]. The last folder, Supplementary materials, provides additional materials for detailed overviews of the content for data extraction regarding the themes, the developed ORKG template, all 77 competency questions derived, and the research approach. The second most important file is README.md, which contains detailed explanations and instructions about the project, the repository, its installation (locally and via executable environments), the replication of the analysis, and the (re-)use of KG-EmpiRE with its most recent data. The remaining four files support the installation (requirements.txt, runtime.txt), clarify the copyright (LICENSE), and ensure the citability of the repository (CITATION.cff; https://citation-file-format.github.io/).

III Conclusion

Overall, KG-EmpiRE and its analysis lay the foundation for a sustainable literature review on the state and evolution of empirical research in requirements engineering. They can be used to replicate the results from the related publication [1], (re-)use the data for further studies, and repeat the research approach for sustainable literature reviews on other topics. KG-EmpiRE and its analysis demonstrate how innovative infrastructures, such as the ORKG, can be leveraged to make data from literature reviews FAIR and openly available in the long term. In this way, researchers can build on and update the data, ideally collaboratively, enabling sustainable literature reviews for comprehensive, up-to-date, and long-term available overviews, true to the principle: Divide et Impera.

In summary, the special feature of KG-EmpiRE lies in the proof that data from literature reviews can already be prepared during data extraction in such a way that they are understandable and processable by humans and machines to update, replicate, and (re-)use them sustainably. KG-EmpiRE and the underlying research approach using technical infrastructures, such as the ORKG, have the potential to be used on a large scale to establish sustainable literature reviews and thus ensure the quality, reliability, and timeliness of their research results.

Acknowledgment

The authors thank the Federal Government, the Heads of Government of the Länder, as well as the Joint Science Conference (GWK), for their funding and support within the NFDI4Ing and NFDI4DataScience consortia. This work was funded by the German Research Foundation (DFG) project numbers 44214671 and 460234259 and by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536).


The Vagueness of Integrating the Empirical and the Normative: Researchers’ Views on Doing Empirical Bioethics

  • Original Research
  • Open access
  • Published: 08 November 2023




  • T. Wangmo (ORCID: orcid.org/0000-0003-0857-0510),
  • V. Provoost &
  • E. Mihailov


The integration of normative analysis with empirical data often remains unclear despite the availability of many empirical bioethics methodologies. This paper sought bioethics scholars’ experiences and reflections on doing empirical bioethics research in order to feed these practical insights into the debate on methods. We interviewed twenty-six participants, who revealed their processes of integrating the normative and the empirical. From the analysis of the data, we first used the themes to identify the methodological content. That is, we show participants’ use of familiar methods described as “back-and-forth” methods (reflective equilibrium), followed by dialogical methods, where collaboration was seen as a better way of doing integration. Thereafter, we highlight methods that were deemed inherent integration approaches, where the normative and the empirical were intertwined from the start of the research project. Second, we used the themes to express not only how we interpreted what was said but also how things were said. Here, we describe an air of uncertainty and an overall vagueness that surrounded the above methods. We conclude that the indeterminacy of integration methods is a double-edged sword: it allows for flexibility but also risks obscuring a lack of understanding of the theoretical-methodological underpinnings of empirical bioethics research methods.


Introduction

Empirical bioethics is an interdisciplinary activity that centres around the integration of empirical findings with normative (philosophical) analysis (Ives, Dunn, and Cribb 2017). Mertz and colleagues (2014) posited that “empirical research in EE [empirical ethics] is not an end in itself, but a required step towards a normative conclusion or statement with regard to empirical analysis, leading to a combination of empirical research with ethical analysis and argument” (p. 1). The growth of this field is often attributed to dissatisfaction with a purely philosophical approach, perceived as insufficient to address bioethical issues (Hedgecoe 2004; Hoffmaster 2018), and hence to a belief that an empirically informed bioethics is better suited to deal with the complexity of human practices. A consensus paper by European empirical ethics scholars aimed to establish standards of practice for those working in, or wanting to do, empirical bioethics (Ives, et al. 2018). Concerning integration, the standards included the need to (1) clearly state how the theoretical position was chosen for integration, (2) explain and justify how the method of integration was carried out, and (3) be transparent in reporting how the method of integration was executed.

Despite consensus that empirical research is relevant to bioethical argument (Mihailov, et al. 2022; Musschenga 2005; Sulmasy and Sugarman 2010; Rost and Mihailov 2021), integrating empirical research with normative analysis remains challenging. A long-discussed method of integration is the (wide) reflective equilibrium (Daniels 1979), which has been tailored to empirical bioethics projects by several scholars (Ives and Draper 2009; Van Thiel and Van Delden 2010; de Vries and van Leeuwen 2010). Briefly, (wide) reflective equilibrium is a two-way dialogue between ethical principles/values/judgements and practice (empirical data). It is carried out by the researcher, “the thinker.” In this process, the thinker goes back and forth between the normative underpinnings and the empirical facts (data available from the study or other sources) until he or she can produce moral coherence (an “equilibrium”).

A systematic review of integrative empirical bioethics identified thirty-two methodologies (Davies, et al. 2015 ). Amongst others, these include (wide) reflective equilibrium (Ives 2014 ; Van Thiel and Van Delden 2010 ; de Vries and van Leeuwen 2010 ), dialogical empirical ethics (Widdershoven, Abma, and Molewijk 2009 ; Abma, et al. 2010 ), reflexive balancing (Ives 2014 ), integrative empirical ethics (Molewijk, et al. 2003 ), hermeneutical approach to bioethics (Rehmann-Sutter, Porz, and Scully 2012 ), symbiotic ethics (Frith 2012 ), and grounded moral analysis (Dunn, et al. 2012 ). Davies and colleagues ( 2015 ) categorized the identified methodologies into, inter alia, (1) dialogical, where there is a reliance on a dialogue between the stakeholders (e.g., researchers and participants) to reach a shared understanding of the analysis and the conclusion (e.g., inter-ethics); (2) consultative, which comprises analysis of the data by the researcher, who is the external thinker and works independently to develop a normative conclusion (e.g., reflexive balancing, reflective equilibrium), and (3) those that combine the two (e.g., hermeneutics).

The wide variety of integration methodologies available illustrates considerable uncertainty about their particular aims, content, and domains of application (Davies, et al. 2015; Wangmo and Provoost 2017). Furthermore, the steps that guide the integration process are often unspecific (Davies, et al. 2015; Huxtable and Ives 2019). For example, if an ethicist acts as facilitator and applies ethical theory to enrich the dialogical process for decision-making in concrete situations (Abma, et al. 2010), one may wonder whether the application of ethical theories was left to the subjective appreciation of the ethicist. In reflective equilibrium, there are pressing questions about how much weight should be given to empirical data and how much to ethical theory. The existing methodologies thus risk being frustratingly vague and insufficiently determinate in practical contexts (Arras 2009; Dunn, et al. 2008). All in all, the multiplicity of methodological paths and their lack of clarity give rise to a debate about appropriate methodologies (Hedgecoe 2004; Ives and Draper 2009; Ives, Dunn, and Cribb 2017).

In a survey of bioethics scholars in twelve European countries, Wangmo and Provoost (2017) found that one-third of the respondents (N = 200) attempted to integrate the normative with the empirical. Their findings indicate that not everyone in the field of bioethics did, or intended to, engage in this kind of interdisciplinary work. One reason could be the methodological diversity and complications noted above. It is therefore important to further clarify and, where necessary, develop (new) integration methodologies that address the needs of the field. In this explorative qualitative study, we set out to investigate how researchers perform the integration of empirical data with normative analysis and how they evaluate that process. Our hope is to learn from the experiences and reflections of researchers who have engaged in empirical bioethics research and to feed these insights from practice into the debate on methods.

Sampling and Study Participants

To form our participant sample pool, we conducted a systematic search of peer-reviewed publications in two databases—PubMed and SCOPUS—using the following key terms: “Empirical Bioethics” OR “Empirical Ethics” OR “Interdisciplinary Ethics” OR “Interdisciplinary Empirical Ethics” OR “empirical-normative” OR “normative-empirical” OR “Empirical research in Bioethics.” The literature search yielded 334 results, from which we removed 143 that were duplicates or did not match our inclusion criteria, leaving a sample pool of 191 papers. A separate Google Scholar search using the same terms led to thirteen additional papers, resulting in a total sample pool of 204 papers.

Starting from this sample pool, we first aimed for a maximum variation sample of scholars according to the type of paper they had authored. The 204 results were therefore categorized into three groups: (a) Empirical: ninety-four; (b) Methodological: seventy-four; and (c) Empirical-Argumentative: thirty-six. Empirical papers were those that used purely empirical social science methodology. Methodological papers were those that discussed and/or used empirical bioethics research methods. Empirical-argumentative papers were those that produced empirical results along with an attempt to use them argumentatively to make certain claims. These three categories were ordered alphabetically to allow simple random selection of the first authors of the included publications. Second, we also purposively selected papers to aim for a balanced distribution of male and female scholars. We carried out two rounds of selection, which identified the first authors of eighty-five publications, who were invited to participate in our study. A total of twenty-four scholars agreed to participate. We interviewed two additional participants who were referred to us by a participant. See table 1 for participant information.
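The selection step above (three alphabetically ordered categories, random draws of first authors from each) can be sketched as follows. The category sizes (94/74/36) come from the paper; the placeholder author names, the draw size per round, and the random seed are illustrative assumptions, not details from the study.

```python
import random

# Hypothetical reconstruction of the sampling step: the 204 papers are
# grouped into the three categories reported by the authors, each category
# is ordered alphabetically, and first authors are drawn at random.
# Author identifiers are placeholders, not the actual sample pool.
categories = {
    "Empirical": [f"author_{i:03d}" for i in range(94)],
    "Methodological": [f"author_{i:03d}" for i in range(94, 168)],
    "Empirical-Argumentative": [f"author_{i:03d}" for i in range(168, 204)],
}

# Sanity check against the counts reported in the paper: 94 + 74 + 36 = 204.
assert sum(len(pool) for pool in categories.values()) == 204

rng = random.Random(42)  # fixed seed only so this sketch is reproducible

def draw_round(categories, n_per_category, rng):
    """Randomly select first authors from each alphabetically ordered category."""
    selected = []
    for name, authors in categories.items():
        pool = sorted(authors)  # alphabetical ordering, as described in the paper
        selected.extend(rng.sample(pool, min(n_per_category, len(pool))))
    return selected

invitees = draw_round(categories, 15, rng)
print(len(invitees))  # 15 per category x 3 categories = 45
```

In the actual study, two such rounds (combined with purposive balancing by gender) yielded eighty-five invited first authors; the per-round draw size here is made up for illustration.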

Data Collection

All selected first authors received an email from EM informing them about the study, its purpose, the researchers, and the voluntary nature of the study. All non-responders received one reminder. No incentive was given to participate in the study. The interviews were carried out using Zoom in light of the pandemic and because our participants were from different countries. The interviews were completed between April 2020 and January 2021 and were on average sixty minutes long (range forty-five to ninety minutes).

To structure the discussion, we used an interview guide composed of three sections. The first part of the interview was geared towards generally understanding the type of research carried out by the participants. Therefore, this part of the interview was not limited to the research presented in the paper via which they were selected. The second part aimed at their attitudes towards the purpose of empirical research in bioethics, using a series of eight statements to which they were invited to respond (Mihailov, et al. 2022 ). The third section sought participants’ experiences of doing empirical bioethics (i.e., integration), the advantages and challenges to carrying out empirical bioethics study, and their views on the empirical turn in bioethics. During the data collection process, the research team met twice to discuss the interview guide based on reading two of the first four interviews. This resulted in minor adjustments to the interview guide. For the interview guide and further information on the study method, please refer to the first paper from this project (Mihailov, et al. 2022 ).

Data Analysis

Audio recordings were transcribed verbatim. All anonymized transcripts were imported into the qualitative data analysis software MAXQDA. Two authors (EM and TW) carefully read and coded several interviews independently and then discussed the coding process and the code labels used for the entire data corpus. This pre-coding followed a thematic analysis (TA) framework (Braun and Clarke 2006; Guest, et al. 2012) in light of its fit with the explorative nature of the overall project. Thereafter, a more specific analysis of the data related to integration methods took place in order to meet the aim of this paper.

The first author created and analysed a data set pertaining to participants’ experience, opinions, and their use of particular methods of integrating the normative and the empirical. Themes and sub-themes were developed based on authors’ discussion of the data related to the integration process. Using these themes and sub-themes, TW drafted the study results in a detailed and descriptive way for the co-authors (VP and EM) to gain the richness and depth of this specific content. After several rounds of iterations and discussions among the authors (process described in the next paragraph), we agreed on the result interpretations as presented in the next section.

Briefly, our analytical approach combines TA with a hermeneutics of faith, or empathy, and a hermeneutics of suspicion. Such an approach has been used in other studies (Huxley et al. 2011). Whereas a hermeneutics of faith aims at better understanding what the participant described, a hermeneutics of suspicion aims to uncover hidden or latent meanings. Our team integrated the two types of hermeneutics through the researcher roles: a hermeneutics of faith or empathy (EM, the interviewer, and TW, the first author), a hermeneutics of suspicion (VP), and a mixture of both (TW). Integrating various analytic roles in one team has the advantage that different readings of the data can be used to challenge each other’s views, whilst still keeping track of those different interpretations. In the results section, these layers of interpretation are interwoven. We start with interpretations close to the participants’ accounts (the first two themes predominantly resulted from a hermeneutics of faith, though we also add critical notes at the end). As the results section progresses, critical interpretations that go beyond the surface of the data are given more weight (hermeneutics of suspicion). At the same time, we keep underlining the scholars’ experiences in their own terms. We present data as block quotes to support our analysis. Shorter expressions from the participants are given in the text in italics between quotation marks.

Ethics Approval

The study was approved by the Research Ethics Commission of the University of Bucharest. All participants provided their informed consent to participate in this study and to record their interviews.

Results

We identified four themes related directly to our research question. The first theme, “the back-and-forth methods,” relays the scholars’ accounts of using a reflective equilibrium method or something similar. The second theme, “collaboration as doing integration,” deals with dialogical methods and the views of scholars who thought that collaboration was a better way of organizing integration. In reporting these two themes, we also illustrate the inherently vague manner in which the participants discussed their use of integration methods. Both theme labels were chosen to reflect the simplified way several of the scholars conveyed their integration process. Thereafter, we continue with two additional themes, where we focus on these accounts of participants’ chosen methods and how they were used. For this, we first present the theme “Integration as inherently ingrained from the start of the project; but is it integration?” In this theme, we look critically at participants’ accounts of how the integration is done. Finally, we move further to unpack the ambiguity with which some participants spoke of engaging with these methods. In the theme “the integration method as a particular opaque intelligence,” we highlight participants’ plea for creativity and flexibility. Here we note that although the participants make a good point, this plea may at the same time reveal hesitance and uncertainty in talking about how they chose and applied the method they used.

Theme 1: The “back-and-forth” methods

Several participants described their method as cyclical, using terms like “back-and-forth” between the conceptual framework and the empirical data, and alluded to their method as reflective equilibrium. These participants noted that their research begins with a conceptual understanding of the ethical issues relevant to the topic or question. This was followed by the collection of empirical data based on the ethical concerns teased out from the conceptual work, and then by a return to the conceptual work to evaluate how it must be changed or adapted. Important in this backward and forward process was the notion of “revising” the theory and the fact that this was an iterative process.

While doing the back-and-forth method of integration, one participant distinguished the normative and the empirical work, with the former being the core and the empirical elements being used to shape the normative concept. This reflective equilibrium method was also seen as a way of trying to understand why practice and theory are different; hence, it includes the need to go back-and-forth iteratively between what is happening in practice and why it does or does not conform to what is set out in theory.

My approach would be to start with the normative bit. Do(ing) research around that area. Have that firmly consolidated. With that, I could develop the empirical research bit: method, structure, instrument, population, whatever ... the design of the empirical bit. But probably that—the ongoing findings from this empirical bit, empirical research—would be continuously informing the normative bit that I already had then. And—as I mentioned before—for the output and the final outcomes, I think that probably starts by seeing how the empirical changed the shape of this normative “stone” [laughs]. (P18, SSE) I think it’s kind of a reflective equilibrium thing going on and ... if it turns out that people who are on the front lines making certain kinds of moral decisions systematically think about a case a certain way, and that’s different, you know, they are sensitive to factors that maybe my theory thinks shouldn’t be important, it’s not obvious what should happen. Maybe I need to update my theory …. Or it might be that I come up with an account of why it is that they are systematically wrong, that their intuitions are corrupted in some way, or they’re responsive to factors that shouldn’t be normatively relevant. (P9, EE)

The iterative process was also seen as something that cannot be set in stone, since one may have to go through several rounds of going backward and forward. Thus, a participant said that although this method is in essence a simple one, it cannot be recipe-like. The method was described as a creative process, explicitly set apart from empirical methods that follow a strict and preset schedule.

You know, this isn’t like science, where, you know, you have this type of data, do this statistical step, and follow x, y, z .... It is a creative process. You do your conceptual work; you look at the data. “No, that doesn’t work. Something’s not right. Doesn’t fit.” You go back to your concepts, reorganize them, look at your data again and other information you might have. So, it’s this iterative process of interpreting, reinterpreting data—you might have to go and seek more information, you know, if there’s certain gaps in what you ... to solve certain dilemmas you come up with. But yeah, I mean, it’s that simple. You just ... look at your data, try to ... gain meaning from that data and then conceptualize it and keep going backwards and forwards. (P7, ERB)

Within the participants’ accounts of doing such “back-and-forth” integration work, we were surprised by how often their descriptions expressed hesitance and uncertainty. This vagueness becomes clearest in the part of the discussions where an actual method of doing empirical bioethics was described. It was evident in the use of language such as “it’s kind of a […] thing” and “a bit of.” The participants also used expressions such as “trying” and “we reflect a bit and balance a bit” when explaining how they used the method. This wording suggests a lack of confidence in their own role in the methodological process.

I think basically my advice is some kind of evaluation of judgement is a normative one, philosophical normative one, but I try to use empirical [data] as in some kind of understanding, or I try to apply those normative into the practice, and also when the real or the empirical data, empirical knowledge, has some different implication or different meaning, then I could go back for my normative one. So, it’s kind of the reflective equilibrium thing. (P25, ERB) So, what we normally say is that we use a bit of the method of reflective equilibrium, trying to combine all kinds of considerations [of the people you are studying or the issue of the study], and norms and values and principles and professional norms and individual norms. And try to mix those and weigh those and come to an equilibrium. (P19, ERB)

Theme 2: Collaboration as doing integration without a distinct integration method

Some participants said that integration can be done through collaboration, in which two or more researchers with different skills (normative analysis and empirical methods) come together to formulate the research question and conduct the study. Participants reporting this mode of integration described a dialogical encounter. It was advised that researchers with different backgrounds should know each other’s trade and work closely together, while in some ways also staying distinct. Calling for collaboration, one participant felt that although each researcher needs the methods of their own discipline, there is no particular need for a standardized overarching method.

Well, first of all it’s an interdisciplinary work. So, you need the methodology, and you need the experts in their fields. For the empirical part you really need experienced social scientists, who know how to do empirical research in a valid manner. And for the normative part you need philosophers and people who are used ... are familiar with how to approach a normative question. And I think what is also important is that they know from each other and their different methodology and work. … So, integration sounds a little bit as if all things come together ... kind of a ménage. But there ... I see it more as staying distinct but working very closely and interactively together. But still with different methodology and [remaining] aware that they are different. (P20, ERB)

One participant explained this collaborative integration as a communicative process where the normative conclusions drawn are the result of discussions with the study participants, stakeholders, and even journal readers and other audiences. This participant’s collaboration method made a clear differentiation between the empirical and the theoretical parts. That is, the empirical phase stops after finishing the data collection and the (first-level) interpretation of those collected data. Thereafter, the empirical results are taken through a process of discussions with different stakeholders, a collaborative process that in theory is unending as it continues even after the publication of the study findings.

So, we were very interested in how they [their participants] narrate what they experience, and we saw, that [they] have typical […] narratives, with which they identify. […] That was the empirical approach and then at the end there was another [approach] between the results from the empirical part and the more theoretical or bioethical discussion, where we had regular interactions with the two parts of our team and some of the members, myself included, per parts of the empirical theme and of the theoretical theme and so we had this exchange of perspective and that led then to the publications. It’s a communication process. I think bioethics is always a conversation, also when we just write up papers, we are in a conversation, just one step in a conversation. So, your question how to integrate, is how to proceed in more comprehensive conversation with the audience, the readers of our papers, we are addressing. (P4, EE)

What these participants relayed is that the integration occurred through the process of collaboration. In these accounts, there is no specific integration method used during this collaboration and no plea for an overarching method of integration. Another way of stakeholder collaboration leading to integration was described as dialogical encounter during workshops. Here the key idea was that the research team along with their invited experts deliberate on the aggregate findings and reach a consensus as to what could be the key message of the overall work. Here again, we notice how no specific method (used during such collaboration) was brought forward.

Yeah. We tend to do a little bit of reflection ourselves on the data to come up with a conceptual map or model or policy recommendations and then we try to iterate that with the group, because we realize that, you know, we have a responsibility together. Right? And so balancing our ideas offered people ... it’s a good way of assessing whether ... when we are making the shift from what the “is” is to perhaps what the “ought” should be. Having different perspectives there is important. And we do that and depending on the project, sometimes we built in a formal consensus process, another time we just want to test our ideas to see how they ... If other people endorse them or can make some suggestions to improve them. (P3, ERB) I think ... I don’t think we need one [a specific method]. I really, I don’t. I don’t actually think we need one. Because a lot of people do a lot of good work—either empirically or normatively—and there are people who get along and so ... I think that is the empiricist and the normative […] and I also very hate to “pick” ... I think we have a lot of people who do both really well. But what I WISH ... is that instead of looking for a recipe to be able to integrate ... that people with different expertise would just work together more often. (P15, ERB)

Overall, we saw a similar vagueness in the participants’ descriptions of the “how” of integration. For example, in the quote above, the participant talks of “balancing” done among the invited stakeholders as part of their discussion. It remains unclear how exactly such collaboration occurs and how to confirm the value of the outcomes reached. In this quote, we also note the language of indeterminacy described above (e.g., “try to iterate”).

Theme 3: Integration as inherently ingrained from the start of the project; but is it integration?

Several scholars did not consider it necessary to use a specific method of integration. They reported that, for them, the normative and empirical parts of a study are interwoven within the different phases of the research process. According to these participants, the normative and the empirical cannot be teased apart, because they are inherently linked from the start of the study, with the research question and the research project being, in and of themselves, normatively oriented. The empirical and the normative are constantly informing one another: “you cannot separate the normative from the empirical. When doing empirical work, you already do a lot of normative work as well. So yeah it’s for me it’s integrated anyway” (P12, EE). Adding to the above quote, the same participant stated, “No, it’s always both [normative and empirical], you cannot separate actually. But it also depends on what you understand as normative analysis of course.”

However, some scholars who felt that they were also doing this type of integration in empirical bioethics were, in our view, mistaken. This is because they were either (1) describing what looked to be purely theoretical research activities or (2) presenting what looked to be purely empirical activities as both empirical and normative. For instance, one participant argued that the normative and the empirical are not distinguishable, in that there is no separation between them. This scholar talked about a feature of this approach where “no data is gathered,” as it was a process of doing philosophical work in context. The claim was that the entire research is situated in the world of “oughts,” thereby making it possible to come to an “ought” statement without having to trouble oneself with the is-ought gap. What this scholar sees as “integration” looks like context-sensitive normative argumentation.

So, the integration account is basically the production of a certain kind of an argument in a certain kind of context. And that’s why the integration that I defend, I guess, is, it’s so, it’s about normative reasoning of a certain kind, taking place in a certain kind of context, in situ. Which is why I resist the idea of, as seeing descriptive and normative phases. If you take that view, you’re basically saying something I think more profoundly about how, that data can produce an understanding of the ethics or something like that or that data can profoundly impact on our political positions. I don’t think that’s what the data is doing, insofar as what data is doing on my account on integration, it’s much more about how we can make better, how we can make arguments that have a particular kind of fall. (P6, EE)

A few other participants’ empirical bioethics work seemed to us to be merely descriptive research on ethically relevant topics. One participant stated that the normative and empirical are not distinguishable and that, somehow, the analysis process is when normative thinking takes place. In this account, however, no normative treatment of the data was evident. Within their descriptions, we also found statements that conveyed vagueness about how this process of integrating the empirical and the normative was done. For instance, a participant regarded several parts of the research process, such as interpreting and discussing the research data, as normative in nature because they could not be disentangled from normative presuppositions.

Yeah. So, the way I do data analysis is by listening to the audio of interviews and also reading transcripts. And so ... often by the time I’ve gotten to the point of analysis I already have ... interpretative themes … So, it really is an integrated theoretical and empirical process. (P16, SSE). I wouldn’t know how to distinguish the empirical and the normative because ... what you can do empirically is deeply dependent on ... normative ... presuppositions. Ehm ... and then of course, what you actually do when you ask people for responses, and when you do ... your statistical analysis, I mean that’s not […] that’s only partly normative in the epistemic sense, but not in the moral sense. Ehm so, that’s obviously empirical then. But again—as soon as you start interpreting and discussing the empirical results—you’re back in the normative arena so, that really goes hand in hand. (P23, TE)

In another example, participants explained how, in a descriptive type of study on an ethical topic, normative work still played a role, referring to a thematic map that was based on normative concepts. However, one could argue that by describing the normative part as doing “an empirical analysis in an ethically relevant way,” they actually place this research activity fully within the empirical domain.

I mean, the normative and the empirical, what I actually, I’m not so much concerned with that question, even though that may be a little bit, um, bit weird. Um, I often think a little bit different, I think like what can I contribute for the empirical and what can I contribute from the applied ethics, perspective so to say. It doesn’t necessarily have to be normative, um, it just needs to be in the realm of ethics so to say, so again if I talk about [ethical topic of the participant’s research], I, I’m also just interested in what do they [researchers] think is their [values on the ethical topic], how do they frame their [value on the ethical topic], and by asking them about [the ethical topic] I ask them about their actions, what they do, why they do it, what is their normative basis, all those things, and by that I already ensure the ethical debate, to some degree. (P5, ERB) If you are doing the interviews, I would say, this is more the point where you are on the empirical parts …. Though I would still say, it’s very helpful to have the normative background assisting, when you are doing the interviews and hearing out what are the normative interesting things that people say. So, still it is not completely gone, the normative background. When you are analysing the data, then I would say, you have the empirical part for one, because you have to do this in an empirically solid manner, but you also have the normative part included, because you want to analyse the data not just in a sociological way, but you want to analyse this in an ethically relevant way. (P2, EE)

Theme 4: The integration method as a particular opaque intelligence

Within this theme, we illustrate how vagueness was more explicitly brought forward as a feature of the methods used. Participants who had done empirical bioethics, or had sought to do so, described how one goes from one step to the next to reach a normative conclusion. The terminology they used to describe this vague process pointed to something mysterious: an “opaque A.I.” and a “big leap.” The process was seen as difficult to explain, and one participant claimed it could not be put into precise methodological rules. We pointed to this argument above when we reported the case participants made against recipe-like methods. Here, the participant explicitly raised the view that the process remains open to post-hoc justification.

What does integration really mean? How do you articulate this—kind of—magic box, where data goes in and then you come out conclusions?. It’s a particular opaque A.I., where you—kind of—plug in the data and this conclusion comes out. … And, that’s not a transparent process, we don’t know how our brains work, we don’t know how we make connections. So all we can do is perhaps be transparent about the steps we’re taking to get the information, be reflexive about how we use information, and then articulate the reasons for our conclusions. But I think—as I said earlier—there will always tend to be post-hoc justifications. (P22, EE) And then the big leap ... and the big leap is probably the one that you are curious about. The big leap toward what is the good thing to do. … But yet again, I have always thought that that methods [reflective equilibrium] falls short in giving clear sight of the black box, of the end, of the conclusion, ... I don’t have an answer whether or not we really get a clear view what happens when we take the “jump” from what we see, what we think, towards what we think would be the right thing to do, what we ought to do. (P19, ERB).

Accounts in which we saw this vagueness presented as a feature of the method also expressed the need for a creative process that would require some flexibility. Along the same lines, another participant noted: “I feel that if we did have a recipe for integration, it would almost be sad ... people might feel that they are finding the ‘holy grail,’ but then you limiting yourself to just one way of thinking” (P15, ERB). Several participants underscored the need for flexibility and for not being restricted by too many rules. They noted that much of empirical bioethics seeks to integrate work from two disciplines whose processes are themselves indeterminate, i.e., qualitative research and theoretical ethics. They thus emphasized the challenge of articulating two methods that are themselves opaque into one that is not.

And I think qualitative researchers have been ... struggling with this for a long time, and I think a lot of what we’re doing now mirrors the difficulties that qualitative researchers have been having—particularly in medicine—where they’re being challenged to explain a method. … And we have to explain method, but you can’t explain how your brain got there. With empirical bioethics, we’re working with qualitative research AND we’re working with ... theoretical ethics, so it’s doubly challenging to articulate two uhm very opaque processes. (P22, EE)

In 2015, Davies and colleagues summarized thirty-two empirical bioethics integrative methodologies that combine normative analysis with empirical data obtained through social-science research. Since then, scholars have discussed integrating the normative with life-sciences research (Mertz and Schildmann 2018), using critical realism in empirical bioethics (McKeown 2017), and integrating experimental philosophical bioethics with normative ethics (Earp, et al. 2020; Mihailov, et al. 2021). In line with the two broad categories of integration identified in the systematic review of empirical bioethics methodologies, dialogical and consultative (Davies, et al. 2015), our participants indicated two familiar approaches: the first based on a reflective equilibrium–type process, the other on interdisciplinary collaboration between and among different stakeholders.

In addition, several participants suggested that integration was inherent, with the normative and the empirical intertwined within the overall research process. Our participants’ accounts of inherent integration shared some similarities with, for example, moral case analysis (Dunn, et al. 2012), integrated empirical ethics (Molewijk, et al. 2003), and dialogical empirical ethics (Landeweer, et al. 2017; Widdershoven, et al. 2009), in the sense that there were no separate normative and empirical parts to be distinguished in a project and that the project itself was normatively oriented. However, we should be critical of this view. The mere fact that the empirical and the normative are inseparably intertwined throughout a research process does not mean (1) that these claims cannot be conceptually separated or (2) that such a method is free of methodological concerns. For instance, there would still be the need to specify what moral principles demand in a particular situation, to decide which ethical theory to use, or to make normative judgements with the help of empirical data (Frith 2012; Salloch, et al. 2015). Apart from that, several of these “inherently integrated” methods lacked a clear normative side, and the enterprises described seemed purely empirical. Upon closer analysis, one could interpret some of the accounts of integration being “always inherently present” as a way of avoiding looking into the black box.

Furthermore, within these “inherently integrated” approaches, a few scholars described their descriptive research on ethical issues as empirical bioethics. Based on the available definition of empirical bioethics (Ives, Dunn, and Cribb 2017; Mertz, et al. 2014) and the standards offered by Ives and colleagues (2018), the work of these participants would not count as empirical bioethics, because there was no evidence of any integration taking place. In our opinion, this mismatch between the practice of some scholars and what is “agreed” in the literature to be empirical bioethics may point to the fact that empirical work in bioethics is in essence heterogeneous (Ives, Dunn, and Cribb 2017; Mertz, et al. 2014). First, it is possible that scholars regard their projects as fitting empirical bioethics because they start from research questions relating to the normative and because their projects, even those with purely descriptive parts (and papers), are ultimately aimed at normative conclusions. Even so, we need to be clear about the nature of such particular (sub)projects and about the absence of integration efforts in these parts. Second, it is possible that scholars hold perspectives on the matter that differ from the one expressed in the standards paper (Ives, et al. 2018). In that case as well, these must be brought out into the open. Third, some scholars may simply be mistaken in considering their projects to be empirical bioethics. Their belief might rest on the idea that the empirical findings were at some point integrated into normative reasoning, resulting in a normative claim, when this simply was not the case. This, more than anything, would point to the need for transparency about, and agreement on, the use of methods. A heterogeneity of approaches in the field should be applauded; for all of them, however, we need to be able to identify where and how the integration happens.
In the remaining part of this discussion, we focus on the overall vague manner in which our participants talked about their methods and what that implies for the field of empirical bioethics.

Vagueness of Integration Methods Used

Reflective equilibrium, broadly construed, is a deliberative process that seeks coherence between attitudes, beliefs, and competing ethical principles (Daniels 2020). A standard objection to reflective equilibrium methodology is that it is insufficiently determinate in practical contexts to be action-guiding or to help decide between conflicting views (Arras 2009; Paulo 2020; Raz 1982). The iterative process of going back and forth between the normative and the empirical to arrive at a coherent account is similarly fraught with indeterminacy. The way study participants relayed their approaches and explained their practices underscored the vagueness they felt. It further showed the difficulty that even scholars with expertise in these methods had in illustrating the “how” in an exact manner.

Such vagueness was also evident in the collaboration methods of integration reported by our study participants. This collaboration involves an iterative and deliberative process of sharing information and engaging with different perspectives (Rehmann-Sutter, et al. 2012). It requires ongoing dialogue between social scientists and bioethicists, whose practical know-how guides the conclusion about the normative significance of empirical data. Even though the experience and implicit know-how of the experts can be rich and varied, how the communication process unfolds and who decides the outcome often remain indeterminate. This was noted in the voices of our participants.

The difficulty of clearly explaining the “how” of the integration process is something that researchers who have carried out an integration, or wished to do so, are likely to be familiar with. Several scholars have pointed to this unclear process as well (Ives and Draper 2009; Mertz and Schildmann 2018; Strong, et al. 2010). One explanation for this finding may be that, given the numerous tailored versions of the reflective equilibrium methodology for empirical bioethics (de Vries and van Leeuwen 2010; Ives 2014; Ives and Draper 2009; Van Thiel and Van Delden 2010; Savulescu, et al. 2021), there may be confusion about how to choose among them and how to implement them in practice. As noted earlier, there are many available empirical bioethics methodologies (Davies, et al. 2015), and it has been suggested that each researcher may be using his or her own version (Wangmo and Provoost 2017). This situation, to us, points in two directions. First, it may convey a general need to remain flexible and open to creativity, key components of the normative reasoning that is central to the integration method. We may thus have to stop looking for a method held to empirical standards, especially those of quantitative methods, and recognize that the integration of the empirical and the normative is in many ways a normative enterprise, which does not follow an exact method. Second, the wide variation of approaches makes it even clearer that we need more methodological clarity at the overarching level. This is where the debate on standards (Ives, et al. 2018), for instance, has added value: it allows for heterogeneity while at the same time striving for more clarity. In fact, we would argue that integration methods are inherently indeterminate and that this is a good thing.
That said, an acceptance of the indeterminate character of this integration does not absolve us from the need to identify the foundations of what we are doing in a theoretical-methodological way.

The study findings confirm the image of an indeterminate process. As research on this topic develops, it is ever clearer that the scholars involved come from a wide range of disciplines, which is another argument for why this indeterminate character is indispensable. The findings thus substantiate what has already been written about the indeterminate status of the methods used in empirical bioethics (Arras 2009; Davies, et al. 2015; Dunn, et al. 2008; Huxtable and Ives 2019), despite efforts to delimit and standardize empirical bioethics work (Mertz, et al. 2014; Ives, et al. 2018). One way of reading the vagueness we encountered is as the scholars’ struggle to explain their own integration process, and perhaps even as a lack of full comprehension of that process. Another interpretation, in line with the wish for creativity, flexibility, and a level of indeterminacy in the methods we look for, is that it expresses a deliberate leaving open of things. Creativity can be a medicine against the belief that precise and transparent standards can account for such a “maze of interactions” (Feyerabend 2010) between experts with fertile know-how. Too much standardization misses how particular research situations inspire novel ways of seeing the ethical relevance of empirical data. We should nevertheless be aware that the indeterminate nature of any integrative methodology makes it subject to risks of post-hoc rationalization and motivated reasoning (Ives and Dunn 2010; Mihailov 2016). In the end, demands for creativity, however valid, should go hand in hand with demands for a thorough theoretical foundation as well as a practical understanding of the method at hand.

The Normative Nature of Integrative Methodologies

Reflective equilibrium is a deliberation method that helps us come to a conclusion about what we ought to do (Daniels 1996; Rawls 1951, 1971). If we describe the integration process only in terms of going back and forth between data and theory, or in terms of collaboration between different experts, we risk obscuring the normative nature of using empirical data to help elaborate ethical prescriptions, which is the goal of doing such an integration (Ives and Draper 2009; Mertz, et al. 2014). Researchers often talk about integration as if it were a process that is half empirical and half normative, or something that merely needs normative reasoning alongside empirical data. But the very act of integration is normative in nature. While facts are essential for addressing bioethical issues, the task of integration ultimately depends on normative assumptions, for instance about the normative weight of moral intuitions.

Our data show that many of our participants rely on a reflective equilibrium characterized in their explanations mostly by moving back and forth between empirical results about moral attitudes and intuitions. Although this cyclical thinking is an important part of reflective equilibrium, there is more to it; often, however, our participants did not move beyond this aspect. The idea of coherence between moral intuitions and moral principles, and the fundamental willingness to adjust moral principles in light of what we discover, were rarely touched upon. Perhaps what we see here is that several study participants embarked on an intuitive account of a, sometimes simplified, reflective-equilibrium-inspired methodology. At least in the interviews, they did not show full awareness of the theoretical commitments to coherence, to giving normative weight to moral intuitions, and to screening those intuitions for bias.

The need to clarify the essentially normative nature of integration appeals to normatively trained bioethicists, who may be in a better position to debate and assess how empirical input should be integrated into normative recommendations. We are not claiming that bioethics should be the arena of philosophers alone. Empirical research in bioethics is widespread (Borry, et al. 2006; Wangmo, et al. 2018), and scholarly perceptions about who belongs in the field are no longer exclusivist. There is thus a need to look at empirical bioethics projects in a broader way, including studies where empirical data are gathered but not used directly as part of a normative argumentation. Such empirical data may contribute to a larger body of work aimed at reaching normative conclusions. Examples include empirical studies that explore stakeholders’ views on bioethical matters and explain how people arrive at certain reasoning patterns, or studies that reveal the lived experience of stakeholders and explore how moral questions are experienced in practice (Mihailov, et al. 2022). In our view, the central role of normative know-how in integration does not mean that integration efforts need to be exclusively the work of ethicists or that empirical researchers are unable to engage in them.

Limitations

Our findings are, first and foremost, not generalizable, as they are based on an exploratory qualitative study design. The data come from a small, non-representative sample of researchers. Other scholars, with different or greater experience in using particular (interdisciplinary) integration methods, may have different opinions and could perhaps have provided us with more concrete information about the way they carried out such integration. Also, only one of our participants described him- or herself as a normative researcher. It would have been interesting to have more normatively oriented participants, to include their views on how empirical data can be of use in the adaptation or formation of normative recommendations. Second, we asked scholars to describe the process they use in integrating the normative and the empirical. This is a challenging task in and of itself: not only did the scholars have limited time for the interview, but it is also generally difficult to explain post hoc how exactly this process pans out. We thus acknowledge that we presented the participants with questions which were in no way easy to address in a single conversation. Because we wanted to focus on the scholars’ own reports, we did not confront them with approaches adopted by others in a systematic way, nor did we engage in a critical assessment of the reported method at the time of the interview. It would be interesting for further research to include such an approach and, for instance, to study this using focus group methods. Confrontation with other approaches or other views could offer the opportunity for a more critical reflection. For this paper, however, we opted to enrich the ongoing debate first and foremost with the accounts of the scholars. Third, we note that only a minority of our participants had already published methodological papers related to empirical bioethics, as evident from the EBE sample. We did not ask the scholars to discuss the method that they had written about or most liked, nor did we ask them to discuss the paper that led to their identification for this study. During the interviews, however, we sought to address acquiescence and social desirability by using Socratic questioning and probing, to give participants time to explain their method of integration.

Conclusion: Ambiguity Waiting to Be Disentangled

We set out to learn more about the “how” of the integration methods used by scholars in empirical bioethics. Our hope was to provide input for the ongoing debate on methods and perhaps even some practical support for those considering empirical bioethics projects. Although we shed some light on the way integration methods were used by different bioethics scholars, we especially bring forward the vagueness and uncertainties in their accounts. The main challenge was not the heterogeneity of methods but rather the indeterminate nature of integration methodologies. On a practical level, this finding may express the need for flexibility and variation in approaches rather than for recipe-like instructions; such a clear-cut method would likely be neither possible nor appreciated. The philosopher of science Paul Feyerabend once said that methodological rules “are ambiguous in the way certain drawings are ambiguous” (2001, 39). The ambiguity of integration methods does not make them less appealing, just as the ambiguity of drawings does not make them less beautiful. We may therefore be wiser to accept some degree of indeterminacy, while simultaneously striving for clarity and transparency about the theoretical-methodological underpinnings.

Data Availability

Anonymized data relevant to evaluate the results presented in this paper can be made available upon request. 

Abma, T.A., V.E. Baur, B. Molewijk, and G.A. Widdershoven. 2010. Inter-ethics: Towards an interactive and interdependent bioethics . Bioethics 24(5): 242–255.

Arras, J.D. 2009. The way we reason now: Reflective equilibrium in bioethics. In  The Oxford handbook of bioethics , edited by B. Steinbock, Oxford University Press.

Borry, P., P. Schotsmans, and K. Dierickx. 2006. Empirical research in bioethical journals. A quantitative analysis. Journal of Medical Ethics 32(4): 240–245.

Braun, V., and V. Clarke. 2006. Using thematic analysis in psychology. Qualitative Research in Psychology 3(2): 77–101.

Davies, R., J. Ives, and M. Dunn. 2015. A systematic review of empirical bioethics methodologies . BMC Medical Ethics 16: 15.

Daniels, N. 1979. Wide reflective equilibrium and theory acceptance in ethics . Journal of Philosophy 76(5): 256–282.

Daniels, N. 1996.  Justice and justification: Reflective equilibrium in theory and practice . Cambridge University Press.

Daniels, N. 2020. Reflective equilibrium. In The Stanford encyclopedia of philosophy (Summer edition), edited by E.N. Zalta. https://plato.stanford.edu/archives/sum2020/entries/reflective-equilibrium .

de Vries, M., and E. Van Leeuwen. 2010. Reflective equilibrium and empirical data: Third person moral experiences in empirical medical ethics . Bioethics 24(9): 490–498.

Dunn, M., Z. Gurtin-Broadbent, J. Wheeler, and J. Ives. 2008. Jack of all trades, master of none? Challenges facing junior academic researchers in bioethics. Clinical Ethics 3(4): 160–163.

Dunn, M., M. Sheehan, T. Hope, and M. Parker. 2012. Toward methodological innovation in empirical ethics research . Cambridge Quarterly of Healthcare Ethics 21(4): 466–480.

Earp, B., J. Demaree-Cotton, M. Dunn, et al. 2020. Experimental philosophical bioethics. AJOB Empirical Bioethics 11(1): 30–33.

Feyerabend, P. 2010. Against method: Outline of an anarchistic theory of knowledge . Verso Books.

Feyerabend, P. 2001. Conquest of abundance: A tale of abstraction versus the richness of being . University of Chicago Press.

Frith, L. 2012. Symbiotic empirical ethics: A practical methodology . Bioethics 26(4): 198–206.

Guest, G., K. MacQueen, and E. Namey. 2012. Applied thematic analysis . Los Angeles: Sage Publications.

Hedgecoe, A.M. 2004. Critical bioethics: Beyond the social science critique of applied ethics. Bioethics 18(2): 120–143.

Hoffmaster, B. 2018. From applied ethics to empirical ethics to contextual ethics. Bioethics 32(2): 119–125.

Huxtable, R., and J. Ives. 2019. Mapping, framing, shaping: A framework for empirical bioethics research projects. BMC Medical Ethics 20(1): 86.

Huxley, C., V. Clarke, and E. Halliwell. 2011. “It’s a comparison thing, isn’t it?” Lesbian and bisexual women’s accounts of how partner relationships shape their feelings about their body and appearance.  Psychology of Women Quarterly  35(3): 415– 427.

Ives, J., and H. Draper. 2009. Appropriate methodologies for empirical bioethics: It's all relative . Bioethics 23(4): 249–258.

Ives, J., and M. Dunn. 2010. Who's arguing? A call for reflexivity in bioethics. Bioethics 24(5): 256–265.

Ives, J. 2014. A method of reflexive balancing in a pragmatic, interdisciplinary and reflexive bioethics . Bioethics 28(6): 302–312.

Ives, J., M. Dunn, B. Molewijk, et al. 2018. Standards of practice in empirical bioethics research: Towards a consensus. BMC Medical Ethics 19(1): 68.

Ives, J., M. Dunn, and A. Cribb. 2017. Empirical ethics: Theoretical and practical perspectives . Cambridge: Cambridge University Press.

Landeweer, E., B. Molewijk, and G. Widdershoven. 2017. Moral improvement through interactive research: A practice example of dialogical empirical ethics. In Empirical ethics: Theoretical and practical perspectives , edited by J. Ives, M. Dunn, and A. Cribb. Cambridge: Cambridge University Press.

McKeown, A. 2017. Critical realism and empirical bioethics: A methodological exposition. Health Care Analysis 25: 191–211.

Mertz, M., J. Inthorn, G. Renz, et al. 2014. Research across the disciplines: A road map for quality criteria in empirical ethics research . BMC Medical Ethics 15: 17.

Mertz, M., and J. Schildmann. 2018. Beyond integrating social sciences: Reflecting on the place of life sciences in empirical bioethics methodologies. Medicine, Health Care, and Philosophy 21(2): 207–214.

Mihailov, E. 2016. Is deontology a moral confabulation? Neuroethics 9(1): 1–13.

Mihailov, E., I. Hannikainen, and B. Earp. 2021. Advancing methods in empirical bioethics: Bioxphi meets digital technologies. American Journal of Bioethics 21(6): 53–56.

Mihailov, E., V. Provoost, and T. Wangmo. 2022. Acceptable objectives of empirical research in bioethics: A qualitative exploration of researchers’ views. BMC Medical Ethics 23: 140.

Molewijk, A., A. Stiggelbout, W. Otten, H. Dupuis, and J. Kievit. 2003. Implicit normativity in evidence-based medicine: A plea for integrated empirical ethics research . Health Care Analysis 11(1): 69–92.

Musschenga, A. 2005. Empirical ethics, context-sensitivity, and contextualism. The Journal of Medicine and Philosophy 30 (5): 467–490.

Paulo, N. 2020. The unreliable intuitions objection against reflective equilibrium. The Journal of Ethics 24(3): 333–353.

Rawls, J. 1951. Outline of a decision procedure for ethics.  The Philosophical Review  60(2):177–197.

Rawls, J. 1971. A theory of justice . Belknap Press.

Raz, J. 1982. The claims of reflective equilibrium.  Inquiry  25(3): 307–330.

Rehmann-Sutter, C., R. Porz, and J. Scully. 2012. How to relate the empirical to the normative: Toward a phenomenologically informed hermeneutic approach to bioethics . Cambridge Quarterly of Healthcare Ethics 21(4): 436–447.

Rost, M., and E. Mihailov. 2021. In the name of the family? Against parents’ refusal to disclose prognostic information to children. Medicine, Health Care and Philosophy , 24(3): 421–432.

Salloch, S., S. Wäscher, J. Vollmann, and J. Schildmann. 2015. The normative background of empirical-ethical research: First steps towards a transparent and reasoned approach in the selection of an ethical theory. BMC Medical Ethics 16(1): 20.

Savulescu, J., C. Gyngell, and G. Kahane. 2021. Collective reflective equilibrium in practice (CREP) and controversial novel technologies. Bioethics 35(7): 652–663.

Strong, K., W. Lipworth, and I. Kerridge. 2010. The strengths and limitations of empirical bioethics. Journal of Law and Medicine 18(2): 316–319.

Sulmasy, D., and J. Sugarman. 2010. The many methods of medical ethics (or, thirteen ways of looking at a blackbird). In Methods in medical ethics , edited by J. Sugarman and D. Sulmasy. Georgetown University Press.

Van Thiel, G., and J. Van Delden. 2010. Reflective equilibrium as a normative-empirical model in bioethics. In Reflective equilibrium , edited by W. Burg and T. Willigenburg, 251–259. Netherlands: Springer Netherlands.

Wangmo, T., and V. Provoost. 2017. The use of empirical research in bioethics: A survey of researchers in twelve European countries. BMC Medical Ethics 18(1): 79.

Wangmo, T., S. Hauri, E. Gennet, E. Anane-Sarpong, V. Provoost, and B. Elger. 2018. An update on the “empirical turn” in bioethics: Analysis of empirical research in nine bioethics journals. BMC Medical Ethics 19(1): 6.

Widdershoven, G., T. Abma, and B. Molewijk. 2009. Empirical ethics as dialogical practice . Bioethics 23(4): 236–248.

Acknowledgements

We sincerely acknowledge the two anonymous reviewers for their insightful comments and for how they constructively challenged the discussion of the study findings.

The authors thank the study participants for their time and sharing their views. The study was supported by the Swiss National Science Foundation, IZSEZ0_190015.

Open access funding provided by University of Basel

Author information

Authors and affiliations.

Institute for Biomedical Ethics, University of Basel, Basel, Switzerland

Bioethics Institute Ghent, Ghent University, Gent, Belgium

V. Provoost

Research Centre in Applied Ethics, Faculty of Philosophy, University of Bucharest, București, Romania

E. Mihailov

Corresponding author

Correspondence to T. Wangmo .

Ethics declarations

Conflict of interest.

The authors declare that they have no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Wangmo, T., Provoost, V. & Mihailov, E. The Vagueness of Integrating the Empirical and the Normative: Researchers’ Views on Doing Empirical Bioethics. Bioethical Inquiry (2023). https://doi.org/10.1007/s11673-023-10286-z

Received : 02 January 2023

Accepted : 20 July 2023

Published : 08 November 2023

DOI : https://doi.org/10.1007/s11673-023-10286-z


Keywords

  • Empirical bioethics
  • Integration methods
  • Reflective equilibrium
  • Qualitative study
  • Empirical research in bioethics
  • Interdisciplinary bioethics


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.

Chapter 9. Methods for Literature Reviews

Guy Paré and Spyros Kitsiou

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper whose overarching purpose is to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the-art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge, of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the field of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).
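As a rough illustration, the six generic steps can be sketched as a pipeline in which the output of each activity feeds the next. All record structures, criteria, and function names below are hypothetical illustrations, not part of any published protocol:

```python
# Schematic sketch of the six generic review steps as a pipeline.
# Records, criteria, and names are invented for illustration only.

def formulate_question():
    # Step 1: state the research question and predetermined inclusion rules.
    return {"question": "Do SMS reminders improve attendance?",
            "inclusion": lambda s: s["design"] == "RCT"}

def search_literature():
    # Step 2: hits gathered from multiple databases (placeholder records).
    return [
        {"id": 1, "design": "RCT"},
        {"id": 2, "design": "survey"},
        {"id": 3, "design": "RCT"},
    ]

def screen(records, inclusion):
    # Step 3: keep only records meeting the predetermined rules.
    return [r for r in records if inclusion(r)]

def appraise(records):
    # Step 4: attach a quality judgement to each included study.
    return [dict(r, quality="high") for r in records]

def extract(records):
    # Step 5: pull out only the fields relevant to the question.
    return [{"id": r["id"], "quality": r["quality"]} for r in records]

def synthesize(data):
    # Step 6: aggregate, e.g. count included studies per quality level.
    counts = {}
    for d in data:
        counts[d["quality"]] = counts.get(d["quality"], 0) + 1
    return counts

protocol = formulate_question()
included = screen(search_literature(), protocol["inclusion"])
summary = synthesize(extract(appraise(included)))
print(summary)  # {'high': 2}
```

In practice each step is iterative and far richer than a function call, but the data-flow shape (question → search → screen → appraise → extract → synthesize) is the same.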

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process, and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study, or considering through domain-based evaluations which study components have or have not been designed and executed appropriately, makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
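When two reviewers screen independently, their agreement is often quantified before disagreements are resolved; Cohen's kappa is one common statistic for this. The sketch below, using made-up include/exclude decisions, shows one way to compute it:

```python
# Cohen's kappa for agreement between two independent screeners.
# A minimal sketch; the include/exclude decisions are hypothetical.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed proportion of decisions on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies.
    labels = set(rater_a) | set(rater_b)
    expected = sum(
        (rater_a.count(label) / n) * (rater_b.count(label) / n)
        for label in labels
    )
    return (observed - expected) / (1 - expected)

a = ["include", "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "include", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 0.67
```

A kappa near 1 indicates near-perfect agreement; the single disagreement here (study 2) would then go to discussion or a third reviewer, per the review protocol.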

9.3. Types of Review Articles and Brief Illustrations

Researchers in eHealth have at their disposal a number of approaches and methods for making sense of the existing literature, all with the purpose of casting current research findings into historical context or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion, and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour into narrative reviews, to elucidate common pitfalls, and to improve their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows a systematic data processing approach comprising three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health ( m-health ) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar. The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.
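The counts reported in such a flow diagram follow directly from the number of studies surviving each selection stage. A minimal sketch (all figures hypothetical, not the counts from any actual review):

```python
# Hypothetical PRISMA-style flow of studies through selection stages.

def flow_counts(identified, deduplicated, screened_in, included):
    """Return the number of studies excluded at each selection stage."""
    return {
        "duplicates_removed": identified - deduplicated,
        "excluded_on_title_abstract": deduplicated - screened_in,
        "excluded_on_full_text": screened_in - included,
        "included": included,
    }

# e.g. 412 hits across databases, 358 after de-duplication,
# 64 retained after title/abstract screening, 12 finally included.
print(flow_counts(412, 358, 64, 12))
```

Reporting these stage-by-stage counts lets readers verify that every identified record is accounted for as either excluded (with a reason) or included.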

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).
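As a small illustration of this kind of frequency analysis, the sketch below tallies hypothetical study characteristics, treating each study as the unit of analysis:

```python
from collections import Counter

# Hypothetical characteristics extracted from studies in a descriptive
# review; the records are invented for illustration only.
studies = [
    {"year": 2012, "method": "survey", "outcome": "positive"},
    {"year": 2013, "method": "case study", "outcome": "non-significant"},
    {"year": 2013, "method": "survey", "outcome": "positive"},
    {"year": 2014, "method": "experiment", "outcome": "negative"},
    {"year": 2014, "method": "survey", "outcome": "positive"},
]

def frequency(studies, field):
    # Each included study counts once; tally one characteristic of interest.
    return Counter(s[field] for s in studies)

print(frequency(studies, "method"))
print(frequency(studies, "outcome"))
```

Tables of such frequencies (by year, method, or direction of outcome) are the typical quantitative output of a descriptive or mapping review.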

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature, and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews. As in descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature, although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area, and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE, using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time, with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems, including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the outcomes of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies on the use of PHRs. Hence, they suggested that more research is needed to address the current lack of understanding of the optimal functionality and usability of these systems, and of how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; DeShazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews, which involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors), can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meets a set of previously specified eligibility criteria, in order to answer a clearly formulated and often narrow research question on a particular topic of interest and thereby support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses, these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate, for each outcome of interest, an effect size from each study along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed- or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can produce more precise and reliable estimates of intervention effects than those derived from individual studies examined independently as discrete sources of information.
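To make the weighted-average logic concrete, the sketch below pools hypothetical effect sizes under a fixed-effect, inverse-variance model; the formulas are the standard ones, but the per-study effects and standard errors are invented for illustration:

```python
import math

# Fixed-effect inverse-variance meta-analysis: a minimal sketch.
# The per-study effect sizes and standard errors below are made up.

def fixed_effect(effects, std_errors):
    # Weight each study by the inverse of its variance, so larger,
    # more precise studies contribute more to the pooled estimate.
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Standard error of the pooled estimate, and a 95% confidence interval.
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

effects = [0.30, 0.10, 0.25]   # hypothetical per-study effect sizes
ses = [0.10, 0.20, 0.15]       # hypothetical per-study standard errors
pooled, (lo, hi) = fixed_effect(effects, ses)
print(round(pooled, 3), round(lo, 3), round(hi, 3))
```

Note the pooled interval is narrower than any single study's would be, which is precisely the gain in precision the paragraph above describes; random-effects models add a between-study variance term to these weights.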

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery, with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that were eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods, simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods, such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as a qualitative systematic review.

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO ( www.crd.york.ac.uk/prospero/ ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. The authors therefore resorted to narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) proposed a new approach to synthesizing knowledge that seeks to unpack the mechanisms by which “complex interventions” work in particular contexts. The basic research question usually associated with systematic reviews, “what works?”, changes to: “what is it about this intervention that works, for whom, in what circumstances, in what respects, and why?” Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes the available evidence to find out whether and where these mechanisms apply (Shepperd et al., 2009). Primary studies found in the extant literature are viewed as case studies that can test and modify the initial theories (Rousseau et al., 2008).

The main objective of the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways, which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal, systematic search of four databases covering 2003 to 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized the information to bring forward the mechanisms by which patient portals contribute to outcomes and the variation in outcomes across different contexts.
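The two-step selection procedure described above can be sketched in code. This is an illustrative sketch only, not the authors' actual method; the records, screening criteria, and field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    year: int
    is_evaluation_study: bool  # judged during full-text screening

def title_screen(records, start=2003, end=2013, keyword="portal"):
    """Step 1: screen titles for topic relevance and the publication window."""
    return [r for r in records
            if start <= r.year <= end and keyword in r.title.lower()]

def fulltext_screen(records):
    """Step 2: keep only records confirmed as evaluation studies on full-text review."""
    return [r for r in records if r.is_evaluation_study]

# Hypothetical candidate records retrieved by a database search
records = [
    Record("Patient portal adoption in primary care", 2010, True),
    Record("Patient portal usability essay", 2012, False),
    Record("Telehealth cost analysis", 2008, True),
    Record("Early patient portal pilot", 1999, True),
]

selected = fulltext_screen(title_screen(records))
```

Recording each record's fate at each step, as the authors did with pre-established criteria, is what makes the selection reproducible by a second reviewer.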

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of the existing literature on a particular topic of interest in order to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods, or results (Baumeister & Leary, 1997; Kirkevold, 1997). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest and to assess its credibility using appraisal instruments or critical interpretive methods. In this way, critical reviews constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement (Kirkevold, 1997).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search of multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. The results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, the authors provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that eHealth scholars and researchers can use when planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).

As shown in Table 9.1, each review type addresses different kinds of research questions or objectives, which in turn define and dictate the methods and approaches needed to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching for and synthesizing articles (Green et al., 2006). Researchers are often relatively free to use a diversity of approaches to search for, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. Systematic reviews, on the other hand, are characterized by a high level of systematicity, rigour, and use of explicit methods, based on an a priori review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research (Paré et al., 2015). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is being conducted and which methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and of research generating stronger evidence (Grady et al., 2011; Lyden et al., 2013), review articles have become essential tools for summarizing, synthesizing, integrating, or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews used herein will allow eHealth researchers, graduate students, and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another (Paré et al., 2015). As explained above, each type of review has its own strengths and limitations. That said, the methodological rigour of any review, be it qualitative, quantitative, or mixed, is a critical aspect that prospective authors should consider seriously. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. Reliability relates to the reproducibility of the review process and its steps, which is facilitated by comprehensive documentation of the literature search, extraction, coding, and analysis performed in the review. Whether or not the search is comprehensive, and whether or not it involves a methodical approach to data extraction and synthesis, it is important that the review documents, in an explicit and transparent manner, the steps and approach used in the process of its development. Validity, in turn, characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches (vom Brocke et al., 2009). In short, the rigour of any review article is reflected in the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015), which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.
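The kind of documentation that supports reproducibility can be captured in a simple structured search log. The sketch below is illustrative only; the databases, search terms, and counts are hypothetical and not drawn from any study cited in this chapter.

```python
import json

# Hypothetical search log: databases, terms, period, and screening counts.
search_log = {
    "databases": ["MEDLINE", "CINAHL", "EMBASE", "Cochrane Library"],
    "search_terms": ["patient portal", "personal health record"],
    "period": {"from": 2003, "to": 2013},
    "records_identified": 1245,
    "after_deduplication": 980,
    "after_title_abstract_screen": 120,
    "included_in_review": 32,
    "backward_forward_search": True,
}

def exclusion_counts(log):
    """How many records each screening stage removed, for transparent reporting."""
    stages = ["records_identified", "after_deduplication",
              "after_title_abstract_screen", "included_in_review"]
    return {f"{a} -> {b}": log[a] - log[b] for a, b in zip(stages, stages[1:])}

report = exclusion_counts(search_log)
print(json.dumps(report, indent=2))
```

A log of this kind lets a second team repeat the search and verify each stage's attrition, which is precisely what the notions of reliability and transparency discussed above require.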

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • Bandara W., Miskon S., Fielt E. A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems (ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards (RAMESES). BMC Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database System Review. 2013; 12 : CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. BMC Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. BMC Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Oates B. J. Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems; 2011. Retrieved from http://aisel.aisnet.org/cgi/viewcontent.cgi?article=1221&context=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. SSRN Electronic Journal. 2010.
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, MA: Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. BMC Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • vom Brocke J., Simons A., Niehaves B., Riemer K., Plattfaut R., Cleven A. Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems (ECIS 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]

This publication is licensed under a Creative Commons License, Attribution-Noncommercial 4.0 International License (CC BY-NC 4.0): see https://creativecommons.org/licenses/by-nc/4.0/

  • Cite this page: Paré G., Kitsiou S. Chapter 9: Methods for Literature Reviews. In: Lau F., Kuziemsky C., editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.
  • PDF version of this title (4.5M)
  • Disable Glossary Links

In this Page

  • Introduction
  • Overview of the Literature Review Process and Steps
  • Types of Review Articles and Brief Illustrations
  • Concluding Remarks

Related information

  • PMC PubMed Central citations
  • PubMed Links to PubMed

Recent Activity

  • Chapter 9 Methods for Literature Reviews - Handbook of eHealth Evaluation: An Ev... Chapter 9 Methods for Literature Reviews - Handbook of eHealth Evaluation: An Evidence-based Approach

Your browsing activity is empty.

Activity recording is turned off.

Turn recording back on

Connect with NLM

National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894

Web Policies FOIA HHS Vulnerability Disclosure

Help Accessibility Careers

statistics

IMAGES

  1. Differences Between Empirical Research and Literature Review

    empirical research literature review

  2. (PDF) A Literature Review of Empirical Studies of Recommendation Systems

    empirical research literature review

  3. (PDF) The development and empirical study of a literature review aiding

    empirical research literature review

  4. (PDF) A Review of the Empirical Literature on No Child Left Behind From

    empirical research literature review

  5. Notable Differences between Empirical Review and Literature Review

    empirical research literature review

  6. Summary of empirical literature review

    empirical research literature review

VIDEO

  1. Methods L04

  2. Academic Writing Workshop

  3. Do Entrepreneurship and Sectoral Outputs Support Sustainable Development AJEM 2022 102 71 91

  4. ACE 745: Research Report (IUP)

  5. WHAT TO WRITE IN RESEARCH LITERATURE REVIEW#RESEARCHWRITNGFORBEGINNERS

  6. What is Empirical Research

COMMENTS

  1. Literature review as a research methodology: An overview and guidelines

    This is why the literature review as a research method is more relevant than ever. Traditional literature reviews often lack thoroughness and rigor and are conducted ad hoc, rather than following a specific methodology. ... The aim of a systematic review is to identify all empirical evidence that fits the pre-specified inclusion criteria to ...

  2. How to Write a Literature Review

    Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.

  3. Writing a Literature Review

    Qualitative versus quantitative research; Empirical versus theoretical scholarship; Divide the research by sociological, historical, or cultural sources; Theoretical: In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key ...

  4. Guidance on Conducting a Systematic Literature Review

    This article is organized as follows: The next section presents the methodology adopted by this research, followed by a section that discusses the typology of literature reviews and provides empirical examples; the subsequent section summarizes the process of literature review; and the last section concludes the paper with suggestions on how to improve the quality and rigor of literature ...

  5. Writing the literature review for empirical papers

    Empirical paper s usually are structured in at. least five sections: (1) introduction, (2) literature review, (3) empirical methods, (4) data analysi s, discussion and. findings, and (5 ...

  6. PDF The Thesis Writing Process and Literature Review

    The key here is to focus first on the literature relevant to the puzzle. In this example, the tokenism literature sets up a puzzle derived from a theory and contradictory empirical evidence. Let's consider what each of these means... The literature(s) from which you develop the theoretical/empirical puzzle that drives your research question.

  7. Reviewing the research methods literature: principles and strategies

    The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [2-4], and the numerous forms of qualitative and mixed methods literature synthesis [5-10]) is to synthesize empirical research findings from multiple ...

  8. Methodological Approaches to Literature Review

    A literature review is defined as "a critical analysis of a segment of a published body of knowledge through summary, classification, and comparison of prior research studies, reviews of literature, and theoretical articles." (The Writing Center University of Winconsin-Madison 2022) A literature review is an integrated analysis, not just a summary of scholarly work on a specific topic.

  9. Getting started

    What is a literature review? Definition: A literature review is a systematic examination and synthesis of existing scholarly research on a specific topic or subject. Purpose: It serves to provide a comprehensive overview of the current state of knowledge within a particular field. Analysis: Involves critically evaluating and summarizing key findings, methodologies, and debates found in ...

  10. Approaching literature review for academic purposes: The Literature

    A sophisticated literature review (LR) can result in a robust dissertation/thesis by scrutinizing the main problem examined by the academic study; anticipating research hypotheses, methods and results; and maintaining the interest of the audience in how the dissertation/thesis will provide solutions for the current gaps in a particular field.

  11. Writing a literature review

    A formal literature review is an evidence-based, in-depth analysis of a subject. There are many reasons for writing one and these will influence the length and style of your review, but in essence a literature review is a critical appraisal of the current collective knowledge on a subject. Rather than just being an exhaustive list of all that ...

  12. Literature Reviews, Theoretical Frameworks, and Conceptual Frameworks

    The first element we discuss is a review of research (literature reviews), which highlights the need for a specific research question, study problem, or topic of investigation. ... Standards for reporting on empirical social science research in AERA publications: American Educational Research Association. Educational Researcher, 35 (6), 33-40.

  13. Literature Review Research

    Literature Review is a comprehensive survey of the works published in a particular field of study or line of research, usually over a specific period of time, in the form of an in-depth, critical bibliographic essay or annotated list in which attention is drawn to the most significant works.. Also, we can define a literature review as the collected body of scholarly works related to a topic:

  14. Literature Reviews and Empirical Research

    be used to validate the target and methods you have chosen for your proposed research. consist of books and scholarly journals that provide research examples of populations or settings similar to your own, as well as community resources to document the need for your proposed research. The literature review does not present new primary scholarship.

  15. Module 2 Chapter 3: What is Empirical Literature & Where can it be

    What May or May Not Be Empirical Literature: Literature Reviews Investigators typically engage in a review of existing literature as they develop their own research studies. The review informs them about where knowledge gaps exist, methods previously employed by other scholars, limitations of prior work, and previous scholars' recommendations ...

  16. Empirical Research in the Social Sciences and Education

    Another hint: some scholarly journals use a specific layout, called the "IMRaD" format, to communicate empirical research findings. Such articles typically have 4 components: Introduction: sometimes called "literature review" -- what is currently known about the topic -- usually includes a theoretical framework and/or discussion of previous studies

  17. PDF Writing the literature review for empirical papers

    Originality: Most papers and books focus on literature review as full articles (systematic reviews, meta analyses and critical analyses) or dissertation, chapters, this paper is focused on literature review for an empirical article. Research method: It is a theoretical essay.

  18. What is a Literature Review? How to Write It (with Examples)

    A literature review is a critical analysis and synthesis of existing research on a particular topic. It provides an overview of the current state of knowledge, identifies gaps, and highlights key findings in the literature. 1 The purpose of a literature review is to situate your own research within the context of existing scholarship, demonstrating your understanding of the topic and showing ...

  19. A Systematic Literature Review of Empirical Research on the Impacts of

    This systematic literature review examines 60 empirical studies on the impacts of e-Government published in the leading public administration and information systems journals. The impacts are classified using public value theory, first, by the role for whom value is generated and, second, by the nature of the impact.

  20. PSYC 200 Lab in Experimental Methods (Atlanta)

    A review article or "literature review" discusses past research studies on a given topic. How to recognize empirical journal articles Definition of an empirical study: An empirical research article reports the results of a study that uses data derived from actual observation or experimentation.

  21. Introduction to systematic review and meta-analysis

    A systematic review attempts to gather all available empirical research by using clearly defined, systematic methods to obtain answers to a specific question. ... When performing a systematic literature review or meta-analysis, if the quality of studies is not properly evaluated or if proper methodology is not strictly applied, the results can ...

  22. Difference between theoretical literature review and empirical

    Theoretical literature review focuses on the existing theories, models and concepts that are relevant to a research topic. It does not collect or analyze primary data, but rather synthesizes and ...

  23. LITERATURE REVIEWS

    2. Motivate your research: in addition to providing useful information about your topic, your literature review must tell a story about how your project relates to existing literature. Popular literature review narratives include: plugging a gap / filling a hole within an incomplete literature; building a bridge between two "siloed" literatures, putting literatures "in conversation"

  24. Online self-disclosure: An interdisciplinary literature review of 10

    This article reviews 309 empirical studies about online self-disclosure published between 2010 and 2020 and aggregates insights thereof into an overarching model describing the ways in which this socio-technical undertaking unfolds.

  25. A structured literature review of empirical research on mandatory

    A literature review protocol was defined to synthesize and critically evaluate the development and focus of empirical research on MAR. A broad query was launched in Scopus, Web of Science, and Google Scholar, which identified 128 articles coded according to the analytical framework that included timeframe, location, rotation type, research ...

  26. II Structure of KG-EmpiRE and the Repository

    Overall, KG-EmpiRE and its analysis lay the foundation for a sustainable literature review on the state and evolution of empirical research in requirements engineering. They can be used to replicate the results from the related publication [1], (re-)use the data for further studies, and repeat the research approach for sustainable literature ...

  27. The Vagueness of Integrating the Empirical and the Normative ...

    The integration of normative analysis with empirical data often remains unclear despite the availability of many empirical bioethics methodologies. This paper sought bioethics scholars' experiences of and reflections on doing empirical bioethics research in order to feed these practical insights into the debate on methods. We interviewed twenty-six participants who revealed their process of integrating ...

  28. Chapter 9 Methods for Literature Reviews

    Literature reviews can take two major forms. The most prevalent one is the "literature review" or "background" section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses (Sylvester, Tate, & Johnstone, 2013).

  29. Review: "Rent control effects through the lens of empirical research

    The Journal of Housing Economics has recently published a paper entitled "Rent control effects through the lens of empirical research: An almost complete review of the literature", written by Dr Konstantin A. Kholodilin from the German Institute for Economic Research (DIW). It is a meta-study which summarises the empirical literature of the various effects of rent controls.

  30. One-dimensional modelling of sensible heat storage tanks ...

    The review is structured into four main sections, starting with an examination of storage-exchanger assembly configurations discussed from a modelling standpoint, followed by an overview of the fundamental principles and limitations of one-dimensional models, a critical review of contemporary methodologies to determine the overall heat transfer ...