
Chapter 15: Interpreting results and drawing conclusions

Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie A Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Key Points:

  • This chapter provides guidance on interpreting the results of synthesis in order to communicate the conclusions of the review effectively.
  • Methods are presented for computing, presenting and interpreting relative and absolute effects for dichotomous outcome data, including the number needed to treat (NNT).
  • For continuous outcome measures, review authors can present summary results for studies using natural units of measurement or as minimal important differences when all studies use the same scale. When studies measure the same construct but with different scales, review authors will need to find a way to interpret the standardized mean difference, or to use an alternative effect measure for the meta-analysis such as the ratio of means.
  • Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values, but report the confidence interval together with the exact P value.
  • Review authors should not make recommendations about healthcare decisions, but they can – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences and other factors that determine a decision such as cost.

Cite this chapter as: Schünemann HJ, Vist GE, Higgins JPT, Santesso N, Deeks JJ, Glasziou P, Akl EA, Guyatt GH. Chapter 15: Interpreting results and drawing conclusions. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook.

15.1 Introduction

The purpose of Cochrane Reviews is to facilitate healthcare decisions by patients and the general public, clinicians, guideline developers, administrators and policy makers. They also inform future research. A clear statement of findings, a considered discussion and a clear presentation of the authors’ conclusions are, therefore, important parts of the review. In particular, the following issues can help people make better informed decisions and increase the usability of Cochrane Reviews:

  • information on all important outcomes, including adverse outcomes;
  • the certainty of the evidence for each of these outcomes, as it applies to specific populations and specific interventions; and
  • clarification of the manner in which particular values and preferences may bear on the desirable and undesirable consequences of the intervention.

A ‘Summary of findings’ table, described in Chapter 14 , Section 14.1 , provides key pieces of information about health benefits and harms in a quick and accessible format. It is highly desirable that review authors include a ‘Summary of findings’ table in Cochrane Reviews alongside a sufficient description of the studies and meta-analyses to support its contents. This description includes the rating of the certainty of evidence, also called the quality of the evidence or confidence in the estimates of the effects, which is expected in all Cochrane Reviews.

‘Summary of findings’ tables are usually supported by full evidence profiles which include the detailed ratings of the evidence (Guyatt et al 2011a, Guyatt et al 2013a, Guyatt et al 2013b, Santesso et al 2016). The Discussion section of the text of the review provides space to reflect and consider the implications of these aspects of the review’s findings. Cochrane Reviews include five standard subheadings to ensure the Discussion section places the review in an appropriate context: ‘Summary of main results (benefits and harms)’; ‘Potential biases in the review process’; ‘Overall completeness and applicability of evidence’; ‘Certainty of the evidence’; and ‘Agreements and disagreements with other studies or reviews’. Following the Discussion, the Authors’ conclusions section is divided into two standard subsections: ‘Implications for practice’ and ‘Implications for research’. The assessment of the certainty of evidence facilitates a structured description of the implications for practice and research.

Because Cochrane Reviews have an international audience, the Discussion and Authors’ conclusions should, so far as possible, assume a broad international perspective and provide guidance for how the results could be applied in different settings, rather than being restricted to specific national or local circumstances. Cultural differences and economic differences may both play an important role in determining the best course of action based on the results of a Cochrane Review. Furthermore, individuals within societies have widely varying values and preferences regarding health states, and use of societal resources to achieve particular health states. For all these reasons, and because information that goes beyond that included in a Cochrane Review is required to make fully informed decisions, different people will often make different decisions based on the same evidence presented in a review.

Thus, review authors should avoid specific recommendations that inevitably depend on assumptions about available resources, values and preferences, and other factors such as equity considerations, feasibility and acceptability of an intervention. The purpose of the review should be to present information and aid interpretation rather than to offer recommendations. The discussion and conclusions should help people understand the implications of the evidence in relation to practical decisions and apply the results to their specific situation. Review authors can aid this understanding of the implications by laying out different scenarios that describe certain value structures.

In this chapter, we address first one of the key aspects of interpreting findings that is also fundamental in completing a ‘Summary of findings’ table: the certainty of evidence related to each of the outcomes. We then provide a more detailed consideration of issues around applicability and around interpretation of numerical results, and provide suggestions for presenting authors’ conclusions.

15.2 Issues of indirectness and applicability

15.2.1 The role of the review author

“A leap of faith is always required when applying any study findings to the population at large” or to a specific person. “In making that jump, one must always strike a balance between making justifiable broad generalizations and being too conservative in one’s conclusions” (Friedman et al 1985). In addition to issues about risk of bias and other domains determining the certainty of evidence, this leap of faith is related to how well the identified body of evidence matches the posed PICO ( Population, Intervention, Comparator(s) and Outcome ) question. As to the population, no individual can be entirely matched to the population included in research studies. At the time of decision, there will always be differences between the study population and the person or population to whom the evidence is applied; sometimes these differences are slight, sometimes large.

The terms applicability, generalizability, external validity and transferability are related, sometimes used interchangeably and have in common that they lack a clear and consistent definition in the classic epidemiological literature (Schünemann et al 2013). However, all of the terms describe one overarching theme: whether or not available research evidence can be directly used to answer the health and healthcare question at hand, ideally supported by a judgement about the degree of confidence in this use (Schünemann et al 2013). GRADE’s certainty domains include a judgement about ‘indirectness’ to describe all of these aspects including the concept of direct versus indirect comparisons of different interventions (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011b).

To address adequately the extent to which a review is relevant for the purpose to which it is being put, there are certain things the review author must do, and certain things the user of the review must do to assess the degree of indirectness. Cochrane and the GRADE Working Group suggest using a very structured framework to address indirectness. We discuss here and in Chapter 14 what the review author can do to help the user. Cochrane Review authors must be extremely clear on the population, intervention and outcomes that they intend to address. Chapter 14, Section 14.1.2 , also emphasizes a crucial step: the specification of all patient-important outcomes relevant to the intervention strategies under comparison.

In considering whether the effect of an intervention applies equally to all participants, and whether different variations on the intervention have similar effects, review authors need to make a priori hypotheses about possible effect modifiers, and then examine those hypotheses (see Chapter 10, Section 10.10 and Section 10.11 ). If they find apparent subgroup effects, they must ultimately decide whether or not these effects are credible (Sun et al 2012). Differences between subgroups, particularly those that correspond to differences between studies, should be interpreted cautiously. Some chance variation between subgroups is inevitable so, unless there is good reason to believe that there is an interaction, review authors should not assume that the subgroup effect exists. If, despite due caution, review authors judge subgroup effects in terms of relative effect estimates as credible (i.e. the effects differ credibly), they should conduct separate meta-analyses for the relevant subgroups, and produce separate ‘Summary of findings’ tables for those subgroups.

The user of the review will be challenged with ‘individualization’ of the findings, whether they seek to apply the findings to an individual patient or a policy decision in a specific context. For example, even if relative effects are similar across subgroups, absolute effects will differ according to baseline risk. Review authors can help provide this information by presenting results for identifiable groups of people with varying baseline risks in the ‘Summary of findings’ tables, as discussed in Chapter 14, Section 14.1.3 . Users can then identify their specific case or population as belonging to a particular risk group, if relevant, and assess their likely magnitude of benefit or harm accordingly. A brief description of the identifying prognostic or baseline risk factors (e.g. age or gender) will help users of a review further.

Another decision users must make is whether their individual case or population of interest is so different from those included in the studies that they cannot use the results of the systematic review and meta-analysis at all. Rather than rigidly applying the inclusion and exclusion criteria of studies, it is better to ask whether or not there are compelling reasons why the evidence should not be applied to a particular patient. Review authors can sometimes help decision makers by identifying important variation where divergence might limit the applicability of results (Rothwell 2005, Schünemann et al 2006, Guyatt et al 2011b, Schünemann et al 2013), including biologic and cultural variation, and variation in adherence to an intervention.

In addressing these issues, review authors cannot be aware of, or address, the myriad of differences in circumstances around the world. They can, however, address differences of known importance to many people and, importantly, they should avoid assuming that other people’s circumstances are the same as their own in discussing the results and drawing conclusions.

15.2.2 Biological variation

Issues of biological variation that may affect the applicability of a result to a reader or population include divergence in pathophysiology (e.g. biological differences between women and men that may affect responsiveness to an intervention) and divergence in a causative agent (e.g. for infectious diseases such as malaria, which may be caused by several different parasites). The discussion of the results in the review should make clear whether the included studies addressed all or only some of these groups, and whether any important subgroup effects were found.

15.2.3 Variation in context

Some interventions, particularly non-pharmacological interventions, may work in some contexts but not in others; the situation has been described as program by context interaction (Hawe et al 2004). Contextual factors might pertain to the host organization in which an intervention is offered, such as the expertise, experience and morale of the staff expected to carry out the intervention, the competing priorities for the clinician’s or staff’s attention, the local resources such as service and facilities made available to the program and the status or importance given to the program by the host organization. Broader context issues might include aspects of the system within which the host organization operates, such as the fee or payment structure for healthcare providers and the local insurance system. Some interventions, in particular complex interventions (see Chapter 17 ), can be only partially implemented in some contexts, and this requires judgements about indirectness of the intervention and its components for readers in that context (Schünemann 2013).

Contextual factors may also pertain to the characteristics of the target group or population, such as cultural and linguistic diversity, socio-economic position, rural/urban setting. These factors may mean that a particular style of care or relationship evolves between service providers and consumers that may or may not match the values and technology of the program.

For many years these aspects have been acknowledged when decision makers have argued that results of evidence reviews from other countries do not apply in their own country or setting. Whilst some programmes/interventions have been successfully transferred from one context to another, others have not (Resnicow et al 1993, Lumley et al 2004, Coleman et al 2015). Review authors should be cautious when making generalizations from one context to another. They should report on the presence (or otherwise) of context-related information in intervention studies, where this information is available.

15.2.4 Variation in adherence

Variation in the adherence of the recipients and providers of care can limit the certainty in the applicability of results. Predictable differences in adherence can be due to divergence in how recipients of care perceive the intervention (e.g. the importance of side effects), economic conditions or attitudes that make some forms of care inaccessible in some settings, such as in low-income countries (Dans et al 2007). It should not be assumed that high levels of adherence in closely monitored randomized trials will translate into similar levels of adherence in normal practice.

15.2.5 Variation in values and preferences

Decisions about healthcare management strategies and options involve trading off health benefits and harms. The right choice may differ for people with different values and preferences (i.e. the importance people place on the outcomes and interventions), and it is important that decision makers ensure that decisions are consistent with a patient or population’s values and preferences. The importance placed on outcomes, together with other factors, will influence whether the recipients of care will or will not accept an option that is offered (Alonso-Coello et al 2016) and, thus, can be one factor influencing adherence. In Section 15.6 , we describe how the review author can help this process and the limits of supporting decision making based on intervention reviews.

15.3 Interpreting results of statistical analyses

15.3.1 Confidence intervals

Results for both individual studies and meta-analyses are reported with a point estimate together with an associated confidence interval. For example, ‘The odds ratio was 0.75 with a 95% confidence interval of 0.70 to 0.80’. The point estimate (0.75) is the best estimate of the magnitude and direction of the experimental intervention’s effect compared with the comparator intervention. The confidence interval describes the uncertainty inherent in any estimate, and describes a range of values within which we can be reasonably sure that the true effect actually lies. If the confidence interval is relatively narrow (e.g. 0.70 to 0.80), the effect size is known precisely. If the interval is wider (e.g. 0.60 to 0.93) the uncertainty is greater, although there may still be enough precision to make decisions about the utility of the intervention. Intervals that are very wide (e.g. 0.50 to 1.10) indicate that we have little knowledge about the effect and this imprecision affects our certainty in the evidence, and that further information would be needed before we could draw a more certain conclusion.

A 95% confidence interval is often interpreted as indicating a range within which we can be 95% certain that the true effect lies. This statement is a loose interpretation, but is useful as a rough guide. The strictly correct interpretation of a confidence interval is based on the hypothetical notion of considering the results that would be obtained if the study were repeated many times. If a study were repeated infinitely often, and on each occasion a 95% confidence interval calculated, then 95% of these intervals would contain the true effect (see Section 15.3.3 for further explanation).

The width of the confidence interval for an individual study depends to a large extent on the sample size. Larger studies tend to give more precise estimates of effects (and hence have narrower confidence intervals) than smaller studies. For continuous outcomes, precision depends also on the variability in the outcome measurements (i.e. how widely individual results vary between people in the study, measured as the standard deviation); for dichotomous outcomes it depends on the risk of the event (more frequent events allow more precision, and narrower confidence intervals), and for time-to-event outcomes it also depends on the number of events observed. All these quantities are used in computation of the standard errors of effect estimates from which the confidence interval is derived.

The width of a confidence interval for a meta-analysis depends on the precision of the individual study estimates and on the number of studies combined. In addition, for random-effects models, precision will decrease with increasing heterogeneity and confidence intervals will widen correspondingly (see Chapter 10, Section 10.10.4 ). As more studies are added to a meta-analysis the width of the confidence interval usually decreases. However, if the additional studies increase the heterogeneity in the meta-analysis and a random-effects model is used, it is possible that the confidence interval width will increase.

Confidence intervals and point estimates have different interpretations in fixed-effect and random-effects models. While the fixed-effect estimate and its confidence interval address the question ‘what is the best (single) estimate of the effect?’, the random-effects estimate assumes there to be a distribution of effects, and the estimate and its confidence interval address the question ‘what is the best estimate of the average effect?’ A confidence interval may be reported for any level of confidence (although they are most commonly reported for 95%, and sometimes 90% or 99%). For example, the odds ratio of 0.80 could be reported with an 80% confidence interval of 0.73 to 0.88; a 90% interval of 0.72 to 0.89; and a 95% interval of 0.70 to 0.92. As the confidence level increases, the confidence interval widens.
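The relation between the confidence level and the interval width can be sketched numerically. The sketch below is illustrative: the standard error is an assumed value (roughly consistent with the example’s intervals), and intervals for ratio measures are computed on the log scale, as is conventional.

```python
from math import exp, log
from statistics import NormalDist

def or_confidence_interval(or_estimate, se_log_or, level=0.95):
    """Confidence interval for an odds ratio, computed on the log scale.
    `se_log_or` is the standard error of log(OR)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. 1.96 for 95%
    log_or = log(or_estimate)
    return exp(log_or - z * se_log_or), exp(log_or + z * se_log_or)

# Illustrative values: OR = 0.80 with an assumed SE(log OR) of 0.07
for level in (0.80, 0.90, 0.95):
    lo, hi = or_confidence_interval(0.80, 0.07, level)
    print(f"{int(level * 100)}% CI: {lo:.2f} to {hi:.2f}")
```

As in the text’s example, the interval widens as the confidence level increases while the point estimate stays fixed.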

There is logical correspondence between the confidence interval and the P value (see Section 15.3.3 ). The 95% confidence interval for an effect will exclude the null value (such as an odds ratio of 1.0 or a risk difference of 0) if and only if the test of significance yields a P value of less than 0.05. If the P value is exactly 0.05, then either the upper or lower limit of the 95% confidence interval will be at the null value. Similarly, the 99% confidence interval will exclude the null if and only if the test of significance yields a P value of less than 0.01.
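This correspondence between the confidence interval and the P value can be checked directly. The function names and the illustrative OR/SE pairs below are assumptions made for the sketch, not values from the text.

```python
from math import exp, log
from statistics import NormalDist

def z_test_p_value(or_estimate, se_log_or):
    """Two-sided P value for H0: OR = 1, via a Z-test on the log scale."""
    z = log(or_estimate) / se_log_or
    return 2 * (1 - NormalDist().cdf(abs(z)))

def ci_excludes_null(or_estimate, se_log_or, level=0.95):
    """True if the confidence interval excludes the null value OR = 1."""
    z_crit = NormalDist().inv_cdf(0.5 + level / 2)
    lo = exp(log(or_estimate) - z_crit * se_log_or)
    hi = exp(log(or_estimate) + z_crit * se_log_or)
    return lo > 1 or hi < 1

# The 95% CI excludes OR = 1 exactly when the two-sided P value < 0.05
for or_est, se in [(0.75, 0.10), (0.90, 0.10), (0.99, 0.30)]:
    p = z_test_p_value(or_est, se)
    print(or_est, se, round(p, 4), ci_excludes_null(or_est, se))
```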

Together, the point estimate and confidence interval provide information to assess the effects of the intervention on the outcome. For example, suppose that we are evaluating an intervention that reduces the risk of an event and we decide that it would be useful only if it reduced the risk of an event from 30% by at least 5 percentage points to 25% (these values will depend on the specific clinical scenario and outcomes, including the anticipated harms). If the meta-analysis yielded an effect estimate of a reduction of 10 percentage points with a tight 95% confidence interval, say, from 7% to 13%, we would be able to conclude that the intervention was useful since both the point estimate and the entire range of the interval exceed our criterion of a reduction of 5% for net health benefit. However, if the meta-analysis reported the same risk reduction of 10% but with a wider interval, say, from 2% to 18%, although we would still conclude that our best estimate of the intervention effect is that it provides net benefit, we could not be so confident as we still entertain the possibility that the effect could be between 2% and 5%. If the confidence interval was wider still, and included the null value of a difference of 0%, we would still consider the possibility that the intervention has no effect on the outcome whatsoever, and would need to be even more sceptical in our conclusions.

Review authors may use the same general approach to conclude that an intervention is not useful. Continuing with the above example where the criterion for an important difference that should be achieved to provide more benefit than harm is a 5% risk difference, an effect estimate of 2% with a 95% confidence interval of 1% to 4% suggests that the intervention does not provide net health benefit.
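The decision logic of the two paragraphs above can be summarized in a small sketch. The 5-percentage-point threshold and the example intervals follow the text; the function and its category labels are illustrative, not a prescribed classification scheme.

```python
def interpret_risk_reduction(lower, upper, mid=5.0):
    """Classify a risk-reduction confidence interval (in percentage
    points) against a minimal important difference (MID)."""
    if lower >= mid:
        return "net benefit"         # whole interval exceeds the MID
    if upper < mid and lower > 0:
        return "no net benefit"      # a real but unimportantly small effect
    if lower <= 0:
        return "possibly no effect"  # interval includes the null value
    return "uncertain"               # interval straddles the MID

print(interpret_risk_reduction(7, 13))   # the tight interval: net benefit
print(interpret_risk_reduction(2, 18))   # the wide interval: uncertain
print(interpret_risk_reduction(1, 4))    # entirely below the MID: no net benefit
print(interpret_risk_reduction(-2, 18))  # includes 0: possibly no effect
```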

15.3.2 P values and statistical significance

A P value is the standard result of a statistical test, and is the probability of obtaining the observed effect (or larger) under a ‘null hypothesis’. In the context of Cochrane Reviews there are two commonly used statistical tests. The first is a test of overall effect (a Z-test), and its null hypothesis is that there is no overall effect of the experimental intervention compared with the comparator on the outcome of interest. The second is the Chi² test for heterogeneity, and its null hypothesis is that there are no differences in the intervention effects across studies.

A P value that is very small indicates that the observed effect is very unlikely to have arisen purely by chance, and therefore provides evidence against the null hypothesis. It has been common practice to interpret a P value by examining whether it is smaller than particular threshold values. In particular, P values less than 0.05 are often reported as ‘statistically significant’, and interpreted as being small enough to justify rejection of the null hypothesis. However, the 0.05 threshold is an arbitrary one that became commonly used in medical and psychological research largely because P values were determined by comparing the test statistic against tabulations of specific percentage points of statistical distributions. If review authors decide to present a P value with the results of a meta-analysis, they should report a precise P value (as calculated by most statistical software), together with the 95% confidence interval. Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values , but report the confidence interval together with the exact P value (see MECIR Box 15.3.a ).

We discuss interpretation of the test for heterogeneity in Chapter 10, Section 10.10.2 ; the remainder of this section refers mainly to tests for an overall effect. For tests of an overall effect, the computation of P involves both the effect estimate and precision of the effect estimate (driven largely by sample size). As precision increases, the range of plausible effects that could occur by chance is reduced. Correspondingly, the statistical significance of an effect of a particular magnitude will usually be greater (the P value will be smaller) in a larger study than in a smaller study.
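A minimal illustration of this point, with assumed standard errors standing in for study size: the same effect estimate yields a smaller P value when the standard error is smaller (i.e. when the study is larger).

```python
from math import log
from statistics import NormalDist

def two_sided_p(log_effect, se):
    """Two-sided P value from a Z-test of a (log-scale) effect estimate."""
    z = log_effect / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Same odds ratio of 0.80; the SEs are assumed values standing in for a
# small and a large study (SE shrinks roughly as 1/sqrt of sample size).
p_small = two_sided_p(log(0.80), 0.20)  # small study
p_large = two_sided_p(log(0.80), 0.05)  # large study
print(p_small, p_large)
```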

P values are commonly misinterpreted in two ways. First, a moderate or large P value (e.g. greater than 0.05) may be misinterpreted as evidence that the intervention has no effect on the outcome. There is an important difference between this statement and the correct interpretation that there is a high probability that the observed effect on the outcome is due to chance alone. To avoid such a misinterpretation, review authors should always examine the effect estimate and its 95% confidence interval.

The second misinterpretation is to assume that a result with a small P value for the summary effect estimate implies that an experimental intervention has an important benefit. Such a misinterpretation is more likely to occur in large studies and meta-analyses that accumulate data over dozens of studies and thousands of participants. The P value addresses the question of whether the experimental intervention effect is precisely nil; it does not examine whether the effect is of a magnitude of importance to potential recipients of the intervention. In a large study, a small P value may represent the detection of a trivial effect that may not lead to net health benefit when compared with the potential harms (i.e. harmful effects on other important outcomes). Again, inspection of the point estimate and confidence interval helps correct interpretations (see Section 15.3.1 ).

MECIR Box 15.3.a Relevant expectations for conduct of intervention reviews

15.3.3 Relation between confidence intervals, statistical significance and certainty of evidence

The confidence interval (and imprecision) is only one domain that influences overall uncertainty about effect estimates. Uncertainty resulting from imprecision (i.e. statistical uncertainty) may be no less important than uncertainty from indirectness, or any other GRADE domain, in the context of decision making (Schünemann 2016). Thus, the extent to which interpretations of the confidence interval described in Sections 15.3.1 and 15.3.2 correspond to conclusions about overall certainty of the evidence for the outcome of interest depends on these other domains. If there are no concerns about other domains that determine the certainty of the evidence (i.e. risk of bias, inconsistency, indirectness or publication bias), then the interpretation in Sections 15.3.1 and 15.3.2 about the relation of the confidence interval to the true effect may be carried forward to the overall certainty. However, if there are concerns about the other domains that affect the certainty of the evidence, the interpretation about the true effect needs to be seen in the context of further uncertainty resulting from those concerns.

For example, nine randomized controlled trials in almost 6000 cancer patients indicated that the administration of heparin reduces the risk of venous thromboembolism (VTE), with a relative risk reduction of 43% (95% CI 19% to 60%) (Akl et al 2011a). For patients with a plausible baseline risk of approximately 4.6% per year, this relative effect suggests that heparin leads to an absolute risk reduction of 20 fewer VTEs (95% CI 9 fewer to 27 fewer) per 1000 people per year (Akl et al 2011a). Now consider that the review authors or those applying the evidence in a guideline have lowered the certainty in the evidence as a result of indirectness. While the confidence intervals would remain unchanged, the certainty in that confidence interval and in the point estimate as reflecting the truth for the question of interest will be lowered. In fact, the certainty range will have unknown width so there will be unknown likelihood of a result within that range because of this indirectness. The lower the certainty in the evidence, the less we know about the width of the certainty range, although methods for quantifying risk of bias and understanding potential direction of bias may offer insight when lowered certainty is due to risk of bias. Nevertheless, decision makers must consider this uncertainty, and must do so in relation to the effect measure that is being evaluated (e.g. a relative or absolute measure). We will describe the impact on interpretations for dichotomous outcomes in Section 15.4 .
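The arithmetic behind these absolute numbers can be sketched as follows; the function name and the rounding convention are ours, so the last digits may differ slightly from the published figures.

```python
def fewer_per_1000(baseline_risk, rrr):
    """Absolute effect per 1000 people: baseline risk × relative risk
    reduction, both supplied as proportions."""
    return round(1000 * baseline_risk * rrr)

# Heparin example: baseline risk ~4.6% per year, RRR 43% (95% CI 19% to 60%)
print(fewer_per_1000(0.046, 0.43))  # point estimate: 20 fewer VTEs
print(fewer_per_1000(0.046, 0.19))  # lower limit: 9 fewer
print(fewer_per_1000(0.046, 0.60))  # upper limit
```

The upper limit computes to 27.6 fewer per 1000, which the published analysis reports as 27.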

15.4 Interpreting results from dichotomous outcomes (including numbers needed to treat)

15.4.1 Relative and absolute risk reductions

Clinicians may be more inclined to prescribe an intervention that reduces the relative risk of death by 25% than one that reduces the risk of death by 1 percentage point, although both presentations of the evidence may relate to the same benefit (i.e. a reduction in risk from 4% to 3%). The former refers to the relative reduction in risk and the latter to the absolute reduction in risk. As described in Chapter 6, Section 6.4.1 , there are several measures for comparing dichotomous outcomes in two groups. Meta-analyses are usually undertaken using risk ratios (RR), odds ratios (OR) or risk differences (RD), but there are several alternative ways of expressing results.

Relative risk reduction (RRR) is a convenient way of re-expressing a risk ratio as a percentage reduction:

RRR = 100% × (1 − RR)

For example, a risk ratio of 0.75 translates to a relative risk reduction of 25%, as in the example above.
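This re-expression is a one-liner; the function name below is illustrative.

```python
def relative_risk_reduction(risk_ratio):
    """Re-express a risk ratio as a percentage relative risk reduction:
    RRR = 100% × (1 − RR)."""
    return 100 * (1 - risk_ratio)

print(relative_risk_reduction(0.75))  # 25.0, as in the example above
```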

The risk difference is often referred to as the absolute risk reduction (ARR) or absolute risk increase (ARI), and may be presented as a percentage (e.g. 1%), as a decimal (e.g. 0.01), or as a count (e.g. 10 out of 1000). We consider different choices for presenting absolute effects in Section 15.4.3 . We then describe computations for obtaining these numbers from the results of individual studies and of meta-analyses in Section 15.4.4 .

15.4.2 Number needed to treat (NNT)

The number needed to treat (NNT) is a common alternative way of presenting information on the effect of an intervention. The NNT is defined as the expected number of people who need to receive the experimental rather than the comparator intervention for one additional person to either incur or avoid an event (depending on the direction of the result) in a given time frame. Thus, for example, an NNT of 10 can be interpreted as ‘it is expected that one additional (or one fewer) person will incur an event for every 10 participants receiving the experimental intervention rather than comparator over a given time frame’. It is important to be clear that:

  • since the NNT is derived from the risk difference, it is still a comparative measure of effect (experimental versus a specific comparator) and not a general property of a single intervention; and
  • the NNT gives an ‘expected value’. For example, NNT = 10 does not imply that one additional event will occur in each and every group of 10 people.

NNTs can be computed for both beneficial and detrimental events, and for interventions that cause both improvements and deteriorations in outcomes. In all instances NNTs are expressed as positive whole numbers. Some authors use the term ‘number needed to harm’ (NNH) when an intervention leads to an adverse outcome, or a decrease in a positive outcome, rather than improvement. However, this phrase can be misleading (most notably, it can easily be read to imply the number of people who will experience a harmful outcome if given the intervention), and it is strongly recommended that ‘number needed to harm’ and ‘NNH’ are avoided. The preferred alternative is to use phrases such as ‘number needed to treat for an additional beneficial outcome’ (NNTB) and ‘number needed to treat for an additional harmful outcome’ (NNTH) to indicate direction of effect.

As NNTs refer to events, their interpretation needs to be worded carefully when the binary outcome is a dichotomization of a scale-based outcome. For example, if the outcome is pain measured on a ‘none, mild, moderate or severe’ scale it may have been dichotomized as ‘none or mild’ versus ‘moderate or severe’. It would be inappropriate for an NNT from these data to be referred to as an ‘NNT for pain’. It is an ‘NNT for moderate or severe pain’.

15.4.3 Expressing risk differences

Users of reviews are liable to be influenced by the choice of statistical presentations of the evidence. Hoffrage and colleagues suggest that physicians’ inferences about statistical outcomes are more appropriate when they deal with ‘natural frequencies’ – whole numbers of people, both treated and untreated (e.g. treatment results in a drop from 20 out of 1000 to 10 out of 1000 women having breast cancer) – than when effects are presented as percentages (e.g. 1% absolute reduction in breast cancer risk) (Hoffrage et al 2000). Probabilities may be more difficult to understand than frequencies, particularly when events are rare. While standardization may be important in improving the presentation of research evidence (and participation in healthcare decisions), current evidence suggests that the presentation of natural frequencies for expressing differences in absolute risk is best understood by consumers of healthcare information (Akl et al 2011b). This evidence provides the rationale for presenting absolute risks in ‘Summary of findings’ tables as numbers of people with events per 1000 people receiving the intervention (see Chapter 14 ).

RRs and RRRs remain crucial because relative effects tend to be substantially more stable across risk groups than absolute effects (see Chapter 10, Section 10.4.3 ). Review authors can use their own data to study this consistency (Cates 1999, Smeeth et al 1999). Risk differences from studies are least likely to be consistent across baseline event rates; thus, they are rarely appropriate for computing numbers needed to treat in systematic reviews. If a relative effect measure (OR or RR) is chosen for meta-analysis, then a comparator group risk needs to be specified as part of the calculation of an RD or NNT. In addition, if there are several different groups of participants with different levels of risk, it is crucial to express absolute benefit for each clinically identifiable risk group, clarifying the time period to which this applies. Studies in patients with differing severity of disease, or studies with different lengths of follow-up will almost certainly have different comparator group risks. In these cases, different comparator group risks lead to different RDs and NNTs (except when the intervention has no effect). A recommended approach is to re-express an odds ratio or a risk ratio as a variety of RDs or NNTs across a range of assumed comparator risks (ACRs) (McQuay and Moore 1997, Smeeth et al 1999). Review authors should bear these considerations in mind not only when constructing their ‘Summary of findings’ table, but also in the text of their review.

For example, a review of oral anticoagulants to prevent stroke presented information to users by describing absolute benefits for various baseline risks (Aguilar and Hart 2005, Aguilar et al 2007). They presented their principal findings as “The inherent risk of stroke should be considered in the decision to use oral anticoagulants in atrial fibrillation patients, selecting those who stand to benefit most for this therapy” (Aguilar and Hart 2005). Among high-risk atrial fibrillation patients with prior stroke or transient ischaemic attack who have stroke rates of about 12% (120 per 1000) per year, warfarin prevents about 70 strokes yearly per 1000 patients, whereas for low-risk atrial fibrillation patients (with a stroke rate of about 2% per year or 20 per 1000), warfarin prevents only 12 strokes. This presentation helps users to understand the important impact that typical baseline risks have on the absolute benefit that they can expect.

15.4.4 Computations

Direct computation of risk difference (RD) or a number needed to treat (NNT) depends on the summary statistic (odds ratio, risk ratio or risk differences) available from the study or meta-analysis. When expressing results of meta-analyses, review authors should use, in the computations, whatever statistic they determined to be the most appropriate summary for meta-analysis (see Chapter 10, Section 10.4.3 ). Here we present calculations to obtain RD as a reduction in the number of participants per 1000. For example, a risk difference of –0.133 corresponds to 133 fewer participants with the event per 1000.

RDs and NNTs should not be computed from the aggregated total numbers of participants and events across the trials. This approach ignores the randomization within studies, and may produce seriously misleading results if there is unbalanced randomization in any of the studies. Using the pooled result of a meta-analysis is more appropriate. When computing NNTs, the values obtained are by convention always rounded up to the next whole number.

15.4.4.1 Computing NNT from a risk difference (RD)

A NNT may be computed from a risk difference as

NNT = 1 / |RD|

where the vertical bars (‘absolute value of’) in the denominator indicate that any minus sign should be ignored. It is convention to round the NNT up to the nearest whole number. For example, if the risk difference is –0.12 the NNT is 9; if the risk difference is –0.22 the NNT is 5. Cochrane Review authors should qualify the NNT as referring to benefit (improvement) or harm by denoting the NNT as NNTB or NNTH. Note that this approach, although feasible, should be used only for the results of a meta-analysis of risk differences. In most cases meta-analyses will be undertaken using a relative measure of effect (RR or OR), and those statistics should be used to calculate the NNT (see Section 15.4.4.2 and 15.4.4.3 ).
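
As an illustrative sketch only (the function name is our own, not part of the Handbook), the computation and rounding convention described above can be written in Python:

```python
import math

def nnt_from_rd(rd):
    """NNT from a risk difference: 1 / |RD|, rounded up to the next
    whole number, ignoring the sign of the risk difference."""
    if rd == 0:
        raise ValueError("NNT is undefined when the risk difference is zero")
    return math.ceil(1 / abs(rd))

# Examples from the text:
print(nnt_from_rd(-0.12))  # 9
print(nnt_from_rd(-0.22))  # 5
```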

15.4.4.2 Computing risk differences or NNT from a risk ratio

To aid interpretation of the results of a meta-analysis of risk ratios, review authors may compute an absolute risk reduction or NNT. In order to do this, an assumed comparator risk (ACR) (otherwise known as a baseline risk, or risk that the outcome of interest would occur with the comparator intervention) is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

risk difference = ACR × (1 − RR)

As an example, suppose the risk ratio is RR = 0.92, and an ACR = 0.3 (300 per 1000) is assumed. Then the effect on risk is 24 fewer per 1000:

risk difference = 0.3 × (1 − 0.92) = 0.024, i.e. 24 fewer per 1000

The NNT is 42:

NNT = 1 / |ACR × (1 − RR)| = 1 / 0.024 = 41.7, which is rounded up to 42
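
The same worked example can be checked with a short Python sketch (the function names are ours, for illustration only):

```python
import math

def rd_from_rr(rr, acr):
    """Risk reduction at an assumed comparator risk (ACR) for a risk
    ratio RR: ACR * (1 - RR). Positive values mean fewer events with
    the experimental intervention."""
    return acr * (1 - rr)

def nnt_from_rr(rr, acr):
    """NNT from a risk ratio, rounded up by convention."""
    return math.ceil(1 / abs(rd_from_rr(rr, acr)))

# Worked example from the text: RR = 0.92 at ACR = 0.3
print(round(rd_from_rr(0.92, 0.3) * 1000))  # 24 fewer per 1000
print(nnt_from_rr(0.92, 0.3))               # 42
```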

15.4.4.3 Computing risk differences or NNT from an odds ratio

Review authors may wish to compute a risk difference or NNT from the results of a meta-analysis of odds ratios. In order to do this, an ACR is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

risk difference = ACR − (OR × ACR) / (1 − ACR + OR × ACR)

As an example, suppose the odds ratio is OR = 0.73, and a comparator risk of ACR = 0.3 is assumed. Then the effect on risk is 62 fewer per 1000:

risk difference = 0.3 − (0.73 × 0.3) / (1 − 0.3 + 0.73 × 0.3) = 0.3 − 0.238 = 0.062, i.e. 62 fewer per 1000

The NNT is 17:

NNT = 1 / 0.062 = 16.2, which is rounded up to 17
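
Again as an illustrative sketch (function names are ours), the odds ratio computation proceeds as follows:

```python
import math

def rd_from_or(or_, acr):
    """Risk reduction at an assumed comparator risk (ACR) for an odds
    ratio OR: ACR - (OR * ACR) / (1 - ACR + OR * ACR). Positive
    values mean fewer events with the experimental intervention."""
    return acr - (or_ * acr) / (1 - acr + or_ * acr)

def nnt_from_or(or_, acr):
    """NNT from an odds ratio, rounded up by convention."""
    return math.ceil(1 / abs(rd_from_or(or_, acr)))

# Worked example from the text: OR = 0.73 at ACR = 0.3
print(round(rd_from_or(0.73, 0.3) * 1000))  # 62 fewer per 1000
print(nnt_from_or(0.73, 0.3))               # 17
```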

15.4.4.4 Computing risk ratio from an odds ratio

Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio (or relative risk reduction). This requires an ACR. Then

RR = OR / (1 − ACR × (1 − OR))

It will often be reasonable to perform this transformation using the median comparator group risk from the studies in the meta-analysis.
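
As a sketch (the function name is ours), this transformation applied to the worked values used in Section 15.4.4.3 gives:

```python
def rr_from_or(or_, acr):
    """Convert an odds ratio to a risk ratio at an assumed comparator
    risk (ACR): RR = OR / (1 - ACR * (1 - OR))."""
    return or_ / (1 - acr * (1 - or_))

# For example, OR = 0.73 at ACR = 0.3 corresponds to an RR of about
# 0.79; an OR of 1 (no effect) is unchanged at any ACR.
print(round(rr_from_or(0.73, 0.3), 2))  # 0.79
print(rr_from_or(1.0, 0.3))             # 1.0
```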

15.4.4.5 Computing confidence limits

Confidence limits for RDs and NNTs may be calculated by applying the above formulae to the upper and lower confidence limits for the summary statistic (RD, RR or OR) (Altman 1998). Note that this confidence interval does not incorporate uncertainty around the ACR.

If the 95% confidence interval of OR or RR includes the value 1, one of the confidence limits will indicate benefit and the other harm. Thus, appropriate use of the words ‘fewer’ and ‘more’ is required for each limit when presenting results in terms of events. For NNTs, the two confidence limits should be labelled as NNTB and NNTH to indicate the direction of effect in each case. The confidence interval for the NNT will include a ‘discontinuity’, because increasingly smaller risk differences that approach zero will lead to NNTs approaching infinity. Thus, the confidence interval will include both an infinitely large NNTB and an infinitely large NNTH.
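
To illustrate the labelling of the two limits (a sketch only; the function and its output format are our own), the RR-to-NNT transformation can be applied to each confidence limit in turn:

```python
import math

def nnt_ci_from_rr(rr_low, rr_high, acr):
    """Apply the RR-to-risk-difference transformation to each
    confidence limit of a risk ratio and label the resulting NNT as
    NNTB (fewer events, RR < 1) or NNTH (more events, RR > 1).
    Uncertainty in the ACR itself is not reflected."""
    limits = []
    for rr in (rr_low, rr_high):
        rd = acr * (1 - rr)
        if rd == 0:
            limits.append(("no effect", math.inf))
        else:
            label = "NNTB" if rd > 0 else "NNTH"
            limits.append((label, math.ceil(1 / abs(rd))))
    return limits

# A hypothetical 95% CI for RR of 0.8 to 1.1 at ACR = 0.3 spans no
# effect, so one limit is an NNTB and the other an NNTH:
print(nnt_ci_from_rr(0.8, 1.1, 0.3))  # [('NNTB', 17), ('NNTH', 34)]
```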

15.5 Interpreting results from continuous outcomes (including standardized mean differences)

15.5.1 Meta-analyses with continuous outcomes

Review authors should describe in the study protocol how they plan to interpret results for continuous outcomes. When outcomes are continuous, review authors have a number of options for presenting summary results. These options differ depending on whether the studies report the same measure that is familiar to the target audiences, the same or very similar measures that are less familiar to the target audiences, or different measures.

15.5.2 Meta-analyses with continuous outcomes using the same measure

If all studies have used the same familiar units, for instance where results are expressed as durations of events (such as duration of symptoms for conditions including diarrhoea, sore throat, otitis media or influenza, or duration of hospitalization), a meta-analysis may generate a summary estimate in those units, as a difference in mean response (see, for instance, the row summarizing results for duration of diarrhoea in Chapter 14, Figure 14.1.b and the row summarizing oedema in Chapter 14, Figure 14.1.a ). For such outcomes, the ‘Summary of findings’ table should include a difference of means between the two interventions. However, the units of such outcomes may be difficult to interpret, particularly when they relate to rating scales (again, see the oedema row of Chapter 14, Figure 14.1.a ); ‘Summary of findings’ tables should therefore include the minimum and maximum of the scale of measurement, and the direction. Knowledge of the smallest change in instrument score that patients perceive as important – the minimal important difference (MID) – can greatly facilitate the interpretation of results (Guyatt et al 1998, Schünemann and Guyatt 2005). Knowing the MID allows review authors and users to place results in context. Review authors should state the MID – if known – in the Comments column of their ‘Summary of findings’ table. For example, the chronic respiratory questionnaire has possible scores in health-related quality of life ranging from 1 to 7, and a change of 0.5 represents a well-established MID (Jaeschke et al 1989, Schünemann et al 2005).

15.5.3 Meta-analyses with continuous outcomes using different measures

When studies have used different instruments to measure the same construct, a standardized mean difference (SMD) may be used in meta-analysis for combining continuous data. Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs. Review authors should therefore consider issues of interpretability when planning their analysis at the protocol stage and should consider whether there will be suitable ways to re-express the SMD, or whether alternative effect measures should be used, such as a ratio of means or minimal important difference units (Guyatt et al 2013b). Table 15.5.a and the following sections describe these options.

Table 15.5.a Approaches to presenting results of continuous variables when primary studies have used different instruments to measure the same construct, and their implications. Adapted from Guyatt et al (2013b)

15.5.3.1 Presenting and interpreting SMDs using generic effect size estimates

The SMD expresses the intervention effect in standard units rather than the original units of measurement. The SMD is the difference in mean effects between the experimental and comparator groups divided by the pooled standard deviation of participants’ outcomes, or external SDs when studies are very small (see Chapter 6, Section 6.5.1.2 ). The value of a SMD thus depends on both the size of the effect (the difference between means) and the standard deviation of the outcomes (the inherent variability among participants or based on an external SD).

If review authors use the SMD, they might choose to present the results directly as SMDs (row 1a, Table 15.5.a and Table 15.5.b ). However, absolute values of the intervention and comparison groups are typically not useful because studies have used different measurement instruments with different units. Guiding rules for interpreting SMDs (or ‘Cohen’s effect sizes’) exist, and have arisen mainly from researchers in the social sciences (Cohen 1988). One example is as follows: 0.2 represents a small effect, 0.5 a moderate effect and 0.8 a large effect (Cohen 1988). Variations exist (e.g. <0.40=small, 0.40 to 0.70=moderate, >0.70=large). Review authors might consider including such a guiding rule in interpreting the SMD in the text of the review, and in summary versions such as the Comments column of a ‘Summary of findings’ table. However, some methodologists believe that such interpretations are problematic because patient importance of a finding is context-dependent and not amenable to generic statements.

15.5.3.2 Re-expressing SMDs using a familiar instrument

The second possibility for interpreting the SMD is to express it in the units of one or more of the specific measurement instruments used by the included studies (row 1b, Table 15.5.a and Table 15.5.b ). The approach is to calculate an absolute difference in means by multiplying the SMD by an estimate of the SD associated with the most familiar instrument. To obtain this SD, a reasonable option is to calculate a weighted average across all intervention groups of all studies that used the selected instrument (preferably a pre-intervention or post-intervention SD as discussed in Chapter 10, Section 10.5.2 ). To better reflect among-person variation in practice, or to use an instrument not represented in the meta-analysis, it may be preferable to use a standard deviation from a representative observational study. The summary effect is thus re-expressed in the original units of that particular instrument and the clinical relevance and impact of the intervention effect can be interpreted using that familiar instrument.
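
The re-expression itself is a single multiplication. As a hypothetical sketch (the function name and the illustrative numbers are ours, not from any review):

```python
def smd_to_instrument_units(smd, familiar_sd):
    """Re-express a standardized mean difference as an absolute mean
    difference on a familiar instrument by multiplying the SMD by an
    SD estimate for that instrument (e.g. a weighted average of SDs
    across studies that used it, or an SD from a representative
    observational study)."""
    return smd * familiar_sd

# Hypothetical values: an SMD of -0.5 with a familiar-instrument SD of
# 20 points corresponds to a 10-point reduction on that instrument.
print(smd_to_instrument_units(-0.5, 20))  # -10.0
```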

The same approach of re-expressing the results for a familiar instrument can also be used for other standardized effect measures such as when standardizing by MIDs (Guyatt et al 2013b): see Section 15.5.3.5 .

Table 15.5.b Application of approaches when studies have used different measures: effects of dexamethasone for pain after laparoscopic cholecystectomy (Karanicolas et al 2008). Reproduced with permission of Wolters Kluwer

1 Certainty rated according to GRADE from very low to high certainty. 2 Substantial unexplained heterogeneity in study results. 3 Imprecision due to wide confidence intervals. 4 The 20% comes from the proportion in the control group requiring rescue analgesia. 5 Crude (arithmetic) means of the post-operative pain mean responses across all five trials when transformed to a 100-point scale.

15.5.3.3 Re-expressing SMDs through dichotomization and transformation to relative and absolute measures

A third approach (row 1c, Table 15.5.a and Table 15.5.b ) relies on converting the continuous measure into a dichotomy and thus allows calculation of relative and absolute effects on a binary scale. A transformation of a SMD to a (log) odds ratio is available, based on the assumption that an underlying continuous variable has a logistic distribution with equal standard deviation in the two intervention groups, as discussed in Chapter 10, Section 10.6  (Furukawa 1999, Guyatt et al 2013b). The assumption is unlikely to hold exactly and the results must be regarded as an approximation. The log odds ratio is estimated as

ln(OR) = (π / √3) × SMD

(or approximately 1.81 × SMD). The resulting odds ratio can then be presented as normal, and in a ‘Summary of findings’ table, combined with an assumed comparator group risk to be expressed as an absolute risk difference. The comparator group risk in this case would refer to the proportion of people who have achieved a specific value of the continuous outcome. In randomized trials this can be interpreted as the proportion who have improved by some (specified) amount (responders), for instance by 5 points on a 0 to 100 scale. Table 15.5.c shows some illustrative results from this method. The risk differences can then be converted to NNTs or to people per thousand using methods described in Section 15.4.4 .
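
A minimal sketch of this transformation (the function name and the illustrative SMD are ours):

```python
import math

def or_from_smd(smd):
    """Approximate odds ratio corresponding to a standardized mean
    difference, assuming an underlying logistic distribution:
    ln(OR) = (pi / sqrt(3)) * SMD, i.e. roughly 1.81 * SMD."""
    return math.exp(math.pi / math.sqrt(3) * smd)

# Hypothetical example: an SMD of -0.5 corresponds to an OR of about 0.4
print(round(or_from_smd(-0.5), 2))  # 0.4
```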

Table 15.5.c Risk difference derived for specific SMDs for various given ‘proportions improved’ in the comparator group (Furukawa 1999, Guyatt et al 2013b). Reproduced with permission of Elsevier 

15.5.3.4 Ratio of means

A more frequently used approach is based on calculation of a ratio of means between the intervention and comparator groups (Friedrich et al 2008), as discussed in Chapter 6, Section 6.5.1.3 . Interpretational advantages of this approach include the ability to pool studies with outcomes expressed in different units directly, the avoidance of the vulnerability to heterogeneous populations that limits approaches relying on SD units, and ease of clinical interpretation (row 2, Table 15.5.a and Table 15.5.b ). This method is currently designed for post-intervention scores only. However, it is possible to calculate a ratio of change scores if both intervention and comparator groups change in the same direction in each relevant study, and this ratio may sometimes be informative.

Limitations to this approach include its limited applicability to change scores (since it is unlikely that both intervention and comparator group changes are in the same direction in all studies) and the possibility of misleading results if the comparator group mean is very small, in which case even a modest difference from the intervention group will yield a large and therefore misleading ratio of means. It also requires that separate ratios of means be calculated for each included study, and then entered into a generic inverse variance meta-analysis (see Chapter 10, Section 10.3 ).

The ratio of means approach illustrated in Table 15.5.b suggests a relative reduction in pain of only 13%, meaning that those receiving steroids have a pain severity that is 87% of that in the comparator group, an effect that might be considered modest.

15.5.3.5 Presenting continuous results as minimally important difference units

To express results in MID units, review authors have two options. First, study results can be combined across studies in the same way as for the SMD, but instead of dividing the mean difference of each study by its SD, review authors divide by the MID associated with that outcome (Johnston et al 2010, Guyatt et al 2013b). Instead of SD units, the pooled results represent MID units (row 3, Table 15.5.a and Table 15.5.b ), and may be more easily interpretable. This approach avoids the problem of varying SDs across studies that may distort estimates of effect in approaches that rely on the SMD. The approach, however, relies on having well-established MIDs. The approach is also risky in that a difference less than the MID may be interpreted as trivial when a substantial proportion of patients may have achieved an important benefit.

The other approach makes a simple conversion (not shown in Table 15.5.b ), before undertaking the meta-analysis, of the means and SDs from each study to means and SDs on the scale of a particular familiar instrument whose MID is known. For example, one can rescale the mean and SD of other chronic respiratory disease instruments (e.g. rescaling a 0 to 100 score of an instrument) to the 1 to 7 score in Chronic Respiratory Disease Questionnaire (CRQ) units (by assuming 0 equals 1 and 100 equals 7 on the CRQ). Given the MID of the CRQ of 0.5, a mean difference in change of 0.71 after rescaling of all studies suggests a substantial effect of the intervention (Guyatt et al 2013b). This approach, presenting in units of the most familiar instrument, may be the most desirable when the target audiences have extensive experience with that instrument, particularly if the MID is well established.
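
The rescaling is linear. As a sketch under the stated assumption that 0 maps to 1 and 100 maps to 7 (the function and the example numbers are ours, for illustration):

```python
def rescale_to_crq(score, sd, old_min=0.0, old_max=100.0,
                   crq_min=1.0, crq_max=7.0):
    """Linearly rescale a score and SD from a 0-100 instrument to the
    1-7 CRQ range. The SD is multiplied by the scale factor but not
    shifted; change scores should likewise only be multiplied."""
    factor = (crq_max - crq_min) / (old_max - old_min)
    return crq_min + (score - old_min) * factor, sd * factor

# Hypothetical example: a change of 12 points (SD 15) on a 0-100 scale
# becomes a change of 0.72 CRQ units (SD 0.9), above the CRQ MID of 0.5.
change = rescale_to_crq(12, 15)[0] - rescale_to_crq(0, 15)[0]
print(round(change, 2), round(rescale_to_crq(12, 15)[1], 2))  # 0.72 0.9
```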

15.6 Drawing conclusions

15.6.1 Conclusions sections of a Cochrane Review

Authors’ conclusions in a Cochrane Review are divided into implications for practice and implications for research. While Cochrane Reviews about interventions can provide meaningful information and guidance for practice, decisions about the desirable and undesirable consequences of healthcare options require evidence and judgements for criteria that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). In describing the implications for practice and the development of recommendations, however, review authors may consider the certainty of the evidence, the balance of benefits and harms, and assumed values and preferences.

15.6.2 Implications for practice

Drawing conclusions about the practical usefulness of an intervention entails making trade-offs, either implicitly or explicitly, between the estimated benefits, harms and the values and preferences. Making such trade-offs, and thus making specific recommendations for an action in a specific context, goes beyond a Cochrane Review and requires additional evidence and informed judgements that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). Such judgements are typically the domain of clinical practice guideline developers for which Cochrane Reviews will provide crucial information (Graham et al 2011, Schünemann et al 2014, Zhang et al 2018a). Thus, authors of Cochrane Reviews should not make recommendations.

If review authors feel compelled to lay out actions that clinicians and patients could take, they should – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences. Other factors that might influence a decision should also be highlighted, including any known factors that would be expected to modify the effects of the intervention, the baseline risk or status of the patient, costs and who bears those costs, and the availability of resources. Review authors should ensure they consider all patient-important outcomes, including those for which limited data may be available. In the context of public health reviews the focus may be on population-important outcomes as the target may be an entire (non-diseased) population and include outcomes that are not measured in the population receiving an intervention (e.g. a reduction of transmission of infections from those receiving an intervention). This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes and the certainty of the related evidence (Zhang et al 2018b, Zhang et al 2018c); this and a full cost-effectiveness analysis are beyond the scope of most Cochrane Reviews (although they might well be used for such analyses; see Chapter 20 ).

A review on the use of anticoagulation in cancer patients to increase survival (Akl et al 2011a) provides an example for laying out clinical implications for situations where there are important trade-offs between desirable and undesirable effects of the intervention: “The decision for a patient with cancer to start heparin therapy for survival benefit should balance the benefits and downsides and integrate the patient’s values and preferences. Patients with a high preference for a potential survival prolongation, limited aversion to potential bleeding, and who do not consider heparin (both UFH or LMWH) therapy a burden may opt to use heparin, while those with aversion to bleeding may not.”

15.6.3 Implications for research

The second category for authors’ conclusions in a Cochrane Review is implications for research. To help people make well-informed decisions about future healthcare research, the ‘Implications for research’ section should comment on the need for further research, and the nature of the further research that would be most desirable. It is helpful to consider the population, intervention, comparison and outcomes that could be addressed, or addressed more effectively in the future, in the context of the certainty of the evidence in the current review (Brown et al 2006):

  • P (Population): diagnosis, disease stage, comorbidity, risk factor, sex, age, ethnic group, specific inclusion or exclusion criteria, clinical setting;
  • I (Intervention): type, frequency, dose, duration, prognostic factor;
  • C (Comparison): placebo, routine care, alternative treatment/management;
  • O (Outcome): which clinical or patient-related outcomes will the researcher need to measure, improve, influence or accomplish? Which methods of measurement should be used?

While Cochrane Review authors will find the PICO domains helpful, the domains of the GRADE certainty framework further support understanding and describing what additional research will improve the certainty in the available evidence. Note that as the certainty of the evidence is likely to vary by outcome, these implications will be specific to certain outcomes in the review. Table 15.6.a shows how review authors may be aided in their interpretation of the body of evidence and drawing conclusions about future research and practice.

Table 15.6.a Implications for research and practice suggested by individual GRADE domains

The review of compression stockings for prevention of deep vein thrombosis (DVT) in airline passengers described in Chapter 14 provides an example where there is some convincing evidence of a benefit of the intervention: “This review shows that the question of the effects on symptomless DVT of wearing versus not wearing compression stockings in the types of people studied in these trials should now be regarded as answered. Further research may be justified to investigate the relative effects of different strengths of stockings or of stockings compared to other preventative strategies. Further randomised trials to address the remaining uncertainty about the effects of wearing versus not wearing compression stockings on outcomes such as death, pulmonary embolism and symptomatic DVT would need to be large.” (Clarke et al 2016).

A review of therapeutic touch for anxiety disorder provides an example of the implications for research when no eligible studies had been found: “This review highlights the need for randomized controlled trials to evaluate the effectiveness of therapeutic touch in reducing anxiety symptoms in people diagnosed with anxiety disorders. Future trials need to be rigorous in design and delivery, with subsequent reporting to include high quality descriptions of all aspects of methodology to enable appraisal and interpretation of results.” (Robinson et al 2007).

15.6.4 Reaching conclusions

A common mistake is to confuse ‘no evidence of an effect’ with ‘evidence of no effect’. When the confidence intervals are too wide (e.g. including no effect), it is wrong to claim that the experimental intervention has ‘no effect’ or is ‘no different’ from the comparator intervention. Review authors may also incorrectly ‘positively’ frame results for some effects but not others. For example, when the effect estimate is positive for a beneficial outcome but confidence intervals are wide, review authors may describe the effect as promising. However, when the effect estimate is negative for an outcome that is considered harmful but the confidence intervals include no effect, review authors may report that there was no effect. Another mistake is to frame the conclusion in wishful terms. For example, review authors might write, “there were too few people in the analysis to detect a reduction in mortality” when the included studies showed a reduction or even an increase in mortality that was not ‘statistically significant’. One way of avoiding errors such as these is to consider the results blinded; that is, consider how the results would be presented and framed in the conclusions if the direction of the results was reversed. If the confidence interval for the estimate of the difference in the effects of the interventions overlaps with no effect, the analysis is compatible with both a true beneficial effect and a true harmful effect. If one of the possibilities is mentioned in the conclusion, the other possibility should be mentioned as well. Table 15.6.b suggests narrative statements for drawing conclusions based on the effect estimate from the meta-analysis and the certainty of the evidence.

Table 15.6.b Suggested narrative statements for phrasing conclusions

Another common mistake is to reach conclusions that go beyond the evidence. Often this is done implicitly, without referring to the additional information or judgements that are used in reaching conclusions about the implications of a review for practice. Even when additional information and explicit judgements support conclusions about the implications of a review for practice, review authors rarely conduct systematic reviews of the additional information. Furthermore, implications for practice are often dependent on specific circumstances and values that must be taken into consideration. As we have noted, review authors should always be cautious when drawing conclusions about implications for practice and they should not make recommendations.

15.7 Chapter information

Authors: Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Acknowledgements: Andrew Oxman, Jonathan Sterne, Michael Borenstein and Rob Scholten contributed text to earlier versions of this chapter.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health. JJD receives support from the National Institute for Health Research (NIHR) Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham. JPTH receives support from the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.




How to Write a Conclusion for Research Papers (with Examples)


The conclusion of a research paper is a crucial section that plays a significant role in the overall impact and effectiveness of your research paper. However, it typically receives less attention than the introduction and the body of the paper. The conclusion provides a concise summary of the key findings, their significance and implications, and a sense of closure to the study. Discussing how the findings can be applied in real-world scenarios or inform policy, practice, or decision-making is especially valuable to practitioners and policymakers. The research paper conclusion also gives other researchers clear insights and valuable information for their own work, which they can build on to advance knowledge in the field.

The research paper conclusion should explain the significance of your findings within the broader context of your field. It restates how your results contribute to the existing body of knowledge and whether they confirm or challenge existing theories or hypotheses. Identifying unanswered questions or areas requiring further investigation also demonstrates your awareness of the broader research landscape.

Remember to tailor the research paper conclusion to the specific needs and interests of your intended audience, which may include researchers, practitioners, policymakers, or a combination of these.

Table of Contents

  • What is a conclusion in a research paper?
  • Summarizing conclusion
  • Editorial conclusion
  • Externalizing conclusion
  • Importance of a good research paper conclusion
  • How to write a conclusion for your research paper
  • Research paper conclusion examples
  • How to write a research paper conclusion with Paperpal?
  • Frequently Asked Questions

A conclusion in a research paper is the final section where you summarize and wrap up your research, presenting the key findings and insights derived from your study. The research paper conclusion is not the place to introduce new information or data that was not discussed in the main body of the paper. When working on how to conclude a research paper, remember to stick to summarizing and interpreting existing content. The research paper conclusion serves the following purposes: 1

  • Warn readers of the possible consequences of not attending to the problem.
  • Recommend specific course(s) of action.
  • Restate key ideas to drive home the ultimate point of your research paper.
  • Provide a “take-home” message that you want the readers to remember about your study.


Types of conclusions for research papers

In research papers, the conclusion provides closure to the reader. The type of research paper conclusion you choose depends on the nature of your study, your goals, and your target audience. Here are three common types of conclusions:

A summarizing conclusion is the most common type of conclusion in research papers. It involves summarizing the main points, reiterating the research question, and restating the significance of the findings. This common type of research paper conclusion is used across different disciplines.

An editorial conclusion is less common but can be used in research papers that are focused on proposing or advocating for a particular viewpoint or policy. It involves presenting a strong editorial or opinion based on the research findings and offering recommendations or calls to action.

An externalizing conclusion is a type of conclusion that extends the research beyond the scope of the paper by suggesting potential future research directions or discussing the broader implications of the findings. This type of conclusion is often used in more theoretical or exploratory research papers.


The conclusion in a research paper serves several important purposes:

  • Offers Implications and Recommendations: Your research paper conclusion is an excellent place to discuss the broader implications of your research and suggest potential areas for further study. It’s also an opportunity to offer practical recommendations based on your findings.
  • Provides Closure: A good research paper conclusion provides a sense of closure to your paper. It should leave the reader with a feeling that they have reached the end of a well-structured and thought-provoking research project.
  • Leaves a Lasting Impression: Writing a well-crafted research paper conclusion leaves a lasting impression on your readers. It’s your final opportunity to leave them with a new idea, a call to action, or a memorable quote.


Writing a strong conclusion for your research paper is essential to leave a lasting impression on your readers. Here’s a step-by-step process to help you decide what to put in the conclusion of a research paper: 2

  • Research Statement: Begin your research paper conclusion by restating your research statement. This reminds the reader of the main point you’ve been trying to prove throughout your paper. Keep it concise and clear.
  • Key Points: Summarize the main arguments and key points you’ve made in your paper. Avoid introducing new information in the research paper conclusion. Instead, provide a concise overview of what you’ve discussed in the body of your paper.
  • Address the Research Questions: If your research paper is based on specific research questions or hypotheses, briefly address whether you’ve answered them or achieved your research goals. Discuss the significance of your findings in this context.
  • Significance: Highlight the importance of your research and its relevance in the broader context. Explain why your findings matter and how they contribute to the existing knowledge in your field.
  • Implications: Explore the practical or theoretical implications of your research. How might your findings impact future research, policy, or real-world applications? Consider the “so what?” question.
  • Future Research: Offer suggestions for future research in your area. What questions or aspects remain unanswered or warrant further investigation? This shows that your work opens the door for future exploration.
  • Closing Thought: Conclude your research paper conclusion with a thought-provoking or memorable statement. This can leave a lasting impression on your readers and wrap up your paper effectively. Avoid introducing new information or arguments here.
  • Proofread and Revise: Carefully proofread your conclusion for grammar, spelling, and clarity. Ensure that your ideas flow smoothly and that your conclusion is coherent and well-structured.


Remember that a well-crafted research paper conclusion is a reflection of the strength of your research and your ability to communicate its significance effectively. It should leave a lasting impression on your readers and tie together all the threads of your paper. Now that you know how to start the conclusion of a research paper and what elements to include, let’s look at a research paper conclusion sample.


How to write a research paper conclusion with Paperpal?

A research paper conclusion is not just a summary of your study, but a synthesis of the key findings that ties the research together and places it in a broader context. A research paper conclusion should be concise, typically around one paragraph in length. However, some complex topics may require a longer conclusion to ensure the reader is left with a clear understanding of the study’s significance. Paperpal, an AI writing assistant trusted by over 800,000 academics globally, can help you write a well-structured conclusion for your research paper. 

  • Sign Up or Log In: Create a new Paperpal account or log in with your details.
  • Navigate to Features: Once logged in, head over to the side navigation pane. Click on Templates and you’ll find a suite of generative AI features to help you write better, faster.
  • Generate an outline: Under Templates, select ‘Outlines’. Choose ‘Research article’ as your document type.  
  • Select your section: Since you’re focusing on the conclusion, select this section when prompted.  
  • Choose your field of study: Identifying your field of study allows Paperpal to provide more targeted suggestions, ensuring the relevance of your conclusion to your specific area of research. 
  • Provide a brief description of your study: Enter details about your research topic and findings. This information helps Paperpal generate a tailored outline that aligns with your paper’s content. 
  • Generate the conclusion outline: After entering all necessary details, click on ‘generate’. Paperpal will then create a structured outline for your conclusion, to help you start writing and build upon the outline.  
  • Write your conclusion: Use the generated outline to build your conclusion. The outline serves as a guide, ensuring you cover all critical aspects of a strong conclusion, from summarizing key findings to highlighting the research’s implications. 
  • Refine and enhance: Paperpal’s ‘Make Academic’ feature can be particularly useful in the final stages. Select any paragraph of your conclusion and use this feature to elevate the academic tone, ensuring your writing is aligned with academic journal standards.

By following these steps, Paperpal not only simplifies the process of writing a research paper conclusion but also ensures it is impactful, concise, and aligned with academic standards.

Frequently Asked Questions

The research paper conclusion is a crucial part of your paper, as it provides the final opportunity to leave a strong impression on your readers. In the research paper conclusion, summarize the main points of your paper by restating your research statement, highlighting the most important findings, addressing the research questions or objectives, explaining the broader context of the study, discussing the significance of your findings, providing recommendations where applicable, and emphasizing the take-home message. The main purpose of the conclusion is to remind the reader of the main point or argument of your paper and to provide a clear and concise summary of the key findings and their implications. All these elements belong on your list of what to put in the conclusion of a research paper to create a strong final statement for your work.

A strong conclusion is a critical component of a research paper, as it provides an opportunity to wrap up your arguments, reiterate your main points, and leave a lasting impression on your readers. Here are the key elements of a strong research paper conclusion:

1. Conciseness: A research paper conclusion should be concise and to the point. It should not introduce new information or ideas that were not discussed in the body of the paper.
2. Summarization: The research paper conclusion should be comprehensive enough to give the reader a clear understanding of the research’s main contributions.
3. Relevance: Ensure that the information included in the research paper conclusion is directly relevant to the research paper’s main topic and objectives; avoid unnecessary details.
4. Connection to the Introduction: A well-structured research paper conclusion often revisits the key points made in the introduction and shows how the research has addressed the initial questions or objectives.
5. Emphasis: Highlight the significance and implications of your research. Why is your study important? What are the broader implications or applications of your findings?
6. Call to Action: Include a call to action or a recommendation for future research or action based on your findings.

The length of a research paper conclusion can vary depending on several factors, including the overall length of the paper, the complexity of the research, and the specific journal requirements. While there is no strict rule for the length of a conclusion, it is generally advisable to keep it relatively short. A typical research paper conclusion might be around 5-10% of the paper’s total length. For example, if your paper is 10 pages long, the conclusion might be roughly half a page to one page in length.

In general, you do not need to include citations in the research paper conclusion. Citations are typically reserved for the body of the paper to support your arguments and provide evidence for your claims. However, there may be some exceptions to this rule: 1. If you are drawing a direct quote or paraphrasing a specific source in your research paper conclusion, you should include a citation to give proper credit to the original author. 2. If your conclusion refers to or discusses specific research, data, or sources that are crucial to the overall argument, citations can be included to reinforce your conclusion’s validity.

The conclusion of a research paper serves several important purposes:

1. Summarize the Key Points
2. Reinforce the Main Argument
3. Provide Closure
4. Offer Insights or Implications
5. Engage the Reader
6. Reflect on Limitations

Remember that the primary purpose of the research paper conclusion is to leave a lasting impression on the reader, reinforcing the key points and providing closure to your research. It’s often the last part of the paper that the reader will see, so it should be strong and well-crafted.

  • Makar, G., Foltz, C., Lendner, M., & Vaccaro, A. R. (2018). How to write effective discussion and conclusion sections. Clinical Spine Surgery, 31(8), 345-346.
  • Bunton, D. (2005). The structure of PhD conclusion chapters. Journal of English for Academic Purposes, 4(3), 207-224.




Online Guide to Writing and Research


Planning and Writing a Research Paper

Draw Conclusions

As a writer, you are presenting your viewpoint, opinions, and evidence for others to review, so you must take on this task with maturity, courage, and thoughtfulness. Remember, you are adding to the discourse community with every research paper that you write. This is a privilege and an opportunity to share your point of view with the world at large in an academic setting.

Because research generates further research, the conclusions you draw from your research are important. As a researcher, you depend on the integrity of the research that precedes your own efforts, and researchers depend on each other to draw valid conclusions. 


To test the validity of your conclusions, you will have to review both the content of your paper and the way in which you arrived at the content. You may ask yourself questions, such as the ones presented below, to detect any weak areas in your paper, so you can then make those areas stronger.  Notice that some of the questions relate to your process, others to your sources, and others to how you arrived at your conclusions.

Checklist for Evaluating Your Conclusions

Key Takeaways

  • Because research generates further research, the conclusions you draw from your research are important.
  • To test the validity of your conclusions, you will have to review both the content of your paper and the way in which you arrived at the content.

Mailing Address: 3501 University Blvd. East, Adelphi, MD 20783 This work is licensed under a  Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License . © 2022 UMGC. All links to external sites were verified at the time of publication. UMGC is not responsible for the validity or integrity of information located at external sites.


Special Style Manuals

Writing Handbooks

Appendix B: Collaborative Writing and Peer Reviewing

Collaborative Writing: Assignments to Accompany the Group Project

Collaborative Writing: Informal Progress Report

Collaborative Writing: Issues to Resolve

Collaborative Writing: Methodology

Collaborative Writing: Peer Evaluation

Collaborative Writing: Tasks of Collaborative Writing Group Members

Collaborative Writing: Writing Plan

General Introduction

Peer Reviewing

Appendix C: Developing an Improvement Plan

Working with Your Instructor’s Comments and Grades

Appendix D: Writing Plan and Project Schedule

Devising a Writing Project Plan and Schedule

Reviewing Your Plan with Others


Research Paper Conclusion – Writing Guide and Examples

Research Paper Conclusion

Definition:

A research paper conclusion is the final section of a research paper that summarizes the key findings, significance, and implications of the research. It is the writer’s opportunity to synthesize the information presented in the paper, draw conclusions, and make recommendations for future research or actions.

The conclusion should provide a clear and concise summary of the research paper, reiterating the research question or problem, the main results, and the significance of the findings. It should also discuss the limitations of the study and suggest areas for further research.

Parts of Research Paper Conclusion

The parts of a research paper conclusion typically include:

Restatement of the Thesis

The conclusion should begin by restating the thesis statement from the introduction in a different way. This helps to remind the reader of the main argument or purpose of the research.

Summary of Key Findings

The conclusion should summarize the main findings of the research, highlighting the most important results and conclusions. This section should be brief and to the point.

Implications and Significance

In this section, the researcher should explain the implications and significance of the research findings. This may include discussing the potential impact on the field or industry, highlighting new insights or knowledge gained, or pointing out areas for future research.

Limitations and Recommendations

It is important to acknowledge any limitations or weaknesses of the research and to make recommendations for how these could be addressed in future studies. This shows that the researcher is aware of the potential limitations of their work and is committed to improving the quality of research in their field.

Concluding Statement

The conclusion should end with a strong concluding statement that leaves a lasting impression on the reader. This could be a call to action, a recommendation for further research, or a final thought on the topic.

How to Write a Research Paper Conclusion

Here are some steps you can follow to write an effective research paper conclusion:

  • Restate the research problem or question: Begin by restating the research problem or question that you aimed to answer in your research. This will remind the reader of the purpose of your study.
  • Summarize the main points: Summarize the key findings and results of your research. This can be done by highlighting the most important aspects of your research and the evidence that supports them.
  • Discuss the implications: Discuss the implications of your findings for the research area and any potential applications of your research. You should also mention any limitations of your research that may affect the interpretation of your findings.
  • Provide a conclusion: Provide a concise conclusion that summarizes the main points of your paper and emphasizes the significance of your research. This should be a strong and clear statement that leaves a lasting impression on the reader.
  • Offer suggestions for future research: Lastly, offer suggestions for future research that could build on your findings and contribute to further advancements in the field.

Remember that the conclusion should be brief and to the point, while still effectively summarizing the key findings and implications of your research.

Example of Research Paper Conclusion

Here’s an example of a research paper conclusion:

Conclusion:

In conclusion, our study aimed to investigate the relationship between social media use and mental health among college students. Our findings suggest that there is a significant association between social media use and increased levels of anxiety and depression among college students. This highlights the need for increased awareness and education about the potential negative effects of social media use on mental health, particularly among college students.

Despite the limitations of our study, such as the small sample size and self-reported data, our findings have important implications for future research and practice. Future studies should aim to replicate our findings in larger, more diverse samples, and investigate the potential mechanisms underlying the association between social media use and mental health. In addition, interventions should be developed to promote healthy social media use among college students, such as mindfulness-based approaches and social media detox programs.

Overall, our study contributes to the growing body of research on the impact of social media on mental health, and highlights the importance of addressing this issue in the context of higher education. By raising awareness and promoting healthy social media use among college students, we can help to reduce the negative impact of social media on mental health and improve the well-being of young adults.

Purpose of Research Paper Conclusion

The purpose of a research paper conclusion is to provide a summary and synthesis of the key findings, significance, and implications of the research presented in the paper. The conclusion serves as the final opportunity for the writer to convey their message and leave a lasting impression on the reader.

The conclusion should restate the research problem or question, summarize the main results of the research, and explain their significance. It should also acknowledge the limitations of the study and suggest areas for future research or action.

Overall, the purpose of the conclusion is to provide a sense of closure to the research paper and to emphasize the importance of the research and its potential impact. It should leave the reader with a clear understanding of the main findings and why they matter. The conclusion serves as the writer’s opportunity to showcase their contribution to the field and to inspire further research and action.

When to Write Research Paper Conclusion

The conclusion of a research paper should be written after the body of the paper has been completed. It should not be written until the writer has thoroughly analyzed and interpreted their findings and has written a complete and cohesive discussion of the research.

Before writing the conclusion, the writer should review their research paper and consider the key points that they want to convey to the reader. They should also review the research question, hypotheses, and methodology to ensure that they have addressed all of the necessary components of the research.

Once the writer has a clear understanding of the main findings and their significance, they can begin writing the conclusion. The conclusion should be written in a clear and concise manner, and should reiterate the main points of the research while also providing insights and recommendations for future research or action.

Characteristics of Research Paper Conclusion

The characteristics of a research paper conclusion include:

  • Clear and concise: The conclusion should be written in a clear and concise manner, summarizing the key findings and their significance.
  • Comprehensive: The conclusion should address all of the main points of the research paper, including the research question or problem, the methodology, the main results, and their implications.
  • Future-oriented: The conclusion should provide insights and recommendations for future research or action, based on the findings of the research.
  • Impressive: The conclusion should leave a lasting impression on the reader, emphasizing the importance of the research and its potential impact.
  • Objective: The conclusion should be based on the evidence presented in the research paper, and should avoid personal biases or opinions.
  • Unique: The conclusion should be unique to the research paper and should not simply repeat information from the introduction or body of the paper.

Advantages of Research Paper Conclusion

The advantages of a research paper conclusion include:

  • Summarizing the key findings: The conclusion provides a summary of the main findings of the research, making it easier for the reader to understand the key points of the study.
  • Emphasizing the significance of the research: The conclusion emphasizes the importance of the research and its potential impact, making it more likely that readers will take the research seriously and consider its implications.
  • Providing recommendations for future research or action: The conclusion suggests practical recommendations for future research or action, based on the findings of the study.
  • Providing closure to the research paper: The conclusion provides a sense of closure to the research paper, tying together the different sections of the paper and leaving a lasting impression on the reader.
  • Demonstrating the writer’s contribution to the field: The conclusion provides the writer with an opportunity to showcase their contribution to the field and to inspire further research and action.

Limitations of Research Paper Conclusion

While the conclusion of a research paper has many advantages, it also has some limitations that should be considered, including:

  • Inability to address all aspects of the research: Due to the limited space available in the conclusion, it may not be possible to address all aspects of the research in detail.
  • Subjectivity: While the conclusion should be objective, it may be influenced by the writer’s personal biases or opinions.
  • Lack of new information: The conclusion should not introduce new information that has not been discussed in the body of the research paper.
  • Lack of generalizability: The conclusions drawn from the research may not be applicable to other contexts or populations, limiting the generalizability of the study.
  • Misinterpretation by the reader: The reader may misinterpret the conclusions drawn from the research, leading to a misunderstanding of the findings.



How to Write Discussions and Conclusions

The discussion section contains the results and outcomes of a study. An effective discussion informs readers what can be learned from your experiment and provides context for the results.

What makes an effective discussion?

When you’re ready to write your discussion, you’ve already introduced the purpose of your study and provided an in-depth description of the methodology. The discussion informs readers about the larger implications of your study based on the results. Highlighting these implications while not overstating the findings can be challenging, especially when you’re submitting to a journal that selects articles based on novelty or potential impact. Regardless of what journal you are submitting to, the discussion section always serves the same purpose: concluding what your study results actually mean.

A successful discussion section puts your findings in context. It should include:

  • the results of your research,
  • a discussion of related research, and
  • a comparison between your results and initial hypothesis.

Tip: Not all journals share the same naming conventions.

You can apply the advice in this article to the conclusion, results or discussion sections of your manuscript.

Our Early Career Researcher community tells us that the conclusion is often considered the most difficult aspect of a manuscript to write. To help, this guide provides questions to ask yourself, a basic structure to model your discussion on, and examples from published manuscripts.


Questions to ask yourself:

  • Was my hypothesis correct?
  • If my hypothesis is partially correct or entirely different, what can be learned from the results? 
  • How do the conclusions reshape or add onto the existing knowledge in the field? What does previous research say about the topic? 
  • Why are the results important or relevant to your audience? Do they add further evidence to a scientific consensus or disprove prior studies? 
  • How can future research build on these observations? What are the key experiments that must be done? 
  • What is the “take-home” message you want your reader to leave with?

How to structure a discussion

Trying to fit a complete discussion into a single paragraph can add unnecessary stress to the writing process. If possible, you’ll want to give yourself two or three paragraphs to give the reader a comprehensive understanding of your study as a whole. Here’s one way to structure an effective discussion:

[Figure: one way to structure an effective discussion]

Writing Tips

While the above sections can help you brainstorm and structure your discussion, there are many common mistakes that writers revert to when having difficulties with their paper. Writing a discussion can be a delicate balance between summarizing your results, providing proper context for your research and avoiding introducing new information. Remember that your paper should be both confident and honest about the results! 

What to do

  • Read the journal’s guidelines on the discussion and conclusion sections. If possible, learn about the guidelines before writing the discussion to ensure you’re writing to meet their expectations. 
  • Begin with a clear statement of the principal findings. This will reinforce the main take-away for the reader and set up the rest of the discussion. 
  • Explain why the outcomes of your study are important to the reader. Discuss the implications of your findings realistically based on previous literature, highlighting both the strengths and limitations of the research. 
  • State whether the results prove or disprove your hypothesis. If your hypothesis was disproved, what might be the reasons? 
  • Introduce new or expanded ways to think about the research question. Indicate what next steps can be taken to further pursue any unresolved questions. 
  • If dealing with a contemporary or ongoing problem, such as climate change, discuss possible consequences if the problem is avoided. 
  • Be concise. Adding unnecessary detail can distract from the main findings. 

What not to do

Don’t

  • Rewrite your abstract. Statements with “we investigated” or “we studied” generally do not belong in the discussion. 
  • Include new arguments or evidence not previously discussed. Necessary information and evidence should be introduced in the main body of the paper. 
  • Apologize. Even if your research contains significant limitations, don’t undermine your authority by including statements that doubt your methodology or execution. 
  • Shy away from speaking on limitations or negative results. Including limitations and negative results will give readers a complete understanding of the presented research. Potential limitations include sources of potential bias, threats to internal or external validity, barriers to implementing an intervention and other issues inherent to the study design. 
  • Overstate the importance of your findings. Making grand statements about how a study will fully resolve large questions can lead readers to doubt the success of the research. 

Snippets of Effective Discussions:

  • Consumer-based actions to reduce plastic pollution in rivers: A multi-criteria decision analysis approach
  • Identifying reliable indicators of fitness in polar bears



Overview of the Scientific Method

13 Drawing Conclusions and Reporting the Results

Learning Objectives

  • Identify the conclusions researchers can make based on the outcome of their studies.
  • Describe why scientists avoid the term “scientific proof.”
  • Explain the different ways that scientists share their findings.

Drawing Conclusions

Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather, theories are supported, refuted, or modified based on the results of research.
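
The point that a single "significant" result can be a type I error can be illustrated with a short simulation. The sketch below (a hypothetical example, not from the chapter) tests a fair coin, so the null hypothesis is true by construction and every rejection at alpha = 0.05 is a false positive; over many repeated experiments, roughly 5% of them come out "significant" purely by chance.

```python
# Hypothetical illustration: why single significant results can be type I errors.
# We run many experiments on a fair coin (the null is true by construction),
# so every rejection at alpha = 0.05 is a false positive.
import math
import random

def z_for_proportion(heads, n, p0=0.5):
    """Two-sided z statistic for an observed proportion against p0."""
    p_hat = heads / n
    se = math.sqrt(p0 * (1 - p0) / n)
    return (p_hat - p0) / se

random.seed(42)
n_experiments = 10_000
n_flips = 100
critical_z = 1.96  # two-sided critical value for alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    if abs(z_for_proportion(heads, n_flips)) > critical_z:
        false_positives += 1

rate = false_positives / n_experiments
print(f"False positive rate: {rate:.3f}")  # close to alpha, never exactly zero
```

Even with the null hypothesis guaranteed true, a nontrivial fraction of individual experiments "succeed", which is exactly why a single study cannot establish a theory with certainty.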

If the results are statistically significant and consistent with the hypothesis and the theory that was used to generate the hypothesis, then researchers can conclude that the theory is supported. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory, but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this avoidance is that the result may reflect a type I error. Another reason for this avoidance is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A third reason is that it is always possible that another test of the hypothesis or a test of a new hypothesis derived from the theory will be disconfirmed. This difficulty is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if A then B” and “not B” necessarily lead to the conclusion “not A.” If A is the theory and B is the hypothesis (“if A then B”), then disconfirming the hypothesis (“not B”) must mean that the theory is incorrect (“not A”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a missed opportunity (the result of a type II error) or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable.
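
The logical pattern described above (modus tollens: "if A then B" and "not B" entail "not A") can be checked mechanically. This minimal sketch, added here as an illustration rather than taken from the chapter, brute-forces all truth assignments for A and B and confirms that whenever both premises hold, the conclusion holds too:

```python
# Illustrative check of modus tollens by exhaustive truth-table search:
# in every assignment where "A -> B" and "not B" both hold, "not A" holds.
from itertools import product

def implies(a, b):
    """Material implication: 'if a then b'."""
    return (not a) or b

valid = all(
    (not a)                      # conclusion: not A
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not b   # premises: A -> B, and not B
)
print(valid)  # True: the entailment holds under every assignment
```

Only the assignment where both A and B are false satisfies the premises, and there the conclusion "not A" is true, which is why, strictly speaking, a disconfirmed prediction refutes the theory it was deduced from.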

A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems. That is, the evidence from a study can be used to modify a theory.  This practice does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually must abandon their theories and replace them with ones that are more successful.

The bottom line here is that because statistics are probabilistic in nature and because all research studies have flaws, there is no such thing as scientific proof; there is only scientific evidence.

Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a book chapter that is published in an edited book. Preferably, the editor of the book puts the chapter through peer review, but this is not always the case, and some scientists are invited by editors to write book chapters.

A fun way to disseminate findings is to give a presentation at a conference. This can either be done as an oral presentation or a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Alternatively, poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by their poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting to undergo the more rigorous peer-review process involved in publishing a journal article.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.



Organizing Your Social Sciences Research Paper

9. The Conclusion

The conclusion is intended to help the reader understand why your research should matter to them after they have finished reading the paper. A conclusion is not merely a summary of the main topics covered or a re-statement of your research problem, but a synthesis of key points derived from the findings of your study and, if applicable, where you recommend new areas for future research. For most college-level research papers, two or three well-developed paragraphs are sufficient for a conclusion, although in some cases more paragraphs may be required to describe the key findings and their significance.

Conclusions. The Writing Center. University of North Carolina; Conclusions. The Writing Lab and The OWL. Purdue University.

Importance of a Good Conclusion

A well-written conclusion provides you with important opportunities to demonstrate to the reader your understanding of the research problem. These include:

  • Presenting the last word on the issues you raised in your paper. Just as the introduction gives a first impression to your reader, the conclusion offers a chance to leave a lasting impression. Do this, for example, by highlighting key findings in your analysis that advance new understanding about the research problem, that are unusual or unexpected, or that have important implications applied to practice.
  • Summarizing your thoughts and conveying the larger significance of your study. The conclusion is an opportunity to succinctly re-emphasize your answer to the "So What?" question by placing the study within the context of how your research advances past research about the topic.
  • Identifying how a gap in the literature has been addressed. The conclusion can be where you describe how a previously identified gap in the literature [first identified in your literature review section] has been addressed by your research and why this contribution is significant.
  • Demonstrating the importance of your ideas. Don't be shy. The conclusion offers an opportunity to elaborate on the impact and significance of your findings. This is particularly important if your study approached examining the research problem from an unusual or innovative perspective.
  • Introducing possible new or expanded ways of thinking about the research problem. This does not refer to introducing new information [which should be avoided], but to offering new insight and creative approaches for framing or contextualizing the research problem based on the results of your study.

Bunton, David. “The Structure of PhD Conclusion Chapters.” Journal of English for Academic Purposes 4 (July 2005): 207–224; Conclusions. The Writing Center. University of North Carolina; Kretchmer, Paul. Twelve Steps to Writing an Effective Conclusion. San Francisco Edit, 2003-2008; Conclusions. The Writing Lab and The OWL. Purdue University; Assan, Joseph. "Writing the Conclusion Chapter: The Good, the Bad and the Missing." Liverpool: Development Studies Association (2009): 1-8.

Structure and Writing Style

I.  General Rules

The general function of your paper's conclusion is to restate the main argument. It reminds the reader of the strengths of your main argument(s) and reiterates the most important evidence supporting those argument(s). Do this by clearly summarizing the context, background, and necessity of pursuing the research problem you investigated in relation to an issue, controversy, or a gap found in the literature. However, make sure that your conclusion is not simply a repetitive summary of the findings. This reduces the impact of the argument(s) you have developed in your paper.

When writing the conclusion to your paper, follow these general rules:

  • Present your conclusions in clear, concise language. Re-state the purpose of your study, then describe how your findings differ or support those of other studies and why [i.e., what were the unique, new, or crucial contributions your study made to the overall research about your topic?].
  • Do not simply reiterate your findings or the discussion of your results. Provide a synthesis of arguments presented in the paper to show how these converge to address the research problem and the overall objectives of your study.
  • Indicate opportunities for future research if you haven't already done so in the discussion section of your paper. Highlighting the need for further research provides the reader with evidence that you have an in-depth awareness of the research problem but that further investigations should take place beyond the scope of your investigation.

Consider the following points to help ensure your conclusion is presented well:

  • If the argument or purpose of your paper is complex, you may need to summarize the argument for your reader.
  • If, prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the end of your paper to describe your main points and explain their significance.
  • Move from a detailed to a general level of consideration that returns the topic to the context provided by the introduction, or place it within a new context that emerges from the data [this is the opposite of the introduction, which begins with a general discussion of the context and ends with a detailed description of the research problem].

The conclusion also provides a place for you to persuasively and succinctly restate the research problem, given that the reader has now been presented with all the information about the topic. Depending on the discipline you are writing in, the concluding paragraph may contain your reflections on the evidence presented. However, the nature of being introspective about the research you have conducted will depend on the topic and whether your professor wants you to express your observations in this way. If asked to think introspectively about the topic, do not delve into idle speculation. Being introspective means looking within yourself as an author to try to understand an issue more deeply, not guessing at possible outcomes or making up scenarios not supported by the evidence.

II.  Developing a Compelling Conclusion

Although an effective conclusion needs to be clear and succinct, it does not need to be written passively or lack a compelling narrative. Strategies to help you move beyond merely summarizing the key points of your research paper may include any of the following:

  • If your essay deals with a critical, contemporary problem, warn readers of the possible consequences of not attending to the problem proactively.
  • Recommend a specific course or courses of action that, if adopted, could address a specific problem in practice or in the development of new knowledge leading to positive change.
  • Cite a relevant quotation or expert opinion already noted in your paper in order to lend authority and support to the conclusion(s) you have reached [a good source would be from your literature review].
  • Explain the consequences of your research in a way that elicits action or demonstrates urgency in seeking change.
  • Restate a key statistic, fact, or visual image to emphasize the most important finding of your paper.
  • If your discipline encourages personal reflection, illustrate your concluding point by drawing from your own life experiences.
  • Return to an anecdote, an example, or a quotation that you presented in your introduction, but add further insight derived from the findings of your study; use your interpretation of results from your study to recast it in new or important ways.
  • Provide a "take-home" message in the form of a succinct, declarative statement that you want the reader to remember about your study.

III. Problems to Avoid

Failure to be concise. Your conclusion section should be concise and to the point. Conclusions that are too lengthy often contain unnecessary information. The conclusion is not the place for details about your methodology or results. Although you should give a summary of what was learned from your research, this summary should be relatively brief, since the emphasis in the conclusion is on the implications, evaluations, insights, and other forms of analysis that you make.

Failure to comment on larger, more significant issues. In the introduction, your task was to move from the general [the field of study] to the specific [the research problem]. However, in the conclusion, your task is to move from a specific discussion [your research problem] back to a general discussion framed around the implications and significance of your findings [i.e., how your research contributes new understanding or fills an important gap in the literature]. In short, the conclusion is where you should place your research within a larger context [visualize your paper as an hourglass--start with a broad introduction and review of the literature, move to the specific analysis and discussion, conclude with a broad summary of the study's implications and significance].

Failure to reveal problems and negative results. Negative aspects of the research process should never be ignored. These are problems, deficiencies, or challenges encountered during your study. They should be summarized as a way of qualifying your overall conclusions. If you encountered negative or unintended results [i.e., findings that fail to support, or run counter to, your hypothesis], you must report them in the results section and discuss their implications in the discussion section of your paper. In the conclusion, use negative results as an opportunity to explain their possible significance and/or how they may form the basis for future research.

Failure to provide a clear summary of what was learned. In order to be able to discuss how your research fits within your field of study [and possibly the world at large], you need to summarize briefly and succinctly how it contributes to new knowledge or a new understanding about the research problem. This element of your conclusion may be only a few sentences long.

Failure to match the objectives of your research. Often, research objectives in the social and behavioral sciences change while the research is being carried out. This is not a problem unless you forget to go back and revise the original objectives in your introduction. As these changes emerge, they must be documented so that they accurately reflect what you were trying to accomplish in your research [not what you thought you might accomplish when you began].

Resist the urge to apologize. If you've immersed yourself in studying the research problem, you presumably should know a good deal about it [perhaps even more than your professor!]. Nevertheless, by the time you have finished writing, you may be having some doubts about what you have produced. Repress those doubts! Don't undermine your authority as a researcher by saying something like, "This is just one approach to examining this problem; there may be other, much better approaches that...." The overall tone of your conclusion should convey confidence to the reader about the study's validity and reliability.


Writing Tip

Don't Belabor the Obvious!

Avoid phrases like "in conclusion...," "in summary...," or "in closing...." These phrases can be useful, even welcome, in oral presentations. But readers can see by the tell-tale section heading and number of pages remaining that they are reaching the end of your paper. You'll irritate your readers if you belabor the obvious.


Another Writing Tip

New Insight, Not New Information!

Don't surprise the reader with new information in your conclusion that was never referenced anywhere else in the paper. This is why the conclusion rarely has citations to sources. If you have new information to present, add it to the discussion or other appropriate section of the paper. Note that, although no new information is introduced, the conclusion, along with the discussion section, is where you offer your most "original" contributions in the paper; the conclusion is where you describe the value of your research, demonstrate that you understand the material that you’ve presented, and position your findings within the larger context of scholarship on the topic, including describing how your research contributes new insights to that scholarship.


  • Last Updated: May 22, 2024 12:03 PM
  • URL: https://libguides.usc.edu/writingguide


Drawing Conclusions and Reporting the Results

Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton

Learning Objectives

  • Identify the conclusions researchers can make based on the outcome of their studies.
  • Describe why scientists avoid the term “scientific proof.”
  • Explain the different ways that scientists share their findings.

Drawing Conclusions

Since statistics are probabilistic in nature and findings can reflect type I or type II errors, we cannot use the results of a single study to conclude with certainty that a theory is true. Rather, theories are supported, refuted, or modified based on the results of research.

If the results are statistically significant and consistent with the hypothesis and the theory that was used to generate the hypothesis, then researchers can conclude that the theory is supported. Not only did the theory make an accurate prediction, but there is now a new phenomenon that the theory accounts for. If a hypothesis is disconfirmed in a systematic empirical study, then the theory has been weakened. It made an inaccurate prediction, and there is now a new phenomenon that it does not account for.

Although this seems straightforward, there are some complications. First, confirming a hypothesis can strengthen a theory but it can never prove a theory. In fact, scientists tend to avoid the word “prove” when talking and writing about theories. One reason for this avoidance is that the result may reflect a type I error. Another reason for this avoidance is that there may be other plausible theories that imply the same hypothesis, which means that confirming the hypothesis strengthens all those theories equally. A third reason is that it is always possible that another test of the hypothesis or a test of a new hypothesis derived from the theory will be disconfirmed. This difficulty is a version of the famous philosophical “problem of induction.” One cannot definitively prove a general principle (e.g., “All swans are white.”) just by observing confirming cases (e.g., white swans)—no matter how many. It is always possible that a disconfirming case (e.g., a black swan) will eventually come along. For these reasons, scientists tend to think of theories—even highly successful ones—as subject to revision based on new and unexpected observations.
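The type I error mentioned above can be made concrete with a small simulation, written here as an illustrative sketch rather than anything from the chapter: when the null hypothesis is actually true, a test using the conventional p < .05 threshold still declares a "significant" difference in roughly 5% of studies. (The group sizes, cutoff, and seed below are arbitrary choices for the demonstration.)

```python
import math
import random
import statistics as stats

random.seed(0)  # arbitrary seed so the sketch is reproducible

def two_sample_t(x, y):
    """Welch's t statistic for two independent samples."""
    nx, ny = len(x), len(y)
    se = math.sqrt(stats.variance(x) / nx + stats.variance(y) / ny)
    return (stats.mean(x) - stats.mean(y)) / se

# Simulate many studies in which the null hypothesis is TRUE:
# both groups are drawn from the same normal distribution.
trials = 2000
false_alarms = 0
for _ in range(trials):
    x = [random.gauss(0, 1) for _ in range(30)]
    y = [random.gauss(0, 1) for _ in range(30)]
    # |t| > 2.0 roughly approximates p < .05 at these sample sizes
    if abs(two_sample_t(x, y)) > 2.0:
        false_alarms += 1

print(round(false_alarms / trials, 2))  # close to 0.05
```

Each "significant" result here is a type I error, which is one reason a single confirming study cannot prove a theory.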

A second complication has to do with what it means when a hypothesis is disconfirmed. According to the strictest version of the hypothetico-deductive method, disconfirming a hypothesis disproves the theory it was derived from. In formal logic, the premises “if A then B” and “not B” necessarily lead to the conclusion “not A.” If A is the theory and B is the hypothesis (“if A then B”), then disconfirming the hypothesis (“not B”) must mean that the theory is incorrect (“not A”). In practice, however, scientists do not give up on their theories so easily. One reason is that one disconfirmed hypothesis could be a missed opportunity (the result of a type II error) or it could be the result of a faulty research design. Perhaps the researcher did not successfully manipulate the independent variable or measure the dependent variable.
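The logical pattern just described (modus tollens) can be checked mechanically. The following sketch, added here purely as an illustration, enumerates every truth assignment for A and B and verifies that whenever both premises hold, the conclusion "not A" holds as well:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: 'if a then b' is false only when a is true and b is false."""
    return (not a) or b

# Modus tollens: from 'if A then B' and 'not B', conclude 'not A'.
# The inference is valid iff the conclusion holds in every row of the
# truth table where both premises hold.
valid = all(
    (not a)                       # conclusion: not A
    for a, b in product([True, False], repeat=2)
    if implies(a, b) and not b    # rows where both premises hold
)
print(valid)  # True: the inference never fails
```

The logic is airtight; as the paragraph above notes, the slack in practice comes from whether the premises themselves (the theory's auxiliary assumptions, the quality of the study) actually held.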

A disconfirmed hypothesis could also mean that some unstated but relatively minor assumption of the theory was not met. For example, if Zajonc had failed to find social facilitation in cockroaches, he could have concluded that drive theory is still correct but it applies only to animals with sufficiently complex nervous systems. That is, the evidence from a study can be used to modify a theory. This practice does not mean that researchers are free to ignore disconfirmations of their theories. If they cannot improve their research designs or modify their theories to account for repeated disconfirmations, then they eventually must abandon their theories and replace them with ones that are more successful.

The bottom line here is that because statistics are probabilistic in nature and because all research studies have flaws, there is no such thing as scientific proof; there is only scientific evidence.

Reporting the Results

The final step in the research process involves reporting the results. As described in the section on Reviewing the Research Literature in this chapter, results are typically reported in peer-reviewed journal articles and at conferences.

The most prestigious way to report one’s findings is by writing a manuscript and having it published in a peer-reviewed scientific journal. Manuscripts published in psychology journals typically must adhere to the writing style of the American Psychological Association (APA style). You will likely be learning the major elements of this writing style in this course.

Another way to report findings is by writing a book chapter that is published in an edited book. Preferably, the editor of the book puts the chapter through peer review, but this is not always the case; some scientists are simply invited by editors to write book chapters.

A fun way to disseminate findings is to give a presentation at a conference. This can either be done as an oral presentation or a poster presentation. Oral presentations involve getting up in front of an audience of fellow scientists and giving a talk that might last anywhere from 10 minutes to 1 hour (depending on the conference) and then fielding questions from the audience. Alternatively, poster presentations involve summarizing the study on a large poster that provides a brief overview of the purpose, methods, results, and discussion. The presenter stands by their poster for an hour or two and discusses it with people who pass by. Presenting one’s work at a conference is a great way to get feedback from one’s peers before attempting to undergo the more rigorous peer-review process involved in publishing a journal article.

Drawing Conclusions and Reporting the Results Copyright © by Rajiv S. Jhangiani; I-Chant A. Chiang; Carrie Cuttler; and Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


How to write a strong conclusion for your research paper

Last updated

17 February 2024

Writing a research paper is a chance to share your knowledge and hypothesis. It's an opportunity to demonstrate your many hours of research and prove your ability to write convincingly.

Ideally, by the end of your research paper, you'll have brought your readers on a journey to reach the conclusions you've pre-determined. However, if you don't stick the landing with a good conclusion, you'll risk losing your reader’s trust.

Writing a strong conclusion for your research paper involves a few important steps, including restating the thesis and summing up everything properly.

Find out what to include and what to avoid, so you can effectively demonstrate your understanding of the topic and prove your expertise.

  • Why is a good conclusion important?

A good conclusion can cement your paper in the reader’s mind. Making a strong impression in your introduction can draw your readers in, but it's the conclusion that will inspire them.

  • What to include in a research paper conclusion

There are a few specifics you should include in your research paper conclusion. Offer your readers some sense of urgency or consequence by pointing out why they should care about the topic you have covered. Discuss any common problems associated with your topic and provide suggestions as to how these problems can be solved or addressed.

The conclusion should include a restatement of your initial thesis. Thesis statements are strengthened after you’ve presented supporting evidence (as you will have done in the paper), so make a point to reintroduce it at the end.

Finally, recap the main points of your research paper, highlighting the key takeaways you want readers to remember. If you've made multiple points throughout the paper, refer to the ones with the strongest supporting evidence.

  • Steps for writing a research paper conclusion

Many writers find the conclusion the most challenging part of any research project. By following these three steps, you'll be prepared to write a conclusion that is effective and concise.

  • Step 1: Restate the problem

Always begin by restating the research problem in the conclusion of a research paper. This serves to remind the reader of your hypothesis and refresh them on the main point of the paper. 

When restating the problem, take care to avoid using exactly the same words you employed earlier in the paper.

  • Step 2: Sum up the paper

After you've restated the problem, sum up the paper by revealing your overall findings. The method for this differs slightly, depending on whether you're crafting an argumentative paper or an empirical paper.

Argumentative paper: Restate your thesis and arguments

Argumentative papers involve introducing a thesis statement early on. In crafting the conclusion for an argumentative paper, always restate the thesis, outlining the way you've developed it throughout the entire paper.

It might be appropriate to mention any counterarguments in the conclusion, so you can demonstrate how your thesis is correct or how the data best supports your main points.

Empirical paper: Summarize research findings

Empirical papers break down a series of research questions. In your conclusion, discuss the findings your research revealed, including any information that surprised you.

Be clear about the conclusions you reached, and explain whether or not you expected to arrive at these particular ones.

  • Step 3: Discuss the implications of your research

Argumentative papers and empirical papers also differ in this part of a research paper conclusion. Here are some tips on crafting conclusions for argumentative and empirical papers.

Argumentative paper: Powerful closing statement

In an argumentative paper, you'll have spent a great deal of time expressing the opinions you formed after doing a significant amount of research. Make a strong closing statement in your argumentative paper's conclusion to share the significance of your work.

You can outline the next steps through a bold call to action, or restate how powerful your ideas turned out to be.

Empirical paper: Directions for future research

Empirical papers are broader in scope. They usually cover a variety of aspects and can include several points of view.

To write a good conclusion for an empirical paper, suggest the type of research that could be done in the future, including methods for further investigation or outlining ways other researchers might proceed.

If you feel your research had any limitations, even if they were outside your control, you could mention these in your conclusion.

After you finish outlining your conclusion, ask someone to read it and offer feedback. In any research project you're especially close to, it can be hard to identify problem areas. Having a close friend or someone whose opinion you value read the research paper and provide honest feedback can be invaluable. Take note of any suggested edits and consider incorporating them into your paper if they make sense.

  • Things to avoid in a research paper conclusion

Keep these aspects to avoid in mind as you're writing your conclusion and refer to them after you've created an outline.

Dry summary

Writing a memorable, succinct conclusion is arguably more important than a strong introduction. Take care to avoid just rephrasing your main points, and don't fall into the trap of repeating dry facts or citations.

You can provide a new perspective for your readers to think about or contextualize your research. Either way, make the conclusion vibrant and interesting, rather than a rote recitation of your research paper’s highlights.

Clichéd or generic phrasing

Your research paper conclusion should feel fresh and inspiring. Avoid generic phrases like "to sum up" or "in conclusion." These phrases tend to be overused, especially in an academic context, and might turn your readers off.

The conclusion also isn't the time to introduce colloquial phrases or informal language. Retain a professional, confident tone consistent throughout your paper’s conclusion so it feels exciting and bold.

New data or evidence

While you should present strong data throughout your paper, the conclusion isn't the place to introduce new evidence. This is because readers are engaged in actively learning as they read through the body of your paper.

By the time they reach the conclusion, they will have formed an opinion one way or the other (hopefully in your favor!). Introducing new evidence in the conclusion will only serve to surprise or frustrate your reader.

Ignoring contradictory evidence

If your research reveals contradictory evidence, don't ignore it in the conclusion. This will damage your credibility as an expert and might even serve to highlight the contradictions.

Be as transparent as possible and admit to any shortcomings in your research, but don't dwell on them for too long.

Ambiguous or unclear resolutions

The point of a research paper conclusion is to provide closure and bring all your ideas together. You should wrap up any arguments you introduced in the paper and tie up any loose ends, while demonstrating why your research and data are strong.

Use direct language in your conclusion and avoid ambiguity. Even if some of the data and sources you cite are inconclusive or contradictory, note this in your conclusion to come across as confident and trustworthy.

  • Examples of research paper conclusions

Your research paper should provide a compelling close to the paper as a whole, highlighting your research and hard work. While the conclusion should represent your unique style, these examples offer a starting point:

Ultimately, the data we examined all point to the same conclusion: Encouraging a good work-life balance improves employee productivity and benefits the company overall. The research suggests that when employees feel their personal lives are valued and respected by their employers, they are more likely to be productive when at work. In addition, company turnover tends to be reduced when employees have a balance between their personal and professional lives. While additional research is required to establish ways companies can support employees in creating a stronger work-life balance, it's clear the need is there.

Social media is a primary method of communication among young people. As we've seen in the data presented, most young people in high school use a variety of social media applications at least every hour, including Instagram and Facebook. While social media is an avenue for connection with peers, research increasingly suggests that social media use correlates with body image issues. Young girls with lower self-esteem tend to use social media more often than those who don't log onto social media apps every day. As new applications continue to gain popularity, and as more high school students are given smartphones, more research will be required to measure the effects of prolonged social media use.

What are the different kinds of research paper conclusions?

There are no formal types of research paper conclusions. Ultimately, the conclusion depends on the outline of your paper and the type of research you’re presenting. While some experts note that research papers can end either with a new perspective or with commentary on the findings, most papers conclude best with a combination of the two. The most important aspect of a good research paper conclusion is that it accurately represents the body of the paper.

Can I present new arguments in my research paper conclusion?

Research paper conclusions are not the place to introduce new data or arguments. The body of your paper is where you should share research and insights, where the reader is actively absorbing the content. By the time a reader reaches the conclusion of the research paper, they should have formed their opinion. Introducing new arguments in the conclusion can take a reader by surprise, and not in a positive way. It might also serve to frustrate readers.

How long should a research paper conclusion be?

There's no set length for a research paper conclusion. However, it's a good idea not to run on too long, since conclusions are supposed to be succinct. A good rule of thumb is to keep your conclusion around 5 to 10 percent of the paper's total length. If your paper is 10 pages, try to keep your conclusion under one page.

What should I include in a research paper conclusion?

A good research paper conclusion should always include a sense of urgency, so the reader can see how and why the topic should matter to them. You can also note some recommended actions to help fix the problem and some obstacles they might encounter. A conclusion should also remind the reader of the thesis statement, along with the main points you covered in the paper. At the end of the conclusion, add a powerful closing statement that helps cement the paper in the mind of the reader.


Drawing Conclusions

For any research project and any scientific discipline, drawing conclusions is the final, and most important, part of the process.


Whichever reasoning processes and research methods were used, the final conclusion is critical, determining success or failure. If an otherwise excellent experiment is summarized by a weak conclusion, the results will not be taken seriously.

Success or failure is not a measure of whether a hypothesis is accepted or refuted, because both results still advance scientific knowledge.

Failure lies in poor experimental design, or flaws in the reasoning processes, which invalidate the results. As long as the research process is robust and well designed, then the findings are sound, and the process of drawing conclusions begins.

The key is to establish what the results mean. How are they applied to the world?


What Has Been Learned?

Generally, a researcher will summarize what they believe has been learned from the research, and will try to assess the strength of the hypothesis.

Even if the null hypothesis is accepted, a strong conclusion will analyze why the results were not as predicted. 

Theoretical physicist Wolfgang Pauli was known to have criticized another physicist’s work by saying, “it’s not only not right; it is not even wrong.”

While this is certainly a humorous put-down, it also points to the value of the null hypothesis in science, i.e. the value of being “wrong.” Both accepting or rejecting the null hypothesis provides useful information – it is only when the research provides no illumination on the phenomenon at all that it is truly a failure.

In observational research, with no hypothesis, the researcher will analyze the findings and establish whether any valuable new information has been uncovered. The conclusions from this type of research may well inspire the development of a new hypothesis for further experiments.


Generating Leads for Future Research

However, very few experiments give clear-cut results, and most research uncovers more questions than answers.

The researcher can use these to suggest interesting directions for further study. If, for example, the null hypothesis was accepted, there may still have been trends apparent within the results. These could form the basis of further study, or experimental refinement and redesign.

Question: Let’s say a researcher is interested in whether people who are ambidextrous (can write with either hand) are more likely to have ADHD. She may have three groups – left-handed, right-handed and ambidextrous, and ask each of them to complete an ADHD screening.

She hypothesizes that the ambidextrous people will in fact be more prone to symptoms of ADHD. While she doesn’t find a significant difference when she compares the mean scores of the groups, she does notice another trend: the ambidextrous people seem to score lower overall on tests of verbal acuity. She accepts the null hypothesis, but wishes to continue with her research. Can you think of a direction her research could take, given what she has already learnt?

Answer: She may decide to look more closely at that trend. She may design another experiment to isolate the variable of verbal acuity, by controlling for everything else. This may eventually help her arrive at a new hypothesis: ambidextrous people have lower verbal acuity.

Evaluating Flaws in the Research Process

The researcher will then evaluate any apparent problems with the experiment. This involves critically evaluating any weaknesses and errors in the design, which may have influenced the results.

Even strict 'true experimental' designs have to make compromises, and the researcher must be thorough in pointing these out, justifying the methodology and reasoning.

For example, when drawing conclusions, the researcher may think that another causal effect influenced the results, and that this variable was not eliminated during the experimental process. A refined version of the experiment may help to achieve better results, if the new effect is included in the design process.

In the global warming example, the researcher might establish that carbon dioxide emission alone cannot be responsible for global warming. They may decide that another effect is contributing, so propose that methane may also be a factor in global warming. A new study would incorporate methane into the model.

What are the Benefits of the Research?

The next stage is to evaluate the advantages and benefits of the research.

In medicine and psychology, for example, the results may throw out a new way of treating a medical problem, so the advantages are obvious.

In some fields, certain kinds of research may not typically be seen as beneficial, regardless of the results obtained. Ideally, researchers will consider the implications of their research beforehand, as well as any ethical considerations. In fields such as psychology, social sciences or sociology, it’s important to think about who the research serves and what will ultimately be done with the results.

For example, the study regarding ambidexterity and verbal acuity may be interesting, but what would be the effect of accepting that hypothesis? Would it really benefit anyone to know that the ambidextrous are less likely to have a high verbal acuity?

However, all well-constructed research is useful, even if it only strengthens or supports a more tentative conclusion made by prior research.

Suggestions Based Upon the Conclusions

The final stage is the researcher's recommendations based on the results, depending on the field of study. This area of the research process is informed by the researcher's judgement, and will integrate previous studies.

For example, a researcher interested in schizophrenia may recommend a more effective treatment based on what has been learnt from a study. A physicist might propose that our picture of the structure of the atom should be changed. A researcher could make suggestions for refinement of the experimental design, or highlight interesting areas for further study. This final piece of the paper is the most critical, and pulls together all of the findings into a coherent argument.

The part of a research paper that provokes the most intense and heated debate amongst scientists is often the drawing of conclusions.

Sharing and presenting findings to the scientific community is a vital part of the scientific process. It is here that the researcher justifies the research, synthesizes the results and offers them up for scrutiny by their peers.

As the store of scientific knowledge increases and deepens, it is incumbent on researchers to work together. Long ago, a single scientist could discover and publish work that alone could have a profound impact on the course of history. Today, however, such impact can only be achieved in concert with fellow scientists.

Summary - The Strength of the Results

The key to drawing a valid conclusion is to ensure that the deductive and inductive processes are correctly used, and that all steps of the scientific method were followed.

Even the best-planned research can go awry, however. Part of interpreting results also includes the researchers putting aside their ego to appraise what, if anything went wrong. Has anything occurred to warrant a more cautious interpretation of results?

If your research has a robust design, questioning and scrutiny will be directed at the conclusions of the experiment, rather than at the methods.

Question: Researchers are interested in identifying new microbial species that are capable of breaking down cellulose for possible application in biofuel production. They collect soil samples from a particular forest and create laboratory cultures of every microbial species they discover there. They then “feed” each species a cellulose compound and observe that in all the species tested, there was no decrease in cellulose after 24 hours.

Read the following conclusions below and decide which of them is the most sound:

They conclude that there are no microbes that can break down cellulose.

They conclude that the sampled microbes are not capable of breaking down cellulose in a lab environment within 24 hours.

They conclude that all the species are related somehow.

They conclude that these microbes are not useful in the biofuel industry.

They conclude that microbes from forests don’t break down cellulose.

Answer: The most appropriate conclusion is number 2. As you can see, sound conclusions are often a question of not extrapolating too widely, or making assumptions that are not supported by the data obtained. Even conclusion number 2 will likely be presented as tentative, and only provides evidence given the limits of the methods used.


Martyn Shuttleworth , Lyndsay T Wilson (Jul 22, 2008). Drawing Conclusions. Retrieved May 24, 2024 from Explorable.com: https://explorable.com/drawing-conclusions

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.



Social Sci LibreTexts

2.1F: Analyzing Data and Drawing Conclusions



Data analysis in sociological research aims to identify meaningful sociological patterns.

Learning Objectives

  • Compare and contrast the analysis of quantitative vs. qualitative data
  • Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. Data analysis is a process, within which several phases can be distinguished.
  • One way in which analysis can vary is by the nature of the data. Quantitative data is often analyzed using regressions. Regression analyses measure relationships between dependent and independent variables, taking the existence of unknown parameters into account.
  • Qualitative data can be coded–that is, key concepts and variables are assigned a shorthand, and the data gathered are broken down into those concepts or variables. Coding allows sociologists to perform a more rigorous scientific analysis of the data.

Sociological data analysis is designed to produce patterns. It is important to remember, however, that correlation does not imply causation; in other words, just because variables change at a proportional rate, it does not follow that one variable influences the other.

  • Without a valid design, valid scientific conclusions cannot be drawn. Internal validity concerns the degree to which conclusions about causality can be made. External validity concerns the extent to which the results of a study are generalizable.
  • correlation : A reciprocal, parallel or complementary relationship between two or more comparable objects.
  • causation : The act of causing; also the act or agency by which an effect is produced.
  • Regression analysis : In statistics, regression analysis includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.

The Process of Data Analysis

Analysis of data is a process of inspecting, cleaning, transforming, and modeling data with the goal of highlighting useful information, suggesting conclusions, and supporting decision making. In statistical applications, some people divide data analysis into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data and CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical or structural models for predictive forecasting or classification. Text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data.

Data analysis is a process, within which several phases can be distinguished. The initial data analysis phase is guided by examining, among other things, the quality of the data (for example, the presence of missing or extreme observations), the quality of measurements, and if the implementation of the study was in line with the research design. In the main analysis phase, either an exploratory or confirmatory approach can be adopted. Usually the approach is decided before data is collected. In an exploratory analysis, no clear hypothesis is stated before analyzing the data, and the data is searched for models that describe the data well. In a confirmatory analysis, clear hypotheses about the data are tested.
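The exploratory/confirmatory distinction can be made concrete with a small sketch. Here, with invented measurements for two hypothetical groups, an exploratory pass simply describes the data, while a confirmatory pass tests a pre-stated hypothesis (that the group means differ) with a permutation test:

```python
import random
from statistics import mean

group_a = [4.1, 3.8, 4.5, 4.0, 4.3, 3.9]   # hypothetical measurements
group_b = [3.2, 3.5, 3.1, 3.6, 3.0, 3.4]

# Exploratory phase: describe the data; no hypothesis stated in advance.
print("means:", mean(group_a), mean(group_b))

# Confirmatory phase: test the pre-stated hypothesis that the means
# differ, using a permutation test on the difference of means.
observed = mean(group_a) - mean(group_b)
pooled = group_a + group_b
random.seed(0)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = mean(pooled[:len(group_a)]) - mean(pooled[len(group_a):])
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm
print("permutation p-value:", p_value)
```

The key procedural point is that the confirmatory hypothesis is fixed before looking at the data; an exploratory analysis that searched for any interesting pattern could not reuse the same data for a valid confirmatory test.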

Regression Analysis

The type of data analysis employed can vary. One way in which analysis often varies is by the quantitative or qualitative nature of the data.

Quantitative data can be analyzed in a variety of ways, regression analysis being among the most popular. Regression analyses measure relationships between dependent and independent variables, taking the existence of unknown parameters into account. More specifically, regression analysis helps one understand how the typical value of the dependent variable changes when any one of the independent variables is varied, while the other independent variables are held fixed.

A large body of techniques for carrying out regression analysis has been developed. In practice, the performance of regression analysis methods depends on the form of the data generating process and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a large amount of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally. However, in many applications, especially with small effects or questions of causality based on observational data, regression methods give misleading results.
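As a minimal sketch of the idea (with invented data, using ordinary least squares via NumPy), a simple regression estimates how the typical value of the dependent variable changes as one independent variable varies:

```python
import numpy as np

# Hypothetical data: hours of study (independent) vs. test score (dependent).
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
score = np.array([52.0, 55.0, 61.0, 64.0, 68.0])

# Ordinary least squares: score ≈ intercept + slope * hours.
X = np.column_stack([np.ones_like(hours), hours])   # design matrix
coef, *_ = np.linalg.lstsq(X, score, rcond=None)
intercept, slope = coef
print(f"intercept={intercept:.2f}, slope={slope:.2f}")

# R^2: the proportion of variance in the scores explained by the model.
residuals = score - X @ coef
r_squared = 1 - residuals.var() / score.var()
print(f"R^2 = {r_squared:.3f}")
```

The slope is the estimated change in score per additional hour; with more predictors in `X`, each coefficient would be interpreted holding the other predictors fixed, as described above.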

Qualitative data can involve coding–that is, key concepts and variables are assigned a shorthand, and the data gathered is broken down into those concepts or variables. Coding allows sociologists to perform a more rigorous scientific analysis of the data. Coding is the process of categorizing qualitative data so that the data becomes quantifiable and thus measurable. Of course, before researchers can code raw data such as taped interviews, they need to have a clear research question. How data is coded depends entirely on what the researcher hopes to discover in the data; the same qualitative data can be coded in many different ways, calling attention to different aspects of the data.
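Coding can be sketched as assigning shorthand labels to raw text so that it becomes countable. This toy example (the codebook keywords and interview excerpts are invented for illustration) tallies how often each code appears across responses:

```python
# Hypothetical codebook: each code is triggered by a set of keywords.
codebook = {
    "COST":   {"price", "expensive", "afford"},
    "ACCESS": {"clinic", "distance", "transport"},
    "TRUST":  {"doctor", "believe", "trust"},
}

# Hypothetical interview excerpts.
responses = [
    "The clinic is too far and transport is unreliable.",
    "I trust my doctor but the medicine is expensive.",
    "I cannot afford the price of the treatment.",
]

counts = {code: 0 for code in codebook}
for text in responses:
    words = set(text.lower().replace(".", "").split())
    for code, keywords in codebook.items():
        if words & keywords:          # any keyword present -> assign the code
            counts[code] += 1

print(counts)  # the coded, now quantifiable, qualitative data
```

Note how the same excerpts coded with a different codebook would surface different aspects of the data, which is exactly the point made above: coding depends on what the researcher hopes to discover.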


Sociological Data Analysis

Correlation, Causation, and Spurious Relationships : This mock newscast gives three competing interpretations of the same survey findings and demonstrates the dangers of assuming that correlation implies causation.

Conclusions

In terms of the kinds of conclusions that can be drawn, a study and its results can be assessed in multiple ways. Without a valid design, valid scientific conclusions cannot be drawn. Internal validity is an inductive estimate of the degree to which conclusions about causal relationships can be made (e.g., cause and effect), based on the measures used, the research setting, and the whole research design. External validity concerns the extent to which the (internally valid) results of a study can be held to be true for other cases, such as to different people, places, or times. In other words, it is about whether findings can be validly generalized. Learning about and applying statistics (as well as knowing their limitations) can help you better understand sociological research and studies. Knowledge of statistics helps you makes sense of the numbers in terms of relationships, and it allows you to ask relevant questions about sociological phenomena.


74 Drawing Conclusions From Your Data

As we mentioned earlier, it is important not just to state the results of your statistical analyses; you should also interpret their meaning, because this will enable you to answer your research questions. At the end of your analysis, you should be able to conclude whether your hypotheses are confirmed or rejected. To ensure you are able to draw conclusions from your analyses, we offer the following suggestions:

  • Highlight key findings from the data.
  • Make generalized comparisons.
  • Assess the right strength of the claim. Are hypotheses supported? To what extent? To what extent do generalizations hold?
  • Examine the goodness of fit.

Your conclusions could be framed in statements such as:

“Most respondents…”

“Group A (e.g., young adults) were more likely to ___ than Group B (older adults).”

“Given the low degree of fit, other variables/factors might explain the relationship discovered.”

Box 10.10 – Statistical Analysis Checklist

Access and Organize the Dataset

  • I have checked whether an Institutional Ethics Review is needed. If it is needed, I have obtained it.
  • I have recorded all the ways that I manipulated the data.
  • I have inspected the data set, noted its limitations (e.g., sampling, non-response, measurement, coverage), and inspected it for reliability and validity.
  • I have inspected the data to ensure that it meets the requirements and assumptions of the statistical techniques that I wish to perform.

Cleaning, Coding, and Recoding

  • I have re-coded variables as appropriate.
  • I have cleaned and processed the data set to make sure it is ready for analysis.

Research Design

  • If it is secondary data I am using, my methodology has documented their method for deriving the data.
  • My methodology documented the procedures for the quantitative data analysis.
  • I have highlighted my research questions and how my findings relate to them.

Statistical Analysis

  • I have reported goodness-of-fit measures such as r2 and chi-square for the likelihood ratio test, in order to show that my model fits the data well.
  • I have not interpreted coefficients for models that do not fit the data.
  • I have not merely provided statistical results, I have also interpreted the results.
  • I have tested relationships, not just univariate statistics, which are not enough for quantitative research. I have made inferences supported by tests of significance (correlations, chi-square, ANOVAs, linear and logistic regressions, etc.).
  • I have stored all my statistical results in a central file which I can use to write up my results.
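The goodness-of-fit items in the checklist can be sketched numerically. On invented data, this computes r2 for a simple linear model and a likelihood-ratio chi-square comparing it against the intercept-only model (assuming Gaussian errors, so the statistic is n·ln(SS0/SS1) with one degree of freedom):

```python
import math

# Invented data: predictor x and response y.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 7.8, 10.1, 12.2]
n = len(x)

# Fit y = intercept + slope * x by least squares.
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx

ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)

r_squared = 1 - ss_res / ss_tot            # proportion of variance explained
lr_chi2 = n * math.log(ss_tot / ss_res)    # LR statistic vs. intercept-only model
print(f"R^2 = {r_squared:.3f}, LR chi-square = {lr_chi2:.1f}")
```

A high r2 with a large likelihood-ratio chi-square supports interpreting the fitted coefficients; per the checklist, coefficients from a model that does not fit should not be interpreted.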

Statistical Presentation

  • My tables and figures conform to the referencing styles that I am using.
  • I have reported both statistically significant and non-statistically significant results; non-significant results also tell a story and should not be ignored.
  • I have avoided generalizations that my statistics cannot make.
  • I have discussed all of the relevant demographics.

Practicing and Presenting Social Research Copyright © 2022 by Oral Robinson and Alexander Wilson is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.


  • Methodology
  • Open access
  • Published: 20 May 2024

Fuzzy cognitive mapping in participatory research and decision making: a practice review

  • Iván Sarmiento 1 , 2 ,
  • Anne Cockcroft 1 ,
  • Anna Dion 1 ,
  • Loubna Belaid 1 ,
  • Hilah Silver 1 ,
  • Katherine Pizarro 1 ,
  • Juan Pimentel 1 , 3 ,
  • Elyse Tratt 4 ,
  • Lashanda Skerritt 1 ,
  • Mona Z. Ghadirian 1 ,
  • Marie-Catherine Gagnon-Dufresne 1 , 5 &
  • Neil Andersson 1 , 6  

Archives of Public Health volume  82 , Article number:  76 ( 2024 ) Cite this article


Fuzzy cognitive mapping (FCM) is a graphic technique to describe causal understanding in a wide range of applications. This practice review summarises the experience of a group of participatory research specialists and trainees who used FCM to include stakeholder views in addressing health challenges. From a meeting of the research group, this practice review reports 25 experiences with FCM in nine countries between 2016 and 2023.

The methods, challenges and adjustments focus on participatory research practice. FCM portrayed multiple sources of knowledge: stakeholder knowledge, systematic reviews of literature, and survey data. Methodological advances included techniques to contrast and combine maps from different sources using Bayesian procedures, protocols to enhance the quality of data collection, and tools to facilitate analysis. Summary graphs communicating FCM findings sacrificed detail but facilitated stakeholder discussion of the most important relationships. We used maps not as predictive models but to surface and share perspectives of how change could happen and to inform dialogue. Analysis included simple manual techniques and sophisticated computer-based solutions. A wide range of experience in initiating, drawing, analysing, and communicating the maps illustrates FCM flexibility for different contexts and skill bases.

Conclusions

A strong core procedure can contribute to more robust applications of the technique while adapting FCM for different research settings. Decision-making often involves choices between plausible interventions in a context of uncertainty and multiple possible answers to the same question. FCM offers systematic and traceable ways to document, contrast and sometimes to combine perspectives, incorporating stakeholder experience and causal models to inform decision-making. Different depths of FCM analysis open opportunities for applying the technique in skill-limited settings.


Collaborative generation of knowledge recognises people’s right to be involved in decisions that shape their lives [ 1 ]. Their participation makes research and interventions more relevant to local context and priorities and, thus, more likely to be effective [ 2 ]. A commitment to the co-creation of knowledge proposes that people make better decisions when they have the benefit of both scientific and other forms of knowledge. These include context-specific understanding, knowledge claims based on local settings, experience and practice, and organisational know-how [ 3 ]. Participatory research expands the idea of what counts as evidence, opening space for the experience and knowledge of stakeholders [ 4 , 5 ]. The challenge is how to create a level playing field where diverse knowledges can contribute equally. We present fuzzy cognitive mapping (FCM) as a rigorous and transparent tool to combine different perspectives into composite theories to guide shared decision-making [ 6 , 7 , 8 ].

In the early 1980s, the application of fuzzy logic [ 9 ] to concept mapping of decision making [ 10 , 11 ] led to FCM [ 12 ]. Fuzzy cognitive maps are directed graphs [ 13 ] where nodes correspond to factors or concepts, and arrows describe directed influences. Using this basic structure for causal relationships, users can represent their knowledge of complex systems, including many interacting concepts. Many variables are not easily measured or estimated with precision, or are hard to circumscribe within a formal definition: for example, wellbeing, cultural safety, or racism [ 14 , 15 ]. Nevertheless, their causes and effects are important to capture for decision-making. Fuzzy cognitive maps offer a formal structure to include these kinds of variables in the analysis of complex health issues.

The flexibility of the technique allows for systematic mapping of knowledge from multiple sources to identify influences on a particular outcome while supporting collective learning and decision making [ 16 ]. FCM has been used across multiple fields with applications that include modelling, prediction, monitoring, decision-making, and management [ 17 , 18 , 19 , 20 ]. FCM has been applied in medicine to aid diagnosis and treatment decision-making [ 21 , 22 ]. FCM has also supported community and stakeholder engagement in environmental sciences [ 23 , 24 ] and health by examining conventional and Indigenous understanding of causes of diabetes [ 25 ].

Many implementation details contribute to interpretability of FCM, a common concern for researchers new to the technique. This review addresses these practical details when we used FCM to include local stakeholder understanding of causes of health issues in co-design of actions to tackle those issues. The focus is on transparent mapping of stakeholder experience and how it meets requirements for trustworthy data collection and initial analysis. The methods section describes what fuzzy cognitive maps are and how we documented our experience using them. We describe tools and procedures for researchers using FCM to incorporate different knowledges in health research. The results summarize experience in four stages of mapping: framing the outcome of concern, drawing the maps, performing basic analyses, and using the resulting maps. The discussion contrasts our practices with those described in the literature, identifying potential limitations and suggesting future directions.

Methods of the practice review

Fuzzy cognitive maps are graphs of causal understanding [ 6 ]. The unit of meaning in fuzzy cognitive mapping is a relationship, which corresponds to two nodes (concepts) linked by an arrow. Arrows originate in the causes and point to their outcomes. A cause can lead to an outcome directly or through multiple pathways (succession of arrows). Figure 1 shows a fuzzy cognitive map of causes of healthy maternity according to Indigenous traditional midwives in southern Mexico [ 26 ].

figure 1

Fuzzy cognitive map of causes of a healthy maternity according to indigenous traditional midwives in Guerrero, Mexico. ( a ) Graphical display of a fuzzy cognitive map. The boxes are nodes, and the arrows are directed edges. Solid lines indicate positive influences, and dashed lines indicate negative influences. Thicker lines correspond to stronger effects. ( b ) Adjacency matrix with the same content as the map. Rows and columns correspond to the nodes. The value in each cell indicates the strength of the influence of one node (row) on another (column). Reproduced without changes with permission from the authors of [ 26 ]

The “fuzzy” appellation refers to weights that indicate the strength of relationships between concepts. For example, a numeric scale with values between one and five might correspond to very low , low , medium , high or very high influence. If the value is 0, there is no causal relationship, and the concepts are independent. Negative weights indicate a causal decrease in the outcome, and positive weights indicate a causal increase in the outcome. A tabular display of the map, an adjacency matrix, has the concepts in columns and rows. The value in a cell indicates the weight of the influence of the row concept on the column concept (Fig.  1 ). A map can also be represented as an edge list. This shows relationships across three columns: causes (originating node), outcomes (landing node) and weights. Some maps use ranges of variability for the weights (grey fuzzy cognitive maps) [ 27 ] or fuzzy scales to indicate changing states of factors [ 21 ].
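To make these interchangeable representations concrete, the sketch below converts an edge list into an adjacency matrix and back. The concepts and weights are illustrative placeholders on the -1 to 1 convention, not data from any of the case studies.

```python
# Convert an FCM edge list (cause, outcome, weight) into an adjacency
# matrix and back. Rows are causes, columns are outcomes, 0 means no
# causal relationship. Concepts and weights are hypothetical examples.

edge_list = [
    ("quality_of_care", "population_health", 0.8),   # strong positive influence
    ("violence", "population_health", -0.6),         # negative influence
    ("violence", "violence", 0.4),                   # feedback loop
]

concepts = sorted({node for edge in edge_list for node in edge[:2]})
index = {name: i for i, name in enumerate(concepts)}

n = len(concepts)
adjacency = [[0.0] * n for _ in range(n)]
for cause, outcome, weight in edge_list:
    adjacency[index[cause]][index[outcome]] = weight

# Recover the edge list from the matrix (non-zero cells only).
recovered = [
    (concepts[i], concepts[j], adjacency[i][j])
    for i in range(n) for j in range(n) if adjacency[i][j] != 0.0
]
```

The same round trip works for any of the map formats described above; grey or fuzzy-scale maps would store ranges or labels in the cells instead of single numbers.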

Following rules of logical inference, the relationships between concepts can suggest potential explanations for how they work together to influence a specific outcome [ 28 , 29 ]. One might interpret a cognitive map as a series of if-then rules [ 9 ] describing causal relationships between concepts [ 12 ]. For example, if the quality of health care increases, then the population’s health should also improve. Maps can incorporate feedback loops [ 30 ], such as: if violence increases, then more violence happens.

An international participatory research group met in Montreal, Canada, to share FCM experience and discuss its applications. FCM implementation in all cases shared a common ten-step protocol [ 6 ], with the results of almost all exercises published in peer-reviewed journals. The lead author of each publication presented their work and corroborated that the synthesis reflected the most important aspects of their experiences. A webpage details the methods, materials, and tools members of the group have used in practice ( https://ciet.org/fcm ).

As a multilevel training exercise, the meeting included graduate students, emerging researchers with their first research projects and experienced FCM researchers. Nine researchers presented their experience, challenges and lessons learned. The senior co-author (NA) led a four-round nominal group discussion covering consecutive mapping stages: (1) who defined the research issue and how, (2) procedures for building maps and the role of participants at each point, (3) analysis tools and methods and (4) use of the maps. Before the session, participants received the published papers concerning the FCM projects under discussion and the guiding questions about the four themes. After the meeting, the first author (IS) transcribed and drew on the session recording to draft the manuscript. All authors subsequently contributed to the manuscript, which follows the approach used to describe our work with narrative evaluations [ 31 ]. The summary of FCM methods used (the results of this practice review) follows the categories the nominal group used to inquire about FCM implementation.

Researchers reported their practice in three different FCM applications. Most cases mapped stakeholder knowledge in the context of participatory research [ 26 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 ]. They also described using FCM to contextualise mixed-methods literature reviews in stakeholder perspectives [ 5 , 40 , 41 , 42 ] and to conduct secondary quantitative analysis of surveys [ 43 , 44 , 45 ]. A fourth FCM application, not discussed in detail in this paper, is in graduate teaching. A master’s program in Colombia and a PhD course in Canada incorporated the creation of cognitive maps as a learning tool, with each student building a map to describe how their research project could contribute to promoting change.

Table  1 summarises the characteristics of the 25 FCM practices reviewed. The number of maps varied from a handful to dozens. Table  2 summarises the processes of defining the issue, drawing, analysing, and using the three different kinds of maps: stakeholder knowledge, mixed-methods literature reviews, and questionnaire data. Table  3 summarises the FCM processes in each of the four mapping stages. Of 23 FCM publications from the group since 2017 (see Additional File 1 ), four describe methodological contributions [ 4 , 5 , 6 , 35 ], and the rest describe the use of FCM in specific contexts.

Stage 1. Who defined the issue and how

Focus group discussions or conversations with partners were the most common methods for defining the issue to be mapped. Cases #6 (pregnant and parenting adolescents) and #20 (women’s satisfaction with HIV care) used literature maps to identify priorities with participants in Canada, while cases #5 (immigrant’s unmet postpartum care needs) and #7 (child protection involvement) contextualised literature-based maps with stakeholder knowledge. In cases #15 and #16 on violence against women and suicide among men in Botswana, community members involved in another project raised these issues as concerns. Two cases used FCM in the secondary analysis of survey data to answer questions defined by the research teams (#1 Mexico dengue) and academic groups (#2 Colombia medical education).

All cases used a participatory research framework [ 46 ]. FCM worked both in well-established partnerships (#8 and #9 involved researchers and Indigenous communities in Mexico, and #20 well-established partnerships with women living with HIV) and in the early stages of trust building (#6 adolescent parents in Canada).

Almost all cases reported two levels of ethical review: institutional boards linked with universities and local entities (health ministries and authorities, advisory boards, community organisations or leaders). Most review boards were unfamiliar with FCM, and some requested additional descriptions and protocols to help them understand the method. In Guatemala (#17) and Nunavik (#18), Indigenous authorities and a steering committee requested a mapping session themselves before approving the project. Most projects used oral consent, mainly due to the involvement of participants with a wide range of literacy levels and in contexts of mistrust about potential misuse of signed documents (Indigenous groups in #8) or during virtual mapping sessions (women living with HIV in #20).

Strengths-based or problem-focused

Most cases followed a strengths-based approach, focusing on what influences a positive outcome (for example, what causes good maternal health instead of what causes maternal morbidity or mortality). Some cases created two maps: one about causes of a positive outcome and one about causes of the corresponding negative outcome (#8 causes and risks for safe birth in Indigenous communities, and #10 causes and protectors of short birth interval). Building two maps helped to unearth additional actionable concepts but was time-consuming and tiring for the stakeholders creating the maps.

Broad concepts or tight questions

A recurring issue was how broad the question or focus should be. A broad question about ‘what influences wellbeing’ fitted well with the holistic perspectives of Mayan communities but posed challenges for drawing, analysing, and communicating maps with many concepts and interactions (#17, Guatemala). A very narrowly defined outcome, on the other hand, might miss potentially actionable causes.

Stage 2. Drawing maps

In the group’s experience, most people readily understand how to make maps, given their basic structure (cause, arrow and consequence). Based on their collective experience, the research group developed a protocol to increase replicability and data quality in FCM, particularly for stakeholder maps, which often involve multiple facilitators and different languages. Creating maps from literature reviews and questionnaire data did not have some of the complications of creating maps with stakeholders but also benefitted from detailed protocols.

Stakeholder maps

The mapping cases reviewed here included mappers ranging from highly trained university researchers (#9 on safe birth) to people without education and speaking only their local language (#8 in Mexico, #10 and #21 in Nigeria, #11 and #12 in Uganda). Meeting participants discussed the advantages and disadvantages of group and individual maps. Groups stimulate the emergence of ideas but include the challenge of ensuring all participants are heard. Careful training of facilitators and managing the mapping sessions as nominal groups helped to increase the participation of quieter people. Groups of not more than five mappers were much easier to facilitate without losing the creative turbulence of a group. Most cases relied on small homogeneous groups, run separately by age and gender, to avoid power imbalances among the map authors. Individual sessions worked well for sensitive topics. They accommodated schedules of busy participants and worked for mappers not linked to a specific community.

Basic equipment for mapping is inexpensive and almost universally available. Most researchers in our group used either sticky notes on a large sheet of paper or magnetic tiles on a metal whiteboard (Fig.  2 ). Some researchers had worked directly with free software to draw the electronic maps ( www.mentalmodeler.com or www.yworks.com/products/yed ), while others digitised the physical maps, often from a photograph. Three cases conducted FCM over the internet or telephone, with individual mappers (#9, #20 and #25) constructing their maps online in real-time.

figure 2

Fuzzy cognitive maps from group sessions in Uganda and Nigeria. ( a ) A group of women in Uganda discusses what contributes to increasing institutional childbirths in rural communities. They used sticky notes and markers on white paper to draw the maps. ( b ) A group of men in Northern Nigeria uses a whiteboard and magnetic tiles to draw a map on causes of short birth intervals

Group mapping sessions typically had a facilitator and a reporter to take notes on the discussions. Reporters are crucial in recording explanations about the meaning of concepts and links. Experienced researchers stressed that careful training of facilitators and reporters, including several rounds of field practice, is essential to ensure quality. We developed materials to support training and quality control of mapping sessions (#21 Nigeria), available at www.ciet.org/fcm . In Nigeria (#21), the research team successfully field-tested Zoom on mobile handsets, connected to the internet over the cellular network, to allow virtual participation of international researchers in classroom and community FCM sessions.

Many mappers in community groups had limited or no schooling and only verbal use of their local language. It worked well in these cases for the facilitators to write the concepts on the labels in English or Spanish, while the discussion was in the local language. Facilitators frequently reminded the groups about the labels of the concepts in the local language. In case #16 in Botswana, more literate groups wrote the concepts in Setswana, and the facilitators later translated them into English. Most researchers found that the FCM graphical format helped to overcome language barriers, and it seems to have worked equally well with literate and illiterate groups. Additional file 2 lists common pitfalls and potential solutions during group mapping sessions.

Identifying causes of the issue

Some mapping sessions started by asking participants what the central issue of the map meant to them. This was useful for comparing participant views about the main topic (#8 and #9 maternal health in Indigenous communities and #20 satisfaction with HIV care) and in understanding local concepts of broad topics (#17 Indigenous wellbeing). In Nigeria (#21), group discussions defined elements of adolescent sexual and reproductive health before undertaking FCM, and facilitators shared the list of elements with participants in mapping sessions. In Nunavik (#13 Canada, Inuit women on HPV self-sampling), participating women received an initial presentation to create a common understanding to discuss HPV self-sampling, an unfamiliar technique in Inuit communities.

Some cases created stakeholder maps from scratch, asking participants what they thought would cause the main outcome (#8 to 10, 14 to 19, 21, and 23 to 25). Other cases reviewed the literature first and presented the findings to participants (#5, 7 and 20). In these cases, the facilitators reminded participants that literature maps might not represent their experiences. They encouraged them to add, remove and reorganise concepts, relationships, and weights until they felt the map represented their knowledge.

Once participants had identified concepts (nodes), facilitators had to carefully consider the wording of the labels to represent the meaning of each node and identify potential duplicates. They confirmed duplications with participants and removed repeated nodes. In case #19, participating girls first had one-on-one conversations to discuss and prioritise what they thought contributed to a balanced diet. In a second activity, the actual mapping session, participants organised those concepts into categories and voted on their priorities for action. The Nigerian cases, with large numbers of maps, maintained an iterative list of labels: new concepts were added after each mapping session, and the standard labels were reused in later sessions once mappers confirmed that the wording conveyed what they intended. This step is helpful for the combination of maps that we describe in stage 3.

Drawing arrows

Some maps showed mainly direct influences on the central issue, while others identified multiple relationships between concepts in the map. When the central issue was too broad, participants found it hard to assign relationships between concepts (#17). Facilitators frequently asked participants to clarify the meaning of proposed causal pathways or how they perceived one factor would lead to another and to the main outcome (see Additional file 2 ). To ensure arrows were appropriately labelled as positive or negative, some facilitators used standardised if-then questions to draw the relationships. For example, if factor A increases, does factor B increase or decrease? (#9).

All the presented cases used a scale from one to five to indicate the weights of links. Many Indigenous participants insisted that all the concepts were equally important (#8, 13 and 18). Careful training of facilitators encouraged participant weighting (#10, 15 and 16). It was often helpful to identify the two relationships with extreme upper and lower weights and use those as a reference to weight the rest of the relationships.

Verifying the maps

Stakeholder sessions ended with a verification of the final map. This initial member checking preceded any additional analysis. Participants readily accepted the technique and reported satisfaction that they could see concrete representations of their knowledge by the end of the FCM sessions (#13). It reaffirmed what they knew and what they could contribute in a meaningful way. In Ghana (#19 adolescent nutrition), young participants described mapping sessions as empowering when interviewed six months later [ 42 ].

Synthesis of literature reviews

FCM can portray qualitative and quantitative evidence from the literature in the same terms as stakeholder experience and beliefs and is a cornerstone of an innovative and systematic approach called the Weight of Evidence. In this approach, stakeholders interpret, expand on, and prioritise evidence from literature reviews (#5 unmet postpartum care needs [ 5 , 34 ], #3 maternal health in communities with traditional midwives [ 40 ], #4 medical evacuation of Indigenous pregnant women [ 47 ], #7 child protection investigations among adolescent parents, and #22 community participation in health research) [ 41 ].

Case #5 (Weight of Evidence) demonstrated how to convert quantitative effect estimates (e.g., odds ratio, relative risk) into a shared format to facilitate comparison between findings [ 5 ]. When multiple effect estimates described the same relationship, appropriate techniques [ 7 , 48 , 49 ] allowed for calculating pooled estimates. In #5, qualitative concepts represented ‘unattached’ nodes when the studies suggested they contributed to the outcome of interest. The researchers updated the literature maps with stakeholder views using a Bayesian hierarchical random-effects model with non-informative priors [ 50 ].

In scoping reviews with a broader topic and more heterogeneity of sources (#3, #22) [ 40 ], the map reported the relationships and their supporting data, such as quotes for qualitative studies and odds ratios for quantitative ones, instead of unifying the results in a single scale. Each relationship was counted as 1 (present) with positive or negative signs. Data extraction used a predefined format in which at least two independent researchers registered the relationships after reading the full texts. Each included study contributed to the model in the same way it would contribute to an overall discourse about the topic.

Maps from questionnaire data

Researchers used questionnaire data to generate maps of a behavioural change model in dengue prevention in Mexico [ 43 ] and cultural safety among medical trainees in Colombia [ 44 , 45 ]. The dengue project produced separate maps for men and women, while the Colombian map included all participants. Each map had seven nodes, one for each domain of change in the CASCADA model of behavioural change (Fig. 3 ): Conscious knowledge, Attitudes, positive deviation from Subjective norms, intentions to Change behaviour, Agency, Discussion of possible action and Action or change of practice [ 51 ]. The surveys included questions for each intermediate result, and the repeat survey during the impact assessment provided a counterfactual comparison. For example, in Mexico (#1), Conscious knowledge (first C) was the ability to identify a physical sample of a mosquito larva during the interview, and Action (last A) focused on participation in collective activities in the neighbourhood to control mosquito breeding sites. The maps in Colombia (#2) explored the CASCADA network of partial results towards the students’ self-reported intention to change their patient-related behaviour.

figure 3

Maps from questionnaire data from the study on dengue control in Guerrero, Mexico. Green arrows are positive influences, and red arrows correspond to negative influences. The control group showed a negative influence in the results chain with a cumulative net influence of 0.88; the intervention group showed no such block and a cumulative net influence of 1.92. Reproduced without changes with permission from the authors of [ 43 ]

The arrows linking the nodes received a weight ( w ) equivalent to the odds ratio (OR) between the outcomes, transformed to a symmetrical range (-1 to 1) using the formula proposed by Šajna:
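One transform with the stated properties, assumed here for illustration rather than confirmed as Šajna's exact formula, is w = (OR - 1) / (OR + 1): it maps OR = 1 (no association) to 0, OR > 1 to positive weights, and OR < 1 to negative weights, symmetrically for an odds ratio and its reciprocal. A minimal sketch:

```python
# ASSUMED transform: map an odds ratio in (0, inf) onto a symmetric
# edge weight in (-1, 1) via w = (OR - 1) / (OR + 1).
# OR = 1 -> 0; OR = 3 -> 0.5; OR = 1/3 -> -0.5.

def or_to_weight(odds_ratio: float) -> float:
    if odds_ratio <= 0:
        raise ValueError("odds ratio must be positive")
    return (odds_ratio - 1.0) / (odds_ratio + 1.0)
```

The symmetry matters for maps: an OR of 3 and an OR of 1/3 describe equally strong influences in opposite directions, and this transform gives them weights of equal magnitude and opposite sign.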

Stage 3. Tools and methods to analyse the maps

Comparing levels of influence

Initial analysis of maps includes a pattern correspondence table that lists and contrasts direct and indirect influences reported from different sources. Free software allows for digitising maps and converting them into lists of relationships or matrices for more complex analyses. In our analysis approach, we first calculate the transitive closure (TC) of each map. This mathematical model provides the total influence of one concept on all others after considering all the possible paths linking them [ 7 ]. Two models are available [ 52 ]: fuzzy TC, recommended for maps with ad hoc concepts, and probabilistic TC, often used for maps with predetermined concepts. With the transitive closure of a map, it is possible to build a pattern correspondence table comparing influences according to different knowledge sources. Table  4 shows an example.
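As an illustration of the fuzzy variant, transitive closure can be computed by iterating max-min composition to a fixed point: a path is as strong as its weakest link, and the total influence of one node on another is the strongest such path. This sketch covers non-negative weights only; signed maps need the dedicated treatment in the cited methods papers [ 7 , 52 ].

```python
# Fuzzy transitive closure by iterated max-min composition.
# closure[i][j] ends up as the strength of the strongest path from
# node i to node j, where a path's strength is its weakest edge.
# Sketch for non-negative weights only.

def fuzzy_transitive_closure(matrix):
    n = len(matrix)
    closure = [row[:] for row in matrix]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    via_k = min(closure[i][k], closure[k][j])
                    if via_k > closure[i][j]:
                        closure[i][j] = via_k
                        changed = True
    return closure
```

For example, with a chain A -> B (0.8) and B -> C (0.5), the closure reports an indirect influence of A on C of 0.5, the weakest link along the only path.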

Additional tools for analysing the maps include centrality scores from social network analysis. These measures compare the sum of the absolute values of the weights of incoming or outgoing edges to identify the total importance of a node [ 53 ]. Higher levels of out-degree centrality indicate more influence on other concepts, and higher values of in-degree centrality suggest that the concepts are important outcomes in the map [ 16 ].
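On an adjacency matrix, these degree centralities reduce to row and column sums of absolute weights. A minimal sketch with an illustrative matrix (not study data):

```python
# Degree centrality from an FCM adjacency matrix: out-degree sums the
# absolute weights of a node's outgoing edges (how strongly it drives
# other concepts); in-degree sums incoming absolute weights (how
# strongly it is driven, i.e. how much it behaves as an outcome).

def degree_centrality(matrix):
    n = len(matrix)
    out_degree = [sum(abs(w) for w in matrix[i]) for i in range(n)]
    in_degree = [sum(abs(matrix[i][j]) for i in range(n)) for j in range(n)]
    return out_degree, in_degree
```

Taking absolute values before summing means a strong negative influence counts as much towards a node's importance as a strong positive one.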

Operator-independent weighting

In response to the challenges of participant weighting in some contexts, we applied Harris’ discourse analysis to calculate overall weights across multiple maps based on the frequency of each relationship across the whole discourse (e.g., multiple maps from stakeholders or studies in literature reviews). Harris sought an operator-independent way to identify the role of morphemes (a part of a word, a word, or several words with an irreducible meaning) in a discourse, based exclusively on their occurrence in the text [ 54 ]. Because it used frequency, among other criteria (partial order, redundancies and dependencies), it did not depend on the researcher’s assumptions about meaning. Similarly, we intended to understand the causal meaning of relationships identified through FCM with an operator-independent procedure. A concept that caused an outcome across multiple maps would have a stronger causal role than a concept that caused the same outcome only in one or two maps. We found that analysis of maps using discourse analysis and participant weighting produced similar results [ 35 ].
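The frequency principle can be sketched as scoring each relationship by its net signed occurrence across all maps in the discourse. This is an illustration of the idea only, not the published algorithm, which also draws on partial order, redundancies and dependencies.

```python
from collections import Counter

# Operator-independent weighting sketch: a relationship reported with
# the same sign in many maps gets a stronger weight than one reported
# in only one or two maps. Each map is an edge list of
# (cause, outcome, sign) with sign = +1 or -1.

def frequency_weights(maps):
    counts = Counter()
    for edges in maps:
        for cause, outcome, sign in edges:
            counts[(cause, outcome)] += sign
    total = len(maps)
    # Net signed frequency, normalised by the number of maps.
    return {edge: net / total for edge, net in counts.items()}
```

Relationships reported with conflicting signs partly cancel out, so the resulting weight also reflects agreement across the maps, not just raw frequency.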

Combining maps

In many cases, the analysis included bringing the transitive closure maps together as an average representation of stakeholder groups. Combining maps often required reconciling differences in labels across maps. This was also an opportunity to generate categories to describe groups of related factors. Some cases involved stakeholders in this process, while others applied systematic researcher-led procedures followed by member checking exercises to confirm categories. Combining maps used weighted or unweighted averages of each relationship’s weight across maps. It also used stakeholder-assigned Bayesian priors to update corresponding relationships identified in the literature [ 5 ].
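An unweighted average combination can be sketched as cell-wise averaging of adjacency matrices over a shared node set, treating an edge a map omits as weight 0. This is one convention only; as noted above, the cited work also used weighted averages and Bayesian updating.

```python
# Combine several FCMs defined over the same ordered node set by
# averaging each relationship's weight across the maps. An edge absent
# from a map contributes 0 to that cell's average.

def average_maps(matrices):
    n = len(matrices[0])
    k = len(matrices)
    return [
        [sum(m[i][j] for m in matrices) / k for j in range(n)]
        for i in range(n)
    ]
```

Averaging over all maps (rather than only those mentioning an edge) down-weights relationships that few participants reported, which is usually the intended behaviour for a group-level summary.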

Reduction of maps

Stakeholder and literature maps usually have many factors and relationships, making their analysis complex and hindering communication of results. We created reduced maps following a qualitative synthesis of nodes and a mathematical procedure to calculate category level weights [ 35 ]. Some cases in Canada engaged participants in defining the categories as they progressed through the mapping session (#7). However, creating categories within individual mapping sessions can lead to difficulties with comparability between groups when the categorisation varies between them.
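One way to compute category-level weights, sketched here as an assumption rather than the exact procedure of [ 35 ], is to average the weights of all edges running from one category to another:

```python
# Reduce a map by collapsing nodes into categories. The category-level
# weight is the average of all edge weights running from nodes in one
# category to nodes in another (an illustrative aggregation rule).

def reduce_map(adjacency, node_category):
    """adjacency: dict {(cause, outcome): weight};
    node_category: dict {node: category label}."""
    sums, counts = {}, {}
    for (cause, outcome), weight in adjacency.items():
        key = (node_category[cause], node_category[outcome])
        sums[key] = sums.get(key, 0.0) + weight
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}
```

The qualitative synthesis step (deciding which nodes belong to which category) is where stakeholder involvement or member checking enters; the aggregation itself is mechanical.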

Sensemaking of relationships

Weighting by stakeholders helps prioritise direct and indirect influences that contribute to an outcome. Stakeholder narratives and weights helped to develop explanations of how different factors contribute to the outcomes. In cases #5 and #7, an additional literature search based on factors identified by stakeholders contributed to creating explanatory accounts. The reporting of women’s satisfaction with HIV care (#20) used quotes recorded in the mapping sessions to explain the narratives of the most meaningful relationships. The analysis of maps on violence against women in Botswana (#15) identified important intermediate factors commonly depicted along the pathways from other factors to the main outcome.

Stage 4. How maps were used

Researchers described how they edited and simplified complex maps to make them more accessible, including to people with limited literacy, in Mexico, Nigeria, and Uganda (#8 and 10 to 12). In addition to creating category maps, they used colour coding, labels in the local language for the most influential factors, arrows of different thicknesses according to their weight, and different sizes of boxes for concepts according to their importance based on centrality scores. When sharing results, they often contrasted maps from different stakeholders. In Canada (#5 and #7), researchers developed explanatory frameworks from the mapping exercises, and stakeholders refined this framework and identified priority areas for action. In Canada (#5 and #7), Botswana (#14) and Uganda (#11 and #12), stakeholders viewed and discussed the summary maps from other groups. The maps, further discussed by stakeholders, helped inform the design of media-based communication interventions in Ghana (#19) and Nigeria (#10).

Our experiences with FCM resonate with and add considerable detail to those of earlier FCM authors [ 18 , 19 ], including those offering protocols for meaningful participation in environmental sciences [ 16 , 49 ]. The most recent literature reviews on the use of FCM have not discussed the contributions we described here [ 20 , 22 , 55 , 56 ] and do not provide details on practical decisions across the mapping process or on the implications of stakeholder authorship. This review provides practical insights for FCM researchers before they generate maps, during data collection and in analysis. The use of FCM to increase data sources in the coproduction of knowledge brings numerous challenges and multiple potential decisions. This paper summarises how we approached these challenges across 25 real-world projects and responds to the questions we often receive from researchers new to the method. These methodological considerations are essential to increase trustworthiness of FCM applications and for an adequate interpretation of its results.

Variability in facilitation of mapping sessions with stakeholders is well recognised as a source of potential differences between groups [ 19 ]. In our experience, the behaviour and attitudes of researchers and facilitators can influence the content of the maps. Careful quality control and member checking can help minimise this influence [ 57 ]. To achieve high-quality, informative maps, our experience highlights the need for clear protocols for data collection, including careful training of facilitators and ongoing supervision and monitoring. This has been essential in some of our projects, which have involved hundreds of participants in creating hundreds of maps.

Our group also used FCM in contextualising mixed-methods literature reviews. Knowledge synthesis is seldom free of reviewer interpretations [ 58 ], and formal protocols for data collection, analysis, synthesis, and presentation could increase the reliability and validity of findings [ 59 ]. Singer et al. also used FCM to summarise qualitative data [ 60 ], a promising application that benefits from FCM’s if-then configurations and linguistic descriptions of concepts and relationships. In our practice, FCM was a practical support to develop formal protocols, to generate pooled effect estimates across studies [ 58 ] and to summarise heterogeneous sources. Weight of Evidence is an innovation to incorporate stakeholder perspectives with scientific evidence, thus addressing the common challenge of contextualising literature findings with local realities. The application of FCM in modelling questionnaire data helps to evaluate result chains as knowledge networks.

Despite its name (fuzzy) and tolerance for uncertainty, FCM is not fuzzy or vague [ 61 ]. It incorporates multiple dimensions of decision-making, including impressions, feelings, and inclinations, in addition to careful reasoning of events and possibilities [ 9 , 62 ]. FCM is a participatory modelling approach [ 63 ] that improves conventional modelling with real-world experience. FCM can help formalise stakeholder knowledge and support learning about an issue to promote action [ 64 ]. An important part of the literature focuses on applying learning algorithms for scenario planning [ 55 , 65 ]. Our group reported positive changes and increased agency among mappers. Future research might explore the impact of FCM as an intervention, both on those sharing their knowledge and on those using the models. The commitment to operator-independent procedures has led us to adapt Harris’s discourse analysis to complement the sometimes-problematic weighting step [ 35 ]. Notwithstanding our ability to generate operator-independent weights, the question of whose views the models represent and who is empowered remains valid and should be discussed in every case [ 66 ].

There is very little literature on FCM in education. FCM could help students clarify the knowledge they share in class [ 67 ]. FCM can also formalise steps to connect and evaluate students’ progression towards concrete learning objectives, a helpful feature in game-based learning [ 68 ]. In our experience, mapping sessions had a transformative effect as participants reflected on what they knew about the main issue and appreciated their knowledge being presented in a tangible product. Further studies could investigate how group and individual characteristics evolve throughout the mapping process.

Decision-making involves choosing alternatives based on their expected impacts. Many people think of FCM in the context of predictive models using learning algorithms [ 55 , 56 , 69 , 70 , 71 , 72 ]. There is also potential for informing other AI-driven methods by incorporating expert knowledge in the form of fuzzy cognitive maps into complex graph-based models [ 73 , 74 ]. The concern in participatory research, and therefore in the use of FCM within it, is equitable engagement in informed decision making. We used FCM not as a predictive tool but for making sense of scenarios and theories to inform choices, recognising multiple possible ways of seeing any issue. Map interpretation hinges on who the authors are and the type of data depicted (opinions, observations, or components of a theory). These soft models characterise direct and indirect dependencies that are difficult to incorporate in formal approaches like differential equations [ 28 ]. Current work of the research group explores participant-led FCM weighting to inform Bayesian analysis of quantitative data and ethnographic approaches to understand deeper meanings of factors depicted in FCM.

A potential concern about FCM is whether the sample size and selection are adequate, yet FCM reports rarely discuss this. There are no formal procedures to estimate the required sample size for mapping exercises (total number of participants, maps, or people in a group session). Singh and Chudasama, for example, continued mapping sessions until the list of causal factors identified reached saturation [ 75 ]. A participatory research approach, however, would conduct as many mapping sessions as necessary to allow all voices, especially those of the most marginalised, to be heard. Our application of Harris’ discourse analysis allows quicker mapping sessions, avoiding the often lengthy weighting process; this can increase the number of maps that can be created with finite resources. The combination of maps results in more robust models because more knowledge informs the final output [ 76 ]. Multiple alternatives exist for combining maps [ 5 , 8 , 21 , 77 ]. Our work has explored Bayesian updating using stakeholder weights as priors [ 5 ].

Strengths and limitations

Almost all the experiences described in this review are published, and the publications provide further details on specific topics. This practice review reflects our experience in participatory research and thus focuses mainly on stakeholder maps. Our group pioneered the use of FCM for contextualising systematic reviews in stakeholder experience. We also used FCM to analyse and to portray progress in changing a results chain in a modified theory of planned behaviour. Operator bias is a constant concern in our FCM practice, reflected in the efforts reviewed here to avoid operator influence in generating the maps, in coding map concepts into categories, and especially in weighting maps, where our innovation relies on Harris’ discourse analysis.

The general use of FCM has well-recognised challenges and limitations. It is easy to forget that cognitive maps reflect opinions and personal experience, which can differ between map authors and diverge from biological causality. This is seldom a major problem in our participatory research practice, where we frame FCM as a set of different perspectives to engage stakeholders or as an entry point to dialogue. As with most visual techniques, the maps are static and do not model the longitudinal evolution of the depicted knowledge network. Viewers might assume relationships in the maps are linear, which is not always the case [76]. For example, the effect of higher age on maternal health outcomes would be very different for teenagers and older mothers.

Most map readers make inferences from the causes to the outcome; the direction of the arrow does not invite reasoning backwards from the outcome to its causes. Different approaches to causal reasoning could affect map construction, weighting and interpretation. Although FCM is relatively robust to cultural and educational differences, our experience includes cultural groups with more complex views of causal relationships than FCM can reflect.

Several questions about conducting FCM remain unanswered, such as how to standardise (and limit) the influence of facilitators, how to use FCM with people living with visual or hearing loss, or how to create meaningful maps using distance communication, such as social media, or when participants have limited time for the exercise.

FCM is a flexible and robust way to share multiple stakeholder perspectives. Although mostly applied to beliefs and experiences, it can also portray published evidence and questionnaire data in formats comparable with subjective experience. FCM requires multiple practical decisions that have implications for interpreting and sharing results. We reviewed these methodological decisions across 25 research projects conducted in different contexts since 2016. The insights might be relevant to researchers interested in using FCM and can contribute to applying it in a more systematic way. Clear protocols and quality control improve the reliability of fuzzy cognitive maps. FCM helps build a shared understanding of an issue across diverse knowledge sources and can provide a systematic and transparent basis for shared decision-making.

Data availability

The dataset supporting the conclusions of this article is included within the article and its additional files.

Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract. 2006;7:312–23.

George AS, Mehra V, Scott K, Sriram V. Community participation in health systems research: a systematic review assessing the state of research, the nature of interventions involved and the features of engagement with communities. PLoS ONE. 2015;10:e0141091.

Oliver S, Roche C, Stewart R, Bangpan M, Dickson K, Pells K, et al. Stakeholder engagement for development impact evaluation and evidence synthesis. London: Centre for Excellence for Development Impact and Learning (CEDIL); 2018. https://doi.org/10.51744/CIP3 .

Dion A, Joseph L, Jimenez V, Gutierrez AC, Ben Ameur A, Robert E, et al. Grounding evidence in experience to support people-centered health services. Int J Public Health. 2019;64:797–802.

Dion A, Carini-Gutierrez A, Jimenez V, Ben Ameur A, Robert E, Joseph L, et al. Weight of evidence: participatory methods and bayesian updating to contextualize evidence synthesis in stakeholders’ knowledge. J Mix Methods Res. 2021;JMMR–19–03:155868982110374.

Andersson N, Silver H. Fuzzy cognitive mapping: an old tool with new uses in nursing research. J Adv Nurs. 2019;75:3823–30.

Giles BG, Haas G, Šajna M, Findlay CS. Exploring aboriginal views of health using fuzzy cognitive maps and transitive closure: a case study of the determinants of diabetes. Can J Public Health. 2008;99:411–7.

Kosko B. Hidden patterns in combined and adaptive knowledge networks. Int J Approximate Reasoning. 1988;2:377–93.

Zadeh LA. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Trans Syst Man Cybern. 1973;SMC–3:28–44.

Axelrod R, editor. Structure of decision: the cognitive maps of political elites. New Jersey, USA: Princeton University Press; 1976.

Langfield-Smith K, Wirth A. Measuring differences between cognitive maps. J Oper Res Soc. 1992;43:1135.

Kosko B. Fuzzy cognitive maps. Int J Man Mach Stud. 1986;24:65–75.

Harary F, Norman RZ, Cartwright D. Structural models: an introduction to the theory of directed graphs. New York: Wiley; 1965.

Zadeh LA. On the analysis of large-scale systems. In: Klir GJ, Yuan B, editors. Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected papers by Lotfi A Zadeh. 1996. pp. 195–209.

Seising R, Tabacchi M. Fuzziness, philosophy, and medicine. In: Seising R, Tabacchi M, editors. Fuzziness and medicine. Berlin: Springer; 2013. pp. 3–8.

Gray SA, Zanre E, Gray SRJ. Fuzzy cognitive maps as representations of mental models and group beliefs. In: Papageorgiou EI, editor. Fuzzy cognitive maps for applied sciences and engineering. Berlin: Springer; 2014. pp. 29–48.

Papageorgiou EI, Salmeron JL. A review of fuzzy cognitive maps research during the last decade. IEEE Trans Fuzzy Syst. 2013;21:66–79.

Glykas M, editor. Fuzzy cognitive maps. Advances in theory, methodologies, tools and applications. Berlin: Springer; 2010.

Jetter AJ, Kok K. Fuzzy cognitive maps for futures studies—A methodological assessment of concepts and methods. Futures. 2014;61:45–57.

Apostolopoulos ID, Papandrianos NI, Papathanasiou ND, Papageorgiou EI. Fuzzy cognitive map applications in Medicine over the last two decades: a review study. Bioengineering. 2024;11:139.

Papageorgiou EI. A new methodology for decisions in medical informatics using fuzzy cognitive maps based on fuzzy rule-extraction techniques. Appl Soft Comput. 2011;11:500–13.

Amirkhani A, Papageorgiou EI, Mohseni A, Mosavi MR. A review of fuzzy cognitive maps in medicine: taxonomy, methods, and applications. Comput Methods Programs Biomed. 2017;142:129–45.

Gray S, Gray S, De kok JL, Helfgott AER, O’Dwyer B, Jordan R et al. Using fuzzy cognitive mapping as a participatory approach to analyze change, preferred states, and perceived resilience of social-ecological systems. Ecol Soc. 2015;20.

Gray S, Chan A, Clark D, Jordan R. Modeling the integration of stakeholder knowledge in social–ecological decision-making: benefits and limitations to knowledge diversity. Ecol Modell. 2012;229:88–96.

Giles BG, Findlay CS, Haas G, LaFrance B, Laughing W, Pembleton S. Integrating conventional science and aboriginal perspectives on diabetes using fuzzy cognitive maps. Soc Sci Med. 2007;64:562–76.

Sarmiento I, Paredes-Solís S, Loutfi D, Dion A, Cockcroft A, Andersson N. Fuzzy cognitive mapping and soft models of indigenous knowledge on maternal health in Guerrero, Mexico. BMC Med Res Methodol. 2020;20:125.

Salmeron JL. Modelling grey uncertainty with fuzzy grey cognitive maps. Expert Syst Appl. 2010;37:7581–8.

Zadeh LA. Fuzzy logic reaches adulthood. Control Eng. 1996;43:50.

Dickerson JA, Kosko B. Virtual worlds as fuzzy cognitive maps. Presence: Teleoperators Virtual Environ. 1994;3:173–89.

Osoba O, Kosko B. Causal modeling with feedback fuzzy cognitive maps. In: Davis PK, O’Mahony A, Pfautz J, editors. Social-behavioral modeling for Complex systems. Wiley; 2019. pp. 587–616.

Tonkin K, Silver H, Pimentel J, Chomat AM, Sarmiento I, Belaid L, et al. How beneficiaries see complex health interventions: a practice review of the most significant change in ten countries. Archives Public Health. 2021;79:18.

Tratt E, Sarmiento I, Gamelin R, Nayoumealuk J, Andersson N, Brassard P. Fuzzy cognitive mapping with Inuit women: what needs to change to improve cervical cancer screening in Nunavik, northern Quebec? BMC Health Serv Res. 2020;20:529.

Sarmiento I, Ansari U, Omer K, Gidado Y, Baba MC, Gamawa AI, et al. Causes of short birth interval (kunika) in Bauchi State, Nigeria: systematizing local knowledge with fuzzy cognitive mapping. Reprod Health. 2021;18:74.

Dion A, Klevor A, Nakajima A, Andersson N. Evidence-based priorities of under‐served pregnant and parenting adolescents: addressing inequities through a participatory approach to contextualizing evidence syntheses. Int J Equity Health. 2021;20:118.

Sarmiento I, Cockcroft A, Dion A, Paredes-Solís S, De Jesús-García A, Melendez D, et al. Combining conceptual frameworks on maternal health in indigenous communities—fuzzy cognitive mapping using participant and operator-independent weighting. Field Methods. 2022;34:1525822X2110704.

Sarmiento I, Field M, Kgakole L, Molatlhwa P, Girish I, Andersson N, et al. Community perceptions of causes of violence against young women in Botswana: fuzzy cognitive mapping. Vulnerable Child Youth Stud. 2023:1–57.

Sarmiento I, Kgakole L, Molatlhwa P, Girish I, Andersson N, Cockcroft A. Community perceptions about causes of suicide among young men in Botswana: an analysis based on fuzzy cognitive maps. Vulnerable Child Youth Stud. 2023:1–23.

Cockcroft A, Sarmiento I, Andersson N. Shared perceived causes of suicide among young men and violence against young women offer potential for co-designed solutions: intervention soft-modelling with fuzzy cognitive mapping. Vulnerable Child Youth Stud. 2023:1–22.

Ghadirian M, Marquis G, Dodoo N, Andersson N. Ghanaian female adolescents perceived changes in nutritional behaviors and social environment after creating participatory videos: a most significant change evaluation. Curr Dev Nutr. 2022;6:nzac103.

Sarmiento I, Paredes-Solís S, Dion A, Silver H, Vargas E, Cruz P, et al. Maternal health and indigenous traditional midwives in southern Mexico: contextualisation of a scoping review. BMJ Open. 2021;11:e054542.

Sarmiento I, Paredes-Solís S, Morris M, Pimentel J, Cockcroft A, Andersson N. Factors influencing maternal health in indigenous communities with presence of traditional midwifery in the Americas: protocol for a scoping review. BMJ Open. 2020;10:e037922.

Gagnon-Dufresne M-C, Sarmiento I, Fortin G, Andersson N, Zinszer K. Why urban communities from low-income and middle-income countries participate in public and global health research: protocol for a scoping review. BMJ Open. 2023;13:e069340.

Andersson N, Beauchamp M, Nava-Aguilera E, Paredes-Solís S, Šajna M. The women made it work: fuzzy transitive closure of the results chain in a dengue prevention trial in Mexico. BMC Public Health. 2017;17(Suppl 1):133–73.

Pimentel J, Cockcroft A, Andersson N. Impact of game jam learning about cultural safety in Colombian medical education: a randomised controlled trial. BMC Med Educ. 2021;21:132.

Pimentel J, Cockcroft A, Andersson N. Game jams for cultural safety training in Colombian medical education: a pilot randomised controlled trial. BMJ Open. 2021;11:e042892.

Andersson N. Participatory research-A modernizing science for primary health care. J Gen Fam Med. 2018;19:154–9.

Silver H, Sarmiento I, Pimentel J-P, Budgell R, Cockcroft A, Vang ZM, et al. Childbirth evacuation among rural and remote indigenous communities in Canada: a scoping review. Women Birth. 2021. https://doi.org/10.1016/j.wombi.2021.03.003 .

Borenstein M, Hedges LV. Effect size for meta-analysis. In: Cooper HM, Hedges LV, Valentine J, editors. Handbook of research synthesis and meta-analysis. 3rd edition. New York: Russell Sage Foundation; 2019. pp. 208–43.

Özesmi U, Özesmi SL. Ecological models based on people’s knowledge: a multi-step fuzzy cognitive mapping approach. Ecol Modell. 2004;176:43–64.

Rosenberg L, Joseph L, Barkun A. Surgical arithmetic: epidemiological, statistical, and outcome-based approach to surgical practice. CRC; 2000.

Andersson N, Ledogar RJ. The CIET Aboriginal youth resilience studies: 14 years of capacity building and methods development in Canada. Pimatisiwin. 2008;6:65–88.

Niesink P, Poulin K, Šajna M. Computing transitive closure of bipolar weighted digraphs. Discrete Appl Math (1979). 2013;161:217–43.

Papageorgiou EI, Kontogianni A. Using fuzzy cognitive mapping in environmental decision making and management: a methodological primer and an application. International perspectives on Global Environmental Change. InTech; 2012.

Harris ZS. Discourse analysis. Language (Baltim). 1952;28:1.

Felix G, Nápoles G, Falcon R, Froelich W, Vanhoof K, Bello R. A review on methods and software for fuzzy cognitive maps. Artif Intell Rev. 2019;52:1707–37.

Jiya EA, Georgina ON. A review of fuzzy cognitive maps extensions and learning. J Inform Syst Inf. 2023;5:300–23.

Olazabal M, Neumann MB, Foudi S, Chiabai A. Transparency and reproducibility in participatory systems modelling: the case of fuzzy cognitive mapping. Syst Res Behav Sci. 2018;35:791–810.

Sandelowski M, Voils CI, Leeman J, Crandell JL. Mapping the mixed methods–mixed research synthesis terrain. J Mix Methods Res. 2012;6:317–31.

Wheeldon J. Mapping mixed methods research: methods, measures, and meaning. J Mix Methods Res. 2010;4:87–102.

Singer A, Gray S, Sadler A, Schmitt Olabisi L, Metta K, Wallace R, et al. Translating community narratives into semi-quantitative models to understand the dynamics of socio-environmental crises. Environ Model Softw. 2017;97:46–55.

Zadeh LA. Is there a need for fuzzy logic? Inf Sci (N Y). 2008;178:2751–79.

Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2013.

Voinov A, Jenni K, Gray S, Kolagani N, Glynn PD, Bommel P, et al. Tools and methods in participatory modeling: selecting the right tool for the job. Environ Model Softw. 2018;109:232–55.

Stave K. Participatory system dynamics modeling for sustainable environmental management: observations from four cases. Sustainability. 2010;2:2762–84.

Papageorgiou EI. Learning algorithms for fuzzy cognitive maps - a review study. IEEE Trans Syst Man Cybern C Appl Rev. 2012;42:150–63.

Chambers R. Participatory mapping and geographic information systems: whose map? Who is empowered and who disempowered? Who gains and who loses? Electron J Inform Syst Developing Ctries. 2006;25:1–11.

Cole JR, Persichitte KA. Fuzzy cognitive mapping: applications in education. Int J Intell Syst. 2000;15:1–25.

Luo X, Wei X, Zhang J. Game-based learning model using fuzzy cognitive map. In: Proceedings of the first ACM international workshop on Multimedia technologies for distance learning - MTDL ’09. New York, New York, USA: ACM Press; 2009. p. 67.

Nápoles G, Espinosa ML, Grau I, Vanhoof K. FCM expert: software tool for scenario analysis and pattern classification based on fuzzy cognitive maps. Int J Artif Intell Tools. 2018;27:1860010.

Papageorgiou K, Carvalho G, Papageorgiou EI, Bochtis D, Stamoulis G. Decision-making process for photovoltaic solar energy sector development using fuzzy cognitive map technique. Energies (Basel). 2020;13:1427.

Papageorgiou EI, Papageorgiou K, Dikopoulou Z, Mouhrir A. A web-based tool for fuzzy cognitive map modeling. In: International Congress on Environmental Modelling and Software. 2018. p. 73.

Nápoles G, Papageorgiou E, Bello R, Vanhoof K. On the convergence of sigmoid fuzzy cognitive maps. Inf Sci (N Y). 2016;349–350:154–71.

Apostolopoulos ID, Groumpos PP. Fuzzy cognitive maps: their role in Explainable Artificial Intelligence. Appl Sci. 2023;13:3412.

Mkhitaryan S, Giabbanelli PJ, Wozniak MK, de Vries NK, Oenema A, Crutzen R. How to use machine learning and fuzzy cognitive maps to test hypothetical scenarios in health behavior change interventions: a case study on fruit intake. BMC Public Health. 2023;23:2478.

Singh PK, Chudasama H. Assessing impacts and community preparedness to cyclones: a fuzzy cognitive mapping approach. Clim Change. 2017;143:337–54.

Kosko B. Foreword. In: Glykas M, editor. Fuzzy cognitive maps. Advances in theory, methodologies, tools and applications. Berlin: Springer; 2010. pp. VII–VIII.

Papageorgiou K, Singh PK, Papageorgiou EI, Chudasama H, Bochtis D, Stamoulis G. Participatory modelling for poverty alleviation using fuzzy cognitive maps and OWA learning aggregation. PLoS ONE. 2020;15:e0233984.

Dion A, Nakajima A, McGee A, Andersson N. How adolescent mothers interpret and prioritize evidence about perinatal child protection involvement: participatory contextualization of published evidence. Child Adolesc Soc Work J. 2022;39:785–803.

Belaid L, Atim P, Ochola E, Omara B, Atim E, Ogwang M, et al. Community views on short birth interval in Northern Uganda: a participatory grounded theory. Reprod Health. 2021;18:88.

Belaid L, Atim P, Atim E, Ochola E, Ogwang M, Bayo P, et al. Communities and service providers address access to perinatal care in Postconflict Northern Uganda: socialising evidence for participatory action. Fam Med Community Health. 2021;9:e000610.

Skerritt L, Kaida A, Savoie É, Sánchez M, Sarmiento I, O’Brien N, et al. Factors and priorities influencing satisfaction with care among women living with HIV in Canada: a fuzzy cognitive mapping study. J Pers Med. 2022;12:1079.

Cockcroft A, Omer K, Gidado Y, Mohammed R, Belaid L, Ansari U, et al. Impact-oriented dialogue for culturally safe adolescent sexual and reproductive health in Bauchi State, Nigeria: protocol for a codesigned pragmatic cluster randomized controlled trial. JMIR Res Protoc. 2022;11:e36060.

Sarmiento I, Zuluaga G, Paredes-Solís S, Chomat AM, Loutfi D, Cockcroft A, et al. Bridging western and indigenous knowledge through intercultural dialogue: lessons from participatory research in Mexico. BMJ Glob Health. 2020;5:e002488.

Ansari U, Pimentel J, Omer K, Gidado Y, Baba MC, Andersson N, et al. Kunika women are always sick: views from community focus groups on short birth interval (kunika) in Bauchi state, northern Nigeria. BMC Womens Health. 2020;20:113.

Acknowledgements

Umaira Ansari, Michaela Field, Sonia Michelsen, Khalid Omer, Amar Azis, Drs Shaun Cleaver and Sergio Paredes were present in the nominal group discussion. Dr Mateja Šajna read and commented on initial versions of this manuscript. The data and methods used for this paper are available in the manuscript. All authors read, contributed to, and approved the final manuscript.

The work for this manuscript did not receive external funding. The individual FCM projects received financial support from a range of funding bodies, acknowledged in the individual publications about the projects. All individual funding agreements ensured the authors’ independence in designing the study, interpreting the data, writing, and publishing the report. The Canadian Institutes of Health Research contributed to the publication costs of this manuscript. 

Author information

Authors and Affiliations

Department of Family Medicine, McGill University, 5858 Ch. de la Côte-des-Neiges, Montreal, QC, H3S 1Z1, Canada

Iván Sarmiento, Anne Cockcroft, Anna Dion, Loubna Belaid, Hilah Silver, Katherine Pizarro, Juan Pimentel, Lashanda Skerritt, Mona Z. Ghadirian, Marie-Catherine Gagnon-Dufresne & Neil Andersson

Universidad del Rosario, Grupo de Estudios en Sistemas Tradicionales de Salud, Bogota, Colombia

Iván Sarmiento

Facultad de Medicina, Universidad de La Sabana, Chía, Colombia

Juan Pimentel

Institut Lady Davis pour la Recherche Médicale, Montreal, Canada

Elyse Tratt

École de santé publique, Département de médecine sociale et préventive, Université de Montréal, Montreal, Canada

Marie-Catherine Gagnon-Dufresne

Centro de Investigación de Enfermedades Tropicales, Universidad Autónoma de Guerrero, Acapulco, Mexico

Neil Andersson

Contributions

IS and NA designed the study and drafted the initial version of the manuscript. All the authors read and contributed to the sections for which their work was more relevant.

Corresponding author

Correspondence to Iván Sarmiento .

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Sarmiento, I., Cockcroft, A., Dion, A. et al. Fuzzy cognitive mapping in participatory research and decision making: a practice review. Arch Public Health 82, 76 (2024). https://doi.org/10.1186/s13690-024-01303-7

Received: 25 July 2023

Accepted: 30 April 2024

Published: 20 May 2024

DOI: https://doi.org/10.1186/s13690-024-01303-7

  • Fuzzy cognitive mapping
  • Participatory modelling
  • Weight of evidence
  • Stakeholder engagement
  • Fuzzy logic
  • Public health
  • Global health

Archives of Public Health

ISSN: 2049-3258


A child using a kiosk at the Children’s Discovery Museum of San Jose. Drawings from the kiosk were collected and analyzed using AI to help the researchers better understand how children perceive the world, and how they communicate those perceptions through drawing. (Image credit: Nick Gamurot)

Children’s ability to draw recognizable objects and to recognize each other’s drawings improves concurrently throughout childhood, according to a new study from Stanford University.

In work published Feb. 8 in Nature Communications, the researchers used machine learning algorithms to analyze changes in a large sample of drawings by children aged 2 to 10. The study, conducted by researchers Bria Long, Judith Fan, Holly Huey, Zixian Chai, and Michael Frank, found that children’s abilities to draw and to recognize objects develop in parallel. It also found that not all of the improvement in drawing recognizability throughout childhood could be attributed to improvement in drawing skill or to the inclusion of stereotypical attributes, such as tall ears on a rabbit.

“The kinds of features that lead drawings from older children to be recognizable don’t seem to be driven by just a single feature that all the older kids learn to include in their drawings,” said Judith Fan, an assistant professor of psychology in the School of Humanities and Sciences and principal investigator of the Cognitive Tools Lab. “It’s something much more complex that these machine learning systems are picking up on.”

Using machine learning enabled the researchers to interpret the large sample size of drawings in this study and highlighted subtleties that helped them understand how children perceive the world, and how they communicate those perceptions through drawing.

Data and doodles

To conduct the study, researchers worked with staff members from the Children’s Discovery Museum of San Jose to install a kiosk within the museum. The kiosk displayed recorded video prompts of the study’s first author, Stanford psychology postdoctoral fellow Bria Long, asking children to draw certain animals or objects. After receiving the prompt, children using the kiosk would then have 30 seconds to draw the object using their fingertip on a digital tablet. Children using the kiosk were also asked to identify the objects drawn by other children in a guessing game, and to trace objects shown on the screen to assess their motor skills.

Examples of correctly classified drawings from each of the 48 categories presented at the experiment station in alphabetical order: airplane, apple, bear, bed, bee, bike, bird, boat, book, bottle, bowl, cactus, (2nd row): camel, car, cat, chair, clock, couch, cow, cup, dog, elephant, face, fish, (3rd row): frog, hand, hat, horse, house, ice cream, key, lamp, mushroom, octopus, person, phone, (4th row): piano, rabbit, scissors, sheep, snail, spider, tiger, train, tree, TV, watch, whale. (Image credit: Long, B., Fan, J.E., Huey, H. et al. Parallel developmental changes in children’s production and recognition of line drawings of visual concepts. Nat Commun 15, 1191 (2024). CC BY 4.0 DEED)

After collecting around 37,000 individual drawings from the kiosk, the researchers used machine learning algorithms to analyze each drawing’s recognizability. Then, the researchers collected data on the distinct object parts of each image in around 2,000 of the drawings, annotated by adult participants who were asked to describe what part of the object the children had drawn with each pen stroke (e.g., “head” or “tail”).
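The article does not detail the study's actual pipeline. Purely as a toy illustration, a drawing's recognizability can be thought of as how decisively a classifier assigns the drawing's features to the intended category. The sketch below uses invented two-dimensional features, invented category centroids, and a simple nearest-centroid rule; none of these are from the study.

```python
# Toy sketch (not the study's pipeline): rate a drawing's "recognizability" by
# how decisively a nearest-centroid classifier places its feature vector. The
# 2-D features and the three category centroids below are invented.
def classify(features, centroids):
    """Return (best_category, margin), where margin is the distance gap between
    the second-closest and closest centroids; a larger margin means the drawing
    is more unambiguously recognizable as its best category."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(centroids, key=lambda c: dist(features, centroids[c]))
    best, runner_up = ranked[0], ranked[1]
    return best, dist(features, centroids[runner_up]) - dist(features, centroids[best])

centroids = {"rabbit": (0.9, 0.1), "dog": (0.4, 0.6), "cat": (0.3, 0.9)}
label, margin = classify((0.8, 0.2), centroids)  # a drawing near the "rabbit" centroid
```

Under this framing, older children's drawings would land farther from competing centroids, yielding larger margins even when no single stereotypical feature explains the gain.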

“Scientists have been interested in children’s drawings for quite a long time,” said Long, referencing past studies on how children draw recognizable objects. “But this is the first time that we have been able to combine digital drawings with innovations in machine learning to analyze drawings at scale over development.”

The researchers hope that future work in this area will include similar studies across different cultural groups, in both children and adults.

Drawing conclusions

This large-scale work adds robust support to previous findings that as children grow up, their ability to both recognize and draw animals and objects increases. The fact that the analysis assessed such a sizable set of drawings allowed the researchers to infer more nuanced conclusions than past studies, where far fewer drawings were analyzed by humans.

Although the recognizability of the drawings increased with age, the researchers found that the increase wasn’t completely explained by improvements in motor control. Even trademark features that children learn to recognize and include in their drawings over time, such as eight legs on a spider, did not fully explain the increase. This suggests that children’s improvement over time reflects not just what they directly observe or are able to produce, but also a change in how they think about objects.

“Children’s drawings contain a lot of rich information about what they know. … Just because your child isn’t drawing something really well doesn’t mean that they’re not expressing interesting knowledge about that category.” —Bria Long Postdoctoral fellow in psychology

“Children’s drawings reflect not just their ability to draw, but something about what they know about these objects,” said Long. “And you see these changes both in their ability to produce these drawings and also to recognize other children’s drawings.”

According to the researchers, even drawings that are unrecognizable can convey clues about the child’s intent. For instance, a drawing of a tiger may not be recognizable as a tiger, but is still clearly an animal. Children were also able to convey information about the real-world size of the drawing’s subject, even if the drawing itself was otherwise mysterious.

“Children’s drawings contain a lot of rich information about what they know. And we think this is a really cool way to learn about what children are thinking,” said Long. “Just because your child isn’t drawing something really well doesn’t mean that they’re not expressing interesting knowledge about that category.”

Senior author Michael Frank is the Benjamin Scott Crocker Professor in Human Biology, professor of psychology, and professor, by courtesy, of linguistics in the School of Humanities and Sciences. Frank is also a member of Stanford Bio-X, the Maternal & Child Health Research Institute (MCHRI), and the Wu Tsai Neurosciences Institute; a faculty affiliate of the Institute for Human-Centered Artificial Intelligence (HAI); and director of the Symbolic Systems Program.

This work was funded by the National Science Foundation, the National Institutes of Health, and a Jacobs Foundation Fellowship.

Media Contacts

Taylor Kubota, University Communications: [email protected]

Some Dinosaurs Evolved to Be Warm-Blooded 180 Million Years Ago, Study Suggests

Researchers studied the geographic distribution of dinosaurs to draw conclusions about whether they could regulate their internal temperatures

Will Sullivan

Daily Correspondent

A dinosaur with feathers in the snow

Two major groups of dinosaurs may have been warm-blooded (having evolved the ability to regulate their body temperatures) around 180 million years ago, according to a new study.

Scientists used to think that all dinosaurs were cold-blooded, meaning that, like modern lizards, their body temperatures were dependent on their surroundings. While scientists have since discovered that some dinosaurs were actually warm-blooded, they haven’t been able to pinpoint when this adaptation evolved, according to a statement from University College London.

The new findings suggest that theropods, a group of mostly carnivorous dinosaurs including Tyrannosaurus rex and Velociraptor, as well as the ornithischians, which include the mostly plant-eating relatives of Stegosaurus and Triceratops, may both have developed warm-bloodedness in the early Jurassic Period. This change might have been prompted by global warming that followed volcanic eruptions, according to the results published Wednesday in the journal Current Biology.

The study is the “first real attempt to quantify broad patterns that some of us had thought about previously,” Anthony Fiorillo, executive director of the New Mexico Museum of Natural History & Science, who was not involved in the work, tells CNN’s Katie Hunt. “Their modeling helps create a robustness to our biogeographical understanding of dinosaurs and their related physiology.”

Warm-blooded animals, which include mammals and birds, use energy from food to maintain a constant body temperature. Their bodies can shiver to generate heat, and they may sweat, pant or dilate their blood vessels to cool off. As a result, these animals can live in a wide range of environments.

On the other hand, cold-blooded creatures must move to different environments to control their body temperature. They might lie in the sun to warm up and move under a rock or into the water to cool off.

Previous work had uncovered evidence of warm-bloodedness in both theropods and ornithischians, such as feathers that trap body heat, according to the university’s statement. In the new study, the researchers studied the geographic spread of dinosaurs during the Mesozoic Era, which lasted from 230 million to 66 million years ago, by examining 1,000 fossils, climate models and dinosaur evolutionary trees.

They found that theropods and ornithischians lived in wide-ranging climates, and during the early Jurassic, these two groups migrated to colder areas. This suggested they had developed the ability to generate their own heat.

“If something is capable of living in the Arctic, or very cold regions, it must have some way of heating up,” Alfio Alessandro Chiarenza, a co-author of the study and a paleontologist at University College London, tells Adithi Ramakrishnan of the Associated Press (AP).

Long-necked sauropods, on the other hand, which include the Brontosaurus, seemed restricted to areas with higher temperatures. The team suggests this means sauropods could have been cold-blooded.

“It reconciles well with what we imagine about their ecology,” Chiarenza says to CNN. “They were the biggest terrestrial animals that ever lived. They probably would have overheated if they were hot-blooded.”

At around the same time, volcanic eruptions led to global warming and the extinction of some plant species.

“The adoption of endothermy, perhaps a result of this environmental crisis, may have enabled theropods and ornithischians to thrive in colder environments, allowing them to be highly active and sustain activity over longer periods, to develop and grow faster and produce more offspring,” Chiarenza says in the statement.

Jasmina Wiemann, a paleobiologist at the Field Museum of Natural History who was not involved in the new research, published a study in 2022 that came to a different conclusion: Based on oxygen intake in dinosaur fossils, she found that ornithischians were more likely cold-blooded, while sauropods were more likely warm-blooded.

She tells the AP that considering information on dinosaurs’ body temperatures and diets, not just their geographic distribution, can help scientists understand when dinosaurs evolved to be warm-blooded.



Will Sullivan is a science writer based in Washington, D.C. His work has appeared in Inside Science and NOVA Next .

Epistemic injustice, healthcare disparities and the missing pipeline: reflections on the exclusion of disabled scholars from health research

  • Joanne Hunt 1 (http://orcid.org/0000-0003-3868-5765)
  • Charlotte Blease 1, 2 (http://orcid.org/0000-0002-0205-1165)
  • 1 Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
  • 2 Digital Psychiatry, Department of Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA
  • Correspondence to Joanne Hunt, Department of Women's and Children's Health, Uppsala University, Uppsala 751 05, Sweden; joanne.hunt@uu.se

People with disabilities are subject to multiple forms of health-related and wider social disparities; carefully focused research is required to inform more inclusive, safe and effective healthcare practice and policy. Through lived experience, disabled people are well positioned to identify and persistently pursue problems and opportunities within existing health provisions that may be overlooked by a largely non-disabled research community. Thus, the academy can play an important role in shining a light on the perspectives and insights from within the disability community, and combined with policy decisions, these perspectives and insights have a better opportunity to become integrated into the fabric of public life, within healthcare and beyond. However, despite the potential benefits that could be yielded by greater inclusivity, in this paper we describe barriers within the UK academy confronting disabled people who wish to embark on health research. We do this by drawing on published findings, and via the lived experience of the first author, who has struggled for over 3 years to find an accessible PhD programme as a person with energy limiting conditions who is largely confined to the home in the UK. First, we situate the discussion in the wider perspective of epistemic injustice in health research. Second, we consider evidence of epistemic injustice among disabled researchers, focusing primarily on what philosophers Kidd and Carel (2017, p 184) describe as ‘strategies of exclusion’. Third, we offer recommendations for overcoming these barriers to improve the pipeline of researchers with disabilities in the academy.

  • Disabled Persons
  • Quality of Health Care

Data availability statement

Data sharing not applicable as no datasets generated and/or analysed for this study.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/jme-2023-109837


Introduction

People with disabilities have been described as an ‘unrecognized health disparity population’. 1 Health disparity (or health inequity) is understood as an avoidable and unjust difference in health or healthcare outcomes experienced by social, geographical or demographic groups with a history of socioeconomic, political or cultural discrimination and exclusion. 1 2 Despite the passage of landmark disability legislation, including the UK Equality Act 2010, the US Americans with Disabilities Act 1990 and the United Nations Convention on the Rights of Persons with Disabilities (adopted in 2006), disability-related health and healthcare disparities persist. Disabled people report lower levels of well-being on average compared with non-disabled people, are at increased risk of physical and mental comorbidity and are more likely to die younger. 1–3 There are multiple reasons as to why health disparities persist along the lines of disability; however, prejudicial biases, engendering structural barriers to care, play a critical part. For example, recently, the WHO 2 reported that people with disabilities are significantly more likely to perceive discrimination and stigma in healthcare contexts compared with non-disabled people. This is supported by a wealth of literature from across the world revealing institutional, physical and attitudinal healthcare barriers for disabled people, including medical professionals’ ambivalence or lack of understanding towards disability, lack of confidence vis-à-vis providing quality care and physically inaccessible clinics and clinical equipment. 4–7

Health and healthcare-related disparities also intersect with broader social disparities. For example, people with disabilities are less likely to be employed and earn less when they are in work, despite the fact that disability incurs higher living costs. 2 In the UK, government data from 2021 reveal a disability employment gap of 28%, 8 with a disability pay gap of 14%. 9 Recent figures from the US Bureau of Labor Statistics 10 indicate that the unemployment rate among disabled people is over twice the rate for non-disabled people, with similar trends across other countries. 2 Perhaps unsurprisingly, disabled people are also more likely to live in poverty than their non-disabled counterparts. 2 11 Compounding matters is structural disablism: discrimination and stigma (woven into collective attitudes, organisational policies, legislation and infrastructure) that often go unnoticed by non-disabled people but can take a serious toll on individuals living with disabilities. In 2023, the UK’s Office for National Statistics reported that the suicide rate was higher among people with disabilities than any other demographic group. 12

To better understand and address such disparities, carefully focused research is needed. 2 In this regard, people with lived experience of chronic illness and disability can offer unique insights that can strengthen and help drive richer research, where disabled people are positioned equally as co-researchers, as opposed to the traditional dynamic of disabled ‘research subject’ to be passively studied. Through first-hand experience, via experiential or standpoint epistemology, 13 disabled researchers are often well positioned to understand how health-related policies and practices (informed through largely non-disabled research communities) may unwittingly harm or otherwise disadvantage disabled persons. 14 Researchers with disabilities may also be more motivated and well placed to perceive knowledge gaps, and to pose penetrating and uncomfortable questions necessary to galvanise change. Embracing viewpoint diversity, and the input of disabled researchers, could therefore represent a powerful pathway to improve understanding and to develop more inclusive health and healthcare policy and practice.

The history of the disabled people’s movement within the UK, 15–17 whereby disabled scholar-activists entered the academy and contributed to profound changes in social practice and policy, constitutes an exemplar of the potential value of viewpoint diversity and disability standpoint, the legacy of which continues today, most notably within disability studies, but also more widely within critical social sciences and humanities. 18–20 However, within health sciences—particularly those tightly tied to science, technology, engineering and mathematics (STEM)—there appear to be greater barriers to including disabled scholars and integrating disabled knowledges. 21–23 For example, research shows that the percentage of people with a declared disability is lower in STEM subjects relative to non-STEM subjects at first degree, postgraduate level and within the academic workforce. 22 Moreover, a 2020 data analysis brief from the UK All-Party Parliamentary Group on Diversity and Inclusion in STEM 23 reported that the UK STEM workforce had a lower representation of disabled people relative to the rest of the UK workforce (11% vs 14%). Here, it is noteworthy that the analysis used the wider definition of STEM, that of ‘STEM(H)’ which specifically includes health and related fields. 23 Such exclusions are further compounded by intersectionality, the intersection and co-constitution of multiple forms of social (dis)advantage. 24 Indeed, the intersection of disability with other minoritised identities 19 21 23 is yet another reason to promote disability inclusion within the academy and beyond.

Despite the potential benefits that could be yielded by greater inclusivity, in this paper we describe barriers within the UK academy confronting disabled people who wish to embark on health research. We do this by drawing on published findings, and via the lived experience of the first author (hereafter, ‘JH’) who has struggled for over 3 years to find an accessible PhD programme in the UK as a person with ‘energy limiting conditions’ (ELC) 25 26 who is largely confined to the home. First, we situate the discussion in the wider perspective of epistemic injustice in health research. Second, we consider evidence of epistemic injustice among disabled researchers, in particular those with ELC, by situating this in the legal context in the UK, and by detailing the nature of barriers experienced. Third, we offer recommendations for overcoming these barriers in the academy.

A note on nomenclature: we recognise that person-first language (‘people with disabilities’) is the globally prevalent form. 18 As a self-identifying disabled person broadly ascribing to the British social model of disability, 16 17 JH tends towards identity-first language (‘disabled people’). Therefore, while recognising the semantic and ideological divergences embedded within different forms of disability-related language, 18 we have chosen to adopt both forms in this paper to reflect our case for viewpoint diversity.

Additionally, while recognising the heterogeneity of disability and disability-related exclusions, 19 we focus on ELC: health conditions that share energy impairment as a key experience and substrate of disability discrimination or disablism.

ELC include but are not limited to ‘medically unexplained’ or contested conditions such as myalgic encephalomyelitis/chronic fatigue syndrome, alongside ‘rare’ conditions such as Ehlers-Danlos syndromes. 25 26 Since ELC do not conform to socially prevalent (fixed, non-fluctuating, easily identifiable) stereotypes of disability, disablism largely manifests as clinical and social disbelief, resulting in ELC being poorly recognised and poorly researched through the lens of disability rights and diversity, equity and inclusion (DEI). 25 26 Equally, while we focus on exclusions within the academic space, it is important to note that people with ELC (and wider disabled communities) are subject to marginalisation and exclusion in all social arenas, including education, employment and the healthcare system itself. 25–28 Moreover, measures to improve physical inclusion (such as wheelchair-accessible environments) are oftentimes ineffective or insufficient among people with ELC who are confined to the home, thus furthering marginalisation of this group. In this respect, we recognise that people diagnosed with mental health conditions (notably but not limited to agoraphobia or social anxiety) may be confined to the home and are subject to similar dynamics of disability-related disbelief and associated exclusions as evidenced in the ELC arena. 29–31 Therefore, while we focus on ELC, the following discussion and recommendations for academic inclusion may benefit others with ‘hidden’ or poorly recognised health conditions.

The importance of ELC-specific research is arguably amplified by the emergence of long COVID, another condition that sits well within the ELC umbrella. 26 The concept of ELC arose from research led by disabled people within and outside of the UK academy 25 26 and thus represents an example of the potential value of ‘disability standpoint’ in contributing to health and healthcare-related research gaps. Nevertheless, there is very little peer-reviewed academic literature explicitly focusing on ELC (for recent exceptions see ref 32–34 ). To our knowledge, and motivating this paper, there is no research exploring academic exclusions in the ELC arena through a lens of epistemic injustice.

Epistemic injustice

Epistemic injustice refers to a variety of wrongs perpetrated against individuals in their capacity as a knower or contributor to knowledge. According to philosopher Miranda Fricker, 35 it takes two forms: testimonial injustice and hermeneutic injustice. The former arises when an individual is unfairly discriminated against with respect to their capacity to know or contribute to knowledge. This form of injustice often arises because of negative stereotypes about a demographic group. For example, in the case of disability, testimonial injustice may take the form of global, unjustified prejudices about the intellectual or bodily capacity of disabled individuals to contribute to knowledge. Disabled people may, for example, be seen as lacking the stamina, strength, reliability or acuity to offer useful insights. Philosophers of medicine Ian Kidd and Havi Carel 36 sum it up as a ‘pre-emptive derogation of the epistemic credibility and capacities of ill persons’ that involves ‘a prior view, for instance, of ill persons being confused, incapable or incompetent, that distorts an evaluation of their actual epistemic performance’. Testimonial injustice can take the form of implicit or explicit discrimination on the part of the hearer, leading to an outright dismissal or discrediting of the contribution of individuals to discussions in which they might otherwise offer valuable insights.

As others have argued, many people with disabilities may have acquired valuable knowledge about their condition through lived experience that renders them experts on aspects of their illness, the nature of health services and the quality of provider care. 27 37 38 Notwithstanding, it is also important to clarify that living with an illness need not automatically afford epistemic privilege. Rather, the point is that a finer awareness is needed to move past unhelpful stereotyping and to appreciate the contributions to knowledge that individuals may make, with a view to avoiding global or unwarranted assumptions about the credibility of individuals’ contributions to knowledge formation activities.

Hermeneutic injustice represents a wrong which Fricker describes as the set of structural and social problems that arise because ‘both speaker and hearer are labouring with the same inadequate tools’. 35 This form of injustice arises when individuals are precluded from accessing, or can only partially access, resources that could improve understanding about their experiences. Because of this asymmetry, those with unequal access to resources can suffer additional disadvantages that serve to further undermine their status and impede understanding about their condition. Kidd and Carel describe two kinds of means—which they dub ‘strategies’—by which hermeneutic injustice can be explicitly or implicitly perpetuated. 39 The first includes a range of structural barriers to participation in practices whereby knowledge is formed. Kidd and Carel argue that these can encompass physical barriers and subtler exclusions such as employing specific terminologies and conventions that serve to exclude the participation of disadvantaged people who might otherwise usefully contribute to knowledge. 39 A related, second strategy of exclusion, they argue, is the downgrading of certain forms of expression (such as first-person experiences, affective styles of presentation or vernacular) as evidence of the diminished credibility of the marginalised group. This demotion, Kidd and Carel contend, serves to further frustrate the efforts of the disadvantaged individual to participate, compounding ‘epistemic disenfranchisement’. 39 In this way, hermeneutic injustice can lead to a vicious, self-perpetuating cycle of testimonial injustice.

In what follows, we focus primarily on evidence of hermeneutic injustice, including strategies of exclusion among disabled researchers with ELC, who are largely or completely confined to the home and who seek to contribute to knowledge formation activities within the UK academy. Before we delve into the evidence, however, we offer some contextual caveats. First, it is important to offer some legal context with respect to disability rights. On the most charitable analysis, we acknowledge that not every individual who is disabled can expect to participate in every research context. For example, some barriers—such as the design or location of laboratories—might preclude full participation among some disabled researchers even with significant adaptations. Our aim then is to examine forms of epistemic injustice that pertain to ‘reasonable adjustments’, a legal term that we will unpack. Since our focus is on barriers to people with disabilities in British universities, we focus on UK legislation; however, what we have to say doubtlessly applies to other countries and regions.

Evidence of epistemic injustice among disabled researchers

Background on UK disability legislation

Under Section 20 of the UK Equality Act 2010, higher education providers in England, Scotland and Wales are legally bound to provide ‘reasonable adjustments’ for people with disabilities who require them. 40 Section 6 of the Act defines disability as the experience of an impairment that has a ‘substantial’, long-term adverse impact on a person’s ability to engage in daily activities. Section 20 clarifies that the duty to make reasonable adjustments exists where any provisions or criteria offered or required by education providers place disabled people at a ‘substantial’ disadvantage relative to non-disabled people. 40

Health scholars have identified vagueness and therefore ambiguities in how qualifiers such as ‘substantial’ and ‘reasonable’ are interpreted. 41 Moreover, it has been contended that ‘reasonable adjustments’ rely on a non-disabled and potentially ableist perspective of what is reasonable, while also placing the burden to prove eligibility for adjustments onto disabled people, thus individualising the structural problem of normalised discrimination. 42 As previously outlined, ELC are poorly recognised as forms of disability, and research demonstrates that people living with diagnoses that can be positioned as ELC struggle to gain the recognition necessary to obtain reasonable adjustments. 32–34 43 Section 19 of the Equality Act 2010 explains that indirect discrimination occurs when one party applies a provision, criterion or practice that puts a person with a protected characteristic (such as disability) at a substantial disadvantage when compared with people without that protected characteristic. 40 44

The Equality Act allows for scenarios where discrimination may be justified (known as ‘objective justification’) in cases where providers can demonstrate that their policies or provisions constitute ‘a proportionate means of achieving a legitimate aim’. 40 Among the considerations about what might constitute a proportionate means are the size of the organisation and the practicalities and costs involved. 44 However, these are seldom explicitly articulated as a justification for the status quo, and the resulting ambiguities (which ultimately can only be resolved by tribunal or court) mean, as we discuss below, that disability discrimination may inadvertently become normalised.

Evidence of strategies of exclusion

Despite an ostensible increase in DEI policies within the academy, 45 46 there exists considerable literature demonstrating experiences of physical and attitudinal barriers to participation in academic research among disabled students and academics, including those with diagnoses that sit within the ELC umbrella. 29 31–34 43 46 There is also evidence that disability-related inequities in higher education persist in terms of degree completion, degree attainment and progression onto skilled employment or postgraduate study, within and beyond STEM. 21 22 47 48 The experience of JH is that such disparities are deeply entwined with physical and attitudinal barriers to full epistemic participation within the academy. Drawing on research findings and situating these against the lived experience of JH, we now explore evidence of strategies of exclusion for disabled researchers that, we argue, could contribute to epistemic injustice.

Studies that reveal barriers to academic participation, among people with ELC and disabled people more broadly, focus on two principal scenarios: (1) experiences of higher education students who can attend ‘on campus’ but require accommodations, 29 33 43 and (2) experiences of academics (from PhD study level upwards) navigating workplace barriers pertaining to reasonable adjustments, employment and career progression opportunities. 31 34 46 49 Where these barriers occur, we suggest they point to evidence of hermeneutic injustice that may also be underpinned by testimonial injustice. Indeed, chief among themes across such literature is that of ableism, understood as ‘a cultural imaginary and social order centred around the idealised able-bodied and -minded citizen who is self-sufficient, self-governing and autonomous’ 50 ; this ‘social order’ is founded on global prejudices about disabled bodies and minds. 50 Academic ableism is reported as manifesting through, inter alia, a lack of accessible buildings and equipment, institutional inability or unwillingness to facilitate disability-related accommodations, and lack of familiarity (or consensus) among faculty and non-academic staff as to what constitutes disability-specific DEI practice and policy. 31 43 45 46 Additionally, increasing literature probes the creeping neoliberalisation of academia, which is contended to intersect with and perpetuate ableism, most notably through institutional normalisation of competition and hyperproductivity as a reflection of ‘excellence’. 31 46 Relatedly, and notably among students or academics with health conditions that can be positioned as ELC, the question of whether or how to disclose disability, and the implications of (non)disclosure, is receiving critical attention. 21 29 31 33 34 43

Furthermore, as previously outlined, scarce attention has been paid to ELC explicitly, especially among people with ELC who are largely or completely confined to the home, yet may wish to continue within or enter academic spaces and thus require remote access. JH’s experience is that some of these people are not only marginalised within the academy but may be excluded from accessing it altogether. This, it would appear, is owing to a failure of institutions to facilitate remote access programmes. Here again, to understand how strategies of exclusion operate, we must turn to legal considerations. In terms of what might be considered ‘reasonable’, the willingness of research institutes to extend remote access to students and faculty during successive lockdowns owing to the SARS-CoV-2 pandemic 31 51 52 suggests that failure to extend such accommodations to disabled people who depend on them, and especially where research can be conducted from home, would be difficult to justify.

Yet, such remote access tends to be considered at best an ‘adjustment’ to preferred or ‘normal’ (non-disabled) practice, and provision appears to be patchy and poorly signposted; lack of clarity over which research institutes offer remote delivery programmes may thus constitute the initial hurdle. Some universities appear to offer remote PhDs within some disciplines but not within others, and the exclusions do not appear to be related to pragmatics such as requiring laboratory access. For example, according to JH’s enquiries and the information received, one UK research institute and member of the Russell Group (which represents the UK’s leading research-intensive institutions) offered distance learning PhD programmes in 2021 and 2022 within psychology, but not within sociology. For added context, JH’s research interests are interdisciplinary but primarily straddle disability studies (typically sited within academic schools of sociology and faculties of social sciences) and psychology. This is with a view to researching disability-affirmative, socioculturally and politically cognisant approaches to psychotherapy practice and policy. However, in academic fora, psychology and psychotherapy (often aligned with health sciences faculties) foreground heavily medicalised understandings of disability, and JH’s experience has been that psychology departments have not been open-minded or welcoming vis-à-vis the prospect of integrating sociocultural and political perspectives, as per disability studies. In practice, this has meant that JH’s endeavours to find an accessible PhD have been limited to the purview of sociology. These disciplinary exclusions arguably represent the legacy of the reluctance of psychology, wider health sciences and life sciences to embrace disability in all its diversity. 21–23 50

In response to an enquiry as to why the above institution did not offer remote access PhDs in disability studies/sociology, the postgraduate admissions team informed JH: ‘All our PhD students undertake mandatory units which are only delivered in person’ (email, 10 February 2022). It is unclear how these mandatory units differ from units offered on remote access programmes. Indeed, a recurring motif throughout JH’s enquiries across various UK institutions is that further probing about potentially exclusionary policies results in ambiguous responses, or no response at all. Reasons for lack of remote access offered by other institutions included a mandatory requirement for direct (on-campus) contact with the PhD supervisor or the need to participate in onboarding sessions face to face on campus. However, no justification for why this was necessary was offered.

Again, it might be expected that institutional willingness to provide remote access during lockdowns would serve as a precedent for remote access to become the norm rather than the exception. 46 However, in response to JH challenging lack of remote access provision on these grounds, the reply from the admissions team at another Russell Group university was as follows:

While during the last year some teaching and supervision has taken place online this is a temporary measure and not part of a formal distance learning course. Some supervision and teaching is also now taking place back on campus in person again. All ‘on campus’ programmes are subject to government mandated attendance requirements. (email, 28 January 2022)

When JH requested more details regarding these government-mandated attendance requirements, the admissions team stated that the enquiry would be passed on to another point of contact. Over 2 years later, no further details have been forthcoming. Ad hoc adjustments pertaining to remote delivery might be possible at some institutions, but it seems conceivable that these may be dependent on the supervisor’s individual preferences rather than policy, perhaps permitting prejudicial judgements about disability to interfere with decision-making.

Furthermore, for those fortunate enough to find a supervisor willing to ‘accommodate’ them, additional strategies of exclusion arise pertaining to funding via doctoral training programme (DTP) and research council consortia. For example, a representative of the UK White Rose social sciences DTP 53 (covering seven UK higher education institutions in Northern England) informed JH that, in accordance with Economic and Social Research Council (ESRC) policy, disabled students confined to the home are not eligible to be considered for funding. Further digging revealed that this policy is not limited to the White Rose DTP; for example, the UK Midlands Graduate School DTP, 54 covering a further eight UK higher education institutions, lists the same exclusion criteria on its website at the time of writing. When JH challenged the White Rose DTP’s policy on grounds of (dis)ableism, a representative forwarded the following response from the ESRC:

UKRI [UK Research and Innovation, non-departmental body of the UK government responsible for funding research] terms and conditions confirm that UKRI funded students must live within a reasonable travel time of their Research Organisation (RO) or collaborative organisation to ensure that they are able to maintain regular contact with their department and their supervisor. This should also ensure that the student receives the full support, mentoring, access to a broad range of training and skill development activities available at their RO, as well as access to the resources and facilities required to complete their research successfully and to a high standard. Our expectation also reflects that we want to avoid students studying in isolation […] (email, 15 December 2022)

In light of the considerable evidence that scholars across many disciplines can work remotely, the assumption that disabled people cannot research to a ‘high standard’ while confined to the home is problematic. Additionally, the reasoning around avoiding isolation, while likely well intended, does not hold much weight from JH’s standpoint. Many disabled people frequently experience significant physical and emotional isolation through navigating a (dis)ableist society and develop numerous strategies (including use of remote access technology) to mitigate this; in this respect, they may even be considered ‘experts by experience’ in resiliently striving to manage isolation. 51 55 56 Social media, for example, is used by many disabled people to connect with others, share ideas on managing health conditions and disability discrimination and develop collective advocacy and activism initiatives. 55 Refusing to offer remote access on the (partial) grounds that disabled people may not be able to cope with the ensuing isolation risks infantilising people with disabilities, and withholds one of the very tools that can facilitate inclusion and thus counter isolation.

Moreover, literature suggests that being on campus does not necessarily prevent disabled people from experiencing or overcoming isolation, notably emotional isolation or alienation arising from lack of accommodations and thus feeling ‘unwelcome’ or ‘less than’. 33 46 The ESRC’s reasoning would therefore appear to arise from a non-disabled perspective (or at least, a perspective not attuned to certain facets of disability culture). Funding-related barriers are aggravated by the general lack of other funding opportunities for disabled students. For example, while scholarships for other under-represented groups are justly offered across many institutions, 57–59 often with emphasis on recruiting traditionally marginalised candidates, similar much-needed initiatives for people disadvantaged through disability are conspicuously absent. This is particularly important to address since disability and economic disadvantage are entwined in a complex manner, 2 11 and because, as previously noted, disability intersects with other forms of social (dis)advantage. 19 21 24 28

It is worth emphasising that the exclusionary practices pertaining to health-related research, as discussed here, may be more pervasive and entrenched than we have presented. Discussing the impact of academic ableism, Brown 46 notes that disability disclosure rates, though on the increase in undergraduate admissions, drop between undergraduate study and academic employment. Brown identifies two factors that might explain this: (a) disabled academics may avoid disclosure for fear that declaring disability would impede their career, and (b) disabled students may simply drop out of the academy. As the foregoing demonstrates, JH’s experience suggests that the second factor may be entwined with disabled students being excluded from the academy because they cannot meet ‘on campus’ attendance requirements. It is currently unknown how many fledgling academics with disabilities have been excluded from the academy owing to discriminatory policies and academic culture, but it seems likely that JH’s case is not exceptional. Recent research reports that some disabled faculty are being refused remote working arrangements as lockdown accommodations begin to revert to ‘normal’ practice. 60 For disabled researchers in perpetual lockdown, such refusals might result in experiences such as those detailed here remaining unknown and thus unaddressed.

In summary, where a ‘leaky pipeline’ exists vis-à-vis academic representation of some historically oppressed groups, 61 62 it appears that no pipeline exists at all for a subgroup of disabled people who cannot leave their homes due to a combination of body/mind restrictions and lack of social provisions such as healthcare. Yet the disadvantages created by refusing remote access accommodations to scholars with disabilities who are confined to the home are certainly substantial. Beyond the potential loss to collective wisdom, the hermeneutical injustice perpetuated by barriers to education and employment among disabled people results in what Kidd and Carel describe as a ‘double injury’, 39 since it leads to significant ramifications for the psychosocial well-being and financial security of those excluded.

Conclusions and recommendations

Despite an ostensible increase in commitment to DEI policy and practice, the academy is far from an inclusive space for disabled people. In the case of disabled people who are unable to leave the home, we might better speak of outright exclusion as opposed to marginalisation. The above discussion has demonstrated that various strategies operate within the academy that serve to exclude some people with disabilities ‘from the practices and places where social meanings are made and legitimated’. 39 Such exclusions risk further marginalising an already hermeneutically marginalised group, with concomitant psychosocial, occupational and financial harms. Additionally, these exclusions incur a loss of collective wisdom that adversely impacts the development of inclusive, safe and effective healthcare practice and policy.

Although we urge the importance of universities facilitating remote access for disabled scholars, we add a note of caution. First, a remote access academy should be offered as a complement to, rather than an alternative to, ensuring accessibility of academic buildings and equipment, or otherwise supporting disabled people to attend on campus. This is especially important since we also acknowledge that remote access is not a solution for all disabled people. 52 63 Of note, while remote access can be understood as an assistive technology that helps support the health, well-being and social inclusion of people with disabilities, 2 the digital divide means that disabled people are also less likely to be able to access this technology compared with their non-disabled counterparts. Such marginalisation is owing to a lack of devices or broadband connectivity, or reduced digital literacy, underpinned by financial, social and educational disparities as already discussed. 1 2 63 Our promotion of remote access as an inclusivity tool does not negate the need to address this divide. Nevertheless, recent research has shown that a leading UK online education provider (University of Derby) has three times as many disabled students as the national average, 30 suggesting that remote delivery of academic programmes can be a significant facilitator of DEI. We therefore conclude by offering recommendations with a view to building on such strategies of inclusion.

Given the lack of familiarity vis-à-vis disability-specific DEI practice and policy, as reported in the literature 31 45 46 and as experienced by JH, our first recommendation is for formalised disability equality training and education initiatives that specifically take account of people with ELC and those confined to the home. Since reports exist of such training reinforcing disability-related stereotyping, 31 there should be greater emphasis on co-producing such resources with people with disabilities, including those confined to the home, who are often excluded from public policy-making. Such initiatives, which could also beneficially target personnel involved in research councils and DTPs, should address implicit personal and organisational biases, facilitate understanding of how current policies and practices perpetuate (dis)ableism and promote a proactive approach to equity and inclusion, specifically in the case of people confined to the home. Disabled researchers and disability studies scholars have argued that an institutional culture change is necessary to move beyond perfunctory engagement in, or basic legal compliance with, DEI initiatives; a foregrounding of the social model of disability and universal design principles has thus been proposed in developing DEI policy and practice. 29 31 46 The social model upends academically prevalent (individualistic) representations of disability and reasonable adjustments by placing the onus for change on social structures and institutions, as opposed to the people who are discriminated against. 16 17 In the case of ELC, we suggest that the social structures requiring greatest change to facilitate inclusion are attitudinal contexts, most notably disbelief. 24 25 In complement to the social model, the application of universal design tenets to academic contexts, which involves building ‘accommodations’ into academic standards and managing disability-related diversity proactively as opposed to reactively, 29 31 46 should be extended to remote access. In practice, this means reducing the likelihood that disabled people have to ask and prove eligibility for reasonable adjustments. 42

Second, we recommend greater institutional transparency, including clear guidance for researchers with disabilities, vis-à-vis remote working policies. For many research and study programmes, online library access, supervision and other meetings represent acceptable accommodations, if not candidates for integration into academic standards as a complement to on-campus delivery. Such accommodations should be clearly signposted and, where remote working is not possible or government mandates apply, both transparency and strong justifications are required. In this regard, an institution outside the UK has set a precedent. Uppsala University in Sweden has welcomed JH as a research affiliate in the Department of Women’s and Children’s Health, operating entirely via remote access. This approach, which embraces remote working as if it were standard practice (as per universal design principles), is invaluable in challenging the prevalent yet exclusionary academic notion of dominant (on-campus) practice and policy as ‘normal’ and ‘ability neutral’. It thus serves as an exemplar of disability-related best practice for UK institutions.

Third, the current funding system requires considerable revision to better include people with disabilities who are confined to the home. In cases where research projects can be conducted remotely, there is surely no justification for excluding this group of disabled people from eligibility to apply for grants and PhD stipends. As per our above recommendations for remote accommodations, information on funding eligibility should be easily accessible, with a strong and transparent rationale for any exclusions. Additionally, existing initiatives to ring-fence funding for researchers from minoritised groups to study health-related inequities 64 should be extended to include disabled people. Without such measures, much-needed research might never be conducted. This article, which has arisen from disability standpoint and from both disability and academic allyship, has indicated a considerable research gap pertaining to how disabled students and academics confined to the home experience barriers to health-related research. With a view to addressing this research gap with the added value of disability standpoint, funding opportunities must facilitate the inclusion of disabled researchers. Yet, while some under-represented groups are supported through funding-related DEI schemes, 64 disability is often overlooked.

Finally, we recommend a more formalised and universally applied academic DEI monitoring and ombudsman scheme, both to assess DEI-related shortcomings and to support minoritised researchers in raising concerns. Disabled scholars have suggested using the Disability Standard (a form of benchmarking used in business to assess inclusivity and accessibility) to analyse gaps in disability-related DEI practice and policy, 31 but practical application across UK universities appears very limited. Existing schemes to promote DEI within the education sector should ensure that disability, including disabled people confined to the home, is represented, and should consider how institutional compliance can be secured. ‘Advance HE’ is a UK non-governmental body that promotes excellence in higher education, an objective the body acknowledges as entwined with DEI. 65 While DEI ‘international charters’ pertaining to gender and race exist with a view to encouraging providers to commit to inclusion of under-represented groups, 65 an equivalent charter specifically for disability does not. Here again, we recognise that different forms of discrimination intersect and that race and gender shape disability. 2 21 28 Moreover, while we do not mean to overlook recent efforts by Advance HE and other bodies to include disability in DEI initiatives, 66 the voluntary nature of many of these initiatives (which ‘encourage’ higher education institutions to address disability-related DEI more fully) will likely allow the inequitable status quo to persist. Seeking to ground a collective institutional commitment to disability inclusion within legislation, or at the very least within a transparent ‘award’ system as with DEI initiatives pertaining to other under-represented groups, 65 would likely lend more gravitas to such schemes and ‘nudge’ research institutes towards greater accountability.

In summary, insights from scholars with disabilities can help to inform more inclusive, safe and effective health-related interventions, with further benefits for social inclusion. Current academic structures deny opportunities to the very people who are well placed to identify and research the most overlooked problems in our health systems. If we truly prize DEI, the academy must become more accessible to disabled people.

Ethics statements

Patient consent for publication

Not applicable.


X @JoElizaHunt, @crblease

JH and CB contributed equally.

Contributors Both authors contributed equally to all aspects of the paper. As corresponding author, JH acts as guarantor.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.
