Null and Alternative Hypotheses | Definitions & Examples

Published on 5 October 2022 by Shaun Turney. Revised on 6 December 2022.

The null and alternative hypotheses are two competing claims that researchers weigh evidence for and against using a statistical test:

  • Null hypothesis (H0): There’s no effect in the population.
  • Alternative hypothesis (HA): There’s an effect in the population.

The effect is usually the effect of the independent variable on the dependent variable.

Table of contents

  • Answering your research question with hypotheses
  • What is a null hypothesis?
  • What is an alternative hypothesis?
  • Differences between null and alternative hypotheses
  • How to write null and alternative hypotheses
  • Frequently asked questions about null and alternative hypotheses

The null and alternative hypotheses offer competing answers to your research question. When the research question asks “Does the independent variable affect the dependent variable?”, the null hypothesis (H0) answers “No, there’s no effect in the population.” On the other hand, the alternative hypothesis (HA) answers “Yes, there is an effect in the population.”

The null and alternative are always claims about the population. That’s because the goal of hypothesis testing is to make inferences about a population based on a sample . Often, we infer whether there’s an effect in the population by looking at differences between groups or relationships between variables in the sample.

You can use a statistical test to decide whether the evidence favors the null or alternative hypothesis. Each type of statistical test comes with a specific way of phrasing the null and alternative hypothesis. However, the hypotheses can also be phrased in a general way that applies to any test.

The null hypothesis is the claim that there’s no effect in the population.

If the sample provides enough evidence against the claim that there’s no effect in the population (p ≤ α), then we can reject the null hypothesis. Otherwise, we fail to reject the null hypothesis.

Although “fail to reject” may sound awkward, it’s the only wording that statisticians accept. Be careful not to say you “prove” or “accept” the null hypothesis.
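As a concrete sketch of this decision rule, the example below compares two hypothetical group means with a two-sided permutation test (the permutation test stands in for whichever statistical test actually suits your design, and the data are made up for illustration):

```python
import random

random.seed(42)

def permutation_test(a, b, n_perm=10_000):
    """Two-sided permutation test for a difference in means.
    H0: there is no difference between the groups' population means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_perm  # p-value: share of shuffles at least as extreme

# Hypothetical measurements for two groups
group_a = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.6]
group_b = [4.2, 4.0, 4.5, 3.9, 4.8, 4.1, 4.4, 4.3]

alpha = 0.05
p = permutation_test(group_a, group_b)
if p <= alpha:
    print(f"p = {p:.4f} <= {alpha}: reject H0")
else:
    print(f"p = {p:.4f} > {alpha}: fail to reject H0")
```

Note that the code never “proves” or “accepts” anything: the only two outcomes are rejecting H0 or failing to reject it.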

Null hypotheses often include phrases such as “no effect”, “no difference”, or “no relationship”. When written in mathematical terms, they always include an equality (usually =, but sometimes ≥ or ≤).

Examples of null hypotheses

The table below gives examples of research questions and null hypotheses. There’s always more than one way to answer a research question, but these null hypotheses can help you get started.

*Note that some researchers prefer to always write the null hypothesis in terms of “no effect” and “=”. It would be fine to say that daily meditation has no effect on the incidence of depression and p1 = p2.

The alternative hypothesis (HA) is the other answer to your research question. It claims that there’s an effect in the population.

Often, your alternative hypothesis is the same as your research hypothesis. In other words, it’s the claim that you expect or hope will be true.

The alternative hypothesis is the complement to the null hypothesis. Null and alternative hypotheses are exhaustive, meaning that together they cover every possible outcome. They are also mutually exclusive, meaning that only one can be true at a time.

Alternative hypotheses often include phrases such as “an effect”, “a difference”, or “a relationship”. When alternative hypotheses are written in mathematical terms, they always include an inequality (usually ≠, but sometimes > or <). As with null hypotheses, there are many acceptable ways to phrase an alternative hypothesis.

Examples of alternative hypotheses

The table below gives examples of research questions and alternative hypotheses to help you get started with formulating your own.

Null and alternative hypotheses are similar in some ways:

  • They’re both answers to the research question.
  • They both make claims about the population.
  • They’re both evaluated by statistical tests.

However, there are important differences between the two types of hypotheses, summarized in the following table.

To help you write your hypotheses, you can use the template sentences below. If you know which statistical test you’re going to use, you can use the test-specific template sentences. Otherwise, you can use the general template sentences.

The only things you need to know to use these general template sentences are your dependent and independent variables. To write your research question, null hypothesis, and alternative hypothesis, fill in the following sentences with your variables:

Does [independent variable] affect [dependent variable]?

  • Null hypothesis (H0): [Independent variable] does not affect [dependent variable].
  • Alternative hypothesis (HA): [Independent variable] affects [dependent variable].

Test-specific

Once you know the statistical test you’ll be using, you can write your hypotheses in a more precise and mathematical way specific to the test you chose. The table below provides template sentences for common statistical tests.

Note: The template sentences above assume that you’re performing two-tailed tests. Two-tailed tests are appropriate for most studies.
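The one-tailed versus two-tailed distinction matters in practice. A minimal sketch with a one-sample z-test (hypothetical data, and the population standard deviation assumed known for simplicity) shows how the same data can clear the α = 0.05 bar one-tailed but not two-tailed:

```python
from statistics import NormalDist, mean
from math import sqrt

# Hypothetical sample, testing H0: mu = 100
sample = [103, 98, 106, 104, 109, 99, 104.5, 102]
mu0, sigma = 100, 5  # sigma treated as known for this z-test sketch

z = (mean(sample) - mu0) / (sigma / sqrt(len(sample)))

p_one_tailed = 1 - NormalDist().cdf(z)             # HA: mu > 100
p_two_tailed = 2 * (1 - NormalDist().cdf(abs(z)))  # HA: mu != 100

# At alpha = 0.05, the one-tailed test rejects H0 but the two-tailed test does not
print(f"z = {z:.2f}")
print(f"one-tailed p = {p_one_tailed:.4f}")
print(f"two-tailed p = {p_two_tailed:.4f}")
```

Because the two-tailed p-value splits α across both directions, it is the more conservative (and more common) default when you have no strong directional prediction.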

The null hypothesis is often abbreviated as H0. When the null hypothesis is written using mathematical symbols, it always includes an equality symbol (usually =, but sometimes ≥ or ≤).

The alternative hypothesis is often abbreviated as Ha or H1. When the alternative hypothesis is written using mathematical symbols, it always includes an inequality symbol (usually ≠, but sometimes < or >).

A research hypothesis is your proposed answer to your research question. The research hypothesis usually includes an explanation (‘x affects y because …’).

A statistical hypothesis, on the other hand, is a mathematical statement about a population parameter. Statistical hypotheses always come in pairs: the null and alternative hypotheses. In a well-designed study, the statistical hypotheses correspond logically to the research hypothesis.


Turney, S. (2022, December 06). Null and Alternative Hypotheses | Definitions & Examples. Scribbr. Retrieved 14 May 2024, from https://www.scribbr.co.uk/stats/null-and-alternative-hypothesis/


Publication bias: Why null results are not necessarily ‘dull’ results

Any result is a good result

Negative results are sometimes hidden discoveries

The publication of research findings is crucial to scientific progress. It is one of the primary ways scientists share their research with their peers and contemporaries. Indeed, it enables scientists to share their knowledge with, and inspire, future generations by preserving a record of their work that lights the way for future discoveries.

Nonetheless:

 “To light a candle is to cast a shadow” —Ursula Le Guin,  A Wizard of Earthsea.

While the publication of research ‘lights a candle’ by illuminating discoveries and allowing researchers to learn of, and build on, each other’s work, publication bias potentially casts a shadow over a trail of valuable hidden discoveries and research.

Publication bias typically describes the phenomenon whereby positive results that support a hypothesis are more likely to be published than negative or null results, so some discoveries remain hidden from view.

This has led to the potential distortion of scientific literature, from the view that bilingualism confers certain cognitive advantages, to the exaggeration of neurological sex differences.

Publication bias isn’t just a matter of academic rigour or some niche academic concern: it has real consequences, and null results do matter.

Misleading estimates of drug efficacy due to publication bias in antidepressant drug trials could cause doctors to undertake distorted risk-benefit analyses when prescribing drugs for patients.

Moreover, publication bias can misdirect research efforts, wasting time, effort, resources, and funding, and ultimately slowing scientific progress. For example, the overstated success of drug candidates in animal trials could lead to ineffective candidates being brought to clinical trial.

Furthermore, publication bias can skew the results of meta-analyses when it is not accounted for—thus compromising the effectiveness of a powerful scientific tool.

This is significant; meta-analyses are designed to account for variations and errors between studies by systematically synthesising and statistically analysing their combined findings. 

Clearly, null results are important in science.

Publication bias can arise from a researcher’s decision not to publish negative results due to a combination of personal, financial, and professional pressures to publish ‘exciting’ results.

For example, the results may be contrary to existing research or the researcher’s beliefs; they may deem the results to be ‘null and dull’ and unlikely to attract further funding; and negative results may even be omitted to save page space in a journal and better highlight positive results from a study.

Some may even re-analyse their negative results or aggregate outcomes into a new endpoint to extract positive results for publication. 

Researchers may also fail to submit a study showing negative results due to their perception that journal editors and peer-reviewers are biased against such results.

Publication bias can also arise from decisions of journal editors and peer reviewers to reject studies with null results because: “negative results have never made riveting reading”; the study may be similar to the peer-reviewer’s own work; they may be biased against research in different fields; or because studies with null results tend to be scrutinised more than positive ones.

Organisational and business conflicts of interest can also produce publication bias. 

For example, a business may not publish negative results because they could endanger profits by diminishing the perceived effectiveness of a product or test.

Science is a rigorous and dynamic discipline that is attempting to implement strategies to mitigate publication bias.

This includes methods to discern and assess the extent of publication bias; statistical methods to reduce its impact in meta-analyses; the pre-registering of experimental design; and certain journals requesting and publishing null results.

So next time you peruse a scientific journal, spare a thought for the null results that never made it onto the page and the importance of these hidden discoveries.

  • Chase, J. M., The Shadow of Bias. PLOS. Biol. 2013, 11 (7), e1001608.
  • Eliot, L., You don’t have a male or female brain – the more brains scientists study, the weaker the evidence for sex differences. The Conversation, April 22, 2021.
  • Bialystok, E.;  Kroll, J. F.;  Green, D. W.;  MacWhinney, B.; Craik, F. I., Publication Bias and the Validity of Evidence: What's the Connection? Psychol. Sci. 2015, 26 (6), 944-6.
  • Turner, E. H.;  Matthews, A. M.;  Linardatos, E.;  Tell, R. A.; Rosenthal, R., Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. N. Engl. J. Med. 2008, 358 (3), 252-260.
  • Ropovik, I.;  Adamkovic, M.; Greger, D., Neglect of publication bias compromises meta-analyses of educational research. PLoS One 2021, 16 (6), e0252415.
  • Murad, M. H.;  Chu, H.;  Lin, L.; Wang, Z., The effect of publication bias magnitude and direction on the certainty in evidence. BMJ. Evid. Based. Med. 2018, 23 (3), 84-86.
  • Kepes, S.;  Banks, G. C.; Oh, I.-S., Avoiding Bias in Publication Bias Research: The Value of “Null” Findings. J. Bus. Psychol. 2014, 29 (2), 183-203.
  • Thornton, A.; Lee, P., Publication bias in meta-analysis: its causes and consequences. J. Clin. Epidemiol. 2000, 53 (2), 207-216.
  • Roest, A.; Williams, C., Does publication bias make antidepressants seem more effective at treating anxiety than they really are? The Conversation, May 6, 2015.
  • Stanley, T. D.;  Doucouliagos, H.; Ioannidis, J. P. A., Finding the power to reduce publication bias. Stat. Med. 2017, 36 (10), 1580-1598.
  • DeVito, N. J.; Goldacre, B., Catalogue of bias: publication bias. BMJ. Evid. Based. Med. 2019, 24 (2), 53-54.

Written by Louis Casey, third-year student and Dalyell Scholar, Bachelor of Science and Advanced Studies, The University of Sydney.



Filling in the Scientific Record: The Importance of Negative and Null Results

PLOS strives to publish scientific research with transparency, openness, and integrity. Whether that means giving authors the choice to preregister their study, publishing peer review comments, or diversifying publishing outputs, we’re here to support researchers as they work to uncover and communicate discoveries that advance scientific progress. Negative and null results are an important part of this process.

This is something we agree on across our journal portfolio (the most recent updates from PLOS Biology being one example), and it’s something we care about especially on PLOS ONE. Our journal’s mission is to provide researchers with a quality, peer-reviewed, Open Access venue for all rigorously conducted research, regardless of novelty or impact. Our role in the publishing ecosystem is to provide a complete, transparent view of the scientific literature to enable discovery. While negative and null results can often be overlooked, by authors and publishers alike, their publication is just as important as that of positive outcomes and can help fill in critical gaps in the scientific record.

We encourage researchers to share their negative and null results.

To provide checks and balances for emerging research 

Positive results are often viewed as more impactful. From authors, editors, and publishers alike, there is a tendency to favor the publication of positive results over negative ones and, yes, there is evidence to suggest that positive results are more frequently cited by other researchers. 

Negative results, however, are crucial to providing a system of checks and balances against similar positive findings. Studies have attempted to determine to what extent the lack of negative results in scientific literature has inflated the efficacy of certain treatments or allowed false positives to remain unchecked. 

The effect is particularly dramatic in meta-analyses which are typically undertaken with the assumption that the sample of retrieved studies is representative of all conducted studies:

“ However, it is clear that a positive bias is introduced when studies with negative results remain unreported, thereby jeopardizing the validity of meta-analysis ( 25 , 26 ). This is potentially harmful as the false positive outcome of meta-analysis misinforms researchers, doctors, policymakers and greater scientific community, specifically when the wrong conclusions are drawn on the benefit of the treatment.” — Mlinarić, et al (2017). Dealing with publication bias: why you should really publish your negative results . Biochem Med (Zagreb) 27(3): 030201

As important as it is to report on studies that show a positive effect, it is equally vital to document instances where the same processes were not effective. We should be actively reporting, evaluating, and sharing negative and null results with the same emphasis we give to positive outcomes.
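The distortion the quote above warns about is easy to reproduce in a toy simulation (illustrative only, not taken from any study cited here): every simulated study measures an effect whose true size is exactly zero, yet a reader who sees only the significant, positive results would infer a sizeable effect.

```python
import random
from statistics import NormalDist, mean

random.seed(1)

def run_study(n=30, true_effect=0.0):
    """One simulated study: returns the effect estimate and a two-sided p-value."""
    data = [random.gauss(true_effect, 1.0) for _ in range(n)]
    est = mean(data)
    z = est / (1.0 / n ** 0.5)  # standard error of the mean is 1/sqrt(n)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return est, p

# 2,000 studies of an effect whose true size is exactly zero
studies = [run_study() for _ in range(2000)]

# The file drawer: only "significant, positive" findings get published
published = [est for est, p in studies if p <= 0.05 and est > 0]

print("true effect:                     0.000")
print(f"mean estimate over ALL studies: {mean(est for est, _ in studies):+.3f}")
print(f"mean estimate, published only:  {mean(published):+.3f}")
```

A meta-analysis fed only the “published” list would conclude there is a real effect; one with access to all the studies would correctly find none.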

To reduce time and resources needed for researchers to continue investigation

Regardless of the outcomes, new research requires time and financial resources to complete. At the end of the process, something is learned — even if the answer is unexpected or less clear than you had hoped for. Nevertheless, these efforts can provide valuable insights to other research groups.

If you’re seeking the answer to a particular scientific question, chances are that another research group is looking for that answer as well: either as a main focus or to provide additional background for a different study. Independent verification of results through replication studies is also an important piece of solidifying the foundation of future research. This can only happen when researchers have a complete record of previous results to work from.

By making more findings available, we can help increase efficiencies and advance scientific discovery faster. 

To fill in the scientific record and increase reproducibility

It’s difficult to draw reliable conclusions from a set of data that we know is incomplete. This lack of information affects the entire scientific ecosystem. Readers are often unaware that negative results for a particular study may even exist, and it may be more difficult for researchers to replicate studies where pieces of the data have been left out of the published record.

Some researchers opt to obtain specific null and negative results from outside the published literature, from non-peer-reviewed repositories, or by requesting data directly from the authors. The inclusion of this “grey literature” can improve accuracy, but the additional time and effort that goes into obtaining and verifying this information is prohibitive for many.

This is where publishers can play a pivotal role in ensuring that authors not only feel welcome to submit and publish negative results, but to make sure those efforts are properly recognized and credited. Published, peer-reviewed results allow for a more complete analysis of all available data and increased trust in the scientific record.  

We know it’s difficult to get into the lab right now and many researchers are having to rethink the way that they work or focus on other projects. We encourage anyone with previously unpublished negative and null results to submit their work to PLOS ONE and help fill in the gaps of the scientific record, or consider doing so in the future. 



Members of the CoGDeV lab share their research findings, research experiences, news and events summaries.

The value of null results.

Until very recently I had never heard of Open Research and what it meant. I came across an article in The Conversation with a headline that caught my attention: ‘Medical research is broken: here’s how to fix it’. On reading the article I learned about the reproducibility crisis. I was astounded to read that as much as 85% of medical research is ‘wasted’ and that about 50% of medical research is not published (Lloyd & Bradley, 2020). No domains are immune to the reproducibility crisis. For example, in psychology, in an attempt to reproduce the findings of 100 studies, The Open Science Collaboration found that only 36% of these studies could be reproduced (Gilbert & Strohminger, 2015).

When another researcher cannot replicate an experiment, we naturally question the validity of the results of the original experiment. This has serious implications for research and science, as replicating studies is crucial to verify findings, to advance knowledge and to build a robust evidence-based literature. So how could that be? The relatively high prevalence of questionable research practices (QRPs), such as selectively reporting hypotheses and excluding data post hoc (John et al., 2012), as well as the non-publication of failed studies (‘null results’), all seem to be playing a role in maintaining the reproducibility crisis. QRPs such as excluding data after looking at the impact of doing so on the results can significantly improve the probability of finding evidence in support of a hypothesis, making it harder to subsequently replicate the study (John et al., 2012). QRPs, although problematic and widespread, are not unequivocally unacceptable, but they call for greater transparency in the research process.

The factor I am most intrigued about is the non-publication of null results. A null result is a result that does not support the experimental hypothesis and is therefore difficult to publish, because many perceive these results as less interesting.
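The inflation caused by one such QRP (repeatedly testing and stopping as soon as the result looks significant) can be sketched with simulated data. This is a toy z-test on made-up data, not any study from the literature above; under a true null, the nominal 5% false-positive rate is clearly exceeded:

```python
import random
from statistics import NormalDist, mean

random.seed(7)

ALPHA = 0.05

def significant(data):
    """Two-sided z-test of H0: mu = 0, with sigma known to be 1."""
    z = mean(data) / (1.0 / len(data) ** 0.5)
    return 2 * (1 - NormalDist().cdf(abs(z))) <= ALPHA

def study_with_peeking(batches=5, start=20, step=10):
    """QRP: test after every batch of data and stop as soon as p <= alpha."""
    data = [random.gauss(0.0, 1.0) for _ in range(start)]  # true effect is zero
    for _ in range(batches):
        if significant(data):
            return True  # "significant" -> stop collecting and publish
        data += [random.gauss(0.0, 1.0) for _ in range(step)]
    return significant(data)

n_sims = 2000
fp_rate = sum(study_with_peeking() for _ in range(n_sims)) / n_sims
print(f"nominal alpha: {ALPHA:.0%}, actual false-positive rate: {fp_rate:.1%}")
```

Each individual look holds the error rate at 5%, but giving yourself several chances to stop on a significant result more than doubles the real false-positive rate, which is exactly why such results are then hard to replicate.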

Journal editors tend to publish studies that are statistically significant, which are more likely to be cited and therefore raise the reach and impact of their journals (Fanelli, 2010). Journal editors unwittingly contribute to a vicious cycle: researchers are more likely to engage in QPRs to get a positive result so their research has a higher chance to get cited and be accepted for publication by editors (Wenzel, 2016). Unfortunately, this unhelpful pattern works against the publication of many rigorously well-conducted studies that yield a null result.

Publishing only successful studies falsely gives the impression that experiments almost always work and that research gets it right every time. Research is usually, by nature, exploratory; some studies will yield positive results, but null results form an integral part of the process of exploration. Many researchers would be willing to spend time and effort to publish a result that does not support their hypothesis, however they face a barrier of finding a journal that will be willing to publish it. Many journals and editors steer away from null findings because null findings are cited less frequently (Mlinarić, Horvat, & Šupak Smolčić, 2017). This means that there is a publication bias against the null result whereby successful studies are more likely to be published. Since null results do not always support what we expected to find it can make them difficult to interpret however they are an integral part of scientific research. The endeavour of research involves keeping track of our unsuccessful attempts and learning from them. As researchers, we reflect and review what worked and what did not work, make the necessary amendments and try again. It takes us many trials to refine and master a new skill. The same is true in science; many trials are often necessary to understand our findings.

Unfortunately, the non-publication of null results means that useful data is kept away from collective knowledge. Publishing null results helps the scientific community by allowing researchers to build on previous studies. The publication of unsuccessful attempts has the potential to save other researchers’ time. Indeed, a new researcher may use a study protocol similar to that of a previously unpublished null study and risk having their findings go against their predictions, leading to a null result.

I also wonder about the influence of the researcher’s experience and belief system on their willingness to publish their null results. It is conceivable that some researchers may fear exposing that their theory had to change in light of the null findings. This could undermine their previous papers. This may be particularly true at the start of a career, when the desire to be published in a renowned journal is highly enticing, which may encourage researchers to hold back from sharing novel hypotheses or to engage in QRPs. Moreover, those researchers who publish a null result that challenges the current theoretical understanding of a particular issue may fear the consequences of questioning the status quo. It takes time and resources to publish results that do not support a carefully thought out hypothesis. It is understandable that researchers are reluctant to publish their null results, as there is little incentive to do so. Furthermore, established and new researchers alike may wish to preserve their status and not publish a null result that would contradict previous findings and may attract ‘the wrath’ of the original researcher (Nature editorial, 2017).

In the field of psychology, progress would not have happened if researchers had not carefully considered why their result went against their prediction. That is, null findings have the potential to move theory forward and challenge current theoretical understanding. For instance, a comprehensive study on candidate genes for depression challenged previous studies that had seemingly demonstrated that there was an association between genes and major depression. This comprehensive study yielded a null result. Because the study had been rigorously conducted it suggested that there was no significant association between genes and major depression (Border et al., 2019). More recently, the null findings of a study on approximate arithmetic training challenged the claim that training improves arithmetic fluency in adults; a theory that had been supported by several training studies with a positive result (Szkudlarek, Park & Brannon, 2020).

Commitment to open research practices by universities, funders and publishers is starting to have an impact on this issue by raising the visibility of null findings among students and researchers. The Registered Report, a publishing model that works on the basis of ‘in-principle acceptance’ (a study is accepted for publication before its outcome is known), can help alleviate the publication bias. In-principle acceptance is likely to increase the publication of null results, as the criterion for publication is not the result itself but the quality of the research (Soderberg et al., 2020).

In order to challenge the publication bias, a group of editors from prestigious journals (European Journal of Work and Organizational Psychology, Journal of Business & Psychology, etc.) have committed to publishing null results arising from rigorously conducted research. This is part of a new initiative to enhance the integrity and quality of research (Wenzel, 2016). Perhaps another solution to encourage the scientific community to share their null results would be to publish a summary of all studies in an online ‘null results’ journal, which would enable other researchers to access them. An additional solution could be to add a mandatory section in every paper summarising the previously known null results, what was learned from them, and how they led to the publication of the conclusive study. Pre-registration, a time-stamped record of a study’s design and analysis plan created prior to data collection (https://www.ukrn.org/primers/; Farran, 2020), could also help that process by encouraging researchers to be transparent about their protocol, data analysis and outcome expectations from the outset.

It takes a level of confidence to publicly admit that, after rigorously conducting a study, our results do not support our experimental hypothesis. I wonder if that ‘admission’ could be eased by re-affirming that research is often exploratory: it would be unrealistic to always expect a positive result in research. Changing the narrative of null results into a story of learning and progress may support researchers in sharing their null results with the wider scientific community. By taking the focus of research off the results and moving it onto the quality of the study protocol and how rigorously a study has been conducted, I believe we can reduce the prevalence of QRPs and give null findings their rightful place in the world of scientific research.

I believe that increasing co-operation could also be helpful. The publication of null results carries the risk that other researchers could use a similar protocol, obtain a positive outcome, and take credit away from the original researcher. A change in culture where collaboration is valued above competition may help that movement. We also need a common framework for the replication of studies, with standardised protocols that include the publication and/or consideration of null results.

Null findings tend not to be published, which I feel is a waste of resources. The publications of these findings would benefit the wider scientific community. Research at its best is a collective effort with collaboration between organisations, increased transparency, and a smaller emphasis on competition and protection of resources.

Border, R., Johnson, E. C., Evans, L. M., Smolen, A., Berley, N., Sullivan, P. F., & Keller, M. C. (2019). No Support for Historical Candidate Gene or Candidate Gene-by-Interaction Hypotheses for Major Depression Across Multiple Large Samples. The American journal of psychiatry , 176(5), 376–387. https://doi.org/10.1176/appi.ajp.2018.18070881

Fanelli, D. (2010). “Positive” results increase down the Hierarchy of the Sciences. PloS one , 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068

Farran E. K., (2020). Is pre-registration for you? Retrieved from https://blogs.surrey.ac.uk/doctoralcollege/2020/01/06/guest-blog-prof-emily-farran-is-pre-registration-for-you/

Gilbert, E., & Strohminger, N. (2015). We found only one-third of published psychology research is reliable – now what? The Conversation. https://theconversation.com/we-found-only-one-third-of-published-psychology-research-is-reliable-now-what-46596

 John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling. Psychological Science , 23(5), 524–532. https://doi.org/10.1177/0956797611430953

Lloyd, K. E., & Bradley, S. (2020). Medical research is broken: here’s how we can fix it. The Conversation. https://theconversation.com/medical-research-is-broken-heres-how-we-can-fix-it-145281

Mlinarić, A., Horvat, M., & Šupak Smolčić, V. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia medica , 27(3), 030201. https://doi.org/10.11613/BM.2017.030201

Rewarding negative results keeps science on track. Nature 551, 414 (2017). https://doi.org/10.1038/d41586-017-07325-2

Soderberg, C. K., Errington, T. M., Schiavone, S. R., Bottesini, J. G., Singleton Thorn, F., Vazire, S., … Nosek, B. A. (2020, November 16). Initial Evidence of Research Quality of Registered Reports Compared to the Traditional Publishing Model. https://doi.org/10.31222/osf.io/7x9vy

Szkudlarek, E., Park, J., & Brannon , E. (2020). Failure to replicate the benefit of approximate arithmetic training for symbolic arithmetic fluency in adults. Cognition . 207. 104521. 10.1016/j.cognition.2020.104521.

Wenzel, R. (2016). Business journals to tackle publication bias, will publish ‘null’ results. The conversation. https://theconversation.com/business-journals-to-tackle-publication-bias-will-publish-null-results-52818

Written by Badri Bechlem



5 tips for dealing with non-significant results


It might look like failure, but don’t let go just yet.

16 September 2019


When researchers fail to find a statistically significant result, it’s often treated as exactly that – a failure. Non-significant results are difficult to publish in scientific journals and, as a result, researchers often choose not to submit them for publication.

This means that the evidence published in scientific journals is biased towards studies that find effects.

A study published in Science by a team from Stanford University, which investigated 221 survey-based experiments funded by the National Science Foundation, found that nearly two-thirds of the social science experiments that produced null results were filed away, never to be published.

By comparison, 96% of the studies with statistically strong results were written up.

“These biases imperil the robustness of scientific evidence,” says David Mehler, a psychologist at the University of Münster in Germany. “But they also harm early career researchers in particular who depend on building up a track record.”

Mehler is the co-author of a recent article published in the Journal of European Psychology Students about appreciating the significance of non-significant findings.

So, what can researchers do to avoid unpublishable results?

#1 Perform an equivalence test

The problem with a non-significant result is that it’s ambiguous, explains Daniël Lakens, a psychologist at Eindhoven University of Technology in the Netherlands.

It could mean that the null hypothesis is true – there really is no effect. But it could also indicate that the data are inconclusive either way.

Lakens says performing an ‘equivalence test’ can help you distinguish between these two possibilities. It can’t tell you that there is no effect, but it can tell you that an effect – if it exists – is likely to be of negligible practical or theoretical significance.

Bayesian statistics offer an alternative way of performing this test, and in Lakens’ experience, “either is better than current practice”.
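As a rough illustration of the two one-sided tests (TOST) logic behind equivalence testing, here is a sketch in Python using scipy. (Lakens maintains dedicated tooling, such as the TOSTER package for R; this is not it.) The data, group sizes, and the ±0.5 equivalence bounds are all invented for the example:

```python
# Sketch of a two one-sided tests (TOST) equivalence procedure.
# Assumption: a true difference smaller than +/-0.5 raw units is negligible.
# All data here are simulated purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=10.0, scale=2.0, size=200)
group_b = rng.normal(loc=10.1, scale=2.0, size=200)

low, high = -0.5, 0.5  # equivalence bounds (smallest effect of interest)

# Test 1: is the mean difference (a - b) greater than the lower bound?
p_lower = stats.ttest_ind(group_a, group_b + low, alternative="greater").pvalue
# Test 2: is the mean difference (a - b) smaller than the upper bound?
p_upper = stats.ttest_ind(group_a, group_b + high, alternative="less").pvalue

# TOST concludes "statistically equivalent" only if BOTH tests reject.
p_tost = max(p_lower, p_upper)
print("equivalent at alpha = .05:", p_tost < 0.05)
```

If both one-sided tests are significant, the effect (if any) is statistically smaller than anything of practical interest, which turns an ambiguous null result into an informative one.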

#2 Collaborate to collect more data

Equivalence tests and Bayesian analyses can be helpful, but if you don’t have enough data, their results are likely to be inconclusive.

“The root problem remains that researchers want to conduct confirmatory hypothesis tests for effects that their studies are mostly underpowered to detect,” says Mehler.

This, he adds, is a particular problem for students and early career researchers, whose limited resources often constrain them to small sample sizes.

One solution is to collaborate with other researchers to collect more data. In psychology, the StudySwap website is one way for researchers to team up and combine resources.

#3 Use directional tests to increase statistical power

If resources are scarce, it’s important to use them as efficiently as possible. Lakens suggests a number of ways in which researchers can tweak their research design to increase statistical power – the likelihood of finding an effect if it really does exist.

In some circumstances, he says, researchers should consider ‘directional’ or ‘one-sided’ tests.

For example, if your hypothesis clearly states that patients receiving a new drug should have better outcomes than those receiving a placebo, it makes sense to test that prediction rather than looking for a difference between the groups in either direction.

“It’s basically free statistical power just for making a prediction,” says Lakens.
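A minimal sketch of that gain, using scipy (the `alternative` argument requires scipy 1.6 or later, and the data below are simulated, not from any real trial):

```python
# Two-sided vs directional (one-sided) t-test on the same simulated data.
# Outcome scores are invented; higher = better.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
placebo = rng.normal(loc=50, scale=10, size=30)
drug = rng.normal(loc=55, scale=10, size=30)  # hypothesised to be better

p_two_sided = stats.ttest_ind(drug, placebo).pvalue
p_one_sided = stats.ttest_ind(drug, placebo, alternative="greater").pvalue

print(p_two_sided, p_one_sided)
# When the observed difference falls in the predicted direction, the
# one-sided p value is half the two-sided one: the "free" power Lakens means.
```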

#4 Perform sequential analyses to improve data collection efficiency

Efficiency can also be increased by conducting sequential analyses, whereby data collection is terminated if there is already enough evidence to support the hypothesis, or it’s clear that further data will not lead to it being supported.

This approach is often taken in clinical trials where it might be unethical to test patients beyond the point that the efficacy of the treatment can already be determined.

A common concern is that performing multiple analyses increases the probability of finding an effect that doesn’t exist. However, this can be addressed by adjusting the threshold for statistical significance, Lakens explains.
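A small simulation can make both points concrete: peeking twice at α = .05 inflates the false-positive rate, while a stricter per-look threshold (≈ .0294, a common Pocock-style boundary for two looks) keeps it near 5%. This is an illustrative sketch, not a full group-sequential design:

```python
# Simulate many null studies (no true effect), each analysed twice:
# once at the halfway point and once at the end.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n_half = 2000, 50
false_pos_naive = false_pos_adjusted = 0

for _ in range(n_sims):
    a = rng.normal(size=2 * n_half)  # group A; the null is true
    b = rng.normal(size=2 * n_half)  # group B; the null is true
    p_interim = stats.ttest_ind(a[:n_half], b[:n_half]).pvalue
    p_final = stats.ttest_ind(a, b).pvalue
    # Naive: declare an effect if EITHER look crosses .05.
    false_pos_naive += (p_interim < 0.05) or (p_final < 0.05)
    # Adjusted: each look must cross a stricter threshold.
    false_pos_adjusted += (p_interim < 0.0294) or (p_final < 0.0294)

print("naive:", false_pos_naive / n_sims)        # noticeably above .05
print("adjusted:", false_pos_adjusted / n_sims)  # close to .05
```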

#5 Submit a Registered Report

Whichever approach is taken, it’s important to describe the sampling and analyses clearly to permit a fair evaluation by peer reviewers and readers, says Mehler.

Ideally, studies should be preregistered. This allows authors to demonstrate that the tests were determined before rather than after the results were known. In fact, Mehler argues, the best way to ensure that results are published is to submit a Registered Report.

In this format, studies are evaluated and provisionally accepted based on the methods and analysis plan. The paper is then guaranteed to be published if the researchers follow this preregistered plan – whatever the results.

In a recent investigation, Mehler and his colleague Chris Allen, from Cardiff University in the UK, found that Registered Reports led to a much higher rate of null results: 61%, compared with 5 to 20% for traditional papers.


How to Write About Negative (Or Null) Results in Academic Research

ScienceEditor

Researchers are often disappointed when their work yields "negative" results, meaning that the null hypothesis cannot be rejected. However, negative results are essential for research to progress. Negative results tell researchers that they are on the wrong path, or that their current techniques are ineffective. This is a natural and necessary part of discovering something that was previously unknown. Solving the problems that lead to negative results is an integral part of being an effective researcher, and publishing negative results that come from rigorous research contributes to scientific progress.

There are three main reasons for negative results:

  • The original hypothesis was incorrect
  • The findings of a published report cannot be replicated
  • Technical problems

Here, we will discuss how to write about negative results, first focusing on the most common reason: technical problems.

Writing about technical problems

Technical problems might include faulty reagents, inappropriate study design, and insufficient statistical power. Most researchers would prefer to resolve technical problems before presenting their work, and focus instead on their convincing results. In reality, researchers often need to present their work at a conference or to a thesis committee before some problems can be resolved.

When presenting at a conference, the objective should be to clearly describe your overall research goal and why it is important, your preliminary results, the current problem, and how previously published work is informing the steps you are taking to resolve the problem. Here, you want to take advantage of the collective expertise at the conference. By being straightforward about your difficulties, you increase the chance that someone can help you find a solution.

When presenting to a thesis committee, much of what you discuss will be the same (overall research goal and why it is important, results, problem(s) and possible solutions). Your primary goal is to show that you are well prepared to move forward in your research career, despite the recent difficulties. The thesis defense is a defined stopping point, so most thesis students should write about solutions they would pursue if they were to continue the work. For example, "To resolve this problem, it would be advisable to increase the survey area by a factor of 4, and then…" In contrast, researchers who will be continuing their work should write about possible solutions using present and future tense. For example, "To resolve this problem, we are currently testing a wider variety of standards, and will then conduct preliminary experiments to determine…"

Putting the "re" in "research"

Whether you are presenting at a conference, defending a thesis, applying for funding, or simply trying to make progress in your research, you will often need to search through the academic literature to determine the best path forward. This is especially true when you get unexpected results—either positive or negative. When trying to resolve a technical problem, you should often find yourself carefully reading the materials and methods sections of papers that address similar research questions, or that used similar techniques to explore very different problems. For example, a single computer algorithm might be adapted to address research questions in many different fields.

In searching through published papers and less formal methods of communication—such as conference abstracts—you may come to appreciate the important details that good researchers will include when discussing technical problems or other negative results. For example, "We found that participants were more likely to complete the process when light refreshments were provided between the two sessions." By including this information, the authors may help other researchers save time and resources.

Thus, you are advised to be as thorough as possible in reviewing the relevant literature, to find the most promising solutions for technical problems. When presenting your work, show that you have carefully considered the possibilities, and have developed a realistic plan for moving forward. This will help a thesis committee view your efforts favorably, and can also convince possible collaborators or advisors to invest time in helping you.

Publishing negative results

Negative results due to technical problems may be acceptable for a conference presentation or a thesis at the undergraduate or master's degree level. Negative results due to technical problems are not sufficient for publication, a Ph.D. dissertation, or tenure. In those situations, you will need to resolve the technical problem and generate high quality results (either positive or negative) that stand up to rigorous analysis. Depending on the research field, high quality negative results might include multiple readouts and narrow confidence intervals.

Researchers are often reluctant to publish negative results, especially if their data don't support an interesting alternative hypothesis. Traditionally, journals have been reluctant to publish negative results that are not paired with positive results, even if the study is well designed and the results have sufficient statistical power. This is starting to change, especially for medical research, but publishing negative results can still be an uphill battle.

Not publishing high quality negative results is a disservice to the scientific community and the people who support it (including taxpayers), since other scientists may need to repeat the work. For studies involving animal research or human tissue samples, not publishing would squander significant sacrifices. For research involving medical treatments, especially studies that contradict a published report, not publishing negative results leads to an inaccurate understanding of treatment efficacy.

So how can researchers write about negative results in a way that reflects its importance? Let's consider a common reason for negative results: the original hypothesis was incorrect.

Writing about negative results when the original hypothesis was incorrect

Researchers should be comfortable with being wrong some of the time, such as when results don't support an initial hypothesis. After all, research wouldn't be necessary if we already knew the answer to every possible question. The next step is usually to revise the hypothesis after reconsidering the available data, reading through the relevant literature, and consulting with colleagues.

Ideally, a revised hypothesis will lead to results that allow you to reject a (revised) null hypothesis. The negative results can then be reported alongside the positive results, possibly bolstering the significance of both. For example, "The DNA mutations in region A had a significant effect on gene expression, while the mutations outside of region A had no effect." Don't forget to include important details about how you overcame technical problems, so that other researchers don't need to reinvent the wheel.

Unfortunately, it isn't always possible to pair negative results with related positive results. For example, imagine a year-long study on the effect of COVID-19 shelter-in-place orders on the mental health of avid video game players compared to people who don't play video games. Despite using well-established tools for measuring mental health, having a large sample size, and comparing multiple subpopulations (e.g. gamers who live alone vs. gamers who live with others), no significant differences were identified. There is no way to modify and repeat this study because the same shelter-in-place conditions no longer exist. So how can this research be presented effectively?

Writing when you only have negative results

When you write a scientific paper to report negative results, the sections will be the same as for any other paper: Introduction, Materials and Methods, Results, and Discussion. In the introduction, you should prepare your reader for the possibility of negative results. You can highlight gaps or inconsistencies in past research, and point to data that could indicate an incomplete understanding of the situation.

In the example about video game players, you might highlight data showing that gamers are statistically very similar to large chunks of the population in terms of age, education, marital status, etc. You might discuss how the stigma associated with playing video games might be unfair and harmful to people in certain situations. You could discuss research showing the benefits of playing video games, and contrast gaming with engaging in social media, which is another modern hobby. Putting a positive spin on negative results can make the difference between a published manuscript and rejection.

In a paper that focuses on negative results—especially one that contradicts published findings—the research design and data analysis must be impeccable. You may need to collaborate with other researchers to ensure that your methods are sound, and apply multiple methods of data analysis.

As long as the research is rigorous, negative results should be used to inform and guide future experiments. This is how science improves our understanding of the world.


Chapter 13: Inferential Statistics

Understanding Null Hypothesis Testing

Learning Objectives

  • Explain the purpose of null hypothesis testing, including the role of sampling error.
  • Describe the basic logic of null hypothesis testing.
  • Describe the role of relationship strength and sample size in determining statistical significance and make reasonable judgments about statistical significance based on these two factors.

The Purpose of Null Hypothesis Testing

As we have seen, psychological research typically involves measuring one or more variables for a sample and computing descriptive statistics for that sample. In general, however, the researcher’s goal is not to draw conclusions about that sample but to draw conclusions about the population that the sample was selected from. Thus researchers must use sample statistics to draw conclusions about the corresponding values in the population. These corresponding values in the population are called parameters. Imagine, for example, that a researcher measures the number of depressive symptoms exhibited by each of 50 clinically depressed adults and computes the mean number of symptoms. The researcher probably wants to use this sample statistic (the mean number of symptoms for the sample) to draw conclusions about the corresponding population parameter (the mean number of symptoms for clinically depressed adults).

Unfortunately, sample statistics are not perfect estimates of their corresponding population parameters. This is because there is a certain amount of random variability in any statistic from sample to sample. The mean number of depressive symptoms might be 8.73 in one sample of clinically depressed adults, 6.45 in a second sample, and 9.44 in a third—even though these samples are selected randomly from the same population. Similarly, the correlation (Pearson’s r) between two variables might be +.24 in one sample, −.04 in a second sample, and +.15 in a third—again, even though these samples are selected randomly from the same population. This random variability in a statistic from sample to sample is called sampling error. (Note that the term error here refers to random variability and does not imply that anyone has made a mistake. No one “commits a sampling error.”)
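This variability is easy to see in a quick simulation; the population values below are invented purely for the demonstration:

```python
# Draw several random samples from one fixed population and watch the
# sample mean bounce around the true population mean (about 8.5 here).
import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(loc=8.5, scale=3.0, size=100_000)

sample_means = [rng.choice(population, size=50, replace=False).mean()
                for _ in range(3)]
print([round(m, 2) for m in sample_means])  # three different values
```

Each sample comes from exactly the same population, yet no two sample means agree: that is sampling error, with no mistake made by anyone.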

One implication of this is that when there is a statistical relationship in a sample, it is not always clear that there is a statistical relationship in the population. A small difference between two group means in a sample might indicate that there is a small difference between the two group means in the population. But it could also be that there is no difference between the means in the population and that the difference in the sample is just a matter of sampling error. Similarly, a Pearson’s r value of −.29 in a sample might mean that there is a negative relationship in the population. But it could also be that there is no relationship in the population and that the relationship in the sample is just a matter of sampling error.

In fact, any statistical relationship in a sample can be interpreted in two ways:

  • There is a relationship in the population, and the relationship in the sample reflects this.
  • There is no relationship in the population, and the relationship in the sample reflects only sampling error.

The purpose of null hypothesis testing is simply to help researchers decide between these two interpretations.

The Logic of Null Hypothesis Testing

Null hypothesis testing is a formal approach to deciding between two interpretations of a statistical relationship in a sample. One interpretation is called the null hypothesis (often symbolized H0 and read as “H-naught”). This is the idea that there is no relationship in the population and that the relationship in the sample reflects only sampling error. Informally, the null hypothesis is that the sample relationship “occurred by chance.” The other interpretation is called the alternative hypothesis (often symbolized as H1). This is the idea that there is a relationship in the population and that the relationship in the sample reflects this relationship in the population.

Again, every statistical relationship in a sample can be interpreted in either of these two ways: It might have occurred by chance, or it might reflect a relationship in the population. So researchers need a way to decide between them. Although there are many specific null hypothesis testing techniques, they are all based on the same general logic. The steps are as follows:

  • Assume for the moment that the null hypothesis is true. There is no relationship between the variables in the population.
  • Determine how likely the sample relationship would be if the null hypothesis were true.
  • If the sample relationship would be extremely unlikely, then reject the null hypothesis in favour of the alternative hypothesis. If it would not be extremely unlikely, then retain the null hypothesis.

Following this logic, we can begin to understand why Mehl and his colleagues concluded that there is no difference in talkativeness between women and men in the population. In essence, they asked the following question: “If there were no difference in the population, how likely is it that we would find a small difference of  d  = 0.06 in our sample?” Their answer to this question was that this sample relationship would be fairly likely if the null hypothesis were true. Therefore, they retained the null hypothesis—concluding that there is no evidence of a sex difference in the population. We can also see why Kanner and his colleagues concluded that there is a correlation between hassles and symptoms in the population. They asked, “If the null hypothesis were true, how likely is it that we would find a strong correlation of +.60 in our sample?” Their answer to this question was that this sample relationship would be fairly unlikely if the null hypothesis were true. Therefore, they rejected the null hypothesis in favour of the alternative hypothesis—concluding that there is a positive correlation between these variables in the population.

A crucial step in null hypothesis testing is finding the likelihood of the sample result if the null hypothesis were true. This probability is called the p value. A low p value means that the sample result would be unlikely if the null hypothesis were true and leads to the rejection of the null hypothesis. A high p value means that the sample result would be likely if the null hypothesis were true and leads to the retention of the null hypothesis. But how low must the p value be before the sample result is considered unlikely enough to reject the null hypothesis? In null hypothesis testing, this criterion is called α (alpha) and is almost always set to .05. If there is less than a 5% chance of a result as extreme as the sample result if the null hypothesis were true, then the null hypothesis is rejected. When this happens, the result is said to be statistically significant. If there is greater than a 5% chance of a result as extreme as the sample result when the null hypothesis is true, then the null hypothesis is retained. This does not necessarily mean that the researcher accepts the null hypothesis as true—only that there is not currently enough evidence to conclude that it is true. Researchers often use the expression “fail to reject the null hypothesis” rather than “retain the null hypothesis,” but they never use the expression “accept the null hypothesis.”
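The decision rule itself can be stated in a few lines of code. In this sketch the two samples are drawn from the same population, so the null hypothesis is true by construction (all numbers are illustrative):

```python
# Reject H0 when p < alpha (conventionally .05); otherwise retain it.
import numpy as np
from scipy import stats

alpha = 0.05
rng = np.random.default_rng(3)
women = rng.normal(loc=16, scale=8, size=40)  # same population mean...
men = rng.normal(loc=16, scale=8, size=40)    # ...so H0 really is true

p = stats.ttest_ind(women, men).pvalue
if p < alpha:
    print(f"p = {p:.3f}: reject H0 (statistically significant)")
else:
    print(f"p = {p:.3f}: retain H0 (fail to reject)")
```

Whichever way the comparison comes out, note that the code never "accepts" H0; it only rejects it or fails to reject it.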

The Misunderstood p Value

The p value is one of the most misunderstood quantities in psychological research (Cohen, 1994) [1]. Even professional researchers misinterpret it, and it is not unusual for such misinterpretations to appear in statistics textbooks!

The most common misinterpretation is that the p value is the probability that the null hypothesis is true—that the sample result occurred by chance. For example, a misguided researcher might say that because the p value is .02, there is only a 2% chance that the result is due to chance and a 98% chance that it reflects a real relationship in the population. But this is incorrect. The p value is really the probability of a result at least as extreme as the sample result if the null hypothesis were true. So a p value of .02 means that if the null hypothesis were true, a sample result this extreme would occur only 2% of the time.

You can avoid this misunderstanding by remembering that the p value is not the probability that any particular hypothesis is true or false. Instead, it is the probability of obtaining the sample result if the null hypothesis were true.
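The definition can be checked by brute force: simulate many studies in which the null hypothesis really is true and count how often they produce a result at least as extreme as the one observed. The observed t statistic and degrees of freedom below are invented for the example:

```python
# The p value is the probability, under H0, of a result at least as
# extreme as the observed one -- so a null simulation should reproduce it.
import numpy as np
from scipy import stats

observed_t, df = 2.4, 58  # a hypothetical study's result

p_analytic = 2 * stats.t.sf(abs(observed_t), df)  # two-sided p value

rng = np.random.default_rng(5)
null_ts = rng.standard_t(df, size=200_000)  # t statistics when H0 is true
p_simulated = np.mean(np.abs(null_ts) >= abs(observed_t))

print(round(p_analytic, 3), round(p_simulated, 3))  # both about .02
```

The two numbers agree because both answer the same question about the sample result given H0; neither says anything directly about the probability that H0 itself is true.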

Role of Sample Size and Relationship Strength

Recall that null hypothesis testing involves answering the question, “If the null hypothesis were true, what is the probability of a sample result as extreme as this one?” In other words, “What is the p value?” It can be helpful to see that the answer to this question depends on just two considerations: the strength of the relationship and the size of the sample. Specifically, the stronger the sample relationship and the larger the sample, the less likely the result would be if the null hypothesis were true. That is, the lower the p value. This should make sense. Imagine a study in which a sample of 500 women is compared with a sample of 500 men in terms of some psychological characteristic, and Cohen’s d is a strong 0.50. If there were really no sex difference in the population, then a result this strong based on such a large sample should seem highly unlikely. Now imagine a similar study in which a sample of three women is compared with a sample of three men, and Cohen’s d is a weak 0.10. If there were no sex difference in the population, then a relationship this weak based on such a small sample should seem likely. And this is precisely why the null hypothesis would be rejected in the first example and retained in the second.
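The two scenarios above can be computed directly. This sketch uses the standard approximation t ≈ d × √(n/2) for a two-group comparison with n participants per group; it is a back-of-the-envelope calculation, not a full analysis:

```python
# p values for a strong effect in a large sample vs a weak effect in a
# tiny sample, using the approximation t ~= d * sqrt(n / 2), n per group.
import math
from scipy import stats

results = {}
for d, n in [(0.50, 500), (0.10, 3)]:
    t = d * math.sqrt(n / 2)             # approximate two-sample t statistic
    p = 2 * stats.t.sf(t, df=2 * n - 2)  # two-sided p value
    results[(d, n)] = p
    print(f"d = {d:.2f}, n = {n} per group: p = {p:.4f}")
```

The strong effect in the large sample yields a vanishingly small p value, while the weak effect in the tiny sample yields a p value far above .05, matching the intuition in the paragraph above.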

Of course, sometimes the result can be weak and the sample large, or the result can be strong and the sample small. In these cases, the two considerations trade off against each other so that a weak result can be statistically significant if the sample is large enough and a strong relationship can be statistically significant even if the sample is small. Table 13.1 shows roughly how relationship strength and sample size combine to determine whether a sample result is statistically significant. The columns of the table represent the three levels of relationship strength: weak, medium, and strong. The rows represent four sample sizes that can be considered small, medium, large, and extra large in the context of psychological research. Thus each cell in the table represents a combination of relationship strength and sample size. If a cell contains the word Yes, then this combination would be statistically significant for both Cohen’s d and Pearson’s r. If it contains the word No, then it would not be statistically significant for either. There is one cell where the decision for d and r would be different and another where it might be different depending on some additional considerations, which are discussed in Section 13.2 “Some Basic Null Hypothesis Tests.”

Although Table 13.1 provides only a rough guideline, it shows very clearly that weak relationships based on medium or small samples are never statistically significant and that strong relationships based on medium or larger samples are always statistically significant. If you keep this lesson in mind, you will often know whether a result is statistically significant based on the descriptive statistics alone. It is extremely useful to be able to develop this kind of intuitive judgment. One reason is that it allows you to develop expectations about how your formal null hypothesis tests are going to come out, which in turn allows you to detect problems in your analyses. For example, if your sample relationship is strong and your sample is medium, then you would expect to reject the null hypothesis. If for some reason your formal null hypothesis test indicates otherwise, then you need to double-check your computations and interpretations. A second reason is that the ability to make this kind of intuitive judgment is an indication that you understand the basic logic of this approach in addition to being able to do the computations.

Statistical Significance Versus Practical Significance

Table 13.1 illustrates another extremely important point. A statistically significant result is not necessarily a strong one. Even a very weak result can be statistically significant if it is based on a large enough sample. This is closely related to Janet Shibley Hyde’s argument about sex differences (Hyde, 2007) [2] . The differences between women and men in mathematical problem solving and leadership ability are statistically significant. But the word  significant  can cause people to interpret these differences as strong and important—perhaps even important enough to influence the college courses they take or even who they vote for. As we have seen, however, these statistically significant differences are actually quite weak—perhaps even “trivial.”

This is why it is important to distinguish between the  statistical  significance of a result and the  practical  significance of that result.  Practical significance refers to the importance or usefulness of the result in some real-world context. Many sex differences are statistically significant—and may even be interesting for purely scientific reasons—but they are not practically significant. In clinical practice, this same concept is often referred to as “clinical significance.” For example, a study on a new treatment for social phobia might show that it produces a statistically significant positive effect. Yet this effect still might not be strong enough to justify the time, effort, and other costs of putting it into practice—especially if easier and cheaper treatments that work almost as well already exist. Although statistically significant, this result would be said to lack practical or clinical significance.
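A quick calculation (using the approximation t ≈ d × √(n/2) for a two-group comparison with n participants per group; the numbers are illustrative) shows how a trivial effect becomes statistically significant once the sample is large enough:

```python
# A tiny effect (d = 0.06) in a very large sample clears alpha = .05,
# even though the difference itself is negligible in practical terms.
import math
from scipy import stats

d, n = 0.06, 10_000                  # illustrative numbers only
t = d * math.sqrt(n / 2)             # approximate two-sample t statistic
p = 2 * stats.t.sf(t, df=2 * n - 2)  # two-sided p value
print(f"p = {p:.6f}")  # well below .05, yet the effect is trivial
```

A result like this would be statistically significant but, for most purposes, practically insignificant, which is exactly the distinction the paragraph above draws.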

Key Takeaways

  • Null hypothesis testing is a formal approach to deciding whether a statistical relationship in a sample reflects a real relationship in the population or is just due to chance.
  • The logic of null hypothesis testing involves assuming that the null hypothesis is true, finding how likely the sample result would be if this assumption were correct, and then making a decision. If the sample result would be unlikely if the null hypothesis were true, then it is rejected in favour of the alternative hypothesis. If it would not be unlikely, then the null hypothesis is retained.
  • The probability of obtaining the sample result if the null hypothesis were true (the  p  value) is based on two considerations: relationship strength and sample size. Reasonable judgments about whether a sample relationship is statistically significant can often be made by quickly considering these two factors.
  • Statistical significance is not the same as relationship strength or importance. Even weak relationships can be statistically significant if the sample size is large enough. It is important to consider relationship strength and the practical significance of a result in addition to its statistical significance.
Exercises

  • Discussion: Imagine a study showing that people who eat more broccoli tend to be happier. Explain for someone who knows nothing about statistics why the researchers would conduct a null hypothesis test.
  • Practice: Decide whether each of the following results is likely to be statistically significant, considering both relationship strength and sample size:
  • The correlation between two variables is  r  = −.78 based on a sample size of 137.
  • The mean score on a psychological characteristic for women is 25 ( SD  = 5) and the mean score for men is 24 ( SD  = 5). There were 12 women and 10 men in this study.
  • In a memory experiment, the mean number of items recalled by the 40 participants in Condition A was 0.50 standard deviations greater than the mean number recalled by the 40 participants in Condition B.
  • In another memory experiment, the mean scores for participants in Condition A and Condition B came out exactly the same!
  • A student finds a correlation of  r  = .04 between the number of units the students in his research methods class are taking and the students’ level of stress.
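For the correlation items above, a quick back-of-the-envelope check (a sketch, not a full hypothesis test) converts r and N into the usual t statistic; values far beyond roughly ±2 are statistically significant at the .05 level. The class size of 25 in the second case is a hypothetical number, since the exercise does not state one.

```python
import math

def t_from_r(r, n):
    # t statistic for testing H0: the population correlation is zero,
    # with n - 2 degrees of freedom
    return r * math.sqrt((n - 2) / (1 - r ** 2))

# r = -.78 with N = 137: a strong relationship in a sizable sample
t_strong = t_from_r(-0.78, 137)   # roughly -14.5, clearly significant

# r = .04 with N = 25 (hypothetical class size): weak, in a small sample
t_weak = t_from_r(0.04, 25)       # roughly 0.19, clearly not significant

print(round(t_strong, 1), round(t_weak, 2))
```

This matches the rule of thumb in the key takeaways: significance depends jointly on relationship strength and sample size.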

Long Descriptions

“Null Hypothesis” long description: A comic depicting a man and a woman talking in the foreground. In the background is a child working at a desk. The man says to the woman, “I can’t believe schools are still teaching kids about the null hypothesis. I remember reading a big study that conclusively disproved it years ago.” [Return to “Null Hypothesis”]

“Conditional Risk” long description: A comic depicting two hikers beside a tree during a thunderstorm. A bolt of lightning goes “crack” in the dark sky as thunder booms. One of the hikers says, “Whoa! We should get inside!” The other hiker says, “It’s okay! Lightning only kills about 45 Americans a year, so the chances of dying are only one in 7,000,000. Let’s go on!” The comic’s caption says, “The annual death rate among people who know that statistic is one in six.” [Return to “Conditional Risk”]

Media Attributions

  • Null Hypothesis by XKCD  CC BY-NC (Attribution NonCommercial)
  • Conditional Risk by XKCD  CC BY-NC (Attribution NonCommercial)
Notes

  • Cohen, J. (1994). The world is round: p < .05. American Psychologist, 49, 997–1003.
  • Hyde, J. S. (2007). New directions in the study of gender similarities and differences. Current Directions in Psychological Science, 16, 259–263.

Values in a population that correspond to variables measured in a study.

The random variability in a statistic from sample to sample.

A formal approach to deciding between two interpretations of a statistical relationship in a sample.

The idea that there is no relationship in the population and that the relationship in the sample reflects only sampling error.

The idea that there is a relationship in the population and that the relationship in the sample reflects this relationship in the population.

When the relationship found in the sample would be extremely unlikely, the idea that the relationship occurred “by chance” is rejected.

When the relationship found in the sample is likely to have occurred by chance, the null hypothesis is not rejected.

The probability that, if the null hypothesis were true, the result found in the sample would occur.

How low the p value must be before the sample result is considered unlikely in null hypothesis testing.

When there is less than a 5% chance of a result as extreme as the sample result occurring and the null hypothesis is rejected.
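Taken together, the last few definitions amount to a simple decision rule, sketched here with the conventional alpha = .05 criterion:

```python
def decide(p_value, alpha=0.05):
    # Reject the null hypothesis when the sample result would be
    # sufficiently unlikely if the null hypothesis were true;
    # otherwise retain (not "prove") the null hypothesis.
    if p_value < alpha:
        return "reject the null hypothesis"
    return "retain the null hypothesis"

print(decide(0.02))  # reject the null hypothesis
print(decide(0.30))  # retain the null hypothesis
```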

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Enago Academy

Researcher Alert! 5 Ways to Deal With Null, Inconclusive, or Insignificant Results


When your experiments yield null or inconclusive results, start by asking yourself:

  • Was your original hypothesis based on credible literature sources and background information?
  • Do you need to re-run certain experiments or run additional experiments with other variables?
  • Have you correctly assessed your data?

What if the Results are not Statistically Significant?

  • Validate your methods for their sensitivity and specificity
  • Contact the authors of original studies
  • Illustrate the role of communication
  • Collaborate with experts in your field
  • Prevention is better than cure

  • Authors may submit a detailed description of the intended study (objectives, sample size, methods, planned analyses) to a credible study registry such as the Open Science Framework .
  • Authors may provide a complete account of their work (background literature, hypothesis and the rationale, objectives, methodologies and the planned statistical analyses and pilot data if applicable) to the target journal for peer review prior to data collection. This allows the referees to evaluate the theoretical basis and experimental design before beginning actual research.

An in-depth peer review analysis is performed to judge the quality of your proposed work. If found suitable, studies are provisionally accepted. Following the provisional acceptance, authors can proceed with their study. The feedback provided by the reviewers can help you to effectively plan and improve your study design, before embarking on actual experimental work. On completion of all the experiments, authors have to submit their final manuscript for a second round of peer review to confirm sensible interpretation of the results. If the manuscript passes this quality check, it is likely to receive acceptance, regardless of the results – negative, null or insignificant.

Have you ever had such results? We would love to hear how you dealt with them! If you have any questions related to publishing negative, null, or insignificant results, post them here and our experts will be happy to answer them! You can also visit our Q&A forum for frequently asked questions related to research writing and publishing, answered by our team of subject-matter experts, eminent researchers, and publication experts.


Enago Academy's Most Popular Articles

AI vs. AI: Can we outsmart image manipulation in research?

  • AI in Academia

AI vs. AI: How to detect image manipulation and avoid academic misconduct

The scientific community is facing a new frontier of controversy as artificial intelligence (AI) is…

Diversify Your Learning: Why inclusive academic curricula matter

  • Diversity and Inclusion

Need for Diversifying Academic Curricula: Embracing missing voices and marginalized perspectives

In classrooms worldwide, a single narrative often dominates, leaving many students feeling lost. These stories,…

Understand Academic Burnout: Spot the Signs & Reclaim Your Focus

  • Career Corner
  • Trending Now

Recognizing the signs: A guide to overcoming academic burnout

As the sun set over the campus, casting long shadows through the library windows, Alex…

How to Promote an Inclusive and Equitable Lab Environment

Reassessing the Lab Environment to Create an Equitable and Inclusive Space

The pursuit of scientific discovery has long been fueled by diverse minds and perspectives. Yet…

How To Write A Lab Report | Traditional vs. AI-Assisted Approach

  • Reporting Research

How to Improve Lab Report Writing: Best practices to follow with and without AI-assistance

Imagine you’re a scientist who just made a ground-breaking discovery! You want to share your…

3 Quick Tips on How Researchers Can Handle Lack of Literature in Original Research

How to Effectively Structure an Opinion Article

5 Simple Tips on How to Patent Your Research

From Thesis to Journal Articles: Expert Tips on Publishing in PubMed

research null findings

Sign-up to read more

Subscribe for free to get unrestricted access to all our resources on research writing and academic publishing including:

  • 2000+ blog articles
  • 50+ Webinars
  • 10+ Expert podcasts
  • 50+ Infographics
  • 10+ Checklists
  • Research Guides

We hate spam too. We promise to protect your privacy and never spam you.

I am looking for Editing/ Proofreading services for my manuscript Tentative date of next journal submission:

research null findings

As a researcher, what do you consider most when choosing an image manipulation detector?

research null findings

The career path of a scholar, regardless of their field, usually begins with the passion to make a change in the world somehow. Their goals of making a difference lead them to want to perform research with positive, major outcomes by finding a significant scientific result that impacts their field somehow. But while this does happen occasionally, researchers are more likely to engage in experiments that are mundane, with minimal results or even “null” findings.

When the outcome is so minimally impactful, or neutral entirely, it’s tempting to chalk the work up to wasted time or lessons learned and move on without spending the next few months on a research manuscript that shows no exciting conclusion. However, these findings are often just as important, if not more so, than some positive correlations, and writing them up can be beneficial - and even essential - to research progress.

What is a “Null” Finding?

The term “null,” in reporting research findings, comes from the null hypothesis, a core part of the scientific method. The null hypothesis is the neutral baseline of an experiment: it states that the variables under study have no relationship or effect. Through your hypothesis and study design, you then work to nullify - that is, disprove - the null hypothesis you initially set up.

Because it is often the case that the null hypothesis is accurate and no effect occurs, null results are common. However, most journals are reluctant to publish an experiment in which no positive outcome occurred; this includes both null findings and negative results. Positive findings are published more frequently, and faster, than either of the other outcomes. Yet null reporting plays a crucial role in evidence-based research and scientific growth.

The Importance of Publishing Your Null Results

As a scholar, you of all people understand the frustration that comes with doing work that results in information that was already available. If only the knowledge had been readily accessible, you could have based your next research step on it and saved yourself lots of time.

Null results might not have a major implication for you , but they could be huge for other researchers. Publishing your findings has benefits such as:

●      Offering a form of checks and balances for new research ideas. When all published findings are positive, there is no way to verify that they are accurate and unbiased. False positives exist in science, and without null or negative results, it is harder to see the whole picture of a treatment or outcome.

●      Saving time and resources for other scholars. Different researchers often pursue the same questions in the course of their investigations. When published results, including null ones, verify their findings, or when the studies have already been performed and can be built upon, experiments do not need to be repeated.

●      Reducing the replicability crisis, since more records of null or negative findings are available to aid meta-analyses and to help with replicating studies that haven’t been thoroughly documented.

More journals today are realizing the importance of the null finding. Some even focus primarily on publishing articles with null or negative outcomes. As a scholar, remember that your null and negative results are just as worthy of publication as positive outcomes, and they should be treated the same way.


J Korean Med Sci. 2022 Apr 25; 37(16).


A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles

Edward Barroga

1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.

Glafera Janet Matanguihan

2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.

The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.

INTRODUCTION

Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1 , 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3 , 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5 , 6

It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4

There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.

DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES

A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5

On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4

Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8

Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12

CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES

Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13

There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7 , 10 , 11 , 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7 , 9 ; 4) based on original ideas 9 ; 5) have evidence-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7 , 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses, inductive reasoning based on specific observations or findings form more general hypotheses. 10

TYPES OF RESEARCH QUESTIONS AND HYPOTHESES

Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .

Research questions in quantitative research

In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .

Hypotheses in quantitative research

In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4 , 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state the absence of a relationship between two variables ( null hypothesis ), 4 , 11 , 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research in Table 3 .

Research questions in qualitative research

Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. A central question and associated subquestions are stated rather than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15

There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions ( contextual research questions ); 2) describe a phenomenon ( descriptive research questions ); 3) assess the effectiveness of existing methods, protocols, theories, or procedures ( evaluation research questions ); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena ( explanatory research questions ); or 5) focus on unknown aspects of a particular topic ( exploratory research questions ). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions ( generative research questions ) or advance specific ideologies of a position ( ideological research questions ). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines ( ethnographic research questions ). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions ( phenomenological research questions ), may be directed towards generating a theory of some process ( grounded theory questions ), or may address a description of the case and the emerging themes ( qualitative case study questions ). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .

Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . These also focus on one central phenomenon of interest, and may mention the participants and research site. 15

Hypotheses in qualitative research

Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1

FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES

Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14

The PICOT and PEO frameworks are also used when developing research questions. 1 The following elements are addressed in these frameworks, PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study; PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if these meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14

As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide some examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research ( Table 6 ) 16 and qualitative research ( Table 7 ) 17 , and how to transform these ambiguous research question(s) and hypothesis(es) into clear and good statements.

a These statements were composed for comparison and illustrative purposes only.

b These statements are direct quotes from Higashihara and Horiuchi. 16

a This statement is a direct quote from Shimoda et al. 17

The other statements were composed for comparison and illustrative purposes only.

CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES

To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify variables to be accessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims . This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .


Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups. The questions are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, by contrast, research questions are used more frequently in survey projects, whereas hypotheses are used more in experiments, to compare variables and their relationships.

Hypotheses are constructed based on the variables identified and as an if-then statement, following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 The hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypotheses construction involves a testable proposition to be deduced from theory, and independent and dependent variables to be separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12

In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.

[Fig. 2. Image: jkms-37-e121-g002.jpg]

EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES

  • EXAMPLE 1. Descriptive research question (quantitative research)
  • - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  • “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts.
  • RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes? ” 19
  • EXAMPLE 2. Relationship research question (quantitative research)
  • - Shows interactions between dependent variable (static postural control) and independent variable (peripheral visual field loss)
  • “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear.
  • Research question: What are the effects of peripheral visual field loss on static postural control ?” 20
  • EXAMPLE 3. Comparative research question (quantitative research)
  • - Clarifies the difference among groups with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  • “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition.
  • RESEARCH QUESTION: Which factors determine the outcome of PH in COPD?
  • STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH) .” 21
  • EXAMPLE 4. Exploratory research question (qualitative research)
  • - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to have a deeper understanding of the research problem
  • “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
  • EXAMPLE 5. Relationship research question (quantitative research)
  • - Defines interactions between dependent variable (use of ankle strategies) and independent variable (changes in muscle tone)
  • “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies ?” 23

EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES

  • EXAMPLE 1. Working hypothesis (quantitative research)
  • - A hypothesis that is initially accepted for further research to produce a feasible theory
  • “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness .” 24
  • “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response . The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses .” 24
  • EXAMPLE 2. Exploratory hypothesis (qualitative research)
  • - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  • “We hypothesized that when thinking about a past experience of help-seeking, a self distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations .” 25
  • “Conclusion
  • Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
  • EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  • “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout). ” 26
  • Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above . If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
  • EXAMPLE 4. Statistical hypothesis (quantitative research)
  • - An assumption is made about the relationship among several population characteristics ( gender differences in sociodemographic and clinical characteristics of adults with ADHD ). Validity is tested by statistical experiment or analysis ( chi-square test, Students t-test, and logistic regression analysis)
  • “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men . We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  • “Statistical Analysis
  • ( text omitted ) Between-gender comparisons were made using the chi-squared test for categorical variables and Students t-test for continuous variables…( text omitted ). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27

EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS

  • EXAMPLE 1. Background, hypotheses, and aims are provided
  • “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries …( text omitted ). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women …( text omitted ). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) ( text omitted ). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care …( text omitted )” 28
  • “ This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses .” 28
  • “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
  • EXAMPLE 2. Background, hypotheses, and aims are provided
  • “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  • “ We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group .” 29
  • EXAMPLE 3. Background, aim, and hypothesis are provided
  • “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  • “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  • “ The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education .” 30

Research questions and hypotheses are crucial components to any type of research, whether quantitative or qualitative. These questions should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research, and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses which serve as formal predictions about the research outcomes. Research questions and hypotheses are crucial elements of research that should not be overlooked. They should be carefully thought of and constructed when planning research. This avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Barroga E, Matanguihan GJ.
  • Methodology: Barroga E, Matanguihan GJ.
  • Writing - original draft: Barroga E, Matanguihan GJ.
  • Writing - review & editing: Barroga E, Matanguihan GJ.

The effect of transcranial direct current stimulation (tDCS) on learning and performing statistical calculations Rick A Houser, Steve Thoma, Marietta Stanton, Erin O’Connor, Hong Jiang, Yangxue Dong  

Is There A Lunar Influence on Search and Rescue Incidents? Andrew P. Billyard, Talia J. McCallum, Irene A. Collin  

No Effect of a Brief Music Intervention on Test Anxiety and Exam Scores in College Undergraduates Matthew A. Goldenberg, Anna H. L. Floyd, Anne Moyer  

Training Skills of Divided Attention among Older Adults Ilmiye Seçer, Lata Satyen

Do jackdaws have a memory for order? Gerit Pfuhl, Robert Biegler  

Factorial Structure of the Existence Scale André Brouwers, Welko Tomic  

Failure of Tactile Contact to Increase Request Compliance: The Case of Blood Donation Behavior Nicolas Guéguen, Farid Afifi, Sarah Brault, Virginie Charles-Sire, Pierre-Marie Leforestier, Anaëlle Morzedec, Elodie Piron

Effects of D-Glucose on Acquisition of Implicit Mirror-Tracing and Explicit Word Recall in a Non-Diabetic Sample Robert W. Flint, Jr.  

Songwriting Loafing or Creative Collaboration?: A Comparison of Individual and Team Written Billboard Hits in the USA Terry F. Pettijohn II, Shujaat F. Ahmed  

Is cleanliness next to godliness? Dispelling old wives’ tales: Failure to replicate Zhong and Liljenquist (2006) Jennifer V. Fayard, Amandeep K. Bassi, Daniel M. Bernstein, Brent W. Roberts  

Playing video games does not make for better visual attention skills Karen Murphy, Amy Spencer  

Inconsistent Mood Congruent Effects in Lexical Decision Experiments C. Darren Piercey, Nicole Rioux  

An Evaluation of CHANGE, a Pilot Prison Cognitive Treatment Program Eric G. Lambert, Nancy L. Hogan, Shannon Barton, Michael T. Stevenson  

Is There an Effect of Subliminal Messages in Music on Choice Behavior? Hauke Egermann, Reinhard Kopiez, Christoph Reuter

An Experimental Test of the Discontinuity Hypothesis: Examining the Effects of Mortality Salience on Nostalgia Jonathan F. Bassett  

Is Implicit Self-Esteem Really Unconscious?: Implicit Self-Esteem Eludes Conscious Reflection Matthew T. Gailliot and Brandon J. Schmeichel

Evaluative Affect Display toward Male & Female Leaders II: Transmission among Group Members and Leader Reactions Sabine C. Koch

False Recall Does Not Increase When Words are Presented in a Gender-Congruent Voice David S. Kreiner, R. Zane Price, Amy M. Gross, & Kristy L. Appleby

Self-Regulation: A Challenge to the Strength Model Anne M. Murtagh and Susan A. Todd

The Effects of Gender and Ethnicity on the Overcontrolled-Hostility Scale of the MMPI-2 Theresa Kay, Cheryl Duerksen, Patricia Pike, Tamara Andersen

Test Reactivity: Does the Measurement of Identity Serve as an Impetus for Identity Exploration? Kristine S. Anthis

Questioning the Generality of Behavioral Confirmation to Gender Role Stereotypes: The Effect of Status in Producing Self-Verification? Theodore W. McDonald and Loren L. Toussaint

Interpreting Null Results: Improving Presentation and Conclusions with Confidence Intervals Chris Aberson

Moderate Immunodepression Does Not Alter Some Murine Behaviors Jose Vidal

Multiple Targets of Organizational Identification: The Role of Identification Congruency Tim Grice, Liz Jones, and Neil Paulsen

Does Fetal Malnourishment Put Infants at Risk of Caregiver Neglect Because Their Faces Are Unappealing? Lisa Daleo and Gordon Bear

Any questions or comments, send correspondence to [email protected] .



Evaluation Basics Guide

At a glance.

This Evaluation Basics guide is helpful when evaluating heart disease and stroke prevention activities. This guide focuses on ensuring evaluation use through evaluation reporting.

Various aspects of evaluation reporting can affect how information is used. Programs should consider stakeholder needs, the evaluation purpose, and the target audience when communicating results. Evaluation reporting should identify what, when, how, and to what extent information should be shared. Evaluation reporting should also consider how information might be received and used.

Learn more about evaluation reporting and how to ensure use of evaluation findings. You can also download the complete guide .

Evaluation reporting: a guide to help ensure use of evaluation findings

Key considerations for effectively reporting evaluation findings.

These steps can help drive your intended users to action or influence someone or something based on the findings presented in your evaluation report.

Engage stakeholders

Stakeholders are people who are invested in the program or potentially affected by the evaluation. Stakeholders can play a key role by offering input throughout the evaluation process to ensure effective and useful reporting of evaluation results.

You can engage stakeholders:

  • During the planning phase. Stakeholders can help determine the intended use of the evaluation findings, identify potential primary users of findings, and help develop a reporting and dissemination plan.
  • Once data have been collected. Stakeholders can review interim findings, interpret data, help prepare findings, and help develop potential recommendations.
  • When developing the evaluation report. Stakeholders can help define the audience, identify any potential uses of the information, and ensure report findings meet the evaluation purpose.

Revisit the purpose of your evaluation

The purpose determines how the evaluation report and findings are used, who the users are, and the most appropriate type of reporting. There may be multiple purposes for conducting an evaluation.

Two common reasons for evaluating CDC-funded programs are to guide program improvement and to ensure program effectiveness.

  • Program improvement. Program staff may want to see a dashboard report of selected indicators and receive regular brief, verbal updates at meetings to learn what midcourse adjustments to make to improve program operations and activities.
  • Program effectiveness. A funding entity may ask for a detailed, comprehensive report to demonstrate whether program components contribute to expected outcomes for accountability purposes.

The evaluation's purpose can have a direct effect on how evaluation data are applied and used. Often, the desire is for evaluation recommendations and findings to inform decision making and lead to program improvement. Alternatively, evaluation results may be used to support or justify a preexisting position, resulting in little to no programmatic change.

Define your target audience

Consider and define the target audience of your evaluation report and findings.

  • Who are the intended primary users or the specific stakeholders who will most likely use the findings?
  • Is the target audience the funding agency of the program, people who are served by the program, or key legislators or decision makers in your local government?

Evaluation findings can be presented differently depending on the target audience and primary evaluation users. Some things to keep in mind about your audience are:

  • Effective communication channels. Identify the appropriate, preferred, and commonly used channels of communicating with your audience.
  • Desired action. Consider what action you want the audience to take and what is within their sphere of influence. Explore how the target audience makes decisions or decides to take action on the basis of new information.
  • Technical expertise or comprehension. Reflect on the level of familiarity the audience has with the subject matter and tailor the level of language to meet their comfort level. Use plain language over more technical language.
  • Cultural appropriateness. Ensure that reports are culturally appropriate for the audience.
  • Perceptions and expectations. Identify the audience's interest in or expectations of the project. Evaluation results may not always be expected or favorable. Regardless of how the findings are perceived, the opportunity for use remains. Also consider how the audience perceives the evaluator and the evaluation process.
  • Presentation of information. Present findings according to the audience's preference. For example, choose between written documentation and oral communication and between presenting anecdotal stories and presenting data.
  • Experience and context. Consider how your audience may interpret the findings, based on their understanding and experiences. Provide context where necessary, and keep the methodology description simple.

Making evaluation reports work for you

The format you use to deliver your evaluation findings will affect how and whether the findings are used. Use these tips to help ensure your evaluation findings are used by your stakeholders.

  • Use action-oriented reporting . Action-oriented reporting prompts your audience to action. This type of report is focused, simple, and geared toward a particular audience.
  • Offer creative options for format of delivery , such as newsletter articles, a website, one-page fact sheets, executive summaries, and PowerPoint presentations or webinars.
  • Communicate your findings in a way that the audience can easily understand . Write content to describe graphs, tables, and charts. Do not assume that your readers will look at both the displays and the narrative. Ensure that all of your graphs, tables, and charts can stand alone.
  • Interpret the findings . Interpretation means looking beyond the data and asking what the results mean in relation to your evaluation questions. It is always a good idea to review the results with selected stakeholders before completing an evaluation report. This review can be accomplished by circulating an interim or draft report and meeting to discuss it together.
  • Include recommendations and lessons learned . The recommendations should address specific findings and be feasible, realistic, actionable, and tailored to intended users. A report that details lessons learned is particularly useful in contributing to public health practices and reporting for accountability purposes.

Keeping it off the bookshelf—the importance of dissemination

Effective dissemination requires a plan to get the right knowledge to the right people at the right time and to help them apply it in ways that may improve a program’s performance.

Step 1: Create a dissemination plan

Your dissemination plan should answer these questions:

  • Who is the target audience?
  • What medium will you use to disseminate findings—hardcopy print, electronic, presentations, briefings?
  • How, where, and when will findings be used?
  • Who is responsible for dissemination?
  • What resources are available to accomplish the work?
  • What are the follow-up activities after release?
  • How will follow-up activities be monitored?

Step 2: Identify a person to oversee the dissemination plan

Identify a person to lead the dissemination effort. This person makes sure that the dissemination plan is carried out and should have experience making information accessible and understandable to different audiences.

Step 3: Know the current landscape

Recognize that most reports have a shelf life and most findings have a “relevancy date.” Be knowledgeable about your context, and select optimal release times. For example, if there is a great deal of media coverage about a topic related to your work, such as helping families stay healthy, you may wish to be connected to an existing press release or press conference. If your topic has received negative publicity, on the other hand, you may wish to “plan around” this coverage.

Step 4: Consider the timing and frequency

Dissemination works best when multiple products (e.g., a full report, a summary report, an evaluation brief) and channels (e.g., print, verbal, and web) are used.

Step 5: Stay involved

Convene follow-up discussions and facilitation as needed to ensure continued use of the report. You can take advantage of events that may help keep continued focus on your findings, such as social media, brown-bag lunches, meetings, conferences, or workshops.

Cardiovascular Disease Data, Tools, and Evaluation Resources

CDC provides public health professionals with resources related to heart disease and stroke prevention.


Miles D. Williams


Visiting Assistant Professor | Denison University


When the Research Hypothesis Is the Null

Posted on May 13, 2024 by Miles Williams in Methods, Statistics


What should you do if your research hypothesis is the null hypothesis? In other words, how should you approach hypothesis testing if your theory predicts no effect between two variables? A coauthor and I are working on a paper where a couple of our proposed hypotheses look like this, and we got some pushback from a reviewer about it. This prompted me to go down a rabbit hole of journal articles and message boards to see how others handle this situation. I quickly found that I had waded into a contentious issue connected to a bigger philosophical debate about the merits of hypothesis testing in general and whether the null hypothesis in particular is even logically sound as a benchmark for hypothesis testing.

There’s too much to unpack with this debate for me to cover in a single blog post (and I’m sure I’d get some of the key points wrong anyway if I tried). The main issue I want to explore in this post is the practical problem of how to approach testing a null research hypothesis. From an applied perspective, this is a tricky problem that raises issues with how we calculate and interpret p-values. Thankfully, there is a sound solution for the null research hypothesis which I explore in greater detail below. It’s called a two one-sided test, and it’s easy to implement once you know what it is.

The usual approach

Most of the time, a scientist has a research hypothesis that goes something like X has a positive effect on Y . For example, a political scientist might propose that a get-out-the-vote (GOTV) campaign ( X ) will increase voter turnout ( Y ).

The typical approach for testing this claim might be to estimate a regression model with voter turnout as the outcome and the GOTV campaign as the explanatory variable of interest:

Y = α + β X + ε

If the parameter β > 0, this would support the hypothesis that GOTV campaigns improve voter turnout. In practice, though, the researcher actually tests a different hypothesis, called the null hypothesis: the hypothesis that there is no true effect of GOTV campaigns on voter turnout.

By proposing and testing the null, we now have a point of reference for calculating a measure of uncertainty—that is, the probability of observing an empirical effect of a certain magnitude or greater if the null hypothesis is true. This probability is called a p-value, and by convention if it is less than 0.05 we say that we can reject the null hypothesis.

For the hypothetical regression model proposed above, to get this p-value we’d estimate β, calculate its standard error, and take the ratio of the former to the latter, giving us what’s called a t-statistic or t-value. Under the null hypothesis, the t-value has a known distribution, which makes it easy to map any t-value to a p-value. The figure below illustrates this using a hypothetical data sample of size N = 200. The t-statistic’s distribution has a distinct bell shape centered around 0. The range of t-values in blue is where, if we observed them in our empirical data, we’d fail to reject the null hypothesis at the p < 0.05 level; values in gray are t-values that would lead us to reject the null hypothesis at this same level.

[Figure: t-statistic distribution under the null, with the fail-to-reject region in blue and rejection regions in gray]
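To make the t-to-p mapping concrete, here is a minimal sketch (the blog's own code is in R; the function name here is mine, and it uses a standard normal approximation to the t distribution, which is close at N = 200):

```python
from statistics import NormalDist

def two_sided_p(beta_hat: float, se: float) -> float:
    """Two-sided p-value for H0: beta = 0.

    Uses a standard normal approximation to the t distribution,
    which is reasonable for a sample of size N = 200.
    """
    t = beta_hat / se  # the t-statistic: estimate over its standard error
    return 2 * (1 - NormalDist().cdf(abs(t)))

# A t-value just past the conventional 1.96 cutoff...
two_sided_p(1.97, 1.0)  # ...gives p just under 0.05
# ...while a t-value of 1.0 is nowhere near rejection
two_sided_p(1.0, 1.0)   # p is about 0.32
```

This is just the conventional logic from the figure above in code form: the further the observed t-value sits in the gray tails, the smaller the p-value.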

When the null is the research hypothesis we want to test

There’s nothing new or special here. If you have even a basic stats background (particularly with Frequentist statistics), the conventional approach to hypothesis testing is pretty ubiquitous. Things get more tricky when our research hypothesis is that there is no effect. Say for a certain set of theoretical reasons we think that GOTV campaigns are basically useless at increasing voter turnout. If this argument is true, then if we estimate the following regression model, we’d expect β = 0.

The problem here is that our substantive research hypothesis is also the one we want to find evidence against. We could proceed as usual and say that failing to reject the null counts as evidence in support of our theory, but failure to reject the null is not the same thing as finding support for the null hypothesis.

There are a few ideas in the literature for how to approach this instead. Many of these approaches are Bayesian, but most of my research relies on Frequentist statistics, so they were a no-go for me. However, there is one really simple approach that is consistent with the Frequentist paradigm: equivalence testing . The idea is this: propose some absolute effect size that is of minimal interest and then test whether the observed effect is different from it. This minimum effect is called the “smallest effect size of interest” (SESOI). I read about the approach in an article by Harms and Lakens (2018) in the Journal of Clinical and Translational Research .

Say, for example, that we deemed a t-value of ±1.96 (the usual threshold for rejecting the null hypothesis) extreme enough to constitute good evidence of a non-zero effect. We could make the appropriate adjustments to our t-distribution to identify a new range of t-values that would allow us to reject the hypothesis that the effect is non-zero. This is illustrated in the figure below. There is now a range of t-values in the middle where we could reject the non-zero hypothesis at the p < 0.05 level. This distribution looks inverted relative to the usual null distribution. The reason is that with this approach we're conducting a pair of alternative one-tailed tests: we test both the hypothesis that β / se(β) − 1.96 > 0 and the hypothesis that β / se(β) + 1.96 < 0. In the Harms and Lakens paper cited above, they call this approach two one-sided tests, or TOST (I'm guessing this is pronounced “toast”).
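
That decision rule can be sketched in a few lines of Python (my own translation, not code from the post or the Harms and Lakens paper; the 1.96 SESOI in t-units comes from the example above, while the 1.645 one-sided 5% critical value is an assumption):

```python
def tost_equivalence(t, sesoi_t=1.96, z_crit=1.645):
    """Two one-sided tests (TOST) in t-value units.
    Upper test: H0: t >= +sesoi_t, rejected when t - sesoi_t < -z_crit.
    Lower test: H0: t <= -sesoi_t, rejected when t + sesoi_t > +z_crit.
    We conclude 'equivalent to zero' only if BOTH nulls are rejected."""
    reject_upper = (t - sesoi_t) < -z_crit
    reject_lower = (t + sesoi_t) > z_crit
    return reject_upper and reject_lower
```

An observed t-value near 0 rejects both one-sided hypotheses and supports the null of no meaningful effect; even a moderate t-value (say, 1.0) fails one of the two tests.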

Something to pay attention to with this approach is that the observed t-statistic needs to be very small in absolute magnitude for us to reject the hypothesis of a non-zero effect. This means the bar for testing a null research hypothesis is actually quite high, as the following simulation in R demonstrates. Using the {seerrr} package, I had R generate 1,000 random draws (each of size 200) for a pair of variables x and y, where the former is a binary “treatment” and the latter is a random normal “outcome.” By design, there is no true causal relationship between these variables. Once I simulated the data, I generated a set of estimates of the effect of x on y for each simulated dataset and collected the results in an object called sim_ests. I then visualized two metrics calculated from the simulated results: (1) the rejection rate for the null hypothesis test and (2) the rejection rate for the two one-sided equivalence tests. As you can see, if we were to test a research null hypothesis the usual way, we'd expect to fail to reject the null about 95% of the time. Conversely, if we were to use the two one-sided equivalence tests, we'd expect to reject the non-zero alternative hypothesis only about 25% of the time. I tried a few additional simulations to see if a larger sample size would improve power (not shown), but no dice.
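
The original simulation uses R's {seerrr} package; the following is a pure-Python re-sketch of the same design (1,000 draws of size 200, a binary treatment with no true effect, a standard normal outcome), not the author's code. The ±1.96 and ±1.645 cutoffs are the same assumptions as above:

```python
import random
import statistics as st

random.seed(42)  # arbitrary seed for reproducibility

def one_draw(n=200):
    """Simulate one dataset with no true effect and return the t-value
    of the difference-in-means estimate of x's effect on y."""
    x = [random.randint(0, 1) for _ in range(n)]
    y = [random.gauss(0, 1) for _ in range(n)]
    y0 = [yi for xi, yi in zip(x, y) if xi == 0]
    y1 = [yi for xi, yi in zip(x, y) if xi == 1]
    beta = st.mean(y1) - st.mean(y0)
    se = (st.variance(y0) / len(y0) + st.variance(y1) / len(y1)) ** 0.5
    return beta / se

t_vals = [one_draw() for _ in range(1000)]

# (1) conventional null-hypothesis test: reject when |t| > 1.96
nhst_reject = st.mean(abs(t) > 1.96 for t in t_vals)

# (2) TOST: reject the non-zero alternative when both one-sided tests do
tost_reject = st.mean(
    (t - 1.96 < -1.645) and (t + 1.96 > 1.645) for t in t_vals
)
```

Under the null design, the conventional test rejects around 5% of the time, while TOST establishes equivalence in only roughly a quarter of draws, matching the power problem described above.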

The two one-sided tests approach strikes me as a nice method for dealing with a null research hypothesis, and it's pretty easy to implement. The one downside is that the test is under-powered: if the null is true, it will reject the alternative only about 25% of the time (though you could select a different non-zero alternative, which might give you more power). This isn't all bad, however. The flip side of the coin is that this is a really conservative test, so if you can reject the alternative, that puts you on solid rhetorical footing to show the data really do seem consistent with the null.

New Research From Clinical Psychological Science

  • Clinical Psychological Science
  • Obsessive Compulsive Disorder (OCD)
  • Personality
  • Stereotypes

Threat Appraisal and Pediatric Anxiety: Proof of Concept of a Latent-Variable Approach Rachel Bernstein, Ashley Smith, Elizabeth Kitt, Elise Cardinale, Anita Harrewijn, Rany Abend, Kalina Michalska, Daniel Pine, and Katharina Kircanski  

Elevated threat appraisal is a postulated neurodevelopmental mechanism of anxiety disorders. However, laboratory-assessed threat appraisals are task-specific and subject to measurement error. We used latent-variable analysis to integrate youths’ self-reported threat appraisals across different experimental tasks; we next examined associations with pediatric anxiety and behavioral- and psychophysiological-task indices. Ninety-two youths ages 8 to 17 (M = 13.07 years, 65% female), including 51 with a primary anxiety disorder and 41 with no Axis I diagnosis, completed up to eight threat-exposure tasks. Anxiety symptoms were assessed using questionnaires and ecological momentary assessment. Appraisals both before and following threat exposures evidenced shared variance across tasks. Derived factor scores for threat appraisal were associated significantly with anxiety symptoms and variably with task indices; findings were comparable with task-specific measures and had several advantages. Results support an overarching construct of threat appraisal linked with pediatric anxiety, providing groundwork for more robust laboratory-based measurement. 

Investigating a Common Structure of Personality Pathology and Attachment Madison Smith and Susan South

Critical theoretical intersections between adult insecure attachment and personality disorders (PDs) suggest that they may overlap, but a lack of empirical analysis to date has limited further interpretation. The current study used a large sample (N = 812) of undergraduates (N = 355) and adults receiving psychological treatment (N = 457) to test whether a joint hierarchical factor structure of personality pathology and insecure attachment is tenable. Results suggested that attachment and PD indicators load together on latent domains of emotional lability, detachment, and vulnerability, but antagonistic, impulsigenic, and psychosis-spectrum factors do not subsume attachment indicators. This solution was relatively consistent across treatment status but varied across gender, potentially suggesting divergent socialization of interpersonal problems. Although further tests are needed, if attachment and PDs prove to be unitary, combining them has exciting potential for providing an etiologic-developmental substrate to the classification of interpersonal dysfunction. 

Does Major Depression Differentially Affect Daily Affect in Adults From Six Middle-Income Countries: China, Ghana, India, Mexico, Russian Federation, and South Africa? Vanessa Panaite and Nathan Cohen

Much of the research on how depression affects daily emotional functioning comes from Western, Educated, Industrialized, Rich, and Democratic (WEIRD) countries. In the current study, we investigated daily positive affect (PA) and negative affect (NA) and PA and NA variability in a cross-cultural sample of adults with a depression diagnosis (N = 2,487) and without a depression diagnosis (N = 31,764) from six middle-income non-WEIRD countries: China, Ghana, India, Mexico, Russian Federation, and South Africa. Across countries, adults with depression relative to adults without depression reported higher average NA and NA variability and lower average PA but higher PA variability. Findings varied between countries. Observations are discussed within the context of new theories and evidence. Implications for current knowledge and for future efforts to grow cross-cultural and non-WEIRD affective science are discussed.

Depressive Symptoms and Their Mechanisms: An Investigation of Long-Term Patterns of Interaction Through a Panel-Network Approach Asle Hoffart, Nora Skjerdingstad, René Freichel, Alessandra Mansueto, Sverre Johnson, Sacha Epskamp, and Omid V. Ebrahimi  

The dynamic interaction between depressive symptoms, mechanisms proposed in the metacognitive-therapy model, and loneliness across a 9-month period was investigated. Four data waves 2 months apart were delivered by a representative population sample of 4,361 participants during the COVID-19 pandemic in Norway. Networks were estimated using the newly developed panel graphical vector-autoregression method. In the temporal network, use of substance to cope with negative feelings or thoughts positively predicted threat monitoring and depressed mood. In turn, threat monitoring positively predicted suicidal ideation. Metacognitive beliefs that thoughts and feelings are dangerous positively predicted anhedonia. Suicidal ideation positively predicted sleep problems and worthlessness. Loneliness was positively predicted by depressed mood. In turn, more loneliness predicted more control of emotions. The findings point at the theory-derived variables, threat monitoring, beliefs that thoughts and feelings are dangerous, and use of substance to cope, as potential targets for intervention to alleviate long-term depressive symptoms. 

Social Deprivation Index ranges from 0 (least deprived) to 100 (most deprived) and was divided by 10 for reporting. Adjustments were for age, sex, pediatric intensive care unit length of stay before screening date, Pediatric Risk of Mortality III score, year of screening, elective or nonelective admission, origin of admission, and study type (observational or interventional). Other race included American Indian or Alaska Native, Asian, Native Hawaiian or Other Pacific Islander, multiracial, other, and refused. NA indicates not applicable; OR, odds ratio.

Social Deprivation Index ranges from 0 (least deprived) to 100 (most deprived) and was divided by 10 for reporting. Adjustments were for age, sex, pediatric intensive care unit length of stay before screening date, Pediatric Risk of Mortality III score, year of screening, elective or nonelective admission, origin of admission, and study type (observational or interventional). Other race included American Indian or Alaska Native, Asian, Native Hawaiian or Other Pacific Islander, multiracial, other, and refused. NA indicates not applicable.

It was hypothesized that reduced probability of approach would act as a mediator of the association between Black race and reduced consent. In this framework, we assessed the natural indirect effect (NIE) via mediation and the natural direct effect (NDE) of Black race (reference group, White race) on consent rates. Odds ratios (ORs) are presented, adjusted for age, sex, pediatric intensive care unit length of stay before screening date, Pediatric Risk of Mortality III score, year of screening, elective or nonelective admission, origin of admission, and study type (observational or interventional).

eMethods. Data Source, Eligibility, Definitions, Statistical Analysis

eFigure 1. Directed Acyclic Graph Informing Regression Models

eFigure 2. Flowchart of Study Participation

eTable 1. Odds of Approach for Study Participation According to Race and Ethnicity, Preferred Language, Religion, or Social Deprivation Index

eTable 2. Reasons for Not Approaching by Race and Ethnicity

eTable 3. Reasons for Not Approaching by Language Preferred

eTable 4. Reasons for Not Approaching by Religion

eTable 5. Odds of Approach Stratified According to Study Type

eTable 6. Odds of Consent for Study Participation Among All Eligible Patients According to Race and Ethnicity, Preferred Language, Religion, or Social Deprivation Index

eTable 7. Odds of Consent Stratified According to Study Type

eTable 8. Odds of Consent for Study Participation Among Approached Patients According to Race and Ethnicity, Preferred Language, Religion, or Social Deprivation Index

eTable 9. Odds of Consent Restricted to Those Approached for a Study, Stratified According to Study Type

eTable 10. Odds of Approach, Consent, and Consent Restricted to Those Approached for a Study, With Multiple Imputation of Missing Data

eTable 11. Odds of Approach, Consent, and Consent Restricted to Those Approached for a Study, With All Exposures Included in the Same Model, Using the Dataset With Imputed Missing Variables

eTable 12. Results of a Multinomial Logistic Regression for Odds of Approached and Declined Consent and Approached and Provided Consent, With Not Approached Used as the Reference

Data Sharing Statement

Mayer SL , Brajcich MR , Juste L , Hsu JY , Yehya N. Racial and Ethnic Disparity in Approach for Pediatric Intensive Care Unit Research Participation. JAMA Netw Open. 2024;7(5):e2411375. doi:10.1001/jamanetworkopen.2024.11375

Racial and Ethnic Disparity in Approach for Pediatric Intensive Care Unit Research Participation

  • 1 Department of Anesthesiology and Critical Care Medicine, Children’s Hospital of Philadelphia and University of Pennsylvania, Philadelphia
  • 2 Center for Clinical Epidemiology and Biostatistics, Perelman School of Medicine, University of Pennsylvania, Philadelphia
  • 3 Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia

Question   Are sociodemographic factors associated with rates of approach and consent for pediatric intensive care unit (PICU) research?

Findings   This cohort study of 3154 children found disparities in approach and consent according to race and ethnicity, language, religion, and degree of social deprivation. Lower consent rates were partly mediated by lower approach rates, with reduced approach mediating approximately half of the lower rates of consent for Black children.

Meaning   In this study, multiple sociodemographic variables were associated with disparate consent rates for PICU research, and strategies to increase approaches could contribute to equitable enrollment in PICU studies.

Importance   While disparities in consent rates for research have been reported in multiple adult and pediatric settings, limited data informing enrollment in pediatric intensive care unit (PICU) research are available. Acute care settings such as the PICU present unique challenges for study enrollment, given the highly stressful and emotional environment for caregivers and the time-sensitive nature of the studies.

Objective   To determine whether race and ethnicity, language, religion, and Social Deprivation Index (SDI) were associated with disparate approach and consent rates in PICU research.

Design, Setting, and Participants   This retrospective cohort study was performed at the Children’s Hospital of Philadelphia PICU between July 1, 2011, and December 31, 2021. Participants included patients eligible for studies requiring prospective consent. Data were analyzed from February 2 to July 26, 2022.

Exposure   Exposures included race and ethnicity (Black, Hispanic, White, and other), language (Arabic, English, Spanish, and other), religion (Christian, Jewish, Muslim, none, and other), and SDI (composite of multiple socioeconomic indicators).

Main Outcomes and Measures   Multivariable regressions separately tested associations between the 4 exposures (race and ethnicity, language, religion, and SDI) and 3 outcomes (rates of approach among eligible patients, consent among eligible patients, and consent among those approached). The degree to which reduced rates of approach mediated the association between lower consent in Black children was also assessed.

Results   Of 3154 children included in the study (median age, 6 [IQR, 1.9-12.5] years; 1691 [53.6%] male), rates of approach and consent were lower for Black and Hispanic families and those of other races, speakers of Arabic and other languages, Muslim families, and those with worse SDI. Among children approached for research, lower consent odds persisted for those of Black race (unadjusted odds ratio [OR], 0.73 [95% CI, 0.55-0.97]; adjusted OR, 0.68 [95% CI, 0.49-0.93]) relative to White race. Mediation analysis revealed that 51.0% (95% CI, 11.8%-90.2%) of the reduced odds of consent for Black individuals was mediated by lower probability of approach.

Conclusions and Relevance   In this cohort study of consent rates for PICU research, multiple sociodemographic factors were associated with lower rates of consent, partly attributable to disparate rates of approach. These findings suggest opportunities for reducing disparities in PICU research participation.

Inclusive representation in research is important for ensuring generalizability of results, equitable access to medical advances, and improved trust between patients and clinicians. Disparities in research enrollment have been demonstrated in oncology, 1 - 3 COVID-19 trials, 4 , 5 and the general adult population. 6 , 7 Importantly, racial and ethnic disparities in research are often indexed to census data, 8 - 12 which may differ from the population eligible for studies. In US pediatric trials from 2011 to 2020, relative to census demographics, Black children appear overrepresented, whereas American Indian and Alaska Native, Asian, and Native Hawaiian and Other Pacific Islander children appear to be underrepresented, 11 and no racial or ethnic disparities were identified in pediatric drug or device studies. 9 However, defining overrepresentation relative to census data masks actual disparities in consent rates among eligible participants, given higher rates of hospital 13 and pediatric intensive care unit (PICU) 14 - 16 admission for Black children and residents of high-poverty neighborhoods. In addition to race and ethnicity, disparities in enrollment have also been demonstrated based on language preference, 17 - 19 although this has been less studied in critical care.

The PICU presents unique challenges for study enrollment, given the highly stressful and emotional environment for caregivers and the time-sensitive nature of enrolling participants with severe and rapidly changing disease. 20 , 21 This limits a research team’s ability to build trust and rapport with a family prior to study introduction and restricts families’ time to consider a study prior to consenting. Research investigating disparities in study enrollment in the PICU is limited to a reanalysis of a cluster-randomized interventional trial of sedation management (Randomized Evaluation of Sedation Titration for Respiratory Failure [RESTORE]) 22 and an evaluation of enrollment in a biorepository at a single center. 23 Both studies identified lower rates of research approach and consent of patients who were members of racial and ethnic minority groups. These results contrast with the conclusions of studies referencing census data, and further work is necessary to determine whether these disparities are seen across a larger sample of pediatric critical care research.

Therefore, we analyzed all research studies, interventional and observational, from a large academic PICU over 10 years that required prospective informed consent. We hypothesized that disparities in study approach and consent existed according to race and ethnicity, religion, spoken language, and socioeconomic status and that disparities in consent rate were partly mediated by the probability of approaching families to offer study participation.

This retrospective cohort study reviewed all screening and consent logs for all research studies prospectively enrolling in the Children’s Hospital of Philadelphia (CHOP) PICU from July 1, 2011, to December 31, 2021. The CHOP Institutional Review Board reviewed this study and provided an exempt determination from approval and informed consent because it was a retrospective cohort. The study was reported in accordance with the Strengthening the Reporting of Observational Studies in Epidemiology ( STROBE ) guideline. Additional details are provided in the eMethods in Supplement 1 .

All patients eligible for research requiring consent were potentially eligible for this study. Protocols for each study were examined for specific inclusion and exclusion criteria to determine eligibility for our study. Detailed eligibility criteria are provided in the eMethods in Supplement 1 .

Screening logs were linked to the electronic medical record (EMR) for data collection. We examined 4 distinct exposures: race and ethnicity, preferred language, religion, and Social Deprivation Index (SDI). Since our EMR permits Hispanic to be reported as either a race or ethnicity, we combined race and ethnicity into groupings of Hispanic, non-Hispanic Black, non-Hispanic White, and other (including American Indian or Alaska Native, Asian, Native Hawaiian or Other Pacific Islander, Indian, multiracial, other, and refused). Preferred language was encoded as Arabic, English, Spanish, and other. Religion was coded as Christian, Jewish, Muslim, none, and other. Zip code was used to assign the SDI, a validated composite of area-level deprivation extracted from the American Community Survey. 24

We modeled 3 distinct outcomes: approach for research, consent to research among eligible patients, and consent to research among those approached. Confounders included age, sex, PICU length of stay prior to screening, illness severity as defined by Pediatric Risk of Mortality (PRISM) III score at 12 hours (range, 0-74, with higher scores indicating greater mortality risk), year of screening, elective or nonelective admission, origin of admission (emergency department, inpatient floor, neonatal ICU, operating room, or outside hospital), and study type (observational or interventional). Entries reflecting the same patient eligible for different trials or on separate admissions were retained as separate encounters. Quantitative variables were treated as continuous variables.

Data were analyzed from February 2 to July 26, 2022. Separate logistic regression models were used to separately test the association between the 4 exposures of interest (race and ethnicity, language, religion, and SDI) and the 3 outcomes (approach for study [all eligible patients], consent to study [all eligible patients], and consent if approached [restricted to those approached]). All analyses (unadjusted and adjusted) used robust variance estimators to account for 2-way nonnested clustering (by patient and by study), and all multivariable analyses were adjusted for confounders selected using a causal framework. Exposures had less than 5% missingness except for language (187 [5.9%]), and as the nonrandom missingness of language made imputation conceptually difficult with the available variables, 25 only complete case analyses were conducted in the primary analyses. Exposures were analyzed in independent models, given the complex interactions and potential collinearity among race and ethnicity, language, religion, and SDI (eFigure 1 in Supplement 1 ). We performed multiple additional analyses. First, given possible differences between observational and interventional studies, we a priori tested for differential associations between exposures and outcomes according to study type. Second, to test whether data missingness affected our conclusions, we repeated analyses using multiple imputation by chained equations 26 (10 imputations over the entire cohort) to impute missing values for language, religion, and SDI. Third, we performed an exploratory analysis by including all exposure variables in the model, in addition to confounders, on the dataset with imputed missing data. Fourth, as an alternative method to model the data, we performed multinomial regression for odds of approach and declining consent and approach and providing consent, with not approached set as the reference.

Finally, causal mediation analysis 27 was performed to estimate the degree to which the association between Black race and odds of consent was mediated by the probability of being approached. This was a 2-step procedure where we first estimated the probability of being approached for all participants using a separate logistic regression model with all variables included as independent variables and being approached as the outcome. We then used this estimated probability of being approached as a mediator of the association between Black (compared with White) race and odds of consent. All analyses were conducted in Stata, version 18 (StataCorp LLC), with 2-sided P  < .05 considered significant for main analyses and 2-sided P  < .10 for assessing the significance of interaction terms.
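
For intuition about the “proportion mediated” figure reported in the Results, note that on the log odds-ratio scale the total effect decomposes approximately as the sum of the direct (NDE) and indirect (NIE) log-ORs. A generic Python sketch with hypothetical odds ratios (not the authors' Stata estimates):

```python
import math

def proportion_mediated(or_nde, or_nie):
    """Approximate share of a total effect operating through the
    mediator, computed on the log odds-ratio scale, where the total
    effect OR is roughly OR_NDE * OR_NIE."""
    log_nie = math.log(or_nie)
    log_total = math.log(or_nde) + log_nie
    return log_nie / log_total

# Hypothetical illustration: equal direct and indirect log-OR effects
# split the mediation evenly.
share = proportion_mediated(0.82, 0.82)
```

This is only a back-of-the-envelope decomposition; the paper's 51.0% estimate comes from the formal 2-step causal mediation procedure described above.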

Forty-four screening logs from studies (10 interventional trials and 34 observational studies) enrolling between 2011 and 2021 included 35 837 encounters. Two studies had no patients screened. Of the total, 31 585 encounters were excluded as ineligible according to the eligibility criteria for the parent studies, 180 for incomplete or ambiguous data regarding eligibility, 299 for suspended enrollment (including during the COVID-19 pandemic), 561 for patient location in the cardiac ICU (rather than PICU), and 58 for inability to determine whether or not a patient was approached (ie, unable to assign a primary outcome), leaving a total of 3154 patients eligible for our study (eFigure 2 in Supplement 1 ). Of these, sex was recorded as male for 1691 patients (53.6%) and female for 1461 (46.4%), with data missing for 2 (0.1%). Median age was 6.0 (IQR, 1.9-12.5) years, and median PRISM III score was 9 (IQR, 4-15). In terms of race and ethnicity, 855 patients (27.1%) were identified in the EMR as Black, 484 (15.3%) as Hispanic, 1204 (38.2%) as White, and 611 (19.4%) as other. English was the preferred language for most patients (2635 of 2967 with data available [88.8%]). Of the 3154 eligible patients, 896 patients were not approached, 816 were approached but declined consent, and 1442 (45.7% of eligible patients and 63.9% of approached patients) consented to studies ( Table ).

Relative to White children, lower odds of approach were seen for Black children (unadjusted odds ratio [OR], 0.64 [95% CI, 0.52-0.79]; adjusted OR [AOR], 0.60 [95% CI, 0.49-0.73]), Hispanic children (OR, 0.59 [95% CI, 0.44-0.80]; AOR, 0.57 [95% CI, 0.42-0.76]), and children of other race (OR, 0.47 [95% CI, 0.36-0.61]; AOR, 0.44 [95% CI, 0.35-0.56]) ( Figure 1 and eTable 1 in Supplement 1 ). Black and Hispanic families were more commonly not approached due to family unavailability, while Hispanic families and families of other race were more commonly not approached due to perceived language barriers (eTable 2 in Supplement 1 ).

Compared with families who preferred English (eTable 1 in Supplement 1 ), families who preferred Arabic (OR, 0.28 [95% CI, 0.16-0.50]; AOR, 0.28 [95% CI, 0.15-0.51]), Spanish (OR, 0.50 [95% CI, 0.29-0.85]; AOR, 0.57 [95% CI, 0.32-1.02]), or other language (OR, 0.12 [95% CI, 0.07-0.22]; AOR, 0.12 [95% CI, 0.07-0.21]) had lower odds for approach, primarily due to perceived language barriers (eTable 3 in Supplement 1 ). Muslim families had lower odds for approach than those with none for religious affiliation (OR, 0.46 [95% CI, 0.32-0.66]; AOR, 0.41 [95% CI, 0.28-0.59]), also primarily due to language barriers (eTable 4 in Supplement 1 ). Higher (worse) SDI was associated with lower odds of approach (OR, 0.95 [95% CI, 0.92-0.97] per 10-point change; AOR, 0.95 [95% CI, 0.93-0.98] per 10-point change).

In stratified analysis, odds of approach were more favorable for other language for interventional (OR, 0.25 [95% CI, 0.08-0.84]) rather than observational (OR, 0.09 [95% CI, 0.05-0.16]) studies ( P  = .09 for interaction) (eTable 5 in Supplement 1 ). No other variables had a differential association with approach according to study type.

Among eligible patients, Black children (OR, 0.65 [95% CI, 0.51-0.82]; AOR, 0.59 [95% CI, 0.46-0.77]) and those of other race (OR, 0.66 [95% CI, 0.50-0.86]; AOR, 0.58 [95% CI, 0.42-0.79]) had lower consent odds ( Figure 2 and eTable 6 in Supplement 1 ) relative to White children. Families preferring Arabic (OR, 0.48 [95% CI, 0.27-0.87]; AOR, 0.45 [95% CI, 0.24-0.85]) or other language (OR, 0.15 [95% CI, 0.07-0.30]; AOR, 0.14 [95% CI, 0.06-0.31]) were less likely to consent relative to English-speaking families. Muslim families also were less likely to consent (OR, 0.56 [95% CI, 0.38-0.82]; AOR, 0.56 [95% CI, 0.36-0.86]) relative to those with none for religious affiliation. Higher (worse) SDI had an OR less than 1 for odds of consent, but the results were not significant in adjusted analysis (OR, 0.97 [95% CI, 0.94-1.00] per 10-point change; AOR, 0.97 [95% CI, 0.94-1.01] per 10-point change). In stratified analysis, odds of consent were more favorable for interventional vs observational studies for other race (ORs, 0.83 [95% CI, 0.62-1.12] and 0.51 [95% CI, 0.35-0.74], respectively; P  = .009 for interaction) and other language (ORs, 0.44 [95% CI, 0.14-1.41] and 0.09 [95% CI, 0.04-0.21], respectively; P  = .03 for interaction) (eTable 7 in Supplement 1 ).

When restricted to patients approached for research participation, odds of consent did not differ by race, language, religion, or SDI, except for Black relative to White children (OR, 0.73 [95% CI, 0.55-0.97]; AOR, 0.68 [95% CI, 0.49-0.93]) and Jewish children relative to those with none for religious affiliation (OR, 0.56 [95% CI, 0.32-0.96]; AOR, 0.57 [95% CI, 0.31-1.04]) ( Figure 3 and eTable 8 in Supplement 1 ). In stratified analysis, odds of consent were more favorable for interventional rather than observational studies for other race (ORs, 1.19 [95% CI, 0.72-1.96] and 0.76 [95% CI, 0.48-1.20], respectively; P  = .07 for interaction) and other language (ORs, 1.24 [95% CI, 0.47-3.26] and 0.24 [95% CI, 0.07-0.87], respectively; P  = .08 for interaction) (eTable 9 in Supplement 1 ).

When we repeated the analyses after imputing missing language, religion, and SDI (eTable 10 in Supplement 1), we found conclusions similar to those of the primary analyses (compare Figure 1 with Figure 3). Overall effect sizes with imputed data were similar for associations with approach among all eligible patients, consent among all eligible patients, and consent among those approached (compared with eTables 1, 6, and 8 in Supplement 1).
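
Estimates from multiply imputed datasets are conventionally combined with Rubin's rules, pooling the per-imputation estimates and inflating the variance for between-imputation spread. A minimal sketch with hypothetical log-OR estimates (the study's imputation software and estimates are not shown here):

```python
import math
from statistics import mean, variance

def rubin_pool(estimates):
    """Pool point estimates (e.g., log-odds ratios) from m imputed
    datasets using Rubin's rules; each item is (estimate, variance)."""
    m = len(estimates)
    qs = [q for q, _ in estimates]
    us = [u for _, u in estimates]
    q_bar = mean(qs)             # pooled point estimate
    u_bar = mean(us)             # mean within-imputation variance
    b = variance(qs)             # between-imputation (sample) variance
    t = u_bar + (1 + 1 / m) * b  # total variance
    return q_bar, math.sqrt(t)

# Hypothetical log-OR estimates from 5 imputed datasets
est, se = rubin_pool([(-1.25, 0.09), (-1.31, 0.08), (-1.22, 0.10),
                      (-1.28, 0.09), (-1.30, 0.08)])
print(f"pooled OR {math.exp(est):.2f} (SE of log-OR {se:.2f})")
```

The between-imputation term is what makes pooled CIs appropriately wider when the imputations disagree.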

In an exploratory analysis, we examined whether conclusions were substantially affected by our choice to model race and ethnicity, language, religion, and SDI separately. Using the fully imputed dataset, we explored all exposure variables in a single model (eTable 11 in Supplement 1). In this analysis, effect sizes for associations between demographic variables and odds of approach or consent among all patients were somewhat attenuated toward the null, although overall conclusions did not change, with the same variables retaining statistical significance. When assessing the odds of consent among those approached, conclusions were unchanged, with Black race and Jewish religion associated with lower odds of consent among those approached, identical to our primary analyses (compared with Figure 3; eTables 8 and 10 in Supplement 1). Interestingly, in this fully adjusted model, the ORs for Black race and Jewish religion were more extreme than in the primary analyses, although we caution that, based on the assumptions laid out in our directed acyclic graph (eFigure 1 in Supplement 1), this analysis may have a biased interpretation.

Last, we explored multinomial regression as an alternative analytic method (eTable 12 in Supplement 1). All racial and ethnic minority patient groups (relative to White patients), all non–English-speaking patients (relative to English-speaking patients), and Muslim patients (relative to those with none for religious affiliation) had lower odds of approach overall and of consent relative to not being approached. Consistent with the primary analysis, for Black race and Jewish religion, effect sizes differed significantly between the approached-and-declined and approached-and-consented outcomes.

Given persistently lower odds of approach and consent for Black children in all of our analyses, causal mediation analysis was performed to determine the degree to which a lower estimated probability of being approached mediated overall lower rates of consent ( Figure 4 ). We found that 51.0% (95% CI, 11.8%-90.2%) of the lower rates of consent for Black children were mediated by the lower rates of approach.
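
One standard way to express a "percentage mediated" is the indirect effect divided by the total effect on the risk-difference scale. The sketch below uses that formulation with illustrative probabilities (not the study's estimates); the study's actual causal mediation estimator is not reproduced here:

```python
def proportion_mediated(p_exposed, p_unexposed, p_counterfactual):
    """Proportion of a total effect explained by a mediator, on the
    risk-difference scale: indirect effect / total effect.
    p_counterfactual is the hypothetical outcome probability for the
    exposed group had their mediator (here, being approached) followed
    the unexposed group's distribution."""
    total = p_unexposed - p_exposed          # total disparity in consent
    direct = p_unexposed - p_counterfactual  # disparity not via approach
    indirect = total - direct                # disparity via approach
    return indirect / total

# Illustrative consent probabilities only (not the study's estimates)
pm = proportion_mediated(p_exposed=0.30, p_unexposed=0.40,
                         p_counterfactual=0.351)
print(f"{pm:.1%} of the disparity mediated by approach")
```

With these illustrative inputs, about half of the disparity is attributed to the mediator, mirroring the structure (though not the data) of the 51.0% estimate above.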

Among children eligible for research studies in the CHOP PICU from 2011 to 2021, this cohort study found underrepresentation according to race and ethnicity (Black, Hispanic, and other), preferred language (Arabic or other), religion (Jewish, Muslim, or other), and socioeconomic status (higher SDI). These disparities were primarily attributable to lower odds of being approached by research teams, with attenuation of ORs when analyzing only patients who were approached. Overall, our results suggest that improved rates of representative enrollment can be achieved with increased rates of research approach, including among Black children. However, there may be additional reasons why Black patients are less likely to consent even if approached, which require additional investigation.

Our results are concordant with the 2 existing studies regarding consent disparities in the PICU, which showed lower consent in racial and ethnic minority groups and non-English speakers. 22 , 23 Our study adds to this literature by attempting to quantify the degree to which research populations represent the population eligible for the study, rather than the US population as a whole. Our findings of lower odds of consent for Black children contrast with some recent literature. In a review of 612 pediatric trials conducted between 2011 and 2020, Black patients were reported to be overrepresented relative to the US census (OR, 1.88 [95% CI, 1.87-1.89]), 11 although this was not confirmed in a review of pediatric studies listed on ClinicalTrials.gov between 2007 and 2020. 12 However, using the US census as a reference to identify disparate rates of trial enrollment can be problematic, 8 , 10 , 11 as previous studies have demonstrated disparities according to race and ethnicity and SDI in patients admitted to PICUs. 15 , 16 The magnitude of disparities in study enrollment can be further biased by systematic undercounting of Black individuals in the US census. 28 Even attempts to identify disparities in enrollment by referencing disease or hospitalization prevalence can be inaccurate, 12 as not every patient with a diagnosis is eligible for a study. Thus, our study may provide a less biased estimate of disparities in participation in PICU research by conditioning on patients actually eligible for the research. By including 42 total studies, both observational and interventional, our study evaluates the effects of multiple sociodemographic exposures across a variety of clinical and research scenarios. In general, even demographic groups with lower odds of consent were more likely to consent to interventional studies. Prior investigations on disparities in PICU research were limited to single-parent studies. 22 , 23 Our study provides some additional nuance regarding the motivations of families affecting their willingness to have their critically ill child participate in research.

Our results highlight the effect of communication between research staff and families on equitable inclusion in research. Differential rates of being approached for research may reflect implicit bias of the research team or the clinical attending physician in assessing likelihood of consent or uncertainty in building rapport with the family. Disparities according to language may reflect inadequacy of or discomfort using interpreter services, limited parental presence at bedside, or accessibility by telephone. Effective interpreter use could mitigate the lower rates of approach seen for non-English speakers, especially Spanish speakers, who were approached at lower rates but had similar odds of consent relative to English speakers. By contrast, speakers of languages other than Arabic, English, or Spanish had the lowest odds of consenting even when approached, which may be due to inadequacy of communication even if approached. Institutional review board inclusion and exclusion criteria varied in their consideration of families with non–English language preference. Parental absence at bedside accounted for nearly half of cases of inability to approach families in the RESTORE trial 22 and was also a factor in our cohort. Improved research staff training (including bias training), increased awareness of and access to interpretation services, availability of telephone consent, enrollment outside business hours (especially for studies not above minimal risk), greater use of video-conferencing technologies and web-based signatures, and deferring immediate consent in appropriate scenarios are all strategies to increase rates of approach.

Lower rates of approach for research participation may not, however, account for all of the observed disparities in study enrollment. Among Black children, half of the observed disparity in consent was mediated by the lower probability of being approached. The reasons why families of Black children are less likely to consent after being approached require dedicated exploration. Solving this problem is essential for ensuring that conclusions from research are generalizable and applicable to the population at risk for the conditions being studied.

While our study offers some perspectives on disparities in PICU-based research, additional work is necessary. A survey of primary care clinicians found greater mistrust of research among Black parents. 29 Attitudes toward research may differ in the intensive care setting, and existing studies can be affected by participation bias, limiting the utility of such studies to the PICU population. Survey studies carry the additional concerns of social desirability bias and acquiescence bias. Barriers to parental consent to PICU research after being approached may also include time constraints, feeling overwhelmed, perception of research being burdensome, health literacy, trust in the medical system, and research not being explained well. 23 , 30 , 31 Variations in these attitudes by social or cultural group are not well understood and likely differ between institutions. Racism in the health care field, for example, varies geographically in the US. 16 , 32 Additional research is needed to better understand why families are approached at varying frequencies and how a family decides whether to consent following approach, both globally and within specific institutions. Qualitative research methods may provide unique opportunities and insights to address these questions.

Our study has limitations. We were limited to review of consent logs from a single institution, and some findings may not generalize to all North American PICUs. However, our demographics and severity of illness are similar to many other academic PICUs engaged in research. Our study relied on documentation of race and ethnicity, language, and religion in the EMR, which may lead to exposure misclassification if categories were entered inaccurately. Minority races and ethnicities are more likely to have discordance between EMR-reported and self-reported race and ethnicity, 33 , 34 although a recent study assessing this in pediatrics found reasonably high overall concordance (κ = 0.77). 33 Local patient demographics led to many racial and ethnic, linguistic, or religious groups being combined into groups designated other. This resulted in substantial heterogeneity within these groups and limited generalizability outside the larger sociodemographic groups we report herein. Overall, these limitations related to documentation of our exposure variables require our results to be interpreted cautiously and highlight the importance of standardizing the recording of sociodemographic variables in future studies investigating disparities. The manner in which we handled non-English language for eligibility (ie, eligible unless the study protocol excluded it) was an effort to capture potential causes of disparities in approach and consent related to language, but this may introduce bias. Certain factors that may affect research participation, such as demographic characteristics of research coordinators, consenting parties, and medical teams, were not available. Last, while the availability of detailed screening logs helped to inform conclusions, we have only limited information about why patients were not approached and no information about why families declined.

In this cohort study of consent rates for PICU research participation, we found lower odds of enrollment according to race and ethnicity, language, religion, and degree of social deprivation. These disparities were largely attributable to disparate rates of approach for research participation, with the important exception of Black children, who were less likely to be enrolled even after accounting for lower rates of approach. Future research should seek to better understand cultural attitudes toward pediatric research in the PICU and test interventions to improve communication and trust between research teams and families.

Accepted for Publication: March 13, 2024.

Published: May 15, 2024. doi:10.1001/jamanetworkopen.2024.11375

Open Access: This is an open access article distributed under the terms of the CC-BY License . © 2024 Mayer SL et al. JAMA Network Open .

Corresponding Author: Nadir Yehya, MD, MSCE, Children’s Hospital of Philadelphia, 3401 Civic Center Blvd, CHOP Main, Room 9NW39, Philadelphia, PA 19104 ( [email protected] ).

Author Contributions: Drs Mayer and Yehya had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Mayer, Yehya.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Mayer, Juste, Yehya.

Critical review of the manuscript for important intellectual content: Mayer, Brajcich, Hsu, Yehya.

Statistical analysis: Mayer, Brajcich, Hsu, Yehya.

Administrative, technical, or material support: Juste.

Supervision: Yehya.

Conflict of Interest Disclosures: Ms Juste reported consulting for Fulcrum Therapeutics Inc outside the submitted work. Dr Yehya reported receiving grant funding from the National Institutes of Health during the conduct of the study and consulting for AstraZeneca outside the submitted work. No other disclosures were reported.

Funding/Support: This study was supported by the Endowed Chair of Pediatric Lung Injury at Children’s Hospital of Philadelphia (CHOP) (Dr Yehya).

Role of the Funder/Sponsor: The funder had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement 2 .

Additional Contributions: We wish to thank the multiple families involved in research at CHOP over the past decade, as well as the research coordinators and research assistants involved. Charlotte Z. Woods-Hill, MD, Children’s Hospital of Philadelphia and University of Pennsylvania, made important contributions to the manuscript, for which she was not compensated.



Epic science inside a cubic millimeter of brain.

Six layers of excitatory neurons color-coded by depth.

Credit: Google Research and Lichtman Lab

Anne J. Manning

Harvard Staff Writer

Researchers publish largest-ever dataset of neural connections

A cubic millimeter of brain tissue may not sound like much. But considering that this tiny sample contains 57,000 cells, 230 millimeters of blood vessels, and 150 million synapses, all amounting to 1,400 terabytes of data, Harvard and Google researchers have just accomplished something stupendous.

Led by Jeff Lichtman, the Jeremy R. Knowles Professor of Molecular and Cellular Biology and newly appointed dean of science , the Harvard team helped create the largest 3D brain reconstruction to date, showing in vivid detail each cell and its web of connections in a piece of temporal cortex about half the size of a rice grain.

Published in Science, the study is the latest development in a nearly 10-year collaboration with scientists at Google Research, combining Lichtman’s electron microscopy imaging with AI algorithms to color-code and reconstruct the extremely complex wiring of mammal brains. The paper’s three first co-authors are former Harvard postdoc Alexander Shapson-Coe, Michał Januszewski of Google Research, and Harvard postdoc Daniel Berger.

The ultimate goal, supported by the National Institutes of Health BRAIN Initiative , is to create a comprehensive, high-resolution map of a mouse’s neural wiring, which would entail about 1,000 times the amount of data the group just produced from the 1-cubic-millimeter fragment of human cortex.  

“The word ‘fragment’ is ironic,” Lichtman said. “A terabyte is, for most people, gigantic, yet a fragment of a human brain — just a minuscule, teeny-weeny little bit of human brain — is still thousands of terabytes.”  

Headshot of Jeff Lichtman.

Jeff Lichtman.

Kris Snibbe/Harvard Staff Photographer

The latest map contains never-before-seen details of brain structure, including a rare but powerful set of axons connected by up to 50 synapses. The team also noted oddities in the tissue, such as a small number of axons that formed extensive whorls. Because the sample was taken from a patient with epilepsy, the researchers don’t know whether such formations are pathological or simply rare.

Lichtman’s field is connectomics, which seeks to create comprehensive catalogs of brain structure, down to individual cells. Such completed maps would unlock insights into brain function and disease, about which scientists still know very little.

Google’s state-of-the-art AI algorithms allow for reconstruction and mapping of brain tissue in three dimensions. The team has also developed a suite of publicly available tools researchers can use to examine and annotate the connectome.

“Given the enormous investment put into this project, it was important to present the results in a way that anybody else can now go and benefit from them,” said Google collaborator Viren Jain.

Next the team will tackle the mouse hippocampal formation, which is important to neuroscience for its role in memory and neurological disease.


Economic complexity, greenfield investments, and energy innovation: policy implications for sustainable development goals in newly industrialised economies

  • Research Article
  • Published: 15 May 2024


  • Muhammad Farhan Bashir   ORCID: orcid.org/0000-0001-5103-4639 1 ,
  • Roula Inglesi-Lotz 2 ,
  • Ummara Razi 3 , 4 &
  • Luqman Shahzad 5  

The crucial role of environmental assessment quality has been recognised by environmental and sustainable development goals in addressing climate change challenges. By focusing on the key identifier of environmental assessment, progress can be made towards overcoming climate change issues effectively. The current study considers environmental commitments under COP28 to study the role of economic complexity, greenfield investments, and energy innovation in environmental degradation in newly industrialised economies from 1995 to 2021. We employ novel panel estimations from CS-ARDL, CS-DL, AMG, and CCEMG to confirm that economic growth and greenfield investments degrade environmental quality. On the other hand, energy innovation and urbanisation improve environmental sustainability. Lastly, we confirm the EKC hypothesis for economic complexity as well. Given the reported empirical findings, the study suggests policymakers must focus on economic complexity to transform industrial sectors’ economic potential. Furthermore, foreign investment projects must be linked with environmental goals to increase renewable energy capacity.


Data availability

Data and relevant materials will be available from the corresponding author.

Turkey, Thailand, South Africa, Philippines, Mexico, Malaysia, Indonesia, India, China, Brazil.

Hydrofluorocarbons, sulphur hexafluoride, and perfluorocarbons.


Wang Y, Kang L, Wu X, Xiao Y (2013) Estimating the environmental Kuznets curve for ecological footprint at the global level: a spatial econometric approach. Ecol Indic 34:15–21

World Bank (2022) World development indicators 2022. The World Bank 2022

Yahya F, Rafiq M (2020) Brownfield, greenfield, and renewable energy consumption: moderating role of effective governance. Energy Environ 31:405–423

Yilanci V, Pata UK (2020) Investigating the EKC hypothesis for China: the role of economic complexity on ecological footprint. Environ Sci Pollut Res 27:32683–32694

Yu Y, Jiang T, Li S et al (2020) Energy-related CO2 emissions and structural emissions’ reduction in China’s agriculture: an input–output perspective. J Clean Prod 276:124169


Acknowledgements

We acknowledge funding support from the Natural Science Foundation of Hunan Province (Grant No. 2023JJ40061).

Author information

Authors and Affiliations

College of Management, Shenzhen University, Shenzhen, Guangdong, People’s Republic of China

Muhammad Farhan Bashir

Department of Economics, University of Pretoria, Pretoria, South Africa

Roula Inglesi-Lotz

Department of Economic and Finance, Sunway Business School, Sunway University, Subang Jaya, Malaysia

Ummara Razi

Department of Business Administration, ILMA University, Karachi, Pakistan

Independent Researcher, Guangzhou, Guangdong, People’s Republic of China

Luqman Shahzad


Contributions

Muhammad Farhan Bashir: conceptualization, methodology, writing (draft and revision); Roula Inglesi-Lotz: data curation, methodology, data analysis, writing (draft); Ummara Razi: writing (draft); Luqman Shahzad: data curation, methodology.

Corresponding author

Correspondence to Muhammad Farhan Bashir.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Responsible Editor: Philippe Garrigues

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Bashir, M.F., Inglesi-Lotz, R., Razi, U. et al. Economic complexity, greenfield investments, and energy innovation: policy implications for sustainable development goals in newly industrialised economies. Environ Sci Pollut Res (2024). https://doi.org/10.1007/s11356-024-33433-4


Received : 07 November 2023

Accepted : 18 April 2024

Published : 15 May 2024

DOI : https://doi.org/10.1007/s11356-024-33433-4


Keywords

  • Newly industrialised countries
  • Energy transition
  • Environmental degradation
  • Economic complexity
  • Greenfield investment
