TLDR This

Summarize any text in a click.

TLDR This helps you summarize any piece of text into concise, easy to digest content so you can free yourself from information overload.


Enter an Article URL or paste your Text

Browser extensions.

Use TLDR This browser extensions to summarize any webpage in a click.

Chrome Web Store

Single platform, endless summaries

Transforming information overload into manageable insights — consistently striving for clarity.

Features 01

100% Automatic Article Summarization with just a click

Amid the sheer amount of information that bombards internet users from all sides, hardly anyone wants to devote their valuable time to reading long texts. TLDR This's clever AI analyzes any piece of text and summarizes it automatically, in a way that makes it easy for you to read, understand, and act on.

Features 02

Article Metadata Extraction

TLDR This, the online article summarizer tool, not only condenses lengthy articles into shorter, digestible content, but it also automatically extracts essential metadata such as author and date information, related images, and the title. Additionally, it estimates the reading time for news articles and blog posts, ensuring you have all the necessary information consolidated in one place for efficient reading.

  • Automated author-date extraction
  • Related images consolidation
  • Instant reading time estimation

Features 03

Distraction and ad-free reading

As an efficient article summarizer tool, TLDR This meticulously eliminates ads, popups, graphics, and other online distractions, providing you with a clean, uncluttered reading experience. Moreover, it enhances your focus and comprehension by presenting the essential content in a concise and straightforward manner, thus transforming the way you consume information online.

Features 04

Avoid the Clickbait Trap

TLDR This smartly selects the most relevant points from a text, filtering out weak arguments and baseless speculation. It allows for quick comprehension of the essence, without needing to sift through all paragraphs. By focusing on core substance and disregarding fluff, it enhances efficiency in consuming information, freeing more time for valuable content.

  • Filters weak arguments and speculation
  • Highlights most relevant points
  • Saves time by eliminating fluff

Who is TLDR This for?

TLDR This is a summarizing tool designed for students, writers, teachers, institutions, journalists, and any internet user who needs to quickly understand the essence of lengthy content.

Anyone with access to the Internet

TLDR This is for anyone who just needs to get the gist of a long article. You can read this summary, then go read the original article if you want to.

Students

TLDR This is for students studying for exams who are overwhelmed by information overload. This tool will help them summarize information into a concise, easy-to-digest piece of text.

Writers

TLDR This is for anyone who writes frequently and wants to quickly summarize their articles for easier writing and easier reading.

Teachers

TLDR This is for teachers who want to summarize a long document or chapter for their students.

Institutions

TLDR This is for corporations and institutions who want to condense a piece of content into a summary that is easy to digest for their employees/students.

Journalists

TLDR This is for journalists who need to summarize a long article for their newspaper or magazine.

Featured by the world's best websites

Our platform has been recognized and utilized by top-tier websites across the globe, solidifying our reputation for excellence and reliability in the digital world.

Focus on the Value, Not the Noise.

Use AI to summarize scientific articles in seconds

Watch SciSummary summarize scientific articles in seconds

Send a document, get a summary. It's that easy.


If GPT had a PhD

  • 50,000 words summarized
  • First article summarized per month can be up to 200,000 words
  • 50 documents indexed for semantic search
  • 100 Chat Messages
  • Unlimited article searches
  • Import and summarize references with the click of a button
  • 1,000,000 words summarized per month
  • Maximum document length of 200,000 words
  • Unlimited bulk summaries
  • 10,000 chat messages per month
  • 1,000 documents indexed for semantic search

Upsum.io

Summarize any text or PDF in seconds

Check your email for the summary.

Try PRO Upsum if you:

  • Need to summarize more than 2 pages
  • Need more accurate summaries
  • Want no limits on PDF summarization

Chat with your PDF documents

Why choose Upsum? Simple and adaptable plans for your needs.

Upsum.io

Who is UpSum for?


Research Papers

Analyze and understand large amounts of text, get insights, speed up research,   communicate findings efficiently and create concise notes, abstracts, literature reviews.


Lengthy Reports

Stay up-to-date with the latest developments. Improve the efficiency of your research and analysis, present  findings in a clear and concise manner.


Marketing Reports

Generate summaries of large amounts of text data and extract important information for analysis and understanding of trends and key insights.

UpSum to save hours

Unlock the core of any document with the UpSum algorithm. Experience the luxury of having the most vital information at your fingertips, whether it be a complex research paper, a pressing news article, or a critical business report. Save precious time and elevate your productivity.

  • Upload your documents
  • Set the length and the style of the summary
  • Download your summary

Full text input.

  • Research Papers and Research Articles
  • Business Reports and Legal Documents
  • News Reports and Blog Articles
  • Books and Novels

“UpSum.io is saving me hundreds of hours that I would have wasted on reading lengthy reports. With this tool I feel like I have developed a superpower. ”


Robert Jiménez

The latest from our blog.

  • UPDATE: Chat with your documents
  • Summarise research paper tools: A valuable resource for academics and researchers
  • Introduction to online summarizing tools: What are they and how do they work?
  • Asking the Right Questions: How to Extract Specific Information from Your PDFs with Upsum.io

Frequently asked questions.

We use state-of-the-art technology to summarize any text. Our core AI is based on the ChatGPT algorithm. ChatGPT uses a technique called extractive summarization to summarize text. Extractive summarization involves identifying and selecting the most important and relevant sentences or phrases from the original text and assembling them to create a summary. This is done by analyzing the text, determining the key concepts, entities, and the relationships between them, and then using this information to rank and select the most informative sentences. The technique builds on machine learning models such as transformer-based models (GPT-2, GPT-3, etc.), which are trained on large amounts of text data and are able to understand the meaning and context of the text. This enables the model to identify the most important and relevant information and generate a summary that accurately represents the main ideas and key points of the original text.

We aren't just a summary tool: we use the latest AI models to make sure the summary is not just a shorter version of the text, but an actual summarization of it, with its most important points and key takeaways.
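
As a rough illustration of extractive summarization in general (not Upsum's actual implementation), the sketch below splits a text into sentences, scores each sentence by the total TF-IDF weight of its terms, and keeps the highest-scoring sentences in their original order. The function name and parameters are invented for the example.

```python
# A minimal, illustrative extractive summarizer: rank sentences by the sum of
# their TF-IDF term weights and keep the top-scoring ones in original order.
# Generic sketch only; not the implementation behind any tool described above.
import re

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer


def extractive_summary(text: str, num_sentences: int = 3) -> str:
    # Naive sentence splitting on end punctuation; real tools use proper tokenizers.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= num_sentences:
        return text

    # Score each sentence by the total TF-IDF weight of its terms.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = np.asarray(tfidf.sum(axis=1)).ravel()

    # Keep the highest-scoring sentences, preserving their original order.
    top = sorted(np.argsort(scores)[-num_sentences:])
    return " ".join(sentences[i] for i in top)


if __name__ == "__main__":
    print(extractive_summary("Paste a long article here. " * 10, num_sentences=2))
```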

Absolutely! Our tool is available for anyone to use, free of charge. With the free version, you may be limited in the amount of text you can input at once. If you require more flexibility and advanced features, we offer a premium subscription option. This will give you the ability to input longer text, access additional formats, and customize the summary length to suit your needs.

At our company, we are committed to conducting our business with the highest level of integrity and ethical standards. We understand that trust is a fundamental element of any relationship, and we take great care to earn and maintain the trust of our customers. We are dedicated to providing the best service and the most advanced features to meet your needs. We are constantly working to improve our tool and stay ahead of the latest trends and technologies to ensure that our customers have the most effective and efficient solution available.


Our free tool allows you to easily upload and condense texts of up to 3000 words, which is roughly equivalent to four standard pages. The resulting summary will be a concise 200-300-word summary, or roughly half a page. Upgrade to a premium account to enjoy even more flexibility, such as the ability to upload longer texts and customize your summary length to your exact needs.

Research Paper Summarizer by AcademicHelp

Free and easy-to-use summarization.

Summarize content in various formats

Different Summary Options

Effortlessly condense lengthy texts

Clear and To-The-Point Paragraphs

Tailor summaries to your liking

Customizable Text Length

Free research paper summarizer.


Research faster with genei

Automatically summarise background reading and produce blogs, articles, and reports faster.

"I could totally see this startup playing the same role as a Grammarly: a helpful extension of workflows that optimizes the way people who write for a living, write." Natasha Mascarenhas, Senior Reporter at TechCrunch

Y Combinator Summer 2021

Genei is part of Y Combinator, a US startup accelerator with over 2,000 companies including Stripe, Airbnb, Reddit, and Twitch.

TechCrunch favourite startups 2021

Genei was recently named among TechCrunch's favourite startups of summer 2021.

Oxford University All Innovate 2020

Prize winning company in Oxford University's prestigious "All Innovate" startup competition.

Trusted by thought leaders and experts

"genei is a company that excites me a lot. Their AI has the potential to offer massive productivity boosts in research and writing."

"We can perform research using genei's keyword extraction tool to optimize our article content better than before."

"Genei’s summarisation provides a whole new dimension to our research and reporting, and helps contribute towards the clarity and conciseness of our work."

Add, organise, and manage information with ease.

95% of users say genei enables them to work more productively. Documents can be stored in customisable projects and folders, whilst content can be linked to any part of a document to generate automatic references.

Ask questions and our AI will find answers.

95% of users say they find better answers and insights in their work when using genei.

Finish your reading list faster.

AI-powered summarisation and keyword extraction for any group of PDFs or webpages. 98% of users say genei saves them time by paraphrasing complex ideas and enabling them to find crucial information faster.

Improve the quality & efficiency of your research today

Never miss important reading again.

Our Chrome extension means you can summarize webpages or save them for later reading as you browse.


  • Import, view, summarise & analyse PDFs and webpages
  • Document management and file storage system
  • Full notepad & annotation capabilities 
  • In-built citation management and reference generator
  • Export functionality
  • Everything in basic
  • 70% higher quality AI
  • Access to GPT-3 - the world's most advanced language-based AI
  • Multi-document summarisation, search, and question answering
  • Rephrasing and Paraphrasing functionality

Loved by thousands of users worldwide

Find out how genei can benefit you.

Empower Your Academic Journey

AI Summarizer & Summary Generator

Jenni AI stands as a comprehensive academic writing assistant, encompassing an AI summarizer and summary generator among its key features. This specialized functionality is meticulously crafted to facilitate the creation of concise summaries, effectively condensing extensive research papers, articles, or essays. Jenni AI simplifies the process, enabling you to focus more on your analysis and less on summarization. Our tool is built with the ethos of promoting authentic academic endeavors, not replacing them.


Loved by over 3 million academics


Trusted By Academics Worldwide

Academics from leading institutions rely on Jenni AI for efficient summary generation


Crafting Quality Academic Writing Solutions with Our Text Summarizer

Discover how Jenni AI stands out as the solution for your summarization needs

Effortless Summarization

Jenni AI takes the hassle out of summarization. Just paste your text, and watch as Jenni AI distills the core ideas into a clear, concise summary.

Get started


Interactive Editing

Don’t just settle for the first draft. Interact with the summary, tweak, and refine it to meet your specific requirements, ensuring that every summary is precisely what you need.

Learning and Improvement

Jenni AI is not just a tool, but a companion in your academic journey. Learn from the summarization process and improve your writing skills with every interaction.


Our Commitment to Academic Integrity

At Jenni AI, we uphold the principle of academic integrity with the utmost regard. Our tool is devised to assist, not to replace your original work.

How Does the Jenni AI Summarizing Tool Work?

Navigating the Realm of Academic Writing Has Never Been Easier

Create Your Account

Sign up for a free Jenni AI account to embark on a simplified summarization journey.

Paste Your Text

Copy and paste the text you wish to summarize. Whether it's a research article, essay, or a complex thesis, Jenni AI is here to assist.

Generate Your Summary

Ask Jenni to summarize and watch as it employs advanced algorithms to distill the core essence of your text, presenting a coherent and concise summary.

Review and edit your summary. Jenni AI's interactive platform allows you to tweak and refine the summary to align perfectly with your academic objectives.

What Scholars Are Saying

Hear from our satisfied users and elevate your writing to the next level


I thought AI writing was useless. Then I found Jenni AI, the AI-powered assistant for academic writing. It turned out to be much more advanced than I ever could have imagined. Jenni AI = ChatGPT x 10.


Charlie Cuddy

@sonofgorkhali

· 23 Aug

Love this use of AI to assist with, not replace, writing! Keep crushing it @Davidjpark96 💪


Waqar Younas, PhD

@waqaryofficial

· 6 Apr

4/9 Jenni AI's Outline Builder is a game-changer for organizing your thoughts and structuring your content. Create detailed outlines effortlessly, ensuring your writing is clear and coherent. #OutlineBuilder #WritingTools #JenniAI


I started with Jenni-who & Jenni-what. But now I can't write without Jenni. I love Jenni AI and am amazed to see how far Jenni has come. Kudos to http://Jenni.AI team.


Jenni is perfect for writing research docs, SOPs, study projects presentations 👌🏽


Stéphane Prud'homme

http://jenni.ai is awesome and super useful! thanks to @Davidjpark96 and @whoisjenniai fyi @Phd_jeu @DoctoralStories @WriteThatPhD

Frequently asked questions

How does Jenni AI generate summaries?

Is Jenni AI suitable for all academic fields?

How does the citation helper work?

Can I use Jenni AI for professional or non-academic writing?

How does Jenni AI help with writer’s block?

How does Jenni AI compare to other summarization tools?

Choosing the Right Academic Writing Companion

Get ready to make an informed decision and uncover the key reasons why Jenni AI is your ultimate tool for academic excellence.

Feature-by-feature: Jenni AI vs. competitors

Academic Orientation
  • Jenni AI: Designed with academic rigor in mind, ensuring your summaries uphold scholarly standards.
  • Competitors: Often lack academic focus, potentially diluting the essence of scholarly texts.

Contextual Understanding
  • Jenni AI: Employs advanced AI to grasp the context, ensuring summaries are meaningful and coherent.
  • Competitors: May struggle with contextual understanding, leading to disjointed or misleading summaries.

Customization
  • Jenni AI: Offers customization options to tailor summaries according to your specific needs and preferences.
  • Competitors: Generic summarization, often with limited customization, risking loss of critical information.

User-Friendly Interface
  • Jenni AI: An intuitive interface makes summarization a breeze, enhancing the user experience.
  • Competitors: Clunky interfaces can hinder the summarization process, making it less user-friendly.

Learning and Improvement
  • Jenni AI: Promotes an interactive learning environment, aiding in improving your summarization skills over time.
  • Competitors: Merely provide summarization with no added value in terms of learning or skill enhancement.

Ready to Elevate Your Academic Writing?

Create your free Jenni AI account today and discover a new horizon of academic excellence!

Research Paper Summary Generator – Online & Free Tool for Students

Use our research paper summarizer to shorten any text in 3 easy steps (a rough code sketch of the same workflow follows the list):

  • Enter the text you want to reduce.
  • Choose how long you want the summary to be.
  • Press the “summarize” button and get the new text.
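
As a generic sketch of the same workflow in code, the snippet below uses the open-source Hugging Face transformers library (not this site's own tool): paste in a text, set the desired summary length, and print the shorter version. Note that the length here is controlled in tokens rather than sentences, and the model name is just one common choice assumed for illustration.

```python
# A generic enter-text / pick-length / summarize workflow with the open-source
# Hugging Face transformers library. This is not this site's tool; the model
# checkpoint is an illustrative assumption.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Step 1: enter the text you want to reduce (replace the placeholder with a real article).
article = "Paste the full text you want to reduce here..."

# Step 2: choose how long the summary should be (in tokens here, not sentences).
summary = summarizer(article, max_length=120, min_length=40, do_sample=False)

# Step 3: read the new, shorter text.
print(summary[0]["summary_text"])
```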


There's no doubt that recapping is an essential skill for students. Most academic writings require you to summarize literature sources for background information, or you need to condense tons of materials a night before an exam. All this might take a lot of time and effort.

We created a research paper summarizer to help you sum up any academic text in a few clicks. Continue reading to learn more about the free summary writer, and don’t miss excellent tips on how to cut down words manually.

  • 🔧 When to Apply the Summarizer?
  • 🤔 How to Summarize?
  • 📝 Summary Examples
  • 🤩 5 Extra Tips
  • 🔗 References

🔧 Summary of a Research Paper Online – Application

High school and college students often need to deal with long texts. Consider the most common situations in which our research paper summary generator might be helpful.

  • You don't have time for home reading. The summary generator can narrow the texts down to the main points and eliminate irrelevant details.
  • You need to summarize articles for literature analysis. The tool can help you identify the key ideas from literary sources and compare them easily.
  • You want to reword someone's ideas and create an original text. The summarizing tool helps narrow down the text without plagiarism.

🤔 How Do You Summarize a Research Paper?

Our tool is a great solution to summarize a research paper online. However, you should know the general summing-up rules to get the best results from the tool.

📝 Research Paper Summary Generator Examples

We used an online research paper summarizer to cut down two articles in a few easy clicks. Have a look at these examples!

Source: What Affects Rural Ecological Environment Governance Efficiency? Evidence from China

The research paper focuses on the influencing factors that impact the sustainable economic and social development of vast rural areas of China. According to the author, protecting the ecological environment has become crucial with rapid economic growth. This paper separately examines the influencing factors in the eastern, central, and western regions. The results show that the main factors that positively affect the efficiency of rural ecological environment governance are: the level of rural economic development, rural public participation, and the size of village committees. Conversely, environmental protection social organizations have a negative influence, preventing the productivity of rural ecological environment governance. In conclusion, the key issue in improving rural ecological environment governance in China is to create differentiated regional coordinated governance mechanisms.

Source: Is artificial intelligence better at assessing heart health?

The article discusses the use of AI in assessing and diagnosing cardiac function. The author believes this technology will be beneficial when deployed across the clinical system nationwide. The study provides the results of the experiment conducted in 2020 by Smidt Heart Institute and Stanford University. They developed one of the first AI technologies to assess cardiac function, specifically, left ventricular ejection fraction. The key findings are: Cardiologists agreed with the AI initial assessment more frequently. The physicians could not tell which evaluations were made by AI and which were made by professionals. The AI assistance saved cardiologists a lot of time. In conclusion, the author believes that this level of evidence offers clinicians extra assurance to adopt artificial intelligence more broadly to increase efficiency and quality.

🤩 5 Extra Tips to Summarize a Research Paper

Summarizing a research paper might be incredibly challenging. We recommend using our free research paper summary generator to save time on other tasks. Here are some additional tips to help you create an accurate summary:

  • Ensure you include the main ideas from all research paper parts: introduction, literature review, methods, and results.
  • To identify the main idea of the research paper, look for a hypothesis or a thesis statement.
  • Don't try to fit all of the experiment's results and statistics into the summary.
  • Always mention the author of the original paper to avoid plagiarism.
  • Don't add your personal opinion on the research paper you summarize.

❓ Research Paper Summarizer FAQ

❓ How to Use the Research Paper Summarizer?

The research paper summarizer is a free online tool that can summarize any paper in almost no time. The summarizer is available online and has a simple interface. You only need to copy and paste the original passage, choose the length of your summary, and click the "summarize" button.

❓ How to Write a Summary of a Research Paper?

You can summarize a research paper manually or with the help of a free research paper summary generator. Pay attention to the text's central idea expressed in the thesis statement to make an accurate summary. Ensure you carefully reflect on the author's main point and eliminate the less important details.

❓ How Long Is a Summary?

The length of a summary might vary depending on your goal. For example, if you summarize several sources for your literature review, it's better to keep them short. However, if you write an essay based on one article or book, you might want to provide a more extended summary.

  • Summary For Research Paper: How To Write - eLearning Industry
  • Summarizing a Research Article 1997-2006, University of Washington
  • How To Write a Summary in 8 Steps (With Examples) | Indeed.com
  • Thesis Statement - Writing Your Research Papers - SJSU Research Guides at San José State University Library
  • How to reduce word count without reducing content


Research Assistant

AI-Powered Research Summarizer

  • Summarize academic papers: Quickly understand the main points of research papers without reading the entire document.
  • Extract insights from reports: Identify key findings and trends from industry reports, surveys, or reviews.
  • Prepare for presentations: Create concise summaries of research materials to include in your presentations or talking points.
  • Enhance your understanding: Improve your comprehension of complex subjects by summarizing the main ideas and insights.
  • Save time: Reduce the time spent on reading lengthy research documents by focusing on the most important points.


Best Summarizing Tool for Academic Texts


⚙️ 11 Best Summarizing Tools

  • 🤔 How to Summarize an Article without Plagiarizing?
  • 📝 How to Proofread Your Summary?
  • ⭐ Best Summarizing Tool: the Benefits
  • 🔗 References

✅ 11 Best Summary Generators to Consider

We're here to offer a whole list of text summarizers in this article. Every tool has a strong algorithm, so you won't have to proofread much to make the summary look hand-written. Using such websites can make your studying more productive, since you can focus on more important tasks and leave this routine work to online tools.

In this blog post, you’ll also find tips on successful summarizing and proofreading. These are basic skills that you will need for many assignments. To summarize text better, you’ll need to read it critically, spot the main idea, underline the essential points, and so on. As for proofreading, this skill is useful not only to students but also to professional writers.

To summarize a text, a paragraph, or even an essay, you can find a lot of tools online. Here we'll list some of them, including those that let you choose the percentage of similarity and define the length of the text you'll get.

If you're asked to summarize an article or paragraph in your own words, one of these summary makers can help you get fast results. Their user-friendly design and accurate algorithms play an important role in summary development.

1. Summarize Bot

Summarize Bot is an easy-to-use, ad-free tool for fast and accurate summary creation. With its help, you can save research time by compressing texts. The summary maker shows the reading time it saves you, along with other useful statistics. To summarize any text, you only need to send it as a message on Facebook or add the bot to Slack. The app works with various file types, including PDF, MP3, DOC, TXT, JPG, and more, and supports almost every language.

The only drawback is the absence of a web version. If you don't have a Facebook account and don't want to install Slack, you won't be able to enjoy this app's features.

2. SMMRY

SMMRY has everything you need for a perfect summary: an easy-to-use design, lots of features, and advanced settings (URL input). If you're looking for a web service that changes the wording, this one will not disappoint you.

SMMRY allows you to summarize the text not only by copy-pasting but also with the file uploading or URL inserting. The last one is especially interesting. With this option, you don’t have to edit an article in any way. Just put the URL into the field and get the result. The tool is ad-free and doesn’t require registration.

3. Jasper

Jasper is an AI-powered summary generator. It creates unique, plagiarism-free summaries, so it's a perfect option for those who don't want to change the wording on their own.

When using this tool, you can summarize a text of up to 12,000 characters (roughly 2,000-3,000 words) in more than 30 languages. Although Jasper doesn’t have a free plan, it offers a 7-day free trial to let you see whether this tool meets your needs.

4. Quillbot

Quillbot offers many tools for students and writers, and summarizer is one of them. With this tool, you can customize your summary length and choose between two modes: paragraph and key sentences. The former presents a summary as a coherent paragraph, while the latter gives you key ideas of the text in the form of bullet points.

What is great about Quillbot is that you can use it for free. However, there’s a limitation: you can only summarize a text of up to 1,200 words on a free plan. A premium plan extends this limit to 6,000 words. In addition, you don’t need to register to use Quillbot summarizer; just input your text and get the result.

5. TLDR This

TLDR This is a summary generator that can help you quickly summarize long text. You can paste your paper directly into the tool or provide it with a URL of the article you want to shorten.

With a free plan, you have unlimited attempts to summarize texts in the form of key sentences. TLDR This also provides advanced AI summaries, but you have only 10 of these on a free plan. To get more of them, you have to go premium, which starts at $4 per month.

6. HIX.AI

HIX.AI summarizer is an AI-powered tool that can help you summarize texts of up to 10,000 characters. If you use Google Chrome or Microsoft Edge, HIX.AI has a convenient extension for you.

You can use the tool for free to check 1,000 words per week. Along with this, you get access to over 120 other AI-powered tools to help you with your writing tasks. These include essay checker, essay rewriter, essay topic generator, and many others. With a premium plan, which starts at $19.99 a month billed yearly, you also get access to GPT-4 and other advanced features.

7. Scholarcy

Scholarcy is one of the best tools for summarizing academic articles. It presents summaries in the form of flashcards, which can be downloaded as Word, PowerPoint, or Markdown files.

This tool has some outstanding features for students and researchers. For example, it creates a referenced summary, which makes it easier for you to cite the information correctly in your paper. In addition, it can find the references from the summarized article and provide you with open-access links to them. The tool can also extract tables and figures from the text and let you download them as Excel files.

Unregistered users can summarize one article per day. With a free registered account, you can make 3 summary flashcards a day. Moreover, Scholarcy offers free Chrome, Edge, and Firefox extensions that allow you to summarize short and medium-sized articles.

8. Frase

Frase is an AI-powered summary generator that is available for free. The tool can summarize texts of up to 600-700 words. Therefore, it's good if you want to, say, summarize the main points of your short essay or blog post to write a conclusion. However, if you need summaries of long research articles, you should choose another option.

This summary generator allows you to adjust the level of creativity, meaning that you can generate original, plagiarism-free summaries. Frase also has lots of extra features for SEO and project management, which makes it a good option for website content creators.

9. Resoomer

Resoomer is another paraphrasing and summarizing tool that works with several languages. You’re free to use the app in English, French, German, Italian, and Spanish.

This online tool may be considered one of the best text summarizers in the IvyPanda ranking because it offers many custom settings. For example, you can switch to Manual mode and set the size of the summary (in percent or words). You can also set the number of keywords for the tool to focus on.

Among its drawbacks, the software works only with argumentative texts and won't reword other types correctly. Also, the free version contains lots of ads and does not allow users to import files. The premium subscription costs 4.90€ per month or 39.90€ per year.

10. Summarizer

Summarizer is another good way to summarize any article you read online. This simple Chrome extension will provide you with a summary within a couple of clicks. Install the add-on, open the article or select the piece of text you want to summarize, and click the "Summarize" button.

The software processes various texts in your browser, including long PDF articles. The resulting summary is only about 7% of the original article's length. This app is great for anyone who doesn't want to read long publications. However, it doesn't allow you to import files or download the result.

11. Summary Generator

The last tool on our list is Summary Generator, an article and essay summarizer that can be helpful in college or university. It is free, open software that everyone can use.

The tool has only two buttons: one to summarize the document and the other to clear the field. With this software, you'll get a brief summary based on your text. You don't have to register to get your document shortened.

As for the website's drawbacks, it has too many ads and no options to summarize a URL or uploaded document, set the length of the result, or export it to popular file types.

These were the best online summarizing tools to deal with the task effectively. We hope some of them became your favorite summarizers, and you’ll use them often in the future.

Not sure if a summarizer will work for your paper? Check out this short tutorial on how the text summarizing tool can come in handy for essay writing.

🤔 Techniques & Tools to Summarize without Plagiarizing

Of course, there are times when you can't depend on online tools. For example, you may not be allowed to use them in class, or you may need to highlight specific paragraphs, and customizing the tool's settings would take more time and effort than writing the summary yourself.

In this chapter, you'll learn to summarize a long article, essay, research paper, report, or book chapter with the help of practical tips, a logical approach, and a little bit of creativity.

Here are some methods to let you create a fantastic summary.

  • Know your goal. To choose the right route to your goal, you need to understand it perfectly. Why should you summarize the text? What is its style: scientific or publicistic? Who is the author? Where was the article published? There are many significant questions that can help to adapt your text better. Develop a short interview to use during the summary writing. Include all the important information on where you need to post the text and for what purpose.
  • Thorough reading. To systemize your thoughts about the text, it's important to investigate it in detail. Read the text two or more times to grasp the basic ideas of the article and understand its goals and motives. Give yourself all the time you need to process the text; it often takes a couple of hours to draw the right conclusions from a study or to paraphrase the text properly.
  • Highlight the main idea. When writing a summary, you bear a responsibility to the author. Not only do you have to extract the central idea of the text, but you also have to paraphrase it correctly. It's important not to misrepresent any of the author's conclusions in your summary. That's why you should find the main idea and make sure you can paraphrase it without a loss of meaning. If possible, read a couple of professional reviews of the targeted book chapter or article; it can help you analyze the text better.
  • Mark the arguments. The process of summarizing is always easier if you have a marker to highlight important details in the text. If you don't have a printed text, you can always use the highlight tool in Microsoft Word. Try to mark all the arguments, statistics, and facts in the text so you can represent them in your summary. This information will turn into the key elements of the summary you create, so pay attention to exactly what you highlight.
  • Take care of plagiarism. Before you start writing, learn what percentage of originality you should aim for. Different projects have different requirements, and they determine how much effort you should put into writing to get a summary your teacher will like. Depending on the required originality, build a plan for your short text, and allow yourself to copy as much information as is permitted to save time.
  • Build a structure. With the help of the key elements you've highlighted in the text, you can create a solid structure that includes all the interesting facts and arguments. Develop an outline according to the basic structure: introduction, body, and conclusion. Even if your summary is extremely short, the main idea should appear in both the first and last sentences.
  • Write a draft. If you're not a professional writer, it can be extremely difficult to produce a text with the correct word count on the first try. We advise you to write a general draft first: include all the information without worrying about the number of sentences.
  • Cut out the unnecessary parts. At this step, you should edit the draft and eliminate the unnecessary parts. Keep in mind the number of sentences your summary must contain, and make sure the main point is fully represented in the text. You can cut out any sentence except those that state the significant arguments. While editing, also check for the following:
  • Wordiness – delete unnecessary words that make the text difficult to understand
  • Common mistakes – mistakes in academic papers are largely the same, so it's helpful to keep an article like this one at hand when proofreading
  • Appropriate terminology – each topic has a list of terminology you can use
  • Facts and statistics – you can accidentally write a wrong year or percentage, so make sure to avoid these mistakes
  • Quotes – every quote should be written correctly and have a link to its source.

📝 How to Proofread the Summarized Text?

Now that you know how to summarize an article, it's time to edit your text, whether it's your own writing or a summary generator's output.

In this chapter, you'll see the basic ways to proofread any type of text: an academic paper (essay, research paper, etc.), an article, a letter, a book chapter, and so on.

  • Use a thesaurus. Are there times when you can't remember an appropriate synonym? Use a thesaurus or a similar service from time to time. It can expand your vocabulary a lot and help you find the right words even in the most challenging situations.
  • Pay attention to easily confused words. This is especially important if you edit a nonfiction text; there are a number of words people often confuse without even realizing it. English Oxford Living Dictionaries has a list of these word pairs so you won't miss any.
  • Proofread one type of mistake at a time. To edit a paper properly, don't split your attention between grammar and punctuation; this way you can miss dozens of mistakes. To get more accurate results, read the text once to edit the style, a second time to eliminate grammar mistakes, and a third time to proofread punctuation. Take as many passes as you need to concentrate on each type.
  • Take a rest from your paper. If you use an online summarizing tool, you can skip this step. But if you've been writing a paper for several hours and are now trying to edit it without taking a break, that may be a bad idea. Why? Because without a fresh pair of eyes, there's a good chance you won't spot even obvious mistakes. Give yourself some time to slightly forget the text: go for a walk or call a friend, and then return to the work with fresh eyes.
  • Hire a proofreader. If you need perfect results, think about hiring a professional. Their skills and qualifications guarantee a polished text without mistakes or style issues. Once you find a proofreader, you can optimize your workflow. Search for specialists on freelance websites like Upwork; they are convenient and safe to use. Of course, there's one flaw to consider: hiring a pro is expensive, so everyone should decide on their own whether to spend the money.
  • Switch papers with a friend. If you can't afford a professional editor, there's a less expensive option: ask a friend to look through your paper and proofread theirs in return. Make sure you both edit manually, not just run the text through Microsoft Office or similar software. Although there are great grammar tools, they still can't spot many mistakes that are obvious to a human.
  • Use grammar checking tools. We recommend that you not depend entirely on grammar tools, but the assistance they offer is invaluable. Start your proofreading by scanning your text with Grammarly or a similar tool. Such services detect many types of errors, including confused word pairs, punctuation, misspellings, wordiness, incorrect word order, unfinished sentences, and so on. Of course, you should never accept corrections without thinking about each specific issue: tools not only miss a lot of mistakes but can also be wrong about your errors.
  • Read aloud. It's amazing how different a written text can sound when read aloud. If you practice this proofreading method, you know that many mistakes can be spotted if you actually pronounce the text. Why does it happen? People understand information better when they perceive it with different senses. You can use this trick even in learning: memorize materials by reading, listening, and speaking.

These tips are developed to help students proofread their papers easily. We hope this chapter and the post itself create a helpful guide on how to summarize an article.

Here you found the best summarizing tools, which are accessible online and completely free, and learned to summarize various texts and articles on your own.

Updated: Dec 19th, 2023

  • Summarizing: Academic Integrity at MIT
  • 4 of the Best Online Summarizer Tools to Shorten Text: maketecheasier
  • Summarizing: University of Toronto
  • 5 Easy Summarizing Strategies for Students: ThoughtCo.
  • Summarizing: Texas A&M University Writing Center
  • Comparative Study of Text Summarization Methods: Semantic Scholar
  • How to Write a Summary: UTEP
  • How to Write a Summary: UW

This page is for anyone interested in creating a summary for an essay or any other written work. It lists the best online summarizing tools and gives advice on how to summarize an article well. Finally, you'll find tips on how to properly proofread your summary.

Upload your PDF, EPUB, DOCX, ODT, or TXT file here.

PDF, EPUB, DOCX, ODT, TXT

Or import your images / photos by clicking below

(JPEG / PNG)


Go to the main ideas in your texts and summarize them relevantly in one click.



Identify the important ideas and facts

To help you summarize and analyze your argumentative texts, articles, scientific texts, history texts, and well-structured analyses of works of art, Resoomer provides you with a "Summary text tool": an educational tool that identifies and summarizes the important ideas and facts of your documents. Summarize in one click, go to the main idea, or skim through so that you can quickly interpret your texts and develop your syntheses.

Who is Resoomer for?

COLLEGE STUDENTS

With Resoomer, summarize your Wikipedia pages in a matter of seconds and boost your productivity.

TEACHERS

Identify the most important ideas and arguments of your texts so that you can prepare your lessons.

JOURNALISTS

If you prefer simplified information that summarizes the major events, then Resoomer is for you!

Identify and understand the facts and ideas of texts that are part of the current news and events, very quickly.

PRESS RELEASES

With the help of Resoomer, go to the main idea of your articles to write your arguments and critiques.

Save time and summarize your digital documents for relevant, fast uptake of information.

Need to summarize your books' presentations? Identify the arguments in a matter of seconds.

Too many documents? Simplify your reading by using Resoomer like a desktop tool.

Need to summarize your chapters? With Resoomer, go to the heart of your ideas.

Identify your books' or your authors' ideas quickly. Summarize the most important main points.

From now on, create quick summaries of your artists' presentations and their artworks.

INSTITUTIONS

Identify the most important passages in texts that contain a lot of words for detailed analysis.


SUMMARIZE YOUR ONLINE ARTICLES IN 1-CLICK

Download the extension for your browser

Surf online and save time when reading on the internet! Resoomer summarizes your articles in 500 words so that you can go straight to the main idea of your text.

HOW DOES RESOOMER WORK?

Popular articles.

  • Summary and synthesis: the difference?
  • The text summarizer
  • Summarize a text
  • Summarize a document online
  • Summarize an online article
  • Read more and faster documents
  • Argue and find arguments in a text
  • "Learn more": How to increase your knowledge?


10 Powerful AI Tools for Academic Research

By Serra Ardem

AI is no longer science fiction, but a powerful ally in the academic realm. With AI by their side, researchers can free themselves from the burden of tedious tasks, and push the boundaries of knowledge. However, they must use AI carefully and ethically, as these practices introduce new considerations regarding data integrity, bias mitigation, and the preservation of academic rigor.

In this blog, we will:

  • Highlight the increasing role of AI in academic research
  • List 10 best AI tools for academic research, with a focus on each one’s strengths
  • Share 5 best practices on how to use AI tools for academic research

Let’s dig in…

The Role of AI in Academic Research

AI tools for academic research hold immense potential, as they can analyze massive datasets and identify complex patterns. These tools can assist in generating new research questions and hypotheses, navigate mountains of academic literature to find relevant information, and automate tedious tasks like data entry.


Let’s take a look at the benefits AI tools offer for academic research:

  • Supercharged literature reviews: AI can sift through vast amounts of academic literature, and pinpoint relevant studies with far greater speed and accuracy than manual searches.
  • Accelerated data analysis: AI tools can rapidly analyze large datasets and uncover intricate insights that might otherwise be overlooked, or time-consuming to identify manually.
  • Enhanced research quality: Helping with grammar checking, citation formatting, and data visualization, AI tools can lead to a more polished and impactful final product.
  • Automation of repetitive tasks: By automating routine tasks, AI can save researchers time and effort, allowing them to focus on more intellectually demanding tasks of their research.
  • Predictive modeling and forecasting: AI algorithms can develop predictive models and forecasts, aiding researchers in making informed decisions and projections in various fields.
  • Cross-disciplinary collaboration: AI fosters collaboration between researchers from different disciplines by facilitating communication through shared data analysis and interpretation.

Now let’s move on to our list of 10 powerful AI tools for academic research, which you can refer to for a streamlined, refined workflow. From formulating research questions to organizing findings, these tools can offer solutions for every step of your research.

1. HyperWrite

For: hypothesis generation

HyperWrite’s Research Hypothesis Generator is perfect for students and academic researchers who want to formulate clear and concise hypotheses. All you have to do is enter your research topic and objectives into the provided fields, and then the tool will let its AI generate a testable hypothesis. You can review the generated hypothesis, make any necessary edits, and use it to guide your research process.

Pricing: You can have a limited free trial, but you need to choose at least the Premium Plan for additional access. See more on pricing here.

The web page of Hyperwrite's Research Hypothesis Generator.

2. Semantic Scholar

For: literature review and management

With over 200 million academic papers sourced, Semantic Scholar is one of the best AI tools for literature review. Mainly, it helps researchers to understand a paper at a glance. You can scan papers faster with the TLDRs (Too Long; Didn’t Read), or generate your own questions about the paper for the AI to answer. You can also organize papers in your own library, and get AI-powered paper recommendations for further research.

Pricing: free

Semantic Scholar's web page on personalized AI-powered paper recommendations.

3. Elicit

For: summarizing papers

Elicit is reportedly a huge time-saver: its users save up to 5 hours per week. With a database of 125 million papers, the tool enables you to get one-sentence abstract AI summaries and extract details from a paper into an organized table. You can also find common themes and concepts across many papers. Keep in mind that Elicit works best with empirical domains that involve experiments and concrete results, like biomedicine and machine learning.

Pricing: Free plan offers 5,000 credits one time. See more on pricing here.

The homepage of Elicit, one of the AI tools for academic research.

4. Maestra

For: transcribing interviews

Supporting 125+ languages, Maestra’s interview transcription software will save you from the tedious task of manual transcription so you can dedicate more time to analyzing and interpreting your research data. Just upload your audio or video file to the tool, select the audio language, and click “Submit”. Maestra will convert your interview into text instantly, and with very high accuracy. You can always use the tool’s built-in text editor to make changes, and Maestra Teams to collaborate with fellow researchers on the transcript.

Pricing: With the "Pay As You Go" plan, you can pay for the amount of work done. See more on pricing here.

How to transcribe research interviews with Maestra's AI Interview Transcription Software.

5. ATLAS.ti

For: qualitative data analysis

Whether you’re working with interview transcripts, focus group discussions, or open-ended surveys, ATLAS.ti provides a set of tools to help you extract meaningful insights from your data. You can analyze texts to uncover hidden patterns embedded in responses, or create a visualization of terms that appear most often in your research. Plus, features like sentiment analysis can identify emotional undercurrents within your data.

Pricing: Offers a variety of licenses for different purposes. See more on pricing here.

The homepage of ATLAS.ti.

6. Power BI

For: quantitative data analysis

Microsoft’s Power BI offers AI Insights to consolidate data from various sources, analyze trends, and create interactive dashboards. One feature is “Natural Language Query”, where you can directly type your question and get quick insights about your data. Two other important features are “Anomaly Detection”, which can detect unexpected patterns, and “Decomposition Tree”, which can be utilized for root cause analysis.

Pricing: Included in a free account for Microsoft Fabric Preview. See more on pricing here.

The homepage of Microsoft's Power BI.

7. Paperpal

For: writing research papers

As a popular AI writing assistant for academic papers, Paperpal is trained and built on 20+ years of scholarly knowledge. You can generate outlines, titles, abstracts, and keywords to kickstart your writing and structure your research effectively. With its ability to understand academic context, the tool can also come up with subject-specific language suggestions, and trim your paper to meet journal limits.

Pricing: Free plan offers 5 uses of AI features per day. See more on pricing here.

The homepage of Paperpal, one of the best AI tools for academic research.

8. Scribbr

For: proofreading

With Scribbr’s AI Proofreader by your side, you can make your academic writing more clear and easy to read. The tool will first scan your document to catch mistakes. Then it will fix grammatical, spelling and punctuation errors while also suggesting fluency corrections. It is really easy to use (you can apply or reject corrections with 1-click), and works directly in a DOCX file.

Pricing: The free version gives a report of your issues but does not correct them. See more on pricing here.

The web page of Scribbr's AI Proofreader.

9. Quillbot

For: detecting AI-generated content

Want to make sure your research paper does not include AI-generated content? Quillbot’s AI Detector can identify certain indicators like repetitive words, awkward phrases, and an unnatural flow. It’ll then show a percentage representing the amount of AI-generated content within your text. The tool has a very user-friendly interface, and you can have an unlimited number of checks.

The interface of Quillbot's Free AI Detector.

10. Lateral

For: organizing documents

Lateral will help you keep everything in one place and easily find what you’re looking for. 

With auto-generated tables, you can keep track of all your findings and never lose a reference. Plus, Lateral uses its own machine learning technology (LIP API) to make content suggestions. With its “AI-Powered Concepts” feature, you can name a Concept, and the tool will recommend relevant text across all your papers.

Pricing: Free version offers 500 Page Credits one-time. See more on pricing here.

Lateral's web page showcasing the smart features of the tool.

How to Use AI Tools for Research: 5 Best Practices

Before we conclude our blog, we want to list 5 best practices to adopt when using AI tools for academic research. They will ensure you’re getting the most out of AI technology in your academic pursuits while maintaining ethical standards in your work.

  • Always remember that AI is an enhancer, not a replacement. While it can excel at tasks like literature review and data analysis, it cannot replicate the critical thinking and creativity that define strong research. Researchers should leverage AI for repetitive tasks, but dedicate their own expertise to interpret results and draw conclusions.
  • Verify results. Don’t take AI for granted. Yes, it can be incredibly efficient, but results still require validation to prevent misleading or inaccurate results. Review them thoroughly to ensure they align with your research goals and existing knowledge in the field.
  • Guard yourself against bias. AI tools for academic research are trained on existing data, which can contain social biases. You must critically evaluate the underlying assumptions used by the AI model, and ask if they are valid or relevant to your research question. You can also minimize bias by incorporating data from various sources that represent diverse perspectives and demographics.
  • Embrace open science. Sharing your AI workflow and findings can inspire others, leading to innovative applications of AI tools. Open science also promotes responsible AI development in research, as it fosters transparency and collaboration among scholars.
  • Stay informed about developments in the field. AI tools for academic research are constantly evolving, and your work can benefit from recent advancements. You can follow numerous blogs and newsletters in the area (The Rundown AI is a great one), join online communities, or participate in workshops and training programs. Moreover, you can connect with AI researchers whose work aligns with your research interests.


Frequently Asked Questions

Is ChatGPT good for academic research?

ChatGPT can be a valuable tool for supporting your academic research, but it has limitations. You can use it for brainstorming and idea generation, identifying relevant resources, or drafting text. However, ChatGPT can’t guarantee the information it provides is entirely accurate or unbiased. In short, you can use it as a starting point, but never rely solely on its output.

Can I use AI for my thesis?

Yes, but it shouldn’t replace your own work. It can help you identify research gaps, formulate a strong thesis statement, and synthesize existing knowledge to support your argument. You can always reach out to your advisor and discuss how you plan to use AI tools for academic research.

Can AI write review articles?

AI can analyze vast amounts of information and summarize research papers much faster than humans, which can be a big time-saver in the literature review stage. Yet it can struggle with critical thinking and adding its own analysis to the review. Plus, AI-generated text can lack the originality and unique voice that a human writer brings to a review.

Can professors detect AI writing?

Yes, they can detect AI writing in several ways. Software programs like Turnitin’s AI Writing Detection can analyze text for signs of AI generation. Furthermore, experienced professors who have read many student papers can often develop a gut feeling about whether a paper was written by a human or machine. However, highly sophisticated AI may be harder to detect than more basic versions.

Can I do a PhD in artificial intelligence?

Yes, you can pursue a PhD in artificial intelligence or a related field such as computer science, machine learning, or data science. Many universities worldwide offer programs where you can delve deep into specific areas like natural language processing, computer vision, and AI ethics. Overall, pursuing a PhD in AI can lead to exciting opportunities in academia, industry research labs, and tech companies.

This blog shared 10 powerful AI tools for academic research, and highlighted each tool’s specific function and strengths. It also explained the increasing role of AI in academia, and listed 5 best practices on how to adopt AI research tools ethically.

AI tools hold potential for even greater integration and impact on research. They are likely to become more interconnected, which can lead to groundbreaking discoveries at the intersection of seemingly disparate fields. Yet, as AI becomes more powerful, ethical concerns like bias and fairness will need to be addressed. In short, AI tools for academic research should be utilized carefully, with a keen awareness of their capabilities and limitations.

Serra Ardem

About Serra Ardem

Serra Ardem is a freelance writer and editor based in Istanbul. For the last 8 years, she has been collaborating with brands and businesses to tell their unique story and develop their verbal identity.

  • Methodology
  • Open access
  • Published: 20 May 2024

Fuzzy cognitive mapping in participatory research and decision making: a practice review

  • Iván Sarmiento 1 , 2 ,
  • Anne Cockcroft 1 ,
  • Anna Dion 1 ,
  • Loubna Belaid 1 ,
  • Hilah Silver 1 ,
  • Katherine Pizarro 1 ,
  • Juan Pimentel 1 , 3 ,
  • Elyse Tratt 4 ,
  • Lashanda Skerritt 1 ,
  • Mona Z. Ghadirian 1 ,
  • Marie-Catherine Gagnon-Dufresne 1 , 5 &
  • Neil Andersson 1 , 6  

Archives of Public Health volume 82, Article number: 76 (2024)

Fuzzy cognitive mapping (FCM) is a graphic technique to describe causal understanding in a wide range of applications. This practice review summarises the experience of a group of participatory research specialists and trainees who used FCM to include stakeholder views in addressing health challenges. From a meeting of the research group, this practice review reports 25 experiences with FCM in nine countries between 2016 and 2023.

The methods, challenges and adjustments focus on participatory research practice. FCM portrayed multiple sources of knowledge: stakeholder knowledge, systematic reviews of literature, and survey data. Methodological advances included techniques to contrast and combine maps from different sources using Bayesian procedures, protocols to enhance the quality of data collection, and tools to facilitate analysis. Summary graphs communicating FCM findings sacrificed detail but facilitated stakeholder discussion of the most important relationships. We used maps not as predictive models but to surface and share perspectives of how change could happen and to inform dialogue. Analysis included simple manual techniques and sophisticated computer-based solutions. A wide range of experience in initiating, drawing, analysing, and communicating the maps illustrates FCM flexibility for different contexts and skill bases.

Conclusions

A strong core procedure can contribute to more robust applications of the technique while adapting FCM for different research settings. Decision-making often involves choices between plausible interventions in a context of uncertainty and multiple possible answers to the same question. FCM offers systematic and traceable ways to document, contrast and sometimes to combine perspectives, incorporating stakeholder experience and causal models to inform decision-making. Different depths of FCM analysis open opportunities for applying the technique in skill-limited settings.

Collaborative generation of knowledge recognises people’s right to be involved in decisions that shape their lives [ 1 ]. Their participation makes research and interventions more relevant to local context and priorities and, thus, more likely to be effective [ 2 ]. A commitment to the co-creation of knowledge proposes that people make better decisions when they have the benefit of both scientific and other forms of knowledge. These include context-specific understanding, knowledge claims based on local settings, experience and practice, and organisational know-how [ 3 ]. Participatory research expands the idea of what counts as evidence, opening space for the experience and knowledge of stakeholders [ 4 , 5 ]. The challenge is how to create a level playing field where diverse knowledges can contribute equally. We present fuzzy cognitive mapping (FCM) as a rigorous and transparent tool to combine different perspectives into composite theories to guide shared decision-making [ 6 , 7 , 8 ].

In the early 1980s, the combination of fuzzy logic [ 9 ] with cognitive mapping of decision making [ 10 , 11 ] led to FCM [ 12 ]. Fuzzy cognitive maps are directed graphs [ 13 ] whose nodes correspond to factors or concepts and whose arrows describe directed influences. Using this basic structure for causal relationships, users can represent their knowledge of complex systems, including many interacting concepts. Many variables are not easily measured or estimated with precision, or are hard to circumscribe within a formal definition, for example, wellbeing, cultural safety, or racism [ 14 , 15 ]. Nevertheless, their causes and effects are important to capture for decision-making. Fuzzy cognitive maps offer a formal structure to include these kinds of variables in the analysis of complex health issues.

The flexibility of the technique allows for systematic mapping of knowledge from multiple sources to identify influences on a particular outcome while supporting collective learning and decision making [ 16 ]. FCM has been used across multiple fields with applications that include modelling, prediction, monitoring, decision-making, and management [ 17 , 18 , 19 , 20 ]. FCM has been applied in medicine to aid diagnosis and treatment decision-making [ 21 , 22 ]. FCM has also supported community and stakeholder engagement in environmental sciences [ 23 , 24 ] and health by examining conventional and Indigenous understanding of causes of diabetes [ 25 ].

Many implementation details contribute to interpretability of FCM, a common concern for researchers new to the technique. This review addresses these practical details when we used FCM to include local stakeholder understanding of causes of health issues in co-design of actions to tackle those issues. The focus is on transparent mapping of stakeholder experience and how it meets requirements for trustworthy data collection and initial analysis. The methods section describes what fuzzy cognitive maps are and how we documented our experience using them. We describe tools and procedures for researchers using FCM to incorporate different knowledges in health research. The results summarize experience in four stages of mapping: framing the outcome of concern, drawing the maps, performing basic analyses, and using the resulting maps. The discussion contrasts our practices with those described in the literature, identifying potential limitations and suggesting future directions.

Methods of the practice review

Fuzzy cognitive maps are graphs of causal understanding [ 6 ]. The unit of meaning in fuzzy cognitive mapping is a relationship, which corresponds to two nodes (concepts) linked by an arrow. Arrows originate in the causes and point to their outcomes. A cause can lead to an outcome directly or through multiple pathways (succession of arrows). Figure  1 shows a fuzzy cognitive map of causes of healthy maternity according to indigenous traditional midwives in the South of Mexico [ 26 ].

Fig. 1 Fuzzy cognitive map of causes of a healthy maternity according to indigenous traditional midwives in Guerrero, Mexico. ( a ) Graphical display of a fuzzy cognitive map. The boxes are nodes, and the arrows are directed edges. Solid lines indicate positive influences, and dashed lines indicate negative influences. Thicker lines correspond to stronger effects. ( b ) Adjacency matrix with the same content as the map. Rows and columns correspond to the nodes. The value in each cell indicates the strength of the influence of one node (row) on another (column). Reproduced without changes with permission from the authors of [ 26 ]

The “fuzzy” appellation refers to weights that indicate the strength of relationships between concepts. For example, a numeric scale with values between one and five might correspond to very low, low, medium, high or very high influence. If the value is 0, there is no causal relationship, and the concepts are independent. Negative weights indicate a causal decrease in the outcome, and positive weights indicate a causal increase in the outcome. A tabular display of the map, an adjacency matrix, has the concepts in columns and rows. The value in a cell indicates the weight of the influence of the row concept on the column concept (Fig. 1). A map can also be represented as an edge list, which shows relationships across three columns: causes (originating node), outcomes (landing node) and weights. Some maps use ranges of variability for the weights (grey fuzzy cognitive maps) [ 27 ] or fuzzy scales to indicate changing states of factors [ 21 ].
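
To make these two tabular forms concrete, the following is a minimal Python sketch; the concept names and weights are invented for illustration and are not taken from any map in the review.

```python
# Minimal sketch of the two tabular forms described above: an edge list and an
# adjacency matrix. Concept names and weights are invented for illustration.

concepts = ["quality of care", "population health", "violence"]

# Edge list: (cause, outcome, weight). Positive weights increase the outcome,
# negative weights decrease it; a weight of 0 means no causal link.
edge_list = [
    ("quality of care", "population health", 0.8),
    ("violence", "population health", -0.6),
    ("violence", "violence", 0.4),  # feedback loop
]

# Adjacency matrix: rows are causes, columns are outcomes.
index = {c: i for i, c in enumerate(concepts)}
matrix = [[0.0] * len(concepts) for _ in concepts]
for cause, outcome, weight in edge_list:
    matrix[index[cause]][index[outcome]] = weight

for label, row in zip(concepts, matrix):
    print(label, row)
```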

Following rules of logical inference, the relationships between concepts can suggest potential explanations for how they work together to influence a specific outcome [ 28 , 29 ]. One might interpret a cognitive map as a series of if-then rules [ 9 ] describing causal relationships between concepts [ 12 ]. For example, if the quality of health care increases, then the population’s health should also improve. Maps can incorporate feedback loops [ 30 ], such as: if violence increases, then more violence happens.
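
A small Python snippet can illustrate this if-then reading of a map’s edges; the concepts and weights are hypothetical.

```python
# Reading each weighted edge of a map as an if-then statement.
# Concepts and weights are hypothetical.
edges = [
    ("quality of health care", "population health", 0.7),
    ("violence", "violence", 0.4),  # feedback loop: violence feeds more violence
]
for cause, outcome, weight in edges:
    direction = "increases" if weight > 0 else "decreases"
    print(f"If {cause} increases, then {outcome} {direction} (strength {abs(weight)}).")
```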

An international participatory research group met in Montreal, Canada, to share FCM experience and discuss its application. FCM implementation in all cases shared a common ten-step protocol [ 6 ], with results of almost all exercises published in peer-reviewed journals. The lead author of each publication presented their work and corroborated that the synthesis reflected the most important aspects of their experiences. A webpage details the methods, materials, and tools members of the group have used in practice ( https://ciet.org/fcm ).

As a multilevel training exercise, the meeting included graduate students, emerging researchers with their first research projects and experienced FCM researchers. Nine researchers presented their experience, challenges and lessons learned. The senior co-author (NA) led a four-round nominal group discussion covering consecutive mapping stages: (1) who defined the research issue and how, (2) procedures for building maps and the role of participants at each point, (3) analysis tools and methods and (4) use of the maps. Before the session, participants received the published papers concerning the FCM projects under discussion and the guiding questions about the four themes. After the meeting, the first author (IS) transcribed the session recording and drew on it to draft the manuscript. All authors subsequently contributed to the manuscript, which follows the approach used to describe our work with narrative evaluations [ 31 ]. The summary of FCM methods used, which constitutes the results of the practice review, follows the categories used in the nominal group to inquire about FCM implementation.

Researchers reported their practice in three different FCM applications. Most cases mapped stakeholder knowledge in the context of participatory research [ 26 , 32 , 33 , 34 , 35 , 36 , 37 , 38 , 39 ]. They also described using FCM to contextualise mixed-methods literature reviews in stakeholder perspectives [ 5 , 40 , 41 , 42 ] and to conduct secondary quantitative analysis of surveys [ 43 , 44 , 45 ]. A fourth FCM application, not discussed in detail in this paper, is in graduate teaching. A master’s program in Colombia and a PhD course in Canada incorporated the creation of cognitive maps as a learning tool, with each student building a map to describe how their research project could contribute to promoting change.

Table  1 summarises the characteristics of the 25 FCM practices reviewed. The number of maps varied from a handful to dozens. Table  2 summarises the processes of defining the issue, drawing, analysing, and using the three different kinds of maps: stakeholder knowledge, mixed-methods literature reviews, and questionnaire data. Table  3 summarises the FCM processes in each of the four mapping stages. Of 23 FCM publications from the group since 2017 (see Additional File 1 ), four describe methodological contributions [ 4 , 5 , 6 , 35 ], and the rest describe the use of FCM in specific contexts.

Stage 1. Who defined the issue and how

Focus group discussions or conversations with partners were the most common methods for defining the issue to be mapped. Cases #6 (pregnant and parenting adolescents) and #20 (women’s satisfaction with HIV care) used literature maps to identify priorities with participants in Canada, while cases #5 (immigrant’s unmet postpartum care needs) and #7 (child protection involvement) contextualised literature-based maps with stakeholder knowledge. In cases #15 and #16 on violence against women and suicide among men in Botswana, community members involved in another project raised these issues as concerns. Two cases used FCM in the secondary analysis of survey data to answer questions defined by the research teams (#1 Mexico dengue) and academic groups (#2 Colombia medical education).

All cases used a participatory research framework [ 46 ]. FCM worked both in well-established partnerships (#8 and #9 involved researchers and Indigenous communities in Mexico, and #20 a long-standing partnership with women living with HIV) and in the early stages of trust building (#6 adolescent parents in Canada).

Almost all cases reported two levels of ethical review: institutional boards linked with universities and local entities (health ministries and authorities, advisory boards, community organisations or leaders). Most review boards were unfamiliar with FCM, and some requested additional descriptions and protocols to help them understand the method. In Guatemala (#17) and Nunavik (#18), Indigenous authorities and a steering committee requested a mapping session themselves before approving the project. Most projects used oral consent, mainly due to the involvement of participants with a wide range of literacy levels and in contexts of mistrust about potential misuse of signed documents (Indigenous groups in #8) or during virtual mapping sessions (women living with HIV in #20).

Strengths-based or problem-focused

Most cases followed a strengths-based approach, focusing on what influences a positive outcome (for example, what causes good maternal health instead of what causes maternal morbidity or mortality). Some cases created two maps: one about causes of a positive outcome and one about causes of the corresponding negative outcome (#8 causes and risks for safe birth in Indigenous communities, and #10 causes and protectors of short birth interval). Building two maps helped to unearth additional actionable concepts but was time-consuming and tiring for the stakeholders creating the maps.

Broad concepts or tight questions

A recurring issue was how broad the question or focus should be. A broad question about ‘what influences wellbeing’ fitted well with the holistic perspectives of Mayan communities but posed challenges for drawing, analysing, and communicating maps with many concepts and interactions (#17, Guatemala). A very narrowly defined outcome, on the other hand, might miss potentially actionable causes.

Stage 2. Drawing maps

In the group’s experience, most people readily understand how to make maps, given their basic structure (cause, arrow and consequence). Based on their collective experience, the research group developed a protocol to increase replicability and data quality in FCM, particularly for stakeholder maps, which often involve multiple facilitators and different languages. Creating maps from literature reviews and questionnaire data did not have some of the complications of creating maps with stakeholders but also benefitted from detailed protocols.

Stakeholder maps

The mapping cases reviewed here included mappers ranging from highly trained university researchers (#9 on safe birth) to people without formal education who spoke only their local language (#8 in Mexico, #10 and #21 in Nigeria, #11 and #12 in Uganda). Meeting participants discussed the advantages and disadvantages of group and individual maps. Groups stimulate the emergence of ideas but bring the challenge of ensuring all participants are heard. Careful training of facilitators and managing the mapping sessions as nominal groups helped to increase the participation of quieter people. Groups of not more than five mappers were much easier to facilitate without losing the creative turbulence of a group. Most cases relied on small homogeneous groups, run separately by age and gender, to avoid power imbalances among the map authors. Individual sessions worked well for sensitive topics. They accommodated schedules of busy participants and worked for mappers not linked to a specific community.

Basic equipment for mapping is inexpensive and almost universally available. Most researchers in our group used either sticky notes on a large sheet of paper or magnetic tiles on a metal whiteboard (Fig.  2 ). Some researchers had worked directly with free software to draw the electronic maps ( www.mentalmodeler.com or www.yworks.com/products/yed ), while others digitised the physical maps, often from a photograph. Three cases conducted FCM over the internet or telephone, with individual mappers (#9, #20 and #25) constructing their maps online in real-time.

Fig. 2 Fuzzy cognitive maps from group sessions in Uganda and Nigeria. ( a ) A group of women in Uganda discusses what contributes to increasing institutional childbirths in rural communities. They used sticky notes and markers on white paper to draw the maps. ( b ) A group of men in Northern Nigeria uses a whiteboard and magnetic tiles to draw a map on causes of short birth intervals

Group mapping sessions typically had a facilitator and a reporter to take notes on the discussions. Reporters are crucial in recording explanations about the meaning of concepts and links. Experienced researchers stressed that careful training of facilitators and reporters, including several rounds of field practice, is essential to ensure quality. We developed materials to support training and quality control of mapping sessions (#21 Nigeria), available at www.ciet.org/fcm. In Nigeria (#21), the research team successfully field-tested the use of Zoom on mobile handsets connected to the internet over the cellular network, allowing international researchers to participate virtually in FCM sessions in classrooms and communities.

Many mappers in community groups had limited or no schooling and only verbal use of their local language. It worked well in these cases for the facilitators to write the concepts on the labels in English or Spanish, while the discussion was in the local language. Facilitators frequently reminded the groups about the labels of the concepts in the local language. In case #16 in Botswana, more literate groups wrote the concepts in Setswana, and the facilitators later translated them into English. Most researchers found that the FCM graphical format helped to overcome language barriers, and it seems to have worked equally well with literate and illiterate groups. Additional file 2 lists common pitfalls and potential solutions during group mapping sessions.

Identifying causes of the issue

Some mapping sessions started by asking participants what the central issue of the map meant to them. This was useful for comparing participant views about the main topic (#8 and #9 maternal health in Indigenous communities and #20 satisfaction with HIV care) and in understanding local concepts of broad topics (#17 Indigenous wellbeing). In Nigeria (#21), group discussions defined elements of adolescent sexual and reproductive health before undertaking FCM, and facilitators shared the list of elements with participants in mapping sessions. In Nunavik (#13 Canada, Inuit women on HPV self-sampling), participating women received an initial presentation to create a common understanding to discuss HPV self-sampling, an unfamiliar technique in Inuit communities.

Some cases created stakeholder maps from scratch, asking participants what they thought would cause the main outcome (#8 to 10, 14 to 19, 21, and 23 to 25). Other cases reviewed the literature first and presented the findings to participants (#5, 7 and 20). In these cases, the facilitators reminded participants that literature maps might not represent their experiences. They encouraged them to add, remove and reorganise concepts, relationships, and weights until they felt the map represented their knowledge.

Once participants had identified concepts (nodes), facilitators had to carefully consider the wording of the labels to represent the meaning of each node and identify potential duplicates. They confirmed duplications with participants and removed repeated nodes. In case #19, participating girls first had one-on-one conversations to discuss and prioritise what they thought contributed to a balanced diet. In a second activity, the actual mapping session, participants organised those concepts into categories and voted on their priorities for action. The Nigerian cases, with large numbers of maps, maintained an iterative list of labels: new concepts were added after each mapping session so that later sessions could use standard labels, once the mappers confirmed that the standard wording conveyed what they wanted to express. This step is helpful for the combination of maps that we describe in stage 3.

Drawing arrows

Some maps showed mainly direct influences on the central issue, while others identified multiple relationships between concepts in the map. When the central issue was too broad, participants found it hard to assign relationships between concepts (#17). Facilitators frequently asked participants to clarify the meaning of proposed causal pathways or how they perceived one factor would lead to another and to the main outcome (see Additional file 2 ). To ensure arrows were appropriately labelled as positive or negative, some facilitators used standardised if-then questions to draw the relationships. For example, if factor A increases, does factor B increase or decrease? (#9).

All the presented cases used a scale from one to five to indicate the weights of links. Many Indigenous participants insisted that all the concepts were equally important (#8, 13 and 18). Careful training of facilitators encouraged participant weighting (#10, 15 and 16). It was often helpful to identify the two relationships with the highest and lowest weights and use those as references to weight the rest of the relationships.

Verifying the maps

Stakeholder sessions ended with a verification of the final map. This initial member checking preceded any additional analysis. Participants readily accepted the technique and reported satisfaction that they could see concrete representations of their knowledge by the end of the FCM sessions (#13). It reaffirmed what they knew and what they could contribute in a meaningful way. In Ghana (#19 adolescent nutrition), young participants described mapping sessions as empowering when interviewed six months later [ 42 ].

Synthesis of literature reviews

FCM can portray qualitative and quantitative evidence from the literature in the same terms as stakeholder experience and beliefs and is a cornerstone of an innovative and systematic approach called the Weight of Evidence. In this approach, stakeholders interpret, expand on, and prioritise evidence from literature reviews (#5 unmet postpartum care needs [ 5 , 34 ], #3 maternal health in communities with traditional midwives [ 40 ], #4 medical evacuation of Indigenous pregnant women [ 47 ], #7 child protection investigations among adolescent parents, and #22 community participation in health research) [ 41 ].

Case #5 (Weight of Evidence) demonstrated how to convert quantitative effect estimates (e.g., odds ratio, relative risk) into a shared format to facilitate comparison between findings [ 5 ]. When multiple effect estimates described the same relationship, appropriate techniques [ 7 , 48 , 49 ] allowed for calculating pooled estimates. In #5, qualitative concepts were represented as ‘unattached’ nodes when the studies suggested they contributed to the outcome of interest. The researchers updated the literature maps with stakeholder views using a Bayesian hierarchical random-effects model with non-informative priors [ 50 ].

In scoping reviews with a broader topic and more heterogeneity of sources (#3, #22) [ 40 ], the map reported the relationships and their supporting data, such as quotes for qualitative studies and odds ratios for quantitative ones, instead of unifying the results in a single scale. Each relationship was counted as 1 (present) with positive or negative signs. Data extraction used a predefined format in which at least two independent researchers registered the relationships after reading the full texts. Each included study contributed to the model in the same way it would contribute to an overall discourse about the topic.

Maps from questionnaire data

Researchers used questionnaire data to generate maps of a behavioural change model in dengue prevention in Mexico [ 43 ] and cultural safety among medical trainees in Colombia [ 44 , 45 ]. The dengue project produced separate maps for men and women, while the Colombian map included all participants. Each map had seven nodes, one for each domain of change in the CASCADA model of behavioural change (Fig. 3): Conscious knowledge, Attitudes, positive deviation from Subjective norms, intentions to Change behaviour, Agency, Discussion of possible action, and Action or change of practice [ 51 ]. The surveys included questions for each intermediate result, and the repeat survey during the impact assessment provided a counterfactual comparison. For example, in Mexico (#1), Conscious knowledge (first C) was the ability to identify a physical sample of a mosquito larva during the interview, and Action (last A) focused on participation in collective activities in the neighbourhood to control mosquito breeding sites. The maps in Colombia (#2) explored the CASCADA network of partial results towards the students’ self-reported intention to change their patient-related behaviour.

Fig. 3 Maps from questionnaire data from the study on dengue control in Guerrero, Mexico. Green arrows are positive influences, and red arrows correspond to negative influences. The control group showed a negative influence in the results chain with a cumulative net influence of 0.88; the intervention group showed no such block and a cumulative net influence of 1.92. Reproduced without changes with permission from the authors of [ 43 ]

The arrows linking the nodes received a weight (w) equivalent to the odds ratio (OR) between the outcomes, transformed to a symmetrical range (-1 to 1) using the formula proposed by Šajna: w = (OR − 1) / (OR + 1). With this transformation, an OR of 1 (no association) maps to a weight of 0, and reciprocal odds ratios map to weights of equal size and opposite sign.
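
As a sketch of this conversion, assuming the transformation w = (OR − 1) / (OR + 1) given above, the step could look like the following in Python; the odds ratios are purely illustrative.

```python
# Sketch of converting odds ratios to edge weights, assuming the transformation
# w = (OR - 1) / (OR + 1) given above. All values are illustrative.

def or_to_weight(odds_ratio: float) -> float:
    """Map an odds ratio in (0, inf) onto a symmetric weight in (-1, 1)."""
    return (odds_ratio - 1) / (odds_ratio + 1)

for odds_ratio in (0.25, 0.5, 1.0, 2.0, 4.0):
    print(odds_ratio, round(or_to_weight(odds_ratio), 2))
# An OR of 1 (no association) maps to 0; reciprocal ORs (e.g. 2 and 0.5)
# map to weights of equal size and opposite sign.
```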

Stage 3. Tools and methods to analyse the maps

Comparing levels of influence

Initial analysis of maps includes a pattern correspondence table that lists and contrasts direct and indirect influences reported from different sources. Free software allows for digitising maps and converting them into lists of relationships or matrices for more complex analyses. In our analysis approach, we first calculate the transitive closure (TC) of each map. This mathematical model provides the total influence of one concept on all others after considering all the possible paths linking them [ 7 ]. Two models are available [ 52 ]: fuzzy TC, recommended for maps with ad hoc concepts, and probabilistic TC, often used for maps with predetermined concepts. With the transitive closure of a map, it is possible to build a pattern correspondence table comparing influences according to different knowledge sources. Table  4 shows an example.
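
The following Python sketch conveys the idea of transitive closure for a small map with non-negative weights, using a max-min rule (a path is only as strong as its weakest link, and the strongest path is kept). The bipolar algorithm of Niesink et al. [ 52 ] that handles negative influences is more elaborate, and the values here are invented.

```python
# Rough illustration of fuzzy (max-min) transitive closure for a small map with
# non-negative weights in [0, 1]: the influence of one node on another is the
# strongest path between them, and a path is only as strong as its weakest link.
# The bipolar case with negative influences, handled by Niesink et al. [52],
# requires a more elaborate procedure. Values are invented.

def fuzzy_transitive_closure(matrix):
    n = len(matrix)
    closure = [row[:] for row in matrix]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    via_k = min(closure[i][k], closure[k][j])
                    if via_k > closure[i][j]:
                        closure[i][j] = via_k
                        changed = True
    return closure

weights = [
    [0.0, 0.8, 0.0],  # node 0 influences node 1
    [0.0, 0.0, 0.5],  # node 1 influences node 2
    [0.0, 0.0, 0.0],
]
# Node 0 reaches node 2 through node 1 with strength min(0.8, 0.5) = 0.5.
print(fuzzy_transitive_closure(weights))
```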

Additional tools for analysing the maps include centrality scores from social network analysis. These measures compare the sum of the absolute values of the weights of incoming or outgoing edges to identify the total importance of a node [ 53 ]. Higher levels of out-degree centrality indicate more influence on other concepts, and higher values of in-degree centrality suggest that the concepts are important outcomes in the map [ 16 ].
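
A minimal sketch of these degree-based centrality scores, with hypothetical edges and weights:

```python
# Degree-based centrality as described above: the sum of the absolute weights of
# outgoing edges (out-degree) or incoming edges (in-degree) of each node.
# Edges and weights are hypothetical.

edges = [
    ("education", "income", 0.6),
    ("income", "health", 0.7),
    ("violence", "health", -0.5),
]

out_degree, in_degree = {}, {}
for cause, outcome, weight in edges:
    out_degree[cause] = out_degree.get(cause, 0.0) + abs(weight)
    in_degree[outcome] = in_degree.get(outcome, 0.0) + abs(weight)

print("out-degree (drivers):", out_degree)  # higher = more influence on other concepts
print("in-degree (outcomes):", in_degree)   # higher = more of an outcome in the map
```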

Operator-independent weighting

In response to the challenges of participant weighting in some contexts, we applied Harris’ discourse analysis to calculate overall weights across multiple maps based on the frequency of each relationship across the whole discourse (e.g., multiple maps from stakeholders or studies in literature reviews). Harris proposed discourse analysis as an operator-independent way to identify the role of morphemes (a part of a word, a word, or several words with an irreducible meaning) in a discourse, exclusively from their occurrence in the text [ 54 ]. Because it relied on frequency, among other criteria (partial order, redundancies and dependencies), it did not depend on the researcher’s assumptions about meaning. Similarly, we set out to understand the causal meaning of relationships identified through FCM with an operator-independent procedure. A concept that caused an outcome across multiple maps would have a stronger causal role than a concept that caused the same outcome in only one or two maps. We found that analysis of maps using discourse analysis and participant weighting produced similar results [ 35 ].
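
The sketch below conveys only the counting element of this approach: a relationship is weighted by how often it appears across maps. It is not the full procedure described in [ 35 , 54 ], and the maps are invented.

```python
# Frequency element of operator-independent weighting: a relationship reported in
# more maps receives a larger weight. This is only the counting step, not the full
# Harris-style procedure [35, 54]. The maps are invented.

from collections import Counter

maps = [
    [("poverty", "poor diet"), ("poor diet", "illness")],
    [("poverty", "poor diet"), ("low education", "poor diet")],
    [("poverty", "poor diet"), ("poor diet", "illness")],
]

counts = Counter(edge for fcm in maps for edge in fcm)
weights = {edge: count / len(maps) for edge, count in counts.items()}
print(weights)
# ('poverty', 'poor diet') appears in all three maps -> weight 1.0;
# ('low education', 'poor diet') appears once -> weight 0.33
```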

Combining maps

In many cases, the analysis included bringing the transitive closure maps together as an average representation of stakeholder groups. Combining maps often required reconciling differences in labels across maps. This was also an opportunity to generate categories to describe groups of related factors. Some cases involved stakeholders in this process, while others applied systematic researcher-led procedures followed by member checking exercises to confirm categories. Combining maps used weighted or unweighted averages of each relationship’s weight across maps. It also used stakeholder-assigned Bayesian priors to update corresponding relationships identified in the literature [ 5 ].
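
As a small illustration of the unweighted-average option, with invented maps and weights:

```python
# Unweighted averaging of a relationship's weight across maps, the simplest of the
# combination options mentioned above. Maps and values are invented.

from collections import defaultdict

maps = [
    {("clean water", "child health"): 0.8, ("open defecation", "child health"): -0.6},
    {("clean water", "child health"): 0.6},
]

sums = defaultdict(float)
for fcm in maps:
    for edge, weight in fcm.items():
        sums[edge] += weight

# Dividing by the total number of maps treats a link missing from a map as zero;
# averaging only over the maps that report the link is an alternative choice.
combined = {edge: total / len(maps) for edge, total in sums.items()}
print(combined)  # clean water -> child health: 0.7; open defecation -> child health: -0.3
```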

Reduction of maps

Stakeholder and literature maps usually have many factors and relationships, making their analysis complex and hindering communication of results. We created reduced maps following a qualitative synthesis of nodes and a mathematical procedure to calculate category-level weights [ 35 ]. Some cases in Canada engaged participants in defining the categories as they progressed with the mapping session (#7). However, creating categories within individual mapping sessions can make it difficult to compare groups when the categorisation varies between them.
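
A simplified sketch of such a reduction, assuming weights within each category pair are simply averaged; the node names, categories, and values are invented, and the published procedure [ 35 ] is more involved.

```python
# Simplified category-level reduction: nodes are assigned to broader categories and
# edge weights between category pairs are averaged. The published procedure [35] is
# more involved; node names, categories, and weights are invented.

from collections import defaultdict

categories = {
    "clinic distance": "access to care",
    "cost of transport": "access to care",
    "midwife training": "quality of care",
}

edges = [
    ("clinic distance", "safe birth", -0.5),
    ("cost of transport", "safe birth", -0.7),
    ("midwife training", "safe birth", 0.8),
]

sums, counts = defaultdict(float), defaultdict(int)
for cause, outcome, weight in edges:
    key = (categories.get(cause, cause), categories.get(outcome, outcome))
    sums[key] += weight
    counts[key] += 1

reduced = {key: sums[key] / counts[key] for key in sums}
print(reduced)
# ('access to care', 'safe birth') averages -0.5 and -0.7 to -0.6;
# ('quality of care', 'safe birth') stays 0.8
```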

Sensemaking of relationships

Weighting by stakeholders helps prioritise direct and indirect influences that contribute to an outcome. Stakeholder narratives and weights helped to develop explanations of how different factors contribute to the outcomes. In cases #5 and #7, an additional literature search based on factors identified by stakeholders contributed to creating explanatory accounts. The reporting of women’s satisfaction with HIV care (#20) used quotes recorded in the mapping sessions to explain the narratives of the most meaningful relationships. The analysis of maps on violence against women in Botswana (#15) identified important intermediate factors commonly depicted along the pathways from other factors to the main outcome.

Stage 4. How maps were used

Researchers described how they edited and simplified complex maps to make them more accessible, including to people with limited literacy, in Mexico, Nigeria, and Uganda (#8 and 10 to 12). In addition to creating category maps, they used colour coding, labels in the local language for the most influential factors, arrows of different thicknesses according to their weight, and different sizes of boxes for concepts according to their importance based on centrality scores. When sharing results, they often contrasted maps from different stakeholders. In Canada (#5 and #7), researchers developed explanatory frameworks from the mapping exercises, and stakeholders refined this framework and identified priority areas for action. In Canada (#5 and #7), Botswana (#14) and Uganda (#11 and #12), stakeholders viewed and discussed the summary maps from other groups. The maps, further discussed by stakeholders, helped inform the design of media-based communication interventions in Ghana (#19) and Nigeria (#10).

Our experiences with FCM resonate with and add considerable detail to the work of earlier FCM authors [ 18 , 19 ], including those offering protocols for meaningful participation in environmental sciences [ 16 , 49 ]. The most recent literature reviews on the use of FCM have not discussed the contributions we describe here [ 20 , 22 , 55 , 56 ] and do not provide details on practical decisions across the mapping process or on the implications of stakeholder authorship. This review provides practical insights for FCM researchers before they generate maps, during data collection and in analysis. The use of FCM to increase data sources in the coproduction of knowledge brings numerous challenges and multiple potential decisions. This paper summarises how we approached these challenges across 25 real-world projects and responds to the questions we often receive from researchers new to the method. These methodological considerations are essential to increase the trustworthiness of FCM applications and to interpret its results adequately.

Variability in facilitation of mapping sessions with stakeholders is well recognised as a source of potential differences between groups [ 19 ]. In our experience, the behaviour and attitudes of researchers and facilitators can influence the content of the maps. Careful quality control and member checking can help minimise this influence [ 57 ]. To achieve high-quality, informative maps, our experience highlights the need for clear protocols for data collection, including careful training of facilitators and ongoing supervision and monitoring. This has been essential in some of our projects, which have involved hundreds of participants in creating hundreds of maps.

Our group also used FCM in contextualising mixed-methods literature reviews. Knowledge synthesis is seldom free of reviewer interpretations [ 58 ], and formal protocols for data collection, analysis, synthesis, and presentation could increase the reliability and validity of findings [ 59 ]. Singer et al. also used FCM to summarise qualitative data [ 60 ], a promising application that benefits from FCM’s if-then configurations and linguistic descriptions of concepts and relationships. In our practice, FCM was a practical support to develop formal protocols, to generate pooled effect estimates across studies [ 58 ] and to summarise heterogeneous sources. Weight of Evidence is an innovation to incorporate stakeholder perspectives with scientific evidence, thus addressing the common challenge of contextualising literature findings with local realities. The application of FCM in modelling questionnaire data helps to evaluate result chains as knowledge networks.

Despite its name (fuzzy) and tolerance for uncertainty, FCM is not fuzzy or vague [ 61 ]. It incorporates multiple dimensions of decision-making, including impressions, feelings, and inclinations, in addition to careful reasoning about events and possibilities [ 9 , 62 ]. FCM is a participatory modelling approach [ 63 ] that improves conventional modelling with real-world experience. FCM can help formalise stakeholder knowledge and support learning about an issue to promote action [ 64 ]. An important part of the literature focuses on applying learning algorithms for scenario planning [ 55 , 65 ]. Our group reported positive changes and increased agency among mappers. Future research might explore the impact of FCM as an intervention, both on those sharing their knowledge and on those using the models. The commitment to operator-independent procedures has led us to adapt Harris’s discourse analysis to complement the sometimes-problematic weighting step [ 35 ]. Notwithstanding our ability to generate operator-independent weights, the question of whose views the models represent and who is empowered remains valid and should be discussed in every case [ 66 ].

There is very little literature on FCM in education. FCM could help students clarify the knowledge they share in class [ 67 ]. FCM can also formalise steps to connect and evaluate students’ progression towards concrete learning objectives, a helpful feature in game-based learning [ 68 ]. In our experience, mapping sessions had a transformative effect as participants reflected on what they knew about the main issue and appreciated their knowledge being presented in a tangible product. Further studies could investigate how group and individual characteristics evolve throughout the mapping process.

Decision-making involves choosing alternatives based on their expected impacts. Many people think of FCM in the context of predictive models using learning algorithms [ 55 , 56 , 69 , 70 , 71 , 72 ]. There is also potential for informing other AI-driven methods by incorporating expert knowledge in the form of fuzzy cognitive maps into complex graph-based models [ 73 , 74 ]. The concern in participatory research, and therefore in the use of FCM in participatory research, is equitable engagement in informed decision making. We used FCM not as a predictive tool but for making sense of scenarios and theories to inform choices, recognising multiple possible ways of seeing any issue. Map interpretation hinges on who the authors are and the type of data depicted (opinions, observations, or components of a theory). These soft models characterise direct and indirect dependencies that are difficult to incorporate in formal approaches like differential equations [ 28 ]. Current work of the research group explores participant-led FCM weighting to inform Bayesian analysis of quantitative data and ethnographic approaches to understand deeper meanings of factors depicted in FCM.

A potential concern about FCM is whether the sample size and selection are adequate, yet FCM reports rarely discuss this. There are no formal procedures to estimate the required sample size for mapping exercises (total number of participants, maps, or people in a group session). Singh and Chudasama, for example, continued mapping sessions until the list of causal factors identified reached saturation [ 75 ]. A participatory research approach, however, would conduct as many mapping sessions as necessary to allow all voices, especially those of the most marginalised, to be heard. Our application of Harris’ discourse analysis allows quicker mapping sessions, avoiding the often lengthy weighting process; this can increase the number of maps that can be created with finite resources. The combination of maps results in more robust models because more knowledge informs the final output [ 76 ]. Multiple alternatives exist for combining maps [ 5 , 8 , 21 , 77 ]. Our work has explored Bayesian updating using stakeholder weights as priors [ 5 ].
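
As a heavily simplified sketch of this kind of Bayesian updating, a conjugate normal model can combine a stakeholder-derived prior for an edge weight with a literature-based estimate; the hierarchical random-effects model actually used in case #5 is more involved, and all numbers below are invented.

```python
# Heavily simplified sketch of Bayesian updating of one edge weight: a stakeholder-
# derived prior is combined with a literature-based estimate by precision weighting
# (conjugate normal model). The hierarchical random-effects model used in case #5
# [5, 50] is more involved; all numbers are invented.

def normal_update(prior_mean, prior_var, data_mean, data_var):
    """Posterior mean and variance for a normal prior and normal likelihood."""
    posterior_var = 1.0 / (1.0 / prior_var + 1.0 / data_var)
    posterior_mean = posterior_var * (prior_mean / prior_var + data_mean / data_var)
    return posterior_mean, posterior_var

# Stakeholders see a moderate positive influence; the literature estimate is weaker
# but more precise, so the posterior sits closer to the literature value.
print(normal_update(prior_mean=0.6, prior_var=0.04, data_mean=0.3, data_var=0.01))
# (0.36, 0.008)
```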

Strengths and limitations

Almost all the experiences described in this review are published and provide further details on specific topics. This practice review reflects the experience in participatory research and thus mainly focused on stakeholder maps. Our group pioneered the use of FCM for contextualising systematic reviews in stakeholder experience. We also used FCM to analyse and to portray progress in changing a results chain in a modified theory of planned behaviour. Operator bias is a constant concern in our FCM practice, reflected in the review of efforts to avoid operator influence in generating the maps, in the coding of map concepts into categories, and especially in weighting of maps, where our innovation relies on Harris’ discourse analysis.

The general use of FCM has well-recognised challenges and limitations. It is easy to forget that cognitive maps reflect opinions and personal experience, which can differ between map authors and from biological causality. This is seldom a major problem in our participatory research practice, where we frame FCM as different perspectives to engage stakeholders or as an entry point to dialogue. As with most visual techniques, the maps are static and do not model the longitudinal evolution of the depicted knowledge network. Viewers might assume relationships in the maps are linear, which is not always the case [ 76 ]. For example, the effect of higher age on maternal health outcomes would be very different for teenagers and older mothers.

Most map readers make inferences from causes to the outcome; the direction of the arrow does not invite reading a reversed causal relationship from the outcome back to the causes. Different approaches to causal reasoning could affect map construction, weighting and interpretation. Although the technique proved relatively robust to cultural and educational differences, our experience includes cultural groups whose views of causal relationships are more complex than FCM can reflect.

Several questions about conducting FCM remain unanswered, such as how to standardise (and limit) the influence of facilitators, how to use FCM with people living with visual or hearing loss, or how to create meaningful maps using distance communication, such as social media, or when participants have limited time for the exercise.

FCM is a flexible and robust way to share multiple stakeholder perspectives. Although mostly applied to beliefs and experiences, it can also portray published evidence and questionnaire data in formats comparable with subjective experience. FCM requires multiple practical decisions that have implications for interpreting and sharing results. We review these methodological decisions in 25 research projects in different contexts since 2016. Insights might be relevant to researchers interested in using FCM and can contribute to applying it in a more systematic way. Clear protocols and quality control improve the reliability of fuzzy cognitive maps. FCM helps build a shared understanding of an issue across diverse knowledge sources and can provide a systematic and transparent basis for shared decision-making.

Data availability

The dataset supporting the conclusions of this article is included within the article and its additional files.

Wallerstein NB, Duran B. Using community-based participatory research to address health disparities. Health Promot Pract. 2006;7:312–23.

George AS, Mehra V, Scott K, Sriram V. Community participation in health systems research: a systematic review assessing the state of research, the nature of interventions involved and the features of engagement with communities. PLoS ONE. 2015;10:e0141091.

Oliver S, Roche C, Stewart R, Bangpan M, Dickson K, Pells K, et al. Stakeholder engagement for development impact evaluation and evidence synthesis. London: Centre for Excellence for Development Impact and Learning (CEDIL); 2018. https://doi.org/10.51744/CIP3 .

Dion A, Joseph L, Jimenez V, Gutierrez AC, Ben Ameur A, Robert E, et al. Grounding evidence in experience to support people-centered health services. Int J Public Health. 2019;64:797–802.

Dion A, Carini-Gutierrez A, Jimenez V, Ben Ameur A, Robert E, Joseph L, et al. Weight of evidence: participatory methods and bayesian updating to contextualize evidence synthesis in stakeholders’ knowledge. J Mix Methods Res. 2021;JMMR–19–03:155868982110374.

Andersson N, Silver H. Fuzzy cognitive mapping: an old tool with new uses in nursing research. J Adv Nurs. 2019;75:3823–30.

Giles BG, Haas G, Šajna M, Findlay CS. Exploring aboriginal views of health using fuzzy cognitive maps and transitive closure: a case study of the determinants of diabetes. Can J Public Health. 2008;99:411–7.

Kosko B. Hidden patterns in combined and adaptive knowledge networks. Int J Approximate Reasoning. 1988;2:377–93.

Zadeh LA. Outline of a new approach to the analysis of complex systems and decision processes. IEEE Trans Syst Man Cybern. 1973;SMC–3:28–44.

Axelrod R, editor. Structure of decision: the cognitive maps of political elites. New Jersey, USA: Princeton University Press; 1976.

Langfield-Smith K, Wirth A. Measuring differences between cognitive maps. J Oper Res Soc. 1992;43:1135.

Kosko B. Fuzzy cognitive maps. Int J Man Mach Stud. 1986;24:65–75.

Harary F, Norman RZ, Cartwright D. Structural models: an introduction to the theory of directed graphs. New York: Wiley; 1965.

Zadeh LA. On the analysis of large-scale systems. In: Klir GJ, Yuan B, editors. Fuzzy Sets, Fuzzy Logic, and Fuzzy Systems: Selected papers by Lotfi A Zadeh. 1996. pp. 195–209.

Seising R, Tabacchi M. Fuzziness, philosophy, and medicine. In: Seising R, Tabacchi M, editors. Fuzziness and medicine. Berlin: Springer; 2013. pp. 3–8.

Gray SA, Zanre E, Gray SRJ. Fuzzy cognitive maps as representations of mental models and group beliefs. In: Papageorgiou EI, editor. Fuzzy cognitive maps for applied sciences and engineering. Berlin: Springer; 2014. pp. 29–48.

Papageorgiou EI, Salmeron JL. A review of fuzzy cognitive maps research during the last decade. IEEE Trans Fuzzy Syst. 2013;21:66–79.

Glykas M, editor. Fuzzy cognitive maps. Advances in theory, methodologies, tools and applications. Berlin: Springer; 2010.

Jetter AJ, Kok K. Fuzzy cognitive maps for futures studies—A methodological assessment of concepts and methods. Futures. 2014;61:45–57.

Apostolopoulos ID, Papandrianos NI, Papathanasiou ND, Papageorgiou EI. Fuzzy cognitive map applications in Medicine over the last two decades: a review study. Bioengineering. 2024;11:139.

Papageorgiou EI. A new methodology for decisions in medical informatics using fuzzy cognitive maps based on fuzzy rule-extraction techniques. Appl Soft Comput. 2011;11:500–13.

Amirkhani A, Papageorgiou EI, Mohseni A, Mosavi MR. A review of fuzzy cognitive maps in medicine: taxonomy, methods, and applications. Comput Methods Programs Biomed. 2017;142:129–45.

Gray S, Gray S, De Kok JL, Helfgott AER, O’Dwyer B, Jordan R, et al. Using fuzzy cognitive mapping as a participatory approach to analyze change, preferred states, and perceived resilience of social-ecological systems. Ecol Soc. 2015;20.

Gray S, Chan A, Clark D, Jordan R. Modeling the integration of stakeholder knowledge in social–ecological decision-making: benefits and limitations to knowledge diversity. Ecol Modell. 2012;229:88–96.

Giles BG, Findlay CS, Haas G, LaFrance B, Laughing W, Pembleton S. Integrating conventional science and aboriginal perspectives on diabetes using fuzzy cognitive maps. Soc Sci Med. 2007;64:562–76.

Sarmiento I, Paredes-Solís S, Loutfi D, Dion A, Cockcroft A, Andersson N. Fuzzy cognitive mapping and soft models of indigenous knowledge on maternal health in Guerrero, Mexico. BMC Med Res Methodol. 2020;20:125.

Salmeron JL. Modelling grey uncertainty with fuzzy grey cognitive maps. Expert Syst Appl. 2010;37:7581–8.

Zadeh LA. Fuzzy logic reaches adulthood. Control Eng. 1996;43:50.

Dickerson JA, Kosko B. Virtual worlds as fuzzy cognitive maps. Presence: Teleoperators Virtual Environ. 1994;3:173–89.

Osoba O, Kosko B. Causal modeling with feedback fuzzy cognitive maps. In: Davis PK, O’Mahony A, Pfautz J, editors. Social-behavioral modeling for Complex systems. Wiley; 2019. pp. 587–616.

Tonkin K, Silver H, Pimentel J, Chomat AM, Sarmiento I, Belaid L, et al. How beneficiaries see complex health interventions: a practice review of the most significant change in ten countries. Archives Public Health. 2021;79:18.

Tratt E, Sarmiento I, Gamelin R, Nayoumealuk J, Andersson N, Brassard P. Fuzzy cognitive mapping with Inuit women: what needs to change to improve cervical cancer screening in Nunavik, northern Quebec? BMC Health Serv Res. 2020;20:529.

Sarmiento I, Ansari U, Omer K, Gidado Y, Baba MC, Gamawa AI, et al. Causes of short birth interval (kunika) in Bauchi State, Nigeria: systematizing local knowledge with fuzzy cognitive mapping. Reprod Health. 2021;18:74.

Dion A, Klevor A, Nakajima A, Andersson N. Evidence-based priorities of under‐served pregnant and parenting adolescents: addressing inequities through a participatory approach to contextualizing evidence syntheses. Int J Equity Health. 2021;20:118.

Sarmiento I, Cockcroft A, Dion A, Paredes-Solís S, De Jesús-García A, Melendez D, et al. Combining conceptual frameworks on maternal health in indigenous communities—fuzzy cognitive mapping using participant and operator-independent weighting. Field Methods. 2022;34:1525822X2110704.

Sarmiento I, Field M, Kgakole L, Molatlhwa P, Girish I, Andersson N et al. Community perceptions of causes of violence against young women in Botswana: fuzzy cognitive mapping. Vulnerable Child Youth Stud. 2023;:1–57.

Sarmiento I, Kgakole L, Molatlhwa P, Girish I, Andersson N, Cockcroft A. Community perceptions about causes of suicide among young men in Botswana: an analysis based on fuzzy cognitive maps. Vulnerable Child Youth Stud. 2023;:1–23.

Cockcroft A, Sarmiento I, Andersson N. Shared perceived causes of suicide among young men and violence against young women offer potential for co-designed solutions: intervention soft-modelling with fuzzy cognitive mapping. Vulnerable Child Youth Stud. 2023;:1–22.

Ghadirian M, Marquis G, Dodoo N, Andersson N. Ghanaian female adolescents perceived changes in nutritional behaviors and social environment after creating participatory videos: a most significant change evaluation. Curr Dev Nutr. 2022;6:nzac103.

Sarmiento I, Paredes-Solís S, Dion A, Silver H, Vargas E, Cruz P, et al. Maternal health and indigenous traditional midwives in southern Mexico: contextualisation of a scoping review. BMJ Open. 2021;11:e054542.

Sarmiento I, Paredes-Solís S, Morris M, Pimentel J, Cockcroft A, Andersson N. Factors influencing maternal health in indigenous communities with presence of traditional midwifery in the Americas: protocol for a scoping review. BMJ Open. 2020;10:e037922.

Gagnon-Dufresne M-C, Sarmiento I, Fortin G, Andersson N, Zinszer K. Why urban communities from low-income and middle-income countries participate in public and global health research: protocol for a scoping review. BMJ Open. 2023;13:e069340.

Andersson N, Beauchamp M, Nava-Aguilera E, Paredes-Solís S, Šajna M. The women made it work: fuzzy transitive closure of the results chain in a dengue prevention trial in Mexico. BMC Public Health. 2017;17(Suppl 1):133–73.

Pimentel J, Cockcroft A, Andersson N. Impact of game jam learning about cultural safety in Colombian medical education: a randomised controlled trial. BMC Med Educ. 2021;21:132.

Pimentel J, Cockcroft A, Andersson N. Game jams for cultural safety training in Colombian medical education: a pilot randomised controlled trial. BMJ Open. 2021;11:e042892.

Andersson N. Participatory research-A modernizing science for primary health care. J Gen Fam Med. 2018;19:154–9.

Silver H, Sarmiento I, Pimentel J-P, Budgell R, Cockcroft A, Vang ZM, et al. Childbirth evacuation among rural and remote indigenous communities in Canada: a scoping review. Women Birth. 2021. https://doi.org/10.1016/j.wombi.2021.03.003 .

Borenstein M, Hedges LV. Effect size for meta-analysis. In: Cooper HM, Hedges L V, Valentine J, editors. Handbook of research synthesis and meta-analysis. 3rd edition. New York: Russell Sage Foundation; 2019. pp. 208–43.

Özesmi U, Özesmi SL. Ecological models based on people’s knowledge: a multi-step fuzzy cognitive mapping approach. Ecol Modell. 2004;176:43–64.

Rosenberg L, Joseph L, Barkun A. Surgical arithmetic: epidemiological, statistical, and outcome-based approach to surgical practice. CRC; 2000.

Andersson N, Ledogar RJ. The CIET Aboriginal youth resilience studies: 14 years of capacity building and methods development in Canada. Pimatisiwin. 2008;6:65–88.

Niesink P, Poulin K, Šajna M. Computing transitive closure of bipolar weighted digraphs. Discrete Appl Math (1979). 2013;161:217–43.

Papageorgiou EI, Kontogianni A. Using fuzzy cognitive mapping in environmental decision making and management: a methodological primer and an application. International perspectives on Global Environmental Change. InTech; 2012.

Harris ZS. Discourse analysis. Language (Baltim). 1952;28:1.

Felix G, Nápoles G, Falcon R, Froelich W, Vanhoof K, Bello R. A review on methods and software for fuzzy cognitive maps. Artif Intell Rev. 2019;52:1707–37.

Jiya EA, Georgina ON. A review of fuzzy cognitive maps extensions and learning. J Inform Syst Inf. 2023;5:300–23.

Olazabal M, Neumann MB, Foudi S, Chiabai A. Transparency and reproducibility in participatory systems modelling: the case of fuzzy cognitive mapping. Syst Res Behav Sci. 2018;35:791–810.

Sandelowski M, Voils CI, Leeman J, Crandell JL. Mapping the mixed methods–mixed research synthesis terrain. J Mix Methods Res. 2012;6:317–31.

Wheeldon J. Mapping mixed methods research: methods, measures, and meaning. J Mix Methods Res. 2010;4:87–102.

Singer A, Gray S, Sadler A, Schmitt Olabisi L, Metta K, Wallace R, et al. Translating community narratives into semi-quantitative models to understand the dynamics of socio-environmental crises. Environ Model Softw. 2017;97:46–55.

Zadeh LA. Is there a need for fuzzy logic? Inf Sci (N Y). 2008;178:2751–79.

Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2013.

Voinov A, Jenni K, Gray S, Kolagani N, Glynn PD, Bommel P, et al. Tools and methods in participatory modeling: selecting the right tool for the job. Environ Model Softw. 2018;109:232–55.

Stave K. Participatory system dynamics modeling for sustainable environmental management: observations from four cases. Sustainability. 2010;2:2762–84.

Papageorgiou EI. Learning algorithms for fuzzy cognitive maps - a review study. IEEE Trans Syst Man Cybernetics Part C (Applications Reviews). 2012;42:150–63.

Chambers R. Participatory mapping and geographic information systems: whose map? Who is empowered and who disempowered? Who gains and who loses? Electron J Inform Syst Developing Ctries. 2006;25:1–11.

Cole JR, Persichitte KA. Fuzzy cognitive mapping: applications in education. Int J Intell Syst. 2000;15:1–25.

Luo X, Wei X, Zhang J. Game-based learning model using fuzzy cognitive map. In: Proceedings of the first ACM international workshop on Multimedia technologies for distance learning - MTDL ’09. New York, New York, USA: ACM Press; 2009. p. 67.

Nápoles G, Espinosa ML, Grau I, Vanhoof K. FCM expert: software tool for scenario analysis and pattern classification based on fuzzy cognitive maps. Int J Artif Intell Tools. 2018;27:1860010.

Papageorgiou K, Carvalho G, Papageorgiou EI, Bochtis D, Stamoulis G. Decision-making process for photovoltaic solar energy sector development using fuzzy cognitive map technique. Energies (Basel). 2020;13:1427.

Papageorgiou EI, Papageorgiou K, Dikopoulou Z, Mouhrir A. A web-based tool for fuzzy cognitive map modeling. In: International Congress on Environmental Modelling and Software. 2018. p. 73.

Nápoles G, Papageorgiou E, Bello R, Vanhoof K. On the convergence of sigmoid fuzzy cognitive maps. Inf Sci (N Y). 2016;349–350:154–71.

Apostolopoulos ID, Groumpos PP. Fuzzy cognitive maps: their role in Explainable Artificial Intelligence. Appl Sci. 2023;13:3412.

Article   CAS   Google Scholar  

Mkhitaryan S, Giabbanelli PJ, Wozniak MK, de Vries NK, Oenema A, Crutzen R. How to use machine learning and fuzzy cognitive maps to test hypothetical scenarios in health behavior change interventions: a case study on fruit intake. BMC Public Health. 2023;23:2478.

Singh PK, Chudasama H. Assessing impacts and community preparedness to cyclones: a fuzzy cognitive mapping approach. Clim Change. 2017;143:337–54.

Kosko B. Foreword. In: Glykas M, editor. Fuzzy cognitive maps. Advances in theory, methodologies, tools and applications. Berlin: Springer; 2010. pp. VII–VIII.

Papageorgiou K, Singh PK, Papageorgiou EI, Chudasama H, Bochtis D, Stamoulis G. Participatory modelling for poverty alleviation using fuzzy cognitive maps and OWA learning aggregation. PLoS ONE. 2020;15:e0233984.

Dion A, Nakajima A, McGee A, Andersson N. How adolescent mothers interpret and prioritize evidence about perinatal child protection involvement: participatory contextualization of published evidence. Child Adolesc Soc Work J. 2022;39:785–803.

Belaid L, Atim P, Ochola E, Omara B, Atim E, Ogwang M, et al. Community views on short birth interval in Northern Uganda: a participatory grounded theory. Reprod Health. 2021;18:88.

Belaid L, Atim P, Atim E, Ochola E, Ogwang M, Bayo P, et al. Communities and service providers address access to perinatal care in Postconflict Northern Uganda: socialising evidence for participatory action. Fam Med Community Health. 2021;9:e000610.

Skerritt L, Kaida A, Savoie É, Sánchez M, Sarmiento I, O’Brien N, et al. Factors and priorities influencing satisfaction with care among women living with HIV in Canada: a fuzzy cognitive mapping study. J Pers Med. 2022;12:1079.

Cockcroft A, Omer K, Gidado Y, Mohammed R, Belaid L, Ansari U, et al. Impact-oriented dialogue for culturally safe adolescent sexual and reproductive health in Bauchi State, Nigeria: protocol for a codesigned pragmatic cluster randomized controlled trial. JMIR Res Protoc. 2022;11:e36060.

Sarmiento I, Zuluaga G, Paredes-Solís S, Chomat AM, Loutfi D, Cockcroft A, et al. Bridging western and indigenous knowledge through intercultural dialogue: lessons from participatory research in Mexico. BMJ Glob Health. 2020;5:e002488.

Ansari U, Pimentel J, Omer K, Gidado Y, Baba MC, Andersson N, et al. Kunika women are always sick: views from community focus groups on short birth interval (kunika) in Bauchi state, northern Nigeria. BMC Womens Health. 2020;20:113.

Download references

Acknowledgements

Umaira Ansari, Michaela Field, Sonia Michelsen, Khalid Omer, Amar Azis, Drs Shaun Cleaver and Sergio Paredes were present in the nominal group discussion. Dr Mateja Šajna read and commented on initial versions of this manuscript. The data and methods used for this paper are available in the manuscript. All authors read, contributed to, and approved the final manuscript.

Funding

The work for this manuscript did not receive external funding. The individual FCM projects received financial support from a range of funding bodies, acknowledged in the individual publications about the projects. All individual funding agreements ensured the authors' independence in designing the study, interpreting the data, writing, and publishing the report. The Canadian Institutes of Health Research contributed to the publication costs of this manuscript.

Author information

Authors and Affiliations

Department of Family Medicine, McGill University, 5858 Ch. de la Côte-des-Neiges, Montreal, QC, H3S 1Z1, Canada

Iván Sarmiento, Anne Cockcroft, Anna Dion, Loubna Belaid, Hilah Silver, Katherine Pizarro, Juan Pimentel, Lashanda Skerritt, Mona Z. Ghadirian, Marie-Catherine Gagnon-Dufresne & Neil Andersson

Universidad del Rosario, Grupo de Estudios en Sistemas Tradicionales de Salud, Bogota, Colombia

Iván Sarmiento

Facultad de Medicina, Universidad de La Sabana, Chía, Colombia

Juan Pimentel

Institut Lady Davis pour la Recherche Médicale, Montreal, Canada

Elyse Tratt

École de santé publique, Département de médecine sociale et préventive, Université de Montréal, Montreal, Canada

Marie-Catherine Gagnon-Dufresne

Centro de Investigación de Enfermedades Tropicales, Universidad Autónoma de Guerrero, Acapulco, Mexico

Neil Andersson

Contributions

IS and NA designed the study and drafted the initial version of the manuscript. All authors read and contributed to the sections most relevant to their work.

Corresponding author

Correspondence to Iván Sarmiento.

Ethics declarations

Ethics approval and consent to participate.

Not applicable.

Consent for publication

Competing interests.

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Sarmiento, I., Cockcroft, A., Dion, A. et al. Fuzzy cognitive mapping in participatory research and decision making: a practice review. Arch Public Health 82, 76 (2024). https://doi.org/10.1186/s13690-024-01303-7

Received : 25 July 2023

Accepted : 30 April 2024

Published : 20 May 2024

DOI : https://doi.org/10.1186/s13690-024-01303-7

Keywords

  • Fuzzy cognitive mapping
  • Participatory modelling
  • Weight of evidence
  • Stakeholder engagement
  • Fuzzy logic
  • Public health
  • Global health

Archives of Public Health

ISSN: 2049-3258

Published on 22.5.2024 in Vol 26 (2024)

The Power of Rapid Reviews for Bridging the Knowledge-to-Action Gap in Evidence-Based Virtual Health Care

Authors of this article:

  • Megan MacPherson, PhD
  • Sarah Rourke, MSN

Fraser Health, Surrey, BC, Canada

Corresponding Author:

Megan MacPherson, PhD

Fraser Health

400-13450 102nd Avenue

Surrey, BC, V3T 0H1

Phone: 1 6045616605

Email: [email protected]

Despite the surge in popularity of virtual health care services as a means of delivering health care through technology, the integration of research evidence into practice remains a challenge. Rapid reviews, a type of time-efficient evidence synthesis, offer a potential solution to bridge the gap between knowledge and action. This paper highlights the experiences of the Fraser Health Authority’s Virtual Health team in conducting 15 rapid reviews over the course of 1.5 years and the benefit of involving diverse stakeholders, including researchers, project and clinical leads, and students, in creating user-friendly knowledge products that summarize results. The Virtual Health team found rapid reviews to be a valuable tool for evidence-informed decision-making in virtual health care. Involving stakeholders and focusing on implementation considerations are crucial for maximizing the impact of rapid reviews. Health care decision makers are encouraged to consider implementing rapid review processes to improve the translation of research evidence into practice, ultimately enhancing patient outcomes and promoting a culture of evidence-informed care.

Introduction

Virtual health care services, which involve the delivery of health care through information and communication technologies, have gained popularity among health care providers, patients, and organizations. In recent decades, several initiatives have been undertaken to implement virtual care and improve the access, quality, and safety of health care delivery in Canada [ 1 ]; however, technological advancement and a rapidly expanding evidence base make supporting virtual care with research evidence challenging. Specifically, to adequately support virtual care, health care decision makers are expected to keep up with available technologies, their applications, and evidence of their effectiveness among a variety of health conditions.

Despite decision makers recognizing the need to consider research evidence in the context of public health problems [ 2 , 3 ], there is still a knowledge-to-action (KTA) gap between what is known and what is put into practice clinically [ 4 - 6 ], with health care professionals worldwide demonstrating suboptimal use of research evidence within clinical practice [ 7 - 14 ]. Further, it has been estimated that one-third of patients do not receive treatments that have proven efficacious, one-quarter receive treatments that are potentially harmful, and up to three-quarters of patients and half of clinicians do not receive the information necessary for research-informed decision-making [ 15 ]. Clearly, there is a need to improve the translation of research evidence into practice, particularly in the case of virtual care where technological innovations and research evidence are rapidly expanding.

Knowledge Translation

The field of knowledge translation (KT) strives to enhance the usefulness of research evidence through the design and conduct of stakeholder-informed, patient-oriented studies as well as the dissemination and implementation of research findings into practice [ 16 ]. The Canadian Institutes of Health Research defines KT as the ethical exchange, synthesis, and application of knowledge among researchers and users to accelerate the benefits of research for Canadian people [ 17 ]. The ultimate goal of KT has been further described as the facilitation of evidence-informed decision-making [ 18 ] and the integration of various forms of evidence into public health practice and policy.

The Canadian Institutes of Health Research describes 2 “Death Valleys” on the continuum from research to action, which contribute to the KTA gap [ 19 ]. Valley 1 refers to the reduced ability to translate basic biomedical research discoveries from the laboratory to the bedside and to effectively commercialize health innovations. Valley 2 refers to the reduced ability to synthesize, disseminate, and integrate research findings more broadly into clinical practice and clinical decision-making. To improve the utility of biomedical and clinical research, enhance health outcomes, and ensure an evidence-based and sustainable health care system, strategic attempts to bridge these valleys must be made.

Rapid Reviews

One way to help overcome the second valley is through evidence syntheses such as systematic, scoping, and rapid reviews [ 20 ]. Evidence syntheses have emerged as valuable methods for KT as they can compile large bodies of evidence into a single knowledge product, making them an essential tool for decision makers to enhance evidence-informed decision-making [ 21 , 22 ]. Systematic reviews offer a comprehensive synthesis of available evidence on a particular topic, playing an ever-expanding role in informing policy making and practice [ 23 , 24 ]; however, the resource-intensive nature of conducting systematic reviews, in terms of both time and cost, presents a significant obstacle to facilitating prompt and efficient decision-making [ 25 ].

Given the time constraints health care practitioners and policy makers often face [ 26 ], rapid reviews provide a more resource- and time-efficient means of conducting evidence syntheses, offering actionable evidence in a more relevant manner than other types of evidence syntheses, such as systematic or scoping reviews [ 20 , 26 - 34 ]. Specifically, rapid reviews are a form of evidence synthesis in which systematic review steps are streamlined to generate actionable evidence within a condensed time frame [ 35 ]. To expedite the review process, rapid reviews often compromise on the rigor typically associated with systematic reviews, resulting in a less precise and robust evaluation in comparison [ 32 ]. That being said, rapid reviews have gained traction in health systems’ policy making, health-related intervention development, and health technology assessment [ 34 - 36 ]. This paper outlines the experiences of the Fraser Health (FH) Authority Virtual Health team in producing and disseminating rapid review results to date. Rapid reviews were chosen as they are often highly driven by end-user demands [ 37 ] and have been highlighted as a viable tool to disseminate knowledge within the rapidly growing field of virtual health [ 33 ].

FH Authority Context

As the largest regional health authority in British Columbia, Canada, FH serves more than 1.9 million people [ 38 ]. In recent years, FH has prioritized the expansion of virtual care [ 39 ], conducting over 1.9 million virtual visits between January 2019 and 2023 (roughly 27% of all visits). Within the Virtual Health department at FH, the “research and evaluation team” aims to improve the translation of research into practice while engaging in ongoing collaborative evaluation of existing Virtual Health programming. During Virtual Health strategic planning, rapid reviews have emerged as a central tool for knowledge dissemination and have been used to inform the development of frameworks, services, and program scale-up. This paper highlights FH’s experience in conducting 15 rapid reviews over the course of 1.5 years. It is meant to serve as an overview of the utility and feasibility of rapid reviews within a health authority; for more information on rapid review methods to aid in conducting reviews within a team-based setting, see MacPherson et al [ 33 ].

Rapid reviews are used within the Virtual Health team to provide, within a short time frame (typically 1 week to 4 months), an overview of available evidence addressing a research question on a single topic. From October 2022 until March 2024, the Virtual Health team conducted 15 rapid reviews following published recommendations [ 33 ]. Questions posed to date include the following:

  • What are the perspectives on virtual care among immigrant, refugee, and Indigenous people in Canada [ 40 ]?
  • What virtual care solutions exist for people with heart failure [ 41 ]?
  • What virtual care solutions exist for people with diabetes [ 41 ]?
  • What virtual care solutions exist for people with chronic obstructive pulmonary disease (COPD) [ 41 ]?
  • What are currently used decision guides or algorithms to inform escalation within remote patient monitoring services for people with heart failure?
  • What barriers, facilitators, and recommendations exist for remote patient monitoring services within the context of respiratory care [ 42 ]?
  • What virtual care or digital innovations are used by physicians in acute care [ 43 ]?
  • What barriers and facilitators exist for patient-to-provider virtual messaging (eg, SMS text messaging) [ 44 ]?
  • What is the existing evidence for centralized remote patient monitoring services [ 45 ]?
  • What domains are included within virtual care frameworks targeting appropriateness and safety?
  • What are patient and provider barriers to virtual care [ 46 ]?
  • What is the evidence for virtual hospital programs [ 47 ]?
  • What KT strategies exist that could be used by the Virtual Health research and evaluation team in their efforts to translate research findings into practice?
  • What is the available evidence on virtual decision-making and clinical judgment?
  • What is the available evidence for, and are there existing validated assessment criteria for, nursing assessment frameworks?

Team members assisting with the rapid reviews included researchers, project leads, clinical leads, and students previously unfamiliar with the review process. Knowledge users within the Virtual Health team (eg, clinical leads and clinical directors) were involved throughout the entirety of the review process from developing the research questions to the presentation of research findings in Virtual Health team meetings and the implementation of findings into Virtual Health practice.

Similar to other rapid reviews [ 20 ], results were collated, summarized narratively or visually (eg, through infographics), and presented to Virtual Health team members. The final knowledge products offered a high-level overview of the evidence, arranged in a user-friendly manner so that Virtual Health team members could quickly grasp the available evidence [ 41 ].

Experiences and Lessons Learned

The Virtual Health team’s journey in conducting 15 rapid reviews over the course of 1.5 years has provided valuable insights into the feasibility and utility of rapid reviews within a health authority setting. These lessons learned are from the perspectives of the authors of this paper. MM is the research and KT lead of the Virtual Health department at the FH Authority. Before creating the rapid review program within the Virtual Health department, she had experience conducting systematic, scoping, and rapid reviews. SR is a clinical nurse specialist within the Virtual Health department at FH. As a system-level leader, SR leverages evidence to inform clinical and service model changes that optimize patient care and outcomes and support strategic priorities. Before her involvement in the Virtual Health rapid review program, SR had no previous experience conducting evidence reviews.

Importance of Defining a Clear and Actionable Research Question

Throughout this journey, one of the key lessons learned was the importance of defining an actionable research question so that the results of rapid reviews can be readily integrated into practice. Initially, our reviews had broader scopes aimed at informing future Virtual Health service implementations across populations such as people with COPD, diabetes, and heart failure. While these reviews were informative, they did not lead to immediate changes in Virtual Health practice and required strategic efforts to disseminate findings and integrate results into practice. Subsequently, we learned that focusing on specific programs or initiatives within the Virtual Health setting yields more actionable results. For instance, a review identifying patient and provider barriers to virtual care was conducted with the explicit purpose of informing a framework to improve video visit uptake among primary care providers. This targeted approach enabled us to directly address the identified barriers through a framework focused on the uptake of safe and appropriate video visits within primary care.

Benefits and Challenges Involving Knowledge Users

The involvement of knowledge users such as clinical leads and directors in the rapid review process proved to be invaluable. First, they helped focus the scope of reviews by providing insights into the practical needs and priorities within the FH context. For example, the reviews focusing on virtual care solutions for patients with heart failure, COPD, and diabetes were initiated by 1 of the directors within Virtual Health and included an occupational therapist and a clinical nurse specialist on the review team. The diverse insights offered by clinician team members helped shape the review questions, search strategy, and analysis, ensuring they addressed the practical needs of delivering virtual care to these patient populations.

Second, the engagement of nonresearchers, students, and health care professionals in the review process not only enhanced the quality and relevance of the rapid reviews but also provided an opportunity for experiential learning and professional development. By participating in the rapid review process, students and other team members developed essential skills such as critical appraisal, evidence synthesis, and scientific communication. This approach has the potential to bridge the gap between research and practice by building a generation of clinicians who are well versed in evidence-based practice and can effectively translate research findings into clinical decision-making. For example, a team of nursing students participated in a rapid review focused on algorithms for care escalation within remote patient monitoring services for patients with heart failure. While they lacked prior review experience, their fresh perspectives and familiarity with health care practice as it relates to heart failure brought unique insights helping to shape the clinician-oriented KT efforts.

While involving knowledge users throughout the review process offers numerous benefits, it can also extend the time required to complete a review. This is often due to the necessity for these individuals to familiarize themselves with new software while simultaneously mastering the intricacies of conducting reviews and adhering to all associated steps. For instance, several Virtual Health team members have observed that during their initial and subsequent reviews, they encountered difficulties in efficiently navigating the study screening phase. The abundance of potentially relevant literature posed a challenge, with concerns arising about potentially overlooking papers containing valuable insights or “hidden gems.” This underscores the importance of establishing clear eligibility criteria and providing comprehensive training from the outset to ensure reviewers feel empowered to exclude papers confidently, even those that may initially appear intriguing.
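
One practical way to make that training concrete is to write the eligibility criteria down as an explicit, testable checklist rather than leaving them implicit. The sketch below illustrates this idea in Python; the record fields, criteria, and threshold values are hypothetical examples invented for illustration, not the Virtual Health team's actual screening rules or software.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """A single citation retrieved by the search (hypothetical fields)."""
    title: str
    year: int
    population: str          # e.g. "heart failure", "COPD"
    is_virtual_care: bool    # does the study involve virtual care?
    study_type: str          # e.g. "systematic review", "RCT", "editorial"

# Hypothetical eligibility criteria for a review on remote patient monitoring
# for heart failure, expressed as named predicate functions.
CRITERIA = {
    "published in the last 10 years": lambda r: r.year >= 2014,
    "focuses on heart failure": lambda r: r.population == "heart failure",
    "involves virtual care": lambda r: r.is_virtual_care,
    "is not an editorial or commentary": lambda r: r.study_type != "editorial",
}

def screen(record: Record) -> tuple[bool, list[str]]:
    """Apply every criterion and return the decision plus any failed criteria."""
    failed = [name for name, test in CRITERIA.items() if not test(record)]
    return (len(failed) == 0, failed)

if __name__ == "__main__":
    candidate = Record("Telemonitoring in heart failure", 2021, "heart failure", True, "RCT")
    include, reasons = screen(candidate)
    print("include" if include else f"exclude: {reasons}")
```

Encoding criteria this explicitly mirrors the training point above: a reviewer can exclude an intriguing but ineligible paper with confidence because the reason for exclusion is recorded rather than argued from memory.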

Resources and Staff Time Involved

Readers interested in starting a rapid review program in their own health systems may find it helpful to understand the resources and staff time involved in our process. As the research and KT lead within the Virtual Health team, MM has been responsible for building the rapid review program, training team members, and leading rapid reviews. Her full-time role allows for dedicated focus on these as well as other research and KT-related activities, ensuring the smooth operation of the rapid review process.

Additionally, strong leadership support within the Virtual Health team has been instrumental in fostering a culture of evidence-informed decision-making and facilitating the integration of research evidence into practice. While we do not have a core team with a dedicated full-time equivalent specifically for rapid reviews, a call is put out to the Virtual Health department at the beginning of each review to identify who has the capacity to assist. A testament to the value of these reviews is that Virtual Health team members have begun autonomously conducting rapid reviews, with the research and KT lead acting as an advisor rather than a lead on the reviews. For example, a nurse who was tasked with creating a framework for a virtual nursing assessment requested assistance in running a search for her team to complete a rapid review, to ensure that the resulting framework did not miss any key components seen in the literature.

Rapid Review Process

The overall process map for our team (an adaptation of MacPherson et al [ 33 , 48 ]) can be found in Figure 1. Our journey in conducting rapid reviews has been accompanied by several challenges and by quality assurance measures implemented to ensure the integrity of our findings. A review typically begins with Virtual Health team members submitting a request, or meeting informally with the research and KT lead, to outline the scope and purpose of the review; the question is then refined to ensure it will yield actionable evidence that is relevant to the Virtual Health team and aligned with organizational priorities.

Challenges or obstacles encountered during the rapid review process have included resource constraints. When there are not enough people to assist with a review, either the time to complete the review needs to be extended or additional constraints must be placed on the review question. Time limitations have also been a factor, especially when there is an urgent request. Clear communication on how the results will be used is needed to refine the review topic and search strategy to quickly produce actionable evidence. Given the wealth of research, we have started all reviews by first exploring whether our questions can be answered by conducting a review of reviews. This has allowed for the timely synthesis of evidence instead of relying on individual studies. We have also found that decision makers value the most up-to-date evidence (especially regarding virtual health care technologies); as such, many of our reviews have limited included studies to those published in the past 5 to 10 years to ensure their relevance to decision makers. Additionally, difficulties in accessing relevant literature have been noted, as health authorities often do not have access to the same resources as academic institutions. This results in increased time to secure papers through interlibrary loans, which can be mitigated by collaborating with academic partners.
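
As a rough illustration of the triage described above, the sketch below sorts retrieved records so that recent reviews are screened first (to check whether a review of reviews can answer the question) and anything outside the recency window is set aside. It is a simplified, assumption-laden example; the record structure, field names, and 10-year cut-off are invented for illustration and are not the team's actual tooling.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SearchHit:
    """One record returned by a database search (hypothetical fields)."""
    title: str
    year: int
    design: str  # e.g. "systematic review", "rct", "cohort"

def triage(hits: list[SearchHit], max_age_years: int = 10) -> dict[str, list[SearchHit]]:
    """Split hits into the order they are screened: recent reviews first
    (to attempt a review of reviews), then recent primary studies;
    records older than the cut-off are dropped to keep the evidence current."""
    cutoff = date.today().year - max_age_years
    recent = [h for h in hits if h.year >= cutoff]
    reviews = [h for h in recent if "review" in h.design.lower()]
    primary = [h for h in recent if "review" not in h.design.lower()]
    return {"reviews_first": reviews, "primary_studies": primary}

if __name__ == "__main__":
    hits = [
        SearchHit("Umbrella review of remote monitoring for COPD", 2022, "systematic review"),
        SearchHit("Single-centre telehealth cohort", 2009, "cohort"),
        SearchHit("RCT of home monitoring in heart failure", 2019, "rct"),
    ]
    buckets = triage(hits)
    print(f"{len(buckets['reviews_first'])} review(s) to screen first")
```

If the reviews screened first answer the question, the primary studies need not be screened at all, which is what makes the review-of-reviews shortcut time-efficient.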

Another strength of the Virtual Health team’s rapid review approach was the development of easily digestible knowledge products highlighting key data synthesized in the review. Rather than providing end users with lengthy reports that often go unread, clinicians within the Virtual Health team helped to create brief summaries and infographics highlighting the main findings and recommendations. This approach was aimed at improving the uptake of research evidence into practice by presenting the information in a format that was easily accessible and understandable for clinicians and other stakeholders. By creating visually appealing and user-friendly knowledge products, the Virtual Health team was able to efficiently communicate key takeaways from the rapid reviews, thus facilitating their dissemination and implementation within the FH context. This approach also helped to overcome a common challenge of KT, where research evidence can be difficult to access, understand, and apply in practice. By presenting the information in a format that was relevant and easily digestible, the Virtual Health team was able to enhance the applicability of the rapid reviews, thereby building clinician capacity and increasing their potential impact on patient outcomes.

Leveraging Rapid Reviews for Clinically Based Tools

Our most recent reviews were focused on developing a virtual nursing assessment and virtual nursing decision-making framework. Unlike traditional KT efforts used within other reviews, where the focus often lies on creating user-friendly summaries and infographics, our approach took a slightly different path. We aimed to directly inform the development of clinical decision support tools (DSTs).

Rather than developing traditional KT products, the raw data extracted from these reviews served as a foundational resource for the development of the clinical DSTs. Each piece of information was carefully referenced and integrated into the tool, providing evidence-based support for specific components and functionalities. This direct integration of research evidence into the tool development process not only strengthened the validity and credibility of the tool but also facilitated the transparent communication of the evidence behind each recommendation or feature.
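
One way to picture this traceability is as a small data structure in which every component of a decision support tool points back to the extracted evidence and its citation. The sketch below is purely illustrative: the class design, field names, and example entries (including the citations) are assumptions made up for this illustration, not the actual FH tools or evidence.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceItem:
    """A single extracted finding paired with its citation (illustrative)."""
    finding: str
    citation: str  # e.g. "Author et al, 2022, doi:..."

@dataclass
class DSTComponent:
    """One element of a clinical decision support tool plus its supporting evidence."""
    name: str
    recommendation: str
    evidence: list[EvidenceItem] = field(default_factory=list)

    def evidence_trail(self) -> str:
        """Render the citations behind this component for transparent communication."""
        refs = "; ".join(item.citation for item in self.evidence)
        return f"{self.name}: {self.recommendation} [{refs}]"

# Hypothetical usage: a virtual nursing assessment element backed by two extracted findings.
component = DSTComponent(
    name="Respiratory assessment",
    recommendation="Escalate when self-reported breathlessness worsens over 48 hours",
    evidence=[
        EvidenceItem("Symptom-trend triggers reduced admissions", "Hypothetical Review A, 2021"),
        EvidenceItem("A 48-hour window balanced sensitivity and alert burden", "Hypothetical Review B, 2023"),
    ],
)
print(component.evidence_trail())
```

Keeping the citation alongside each recommendation is what allows the evidence behind every feature to be communicated transparently, as described above.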

Within these reviews, the active participation of those who were responsible for the development of the DSTs proved invaluable. Their involvement was crucial in ensuring understanding and confidence in the information as well as in merging research evidence with their own clinical expertise. By involving end users in the review process, we could tailor the outcomes to their specific needs and preferences, ultimately enhancing the relevance and applicability of the extracted evidence. This collaborative approach ensured that the resulting DSTs were not only evidence based but also resonated effectively with the clinical context they were intended for.

Principal Findings

The Virtual Health team’s experience with conducting 15 rapid reviews over the course of 1.5 years highlights the potential of rapid reviews as a time-efficient tool for improving the translation and uptake of research evidence into Virtual Health programming. Compared to more traditional review types (eg, systematic or scoping), which can take more than a year to complete [ 49 ], rapid reviews provide a practical way of synthesizing available evidence to inform clinical decision-making. The ability to produce a high-quality evidence summary in a shorter time frame can be particularly valuable in rapidly evolving areas of health care, such as virtual health. While rapid reviews are not new, our program offers insights into their application in a dynamic and rapidly evolving field such as virtual health. The lessons learned from FH’s rapid review program have important implications for evidence-based decision-making and KT within health care settings.

One of our primary lessons learned underscores the importance of establishing clear and actionable research questions. By outlining precise objectives, rapid reviews can ensure the relevance and applicability of their results, thus facilitating their seamless integration into clinical practice. Moreover, our experiences highlight the transformative impact of involving knowledge users throughout the review process. This collaborative approach not only enhances the quality and relevance of the evidence synthesized but also fosters a culture of evidence-informed decision-making within the organization. This type of early and continued engagement of knowledge users in research endeavors has been increasingly recognized as pivotal for establishing research priorities and enhancing the utility of research findings in real-world health care contexts [ 50 , 51 ]. In line with this, the overarching goal of knowledge-user engagement in health research is to coproduce knowledge that directly addresses the needs of decision makers. By involving knowledge users from the outset, research priorities can be aligned with the practical requirements of health care delivery, thereby increasing the relevance and utility of research outputs [ 52 - 54 ].

Limitations of Rapid Reviews

Despite its benefits, the rapid review approach is not without limitations. Loss of rigor, as mentioned earlier in this paper, remains a concern. The rapid nature of the process may compromise the depth and comprehensiveness of the literature search and synthesis, potentially leading to oversights or biases in the evidence presented. Furthermore, within the context of virtual health, the rapid pace of technological advancements poses a challenge. New technologies may outpace the generation of peer-reviewed literature, resulting in a lag between their implementation and the availability of robust evidence.

In response to the challenge posed by rapidly evolving technologies, FH’s Virtual Health department has used creative solutions to capture relevant evidence. While peer-reviewed literature remains the primary source, we have also incorporated gray literature, such as news articles, trade publications, and reports from other health care authorities or departments, into the review process when applicable. To supplement reviews and provide more contextual evidence, other research and evaluation methods are used (time permitting) to inform Virtual Health service development, such as consulting Patient and Family Advisory Councils within FH, conducting interviews with patient and clinician partners, and analyzing existing data within FH.

Next Steps for FH’s Rapid Review Program

We remain committed to advancing the rapid review program to meet the evolving needs of the Virtual Health department at FH. While we have heard anecdotally that knowledge users value the user-friendly knowledge products developed for rapid reviews, the next steps of this program include an evaluation of our knowledge dissemination to assess the reach and impact the reviews are having within the Virtual Health department.

Conclusions

Rapid reviews are a valuable tool for the timely synthesis of available research evidence to inform health care decision-making. The Virtual Health team’s experience with conducting rapid reviews highlights the importance of involving a diverse range of knowledge users in the review process and the need to focus on implementation considerations. By engaging knowledge users beyond designated researchers, and particularly by involving clinicians across the research process, rapid reviews become more robust, applicable, and aligned with the practical needs of health care providers and organizations, which can help to bridge the KTA gap.

Conflicts of Interest

None declared.

  • Goodridge D, Marciniuk D. Rural and remote care: overcoming the challenges of distance. Chron Respir Dis. 2016;13(2):192-203. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bowen S, Zwi AB. Pathways to "evidence-informed" policy and practice: a framework for action. PLoS Med. 2005;2(7):e166. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jacobs JA, Jones E, Gabella BA, Spring B, Brownson RC. Tools for implementing an evidence-based approach in public health practice. Prev Chronic Dis. 2012;9:E116. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Davis D, Evans M, Jadad A, Perrier L, Rath D, Ryan D, et al. The case for knowledge translation: shortening the journey from evidence to effect. BMJ. 2003;327(7405):33-35. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225-1230. [ CrossRef ] [ Medline ]
  • Grol R, Jones R. Twenty years of implementation research. Fam Pract. 2000;17(Suppl 1):S32-S35. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Straus SE, McAlister FA. Evidence-based medicine: a commentary on common criticisms. CMAJ. 2000;163(7):837-841. [ FREE Full text ] [ Medline ]
  • Mellis C. Evidence-based medicine: what has happened in the past 50 years? J Paediatr Child Health. 2015;51(1):65-68. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Villar J, Carroli G, Gülmezoglu AM. The gap between evidence and practice in maternal healthcare. Int J Gynaecol Obstet. 2001;75(Suppl 1):S47-S54. [ Medline ]
  • Grol R. Successes and failures in the implementation of evidence-based guidelines for clinical practice. Med Care. 2001;39(8 Suppl 2):II46-II54. [ CrossRef ] [ Medline ]
  • Schuster MA, McGlynn EA, Brook RH. How good is the quality of health care in the United States? Milbank Q. 1998;76(4):517-563. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lauer MS, Skarlatos S. Translational research for cardiovascular diseases at the National Heart, Lung, and Blood Institute: moving from bench to bedside and from bedside to community. Circulation. 2010;121(7):929-933. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lang ES, Wyer PC, Haynes RB. Knowledge translation: closing the evidence-to-practice gap. Ann Emerg Med. 2007;49(3):355-363. [ CrossRef ] [ Medline ]
  • Kitson AL, Straus SE. Identifying knowledge to action gaps. Knowledge Transl Health Care. 2013:97-109. [ FREE Full text ] [ CrossRef ]
  • Graham ID, Logan JL, Harrison MB, Straus SE, Tetroe J, Caswell W, et al. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13-24. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Knowledge translation strategy 2004-2009: innovation in action. Canadian Institutes of Health Research. 2004. URL: https://cihr-irsc.gc.ca/e/26574.html [accessed 2024-04-25]
  • Ciliska D, Thomas H, Buffett C. A compendium of critical appraisal tools for public health practice. National Collaborating Centre for Methods and Tools. 2008. URL: https://www.nccmt.ca/uploads/media/media/0001/01/b331668f85bc6357f262944f0aca38c14c89c5a4.pdf [accessed 2024-04-25]
  • Canada's strategy for patient-oriented research. Government of Canada. Canadian Institutes of Health Research. 2011. URL: https://cihr-irsc.gc.ca/e/44000.html#a1.1 [accessed 2023-04-06]
  • Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q. 2011;89(1):131-156. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7(1):50. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bosch-Capblanch X, Lavis JN, Lewin S, Atun R, Røttingen JA, Dröschel D, et al. Guidance for evidence-informed policies about health systems: rationale for and challenges of guidance development. PLoS Med. 2012;9(3):e1001185. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Oxman AD, Lavis JN, Lewin S, Fretheim A. SUPPORT tools for evidence-informed health policymaking (STP). Norwegian Knowledge Centre for the Health Services. 2010. URL: https://fhi.brage.unit.no/fhi-xmlui/bitstream/handle/11250/2378076/NOKCrapport4_2010.pdf?sequence=1 [accessed 2023-11-22]
  • Oliver K, Innvar S, Lorenc T, Woodman J, Thomas J. A systematic review of barriers to and facilitators of the use of evidence by policymakers. BMC Health Serv Res. 2014;14(1):2. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5(1):56. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Moore G, Redman S, Rudge S, Haynes A. Do policy-makers find commissioned rapid reviews useful? Health Res Policy Syst. 2018;16(1):17. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Flores EJ, Jue JJ, Giradi G, Schoelles K, Mull NK, Umscheid CA. AHRQ EPC Series on improving translation of evidence: use of a clinical pathway for C. Difficile treatment to facilitate the translation of research findings into practice. Jt Comm J Qual Patient Saf. 2019;45(12):822-828. [ CrossRef ] [ Medline ]
  • Hartling L, Guise J, Kato E, Anderson J, Belinson S, Berliner E, et al. A taxonomy of rapid reviews links report types and methods to specific decision-making contexts. J Clin Epidemiol. 2015;68(12):1451-1462.e3. [ CrossRef ] [ Medline ]
  • Hartling L, Guise JM, Kato E, Anderson J, Aronson N, Belinson S, et al. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2015.
  • Hartling L, Guise JM, Hempel S, Featherstone R, Mitchell MD, Motu'apuaka ML, et al. Fit for purpose: perspectives on rapid reviews from end-user interviews. Syst Rev. 2017;6(1):32. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Featherstone RM, Dryden DM, Foisy M, Guise JM, Mitchell MD, Paynter RA, et al. Advancing knowledge of rapid reviews: an analysis of results, conclusions and recommendations from published review articles examining rapid reviews. Syst Rev. 2015;4(1):50. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • MacPherson MM, Wang RH, Smith EM, Sithamparanathan G, Sadiq CA, Braunizer AR. Rapid reviews to support practice: a guide for professional organization practice networks. Can J Occup Ther. 2023;90(3):269-279. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S, et al. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133-139. [ CrossRef ] [ Medline ]
  • Polisena J, Garritty C, Kamel C, Stevens A, Abou-Setta AM. Rapid review programs to support health care and policy decision making: a descriptive analysis of processes and methods. Syst Rev. 2015;4(1):26. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in health technology assessments. Int J Evid Based Healthc. 2012;10(4):397-410. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Garritty C, Gartlehner G, Nussbaumer-Streit B, King VJ, Hamel C, Kamel C, et al. Cochrane Rapid Reviews Methods Group offers evidence-informed guidance to conduct rapid reviews. J Clin Epidemiol. 2021;130:13-22. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fraser Health. 2023. URL: https://www.fraserhealth.ca/ [accessed 2023-04-06]
  • Virtual Health. Fraser Health. URL: https://www.fraserhealth.ca/patients-and-visitors/virtual-health [accessed 2023-04-06]
  • MacPherson M. Immigrant, refugee, and Indigenous Canadians' experiences with virtual health care services: rapid review. JMIR Hum Factors. 2023;10:e47288. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • MacPherson M. Virtual care in heart failure, chronic obstructive pulmonary disease, and diabetes: a rapid review protocol. OSF Registries. 2023. URL: https://osf.io/xn2pe [accessed 2023-09-11]
  • MacPherson M. Barriers, facilitators, and recommendations to inform the expansion of remote patient monitoring services for respiratory care: a rapid review. OSF Registries. 2022. URL: https://osf.io/asf2v/ [accessed 2024-04-25]
  • MacPherson M. Virtual health services in the context of acute care: a rapid review. OSF Registries. 2023. URL: https://osf.io/ub2d8/ [accessed 2024-04-25]
  • MacPherson MM, Kapadia S. Barriers and facilitators to patient-to-provider messaging using the COM-B model and theoretical domains framework: a rapid umbrella review. BMC Digit Health. 2023;1(1):33. [ FREE Full text ] [ CrossRef ]
  • Chan L, MacPherson M. Remote patient monitoring: an evidence synthesis. OSF Registries. 2023. URL: https://osf.io/7wqb8/ [accessed 2024-04-25]
  • Montenegro M, MacPherson M. Barriers to virtual care experienced by patients and healthcare providers: a rapid umbrella review. OSF Registries. 2023. URL: https://osf.io/nufg4/ [accessed 2024-04-25]
  • Montenegro M, MacPherson M. Virtual hospitals: a rapid review. OSF Registries. 2023. URL: https://osf.io/m3a4b/ [accessed 2024-04-25]
  • Attribution 4.0 International (CC BY 4.0). Creative Commons. URL: https://creativecommons.org/licenses/by/4.0/ [accessed 2024-05-13]
  • Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open. 2017;7(2):e012545. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Deverka PA, Lavallee DC, Desai PJ, Esmail LC, Ramsey SD, Veenstra DL, et al. Stakeholder participation in comparative effectiveness research: defining a framework for effective engagement. J Comp Eff Res. 2012;1(2):181-194. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bragge P, Clavisi O, Turner T, Tavender E, Collie A, Gruen RL. The Global Evidence Mapping Initiative: scoping research in broad topic areas. BMC Med Res Methodol. 2011;11(1):92. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Langlois EV, Montekio VB, Young T, Song K, Alcalde-Rabanal J, Tran N. Enhancing evidence informed policymaking in complex health systems: lessons from multi-site collaborative approaches. Health Res Policy Syst. 2016;14(1):20. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Vindrola-Padros C, Pape T, Utley M, Fulop NJ. The role of embedded research in quality improvement: a narrative review. BMJ Qual Saf. 2017;26(1):70-80. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ghaffar A, Langlois EV, Rasanathan K, Peterson S, Adedokun L, Tran NT. Strengthening health systems through embedded research. Bull World Health Organ. 2017;95(2):87. [ FREE Full text ] [ CrossRef ] [ Medline ]

Abbreviations

COPD: chronic obstructive pulmonary disease
DST: decision support tool
FH: Fraser Health
KT: knowledge translation
KTA: knowledge-to-action

Edited by Z Yin; submitted 22.11.23; peer-reviewed by W LaMendola, M Willenbring, Y Zhang, P Blasi; comments to author 10.03.24; revised version received 15.03.24; accepted 13.04.24; published 22.05.24.

©Megan MacPherson, Sarah Rourke. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.05.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.
