A key objective of research is to ensure that audiences, including scientists, policymakers, and lay readers, find it valuable and apply it effectively. Yet communicating research to these audiences is rarely straightforward. Readers are inundated with information, making accessible, understandable communication critical. Trust in the credibility of the content and its authors is also essential. Generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT can instantly summarize and interpret research findings in clear language, supported by charts and other aids (though they also make mistakes). They also create a particular dilemma: If a blog post summarizing research is written with AI, does it lose credibility? Might readers trust it less because less effort may have gone into it, making its value ambiguous, or because they have doubts about AI capabilities?
These questions matter because effective research communication can be the bridge between evidence and action. With the Sustainable Development Goals 2030 deadline approaching, development challenges require clear, accessible policy research communication to inform decisions and mobilize resources.
In a recent randomized experiment, we tested how readers across 12 countries (Bangladesh, Egypt, Ethiopia, Ghana, India, Kenya, Malawi, Nigeria, Rwanda, Sudan, Tajikistan, and Uganda) evaluate the quality and trustworthiness of research blog posts written by humans vs. by ChatGPT. The results offer practical insights for researchers who rely on blogs to amplify their work and suggest that transparency plays an important role in how AI-generated communications are perceived and in maintaining trust.
The experiment’s goal was to examine whether AI-written research blog posts are perceived differently from those written by humans, and how disclosing the use of AI affects these perceptions. Over 350 participants, drawn from development stakeholder groups including policymakers and academics, were presented with research blog posts written by humans and by AI, based on IFPRI-authored papers.
We used previously published papers that already had an accompanying human-written blog post; an AI-written blog post was then generated from the same paper. The experiment randomly varied the authorship (AI or human) and whether participants were told the posts were AI-written. After the experiment, participants were told the true author of the post they had read. The content was carefully standardized: The AI-generated blog posts were created using ChatGPT-4 with iterative prompts, and figures from the original research papers were included to ensure consistency.
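For readers curious about the mechanics, the design can be thought of as a simple 2x2 randomization over authorship and disclosure. The sketch below is purely illustrative and is not the study’s actual assignment procedure; the arm labels and equal-probability assignment are assumptions made for the example.

```python
import random

def assign_arm(participant_id: int, seed: int = 2024) -> dict:
    """Illustrative 2x2 assignment: authorship (human/AI) x disclosure (told/not told).
    The study's actual arm structure and proportions may differ."""
    rng = random.Random(f"{seed}-{participant_id}")  # deterministic per participant
    authorship = rng.choice(["human-written", "AI-written"])
    disclosed = rng.choice([True, False])  # whether authorship is disclosed before reading
    return {"participant": participant_id, "authorship": authorship, "disclosed": disclosed}

# Example: assign the first five participants to arms
for pid in range(1, 6):
    print(assign_arm(pid))
```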
Participants rated each blog post’s quality on several dimensions: a catchy title, readability, appropriate length, representation of a wide range of views, and clear, detailed policy recommendations. We also wanted to test how the author’s perceived identity affected readers’ intentions to engage further with the material. Participants then noted their likely engagement with it (for example, re-reading or sharing it), and how they thought others might engage with it.
When they did not know the authorship, readers rated the AI-written blog posts lower in quality than those authored by humans, even though the AI-generated posts were objectively more accessible, i.e., written at a lower grade level. Interestingly, this negative perception disappeared when participants were told before reading that the content was AI-generated. A possible explanation is that when authorship is ambiguous, readers penalize what they suspect to be AI-written content.
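As a rough illustration of the grade-level comparison mentioned above, the snippet below uses the open-source textstat package to compute a Flesch-Kincaid grade for two short excerpts. The package, metric, and placeholder excerpts are assumptions for this example, not necessarily what the study used.

```python
# pip install textstat
import textstat

# Placeholder excerpts; in practice, the full blog post texts would be loaded here.
human_post = ("The intervention's heterogeneous treatment effects underscore the "
              "necessity of disaggregated analyses across agroecological zones.")
ai_post = ("The program worked better in some regions than in others, so results "
           "should be reported separately for each farming zone.")

for label, text in [("human-written", human_post), ("AI-generated", ai_post)]:
    grade = textstat.flesch_kincaid_grade(text)  # estimated U.S. school grade level
    print(f"{label}: Flesch-Kincaid grade = {grade:.1f}")
```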
Transparency about AI authorship effectively removes this ambiguity about the credibility of the research. Interestingly, neither the actual authorship (AI or human) nor the disclosure of authorship had a significant effect on readers’ stated intention to engage with the content or on their beliefs about others’ engagement. This finding suggests that, quality perceptions aside, AI authorship does not undermine readers’ willingness to share, revisit, or act on research blog posts, provided the content remains relevant and credible. These results carry a clear message for researchers: generative AI can reduce the time and effort required to create accessible communication without compromising trust, as long as authorship is disclosed.
More broadly, these results underscore the importance of transparency in the use of AI for research communications. If readers are left wondering whether something is AI-written, they may penalize this ambiguity in authorship, reducing their trust in blog posts they suspect are AI-generated. Disclosing the use of AI effectively neutralizes this penalty, signaling transparency and maintaining credibility.
To ensure AI improves research communication without undermining trust, we recommend the following actions:
- Disclose AI usage clearly and consistently: Transparency eliminates the ambiguity penalty and reassures readers about the credibility of the content. Journals, organizations, and researchers should adopt standardized disclosure practices, which are still in their infancy.
- Leverage AI for accessibility: Generative AI applications can produce easy-to-understand content that resonates with non-technical audiences, improving outreach and relevance. Policymakers and donors benefit when research is clear and actionable.
- Complement AI tools with human oversight: While AI can reduce effort, human review ensures accuracy and contextual relevance. Unchecked errors in content quality could further erode trust.
- Focus on building trust in AI-written content: Efforts to promote awareness of generative AI’s potential applications and benefits for research communication can improve perceptions over time. Readers familiar with AI tools are already more open to engaging with AI-generated content.
- Avoid overreliance on AI in technical spaces: Highly specialized audiences may prefer more complex language and content that reflects human expertise. Balancing accessibility and technical rigor is essential.
These measures can help reduce distrust and ambiguity in research communication. Transparency not only fosters trust but also helps unlock the full potential of AI, enabling researchers to communicate effectively, reduce costs, and connect evidence with decision-making.
Michael Keenan is an Associate Research Fellow with IFPRI’s Development Strategies and Governance (DSG) Unit, based in Nairobi, Kenya; Jawoo Koo is a Senior Research Fellow with IFPRI’s Natural Resources and Resilience Unit; Christine Mwangi is a DSG Research Officer based in Nairobi; Naureen Karachiwalla is a Research Fellow with IFPRI’s Poverty, Gender, and Inclusion (PGI) Unit, based in Nairobi; Clemens Breisinger is Program Leader for the Kenya Strategy Support Program and a DSG Senior Research Fellow; MinAh Kim is a former IFPRI Climate Data Consultant. This post is based on research that is not yet peer-reviewed. Opinions are the authors’.
An initial draft of this post was generated using ChatGPT from author prompts, then revised and edited.
This work was supported by the CGIAR Initiative on National Policies and Strategies (NPS) and Digital Innovations Initiatives.