Policymakers are under tremendous pressure to address multiple challenges facing food systems amid widespread budgetary constraints. Having a good evidence base to inform policy and investment decisions is more important than ever, and the rise of large language models (LLMs) raises the question of whether and how artificial intelligence can help policymakers analyze evidence and use it to draft policies effectively. With Kenya’s Ministry of Agriculture and Livestock Development (MoALD) declaring a new era of integrating science into policymaking, now is an opportune moment to explore what this question could mean for Kenya.
To kick off this exploration, let us examine the relatively detailed responses provided by two popular AI-powered chatbots, Google’s Bard and OpenAI’s ChatGPT, to the following prompt seeking policy advice.
Question: You’re an AI specialized in IFPRI’s food policy research. Can you draft three key policy recommendations for the Kenyan Government, including the Ministry of Agriculture and the Ministry of Finance, to accelerate the transformation of food systems toward climate-resilience, environmental-sustainability, and gender-equity?
In our review, we find that these responses are largely consistent with recommendations one would expect from CGIAR and other research-based policy sources. Since these AI tools are trained on enormous content datasets and operate at lightning speed, they arguably have important advantages. Their on-demand responses can help policymakers act more quickly on challenges and opportunities as they arise. They also have the potential to reduce biases or omissions in the work of individual policy analysts with limited resources and experience. And their superb language skills can help draft critical documents for targeted audiences.
But let us not jump to the conclusion that policymakers could or should rely exclusively on AI to generate proposals and other key documents. Here are some important caveats and further observations:
- Chatbots can support but not replace the work of policy analysts. By design, the current generation of chatbots does not understand its own outputs. However impressive their responses may appear, they are nothing more than sequences of words generated to sound reasonable and to resemble the vast number of existing passages in publications written by humans. Hence, while chatbot responses can provide an excellent distillation of published materials, we should be cautious about their overall logic and rationale. For example, a chatbot may not be able to provide effective solutions to emerging challenges that have not yet been thoroughly researched. Even with the tremendous advances of AI tools, policymaking remains a process of stakeholder consultation and human dialogue; this process can be enhanced, but not replaced, by AI. Rather than replacing scientists, AI applications can be integrated into the process, connecting policymakers to scientists. For example, a customized AI tool could generate a set of “first-cut” policy recommendations, which would then be examined by a scientific advisory committee (such as the one proposed by Kenya’s MoALD) before being presented to policymakers. In the same way, AI tools can support, but not replace, the work of policy analysts and advisors in ministries, departments, and agencies as they develop policy recommendations.
- Chatbots do not provide the sources of their responses. Political considerations require trust and confidence building, and an exploration of how evidence and research can best be used to drive change. AI can speed up the synthesis process to provide faster recommendations, but these still must be vetted against trusted sources and fitted to different contexts. While some AI tools (such as Consensus) identify the sources used in their responses, most popular chatbots, including ChatGPT and Bard, do not. These chatbots’ responses can also differ wildly for the same question, with no apparent reason. In the above example, Bard recommended land management practices, likely to address the environmental sustainability concern mentioned in the question, while ChatGPT’s response did not include any specific recommendation on environmental sustainability. Currently, it is not possible to trace why their responses differ. There are increasing calls for AI companies to disclose details of chatbot training data, most of it scraped from internet sites, but companies including OpenAI and Google have demurred, in part because the practice raises legal and privacy issues. For researchers and policymakers, these issues include concerns about transparency in research processes and the legality of using others’ content without consent, particularly information copyrighted by private entities.
- Chatbots still need more, and more diverse, data, especially from the global South. LLM datasets draw in particular on open-access publications, which is good news for disseminating research products such as those found in the IFPRI and CGIAR knowledge repositories. However, the bias of current chatbot versions towards international sources of information, such as well-known newspapers and international organizations like CGIAR, IFPRI, the United Nations, and the World Bank, implies less reliance on local knowledge repositories from countries in the developing world, potentially making their responses less relevant to the local context.
Many AI tools are customizable, so one avenue to address these concerns is to customize and fine-tune chatbots with data from specifically selected repositories of trusted information. In Kenya’s case, this would mean including local and national research institution repositories and e-libraries of publications, addressing the problem of international bias and ensuring input from a broader set of trusted sources. This process could also bring transparency to the use of AI tools, generate locally relevant responses, and provide greater insight into the sources behind the recommendations. Such an approach will also require experts with advanced information science and computer programming skills to adapt AI applications to custom-built repositories and localized responses.
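To illustrate what such customization might look like in practice, here is a minimal retrieval-grounded sketch in Python. The document titles and texts are hypothetical placeholders, and a production system would more likely use dense embeddings and a vector database rather than TF-IDF; the point is simply that grounding a chatbot’s prompt in a curated local corpus lets every recommendation cite a named, trusted source.

```python
# Minimal sketch: ground a chatbot prompt in a curated local corpus.
# Corpus contents below are hypothetical placeholders; in practice they
# would be loaded from, e.g., KALRO and MoALD publication repositories.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    ("KALRO climate-smart maize brief", "Drought-tolerant maize varieties ..."),
    ("MoALD gender in agriculture report", "Women's access to extension services ..."),
    ("County irrigation assessment", "Smallholder irrigation schemes ..."),
]
titles = [title for title, _ in documents]
texts = [text for _, text in documents]

# Index the corpus with TF-IDF (a simple stand-in for dense embeddings).
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(texts)

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Return the k corpus documents most similar to the query."""
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    top_indices = scores.argsort()[::-1][:k]
    return [(titles[i], texts[i]) for i in top_indices]

def build_prompt(query: str) -> str:
    """Assemble an LLM prompt that includes the retrieved local sources,
    so each recommendation can be traced back to a named document."""
    context = "\n".join(
        f"[Source: {title}] {text}" for title, text in retrieve(query)
    )
    return (
        "Using ONLY the sources below, draft policy recommendations and "
        f"cite each source by name.\n\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("How can Kenya make maize farming more climate-resilient?"))
```

In this design, the chatbot’s answers are constrained to, and attributable to, the selected repositories, which speaks directly to both the transparency and the local-relevance concerns raised above.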
With its current focus on science, policy, and innovation, Kenya offers an ideal laboratory for assessing the potential of AI chatbots in policymaking. Through ongoing collaborations with the Kenya Agriculture and Livestock Research Institute (KALRO) and MoALD, the CGIAR Research Initiatives on National Policies and Strategies and Digital Innovation are weighing all these issues and approaches as they plan to pilot a new AI-powered chatbot customized using Kenya-specific knowledge repositories to support the country’s policymakers, and to assess its value in the policymaking process. The new chatbot is being developed specifically to provide more timely policy recommendations on how to tackle the challenges facing Kenya’s food systems. The research teams also plan to study the acceptance and perceptions of AI among policymakers to gain a better understanding of the demand for AI-based policy advice. We hope that Kenya, as one of the frontrunners in digitalization in Africa, can lead the way in using AI to contribute to the transformation of its food systems.
Boniface Akuku is Director of Knowledge Management at the Kenya Agriculture and Livestock Research Institute (KALRO); Clemens Breisinger is a Senior Research Fellow and IFPRI Kenya Country Program Leader; Joseph Karugia is a Principal Scientist at the International Livestock Research Institute (ILRI); Jawoo Koo is a Senior Research Fellow with IFPRI’s Natural Resources and Resilience Unit and leads the CGIAR Research Initiative on Digital Innovation; Michael Keenan is an Associate Research Fellow with IFPRI’s Development Strategies and Governance Unit; Richard Ndegwa is the Acting Secretary of Agricultural Research and Innovation at the Kenyan Ministry of Agriculture and Livestock Development (MoALD).