
July 31, 2025
https://hai.stanford.edu/news/top-scholars-call-for-evidence-based-appro...
In an era where artificial intelligence is reshaping industries and society at large, the need for thoughtful policy has never been more pressing. To govern this fast-moving technology, how can policymakers best make use of scientific research?
Leading experts at Stanford, UC Berkeley, Princeton, the Carnegie Endowment for International Peace, and more are calling for an evidence-based approach to AI policy. In a new paper published in Science, 20 scholars – including Stanford Institute for Human-Centered AI scholars Fei-Fei Li, Yejin Choi, Daniel E. Ho, Percy Liang, and Rishi Bommasani – address the research-policy interface by articulating how policymakers can grow the evidence base now.
In this conversation, lead author Bommasani, a senior research scholar at Stanford HAI, explains this evidence-based approach.
What does this paper attempt to address?
Our multidisciplinary team provided a vision for evidence-based AI policy. While the general idea of using evidence to inform policy is broadly accepted, the specifics of how to do so for AI, especially given historical failures of evidence-based policy in other domains, are not straightforward. Our paper reflects our consensus on how to strengthen the relationship between evidence and policy, centering on how policy itself can accelerate the generation of evidence so that policymaking is not paralyzed by a lack of high-quality evidence.
What do you mean by evidence-based policy?
Good policy should, in theory, integrate evidence that reflects scientific understanding. In practice, this does not always happen, for a variety of reasons. Different policy domains rely on different types of evidence. In health policy, evidence might mean outcomes observed in randomized controlled trials; in economic policy, it might mean predicted outcomes from macroeconomic forecasts.
What are the challenges to this approach?
One immediate challenge is figuring out what counts as credible evidence. The examples I gave of health and economic policy illustrate that some domains might heavily rely on observed outcomes, while others might accept other, more theoretical methods.
Another is incentivizing the production of high-quality evidence. Historical examples show that the demand for evidence has been used to stymie governance, and that partial evidence has been used to create an appearance of scientific uncertainty.
What should policymakers be doing today?
The key idea we present is how policymakers can engender credible evidence. We break this down by evidence-generating mechanism. Frontier AI companies could be more transparent on high-priority topics, such as how they mitigate risks. These companies could be incentivized to have external entities test their models prior to release and to publish the test results along with the testing conditions, such as the time and level of access afforded to the testers. More broadly, safe harbors could protect third-party evaluators conducting good-faith safety testing.
Policymakers need to think proactively, not just about generating evidence but about how that evidence can best inform action. Currently, there is a deluge of content – academic papers, company reports, news articles – about the latest developments in AI. To make sense of all of this, policymakers need mechanisms to synthesize credible evidence. The recent International Scientific Report on the Safety of Advanced AI is a starting point, bringing together almost 100 international AI experts through a nomination process involving 30 countries. But policymakers can go further by helping to form scientific consensus. Forging consensus will be difficult given the striking divides in the AI community on core issues, but we believe policymakers can create the conditions for serious attempts to be made. In its best form, scientific consensus, including on areas of uncertainty or immaturity, would be a powerful foundation for better AI policy.
Read “Advancing Science- and Evidence-Based AI Policy” in Science. Authors: Rishi Bommasani, Sanjeev Arora, Jennifer Chayes, Yejin Choi, Mariano-Florentino Cuéllar, Fei-Fei Li, Daniel E. Ho, Dan Jurafsky, Sanmi Koyejo, Hima Lakkaraju, Arvind Narayanan, Alondra Nelson, Emma Pierson, Joelle Pineau, Scott Singer, Gaël Varoquaux, Suresh Venkatasubramanian, Ion Stoica, Percy Liang, Dawn Song