The AI Futures Project


https://ai-futures.org/about/



We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like. It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.

Our research on key questions (e.g. what goals will future AI agents have?) can be found here.

The scenario itself was written iteratively: we wrote the first period (up to mid-2025), then the following period, etc. until we reached the ending. We then scrapped this and did it again.

We weren’t trying to reach any particular ending. After we finished the first ending—which is now colored red—we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises. This went through several iterations.

Our scenario was informed by approximately 25 tabletop exercises and feedback from over 100 people, including dozens of experts each in AI governance and in technical AI work.



About


The AI Futures Project is a 501(c)(3) nonprofit research organization (EIN 99-4320292). We are funded entirely by charitable donations and grants.

Contact

You can reach us at info@ai-futures.org or [firstname]@ai-futures.org. We look forward to hearing from you.

Team

Daniel Kokotajlo, Executive Director: Daniel oversees our research and policy recommendations. He previously worked as a governance researcher at OpenAI on scenario planning. When he left OpenAI, he called for greater transparency from top AI companies. In 2021, he wrote What 2026 Looks Like, an AI scenario forecast for 2022–2026 that held up well. See also his Time100 AI 2024 profile.

Eli Lifland, Researcher: Eli works on scenario forecasting and specializes in forecasting AI capabilities. He also co-founded and advises Sage, which builds interactive AI explainers. He previously worked on Elicit, an AI-powered research assistant, and co-created TextAttack, a Python framework for adversarial examples in text. He ranks first on the RAND Forecasting Initiative all-time leaderboard.

Thomas Larsen, Researcher: Thomas works on scenario forecasting and focuses on understanding the goals and real-world impacts of AI agents. He previously founded the Center for AI Policy, an AI safety advocacy organization, and worked on AI safety research at the Machine Intelligence Research Institute.

Romeo Dean, Researcher: Romeo specializes in forecasting AI chip production and usage. He is completing a computer science master’s degree at Harvard University with a focus on security and hardware. He previously was an AI Policy Fellow at the Institute for AI Policy and Strategy.

Jonas Vollmer, COO: Jonas focuses on our communications and operations. Separately, he also helps manage Macroscopic Ventures, an AI venture fund and philanthropic foundation. He previously co-founded the Atlas Fellowship, a global talent program, and the Center on Long-Term Risk, an AI safety research non-profit.
