Stanford HAI Announces Hoffman-Yee Grant Recipients for 2024 (Stanford University - Human-Centered Artificial Intelligence)


https://hai.stanford.edu/news/stanford-hai-announces-hoffman-yee-grants-...

Nikki Goth Itoi

 

Recent advances in AI, combined with vast amounts of large-scale biomedical data, are creating novel opportunities to model the intricacies of human cells. However, each cell is a dynamic system in which complex behavior emerges from myriad molecular interactions. Accordingly, any useful model of a human cell must be applicable across many data modalities and vast biological scales.

Making progress toward this vision involves an ambitious collaboration across diverse teams that would not be possible without funding from a resource like the Hoffman-Yee Research Grants program. 

“The award we’ve just received is incredibly important and timely,” said Emma Lundberg, associate professor of bioengineering and pathology at Stanford. “It will help us build a grand unified foundation model for human cells that spans across DNA, RNA, protein sequence, protein structure, and cellular organization — unifying sequence, structure, and biological image datasets. This work could pave the way for groundbreaking discoveries in biology using virtual cell models.” 

Lundberg’s project is one of six Stanford teams that have received a combined $3 million in Hoffman-Yee grants, a key initiative of the Stanford Institute for Human-Centered Artificial Intelligence (HAI) since the institute’s founding. Together, these scholars are ready to shape the future of human-centered AI. 

Funding for Big, Bold Ideas

“The Hoffman-Yee grants are one of our most impactful programs,” said Stanford HAI Co-Director James Landay. “These interdisciplinary teams are exploring breakthrough ideas that could accelerate our understanding of the human brain and reimagine the future for visual media and AI training data.”

Funded through a generous gift from philanthropists Reid Hoffman and Michelle Yee, Hoffman-Yee Research Grants are awarded each year to groups of scholars who demonstrate bold thinking about pressing scientific, technical, or societal challenges. To be considered, research projects must align with Stanford HAI’s key areas of focus:

  • Understanding the human and societal impact of AI
  • Augmenting human capabilities
  • Developing AI technologies inspired by human intelligence

“Each year, we receive dozens of innovative proposals from across the Stanford community, and we look forward to giving a select few the support they need to further the field of human-centered AI,” said HAI Director of Research Programs Vanessa Parli. “This year, we are particularly excited about pushing the boundaries of AI for scientific discovery.”

This year’s grant winners were selected from a pool of 39 teams, with all seven Stanford schools represented among the proposals. Each team submitted a detailed proposal with context for the problem they aim to solve, research objectives, methods, and potential impact. A committee of multidisciplinary Stanford faculty and AI experts evaluated the proposals, and the finalists went through Stanford’s Ethics & Society Review process.

The Hoffman-Yee Research Grants program has awarded more than $20 million in grants to date. Read on for snapshots of the six new projects that received funding for the upcoming academic year:

Take a multidimensional odyssey into the human mind

Despite advances in neuroscience and related disciplines, we cannot yet explain the complexities of the human brain. Foundation models hold promise, but so far they have focused on only one way of imaging the brain, such as MRI, or one of its functions, such as vision. The team intends to build on these early efforts by considering four interconnected domains that are crucial to our health: psychological, biological, environmental, and physical. First, they will unify large-scale datasets related to different aspects of human health. Next, they will train domain-specific models in an end-to-end fashion to create a single Brain World Model. With these insights, the team hopes to enable improved diagnostic precision and personalized patient care in clinical contexts.

Developing a class of genomic foundation models 

If a machine learning method could capture the underlying structure of genomic data, it would revolutionize the natural sciences. A multidisciplinary team of experts in machine learning and computational biology has started developing a class of genomic foundation models, called Evo, designed to enable prediction and generation from the molecular to the genomic scale. Their research addresses the two main challenges of contemporary approaches: computational needs and scaling requirements. Since genomes are at the core of all biological sciences, this work could have a wide-ranging impact on many areas of biology, such as understanding certain diseases and developing specialized therapeutic treatments.

The promise of large-scale analysis of body-worn camera footage

Over the last decade, many police reform efforts have focused on body-worn cameras; yet the footage these cameras generate is rarely examined. Scholars from the Stanford School of Humanities and Sciences, Graduate School of Business, School of Engineering, and Stanford Law School see an opportunity to reimagine public safety. They propose to apply AI and large language models to analyze body-worn camera footage so we can better understand the nature of law enforcement encounters with the public and, ultimately, improve police-community relations. The project calls for building AI tools and infrastructure at Stanford to receive and process footage of routine traffic stops from four Bay Area law enforcement agencies. With this data, the team will evaluate the effectiveness of two initiatives: a statewide legal intervention to improve interactions during traffic stops and a national de-escalation training program.

“With the assistance of large language models and other AI systems, we are now in the position to leverage the power of computation to examine how police make contact with the public,” said social psychologist and project lead Jennifer Eberhardt. “The Hoffman-Yee grant will not only allow us to better understand interactions between the police and the public, but it also will allow us to rigorously test some of the sweeping interventions designed to improve them. We could not be more thrilled to receive this award.”

Guiding principles for building datasets in the age of generative AI

Massive datasets are the cornerstone for developing large language models (LLMs) and other generative AI. But as developers seek to build new datasets — from both real and synthetic sources — legal and policy considerations come into play, from privacy concerns to copyright issues. Focusing on high-impact applications in law and medicine, the team intends to establish guiding principles for assembling datasets and ensuring that they are used responsibly. Then, they plan to develop scalable methods to trace generative AI outputs back to specific training data, enabling data attribution and valuation. Finally, they will investigate how to use and monitor synthetic data produced by generative AI.

Using generative AI to enable human expression through visual media

In the digital era, people communicate stories, ideas, and information through visual content; yet human visual design expertise is scarce. What if generative AI tools could make it easy for anyone to express their ideas using images, videos, and animations, regardless of their design or technical skills? So far, early attempts have fallen short of this potential: AI applications often misinterpret the user’s intent, and users lack a predictive conceptual model of what the AI will produce from a given prompt. A team of scholars representing the Stanford Schools of Engineering, Education, and Humanities and Sciences proposes to draw on human design processes to develop a shared conceptual grounding that lets users and AIs understand each other and produce better visual results. Ultimately, they envision human creators collaborating with generative AI tools through a combination of natural language, example content, and code snippets, taking turns to produce the desired visual media.

A foundation model for building digital “twins” of your cells

Imagine if health care teams could one day simulate how individual patients will respond to drug treatments — considering variables such as sex, age, and comorbidities — before exposing their bodies to a drug. Realizing this kind of medical future requires developing end-to-end frameworks for cell modeling. A research team representing seven Stanford departments within the School of Medicine and School of Engineering has proposed a multipronged approach to develop a human-centered AI called the Grand Unified Cell Model. It will feature a vision-language model (VLM) chat interface that lets biologists access the tools in an intuitive way.

“Today we stand at a transition point, where we can begin to use AI to transform a wealth of data into real understanding of the function of human cells,” Lundberg said. “To showcase the capabilities of our cell model, we will model how dynamic hormonal states for women, over monthly cycles and lifetimes, impact drug effectiveness on cardiovascular disease. As a woman I want to use these models to address current gaps of inequality in medicine, such as women’s health.”

Learn more about Hoffman-Yee Grants and our prior winners. See more details on this year’s winning teams. 

 
