Exploring AI safety, adaptability, and efficiency for the real world
Next week marks the start of the 40th International Conference on Machine Learning (ICML 2023), which will take place July 23-29 in Honolulu, Hawai’i.
ICML brings together the artificial intelligence (AI) community to share new ideas, tools and data sets, and build connections to advance the field. From computer vision to robotics, researchers from around the world will present their latest advances.
Our Director for Science, Technology and Society, Shakir Mohamed, will give a talk on machine learning with social purpose, covering how the field can address health and climate challenges, adopt a sociotechnical view, and strengthen global communities.
Google DeepMind researchers are presenting more than 80 new papers at ICML this year. Since many papers were submitted before Google Brain and DeepMind joined forces, papers originally submitted under a Google Brain affiliation will be covered on the Google Research blog, while this blog features papers submitted under a DeepMind affiliation.
AI in the (simulated) world
The success of AI that can read, write, and create is underpinned by foundation models – AI systems trained on large datasets that can learn to perform many tasks. Our latest research explores how we can translate these efforts into the real world, and lays the groundwork for more generally capable and embodied AI agents that can better understand the dynamics of the world, opening up new possibilities for more useful AI tools.
In an oral presentation, we present AdA, an AI agent that can adapt to solve new problems in a simulated environment, much as humans do. In just a few minutes, AdA can take on challenging tasks: combining objects in novel ways, navigating unseen terrain, and cooperating with other players.
Likewise, we show how we could use vision-language models to help train embodied agents – for example, telling a robot what it's doing.
The future of reinforcement learning
To develop responsible and trustworthy AI, we need to understand the goals at the heart of these systems. In reinforcement learning, one way a goal can be defined is through reward.
In an oral presentation, we aim to settle the reward hypothesis, first postulated by Richard Sutton, which states that all goals can be thought of as maximizing expected cumulative reward. We explain the precise conditions under which it holds, and clarify the types of goals that can – and cannot – be captured by reward in a general form of the reinforcement learning problem.
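As a concrete reminder of the objective the hypothesis concerns (this is the textbook definition, not the paper's formalism), the cumulative discounted reward of a trajectory can be computed as:

```python
def discounted_return(rewards, gamma):
    """Cumulative discounted return: G = r_0 + gamma*r_1 + gamma^2*r_2 + ..."""
    g = 0.0
    for r in reversed(rewards):  # fold from the end: g_t = r_t + gamma * g_{t+1}
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

The reward hypothesis asks whether every goal an agent could pursue can be expressed as maximizing the expected value of a quantity of this form.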
When deploying AI systems, they need to be robust enough for the real world. We look at how to better train reinforcement learning algorithms under constraints, since AI tools often need to be limited for safety and efficiency reasons.
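One common way to train under constraints (a generic Lagrangian sketch, not necessarily the method in the paper – all names and numbers below are illustrative) is to have the agent maximize reward minus a penalty on a safety cost, while a multiplier is adjusted so the average cost stays within a budget:

```python
def shaped_reward(reward, cost, lam):
    # The agent optimizes this surrogate instead of the raw reward.
    return reward - lam * cost

def update_multiplier(lam, avg_cost, budget, lr=0.1):
    # Dual ascent: raise the penalty when the constraint is violated,
    # lower it (never below zero) when there is slack.
    return max(0.0, lam + lr * (avg_cost - budget))

lam = 0.0
for avg_cost in [0.9, 0.8, 0.6, 0.4]:  # pretend measured per-episode costs
    lam = update_multiplier(lam, avg_cost, budget=0.5)
```

The appeal of this scheme is that the constraint is enforced automatically: the multiplier grows until exceeding the cost budget is no longer worthwhile for the agent.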
In our research, which was recognized with a 2023 ICML Outstanding Paper Award, we explore how we can teach models complex long-term strategy under uncertainty with imperfect-information games. We explain how models can play to win two-player games even without knowing the other player's position and possible moves.
Challenges at the AI frontier
Humans easily learn, adapt, and make sense of the world around them. Developing advanced AI systems that can generalize the way humans do will help create AI tools we can use in our daily lives, and to tackle new challenges.
One way AI adapts is by quickly changing its predictions in response to new information. In an oral presentation, we examine plasticity in neural networks, how it can be lost over the course of training, and ways to prevent that loss.
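One rough proxy sometimes used for lost plasticity (an illustrative measure, not necessarily the one in the paper) is the fraction of ReLU units that never activate on a batch: such dormant units receive no gradient and stop contributing to learning. A minimal sketch:

```python
import numpy as np

def dormant_fraction(pre_activations):
    """Fraction of ReLU units that are inactive across a whole batch.

    pre_activations: array of shape (batch, units), values before the ReLU.
    """
    active = (pre_activations > 0).any(axis=0)  # did the unit fire for any input?
    return 1.0 - active.mean()

batch = np.array([[-1.0,  2.0, -3.0],
                  [-0.5,  1.0, -0.1]])  # units 0 and 2 never fire
print(dormant_fraction(batch))          # 2 of 3 units are dormant
```

Tracking a statistic like this over training makes it possible to see plasticity degrading before task performance visibly stalls.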
We also present research that could help explain the in-context learning that emerges in large language models, by studying neural networks meta-trained on data sources whose statistics change spontaneously, as in natural language prediction.
In an oral presentation, we present a new family of recurrent neural networks (RNNs) that perform better on long-range reasoning tasks, unlocking the promise of these models for the future.
Finally, in ‘Quantile Credit Assignment’, we propose an approach to disentangling luck from skill. By establishing a clearer relationship between actions, outcomes, and external factors, AI can better understand complex real-world environments.