How is the field of artificial intelligence evolving and what does it mean for the future of work, education and humanity? MIT President Sally Kornbluth and OpenAI CEO Sam Altman covered all this and more during a wide-ranging discussion on the MIT campus on May 2.
The success of OpenAI's ChatGPT large language models has helped spur a wave of investment and innovation in artificial intelligence. ChatGPT became the fastest-growing consumer software application in history after its release in late 2022, with hundreds of millions of people using the tool. Since then, OpenAI has also introduced AI-based image, audio, and video generation products, and has partnered with Microsoft.
The event, held in a packed Kresge Auditorium, captured the excitement of the moment around AI, with an eye toward what's next.
“I think most of us remember the first time we saw ChatGPT and we were like, ‘Oh my God, this is so cool!’” Kornbluth said. “Now we’re trying to figure out what the next generation of this is going to be.”
For his part, Altman welcomes the high expectations around his company and more broadly in the field of artificial intelligence.
“I think it's great that for two weeks everyone was freaking out about GPT-4, and then by the third week everyone was like, ‘Come on, where's GPT-5?’” Altman said. “I think that says a lot about human expectations and human striving, and why we all need to (work to) make things better.”
The problems of AI
At the start of their discussion, Kornbluth and Altman discussed the many ethical dilemmas posed by AI.
“I think we've made surprisingly good progress on how to align a system around a set of values,” Altman said. “As much as people like to say, ‘You can't use these things because they're spewing toxic waste all the time,’ GPT-4 behaves the way you want it to, and we're able to get it to follow a given set of values — not perfectly, but better than I expected at this point.”
Altman also pointed out that people disagree on exactly how an AI system should behave in many situations, complicating efforts to create a universal code of conduct.
“How do we decide what values a system should have?” Altman asked. “How do we decide what a system should do? To what extent does society set limits on putting these tools in the hands of users? Not everyone will use them the way we would want, but that's true of tools in general. I think it's important to give people a lot of control … but there are some things a system just shouldn't do, and we'll have to collectively negotiate what those things are.”
Kornbluth acknowledged that doing things like eradicating bias in AI systems would be difficult.
“It's interesting to think about whether or not we can create models that are less biased than we are as human beings,” she said.
Kornbluth also cited privacy concerns associated with the large amounts of data needed to train today's large language models. Altman said society has grappled with these concerns since the dawn of the internet, but AI makes these considerations more complex and important. He also sees entirely new questions raised by the prospect of powerful AI systems.
“How will we manage the trade-offs between privacy, utility, and safety?” Altman asked. “Where we each individually decide to set those trade-offs, and the benefits that become possible if someone lets a system be trained on their entire life, are new things society will have to grapple with. I don't know what the answers will be.”
Regarding concerns about AI's privacy and energy consumption, Altman said he believes advances in future versions of AI models will help.
“What we want out of GPT-5 or 6 or whatever is for it to be the best possible reasoning engine,” Altman said. “It's true that right now the only way we know how to do that is to train it on tons and tons of data. In that process, it learns something about how to do very, very limited reasoning or cognition, or whatever you want to call it. But the fact that it can memorize data, or the fact that it's storing data at all in its parameter space — I think we'll look back and say, ‘That was kind of a weird waste of resources.’ I assume at some point we'll figure out how to separate the reasoning engine from the need for tons of data, or from storing the data in (the model), and be able to treat them as separate things.”
Kornbluth also asked about how AI could lead to job losses.
“One of the things that annoys me most about people who work on AI is when they stand up with a straight face and say, ‘This will never lead to any job losses. It's purely additive. It's all going to be great,’” Altman said. “It's going to eliminate a lot of current jobs, it's going to change the way a lot of current jobs work, and it's going to create entirely new jobs. That always happens with technology.”
The promise of AI
Despite those problems, Altman believes advances in AI will make the field's challenges worth tackling.
“If we spent 1% of the world's electricity training powerful AI, and that AI helped us figure out how to get to non-carbon-based energy or make deep carbon capture better, that would be a massive win,” Altman said.
He also said that the application of AI that interests him most is scientific discovery.
“I believe that (scientific discovery) is the primary driver of human progress and the only way to generate sustainable economic growth,” Altman said. “People are not satisfied with GPT-4. They want things to be better. Everyone wants more and better and faster, and science is how we get there.”
Kornbluth also asked Altman for his advice for students thinking about their careers. He urged students not to limit themselves.
“The most important lesson to learn early in your career is that you can kind of figure anything out, and no one has all the answers at the beginning,” Altman said. “You just stumble your way through, keep a fast iteration speed, and try to drift toward the problems that are most interesting to you; be around the most impressive people you can, and have confidence that you'll successfully iterate to the right thing. … You can do more than you think, faster than you think.”
This advice was part of a larger message Altman had about the need to remain optimistic and work to create a better future.
“The way we teach our young people that the world is totally screwed up and that it's hopeless to try to solve problems, that all we can do is sit in our bedrooms in the dark and think about how awful we are, is a really deeply unproductive streak,” Altman said. “I hope MIT is different from a lot of other college campuses, but you need to make fighting this part of your life's mission. Prosperity, abundance, a better life next year, a better life for our children. That is the only path forward. That is the only way to have a functioning society … and I hope you will all fight against the anti-progress streak, the anti-‘people deserve a great life’ streak.”