Krishna called on Washington to hold AI developers accountable for flaws in their systems that result in real harm. Meanwhile, companies deploying AI should be responsible when their use of the technology causes problems. For example, an employer should not be able to avoid accusations of employment discrimination simply by using AI, he said.
Krishna argues that AI should not follow the lead of social media, where broad legal protections established at the dawn of the Internet continue to shield companies from legal liability. Instead, he said, AI companies would be more likely to build safer systems that respect existing laws, such as copyright and intellectual property, if violators could be held accountable in court.
He’s now making this argument for greater accountability in Washington. Krishna joined the CEOs of companies including Meta, Google, Amazon and X last month at a forum to advise senators on how to regulate AI. IBM is also among the companies that signed White House commitments to build safe models.
Krishna’s calls for AI creators to be held accountable “probably don’t make me very popular among everyone,” he said. “But I think people are realizing that the moment you start talking about critical infrastructure and critical use cases, the bar for deploying AI goes up.”
He also acknowledged that his own company would be at legal risk under the rules he supports, although he says that risk is limited in part because IBM primarily builds AI models for other companies, which have a financial interest in respecting the law. By contrast, AI models like OpenAI’s ChatGPT or Meta’s Llama 2 are more publicly accessible and therefore exposed to a wider range of users.
Krishna did not explain how lawmakers should enforce this accountability, but his comments echo a recent blog post outlining IBM’s broader AI policy recommendations. He also warned regulators against requiring licenses to develop the technology, a break from competitors like Microsoft, and against shielding AI creators and deployers from legal liability with blanket immunity.
“We can have a voice,” but ultimately it’s lawmakers who have to write the rules, Krishna said. “Congress and the federal government own it. And that’s why we encourage them to think about the rules of liability.”
Late last month, IBM announced that it would provide legal coverage to business customers accused of unintentionally infringing copyright or intellectual property rights through its generative AI. The move follows a similar one by Microsoft, in an effort to allay concerns about generative AI models that draw on vast troves of data from the Internet and other sources to create images, text and other content.
“I believe it will accelerate the market and I think it will set a precedent,” Krishna said. “We insist that everyone be responsible. Now we can debate how to hold people accountable.”
Annie Rees contributed to this report.