Many believe that artificial intelligence (AI) tools have been successful, especially the popular ChatGPT app. Or have they?
This week, on the national stage, a Virginia Tech professor known for her research on the history and culture of computing and the Internet will challenge how society defines the success and failure of AI.
Janet Abbate, an Arlington-based professor of science, technology and society, will speak Friday at a congressional briefing on Capitol Hill to discuss historical perspectives on the challenges posed by AI. The briefing, hosted by the American Historical Association, will feature Abbate and professors from Princeton University, Columbia University and the University of Minnesota.
“When we automate something done by human intelligence, we define what intelligence is,” said Abbate, who wrote two books, “Inventing the Internet” and “Recoding Gender: Women's Changing Participation in Computing.”
“There is a narrow idea of what is included in intelligence,” she said. “It's great if you want to be a computer that plays chess, but now we have AI that does very social things. We did not include social intelligence.”
Abbate said she hopes lawmakers will consider some of her questions when setting policies for the nation.
“These fundamental questions that should be asked at the beginning are not being asked at all,” she said.
Abbate will discuss the following:
AI lacks moral reasoning and social intelligence.
“We cannot automate moral reasoning or an ethic of care,” Abbate said.
Meanwhile, social intelligence arises when people live in a society where their actions have consequences for others.
“AI has prioritized socially disconnected forms of intelligence that may not be appropriate for current uses of AI expected to occur in interpersonal social interactions,” she said.
What it means for AI to replace or become equivalent to a human being.
AI produces an expected or stereotypical version of a human being.
“Part of AI's success in imitating or replacing a human being is that we see what we want or expect to see and respond to conversational cues with our social instincts,” Abbate said. “We're filling in the blanks, and in doing so we're exaggerating the intelligence of the computer.”
What it means for AI to solve a problem.
It's important that AI tools have defined success criteria, she said. For example, if AI needs to be trained, who decides what the appropriate labels are?
Failure also needs a criterion.
“Should we allow things that steal people's intellectual property? Or that are used to harass or impersonate people?” said Abbate. “Failure is not the absence of success. We have a lot of things that are commercial successes and societal failures. This is where policy comes in.”