For months, Discord has been hosting an invite-only chat for heavy users of Bard, Google's AI-powered chatbot. Google product managers, designers and engineers use the forum to openly debate the effectiveness and usefulness of the AI tool, with some questioning whether the enormous resources devoted to its development are worth it. From a report: "My general rule is not to trust LLM results unless I can independently verify them," Dominik Rabiej, a senior product manager for Bard, wrote in the Discord chat in July, referring to large language models, the AI systems trained on massive quantities of text that form the building blocks of chatbots like Bard and OpenAI's ChatGPT. "Would love to get it to a point that you can, but it isn't there yet."
"The biggest challenge I'm still thinking of: what are LLMs truly useful for, in terms of helpfulness?" said Googler Cathy Pearl, head of user experience for Bard, in August. "Like really making a difference. TBD!" (…) Two participants in Google's Bard community on the Discord chat platform shared details of discussions on the server with Bloomberg from July to October. Dozens of messages reviewed by Bloomberg offer a unique window into how Bard is used and criticized by those who know it best, and show that even the company leaders responsible for developing the chatbot feel conflicted about the tool's potential. Expanding on his comment about not trusting responses generated by large language models, Rabiej suggested limiting people's use of Bard to "creative/brainstorming applications." Using Bard for coding was also a good option, Rabiej said, "since you inevitably check if the code works!"