
Researchers Accuse 60 Minutes and Google of Overselling AI

After CBS aired a high-profile 60 Minutes interview with Google CEO Sundar Pichai on Sunday, AI researchers are calling out both the network and Google for overselling AI. In the episode, reporter Scott Pelley said that a Google AI program had taught itself a language it had never seen before, and Pichai called AI technology a “black box” that even people who work in the field don’t fully understand.

“Of the AI issues we talked about, the most mysterious is called ‘emergent properties,’” Pelley said in the segment. “Some AI systems are teaching themselves skills that they aren’t expected to have. How this happens is not well understood.”

The segment then cuts to a video of an AI program made by Google that the network doesn’t name. The video shows a person asking the program questions in Bengali, a language spoken in Bangladesh and India, and the program answering in both Bengali and English. The software in question is called PaLM, and it uses the same technology as Google’s newly released AI chatbot Bard.

Pelley said that the program “adapted on its own” after being asked questions in Bengali, a language he said it had not been trained to know.

“We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali,” James Manyika, a Google vice president also interviewed by 60 Minutes, said on the segment. “So now, all of a sudden, we have a research effort where we’re now trying to get to a thousand languages.”


Two well-known AI experts questioned these claims in widely shared Twitter threads. Margaret Mitchell, a researcher and ethicist at the AI startup Hugging Face who previously co-led Google’s AI ethics team, pointed out that PaLM was in fact trained on Bengali, according to a paper written by Google’s own researchers. The paper says that 0.026% of PaLM’s training data was in Bengali, which sounds tiny but, given PaLM’s reported training corpus of roughly 780 billion tokens, still works out to on the order of 200 million Bengali tokens.

“By prompting a model trained on Bengali with Bengali, it will quite easily slide into what it knows of Bengali: This is how prompting works,” Mitchell tweeted. It is not possible, she added, for AI “to speak well-formed languages that you’ve never had access to.”
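To illustrate the mechanism Mitchell describes, here is a minimal sketch of few-shot prompting, not Google’s actual setup: the model and example sentences below are placeholders chosen for illustration. A language model that saw even a modest amount of Bengali during pretraining can continue a Bengali-to-English pattern laid out in the prompt, without ever having been explicitly trained to translate.

```python
# Minimal sketch of few-shot prompting with an off-the-shelf multilingual model.
# BLOOM-560M is a stand-in, not PaLM; the point is only that a model exposed to
# some Bengali in pretraining can continue a translation-style pattern.
from transformers import pipeline

generator = pipeline("text-generation", model="bigscience/bloom-560m")

# A few Bengali -> English example pairs, then an unfinished pair for the model
# to complete by following the pattern.
prompt = (
    "Bengali: আমি ভাত খাই\nEnglish: I eat rice.\n"
    "Bengali: আমি বই পড়ি\nEnglish: I read books.\n"
    "Bengali: আমি জল খাই\nEnglish:"
)

output = generator(prompt, max_new_tokens=12, do_sample=False)
print(output[0]["generated_text"])
```

The model is not “teaching itself” Bengali at prompt time; it is drawing on whatever Bengali it absorbed during pretraining and following the pattern the prompt sets up.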

A CBS representative did not respond to BuzzFeed News’ request for an on-the-record statement. Google first unveiled PaLM last year at its annual developer conference, where Pichai demonstrated onstage that the software could understand and answer questions in Bengali.

“What is so impressive is that PaLM has never seen parallel sentences between Bengali and English,” Pichai said at that event. “It was never explicitly taught to answer questions or translate at all. The model brought all of its capabilities together to answer questions correctly in Bengali, and we can extend the technique to more languages and other complex tasks.”

Jason Post, a Google spokesman, told BuzzFeed News that the company had never said it didn’t train PaLM in Bengali. “While the PaLM model was trained on basic sentence completion in many languages, including English and Bengali, it was not trained to 1) translate between languages, 2) answer questions in a Q&A format, or 3) translate information across languages while answering questions,” Post said in a statement. “It taught itself these new skills on its own, which is a very impressive feat.”

Emily M. Bender, a professor and researcher at the University of Washington, wrote on Twitter about the 60 Minutes segment and disagreed with Manyika’s statements. Bender told BuzzFeed News that the claim that the computer can translate “all of Bengali” is “unscoped and unproven.”

“What does it mean to translate ‘all of Bengali’?” Bender tweeted. “How did they test this?” She also said that Manyika’s remarks glossed over the fact that Bengali text was in the training data.

Several other people in the tech space also publicly criticized CBS and Google.

Bender wrote on Twitter that “emergent properties” seems to be the polite way to say “AGI.” AGI stands for “artificial general intelligence,” a term for a hypothetical technology that could learn on its own and do things better than humans. “It’s still a load of crap,” she said.

Mitchell was equally blunt on Twitter. “Maintaining the belief in ‘magic’ properties, and amplifying it to millions (thanks for nothin @60Minutes!) serves Google’s PR goals,” Mitchell tweeted. “Unfortunately, it is disinformation.”

In a separate Medium post, Bender urged companies like Google to “take into account the needs and experiences of those your tech affects” instead of presenting AI as “some mysterious, magical, autonomous being.”

Bender told BuzzFeed News that misinformation about AI can cause real harm. “When tech leaders make it hard to understand how the technology actually works and try to make us think it has mysterious ‘emergent’ properties, it makes it harder to come up with the right rules,” Bender said. “It is very important right now that we hold companies responsible for the technology they put out into the world and don’t let them put that responsibility on so-called AI systems.”
