
Google’s AI Overviews explain made-up idioms with confident nonsense

Language can seem almost infinitely complex, with inside jokes and idioms sometimes meaningful only to a small group of people and meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week, as the internet blew up over Google Search AI Overviews’ ability to define phrases that have never been said before.

What, you’ve never heard the phrase “blast like a brook trout”? Of course, I just made it up, but Google’s AI Overview results told me it’s a “colloquial way of saying something explodes or quickly becomes a sensation,” likely referring to the striking colors and markings of the fish. No, it doesn’t make sense.


The trend may have started on Threads, where author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched for “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate that diamonds can be created under high pressure.

It moved to other social media sites, like Bluesky, where people shared Google’s explanations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.

Things took off from there.

Screenshot of a Bluesky post by Sharon Su (@Doodlyroses.com) that says “Wait, this is amazing,” alongside a screenshot of a Google search for “You can’t carve pretzels with good will.” Google’s AI Overview reads: The proverb “you can’t carve pretzels with good will” emphasizes that even with the best intentions, the end result can be unpredictable or even negative, especially when complex or delicate tasks are involved. Pretzels, with their twisted and potentially intricate shapes, represent tasks that require precision and skill, not just goodwill. Here’s a breakdown of the saying: “Carving pretzels”: This refers to the act of making or shaping pretzels, a task that requires careful handling and technique.

Screenshot by Jon Reed/CNET

Screenshot of a Bluesky post by Livia Gershon (@liviagershon.bsky.social) that says “This is so amazing,” alongside a screenshot of a Google Search AI Overview that reads: The idiom “you can’t catch a camel to London” is a humorous way of saying something is impossible or impractical to achieve. It’s a comparison, meaning that trying to catch a camel and transport it to London is so absurd or impractical that it serves as a metaphor for a nearly impossible or pointless task.

Screenshot by Jon Reed/CNET

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

“They are designed to produce fluent, plausible-sounding responses,” said Yafang Li, an assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-true stories about Google’s AI Overviews giving incredibly wrong answers to basic questions – like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least somewhat more harmless because it doesn’t center on actionable advice. I mean, I hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same – a large language model, like Google’s Gemini behind AI Overviews, tries to answer your question and offer a workable response. Even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have accuracy rates on par with other Search features.

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the spokesperson said. “This is true of Search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

This particular situation is a “data void,” where there isn’t much relevant information available for the search query. Google is working to limit when AI Overviews appear on searches without enough information and to prevent them from providing misleading, satirical or unhelpful content, the spokesperson said. Google uses information about queries like these to better understand when AI Overviews should and should not appear.

You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched for “like glue on pizza,” and it didn’t trigger an AI Overview.

The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice,” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, however, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you may not survive to do it again.”

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, According to Our Experts

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs’ tendency to make things up – what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that may sound plausible or accurate but isn’t rooted in reality.

LLMs are “not fact generators,” Li said; they simply predict the next logical bits of language based on their training.

A majority of AI researchers surveyed recently said they doubt AI’s accuracy and trustworthiness issues will be resolved soon.

The fake definitions show not just the inaccuracy of LLMs but also their confident inaccuracy. If you asked a person the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you’d probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often respond with the same confidence as if you’d asked about a real idiom.

In this case, Google said the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlighted “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.

This humorous trend does carry an ominous lesson: Don’t trust everything you see from a chatbot. It might be making things up out of thin air, and it won’t necessarily indicate that it’s uncertain.

“This is a perfect moment for educators and researchers to use these scenarios to teach people how meaning is made, how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”

Be careful what you search for

Since you can’t count on an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt.

“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely answer for it,” Li said.

The solution is to introduce skepticism into your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom. Ask whether it’s real. Li suggests asking, “Is this a real idiom?”

“That may help the model recognize the phrase instead of just guessing,” she said.
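If you query a chatbot through code rather than the search box, the same advice applies to API prompts. Here’s a minimal sketch in Python, assuming the OpenAI Python SDK, a placeholder model name and a made-up phrase; the point is simply to phrase the prompt as “is this real?” rather than “what does this mean?”

```python
# Minimal sketch: build skepticism into the prompt instead of asking for a
# definition outright. Assumes the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set; the model name and phrase are placeholders.
from openai import OpenAI

client = OpenAI()

phrase = "you can't lick a badger twice"  # any novel, made-up phrase

# Asking whether the phrase is real invites the model to say no,
# rather than confabulating a confident definition.
skeptical_prompt = (
    f'Is "{phrase}" a real idiom? '
    "If it is not, say so plainly instead of inventing a meaning."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": skeptical_prompt}],
)

print(response.choices[0].message.content)
```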


