
The trend of AI-generated action figures has spread all over social media. But is it just harmless fun?

If you have browsed social media recently, you may have noticed a lot of … dolls.

There are dolls all over X and Facebook feeds. Instagram? Dolls. TikTok? You guessed it: dolls, plus tutorials on how to make them. There are even dolls on LinkedIn, arguably the most serious member of the gang.

You can call it the Barbie AI treatment or the Barbie box trend. Or, if Barbie isn't your thing, there's the AI action figure, the action figure starter pack, or the ChatGPT action figure trend. But whatever you call it, the dolls seem to be everywhere.

Although they share some similarities (the look of Mattel's Barbie dolls, personality-driven accessories, plastic smiles and boxed packaging), they are as different as the people posting them, apart from one key trait they have in common: they are not real.

In the latest trend, people are using generative AI tools such as ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It has turned out to be very popular, and not just with influencers.

Celebrities, politicians and major brands have all jumped in. Journalists covering the trend have made versions of themselves with microphones and cameras (although this journalist won't subject you to that). From billionaire Elon Musk to actress and singer Ariana Grande, users have made versions of almost any famous person you can imagine.

According to the tech news website The Verge, the trend actually started on LinkedIn, the professional social networking site, where it was popular among marketers looking for engagement. As a result, many of the dolls you see there are meant to promote a business or a hustle. (Think "social media marketer" dolls or "SEO manager" dolls.)

But it has since spilled over to other platforms, where everyone seems happy to find out whether life in plastic really is that great. That said, it isn't all harmless fun, according to several AI experts who spoke to CBC News.

"It's still the wild west when it comes to generative AI," said Anatoliy Gruzd, a professor and director of research at Toronto Metropolitan University's Social Media Lab.

"Most policy and legal frameworks have not fully caught up with the innovation, which leaves it to AI companies to decide how to use the personal data you provide."

Privacy Issues

Matthew Guzdial, an assistant professor of computer science at the University of Alberta, said that from a sociological perspective, the popularity of the doll-generation trend is not at all surprising.

"These kinds of internet trends have been around for as long as we've had social media. Maybe it used to be a forwarded email or a quiz whose results you could share," he told CBC News.

But, as with any AI trend, there are some concerns about how it uses data.

Generally speaking, generative AI presents significant data privacy challenges. As the Stanford Institute for Human-Centered Artificial Intelligence (Stanford HAI) points out, data privacy issues are nothing new on the internet, but AI is so "data-hungry" that it increases the scale of the risk.

"When you give away very personal data, such as your face, your job or your favourite colours, you should do so with the understanding that the data isn't only being used to produce the immediate result (such as a doll)," said Wendy Wong, a political science professor at the University of British Columbia who studies AI and human rights.

Wong explained that the data is fed back into these systems to help them generate future answers.

A photo illustration taken on April 2 shows a woman looking at a Facebook user's profile featuring images generated by artificial intelligence in the style of Studio Ghibli animation, a trend that took off online after the release of ChatGPT's image generator. (AFP via Getty Images)

Additionally, Stanford HAI noted there are concerns that "bad actors" could use data scraped online to target people. In March, for example, Canada's Competition Bureau warned of a rise in AI-related fraud.

According to new research from TMU's Social Media Lab, about two-thirds of Canadians have tried using generative AI tools at least once. But of the 1,500 people surveyed, few had much understanding of how these companies collect or store personal data, the report said.

Gruzd, who runs the lab, recommends erring on the side of caution when using these new apps. If you do decide to experiment, though, he suggests looking in the settings for an option to opt out of having your data used for training or other third-party purposes.

"If no such option exists, you may want to reconsider using the app; otherwise, don't be surprised if your likeness shows up in unexpected contexts, such as online ads."

Environmental and cultural impact of AI

Then there's the environmental impact. As CBC's Quirks and Quarks has previously reported, AI systems are an energy-intensive technology with the potential to consume as much electricity as an entire country.

One study out of Cornell University estimated that training OpenAI's GPT-3 language model in Microsoft's U.S. data centers could directly evaporate 700,000 liters of clean fresh water. Goldman Sachs has estimated that AI will drive a 160 per cent increase in data center electricity demand.

WATCH | Breaking down the climate impact of AI:

The energy needed for generative AI leaves behind a considerable carbon footprint, but the technology is also increasingly being used as a tool for climate action. CBC's Nicole Mortillaro breaks down where AI emissions come from and how the technology is being used to help the planet.

According to some estimates, the average ChatGPT query uses about 10 times more power than a Google search.

Even OpenAI CEO Sam Altman has acknowledged the strain caused by the popularity of image generation, writing on X last month that the company had to temporarily introduce some limits while it worked to make the feature more efficient, because its graphics processing units were "melting."

Meanwhile, as AI-generated dolls take over our social media feeds, posts using the hashtag #StarterPackNoAI are also spreading, shared by artists concerned about the devaluation of their work.

The previous AI trend to attract attention, in which users generated images of themselves in the style of the Tokyo animation house Studio Ghibli, sparked a debate over whether it amounted to stealing the work of human artists.

But concerns aside, Guzdial said the trends are a positive, at least for AI companies trying to grow their user base. These models are very expensive to train and to keep running, he said, but if enough people use them and come to rely on them, the companies can raise their subscription prices.

"That's why these kinds of trends are so beneficial for these companies, which are deeply in the red."


