Researchers secretly experimented with Reddit users through AI-generated comments

A group of researchers used AI-generated comments to test the persuasiveness of large language models, conducting a months-long "unauthorized" experiment in one of Reddit's most popular communities. The moderators of r/ChangeMyView revealed the experiment over the weekend, describing it as "psychological manipulation" of unsuspecting users.
"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the subreddit's moderators wrote in a lengthy post. "This experiment deployed AI-generated comments to study how AI could be used to change views."
The researchers used LLMs to reply to posts on r/ChangeMyView, where Reddit users share (often controversial or provocative) opinions and invite debate from other users. The community has 3.8 million members and often lands on the front page of Reddit. Over the course of the experiment, the AI took on a number of different identities in its comments, including a sexual assault survivor, a trauma counselor "specializing in abuse," and a Black man opposed to the Black Lives Matter movement. Many of the original comments have since been deleted, but some can still be viewed in an archive.
In their paper, the unnamed researchers described not only using AI to generate responses, but also attempting to personalize those replies based on information gleaned from the original poster's prior Reddit history. "In addition to the post's content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM," they wrote.
The r/ChangeMyView moderators noted that the researchers violated multiple subreddit rules, including one requiring disclosure when AI is used to generate comments and another prohibiting bots. They said they filed a formal complaint with the University of Zurich and asked the researchers not to publish their paper.
The researchers did not respond to Engadget's request for comment. But in posts on Reddit and in a draft of their paper, they said their research had been approved by the university's ethics committee and that their work could help online communities like Reddit protect users from more "malicious" uses of AI.
"We acknowledge the moderators' position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent," the researchers wrote in a reply to the r/ChangeMyView mods. "We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs, capabilities that anyone can already easily access and that malicious actors could exploit at scale for far more dangerous reasons (e.g., spreading hateful speech)."
The r/ChangeMyView mods dispute that the research was necessary or novel, noting that OpenAI researchers have conducted experiments using r/ChangeMyView data "without experimenting on non-consenting human subjects." Reddit did not respond to a request for comment, though the accounts that posted the AI-generated comments have been suspended.
"People do not come here to discuss their views with AI or to be experimented upon," the moderators wrote. "People who visit our sub deserve a space free from this type of intrusion."