Researchers secretly experimented on Reddit users with AI-generated comments
A group of researchers used AI-generated comments to test the persuasiveness of large language models, conducting a months-long "unauthorized" experiment in one of Reddit's most popular communities. The experiment, revealed by the moderators of r/ChangeMyView over the weekend, was described as "psychological manipulation" of unsuspecting users.
"The CMV Mod Team needs to inform the CMV community about an unauthorized experiment conducted by researchers from the University of Zurich on CMV users," the subreddit's moderators wrote in a lengthy post. "This experiment deployed AI-generated comments to study how AI could be used to change views."
The researchers used LLMs to respond to posts on r/ChangeMyView, where Reddit users share (often controversial or provocative) opinions and invite other users to debate them. The community has 3.8 million members and often lands on the front page of Reddit. During the experiment, the AI took on a number of different identities in its comments, including a sexual assault survivor, a trauma counselor "specializing in abuse," and "a Black man opposed to Black Lives Matter." Many of the original comments have since been deleted, but some can still be viewed in an archive created by 404 Media.
In a draft of their paper, the unnamed researchers describe not only using AI to generate replies, but also attempting to personalize those replies based on information gleaned from the original poster's Reddit history. "In addition to the post's content, LLMs were provided with personal attributes of the OP (gender, age, ethnicity, location, and political orientation), as inferred from their posting history using another LLM," they wrote.
The r/ChangeMyView moderators note that the researchers violated multiple subreddit rules, including a policy requiring the disclosure of AI-generated comments and a rule prohibiting bots. They say they filed a formal complaint with the University of Zurich and asked the researchers not to publish their paper.
Reddit appears to be considering some kind of legal action. Chief Legal Officer Ben Lee responded to the controversy on Monday, writing that the researchers' actions were "deeply wrong on both a moral and legal level" and violated Reddit's site-wide rules.
We have banned all accounts associated with the University of Zurich research effort. Additionally, while we were able to detect many of these fake accounts, we will continue to strengthen our inauthentic content detection capabilities, and we have been in touch with the moderation team to ensure we've removed any AI-generated content associated with this research.
We are in the process of reaching out to the University of Zurich and this particular research team with formal legal demands. We want to do everything we can to support the community and ensure that the researchers are held accountable for their misconduct here.
In an email, the University of Zurich researchers directed Engadget to the university's media relations department, which didn't immediately respond to questions. In posts on Reddit and in a draft of their paper, the researchers said their research had been approved by the university's ethics committee and that their work could help online communities like Reddit protect users from more "malicious" uses of AI.
"We acknowledge the moderators' position that this study was an unwelcome intrusion in your community, and we understand that some of you may feel uncomfortable that this experiment was conducted without prior consent," the researchers wrote in a comment responding to the r/ChangeMyView mods. "We believe the potential benefits of this research substantially outweigh its risks. Our controlled, low-risk study provided valuable insight into the real-world persuasive capabilities of LLMs, capabilities that are already easily accessible to anyone and that malicious actors could exploit at scale for far more dangerous reasons (e.g., inciting hate speech)."
The r/ChangeMyView mods dispute that the research was either necessary or novel, noting that OpenAI researchers have conducted experiments using r/ChangeMyView data "without experimenting on non-consenting human subjects."
"People do not come here to discuss their views with AI or to be experimented upon," the moderators wrote. "People who visit our sub deserve a space free from this type of intrusion."
Update, April 28, 2025, 3:45PM PT: This story has been updated to add details from a statement by Reddit's Chief Legal Officer.