Conducting experiments on our fellow humans is a necessary step in the scientific process. But it wasn’t always as tightly regulated as it is today. We are lucky to live in an age where there are so many rules and regulations surrounding human experimentation. And one of the most pivotal aspects of these regulations is consent.
It is both illegal and highly unethical to experiment on anyone without their informed consent. This means the participant needs a complete understanding of what the experiment entails and why it is being conducted. So when the University of Zurich recently ran an experiment on Reddit without getting consent from anyone, it caused quite the stir.
The Persuasion Of AI
As you likely know, Reddit is a social media platform that is divided into small communities known as subreddits. Each subreddit is dedicated to a specific topic. And our story is set on the subreddit r/ChangeMyView. This is one of the more interesting subreddits on the site. Users can post a view or belief they have with the hopes of being challenged on it by other users. As the name implies, users are looking to see if someone can change their currently held view.
Researchers at the University of Zurich wanted to see how persuasive a large language model (LLM) is. ChatGPT is a prime example of an LLM. We have all seen just how far AI has come in recent years, and a subreddit like this is the ideal testing ground to see just how lifelike and persuasive LLMs can be. But an experiment like that requires the audience to be unaware of the conditions for the results to be truly valid, which put the researchers in a tough position. And it is this position that led them to engage in unethical behaviour.
The university allowed its LLM to create over 1,700 posts on the subreddit under numerous different accounts. The posts were designed to fool people into thinking they had been written by a human and, more importantly, to test whether an AI could actively change the mind of a human being.
The Backlash
We aren’t sure exactly how long the experiment had been running before it was uncovered, but the backlash to it was swift and vehement. Naturally, the users of the subreddit were outraged that they had been made to participate in an experiment without their knowledge. The university claimed that it checked every post and comment the LLM made before it was posted, to avoid anything harmful being said. But investigations into the experiment have revealed that some of the LLM accounts made startling claims, such as being victims of sexual assault. Given that these claims were used to actively change the minds of Reddit users, it is understandable that people feel wronged.
The university had originally planned to publish the results of the experiment, but it has since decided against doing so in light of the pushback from the Reddit community. Even so, it is possible to work out how well the LLMs performed by looking at the reactions to their comments and the number of upvotes they received.
For those who aren’t aware, Reddit uses an upvote system instead of the typical ‘Like’ system employed by most social media platforms. Users can upvote or downvote posts and comments to express approval or disapproval, which makes vote counts a useful metric by which to measure the success of the LLMs.
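To make that idea concrete, here is a minimal sketch in Python of how upvote data could be turned into a rough comparison between AI-written and human-written comments. The field names and figures below are assumptions made up purely for illustration; a real analysis would use actual vote counts collected from the subreddit.

```python
# Hypothetical comment records; the numbers are invented for illustration only.
comments = [
    {"author_type": "llm", "upvotes": 140, "downvotes": 12},
    {"author_type": "llm", "upvotes": 85, "downvotes": 30},
    {"author_type": "human", "upvotes": 60, "downvotes": 25},
    {"author_type": "human", "upvotes": 45, "downvotes": 10},
]

def net_score(comment):
    """Reddit-style net score: upvotes minus downvotes."""
    return comment["upvotes"] - comment["downvotes"]

def average_net_score(records, author_type):
    """Average net score across all comments from one author type."""
    scores = [net_score(c) for c in records if c["author_type"] == author_type]
    return sum(scores) / len(scores) if scores else 0.0

print("LLM comments:  ", average_net_score(comments, "llm"))
print("Human comments:", average_net_score(comments, "human"))
```

A crude comparison like this only captures how well comments were received, not whether anyone actually changed their mind, which is why the responses discussed next matter more.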
But it is the responses to the LLM comments that really highlight how effective the experiment was. External analyses have found that the LLM comments were more effective at changing users’ minds than comments written by other users. That is a troubling conclusion, and it has left many people asking whether we should be considering heavier legislation on AI.
The Fallout
The moderators of the subreddit filed an ethics complaint with the university. This, paired with the media outrage when the story initially broke, quickly pushed the university to issue an apology. It said it would not be publishing the results and would not be using the dataset acquired from the experiment in any future research.
But many seem to think the damage has already been done, and the episode has raised a lot of questions about how AI could be used in the future. Reddit hasn’t released any official statement regarding the experiment, but many subreddits have started changing their rules to forbid AI-generated posts and comments.