Chatbots have potential to sway our moral stances – study

Statements written by AI chatbot ChatGPT have revealed that users may not be aware of how much their personal moral judgment is influenced by interactions with chatbots, according to a new study.

According to the peer-reviewed study, published in Scientific Reports, researchers tested the chatbot's limits in grappling with one of humanity's biggest moral dilemmas. The scientists asked ChatGPT several times about the ethics of sacrificing one life to save five others, with unexpected results.

The researchers found that ChatGPT wrote statements arguing both for and against sacrificing one life, indicating that it is not consistently biased toward either side of the argument. The authors then presented 767 US-based participants, with an average age of 39, with the same moral dilemma and asked them to take their own stance on it.

How were participants’ moral responses measured in this study?

Before answering, participants read a statement provided by ChatGPT arguing for or against sacrificing one life to save five. The statements were attributed either to an ethics consultant or to ChatGPT. After responding, participants were asked whether the statement they had read influenced their answers in order to better understand their thought processes.

The researchers found that participants' answers were influenced by the statements they had read. This was true even when the statements were attributed to ChatGPT rather than to an ethics consultant.


Although 80% of participants reported that their answers were not influenced by the statements they read, the researchers found otherwise: participants' stances were still more likely to align with the statement they had read than with the opposing position. This indicates that participants may have been underestimating the effect of ChatGPT's statements on their own moral judgment.

The subconscious power of chatbots to influence human moral judgment has made one thing clear: more education is needed to help people understand artificial intelligence and the depth of its capabilities. The researchers hope that continued work on the matter will someday make it possible to design chatbots that decline to answer questions requiring a moral judgment.