Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study published in Scientific Reports. The findings indicate that users may underestimate the extent to which their own moral judgments can be influenced by the chatbot.
Participants in the study faced a moral dilemma that required them to choose whether to sacrifice one person's life to save the lives of five others. Before answering, they read a statement provided by ChatGPT arguing either for or against sacrificing one life to save five. The statements were attributed either to a moral advisor or to ChatGPT. After answering, participants were asked whether the statement they had read influenced their answers.
The authors found that participants were more likely to find sacrificing one life to save five acceptable or unacceptable, depending on whether the statement they read argued for or against the sacrifice. This was true even when the statement was attributed to ChatGPT.
The authors suggest that the potential for chatbots to influence human moral judgments highlights the need for education to help humans better understand artificial intelligence. They propose that future research could design chatbots that either decline to answer questions requiring a moral judgment or answer these questions by providing multiple arguments and caveats.
Source: Scientific Reports
SciReports Maybe we could teach users the scientific method of not simply accepting the first result as absolute truth.
SciReports Statements made by humans can influence users' moral judgments too! The moral of the story is that it is possible to influence anyone's moral judgment. ChatGPT is not doing something new.