Hear From an Expert on AI: August 1 (1 pm ET)

“I’d Rather Talk to an AI”: Examining the Moral Risks of Outsourcing Belief Revision to ChatGPT

Presented by Martina Orlandi, Assistant Professor in the Applied Artificial Intelligence Program at Trent University Durham, Canada.

Convincing people that their beliefs are unwarranted is a notoriously challenging task. Nobody enjoys being lectured, but the deeper culprit is that individuals often have non-epistemic interests that motivate them to hold onto certain beliefs. Conspiracy theorists are a case in point. The standard view in both psychology and philosophy holds that conspiracy theorists are drawn to conspiracy theories for non-epistemic reasons, and that reiterating the evidence is not only insufficient for belief revision but can also have a boomerang effect (Douglas et al. 2017; Horne et al. 2015).
      However, in April 2024 a comprehensive study by a group of psychologists at MIT challenged this received wisdom with a surprising result: while individuals struggle to persuade conspiracy theorists that they are wrong, ChatGPT can successfully change their minds by engaging them in evidence-based dialogue (Costello et al. 2024). What's more, this change appears to be durable, lasting for months.
      What should we, as philosophers, make of these results? In this talk, I examine the philosophical import of outsourcing belief revision to AI. Insofar as abandoning false beliefs is epistemically rational, ChatGPT seems to bring about positive consequences by leading conspiracy theorists to revise their beliefs in light of factual evidence. However, I argue that outsourcing belief revision to AI also carries ethical risks because it undermines moral growth. For example, when it comes to morally loaded conspiracy theories that target particular segments of the population (such as those that drive distrust in scientists or target vulnerable minorities), reconciling with the facts can also restore trust in those targets, thereby producing positive collective consequences and strengthening the social fabric. But this can occur only when true beliefs are delivered by other persons. Outsourcing belief revision to ChatGPT necessarily eradicates such positive returns because it undermines this relational benefit.

Published by Nancy Burkhalter

I am in love with words. Trained as a linguist, journalist, and researcher, I write, teach writing, and research everything about writing, especially how writing aids critical thinking. I've taught around the world, including three years in Kazakhstan and a year each in Russia, Saudi Arabia, and Germany.
