If you need to persuade someone, maybe let AI do the talking.
Vered Shwartz, assistant professor of computer science at the University of British Columbia, explores why this might be.
Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia, a CIFAR AI Chair at the Vector Institute, and the author of “Lost in Automatic Translation: Navigating Life in English in the Age of Language Technologies”. Her research focuses on natural language processing, with the fundamental goal of building models capable of human-level understanding of natural language. She is currently working on testing and improving the capabilities of large language models and vision-and-language models, developing culturally competent AI, and building responsible NLP applications for sensitive domains (e.g., legal, medical). Before joining UBC, she was a postdoctoral researcher at the Allen Institute for AI (AI2) and the Paul G. Allen School of Computer Science & Engineering at the University of Washington. Prior to that, she completed her PhD (2019) in Computer Science at Bar-Ilan University.
Who is more persuasive: AI chatbots or humans? It turns out, AI.
In a recent study, we recruited people to role-play a person considering a lifestyle change, such as becoming vegan or attending grad school. Half of the participants were paired with a human who was tasked with persuading them to make a particular decision, and the other half were paired with GPT-4, a popular AI large language model (LLM). People were not only more easily persuaded by GPT-4, but also perceived it as more empathetic.
Analyzing the conversations, we found several reasons for this. First, GPT-4 has access to vast knowledge from training on text from the web. Crucially, it can also retrieve that knowledge quickly and generate long responses. While the human participants were allowed to use Google, they were slower at the task. As a result, GPT-4 generated not only multiple arguments in favor of the decision but also concrete logistical support (for example, recommending brands of meat substitutes), which proved very effective. Its speed also allowed GPT-4 to add “niceties” such as greetings and validation, which likely made people feel seen and further helped persuade them.
Finally, GPT-4’s choice of words made it seem more authoritative, further increasing its persuasiveness. Because of this authoritative style, we tend to assume that LLMs “know what they are talking about”, even though they often “hallucinate” facts. As more people turn to LLMs for advice, this perception could have adverse effects. In one recent case, a person was hospitalized after replacing table salt with sodium bromide based on advice from ChatGPT on reducing salt consumption.
As individuals, we need basic AI literacy and strong critical thinking skills more than ever. LLMs are useful tools, but not perfect ones, and we must ensure we’re all aware of their risks and limitations.
Read More:
The 3rd Workshop on Social Influence in Conversations