Dialogue between people with different moral expressions and LLMs on abortion: a natural language processing analysis of human-AI interaction
Format
Thesis
Abstract
The purpose of this study is to explore whether hidden biases and inequities exist in Large Language Models (LLMs) and under what conditions they are evoked. Using the topic of abortion as an entry point, the study draws on Moral Foundations Theory (MFT) and uses targeted participatory crowdsourcing to allow participants from different groups to debate with GPT. The study applies Natural Language Processing (NLP) and basic statistics to analyze the relationships and differences between participants' and GPT's moral expressions across five moral dimensions. Through moral expression quantification, topic modeling, correlation analysis, multiple linear regression, one-way ANOVA, cosine similarity analysis, and the Kolmogorov-Smirnov test, this study evaluates the consistency, differences, and preference tendencies of the expressions of GPT and participants on the five moral dimensions. The results indicate that GPT's moral responses show a general "structural pandering effect," aligning with participants' moral expressions. However, this pandering is not balanced across moral dimensions and is structurally influenced. In this study, GPT is most aligned with participants on the Harm/Care and Sanctity/Degradation dimensions, showing a significant structural preference. In addition, as topics transition from individual rights to social responsibility, GPT's moral expression shifts toward a more liberal position, increasing its deviation from participants' moral expressions. These findings suggest that GPT's moral expressions can mimic human expressions, but are also unavoidably influenced by values implicit in its training corpus that are activated by specific topics.
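The two alignment measures named in the abstract, cosine similarity and the two-sample Kolmogorov-Smirnov test, can be sketched as follows. The five-dimension scores below are purely illustrative placeholders (not data from the study), and all variable names are assumptions for this sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical moral-expression scores on five MFT dimensions
# (Care, Fairness, Loyalty, Authority, Sanctity); values are illustrative only.
participant = np.array([0.62, 0.48, 0.21, 0.18, 0.35])
gpt = np.array([0.70, 0.50, 0.15, 0.12, 0.40])

# Cosine similarity: how closely the two moral-expression profiles align
# in direction, independent of overall magnitude.
cos_sim = participant @ gpt / (np.linalg.norm(participant) * np.linalg.norm(gpt))

# Two-sample Kolmogorov-Smirnov test: do the two sets of dimension scores
# plausibly come from the same underlying distribution?
ks_stat, p_value = stats.ks_2samp(participant, gpt)

print(f"cosine similarity: {cos_sim:.3f}")
print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.3f}")
```

A cosine similarity near 1 would indicate strong profile alignment (the "pandering" the abstract describes), while a significant KS statistic would flag a distributional mismatch on particular dimensions.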
Degree
M.A.
