Embracing Dialectic Intersubjectivity: Coordination of Differential Perspectives in Content Analysis with LLM Persona Simulation
Published in Social Science Computer Review, 2026
Recommended citation: Kang, T., Thorson, K., Peng, T. Q., Lee, S., Hiaeshutter-Rice, D., & Soroka, S. (2026). Embracing dialectic intersubjectivity: Coordination of different perspectives in content analysis with LLM persona simulation. Social Science Computer Review. https://doi.org/10.1177/08944393251410155
Abstract
This study attempts to advance automated content analysis from consensus-oriented to coordination-oriented practices, thereby embracing diverse coding outputs and exploring the dynamics among differential perspectives. As an exploratory investigation, we evaluate six GPT-4o configurations that analyze sentiment toward Biden and Trump in Fox News and MSNBC transcripts from the 2020 US presidential campaign. By assessing each model’s alignment with partisan perspectives, we explore how partisan selective processing can be identified in LLM-Assisted Content Analysis (LACA). The findings indicate that LLM-based partisan persona simulations reflect politically polarized standpoints across partisan groups, revealing a pronounced divergence in sentiment analysis between Democrat-aligned and Republican-aligned persona models. This pattern is evident in intercoder reliability metrics, which are higher among same-partisan than cross-partisan persona model pairs. Results also suggest that LLM partisan simulations exhibit stronger ideological biases when analyzing politically congruent content. This approach enhances nuanced understanding of LLM outputs, advances the integrity of AI-driven social science research, and may enable simulations with real-world implications.
Keywords
large language models, text and content analysis, agent-based modeling, measurement, political sociology and culture
