Decades of data show toxicity is a staple of online conversations


In a study published in Nature, researchers found evidence of consistent patterns of toxicity in human conversations across social media platforms, regardless of platform type, discussion topic, or time period. The study found that longer online conversations tend to escalate in toxicity and polarization, particularly when they involve conflicting viewpoints. Surprisingly, such toxic interactions do not deter users from participating.

Previous research has focused on polarization, misinformation, and antisocial behavior online, but a comprehensive understanding of how intrinsic human behavior patterns manifest on these platforms remains elusive. The new study aimed to fill that gap by exploring the inherent dynamics of toxicity across various digital environments.

“Social media platforms have become central to communicating, gathering information, and forming opinions,” explained study author Walter Quattrociocchi, a full professor of computer science at Sapienza University of Rome and author of Polarizzazioni: informazioni, opinioni e altri demoni nell’infosfera.

“However, the prevalence of toxicity undermines these processes and can have detrimental effects on users’ mental health and the quality of public discourse. By studying these patterns over a long period, we hoped to uncover underlying mechanisms and potential solutions to mitigate such behaviors, ultimately contributing to healthier digital environments.”

The research team collected data from eight different social media platforms, amassing approximately 500 million comments spanning 34 years. This large dataset included widely used platforms such as Facebook, Twitter, and Reddit, as well as less mainstream platforms like Gab and Voat. It also included comments from USENET, a worldwide distributed discussion system established in 1980, more than a decade before the World Wide Web opened to the general public.

The comments collected were associated with various topics such as politics, news, environment, and vaccinations. This diversity in topics helped to minimize the thematic biases that might affect the nature of online conversations and allowed a more generalized understanding of toxicity across different discussion contexts.

“Analyzing multiple platforms is key to isolating genuinely human behavioral patterns from simple reactions to the idiosyncratic online environments,” said co-author Andrea Baronchelli, a professor of complexity science at City, University of London. “The attention is too often focused on the specific platform, forgetting human nature. Our study is an important step to change this attitude and move the spotlight back on who we are and how we act.”

To analyze the collected comments for toxicity, the researchers utilized the Perspective API, a state-of-the-art machine learning tool developed by Google. This API is designed to detect the presence of toxic language, defined in this study as “rude, disrespectful, or unreasonable comments likely to make someone leave a discussion.” This definition allowed the researchers to quantify and compare the level of toxicity across different platforms and timeframes.

The Perspective API assigns a toxicity score to each comment, which the researchers used to determine the prevalence and distribution of toxic comments within their dataset. By employing such an automated tool, the study could handle the vast volume of data efficiently.
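For readers who want to experiment with the same tool, here is a minimal sketch of scoring a single comment with the Perspective API. It is an illustration, not the authors' pipeline: the endpoint and request shape follow Google's public documentation, while the API key and the example comment are placeholders.

```python
# Minimal sketch: score one comment's toxicity via the Perspective API.
# API_KEY is a placeholder; obtain a real key from Google Cloud.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0 to 1) for a single comment."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # ask the API not to retain the submitted text
    }
    response = requests.post(URL, json=payload, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

if __name__ == "__main__":
    print(toxicity_score("Thanks, that was a really helpful explanation."))
```

The summary score ranges from 0 to 1; in studies like this one, researchers typically apply a fixed cutoff to that score to classify each comment as toxic or not.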

One of the key discoveries was that the longer an online conversation continues, the more likely it is to become toxic. This pattern held true irrespective of the social media platform, the topic of discussion, or the historical context in which the conversation occurred. This suggests that as discussions drag on, they tend to devolve into more polarized and hostile exchanges.
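As a rough illustration of how such a trend can be quantified, the hypothetical sketch below bins comments by their normalized position within a conversation and computes the fraction of toxic comments per bin. The toy data, the 0.6 threshold, and the three-bin scheme are illustrative assumptions, not the study's exact method.

```python
# Illustrative sketch: does the fraction of toxic comments rise as a
# conversation progresses? Toy data and thresholds are assumptions.
from collections import defaultdict

# (conversation_id, position_in_thread, toxicity_score) triples
comments = [
    ("t1", 0, 0.05), ("t1", 1, 0.10), ("t1", 2, 0.70),
    ("t2", 0, 0.02), ("t2", 1, 0.65), ("t2", 2, 0.80),
]

TOXIC = 0.6   # assumed cutoff above which a comment counts as toxic
N_BINS = 3    # coarse bins over the normalized thread position

# Group comments by conversation so positions can be normalized per thread.
threads = defaultdict(list)
for cid, pos, score in comments:
    threads[cid].append((pos, score))

toxic_count = [0] * N_BINS
total_count = [0] * N_BINS
for thread in threads.values():
    thread.sort()                        # order comments by position
    length = max(len(thread) - 1, 1)     # avoid division by zero
    for pos, score in thread:
        b = min(int(pos / length * N_BINS), N_BINS - 1)
        total_count[b] += 1
        toxic_count[b] += score > TOXIC

for b in range(N_BINS):
    frac = toxic_count[b] / total_count[b] if total_count[b] else 0.0
    print(f"bin {b}: {frac:.2f} toxic")   # toy data shows a rising trend
```

On this toy input the toxic fraction climbs from 0.00 in the first bin to 1.00 in the last, mirroring in miniature the escalation pattern the study reports at scale.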

“Toxic behavior is pervasive across all types of social media platforms and discussions, even in non-polarizing contexts,” Quattrociocchi told PsyPost. “This suggests that while the platform’s design and the topic of conversation are essential, there are inherent aspects of online interaction that facilitate toxicity.”

Contrary to the common assumption that toxic interactions deter engagement, the researchers found that toxicity does not discourage participation. In fact, users are more likely to remain active in discussions where toxic comments are prevalent. This finding indicates that toxic environments not only fail to repel users but may foster a kind of engagement that keeps users returning to the conversation, possibly driven by emotional investment or a sense of conflict.

“One of the most surprising findings was that despite the presence of toxic comments, conversations often continued rather than ending abruptly,” Quattrociocchi said. “This challenges the conventional notion that toxicity solely disrupts dialogue. It suggests that users might be becoming more accustomed to such interactions, or they have developed strategies to engage constructively despite negative comments. This resilience opens new avenues for understanding how people adapt to and manage their social media environments.”

The researchers observed that these patterns of toxicity and engagement are consistent across different social media platforms. This consistency suggests that the dynamics of online toxicity might be a fundamental aspect of human interaction in digital spaces, rather than being strongly influenced by the specific design, culture, or moderation policies of individual platforms.

Despite the extensive data and robust analysis, the study has limitations. One of the primary challenges is distinguishing inherent human behavior patterns from those influenced by the platform’s design and algorithmic structures. The use of automated systems to detect toxicity, while necessary for handling large datasets, also introduces potential biases due to the complexity of natural language and the subtleties of human communication.

Future research will need to focus on refining toxicity detection technologies, understanding the triggers of toxic behavior, and exploring the role of platform algorithms in shaping these dynamics. Furthermore, investigating the effects of these interaction patterns in offline settings could provide deeper insights into the pervasive nature of toxicity in human interactions.

“A significant caveat of our research is the limitation in comparing online behaviors directly with offline behaviors,” Quattrociocchi explained. “Due to the digital nature of the data, our understanding of online interactions is more nuanced and data-driven, whereas comprehensive and analogous offline data is more complex to obtain. This restricts our ability to fully explore how these toxic dynamics might differ in non-digital settings.”

“Our primary long-term goal is to deepen the understanding of human behavior on social media platforms, moving beyond mere speculation to a robust, empirically-based comprehension. We aim to systematically analyze how and why people behave the way they do online, identifying the triggers and contexts of toxic behaviors and positive interactions.”

“Understanding the persistent nature of toxicity on social media can empower users to engage more mindfully,” Quattrociocchi added. “We hope that our findings inspire other researchers to explore innovative solutions and that platform developers will consider these insights as they design future iterations of their interfaces.”

The study, “Persistent interaction patterns across social media platforms and over time,” was authored by Michele Avalle, Niccolò Di Marco, Gabriele Etta, Emanuele Sangiorgio, Shayan Alipour, Anita Bonetti, Lorenzo Alvisi, Antonio Scala, Andrea Baronchelli, Matteo Cinelli, and Walter Quattrociocchi.