ChatGPT and academic cheating: Researchers identify personality traits linked to misuse of AI


A recent study published in the journal Heliyon explored the connection between personality traits and students’ willingness to use advanced language models, such as ChatGPT, to generate academic texts without acknowledging the source. The findings indicate that specific personality traits play a significant role in predicting whether students are likely to engage in this form of academic misconduct.

Large language models are advanced artificial intelligence systems designed to understand and generate human-like text. These models are built using deep learning techniques and are trained on vast amounts of textual data from the internet, enabling them to predict and generate coherent and contextually relevant responses to a wide range of prompts. By analyzing patterns in the data, these models can produce text that mimics human writing with high accuracy.

ChatGPT, developed by OpenAI, is one of the most prominent examples of these models. It was released in November 2022 and quickly gained popularity, reaching over 100 million users by January 2023. Its ability to generate text that is difficult to distinguish from human-generated content has raised concerns about its potential misuse, particularly in academic settings. Students might use such models to produce assignments with minimal effort, undermining the educational process and compromising academic integrity.

The researchers conducted their study to explore the relationship between personality traits and the likelihood of students using chatbot-generated texts for academic cheating. The study focused on two sets of personality traits: the HEXACO model and the Dark Triad. The HEXACO model includes broad dimensions of personality, such as Honesty-Humility and Conscientiousness, which are known to be associated with ethical behavior. The Dark Triad, comprising narcissism, Machiavellianism, and psychopathy, represents traits linked to manipulative and self-centered tendencies.

Participants were 283 university students from Austria who completed an online survey assessing their HEXACO and Dark Triad personality traits. Participants were also informed about ChatGPT and its capabilities, including an example of a text generated by the model. They were then asked about their willingness to use such texts for their seminar papers without acknowledging the source, a behavior considered academic cheating. To gauge their intentions, participants responded to statements like “I might consider using AI-generated texts for my seminar papers in the future” and rated the percentage of AI-generated text they could imagine using.

Additionally, the survey included questions to assess participants’ perceptions of the quality of chatbot-generated texts. This helped differentiate between ethical concerns and quality concerns as motivations for using or avoiding these texts.

Students who scored high in Honesty-Humility, a trait characterized by sincerity, fairness, and modesty, were less likely to engage in academic cheating. Similarly, high scores in Conscientiousness, which reflects traits like diligence, carefulness, and a strong work ethic, were associated with a lower intention to use chatbot-generated texts unethically. These results align with previous research that shows individuals with these traits are generally more ethical and rule-following.

Contrary to the researchers’ hypothesis, Openness to Experience was also negatively related to the intention to use chatbot-generated texts. Initially, it was thought that individuals high in this trait might be more willing to experiment with new technologies like ChatGPT.

However, the findings suggest that those with high Openness to Experience prefer to tackle academic challenges with their own original ideas rather than relying on AI-generated content. This could be due to their intrinsic curiosity and creativity, leading them to engage more deeply with their work.

All three traits of the Dark Triad were positively related to the intention to use chatbot-generated texts. Students with high levels of these traits, characterized by manipulative, self-centered, and unemotional behavior, were more likely to consider using AI-generated texts for academic cheating. This suggests that individuals with these traits prioritize personal gain over ethical considerations, viewing AI tools as a means to achieve their goals dishonestly.

An interesting aspect of the study was the role of perceived quality of chatbot-generated texts. The researchers found that the more highly students rated the quality of these texts, the more willing they were to use them. Importantly, even after controlling for perceived quality, the relationships between personality traits and the intention to use chatbot-generated texts remained significant.

This study highlights the significant role of personality traits in predicting the intention to use chatbot-generated texts for academic cheating. But there are some limitations to consider. For instance, the study was conducted at a single university, which may limit the generalizability of the findings. The study also relied on self-reported data, which is susceptible to social desirability bias, where participants might provide answers they believe are socially acceptable rather than their true intentions.

“As the educational landscape continues to evolve, it is imperative to proactively address the challenges posed by technological advancements. Understanding the determinants of an individual’s willingness or reluctance to engage in academic cheating with the help of AI language models assists in formulating strategies to mitigate these risks effectively,” the researchers concluded. “By considering the findings of this study, educational institutions, policymakers, and relevant stakeholders can develop interventions, guidelines, and educational programs to raise awareness about the responsible use of AI language models, foster a culture of academic integrity, and discourage the misuse of these technologies for dishonest purposes.”

The study, “HEXACO, the Dark Triad, and Chat GPT: Who is willing to commit academic cheating?” was authored by Tobias Greitemeyer and Andreas Kastenmüller.
