Date of Award

5-2025

Document Type

Thesis open access

First Advisor

Dr. Jennifer Henderson

Abstract

With the rapid rise of Artificial Intelligence, people around the world are growing concerned about the quality of the content they receive, especially the information they read in the news. Artificial Intelligence, "the development of computer systems capable of performing tasks that typically require human intelligence, such as speech recognition, problem-solving, and pattern detection" (Climavision, 2024), is at the heart of these concerns. Artificial Intelligence takes many forms; this study focuses specifically on generative AI models, also referred to as Large Language Models (LLMs). A 2023 study surveying newsrooms nationwide found that Artificial Intelligence is used in 32% of content creation (Watson, 2024). As Artificial Intelligence becomes increasingly prevalent in the journalistic sphere, it is imperative that people can correctly detect AI-generated text in order to better monitor the overall quality of the articles they read. To test whether the average person can accurately and consistently distinguish between human-authored and AI-generated news, this study asks participants to identify the author of articles written by both humans and AI and to rate their confidence in each selection. The study also analyzes whether external factors such as education, socioeconomic status, and age affect participants' ability to distinguish between human- and AI-authored news articles.
