Abstract
The rise of generative artificial intelligence (AI) has ushered in an era of content that closely resembles human writing but is produced by a language model, making it increasingly difficult to distinguish AI-generated from human-written text. This study explores humans' ability to recognize AI-generated text and identifies the characteristics that lead participants to judge a paragraph as AI-written or human-written. A Qualtrics survey presented participants with sets of paragraphs on various topics and asked them to identify the AI-generated paragraph and rate their confidence. Participants identified AI paragraphs with an average accuracy of 64.04%, with accuracy varying across topics, and reported "Some" or "Fair" confidence regardless of whether their answers were correct. Linguistic indicators such as incorrect grammar and first-person usage signaled human writing, while technical and factual language indicated AI. In open-ended responses, participants cited AI's direct writing style and humans' use of jargon and informal phrasing. The findings highlight the need to train individuals in detecting AI-generated text and suggest potential for further increasing AI's sophistication. Limitations include the focus on written rather than auditory or visual content and the study's timing during the early stages of generative AI; future work includes monitoring AI's development and analyzing varying AI text lengths and more challenging topic prompts. Understanding the distinctions between AI and human writing can guide future improvements and applications of AI.