Article Review 2

Adryanna Smith
March 30, 2025

Uncovering Human Capacity for Deepfake Images

BLUF (Bottom Line Up Front)
Below is a critique of Bray, Johnson, and Kleinberg’s (2023) paper “Testing Human Ability to Detect ‘Deepfake’ Images of Human Faces.” The paper tests how accurately humans can distinguish AI-generated faces from real ones, whether simple educational interventions improve detection, and what the findings imply socially and psychologically. It provides critical insight into society’s current vulnerability to digital disinformation, particularly for already vulnerable groups.

How the Subject Relates to Principles of Social Science
This topic engages several core social science concepts:

Perception and Cognition: The experiment studies how people process visual information and make judgments, drawing on psychological theory.
Technology and Society: It examines how emerging technologies such as AI shape human decision-making and behavior.
Digital Behavior and Ethics: The use and abuse of deepfakes carry ethical implications for digital sociology and criminology.

Research Questions or Hypotheses
The authors asked the following primary research questions:

Can people reliably distinguish deepfake images from real ones?
Do simple interventions (familiarization, advice, reminders) improve detection?
Does self-reported confidence track detection accuracy?
The authors expected the interventions to improve performance and confidence to reflect accuracy; in the event, the interventions yielded little benefit and confidence showed only weak agreement with accuracy.

Research Methods
Participants (N = 280) were assigned to one of four conditions: control, familiarization, single advice, or advice with reminders. All participants rated 20 images as deepfake or real, reported their confidence, and explained their responses.

The experiment used tight controls (random assignment, a filler task for the control group) and drew real and synthetic photographs from the same database to support ecological validity.

Data and Analysis
Quantitative data (accuracy scores, confidence ratings) and qualitative data (image-click responses, free-text explanations) were collected. Statistical comparisons across conditions used ANOVAs and t-tests, and the free-text reasoning was manually coded in NVivo. Mean accuracy was 62%, only modestly better than chance, with limited gains from the interventions.
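The kind of between-condition comparison described above can be sketched in a few lines. The example below is purely illustrative: the condition names, group sizes, and simulated accuracy scores are placeholders, not the authors' data or code; it simply shows how ANOVA and t-test comparisons of detection accuracy against condition and chance might be run with SciPy.

```python
import numpy as np
from scipy import stats

# Illustrative only: simulated per-participant accuracy scores (proportion of
# 20 images judged correctly) for four hypothetical groups. The means and
# group sizes are placeholders, not the study's data.
rng = np.random.default_rng(0)
conditions = {
    "control": rng.normal(0.62, 0.10, 70),
    "familiarization": rng.normal(0.63, 0.10, 70),
    "advice": rng.normal(0.64, 0.10, 70),
    "advice_reminders": rng.normal(0.64, 0.10, 70),
}

# One-way ANOVA across conditions, analogous to the between-group
# comparisons described in the paper.
f_stat, p_anova = stats.f_oneway(*conditions.values())
print(f"ANOVA across conditions: F = {f_stat:.2f}, p = {p_anova:.3f}")

# Follow-up t-test comparing one intervention to the control group.
t_stat, p_t = stats.ttest_ind(conditions["advice"], conditions["control"])
print(f"advice vs control: t = {t_stat:.2f}, p = {p_t:.3f}")

# One-sample t-test of overall accuracy against chance (0.5).
all_scores = np.concatenate(list(conditions.values()))
t_chance, p_chance = stats.ttest_1samp(all_scores, 0.5)
print(f"overall accuracy vs chance: t = {t_chance:.2f}, p = {p_chance:.3f}")
```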

Concepts From Class That Connect to the Article
Media literacy and fake news
Human error when using technology
Trust in online content
Psychological effects of misinformation

These concepts help explain how online manipulation affects behavior and public trust.

Relevance to Marginalized Groups
Already vulnerable groups, especially women and ethnic minorities, are made even more vulnerable to exploitation by deepfakes: they can be harassed, have their identities hijacked, or be targeted through doctored media (e.g., revenge porn or scams). The inability to identify such media deepens existing systemic inequalities in online spaces.

Overall Contributions to Society
This study provides timely insight into an emerging cybersecurity threat. It calls for public awareness, policy reform, and human-centered detection systems. By examining human vulnerabilities in online judgment, it helps inform better tools for protecting society from AI-generated disinformation.

Conclusion
Bray et al. (2023) show that human ability to detect deepfakes is unreliable, even with instruction. This poses grave risks in a digital information age where disinformation can spread rapidly. The study underscores the urgent need for technical and educational solutions, especially to protect the most vulnerable.

Reference
Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), 1–18. https://doi.org/10.1093/cybsec/tyad011