Article Review

Article review: Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers

Angel Hernandez

September 26, 2025

BLUF

This article reviews 56 studies of how well people can detect deepfakes. It examines how accurate participants were, whether training helped, and how confident they were in their responses. The findings show that people perform close to chance when identifying deepfakes, though accuracy varies widely across studies (Diel et al., 2024).

Relation to Social Science Principles

The article connects to social science by examining how people think and make decisions when facing realistic fake media. It highlights flaws in human perception and judgment, which are central topics in psychology and cognitive science.

Research Question, Hypotheses, IV, and DV

The study’s main questions were whether humans can detect deepfakes, what influences their decisions, and whether training improves results. The hypothesis was that unaided participants would not be very reliable at spotting deepfakes, but that some strategies could make them slightly better. Since this was a meta-analysis, the independent variables were the different ways the underlying experiments were set up, and the dependent variable was participants’ performance in detecting deepfakes, measured through accuracy rates (Diel et al., 2024).

Research Methods

The article analyzed 56 studies that tested human ability to identify deepfakes across different formats, such as images and videos. The meta-analysis allowed the authors to combine results and look for patterns, strengths, and weaknesses in humans’ ability to detect deepfakes.

Data and Analysis

The study found that people’s performance was low and often close to chance when spotting deepfakes. Training or advice improved results, but only slightly. These findings show that people are not very reliable at telling real media from fake, which makes the problem of deepfakes even more concerning (Diel et al., 2024).

Impact on Marginalized Groups and Society

Deepfakes create particular challenges for women, children, and other minority groups, who are often the targets of harmful or misleading content. They can face harassment, reputational damage, and emotional harm. The study shows that people are not very accurate at spotting fake media, which makes these risks even more serious. The study contributes to society by raising awareness of how deepfakes can threaten public trust in the media, and it underscores the need for better tools, education, and policies to protect people online.

Conclusion

This study shows that people are not very good at spotting deepfakes. Training and advice help only a little, and people often feel confident even when they are wrong. These findings matter because they show how easily deepfakes can fool people and put vulnerable groups at risk. The study helps us understand how people think in the digital age and shows why we need better defenses against deepfake misuse.

References

Diel, A., Lalgi, T., Schröter, I. C., MacDorman, K. F., Teufel, M., & Bäuerle, A. (2024). Human performance in detecting deepfakes: A systematic review and meta-analysis of 56 papers. Forensic Science International: Digital Investigation, 46, 301171. https://www.sciencedirect.com/science/article/pii/S2451958824001714