Article Review #2: An Analytical Review of “Testing Human Ability to Detect ‘Deepfake’ Images of Human Faces”

Authors: Sergi D. Bray, Shane D. Johnson, & Bennett Kleinberg 

Introduction/BLUF 

The term “deepfake” refers to media, normally created with AI, that represents a false reality. Deepfakes can take the form of pictures or videos and include techniques such as face swapping and lip syncing. Malicious use of deepfakes can affect large numbers of people and could make it easier to target victims. This article explores humans’ ability to distinguish between deepfake material and actual images of people (Bray et al., 2023).

The study found that participants could often distinguish real images from deepfake images, but their accuracy was far from perfect (Bray et al., 2023).

Relation/Connection to Social Science Principles 

The experiment conducted in this study reflects several social science principles. The researchers practiced objectivity, conducting the experiment in a value-free manner while maintaining ethical neutrality. Consistent with empiricism, they could only study the behavior and results they could directly observe. I also believe the principle of determinism applies to this study: participants were promised bonus money if they scored high enough, an incentive that could have influenced their confidence in their answers (Bray et al., 2023).

Research Question/Hypothesis/Independent Variable/Dependent Variable

Three research questions were posed before the start of this study: “Are participants able to differentiate between deepfake and images of real people above chance levels? Do simple interventions improve participants’ deepfake detection accuracy? Does a participant’s self-reported level of confidence in their answer align with their accuracy at detecting deepfakes?” The hypothesis was that people are generally not good at detecting deepfake images. The independent variable was the group to which participants were assigned: a control group, a group given familiarization conditions, a group given one-time advice, and a group given advice with reminders. The dependent variable was whether participants correctly identified the images as real or deepfake (Bray et al., 2023).

Types of Research Methods Used 

For this study, participants were recruited through an online platform and received payment. To encourage effort, they were offered an extra payment if they placed in the top 50%. They were also told to remember the content because they would be asked about it later. The primary research method used was an online experiment (Bray et al., 2023).

Types of Data Analysis used 

Numerical data were tabulated and reported as percentages (Bray et al., 2023).
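The article review does not detail the specific statistical tests used, but a common way to evaluate the study’s first research question, whether detection accuracy exceeds chance levels, is a one-sided exact binomial test. The sketch below uses only the Python standard library, and the numbers in the example (60 correct out of 100 two-option trials) are hypothetical, not data from Bray et al. (2023):

```python
from math import comb

def binomial_p_above_chance(correct: int, trials: int, chance: float = 0.5) -> float:
    """One-sided exact binomial test: P(X >= correct) under H0 that
    accuracy equals the chance rate (0.5 for a two-option task)."""
    return sum(
        comb(trials, k) * chance**k * (1 - chance)**(trials - k)
        for k in range(correct, trials + 1)
    )

# Hypothetical example (not from the paper): 60 correct out of 100 trials.
p = binomial_p_above_chance(60, 100)
print(f"p-value: {p:.4f}")  # a small p-value suggests above-chance detection
```

A p-value below a conventional threshold (e.g., 0.05) would indicate that participants performed better than guessing.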

Connections to the Concerns or Contributions of Marginalized Groups 

Technology use can greatly affect marginalized groups, and the use of deepfake images is no different. Older people may be more susceptible to believing these AI-generated images and to spreading misinformation. This study can help researchers understand that susceptibility and inform the creation of better detection tools (Bray et al., 2023).

Overall Societal Contributions of the Study/Conclusion 

This study explored decision-making among the participants. This is important because it can help shape the future of AI and how its misuse is detected and addressed (Bray et al., 2023).

Works Cited

Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), Article tyad011. https://doi.org/10.1093/cybsec/tyad011