Article Review #1 – Assessing Human Accuracy in Identifying AI-Generated Faces
Kaedon Denton
02-13-2025
Introduction
In studying cybersecurity and the social sciences, we have been taught that one of the most crucial skills we can learn is understanding the human factors at play. These factors include how technology is used for both good and bad, and how implementing proper techniques for using technology can help one protect oneself from future cyber threats. As AI (artificial intelligence) further develops, so do its capabilities, so it is important for users to know how to protect themselves from incidents caused by AI. This article covers only one part of what AI can do, namely deepfakes, and asks whether individuals can distinguish between a photo of a real person's face and an AI-generated picture of a face.
The Tests and Results
As laid out in the article, the study used three distinct methods to test the accuracy of human detection of AI-generated images. First, the study used an a-priori statistical power analysis to ensure that the sample sizes would be sufficient to detect differences between conditions, should they exist; the survey itself was built as a custom Django web app that used JavaScript to randomize the presentation of images. Second, each participant was shown only 20 images to avoid cognitive overload, and the images were shown individually, as they would be encountered in a real-world scenario, rather than paired with a second photo for comparison. Third, the study captured participants' confidence in their judgment of whether each image was AI-generated or a real photo: participants supplied the reasoning for their choice and answered an open question indicating which parts of the image their response referred to. With this laid out, three research questions were asked: Are participants able to differentiate between deepfakes and images of real people above chance levels? Do simple interventions improve participants' deepfake detection accuracy? And does participants' self-reported level of confidence in their answers align with their accuracy at detecting deepfakes? (Bray et al., 2023)
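To make the power-analysis step concrete, the following is a minimal sketch of how such an a-priori calculation could be run in Python with statsmodels. The inputs (chance-level accuracy of 50% versus a hypothesized 60%, alpha of 0.05, power of 0.8) are illustrative assumptions on my part, not values reported in the article.

```python
# A minimal sketch of an a-priori power analysis for comparing two
# proportions, assuming hypothetical accuracies of 50% (chance) vs. 60%.
# These parameter values are illustrative, not the study's actual inputs.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Cohen's h effect size for the two hypothesized proportions
effect_size = proportion_effectsize(0.60, 0.50)

# Solve for the per-group sample size needed to detect the effect
analysis = NormalIndPower()
n_per_group = analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,        # significance level
    power=0.80,        # desired statistical power
    alternative="two-sided",
)
print(f"Participants needed per condition: {n_per_group:.0f}")
```

Under these assumed inputs, the calculation suggests a few hundred participants per condition, which illustrates why running such an analysis before collecting data matters.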
The results of the experiments showed that, on average, participants were correct about 60% of the time, and that overall none of the intervention conditions led to a significant improvement in the accuracy of participants' choices. That said, participants in the experimental conditions were more likely to correctly distinguish deepfake images from real ones, and accuracy did increase with the “intensity” of the intervention (Bray et al., 2023).
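To illustrate what "above chance levels" means statistically, a one-sided binomial test is one simple way to check whether an observed accuracy such as the reported ~60% exceeds the 50% expected from random guessing. This is an illustrative sketch, not the article's actual analysis; the counts below (12 correct out of the 20 images each participant saw) are hypothetical, chosen only to mirror the reported average.

```python
# A minimal sketch testing whether observed accuracy exceeds chance (50%).
# The counts are hypothetical, chosen to mirror the ~60% average reported.
from scipy.stats import binomtest

n_images = 20       # images shown per participant, as described in the study
n_correct = 12      # hypothetical: 12/20 correct = 60%

# One-sided test: is the true accuracy greater than 0.5 (pure guessing)?
result = binomtest(k=n_correct, n=n_images, p=0.5, alternative="greater")
print(f"Observed accuracy: {n_correct / n_images:.0%}, p-value: {result.pvalue:.3f}")
```

Notably, 12 out of 20 is not significant on its own (p ≈ 0.25), which is why responses must be pooled across many participants and why the a-priori power analysis described earlier matters.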
Conclusion
This study showed and outlined the complexity of a problem that AI capabilities might introduce, and may already have introduced, at what might seem a small scale: even when people are informed of what to look for, AI can generate fake faces that are almost indistinguishable from real ones. As technology further advances, and AI alongside it, further human intervention will be needed to limit or mitigate the damage that future AI development could cause. In conclusion, given the severity of the damage that a person with malicious intent could do with AI tools, it is best to educate people on what to look for in content that might be a “deepfake” or AI-generated moving forward.
References
Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011