Article Review #2

Dawn Weston

November 5, 2023

CYSE 201S_22131

I reviewed the article entitled “Testing human ability to detect ‘deep fake’ images of human faces.” The topic relates to the principles of the social sciences, specifically cognitive psychology, by addressing how people interact with AI and whether the brain can differentiate between what is real and what is fake. 

To begin, the article touched on flaws of previous related studies: sample sizes that were too small, participant fatigue, issues with training, uncertainty about how participants were recruited and whether recruitment was random, questionable validity of the experiments’ procedures, and a lack of error statistics. 

The researchers felt that a new study was needed to address the flaws of the previous ones. They accomplished this by conducting their own experiment, which had three aims: to determine whether people can differentiate between deep fake images and images of real people, to see whether interventions during the study improve accuracy, and to determine whether participants’ confidence in their answers aligns with their accuracy. 

To conduct the study, the researchers used the StyleGAN2 (Style Generative Adversarial Network 2) framework, trained on the Flickr-Faces-HQ (FFHQ) dataset, to generate human faces. The sample size, at 280 participants, was larger than in previous studies. Participants were recruited through the Prolific online platform and were paid £6/hr, with the added motivation of a 50% bonus for scoring in the top 50% of all participants. The study consisted of a control condition and three experimental conditions. In the experimental conditions, participants were trained on deep fake images, given tips on common flaws in AI-generated images, or a combination of both. 

The study concluded that, although accuracy was better than in previous studies, humans are not very proficient at detecting deep fake images; there was no significant improvement in accuracy for those given training versus those who were not; and participants’ reported confidence was often misplaced.

In conclusion, AI presents benefits as well as challenges in the cyber field. The problems with deep fakes include fraud and forgery, as well as a host of other crimes such as deceiving others on dating sites or opening bank accounts with false identities. As these generative programs improve, humans will need to be more aware and develop new tools for identifying them. 

Works Cited:

Bray, Sergi D., et al. “Testing Human Ability to Detect ‘Deepfake’ Images of Human Faces.” Journal of Cybersecurity, vol. 9, no. 1, Oxford UP, Jan. 2023, academic.oup.com/cybersecurity/article/9/1/tyad011/7205694.
