Article Review: Measuring the Human Ability to Spot AI-Generated Images


The article I have chosen to review examines whether people can accurately tell the difference between real photos of faces and fake ones created by artificial intelligence (AI), also known as deepfakes. The researchers explore how well people detect deepfakes and whether hints can help improve accuracy. As deepfake technology becomes more sophisticated, this study dives into the unsettling gap between human perception and digital deception.

This study relates to the social science concepts of objectivity, determinism, and relativism. The researchers provide examples of both positive and negative uses of deepfakes, but their methods and results are not swayed in either direction; the research simply measures how well people can detect deepfakes and how confident they are in their ability. The study relates to determinism because deepfake images and audio are an arguably inevitable evolution of AI technology. In its current, unregulated form, some may seek to use the technology for criminal purposes; as the technology matures and is regulated, we may see it transform into something beneficial to humanity. The potential positive and negative uses of deepfakes also show a clear connection to the principle of relativism: the technology has several applications in the healthcare industry but has also paved the way for new types of crime.

The study asked three main questions:

  1. Can people correctly identify AI-generated faces above chance levels?
  2. Does giving people advice on what to look for help them spot fakes?
  3. Are people’s confidence levels in their answers accurate?

To answer these questions, the researchers conducted an online experiment with 280 participants split into four groups. The first group was the control group and received no help. The second group was shown 20 examples of deepfake images before the test. The third and fourth groups were shown a list of common inconsistencies in AI-generated images: the third group saw the list once before the test, while the fourth group had it displayed for the entire duration of the test. Each participant was shown 10 real images and 10 fake images and asked to label each one, rate how confident they were in each answer, and explain their reasoning.
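The first research question hinges on whether accuracy exceeds what random guessing would produce. As a purely illustrative aid (this is not the authors' analysis, and the counts below are hypothetical), a minimal Python sketch of testing a group's aggregate accuracy against the 50% chance level might look like this:

```python
# Hypothetical sketch: is overall labeling accuracy better than chance?
# The counts below are invented for illustration; they are not the study's data.
from scipy.stats import binomtest

participants = 280          # participants in the experiment
images_each = 20            # 10 real + 10 fake images per participant
total_judgments = participants * images_each

correct = 3400              # assumed number of correct labels (illustrative)

# One-sided exact binomial test against the 50% guessing baseline.
result = binomtest(correct, n=total_judgments, p=0.5, alternative="greater")

print(f"Observed accuracy: {correct / total_judgments:.1%}")
print(f"p-value vs. chance: {result.pvalue:.3g}")
```

The actual paper reports per-group and per-image analyses; this snippet only illustrates the general idea of comparing observed accuracy against guessing.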

The results showed that people were only slightly better than chance at spotting fakes. Surprisingly, the extra training and hints did not significantly improve performance, and participants remained confident in their answers even when they were wrong. This could be a cause for concern for educational approaches to addressing the issue. However, the study did find that certain images were easier to identify as deepfakes, for example those with impossible backgrounds or with jewelry. Although not a perfect solution, I think this shows that education and awareness can still help.

This research connects to several topics discussed in class, such as human factors in psychology, sociological paradigms, and the importance of cybersecurity culture. From a human factors perspective, the research shows how easily people can be tricked by fake images, as well as their cognitive bias toward believing they can tell whether an image is real. The research relates to symbolic interactionism as a sociological paradigm, since users could interact with an artificial intelligence online while being entirely unaware that it is not a real person. Finally, it underscores the importance of cybersecurity culture: people should be educated about the threat deepfakes pose and about how to tell what is and is not real online.

Marginalized groups may be disproportionately impacted by the spread of deepfakes. Deepfake images can be attached to fabricated stories in an attempt to lend them credibility, spreading false claims about individuals and groups. Used in this way, deepfakes serve to manipulate political opinion and worsen existing inequalities. Better education and regulation are needed to ensure that this does not happen.

This study makes an important contribution to society by highlighting how easily people can be tricked by deepfakes. Even when given hints, the study's participants found it difficult to detect AI-generated content. The researchers call for better technology to counter the AI behind deepfakes. Until that technology is developed, however, human intervention will be necessary, which will require more education and awareness of the threat deepfake images pose.

References

Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011
