Article Review 2: The Human Factor and Artificial Intelligence

Introduction

There is growing concern surrounding artificial intelligence, especially with regard to ‘deepfakes’: images or videos generated with the help of artificial intelligence. Deepfake technology has aided in the abuse and harassment of public figures, women, minorities, and many other groups (Bray et al., 2023). It also facilitates crimes such as cyberbullying, sextortion, and phishing, which mainly affect underage youth, women, and the elderly.

Research questions, hypotheses, methods and results

As stated in the article, Bray et al. (2023) conducted the study with the following three research questions in mind:

Are participants able to differentiate between deepfake and images of real people above chance levels? Do simple interventions improve participants’ deepfake detection accuracy? Does participants’ self-reported level of confidence in their answer align with their accuracy at detecting deepfakes? (Research Questions Section).

Although the study posed the research questions listed above, Bray et al. did not state any hypotheses. The purpose of the study was to collect data and improve upon previous studies conducted by other researchers. Bray et al. intended to gauge people’s ability to spot deepfakes and to determine factors that may contribute to that ability. The experiment included 280 participants divided into four groups: a control group; an experimental group that was shown examples of deepfake images; an experimental group that was given ten signs for differentiating deepfake images from real ones; and an experimental group that was given the same signs plus reminders about them during the task (Bray et al., 2023). The researchers administered a test via a web application in which participants were shown 20 images, asked to classify each as deepfake or real, and asked to provide justification for their choices.
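The first research question asks whether detection exceeds chance. As a rough illustration of what “above chance” means for a 20-image, two-choice task (the arithmetic below is my own illustration, not the authors’ analysis), a one-sided binomial tail gives the minimum score that beats guessing:

```python
# Illustrative sketch, not the authors' analysis: with 20 two-choice trials
# (deepfake vs. real), guessing yields 50% accuracy on average. This finds
# the smallest score that beats chance at the conventional p < .05 level.
from math import comb

n_images = 20  # images shown to each participant in the study

def p_at_least(k: int) -> float:
    """One-sided binomial tail: P(at least k correct out of 20 by guessing)."""
    return sum(comb(n_images, i) for i in range(k, n_images + 1)) / 2 ** n_images

threshold = next(k for k in range(n_images + 1) if p_at_least(k) < 0.05)
print(f"{threshold}/20 correct is the first individually above-chance score "
      f"(p = {p_at_least(threshold):.3f})")
```

On this arithmetic, a participant would need 15 of 20 correct before an individual score is distinguishable from guessing, which is why such studies pool accuracy across many participants rather than judging each one alone.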

The researchers calculated detection accuracy for deepfake and real images as a percentage, both overall and by group, and conducted an analysis of variance (Bray et al., 2023). Results of the study showed that the educational interventions did not improve a human’s ability to detect deepfake images. Additionally, the majority of individuals were confident in their answers whether or not they were correct; confidence therefore also did not predict accuracy.
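The group comparison described above is a one-way analysis of variance. As a minimal sketch of that computation, the per-participant accuracy scores below are invented for illustration; the study’s actual data and group means appear in the article:

```python
# Hypothetical per-participant accuracy percentages (made up for illustration,
# not the study's data), one list per experimental condition.
groups = {
    "control":              [55, 60, 50, 65, 58],
    "examples":             [60, 62, 57, 66, 61],
    "signs":                [59, 63, 55, 64, 60],
    "signs_with_reminders": [61, 64, 58, 67, 62],
}

def one_way_anova(samples):
    """Return the F statistic for a one-way ANOVA over lists of scores."""
    all_scores = [x for g in samples for x in g]
    grand_mean = sum(all_scores) / len(all_scores)
    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in samples)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in samples for x in g)
    df_between = len(samples) - 1
    df_within = len(all_scores) - len(samples)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova(list(groups.values()))
print(f"F = {f_stat:.2f}")  # a small F means group means barely differ
```

The F statistic compares variance between the group means to variance within groups; a small value, as with these invented scores, is consistent with the study’s finding that the interventions made little difference.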

Relationship with Social Science

Determinism relates to the article because the experiment sought to identify the factors that influence a human’s ability to identify deepfake images; the study asked whether that ability was caused or influenced by preceding events. Objectivity and ethical neutrality are shown in that the researchers collected data without injecting opinion and held no bias regarding their subjects. Empiricism is also present in that the data collected concern observable human behavior and perception.

Deepfakes circulate largely on social media and contribute to harmful effects such as bullying, fake news, and damaged relationships. A human’s ability to identify deepfakes relates to the integrity portion of the CIA triad, which ensures that information is accurate and reliable. The study also addresses human factors, as it focuses specifically on human ability and artificially generated images. The research further touches on victim precipitation and human-enabled errors, since it studies human behaviors and abilities that contribute to the possibility of victimization.

Conclusion

Although the study did not find significant effects, it highlighted the dangers of deepfakes, especially in real-world scenarios. By ruling out brief education and self-reported confidence as factors in a human’s ability to spot deepfakes, the study indicates that society needs to take other protective steps, such as policies or stricter controls on artificial intelligence. The study helps in prioritizing cybersecurity measures; however, further research is needed to identify the factors that do affect a human’s ability to identify deepfake images.

References

Bray, S., Johnson, S., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1). https://doi.org/10.1093/cybsec/tyad011