Brandon Vuono
CYSE 200S
Article Review 1
Article reviewed
Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1), tyad011. https://doi.org/10.1093/cybsec/tyad011
This study examines how well people can distinguish real human faces from “deepfake” images created by artificial intelligence. Deepfakes are a cybersecurity threat because they use artificial intelligence to manipulate images and undermine trust in visual evidence. In this day and age of “fake news,” even something as seemingly small as an AI-generated image can be detrimental. The article relates to social science principles such as cognitive bias, human error, and the influence of technology.
The authors of this article research the question of how accurately humans can identify AI-generated images. Beyond answering that question, the article attempts to identify interventions that improve people’s ability to spot deepfake images. The study aims to increase accuracy in recognizing manipulated images, which in turn would strengthen society’s ability to discern the truth and make for a better-informed population.
Participants in this study were placed into four groups, one control group and three intervention groups, to gather the amount of data needed. They were shown twenty images drawn from a pool of fifty real images and fifty images created by artificial intelligence. The groups were asked to label each image as real or fake and, alongside each choice, to rate their confidence and explain their reasoning. The results showed that accuracy was around sixty-two percent. The data in the study are both quantitative and qualitative. The quantitative portion consists of the collected responses and the participants’ accuracy percentages; the qualitative portion consists of the participants’ explanations for their answers. This study reflects several scientific principles, including objectivity, skepticism, and determinism. Objectivity is shown by the researchers’ use of measurable data to conduct the study. Skepticism is shown by their testing of the assumption that people can easily identify fake images. Determinism is shown by their belief that human error can be predicted from cognitive bias.
Overall, this study contributes to society by increasing awareness of how much impact artificial-intelligence-generated images can have. It highlights the need for improved detection technology, stronger cybersecurity policy, and greater public awareness. This shows that cybersecurity is not only a technology problem but also a social science issue, rooted in the connection between human behavior and perception.