A test of a human’s ability to discover deepfakes images of a person’s face
Introduction
The article I chose to review discusses deepfakes and how they affect people,
explaining that various factors allow them to "make entities that falsely represent
reality." The article also describes a test of a human's ability to detect a deepfake,
providing the results along with information on what materials were used for the test
and how the test was constructed.
Relations to the principles of social sciences
The article relates to the principles of social science in several ways. It reflects
relativism by showing how deepfake technology impacts different social systems, such
as trust. It also relates to empiricism, parsimony, ethical neutrality, and determinism.
For empiricism, the article uses controlled experiments and statistics to analyze how
accurate humans are at detecting a deepfake. For parsimony, the article is
straightforward in its explanation that people struggle to spot a deepfake because its
subtle flaws are hard to point out. For ethical neutrality, the article does not side with
one opinion; it balances the pros and cons of the technology. Finally, determinism is
shown in the article's finding that exposure to and familiarity with deepfakes can help
improve detection accuracy.
The study's questions and hypotheses
The article addresses three specific research questions. The first is whether
"participants are able to differentiate between deepfake and real images above chance
levels." The second is "Do simple interventions improve participants' deepfake
detection accuracy?" The third is "Does a participant's self-reported level of
confidence in their answer align with their accuracy at detecting deepfakes?"
The Research method
The researchers used an experimental research method combined with an online
survey. They assigned 280 participants to one of four groups, creating a design with
independent, dependent, and control variables. Each group was asked to judge
whether each picture shown to them was real or a deepfake. This was done to test
whether familiarization and other factors could improve detection accuracy.
Types of data
The data presented in the article are both quantitative and qualitative. The
quantitative data include the accuracy rates, confidence levels, and statistics on the
participants' ability to detect a deepfake. The qualitative data come from the
participants' explanations describing their reasoning for judging whether each
picture was a deepfake or not.
Concept relationships with the PowerPoints
The article relates to the PowerPoints we went over through the theme of human
emotion and how easily it can be manipulated by cybercrimes such as deepfakes and
phishing attacks.
Challenges, Concerns, and Contributions
The main challenge shown in the article is that human accuracy at detecting deepfakes
is only slightly better than chance, and the interventions failed to significantly improve
detection. A concern is how confident participants were in the answers they got wrong,
which suggests they are more likely than not to fall for a deepfake. The article's
contributions include a structured analysis of human ability to detect deepfakes, insight
into how effective a simple intervention can be, and the argument that we should
improve our technology and education to counter the continuing threat that deepfakes
pose.
Conclusion
Overall, this article does a good job of explaining that we as people still have a long
way to go before we can accurately detect a deepfake by sight alone. From the results
of the research, we can conclude that deepfakes are currently very hard to spot without
the proper tools, so further research is needed, especially since deepfakes can be
either a good or a bad thing depending on how they are used.
References
Bray, S. D., Johnson, S. D., & Kleinberg, B. (2023). Testing human ability to detect
‘deepfake’ images of human faces. Journal of Cybersecurity, 9(1).
https://doi.org/10.1093/cybsec/tyad011