Article 2 Review

Chase Lawson
CYSE201
10/17/2025
Testing Human Ability to detect ‘deepfake’ images of human faces
Introduction
In today’s world there are endless possibilities when it comes to technology, especially with Artificial Intelligence (AI). AI has come a long way since the 1950s and continues to improve every day. It can be used for many purposes, from coming up with recipes for someone’s favorite dish to writing an elegant resume. With the progression of AI comes a new problem: “deepfake” pictures and media. A deepfake is an image or video in which a particular person’s face is placed on someone else’s body. Deepfakes can also involve audio, with AI mimicking someone’s voice to impersonate them. Over the years, deepfakes have become trickier to detect.
In this article review I am going to discuss how detecting deepfake images relates to social science principles. Deepfakes can be used for illicit activities such as invading someone’s privacy or stealing someone’s identity. They can also be used to create fake media content that reaches a wide audience and potentially scams viewers. When interacting with photos and media on the internet, it can be hard to determine whether what you are viewing is real or a deepfake.
Viewing deepfake pictures and media raises some questions. How can you tell if a picture is real and not a deepfake? What should people look for when examining a potentially fake image? The article describes research studies involving focus groups that used various techniques to determine whether pictures were real or deepfakes. Participants were recruited through the Prolific online platform and received compensation for their time. The study offers an investigation into the ability of humans to distinguish deepfake images from similar authentic images. As the studies continued, the percentage of real images correctly identified over deepfakes increased, as participants got better at discerning the difference between the two.
In conclusion, observing and learning the difference between real pictures and media versus deepfakes is quite necessary. It has already been shown that various applications and websites use deepfakes to mimic celebrities convincingly. The average end user may not know the difference, so they may think content is real when it is actually a deepfake. By hosting a small dataset, the researchers raise awareness and showcase the output of such deepfake generation methods.