{"id":417,"date":"2025-03-30T23:45:15","date_gmt":"2025-03-30T23:45:15","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/?page_id=417"},"modified":"2025-03-30T23:46:17","modified_gmt":"2025-03-30T23:46:17","slug":"article-review-2","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/article-review-2\/","title":{"rendered":"Article Review 2"},"content":{"rendered":"\n<p>Adryanna Smith<br>March 30, 2025<\/p>\n\n\n\n<p>Uncovering Human Capacity for Deepfake Images<\/p>\n\n\n\n<p>BLUF (Bottom Line Up Front)<br>Below is a critique of Bray, Johnson, and Kleinberg&#8217;s (2023) paper &#8220;Testing Human Ability to Detect \u2018Deepfake\u2019 Images of Human Faces.&#8221; The paper tests the accuracy of human ability to detect AI-produced human faces against real faces, the efficacy of educational interventions, and the social and psychological impact. The paper provides critical insight into the vulnerability of society to digital disinformation in its current form, especially to vulnerable groups.<\/p>\n\n\n\n<p>How the Subject Relates to Principles of Social Science<br>This topic includes basic social science concepts:<\/p>\n\n\n\n<p>Perception and Cognition: The experiment studies how people process visual information and make decisions based on psychological theories.<br>Technology and Society: It examines the impact of emerging technologies like AI on human decision-making and behavior.<br>Digital Behaviour and Ethics: Deepfakes usage and abuse have ethical connotations in digital sociology and criminology.<br>Research Questions or Hypotheses<br>The authors asked the following primary research questions:<\/p>\n\n\n\n<p>Can human beings always detect deepfake images?<br>Are simple interventions (training, advice, reminders) better at detecting?<br>Is self-reported confidence consistent with detection accuracy?<br>They hoped treatments would improve performance and confidence would be a function of 
accuracy. In the event, the interventions yielded little benefit, and confidence aligned only weakly with accuracy.<\/p>\n\n\n\n<p>Research Methods<br>Participants (N = 280) were randomly assigned to one of four conditions: control, familiarization, one-time advice, and advice with reminders. All participants rated 20 images as deepfake or real, reported their confidence, and explained their responses.<\/p>\n\n\n\n<p>The experiment used strict controls (random assignment, filler tasks for the control group) and drew real and fake photos from the same database to support ecological validity.<\/p>\n\n\n\n<p>Data and Analysis<br>Quantitative data (accuracy scores, confidence ratings) and qualitative data (image-click reasoning, free-text answers) were collected. Statistical comparisons across conditions used ANOVAs and t-tests, and the free-text reasoning was manually coded in NVivo. Mean accuracy was 62%, only slightly better than chance, with limited gains after the interventions.<\/p>\n\n\n\n<p>Concepts From Class That Connect to the Article<br>Media literacy and fake news<br>Human error when using technology<br>Trust in online content<br>Psychological effects of misinformation<\/p>\n\n\n\n<p>These concepts help explain how online manipulation affects behavior and public trust.<\/p>\n\n\n\n<p>Relevance to Marginalized Groups<\/p>\n\n\n\n<p>Deepfakes compound the risks faced by groups that are already vulnerable, especially women and ethnic minorities, who may be harassed, have their identities hijacked, or be targeted through doctored media (e.g., revenge porn or scams). The difficulty of identifying such media deepens existing systemic inequalities online.<\/p>\n\n\n\n<p>Overall Contributions to Society<\/p>\n\n\n\n<p>This study provides timely insight into an emerging cybersecurity threat. It calls for public awareness, policy reform, and human-centered detection systems. 
By examining human vulnerabilities in online judgment, the study helps inform better tools for protecting society from AI-generated disinformation.<\/p>\n\n\n\n<p>Conclusion<\/p>\n\n\n\n<p>Bray et al. (2023) show that the human ability to detect deepfakes is unreliable, even after instruction. This poses serious risks in a digital information age in which disinformation spreads rapidly. The study underscores the urgent need for both technical and educational solutions, especially to protect the most vulnerable.<\/p>\n\n\n\n<p>Reference<\/p>\n\n\n\n<p>Bray, S. D., Johnson, S. D., &amp; Kleinberg, B. (2023). Testing human ability to detect \u2018deepfake\u2019 images of human faces. Journal of Cybersecurity, 9(1), 1\u201318. https:\/\/doi.org\/10.1093\/cybsec\/tyad011<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Adryanna SmithMarch 30, 2025 Uncovering Human Capacity for Deepfake Images BLUF (Bottom Line Up Front)Below is a critique of Bray, Johnson, and Kleinberg&#8217;s (2023) paper &#8220;Testing Human Ability to Detect \u2018Deepfake\u2019 Images of Human Faces.&#8221; The paper tests the accuracy of human ability to detect AI-produced human faces against real faces, the efficacy of educational&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/article-review-2\/\">Read 
More<\/a><\/div>\n","protected":false},"author":30365,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/pages\/417"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/users\/30365"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/comments?post=417"}],"version-history":[{"count":2,"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/pages\/417\/revisions"}],"predecessor-version":[{"id":421,"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/pages\/417\/revisions\/421"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/adryannasmithcyse201s\/wp-json\/wp\/v2\/media?parent=417"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}