{"id":337,"date":"2026-02-15T19:06:40","date_gmt":"2026-02-15T19:06:40","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/brandonvuono\/?page_id=337"},"modified":"2026-02-26T01:51:04","modified_gmt":"2026-02-26T01:51:04","slug":"article-1","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/brandonvuono\/cyse201s\/article-1\/","title":{"rendered":"Article 1"},"content":{"rendered":"\n<p>Brandon Vuono<\/p>\n\n\n\n<p>CYSE 200S<\/p>\n\n\n\n<p>Article Review 1<\/p>\n\n\n\n<p>Article reviewed<\/p>\n\n\n\n<p>Sergi D Bray, Shane D Johnson, Bennett Kleinberg (2023) Testing human ability to detect \u2018deepfake\u2019 images of human faces. <em>Journal of Cybersecurity<\/em>, Volume 9, Issue 1, 2023, tyad011, <a href=\"https:\/\/doi.org\/10.1093\/cybsec\/tyad011\" target=\"_blank\" rel=\"noreferrer noopener\">https:\/\/doi.org\/10.1093\/cybsec\/tyad011<\/a><\/p>\n\n\n\n<p>This study dives into how well people can identify real human faces and \u201cdeepfake\u201d images created by artificial intelligence. These \u201cdeepfake\u201d images are a threat to cybersecurity because they use artificial intelligence to manipulate images and undermine the authenticity of visual media. In this day and age of \u201cfake news,\u201d something as seemingly small as an AI-generated image can be detrimental. The article relates to social science principles such as cognitive bias, human error, and the influence of technology.<\/p>\n\n\n\n<p>The authors of this article investigate the question of how accurately humans can identify AI-generated images. 
Beyond answering that question, the article attempts to identify interventions that improve people&#8217;s ability to detect \u201cdeepfake\u201d images. By increasing accuracy in recognizing manipulated images, the study aims to strengthen society&#8217;s ability to discern the truth, making for a better-informed population.<\/p>\n\n\n\n<p>Participants in this study were placed into four groups: one control group and three intervention groups. They were shown twenty images chosen from a set of fifty real images and fifty images created by artificial intelligence. The groups were asked to label each image as real or fake and, with each choice, to describe their confidence and reasoning. The results showed that accuracy was around sixty-two percent. The data in the study is both quantitative and qualitative: the quantitative portion is the collected data and the participants&#8217; accuracy percentage, while the qualitative portion is the participants&#8217; explanations of their answers. This study reflects several scientific principles, such as objectivity, skepticism, and determinism. Objectivity is shown in how the researchers used measurable data to conduct the study. Skepticism is shown in how the researchers tested the assumption that people can easily identify fake images. 
Determinism is shown in how the researchers assume that human error can be predicted from cognitive bias.<\/p>\n\n\n\n<p>Overall, this study contributes to society by increasing awareness of how much impact AI-generated images can have. It highlights the need for improved detection technology, stronger cybersecurity policy, and greater public awareness. This shows that cybersecurity is not only a technological problem but also a social science issue, rooted in the connection between human behavior and perception.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Brandon Vuono&nbsp; CYSE 200S&nbsp; Article Review 1&nbsp; Article reviewed&nbsp;&nbsp; Sergi D Bray, Shane D Johnson, Bennett Kleinberg (2023)&nbsp;Testing human ability to detect \u2018deepfake\u2019 images of human faces.&nbsp;Journal of Cybersecurity, Volume 9, Issue 1, 2023, tyad011,&nbsp;https:\/\/doi.org\/10.1093\/cybsec\/tyad011&nbsp; This study dives into how well people can&nbsp;identify&nbsp;real human&nbsp;faces and \u201cdeepfake\u201d images created by artificial intelligence.&nbsp;These&nbsp;\u201cdeepfake&#8221; images are a threat&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/brandonvuono\/cyse201s\/article-1\/\">Read 
More<\/a><\/div>\n","protected":false},"author":18452,"featured_media":0,"parent":308,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/pages\/337"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/users\/18452"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/comments?post=337"}],"version-history":[{"count":2,"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/pages\/337\/revisions"}],"predecessor-version":[{"id":343,"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/pages\/337\/revisions\/343"}],"up":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/pages\/308"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/brandonvuono\/wp-json\/wp\/v2\/media?parent=337"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}