{"id":324,"date":"2025-04-30T00:26:03","date_gmt":"2025-04-30T00:26:03","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/?page_id=324"},"modified":"2025-04-30T00:40:20","modified_gmt":"2025-04-30T00:40:20","slug":"article-review-two","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/article-review-two\/","title":{"rendered":"Article Review Two"},"content":{"rendered":"\n<p class=\"has-text-align-center\"><strong><em>A Test of a Human\u2019s Ability to Discover Deepfake Images of a Person\u2019s Face<\/em><\/strong><\/p>\n\n\n\n<p class=\"has-text-align-center\"><strong>Introduction<\/strong><br>The article I chose to review examines deepfakes and how they affect people; as the article explains, deepfakes can \u201cmake entities that falsely represent reality.\u201d The article also describes a test of humans\u2019 ability to recognize deepfakes, presenting the results along with details of what was used for the test and how it was designed.<br><strong>Relations to the Principles of Social Science<\/strong><br>The article relates to the principles of social science in several ways. It reflects Relativism by showing how deepfake technology affects interconnected systems such as trust. It also relates to Empiricism, Parsimony, Ethical Neutrality, and Determinism. For Empiricism, the study uses controlled experiments and statistics to analyze how accurate humans are at detecting deepfakes. For Parsimony, the article offers a straightforward explanation: people struggle to spot deepfakes because their flaws are subtle and hard to point out. For Ethical Neutrality, the article does not side with one opinion; it balances the pros and cons of the technology. 
Finally, Determinism is shown in the article\u2019s point that exposure to and familiarity with deepfakes can improve detection accuracy.<br><strong>The Study\u2019s Questions and Hypotheses<\/strong><br>The article poses three specific research questions. The first is whether \u201cparticipants are able to differentiate between deepfake and real images above chance levels.\u201d The second is, \u201cDo simple interventions improve participants\u2019 deepfake detection accuracy?\u201d The third is, \u201cDoes a participant\u2019s self-reported level of confidence in their answer align with their accuracy at detecting deepfakes?\u201d<br><strong>The Research Method<\/strong><br>The researchers used an experimental research method together with an online survey. They assigned 280 participants to one of four groups, creating independent, dependent, and control conditions. Each group was asked to judge whether each picture shown to them was real or a deepfake. This design tested whether familiarization and other factors could improve detection accuracy.<br><strong>Types of Data<\/strong><br>The article presents both quantitative and qualitative data. The quantitative data include the accuracy rates, confidence levels, and statistics describing the participants\u2019 ability to detect a deepfake. 
The qualitative data come from the participants\u2019 explanations of their reasoning for deciding whether each picture was a deepfake.<br><strong>Concept Relationships with the PowerPoints<\/strong><br>The article connects to the PowerPoints we went over, which discussed human emotions and how easily they can be manipulated by cybercrimes such as deepfakes and phishing attacks.<br><strong>Challenges, Concerns, and Contributions<\/strong><br>The main challenge shown in the article is that human accuracy at detecting deepfakes is only slightly better than chance, and simple interventions failed to significantly improve detection. A concern is how confident participants were in the answers they got wrong, which suggests they are more likely to fall for a deepfake than not. The article\u2019s contributions include a structured analysis of humans\u2019 ability to detect deepfakes, insight into how effective simple interventions can be, and a case for improving our technology and education to counter the continuing threat that deepfakes pose.<br><strong>Conclusion<\/strong><br>Overall, this article does a good job of explaining that we still have a long way to go before we can accurately detect a deepfake by sight alone. From the results of the research, we can conclude that deepfakes are currently very hard to spot without the proper tools, so further research is needed, as deepfakes can be a good or a bad thing depending on how they are used.<br><strong>References<\/strong><br>Bray, S. D., Johnson, S. D., &amp; Kleinberg, B. (2023). Testing human ability to detect<br>\u2018deepfake\u2019 images of human faces. 
Journal of Cybersecurity, 9(1).<br>https:\/\/doi.org\/10.1093\/cybsec\/tyad011<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A Test of a Human\u2019s Ability to Discover Deepfake Images of a Person\u2019s Face Introduction The article I chose to review examines deepfakes and how they affect people; as the article explains, deepfakes can \u201cmake entities that falsely represent reality.\u201d The article also describes a test of humans\u2019&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/article-review-two\/\">Read More<\/a><\/div>\n","protected":false},"author":30282,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/pages\/324"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/users\/30282"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/comments?post=324"}],"version-history":[{"count":3,"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/pages\/324\/revisions"}],"predecessor-version":[{"id":330,"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/pages\/324\/revisions\/330"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/ashawnrobertson\/wp-json\/wp\/v2\/media?parent=324"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}