{"id":384,"date":"2025-12-06T04:02:21","date_gmt":"2025-12-06T04:02:21","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/?page_id=384"},"modified":"2025-12-06T04:18:19","modified_gmt":"2025-12-06T04:18:19","slug":"journal-entire-15","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/journal-entire-15\/","title":{"rendered":"Journal Entry 15"},"content":{"rendered":"\n<p>What struck me: Ethical questions from the presentation.<\/p>\n\n\n\n<p>Watching the video made me think deeply about several serious ethical problems tied to AI, deepfakes, and social cybersecurity. Key concerns:<\/p>\n\n\n\n<ul>\n<li><strong>Consent &amp; identity misuse<\/strong> \u2014 The presentation shows how malware and AI allow hackers to generate fake videos or audio impersonating a real person without their permission. This undermines a person\u2019s right to control their own image and likeness. According to research, this non-consensual manipulation harms victims\u2019 privacy and autonomy. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2590291125006102?utm_source=chatgpt.com\">ScienceDirect<\/a><\/li>\n\n\n\n<li><strong>Erosion of trust \/ authenticity crisis<\/strong> \u2014 As deepfakes become more realistic, any video or recording may be dismissed as \u201cfake,\u201d even if it\u2019s genuine. This undermines trust in evidence \u2014 in journalism, courts, politics, and interpersonal communication. <a href=\"https:\/\/cloudsecurityalliance.org\/blog\/2024\/06\/25\/ai-deepfake-security-concerns?utm_source=chatgpt.com\">Cloud Security Alliance<\/a><\/li>\n\n\n\n<li><strong>Potential for exploitation and abuse<\/strong> \u2014 Deepfakes can be weaponized for misinformation, harassment, revenge porn, scams, or political manipulation. This raises serious moral, legal, and social concerns. 
<a href=\"https:\/\/www.geeksforgeeks.org\/artificial-intelligence\/is-deepfake-a-threat-to-humans\/?utm_source=chatgpt.com\">GeeksforGeeks<\/a><\/li>\n\n\n\n<li><strong>Social and psychological harm<\/strong> \u2014 Victims of non-consensual deepfakes may suffer reputational damage, emotional distress, and long-term consequences for their personal and professional lives. The technology allows manipulation of people\u2019s identities in ways they cannot easily control or undo. <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2590291125006102?utm_source=chatgpt.com\">ScienceDirect<\/a><\/li>\n<\/ul>\n\n\n\n<p>In the video\u2019s cybersecurity context, these concerns aren\u2019t theoretical \u2014 they\u2019re tangible risks that hackers may exploit, potentially impacting everyday people, institutions, and global trust.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">How I think society should address these ethical concerns<\/h2>\n\n\n\n<p>Given the scope and severity of deepfake risks, we need a <strong>multi-pronged social, technological, and legal response<\/strong>. Here\u2019s what I believe should be done:<\/p>\n\n\n\n<p><strong>1. Legal Regulation &amp; Accountability<\/strong><\/p>\n\n\n\n<ul>\n<li>Enact laws criminalizing non-consensual deepfakes, especially those involving explicit content, identity theft, defamation, or fraud.<\/li>\n\n\n\n<li>Require clear labeling of synthetic media (e.g., watermarking, metadata tags) so deepfakes can be distinguished from original content. This helps preserve trust and evidence integrity.<\/li>\n\n\n\n<li>Hold creators and distributors legally responsible for misuse. Platforms should enforce the removal of non-consensual or harmful deepfakes.<\/li>\n<\/ul>\n\n\n\n<p><strong>2. Technological Defenses and Authentication Tools<\/strong><\/p>\n\n\n\n<ul>\n<li>Develop and deploy robust deepfake detection tools (watermarking, AI detection, video provenance systems). 
The research community has already demonstrated promising detection methods. <a href=\"https:\/\/arxiv.org\/abs\/2206.09842?utm_source=chatgpt.com\">arXiv<\/a><\/li>\n\n\n\n<li>Use identity verification \u2014 e.g., two-factor authentication tied to biometrics or trusted identities \u2014 when sensitive media (financial, legal, official) is involved.<\/li>\n\n\n\n<li>Encourage media platforms to require proof of authenticity before hosting or sharing sensitive media.<\/li>\n<\/ul>\n\n\n\n<p><strong>3. Public Awareness &amp; Education<\/strong><\/p>\n\n\n\n<ul>\n<li>Incorporate digital literacy education (in schools, workplaces, and public campaigns) to teach people how to recognize, question, and verify suspicious media.<\/li>\n\n\n\n<li>Promote skepticism and verification: encourage people not to accept every video or audio clip at face value, especially if it seems sensational or out of character.<\/li>\n\n\n\n<li>Support victims by publicizing resources, counseling, and legal support for people targeted by malicious deepfakes.<\/li>\n<\/ul>\n\n\n\n<p><strong>4. Ethical Standards &amp; Industry Responsibility<\/strong><\/p>\n\n\n\n<ul>\n<li>Tech companies and developers should adopt ethical guidelines for generative AI use, including consent mechanisms and disclaimers for synthetic media.<\/li>\n\n\n\n<li>Encourage open research and transparency around AI tools \u2014 so that detection tools evolve as generative tools evolve.<\/li>\n\n\n\n<li>Foster cooperation among governments, tech companies, academic researchers, and civil society to monitor and respond to emerging threats.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">Personal Reflection &amp; View<\/h2>\n\n\n\n<p>Watching the video intensified for me the idea that cybersecurity isn\u2019t just about firewalls and encryption \u2014 it\u2019s also about <strong>human dignity, trust, and the social fabric<\/strong>. 
Deepfake-powered attacks can destroy lives, reputations, and institutions \u2014 especially since victims often have little control over the spread once content is online.<\/p>\n\n\n\n<p>As both a citizen and someone interested in cybersecurity, I believe we must treat deepfakes not just as a technical challenge but as a <strong>societal crisis<\/strong> requiring ethics, law, technology, and education to respond together.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>What struck me: Ethical questions from the presentation. Watching the video made me think deeply about several serious ethical problems tied to AI, deepfakes, and social-cybersecurity. Key concerns: In the video\u2019s cybersecurity context, these concerns aren\u2019t theoretical \u2014 they\u2019re tangible risks that hackers may exploit, potentially impacting everyday people, institutions, and global trust. 
How I&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/juniorjohnson\/journal-entire-15\/\">Read More<\/a><\/div>\n","protected":false},"author":31436,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/pages\/384"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/users\/31436"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/comments?post=384"}],"version-history":[{"count":2,"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/pages\/384\/revisions"}],"predecessor-version":[{"id":388,"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/pages\/384\/revisions\/388"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/juniorjohnson\/wp-json\/wp\/v2\/media?parent=384"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}