{"id":93,"date":"2023-08-26T06:01:53","date_gmt":"2023-08-26T06:01:53","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/cyberimpact1\/?page_id=93"},"modified":"2025-10-11T00:15:21","modified_gmt":"2025-10-11T00:15:21","slug":"law-ethics","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/law-ethics\/","title":{"rendered":"AI Ethics"},"content":{"rendered":"\n<p class=\"has-x-large-font-size\">Ethical Implications of EU&#8217;s AI Act<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-1024x683.jpg\" alt=\"\" class=\"wp-image-622\" style=\"width:407px;height:auto\" srcset=\"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-1024x683.jpg 1024w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-300x200.jpg 300w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-768x512.jpg 768w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-1536x1024.jpg 1536w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-2048x1365.jpg 2048w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-960x640.jpg 960w, https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-content\/uploads\/sites\/33778\/2025\/10\/Security-1-450x300.jpg 450w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p><\/p>\n\n\n\n<p>This project showcases the ability to analyze a complex topic and instruct security professionals to consider the implications of their products. 
Security professionals will always have to balance efficiency with security.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<p>AI implementation can boost efficiency, lower costs, and reduce workloads across various sectors (Bogaerts, 2023). However, many ethical issues must also be addressed when considering AI. The European Union is currently the frontrunner in addressing these issues through the guidelines within the EU\u2019s AI Act. The Act aims to place people, rather than technology, at the center of AI development.<\/p>\n\n\n\n<p><br>The European Union\u2019s guidelines within the AI Act address some of the key ethical challenges that AI presents. These ethical issues include various risks to rights and fundamental freedoms (EU guidelines on Ethics in Artificial Intelligence, n.d.). The European Union has taken a stance against AI that poses risks to people in the areas of data protection and privacy, safety and security, and fake news. AI also affects the job market, displacing workers and opening the door to automation and misinformation (EU guidelines on Ethics in Artificial Intelligence, n.d.).<\/p>\n\n\n\n<p><br>The EU AI Act takes a human-centric approach to the impacts of AI (EU guidelines on Ethics in Artificial Intelligence, n.d.). The guidelines restricting AI, however, could have substantial effects on the EU\u2019s economy. The exact cost of these protections is uncertain, but many estimates place the losses in the billions of dollars (Lauer, 2021). The guidelines within the Act accept these economic limitations in exchange for ensuring that AI development does not infringe on human rights. The Act outlines that humans should retain full control over their data, systems, and decision-making.<\/p>\n\n\n\n<p><br>AI systems are built using machine learning; however, they are still developed by people and can therefore contain biases. 
Bias remains an ongoing ethical concern with AI. In addition to bias, the development of AI and the collection of data limit transparency for individuals. \u201cWhen organizations are not transparent about why and how data is collected and stored, privacy is at risk\u201d (Bogaerts, 2023). The EU AI Act guidelines strive to ensure diversity and fairness within AI systems, protecting the integrity of data by limiting high-risk systems. In addition, systems created with AI should be traceable and identifiable (EU guidelines on Ethics in Artificial Intelligence, n.d.). Through various standards, the European Union promotes accountability and responsibility for any outcome produced by AI (Ethics guidelines for Trustworthy AI. Shaping Europe\u2019s digital future, 2019).<\/p>\n\n\n\n<p><br>The EU AI Act also helps to ensure that systems are created for societal and environmental well-being (Ethics guidelines for Trustworthy AI. Shaping Europe\u2019s digital future, 2019). The European Union understands that the inventions created today shape the world of tomorrow; therefore, it is essential to be aware of what is being created and how it could affect future generations. Additionally, AI implementation should also ensure that the<\/p>\n\n\n\n<p>The EU AI Act is the frontrunner in restricting AI and protecting people, future generations, the environment, and privacy. AI offers benefits such as lower costs, higher efficiency, and advancement across various sectors. However, the development of AI also poses significant challenges and ethical considerations. The EU AI Act aims to restrict any unintended outcomes that AI may produce. Although the Act has addressed many of these challenges, it must continually evolve to keep pace with the demands and challenges presented by new technology.<br><\/p>\n\n\n\n<p><br><\/p>\n\n\n\n<p>Bibliography<br>Bogaerts, B. (2023, January 31). Linking the AI Act with privacy &amp; ethics. 
KPMG.<br><a href=\"https:\/\/kpmg.com\/be\/en\/home\/insights\/2023\/01\/drma-linking-the-ai-act-with-privacy-and-ethics.html\">https:\/\/kpmg.com\/be\/en\/home\/insights\/2023\/01\/drma-linking-the-ai-act-with-privacy-and-ethics.html<\/a>.<br><\/p>\n\n\n\n<p>EU guidelines on Ethics in Artificial Intelligence (n.d.). <a href=\"https:\/\/www.europarl.europa.eu\/RegData\/etudes\/BRIE\/2019\/640163\/EPRS_BRI(2019)640163_EN.pdf\">https:\/\/www.europarl.europa.eu\/RegData\/etudes\/BRIE\/2019\/640163\/EPRS_BRI(2019)640163_EN.pdf<\/a>.<br><\/p>\n\n\n\n<p>Ethics guidelines for Trustworthy AI. Shaping Europe\u2019s digital future. (2019, April 8). <a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai\">https:\/\/digital-strategy.ec.europa.eu\/en\/library\/ethics-guidelines-trustworthy-ai<\/a>.<\/p>\n\n\n\n<p>Lauer, M. (2021, September 24). Clarifying the costs for the EU\u2019s AI Act. <a href=\"https:\/\/www.ceps.eu\/clarifying-the-costs-for-the-eus-ai-act\/\">https:\/\/www.ceps.eu\/clarifying-the-costs-for-the-eus-ai-act\/<\/a>.<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Ethical Implications of EU&#8217;s AI Act This project showcases the ability to analyze a complex topic and instruct security professionals to consider the implications of their products. Security professionals will always have to balance efficiency with security. AI implementation can boost efficiency, lower costs, and benefit various sectors with lower workloads (Bogaerts, 2023). 
However, there&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/nickcarpenter\/law-ethics\/\">Read More<\/a><\/div>\n","protected":false},"author":27159,"featured_media":0,"parent":0,"menu_order":1,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/pages\/93"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/users\/27159"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/comments?post=93"}],"version-history":[{"count":5,"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/pages\/93\/revisions"}],"predecessor-version":[{"id":680,"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/pages\/93\/revisions\/680"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/nickcarpenter\/wp-json\/wp\/v2\/media?parent=93"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}