IDS 300W: How Does Artificial Intelligence Impact Perceptions of Online Privacy in the United States?

Abstract

            With the United States so heavily dependent on artificial intelligence (AI), it is worth examining how AI and modern technology influence American society. Much research has investigated how technology users understand and interpret AI and its entailments. Another subject of great discussion is online privacy: determining how online privacy should be properly defined and adequately protected is a controversial topic that concerns many United States (US) citizens. This inquiry leads many to wonder how artificial intelligence impacts perceptions of online privacy in the United States. Between the variety of perspectives held by those interested in and affected by technology’s national dominance, and the rising generations bringing still more viewpoints, satisfying and appropriately managing every potential digital outburst challenges protective privacy policy, companies, and individuals alike. In addition to digital privacy protection, examining the criminological, sociological, and cybersecurity-focused stances on this subject offers a more in-depth grasp of what online privacy looks like and how it can both benefit and restrict its users. Although these disciplines come together to elaborate on the issue of digital privacy, conflict arises because technology does not affect every individual the same way: the ensured privacy of one individual may cause biased or unnecessary exposure for another. Each of these factors alters American attitudes toward online privacy, as well as the measures put in place to protect its users.

Introduction

        Artificial intelligence (AI) plays a significant role in the lives of the nation’s citizens, affecting nearly everything experienced on an everyday basis that involves online privacy. This leads many to wonder how artificial intelligence impacts perceptions of online privacy in the United States. Furthermore, this study would not be complete without examining the criminological, sociological, and cybersecurity-focused viewpoints on the matter. Because each discipline offers a unique and important perspective, each receives careful consideration. Criminology addresses what privacy means in legal terms and when it is violated. Cybersecurity offers a similar contribution, speaking to the direct and indirect relationship between privacy and artificial intelligence. Sociology, in turn, discusses the human element of each and how it affects society. An interdisciplinary approach not only assists me as a cybercrime major, since the major draws on several disciplines, but also accommodates the occasional disagreement among them. It is therefore appropriate to bring each insight together coherently.

Definitions

        Throughout this paper, a few possibly unfamiliar terms are used, including scamming and cybercrime. Braucher and Orbach (2016) define scamming as “schemes where the confidence man (the “con. man”) persuades the victim (the “mark”) to trust him with her money” (p. 249). Additionally, cybercrime is defined as “crimes committed on the internet using the computer as either a tool or a targeted victim” (Aghatise, 2006, p. 1).

Sociology

        In a nation so heavily influenced by AI and its entailments, questions naturally arise about the effects it has on society, its growth, its shortcomings, and its overall outcomes. Specifically, examining how privacy is affected by this technology begins with what society interprets privacy to be. A nation so reliant on AI is subject to certain limitations relating to information accessibility (Nissim & Wood, 2018, p. 3). Furthermore, the accuracy and frequency of what is published about an individual also cause friction between AI and its users. The intensity with which these systems dominate the public and private lives of many often alters the opinions of those subject to their harm (Nissim & Wood, 2018, p. 2).

        How AI and online privacy matters are perceived across different social distinctions (such as age, gender, and income) is another significant element of sociological reasoning (Zeissig et al., 2017, p. 7). These varying social groups have been shown to differ not only in how they view privacy protection but also in how they handle and respond to it (Zeissig et al., 2017, p. 5).

Criminology

        After analyzing the sociological perspective across these numerous societal varieties, directing the focus to a criminological stance evokes further discussion. The conceptions involved in defining the contested topic of privacy hold some authority over how it influences a democratic nation. Privacy failures can also put United States citizens at increased risk of discriminatory and prejudiced criminal trials, due to the information loss that AI jeopardizes (Manheim & Kaplan, 2019, p. 13). This criminological discussion strengthens the interest users have in AI’s privacy impairments.

        Accordingly, United States citizens, who are typically the victims of criminal online intrusions, raise concerns about where cybercrime possibilities lie. Although most online deceptions are motivated by financial gain, there are accounts of military and espionage offenses, which could escalate into more substantial crimes than previously understood (Manheim & Kaplan, 2019, p. 31). Each of these points justifies why criminological and criminal-justice viewpoints are so badly needed to prevent the fraudulent and malevolent measures these criminals may undertake.

Cybersecurity

        Cybersecurity serves as an accumulation of every privacy policy, perspective, and consideration implemented throughout the country. In other words, the strength of privacy-related measures is put on full display and rigorously tested in the implementation of cybersecurity. “Cybersecurity measures are essential to protect data (e.g., against intrusions or theft by hackers). However, they may not be sufficient to protect privacy” (Fefer, 2020, p. 8). That is, regardless of how highly privacy is prioritized, any shortcomings in its execution will be exposed to the public eye.

        Consider an example. It is well known that organizations often seek out their customers’ personal information (Fefer, 2020, p. 4). However, most people do not wish to freely give out their personal data, which introduces the idea that personal data is private property (Fefer, 2020, p. 5). This leads companies to weigh the benefits and liabilities their organizations hold, and possibly to encourage their consumers to reserve or restrict certain information in response (Fefer, 2020, p. 5).

Common Ground

        Sociology, criminology, and cybersecurity are all key aspects of the interrelation of privacy and modern technology. The joint effort of sociological and criminological correspondence is demonstrated through the development of, and increased concern for, reliable privacy assurance. Also, AI’s responsibility to maintain and properly distribute major components needed during criminal cases affects the validity of convictions and court conclusions (Manheim & Kaplan, 2019, p. 52). Moreover, this integration can hinder the people’s trust in American democracy and the justice system; the public’s confidence in today’s cybersecurity is heavily conditioned on privacy being treated with the utmost importance, so when those expectations are not met, faith in these security systems is obstructed (Manheim & Kaplan, 2019, p. 4).

        Consequently, criminals often use such periods of skepticism as a time to strike and wreak havoc on the nation’s security (Fefer, 2020, p. 20). These points exhibit the monumental significance of the previously mentioned disciplines that contribute to this contentious topic. How these concerns affect American citizens, their technological safety, and the ease of criminal offenses demonstrates how detrimental these threats truly are, as well as how momentous the disciplines’ analytic contributions remain.

Disciplinary Conflicts

        Despite the strong foundations the aforementioned disciplines provide, some conflicts persist throughout this discussion. One is the cybersecurity tension between privacy and transparency (Christen et al., 2020, p. 59). Between the countless differing perceptions of every individual interested in this topic and the variety of disclosed information, opposing interpretations are inevitable and often spark heated debate. Whether general or specific, the act of protecting information while simultaneously granting a complete and accurate revelation of a particular subject contradicts itself once the varying morals of those assessing the information are added in (Christen et al., 2020, p. 91).

        Another conflict arises when the ensured privacy of one individual results in a discriminatory influence on another, prompting greater contemplation of how such matters should be handled (Christen et al., 2020, p. 49). Unfortunately, identifying options that keep values a primary concern typically adds remarkable difficulty to responding appropriately to this issue (Christen et al., 2020, p. 51).

Constructing a More Comprehensive Understanding or Theory

        Given the analysis thus far, it can be easy to grow cautious and even worry about the potential dangers AI is capable of. Conversely, these concerns are often eased once a more in-depth grasp of AI’s purpose, abilities, and outcomes is reached. Artificial intelligence is meant to be just that: artificial. Whatever is constructed is man-made and, consequently, controlled and used by people. The results can vary, especially when considering by whom and how AI’s resources are being used (Chen et al., 2017, p. 292).

        It is primarily when AI is used with malicious intent, as can be seen throughout cybersecurity infiltrations and criminal cases, that offenses are frequently targeted at more naïve or unsuspecting potential victims (Chen et al., 2017, p. 292). This may contribute to further fraudulent offenses against innocent yet unaware individuals. At the same time, applications such as the internet and scamming mechanisms (e.g., phishing) offer cybercriminals more opportunities to exploit and manipulate these often young victims (Chen et al., 2017, p. 291). These risks raise something of a panic within American society, as the ease of being scammed is so widely publicized. Similarly, being subject to internet scams can produce a sense of distrust in cybersecurity affairs among the U.S. population.

        Because of these concerns, anxiety, wariness, and misconceptions may stunt the growth of technology. Nonetheless, many stakeholders actively attempt to counter these doubts by examining and even reconstructing privacy-protective policies, some creating completely new policies for their respective businesses (Fefer, 2020, p. 20). The Global Data Alliance is a prime example of this innovation. The association brings together a variety of industries (airlines, entertainment, financial services, etc.) to distribute information internationally and safely while also accounting for any restrictive data requirements (Fefer, 2020, p. 23). This shows how corporations work to provide a preferable, more inclusive, and contemporary adaptation of privacy policies, all while easing the concerns society has about online violations.

Reflecting On, Testing, and Communicating the Understanding or Theory

        The regulatory implementations that shape our understanding and use of online privacy have a strong hold on what users interpret as adequate privileges and suitable rights. Unfortunately, these demands become altered and more difficult to fulfill when numerous sources, platforms, perspectives, and institutions are considered (Nissim & Wood, 2018, p. 13). For example, the Family Educational Rights and Privacy Act (FERPA) preserves the safety of only educational information, such as school records (Nissim & Wood, 2018, p. 2). This practically limits the ensured security to only certain types of data, leaving many other forms of information at higher risk of exposure to criminals and privacy breaches. On the other hand, specialized protection such as FERPA does offer helpful assurance that the nation’s education-focused information is well protected and made a priority.

        Interestingly, privacy risks and cybercrimes are not entirely understood by reading legislation alone (Nissim & Wood, 2018, p. 4). Practical data and a human factor are required to genuinely and fairly recognize the law’s practices (Nissim & Wood, 2018, p. 4). Nissim and Wood (2018) explain it this way: “In contrast with regulatory requirements, a formal privacy model like differential privacy offers general protections and can, in principle, be applied wherever statistical or machine learning analysis is performed on collections of personal information, regardless of the contextual factors at play” (p. 11). This statement reiterates why privacy is so vital in the lives of many, and how broadly the suggested application plays a role in every area of the nation’s viability.
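To make the differential privacy model that Nissim and Wood describe more concrete, the following is a minimal illustrative sketch (not taken from any cited source): the classic Laplace mechanism applied to a counting query. The function name `dp_count`, the sample data, and the parameter choices are hypothetical; the idea shown is simply that calibrated random noise is added to a statistic so that any one person’s record has only a limited effect on the published result.

```python
import math
import random

def dp_count(records, predicate, epsilon):
    """Return a differentially private count of records matching `predicate`.

    Adding or removing one person's record changes the true count by at
    most 1 (sensitivity = 1), so Laplace noise with scale 1/epsilon
    provides epsilon-differential privacy for this query.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Draw one sample from a Laplace(0, 1/epsilon) distribution
    # using inverse transform sampling on a uniform value in (-0.5, 0.5).
    u = random.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Hypothetical usage: a small epsilon adds more noise (stronger privacy);
# a large epsilon adds less noise (weaker privacy, higher accuracy).
ages = [25, 31, 47, 52, 38]
noisy_count = dp_count(ages, lambda age: age >= 40, epsilon=0.5)
```

The key property, reflected in the quotation above, is that this protection holds regardless of context: the noise calibration depends only on the query’s sensitivity and the chosen epsilon, not on what other information an attacker may hold.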

Conclusion

        Throughout this analysis, a variety of topics, perspectives, and data sources have served as valuable contributions to this paper. The exposition of this research would not be complete without the support of the cybersecurity, sociological, and criminological disciplines. Together, these disciplines build a solid foundation for understanding how artificial intelligence genuinely impacts perceptions of online privacy in the United States. Conflicting views of fair representation and unbiased accusation are heavily subject to online privacy perspectives and to what American society deems satisfactory. Making online security and infiltration awareness a priority also influences the execution of these online privacy principles. In conclusion, these findings grant a deeper curiosity about, and recognition of, the many implications this topic bestows.

References

Aghatise, J. (2006, June). Cybercrime definition. https://www.researchgate.net/publication/265350281_Cybercrime_definition

Braucher, J., & Orbach, B. (2016). Scamming: The misunderstood confidence man. Yale Journal of Law & the Humanities, 27(2), 249–290.

Chen, H., Beaudoin, C. E., & Hong, T. (2017). Securing online privacy: An empirical test on internet scam victimization, online privacy concerns, and privacy protection behaviors. Computers in Human Behavior, 70, 291–302. https://doi.org/10.1016/j.chb.2017.01.003

van de Poel, I. (2020). Core values and value conflicts in cybersecurity: Beyond privacy versus security. In The International Library of Ethics, Law and Technology (Vol. 21). The International Library of Ethics, Law and Technology.

Fefer, R. F. (2020, March 26). Data Flows, Online Privacy, and Trade Policy. https://sgp.fas.org/crs/misc/R45584.pdf

Manheim, K., & Kaplan, L. (2019). Artificial intelligence: Risks to privacy and democracy, 108–188.

Nissim, K., & Wood, A. (2018). Is privacy privacy? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2128), 20170358. https://doi.org/10.1098/rsta.2017.0358

Zeissig, E.-M., Lidynia, C., Vervier, L., Gadeib, A., & Ziefle, M. (2017). Online privacy perceptions of older adults, 1–20.
