Reflective Writing
In this course I have learned a great deal about different ethical frameworks and how they apply to real-world situations. One perspective I learned a great deal from is consequentialism, and how this utilitarian viewpoint can be used to develop processes for the greater good even when the method could be considered morally grey. It changed my perspective on issues like surveillance for the sake of security, and on how some personal rights may morally be infringed upon for the sake of the greater good. I would like to remember this as I move into my career in cybersecurity: the development of some processes can feel unethical in nature or require you to compromise previously held beliefs, and this framework will help me understand what would be ethical to compromise for the sake of the greater good. Another perspective I gained a great deal from was the deontological ethical view. It deepened my understanding that, under this view, it is not enough simply to do the right thing; you must also be doing it for the right reasons for the act to be considered moral. This is important because it prevents a level of cognitive dissonance when performing your tasks, since you can feel that your values are aligned with your project. My takeaway is that I would like to reflect this value in the jobs I choose in my profession. It is much easier to make money if you are willing to work in a morally grey area, but making sure the tasks I perform are inherently ethical for the greater good allows me to take greater pride in my work and deepens my sense of responsibility toward the proper implementation of my work. Another perspective I deepened is the ethical and unethical nature of whistleblowing.
Previously I had not considered how whistleblowing can be unethical when the whistleblower does not first try proper channels for internal change, or how important it is that, when whistleblowers release information, they carefully craft their reports so the evidence focuses only on the specific issue they want to bring to the public's attention. As I move through my career I want to remember the ethical nature of whistleblowing, and my responsibility to the company to support it by creating ethical changes within the organization, not only for the public's benefit but also for the company's, by improving public relations and preventing possible channels for litigation. The last topic I took a great deal of knowledge from was the ethics of data mining and how large databases must be used responsibly. It deepened my understanding of the amount of data we collect on people and how this information can be used unethically for things like public persuasion. I also learned much about the need for legislation that allows the public to control how much information is collected on them and for what reasons. As new technologies are implemented and become widespread over the course of my career, I will use this knowledge to look at the ethical nature of such technologies more critically: whether they allow for proper consent, how they can be taken advantage of through legitimate means, and what the dangers are of giving organizations access to technologies they could exploit through organic channels for their own benefit.
Below are some of my writings from class that present my personal takes on articles we discussed and showcase the greater ethical implications of certain events.
Information Warfare
The article “What Did Facebook Do to Democracy” discusses the aftermath of the 2016 election and how we are becoming more aware of the power Facebook holds over elections through its distribution of both information and disinformation, and how the large databases Facebook has collected allow people to distribute their propaganda more effectively to sway public opinion in their favor. Combined with Facebook becoming the biggest news distributor while tailoring the articles it presents to your interests, the proliferation of fake news, and the fact that foreign adversaries have access to the same resources to create propaganda in the United States, the public is more at risk than ever from disinformation and outside influence. In this Case Analysis I will argue that the ethical tool of virtue shows us that Facebook did engage in information warfare, because it allowed the use of targeted propaganda by politicians and foreign adversaries, and further that it was partly responsible for the election outcome because its platform gave the Trump campaign tools to radicalize his supporters for the sake of voter turnout.
The Prier article discusses the dangers of social media as a form of information warfare through propaganda, news, and information and disinformation sharing. One concept Prier brings up is tailoring news posts, or the reach of posts, to groups that may be more specifically interested in the topic. While not inherently a moral failure on its own, it begins to have consequences when combined with another concept Prier raises: propaganda. Used for propaganda, this method spreads messages more easily to like-minded individuals, and the tailoring of posts shown to users creates a feedback loop that makes it less likely for contrasting information to reach those who are now susceptible to persuasion. Through the frame of virtue this is morally wrong. In many political debates the morally right answer is a matter of personal interpretation, and preventing certain information from reaching someone while providing them with an overwhelming amount of information of another type takes away that person's ability to form their own choices, as they cannot access contrasting viewpoints that could have persuaded them away from their ideology. This relates to the Facebook case in that the Trump campaign was able to rally much more support through its use of Facebook's targeted ads, and because of Facebook's algorithms for the content shown to users, those users were much more likely to receive more Trump ads and to be suggested news articles and groups that rallied people around him, causing people to become radicalized. This is morally wrong because it is, in a sense, a way of brainwashing people and cutting off outside influence, which robs them of their freedom of choice.
A possible solution is for ads shown to users to contain unbiased information on both candidates, with links to sites that allow the user to support either candidate, giving them back their freedom of choice on every ad they receive. Another concept from the Prier article is Russia weaving propaganda into existing narratives to sway public opinion against individuals or groups of its choosing. Through the ethical tool of virtue this is highly immoral, as allowing it puts the public at risk of foreign interference in things like elections in the name of corporate profits. Allowing it also lets foreign adversaries discredit institutions and spread fear to create civil unrest, pitting the community against itself. As applied in the case study, Russian information operatives were allowed by Facebook to create disinformation campaigns by commenting on certain American news sites and articles, boosting the engagement of articles so they became more likely to be pushed by Facebook to consumers, while consumers became less likely to interact with articles that condemned Russia's actions. This caused consumers to interact with Russian-backed propaganda in a more organic context, as they had no direct way of knowing about the foreign involvement. This should have been flagged by American security agencies, but Facebook should also have had its own first line of defense against possible foreign influence; at best Facebook can be described as negligent, and at worst as supportive of these actions. The solution would simply be to keep collecting the same data Facebook collects on everyone anyway, but to add flags for organizational levels of coordinated movement. If Facebook is going to be allowed to run algorithms that boost articles with high levels of engagement, it needs an auditing process to detect artificial inflation of engagement numbers.
In the Scott article, the author addresses the ethical, moral, and political issues of cyber conflict and tries to establish “rules of war.” One concept Scott brings up, under the section Twitter and Tear Gas, is that technology has allowed us to connect much more quickly and to rally support for causes much more effectively, since a single person's “voice” now has a much broader reach and it is easier for people to dispel disinformation with physical proof; but as discovered in the 2016 election cycle, and through studies of the activities of the Russian “Internet Research Agency,” it now works conversely as well. Through the ethical frame of virtue, this can be considered both moral and immoral. The advent of this technology is moral in that it puts more power against disinformation in the hands of the people, and using it to connect people with resources, safe spaces, and organizations to get involved with allows people to act upon their own values. But organizations using the same processes to spread disinformation more effectively is immoral: technology created to connect people instead serves to snuff out the voices of people acting upon their own virtue, and makes it more difficult for others to trust the resources available to them. This applies to the case study through the example of Russian involvement during the 2016 election. As mentioned previously, Russia was able to use social media to boost articles that coincided with its agenda as a means of hiding ones that opposed it. The solution is that every country and company must take an active role in preventing corporations and government entities from affecting the algorithms that control what articles you can see, or else adjust the algorithms to show more opposing viewpoints alongside ones closer to your own beliefs.
By pushing articles with opposing viewpoints equally, the audience can make more well-informed decisions on their own and can more effectively dispel disinformation themselves. Another concept brought up in this article is the regulation of internet use by blocking access to unacceptable sites, requirements for users and their devices to follow good practice, and certain denial-of-access processes built into hardware or software designs. While I believe this is proposed with the best of intentions, I think that when analyzed through the ethical tool of virtue it is immoral, as it takes away personal freedoms and creates the risk of government control that could be used to reduce the impact of organizations such as civil rights groups. From my perspective, this method fails on the grounds that while it is done for the right reasons, it is the wrong thing to do because of the risks associated with it; the responsibility should instead rest with the organizations, as they have an incentive both to give resources to groups such as civil rights organizations and to protect the public interest. This relates to the 2016 election case, where Facebook should have done more to protect its users from politicians and foreign entities trying to sway public opinion. This could be done through a level of government intervention in which campaign ads are audited and Facebook is required to show opposing political viewpoints, lowering the chances of organizations using built-in systems to push agendas.
Through the ethical frame of virtue, I believe that Facebook did engage in immoral practices that resulted in information warfare. Facebook should have had preventive measures in place against manipulation of its systems, and its role in the 2016 presidential election was an abuse of power and a major factor in getting Trump elected. The wider implication of allowing organizations to use processes in these ways is that it gives great power to those with the financial backing to repress information and push their own agendas. This not only endangers many protected groups and civil rights groups, but is also a direct threat to the democratic process and to the rights of free speech and protest. Facebook has too much unchecked power over its user base; combined with its data mining practices, it has a near-infinite amount of data with which to refine its algorithms and either amplify or regulate this abuse of its system, and it should be held accountable for any outcomes.
User Data
The article by Palmer discusses the steps the EU has taken to implement policies that better protect citizens' data. The article shows that the EU is regulating how data is collected and held, creating responsibilities for the organizations that hold citizens' data and punishments where they fail, whether by negligence or otherwise. The GDPR also broadens what counts as personal identifiers and personal data as a means of better protecting individuals, requires an opt-out for those who do not wish to have their data collected, and requires organizations to build data protection safeguards in from the earliest stages of development. Seen through the ethical tool of care, this shows a great deal of care for citizens' rights to their personal data, privacy, and anonymity. In this Case Analysis I will argue that the ethical tool of care shows us that the United States should follow Europe's lead, because the lack of oversight of the organizations tracking our citizens puts their privacy at great risk.
In Michael Zimmer's article, Facebook data was collected on a number of college students from an undisclosed university for a study. The study drew much criticism because of how quickly people were able to narrow down potential schools and were even able to identify individual students. The article discusses how these students were not aware that their data was being collected for the study. Through the ethics of care, this is a moral failing on behalf of both the researchers and the institution: there is a level of privacy that everyone expects even for things they share publicly, as they often perceive them as visible only to people who personally know them, not to people seeking them out at random. As applied to the case study, the new EU regulations would require disclosing to those involved that their data will be collected, along with the ability to opt out. This is the right thing to do, as it gives people more ability to control their data and how they are tracked. The study also used student volunteers to gather the data through Facebook, and this is taken a step further by the fact that resident assistants were used to collect the information, as these individuals are in a position that makes them much more likely to be Facebook friends with many of the students living in the same residence halls. This allowed information to be collected even on students who had taken steps to control who could view their profiles. Through the ethics of care, the morally right thing to do would have been to allow students to opt out of the study if they did not want their data gathered, as they have a right to control the amount of data they share publicly and a right to privacy. The study was also insufficient at protecting the privacy of the data it had collected on the students.
The study publicly released the data set for anyone to view, and coupled with the information already released, this allowed viewers to deduce the school and possibly individual students, almost completely erasing any anonymity in the study. Through the ethics of care, we see that steps should have been taken to further hide the identities of the students: including more schools, removing students who could easily be deduced because they were the only one in the collected sample within their major or the only student representing their nationality, or making the data set private for the sake of the participants' anonymity. Obtaining the consent of the people in a study, and taking steps to ensure their privacy and anonymity, are essential parts of the beginning stages of data collection when ethical care is taken into consideration, as it is the right of the individual to refuse at any part of the process. Without this, the community of the students is called into question, destroying the interdependent relationships between them, especially with those in authority who should be working to protect them, such as their RAs and school organizations. This relates to the article in that, under the new GDPR guidelines, the organization would receive punishments, often in the form of fines, for mishandling collected data in a way that put the students at risk, alongside the required opt-out options and the basic data protection safeguards for those who do opt in.
One concept brought up in Buchanan's article is a Twitter model that uses certain characteristics of profiles, such as groups, followers, hashtags, and mentions, to identify possible ISIS or ISIL ties. While this was originally created to help prevent the spread of Islamic extremism and identify potentially dangerous subjects, it calls into question the use of Big Data in its processes. A common ethical question about models built on Big Data sets is whether an individual would have consented to their data being used for this purpose. The ethics of using personal data for marketing is often questioned, but what about a model used to identify supporters of an organization? This could be as harmful to one organization as it is helpful to another, and it is called into question even more when the movement seems morally grey, such as the Black Lives Matter movement brought up within the article. Many rallied to its support to end police violence against minority groups, some were against it, and others supported it but were outspoken against the riots that had erupted. If this program were used against supporters of Black Lives Matter, it would be inherently unethical, as it would use the data of the people supporting the group, who would most likely not consent to their data being used for such a purpose. Through the ethical tool of care, the use of this model would be unethical: in the case of its possible use against the Black Lives Matter movement, it would be law enforcement or the government working against people who, in their belief, are fighting for their human rights, and using a program like this to track and possibly persecute individuals by association violates people's right to be mutually interdependent.
Even if a person is associated with someone who has committed a crime, that does not make them worthy of prosecution to the same level as the offender, and they should not be subject to the same level of tracking and scrutiny, as they are not guilty of anything. This applies to the case in that the GDPR would require informing the group that their data is being collected and for what purpose, allowing them to opt out and protecting them from the application of algorithms that could be used to discriminate against them. Another point brought up in the article is the danger surrounding the fact that we do not control who has access to data that can be used to identify individuals. It is a considerable invasion of privacy that many organizations can pay for access to this personal data without the informed consent of individuals. Through the ethical lens of care this is immoral: it is a known issue to many, yet the policy around it often does not protect individuals from the use of this data outside of cases where the data is mishandled and there is a security breach. Caring policy would require that organizations make individuals aware of the collection and use of their data and put up safeguards that allow individuals, at the least, to opt out of having their data used for certain projects. Without consent it is no longer an interdependent relationship but a predatory one, in which the collector takes advantage of users' ignorance to collect their data for its own gain, even when this would negatively affect the person whose data is being collected. This applies to the GDPR case in that the EU now requires that people be informed before their data is collected, telling the user who is collecting their data and for what purpose it may be used.
It is my belief that the EU's policy on protecting user data is a good first step toward making sure organizations are held accountable for protecting the data they collect. Its improvements on informed consent for the individual, accountability for protecting user data, and groundwork for proper punishments for negligence and mishandling will greatly affect the processes of global companies and create a safer online environment for everyone. On a wider level, America following suit would greatly impact the trajectory of online user data laws, and similar protections should be put in place nationwide. Respecting citizens' right to control the data collected on them, and holding companies accountable for the use of that data, is the first step toward more ethical practice in creating technologies based on the gathered data of online users. Consent to the collection and use of personal data is also the best way to build an informed population and respect everyone's independent right to privacy. Those who oppose this regulation may argue that it will make it more difficult to acquire a full data set, but I argue that this is solved by widening the population being recorded, which will not only give a better picture of how a study applies to a more diversified population, but also allow any processes that come of it to be better fitted to real-world applications.