Case Analysis on User Data

In 2018, the European Union’s General Data Protection Regulation (GDPR) took effect throughout Europe. This legislation was designed to put forth comprehensive privacy regulations for companies and governments, ensuring that the data of Europe’s citizens is kept secure and anonymous. Any organization that conducts operations in the E.U. must abide by these regulations, meaning that much of the rest of the world benefits from them as well. The E.U.’s implementation of these policies raises the question: why hasn’t the United States implemented similar regulations? The U.S. does benefit from the GDPR to some extent, since many of the organizations that conduct business in the U.S. also conduct business in Europe. However, many smaller businesses and organizations do not, leaving a large amount of data unsecured, to say nothing of any loopholes the larger companies may find to avoid providing U.S. citizens the same protections as those in the E.U. In this Case Analysis, I will argue that, from the point of view of a consequentialist, the United States should implement a similar if not identical set of laws to the GDPR, as this would do the greatest good for the greatest number of people.

Before I can make my argument, however, I must first explain what consequentialism is. Consequentialism is the idea that for an action to be considered “good”, it must maximize the amount of good in the world. In effect, this means that actions are good when they have a positive outcome for most people. An example would be the classic trolley dilemma: in that situation, a consequentialist would always choose to flip the switch, sparing the five people tied to one branch of the tracks and killing the single person on the other branch. Thankfully, the case I will be examining today is less extreme than that.

In order to fully examine the case, I will be drawing on two external papers discussing privacy protections. The first is a paper written by Elizabeth Buchanan and published in 2017. In it, Buchanan discusses the ethics of big data research, specifically concerning its potential to aid in the War on Terror. For our purposes, Buchanan makes a point that is particularly important: “one may implicitly agree to one’s data being used for marketing purposes while that same person would not want their data used in intelligence gathering. But, big data research does not necessarily provide us with the opportunity to consent to either use, regardless of the intent” (Buchanan, 2017). This is a huge oversight in privacy regulations, at least in Buchanan’s eyes. The U.S. had little in the way of clear regulations for how consent should be collected and given for data research, at least at the time of Buchanan’s writing. This means that, as far as the U.S. government is concerned, any organization conducting big data research has relatively free rein over our information so long as it fulfills some basic privacy requirements (ones that have failed time and time again to protect our data from malicious attackers). The potential for data collected by these organizations to be leaked and linked back to the persons it was collected from is terrifying.

When reading the European Union’s new regulations, it is clear that the solution to Buchanan’s problem lies in the policies outlined there. At its core, the design philosophy of the GDPR is to allow citizens to better control and regulate how their data is collected and used. There are more specific policies aimed at data control as well: companies are required to tell consumers how their data is being used and to allow them to opt in to or out of the collection of that data. Some companies acquire consent through emails that detail the extent of their data collection operations. Others have implemented comprehensive dashboards that give users easy-to-use tools for controlling the company’s access to their data and, in some cases, removing the data altogether. By giving users control over their data, these rules and regulations stand to benefit millions of United States citizens while harming only a few corporations financially. Failing to implement these policies would ensure that these millions continue to live with their data out of their hands. When considering these two courses of action with the philosophy of a consequentialist and in the context of Buchanan’s thoughts, it is clear that the implementation of GDPR-like policies in the U.S. would be the best choice, as it would do the most good for the most people and address Buchanan’s problems with big data research. Otherwise, we would simply be allowing these people to undergo continued harm at the hands of organizations that take their data without their consent and without allowing them control over it. However, this is not the only issue that the GDPR could fix in the U.S.

When discussing research data collection gone wrong, Michael Zimmer posits that “if it [data] is potentially privacy invading content, it should simply not be released” (Zimmer, 2010). According to the European Union’s General Data Protection Regulation, this includes things like addresses, names, phone numbers, photos, IP addresses, friends, family members, and so on. Anything that could be used to potentially identify the person the data was collected from is considered “potentially privacy invading content”. Currently, the United States government has very weak protections in place regarding the anonymization of private data before release, and regarding the release of that data should it not be anonymized. This has resulted in financial loss, identity theft, and manipulation for millions over the years, but the GDPR’s regulations would address the issue if implemented in the United States.
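To make the idea of scrubbing “potentially privacy invading content” more concrete, the sketch below shows one rough way a dataset might be stripped of direct identifiers before release. It is only an illustration: the field names and records are hypothetical, and genuine anonymization under the GDPR requires far more than dropping or hashing a few columns.

```python
import hashlib

# Fields that could identify a person directly (a hypothetical list for
# illustration; the GDPR's notion of personal data is much broader).
DIRECT_IDENTIFIERS = {"name", "address", "phone", "email", "ip_address", "photo_url"}

def scrub_record(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the user ID with a salted hash,
    so the released record cannot be trivially linked back to a person."""
    scrubbed = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "user_id" in scrubbed:
        token = hashlib.sha256((salt + str(scrubbed["user_id"])).encode()).hexdigest()
        scrubbed["user_id"] = token[:16]  # pseudonymous token, not the real ID
    return scrubbed

# Hypothetical record of the kind a company might collect.
raw = {
    "user_id": 4821,
    "name": "Jane Doe",
    "email": "jane@example.com",
    "ip_address": "203.0.113.7",
    "pages_visited": 37,
    "purchases": 2,
}

print(scrub_record(raw, salt="release-2018"))
# Only behavioral fields and a pseudonymous token remain.
```

Even this simple sketch hints at why the GDPR also restricts how released fields may be combined: the remaining behavioral data could still single someone out if joined with other datasets, which is exactly the kind of re-identification the regulations discussed below are meant to prevent.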

Built into the GDPR are anonymity regulations that require all data collected for any purpose to be completely anonymized, to the point where it cannot be linked back to the person it concerns. These regulations also ban the release of such information should it not be anonymized. In addition, there are limits on how the data can be combined into a virtual profile, ensuring that the anonymous data cannot be pieced together in a way that makes identification possible. Should organizations fail to adequately anonymize the data, or fail to prevent its release to the public, they face heavy fines and other legal action depending on the severity of the breach. To address this oversight, the U.S. has two possible options: continue down its current path and leave data protections up to individual organizations, or implement GDPR-like policies that enforce effective anonymization and prevent the release of private data. Using the philosophy of consequentialism, we can reach our answer. The consequences of choosing not to implement GDPR-style laws in the U.S. would be mostly negative: organizations would still be able to release information that compromises the privacy of individuals without facing strong repercussions, and even the data that is not released would remain identifiable should the servers storing it be compromised by malicious actors. The consequences of implementing GDPR-style laws, however, would be mostly positive. Millions could rest assured that their data is either fully anonymized, and thus nearly impossible to link back to them, or not released at all. Things like ad manipulation, political manipulation, and other forms of targeted marketing (not to mention identity theft and other illegal activities) would be much harder to conduct against U.S. citizens. These regulations may cut into company profits and could potentially hinder legal investigations. In the eyes of a consequentialist, though, the path that makes the greatest improvement in the lives of the most people is the correct one to take, meaning that the U.S. should implement GDPR-style laws, as they stand to benefit the largest portion of the population. Most people are not criminals, and organizations are not people.

In conclusion, the United States government would stand to benefit almost every one of its citizens by incorporating laws like those found in the E.U.’s GDPR into its own privacy laws. The only parties that would not benefit are the organizations collecting the data, and there are far fewer organizations collecting data than there are people. Thus, consequentialist thought dictates that GDPR-style laws should be put into practice in the U.S.

Taking the real world into account, however, there are some flaws with this line of thinking. First, many U.S. organizations already do business in the European Union, meaning that they are already subject to GDPR. This suggests that similar regulations in the U.S. might be a waste of legislative time due to redundancy. The counter to this is that GDPR rules are subject to change and may not be enforced by E.U. authorities as strongly as they should be; with organizations facing pressure from two major governing bodies, they would be even less inclined to violate privacy protections. Secondly, the additional financial overhead added by these rules may prove too expensive to maintain, especially for smaller domestic organizations like local governments.
To this argument I have no effective counter, only suggestions. I would hope that the federal government would fund local governments so that they can afford to adhere to the changes, but in the case of private companies, I think that if they cannot afford to adequately protect data, then they should not be collecting and storing it in the first place. Whatever the case, the GDPR would help far more people than it would harm.