Case Analysis on User Data

The need to regulate data becomes a more pressing issue in the ever-evolving digital age. While many countries are currently working on plans to improve data privacy, the European Union’s GDPR is the most well-known and influential framework established by any governing body. In Palmer’s explanation of the General Data Protection Regulation, we learn how the plan works. It was established to give residents of the EU greater data privacy and more influence over what happens with their data. The plan ensures that companies can only harvest data legally in a limited sense, but it also holds the companies that harvested the data accountable for any data breaches or misuse of the gathered data. The GDPR applies to any company that does business with the EU in some way, which spreads the plan’s influence across the entire world as a means of better protecting data security. While the plan cannot stop data leaks, it is set up to significantly penalize the misuse of any form of leaked data. In addition, it provides a safer internet experience for EU citizens and puts control of their data back into internet users’ own hands. Furthermore, the plan’s most significant benefits are the right to be informed, the right to erasure, the right to rectification, and the right to restriction of processing. In this case analysis, I will argue that the consequentialist tool shows us that the United States should follow Europe’s lead, because the protection provided to everyday citizens and the better business practices it encourages should be commonplace in the digital age, considering their positive benefits to users.

Zimmer’s article “But the data is already public” examines the extensive amount of information that can be pulled by scraping public profile data. However, the more significant flaw came when the researchers attempted to remove identifying information. The argument was made that, due to the data collected, certain students in the study could not retain anonymity and, therefore, could be negatively impacted by the study. The way this data was released, clearly unable to protect anonymity, shows the immediate benefit of the GDPR, whose right to data erasure is intended to counter exactly this problem. While it is undeniable that the research team acted in good faith in trying to remove the identifying data, they could not accomplish this. At this point, the GDPR would allow students who felt that the dataset contained an unmistakable identifying feature pointing to them to invoke the erasure of that data and have it removed from the study. The inherent issue here is that the dataset becomes incomplete when an individual’s records are removed. However, the partial dataset still provides valuable data to many researchers while refraining from harming individuals. This also would have allowed some data to remain intact and usable instead of the entire dataset being withdrawn from the public. Palmer’s discussion of the case pointed to the GDPR’s idea of data protection by design.
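
Before turning to that idea, it is worth making the erasure mechanism concrete. The sketch below, in Python, shows roughly how a research team might honor an erasure request by dropping one participant’s records while leaving the partial dataset intact; the field name participant_id and the record layout are hypothetical assumptions for illustration, not the T3 study’s actual schema.

```python
# Minimal sketch of honoring a GDPR-style erasure request.
# The field name "participant_id" and the list-of-dicts layout are
# illustrative assumptions, not the T3 study's actual schema.

def erase_participant(dataset, participant_id):
    """Return a copy of the dataset with one participant's records removed."""
    return [record for record in dataset
            if record["participant_id"] != participant_id]

# Example: participant 1042 invokes their right to erasure.
dataset = [
    {"participant_id": 1041, "major": "biology", "politics": "independent"},
    {"participant_id": 1042, "major": "physics", "politics": "liberal"},
    {"participant_id": 1043, "major": "history", "politics": "conservative"},
]
partial_dataset = erase_participant(dataset, 1042)
# The remaining records stay usable for research; only the
# requesting individual's data is gone.
```

The point of the sketch is that erasure is surgical: one individual regains privacy while the rest of the dataset remains available to researchers.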

While data protection by design was intended for new products and technologies, it clearly would have benefitted the designers of the “Tastes, Ties, and Time” (T3) study. The T3 team would have set out from the beginning with the experiment designed to protect uniquely identifiable data, for example by placing any participant who had one clearly defining feature into a generic category labeled “other.” While it can be argued that the dataset was incredibly beneficial to researchers, this claim needs to be viewed through the lens of consequentialism. When looking at the negative impacts of T3’s release, there is a disproportionate amount of harm done compared to its benefits. The dataset can reveal specific individuals’ political preferences and sexual orientations, which could result in the ostracization of those individuals from certain social circles or, in some cases, even mistreatment by others. While the dataset’s potential could be beneficial, the other aspect to consider is that this is only one instance of a study like this. For the dataset to become scientifically valid, this data-scraping experiment would need to be replicated with multiple other college classes in other states across the country. With the existing concern that a small group of people could be negatively affected at one college, that number rises dramatically once the dataset is gathered at enough colleges for the results to generalize. Consequentialism holds that every action is good or bad based on whether its outcome is good or bad. While this is open to debate, it is fair to reason that risking the safety of any group of individuals by failing to protect their right to data privacy is a bad outcome, meaning the action itself is bad. If the United States were to adopt something like Europe’s privacy laws, then even if the experiment failed to protect the anonymity of its participants, they could still call for the data to be removed, giving them their privacy back while leaving the rest of the results intact. By removing the potentially damaging data from the survey, whether through the team’s actions or the participants’, the experiment’s outcome becomes positive, thus making the study beneficial, thanks to the introduction of privacy laws like the EU’s.
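
The category-based protection by design described above can also be sketched in code. The following is a minimal illustration, under assumed attribute names and an assumed threshold k, of generalizing any attribute value held by fewer than k participants into an “other” bucket before release; it is a simple form of the suppression used in k-anonymity, not a claim about what the T3 team actually built.

```python
from collections import Counter

# Sketch of "data protection by design": before release, any attribute
# value shared by fewer than k participants is generalized to "other",
# so no single clearly defining feature can point to one individual.
# The attribute names and the threshold k are illustrative assumptions.

def generalize_rare_values(records, attribute, k=5):
    """Replace attribute values held by fewer than k records with 'other'."""
    counts = Counter(record[attribute] for record in records)
    return [
        {**record,
         attribute: record[attribute] if counts[record[attribute]] >= k
         else "other"}
        for record in records
    ]

# Example: one physics major among six biology majors.
records = [{"major": "physics"}] + [{"major": "biology"}] * 6
released = generalize_rare_values(records, "major", k=5)
# The lone physics major, who would otherwise be identifiable,
# now appears as {"major": "other"} in the released data.
```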

Buchanan’s article “Considering the ethics of big data research: A case of Twitter and ISIS/ISIL” analyzes the impact of the IVCC model’s monitoring of Twitter for ISIS supporters and makes a positive case for data mining. At the same time, it calls into question the ethics of gaining this data in the first place and presents that as the most significant concern in this case. The IVCC model takes data collected by a secondary data-mining company and uses it to determine whether a profile matches that of a potential ISIS supporter. The goal is to understand which communities fall victim to extremist groups and beliefs while also attempting to answer why. While it is true that this monitoring system is beneficial in helping to identify ISIS supporters before they can spread their beliefs, Buchanan also calls into question the data-mining companies that provide the data that runs the IVCC. As stated by Buchanan, these technologies “can be used to identify ISIS supporters as readily as they can identify WalMart shoppers or political dissidents.” This shows that the underlying issue lies not with the IVCC model but with the fact that the data-mined content is not controlled by those looking to profile ISIS supporters. Instead, these companies collect data on anyone online, and outside sources can use that information however they please. Buchanan’s case shows the impressive and beneficial uses of harvested data, proving that not all of it is used negatively. Still, it argues that such technologies are one small part of the outcome of data harvesting and should be viewed as one positive when weighing the benefits and negative impacts of data mining. This is where the tool of consequentialism comes into play to determine whether the overall benefits outweigh the problems. If the data gathered only impacted ISIS supporters, it could be argued that, by supporting or even joining a terrorist group, they would forfeit their data privacy; however, this is not the case, as this information is gathered from all individuals. At the same time, if the United States implemented data privacy laws, those laws could allow these supporters to hide their identities and continue to spread their dangerous beliefs without being identified. Considering this issue through the lens of consequentialism makes it more difficult to gauge whether data privacy laws in the United States would still be a net benefit. Everything considered, this is an ethical debate, and in the USA, individuals give up certain liberties when their decisions threaten the country, as is the case with terrorist groups and their supporters. If these privacy protections are voided for people such as those identified by the IVCC model, consequentialism makes it evident that privacy models similar to the GDPR would benefit the country extensively, as the negative impacts would be small compared to the benefits. As mentioned earlier, data-mined information can target anyone. It can sway people in decisions such as where to shop and who to vote for, and in a free country, it only makes logical sense to remove anything that breaches privacy and, furthermore, deludes people into thinking they are making their own decisions.
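
Buchanan’s article does not publish the IVCC model’s internals, but the general technique it represents, scoring a profile against a set of indicator features, can be sketched roughly as follows. Every feature name, weight, and threshold below is invented for illustration; nothing here reflects the real model.

```python
# Rough sketch of rule-based profile scoring in the spirit of the
# technique Buchanan describes. The indicator features, weights, and
# threshold are invented for illustration; the real IVCC model's
# internals are not described here.

INDICATORS = {
    "follows_flagged_accounts": 0.5,
    "shares_flagged_hashtags": 0.3,
    "posts_flagged_media": 0.2,
}
THRESHOLD = 0.6

def score_profile(profile_features):
    """Sum the weights of the indicators present on a profile."""
    return sum(weight for name, weight in INDICATORS.items()
               if profile_features.get(name))

def flag_for_review(profile_features):
    """Flag a profile for *human* review; a score is not a verdict."""
    return score_profile(profile_features) >= THRESHOLD

profile = {"follows_flagged_accounts": True, "shares_flagged_hashtags": True}
print(flag_for_review(profile))  # True: 0.5 + 0.3 exceeds the threshold

# Note that the same machinery could just as easily score "WalMart
# shoppers": the ethics lie in what data feeds it, which is Buchanan's point.
```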

Consequentialism shows that Europe’s introduction of data privacy protection laws was a favorable decision, as it improved the everyday life of most law-abiding citizens. The GDPR’s impact on the daily internet usage of EU citizens gives clear evidence of the need to implement similar laws in the United States. While it is true that the United States is a different country with a different culture and a population that acts uniquely, the GDPR has already worked its way into every American business that operates in the EU, and there have been few, if any, reports of adverse impacts. The difference is that a US plan would extend these benefits directly to individuals by giving them more rights, and it would require US-only companies to conform to the same laws. At the same time, more extensive data privacy rights may make it more challenging to identify dangerous individuals online; however, we do not need to apply this plan in the same way the EU did. Law enforcement could retain the right to mine data on individuals given probable cause, with the only difference being that private corporations would be barred from accessing that same information for personal gain. All in all, the benefits of a set of data privacy laws show clear positive impacts for the future of a digital world, proving that their implementation is a choice the country should seriously consider.