The General Data Protection Regulation (GDPR) is European Union legislation, in effect since May 2018, that seeks to protect Europeans’ data. In my opinion, GDPR laid the groundwork for the rest of the world on what measures should be taken to protect individuals’ data through privacy and security regulation. Given the EU’s weight in the digital economy, GDPR affects organizations worldwide that handle the data of EU residents. As Palmer notes, the information we would expect to be personal, such as a name, address, or date of birth, is covered, but so are other datasets that could be used in combination to identify a specific individual. Palmer also mentions that when GDPR went into effect, several newspaper organizations blocked access from the EU, which shows that organizations had to stop and regroup to determine how they would better protect individuals’ data in order to comply. The key point is that companies must now determine how to mitigate the risks users face when handing their data over to organizations. I will argue that contractarianism shows the United States should follow Europe’s lead: it has been shown repeatedly this century that organizations mostly take a reactive approach to handling user data, rather than the proactive approach GDPR forces them to adopt. Under social contract theory, the people most at risk are those entrusting their data to these organizations. If, at the very least, the data of these people is protected, then the social contract can be beneficial to all.
Zimmer’s introduction notes that the T3 research team took steps to separate student names from the dataset. However, T3 merely delayed releasing other information that was personal and could also be used to identify the students who were the subjects of the study. It is apparent early on that the research team did not bring in external help to audit the data, which could have caught the pieces of data that were used to identify individuals at Harvard. We can deduce that a social contract was breached when the decision to collect data was left entirely to Facebook and Harvard: the students in the study had no idea their data was being collected. If the research assistants had reasoned in terms of a social contract, imagining themselves in the subjects’ position, they would likely have concluded that students would censor or leave out certain data on their profiles, or even delete their Facebook profiles entirely, knowing the data would be used in studies for decades to come. In this situation, a notice should have been displayed giving students the right to authorize the collection and distribution of their data for research.
Contractarian ethics calls for a set of rules that would be beneficial and allow an acceptable quality of life even for the most underprivileged of society. On that basis, the research assistant students would not wish to be in the place of the regular students whom they were taking advantage of. A more beneficial social contract in this case would include parameters to protect the privacy of all and would, at the very least, give users notice of what their data would be used for.
The main argument from Kaufman, the lead of the T3 research team, is that “our dataset contains almost no information that isn’t on Facebook.” The problem is that, while the information was on Facebook, that does not mean everyone had the right to see it. Students who set their privacy settings to allow only “friends” to view their data were outed by research assistants who had privileged access simply by being “friends” with the subjects. Kaufman also addressed the ethics of his team’s research, posing the question, “Would you require that someone sitting in a public square, observing individuals and taking notes on their behavior, would have to ask those individuals’ consent in advance?” There are quite a few problems with this stance. The main one is that individuals plainly behave very differently in public than on social media. A group of students out in public may talk about some of their interests, or possibly the weather. That same group of students in a library, or another private setting, might have a far more personal conversation about politics, religion, or other private matters. The same comparison applies to social media: when individuals make their profiles private, they may post personal information they trust will not be exposed to the general public.
Social contract theory would call for privacy rights for everyone in the hierarchy of society. In this instance, the T3 research team and the Harvard administration sat at the top, the student research assistants in the tier below, and the studied students at the bottom. The overstepping of privacy here shows that the students were left out of the bargain; a genuine social contract would have protected their rights much better.
On another note, T3 did put a pause on requests to view the data, much as publishers blocked EU citizens from accessing their online publications. The interesting thing is that all of this could have been avoided had the research team simply presented the students with the idea of the study and asked for their authorization to have their data collected. While this would likely have produced a rather different dataset, it would have been ethical from a contractarian viewpoint.
Buchanan introduces a core argument from the viewpoint of those who argue in favor of ethics and privacy and who therefore oppose mining data for the sake of national security. Buchanan notes that preventing such mining is effectively impossible given the amount of data individuals make public. Using the veil of ignorance, we can easily determine that being a government test subject lies well beyond what is acceptable in a social contract; yet that is essentially what happens when individuals have no control over who, or what, accesses their data. The algorithms that collect the data can do so in a myriad of ways, and Buchanan notes that big data science can detect patterns and anomalies across these datasets.
The key point about Twitter in this study is that Twitter users behave much differently than Facebook users. Twitter is a more open forum for users to interact with each other; most users acknowledge that their tweets could be seen by thousands of people they do not know simply because of a retweet. Still, there is a major difference between simply reading an individual’s tweet and mining it: analyzing it, comparing it to the user’s past tweets, and storing it for analysis alongside other users’ data.
With the protection of GDPR, users have more say in how their data can be used and accessed. As stated, individuals would hold different standards for access to their data depending on the actual purpose of the collection. For instance, many individuals operate a personal social media profile as well as a “meme” profile for humorous posts. Using data mining and analytics, the personal profile of the user who runs the “meme” profile could be determined from early tweets. The individual’s intent was to keep their private life separate from the “meme” profile; nevertheless, their privacy was compromised. A minimal sketch of this kind of account linking appears below.
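To make that risk concrete, here is a minimal, hypothetical sketch of one simple linking technique: comparing the word distributions of two accounts’ posts with cosine similarity. Every name and post in it is invented for illustration, and real de-anonymization draws on far richer signals (posting times, writing style, follower networks) than this toy example does.

    # Hypothetical illustration only: linking two accounts by comparing the
    # word distributions of their posts. All names and posts are invented.
    from collections import Counter
    from math import sqrt

    def word_counts(posts):
        """Build a bag-of-words frequency vector from a list of posts."""
        counts = Counter()
        for post in posts:
            counts.update(post.lower().split())
        return counts

    def cosine_similarity(a, b):
        """Cosine similarity between two bag-of-words vectors (0.0 to 1.0)."""
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    # Invented sample data: a personal account and a "meme" account whose
    # early posts happen to share distinctive vocabulary.
    personal_posts = ["my privacy law thesis is due friday", "back at the library again"]
    meme_posts = ["when your privacy law thesis is due friday", "me at the library again like"]
    unrelated_posts = ["top ten pasta recipes", "how to change a flat tire"]

    personal = word_counts(personal_posts)
    print(cosine_similarity(personal, word_counts(meme_posts)))       # noticeably high
    print(cosine_similarity(personal, word_counts(unrelated_posts)))  # near zero

Even this toy comparison shows how quickly supposedly separate accounts can be tied together, which is exactly the kind of secondary use that GDPR’s consent and purpose-limitation requirements are meant to constrain.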
Contractarian ethics would rule heavily in favor of the GDPR, which protects all EU residents. That is paramount to contractarian theory: every single member of society must be afforded enough basic rights and privileges to allow a decent quality of life.
One of the last, and most intriguing, points comes from Benigni, Joseph, and Carley. They pose the question, “are there 119,156 individual subjects, or are they a group, a collective data subject?” From a contractarian perspective, having one’s data consolidated into a group dataset is far preferable to having it collected to form a “profile” of a single individual. Data collected as a group can be compared to a census, in which individuals give demographic and other details about themselves. The crucial difference is that, in a census, individuals have the right to decline to give personal information, something vastly different from data collection in cyberspace.
I believe GDPR and contractarianism have the answer to some of Buchanan’s questions in his conclusion. While data scientists may view Twitter users as subjects, GDPR would help limit data access to those who willingly participate. Under a social contract, individuals would never agree to be mere subjects, but would instead be participants whose privacy is protected.
The GDPR has set the bar for how user data should be protected and established that consent must be given for how that data is used. Fortunately, GDPR has created a ripple effect for privacy rights. California enacted the California Consumer Privacy Act, which seeks to give Californians more control over their personal information, and Data Protection Officers, a core GDPR requirement for some organizations, are becoming increasingly commonplace in the United States. On the other hand, I recognize that, under contractarianism, it can be beneficial to give up some privacy rights in order to protect society as a whole. If public tweets could be analyzed to identify ISIS members or sympathizers, many people would support that for the added sense of protection and security. And if users always had to consent to data access and collection, many criminal cases would lack evidence, and some criminals might even evade prosecution.