Case Analysis on User Data

Generally, the GDPR privacy regulation attempts to benefit both ends of the regulatory spectrum. It offers organizations a reliable, modern digital economy on one hand and enforces the data and information privacy protections of EU citizens on the other. For the purposes of our argument, we will start by noting that the ethical dimension of this regulation rests on the will and choices of the EU citizen, who must agree to the terms and conditions of data storage and processing, on one side, and on organizations' strict compliance in legally gathering and protecting consumer data and information on the other. Based on Palmer's article, and applying the ethical tool for this work, deontology, I argue that the United States should adopt privacy regulations similar to the EU's GDPR. When individuals are allowed to control their information and are asked to give consent for everything that involves their data, I believe a regulation of this kind confers a dignity-based moral status, one that minimizes the ability of any entity other than the owner of the data to decide what happens to it.

Here, Zimmer raises at least two theories, both of them moral theories. One is the harm-based theory of privacy, which is related to consequentialism: people may act well purely out of concern for how much harm a wrong action would cause. Its source is the calculation of the end result rather than a moral obligation centered in the dignity of whoever may be involved. Applied to our particular subject of privacy, the harm-based theory says, for example, that people may protect the privacy of individuals only because failing to do so may bring harm to others or to themselves, perhaps a fine or jail time. The other concept, the dignity-based theory, also known as Kantian deontology, evaluates an action from a duty-based perspective. This theory holds that the core of morality lies not only in making sure that an action does not harm people, but also in the moral duty to treat people as independent individuals worthy of dignity. On the deontological view, it is possible to morally wrong people even when the consequences of an action are not harmful in themselves.

Both theories can be detected in the GDPR. On one hand, the harm-based theory can be seen where the regulation specifies in detail what counts as personally identifiable information (PII) and what harm a breach of PII can bring to an individual's life. The GDPR also reminds organizations of the reputational and financial costs a breach of consumer data may carry. For example, Google incurred a financial penalty of about 50 million euros as a result of a regulatory breach. This clearly pushes Google to take individuals' data seriously, whether because it understands the harm a breach may bring to a consumer or because it fears the penalty it may face in the future; either way, it is harm-based reasoning. On the other hand, the dignity-based theory is reflected in the GDPR's requirement of the consumer's consent at every point where data is processed or stored. The GDPR also dignifies the consumer by requiring a company to notify them of any breach of their information.

Deontology, the ethical tool for this analysis, is, I think, closely applied in the GDPR. As a concept, deontology tries to go beyond the good result of an action; it asks what the intentions behind the action are. As we know, companies depend on many personnel with different moral standards, and regulations by themselves cannot guide the moral compass of every employee. Therefore, by narrowing these choices down to the individuals themselves, the owners of the data, who are closest to the decision and best placed to act from the best intentions toward their own data, the GDPR embodies, I believe, a sound deontological perspective.
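The consent mechanism described above can be sketched as a gate that any data-processing step must pass: processing is permitted only for purposes the data owner has explicitly granted. This is a minimal illustration in Python; the class, purpose names, and user identifiers are hypothetical and are not drawn from the regulation or from any real compliance library.

```python
class ConsentRegistry:
    """Hypothetical sketch: records, per user, which processing
    purposes the data owner has explicitly consented to."""

    def __init__(self):
        self._grants = {}  # user_id -> set of consented purposes

    def grant(self, user_id, purpose):
        self._grants.setdefault(user_id, set()).add(purpose)

    def revoke(self, user_id, purpose):
        # Consent can be withdrawn at any time.
        self._grants.get(user_id, set()).discard(purpose)

    def is_allowed(self, user_id, purpose):
        return purpose in self._grants.get(user_id, set())


def process_data(registry, user_id, purpose):
    # The gate: processing halts unless the owner consented
    # to this specific purpose (consent is purpose-bound,
    # not a blanket permission).
    if not registry.is_allowed(user_id, purpose):
        raise PermissionError(f"no consent from {user_id} for {purpose!r}")
    return f"processing {user_id}'s data for {purpose!r}"


registry = ConsentRegistry()
registry.grant("alice", "order_fulfilment")
print(process_data(registry, "alice", "order_fulfilment"))
# process_data(registry, "alice", "marketing") would raise PermissionError
```

The design point mirrors the essay's argument: the decision sits with the data owner, and the processing code merely checks that decision rather than making it.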

The main ethical concern raised in Buchanan's paper relates to the iterative vertex clustering and classification (IVCC) data-research model. This methodology uses various connecting methods, for example tracing hashtags, to identify individuals related to a certain subject or group. The ethical question is not whether to oppose the use of this technology outright for identifying terrorists or national-security threats, but where such methodologies can lead. Is it ethical to violate an individual's liberty in the name of national intelligence and national security? If it is allowed, then to what extent will these methodologies be used? Who will be allowed to access and use the data gained through them?
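The hashtag-tracing idea behind IVCC can be sketched as a graph problem: treat accounts as vertices, link any two accounts that used a hashtag in common, and group the connected components. The sample accounts and the function below are hypothetical illustrations only; the actual IVCC pipeline Buchanan describes is considerably more elaborate than this connected-components sketch.

```python
from collections import defaultdict

# Hypothetical sample data: account -> hashtags it has used.
posts = {
    "user_a": {"#topic1", "#topic2"},
    "user_b": {"#topic2"},
    "user_c": {"#topic3"},
}

def cluster_by_shared_hashtags(posts):
    """Group accounts into connected components, linking any two
    accounts that used at least one hashtag in common."""
    # Invert the mapping: hashtag -> accounts that used it.
    by_tag = defaultdict(set)
    for user, tags in posts.items():
        for tag in tags:
            by_tag[tag].add(user)

    # Union-find over accounts with path halving.
    parent = {user: user for user in posts}

    def find(user):
        while parent[user] != user:
            parent[user] = parent[parent[user]]
            user = parent[user]
        return user

    # Every account sharing a hashtag joins the same component.
    for users in by_tag.values():
        users = sorted(users)
        for other in users[1:]:
            parent[find(other)] = find(users[0])

    clusters = defaultdict(set)
    for user in posts:
        clusters[find(user)].add(user)
    return sorted(clusters.values(), key=sorted)

# user_a and user_b share "#topic2", so they land in one cluster;
# user_c stands alone.
print(cluster_by_shared_hashtags(posts))
```

Note that nothing in this sketch asks for the account owners' knowledge or consent; the linkage emerges purely from publicly observable traces, which is exactly the tension this essay raises.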

Applying the iterative vertex clustering and classification (IVCC) methodology to our case, I think that even though the GDPR can be considered a robust data-protection regulation, IVCC can still go beyond the GDPR's core concepts. The method does not require the individual's consent for its analysis; it can pick up a thread at one end and spread across the network to gather intelligence and information. To a large extent, IVCC stands in contradiction to the GDPR. How the data is collected, which can be without the data owner's knowledge; for what purpose the data is used, if the owner is ever notified at all; and who will use the data cannot be clearly stated. The method is not as clear and specific as the GDPR requires. In the end, the two concepts could hardly be further apart: while the GDPR is centered on protecting the privacy of EU citizens in a containing fashion, IVCC attempts to unveil, connect, and relate data into patterns in order to identify and classify, in a stripping fashion.

The actions taken under the GDPR regulation, Palmer's case, are necessary steps in securing and protecting individuals' private data and information. The concept of deontology is also nearly fulfilled, as the GDPR chiefly gives the data owner the power to consent to how organizations use their data. Consent and intention are related moral concepts. Because deontology locates goodness in the good intention behind an action rather than in the action itself, I find the consent required from EU citizens, the data owners, under the GDPR to be a deontological concept. On the other hand, I find the case of IVCC contrary to the concept of deontology. IVCC does not respect data given in good will by its owner, not to mention data harvested through its methodology without the owners' knowledge. Deontology locates goodness in good intentions; IVCC's intentions can be anything from economic to political interests.

In conclusion, I would summarize the GDPR's requirement of the user's consent for data processing and storage as a dignity-based theory of privacy. If users choose to allow their data to be used for a certain cause, that choice is a symbol of authority over their own data and hence dignifying. The dignity-based theory of privacy becomes the deontological theory when the owners of the data, those closest to it, decide for themselves, from their best intentions, who may use their data; who could intend better for the owners than the owners themselves? Finally, the GDPR's monitoring of companies and organizations through guidelines, compliance standards, and penalties for failure rests on the harm-based theory of privacy protection. This theory may be useful to deter companies and organizations from conducting irresponsible data-privacy practices. Many data companies are for-profit, and a penalty as large as 50 million euros is a big hole in a company's pocket; so if harm-based privacy protection, framed as not harming the individual user, cannot deter companies, such a large financial loss will.