Topic: Data Ethics
Tool: Consequentialism
Question: Should the United States adopt something like Europe’s new privacy laws?
In this case analysis I will argue that Consequentialism shows us that the United States should not follow Europe’s lead because it simply does not need to. It already has a system in place that benefits those it is meant to benefit. I will argue that, from a Consequentialist’s point of view, the current system does little to protect individuals’ privacy rights, and that this is acceptable: it is a feature of the system, not a bug. Under the GDPR, the definition of personal data was expanded to include IP addresses and sensitive personal data such as biometric and genetic data; existing legislation already covered names, addresses, and photos. This is obviously better for the average user. Under the GDPR, companies in the UK have a much stronger legal obligation to protect their users’ data than companies in the United States, and penalties for violating that obligation can be harsh. This is obviously a bad thing for the companies. Ethically, it might seem like a no-brainer for companies to want to do everything in their power, to take every precaution, to protect their users’ data. But from a Consequentialist perspective, the cost of doing so is too high. For companies in the United States, the cost of compliance is too great, and agreeing to those regulations would put a fraction of their profits at risk. Companies in the United States are already doing the absolute bare minimum to protect user data, and as far as their profits are concerned, that is enough. Consequentialism can show that new regulations would be detrimental to the main goal of any American company: generating the maximum amount of profit.
In the first article, Zimmer makes the distinction between personally identifiable information (PII) and information that would violate your privacy if released. This is a very important distinction going forward. National laws regarding privacy in the United States are vague and open to flexible legal interpretation. Companies, including social media networks and retail service providers, collect data from their users and are permitted to sell this information to advertising and marketing firms, as well as any other interested buyer, as long as the data has been “de-identified,” meaning that the buyer should not be able to identify individual users based on that data. This works well in theory but not in practice, as there are methods of re-identifying anonymous data. “Re-identification algorithms are agnostic to the semantics of the data elements. It turns out there is a wide spectrum of human characteristics that enable re-identification: consumption preferences, commercial transactions, Web browsing, search histories and so forth.” (Zimmer 2010) As technology advances, the line between “personally identifiable” and “quasi-identifiable” attributes will continue to blur; both will simply be data that can be used to identify a user.
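The re-identification attack described above can be illustrated with a toy sketch: link a “de-identified” dataset to a public, named one by joining on quasi-identifiers. The datasets, names, and columns below are entirely fabricated for illustration; real attacks link far larger datasets, but the join logic is the same.

```python
# Toy illustration of re-identification: joining a "de-identified" dataset
# to a public, identified one on quasi-identifiers. All records fabricated.

deidentified_browsing = [
    {"zip": "23529", "birth_year": 1990, "gender": "F", "search": "knee surgery"},
    {"zip": "10001", "birth_year": 1985, "gender": "M", "search": "tax attorney"},
]

public_voter_roll = [
    {"name": "A. Smith", "zip": "23529", "birth_year": 1990, "gender": "F"},
    {"name": "B. Jones", "zip": "90210", "birth_year": 1972, "gender": "M"},
]

def reidentify(anonymous_rows, identified_rows, keys=("zip", "birth_year", "gender")):
    """Link anonymous rows to named rows whose quasi-identifiers match exactly."""
    matches = []
    for anon in anonymous_rows:
        for known in identified_rows:
            if all(anon[k] == known[k] for k in keys):
                matches.append((known["name"], anon["search"]))
    return matches

# A unique combination of quasi-identifiers ties a name to a search history,
# even though the browsing data contained no "identifying" field on its own.
print(reidentify(deidentified_browsing, public_voter_roll))
```

Note that no single field in the browsing data is PII under US law; it is the combination that identifies, which is exactly the gap the quoted passage points to.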
Zimmer claims that our current privacy protection laws “rely on the fallacious distinction between identifying and non-identifying attributes.” He concludes by saying, “this distinction might have made sense in the context of the original attack but is increasingly meaningless as the amount and variety of publicly available information about individuals grows exponentially.” (Zimmer 2010) He is implying that our laws are not written well enough to protect user privacy. National privacy laws are vague, and there are few restrictions on how personal data is collected, shared, and used. Personal information is extremely valuable to marketers looking to promote their products and services to a targeted audience. These marketers and advertisers are willing to pay a premium for this information because it is cost-effective for them to do so: targeting a specific audience with a specific product or service is much more efficient than simply putting your product in front of the masses. In the United States, IP addresses are not considered PII, so companies can use your session cookies to advertise products to you even after you leave their website.
From a Consequentialist perspective, companies in the United States are doing enough to protect their users’ PII. Any further regulation would be detrimental to their goals. Additional regulations would cost more money and use more resources, and non-compliance penalties would be severe. In the case of the GDPR, these fines can be upwards of twenty million euros or 4% of global annual turnover, whichever is higher. American companies are not interested in that. These companies are doing the minimum required to protect user data while still being able to profit from that data. This is by design. De-identification only provides a weak form of privacy, but under current US laws, it is all that is required. Data privacy is not a new concept. If the government were actually concerned with user privacy, it would have taken some action to regulate companies’ practices by this point. Tech companies and their lobbyists are doing enough to make sure that this change never comes to the United States. A Consequentialist, from a data company’s point of view, would agree that this is a good thing.
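The fine structure mentioned above reduces to a one-line rule: the upper tier is the greater of twenty million euros and 4% of global annual turnover. A minimal sketch, with hypothetical turnover figures:

```python
# GDPR upper fine tier: EUR 20 million or 4% of global annual turnover,
# whichever is higher. Turnover figures below are hypothetical examples.

def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Ceiling of the GDPR's higher fine tier for a given global turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Below EUR 500M in turnover the flat EUR 20M floor dominates;
# above it, the 4% rule takes over.
print(max_gdpr_fine(100_000_000))    # EUR 20 million
print(max_gdpr_fine(2_000_000_000))  # EUR 80 million
```

The flat floor is what makes the penalty bite even for smaller firms, while the percentage rule scales it for the largest companies.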
In the second article, Buchanan discusses the ethics of big data research methodologies. She is responding to a study that used the IVCC model to identify ISIS/ISIL supporters among Twitter users. The IVCC method “enables greater detection of specific individuals and groups in large data sets, with its enhanced capabilities to identify and represent following, mention, and hashtag ties.” (Buchanan, 2017) Essentially, the researchers were able to train an algorithm to recognize ISIS supporters among Twitter users by analyzing publicly available user activity data. Buchanan worries that this type of large-scale data mining may be unethical and a potential misuse of user data. One of her main arguments is that, while Twitter users agree to a terms of service agreement in which they forfeit ownership of their data, there is still a reasonable expectation of privacy between the platform and the user. She argues that “one may implicitly agree to one’s data source being used for marketing purposes while that same person would not want their data used in intelligence gathering.” (Buchanan, 2017) Her point is that the users do not have a choice. Once they accept Twitter’s mandatory terms of service, they do not get to pick and choose who uses their data or for what purpose it is used. Users do not get the opportunity to consent to each individual transaction after their initial general approval.
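The “following, mention, and hashtag ties” quoted above are straightforward to extract from public activity data, which is part of why this kind of research is so easy to do at scale. This is not the actual IVCC implementation, only a sketch of the tie-extraction step; all usernames and tweets are invented.

```python
# Sketch (not the actual IVCC code) of extracting mention and hashtag ties
# from public activity data. All usernames and tweets are fabricated.

import re
from collections import defaultdict

tweets = [
    {"user": "user_a", "text": "Agree with @user_b about #topic1"},
    {"user": "user_b", "text": "More thoughts on #topic1 and #topic2"},
    {"user": "user_c", "text": "@user_a have you seen #topic2?"},
]

def extract_ties(tweets):
    """Map each user to the accounts they mention and the hashtags they use."""
    mentions = defaultdict(set)
    hashtags = defaultdict(set)
    for t in tweets:
        mentions[t["user"]].update(re.findall(r"@(\w+)", t["text"]))
        hashtags[t["user"]].update(re.findall(r"#(\w+)", t["text"]))
    return mentions, hashtags

mentions, hashtags = extract_ties(tweets)
# Users who share hashtags or mention ties cluster together; a classifier
# trained on such ties is what lets researchers flag groups at scale.
```

Every input here is public, which is exactly Buchanan’s point: no hacking or special access is needed, only a repurposing of data users surrendered for a different reason.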
Her point is extremely important when determining whether or not to change our current policies on data privacy. What if Twitter users, or social media users in general, had the opportunity to consent to every transaction involving their data? Or had the option to opt out of collection completely? Currently, Twitter has different privacy policies based on the user’s location. Users inside the United States are governed by Twitter’s head office in San Francisco, under US law, while Twitter users outside the United States are governed by Twitter International, which is based in Ireland. Ireland is part of the European Union, and therefore all account information is subject to GDPR compliance regulations. The result is two different privacy policies offering two different levels of protection. In order to be GDPR compliant, “organisations (that collect data) have to ensure that personal data is gathered legally and under strict conditions, but those who collect and manage it are obliged to protect it from misuse and exploitation, as well as to respect the rights of data owners – or face penalties for not doing so.” (Palmer, 2019) The GDPR holds companies responsible for user data. For the most part, the United States government does not. If companies already have location-based data security practices in place, why wouldn’t it make sense for them to extend those superior practices to users in the United States?
Because, again, from a Consequentialist perspective, they are already doing enough. Again, these companies are doing the minimum required to protect user data while still being able to profit from that data. Again, this is by design. Even though most of the western world’s big technology companies are American, the US has still not passed a law requiring its companies to abide by meaningful data-privacy protections. The US government and US companies are taking a very Consequentialist stance on these privacy issues: they are more concerned with the end goal, maximizing their profits, than they are with the security and privacy of user data.
The ethics of privacy laws change depending on how you look at them. From the average person’s perspective, stronger regulations would be a good thing: people would not have to worry about their data being misused, and it would not change their experience at all. From the American company’s perspective, however, these changes would mean increased compliance costs and potentially non-compliance costs as well. From a Consequentialist’s perspective, the ethics of user data privacy policy are not important. Consequentialists would not ask themselves whether these new protections were right or wrong; they would not care. They are fine with the current rules because they benefit from them, providing them with the shortest, most efficient path to their goal. From a Consequentialist perspective, the US should not adopt Europe’s privacy laws. On the other hand, tighter data privacy regulations benefit the consumer. The GDPR provides Europe with some of the best, most comprehensive data protection regulations in the world: Europeans receive increased data privacy and security as well as more transparency about how companies collect, use, and share their data. Personally, I would support similar regulations in the United States.