Case Analysis on Privacy

Jonathan Mukuye

1.4 Case Analysis on Privacy

In Siva Vaidhyanathan’s article, The Googlization of Everything, we’re presented with a compelling list of grievances against the implementation of Google’s “Street View” service in Google Maps. It’s a service that gives users a 360-degree view of intersections, streets, and neighborhoods at ground level. Although the service has great utility, it has faced a lot of opposition since its launch. In the many places it’s been implemented, there have been varying degrees of concern about and denunciation of the service because, in most people’s estimation, it has little to no concern for the privacy of the people it’s supposed to serve. It would seem that Google’s main goal in launching Street View was to provide a highly utilitarian service, easily accessible to all, that encompasses as much information as possible without violating Google’s default privacy policy. However, since Google’s privacy policy does not properly address the privacy desires of the people it seeks to serve, and thus does not entirely satisfy its role in society, according to the Confucian understanding of morality, Google’s implementation of this service was unethical. To properly fulfill its role, Google should have not only provided utility in its service, but also designed its product in such a way that users could consent to or decline having their images collected, as well as implemented a way to clearly communicate the consequences of its Street View service.

In James Grimmelmann’s article Privacy as Product Safety, the author is concerned with how protections for privacy in the realm of social software can be improved. He outlines his focus in this question: “Is the loss of privacy in social media something lawmakers ought to worry about and, if so, what should they do?” Although the sphere of the internet Grimmelmann is mainly focused on is social media, the insights he brings can be carried over into our discussion about Google, since it operates in such a similar role to the major social media sites. Grimmelmann’s main assertion is that the current, predominant model for understanding the role companies have in protecting people’s privacy needs to be expanded: from one that gives a disproportionate amount of attention to the actions users take, to one that focuses more on measures media companies can take to ensure users are equipped with the knowledge and ability to protect their privacy. The solution Grimmelmann proposes is to develop laws, in conjunction with current database protection policies, that place the burden of ensuring users can rationally make decisions about privacy in large part on social media service providers. Simply put, just as companies are constrained by product liability law to provide safety disclaimers about potentially dangerous products like a chainsaw or a pack of cigarettes, and are encouraged to pursue good product design to avoid harm to consumers or consumer misuse, social media platforms should likewise be accountable to their users to disclose the potential dangers of using their platforms and to design those platforms in such a way that users don’t stumble into the privacy issues they currently do.

In Vaidhyanathan’s article we see that everywhere Street View was launched, there were irrepressible, and in most cases very strong, concerns for privacy. Streets, houses, public and private property, and people on the street could, under Google’s policy, all be subject to having their image taken and used for the Street View service. In an effort to show concern for privacy, Google offers to remove or blur any image reported to be troubling to someone, but its default settings, in Vaidhyanathan’s assessment, “were set for maximum exposure.” One of the core principles of privacy as product safety as outlined by Grimmelmann is good product design. The design of Street View’s operation is wonderful for Google’s data collection, but ill-suited for addressing privacy concerns.

Another effort by Google to quell potential protest is the provision that lets anyone report an image as troubling, embarrassing, or revealing of personal information in order for it to be taken down. Grimmelmann writes that “good product design makes consequences predictable.” If an image ends up on Street View, people often have no idea what the consequences of having their image or property captured could be. The images can potentially be seen by anyone with internet access, which could have even broader implications for what happens to them after that. Google makes no effort to inform people about the potential consequences of the Street View service, nor to design it in such a way as to address those concerns.

When people or governments expressed their concerns for privacy, Google’s response seems to have been to do as little as possible to appease the dissatisfied public without disrupting the way Google Maps was operating, rather than redesign its service around the needs people expressed, which, when judged by the model Grimmelmann gives us, is a dangerous course of action. In Canada, for example, where privacy laws were stricter than in the places where Street View had already launched, Google discerned that the technology would not readily be accepted, so it announced that Street View would be modified to blur faces and license plates, when in reality that feature was already operating in Street View. This is only one example of several where Google prioritized the utility of its Street View service over designing a product that addresses the privacy concerns of the people affected by it.

Google has a responsibility as a major service provider to design a good product. This means not only providing convenient goods and services, but also addressing the privacy needs of its consumers and anyone else affected by its services. We can see through the contributions Grimmelmann provides that Google falls short of doing that, because its product is not designed to deliver its service while protecting the privacy needs of its consumers. Not only that, but it doesn’t provide a means for those affected to fully understand the consequences of Google collecting and using their images for this service, which Grimmelmann tells us is essential for good product design. Because of this, we see that under the Confucian understanding of morality Google fails to do the right thing, because it fails to entirely fulfill its role. It provides a useful service, but fails to meet the privacy desires of the people it serves. Google should have implemented a way to fully disclose to people that their images were going to be taken, provided a means to opt out, and let those who consented know the full ramifications of Google’s intentions for the images taken. By doing this, Google would have satisfied both aspects of its role in society and thus done what is right in the lens of the Confucian ethic.

In Luciano Floridi’s book, The Fourth Revolution: How the Infosphere is Reshaping Human Reality, the author explores the different ways privacy has been thought of throughout history as our technologies for communication have evolved. Floridi establishes that as information and communication technologies (ICTs) become more advanced, the difficulty of obtaining information (informational friction) decreases. Because of the rapid pace at which ICTs are advancing and informational friction is decreasing, old models of thinking about privacy are growing antiquated and are no longer sufficient for the informational sphere we live in today. He submits to the reader a new model for privacy, one designed for an environment saturated with information and communication technologies. He asserts that privacy has a self-constitutive value, meaning that each person is constituted by, or holds their identity in, their information. Thinking about privacy this way means that an act against a person’s privacy would be treated like aggression toward that person, rather than as theft of their property or merely in terms of the negative consequences of a privacy breach.

Using this definition of privacy, we see clearly that Google violated many people’s privacy in the way it implemented Street View. Street View gathers images first and worries about matters of privacy later, and offers no avenue for seeking permission before images are taken. Instead, steps like blurring faces and sensitive information are taken after images are collected, without consent and in many cases without knowledge that the images are even being taken. Often, after blurring faces, license plates, and other things Google considers personally identifiable information, the elements left in an image can still be used to identify people, at least by those familiar with them. Under Floridi’s model for understanding privacy, Google has already violated privacy by this point, because it is capturing personal information, which under the self-constitutive model is an act of undesired capture of a piece of one’s own identity; Google then places that information on the internet for millions to access, further increasing the severity of the offense toward privacy. Doing this, in the lens of the privacy model Floridi submits to us, is equivalent to taking pieces of a person’s identity, or one’s “self,” and posting it for the world to see without their consent.

The actions taken by Google clearly fail to show an ethical adherence to privacy concerns under the self-constitutive model of privacy. Because Google has a responsibility to its consumers and others affected by its service to provide utility in its product without undermining anyone’s right to privacy, Google fails to properly fulfill the role it has in society, and so fails to do the right thing by Confucian standards. In order to do the right thing, Google should have implemented a system for obtaining consent to image collection before actually going out and collecting pictures for Street View, and honored any rejections of its proposals. By doing this, Google would have effectively addressed privacy concerns under the self-constitutive model and properly fulfilled its role as a service provider.

In summary, Google failed in the avenue of privacy when launching its Street View service. By the self-constitutive model of understanding privacy, Google has committed offenses toward the identities of many people who were minding their own business. By the product safety understanding of privacy, Google falls short because of its reckless implementation and its failure to communicate to users the consequences of Google’s actions with their images. In doing this, Google created an incredibly useful service, but because it sacrificed the privacy concerns of the people it has a role to serve, Google fails to fully satisfy its role in society, and thus acted unethically. The company should have designed the service in a way that allows people to consent to or decline having their image, or their property’s image, taken, and communicated to everyone involved the full potential implications of having the images on Street View. There are those who might object and say that Google has no duty in society to conform to anyone’s privacy concerns as long as it doesn’t break the law. If that were true, then Google would have done nothing wrong. However, as a service provider, Google’s role is to meet people’s needs not only for utility in the service it provides, but also for respect of their rights.