Siva Vaidhyanathan analyzes the reaction to the introduction of Google Street View in his selection “The Googlization of Everything.” Google Street View, a service that captures 360-degree street-level photographs of streets and intersections throughout the world, was met with negativity and hostility upon its initial rollout. Google built the service by dispatching “Google mobiles,” cars with roof-mounted cameras that photograph their surroundings as they travel along the road. In nearly every country where it launched, Street View faced pushback and backlash, with critics charging that the service was too intrusive and failed to address, account for, or respect the privacy concerns of the affected public. To comply with privacy rules when the service opened in Canada, Google began blurring faces and license plates, and it extended the blurring procedure to all of its previous and future images. Even with these changes, however, Canadian privacy advocates still refused to embrace the software, arguing that people have identifying marks other than their faces that were not blurred out; these objections in turn helped convince the Canadian government to halt the use of Google Street View. Furthermore, when the software was introduced in Europe, opposition in countries such as Germany was so intense that many people posted signs on their homes requesting that GSV not be used and that no images be taken of them. In Greece, the local government even forbade Google from operating in the streets on the grounds that there was no effective system in place to notify residents when imaging would take place. Upon Google Street View’s arrival in Japan, IT professional Osamu Higuchi explained features of Japanese culture in an open letter to Google employees.
In this case analysis, I will use the consequentialist, or “utilitarian,” framework to argue that Google should have refrained from rolling out Google Street View until public privacy concerns were sufficiently addressed and further protocols were implemented.
Firstly, to avoid the backlash against Google Street View, Google could have taken several courses of action to ensure that the public would not suffer ill effects from its software. Google should have recognized the easily identifiable privacy concerns surrounding the service and implemented multiple testing stages, including alpha and beta testers drawn from an unbiased pool of applicants. This action would have ensured that the software contributed to the greater good of society, as ethical reasoning from a consequentialist standpoint requires. One of the most well-regarded privacy theories, the “reductionist interpretation,” is described by Luciano Floridi. On a reductionist view, the value of privacy is grounded in the unfavorable outcomes that follow from its loss. Regarding the similarities between the reductionist view and the definition of consequentialism, Floridi states that “privacy is a utility” and that it should preserve human dignity, among other things. This aligns with the consequentialist view that any decision should be made with the greater good of humanity in mind, a view Google did not initially consider when implementing Google Street View.
The reductionist interpretation also holds that informational privacy must be protected because of the consequences that a lack of protection can have on an individual, including personal distress and social unfairness. In the article, Siva Vaidhyanathan notes that although face-blurring technology had been implemented, other identifying factors, in his case the easily recognizable dog he walks daily, could give away his identity and unfairly implicate him in the illegal gambling rings present in the area where he walks his dog. This is just one example of how reductionist concerns coincide directly with consequentialism. Had Vaidhyanathan been unfairly implicated in a gambling ring, his social reputation and potentially his professional standing would have been tarnished. This dilemma multiplies exponentially when we consider the many countries and large populations in which Google Street View is used in multiple capacities. Google did attempt to rectify the privacy issue by implementing a reporting system that allowed users affected by such breaches to flag questionable or identifying images; the images in question would then be reviewed and removed if found to violate privacy standards. However, the system arguably was not good enough: the time that could lapse between the posting and the discovery of the photos could already have caused a breach of privacy, as well as social or personal harm to the individual in question. This example is not hypothetical; it happened to a couple who attempted, but failed, to sue Google for breach of privacy over pictures posted of their private residence. Although the lawsuit failed, it set in motion questions of ethics regarding what counts as private individual data and how exposing it can harm users.
Furthermore, this raises the question: what response time for these image removal requests should be considered legally adequate on Google’s behalf? In this case I believe the question is subjective, because any amount of time during which images harmful to an individual’s privacy remain online violates the ethics of both the reductionist interpretation and consequentialism.
Secondly, a key concept in the writings of James Grimmelmann is that the apparent indifference of users toward their privacy, in his case regarding social media, is simply a myth. Users do value privacy to a certain extent, and when using tools such as Facebook, or, in the case of this analysis, Google Street View, they expect that their privacy will be sufficiently protected. The problem is that assumptions about what counts as “private data” vary from user to user, age group to age group, and culture to culture. With respect to Google Street View, some people may have no issue with their faces or addresses being public record, while many others may be outraged. The disconnect lies in unconsidered variables, the same variables that underlie the privacy issues raised by social media.
When it comes to privacy online, most people care deeply about the privacy of their data. The issue is that this privacy is often hard to achieve: the tools provided to ordinary users can be extremely nuanced and ever-changing, making it almost impossible to reach the levels of privacy that should be standard. This ties in with the example Grimmelmann gives of a woman unknowingly exposing a picture of herself and Bono on the beach together. The woman intended to post the photos only to her private network. However, Facebook had recently, and rather clandestinely, rolled out a “feature” by which a user’s posts were automatically shared with the geographic regional networks they were associated with, New York City in this example. The woman had no reason to anticipate this scenario, which occurred to the detriment of both her and Bono, and it was not a matter of her “not caring about her privacy.” This coincides with the issues surrounding Google Street View. Most users value their privacy but fail to consider the unknown or miscellaneous variables that come into play when their faces or vehicles are posted on a vastly public, global service such as Google Street View; and unlike in cases such as the social media example above, they do not necessarily have a choice in what data of theirs Google decides to make public. The consequences of these non-consensual breaches of private data can be quite severe, aiding potential offenders in misconduct such as stalking, burglary, and the casing of private properties. This directly violates consequentialism, as the software does not weigh its overall positive impact on society but instead puts many individuals at risk by broadcasting their physical locations on a global platform.
However, the implementation of further privacy protocols on Google Street View, such as facial blurring and systems to remove images at a user’s request, can turn the negative effects on individuals into positive ones. Google Street View has many positive aspects, including helping authors write books, architects scope out building sites, and ordinary citizens gauge the parking situation in a bustling downtown area before a night out. With a broadened scope of what should be kept private to protect users, Google Street View’s potential to provide a positive impact on the greater good of society, and thereby satisfy the consequentialist way of thinking, is in fact achievable.
In conclusion, Google Street View is a helpful tool for daily use, as well as a useful research tool for businesses and creative professionals such as architects and authors who want to study places of interest without physically being there. Many people protested the introduction of Google Street View because they were uncomfortable with members of the public being able to access images of them. Google eventually began to conceal faces and license plate numbers even as it rolled the software out to other nations that were still actively voicing concerns about it. As for ethical reasoning, Google’s teams should have given more weight to consequentialism, which holds that anything implemented should aim at “the greatest good for the greatest number.” When rolling out Google Street View, Google should have considered the potential suffering its lack of privacy protocols could cause every individual affected by the software; in its initial stages, the service made a great many people worldwide uncomfortable that their identities, cars, and addresses could be known by anyone. Arguably, Google rushed Google Street View to fruition without stopping to verify ethically that the software would contribute to the overall happiness of society. Google could have rigorously verified privacy protocols such as face-blurring software and implemented a “beta” launch in which test audiences could have voiced privacy concerns proactively. The problem is that Google failed to view the software from an ethical, consequentialist standpoint, approaching it instead from the standpoint of a large corporation.