{"id":277,"date":"2021-04-05T16:01:52","date_gmt":"2021-04-05T16:01:52","guid":{"rendered":"https:\/\/sites.wp.odu.edu\/phil355efall21\/?page_id=277"},"modified":"2021-04-05T16:03:11","modified_gmt":"2021-04-05T16:03:11","slug":"information-warfare-case-analysis","status":"publish","type":"page","link":"https:\/\/sites.wp.odu.edu\/phil355efall21\/law-ethics\/phil-355e\/information-warfare-case-analysis\/","title":{"rendered":"Information Warfare Case Analysis"},"content":{"rendered":"\n<p>In the article, <em>What Facebook Did to American Democracy<\/em>, Madrigal points out there is an underlying trend of showing just how powerful Facebook can be. The power that Facebook posses is unknown in a broader scope, instead its power is revealed as events happen. A great example of this is the 2016 Presidential Election. In this election we can see Facebooks influence on the people of America and their political views. As the election raged on it was almost certain that Trump was not going to win the election. However, thanks to Facebook and the engagement that Facebook gave to Trump, he was able to win.<\/p>\n\n\n\n<p>Now, I am not saying that Facebook is 100% responsible for Trumps victory. But it can be seen in Madrigals article, in Wylies act of Whistleblowing, and even some confirmed statistics from Facebook themselves. With that being said, I will be the first one to admit that I am not educated in Politics even in the slightest. I have never voted and don\u2019t intend to ever vote, until I feel as if I am educated enough with the system. It can obviously be seen that Facebook has an enormous amount of power almost no one can even begin to fathom. People are only getting the tip of the iceberg of Facebooks power from the events that unfolded with the 2016 Presidential Election. 
In this Case Analysis I will argue that Care Ethics shows us that Facebook did engage in information warfare, because not all parties involved were taken into consideration when information was released on Facebook, and further that Facebook was partly responsible for the election outcome because of the engagement that falsified information brought to the platform and to Trump\u2019s campaign.<\/p>\n\n\n\n<p>In the article <em>Commanding the Trend<\/em>, Prier discusses the unforeseen power that social media websites have over the world. The two sites in focus here are Facebook and Twitter. These conglomerates have revealed a massive imbalance in power, created by the tension between \u201cserving their customers\u201d and \u201cserving their businesses\u201d. The structure of both sites lends itself to algorithmically generated trending topics. These trending topics can easily be \u201chijacked\u201d, filled with false information, and end up not relating to the original topic at all. Because of the way the two sites are structured, combined with the easy accessibility of the Internet, it is hard for them to regulate the creation and spread of false information, for a few major reasons.<\/p>\n\n\n\n<p>The first major reason is that it is hard to confirm whether information is false within a reasonable amount of time. By the time an article has soared to tens of millions of views it is too late; it has already reached the greater portion of its intended and unintended audience, changing mindsets and forming opinions on the topic. The second major reason is that some of the businesses on these social media platforms are the ones spreading this information. 
Taking down the ads, links, or posts that spread \u201cfake news\u201d would be an obvious intrusion into the revenue that these companies, and the social media platform in question, receive at the end of the day. A third major reason is that moderation itself can become manipulation on a large scale. Let\u2019s say that Facebook takes down a post from former President Trump because they think it is wrong. Some would see this as political bias, others as a tactic of persuasion. By hiding the post, Facebook limits that political view\u2019s exposure to the masses.<\/p>\n\n\n\n<p>Now, if we take these concepts and analyze the case of Facebook and Trump\u2019s 2016 presidential campaign, we can see that they line up perfectly. Prior to the election, Facebook had no real way to sort fake news from factual news. This is due to all of the concepts mentioned above: Facebook did not want to show bias in any way, did not want to upset the people or businesses that use its services, and wanted to maximize its profits. All of these factors lined up in a way that allowed users to exploit Facebook for information warfare. Whether this was known to Facebook remains an open question as of this writing.<\/p>\n\n\n\n<p>If we look at this through the eyes of Care Ethics, we can see an obvious lack of overall care for the users of the platform. In an Ethics of Care model, we would look to nurture relationships and base our decisions mainly on compassion. Instead of basing its decisions on compassion, Facebook has thrown that viewpoint out the window and makes decisions based solely on profit maximization. If Facebook had taken into consideration the personal relationships that its users have with its platform, then more studies would have gone into the effects that the platform can have on individuals. 
This in turn would have led to increased regulation of ads and of the spread of information. It also would have strengthened the personal relationships between Facebook and its users, granting the users a feeling of inclusion.<\/p>\n\n\n\n<p>There is an obvious failure to meet the needs of the people of Facebook. A great resolution would be increased internal studies within Facebook and its user community. These studies would reveal who desires what on Facebook and how Facebook could help meet those desires. They could also decrease the number of bots on Facebook. There is no better entity to tackle this situation than Facebook itself, because of the amount of information Facebook holds on people. In the wrong hands, this data can have massive repercussions not only for Facebook but for its user community, as seen in the Cambridge Analytica scandal, where the data of millions of users was leaked to an outside company. An internal rework of how Facebook structures its algorithms, alongside a more compassion-driven business model, would lead to a more ethical framework for Facebook, with its focus being care ethics.<\/p>\n\n\n\n<p>In the article <em>A Second Amendment for Cyber?<\/em>, Scott talks about the issues that come with the increasing number of networked devices and the minimal regulation of those devices. There are no set laws that regulate digital devices. The only regulation we have that can be applied in some way to digital devices is traditional law: do not steal, do not harm others, and so on. These laws can be applied to everything, but they leave increasingly gray areas within the informational realm. As the spread of technology grows and the capabilities of computers grow, we see these gray areas grow ever larger. 
The larger this area grows, the more regulation is needed.<\/p>\n\n\n\n<p>Scott states in his article that anyone with a connected digital device can do intentional or unintentional harm to others. This is a very powerful statement, because it takes a lot to understand the repercussions of your digital actions. A tragic example of this could be seen on a social media site like Facebook. You could post some drama from your local college campus, the drama could spread to millions, and the person at the center of it could kill themselves. Did the person share the drama intending for that to happen? Most likely not, or at least we can hope not. Did the person know it would go this far? Again, most likely not. The reach of our digital devices and our digital presence is largely unknown to most people, and everyone\u2019s presence is different. Yet anyone could cause damage to anything at any point in time, whether they want to or not. This is only possible because of the interconnectedness the Internet has provided to the world.<\/p>\n\n\n\n<p>With these concepts in mind, if we analyze the case of the 2016 presidential election, we can see how they applied to it. Russian \u201ctrolls\u201d understood the power and influence that social media had on the world. They figured this out long ago, which was smart on their part, because they were able to establish a presence within the major channels of news distribution. This \u201cmassive following\u201d gave their platform a sense of honesty, allowing more users to jump on the bandwagon. Once they had everyone on the bandwagon, they were able to shift people\u2019s viewpoints in the direction they wanted, increasing the influence Donald Trump had in his campaign. 
This form of manipulation was only possible because of the lack of regulation of digital devices.<\/p>\n\n\n\n<p>The Russians used their digital power to manipulate people, and this manipulation in turn led to harm to thousands of people during protests. Again, this was probably something the Russians did not expect. It is a great example of the reach and influence a digital presence has on the people of the world.<\/p>\n\n\n\n<p>If we approach this situation from a Care Ethics viewpoint, we can understand that there is no achievable way to create a relationship between the user and the Internet, or between the user and the device. Instead, the best way to ensure that an ethical framework of care is utilized is through global regulation. This regulation would ensure that the needs of the masses are taken care of; those needs generally boil down to the safety of the people, something almost everyone desires. The regulation would be carried out through a series of laws and potentially some kind of global software or global user education course. The digital world is such a massive area at this point that it is very hard even to guess at what could fix the issues we are having. These are merely suggestions that come to mind when I start to think about the issues.&nbsp;<\/p>\n\n\n\n<p>In conclusion, I believe the world would see less potential harm in all realms if the digital realm were regulated in some way by a framework of ethical care. This framework would increase user satisfaction by meeting their most basic achievable needs, like safety. To do something like this, we would need to understand completely everything the digital world encompasses. This is not a foolproof solution. I have said many times in this analysis that there are many gray areas in the digital realm, and these areas are only growing bigger as time passes. 
It would be best if we were to start studying everything about the digital world now, because creating a solution for this imbalance of power is going to take a very long time. One counterargument to this solution is that technology grows too fast for us to ever understand what could come from it. Another is that the cost of implementing something like this would be more than it\u2019s worth. There are many angles from which you could approach this situation, but no matter the angle, it is still hard to fathom everything the digital world can encompass.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the article, What Facebook Did to American Democracy, Madrigal points to an underlying trend showing just how powerful Facebook can be. The full extent of Facebook\u2019s power is unknown; instead, its power is revealed as events happen. A great example of this is the 2016 Presidential Election. 
In&#8230; <\/p>\n<div class=\"link-more\"><a href=\"https:\/\/sites.wp.odu.edu\/phil355efall21\/law-ethics\/phil-355e\/information-warfare-case-analysis\/\">Read More<\/a><\/div>\n","protected":false},"author":20527,"featured_media":0,"parent":207,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"_links":{"self":[{"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/pages\/277"}],"collection":[{"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/users\/20527"}],"replies":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/comments?post=277"}],"version-history":[{"count":1,"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/pages\/277\/revisions"}],"predecessor-version":[{"id":278,"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/pages\/277\/revisions\/278"}],"up":[{"embeddable":true,"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/pages\/207"}],"wp:attachment":[{"href":"https:\/\/sites.wp.odu.edu\/phil355efall21\/wp-json\/wp\/v2\/media?parent=277"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}