Facebook’s Role in the 2016 U.S. Election: An Analysis of Information Warfare and Electoral Influence

In the detailed exploration by Alexis C. Madrigal for The Atlantic, the case of Facebook’s influence on the 2016 U.S. presidential election is meticulously unpacked. The article titled “What Facebook Did to American Democracy” delves into how the social media giant became a pivotal platform for political information, where issues of misinformation, algorithmic biases, and the potential manipulation of voter behavior came to the forefront. The research highlighted includes the study of Facebook’s “I Voted” button, which possibly increased voter turnout, and the broader implications of its algorithmic content curation that may have deepened political polarization by creating echo chambers. Madrigal also raises concerns about Facebook’s role in unwittingly enabling foreign entities to exploit the platform to influence American political discourse, particularly through targeted ads and false news stories that reached a significant portion of the electorate. These facets establish the groundwork for scrutinizing Facebook’s ethical responsibilities and the extent of its engagement in what could be described as information warfare. In this case analysis, I will argue that consequentialist moral reasoning demonstrates that Facebook did engage in information warfare because its platform facilitated the widespread distribution of manipulative and false content, influencing public opinion and electoral behaviors. Furthermore, I will contend that Facebook was partly responsible for the outcome of the 2016 election because its algorithms and design choices significantly shaped the information landscape, affecting voter perceptions and actions. This analysis will explore the consequences of Facebook’s actions and policies, evaluating the moral implications of its impact on democracy and election integrity.

Jarred Prier’s research examines the dynamics of information warfare and its implications for modern democracies, particularly as it is waged through digital platforms. One central concept from Prier is the idea of “weaponized narratives”: social media platforms can be used to spread targeted propaganda and misinformation in order to influence public opinion and political outcomes. This concept is particularly relevant to the discussion of Facebook’s role in the 2016 U.S. presidential election.
In analyzing the case presented by Madrigal, it becomes evident that Facebook’s architecture and algorithms inadvertently set the stage for an information warfare environment. Facebook’s algorithms prioritize content that is likely to generate user engagement, which often means sensational and polarizing content. When misleading or outright false material draws more engagement than accurate, less sensational reporting, this algorithmic preference amplifies it. During the 2016 election, this feature of the platform was exploited by various actors, including foreign entities identified as having ties to the Russian government, who used fake news and divisive content to sway public opinion, a direct application of Prier’s concept of weaponized narratives.
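To make the mechanism concrete, the toy Python sketch below shows how a ranker that scores only predicted engagement will surface a sensational, low-accuracy item ahead of a verified one. Facebook’s actual News Feed ranking is proprietary and far more complex; the posts, field names, and scores here are invented purely for illustration.

```python
# Hypothetical feed items: "engagement" stands in for a model's predicted likes, shares,
# and comments, and "accuracy" for a fact-check signal; neither reflects Facebook's real
# features or weights.
feed = [
    {"title": "Shocking claim about a candidate", "engagement": 0.9, "accuracy": 0.1},
    {"title": "Fact-checked policy analysis", "engagement": 0.4, "accuracy": 0.95},
]

def engagement_only_rank(posts):
    """Rank purely by predicted engagement; accuracy never enters the score."""
    return sorted(posts, key=lambda p: p["engagement"], reverse=True)

for post in engagement_only_rank(feed):
    print(post["title"])
# Prints the sensational, low-accuracy item first, because only engagement is scored.
```

Even in this crude form, the sketch captures the structural point: what spreads is determined by the ranking criterion, not by the truth of the content.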
From a consequentialist viewpoint, the ethical assessment of Facebook’s actions, or its inaction, in the lead-up to the election focuses on the outcomes of those actions. The platform’s design and its algorithms facilitated an environment where misinformation could thrive, significantly affecting public discourse and political processes. Consequentialism holds that the moral weight of Facebook’s decisions lies in these outcomes: the demonstrable influence on voter behavior and, possibly, on the election result itself.
The right thing to have done, based on this assessment and analysis, would have been for Facebook to take a more proactive role in mitigating the spread of misinformation and in safeguarding the integrity of the information ecosystem on its platform. This could have involved:
Implementing more rigorous content verification processes to filter out false information and fake news, despite the potential impact on engagement metrics.
Adjusting the algorithms to balance engagement with signals of accuracy and information quality rather than optimizing for engagement alone (a simple sketch of such re-weighting follows below), and to promote news literacy among its users.
Increasing transparency about how content is distributed and how decisions are made regarding content prioritization.
These actions would have aligned with ethical business practices by prioritizing public welfare and the integrity of democratic processes over company metrics such as engagement and time spent on the platform. They could also have prevented the platform from being used as a tool in information warfare. Implementing these changes could have mitigated the negative consequences identified by the consequentialist assessment, aligning Facebook’s operations more closely with ethical standards that prioritize the public good and democratic health.
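As a rough illustration of the second recommendation, the hypothetical sketch below re-weights the same kind of toy feed so that an accuracy signal counts alongside predicted engagement. The accuracy_weight knob and the fact-check scores are assumptions for the sake of the example, not real platform parameters.

```python
def balanced_rank(posts, accuracy_weight=0.6):
    """Blend predicted engagement with an accuracy signal instead of ranking on
    engagement alone; accuracy_weight is an illustrative policy knob."""
    def score(post):
        return ((1 - accuracy_weight) * post["engagement"]
                + accuracy_weight * post["accuracy"])
    return sorted(posts, key=score, reverse=True)

feed = [
    {"title": "Shocking claim about a candidate", "engagement": 0.9, "accuracy": 0.1},
    {"title": "Fact-checked policy analysis", "engagement": 0.4, "accuracy": 0.95},
]

for post in balanced_rank(feed):
    print(post["title"])
# The verified analysis (score 0.73) now outranks the sensational claim (score 0.42).
```

Even this crude blend reverses the ordering, which is the substance of the recommendation: changing what the ranking rewards, not merely moderating content after the fact.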
Thus, from a consequentialist perspective, while Facebook may not have intended to engage in information warfare, its failure to anticipate and counteract the weaponization of its platform made it a participant in such activities. The ethical responsibility, therefore, was for Facebook to have recognized and addressed these risks proactively, ensuring that the power of its platform could not be misused to undermine the very fabric of democratic society.



James C. Scott, in his exploration of social structures and state power, introduces the concept of “legibility” as a central tool through which governments understand and control complex societies. Legibility refers to the state’s attempt to simplify and organize society in such a way that it is easier to monitor, tax, and manage. This concept, while originally intended to describe state governance practices, can be adapted to understand the role of digital platforms like Facebook in structuring the information landscape.
Facebook’s platform organizes and filters vast amounts of information through algorithms that decide what is shown to users, thus simplifying and making the behaviors and preferences of its user base legible—not to a state, but to the platform and its advertisers. This legibility allows Facebook to monetize user attention effectively but also makes it a powerful tool for those seeking to manipulate public opinion and discourse, as evidenced in the 2016 U.S. presidential election.
Applying Scott’s concept of legibility to the Facebook case under a consequentialist lens, we see that Facebook’s algorithms, by making user preferences and behaviors legible, also made them manipulable. The platform’s design prioritized engagement, which inadvertently optimized for content that was often sensational or divisive. This system, therefore, facilitated the spread of misinformation and allowed malicious actors to specifically target groups with propaganda, exacerbating political divides.
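A minimal sketch of what legibility means in this setting appears below: raw interaction logs are aggregated into per-user topic profiles, which can then be queried as targetable segments. The users, topics, and function names are hypothetical and stand in for the far richer profiling a real advertising platform performs.

```python
from collections import Counter, defaultdict

# Hypothetical interaction log of (user, topic of content engaged with);
# users and topics are invented for illustration.
interactions = [
    ("user_a", "immigration"), ("user_a", "immigration"), ("user_a", "gun_rights"),
    ("user_b", "healthcare"), ("user_b", "climate"), ("user_b", "healthcare"),
]

def build_profiles(log):
    """Aggregate raw clicks into per-user topic counts: the sense in which behavior
    becomes 'legible' to the platform and, via ad targeting, to outside actors."""
    profiles = defaultdict(Counter)
    for user, topic in log:
        profiles[user][topic] += 1
    return profiles

def target_segment(profiles, topic):
    """Return users whose dominant interest matches a topic, i.e. a targetable segment."""
    return [user for user, counts in profiles.items()
            if counts.most_common(1)[0][0] == topic]

profiles = build_profiles(interactions)
print(target_segment(profiles, "immigration"))  # ['user_a']
```

The same aggregation that lets advertisers reach interested audiences also lets a propagandist identify exactly which users are most receptive to a divisive narrative, which is the manipulability the legibility framing highlights.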
From a consequentialist viewpoint, the ethical implications of Facebook’s actions are judged by the outcomes of these actions. The spread of misinformation and the manipulation of political opinions through Facebook had demonstrably negative consequences for the democratic process, contributing to a misinformed electorate and a polarized public discourse. The ethical tool of consequentialism would argue that Facebook had a moral responsibility to prevent such outcomes by foreseeing and mitigating potential abuses of its platform.
The right course of action, from this perspective, would have involved Facebook taking several proactive steps:
Increasing algorithmic transparency: Facebook could have made its content-ranking algorithms more transparent to enable users and regulators to understand how information is being filtered and prioritized.
Enhancing user control over content: Instead of solely optimizing for engagement, Facebook could have provided users with more control over the content they see, potentially allowing them to opt out of algorithmic curation altogether (sketched below).
Robust fact-checking and moderation policies: Implementing more stringent checks on content authenticity and origin, especially for political advertisements and news, to ensure that misinformation is quickly identified and corrected.
These actions would align Facebook’s operations more closely with ethical standards prioritizing public welfare and the integrity of democratic discourse. They would also mitigate the risk of the platform being used as a tool for information warfare, thus fulfilling its responsibility as a mediator of public discourse.
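As a sketch of the user-control recommendation, the hypothetical render_feed function below falls back to a plain reverse-chronological ordering when a user opts out of engagement-ranked curation. The flag and the sample posts are invented for illustration and do not describe any real Facebook setting.

```python
from datetime import datetime

# Hypothetical posts with a timestamp and a predicted-engagement score; the
# algorithmic_curation flag is an invented stand-in for an opt-out setting.
feed = [
    {"title": "Viral rumor", "engagement": 0.95, "posted": datetime(2016, 10, 1, 9, 0)},
    {"title": "Friend's update", "engagement": 0.30, "posted": datetime(2016, 10, 1, 12, 0)},
]

def render_feed(posts, algorithmic_curation=True):
    """Fall back to a plain reverse-chronological feed when the user opts out of
    engagement-ranked curation."""
    if algorithmic_curation:
        return sorted(posts, key=lambda p: p["engagement"], reverse=True)
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["title"] for p in render_feed(feed, algorithmic_curation=False)])
# ["Friend's update", 'Viral rumor']: newest first, regardless of predicted engagement.
```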
Applying Scott’s idea of legibility and a consequentialist ethical framework suggests that Facebook should have anticipated the ways its platform could be misused and taken steps to prevent such outcomes. By making its platform less amenable to manipulation, Facebook would not only have protected its users but also contributed to the maintenance of a healthy democratic society.
In this analysis, I have argued that Facebook engaged in information warfare and was partly responsible for the outcome of the 2016 U.S. presidential election, as viewed through the lenses of consequentialism and the concepts introduced by Prier and Scott. Prier’s idea of “weaponized narratives” shows how Facebook’s platform was used to spread manipulative and misleading content, which consequentialism deems unethical due to its negative outcomes on democracy. Scott’s concept of “legibility” illustrates how Facebook’s algorithms, by simplifying and structuring user data, made user behaviors predictable and manipulable, thus amplifying the effects of misinformation and political manipulation.
One objection to this position could be the argument that Facebook is merely a platform and that the responsibility for content lies with those who create it, not with the medium through which it is conveyed. This viewpoint stresses that blaming Facebook may detract from addressing the root causes of misinformation, such as the actors who deliberately produce and spread false content.
However, recognizing Facebook’s role in shaping the information ecosystem suggests that the platform does bear responsibility. As a powerful intermediary that significantly influences public discourse, it is reasonable to expect Facebook to implement safeguards against abuses that undermine democratic processes.
A related case is the role of other social media platforms, like Twitter and YouTube, which face similar challenges regarding misinformation and political manipulation. This comparison underscores a broader issue within the tech industry: the need for comprehensive strategies to manage the balance between open communication and protections against the abuse of these platforms.
While acknowledging that no solution is perfect and that efforts to control misinformation could impinge on free speech, it is crucial for platforms like Facebook to actively engage in mitigating the risks their technologies pose. By doing so, they can help ensure that their influence on public discourse and democracy is both positive and ethically responsible. Thus, the debate is not about winning an argument but about identifying and implementing the best possible measures to safeguard democratic integrity in the digital age.

References:

Madrigal, A. C. (2017, October 12). What Facebook did to American democracy. The Atlantic. https://www.theatlantic.com/technology/archive/2017/10/what-facebook-did/542502/

Prier, J. (2017). Commanding the trend: Social media as information warfare. Strategic Studies Quarterly, 11(4), 50–85. http://www.airuniversity.af.edu/Portals/10/SSQ/documents/Volume-11_Issue-4/Prier.pdf