Case Analysis on Information Warfare

In Alexis C. Madrigal’s article, “What Facebook Did to American Democracy,” the focus
is on Facebook’s role in political campaigns, particularly the 2016 presidential election.
The article examines how Facebook employs algorithms to deliver targeted ads to
individual users based on their interactions—such as likes, comments, and shares—on the
platform. Madrigal argues that the use of this social media tool contributed to skewing the
election in favor of one party over the other. Additionally, the article explores how Russia
allegedly “infiltrated” Facebook and other social media sites to conduct disinformation
campaigns during the election. In this Case Analysis, I will contend that, from a contractarian
perspective, Facebook did not participate in information warfare as it remained neutral
towards both parties. Furthermore, Facebook should not be held accountable for the election’s
outcome since it is not responsible for the actions of foreign adversaries who attempted to
exploit its platform. This analysis will argue that Facebook’s role was limited to providing a
platform rather than influencing or controlling the content disseminated on it.
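To make concrete the kind of interaction-driven ad targeting Madrigal describes, the sketch below shows one simplified, hypothetical way that likes, comments, and shares could be turned into an interest profile and then an ad selection. The weights, topics, and function names are my own illustrative assumptions, not Facebook’s actual system.

```python
from collections import defaultdict

# Hypothetical engagement weights: a share is treated as a stronger interest
# signal than a comment, and a comment as stronger than a like. These values,
# the topics, and the ad inventory are illustrative inventions only.
WEIGHTS = {"like": 1.0, "comment": 2.0, "share": 3.0}

def interest_profile(interactions):
    """Aggregate a user's (topic, interaction) history into per-topic scores."""
    scores = defaultdict(float)
    for topic, kind in interactions:
        scores[topic] += WEIGHTS.get(kind, 0.0)
    return scores

def pick_ad(interactions, ad_inventory):
    """Return the ad whose topic matches the user's strongest interest, if any."""
    scores = interest_profile(interactions)
    if not scores:
        return None
    top_topic = max(scores, key=scores.get)
    return ad_inventory.get(top_topic)

# Heavy engagement with one candidate's pages yields ads on that topic.
user_history = [("candidate_a", "like"), ("candidate_a", "share"), ("local_news", "comment")]
ads = {"candidate_a": "Ad: rally this weekend", "local_news": "Ad: subscribe to the Tribune"}
print(pick_ad(user_history, ads))  # -> Ad: rally this weekend
```

The point of the sketch is simply that targeting of this kind responds to whatever users already engage with; it does not, by itself, take sides.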
Using Keith Scott’s paper, “A Second Amendment for Cyber? – Possession, Prohibition,
and Personal Liberty for the Information Age,” I will explain why Facebook should not be held
accountable. This paper discusses the shifting dynamics of cyber connectivity in an era where
nearly everyone has access to the internet with minimal restrictions. Scott suggests
implementing standardized testing or certification to ensure safe internet use, but such
measures could undermine personal freedoms, particularly given that internet access is often a
paid service.
Scott highlights a significant concern: the proliferation of software that allows for the
manipulation of audio and video, making it possible to create fabricated content that
appears authentic. He notes, “From the standpoint of Information Warfare and ‘fake
news,’ the open distribution of software which allows the editing of sound and video to in
effect make anyone say and do anything to and with anyone (Suwajanakorn et al.,
2017; Farokhmanesh, 2018) means that politics, bullying, and cyberstalking are going to
become even darker and crueler than the present.” This capability to manipulate media
content is not unique to Facebook; it affects all digital platforms and reflects broader
media practices.
The media has long been adept at editing and sensationalizing stories to capture
attention, irrespective of the platform used for dissemination. The problem of content
manipulation and misinformation extends beyond any single platform and is deeply
rooted in media practices as a whole. Thus, while Facebook provides a space for
content sharing, it is not fundamentally responsible for the inaccuracies or
manipulations that occur within the broader media landscape. The challenge lies not in
the platform itself but in the pervasive nature of media manipulation and the
responsibility of users to critically assess the information they encounter.
Media platforms should embrace a contractarian approach to content management,
applying standards that users could reasonably agree to. While it is appropriate to restrict
explicit content to protect younger viewers, the broader test is whether the overall
distribution of benefits and risks is equitable for all parties. This is the foundation of Facebook’s
content policies and a reasonable framework to follow. Ultimately, it is up to individual
users to scrutinize and evaluate the content they encounter, rather than having
Facebook decide for everyone.
Scott highlights that in a democratic society, citizens cannot be forced to give up their
freedoms, even if those freedoms may be risky. However, they can be persuaded
through clear and compelling messages to make sacrifices for the greater good. Such
judgments of risk are subjective: what one person perceives as dangerous might not be seen
the same way by others. This underscores why Facebook should not be held directly
accountable for the outcomes of the 2016 election. Even if foreign entities like Russia
attempted to influence content, it is up to users to critically evaluate and decide on the
validity of the information they see. Facebook’s role is not to filter or judge content for its
users but to provide a platform where individuals can make their own informed
decisions.
In analyzing why Facebook did not directly influence the results of the 2016 presidential
election, I will draw on Lt. Col. Jarred Prier’s paper, “Commanding the Trend: Social
Media as Information Warfare.” Prier’s research delves into how U.S. adversaries,
particularly Russian operatives, have manipulated social media to spread
disinformation. He provides detailed accounts of Russian activities, including hacking,
espionage, and strategic manipulation during the election. These efforts involved using
social media to propagate “fake news” and misleading information with the intent of
influencing electoral outcomes. Despite the potential effects of these disinformation
campaigns, Facebook itself did not engage in actions designed to sway the election for
either political side. Instead, Facebook’s primary function is to serve as a platform that
enables users to connect, share, and promote their businesses. It is designed to
facilitate communication and interaction, not to scrutinize or validate the accuracy of
every piece of content shared on the platform.
Ensuring that the platform operates effectively and remains available for diverse uses is
Facebook’s core responsibility. The platform’s role is not to act as a gatekeeper of
content or to oversee the truthfulness of information disseminated by users. While it is
crucial for Facebook to address and manage misuse of its platform, it is ultimately up to
users to critically evaluate the content they encounter. The broader issue of
misinformation involves systemic media practices and user responsibility rather than
being solely attributable to the platform itself.
According to a Facebook spokesperson, “We as a company are neutral – we have not
and will not use our products in a way that attempts to influence how people vote.” This
assertion underscores Facebook’s commitment to maintaining neutrality, ensuring that its
platform is not manipulated to unfairly benefit any political party. The company’s stance is that
it should offer a neutral environment where users have the freedom to share and engage with
content without undue influence.
In contractarian terms, Facebook operates behind something like a “veil of
ignorance”: decisions about how the platform treats content are meant to be made without
regard to the company’s own political preferences, so that no internal bias reaches users. On
this view, even if there are biases or operational tendencies within the company,
Facebook’s fundamental role remains that of a neutral facilitator of interaction and
content sharing.
Additionally, Facebook’s neutrality is pivotal to its operational philosophy. While the
company does implement measures to address misinformation and harmful content, it is
not equipped to oversee the full spectrum of political influence that may occur through
its platform. The company’s responsibility is to provide a space where users can freely
exchange ideas and information. Users are tasked with critically assessing the content
they encounter, rather than relying on Facebook to moderate or validate every piece of
information.
Moreover, Facebook’s role extends beyond mere content hosting; it involves creating an
environment where diverse perspectives can coexist. The challenge of misinformation
and political manipulation is not unique to Facebook but is a broader issue within the
media landscape. By upholding its commitment to neutrality, Facebook aims to support
an open forum for democratic discourse while recognizing that the responsibility for
discerning truth and forming political opinions ultimately lies with the users themselves.
Any website is vulnerable to infiltration by external entities, which is a characteristic of
our digital era. Prier’s paper addresses several strategies used to disseminate
disinformation on social media, such as trend distribution, trend hijacking, and trend
creation. These methods allow individuals with malicious intent to saturate the platform
with targeted, misleading content. Although Facebook makes efforts to limit the spread
of such disinformation, it is a technology platform that cannot entirely prevent all forms
of digital manipulation or attacks. The complexity and scale of modern digital
ecosystems make it exceedingly difficult to eliminate every instance of malicious
activity. Eric Hoffer’s observation that “propaganda on its own cannot force its way into
unwilling minds, neither can it inculcate something wholly new” underscores the
limitations of any platform in controlling the content it hosts. While Facebook can
implement measures to address disinformation, it ultimately remains a tool for users,
and the responsibility for discerning and verifying the accuracy of information lies with
individuals. Facebook’s primary duty is to maintain an operational platform, not to act as
a content censor or arbiter of truth. As such, users must navigate and critically evaluate
the content they encounter, understanding that the platform’s role is to provide access
rather than enforce content accuracy.
In summary, while there is evidence that outside actors sought to influence the election
by promoting one side more aggressively, similar efforts could be employed by any
political faction. Although these manipulative tactics took place on Facebook, it is not
the platform’s direct responsibility to verify the authenticity of every ad or piece of
content. While Facebook might develop and implement more sophisticated algorithms
to flag misleading information in the future, such technologies could remain inherently
subjective and unreliable.
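To illustrate why automated flagging can remain subjective, consider a deliberately crude, hypothetical sketch. The phrase list and threshold below are arbitrary choices invented for illustration; whoever selects them is, in effect, deciding what counts as “misleading,” which is the subjectivity described above.

```python
# A deliberately naive, hypothetical misinformation flagger. It misses fabricated
# stories that avoid these phrases and can flag legitimate reporting that happens
# to use them, which is why heuristics like this are subjective and unreliable.
SUSPECT_PHRASES = ["shocking truth", "they don't want you to know", "100% proof"]

def flag_post(text: str, threshold: int = 1) -> bool:
    """Flag a post when it contains at least `threshold` suspect phrases."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SUSPECT_PHRASES)
    return hits >= threshold

print(flag_post("The shocking truth they don't want you to know about the vote"))  # True
print(flag_post("Officials certified the county's election results on Tuesday"))   # False
```

Even far more sophisticated classifiers face the same underlying problem: someone chooses the training data and the threshold, and those choices encode judgments about truth that users may not share.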
Furthermore, it is crucial to recognize that social media platforms like Facebook are
designed to facilitate free speech and expression rather than to act as arbiters of truth.
The primary responsibility lies with individuals to approach content with critical thinking
and not to rely solely on social media for guidance or validation. We must actively
engage in discerning the credibility of information and make informed decisions based
on our own judgment.
Ultimately, while platforms can and should take steps to mitigate misinformation, the
onus of evaluating content and forming political opinions rests with us as users. Social
media serves as a tool for communication and expression, but it is our duty to maintain
intellectual autonomy and not let these platforms shape our perceptions or decisions
unduly.
