Case Analysis: Cyberwarfare Actions in Hybrid Kinetic Warfare
The article “Digital Battlegrounds: Evolving Hybrid Kinetic Warfare” by Veeneman (2023) explores cyberwarfare actions during the Israel-Hamas conflict in October 2023, highlighting their role in hybrid warfare, which combines kinetic and non-kinetic tactics to destabilize adversaries. On October 6th, the pro-Iranian hacktivist group Cyber Av3ngers launched a DDoS attack on Israel’s electricity industry Independent System Operator (Noga), disrupting critical infrastructure (Veeneman, 2023). The following day, Anonymous Sudan and AnonGhost targeted the Red Alert system, spamming false missile alerts via vulnerable APIs, causing widespread panic and hindering emergency responses (Veeneman, 2023). Further attacks compromised the Israel Electric Corporation, components of the Iron Dome system, and the Government of Israel website while also targeting financial institutions like Discount Bank Israel (Veeneman, 2023). These actions aimed to sow chaos, disrupt civilian life, and undermine national security by targeting essential services like power grids, alert systems, and telecommunications, which are vital for societal stability. Although these cyberattacks were bloodless, their impact on civilian infrastructure raises ethical concerns about their role in warfare. In this Case Analysis, I will argue that utilitarianism shows us that these actions could not be part of a just war because they disproportionately harm civilians and destabilize societal well-being, outweighing any strategic gains.
Analysis Using Taddeo’s Concepts
Taddeo’s framework in “An Analysis for a Just Cyber-Warfare” provides a robust ethical lens for evaluating cyberwarfare by merging Just War Theory with Information Ethics. She defines cyberwarfare as “the warfare grounded on certain uses of ICTs within an offensive or defensive military strategy endorsed by a state and aiming at the immediate disruption or control of the enemy’s resources, and which is waged within the informational environment, with agents and targets ranging both on the physical and non-physical domains and whose level of violence may vary upon circumstances” (Taddeo, 2012). This definition underscores two critical aspects of cyberwarfare: its informational nature, driven by the use of ICTs to “elaborate, manage and communicate data and information,” and its transversality, which allows it to “escalate from non-violent to more violent forms” across domains (Taddeo, 2012). Taddeo argues that Information Ethics imposes an obligation to “respect the integrity of informational entities,” including societal systems like communication networks, to prevent unjust harm (Taddeo, 2012). She also highlights a key ethical challenge: cyberwarfare disrupts the Just War principle of “war as last resort,” which traditionally assumes war’s violent nature. Taddeo notes, “The application of this principle is shaken when CW is taken into consideration because, in this case, war may be bloodless and not involve physical violence at all” (Taddeo, 2012). This raises the possibility that bloodless cyberattacks might justify earlier intervention to avert traditional conflict, but such actions must still adhere to the ethical duty to protect non-combatants.
In the Israel-Hamas conflict, the cyberattacks described by Veeneman (2023) exemplify Taddeo’s concept of transversality, ranging from non-violent DDoS disruptions to potentially escalatory actions with broader implications. Cyber Av3ngers’ October 6th DDoS attack on Noga was followed the next day by Anonymous Sudan and AnonGhost “spamming false Red Alert system missile alerts via exposed vulnerable APIs” (Veeneman, 2023). These actions targeted critical informational infrastructure, which Taddeo would classify as informational entities deserving ethical protection under Information Ethics. The Red Alert system, for instance, is a societal lifeline designed to “automatically activate the public broadcast warning system” and send alerts to mobile devices during rocket launches (Veeneman, 2023). Disrupting it directly impaired civilian safety; as Veeneman (2023) notes, “The loss of alerting and communication systems at the early stages of the conflict had catastrophic consequences… achieving the immediate goal of exacerbating panic and confusion among the populace.” This focus on civilian systems violates the Just War principle of non-combatant immunity, as the attacks harmed civilians rather than military targets, conflicting with Taddeo’s ethical framework.
Taddeo’s argument about the potential of bloodless cyberattacks to avert greater violence is particularly relevant here. She posits that a cyberattack could resolve tensions and “avert the possibility of a traditional war in the foreseeable future,” as it might affect only “the informational grid of the other state, and there would be no casualties” (Taddeo, 2012). In the Israel-Hamas case, one might argue that disrupting Noga or the Red Alert system could pressure military operations to de-escalate by creating operational challenges. However, this rationale fails because the attacks primarily targeted civilian infrastructure, not military assets. The harm did not avert greater violence but exacerbated civilian suffering (Veeneman, 2023). For example, the false missile alerts led to widespread panic, undermining civilians’ ability to respond effectively to real threats, while the attack on Noga disrupted electricity, a fundamental societal need. This misalignment with Information Ethics highlights the ethical failure of these actions, as they damaged societal systems rather than achieving a military objective that could justify such intervention.
Additionally, Taddeo warns of cyberwarfare’s transversality across domains, noting that it “may involve a computer virus able to disrupt or deny access to the enemy’s database… without exerting physical force or violence” but can also escalate to cause physical harm, such as “a cyber attack targeting a military aerial control system causing aircraft to crash” (Taddeo, 2012). In this case, while the attacks were initially bloodless, their potential to escalate is evident in the broader context. Veeneman (2023) points out that “lingering malware implanted in critical systems during the conflict… could be activated at any time,” posing risks like power outages or further disruptions to civilian life. The attack on the Israel Electric Corporation, which impacted “components or function of the Iron Dome system,” illustrates this escalation potential, as it could indirectly lead to physical harm by weakening defense systems (Veeneman, 2023). This transversality underscores the ethical complexity of cyberwarfare, as the initial non-violent nature of the attacks does not negate their broader harmful impact on civilians, further violating Taddeo’s ethical standards.
Utilitarianism, as an ethical tool, deems these actions unjust within a just war framework. Utilitarianism judges actions by their consequences, seeking to maximize happiness and minimize suffering. Measured this way, these cyberattacks produced overwhelmingly bad results for ordinary people: widespread fear, the interruption of vital services that civilians depend on daily, and lasting security vulnerabilities that leave the population feeling more exposed in the future. Veeneman (2023) notes that the attacks led to “delayed decision-making” and “constrained recovery efforts,” decreasing societal happiness and increasing suffering. The temporary strategic disruption achieved by the hacktivist groups does not outweigh the societal harm, as civilian infrastructure bore the brunt of the attacks. A utilitarian approach would advocate for avoiding civilian targets, focusing instead on military-specific cyber operations. For example, targeting encrypted military communications or logistics systems could achieve strategic goals without endangering civilian lives.
Analysis Using Boylan’s Concepts
Michael Boylan’s ethical framework in “Can There Be a Just War?” centers on protecting fundamental human goods, which he terms the “Goods of Agency.” These include life, basic needs (such as food, shelter, and safety), and the capacity to act as a moral agent, all essential for human flourishing (Boylan, 2013). Boylan aligns his framework with the Just War Theory, emphasizing principles like non-combatant immunity and proportionality. He argues that actions in warfare must prioritize minimizing harm to civilians and ensure that the intended good outweighs the harm caused. Additionally, Boylan stresses the importance of intention, asserting that morally permissible actions should aim to achieve a greater good without unnecessarily targeting innocent parties (Boylan, 2013). This perspective demands that warfare actions respect the fundamental rights of individuals, especially those not directly involved in the conflict, to maintain ethical integrity.
In the context of the Israel-Hamas conflict, the cyberattacks described by Veeneman (2023) directly undermine Boylan’s Goods of Agency for civilians. The DDoS attack on Noga disrupted electricity, a basic need critical for societal functioning. At the same time, the spamming of false Red Alert missile alerts caused widespread panic, compromising civilian safety and the ability to respond to emergencies (Veeneman, 2023). These actions targeted civilian infrastructure rather than military assets, violating the principle of non-combatant immunity. For example, the disruption of the Red Alert system meant that “the loss of alerting and communication systems… impacted the ability of both civilian and military personnel to respond effectively,” exacerbating chaos (Veeneman, 2023). This directly impaired civilians’ capacity to act as moral agents in a stable environment, as they were left vulnerable to misinformation and delayed emergency responses. The intention behind these attacks further conflicts with Boylan’s requirement for morally permissible actions, as the primary harm was directed at civilian well-being rather than military objectives.
Using utilitarianism as the ethical tool, these cyberattacks are unjustifiable within a just war framework. Utilitarianism evaluates actions based on their consequences, seeking to maximize overall happiness and minimize suffering. The consequences of these cyberattacks were overwhelmingly negative for civilians: widespread fear, the disruption of essential services like electricity, and long-term vulnerabilities such as “lingering malware implanted in critical systems” that could cause further damage post-conflict (Veeneman, 2023). While the hacktivist groups may have gained a temporary strategic advantage by sowing chaos, this does not outweigh the societal harm inflicted, as the attacks primarily targeted civilian infrastructure rather than military capabilities. A utilitarian approach would demand that the actors refrain from targeting civilian systems, instead focusing on military-specific targets to minimize collateral damage. For instance, attacking military communication networks rather than public alert systems could achieve strategic goals without endangering civilian lives. By prioritizing civilian safety, the actors could align with utilitarian principles, ensuring that the overall consequences promote societal well-being rather than suffering, thus adhering to the ethical standards of a just war.
Conclusion
This analysis, utilizing Boylan’s Goods of Agency and Taddeo’s Information Ethics alongside utilitarianism, argues that the cyberwarfare actions in the Israel-Hamas conflict—specifically the DDoS attacks on Noga, the Red Alert system, and other civilian infrastructure—cannot be part of a just war. These actions disproportionately harmed civilians by disrupting essential services, violating non-combatant immunity, and causing more societal suffering than strategic gain, as evidenced by their exacerbation of panic and hindrance of emergency responses (Veeneman, 2023). An alternate perspective might argue that such cyberattacks are justifiable to prevent escalation; as Taddeo (2012) notes, bloodless actions could avert traditional violence. However, this view falters here, as the attacks targeted civilians, not military assets, increasing harm rather than reducing it. Given hybrid warfare’s evolving threats, a broader implication is the urgent need for international norms to protect civilian infrastructure in cyber warfare. A limitation of this position is the challenge of distinguishing military from civilian targets in interconnected digital systems, which may complicate utilitarian assessments in future conflicts.
References
Boylan, M. (2013). Can there be a just war? Marymount University.
Taddeo, M. (2012). An analysis for a just cyber-warfare [Conference paper]. ResearchGate. https://www.researchgate.net/publication/261488493
Veeneman, P. (2023). Digital battlegrounds: Evolving hybrid kinetic warfare. Industrial Cyber. https://industrialcyber.co/analysis/digital-battlegrounds-evolving-hybrid-kinetic-warfare/
Case Analysis: Information Warfare
In their article, “The Covert War for American Minds,” David Shedd and Ivana Stradner (2024) expose the sophisticated disinformation campaigns orchestrated by Russia, China, and Iran to undermine U.S. elections. They cite a 2024 case where Russia channeled $10 million through a Tennessee-based media startup to produce divisive social media content, aligning with Moscow’s “interest in amplifying U.S. domestic divisions” (Shedd & Stradner, 2024, p. 1). Russia employs AI-driven bot farms, fake profiles, and manipulated videos, such as a fabricated clip showing ballots being destroyed in Pennsylvania, to erode trust in electoral integrity (Shedd & Stradner, 2024). Iran targets specific candidates by leaking hacked materials from the Trump campaign, while China focuses on down-ballot races to “sow doubts about U.S. leadership” (Shedd & Stradner, 2024, p. 4). These efforts exploit social media’s global reach to manipulate public opinion and deepen polarization, posing a significant threat to democratic processes. In this Case Analysis, I will argue that Virtue Ethics demonstrates that these nations engaged in information warfare against the U.S. because their deceptive tactics undermine trust, justice, and civic order, rendering them unjustifiable. Similarly, U.S. interference in foreign elections using comparable methods would constitute information warfare and be unjustifiable, as it violates the virtues of honesty and fairness.
Analysis Using Prier’s Concepts and Virtue Ethics
Jarred Prier (2017) provides a robust framework for understanding social media as a tool of information warfare through the concept of “commanding the trend.” He explains, “Using existing online networks in conjunction with automatic ‘bot’ accounts, foreign agents can insert propaganda into a social media platform, create a trend, and rapidly disseminate a message faster and cheaper than through any other medium” (Prier, 2017, p. 52). This strategy relies on four key factors: a message that resonates with an existing narrative, a group of true believers predisposed to the message, a small cyber team to craft the content, and a network of automated bot accounts to amplify it (Prier, 2017). Russia’s interference in the 2016 and 2024 U.S. elections exemplifies this approach, with bot-driven campaigns amplifying divisive issues such as immigration, abortion, and U.S. support for Ukraine to deepen societal polarization (Shedd & Stradner, 2024). Prier notes, “A trending topic transcends networks and becomes the mechanism for the spread of information across social clusters” (Prier, 2017, p. 53), a tactic Russia employs through fake media sites, paid advertisements, and AI-generated content. Similarly, Iran’s leaking of hacked materials from the Trump campaign and China’s Spamouflage operation, which uses fake American profiles to criticize U.S. policies, demonstrate the same manipulative strategies (Shedd & Stradner, 2024).
These actions unequivocally constitute information warfare, as they weaponize information to coerce voter behavior and destabilize democratic institutions. Shedd and Stradner (2024) emphasize, “The goal of influence operations is to engineer a shift in enemy decision-making by shaping the views of the citizenry” (p. 4). By flooding social media with disinformation, these nations exploit the availability heuristic, where “the mind creates a shortcut based on the most—or most recent—information available” (Prier, 2017, p. 57). This psychological manipulation distorts public perception, as evidenced by Russia’s smear campaigns against Kamala Harris and false narratives questioning election integrity (Shedd & Stradner, 2024). The scale of these operations, enabled by social media’s mass reach, amplifies their impact, making them a potent tool for sowing discord.
Virtue Ethics, which prioritizes character traits such as honesty, justice, and temperance, deems these actions morally unjustifiable. Honesty demands truthfulness in public discourse, yet Russia, China, and Iran rely on deception, deploying “fake profiles that promote AI-generated content” and “links to websites that impersonate legitimate media” (Shedd & Stradner, 2024, p. 3). This violates the virtue of justice, which requires fairness in democratic processes, as these campaigns manipulate voters and undermine the integrity of elections. Prier underscores the manipulative nature of propaganda, stating, “Propaganda on its own cannot force its way into unwilling minds, neither can it inculcate something wholly new” (Prier, 2017, p. 56), highlighting how these nations exploit existing societal divisions. This lack of temperance—choosing chaos over constructive dialogue—further erodes civic trust. Shedd and Stradner (2024) note that the cumulative effect of these campaigns is to “seed chaos, discontent, and suspicion within a target population over an extended period” (p. 4), a direct affront to the virtuous pursuit of societal harmony.
If the U.S. were to engage in similar interference in foreign elections, it would also be committing information warfare and would be equally unjustifiable. Deceptive tactics, such as spreading false narratives or manipulating social media trends, would contradict the virtue of honesty and risk escalating global distrust, potentially destabilizing diplomatic relations. Virtue Ethics calls for a higher standard, one rooted in integrity and transparency. Shedd and Stradner (2024) advocate for this approach, stating, “The United States should not just fend off foreign adversaries by publicly exposing their actions; it should go on the offensive” through truthful narratives (p. 7). Such an approach accords with the virtue of courage, demonstrating devotion to democratic principles while building resilience. It also displays prudence, ensuring that U.S. policy upholds moral credibility rather than contributing to international tensions. By stepping forward with open communication, the U.S. can push back against disinformation credibly while maintaining the ethical standards that promote cooperation and trust.
Analysis Using Morkevičius’ Concepts and Virtue Ethics
Valerie Morkevičius, in Gruszczak and Kaempf (2024), positions information warfare within the normative framework of jus ad vim. She argues, “Information-psychological efforts aimed at undermining states’ domestic cohesion and international soft power capabilities are grave enough to warrant characterization as uses of coercive force” (Gruszczak & Kaempf, 2024, p. 2). This framework applies directly to the disinformation campaigns by Russia, China, and Iran, which manipulate voter perceptions to weaken U.S. democratic cohesion (Shedd & Stradner, 2024). Morkevičius distinguishes between denial (withholding information) and deception (manipulating reality), asserting that “Concealment is morally problematic when it undermines individuals’ trust in each other, particularly when it leads to mistrust in trustworthy sources of information” (Gruszczak & Kaempf, 2024, p. 6). Russia’s fake media sites, Iran’s hacked leaks, and China’s Spamouflage operation exemplify deceptive concealment, masquerading as legitimate sources to mislead the public (Shedd & Stradner, 2024).
These actions constitute information warfare by coercing public opinion through psychological manipulation, a form of non-kinetic force that destabilizes societies. Morkevičius draws on Walzer’s critique of deceptive ambushes, arguing that such tactics in information warfare are impermissible because they “endanger non-combatants by making the opponent more likely to treat them as potential threats” (Gruszczak & Kaempf, 2024, p. 6). Russia’s false ballot-tampering videos and China’s fake American profiles erode trust in democratic institutions, aligning with Morkevičius’ concern that “tactics that significantly undermine order are morally impermissible” (Gruszczak & Kaempf, 2024, p. 4). Shedd and Stradner (2024) reinforce this, noting that “Russian operatives have spread incendiary messages on social media about hot-button issues… to deepen polarization and fractiousness in the United States” (p. 2). The deliberate targeting of societal cleavages, such as racial tensions or political divides, exacerbates disorder, threatening the moral end of peaceful coexistence.
Virtue Ethics further condemns these actions as violations of core character traits. The virtue of justice is undermined when foreign actors destabilize democratic institutions, as seen in Iran’s leaks designed to “foment unrest among Americans” (Shedd & Stradner, 2024, p. 4). Honesty is compromised by spin and outright lying, which Morkevičius deems unjustifiable: “Lying is unjustifiable, for reasons drawn both from a pragmatic perspective and virtue ethics” (Gruszczak & Kaempf, 2024, p. 5). These deceptive practices erode the trust necessary for democratic discourse, violating the virtue of temperance by prioritizing chaos over moderation. If the U.S. were to adopt similar tactics in foreign elections, it would violate these same virtues, deceiving foreign publics and risking escalation of global mistrust. Morkevičius’ principle of proportionality, which “encourages the economy of force” (Gruszczak & Kaempf, 2024, p. 4), suggests that the U.S. should pursue defensive measures, such as declassifying disinformation evidence, as Shedd and Stradner (2024) propose: “The Cybersecurity and Infrastructure Security Agency… should review and declassify material about information warfare” (p. 5). This approach aligns with the virtue of prudence, minimizing harm while fostering trust.
Moreover, Morkevičius’ emphasis on necessity requires that actions be essential to achieve moral ends (Gruszczak & Kaempf, 2024). Russia, China, and Iran’s indiscriminate disinformation campaigns fail this test, causing widespread harm without apparent strategic necessity, thus violating temperance. The U.S. ought to avoid such tactics, instead using platforms like Radio Free Europe, as Shedd and Stradner (2024) recommend, to promote truthful narratives that uphold justice and order (p. 7). Morkevičius’ principle of order likewise underscores that actions should not sow widespread disorder akin to humanitarian crises (Gruszczak & Kaempf, 2024). These states’ disinformation campaigns destabilize civic trust and democratic order, creating chaos that is contrary to the moral telos of peaceful coexistence. Conversely, U.S. countermeasures should be informed by virtuous restraint, using open communication to stabilize trust and order, as Shedd and Stradner (2024) suggest through coordinated efforts to expose foreign interference.
Should the U.S. mirror these deceptive techniques in foreign elections, it would be engaging in information warfare and would be morally indefensible under Virtue Ethics. Spreading false narratives or manipulating social media trends would betray honesty and erode trust both domestically and internationally. Such actions could also fuel tensions, as foreign governments and publics grow wary of U.S. intentions, destabilizing diplomacy on a broad scale. Virtue Ethics requires integrity and transparency, compelling the U.S. to counter disinformation openly. Shedd and Stradner (2024) support this assertion, stating that “The United States should not just fend off foreign adversaries by publicly exposing their actions; it should go on the offensive” with fact-based narratives (p. 7). This embodies courage, underscoring a commitment to democratic values, as well as prudence. Transparent engagement allows the U.S. to model virtuous leadership while effectively countering disinformation and upholding the ethical principles that sustain global cooperation.
Conclusion
In summary, Russia, China, and Iran’s interference in U.S. elections constitutes information warfare, since their manipulative efforts intimidate voters and fracture democratic solidarity, violating the virtues of honesty, justice, and temperance. Prier’s “commanding the trend” model demonstrates how these efforts are amplified through social media, while Morkevičius’ jus ad vim framework reveals their coercive nature and moral impermissibility. Similarly, U.S. interference in foreign elections using deception would be information warfare and unjustifiable, contradicting virtuous governance. An objection might argue that geopolitical necessity justifies such tactics to counter adversaries. However, this risks escalating mistrust and undermining moral credibility, as “tactics that significantly undermine order are morally impermissible” (Gruszczak & Kaempf, 2024, p. 4). A counterview might propose limited, transparent influence operations, but Virtue Ethics prioritizes honesty to maintain trust. The broader implication is that democracies must model transparency to counter disinformation, as Shedd and Stradner (2024) advocate, while acknowledging the challenge of measuring proportionality in information warfare’s diffuse impacts. A limitation of this argument is the difficulty of quantifying psychological harm, necessitating further ethical scrutiny to refine responses to this evolving threat.
References
Gruszczak, A., & Kaempf, S. (Eds.). (2024). Routledge handbook of the future of warfare. Routledge. https://doi.org/10.4324/9781003299011
Prier, J. (2017). Commanding the trend: Social media as information warfare. Strategic Studies Quarterly, 11(4), 50–85.
Shedd, D. R., & Stradner, I. (2024, October 29). The covert war for American minds: How Russia, China, and Iran seek to spread disinformation and chaos in the United States. Foreign Affairs. https://www.foreignaffairs.com/united-states/covert-war-american-minds
Moreover, Morkevičius’ emphasis on necessity requires that actions be essential to achieving moral ends (Gruszczak & Kaempf, 2024). The indiscriminate disinformation campaigns of Russia, China, and Iran fail this test, causing widespread harm without apparent strategic necessity and thus violating temperance. The U.S. ought to avoid such tactics and instead use platforms such as Radio Free Europe, as Shedd and Stradner (2024) recommend, to promote truthful narratives that uphold justice and order (p. 7). Morkevičius’ principle of order likewise underscores that actions should not sow widespread disorder akin to humanitarian crises (Gruszczak & Kaempf, 2024). These states’ disinformation campaigns destabilize civic trust and democratic order, creating chaos contrary to the moral telos of peaceful coexistence. Conversely, U.S. countermeasures must be informed by virtuous restraint, using open communication to stabilize trust and order, as Shedd and Stradner (2024) suggest through coordinated efforts to expose foreign interference.
Should the U.S. mirror these deceptive techniques in foreign elections, it would itself be waging information warfare, and its conduct would be morally indefensible under Virtue Ethics. Spreading false narratives or manipulating social media would betray honesty and erode trust both domestically and internationally. Such actions could also fuel tensions as foreign governments and publics grow wary of U.S. intentions, destabilizing diplomacy on a broad scale. Virtue Ethics requires integrity and transparency, compelling the U.S. to counter disinformation openly. Shedd and Stradner (2024) support this assertion, stating that “The United States should not just fend off foreign adversaries by publicly exposing their actions; it should go on the offensive” with fact-based narratives (p. 7). Such a stance embodies courage, underscoring a commitment to democratic values, as well as prudence. Transparent engagement allows the U.S. to model virtuous leadership and to counter disinformation effectively while adhering to the ethical principles that sustain global cooperation.
Conclusion
In summary, Russia, China, and Iran’s interference in U.S. elections constitutes information warfare: their manipulative efforts intimidate voters and fracture democratic solidarity, violating the virtues of honesty, justice, and temperance. Prier’s “commanding the trend” model demonstrates how these efforts are amplified through social media, while Morkevičius’ jus ad vim framework reveals their coercive nature and moral impermissibility. Similarly, U.S. interference in foreign elections using deception would be information warfare and unjustifiable, contradicting virtuous governance. An objection might argue that geopolitical necessity justifies such tactics to counter adversaries. However, this risks escalating mistrust and undermining moral credibility, as “tactics that significantly undermine order are morally impermissible” (Gruszczak & Kaempf, 2024, p. 4). A counterview might propose limited, transparent influence operations, but Virtue Ethics prioritizes honesty to maintain trust. The broader implication is that democracies must model transparency to counter disinformation, as Shedd and Stradner (2024) advocate, while acknowledging the challenge of measuring proportionality in information warfare’s diffuse impacts. A limitation of this argument is the difficulty of quantifying psychological harm, which necessitates further ethical scrutiny to refine responses to this evolving threat.
References
Gruszczak, A., & Kaempf, S. (Eds.). (2024). Routledge handbook of the future of warfare. Routledge. https://doi.org/10.4324/9781003299011
Prier, J. (2017). Commanding the trend: Social media as information warfare. Strategic Studies Quarterly, 11(4), 50–85.
Shedd, D. R., & Stradner, I. (2024, October 29). The covert war for American minds: How Russia, China, and Iran seek to spread disinformation and chaos in the United States. Foreign Affairs. https://www.foreignaffairs.com/united-states/covert-war-american-minds
Reflective Writing
This semester’s learning has profoundly enhanced my understanding of moral responsibility and deepened my knowledge of the context of my planned career in cybersecurity. The course materials, ranging from information warfare to cyberwarfare, data misuse, and diverse ethical frameworks, have provided me with insight into the ethical challenges I will face in safeguarding digital systems and maintaining societal trust. These lessons are central to my professional aspirations and to managing the complexities of modern life in a technology-driven world.
The case analysis of information warfare, drawing on Shedd and Stradner (2024) and Prier (2017), foregrounded the extent to which state actors like Russia, China, and Iran manipulate public opinion through social media to destabilize democracies. Virtue Ethics highlighted the ethical bankruptcy of manipulative acts, which compromise trust and justice. In cybersecurity, this calls for identifying and countering disinformation campaigns. My work will involve developing mechanisms for detecting AI-powered bot farms and false profiles, ensuring the integrity of information flows. This aligns with the virtue of honesty, since I must remain transparent and maintain public trust in online platforms.
The analysis of cyberwarfare, drawing on Veeneman (2023) and Taddeo (2012), focused on the moral issues of attacking civilian targets during wars such as the Israel-Hamas conflict. Under utilitarianism, which seeks the greatest benefit for the greatest number, such attacks cause disproportionate harm and violate the duty to protect non-combatants. In cybersecurity, this means building defenses that prioritize protecting civilians, including securing critical systems such as power grids and warning alerts. The principle of minimizing harm will guide my work in preventing cyberattacks that disrupt essential services, ensuring societal stability.
The Cambridge Analytica case (Chang, 2018) and Sourour’s (2016) experience with deceptive coding further illustrated the ethical perils of data misuse. Deontological ethics and professional codes (ACM, 2018; IEEE, 2020) underscored the duty to uphold transparency and avoid harm. As a cybersecurity professional, I will advocate for ethical data practices, ensuring user consent and protecting against manipulative algorithms. This responsibility extends to my personal life, where I must critically evaluate the information I encounter and share.
Literary analyses, such as Butler’s The Evening and the Morning and the Night and Chen’s The Year of the Rat, introduced contractarianism and Ruism, emphasizing social responsibility and role-based ethics. These perspectives will inform my approach to fostering ethical collaboration in cybersecurity teams, balancing individual duties with collective goals. Ubuntu’s focus on communal identity, as explored in Dick’s The Little Black Box, reminds me that my work impacts broader societal networks, reinforcing my commitment to ethical decision-making.
In conclusion, this course has equipped me with a robust ethical framework to navigate cybersecurity challenges. By applying Virtue Ethics, utilitarianism, deontology, and professional codes, I will strive to protect digital infrastructures while upholding honesty, justice, and societal well-being. These principles will also guide my interactions, ensuring I contribute to a trustworthy and interconnected world.
References
ACM. (2018). ACM Code of Ethics and Professional Conduct. Association for Computing Machinery.
Chang, A. (2018). The Facebook and Cambridge Analytica scandal is explained in a simple diagram. Vox.
IEEE. (2020). IEEE Code of Ethics. Institute of Electrical and Electronics Engineers.
Prier, J. (2017). Commanding the trend: Social media as information warfare. Strategic Studies Quarterly, 11(4), 50–85.
Shedd, D. R., & Stradner, I. (2024, October 29). The covert war for American minds: How Russia, China, and Iran seek to spread disinformation and chaos in the United States. Foreign Affairs. https://www.foreignaffairs.com/united-states/covert-war-american-minds
Sourour, B. (2016). The code I am still ashamed of. FreeCodeCamp.
Taddeo, M. (2012). An information-based solution for the puzzle of cyberwarfare. Philosophy & Technology, 25(2), 149–164.
Veeneman, M. (2023). The law of war and cyberwarfare: An ethical analysis of the Israel-Hamas conflict. Small Wars Journal.
Career Paper: The Role of Cybersecurity Analysts in Social Science
In a contemporary society where security issues are deeply interconnected with innovative technology, the job of a cybersecurity analyst is vital. These professionals safeguard companies’ information and counter threats to computer networks and systems. However, their work goes beyond technical expertise alone. Cybersecurity analysts draw on the methods and findings of social science to interpret and address the social dimensions central to cybersecurity threats and countermeasures. This paper discusses how social science principles are applied in the work of cybersecurity professionals and their implications for marginalized groups and the general public.
Understanding Human Behavior in Cybersecurity
Cybersecurity analysts rely on social science in many ways, most commonly in understanding human behavior. Cybersecurity threats can be described as human actions that negatively affect the integrity and security of an organization’s computer systems and networks. Social science scholarship can help explain why people engage in behaviors that expose them to online attack, fall for phishing schemes, or become insider threats to their organizations. For instance, behavioral economics can help explain choices that compromise security measures: when some parties hold more information than others, this asymmetry is likely to produce market failures (Emerson, 2019). Cybersecurity analysts apply these principles to design strategies that reduce such asymmetries while making users aware of the risks and consequences of their online actions.
In addition, psychology can help make security awareness programs more effective. Understanding the biases and triggers that shape human behavior allows analysts to design training materials that actually change users’ risky online behavior. For instance, theories of risk perception, which describe how users perceive and experience security-relevant risks and challenges, can enable analysts to create interventions compatible with users’ mental models.
Social Engineering and Ethical Decision-Making
Social engineering attacks exploit human behavior by manipulating people into disclosing sensitive data. Cybersecurity analysts must understand the nature of social engineering and how social science principles can be used to develop countermeasures. This encompasses learning about pressure, coercion, and the other psychological tactics attackers use to compromise their targets. The Sridhar and Ng (2021) article points to the importance of ethical issues arising in cybersecurity. One of the problematic choices analysts face is whether privacy rights override security concerns, echoing society’s broader surveillance dilemma. Using ethical theory from social science, analysts are positioned to navigate such issues and work within the bounds of the law and what is right.
The Role of Cybersecurity Analysts in Society
Cybersecurity analysts have a critical responsibility to protect vulnerable populations and combat social injustices. The digital divide marginalizes specific communities, making them easy targets for cybercriminals. Analysts must be aware of these disparities and pursue inclusive, equity-sensitive cybersecurity strategies. Social contract theory focuses on the understanding between people and the authorities (Seabright et al., 2021). Analysts uphold this contract by ensuring that vulnerable groups receive equal cybersecurity opportunities and protection. Security work also requires supporting policies that reduce the digital divide and developing security solutions that fit their intended users.
Cybersecurity analysts also help foster trust and openness in society. According to the Tech (n.d.) article, clarity and timeliness in communication should always be maintained. By reporting breaches and weaknesses in this way, analysts build credibility within social groups, strengthening the social fabric and encouraging collective safeguarding strategies.
Addressing Cybersecurity’s Impact on Society
Cybersecurity analysts are also responsible for addressing the implications of their work for the larger society. Cyber threats affect the availability of services, citizens’ trust, and the stability of the financial system. Analysts apply social science theories to assess the societal effects of cyber incidents and to establish strategies that can prevent such impacts. The risk society concept holds that in today’s societies, risk management is a prominent concern (Risk Society Summary PDF | Ulrich Beck, n.d.). Analysts apply this idea by taking proactive steps to avert the dangers of cyber threats and by raising public awareness of potential hazards. This proactive approach replaces reactive defense measures with more effective risk management that protects society.
Conclusion
Security analysts have a broader function that extends beyond computing to the social and psychological understanding needed to prevent human error within an organization. By incorporating theories from behavioral economics, psychology, and social psychology, analysts can design strategies to secure digital assets and raise general awareness of security risks. They create social benefits by increasing representation for marginalized people in discussions about digital rights and by helping to foster a more secure and equitable society. In the modern world, it is almost impossible to overestimate the role of cybersecurity analysts in the ever-developing digital environment as the link between people and technology, keeping society safe from potential threats.
References
Emerson, P. M. (2019, October 28). Module 22: Asymmetric information. Oregon State University. https://open.oregonstate.education/intermediatemicroeconomics/chapter/module-22/
Risk society summary PDF | Ulrich Beck. (n.d.). Bookey. Retrieved July 20, 2024, from https://www.bookey.app/book/risk-society
Seabright, P., Stieglitz, J., & Van der Straeten, K. (2021). Evaluating social contract theory in the light of evolutionary social science. Evolutionary Human Sciences, 3, e20.
Sridhar, K., & Ng, M. (2021). Hacking for good: Leveraging Hacker One data to develop an economic model of Bug Bounties. Journal of Cybersecurity, 7(1), tyab007.
Tech, C. (n.d.). 11 Illegal Things You Unknowingly Do on the Internet. Clario.co. https://clario.co/blog/illegal-things-you-do-online/#h2_9
Journal Entry 14: The Connection between a Career in Digital Forensics Investigators and the Social Sciences
TEDx Talks’ (2015) “Digital Forensics | Davin Teo | TEDxHongKongSalon” expounds on the role of digital forensic experts and the strategies they use to handle professional challenges and crimes. Davin Teo, a digital forensic investigator, talks about his journey into the profession, which has taken him to different places, including China, Australia, and London. His professional journey began when he pursued a law degree, which built a strong foundation for understanding human behavior and its legal implications. However, as a technology enthusiast, he shifted to digital forensics because he wanted to understand the field and its interactions with human actions. After entering the profession, he realized that it requires professionals to combine technical expertise with an understanding of human behavior, motivations, and legal structures.
His career demonstrates that digital forensic investigators must have technical expertise along with the capacity to interpret digital data within its social context, because human behavior drives digital crimes. Teo’s profession requires him to understand criminal psychology and social trends. He also has to understand the ethical consequences of digital footprint investigations so that his actions remain morally sound. In such roles, the social sciences provide crucial insights into people’s motivations for engaging in cybercrime, their interactions with digital spaces, and the societal impacts of digital crimes.
Most importantly, in his TEDx talk, Teo mentions that digital forensic professionals should use multidisciplinary approaches to handle digital forensic issues. The strategy entails applying technological knowledge while evaluating human behavior to understand the motivations behind digital crimes. His career path shows that digital forensics is a technical field that also requires professionals to understand human nature and motivations, social structures, and social justice. Thus, the profession demonstrates the close interconnection between technology and the social sciences, helping professionals solve modern-day digital crimes.
References
TEDx Talks. (2015). Digital Forensics | Davin Teo | TEDxHongKongSalon [Video]. YouTube. https://www.youtube.com/watch?v=Pf-JnQfAEew
Journal Entry 12: Understanding Internet Activities That May Be Illegal
The Internet is highly relevant today as an essential tool in people’s lives, providing convenience and information. But, as Andriy Slynchuk has pointed out, many things people do on the Web are unlawful even though most are unaware of it (Tech, n.d.). Downloading movies and music through unauthorized applications and torrents is quite popular, but it is entirely unlawful. Such actions infringe copyright law and can lead to prosecution. Most users fail to consider the legal issues, drawn instead by the simplicity and low cost. Likewise, unauthorized use of copyrighted images, whether for business or personal purposes, can constitute copyright infringement. Only an image that is copyright-free, or for which the copyright holder has granted permission, can be used.
In addition, it is unlawful to post other people’s passwords, addresses, photos, or any other personal information without their permission. Posting someone’s address with ill intent is a clear violation of their privacy and could result in legal consequences. Another common challenge is cyberbullying, together with related bullying and trolling, facilitated by the anonymity of the Internet (Tech, n.d.). These behaviors are covered by laws, with severe cases even attracting criminal charges. Another relatively unknown but unlawful act is recording VoIP calls without the consent of the involved parties. One must get the permission of the involved persons before recording a conversation; failing to obtain this consent could lead to legal consequences.
Additionally, misrepresenting one’s identity online, whether by using another individual’s credentials or by giving a false age, has consequences. This practice violates terms of service and may lead to legal problems if it involves fraud or deception. Using others’ internet networks without permission amounts to theft, since it exploits resources the owners pay for. Likewise, unauthorized data collection from anyone below the age of 13 violates the Children’s Online Privacy Protection Act (COPPA), which protects children’s privacy on the internet.
In addition, ripping music from YouTube without authorization and conducting unlawful searches online are other little-known hazards. YouTube’s terms of service prohibit downloading or ripping content, which also violates copyright rules protecting the intellectual property rights of content creators (Tech, n.d.). Certain search keywords may raise suspicion with law enforcement agencies; although Google does not notify users, its algorithms may flag or monitor suspicious activity. To avoid legal issues, users should pay more attention to the privacy and security of their browsing habits, through stronger passwords, private browsing, and safe, anonymous browsing via a VPN, which makes such searches far safer and more responsible.
References
Tech, C. (n.d.). 11 Illegal Things You Unknowingly Do on the Internet. Clario.co. https://clario.co/blog/illegal-things-you-do-online/#h2_9
Discussion Board 14: Application of Routine Activities Theories on Cybercrime
CBS Mornings’ (2021) “Cybercrime spikes as online holiday shopping picks up” explains that online shopping surged during the pandemic, and cybercriminals were aware of it. As stipulated in the video, the Federal Trade Commission reported that scams on social media tripled during the outbreak, and many people fell victim to these crimes. For instance, one consumer bought a riding dinosaur that turned out to be a fake toy, even though it looked like the original on the scammer’s site. Many other online buyers were scammed as well, especially during the Black Friday mega sales, illustrating the spike in cybercrime as online holiday shopping picks up.
The events in the video relate to routine activity theory because the theory stipulates that crime revolves around three things. It explains that crime happens because of a “potential offender, a suitable target, and the absence of a capable guardian” (Andresen & Ha, 2018). According to the theory, crime trends escalate when suitable targets become available, especially due to changes in people’s routine activities and the presence of more people to target. Though not uniform across individuals and communities, these changes significantly impact people at the societal level. The theory thus relates to the phenomenon depicted in the video because it posits that crime occurs when suitable targets are available and guardianship is absent, motivating criminals to act. During the festivities, most people rushed to shop online, and cybercriminals were motivated to exploit the period of minimal resistance or oversight. Most online shoppers were unaware that scammers stood ready to trick them into purchasing items that did not meet the stipulated requirements, because the increase in shopping activity was not matched by cybersecurity measures to protect or educate buyers.
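The theory’s three conditions can be sketched as a minimal predicate. This is an illustrative toy model of my own, not something from Andresen and Ha (2018) or the video:

```python
def crime_likely(motivated_offender: bool,
                 suitable_target: bool,
                 capable_guardian: bool) -> bool:
    """Routine activity theory as a predicate: crime is likely only when a
    motivated offender converges with a suitable target in the absence of a
    capable guardian (the theory's three elements)."""
    return motivated_offender and suitable_target and not capable_guardian

# Holiday-shopping scenario from the video: many scammers, many buyers,
# minimal oversight -- all three conditions are met.
print(crime_likely(True, True, False))   # True
# With a capable guardian (e.g., platform fraud screening), the prediction flips.
print(crime_likely(True, True, True))    # False
```

Removing any one of the three elements makes the predicate false, which mirrors the theory’s claim that crime prevention can target any of the three conditions.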
References
Andresen, M. A., & Ha, O. K. (2018). Routine activity theory. In The Routledge companion to criminological theory and concepts (pp. 536-539). Routledge.
CBS Mornings. (2021). Cybercrime spikes as online holiday shopping picks up [Video]. YouTube. https://www.youtube.com/watch?v=DOGUzKJ7Hog
Journal 11: Entry 2
The article on bug bounty policies emphasizes their relevance to cybersecurity by analyzing economic approaches toward them. The literature review underlines that bug bounties are an alternative model to penetration testing, relying on gig workers rather than specialized researchers. This makes cybersecurity talent accessible to companies regardless of their size or market influence (Sridhar & Ng, 2021). In particular, the review specifies that hackers involved in such programs are motivated by factors beyond financial incentives, including experience and reputation, considerations that extend and supplement the programs’ economic rewards.
The discussion of findings gives rise to several insights. Firstly, applying the concept of price elasticity, the supply of hackers turns out not to be highly elastic, meaning they are not motivated mostly by monetary gains. Such inelasticity is especially pronounced among young hackers eager to build awareness of their work. Secondly, the analysis shows that company revenue and brand profile have statistically significant effects on the number of vulnerabilities companies receive, but the economic significance of these effects is negligible (Sridhar & Ng, 2021). This implies that bug bounties work for any company, large or small, as they level the playing field, giving all companies a comparable ability to strengthen their security systems.
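The elasticity concept invoked here can be made concrete with a short sketch. The numbers below are hypothetical, chosen only to illustrate what an inelastic hacker supply looks like; they are not figures from Sridhar and Ng (2021):

```python
def price_elasticity(q1: float, q2: float, p1: float, p2: float) -> float:
    """Arc (midpoint) price elasticity: the percentage change in quantity
    supplied divided by the percentage change in price. |e| < 1 means the
    supply is inelastic -- quantity responds weakly to price."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Hypothetical program: doubling the average bounty from $500 to $1000
# raises monthly reports only from 40 to 44.
e = price_elasticity(40, 44, 500, 1000)
print(round(e, 2))  # 0.14 -- far below 1, i.e., highly inelastic
```

An elasticity well below 1, as in this toy example, matches the paper’s finding that hackers are not motivated mostly by monetary gains.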
Cross-sector analysis shows that financial and retail businesses receive fewer reports, likely because the cost of disclosing weaknesses is higher in these industries. The launch of new programs neither reduces nor increases the volume of reports to existing ones, indicating a steady supply of participating hackers (Sridhar & Ng, 2021). Moreover, older programs receive fewer reports over time, meaning bounties must be readjusted periodically to keep hackers interested.
In conclusion, the article highlights how bug bounty policies efficiently draw hackers’ interest by appealing to both monetary and reputational incentives. It underscores the need for future research to further identify the factors that affect hacker behavior and the long-term stability of such programs.
References
Sridhar, K., & Ng, M. (2021). Hacking for good: Leveraging HackerOne data to develop an economic model of bug bounties. Journal of Cybersecurity, 7(1), tyab007.
Article Analysis
Review of Social Science and Cybersecurity Articles
Introduction
The articles reviewed in the annotated bibliography address cybersecurity themes from a social science perspective. This review examines how these topics link to the principles of the social sciences, the studies’ research questions or hypotheses, the research methodologies used, how the data were analyzed, connections to concepts covered in class, the degree of attention given to marginalized groups, and the societal impact of these topics.
1. Relation to Social Science Principles
As the critical approaches in these articles illustrate, the social sciences are clearly present in cybersecurity. For instance, Dwyer et al. (2022) step outside the box, questioning power relations and presenting a case for justice in cybersecurity. Their approach corresponds with many tenets of the social sciences, especially those concerning power, equity, and the rights of suffering or oppressed groups.
In the same vein, Wu et al. (2022) focus on social cybersecurity, which incorporates social aspects into technical ones. This approach foregrounds behavioural and social relations in formulating security and combating cybercrime, aligning well with the social science disciplines.
2. Research Questions and Hypotheses
Each study poses unique research questions that bridge the gap between cybersecurity and social sciences:
• Dwyer et al. (2022): What does critical cybersecurity entail, and how can it aid the progression of social justice against cyber threats?
• Medoh and Telukdarie (2022): This paper seeks to answer two questions: 1. What are the consequences of the Fourth Industrial Revolution for cybersecurity? 2. Can System Dynamics Modeling (SDM) assist in the strategic planning and execution of cybersecurity measures?
• Wu et al. (2022): Which social factors should be incorporated into the technical security models to improve the handling of security and privacy issues?
• Popoola et al. (2024): What influences the execution of cybersecurity awareness training programs in Africa and the United States?
3. Research Methods
The articles employ various research methods:
• Dwyer et al. (2022): Qualitative research drawing on critical theory and political sociology to assess current cybersecurity strategies and present new concepts.
• Medoh and Telukdarie (2022): System Dynamics Modeling (SDM) to model and predict the effects of cybersecurity initiatives and future undertakings.
• Wu et al. (2022): A Systematization of Knowledge (SoK) approach to classify the literature and identify research gaps in social cybersecurity.
• Popoola et al. (2024): A comparative literature analysis of the theoretical constructs behind cybersecurity awareness and training programs in different cultural and economic environments.
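The SDM method mentioned above rests on stock-and-flow simulation. The following is a minimal illustrative sketch of that technique with made-up parameters; it is not the model Medoh and Telukdarie (2022) actually built:

```python
def simulate_vulnerability_stock(steps: int = 12, dt: float = 1.0,
                                 initial_stock: float = 100.0,
                                 discovery_rate: float = 20.0,
                                 patch_fraction: float = 0.25) -> list[float]:
    """Toy stock-and-flow model, integrated with Euler steps: the stock of
    open vulnerabilities rises with a constant discovery inflow and falls
    as a fixed fraction of the stock is patched each period."""
    stock = initial_stock
    history = [stock]
    for _ in range(steps):
        inflow = discovery_rate          # new vulnerabilities found per period
        outflow = patch_fraction * stock  # fraction of the backlog patched
        stock += dt * (inflow - outflow)
        history.append(stock)
    return history

# The stock converges toward the equilibrium discovery_rate / patch_fraction = 80,
# letting planners ask "what if we double patching capacity?" before committing.
print(round(simulate_vulnerability_stock()[-1], 1))
```

This kind of what-if experimentation, varying inflow and outflow parameters and watching the stock’s trajectory, is the essence of using system dynamics for strategic cybersecurity planning.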
4. Data and Analysis
• Dwyer et al. (2022): Qualitative analysis is employed to question the prevailing structures regulating cybersecurity conduct and to support social justice formations.
• Medoh and Telukdarie (2022): Industry input and dynamic modelling data are combined to build a living model for cybersecurity planning.
• Wu et al. (2022): Assessment of the current literature to establish trends and gaps in social cybersecurity practices.
• Popoola et al. (2024): A comparison of program designs, their implementation, and their results across the two regions, to point out inconsistencies and make recommendations.
5. Class Concepts
The articles connect with concepts explored in social science classes, including power relations, fairness, behavioural science, and society’s adaptation to technology. For instance, Dwyer et al. (2022) describe the problem of power dynamics and stress the need to change the cybersecurity paradigm to reflect theories of social justice and equity. Similarly, Wu et al. (2022) pay special attention to social behaviour within security practices, touching on issues of social psychology and human-computer interaction.
Relevance to Marginalized Groups
The studies highlight the importance of considering marginalized groups in cybersecurity:
• Dwyer et al. (2022): Develop a new, more critical understanding of cybersecurity focused on the vulnerabilities and power issues affecting marginalized communities.
• Medoh and Telukdarie (2022): Emphasize the role of creating segmented cybersecurity policies that consider the consequences of digitalization for different parts of society.
• Wu et al. (2022): Stress the importance of incorporating social factors into existing security approaches, since doing so might safeguard members of threatened groups by addressing the individual capabilities they need to adopt the security options provided.
• Popoola et al. (2024): Contrast cybersecurity awareness programs in Africa with those in the USA, examining how factors such as culture and economy influence outcomes and why a one-size-fits-all solution is inadequate.
6. Overall Contributions to Society
Collectively, the studies are insightful because they offer novel strategies for more equitable, efficient, and socially conscious cybersecurity. However advanced the field’s technical work is, it owes it to society to take the lead from Dwyer et al. (2022) by integrating social justice into its workflows to enhance fairness and protection. Medoh and Telukdarie (2022) provide a dynamic model that will enable organizations to develop a strategic cybersecurity plan amidst these advancements. Wu et al. (2022) offer a framework that explains how social factors can be included in cybersecurity, which will, in turn, lead to more secure and user-friendly solutions. In their work, Popoola et al. (2024) explain the cultural and economic differences that can be considered when improving cybersecurity education internationally.
Conclusion
The reviewed articles advance the discussion about the gap between the social sciences and cybersecurity and the need to factor social justice, human behaviour, and cultural perceptions into cybersecurity solutions. These studies call for synergy in research efforts across the disciplines to deal with complicated cybersecurity problems and protect vulnerable communities. By engaging social science concepts, they provide valuable input toward better, more open, and fairer cybersecurity practices that support society as a whole.
References
Dwyer, A. C., Stevens, C., Muller, L. P., Cavelty, M. D., Coles-Kemp, L., & Thornton, P. (2022). What can a critical cybersecurity do? International Political Sociology, 16(3), olac013.
Medoh, C., & Telukdarie, A. (2022). The future of cybersecurity: A system dynamics approach. Procedia Computer Science, 200, 318-326.
Popoola, O. A., Akinsanya, M. O., Nzeako, G., Chukwurah, E. G., & Okeke, C. D. (2024). Exploring theoretical constructs of cybersecurity awareness and training programs: a comparative analysis of African and US Initiatives. International Journal of Applied Research in Social Sciences, 6(5), 819-827.
Wu, Y., Edwards, W. K., & Das, S. (2022, May). SoK: Social Cybersecurity. In 2022 IEEE Symposium on Security and Privacy (SP) (pp. 1863-1879). IEEE.
Journal Entry 9: Social Media and Cybersecurity
My score on the social media disorder scale shows that I am addicted to social media, as every aspect of my life, including my relationships with friends and family, revolves around my access to social sites. For instance, I tend to be significantly preoccupied and dissatisfied when I cannot access social media, and in most cases I experience withdrawal feelings when I am offline. Besides, my efforts to reduce my time on social media have been fruitless, ultimately causing me to neglect other important activities and to rely on social platforms to handle negative emotions. As such, I have found myself deceiving, or in conflict with, my friends and family in order to spend time surfing.
The high score on the social media disorder scale portrays that I significantly depend on these sites, which is detrimental to my health, relationships, and daily routines. Beyond impacting my relationships and daily tasks, spending so much time sharing my life on social media exposes me to cybersecurity threats, as attackers are always looking for weaknesses to exploit on social sites. Trend Micro (2019) notes that social media users engage in risky behaviors, including accepting friend requests from unknown and suspicious users or sharing posts that expose private information hackers can use to breach their accounts. Given my social media habits, I am likelier to experience such attacks, as I may unconsciously expose my private information to malicious third parties. Because most criminals hide behind fake profiles, it is hard for social media users like me to know when they are being tricked into sharing sensitive data. I need to reflect on the pervasive influence of social media on my life to ensure I balance its positive impacts without compromising my overall wellness.
References
Trend Micro. (2019). How Cybercriminals Can Use Your Social Media Activity Against You [Video]. YouTube.