Write-Up: Hacking Humans
Noel Considine
June 8, 2025
As interest rises in the promised health benefits and the insight into our families and our own pasts, DNA digitization grows. Direct-to-consumer DNA testing has become an increasingly popular way for people to learn about their heritage and family history, and it may even advance medical research. However, a growing threat looms alongside it. Digitizing our DNA leaves us vulnerable to hackers and cyber threats that could breach this data and use it for nefarious purposes, posing the question: is digitizing DNA worth the risk? A recent breach at MyHeritage, a popular genealogy platform, exposed the details of over 92 million users, leaving them vulnerable to having their data used or sold. Large-scale breaches like this highlight the need for rigorous cybersecurity measures as technological advancements like DNA digitization continue.
An emerging term for the cybersecurity practice defending against these threats is 'cyberbiosecurity', which focuses on risks to digitized DNA and what could happen if it leaks. As sources have repeatedly noted, digitized DNA poses serious questions about valid uses and ethical guidelines. For example, what if DNA collected by a private organization posing as a 'genealogy platform' ends up in a police database and leads to an arrest in an ongoing or previous case involving DNA evidence? Large-scale DNA digitization has also surely caught the attention of dark web hackers, eager to get their hands on something far more valuable and personal than a Social Security number.
Though the digitization of DNA carries many risks, our technological progression is inevitable, and the security of digitized DNA must be treated with the utmost importance. If it is not, there is no telling what impact this could have on identity fraud and DNA-based crime.
References
Rizkallah, J. (n.d.). Hacking humans: Protecting our DNA from cybercriminals [PDF]. Google Drive. https://drive.google.com/file/d/17vZTrd3tyRkIuXtLfYKSeZypU7WpCkmM/view?pli=1
Write Up: CIA Triad
Noel Considine
May 25, 2025
The CIA triad consists of three foundational principles considered the most important concepts in information security. Confidentiality, integrity, and availability make up the triad and guide information security policy inside an organization. Confidentiality measures are designed to block sensitive information from being accessed by unauthorized personnel. Data is typically stored and categorized in a hierarchical system, making the most sensitive information accessible only to those who are authorized and need access to it. This protects an organization's sensitive data and ensures data privacy, guarding against the damage that could follow if the data were acquired by someone with ill intent. Integrity refers to maintaining the consistency and trustworthiness of data over the course of its lifecycle; it protects data against unauthorized alteration both in storage and in transit. Availability means that data should be readily accessible to those who are authorized. Systems must remain accessible, creating a need for constant maintenance of hardware and software and for ensuring that the systems that store and present the data function properly.
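The integrity principle described above is commonly enforced in practice with cryptographic hashes: if even one byte of data changes in storage or transit, its digest changes, revealing the alteration. The sketch below is a minimal, hypothetical illustration in Python using the standard library; the file and data names are invented for the example.

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demonstrate the idea on in-memory data (hypothetical record names):
original = b"quarterly-report-v1"
stored_digest = hashlib.sha256(original).hexdigest()  # recorded at storage time

tampered = b"quarterly-report-v1-ALTERED"

# Unchanged data matches the recorded digest; altered data does not.
assert hashlib.sha256(original).hexdigest() == stored_digest
assert hashlib.sha256(tampered).hexdigest() != stored_digest
```

In a real system the recorded digest would itself be protected (for example, signed or stored separately), since an attacker who can alter both the data and its digest defeats the check.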
Though both authentication and authorization are fundamental to securing systems and data, there are key distinctions between the two. Authentication is the confirmation of the identity of a person, system, or device before access is granted to data, resources, or a system. This process prevents unauthorized access to sensitive data or systems. The most basic form of authentication, used on nearly every service, is the password, sometimes governed by complexity guidelines to ensure strong passwords. Today there is a more extensive list of available measures, ranging from two-factor authentication to one-time codes. Authorization is the step that typically follows successful authentication. Once users have confirmed their identity, they are authorized to access only what they are permitted. The authorization step also protects against people gaining access to data above their clearance level. Even if someone has confirmed their identity and logged in, authorization prevents them from accessing data the system has not granted them. The system checks whether they have been granted access and then accepts or denies the request accordingly.
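The two-step flow described above can be sketched in a few lines of Python. This is a minimal illustration, not a production design: the in-memory user store, the role names, and the permission table are all invented for the example, and a real system would use a vetted identity framework rather than hand-rolled checks.

```python
import hashlib
import hmac
import os

# Hypothetical in-memory user store: salted password hashes plus a role.
def make_record(password: str, role: str) -> dict:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return {"salt": salt, "digest": digest, "role": role}

USERS = {
    "alice": make_record("s3cret!", "admin"),
    "bob": make_record("hunter2", "viewer"),
}

# Authorization policy: which actions each role may perform.
PERMISSIONS = {"admin": {"read", "write"}, "viewer": {"read"}}

def authenticate(username: str, password: str) -> bool:
    """Step 1: confirm identity by checking the presented credentials."""
    record = USERS.get(username)
    if record is None:
        return False
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), record["salt"], 100_000)
    return hmac.compare_digest(digest, record["digest"])  # constant-time compare

def authorize(username: str, action: str) -> bool:
    """Step 2: check whether the confirmed identity may perform the action."""
    role = USERS[username]["role"]
    return action in PERMISSIONS.get(role, set())
```

Note the separation: `authenticate` answers "is this really bob?", while `authorize` answers "may bob write?". Bob can log in successfully yet still be denied a write, which is exactly the distinction the paragraph above draws.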
Understanding the CIA triad is fundamental to understanding and interpreting cybersecurity policy. Once you have a grasp of the core principles, it becomes easier to identify risks, construct policy, and prevent data breaches. Being able to distinguish authorization from authentication goes hand in hand with this understanding. It is crucial to know the difference and how to implement each properly in your information security framework, as both play an important role in protecting sensitive data and systems.
References
Authentication vs. authorization. Splunk. (n.d.). https://www.splunk.com/en_us/blog/learn/authentication-vs-authorization.html
Chai, W. (n.d.). What is the CIA triad? Definition, explanation, examples [PDF]. TechTarget. Google Drive. https://drive.google.com/file/d/1898r4pGpKHN6bmKcwlxPdVZpCC6Moy8l/view
Write Up: Exploring Attacks on Availability
Noel Considine
May 25, 2025
Attacks on availability target the access points through which consumers reach critical resources or processes, cutting legitimate users off. Any incident centered on preventing access for legitimate users, such as DDoS attacks and ransomware, falls under the category of 'attack on availability'. In our modern world these attacks can be devastating to a business's finances and reputation. In this write-up I will explore the NotPetya attack and how it targeted Maersk specifically in an attack on availability, initially disguising itself as ransomware while having a deeper, more damaging true purpose.
The NotPetya attack took place in 2017, irreversibly encrypting massive amounts of critical data and destroying infected systems. NotPetya turned out to be more than ransomware: it was truly a wiper whose purpose was to damage the systems and data of companies, like Maersk, that did business with Ukraine. There is heavy speculation that Russian state operatives were behind the malware, which exploited vulnerabilities in Microsoft Windows to spread across systems. Strategies like NotPetya's, which irreversibly encrypted critical data once access was gained, are common in these kinds of attacks. The damage is devastating to a business's finances and public image and requires an enormous recovery process. Maersk lost between $200 and $300 million during the downtime of halted operations and was only able to recover thanks to a stroke of luck: a backup server in a remote location. Attacks like these can ruin a business's image, lower consumer confidence in dealing with affected organizations in the future, and stagger them for long periods. To defend against these threats, companies need constant monitoring, patching, network segmentation, and offline backups. These attacks can be devastating, and proper precautions must be taken to prevent them.
Reference
NotPetya: A Columbia University case study. (n.d.). https://www.sipa.columbia.edu/sites/default/files/2022-11/NotPetya%20Final.pdf
Module 6 Journal Entry
Noel Considine
July 6, 2025
When it comes to the world of hacking, there are many common misconceptions regarding motives, methods, and even the characteristics of those who perpetrate 'hacks'. Investigating this world through my classes and my own personal research has opened my eyes to many of the preconceived notions I held about hackers, and shown me which were correct and which were not. In this entry I will look at some of the notions and misconceptions I held about hackers and why.
One of the largest misconceptions I had before taking cybersecurity courses concerned the technicality and intellect of hacking. I watched the TV show Mr. Robot a while ago, and I had seen a few videos breaking down how realistic its hacks are. Some of the hacks are realistic and follow the steps one would take to carry out that type of attack; however, the show narrowed my focus onto the very technical side of hacking. I overlooked the idea that a hacker can be someone with very good people skills who knows how to run phishing attacks to gain access to systems and sensitive information in ways far more creative and simple than anything executed throughout the show. Though many types of hacking take extreme precision, practice, and calculated execution, hacking ranges far wider than sitting behind a computer coding malware or working through files. One of the most effective ways hackers attack is by exploiting the human element. By attacking the person, the weakest link, rather than fighting through layers of firewalls, something as simple as a lie about a fake identity told to the right person can get you past a far higher level of security than persistent technical hacking skills ever could. One person who fascinated me in this regard was Kevin Mitnick; when I first learned about Mitnick, I hadn't really thought about hacking being performed this way. While Mitnick was a technical hacking aficionado as well, he used social engineering tactics to get much farther than his computer skills alone could have taken him in breaching sensitive information. He changed my perception of what a hacker looks like and made me realize that a hacker can be a normal guy who goes to the gym and the pool daily instead of someone sitting behind a screen constantly typing away like in Mr. Robot.
Another misconception I held was that preventing hacks was just about setting up firewalls and defenses; given what I described above, I have realized it is so much more. Risk management, understanding the layout of your system, and practicing the human side of defense through extensive training are all just as important to keeping a system safe. A large part of cybersecurity is preparing and training employees against phishing attacks and in preventative measures that strengthen the weakest link. Probing employees with supplemental training modules, bringing in experts for seminars, and even silently testing staff with simulated threats created by the company are all ways to strengthen your staff and measure the effectiveness of the training. If the human aspect of your company is strong, the technical aspect can be strengthened; if the human aspect is weak, all of your technical defenses can become virtually useless. As I take more cybersecurity courses this point is truly hammered home, and it is a stark change from what I thought before.
Before taking these courses I also had the misconception that analysts and cyber defense specialists operated a bit like lone wolves, with a company's strongest hackers leading the way in innovating defense. While that is somewhat true, since they can head teams, I undervalued the role of policy. I knew barely anything about cyber policy prior to my courses, and now I understand that without policy, cyber defense is virtually pointless. Reactive measures are just as important as proactive ones, and if that lone-wolf expert fails to prevent an attack, everything goes downhill without them in place. Protocols such as backup servers, network segmentation, strict authentication procedures for employees, and response plans that mitigate downtime after a successful attack are just as important to a company's operations and security as any seasoned veteran who hunts active threats.
Going into my cybersecurity courses I had a false idea of how hacking worked and what hackers were like. I am slowly learning that they can be everyday people, that prevention can look like reactive protocols, and that even when you do your best to set up defenses, there will always be new vulnerabilities. The cyber landscape changes day by day, and keeping up is a hard task, but learning has shown me that cybersecurity is as much about people as it is about the system itself.
Reference
Insider. (2021, March 16). YouTube. https://www.youtube.com/watch?app=desktop&v=6BqpU4V0Ypk
Module 8 Journal Entry: Hacking Misconceptions
Noel Considine
July 13, 2025
Cybersecurity is a dynamic field that has been heavily explored and depicted in film and media, mostly under the label of 'hacking'. Emerging as a popular concept in the 1980s, hacking has made its way into many popular films and TV shows. The media's portrayal of cybersecurity defense and offense is typically inaccurate; however, this is changing, and some modern TV shows incorporate accurate and detailed depictions of cybersecurity.
Even in popular films as early as the 1980s, such as Ferris Bueller's Day Off (1986), there are accurate depictions of hacking, or 'phreaking' as it was sometimes called at the time. However, such accuracy was uncommon then. The media played a big role in giving people a glimpse of what cybersecurity looked like, and in its early stages it was depicted as flashy 3D models circled by fast-paced text and cybersecurity jargon. In modern media we still see many of these tropes in films and television not centered on cybersecurity; however, shows and movies oriented toward the field have seriously revamped their understanding and portrayal of it. The TV show Mr. Robot frequently shows accurate hacking procedures in realistic scenarios, and expert analysis has supported this. Cybersecurity is still commonly shown as a flashy, hard-to-understand process involving scrolling code and 3D models to hack; however, media that truly centers on the topic has refined its understanding and depiction of the field. As both cybersecurity and media progress, the portrayal grows more and more accurate.
Those of us studying the field can spot the inaccuracies in popular portrayals of cybersecurity; however, many people without a deeper understanding of the field view cybersecurity and hacking through exactly this lens. The media profoundly influences our understanding of cybersecurity, and hopefully, as depictions grow more accurate, the general public's understanding will grow more accurate as well.
References
YouTube. (n.d.-a). YouTube. https://www.youtube.com/watch?app=desktop&v=6BqpU4V0Ypk
YouTube. (n.d.-b). YouTube. https://www.youtube.com/watch?v=lsCrY2vWSr8
YouTube. (n.d.-c). YouTube. https://www.youtube.com/watch?v=SZQz9tkEHIg
Module 10 Journal Entry 1: Cybersecurity Analyst
Noel Considine
August 5, 2025
The description of a cybersecurity analyst's job, according to the video provided, relates to social behaviors in many ways. Cybersecurity analyst may seem like a purely technical job; however, its description shows far more. From job requirements and expectations to networking and positioning yourself as the best candidate, social behaviors go hand in hand with the job of cybersecurity analyst.
One social dynamic in the description of a cybersecurity analyst is that, as the 'first line of defense' at a company, one must be trusted and attentive. Analysts are trusted to help prevent threats such as phishing attacks, teaching others best practices for identifying and steering clear of these misleading threats. As a cybersecurity analyst, identifying threats, responding to phishing attempts, and protecting company users gives you a sense of responsibility not just over the data but over the people working alongside and under you. This theme of social protector carries great responsibility and extends beyond data protection to upholding the safety and security of the entire networked social organization.
The video also emphasized the flexibility of working hours and location, showing how typical workplace hours and proximity requirements are breaking down in the growing digital world. The 9-to-5 dynamic is no longer a fixed point, loosening the stricter social norms of office hours and adapting to the modern working world. However, many positions with 'graveyard hours' are emerging, which will require analysts to adapt socially and be available at any time, anywhere, to maintain the duties of their position.
Another social theme discussed is the high demand for, and short supply of, professionals in this job field. Though this has shifted in recent years, it reflects a social shift in demand for technological advancement and its supporting roles. The job of cybersecurity analyst also demands social networking in order to secure strong positions with competitive wages. Those with the best networking skills and the most prestigious technological education will be granted high-paying, secure positions that grow in social value day by day.
Though the position of cybersecurity analyst is very techno-centric, it carries deep levels of social meaning and influence as well. The position sits at a meeting ground between technology and modern society, providing protection and security to those in our networked world. The cybersecurity analyst illustrates how our society and technology have become indisputably fused with one another.
References
Enesse, N. (2021, May 6). What does a Cybersecurity Analyst Do? Salaries, Skills & Job Outlook. YouTube. https://www.youtube.com/watch?v=iYtmuHbhmS0
Module 10 Journal Entry 2: Information Warfare
Noel Considine
August 5, 2025
Social cybersecurity is a growing field and a topic of discussion at all levels of society. As technological advancement becomes ever more interconnected with all aspects of life, the threat of "information warfare" looms. Targeting the "nucleus and gravity" of a society, information warfare attacks the people of a society to drive fissures and weaken political, economic, and social support for a nation. As information warfare is increasingly recognized in our society, the need for social cybersecurity at all levels becomes more pertinent.
Information warfare erodes trust in a nation's political systems, economy, and social cohesion, degrading the nation without any declaration of war. The technological revolution of our modern society has made this style of warfare accessible to both state and non-state entities. Russia has used information warfare strategies for a long time; however, since the technological age of the early 2000s, the scale and possibilities of this harmful strategy have increased tenfold. By targeting existing fissures in a nation's society, and creating new ones, a state entity like Russia can do serious damage to a country's society and national institutions. For example, Russia's data leaks and influence operations in the 2016 U.S. presidential election mobilized an information warfare strategy to directly tamper with and influence the masses, in order to push our political system toward an outcome that best fit Russia's agenda.
One of the biggest changes in modern information warfare, given the dynamic technological growth of recent years, is that attacks no longer require physical proximity. In the past, state and non-state entities were restricted by physical boundaries from influencing public thought and creating fissures in the trust of a nation's political, social, economic, and national institutions. With modern technology, however, an entity can use bots to carry out large-scale information attacks on social media, feeding confirmation bias to sow divisiveness among the people of a country. As stated in the article, a divided nation is less potent in a war. Through attacks like these, nations are weakened; even though war has not been formally declared, they are already under attack.
The growing level of information warfare in our modern world creates a serious need for social cybersecurity expertise, implemented at the highest levels of government and society. By educating government and society about the threat of information warfare and how to form opinions based on fact rather than on divisiveness sown against us, by restoring trust between our society and our political systems, and by creating policy and actionable approaches to social cybersecurity, we can take steps against growing information warfare. Social cybersecurity is necessary in our dynamic technological world, and demand for it will grow for the strength of our nation.
References
Social Cybersecurity an emerging national security requirement. Army University Press. (n.d.). https://www.armyupress.army.mil/Journals/Military-Review/English-Edition-Archives/Mar-Apr-2019/117-Cybersecurity/b/
Module 11 Journal Entry 1: Sample Breach Letter
Noel Considine
August 6, 2025
There are some clear economic and social science theories that relate to the sample breach letter provided. On the economic side, cost-benefit analysis theory can be seen alongside behavioral economics theory. As for social science theories, social contract theory and accountability theory run throughout the letter. Each of these theories will be related to and explained in the context of the letter.
Cost-benefit analysis theory is the first economic theory that relates to this sample breach letter. The concept of this theory is weighing the costs and benefits of an action before taking it, which can be seen in Glasswasherparts' response to the breach. They weighed the costs and benefits of immediately notifying customers and decided it would be best to delay so their investigation could complete without interruption. This likely also delayed customer frustration and backlash during the ongoing investigation, allowing them to focus fully on it before dealing with the blowback. The theory helps explain why they delayed the release of information to the public. The second economic theory relevant to this breach letter is behavioral economics theory, which describes how psychological, mental, and social factors influence economic decisions; it shows that people do not always act rationally. The breach letter tells the customer that, to the company's knowledge, there had been no misuse of the stolen information at that time; however, behavioral economics theory suggests customers may overestimate the effects of such a breach and panic at the notification. The month-long delay in releasing the information can also be evaluated through this lens, as it may create uncertainty and erode customers' trust in the company, causing them to stop shopping with the organization and lash out at it.
Social contract theory, under which individuals give up certain freedoms in exchange for benefits and protections from society or institutions, can also be applied to this breach letter. Customers gave up the privacy of their personal information and data expecting it to be protected by the company, and the breach represents a violation of this perceived contract. Through this theory, the breach was a breaking of trust between the company and the individual customer, since the company did not uphold its duty. Accountability theory can be related to the letter as well; its concept is that institutions and individuals should be held accountable for their actions. In the "what happened" portion of the breach letter, the company attempts to shift responsibility onto the third-party platform provider, since it was the one that experienced the breach. However, through this theory, many customers may still see the company as the responsible institution, since it took initial responsibility for safeguarding customers' data during the transaction. These theories underline how important ethical and responsible responses to breaches are.
These economic and social science theories relate to the contents of the letter and offer a view of how it can be read from both a scientific and an individual perspective. By examining the letter through these lenses, one can see the importance of a company's response to a breach like this, and of the company upholding its ethical obligations.
Reference
Sample Data Breach Notification [customer first … (n.d.). https://dojmt.gov/wp-content/uploads/Glasswasherparts.com_.pdf
Module 11 Journal Entry 2: Bug Bounties
Noel Considine
August 6, 2025
In the article "Hacking for good: Leveraging HackerOne data to develop an economic model of bug bounties", bug bounty programs are discussed as a policy tool in cybersecurity. The article addresses how bug bounty programs are cost-effective, how demand for skilled cybersecurity professionals outstrips supply, and how using gig-economy security researchers can diversify penetration testing and strengthen results. The authors also discuss the various motivators for these penetration testers, which extend beyond monetary gain to passion, fame, and the desire for challenge.
The findings in this article reveal several significant patterns in the use of bug bounties. Even smaller and less economically significant companies receive similar volumes of vulnerability reports, showing that size and profile do not significantly affect program results. The findings also show that as the number of companies using the service rises, there is no growing shortage of vulnerabilities to be found across users. As companies remain on bug bounty platforms for longer periods, the number of vulnerabilities reported does gradually decline; however, the authors speculate that if programs allowed more penetration testing tools and hacking code, more vulnerabilities would be found.
This article shows that bug bounty programs are cost-effective, easily modified, and applicable to companies of any size. This gives wide access to cybersecurity resources that produce strong protection against vulnerabilities. Gaining expert findings on vulnerabilities at a cost-effective price, with many sets of eyes, can prove an invaluable tool. Though more research is needed on the pool of available hackers and on how to counteract the decline in reports as programs age, bug bounties are promising in the growing world of cyber defense.
References
Hacking for good: Leveraging HackerOne data to develop an economic model of bug bounties. (n.d.). Journal of Cybersecurity. Oxford Academic. https://academic.oup.com/cybersecurity/article/7/1/tyab007/6168453
Module 12 Journal Entry: Common Illegal Behaviors
Noel Considine
August 6, 2025
In the article "11 Illegal Things You Unknowingly Do on the Internet", Andriy Slynchuk speaks to readers about their growing interconnectedness with technology in our modern world. In the article he outlines eleven things that people commonly do on the internet without knowing they are illegal. Slynchuk finishes with a few tips to keep internet users out of trouble and protect their personal information.
In our modern world our lives are so connected to technology that it goes without saying it is used daily. Many people surf the web and consider themselves experts on the subject. However, there are many illegal things internet users do without being aware of the possible criminal ramifications of their actions. Slynchuk lists these eleven common illegal things people unknowingly do on the internet: using unofficial streaming services, using torrent services, using copyrighted images, sharing passwords, addresses, or photos of others, bullying and trolling online, recording VoIP calls without others' consent, faking your identity online, using other people's networks, collecting information about people under age thirteen, extracting audio from YouTube, and using Google or other services to conduct illegal searches. Though some of these, like illegal streaming, seem like common knowledge, many people are unaware that there are illegal things to search and that some forms of bullying and online trolling are illegal. As daily life becomes more intertwined with technology, it is important to educate oneself on matters such as this. Slynchuk's article gives a concise, informative description of these commonly unknown illegal activities and serves as a helper to keep you out of trouble.
Following this list, Slynchuk provides the reader with four tips to help keep them out of trouble. He tells readers to limit the information they share, so it cannot be used by cybercriminals for serious threats like identity theft or even mischievous activity like cyberbullying. Readers are told to keep strong passwords that are unique from site to site. This helps prevent people from using your devices for illegal activities, and helps prevent your personal information and data from being breached and possibly sold online.
Slynchuk encourages people to browse in private mode to prevent cookies from being used and history from being saved. Finally, readers are encouraged to use a virtual private network (VPN) while on the internet. A VPN can help protect you if you accidentally stumble onto illegal information or sites you did not mean to visit, giving anonymity to your IP address, history, activity, devices, and location. These are useful tools for protecting yourself in cyberspace.
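The tip about keeping strong, unique passwords per site can be illustrated with a short Python sketch using the standard library's `secrets` module, which is designed for cryptographically secure randomness. The site names and alphabet choice here are hypothetical examples, and in practice a password manager handles this for you.

```python
import secrets
import string

# Character pool: letters, digits, and a few common symbols (illustrative choice).
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def generate_password(length: int = 16) -> str:
    """Generate a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One unique password per site, so a breach at one site never exposes the others.
sites = ["shop.example", "mail.example", "bank.example"]
passwords = {site: generate_password() for site in sites}

assert len(passwords["shop.example"]) == 16
assert len(set(passwords.values())) == len(sites)  # all distinct
```

The design point mirrors the article's advice: reusing one password turns a single breached site into a master key, while per-site random passwords contain the damage.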
Knowledge of these illegal internet uses is important to have. It can keep you out of unnecessary criminal trouble, and the protections provided at the end can keep you safe while using the internet. The article serves as a strong educational tool, providing strategies for both.
References
Slynchuk, A. (2021, June 1). 11 illegal things you unknowingly do on the internet. Clario. https://clario.co/blog/illegal-things-you-do-online/
Module 12 Journal Entry: Common Illegal Behaviors
Noel Considine
August 6, 2025
In her article “11 Illegal Things You Unknowingly Do on the Internet”, Andriy Slynchuk relates to readers and their growing interconnectedness with technology in our modern world. In this article she outlines eleven things that people commonly do on the internet without knowing that they are illegal. Slynchuk finishes this article with giving a few tips to keep internet users out of trouble and protect their personal information.
In our modern world our lives are so connected to technology that it goes without saying that it is used daily. Many people surf the web and consider themselves to be experts on the subject. However, there are many illegal things that internet users do without being aware of the possible criminal ramifications of their actions. As follows, Sylnchuk lists these as the eleven common illegal things people unknowingly do on the internet: Use unofficial streaming services, use torrent services, use copyrighted images, share passwords and addresses or photos of others, bullying and trolling online, recording VoIP calls without others consent, faking your identity online, using other people’s networks, collecting info about people under age 13, extracting audio from youtube, and finally using google or other services to conduct illegal searches. Though some of these seem like common knowledge like using illegal streaming, many people are unaware that there are illegal things to search and that some forms of bullying and online trolling are illegal. As daily life becomes more intertwined with technology it is important to educate oneself on matters such as this. Slynchuk’s article gives a concise and informative list and description on common unknown examples of these illegal activities and provides itself as a helper to keep you out of trouble. Following this list of illegal things people unknowingly do, Slynchuk provides the reader with four tips that will serve to help keep them out of trouble. Slynchuk tells readers to limit the information they share, so it can not be used by cybercriminals for serious threats like identity theft, or even just mischievous activity like cyber bullying. Readers are told to keep strong passwords that are unique and different from site to site. This helps to prevent people using your devices for illegal activities, and prevents having your personal information and data breached and possibly sold online. 
Slynchuk encourages people to browse in private mode to prevent cookies from being used and history from being saved. Finally, readers are encouraged to use a virtual private network (VPN) while using the internet. This can help protect you if you accidentally stumble onto illegal information or sites you did not mean to visit, as it anonymizes your IP address, history, activity, devices, and location. These are useful tools to protect yourself in cyberspace.
Knowledge of these illegal internet uses is important to have. It can keep you out of unnecessary criminal trouble, and the protections listed at the end can keep you safe while using the internet. This article serves as a strong educational tool, providing strategies to do both.
References
Slynchuk, A. (2021, June 1). 11 illegal things you unknowingly do on the internet. Clario. https://clario.co/blog/illegal-things-you-do-online/
Annotated Bibliography
Noel Considine
June 29, 2025
AlMakhmari, M., Al-Hammouri, A., Al-Billeh, T., & Almamari, A. (2024). Criminal liability for misuse of social media in Omani and UAE legislation. International Journal of Cyber Criminology. https://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/view/420/121
This article breaks down how social media users can face criminal responsibility under UAE and Omani legislation, examining possible legal action on topics like religious values, public order, and individual conduct. The authors take a deep look at how places like the UAE already have strict social media and IT laws in place, such as those covering religious offenses, while Oman is falling behind in this technological landscape. The authors discuss how misused social media could disrupt societal norms and religious practices and argue that there must be a push to understand and control these cyber crimes; however, they also highlight the need for limitation, as such controls could encroach on people’s rights and freedoms. A balance between restricting cyber crime and promoting personal freedoms needs to be met.
Alashwali, E., Peca, J., Lanyon, M., & Cranor, L. (2025, April 26). Journal of Cybersecurity, 11(1). Oxford Academic. https://academic.oup.com/cybersecurity/issue/11/1
The study by these authors looks into privacy concerns that arose as the boundary between the workplace and the home blurred for employees who began working remotely during the COVID-19 pandemic. The authors surveyed 2,014 remote workers in the United States and found that even though the harm was minimal and only mildly psychological, many of the privacy intrusions caused discomfort in employees working outside the workplace. They discuss workers’ habits of circumventing privacy measures that made them uncomfortable, and use these findings to suggest how tech designers and policymakers could change their methods to preserve privacy while reducing discomfort in the home workplace.
Kuźnicka-Błaszkowska, D., & Kostyuk, N. (2025, May 9). Emerging need to regulate deepfakes in international law: The Russo–Ukrainian war as an example. Journal of Cybersecurity, 11(1). https://academic.oup.com/cybersecurity/article/11/1/tyaf008/8127651
This article breaks down the role and possible consequences of using deepfakes as a disinformation tool, examining specifically how this powerful tool is and can be used in the Russo-Ukrainian war. The authors examine current legislation and regulation of deepfakes and argue that existing preventative and reactive measures are inadequate and, if anything, only infringe on personal rights. The authors stress the need to control deepfakes by creating an international or regional right to cognitive liberty, which can address the difficulties this tool poses in geopolitics while protecting personal rights.
Whitt, S., Shkliarov, V., & Mironova, V. (2025, April 18). Going public about cyber attacks: Public threat sensitivity and support for escalation in the United States and Russia. Journal of Cybersecurity, 11(1). https://academic.oup.com/cybersecurity/article/11/1/tyaf007/8115914
This article examines how the public reacts to government disclosures that cyber attacks have been carried out by rival states. The authors do this by comparing surveys in the United States and Russia. These studies show that Americans are more sensitive to cyber threats than their Russian counterparts, and the authors posit theories for why this may be. The authors note that Americans are more in favor of taking military action against cyber attacks, despite both publics ultimately desiring de-escalation. Overall, the article looks at how the countries’ different structures contribute to the public’s sensitivity and desire for a response to attacks.
Article Analysis: Going Public About Cyber Attacks: US-Russia Cyber Conflict
Noel Considine
August 7, 2025
This article examines how the public in Russia and the USA reacts to state allegations of cyber attacks. Whitt, Shkliarov, and Mironova then investigate whether these allegations influence support for escalation. This study encompasses cybersecurity issues along with social science research on public opinion, using experimentation to understand how autocratic versus democratic leadership influences public opinion on cyber threats and their preferences for governmental response.
This examination of public opinion and regime structure is deeply connected to the social sciences. The authors examine themes of public opinion and government influence, authority and power dynamics, and differences between Russia and the US, and analyze how public support for escalation or de-escalation varies with trust in authority. The article applies political science by looking at government responsiveness to public opinion and government priming. It draws on sociology by examining group behaviors, the way governments frame cyber attacks, and public opinion. Additionally, it relates to psychology by examining how threats are perceived across regimes, how different groups emotionally respond to cyber threats, and how public opinion is influenced through government priming techniques. We can also tie a few social science theories to the authors’ analysis, starting with cost-benefit theory: individuals and the public together weigh the costs and benefits of escalating or de-escalating conflict in response to accusations of a foreign state cyber attack, which sways their preferred response. Conflict theory, which posits that society is a struggle for power between those in control and those beneath them, can also be tied to the article: those in power use public disclosure of cyber attacks and government priming strategies to shape public opinion and frame threats in ways that support their authority in the power dynamic. Lastly, the article can be seen through the lens of political culture theory, which looks at how attitudes, beliefs, and values shape political behavior in societies; it explores how distinctions between Russian and US political cultures sway the public’s reaction to alleged cyber attacks by rival powers and their willingness to escalate or de-escalate.
This study analyzes how the Russian and US publics interpret accusations of cyber attacks from rival powers. The authors focus on whether government priming (the way a government shapes public opinion and awareness of alleged cyber threats by how it portrays them) affects public support for escalation or de-escalation. The authors also focus heavily on whether democratic versus authoritarian regime type regulates public threat sensitivity and support for escalation. Whitt, Shkliarov, and Mironova put forth a few key hypotheses. First, going public raises public sensitivity to cyber threats. Second, government priming and going public increase public support for escalation. Lastly, they hypothesize that “Americans are more sensitive to cyberattacks and more willing to support response escalation than Russians”.
The authors use a few different research methods to support their findings. They use a cross-national survey design carried out in Russia and the US. The survey looks at the public’s sensitivity to cyber attacks and their approval of responding with escalation in both countries. They did this by randomly assigning participants to different priming conditions: one presenting the Russian government’s framing of a US cyber attack, one presenting the US government’s framing of a Russian cyber attack, and one control group with no priming. Afterwards, they asked surveyed participants what they thought possible government responses should be. This strategy showed whether priming influenced public opinion on response tactics and allowed a comparison of the democratic and authoritarian regimes, showing how regime type shapes the public’s views on cybersecurity and possible responses. Between the US and Russia, a total of 3,336 survey responses were collected; the researchers used these quantitative responses from both countries to compare primed and unprimed groups and to examine regime type’s influence on the answers. They compared and identified patterns in threat sensitivity and support for escalation. The data showed that Americans were more sensitive to threats and more susceptible to government priming. While both countries favored defensive responses over military escalation, Americans were found to be more prone to escalatory options. Priming’s influence was major in the US and very minor in Russia, which the authors attributed to possible differences in faith in national institutions.
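The core of this kind of analysis is a simple comparison of average responses between the primed groups and the control group. The sketch below is purely illustrative: the numbers, the 1–5 support-for-escalation scale, and the group sizes are hypothetical assumptions, not data from the study; it only shows how such a mean difference would be computed.

```python
from statistics import mean

# Hypothetical responses on a 1-5 "support for escalation" scale.
# These values are invented for illustration; they are NOT the study's data.
control = [2, 3, 2, 3, 2, 4, 2, 3]   # no priming vignette shown
primed = [3, 4, 3, 4, 3, 4, 2, 4]    # government-priming vignette shown

def mean_difference(treated, baseline):
    """Average gap in stated escalation support between the two groups."""
    return mean(treated) - mean(baseline)

effect = mean_difference(primed, control)
print(f"Priming effect on escalation support: {effect:+.2f} points")
```

In a real analysis this difference would be computed per country and accompanied by a significance test, which is how a large priming effect in the US and a minimal one in Russia would show up in the data.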
Social science concepts from throughout our course can be readily related to this article. Aspects of political science, sociology, and psychology can be applied to it and supported through analysis of government responses, public opinion, psychological and emotional reactions, and societal responses to cyber attacks. Theories reviewed in our coursework can also be tied to the article, supporting the notion that it is strongly relevant to what has been learned in the class. Cost-benefit theory, political culture theory, conflict theory, and more can be related to this reading and seen throughout the text. The concepts included in our coursework are relevant and related to the topic and content of this article.
Though this article does not explicitly analyze marginalized or minority groups, some potential connections can be made. In authoritarian regimes like Russia, marginalized groups could face limited access to media and information due to state control, leaving them with lowered threat awareness and reduced political engagement. Marginalized groups living in a heavily state-controlled society like Russia may also face increased susceptibility to misinformation from state actors, giving them less influence over and reaction to national security policy. It can be inferred that the study may not capture all perspectives of marginalized individuals in society; those excluded from decision-making processes in particular may hold different views about escalation and de-escalation responses.
This article expands understanding of how public opinion plays a role in the escalation of cyber conflicts. It also shows how regime differences change public perception of the matter, giving insight into possible differences in trust in national institutions. This helps the public learn how democratic regimes may be more susceptible to government priming and how authoritarian regimes may be able to suppress threat perception to fit their agenda. The study uses quantitative data to support its claims and opens the floor for further research comparing responses to cyber threats and attacks across the world. The authors highlight trust, communication, and openness as decisive factors influencing public opinion on cybersecurity and responses to threats, and they warn against the dangers of elite manipulation and government priming in a world full of dynamic cyber incidents.
The study and article show significant distinctions in public responses to cyber attacks, threats, and incidents based on regime type and whether government priming was involved. The researchers support their claims and findings with experimentation and quantitative data, exploring social science differences in cybersecurity and response tactics internationally. The article connects social science theories to the understanding of cyber conflict and public opinion, and it shows the necessity of ongoing, expanded research across political and cultural settings to give continuous insight into the policy and theory of the social sciences and cybersecurity.
Reference
Whitt, S., Shkliarov, V., & Mironova, V. (2025, April 18). Going public about cyber attacks: Public threat sensitivity and support for escalation in the United States and Russia. Journal of Cybersecurity, 11(1). https://academic.oup.com/cybersecurity/article/11/1/tyaf007/8115914
Career Paper: Building the Human Firewall, Security Awareness Training
Noel Considine
August 7, 2025
Technology is advancing rapidly every day, providing better defensive and offensive measures in the expanding cyber world. Regardless of these advancements, the human aspect of cybersecurity persists as the ‘weakest link’. In the role of a Security Awareness Training Specialist, one defends against and prevents cyber threats that attack the human side of cybersecurity through strategies like phishing, malware, and social engineering. Security Awareness Training Specialists draw on behavioral psychology, an understanding of group dynamics, and research methodology. Through these values and theories of social science, they mitigate risk, influence behaviors, and make sure everyone has the skills to use technology safely and effectively.
Security Awareness Training Specialists have a diverse range of job responsibilities. A main task of the position is designing and deploying educational security programs to teach staff how to notice and respond to cybersecurity risks. These specialists make videos, quizzes, and modules to teach safe behavior, even carrying out simulated attacks like phishing campaigns to improve staff identification and vigilance. They constantly update their teachings with current best practices, ensuring that staff are knowledgeable and not jeopardizing the safety of the organization’s cybersecurity. They focus on maintaining an office environment that is conscious of and upholds the best cybersecurity practices, mitigating the exploitation of human behavior in cybersecurity. This role is extremely important in modern cybersecurity; according to Verizon’s 2024 Data Breach Investigations Report, “68% of breaches involved a non-malicious human element, like a person falling victim to a social engineering attack or making an error”. The success of cybersecurity tools is heavily reliant on human-focused prevention rather than technological tools alone.
Many key concepts and theories we have reviewed while discussing the social sciences relate to the career of a Security Awareness Training Specialist. The social science of psychology, specifically behavioral psychology and aspects discussed in class like motivation theory, is very relevant to this specialist’s career. Security Awareness Training Specialists use psychology to understand what pushes users to act safely or riskily on their systems. They use this knowledge of what motivates individuals to figure out how to employ users’ internal personal values and external rewards or incentives to encourage safe cyber practices. They also likely use motivation theory to understand what motivates hackers to attack a specific individual when targeting the human aspect of cybersecurity. Through this knowledge they can help present their educated users as non-targets and mitigate the risk of human exploitation at the starting point. They can apply positive reinforcement to change behaviors through activities like phishing simulations, rewarding cautious behavior so that it becomes common practice in the workplace. Bada, Sasse, and Nurse emphasize the need for motivational and reinforcing practices rather than mere awareness of best practices: “Changing behaviour requires more than providing information about risks and reactive behaviours – firstly, people must be able to understand and apply the advice, and secondly, they must be motivated and willing to do so – and the latter requires changes to attitudes and intentions”. On top of understanding individual motives, it is important for a Security Awareness Training Specialist to investigate the situational factors that allow and encourage cyber crime to take place. This makes Routine Activity Theory an important topic for this type of specialist to understand and apply in their work.
Routine Activity Theory describes the situation in which a motivated offender meets a suitable target in the absence of a capable guardian. These specialists act as the capable guardian, educating staff and learners on how to identify threats and keep themselves safe from risky behaviors. This interrupts the cycle and provides users with practices that make them no longer suitable targets. For example, by running phishing simulations, the Security Awareness Training Specialist teaches staff to identify and dispose of suspicious emails or messages, reducing the number of suitable targets an attacker can hack. Chaudhary’s study underlines the importance of leadership backing and of integrating cybersecurity best practices into the company or organization’s norms when training awareness, as this produces the most effective results. These findings support the need for these specialists, as they provide the leadership backing that creates strong security norms and removes suitable targets from the motivated attackers that Routine Activity Theory describes. Additionally, Neutralization Theory similarly deals with situational factors, explaining how individuals (whether staff using unsafe practices or attackers themselves) justify risky behaviors or carrying out attacks despite knowing the risks and damage. In their article, Aldawood and Skinner highlight the necessity of understanding and addressing individuals’ attitudes and workplace conditions that can undermine significant cybersecurity behavioral change. Awareness specialists work to negate these justifications, show the potential consequences through practices like simulations or video explanations, and promote individual responsibility in the workplace.
Many awareness specialists share experiences of real-life breaches that occurred because of behavior rooted in neutralization theory, showing users that “everyone is doing it” and “it was just a one-time small mistake” do not hold up when attackers find a way in. Some specialists also try to understand how hackers justify their attacks so they can present potential targets as individuals who are not justifiable to attack. Another key topic we discussed that relates to the career of a Security Awareness Training Specialist is the concept of the human firewall. This concept presents employees as an active defense rather than an existing liability. The idea shows how secure behavior is the responsibility of everyone in the organization or company rather than just a technical matter. Awareness specialists help reinforce this firewall by teaching the best defensive tactics and cybersecurity practices, strengthening the human side of cybersecurity and creating a stronger, more resilient organization. By expanding users’ knowledge, sharpening their identification of cyber threats, and giving them the best tools to respond, specialists in this field contribute greatly to the human firewall, playing a direct role in strengthening it through phishing and social engineering simulations. Aldawood and Skinner discuss this need for awareness of social engineering attacks and methods to prevent them, noting that users must be educated on identifying and preventing or mitigating attacks, which relates directly to the necessity of strengthening the human firewall.
These specialists also have to consider ethical concerns when carrying out simulated attacks: gaining consent to simulate phishing attacks, favoring positive reinforcement strategies over negative punishment, and respecting users’ privacy while carrying out these practices and teaching strategies. Being inclusive and accepting in these ways helps mitigate the exploitation of staff who fall into marginalized groups. By promoting a safe learning environment that is mindful of, for example, age gaps for elderly users or the needs of non-native speakers, digital literacy and defense strategies can be learned and applied equally across societal differences. Studies show that socioeconomically poorer individuals have less access to digital learning, making them less tech-savvy and more likely to fall victim to a phishing attack. By tailoring educational practices to include all societal groups and making sure everyone understands and can apply them, best safety practices will take hold and marginalized groups will not find themselves unknowingly vulnerable. More and more studies conclude that technological and cyber jargon drives a big gap in societal understanding of best cyber practices, tech-savviness, and digital literacy. Security Awareness Training Specialists must work to bridge this gap and provide better, more understandable resources for marginalized groups and society as a whole. By following these strategies to teach all groups, we build a safer and more technologically fair society.
Society’s influence on this career is evident in the way public attitudes affect individuals’ willingness to participate in training. Announcements of large-scale breaches create a sense of urgency among the public to exercise best practices, and they shape the teachings a specialist uses to promote the new best cybersafety practices. Awareness specialists help foster a society of digital accountability and safety.
Security Awareness Training Specialists illustrate the interconnectedness of cybersecurity and the social sciences, creating safe and equitable habits through research, psychology, ethics, and sociological theories. As technology evolves, the human link will be increasingly exploited, which is why we must reinforce our human firewall, as it has the potential to be the strongest cyber defense tool one can utilize.
References
Aldawood, H., & Skinner, G. (2019). Social engineering attacks: A survey of techniques, prevention and future directions. Computer Fraud & Security, 2019(7), 9–17. https://doi.org/10.1016/S1361-3723(19)30065-0
Bada, M., Sasse, M. A., & Nurse, J. R. C. (2019). Cyber security awareness campaigns: Why do they fail to change behaviour? arXiv preprint arXiv:1901.02672. https://arxiv.org/abs/1901.02672
Verizon. (2024). 2024 Data Breach Investigations Report. https://www.verizon.com/business/resources/reports/dbir/