Article Reviews

Article review #1:

"The World of Generative AI: Deepfakes and Large Language Models," by Alakananda Mitra, E. Kougianos, and S. Mohanty, is an article highlighting the effects AI has had on cybercrime. It focuses on the problems that have arisen within cyberspace due to the sudden rise in AI advancement. Mitra discusses many social and criminal issues throughout the article, such as the sudden shift in the scalability of social engineering attacks, as well as the sudden increase in the believability and realism of scams now that tools like large language models and deepfakes are being used by cybercriminals. These tools can assist cybercriminals in a multitude of criminal or hateful activities; common phishing attacks, propaganda, AI-generated pornography, and more have all become prevalent due to AI. Mitra argues there has been a severe lack of regulation of these tools, as well as of punishment for those who use them to do harm. She believes that government bodies need to focus on laws and regulations for AI, and that social media platforms, where misinformation and harmful AI-generated content is primarily spread, need to start moderating that content better.

Social uses and criminal uses of AI-
There are numerous issues with the widespread use of AI tools like deepfakes and large language models (LLMs). Mitra believes the primary issue is that these tools are generally being used to "erode trust," slowly creating what she describes as a zero-trust environment. If AI progresses as it has been, without any regulation, misinformation spread through fake videos and text will become more common than real information; if that becomes reality, there will come a time when almost nothing seen on the internet can be trusted. Because the internet is where most people currently get their information, a sudden loss of trust in that information could break or reshape how modern society functions. Social issues aren't the only problem with these tools. LLMs can be used to create human-like phishing scams that operate at a much faster rate than traditional scams, and depending on the LLM, many are almost indistinguishable from a real person. Deepfakes are another way criminals are utilizing AI: cybercriminals are using deepfake videos to create fake pornographic images in people's likeness, to fabricate videos of crimes being committed, or, by combining a deepfake with an audio model, to produce an AI video of someone saying something they never said. AI is being used to spread misinformation not just about individuals but about political parties, marginalized groups, and more. This issue goes far beyond the scope of one person; it affects everyone.
Deepfakes-
One of the most well-known uses of AI is deepfake technology. Its popularity goes back to 2023, when an AI-generated video of Will Smith circulated the internet; the video was met with laughter and scrutiny due to how unrealistic it was. Since then AI has improved dramatically, to the point where many AI-generated videos cannot be distinguished from reality. Many believe that AI deepfakes have become "too good": it is now too easy for anyone to put someone's likeness into a video or photo, and the social and legal ramifications of this are almost endless. The internet is constantly growing and getting faster, and information spreads like wildfire. With incredibly believable pictures and videos circulating media outlets, many have begun believing before verifying, and in some cases it is almost impossible to verify. For example, it is relatively easy to verify the integrity of a photo of someone famous like Donald Trump, but for an AI-generated photo or video of someone with little internet presence or popularity, verifying that content's integrity would be incredibly difficult. This can get messy quickly, and as a society we have already begun to see the issues this technology is causing. Deepfake pornography, AI-generated videos of crimes, video scam calls, and more are only the beginnings of cybercrime with this tool, and as the tool evolves and people evolve with it, these crimes will only become more complex and more problematic. It is clear that deepfakes can improve rapidly, but there needs to be a shift from development to protection against the technology before these issues grow any worse.

Large Language Models (LLMs)-
Large language models are AI tools that use large amounts of text or audio data to generate realistic responses to prompts, questions, and more. Most AI seen today uses some form of this approach, and because of the nature of LLMs, the more data the model is trained on, the more accurate its responses become. While the tool seems simple, its uses in the cybercriminal space are far from simple. LLMs have started to appear in complex social engineering attacks; most commonly, criminals use them to create highly realistic and personalized emails for phishing attempts. As stated in Mitra's paper, LLMs are not only being used for text-based phishing scams but are also being used in tandem with deepfake technology to create videos with believable AI-generated audio. The uses of these videos are endless, but like all the aforementioned tools, they have primarily been used by cybercriminals to scam or misinform people online. These phishing attacks are much more believable than their non-AI counterparts, and because AI tools can typically process and generate information much faster than a human, they allow attackers to broaden the scale of their attacks, leading to both higher quality and higher quantity of attempts.

Combating the issue-
Fighting the issues caused by AI is not a simple task; the tools are already in the hands of so many individuals that it is almost impossible to properly regulate the tools themselves. The article argues that there needs to be both AI-specific legislation passed by governing bodies and platform-level regulation on sites like X, TikTok, and Instagram. With these changes, Mitra believes some level of control can be regained while a more permanent solution is developed. Regulations and laws are merely band-aids; if laws and regulations truly prevented crime, we would live in a crime-free society. Longer-term solutions, such as placing watermarks on AI-generated images and videos and creating sophisticated AI-detection software, are all current ideas for fighting these issues. The largest problem with AI, however, is that it can grow faster than people can combat its issues: AI tools to remove watermarks are already being developed, and AI-detection software can eventually be nullified by new forms of AI. That is why Mitra believes the only true way to eliminate the issue, and the one least likely to happen, is to keep the technology out of public use or outlaw it altogether.
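To make the watermarking idea above concrete, here is a minimal illustrative sketch of a least-significant-bit (LSB) watermark. This is a toy scheme of my own choosing, not the method the article proposes; real provenance watermarks for AI media are far more robust than this.

```python
# Toy least-significant-bit (LSB) watermark: hide a short byte tag in the
# lowest bits of 8-bit pixel values, then read it back. Illustrative only.

def embed(pixels: list[int], tag: bytes) -> list[int]:
    """Hide `tag` in the least-significant bits of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for tag")
    out = pixels[:]
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite the lowest bit
    return out

def extract(pixels: list[int], n_bytes: int) -> bytes:
    """Read back `n_bytes` hidden by embed()."""
    result = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        result.append(byte)
    return bytes(result)

pixels = list(range(64))        # stand-in for real image data
marked = embed(pixels, b"AI")   # tag the "image"
print(extract(marked, 2))       # b'AI'
```

The fragility of this scheme also illustrates the article's counterpoint: since the watermark lives in the lowest bit of each pixel, even light re-encoding or an automated tool can strip it, which is exactly why watermark removal is such a concern.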

Conclusion-
Overall, "The World of Generative AI: Deepfakes and Large Language Models," by Alakananda Mitra, E. Kougianos, and S. Mohanty, is an article highlighting the dangers of today's popular AI tools. The article outlines a realistic but bleak view of the current effects AI is having on society and cybercrime, as well as the ramifications it will have for the future. AI has evolved social engineering attacks to a degree that no one was prepared for, amplifying the scale of these attacks to more than double what it previously was while also making them more believable. AI has also begun creating a "zero-trust" environment in which information is almost always viewed with scrutiny, even when it is true. Mitra believes there is currently no true solution to this problem, but does believe strong regulations and laws need to be put in place soon, before the issues caused by LLMs and deepfakes get any further out of hand.

Article Review #2:

BLUF
This study looks into the factors that determine employees' willingness to comply with information security protocols. The researchers surveyed 261 employees and found that an organization's culture, cybersecurity awareness, employee involvement, and trust in upper management were the largest factors in determining employee compliance.

Connection to Social Science Principles:
The study's results found that many ideas and themes from social science were beneficial in improving employees' willingness to comply with cybersecurity policies. Some of the principles used in the study, and found to be effective, are culture, trust, and social norms. The study found that trust in management was crucial to improving compliance; this finding correlates with Max Weber's idea of charismatic authority, whereby people are more inclined to follow a leader who exhibits admirable personal traits. The researchers also found that workplace culture and norms were a large factor, and that by focusing efforts on creating a security-centered culture, organizations could effectively raise cooperativeness. Overall, they found that cybersecurity is just as much a social battle as it is a technical one, and that great improvements can be made by focusing on and understanding the human side of cybersecurity.

Research Question / Hypothesis / Independent Variable / Dependent Variable:

  • Research Question: "How do awareness, organizational culture, and trust in management influence employees' information security compliance behavior, and thereby help control cybercrime within organizations?"
  • Hypothesis: If new protocols for awareness, organizational culture, and trust in management are instilled in companies, then employees' information security compliance behavior will improve.
  • Independent Variable(s): awareness, organizational culture, and trust in management
  • Dependent Variable: Employees' information security compliance behavior
  • Research Methods/Data Analysis used: The researchers used a quantitative method based on 261 employees drawn from three departments: IT, human resources, and quality assurance. Each subject was given a questionnaire that rated the variables "organizational culture, cybersecurity awareness, employee engagement, trust in top management, and information security compliance behavior" on a 5-point scale. The averages, trends, and notable information found in these questionnaires were then used to draw the study's conclusions.
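The analysis step described above (computing averages across 5-point questionnaire responses) can be sketched as follows. The respondent data here is invented for illustration and is not the study's actual data:

```python
# Hypothetical sketch of averaging 5-point Likert responses per variable.
# Three made-up respondents are shown; the actual study surveyed 261.
from statistics import mean

# Each respondent rates each variable from 1 (strongly disagree)
# to 5 (strongly agree).
responses = [
    {"organizational_culture": 4, "cybersecurity_awareness": 5,
     "trust_in_top_management": 3, "compliance_behavior": 4},
    {"organizational_culture": 5, "cybersecurity_awareness": 4,
     "trust_in_top_management": 4, "compliance_behavior": 5},
    {"organizational_culture": 3, "cybersecurity_awareness": 3,
     "trust_in_top_management": 2, "compliance_behavior": 3},
]

# Mean score per variable across all respondents.
averages = {v: mean(r[v] for r in responses) for v in responses[0]}

for var, avg in averages.items():
    print(f"{var}: {avg:.2f}")
```

Comparing the per-variable averages (and, in the study itself, correlations between them) is what lets the researchers say which factors move compliance behavior the most.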

Connections to other Course Concepts:
The study reinforces several concepts from the course: from the issues and benefits of using a survey for research to how relativism relates to human error, many connections can be made between the study and the course material. I believe the study reinforced some of the problems presented with using surveys, mainly that trust is hard to develop between researchers and respondents, and that many respondents are not aware they may have already been victimized. The other concept reinforced through this study was the idea of relativism: that social behaviors can deeply change the effectiveness of cybersecurity.

Overall societal contributions of the study/Conclusion:
This study found that cybersecurity is just as much a social battle as it is a technical one, and that there needs to be a stronger focus on the human side of cybersecurity. By creating an open and transparent environment for employees, built on trust and understanding, organizations saw better compliance with cybersecurity policies. Human error is one of the largest risk factors for any system; with the knowledge provided in this study, companies have an idea of how to reduce the potential for human error. This reduction could ultimately cut the money and data lost to cyberattacks, and it also gives companies a better outline for efficient cyber policies.