Artificial Intelligence and Cybercrime
Article Review #1
Introduction
In the academic article Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures, the authors explore the relationship between artificial intelligence and cybercrime. With the increasing popularity of artificial intelligence, owing to its ease of access and use, the technology has proven advantageous for increasing efficiency and productivity. However, that same accessibility has also led to its increased use in cybercrime.
Research Questions
The article aims to address a variety of research questions. It examines the malicious use of artificial intelligence along with the role of the media in the spread of AI-facilitated cybercrime. In addition, the article explores how an individual's online practices can reduce the risk posed by artificial intelligence threats.
Relations in Social Science
Throughout the article, it is evident that a variety of social science disciplines are utilized to form a conclusion. For example, beyond criminology in relation to cybercrime, the Cyber Routine Activities Theory (Cyber RAT) "sheds light on lifestyle factors that contribute to potential victimization in computer crimes" (Shetty et al.). Using Cyber RAT, the everyday behaviors and routines of an individual are examined to determine whether they elevate the risk of cybercrime victimization, blending the study of sociology and psychology.
Research Methods
Data was collected through both traditional and cyberspace field studies. The authors collected actual prompts used with various artificial intelligence tools such as ChatGPT, in addition to examining multiple online forums where AI-generated prompts aimed at malicious use were exchanged. This allowed the authors to gather information on the sociological aspect by directly examining the communications and prompts shared on various forums. Experts in cybercrime, cybersecurity, and criminal justice were also interviewed; both open-ended and more structured questions were asked with the goal of gathering information relevant to the study.
Data and Analysis
A total of 102 chat prompts were collected from ChatGPT and similar tools. These prompts showed the AI tools "were employed for a range of malicious activities, such as creating malware, ransomware, phishing schemes, and jailbreaking techniques" (Shetty et al.). Furthermore, the number of users on the online forums ranged from 4,430 to 4,600,000 individuals, showing that the "user engagement highlights the potential reach and impact of AI-generated prompts for malicious activities across diverse online communities" (Shetty et al.). The interviews with experts covered a variety of topics, including how the ways individuals conduct themselves online may contribute to the risk of being victimized by AI-based attacks, the media's portrayal of artificial intelligence as a threat to human jobs, and the lack of regulation of artificial intelligence online.
Challenges and Concerns
The article acknowledges the need for a multifaceted approach. For example, in the context of victim precipitation, young children may experience increased victimization online. The authors note that "by moving away from a one-size-fits-all model, we can better address the specific cybersecurity needs of different age groups" (Shetty et al.). As an example, the article suggests that interactive educational games can be utilized to provide cybersecurity education for children, while televised advertisements could more effectively raise awareness among the senior citizen demographic.
Conclusion
In conclusion, while artificial intelligence can be beneficial, this article highlights how it can be used to facilitate cybercrime. By examining how online behavior can increase the risk of victimization, the study employs the Cyber Routine Activities Theory to blend the disciplines of sociology and psychology with cyber criminology, so that solutions can be better geared toward mitigating these risks. The authors note "there is a pressing need for more tailored cybercrime statutes to regulate the dissemination of malicious acts" (Shetty et al.), specifically to address the malicious use of artificial intelligence and related technologies. With the data collected from this research, frameworks, policies, and other changes can be made to better prepare for the threat posed by quickly evolving artificial intelligence.
Works Cited
Shetty, Sanaika, et al. "Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures." 16 Sept. 2024, https://vc.bridgew.edu/cgi/viewcontent.cgi?article=1187&context=ijcic.