Article 1

Understanding the Use of Artificial Intelligence in Cybercrime


Introduction
Choi, Dearden, and Parti’s editorial “Understanding the Use of Artificial Intelligence
in Cybercrime,” published in the International Journal of Cybersecurity Intelligence
& Cybercrime in 2024, examines the growing exploitation of artificial intelligence (AI) for illicit
purposes. Although AI has many advantages, including data processing, automated decision-
making, and predictive analytics, it has also opened fresh paths for criminal activity.
Cybercriminals now use AI to create deepfakes, improve phishing attempts, automate malware,
and exploit human vulnerabilities more easily.
This special issue contains three major pieces that focus on healthcare cybersecurity,
AI-driven threats such as large language models (LLMs), and a new integrated theoretical
framework for explaining the changing nature of cybercrime. Together, they offer a
multidisciplinary perspective spanning social problems, advances in technology, and theories of
crime. This review’s objective is to analyze the articles’ research designs, methodologies, and
contributions while linking them to social science principles, class concepts, marginalized
groups, and broader societal effects.
Source of the Article
Choi, S., Dearden, T., & Parti, K. (2024). Understanding the use of artificial intelligence in
cybercrime. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2), 1–3.
https://doi.org/10.52306/2578-3289.1185

This article serves as the editorial introduction to a special issue of the journal. The authors
summarize three peer-reviewed studies chosen from the 2024 International White Hat Conference.
These studies showcase both empirical and theoretical work on AI-enabled cybercrime. By merging
real-world case studies, mixed-method research, and new criminological frameworks, the issue
makes a notable advance in the understanding of cybercrime in the digital age.
Research Design
The three distinct studies included in this special issue each have their own design:
Study 1 (Praveen et al., 2024): This study examines healthcare victimization using Routine
Activity Theory (RAT) and the VIVA framework. It analyzes how target attributes, namely
value, inertia, visibility, and accessibility (VIVA), shape the risks faced by healthcare
organizations.
Independent Variables (IV): Target features (e.g., sensitivity of healthcare data, accessibility of
systems).
Dependent Variable (DV): Likelihood of cyber-victimization.
Study 2 (Shetty et al., 2024): This research analyzes AI-driven risks, especially the misuse of
large language models (LLMs) and AI-generated malicious code. Using a mixed-method design, the
study combines qualitative interviews with experts and quantitative analysis of AI prompts.
Independent Variables (IV): Use of AI tools such as LLMs and AI-generated malicious activity.
Dependent Variable (DV): Exposure to new cybersecurity risks and security weaknesses.
Study 3 (Smith, 2024): This theoretical study introduces the Integrated Model of Cybercrime
Dynamics (IMCD), which investigates the interactions among individual attributes, online
behavior, and environmental factors.

Independent Variables (IV): Individual characteristics and online habits.
Dependent Variable (DV): Offending and victimization outcomes.
Methods & Analysis
Each study uses different techniques:
Study 1: A case study approach centered on the healthcare industry. Using RAT and Cyber-
RAT, it shows how offenders exploit institutional vulnerabilities and how poor “digital
guardianship” elevates risk. The study suggests preventive measures including technical
safeguards, personnel awareness training, and legal frameworks.
Study 2: A mixed-method design combining qualitative and quantitative approaches.
Professional interviews provide insider viewpoints on the vulnerabilities posed by LLMs, while
an analysis of AI prompts demonstrates how attackers could utilize these tools. The study
suggests that increased cybersecurity and user awareness are crucial as AI tools grow more
accessible.
Study 3: A conceptual approach introducing the IMCD framework. It maps how
individual characteristics, online habits, and environmental circumstances interact to shape both
offending and victimization. This model offers flexibility for policy formulation, education, and
future empirical research.
Linking PowerPoint and the Article
The Module 5 PowerPoint describes why individuals commit cybercrimes, the psychological
theories behind their actions, and why victims can be vulnerable. It also stresses that
cybersecurity specialists must understand criminal behavior.
The article adds that AI intensifies these problems by enabling phishing, deepfakes, and
malware. Both agree that while motives remain the same, AI increases the scale and risk of
cybercrime.

Relation to Social Science Principles
The research is based on social science and criminology theories:
Routine Activity Theory (RAT): Illustrates how, in cyberspace, suitable targets, motivated
offenders, and the absence of capable guardians converge.
Cyber-RAT Extension: By applying RAT to online activities, the Cyber-RAT Extension
demonstrates how digital routines expose people and businesses to artificial intelligence dangers.
Victimization Theory: Describes how structural flaws, such as weak security infrastructure,
make certain groups disproportionately vulnerable.
Sociotechnical Systems Perspective: Emphasizes the ways in which technological
vulnerabilities interact with social elements, such as employee awareness and training.
By situating their findings within these frameworks, the authors ensure that the research is not
only technically valuable but also meaningful from a sociological perspective.
Class Concepts
The article is directly related to course concepts:
Social Engineering: AI improves phishing by producing convincing and targeted messages.
Deepfakes: These tools facilitate scams, impersonation, cyberbullying, and disinformation.
Cyber Hygiene: Highlighted as an essential defense against increasingly sophisticated attacks.
Policy and Law: The issue underscores the necessity for new policies to keep up with rapid
technological progress.
Marginalized Groups
The issue highlights how marginalized populations are disproportionately affected by AI-driven
cybercrime:

Healthcare Patients: Their sensitive data makes them prime targets in data breaches, putting
them at risk of long-term financial and emotional harm.
Non-technical Users: People with minimal digital literacy are more vulnerable to AI-powered
phishing or deepfake frauds.
Developing Countries and Small Organizations: Their lack of resources and insufficient
infrastructure make them more vulnerable to AI-powered attacks.
Recognizing these disparities underlines the moral obligation of academics and decision-makers
to safeguard vulnerable groups. The issue demonstrates how marginalized individuals suffer the
most from AI-driven cybercrime.
Societal Contributions
The research together provides major societal benefits:
1. Healthcare Resilience: Study 1 suggests multilayered prevention strategies (technical, legal,
and educational) to better safeguard healthcare systems.
2. AI Risk Awareness: Study 2 highlights the significance of cyber hygiene and digital literacy
while enhancing awareness of the new risks posed by LLMs.
3. Theoretical Advancement: The IMCD model in Study 3 offers a framework that may be
modified for use in research, instruction, and policy.
4. Policy and Collaboration: The issue highlights the necessity of interdisciplinary cooperation
among scholars, practitioners, and policymakers to combat the ever-evolving threats in
cyberspace.
The special issue helps to inform decision-making and build long-term resilience by connecting
research and practice.
Conclusion

“Understanding the Use of Artificial Intelligence in Cybercrime” offers an insightful
look at how AI technologies are exploited for criminal purposes and how society can
respond. The special issue provides a comprehensive viewpoint on new risks through empirical
research, mixed-method techniques, and innovative theory. It promotes proactive tactics,
stresses the need to protect marginalized groups, and draws attention to the applicability of
social science theories. The review concludes by showing that a multidisciplinary strategy
combining criminology, sociology, technology, and policy is necessary to tackle AI-enabled
cybercrime. This contribution makes society better prepared to handle the difficulties posed by
AI-driven cyberthreats by strengthening both academic knowledge and practical solutions.
References
Choi, S., Dearden, T., & Parti, K. (2024). Understanding the use of artificial intelligence in
cybercrime. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2), 1–3.
https://doi.org/10.52306/2578-3289.1185
Praveen, Y., Kim, M., & Choi, K. (2024). Cyber victimization in the healthcare industry:
Analyzing offender motivations and target characteristics through Routine Activities Theory
(RAT) and Cyber-Routine Activities Theory (Cyber-RAT). International Journal of
Cybersecurity Intelligence & Cybercrime, 7(2), 4–27.
Shetty, S., Choi, K., & Park, I. (2024). Investigating the intersection of AI and cybercrime:
Risks, trends, and countermeasures. International Journal of Cybersecurity Intelligence &
Cybercrime, 7(2), 28–53.
Smith, T. (2024). Integrated model of cybercrime dynamics: A comprehensive framework for
understanding offending and victimization in the digital realm. International Journal of
Cybersecurity Intelligence & Cybercrime, 7(2), 54–70.