Introduction
This article review focuses on Choi et al.'s (2024) paper Understanding the Use of Artificial Intelligence in Cybercrime, published in the International Journal of Cybersecurity Intelligence and Cybercrime. The study examines how artificial intelligence (AI) is used in cybercrime, particularly through deepfakes and social engineering, and suggests preventive measures. This review explores the article's connection to social science principles, its research questions, methods, and data analysis, and its contributions to marginalized groups and society.
Relevance to Social Science Principles
The topic of AI in cybercrime aligns closely with social science principles by addressing how technology influences human behavior and criminal activity. It touches on the following principles:
Human interaction with technology – The article explores how AI enhances both legitimate and criminal activities, reflecting the growing impact of technology on social interactions.
Social structures and power – AI, as described in the article, creates a new dynamic in which cybercriminals can manipulate social structures (e.g., using deepfakes for blackmail or fraud), showing the intersection of technology and social power.
Cultural and societal change – The adoption of AI in crime highlights a societal shift towards more sophisticated criminal methods, influencing how society responds to and prevents crime.
Research Questions
The article presents critical research questions that drive the investigation:
Primary Research Question: How is AI being used to enhance the effectiveness of cybercrime?
Secondary Questions: How are large language models (LLMs) and deepfakes utilized in criminal activity? What preventive measures are needed to combat these AI-driven threats? These questions frame the article's focus on understanding AI's technological capabilities in criminal contexts and how society can respond.
Research Methods
The authors employed a mixed-methods approach combining qualitative and quantitative research. This included:
Qualitative Methods: Expert interviews were conducted to gather insights on the role of AI in cybercrime.
Quantitative Methods: Data from AI prompts were analyzed to understand how LLMs could be manipulated for malicious purposes. This combination of methods provided a comprehensive view of AI's potential to enhance cybercrime.
Data and Analysis
The article uses qualitative data from expert interviews and quantitative data from AI prompt analysis to evaluate the scope and risks of AI in cybercrime. Expert interviews revealed industry perspectives on AI threats, while prompt analysis demonstrated how easily LLMs can be adapted to generate harmful content like phishing emails or social engineering scripts. The data emphasized the urgent need for better awareness and preventive strategies.
Relation to Class Concepts
The article is closely related to several key concepts from the course, including:
Cyber-Routine Activities Theory (Cyber-RAT) – This theory is central to understanding how routine activities in cyberspace increase vulnerability to AI-enabled cybercrime, similar to concepts discussed in class.
Social Engineering – The article deepens our understanding of social engineering, highlighting how AI tools can amplify these attacks, a recurring topic in our coursework.
Cyber Hygiene – The importance of proactive cybersecurity measures aligns with class lessons on maintaining good cyber hygiene to prevent data breaches and cyberattacks.
Relevance to Marginalized Groups
The study acknowledges that marginalized groups are particularly vulnerable to AI-driven cybercrime. For instance, deepfakes can disproportionately affect women and minority communities by targeting them for harassment, extortion, or identity theft. Additionally, these groups may lack the resources or digital literacy to adequately defend themselves against sophisticated AI-generated attacks. This highlights the need for more inclusive policies and education around cybersecurity.
Societal Contributions
This research contributes to society in several ways:
Raising Awareness: The study illuminates the growing threat of AI-enabled cybercrime, helping individuals and organizations understand the risks.
Policy and Practice: The study proposes multi-layered cybersecurity frameworks and provides actionable recommendations for policymakers, law enforcement, and organizations to enhance resilience against AI-driven threats. The article emphasizes the need for collaboration across sectors to stay ahead of rapidly evolving cybercriminal tactics.
Conclusion
In summary, Understanding the Use of Artificial Intelligence in Cybercrime by Choi et al. (2024) thoroughly explores how AI is transforming cybercrime. Through its mixed-methods approach, the article identifies the risks and suggests ways to mitigate them, making significant contributions to cybersecurity research. Its relevance to social science principles, marginalized groups, and society underscores the importance of ongoing collaboration and innovation to counter these emerging threats.
Choi, S., Dearden, T., & Parti, K. (2024). Understanding the use of artificial intelligence in cybercrime. International Journal of Cybersecurity Intelligence and Cybercrime, 7(2), 1-3. https://vc.bridgew.edu/cgi/viewcontent.cgi?article=1185&context=ijcic