Article Review #2: How AI intersects with cybercrime

Bach-Nien Doan

12/6/2024

AI’s rapid development has produced several benefits, including improved productivity and decision-making. It has, nevertheless, also introduced new dangers and difficulties. The most pressing of these is the possibility of AI being abused, particularly in cybercrime. As AI technology grows more sophisticated, fraudsters find it a tempting tool for their schemes. Because it amplifies the fundamental motivations of cybercriminals, the unethical use of AI and ChatGPT has become a serious issue. Thus, this paper aims to reveal the intersections of AI usage with the topic of cybercrime. This review will cover connections to social science principles, a description of research questions/hypotheses, research methodologies, data analysis descriptions, relationships to class concepts, minority group concerns, study contributions to society, and, lastly, concluding thoughts on this topic.

Connection to Social Science Principles

The connection to relativism is made because artificial intelligence (AI) encompasses a wide range of fields and aims to build computers and systems that can perform tasks typically associated with human intellect, such as perception, reasoning, learning, decision-making, and problem-solving. AI and humans are therefore connected, since AI tries to operate by drawing on certain aspects of human conscience and judgment. Furthermore, because government agencies and AI groups did not adequately monitor and regulate AI, cybercrime increased. The principle of determinism applies because the existence of AI brought about new cyberthreats to individuals and organizations alike: cybercriminals can utilize the capabilities of AI to perform malicious activities, such as creating phishing emails and malware, to harm victims. Lastly, parsimony applies because the article uses the cyber-Routine Activities theory, an extension of the Routine Activities theory, to better explain crimes committed in cyberspace or over the internet. Additionally, cyber-Routine Activities theory was the foundation of this study, especially when subjects were questioned (Shetty et al., 2024).

Research questions/hypothesis description

Cybercriminals are using AI to perpetrate new types of crimes because they want to exploit its potential and because there is an almost limitless pool of possible victims. Motivated criminals are using AI to create deepfakes, phishing schemes, ransomware, and other harmful online threats and attacks. The following are some of the study questions: How does information concerning the illicit use of AI circulate and get used on the dark web and the clear web, and how does it transition between these two domains? What role does the spread of media play in the rise of AI-enabled cybercrime? How might enhancing personal cyber hygiene habits reduce the risks associated with AI-based attacks? A suitable hypothesis is that as AI becomes more widely used, hackers will embrace it more frequently to take advantage of its potential to harm their victims (Shetty et al., 2024).

Research methodologies

A mixed-methods approach was essential to effectively addressing the study goals because of the complexity of AI’s participation in cybercrime and the need to capture both its broad effects and its nuanced subtleties. A thorough picture of the types and frequency of harmful activities supported by AI on the clear and dark webs was given by the quantitative data, which included statistical analysis of malicious prompts produced by AI and their transmission patterns. Furthermore, the qualitative component—which included expert interviews—provided further insights into the contextual and experiential aspects of AI, cyber hygiene, and the implications for policy (Shetty et al., 2024).

Data analysis descriptions

Conversations and exchanges related to the malicious use of AI-generated prompts were examined in a range of online communities on the open and dark webs. AI-generated prompts were found to be available on eight distinct forums: FlowGPT, Respostas Ocultas, Reddit, Dread, Legal RC, Hidden Answers, Dark Net Army, and YouTube. Most of the discussions were written in English, since it is commonly used as the lingua franca of online communities. The fact that some remarks were written in other languages, however, showed how multilingual the online discussion boards are where instructions for harmful AI-generated activity are disseminated (Shetty et al., 2024). A variety of malevolent activities, including the development of malware, ransomware, phishing scams, and jailbreaking methods, were carried out using these tools. Numerous DAN prompt occurrences on both clear and dark websites were found in this investigation. The command “DAN,” which stands for “Do Anything Now,” is used to “jailbreak” ChatGPT and other large language models (LLMs) so that users may get around limitations and utilize all their features (Shetty et al., 2024). With regard to the interviews, the information gathered enabled a comprehensive understanding of the various issues surrounding AI usage, as well as of malicious intent regarding cybercrime.
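As a rough illustration of the kind of tallying described above, the sketch below counts forum posts that mention jailbreak-related terms. The eight forum names come from the article, but the sample posts, keyword list, and counts are invented placeholders, not data from the study.

```python
from collections import Counter

# Forums named in the article; the posts below are invented examples.
forums = ["FlowGPT", "Respostas Ocultas", "Reddit", "Dread",
          "Legal RC", "Hidden Answers", "Dark Net Army", "YouTube"]

sample_posts = [
    ("Reddit", "Sharing a DAN prompt that bypasses the filter"),
    ("Dread", "New jailbreak for generating phishing templates"),
    ("FlowGPT", "Prompt collection: malware and ransomware helpers"),
    ("Reddit", "Does this DAN variant still work?"),
]

# Hypothetical keyword list for flagging malicious-prompt discussions.
keywords = ["dan", "jailbreak", "phishing", "malware", "ransomware"]

def tally(posts):
    """Count posts per forum that mention at least one flagged keyword."""
    counts = Counter()
    for forum, text in posts:
        lower = text.lower()
        if any(k in lower for k in keywords):
            counts[forum] += 1
    return counts

print(tally(sample_posts))
# Counter({'Reddit': 2, 'Dread': 1, 'FlowGPT': 1})
```

A real analysis would, of course, require scraped forum data and more careful matching (simple substring checks produce false positives), but the counting logic is the same.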

Class concept relations

Both the concept of victimization and the use of social media to enable cyberthreats are related to this issue. The Cyber RAT framework extended the traditional Routine Activities Theory to better explain computer crime victimization by highlighting the fact that motivated criminals are inescapably present in the digital era due to the internet’s ubiquitous accessibility and the anonymity of cyberspace. Cyber RAT emphasizes that when suitable targets converge with an absence of capable guardianship, which is often the outcome of risky online behaviors, the likelihood of being a victim of cybercrime is significantly enhanced. The study’s examination of social media revealed a wide range in the number of members on various forums, with observed counts ranging from 4,430 to 4,600,000 (Shetty et al., 2024). This variety in user engagement highlights the potential effect and reach of AI-generated prompts for criminal behaviors across different online communities.
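To make the scale of that membership range concrete, the snippet below compares the study’s reported endpoints. Only the minimum (4,430) and maximum (4,600,000) figures come from the article; the intermediate values are invented placeholders for illustration.

```python
# Endpoints (4,430 and 4,600,000) are from the article;
# the middle values are hypothetical filler.
memberships = [4_430, 52_000, 310_000, 1_200_000, 4_600_000]

# How many times larger is the biggest forum than the smallest?
spread = max(memberships) / min(memberships)
print(f"Largest forum is ~{spread:.0f}x the smallest")
# Largest forum is ~1038x the smallest
```

A roughly thousandfold difference in audience size is why the same malicious prompt can stay obscure on one forum yet reach millions of users on another.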

Minority group concerns

Threat actors frequently utilize prompting to get AI to produce malware, ransomware, phishing scams, and other criminal tactics to target their victims, which raises several ethical concerns with AI. The lack of knowledge and education on AI-generated cyberthreats makes both younger and older demographics more susceptible. Important considerations about the broader social repercussions of uncontrolled AI deployment may be overshadowed by the promise of personal convenience and the allure of technical development. Furthermore, despite the public’s desire to make AI more concrete and significant in their lives, there is a concerning trend of criminal exploitation of AI for malevolent purposes under the guise of the same goal.

Study contributions

This study raises awareness of both the negative applications of AI against victims and the positive effects of integrating AI into people’s lives and diverse organizations. The study provided a basic understanding of the connection between AI and cybercrime, and further research will help society keep up with the rapidly evolving field of AI technology and its cybersecurity implications. This essay explains the need for legislators and regulators to enact laws that control cybercrime (Shetty et al., 2024). Laws should also be put in place to discourage and penalize cybercriminals who use AI maliciously. Suggestions for lessening cyberthreats, including technological protections and cyber education, are presented. Additionally, this study describes AI as a viable defense against AI-driven attacks and as a means of identifying cyberthreats like fraud.

Concluding thoughts

This study examines the intricate link between cybercrime and artificial intelligence (AI), with a focus on the potential for malevolent exploitation of AI. Using the theoretical framework of Cyber Routine Activities, it offers a thorough examination of the status of AI, how it is applied in cybercrimes, and the challenges and opportunities associated with lowering the risks of AI-driven cyberattacks (Shetty et al., 2024). The research highlights how crucial it is to prioritize personal cyber hygiene to lessen the growing risks related to AI-based attacks. By investigating the digital spread of harmful content and AI-aided information throughout the open and dark web, this study sheds light on the complex dynamics of cybercrime. The mixed-methods approach, which integrated quantitative data with qualitative insights from expert interviews, also enabled a thorough understanding of the complexity of AI-related cyberthreats and the strategies required to successfully resist them. As AI becomes more and more integrated into many aspects of life, there are more potential avenues for misuse; therefore, protecting digital environments requires a proactive approach.

Reference

Shetty, S., Choi, K.-S., & Park, I. (2024). Investigating the intersection of AI and cybercrime: Risks, trends, and countermeasures. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2). https://doi.org/10.52306/2578-3289.1187
