Article Review #1: The Integration of Artificial Intelligence and Cybercrime

The relation of AI and cybercrime to the social sciences

This article explains how AI is exploited by criminals to commit cybercrimes against individuals in society, illustrating the point through two studies. The first study, “Victimization by Deepfake in the Metaverse: Building A Practical Management Framework” (Stavola & Choi, 2023), applies criminology theories such as routine activities theory and Eysenck’s theory of criminality to explain the demographics of criminal offenders and victims of deepfake crimes. The article also describes how deepfake crimes harm victims psychologically, because the victim’s physical features are used for financial income and obscene purposes, and it notes that victims need psychological healing programs to recover from the trauma these crimes cause. The second study, “Harnessing Large Language Models to Simulate Realistic Human Responses to Social Engineering Attacks: A Case Study” (Asfour & Murillo, 2023), explains how AI can mimic human responses to cyberattacks, drawing on personality theories to show why individuals with specific traits are more or less susceptible to an attack.

The questions and hypotheses about cybercrime through AI exploitation

The questions asked and answered in the article were: How does AI contribute to cybercrime? How is a target’s vulnerability to an AI-enabled attack measured? And how are criminal offenders and their motives identified in AI-exploited attacks? Their hypothesis for the first question is that criminals exploit AI through deepfake crimes and by simulating human responses to cyberattacks, most commonly phishing. Their hypothesis for the second is that a target’s personality traits and demographics are major determinants of whether the target is at risk of an attack. Their hypothesis for the third is that an offender’s demographics and the types of attacks committed identify the offender and their motives.

Research methods, data, analysis

The research methods of this article were reviews of two studies: one on victimization by deepfake crimes and one on the simulation of human responses to cyberattacks. The first study collected its data through interviews, as the article notes: “Conducting eight semi-structured interviews with policy, academic, and industry experts in South Korea, the authors used thematic analysis to identify themes in the expert testimonies on the topics of deepfake crime in the metaverse” (Parti, Dearden, & Choi, 2023). The results of these interviews show how criminology theories serve as frameworks for understanding a target’s risk of attack as well as an offender’s motivation. The second study’s data come from the results reported in “Harnessing Large Language Models to Simulate Realistic Human Responses to Social Engineering Attacks: A Case Study” (Asfour & Murillo, 2023), which show how personality theories and traits measure how at risk individuals are of a cyberattack.

Class presentations relating to this article 

Some concepts in the article are also covered in the class presentations listed in Canvas. The personality theories introduced in the Module 5 notes appear in the article as well; for example, “Using the Big Five Personality Traits model to categorize human personality traits, the authors find that personae with the qualities of naivety, carelessness, and impulsivity were particularly susceptible to attacks” (Parti, Dearden, & Choi, 2023). The Module 4 notes explain cyberpsychology as the study of how technology impacts us and affects our psychological state; the article touches on this in the first study, where victims of deepfake crimes are affected psychologically to the point of needing psychological healing programs. The Module 3 notes cover social science research methods for studying cybersecurity, including field studies, which this article draws on to show the ways offenders can exploit AI to commit cybercrime against victims.

The challenges, concerns, and contributions of this article toward marginalized groups and society

The topic of this article raises challenges for certain individuals. The first study notes that younger targets are more at risk of deepfake crimes, and that individuals with high agreeableness, low conscientiousness, and high neuroticism are more at risk of a cyberattack, specifically phishing. Another challenge is that technology changes quickly year by year, leaving cybersecurity experts and professionals to catch up on new advances and create new policies for them. One concern is that younger offenders may continue to commit cybercrime by exploiting AI if policymakers do not understand the problem and fail to make policies for it. Another is that new motivations for cybercrime may emerge in the future, increasing cybercrime further if experts and professionals do not identify those motives. The article’s contribution is that it provides knowledge and examples of how criminals can exploit a useful technological innovation like AI, suggesting potential solutions to these problems and helping cybersecurity workers counter these types of cyberattacks.

Conclusion

Overall, this article goes into great detail about how AI can be exploited by offenders to commit cybercrime against victims. The article explains this by discussing the two studies, criminology theories, and personality theories and traits, as well as by identifying the motives that lead offenders to engage in cybercrime.

Sources

Parti, K., Dearden, T., & Choi, S. (2023, August 30). Understanding the use of artificial intelligence in cybercrime. International Journal of Cybersecurity Intelligence & Cybercrime. https://vc.bridgew.edu/cgi/viewcontent.cgi?article=1170&context=ijcic
