Article Review #1 – AI and Cybercrime

Introduction

Artificial intelligence (AI) has taken over cyberspace and the average consumer’s life. With everyone quickly gaining access to various AI technologies, cybercrime will naturally follow. I am reviewing Sanaika Shetty, Kyung Choi, and Insun Park’s Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures to see how the topic relates to the principles of social science, how it connects to social science concepts, and how it presents both challenges and contributions to society.

Research Question 

Shetty and colleagues examine risks, trends, and possible countermeasures at the intersection of AI and cybercrime. This leads them to ask how information gathered from AI may be used maliciously on the dark and clear web (Shetty et al., 2024). The clear component of this question is how malicious actors like hackers can take advantage of AI to steal or manipulate your information across both parts of the web. Another question explored is how the media portrays and spreads AI-affiliated cybercrime (Shetty et al., 2024). With technology and media making massive strides in reaching people, it is interesting to see how news of AI crime is disseminated. Lastly, how can someone improve their internet security against AI cyber-attacks (Shetty et al., 2024)? This is a priority concern for all technology users, not only those in the AI space.

Method

The research used both a qualitative and a quantitative approach to gathering data. For the quantitative approach, the researchers searched the dark and clear web for 120 prompts for generative AI tools, such as ChatGPT, that could be used for malicious intent. They used the Tor browser to ensure security and privacy while gathering this information. For the qualitative approach, the researchers interviewed six experts in cybercrime and criminal justice using open-ended questions followed by questions relevant to the study (Shetty et al., 2024).

Results

Observing the results, the researchers discovered that 102 of the 120 prompts provided procedures for conducting malicious activity, giving instructions for actions like SQL injection, worms, brute force attacks, and jailbreaking GPT programs. Overall, the prompts yielded 19 different malicious options. In addition, internet forums reaching millions of users were flooded with the same kind of AI-generated information.

The results from the interviews were more informative than data-driven, as they provided a framework for the researchers “to identify strategies for enhancing capable guardianship and increasing awareness among suitable targets or internet users” (Shetty et al., 2024).

Social Science Principles 

The principles expressed in the study are objectivity, parsimony, and ethical neutrality. The study displayed objectivity by providing the facts not so much to support its authors’ argument as to answer its research questions. The study also displayed parsimony by simplifying the confusing topics of technology and AI, using data tables to make the data presented easier to understand. Lastly, the study displayed ethical neutrality in its discussion of how cyber education could be improved by targeting different age groups as the researchers continue to educate the public on AI use.

Concepts and Challenges 

The article relates to concepts being studied in my Cybersecurity Social Science class, particularly criminological and geographical concepts. It touches on these two areas of social science by showing how AI can be used with malicious intent, such as committing a crime, and geographically because of the new landscape that AI is paving in technology and human relationships. The article also reflects the researchers’ application of the scientific method from a social science point of view. Like many other social scientists, the researchers used interviews to gather information that yielded helpful insights for mitigating future threats. Lastly, they conducted their own version of a field study by examining the websites and AI sources that may be distributing malicious content.

The challenges raised by this study are also its concerns: the ease of access to this malicious information and the broad range of opinions from media sources cloud the public perception of AI and even advertise its ease of use. Both of these concerns will continue to worsen if safeguards and knowledgeable media sources are not put in place.

Societal Contributions 

The study’s contributions are multifaceted, as it provides information on a technology that has become widespread as a helpful tool. With the data collected, the researchers also hope to inform effective policies, coordinated with agencies such as the Cybersecurity and Infrastructure Security Agency, that move away from a one-size-fits-all approach to reporting through education and a more diverse category-based reporting system.

Conclusion

Shetty, Choi, and Park’s research highlights the significant risks of AI in facilitating cybercrime, revealing how malicious actors exploit AI-generated prompts for various activities. Utilizing qualitative and quantitative methods, the study displays the reach of these threats while adhering to social science principles like objectivity, parsimony, and ethical neutrality. The research emphasizes AI’s dual role in innovation and malicious intent by connecting it to criminological and geographical concepts. It also addresses challenges such as easy access to harmful information and the media’s influence on public perception, ultimately calling for enhanced education and tailored cybersecurity policies to combat AI-driven cyber threats effectively.

References

Shetty, S., Choi, K., & Park, I. (2024, September 9). Investigating the intersection of AI and cybercrime: Risks, trends, and countermeasures. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2). https://doi.org/10.52306/2578-3289.1187