Article Review #1
Matthew Burd
1/28/25
An Analysis of the Impact of Cybersecurity and AI’s Related Factors on Incident Reporting Suspicious Behaviour and Employees Stress: Moderating Role of Cybersecurity Training
Article Link: https://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/view/330/99
Introduction
Muthuswamy and Esakki’s (2024) article, “Impact of Cybersecurity and AI’s Related Factors on Incident Reporting Suspicious Behaviour and Employees Stress: Moderating Role of Cybersecurity Training,” investigates the relationship between cybersecurity measures, artificial intelligence (AI), and employee stress. The study draws on organizational behavior, human-technology interaction, and workplace dynamics to examine how cybersecurity training can reduce stress in firms.
Social Science Principles in the Article
Determinism: AI monitoring and cybersecurity training influence employee stress and reporting habits, demonstrating how work environments govern behavior (Muthuswamy & Esakki, 2024).
Skepticism: According to the study, AI can raise stress and decrease reporting if it is not properly trained, casting doubt on the notion that it enhances security (Muthuswamy & Esakki, 2024).
Empiricism: Results are derived from statistical analysis and employee surveys, guaranteeing that conclusions are supported by facts rather than conjecture (Muthuswamy & Esakki, 2024).
Relativism: The effects of cybersecurity measures varied in various work settings, highlighting the fact that organizational context affects employee reactions (Muthuswamy & Esakki, 2024).
Behavioral Psychology
The article investigates how cybersecurity and AI affect employee stress and reporting behavior. Behavioral psychology is important here because it examines how workplace stresses, such as cybersecurity technologies and AI systems, influence employee behavior (Muthuswamy & Esakki, 2024).
Sociology & Organizational Behavior
The study looks at how organizational policies, specifically cybersecurity training, affect employee stress. This underscores sociology’s focus on how societal institutions shape individual behavior, and the study finds that training lessens stress (Muthuswamy & Esakki, 2024).
Social System Concepts
The study emphasizes the interconnection of AI, cybersecurity, and employee behavior in organizational systems. It highlights how changes in one element of the system (e.g., training or AI deployment) affect the entire system (e.g., employee stress and reporting behavior), which lends validity to social systems theory.
Research Questions and Hypotheses
The article investigates how cybersecurity and AI-related factors influence employees’ reporting of suspicious behavior and stress. The researchers hypothesize that cybersecurity training moderates the relationship between these factors and employee stress (Muthuswamy & Esakki, 2024). Additionally, they examine the impact of AI on behavior monitoring and employee stress when training is insufficient.
Research Methods Used
The authors employed a quantitative research approach, distributing questionnaires to employees across sectors. These surveys explored perceptions of cybersecurity training, AI’s role in detecting suspicious activity, and stress levels (Muthuswamy & Esakki, 2024). This method allows the researchers to identify correlations between the variables and patterns within the sample population.
Data & Analysis
Survey responses were analyzed to examine relationships between cybersecurity training, AI influence, and employee stress. The study found that employees who received cybersecurity training reported lower stress levels and were more likely to report suspicious behavior, suggesting that training mitigates the negative effects of technology (Muthuswamy & Esakki, 2024).
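The moderation effect described above is conventionally tested by adding an interaction term to a regression model. As an illustrative sketch only (the variable names and simulated data below are hypothetical, not the authors’ actual dataset or analysis), a moderated regression on survey responses might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical Likert-style survey variables (1-5 scales; names are illustrative).
monitoring = rng.uniform(1, 5, n)   # perceived intensity of AI monitoring
training = rng.uniform(1, 5, n)     # amount of cybersecurity training received

# Simulate the pattern the study reports: monitoring raises stress, and
# training weakens (moderates) that effect via a negative interaction term.
stress = (2.0 + 0.8 * monitoring - 0.3 * training
          - 0.15 * monitoring * training
          + rng.normal(0, 0.5, n))

# Moderated regression: stress ~ intercept + monitoring + training + interaction.
X = np.column_stack([np.ones(n), monitoring, training, monitoring * training])
beta, *_ = np.linalg.lstsq(X, stress, rcond=None)

# A negative interaction coefficient would indicate that training buffers
# the stress effect of AI monitoring.
print(f"interaction coefficient: {beta[3]:.3f}")
```

In this framing, the moderation hypothesis rises or falls on the sign and significance of the interaction coefficient, not on the main effects alone.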
Marginalized Groups and Contributions
The article highlights how marginalized groups, particularly those with limited access to cybersecurity training or AI resources, face greater challenges. Employees who feel unprepared are more likely to experience stress due to surveillance technologies. The findings indicate that marginalized groups are disproportionately affected by a lack of access to training, leading to higher stress and less reporting of suspicious activities (Muthuswamy & Esakki, 2024).
Contributions to Society
This study contributes to society by offering practical recommendations for organizations to better equip employees to handle the stresses associated with cybersecurity and AI. By emphasizing the role of cybersecurity training, the research provides insights for businesses looking to implement programs that reduce stress and encourage the reporting of suspicious behavior, ultimately improving employee well-being and corporate security (Muthuswamy & Esakki, 2024).
Conclusion
Muthuswamy and Esakki’s research provides valuable insights into the intersection of cybersecurity, AI, and employee stress. It underscores the importance of cybersecurity training in mitigating the negative effects of these technologies, with broader implications for marginalized groups in the workplace. The findings have significant theoretical and practical implications, particularly for developing organizational policies that support employee well-being in the face of technological advancements.
Word Count: 490 (not including in-text citations)
References
Muthuswamy, V., & Esakki, S. (2024). Impact of cybersecurity and AI’s related factors on incident reporting suspicious behaviour and employees stress: Moderating role of cybersecurity training. International Journal of Cyber Criminology, 18(1), 1-15. https://cybercrimejournal.com/menuscript/index.php/cybercrimejournal/article/view/330/99
Article Review #2
Understanding the Use of Artificial Intelligence in Cybercrime
Matthew Burd
3/20/25
Article Link: https://vc.bridgew.edu/cgi/viewcontent.cgi?article=1185&context=ijcic
Understanding the Intersection of AI and Cybercrime
Introduction
Choi, Dearden, and Parti’s (2024) article, “Understanding the Use of Artificial Intelligence in Cybercrime,” examines the growing role of artificial intelligence (AI) in enabling cybercrime. The authors highlight how technological improvements, especially AI, have allowed hackers to intensify their illegal activities, making attacks more complex and harder to detect. This review discusses the research questions and hypotheses, assesses the methodology employed, examines how the article aligns with important social science principles, and highlights the study’s contributions to the field of cybersecurity.
Social Science Principles and Research Framework
This article examines AI-driven cybercrime through three fundamental social science concepts described in the course PowerPoints: determinism, skepticism, and empiricism. Cybercriminals’ predictable exploitation of AI, including automation, deepfakes, and social engineering, reflects determinism (Choi et al., 2024), consistent with criminological views that attribute criminal activity to technological opportunity. Skepticism is essential when evaluating AI’s dual role in cybersecurity, encouraging ongoing examination of security protocols. Empiricism is reflected in the study’s data-driven methodology, which assesses AI-powered cyber threats through case studies and statistical analyses. After examining how AI promotes cybercrime, emerging trends, and successful defenses, the paper concludes that evolving AI necessitates equally sophisticated security responses.
Research Methods and Data Analysis
Using a mixed-methods approach, the study combines quantitative and qualitative analysis. The authors examine case examples of AI-enabled cybercrimes, including AI-powered malware, AI-generated phishing emails, and deepfake scams (Shetty et al., 2024). The paper also includes a statistical analysis of cybercrime trends, assessing the rising sophistication and frequency of attacks that use AI technologies. The incorporation of criminological frameworks, such as Routine Activity Theory (RAT) and its cyber adaptation, Cyber-RAT, enables a thorough examination of cybercrime dynamics.
The article’s data analysis shows a distinct trend: AI is becoming an increasingly powerful tool for cybercriminals because it can automate large-scale attacks and bypass conventional security measures. To lessen AI-driven cyber threats, the article emphasizes the necessity of improved user cyber hygiene, stronger digital guardianship, and legislative measures.
Relevance to Marginalized Groups and Conceptual Linkages
Marginalized groups are disproportionately impacted by AI-enhanced crimes, especially those with less access to cybersecurity services and digital literacy. The study highlights how people from low-income backgrounds are especially vulnerable to AI-powered financial fraud and scams. Additionally, AI-based social engineering attacks often target women and elderly persons because of behavioral vulnerabilities.
The subject is related to several ideas from criminology and cybersecurity courses, such as digital forensics, cyber risk management, and the psychological manipulation techniques employed in social engineering. By connecting these ideas to AI-driven cybercrime, the study emphasizes the necessity of a multidisciplinary approach to addressing cybersecurity risks.
Contributions and Conclusion
The study makes two significant contributions to the field of cybersecurity. First, it offers a thorough examination of how artificial intelligence is changing cybercrime, providing valuable insights for security experts, legislators, and law enforcement. Second, it proposes an innovative theoretical framework, the Integrated Model of Cybercrime Dynamics (IMCD), which integrates environmental variables, individual behaviors, and AI-driven tools to explain cybercrime activity (Smith, 2024).
In conclusion, Choi, Dearden, and Parti offer a convincing examination of AI’s increasing role in cybercrime, showing how it can increase the complexity and potency of cyberattacks. The study argues that reducing AI-driven threats urgently requires interdisciplinary cooperation, improved cybersecurity, and well-informed legislation. As hackers continue to use AI for malicious purposes, this work provides an essential basis for future research and policy development.
References
Choi, S., Dearden, T., & Parti, K. (2024). Understanding the Use of Artificial Intelligence in Cybercrime. International Journal of Cybersecurity Intelligence & Cybercrime, 7(2). https://doi.org/10.52306/2578-3289.1185
Shetty, S., Choi, K., & Park, I. (2024). Investigating the Intersection of AI and Cybercrime: Risks, Trends, and Countermeasures. International Journal of Cybersecurity Intelligence and Cybercrime, 7(2), 28-53.
Smith, T. (2024). Integrated Model of Cybercrime Dynamics: A Comprehensive Framework for Understanding Offending and Victimization in the Digital Realm. International Journal of Cybersecurity Intelligence and Cybercrime, 7(2), 54-70.