Cameron White, Feb. 11, 2024
Introduction
This article review examines how interpersonal cybercrime affects people on the internet and in the metaverse. Interpersonal cybercrime is defined as "any act of criminal activity in cyberspace by an offender against another individual through digital interaction and communication, whether they have a legitimate or imaginary relationship" (Stavola & Choi, 2023, p. 2). The purpose of the article is to identify cybercrime victimization through deepfakes, ways to lower the chances of it happening, and ways to support and help victims after an attack. The article relates to the social sciences because the authors form a research question, conduct research, and develop a plan that can help people stay safe online and avoid becoming victims of cybercrime.
How it happens
The metaverse is an interactive virtual-reality world built within the internet. In the metaverse you create an avatar, a 3D virtual image of you that can make bodily movements and facial expressions. This can leave people vulnerable to deepfakes. A deepfake is AI-generated media that takes someone's likeness and uses it for other purposes; while this can sometimes be harmless, it can also be used in very harmful contexts. For example, people can use deepfakes to make someone appear to say or do things that are not socially acceptable, or that do not align with their beliefs. This can lead victims to get in trouble with the law or even fail to secure a job.
Research done
During the research process, the researchers screened participants, asking whether they were even aware of deepfakes and whether they could confidently identify one. They found that "out of 93 participants, 65% of them are totally unaware of deepfake, only 13% are confident that the general population of internet users will be able to detect deepfake, and 57% of participants believe that the average internet user will not be able to differentiate a deepfake from authentic media" (Ahmed et al., 2021, p. 3). Furthermore, most researchers believe that the people who commit these crimes are mostly in their twenties, while most of the victims are younger children and teenagers (Stavola & Choi, 2023, p. 13).
Legal action
Recently the legal system has been taking more action against deepfakes (Meskys et al., 2020, p. 6). For example, the Malicious Deep Fake Prohibition Act of 2018 was introduced so that legal action could be taken against people who created and distributed deepfake material (Ferraro, 2019, p. 6). However, in some cases the legal system does not take these offenses seriously. One study showed that "11.1%-25.8% of subjects responded that they believed cyber-harassment to be a less serious offense" (Leukfeldt & Holt, 2019, p. 342). "In an additional study conducted by Broll & Huey (2015), police officers were interviewed on their perspective of the seriousness of cyberbullying. The results displayed that these officers acknowledged cyberbullying to be a non-criminal interpersonal incident" (Leukfeldt & Holt, 2019, p. 343).
How to Help
A conference was held that highlighted the importance of safety for people who dive into the AR/VR world: not talking to people who seem suspicious, not sharing personal information over the internet, and keeping any "digital twin" (Stavola & Choi, 2023, p. 2) avatar basic, so that your likeness is not out there for everyone to see and possibly misuse. These are all ways you can mitigate your own risk online and in the metaverse. At a larger scale, the companies that make VR equipment can keep users safe from cybercrime by offering more methods of authentication, such as information-based, biometric, and multi-modal authentication (Kürtünlüoğlu et al., 2022, p. 3). Because it adds more steps, this helps stop people from using deepfakes to log into accounts that are not theirs.
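The idea behind layering authentication factors can be sketched in a few lines of code. This is a minimal illustration, not any system described in the cited papers: every function name, score, and threshold below is a hypothetical stand-in, and a real system would hash passwords and use vetted biometric and one-time-code libraries. The point it demonstrates is that login succeeds only when every independent factor passes, so a deepfake that fools the biometric check alone still fails.

```python
def check_knowledge(password: str, stored: str) -> bool:
    # Information-based factor: something the user knows.
    # (A real system compares salted hashes, never plaintext.)
    return password == stored

def check_biometric(similarity: float, threshold: float = 0.9) -> bool:
    # Biometric factor: a match score from a face/voice recognizer,
    # which is the factor a convincing deepfake might defeat.
    return similarity >= threshold

def check_device_token(token: str, expected: str) -> bool:
    # Possession factor: a one-time code sent to a trusted device.
    return token == expected

def authenticate(password: str, stored_pw: str, bio_score: float,
                 otp: str, expected_otp: str) -> bool:
    # Multi-modal login: ALL factors must pass, so spoofing one
    # factor (e.g., the biometric) is not enough to break in.
    return (check_knowledge(password, stored_pw)
            and check_biometric(bio_score)
            and check_device_token(otp, expected_otp))
```

For example, a deepfake-quality biometric score of 0.95 with the wrong one-time code would still be rejected, which is exactly the extra-steps protection the article describes.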
Challenges
The main challenge in stopping deepfakes and interpersonal cybercrime is that it is hard to educate everyone on how to recognize and avoid risks online and in the metaverse. Also, criminals improve their techniques alongside the technology and stay one step ahead, so stopping these attacks completely is difficult, if not impossible. These crimes rarely target one specific marginalized group; deepfakes and other cybercrimes are put out there to catch anyone in the wrong situation.
Contribution to Society
This article is helpful for society because it brings an emerging kind of cybercrime, deepfakes, to light. While deepfakes have existed for some time, their popularity and severity have risen over the past few years. The AI that creates them is now available to almost anyone and works well at tricking both people and other technology into believing someone is someone else. Showing who is targeted most, and who might be at risk based on what they do or which applications they use, is extremely helpful. The article also highlights the laws against people who commit these crimes, so anyone who assumes little or no legal action can be taken against them can understand that it can, and that prosecution is becoming more common as these crimes do.
Summary
In summary, interpersonal cybercrime is on the rise, and if the right precautions are not taken, this can lead to mass attacks on people. Continuing to improve laws against deepfakes and cyberattacks will help deter would-be offenders who assume there are no legal repercussions. Teaching people how to protect themselves, and having companies improve their login methods, will help protect people from deepfake login attempts and from having their sensitive information stolen.
References
Ahmed, B., Ali, G., Hussain, A., Baseer, A., & Ahmed, J. (2021). Analysis of text feature extractors using deep learning on fake news. Engineering, Technology & Applied Science Research, 11(2), 7001-7005.
Kürtünlüoğlu, P., Akdik, B., & Karaarslan, E. (2022). Security of virtual reality authentication methods in metaverse: An overview. arXiv preprint arXiv:2209.06447.
Leukfeldt, R., & Holt, T. J. (Eds.). (2019). The human factor of cybercrime. Routledge.
Meskys, E., Kalpokiene, J., Jurcys, P., & Liaudanskas, A. (2020). Regulating deep fakes: legal and ethical considerations. Journal of Intellectual Property Law & Practice, 15(1), 24-31.
Stavola, J., & Choi, K. (2023). Victimization by deepfake in the metaverse: Building a practical management framework. International Journal of Cybersecurity Intelligence & Cybercrime, 6(2).