Journal Entry 15

A number of ethical issues in the lecture Dark Side of AI—How Hackers Use AI & Deepfakes by Mark T. Hofmann caught my attention. My very first thought was how easily AI can copy a person's identity, voice, or image, and how readily a hacker can exploit that copy. Not long ago, convincing video editing was the domain of experts; now, with the right tools, almost anyone can do it. This raises the first ethical question for me: Are people entitled to control the use of their own face and voice? If deepfakes can look that real, they can be used to ruin someone's reputation, forge evidence, or influence public opinion—all without the person's consent.

Another ethical matter worth discussing is trust in the digital world. If video and audio can be generated at such high quality, how will society determine what is real? It concerns me that evidence in court, political debates, or even private conversations may someday be routinely doubted. I also think of the victims: when a hacker uses AI to clone someone's identity, that person can suffer emotional, financial, and social losses that are hard to undo.

I think society needs to address this through a combination of education, law, and technology. People should not simply believe everything they see on the Internet; they should learn how to recognize deepfakes and verify information. We should also have laws that make it illegal to create deepfakes without consent, especially when the fake is used for fraud or blackmail. And the technology itself should develop as a means of both creation and detection—in other words, AI catching AI.

In short, the video left me with a question about the responsibility that comes with innovation. AI itself is not evil, but without ethics, awareness, and accountability, it can harm people on a scale never seen before.