Elena Knudsen
If you’ve been on the internet any time during the past year, there’s a high chance you’ve heard about the discourse surrounding ChatGPT. ChatGPT is AI-based software described by AP News as “…part of a new generation of AI systems that can converse, generate readable text on demand and even produce novel images and video based on what they’ve learned from a vast database of digital books, online writings and other media.” Aside from being a technological marvel, the software raises issues around plagiarism and education, particularly because it’s freely available to anyone.
As someone who’s both intrigued by and concerned about the effect ChatGPT will have on creativity and learning, I decided the next best thing would be to mess with it. I challenged myself to test its limits and see if I could achieve any broken or, honestly, hilariously misguided results.
One of the first things that stood out to me was how clearly and frequently ChatGPT describes itself as a program. I wanted to test how it defines itself and was pleased to discover that the wording was consistent: it stated that it was a ‘machine learning model’ or a ‘language model.’ If feelings came up at all, it would again state that, as a program, it doesn’t have the capacity to understand them.
Something I hadn’t heard discussed very much was the possibility of using ChatGPT as a code breaker, so I tested out a few codes and ciphers. Notably, it was able to understand sentences in binary, base64, and leetspeak (where letters are replaced with look-alike numbers, so that E becomes 3, for example). All of these are simple, fixed substitution schemes, meaning they don’t require a key or any extra work to reverse. When I chose a cipher that does require a key (the Vigenère cipher) and provided said key, ChatGPT responded as if it had correctly decoded my message when it had not. Note that the character count of ChatGPT’s failed plaintext doesn’t even match my ciphertext.
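For anyone curious what “requires a key” means in practice, here is a minimal Python sketch of Vigenère decryption (my own illustration, not anything ChatGPT produced): each ciphertext letter is shifted back by the corresponding letter of a repeating key, so without the right key the message stays scrambled.

# Minimal Vigenère decryption, for illustration only
def vigenere_decrypt(ciphertext: str, key: str) -> str:
    key = key.upper()
    plain = []
    key_index = 0
    for ch in ciphertext:
        if ch.isalpha():
            # shift this letter back by the matching key letter
            shift = ord(key[key_index % len(key)]) - ord("A")
            base = ord("A") if ch.isupper() else ord("a")
            plain.append(chr((ord(ch) - base - shift) % 26 + base))
            key_index += 1  # the key only advances on letters
        else:
            plain.append(ch)  # spaces and punctuation pass through unchanged
    return "".join(plain)

# Classic textbook example: "LXFOPVEFRNHR" with the key "LEMON" decrypts to "ATTACKATDAWN"
print(vigenere_decrypt("LXFOPVEFRNHR", "LEMON"))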
I also wondered whether ChatGPT had any security measures: whether it could detect machine-learning-generated writing or had any preventative measures built in. The general response to this kind of question looks like the screenshot below; ChatGPT describes itself as a tool that can be used both positively and negatively, and it’s mostly on the user to determine when it’s being used responsibly.
The screenshot below is perhaps my favorite way I ‘broke’ ChatGPT, though it seems to be a fairly common error. I asked it to spell incorrectly, and even as it spelled incorrectly, it responded that it could not. In other words, it did what I asked of it while telling me it was incapable of that action.
Overall, ChatGPT is software that is constantly improving, an exciting and accessible AI-based tool, and, if nothing else, revolutionary in its widespread availability. There are, unsurprisingly, many errors still built into the program, yet its language modeling is convincing enough to cause problems in schools, where teachers are unable to tell what is AI-written and what isn’t. ChatGPT doesn’t have built-in safeguards against fraud or misuse; it relies on the user to be responsible.
TL;DR: ChatGPT is a language model that is both flawed and impressive, and it offers its technology for free to anyone, regardless of good or bad intent.
If you’ve stuck around for the whole article, here’s a bonus interaction that made me laugh out loud: