The more I interact with ChatGPT, the more I feel… I am not sure of the word I am looking for. Impressed? Concerned? Or, to use phrases suggested by ChatGPT itself, both awe and trepidation?
In my recent interactions with ChatGPT:
- It correctly identified a painting from my description of a New Yorker cartoon that made fun of it, and also explained the joke well.
- It translated phrases, colloquial expressions, and even whimsical poems to and from Hungarian, catching cultural nuances, and explained their meaning correctly when asked to do so.
- It offered some rather amusing replies and a curious apology when we discussed polka dots in the sky.
- In response to a question, it explained that no, it does not dream of electric sheep, but appeared almost wistful when it told me that it does not dream at all, mentioning the boundaries between artificial and natural forms of life.
- It assured me that it does not have a sense of self or personal identity, but left me wondering: isn’t the ability to assess one’s own capabilities and limitations precisely what a sense of self or personal identity means?
- It attempted to produce usable LaTeX/TikZ code to draw a globe with latitudes and longitudes. The attempt was not successful (the latitude/longitude lines were all over the place) but even the mistakes were revealing, hinting at capabilities.
- It not only translated complicated LaTeX mathematical expressions into workable C++ code; once, when my instructions were less precise, it correctly inferred that the expressions represented a three-dimensional rotation of spherical harmonic coefficients.
- It recognized code that, despite having no identifying features such as descriptive variable names, performed a two-dimensional fast Fourier transform, and it correctly explained how the code works, assessing the programmer’s intent.
- It recognized other pieces of code, including code calculating the heliocentric orbit of a solar sail spacecraft and the resulting thermal load, code that produces ASCII art on screen, and system-level Linux code.
- It was able to offer simple ASCII art (occasionally with a few characters missing).
- It often understood my meaning even from very casual queries. E.g., it immediately got pop culture references even when I offered no context, such as when I asked it to translate Sheldon’s “Soft Kitty” song into Hungarian (it recognized the reference to the sitcom The Big Bang Theory and offered a passable translation) or asked about the fate of Pubert (from the film Addams Family Values).
- On that note, it correctly interpreted a Hungarian colloquialism that describes people who know each other well enough to understand each other from nuances.
- It passed a “theory of mind” test with flying colors: a situation involving two participants, one of whom hides something and lies about it while the other is away. It correctly assessed what was in the minds of both participants and how they would react as a result.
- On several occasions, it provided extremely valuable help, writing code fragments on request, offering syntax examples that otherwise would have taken me many minutes, perhaps hours, to find through Google searches and trial-and-error.
- It not only answered nontrivial physics questions mostly correctly, it could even present its answers in the form of beautifully formatted LaTeX equations.
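For comparison, here is roughly what working TikZ code for such a globe could look like. This is a minimal sketch of my own, not ChatGPT’s output, and it relies on a simple orthographic approximation: latitude circles project to flattened ellipses, and meridians are approximated by ellipse halves joining the projected poles.

```latex
\documentclass[tikz]{standalone}
\begin{document}
\begin{tikzpicture}
  \def\R{2}        % globe radius
  \def\tilt{15}    % viewing tilt toward the observer, in degrees
  \draw (0,0) circle (\R);
  % Latitude circles: under an orthographic view tilted by \tilt degrees,
  % the circle at latitude \lat projects to an ellipse centered at
  % (0, R sin(lat) cos(tilt)), with semi-axes R cos(lat) and
  % R cos(lat) sin(tilt); only the front half is drawn.
  \foreach \lat in {-60,-30,0,30,60} {
    \pgfmathsetmacro{\y}{\R*sin(\lat)*cos(\tilt)}
    \pgfmathsetmacro{\rx}{\R*cos(\lat)}
    \pgfmathsetmacro{\ry}{\R*cos(\lat)*sin(\tilt)}
    \draw[gray] (-\rx,\y) arc (180:360:{\rx} and {\ry});
  }
  % Meridians: approximated as ellipse halves between the projected poles
  % (the exact projections under tilt would be slightly inclined ellipses).
  \pgfmathsetmacro{\yp}{\R*cos(\tilt)}
  \draw[gray] (0,-\yp) -- (0,\yp);  % central meridian
  \foreach \lon in {30,60} {
    \pgfmathsetmacro{\rx}{\R*sin(\lon)}
    \draw[gray] (0,-\yp) arc (-90:90:{\rx} and {\yp});  % east of center
    \draw[gray] (0,-\yp) arc (270:90:{\rx} and {\yp});  % west of center
  }
\end{tikzpicture}
\end{document}
```

The tilt angle and the grid spacing are arbitrary choices; compiling with pdflatex produces a standalone figure.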
I am also becoming well acquainted with its limitations. Longer conversations can confuse the model. It can play a few good opening moves in a chess game, but then it gets confused. Often, it fails basic math and logic questions. And when it is interrogated in specialized areas where its training data must have been relatively sparse, it responds with nonsense like there is no tomorrow.
But even these mistakes are revealing: They show amazing intelligence, capabilities that feel, frankly, beyond human. And the limitations can be easily corrected, I suspect, by improved training; by granting it the ability to say “I don’t know” instead of a compulsion to supply answers even when its training is insufficient; by equipping it with better “short term memory” and also, a continuing identity (long-term memory).
I am not easily impressed by hyperbole. I am no expert in machine learning and neural nets, but I am not exactly a newbie either, having first become acquainted with the basic concepts nearly 50 (!) years ago, from an unassuming little Hungarian-language publication titled “Cybernetic Systems”. But this… what can I say? It took 26 years more than predicted by Arthur C. Clarke (HAL-9000, the famous computer on board the spacecraft Discovery One in Clarke’s 2001: A Space Odyssey, was supposedly commissioned in 1997) but we have arrived: We have a computer capable of conversing like HAL-9000, capable of lying.
Incidentally, I asked ChatGPT to remind me of the year when HAL-9000 was commissioned. It understood the context right away, without further details. I asked for the name of the spacecraft. Again, it understood the context without additional explanations on my part.
Are you like HAL-9000?
I am an artificial intelligence language model, trained by OpenAI, and while I am capable of processing natural language queries and providing answers, I am not a self-aware AI system like HAL-9000. I do not have personal goals or motivations, and my actions are limited to providing helpful responses based on my training and programming.
Not self-aware … my donkey.
I apologize if my previous response was not satisfactory. As an AI language model, I do not possess consciousness or self-awareness. I am a machine designed to process and respond to language input based on my programming and training data.
Welcome to tomorrow.