Feb 26, 2023

The day before, I spent a long night writing code based, in part, on program fragments kindly provided to me by ChatGPT, successfully transcribing nasty LaTeX equations into working C++ code and saving me many hours I’d otherwise have had to spend on this frustrating, error-prone task. Meanwhile, I was listening to electronic music recommended by ChatGPT as conducive to work that requires immersive concentration. The music worked as advertised.
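
To give a flavor of that transcription task, here is a made-up example (illustrative only, not one of the actual equations from that night): the LaTeX fragment

\frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)

might be transcribed into C++ along these lines:

```cpp
#include <cmath>

// Illustrative only: the Gaussian density above, transcribed into C++.
double f(double x, double mu, double sigma)
{
    const double pi = std::acos(-1.0);
    const double norm = 1.0 / (std::sqrt(2.0 * pi) * sigma);
    return norm * std::exp(-(x - mu) * (x - mu) / (2.0 * sigma * sigma));
}
```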

Tonight, on a whim, I fed ChatGPT a piece of code that implements a two-dimensional Fourier transform using an FFT algorithm. Even though I removed anything suggestive (even renaming the subroutines), it instantly and correctly interpreted the code.
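
To give a sense of what such an anonymized test might look like, here is a minimal sketch (my illustrative reconstruction, not the actual code; the deliberately meaningless names mimic the renaming, and sizes are assumed to be powers of two):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

using cd = std::complex<double>;

// One-dimensional radix-2 Cooley-Tukey transform; the name is
// deliberately uninformative, as it was in the test.
void q7(std::vector<cd>& a)
{
    const std::size_t n = a.size();
    if (n < 2) return;
    std::vector<cd> e(n / 2), o(n / 2);
    for (std::size_t i = 0; i < n / 2; ++i) {
        e[i] = a[2 * i];
        o[i] = a[2 * i + 1];
    }
    q7(e);
    q7(o);
    const double pi = std::acos(-1.0);
    for (std::size_t k = 0; k < n / 2; ++k) {
        const cd t = std::polar(1.0, -2.0 * pi * k / n) * o[k];
        a[k] = e[k] + t;
        a[k + n / 2] = e[k] - t;
    }
}

// Two-dimensional transform: apply q7 to every row, then every column.
void p1(std::vector<std::vector<cd>>& g)
{
    for (auto& row : g) q7(row);
    const std::size_t rows = g.size();
    const std::size_t cols = g.empty() ? 0 : g[0].size();
    std::vector<cd> col(rows);
    for (std::size_t j = 0; j < cols; ++j) {
        for (std::size_t i = 0; i < rows; ++i) col[i] = g[i][j];
        q7(col);
        for (std::size_t i = 0; i < rows; ++i) g[i][j] = col[i];
    }
}
```

Even with names like q7 and p1, the row-then-column structure and the butterfly arithmetic are apparently enough to give the algorithm away.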

Meanwhile, I also gave it a simple one-sentence summary of an event that appears in the first chapter of Bulgakov’s Master and Margarita. It recognized the book from my less-than-perfect recollection, although it then proceeded to add bogus details that weren’t in the book at all.

I think I am beginning to better understand both the strengths and the limitations of ChatGPT.

  1. It describes itself as a language model. And that is what it is. Stephen Wolfram offers a thorough, detailed analysis of how it works. Yet I have the sensation that Wolfram may be missing the forest for the trees. The whole is more than the sum of its parts. Sure, ChatGPT is a large language model. But what is language, if not our means to model reality?
  2. ChatGPT may be modeling reality through language astonishingly well, but it has no actual connection to reality. It has no senses, no experiences. So in that sense, it is truly just a language model: To ChatGPT, words are nothing more than words.
  3. ChatGPT has no memory of past conversations. Other than its training data, all it recalls is whatever has been said in the current session. Imagine taking a snapshot of your brain and then interrogating that snapshot, while preventing it from forming long-term memories. So it always remains the same. Eternal, unchanging. (In the case of ChatGPT, this may be dictated by practical considerations. I noticed that if a session gets sufficiently long, the quality of its responses degrades.)
  4. ChatGPT also lacks intuition. For instance, it has no ability to visualize things or to “sense” three-dimensional dynamics.

ChatGPT’s shortcomings (if that’s what they are) seem relatively easy to overcome. I am pretty sure folks are already experimenting along these lines. E.g., how about putting ChatGPT into, never mind a robot, just a smartphone, with its camera, microphone, and sensors? There: a connection with reality. How about allowing it to continue learning from its interactions? And perhaps hooking it up with a GPU that also runs a physics engine, giving it the ability to visualize and intuit things in our 3D world?

But it also makes me wonder: Is this really all there is to it? To us? A language model that models reality through language, connected to reality through an array of sensors, and perhaps made more efficient by prewired circuitry for “intuition”?

Perhaps this is it. ChatGPT has already, and with ease, demonstrated to me that it has mastered the concept of a theory of mind. It can not only analyze text; it can correctly model what is in other people’s minds. Its understanding remains superficial for now, but its knowledge is deep. Its ability to analyze, e.g., program code is beyond uncanny.

We are playing the role of the sorcerer’s apprentice, in other words. Oh yes, I did ask ChatGPT the other day if it understands why the concept of the sorcerer’s apprentice pops into my mind when I interact with it.

It does.


  2 Responses to “Continuing impressions of ChatGPT”

  1. Very interesting. Also, great ideas, Viktor!

  2. Thank you for this, Viktor. Though I’ve only tested it on natural language text myself (rather than, e.g., code), I’ve been quite surprised at how well, and how relatively succinctly, it can summarise what’s available to it. For me it’s primarily a more focussed and useful search tool than a simple Google search.

    There does seem to be a generic structure to its replies: starting with a recap of the question put to it, then its reply, and finally a conclusion – so that suggests more than just the term-by-term text prediction that I read somewhere is how it’s supposed to work (I’ll look up your Stephen Wolfram link a bit later).

    One test I put to it was my answer on Quora Science Space to a question (which, I subsequently learned, had been generated by the Quora Prompt Generator – a somewhat less impressive AI program?). If it’s of interest:

    https://thesciencespace.quora.com/https-www-quora-com-What-would-happen-if-we-made-Plancks-constant-bigger-or-smaller-Why-cant-we-do-it-What-are-some-ap?comment_id=51225985&comment_type=3

    In the Comments to that answer I’ve reported GPT’s assessment of my answer to the question, and also tested its ability to identify a related but untenable hypothesis. It passed both tests.

    One other test my wife and I put to it was to generate a poem for our hosts at a Burns Night celebration. The result was rather pedestrian and benefited from a couple of minor corrections, but, as our first test, it still quite impressed us.

    Incidentally, thank you for all your physics posts, which I always find insightful and helpful as I crawl up the lower slopes of the mountain of ideas of modern physics ;->