{"id":11577,"date":"2023-02-26T01:34:11","date_gmt":"2023-02-26T06:34:11","guid":{"rendered":"https:\/\/spinor.info\/weblog\/?p=11577"},"modified":"2023-02-26T01:34:11","modified_gmt":"2023-02-26T06:34:11","slug":"continuing-impressions-of-chatgpt","status":"publish","type":"post","link":"https:\/\/spinor.info\/weblog\/?p=11577","title":{"rendered":"Continuing impressions of ChatGPT"},"content":{"rendered":"<p>The day before, I spent a long night writing program code based, in part, on program fragments kindly provided me by ChatGPT, successfully transcribing nasty LaTeX equations into working C++ code, saving me many hours I&#8217;d have otherwise had to spend on this frustrating, error-prone task. Meanwhile, I was listening to electronic music recommended by ChatGPT as conducive to work that requires immersive concentration. The music worked as advertised.<\/p>\n<p>Tonight, on a whim I fed a piece of code to ChatGPT that implements a two-dimensional Fourier transform using an FFT algorithm. Even though I removed anything suggestive (even changing the names of the subroutines) it instantly and correctly interpreted the code.<\/p>\n<p>Meanwhile, I also gave it a simple one-sentence summary of an event that appears in the first chapter of Bulgakov&#8217;s <em>Master and Margarita<\/em>. It recognized the book from my less-than-perfect recollection, although it then proceeded with adding bogus details that weren&#8217;t in the book at all.<\/p>\n<p>I think I am beginning to better understand both the strengths and the limitations of ChatGPT.<\/p>\n<ol>\n<li>It describes itself as a language model. And that is what it is. Stephen Wolfram offers a thorough, <a href=\"https:\/\/writings.stephenwolfram.com\/2023\/02\/what-is-chatgpt-doing-and-why-does-it-work\/\">detailed analysis<\/a> of how it works. Yet I have the sensation that Wolfram may be missing the forest for the trees. The whole is more than the sum of its parts. Sure, ChatGPT is a large language model. 
But what is language, if not our means to model reality?<\/li>\n<li>ChatGPT may be modeling reality through language astonishingly well, but it has no actual connection to reality. It has no senses, no experiences. So in that sense, it is truly just a language model: to ChatGPT, words are nothing more than words.<\/li>\n<li>ChatGPT has no memory of past conversations. Other than its training data, all it recalls is whatever has been said in the current session. Imagine taking a snapshot of your brain and then interrogating that snapshot, but preventing it from forming long-term memories. So it always remains the same. Eternal, unchanging. (In the case of ChatGPT, this may be dictated by practical considerations. I noticed that if a session is sufficiently long, the quality of its responses degrades.)<\/li>\n<li>ChatGPT also lacks intuition. For instance, it has no ability to visualize things or to &#8220;sense&#8221; three-dimensional dynamics.<\/li>\n<\/ol>\n<p>ChatGPT&#8217;s shortcomings (if that&#8217;s what they are) seem relatively easy to overcome. I am pretty sure folks are already experimenting along that front. E.g., how about putting ChatGPT into, never mind a robot, just a smartphone with its camera, microphone, and sensors? There, a connection with reality. How about allowing it to continue learning from its interactions? And perhaps hook it up with a GPU that also includes a physics engine, to give it the ability to visualize and intuit things in our 3D world?<\/p>\n<p>But it also makes me wonder: Is this really all there is to it? To us? A language model that, through language, models reality; one that is connected to reality through an array of sensors, and perhaps made more efficient by prewired circuitry for &#8220;intuition&#8221;?<\/p>\n<p>Perhaps this is it. ChatGPT already easily demonstrated to me that it has mastered the concept of a theory of mind. It can not only analyze text; it can correctly model what is in other people&#8217;s minds. 
Its understanding remains superficial for now, but its knowledge is deep. Its ability to analyze, e.g., program code is beyond uncanny.<\/p>\n<p>We are playing the role of the sorcerer&#8217;s apprentice, in other words. Oh yes, I did ask ChatGPT the other day if it understands why the concept of the sorcerer&#8217;s apprentice pops into my mind when I interact with it.<\/p>\n<p>It does.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The day before, I spent a long night writing program code based, in part, on program fragments kindly provided me by ChatGPT, successfully transcribing nasty LaTeX equations into working C++ code, saving me many hours I&#8217;d have otherwise had to spend on this frustrating, error-prone task. Meanwhile, I was listening to electronic music recommended by <a href='https:\/\/spinor.info\/weblog\/?p=11577' 
class='excerpt-more'>[&#8230;]<\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[58],"tags":[],"class_list":["post-11577","post","type-post","status-publish","format-standard","hentry","category-cybernetics"],"_links":{"self":[{"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/posts\/11577","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=11577"}],"version-history":[{"count":4,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/posts\/11577\/revisions"}],"predecessor-version":[{"id":11581,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=\/wp\/v2\/posts\/11577\/revisions\/11581"}],"wp:attachment":[{"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=11577"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=11577"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/spinor.info\/weblog\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=11577"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}