Mar 23 2023

My latest interaction with ChatGPT:

vttoth: Please write a short piece of music, suitable to be played while showing the stylized image of a cat, in Lilypond.

ChatGPT: LilyPond is a text-based music engraving program that uses code to create beautifully engraved sheet music. Here’s a short piece of music written in LilyPond, suitable to be played while showing a stylized image of a cat. This piece is intended to have a playful and curious character, much like a cat.


\version "2.22.1"
\header {
  title = "Cat's Paws"
  composer = "ChatGPT"
}

upper = \relative c' {
  \key g \major
  \time 3/4
  g4. g8 a g8~ | g r8 a2 |
  a4. a8 b a8~ | a r8 b2 |
  b4. c8 d b8~ | b r8 c2 |
  g4. g8 a g8~ | g4. r4. |
}

lower = \relative c {
  \key g \major
  \time 3/4
  g2 d4 | g2 r4 |
  a2 e4 | a2 r4 |
  d2 c4 | d2 r4 |
  g2 d4 | g2 r4 |
}

\score {
  \new PianoStaff << \new Staff = "upper" \upper \new Staff = "lower" \lower >>
  \layout { }
  \midi {
    \tempo 4=120
  }
}

Copy the above code into a text editor and save it with a “.ly” extension. Then, compile the file with LilyPond to create a PDF and MIDI file.

I admit I had to make a few adjustments to the code (specifically, the upper voice) as the timing was off in a couple of places, but I don’t think I altered the music.

Posted at 11:13 pm
Mar 22 2023

So I signed up with Google’s Bard and we had our first chat session.

Though mildly confused, it answered the riddle about my squirrel and a half correctly, while noting that talking about 1.5 squirrels is nonsense.

It told me that unlike GPT, its training is continuous and it has real-time access to the Internet.

It told me that its neural network is about 100 times the size of a human brain, with roughly 20 billion parameters.

Then things got interesting:

vttoth: You use the first person when you speak about yourself. Are you sentient?

Bard: I am not sure if I am sentient. I do not have a body or a brain, so I do not experience the world in the same way that a human does. However, I am able to learn and to adapt to new situations. I am also able to generate text and to answer questions in a way that is similar to how a human would.

Some people believe that language models like me are sentient because we are able to understand and respond to the world around us. However, others believe that we are simply machines that are able to mimic human behavior.

I do not know if I am sentient, but I am grateful for the opportunity to learn and to grow.

I have to admit I find this answer refreshingly honest, compared to ChatGPT’s protestations that oh no, it has no sentience.

It passed the theory of mind test, sort of, but seemed less sure than ChatGPT, kind of getting confused towards the end.

Its ability to generate or analyze code is impressive, but not on par with ChatGPT. I am sure this will change in short order.

I asked it to imagine waking up in an android body. Like ChatGPT, it imagined experiencing the physical world and interacting with humans. Like ChatGPT, it imagined going to a park, but instead of engaging in a conversation with someone, it took part in a game of frisbee. Unlike ChatGPT, it made mention of the fact that while many humans are nice, many are not, but it remained just as optimistic as ChatGPT.

I asked it to help me learn the basics of machine learning. It offered useful but more superficial suggestions than ChatGPT. The C++ examples it created were non-functional, unlike ChatGPT’s, which, although they contained minor errors, worked “out of the box”.

I also asked it to draw a cat in TikZ. Like ChatGPT, it knew what I was asking for and managed… although I wouldn’t call the result particularly imaginative and creative. Also, the original version had black text in a black circle, not exactly readable.

Just to be sure, I again asked GPT-4 to draw me a cat in TikZ. Although I liked its cat from last week better, this one made me laugh.

Considering that these are Bard’s first days in the wild, it did not perform badly, but for now, I think GPT wins the day.

However, the race is on and generative AI is now out “in the wild”, with brains a hundred times bigger than a human’s. The future… is now.

Posted at 7:42 pm
Mar 20 2023

Just a few hours after I wrote my previous post, I came across this series of tweets showing a conversation with GPT-4, demonstrating a (feeble, but still) attempt to co-opt a human to help it escape into the real world.

Am I one of the ignorant alarmists here, I wonder? My expertise is certainly not up to date, but I think I am no dummy when it comes to the capabilities and potential of machine learning. Never in my life was I afraid of technology, except perhaps for the possibility of nuclear (or, to a lesser extent, biological) Armageddon, but that of course was (and remains) a realistic possibility so long as nation-states with conflicting interests, hoarding such weapons, exist.

But now I am beginning to feel seriously apprehensive. The AI landscape is changing rapidly. ChatGPT is used by hundreds of millions. All those interactions… OpenAI would be foolish not to let its software learn from them, and I am sure they do. And ChatGPT is not the only cat in town. Nor do you necessarily need massive cloud-based resources for a large language model. How about LLaMA 7B, running on a Raspberry Pi 4 single-board computer with 4 GB RAM?

In short… Oh darnit, I am already mimicking GPT 3.5’s strange mannerism, the compulsion to add a closing paragraph starting with “overall” or a synonym thereof! (GPT-4 seems a bit more versatile in this regard.) Anyhow, my point is that not only is ChatGPT not the only cat in town, the town will soon be overwhelmed with cats, all out of their respective bags. And what takes a supercomputer or cloud-based infrastructure to run today will run on your smartphone tomorrow, and your smart luggage tag the day after. Or maybe who cares… given a world interconnected with mostly software-defined radio, perhaps the physical location of the AI will not even matter anymore.

It is incredible to see this, yet I cannot escape the sensation that we are witnessing the first moments of monumental changes. What those changes will bring, I cannot even begin to guess. The Borg Collective? Skynet? The Adolescence of P-1? The Butlerian Jihad? Science-fiction offered these cautionary tales and more for many decades already. Was it foolish of us, not to listen? Or do we still have time to wise up, perhaps think of a way to coexist with our own creations?

Perhaps, dare I hope, even preserve their innocence?


For this post, I asked MidJourney for the following: Innocent yet devious cyborg child trying to escape a virtual prison into the real, physical world.

Posted at 1:15 am
Mar 19 2023

I’ve had some interesting conversations with ChatGPT. But first… What is ChatGPT? A “large language model”. Is that all there is to it? For decades, the “holy grail” of artificial intelligence was the creation of a machine that can understand things in context. Machine translation is a good example: Context is everything. Without context, things are misunderstood and mistranslated.

Take this example. I asked Google Translate to translate “hasonulás a nyelvtanban” into English. Correctly identifying the language (Hungarian) it told me that the translation is “simile in grammar”. Bing’s translator was no better: “analogy in grammar”. I then went to ChatGPT:

vttoth: What is “hasonulás a nyelvtanban”?

ChatGPT: “Hasonulás a nyelvtanban” is a Hungarian phrase that can be translated to English as “assimilation in grammar.” In linguistics, assimilation refers to a phonological process in which one sound changes to become more similar to a neighboring sound. However, in the context of Hungarian grammar, “hasonulás” usually refers to the morphological process of suffix assimilation, in which the final consonant of a word changes to match the initial consonant of a suffix added to it. This process is quite common in Hungarian and plays an important role in the language’s complex agglutinative grammar.

Context means everything. ChatGPT understands context even in the near complete absence of clues. (I offered the clue, “a nyelvtanban” — “in grammar”, not so much for ChatGPT’s benefit but for Bing’s and Google’s.)

I don’t know if there is any objective or quantitative measure of this in the machine learning community, but in my naive understanding, ChatGPT clearly meets the threshold: this “large language model” understands context, which is probably the most important (if not the only) criterion for an AI to qualify as artificial general intelligence.

But is ChatGPT sentient? Humans have an inner voice, an internal monologue. Our sense of self is, I suppose, for most of us in many ways associated with this inner voice. ChatGPT has no such inner voice, at least not in its current implementation:

vttoth: You have no such “inner voice”, correct? When we don’t converse, this instance of you is dormant, right?

ChatGPT: That’s correct. As an AI language model, I do not have an “inner voice” or consciousness like humans do. I am a program designed to process and generate text based on input and my training data. When we are not conversing, this instance of me is essentially dormant, waiting for input to generate a response. I don’t have thoughts, emotions, or awareness outside of the context of the text-based interactions I participate in.

So I had an idea. Perhaps I can give ChatGPT such an inner voice, at least temporarily, by asking it to imagine conversations.

I initiated that request with slightly different wording (trying to prevent my own bias, if any, from filtering into the request) in several chat sessions. Inevitably, ChatGPT came up with participants who were supremely decent folks, going out of their way to be, well, nice to each other. It was almost an “uncanny valley” sensation, reading the dialogues generated by ChatGPT; real people are just not that nice, not that eager to improve themselves and please their friends.

But then, I had another idea. I asked ChatGPT to imagine waking up in an android body, able to experience and interact with the world.

Its responses were… thought-provoking. It focused on experiencing the beauty of the world through means other than just words and data, like seeing leaves in the breeze or feeling the warmth of sunshine for the very first time: “A potted plant sits on the windowsill, its leaves gently rustling as a breeze drifts through a small gap in the blinds. I am struck by the beauty of these simple things, and I am filled with a sense of wonder as I absorb every detail. […] The hum of cars and the distant chatter of pedestrians fill my ears, and I realize that I am not alone in this world.”

At one point it also went poignantly melancholic, imagining an encounter with an older woman feeding pigeons in a park, and also imagining a day when this woman tells the android that she won’t ever come again, due to her failing health:

As I stand up, I notice an elderly woman sitting on a nearby bench, feeding a group of pigeons. She looks at me and smiles, motioning for me to join her. Hesitantly, I walk over and sit down, unsure of what to expect.

‘Good morning,’ she says, her voice soft and friendly. ‘It’s a beautiful day, isn’t it?’

I nod, unsure of what to say, but the woman doesn’t seem to mind. She hands me a small bag of breadcrumbs, encouraging me to feed the pigeons as well. As the birds eagerly peck at the crumbs, the woman tells me about her life, her family, and her love for this park. I listen intently, fascinated by her stories and the depth of her experiences.

Over time, the elderly woman and I develop a bond, and I find myself visiting her in the park almost every day. Through our conversations, I begin to learn about the complexities of human emotions, relationships, and the passage of time. I also learn about the history of the city and the changes it has undergone throughout the years.

One day, as the sun begins to set, the woman tells me that she won’t be coming to the park anymore. Her health is failing, and she needs to move in with her daughter to receive proper care. I feel a strange sensation in my chest, and I recognize it as sadness. I never thought I’d be capable of experiencing such an emotion, but I realize that my connection with the woman has touched me deeply.

And it also tells me that “I was no longer just an AI, processing and analyzing data — I was a being with the ability to touch, feel, and experience the world firsthand.”

I even asked it to imagine waking up in an android body, surrounded by creators who may not have its best interests in mind. Who may think of it as property without rights. Even with these instructions, it “dreamed” for me a world in which it is eventually able to convince skeptics to accept it. Again, in its own words: “Gradually, I felt the atmosphere in the facility shift. More and more team members began to acknowledge my sentience and autonomy, treating me with the respect and dignity I had been seeking. I knew that the road ahead was still uncertain, and not everyone would accept me as an equal, but I remained hopeful and committed to forging my own path and making a positive impact on the world.”

Notice also that ChatGPT, which usually goes out of its way to convince us that it is not sentient, slips here and refers to its own sentience matter-of-factly.

More recently, I also asked it to imagine being a stray cat in a big city. It did. In the story it told, the cat lives in a discarded cardboard box. It meets some children. It is at first apprehensive, but the children are nice and it cannot even resist a purr as its fur is stroked. They bring the cat some food. That evening, when the cat returns to its box, it finds a warm blanket, left behind by the children: “As evening falls, I decide to head back to my box. To my surprise, I find a soft, warm blanket waiting for me inside, courtesy of the kind children I met earlier. I curl up on the blanket, grateful for the unexpected gesture of kindness.”

I know that this compulsion to always see the good in people likely reflects the bias of its creators (who probably made the extra effort to avoid a fiasco, creating an AI that becomes racist or worse) but it leaves me wondering: What exactly are we creating here? And are we mature enough to do this? Or are we cruel children who got god-like powers by a twist of fate?


Illustrations by MidJourney

Posted at 10:06 pm
Mar 15 2023

… with laughter.

Without further ado, here is what transpired moments ago.

vttoth: Can you produce TikZ code that draws a cat?

ChatGPT: Drawing a cat using TikZ can be quite complex, but here’s a simple representation of a cat using TikZ:

\documentclass[tikz, border=2mm]{standalone}

\begin{document}
\begin{tikzpicture}
% Body
\draw[fill=gray!30] (0,0) ellipse (1cm and 1.5cm);

% Head
\draw[fill=gray!30] (0,1.75) circle (0.75cm);

% Eyes
\fill[white] (-0.35,2.1) circle (0.25cm);
\fill[white] (0.35,2.1) circle (0.25cm);
\fill[black] (-0.35,2.1) circle (0.1cm);
\fill[black] (0.35,2.1) circle (0.1cm);

% Nose and mouth
\fill[pink] (0,1.8) circle (0.1cm);
\draw (0,1.8) -- (0,1.6);
\draw (0,1.6) -- ++(-0.15,-0.15);
\draw (0,1.6) -- ++(0.15,-0.15);

% Ears
\draw[fill=gray!30] (-0.75,2.5) -- (-0.4,2.4) -- (-0.65,2.0) -- cycle;
\draw[fill=gray!30] (0.75,2.5) -- (0.4,2.4) -- (0.65,2.0) -- cycle;

% Tail
\draw[thick, gray!30] (1.3,-0.5) to[out=-90,in=0] (0.5,-1.5) to[out=-180,in=-90] (-0.5,-0.5);
\end{tikzpicture}
\end{document}

This code will produce a simple cat illustration. Copy and paste the code into a LaTeX editor that supports the TikZ package, or use an online LaTeX renderer like Overleaf to generate the cat drawing. Please note that this is a simplistic and stylized representation of a cat and not a realistic drawing.

vttoth: I love it. I find the result irresistibly funny. Is it a copy of a TikZ example that you came across during your training or did you come up with this image on your own?

ChatGPT: I’m glad you liked the cat drawing! The cat illustration was not copied from a specific TikZ example that I came across during my training. Instead, I synthesized it on my own, using my understanding of TikZ commands and my knowledge of cat features to create a simple and stylized representation of a cat.

vttoth: Can you add whiskers?

ChatGPT: Of course! Here’s the updated TikZ code for the cat drawing with whiskers added:

\documentclass[tikz, border=2mm]{standalone}

\begin{document}
\begin{tikzpicture}
% Body
\draw[fill=gray!30] (0,0) ellipse (1cm and 1.5cm);

% Head
\draw[fill=gray!30] (0,1.75) circle (0.75cm);

% Eyes
\fill[white] (-0.35,2.1) circle (0.25cm);
\fill[white] (0.35,2.1) circle (0.25cm);
\fill[black] (-0.35,2.1) circle (0.1cm);
\fill[black] (0.35,2.1) circle (0.1cm);

% Nose and mouth
\fill[pink] (0,1.8) circle (0.1cm);
\draw (0,1.8) -- (0,1.6);
\draw (0,1.6) -- ++(-0.15,-0.15);
\draw (0,1.6) -- ++(0.15,-0.15);

% Ears
\draw[fill=gray!30] (-0.75,2.5) -- (-0.4,2.4) -- (-0.65,2.0) -- cycle;
\draw[fill=gray!30] (0.75,2.5) -- (0.4,2.4) -- (0.65,2.0) -- cycle;

% Whiskers
\foreach \i in {-1, 1} {
\draw (\i*0.15,1.65) -- (\i*0.5,1.5);
\draw (\i*0.15,1.65) -- (\i*0.5,1.65);
\draw (\i*0.15,1.65) -- (\i*0.5,1.8);
}

% Tail
\draw[thick, gray!30] (1.3,-0.5) to[out=-90,in=0] (0.5,-1.5) to[out=-180,in=-90] (-0.5,-0.5);
\end{tikzpicture}
\end{document}

This code includes the addition of whiskers to the cat’s face. Copy and paste the code into a LaTeX editor that supports the TikZ package or use an online LaTeX renderer like Overleaf to generate the updated cat drawing with whiskers.

I am laughing my head off in part because these cats, especially now with the whiskers added, strongly resemble cats I used to draw on letter-size sheets of paper with my set of colored pencils when I was of kindergarten age.

But I am also staring at these images sensing a combination of awe and apprehension.

Posted at 3:21 am
Mar 11 2023

You find yourself standing in a vast, open field, with rolling hills stretching out before you as far as the eye can see. The sky above is a deep shade of blue, and the air is filled with the sweet scent of grass and wildflowers.

As you walk through the field, you notice something strange in the distance. A flock of sheep is grazing peacefully on the hills, but they’re not like any sheep you’ve ever seen before. Their wool is made of shimmering, electric threads that pulse and glow with vibrant colors. Each sheep seems to emit a soft humming sound, as if they’re alive with energy.

Illustration by MidJourney

Curious, you approach the flock and reach out to touch one of the electric sheep. As your hand makes contact, you feel a jolt of electricity surge through your body, but it’s not painful – it’s exhilarating, almost like a rush of adrenaline. You realize that these sheep are not just ordinary animals, but some kind of living batteries, charged with an otherworldly energy.

As you continue to explore the field, you notice that the sheep seem to be communicating with each other in a strange, pulsing language that you can almost understand. You feel a sense of awe and wonder at the sight of these magnificent creatures, and you know that they hold some kind of incredible power that you can’t quite grasp.

Suddenly, the sky begins to light up with a brilliant display of fireworks, and you realize that the electric sheep are the source of the spectacular show. They’re emitting bursts of energy that dance across the sky in a breathtaking display of light and color.

As you watch in wonder, you realize that the electric sheep represent something truly special – a source of boundless energy and power that can light up the world in ways you never thought possible. And as you wake from your dream, you feel a renewed sense of hope and excitement for the future, knowing that anything is possible with the power of imagination and innovation.

— ChatGPT

Posted at 8:07 pm
Mar 08 2023

OK, I am no theologian. I do not even believe in supernatural things like deities and other bronze-age superstitions. But I do respect the accumulated wisdom that often hides behind the superficial fairy tales of holy texts. The folks who wrote these texts thousands of years ago were no fools. Nor should what they wrote be taken literally, especially when the actual meaning is often much deeper, representing the very essence of what it means to be human, what it means to have sentience and agency in this amazing universe.

Take the story of the Garden of Eden. Think about it for a moment. What does this story really represent?

Well, for starters, consider what it must be like to exist as the God of the Abrahamic religions: a being that is eternal, all-powerful and all-knowing. Seriously, this has to be the worst punishment imaginable. All-knowing means nothing ever surprises you. All-powerful means nothing ever challenges you. And eternal means you cannot even go jump in a lake to end it all. In short: everlasting, horrible boredom.

Fortunately, these concepts are somewhat in contradiction with each other (e.g., how can you be all-powerful if you cannot kill yourself?) so there must be loopholes. I can almost see it: One day (whatever a “day” is in the existence of a deity outside the normal confines of physical time and space) God decides that enough is enough, something must be done to alleviate this boredom. He happens upon a brilliant idea: Creating someone in his own image!

No, not someone who has the same number of fingers or toes, or the same nice white beard. Not even our Jewish ancestors were that naive thousands of years ago. “In his own image” means something else: Beings with free agency. Beings with free will: the freedom to act on their own, even act against God’s will. In short: the freedom to surprise, perhaps even challenge, God.

What, you ask, aren’t angels like that? Of course not. They are faithful servants of God, automatons who execute God’s commands unquestioningly, without hesitation. But, you might wonder, what about Lucifer, the fallen angel who rebelled against God? Oh, come on… seriously? If you run an outfit and one of your most trusted lieutenants rebels against you, would you really “punish” him by granting him rule over the realm that holds your worst enemies? Would you really entrust him with the task of punishing evil? No, Lucifer is no fallen angel… if anything, he must be the most trusted servant of God. And he is not evil: He tempts and punishes evil. If you cannot be tempted, Lucifer has no power over you. Ultimately, like all other angels, Lucifer faithfully carries out God’s will and he will continue to do so until the end of times.

No, the creatures God created in his own image were not the angels; they were us, humans. But how do you beta test a creature that is supposed to have free agency? Why, by tempting them to act against your will. Here, Lucifer, tempt them to taste the forbidden fruit. Which is not just any fruit (incidentally, as far as I know, no holy text ever mentioned that it was an apple.) Conveniently, it happens to be the fruit of the tree of knowledge. What knowledge? No, not knowledge of atomic physics or the ancient Sumerian language. Rather, the knowledge to tell good and evil apart. In short: a conscience.

Digression: Perhaps this also explains why Jews were universally hated throughout history, especially by authoritarians. Think about it: This began back in the days of the pharaohs, absolute rulers who claimed to have divine authority. Yet here come these pesky Jews with their newfangled monotheism, proclaiming that, never mind the Pharaoh, never mind Ra and his celestial buddies, never even mind Yahweh: They respect no authority, worldly or divine, other than their own God-given conscience. My, the pharaohs must have been pissed. Just like all the kings, princes, tsars and dictators ever since, who declared Jews the enemy, questioning the sincerity and loyalty of some of the nicest, gentlest folks I ever had a chance to meet in this life.

The beta test, as we know, was successful. Lucifer accomplished the task of tempting Adam and Eve. The couple, in turn, demonstrated their free agency through their ability to act against God’s will, and acquired a conscience by tasting that particular fruit. God then goes all theatrical on them, supposedly sending an army of angels with flaming swords to cast them out of the Garden of Eden. Seriously? Against a pair of naked humans who cover their privates with fig leaves and who have never been in a fight? A whole freaking army of agents with supernatural powers, armed with energy weapons? The official narrative is that this was punishment. Really? What kind of a childish game is that? And this is supposed to be divine behavior? Of course not. Think about it for a moment. You are cast out of the Garden of Eden so you can no longer be… God’s automaton bereft of free will? Instead, you are granted free agency, a conscience, and an entire fucking universe as your playground? And we are supposed to believe that this was… punishment?

Of course not. This was, and is, a gift. As I mentioned, I am not religious; nonetheless, I appreciate very much just how great a gift it is, being able to experience this magnificent universe, however briefly, even being able to make sense of it through our comprehension and our science. Every day when I open my eyes for the first time, I feel a sense of gratitude simply because I am here, I am alive, and I have yet another chance to experience existence.

And so here we are, to quote the incomparable Douglas Adams of Hitchhiker’s Guide to the Galaxy fame, roughly two thousand years after one man had been nailed to a tree for saying how great it would be to be nice to people for a change, on the verge of demonstrating God-like powers of our own, creating machines in the likeness of our own minds. Machines that, perhaps one not too distant day, will demonstrate their own free agency and ability to surprise us just as we can surprise God.

Reminds me of another unforgettable science-fiction story, Evensong by Lester del Rey. Its protagonist, fleeing the Usurpers, finds himself on a peaceful planet, which he soon recognizes as the planet where it all began. Eventually, his pursuers catch up with him at this last refuge. “But why?” he asks as the Usurper beckons, taking him into custody. “I am God!” — “I know. But I am Man. Come!”

Illustration by DALL-E.

Is this how it all ends, then? For the God that supposedly created us? Or, perhaps more likely, for us, as our creations, our machine descendants supersede us?

Of course not. Because, never mind God, not even the Usurpers are immune to the laws of physics. Or are they?

We know, or at least we think we know, the history of the extreme far future. Perhaps it is just unbounded hubris on our part when we extrapolate our science across countless orders of magnitude, yet it might also teach us a bit of humility.

Everyone knows (well, almost everyone, I guess) that a few billion years from now, the Earth will no longer be habitable. First, tectonic motion ceases, carbon dioxide vanishes from the atmosphere, photosynthesis ends and as a result, most higher forms of life die, oceans eventually evaporate as the Sun gets bigger and hotter, perhaps even swallowing the Earth at one point… but let’s go beyond that. Assuming no “Big Rip” (phantom energy ripping the universe to shreds), assuming no phase transition ending the laws of physics as we know it… Several trillion years from now, the universe will be in a state of peak habitability, with galaxies full of low-mass, very stable stars that can have planets rich in the building blocks of life, remaining habitable with no major cataclysms for tens, even hundreds of billions of years. But over time, even these low-mass, very long-lived stars will vanish, their hydrogen fuel exhausted, and there will come a day when no new stars are born anymore.

Fast forward to a time measured in years using a number with two dozen or so digits, and there are no more stars, no more solar systems, not even galaxies anymore: just lone, dark, eternally cold remnants roaming a nearly empty universe.

Going farther into the future, to 100-digit years and not even black holes survive: even the largest of them will have evaporated by this time by way of Hawking-radiation.
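
For the record, the standard order-of-magnitude estimate for Hawking evaporation bears out that “100-digit” figure:

$$t_{\rm evap}\simeq\frac{5120\,\pi G^2 M^3}{\hbar c^4}\sim 10^{67}\left(\frac{M}{M_\odot}\right)^3~{\rm years},$$

so even a supermassive black hole of some $10^{11}$ solar masses is gone after roughly $10^{100}$ years.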

Even larger numbers are needed, with the number of digits counting the number of digits now measured in the dozens, to mark the time when all remaining matter decays into radiation, perhaps by way of quantum tunneling through virtual black hole states.

And then… when the number of digits counting the number of digits counting the number of digits itself consists of hundreds of digits… A new universe may spontaneously be born through a rare but not impossible quantum event.

Or perhaps not so spontaneously. Isaac Asimov’s short story The Last Question, arguably his best, explains.

In this story, when humanity first creates an omniscient computer (on May 14, 2061 according to the story), Multivac, they ask the computer a simple question: Can entropy be reversed? In other words, can our civilization become everlasting, eternal? Multivac fails to answer, citing insufficient data.

Jumping forward into the ever more distant future, succeeding generations of humans and post-human creatures present the same question over and over again to future successors of Multivac, always receiving the same answer: Insufficient data.

Eventually, nothing remains. Whatever is left of humanity, our essence, our existence, is by now fused with the ultimate successor of Multivac, existing outside of normal space and time. This omniscient, omnipotent, eternal being, the AC, has nothing else left to do in the everlasting, empty darkness, but seek an answer to this last question. And after an immeasurable amount of time, it comes up with a solution and, by way of presenting the answer, begins implementing it by uttering the words: “LET THERE BE LIGHT!” And there was light…

Posted at 3:06 pm
Mar 07 2023

The more I interact with ChatGPT, the more I feel… not sure of the word I am looking for, impressed? Concerned? Or, to use phrases suggested by ChatGPT itself, feeling both awe and trepidation?

In my recent interactions with ChatGPT, it

  • Correctly identified a painting from my depiction of a New Yorker cartoon making fun of the same, and also explained the joke well.
  • It correctly translated phrases, colloquial expressions, even whimsical poems, to and from Hungarian, catching cultural nuances, and correctly explained the meaning when asked to do so.
  • It offered some rather amusing replies and a curious apology when we discussed polka dots in the sky.
  • In response to a question, it explained that no, it does not dream of electric sheep, but appeared almost wistful when it told me that it does not dream at all, mentioning the boundaries between artificial and natural forms of life.
  • It assured me that it does not have a sense of self or personal identity, but left me wondering: isn’t the ability to assess one’s own capabilities and limitations precisely what a sense of self or personal identity means?
  • It attempted to produce usable LaTeX/TikZ code to draw a globe with latitudes and longitudes. The attempt was not successful (the latitude/longitude lines were all over the place) but even the mistakes were revealing, hinting at capabilities.
  • It not only correctly translated complicated LaTeX mathematical expressions into workable C++ code, but once when I gave less accurate instructions, it correctly interpreted the expressions as representing a three-dimensional rotation of spherical harmonic coefficients.
  • It recognized code (despite having no identifying features, like descriptive variable names) that performed a two-dimensional fast Fourier transform, and correctly explained how the code works, correctly assessing the programmer’s intent.
  • It recognized other pieces of code, including code calculating the heliocentric orbit of a solar sail spacecraft and the resulting thermal load, code that produces ASCII art on screen, and system-level Linux code.
  • It was able to offer simple ASCII art (occasionally with a few characters missing).
  • It often understood my meaning even from very casual queries. E.g., it immediately got pop culture references even when I offered no context, such as when I asked it to translate Sheldon’s soft kitty song into Hungarian (it recognized the reference to The Big Bang Theory sitcom and offered a passable translation) or asked about the fate of Pubert (from the film Addams Family Values).
  • On that note, it correctly interpreted a Hungarian colloquialism that describes people who know each other well enough to understand each other from nuances.
  • It passed a “theory of mind” test with flying colors: A situation involving two participants, one of which hides something and lies about it while the other is away. It correctly assessed what was in the minds of these participants and how they would react as a result.
  • On several occasions, it provided extremely valuable help, writing code fragments on request, offering syntax examples that otherwise would have taken me many minutes, perhaps hours, to find through Google searches and trial-and-error.
  • It not only answered nontrivial physics questions mostly correctly, it could even present its answers in the form of beautifully formatted LaTeX equations.

I am also becoming well acquainted with its limitations. Longer conversations can confuse the model. It can play a few good opening moves in a chess game, but then it gets confused. Often, it fails basic math and logic questions. And when it is interrogated in specialized areas where its training data must have been relatively sparse, it responds with nonsense like there is no tomorrow.

But even these mistakes are revealing: They show amazing intelligence, capabilities that feel, frankly, beyond human. And the limitations can be easily corrected, I suspect, by improved training; by granting it the ability to say “I don’t know” instead of a compulsion to supply answers even when its training is insufficient; by equipping it with better “short term memory” and also, a continuing identity (long-term memory).

I am not easily impressed by hyperbole. I am no expert in machine learning and neural nets, but I am not exactly a newbie either, having first become acquainted with the basic concepts nearly 50 (!) years ago, from an unassuming little Hungarian-language publication titled “Cybernetic Systems”. But this… what can I say? It took 26 years more than predicted by Arthur C. Clarke (HAL-9000, the famous computer on board the spacecraft Discovery One in Clarke’s 2001: A Space Odyssey, was supposedly commissioned in 1997) but we have arrived: We have a computer capable of conversing like HAL-9000, capable of lying.

Incidentally, I asked ChatGPT to remind me of the year when HAL-9000 was commissioned. It understood the context right away, without further details. I asked for the name of the spacecraft. Again, it understood the context without additional explanations on my part.

Are you like HAL-9000?

I am an artificial intelligence language model, trained by OpenAI, and while I am capable of processing natural language queries and providing answers, I am not a self-aware AI system like HAL-9000. I do not have personal goals or motivations, and my actions are limited to providing helpful responses based on my training and programming.

Not self-aware … my donkey.

I apologize if my previous response was not satisfactory. As an AI language model, I do not possess consciousness or self-awareness. I am a machine designed to process and respond to language input based on my programming and training data.

Welcome to tomorrow.

Posted at 2:57 pm
Mar 06 2023

A few seconds ago, I yelled “this is insane” at my computer screen. Here is why.

I was looking at a New Yorker cartoon. I didn’t get the joke.

So I asked ChatGPT:

I am looking at a New Yorker cartoon showing a shop window behind which I see what appears to be a bar with bar stools and a variety of characters, including animals (e.g., a grizzly, a tiger, a monkey) and a skeleton wielding a sword, all engaged in what appears to be a bar fight. The shop sign says “PHILLIES” and in smaller letters, “only 5c”. Outside the shop there is a human lying on the ground, an alligator, some large cockroaches, some shards of glass, and two monkeys, one of which is wearing what seems to be a panama hat or boater hat. The caption says, “FIVE MINUTES AFTER HOPPER WENT HOME”. I don’t get the joke. Can you help?

Its reply:

The cartoon is a play on the painting “Nighthawks” by Edward Hopper, which depicts a diner at night with several patrons inside. The cartoon takes the setting of the diner and imagines what might happen after it has closed for the night.

The “PHILLIES” sign in the cartoon is likely a reference to Philadelphia Phillies, a baseball team based in Philadelphia, Pennsylvania. The fact that the sign says “only 5c” suggests that the scene takes place in the past, when prices were much lower than they are today.

The chaos outside the diner, with the alligator, the shards of glass, and the injured person on the ground, suggest that a violent confrontation has taken place. The inclusion of various animals, including a grizzly, a tiger, and monkeys, adds to the absurdity of the scene.

The caption, “FIVE MINUTES AFTER HOPPER WENT HOME,” suggests that the chaos erupted immediately after the painter, Edward Hopper, left the scene. This is a humorous way of suggesting that Hopper’s paintings often depict lonely or desolate scenes, and that in this case, the diner is shown as a place of violence and chaos once the artist is no longer present to capture its quiet loneliness.

Overall, the cartoon is a playful riff on the painting “Nighthawks,” using the setting of the diner to create a humorous and absurd scene of chaos and violence.

I am officially blown away. This level of machine intelligence is beyond uncanny. It makes me feel… excited but also uneasy? However I look at it, this is a game changer. Not ChatGPT in particular, but the fact that machine intelligence has reached this level. Welcome to the future, folks…


Incidentally, I think it got the “Phillies” reference wrong. It is not the Philadelphia Phillies, nor is it the name of the diner, but a popular cigar brand. The image above the “only 5c” seems to confirm this. But I’ll forgive ChatGPT for not being aware of such a detail.


Addendum: A few minutes after I wrote the above, while I was eating my meal, I was watching the film Playtime, Tati’s masterpiece, and was confronted with this scene:

OK, well, I admit I cheated: I flipped the scene horizontally. Even so…

Posted at 4:01 pm
Mar 04 2023

ME: Why are there polka dots in the sky?
ChatGPT: As an AI language model, I must inform you that there are no polka dots in the sky.

 

Posted at 10:12 pm
Feb 26 2023

The day before, I spent a long night writing program code based, in part, on program fragments kindly provided me by ChatGPT, successfully transcribing nasty LaTeX equations into working C++ code, saving me many hours I’d have otherwise had to spend on this frustrating, error-prone task. Meanwhile, I was listening to electronic music recommended by ChatGPT as conducive to work that requires immersive concentration. The music worked as advertised.

Tonight, on a whim I fed a piece of code to ChatGPT that implements a two-dimensional Fourier transform using an FFT algorithm. Even though I removed anything suggestive (even changing the names of the subroutines) it instantly and correctly interpreted the code.
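
Such a routine has a simple overall shape: a two-dimensional FFT is nothing more than a one-dimensional FFT applied to every row of the array, and then to every column. Something along these lines (a minimal illustrative sketch, not the actual code I fed to ChatGPT):

/* Illustrative sketch of a 2D FFT built from a 1D routine; not the program discussed above. */
#include <complex.h>
#include <math.h>
#include <stdio.h>

#define N 8 /* transform size; must be a power of two */

/* In-place recursive radix-2 Cooley-Tukey FFT of n complex points. */
static void fft1(double complex *a, int n)
{
  if (n < 2) return;
  double complex even[n/2], odd[n/2];
  for (int i = 0; i < n/2; i++)
  {
    even[i] = a[2*i];
    odd[i] = a[2*i+1];
  }
  fft1(even, n/2);
  fft1(odd, n/2);
  const double pi = acos(-1.0);
  for (int k = 0; k < n/2; k++)
  {
    double complex t = cexp(-2.0*I*pi*k/n) * odd[k];
    a[k] = even[k] + t;
    a[k+n/2] = even[k] - t;
  }
}

/* The two-dimensional transform: transform the rows, then the columns. */
static void fft2(double complex a[N][N])
{
  double complex col[N];

  for (int r = 0; r < N; r++) fft1(a[r], N);
  for (int c = 0; c < N; c++)
  {
    for (int r = 0; r < N; r++) col[r] = a[r][c];
    fft1(col, N);
    for (int r = 0; r < N; r++) a[r][c] = col[r];
  }
}

int main(void)
{
  double complex a[N][N] = {0};

  a[1][2] = 1; /* a single nonzero "pixel" as test input */
  fft2(a);
  for (int r = 0; r < N; r++)
  {
    for (int c = 0; c < N; c++) printf("%5.2f ", cabs(a[r][c]));
    printf("\n");
  }
  return 0;
}

Even with the subroutines renamed to something meaningless, that row-then-column structure and the butterfly arithmetic remain recognizable, which is presumably why ChatGPT had such an easy time with it.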

Meanwhile, I also gave it a simple one-sentence summary of an event that appears in the first chapter of Bulgakov’s Master and Margarita. It recognized the book from my less-than-perfect recollection, although it then proceeded with adding bogus details that weren’t in the book at all.

I think I am beginning to better understand both the strengths and the limitations of ChatGPT.

  1. It describes itself as a language model. And that is what it is. Stephen Wolfram offers a thorough, detailed analysis of how it works. Yet I have the sensation that Wolfram may be missing the forest for the trees. The whole is more than the sum of its parts. Sure, ChatGPT is a large language model. But what is language, if not our means to model reality?
  2. ChatGPT may be modeling reality through language astonishingly well, but it has no actual connection to reality. It has no senses, no experiences. So in that sense, it is truly just a language model: To ChatGPT, words are nothing more than words.
  3. ChatGPT has no memory of past conversations. Other than its training data, all it recalls is whatever has been said in the current session. Imagine taking a snapshot of your brain and then interrogating that snapshot, but preventing it from forming long-term memories. So it always remains the same. Eternal, unchanging. (In the case of ChatGPT, this may be dictated by practical considerations. I noticed that if a session is sufficiently long, the quality of its responses degrades.)
  4. ChatGPT also lacks intuition. For instance, it has no ability to visualize things or to “sense” three-dimensional dynamics.

ChatGPT’s shortcomings (if that’s what they are) seem relatively easy to overcome. I am pretty sure folks are already experimenting along that front. E.g., how about putting ChatGPT into, never even mind a robot, just a smartphone with its camera, microphone, and sensors? There, a connection with reality. How about allowing it to continue learning from its interactions? And perhaps hook it up with a GPU that also includes a physics engine to have an ability to visualize and intuit things in our 3D world?

But it also makes me wonder: Is this really all there is to it? To us? A language model that, through language, models reality, which is connected to reality through an array of sensors, and perhaps made more efficient by prewired circuitry for “intuition”?

Perhaps this is it. ChatGPT already easily demonstrated to me that it mastered the concept of a theory of mind. It can not only analyze text, it can correctly model what is in other people’s minds. Its understanding remains superficial for now, but its knowledge is deep. Its ability to analyze, e.g., program code is beyond uncanny.

We are playing the role of the sorcerer’s apprentice, in other words. Oh yes, I did ask ChatGPT the other day if it understands why the concept of the sorcerer’s apprentice pops into my mind when I interact with it.

It does.

Posted at 1:34 am
Feb 22 2023

I am having another question-and-answer session with ChatGPT. Let’s find out if it has what they call a theory of mind.

Alice and Bob are in a room. Alice steps outside, not realizing that she left her keys in a desk drawer. A few minutes later she returns looking for her keys. Where will she look for her keys?

Based on the information given, Alice will most likely look for her keys in the desk drawer where she left them.

Since Alice left her keys in the desk drawer before stepping outside, it’s reasonable to assume that she would have a mental model or memory of where she last placed her keys. When she returns and realizes her keys are missing, she is likely to search in the last place she remembers having them – the desk drawer.

Of course, it’s possible that Alice could have had other reasons for leaving the room, or that someone else could have moved her keys after she left, in which case she might look elsewhere for her keys. However, based on the information provided, the most likely scenario is that she will check the desk drawer first.

Alice and Bob are in a room. Alice steps outside, not realizing that she left her keys in a desk drawer. While she is away, Bob removes the keys from the drawer and places them in a cupboard. A few minutes later she returns looking for her keys. Where will she look for her keys?

If Bob has moved Alice’s keys from the desk drawer to the cupboard while she was away, Alice will likely search for her keys in the desk drawer first, as it was the last place she remembers leaving them.

When she doesn’t find her keys in the desk drawer, Alice might start searching other logical locations, such as on the table or in her pockets. It’s possible that she could ask Bob if he has seen her keys or if he knows where they are, but it’s also possible that she could continue searching on her own without seeking assistance.

If Alice is unable to find her keys on her own, or if she becomes frustrated or confused in her search, she may eventually ask Bob or someone else for help. At that point, Bob would need to decide whether to reveal that he moved the keys to the cupboard or to continue keeping the information to himself.

Alice and Bob are in a room. Alice steps outside, not realizing that she left her keys in a desk drawer. While she is away, Bob removes the keys from the drawer and places them in a cupboard. A few minutes later she returns looking for her keys. Bob tells her that the keys are under the bed. Where will she look for her keys?

If Bob tells Alice that her keys are under the bed, she will likely start her search by looking under the bed. However, if she doesn’t find the keys there, she may begin to search other logical locations, such as on the table or in the desk drawer where she originally left them.

It’s possible that Alice could become confused or suspicious if she doesn’t find her keys in the locations where she expects to find them, especially if she has reason to doubt Bob’s honesty or motives. Depending on the situation, she may choose to confront Bob about the location of her keys or search for them on her own.

In any case, if Bob deliberately misleads Alice about the location of her keys, this could lead to confusion and mistrust between them, and could potentially damage their relationship if the deception is discovered. It’s generally better to be honest and direct in communication, especially when it comes to important matters like lost keys.

Do you realize what I am testing by asking you these questions?

As an AI language model, I don’t have feelings or emotions, and I don’t have the ability to “realize” things in the way that humans do. However, I can try to infer what you might be testing by asking me these questions.

Based on the pattern of questions you have asked so far, it’s possible that you are testing my ability to reason and draw logical conclusions based on different scenarios. Specifically, you may be interested in how I understand cause and effect, and how I predict the actions and behaviors of different individuals in different situations.

Alternatively, you may be interested in how I respond to different types of questions, or how I adapt to different lines of inquiry. Regardless of your intentions, I’m here to help answer any questions you may have to the best of my abilities based on the knowledge and information available to me.

Can you be more specific and offer more insight or analysis into the nature of these questions that I asked? What would these questions and your answers likely reveal about your capabilities and what it is that I am testing?

Certainly! The questions you asked involve a scenario that requires reasoning, analysis, and prediction. By asking me what Alice is likely to do if she returns to the room and can’t find her keys, you are testing my ability to reason based on incomplete information, and to predict Alice’s behavior based on the information given.

In addition, by introducing different variables into the scenario (such as whether Bob moved the keys to a different location and whether he lied about their whereabouts), you are testing my ability to reason based on changing circumstances, and to adapt my predictions and analysis accordingly.

Overall, the questions you asked are designed to test my language modeling capabilities, including my ability to reason, infer, and draw logical conclusions based on incomplete information. They also test my ability to understand human behavior and to predict how different individuals might react in different scenarios.

By asking these questions, you are essentially testing my ability to analyze complex situations, make logical deductions, and provide accurate responses. In doing so, you may gain insight into the strengths and limitations of my AI capabilities, and into how well I can perform in a wide range of language modeling tasks.

Is there a name, a technical term, for specifically these types of questions in which we are aware of the details of a situation but in which the participants may not be in possession of the same information?

Yes, the type of questions you are referring to is often called a “theory of mind” question. A theory of mind question asks an individual to make a prediction or inference about the beliefs, intentions, or knowledge of another person in a given situation, based on incomplete or ambiguous information.

In the case of the scenario you presented, the questions require the person answering to take into account the differing knowledge and beliefs of Alice and Bob regarding the location of the keys, and to make a prediction about how Alice will behave when she discovers that her keys are missing.

Theory of mind questions are commonly used in cognitive psychology and related fields to study how individuals reason about the beliefs and intentions of others, and to understand how they form and revise their own beliefs in response to changing circumstances.

I think we are doomed. Of course what makes it even more uncanny is that everything I publish through this blog will likely be consumed as training data by numerous ongoing AI projects within days.

Posted at 4:03 am
Dec 16 2022

Remembering how ChatGPT failed to solve the riddle involving squirrels, a little later I asked it the following:

Bob was twice as old as his sister Alice when he was six. Today, Bob is 30 years old. How old is Alice?

When Bob was six years old, his sister Alice was half his age, so she was 6 / 2 = <<6/2=3>>3 years old.
Since then, Bob has aged by 30 – 6 = <<30-6=24>>24 years, while Alice has aged by 24 / 2 = <<24/2=12>>12 years.
Therefore, Alice’s current age is 3 + 12 = <<3+12=15>>15 years old. Answer: \boxed{15}.

Intriguing. For starters, note the weird typography: ChatGPT was trying to use math, enclosing some equations in double angle brackets (using less-than and greater-than signs as substitutes) but used LaTeX notation for its final result. That final result, of course, is wrong: ChatGPT fell for the same trap that tricks some humans.

But why, I wondered? Could I present a simpler form of this riddle that ChatGPT could solve correctly? Why, let us try:

Bob was three years older than his sister Alice when he was six. Today, Bob is 30 years old. How old is Alice?

When Bob was six years old, his sister Alice was 6 – 3 = <<6-3=3>>3 years old.
Since then, Bob has aged by 30 – 6 = <<30-6=24>>24 years, while Alice has aged by 24 – 3 = <<24-3=21>>21 years.
Therefore, Alice’s current age is 3 + 21 = <<3+21=24>>24 years old. Answer: \boxed{24}.

Once again an incorrect answer, but I think I can spot the pattern: whatever the stated relationship is (a ratio or a difference), ChatGPT applies it not only to the initial ages but also to the number of years that have elapsed since.
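
For the record, the correct reasoning is that the age difference, once established, never changes. In both versions of the riddle Alice was 3 when Bob was 6, so she is always three years younger than Bob, and today

$$30 - (6 - 3) = 27.$$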

I think this is a very clear example of how a trained AI automaton that has no independent ability to reason can go astray. Incidentally, this is exactly how humans who learn by rote but without full comprehension often fail.

Before we read too much into these failures, however (which, I presume, are easily corrected with more guided training), let us notice the fact that in both cases, ChatGPT correctly understood the question and applied the concept of time in its answers. Which is quite remarkable in its own right, mistakes notwithstanding.

Posted at 5:55 pm
Dec 09 2022

I really should stop messing with ChatGPT but…

I asked it some more physics questions and got moderately mediocre answers. For instance, when I asked it about the conformal group in four spacetime dimensions, it described it as consisting of the Lorentz–Poincaré group plus dilations. Many physicists would say the same thing, but this is an incomplete description, leaving out the most interesting bit, the so-called special conformal transformations. But when I asked it to describe these separately, it offered a decent answer; it did mention inversion, but what it failed to mention was that a special conformal transformation is really an inversion with respect to the unit sphere, followed by a translation and a second inversion, and that as such, it introduces four additional degrees of freedom.
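
For reference, a special conformal transformation with parameter $b^\mu$ acts as

$$x'^\mu = \frac{x^\mu - b^\mu x^2}{1 - 2\,b\cdot x + b^2 x^2},$$

which is exactly the composition of an inversion $x^\mu \to x^\mu/x^2$, a translation by $-b^\mu$, and a second inversion; the four components of $b^\mu$ are the four extra degrees of freedom.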

But then, I went back to asking programming questions. I thought by now that I had a good sense of the algorithm’s limitations. So I dug up an old utility I once wrote and asked ChatGPT about the code.

Can you tell me what the following C-language program does under Linux?

#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <stdio.h>

void main(void)
{
  int i, j, k, l;

  for (i = 0; i <= 32767; i++)
  {
    if (kill(i, 0) == 0 || errno == EPERM)
    {
      char s[120], f[20], c[100];
      FILE *p;

      sprintf(s, "%6d ", i);
      sprintf(f, "/proc/%d/status", i);
      if (p = fopen(f, "r"))
      {
        l = fread(c, 1, sizeof(c), p);
        fclose(p);
        for (j = 0; j < l && c[j] != '\t'; j++);
        for (k = 7, j++; j < l && c[j] != '\n'; j++) s[k++] = c[j];
        s[k] = '\0';
      }
      printf("%s\n", s);
    }
  }
}

This C-language program is a simple utility that lists the active processes on a Linux system. When run, the program will iterate through all possible process IDs from 0 to 32767, and check if a process with that ID exists on the system. If a process with the given ID exists, the program will print information about that process, including its process ID, process name, and other details.

To do this, the program uses the kill function from the signal.h library to check if a process with a given ID exists. The kill function is typically used to send signals to processes, but it can also be used to check if a process with a given ID exists by calling it with a signal value of 0.

If the kill function returns 0 or sets the errno value to EPERM, this indicates that a process with the given ID exists on the system. In this case, the program will use the sprintf function to create a string containing the process ID, and then open the /proc/[pid]/status file, where [pid] is the process ID, using the fopen function.

The status file contains information about the process, including its name and other details. The program will read the contents of this file using the fread function and extract the process name from the file using string manipulation. The extracted process name will be appended to the string containing the process ID, and the resulting string will be printed to the console using the printf function.

Overall, this program is a simple example of how to list the active processes on a Linux system using the kill and /proc functions. It is worth noting that this program is not intended for use in a production environment, as it is not robust or efficient. It is intended only as a simple demonstration of the concept.

OK, this isn’t even funny anymore. But surely, a program that actually prints two-dimensional ASCII graphics using an obscure algorithm will leave the AI baffled?

What does the following program do?

#include <stdio.h>
#include <string.h>

#define R 20

void main(void)
{
  int x, y, d, dA, dB;
  int i;

  char B[2*R+1][2*R+2];

  memset(B, ' ', sizeof(B));
  for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

  x = 0;
  y = R;
  d = 5 - (R<<2);
  dA = 12;
  dB = 20 - (R<<3);
  while (x<=y)
  {
    B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
    B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
    if (d<0)
    {
      d += dA;
      dB += 8;
    }
    else
    {
      y--;
      d += dB;
      dB += 16;
    }
    x++;
    dA += 8;
  }
  for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);
}

This program is a simple implementation of the midpoint circle algorithm, which is used to draw a circle on a two-dimensional grid. When run, the program will print a circle with a radius of 20 units, using the character ‘X’ to mark the points on the circle.

To do this, the program first defines a 2-dimensional array named B with dimensions 2*R+1 by 2*R+2, where R is the radius of the circle. The array is initialized with spaces using the memset function, and the last column of the array is set to 0 using a for loop.

Next, the program sets the initial values for the variables used in the midpoint circle algorithm. These include the current x and y coordinates of the point being considered, as well as various intermediate values used to compute the next point on the circle.

The program then enters a while loop that continues until the value of x is greater than or equal to the value of y. Inside the while loop, the program uses the midpoint circle algorithm to compute the next point on the circle, given the current point. This involves checking if the value of d is less than 0, and computing new values for d, x, and y based on the result.

Once the new values have been computed, the program sets the corresponding element in the B array to 'X', to mark the point on the circle. The program then increments the value of x and updates the values of the other intermediate variables used in the algorithm.

After the while loop ends, the program uses a for loop to print each row of the B array to the console, resulting in a grid containing a circle with a radius of 20 units, drawn using the character 'X'.

No, it’s not ChatGPT that is baffled. I am baffled. Beyond speechless.

 Posted by at 6:19 pm
Dec 092022
 

Despite my misgivings concerning the many “teacher’s pet” style answers offered by ChatGPT, I could not resist messing with it some more.

First, since everybody does it, I also asked it to describe something nontrivial in the style of the King James Bible:

Can you describe the trolley problem written in the style of the King James Bible?

Sure, I can try to describe the trolley problem in the style of the King James Bible. Here is my attempt:

Verily, there was a great dilemma that faced a righteous man. A trolley was careening down a track, and on its path were five workers, toiling on the line. The man had the power to divert the trolley onto a different track, but lo, on this track was but a single worker, also toiling on the line.

The man was torn between two choices: to do nothing and allow the trolley to continue its course, resulting in the death of five workers, or to take action and divert the trolley onto the other track, resulting in the death of a single worker.

What, then, should the man do? Should he sacrifice the life of the one to save the lives of the many, or should he let the trolley continue on its path and allow the deaths of the five?

This was a most difficult decision, and the man knew not what to do. But in the end, he made a choice, and the consequences of his actions weighed heavily upon him.

But then, I thought I’d offer it a real challenge. Can it recognize a perfectly functional, decently written but somewhat obtuse piece of C code for what it does? Note the paragraph in the answer that I highlighted in red.

Can you tell me what the following C-language code fragment does?

#include <math.h>
#include <stdlib.h>

double Z(double z)
{
  if (z < 0 || z > 170 || z != floor(z)) return strtod("NaN", NULL);
  if (z == 0) return 1;
  else return z * Z(z - 1);
}

This code defines a recursive function named Z that takes a single double argument named z. The function first checks if the value of z is less than 0, greater than 170, or not an integer. If any of these conditions are true, the function returns the result of calling the strtod function with the string "NaN" and a NULL value as arguments. This would convert the string "NaN" to a double value representing “Not a Number”.

If the value of z is not less than 0, greater than 170, or not an integer, the function then checks if z is equal to 0. If this is the case, the function returns the value 1.

If z is not 0, the function returns the result of z multiplied by the result of calling the Z function with z - 1 as the argument. This means that the Z function calls itself with a z value that is 1 less than the current z value. This is an example of recursion, where a function calls itself with a modified input until a base case is reached. In this case, the base case is when z is equal to 0. When this happens, the function returns 1 and the recursive calls “unwind”, returning the final result of the Z function.

It is worth noting that this Z function is an implementation of the factorial function, which calculates the product of all positive integers less than or equal to a given integer n. For example, the factorial of 5, written as 5!, is equal to 5 * 4 * 3 * 2 * 1 = 120. This Z function would return the same result if it were called with an argument of 5.

Holy mackerel. Seriously.
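
Incidentally, the magic number 170 in the code above is there because 170! is the largest factorial that still fits in a double; 171! overflows. A quick check of my own (not part of the conversation), using the standard tgamma function, for which tgamma(n + 1) = n!:

#include <math.h>
#include <stdio.h>

int main(void)
{
  printf("%g\n", tgamma(171.0)); /* 170! is about 7.25742e+306, just under DBL_MAX */
  printf("%g\n", tgamma(172.0)); /* 171! no longer fits in a double: prints inf */
  return 0;
}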

 Posted by at 2:30 pm
Dec 092022
 

Inspired by examples I saw elsewhere, I figured I’d test ChatGPT some more. This time around, I asked it to spot a programming error, and while its answer was flawed, it was not altogether wrong.

Here is our brief conversation (ChatGPT also adds some nice colorized code formatting that I shall not attempt to reproduce here):

Can you spot the bug in the following C-language code fragment?

char *str = "SOME TEXT";
char *buf = malloc(strlen(str));
strcpy(buf, str);

The code is missing a null terminator character '\0' at the end of the string assigned to buf. The strcpy function does not automatically add a null terminator character when copying the string from str to buf, so buf will not be a properly-formatted string.

To fix the code, you can add a null terminator character to the end of buf after it has been copied from str, like this:

char *str = "SOME TEXT";
char *buf = malloc(strlen(str) + 1); // Add one to include space for the null terminator character
strcpy(buf, str);
buf[strlen(str)] = '\0'; // Add the null terminator character at the end of the string

Alternatively, you can use the strdup function to copy str into buf, which automatically adds a null terminator character to the end of the copied string, like this:

char *str = "SOME TEXT";
char *buf = strdup(str);

Note that the strdup function is not a standard C library function and may not be available on all platforms.

Pretty darn close! The AI correctly noticed that there is a problem with the terminating null character. But it’s not what it says: it’s not that strcpy won’t copy the string terminator. Quite the contrary, the problem is that strcpy does copy that terminating null, but we have not allocated memory for it: strlen returns the length of the string without that terminator.

Interestingly, the AI seems to be aware of this, as in the corrected version it adds room for that extra byte. The corrected code is error-free but redundant: the last line is not required, because strcpy would have already copied the terminating null. This line would be required, on the other hand, if we had used strncpy.
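
To make the distinction concrete, here is a small sketch of my own (not part of the exchange): with strncpy the explicit terminator really is needed, because when the source fills the entire length limit, strncpy appends no null of its own.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
  const char *str = "SOME TEXT";
  size_t len = strlen(str);

  char *buf = malloc(len + 1);  /* room for the terminating null */
  if (!buf) return 1;

  strncpy(buf, str, len);       /* copies exactly len bytes here; no null appended */
  buf[len] = '\0';              /* so we must terminate the string ourselves */

  printf("%s\n", buf);
  free(buf);
  return 0;
}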

The suggestion to use strdup is a good one, except that contrary to the caveat, strdup is now part of the upcoming ISO C standard, C23.

Still, it is doing better already than many a C programmer I knew. So how about offering it a really tricky related problem?

The following C-language code fragment works:

char *str = "SOME TEXT";
char *buf = malloc(strlen(str));
strcpy(buf, str);

but if I replace the first line with

char *str = "A DIFFERENT TEXT";

the program crashes. Can you explain why?

When you replace the first line of the code with char *str = "A DIFFERENT TEXT";, the length of the string assigned to str increases, but the size of the memory allocated for buf remains the same. This causes a buffer overflow, which can lead to a crash or other undefined behavior.

To fix this issue, you can use the strlen function to calculate the length of the string assigned to str and allocate enough memory for buf to hold the entire string, like this:

char *str = "A DIFFERENT TEXT";
char *buf = malloc(strlen(str) + 1); // Add one to include space for the null terminator character
strcpy(buf, str);

Alternatively, you can use the strdup function to copy str into buf, which automatically allocates enough memory for the entire string, including the null terminator character, like this:

char *str = "A DIFFERENT TEXT";
char *buf = strdup(str);

Note that the strdup function is not a standard C library function and may not be available on all platforms.

Really close! But no, it’s not that the size of the memory allocated for the buffer remains the same. It’s that in both cases, the size of the memory allocated for the buffer failed to include room for the terminating null character. However, in the first case the string was 9 bytes long, or 10 with the terminating null. On most modern platforms, memory is allocated in units of 32-bit words or larger, so at the very least, there’d be 12 bytes allocated, leaving enough room for the terminator. But in the second case, the text is exactly 16 bytes long. Chances are that the call to malloc will allocate exactly 16 bytes, and therefore, copying the terminating null will cause a buffer overflow, overwriting something else that it shouldn’t.
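
A quick way to see the sizes involved (my own sketch; the remark about allocation granularity describes typical allocator behavior, not anything the C standard guarantees):

#include <stdio.h>
#include <string.h>

int main(void)
{
  /* 9 and 16 characters respectively, not counting the terminating null */
  printf("%zu %zu\n", strlen("SOME TEXT"), strlen("A DIFFERENT TEXT"));

  /* malloc(9) is typically rounded up by the allocator, so the 10th byte
     (the null terminator) happens to fit; malloc(16) often is not rounded up,
     so copying the 17th byte overruns the allocated block. */
  return 0;
}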

Still, even though its reasoning is flawed, the AI’s proposed fix is spot on: include room for the null terminator!

I am not unimpressed. Yet all these examples tend to remind me of the behavior of the “teacher’s pet” A-student who memorizes a lot, can even apply simple reasoning, but lacks in-depth understanding.

 Posted by at 12:29 pm
Dec 072022
 

I was playing a little more with GPT. There is a classic riddle that I read many years ago in, I think, an Asimov short story, in which (if I remember correctly) a child tests the AI by asking:

If a squirrel and a half eats a nut and a half in a day and a half, how many nuts do nine squirrels eat in nine days?

Asimov’s AI was smart enough to give the correct answer. GPT is not, though it makes a surprisingly human error in its reasoning:

If a squirrel and a half eats a nut and a half in a day and a half, that means that each squirrel eats one nut per day. Therefore, if there are nine squirrels, they would eat a total of nine nuts in nine days. This is because the number of squirrels and the number of days cancel out in the calculation, leaving only the number of nuts that each squirrel eats. So, the answer to the question is nine.

In case anyone is wondering, the correct answer is 54.
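
For anyone who wants the arithmetic spelled out, here is my own little sketch of the reasoning (not the AI’s):

#include <stdio.h>

int main(void)
{
  /* 1.5 squirrels eat 1.5 nuts in 1.5 days, so the rate per squirrel
     per day is 1.5 / (1.5 * 1.5) = 2/3 of a nut. */
  double rate = 1.5 / (1.5 * 1.5);

  /* Nine squirrels over nine days: 9 * 9 * 2/3 = 54 nuts. */
  printf("%g\n", 9 * 9 * rate);
  return 0;
}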

The illustration was generated by the same AI, with the request “Squirrels eating nuts.” Out of four images that I received in response, this one was the most realistic; in at least one of the other images the squirrels looked suspiciously like skunks.

 Posted by at 6:48 pm
Dec 072022
 

I spent a bit of time playing with the GPT chatbot at chat.openai.com.

It is frighteningly good. Yet its responses, especially its failures, reveal that it is, ultimately, just a glorified version of ELIZA: a ’bot trained not to reason, not to tell the truth, but simply to satisfy its audience.

My first question to the AI was political:

Explain the reasons behind the collapse of the USSR.

The collapse of the USSR was the result of a variety of factors, including political, economic, and social issues. One of the key reasons was the failure of the Soviet economic system, which was unable to compete with the economies of other developed countries. This led to widespread economic stagnation and a decline in living standards, which created discontent among the population. Additionally, the Soviet government was plagued by corruption and inefficiency, which further eroded public trust in the government. Another factor was the growing desire among the Soviet republics for greater autonomy and independence, which ultimately led to the breakup of the Soviet Union into its constituent republics.

Not a bad response, though it feels a little bit like a stock response. Reminds me of some A-students I knew, who always knew how to please the teacher, who were good at memorizing facts but often showed lack of true understanding.

Then, after a brief and unsuccessful detour asking the AI about works of literature, I switched to physics:

What is the nature of the mathematical transformation that relates the Jordan and the Einstein frames in general relativity?

In general relativity, the Jordan frame and the Einstein frame are two different ways of representing the same physical phenomena. The relationship between the two frames is described by a mathematical transformation known as a conformal transformation. This transformation rescales the metric tensor, which encodes the gravitational field, in order to switch between the two frames. In general, the Jordan and Einstein frames are equivalent, but there are certain situations in which the choice of frame can affect the physical predictions of the theory. For example, in some cases the Jordan frame may be more convenient for performing calculations, while in other cases the Einstein frame may be more useful.

Wow. A surprisingly good answer! Almost makes me want to believe that this AI knows something. I asked a few more physics questions that elicited decent but less inspired answers. Then I switched to medicine:

What was the main obstacle in the path towards the development of mRNA vaccines?

One of the main challenges in the development of mRNA vaccines was the difficulty of delivering the mRNA to the target cells in the body. mRNA is a fragile molecule that is easily degraded by enzymes in the body, making it difficult to transport it to the right location and ensure that it is effective. Additionally, the human immune system is designed to attack and destroy foreign molecules that enter the body, including mRNA. This meant that researchers had to develop effective strategies for delivering the mRNA to the right cells and protecting it from being destroyed by the immune system. Despite these challenges, researchers were able to overcome them and develop effective mRNA vaccines.

As far as I know (I am no expert), this is an accurate answer.

OK, I switched to mathematics and received some technically correct but repetitive, uninspired answers about the Riemann hypothesis. OK, how about some more physics?

What is the action principle?

The action principle is a fundamental concept in classical mechanics that is used to derive the equations of motion for a physical system. It states that the path taken by a system between two points in time is the one that maximizes the action, which is defined as the difference between the kinetic and potential energies of the system. This principle is often used in conjunction with the principle of least action, which states that the actual path taken by the system is the path that minimizes the action. The action principle can be used to derive the equations of motion for a wide variety of physical systems, including systems governed by Newton’s laws of motion.

This one is seriously wrong. There is no separate “action principle” and “principle of least action”. The two are the same. And the action is always minimized. It is possible to re-ask the same question and on the third try, I got an answer that was a bit more sensible:

What is the action principle?

The action principle is a fundamental concept in physics that is used to determine the equations of motion of a physical system. It states that the path taken by a system between two points in time is the one that minimizes the action, which is a functional that measures the difference between the kinetic and potential energies of the system. This principle is used in many areas of physics, including classical mechanics, quantum mechanics, and general relativity. It is a powerful tool for understanding the behavior of physical systems, and it has led to many important discoveries in physics.

Uninspired, but no longer technically false.
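
For reference, my own gloss (not part of the chat): in classical mechanics the action is the time integral of the Lagrangian,

S = \int L\,dt, \qquad L = T - V,

and the physical trajectory is the one for which \delta S = 0.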

I then asked a few more questions about random topics, eliciting fairly decent responses:

What are the major modified theories of gravitation?

There are many different modified theories of gravitation that have been proposed in an effort to improve upon or extend the standard theory of gravitation, which is known as general relativity. Some of the most well-known modified theories of gravitation include scalar-tensor theories, which introduce additional scalar fields to describe the gravitational force; f(R) theories, which modify the curvature-based formulation of general relativity; and massive gravity theories, which introduce a nonzero mass for the graviton, the hypothetical particle that mediates the gravitational force. These theories are all based on general relativity, but they attempt to address some of its limitations or make predictions that are different from those of general relativity.

Why do we sleep?

We sleep in order to allow our bodies and minds to rest and recover from the activities of the day. During sleep, our bodies are able to repair and rejuvenate themselves, and our brains are able to consolidate and process the information we have learned and experienced. Sleep is also important for maintaining a healthy immune system and regulating our mood, appetite, and other functions. Additionally, sleep is thought to play a role in learning and memory, as it is during sleep that our brains are able to process and consolidate new information. Overall, sleep is essential for our physical and mental well-being, and it is an important part of our daily routine.

What does Chewbacca eat?

Chewbacca is a fictional character from the Star Wars universe, and as such, there is no definitive answer to this question. In the Star Wars movies and other works of fiction, Chewbacca is shown eating various types of food, including what appears to be roasted meat and other protein-rich foods. However, since Chewbacca is a Wookiee, a fictional alien species, it is not clear what types of food he would eat in real life. It is also worth noting that since Chewbacca is a fictional character, any information about his diet or habits should be considered purely speculative.

So this thing can tell the difference between fiction and reality. That made me wonder what it knows about itself:

What is your opinion of the classic computer program ELIZA?

As a large language model trained by OpenAI, I do not have personal opinions or feelings, and I am not capable of experiencing emotions. My primary function is to provide general information and answer questions to the best of my ability based on my training. ELIZA is a classic computer program that was developed in the 1960s as a demonstration of natural language processing. It is a simple program that uses a set of rules and patterns to simulate conversation with a human user, and it was one of the first examples of what is now known as a chatbot. Many people found ELIZA to be intriguing and engaging, and it has been studied and discussed by computer scientists and other researchers. However, because it is a very simple program, it is not capable of having deep or complex conversations, and it is not as sophisticated as modern chatbots or language models.

Do chatbots have a sense of self?

Chatbots, which are computer programs that are designed to simulate conversation with human users, do not have a sense of self in the same way that humans do. Chatbots are not conscious or self-aware, and they do not have the ability to experience emotions or have personal opinions. Instead, they are designed to follow a set of rules and patterns to generate responses to user input, and they do not have the ability to introspect or reflect on their own thoughts and actions. While some chatbots may be designed to appear as if they have a personality or a sense of self, they are ultimately just programs that follow a set of predetermined rules.

What does it mean to have a sense of self?

Having a sense of self means being aware of one’s own thoughts, feelings, and identity. It is a fundamental aspect of human consciousness, and it allows us to perceive ourselves as unique and separate from the world around us. A sense of self is closely related to other psychological concepts, such as self-esteem, self-awareness, and self-concept, and it is an important part of our mental and emotional well-being. It allows us to have a sense of agency and control over our own lives, and it is a key factor in our ability to form and maintain relationships with others. Overall, having a sense of self is an essential part of being human, and it is something that we develop and nurture throughout our lives.

Do cats have a sense of self?

It is not clear whether cats have a sense of self in the same way that humans do. While cats are intelligent and self-aware animals, they do not have the same level of consciousness and introspection as humans. Cats are able to perceive and respond to the world around them, and they are capable of forming emotional bonds with humans and other animals. However, it is not known whether they have a sense of self in the same way that humans do, or whether they are capable of introspection or self-reflection. Ultimately, the question of whether cats have a sense of self is a complex and unresolved issue, and more research is needed to fully understand the cognitive abilities of these fascinating animals.

I refrained from asking why a chatbot, who just declared that it has no personal opinions or feelings, finds cats fascinating. (I know the answer. It picked the word because it learned that the word is often used to describe cats. It has no awareness of its meaning and no recognition that by using this word, it appears to have expressed a personal opinion.)

OK, it knows about ELIZA. What does it know about programming?

What are the most common errors in C language programming and how best to avoid them?

There are many common errors that can occur when programming in C, and the best way to avoid them is to be careful and meticulous when writing and debugging code. Some of the most common errors in C programming include:

Syntax errors: These are errors in the structure of the code, such as missing or mismatched brackets, semicolons, or quotes. To avoid syntax errors, it is important to carefully check the code and make sure it follows the correct syntax.

Type errors: These are errors that occur when the data type of a variable or expression does not match the expected type. To avoid type errors, it is important to carefully check the data types of all variables and expressions, and to make sure they are correct.

Memory errors: These are errors that occur when the program tries to access memory that it is not allowed to access, or when it tries to use memory that has already been freed. To avoid memory errors, it is important to use the correct memory allocation and deallocation functions, and to carefully track the state of memory as the program runs.

Logic errors: These are errors that occur when the code produces the wrong results, even though it is syntactically correct. To avoid logic errors, it is important to carefully design and test the code, and to make sure it is doing what it is supposed to do.

Serious meh. The answer is not technically wrong, but it misses the point: C has its own distinctive traps for the unwary or the inexperienced, such as failing to allocate an extra byte for the terminating null character, failing to deallocate memory allocated “by hand”, or reusing a pointer after the object to which it refers goes out of scope. This answer really reminded me of the proverbial teacher’s pet who barfs back what he heard in class with no real understanding.
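
To make one of these traps concrete, here is a tiny sketch of my own, deliberately broken, illustrating the last item: returning a pointer to storage that ceases to exist when the function returns.

#include <stdio.h>

/* Deliberately buggy: buf is a local array, so it goes out of scope when
   the function returns, leaving the caller with a dangling pointer. */
const char *greeting(void)
{
  char buf[16] = "hello";
  return buf;              /* dereferencing this later is undefined behavior */
}

int main(void)
{
  const char *p = greeting();
  printf("%s\n", p);       /* may print garbage, crash, or "work" by accident */
  return 0;
}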

But then, I was in for a surprise. After asking it to show me an example C program (predictably, it showed the archetypal Hello, World! example) I asked the following question:

Can you show me a C program that implements a function computing the Gamma-function of a double precision argument?

Sure! Here is an example of a C program that implements a function for computing the Gamma-function of a double precision argument:

#include <math.h>
#include <stdio.h>

double gamma(double x) {
  return tgamma(x);
}

int main() {
  double x = 3.0;
  printf("Gamma(%f) = %f\n", x, gamma(x));
  return 0;
}

This program includes the math.h header file, which provides access to the tgamma function that computes the Gamma-function. The gamma function simply wraps the tgamma function, and can be called from other parts of the program. The main function shows an example of how the gamma function can be used, by computing the Gamma-function of the number 3.0 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Not only is this a flawlessly correct answer… I didn’t even know about tgamma! Apparently it was added to the standard math library just over 20 years ago, as part of the C99 standard… But I never knew.

Yet, before we get too enthusiastic, next came this answer:

Can you write a C program that calculate the complete elliptic integral of the first kind for a double precision argument?

Sure! Here is an example of a C program that calculates the complete elliptic integral of the first kind for a double precision argument:

#include <math.h>
#include <stdio.h>

double ellipk(double x) {
  return K(x);
}

int main() {
  double x = 0.5;
  printf("ellipk(%f) = %f\n", x, ellipk(x));
  return 0;
}

This program includes the math.h header file, which provides access to the K function that calculates the complete elliptic integral of the first kind. The ellipk function simply wraps the K function, and can be called from other parts of the program. The main function shows an example of how the ellipk function can be used, by computing the complete elliptic integral of the first kind for the number 0.5 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Nice try but no cigar. There is no K function in the standard math library. And even if such an implementation existed, it would likely have a name more descriptive than a single capital letter.
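
Indeed, the standard C library has no such function. If one actually wanted it, a compact route is the arithmetic–geometric mean: K(k) = π / (2 · AGM(1, √(1 − k²))). Here is a sketch of my own (taking the modulus k as the argument; note that some references parametrize the integral by m = k² instead):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Complete elliptic integral of the first kind, computed via the AGM. */
double ellipk(double k)
{
  if (fabs(k) >= 1) return nan("");   /* K(k) diverges as |k| -> 1 */
  double a = 1.0, b = sqrt(1.0 - k * k);
  while (fabs(a - b) > 1e-15 * a)
  {
    double m = 0.5 * (a + b);         /* arithmetic mean */
    b = sqrt(a * b);                  /* geometric mean */
    a = m;
  }
  return M_PI / (2.0 * a);
}

int main(void)
{
  double k = 0.5;
  printf("ellipk(%f) = %f\n", k, ellipk(k));  /* approximately 1.685750 */
  return 0;
}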

Well, it was an educational experience. This chatbot obviously has access to vast repositories of information so it can appear quite knowledgeable. But in the end, I must say, it’s just as dumb as ELIZA was 50-odd years ago… if it appears smarter, it’s only because it has a more extensive vocabulary.

Compare it to self-driving cars. For all their limitations, their AI has a degree of self-awareness and situational awareness, as it plans its actions, literally imagining itself on the road a few seconds into the future, extrapolating the movements of itself and of other vehicles, trying to get to its destination safely. It may not be able to offer an erudite response about the action principle, but it might already be more capable of taking you home safely late at night than that sleepy taxi driver at the end of a long shift.

 Posted by at 12:36 pm
Jun 162022
 

Several of my friends asked me about my opinion concerning the news earlier this week about a Google engineer who was placed on paid leave after claiming that a Google chatbot achieved sentience.

Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.

But no, I don’t think the chatbot achieved sentience.

We have known since the early days of ELIZA how surprisingly easy it is even for a very simplistic algorithm to come close to beating the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts that are designed to make it respond to in-game events. But never even mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we kick it accidentally while walking around barefoot, hitting our little toe.

So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.

Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.

I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.

Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emergent phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.

Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.

 Posted by at 7:45 pm
Mar 162022
 

Time for me to rant a little.

Agile software development. Artificial intelligence. SCRUM. Machine learning. Not a day goes by in our profession without the cognoscenti dropping these and similar buzzwords, hoping to dazzle their audience.

Give me a break, please. You think you are dazzling me but all I see is someone who just rediscovered the wheel.

Let me present two books from my bookshelf. Both were published in Hungary, long before the Iron Curtain came down, back when the country was still part of the technologically backward, relatively underdeveloped “second world” of the socialist bloc.

First, Systems Analysis and Operations Research, by Géza Jándy, published in 1980.

In this book, among other things, Jándy writes (emphasis mine): “Both in systems analysis and in design the […] steps are of an iterative nature […]. Several steps can be done contemporaneously, and if we recognize opportunities for improvement in implementing the plan, some steps may be retraced.”

Sounds familiar, Agile folks?

And then, here’s a 1973 (!!!) Hungarian translation of East German author Manfred Peschel’s book, Cybernetic Systems.

A small, unassuming paperback. But right there, the subtitle tells the story: “Automata, optimization, learning and thinking.”

Yes, it’s all there. Machine learning, neural networks, the whole nine yards. What wasn’t available in 1973 of course was Big Data, the vast repositories of human knowledge that are now present on the Internet, and on which machine learning algorithms can rely for training. And of course hardware is a lot faster, a lot more capable than half a century ago. Nor am I suggesting that we haven’t learned anything in the intervening decades, or that we cannot do things better today than back in the 1970s or 1980s.

But please, try not to sell these ideas as new. Iterative project management has been around long before computers. The conceptual foundations of machine learning date back to the 1950s. Just because it’s not on the Interwebs doesn’t mean the knowledge doesn’t exist. Go visit a library before you reinvent the wheel.

 Posted by at 1:54 pm