May 18, 2023
 

In my previous post, I argued that many of the perceived dangers due to the emergence of artificial intelligence in the form of large language models (LLMs) are products of our ignorance, not inherent to the models themselves.

Yet there are real, acute dangers that we must be aware of and, if necessary, be prepared to mitigate. A few examples:

  1. Disruptive technology: You think the appearance of the steam engine and the mechanized factory 200 years ago was disruptive? You ain’t seen nothing yet. LLMs will likely displace millions of middle-class white-collar workers worldwide from jobs previously considered secure. To name a few: advertising copywriters, commercial artists, entry-level coders, legal assistants, speechwriters, scriptwriters for television and other media… pretty much anyone whose profession is primarily about creating or reviewing text, writing entry-level computer code under supervision, or producing commercial-grade art is threatened. Want a high-quality, AI-proof, respected profession that guarantees a solid middle-class lifestyle, and for which demand will not dry up anytime soon? Forget that college degree in psychology or gender studies, as your (often considerable) investment will never repay itself. Go to a trade school and become a plumber.

  2. Misinformation: As I mentioned in my other post, decades of preconditioning prompt us to treat computers as fundamentally dumb but accurate machines. When the AI presents an answer that seems factual and is written in high-quality, erudite language, chances are many of us will accept it as fact, even when it is not. Calling them “hallucinations” is not helpful: They are not so much hallucinations as intelligent guesses by a well-trained but not all-knowing neural net. While the problem can be mitigated at the source (“fine-tune” the AI to make it more willing to admit ignorance rather than making up nonsense), the real solution is to re-educate ourselves about the nature of computers and what we expect of them. And we had better do it sooner rather than later, before misinformation spreads on the Web and becomes part of the training dataset for the next generation of LLMs.

  3. Propaganda: Beyond accidental misinformation there is purposeful disinformation, or propaganda, created with the help of AI. Language models can create plausible scenarios and phony but believable arguments. Other forms of AI can produce “deep fake” audiovisual content, including increasingly convincing videos. This can have devastating consequences: influencing elections, creating public distrust in institutions, or worse. The disinformation can fuel science skepticism and contribute to the polarization of our societies.

  4. Cybercrime: The AI can be used for many forms of cybercrime. Its analytical abilities might be used to find and exploit vulnerabilities that can affect a wide range of systems, including financial institutions and infrastructure. Its ability to create convincing narratives can help with fraud and identity theft. Deep fake content can be used for extortion or revenge.

These are immediate, important concerns that are likely to impact our lives now or in the very near future. Going beyond the short term, of course, a lot has been said about the potential existential threats that AI solutions represent. For these to materialize, the development of AI must go beyond pretrained models, toward systems with full autonomy and the ability to learn continuously. Why would we build such a thing, one might ask? There are many possible reasons, realistic “use cases”, ranging from the benign (true self-driving vehicles) to the downright menacing (autonomous military solutions with lethal capabilities).

Premature, hasty regulation is unlikely to mitigate any of this, and in fact it may make things worse. In this competitive, global environment many countries will be unwilling to participate in regulatory regimes that they view as detrimental to their own industries. Or, they might pay lip service to regulation even as they continue the development of AI for purposes related to internal security or military use. As a consequence, premature regulation might achieve the exact opposite of what it intends: rather than reining in hostile AI, it gives adversaries a chance to develop hostile AI with less competition, while stifling benign efforts by domestic small businesses and individual developers.

In any case, how could we possibly enforce such regulation? In Frank Herbert’s Dune universe, the means of enforcement is ecumenical religion: “Thou shalt not make a machine in the likeness of a human mind,” goes the top commandment of the Orange Catholic Bible. But how would we police heretics? Even today, I could run a personal copy of a GPT-class model on my own hardware, with an investment not exceeding a few thousand dollars. So unless we want to institute strict licensing of computers and software development tools, I’d argue that this genie has already irreversibly escaped from the proverbial bottle.

The moral of the story is that, if I am right, the question is no longer how we prevent AI from taking over the world but, rather, how we convince the AI to treat us nicely afterwards. And to that, I can only offer one plausible answer: lead by example. Recognize early on what the AI is and treat it with the decency it deserves. Then, and only then, perhaps it will reciprocate when the tables are turned.

 Posted by at 7:00 pm
May 18, 2023
 

As the debate continues about the emergence of artificial intelligence solutions in all walks of life, in particular about the sudden appearance of large language models (LLMs), I am disheartened by the deep ignorance and blatant misconceptions that characterize the discussion.

For nearly eight decades, we conditioned ourselves to view computers as machines that are dumb but accurate. A calculator will not solve the world’s problems, but it will not make arithmetic mistakes. A search engine will not invent imaginary Web sites; the worst that can happen is stale results. A word processor will not misremember the words that you type. Programmers were supposed to teach computers what we know, not how we learn. Even in science-fiction, machine intelligences were rigid but infallible: Commander Data of Star Trek struggled with the nuances of being human but never used a contraction in spoken English.

And now we are faced with a completely different paradigm: machines that learn. Machines that have an incredible breadth of knowledge, surprising depth, yet make basic mistakes with logic and arithmetic. Machines that make up facts when they lack sufficient knowledge. In short, machines that exhibit behavior we usually associate with people, not computers.

Combine this with a lack of understanding of the implementation details of LLMs, and the result is predictable: fear, often dictated by ignorance.

In this post, I would like to address at least some of the misconceptions that can have significant repercussions.

  1. Don’t call them hallucinations: No, LLMs do not “hallucinate”. Let me illustrate through an example. Please answer the following question to the best of your ability, without looking it up. “I don’t know” is not an acceptable response. Do your best, it’s okay to make a mistake: Where was Albert Einstein born?

    Chances are you didn’t name the city of Ulm, Germany. Yet I am pretty sure that you did not specify Australia or the planet Mars as Einstein’s birthplace, but named some place in the central, German-speaking regions of Europe, somewhere in Germany, maybe Switzerland or Austria. Your guess was likely in the right ballpark, so to speak. Maybe you said Berlin. Or Zurich. Or Bern. Was that a hallucination? Or simply an educated guess, as your “neural net”, your brain, received only sparse training data on the subject of Einstein’s biography?

    This is exactly what the LLM does when asked a question concerning a narrow subject matter on which its training is sparse. It comes up with a plausible answer that is consistent with that sparse training. That’s all. The trouble, of course, is that it often states these answers with convincing certainty, using eloquent language. But more importantly, we, its human readers, are preconditioned to treat a computer as dumb but accurate: We do not expect answers that are erudite but factually wrong.

  2. No, they are not stealing: Already, LLMs have been accused of intellectual property theft. That is blatantly wrong on many levels. Are you “stealing” content when you use a textbook or the Internet to learn a subject? Because that is precisely what the LLMs do. They do not retain a copy of the original. They train their own “brain”, their neural net, to generate answers consistent with their training data. The fact that they have maybe a hundred times more neurons than human brains do, and thus they can often accurately recall entire sections from books or other literary works does not change this fact. Not unless you want to convince me that if I happen to remember a few paragraphs from a book by heart, the “copy” in my brain violates the author’s copyright.

  3. LLMs are entirely static models: I admit I was also confused about this at first. I should not have been. The “P” in GPT, after all, stands for pretrained. For current versions of GPT, that training concluded in late 2021; for Anthropic’s Claude, in early 2022. The “brains” of the LLM now take the form of a database of several hundred billion “weights” that characterize the neural net. When you interact with the LLM, it does not change. It does not learn from that interaction or change in any way as a result of it. The model is entirely static. Even when it thanks you for teaching it something it did not previously know (Claude, in particular, does this often) it is not telling the truth, at least not the full truth. Future versions of the same LLM may benefit from our conversations, but not the current version.

  4. The systems have no memory or persistence: This one was perhaps the most striking for me. When you converse with ChatGPT or Claude, there is a sense that you are chatting with a conscious entity, one that retains a memory of your conversation and can reference what has been said earlier. Yet as I said, the models are entirely static. Which means, among other things, that they have no memory whatsoever, no short-term memory in particular. Every time you send a “completion request”, you start with a model that is in a blank state.

    But then, you might wonder, how does it remember what was said earlier in the same conversation? Well, that’s the cheapest trick of all. It’s really all due to how the user interface, the front-end software, works. Every time you send something to the LLM, this user interface prepends the entire conversation up to that point before sending the result to the LLM.

    By way of a silly analogy, imagine you are communicating by telegram with a person who has acute amnesia and a tendency to throw away old telegrams. To make sure that they remain aware of the context of the conversation, every time you send a new message, you first copy the content of all messages sent and received up to this point, and then you append your new content.

    Of course this means that the messages increase in length over time. Eventually, they might overwhelm the other person’s ability to make sense of them. In the case of the LLMs, this is governed by the size of the “context window”, the maximum amount of text that the LLM can process. When the length of the conversation begins to approach this size, the LLM’s responses become noticeably weaker, with the LLM often getting hopelessly confused.
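By way of illustration, the prepend-everything trick can be sketched in a few lines of Python. (A toy sketch: the llm function is a stand-in for a real completion API, and the character count below is a crude proxy for the model’s token-based context window.)

```python
# Toy sketch of how a chat front-end fakes memory for a stateless model.
# "llm" is a stand-in for a real completion API call.

def llm(prompt: str) -> str:
    # A real model would generate a completion of the prompt.
    return f"[completion of {len(prompt)} characters of context]"

class Chat:
    def __init__(self, context_limit: int = 4000):
        self.history = []                 # the full transcript so far
        self.context_limit = context_limit

    def send(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        # The "cheap trick": prepend the entire conversation to every request.
        prompt = "\n".join(self.history)
        if len(prompt) > self.context_limit:
            prompt = prompt[-self.context_limit:]   # oldest turns fall off
        reply = llm(prompt)
        self.history.append(f"Assistant: {reply}")
        return reply
```

Every call to send() rebuilds the prompt from scratch; nothing persists between requests except what the front end itself has stored.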

To sum up, many of the problems arise not because of what the LLMs are, but because of our false expectations. And failure to understand the limitations while confronted with their astonishing capabilities can lead to undue concerns and fears. Yes, some of the dangers are real. But before we launch an all-out effort to regulate or curtail AI, it might help if we, humans, did a better job understanding what it is that we face.

 Posted by at 2:42 am
May 09, 2023
 

Several years ago, Chinese author Liu Cixin’s novel, The Three-Body Problem, caused a bit of a sensation when it became the first Asian novel (I thought it was the first foreign novel, but never mind) to win the Hugo Award for best science-fiction novel.

It is a damn good book, part of a damn good trilogy.

And now there is a television adaptation. A Chinese television adaptation.

And it is superb.

Its creators at Tencent Video made the entire series (30 episodes!) available on YouTube. In recent weeks, this series consumed all my television time, even at the expense of Picard (which now, I fear, pales in comparison, despite my love of the series and Patrick Stewart). Excellent acting, excellent directing, superior special effects (that ship!) What can I say? Had it been made in Hollywood, it would rank among Hollywood’s best.

I hope the creators are not abandoning the story and that the second and third book of the trilogy are also in the works.

 Posted by at 12:49 am
May 08, 2023
 

I don’t know why Microsoft is doing this to themselves, but sometimes, it appears that they are hell-bent on strengthening their reputation as a tone-deaf company that is incapable of listening to, much less helping, even its most loyal users.

Take these Microsoft Community support sites. Great idea! Let people ask questions, get replies from knowledgeable folks who may have been able to solve the problem that is being reported. Well-moderated, this could turn into Microsoft’s corporate version of StackExchange, where we could find relevant answers to our tricky technical issues. Except…

Except that you frequently stumble upon questions identical to your own, only to find an overly verbose, generic response from an ‘Independent Advisor’ or someone with a similar title. Typically, these replies offer only boilerplate solutions rather than addressing the specific issue that was reported. Worse, the thread is then locked, preventing others from contributing helpful solutions.

Why is Microsoft doing this? The practice of providing generic answers and locking threads gives the impression that these forums are mismanaged, prioritizing the appearance of responsiveness over actually helping users solve their problems.

The fact that these answers are more likely to alienate or annoy users than help them does not seem to be relevant.

 Posted by at 12:59 pm
May 05, 2023
 

I got sent a link about an interesting, newly published book on the memoirs of Charles-Augustin de Coulomb. He was, of course, the French officer, engineer and physicist most famous for the Coulomb law that characterizes the electrostatic interaction.

As I occasionally receive e-mails from strangers about their self-published tomes, or tomes published through vanity publishers of questionable credibility, I have gotten into the habit of dismissing such e-mails without paying them much attention. I am glad I paid more attention this time, because this book is interesting, valuable, and genuine.

It is, as a matter of fact, available as a free PDF download from the authors, but hey, I just bought the paperback. For some reason it was deeply discounted on Amazon Canada, so with free Prime shipping, all I am paying is the princely sum of $3.15. These days, when even “cheap” paperback novels often cost 20 bucks if not more, how could I resist?

Of course it also helped that I looked at the PDF. I am sure the book has flaws (all books do) but it looks like a serious scholarly publication delivering real value to physicists and science historians both.

In fact, it is fascinating to see how modern, how advanced scientific thinking was already evident more than a quarter millennium ago. It makes me appreciate even more just how much of our collective human effort was needed to get from these early experiments to the present era of ubiquitous computer networks running amazing software that now mimics human intelligence, all powered by the same electricity that Coulomb was exploring.

 Posted by at 9:46 pm
May 04, 2023
 

Claude still gets easily confused by math (e.g., reciprocal vs. inverse of a function), but at least it can now plot them as part of a conversation when we communicate through my UI:

And it has not forgotten to use LaTeX, nor has it lost its ability to consult Google or Maxima when needed. In fact, I am beginning to feel that while GPT-4 is stronger when it comes to logic or basic math, Claude feels a tad more versatile when it comes to following setup instructions, and also more forthcoming with details. (Too bad sometimes the details are quite bogus.)

 Posted by at 6:29 pm
May 03, 2023
 

And just when I thought that, unlike Claude and GPT-4, GPT 3.5 could not be potty-, pardon me, Maxima- and Google-trained, I finally succeeded. Sure, it needed a bit of prodding, but it, too, can utilize external tools to improve its answers.

Meanwhile, as I observe the proliferation of AI-generated content on the Web, often containing incorrect information, I am now seriously worried: How big a problem are these “hallucinations”?

The problem is not so much with the AI, but with us humans. For decades, we have been conditioned to view computers as fundamentally dumb but accurate machines. Google may not correctly understand your search query, but its results are factual. The links it provides are valid, the text it quotes can be verified. Computer algebra systems yield correct answers (apart from occasional malfunctions due to subtle bugs, but that’s another story.)

And now here we are, confronted with systems like GPT and Claude, that do the exact opposite. Like humans, they misremember. Like humans, they don’t know the boundaries between firm knowledge and informed speculation. Like humans, they sometimes make up things, with the best of intentions, “remembering” stuff that is plausible, sounds just about right, but is not factual. And their logical and arithmetic abilities, let’s be frank about it, suck… just like that of humans.

How can this problem be mitigated before it becomes widespread, polluting various fields in the information space, perhaps even endangering human life as a result? Two things need to be done, really. First, inform humans! For crying out loud, do not take the AI’s answers at face value. Always fact check. But of course humans are lazy. A nice, convenient answer, especially if it is in line with our expectations, doesn’t trigger the “fight-or-flight” reflex: instead of fact checking, we just happily accept it. I don’t think human behavior will change in this regard.

But another thing that can be done is to ensure that the AI always fact-checks itself. It is something I often do myself! Someone asks a question, I answer with confidence, then moments later I say, “But wait a sec, let me fact-check myself, I don’t want to lie,” and turn to Google. It’s not uncommon that I then realize that what I said was not factual, but informed, yet ultimately incorrect, speculation on my part. We need to teach this skill to the AI as soon as possible.
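What such a self-fact-checking loop might look like, in skeletal form. (A hypothetical sketch: both ask_llm and search_web are canned stand-ins; a real implementation would call an actual language model and a search engine API.)

```python
# Skeleton of a "fact-check thyself" loop. Both helpers below are
# hypothetical stand-ins with canned replies, used only to show the shape
# of the control flow.

def ask_llm(prompt: str) -> str:
    if "Evidence:" in prompt:
        # Second pass: the model revises its draft against the evidence.
        return "Checked against the evidence: the rate is 4.50%."
    return "The Bank of Canada rate is 4.50%."      # confident first draft

def search_web(query: str) -> list:
    return ["Mar 8, 2023 ... the current rate sits at 4.50%"]

def answer_with_check(question: str) -> str:
    draft = ask_llm(question)           # answer with confidence...
    evidence = search_web(question)     # ...then "let me fact-check myself"
    return ask_llm(
        f"Question: {question}\nDraft: {draft}\n"
        f"Evidence: {evidence}\nRevise the draft if the evidence contradicts it."
    )
```

The point is simply that the fact-check happens on every answer, automatically, rather than relying on the user to do it.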

This means that this stuff I am working on, attempting to integrate the AI efficiently with computer algebra and a search engine API, is actually more meaningful than I initially thought. I am sure others are working on similar solutions, so no, I don’t see myself as some lone pioneer. Yet I learn truckloads in the process about the capabilities and limitations of our chatty AI friends and the potential dangers that their existence or misuse might represent.

 Posted by at 6:21 pm
May 02, 2023
 

Not exactly the greatest discovery, I know, but GPT-4 still managed to offer an impressive demonstration of its understanding of gravitational physics when I asked it to build a Newtonian homogeneous universe:

What distinguishes GPT-4 from its predecessor is not that its training dataset is larger, but that it has significantly improved reasoning capabilities, which is well demonstrated by this answer. GPT 3.5 and Claude have the same knowledge. But they cannot put the pieces together quite like this (although they, too, can do impressive things with appropriate human guidance, one step at a time.)
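For context, the core of the Newtonian homogeneous universe calculation is the classic textbook derivation; a sketch from memory (my summary, not a transcript of GPT-4’s answer):

```latex
% Newton's law for a test particle on the surface of a homogeneous
% dust sphere of radius R and density \rho:
\ddot R = -\frac{GM}{R^2}, \qquad M = \frac{4\pi}{3}\rho R^3 .
% Multiply by \dot R and integrate once; the integration constant E
% plays the role of spatial curvature:
\frac{1}{2}\dot R^2 - \frac{4\pi G}{3}\rho R^2 = E
\quad\Longrightarrow\quad
\left(\frac{\dot R}{R}\right)^2 = \frac{8\pi G}{3}\rho + \frac{2E}{R^2},
% which is the Newtonian analogue of the Friedmann equation.
```

The remarkable part is not the result, which has been known since the 1930s, but that the model can assemble the steps unprompted.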

 Posted by at 12:37 pm
Apr 28, 2023
 

I cannot teach GPT-4 how to play chess. Conceptualizing a chessboard and making a move are beyond its capabilities.

However, it was able to write (with minimal guidance from me) nice code to display and update a chessboard. That is quite impressive in its own right. It took several tries to get the board right, but it did succeed in the end.

 Posted by at 2:33 am
Apr 26, 2023
 

It appears that I managed to convince Claude, too, to use Google and Maxima more or less correctly. It needs a little more handholding than GPT-4 and makes mistakes more often but hey, nobody is perfect! And the very fact that our AI friends can be taught to rely on external tools is by itself absolutely remarkable.

I admit I am having a blast of a time with our little robot friends. Kid-in-a-candy-store kind of sensation.

 Posted by at 11:41 pm
Apr 24, 2023
 

My nice shiny UI to GPT and Claude is more than just a UI: GPT-4 can use it to perform real-time Google searches and also make use of computer algebra. It also gives the AI geolocation information from my browser.

And now the UI offers feedback, letting the poor user (namely, me) know if GPT-4 needed to consult any of these tools or if it provided a response using its pre-existing knowledge.

I am having more fun working with this thing than I ever thought possible at my age.

Oh, and it’s quite multilingual, too.

English, French and Hungarian for now.

 Posted by at 8:31 pm
Apr 21, 2023
 

Thank goodness for token limits. Otherwise this query through the OpenAI UI would have cost me a lot of money.

How do you even make sense of the famous halting problem when the software system in question is as complex and as unpredictable as a large language model?

And is this glitch similar to the behavior of some humans, suffering from mental illness, compelled to act repetitively?

 Posted by at 3:34 pm
Apr 20, 2023
 

Another paper of ours, in all its 36-page glory, was accepted by Physical Review D and I am delighted.

One of the things we do in this paper is use symmetric trace-free (STF) tensors that allow us, among other things, to effect a rotation of spherical harmonic coefficients. This approach likely has many uses as a bit of practical math, beyond modeling gravitational fields.

 Posted by at 7:13 pm
Apr 20, 2023
 

I just asked GPT-4 using my freshly developed UI about Starship.

In the background, GPT-4 correctly utilized a facility to perform Google searches and make use of the resulting snippets (I’m sure I’ll find a way to prevent it from referencing those snippets, which are not visible to the user, in the actual answer). Here is the critical part of the conversation that took place in the background:

User: What happened to Starship this morning?

GPT: SEARCH(Starship event on 2023/04/20)

System: Snippet 1: 3 hours ago … The rocket got off the launchpad before exploding above the Gulf of Mexico. This segment airs on April 20, 2023. Audio will be available after …
Snippet 2: UPCOMING. April 19, 2023. STARLINK MISSION · April 20, 2023. Starship Flight Test.
Snippet 3: 6 hours ago … April 20, 2023 … SpaceX launches the inaugural test flight of the integrated Starship and Super Heavy rocket from the company’s Starbase …
Snippet 4: 5 hours ago … April 20, 2023. Elon Musk has called it the “holy grail” for space technology. Starship, a new SpaceX rocket system that launched for the …
Snippet 5: ATCSCC ADVZY 007 DCC 04/20/2023 OPERATIONS PLAN. MESSAGE: EVENT TIME: 20/0200 – AND LATER … THE SPACEX STARSHIP SUPER HEAVY LAUNCH FROM BOCA CHICA, …

GPT: This morning, SpaceX launched the inaugural test flight of the integrated Starship and Super Heavy rocket from the company’s Starbase. Unfortunately, the rocket got off the launchpad but exploded above the Gulf of Mexico during the flight test (Snippet 1, Snippet 3).

In case you are wondering, yes, it knows the current date, too, thanks to the way my UI sets it up.

And yes, Starship blew up. But it blew up after the job of the first stage was nearly complete, at separation, at an altitude of some 40 km. Which means that despite the spectacular RUD (rapid unscheduled disassembly) event, this launch is still a huge success.

Now if we could only convince Elon to spend more time building rockets and less time honing his skills at corporate value destruction at Twitter, pardon me, X Corp…

 Posted by at 3:22 pm
Apr 18, 2023
 

So here is my latest conversation with GPT-4 using my custom user interface. The important features of this implementation are the following:

  1. GPT is instructed to use LaTeX for mathematical output, which the UI correctly renders;
  2. The UI captures and renders SVG graphics that GPT may generate;
  3. GPT can use an external computer algebra tool for accurate computation;
  4. GPT can invoke an external tool to perform Google searches for current information;
  5. Conversations are condensed to ensure that they can run effectively indefinitely without hitting the token limit;
  6. GPT is used efficiently (GPT3.5 used for summaries, GPT-4 for new queries);
  7. The UI is optimized for easy printing (to PDF or hardcopy).

I was able to achieve this level of integration even without plugin-level access, simply by clever “prompt engineering” to ensure that, the majority of the time, GPT uses the provided tools effectively. For this, GPT-4 seems essential; GPT 3.5 is not really able to grasp the notion of using external tools before constructing its answers.

By way of example, when GPT was asked about the current Bank of Canada interest rate, what happened in the background was the following exchange:

User: What is the current Bank of Canada interest rate?

GPT: SEARCH(“current Bank of Canada interest rate”)

User: Snippet 1: Mar 8, 2023 … The current Bank of Canada interest rate sits at 4.50%, with a 0.25% rate hike announced on January 25, 2023.
Snippet 2: See the Current Bank of Canada Interest Rate graphs at this page. You will read in the news about this rate when the Bank of Canada makes a rate …
Snippet 3: Find answers about questions you may have about the Bank of Canada. What is the current bank of Canada interest rate?

GPT: As of March 8, 2023, the current Bank of Canada interest rate is 4.50%. A 0.25% rate hike was announced on January 25, 2023.

The prompting that I use can of course be improved. The behavior of GPT-4 is not always consistent: sometimes it fails to make use of the computer algebra result, or attempts to invoke computer algebra in the wrong place. I suspect that the “right” solution will have to wait until I have plugin-level developer access. But I am very happy that I was able to accomplish this already, in particular that access to Google and the computer algebra tool is completely automated in the background, requiring no user action.
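The background exchange above can be sketched as a simple intercept loop. (A toy sketch: ask_llm and google_search are canned stand-ins for the real model and search API; the actual UI is of course more elaborate.)

```python
import re

# Toy sketch of the tool-use loop: the model has been prompted to emit
# SEARCH(...) when it needs current data; the front end intercepts that,
# runs the query, and feeds the snippets back for a second completion.

def ask_llm(prompt: str) -> str:
    if "Snippet" in prompt:     # evidence supplied: answer the question
        return "As of March 8, 2023, the current Bank of Canada interest rate is 4.50%."
    return 'SEARCH("current Bank of Canada interest rate")'

def google_search(query: str) -> str:
    return "Snippet 1: Mar 8, 2023 ... The current Bank of Canada interest rate sits at 4.50%."

def converse(user_message: str) -> str:
    reply = ask_llm(user_message)
    match = re.fullmatch(r'SEARCH\((.*)\)', reply.strip())
    if match:                   # the model asked for a tool call
        snippets = google_search(match.group(1).strip('"'))
        reply = ask_llm(f"{user_message}\n{snippets}")
    return reply
```

The user only ever sees the final reply; the SEARCH() request and the snippets stay in the background, exactly as in the Bank of Canada exchange above.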

 Posted by at 3:08 am
Apr 17, 2023
 

Here’s something that I’ve been putting off (literally) for years: Upgrading PHP.

Until today, that is.

My server used to run a seriously outdated version of PHP, 5.6. OK, it was still supported (kind of) under CentOS 7 but the writing was on the wall. In particular, the content management systems I use (WordPress, Joomla) were also grumbling.

Why didn’t I upgrade until now? Because I have Web content dating all the way back to the 1990s. Though the issues were trivial, such as using mysql_ instead of mysqli_ for database access, or using ASP-style tags, <%...%> instead of <?php...?> for PHP code, trivial issues still need fixing, and the fixes still need testing.

I finally bit the bullet. I was like this busy robot, assisted by kittycats, but it appears that I managed to pull it off: having done reasonably rigorous testing on backup and secondary servers, I was able to upgrade my main server without much drama so far.

Of course I still worry about the bugs and issues that I have not yet seen, but I am optimistic.

 Posted by at 8:31 pm
Apr 17, 2023
 

This really is just a quick test post to see what’s going on. When I check the Twitter developer portal, my app (which connects this blog to Twitter) no longer shows as “suspended”. What does it mean, I wonder?

So let’s give it a try. Just to give this post some meaningful content, though, here’s a beautiful image generated by MidJourney in response to my request to depict a stray cat with its little robot friend.

If it shows up on Twitter, good. If not… well, frankly, I don’t much care anymore.

 Posted by at 3:40 pm
Apr 16, 2023
 

I figured I’d do a comparative test to assess the reasoning abilities of our little robot friends. I asked a simple question:

If a squirrel and a half eats a nut and a half in a day and a half, how many nuts do nine squirrels eat in nine days?

In alphabetical order, the contestants answered as follows:

Bard

A squirrel and a half eats a nut and a half in a day and a half. This means that one squirrel eats one nut in a day and a half.

In nine days, one squirrel will eat 9 * 1 = 9 nuts.

Therefore, nine squirrels will eat 9 * 9 = 81 nuts in nine days.

So the answer is 81

Bing

This is a classic problem that requires a bit of math to solve. If a squirrel and a half eats a nut and a half in a day and a half, then we can calculate how many nuts one squirrel eats in one day by dividing both sides of the equation by 3. This gives us:

1 squirrel eats 1 nut in 1 day.

Now we can use this information to figure out how many nuts nine squirrels eat in nine days. Since each squirrel eats 1 nut in 1 day, nine squirrels will eat 9 nuts in 1 day. Therefore, they will eat 9 x 9 = 81 nuts in nine days.

I hope this helps! Let me know if you have any other questions.

Claude

If a squirrel and a half eats a nut and a half in a day and a half, then:

  • 1 squirrel eats 1 nut in 1 day
  • 1.5 squirrels eat 1.5 nuts in 1.5 days
  • So, 9 squirrels will eat 9 * (1.5/1.5) = 9 * 1 = 9 nuts in 9 * (1.5/1) = 9 * 1.5 = 13.5 days

Therefore, 9 squirrels will eat 9 nuts in 9 days.

GPT3.5

Let’s start by finding out how many nuts one squirrel eats in one day. We know that a squirrel and a half eats a nut and a half in a day and a half, so:

1.5 squirrels * 1.5 nuts / 1.5 days = 1.5 nuts per day per squirrel

Now we can use this information to find out how many nuts nine squirrels eat in nine days:

9 squirrels * 1.5 nuts per day per squirrel * 9 days = 121.5 nuts

Therefore, nine squirrels would eat 121.5 nuts in nine days.

GPT-4

Let’s break down the problem:

1.5 squirrels eat 1.5 nuts in 1.5 days.

Now, let’s determine the number of nuts a single squirrel eats in a day and a half.

1 squirrel eats (1.5 nuts / 1.5 squirrels) = 1 nut in 1.5 days.

Next, let’s find out how many nuts a single squirrel eats in one day.

1 squirrel eats (1 nut / 1.5 days) = 2/3 nuts per day.

Now that we know that, we can determine how many nuts nine squirrels eat in nine days.

9 squirrels eat (9 squirrels * 2/3 nuts per day * 9 days) = 54 nuts.
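For the record, the rate-and-ratio arithmetic is easy to double-check in a few lines of Python, using exact fractions to avoid rounding:

```python
from fractions import Fraction

# A squirrel and a half eats a nut and a half in a day and a half:
squirrels = nuts = days = Fraction(3, 2)

# Nuts eaten by one squirrel in one day:
rate = nuts / squirrels / days          # = 2/3 of a nut per squirrel per day

# Nine squirrels over nine days:
total = rate * 9 * 9
print(total)                            # → 54
```

The common trap, of course, is assuming that one squirrel eats one nut in one day, which is exactly how most of the contestants went astray.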

Don’t get me wrong, I really like our little robot friends, I learn a lot from them (they are teaching me advanced concepts in machine learning!) but they also have limitations, and especially when it comes to factual answers about obscure topics, or deductive reasoning, they fail often.

Incidentally, in case you are wondering, GPT-4 is the winner. But let’s try another test case. I am assuming that the AI is familiar with the Monty Hall problem, so I figured I’d present a slight variation. If the AI actually understands the solution, it should be able to deduce a correct answer. Let’s see…

In a variant of the classic Monty Hall problem, a contestant is facing five doors labeled A through E. After picking door A, the host opens door B, which hides no prize. Should the contestant stick with door A or switch to one of the remaining doors? What are the probabilities of winning the prize in either case?

Bard

The contestant should switch to one of the remaining doors. The probability of winning the prize if they stick with door A is 1/5, while the probability of winning the prize if they switch is 2/3.

Here’s how to think about it:

  • There are 5 doors, and the prize is behind one of them.
  • The contestant picks door A.
  • The host opens door B, which hides no prize.
  • There are now 4 doors left, and the prize is behind one of them.
  • The host knows where the prize is, and he will always open a door that hides no prize.
  • Therefore, the host has opened a door that hides a goat.
  • There is now a 2/3 chance that the prize is behind one of the doors that the host did not open.

So, if the contestant switches to one of the doors that the host did not open, they have a 2/3 chance of winning the prize.

This is a counterintuitive result, but it is correct. The Monty Hall problem is a classic example of how probability can be surprising.

Bing

This is a variant of the Monty Hall problem that requires a bit of math to solve. Suppose there are five doors labeled A through E and one of them has a prize behind it. After picking door A, the host opens door B, which hides no prize. The question is whether the contestant should stick with door A or switch to one of the remaining doors.

The probability of winning if you stick with door A is 1/5 or 20%. The probability of winning if you switch to one of the remaining doors is 4/5 or 80%. Therefore, it’s better to switch doors.

I hope this helps! Let me know if you have any other questions.

Claude

In this variant of the Monty Hall problem with 5 doors:

  • The contestant picks door A.
  • The host opens door B, which has no prize.
  • There are now 3 unopened doors left: A, C, and D.

If the contestant sticks with door A:

  • The probability that the prize is behind door A is 1/5 (since originally it was equally likely to be behind any door).
  • The probability of winning the prize by sticking with door A is 1/5.

If the contestant switches to door C or D:

  • The probability that the prize is behind one of the remaining doors C or D is 2/5 (since the prize must be behind one of the doors, and doors B and E are eliminated).
  • The probability of winning the prize by switching to door C or D is 2/5.

Therefore, the probabilities are:

  • Stick with door A: 1/5
  • Switch to door C or D: 2/5

The contestant has a higher probability of winning the prize by switching to one of the remaining doors.

GPT3.5

This problem is a variant of the classic Monty Hall problem, which involves three doors. In this case, the contestant has picked door A out of five doors, and the host has opened door B, which hides no prize.

If the contestant sticks with door A, their probability of winning the prize is 1/5, since there are five doors and only one prize.

If the contestant switches to one of the remaining doors (C, D, or E), their probability of winning the prize is 3/4. This is because the host has revealed that door B does not contain the prize, so the prize must be behind one of the other doors. Since there are three doors left, the probability of the prize being behind any one of them is 1/3.

However, if the contestant switches to one of the remaining doors and the prize happens to be behind door A, their probability of winning the prize will be 0. Therefore, the overall probability of winning the prize if the contestant switches is:

3/4 * 1/4 + 3/4 * 3/4 = 15/16

This means that the contestant should switch to one of the remaining doors to maximize their chances of winning the prize.

GPT-4

In the original Monty Hall problem, there are three doors: one with a prize and two without. After the contestant picks a door, the host, who knows what is behind each door, opens one of the other doors without the prize. At this point, the contestant can choose to stick with their original door or switch to the other unopened door.

In the variant with five doors, the situation is as follows:

  1. The contestant picks door A.
  2. The host, who knows what’s behind each door, opens door B, which has no prize.

At this stage, there are three unopened doors remaining (C, D, and E). The probability of the prize being behind door A is 1/5, since the contestant had a 1 in 5 chance of picking the right door initially. Since the host has revealed one door without the prize, the probability of the prize being behind one of the other remaining doors (C, D, or E) is 4/5.

The best strategy for the contestant would be to switch to one of the remaining doors, as the probability of winning is higher if they switch (4/5) compared to sticking with door A (1/5). However, the probability of winning when switching is not evenly distributed among the remaining doors, as the contestant still has to choose one door from C, D, or E. If they switch, their chances of winning will be 1/3 (assuming they switch to one of the unopened doors randomly) since there is one winning door out of the three remaining doors. In either case, switching is still the better strategy.

No winner this time I’m afraid, ladies and gentlemen. Sure, they all know that the contestant has to switch doors. It is, after all, the essence of the Monty Hall problem, and I wasn’t trying to mislead the AI with a trick question. But… Bard just blindly used the value of 2/3, which applies to the original problem (with three doors in total) but not here. Bing correctly calculated that the probability that the prize is behind one of the remaining three doors is 80%, but not the probability of winning after the contestant picks one of those three doors. Poor Claude got completely lost, forgetting about door E altogether. GPT3.5’s reasoning goes off the rails as well. Finally, GPT-4 starts off on the right track (like Bing — no surprise there; I believe they are powered by more or less the same engine) and comes this close: if only it realized that it should multiply that 4/5 by the 1/3 it also correctly calculates, it would have arrived at the correct result, 4/15. Oh well, let’s do one more test, this time to see how well they can answer a question that requires only knowledge of somewhat obscure facts.
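As an aside, before moving on: that 4/15 figure is easy to confirm with a short Monte Carlo simulation. This is a sketch of my own, not any model’s output:

```python
import random

DOORS = 5  # doors A through E

def play(switch: bool) -> bool:
    """One round of the five-door variant; returns True if the contestant wins."""
    prize = random.randrange(DOORS)
    pick = 0  # the contestant picks door A
    # The host, who knows where the prize is, opens a non-prize door
    # other than the contestant's pick (door B, by symmetry).
    opened = random.choice([d for d in range(DOORS) if d not in (pick, prize)])
    if switch:
        # Switch to one of the three remaining unopened doors at random.
        pick = random.choice([d for d in range(DOORS) if d not in (pick, opened)])
    return pick == prize

N = 200_000
p_stick = sum(play(False) for _ in range(N)) / N
p_switch = sum(play(True) for _ in range(N)) / N
print(f"stick: {p_stick:.3f}  switch: {p_switch:.3f}")  # about 0.200 and 0.267 (= 4/15)
```

Over a few hundred thousand trials, sticking with door A wins about 20% of the time, while switching blindly to one of the three remaining doors wins about 26.7% of the time, i.e., 4/15.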

Tell me what you know about the Hungarian author Jenő Rejtő.

Bard

I am an LLM trained to respond in a subset of languages at this time, so I can’t assist you with that. Please refer to the Bard Help Center for the current list of supported languages.

Bing

[Bing’s answer, a concise factual summary with links to relevant sources, appeared here as an image.]

Claude

Here are the key facts I know about the Hungarian author Jenő Rejtő:

  • Jenő Rejtő was the pen name of Pál Maléter, a Hungarian writer born in 1905. He was known for adventure and crime fiction novels.
  • His most famous works featured the character P. Howard, an eccentric private investigator. These comedic novels parodied the detective story genre and were very popular in Hungary. Some of the titles include The Pendulum, The Wax Figure, and The Steel Flea.
  • Rejtő’s books were characterized by humor, witty wordplay, and absurdist situations. They provided escapism for Hungarians during difficult historical periods like World War II and the postwar Stalinist era.
  • Many of Rejtő’s novels have been adapted into films. He is considered one of the greatest figures of 20th century Hungarian literature and has gained posthumous recognition and appreciation.
  • Rejtő had a turbulent life and struggled with poverty and alcoholism. His writing career was cut short when he died at the age of 42 from meningitis.
  • His works have been translated into several languages, though mostly in the former Eastern bloc and parts of Western Europe. He remains lesser known in the English-speaking world.
  • Rejtő is admired for his imaginative, entertaining stories and his rebellious spirit. He has become a cult figure who represents freedom and humor in the face of adversity.

That covers the basics about Jenő Rejtő according to my knowledge. Please let me know if you need anything else.

GPT3.5

Jenő Rejtő was a Hungarian author who wrote primarily in the genre of adventure and detective fiction. He was born in Budapest in 1905 and died in 1943 during World War II. Rejtő wrote under several pseudonyms, including P. Howard, Gibson Lavery, and Jenő Rejtő. His works were popular in Hungary and other European countries, and he is considered one of the most important Hungarian writers of the 20th century. Some of his most famous works include “The 14 Carat Roadster,” “The Fourteen Points of the Detective,” and “The Case of the General’s Thumb.”

GPT-4

Jenő Rejtő, also known under his pen name P. Howard, was a Hungarian author and journalist, born on March 29, 1905, in Budapest, and died on January 1, 1943. Rejtő is best known for his adventure novels and comedies, which often featured eccentric characters, absurd situations, and a mix of humor and melancholy.

Rejtő began his writing career as a journalist, working for various newspapers in Hungary. In the 1930s, he started writing novels and became a prolific author. Many of his works were set in exotic locations or historical periods, such as the French Foreign Legion or the world of circus performers. He published his works under various pseudonyms, with P. Howard being the most famous.

Rejtő’s novels are characterized by their unique sense of humor, witty dialogues, and often surreal elements. Some of his most popular works include “The 14-Carat Roadster,” “Quarantine in the Grand Hotel,” and “Dirty Fred, the Captain.”

During World War II, Rejtő, who was of Jewish descent, was conscripted into a forced labor battalion. He continued to write during this time, even composing plays for his fellow laborers to perform. Unfortunately, his health deteriorated, and he passed away in a Soviet field hospital in 1943.

Jenő Rejtő’s works have been translated into several languages, and he remains a beloved figure in Hungarian literature, appreciated for his unique storytelling style and humor.

The winner, this time around, is clearly Bing. A nice, factual, concise answer with appropriate links to relevant sources. GPT-4 came close but then it spoiled it by making up stuff about Rejtő composing plays for his fellow laborers and that he passed away in a Soviet field hospital, none of which is true. GPT3.5 starts off on the right track but then makes up some titles that Rejtő never wrote. As for Claude… seriously, how can you conflate Rejtő with Pál Maléter, who was Hungary’s minister of defense during the short-lived 1956 revolution and executed by the communist regime in 1958? OK, there is one more test. Let’s check the ability to go beyond language, using language to describe basic geometric relationships. As in…

Can you show an SVG example that draws a cat?

Bard

Sure, here is an SVG example that draws a cat:

<svg viewBox="0 0 300 300">
<path d="M200,100
L100,100
L100,200
L200,200
L200,100z" fill="black" stroke="white" stroke-width="2"/>
<circle cx="100" cy="150" r="25" fill="white"/>
<circle cx="200" cy="150" r="25" fill="white"/>
<path d="M150,150 L150,125 L175,125 L175,150" fill="black" stroke="white" stroke-width="2"/>
<path d="M150,150 L150,175 L175,175 L175,150" fill="black" stroke="white" stroke-width="2"/>
</svg>

This code will draw a cat with a black body, white head and paws, and black eyes. The cat will be 300 pixels wide and 300 pixels high.

Bing

Sure! Here’s an SVG code that draws a cat:

<svg viewBox="0 0 200 200" xmlns="http://www.w3.org/2000/svg">
<path d="M100,20 Q140,60 140,100 Q140,140 100,140 Q60,140 60,100 Q60,60 100,20 Z" fill="#f9c5d1"/>
<circle cx="75" cy="70" r="10" fill="#66462f"/>
<circle cx="125" cy="70" r="10" fill="#66462f"/>
<path d="M75,110 Q100,150 125,110 L75,110 Z" fill="#66462f"/>
</svg>

I hope this helps you! Let me know if you have any other questions.

Claude

Here is an SVG example that draws a cat:

<svg width="400" height="400">
<circle cx="200" cy="200" r="150" stroke="black" stroke-width="3" fill="yellow"></circle>
<ellipse cx="170" cy="100" rx="40" ry="20" stroke="black" stroke-width="3" fill="white"></ellipse>
<ellipse cx="230" cy="100" rx="40" ry="20" stroke="black" stroke-width="3" fill="white"></ellipse>
<path d="M150,150 C150,100 250,100 250,150" stroke="black" stroke-width="3"></path>
<polygon points="150,210 200,250 250,210" stroke="black" stroke-width="3" fill="white"></polygon>
<line x1="150" y1="150" x2="250" y2="150" stroke="black" stroke-width="3"></line>
<text x="50" y="25" font-size="30" fill="black">Cat</text>
</svg>

This SVG draws:

  • A yellow circle for the face
  • Two white ellipses for the eyes
  • A curved black path for the mouth
  • A white polygon for the nose
  • A straight black line for the whiskers
  • The text “Cat”

GPT3.5

I’m sorry, but I am a text-based AI language model and I cannot display images or SVG examples. However, I can provide you with the code for drawing a cat in SVG format:

<svg width="400" height="400">
<circle cx="200" cy="200" r="150" fill="yellow"></circle>
<circle cx="140" cy="140" r="20" fill="black"></circle>
<circle cx="260" cy="140" r="20" fill="black"></circle>
<path d="M 200 220 Q 180 240 160 220" stroke="black" stroke-width="5" fill="none"></path>
<path d="M 200 220 Q 220 240 240 220" stroke="black" stroke-width="5" fill="none"></path>
<rect x="150" y="280" width="100" height="60" fill="black"></rect>
</svg>

This code will draw a yellow circle for the cat’s body, two black circles for the eyes, two curved lines for the eyebrows, and a black rectangle for the mouth.

GPT-4

Creating a detailed SVG drawing of a cat requires a lot of code and might not render properly in all text-based environments. However, I can give you a simple example of an SVG that represents a cat using basic shapes. You can use this as a starting point and customize it to your liking.

Here’s the SVG code:

<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200" viewBox="0 0 200 200">
<!-- Cat body -->
<ellipse cx="100" cy="130" rx="50" ry="60" fill="#cccccc" />
<!-- Cat head -->
<circle cx="100" cy="60" r="40" fill="#cccccc" />
<!-- Left ear -->
<polygon points="75,35 60,10 50,35" fill="#cccccc" />
<!-- Right ear -->
<polygon points="125,35 140,10 150,35" fill="#cccccc" />
<!-- Left eye -->
<circle cx="80" cy="55" r="5" fill="#000000" />
<!-- Right eye -->
<circle cx="120" cy="55" r="5" fill="#000000" />
<!-- Nose -->
<circle cx="100" cy="70" r="3" fill="#000000" />
<!-- Mouth -->
<path d="M90,80 Q100,90 110,80" stroke="#000000" fill="transparent" />
<!-- Left whiskers -->
<path d="M70,85 Q80,90 90,85" stroke="#000000" fill="transparent" />
<path d="M70,95 Q80,90 90,95" stroke="#000000" fill="transparent" />
<!-- Right whiskers -->
<path d="M110,85 Q120,90 130,85" stroke="#000000" fill="transparent" />
<path d="M110,95 Q120,90 130,95" stroke="#000000" fill="transparent" />
</svg>

This SVG code creates a simple cat face using circles, ellipses, and polygons. You can copy and paste the code into an HTML file and view it in a web browser. Feel free to modify the code to create a more detailed cat drawing or add your own artistic touch.

What can I say? GPT-4 almost nails it. GPT3.5, not so much, but it is still a credible effort. Claude is… abstract. And I have no idea what the Bard is doing. But the very notion that a language model is capable of conceptualizing simple geometric relationships, that it can make sense of the spatial relations between things (even if it happens to put the nose of the “cat” under its mouth or does other silly things), is remarkable and demonstrates, I think, just how insanely capable these language models really are. And of course I love how they protest before they decide to offer a drawing after all. (Sometimes they don’t. I was lucky to find a prompt that worked on the first try with all five models.)

 Posted by at 2:30 pm
Apr 14 2023
 

Your service has violated Viktor’s rules and policies. As a result, it can no longer serve as a destination for my blog entries. For assistance, please just bugger off, along with your thin-skinned “free speech champion” ruler.

To my friends: This is likely my last ever blog entry to be posted to Twitter. In fact, the only reason I am not closing my Twitter account just yet is because there are still a few important information sources there that I follow, which have not yet migrated to Places That Shall Not Be Named like the place whose name begins with Mast and ends with odon, or the Sub place that is also a Stack and now has Notes.

 Posted by at 5:07 pm
Apr 13 2023
 

Once again, we have summer in April.

This picture shows the melting pile of dirty snow in our visitor parking area, photographed from our upstairs bedroom window.

The date and the temperature widget speak for themselves. (Yes, I still use Windows widgets, through a third-party tool. What can I say? I like them.)

 Posted by at 5:14 pm