May 18, 2023
 

As the debate continues about the emergence of artificial intelligence solutions in all walks of life, in particular about the sudden appearance of large language models (LLMs), I am disheartened by the deep ignorance and blatant misconceptions that characterize the discussion.

For nearly eight decades, we conditioned ourselves to view computers as machines that are dumb but accurate. A calculator will not solve the world’s problems, but it will not make arithmetic mistakes. A search engine will not invent imaginary Web sites; the worst that can happen is stale results. A word processor will not misremember the words that you type. Programmers were supposed to teach computers what we know, not how we learn. Even in science fiction, machine intelligences were rigid but infallible: Commander Data of Star Trek struggled with the nuances of being human but never used a contraction in spoken English.

And now we are faced with a completely different paradigm: machines that learn. Machines that have an incredible breadth of knowledge, surprising depth, yet make basic mistakes with logic and arithmetic. Machines that make up facts when they lack sufficient knowledge. In short, machines that exhibit behavior we usually associate with people, not computers.

Combine this with a lack of understanding of the implementation details of LLMs, and the result is predictable: fear, often dictated by ignorance.

In this post, I would like to address at least some of the misconceptions that can have significant repercussions.

  1. Don’t call them hallucinations: No, LLMs do not “hallucinate”. Let me illustrate through an example. Please answer the following question to the best of your ability, without looking it up. “I don’t know” is not an acceptable response. Do your best, it’s okay to make a mistake: Where was Albert Einstein born?

    Chances are you didn’t name the city of Ulm, Germany. Yet I am pretty sure that you did not specify Australia or the planet Mars as Einstein’s birthplace, but named some place in the German-speaking part of central Europe: somewhere in Germany, maybe Switzerland or Austria. Your guess was likely in the right ballpark, so to speak. Maybe you said Berlin. Or Zurich. Or Bern. Was that a hallucination? Or simply an educated guess, as your “neural net”, your brain, received only sparse training data on the subject of Einstein’s biography?

    This is exactly what the LLM does when asked a question on a narrow subject for which its training data are sparse. It comes up with a plausible answer that is consistent with that sparse training. That’s all. The trouble, of course, is that it often states these answers with convincing certainty, using eloquent language. But more importantly, we, its human readers, are preconditioned to treat a computer as dumb but accurate: We do not expect answers that are erudite but factually wrong. (A toy sketch after this list illustrates the idea.)

  2. No, they are not stealing: Already, LLMs have been accused of intellectual property theft. That is blatantly wrong on many levels. Are you “stealing” content when you use a textbook or the Internet to learn a subject? Because that is precisely what the LLMs do. They do not retain a copy of the original. They train their own “brain”, their neural net, to generate answers consistent with their training data. The fact that they have perhaps a hundred times more “neurons” than a human brain, and can thus often recall entire sections of books or other literary works accurately, does not change this. Not unless you want to convince me that if I happen to remember a few paragraphs from a book by heart, the “copy” in my brain violates the author’s copyright.

  3. LLMs are entirely static models: I admit I was also confused about this at first. I should not have been. The “P” in GPT, after all, stands for pretrained. For current versions of GPT, that training concluded in late 2021. For Anthropic’s Claude, in early 2022. The “brain” of the LLM now takes the form of a database of several hundred billion “weights” that characterize the neural net. When you interact with the LLM, it does not change. It does not learn from that interaction, nor does it change in any way as a result of it. The model is entirely static. (A short code sketch after this list makes this concrete.) Even when it thanks you for teaching it something it did not previously know (Claude, in particular, does this often), it is not telling the truth, at least not the full truth. Future versions of the same LLM may benefit from our conversations, but not the current version.

  4. The systems have no memory or persistence: This one was, for me, perhaps the most striking. When you converse with ChatGPT or Claude, there is a sense that you are chatting with a conscious entity, one that retains a memory of your conversation and can reference what was said earlier. Yet as I said, the models are entirely static. This means, among other things, that they have no memory whatsoever, and in particular no short-term memory. Every time you send a “completion request”, you start with a model that is in a blank state.

    But then, you might wonder, how does it remember what was said earlier in the same conversation? Well, that’s the cheapest trick of all. It is really all due to how the user interface, the front-end software, works. Every time you send something to the LLM, this user interface prepends the entire conversation up to that point to your new message and sends the result to the LLM. (This, too, is sketched in code after the list.)

    By way of a silly analogy, imagine you are communicating by telegram with a person who has acute amnesia and a tendency to throw away old telegrams. To make sure that they remain aware of the context of the conversation, every time you send a new message, you first copy the content of all messages sent and received up to this point, and then you append your new content.

    Of course, this means that the messages increase in length over time. Eventually, they might overwhelm the other person’s ability to make sense of them. In the case of the LLMs, this is governed by the size of the “context window”, the maximum amount of text that the LLM can process in a single request. When the length of the conversation begins to approach this size, the LLM’s responses become noticeably weaker, with the LLM often getting hopelessly confused.
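
The following toy sketch, in Python, illustrates point 1. It is not how a real LLM is built or queried, and the city counts are invented for illustration; the point is only that sparse evidence yields a spread-out next-word distribution, and sampling from it produces a fluent answer whether or not that answer happens to be Ulm.

```python
# Toy illustration of point 1 (not a real LLM): when the training evidence for
# a fact is sparse, the next-word distribution spreads over several plausible
# answers, and the model simply samples one, stating it just as fluently
# whether it is right (Ulm) or wrong (Berlin, Zurich, ...). Counts are made up.
import random

evidence = {"Ulm": 3, "Berlin": 6, "Zurich": 5, "Bern": 4, "Munich": 4, "Vienna": 2}
total = sum(evidence.values())
probs = {city: count / total for city, count in evidence.items()}

answer = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(f"Albert Einstein was born in {answer}.")  # confident either way
```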
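
Point 3 can be demonstrated in a few lines as well. The sketch below uses PyTorch and a tiny stand-in network in place of a multi-billion-weight LLM (the principle is the same): during a completion request no gradients flow and no optimizer step runs, so every weight is exactly what it was before the request.

```python
# A minimal sketch of point 3, using a tiny stand-in network: a "completion
# request" runs with gradients disabled and no optimizer step, so not a
# single weight changes.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Embedding(1000, 64), nn.Flatten(), nn.Linear(64 * 8, 1000))
model.eval()  # inference mode

before = {name: p.clone() for name, p in model.named_parameters()}  # snapshot

prompt_tokens = torch.randint(0, 1000, (1, 8))  # a pretend tokenized prompt
with torch.no_grad():                           # no gradients, hence no learning
    logits = model(prompt_tokens)               # the "completion request"

# Every parameter is bit-for-bit identical to what was loaded: nothing was learned.
assert all(torch.equal(p, before[name]) for name, p in model.named_parameters())
```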
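
Finally, here is the “cheap trick” of point 4 in code. The send_to_llm function is a hypothetical stand-in for a stateless completion endpoint, not any particular vendor’s API; what matters is that the front-end, not the model, keeps the transcript and resends all of it with every turn.

```python
# A sketch of point 4: the front-end fakes memory by prepending the entire
# transcript to every new message it sends to the stateless model.
def send_to_llm(prompt: str) -> str:
    """Hypothetical stand-in for a completion endpoint; the model sees only this one prompt."""
    return "..."

transcript = []  # kept by the front-end, not by the model

def chat(user_message: str) -> str:
    transcript.append(f"User: {user_message}")
    prompt = "\n".join(transcript) + "\nAssistant:"  # the full history, every time
    reply = send_to_llm(prompt)
    transcript.append(f"Assistant: {reply}")
    return reply

chat("Where was Albert Einstein born?")
chat("And in what year?")  # answerable only because the first exchange is resent verbatim
```

This is also why long conversations degrade: the assembled prompt keeps growing until it presses against the context window.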

To sum up, many of the problems arise not because of what the LLMs are, but because of our false expectations. And failing to understand their limitations while confronted with their astonishing capabilities can lead to undue concern and fear. Yes, some of the dangers are real. But before we launch an all-out effort to regulate or curtail AI, it might help if we, humans, did a better job of understanding what it is that we face.
