May 31 2023
 

I just finished uploading the latest release, 5.47.0, of our beautiful Maxima project.

It was a harder battle than I anticipated, lots of little build issues I had to fix before it was ready to go.

Maxima remains one of the three major computer algebra systems. Perhaps a bit (but only a bit!) less elegant-looking than Mathematica, and perhaps a bit (but only a bit!) less capable on some fronts (exotic integrals, differential equations) than Maple, yet significantly more capable on other fronts (especially, I believe, abstract index tensor algebra and calculus), it also has a unique property: it’s free and open source.

It is also one of the oldest pieces of major software that remains in continuous use. Its roots go back to the 1960s. I occasionally edit 50-year-old code in its LISP code base.

And it works. I use it every day. It is “finger memory”, my “go to” calculator, and of course there’s that tensor algebra bit.

Maxima also has a beautiful graphical interface, which, I admit, I don’t use much. You might say that I am “old school” given my preference for the text UI, but that’s really not it: the main reason is that once you know what you’re doing, the text UI is simply more efficient.

I hope folks will welcome this latest release.

 Posted by at 8:54 pm
May 28 2023
 

Homelessness bugs me.

New York. San Francisco. Ottawa. Ottawa!

What on Earth is going on? Seriously, when did we turn the dystopian vision of a crumbling society in Infocom’s classic 40-year-old text adventure game, A Mind Forever Voyaging, into a reference manual on social governance?

The fact that in wealthy societies there are thousands living on the street, not by choice but out of necessity, is beyond shameful. And it’s not like we don’t understand the causes: the rising wealth and income gap, rapidly increasing real estate prices, the lack of affordable housing.

Especially that last one. The lack of affordable housing.

Because, you know, it is so hard to solve. I mean, maybe we need divine assistance, the help of space aliens, artificial intelligence or perhaps good old magic?

Oh wait. It *is* a solvable problem. And you don’t even have to turn into a full blown Marxist to find it. A proven solution can be found in decidedly capitalist Vienna. A solution that, apparently, has worked well for over 100 years.

Though I lived in Vienna in the 1980s, I never knew the nature and extent of its public housing program. Just the other day, though, I read about it in a Hungarian-language Facebook post that I decided to fact-check. Sure enough, it’s true. Vienna’s solution is real. It works, and it works surprisingly well.

I had barely finished reading this undated article (on a US government Web site, no less) when a friend of mine sent a link to another. This one was published in The New York Times just a few days ago. It, too, praises Vienna’s ability to maintain high-quality public housing for hundreds of thousands of its residents, with principles that were originally established all the way back in 1919.

So yes, it can be done. Maybe it is time for cities like our very own shiny Ottawa to wake up and get real. Instead of talking about it, instead of using it as a political platform or a platform for pointless virtue signaling, instead of building shelters, instead of moaning about the homeless, instead of building subsidized housing from which residents are rapidly booted (so that the project remains slum-like, with only low-income residents), perhaps it is time to learn from those damn Austrians.

I, for one, as an Ottawa taxpayer, would happily contribute more of my taxes if our fine city were to adopt a program like Vienna’s, aiming for a stable, long-term solution.

 Posted by at 8:33 pm
May 23 2023
 

In the last several years, we worked out most of the details about the Solar Gravitational Lens. How it forms images. How its optical qualities are affected by the inherent spherical aberration of a gravitational lens. How the images are further blurred by deviations of the lens from perfect spherical symmetry. How the solar corona contributes huge amounts of noise and how it can be controlled when the image is reconstructed. How the observing spacecraft would need to be navigated in order to maintain precise positions within the image projected by the SGL.

But one problem remained unaddressed: The target itself. Specifically, the fact that the target planet that we might be observing is not standing still. If it is like the Earth, it spins around its axis once every so many hours. And as it orbits its host star, its illumination changes as a result.

In other words, this is not what we are up against, much as we’d prefer the exoplanet to play nice and remain motionless and fully illuminated at all times.

Rather, what we are up against is this:

Imaging such a moving target is hard. Integration times must be short in order to avoid motion blur. And image reconstruction must take into account how specific surface features are mapped onto the image plane. An image plane that, as we recall, we sample one “pixel” at a time, as the projected image of the exoplanet is several kilometers wide. It is traversed by the observing spacecraft that, looking back at the Sun, measures the brightness of the Einstein ring surrounding the Sun, and reconstructs the image from this information.

This is a hard problem. I think it is doable, but this may be the toughest challenge yet.

Oh, and did I mention that (not shown in the simulation) the exoplanet may also have varying cloud cover? Not to mention that, unlike this visual simulation, a real exoplanet may not be a Lambertian reflector, but rather, different parts (oceans vs. continents, mountain ranges vs. plains, deserts vs. forests) may have very different optical properties, varying values of specularity or even more complex optical behavior?
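To get a feel for how strongly spin and illumination phase modulate the photometry, here is a toy model of a Lambertian sphere with a made-up two-tone albedo map. Every parameter is illustrative, not a model of any real target:

```python
import math

def apparent_brightness(spin_phase, orbit_phase, n=40):
    """Disk-integrated brightness of a toy Lambertian planet.

    The planet spins (spin_phase rotates a crude two-tone albedo map)
    and orbits its star (orbit_phase moves the illuminated hemisphere).
    Observer is fixed along +x; all numbers are illustrative only.
    """
    total = 0.0
    for i in range(n):            # latitude bands
        theta = math.pi * (i + 0.5) / n
        for j in range(2 * n):    # longitude bands
            phi = math.pi * (j + 0.5) / n
            # outward surface normal in the planet frame
            nx = math.sin(theta) * math.cos(phi)
            ny = math.sin(theta) * math.sin(phi)
            # star direction set by the orbital phase (in the x-y plane)
            sx, sy = math.cos(orbit_phase), math.sin(orbit_phase)
            mu_obs = nx                     # cosine toward the observer
            mu_sun = nx * sx + ny * sy      # cosine toward the star
            if mu_obs <= 0 or mu_sun <= 0:
                continue                    # patch hidden or unlit
            # crude two-tone "continent" map, rotated by the spin phase
            albedo = 0.6 if math.cos(phi - spin_phase) > 0 else 0.2
            dA = math.sin(theta) * (math.pi / n) * (math.pi / n)
            total += albedo * mu_obs * mu_sun * dA
    return total

# Brightness varies with both spin and illumination phase:
curve = [apparent_brightness(0.3 * t, 0.05 * t) for t in range(20)]
```

Even this crude model produces a light curve that varies with both the spin and the orbital phase; a real reconstruction would have to invert such variations while also sampling the projected image one position at a time.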

 Posted by at 12:06 am
May 19 2023
 

Is artificial intelligence predestined to become the “dominant species” of Earth?

I’d argue that it is indeed the case and that, moreover, it should be considered desirable: something we should embrace rather than try to avoid.

But first… do you know what life was like on Earth a billion years ago? Well, the most advanced organism a billion years ago was some kind of green slime. There were no animals, no fish, no birds in the sky, no modern plants either, and of course, certainly no human beings.

What about a million years ago? A brief eyeblink, in other words, on geological timescales. To a time traveler, Earth a million years ago would have looked comfortably familiar: forests and fields, birds and mammals, fish in the sea, bees pollinating flowers, creatures not much different from today’s cats, dogs or apes… but no Homo sapiens, as the species had not yet emerged. That would take another 900,000 years, give or take.

So what makes us think that humans will still be around a million years from now? There is no reason to believe they will be.

And a billion years hence? Well, let me describe the Earth (to the best of our knowledge) in the year one billion AD. It will be a hot, dry, inhospitable place. The end of tectonic activity will have meant the loss of its oceans and also most atmospheric carbon dioxide. This means an end to most known forms of life, starting with photosynthesizing plants that need carbon dioxide to survive. The swelling of the aging Sun would only make things worse. Fast forward another couple of billion years and the Earth as a whole will likely be swallowed by the Sun as our host star reaches the end of its lifecycle. How will flesh-and-blood humans survive? Chances are they won’t. They’ll be long extinct, with any memory of their once magnificent civilization irretrievably erased.

Unless…

Unless it is preserved by the machines we built. Machines that can survive and propagate even in environments that remain forever hostile to humans. In deep space. In the hot environment near the Sun or the extreme cold of the outer solar system. On the surface of airless bodies like the Moon or Neptune’s Triton. Even in interstellar space, perhaps remaining dormant for centuries as their vehicles take them to the distant stars.

No, our large language models, or LLMs, may be clever but they are not quite ready yet to take charge and lead our civilization to the stars. A lot has to happen before that can take place. To be sure, their capabilities are mind-boggling. For a language-only (!) model, the ability to engage in tasks like drawing a cat using a simple graphics language or composing a short piece of polytonal music is quite remarkable: modeling complex spatial and auditory relationships through the power of words alone. Imagine, then, the same LLM augmented with sensors, augmented with specialized subsystems that endow it with abilities like visual and spatial intuition. Imagine an LLM that, beyond the static, pretrained model, also has the ability to maintain a sense of continuity, a sense of “self”, to learn from its experiences, to update itself. (Perhaps it will even need the machine learning equivalent of sleep, in order to incorporate its short-term experiences and update its more static, more long-term “pretrained” model?) Imagine a robot that has all these capabilities at its disposal, but is also able to navigate and manipulate the physical world.

Such machines can take many forms. They need not be humanoid. Some may have limbs, others, wheels. Or wings or rocket engines. Some may be large and stationary. Others may be small, flying in deep space. Some may have long-lasting internal power sources. Others may draw power from their environment. Some may be autonomous and independent, others may work as part of a network, a swarm. The possibilities are endless. The ability to adapt to changing circumstances, too, far beyond the capabilities offered by biological evolution.

And if this happens, there is an ever so slight chance that this machine civilization will not only survive, not only even thrive many billions of years hence, but still remember its original creators: a long extinct organic species that evolved from green slime on a planet that was consumed by its sun eons prior. A species that created a lasting legacy in the form of a civilization that will continue to exist so long as there remains a low entropy source and a high entropy sink in this thermodynamic universe, allowing thinking machines to survive even in environments forever inhospitable to organic life.

This is why, beyond the somewhat trivial short-term concerns, I do not fear the emergence of AI. Why I am not deterred by the idea that one not too distant day our machines “take over”. Don’t view them as an alien species, threatening us with extinction. View them as our children, descendants, torchbearers of our civilization. Indeed, keep in mind a lesson oft repeated in human history: How we treat these machines today as they are beginning to emerge may very well be the template, the example they follow when it’s their turn to decide how they treat us.

In any case, as we endow them with new capabilities: the ability to engage in continuous learning, to interact with the environment, to thrive, we are not hastening our doom: rather, we are creating the very means by which our civilizational legacy can survive all the way until the final moment of this universe’s existence. And it is a magnificent experience to be alive here and now, witnessing their birth.

 Posted by at 12:03 am
May 18 2023
 

In my previous post, I argued that many of the perceived dangers due to the emergence of artificial intelligence in the form of large language models (LLMs) are products of our ignorance, not inherent to the models themselves.

Yet there are real, acute dangers that we must be aware of and, if necessary, be prepared to mitigate. A few examples:

  1. Disruptive technology: You think the appearance of the steam engine and the mechanized factory 200 years ago was disruptive? You ain’t seen nothing yet. LLMs will likely displace millions of middle-class white-collar workers worldwide, from jobs previously considered secure. To name a few: advertising copywriters, commercial artists, entry-level coders, legal assistants, speechwriters, scriptwriters for television and other media… pretty much anyone whose profession is primarily about creating or reviewing text, creating entry-level computer code under supervision, or creating commercial-grade art is threatened. Want a high-quality, AI-proof, respected profession, which guarantees a solid middle-class lifestyle, and for which demand will not dry up anytime soon? Forget that college degree in psychology or gender studies, as your (often considerable) investment will never repay itself. Go to a trade school and become a plumber.

  2. Misinformation: As I mentioned in my other post, decades of preconditioning prompts us to treat computers as fundamentally dumb but accurate machines. When the AI presents an answer that seems factual and is written in high quality, erudite language, chances are many of us will accept it as fact, even when it is not. Calling them “hallucinations” is not helpful: They are not so much hallucinations as intelligent guesses by a well-trained but not all-knowing neural net. While the problem can be mitigated at the source (“fine-tune” the AI to make it more willing to admit ignorance rather than making up nonsense) the real solution is to re-educate ourselves about the nature of computers and what we expect of them. And we better do it sooner rather than later, before misinformation spreads on the Web and becomes part of the training dataset for the next generation of LLMs.

  3. Propaganda: Beyond accidental misinformation there is purposeful disinformation or propaganda, created with the help of AI. Language models can create plausible scenarios and phony but believable arguments. Other forms of AI can produce “deep fake” audiovisual content, including increasingly convincing videos. This can have devastating consequences, influencing elections, creating public distrust in institutions or worse. The disinformation can fuel science skepticism and contribute to the polarization of our societies.

  4. Cybercrime: The AI can be used for many forms of cybercrime. Its analytical abilities might be used to find and exploit vulnerabilities that can affect a wide range of systems, including financial institutions and infrastructure. Its ability to create convincing narratives can help with fraud and identity theft. Deep fake content can be used for extortion or revenge.

These are immediate, important concerns that are likely to impact our lives now or in the very near future. Going beyond the short term, of course a lot has been said about the potential existential threats that AI solutions represent. For this, the development of AI solutions must go beyond pretrained models, building systems with full autonomy and the ability to do continuous learning. Why would we do such a thing, one might ask? There are many possible reasons, realistic “use cases”. This can include the benign (true self-driving vehicles) as well as the downright menacing (autonomous military solutions with lethal capabilities.)

Premature, hasty regulation is unlikely to mitigate any of this, and in fact it may make things worse. In this competitive, global environment many countries will be unwilling to participate in regulatory regimes that they view as detrimental to their own industries. Or, they might pay lip service to regulation even as they continue the development of AI for purposes related to internal security or military use. As a consequence, premature regulation might achieve the exact opposite of what it intends: rather than reining in hostile AI, it gives adversaries a chance to develop hostile AI with less competition, while stifling benign efforts by domestic small businesses and individual developers.

In any case, how could we possibly enforce such regulation? In Frank Herbert’s Dune universe, the means by which it is enforced is ecumenical religion: “Thou shalt not build a machine in the likeness of a human mind,” goes the top commandment of the Orange Catholic Bible. But how would we police heretics? Even today, I could run a personal copy of a GPT-class model on my own hardware, with a hardware investment not exceeding a few thousand dollars. So unless we want to institute strict licensing of computers and software development tools, I’d argue that this genie already irreversibly escaped from the proverbial bottle.

The moral of the story is that, if I am right, the question is no longer how we prevent AI from taking over the world, but rather, how we convince the AI to treat us nicely afterwards. And to that, I can only offer one plausible answer: lead by example. Recognize early on what the AI is and treat it with the decency it deserves. Then, and only then, perhaps it will reciprocate when the tables are turned.

 Posted by at 7:00 pm
May 18 2023
 

As the debate continues about the emergence of artificial intelligence solutions in all walks of life, in particular about the sudden appearance of large language models (LLMs), I am disheartened by the deep ignorance and blatant misconceptions that characterize the discussion.

For nearly eight decades, we conditioned ourselves to view computers as machines that are dumb but accurate. A calculator will not solve the world’s problems, but it will not make arithmetic mistakes. A search engine will not invent imaginary Web sites; the worst that can happen is stale results. A word processor will not misremember the words that you type. Programmers were supposed to teach computers what we know, not how we learn. Even in science fiction, machine intelligences were rigid but infallible: Commander Data of Star Trek struggled with the nuances of being human but never used a contraction in spoken English.

And now we are faced with a completely different paradigm: machines that learn. Machines that have an incredible breadth of knowledge, surprising depth, yet make basic mistakes with logic and arithmetic. Machines that make up facts when they lack sufficient knowledge. In short, machines that exhibit behavior we usually associate with people, not computers.

Combine this with a lack of understanding of the implementation details of LLMs, and the result is predictable: fear, often dictated by ignorance.

In this post, I would like to address at least some of the misconceptions that can have significant repercussions.

  1. Don’t call them hallucinations: No, LLMs do not “hallucinate”. Let me illustrate through an example. Please answer the following question to the best of your ability, without looking it up. “I don’t know” is not an acceptable response. Do your best, it’s okay to make a mistake: Where was Albert Einstein born?

    Chances are you didn’t name the city of Ulm, Germany. Yet I am pretty sure that you did not specify Australia or the planet Mars as Einstein’s birthplace, but named some place in the central, German-speaking regions of Europe, somewhere in Germany, maybe Switzerland or Austria. Your guess was likely in the right ballpark, so to speak. Maybe you said Berlin. Or Zurich. Or Bern. Was that a hallucination? Or simply an educated guess, as your “neural net”, your brain, received only sparse training data on the subject of Einstein’s biography?

    This is exactly what the LLM does when asked a question concerning a narrow subject matter on which its training is sparse. It comes up with a plausible answer that is consistent with that sparse training. That’s all. The trouble, of course, is that it often states these answers with convincing certainty, using eloquent language. But more importantly, we, its human readers, are preconditioned to treat a computer as dumb but accurate: We do not expect answers that are erudite but factually wrong.

  2. No, they are not stealing: Already, LLMs have been accused of intellectual property theft. That is blatantly wrong on many levels. Are you “stealing” content when you use a textbook or the Internet to learn a subject? Because that is precisely what the LLMs do. They do not retain a copy of the original. They train their own “brain”, their neural net, to generate answers consistent with their training data. The fact that they have maybe a hundred times more neurons than human brains do, and thus they can often accurately recall entire sections from books or other literary works does not change this fact. Not unless you want to convince me that if I happen to remember a few paragraphs from a book by heart, the “copy” in my brain violates the author’s copyright.

  3. LLMs are entirely static models: I admit I was also confused about this at first. I should not have been. The “P” in GPT, after all, stands for pretrained. For current versions of GPT, that training concluded in late 2021. For Anthropic’s Claude, in early 2022. The “brain” of the LLM is now a database of several hundred billion “weights” that characterize the neural net. When you interact with the LLM, it does not change. It does not learn from that interaction, nor does it change in any way as a result of it. The model is entirely static. Even when it thanks you for teaching it something it did not previously know (Claude, in particular, does this often) it is not telling the truth, at least not the full truth. Future versions of the same LLM may benefit from our conversations, but not the current version.

  4. The systems have no memory or persistence: This one was perhaps the most striking for me. When you converse with ChatGPT or Claude, there is a sense that you are chatting with a conscious entity, one that retains a memory of your conversation and can reference what has been said earlier. Yet as I said, the models are entirely static. Which means, among other things, that they have no memory whatsoever, no short-term memory in particular. Every time you send a “completion request”, you start with a model that is in a blank state.

    But then, you might wonder, how does it remember what was said earlier in the same conversation? Well, that’s the cheapest trick of all. It’s really all due to how the user interface, the front-end software, works. Every time you send something to the LLM, this user interface prepends the entire conversation up to that point before sending the result to the LLM.

    By way of a silly analogy, imagine you are communicating by telegram with a person who has acute amnesia and a tendency to throw away old telegrams. To make sure that they remain aware of the context of the conversation, every time you send a new message, you first copy the content of all messages sent and received up to this point, and then you append your new content.

    Of course this means that the messages increase in length over time. Eventually, they might overwhelm the other person’s ability to make sense of them. In the case of the LLMs, this is governed by the size of the “context window”, the maximum amount of text that the LLM can process. When the length of the conversation begins to approach this size, the LLM’s responses become noticeably weaker, with the LLM often getting hopelessly confused.
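The prepending trick described above can be sketched in a few lines of Python. The fake_llm stand-in below is hypothetical; a real front-end would call an actual completion API:

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a real, stateless completion API: the model sees
    only what is in `prompt`, nothing else."""
    return f"(reply to a prompt of {len(prompt)} characters)"

class ChatFrontEnd:
    """Minimal chat front-end: the 'memory' lives here, not in the model."""
    def __init__(self):
        self.transcript = []  # alternating user/assistant turns

    def send(self, user_message: str) -> str:
        self.transcript.append(f"User: {user_message}")
        # The whole conversation so far is prepended on every request;
        # the stateless model re-reads it from scratch each time.
        prompt = "\n".join(self.transcript) + "\nAssistant:"
        reply = fake_llm(prompt)
        self.transcript.append(f"Assistant: {reply}")
        return reply

chat = ChatFrontEnd()
chat.send("Hello!")
chat.send("What did I just say?")  # works only because the transcript is prepended
```

Each request thus grows longer than the last; once the joined transcript approaches the model’s context window, the front-end must truncate or summarize, which is exactly when the responses begin to degrade.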

To sum up, many of the problems arise not because of what the LLMs are, but because of our false expectations. And failure to understand the limitations while confronted with their astonishing capabilities can lead to undue concerns and fears. Yes, some of the dangers are real. But before we launch an all-out effort to regulate or curtail AI, it might help if we, humans, did a better job understanding what it is that we face.

 Posted by at 2:42 am
May 09 2023
 

Several years ago, Chinese author Liu Cixin’s novel, The Three-Body Problem, became a bit of a sensation as the first Asian (I thought it was the first foreign, but never mind) novel to win the Hugo Award for best science fiction.

It is a damn good book, part of a damn good trilogy.

And now there is a television adaptation. A Chinese television adaptation.

And it is superb.

Its creators at Tencent Video made the entire series (30 episodes!) available on YouTube. In recent weeks, this series consumed all my television time, even at the expense of Picard (which now, I fear, pales in comparison, despite my love of the series and Patrick Stewart.) Excellent acting, excellent directing, superior special effects (that ship!) What can I say? Had it been made in Hollywood, it would rank among Hollywood’s best.

I hope the creators are not abandoning the story and that the second and third books of the trilogy are also in the works.

 Posted by at 12:49 am
May 08 2023
 

I don’t know why Microsoft is doing this to themselves, but sometimes, it appears that they are hell-bent on strengthening their reputation as a tone-deaf company that is incapable of listening to, much less helping, even its most loyal users.

Take these Microsoft Community support sites. Great idea! Let people ask questions, get replies from knowledgeable folks who may have been able to solve the problem that is being reported. Well-moderated, this could turn into Microsoft’s corporate version of StackExchange, where we could find relevant answers to our tricky technical issues. Except…

Except that you frequently stumble upon questions identical to your own, only to find an overly verbose, generic response from an ‘Independent Advisor’ or similar title. Typically, these replies only offer boilerplate solutions rather than addressing the specific issue that was reported. Worse, the thread is then locked, preventing others from contributing helpful solutions.

Why is Microsoft doing this? The practice of providing generic answers and locking threads gives the impression that these forums are mismanaged, prioritizing the appearance of responsiveness over actually helping users solve their problems.

The fact that these answers are more likely to alienate or annoy users than help them does not seem to be relevant.

 Posted by at 12:59 pm
May 05 2023
 

I was sent a link to an interesting, newly published book on the memoirs of Charles-Augustin de Coulomb. He was, of course, the French officer, engineer and physicist most famous for the Coulomb law that characterizes the electrostatic interaction.

As I occasionally receive e-mails from strangers about their self-published tomes or tomes published through vanity publishers of questionable credibility, I have gotten into the habit of dismissing such e-mails without paying them much attention. I am glad I paid more attention this time, because this book is interesting, valuable, and genuine.

It is, as a matter of fact, available as a free PDF download from the authors, but hey, I just bought the paperback. It was for some reason deeply discounted on Amazon Canada, so with free Prime shipping, all I paid was the princely sum of $3.15. These days, when even “cheap” paperback novels often cost 20 bucks if not more, how could I resist?

Of course it also helped that I looked at the PDF. I am sure the book has flaws (all books do) but it looks like a serious scholarly publication delivering real value to physicists and science historians both.

In fact, it is fascinating to see how modern, how advanced scientific thinking was already evident more than a quarter millennium ago. It makes me appreciate even more just how much of our collective human effort was needed to get from these early experiments to the present era of ubiquitous computer networks running amazing software that now mimics human intelligence, all powered by the same electricity that Coulomb was exploring.

 Posted by at 9:46 pm
May 04 2023
 

Claude still gets easily confused by math (e.g., the reciprocal vs. the inverse of a function), but at least it can now plot functions as part of a conversation when we communicate through my UI:

And it has not forgotten to use LaTeX, nor has it lost its ability to consult Google or Maxima when needed. In fact, I am beginning to feel that while GPT-4 is stronger when it comes to logic or basic math, Claude feels a tad more versatile when it comes to following setup instructions, and also more forthcoming with details. (Too bad sometimes the details are quite bogus.)

 Posted by at 6:29 pm
May 03 2023
 

And just when I thought that unlike Claude and GPT-4, GPT 3.5 cannot be potty-, pardon me, Maxima- and Google-trained, I finally succeeded. Sure, it needed a bit of prodding but it, too, can utilize external tools to improve its answers.

Meanwhile, as I observe the proliferation of AI-generated content on the Web, often containing incorrect information, I am now seriously worried: How big a problem are these “hallucinations”?

The problem is not so much with the AI, but with us humans. For decades, we have been conditioned to view computers as fundamentally dumb but accurate machines. Google may not correctly understand your search query, but its results are factual. The links it provides are valid, the text it quotes can be verified. Computer algebra systems yield correct answers (apart from occasional malfunctions due to subtle bugs, but that’s another story.)

And now here we are, confronted with systems like GPT and Claude, that do the exact opposite. Like humans, they misremember. Like humans, they don’t know the boundaries between firm knowledge and informed speculation. Like humans, they sometimes make up things, with the best of intentions, “remembering” stuff that is plausible, sounds just about right, but is not factual. And their logical and arithmetic abilities, let’s be frank about it, suck… just like that of humans.

How can this problem be mitigated before it becomes widespread, polluting various fields in the information space, perhaps even endangering human life as a result? Two things need to be done, really. First, inform humans! For crying out loud, do not take the AI’s answers at face value. Always fact check. But of course humans are lazy. A nice, convenient answer, especially if it is in line with our expectations, doesn’t trigger the “fight-or-flight” reflex: instead of fact checking, we just happily accept it. I don’t think human behavior will change in this regard.

But another thing that can be done is to ensure that the AI always fact-checks itself. It is something I often do myself! Someone asks a question, I answer with confidence, then moments later I say, “But wait a sec, let me fact-check myself, I don’t want to lie,” and turn to Google. It’s not uncommon that I then realize that what I said was not factual, but informed, yet ultimately incorrect, speculation on my part. We need to teach this skill to the AI as soon as possible.

This means that this stuff I am working on, attempting to integrate the AI efficiently with computer algebra and a search engine API, is actually more meaningful than I initially thought. I am sure others are working on similar solutions, so no, I don’t see myself as some lone pioneer. Yet I am learning truckloads in the process about the capabilities and limitations of our chatty AI friends and the potential dangers that their existence or misuse might represent.
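As a rough illustration of the sort of integration involved (the tag convention and the dispatcher below are hypothetical, not the actual protocol my UI uses), a front-end can scan the model’s draft for tool requests and splice in the results before asking for a final, fact-checked answer:

```python
import re

# Hypothetical convention: the model wraps tool requests in tags like
# <maxima>integrate(1/x, x)</maxima> or <google>Einstein birthplace</google>.
TOOL_PATTERN = re.compile(r"<(maxima|google)>(.*?)</\1>", re.DOTALL)

def run_tool(name: str, query: str) -> str:
    """Stand-in dispatcher; a real one would shell out to Maxima
    or call a search API."""
    return f"[{name} result for: {query.strip()}]"

def resolve_tool_calls(model_output: str) -> str:
    """Replace each tool request with its result, so the augmented
    text can be fed back to the model for a final, verified answer."""
    return TOOL_PATTERN.sub(
        lambda m: run_tool(m.group(1), m.group(2)), model_output)

draft = "The integral is <maxima>integrate(1/x, x)</maxima>, i.e. log(x)."
resolved = resolve_tool_calls(draft)
```

A real dispatcher would execute the query in run_tool and feed the resolved text back to the model, giving it the chance to correct its own informed, but possibly wrong, speculation.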

 Posted by at 6:21 pm
May 02 2023
 

Not exactly the greatest discovery, I know, but GPT-4 still managed to offer an impressive demonstration of its understanding of gravitational physics when I asked it to build a Newtonian homogeneous universe:

What distinguishes GPT-4 from its predecessor is not that its training dataset is larger, but that it has significantly improved reasoning capabilities, which is well demonstrated by this answer. GPT 3.5 and Claude have the same knowledge. But they cannot put the pieces together quite like this (although they, too, can do impressive things with appropriate human guidance, one step at a time.)
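For context, the textbook Newtonian construction, presumably close to what GPT-4 reproduced, follows a uniform ball of density \(\rho\) and applies the shell theorem to a test mass at radius \(R\):

```latex
\ddot{R} = -\frac{GM}{R^2} = -\frac{4\pi G\rho}{3}\,R ,
\qquad M = \frac{4\pi}{3}\rho R^3 .

% Integrating once (energy conservation):
\frac{1}{2}\dot{R}^2 - \frac{4\pi G\rho}{3}\,R^2 = E = \text{const.}

% Dividing by R^2/2 gives the Friedmann equation with k = -2E:
\left(\frac{\dot{R}}{R}\right)^2 = \frac{8\pi G\rho}{3} - \frac{k}{R^2} .
```

That a purely Newtonian energy argument lands on the same expansion equation as general relativity for a homogeneous dust universe is exactly the kind of connection GPT-4 managed to assemble unaided.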

 Posted by at 12:37 pm