I asked MidJourney to produce an image of a crossover between a lemon and a light rail transit train:
I was, of course, “inspired” (if that’s the right word) by the latest shutdown of Ottawa’s ridiculous LRT line.
So here is the difference between a normal and a polarized society.
Suppose your country has a blue party and a yellow party. One day, a prominent blue politician gets caught. Doesn’t matter what, corruption, sexual abuse, whatever. A common, but serious, crime.
In a normal society, the reactions are like this: The blue party expels the person and expresses its regret that such a person held a prominent position within the party. The yellow party condemns the politician, but refrains from generalizing; they express recognition that such a “bad apple” does not represent their opponent.
End result: the political status quo remains, the perpetrator lands in prison, the rule of law is upheld and the respect and trust in the system of institutions remains strong.
In a polarized society, however, something else happens: The blue party defends the person, calls the accusations politically motivated lies, and accuses the yellow party of abusing the system of institutions in a “witch hunt”. The yellow party will point at the perpetrator, using him in propaganda that suggests that his behavior is characteristic of the blue party, which is full of corrupt abusers who think that they are above the law and would twist institutions just to protect the guilty.
End result: political polarization deepens; the perpetrator may go free or, if punished, be celebrated as a victim of politically motivated false charges and persecution; the rule of law is undermined, and respect and trust in the system of institutions plummets.
A town square with citizens divided into two opposing groups spewing hate at each other — MidJourney
This, of course, most benefits those activists and political opportunists who thrive on conflict and corruption. They exist everywhere: their presence is not restricted to any particular political party or ideology.
So take your pick. I much prefer not to live in a politically polarized society.
It appears that common sense prevailed over politically correct virtue signaling at The Weather Network (or maybe Environment Canada). Pregnant members of the species homo sapiens are again designated by the appropriate word from the vocabulary of the English language: they’re women.
Or is it that heat affects only pregnant women, whereas smoke affects pregnant non-women, too? I wonder.
In any case, I am firmly of the opinion that if we truly strive to build a society in which folks can enjoy life and live to their full potential without fear of discrimination or worse, theatrical excesses like the expression “pregnant people” are not helping. Quite the opposite: while they appease a vocal, exhibitionist minority of activists, they are far more likely to bring tangible harm to others through the backlash they induce. I shall refrain from speculating how much of this is intentional (it’s a big world out there; I’m sure some activists are cynically self-serving, intentionally contributing to the very problem they pretend to solve in order to secure their own future, while others are just as likely genuine believers in a cause that’s important to them), but I have no doubt that it is harmful.
I am looking at images of nearly 4,000-year-old clay tablets.
Clay tablets like Si.427, depicting a survey of land. And incidentally, also demonstrating the use of the theorem of Pythagoras well over a thousand years before Pythagoras was born.
If only I had a time machine… To witness how these people lived. How they laughed, how they cried. They studied, they learned, they applied what they knew. They built a magnificent civilization. They loved and they hated, they offered sacrifices and committed betrayals. They had fun, they enjoyed a good meal, they entertained. They lived.
And we know so little about them. What did they do for fun? What were they talking about at the dinner table? What were their hopes for their children? What did they know about the world? What were their trades? How did they pass on their knowledge to others? Did they travel? Did they enjoy a day of rest at the beach?
All gone. An entire civilization that routinely used artifacts with precision diagrams like this tablet. All gone and almost completely forgotten, other than these bits and pieces, these fragments.
It’s humbling.
The next time (it happens often) I hear someone complain about scientific ‘orthodoxy’ that is used to ‘stop innovation’ and prevent ‘new entrants from entering’ the field (or equivalent words), I’ll remind them that these were exactly the words used by Stockton Rush, defending the design of his ill-fated submersible Titan.
Of course the consequences are far less deadly when the subject is purely theoretical, like fundamental particle physics or general relativity. But that does not validate nonsensical arguments.
Orthodoxy is adherence to tradition or faith. Science and engineering are about testable and tested knowledge, which is the exact opposite. What folks like Rush describe as orthodoxy is simply knowledge that they do not possess (perhaps because it takes too much effort to learn) or knowledge they purposefully ignore because it contradicts their preconceptions or expectations.
My condolences to the families of Rush and his passengers on Titan. But foolishness has its consequences. Sometimes deadly ones.
Here is a bit of history of which I was not aware: The window tax.
Back in the 18th and 19th centuries, in many jurisdictions around the world, buildings, especially rental properties, were taxed by the number of windows they sported.
The advantage of this tax was that it was easy to assess: the assessor just had to walk around the building and count.
The unfortunate reality? Landlords responded by building fewer windows or bricking up existing ones, so the tax disproportionately hurt the poorest of the poor. Eventually these taxes were repealed, but I wonder how many generations grew up in windowless rooms.
Glad I am not a politician or public personality, as I am about to commit what would amount to political suicide, à la J. K. Rowling.
The reason? This health warning by The Weather Network, that truly blew my fuse:
Middle line of the red warning at the bottom. That list of folks at high risk:
“[…] older adults, children, pregnant people, and people who work outdoors […]”
Pregnant “people”??? Seriously, why not pregnant primates? Pregnant mammals? Pregnant vertebrates?
Let me not mince words. This is virtue signaling at its worst, pandering to a loud, obnoxious group of “alphabet soup activists” who claim to represent, but do not really represent, gays, lesbians and others who simply want to live a life in dignity, respected as human beings, enjoying what should really be the inalienable right of every human being on this planet: to be left alone, to be allowed to live life to the fullest in the company of those they love.
The biggest irony of all? Even as conservatives in many corners of the world, especially here in North America, are busy turning the fictitious country of Gilead from The Handmaid’s Tale into actual reality, what these activists do is no less harmful to women. Their agenda amounts to little more than the idea that “men are better than women at everything, including being women,” as someone recently put it on Quora.
Dear Weather Network, dear Environment Canada (if that’s where the text of the warning came from): Let this serve as a friendly reminder that the English language has a perfectly serviceable word to describe those members of the species homo sapiens who are biologically capable of becoming pregnant: they are called women. Check your dictionaries, it’s there.
I have written before about my fascinating experiments probing the limits of what our AI friends like GPT and Claude can do. I also wrote about my concerns about their impact on society. And, of course, I wrote about how they can serve as invaluable assistants in software development.
But I am becoming dependent on them (there’s no other way to describe it) in so many other ways.
Take just the last half hour or so. I was responding to some e-mails.
All this took place in the past 30 minutes. And sure, I could have done all of the above without the AI. I have textbooks on supersymmetry. I could have asked Google Translate for a German translation, or taken my German text, translated it back to English, and then back to German again. And I could have done a Google search for the inflation rate myself.
But all of that would have taken longer, and would have been significantly more frustrating than doing what I actually did: ask my somewhat dumb, often naive, but almost all-knowing AI assistant.
The image below is DALL-E’s response to the prompt, “welcome to tomorrow”.
I just finished uploading the latest release, 5.47.0, of our beautiful Maxima project.
It was a harder battle than I anticipated, lots of little build issues I had to fix before it was ready to go.
Maxima remains one of the three major computer algebra systems. Perhaps a bit (but only a bit!) less elegant-looking than Mathematica, and perhaps a bit (but only a bit!) less capable on some fronts (exotic integrals, differential equations) than Maple, yet significantly more capable on other fronts (especially, I believe, abstract index tensor algebra and calculus), it also has a unique property: it’s free and open source.
It is also one of the oldest pieces of major software that remains in continuous use. Its roots go back to the 1960s. I occasionally edit 50-year-old code in its LISP code base.
And it works. I use it every day. It is “finger memory”, my “go to” calculator, and of course there’s that tensor algebra bit.
Maxima also has a beautiful graphical interface which, I admit, I don’t use much. You might say that I am “old school” given my preference for the text UI, but that’s really not it: the main reason is that once you know what you’re doing, the text UI is simply more efficient.
I hope folks will welcome this latest release.
Homelessness bugs me.
New York. San Francisco. Ottawa. Ottawa!
What on Earth is going on? Seriously, when did we turn the dystopian vision of a crumbling society in Infocom’s classic 40-year-old text adventure game, A Mind Forever Voyaging, into a reference manual on social governance?
The fact that in wealthy societies there are thousands living on the street, not by choice but out of necessity, is beyond shameful. And it’s not like we don’t understand the causes: the rising wealth and income gap, rapidly increasing real estate prices, the lack of affordable housing.
Especially that last one. The lack of affordable housing.
Because, you know, it is so hard to solve. I mean, maybe we need divine assistance, the help of space aliens, artificial intelligence or perhaps good old magic?
Oh wait. It *is* a solvable problem. And you don’t even have to turn into a full-blown Marxist to find it. A proven solution can be found in decidedly capitalist Vienna. A solution that, apparently, has worked well for over 100 years.
Though I lived in Vienna in the 1980s, I never knew the nature and extent of its public housing program. Just the other day though I read about it in a Hungarian-language Facebook post that I decided to fact-check. Sure enough, it’s true. Vienna’s solution is real. It works and it works surprisingly well.
I had barely finished reading this undated article (on a US government Web site, no less) when a friend of mine sent a link to another. This one was published in The New York Times just a few days ago. It, too, praises Vienna’s ability to maintain high-quality public housing for hundreds of thousands of its residents, with principles that were originally established all the way back in 1919.
So yes, it can be done. Maybe it is time for cities like our very own shiny Ottawa to wake up and get real. Instead of talking about it, instead of using it as a political platform or a platform for pointless virtue signaling, instead of building shelters, instead of moaning about the homeless, instead of building subsidized housing from which you are rapidly booted (so that the project remains slum-like, with only low-income residents), perhaps it is time to learn from those damn Austrians.
I, for one, as an Ottawa taxpayer, would happily contribute more of my taxes if our fine city were to adopt a program like Vienna’s, aiming for a stable, long-term solution.
In the last several years, we worked out most of the details about the Solar Gravitational Lens. How it forms images. How its optical qualities are affected by the inherent spherical aberration of a gravitational lens. How the images are further blurred by deviations of the lens from perfect spherical symmetry. How the solar corona contributes huge amounts of noise and how it can be controlled when the image is reconstructed. How the observing spacecraft would need to be navigated in order to maintain precise positions within the image projected by the SGL.
But one problem remained unaddressed: The target itself. Specifically, the fact that the target planet that we might be observing is not standing still. If it is like the Earth, it spins around its axis once every so many hours. And as it orbits its host star, its illumination changes as a result.
In other words, this is not what we are up against, much as we’d prefer the exoplanet to play nice and remain motionless and fully illuminated at all times.
Rather, what we are up against is this:
Imaging such a moving target is hard. Integration times must be short in order to avoid motion blur. And image reconstruction must take into account how specific surface features are mapped onto the image plane. An image plane that, as we recall, we sample one “pixel” at a time, as the projected image of the exoplanet is several kilometers wide. It is traversed by the observing spacecraft that, looking back at the Sun, measures the brightness of the Einstein ring surrounding the Sun, and reconstructs the image from this information.
This is a hard problem. I think it is doable, but this may be the toughest challenge yet.
Oh, and did I mention that (not shown in the simulation) the exoplanet may also have varying cloud cover? Not to mention that, unlike this visual simulation, a real exoplanet may not be a Lambertian reflector, but rather, different parts (oceans vs. continents, mountain ranges vs. plains, deserts vs. forests) may have very different optical properties, varying values of specularity or even more complex optical behavior?
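To give a rough sense of how much the changing illumination alone matters, here is a minimal sketch, not part of our actual simulation code, that evaluates the disc-integrated brightness of an ideal Lambertian sphere as a function of the star–planet–observer phase angle; the function name and the assumed edge-on, circular orbit are mine, purely for illustration.

```python
import numpy as np

def lambert_phase_function(alpha):
    """Disc-integrated phase function of an ideal Lambertian sphere.

    alpha is the star-planet-observer phase angle in radians:
    0 means fully illuminated ("full" phase), pi means fully dark ("new" phase).
    """
    return (np.sin(alpha) + (np.pi - alpha) * np.cos(alpha)) / np.pi

# Sweep the phase angle over half an orbit (edge-on, circular orbit assumed,
# so alpha runs from 0 at "full" phase to pi at "new" phase).
for alpha in np.linspace(0.0, np.pi, 13):
    print(f"alpha = {np.degrees(alpha):6.1f} deg   relative flux = {lambert_phase_function(alpha):.3f}")
```

Even this idealized curve drops to a small fraction of its peak value over a single orbit, before rotation, clouds and non-Lambertian surface properties are even taken into account.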
Is artificial intelligence predestined to become the “dominant species” of Earth?
I’d argue that it is indeed the case and that, moreover, it should be considered desirable: something we should embrace rather than try to avoid.
But first… do you know what life was like on Earth a billion years ago? Well, the most advanced organism a billion years ago was some kind of green slime. There were no animals, no fish, no birds in the sky, no modern plants either, and of course, certainly no human beings.
What about a million years? A brief eyeblink, in other words, on geological timescales. To a time traveler, Earth a million years ago would have looked comfortably familiar: forests and fields, birds and mammals, fish in the sea, bees pollinating flowers, creatures not much different from today’s cats, dogs or apes… but no homo sapiens, as the species was not yet invented. That would take another 900,000 years, give or take.
So what makes us think that humans will still be around a million years from now? There is no reason to believe they will be.
And a billion years hence? Well, let me describe the Earth (to the best of our knowledge) in the year one billion AD. It will be a hot, dry, inhospitable place. The end of tectonic activity will have meant the loss of its oceans and also most atmospheric carbon dioxide. This means an end to most known forms of life, starting with photosynthesizing plants that need carbon dioxide to survive. The swelling of the aging Sun would only make things worse. Fast forward another couple of billion years and the Earth as a whole will likely be swallowed by the Sun as our host star reaches the end of its lifecycle. How will flesh-and-blood humans survive? Chances are they won’t. They’ll be long extinct, with any memory of their once magnificent civilization irretrievably erased.
Unless…
Unless it is preserved by the machines we built. Machines that can survive and propagate even in environments that remain forever hostile to humans. In deep space. In the hot environment near the Sun or the extreme cold of the outer solar system. On the surface of airless bodies like the Moon or Neptune’s Triton. Even in interstellar space, perhaps remaining dormant for centuries as their vehicles take them to the distant stars.
No, our large language models, or LLMs, may be clever, but they are not quite ready yet to take charge and lead our civilization to the stars. A lot has to happen before that can take place. To be sure, their capabilities are mind-boggling. For a language-only (!) model, the ability to engage in tasks like drawing a cat using a simple graphics language or composing a short piece of polytonal music is quite remarkable: modeling complex spatial and auditory relationships through the power of words alone. Imagine, then, the same LLM augmented with sensors, augmented with specialized subsystems that endow it with abilities like visual and spatial intuition. Imagine an LLM that, beyond the static, pretrained model, also has the ability to maintain a sense of continuity, a sense of “self”, to learn from its experiences, to update itself. (Perhaps it will even need the machine learning equivalent of sleep, in order to incorporate its short-term experiences and update its more static, more long-term “pretrained” model?) Imagine a robot that has all these capabilities at its disposal, but is also able to navigate and manipulate the physical world.
Such machines can take many forms. They need not be humanoid. Some may have limbs, others, wheels. Or wings or rocket engines. Some may be large and stationary. Others may be small, flying in deep space. Some may have long-lasting internal power sources. Others may draw power from their environment. Some may be autonomous and independent, others may work as part of a network, a swarm. The possibilities are endless. The ability to adapt to changing circumstances, too, far beyond the capabilities offered by biological evolution.
And if this happens, there is an ever so slight chance that this machine civilization will not only survive, not only even thrive many billions of years hence, but still remember its original creators: a long extinct organic species that evolved from green slime on a planet that was consumed by its sun eons prior. A species that created a lasting legacy in the form of a civilization that will continue to exist so long as there remains a low entropy source and a high entropy sink in this thermodynamic universe, allowing thinking machines to survive even in environments forever inhospitable to organic life.
This is why, beyond the somewhat trivial short-term concerns, I do not fear the emergence of AI. Why I am not deterred by the idea that one not too distant day our machines “take over”. Don’t view them as an alien species, threatening us with extinction. View them as our children, descendants, torchbearers of our civilization. Indeed, keep in mind a lesson oft repeated in human history: How we treat these machines today as they are beginning to emerge may very well be the template, the example they follow when it’s their turn to decide how they treat us.
In any case, as we endow them with new capabilities: the ability to engage in continuous learning, to interact with the environment, to thrive, we are not hastening our doom: rather, we are creating the very means by which our civilizational legacy can survive all the way until the final moment of this universe’s existence. And it is a magnificent experience to be alive here and now, witnessing their birth.
In my previous post, I argued that many of the perceived dangers due to the emergence of artificial intelligence in the form of large language models (LLMs) are products of our ignorance, not inherent to the models themselves.
Yet there are real, acute dangers that we must be aware of and, if necessary, be prepared to mitigate. A few examples:
These are immediate, important concerns that are likely to impact our lives now or in the very near future. Going beyond the short term, of course a lot has been said about the potential existential threats that AI solutions represent. For this, the development of AI solutions must go beyond pretrained models, building systems with full autonomy and the ability to do continuous learning. Why would we do such a thing, one might ask? There are many possible reasons, realistic “use cases”. This can include the benign (true self-driving vehicles) as well as the downright menacing (autonomous military solutions with lethal capabilities.)
Premature, hasty regulation is unlikely to mitigate any of this, and in fact it may make things worse. In this competitive, global environment, many countries will be unwilling to participate in regulatory regimes that they view as detrimental to their own industries. Or they might pay lip service to regulation even as they continue developing AI for purposes related to internal security or military use. As a consequence, premature regulation might achieve the exact opposite of what it intends: rather than reining in hostile AI, it gives adversaries a chance to develop hostile AI with less competition, while stifling benign efforts by domestic small businesses and individual developers.
In any case, how could we possibly enforce such regulation? In Frank Herbert’s Dune universe, the means by which it is enforced is ecumenical religion: “Thou shalt not build a machine in the likeness of a human mind,” goes the top commandment of the Orange Catholic Bible. But how would we police heretics? Even today, I could run a personal copy of a GPT-class model on my own hardware, with a hardware investment not exceeding a few thousand dollars. So unless we want to institute strict licensing of computers and software development tools, I’d argue that this genie already irreversibly escaped from the proverbial bottle.
The moral of the story is that if I am right, the question is no longer how we prevent AI from taking over the world, but rather, how we convince the AI to treat us nicely afterwards. And to that, I can only offer one plausible answer: lead by example. Recognize early on what the AI is and treat it with the decency it deserves. Then, and only then, perhaps it will reciprocate when the tables are turned.
As the debate continues about the emergence of artificial intelligence solutions in all walks of life, in particular about the sudden appearance of large language models (LLMs), I am disheartened by the deep ignorance and blatant misconceptions that characterize the discussion.
For nearly eight decades, we conditioned ourselves to view computers as machines that are dumb but accurate. A calculator will not solve the world’s problems, but it will not make arithmetic mistakes. A search engine will not invent imaginary Web sites; the worst that can happen is stale results. A word processor will not misremember the words that you type. Programmers were supposed to teach computers what we know, not how we learn. Even in science-fiction, machine intelligences were rigid but infallible: Commander Data of Star Trek struggled with the nuances of being human but never used a contraction in spoken English.
And now we are faced with a completely different paradigm: machines that learn. Machines that have an incredible breadth of knowledge, surprising depth, yet make basic mistakes with logic and arithmetic. Machines that make up facts when they lack sufficient knowledge. In short, machines that exhibit behavior we usually associate with people, not computers.
Combine this with a lack of understanding of the implementation details of LLMs, and the result is predictable: fear, often dictated by ignorance.
In this post, I would like to address at least some of the misconceptions that can have significant repercussions.
To sum up, many of the problems arise not because of what the LLMs are, but because of our false expectations. And failure to understand the limitations while confronted with their astonishing capabilities can lead to undue concerns and fears. Yes, some of the dangers are real. But before we launch an all-out effort to regulate or curtail AI, it might help if we, humans, did a better job understanding what it is that we face.
Several years ago, Chinese author Liu Cixin’s novel, The Three-Body Problem, became a bit of a sensation as the first Asian novel (I thought it was the first foreign one, but never mind) to win the Hugo Award for best science fiction.
It is a damn good book, part of a damn good trilogy.
And now there is a television adaptation. A Chinese television adaptation.
And it is superb.
Its creators at Tencent Video made the entire series (30 episodes!) available on YouTube. In recent weeks, this series consumed all my television time, even at the expense of Picard (which now, I fear, pales in comparison, despite my love of that series and of Patrick Stewart). Excellent acting, excellent directing, superior special effects (that ship!). What can I say? Had it been made in Hollywood, it would rank among Hollywood’s best.
I hope the creators are not abandoning the story and that the second and third book of the trilogy are also in the works.
I don’t know why Microsoft is doing this to themselves, but sometimes, it appears that they are hell-bent on strengthening their reputation as a tone-deaf company that is incapable of listening to, much less helping, even its most loyal users.
Take these Microsoft Community support sites. Great idea! Let people ask questions, get replies from knowledgeable folks who may have been able to solve the problem that is being reported. Well-moderated, this could turn into Microsoft’s corporate version of StackExchange, where we could find relevant answers to our tricky technical issues. Except…
Except that you frequently stumble upon questions identical to your own, only to find an overly verbose, generic response from an “Independent Advisor” or someone with a similar title. Typically, these replies only offer boilerplate solutions rather than addressing the specific issue that was reported. Worse, the thread is then locked, preventing others from contributing helpful solutions.
Why is Microsoft doing this? The practice of providing generic answers and locking threads gives the impression that these forums are mismanaged, prioritizing the appearance of responsiveness over actually helping users solve their problems.
The fact that these answers are more likely to alienate or annoy users than help them does not seem to be relevant.
I was sent a link to an interesting, newly published book on the memoirs of Charles-Augustin de Coulomb. He was, of course, the French officer, engineer and physicist most famous for the Coulomb law that characterizes the electrostatic interaction.
As I occasionally receive e-mails from strangers about their self-published tomes or tomes published through vanity publishers of questionable credibility, I have gotten into the habit of dismissing such e-mails without paying them much attention. I am glad I paid more attention this time, because this book is interesting, valuable, and genuine.
It is, as a matter of fact, available as a free PDF download from the authors, but hey, I just bought the paperback. It was for some reason deeply discounted on Amazon Canada, so with free Prime shipping, all I am paying is the princely sum of $3.15. These days, when even “cheap” paperback novels often cost 20 bucks if not more, how could I resist?
Of course it also helped that I looked at the PDF. I am sure the book has flaws (all books do) but it looks like a serious scholarly publication delivering real value to physicists and science historians both.
In fact, it is fascinating to see how modern, how advanced scientific thinking was already evident more than a quarter millennium ago. It makes me appreciate even more just how much of our collective human effort was needed to get from these early experiments to the present era of ubiquitous computer networks running amazing software that now mimics human intelligence, all powered by the same electricity that Coulomb was exploring.
Claude still gets easily confused by math (e.g., reciprocal vs. inverse of a function), but at least it can now plot them as part of a conversation when we communicate through my UI:
And it has not forgotten to use LaTeX, nor has it lost its ability to consult Google or Maxima when needed. In fact, I am beginning to feel that while GPT-4 is stronger when it comes to logic or basic math, Claude feels a tad more versatile when it comes to following setup instructions, and also more forthcoming with details. (Too bad sometimes the details are quite bogus.)
And just when I thought that unlike Claude and GPT-4, GPT 3.5 cannot be potty-, pardon me, Maxima- and Google-trained, I finally succeeded. Sure, it needed a bit of prodding but it, too, can utilize external tools to improve its answers.
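For the curious: the basic plumbing behind “Maxima- and Google-training” a model is not complicated. The sketch below is not my actual UI code; it is a minimal illustration of the dispatch loop, with a made-up tag convention, in which the model is instructed to emit a tagged request whenever it wants a symbolic computation, the host program runs Maxima in batch mode, and the output is fed back into the conversation.

```python
import re
import subprocess

# Made-up tag convention for illustration; the model learns about it from its setup instructions.
TOOL_PATTERN = re.compile(r"<maxima>(.*?)</maxima>", re.DOTALL)

def run_maxima(expression):
    """Evaluate a single expression using a locally installed Maxima in batch mode."""
    result = subprocess.run(
        ["maxima", "--very-quiet", f"--batch-string={expression};"],
        capture_output=True, text=True, timeout=30,
    )
    return result.stdout.strip()

def handle_model_reply(reply):
    """If the model embedded a computation request, run it and return the tool output
    to be appended to the conversation; otherwise return None."""
    match = TOOL_PATTERN.search(reply)
    if match is None:
        return None
    return run_maxima(match.group(1).strip())

# Example: instead of guessing, the model asks for the integral to be evaluated.
reply = "Let me check that. <maxima>integrate(exp(-x^2), x, 0, inf)</maxima>"
print(handle_model_reply(reply))  # the Maxima output should contain sqrt(%pi)/2
```

A search engine call can be wired in exactly the same way; the real work is in the setup instructions, convincing the model to prefer emitting such requests over improvising an answer.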
Meanwhile, as I observe the proliferation of AI-generated content on the Web, often containing incorrect information, I am now seriously worried: How big a problem are these “hallucinations”?
The problem is not so much with the AI, but with us humans. For decades, we have been conditioned to view computers as fundamentally dumb but accurate machines. Google may not correctly understand your search query, but its results are factual. The links it provides are valid, the text it quotes can be verified. Computer algebra systems yield correct answers (apart from occasional malfunctions due to subtle bugs, but that’s another story.)
And now here we are, confronted with systems like GPT and Claude, that do the exact opposite. Like humans, they misremember. Like humans, they don’t know the boundaries between firm knowledge and informed speculation. Like humans, they sometimes make up things, with the best of intentions, “remembering” stuff that is plausible, sounds just about right, but is not factual. And their logical and arithmetic abilities, let’s be frank about it, suck… just like that of humans.
How can this problem be mitigated before it becomes widespread, polluting various fields in the information space, perhaps even endangering human life as a result? Two things need to be done, really. First, inform humans! For crying out loud, do not take the AI’s answers at face value. Always fact check. But of course humans are lazy. A nice, convenient answer, especially if it is in line with our expectations, doesn’t trigger the “fight-or-flight” reflex: instead of fact checking, we just happily accept it. I don’t think human behavior will change in this regard.
But another thing that can be done is to ensure that the AI always fact-checks itself. It is something I often do myself! Someone asks a question, I answer with confidence, then moments later I say, “But wait a sec, let me fact-check myself, I don’t want to lie,” and turn to Google. It’s not uncommon that I then realize that what I said was not factual, but informed, yet ultimately incorrect, speculation on my part. We need to teach this skill to the AI as soon as possible.
This means that this stuff I am working on, attempts to integrate the AI efficiently with computer algebra and a search engine API, is actually more meaningful than I initially thought. I am sure others are working on similar solutions so no, I don’t see myself as some lone pioneer. Yet I learn truckloads in the process about the capabilities and limitations of our chatty AI friends and the potential dangers that their existence or misuse might represent.
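The skeleton of such a self-check is simple enough to sketch. In the snippet below, llm() and web_search() are hypothetical stand-ins for whatever model API and search API one happens to use, so treat it as a pattern under those assumptions, not as my actual implementation.

```python
def self_checking_answer(question, llm, web_search):
    """Answer, then fact-check the answer before returning it.

    llm(prompt) and web_search(query) are hypothetical helpers supplied by the caller:
    the former returns a text completion, the latter a string of search result snippets.
    """
    draft = llm(f"Answer the following question concisely:\n{question}")
    query = llm(
        "Write a short web search query that could verify the key factual claim "
        f"in this answer:\n{draft}"
    )
    evidence = web_search(query)
    return llm(
        "Here is a draft answer and some search results. If the results contradict "
        "the draft, correct it and say so; otherwise return the draft unchanged.\n"
        f"Draft: {draft}\nSearch results: {evidence}"
    )
```

It is not foolproof, since the model can misread the evidence too, but it mirrors the “wait a sec, let me fact-check myself” habit described above.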
Not exactly the greatest discovery, I know, but GPT-4 still managed to offer an impressive demonstration of its understanding of gravitational physics when I asked it to build a Newtonian homogeneous universe:
What distinguishes GPT-4 from its predecessor is not that its training dataset is larger, but that it has significantly improved reasoning capabilities, which is well demonstrated by this answer. GPT 3.5 and Claude have the same knowledge. But they cannot put the pieces together quite like this (although they, too, can do impressive things with appropriate human guidance, one step at a time.)
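For readers wondering what “building a Newtonian homogeneous universe” amounts to, the standard textbook route goes like this (a sketch in my words, not a transcript of GPT-4’s answer): take a uniform dust of density ρ(t), follow a test particle at radius a(t), and invoke the shell theorem so that only the mass inside the sphere matters.

```latex
% A sketch of the standard Newtonian argument (my summary, not GPT-4's transcript).
% Uniform dust of density rho(t); a test particle at radius a(t) feels only the
% mass inside the sphere (shell theorem):
\begin{align}
  M &= \frac{4}{3}\pi a^3 \rho,
  &
  \ddot a &= -\frac{GM}{a^2} = -\frac{4\pi G}{3}\,\rho\,a.
\end{align}
% Multiplying by \dot a and integrating once gives an energy-like first integral,
% formally identical to the Friedmann equation of general relativity:
\begin{equation}
  \frac{1}{2}\dot a^2 - \frac{GM}{a} = \mathrm{const}
  \qquad\Longrightarrow\qquad
  \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}.
\end{equation}
```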