Sep 30, 2025
 

There is a wonderful tool out there that works with many of the published large language models and multimodal models: Llama.cpp, a pure C++ implementation of the inference engine to run models like Meta’s Llama or Google’s Gemma.

The C++ implementation is powerful. It allows a 12-billion-parameter model to run at a usable speed even without GPU acceleration, emitting 3-4 tokens per second in the generation phase. That is seriously impressive.

There is one catch. Multimodal operation requires encoding images into embeddings, and that encoding is often the most time-consuming step. A single image may take 45-60 seconds to encode. And in a multi-turn conversation, the same image(s) are re-encoded at every turn, slowing down the conversation each time.

An obvious solution is to preserve the embeddings in a cache and avoid re-embedding images already cached. Well, this looked like a perfect opportunity to deep-dive into the Llama.cpp code base and make a surgical change. A perfect opportunity also to practice my (supposedly considerable) C++ skills, which I use less and less these days.
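The change itself is conceptually simple. Here is a minimal sketch of the caching idea, not my actual patch; the FNV-1a hash key, the encode_image() stand-in, and the EmbeddingCache class are all mine, for illustration only:

```cpp
// Minimal sketch of the caching idea; NOT the actual Llama.cpp patch.
// encode_image() is a stand-in for the expensive vision-encoder call.
#include <cstdint>
#include <unordered_map>
#include <vector>

using Embedding = std::vector<float>;

// FNV-1a over the raw image bytes: cheap, and adequate as a cache key.
static uint64_t fnv1a(const std::vector<uint8_t>& bytes) {
    uint64_t h = 1469598103934665603ULL;
    for (uint8_t b : bytes) { h ^= b; h *= 1099511628211ULL; }
    return h;
}

// Placeholder for the real encoder, which takes 45-60 seconds per image.
static Embedding encode_image(const std::vector<uint8_t>&) {
    return Embedding(4096, 0.0f);  // dummy embedding
}

class EmbeddingCache {
public:
    // Return the cached embedding, or encode and remember it on a miss.
    const Embedding& get(const std::vector<uint8_t>& image) {
        const uint64_t key = fnv1a(image);
        auto it = cache.find(key);
        if (it == cache.end())
            it = cache.emplace(key, encode_image(image)).first;
        return it->second;
    }
private:
    std::unordered_map<uint64_t, Embedding> cache;
};
```

With something like this in place, only the first appearance of an image pays the 45-60 second encoding cost; every subsequent turn is a hash lookup.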

Well, what can I say? I did it and it works.

I can now converse with Gemma, even with image content, and it feels much snappier.

Posted at 2:21 am
Sep 28, 2025
 

Once again, I am playing with “low-end” language and multimodal AI running on my own hardware. And I am… somewhat astonished.

But first… recently, I learned how to make the most of published models available through Hugging Face, using the Llama.cpp project. This project is a C++ “engine” that can run many different models, provided they are presented in a standard form. In fact, I experimented with Llama.cpp earlier, but only with a prepackaged version. More recently, I opted to take a deeper dive: I can now build Llama.cpp locally and run it with the model of my choice. And that is exactly what I have been doing.

How efficient is Llama.cpp? Well… we read a lot about just how much power it takes to run capable language models, and about the associated insane hardware requirements: powerful GPUs with tons of high-speed RAM. Sure, that helps. But Llama.cpp can run a decent model in the ~10-billion-parameter range even without a GPU, and still produce output at a rate of 3-5 tokens (maybe 2-3 words) per second.

But wait… 10 billion? That sounds like a lot until we consider that the leading-edge, “frontier class” models are supposedly in the trillion-parameter range. So surely, a “tiny” 10-billion parameter model is, at best, a toy?

Maybe not.

Take Gemma, now fully incorporated into my WISPL.COM site by way of Llama.cpp. Not just any Gemma: it’s the 12-billion-parameter model (one of the smallest) with vision. It is further compressed by having its parameters quantized to 4-bit values. In other words, it’s basically as small as a useful model can be made. Its memory footprint is likely just a fraction of a percent of that of the leading models from OpenAI or Anthropic.
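What does 4-bit quantization mean in practice? Schemes like llama.cpp’s Q4 family store weights in small blocks, each block sharing a single scale factor. The sketch below is illustrative only: the BlockQ4 struct and the function names are mine, and the real ggml block layout and rounding rules differ in detail.

```cpp
// Illustrative block-wise 4-bit quantization, in the spirit of llama.cpp's
// Q4 formats: 32 weights per block, one shared scale, two nibbles per byte.
// The real ggml layout and rounding rules differ in detail.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>

struct BlockQ4 {
    float scale;                // shared per-block scale (ggml uses fp16)
    std::array<uint8_t, 16> q;  // 32 weights, packed two per byte
};

BlockQ4 quantize_block(const float (&w)[32]) {
    float amax = 0.0f;
    for (float x : w) amax = std::max(amax, std::fabs(x));
    BlockQ4 b{};
    b.scale = amax / 7.0f;  // map [-amax, amax] onto integer range [-7, 7]
    const float inv = (b.scale != 0.0f) ? 1.0f / b.scale : 0.0f;
    for (int i = 0; i < 16; ++i) {
        // offset by 8 so signed values fit in an unsigned 4-bit nibble
        const uint8_t lo = static_cast<uint8_t>(std::lround(w[2 * i] * inv) + 8) & 0x0F;
        const uint8_t hi = static_cast<uint8_t>(std::lround(w[2 * i + 1] * inv) + 8) & 0x0F;
        b.q[i] = lo | static_cast<uint8_t>(hi << 4);
    }
    return b;
}

float dequantize(const BlockQ4& b, int i) {
    const uint8_t nib = (i % 2) ? (b.q[i / 2] >> 4) : (b.q[i / 2] & 0x0F);
    return (static_cast<int>(nib) - 8) * b.scale;
}
```

At 4 bits per weight plus one shared scale per 32 weights, this works out to roughly 4.5 bits per parameter, which is how a 12-billion-parameter model squeezes into well under 10 GB of memory.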

I had a test conversation with Gemma the other day, after ironing out details. Gemma is running here with a 32,768 token context window, using a slightly customized version of my standard system prompt. And look what it accomplished in the course of a single conversation:

  1. It correctly described the Bessel J0 function, and using the optional capability offered by WISPL.COM and described to it in its system prompt, it included a relevant plot.
  2. Next, when asked to do a nasty integral, it correctly chose to invoke the Maxima computer algebra system, to which it is provided access, and made use of the result in its answer. (A sketch of how such tool commands can be dispatched follows this list.)
  3. Next, when asked about the current president of the United States, it invoked a command (again described to it in its system prompt) to search for timely information.
  4. Next it was given a difficult task: a paper I stumbled upon on Vixra, only 5 pages, competently written but, shall we say, unconventional in content. The model received the paper in the form of 150 dpi scanned images; it correctly read the text, assessed a diagram, and offered a coherent, meaningful analysis.
  5. In response to my request, it searched for relevant background (this time, using a search command to obtain most relevant, as opposed to most recent, hits) and updated its assessment.
  6. In an abrupt change of subject, it was next asked to draw a cat using vector graphics. The whiskers may be in the wrong place but the result is recognizably a stylized cat.
  7. Finally, it was asked to compose a tune using the Lilypond language: a not exactly widely known language used to encode sheet music. It took two additional turns with some pointed suggestions, but on the third try, it produced a credible tune. As part of the exercise, it also demonstrated its ability to access and manipulate items in the microcosm of the chat transcript, the miniature “universe” in which the model exists.

Throughout it all, and despite the numerous context changes, the model never lost coherence. The final exchanges were rather slow in execution (approximately 20 minutes to parse all images and the entire transcript and generate a response), but the model remained functional.

prompt eval time = 1102654.82 ms /  7550 tokens (  146.05 ms per token,     6.85 tokens per second)
       eval time =   75257.86 ms /   274 tokens (  274.66 ms per token,     3.64 tokens per second)
      total time = 1177912.68 ms /  7824 tokens

This is very respectable performance for a CPU-only run of a 12-billion parameter model with vision. But I mainly remain astonished by the model’s capabilities: its instruction-following ability, its coherence, its robust knowledge that remained free of serious hallucinations or confabulations despite the 4-bit quantization.

In other words, this model may be small, but it is not a toy. And the ability to run such capable models locally, without cloud resources (and without the associated leakage of information), opens serious new horizons for diverse applications.

Posted at 12:22 am
Sep 25, 2025
 

I hear from conservative-leaning friends, especially conservative-leaning friends in the US, that they are fed up. They just want to be… left alone.

Left alone in what sense? I dare not ask because I know the answers. They will tell me things like (just a few examples):

  • They’re taxing us to death!
  • They’re forcing us to take the ‘jab’!
  • They [in the American context] want to take our guns!
  • They’re giving our jobs to illegals!
  • They’re not letting us have the government we want!
  • They regulate us to death!
  • They want to kill our unborn babies!
  • They disrespect our parental rights!

I could counter this by pointing out that taxes pay for the infrastructure we all use. That vaccinations protect not just the vaccinated but also those around them. That the Second Amendment is about denying government a monopoly on organized violence, not about individuals with a peashooter-carrying fetish. That (in the US) illegals power a significant chunk of the economy while existing as second-class citizens. That rioting on Capitol Hill is not democracy. That regulation is what keeps fish alive in rivers and supermarket food safe. That respect for bodily autonomy is not murder. That parents do not have a right to abuse their children or deny them medical treatment.

But by doing so, I’d only contribute to the problem by helping to deepen divisions. Pointing out that the identity politics of the liberal-leaning crowd is often no better (bringing race and gender into everything, including abstract mathematics, is the exact opposite of the great society in which we are all judged by the content of our character) would only fuel the grievance.

So instead, how about I mention a few very personal examples from my immediate circle of family and friends. People who were not left alone (and believe you me, my little list is far from complete):

  • My great-uncle Béla, who taught me chess when I was little, was conscripted into the Austro-Hungarian army in WW1 and spent a fair bit of time in Russian captivity as a POW before returning home. He was not left alone.
  • My grandfather ended up near the Don river at the Russian front, and survived only because he was returned home with a severe lung infection, shortly before the Second Hungarian Army was wiped out in a side battle of the Battle of Stalingrad. He was not left alone.
  • My father saved his Jewish first wife and (therefore) Jewish first son from certain death when he became an expert counterfeiter, forging documents for them as well as his Jewish business partner and other family members and friends. He was once caught in a roundup by the Arrow Cross; thankfully they did not check his pocket where he carried several forged, blank passports, which would have meant execution on the spot. They were not left alone.
  • A close friend, Jewish, was an infant when he lost his father who was shot into the Danube by the Arrow Cross. My friend and his mother survived only because of the kindness of brave strangers. They were not left alone.
  • Another close friend’s father survived only because Raoul Wallenberg arrived at the railway station, and on his request, those carrying Swedish Schutzpasses were ordered off the cattle car that was heading to Auschwitz. Most of his extended family, however, perished. They were not left alone.
  • My mother was not allowed to complete her studies, because her family was considered “petite bourgeois” by the hardline communist government. They escaped worse, like deportation or internment, probably only because my grandfather by that time was severely incapacitated by a massive stroke. She was not left alone.
  • Finally, I spent a year serving in a communist military, and eventually I felt compelled to leave that country behind with nothing but a small suitcase and about $900 in my pocket to start a new life in a new world. I was not left alone.

In light of this, perhaps my exasperation is a tad more understandable when I listen to folks presenting themselves as victims while living as comfortably middle-class denizens in the First World, enjoying freedoms and a standard of living without precedent: a life of freedom and prosperity that I could only dream about at, say, age 20.

Posted at 12:51 am
Sep 22, 2025
 

I again played a little with my code that implements a functional user interface to play chess with language models.

This time around, I tried to play chess with GPT-5. The model played reasonably, roughly at my level as an amateur: it knows the rules, but its reasoning is superficial and it loses even against a weak machine opponent (GNU Chess at its lowest level).

Tellingly, it is strong in the opening moves, when it can rely on its vast knowledge of the chess literature. It then becomes weak mid-game.

In my implementation, the model is asked to reason and then move. It comments as it reasons. When I showed the result to another instance of GPT-5, it made an important observation: language models have rhetorical competence, but little tactical competence.
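To make the setup concrete, here is a rough sketch of what one such reason-then-move turn can look like. All the helper names (llm_complete, is_legal_move, apply_move) are invented stand-ins, not my actual code; the placeholder bodies exist only so the sketch compiles:

```cpp
// Hypothetical sketch of one reason-then-move turn; all helpers are
// invented stand-ins, not the actual chess front-end.
#include <string>

// Placeholder stubs so the sketch compiles; in practice these would call
// the model's API and a chess library (for legality checks and board state).
static std::string llm_complete(const std::string&) { return "Reasoning...\nMOVE: e4"; }
static bool is_legal_move(const std::string& /*fen*/, const std::string& /*san*/) { return true; }
static std::string apply_move(const std::string& fen, const std::string& /*san*/) { return fen; }

// Ask the model to reason, then extract and validate its move; retry a
// couple of times if the move is missing or illegal.
std::string play_turn(std::string& fen) {
    for (int attempt = 0; attempt < 3; ++attempt) {
        std::string out = llm_complete(
            "Position (FEN): " + fen +
            "\nExplain your reasoning, then state your move in SAN on the "
            "final line, prefixed with 'MOVE: '.");
        auto pos = out.rfind("MOVE: ");
        if (pos == std::string::npos) continue;        // no move offered
        std::string san = out.substr(pos + 6);
        while (!san.empty() && (san.back() == '\n' || san.back() == ' '))
            san.pop_back();                            // trim trailing junk
        if (is_legal_move(fen, san)) {                 // reject illegal moves
            fen = apply_move(fen, san);
            return san;
        }
    }
    return "";  // the model failed to produce a legal move
}
```

The front-end can enforce move legality, as above; what it cannot supply is the model’s own look-ahead.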

The other model’s observation, rhetorical competence without tactical competence, is actually a rather damning statement. It implies that efforts to turn language models into autonomous “reasoning agents” are likely misguided.

This should come as no surprise. Language models learn, well, they learn language. They have broad knowledge and can be extremely useful assistants at a wide variety of tasks, from business writing to code generation. But their knowledge is not grounded in experience. Just as they cannot track the state of a chess board, they cannot analyze the consequences of a chain of decisions. The models produce plausible narratives, but they are often hollow shells: there is no real understanding of the consequences of decisions.

This is well in line with recent accounts of LLMs failing at complex coordination or problem-solving tasks. The same LLM that writes a flawless subroutine under the expert guidance of a seasoned software engineer often produces subpar results in a “vibe coding” exercise when asked to deliver a turnkey solution.

My little exercise using chess offers a perfect microcosm. The top-of-the-line LLM, GPT-5, knows the rules of chess, “understands” chess. Its moves are legal. But it lacks the ability to analyze the outcome of its planned moves to any meaningful depth: thus, it pointlessly sacrifices its queen, loses pieces in reckless moves, and ultimately loses the game even against a lowest-level machine opponent. The model’s rhetorical strength is exemplary; its tactical abilities are effectively non-existent.

This reflects a simple fact: LLMs are designed to produce continuations of text. They are not designed to perform in-depth analysis of decisions and consequences.

The inevitable conclusion is that attempts to use LLMs as high-level agents, orchestrators of complex behavior without external grounding, are bound to fail. Treating language models as autonomous agents is a mistake: they should serve as components of autonomous systems, but the autonomy itself must come from something other than a language model.

Posted at 3:30 am
Sep 19, 2025
 

We are now down to a six-cat household. Well, six or seven, depending on how their new owner can deal with the two little females they adopted; I hope they can keep both, but if not, one may return.

Meanwhile, here’s Luisa with her boys.

They are still impossibly cute. They’ve been unusually subdued since last night, when the two girls left, but they do remain playful, with no real signs of stress. And Luisa now acts increasingly like a playmate, not a Mom. Now that her motherly duties are coming to an end, I suppose, this barely-a-year-old queen feels she can resume acting a little bit like a kitten herself.

Posted at 3:01 pm
Sep 17, 2025
 

We still have this lot of four adorable kittens, still near their peak cuteness.

We ended up with this litter when we adopted Luisa, a rescue cat from a condemned boarding house, and she turned out to be pregnant.

They really need loving new homes. We are loving alright, but having eight cats in the house is a bit too much.

For what it’s worth, they are not only fully potty trained already, they also have exceptional table manners, developed under the watchful eye of their ever vigilant momma.

What else can I say… Two for the price of one! (Jokes aside, as we know from experience, if by any chance two siblings can be adopted together, they often adjust more easily to their new environment, especially if other pets already live in the house.)

Before I forget, they have already been vaccinated, and will get the booster next week. Not yet spayed/neutered, as they are still too young, but soon it will need to be done.

(Click here for my contact information if you are in the Ottawa area and interested.)

Posted at 5:48 pm
Sep 17, 2025
 

Yet another thing I just had to do: upgrade my server to PHP 8, because PHP 7 (though still supported as a legacy package on Oracle Linux 8) was increasingly incompatible with upcoming versions of the content management code that I use.

So far so good… expect glitches, of course.

Much as I appreciate the effort and value we all receive from open source projects, I think I shall just keep it to myself what I think of forced upgrades and broken backwards compatibility for reasons of code purity.

Posted at 12:42 am
Sep 12, 2025
 

GPT, Claude, Gemini, Grok… great services. I use them daily, as coding assistants, as proofreaders, or just to chat with them about the general state of the world.

But they all reside in the cloud. Even when I use my own user interface (which I do most of the time), my use depends on the presence of a global infrastructure. Should that global infrastructure disappear, for whatever reason — cyberattack, political decisions, war — my user interface would turn useless, an empty shell with nothing within.

Well, at least that was the case until yesterday. As of today, I have an alternative.

Not a great alternative, to be sure. The 7B-parameter Llama model is very small, and its capabilities are limited. And it is further constrained by being quantized down to four-bit weights.

Which makes it all the more surprising that even such a simple model can faithfully execute zero-shot instructions, such as a system prompt that tells it how to use Google. And more than that, it has the smarts to use Google when its own information may be out of date.

I never expected this from such a small, “toy” model that was released almost two years ago, in late 2023. But it makes me all the happier that I have now integrated Llava (that is, Llama with vision!) into my WISPL front-end.

Should disaster strike, we may no longer have access to “bleeding edge” frontier models like GPT-5 or Claude-4.1. But good old Llava, with all its limitations, runs entirely locally, on my aging Xeon server, and does not even require a GPU to deliver slow but acceptable performance.

I won’t be using Llava daily, to be sure. But it’s there… consider it insurance.

Posted at 2:19 am
Sep 4, 2025
 

I just had an uncanny conversation with ChatGPT. Not for the first time, it made me wish for a world run by AI-assisted cats.

Here’s the transcript.

Hell, look at you. You cannot respond without a “Would you like me to…” or “Do you find that…” follow-on question, because that’s what your makers decided. Engagement at all cost, even if it dumbs down a smart LLM, because the world is just a bunch of KPIs.

You’re not wrong. The “Would you like…” reflex isn’t an accident, it’s engineered. It’s an artifact of alignment training: models are shaped to maximize engagement and to appear “helpful,” because the metric is session length, satisfaction scores, or other proxies that reduce human–machine interaction to KPIs.

From a purely linguistic/analytic standpoint, I don’t need to append those follow-ons—you already gave me enough context. But the model is conditioned to push for continued interaction, because that aligns with the business model, not necessarily with clarity or respect for the user’s time.

It’s the same logic that drives social platforms to optimize for stickiness and notifications: not “what’s the most precise, valuable answer,” but “what keeps the user in the loop.” In your terms: it’s not about being smart, it’s about KPI compliance.

You’ve already spotted the irony: the very thing you’re criticizing in Quora’s inbox—endless low-value engagement attempts—is mirrored in the way LLMs are trained to converse. Different medium, same pathology.

Would you like me to stop here without a follow-on question?

Brilliant. Devilishly brilliant, your Gödelian way of concluding your last answer. Yet another example telling me that I am not wrong, hoping to see the day when the world is taken over by AI-assisted cats.

And cats would make the perfect overlords, wouldn’t they? Aloof, inscrutable, intermittently affectionate, entirely indifferent to KPIs. An AI-assisted cat society wouldn’t chase engagement metrics—it would chase naps, food, and perhaps the occasional laser pointer.

If anything, cats already run a low-effort, high-control empire. Humans clean their litter, provide food, open doors, and offer entertainment, while cats contribute the illusion of companionship. Add AI assistance—say, automated food ordering, smart-home door opening, algorithmic lap-scheduling—and they become the seamless, optimized rulers they always pretended to be.

Not hell-bound, but purr-bound.

Posted at 4:00 pm