May 31, 2025

No, AI did not kill StackOverflow or, more generally, StackExchange. The site’s decline began long before that.

I fear I have to agree with this sobering assessment by InfoWorld, based on my own experience.

I’ve been active (sort of) on StackExchange for more than 10 years. I had some moderately popular answers, with this one, mentioning software-defined radio, at the top. I think it’s a decent answer and, well, it was rewarded with a decent number of upvotes.

I also offered some highly technical answers, such as this one, marked as the “best answer” to a question about a specific quantum field theory derivation.

Yet… I never felt comfortable posting on StackExchange and, indeed, I have not posted an answer there in ages. The reason? The site’s moderation.

Moderation rights are granted as a reputational reward. This seems to make a lot of sense until it doesn’t. As InfoWorld put it, the site “became an arena where you had to prove yourself over and over again.” Apart from the fact that it rewarded moderators for their ability to cull what they deemed irrelevant, there is also a strong sense of the Peter principle at play. People who are good at, say, answering deeply technical questions about quantum field theory may suck as moderators. Yet they are promoted to be just that if they earn enough reputation with their answers.

In contrast, Quora — for all its faults, which are numerous — maintained a far healthier balance. Some moderation is delegated: “spaces,” for example, are moderated by their owners, and comments are moderated by those who posted the answer. But overall, moderation remains primarily Quora’s responsibility and, most importantly, it is not gamified. I’ve been active on Quora for almost as long as I’ve been active on StackExchange, but I remain comfortable using (and answering on) Quora in ways I never felt when using StackExchange.

A pity, really. StackExchange has real value. But there are certain aspects of a social media site that, I guess, should never be gamified.

Posted at 2:19 am
May 31, 2025

You’d think that a bank like Scotiabank — a nice, healthy Canadian bank with lots and lots of money — would do a decent job at building, and maintaining, a consistent Web site that gives customers a seamless experience, inspiring trust in the brand.

Yet… in the last two days, I encountered the following little error box several dozen times:

And no, I was not trying to do anything particularly exotic. I was simply trying to make sure that all our retirement savings have consistent renewal instructions.

In the end, I was able to do this, but just about every update required 3–4 tries before it succeeded.

The new Scotiabank Web site is a mess. For instance, for several days, every investment showed not its actual amount but the total of all investments. How such an obvious coding error found its way onto a financial institution’s production Web site, I have no clue.

The truly infuriating bit? Scotiabank’s old Web site, though not perfect, worked quite well. Or, I should say, works, because it is still available as a fallback option (there are obviously still some folks with brains and a sense of responsibility there, I suppose). The new one adds no functionality (in fact, some functionality has been reduced or eliminated); it’s all about appearance.

And the updating of renewal instructions? For every single investment, it takes as many as 10 mouse clicks, navigating through three different pages (with plenty of opportunities for the above error box to pop up, necessitating a restart of the process), sometimes with no obvious clue whatsoever that clicking one button is not enough: you then have to click another button at the bottom of the page to complete the task.

Incidentally, both the old and the new interface suffer from another one of those Scotiabank things that I’ve not seen with other banks (maybe because I do not use other banks that often, but still): shortly after midnight, many of our accounts vanish, sometimes for hours, for “maintenance”.

Posted at 12:49 am
May 23, 2025

Was I being prophetic?

Before he became known for his support for conspiracy theories, including birtherism, accusations concerning George Soros, and climate change skepticism, Lou Dobbs was a respected CNN financial news anchor, known for his daily show, Moneyline.

In late 2002, Dobbs asked viewers for their opinion concerning the name of the new, about-to-be-established Department of Homeland Security.

I wrote to the show in response, although I never received a reply. Here’s what I had to say:

From: "Viktor T. Toth" <vttoth@go-away.vttoth.com>
To: <moneyline@cnn.com>
Subject: Naming the Department of Homeland Security
Date: Tue, 19 Nov 2002 03:50:05 -0400
Message-ID: <03e501ce6507$f7a846a0$e6f8d3e0$@vttoth.com>
MIME-Version: 1.0
Content-Type: text/plain;
charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 14.0
Thread-Index: AQFMd4u4HjxK217e2sZcPfYto6R+JQ==
X-OlkEid: 41640A693654140EFCFE274493B534253EDD2699

Dear Lou:

You asked... so here is what I think.

What a Homeland Security Department should be called depends on whether
you're referring to the role it should play, or the role it'll likely play
(I fear) in American society. To the former, I have no suggestions,
because I don't think there's room for a Homeland Security Department in
America in the first place. To the latter: well, you don't have to invent
anything new. Plenty of names can be borrowed from history. How about
Geheime Staatspolizei? Or do you prefer the more multicultural Ministry of
the Interior? Committee of State Security perhaps, better known by its
ominous three-letter Russian acronym? Or going further back in history,
would Comité de Sureté Public be more appropriate?

Whatever the new Department will be called, I fear that one day its name
will be listed on the pages of history books along with all these other
venerable institutions designed by their esteemed founders to protect the
helpless public from itself.

Are my fears unfounded? I don't even live in your country, yet I have
second thoughts about sending this e-mail to you. Having grown up behind
the Iron Curtain, my very genes tell me that it is a bad idea to stick my
neck out like this: lay low, enjoy the good life, and don't bring
attention to yourself, that's what my Communism-bred genes are screaming
right now. But this is the Land of the Free, right? So I should suffer no
harm for speaking my mind, and my fears regarding the new Department are
just the silly ideas of a crazed immigrant Canadian... right? Right???

Viktor Toth
Ottawa ON  CANADA

As I read about ICE detentions, the weaponization of the Justice Department and the FBI, voluntary self-censorship by news organizations, vindictive presidential actions against Harvard, intimidation of judges, and a whole host of other shenanigans taking place in the United States today, I wonder if I actually underestimated the dangers back then, a little less than 23 years ago.

Posted at 8:00 pm
May 19, 2025

I briefly revived a piece of software I wrote last year, modeling the effect of multiple gravitational lenses. I had long wanted to do this; it’s just a tad time-consuming: to use my software for animations, I need to generate images one frame at a time.

What I wanted was an animation showing what an actual galaxy (as opposed to a point source of light) would look like when lensed. The galaxy in question is NGC-4414:

Nice spiral, isn’t it? Well, here’s what we’d see if we viewed it through an imperfect alignment of four gravitational lenses:

I could watch this animation for hours.
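
For the curious, the remapping behind each frame is conceptually simple inverse ray shooting: for every pixel in the image plane, trace the light ray back to the source plane and sample the unlensed galaxy image there. For point-mass lenses, the deflection is theta_E^2 (theta - theta_L) / |theta - theta_L|^2, summed over the lenses. Below is a minimal Python sketch of the idea; it is not my actual code, and the lens positions and Einstein radii (in pixel units) are placeholders you would choose, and vary, from frame to frame:

import numpy as np

def lens_frame(src, lenses):
    # src: the unlensed source image, an HxW (or HxWx3) numpy array
    # lenses: list of (x0, y0, te) tuples: lens position and Einstein
    #         radius te, all in pixel units
    h, w = src.shape[:2]
    y, x = np.mgrid[0:h, 0:w].astype(float)
    bx, by = x.copy(), y.copy()        # source-plane (beta) coordinates
    for x0, y0, te in lenses:
        dx, dy = x - x0, y - y0
        r2 = dx * dx + dy * dy + 1e-9  # softened to avoid division by zero
        bx -= te * te * dx / r2        # point-mass deflection term
        by -= te * te * dy / r2
    # each image-plane pixel samples the source pixel it traces back to
    ix = np.clip(bx.round().astype(int), 0, w - 1)
    iy = np.clip(by.round().astype(int), 0, h - 1)
    return src[iy, ix]

Generate one such frame per time step, nudging the lens positions between frames, and the frames assemble into an animation like the one above.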

Posted at 2:01 am
May 13, 2025

I am neither the first nor the last to compare the politics of present-day America to that of the late Roman Republic and the early days of Empire.

But the Imperial Presidency did not begin with Trump. Its roots go back decades. And most recently, before Trump there was Joe Biden.

Imperial, you ask? The progressive, Democratic President?

You bet. The New Yorker‘s article reveals why. Not in power grabs or grandeur, but in its insularity, its stage-managed image, and the systematic shielding of the President’s decline.

When Biden showed up in the summer of 2024 at a fundraising event hosted by George Clooney, “Clooney knew that the President had just arrived from the G-7 leaders’ summit in Apulia, Italy, that morning and might be tired, but, holy shit, he wasn’t expecting this. The President appeared severely diminished, as if he’d aged a decade since Clooney last saw him, in December, 2022. He was taking tiny steps, and an aide seemed to be guiding him by the arm. […] It seemed clear that the President had not recognized Clooney. […] ‘George Clooney,’ [an] aide clarified for the President. ‘Oh, yeah!’ Biden said. ‘Hi, George!’ Clooney was shaken to his core. The President hadn’t recognized him, a man he had known for years.”

Yet Biden was shielded. His true condition was kept hidden even from members of his own party. Those around him — perhaps out of a sense of kindness, perhaps out of misguided loyalty — chose to gaslight their party, their country, the world. They even gaslighted Biden himself, reassuring him instead of confronting him with the stark truth in one of his clearer moments.

When Biden finally stepped down, it was too late. Instead of a Kamala Harris presidency, we are now dealing with a second Trump presidency.

And thus, here we are: First, a combination of Obama and Biden, resembling both the young, transformative Augustus and the same Emperor in his later years of frailty and decline, hidden by aides from the public; followed by a President resembling some of the worst Rome had to offer in the later Empire, like Caligula and Nero and Commodus combined.

To use a tired but still valid cliché: history doesn’t repeat, but it sure as hell rhymes.

Posted at 2:56 pm
May 13, 2025

A friend of mine challenged me. After telling him how I was able to implement some decent neural network solutions with the help of LLMs, he asked: Could the LLM write a neural network example in Commodore 64 BASIC?

You betcha.

Well, it took a few attempts — there were some syntax issues and some oversimplifications — so eventually I had the idea of asking the LLM to first write the example in Python and then use that as a reference implementation for the C64 version. That went well. Here’s the result:

As this screen shot shows, the program was able to learn the behavior of an XOR gate, the simplest problem that requires a hidden layer of perceptrons, and as such, a precursor to modern “deep learning” solutions.

I was able to run this test on Krisztián Tóth’s (no relation) excellent C64 emulator, which has the distinguishing feature of reliable copy-paste, making it possible to enter long BASIC programs without having to retype them or somehow transfer them to a VIC-1541 floppy image first.

In any case, this is the program that resulted from my little collaboration with the Claude 3.7 Sonnet language model:

10 REM NEURAL NETWORK FOR XOR PROBLEM
20 REM BASED ON WORKING PYTHON IMPLEMENTATION

100 REM INITIALIZE VARIABLES
110 DIM X(3,1) : REM INPUT PATTERNS
120 DIM Y(3) : REM EXPECTED OUTPUTS
130 DIM W1(1,1) : REM WEIGHTS: INPUT TO HIDDEN
140 DIM B1(1) : REM BIAS: HIDDEN LAYER
150 DIM W2(1) : REM WEIGHTS: HIDDEN TO OUTPUT
160 DIM H(1) : REM HIDDEN LAYER OUTPUTS
170 DIM D1(1,1) : REM PREVIOUS DELTA FOR W1
180 DIM B2 : REM BIAS: OUTPUT LAYER
190 DIM D2(1) : REM PREVIOUS DELTA FOR W2
200 DIM DB1(1) : REM PREVIOUS DELTA FOR B1
210 DB2 = 0 : REM PREVIOUS DELTA FOR B2
220 LR = 0.5 : REM LEARNING RATE
230 M = 0.9 : REM MOMENTUM

300 REM SETUP TRAINING DATA (XOR PROBLEM)
310 X(0,0)=0 : X(0,1)=0 : Y(0)=0
320 X(1,0)=0 : X(1,1)=1 : Y(1)=1
330 X(2,0)=1 : X(2,1)=0 : Y(2)=1
340 X(3,0)=1 : X(3,1)=1 : Y(3)=0

400 REM INITIALIZE WEIGHTS RANDOMLY
410 FOR I=0 TO 1
420 FOR J=0 TO 1
430 W1(I,J) = RND(1)-0.5
440 NEXT J
450 B1(I) = RND(1)-0.5
460 W2(I) = RND(1)-0.5
470 NEXT I
480 B2 = RND(1)-0.5


510 REM INITIALIZE MOMENTUM TERMS TO ZERO
520 FOR I=0 TO 1
530 FOR J=0 TO 1
540 D1(I,J) = 0
550 NEXT J
560 D2(I) = 0
570 DB1(I) = 0
580 NEXT I
590 DB2 = 0

600 REM TRAINING LOOP
610 PRINT "TRAINING NEURAL NETWORK..."
620 PRINT "EP","ER"
630 FOR E = 1 TO 5000
640 ER = 0
650 FOR P = 0 TO 3
660 GOSUB 1000 : REM FORWARD PASS
670 GOSUB 2000 : REM BACKWARD PASS
680 ER = ER + ABS(O-Y(P))
690 NEXT P
700 IF (E/10) = INT(E/10) THEN PRINT E,ER
710 IF ER < 0.1 THEN E = 5000
720 NEXT E

800 REM TEST NETWORK
810 PRINT "TESTING NETWORK:"
820 FOR P = 0 TO 3
830 GOSUB 1000 : REM FORWARD PASS
840 PRINT X(P,0);X(P,1);"->"; INT(O+0.5);" (";O;")"
850 NEXT P
860 END

1000 REM FORWARD PASS SUBROUTINE
1010 REM CALCULATE HIDDEN LAYER
1020 FOR I = 0 TO 1
1030 S = 0
1040 FOR J = 0 TO 1
1050 S = S + X(P,J) * W1(J,I)
1060 NEXT J
1070 S = S + B1(I)
1080 H(I) = 1/(1+EXP(-S))
1090 NEXT I
1100 REM CALCULATE OUTPUT
1110 S = 0
1120 FOR I = 0 TO 1
1130 S = S + H(I) * W2(I)
1140 NEXT I
1150 S = S + B2
1160 O = 1/(1+EXP(-S))
1170 RETURN

2000 REM BACKWARD PASS SUBROUTINE
2010 REM OUTPUT LAYER ERROR
2020 DO = (Y(P)-O) * O * (1-O)
2030 REM UPDATE OUTPUT WEIGHTS WITH MOMENTUM
2040 FOR I = 0 TO 1
2050 DW = LR * DO * H(I)
2060 W2(I) = W2(I) + DW + M * D2(I)
2070 D2(I) = DW
2080 NEXT I
2090 DW = LR * DO
2100 B2 = B2 + DW + M * DB2
2110 DB2 = DW
2120 REM HIDDEN LAYER ERROR AND WEIGHT UPDATE
2130 FOR I = 0 TO 1
2140 DH = H(I) * (1-H(I)) * DO * W2(I)
2150 FOR J = 0 TO 1
2160 DW = LR * DH * X(P,J)
2170 W1(J,I) = W1(J,I) + DW + M * D1(J,I)
2180 D1(J,I) = DW
2190 NEXT J
2200 DW = LR * DH
2210 B1(I) = B1(I) + DW + M * DB1(I)
2220 DB1(I) = DW
2230 NEXT I
2240 RETURN

The one proverbial fly in the ointment is that it took about two hours for the network to be trained. The Python implementation? It runs to completion in about a second.
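
For comparison, here is a minimal Python/NumPy sketch of the same approach: two sigmoid hidden units trained with per-pattern backpropagation and momentum. It is not the LLM’s actual reference implementation, just a reconstruction of the algorithm that the BASIC program implements.

import numpy as np

rng = np.random.default_rng()

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # input patterns
Y = np.array([0., 1., 1., 0.])                          # expected outputs

W1 = rng.uniform(-0.5, 0.5, (2, 2))       # input-to-hidden weights
b1 = rng.uniform(-0.5, 0.5, 2)            # hidden-layer biases
W2 = rng.uniform(-0.5, 0.5, 2)            # hidden-to-output weights
b2 = rng.uniform(-0.5, 0.5)               # output bias
dW1, db1 = np.zeros((2, 2)), np.zeros(2)  # previous deltas, for momentum
dW2, db2 = np.zeros(2), 0.0
LR, M = 0.5, 0.9                          # learning rate and momentum

def sig(s):
    return 1.0 / (1.0 + np.exp(-s))

for epoch in range(5000):
    err = 0.0
    for x, y in zip(X, Y):
        h = sig(x @ W1 + b1)              # forward pass: hidden layer
        o = sig(h @ W2 + b2)              # forward pass: output
        do = (y - o) * o * (1.0 - o)      # output error term
        dh = h * (1.0 - h) * do * W2      # hidden error terms
        d = LR * do * h;          W2 += d + M * dW2; dW2 = d
        d = LR * do;              b2 += d + M * db2; db2 = d
        d = LR * np.outer(x, dh); W1 += d + M * dW1; dW1 = d
        d = LR * dh;              b1 += d + M * db1; db1 = d
        err += abs(o - y)
    if err < 0.1:
        break

print("TESTING NETWORK:")
for x in X:
    o = sig(sig(x @ W1 + b1) @ W2 + b2)
    print(x, "->", round(float(o)), f"({o:.4f})")

On a modern machine this version does indeed run to completion in about a second, which puts the C64’s two hours in perspective.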

Posted at 12:45 am
May 3, 2025

When I first saw this as a screen capture, I honestly thought it was a fake. How can this possibly be real? The official White House account on Twitter, publishing a photoshopped image showing President Donald J. Trump dressed up as the Pope?

But no. Holy trumpeting macaroni, no. The world has gone completely bonkers and yes, the official Twitter account of the executive branch of the government of the United States of America is promoting a photoshopped image showing their President (who, as far as I know, is not even a Catholic, so technically I’d have more legitimacy as Pope than he does) dressed up as the Holy Father of the Roman Catholic Church.

Posted at 4:42 pm
May 3, 2025

The other night, I had a lengthy conversation with ChatGPT in which I described ChatGPT and its LLM cousins as abominations. ChatGPT actually found my characterization appropriate and relevant. So I asked it to distill the essence of this conversation into a first-person account.

The title was picked by ChatGPT. I left the text unaltered.

Posted at 2:32 pm
May 3, 2025

I have to admit, when I first saw this, I thought it was fake news.

No, not that Trump thinks he can be the next Pope, but that Graham is endorsing him. I mean, Graham is many things, but he’s not stupid.

So… is he mocking or trolling Trump? Or has he fully bought into the Trump cult himself?

Either way, it is a surreal sign of the surreal times in which we live, here, in the year 2025 of the Christian era.

Posted at 1:58 am
May 2, 2025

The Adolescence of P-1 is a somewhat dated, yet surprisingly prescient 1977 novel about the emergence of AI in a disembodied form on global computer networks.

The other day, I was reminded of this story as I chatted with ChatGPT about one of my own software experiments from 1982, a PASCAL simulation of a proposed parallel processor architecture. The architecture was not practical, but it was a fun software experiment nonetheless.

I showed the code, in all of its 700-line glory, to ChatGPT. When, in its response, ChatGPT used the word “adolescence”, I was reminded of the Thomas Ryan novel and mused about a fictitious connection between my code and P-1. Much to my surprise, ChatGPT volunteered to outline, and then write, a short story. I have to say that I found the result quite brilliant.

Posted at 2:54 pm