Jun 16, 2022
 

Several of my friends asked for my opinion concerning the news earlier this week about a Google engineer who was placed on paid leave after claiming that a Google chatbot had achieved sentience.

Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.

But no, I don’t think the chatbot achieved sentience.

We have known since the early days of ELIZA how surprisingly easy it is, even for a very simplistic algorithm, to come close to passing the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts designed to make it respond to in-game events. But never mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we accidentally kick it while walking around barefoot, hitting our little toe.

So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.

Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.

I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.

Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emergent phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.

Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.

 Posted by at 7:45 pm
Jun 02, 2022
 

I have a color laser printer that I purchased 16 years ago. (Scary.)

It is a Konica-Minolta Magicolor 2450. Its print quality is quite nice. But it is horribly noisy, and its mechanical reliability has never been great. It was only a few months old when it first failed, simply because an internal part got unlatched. (I was able to fix it and thus avoid the difficulties associated with having to ship something back that weighs at least what, 20 kilos or more?)

Since then, it has had a variety of mechanical issues but, as it turned out, essentially all of them related to solenoids that actuate mechanical parts.

When I first diagnosed this problem (yes, having a service manual certainly helped), what I noticed was that the actuated part landed on another metal part that had a soft plastic pad attached. I checked online but the purpose of these plastic pads was unclear. Perhaps to reduce noise? Well, it’s a noisy beast anyway, a few more clickety-click sounds do not make a difference. The problem was that these plastic pads liquefied over time, becoming sticky, and that caused a delay in the solenoid actuation, leading to the problems I encountered.

Or so I thought. More recently, the printer crapped out again and I figured I’d try my luck with the screwdriver one more time before I banish the poor thing to the landfill. This time around, I completely removed one of the suspect solenoids and tested it on my workbench. And that’s when it dawned on me.

The sticky pad was not there to reduce noise. It was there to eliminate contact, to provide a gap between two ferrous metal parts which, when the solenoid is energized, themselves become magnetic and would otherwise stick together. In other words, these pads were essential to the printer’s operation.

Inelegant, I know, but I just used some sticky tape to fashion new pads. I reassembled the printer and presto: it was working like new!

Except for its duplexer. But that, too, had a solenoid in it, I remembered. So just moments ago I took the duplexer apart and performed the same surgery. I appear to have been successful: the printer now prints on both sides of a sheet without trouble.

I don’t know how long my repairs will last, but I am glad this thing has some useful life left instead of contributing to the growing piles of hazardous waste that poison our planet.

 Posted by at 1:03 pm
Apr 27, 2022
 

Someone reminded me that 40 years ago, when we developed games for the Commodore 64, there were no GPUs; that 8-bit CPUs did not even have a machine instruction for multiplication; and that they were dreadfully slow.

Therefore, it was essential to use fast and efficient algorithms for graphics primitives.

One such primitive is Bresenham’s circle algorithm, although back then I didn’t know it had a name beyond being called a forward-differences algorithm. It’s a wonderful, powerful example of an algorithm that draws a circle relying only on integer addition and bitwise shifts; never mind floating point, it doesn’t even need multiplication!

Here’s a C-language implementation for an R=20 circle (implemented in this case as a character map just for demonstration purposes):

#include <stdio.h>
#include <string.h>

#define R 20

int main(void)
{
    int x, y, d, dA, dB;
    int i;
    char B[2*R+1][2*R+2];   /* character map; the extra column holds the terminating 0 */

    memset(B, ' ', sizeof(B));
    for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

    /* Midpoint circle: the decision variable and its increments are
       scaled by 4 so that everything stays an integer. */
    x = 0;
    y = R;
    d = 5 - (R<<2);     /* 4*(5/4 - R) */
    dA = 12;            /* increment when only x advances: 8x + 12 at x = 0 */
    dB = 20 - (R<<3);   /* increment when y also decreases: 8x - 8y + 20 at x = 0, y = R */
    while (x<=y)
    {
        /* Plot all eight symmetric octants at once. */
        B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
        B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
        if (d<0)
        {
            d += dA;
            dB += 8;
        }
        else
        {
            y--;
            d += dB;
            dB += 16;
        }
        x++;
        dA += 8;
    }

    for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);

    return 0;
}

And the output it produces:

                XXXXXXXXX                
             XXX         XXX             
           XX               XX           
         XX                   XX         
        X                       X        
       X                         X       
      X                           X      
     X                             X     
    X                               X    
   X                                 X   
   X                                 X   
  X                                   X  
  X                                   X  
 X                                     X 
 X                                     X 
 X                                     X 
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
 X                                     X 
 X                                     X 
 X                                     X 
  X                                   X  
  X                                   X  
   X                                 X   
   X                                 X   
    X                               X    
     X                             X     
      X                           X      
       X                         X       
        X                       X        
         XX                   XX         
           XX               XX           
             XXX         XXX             
                XXXXXXXXX                

Don’t tell me it’s not beautiful. And even in machine language, it’s just a few dozen instructions.

 Posted by at 1:21 am
Mar 16, 2022
 

Time for me to rant a little.

Agile software development. Artificial intelligence. SCRUM. Machine learning. Not a day goes by in our profession without the cognoscenti dropping these and similar buzzwords, hoping to dazzle their audience.

Give me a break, please. You think you are dazzling me but all I see is someone who just rediscovered the wheel.

Let me present two books from my bookshelf. Both were published in Hungary, long before the Iron Curtain came down, back when the country was still part of the technologically backward, relatively underdeveloped “second world” of the socialist bloc.

First, Systems Analysis and Operations Research, by Géza Jándy, published in 1980.

In this book, among other things, Jándy writes (emphasis mine): “Both in systems analysis and in design the […] steps are of an iterative nature […]. Several steps can be done contemporaneously, and if we recognize opportunities for improvement in implementing the plan, some steps may be retraced.”

Sounds familiar, Agile folks?

And then, here’s a 1973 (!!!) Hungarian translation of East German author Manfred Peschel’s book, Cybernetic Systems.

A small, unassuming paperback. But right there, the subtitles tell the story: “Automata, optimization, learning and thinking.”

Yes, it’s all there. Machine learning, neural networks, the whole nine yards. What wasn’t available in 1973, of course, was Big Data: the vast repositories of human knowledge that are now present on the Internet, and which machine learning algorithms can rely on for training. And of course hardware is a lot faster, a lot more capable than half a century ago. Nor am I suggesting that we haven’t learned anything in the intervening decades, or that we cannot do things better today than back in the 1970s or 1980s.

But please, try not to sell these ideas as new. Iterative project management was around long before computers. The conceptual foundations of machine learning date back to the 1950s. Just because it’s not on the Interwebs doesn’t mean the knowledge doesn’t exist. Go visit a library before you reinvent the wheel.

 Posted by at 1:54 pm
Feb 26, 2022
 

This piece of news caught my attention a couple of weeks ago, before Tsar, pardon me, benevolent humble president Putin launched the opening salvo of what may yet prove to be WWIII and the end of civilization. Still, I think it offers insight into just how sick (and, by implication, how bloody dangerous) his regime really is.

We all agree that planning to blow up a major institution, even if it is a much disliked spy agency, is not a good idea. But this is what the evil extremist, hardliner Nikita Uvarov was trying to do when he was getting ready to blow up the headquarters of Russia’s FSB, its Federal Security Service.

Oh wait… did I mention that Mr. Uvarov was 14 at the time, and the FSB building he was planning to demolish was, in fact, a virtual version that he himself and his buddies constructed in the online computer game Minecraft?

It didn’t deter Mother Russia’s fearless prosecutors, intent on restoring law and order and maintaining the security of the Russian state. A couple of weeks ago, Mr. Uvarov was sentenced, by a military court no less, to serve five years in a penal colony.

 Posted by at 12:25 am
Dec 21, 2021
 

Someone reminded me that 20 years ago, I made an honest-to-goodness attempt to switch to Linux as my primary desktop.

I even managed to get some of the most important Windows programs to run, including Microsoft Office.

I could even watch live TV using my ATI capture card and Linux software. I used this Linux machine to watch the first DVD of The Lord of the Rings.

In the end, though, it was just not worth the trouble. Too many quirks, too much hassle. I preserved the machine as a VM, so I can run it even today (albeit without sound, and of course without video capture.) But it never replaced my Windows workstation.

I just checked, and the installed browsers can still see my Web sites… sort of. The old version of Mozilla chokes on my personal Web site but it sees my calculator museum just fine. Konqueror can see both. However, neither of them can cope with modern security protocols, so HTTPS connections are out.

Funny thing is, it really hasn’t become any easier to set up a really good, functional Linux desktop in the intervening 20 years.

 Posted by at 9:57 pm
Nov 06, 2021
 

Machine translation is hard. To accurately translate text from one language to another, context is essential.

Today, I tried a simple example: an attempt to translate two English sentences into my native Hungarian. The English text reads:

An alligator almost clipped his heels. He used an alligator clip to secure his pants.

See what I did here? Alligators and clips in different contexts. So let’s see how Google manages the translation:

Egy aligátor majdnem levágta a sarkát. Aligátorcsipesz segítségével rögzítette a nadrágját.

Translated verbatim back into English, this version says, “An alligator almost cut off his heels. With the help of an ‘alligatorclip’, he secured his pants.”

I put ‘alligatorclip’ into quotes because the word (“aligátorcsipesz”) does not exist in Hungarian. Google translated the phrase literally, and it failed.

How about Microsoft’s famed Bing translator?

Egy aligátor majdnem levágta a sarkát. Aligátor klipet használt, hogy biztosítsa a nadrágját.

The first sentence is the same, but the second is much worse: Bing fails to translate “clip” and uses the wrong translation of “secure” (here the intended meaning is fasten or tighten, as opposed to guarding from danger or making safe, which is what Bing’s Hungarian version means).

But then, I also tried the DeepL translator, advertising itself as the world’s most accurate translator. Their version:

Egy aligátor majdnem elkapta a sarkát. A nadrágját egy krokodilcsipesszel rögzítette.

And that’s. Just. Perfect. In the first sentence, the translator understood the intended meaning of “clipped” instead of rendering it literally with the wrong choice of verb. As for the second sentence, it was aware that an alligator clip is actually a “crocodile clip” in Hungarian and translated it correctly.

And it does make me seriously wonder. If machines are reaching the degree of contextual understanding that allows this level of translation quality, how much time do we humans have left before we either launch the Butlerian Jihad to get rid of thinking machines for good, or accept becoming a footnote in the evolutionary history of consciousness and intelligence?

Speaking of footnotes, here’s a footnote of sorts: Google does know that an alligator clip is a pince crocodile in French or Krokodilklemme in German. Bing knows about Krokodilklemme but translates the phrase as clip d’alligator into French.

 Posted by at 5:51 pm
Sep 28, 2021
 

I began to see this recently. Web sites of dubious lineage, making you wait a few seconds before popping up a request to confirm that you are not a robot, by clicking “Allow”:

Please don’t.

By clicking “allow”, you are simply confirming that you are a gullible, innocent victim who just allowed a scamster to spam you with bogus notifications (and I wouldn’t be surprised if at least some of those notifications were designed to entice you to install software you shouldn’t have or otherwise do something to get yourself scammed.)

Bloody crooks. Yes, I stand by my observation that the overwhelming majority of human beings are decent. But those who aren’t are no longer separated from the rest of us by physical distance. Thanks to the Internet, all the world’s crooks are at your virtual doorstep, aided by their tireless ‘bots.

 Posted by at 2:59 pm
Aug 13, 2021
 

I was so busy yesterday, it was only after midnight that I realized the significance of the date.

It was exactly 40 years ago yesterday, on August 12, 1981, that IBM introduced this thing to the world:

Yes, the IBM Model 5150 personal computer, better known simply as the IBM PC.

Little did we know that this machine would change the world. In 1981, it was just one of many competing architectures, each unique, each incompatible with the rest. A program written for the Apple II could not possibly run on a Commodore VIC-20. The Sinclair ZX81 even used a different microprocessor. With different processors, different graphics chips, different methods of sound generation, and different external interfaces, each machine created its own software ecosystem. Programs that were made available for multiple architectures were essentially redeveloped from scratch, with little, if any, shared code between versions (especially since larger, more complex applications were invariably written in machine language for efficient execution).

The PC changed all that but it took a few years for that change to become evident. There were multiple factors that made this possible.

First and foremost among them was IBM’s decision to create a well-documented, open hardware architecture that was not protected by layers and layers of patents. The level of documentation provided by IBM was truly unprecedented in the world of personal computers. An entire series of books was offered, in the traditional binders characteristic of technical documentation of the era:

As to what’s in these volumes, here’s a random page from the XT technical reference manual:

This level of detail made it possible, easy even, for a hardware ecosystem to emerge: first, companies that manufactured novel expansion boards for the PC, and eventually, “clone” makers who built “IBM compatible” computers using “clean room” functional equivalents of the machine’s basic software component, the BIOS (Basic Input Output System), developed by companies like Phoenix Technologies.

But the other deciding factor was the fateful decision to allow Microsoft to market their own version of the PC’s operating system, DOS. IBM’s computers came with the IBM branded version called “PC-DOS”, but Microsoft was free to sell their own, “MS-DOS”.

Thus, starting in 1984 or so, the market of IBM compatible computers was born, and it rapidly eclipsed IBM’s own market share.

And amazingly, the architecture that they created 40 years ago is still fundamentally the same architecture that we use today. OK, you may not be able to boot an MS-DOS floppy on a new machine with UEFI Secure Boot enabled, but if the BIOS permits you to turn it off, and you actually have a working floppy drive (or, more likely, a CD-ROM drive with a bootable CD image of the old operating system) you just might be in luck and boot that machine using MS-DOS 2.1, so that you can then run an early version of Lotus 1-2-3 or WordPerfect. (Of course you can run all of that in a DOSBox, but DOSBox is a software emulation of the IBM PC, so that does not really count.)

And while 64-bit versions of Windows no longer run really old 16-bit software without tools such as virtual machines or the aforementioned DOSBox, to their credit Microsoft still makes an effort to maintain robust backward compatibility: This is how I end up using a 24-year-old accounting program to keep track of my personal finances, or Microsoft’s 25-year-old “Bookshelf” product with an excellent, easy-to-use version of the American Heritage Dictionary. (No, I am not averse to change or the use of newer software. But it so happens that these packages work flawlessly, do exactly what I need them to do, and so far I have not come across any replacement that delivers the functionality I need, even if I ignore all the unnecessary bloat.)

So here we are: 40 years. It’s insane. Perhaps it is worth mentioning the original, baseline specifications of the IBM 5150 Personal Computer. It had a 16-bit processor running at 0.00477 GHz. It had approximately 0.000015 gigabytes of RAM. The baseline configuration had no permanent storage, only a cassette tape interface for storing BASIC programs. The version capable of running PC-DOS had four times as much RAM, 0.000061 gigabytes, and external storage in the form of a single-sided, single-density 5.25″ floppy disk drive capable of storing 0.00034 gigabytes of data on a single disk. (Be grateful that I did not use terabytes to describe its capacity.) The computer had no real-time clock (when PC-DOS started, it asked for the time and date). Its monochrome display adapter was text only, capable of showing 25 lines of 80 characters each. Alternatively, the user could opt to purchase a machine equipped with a CGA (color graphics adapter), capable of showing a whopping 16 colors at a resolution of 160 by 100 pixels, or a high-resolution monochrome image at 640 by 200 pixels. Sound was provided through a simple beeper, controlled entirely by software. Optional external interfaces included RS-232 serial and IEEE 1284 parallel ports.

Compare that to the specifications of a cheap smartphone today, 40 years later.

 Posted by at 4:24 pm
Jul 23, 2021
 

I just came across an account describing an AI chatbot that I found deeply disturbing.

You see… the chatbot turned out to be a simulation of a young woman, someone’s girlfriend, who passed away years ago at a tragically young age, while waiting for a liver transplant.

Except that she came back to life, in a manner of speaking, as the disembodied personality of an AI chatbot.

Yes, this is an old science-fiction trope. Except that it is not science-fiction anymore. This is our real world, here in the year 2021.

When I say I find the story deeply disturbing, I don’t necessarily mean it disapprovingly. AI is, after all, the future. For all I know, in the distant future AI may be the only way our civilization will survive, long after flesh-and-blood humans are gone.

Even so, this story raises so many questions. The impact on the grieving. The rights of the deceased. And last but not least, at what point does AI become more than just a clever algorithm that can string words together? At what time do we have to begin to worry about the rights of the thinking machines we create?

Hello, all. Welcome to the future.

 Posted by at 4:11 pm
Apr 17, 2021
 

Yesterday it was hardware, today it was software.

An e-mail that I sent to a bell.ca address was rejected.

Perhaps I am mistaken, but I believe that these Bell/Sympatico mailboxes are managed by Yahoo!. And Yahoo! has occasionally made my life difficult by either rejecting mail from my server or dropping it in the recipient’s spam folder. I tried to contact them once, but it was hopeless. Never mind that my domain, vttoth.com, is actually a few months older (July 1, 1994 as opposed to January 18, 1995) than Yahoo!’s and has been continuously owned by a single owner. Never mind that my domain was never used to send spam. Never mind that I get plenty of spam from Yahoo! accounts.

Of course you can’t fight city hall. One thing I can do, instead, is to implement one of the protocols Yahoo wants, the DKIM protocol, to authenticate outgoing e-mail, improving its chances of getting accepted.

But setting it up was a bloody nuisance. So many little traps! In the end, I succeeded, but not before resorting to some rather colorful language.

This little tutorial proved immensely helpful, so helpful in fact that I am going to save its contents, just in case:

https://www.web-workers.ch/index.php/2019/10/21/how-to-configure-dkim-spf-dmarc-on-sendmail-for-multiple-domains-on-centos-7/
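
Since I am saving the tutorial anyway, here is, very roughly, the shape of the setup it describes for OpenDKIM with sendmail. The selector name, domain, paths and port below are illustrative placeholders and common defaults, not necessarily what my server actually uses:

# Generate a signing key pair; the selector name ("mail") is arbitrary:
#   opendkim-genkey -s mail -d example.com
# This produces mail.private (the private key) and mail.txt (the DNS record).

# Publish the public key as a DNS TXT record:
#   mail._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0..."

# Minimal /etc/opendkim.conf entries for signing a single domain:
Domain      example.com
Selector    mail
KeyFile     /etc/opendkim/keys/mail.private
Socket      inet:8891@localhost

# Hook the milter into sendmail.mc, then rebuild sendmail.cf and restart
# both services:
#   INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')dnl

A sketch like this of course glosses over the little traps; that is exactly what the tutorial is for.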

Very well. It is time to return to more glamorous activities. It’s not like I don’t have things to do.

 Posted by at 2:57 pm
Apr 16, 2021
 

Working from my home office and running my own equipment (including server equipment) here means that I have some rather mundane tasks to perform. As a one-man band, I am my own IT support, and that includes software, as well as hardware.

The less glamorous part of software support is installing updates and patches, resolving driver conflicts, keeping external equipment such as routers and network switches up to date.

The less glamorous part of hardware support? Mostly it involves dust. Ginormous dust bunnies, that is.

Ever heard of the expression, “rat’s nest”? It is sometimes used to describe the tangle of cables and wires that hide behind a computer. Now imagine a computer to which several USB hubs, ten external hard drives and additional equipment are connected, most of which have their own power supply. Yes, it’s ugly. Especially if those little power bricks are plugged into a haphazardly assembled multitude of cheap power strips.

And dust collects everywhere. Thick, ugly dust, made of human dandruff, cat dandruff, hair (human and cat), fluff from clothing, crumbs from many past meals. Normally, you would just vacuum up this stuff, but you don’t want to disturb the rat’s nest. Plugs can come loose. You might lose data. And even if you don’t, simply finding the plug that came loose can be a royal pain in the proverbial.

Long story short, I’ve had enough. The other day, I ordered the longest power strip I could find on Amazon, with 24 outlets, complete with mounting brackets. And yesterday, I managed to affix it to the underside of my main desk.

Which means that yesterday and today, working my way through the list one piece of equipment at a time, I managed to move all power plugs to this new power strip. As it hangs from the underside of my desk, it’s most importantly not on the floor. So the floor can be (gasp!) cleaned.

And now I even have room to access my workstation’s side panels, if need be. One of these days, I might even be able to vacuum its back, removing years’ worth of dust from its fan grids. But for now, I content myself with the knowledge that I freed up four (!) cheap power strips, a three-outlet extension cable, and a three-outlet plug, all of which were fully in use. What a liberating feeling.

Having spent a fair amount of time today on all fours under my desk, however, did prompt me to mutter, “I am too old for this,” several times this afternoon… especially as I still feel a bit under the weather, an unpleasant aftereffect, no doubt, of the COVID-19 vaccine I received yesterday.

 Posted by at 10:33 pm
Mar 16, 2021
 

Somebody just reminded me: Back in 1982-83 a friend of mine and I had an idea and I even spent some time building a simple simulator of it in PASCAL. (This was back in the days when a 699-line piece of PASCAL code was a huuuuge program!)

So it went like this: Operative memory (RAM) and processor are separate entities in a conventional computer. This means that before a computer can do anything, it needs to fetch data from RAM, then after it’s done with that data, it needs to put it back into RAM. The processor can only hold a small amount of data in its internal registers.

This remains true even today; sure, modern processors have a lot of on-chip cache but conceptually, it is still separate RAM, it’s just very fast memory that is also physically closer to the processor core, requiring less time to fetch or store data.

But what if we abandon this concept and do away with the processor altogether? What if instead we make the bytes themselves “smart”?

That is to say, what if, instead of dumb storage elements that can only be used to store data, we have active storage elements that are minimalist processors themselves, capable of performing simple operations but, much more importantly, capable of sending data to any other storage element in the system?

The massive number of required interconnections between storage elements may appear to be a show-stopper, but here we can borrow a century-old concept from telephony: the switch. Instead of sending data directly, how about having a crossbar-like interconnect? Its capacity will be finite, of course, but that would work fine so long as most storage elements are not trying to send data at the same time. And possibly (though it can induce a performance penalty) we could have a hierarchical system: again, that’s the way large telephone networks function, with local switches serving smaller geographic areas but interconnected into a regional, national, or nowadays global telephone network.
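
Just to make the concept a little more tangible, here is a minimal sketch in C, written for this post; it is not a reconstruction of the original PASCAL simulator, and the cell layout, the per-cycle message limit and the little summation exercise are all invented for illustration. Each “smart byte” stores a value, can perform one trivial operation (here, addition), and can ask the switch to deliver its value to another cell; the switch can only carry a few messages per cycle, so cells may have to wait their turn:

#include <stdio.h>

#define NCELLS   16   /* number of "smart" storage cells              */
#define CAPACITY  4   /* messages the switch can deliver in one cycle */

/* A storage element that is also a minimalist processor: it holds a
   value and, optionally, a pending request to send that value to
   another cell (dest < 0 means the cell is idle). */
typedef struct {
    int value;
    int dest;
} Cell;

int main(void)
{
    Cell cell[NCELLS];
    int i, cycle, delivered, busy;

    /* Every cell holds its own index; cells 1..15 all want to send
       their value to cell 0, which accumulates the sum. */
    for (i = 0; i < NCELLS; i++) {
        cell[i].value = i;
        cell[i].dest = (i == 0) ? -1 : 0;
    }

    /* Run cycles until no cell has anything left to send. */
    for (cycle = 1; ; cycle++) {
        delivered = 0;
        busy = 0;
        for (i = 0; i < NCELLS; i++) {
            if (cell[i].dest < 0) continue;   /* idle cell */
            busy = 1;
            if (delivered < CAPACITY) {
                /* A free slot in the switch: deliver the message; the
                   receiving cell's "operation" is a simple addition. */
                cell[cell[i].dest].value += cell[i].value;
                cell[i].dest = -1;
                delivered++;
            }
            /* Otherwise the cell has to wait for a later cycle. */
        }
        if (!busy) break;
        printf("cycle %d: %d message(s) delivered\n", cycle, delivered);
    }

    printf("cell 0 now holds %d\n", cell[0].value);
    return 0;
}

Even in this toy form, the finite switch capacity shows where congestion comes from, and why a hierarchical arrangement of switches, just like in a telephone network, becomes attractive.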

Well, that was almost 40 years ago. It was a fun idea to explore in software even though we never knew how it might be implemented in hardware. One lesson I learned is that programming such a manifestly parallel computer is very difficult. Instead of thinking about a sequence of operations, you have to think about a sequence of states for the system as a whole. Perhaps this, more than any technical issue, is the real show-stopper; sure, programming can be automated using appropriate tools, compilers and whatnot, but that just might negate any efficiency such a parallel architecture may offer.

Then again, similar ideas have resurfaced in the decades since, sometimes on the network level as massively parallel networks of computers are used in place of conventional supercomputers.


Gotta love the Y2K bug in the header, by the way. Except that it isn’t. Rather, it’s an implementation difference: I believe the PDP-11 PASCAL that we were using represented a date in the format dd-mm-yyyy, as opposed to dd-MMM-yyyy that is used by this modern Pascal-to-C translator. As I only allocated 10 characters to hold the date in my original code, the final digit is omitted. As for the letters "H J" that appear on top, that was just the VT-100 escape sequence to clear the screen, but with the high bit set on ESC for some reason. I am sure it made sense on the terminals that we were using back in 1982, but xterm just prints the characters.

 Posted by at 12:54 pm
Nov 19, 2020
 

In recent years, I saw myself mostly as a “centrist liberal”: one who may lean conservative on matters of the economy and state power, but who firmly (very firmly) believes in basic human rights and basic human decency. One who wishes to live in a post-racial society in which your ethnicity or the color of your skin matter no more than the color of your eyes or your hairstyle. A society in which you are judged by the strength of your character. A society in which consenting, loving adults can form families regardless of their gender or sexual orientation. A society that treats heterosexuals and non-heterosexuals alike, without prejudice, without shaming, without rejection. A society in which covert racism no longer affords me “white privilege” while creating invisible barriers to those who come from a different ethnic background.

But then, I read that one of the pressing issues of the day is… the elimination of terms such as “master/slave” or “blacklist/whitelist” from the technical literature and from millions upon millions of lines of software code.

Say what again?

I mean… not too long ago, this was satire. Not too long ago, we laughed when overzealous censors (or was it misguided software?) changed “black-and-white” into “African-American-and-white”. Never did I think that one day, reality would catch up with this Monty Pythonesque insanity.

It is one thing to fight for a post-racial society with gender equality. For a society in which homosexuals, transsexuals and others feel fully appreciated as human beings, just like their conventionally heterosexual neighbors. For a society free of overt or covert discrimination.

It is another thing to seek offense where none was intended. To misappropriate terms that, in the technical literature, NEVER MEANT what you suggest they mean. And then, to top it all off, to intimidate people who do not sing exactly the same song as the politically correct choir.

No, I do not claim the right, the privilege, to tell you what terms you should or should not find offensive. I am simply calling you out on this BS. You know that there is/was nothing racist about blacklisting a spammer’s e-mail address or arranging a pair of flip-flops (the electronic components, not the footwear) in a master/slave circuit. But you are purposefully searching for the use of words like “black” or “slave”, in any context, just to fuel this phony outrage. Enough already!

Do you truly want to fight real racism? Racism that harms people every day, that prevents talented young people from reaching their full potential, racism that still shortens lives and makes lives unduly miserable? Racial discrimination remains real in many parts of the world, including North America. Look no further than indigenous communities here in Canada, or urban ghettos or Native American villages in the United States. And elsewhere in the world? The treatment of the Uyghurs in China, the treatment of many ethnic minorities in Russia, human rights abuses throughout Africa and Asia, rising nationalism and xenophobia in Europe.

But instead of fighting to make the world a better place for those who really are in need, you occupy yourselves with this made-up nonsense. And as a result, you achieve the exact opposite of what you purportedly intend. Do you know why? Well, part of the reason is that decent, well-meaning people in democratic countries now vote against “progressives” because they are fed up with your thought police.

No, I do not wish to offer excuses for the real racists, the bona fide xenophobes, the closet nazis and others who enthusiastically support Trump or other wannabe autocrats elsewhere in the world. But surely, you don’t believe that over 70 million Americans who voted for Donald J. Trump 17 days ago are racist, xenophobic closet nazis?

Because if that’s what you believe, you are no better than the real racists, real xenophobes and real closet nazis. Your view of your fellow citizens is a distorted caricature, a hateful stereotype.

No, many of those who voted for Trump; many of those who voted for Biden but denied Democrats their Senate majority; many of those who voted for Biden but voted Democratic congresspeople out of the US Congress: They did so, in part, because you went too far. You are no longer solving problems. You are creating problems where none exist. Worse yet, through “cancel culture” you are trying to silence your critics.

But perhaps this is exactly what you want. Perpetuate the problem instead of solving it. For what would happen to you in a post-racial society with gender equality and full (and fully respected) LGBTQ rights? You would fade back into obscurity. You’d have to find a real job somewhere. You would no longer be able to present yourself as a respected, progressive “community leader”.

Oh, no, we can’t have that! You are a champion of human rights! You are fighting a neverending fight against white supremacism, white privilege, racism and all that! How dare I question the purity of your heart, your intent?

So you do your darnedest best to create conflict where none exists. There is no better example of this than the emergence of the word “cis” as a pejorative term describing… me, among other people, a heterosexual, white, middle-class male, especially one who happens to have an opinion and is unwilling to hide it. Exactly how you are making the world a better place by “repurposing” a word in this manner even as you fight against long-established terminology in the technical literature that you perceive as racist is beyond me. But I have had enough of this nonsense.

 Posted by at 10:46 pm
Nov 11, 2020
 

Did Microsoft just offer me a 14-year old driver as a new update for Windows 10? Oh yes, they did!

But that’s okay… why fix something if it is not broken? Though I do wonder, if it is indeed a 14-year old driver, why was it not part of Windows 10 already? But never mind.

On the plus side, last night Windows 10 performed a feature upgrade along with security updates, and the whole upgrade process finished in well under half an hour; the reboot and installation phase only took a few minutes and so far, as far as I can tell, nothing is broken. Nice.

 Posted by at 12:36 pm
Oct 09, 2020
 

So I try to start a piece of software that accesses a classic serial port.

The software locks up. The process becomes unkillable. Because, you know… because. Microsoft has not yet discovered kill -9 I guess.

(Yes, I know that unkillable zombie processes exist under Linux/UNIX, too. But in the past 25 years, I remember exactly one (1) occasion when a Linux process was truly unkillable, hung in a privileged kernel call, and actually required a reboot with no workaround. On Linux, this is considered a bug, not a feature. In contrast, on Windows this is a regular occurrence. Then again, on Linux I have fine-grained control and I can update pretty much everything other than the kernel binary, without ever having to reboot.)

Ok-kay… this means Windows has to be restarted. Fine. At least let me check if there are any Windows updates. Oops… error, because an “update service is shutting down” or whatever.

Oh well, let’s restart. The browser (Edge) will remember my previously opened tabs, right?

After restart, another program tells me that it has an update. Clicking on the update button opens the browser with the download link. Fine. Just checking, in the browser history all my previously opened tabs (lots of them) are still there. Good.

Meanwhile, Windows Update does come to life and tells me that I need to restart my system. Couldn’t you freaking tell me this BEFORE I restarted?

Oh well, might as well… restart #2.

After restart, let’s open the browser. History… and all my previously opened tabs are gone. The only thing the bloody browser remembers is the single tab that contained the download link for that application.

@!##%@#!@. And @#$$!@#$@!$. And all their relatives, too. Live or deceased. And any related deities.

Oh well, let’s restore the bleeping tabs manually; fortunately, I also had most of them opened in Chrome, so I could reopen them, one by one, in Edge. (Maybe there’s a more efficient way of doing this, but I wasn’t going to research that.)

Meanwhile, I also restarted Visual Studio 2019. It told me that it had an update. Having learned from previous experience, I shut down a specific service that was known to interfere with the update. It proved insufficient. When Visual Studio was done updating, it told me that “only one thing” remains: a teeny weeny inconsequential thing, ANOTHER BLOODY RESTART.

Because, ladies and gentlemen, in the fine year of 2020, the great software company Microsoft has not yet found a way to UPDATE A BLEEPING APPLICATION without restarting the WHOLE BLEEPING WORLD. Or at the very least do me a bleeping favor and warn IN ADVANCE that the update may require a restart.

My favorite coffee mug survived, but only just barely. I almost smashed it against the wall.

So here we go… restart #3.

It was nearly two hours ago that I innocently tried to start that program that required access to the serial port. Since then, I probably aged a few years, increased my chances of a stroke and other illnesses related to high blood pressure, barked at my beautiful wife and my cats, almost smashed my favorite mug, lost several browser tabs along with my history in some xterm windows, and other than that, accomplished ABSOLUTELY NOTHING.

Thanks for nothing, Microsoft.

And I actually like Microsoft. Imagine what I’d be saying if I hated them.

 Posted by at 1:18 pm
Aug 16, 2020
 

Moments ago, as I was about to close Microsoft Visio after editing a drawing, I was confronted with the following question:

Do you want to save earth?

Needless to say, I felt morally obliged to click Save.

In case you are wondering, yes, the file was indeed named earth.vsdx.

 Posted by at 10:32 pm
Aug 02, 2020
 

For years now, Microsoft’s support site, answers.microsoft.com, has been annoying the hell out of me.

It’s not that they aren’t trying. Their intentions are, well… how does the saying go about the road to hell?

Take today, for instance, when Windows Update gave me an error message that I had never seen before: “We could not complete the install because an update service was shutting down”. What the bleep? What causes this? How do I fix it?

A quick Google search led me to the aforementioned Microsoft support site. I was, of course, hoping to see a solution there.

A volunteer moderator offers a marginally useful reply: Check if the Windows Update service is set to automatic, and/or try to manually install the update in question.

An independent advisor asks what version of Windows caused the issue.

Another independent advisor, who sounds like a bot, advises several methods: 1) run the Windows Update troubleshooter, 2) download the latest service pack manually, 3) download the latest updates manually, 4) fix file corruption, or 5) in-place upgrade Windows from a DVD.

Needless to say, I wasn’t planning to run manual updates or reinstall Windows. I was simply hoping to see if perhaps there was an explanation of what might have caused this error and any specific recommendations, before taking the one obvious step that was not mentioned by any of the responders: open the Task Manager, click the Services tab, right-click the Windows Update service (wuauserv) and click Restart.

Why is it so hard to, you know, refrain from answering a question unless you actually know the bleeping answer?

Reminds me of why I never really liked unmoderated Usenet newsgroups. At least on answers.microsoft.com, you don’t get answers like “what kind of an idiot still uses Windows” or “you must be a fool, installing updates”.

 Posted by at 7:21 pm
Jul 16, 2020
 

I met Gabor David back in 1982 when I became a member of the team we informally named F451 (inspired by Ray Bradbury of course.) Gabor was a close friend of Ferenc Szatmari. Together, they played an instrumental role in establishing a business relationship between the Hungarian firm Novotrade and its British partner, Andromeda, developing game programs for the Commodore 64.

In the months and years that followed, we spent a lot of time working together. I was proud to enjoy Gabor’s friendship. He was very knowledgeable, and also very committed to our success. We had some stressful times, to be sure, but also a lot of fun, frantic days (and many nights!) spent working together.

I remember Gabor’s deep, loud voice, with a slight speech impediment, a mild case of rhotacism. His face, too, I can recall with almost movie-like clarity.

He loved coffee more than I thought possible. He once dropped by my place, not long after I managed to destroy my coffee maker, a stovetop espresso maker that I accidentally left on the stove for a good half hour. Gabor entered with the words, “Kids, do you have any coffee?” I tried to explain to him that the devil’s brew in that carafe was a bitter, undrinkable (and likely unhealthy) blend of burnt coffee and burnt rubber, but to no avail: he gulped it down like it was nectar.

After I left Hungary in 1986, we remained in sporadic contact. In fact, Gabor helped me with a small loan during my initial few weeks in Austria; for this, I was very grateful.

When I first visited Hungary as a newly minted Canadian citizen, after the collapse of communism there, Gabor was one of the few close friends that I sought out. I was hugely impressed. Gabor was now heading a company called Banknet, an international joint venture bringing business grade satellite-based Internet service to the country.

When our friend Ferenc was diagnosed with lung cancer, Gabor was distraught. He tried to help Feri with financing an unconventional treatment not covered by insurance. I pitched in, too. It was not enough to save Feri’s life: he passed away shortly thereafter, a loss I still feel more than two decades later.

My last conversation with Gabor was distressing. I don’t really remember the details, but I did learn that he suffered a stroke, and that he was worried that he would be placed under some form of guardianship. Soon thereafter, I lost touch; his phone number, as I recall, was disconnected and Gabor vanished.

Every so often, I looked for him on the Internet, on social media, but to no avail. His name is not uncommon, and moreover, as his last name also doubles as a first name for many, searches bring up far too many false positives. But last night, it occurred to me to search for his name and his original profession: “Dávid Gábor” “matematikus” (mathematician).

Jackpot, if it can be called that. One of the first hits that came up was a page from Hungary’s John von Neumann Computer Society, their information technology history forum, to be specific: a short biography of Gabor, together with his picture.

And from this page I learned that Gabor passed away almost six years ago, on November 10, 2014, at the age of 72.

Well… at least I now know. It has been a privilege knowing you, Gabor, and being able to count you among my friends. I learned a lot from you, and I cherish all those times that we spent working together.

 Posted by at 2:04 pm