Oct 24 2022
 

Oh, moments after posting about not having worthwhile subjects to post about, I suddenly remembered something that I have been meaning to write about for some time: Moore’s law in computing, the idea that the capabilities of computer technology double roughly every 18-24 months.

It has been true for a long while. Gordon Moore made this observation back in 1965, when I was just two years old.

I observed a form of Moore’s law as I was swapping computer hardware over the years. My first major planned upgrade took place in 1992, when I built a really high end desktop computer (it even had a CD-ROM drive!) for many thousands of dollars. Months later, my older desktop machine found a new life as my first ever Linux server, soon to be connected to the Internet using on-demand dial-up.

The new desktop machine I built in ’92 lasted until 1998, when it was time to replace it. For the first time, I now had a computer that could play back DVDs without the help of external hardware. It also had the ability to capture and display video from cable. Ever since, I’ve been watching TV mostly on my computer screen. I watched the disaster unfolding on September 11, 2001 and the tragic end of the space shuttle Columbia on February 1, 2003 on that computer.

Next came 2004, when I executed a planned upgrade of workstation and server, along with some backup hardware. Then, like clockwork, came 2010 and finally 2016, when I built these fine machines, three of them, with really decent but low-power (hence low thermal stress) Xeon CPUs.

And now here we are, in late 2022. More than six years have passed. And these computers do not feel the least bit obsolete. Their processors are fast. Their 32 GB of RAM is more than adequate. Sure, the 1 TB SSDs are SATA, but so what? It’s not like they ever felt slow. Video? The main limitation is not age but simply finding fanless video cards of decent capability, cards that a) make no noise and b) don’t become a maintenance nightmare with dust-clogged fans.

I don’t feel like upgrading at all. Would feel like a waste of money. The only concern I have is that my server runs a still supported, but soon-to-be-obsoleted version of CentOS Linux. My workstation runs Windows 10 but support won’t be an issue there for quite a while.

And then there are the aging SSDs. Perfectly healthy as far as I can tell but should I risk relying on them after more than 6 years? Even high-end SSDs are becoming dirt cheap nowadays, so perhaps it’s time to make a small investment and upgrade?

Moore’s Law was originally about transistor counts, and transistor counts continue to rise. But transistor counts mean nothing unless you’re interested in counting transistors. Things that have meaning include execution speed, memory capacity, bandwidth, etc. And on all these fronts, the hardware that I built back in 2016 does not feel obsolete or limiting. In fact, when I look at what I would presently buy to build new machines, quite surprisingly, the specs would differ only marginally from my six-year-old hardware. Prices aren’t that different either. So then, what’s the point, so long as the old hardware remains reliable?

 Posted by at 8:10 pm
Jul 28 2022
 

So I’ve been playing this cyberpunk cat game (how could I possibly resist? The protagonist is a cat. I am quite fond of cats. And the game is set in a post-apocalyptic dystopia, my favorite genre, so to speak.)

But first…

* * * Spoiler alert! * * *

As I said, I was playing Stray. Beautiful game. The visuals are stunning, the story is engaging (reminds me of the quality of writing that went into the classic Infocom text adventure games in the early 1980s) and the cat is an orange tabby that looks and behaves just like our Freddy. What more can I ask for?

But then I realized that the story of Stray is incredibly sad. Even the ending can at best be described as bittersweet.

Because… because for starters, in Stray there are no humans. Only robots, which very obviously look like robots, with display screens for faces showing cute emoticons.

The reason why there are only robots has to do with humans, and something unspeakably evil that these humans must have done in the distant past. The result: A walled city (“safest walled city on Earth!”) devoid of human inhabitants, infested with evolved trash-eating bacteria that now eat cats and robots both, and inhabited by kind, naive, incredibly gentle, almost innocent robots, former Companions, cleaning and maintenance staff who have become somewhat self-aware, mimicking the behavior of their former masters.

A few of these robots dream of the Outside, which is where the cat protagonist comes from, after falling off a broken pipe. His drone buddy, who turns out to carry the consciousness of a human (quite possibly the very last human), helps him navigate the dangers and eventually open up the city. The drone does so at the cost of his own life.

When the game ends, the cat is free, again walking under a blue sky chasing a butterfly. And this cat may very well be the last representative of our once great civilization. Because the robots do not form a functioning society. They go through the motions, sure, even running, rather pointlessly, barbershops and bars with robots for customers. They are so innocent, they are almost completely free of malice (apart from a few security robots and their drones) and they are incredibly polite: “What will it be today, little sir?” asks the robot bartender of the aforementioned bar, “Our world must seem gigantic from your little eyes. Wish I could be as tiny as you, so I could explore new hidden places.”

Yet their society is non-functional. They don’t make things, they just make use of the leftover remnants of a collapsed civilization.

The world of Stray, then, is more depressing than the various Wastelands of the Fallout game franchise. At least in the Wastelands, humans survive. Sure, the societies that emerge are often evil (the Enclave, the Institute) yet they present a path towards a better future. But the world of Stray, as far as humans are concerned, is irreversibly dead (unless a sequel introduces us to surviving enclaves of humans, but I sure hope that won’t happen, as it would ruin a great, if depressing, story.)

Hence my sense of melancholy when I was ultimately successful in opening up the city, at the cost of losing my last NPC companion, the drone B-12. While it was hidden behind its impenetrable walls, the city of Stray preserved at least an echo, an image of the civilization that created it. Now that the city is open, what is going to happen as the robots disperse? What remains (other than lovely colonies of feral cats) after the last robot’s power supply runs out or the robot suffers some irreparable damage?

Not much, I think. The little eyes of Stray, the cat, may very well end up as the final witness to that echo of our existence.

 Posted by at 9:41 pm
Jun 16 2022
 

Several of my friends asked for my opinion concerning the news earlier this week about a Google engineer who was placed on paid leave after claiming that a Google chatbot had achieved sentience.

Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.

But no, I don’t think the chatbot achieved sentience.

We have known since the early days of ELIZA how surprisingly easy it is even for a very simplistic algorithm to come close to beating the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts that are designed to make it respond to in-game events. But never even mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we kick it accidentally while walking around barefoot, hitting our little toe.

So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.

Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.

I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.

Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emerging phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.

Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.

 Posted by at 7:45 pm
Jun 02 2022
 

I have a color laser printer that I purchased 16 years ago. (Scary.)

It is a Konica-Minolta Magicolor 2450. Its print quality is quite nice. But it is horribly noisy, and its mechanical reliability has never been great. It was only a few months old when it first failed, simply because an internal part got unlatched. (I was able to fix it and thus avoid the difficulties associated with having to ship something back that weighs at least what, 20 kilos or more?)

Since then, it has had a variety of mechanical issues but, as it turned out, essentially all of them related to solenoids that actuate mechanical parts.

When I first diagnosed this problem (yes, having a service manual certainly helped), what I noticed was that the actuated part landed on another metal part that had a soft plastic pad attached. I checked online but the purpose of these plastic pads was unclear. Perhaps to reduce noise? Well, it’s a noisy beast anyway, a few more clickety-click sounds do not make a difference. The problem was that these plastic pads liquefied over time, becoming sticky, and that caused a delay in the solenoid actuation, leading to the problems I encountered.

Or so I thought. More recently, the printer crapped out again and I figured I’d try my luck with the screwdriver one more time before I banish the poor thing to the landfill. This time around, I completely removed one of the suspect solenoids and tested it on my workbench. And that’s when it dawned on me.

The sticky pad was not there to reduce noise. It was there to eliminate contact, to provide a gap between two ferrous metal parts, which, when the solenoid is energized, themselves become magnetic and would otherwise stick together. In other words, these pads were essential to the printer’s operation.

Inelegant, I know, but I just used some sticky tape to fashion new pads. I reassembled the printer and presto: it was working like new!

Except for its duplexer. But that, too, had a solenoid in it, I remembered. So just moments ago I took the duplexer apart and performed the same surgery. I appear to have been successful: the printer now prints on both sides of a sheet without trouble.

I don’t know how long my repairs will last, but I am glad this thing has some useful life left instead of contributing to the growing piles of hazardous waste that poison our planet.

 Posted by at 1:03 pm
Apr 27 2022
 

Someone reminded me that 40 years ago, when we developed games for the Commodore-64, there were no GPUs. That 8-bit CPUs did not even have a machine instruction for multiplication. And they were dreadfully slow.

Therefore, it was essential to use fast and efficient algorithms for graphics primitives.

One such primitive is Bresenham’s circle algorithm, although back then I didn’t know it had a name beyond being called a forward differences algorithm. It’s a wonderful, powerful example of an algorithm that produces a circle relying only on integer addition and bitwise shifts; never mind floating point, it doesn’t even need multiplication!

Here’s a C-language implementation for an R=20 circle (implemented in this case as a character map just for demonstration purposes):

#include <stdio.h>
#include <string.h>

#define R 20

int main(void)
{
    int x, y, d, dA, dB;
    int i;
    char B[2*R+1][2*R+2];              /* character map: 2R+1 rows, plus a NUL terminator per row */

    memset(B, ' ', sizeof(B));
    for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

    x = 0;                             /* start at the top of the first octant */
    y = R;
    d = 5 - (R<<2);                    /* decision variable, scaled by 4 to stay in integers */
    dA = 12;                           /* increment applied when stepping east (x+1, y) */
    dB = 20 - (R<<3);                  /* increment applied when stepping southeast (x+1, y-1) */
    while (x<=y)
    {
        /* plot the current point in all eight octants at once */
        B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
        B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
        if (d<0)
        {
            d += dA;                   /* midpoint inside the circle: step east */
            dB += 8;
        }
        else
        {
            y--;                       /* midpoint on or outside the circle: step southeast */
            d += dB;
            dB += 16;
        }
        x++;
        dA += 8;
    }

    for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);

    return 0;
}

And the output it produces:

                XXXXXXXXX                
             XXX         XXX             
           XX               XX           
         XX                   XX         
        X                       X        
       X                         X       
      X                           X      
     X                             X     
    X                               X    
   X                                 X   
   X                                 X   
  X                                   X  
  X                                   X  
 X                                     X 
 X                                     X 
 X                                     X 
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
 X                                     X 
 X                                     X 
 X                                     X 
  X                                   X  
  X                                   X  
   X                                 X   
   X                                 X   
    X                               X    
     X                             X     
      X                           X      
       X                         X       
        X                       X        
         XX                   XX         
           XX               XX           
             XXX         XXX             
                XXXXXXXXX                

Don’t tell me it’s not beautiful. And even in machine language, it’s just a few dozen instructions.
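In case you are wondering where the magic constants 5 - 4R, 12 and 20 - 8R come from, here is a quick sketch of the derivation, using the usual textbook midpoint formulation. The algorithm tracks the sign of the circle equation at the midpoint between the two candidate pixels, scaled by 4 to keep everything in integers (x and y below are the values before the step):

    f(x, y) = x^2 + y^2 - R^2
    d       = 4*f(x+1, y-1/2) = 4*(x+1)^2 + (2*y-1)^2 - 4*R^2

    at the start (x = 0, y = R):    d  = 5 - 4*R
    stepping east (x+1, y):         d += 8*x + 12          (that is dA, which grows by 8 per step)
    stepping southeast (x+1, y-1):  d += 8*x - 8*y + 20    (that is dB, which grows by 8 or 16)

Both increments themselves change only by constant amounts from one step to the next, which is exactly why the inner loop needs nothing beyond additions and shifts.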

 Posted by at 1:21 am
Mar 16 2022
 

Time for me to rant a little.

Agile software development. Artificial intelligence. SCRUM. Machine learning. Not a day goes by in our profession without the cognoscenti dropping these and similar buzzwords, hoping to dazzle their audience.

Give me a break, please. You think you are dazzling me but all I see is someone who just rediscovered the wheel.

Let me present two books from my bookshelf. Both were published in Hungary, long before the Iron Curtain came down, back when the country was still part of the technologically backward, relatively underdeveloped “second world” of the socialist bloc.

First, Systems Analysis and Operations Research, by Géza Jándy, published in 1980.

In this book, among other things, Jándy writes (emphasis mine): “Both in systems analysis and in design the […] steps are of an iterative nature […]. Several steps can be done contemporaneously, and if we recognize opportunities for improvement in implementing the plan, some steps may be retraced.”

Sound familiar, Agile folks?

And then, here’s a 1973 (!!!) Hungarian translation of East German author Manfred Peschel’s book, Cybernetic Systems.

A small, unassuming paperback. But right there, the subtitles tell the story: “Automata, optimization, learning and thinking.”

Yes, it’s all there. Machine learning, neural networks, the whole nine yards. What wasn’t available in 1973, of course, was Big Data, the vast repositories of human knowledge that are now present on the Internet, and which machine learning algorithms can rely on for training. And of course hardware is a lot faster, a lot more capable than half a century ago. Nor am I suggesting that we haven’t learned anything in the intervening decades, or that we cannot do things better today than back in the 1970s or 1980s.

But please, try not to sell these ideas as new. Iterative project management has been around since long before computers. The conceptual foundations of machine learning date back to the 1950s. Just because it’s not on the Interwebs doesn’t mean the knowledge doesn’t exist. Go visit a library before you reinvent the wheel.

 Posted by at 1:54 pm
Feb 26 2022
 

This piece of news caught my attention a couple of weeks ago, before Tsar, pardon me, benevolent humble president Putin launched the opening salvo of what may yet prove to be WWIII and the end of civilization. Still, I think it offers insight into just how sick (and, by implication, how bloody dangerous) his regime really is.

We all agree that planning to blow up a major institution, even if it is a much disliked spy agency, is not a good idea. But this is what the evil extremist hardliner Nikita Uvarov was trying to do when he was getting ready to blow up the headquarters of Russia’s FSB, its federal security service.

Oh wait… did I mention that Mr. Uvarov was 14 at the time, and the FSB building he was planning to demolish was, in fact, a virtual version that he himself and his buddies constructed in the online computer game Minecraft?

It didn’t deter Mother Russia’s fearless prosecutors, intent on restoring law and order and maintaining the security of the Russian state. A couple of weeks ago, Mr. Uvarov was sentenced, by a military court no less, to serve five years in a penal colony.

 Posted by at 12:25 am
Dec 21 2021
 

Someone reminded me that 20 years ago, I made an honest-to-goodness attempt to switch to Linux as my primary desktop.

I even managed to get some of the most important Windows programs to run, including Microsoft Office.

I could even watch live TV using my ATI capture card and Linux software. I used this Linux machine to watch the first DVD of The Lord of the Rings.

In the end, though, it was just not worth the trouble. Too many quirks, too much hassle. I preserved the machine as a VM, so I can run it even today (albeit without sound, and of course without video capture.) But it never replaced my Windows workstation.

I just checked and the installed browsers can still see my Web sites… sort of. The old version of Mozilla chokes on my personal Web site but it sees my calculator museum just fine. Konqueror can see both. However, neither of them can cope with modern security protocols so https connections are out.

Funny thing is, it really hasn’t become any easier to set up a really good, functional Linux desktop in the intervening 20 years.

 Posted by at 9:57 pm
Nov 06 2021
 

Machine translation is hard. To accurately translate text from one language to another, context is essential.

Today, I tried a simple example: an attempt to translate two English sentences into my native Hungarian. The English text reads:

An alligator almost clipped his heels. He used an alligator clip to secure his pants.

See what I did here? Alligators and clips in different contexts. So let’s see how Google manages the translation:

Egy aligátor majdnem levágta a sarkát. Aligátorcsipesz segítségével rögzítette a nadrágját.

Translated verbatim back into English, this version says, “An alligator almost cut off his heels. With the help of an ‘alligatorclip’, he secured his pants.”

I put ‘alligatorclip‘ into quotes because the word (“aligátorcsipesz“) does not exist in Hungarian. Google translated the phrase literally, and it failed.

How about Microsoft’s famed Bing translator?

Egy aligátor majdnem levágta a sarkát. Aligátor klipet használt, hogy biztosítsa a nadrágját.

The first sentence is the same, but the second is much worse: Bing fails to translate “clip” and uses the wrong translation of “secure” (here the intended meaning is fasten or tighten, as opposed to guarding from danger or making safe, which is what Bing’s Hungarian version means).

But then, I also tried the DeepL translator, advertising itself as the world’s most accurate translator. Their version:

Egy aligátor majdnem elkapta a sarkát. A nadrágját egy krokodilcsipesszel rögzítette.

And that’s. Just. Perfect. For the first sentence, the translator understood the intended meaning instead of translating “clipped” literally with the wrong choice of verb. As for the second sentence, the translator was aware that an alligator clip is actually a “crocodile clip” in Hungarian and translated it correctly.

And it does make me seriously wonder. If machines are reaching the level of contextual understanding that allows this level of translation quality, how much time do we, humans, have left before we either launch the Butlerian Jihad to get rid of thinking machines for good, or accept becoming a footnote in the evolutionary history of consciousness and intelligence?

Speaking of footnotes, here’s a footnote of sorts: Google does know that an alligator clip is a pince crocodile in French or Krokodilklemme in German. Bing knows about Krokodilklemme but translates the phrase as clip d’alligator into French.

 Posted by at 5:51 pm
Sep 28 2021
 

I began to see this recently. Web sites of dubious lineage, making you wait a few seconds before popping up a request to confirm that you are not a robot, by clicking “Allow”:

Please don’t.

By clicking “allow”, you are simply confirming that you are a gullible, innocent victim who just allowed a scamster to spam you with bogus notifications (and I wouldn’t be surprised if at least some of those notifications were designed to entice you to install software you shouldn’t have or otherwise do something to get yourself scammed.)

Bloody crooks. Yes, I stand by my observation that the overwhelming majority of human beings are decent. But those who aren’t are no longer separated from the rest of us by physical distance. Thanks to the Internet, all the world’s crooks are at your virtual doorstep, aided by their tireless ‘bots.

 Posted by at 2:59 pm
Aug 13 2021
 

I was so busy yesterday, it was only after midnight that I realized the significance of the date.

It was exactly 40 years ago yesterday, on August 12, 1981, that IBM introduced this thing to the world:

Yes, the IBM Model 5150 personal computer, better known simply as the IBM PC.

Little did we know that this machine would change the world. In 1981, it was just one of many competing architectures, each unique, each incompatible with the rest. A program written for the Apple II could not possibly run on a Commodore VIC 20. The Sinclair ZX81 even used a different microprocessor. Between different processors, different graphics chips, different methods of sound generation, different external interfaces, each machine created its own software ecosystem. Programs that were made available for multiple architectures were essentially redeveloped from scratch, with little, if any, shared code between versions (especially since larger, more complex applications were invariably written in machine language for efficient execution).

The PC changed all that but it took a few years for that change to become evident. There were multiple factors that made this possible.

First and foremost among them was IBM’s decision to create a well-documented, open hardware architecture that was not protected by layers and layers of patents. The level of documentation provided by IBM was truly unprecedented in the world of personal computers. An entire series of books was offered, in the traditional binders characteristic of technical documentation of the era:

As to what’s in these volumes, here’s a random page from the XT technical reference manual:

This level of detail made it possible, easy even, for a hardware ecosystem to emerge: first, companies that manufactured novel extension boards for the PC and eventually, “clone” makers who built “IBM compatible” computers using “clean room” functional equivalents, developed by companies like Phoenix Technologies, of the machine’s basic software component, the BIOS (Basic Input Output System).

But the other deciding factor was the fateful decision to allow Microsoft to market their own version of the PC’s operating system, DOS. IBM’s computers came with the IBM branded version called “PC-DOS”, but Microsoft was free to sell their own, “MS-DOS”.

Thus, starting in 1984 or so, the market of IBM compatible computers was born, and it rapidly eclipsed IBM’s own market share.

And amazingly, the architecture that they created 40 years ago is still fundamentally the same architecture that we use today. OK, you may not be able to boot an MS-DOS floppy on a new machine with UEFI Secure Boot enabled, but if the BIOS permits you to turn it off, and you actually have a working floppy drive (or, more likely, a CD-ROM drive with a bootable CD image of the old operating system) you just might be in luck and boot that machine using MS-DOS 2.1, so that you can then run an early version of Lotus 1-2-3 or WordPerfect. (Of course you can run all of that in a DOSBox, but DOSBox is a software emulation of the IBM PC, so that does not really count.)

And while 64-bit versions of Windows no longer run really old 16-bit software without tools such as virtual machines or the aforementioned DOSBox, to their credit Microsoft still makes an effort to maintain robust backward compatibility: This is how I end up using a 24-year-old accounting program to keep track of my personal finances, or Microsoft’s 25-year-old “Bookshelf” product with an excellent, easy-to-use version of the American Heritage Dictionary. (No, I am not averse to change or the use of newer software. But it so happens that these packages work flawlessly, do exactly what I need them to do, and so far I have not come across any replacement that delivers the functionality I need, even if I ignore all the unnecessary bloat.)

So here we are: 40 years. It’s insane. Perhaps it is worth mentioning the original, baseline specifications of the IBM 5150 Personal Computer. It had a 16-bit processor running at 0.00477 GHz. It had approximately 0.000015 gigabytes of RAM. The baseline configuration had no permanent storage, only a cassette tape interface for storing BASIC programs. The version capable of running PC-DOS had four times as much RAM, 0.000061 gigabytes, and external storage in the form of a single-sided, single-density 5.25″ floppy disk drive capable of storing 0.00034 gigabytes of data on a single disk. (Be grateful that I did not use terabytes to describe its capacity.) The computer had no real-time clock (when PC-DOS started, it asked for the time and date). Its monochrome display adapter was text only, capable of showing 25 lines by 80 characters each. Alternatively, the user could opt to purchase a machine equipped with a CGA (color graphics adapter), capable of showing a whopping 16 colors at the resolution of 160 by 100 pixels, or a high-resolution monochrome image at 640 by 200 pixels. Sound was provided through a simple beeper, controlled entirely by software. Optional external interfaces included RS-232 serial and IEEE 1284 parallel ports.

Compare that to the specifications of a cheap smartphone today, 40 years later.

 Posted by at 4:24 pm
Jul 23 2021
 

I just came across an account describing an AI chatbot that I found deeply disturbing.

You see… the chatbot turned out to be a simulation of a young woman, someone’s girlfriend, who passed away years ago at a tragically young age, while waiting for a liver transplant.

Except that she came back to life, in a manner of speaking, as the disembodied personality of an AI chatbot.

Yes, this is an old science-fiction trope. Except that it is not science-fiction anymore. This is our real world, here in the year 2021.

When I say I find the story deeply disturbing, I don’t necessarily mean it disapprovingly. AI is, after all, the future. For all I know, in the distant future AI may be the only way our civilization will survive, long after flesh-and-blood humans are gone.

Even so, this story raises so many questions. The impact on the grieving. The rights of the deceased. And last but not least, at what point does AI become more than just a clever algorithm that can string words together? At what time do we have to begin to worry about the rights of the thinking machines we create?

Hello, all. Welcome to the future.

 Posted by at 4:11 pm
Apr 17 2021
 

Yesterday it was hardware, today it was software.

An e-mail that I sent to a bell.ca address was rejected.

Perhaps I am mistaken but I believe that these Bell/Sympatico mailboxes are managed, handled by Yahoo!. And Yahoo! occasionally made my life difficult by either rejecting mail from my server or dropping it in the recipient’s spam folder. I tried to contact them once, but it was hopeless. Never mind that my domain, vttoth.com, is actually a few months older (July 1, 1994 as opposed to January 18, 1995) than Yahoo!’s and has been continuously owned by a single owner. Never mind that my domain was never used to send spam. Never mind that I get plenty of spam from Yahoo! accounts.

Of course you can’t fight city hall. One thing I can do, instead, is to implement one of the protocols Yahoo wants, the DKIM protocol, to authenticate outgoing e-mail, improving its chances of getting accepted.

But setting it up was a bloody nuisance. So many little traps! In the end, I succeeded, but not before resorting to some rather colorful language.

This little tutorial proved immensely helpful, so helpful in fact that I am going to save its contents, just in case:

https://www.web-workers.ch/index.php/2019/10/21/how-to-configure-dkim-spf-dmarc-on-sendmail-for-multiple-domains-on-centos-7/
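For my own future reference, the end result boils down to roughly the following pieces (greatly simplified here to a single domain, with an example selector, key path and milter port; the real values come from the tutorial above and from generating your own key pair): an OpenDKIM milter hooked into sendmail, plus a DNS TXT record publishing the public key.

/etc/opendkim.conf (excerpt):

    Mode      sv
    Domain    example.com
    Selector  default
    KeyFile   /etc/opendkim/keys/example.com/default.private
    Socket    inet:8891@localhost

sendmail.mc, to pass mail through the milter:

    INPUT_MAIL_FILTER(`opendkim', `S=inet:8891@localhost')dnl

DNS, to publish the public key:

    default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=<public key data>"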

Very well. It is time to return to more glamorous activities. It’s not like I don’t have things to do.

 Posted by at 2:57 pm
Apr 16 2021
 

Working from my home office and running my own equipment (including server equipment) here means that I have some rather mundane tasks to perform. As a one-man band, I am my own IT support, and that includes software, as well as hardware.

The less glamorous part of software support is installing updates and patches, resolving driver conflicts, keeping external equipment such as routers and network switches up to date.

The less glamorous part of hardware support? Mostly it involves dust. Ginormous dust bunnies, that is.

Ever heard of the expression, “rat’s nest”? It is sometimes used to describe the tangle of cables and wires that hide behind a computer. Now imagine a computer to which several USB hubs, ten external hard drives and additional equipment are connected, most of which have their own power supply. Yes, it’s ugly. Especially if those little power bricks are plugged into a haphazardly assembled multitude of cheap power strips.

And dust collects everywhere. Thick, ugly dust, made of human dandruff, cat dandruff, hair (human and cat), fluff from clothing, crumbs from many past meals. Normally, you would just vacuum up this stuff, but you don’t want to disturb the rat’s nest. Plugs can come loose. You might lose data. And even if you don’t, simply finding the plug that came loose can be a royal pain in the proverbial.

Long story short, I’ve had enough. The other day, I ordered the longest power strip I could find on Amazon, with 24 outlets, complete with mounting brackets. And yesterday, I managed to affix it to the underside of my main desk.

Which means that yesterday and today, working my way through the list one piece of equipment at a time, I managed to move all power plugs to this new power strip. As it hangs from the underside of my desk, it’s most importantly not on the floor. So the floor can be (gasp!) cleaned.

And now I even have room to access my workstation’s side panels, if need be. One of these days, I might even be able to vacuum its back, removing years’ worth of dust from its fan grids. But for now, I content myself with the knowledge that I freed up four (!) cheap power strips, a three-outlet extension cable, and a three-outlet plug, all of which were fully in use. What a liberating feeling.

Having spent a fair amount of time today on all fours under my desk, however, did prompt me to mutter, “I am too old for this,” several times this afternoon… especially as I still feel a bit under the weather, an unpleasant aftereffect, no doubt, of the COVID-19 vaccine I received yesterday.

 Posted by at 10:33 pm
Mar 16 2021
 

Somebody just reminded me: Back in 1982-83 a friend of mine and I had an idea and I even spent some time building a simple simulator of it in PASCAL. (This was back in the days when a 699-line piece of PASCAL code was a huuuuge program!)

So it went like this: Operative memory (RAM) and processor are separate entities in a conventional computer. This means that before a computer can do anything, it needs to fetch data from RAM, then after it’s done with that data, it needs to put it back into RAM. The processor can only hold a small amount of data in its internal registers.

This remains true even today; sure, modern processors have a lot of on-chip cache but conceptually, it is still separate RAM, it’s just very fast memory that is also physically closer to the processor core, requiring less time to fetch or store data.

But what if we abandon this concept and do away with the processor altogether? What if instead we make the bytes themselves “smart”?

That is to say, what if, instead of dumb storage elements that can only be used to store data, we had active storage elements that are minimalist processors themselves, capable of performing simple operations but, much more importantly, capable of sending data to any other storage element in the system?

The massive number of required interconnections between storage elements may seem like a show-stopper but here, we can borrow a century-old concept from telephony: the switch. Instead of sending data directly, how about having a crossbar-like interconnect? Its capacity would be finite, of course, but that would work fine so long as most storage elements are not trying to send data at the same time. And possibly (though it can induce a performance penalty) we could have a hierarchical system: again, that’s the way large telephone networks function, with local switches serving smaller geographic areas but interconnected into a regional, national, or nowadays global telephone network.
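Just to make the concept a bit more tangible, here is a toy sketch in C (nothing like the original PASCAL simulator; the cells’ one-instruction “program”, the switch capacity and everything else here is made up purely for illustration): a handful of “smart” storage elements, each holding a value and a standing request to forward it somewhere, connected through a switch that can only deliver a couple of messages per cycle.

#include <stdio.h>

#define CELLS  8    /* number of "smart bytes"                    */
#define PORTS  2    /* messages the switch can carry per cycle    */
#define CYCLES 4

/* Each storage element holds a value and a tiny "program": when it   */
/* receives data, it adds it to its own value; every cycle it raises  */
/* a request to send its current value to one destination cell.       */
typedef struct {
    int value;
    int dest;       /* where this cell wants to send; -1 would mean idle */
} cell;

typedef struct { int to, payload; } message;

int main(void)
{
    cell c[CELLS];
    message queue[CELLS];   /* send requests raised in the current cycle */
    int i, t, requests, delivered;

    /* initialize: cell i holds value i and wants to send to cell i+1 */
    for (i = 0; i < CELLS; i++) { c[i].value = i; c[i].dest = (i + 1) % CELLS; }

    for (t = 0; t < CYCLES; t++)
    {
        /* phase 1: every cell raises its send request */
        requests = 0;
        for (i = 0; i < CELLS; i++)
            if (c[i].dest >= 0)
            {
                queue[requests].to      = c[i].dest;
                queue[requests].payload = c[i].value;
                requests++;
            }

        /* phase 2: the switch delivers at most PORTS messages per cycle; */
        /* the rest are simply dropped here (a real design would queue    */
        /* them or make the sender retry)                                 */
        delivered = requests < PORTS ? requests : PORTS;
        for (i = 0; i < delivered; i++)
            c[queue[i].to].value += queue[i].payload;

        printf("cycle %d: %d requests, %d delivered:", t, requests, delivered);
        for (i = 0; i < CELLS; i++) printf(" %3d", c[i].value);
        printf("\n");
    }
    return 0;
}

Even in this trivial form the programming problem shows up: what the machine does is defined not by a sequence of instructions but by how the state of all the cells evolves from cycle to cycle.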

Well, that was almost 40 years ago. It was a fun idea to explore in software even though we never knew how it might be implemented in hardware. One lesson I learned is that programming such a manifestly parallel computer is very difficult. Instead of thinking about a sequence of operations, you have to think about a sequence of states for the system as a whole. Perhaps this, more than any technical issue, is the real show-stopper; sure, programming can be automated using appropriate tools, compilers and whatnot, but that just might negate any efficiency such a parallel architecture may offer.

Then again, similar ideas have resurfaced in the decades since, sometimes on the network level as massively parallel networks of computers are used in place of conventional supercomputers.


Gotta love the Y2K bug in the header, by the way. Except that it isn’t. Rather, it’s an implementation difference: I believe the PDP-11 PASCAL that we were using represented a date in the format dd-mm-yyyy, as opposed to dd-MMM-yyyy that is used by this modern Pascal-to-C translator. As I only allocated 10 characters to hold the date in my original code, the final digit is omitted. As for the letters "H J" that appear on top, that was just the VT-100 escape sequence to clear the screen, but with the high bit set on ESC for some reason. I am sure it made sense on the terminals that we were using back in 1982, but xterm just prints the characters.

 Posted by at 12:54 pm
Nov 19 2020
 

In recent years, I saw myself mostly as a “centrist liberal”: one who may lean conservative on matters of the economy and state power, but who firmly (very firmly) believes in basic human rights and basic human decency. One who wishes to live in a post-racial society in which your ethnicity or the color of your skin matter no more than the color of your eyes or your hairstyle. A society in which you are judged by the strength of your character. A society in which consenting, loving adults can form families regardless of their gender or sexual orientation. A society that treats heterosexuals and non-heterosexuals alike, without prejudice, without shaming, without rejection. A society in which covert racism no longer affords me “white privilege” while creating invisible barriers to those who come from a different ethnic background.

But then, I read that one of the pressing issues of the day is… the elimination of terms such as “master/slave” or “blacklist/whitelist” from the technical literature and from millions upon millions of lines of software code.

Say what again?

I mean… not too long ago, this was satire. Not too long ago, we laughed when overzealous censors (or was it misguided software?) changed “black-and-white” into “African-American-and-white”. Never did I think that one day, reality would catch up with this Monty Pythonesque insanity.

It is one thing to fight for a post-racial society with gender equality. For a society in which homosexuals, transsexuals and others feel fully appreciated as human beings, just like their conventionally heterosexual neighbors. For a society free of overt or covert discrimination.

It is another thing to seek offense where none was intended. To misappropriate terms that, in the technical literature, NEVER MEANT what you suggest they mean. And then, to top it all off, to intimidate people who do not sing exactly the same song as the politically correct choir.

No, I do not claim the right, the privilege, to tell you what terms you should or should not find offensive. I am simply calling you out on this BS. You know that there is/was nothing racist about blacklisting a spammer’s e-mail address or arranging a pair of flip-flops (the electronic components, not the footwear) in a master/slave circuit. But you are purposefully searching for the use of words like “black” or “slave”, in any context, just to fuel this phony outrage. Enough already!

Do you truly want to fight real racism? Racism that harms people every day, that prevents talented young people from reaching their full potential, racism that still shortens lives and makes lives unduly miserable? Racial discrimination remains real in many parts of the world, including North America. Look no further than indigenous communities here in Canada, or urban ghettos or Native American villages in the United States. And elsewhere in the world? The treatment of the Uyghurs in China, the treatment of many ethnic minorities in Russia, human rights abuses throughout Africa and Asia, rising nationalism and xenophobia in Europe.

But instead of fighting to make the world a better place for those who really are in need, you occupy yourselves with this made-up nonsense. And as a result, you achieve the exact opposite of what you purportedly intend. Do you know why? Well, part of the reason is that decent, well-meaning people in democratic countries now vote against “progressives” because they are fed up with your thought police.

No, I do not wish to offer excuses for the real racists, the bona fide xenophobes, the closet nazis and others who enthusiastically support Trump or other wannabe autocrats elsewhere in the world. But surely, you don’t believe that over 70 million Americans who voted for Donald J. Trump 17 days ago are racist, xenophobic closet nazis?

Because if that’s what you believe, you are no better than the real racists, real xenophobes and real closet nazis. Your view of your fellow citizens is a distorted caricature, a hateful stereotype.

No, many of those who voted for Trump; many of those who voted for Biden but denied Democrats their Senate majority; many of those who voted for Biden but voted Democratic congresspeople out of the US Congress: They did so, in part, because you went too far. You are no longer solving problems. You are creating problems where none exist. Worse yet, through “cancel culture” you are trying to silence your critics.

But perhaps this is exactly what you want. Perpetuate the problem instead of solving it. For what would happen to you in a post-racial society with gender equality and full (and fully respected) LGBTQ rights? You would fade back into obscurity. You’d have to find a real job somewhere. You would no longer be able to present yourself as a respected, progressive “community leader”.

Oh, no, we can’t have that! You are a champion of human rights! You are fighting a neverending fight against white supremacism, white privilege, racism and all that! How dare I question the purity of your heart, your intent?

So you do your darnedest best to create conflict where none exists. There is no better example of this than the emergence of the word “cis” as a pejorative term describing… me, among other people, a heterosexual, white, middle-class male, especially one who happens to have an opinion and is unwilling to hide it. Exactly how you are making the world a better place by “repurposing” a word in this manner even as you fight against long-established terminology in the technical literature that you perceive as racist is beyond me. But I have had enough of this nonsense.

 Posted by at 10:46 pm
Nov 11 2020
 

Did Microsoft just offer me a 14-year old driver as a new update for Windows 10? Oh yes, they did!

But that’s okay… why fix something if it is not broken? Though I do wonder, if it is indeed a 14-year old driver, why was it not part of Windows 10 already? But never mind.

On the plus side, last night Windows 10 performed a feature upgrade along with security updates, and the whole upgrade process finished in well under half an hour; the reboot and installation phase only took a few minutes and so far, as far as I can tell, nothing is broken. Nice.

 Posted by at 12:36 pm
Oct 09 2020
 

So I try to start a piece of software that accesses a classic serial port.

The software locks up. The process becomes unkillable. Because, you know… because. Microsoft has not yet discovered kill -9 I guess.

(Yes, I know that unkillable zombie processes exist under Linux/UNIX, too. But in the past 25 years, I remember exactly one (1) occasion when a Linux process was truly unkillable, hung in a privileged kernel call, and actually required a reboot with no workaround. On Linux, this is considered a bug, not a feature. In contrast, on Windows this is a regular occurrence. Then again, on Linux I have fine-grained control and I can update pretty much everything other than the kernel binary, without ever having to reboot.)

Ok-kay… this means Windows has to be restarted. Fine. At least let me check if there are any Windows updates. Oops… error, because an “update service is shutting down” or whatever.

Oh well, let’s restart. The browser (Edge) will remember my previously opened tabs, right?

After restart, another program tells me that it has an update. Clicking on the update button opens the browser with the download link. Fine. Just checking, in the browser history all my previously opened tabs (lots of them) are still there. Good.

Meanwhile, Windows Update does come to life and tells me that I need to restart my system. Couldn’t you freaking tell me this BEFORE I restarted?

Oh well, might as well… restart #2.

After restart, let’s open the browser. History… and all my previously opened tabs are gone. The only thing the bloody browser remembers is the single tab that contained the download link for that application.

@!##%@#!@. And @#$$!@#$@!$. And all their relatives, too. Live or deceased. And any related deities.

Oh well, let’s restore the bleeping tabs manually; fortunately, I also had most of them opened in Chrome, so I could reopen them, one by one, in Edge. (Maybe there’s a more efficient way of doing this, but I wasn’t going to research that.)

Meanwhile, I also restarted Visual Studio 2019. It told me that it had an update. Having learned from previous experience, I shut down a specific service that was known to interfere with the update. It proved insufficient. When Visual Studio was done updating, it told me that “only one thing” remains: a teeny weeny inconsequential thing, ANOTHER BLOODY RESTART.

Because, ladies and gentlemen, in the fine year of 2020, the great software company Microsoft has not yet found a way to UPDATE A BLEEPING APPLICATION without restarting the WHOLE BLEEPING WORLD. Or at the very least do me a bleeping favor and warn IN ADVANCE that the update may require a restart.

My favorite coffee mug survived, but only just barely. I almost smashed it against the wall.

So here we go… restart #3.

It was nearly two hours ago that I innocently tried to start that program that required access to the serial port. Since then, I probably aged a few years, increased my chances of a stroke and other illnesses related to high blood pressure, barked at my beautiful wife and my cats, almost smashed my favorite mug, lost several browser tabs as well as my history in some xterm windows, and other than that, accomplished ABSOLUTELY NOTHING.

Thanks for nothing, Microsoft.

And I actually like Microsoft. Imagine what I’d be saying if I hated them.

 Posted by at 1:18 pm
Aug 16 2020
 

Moments ago, as I was about to close Microsoft Visio after editing a drawing, I was confronted with the following question:

Do you want to save earth?

Needless to say, I felt morally obliged to click Save.

In case you are wondering, yes, the file was indeed named earth.vsdx.

 Posted by at 10:32 pm