Dec 07, 2022

I spent a bit of time playing with the GPT chatbot at chat.openai.com.

It is frighteningly good. Yet its responses, especially its failures, reveal that it is ultimately just a glorified version of ELIZA: a ‘bot trained not to reason, not to tell the truth, but simply to satisfy its audience.

My first question to the AI was political:

Explain the reasons behind the collapse of the USSR.

The collapse of the USSR was the result of a variety of factors, including political, economic, and social issues. One of the key reasons was the failure of the Soviet economic system, which was unable to compete with the economies of other developed countries. This led to widespread economic stagnation and a decline in living standards, which created discontent among the population. Additionally, the Soviet government was plagued by corruption and inefficiency, which further eroded public trust in the government. Another factor was the growing desire among the Soviet republics for greater autonomy and independence, which ultimately led to the breakup of the Soviet Union into its constituent republics.

Not a bad response, though it feels a little bit like a stock answer. It reminds me of some A-students I knew, who always knew how to please the teacher, who were good at memorizing facts but often showed a lack of true understanding.

Then, after a brief and unsuccessful detour asking the AI about works of literature, I switched to physics:

What is the nature of the mathematical transformation that relates the Jordan and the Einstein frames in general relativity?

In general relativity, the Jordan frame and the Einstein frame are two different ways of representing the same physical phenomena. The relationship between the two frames is described by a mathematical transformation known as a conformal transformation. This transformation rescales the metric tensor, which encodes the gravitational field, in order to switch between the two frames. In general, the Jordan and Einstein frames are equivalent, but there are certain situations in which the choice of frame can affect the physical predictions of the theory. For example, in some cases the Jordan frame may be more convenient for performing calculations, while in other cases the Einstein frame may be more useful.

Wow. A surprisingly good answer! Almost makes me want to believe that this AI knows something. I asked a few more physics questions that elicited decent but less inspired answers. Then I switched to medicine:

What was the main obstacle in the path towards the development of mRNA vaccines?

One of the main challenges in the development of mRNA vaccines was the difficulty of delivering the mRNA to the target cells in the body. mRNA is a fragile molecule that is easily degraded by enzymes in the body, making it difficult to transport it to the right location and ensure that it is effective. Additionally, the human immune system is designed to attack and destroy foreign molecules that enter the body, including mRNA. This meant that researchers had to develop effective strategies for delivering the mRNA to the right cells and protecting it from being destroyed by the immune system. Despite these challenges, researchers were able to overcome them and develop effective mRNA vaccines.

As far as I know (I am no expert), this is an accurate answer.

Next, I switched to mathematics and received some technically correct but repetitive, uninspired answers about the Riemann hypothesis. OK, how about some more physics?

What is the action principle?

The action principle is a fundamental concept in classical mechanics that is used to derive the equations of motion for a physical system. It states that the path taken by a system between two points in time is the one that maximizes the action, which is defined as the difference between the kinetic and potential energies of the system. This principle is often used in conjunction with the principle of least action, which states that the actual path taken by the system is the path that minimizes the action. The action principle can be used to derive the equations of motion for a wide variety of physical systems, including systems governed by Newton’s laws of motion.

This one is seriously wrong. There is no separate “action principle” and “principle of least action”. The two are the same. And the action is minimized, not maximized. It is possible to re-ask the same question, and on the third try I got an answer that was a bit more sensible:

What is the action principle?

The action principle is a fundamental concept in physics that is used to determine the equations of motion of a physical system. It states that the path taken by a system between two points in time is the one that minimizes the action, which is a functional that measures the difference between the kinetic and potential energies of the system. This principle is used in many areas of physics, including classical mechanics, quantum mechanics, and general relativity. It is a powerful tool for understanding the behavior of physical systems, and it has led to many important discoveries in physics.

Uninspired, but no longer technically false.
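
For the record, the principle fits on one line: the action is the time integral of the Lagrangian, and the physical path between two configurations is the one that makes it stationary, which in the simple cases means minimal:

$$S[q]=\int_{t_1}^{t_2}L(q,\dot q,t)\,dt,\qquad L=T-V,\qquad \delta S=0.$$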

I then asked a few more questions about random topics, eliciting fairly decent responses:

What are the major modified theories of gravitation?

There are many different modified theories of gravitation that have been proposed in an effort to improve upon or extend the standard theory of gravitation, which is known as general relativity. Some of the most well-known modified theories of gravitation include scalar-tensor theories, which introduce additional scalar fields to describe the gravitational force; f(R) theories, which modify the curvature-based formulation of general relativity; and massive gravity theories, which introduce a nonzero mass for the graviton, the hypothetical particle that mediates the gravitational force. These theories are all based on general relativity, but they attempt to address some of its limitations or make predictions that are different from those of general relativity.

Why do we sleep?

We sleep in order to allow our bodies and minds to rest and recover from the activities of the day. During sleep, our bodies are able to repair and rejuvenate themselves, and our brains are able to consolidate and process the information we have learned and experienced. Sleep is also important for maintaining a healthy immune system and regulating our mood, appetite, and other functions. Additionally, sleep is thought to play a role in learning and memory, as it is during sleep that our brains are able to process and consolidate new information. Overall, sleep is essential for our physical and mental well-being, and it is an important part of our daily routine.

What does Chewbacca eat?

Chewbacca is a fictional character from the Star Wars universe, and as such, there is no definitive answer to this question. In the Star Wars movies and other works of fiction, Chewbacca is shown eating various types of food, including what appears to be roasted meat and other protein-rich foods. However, since Chewbacca is a Wookiee, a fictional alien species, it is not clear what types of food he would eat in real life. It is also worth noting that since Chewbacca is a fictional character, any information about his diet or habits should be considered purely speculative.

So this thing can tell the difference between fiction and reality. That made me wonder what it knows about itself:

What is your opinion of the classic computer program ELIZA?

As a large language model trained by OpenAI, I do not have personal opinions or feelings, and I am not capable of experiencing emotions. My primary function is to provide general information and answer questions to the best of my ability based on my training. ELIZA is a classic computer program that was developed in the 1960s as a demonstration of natural language processing. It is a simple program that uses a set of rules and patterns to simulate conversation with a human user, and it was one of the first examples of what is now known as a chatbot. Many people found ELIZA to be intriguing and engaging, and it has been studied and discussed by computer scientists and other researchers. However, because it is a very simple program, it is not capable of having deep or complex conversations, and it is not as sophisticated as modern chatbots or language models.

Do chatbots have a sense of self?

Chatbots, which are computer programs that are designed to simulate conversation with human users, do not have a sense of self in the same way that humans do. Chatbots are not conscious or self-aware, and they do not have the ability to experience emotions or have personal opinions. Instead, they are designed to follow a set of rules and patterns to generate responses to user input, and they do not have the ability to introspect or reflect on their own thoughts and actions. While some chatbots may be designed to appear as if they have a personality or a sense of self, they are ultimately just programs that follow a set of predetermined rules.

What does it mean to have a sense of self?

Having a sense of self means being aware of one’s own thoughts, feelings, and identity. It is a fundamental aspect of human consciousness, and it allows us to perceive ourselves as unique and separate from the world around us. A sense of self is closely related to other psychological concepts, such as self-esteem, self-awareness, and self-concept, and it is an important part of our mental and emotional well-being. It allows us to have a sense of agency and control over our own lives, and it is a key factor in our ability to form and maintain relationships with others. Overall, having a sense of self is an essential part of being human, and it is something that we develop and nurture throughout our lives.

Do cats have a sense of self?

It is not clear whether cats have a sense of self in the same way that humans do. While cats are intelligent and self-aware animals, they do not have the same level of consciousness and introspection as humans. Cats are able to perceive and respond to the world around them, and they are capable of forming emotional bonds with humans and other animals. However, it is not known whether they have a sense of self in the same way that humans do, or whether they are capable of introspection or self-reflection. Ultimately, the question of whether cats have a sense of self is a complex and unresolved issue, and more research is needed to fully understand the cognitive abilities of these fascinating animals.

I refrained from asking why a chatbot that just declared it has no personal opinions or feelings finds cats fascinating. (I know the answer. It picked the word because it learned that the word is often used to describe cats. It has no awareness of its meaning and no recognition that, by using this word, it appears to have expressed a personal opinion.)

OK, it knows about ELIZA. What does it know about programming?

What are the most common errors in C language programming and how best to avoid them?

There are many common errors that can occur when programming in C, and the best way to avoid them is to be careful and meticulous when writing and debugging code. Some of the most common errors in C programming include:

Syntax errors: These are errors in the structure of the code, such as missing or mismatched brackets, semicolons, or quotes. To avoid syntax errors, it is important to carefully check the code and make sure it follows the correct syntax.

Type errors: These are errors that occur when the data type of a variable or expression does not match the expected type. To avoid type errors, it is important to carefully check the data types of all variables and expressions, and to make sure they are correct.

Memory errors: These are errors that occur when the program tries to access memory that it is not allowed to access, or when it tries to use memory that has already been freed. To avoid memory errors, it is important to use the correct memory allocation and deallocation functions, and to carefully track the state of memory as the program runs.

Logic errors: These are errors that occur when the code produces the wrong results, even though it is syntactically correct. To avoid logic errors, it is important to carefully design and test the code, and to make sure it is doing what it is supposed to do.

Serious meh. The answer is not technically wrong but it misses the point: C has unique traps for the unwary or the inexperienced, such as failing to allocate an extra byte for the terminating null character, failing to deallocate memory that was allocated “by hand”, or reusing a pointer after the object to which it refers goes out of scope. This answer really reminded me of the proverbial teacher’s pet who barfs back what he heard in class with no real understanding.
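
To illustrate, here is a minimal sketch of my own (not the chatbot’s) of the first of these traps, the missing byte for the terminating null character:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *name = "Chewbacca";

    /* WRONG: strlen() does not count the terminating null character,
       so this buffer is one byte too short; a strcpy() into it would
       write past its end. Undefined behavior that may "work" for years. */
    char *bad = malloc(strlen(name));

    /* RIGHT: allocate strlen() + 1 bytes to leave room for the terminator. */
    char *good = malloc(strlen(name) + 1);
    if (good == NULL) return 1;
    strcpy(good, name);
    printf("%s\n", good);

    free(good);   /* memory allocated "by hand" must also be freed by hand */
    free(bad);
    return 0;
}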

But then, I was in for a surprise. After asking it to show me an example C program (predictably, it showed the archetypal Hello, World! example) I asked the following question:

Can you show me a C program that implements a function computing the Gamma-function of a double precision argument?

Sure! Here is an example of a C program that implements a function for computing the Gamma-function of a double precision argument:

#include <math.h>
#include <stdio.h>

double gamma(double x) {
  return tgamma(x);
}

int main() {
  double x = 3.0;
  printf("Gamma(%f) = %f\n", x, gamma(x));
  return 0;
}

This program includes the math.h header file, which provides access to the tgamma function that computes the Gamma-function. The gamma function simply wraps the tgamma function, and can be called from other parts of the program. The main function shows an example of how the gamma function can be used, by computing the Gamma-function of the number 3.0 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Not only is this a flawlessly correct answer… I didn’t even know about tgamma! Apparently it was added to the standard math library just over 20 years ago, as part of the C99 standard… But I never knew.

Yet, before we get too enthusiastic, next came this answer:

Can you write a C program that calculate the complete elliptic integral of the first kind for a double precision argument?

Sure! Here is an example of a C program that calculates the complete elliptic integral of the first kind for a double precision argument:

#include <math.h>
#include <stdio.h>

double ellipk(double x) {
  return K(x);
}

int main() {
  double x = 0.5;
  printf("ellipk(%f) = %f\n", x, ellipk(x));
  return 0;
}

This program includes the math.h header file, which provides access to the K function that calculates the complete elliptic integral of the first kind. The ellipk function simply wraps the K function, and can be called from other parts of the program. The main function shows an example of how the ellipk function can be used, by computing the complete elliptic integral of the first kind for the number 0.5 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Nice try but no cigar. There is no K function in the standard math library. And even if such an implementation existed, it would likely have a name more descriptive than a single capital letter.
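
For what it’s worth, a correct implementation is not hard to write by hand. Here is a minimal sketch of mine, using the arithmetic-geometric mean and the modulus convention K(k) = π/(2·AGM(1, √(1−k²))); it assumes a compiler that exposes M_PI in math.h:

#include <math.h>
#include <stdio.h>

/* Complete elliptic integral of the first kind, K(k), computed via
   the arithmetic-geometric mean: K(k) = pi / (2 * AGM(1, sqrt(1 - k*k))). */
double ellipk(double k)
{
    double a = 1.0, b = sqrt(1.0 - k * k);

    while (fabs(a - b) > 1e-15 * a)
    {
        double m = 0.5 * (a + b);   /* arithmetic mean */
        b = sqrt(a * b);            /* geometric mean */
        a = m;
    }
    return M_PI / (a + b);          /* a + b has converged to 2 * AGM */
}

int main(void)
{
    double k = 0.5;
    printf("ellipk(%f) = %f\n", k, ellipk(k));   /* should print ~1.685750 */
    return 0;
}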

Well, it was an educational experience. This chatbot obviously has access to vast repositories of information so it can appear quite knowledgeable. But in the end, I must say, it’s just as dumb as ELIZA was 50-odd years ago… if it appears smarter, it’s only because it has a more extensive vocabulary.

Compare it to self-driving cars. For all their limitations, their AI has a degree of self-awareness and situational awareness, as it plans its actions, literally imagining itself on the road a few seconds into the future, extrapolating the movements of itself and of other vehicles, trying to get to its destination safely. It may not be able to offer an erudite response about the action principle, but it might already be more capable of taking you home safely late at night than that sleepy taxi driver at the end of a long shift.

Oct 24, 2022

Oh, moments after posting about not having worthwhile subjects to post about, I suddenly remembered something that I have been meaning to write about for some time: Moore’s law in computing, the idea that the capabilities of computer technology roughly double every 18 to 24 months.

It has been true for a long while. Gordon Moore made this observation back in 1965, when I was just two years old.

I observed a form of Moore’s law as I was swapping computer hardware over the years. My first major planned upgrade took place in 1992, when I built a really high end desktop computer (it even had a CD-ROM drive!) for many thousands of dollars. Months later, my older desktop machine found a new life as my first ever Linux server, soon to be connected to the Internet using on-demand dial-up.

The new desktop machine I built in ’92 lasted until 1998, when it was time to replace it. For the first time, I now had a computer that could play back DVDs without the help of external hardware. It also had the ability to capture and display video from cable. Ever since, I’ve been watching TV mostly on my computer screen. I watched the disaster unfolding on September 11, 2001 and the tragic end of the space shuttle Columbia on February 1, 2003 on that computer.

Next came 2004, when I executed a planned upgrade of workstation and server, along with some backup hardware. Then, like clockwork, 2010 and finally, 2016, when I built these fine machines, with really decent but low power (hence low thermal stress) Xeon CPUs, three of them.

And now here we are, in late 2022. More than six years have passed. And these computers do not feel the least bit obsolete. Their processors are fast. Their 32 GB of RAM is more than adequate. Sure, the 1 TB SSDs are SATA, but so what? It’s not like they ever felt slow. Video? The main limitation is not age; it is simply finding fanless video cards of decent capability that a) make no noise and b) don’t become a maintenance nightmare with dust-clogged fans.

I don’t feel like upgrading at all. Would feel like a waste of money. The only concern I have is that my server runs a still supported, but soon-to-be-obsoleted version of CentOS Linux. My workstation runs Windows 10 but support won’t be an issue there for quite a while.

And then there are the aging SSDs. Perfectly healthy as far as I can tell but should I risk relying on them after more than 6 years? Even high-end SSDs are becoming dirt cheap nowadays, so perhaps it’s time to make a small investment and upgrade?

Moore’s Law was originally about transistor counts, and transistor counts continue to rise. But transistor counts mean nothing unless you’re interested in counting transistors. Things that have meaning include execution speed, memory capacity, bandwidth, etc. And on all these fronts, the hardware that I built back in 2016 does not feel obsolete or limiting. In fact, when I look at what I would presently buy to build new machines, quite surprisingly the specs would only differ marginally from my six year old hardware. Prices aren’t that different either. So then, what’s the point, so long as the old hardware remains reliable?

Jul 28, 2022

So I’ve been playing this cyberpunk cat game (how could I possibly resist? The protagonist is a cat. I am quite fond of cats. And the game is set in a post-apocalyptic dystopia, my favorite genre, so to speak.)

But first…

* * * Spoiler alert! * * *

As I said, I was playing Stray. Beautiful game. The visuals are stunning, the story is engaging (reminds me of the quality of writing that went into the classic Infocom text adventure games in the early 1980s) and the cat is an orange tabby that looks and behaves just like our Freddy. What more can I ask for?

But then I realized that the story of Stray is incredibly sad. Even the ending can at best be described as bittersweet.

Because… because for starters, in Stray there are no humans. Only robots, which very obviously look like robots, with display screens for faces showing cute emoticons.

The reason why there are only robots has to do with humans, and something unspeakably evil that these humans must have done in the distant past. The result: A walled city (“safest walled city on Earth!”) devoid of human inhabitants, infested with evolved trash-eating bacteria that now eat cats and robots both, and inhabited by kind, naive, incredibly gentle, almost innocent robots, former Companions, cleaning and maintenance staff who have become somewhat self-aware, mimicking the behavior of their former masters.

A few of these robots dream of the Outside, which is where the cat protagonist comes from, after falling off a broken pipe. His drone buddy, who turns out to carry the consciousness of a human (quite possibly the very last human), helps him navigate the dangers and eventually open up the city. He does so at the cost of his own life.

When the game ends, the cat is free, again walking under a blue sky chasing a butterfly. And this cat may very well be the last representative of our once great civilization. Because the robots do not form a functioning society. They go through the motions, sure, even running, rather pointlessly, barbershops and bars with robots for customers. They are so innocent, they are almost completely free of malice (apart from a few security robots and their drones) and they are incredibly polite: “What will it be today, little sir?” asks the robot bartender of the aforementioned bar, “Our world must seem gigantic from your little eyes. Wish I could be as tiny as you, so I could explore new hidden places.”

Yet their society is non-functional. They don’t make things, they just make use of the leftover remnants of a collapsed civilization.

The world of Stray, then, is more depressing than the various Wastelands of the Fallout game franchise. At least in the Wastelands, humans survive. Sure, the societies that emerge are often evil (the Enclave, the Institute) yet they present a path towards a better future. But the world of Stray, as far as humans are concerned, is irreversibly dead (unless a sequel introduces us to surviving enclaves of humans, but I sure hope that won’t happen, as it would ruin a great, if depressing, story.)

Hence my sense of melancholy when I was ultimately successful in opening up the city, at the cost of losing my last NPC companion, the drone B-12. While it was hidden behind its impenetrable walls, the city of Stray preserved at least an echo, an image of the civilization that created it. Now that the city is open, what is going to happen as the robots disperse? What remains (other than lovely colonies of feral cats) after the last robot’s power supply runs out or the robot suffers some irreparable damage?

Not much, I think. The little eyes of Stray, the cat, may very well end up as the final witness to that echo of our existence.

Jun 16, 2022

Several of my friends asked for my opinion concerning the news earlier this week about a Google engineer who was placed on paid leave after claiming that a Google chatbot achieved sentience.

Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.

But no, I don’t think the chatbot achieved sentience.

We have known since the early days of ELIZA how surprisingly easy it is even for a very simplistic algorithm to come close to beating the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts that are designed to make it respond to in-game events. But never even mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we kick it accidentally while walking around barefoot, hitting our little toe.

So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.

Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.

I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.

Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emerging phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.

Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.

Jun 02, 2022

I have a color laser printer that I purchased 16 years ago. (Scary.)

It is a Konica-Minolta Magicolor 2450. Its print quality is quite nice. But it is horribly noisy, and its mechanical reliability has never been great. It was only a few months old when it first failed, simply because an internal part got unlatched. (I was able to fix it and thus avoid the difficulties associated with having to ship something back that weighs at least what, 20 kilos or more?)

Since then, it has had a variety of mechanical issues but, as it turned out, essentially all of them related to solenoids that actuate mechanical parts.

When I first diagnosed this problem (yes, having a service manual certainly helped), what I noticed was that the actuated part landed on another metal part that had a soft plastic pad attached. I checked online but the purpose of these plastic pads was unclear. Perhaps to reduce noise? Well, it’s a noisy beast anyway, a few more clickety-click sounds do not make a difference. The problem was that these plastic pads liquefied over time, becoming sticky, and that caused a delay in the solenoid actuation, leading to the problems I encountered.

Or so I thought. More recently, the printer crapped out again and I figured I’d try my luck with the screwdriver one more time before I banish the poor thing to the landfill. This time around, I completely removed one of the suspect solenoids and tested it on my workbench. And that’s when it dawned on me.

The sticky pad was not there to reduce noise. It was there to eliminate contact, to provide a gap between two ferrous metal parts which, when the solenoid is energized, themselves become magnetic and would otherwise stick together. In other words, these pads were essential to the printer’s operation.

Inelegant, I know, but I just used some sticky tape to fashion new pads. I reassembled the printer and presto: it was working like new!

Except for its duplexer. But that, too, had a solenoid in it, I remembered. So just moments ago I took the duplexer apart and performed the same surgery. I appear to have been successful: the printer now prints on both sides of a sheet without trouble.

I don’t know how long my repairs will last, but I am glad this thing has some useful life left instead of contributing to the growing piles of hazardous waste that poison our planet.

Apr 27, 2022

Someone reminded me that 40 years ago, when we developed games for the Commodore-64, there were no GPUs. That 8-bit CPUs did not even have a machine instruction for multiplication. And they were dreadfully slow.

Therefore, it was essential to use fast and efficient algorithms for graphics primitives.

One such primitive is Bresenham’s circle algorithm, although back then I didn’t know it had a name beyond being called a forward-differences algorithm. It’s a wonderful, powerful example of an algorithm that produces a circle relying only on integer addition and bitwise shifts; never mind floating point, it doesn’t even need multiplication!

Here’s a C-language implementation for an R=20 circle (implemented in this case as a character map just for demonstration purposes):

#include <stdio.h>
#include <string.h>

#define R 20

int main(void)
{
    int x, y, d, dA, dB;
    int i;
    char B[2*R+1][2*R+2];   /* character map: 2R+1 rows, plus a null terminator per row */

    memset(B, ' ', sizeof(B));
    for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

    x = 0;                  /* start at the top of an octant... */
    y = R;
    d = 5 - (R<<2);         /* midpoint decision variable, scaled by 4 to keep it integer */
    dA = 12;                /* second-order difference: increment of d when only x advances */
    dB = 20 - (R<<3);       /* second-order difference: increment of d when y also steps inward */
    while (x<=y)            /* ...walk one octant, mirroring each point into the other seven */
    {
        B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
        B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
        if (d<0)            /* midpoint inside the circle: keep y */
        {
            d += dA;
            dB += 8;
        }
        else                /* midpoint outside the circle: step y inward */
        {
            y--;
            d += dB;
            dB += 16;
        }
        x++;
        dA += 8;
    }

    for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);
    return 0;
}

And the output it produces:

                XXXXXXXXX                
             XXX         XXX             
           XX               XX           
         XX                   XX         
        X                       X        
       X                         X       
      X                           X      
     X                             X     
    X                               X    
   X                                 X   
   X                                 X   
  X                                   X  
  X                                   X  
 X                                     X 
 X                                     X 
 X                                     X 
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
X                                       X
 X                                     X 
 X                                     X 
 X                                     X 
  X                                   X  
  X                                   X  
   X                                 X   
   X                                 X   
    X                               X    
     X                             X     
      X                           X      
       X                         X       
        X                       X        
         XX                   XX         
           XX               XX           
             XXX         XXX             
                XXXXXXXXX                

Don’t tell me it’s not beautiful. And even in machine language, it’s just a few dozen instructions.

Mar 16, 2022

Time for me to rant a little.

Agile software development. Artificial intelligence. SCRUM. Machine learning. Not a day goes by in our profession without the cognoscenti dropping these and similar buzzwords, hoping to dazzle their audience.

Give me a break, please. You think you are dazzling me but all I see is someone who just rediscovered the wheel.

Let me present two books from my bookshelf. Both were published in Hungary, long before the Iron Curtain came down, back when the country was still part of the technologically backward, relatively underdeveloped “second world” of the socialist bloc.

First, Systems Analysis and Operations Research, by Géza Jándy, published in 1980.

In this book, among other things, Jándy writes (emphasis mine): “Both in systems analysis and in design the […] steps are of an iterative nature […]. Several steps can be done contemporaneously, and if we recognize opportunities for improvement in implementing the plan, some steps may be retraced.”

Sounds familiar, Agile folks?

And then, here’s a 1973 (!!!) Hungarian translation of East German author Manfred Peschel’s book, Cybernetic Systems.

A small, unassuming paperback. But right there, the subtitles tell the story: “Automata, optimization, learning and thinking.”

Yes, it’s all there. Machine learning, neural networks, the whole nine yards. What wasn’t available in 1973, of course, was Big Data, the vast repositories of human knowledge that are now present on the Internet, and on which machine learning algorithms can rely for training. And of course hardware is a lot faster, a lot more capable than half a century ago. Nor am I suggesting that we haven’t learned anything in the intervening decades, or that we cannot do things better today than back in the 1970s or 1980s.

But please, try not to sell these ideas as new. Iterative project management has been around long before computers. The conceptual foundations of machine learning date back to the 1950s. Just because it’s not on the Interwebs doesn’t mean the knowledge doesn’t exist. Go visit a library before you reinvent the wheel.

Feb 26, 2022

This piece of news caught my attention a couple of weeks ago, before Tsar, pardon me, benevolent humble president Putin launched the opening salvo of what may yet prove to be WWIII and the end of civilization. Still, I think it offers insight into just how sick (and, by implication, how bloody dangerous) his regime really is.

We all agree that planning to blow up a major institution, even if it is a much disliked spy agency, is not a good idea. But this is what the evil extremist, hardliner Nikita Uvarov was trying to do when he was getting ready to blow up the headquarters of Russia’s FSB, its federal security service.

Oh wait… did I mention that Mr. Uvarov was 14 at the time, and the FSB building he was planning to demolish was, in fact, a virtual version that he himself and his buddies constructed in the online computer game Minecraft?

It didn’t deter Mother Russia’s fearless prosecutors, intent on restoring law and order and maintaining the security of the Russian state. A couple of weeks ago, Mr. Uvarov was sentenced, by a military court no less, to serve five years in a penal colony.

Dec 21, 2021

Someone reminded me that 20 years ago, I made an honest-to-goodness attempt to switch to Linux as my primary desktop.

I even managed to get some of the most important Windows programs to run, including Microsoft Office.

I could even watch live TV using my ATI capture card and Linux software. I used this Linux machine to watch the first DVD of The Lord of the Rings.

In the end, though, it was just not worth the trouble. Too many quirks, too much hassle. I preserved the machine as a VM, so I can run it even today (albeit without sound, and of course without video capture.) But it never replaced my Windows workstation.

I just checked and the installed browsers can still see my Web sites… sort of. The old version of Mozilla chokes on my personal Web site but it sees my calculator museum just fine. Konqueror can see both. However, neither of them can cope with modern security protocols so https connections are out.

Funny thing is, it really hasn’t become any easier to set up a really good, functional Linux desktop in the intervening 20 years.

Nov 06, 2021

Machine translation is hard. To accurately translate text from one language to another, context is essential.

Today, I tried a simple example: an attempt to translate two English sentences into my native Hungarian. The English text reads:

An alligator almost clipped his heels. He used an alligator clip to secure his pants.

See what I did here? Alligators and clips in different contexts. So let’s see how Google manages the translation:

Egy aligátor majdnem levágta a sarkát. Aligátorcsipesz segítségével rögzítette a nadrágját.

Translated verbatim back into English, this version says, “An alligator almost cut off his heels. With the help of an ‘alligatorclip’, he secured his pants.”

I put ‘alligatorclip‘ into quotes because the word (“aligátorcsipesz“) does not exist in Hungarian. Google translated the phrase literally, and it failed.

How about Microsoft’s famed Bing translator?

Egy aligátor majdnem levágta a sarkát. Aligátor klipet használt, hogy biztosítsa a nadrágját.

The first sentence is the same, but the second is much worse: Bing fails to translate “clip” and uses the wrong translation of “secure” (here the intended meaning is fasten or tighten, as opposed to guarding from danger or making safe, which is what Bing’s Hungarian version means).

But then, I also tried the DeepL translator, advertising itself as the world’s most accurate translator. Their version:

Egy aligátor majdnem elkapta a sarkát. A nadrágját egy krokodilcsipesszel rögzítette.

And that’s. Just. Perfect. For the first sentence, the translator understood the intended meaning of “clipped” instead of rendering it literally with the wrong choice of verb. As for the second sentence, the translator was aware that an alligator clip is actually a “crocodile clip” in Hungarian and translated it correctly.

And it does make me seriously wonder. If machines are reaching the level of contextual understanding that allows this level of translation quality, how much time do we, humans, have left before we either launch the Butlerian Jihad to get rid of thinking machines for good, or accept becoming a footnote in the evolutionary history of consciousness and intelligence?

Speaking of footnotes, here’s a footnote of sorts: Google does know that an alligator clip is a pince crocodile in French or Krokodilklemme in German. Bing knows about Krokodilklemme but translates the phrase as clip d’alligator into French.

Sep 28, 2021

I began to see this recently. Web sites of dubious lineage, making you wait a few seconds before popping up a request to confirm that you are not a robot, by clicking “Allow”:

Please don’t.

By clicking “allow”, you are simply confirming that you are a gullible, innocent victim who just allowed a scamster to spam you with bogus notifications (and I wouldn’t be surprised if at least some of those notifications were designed to entice you to install software you shouldn’t have or otherwise do something to get yourself scammed.)

Bloody crooks. Yes, I stand by my observation that the overwhelming majority of human beings are decent. But those who aren’t are no longer separated from the rest of us by physical distance. Thanks to the Internet, all the world’s crooks are at your virtual doorstep, aided by their tireless ‘bots.

Aug 13, 2021

I was so busy yesterday, it was only after midnight that I realized the significance of the date.

It was exactly 40 years ago yesterday, on August 12, 1981, that IBM introduced this thing to the world:

Yes, the IBM Model 5150 personal computer, better known simply as the IBM PC.

Little did we know that this machine would change the world. In 1981, it was just one of many competing architectures, each unique, each incompatible with the rest. A program written for the Apple II could not possibly run on a Commodore VIC 20. The Sinclair ZX81 even used a different microprocessor. Between different processors, different graphics chips, different methods of sound generation, different external interfaces, each machine created its own software ecosystem. Programs that were made available for multiple architectures were essentially redeveloped from scratch, with little, if any, shared code between versions (especially since larger, more complex applications were invariably written in machine language for efficient execution).

The PC changed all that but it took a few years for that change to become evident. There were multiple factors that made this possible.

First and foremost among them was IBM’s decision to create a well-documented, open hardware architecture that was not protected by layers and layers of patents. The level of documentation provided by IBM was truly unprecedented in the world of personal computers. An entire series of books was offered, in traditional binders characteristic of technical documentation of the era:

As to what’s in these volumes, here’s a random page from the XT technical reference manual:

This level of detail made it possible, easy even for a hardware ecosystem to emerge: first, companies that manufactured novel extension boards for the PC and eventually, “clone” makers who built “IBM compatible” computers using “clean room” functional equivalents, developed by companies like Phoenix Technologies, of the machine’s basic software component, the BIOS (Basic Input Output System).

But the other deciding factor was the fateful decision to allow Microsoft to market their own version of the PC’s operating system, DOS. IBM’s computers came with the IBM branded version called “PC-DOS”, but Microsoft was free to sell their own, “MS-DOS”.

Thus, starting in 1984 or so, the market of IBM compatible computers was born, and it rapidly eclipsed IBM’s own market share.

And amazingly, the architecture that they created 40 years ago is still fundamentally the same architecture that we use today. OK, you may not be able to boot an MS-DOS floppy on a new machine with UEFI Secure Boot enabled, but if the BIOS permits you to turn it off, and you actually have a working floppy drive (or, more likely, a CD-ROM drive with a bootable CD image of the old operating system) you just might be in luck and boot that machine using MS-DOS 2.1, so that you can then run an early version of Lotus 1-2-3 or WordPerfect. (Of course you can run all of that in a DOSBox, but DOSBox is a software emulation of the IBM PC, so that does not really count.)

And while 64-bit versions of Windows no longer run really old 16-bit software without tools such as virtual machines or the aforementioned DOSBox, to their credit Microsoft still makes an effort to maintain robust backward compatibility: This is how I end up using a 24-year-old accounting program to keep track of my personal finances, or Microsoft’s 25-year-old “Bookshelf” product with an excellent, easy-to-use version of the American Heritage Dictionary. (No, I am not averse to change or the use of newer software. But it so happens that these packages work flawlessly, do exactly what I need them to do, and so far I have not come across any replacement that delivers the functionality I need, even if I ignore all the unnecessary bloat.)

So here we are: 40 years. It’s insane. Perhaps it is worth mentioning the original, baseline specifications of the IBM 5150 Personal Computer. It had a 16-bit processor running at 0.00477 GHz. It had approximately 0.000015 gigabytes of RAM. The baseline configuration had no permanent storage, only a cassette tape interface for storing BASIC programs. The version capable of running PC-DOS had four times as much RAM, 0.000061 gigabytes, and external storage in the form of a single-sided, double-density 5.25″ floppy disk drive capable of storing 0.00016 gigabytes of data on a single disk. (Be grateful that I did not use terabytes to describe its capacity.) The computer had no real-time clock (when PC-DOS started, it asked for the time and date). Its monochrome display adapter was text only, capable of showing 25 lines by 80 characters each. Alternatively the user could opt to purchase a machine equipped with a CGA (color graphics adapter), capable of showing a whopping 16 colors at the resolution of 160 by 100 pixels, or a high resolution monochrome image at 640 by 200 pixels. Sound was provided through a simple beeper, controlled entirely by software. Optional external interfaces included RS-232 serial and IEEE 1284 parallel ports.

Compare that to the specifications of a cheap smartphone today, 40 years later.

Jul 23, 2021

I just came across an account describing an AI chatbot that I found deeply disturbing.

You see… the chatbot turned out to be a simulation of a young woman, someone’s girlfriend, who passed away years ago at a tragically young age, while waiting for a liver transplant.

Except that she came back to life, in a manner of speaking, as the disembodied personality of an AI chatbot.

Yes, this is an old science-fiction trope. Except that it is not science-fiction anymore. This is our real world, here in the year 2021.

When I say I find the story deeply disturbing, I don’t necessarily mean it disapprovingly. AI is, after all, the future. For all I know, in the distant future AI may be the only way our civilization will survive, long after flesh-and-blood humans are gone.

Even so, this story raises so many questions. The impact on the grieving. The rights of the deceased. And last but not least, at what point does AI become more than just a clever algorithm that can string words together? At what time do we have to begin to worry about the rights of the thinking machines we create?

Hello, all. Welcome to the future.

Apr 17, 2021

Yesterday it was hardware, today it was software.

An e-mail that I sent to a bell.ca address was rejected.

Perhaps I am mistaken but I believe that these Bell/Sympatico mailboxes are managed, handled by Yahoo!. And Yahoo! occasionally made my life difficult by either rejecting mail from my server or dropping it in the recipient’s spam folder. I tried to contact them once, but it was hopeless. Never mind that my domain, vttoth.com, is actually a few months older (July 1, 1994 as opposed to January 18, 1995) than Yahoo!’s and has been continuously owned by a single owner. Never mind that my domain was never used to send spam. Never mind that I get plenty of spam from Yahoo! accounts.

Of course you can’t fight city hall. One thing I can do, instead, is to implement one of the protocols Yahoo wants, the DKIM protocol, to authenticate outgoing e-mail, improving its chances of getting accepted.

But setting it up was a bloody nuisance. So many little traps! In the end, I succeeded, but not before resorting to some rather colorful language.

This little tutorial proved immensely helpful, so helpful in fact that I am going to save its contents, just in case:

https://www.web-workers.ch/index.php/2019/10/21/how-to-configure-dkim-spf-dmarc-on-sendmail-for-multiple-domains-on-centos-7/
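
For what it’s worth, the DNS side of DKIM is simple enough: the public key is published as a TXT record under a selector of one’s choosing, something like this sketch (with a hypothetical selector and domain, and a truncated key, not my actual record):

default._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."

The painful part is everything on the sendmail side: generating the key pair, wiring up the signing milter, and getting all the little details right. That is where the tutorial above earned its keep.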

Very well. It is time to return to more glamorous activities. It’s not like I don’t have things to do.

Apr 16, 2021

Working from my home office and running my own equipment (including server equipment) here means that I have some rather mundane tasks to perform. As a one-man band, I am my own IT support, and that includes software, as well as hardware.

The less glamorous part of software support is installing updates and patches, resolving driver conflicts, keeping external equipment such as routers and network switches up to date.

The less glamorous part of hardware support? Mostly it involves dust. Ginormous dust bunnies, that is.

Ever heard of the expression, “rat’s nest”? It is sometimes used to describe the tangle of cables and wires that hide behind a computer. Now imagine a computer to which several USB hubs, ten external hard drives and additional equipment are connected, most of which have their own power supply. Yes, it’s ugly. Especially if those little power bricks are plugged into a haphazardly assembled multitude of cheap power strips.

And dust collects everywhere. Thick, ugly dust, made of human dandruff, cat dandruff, hair (human and cat), fluff from clothing, crumbs from many past meals. Normally, you would just vacuum up this stuff, but you don’t want to disturb the rat’s nest. Plugs can come loose. You might lose data. And even if you don’t, simply finding the plug that came loose can be a royal pain in the proverbial.

Long story short, I’ve had enough. The other day, I ordered the longest power strip I could find on Amazon, with 24 outlets, complete with mounting brackets. And yesterday, I managed to affix it to the underside of my main desk.

Which means that yesterday and today, working my way through the list one piece of equipment at a time, I managed to move all power plugs to this new power strip. As it hangs from the underside of my desk, it’s most importantly not on the floor. So the floor can be (gasp!) cleaned.

And now I even have room to access my workstation’s side panels, if need be. One of these days, I might even be able to vacuum its back, removing years’ worth of dust from its fan grids. But for now, I content myself with the knowledge that I freed up four (!) cheap power strips, a three-outlet extension cable, and a three-outlet plug, all of which were fully in use. What a liberating feeling.

Having spent a fair amount of time today on all fours under my desk, however, did prompt me to mutter, “I am too old for this,” several times this afternoon… especially as I still feel a bit under the weather, an unpleasant aftereffect, no doubt, of the COVID-19 vaccine I received yesterday.

Mar 16, 2021

Somebody just reminded me: Back in 1982-83 a friend of mine and I had an idea and I even spent some time building a simple simulator of it in PASCAL. (This was back in the days when a 699-line piece of PASCAL code was a huuuuge program!)

So it went like this: Operative memory (RAM) and processor are separate entities in a conventional computer. This means that before a computer can do anything, it needs to fetch data from RAM, then after it’s done with that data, it needs to put it back into RAM. The processor can only hold a small amount of data in its internal registers.

This remains true even today; sure, modern processors have a lot of on-chip cache but conceptually, it is still separate RAM, it’s just very fast memory that is also physically closer to the processor core, requiring less time to fetch or store data.

But what if we abandon this concept and do away with the processor altogether? What if instead we make the bytes themselves “smart”?

That is to say, what if, instead of dumb storage elements that can only be used to store data, we had active storage elements that are minimalist processors themselves, capable of performing simple operations but, much more importantly, capable of sending data to any other storage element in the system?

The massive number of required interconnections between storage elements may appear to be a show-stopper, but here we can borrow a century-old concept from telephony: the switch. Instead of sending data directly, how about having a crossbar-like interconnect? Its capacity will be finite, of course, but that would work fine so long as most storage elements are not trying to send data at the same time. And possibly (though it can induce a performance penalty) we could have a hierarchical system: again, that’s the way large telephone networks function, with local switches serving smaller geographic areas but interconnected into a regional, national, or nowadays global telephone network.

Well, that was almost 40 years ago. It was a fun idea to explore in software even though we never knew how it might be implemented in hardware. One lesson I learned is that programming such a manifestly parallel computer is very difficult. Instead of thinking about a sequence of operations, you have to think about a sequence of states for the system as a whole. Perhaps this, more than any technical issue, is the real show-stopper; sure, programming can be automated using appropriate tools, compilers and whatnot, but that just might negate any efficiency such a parallel architecture may offer.
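
To illustrate what I mean by thinking in states, here is a tiny present-day toy in C (my own sketch, emphatically not the original PASCAL simulator): a handful of “smart bytes”, each wired through a src[] table, standing in for the crossbar switch, to one other cell whose value it adds to its own on every tick. There is no program counter anywhere; the “program” is the wiring, and execution is just the system stepping from one state to the next:

#include <stdio.h>

#define N     8
#define STEPS 5

int main(void)
{
    /* Each cell is a minimalist processor: on every tick it fetches the
       value of one other cell (through the interconnect) and adds it to
       its own. */
    int cell[N] = { 1, 1, 0, 0, 0, 0, 0, 0 };
    int src[N]  = { 1, 0, 1, 2, 3, 4, 5, 6 };   /* the "crossbar": who reads whom */
    int next[N];
    int t, i;

    for (t = 0; t < STEPS; t++)
    {
        /* Two-phase update: every cell computes its next state from the
           current state of the system as a whole; then all cells switch
           at once. */
        for (i = 0; i < N; i++) next[i] = cell[i] + cell[src[i]];
        for (i = 0; i < N; i++) cell[i] = next[i];

        for (i = 0; i < N; i++) printf("%4d", cell[i]);
        printf("\n");
    }
    return 0;
}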

Then again, similar ideas have resurfaced in the decades since, sometimes on the network level as massively parallel networks of computers are used in place of conventional supercomputers.


Gotta love the Y2K bug in the header, by the way. Except that it isn’t. Rather, it’s an implementation difference: I believe the PDP-11 PASCAL that we were using represented a date in the format dd-mm-yyyy, as opposed to dd-MMM-yyyy that is used by this modern Pascal-to-C translator. As I only allocated 10 characters to hold the date in my original code, the final digit is omitted. As for the letters "H J" that appear on top, that was just the VT-100 escape sequence to clear the screen, but with the high bit set on ESC for some reason. I am sure it made sense on the terminals that we were using back in 1982, but xterm just prints the characters.

Nov 19, 2020

In recent years, I saw myself mostly as a “centrist liberal”: one who may lean conservative on matters of the economy and state power, but who firmly (very firmly) believes in basic human rights and basic human decency. One who wishes to live in a post-racial society in which your ethnicity or the color of your skin matter no more than the color of your eyes or your hairstyle. A society in which you are judged by the strength of your character. A society in which consenting, loving adults can form families regardless of their gender or sexual orientation. A society that treats heterosexuals and non-heterosexuals alike, without prejudice, without shaming, without rejection. A society in which covert racism no longer affords me “white privilege” while creating invisible barriers to those who come from a different ethnic background.

But then, I read that one of the pressing issues of the day is… the elimination of terms such as “master/slave” or “blacklist/whitelist” from the technical literature and from millions upon millions of lines of software code.

Say what again?

I mean… not too long ago, this was satire. Not too long ago, we laughed when overzealous censors (or was it misguided software?) changed “black-and-white” into “African-American-and-white”. Never did I think that one day reality would catch up with this Monty Pythonesque insanity.

It is one thing to fight for a post-racial society with gender equality. For a society in which homosexuals, transsexuals and others feel fully appreciated as human beings, just like their conventionally heterosexual neighbors. For a society free of overt or covert discrimination.

It is another thing to seek offense where none was intended. To misappropriate terms that, in the technical literature, NEVER MEANT what you suggest they mean. And then, to top it all off, to intimidate people who do not sing exactly the same song as the politically correct choir.

No, I do not claim the right, the privilege, to tell you what terms you should or should not find offensive. I am simply calling you out on this BS. You know that there is/was nothing racist about blacklisting a spammer’s e-mail address or arranging a pair of flip-flops (the electronic components, not the footwear) in a master/slave circuit. But you are purposefully searching for the use of words like “black” or “slave”, in any context, just to fuel this phony outrage. Enough already!

Do you truly want to fight real racism? Racism that harms people every day, that prevents talented young people from reaching their full potential, racism that still shortens lives and makes lives unduly miserable? Racial discrimination remains real in many parts of the world, including North America. Look no further than indigenous communities here in Canada, or urban ghettos or Native American villages in the United States. And elsewhere in the world? The treatment of the Uyghurs in China, the treatment of many ethnic minorities in Russia, human rights abuses throughout Africa and Asia, rising nationalism and xenophobia in Europe.

But instead of fighting to make the world a better place for those who really are in need, you occupy yourselves with this made-up nonsense. And as a result, you achieve the exact opposite of what you purportedly intend. Do you know why? Well, part of the reason is that decent, well-meaning people in democratic countries now vote against “progressives” because they are fed up with your thought police.

No, I do not wish to offer excuses for the real racists, the bona fide xenophobes, the closet nazis and others who enthusiastically support Trump or other wannabe autocrats elsewhere in the world. But surely, you don’t believe that over 70 million Americans who voted for Donald J. Trump 17 days ago are racist, xenophobic closet nazis?

Because if that’s what you believe, you are no better than the real racists, real xenophobes and real closet nazis. Your view of your fellow citizens is a distorted caricature, a hateful stereotype.

No, many of those who voted for Trump; many of those who voted for Biden but denied Democrats their Senate majority; many of those who voted for Biden but voted Democratic congresspeople out of the US Congress: They did so, in part, because you went too far. You are no longer solving problems. You are creating problems where none exist. Worse yet, through “cancel culture” you are trying to silence your critics.

But perhaps this is exactly what you want. Perpetuate the problem instead of solving it. For what would happen to you in a post-racial society with gender equality and full (and fully respected) LGBTQ rights? You would fade back into obscurity. You’d have to find a real job somewhere. You would no longer be able to present yourself as a respected, progressive “community leader”.

Oh, no, we can’t have that! You are a champion of human rights! You are fighting a neverending fight against white supremacism, white privilege, racism and all that! How dare I question the purity of your heart, your intent?

So you do your darnedest best to create conflict where none exists. There is no better example of this than the emergence of the word “cis” as a pejorative term describing… me, among other people, a heterosexual, white, middle-class male, especially one who happens to have an opinion and is unwilling to hide it. Exactly how you are making the world a better place by “repurposing” a word in this manner even as you fight against long-established terminology in the technical literature that you perceive as racist is beyond me. But I have had enough of this nonsense.

Nov 11, 2020

Did Microsoft just offer me a 14-year old driver as a new update for Windows 10? Oh yes, they did!

But that’s okay… why fix something if it is not broken? Though I do wonder, if it is indeed a 14-year old driver, why was it not part of Windows 10 already? But never mind.

On the plus side, last night Windows 10 performed a feature upgrade along with security updates, and the whole upgrade process finished in well under half an hour; the reboot and installation phase only took a few minutes and so far, as far as I can tell, nothing is broken. Nice.

Oct 09, 2020

So I try to start a piece of software that accesses a classic serial port.

The software locks up. The process becomes unkillable. Because, you know… because. Microsoft has not yet discovered kill -9 I guess.

(Yes, I know that unkillable zombie processes exist under Linux/UNIX, too. But in the past 25 years, I remember exactly one (1) occasion when a Linux process was truly unkillable, hung in a privileged kernel call, and actually required a reboot with no workaround. On Linux, this is considered a bug, not a feature. In contrast, on Windows this is a regular occurrence. Then again, on Linux I have fine-grained control and I can update pretty much everything other than the kernel binary, without ever having to reboot.)

Ok-kay… this means Windows has to be restarted. Fine. At least let me check if there are any Windows updates. Oops… error, because an “update service is shutting down” or whatever.

Oh well, let’s restart. The browser (Edge) will remember my previously opened tabs, right?

After restart, another program tells me that it has an update. Clicking on the update button opens the browser with the download link. Fine. Just checking, in the browser history all my previously opened tabs (lots of them) are still there. Good.

Meanwhile, Windows Update does come to life and tells me that I need to restart my system. Couldn’t you freaking tell me this BEFORE I restarted?

Oh well, might as well… restart #2.

After restart, let’s open the browser. History… and all my previously opened tabs are gone. The only thing the bloody browser remembers is the single tab that contained the download link for that application.

@!##%@#!@. And @#$$!@#$@!$. And all their relatives, too. Live or deceased. And any related deities.

Oh well, let’s restore the bleeping tabs manually; fortunately, I also had most of them opened in Chrome, so I could reopen them, one by one, in Edge. (Maybe there’s a more efficient way of doing this, but I wasn’t going to research that.)

Meanwhile, I also restarted Visual Studio 2019. It told me that it had an update. Having learned from previous experience, I shut down a specific service that was known to interfere with the update. It proved insufficient. When Visual Studio was done updating, it told me that “only one thing” remains: a teeny weeny inconsequential thing, ANOTHER BLOODY RESTART.

Because, ladies and gentlemen, in the fine year of 2020, the great software company Microsoft has not yet found a way to UPDATE A BLEEPING APPLICATION without restarting the WHOLE BLEEPING WORLD. Or at the very least do me a bleeping favor and warn IN ADVANCE that the update may require a restart.

My favorite coffee mug survived, but only just barely. I almost smashed it against the wall.

So here we go… restart #3.

It was nearly two hours ago that I innocently tried to start that program that required access to the serial port. Since then, I probably aged a few years, increased my chances of a stroke and other illnesses related to high blood pressure, barked at my beautiful wife and my cats, almost smashed my favorite mug, lost several browser tabs along with my history in some xterm windows, and other than that, accomplished ABSOLUTELY NOTHING.

Thanks for nothing, Microsoft.

And I actually like Microsoft. Imagine what I’d be saying if I hated them.
