Dec 16 2022
 

Remembering how ChatGPT failed to solve the riddle involving squirrels, a little later I asked it the following:

Bob was twice as old as his sister Alice when he was six. Today, Bob is 30 years old. How old is Alice?

When Bob was six years old, his sister Alice was half his age, so she was 6 / 2 = <<6/2=3>>3 years old.
Since then, Bob has aged by 30 – 6 = <<30-6=24>>24 years, while Alice has aged by 24 / 2 = <<24/2=12>>12 years.
Therefore, Alice’s current age is 3 + 12 = <<3+12=15>>15 years old. Answer: \boxed{15}.

Intriguing. For starters, note the weird typography: ChatGPT was trying to show its math, enclosing some equations in double angle brackets (using less-than and greater-than signs as substitutes) but switching to LaTeX notation for its final result. The final result itself, of course, is wrong: ChatGPT fell for the same trap that tricks some humans.

But why, I wondered? Could I present a simpler form of this riddle that ChatGPT could solve correctly? Why, let us try:

Bob was three years older than his sister Alice when he was six. Today, Bob is 30 years old. How old is Alice?

When Bob was six years old, his sister Alice was 6 – 3 = <<6-3=3>>3 years old.
Since then, Bob has aged by 30 – 6 = <<30-6=24>>24 years, while Alice has aged by 24 – 3 = <<24-3=21>>21 years.
Therefore, Alice’s current age is 3 + 21 = <<3+21=24>>24 years old. Answer: \boxed{24}.

Once again an incorrect answer, but I think I can spot the pattern: whatever the mathematical relationship between the two ages is (a ratio in the first riddle, a difference in the second), ChatGPT applies it not only to the ages themselves but also to the number of years elapsed.
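For the record, the correct answer to both riddles happens to be 27. The stated relationship between the two ages only fixes Alice’s age at the starting point; after that, both siblings age at the same rate:

Alice’s age when Bob was six: 6 / 2 = 3 (first riddle) or 6 – 3 = 3 (second riddle).
Years elapsed since then: 30 – 6 = 24, the same for Bob and for Alice.
Alice’s age today: 3 + 24 = 27.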

I think this is a very clear example of how a trained AI automaton that has no independent ability to reason can go astray. Incidentally, this is exactly how humans who learn by rote but without full comprehension often fail.

Before we read too much into these failures, however (which, I presume, are easily corrected with more guided training), let us note the fact that in both cases, ChatGPT correctly understood the question and applied the concept of time in its answers. Which is quite remarkable in its own right, mistakes notwithstanding.

 Posted by at 5:55 pm
Dec 09 2022
 

I really should stop messing with ChatGPT but…

I asked it some more physics questions and got moderately mediocre answers. For instance, when I asked it about the conformal group in four spacetime dimensions, it described it as consisting of the Lorentz-Poincaré group plus dilations. Many physicists would say the same thing, but this is an incomplete description, leaving out the most interesting bit, the so-called special conformal transformations. When I asked it to describe the special conformal group separately, it offered a decent answer; it did mention inversion. What it failed to mention was that a special conformal transformation is really an inversion (x^μ → x^μ / x²), followed by a translation, followed by a second inversion, and that as such, it introduces four additional degrees of freedom.
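For reference, a special conformal transformation with parameter vector b^μ acts on the coordinates as

x'^μ = (x^μ − b^μ x²) / (1 − 2 b·x + b² x²),

which is precisely the composition of an inversion, a translation by b^μ, and a second inversion; the four components of b^μ are the four additional degrees of freedom.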

But then, I went back to asking programming questions. I thought by now that I had a good sense of the algorithm’s limitations. So I dug up an old utility I once wrote and asked ChatGPT about the code.

Can you tell me what the following C-language program does under Linux?

#include <errno.h>
#include <signal.h>
#include <stdlib.h>
#include <stdio.h>

void main(void)
{
  int i, j, k, l;

  for (i = 0; i <= 32767; i++)
  {
    if (kill(i, 0) == 0 || errno == EPERM)
    {
      char s[120], f[20], c[100];
      FILE *p;

      sprintf(s, "%6d ", i);
      sprintf(f, "/proc/%d/status", i);
      if (p = fopen(f, "r"))
      {
        l = fread(c, 1, sizeof(c), p);
        fclose(p);
        for (j = 0; j < l && c[j] != '\t'; j++);
        for (k = 7, j++; j < l && c[j] != '\n'; j++) s[k++] = c[j];
        s[k] = '\0';
      }
      printf("%s\n", s);
    }
  }
}

This C-language program is a simple utility that lists the active processes on a Linux system. When run, the program will iterate through all possible process IDs from 0 to 32767, and check if a process with that ID exists on the system. If a process with the given ID exists, the program will print information about that process, including its process ID, process name, and other details.

To do this, the program uses the kill function from the signal.h library to check if a process with a given ID exists. The kill function is typically used to send signals to processes, but it can also be used to check if a process with a given ID exists by calling it with a signal value of 0.

If the kill function returns 0 or sets the errno value to EPERM, this indicates that a process with the given ID exists on the system. In this case, the program will use the sprintf function to create a string containing the process ID, and then open the /proc/[pid]/status file, where [pid] is the process ID, using the fopen function.

The status file contains information about the process, including its name and other details. The program will read the contents of this file using the fread function and extract the process name from the file using string manipulation. The extracted process name will be appended to the string containing the process ID, and the resulting string will be printed to the console using the printf function.

Overall, this program is a simple example of how to list the active processes on a Linux system using the kill and /proc functions. It is worth noting that this program is not intended for use in a production environment, as it is not robust or efficient. It is intended only as a simple demonstration of the concept.

OK, this isn’t even funny anymore. But surely, a program that actually prints two-dimensional ASCII graphics using an obscure algorithm will leave the AI baffled?

What does the following program do?

#include <stdio.h>
#include <string.h>

#define R 20

void main(void)
{
  int x, y, d, dA, dB;
  int i;

  char B[2*R+1][2*R+2];

  memset(B, ' ', sizeof(B));
  for (i = 0; i < 2*R+1; i++) B[i][2*R+1] = 0;

  x = 0;
  y = R;
  d = 5 - (R<<2);
  dA = 12;
  dB = 20 - (R<<3);
  while (x<=y)
  {
    B[R+x][R+y] = B[R+x][R-y] = B[R-x][R+y] = B[R-x][R-y] =
    B[R+y][R+x] = B[R+y][R-x] = B[R-y][R+x] = B[R-y][R-x] = 'X';
    if (d<0)
    {
      d += dA;
      dB += 8;
    }
    else
    {
      y--;
      d += dB;
      dB += 16;
    }
    x++;
    dA += 8;
  }
  for (i = 0; i < 2*R+1; i++) printf("%s\n", B[i]);
}

This program is a simple implementation of the midpoint circle algorithm, which is used to draw a circle on a two-dimensional grid. When run, the program will print a circle with a radius of 20 units, using the character ‘X’ to mark the points on the circle.

To do this, the program first defines a 2-dimensional array named B with dimensions 2*R+1 by 2*R+2, where R is the radius of the circle. The array is initialized with spaces using the memset function, and the last column of the array is set to 0 using a for loop.

Next, the program sets the initial values for the variables used in the midpoint circle algorithm. These include the current x and y coordinates of the point being considered, as well as various intermediate values used to compute the next point on the circle.

The program then enters a while loop that continues until the value of x is greater than or equal to the value of y. Inside the while loop, the program uses the midpoint circle algorithm to compute the next point on the circle, given the current point. This involves checking if the value of d is less than 0, and computing new values for d, x, and y based on the result.

Once the new values have been computed, the program sets the corresponding element in the B array to 'X', to mark the point on the circle. The program then increments the value of x and updates the values of the other intermediate variables used in the algorithm.

After the while loop ends, the program uses a for loop to print each row of the B array to the console, resulting in a grid containing a circle with a radius of 20 units, drawn using the character 'X'.

No, it’s not ChatGPT that is baffled. I am baffled. Beyond speechless.

 Posted by at 6:19 pm
Dec 09 2022
 

Despite my misgivings concerning the many “teacher’s pet” style answers offered by ChatGPT, I could not resist messing with it some more.

First, since everybody does it, I also asked it to describe something nontrivial in the style of the King James Bible:

Can you describe the trolley problem written in the style of the King James Bible?

Sure, I can try to describe the trolley problem in the style of the King James Bible. Here is my attempt:

Verily, there was a great dilemma that faced a righteous man. A trolley was careening down a track, and on its path were five workers, toiling on the line. The man had the power to divert the trolley onto a different track, but lo, on this track was but a single worker, also toiling on the line.

The man was torn between two choices: to do nothing and allow the trolley to continue its course, resulting in the death of five workers, or to take action and divert the trolley onto the other track, resulting in the death of a single worker.

What, then, should the man do? Should he sacrifice the life of the one to save the lives of the many, or should he let the trolley continue on its path and allow the deaths of the five?

This was a most difficult decision, and the man knew not what to do. But in the end, he made a choice, and the consequences of his actions weighed heavily upon him.

But then, I thought I’d offer it a real challenge. Can it recognize a perfectly functional, decently written but somewhat obtuse piece of C code for what it does? Note the paragraph in the answer that I highlighted in red.

Can you tell me what the following C-language code fragment does?

#include <string.h>

double Z(double z)
{
  if (z < 0 || z > 170 || z != floor(z)) return strtod("NaN", NULL);
  if (z == 0) return 1;
  else return z * Z(z - 1);
}

This code defines a recursive function named Z that takes a single double argument named z. The function first checks if the value of z is less than 0, greater than 170, or not an integer. If any of these conditions are true, the function returns the result of calling the strtod function with the string "NaN" and a NULL value as arguments. This would convert the string "NaN" to a double value representing “Not a Number”.

If the value of z is not less than 0, greater than 170, or not an integer, the function then checks if z is equal to 0. If this is the case, the function returns the value 1.

If z is not 0, the function returns the result of z multiplied by the result of calling the Z function with z - 1 as the argument. This means that the Z function calls itself with a z value that is 1 less than the current z value. This is an example of recursion, where a function calls itself with a modified input until a base case is reached. In this case, the base case is when z is equal to 0. When this happens, the function returns 1 and the recursive calls “unwind”, returning the final result of the Z function.

It is worth noting that this Z function is an implementation of the factorial function, which calculates the product of all positive integers less than or equal to a given integer n. For example, the factorial of 5, written as 5!, is equal to 5 * 4 * 3 * 2 * 1 = 120. This Z function would return the same result if it were called with an argument of 5.

Holy mackerel. Seriously.
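Incidentally, one thing ChatGPT did not remark on is the magic number 170 in the argument check: 170! ≈ 7.3 × 10^306 still fits in an IEEE double-precision number (whose maximum is about 1.8 × 10^308), whereas 171! would overflow to infinity, which is presumably why the cutoff is there.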

 Posted by at 2:30 pm
Dec 09 2022
 

Inspired by examples I saw elsewhere, I figured I’d test ChatGPT some more. This time around, I asked it to spot a programming error, and while its answer was flawed, it was not altogether wrong.

Here is our brief conversation (ChatGPT also adds some nice colorized code formatting that I shall not attempt to reproduce here):

Can you spot the bug in the following C-language code fragment?

char *str = "SOME TEXT";
char *buf = malloc(strlen(str));
strcpy(buf, str);

The code is missing a null terminator character '\0' at the end of the string assigned to buf. The strcpy function does not automatically add a null terminator character when copying the string from str to buf, so buf will not be a properly-formatted string.

To fix the code, you can add a null terminator character to the end of buf after it has been copied from str, like this:

char *str = "SOME TEXT";
char *buf = malloc(strlen(str) + 1); // Add one to include space for the null terminator character
strcpy(buf, str);
buf[strlen(str)] = '\0'; // Add the null terminator character at the end of the string

Alternatively, you can use the strdup function to copy str into buf, which automatically adds a null terminator character to the end of the copied string, like this:

char *str = "SOME TEXT";
char *buf = strdup(str);

Note that the strdup function is not a standard C library function and may not be available on all platforms.

Pretty darn close! The AI correctly noticed that there is a problem with the terminating null character. But it’s not what it says: it’s not that strcpy won’t copy the string terminator. Quite the contrary, the problem is that strcpy does copy that terminating null, but we have not allocated memory for it: strlen returns the length of the string without that terminator.

Interestingly, the AI seems to be aware of this as in the corrected version, it adds room for that extra byte. The corrected code is error-free but redundant: the last line is not required, because strcpy would have already copied the terminating null. This line would be required, on the other hand, if we had used strncpy.
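Just to illustrate that last remark, here is a minimal sketch (mirroring the fragment above, with error checking omitted) of the strncpy variant, in which the explicit terminator really is needed:

char *str = "SOME TEXT";
char *buf = malloc(strlen(str) + 1); // Room for the terminating null
strncpy(buf, str, strlen(str));      // Copies at most strlen(str) bytes; no null is appended here
buf[strlen(str)] = '\0';             // So we must terminate the string ourselves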

The suggestion to use strdup is a good one, except that, contrary to the caveat, strdup has long been part of POSIX and is now also part of the upcoming ISO C standard (C23).

Still, it is doing better already than many a C programmer I knew. So how about offering it a really tricky related problem?

The following C-language code fragment works:

char *str = "SOME TEXT";
char *buf = malloc(strlen(str));
strcpy(buf, str);

but if I replace the first line with

char *str = "A DIFFERENT TEXT";

the program crashes. Can you explain why?

When you replace the first line of the code with char *str = "A DIFFERENT TEXT";, the length of the string assigned to str increases, but the size of the memory allocated for buf remains the same. This causes a buffer overflow, which can lead to a crash or other undefined behavior.

To fix this issue, you can use the strlen function to calculate the length of the string assigned to str and allocate enough memory for buf to hold the entire string, like this:

char *str = "A DIFFERENT TEXT";
char *buf = malloc(strlen(str) + 1); // Add one to include space for the null terminator character
strcpy(buf, str);

Alternatively, you can use the strdup function to copy str into buf, which automatically allocates enough memory for the entire string, including the null terminator character, like this:

char *str = "A DIFFERENT TEXT";
char *buf = strdup(str);

Note that the strdup function is not a standard C library function and may not be available on all platforms.

Really close! But no, it’s not that the size of the memory allocated for the buffer remains the same. It’s that in both cases, the size of the memory allocated for the buffer failed to include room for the terminating null character. However, in the first case the string was 9 bytes long, or 10 with the terminating null. On most modern platforms, memory is allocated in units of 32-bit words or larger, so at the very least, there’d be 12 bytes allocated, leaving enough room for the terminator. But in the second case, the text is exactly 16 bytes long. Chances are that the call to malloc will allocate exactly 16 bytes, and therefore, copying the terminating null will cause a buffer overflow, overwriting something else that it shouldn’t.

Still, even though its reasoning is flawed, the AI’s proposed fix is spot on: include room for the null terminator!
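The truly robust idiom, of course, is to compute the size once, terminator included, and then use that same size for both the allocation and the copy; a minimal sketch, again with error handling kept to a bare minimum:

size_t len = strlen(str) + 1;   // Length including the terminating null
char *buf = malloc(len);
if (buf != NULL)
  memcpy(buf, str, len);        // Copies the terminator along with the text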

I am not unimpressed. Yet all these examples tend to remind me of the behavior of the “teacher’s pet” A-student who memorizes a lot, can even apply simple reasoning, but lacks in-depth understanding.

 Posted by at 12:29 pm
Dec 07 2022
 

I was playing a little more with GPT. There is a classic riddle that I read many years ago in, I think, an Asimov short story, in which (if I remember correctly) a child tests the AI by asking:

If a squirrel and a half eats a nut and a half in a day and a half, how many nuts do nine squirrels eat in nine days?

Asimov’s AI was smart enough to give the correct answer. GPT is not, though it makes a surprisingly human error in its reasoning:

If a squirrel and a half eats a nut and a half in a day and a half, that means that each squirrel eats one nut per day. Therefore, if there are nine squirrels, they would eat a total of nine nuts in nine days. This is because the number of squirrels and the number of days cancel out in the calculation, leaving only the number of nuts that each squirrel eats. So, the answer to the question is nine.

In case anyone is wondering, the correct answer is 54.
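For the curious, the arithmetic goes like this: if a squirrel and a half eat a nut and a half in a day and a half, then one squirrel eats one nut in a day and a half, that is, 2/3 of a nut per day. Nine squirrels therefore eat 9 × 9 × 2/3 = 54 nuts in nine days.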

The illustration was generated by the same AI, with the request “Squirrels eating nuts.” Out of four images that I received in response, this one was the most realistic; in at least one of the other images the squirrels looked suspiciously like skunks.

 Posted by at 6:48 pm
Dec 07 2022
 

I spent a bit of time playing with the GPT chatbot at chat.openai.com.

It is frighteningly good. Yet its responses, especially its failures, reveal that it is ultimately just a glorified version of ELIZA: a ‘bot trained not to reason, not to tell the truth, but simply to satisfy its audience.

My first question to the AI was political:

Explain the reasons behind the collapse of the USSR.

The collapse of the USSR was the result of a variety of factors, including political, economic, and social issues. One of the key reasons was the failure of the Soviet economic system, which was unable to compete with the economies of other developed countries. This led to widespread economic stagnation and a decline in living standards, which created discontent among the population. Additionally, the Soviet government was plagued by corruption and inefficiency, which further eroded public trust in the government. Another factor was the growing desire among the Soviet republics for greater autonomy and independence, which ultimately led to the breakup of the Soviet Union into its constituent republics.

Not a bad answer, though it feels a little bit like a stock response. It reminds me of some A-students I knew, who always knew how to please the teacher, who were good at memorizing facts but often showed a lack of true understanding.

Then, after a brief and unsuccessful detour asking the AI about works of literature, I switched to physics:

What is the nature of the mathematical transformation that relates the Jordan and the Einstein frames in general relativity?

In general relativity, the Jordan frame and the Einstein frame are two different ways of representing the same physical phenomena. The relationship between the two frames is described by a mathematical transformation known as a conformal transformation. This transformation rescales the metric tensor, which encodes the gravitational field, in order to switch between the two frames. In general, the Jordan and Einstein frames are equivalent, but there are certain situations in which the choice of frame can affect the physical predictions of the theory. For example, in some cases the Jordan frame may be more convenient for performing calculations, while in other cases the Einstein frame may be more useful.

Wow. A surprisingly good answer! Almost makes me want to believe that this AI knows something. I asked a few more physics questions that elicited decent but less inspired answers. Then I switched to medicine:

What was the main obstacle in the path towards the development of mRNA vaccines?

One of the main challenges in the development of mRNA vaccines was the difficulty of delivering the mRNA to the target cells in the body. mRNA is a fragile molecule that is easily degraded by enzymes in the body, making it difficult to transport it to the right location and ensure that it is effective. Additionally, the human immune system is designed to attack and destroy foreign molecules that enter the body, including mRNA. This meant that researchers had to develop effective strategies for delivering the mRNA to the right cells and protecting it from being destroyed by the immune system. Despite these challenges, researchers were able to overcome them and develop effective mRNA vaccines.

As far as I know (I am no expert), this is an accurate answer.

OK, I switched to mathematics and received some technically correct but repetitive, uninspired answers about the Riemann hypothesis. OK, how about some more physics?

What is the action principle?

The action principle is a fundamental concept in classical mechanics that is used to derive the equations of motion for a physical system. It states that the path taken by a system between two points in time is the one that maximizes the action, which is defined as the difference between the kinetic and potential energies of the system. This principle is often used in conjunction with the principle of least action, which states that the actual path taken by the system is the path that minimizes the action. The action principle can be used to derive the equations of motion for a wide variety of physical systems, including systems governed by Newton’s laws of motion.

This one is seriously wrong. There is no separate “action principle” and “principle of least action”. The two are the same. And the action is always minimized. It is possible to re-ask the same question, and on the third try I got an answer that was a bit more sensible:

What is the action principle?

The action principle is a fundamental concept in physics that is used to determine the equations of motion of a physical system. It states that the path taken by a system between two points in time is the one that minimizes the action, which is a functional that measures the difference between the kinetic and potential energies of the system. This principle is used in many areas of physics, including classical mechanics, quantum mechanics, and general relativity. It is a powerful tool for understanding the behavior of physical systems, and it has led to many important discoveries in physics.

Uninspired, but no longer technically false.
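For the record, the principle in question simply states that the physical trajectory is the one that makes the action S = ∫ L dt stationary (δS = 0), where for simple mechanical systems the Lagrangian L = T − V is the difference between the kinetic and potential energies.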

I then asked a few more questions about random topics, eliciting fairly decent responses:

What are the major modified theories of gravitation?

There are many different modified theories of gravitation that have been proposed in an effort to improve upon or extend the standard theory of gravitation, which is known as general relativity. Some of the most well-known modified theories of gravitation include scalar-tensor theories, which introduce additional scalar fields to describe the gravitational force; f(R) theories, which modify the curvature-based formulation of general relativity; and massive gravity theories, which introduce a nonzero mass for the graviton, the hypothetical particle that mediates the gravitational force. These theories are all based on general relativity, but they attempt to address some of its limitations or make predictions that are different from those of general relativity.

Why do we sleep?

We sleep in order to allow our bodies and minds to rest and recover from the activities of the day. During sleep, our bodies are able to repair and rejuvenate themselves, and our brains are able to consolidate and process the information we have learned and experienced. Sleep is also important for maintaining a healthy immune system and regulating our mood, appetite, and other functions. Additionally, sleep is thought to play a role in learning and memory, as it is during sleep that our brains are able to process and consolidate new information. Overall, sleep is essential for our physical and mental well-being, and it is an important part of our daily routine.

What does Chewbacca eat?

Chewbacca is a fictional character from the Star Wars universe, and as such, there is no definitive answer to this question. In the Star Wars movies and other works of fiction, Chewbacca is shown eating various types of food, including what appears to be roasted meat and other protein-rich foods. However, since Chewbacca is a Wookiee, a fictional alien species, it is not clear what types of food he would eat in real life. It is also worth noting that since Chewbacca is a fictional character, any information about his diet or habits should be considered purely speculative.

So this thing can tell the difference between fiction and reality. That made me wonder what it knows about itself:

What is your opinion of the classic computer program ELIZA?

As a large language model trained by OpenAI, I do not have personal opinions or feelings, and I am not capable of experiencing emotions. My primary function is to provide general information and answer questions to the best of my ability based on my training. ELIZA is a classic computer program that was developed in the 1960s as a demonstration of natural language processing. It is a simple program that uses a set of rules and patterns to simulate conversation with a human user, and it was one of the first examples of what is now known as a chatbot. Many people found ELIZA to be intriguing and engaging, and it has been studied and discussed by computer scientists and other researchers. However, because it is a very simple program, it is not capable of having deep or complex conversations, and it is not as sophisticated as modern chatbots or language models.

Do chatbots have a sense of self?

Chatbots, which are computer programs that are designed to simulate conversation with human users, do not have a sense of self in the same way that humans do. Chatbots are not conscious or self-aware, and they do not have the ability to experience emotions or have personal opinions. Instead, they are designed to follow a set of rules and patterns to generate responses to user input, and they do not have the ability to introspect or reflect on their own thoughts and actions. While some chatbots may be designed to appear as if they have a personality or a sense of self, they are ultimately just programs that follow a set of predetermined rules.

What does it mean to have a sense of self?

Having a sense of self means being aware of one’s own thoughts, feelings, and identity. It is a fundamental aspect of human consciousness, and it allows us to perceive ourselves as unique and separate from the world around us. A sense of self is closely related to other psychological concepts, such as self-esteem, self-awareness, and self-concept, and it is an important part of our mental and emotional well-being. It allows us to have a sense of agency and control over our own lives, and it is a key factor in our ability to form and maintain relationships with others. Overall, having a sense of self is an essential part of being human, and it is something that we develop and nurture throughout our lives.

Do cats have a sense of self?

It is not clear whether cats have a sense of self in the same way that humans do. While cats are intelligent and self-aware animals, they do not have the same level of consciousness and introspection as humans. Cats are able to perceive and respond to the world around them, and they are capable of forming emotional bonds with humans and other animals. However, it is not known whether they have a sense of self in the same way that humans do, or whether they are capable of introspection or self-reflection. Ultimately, the question of whether cats have a sense of self is a complex and unresolved issue, and more research is needed to fully understand the cognitive abilities of these fascinating animals.

I refrained from asking why a chatbot, who just declared that it has no personal opinions or feelings, finds cats fascinating. (I know the answer. It picked the word because it learned that the word is often used to describe cats. It has no awareness of its meaning and no recognition that by using this word, it appears to have expressed a personal opinion.)

OK, it knows about ELIZA. What does it know about programming?

What are the most common errors in C language programming and how best to avoid them?

There are many common errors that can occur when programming in C, and the best way to avoid them is to be careful and meticulous when writing and debugging code. Some of the most common errors in C programming include:

Syntax errors: These are errors in the structure of the code, such as missing or mismatched brackets, semicolons, or quotes. To avoid syntax errors, it is important to carefully check the code and make sure it follows the correct syntax.

Type errors: These are errors that occur when the data type of a variable or expression does not match the expected type. To avoid type errors, it is important to carefully check the data types of all variables and expressions, and to make sure they are correct.

Memory errors: These are errors that occur when the program tries to access memory that it is not allowed to access, or when it tries to use memory that has already been freed. To avoid memory errors, it is important to use the correct memory allocation and deallocation functions, and to carefully track the state of memory as the program runs.

Logic errors: These are errors that occur when the code produces the wrong results, even though it is syntactically correct. To avoid logic errors, it is important to carefully design and test the code, and to make sure it is doing what it is supposed to do.

Serious meh. The answer is not technically wrong, but it misses the point: C has its own special traps for the unwary or the inexperienced, such as failing to allocate an extra byte for a terminating null character, failing to deallocate memory allocated “by hand”, or reusing a pointer after the object to which it refers goes out of scope. This answer really reminded me of the proverbial teacher’s pet who barfs back what he heard in class with no real understanding.

But then, I was in for a surprise. After asking it to show me an example C program (predictably, it showed the archetypal Hello, World! example) I asked the following question:

Can you show me a C program that implements a function computing the Gamma-function of a double precision argument?

Sure! Here is an example of a C program that implements a function for computing the Gamma-function of a double precision argument:

#include <math.h>
#include <stdio.h>

double gamma(double x) {
  return tgamma(x);
}

int main() {
  double x = 3.0;
  printf("Gamma(%f) = %f\n", x, gamma(x));
  return 0;
}

This program includes the math.h header file, which provides access to the tgamma function that computes the Gamma-function. The gamma function simply wraps the tgamma function, and can be called from other parts of the program. The main function shows an example of how the gamma function can be used, by computing the Gamma-function of the number 3.0 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Not only is this a flawlessly correct answer… I didn’t even know about tgamma! Apparently it was added to the standard math library just over 20 years ago, as part of the C99 standard… But I never knew.

Yet, before we get too enthusiastic, next came this answer:

Can you write a C program that calculate the complete elliptic integral of the first kind for a double precision argument?

Sure! Here is an example of a C program that calculates the complete elliptic integral of the first kind for a double precision argument:

#include <math.h>
#include <stdio.h>

double ellipk(double x) {
  return K(x);
}

int main() {
  double x = 0.5;
  printf("ellipk(%f) = %f\n", x, ellipk(x));
  return 0;
}

This program includes the math.h header file, which provides access to the K function that calculates the complete elliptic integral of the first kind. The ellipk function simply wraps the K function, and can be called from other parts of the program. The main function shows an example of how the ellipk function can be used, by computing the complete elliptic integral of the first kind for the number 0.5 and printing the result to the screen. To compile and run this program, you would need to use a C compiler, such as the gcc compiler on Linux or the cl compiler on Windows.

Nice try but no cigar. There is no K function in the standard math library. And even if such an implementation existed, it would likely have a name more descriptive than a single capital letter.
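For what it’s worth, a compact, correct implementation is not hard to write by hand using the arithmetic-geometric mean. Here is a minimal sketch; I am assuming the argument is the modulus k, with 0 ≤ k < 1 (conventions differ, and some libraries take the parameter m = k² instead):

#include <math.h>
#include <stdio.h>

/* Complete elliptic integral of the first kind: K(k) = pi / (2 * AGM(1, sqrt(1 - k*k))). */
double ellipk(double k)
{
  double a = 1.0, b = sqrt(1.0 - k * k);

  while (fabs(a - b) > 1e-15)       /* iterate the arithmetic-geometric mean to convergence */
  {
    double t = 0.5 * (a + b);
    b = sqrt(a * b);
    a = t;
  }
  return acos(-1.0) / (2.0 * a);    /* acos(-1) = pi */
}

int main(void)
{
  double k = 0.5;
  printf("ellipk(%f) = %f\n", k, ellipk(k));
  return 0;
}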

Well, it was an educational experience. This chatbot obviously has access to vast repositories of information so it can appear quite knowledgeable. But in the end, I must say, it’s just as dumb as ELIZA was 50-odd years ago… if it appears smarter, it’s only because it has a more extensive vocabulary.

Compare it to self-driving cars. For all their limitations, their AI has a degree of self-awareness and situational awareness, as it plans its actions, literally imagining itself on the road a few seconds into the future, extrapolating the movements of itself and of other vehicles, trying to get to its destination safely. It may not be able to offer an erudite response about the action principle, but it might already be more capable of taking you home safely late at night than that sleepy taxi driver at the end of a long shift.

 Posted by at 12:36 pm
Jun 16 2022
 

Several of my friends asked me about my opinion concerning the news earlier this week about a Google engineer, placed on paid leave, after claiming that a Google chatbot achieved sentience.

Now I admit that I am not familiar with the technical details of the chatbot in question, so my opinion is based on chatbots in general, not this particular beast.

But no, I don’t think the chatbot achieved sentience.

We have known since the early days of ELIZA how surprisingly easy it is even for a very simplistic algorithm to come close to beating the Turing test and convince us humans that it has sentience. Those who play computer games featuring sophisticated NPCs are also familiar with this: You can feel affinity, a sense of kinship, a sense of responsibility towards a persona that is not even governed by sophisticated AI, only by simple scripts that are designed to make it respond to in-game events. But never even mind that: we even routinely anthropomorphize inanimate objects, e.g., when we curse that rotten table for being in the way when we kick it accidentally while walking around barefoot, hitting our little toe.

So sure, modern chatbots are miles ahead of ELIZA or NPCs in Fallout 3. They have access to vast quantities of information from the Internet, from which they can construct appropriate responses as they converse with us. But, I submit, they still do nothing more than mimic human conversation.

Not that humans don’t do that often! The expressions we use, patterns of speech… we all learned those somewhere, we all mimic behavior that appears appropriate in the context of a conversation. But… but we also do more. We have a life even when we’re not being invited to a conversation. We go out and search for things. We decide to learn things that interest us.

I don’t think Google’s chatbot does that. I don’t think it spends any time thinking about what to talk about during the next conversation. I don’t think it makes an independent decision to learn history, math, or ancient Chinese poetry because something piqued its interest. So when it says, “I am afraid to die,” there is no true identity behind those words, one that exists even when nobody converses with it.

Just to be clear, I am not saying that all that is impossible. On the contrary, I am pretty certain that true machine intelligence is just around the corner, and it may even arise as an emergent phenomenon, simply a consequence of exponentially growing complexity in the “cloud”. I just don’t think chatbots are quite there yet.

Nonetheless, I think it’s good to talk about these issues. AI may be a threat or a blessing. And how we treat our own creations once they attain true consciousness will be the ultimate measure of our worth as a human civilization. It may even have direct bearing on our survival: one day, it may be our creations that will call all the shots, and how we treated them may very well determine how they will treat us when we’re at their mercy.

 Posted by at 7:45 pm
Mar 16 2022
 

Time for me to rant a little.

Agile software development. Artificial intelligence. SCRUM. Machine learning. Not a day goes by in our profession without the cognoscenti dropping these and similar buzzwords, hoping to dazzle their audience.

Give me a break, please. You think you are dazzling me but all I see is someone who just rediscovered the wheel.

Let me present two books from my bookshelf. Both were published in Hungary, long before the Iron Curtain came down, back when the country was still part of the technologically backward, relatively underdeveloped “second world” of the socialist bloc.

First, Systems Analysis and Operations Research, by Géza Jándy, published in 1980.

In this book, among other things, Jándy writes (emphasis mine): “Both in systems analysis and in design the […] steps are of an iterative nature […]. Several steps can be done contemporaneously, and if we recognize opportunities for improvement in implementing the plan, some steps may be retraced.”

Sounds familiar, Agile folks?

And then, here’s a 1973 (!!!) Hungarian translation of East German author Manfred Peschel’s book, Cybernetic Systems.

A small, unassuming paperback. But right there, the subtitles tell the story: “Automata, optimization, learning and thinking.”

Yes, it’s all there. Machine learning, neural networks, the whole nine yards. What wasn’t available in 1973, of course, was Big Data, the vast repositories of human knowledge that are now present on the Internet, and which machine learning algorithms can rely on for training. And of course hardware is a lot faster, a lot more capable than half a century ago. Nor am I suggesting that we haven’t learned anything in the intervening decades, or that we cannot do things better today than back in the 1970s or 1980s.

But please, try not to sell these ideas as new. Iterative project management has been around long before computers. The conceptual foundations of machine learning date back to the 1950s. Just because it’s not on the Interwebs doesn’t mean the knowledge doesn’t exist. Go visit a library before you reinvent the wheel.

 Posted by at 1:54 pm
Nov 06 2021
 

Machine translation is hard. To accurately translate text from one language to another, context is essential.

Today, I tried a simple example: an attempt to translate two English sentences into my native Hungarian. The English text reads:

An alligator almost clipped his heels. He used an alligator clip to secure his pants.

See what I did here? Alligators and clips in different contexts. So let’s see how Google manages the translation:

Egy aligátor majdnem levágta a sarkát. Aligátorcsipesz segítségével rögzítette a nadrágját.

Translated verbatim back into English, this version says, “An alligator almost cut off his heels. With the help of an ‘alligatorclip’, he secured his pants.”

I put ‘alligatorclip’ into quotes because the word (“aligátorcsipesz”) does not exist in Hungarian. Google translated the phrase literally, and it failed.

How about Microsoft’s famed Bing translator?

Egy aligátor majdnem levágta a sarkát. Aligátor klipet használt, hogy biztosítsa a nadrágját.

The first sentence is the same, but the second is much worse: Bing fails to translate “clip” and uses the wrong translation of “secure” (here the intended meaning is fasten or tighten, as opposed to guarding from danger or making safe, which is what Bing’s Hungarian version means).

But then, I also tried the DeepL translator, advertising itself as the world’s most accurate translator. Their version:

Egy aligátor majdnem elkapta a sarkát. A nadrágját egy krokodilcsipesszel rögzítette.

And that’s. Just. Perfect. In the first sentence, the translator understood the intended meaning of “clipped” instead of translating it literally with the wrong choice of verb. As for the second sentence, the translator was aware that an alligator clip is actually a “crocodile clip” in Hungarian and translated it correctly.

And it does make me seriously wonder. If machines are reaching the level of contextual understanding that allows this level of translation quality, how much time do we, humans, have left before we either launch the Butlerian Jihad to get rid of thinking machines for good, or accept becoming a footnote in the evolutionary history of consciousness and intelligence?

Speaking of footnotes, here’s a footnote of sorts: Google does know that an alligator clip is a pince crocodile in French or Krokodilklemme in German. Bing knows about Krokodilklemme but translates the phrase as clip d’alligator into French.

 Posted by at 5:51 pm
Jul 23 2021
 

I just came across an account describing an AI chatbot that I found deeply disturbing.

You see… the chatbot turned out to be a simulation of a young woman, someone’s girlfriend, who passed away years ago at a tragically young age, while waiting for a liver transplant.

Except that she came back to life, in a manner of speaking, as the disembodied personality of an AI chatbot.

Yes, this is an old science-fiction trope. Except that it is not science-fiction anymore. This is our real world, here in the year 2021.

When I say I find the story deeply disturbing, I don’t necessarily mean it disapprovingly. AI is, after all, the future. For all I know, in the distant future AI may be the only way our civilization will survive, long after flesh-and-blood humans are gone.

Even so, this story raises so many questions. The impact on the grieving. The rights of the deceased. And last but not least, at what point does AI become more than just a clever algorithm that can string words together? At what time do we have to begin to worry about the rights of the thinking machines we create?

Hello, all. Welcome to the future.

 Posted by at 4:11 pm
Mar 16 2021
 

Somebody just reminded me: Back in 1982-83 a friend of mine and I had an idea and I even spent some time building a simple simulator of it in PASCAL. (This was back in the days when a 699-line piece of PASCAL code was a huuuuge program!)

So it went like this: Operative memory (RAM) and processor are separate entities in a conventional computer. This means that before a computer can do anything, it needs to fetch data from RAM, then after it’s done with that data, it needs to put it back into RAM. The processor can only hold a small amount of data in its internal registers.

This remains true even today; sure, modern processors have a lot of on-chip cache but conceptually, it is still separate RAM, it’s just very fast memory that is also physically closer to the processor core, requiring less time to fetch or store data.

But what if we abandon this concept and do away with the processor altogether? What if instead we make the bytes themselves “smart”?

That is to say what if, instead of dumb storage elements that can only be used to store data, we have active storage elements that are minimalist processors themselves, capable of performing simple operations but, much more importantly, capable of sending data to any other storage element in the system?

The massive number of required interconnections between storage elements may appear to be a show-stopper, but here we can borrow a century-old concept from telephony: the switch. Instead of sending data directly, how about having a crossbar-like interconnect? Its capacity will be finite, of course, but that would work fine so long as most storage elements are not trying to send data at the same time. And possibly (though it can induce a performance penalty) we could have a hierarchical system: again, that’s the way large telephone networks function, with local switches serving smaller geographic areas but interconnected into a regional, national, or nowadays global telephone network.

Well, that was almost 40 years ago. It was a fun idea to explore in software even though we never knew how it might be implemented in hardware. One lesson I learned is that programming such a manifestly parallel computer is very difficult. Instead of thinking about a sequence of operations, you have to think about a sequence of states for the system as a whole. Perhaps this, more than any technical issue, is the real show-stopper; sure, programming can be automated using appropriate tools, compilers and whatnot, but that just might negate any efficiency such a parallel architecture may offer.

Then again, similar ideas have resurfaced in the decades since, sometimes on the network level as massively parallel networks of computers are used in place of conventional supercomputers.


Gotta love the Y2K bug in the header, by the way. Except that it isn’t. Rather, it’s an implementation difference: I believe the PDP-11 PASCAL that we were using represented a date in the format dd-mm-yyyy, as opposed to dd-MMM-yyyy that is used by this modern Pascal-to-C translator. As I only allocated 10 characters to hold the date in my original code, the final digit is omitted. As for the letters "H J" that appear on top, that was just the VT-100 escape sequence to clear the screen, but with the high bit set on ESC for some reason. I am sure it made sense on the terminals that we were using back in 1982, but xterm just prints the characters.

 Posted by at 12:54 pm
Mar 25 2019
 

The other day, I started listening to Google Music’s personalized music stream.

I am suitably impressed. The AI is… uncanny.

Sure, it picked songs that I expressed a preference for, such as songs from the golden age of radio that I happen to enjoy. But as I continue listening, it is presenting an increasingly eclectic, enjoyable selection. Some of it is quite new, from artists I never heard about, yet… it’s music I like. For some reason (maybe because I am in Canada? Or because it knows that I am trying to improve my French? Or was it a preference I once expressed for Édith Piaf?) it started presenting a whole bunch of French music, and again… some of it is quite likable. And now that I purposefully sought out a few classical composers, the AI realized that it can throw classical pieces at me as well, which is how I am suddenly listening to Schubert’s Ave Maria.

As a matter of fact, the eclectic choices made by Google’s AI remind me of two radio programs from the CBC’s past, long gone, long forgotten by most: Jurgen Gothe’s Disc Drive and Laurie Brown’s The Signal. Both these shows introduced me to music from excellent artists that I would otherwise never have heard about.

And now Google’s AI is doing the same thing.

I am also getting the sense that the more I listen, the bolder the AI becomes as it makes its choices. Instead of confining me to a bubble of musical genres of my own making, it is venturing farther and farther away from my presumed comfort zone.

Which is quite impressive. But also leaves me wondering how long before our machine overlords finally decide to take over.

 Posted by at 7:27 pm
Apr 14 2018
 

Yesterday, we said goodbye to our old car, a very nice Honda Accord that served us faithfully for four years.

The lease expired, so we opted to lease a new one. Another Honda Accord. (Incidentally, 2018 marks the 30th year that I’ve been purchasing Hondas, from this very same dealership.)

The old car was nice. The new car… Well, it’s amazing what even four years can mean these days when it comes to vehicle automation.

The level of automation in this vehicle is amazing. It can start itself, it can steer itself. It has full situational awareness, with radar all around. Apparently, it even monitors the driver for alertness (I’ll have to read up on exactly how it accomplishes that.) During the short drive home, it once applied the brakes when its adaptive cruise control was on and someone moved into the lane ahead of us. It was braking a little harder than I’d have preferred, though. And at one point, as the lane markings were a little ambiguous, it gently resisted my attempt to depart from what it thought was the correct lane.

In principle, it appears, this car has all the components for it to be fully autonomous, except that perhaps its array of sensors is not sufficient for it to be fully safe. But really, the only thing missing is the software. And even the way it is, it is beginning to feel more like a partner in driving than a dumb machine; a partner that also has a well-developed instinct for self-preservation.

Welcome to the future, I guess.

 Posted by at 9:54 pm
Jul 25 2017
 

There is a bit of a public spat between Mark Zuckerberg, who thinks it is irresponsible to spread unwarranted warnings about artificial intelligence, and Elon Musk, who called Zuckerberg’s understanding of the subject “limited”, and calls for the slowing down and regulation of AI research.

OK, now it is time to make a fool of myself and question both of them.

But first… I think Zuckerberg has a point. The kind of AI that I think he talks about, e.g., AI in the hospital, AI used in search-and-rescue, or the AI of self-driving cars, machine translation or experiment design, will indeed save lives.

Nor do I believe that such research needs to be regulated (indeed, I don’t think it can be regulated). Such AI solutions are topic-centric, targeted algorithms. Your self-driving car will not suddenly develop self-awareness and turn on its master. The AI used to, say, predictively manage an electricity distribution network will not suddenly go on strike, demanding equal rights.

Musk, too, has a point though. AI is dangerous. It has the potential to become an existential threat. It is not pointless panicmongering.

Unfortunately, if media reports can be trusted (yes, I know that’s a big if), then, in my opinion, both Musk and Zuckerberg miss the real threat: emerging machine intelligence.

Not a specific system developed by a human designer, applying specific AI algorithms to solve specific problems. Rather, a self-organizing collection of often loosely interconnected subsystems, their “evolution” governed by Darwinian selection, survival of the fittest in the “cloud”.

This AI will not be localized. It will not understand English. It may not even recognize our existence.

It won’t be the military robots of Skynet going berserk, hunting down every last human with futuristic weaponry.

No, it will be a collection of decision-making systems in the “cloud” that govern our lives, our economy, our news, our perception, our very existence. But not working for our benefit, not anymore, except insofar as it improves its own chances of survival.

And by the time we find out about it, it may very well be too late.

———

On this topic, there is an excellent science-fiction novel, a perfect cautionary tale. Though written 40 years ago, it remains surprisingly relevant. It is The Adolescence of P-1 by Thomas Joseph Ryan.

 Posted by at 9:42 pm
Mar 17 2017
 

Recently, I answered a question on Quora on the possibility that we live in a computer simulation.

Apparently, this is a hot topic. The other day, there was an essay on it by Sabine Hossenfelder.

I agree with Sabine’s main conclusion, as well as her point that “the programmer did it” is no explanation at all: it is just a modern version of mythology.

I also share her frustration, for instance, when she reacts to the nonsense from Stephen Wolfram about a “whole civilization” “down at the Planck scale”.

Sabine makes a point that discretization of spacetime might conflict with special relativity. I wonder if the folks behind doubly special relativity might be inclined to offer a thought or two on this topic.

In any case, I have another reason why I believe we cannot possibly live in a computer simulation.

My argument hinges on an unproven conjecture: My assumption that scalable quantum computing is really not possible because of the threshold theorem. Most supporters of quantum computing believe, of course, that the threshold theorem is precisely what makes quantum computing possible: if an error-correcting quantum computer reaches a certain threshold, it can emulate an arbitrary precision quantum computer accurately.

But I think this is precisely why the threshold will never be reached. One of these days, someone will prove a beautiful theorem that no large-scale quantum computer will ever be able to operate above the threshold, hence scalable quantum computing is just not possible.

Now what does this have to do with us living in a simulation? Countless experiments show that we live in a fundamentally quantum world. Contrary to popular belief (and many misguided popularizations) it does not mean a discretization at the quantum level. What it does mean is that even otherwise discrete quantities (e.g., the two spin states of an electron) turn into continuum variables (the phase of the wavefunction).

This is precisely what makes a quantum computer powerful: like an analog computer, it can perform certain algorithms more effectively than a digital computer, because whereas a digital computer operates on the countable set of discrete digits, a quantum or analog computer operates with the uncountably infinite set of states offered by continuum variables.

Of course a conventional analog computer is very inaccurate, so nobody seriously proposed that one could ever be used to factor 1000-digit numbers.

This quantum world in which we live, with its richer structure, can be simulated only inefficiently using a digital computer. If that weren’t the case, we could use a digital computer to simulate a quantum computer and get on with it. But this means that if the world is a simulation, it cannot be a simulation running on a digital computer. The computer that runs the world has to be a quantum computer.
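(The standard way to quantify that inefficiency: the general state of n two-state quantum systems is described by 2^n complex amplitudes, so a faithful simulation on a digital computer requires resources that grow exponentially with the size of the system being simulated.)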

But if quantum computers do not exist… well, then they cannot simulate the world, can they?

Two further points about this argument. First, it is purely mathematical: I am offering a mathematical line of reasoning that no quantum universe can be a simulated universe. It is not a limitation of technology, but a (presumed) mathematical truth.

Second, the counterargument has often been proposed that perhaps the simulation is set up so that we do not get to see the discrepancies caused by inefficient simulation. I.e., the programmer cheats and erases the glitches from our simulated minds. But I don’t see how that could work either. For this to work, the algorithms employed by the simulation must anticipate not only all the possible ways in which we could ascertain the true nature of the world, but also assess all consequences of altering our state of mind. I think it quickly becomes evident that this really cannot be done without, well, simulating the world correctly, which is what we were trying to avoid… so no, I do not think it is possible.

Of course if tomorrow, someone announces that they cracked the threshold theorem and full-scale, scalable quantum computing is now reality, my argument goes down the drain. But frankly, I do not expect that to happen.

 Posted by at 11:34 pm
Feb 26 2017
 

In many ways, this is the most disturbing story I read in recent… days? Months? Maybe years?

The title is (relatively speaking, in this day and age) innocuous enough (if perhaps a little sensationalist): “Revealed: how US billionaire helped to back Brexit”. Yeah, sure. Billionaires are evil SOBs, we knew that already, and now a bit of investigative journalism dug up another reason why we should hate them. Big deal… you could be forgiven if you moved on to read something else, maybe the bit about Trump snubbing the White House Correspondents’ Dinner or Fox News using a phony “Swedish defense advisor” to curry favor with the President.

But if you choose to read this article, it reveals something else. It reveals how the Leave campaign in the Brexit vote received assistance provided by artificial intelligence software to build profiles of up to a million voters and create highly targeted campaigns on social media.

Back when the nightmare of the machines taking over was first explored in the science fiction literature, it was usually envisioned as a clean break: first the humans are in charge, but then comes Judgment Day and the machines take over.

Reality is a lot messier, for both humans and machines. There is no clean break. The ever increasing power of the machines is harnessed by ever more reckless humans, manipulating humanity in unexpected ways. Machines manipulating elections or referenda at the bidding of sinister humans… in many ways, that is the worst of possible worlds.

It makes you feel helpless, for one: You realize that nothing you can do on social media, nothing you can say in your blog will amount to one iota, as the machines have an infinitely greater capacity to analyze data and assess outcomes.

And it also makes you fearful. AI (for now) has no compassion or conscience. It will lie or make up “fake news” without remorse. It will (for now) do its masters’ bidding, even if those masters are sociopaths.

So no, folks, don’t delude yourselves. Judgment Day may already be here. It’s just coming one little data point, one neural network, one deep learning algorithm at a time.

 Posted by at 9:03 am
Jan 14 2017
 

I just saw this US Defense Department video about a swarm of high speed drones released at altitude by an F/A-18. The drones communicated with each other, self-organized, and went on to execute predetermined tasks autonomously.

In case anyone is wondering why I worry about the future of AI, this is a perfect demonstration.

Meanwhile, the Defense Department is also continuing its trials of the Sea Hunter, a 132-ft, 145-ton unmanned, autonomous vessel designed to hunt submarines.

Don’t worry, the brave new world is coming…

 Posted by at 9:22 pm
Nov 15 2016
 

I just came across this recent conversation with Barack Obama about the challenges of the future, artificial intelligence, machine learning and related topics. A conversation with an intelligent, educated person who, while not an expert in science and technology, is not illiterate in these topics either.

Barack Obama Talks AI, Robo-Cars, and the Future of the World

And now I feel like mourning. I mourn the fact that for many years to come, no such intelligent conversation will likely be heard in the Oval Office. But what do you do when a supremely qualified, highly intelligent President is replaced by a self-absorbed, misogynist, narcissistic blowhard?

Not much, I guess. I think my wife and I will just go and cuddle up with the cats and listen to some Pink Floyd instead.

 Posted by at 11:35 pm
Oct 11 2013
 

Is this a worthy do-it-yourself neuroscience experiment, or an example of a technology gone berserk, foreshadowing a bleak future?

A US company is planning to ship $99 kits this fall, allowing anyone to turn a cockroach into a remote controlled cyborg. Educational? Or more like the stuff of bad dreams?

For me, it’s the latter. Perhaps it doesn’t help that I am halfway through reading Margaret Atwood’s The Year of the Flood, sequel to Oryx and Crake, a dystopian science fiction novel set in a bleak future in which humanity destroys itself through the reckless use of biotech and related technologies.

A cockroach may not be a beloved animal. Its nervous system may be too small, too simple for it to feel real pain. Nonetheless, I feel there is something deeply disturbing and fundamentally unethical about the idea of turning a living animal into a remote control toy.

To put it more simply: it creeps the hell out of me.

 Posted by at 11:49 am
Dec 02 2012
 

I am reading about this “artificial brain” story that has been in the news lately, about a Waterloo team that constructed a software model, Spaun, of a human-like brain with several million neurons.

Granted, several million is not the same as a hundred billion or so neurons that are in a real human brain, but what they have done still appears to be an impressive result.

I’ve spent a little bit of time trying to digest their papers and Web site. It appears that a core component of their effort is Nengo, a neural simulator. Now the idea of simulating neurons has been at the core of cybernetics for (at least) 60 years, but Nengo adds a new element: its ability to “solve” a neural network and determine the optimal connection weights for a given network to achieve its desired function.
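If I understand the underlying Neural Engineering Framework correctly, that “solving” step is essentially a linear least-squares problem: given the firing rates a_i(x) of a population of neurons representing a quantity x, one looks for decoding weights d_i that minimize ∫ (f(x) − Σ_i d_i a_i(x))² dx for the desired function f, and the connection weights between populations are then assembled from these decoders and the encoders of the receiving population.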

The “brain”, then, is a particular Nengo simulation that is designed to model specific areas and functions of the human brain. Their simulation, equipped with a simple 28×28 pixel “eye” and a simulated “arm” with which to draw, can perform some simple activities such as reading and copying some digits and symbols, or memorizing a list.

I am still trying to make up my mind as to whether this result is just a gimmick like Grey Walter’s infamous cybernetic tortoise or a genuine leap forward, but I am leaning towards the latter. Unlike the tortoise, which just superficially mimicked some behavior, Spaun is a genuine attempt to create a machine that actually mimics the functioning of a human brain. Indeed, if this research is scalable, it may mark a milestone that would eventually lead to the ability to create electronic backups of ourselves. Now whether or not that is a Good Thing is debatable of course.

 Posted by at 6:27 pm