Nov 06 2021
 

Machine translation is hard. To accurately translate text from one language to another, context is essential.

Today, I tried a simple example: an attempt to translate two English sentences into my native Hungarian. The English text reads:

An alligator almost clipped his heels. He used an alligator clip to secure his pants.

See what I did here? Alligators and clips in different contexts. So let’s see how Google manages the translation:

Egy aligátor majdnem levágta a sarkát. Aligátorcsipesz segítségével rögzítette a nadrágját.

Translated verbatim back into English, this version says: “An alligator almost cut off his heels. With the help of an ‘alligatorclip’, he secured his pants.”

I put ‘alligatorclip’ in quotes because the word (“aligátorcsipesz”) does not exist in Hungarian. Google translated the phrase literally, and it failed.

How about Microsoft’s famed Bing translator?

Egy aligátor majdnem levágta a sarkát. Aligátor klipet használt, hogy biztosítsa a nadrágját.

The first sentence is the same, but the second is much worse: Bing fails to translate “clip” and uses the wrong translation of “secure” (here the intended meaning is fasten or tighten, as opposed to guarding from danger or making safe, which is what Bing’s Hungarian version means).

But then I also tried DeepL, a translator that advertises itself as the world’s most accurate. Their version:

Egy aligátor majdnem elkapta a sarkát. A nadrágját egy krokodilcsipesszel rögzítette.

And that’s. Just. Perfect. In the first sentence, the translator grasped the intended meaning of “clipped” instead of rendering it literally with the wrong verb. And in the second, it was aware that an alligator clip is actually a “crocodile clip” in Hungarian and translated it accordingly.

And it does make me seriously wonder. If machines are reaching the degree of contextual understanding that allows this level of translation quality, how much time do we humans have left before we either launch the Butlerian Jihad to get rid of thinking machines for good, or accept becoming a footnote in the evolutionary history of consciousness and intelligence?

Speaking of footnotes, here’s a footnote of sorts: Google does know that an alligator clip is a pince crocodile in French or Krokodilklemme in German. Bing knows about Krokodilklemme but translates the phrase as clip d’alligator into French.

 Posted at 5:51 pm
Jul 23 2021
 

I just came across an account describing an AI chatbot that I found deeply disturbing.

You see… the chatbot turned out to be a simulation of a young woman, someone’s girlfriend, who passed away years ago at a tragically young age, while waiting for a liver transplant.

Except that she came back to life, in a manner of speaking, as the disembodied personality of an AI chatbot.

Yes, this is an old science-fiction trope. Except that it is not science-fiction anymore. This is our real world, here in the year 2021.

When I say I find the story deeply disturbing, I don’t necessarily mean it disapprovingly. AI is, after all, the future. For all I know, in the distant future AI may be the only way our civilization will survive, long after flesh-and-blood humans are gone.

Even so, this story raises so many questions. The impact on the grieving. The rights of the deceased. And last but not least, at what point does AI become more than just a clever algorithm that can string words together? When do we have to begin worrying about the rights of the thinking machines we create?

Hello, all. Welcome to the future.

 Posted at 4:11 pm
Mar 16 2021
 

Somebody just reminded me: Back in 1982-83, a friend of mine and I had an idea, and I even spent some time building a simple simulator of it in PASCAL. (This was back in the days when a 699-line piece of PASCAL code was a huuuuge program!)

So it went like this: in a conventional computer, main memory (RAM) and the processor are separate entities. This means that before the computer can do anything, it needs to fetch data from RAM, and once it is done with that data, it needs to put it back into RAM. The processor itself can hold only a small amount of data in its internal registers.

This remains true even today; sure, modern processors have a lot of on-chip cache, but conceptually it is still separate RAM: just very fast memory that sits physically closer to the processor core, requiring less time to fetch or store data.

But what if we abandon this concept and do away with the processor altogether? What if instead we make the bytes themselves “smart”?

That is to say what if, instead of dumb storage elements that can only be used to store data, we have active storage elements that are minimalist processors themselves, capable of performing simple operations but, much more importantly, capable of sending data to any other storage element in the system?

The massive number of interconnections required between storage elements may look like a show-stopper, but here we can borrow a century-old concept from telephony: the switch. Instead of sending data directly, how about having a crossbar-like interconnect? Its capacity will be finite, of course, but that would work fine so long as most storage elements are not trying to send data at the same time. And possibly (though it would incur a performance penalty) we could have a hierarchical system: again, that is the way large telephone networks function, with local switches serving smaller geographic areas but interconnected into a regional, national, or nowadays global telephone network.

Well, that was almost 40 years ago. It was a fun idea to explore in software even though we never knew how it might be implemented in hardware. One lesson I learned is that programming such a manifestly parallel computer is very difficult. Instead of thinking about a sequence of operations, you have to think about a sequence of states for the system as a whole. Perhaps this, more than any technical issue, is the real show-stopper; sure, programming can be automated using appropriate tools, compilers and whatnot, but that just might negate any efficiency such a parallel architecture may offer.
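For the curious, here is what the idea boils down to, as a minimal sketch in Python rather than the original (long lost) PASCAL. Everything in it is made up for illustration: each storage element holds a value and a tiny instruction list, and a capacity-limited crossbar delivers messages between elements once per tick. The point it tries to capture is the last one above: “running a program” on such a machine means stepping the entire collection of cells from one global state to the next.

```python
# Minimal sketch of "smart bytes" connected through a capacity-limited crossbar.
# All names and the toy instruction set are illustrative, not a real design.

from dataclasses import dataclass, field

@dataclass
class Cell:
    value: int = 0
    # A tiny "program": (op, operand, destination) tuples, one executed per tick.
    program: list = field(default_factory=list)
    pc: int = 0                      # program counter within this cell

    def step(self):
        """Execute one instruction; return an outgoing (destination, value) message or None."""
        if self.pc >= len(self.program):
            return None
        op, operand, dest = self.program[self.pc]
        self.pc += 1
        if op == "add":
            self.value += operand
        elif op == "load":
            self.value = operand
        if dest is not None:         # ask the crossbar to deliver our value elsewhere
            return (dest, self.value)
        return None

class Crossbar:
    """Delivers at most `capacity` messages per tick; the rest wait in a queue."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pending = []

    def submit(self, msg):
        self.pending.append(msg)

    def deliver(self, cells):
        for dest, value in self.pending[:self.capacity]:
            cells[dest].value = value
        self.pending = self.pending[self.capacity:]

def run(cells, xbar, ticks):
    # Programming this machine means reasoning about the state of *all* cells per tick.
    for _ in range(ticks):
        for cell in cells:
            msg = cell.step()
            if msg:
                xbar.submit(msg)
        xbar.deliver(cells)

# Example: cell 0 computes 2 + 3 and sends the result to cell 1.
cells = [Cell(program=[("load", 2, None), ("add", 3, 1)]), Cell()]
run(cells, Crossbar(capacity=4), ticks=3)
print(cells[1].value)   # -> 5
```

Even in a toy like this, there is no single instruction stream to follow; the “program” is the evolution of the whole array of cells, which is exactly what made the real thing so hard to think about.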

Then again, similar ideas have resurfaced in the decades since, sometimes on the network level as massively parallel networks of computers are used in place of conventional supercomputers.


Gotta love the Y2K bug in the header, by the way. Except that it isn’t. Rather, it’s an implementation difference: I believe the PDP-11 PASCAL that we were using represented a date in the format dd-mm-yyyy, as opposed to dd-MMM-yyyy that is used by this modern Pascal-to-C translator. As I only allocated 10 characters to hold the date in my original code, the final digit is omitted. As for the letters "H J" that appear on top, that was just the VT-100 escape sequence to clear the screen, but with the high bit set on ESC for some reason. I am sure it made sense on the terminals that we were using back in 1982, but xterm just prints the characters.

 Posted at 12:54 pm
Mar 25 2019
 

The other day, I started listening to Google Music’s personalized music stream.

I am suitably impressed. The AI is… uncanny.

Sure, it picked songs that I expressed a preference for, such as songs from the golden age of radio that I happen to enjoy. But as I continue listening, it is presenting an increasingly eclectic, enjoyable selection. Some of it is quite new, from artists I never heard about, yet… it’s music I like. For some reason (maybe because I am in Canada? Or because it knows that I am trying to improve my French? Or was it a preference I once expressed for Édith Piaf?) it started presenting a whole bunch of French music, and again… some of it is quite likable. And now that I purposefully sought out a few classical composers, the AI realized that it can throw classical pieces at me as well, which is how I am suddenly listening to Schubert’s Ave Maria.

As a matter of fact, the eclectic choices made by Google’s AI remind me of two radio programs from the CBC’s past, long gone, long forgotten by most: Juergen Goth’s Disc Drive and Laurie Brown’s The Signal. Both these shows introduced me to music from excellent artists that I would otherwise never have heard about.

And now Google’s AI is doing the same thing.

I am also getting the sense that the more I listen, the bolder the AI becomes as it makes its choices. Instead of confining me to a bubble of musical genres of my own making, it is venturing farther and farther away from my presumed comfort zone.

Which is quite impressive. But also leaves me wondering how long before our machine overlords finally decide to take over.

 Posted at 7:27 pm
Apr 14 2018
 

Yesterday, we said goodbye to our old car, a very nice Honda Accord that served us faithfully for four years.

The lease expired, so we opted to lease a new one. Another Honda Accord. (Incidentally, 2018 marks the 30th year that I’ve been purchasing Hondas, from this very same dealership.)

The old car was nice. The new car… Well, it’s amazing what even four years can mean these days when it comes to vehicle automation.

The level of automation in this vehicle is amazing. It can start itself, it can steer itself. It has full situational awareness, with radar all around. Apparently, it even monitors the driver for alertness (I’ll have to read up on exactly how it accomplishes that.) During the short drive home, it once applied the brakes when its adaptive cruise control was on and someone moved into the lane ahead of us. It was braking a little harder than I’d have preferred, though. And at one point, as the lane markings were a little ambiguous, it gently resisted my attempt to depart from what it thought was the correct lane.

In principle, it appears, this car has all the components for it to be fully autonomous, except that perhaps its array of sensors is not sufficient for it to be fully safe. But really, the only thing missing is the software. And even the way it is, it is beginning to feel more like a partner in driving than a dumb machine; a partner that also has a well-developed instinct for self-preservation.

Welcome to the future, I guess.

 Posted at 9:54 pm
Jul 25 2017
 

There is a bit of a public spat between Mark Zuckerberg, who thinks it is irresponsible to spread unwarranted warnings about artificial intelligence, and Elon Musk, who called Zuckerberg’s understanding of the subject “limited” and who calls for the slowing down and regulation of AI research.

OK, now it is time to make a fool of myself and question both of them.

But first… I think Zuckerberg has a point. The kind of AI that I think he is talking about, e.g., AI in the hospital, AI in search-and-rescue, AI in self-driving cars, machine translation or experiment design, will indeed save lives.

Nor do I believe that such research needs to be regulated (indeed, I don’t think it can be regulated). Such AI solutions are topic-centric, targeted algorithms. Your self-driving car will not suddenly develop self-awareness and turn on its master. The AI used to, say, predictively manage an electricity distribution network will not suddenly go on strike, demanding equal rights.

Musk, too, has a point though. AI is dangerous. It has the potential to become an existential threat. This is not pointless panic-mongering.

Unfortunately, if media reports can be trusted (yes, I know that’s a big if), then, in my opinion, both Musk and Zuckerberg miss the real threat: emerging machine intelligence.

Not a specific system developed by a human designer, applying specific AI algorithms to solve specific problems. Rather, a self-organizing collection of often loosely interconnected subsystems, their “evolution” governed by Darwinian selection, survival of the fittest in the “cloud”.

This AI will not be localized. It will not understand English. It may not even recognize our existence.

It won’t be the military robots of Skynet going berserk, hunting down every last human with futuristic weaponry.

No, it will be a collection of decision-making systems in the “cloud” that govern our lives, our economy, our news, our perception, our very existence. But not working for our benefit, not anymore, except insofar as it improves its own chances of survival.

And by the time we find out about it, it may very well be too late.

———

On this topic, there is an excellent science-fiction novel, a perfect cautionary tale. Though written 40 years ago, it remains surprisingly relevant. It is The Adolescence of P-1 by Thomas Joseph Ryan.

 Posted at 9:42 pm
Mar 17 2017
 

Recently, I answered a question on Quora on the possibility that we live in a computer simulation.

Apparently, this is a hot topic. The other day, there was an essay on it by Sabine Hossenfelder.

I agree with Sabine’s main conclusion, as well as her point that “the programmer did it” is no explanation at all: it is just a modern version of mythology.

I also share her frustration, for instance, when she reacts to the nonsense from Stephen Wolfram about a “whole civilization” “down at the Planck scale”.

Sabine makes a point that discretization of spacetime might conflict with special relativity. I wonder if the folks behind doubly special relativity might be inclined to offer a thought or two on this topic.

In any case, I have another reason why I believe we cannot possibly live in a computer simulation.

My argument hinges on an unproven conjecture: my assumption that scalable quantum computing is not really possible, because the threshold demanded by the threshold theorem can never be reached. Most supporters of quantum computing believe, of course, that the threshold theorem is precisely what makes quantum computing possible: if an error-correcting quantum computer reaches a certain threshold, it can emulate an arbitrary-precision quantum computer accurately.

But I think this is precisely why the threshold will never be reached. One of these days, someone will prove a beautiful theorem that no large-scale quantum computer will ever be able to operate above the threshold, hence scalable quantum computing is just not possible.

Now what does this have to do with us living in a simulation? Countless experiments show that we live in a fundamentally quantum world. Contrary to popular belief (and many misguided popularizations), this does not mean that the world is discretized at the quantum level. What it does mean is that even otherwise discrete quantities (e.g., the two spin states of an electron) come with continuum variables attached (the phase of the wavefunction).

This is precisely what makes a quantum computer powerful: like an analog computer, it can perform certain algorithms more effectively than a digital computer, because whereas a digital computer operates on a countable set of discrete digits, a quantum or analog computer operates with the uncountably infinite set of states offered by continuum variables.

Of course a conventional analog computer is very inaccurate, so nobody seriously proposed that one could ever be used to factor 1000-digit numbers.

This quantum world in which we live, with its richer structure, can be simulated only inefficiently using a digital computer. If that weren’t the case, we could use a digital computer to simulate a quantum computer and get on with it. But this means that if the world is a simulation, it cannot be a simulation running on a digital computer. The computer that runs the world has to be a quantum computer.
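To put a rough number on “inefficiently”: a brute-force digital simulation has to track one complex amplitude per basis state, that is, 2^n of them for n qubits. The little Python sketch below (the qubit counts are arbitrary, chosen purely for illustration) computes only the memory this naive state-vector approach would need; cleverer simulation methods exist, but the exponential scaling is the point.

```python
# Back-of-the-envelope cost of simulating n qubits on a digital computer by
# storing the full state vector: 2^n complex amplitudes at 16 bytes each.

def state_vector_bytes(n_qubits: int) -> float:
    return (2.0 ** n_qubits) * 16.0

for n in (10, 30, 50, 100):
    print(f"{n:4d} qubits -> {state_vector_bytes(n):.3e} bytes")

# 10 qubits  -> ~1.6e+04 bytes (trivial)
# 30 qubits  -> ~1.7e+10 bytes (a workstation's worth of RAM)
# 50 qubits  -> ~1.8e+16 bytes (tens of petabytes)
# 100 qubits -> ~2.0e+31 bytes (beyond any conceivable digital hardware)
```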

But if quantum computers do not exist… well, then they cannot simulate the world, can they?

Two further points about this argument. First, it is purely mathematical: I am offering a mathematical line of reasoning that no quantum universe can be a simulated universe. It is not a limitation of technology, but a (presumed) mathematical truth.

Second, the counterargument has often been proposed that perhaps the simulation is set up so that we do not get to see the discrepancies caused by inefficient simulation. I.e., the programmer cheats and erases the glitches from our simulated minds. But I don’t see how that could work either. For this to work, the algorithms employed by the simulation must anticipate not only all the possible ways in which we could ascertain the true nature of the world, but also assess all consequences of altering our state of mind. I think it quickly becomes evident that this really cannot be done without, well, simulating the world correctly, which is what we were trying to avoid… so no, I do not think it is possible.

Of course if tomorrow, someone announces that they cracked the threshold theorem and full-scale, scalable quantum computing is now reality, my argument goes down the drain. But frankly, I do not expect that to happen.

 Posted at 11:34 pm
Feb 26 2017
 

In many ways, this is the most disturbing story I read in recent… days? Months? Maybe years?

The title is (relatively speaking, in this day and age) innocuous enough (if perhaps a little sensationalist): “Revealed: how US billionaire helped to back Brexit”. Yeah, sure. Billionaires are evil SOBs, we knew that already, and now a bit of investigative journalism dug up another reason why we should hate them. Big deal… you could be forgiven if you moved on to read something else, maybe the bit about Trump snubbing the White House Correspondents’ Dinner or Fox News using a phony “Swedish defense advisor” to curry favor with the President.

But if you choose to read this article, it reveals something else. It reveals how the Leave campaign in the Brexit vote received assistance from artificial intelligence software that built profiles of up to a million voters and created highly targeted campaigns on social media.

Back when the nightmare of the machines taking over first appeared in science fiction, it was usually envisioned as a clean break: first the humans are in charge, but then comes Judgment Day and the machines take over.

Reality is a lot messier, for both humans and machines. There is no clean break. The ever increasing power of the machines is harnessed by ever more reckless humans, manipulating humanity in unexpected ways. Machines manipulating elections or referenda at the bidding of sinister humans… in many ways, that is the worst of possible worlds.

It makes you feel helpless, for one: you realize that nothing you can do on social media, nothing you can say in your blog, will matter one iota, as the machines have an infinitely greater capacity to analyze data and assess outcomes.

And it also makes you fearful. AI (for now) has no compassion or conscience. It will lie or make up “fake news” without remorse. It will (for now) do its masters’ bidding, even if those masters are sociopaths.

So no, folks, don’t delude yourselves. Judgment Day may already be here. It’s just coming one little data point, one neural network, one deep learning algorithm at a time.

 Posted at 9:03 am
Jan 14 2017
 

I just saw this US Defense Department video about a swarm of high speed drones released at altitude by an F/A-18. The drones communicated with each other, self-organized, and went on to execute predetermined tasks autonomously.

In case anyone is wondering why I worry about the future of AI, this is a perfect demonstration.

Meanwhile, the Defense Department is also continuing its trials of the Sea Hunter, a 132-ft, 145-ton unmanned, autonomous vessel designed to hunt submarines.

Don’t worry, the brave new world is coming…

 Posted at 9:22 pm
Nov 15 2016
 

I just came across this recent conversation with Barack Obama about the challenges of the future, artificial intelligence, machine learning and related topics. A conversation with an intelligent, educated person who, while not an expert in science and technology, is not illiterate in these topics either.

Barack Obama Talks AI, Robo-Cars, and the Future of the World

And now I feel like mourning. I mourn the fact that for many years to come, no such intelligent conversation will likely be heard in the Oval Office. But what do you do when a supremely qualified, highly intelligent President is replaced by a self-absorbed, misogynist, narcissistic blowhard?

Not much, I guess. I think my wife and I will just go and cuddle up with the cats and listen to some Pink Floyd instead.

 Posted at 11:35 pm
Oct 11 2013
 

Is this a worthy do-it-yourself neuroscience experiment, or an example of a technology gone berserk, foreshadowing a bleak future?

A US company is planning to ship $99 kits this fall, allowing anyone to turn a cockroach into a remote controlled cyborg. Educational? Or more like the stuff of bad dreams?

For me, it’s the latter. Perhaps it doesn’t help that I am halfway through reading Margaret Atwood’s The Year of the Flood, sequel to Oryx and Crake, a dystopian science fiction novel set in a bleak future in which humanity destroys itself through the reckless use of biotech and related technologies.

A cockroach may not be a beloved animal. Its nervous system may be too small, too simple for it to feel real pain. Nonetheless, I feel there is something deeply disturbing and fundamentally unethical about the idea of turning a living animal into a remote-controlled toy.

To put it more simply: it creeps the hell out of me.

 Posted at 11:49 am
Dec 02 2012
 

I am reading about this “artificial brain” story that has been in the news lately, about a Waterloo team that constructed a software model, Spaun, of a human-like brain with several million neurons.

Granted, several million is not the same as the hundred billion or so neurons in a real human brain, but what they have done still appears to be an impressive result.

I’ve spent a little bit of time trying to digest their papers and Web site. It appears that a core component of their effort is Nengo, a neural simulator. Now the idea of simulating neurons has been at the core of cybernetics for (at least) 60 years, but Nengo adds a new element: its ability to “solve” a neural network and determine the optimal connection weights for a given network to achieve its desired function.
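To make that “solving” step concrete, here is a toy sketch, in plain numpy rather than actual Nengo code, of the principle as I understand it: take a population of simple rate neurons with randomly chosen tuning, then solve a regularized least-squares problem for the output (decoding) weights that make the population compute a desired function. All the parameters and the choice of target function below are made up for illustration.

```python
# Toy illustration (not Nengo itself) of solving for connection weights:
# given random neuron tuning curves, find the decoding weights that make the
# population compute a desired function -- here, f(x) = x**2.

import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 200

# Random tuning: each neuron responds to the represented value x with a
# rectified linear rate a_i(x) = max(0, gain_i * e_i * x + bias_i).
gains = rng.uniform(0.5, 2.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)

x = np.linspace(-1, 1, n_samples)                                      # sampled range
activities = np.maximum(0.0, np.outer(x, gains * encoders) + biases)   # (samples, neurons)

target = x ** 2                        # the function we want the weights to compute

# Regularized least squares for the decoders (the "solving" step in the text).
reg = 0.1 * activities.max()
G = activities.T @ activities + reg**2 * n_samples * np.eye(n_neurons)
d = np.linalg.solve(G, activities.T @ target)

estimate = activities @ d
print("RMS error of the decoded x**2:", np.sqrt(np.mean((estimate - target) ** 2)))
```

The weights are not learned by trial and error; they are computed directly from the neurons’ response curves, which is what lets a simulator of this kind wire up millions of neurons into something functional.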

The “brain”, then, is a particular Nengo simulation that is designed to model specific areas and functions of the human brain. Their simulation, equipped with a simple 28×28 pixel “eye” and a simulated “arm” with which to draw, can perform some simple activities such as reading and copying some digits and symbols, or memorizing a list.

I am still trying to make up my mind as to whether this result is just a gimmick like Grey Walter’s infamous cybernetic tortoise or a genuine leap forward, but I am leaning towards the latter. Unlike the tortoise, which just superficially mimicked some behavior, Spaun is a genuine attempt to create a machine that actually mimics the functioning of a human brain. Indeed, if this research is scalable, it may mark a milestone that would eventually lead to the ability to create electronic backups of ourselves. Now whether or not that is a Good Thing is debatable of course.

 Posted at 6:27 pm
Nov 06 2011
 

In his delightful collection of robot stories Cyberiad, Polish science-fiction author Stanislaw Lem tells us how to build a computer (a sentient computer, no less): the most important step is to pour a large number of transistors into a vat and stir.

This mental image popped into my mind as I was reading the last few pages of Andrew Pickering’s The Cybernetic Brain, subtitled Sketches of Another Future.

Beyond presenting a history of (chiefly British) cybernetics (and cyberneticians), the book’s main point is that cybernetics should be resurrected from the dead fringes as a nonmodern (the author’s word) alternative to the hegemony of modern science, and that the cybernetic approach of embracing unknowability is sometimes preferable to the notion that everything can be known and controlled. The author even names specific disasters (global warming, Hurricane Katrina, the war in Iraq) as examples, consequences of the “high modernist” approach to the world.

Well, this is, I take it, the intended message of the book. But what I read from the book is a feely-goody New Age rant against rational (that is, fact and logic-based) thinking, characterized by phrases like “nonmodern” and “ontological theater”. The “high modernist” attitude that the author (rightfully) criticizes is more characteristic of 19th century science than the late 20th or early 21st centuries. And to be sure, the cyberneticians featuring in the book are just as guilty of arrogance as the worst of the “modernists”: after all, who but a true “mad scientist” would use an unproven philosophy as justification for electroshock therapy, or to build a futuristic control center for an entire national economy?

More importantly, the cyberneticians and Pickering never appear to go beyond the most superficial aspects of complexity. They conceptualize a control system for a cybernetic factory with a set of inputs, a set of outputs, and a nondescript blob in the middle that does the thinking; then, they go off and collect puddle water (!) that is supposed to be trained by, and eventually replace, the factory manager. The thinking goes something like this: the skills and experience of a manager form an “exceedingly complex” system. The set of biological and biochemical reactions in a puddle form another “exceedingly complex” system. So, we replace one with the other, do a bit of training, and presto! Problem solved.

These and similar ideas of course only reveal their proponents’ ignorance. Many systems appear exceedingly complex not because they are, but simply because their behavior is governed by simple rules, rules that the mathematician immediately recognizes as higher-order differential equations whose solutions can be chaotic. The behavior of the cybernetic tortoise described in Pickering’s book appears complex only because it is unpredictable and chaotic. Its reaction in front of a mirror may superficially resemble the reaction of a cat, say, but that’s where the analogy ends.
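The textbook illustration of this point is the logistic map, a one-line deterministic rule (a discrete-time cousin of those differential equations) whose output nevertheless looks hopelessly complex and is exquisitely sensitive to initial conditions. A quick sketch, with arbitrarily chosen parameters:

```python
# The logistic map x -> r*x*(1-x): a single quadratic rule that, for r close to 4,
# produces erratic-looking output and extreme sensitivity to the starting value.

def logistic_orbit(x0, r=3.9, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)   # an almost identical starting point
for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# Within a few dozen steps the two orbits bear no resemblance to each other,
# even though the rule itself could hardly be simpler.
```

Apparent complexity, in other words, is cheap; it says nothing about whether a system actually embodies the skills of a factory manager.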

In the end, the author laments that cybernetics has been marginalized by the hegemony of modernist science. I say no; I say cybernetics has been marginalized by its own failure to be useful. Much as cyberneticians would have preferred otherwise, you cannot build a sentient computer by pouring puddle water or a bag of transistors into a vat. The sentient machines of the future may be unknowable in the sense that their actions will be unpredictable, but it will be knowledge that builds them, not New Age ignorance.

 Posted at 3:00 pm
Aug 12 2011
 

Back when I was learning the elementary basics of FORTRAN programming in Hungary in the 1970s, I frequently heard an urban legend according to which the sorry state of computer science in the East Bloc was a result of Stalin’s suspicious attitude towards cybernetics, which he considered a kind of intellectual swindlery from the decadent West. It seemed to make sense, except that it neglected a few facts: that back in the 1950s, Soviet computers compared favorably to Western machines, and that it was only in the 1960s that a slow, painful decline began and the technological gap between East and West started to widen, as the Soviets came to rely increasingly on stolen Western technology.

Nonetheless, it appears that Stalin was right after all, insofar as cybernetics is concerned. I always thought that cybernetics was more or less synonymous with computer science, although I really have not given it much thought lately, as the term largely fell into disuse anyway. But now, I am reading an intriguing book titled “The Cybernetic Brain: Sketches of Another Future” by Andrew Pickering, and I am amazed. For instance, until now I never heard of Project Cybersyn, a project conceived by British cyberneticists to create the ultimate centrally planned economy for socialist Chile in the early 1970s, complete with a futuristic control room. No wonder Allende’s regime failed miserably! The only thing I cannot decide is which was greater: the arrogance or dishonesty of those intellectuals who created this project. A project that, incidentally, also carried a considerable potential for misuse, as evidenced by the fact that its creators received invitations from other repressive regimes to implement similar systems.


Stalin may have been one of the most prolific mass murderers in history, but he wasn’t stupid. His suspicions concerning cybernetics may have been right on the money.

 Posted at 3:03 pm