Jul 25 2017
 

There is a bit of a public spat between Mark Zuckerberg, who thinks it is irresponsible to spread unwarranted warnings about artificial intelligence, and Elon Musk, who called Zuckerberg’s understanding of the subject “limited” and who calls for slowing down and regulating AI research.

OK, now it is time to make a fool of myself and question both of them.

But first… I think Zuckerberg has a point. The kind of AI that I think he is talking about, e.g., AI in hospitals, AI used in search-and-rescue, or the AI of self-driving cars, machine translation and experiment design, will indeed save lives.

Nor do I believe that such research needs to be regulated (indeed, I don’t think it can be regulated). Such AI solutions are topic-centric, targeted algorithms. Your self-driving car will not suddenly develop self-awareness and turn on its master. The AI used to, say, predictively manage an electricity distribution network will not suddenly go on strike, demanding equal rights.

Musk, too, has a point though. AI is dangerous. It has the potential to become an existential threat. It is not pointless panicmongering.

Unfortunately, if media reports can be trusted (yes, I know that’s a big if), then, in my opinion, both Musk and Zuckerberg miss the real threat: emerging machine intelligence.

Not a specific system developed by a human designer, applying specific AI algorithms to solve specific problems. Rather, a self-organizing collection of often loosely interconnected subsystems, their “evolution” governed by Darwinian selection, survival of the fittest in the “cloud”.

This AI will not be localized. It will not understand English. It may not even recognize our existence.

It won’t be the military robots of Skynet going berserk, hunting down every last human with futuristic weaponry.

No, it will be a collection of decision-making systems in the “cloud” that govern our lives, our economy, our news, our perception, our very existence. But not working for our benefit, not anymore, except insofar as it improves its own chances of survival.

And by the time we find out about it, it may very well be too late.

———

On this topic, there is an excellent science-fiction novel, a perfect cautionary tale. Though written 40 years ago, it remains surprisingly relevant. It is The Adolescence of P-1 by Thomas Joseph Ryan.

Posted at 9:42 pm
Mar 17 2017
 

Recently, I answered a question on Quora on the possibility that we live in a computer simulation.

Apparently, this is a hot topic. The other day, there was an essay on it by Sabine Hossenfelder.

I agree with Sabine’s main conclusion, as well as her point that “the programmer did it” is no explanation at all: it is just a modern version of mythology.

I also share her frustration, for instance, when she reacts to the nonsense from Stephen Wolfram about a “whole civilization” “down at the Planck scale”.

Sabine makes a point that discretization of spacetime might conflict with special relativity. I wonder if the folks behind doubly special relativity might be inclined to offer a thought or two on this topic.

In any case, I have another reason why I believe we cannot possibly live in a computer simulation.

My argument hinges on an unproven conjecture: my assumption that scalable quantum computing is really not possible because of the threshold theorem. Most supporters of quantum computing believe, of course, that the threshold theorem is precisely what makes quantum computing possible: if an error-correcting quantum computer operates above a certain accuracy threshold, it can emulate an arbitrary-precision quantum computer accurately.
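To make the role of the threshold concrete, here is a minimal toy sketch (my own illustration with made-up numbers, not a claim about any actual hardware): for concatenated error-correcting codes, the logical error rate after L levels of encoding scales roughly as p_th × (p/p_th)^(2^L), so a physical error rate p below the threshold p_th can be suppressed at will, while one above it only gets worse.

```python
# Toy illustration of the threshold theorem for concatenated codes
# (illustrative numbers only): each level of encoding maps a physical
# error rate p to roughly p_th * (p / p_th)**2, so after L levels the
# logical error rate is approximately p_th * (p / p_th)**(2**L).
def logical_error_rate(p, p_th=1e-2, levels=3):
    return p_th * (p / p_th) ** (2 ** levels)

print(logical_error_rate(5e-3))  # below threshold: error shrinks rapidly
print(logical_error_rate(2e-2))  # above threshold: the estimate blows up instead
```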

But I think this is precisely why the threshold will never be reached. One of these days, someone will prove a beautiful theorem that no large-scale quantum computer will ever be able to operate above the threshold, hence scalable quantum computing is just not possible.

Now what does this have to do with us living in a simulation? Countless experiments show that we live in a fundamentally quantum world. Contrary to popular belief (and many misguided popularizations), this does not mean a discretization at the quantum level. What it does mean is that even otherwise discrete quantities (e.g., the two spin states of an electron) turn into continuum variables (the phase of the wavefunction).

This is precisely what makes a quantum computer powerful: like an analog computer, it can perform certain algorithms more effectively than a digital computer, because whereas a digital computer operates on a countable set of discrete digits, a quantum or analog computer operates with the uncountably infinite set of states offered by continuum variables.
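To illustrate the distinction with a minimal sketch (my own toy example, nothing rigorous): even a single qubit is described by continuous parameters, which a digital computer can only ever store to finite precision.

```python
import numpy as np

# A classical bit is one of two discrete values.
classical_bit = 0  # or 1

# A qubit state |psi> = cos(theta/2)|0> + exp(i*phi)*sin(theta/2)|1>
# is parametrized by two *continuous* angles; a digital computer can
# only approximate them to finite precision.
theta, phi = 1.2345678901234567, 0.9876543210987654  # arbitrary real numbers
qubit = np.array([np.cos(theta / 2),
                  np.exp(1j * phi) * np.sin(theta / 2)])

print(qubit, np.abs(np.vdot(qubit, qubit)))  # properly normalized: <psi|psi> = 1
```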

Of course a conventional analog computer is very inaccurate, so nobody seriously proposed that one could ever be used to factor 1000-digit numbers.

This quantum world in which we live, with its richer structure, can be simulated only inefficiently using a digital computer. If that weren’t the case, we could use a digital computer to simulate a quantum computer and get on with it. But this means that if the world is a simulation, it cannot be a simulation running on a digital computer. The computer that runs the world has to be a quantum computer.
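A back-of-the-envelope sketch of what “inefficiently” means here (my own toy estimate, with the usual caveat that clever approximations exist for special cases): a digital computer that stores a general n-qubit state as a dense vector needs 2^n complex amplitudes, so the cost grows exponentially with the size of the simulated system.

```python
# Rough memory cost of a dense n-qubit state vector: 2**n complex
# amplitudes at 16 bytes each (toy estimate only).
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (10, 30, 50, 300):
    print(f"{n:4d} qubits: {state_vector_bytes(n):.3e} bytes")
# 50 qubits already requires petabytes; 300 qubits would need more bytes
# than there are atoms in the observable universe.
```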

But if quantum computers do not exist… well, then they cannot simulate the world, can they?

Two further points about this argument. First, it is purely mathematical: I am offering a mathematical line of reasoning that no quantum universe can be a simulated universe. It is not a limitation of technology, but a (presumed) mathematical truth.

Second, the counterargument has often been proposed that perhaps the simulation is set up so that we do not get to see the discrepancies caused by inefficient simulation. I.e., the programmer cheats and erases the glitches from our simulated minds. But I don’t see how that could work either. For this to work, the algorithms employed by the simulation must anticipate not only all the possible ways in which we could ascertain the true nature of the world, but also assess all consequences of altering our state of mind. I think it quickly becomes evident that this really cannot be done without, well, simulating the world correctly, which is what we were trying to avoid… so no, I do not think it is possible.

Of course if tomorrow, someone announces that they cracked the threshold theorem and full-scale, scalable quantum computing is now reality, my argument goes down the drain. But frankly, I do not expect that to happen.

Posted at 11:34 pm
Feb 26 2017
 

In many ways, this is the most disturbing story I read in recent… days? Months? Maybe years?

The title is (relatively speaking, in this day and age) innocuous enough (if perhaps a little sensationalist): “Revealed: how US billionaire helped to back Brexit”. Yeah, sure. Billionaires are evil SOBs, we knew that already, and now a bit of investigative journalism dug up another reason why we should hate them. Big deal… you could be forgiven if you moved on to read something else, maybe the bit about Trump snubbing the White House Correspondents’ Dinner or Fox News using a phony “Swedish defense advisor” to curry favor with the President.

But if you choose to read this article, it reveals something else. It reveals how the Leave campaign in the Brexit vote received assistance from artificial intelligence software used to build profiles of up to a million voters and to create highly targeted campaigns on social media.

Back when the nightmare of the machines taking over was first explored in science-fiction literature, it was usually envisioned as a clean break: First the humans are in charge, but then comes Judgment Day and the machines take over.

Reality is a lot messier, for both humans and machines. There is no clean break. The ever-increasing power of the machines is harnessed by ever more reckless humans, manipulating humanity in unexpected ways. Machines manipulating elections or referenda at the bidding of sinister humans… in many ways, that is the worst of possible worlds.

It makes you feel helpless, for one: You realize that nothing you can do on social media, nothing you can say in your blog will matter one iota, as the machines have an infinitely greater capacity to analyze data and assess outcomes.

And it also makes you fearful. AI (for now) has no compassion or conscience. It will lie or make up “fake news” without remorse. It will (for now) do its masters’ bidding, even if those masters are sociopaths.

So no, folks, don’t delude yourselves. Judgment Day may already be here. It’s just coming one little data point, one neural network, one deep learning algorithm at a time.

Posted at 9:03 am
Jan 14 2017
 

I just saw this US Defense Department video about a swarm of high-speed drones released at altitude by an F/A-18. The drones communicated with each other, self-organized, and went on to execute predetermined tasks autonomously.

In case anyone is wondering why I worry about the future of AI, this is a perfect demonstration.

Meanwhile, the Defense Department is also continuing its trials of the Sea Hunter, a 132-ft, 145-ton unmanned, autonomous vessel designed to hunt submarines.

Don’t worry, the brave new world is coming…

Posted at 9:22 pm
Nov 15 2016
 

I just came across this recent conversation with Barack Obama about the challenges of the future, artificial intelligence, machine learning and related topics. A conversation with an intelligent, educated person who, while not an expert in science and technology, is not illiterate in these topics either.

Barack Obama Talks AI, Robo-Cars, and the Future of the World

And now I feel like mourning. I mourn the fact that for many years to come, no such intelligent conversation will likely be heard in the Oval Office. But what do you do when a supremely qualified, highly intelligent President is replaced by a self-absorbed, misogynist, narcissistic blowhard?

Not much, I guess. I think my wife and I will just go and cuddle up with the cats and listen to some Pink Floyd instead.

Posted at 11:35 pm
Oct 11 2013
 

Is this a worthy do-it-yourself neuroscience experiment, or an example of a technology gone berserk, foreshadowing a bleak future?

A US company is planning to ship $99 kits this fall, allowing anyone to turn a cockroach into a remote-controlled cyborg. Educational? Or more like the stuff of bad dreams?

For me, it’s the latter. Perhaps it doesn’t help that I am halfway through reading Margaret Atwood’s The Year of the Flood, sequel to Oryx and Crake, a dystopian science fiction novel set in a bleak future in which humanity destroys itself through the reckless use of biotech and related technologies.

A cockroach may not be a beloved animal. Its nervous system may be too small, too simple for it to feel real pain. Nonetheless, I feel there is something deeply disturbing and fundamentally unethical about the idea of turning a living animal into a remote-controlled toy.

To put it more simply: it creeps the hell out of me.

Posted at 11:49 am
Dec 02 2012
 

I am reading about this “artificial brain” story that has been in the news lately, about a Waterloo team that constructed a software model, Spaun, of a human-like brain with several million neurons.

Granted, several million is not the same as the hundred billion or so neurons in a real human brain, but what they have done still appears to be an impressive result.

I’ve spent a little bit of time trying to digest their papers and Web site. It appears that a core component of their effort is Nengo, a neural simulator. Now the idea of simulating neurons has been at the core of cybernetics for (at least) 60 years, but Nengo adds a new element: its ability to “solve” a neural network and determine the optimal connection weights for a given network to achieve its desired function.
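To illustrate what “solving” a network means, here is a minimal sketch of the idea in plain numpy (my own toy example in the general spirit of the Neural Engineering Framework, not the actual Nengo API): given the response curves of a population of simulated neurons, the connection weights that make the population compute a desired function can be found by regularized least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy population of rate neurons encoding a scalar x in [-1, 1]:
# random gains, biases and +/-1 encoders, rectified-linear responses.
n_neurons, n_samples = 50, 200
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

x = np.linspace(-1, 1, n_samples)
activities = np.maximum(0.0, np.outer(x, encoders * gains) + biases)

# "Solve" the network: find decoding weights d so that activities @ d
# approximates a desired function of the input, here f(x) = x**2,
# via regularized least squares.
target = x ** 2
reg = 0.1 * np.eye(n_neurons)
d = np.linalg.solve(activities.T @ activities + reg, activities.T @ target)

print(np.max(np.abs(activities @ d - target)))  # small residual error
```

The real Nengo, of course, does far more (spiking neuron models, dynamics, and so on); this is only meant to convey the least-squares flavor of the “solving” step.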

The “brain”, then, is a particular Nengo simulation that is designed to model specific areas and functions of the human brain. Their simulation, equipped with a simple 28×28 pixel “eye” and a simulated “arm” with which to draw, can perform some simple activities such as reading and copying some digits and symbols, or memorizing a list.

I am still trying to make up my mind as to whether this result is just a gimmick like Grey Walter’s infamous cybernetic tortoise or a genuine leap forward, but I am leaning towards the latter. Unlike the tortoise, which just superficially mimicked some behavior, Spaun is a genuine attempt to create a machine that actually mimics the functioning of a human brain. Indeed, if this research is scalable, it may mark a milestone that would eventually lead to the ability to create electronic backups of ourselves. Now whether or not that is a Good Thing is debatable of course.

Posted at 6:27 pm
Nov 06 2011
 

In his delightful collection of robot stories, The Cyberiad, Polish science-fiction author Stanislaw Lem tells us how to build a computer (a sentient computer, no less): the most important step is to pour a large number of transistors into a vat and stir.

This mental image popped into my mind as I was reading the last few pages of Andrew Pickering’s The Cybernetic Brain, subtitled Sketches of Another Future.

Beyond presenting a history of (chiefly British) cybernetics (and cyberneticians), the book’s main point is that cybernetics should be resurrected from the dead fringes as a nonmodern (the author’s word) alternative to the hegemony of modern science, and that the cybernetic approach of embracing unknowability is sometimes preferable to the notion that everything can be known and controlled. The author even names specific disasters (global warming, Hurricane Katrina, the war in Iraq) as examples, consequences of the “high modernist” approach to the world.

Well, this is, I take it, the intended message of the book. But what I read from the book is a feely-goody New Age rant against rational (that is, fact and logic-based) thinking, characterized by phrases like “nonmodern” and “ontological theater”. The “high modernist” attitude that the author (rightfully) criticizes is more characteristic of 19th century science than the late 20th or early 21st centuries. And to be sure, the cyberneticians featuring in the book are just as guilty of arrogance as the worst of the “modernists”: after all, who but a true “mad scientist” would use an unproven philosophy as justification for electroshock therapy, or to build a futuristic control center for an entire national economy?

More importantly, the cyberneticians and Pickering never appear to go beyond the most superficial aspects of complexity. They conceptualize a control system for a cybernetic factory with a set of inputs, a set of outputs, and a nondescript blob in the middle that does the thinking; then, they go off and collect puddle water (!) that is supposed to be trained by, and eventually replace, the factory manager. The thinking goes something like this: the skills and experience of a manager form an “exceedingly complex” system. The set of biological and biochemical reactions in a puddle form another “exceedingly complex” system. So, we replace one with the other, do a bit of training, and presto! Problem solved.

These and similar ideas of course only reveal their proponents’ ignorance. Many systems appear exceedingly complex not because they are, but simply because their behavior is governed by simple rules that the mathematician immediately recognizes as higher-order, often nonlinear, differential equations, which readily lead to chaotic behavior. The behavior of the cybernetic tortoise described in Pickering’s book appears complex only because it is unpredictable and chaotic. Its reaction in front of a mirror may superficially resemble the reaction of a cat, say, but that’s where the analogy ends.
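The logistic map is the textbook example of this (my own illustration, not one from the book): a one-line rule whose output looks hopelessly complex, with nearly identical starting points diverging completely after a few dozen steps.

```python
# Logistic map: a trivially simple rule with chaotic behavior at r = 3.9.
def logistic(x0, r=3.9, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

print(logistic(0.200000))  # two almost identical initial conditions...
print(logistic(0.200001))  # ...end up in completely different places
```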

In the end, the author laments that cybernetics has been marginalized by the hegemony of modernist science. I say no; I say cybernetics has been marginalized by its own failure to be useful. Much as cyberneticians would have preferred otherwise, you cannot build a sentient computer by pouring puddle water or a bag of transistors into a vat. The sentient machines of the future may be unknowable in the sense that their actions will be unpredictable, but it will be knowledge that builds them, not New Age ignorance.

Posted at 3:00 pm
Aug 12 2011
 

Back when I was learning the basics of FORTRAN programming in Hungary in the 1970s, I frequently heard an urban legend according to which the sorry state of computer science in the East Bloc was a result of Stalin’s suspicion of cybernetics, which he considered a kind of intellectual swindle from the decadent West. It seemed to make sense, even though it neglected the fact that back in the 1950s, Soviet computers compared favorably to Western machines, and that the technological gap between East and West only began to widen in the 1960s, when a slow, painful decline set in as the Soviets came to rely increasingly on stolen Western technology.

Nonetheless, it appears that Stalin was right after all, insofar as cybernetics is concerned. I always thought that cybernetics was more or less synonymous with computer science, although I really have not given it much thought lately, as the term largely fell into disuse anyway. But now, I am reading an intriguing book titled “The Cybernetic Brain: Sketches of Another Future” by Andrew Pickering, and I am amazed. For instance, until now I had never heard of Project Cybersyn, a project conceived by British cyberneticists to create the ultimate centrally planned economy for socialist Chile in the early 1970s, complete with a futuristic control room. No wonder Allende’s regime failed miserably! The only thing I cannot decide is which was greater: the arrogance or the dishonesty of those intellectuals who created this project. A project that, incidentally, also carried a considerable potential for misuse, as evidenced by the fact that its creators received invitations from other repressive regimes to implement similar systems.


Stalin may have been one of the most prolific mass murderers in history, but he wasn’t stupid. His suspicions concerning cybernetics may have been right on the money.

Posted at 3:03 pm