Jul 24 2010
 

I have been reading Walter Isaacson’s celebrated biography of Albert Einstein, in particular the chapter about Einstein’s beliefs and faith and, within it, the question of free will.

In Einstein’s deterministic universe, according to Isaacson, there is no room for free will. In contrast, physicists who accepted quantum mechanics as a fundamental description of nature could point at quantum uncertainty as proof that non-deterministic systems exist and thus free will is possible.

I boldly disagree with both views.

First, I look out my window at a nearby intersection where there is a set of traffic lights. This set is a deterministic machine. To determine its state, the machine responds to inputs such as the reading of an internal clock, the presence of a car in a left-turn lane, or the pressing of a button by a pedestrian who wishes to cross the street. Now suppose I incorporate into the system a truly random element, such as a relay that closes depending on whether an atomic decay process takes place or not. So now the light set is not deterministic anymore: sometimes it provides a green light allowing a vehicle to turn left, sometimes not; sometimes it responds to a pedestrian pressing the crossing button, sometimes not. So… does this mean that my set of traffic lights suddenly acquired free will? Of course not. A pair of dice does not have free will either.
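The thought experiment is easy to make concrete. Here is a minimal sketch in JavaScript; the function names, phase names, and timings are all my own inventions for illustration, not any real controller’s, and `Math.random()` merely stands in for the atomic-decay relay:

```javascript
// A deterministic traffic-light controller: its next phase is a pure
// function of its inputs (clock reading, car sensor, pedestrian button).
function nextPhase(clockSeconds, carInLeftTurnLane, pedestrianButton) {
  if (pedestrianButton) return "walk";
  if (carInLeftTurnLane) return "left-turn-green";
  // Plain cycle driven by the clock: 30 s green, 30 s red.
  return clockSeconds % 60 < 30 ? "green" : "red";
}

// The same controller with the "random relay" spliced in. The machine
// is no longer deterministic — sometimes it honors the button or the
// turn lane, sometimes not — but it is no closer to having free will.
function nextPhaseWithRelay(clockSeconds, carInLeftTurnLane, pedestrianButton) {
  const relayClosed = Math.random() < 0.5; // stand-in for atomic decay
  if (pedestrianButton && relayClosed) return "walk";
  if (carInLeftTurnLane && relayClosed) return "left-turn-green";
  return clockSeconds % 60 < 30 ? "green" : "red";
}
```

The only difference between the two functions is the injected randomness; nothing about the second one resembles a decision.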

On the other hand, suppose I build a machine with true artificial intelligence. It has not happened yet, but I have no doubt that it is going to happen. Such a machine would acquire information about its environment (i.e., “learn”) while it executes its core program (its “instincts”) to perform its intended function. Often, its decisions would be quite unpredictable, but not because of any quantum randomness. They would be unpredictable because even if you knew the machine’s initial state in full detail, you’d need another machine even more complex than this one to model it and accurately predict its behavior. Furthermore, the machine’s decisions would be influenced by many things, possibly involving an attempt to comply with accepted norms of behavior (i.e., “ethics”) if it helps the machine accomplish the goals of its core programming. Does this machine have free will? I’d argue that it does, at least insofar as the term has any meaning.

And that, of course, is the problem. We all think we know what “free will” means, but do we? Can we actually define a “decision-making system with free will”? Perhaps not. Think about an operational definition: given an internal state I and external inputs E, a free-will machine will make decision D. Of course, the moment you have this operational definition, the machine ceases to have what we usually think of as free will, its behavior being entirely deterministic. And no, a random number generator does not help in this case either. It may change the operational definition to something like: given internal state I and external inputs E, the machine will make decision Di with probability Pi, the sum of all the Pi being 1. But it cannot be this randomization of decisions that bestows a machine with free will; otherwise, our traffic lights here at the corner could have free will, too.
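Both operational definitions fit in a few lines of code, which is rather the point. The mappings and probabilities below are arbitrary, purely illustrative choices:

```javascript
// Deterministic operational definition: decision D is a pure function
// of internal state I and external inputs E. (The rule itself is an
// arbitrary example.)
function decide(internalState, externalInputs) {
  return internalState.mood === "cautious" && externalInputs.risk > 0.5
    ? "decline"
    : "accept";
}

// Randomized operational definition: decision Di is made with
// probability Pi, the Pi summing to 1. Sampling from a distribution
// adds unpredictability, not volition.
function decideRandomly(distribution) {
  // distribution: e.g. [{ decision: "accept", p: 0.7 },
  //                     { decision: "decline", p: 0.3 }]
  let r = Math.random();
  for (const { decision, p } of distribution) {
    if (r < p) return decision;
    r -= p;
  }
  return distribution[distribution.length - 1].decision; // guard against rounding
}
```

Either way, once the definition is written down, there is nothing left for “free will” to refer to.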

So perhaps the question about free will fails for the simple reason that free will is an ill-defined and possibly self-contradictory concept. Perhaps it’s just another grammatically correct phrase that has no more actual meaning than, say, “true falsehood” or “a number that is odd and even” or “the fourth side of a triangle”.

 Posted by at 1:36 am
Jul 04 2010
 

I hate dogma. I hate it even more when a valid scientific observation becomes dogma.

One case concerns the infamous goto statement in programming languages. It is true that a programming language does not need a goto statement in order to be universal. Unfortunately, this led some, most notably among them the late Edsger Dijkstra, to conclude that goto is actually harmful. While it is true that goto can be misused, and that misusing the constructs of a programming language can lead to bad code, I don’t think goto is unique in this regard (it is certainly no more harmful than pointers, global variables, or the side effects of passing variables by reference, just to name a few examples). Nonetheless, with Dijkstra’s letter on record, the making of a dogma was well under way.

And here I am, some 40 years later, trying to write a simple piece of code the logic of which flows like this:

LET X = A
LABEL:
Do something using X
IF some condition is not satisfied THEN LET X = B and GOTO LABEL

The condition, in particular, is always satisfied when X = B.

Yes, I know how to rewrite the above code using a loop construct, to satisfy structured programming purists. But why should I have to, when the most natural way to express this particular algorithm is through the use of a conditional jump, not a loop? Oh wait… it’s because someone who actually believes in dogma prevailed when JavaScript was designed, and therefore, goto never made it into the language.
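For what it’s worth, here is the loop rewrite the purists would have me produce. The helpers `doSomething` and `isSatisfied` are hypothetical stand-ins for the real work and the real condition:

```javascript
// The goto-style logic above, contorted into the loop that JavaScript
// forces on us. A is tried first; if the condition is not satisfied,
// we "GOTO LABEL" with x = B, and since the condition is always
// satisfied when x === B, the loop terminates.
function runWithFallback(A, B, doSomething, isSatisfied) {
  let x = A;
  while (true) {      // stands in for LABEL:
    doSomething(x);
    if (isSatisfied(x)) break;
    x = B;            // LET X = B and GOTO LABEL
  }
}
```

The loop is equivalent, but the `while (true)` plus `break` arguably obscures the original intent rather than clarifying it.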

 Posted by at 7:01 pm
Dec 22 2009
 

I have done many things in my misguided past as a programmer, but strangely, I never did much work with XML. Which is why a recent annoyance turned into an interesting learning opportunity.

I usually watch TV on my computer. (This is why I see more TV than many people I know… not because I am a TV junkie who really “watches” it; I am actually working, but I have, e.g., CNN running in the background, in a small window, and I do occasionally pay attention when I see something unusual. Or change to a channel with The Simpsons.) For years, I’ve been using various ATI All-In-Wonder cards. (No, I don’t recommend them anymore; whereas in the past they used to attach a tuner to some of their really high-end cards, this is no longer the case, and the base graphics hardware of their current crop of AIW cards is quite lame. Their current software sucks, too.) The old ATI multimedia program I am using, while far from perfect, is fairly robust and reliable, and among other things, it comes with a built-in program guide feature. A feature that downloads programming information from an online server.

Except that, as of last week, it was no longer able to do so; the server refused the request. Several customers complained, but to no avail; they were not even able to get through to the right people.

So what is a poor programmer to do? I have known about Schedules Direct, the fee-based but non-profit, low-cost replacement of what used to be a free service from Zap2It, providing the ability to download TV guide data for personal use. The information from Schedules Direct comes in the form of XML. The ATI multimedia program stores its data in a Paradox database. In theory, the rest is just a straightforward exercise of downloading the data and loading it into the Paradox tables, and presto: one should have updated programming information.

Indeed, things would be this simple if there were not several hurdles along the way.

First, the Paradox database is password-protected. Now Paradox passwords are a joke, especially since well-known backdoor passwords exist. Yet it turns out that those backdoor passwords work only with the original Borland/Corel/whatever drivers… third party drivers, e.g., the Paradox drivers in Microsoft Access 2007, do not recognize the backdoor passwords. Fortunately, cracking the password is not hard; I used Thegrideon Software’s Paradox Password program for this purpose, and (after payment of the registration fee, of course) it did the trick.

Second, the Microsoft drivers are finicky, and may not allow write access to the Paradox tables. This was most annoying, since I didn’t know the cause. Eventually, I loaded the tables on another machine that never saw the original Borland Database Engine, but did have Access 2007 installed (hence my need for a “real” password, not a backdoor one), and with this machine, I was able to write into the files… not sure if it was due to the absence of the BDE, the fact that I was using Office 2007 as opposed to Office 2003, or some other reason.

So far so good… Access can now write into the Paradox tables, and Access can read XML; after all, Microsoft is all about XML these days, right? Not so fast… That’s when I ran into my third problem, namely that Access cannot read XML attributes, whereas a lot of the programming information (including such minor details as the channel number or start time) is provided in attribute form by Schedules Direct (or, to be more precise, by the XMLTV utility that I use to access Schedules Direct). The solution: use XSLT to transform the source XML into a form that Access can digest properly.
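The transformation boils down to promoting attributes into child elements. A minimal sketch of the stylesheet follows; the `programme`, `start`, and `channel` names follow the XMLTV format as I understand it, but treat the details as illustrative rather than a drop-in solution:

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/tv">
    <tv>
      <xsl:for-each select="programme">
        <programme>
          <!-- Promote the attributes Access cannot see into elements. -->
          <start><xsl:value-of select="@start"/></start>
          <channel><xsl:value-of select="@channel"/></channel>
          <!-- Copy the existing child elements (title, desc, …) as-is. -->
          <xsl:copy-of select="*"/>
        </programme>
      </xsl:for-each>
    </tv>
  </xsl:template>
</xsl:stylesheet>
```

Once every piece of information lives in an element rather than an attribute, Access imports the file without complaint.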

With this and a few lines of SQL, I reached the finish line, more-or-less: I was able to update the Paradox tables, and the result appears digestible to the ATI media center application… though not to the accompanying Gemstar program grid application, which still crashes, but that’s okay, I never really used it anyway.

And I managed to accomplish all this just in time to find out that suddenly, the ATI/Gemstar update server is working again… once again, I can get programming information from them. More-or-less… a number of channels have been missing from the lineup for a long time now, so I may prefer to use my solution from now on anyway. Perhaps when I have a little time, I’ll find out what causes the crash (I have some ideas) and the program grid application will work, too.

Needless to say, I know a lot more about XML and XSLT than I did 24 hours ago.

 Posted by at 7:41 pm
Aug 02 2009
 

Someone wrote to me about inkblots. Apparently, the topic has become hot, in response to the decision by Wikipedia editors to make the Rorschach blots available online. Attempts by some to suppress this information using, among other things, questionable copyright claims, are of a distinctively Scientologist flavor (made all the more curious by Scientology’s rejection of conventional psychoanalysis.) They do have a point, though… the validity of the test could be undermined if test subjects were familiar with the inkblots and evaluation methods. On the other hand, one cannot help but wonder why such an outdated test is still being used in daily practice. It certainly gives credence to those who consider psychoanalysis a pseudoscience.

I am also wondering… suppose I build a sophisticated software system with optical pattern recognition, associative memory, and a learning algorithm. Suppose the software is buggy, and I wish to test it. Would I be testing it by running the recognition program on meaningless symmetric patterns? The behavior of the system would be random, but perhaps not completely so; it may be a case of ordered chaos with a well-defined attractor. Would running the recognition program on a few select images reveal anything about that attractor? Would it reveal enough information to determine reliably whether the attractor differs from whatever would be considered “normal”?

More importantly, do practitioners of the Rorschach test know about chaos dynamics and do they have the correct (mathematical, computer) tools to analyze their findings?

I am also wondering how such a test could be conceivably normalized to account for differences in life experience (or, to use my software system example, for differences in the training of the learning algorithm) but I better shut up now before my thoughts turn into opinionated rantings about a subject that I know precious little about.

 Posted by at 2:36 pm
Jun 25 2009
 

A few days ago, I upgraded to Skype 4.

I use Skype for overseas telephone calls a lot. I also call a few people occasionally using Skype-to-Skype. And, every once in a while, I use it to chat with people.

I have heard bad things about Skype 4, so I was not in a hurry to upgrade. But when, the other day, the software notified me that a major upgrade was available, I decided to give it a try.

Wish I didn’t.

The installation completed successfully, and Skype worked fine, but… well, it’s best if I just quote a few sentences from Skype’s own Web site where the new version was announced:

  • Skype 4.0 should certainly participate in the worst software redesign conquest.
  • Worst interface ever created for Skype and i’ve been using it ever since the 1st beta. Please dump this garbage
  • Skype 4.0 has an extreme ugly layout.
  • The UI of version 4 is a terrible disappointment. No matter how I tweak, it still consumes more screen real-estate than version 3 did.
  • Who are you people and what were you thinking when you released this kludge.
  • ABSOLUTELY TERRIBLE INTERFACE
  • Skype 4.0.x is PAINFUL and FRUSTRATING TO USE.
  • I think this is the ‘vista’ of skype releases.
  • What where you thinking. Did you guys outsource? This version has all the hallmarks of a design by committee.
  • I truly do not like the new 4.0 version! I’ve tried it for a week, hoping to get used to it, and i’m just left cursing. I am reverting because…

I share these sentiments. This morning, I gave up and downgraded to the 3.8 version. Which is working fine, as always.

 Posted by at 1:39 pm
May 31 2009
 

I’ve been learning a lot about Web development these days: Dojo and Ajax, in particular. It’s incredible what you can do in JavaScript nowadays: sophisticated desktop applications running inside a Web browser. I am spending a lot of time building a complex prototype application that has many features associated with desktop programs, including graphics, pop-up dialogs, menus, and more.

I’ve also been learning a lot about the intricacies of Brans-Dicke gravity and about the parameterized post-Newtonian (PPN) formalism. Brans-Dicke theory is perhaps the simplest modified gravity theory that there is, and I have to explain to someone why the gravity theory that I spend time working on doesn’t quite behave like Brans-Dicke theory. In the process, I am finding out things about Brans-Dicke theory that I never knew.

And, I’ve also been doing a fair bit of SCPI programming this month. SCPI is a standardized way for computers to talk to measurement instrumentation, and an old program I wrote used to use a non-standard way… not anymore.

Meanwhile, in all the spare time that I have left, I’ve been learning Brook+, a supercomputer programming language based on C… that is because my new test machine is a supercomputer, sort of, with its graphics card that doubles as a numeric vector processor capable in theory of up to a trillion single precision floating point operations per second… and nearly as many in practice, in the test programs that I threw at it.

I’m also learning a little more about the infamous cosmological constant problem (why is the cosmological constant at least 50 orders of magnitude too small, yet not exactly zero?) and about quantum gravity.

As I said in the subject… busy days. Much more fun though than following the news. Still, I did catch in the news that Susan Boyle lost in Britain’s Got Talent… only because an amazing dance group won.

 Posted by at 3:07 am
Jan 27 2009
 

Long before blogs, long before the Web even, there was an Internet and people communicated via public forums (fora?), Usenet foremost among them.

Yet I stopped using Usenet about a decade ago. Here is a good example as to why. Excerpts from an exchange:

You will have more success on Usenet if you learn and follow the normal Usenet posting conventions.

About posting conventions: where did I stray from them? I do indeed want to respect the list rules.

Have a look at <http://cfaj.freeshell.org/google/>

Got it: thanks.

You failed to appropriately quote the message that you are responding to. See the FAQ and the more detailed explanation of posting style that it links to. Then, if the explanation provided is not sufficiently clear, ask for clarification.

I am afraid that you have not yet ‘got it’. You have gone from not quoting the message you are responding to, to top-posting and failing to appropriately trim the material that you are quoting.

If you had been told what you did wrong, that would, hopefully, eliminate one class of error from your future posts. You were told where to read about conventions, which *should* eliminate *all* of the well-known errors.

You are forgiven if you thought that the thread from which I excerpted these snotty remarks was about Usenet’s “netiquette”. But it wasn’t. It was all in response to a very polite and sensible question about ways to implement a destructor in JavaScript.

I guess my views are rather clear on the question as to which people harm Usenet more: those who stray from flawless “netiquette”, or those who feel obliged to lecture them. I have yet to understand why it is proper “netiquette” to flood a topic with such lectures instead of limiting responses to the topic at hand, and responding only when one actually knows the answer. I guess that would be too helpful, and helping other people without scolding them is not proper “netiquette”?

 Posted by at 1:31 pm