Apr 14 2012
 

Exactly 100 years ago, at this very hour, what was then the largest, most advanced ship in the world hit an iceberg and sank, taking the lives of some 1,500 people.

I first heard about Titanic when I was still a kindergartener. No surprise, perhaps. My father was born in 1906, which means he was almost 6 years of age when Titanic sank. Even in his hometown of Temesvár (present-day Timisoara, Romania) the sinking of the Titanic was big news. Everyone must have been talking about the disaster for days, weeks, months to come, and this must have left quite an impression on my father, who was always interested in things technical. It was from my father that I first heard about things such as an iceberg having 90% of its mass underwater, spark-gap transmitters, Morse code and SOS signals; all in the context of Titanic of course.

I also had a great uncle who was born in 1894. He was the one who taught me how to play chess. He was a young adult already when Titanic sailed… much to his misfortune, it also meant that he was a young adult in 1914, which meant serving in the first World War.

Titanic was a marvelous ship. She was the pinnacle of high-tech engineering. I find it especially haunting that her lights stayed on almost until the very end, thanks to her redundant electrical systems and, just as importantly, her heroic engineers.

Yet she went down, and two years later, the world that created her also went down in flames. I am reminded of a computer game from the 1990s, The Last Express (produced, incidentally, the same year as James Cameron’s Titanic). In this game, the player is tasked with solving a series of murder and conspiracy mysteries… on board the very last Orient Express to travel from Paris to Istanbul before the outbreak of the Great War.

I hope we learned more than just the art of building safer ships in the past 100 years.

 Posted by at 9:15 pm
Apr 14 2012
 

I just came across this delightful imaginary conversation between a physicist and an economist about the unsustainability of perpetual economic growth.

The physicist uses energy production in his argument: growth at present rates means that in a few hundred years, we’ll produce enough energy to start boiling the oceans. And this is not something that can be addressed easily by the magic of technology. When waste heat is produced, the only way to get rid of it is to radiate it away into space. After about 1400 years of continuous growth, the Earth will be radiating more energy (all man-made) than the Sun, which means it would have to be a lot hotter than the Sun, on account of its smaller size. And in about 2500 years, we would exceed the thermal output of the whole Milky Way.
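The arithmetic behind these figures is easy to check. Here is a quick sketch in Python; note that the 2.3% annual growth rate (a tenfold increase per century) is my assumption for the sake of the exercise, as are the round values for the luminosities of the Sun and the Milky Way:

```python
import math

P0 = 1.5e13          # present human energy production, W (15 TW)
growth = 1.023       # assumed 2.3% annual growth, i.e. x10 per century
L_sun = 3.8e26       # total luminosity of the Sun, W (round value)
L_milky_way = 5e36   # rough luminosity of the Milky Way, W

def years_to_reach(P_target, P_start=P0, g=growth):
    """Years of compound growth until production reaches P_target."""
    return math.log(P_target / P_start) / math.log(g)

print(round(years_to_reach(L_sun)))        # compare with the ~1400 years quoted
print(round(years_to_reach(L_milky_way)))  # compare with the ~2500 years quoted
```

The exact crossover years shift by a century or two depending on the assumed growth rate, but the absurdity of the conclusion does not.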

This, of course, is nonsense, which means terrestrial energy production will be capped eventually by basic physics. If GDP were to continue to grow nonetheless, it would mean that the price of energy relative to other stuff would decrease to zero. This is also nonsense, since a limited resource cannot become arbitrarily cheap. But that means GDP growth must also be capped.

What I liked about this argument is that it is not emotional or ideological; it’s not about hugging trees or hating capitalism. It is about basic physics and elementary logic that is difficult to escape. In fact, it can be put in the form of equations. Our present energy production \(P_0\) is approximately 15 TW, which is about 0.002% of the Sun’s output that reaches the Earth:

\begin{align}
P_0&\simeq 1.5 \times 10^{13}~\rm{W},\\
P_\odot&\simeq 7 \times 10^{17}~\rm{W},\\
\eta_0&=P_0/P_\odot \sim 0.002\%.
\end{align}

For any other value of \(\eta\), there is a corresponding value of \(P\):

\begin{align}
P=\eta P_\odot.
\end{align}

Now all we need is to establish a maximum value of \(\eta\) that we can live with; say, \(\eta_{\rm max}=1\%\). This tells us the maximum amount of energy that we can produce here on Earth without cooking ourselves:

\begin{align}
P_{\rm max}=\eta_{\rm max}P_\odot.
\end{align}

On the economic side of this argument, there is the percentage of GDP that is spent on energy. In the US, this is about 8%. For lack of a better value, let me stick to this one:

\begin{align}
\kappa_0\sim 8\%.
\end{align}

How low can \(\kappa\) get? That may be debatable, but it cannot become arbitrarily low. So there is a value \(\kappa_{\rm min}\).

The rest is just basic arithmetic. GDP is proportional to the total energy produced, divided by \(\kappa\):

\begin{align}
{\rm GDP}&\propto \frac{\eta}{\kappa}P_\odot,\\
{\rm GDP}_{\rm max}&\propto \frac{\eta_{\rm \max}}{\kappa_{\rm min}}P_\odot,
\end{align}

And in particular:

\begin{align}
{\rm GDP}_{\rm max}&=\frac{\eta_{\rm max}\kappa_0}{\eta_0\kappa_{\rm min}}{\rm GDP}_0,
\end{align}

where \({\rm GDP}_0\) is the present GDP.

We know \(\eta_0\sim 0.002\%\). We know \(\kappa_0=8\%\). We can guess that \(\eta_{\rm max}\lesssim 1\%\) and \(\kappa_{\rm min}\gtrsim 1\%\). This means that

\begin{align}
{\rm GDP}_{\rm max}\lesssim 4,000\times {\rm GDP}_0.
\end{align}

This is it. A hard limit imposed by thermodynamics. But hey… four thousand is a big number, isn’t it? Well… sort of. At a constant 3% rate of annual growth, the economy will increase to four thousand times its present size in a mere 280 years or so. One may tweak the numbers a little here and there, but the fact that physics imposes such a hard limit remains. The logic is inescapable.
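All of which can be verified with a few lines of Python, using the estimates quoted above:

```python
import math

eta_0 = 0.00002      # present energy production as a fraction of incident solar power (0.002%)
kappa_0 = 0.08       # present share of GDP spent on energy (8%)
eta_max = 0.01       # assumed tolerable ceiling on eta (1%)
kappa_min = 0.01     # assumed floor on the energy share of GDP (1%)

# GDP_max / GDP_0 = (eta_max * kappa_0) / (eta_0 * kappa_min):
ratio = (eta_max * kappa_0) / (eta_0 * kappa_min)
print(ratio)  # ~4000

# Years of constant 3% annual growth needed to grow by this factor:
years = math.log(ratio) / math.log(1.03)
print(round(years))  # ~281
```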

Or is it? The word “escape” may be appropriate here for more than one reason, as there is one obvious way to evade this argument: escape into space. In a few hundred years, humanity may have spread throughout the solar system, and amounts of energy sufficient to boil the Earth’s oceans may be powering human colonies in the hostile (and cold!) environments near the outer planets.

That is, if humans are still around a few hundred years from now. One can only hope.

 Posted by at 9:59 am
Apr 12 2012
 

Our second short paper has been accepted for publication in Physical Review Letters.

I have been involved with Pioneer 10 and 11 in some fashion since about 2002, when I first began corresponding with Larry Kellogg about the possibility of resurrecting the telemetry data set. It is thanks to Larry’s stamina and conscientiousness that the data set survived.

I have been involved actively in the research of the Pioneer anomaly since 2005. Seven years! Hard to believe.

This widely reported anomaly concerns the fact that when the orbits of Pioneer 10 and 11 are accurately modeled, a discrepancy exists between the modeled and measured frequency of the radio signal. This discrepancy can be resolved by assuming an unknown force that pushes Pioneer 10 and 11 towards the Earth or the Sun (from that far away, these two directions nearly coincide and cannot really be told apart).

One purpose of our investigation was to find out the magnitude of the force that arises as the spacecraft radiates different amounts of heat in different directions. This is the concept of a photon rocket. A ray of light carries momentum. Hard as it may appear to believe at first, when you hold a flashlight in your hands and turn it on, the flashlight will push your hand backwards by a tiny force. (How tiny? If it is a 1 W bulb that is perfectly efficient and perfectly focused, the force will be equivalent to about one third of one millionth of a gram of weight.)
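The flashlight arithmetic is easy to verify: a beam of power \(P\) carries momentum flux \(P/c\), so that is the recoil force.

```python
# Recoil force of a perfectly efficient, perfectly collimated 1 W light source:
c = 299_792_458.0   # speed of light, m/s
g = 9.81            # standard gravity, m/s^2

P = 1.0                   # beam power, W
F = P / c                 # recoil force, N
equivalent_mass = F / g   # mass whose weight equals this force, kg

print(F)                # a few nanonewtons
print(equivalent_mass)  # ~3.4e-10 kg, i.e. about a third of a microgram
```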

On Pioneer 10 and 11, we have two main heat sources. First, there is electrical heat: all the instruments on board use about 100 W of electricity, most of which is converted into heat. Second, electricity is produced, very inefficiently, by a set of four radioisotope thermoelectric generators (RTGs); these produce more than 2 kW of waste heat. All this heat has to go somewhere, and most of it will be dissipated preferentially in one direction, behind the spacecraft’s large dish antenna, which is always pointed towards the Earth.

The controversial question was, how much? How efficiently is this heat converted into force?

I first constructed a viable thermal model for Pioneer 10 back in 2006. I presented results from custom ray-tracing code at the Pioneer Explorer Collaboration meeting at the International Space Science Institute in Bern, Switzerland in February 2007:

With this, I confirmed what had already been suspected by others—notably, Katz (Phys. Rev. Letters 83:9, 1892, 1999); Murphy (Phys. Rev. Letters 83:9, 1890, 1999); and Scheffer (Phys. Rev. D, 67:8, 084021, 2003)—that the magnitude of the thermal recoil force is indeed comparable to the anomalous acceleration. Moreover, I established that the thermal recoil force is very accurately described as a simple linear combination of heat from two heat sources: electrical heat and heat from the RTGs. The thermal acceleration \(a\) is, in fact

$$a=\frac{1}{mc}(\eta_{\rm rtg}P_{\rm rtg} + \eta_{\rm elec}P_{\rm elec}),$$

where \(c\simeq 300,000~{\rm km/s}\) is the speed of light, \(m\simeq 250~{\rm kg}\) is the mass of the spacecraft, \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~\rm {W}\) are the RTG heat and electrical heat, respectively, and \(\eta_{\rm rtg}\) and \(\eta_{\rm elec}\) are “efficiency factors”.

This simple force model is very useful because it can be incorporated directly into the orbital model of the spacecraft.

In the years since, the group led by Gary Kinsella constructed a very thorough and comprehensive model of the Pioneer spacecraft, using the same software tools (not to mention considerable expertise) that they use for “live” spacecraft. With this model, they were able to predict the thermal recoil force with the greatest accuracy possible, at different points along the trajectory of the spacecraft. The result can be compared directly to the acceleration that is “measured”; i.e., the acceleration that is needed to model the radio signal accurately:

In this plot, the step-function-like curve (thick line) is the acceleration deduced from the radio signal frequency. The data points with vertical error bars represent the recoil force calculated from the thermal model. They are rather close. The relatively large error bars are due primarily to the fact that we simply don’t know what happened to the white paint that coated the RTGs. These were hot (the RTGs were sizzling hot even in deep space) and subjected to solar radiation (ultraviolet light and charged particles), so the properties of the paint may have changed significantly over time… we just don’t know how. The lower part of the plot shows just how well the radio signal is modeled; the average residual is less than 5 mHz. The actual frequency of the radio signal is 2 GHz, so this represents a modeling accuracy of better than one part in 100 billion, over the course of nearly 20 years.

In terms of the above-mentioned efficiency factors, the model of Gary’s group yielded \(\eta_{\rm rtg}=0.0104\) and \(\eta_{\rm elec}=0.406\).
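Plugging these efficiency factors into the force model quoted earlier, together with the nominal heat outputs \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~{\rm W}\) (round values; the RTG output actually decayed over the mission), gives an acceleration right in the range of the anomaly, which was reported at roughly \(8.7\times 10^{-10}~{\rm m/s}^2\):

```python
# a = (eta_rtg * P_rtg + eta_elec * P_elec) / (m * c), with nominal values:
c = 3.0e8          # speed of light, m/s
m = 250.0          # spacecraft mass, kg
P_rtg = 2000.0     # RTG waste heat, W (nominal)
P_elec = 100.0     # electrical heat, W (nominal)
eta_rtg = 0.0104   # thermal model efficiency factor, RTG heat
eta_elec = 0.406   # thermal model efficiency factor, electrical heat

a = (eta_rtg * P_rtg + eta_elec * P_elec) / (m * c)
print(a)  # ~8.2e-10 m/s^2
```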

But then, as I said, we also incorporated the thermal recoil force directly into the Doppler analysis that was carried out by Jordan Ellis. Jordan found best-fit residuals at \(\eta_{\rm rtg}=0.0144\) and \(\eta_{\rm elec}=0.480\). These are somewhat larger than the values from the thermal model. But how much larger?

We found that the best way to answer this question was to plot the two results in the parameter space defined by these two efficiency factors:

The dashed ellipse here represents the estimates from the thermal model and their associated uncertainty. The ellipse is elongated horizontally, because the largest source of uncertainty, the degradation of RTG paint, affects only the \(\eta_{\rm rtg}\) factor.

The dotted ellipse represents the estimates from radio signal measurements. The formal error of these estimates is very small (the error ellipse would be invisibly tiny). These formal errors, however, are calculated by assuming that the error in every one of the tens of thousands of Doppler measurements arises independently. In reality, this is not the case: the Doppler measurements are insanely accurate; any errors that occur are the result of systematic mismodeling, e.g., caused by our inadequate knowledge of the solar system. This inflates the error ellipse, and that is what is shown in this plot.

Looking at this plot was what allowed us to close our analysis with the words, “We therefore conclude that at the present level of our knowledge of the Pioneer 10 spacecraft and its trajectory, no statistically significant acceleration anomaly exists.”

Are there any caveats? Not really, I don’t think, but there are still some unexplored questions. Applying this research to Pioneer 11 (I expect no surprises there, but we have not done this in a systematic fashion). Modeling the spin rate change of the two spacecraft. Making use of radio signal strength measurements, which can give us clues about the precise orientation of the spacecraft. Testing the paint that was used on the RTGs in a thermal vacuum chamber. Accounting for outgassing. These are all interesting issues but it is quite unlikely that they will alter our main conclusion.

On several occasions when I gave talks about Pioneer, I used a slide that said, in big friendly letters,

PIONEER 10/11 ARE THE MOST PRECISELY NAVIGATED DEEP SPACE CRAFT TO DATE.

And they confirmed the predictions of Newton and Einstein, with spectacular accuracy, by measuring the gravitational field of the Sun in situ, all the way out to about 70 astronomical units (one astronomical unit being the distance of the Earth from the Sun).

 Posted by at 11:10 am
Apr 10 2012
 

I was reading about a place called Göbekli Tepe today.

This is a place in southeastern Turkey. It is the site of an archeological excavation; they are exploring the ruins of an old temple.

The ruins of a really old temple. Really, really, really old.

How old? Well… when the first Egyptian pyramid was still on the drawing board, Göbekli Tepe was already some 6,000 years of age. Indeed, when Göbekli Tepe was built, the place where I now live, Ottawa, was still covered by the Champlain Sea. The oldest ruins at Göbekli Tepe are 11,500 years old, give or take a few centuries.

That is an astonishing age for a major stone structure like this. Wikipedia tells me that it was built by hunter-gatherers, but I have a hard time accepting that hypothesis: stone construction on this scale requires highly specialized skills, not to mention the organization of the necessary labor force. Maybe I lack imagination, but I just can’t see how hunter-gatherer tribes, even if they have permanent village settlements, would be able to accomplish something on this scale.

But if it wasn’t hunter-gatherers, who were they? What kind of civilization existed in that part of the world 11,500 years ago that we know nothing about?

 Posted by at 8:15 pm
Apr 08 2012
 

I have no delusions about my abilities as a graphic artist, but hey, it’s from the heart. Happy Easter, everyone!

As to why we choose to celebrate the gruesome death on the cross and subsequent resurrection of a young man some 2,000 years ago, one whose sole crime was that he was preaching love and understanding among neighbors, with bunny rabbits laying gaudy-colored eggs and such nonsense, I have no idea. But then, I am just the clueless atheist here, so what do I know?

I only wish more people actually listened to that young man’s message, instead of choosing hatred and violence. The world would indeed be a better place.

 Posted by at 9:37 am
Mar 27 2012
 

The sad story of Nortel’s demise is known to just about every Canadian. I know several people who were personally affected quite badly by Nortel’s bankruptcy.

What I did not expect is to meet a real, living, flesh-and-blood Nortel employee, but that’s just who I met tonight in the form of a lady who happened to be sitting across from me at a large dinner table. I thought Nortel employees were an extinct species… it turns out that although they are critically threatened and will go extinct soon, a few of them are still around.

Not for much longer, mind you. The lady told me that yes, she is still a Nortel employee… for three more days.

 Posted by at 11:42 pm
Mar 27 2012
 

The cover story in the March 3 issue of New Scientist is entitled The Deep Future: A Guide to Humanity’s Next 100,000 Years.

I found this cover story both shallow and pretentious. As if we could predict even the next one hundred years, never mind a hundred thousand.

They begin with an assurance that humans will still be around 100,000 years from now. They base this on the observation that well-established species tend to hang around much longer. True but… what we don’t have in the Earth’s prehistory is a species with the technological capability to destroy the Earth. This is something new.

So new, in fact, that we cannot draw far-reaching conclusions. Consider, for instance: nuclear weapons have been around for 67 years. In these 67 years, we managed not to start an all-out nuclear war. Assuming, for the sake of simplicity, that all years are created equal, the only thing we can conclude from this, if my math is right, is that the probability of nuclear war in any given year is 4.37% or less, “19 times out of 20” as statisticians sometimes say. Fair enough… but that does not tell us much about the “deep future”. Projected to 100,000 years, all we can tell on the basis of this 67-year sample period is that the probability of all-out nuclear war is less than 99.99……99854…%, where the number of ‘9’-s between the decimal point and the digit ‘8’ is 1941. Not very reassuring.
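For those who wish to check my math, here is the calculation in Python. The trick is to work in logarithms, since the 100,000-year survival probability itself underflows a floating-point number:

```python
import math

# Upper bound on the annual probability of all-out nuclear war, given only
# that 67 years passed without one, at 95% confidence ("19 times out of 20"):
years_observed = 67
p_max = 1.0 - 0.05 ** (1.0 / years_observed)
print(p_max)  # ~0.0437, i.e. 4.37% per year

# log10 of the probability of surviving 100,000 years at that annual bound:
log10_survival = 100_000 * math.log10(1.0 - p_max)
print(log10_survival)  # around -1942: a survival probability of ~10^-1942
```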

The authors of the New Scientist piece would probably tell us that even if nuclear war did break out, it would not wipe out humanity in its entirety, and they probably have a point, but it misses my point: namely the futility of making a 100,000-year prediction on the basis of at most a few thousand years of known history.

And while nuclear war may be a very scary prospect, it’s by far not the scariest. There are what some call technological singularities: developments in science and technology that are so profound, they would change the very basics of our existence. Artificial intelligence, for starters… reading about Google’s self-driving car or intelligent predictive search algorithms, about IBM’s Watson, or even Apple’s somewhat mundane Siri, I cannot help but wonder: is the era of true AI finally just around the corner? And when true AI arrives, how far behind is the nightmare of Skynet from the Terminator films?

Or how about genetically altered superhumans? They mention this, but only in passing: “unless, of course, engineered humans were so superior that they obliterated the competition.” Why is this scenario considered unlikely? Sometimes I wonder if we may perhaps be just one major war away from this: a warring party in a precarious situation in a prolonged conflict breeding genetically modified warriors. Who, incidentally, need not even look human.

I could go on of course, about “gray goo”, bioterrorism, and other doomsday scenarios, but these just underline my point: it is impossible to predict the course of history even over the next 100 years, never mind the next 100,000. This is true even from a mathematical perspective: exceedingly complex systems with multiple nonlinear feedback mechanisms can undergo catastrophic phase transitions that are almost impossible to predict or prevent. Witness the recent turmoil in financial markets.

Surprisingly, this overly optimistic New Scientist feature is very pessimistic on one front: space exploration. They first quote a figure of 115,000 years that would be required to reach Alpha Centauri at 25,000 miles an hour; this, of course, is a typical velocity for a chemically fueled rocket. The possibility of a better technology is touched only briefly: “Even if we figure out how to travel at the speeds required […], the energy required to get there is far beyond our means”. Is that so? They go on to explain that, “[f]or the next few centuries, then, if not thousands of years hence, humanity will be largely confined to the solar system”. Centuries if not thousands of years? That is far, far, far short of the 100,000 years that they are supposed to be discussing.
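Their 115,000-year figure is easy to reproduce, give or take a couple of thousand years depending on the distance one assumes for Alpha Centauri (about 4.37 light years):

```python
# Travel time to Alpha Centauri at a chemical-rocket velocity of 25,000 mph:
light_year_km = 9.4607e12              # kilometers in one light year
distance_km = 4.37 * light_year_km     # distance to Alpha Centauri
speed_kms = 25_000 * 1.609344 / 3600   # 25,000 mph in km/s (~11.2 km/s)

seconds = distance_km / speed_kms
years = seconds / (365.25 * 24 * 3600)
print(round(years))  # roughly the 115,000-year figure quoted
```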

I called this cover feature shallow and pretentious, but perhaps I should have called it myopic. In that sense, it is no different from predictions made a little over a century ago, in 1900, about the coming “century of reason”. At least our predecessors back then had the good sense to confine their fortunetelling to the next 100 years.

 Posted by at 10:11 am
Mar 20 2012
 

I am holding in my hands an amazing book. It is a big and heavy tome, coffee table book sized, with over 600 lavishly illustrated pages. And it took more than 30 years for this book to appear finally in English, but the wait, I think, was well worth it.

The name of Charles Simonyi, Microsoft billionaire and space tourist, is fairly well known. What is perhaps less well-known in the English speaking world is that his father, Karoly Simonyi, was a highly respected professor of physics at the Technical University of Budapest… that is, until he was deprived of his livelihood by a communist regime that considered him ideologically unfit for a teaching position.

Undeterred, Simonyi then spent the next several years completing his magnum opus, A Cultural History of Physics, which was eventually published in 1978.

Simonyi was both a scientist and a humanist. In his remarkable, unique book, history and science march hand in hand from humble beginnings in Egypt, through the golden era of the classical world, through the not so dark Dark Ages, on to the scientific revolution that began in the 1600s and culminated in the discoveries of Lagrangian mechanics, thermodynamics, statistical physics, electromagnetism and, ultimately, relativity theory and quantum physics.

And when I say lavishly illustrated, I mean it. Illustrations, including diagrams, portraits, and facsimile pages from original publications, decorate nearly every single page of Simonyi’s tome. Yet it is fundamentally a book about physics: the wonderfully written narrative is well complemented by equations that translate ideas into the precise language of mathematics.

I once read this book, my wife’s well-worn copy, from cover to cover, back in the mid-1990s. I feel that it played a very significant role in helping me turn back towards physics.

Simonyi’s book has seen several editions in the original Hungarian, and it was also translated into German, but until now, no English-language translation was available. This is perhaps not surprising: it must be a very expensive book to produce, and despite its quality, the large number of equations must surely be a deterrent to many a prospective buyer. But now, CRC Press finally managed to make an English-language version available.

(Oh yes, CRC Press. I hated them for so many years, after they sued Wolfram and had Mathworld taken off-line. I still think that was a disgusting thing for them to do. I hope they spent enough on lawyers and lost enough sales due to disgusted customers to turn their legal victory into a Pyrrhic one. But that was more than a decade ago. Let bygones be bygones… besides, I really don’t like Wolfram these days that much anyway, software activation and all.)

Charles Simonyi played a major role in making this edition happen. I guess he may also have spent some of his own money. And while I am sure he can afford a loss, I hope the book does well… it deserves to be successful.

For some reason, the book was harder to obtain in Canada than usual. It is not available on amazon.ca; indeed, I pre-ordered the book last fall, but a few weeks ago, Amazon notified me that they are unable to deliver this item. Fortunately, CRC Press delivers in Canada, and the shipping is free, just like with Amazon. The book seems to be available and in stock on the US amazon.com Web site.

And it’s not a pricey one: at less than 60 dollars, it is quite cheap, actually. I think it’s well worth every penny. My only disappointment is that my copy was printed in India. I guess that’s one way to shave a few bucks off the production cost, but I would have paid more happily for a copy printed in the US or Canada.

 Posted by at 4:38 pm
Mar 18 2012
 

The Weather Network has this neat plot every ten minutes, showing the anticipated minimum and maximum temperatures for the next two weeks.

The forecast for Wednesday is off the chart. It is going to be so much hotter than the two-week average that it did not fit into the plot area.

Of course it could be just nonsense. They did predict 7 degrees Centigrade as the overnight low. It went down to 2 in foggy areas (most of Ottawa, I guess). Then again… even if it turns out to be 10 degrees colder than the predicted 24, it’s still a remarkably mild winter day. March 21, after all, is supposed to be the last day of winter. And I may have to fire up the A/C.

And it’s not just Ottawa. For Winnipeg (Winnipeg, for crying out loud!) today’s forecast is 28. A once in a thousand years event, says The Weather Network. Either that or the new norm, if global warming is to be believed. (Not necessarily bad news for many Canadians.)

 Posted by at 8:53 am
Mar 17 2012
 

I have been exchanging e-mails with a friend. We discussed, among other things, Rush Limbaugh’s now infamous comments on Sandra Fluke.

In response to comparisons of Limbaugh to some of the vile comments made by left-wing personalities like Bill Maher in the past (who called, for instance, Sarah Palin a ‘cunt’), I said this:

“I also note that Limbaugh’s rant went way beyond the use of an offensive word: he discussed Susan [sic!] Fluke’s sexual habits repeatedly and in detail, and once he was done demonstrating his complete ignorance on the topic of birth-control drugs (no, the amount of birth control pills consumed is not related to the amount of sex a person has, he must have confused it with the pills he takes for sex; and no, Fluke was not discussing recreational use of birth control pills but specifically their widespread use to treat serious gynecological conditions) he actually asked her publicly to make a porn video of her sexual activities. And, unlike the liberal comedians mentioned (who are, after all, comedians) Limbaugh did this in all seriousness. If I had been in Fluke’s place, I’d have called Limbaugh a lecherous, drug-addled dirty old pig. Fluke was more of a lady than I am a gentleman, I guess.”

And then I realized that I should not be ashamed of my words. Instead of saying what I should do in Fluke’s place, let me just do it, plain and simple:

In my opinion, Limbaugh is a lecherous, drug-addled dirty old pig.

 Posted by at 11:00 am
Mar 11 2012
 

I have been meaning to write about this since last month, when news photographer Damir Sagolj won a World Press award for his photograph of a North Korean building complex with the well lit picture of Kim Il-Sung highlighting a wall:

I think it’s an amazing shot. All those drab buildings with their dark windows, with the single source of light being the portrait of the Great Leader, represent North Korean society in a way words cannot. I can almost visualize this image as part of some post-apocalyptic computer game.

 Posted by at 11:09 pm
Mar 01 2012
 

Someone sent me this link (https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html).

It’s a talk about the growing tendency of Internet content providers to present content that they presume you want to see. You go to Google News and the news you find is the kind of news Google thinks you like. You go to Facebook and the comments you see are the kind of comments Facebook believes you like. Comments from friends you were less likely to click on slowly vanish from sight… and you end up in a bubble of like-minded people, increasingly unaware of things that might challenge your thinking.

This is very bad. Indeed, I am beginning to wonder if the emergence of such information bubbles may be partly responsible for the increasing polarization of politics in many Western societies.

 Posted by at 11:36 am
Mar 01 2012
 

Maxima is an open-source computer algebra system (CAS), and a damn good one at that, if I may say so myself, being one of Maxima’s developers.

Among other things, Maxima has top-notch tensor algebra capabilities, which can be used, for instance, to work with Lagrangian field theories.

This week, I am pleased to report, SourceForge chose Maxima as one of the featured open-source projects on their front page. No, it won’t make us rich and famous (not even rich or famous) but it is nice to be recognized.

 Posted by at 9:35 am
Feb 27 2012
 

This is something I griped about before. Moments ago, I saw the following picture on the CBC Newsworld analog cable channel:

Yes, it does look like a ridiculous amount of blackness surrounding a small-ish picture. It turns out that I was looking at…

  • a standard-definition (4:3) broadcast signal on a 16:10 widescreen monitor, containing…
  • widescreen (16:9) original material letterboxed into the standard-definition (4:3) frame, containing in turn…
  • standard definition (4:3) material letterboxed into a wide-screen (16:9) frame, that in turn contained…
  • widescreen (16:9) original material.

Confusing? Well, perhaps this picture clarifies it a little bit:

  • The yellow bars at the top and bottom were added when the original 16:9 material was letterboxed into a 4:3 standard-definition frame;
  • The blue bars on the sides were added when this 4:3 material was letterboxed into a 16:9 broadcast frame;
  • The green bars were added when the widescreen 16:9 broadcast was reformatted for the standard-definition 4:3 analog standard;
  • The red bars are the unused area on my 16:10 monitor when I was watching this signal full screen.

Still complicated? Let me make it simpler, then. After years of trying (and failing) to sell us high-definition televisions, manufacturers realized that casual viewers can’t readily tell the difference between resolutions; they can, however, tell the difference if the shape is different. So they opted to develop a widescreen high definition format. (Back in the 1950s, a similar reasoning led the movie industry to change to a widescreen format. It was not for technical or artistic purposes; it was pure marketing.)

The end result? In this example, approximately 65% of my beautiful high-resolution display is unused, with a postage-stamp-sized picture occupying the center 35%.
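For the arithmetically inclined, here is how the 35% figure comes about: each letterboxing or pillarboxing step uses only a fraction of the available area, and the fractions multiply.

```python
def boxed_fraction(outer, inner):
    """Area fraction used when material of aspect ratio `inner` is fitted
    into a frame of aspect ratio `outer`, with bars filling the rest."""
    # Narrower content: full height, partial width; wider content: vice versa.
    return inner / outer if inner < outer else outer / inner

steps = [
    boxed_fraction(16/10, 4/3),  # 4:3 broadcast on a 16:10 monitor (red bars)
    boxed_fraction(4/3, 16/9),   # 16:9 frame letterboxed into 4:3 (green bars)
    boxed_fraction(16/9, 4/3),   # 4:3 material letterboxed into 16:9 (blue bars)
    boxed_fraction(4/3, 16/9),   # 16:9 original inside that 4:3 frame (yellow bars)
]

fraction = 1.0
for f in steps:
    fraction *= f
print(fraction)  # ~0.35: about 35% of the screen used, 65% wasted
```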

Welcome to the 21st century.

 Posted by at 3:37 pm
Feb 27 2012
 

The cover story in a recent issue of New Scientist was titled Seven equations that rule your world, written by Ian Stewart.

I like Ian Stewart; I have several of his books on my bookshelf, including a 1978 Hungarian edition of his textbook, Catastrophe Theory and its Applications.

However, I disagree with his choice of equations. Stewart picked the four Maxwell equations, Schrödinger’s equation, the Fourier transform, and the wave equation:

\begin{align}
\nabla\cdot E&=0,\\
\nabla\times E&=-\frac{1}{c}\frac{\partial H}{\partial t},\\
\nabla\cdot H&=0,\\
\nabla\times H&=\frac{1}{c}\frac{\partial E}{\partial t},\\
i\hbar\frac{\partial}{\partial t}\psi&=\hat{H}\psi,\\
\hat{f}(\xi)&=\int\limits_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx,\\
\frac{\partial^2u}{\partial t^2}&=c^2\frac{\partial^2u}{\partial x^2}.
\end{align}

But these equations really aren’t that fundamental… and some rather fundamental equations are missing.

For starters, the four Maxwell equations really should just be two equations: given a smooth (or at least three times differentiable) vector field \(A\) in 4-dimensional spacetime, we define the electromagnetic field tensor \(F\) and current \(J\) as

\begin{align}
F&={\rm d}A,\\
J&=\star{\rm d}{\star{F}},
\end{align}

where the symbol \(\rm d\) denotes the exterior derivative and \(\star\) represents the Hodge dual. OK, these are not really trivial concepts from high school physics, but the main point is, we end up with a set of four Maxwell equations only because we (unnecessarily) split the equations into a three-dimensional and a one-dimensional part. Doing so also obscures some fundamental truths: notably that once the electromagnetic field is defined this way, its properties are inevitable mathematical identities, not equations imposed on the theoretician’s whim.
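To spell out those identities: because the exterior derivative satisfies \({\rm d}^2=0\), once we set \(F={\rm d}A\), the homogeneous Maxwell equations and the conservation of electric charge are automatic:

\begin{align}
{\rm d}F&={\rm d}{\rm d}A=0,\\
{\rm d}{\star}J&={\rm d}{\star}{\star}{\rm d}{\star}F=\pm{\rm d}{\rm d}{\star}F=0.
\end{align}

In the three-dimensional notation above, the first line encodes \(\nabla\cdot H=0\) and \(\nabla\times E=-\frac{1}{c}\frac{\partial H}{\partial t}\); the second line is charge conservation (the sign of \({\star}{\star}=\pm 1\) depends on the form degree and metric signature, but either way the expression vanishes).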

Moreover, the wave equation is really just a consequence of the Maxwell equations, and conveys no new information. It is not something you invent, but something you derive.

I really have no nit to pick with Schrödinger’s equation, but before moving on to quantum physics, I would have written down the Euler-Lagrange equation first. For a generic theory with positions \(q\) and time \(t\), this could be written as

$$\frac{\partial{\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}}=0,$$

where \({\cal L}\) is the Lagrangian, or Lagrange function (of \(q\) and \(\dot{q}\), and possibly \(t\)) that describes this particular physical system. The significance of this equation is that it can be derived from the principle of least action, and tells us everything about the evolution of a system. Once you know the generic positions \(q\) and their time derivatives (i.e., velocities) \(\dot{q}\) at some time \(t=t_0\), you can calculate them at any other time \(t\). This is why physics can be used to make predictions: for instance, if you know the initial position and velocity of a cannonball, you can predict its trajectory. The beauty of the Euler-Lagrange equation is that it works equally well for particles and for fields and can be readily generalized to relativistic theories; moreover, the principle of least action is an absolutely universal one, unifying, in a sense, classical mechanics, electromagnetism, nuclear physics, and even gravity. All these theories can be described by simply stating the corresponding Lagrangian. Even more astonishingly, the basic mathematical properties of the Lagrangian can be used to deduce fundamental physical laws: for instance, a Lagrangian that remains invariant under time translation leads to the law of energy conservation.
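The cannonball example can be carried out symbolically. The sketch below uses sympy to apply the Euler-Lagrange equation to the Lagrangian of a projectile (kinetic minus potential energy) and recover the familiar equations of motion:

```python
import sympy as sp

t = sp.symbols('t')
m, g = sp.symbols('m g', positive=True)
x = sp.Function('x')(t)  # horizontal position of the cannonball
y = sp.Function('y')(t)  # vertical position

# Lagrangian: kinetic energy minus gravitational potential energy
L = sp.Rational(1, 2) * m * (x.diff(t)**2 + y.diff(t)**2) - m * g * y

def euler_lagrange(L, q):
    """Left-hand side of dL/dq - d/dt(dL/dq') = 0 for coordinate q."""
    return sp.simplify(L.diff(q) - sp.diff(L.diff(q.diff(t)), t))

# No horizontal force; constant downward acceleration g:
eq_x = euler_lagrange(L, x)  # equivalent to m x'' = 0
eq_y = euler_lagrange(L, y)  # equivalent to m y'' = -m g
```

Solving these two ordinary differential equations with the initial position and velocity yields the parabolic trajectory.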

The Euler-Lagrange equation remains valid in quantum physics, too. The big difference is that the quantities \(q\) are no longer simple numbers; they are non-commuting quantities, so-called “q-numbers”. These q-numbers sometimes coincide with ordinary numbers but more often, they do not. Most importantly, if \(q\) happens to be an ordinary number, \(\dot{q}\) cannot be, and vice versa. So the initial position and momentum of a quantum system cannot both be represented by numbers at the same time. Exact predictions are no longer possible.

We can still make approximate predictions though, by replacing the exact form of the Euler-Lagrange equation with a probabilistic prediction:

$$\xi(A\rightarrow B)=k\sum\limits_A^B\exp\left(\frac{i}{\hbar}\int_A^B{\cal L}\,dt\right),$$

where \(\xi(A\rightarrow B)\) is a complex number called the probability amplitude, the squared modulus of which tells us the likelihood of the system changing from state \(A\) to state \(B\), and the summation is meant to take place over “all possible paths” from \(A\) to \(B\). Schrödinger’s equation can be derived from this, as indeed can most of quantum mechanics. So this, then, would be my fourth equation.

Would I include the Fourier transform? Probably not. It offers a different way of looking at the same problem, but no new information content. Whether I investigate a signal in the time domain or the frequency domain, it is still the same signal; arguably, it is simply a matter of convenience as to which representation I choose.
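The claim that the Fourier transform adds no information content is easy to demonstrate numerically: the transform is invertible, so the frequency-domain representation can be converted back to the original signal without loss. A quick numpy illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.standard_normal(256)  # an arbitrary time-domain signal

spectrum = np.fft.fft(signal)          # the frequency-domain view
recovered = np.fft.ifft(spectrum).real  # transform back to the time domain

# Round trip is exact (up to floating-point precision):
# no information was gained or lost by changing representation.
assert np.allclose(signal, recovered)
```

Whether the time-domain or frequency-domain view is more convenient depends on the problem, but both encode exactly the same signal.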

However, Stewart left out at least one extremely important equation:

$$dU=TdS-pdV.$$

This is the fundamental equation of thermodynamics, connecting quantities such as the internal energy \(U\), the temperature \(T\), the entropy \(S\), and the medium’s equation of state (here represented by the pressure \(p\) and volume \(V\)). Whether one derives it from the first principles of axiomatic thermodynamics or from the postulates of statistical physics, the end result is the same: this is the equation that defines the arrow of time, for instance, as all the other fundamental equations of physics work the same even if the arrow of time is reversed.

Well, that’s five equations. What else would I include in my list? The choices, I think, are obvious. First, the definition of the Lagrangian for gravity:

$${\cal L}_\mathrm{grav}=R+2\Lambda,$$

where \(R\) is the Ricci curvature scalar that characterizes the geometry of spacetime and \(\Lambda\) is the cosmological constant.

Finally, the last equation would be, for the time being, the “standard model” Lagrangian that describes all forms of matter and energy other than gravity:

$${\cal L}_\mathrm{SM}=…$$

Its actual form is too unwieldy to reproduce here (as it combines the electromagnetic, weak, and strong nuclear fields, all the known quarks and leptons, and their interactions) and in all likelihood, it’s not the final version anyway: the existence of the Higgs boson is still an open question, and without the Higgs, the standard model would need to be modified.

The Holy Grail of fundamental physics, of course, is unification of these final two equations into a single, consistent framework, a true “theory of everything”.

 Posted by at 1:18 pm
Feb 262012
 

Christopher Plummer is one of my favorite actors. I don’t usually care about the Oscars, but tonight, I was really rooting for him. And at last, it came true: at 82, he became the oldest recipient of a well-deserved Academy Award.

 Posted by at 11:42 pm
Feb 242012
 

Looks like I am into signing petitions this week. I don’t like it, but I take it as a sign of the times that we live in.

Today, it’s the CBC’s turn; specifically, the unbelievable news that the CBC may begin dismantling its physical music archives next month.

I added the following comment when I signed the online petition: “Decades from now, the decision to discard these archives will be viewed as a grave, irreversible act of cultural vandalism. It is inconceivable that the leadership of the CBC is considering this. Then again, looking at what they’ve done to CBC Radio 2 and the Radio Orchestra, perhaps nothing should surprise me anymore…”

 Posted by at 2:15 pm