Apr 14, 2012
 

I just came across this delightful imaginary conversation between a physicist and an economist about the unsustainability of perpetual economic growth.

The physicist uses energy production in his argument: growth at present rates means that in a few hundred years, we’ll produce enough energy to start boiling the oceans. And this is not something that can be addressed easily by the magic of technology. When waste heat is produced, the only way to get rid of it is to radiate it away into space. After about 1400 years of continuous growth, the Earth will be radiating more energy (all man-made) than the Sun, which means it would have to be a lot hotter than the Sun, on account of its smaller size. And in about 2500 years, we would exceed the thermal output of the whole Milky Way.

This, of course, is nonsense, which means that terrestrial energy production will eventually be capped by basic physics. If GDP were to continue growing nonetheless, it would mean that the price of energy relative to other stuff would decrease to zero. This is also nonsense, since a limited resource cannot become arbitrarily cheap. But that means GDP growth must also be capped.

What I liked about this argument is that it is not emotional or ideological; it’s not about hugging trees or hating capitalism. It is about basic physics and elementary logic that is difficult to escape. In fact, it can be put in the form of equations. Our present energy production \(P_0\) is approximately 15 TW, which is about 0.002% of the Sun’s output that reaches the Earth:

\begin{align}
P_0&\simeq 1.5 \times 10^{13}~\rm{W},\\
P_\odot&\simeq 7 \times 10^{17}~\rm{W},\\
\eta_0&=P_0/P_\odot \sim 0.002\%.
\end{align}

For any other value of \(\eta\), there is a corresponding value of \(P\):

\begin{align}
P=\eta P_\odot.
\end{align}

Now all we need is to establish a maximum value of \(\eta\) that we can live with; say, \(\eta_{\rm max}=1\%\). This tells us the maximum amount of energy that we can produce here on Earth without cooking ourselves:

\begin{align}
P_{\rm max}=\eta_{\rm max}P_\odot.
\end{align}

On the economic side of this argument, there is the percentage of GDP that is spent on energy. In the US, this is about 8%. For lack of a better value, let me stick to this one:

\begin{align}
\kappa_0\sim 8\%.
\end{align}

How low can \(\kappa\) get? That may be debatable, but it cannot become arbitrarily low. So there is a value \(\kappa_{\rm min}\).

The rest is just basic arithmetic. GDP is proportional to the total energy produced, divided by \(\kappa\):

\begin{align}
{\rm GDP}&\propto \frac{\eta}{\kappa}P_\odot,\\
{\rm GDP}_{\rm max}&\propto \frac{\eta_{\rm max}}{\kappa_{\rm min}}P_\odot.
\end{align}

And in particular:

\begin{align}
{\rm GDP}_{\rm max}&=\frac{\eta_{\rm max}\kappa_0}{\eta_0\kappa_{\rm min}}{\rm GDP}_0,
\end{align}

where \({\rm GDP}_0\) is the present GDP.

We know \(\eta_0\sim 0.002\%\). We know \(\kappa_0\sim 8\%\). We can guess that \(\eta_{\rm max}\lesssim 1\%\) and \(\kappa_{\rm min}\gtrsim 1\%\). This means that

\begin{align}
{\rm GDP}_{\rm max}\lesssim 4,000\times {\rm GDP}_0.
\end{align}
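
For the record, here is that arithmetic as a quick sanity check in Python (a minimal sketch; every input is a figure quoted above, and the 3% growth rate anticipates the next paragraph):

    import math

    eta_0 = 2e-5        # present energy production vs. solar input, ~0.002%
    eta_max = 0.01      # assumed ceiling, 1%
    kappa_0 = 0.08      # share of GDP spent on energy today, ~8%
    kappa_min = 0.01    # assumed floor, 1%

    ratio = (eta_max / eta_0) * (kappa_0 / kappa_min)
    print(ratio)                              # 4000: the GDP_max/GDP_0 bound

    # years needed to grow 4000-fold at a constant 3% annual rate
    print(math.log(ratio) / math.log(1.03))   # ~280 years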

This is it. A hard limit imposed by thermodynamics. But hey… four thousand is a big number, isn’t it? Well… sort of. At a constant 3% rate of annual growth, the economy will increase to four thousand times its present size in a mere 280 years or so. One may tweak the numbers a little here and there, but the fact that physics imposes such a hard limit remains. The logic is inescapable.

Or is it? The word “escape” may be appropriate here for more than one reason, as there is one obvious way to evade this argument: escape into space. In a few hundred years, humanity may have spread throughout the solar system, and energy amounts enough to boil the Earth’s oceans may be powering human colonies in the hostile (and cold!) environments near the outer planets.

That is, if humans are still around a few hundred years from now. One can only hope.

Posted at 9:59 am
Apr 12, 2012
 

Our second short paper has been accepted for publication in Physical Review Letters.

I have been involved with Pioneer 10 and 11 in some fashion since about 2002, when I first began corresponding with Larry Kellogg about the possibility of resurrecting the telemetry data set. It is thanks to Larry’s stamina and conscientiousness that the data set survived.

I have been actively involved in research on the Pioneer anomaly since 2005. Seven years! Hard to believe.

This widely reported anomaly concerns the fact that when the orbits of Pioneer 10 and 11 are accurately modeled, a discrepancy exists between the modeled and measured frequency of the radio signal. This discrepancy can be resolved by assuming an unknown force that pushes Pioneer 10 and 11 towards the Earth or the Sun (from that far away, these two directions nearly coincide and cannot really be told apart).

One purpose of our investigation was to find out the magnitude of the force that arises as the spacecraft radiates different amounts of heat in different directions. This is the concept of a photon rocket. A ray of light carries momentum. Hard as it may be to believe at first, when you hold a flashlight in your hands and turn it on, the flashlight will push your hand backwards by a tiny force. (How tiny? If it is a 1 W bulb that is perfectly efficient and perfectly focused, the force will be equivalent to about one third of one millionth of a gram of weight.)
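
The arithmetic behind that flashlight figure is a one-liner; here it is as a minimal Python sketch:

    c = 2.998e8    # speed of light, m/s
    g = 9.81       # standard gravity, m/s^2

    P = 1.0        # a perfectly efficient, perfectly focused 1 W beam
    F = P / c      # photon thrust, ~3.3e-9 N
    print(F / g * 1000)   # weight equivalent in grams: ~3.4e-7 g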

On Pioneer 10 and 11, we have two main heat sources. First, there is electrical heat: all the instruments on board use about 100 W of electricity, most of which is converted into heat. Second, electricity is produced, very inefficiently, by a set of four radioisotope thermoelectric generators (RTGs); these produce more than 2 kW of waste heat. All this heat has to go somewhere, and most of it will be dissipated preferentially in one direction, behind the spacecraft’s large dish antenna, which is always pointed towards the Earth.

The controversial question was, how much? How efficiently is this heat converted into force?

I first constructed a viable thermal model for Pioneer 10 back in 2006. I presented results from custom ray-tracing code at the Pioneer Explorer Collaboration meeting at the International Space Science Institute in Bern, Switzerland in February 2007.

With this, I confirmed what had already been suspected by others—notably, Katz (Phys. Rev. Letters 83:9, 1892, 1999); Murphy (Phys. Rev. Letters 83:9, 1890, 1999); and Scheffer (Phys. Rev. D 67:8, 084021, 2003)—that the magnitude of the thermal recoil force is indeed comparable to the anomalous acceleration. Moreover, I established that the thermal recoil force is very accurately described as a simple linear combination of heat from two heat sources: electrical heat and heat from the RTGs. The thermal acceleration \(a\) is, in fact,

$$a=\frac{1}{mc}(\eta_{\rm rtg}P_{\rm rtg} + \eta_{\rm elec}P_{\rm elec}),$$

where \(c\simeq 300,000~{\rm km/s}\) is the speed of light, \(m\simeq 250~{\rm kg}\) is the mass of the spacecraft, \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~\rm {W}\) are the RTG heat and electrical heat, respectively, and \(\eta_{\rm rtg}\) and \(\eta_{\rm elec}\) are “efficiency factors”.
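
To get a feel for the magnitudes involved, here is this force model evaluated in Python (a minimal sketch; the two pairs of efficiency factors are the values quoted further below, and the oft-quoted anomalous acceleration they should be compared against is roughly \(8.7\times 10^{-10}~{\rm m/s}^2\)):

    c = 2.998e8       # speed of light, m/s
    m = 250.0         # spacecraft mass, kg
    P_rtg = 2000.0    # RTG waste heat, W
    P_elec = 100.0    # electrical heat, W

    def accel(eta_rtg, eta_elec):
        # thermal recoil acceleration from the two-parameter force model
        return (eta_rtg * P_rtg + eta_elec * P_elec) / (m * c)

    print(accel(0.0104, 0.406))   # thermal model:    ~8.2e-10 m/s^2
    print(accel(0.0144, 0.480))   # Doppler best fit: ~1.0e-9  m/s^2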

This simple force model is very useful because it can be incorporated directly into the orbital model of the spacecraft.

In the years since, the group led by Gary Kinsella constructed a very thorough and comprehensive model of the Pioneer spacecraft, using the same software tools (not to mention considerable expertise) that they use for “live” spacecraft. With this model, they were able to predict the thermal recoil force with the greatest accuracy possible, at different points along the trajectory of the spacecraft. The result can be compared directly to the acceleration that is “measured”; i.e., the acceleration that is needed to model the radio signal accurately:

In this plot, the step-function-like curve (thick line) is the acceleration deduced from the radio signal frequency. The data points with vertical error bars represent the recoil force calculated from the thermal model. They are rather close. The relatively large error bars are due primarily to the fact that we simply don’t know what happened to the white paint that coated the RTGs. The RTGs were sizzling hot even in deep space, and subjected to solar radiation (ultraviolet light and charged particles), so the properties of the paint may have changed significantly over time… we just don’t know how. The lower part of the plot shows just how well the radio signal is modeled: the average residual is less than 5 mHz. The actual frequency of the radio signal is 2 GHz, so this represents a modeling accuracy of better than one part in 100 billion, over the course of nearly 20 years.

In terms of the above-mentioned efficiency factors, the model of Gary’s group yielded \(\eta_{\rm rtg}=0.0104\) and \(\eta_{\rm elec}=0.406\).

But then, as I said, we also incorporated the thermal recoil force directly into the Doppler analysis that was carried out by Jordan Ellis. Jordan found that the residuals were minimized at \(\eta_{\rm rtg}=0.0144\) and \(\eta_{\rm elec}=0.480\). These are somewhat larger than the values from the thermal model. But how much larger?

We found that the best way to answer this question was to plot the two results in the parameter space defined by these two efficiency factors:

The dashed ellipse here represents the estimates from the thermal model and their associated uncertainty. The ellipse is elongated horizontally, because the largest source of uncertainty, the degradation of RTG paint, affects only the \(\eta_{\rm rtg}\) factor.

The dotted ellipse represents the estimates from radio signal measurements. The formal error of these estimates is very small (the error ellipse would be invisibly tiny). These formal errors, however, are calculated by assuming that the error in every one of the tens of thousands of Doppler measurements arises independently. In reality, this is not the case: the Doppler measurements are insanely accurate, and any errors that occur are the result of systematic mismodeling, e.g., caused by our inadequate knowledge of the solar system. This inflates the error ellipse, and that is what is shown in this plot.

Looking at this plot was what allowed us to close our analysis with the words, “We therefore conclude that at the present level of our knowledge of the Pioneer 10 spacecraft and its trajectory, no statistically significant acceleration anomaly exists.”

Are there any caveats? Not really, I don’t think, but there are still some unexplored questions. Applying this research to Pioneer 11 (I expect no surprises there, but we have not done this in a systematic fashion). Modeling the spin rate change of the two spacecraft. Making use of radio signal strength measurements, which can give us clues about the precise orientation of the spacecraft. Testing the paint that was used on the RTGs in a thermal vacuum chamber. Accounting for outgassing. These are all interesting issues but it is quite unlikely that they will alter our main conclusion.

On several occasions when I gave talks about Pioneer, I used a slide that said, in big friendly letters,

PIONEER 10/11 ARE THE MOST PRECISELY NAVIGATED DEEP SPACE CRAFT TO DATE.

And they confirmed the predictions of Newton and Einstein, with spectacular accuracy, by measuring the gravitational field of the Sun in situ, all the way out to about 70 astronomical units (one astronomical unit being the distance of the Earth from the Sun).

Posted at 11:10 am
Mar 27, 2012
 

The cover story in the March 3 issue of New Scientist is entitled The Deep Future: A Guide to Humanity’s Next 100,000 Years.

I found this cover story both shallow and pretentious. As if we could predict even the next one hundred years, never mind a hundred thousand.

They begin with an assurance that humans will still be around 100,000 years from now. They base this on the observation that well-established species tend to hang around for much longer than that. True, but… what we don’t have in the Earth’s prehistory is a species with the technological capability to destroy the Earth. This is something new.

So new, in fact, that we cannot draw far-fetched conclusions. Consider, for instance: nuclear weapons have been around for 67 years. In these 67 years, we managed not to start an all-out nuclear war. Assuming, for the sake of simplicity, that all years are created equal, the only thing we can conclude from this, if my math is right, is that the probability of nuclear war in any given year is 4.37% or less, “19 times out of 20” as statisticians sometimes say. Fair enough… but that does not tell us much about the “deep future”. Projected to 100,000 years, all we can tell on the basis of this 67-year sample period is that the probability of all-out nuclear war is less than 99.99……99854…%, where the number of ‘9’-s between the decimal point and the digit ‘8’ is 1941. Not very reassuring.
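
For anyone who wants to check my math, here is the estimate as a short Python script (a minimal sketch of the simple binomial bound I am using):

    import math

    years = 67    # years with nuclear weapons, but without an all-out war
    conf = 0.95   # "19 times out of 20"

    # upper bound on the annual probability p, from (1 - p)^67 = 1 - conf
    p_max = 1.0 - (1.0 - conf) ** (1.0 / years)
    print(p_max)                              # ~0.0437, i.e., 4.37%

    # chance of avoiding war for 100,000 years at that annual probability
    print(100000 * math.log10(1.0 - p_max))   # ~-1942: about 10^-1942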

The authors of the New Scientist piece would probably tell us that even if nuclear war did break out, it would not wipe out humanity in its entirety, and they probably have a point. But that misses my point: namely, the futility of making a 100,000-year prediction on the basis of at most a few thousand years of known history.

And while nuclear war may be a very scary prospect, it is far from the scariest. There are what some call technological singularities: developments in science and technology so profound that they would change the very basics of our existence. Artificial intelligence, for starters… reading about Google’s self-driving car or intelligent predictive search algorithms, about IBM’s Watson, or even Apple’s somewhat mundane Siri, I cannot help but wonder: is the era of true AI finally just around the corner? And when true AI arrives, how far behind is the nightmare of Skynet from the Terminator films?

Or how about genetically altered superhumans? They mention this, but only in passing: “unless, of course, engineered humans were so superior that they obliterated the competition.” Why is this scenario considered unlikely? Sometimes I wonder if we may perhaps be just one major war away from this: a warring party in a precarious situation in a prolonged conflict breeding genetically modified warriors. Who, incidentally, need not even look human.

I could go on of course, about “gray goo”, bioterrorism, and other doomsday scenarios, but these just underline my point: it is impossible to predict the course of history even over the next 100 years, never mind the next 100,000. This is true even from a mathematical perspective: exceedingly complex systems with multiple nonlinear feedback mechanisms can undergo catastrophic phase transitions that are almost impossible to predict or prevent. Witness the recent turmoil in financial markets.

Surprisingly, this overly optimistic New Scientist feature is very pessimistic on one front: space exploration. They first quote a figure of 115,000 years that would be required to reach Alpha Centauri at 25,000 miles an hour; this, of course, is a typical velocity for a chemically fueled rocket. The possibility of a better technology is touched upon only briefly: “Even if we figure out how to travel at the speeds required […], the energy required to get there is far beyond our means”. Is that so? They go on to explain that, “[f]or the next few centuries, then, if not thousands of years hence, humanity will be largely confined to the solar system”. Centuries, if not thousands of years? That is far, far, far short of the 100,000 years that they are supposed to be discussing.

I called this cover feature shallow and pretentious, but perhaps I should have called it myopic. In that sense, it is no different from predictions made a little over a century ago, in 1900, about the coming “century of reason”. At least our predecessors back then had the good sense to confine their fortunetelling to the next 100 years.

Posted at 10:11 am
Mar 20, 2012
 

I am holding in my hands an amazing book. It is a big and heavy tome, coffee-table-book sized, with over 600 lavishly illustrated pages. It took more than 30 years for this book to finally appear in English, but the wait, I think, was well worth it.

The name of Charles Simonyi, Microsoft billionaire and space tourist, is fairly well known. What is perhaps less well-known in the English speaking world is that his father, Karoly Simonyi, was a highly respected professor of physics at the Technical University of Budapest… that is, until he was deprived of his livelihood by a communist regime that considered him ideologically unfit for a teaching position.

Undeterred, Simonyi then spent the next several years completing his magnum opus, A Cultural History of Physics, which was eventually published in 1978.

Simonyi was both a scientist and a humanist. In his remarkable, unique book, history and science march hand in hand from humble beginnings in Egypt, through the golden era of the classical world, through the not so dark Dark Ages, on to the scientific revolution that began in the 1600s and culminated in the discoveries of Lagrangian mechanics, thermodynamics, statistical physics, electromagnetism and, ultimately, relativity theory and quantum physics.

And when I say lavishly illustrated, I mean it. Illustrations, including diagrams, portraits, and facsimile pages from original publications, decorate nearly every single page of Simonyi’s tome. Yet it is fundamentally a book about physics: the wonderfully written narrative is well complemented by equations that translate ideas into the precise language of mathematics.

I once read this book, my wife’s well-worn copy, from cover to cover, back in the mid-1990s. I feel that it played a very significant role in helping me turn back towards physics.

Simonyi’s book has seen several editions in the original Hungarian, and it was also translated into German, but until now, no English-language translation was available. This is perhaps not surprising: it must be a very expensive book to produce, and despite its quality, the large number of equations must surely be a deterrent to many a prospective buyer. But now, CRC Press finally managed to make an English-language version available.

(Oh yes, CRC Press. I hated them for so many years, after they sued Wolfram and had Mathworld taken off-line. I still think that was a disgusting thing for them to do. I hope they spent enough on lawyers and lost enough sales due to disgusted customers to turn their legal victory into a Pyrrhic one. But that was more than a decade ago. Let bygones be bygones… besides, I really don’t like Wolfram that much these days anyway, software activation and all.)

Charles Simonyi played a major role in making this edition happen. I guess he may also have spent some of his own money. And while I am sure he can afford a loss, I hope the book does well… it deserves to be successful.

For some reason, the book was harder to obtain in Canada than usual. It is not available on amazon.ca; indeed, I pre-ordered the book last fall, but a few weeks ago, Amazon notified me that they are unable to deliver this item. Fortunately, CRC Press delivers in Canada, and the shipping is free, just like with Amazon. The book seems to be available and in stock on the US amazon.com Web site.

And it’s not a pricey one: at less than 60 dollars, it is quite cheap, actually. I think it’s well worth every penny. My only disappointment is that my copy was printed in India. I guess that’s one way to shave a few bucks off the production cost, but I would have paid more happily for a copy printed in the US or Canada.

Posted at 4:38 pm
Mar 18, 2012
 

The Weather Network has this neat plot, updated every ten minutes, showing the anticipated minimum and maximum temperatures for the next two weeks.

The forecast for Wednesday is off the chart. It is going to be so much hotter than the two-week average, it did not fit into the plot area.

Of course it could be just nonsense. They did predict 7 degrees Centigrade as the overnight low. It went down to 2 in foggy areas (most of Ottawa, I guess). Then again… even if it turns out to be 10 degrees colder than the predicted 24, it’s still a remarkably mild winter day. March 21, after all, is supposed to be the last day of winter. And I may have to fire up the A/C.

And it’s not just Ottawa. For Winnipeg (Winnipeg, for crying out loud!) today’s forecast is 28. A once-in-a-thousand-years event, says The Weather Network. Either that or the new norm, if global warming is to be believed. (Not necessarily bad news for many Canadians.)

Posted at 8:53 am
Mar 1, 2012
 

Maxima is an open-source computer algebra system (CAS), and a damn good one at that, if I may say so myself, being one of Maxima’s developers.

Among other things, Maxima has top-notch tensor algebra capabilities, which can be used to work with Lagrangian field theories.

This week, I am pleased to report, SourceForge chose Maxima as one of the featured open-source projects on their front page. No, it won’t make us rich and famous (not even rich or famous) but it is nice to be recognized.

Posted at 9:35 am
Feb 27, 2012
 

The cover story in a recent issue of New Scientist was titled Seven equations that rule your world, written by Ian Stewart.

I like Ian Stewart; I have several of his books on my bookshelf, including a 1978 Hungarian edition of his textbook, Catastrophe Theory and its Applications.

However, I disagree with his choice of equations. Stewart picked the four Maxwell equations, Schrödinger’s equation, the Fourier transform, and the wave equation:

\begin{align}
\nabla\cdot E&=0,\\
\nabla\times E&=-\frac{1}{c}\frac{\partial H}{\partial t},\\
\nabla\cdot H&=0,\\
\nabla\times H&=\frac{1}{c}\frac{\partial E}{\partial t},\\
i\hbar\frac{\partial}{\partial t}\psi&=\hat{H}\psi,\\
\hat{f}(\xi)&=\int\limits_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx,\\
\frac{\partial^2u}{\partial t^2}&=c^2\frac{\partial^2u}{\partial x^2}.
\end{align}

But these equations really aren’t that fundamental… and some rather fundamental equations are missing.

For starters, the four Maxwell equations really should just be two equations: given a smooth (or at least three times differentiable) vector field \(A\) in 4-dimensional spacetime, we define the electromagnetic field tensor \(F\) and current \(J\) as

\begin{align}
F&={\rm d}A,\\
J&=\star{\rm d}{\star{F}},
\end{align}

where the symbol \(\rm d\) denotes the exterior derivative and \(\star\) represents the Hodge dual. OK, these are not really trivial concepts from high school physics, but the main point is, we end up with a set of four Maxwell equations only because we (unnecessarily) split the equations into a three-dimensional and a one-dimensional part. Doing so also obscures some fundamental truths: notably that once the electromagnetic field is defined this way, its properties are inevitable mathematical identities, not equations imposed on the theoretician’s whim.
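
To spell out the identity in question: because the exterior derivative is nilpotent (\({\rm d}{\rm d}=0\)), the definition \(F={\rm d}A\) automatically implies

$${\rm d}F={\rm d}{\rm d}A=0,$$

which, split into space and time components, is precisely the pair of homogeneous Maxwell equations, \(\nabla\cdot H=0\) and \(\nabla\times E=-\frac{1}{c}\frac{\partial H}{\partial t}\).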

Moreover, the wave equation is really just a consequence of the Maxwell equations, and conveys no new information. It is not something you invent, but something you derive.

I really have no nit to pick with Schrödinger’s equation, but before moving on to quantum physics, I would have written down the Euler-Lagrange equation first. For a generic theory with positions \(q\) and time \(t\), this could be written as

$$\frac{\partial{\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}}=0,$$

where \({\cal L}\) is the Lagrangian, or Lagrange function (of \(q\) and \(\dot{q}\), and possibly \(t\)) that describes this particular physical system. The significance of this equation is that it can be derived from the principle of least action, and tells us everything about the evolution of a system. Once you know the generic positions \(q\) and their time derivatives (i.e., velocities) \(\dot{q}\) at some time \(t=t_0\), you can calculate them at any other time \(t\). This is why physics can be used to make predictions: for instance, if you know the initial position and velocity of a cannonball, you can predict its trajectory. The beauty of the Euler-Lagrange equation is that it works equally well for particles and for fields and can be readily generalized to relativistic theories; moreover, the principle of least action is an absolutely universal one, unifying, in a sense, classical mechanics, electromagnetism, nuclear physics, and even gravity. All these theories can be described by simply stating the corresponding Lagrangian. Even more astonishingly, the basic mathematical properties of the Lagrangian can be used to deduce fundamental physical laws: for instance, a Lagrangian that remains invariant under time translation leads to the law of energy conservation.
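
As a concrete illustration, here is the Euler-Lagrange machinery applied to a one-dimensional harmonic oscillator using sympy (a minimal sketch; the oscillator Lagrangian is my choice of example):

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t = sp.symbols('t')
    m, k = sp.symbols('m k', positive=True)
    q = sp.Function('q')(t)

    # harmonic oscillator Lagrangian: kinetic minus potential energy
    L = m * q.diff(t)**2 / 2 - k * q**2 / 2

    # the Euler-Lagrange equation recovers Newton's law, m*q'' = -k*q
    print(euler_equations(L, q, t))
    # [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)]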

The Euler-Lagrange equation remains valid in quantum physics, too. The big difference is that the quantities \(q\) are no longer simple numbers; they are non-commuting quantities, so-called “q-numbers”. These q-numbers sometimes coincide with ordinary numbers but more often, they do not. Most importantly, if \(q\) happens to be an ordinary number, \(\dot{q}\) cannot be, and vice versa. So the initial position and momentum of a quantum system cannot both be represented by numbers at the same time. Exact predictions are no longer possible.

We can still make approximate predictions though, by replacing the exact form of the Euler-Lagrange equation with a probabilistic prediction:

$$\xi(A\rightarrow B)=k\sum\limits_A^B\exp\left(\frac{i}{\hbar}\int_A^B{\cal L}\right),$$

where \(\xi(A\rightarrow B)\) is a complex number called the probability amplitude, the squared modulus of which tells us the likelihood of the system changing from state \(A\) to state \(B\) and the summation is meant to take place over “all possible paths” from \(A\) to \(B\). Schrödinger’s equation can be derived from this, as indeed most of quantum mechanics. So this, then, would be my fourth equation.

Would I include the Fourier transform? Probably not. It offers a different way of looking at the same problem, but no new information content. Whether I investigate a signal in the time domain or the frequency domain, it is still the same signal; arguably, it is simply a matter of convenience as to which representation I choose.

However, Stewart left out at least one extremely important equation:

$$dU=TdS-pdV.$$

This is the fundamental equation of thermodynamics, connecting quantities such as the internal energy \(U\), the temperature \(T\), the entropy \(S\), and the medium’s equation of state (here represented by the pressure \(p\) and volume \(V\)). Whether one derives it from the first principles of axiomatic thermodynamics or from the postulates of statistical physics, the end result is the same: this is the equation that defines the arrow of time, since all the other fundamental equations of physics work the same even if the arrow of time is reversed.

Well, that’s five equations. What else would I include in my list? The choices, I think, are obvious. First, the definition of the Lagrangian for gravity:

$${\cal L}_\mathrm{grav}=R+2\Lambda,$$

where \(R\) is the Ricci curvature scalar that characterizes the geometry of spacetime and \(\Lambda\) is the cosmological constant.

Finally, the last equation would be, for the time being, the “standard model” Lagrangian that describes all forms of matter and energy other than gravity:

$${\cal L}_\mathrm{SM}=…$$

Its actual form is too unwieldy to reproduce here (as it combines the electromagnetic, weak, and strong nuclear fields, all the known quarks and leptons, and their interactions) and in all likelihood, it’s not the final version anyway: the existence of the Higgs boson is still an open question, and without the Higgs, the standard model would need to be modified.

The Holy Grail of fundamental physics, of course, is unification of these final two equations into a single, consistent framework, a true “theory of everything”.

Posted at 1:18 pm
Feb 22, 2012
 

Why exactly do we believe that stars and, more importantly, gas in the outer regions of spiral galaxies move in circular orbits? This assumption lies at the heart of the infamous galaxy rotation curve problem: the circular orbital velocity in a spiral galaxy (whose visible mass is concentrated in the central bulge) should be proportional to the inverse square root of the distance from the center; instead, observed rotation curves are “flat”, meaning that the velocity remains approximately the same at various distances from the center.
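
To see the expected falloff explicitly, here is the Keplerian circular velocity around a central point mass (a minimal Python sketch; the \(10^{11}\) solar mass central bulge is my illustrative figure, not a fitted value):

    import math

    G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
    M = 1e11 * 1.989e30   # assumed central mass: 1e11 solar masses, kg
    kpc = 3.086e19        # one kiloparsec, m

    # circular velocity v = sqrt(GM/r) falls as 1/sqrt(r);
    # observed rotation curves stay roughly flat instead
    for r in (5, 10, 20, 40):
        v = math.sqrt(G * M / (r * kpc))
        print(r, "kpc:", round(v / 1000), "km/s")   # 293, 207, 147, 104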

So why do we assume that stars and gas move in circular orbits? Well, it turns out that one key bit of evidence is in a 32-year-old paper published by two Indian physicists: Radhakrishnan and Sarma (A&A 85, 1980) made observations of hydrogen gas in the direction of the center of the Milky Way, and found that the bulk of the gas between the solar system and the central bulge has no appreciable radial velocity.

However, more recent observations may be contradicting this result. Just two years ago, the Radial Velocity Experiment (RAVE) survey (Siebert et al., MNRAS 412, 2010) found, using a sample of several hundred thousand relatively nearby stars, that a significant radial velocity exists, putting into question the simple model that assumes that circular orbits dominate.

Posted at 10:03 pm
Feb 22, 2012
 

So maybe neutrinos don’t travel faster than light after all.

Instead, if rumors are to be believed, it was a simple instrumentation problem. There is no official confirmation yet, but according to a statement that also appears on Nature’s news blog, the OPERA team is indeed investigating two problems related to a timer oscillator and an optical fiber connection.

A while back, I wrote that I could identify four possible broad categories for conventional explanations of the OPERA result:

  1. Incorrectly synchronized clocks;
  2. Incorrectly measured distance;
  3. Unaccounted-for delays in the apparatus;
  4. Statistical uncertainties.

Of these, #4 was already out, as the OPERA team verified their result using short duration proton bunches that avoided the use of potentially controversial statistical methods. I never considered #2 a serious possibility, as highly accurate geographic localization is a well-established art. Having read and re-read the OPERA team’s description of how they synchronized clocks, I was prepared to discount #1 as well; but then again, incorrect synchronization can arise as a result of equipment failure, so would that fall under #1 or #3?

In any case, it looks like #3, with a dash of #1 perhaps. Once again, conventional physics prevails.

That is, if we can believe these latest rumors.

Posted at 8:08 pm
Feb 21, 2012
 

Some thirty thousand years ago, Homo sapiens was busy perfecting techniques to produce primitive stone tools. They may have already invented nets, the bow and arrow, and perhaps even ceramics, but they were still a long way away from inventing civilization.

Around the same time, an arctic squirrel in north-eastern Siberia took the fruit of a narrow-leafed campion, a small arctic flower, and hid it in its burrow, never to be touched again. The fruit froze and remained frozen for over three hundred centuries.

It is frozen no longer; rather, it is blooming, thanks to the efforts of a research team led by Svetlana Yashina and David Gilichinsky of the Russian Academy of Sciences. Against all odds, the genetic material in the seed appears to have survived. I say “appears” because such an extraordinary claim will be subject to extraordinary scrutiny, but what I have been reading suggests that this is indeed real: the age of the fruit is confirmed by radioactive dating.

Posted at 9:21 am
Feb 16, 2012
 

I always find these numbers astonishing.

The solar constant, the amount of energy received by a 1 square meter surface at 1 astronomical unit (AU) from the Sun, is roughly \(s=1.37~{\rm kW/m}^2\). Given that 1 AU is approximately 150 million kilometers, or \(r=1.5\times 10^{11}~{\rm m}\), the surface area of a 1 AU sphere surrounding the Sun would be \(A=4\pi r^2=2.8\times 10^{23}~{\rm m}^2\). Multiplied by the solar constant, this gives the Sun’s total power output, \(P=sA=3.9\times 10^{26}~{\rm W}\); that is, the energy \(E=3.9\times 10^{26}~{\rm J}\) is released every second. Using Einstein’s infamous mass-energy formula \(E=mc^2\), where \(c=3\times 10^8~{\rm m/s}\), we can easily calculate how much mass is converted into energy each second: \(m=E/c^2=4.3\times 10^9~{\rm kg}\). Close to four and a half million tons.

The dominant fusion process in the Sun is the proton-proton chain reaction, in which approximately 0.7% of the total mass of hydrogen is converted into energy. Thus 4.3 million tons of mass converted into pure energy is equivalent to over 600 million tons of hydrogen fuel burned every second. (For comparison, the largest ever nuclear device, the Soviet Tsar Bomba, burned no more than a few hundred kilograms of hydrogen to produce a 50 megaton explosion.)

Fortunately, there is plenty where that came from. The total mass of the Sun is \(2\times 10^{30}~{\rm kg}\), so if the Sun were made entirely of hydrogen, it could burn for 100 billion years before running out of fuel. Now the Sun is not made entirely of hydrogen, and the fusion reaction slows down and eventually stops long before all the hydrogen is consumed, but we still have a few billion years of useful life left in our middle-aged star. A much bigger (pun intended) problem is that as our Sun ages, it will grow in size; in a mere billion years, the Earth may well become uninhabitable as a result, with the oceans boiling away. I wonder if it’s too early to start worrying about it just yet.
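
Here is the whole back-of-the-envelope chain in Python (a minimal sketch using rounded constants):

    import math

    s = 1370.0   # solar constant, W/m^2
    r = 1.5e11   # 1 AU, m
    c = 3.0e8    # speed of light, m/s

    A = 4 * math.pi * r**2   # area of a 1 AU sphere, ~2.8e23 m^2
    P = s * A                # total solar output, ~3.9e26 W
    m = P / c**2             # mass converted per second, ~4.3e9 kg
    h = m / 0.007            # hydrogen burned per second, ~6e11 kg

    # burn time, in years, if the Sun were pure hydrogen: ~1e11
    print(2e30 / h / (365.25 * 86400))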

Posted at 12:24 pm
Feb 16, 2012
 

Other countries have launched satellites to observe the Earth; observe the Sun; observe the stars; perform physical, chemical, or biological experiments in space; or even for military purposes. But here is a first: trust a Swiss team to propose a microsatellite specifically designed to capture orbital junk and drag it back into the atmosphere to burn it up.

Posted at 11:28 am
Jan 27, 2012
 

Normally, I would get tremendously excited to hear about a serious proposal to establish a permanent lunar colony. (Where do I sign up?)

Unfortunately, when Newt Gingrich floated this idea while campaigning in Florida, I did not feel excited at all. That is because I have very little doubt that this was simply an exercise in transparent political opportunism. Mr. Gingrich is hoping to gain some votes on the Space Coast, but I suspect that even residents there, whose livelihood has long depended on a healthy space program, will see through his blatant pandering.

Posted at 1:52 pm
Jan 26, 2012
 

NASA’s week of mourning begins tomorrow. The three deadly accidents in NASA’s history all happened in late January/early February. Apollo 1 caught fire 45 years ago on January 27, 1967, killing Grissom, White and Chaffee. Challenger exploded 26 years ago, on January 28, 1986, killing all seven on board. And Columbia broke up during reentry on February 1, 2003, just nine years ago, killing another seven people. Why these accidents all happened during the same calendar week remains a mystery.

Posted at 11:49 am
Jan 24, 2012
 

When I write about things like precision orbit determination, I often have to discuss the difference between ephemeris time (ET) and coordinated universal time (UTC). ET is a “clean” time scale: it is essentially the time coordinate of an inertial coordinate frame that is attached to the barycenter of the solar system. On the other hand, UTC is “messy”: it is the time kept by noninertial clocks sitting here on the surface of the Earth. But the fact that terrestrial clocks sit inside the Earth’s gravity well and are subject to acceleration is only part of the picture. There are also those blasted leap seconds. It is because of leap seconds that terrestrial atomic time (TAI) and UTC differ.

Leap seconds arise because we insist on using an inherently wobbly planet as our time standard. The Earth wobbles, sometimes unpredictably (for instance, after a major earthquake), and we mess with our clocks accordingly. Quite pointlessly, as a matter of fact. And now, we missed another chance to get rid of this abomination: the International Telecommunication Union failed to achieve consensus, and any decision is postponed until 2015.

For the curious, an approximate formula to convert between TAI and ET is given by \({\rm ET}-{\rm TAI}=32.184+1.657\times 10^{-3}\sin E\), where \(E=M+0.01671\sin M\), \(M=6.239996+1.99096871\times 10^{-7}t\), and \(t\) is the time in seconds since J2000 (that is, noon, January 1, 2000, TAI). To convert TAI to UTC, additional leap seconds must be added: 10 seconds for all dates prior to 1972, and then additional leap seconds depending on the date. Most inelegant.
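
Here is that approximation as a small Python function (a minimal sketch; the function name is mine, and \(t\) is TAI seconds past J2000):

    import math

    def et_minus_tai(t):
        # approximate ET - TAI in seconds, per the formula above
        M = 6.239996 + 1.99096871e-7 * t   # mean anomaly, radians
        E = M + 0.01671 * math.sin(M)      # eccentric anomaly, radians
        return 32.184 + 1.657e-3 * math.sin(E)

    # the difference is 32.184 s plus a periodic term of a couple of ms
    print(et_minus_tai(0.0))                     # ~32.1839
    print(et_minus_tai(0.25 * 365.25 * 86400))   # ~32.1857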

Speaking of leap this and that, I think it’s also high time to get rid of daylight savings time. Its benefits are dubious at best, and I find the practice unnecessarily disruptive.

Posted at 12:23 pm
Jan 22, 2012
 

A couple of weeks ago, I saw a blog comment somewhere that mentioned a book, Rad Decision, written by nuclear engineer James Aach.

Back in the late 1970s, when I got my hands on The Prometheus Crisis by Scortia and Robinson, I just couldn’t put the damn thing down; I read through the night and I finished the book by the morning. So naturally, I couldn’t resist the temptation to buy the last in-stock copy of Aach’s book on Amazon.ca.

And I am glad I did. My concerns that it would be a trashy, amateurishly written novel quickly dissipated. Indeed, in a sense it is a lot better than The Prometheus Crisis: the crisis in Aach’s book is far less dramatic, but the story is believable, the characters perhaps more credible.

My only concern: while this book teaches a lot about nuclear power (and why we should not fear it), its likely audience already knows. Those who would benefit the most from reading it, well, won’t.

Posted at 7:39 pm
Jan 15, 2012
 

Microsoft’s Windows 7 weather widget tells me that the temperature is -30 degrees Centigrade this morning in Ottawa. I know that this particular reading is an outlier (I don’t know where MSN get their reading from, but often it’s several degrees below that of others) but it’s still darn cold outside… even on our balcony it was -23 this morning. Welcome to Canada in January, I guess…

 

Posted at 9:52 am
Jan 8, 2012
 

Neutrinos recently observed by CERN’s OPERA experiment may have been traveling faster than light. Or they may not have. I have been discussing with physicists a number of possibilities: the role of statistics, errors in time or distance measurements, comparisons to SN 1987A, Cherenkov radiation, or the necessity for a Lorentz-violating theoretical framework.

Fortunately, there is one thing I did not need to discuss: how faster-than-light neutrinos relate to the Koran. Physics educators in Pakistan, such as Pervez Hoodbhoy writing for the Express Tribune, are not so lucky: they regularly face criticism from fundamentalists, and if they choose to confront it head-on, they provoke ominous reader comments that call on all Muslims to “reject this evil experiment”.

Yet, there is a glimmer of hope: a Pakistani reader mentions Carl Sagan’s The Demon-Haunted World, one of Sagan’s last books, and a superb one about rational thinking versus superstition. I don’t know how popular Sagan’s book is in Pakistan, but I am glad it’s not forgotten.

Posted at 5:29 pm
Jan 8, 2012
 

Has NASA nothing better to do than harass aging astronauts such as Jim Lovell who, some forty years after having survived a near-fatal accident in deep space (caused by NASA’s negligent storage and handling of an oxygen tank), is auctioning off a checklist containing his handwritten notes? A checklist that, had it remained in NASA’s possession, would likely have ended up in a dumpster decades ago?

This is so not kosher. Let Lovell sell his memorabilia in peace. If anyone has a right to do it, the survivors of Apollo 13 certainly do.

Posted at 1:12 pm