Apr 14 2012
 

I just came across this delightful imaginary conversation between a physicist and an economist about the unsustainability of perpetual economic growth.

The physicist uses energy production in his argument: growth at present rates means that in a few hundred years, we’ll produce enough energy to start boiling the oceans. And this is not something that can be addressed easily by the magic of technology. When waste heat is produced, the only way to get rid of it is to radiate it away into space. After about 1400 years of continuous growth, the Earth will be radiating more energy (all man-made) than the Sun, which means it would have to be a lot hotter than the Sun, on account of its smaller size. And in about 2500 years, we would exceed the thermal output of the whole Milky Way.
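The arithmetic behind these figures is easy to reconstruct (the specific inputs here are my own assumptions, not quoted from the conversation: continued growth at roughly 2.3% per year, i.e., about a factor of ten per century, a total solar output of \(\sim 4\times 10^{26}~{\rm W}\), and a Milky Way output of \(\sim 10^{37}~{\rm W}\)). Starting from today’s roughly \(1.5\times 10^{13}~{\rm W}\),

\begin{align}
t_\odot&\simeq\frac{\ln\left(4\times 10^{26}/1.5\times 10^{13}\right)}{\ln 1.023}\approx 1400~{\rm years},\\
t_{\rm MW}&\simeq\frac{\ln\left(10^{37}/1.5\times 10^{13}\right)}{\ln 1.023}\approx 2400~{\rm years},
\end{align}

in line with the figures quoted above.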

This, of course, is nonsense, which means that terrestrial energy production will eventually be capped by basic physics. If GDP were to continue growing nonetheless, it would mean that the price of energy relative to everything else drops toward zero. This is also nonsense, since a limited resource cannot become arbitrarily cheap. But that means GDP growth must also be capped.

What I liked about this argument is that it is not emotional or ideological; it’s not about hugging trees or hating capitalism. It is about basic physics and elementary logic that is difficult to escape. In fact, it can be put in the form of equations. Our present energy production \(P_0\) is approximately 15 TW, which is about 0.002% of the Sun’s output that reaches the Earth:

\begin{align}
P_0&\simeq 1.5 \times 10^{13}~\rm{W},\\
P_\odot&\simeq 7 \times 10^{17}~\rm{W},\\
\eta_0&=P_0/P_\odot \sim 0.002\%.
\end{align}

For any other value of \(\eta\), there is a corresponding value of \(P\):

\begin{align}
P=\eta P_\odot.
\end{align}

Now all we need is to establish a maximum value of \(\eta\) that we can live with; say, \(\eta_{\rm max}=1\%\). This tells us the maximum amount of energy that we can produce here on Earth without cooking ourselves:

\begin{align}
P_{\rm max}=\eta_{\rm max}P_\odot.
\end{align}

On the economic side of this argument, there is the percentage of GDP that is spent on energy. In the US, this is about 8%. For lack of a better value, let me stick to this one:

\begin{align}
\kappa_0\sim 8\%.
\end{align}

How low can \(\kappa\) get? That may be debatable, but it cannot become arbitrarily low. So there is a value \(\kappa_{\rm min}\).

The rest is just basic arithmetic. At any given price of energy, the amount spent on energy is proportional to the energy produced, and that amount is \(\kappa\) times GDP; in other words, GDP is proportional to the total energy produced, divided by \(\kappa\):

\begin{align}
{\rm GDP}&\propto \frac{\eta}{\kappa}P_\odot,\\
{\rm GDP}_{\rm max}&\propto \frac{\eta_{\rm max}}{\kappa_{\rm min}}P_\odot.
\end{align}

And in particular:

\begin{align}
{\rm GDP}_{\rm max}&=\frac{\eta_{\rm max}\kappa_0}{\eta_0\kappa_{\rm min}}{\rm GDP}_0,
\end{align}

where \({\rm GDP}_0\) is the present GDP.

We know \(\eta_0\sim 0.002\%\). We know \(\kappa_0=8\%\). We can guess that \(\eta_{\rm max}\lesssim 1\%\) and \(\kappa_{\rm min}\gtrsim 1\%\). This means that

\begin{align}
{\rm GDP}_{\rm max}\lesssim 4,000\times {\rm GDP}_0.
\end{align}
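Where does the factor of four thousand come from? Just plug in the numbers quoted above:

\begin{align}
\frac{\eta_{\rm max}\kappa_0}{\eta_0\kappa_{\rm min}}\lesssim\frac{1\%\times 8\%}{0.002\%\times 1\%}=\frac{1}{0.002}\times\frac{8}{1}=4,000.
\end{align}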

This is it. A hard limit imposed by thermodynamics. But hey… four thousand is a big number, isn’t it? Well… sort of. At a constant 3% rate of annual growth, the economy will increase to four thousand times its present size in a mere 280 years or so (\(\ln 4000/\ln 1.03\approx 280\)). One may tweak the numbers a little here and there, but the fact that physics imposes such a hard limit remains. The logic is inescapable.

Or is it? The word “escape” may be appropriate here for more than one reason, as there is one obvious way to evade this argument: escape into space. In a few hundred years, humanity may have spread throughout the solar system, and amounts of energy sufficient to boil the Earth’s oceans may be powering human colonies in the hostile (and cold!) environments near the outer planets.

That is, if humans are still around a few hundred years from now. One can only hope.

 Posted at 9:59 am
Apr 12 2012
 

Our second short paper has been accepted for publication in Physical Review Letters.

I have been involved with Pioneer 10 and 11 in some fashion since about 2002, when I first began corresponding with Larry Kellogg about the possibility of resurrecting the telemetry data set. It is thanks to Larry’s stamina and conscientiousness that the data set survived.

I have been involved actively in the research of the Pioneer anomaly since 2005. Seven years! Hard to believe.

This widely reported anomaly concerns the fact that when the orbits of Pioneer 10 and 11 are accurately modeled, a discrepancy exists between the modeled and measured frequency of the radio signal. This discrepancy can be resolved by assuming an unknown force that pushes Pioneer 10 and 11 towards the Earth or the Sun (from that far away, these two directions nearly coincide and cannot really be told apart).

One purpose of our investigation was to find out the magnitude of the force that arises as the spacecraft radiates different amounts of heat in different directions. This is the concept of a photon rocket. A ray of light carries momentum. Hard as it may appear to believe at first, when you hold a flashlight in your hands and turn it on, the flashlight will push your hand backwards by a tiny force. (How tiny? If it is a 1 W bulb that is perfectly efficient and perfectly focused, the force will be equivalent to about one third of one millionth of a gram of weight.)
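The arithmetic behind that number: a light beam of power \(P\) carries momentum at the rate \(P/c\), so the recoil force is

\begin{align}
F=\frac{P}{c}=\frac{1~{\rm W}}{3\times 10^8~{\rm m/s}}\simeq 3.3\times 10^{-9}~{\rm N},
\end{align}

which, divided by \(g\simeq 9.8~{\rm m/s^2}\), corresponds to the weight of roughly \(3\times 10^{-7}\) grams: one third of a millionth of a gram, indeed.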

On Pioneer 10 and 11, we have two main heat sources. First, there is electrical heat: all the instruments on board use about 100 W of electricity, most of which is converted into heat. Second, electricity is produced, very inefficiently, by a set of four radioisotope thermoelectric generators (RTGs); these produce more than 2 kW of waste heat. All this heat has to go somewhere, and most of it will be dissipated preferentially in one direction, behind the spacecraft’s large dish antenna, which is always pointed towards the Earth.

The controversial question was, how much? How efficiently is this heat converted into force?

I first constructed a viable thermal model for Pioneer 10 back in 2006. I presented results from custom ray-tracing code at the Pioneer Explorer Collaboration meeting at the International Space Science Institute in Bern, Switzerland in February 2007:

With this, I confirmed what had already been suspected by others—notably, Katz (Phys. Rev. Letters 83:9, 1892, 1999); Murphy (Phys. Rev. Letters 83:9, 1890, 1999); and Scheffer (Phys. Rev. D, 67:8, 084021, 2003)—that the magnitude of the thermal recoil force is indeed comparable to the anomalous acceleration. Moreover, I established that the thermal recoil force is very accurately described as a simple linear combination of the heat from two sources: electrical heat and heat from the RTGs. The thermal acceleration \(a\) is, in fact,

$$a=\frac{1}{mc}(\eta_{\rm rtg}P_{\rm rtg} + \eta_{\rm elec}P_{\rm elec}),$$

where \(c\simeq 300,000~{\rm km/s}\) is the speed of light, \(m\simeq 250~{\rm kg}\) is the mass of the spacecraft, \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~\rm {W}\) are the RTG heat and electrical heat, respectively, and \(\eta_{\rm rtg}\) and \(\eta_{\rm elec}\) are “efficiency factors”.

This simple force model is very useful because it can be incorporated directly into the orbital model of the spacecraft.

In the years since, the group led by Gary Kinsella constructed a very thorough and comprehensive model of the Pioneer spacecraft, using the same software tools (not to mention considerable expertise) that they use for “live” spacecraft. With this model, they were able to predict the thermal recoil force with the greatest accuracy possible, at different points along the trajectory of the spacecraft. The result can be compared directly to the acceleration that is “measured”; i.e., the acceleration that is needed to model the radio signal accurately:

In this plot, the step-function-like curve (thick line) is the acceleration deduced from the radio signal frequency. The data points with vertical error bars represent the recoil force calculated from the thermal model. They are rather close. The relatively large error bars are due primarily to the fact that we simply don’t know what happened to the white paint that coated the RTGs. The RTGs were sizzling hot even in deep space and were subjected to solar radiation (ultraviolet light and charged particles), so the properties of the paint may have changed significantly over time… we just don’t know how. The lower part of the plot shows just how well the radio signal is modeled; the average residual is less than 5 mHz. The actual frequency of the radio signal is 2 GHz, so this represents a modeling accuracy of better than one part in 100 billion, over the course of nearly 20 years.

In terms of the above-mentioned efficiency factors, the model of Gary’s group yielded \(\eta_{\rm rtg}=0.0104\) and \(\eta_{\rm elec}=0.406\).
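Plugging these numbers, together with the nominal values \(P_{\rm rtg}\sim 2~{\rm kW}\), \(P_{\rm elec}\sim 100~{\rm W}\) and \(m\simeq 250~{\rm kg}\) quoted above, into the simple force model gives a feel for the magnitude involved:

\begin{align}
a\simeq\frac{0.0104\times 2000~{\rm W}+0.406\times 100~{\rm W}}{250~{\rm kg}\times 3\times 10^8~{\rm m/s}}\approx 8\times 10^{-10}~{\rm m/s^2},
\end{align}

which is indeed comparable in magnitude to the anomalous acceleration deduced from the radio signal.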

But then, as I said, we also incorporated the thermal recoil force directly into the Doppler analysis that was carried out by Jordan Ellis. Jordan found the best fit at \(\eta_{\rm rtg}=0.0144\) and \(\eta_{\rm elec}=0.480\). These values are somewhat larger than those from the thermal model. But how much larger?

We found that the best way to answer this question was to plot the two results in the parameter space defined by these two efficiency factors:

The dashed ellipse here represents the estimates from the thermal model and their associated uncertainty. The ellipse is elongated horizontally, because the largest source of uncertainty, the degradation of RTG paint, affects only the \(\eta_{\rm rtg}\) factor.

The dotted ellipse represents the estimates from radio signal measurements. The formal error of these estimates is very small (the error ellipse would be invisibly tiny). These formal errors, however, are calculated by assuming that the error in every one of the tens of thousands of Doppler measurements arises independently. In reality, this is not the case: the Doppler measurements are insanely accurate; any errors that occur are the result of systematic mismodeling, e.g., caused by our inadequate knowledge of the solar system. This inflates the error ellipse, and that is what is shown in this plot.

Looking at this plot was what allowed us to close our analysis with the words, “We therefore conclude that at the present level of our knowledge of the Pioneer 10 spacecraft and its trajectory, no statistically significant acceleration anomaly exists.”

Are there any caveats? Not really, I don’t think, but there are still some unexplored questions: applying this research to Pioneer 11 (I expect no surprises there, but we have not done this in a systematic fashion); modeling the spin rate change of the two spacecraft; making use of radio signal strength measurements, which can give us clues about the precise orientation of the spacecraft; testing the paint that was used on the RTGs in a thermal vacuum chamber; and accounting for outgassing. These are all interesting issues, but it is quite unlikely that they will alter our main conclusion.

On several occasions when I gave talks about Pioneer, I used a slide that said, in big friendly letters,

PIONEER 10/11 ARE THE MOST PRECISELY NAVIGATED DEEP SPACE CRAFT TO DATE.

And they confirmed the predictions of Newton and Einstein, with spectacular accuracy, by measuring the gravitational field of the Sun in situ, all the way out to about 70 astronomical units (one astronomical unit being the distance of the Earth from the Sun).

 Posted at 11:10 am
Mar 20 2012
 

I am holding in my hands an amazing book. It is a big and heavy tome, coffee-table-book sized, with over 600 lavishly illustrated pages. It took more than 30 years for this book to finally appear in English, but the wait, I think, was well worth it.

The name of Charles Simonyi, Microsoft billionaire and space tourist, is fairly well known. What is perhaps less well-known in the English speaking world is that his father, Karoly Simonyi, was a highly respected professor of physics at the Technical University of Budapest… that is, until he was deprived of his livelihood by a communist regime that considered him ideologically unfit for a teaching position.

Undeterred, Simonyi then spent the next several years completing his magnum opus, A Cultural History of Physics, which was eventually published in 1978.

Simonyi was both a scientist and a humanist. In his remarkable, unique book, history and science march hand in hand from humble beginnings in Egypt, through the golden era of the classical world, through the not so dark Dark Ages, on to the scientific revolution that began in the 1600s and culminated in the discoveries of Lagrangian mechanics, thermodynamics, statistical physics, electromagnetism and, ultimately, relativity theory and quantum physics.

And when I say lavishly illustrated, I mean it. Illustrations, including diagrams, portraits, and facsimile pages from original publications, decorate nearly every single page of Simonyi’s tome. Yet it is fundamentally a book about physics: the wonderfully written narrative is well complemented by equations that translate ideas into the precise language of mathematics.

I once read this book, my wife’s well-worn copy, from cover to cover, back in the mid-1990s. I feel that it played a very significant role in helping me turn back towards physics.

Simonyi’s book has seen several editions in the original Hungarian, and it was also translated into German, but until now, no English-language translation was available. This is perhaps not surprising: it must be a very expensive book to produce, and despite its quality, the large number of equations must surely be a deterrent to many a prospective buyer. But now, CRC Press finally managed to make an English-language version available.

(Oh yes, CRC Press. I hated them for so many years, after they sued Wolfram and had Mathworld taken off-line. I still think that was a disgusting thing for them to do. I hope they spent enough on lawyers and lost enough sales due to disgusted customers to turn their legal victory into a Pyrrhic one. But that was more than a decade ago. Let bygones be bygones… besides, I really don’t like Wolfram these days that much anyway, software activation and all.)

Charles Simonyi played a major role in making this edition happen. I guess he may also have spent some of his own money. And while I am sure he can afford a loss, I hope the book does well… it deserves to be successful.

For some reason, the book was harder to obtain in Canada than usual. It is not available on amazon.ca; indeed, I pre-ordered the book last fall, but a few weeks ago, Amazon notified me that they were unable to deliver this item. Fortunately, CRC Press delivers in Canada, and the shipping is free, just like with Amazon. The book seems to be available and in stock on the US amazon.com Web site.

And it’s not a pricey one: at less than 60 dollars, it is quite cheap, actually. I think it’s well worth every penny. My only disappointment is that my copy was printed in India. I guess that’s one way to shave a few bucks off the production cost, but I would have happily paid more for a copy printed in the US or Canada.

 Posted at 4:38 pm
Mar 01 2012
 

Maxima is an open-source computer algebra system (CAS), and a damn good one at that if I may say so myself, being one of Maxima’s developers.

Maxima has top-notch tensor algebra capabilities, which can be used, among other things, to work with Lagrangian field theories.

This week, I am pleased to report, SourceForge chose Maxima as one of the featured open-source projects on their front page. No, it won’t make us rich and famous (not even rich or famous), but it is nice to be recognized.

 Posted at 9:35 am
Feb 27 2012
 

The cover story in a recent issue of New Scientist was titled Seven equations that rule your world, written by Ian Stewart.

I like Ian Stewart; I have several of his books on my bookshelf, including a 1978 Hungarian edition of his textbook, Catastrophe Theory and its Applications.

However, I disagree with his choice of equations. Stewart picked the four Maxwell equations, Schrödinger’s equation, the Fourier transform, and the wave equation:

\begin{align}
\nabla\cdot E&=0,\\
\nabla\times E&=-\frac{1}{c}\frac{\partial H}{\partial t},\\
\nabla\cdot H&=0,\\
\nabla\times H&=\frac{1}{c}\frac{\partial E}{\partial t},\\
i\hbar\frac{\partial}{\partial t}\psi&=\hat{H}\psi,\\
\hat{f}(\xi)&=\int\limits_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx,\\
\frac{\partial^2u}{\partial t^2}&=c^2\frac{\partial^2u}{\partial x^2}.
\end{align}

But these equations really aren’t that fundamental… and some rather fundamental equations are missing.

For starters, the four Maxwell equations really should just be two equations: given a smooth (or at least three times differentiable) vector field \(A\) in 4-dimensional spacetime, we define the electromagnetic field tensor \(F\) and current \(J\) as

\begin{align}
F&={\rm d}A,\\
J&=\star{\rm d}{\star{F}},
\end{align}

where the symbol \(\rm d\) denotes the exterior derivative and \(\star\) represents the Hodge dual. OK, these are not really trivial concepts from high school physics, but the main point is, we end up with a set of four Maxwell equations only because we (unnecessarily) split the equations into a three-dimensional and a one-dimensional part. Doing so also obscures some fundamental truths: notably that once the electromagnetic field is defined this way, its properties are inevitable mathematical identities, not equations imposed on the theoretician’s whim.
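In fact, in this language half of the usual Maxwell equations come entirely for free: because the exterior derivative satisfies \({\rm d}^2=0\), the field defined as \(F={\rm d}A\) automatically obeys

\begin{align}
{\rm d}F={\rm d}({\rm d}A)=0,
\end{align}

which is precisely the homogeneous pair (no magnetic monopoles and Faraday’s law of induction); all the remaining physical content resides in the single source equation \(J=\star{\rm d}\star F\).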

Moreover, the wave equation really is just a consequence of the Maxwell equations, and conveys no new information. It is not something you invent, but something you derive.

I really have no nit to pick with Schrödinger’s equation, but before moving on to quantum physics, I would have written down the Euler-Lagrange equation first. For a generic theory with positions \(q\) and time \(t\), this could be written as

$$\frac{\partial{\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}}=0,$$

where \({\cal L}\) is the Lagrangian, or Lagrange function (of \(q\) and \(\dot{q}\), and possibly \(t\)) that describes this particular physical system. The significance of this equation is that it can be derived from the principle of least action, and tells us everything about the evolution of a system. Once you know the generic positions \(q\) and their time derivatives (i.e., velocities) \(\dot{q}\) at some time \(t=t_0\), you can calculate them at any other time \(t\). This is why physics can be used to make predictions: for instance, if you know the initial position and velocity of a cannonball, you can predict its trajectory. The beauty of the Euler-Lagrange equation is that it works equally well for particles and for fields and can be readily generalized to relativistic theories; moreover, the principle of least action is an absolutely universal one, unifying, in a sense, classical mechanics, electromagnetism, nuclear physics, and even gravity. All these theories can be described by simply stating the corresponding Lagrangian. Even more astonishingly, the basic mathematical properties of the Lagrangian can be used to deduce fundamental physical laws: for instance, a Lagrangian that remains invariant under time translation leads to the law of energy conservation.
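To see the Euler-Lagrange equation at work in the simplest possible setting, take a single nonrelativistic particle with \({\cal L}=\tfrac{1}{2}m\dot{q}^2-V(q)\). Then

\begin{align}
\frac{\partial{\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}}=-\frac{\partial V}{\partial q}-m\ddot{q}=0,
\end{align}

which is just Newton’s second law, \(m\ddot{q}=-\partial V/\partial q\).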

The Euler-Lagrange equation remains valid in quantum physics, too. The big difference is that the quantities \(q\) are no longer simple numbers; they are non-commuting quantities, so-called “q-numbers”. These q-numbers sometimes coincide with ordinary numbers but more often, they do not. Most importantly, if \(q\) happens to be an ordinary number, \(\dot{q}\) cannot be, and vice versa. So the initial position and momentum of a quantum system cannot both be represented by numbers at the same time. Exact predictions are no longer possible.

We can still make approximate predictions though, by replacing the exact form of the Euler-Lagrange equation with a probabilistic prediction:

$$\xi(A\rightarrow B)=k\sum\limits_A^B\exp\left(\frac{i}{\hbar}\int_A^B{\cal L}\,{\rm d}t\right),$$

where \(\xi(A\rightarrow B)\) is a complex number called the probability amplitude, the squared modulus of which tells us the likelihood of the system changing from state \(A\) to state \(B\) and the summation is meant to take place over “all possible paths” from \(A\) to \(B\). Schrödinger’s equation can be derived from this, as indeed most of quantum mechanics. So this, then, would be my fourth equation.

Would I include the Fourier transform? Probably not. It offers a different way of looking at the same problem, but no new information content. Whether I investigate a signal in the time domain or the frequency domain, it is still the same signal; arguably, it is simply a matter of convenience as to which representation I choose.

However, Stewart left out at least one extremely important equation:

$$dU=TdS-pdV.$$

This is the fundamental equation of thermodynamics, connecting quantities such as the internal energy \(U\), the temperature \(T\), the entropy \(S\), and the medium’s equation of state (here represented by the pressure \(p\) and volume \(V\)). Whether one derives it from the first principles of axiomatic thermodynamics or from the postulates of statistical physics, the end result is the same: this is the equation that defines the arrow of time, for instance, as all the other fundamental equations of physics work the same even if the arrow of time is reversed.

Well, that’s five equations. What else would I include in my list? The choices, I think, are obvious. First, the definition of the Lagrangian for gravity:

$${\cal L}_\mathrm{grav}=R+2\Lambda,$$

where \(R\) is the Ricci curvature scalar that characterizes the geometry of spacetime and \(\Lambda\) is the cosmological constant.

Finally, the last equation would be, for the time being, the “standard model” Lagrangian that describes all forms of matter and energy other than gravity:

$${\cal L}_\mathrm{SM}=…$$

Its actual form is too unwieldy to reproduce here (as it combines the electromagnetic, weak, and strong nuclear fields, all the known quarks and leptons, and their interactions), and in all likelihood, it’s not the final version anyway: the existence of the Higgs boson is still an open question, and without the Higgs, the standard model would need to be modified.

The Holy Grail of fundamental physics, of course, is unification of these final two equations into a single, consistent framework, a true “theory of everything”.

 Posted at 1:18 pm
Feb 22 2012
 

Why exactly do we believe that stars and, more importantly, gas in the outer regions of spiral galaxies move in circular orbits? This assumption lies at the heart of the infamous galaxy rotation curve problem, as the circular orbital velocity for a spiral galaxy (whose visible mass is concentrated in the central bulge) should be proportional to the inverse square root of the distance from the center; instead, observed rotation curves are “flat”, meaning that the velocity remains approximately the same at various distances from the center.
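The expected fall-off is just Newtonian gravity applied to a circular orbit around a central mass \(M\): setting the centripetal acceleration equal to the gravitational acceleration,

\begin{align}
\frac{v^2}{r}=\frac{GM}{r^2}\quad\Longrightarrow\quad v=\sqrt{\frac{GM}{r}}\propto\frac{1}{\sqrt{r}},
\end{align}

whereas the observed, “flat” curves have \(v\approx{\rm const}\).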

So why do we assume that stars and gas move in circular orbits? Well, it turns out that one key bit of evidence is in a 32-year-old paper published by two Indian physicists: Radhakrishnan and Sarma (A&A 85, 1980) made observations of hydrogen gas in the direction of the center of the Milky Way, and found that the bulk of gas between the solar system and the central bulge has no appreciable radial velocity.

However, more recent observations may be contradicting this result. Just two years ago, the Radial Velocity Experiment (RAVE) survey (Siebert et al., MNRAS 412, 2010) found, using a sample of several hundred thousand relatively nearby stars, that a significant radial velocity exists, calling into question the simple model that assumes that circular orbits dominate.

 Posted at 10:03 pm
Feb 22 2012
 

So maybe neutrinos don’t travel faster than light after all.

Instead, if rumors are to be believed, it was a simple instrumentation problem. There is no official confirmation yet, but according to a statement that also appears on Nature’s news blog, the OPERA team is indeed investigating two problems related to a timer oscillator and an optical fiber connection.

A while back, I wrote that I could identify four possible broad categories for conventional explanations of the OPERA result:

  1. Incorrectly synchronized clocks;
  2. Incorrectly measured distance;
  3. Unaccounted-for delays in the apparatus;
  4. Statistical uncertainties.

Of these, #4 was already out, as the OPERA team verified their result using short-duration proton bunches that avoided the use of potentially controversial statistical methods. I never considered #2 a serious possibility, as highly accurate geographic localization is a well-established art. Having read and re-read the OPERA team’s description of how they synchronized clocks, I was prepared to discount #1 as well, but then again, incorrect synchronization can arise as a result of equipment failure, so would that fall under #1 or #3?

In any case, it looks like #3, with a dash of #1 perhaps. Once again, conventional physics prevails.

That is, if we can believe these latest rumors.

 Posted at 8:08 pm
Feb 16 2012
 

I always find these numbers astonishing.

The solar constant, the amount of energy received by a 1 square meter surface at 1 astronomical unit (AU) from the Sun, is roughly \(s=1.37~{\rm kW/m^2}\). Given that 1 AU is approximately 150 million kilometers, or \(r=1.5\times 10^{11}~{\rm m}\), the surface area of a 1 AU sphere surrounding the Sun would be \(A=4\pi r^2=2.8\times 10^{23}~{\rm m^2}\). Multiplied by the solar constant, we get \(P=sA=3.9\times 10^{26}~{\rm W}\), or the energy \(E=3.9\times 10^{26}~{\rm J}\) every second. Using Einstein’s infamous mass-energy formula \(E=mc^2\), where \(c=3\times 10^8~{\rm m/s}\), we can easily calculate how much mass is converted into energy: \(m=E/c^2=4.3\times 10^9~{\rm kg}\). Close to four and a half million tons.
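For those who like to check such numbers, here is the same arithmetic as a few lines of Python, using nothing beyond the rounded values already quoted above:

```python
import math

s = 1.37e3    # solar constant, W/m^2
r = 1.5e11    # 1 astronomical unit, m
c = 3.0e8     # speed of light, m/s

A = 4.0 * math.pi * r**2   # area of a 1 AU sphere around the Sun, ~2.8e23 m^2
P = s * A                  # total solar output, ~3.9e26 W
m = P / c**2               # mass converted into energy every second, ~4.3e9 kg

print(f"A = {A:.2e} m^2, P = {P:.2e} W, m = {m:.2e} kg/s")
```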

The dominant fusion process in the Sun is the proton-proton chain reaction, in which approximately 0.7% of the total mass of hydrogen is converted into energy. Thus 4.3 million tons of pure energy is equivalent to over 600 million tons of hydrogen fuel burned every second. (For comparison, the largest ever nuclear device, the Soviet Tsar Bomba, burned no more than a few hundred kilograms of hydrogen to produce a 50 megaton explosion.)

Fortunately, there is plenty where that came from. The total mass of the Sun is \(2\times 10^{30}~{\rm kg}\), so if the Sun were made entirely of hydrogen, it could burn for 100 billion years before running out of fuel. Now the Sun is not made entirely of hydrogen, and the fusion reaction slows down and eventually stops long before all the hydrogen is consumed, but we still have a few billion years of useful life left in our middle-aged star. A much bigger (pun intended) problem is that as our Sun ages, it will grow in size; in a mere billion years, the Earth may well become uninhabitable as a result, with the oceans boiling away. I wonder if it’s too early to start worrying about it just yet.

 Posted at 12:24 pm
Jan 24 2012
 

When I write about things like precision orbit determination, I often have to discuss the difference between ephemeris time (ET) and coordinated universal time (UTC). ET is a “clean” time scale: it is essentially the time coordinate of an inertial coordinate frame that is attached to the barycenter of the solar system. On the other hand, UTC is “messy”: it is the time kept by noninertial clocks sitting here on the surface of the Earth. But the fact that terrestrial clocks sit inside the Earth’s gravity well and are subject to acceleration is only part of the picture. There are also those blasted leap seconds. It is because of leap seconds that international atomic time (TAI) and UTC differ.

Leap seconds arise because we insist on using an inherently wobbly planet as our time standard. The Earth wobbles, sometimes unpredictably (for instance, after a major earthquake), and we mess with our clocks. Quite pointlessly, as a matter of fact. And now we have missed another chance to get rid of this abomination: the International Telecommunication Union failed to achieve consensus, and any decision is postponed until 2015.

For the curious, an approximate formula to convert between TAI and ET is given by \({\rm ET}-{\rm TAI}=32.184+1.657\times 10^{-3}\sin E\), where \(E=M+0.01671\sin M\), \(M=6.239996+1.99096871\times 10^{-7}t\), and \(t\) is the time in seconds since J2000 (that is, noon, January 1, 2000, TAI). To convert TAI to UTC, additional leap seconds must be added: 10 seconds for all dates prior to 1972, and then additional leap seconds depending on the date. Most inelegant.
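For those who want to play with it, here is the same approximation as a couple of lines of Python (a minimal sketch of the formula quoted above; the leap second table needed for a full TAI-to-UTC conversion is not included):

```python
import math

def et_minus_tai(t):
    """Approximate ET - TAI in seconds.

    t: time in seconds since J2000 (noon, January 1, 2000, TAI).
    """
    M = 6.239996 + 1.99096871e-7 * t   # mean anomaly of the Earth's orbit, radians
    E = M + 0.01671 * math.sin(M)      # eccentric anomaly, radians
    return 32.184 + 1.657e-3 * math.sin(E)

# Example: the offset half a year after J2000
print(et_minus_tai(0.5 * 365.25 * 86400))   # ~32.18 s, plus a periodic term of ~1.7 ms
```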

Speaking of leap this and that, I think it’s also high time to get rid of daylight savings time. Its benefits are dubious at best, and I find the practice unnecessarily disruptive.

 Posted at 12:23 pm
Jan 22 2012
 

A couple of weeks ago, somewhere I saw a blog comment that mentioned a book, Rad Decision, written by nuclear engineer James Aach.

Back in the late 1970s, when I got my hands on The Prometheus Crisis by Scortia and Robinson, I just couldn’t put the damn thing down; I read through the night and I finished the book by the morning. So naturally, I couldn’t resist the temptation to buy the last in-stock copy of Aach’s book on Amazon.ca.

And I am glad I did. My concerns that it would be a trashy, amateurishly written novel quickly dissipated. Indeed, in a sense it is a lot better than The Prometheus Crisis: the crisis in Aach’s book is far less dramatic, but the story is believable, the characters perhaps more credible.

My only concern: while this book teaches a lot about nuclear power (and why we should not fear it), its likely audience already knows. Those who would benefit the most from reading it, well, won’t.

 Posted at 7:39 pm
Jan 08 2012
 

Neutrinos recently observed by CERN’s OPERA experiment may have been traveling faster than light. Or may have not. I have been discussing with physicists a number of possibilities: the role of statistics, errors in time or distance measurements, comparisons to SN 1987A, Cherenkov radiation, or the necessity for a Lorentz-violating theoretical framework.

Fortunately, there is one thing I did not need to discuss: How faster-than-light neutrinos relate to the Koran. Physics educators in Pakistan, such as Pervez Hoodbhoy writing for the Express Tribune, are not this lucky: they regularly face criticisms from fundamentalists, and if they choose to confront these head-on, they provoke ominous reader comments that call on all Muslims to “reject this evil experiment”.

Yet, there is a glimpse of hope: a Pakistani reader mentions Carl Sagan’s The Demon-Haunted World, one of Sagan’s last books, and a superb one about rational thinking versus superstition. I don’t know how popular Sagan’s book is in Pakistan, but I am glad it’s not forgotten.

 Posted at 5:29 pm
Dec 14 2011
 

So I am reading details about the on-going search for the Higgs boson at the LHC. The media hailed the announcements this week as evidence that the hunt is nearing its goal… however, this is by no means conclusive, and instinctively, I’d be inclined to come to the opposite conclusion.

The Higgs boson, if it exists as predicted, can decay into many things. It can decay into two photons. Just such a decay, consistent with a Higgs particle that is about 130 times heavier than a proton, was in fact observed by two of the LHC’s detectors, CMS:

and Atlas:

So far so good, but these signals are weak, far from conclusive. Never mind, both CMS and Atlas observed another slight peak. A Higgs particle can, in principle, also decay into two Z-bosons. Indeed, such a decay may be indicated by CMS (that ever so small bump near the extreme left of the plot):

and again, Atlas:

And on top of that, there is yet another decay mode, the Higgs particle decaying into a pair of W-bosons, but it is very difficult to see if anything exists at the extreme left of this plot:

So why does this leave me skeptical? Simple. First, we know that the ZZ and WW decay modes are far more likely than the diphoton (γγ) decay.

So naively, I would expect that if the signal is strong enough to produce noticeable bumps in the diphoton plot, very strong peaks should have been observed already in the ZZ and WW graphs. Instead, we see signals there that are even weaker than the bumps in the diphoton plots. While this is by no means rock-solid proof that the Higgs does not exist, it makes me feel suspicious. Second… well, suppose that the Higgs does not exist. We always knew that the low-energy region, namely the region that is still under consideration (the possibility of a Higgs that is heavier than 130 GeV is essentially excluded), is where the Higgs search is the most difficult. So if no Higgs exists, this is precisely how we would expect the search to unfold: narrowing down the search window towards lower energies, just as the data becomes noisier and more and more bumps appear that could be misread as a Higgs that’s just not there.

Then again, I could just be whistling in the dark. We won’t know until we know… and that “until” is at least another year’s worth of data that is to be collected at the LHC. Patience, I guess, is a virtue.

 Posted at 9:02 pm
Nov 18 2011
 

The latest OPERA results are in and they are very interesting. They used extremely tight bunches of protons this time, with a pulse width of only a few nanoseconds:

These bunches allowed the team to correlate individual neutrino events with the bunches that originated them. This is what they saw:

Numerically, the result is 62.1 ± 3.7 ns, consistent with their previously claimed result.

In my view, there are four possible categories of things that could have gone wrong with the OPERA experiment:

  1. Incorrectly synchronized clocks;
  2. Incorrectly measured distance;
  3. Unaccounted-for delays in the apparatus;
  4. Statistical uncertainties.

Because this new result does not rely on the statistical averaging of a large number of events, item 4 is basically out. One down, three to go.

 Posted at 8:45 pm
Nov 07 2011
 

This is the Perimeter Institute, the picture taken from the spectacularly large balcony of my PI-issued apartment.

 

 

Yes, I am in Waterloo again.

 Posted at 2:52 pm
Oct 30 2011
 

I’ve been skeptical about the validity of the OPERA faster-than-light neutrino result, but I’ve been equally skeptical about some of the naive attempts to explain it. Case in point: in recent days, a supposed explanation (updated here) has been widely reported in the popular press, and it had to do with a basic omission concerning the relativistic motion of GPS satellites. An omission that almost certainly did not take place… after all, experimentalists aren’t idiots. (That said, they may have missed a subtle statistical effect, such as a small difference in beam composition between the leading and trailing edges of a pulse. In any case, the neutrino spectrum should have been altered by Cherenkov-type radiation through neutral current weak interactions.)

 Posted at 1:12 pm
Sep 25 2011
 

Maybe I’ve been watching too much Doctor Who lately.

Many of my friends asked me about the faster-than-light neutrino announcement from CERN. I must say I am skeptical. One reason why I am skeptical is that no faster-than-light effect was observed in the case of supernova 1987A, which exploded in the Large Magellanic Cloud some 170,000 light years from here. Had there been such an effect of the magnitude supposedly observed at CERN, neutrinos from this supernova would have arrived years before visible light, but that was not the case. Yes, there are ways to explain away this (the neutrinos in question have rather different energy levels) but these explanations are not necessarily very convincing.
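To put a rough number to this (using the roughly 730 km CERN-Gran Sasso baseline and the roughly 60 ns early arrival claimed by OPERA; neither figure is quoted above):

\begin{align}
\frac{\delta v}{c}\sim\frac{60~{\rm ns}}{730~{\rm km}/c}\approx 2.5\times 10^{-5},\qquad
170{,}000~{\rm years}\times 2.5\times 10^{-5}\approx 4~{\rm years}.
\end{align}

In other words, the supernova neutrinos should have shown up roughly four years early.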

Another reason, however, is that faster-than-light neutrinos would be eminently usable in a technological sense; if it is possible to emit and observe them, it is almost trivial to build a machine that sends a signal in a closed timelike loop, effectively allowing us to send information from the future to the present. In other words, future me should be able to send present me a signal, preferably with the blueprints for the time machine of course (why do all that hard work if I can get the blueprints from future me for free?) So, I said, if faster-than-light neutrinos exist, then future me should contact present me in three…, two…, one…, now! Hmmm… no contact. No faster-than-light neutrinos, then.

But that’s when I suddenly remembered an uncanny occurrence that happened to me just hours earlier, yesterday morning. We ran out of bread, and we were also out of the little mandarin or clementine oranges that I like to have with my breakfast. So I took a walk, visiting our favorite Portuguese bakery on Nelson street, with a detour to the nearby Loblaws supermarket. On my way, I walked across a small parking lot, where I suddenly spotted something: a mandarin orange on the ground. I picked it up… it seemed fresh and completely undamaged. Precisely what I was going out for. Was it just a coincidence? Or perhaps future me was trying to send a subtle signal to present me about the feasibility of time machines?

If it’s the latter, maybe future me watched too much Doctor Who, too. Next time, just send those blueprints.

 Posted at 12:43 pm
Sep 13 2011
 

Now is the time to panic! At least this was the message I got from CNN yesterday, when it announced the breaking news: an explosion occurred at a French nuclear facility.

I decided to wait for the more sobering details. I didn’t have to wait long, thanks to Nature (the science journal, not mother Nature). They kindly informed me that “[…] the facility has been in operation since 1999. It melts down lightly-irradiated scrap metal […] It also incinerates low-level waste” and, most importantly, that “The review indicates that the specific activity of the waste over a ten-year period is 200×10^9 Becquerels. For comparison, that’s less than a millionth the radioactivity estimated to have been released by Fukushima […]”

Just to be clear, this is not the amount of radioactivity released by the French site in this accident. This is the total amount of radioactivity processed by this site in 12 years. No radioactivity was released by the accident yesterday.

These facts did not prevent the inevitable: according to Nature, “[t]he local paper Midi Libre is already reporting that several green groups are criticizing the response to the accident.” These must be the same green groups that just won’t be content until we have all climbed back up the trees and stopped farting.

Since I mentioned facts, here are two more numbers:

  • Number of people killed by the Fukushima quake: ~16,000 (with a further ~4,000 missing)
  • Number of people killed by the Fukushima nuclear power station meltdowns: 0

All fear nuclear power! Panic now!

 

 Posted at 3:45 pm
Aug 12 2011
 

I am reading a very interesting paper by Mishra and Singh. In it, they claim that simply accounting for the gravitational quadrupole moment in a matter-filled universe would naturally produce the same gravitational equations of motion that we have been investigating with John Moffat these past few years. If true, this work would imply that our Scalar-Tensor-Vector Gravity (STVG) is in fact an effective theory (which is not necessarily surprising). Its vector and scalar degrees of freedom may arise as a result of an averaging process. The fact that they recover not only the STVG acceleration law but also the correct numerical value of at least one of the STVG constants suggests that this may be more than a mere coincidence. Needless to say, I am intrigued.

 Posted at 2:15 am