Mar 20 2012
 

I am holding in my hands an amazing book. It is a big, heavy tome, coffee-table sized, with over 600 lavishly illustrated pages. It took more than 30 years for this book to finally appear in English, but the wait, I think, was well worth it.

The name of Charles Simonyi, Microsoft billionaire and space tourist, is fairly well known. What is perhaps less well known in the English-speaking world is that his father, Karoly Simonyi, was a highly respected professor of physics at the Technical University of Budapest… that is, until he was deprived of his livelihood by a communist regime that considered him ideologically unfit for a teaching position.

Undeterred, Simonyi then spent the next several years completing his magnum opus, A Cultural History of Physics, which was eventually published in 1978.

Simonyi was both a scientist and a humanist. In his remarkable, unique book, history and science march hand in hand from humble beginnings in Egypt, through the golden era of the classical world, through the not so dark Dark Ages, on to the scientific revolution that began in the 1600s and culminated in the discoveries of Lagrangian mechanics, thermodynamics, statistical physics, electromagnetism and, ultimately, relativity theory and quantum physics.

And when I say lavishly illustrated, I mean it. Illustrations, including diagrams, portraits, and facsimile pages from original publications, decorate nearly every single page of Simonyi’s tome. Yet it is fundamentally a book about physics: the wonderfully written narrative is well complemented by equations that translate ideas into the precise language of mathematics.

I once read this book, my wife’s well-worn copy, from cover to cover, back in the mid-1990s. I feel that it played a very significant role in helping me turn back towards physics.

Simonyi’s book has seen several editions in the original Hungarian, and it was also translated into German, but until now, no English-language translation was available. This is perhaps not surprising: it must be a very expensive book to produce, and despite its quality, the large number of equations must surely be a deterrent to many a prospective buyer. But now, CRC Press has finally managed to make an English-language version available.

(Oh yes, CRC Press. I hated them for so many years, after they sued Wolfram and had Mathworld taken off-line. I still think that was a disgusting thing for them to do. I hope they spent enough on lawyers and lost enough sales due to disgusted customers to turn their legal victory into a Pyrrhic one. But that was more than a decade ago. Let bygones be bygones… besides, I really don’t like Wolfram these days that much anyway, software activation and all.)

Charles Simonyi played a major role in making this edition happen. I guess he may also have spent some of his own money. And while I am sure he can afford a loss, I hope the book does well… it deserves to be successful.

For some reason, the book was harder to obtain in Canada than usual. It is not available on amazon.ca; indeed, I pre-ordered the book last fall, but a few weeks ago, Amazon notified me that they were unable to deliver this item. Fortunately, CRC Press delivers in Canada, and the shipping is free, just like with Amazon. The book seems to be available and in stock on the US amazon.com Web site.

And it’s not a pricey one: at less than 60 dollars, it is quite cheap, actually. I think it’s well worth every penny. My only disappointment is that my copy was printed in India. I guess that’s one way to shave a few bucks off the production cost, but I would happily have paid more for a copy printed in the US or Canada.

 Posted by at 4:38 pm
Mar 01 2012
 

Maxima is an open-source computer algebra system (CAS), and a damn good one at that, if I may say so myself, being one of Maxima’s developers.

Maxima has, among other things, top-notch tensor algebra capabilities, which can be used to work with Lagrangian field theories.

This week, I am pleased to report, SourceForge chose Maxima as one of the featured open-source projects on their front page. No, it won’t make us rich and famous (not even rich or famous) but it is nice to be recognized.

 Posted by at 9:35 am
Feb 27 2012
 

The cover story in a recent issue of New Scientist was titled Seven equations that rule your world, written by Ian Stewart.

I like Ian Stewart; I have several of his books on my bookshelf, including a 1978 Hungarian edition of his textbook, Catastrophe Theory and its Applications.

However, I disagree with his choice of equations. Stewart picked the four Maxwell equations, Schrödinger’s equation, the Fourier transform, and the wave equation:

\begin{align}
\nabla\cdot E&=0,\\
\nabla\times E&=-\frac{1}{c}\frac{\partial H}{\partial t},\\
\nabla\cdot H&=0,\\
\nabla\times H&=\frac{1}{c}\frac{\partial E}{\partial t},\\
i\hbar\frac{\partial}{\partial t}\psi&=\hat{H}\psi,\\
\hat{f}(\xi)&=\int\limits_{-\infty}^{\infty}f(x)e^{-2\pi ix\xi}dx,\\
\frac{\partial^2u}{\partial t^2}&=c^2\frac{\partial^2u}{\partial x^2}.
\end{align}

But these equations really aren’t that fundamental… and some rather fundamental equations are missing.

For starters, the four Maxwell equations really should just be two equations: given a smooth (or at least three times differentiable) vector field \(A\) in 4-dimensional spacetime, we define the electromagnetic field tensor \(F\) and current \(J\) as

\begin{align}
F&={\rm d}A,\\
J&=\star{\rm d}{\star{F}},
\end{align}

where the symbol \(\rm d\) denotes the exterior derivative and \(\star\) represents the Hodge dual. OK, these are not really trivial concepts from high school physics, but the main point is, we end up with a set of four Maxwell equations only because we (unnecessarily) split the equations into a three-dimensional and a one-dimensional part. Doing so also obscures some fundamental truths: notably that once the electromagnetic field is defined this way, its properties are inevitable mathematical identities, not equations imposed on the theoretician’s whim.

Moreover, the wave equation is really just a consequence of the Maxwell equations, and conveys no new information. It is not something you invent, but something you derive.

I really have no nit to pick with Schrödinger’s equation, but before moving on to quantum physics, I would have written down the Euler-Lagrange equation first. For a generic theory with positions \(q\) and time \(t\), this could be written as

$$\frac{\partial{\cal L}}{\partial q}-\frac{d}{dt}\frac{\partial{\cal L}}{\partial\dot{q}}=0,$$

where \({\cal L}\) is the Lagrangian, or Lagrange function (of \(q\) and \(\dot{q}\), and possibly \(t\)) that describes this particular physical system. The significance of this equation is that it can be derived from the principle of least action, and tells us everything about the evolution of a system. Once you know the generic positions \(q\) and their time derivatives (i.e., velocities) \(\dot{q}\) at some time \(t=t_0\), you can calculate them at any other time \(t\). This is why physics can be used to make predictions: for instance, if you know the initial position and velocity of a cannonball, you can predict its trajectory. The beauty of the Euler-Lagrange equation is that it works equally well for particles and for fields and can be readily generalized to relativistic theories; moreover, the principle of least action is an absolutely universal one, unifying, in a sense, classical mechanics, electromagnetism, nuclear physics, and even gravity. All these theories can be described by simply stating the corresponding Lagrangian. Even more astonishingly, the basic mathematical properties of the Lagrangian can be used to deduce fundamental physical laws: for instance, a Lagrangian that remains invariant under time translation leads to the law of energy conservation.
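For readers who like to experiment, the Euler-Lagrange machinery is easy to play with in a computer algebra system. Here is a minimal Python/SymPy sketch; the one-dimensional harmonic oscillator is my own toy example, not one discussed above:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
x = sp.Function('x')

# Lagrangian of a one-dimensional harmonic oscillator: L = T - V
L = m * sp.diff(x(t), t)**2 / 2 - k * x(t)**2 / 2

# The Euler-Lagrange equation dL/dq - d/dt(dL/dq') = 0:
eq, = euler_equations(L, x(t), t)
print(eq)  # equivalent to m*x'' = -k*x, i.e. Hooke's law
```

The resulting equation of motion, \(m\ddot{x}=-kx\), follows entirely from the single scalar function \({\cal L}\); swapping in a different Lagrangian yields a different dynamical law by the exact same recipe.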

The Euler-Lagrange equation remains valid in quantum physics, too. The big difference is that the quantities \(q\) are no longer simple numbers; they are non-commuting quantities, so-called “q-numbers”. These q-numbers sometimes coincide with ordinary numbers but more often, they do not. Most importantly, if \(q\) happens to be an ordinary number, \(\dot{q}\) cannot be, and vice versa. So the initial position and momentum of a quantum system cannot both be represented by numbers at the same time. Exact predictions are no longer possible.

We can still make approximate predictions though, by replacing the exact form of the Euler-Lagrange equation with a probabilistic prediction:

$$\xi(A\rightarrow B)=k\sum\limits_A^B\exp\left(\frac{i}{\hbar}\int_A^B{\cal L}\right),$$

where \(\xi(A\rightarrow B)\) is a complex number called the probability amplitude, the squared modulus of which tells us the likelihood of the system changing from state \(A\) to state \(B\) and the summation is meant to take place over “all possible paths” from \(A\) to \(B\). Schrödinger’s equation can be derived from this, as indeed most of quantum mechanics. So this, then, would be my fourth equation.

Would I include the Fourier transform? Probably not. It offers a different way of looking at the same problem, but no new information content. Whether I investigate a signal in the time domain or the frequency domain, it is still the same signal; arguably, it is simply a matter of convenience as to which representation I choose.
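To illustrate the point (with NumPy's discrete FFT standing in for the continuous transform), a transform followed by its inverse is a lossless round trip; the representation changes, the information does not:

```python
import numpy as np

rng = np.random.default_rng(42)
signal = rng.standard_normal(256)          # an arbitrary real "signal"

spectrum = np.fft.fft(signal)              # time domain -> frequency domain
recovered = np.fft.ifft(spectrum).real     # and back again

# The round trip is lossless, up to floating-point rounding:
print(np.allclose(signal, recovered))      # True
```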

However, Stewart left out at least one extremely important equation:

$$dU=TdS-pdV.$$

This is the fundamental equation of thermodynamics, connecting quantities such as the internal energy \(U\), the temperature \(T\), the entropy \(S\), and the medium’s equation of state (here represented by the pressure \(p\) and volume \(V\)). Whether one derives it from the first principles of axiomatic thermodynamics or from the postulates of statistical physics, the end result is the same: this is the equation that defines the arrow of time, for instance, as all the other fundamental equations of physics work the same even if the arrow of time is reversed.

Well, that’s five equations. What else would I include in my list? The choices, I think, are obvious. First, the definition of the Lagrangian for gravity:

$${\cal L}_\mathrm{grav}=R+2\Lambda,$$

where \(R\) is the Ricci curvature scalar that characterizes the geometry of spacetime and \(\Lambda\) is the cosmological constant.

Finally, the last equation would be, for the time being, the “standard model” Lagrangian that describes all forms of matter and energy other than gravity:

$${\cal L}_\mathrm{SM}=…$$

Its actual form is too unwieldy to reproduce here (as it combines the electromagnetic, weak, and strong nuclear fields, all the known quarks and leptons, and their interactions), and in all likelihood, it’s not the final version anyway: the existence of the Higgs boson is still an open question, and without the Higgs, the standard model would need to be modified.

The Holy Grail of fundamental physics, of course, is unification of these final two equations into a single, consistent framework, a true “theory of everything”.

 Posted by at 1:18 pm
Feb 22 2012
 

Why exactly do we believe that stars and, more importantly, gas in the outer regions of spiral galaxies move in circular orbits? This assumption lies at the heart of the infamous galaxy rotation curve problem, as the circular orbital velocity for a spiral galaxy (whose visible mass is concentrated in the central bulge) should be proportional to the inverse square root of the distance from the center; instead, observed rotation curves are “flat”, meaning that the velocity remains approximately the same at various distances from the center.
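For concreteness, here is that Keplerian expectation in a few lines of Python; the central mass is an illustrative number of my own choosing, not a measured value:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 1.4e41           # kg, illustrative bulge mass (~7e10 solar masses)
kpc = 3.086e19       # one kiloparsec in meters

def v_circ(r):
    """Circular orbital velocity around a central point mass."""
    return math.sqrt(G * M / r)

# v should fall off as 1/sqrt(r): doubling the radius divides v by sqrt(2).
# Flat rotation curves show no such falloff.
print(v_circ(10 * kpc) / v_circ(20 * kpc))   # 1.414...
```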

So why do we assume that stars and gas move in circular orbits? Well, it turns out that one key bit of evidence is in a 32-year-old paper that was published by two Indian physicists: Radhakrishnan and Sarma (A&A 85, 1980) made observations of hydrogen gas in the direction of the center of the Milky Way, and found that the bulk of gas between the solar system and the central bulge has no appreciable radial velocity.

However, more recent observations may be contradicting this result. Just two years ago, the Radial Velocity Experiment (RAVE) survey (Siebert et al., MNRAS 412, 2010) found, using a sample of several hundred thousand relatively nearby stars, that a significant radial velocity exists, putting into question the simple model that assumes that circular orbits dominate.

 Posted by at 10:03 pm
Feb 22 2012
 

So maybe neutrinos don’t travel faster than light after all.

Instead, if rumors are to be believed, it was a simple instrumentation problem. There is no official confirmation yet, but according to a statement that also appears on Nature’s news blog, the OPERA team is indeed investigating two problems related to a timer oscillator and an optical fiber connection.

A while back, I wrote that I could identify four possible broad categories for conventional explanations of the OPERA result:

  1. Incorrectly synchronized clocks;
  2. Incorrectly measured distance;
  3. Unaccounted-for delays in the apparatus;
  4. Statistical uncertainties.

Of these, #4 was already out, as the OPERA team verified their result using short-duration proton bunches that avoided the use of potentially controversial statistical methods. I never considered #2 a serious possibility, as highly accurate geographic localization is a well-established art. Having read and re-read the OPERA team’s description of how they synchronized clocks, I was prepared to discount #1 as well, but then again, incorrect synchronization can arise as a result of equipment failure, so would that fall under #1 or #3?

In any case, it looks like #3, with a dash of #1 perhaps. Once again, conventional physics prevails.

That is, if we can believe these latest rumors.

 Posted by at 8:08 pm
Feb 16 2012
 

I always find these numbers astonishing.

The solar constant, the amount of energy received by a 1 square meter surface at 1 astronomical unit (AU) from the Sun, is roughly s = 1.37 kW/m². Given that 1 AU is approximately 150 million kilometers, or r = 1.5 × 10¹¹ m, the surface area of a 1 AU sphere surrounding the Sun would be A = 4πr² = 2.8 × 10²³ m². Multiplied by the solar constant, this gives the Sun’s total power output, P = sA = 3.9 × 10²⁶ W; in other words, the Sun radiates the energy E = 3.9 × 10²⁶ J every second. Using Einstein’s infamous mass-energy formula E = mc², where c = 3 × 10⁸ m/s, we can easily calculate how much mass is converted into energy each second: m = E/c² = 4.3 × 10⁹ kg. Close to four and a half million tons.

The dominant fusion process in the Sun is the proton-proton chain reaction, in which approximately 0.7% of the total mass of hydrogen is converted into energy. Thus a mass loss of 4.3 million tons per second is equivalent to over 600 million tons of hydrogen fuel burned every second. (For comparison, the largest ever nuclear device, the Soviet Tsar Bomba, burned no more than a few hundred kilograms of hydrogen to produce a 50 megaton explosion.)

Fortunately, there is plenty where that came from. The total mass of the Sun is 2 × 10³⁰ kg, so if the Sun were made entirely of hydrogen, it could burn for 100 billion years before running out of fuel. Now the Sun is not made entirely of hydrogen, and the fusion reaction slows down and eventually stops long before all the hydrogen is consumed, but we still have a few billion years of useful life left in our middle-aged star. A much bigger (pun intended) problem is that as our Sun ages, it will grow in size; in a mere billion years, the Earth may well become uninhabitable as a result, with the oceans boiling away. I wonder if it’s too early to start worrying about it just yet.
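The arithmetic of the last few paragraphs is easy to reproduce; here it is as a short Python sketch, using the same rounded values as the text:

```python
import math

s = 1.37e3            # solar constant, W/m^2
r = 1.5e11            # 1 astronomical unit, m
c = 3e8               # speed of light, m/s

A = 4 * math.pi * r**2          # area of a 1 AU sphere, ~2.8e23 m^2
P = s * A                       # total solar output, ~3.9e26 W
m_per_s = P / c**2              # mass converted per second, ~4.3e9 kg

burn_rate = m_per_s / 0.007     # hydrogen burned per second, ~6e11 kg
M_sun = 2e30                    # solar mass, kg
lifetime_years = M_sun / burn_rate / 3.15e7   # ~1e11 years

print(f"{m_per_s:.2e} kg/s converted; {lifetime_years:.2e} year hydrogen supply")
```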

 Posted by at 12:24 pm
Jan 24 2012
 

When I write about things like precision orbit determination, I often have to discuss the difference between ephemeris time (ET) and coordinated universal time (UTC). ET is a “clean” time scale: it is essentially the time coordinate of an inertial coordinate frame that is attached to the barycenter of the solar system. On the other hand, UTC is “messy”: it is the time kept by noninertial clocks sitting here on the surface of the Earth. But the fact that terrestrial clocks sit inside the Earth’s gravity well and are subject to acceleration is only part of the picture. There are also those blasted leap seconds. It is because of leap seconds that terrestrial atomic time (TAI) and UTC differ.

Leap seconds arise because we insist on using an inherently wobbly planet as our time standard. The Earth wobbles, sometimes unpredictably (for instance, after a major earthquake) and we mess with our clocks. Quite pointlessly, as a matter of fact. And now, we missed another chance to get rid of this abomination: the International Telecommunication Union failed to achieve consensus, and any decision is postponed until 2015.

For the curious, an approximate formula to convert between TAI and ET is given by ET − TAI = 32.184 + 1.657×10⁻³ sin E, where E = M + 0.01671 sin M, M = 6.239996 + 1.99096871×10⁻⁷ t, and t is the time in seconds since J2000 (that is, noon, January 1, 2000, TAI). To convert TAI to UTC, additional leap seconds must be added: 10 seconds for all dates prior to 1972, and then additional leap seconds depending on the date. Most inelegant.
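The approximate formula quoted above translates directly into code; a sketch, with t in TAI seconds past J2000:

```python
import math

def et_minus_tai(t):
    """Approximate ET - TAI in seconds, t seconds past J2000 (TAI)."""
    M = 6.239996 + 1.99096871e-7 * t   # Earth's mean anomaly, radians
    E = M + 0.01671 * math.sin(M)      # approximate eccentric anomaly
    return 32.184 + 1.657e-3 * math.sin(E)

# The periodic term is tiny: ET - TAI stays within ~1.7 ms of 32.184 s,
# oscillating with a one-year period due to Earth's orbital eccentricity.
print(et_minus_tai(0.0))
```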

Speaking of leap this and that, I think it’s also high time to get rid of daylight saving time. Its benefits are dubious at best, and I find the practice unnecessarily disruptive.

 Posted by at 12:23 pm
Jan 22 2012
 

A couple of weeks ago, I saw a blog comment somewhere that mentioned a book, Rad Decision, written by nuclear engineer James Aach.

Back in the late 1970s, when I got my hands on The Prometheus Crisis by Scortia and Robinson, I just couldn’t put the damn thing down; I read through the night and I finished the book by the morning. So naturally, I couldn’t resist the temptation to buy the last in-stock copy of Aach’s book on Amazon.ca.

And I am glad I did. My concerns that it would be a trashy, amateurishly written novel quickly dissipated. Indeed, in a sense it is a lot better than The Prometheus Crisis: the crisis in Aach’s book is far less dramatic, but the story is believable, the characters perhaps more credible.

My only concern: while this book teaches a lot about nuclear power (and why we should not fear it), its likely audience already knows. Those who would benefit the most from reading it, well, won’t.

 Posted by at 7:39 pm
Jan 08 2012
 

Neutrinos recently observed by CERN’s OPERA experiment may have been traveling faster than light. Or they may not have been. I have been discussing a number of possibilities with physicists: the role of statistics, errors in time or distance measurements, comparisons to SN 1987A, Cherenkov radiation, and the necessity for a Lorentz-violating theoretical framework.

Fortunately, there is one thing I did not need to discuss: how faster-than-light neutrinos relate to the Koran. Physics educators in Pakistan, such as Pervez Hoodbhoy writing for the Express Tribune, are not so lucky: they regularly face criticism from fundamentalists, and if they choose to confront it head-on, they provoke ominous reader comments that call on all Muslims to “reject this evil experiment”.

Yet, there is a glimpse of hope: a Pakistani reader mentions Carl Sagan’s The Demon-Haunted World, one of Sagan’s last books, and a superb one about rational thinking versus superstition. I don’t know how popular Sagan’s book is in Pakistan, but I am glad it’s not forgotten.

 Posted by at 5:29 pm
Dec 14 2011
 

So I am reading details about the ongoing search for the Higgs boson at the LHC. The media hailed the announcements this week as evidence that the hunt is nearing its goal… however, this is by no means conclusive, and instinctively, I’d be inclined to come to the opposite conclusion.

The Higgs boson, if it exists as predicted, can decay into many things. It can decay into two photons. Just such a decay, consistent with a Higgs particle that is about 130 times heavier than a proton, was in fact observed by two of the LHC’s detectors, CMS:

and ATLAS:

So far so good, but these signals are weak, far from conclusive. Never mind: both CMS and ATLAS observed another slight peak. A Higgs particle can, in principle, also decay into two Z-bosons. Indeed, such a decay may be indicated by CMS (that ever so small bump near the extreme left of the plot):

and again, ATLAS:

And on top of that, there is yet another decay mode, the Higgs particle decaying into a pair of W-bosons, but it is very difficult to see if anything exists at the extreme left of this plot:

So why does this leave me skeptical? Simple. First, we know that the ZZ and WW decay modes are far more likely than the diphoton (γγ) decay.

So naively, I would expect that if the signal is strong enough to produce noticeable bumps in the diphoton plot, very strong peaks should already have been observed in the ZZ and WW graphs. Instead, we see signals there that are even weaker than the bumps in the diphoton plots. While this is by no means rock-solid proof that the Higgs does not exist, it makes me suspicious. Second… well, suppose that the Higgs does not exist. We always knew that it is in the low-energy region, namely the region that is still under consideration (a Higgs heavier than about 130 GeV is essentially excluded), that the Higgs search is most difficult. So if no Higgs exists, this is precisely how we would expect the search to unfold: the search window narrowing towards lower energies, just as the data become noisier and more and more bumps appear that could be misread as a Higgs that simply isn’t there.

Then again, I could just be whistling in the dark. We won’t know until we know… and that “until” is at least another year’s worth of data that is to be collected at the LHC. Patience, I guess, is a virtue.

 Posted by at 9:02 pm
Nov 18 2011
 

The latest OPERA results are in and they are very interesting. They used extremely tight bunches of protons this time, with a pulse width of only a few nanoseconds:

These bunches allowed the team to correlate individual neutrino events with the bunches that originated them. This is what they saw:

Numerically, the result is 62.1 ± 3.7 ns, consistent with their previously claimed result.

In my view, there are four possible categories of things that could have gone wrong with the OPERA experiment:

  1. Incorrectly synchronized clocks;
  2. Incorrectly measured distance;
  3. Unaccounted-for delays in the apparatus;
  4. Statistical uncertainties.

Because this new result does not rely on the statistical averaging of a large number of events, item 4 is basically out. One down, three to go.

 Posted by at 8:45 pm
Nov 07 2011
 

This is the Perimeter Institute, the picture taken from the spectacularly large balcony of my PI-issued apartment.

 

 

Yes, I am in Waterloo again.

 Posted by at 2:52 pm
Oct 30 2011
 

I’ve been skeptical about the validity of the OPERA faster-than-light neutrino result, but I’ve been equally skeptical about some of the naive attempts to explain it. Case in point: in recent days, a supposed explanation (updated here) has been widely reported in the popular press, having to do with a basic omission concerning the relativistic motion of GPS satellites. An omission that almost certainly did not take place… after all, experimentalists aren’t idiots. (That said, they may have missed a subtle statistical effect, such as a small difference in beam composition between the leading and trailing edges of a pulse. In any case, the neutrino spectrum should have been altered by Cherenkov-type radiation through neutral-current weak interactions.)

 Posted by at 1:12 pm
Sep 25 2011
 

Maybe I’ve been watching too much Doctor Who lately.

Many of my friends asked me about the faster-than-light neutrino announcement from CERN. I must say I am skeptical. One reason why I am skeptical is that no faster-than-light effect was observed in the case of supernova 1987A, which exploded in the Large Magellanic Cloud, some 170,000 light years from here. Had there been an effect of the magnitude supposedly observed at CERN, neutrinos from this supernova would have arrived years before the visible light, but that was not the case. Yes, there are ways to explain this away (the neutrinos in question had rather different energy levels) but these explanations are not necessarily very convincing.
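The arithmetic behind that argument is trivial. The OPERA result corresponded to a fractional speedup of roughly 2.5×10⁻⁵ (my rounding, from the ~60 ns early arrival over the ~730 km baseline); over the distance of SN 1987A, that lead accumulates to years:

```python
distance_ly = 170_000          # distance to SN 1987A, light years
fractional_speedup = 2.5e-5    # approximate (v - c)/c implied by OPERA

# A particle covering distance d at speed c*(1 + eps) arrives early by
# roughly eps * (d/c), i.e. eps times the light travel time:
early_arrival_years = distance_ly * fractional_speedup
print(early_arrival_years)     # ~4.25 years
```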

Another reason, however, is that faster-than-light neutrinos would be eminently usable in a technological sense; if it is possible to emit and observe them, it is almost trivial to build a machine that sends a signal in a closed timelike loop, effectively allowing us to send information from the future to the present. In other words, future me should be able to send present me a signal, preferably with the blueprints for the time machine of course (why do all that hard work if I can get the blueprints from future me for free?) So, I said, if faster-than-light neutrinos exist, then future me should contact present me in three…, two…, one…, now! Hmmm… no contact. No faster-than-light neutrinos, then.

But that’s when I suddenly remembered an uncanny occurrence that happened to me just hours earlier, yesterday morning. We ran out of bread, and we were also out of the little mandarin or clementine oranges that I like to have with my breakfast. So I took a walk, visiting our favorite Portuguese bakery on Nelson street, with a detour to the nearby Loblaws supermarket. On my way, I walked across a small parking lot, where I suddenly spotted something: a mandarin orange on the ground. I picked it up… it seemed fresh and completely undamaged. Precisely what I was going out for. Was it just a coincidence? Or perhaps future me was trying to send a subtle signal to present me about the feasibility of time machines?

If it’s the latter, maybe future me watched too much Doctor Who, too. Next time, just send those blueprints.

 Posted by at 12:43 pm
Sep 13 2011
 

Now is the time to panic! At least this was the message I got from CNN yesterday, when it announced the breaking news: an explosion occurred at a French nuclear facility.

I decided to wait for the more sobering details. I didn’t have to wait long, thanks to Nature (the science journal, not mother Nature). They kindly informed me that “[…] the facility has been in operation since 1999. It melts down lightly-irradiated scrap metal […] It also incinerates low-level waste” and, most importantly, that “The review indicates that the specific activity of the waste over a ten-year period is 200×10⁹ becquerels. For comparison, that’s less than a millionth the radioactivity estimated to have been released by Fukushima […]”

Just to be clear, this is not the amount of radioactivity released by the French site in this accident. This is the total amount of radioactivity processed by this site in 12 years. No radioactivity was released by the accident yesterday.

These facts did not prevent the inevitable: according to Nature, “[t]he local paper Midi Libre is already reporting that several green groups are criticizing the response to the accident.” These must be the same green groups that just won’t be content until we all climbed back up the trees and stopped farting.

Since I mentioned facts, here are two more numbers:

  • Number of people killed by the Fukushima quake: ~16,000 (with a further ~4,000 missing)
  • Number of people killed by the Fukushima nuclear power station meltdowns: 0

All fear nuclear power! Panic now!

 

 Posted by at 3:45 pm
Aug 12 2011
 

I am reading a very interesting paper by Mishra and Singh. In it, they claim that simply accounting for the gravitational quadrupole moment in a matter-filled universe would naturally produce the same gravitational equations of motion that we have been investigating with John Moffat these past few years. If true, this work would imply that our Scalar-Tensor-Vector Gravity (STVG) is in fact an effective theory (which is not necessarily surprising). Its vector and scalar degrees of freedom may arise as a result of an averaging process. The fact that they not only recover the STVG acceleration law but also the correct numerical value of at least one of the STVG constants suggests that this may be more than a mere coincidence. Needless to say, I am intrigued.

 Posted by at 2:15 am
Aug 05 2011
 

As I’ve been asked about this more than once before, I thought I’d write down an answer to a simple question concerning the Pioneer spacecraft: if the “thermal hypothesis”, namely that the spacecraft are decelerating due to the heat they radiate, is true, how come this deceleration diminishes more rapidly, with a half-life of 20-odd years, than the primary heat source on board, which is plutonium-238 fuel with a half-life of 87.74 years?

The answer is simple: there are other half-lives on board. Notably, the half-life of the efficiency of the thermocouples that convert the heat of plutonium into electricity.

Now most of that heat from plutonium is simply wasted; it is radiated away, and while it may produce a recoil force, it does so with very low efficiency, say, 1%. The thermocouples convert about 6% of the heat into electricity, but as the plutonium fuel cools and the thermocouples age, their efficiency decreases (this is in fact measurable, as telemetry tells us exactly how much electricity was generated on board at any given moment). All that electrical energy has to go somewhere… and indeed it does, powering all the on-board instrumentation which, like a home computer, ultimately turns all the energy it consumes into heat. This heat is radiated away, and it is in fact converted into a recoil force with an efficiency of about 40%.

These are all the numbers we need. The recoil force, then, will be proportional to 1% of the directly radiated waste heat (100% − 6% = 94% of the total) plus 40% of the electrical power (6% of the total), where the total thermal power is, say, 2500 W at the beginning. The total power will decrease at a rate of \(2^{-T/87.74}\), so after \(T\) years, it will be \(2500\times 2^{-T/87.74}\) W. As to the thermocouple efficiency, its half-life may be around 30 years; so the electrical conversion efficiency goes from 6% to \(6\times 2^{-T/30.0}\)% after \(T\) years.

So the overall recoil force can be calculated as being proportional to

$$P(T)=2500\times 2^{-T/87.74}\times\left\{\left[1-0.06\times 2^{-T/30.0}\right]\times 0.01+0.06\times 2^{-T/30.0}\times 0.4\right\}.$$

(This actually gives a result in watts. To convert it into an actual force, we need to divide by the speed of light, 300,000,000 m/s.) With a bit of simple algebra, this formula can be simplified to

$$P(T)=25.0\times 2^{-T/87.74}+58.5\times 2^{-T/22.36}.$$

The most curious thing about this result is that the recoil force is dominated by a term that has a half-life of only 22.36 years… which is less than the half-life of either the plutonium fuel or the thermocouple efficiency.

The numbers I used are not the actual numbers from telemetry (though they are not too far from reality) but this calculation still demonstrates the fallacy of the argument that just because the power source has a specific half-life, the thermal recoil force must have the same half-life.
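The simplification above is also easy to check numerically. A short Python sketch, using the same illustrative numbers as the text (not actual telemetry):

```python
def recoil_power(T):
    """Heat (W) driving the recoil force, T years after launch."""
    heat = 2500.0 * 2 ** (-T / 87.74)   # plutonium heat, 87.74-year half-life
    eff = 0.06 * 2 ** (-T / 30.0)       # decaying thermocouple efficiency
    # 1% of the directly radiated waste heat, 40% of the electrical heat:
    return heat * ((1.0 - eff) * 0.01 + eff * 0.4)

def recoil_power_simplified(T):
    """The same quantity, rewritten as a sum of two decaying terms."""
    return 25.0 * 2 ** (-T / 87.74) + 58.5 * 2 ** (-T / 22.36)

for T in (0.0, 10.0, 20.0, 30.0):
    force_N = recoil_power(T) / 3e8     # divide by c to get the force
    print(T, recoil_power(T), recoil_power_simplified(T), force_N)
```

Over four decades the two expressions agree to better than a tenth of a percent, confirming the algebra; note how the second, faster-decaying term dominates the total.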

 Posted by at 12:46 pm
Jun 29 2011
 

The headline on CNN tonight reads, “An American Fukushima?” The topic: the possibility of wildfires reaching the nuclear laboratories at Los Alamos. The guest? Why, it’s Michio Kaku again!

What I first yelled in exasperation, I shall not repeat here, because I don’t want my blog to be blacklisted for obscenity. Besides… I am still using Kaku’s superb Quantum Field Theory, one of the best textbooks on the topic, so I still have some residual respect for him. But the way he is prostituting himself on television, hyping and sensationalizing nuclear accidents… or non-accidents, as the case might be… It is simply disgusting.

Dr. Kaku, in the unlikely case my blog entry catches your attention, here’s some food for thought. The number of people who died in Japan’s once-in-a-millennium megaquake and subsequent tsunami: tens of thousands. The number of people who died as a result of the Fukushima meltdowns: ZERO. Thank you for your attention.

 Posted by at 12:14 am