May 31, 2011
 

One of the biggest challenges in our research of the Pioneer Anomaly was the recovery of old mission data. It is purely by chance that most of the mission data could be recovered; documents were saved from the dumpster, data was read from deteriorating tapes, old formats were reconstructed using fragmented information. If only there had been a dedicated effort to save all raw mission data, our job would have been much easier.

This is why I read with alarm that there are currently no plans to save all the raw data from the Tevatron. This is really inexcusable. So what if the data amount to 20 petabytes? In this day and age, even that is not excessive… a petabyte is just over 300 of the 3 TB hard drives that are currently the highest-capacity drives on the market. If I can afford to have more than 0.03 petabytes of storage here in my home office, surely the US Government can find a way to fund the purchase and maintenance of a data storage facility with a few thousand hard drives, in order to preserve data that American taxpayers paid many millions of dollars to produce in the first place.
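The arithmetic is easy to check. A quick back-of-the-envelope sketch (assuming decimal units, 1 PB = 1000 TB, and 3 TB drives):

```python
# Back-of-the-envelope check of the storage estimate.
# Assumes decimal units (1 PB = 1000 TB) and 3 TB drives.
import math

DRIVE_TB = 3          # largest consumer drive at the time, in TB
PETABYTE_TB = 1000    # 1 PB in TB (decimal)

drives_per_pb = math.ceil(PETABYTE_TB / DRIVE_TB)
tevatron_pb = 20
drives_for_tevatron = math.ceil(tevatron_pb * PETABYTE_TB / DRIVE_TB)

print(drives_per_pb)        # → 334 drives per petabyte
print(drives_for_tevatron)  # → 6667 drives for 20 PB
```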

 Posted by at 3:37 pm
Apr 08, 2011
 

I just finished reading Tommaso Dorigo’s excellent blog post about the new results from Fermilab. The bottom line:

  • There is a reason why a 3-σ result is not usually accepted as proof of discovery;
  • The detected signal is highly unlikely to be a Higgs particle;
  • It may be something exotic going beyond the Standard Model, such as a Z′ neutral vector boson;
  • Or, it may yet turn out to be nothing, a modeling artifact that will eventually go away after further analysis.

Interesting times.

 Posted by at 10:31 am
Mar 19, 2011
 

Let me preface this with… I have huge respect for eminent physicist Michio Kaku, whose 1993 textbook, Quantum Field Theory: A Modern Introduction, continues to occupy a prominent place on my “primary” bookshelf, right above my workstation.

But… I guess that was before Kaku began writing popular science books and became a television personality.

Today he appeared on CNN and astonished me by suggesting that the best course of action is to bury and entomb Fukushima like they did with Chernobyl.

Never mind that in Chernobyl, the problem was a raging graphite fire that had to be put out. Never mind that Chernobyl had no containment building to begin with. Never mind that in Chernobyl, there was a “criticality incident”, a runaway chain reaction, whereas in Fukushima, the problem is decay heat. Never mind that in Chernobyl, the problem was localized to a single reactor, whereas in Fukushima, it is several reactors and also waste fuel pools that are threatened. Never mind that the critical problem at Fukushima is the complete loss of electrical power. Never mind that a single chunk of burning graphite flying out of the Chernobyl inferno probably carried more radioactivity than the total amount released by Fukushima after it’s all over.

Who cares about the actual facts when you can make dramatic statements on television about calling in the air force of the Red Army, and peddle your latest book at the same time?

I do not wish to use my blog to speak ill of a physicist whom I respect, but I think Dr. Kaku’s comments are unfounded, inappropriate, sensationalist, and harmful. I feel very disappointed, offended even; it’s one thing to hear this kind of stuff from the mouths of ignorant journalists or pundits, but someone like Dr. Kaku really, really should know better.

 Posted by at 12:56 am
Feb 21, 2011
 

I have written several papers concerning the possible contribution of heat emitted by radioisotope thermoelectric generators (RTGs) to the anomalous acceleration of the Pioneer spacecraft. Doubtless I’ll write some more.

But those RTGs used for space missions number only a handful, and with the exception of those that fell back to the Earth (and were safely recovered) they are all a safe distance away (a very long way away indeed) from the Earth.

However, RTGs were also used here on the ground. In fact, according to a report I just finished reading, a ridiculously high number of them, some 1500, were deployed by the former Soviet Union to power remote lighthouses, navigation beacons, meteorological stations, and who knows what else. These installations are unguarded, and the RTGs themselves are not tamper-proof. Many have ended up in the hands of scrap metal scavengers (some of whom actually died after receiving a lethal dose of radiation), some sank to the bottom of the sea, some remain exposed to the elements with their radioactive core compromised. Worse yet, unlike their counterparts in the US space program which used plutonium, these RTGs use strontium-90 as their power source; strontium is absorbed by the body more readily than plutonium, so my guess is, exposure to strontium is even more hazardous than exposure to plutonium.

The report is a few years old, so perhaps things have improved a little since then. Or perhaps they have gotten worse… who knows how many radioactive power sources have since found their way into unauthorized hands.

 Posted by at 5:17 pm
Dec 16, 2010
 

It’s official: the work we are doing on the Pioneer Anomaly qualifies as popular science according to Popular Science, as they just published a feature article about it.

I admit that it was with a strong sense of apprehension that I began reading the piece. What you say to a journalist and what appears in print are often not very well correlated, as politicians know all too well. My apprehension was not completely unjustified, as the article contains some (minor) technical errors, misquotes us slightly in places, and what is perhaps most troubling, some of the work that it attributes to us was done by others (e.g., thermal engineers at JPL). These flaws notwithstanding (and this article fares better than most that appeared in recent years, I think), it is nice to have one’s efforts recognized.

 Posted by at 12:20 am
Nov 21, 2010
 

This sunset above an eerie landscape of orange-lit clouds looked much nicer to the naked eye than it looks in a picture:

Yes, it means that I am back home. As a matter of fact, I arrived back home some 2.5 hours early. My flight from Mexico City landed 15 minutes early in Toronto, and after dashing through the airport like crazy, I managed to make it to an earlier Ottawa flight… which had ONE (!) seat left. Talk about luck.

It was an interesting conference. Useful discussions, good people to meet. I had a chance to talk about MOG cosmology to a not altogether unfriendly audience.

Still, it’s good to be home. Sleep in my own bed and all that.

 Posted by at 6:16 am
Sep 12, 2010
 

John Moffat and I now have a bet.

Perhaps in the not too distant future, quantum entanglement will be testable over greater distances, possibly involving spacecraft. Good.

Now John believes that these tests will eventually show that entanglement will be attenuated at greater distances. This would mean, in my mind, that entanglement involves the transmission of a real, physical (albeit superluminal) signal from one of the entangled particles to the other.

I disagreed rather strongly; if such attenuation were observed, it’d certainly turn whatever little I think I understand about quantum theory upside down.

Just to be clear, it’s not something John feels too strongly about, so we didn’t bet a great deal of money. John recently bet a great deal more on the non-existence of the Higgs boson. No, not with me… on that subject, we are in complete agreement, as I also do not believe that the Higgs boson exists.

 Posted by at 1:09 am
Sep 03, 2010
 

Electroweak theory has several coupling constants: there is g, there is g′, there is e, and then there is the Weinberg angle θ_W and its sine and cosine, and I am always worried about making mistakes.

Well, here’s a neat way to remember: the three constants and the Weinberg angle have a nice geometrical relationship (as should be evident from the fact that the Weinberg angle is just a measure of the abstract rotation that is used to break the symmetry of the massless theory).

This diagram also makes it clear that so long as you keep the triangle a right triangle, all it takes is two numbers (e.g., e and θW) and the triangle is fully determined. This is true even when the coupling constants are running.
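For the record, the relationships the triangle encodes are e = g·sin θ_W = g′·cos θ_W; equivalently, e is the altitude to the hypotenuse of a right triangle with legs g and g′. A quick numerical sketch (the input values below are rough approximate numbers assumed for illustration, not exact):

```python
# Numerical sketch of the electroweak coupling relations:
#   g = e / sin(θ_W),   g' = e / cos(θ_W),
# which is the altitude-on-the-hypotenuse relation of a right triangle
# with legs g and g': e = g·g'/sqrt(g² + g'²).
# The input values are rough approximate numbers, assumed for illustration.
import math

e = 0.3028            # electromagnetic coupling, ≈ sqrt(4πα)
sin2_thetaW = 0.2312  # sin²θ_W (approximate)

thetaW = math.asin(math.sqrt(sin2_thetaW))
g  = e / math.sin(thetaW)  # SU(2) coupling
gp = e / math.cos(thetaW)  # U(1) hypercharge coupling

# e is the altitude to the hypotenuse of the (g, g') right triangle:
altitude = g * gp / math.sqrt(g**2 + gp**2)
print(round(g, 4), round(gp, 4), round(altitude, 4))
```

Given any two of the numbers (say, e and θ_W), the right triangle, and hence the remaining couplings, are fully determined.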

 Posted by at 7:53 pm
Jul 24, 2010
 

I have been reading the celebrated biography of Albert Einstein by Walter Isaacson, and in it, the chapter about Einstein’s beliefs and faith. In particular, the question of free will.

In Einstein’s deterministic universe, according to Isaacson, there is no room for free will. In contrast, physicists who accepted quantum mechanics as a fundamental description of nature could point at quantum uncertainty as proof that non-deterministic systems exist and thus free will is possible.

I boldly disagree with both views.

First, I look out my window at a nearby intersection where there is a set of traffic lights. This set is a deterministic machine. To determine its state, the machine responds to inputs such as the reading of an internal clock, the presence of a car in a left-turn lane, or the pressing of a button by a pedestrian who wishes to cross the street. Now suppose I incorporate into the system a truly random element, such as a relay that closes depending on whether an atomic decay process takes place or not. So now the light set is not deterministic anymore: sometimes it provides a green light allowing a vehicle to turn left, sometimes not; sometimes it responds to a pedestrian pressing the crossing button, sometimes not. So… does this mean that my set of traffic lights suddenly acquired free will? Of course not. A pair of dice does not have free will either.

On the other hand, suppose I build a machine with true artificial intelligence. It has not happened yet but I have no doubt that it is going to happen. Such a machine would acquire information about its environment (i.e., “learn”) while it executes its core program (its “instincts”) to perform its intended function. Often, its decisions would be quite unpredictable, but not because of any quantum randomness. They are unpredictable because even if you knew the machine’s initial state in full detail, you’d need another machine even more complex than this one to model it and accurately predict its behavior. Furthermore, the machine’s decisions will be influenced by many things, possibly involving an attempt to comply with accepted norms of behavior (i.e., “ethics”) if it helps the machine accomplish the goals of its core programming. Does this machine have free will? I’d argue that it does, at least insofar as the term has any meaning.

And that, of course, is the problem. We all think we know what “free will” means, but is that true? Can we actually define a “decision making system with free will”? Perhaps not. Think about an operational definition: given an internal state I and external inputs E, a free will machine will make decision D. Of course the moment you have this operational definition, the machine ceases to have what we usually think of as free will, its behavior being entirely deterministic. And no, a random number generator does not help in this case either. It may change the operational definition to something like, given internal state I and external inputs E, the machine will make decision Di with probability Pi, the sum of all Pi-s being 1. But it cannot be this randomization of decisions that bestows a machine with free will; otherwise, our traffic lights here at the corner could have free will, too.
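Just to make the operational definitions concrete, here is a toy sketch in code (the states, inputs, and probabilities are all invented for illustration):

```python
# Toy illustration of the operational definitions discussed above.
# A deterministic policy maps (internal state I, external inputs E) to a
# decision D; a randomized policy maps (I, E) to decision D_i with
# probability P_i, the P_i summing to 1. Neither obviously deserves to
# be called free will. All names and values are invented for illustration.
import random

def deterministic_policy(state, inputs):
    # Given I and E, always the same D.
    return "green" if inputs.get("pedestrian_button") else "red"

def randomized_policy(state, inputs, rng=random):
    # Given I and E, decision D_i with probability P_i (ΣP_i = 1).
    return rng.choices(["green", "red"], weights=[0.7, 0.3])[0]

# The deterministic machine is perfectly predictable...
assert all(deterministic_policy({}, {"pedestrian_button": True}) == "green"
           for _ in range(100))
# ...and the randomized one is merely unpredictable, like a pair of dice.
print(randomized_policy({}, {"pedestrian_button": True}))
```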

So perhaps the question about free will fails for the simple reason that free will is an ill-defined and possibly self-contradictory concept. Perhaps it’s just another grammatically correct phrase that has no more actual meaning than, say, “true falsehood” or “a number that is odd and even” or “the fourth side of a triangle”.

 Posted by at 1:36 am
May 28, 2010
 

This Homer Simpson is one smart fellow. While he was trying to compete with Edison as an inventor, he accidentally managed to discover the mass of the Higgs boson, disprove Fermat’s last theorem, discover that we live in a closed universe, and he was doing a bit of topology, too.

His Higgs mass estimate is a tad off, though. The jury is still out on whether or not the Higgs exists, but its mass is definitely not around 775 GeV.

 Posted by at 4:36 am
May 15, 2010
 

It seems that the German news magazine Spiegel managed to do the impossible: provide an impartial, balanced assessment of the story behind Climategate.

And by “balanced”, I don’t mean balanced in the American journalist’s sense, giving equal weight to both sides, no matter how ludicrous one side happens to be compared to the other, but balanced in the sense of not taking sides, not assuming guilt, and assessing the faults of all the participants regardless of which side they represent.

What I am reading is very discouraging. Climate science should really be called climate politics, with a little bit of science thrown in just to provide fodder for arguments. Meanwhile, both proponents and opponents of climate change sometimes fail to get even the basic physics right; as a minor example, recently I felt compelled to write a short paper about the proper use of the virial theorem in a planetary atmosphere, after reading way too much uninformed discussion by supposed experts online.

Of course way too much is at stake. Trillions of dollars, for starters, and quite possibly the future of our planet. Could it be that this compelled some good people to embellish the truth a little? If that is the case, they did a huge disservice to the very cause that they champion. By compromising the one currency science really has, its objectivity, they increased the likelihood that the public won’t listen to them just when it matters most, should it prove to be the case that real sacrifices are necessary to keep the planet habitable.

That is not to say that taking climate scientists to court is the right answer. If that’s the cure, it’s worse than the disease. Worse yet, it will only ensure more entrenched positions and more secrecy, justifying the hostility towards “deniers”. That is not the way to do science. Informed skepticism should be welcome, but skepticism should be about questioning methods and deductions, not the honesty and integrity of researchers. Will climate science ever be like this? I sure hope so, otherwise we’re all in very deep trouble.

 Posted by at 4:41 am
May 06, 2010
 

Here’s an idea that only Dr. Strangelove, Edward Teller, or the Communist Party of the Soviet Union could come up with: nuke that oil leak at the bottom of the Gulf of Mexico. Apparently, it has been done before, and only one out of five attempts was unsuccessful. So how about that, folks? What’s a bit of radioactivity when you have an 80% success rate?

 Posted by at 7:54 pm
Apr 30, 2010
 

When I started this here blog site, my intent was to write a lot about physics. I ended up writing a lot less about physics than I wanted to, in part because a lot of the physics I’m thinking about is “work-in-progress” which would not be appropriate to write about until, well, until it is appropriate to write about it!

But, there are a few exceptions. Lately, I’ve been thinking a lot about scalar-tensor gravity. Indeed, as I am waiting for the completion of a virus scan (could my recent computer troubles have been caused by a virus? I now took out my computer’s hard drive, put it in an external enclosure, and I am scanning it using a “known good” computer) I am thinking about it now.

Einstein’s gravity theory (tensor gravity) can be written up using the Lagrangian formalism. This is the infamous Einstein-Hilbert Lagrangian, which takes the form L = [(−1/16πG)(R + 2Λ) + L_M]√−g, where G is the gravitational constant, R is the so-called curvature scalar, Λ is the cosmological constant, g is the determinant of the metric, and L_M is the Lagrangian representing matter.

In one of the simplest modifications of Einstein’s gravity, Jordan-Brans-Dicke theory, the gravitational constant G is promoted from constant to field: it becomes variable, and a “kinetic term” is added to the Lagrangian representing the kinetic energy carried by this scalar field.

In this theory, gravity is still determined by the geometry of space-time. However, in addition to matter, there is this scalar field (which carries mass-energy and is thus a further source of gravity in addition to matter.) Then, this scalar field also determines the strength of coupling between matter and space-time (i.e., the extent to which a unit mass of matter bends space-time.)

Now it so happens that it is possible to transform away this variable gravitational constant and make it truly constant by a mathematical transformation called a conformal transformation. Basically, it amounts to reparameterizing space-time in such a way that the value of the gravitational constant becomes the same everywhere. (This transformation is described as switching from the Jordan frame to the Einstein frame.) However, this transformation is not without cost. As we transform away the coupling between the geometry of space-time and the scalar field, we end up introducing a variable coupling between the matter Lagrangian LM and the scalar field. The physics is now different! The geometry of space-time is now determined by a fixed coupling constant as in Einstein’s theory, but the trajectory of matter is no longer determined by geometry alone: there is an extra force, a so-called scalar force, acting on matter.
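To make this concrete, here is the transformation written out in one common convention (conventions vary between authors, and I am omitting boundary terms):

```latex
% Jordan frame: the scalar field phi ~ 1/G multiplies the curvature,
% while matter couples to the metric alone:
\mathcal{L}_J = \frac{1}{16\pi}\left(\phi R
  - \frac{\omega}{\phi}\,g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi\right)\sqrt{-g}
  + \mathcal{L}_M(g_{\mu\nu},\psi)\sqrt{-g}.

% Conformal transformation to the Einstein frame:
\tilde g_{\mu\nu} = \phi\,g_{\mu\nu}.

% Einstein frame: a fixed coupling in front of \tilde R, but the matter
% Lagrangian now depends on phi, which is the origin of the scalar force:
\mathcal{L}_E = \frac{1}{16\pi}\left(\tilde R
  - \left(\omega+\tfrac{3}{2}\right)
    \frac{\tilde g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi}{\phi^2}\right)\sqrt{-\tilde g}
  + \mathcal{L}_M(\phi^{-1}\tilde g_{\mu\nu},\psi)\sqrt{-\tilde g}.
```

The point is visible at a glance: in the Jordan frame the scalar field multiplies R, while in the Einstein frame it has migrated into the matter Lagrangian.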

At first sight, this might seem weird. A simple mathematical transformation should not change the physics, or should it? Well… it does yet it doesn’t. If you fire a cannonball in Jordan-Brans-Dicke theory and calculate its trajectory, it will trace the same trajectory regardless which frame, the Jordan or the Einstein frame, you use to calculate it. It’s the interpretation of this trajectory that differs between the two frames. In the Jordan frame, the cannonball is said to follow a geodesic trajectory, but that geodesic, i.e., the curvature of spacetime, is affected by a varying gravitational constant. In the Einstein frame, the cannonball’s trajectory is not a geodesic anymore; the geodesic trajectory is determined by a fixed gravitational constant, but on top of that, an extra force deflects the cannonball.

One particular kind of scalar-tensor theory can be written in a form in which there is no variable gravitational constant and no coupling between the scalar field and matter either. This is the so-called “minimally coupled” scalar-tensor theory, in which the scalar field influences matter only indirectly: the scalar field has mass-energy, which gravitates, and this contributes to the overall gravitational field. Things can get tricky here: a scalar-tensor theory may be written in a form that does not look like a minimally coupled theory at all, yet it may be possible to transform it into one by an appropriate conformal transformation. However, this is not always the case: for instance, Jordan-Brans-Dicke theory cannot be transformed into a minimally coupled scalar-tensor theory this way, the two classes of theories are manifestly different.

When things get really interesting is when additional fields are present in a more complex theory, such as scalar-tensor-vector gravity. In that case, a conformal transformation can have surprising consequences on the coupling between these additional fields and the scalar field.

 Posted by at 2:58 am
Feb 27, 2010
 

Apparently, the earthquake in Chile was the 5th largest on record in the whole world since 1900. I guess I know three out of the other four: Chile in 1960, Alaska in 1964, and Indonesia in 2004. Ah, there’s the fourth (thanks, Wikipedia): Kamchatka in 1952.

Chile is supposedly well prepared. But how can you be well prepared when the earthquake destroys basic infrastructure?

 Posted by at 7:38 pm
Feb 14, 2010
 

A few months ago, a paper came to my attention, written by Ferenc Miskolczi, a Hungarian climate scientist who supposedly resigned from his NASA job because he felt he was being silenced by the climate science establishment. The paper was eventually published in a rather obscure journal, the quarterly journal of the Hungarian Meteorological Service. This paper has since been touted by its proponents as proof that the established climate models are bogus and that global warming is a hoax; on the other hand, it has been maligned by its critics, who declared it junk science.

One notable point in Miskolczi’s paper is an unorthodox use of the virial theorem, and the funny thing is, while what he is doing is rather problematic, his critics fare no better; I have not seen anyone offer a technically correct analysis of when the virial theorem is applicable to a planetary atmosphere. (As a matter of fact, it is applicable, but that does not necessarily vindicate Miskolczi’s analysis.)

Miskolczi’s key result is that the atmosphere is in equilibrium in the sense that the greenhouse effect is already maximal; and so long as an effectively infinite reservoir of a potent greenhouse gas (water vapor) is available, this balance cannot be destroyed by changing the amount of CO2 in the air. However, this result may very well be a consequence of some trivial algebra that follows from some of Miskolczi’s more debatable assumptions.

In his paper, Miskolczi presents a model of the atmosphere graphically:

Without going into excessive detail (which can be found in Miskolczi’s paper), one can easily write down several equations from this diagram alone. First, Miskolczi asserts that P = P_0 = 0, so that branch can be ignored altogether. We then have:

A_A + K + F = E_D + E_U,
S_T + E_U = OLR,
F_0 = OLR (the system is in equilibrium),
F_0 − F + E_D = S_G + K,
S_G = S_T + A_A.

To these equations, Miskolczi adds the following:

SG = σTs4 (Stefan-Boltzmann law describing a thermal blackbody surface at temperature Ts),
AA = ED (a much debated application of Kirchoff’s law by Miskolczi, his Eq. 4),
2EU = SG (Miskolczi’s application of the virial theorem),
SGF0 + EDEU = OLR (Miskolczi’s energy conservation formula, Eq. 7).

Now the thing is, at this point we actually have 9 equations in the 9 unknowns A_A, K, F, E_D, E_U, S_T, S_G, F_0 and T_s. Although the equations turn out to be not completely independent, only K remains undetermined; in particular, T_s is uniquely determined as

T_s = (3·OLR / 2σ)^(1/4).

Now OLR = F_0 = 238 W/m² is known from observation, as it can be calculated from the solar constant and the Earth’s albedo. According to Miskolczi’s algebra, then, the only surface temperature consistent with this is T_s = 281.7 K, or about 8.5 degrees Centigrade. This value is uniquely determined without knowing anything about the composition of the atmosphere other than its albedo.
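The algebra is simple enough to verify numerically; the following sketch uses nothing but the equations listed above and OLR = 238 W/m²:

```python
# Numerical check of Miskolczi's algebra: solve for the surface
# temperature using only OLR = 238 W/m² as input.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m⁻² K⁻⁴
OLR = 238.0              # outgoing longwave radiation, W/m²

F0 = OLR                 # equilibrium: F_0 = OLR
# From 2E_U = S_G, S_T + E_U = OLR, S_G = S_T + A_A, A_A = E_D and
# S_G − F_0 + E_D − E_U = OLR, a little algebra gives:
EU = 3 * OLR / 4         # E_U = 3·OLR/4
SG = 2 * EU              # S_G = 3·OLR/2
ED = 3 * EU - OLR        # E_D = A_A = 5·OLR/4
ST = OLR - EU            # S_T = OLR/4

# Consistency: S_G = S_T + A_A and the conservation equation both hold.
assert abs(SG - (ST + ED)) < 1e-9
assert abs((SG - F0 + ED - EU) - OLR) < 1e-9

Ts = (SG / SIGMA) ** 0.25   # invert S_G = σT_s⁴
print(round(Ts, 1), round(Ts - 273.15, 1))   # → 281.7 8.5
```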

If only climate science were this simple.

 Posted by at 4:27 pm
Feb 14, 2010
 

Something I heard a few days ago on The Big Bang Theory:

“Where in this swamp of unbalanced formulas squatteth the toad of truth?”

What can I say… I know the feeling rather well.

 Posted by at 4:08 am