Sep 21 2012
 

I am reading about a new boson.

No, not the (presumed) Higgs boson with a mass of about 126 GeV.

I am reading about a lightweight boson, with a mass of only about 38 MeV, supposedly found at the onetime pride of Soviet science, the Dubna accelerator.

Now Dubna may not have the raw power of the LHC, but the good folks at Dubna are no fools. So if they announce what appears to be a 5-sigma result, one can’t just not pay attention.

The PHOTON-2 setup. S1 and S2 are scintillation counters. From arXiv:1208.3829.

But a 38 MeV boson? That’s not light, that’s almost featherweight. It’s only about 75 times the mass of the electron, for crying out loud. Less than 4% of the weight of the proton.

The discovery of such a lightweight boson would be truly momentous. It would certainly turn the Standard Model upside down. Whether it is a new elementary particle or some kind of bound state, it is not something that can be fit easily (if at all) within the confines of the Standard Model.

Which is one reason why many are skeptical. This discovery, after all, not unlike that of the presumed Higgs boson, is really just the discovery of a small bump on top of a powerful background of essentially random noise. The statistical significance (or lack thereof) of the bump depends fundamentally on our understanding and accurate modeling of that background.

And it is on the modeling of the background that this recent Dubna announcement has been most severely criticized.

Indeed, in his blog Tommaso Dorigo makes a very strong point of this; he also suggests that the authors’ decision to include far too many decimal digits in error terms is a disturbing sign. Who in his right mind writes 38.4935 ± 1.02639 as opposed to, say, 38.49 ± 1.03?

To this criticism, I would like to offer my own. I am strongly disturbed by the notion of a statistical analysis described by an expression of the type model = data − background. What we should be modeling is not data minus some theoretical background, but the data, period. So the right thing to do is to create a revised model that also includes the background and fit that to the data: model’ = model + background = data. When we do things this way, it is quite possible that the fits are a lot less tight than anticipated, and the apparent statistical significance of a result just vanishes. This is a point I raised a while back in a completely different context: in a paper with John Moffat about the statistical analysis of host vs. satellite galaxies in a large galactic sample.
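
To illustrate the difference (with made-up numbers, not the actual Dubna data), here is a minimal sketch in Python: first fit a peak to “data minus an assumed background”, then fit a combined peak-plus-background model to the raw data, and compare the apparent significance of the peak in the two cases.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# Synthetic spectrum: exponential background plus a small peak near 38 (units arbitrary).
x = np.linspace(20.0, 60.0, 200)
bkg_true = 500.0 * np.exp(-x / 30.0)
sig_true = 30.0 * np.exp(-0.5 * ((x - 38.0) / 2.0) ** 2)
data = rng.poisson(bkg_true + sig_true).astype(float)
err = np.sqrt(np.maximum(data, 1.0))          # Poisson uncertainties

def peak(x, A, m, w):
    return A * np.exp(-0.5 * ((x - m) / w) ** 2)

def peak_plus_background(x, A, m, w, B, tau):
    return peak(x, A, m, w) + B * np.exp(-x / tau)

# Approach 1: model = data - background, with the background fixed to its assumed shape.
pA, cA = curve_fit(peak, x, data - bkg_true, p0=[20, 38, 2],
                   sigma=err, absolute_sigma=True)

# Approach 2: model' = peak + background, fitted to the data itself.
pB, cB = curve_fit(peak_plus_background, x, data, p0=[20, 38, 2, 400, 25],
                   sigma=err, absolute_sigma=True)

for label, p, c in [("subtract, then fit   ", pA, cA),
                    ("fit peak + background", pB, cB)]:
    A, dA = p[0], np.sqrt(c[0, 0])
    print(f"{label}: A = {A:5.1f} +/- {dA:4.1f}  ({A / dA:.1f} sigma)")

The point is not the specific numbers but the bookkeeping: once the background parameters are allowed to float, their uncertainty propagates into the uncertainty of the peak amplitude, and the apparent significance typically drops.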

Posted at 7:58 pm
Sep 06 2012
 

Nature had a nice editorial a few days ago about the Pioneer Anomaly and our research, titled “…and farewell to the Pioneer anomaly” (so titled because in the print edition, it is right below the obituary,  titled “Farewell to a pioneer”, of Bernard Lovell, builder of what was at the time the world’s largest steerable radio telescope at Jodrell Bank).

Farewell, yes, though I still hope that we will have the wherewithal to publish a longer article in which we provide the details that did not fit onto the pages of Physical Review Letters. We ought to update our review paper in Living Reviews in Relativity, too. We need to prepare for the release of the data used in our analysis. And, if possible, I’d like to spend time tackling some of the open questions we discuss near the end of our last paper, such as analyzing the spin behavior of the two spacecraft or making use of DSN signal strength measurements to improve the trajectory solution.

First things first, though; right now, my priorities are to a) earn money (which means doing things that I actually get paid for, not Pioneer) and b) get ready to have our upstairs bathtub replaced (the workmen will be here Monday morning), after which I plan to do the wall tiles myself (with fingers firmly crossed in the hope that I won’t mess it up too badly.)

Yes, sometimes such mundane things must take priority.

Posted at 11:26 am
Aug 06 2012
 

Lest we forget: the attack on Hiroshima occurred 67 years ago today. Little Boy was one of the few uranium bombs ever made (using plutonium that is produced in a nuclear reactor is a much cheaper alternative.)

I remain hopeful. Yes, it was exactly 67 years ago today an atomic bomb was first used in anger against human beings. But in three days, we will celebrate (if that is the right word) the 67th anniversary of the last use of an atomic bomb in anger against human beings.

[PS: One of these days, I’ll learn basic arithmetic. 2012 − 1945 = 67. Not 77.]

Posted at 6:20 pm
Aug 02 2012
 

Congratulations to Mariam Sultana, reportedly Pakistan’s first PhD in astrophysics. (Or in the subfield of extragalactic astrophysics, according to another news site. Either way, it’s a laudable achievement.)

I knew women scientists have an especially difficult time in very conservative Muslim countries.

I didn’t know astrophysicists (presumably, both male and female) had to pass an extra hurdle: apparently, illiterate Islamists don’t know the difference between astrophysics and astrology. The practice of astrology, like other forms of fortune telling, is considered haraam, a sin against Allah.

Am I ever so glad that I live in an enlightened, secular country.

One of Dr. Sultana’s (I am boldly assuming that Sultana is her last name, though I am well aware that Pakistani naming conventions do not necessarily follow Western traditions) examiners was James Binney, whose name is well known to anyone involved with galactic astrophysics; the book colloquially known as “Binney and Tremaine” (the real title is Galactic Dynamics) is considered one of the field’s “bibles”. (Darn, I hope no religious fanatic misconstrues the meaning of “bible” in the preceding sentence!)

I wish Dr. Sultana the brightest career. Who knows, maybe I’ll run into her one day somewhere, perhaps at the Perimeter Institute.

Posted at 4:46 pm
Jul 18 2012
 

Having been told by a friend that suddenly, there is a spate of articles online about the Pioneer anomaly, I was ready to curse journalists once I came across the words: “a programmer in Canada, Viktor Toth, heard about the effort and contacted Turyshev. He helped Turyshev create a program …”.

To be clear: I didn’t contact Slava; Slava contacted me. I didn’t “help create a program”; I was already done creating a program (which is why Slava contacted me). And that was the state of things back in 2005. What about all the work that I have done since, in the last seven years? Like developing a crude and then a more refined thermal model, independently developing precision orbit determination code to confirm the existence of the anomaly, collaborating with Slava on several papers including a monster review paper published by Living Reviews in Relativity, helping shape and direct the research that arrived at the present results, and drafting significant chunks of the final two papers that appeared in Physical Review Letters?

But then it turns out that journalists are blameless for a change. They didn’t invent a story out of thin air. They just copied the words from a NASA JPL press release.

And I am still trying to decide if I should feel honored or insulted. But then I am reminding myself that feeling insulted is rarely productive. So I’ll go with feeling honored instead. Having my contribution acknowledged by JPL is an honor, even if they didn’t get the details right.

Posted at 4:15 pm
Jul 17 2012
 

If you have been reading newspapers, science blogs, or even some articles written by prominent scientists or announcements by prominent institutions (such as Canada’s Perimeter Institute), you might be under the impression that the Higgs boson is a done deal: it has been discovered. (Indeed, Perimeter’s Web site announces on its home page that “[the] Higgs boson has been found”.)

Sounds great but it is not true. Let me quote from a recent New Scientist online article: “Although spotted at last, many properties of the new particle – thought to be the Higgs boson, or at least something similar – have yet to be tested. What’s more, the telltale signature it left in the detectors at the Large Hadron Collider (LHC) does not exactly match what is predicted”.

There, this says it all. We are almost certain that something has been discovered. (This is the 4.9-sigma result). We are not at all certain that it’s the Higgs. It probably is, but there is a significant likelihood that it isn’t, and we will only know for sure one way or another after several more years’ worth of data are collected. At least this is what the experimenters say. And why should you listen to anyone other than the experimenters?

Posted at 4:47 pm
Jul 16 2012
 

In the last several years, much of the time when I was wearing my physicist’s hat I was working on a theory of modified gravity.

Modified gravity theories present an alternative to the hypothetical (but never observed) substance called “dark matter” that supposedly represents more than 80% of the matter content of the Universe. We need either dark matter or modified gravity to explain observations such as the anomalous (too rapid) rotation of spiral galaxies.

Crudely speaking, when we measure the gravitational influence of an object, we measure the product of two numbers: the gravitational constant G and the object’s mass, M. If the gravitational influence is stronger than expected, it can be either because G is bigger (which means modified gravity) or M is bigger (which means extra mass in the form of some unseen, i.e., “dark” matter).
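
To make the degeneracy explicit (standard textbook formulas, quoted here only for illustration): the Newtonian acceleration at a distance \(r\) from the object and the general relativistic deflection of a light ray passing it at impact parameter \(b\) are

\begin{align}
a&=\frac{GM}{r^2},\\
\delta&=\frac{4GM}{c^2b}\simeq 1.75''~\textrm{for a ray grazing the Sun},
\end{align}

and both depend only on the product \(GM\); such measurements, by themselves, cannot tell a larger \(G\) apart from a larger \(M\).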

In Einstein’s general theory of relativity, gravity is the curvature of spacetime. Objects that are influenced only by gravity are said to travel along “geodesics”; their trajectory is determined entirely by the geometry of spacetime. On the other hand, objects that are influenced by forces other than Einstein’s gravity have trajectories that deviate from geodesics.

Massless particles, such as photons of light, must travel on geodesics (specifically, “lightlike geodesics”.) Conversely, if an originally massless particle deviates from a lightlike geodesic, it will appear to have acquired mass (yes, photons of light, when they travel through a transparent substance that slows them down, such as water or glass, do appear to have an effective mass.)

Modified gravity theories can change the strength of gravity two ways. They can change the strength of Einstein’s “geometric” gravity (actually, it would be called “metric gravity”); or, they can introduce a non-geometric force in addition to metric gravity.

And herein lies the problem. One important observation is that galaxies bend light, and they bend light more than one would expect without introducing dark matter. If we wish to modify gravity to account for this, it must mean changing the strength of metric gravity.

If metric gravity is different in a galaxy, it would change the dynamics of solar systems in that galaxy. This can be compensated by introducing a non-geometric force that cancels out the increase. This works for slow-moving objects such as planets and moons (or spacecraft) in orbit around a sun. However, stars like our own Sun also bend light. This can be observed very precisely, and we know that our Sun bends light entirely in accordance with Einstein’s general relativity theory. This cannot be explained as the interplay of geometric curvature and a non-geometric force; photons cannot deviate from the lightlike geodesics that are determined in their entirety by geometry alone.

So we arrive at an apparent contradiction: metric gravity must be stronger than Einstein’s prediction in a galaxy to account for how galaxies bend light, but it cannot be stronger in solar systems in that galaxy (or at the very least, in the one solar system we know well, our own), otherwise it could not account for how suns bend light or radio beams.

I have come to the conclusion that the most important test a modified gravity theory must pass in order to be considered viable is not galaxy rotation curves or cosmological structure formation, but modeling the bending of light and being able to deal with this apparent paradox.

Posted at 5:13 pm
Jul 13 2012
 

I have been thinking about neutrinos today. No, not about faster-than-light neutrinos. I was skeptical about the sensational claim from the OPERA experiment last year, and my skepticism was well justified.

They may not be faster than light, but neutrinos are still very weird. Neutrinos of one flavor turn into another, a discovery that, to many a particle physicist, had to be almost as surprising as the possibility that neutrinos are superluminal.

The most straightforward explanation for these neutrino oscillations is that neutrinos have mass. But herein lies a problem. We have only ever observed left-handed neutrinos. This makes sense if neutrinos are massless particles that travel at the speed of light, since all observers agree on what left-handed means: the spin of the neutrino, projected along the direction of its motion, is always −1/2.

But now imagine neutrinos that are massive and travel slower than the speed of light. As a matter of fact, imagine a bunch of neutrinos fired by CERN in Geneva in the direction of Gran Sasso, Italy. It takes roughly 2 ms for them to arrive. Now if you can run very, very, very fast (say, you’re the Flash, the comic book superhero) you may be able to outrun the bunch. Looking back, you will see… a bunch of neutrinos with a velocity vector pointing backwards (they’re slower than you, which means they’ll appear to be moving backwards from your perspective) so projecting their spin along the direction of motion, you get +1/2. In other words, you’re observing right-handed neutrinos.
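
Just to put a number on “slower than light” (a back-of-the-envelope estimate of my own, assuming a neutrino mass of 0.1 eV and a beam energy of about 17 GeV; these are not OPERA’s figures):

import math

m_nu = 0.1e-9          # assumed neutrino mass, in GeV
E_nu = 17.0            # assumed beam energy, in GeV
L = 730.0e3            # CERN-to-Gran Sasso baseline, metres (approximate)
c = 299792458.0        # speed of light, m/s

# For E >> m, 1 - beta ~ (m/E)^2 / 2; this is far below double precision,
# so compute the expansion directly instead of sqrt(1 - (m/E)^2).
one_minus_beta = 0.5 * (m_nu / E_nu) ** 2
t_light = L / c
delay = t_light * one_minus_beta

print(f"light travel time: {t_light * 1e3:.3f} ms")
print(f"neutrino lags by : {delay:.1e} s")

The lag comes out at some \(10^{-26}\) seconds over the whole trip: nothing any realistic runner could exploit, but in principle the bunch is slower than light, and that is all the argument needs.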

This is just weird. On the surface of it, it means that our fast-running Flash sees the laws of physics change! This is in deep contradiction with the laws of special relativity, Lorentz invariance and all that.

How we can interpret this situation depends on whether we believe that neutrinos are “Dirac” or “Majorana”. Neutrinos are fermions, and fermions are represented by spinor fields. A spinor field has four components: these correspond, in a sense, to a left-handed and a right-handed particle and their respective antiparticles. So if a particle only exists as a left-handed particle, only two of the four components remain; the other two (at least in the so-called Weyl representation) disappear, are “projected out”, to use a nasty colloquialism.

But we just said that if neutrinos are massive, it no longer makes sense to talk about strictly left-handed neutrinos; to the Flash, those neutrinos may appear right-handed. So both left- and right-handed neutrino states exist. Are they mathematically independent? Because if they are, neutrinos are represented by a full 4-component “Dirac” spinor. But there is a possibility that the components are not independent: in effect, this means that the neutrino is its own antiparticle. Such states can be represented by a two-component “Majorana” spinor.

The difference between these two types of neutrinos is not just theoretical. The neutrino carries something very real: the lepton number, in essence the “electronness” (without the electric charge) of an electron. If a neutrino is its own antiparticle, the two can annihilate one another, and two units of “electronness” vanish. Lepton number is not conserved.

If this is indeed the case, it can be observed. The so-called neutrinoless double beta decay is a hypothetical form of radioactive decay in which an isotope that is known to decay by emitting two electrons simultaneously (e.g., calcium-48 or uranium-238) does so without emitting the corresponding neutrinos (because these annihilate each other without going anywhere). Unfortunately, given that neutrinos don’t like to do much interacting to begin with, the probability of a neutrinoless decay occurring at any given time is very small. Still, it is observable in principle, and if observed, it would indicate unambiguously that neutrinos are Majorana spinors. (A prospect that may be appealing insofar as neutrinos are concerned, but I find it nonetheless deeply disturbing that such a fundamental property of a basic building block of matter may turn out to be ephemeral.)

Either way, I remain at a loss when I think about the handedness of neutrinos. If neutrinos are Dirac neutrinos, one may postulate right-handed neutrinos that do not interact the way left-handed neutrinos do (i.e., do not participate in the weak interaction, being so-called sterile neutrinos instead). Cool, but what about our friend, the Flash? Suppose he is observing the same thing we’re observing, a neutrino in the OPERA bunch interacting with something. But from his perspective, that neutrino is a right-handed neutrino that is not allowed to participate in such an interaction!

Or suppose that neutrinos are Majorana spinors, and right-handed neutrinos are simply much (VERY much) heavier, which is why they have not been observed yet (this is the so-called seesaw mechanism). The theory allows us to construct such a mass matrix, but once again having the Flash around leads to trouble: he will observe ordinary “light” neutrinos as right-handed ones!

Perhaps these are just apparent contradictions. In fact, I am pretty sure that that’s what they are, since all this follows from writing down a theory in the form of a Lagrangian density that is manifestly Lorentz (and Poincaré) invariant, hence the physics does not become broken for the Flash. It will just turn weird. But how weird is too weird?

Posted at 10:13 pm
Jul 10 2012
 

I once had a profound thought, years ago.

I realized that many people think that knowing the name of something is the same as understanding that thing. “What’s that?” they ask, and when you reply, “Oh, that’s just the blinking wanker from a thermonuclear quantum generator,” they nod deeply and thank you with the words, “I understand”. (Presumably these are the same people who, when they ask “How does this computer work?”, do not actually mean that they are looking for an explanation of Neumann machines, digital electronics, modern microprocessor technology, memory management principles, hardware virtualization techniques and whatnot; they were really just looking for the ON switch. Such people form an alarming majority… but it took me many frustrating years to learn this.)

I am not sure how to feel now, having just come across a short interview piece with the late physicist Richard Feynman, who is talking about the same topic. The piece is even titled “Knowing the name of something”. I am certainly reassured that a mind such as Feynman’s had the same thought that I did. I am also disappointed that my profound thought is not so original after all. But I feel I should really be encouraged: perhaps this is just a sign that the same thought might be occurring to many other people, and that might make the world a better place. Who knows… in a big Universe, anything can happen!

 

Posted at 9:05 am
Jul 05 2012
 

News flash this morning: the first (of hopefully many) Japanese nuclear reactor is back online.

On March 11, 2011, the fifth biggest earthquake in recorded history, and the worst recorded earthquake ever in Japan, hit the island nation. As a result, some 16,000 people died (the numbers may go higher as some are still listed as missing). Most were killed by the natural disaster directly, as they drowned in the resulting tsunami. Some were killed as technology failed: buildings collapsed, vehicles crashed, industrial installations exploded, caught fire, or leaked toxins.

None were killed by the world’s second worst nuclear accident to date, the loss of power and resulting meltdown at the Fukushima Daiichi nuclear power plant. Some of it was due, no doubt, to sheer luck. Some of it was due to the inherent safety of these plants and the foresight of their designers (though foresight did not always prevail, as evidenced by the decision to place last-resort emergency backup generators in a basement in a tsunami-prone area). The bottom line, though, remains: no-one died.

Yet the entire nuclear power generation industry in Japan was shut down as a result. Consequently, Japan’s conventional emissions rose dramatically; power shortages prevailed; and Japan ended up with a trade deficit, fueled by their import of fossil fuels.

Finally, it seems that sanity (or is it necessity?) is about to prevail. The Ohi nuclear power plant is supplying electricity again. I can only hope that it is running with lessons learned about a nuclear disaster that, according to the Japanese commission investigating it, was “profoundly manmade”; one “that could have been foreseen and prevented”, were it not for causes that were deeply rooted in Japanese culture.

Posted at 8:35 am
Jul 04 2012
 

I got up early this morning, so I had a chance to study the results from LHC, namely the preliminary publications from the ATLAS and CMS detectors.

According to the ATLAS team, the likelihood that the event count they see around 126 GeV is due purely to chance is less than one in a million. The result is better than 5σ, which makes it almost certain that they observed something.

The CMS detector observed many possible types of Higgs decay events. When they combined them all, they found that the probability that all this is due purely to chance is again less than one in a million… in their case, an almost 5σ result. Once again, it indicates very strongly that something has been observed.
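
As an aside, this is how the “one in a million” figures and the “σ” figures relate, using the one-sided Gaussian convention that particle physicists quote (a quick sketch, not anything specific to ATLAS or CMS):

from scipy.stats import norm

for n in (3.0, 4.9, 5.0):
    p = norm.sf(n)                    # one-sided Gaussian tail probability
    print(f"{n:.1f} sigma -> p = {p:.2e}  (about 1 in {1 / p:,.0f})")

# And in the other direction: a one-in-a-million chance corresponds to...
print(f"p = 1e-6 -> {norm.isf(1e-6):.2f} sigma")

A one-in-a-million chance corresponds to about 4.75σ, which is why “almost 5σ” and “less than one in a million” describe the same result.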

But is it the Higgs? I have to say it’s beginning to look like it’s both quacking and walking like a duck… but CERN is cautious, and rightfully so. Their statement is that “CERN experiments observe particle consistent with long-sought Higgs boson”, and I think it is a very correct one.

Posted at 7:30 am
Jul 03 2012
 

It appears that CERN goofed and as a result, the video of the announcement planned for tomorrow has been leaked. (That is, unless you choose to believe their cockamamie story about multiple versions of the video having been produced.)

The bottom line: there is definitely a particle there with integer spin. Its mass is about 125 GeV. We know it decays into two photons and two Z-bosons. That’s about all we know.

The assessment is that it is either the Higgs or something altogether new.

Posted at 6:17 pm
Jul 02 2012
 

The Tevatron may have been shut down last year but the data they collected is still being analyzed.

And it’s perhaps no accident that they managed to squeeze out an announcement today, just two days before the scheduled announcement from the LHC: their observations are “consistent with the possible presence of a low-mass Higgs boson.”

The Tevatron has analyzed ten “inverse femtobarns” worth of data. This unit of measure (unit of luminosity, integrated luminosity to be precise) basically tells us how many events the Tevatron experiment produced. One “barn” is a whimsical name for a tiny unit of area, \(10^{-24}\) square centimeters. A femtobarn is \(10^{-15}\) barn. And when a particle physicist speaks of “inverse femtobarns”, what he really means is “events per femtobarn”. Ten inverse femtobarns of “integrated luminosity”, then, means a particle beam that, over time, produced ten events for every \(10^{-39}\) square centimeters of cross section.

Now this makes sense intuitively if you think of a yet to be discovered particle or process as something that has a size. Suppose the cross-sectional size of what you are trying to discover is \(10^{-36}\) square centimeters, or 1000 femtobarns. Now your accelerator just peppered each femtobarn with 10 events… that’s 10,000 events that fall onto your intended target, which means 10,000 opportunities to discover it. On the other hand, if your yet to be discovered object is \(10^{-42}\) square centimeters in size, which is just one one-thousandth of a femtobarn… ten events per femtobarn is really not enough, chances are your particle beam never hit the target and there is nothing to see.
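
The same bookkeeping in a few lines of code (the cross sections below are the made-up illustrative values from the previous paragraph, not actual Higgs cross sections):

# Expected event count = integrated luminosity x cross section.
FB_IN_CM2 = 1e-39                  # one femtobarn, in square centimeters
lumi_inv_fb = 10.0                 # "ten inverse femtobarns" of data

for sigma_cm2 in (1e-36, 1e-39, 1e-42):
    sigma_fb = sigma_cm2 / FB_IN_CM2
    print(f"cross section {sigma_cm2:.0e} cm^2 = {sigma_fb:g} fb"
          f" -> about {lumi_inv_fb * sigma_fb:g} expected events")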

The Tevatron operated for a long time, which allowed them to reach this very high level of integrated luminosity. But the cross-section, or apparent “size” of Higgs-related events also depends on the energy of the particles being accelerated. The Tevatron was only able to accelerate particles to 2 TeV. In contrast, the LHC is currently running at 8 TeV, and at such a high energy, some events are simply more likely to occur, which means that they are effectively “bigger” in cross section, more likely to be “illuminated” by the particle beam.

The Tevatron is not collecting any new data, but it seems they don’t want to be left out of the party. Hence, I guess, this announcement, dated July 2, indicating a strong hint that the Higgs particle exists with a mass around 125 GeV/c².

On the other hand, CERN already made it clear that their announcement will not be a definitive yes/no statement on the Higgs. Or so they say. Yet it has been said that Peter Higgs, after whom the Higgs boson is named, has been invited to be present when the announcement will be made. This is more than enough for the rumors to go rampant.

I really don’t know what to think. There are strong reasons to believe that the Higgs particle is real. There are equally strong reasons to doubt its existence. The observed events are important, but an unambiguous confirmation requires further analysis to exclude possibilities such as statistical flukes, events due to something else like a hadronic resonance, and who knows what else. And once again, I am also reminded of another historical announcement by CERN exactly 28 years prior to this upcoming one, on July 4, 1984, when they announced the discovery of the top quark at 40 GeV. Except that there is no top quark at 40 GeV… their announcement was wrong. Yet the top quark is real, later to be discovered having a mass of about 173 GeV.

Higgs or no Higgs? I suspect the jury will still be out on July 5.

Posted at 5:48 pm
Jun 28 2012
 

My blog is supposed to be (mostly) about physics. So let me write something about physics for a change.

John Moffat, with whom I have been collaborating (mostly on his modified gravity theory, MOG) for the past six years or so, has many ideas. Recently, he was wondering: could the celebrated 125 GeV (125 gigaelectronvolts divided by the speed of light squared, to be precise, which is about 134 times the mass of a hydrogen atom) peak observed last year at the LHC (and if rumors are to be believed, perhaps to be confirmed next week) be a sign of something other than the Higgs particle?

All popular accounts emphasize the role of the Higgs particle in making particles massive. This is a bit misleading. For one thing, the Higgs mechanism is directly responsible for the masses of only some particles (the vector bosons); for another, even this part of the mechanism requires that, in addition to the Higgs particle, we also presume the existence of a potential field (the famous “Mexican hat” potential) that is responsible for spontaneous symmetry breaking.

Higgs mechanism aside though, the Standard Model of particle physics needs the Higgs particle. Without the Higgs, the Standard Model is not renormalizable; its predictions diverge into meaningless infinities.

The Higgs particle solves this problem by “eating up” the last misbehaving bits of the Standard Model that cannot be eliminated by other means. The theory is then complete: although it remains unreconciled with gravity, it successfully unites the other three forces and all known particles into a unified (albeit somewhat messy) whole. The theory’s predictions are fully in accordance with data that include laboratory experiments as well as astrophysical observations.

Well, almost. There is still this pesky business with neutrinos. Neutrinos in the Standard Model are massless. Since the 1980s, however, we had strong reasons to suspect that neutrinos have mass. The reason is the “solar neutrino problem”, a discrepancy between the predicted and observed number of neutrinos originating from the inner core of the Sun. This problem is resolved if different types of neutrinos can turn into one another, since the detectors in question could only “see” electron neutrinos. This “neutrino flavor mixing” or “neutrino oscillation” can occur if neutrinos have mass, represented by a mass matrix that is not completely diagonal.

What’s wrong with introducing such a matrix, one might ask? Two things. First, this matrix necessarily contains dimensionless quantities that are very small. While there is no a priori reason to reject them, dimensionless numbers in a theory that are orders of magnitude bigger or smaller than 1 are always suspect. But the second problem is perhaps the bigger one: massive neutrinos make the Standard Model non-renormalizable again. This can only be resolved by either exotic mechanisms or the introduction of new elementary particles.

This challenge to the Standard Model perhaps makes the finding of the Higgs particle less imperative. Far from turning a nearly flawless theory into a perfect one, it only addresses some problems in an otherwise still flawed, incomplete theory. Conversely, not finding the Higgs particle is less devastating: it does not invalidate a theory that would have been perfect otherwise; it simply prompts us to look for solutions elsewhere.

In light of that, one may wish to take a second look at the observations reported at the LHC last fall. The Higgs particle, if it exists, can decay in several ways. We already know that the Higgs particle cannot be heavier than 130 GeV, and this excludes certain forms of decay. One of the decays that remains is the decay into a quark and its antiparticle that, in turn, decay into two photons. Photons are easy to observe, but there is a catch: when the LHC collides large numbers of protons with one another at high energies, a huge number of photons are created as a background. It is against this background that two-photon events with a signature specific to the Higgs particle must be observed.

Diphoton results from the Atlas detector at the LHC, late 2011.

And observed they have been, albeit not with a resounding statistical significance. There is a small excess of such two-photon events indicating a possible Higgs mass of 125 GeV. Many believe that this is because there is indeed a Higgs particle with this mass, and its discovery will be confirmed with the necessary statistical certainty once more data are collected.

Others remain skeptical. For one thing, that 125 GeV peak is not the only peak in the data. For another, it is a peak that is a tad more pronounced than what the Higgs particle would produce. Furthermore, there is no corresponding peak in other “channels” that would correspond to other forms of decay of the Higgs particle.

This is when Moffat’s idea comes in. John had in mind the many “hadronic resonances”, all sorts of combinations of quarks that appear at lower energies, some of which still befuddle particle physicists. What if, he asks, this 125 GeV peak is due to just another such resonance?

Easier said than done. At low energies, there are plenty of quarks to choose from and combine. But 125 GeV is not a very convenient energy from this perspective. The heaviest quark, the top quark, has a mass of 173 GeV or so; far too heavy for this purpose. The next quark in terms of mass, the bottom quark, is much too light at around 4.5 GeV. There is no obvious way to combine a reasonably small number of quarks into a 125 GeV composite particle that sticks around long enough for it to be detected. Indeed, the top quark is so heavy that “toponium”, a hypothetical combination of a top quark and its antiparticle, is believed to be undetectable; it decays so rapidly, it really never has time to form in the first place.

But then, there is another possibility. Remember how neutrinos oscillate between different states? Well, composite particles can do that, too. And as to what these “eigenstates” are, that really depends on the measurement. One notorious example is the neutral kaon (also known as the neutral K meson). It has one eigenstate with respect to the strong interaction, but two quite different eigenstates with respect to the weak interaction.

So here is John’s perhaps not so outlandish proposal: what if there is a form of quarkonium whose eigenstates are toponium (not observed) and bottomonium with respect to some interactions, but two different mixed states with respect to whatever interaction is responsible for the 125 GeV resonance observed by the LHC?

Such an eigenstate requires a mixing angle, easily calculated as 20 degrees. This mixing also results in another eigenstate, at 330 GeV, which is likely so heavy that it is not stable enough to be observed. This proposal, if valid, would explain why the LHC sees a resonance at 125 GeV without a Higgs particle.
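
For what it is worth, here is the generic machinery behind such a proposal, with made-up numbers rather than John’s actual mass matrix: diagonalize a 2×2 mass matrix and read off the two mass eigenvalues and the mixing angle.

import numpy as np

m11, m22 = 50.0, 300.0      # hypothetical "unmixed" masses, GeV
m12 = 80.0                  # hypothetical off-diagonal mixing term, GeV

M = np.array([[m11, m12],
              [m12, m22]])

eigenvalues = np.linalg.eigvalsh(M)                        # the two mass eigenstates
theta = 0.5 * np.degrees(np.arctan2(2 * m12, m22 - m11))   # mixing angle

print("mass eigenvalues [GeV]:", np.round(eigenvalues, 1))
print(f"mixing angle: {theta:.1f} degrees")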

Indeed, this proposal can also explain a) why the peak is stronger than what one would predict for the Higgs particle, b) why no other Higgs-specific decay modes were observed, and perhaps most intriguingly, c) why there are additional peaks in the data!

That is because if there is a “ground state”, there are also excited states, the same way a hydrogen atom (to use a more commonplace example) has a ground state and excited states with its lone electron in higher energy orbits. These excited states would show up in the plots as additional resonances, usually closely bunched together, with decreasing magnitude.

Could John be right? I certainly like his proposal, though I am not exactly the unbiased observer, since I did contribute a little to its development through numerous discussions. In any case, we will know a little more next week. An announcement from the LHC is expected on July 4. It is going to be interesting.

Posted at 4:36 pm
Jun 15 2012
 

Our latest Pioneer paper, in which we discuss the results from the Pioneer thermal model and its incorporation into the orbital analysis (the conclusion being that no significant anomalous acceleration remains once thermal radiation is properly accounted for) made it to the cover of Physical Review Letters. I am very grateful that I was given the opportunity to participate in this research, and I am very proud of this work and our results.

Posted at 11:35 am
Jun 06 2012
 

Yesterday, Venus transited the Sun. It won’t happen again for more than a century.

I had paper “welder’s glasses” courtesy of Sky News. Looking through them, I did indeed see a tiny black speck on the disk of the Sun. However, it was nowhere as impressive as the pictures taken through professional telescopes.

These live pictures were streamed to us courtesy of NASA. One planned broadcast from Alice Springs, Australia, was briefly interrupted. At first, it was thought that a road worker cutting an optical cable was the culprit, but later it turned out to be a case of misconfigured hardware. Or could it be that they were trying to fix a problem with an “intellectual property address”, a wording that appeared on several Australian news sites today? (Note to editors: if you don’t understand the text, don’t be over-eager replacing acronyms with what you think they stand for.)

I also tried to take pictures myself, holding my set of paper welder’s glasses in front of my (decidedly non-professional) cameras. Surprisingly, it was with my cell phone that I was able to take the best picture, but it did not even come close in resolution to what would have been required to see Venus.

The lesson? I think I’ll leave astrophotography to the professionals. Or, at least, to expert amateurs. Unfortunately, I am neither.

That said, I remain utterly fascinated by the experience of staring at a sphere of gas, close to a million and a half kilometers wide, containing 2 nonillion (2,000,000,000,000,000,000,000,000,000,000) kilograms of mostly hydrogen gas, burning roughly 580 billion kilograms of it every second in the form of nuclear fusion deep in its core, releasing photons amounting to about 4.3 billion kilograms of energy… and most of these photons remain trapped for a very long time, producing extreme pressures (so that the interior of the Sun is dominated by this ultrarelativistic photon gas) that prevent the Sun from collapsing upon itself, which will indeed be its fate when it can no longer sustain hydrogen fusion in its core a few billion years from now. And then, this huge orb is briefly occulted by a tiny black speck, the shadow of a world as big as our own… just a tiny black dot, too small for my handheld cameras to see.

I sometimes try to use a human-scale analogy when trying to explain to friends just how mind-bogglingly big the solar system is. Imagine a beach ball that is a meter wide. Now suppose you stand about a hundred meters away from it, like the length of a large sports field. Okay… now imagine that that beach ball is so bleeping hot, even at this distance its heat is burning your face. That’s how hot the Sun is.

Now hold up a large pea, about a centimeter in size. That’s the Earth. Another pea, roughly halfway between you and the beach ball would be Venus.

A peppercorn, some thirty centimeters or so from your Earth pea… that’s the Moon. Incidentally, if you hold that peppercorn up, at about thirty centimeters from your eye it is just large enough to obscure the beach ball in the distance, producing a solar eclipse.

Now let’s go a little further. Some half a kilometer from the beach ball you see a large-ish orange… Jupiter. Twice as far, you see a smaller orange with a ribbon around it; that’s Saturn. Pluto would be another peppercorn, more than three kilometers away.

But your beach ball’s influence does not end there. There will be specks of dust in orbit around it as far out as several hundred kilometers, maybe more. So where would the next beach ball be, representing the nearest star? Well, here’s the problem… the surface of the Earth is just not large enough, because the next beach ball would be more than 20,000 kilometers away.
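
Here is the same scale model worked out numerically, using rounded real-world sizes and distances (so the model numbers below are approximate, too):

# Shrink the solar system so that the Sun (about 1.39 million km across)
# becomes a 1 m beach ball, and see how big and how far away everything else gets.
SCALE = 1.39e9          # metres of reality per metre of model

objects = [
    # (name, real diameter [m], real distance [m], distance measured from)
    ("Earth (pea)",        1.27e7, 1.50e11, "beach ball"),
    ("Moon (peppercorn)",  3.47e6, 3.84e8,  "Earth pea"),
    ("Jupiter (orange)",   1.40e8, 7.78e11, "beach ball"),
    ("Saturn (orange)",    1.17e8, 1.43e12, "beach ball"),
    ("Pluto (peppercorn)", 2.38e6, 5.91e12, "beach ball"),
    ("nearest star",       2.10e8, 4.00e16, "beach ball"),
]

for name, size, dist, ref in objects:
    print(f"{name:20s} {size / SCALE * 100:7.2f} cm across, "
          f"{dist / SCALE:12,.1f} m from the {ref}")

The last line is the point: even with the Sun shrunk to a beach ball, the nearest star sits nearly 29,000 km away.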

To represent other stars, not to mention the whole of the Milky Way, we would once again need astronomical distance scales. If a star like our Sun were a one meter wide beach ball, the Milky Way of beach balls would be larger than the orbit of the Earth around the Sun. And the nearest full-size galaxy, Andromeda, would need to be located in distant parts of the solar system, far beyond the orbits of planets.

The only way we could reduce galaxies and groups of galaxies to a scale that humans can comprehend is by making stars and planets microscopic. So whereas the size of the solar system can perhaps be grasped by my beach ball and pea analogy, it is simply impossible to imagine simultaneously just how large the Milky Way is, not to mention the entire visible universe.

Or, as Douglas Adams wrote in The Hitchhiker’s Guide to the Galaxy: “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”

Posted at 10:25 am
May 29 2012
 

A few days ago, a bright 16-year old German student of Indian descent, Shouryya Ray of Dresden, won second prize in a national science competition with an essay entitled “Analytische Lösung von zwei ungelösten fundamentalen Partikeldynamikproblemen” (Analytic solution of two unsolved fundamental particle dynamics problems).

This story should have ended there. And perhaps it would have, were it not for the words in the abstract that said, among other things: “Das zugrundeliegende Kraftgesetz wurde bereits von Newton (17. Jhd.) entdeckt. […] Diese Arbeit setzt sich also die analytische Lösung dieser bisher nur näherungsweise oder numerisch gelösten Probleme zum Ziele.” (The underlying force law was already discovered by Newton (17th century). The goal of this work is thus the analytic solution of these problems, which until now have only been solved approximately or numerically.)

This was more than enough for sensation-seeking science journalists. The story was picked up first by Die Welt with the title “Mit 16 ein Genie: Shouryya Ray löste ein jahrhundertealtes mathematisches Problem” (Genius at 16: Shouryya Ray solves centuries-old mathematical problem) and then translated into English and other languages, even appearing in the Ottawa Citizen. In short order, even a biographical entry on Wikipedia was created; now nominated for deletion, many are voting to keep it because in their view, the press coverage is sufficient to establish encyclopedic notability.

Cooler heads should have prevailed. What science journalists neglected to ask is why, if this is such a breakthrough, the youth received only second prize. And in any case, what on Earth did he actually do? Neither his essay nor any details about it were published. The only clue to go by was a press photo in which the student holds up a large sheet of paper containing an equation.

As I discussed this very topic on a page I placed on my Web site a few years back (reacting to some bad math and flawed physics reasoning in an episode of the Mythbusters), I felt compelled to find out more. I guessed (correctly, as it turns out) that \(u\) and \(v\) must be the horizontal and vertical (or vertical and horizontal?) components of the projectile’s velocity, \(g\) is the gravitational acceleration, and \(\alpha\) is the coefficient of air resistance. However, I am embarrassed to admit that although I spent some time trying, I was not able to find a way to separate the variables and integrate the relevant differential equations to obtain Ray’s formula. I was actually ready to give up when I came across a derivation on reddit (and I realized that I was on the right track all along; I was just stubbornly trying to do a slightly different trick, which didn’t work). The formula is correct, and it is certainly an impressive result for a 16-year old, worthy of a second prize.

But no more. This is not a breakthrough. As it turns out, similar implicit solutions were well known in the 19th century. A formulation that differs from Ray’s only in notational details appeared in a paper by Parker (Am. J. Phys, 45, 7, 606, July 1977). Alas, such an implicit form is of limited utility; one still requires numerical methods to actually solve the equation.
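
For the record, “numerical methods” here means nothing more exotic than integrating the equations of motion of a projectile with quadratic (Newtonian) air resistance. A minimal sketch, with made-up launch parameters and drag coefficient:

import numpy as np
from scipy.integrate import solve_ivp

g = 9.81        # gravitational acceleration, m/s^2
alpha = 0.05    # drag coefficient, 1/m (illustrative)

def rhs(t, state):
    x, y, u, v = state
    speed = np.hypot(u, v)
    return [u, v, -alpha * u * speed, -g - alpha * v * speed]

def hit_ground(t, state):       # stop when the projectile returns to y = 0
    return state[1]
hit_ground.terminal = True
hit_ground.direction = -1

u0 = v0 = 30.0 / np.sqrt(2.0)   # launch at 30 m/s, 45 degrees
sol = solve_ivp(rhs, (0.0, 60.0), [0.0, 0.0, u0, v0],
                events=hit_ground, max_step=0.01)

print(f"range with drag: {sol.y[0, -1]:.1f} m "
      f"(vacuum range would be {30.0 ** 2 / g:.1f} m)")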

Much of this was probably known to the judges of the competition, which is probably why they awarded the student second prize.

Hopefully none of this will deter young Mr. Ray from pursuing a successful career as a physicist or mathematician.

Posted at 10:20 am
Apr 15 2012
 

Now here is a way to use physics more cleverly than Sheldon Cooper to avoid a costly ticket for a moving violation: http://arxiv.org/abs/1204.0162.

The brief, two-sentence abstract reads: A way to fight your traffic tickets. The paper was awarded a special prize of $400 that the author did not have to pay to the state of California.

Perhaps unsurprisingly, the paper is dated April 1. But the story, it appears, is real nonetheless.

Posted at 8:43 am
Apr 14 2012
 

I just came across this delightful imaginary conversation between a physicist and an economist about the unsustainability of perpetual economic growth.

The physicist uses energy production in his argument: growth at present rates means that in a few hundred years, we’ll produce enough energy to start boiling the oceans. And this is not something that can be addressed easily by the magic of technology. When waste heat is produced, the only way to get rid of it is to radiate it away into space. After about 1400 years of continuous growth, the Earth will be radiating more energy (all man-made) than the Sun, which means it would have to be a lot hotter than the Sun, on account of its smaller size. And in about 2500 years, we would exceed the thermal output of the whole Milky Way.
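
The “1400 years” figure is easy to check, assuming a 2.3% annual growth rate in energy production (roughly the long-term historical average used in the linked argument; the rate is my assumption here) and the Sun’s total luminosity of about \(3.85\times 10^{26}~{\rm W}\):

import math

P_today = 1.5e13        # present production, W (about 15 TW)
P_sun = 3.85e26         # total solar luminosity, W
growth = 0.023          # assumed annual growth rate

years = math.log(P_sun / P_today) / math.log(1.0 + growth)
print(f"about {years:.0f} years")     # about 1360 years with these inputs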

This, of course, is nonsense, which means terrestrial energy production will be capped eventually by basic physics. If GDP would continue to grow nonetheless, it would mean that the price of energy relative to other stuff would decrease to zero. This is also nonsense, since a limited resource cannot become arbitrarily cheap. But that means GDP growth must also be capped.

What I liked about this argument is that it is not emotional or ideological; it’s not about hugging trees or hating capitalism. It is about basic physics and elementary logic that is difficult to escape. In fact, it can be put in the form of equations. Our present energy production \(P_0\) is approximately 15 TW, which is about 0.002% of the Sun’s output that reaches the Earth:

\begin{align}
P_0&\simeq 1.5 \times 10^{13}~\rm{W},\\
P_\odot&\simeq 7 \times 10^{17}~\rm{W},\\
\eta_0&=P_0/P_\odot \sim 0.002\%.
\end{align}

For any other value of \(\eta\), there is a corresponding value of \(P\):

\begin{align}
P=\eta P_\odot.
\end{align}

Now all we need is to establish a maximum value of \(\eta\) that we can live with; say, \(\eta_{\rm max}=1\%\). This tells us the maximum amount of energy that we can produce here in the Earth without cooking ourselves:

\begin{align}
P_{\rm max}=\eta_{\rm max}P_\odot.
\end{align}

On the economic side of this argument, there is the percentage of GDP that is spent on energy. In the US, this is about 8%. For lack of a better value, let me stick to this one:

\begin{align}
\kappa_0\sim 8\%.
\end{align}

How low can \(\kappa\) get? That may be debatable, but it cannot become arbitrarily low. So there is a value \(\kappa_{\rm min}\).

The rest is just basic arithmetic. GDP is proportional to the total energy produced, divided by \(\kappa\):

\begin{align}
{\rm GDP}&\propto \frac{\eta}{\kappa}P_\odot,\\
{\rm GDP}_{\rm max}&\propto \frac{\eta_{\rm max}}{\kappa_{\rm min}}P_\odot.
\end{align}

And in particular:

\begin{align}
{\rm GDP}_{\rm max}&=\frac{\eta_{\rm max}\kappa_0}{\eta_0\kappa_{\rm min}}{\rm GDP}_0,
\end{align}

where \({\rm GDP}_0\) is the present GDP.

We know \(\eta_0\sim 0.002\%\). We know \(\kappa_0=8\%\). We can guess that \(\eta_{\rm max}\lesssim 1\%\) and \(\kappa_{\rm min}\gtrsim 1\%\). This means that

\begin{align}
{\rm GDP}_{\rm max}\lesssim 4,000\times {\rm GDP}_0.
\end{align}

This is it. A hard limit imposed by thermodynamics. But hey… four thousand is a big number, isn’t it? Well… sort of. At a constant 3% rate of annual growth, the economy will increase to four thousand times its present size in a mere 280 years or so. One may tweak the numbers a little here and there, but the fact that physics imposes such a hard limit remains. The logic is inescapable.
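
For completeness, the arithmetic of the last few paragraphs in a few lines of code (using the same assumed values for \(\eta_{\rm max}\) and \(\kappa_{\rm min}\)):

import math

eta_0, eta_max = 0.00002, 0.01       # 0.002% today, assumed 1% ceiling
kappa_0, kappa_min = 0.08, 0.01      # 8% of GDP today, assumed 1% floor

gdp_ratio = (eta_max * kappa_0) / (eta_0 * kappa_min)
years = math.log(gdp_ratio) / math.log(1.03)      # at 3% annual growth

print(f"GDP can grow by a factor of about {gdp_ratio:,.0f}")
print(f"which takes about {years:.0f} years at 3% per year")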

Or is it? The word “escape” may be appropriate here for more than one reason, as there is one obvious way to evade this argument: escape into space. In a few hundred years, humanity may have spread throughout the solar system, and energy amounts enough to boil the Earth’s oceans may be powering human colonies in the hostile (and cold!) environments near the outer planets.

That is, if humans are still around a few hundred years from now. One can only hope.

Posted at 9:59 am
Apr 12 2012
 

Our second short paper has been accepted for publication in Physical Review Letters.

I have been involved with Pioneer 10 and 11 in some fashion since about 2002, when I first began corresponding with Larry Kellogg about the possibility of resurrecting the telemetry data set. It is thanks to Larry’s stamina and conscientiousness that the data set survived.

I have been involved actively in the research of the Pioneer anomaly since 2005. Seven years! Hard to believe.

This widely reported anomaly concerns the fact that when the orbits of Pioneer 10 and 11 are accurately modeled, a discrepancy exists between the modeled and measured frequency of the radio signal. This discrepancy can be resolved by assuming an unknown force that pushes Pioneer 10 and 11 towards the Earth or the Sun (from that far away, these two directions nearly coincide and cannot really be told apart.)

One purpose of our investigation was to find out the magnitude of the force that arises as the spacecraft radiates different amounts of heat in different directions. This is the concept of a photon rocket. A ray of light carries momentum. Hard as it may appear to believe at first, when you hold a flashlight in your hands and turn it on, the flashlight will push your hand backwards by a tiny force. (How tiny? If it is a 1 W bulb that is perfectly efficient and perfectly focused, the force will be equivalent to about one third of one millionth of a gram of weight.)

On Pioneer 10 and 11, we have two main heat sources. First, there is electrical heat: all the instruments on board use about 100 W of electricity, most of which is converted into heat. Second, electricity is produced, very inefficiently, by a set of four radioisotope thermoelectric generators (RTGs); these produce more than 2 kW of waste heat. All this heat has to go somewhere, and most of this heat will be dissipated preferably in one direction, behind the spacecraft’s large dish antenna, which is always pointed towards the Earth.

The controversial question was, how much? How efficiently is this heat converted into force?

I first constructed a viable thermal model for Pioneer 10 back in 2006. I presented results from custom ray-tracing code at the Pioneer Explorer Collaboration meeting at the International Space Science Institute in Bern, Switzerland in February 2007:

With this, I confirmed what has already been suspected by others—notably, Katz (Phys. Rev. Letters 83:9, 1892, 1999); Murphy (Phys. Rev. Letters 83:9, 1890, 1999); and Scheffer (Phys. Rev. D, 67:8, 084021, 2003)—that the magnitude of the thermal recoil force is indeed comparable to the anomalous acceleration. Moreover, I established that the thermal recoil force is very accurately described as a simple linear combination of heat from two heat sources: electrical heat and heat from the RTGs. The thermal acceleration \(a\) is, in fact

$$a=\frac{1}{mc}(\eta_{\rm rtg}P_{\rm rtg} + \eta_{\rm elec}P_{\rm elec}),$$

where \(c\simeq 300,000~{\rm km/s}\) is the speed of light, \(m\simeq 250~{\rm kg}\) is the mass of the spacecraft, \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~\rm {W}\) are the RTG heat and electrical heat, respectively, and \(\eta_{\rm rtg}\) and \(\eta_{\rm elec}\) are “efficiency factors”.

This simple force model is very useful because it can be incorporated directly into the orbital model of the spacecraft.

In the years since, the group led by Gary Kinsella constructed a very thorough and comprehensive model of the Pioneer spacecraft, using the same software tools (not to mention considerable expertise) that they use for “live” spacecraft. With this model, they were able to predict the thermal recoil force with the greatest accuracy possible, at different points along the trajectory of the spacecraft. The result can be compared directly to the acceleration that is “measured”; i.e., the acceleration that is needed to model the radio signal accurately:

In this plot, the step-function like curve (thick line) is the acceleration deduced from the radio signal frequency. The data points with vertical error bars represent the recoil force calculated from the thermal model. They are rather close. The relatively large error bars are due primarily to the fact that we simply don’t know what happened to the white paint that coated the RTGs. These were hot (the RTGs were sizzling hot even in deep space) and subjected to solar radiation (ultraviolet light and charged particles) so the properties of the paint may have changed significantly over time… we just don’t know how. The lower part of the plot shows just how well the radio signal is modeled; the average residual is less than 5 mHz. The actual frequency of the radio signal is 2 GHz, so this represents a modeling accuracy of less than one part in 100 billion, over the course of nearly 20 years.

In terms of the above-mentioned efficiency factors, the model of Gary’s group yielded \(\eta_{\rm rtg}=0.0104\) and \(\eta_{\rm elec}=0.406\).

But then, as I said, we also incorporated the thermal recoil force directly into the Doppler analysis that was carried out by Jordan Ellis. Jordan found best-fit residuals at \(\eta_{\rm rtg}=0.0144\) and \(\eta_{\rm elec}=0.480\). These are somewhat larger than the values from the thermal model. But how much larger?
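
Before turning to how we answered that, it is worth noting what these numbers translate into in terms of acceleration, using the simple force model above with the nominal values \(m\simeq 250~{\rm kg}\), \(P_{\rm rtg}\sim 2~{\rm kW}\) and \(P_{\rm elec}\sim 100~{\rm W}\):

# Thermal recoil acceleration a = (eta_rtg*P_rtg + eta_elec*P_elec)/(m*c)
# for the two sets of efficiency factors quoted above.
c = 299792458.0                  # speed of light, m/s
m = 250.0                        # approximate spacecraft mass, kg
P_rtg, P_elec = 2000.0, 100.0    # approximate heat outputs, W

for label, eta_rtg, eta_elec in [("thermal model", 0.0104, 0.406),
                                 ("Doppler fit  ", 0.0144, 0.480)]:
    a = (eta_rtg * P_rtg + eta_elec * P_elec) / (m * c)
    print(f"{label}: a = {a:.2e} m/s^2")

Both come out in the vicinity of \(10^{-9}~{\rm m/s^2}\), i.e., of the same order as the anomalous acceleration deduced from the radio data; the question is whether the difference between them is significant.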

We found that the best way to answer this question was to plot the two results in the parameter space defined by these two efficiency factors:

The dashed ellipse here represents the estimates from the thermal model and their associated uncertainty. The ellipse is elongated horizontally, because the largest source of uncertainty, the degradation of RTG paint, affects only the \(\eta_{\rm rtg}\) factor.

The dotted ellipse represents the estimates from radio signal measurements. The formal error of these estimates is very small (the error ellipse would be invisibly tiny). These formal errors, however, are calculated by assuming that the error in every one of the tens of thousands of Doppler measurements arises independently. In reality, this is not the case: the Doppler measurements are insanely accurate, any errors that occur are a result of systematic mismodeling, e.g., caused by our inadequate knowledge of the solar system. This inflates the error ellipse and that is what was shown in this plot.

Looking at this plot was what allowed us to close our analysis with the words, “We therefore conclude that at the present level of our knowledge of the Pioneer 10 spacecraft and its trajectory, no statistically significant acceleration anomaly exists.”

Are there any caveats? Not really, I don’t think, but there are still some unexplored questions. Applying this research to Pioneer 11 (I expect no surprises there, but we have not done this in a systematic fashion). Modeling the spin rate change of the two spacecraft. Making use of radio signal strength measurements, which can give us clues about the precise orientation of the spacecraft. Testing the paint that was used on the RTGs in a thermal vacuum chamber. Accounting for outgassing. These are all interesting issues but it is quite unlikely that they will alter our main conclusion.

On several occasions when I gave talks about Pioneer, I used a slide that said, in big friendly letters,

PIONEER 10/11 ARE THE MOST PRECISELY NAVIGATED DEEP SPACE CRAFT TO DATE.

And they confirmed the predictions of Newton and Einstein, with spectacular accuracy, by measuring the gravitational field of the Sun in situ, all the way out to about 70 astronomical units (one astronomical unit being the distance of the Earth from the Sun).

Posted at 11:10 am