Dec 03 2012
 

Update (September 6, 2013): The analysis in this blog entry is invalid. See my September 6, 2013 blog entry on this topic for an explanation and update.

It has been a while since I last wrote about a pure physics topic in this blog.

A big open question these days is whether or not the particle purportedly discovered by the Large Hadron Collider is indeed the Higgs boson.

One thing about the Higgs boson is that it is a spin-0 scalar particle: this means, essentially, that the Higgs is identical to its mirror image. This distinguishes the Higgs from pseudoscalar particles that “flip” when viewed in a mirror.

So then, one way to distinguish the Higgs from other possibilities, including so-called pseudoscalar resonances, is by establishing that the observed particle indeed behaves either like a scalar or like a pseudoscalar.

Easier said than done. The differences in behavior are subtle. But it can be done, by measuring the angular distribution of decay products. And this analysis was indeed performed using the presently available data collected by the LHC.

Without further ado, here is one view of the data, taken from a November 14, 2012 presentation by Alexey Drozdetskiy:

The solid red line corresponds to a scalar particle (denoted by 0+); the dotted red line to a pseudoscalar (0−). The data points represent the number of events. The horizontal axis represents a “Matrix Element Likelihood Analysis” value, which is constructed using a formula similar to this one (see arXiv:1208.4018 by Bolognesi et al.):

$${\cal D}_{\rm bkg}=\left[1+\frac{{\cal P}_{\rm bkg}(m_{4\ell};m_1,m_2,\Omega)}{{\cal P}_{\rm sig}(m_{4\ell};m_1,m_2,\Omega)}\right]^{-1},$$

where the \({\cal P}\)-s represent probabilities associated with the background and the signal.

So far so good. The data are obviously noisy. And there are not that many data points: only 10, representing 16 events (give or take, as the vertical error bars are quite significant).

There is another way to visualize these values: namely by plotting them against the relative likelihood that the observed particle is 0+ or 0−:

In this fine plot, the two Gaussian curves correspond to Monte-Carlo simulations of the scalar and pseudoscalar scenarios. The position of the green arrow is somehow representative of the 10 data points shown in the preceding plot. The horizontal axis in this case is the logarithm of a likelihood ratio.

On the surface of it, this seems to indicate that the observed particle is indeed a scalar, just like the Higgs. So far so good, but what bothers me is that this second plot does not indicate uncertainties in the data. Yet, judging by the sizable vertical error bars in the first plot, the uncertainties are significant.

However, to translate the uncertainties from the first plot, one has to be able to relate the likelihood ratio on this plot to the MELA value on the preceding plot. Such a relationship indeed exists, given by the formula

$${\cal L}_k=\exp(-n_{\rm sig}-n_{\rm bkg})\prod_i\left(n_{\rm sig}\times{\cal P}^k_{\rm sig}(x_i;\alpha;\beta)+n_{\rm bkg}\times{\cal P}_{\rm bkg}(x_i;\beta)\right).$$

The problem with this formula, from my naive perspective, is that in order to replicate it, I would need to know not only the number of candidate signal events but also the number of background events, and also the associated probability distributions and values for \(\alpha\) and \(\beta\). I just don’t have all the information necessary to reconstruct this relationship numerically.

But perhaps I don’t have to. There is a rather naive thing one can do: and that would be simply calculating the weighted average of the data points in the first plot. When I do this, I get a value of 0.57. Lo and behold, it has roughly the same relationship to the solid red Gaussian in that plot as the green arrow to the 0+ Gaussian in the second.
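The naive shortcut above is just an inverse-variance weighted average. Here is a minimal sketch of the calculation; the data values below are made up for illustration only, as I am not reproducing the actual MELA data points here:

```python
import math

def weighted_mean(values, sigmas):
    """Inverse-variance weighted mean and its 1-sigma uncertainty."""
    weights = [1.0 / s**2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, 1.0 / math.sqrt(total)

# Hypothetical MELA values and symmetric errors (NOT the real data):
values = [0.2, 0.35, 0.5, 0.55, 0.6, 0.45, 0.7, 0.65, 0.9, 0.85]
sigmas = [0.3, 0.25, 0.3, 0.2, 0.25, 0.3, 0.2, 0.25, 0.3, 0.25]

mean, err = weighted_mean(values, sigmas)
print(f"{mean:.2f} +/- {err:.2f}")
```

Each point is weighted by the inverse square of its error bar, so the noisier points on the right contribute less, but with errors this large they still pull the average around noticeably.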

Going by the assumption that my naive shortcut actually works reasonably well, I can take the next step. I can calculate a \(1\sigma\) error on the weighted average, which yields \(0.57^{+0.24}_{-0.23}\). When I (admittedly very crudely) try to transcribe this uncertainty to the second plot, I get something like this:

Yes, the error is this significant. So while the position of the green arrow is in tantalizing agreement with what one would expect from a Higgs particle, the error bar says that we cannot draw any definitive conclusions just yet.

But wait, it gets even weirder. Going back to the first plot, notice the two data points on the right. What if these are outliers? If I remove them from the analysis, I get something completely different: namely, the value of \(0.43^{+0.26}_{-0.21}\). Which is this:

So without the outliers, the data actually favor the pseudoscalar scenario!

I have to emphasize: what I did here is rather naive. The weighted average may not accurately represent the position of the green arrow at all. The coincidence in position could be a complete accident. In which case the horizontal error bar yielded by my analysis is completely bogus as well.

I also attempted to check how much more data would be needed to reduce the size of these error bars sufficiently for a true \(1\sigma\) result: about 2-4 times the number of events collected to date. So perhaps what I did is not complete nonsense after all, because this is what knowledgeable people are saying: once the LHC has collected at least twice the amount of data it already has, we may know with reasonable certainty whether the observed particle is a scalar or a pseudoscalar.
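The back-of-the-envelope reasoning here is just counting statistics: the uncertainty scales as \(1/\sqrt{N}\), so collecting \(k\) times the current data shrinks the error bar by a factor \(\sqrt{k}\). A trivial sketch, using my weighted-average error of 0.24 as the starting point:

```python
import math

def scaled_error(current_error, data_factor):
    """Error bar after collecting data_factor times the current data set,
    assuming pure 1/sqrt(N) counting statistics."""
    return current_error / math.sqrt(data_factor)

for k in (2, 3, 4):
    print(f"{k}x the data -> error ~ {scaled_error(0.24, k):.3f}")
```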

Until then, I hope I did not make a complete fool of myself with this naive analysis. Still, this is what blogs are for; I am allowed to say foolish things here.

 Posted by at 10:31 pm
Nov 30 2012
 

An article we wrote with Slava Turyshev about the Pioneer anomaly and its resolution, at the request of IEEE Spectrum, is now available online.

It was an interesting experience, working with a professional science journalist and her team. I have to admit that I did not previously appreciate the level of professionalism that is behind such a “members only” magazine.

 Posted by at 3:22 pm
Oct 15 2012
 

This is not some fringe moron but a Republican representative for Georgia’s 10th district. Member of the Tea Party caucus. And a physician to boot:

Groan. I guess I must be a servant of the Devil then (go, Lucifer!) as I, too, spread the “lie from the pit of hell” called the Big Bang theory. Or the lie called “evolution”. Or the lie called “embryology” (that’s a new one for me; would you know what’s wrong with embryology from a Tea Party perspective?) What next, write down the Friedmann equations, be burned at the stake?

Now this is why, even if everything you told me about Obama and his Chicago lot was the gospel truth, I’d still prefer them over Republicans these days. I’d rather take 21st century corruption than go back to the Middle Ages.

Part of me wonders (hopes, even) that this is just a cynical attempt to attract votes and he is not actually this stone dumb stupid. But I don’t know what it says about the Republican party these days if these are the kinds of votes its representatives go after.

 Posted by at 12:22 pm
Sep 21 2012
 

I am reading about a new boson.

No, not the (presumed) Higgs boson with a mass of about 126 GeV.

I am reading about a lightweight boson, with a mass of only about 38 MeV, supposedly found at the onetime pride of Soviet science, the Dubna accelerator.

Now Dubna may not have the raw power of the LHC, but the good folks at Dubna are no fools. So if they announce what appears to be a 5-sigma result, one can’t just not pay attention.

The PHOTON-2 setup. S1 and S2 are scintillation counters. From arXiv:1208.3829.

But a 38 MeV boson? That’s not light, that’s almost featherweight. It’s only about 75 times the mass of the electron, for crying out loud. Less than 4% of the weight of the proton.

The discovery of such a lightweight boson would be truly momentous. It would certainly turn the Standard Model upside down. Whether it is a new elementary particle or some kind of bound state, it is not something that can be fit easily (if at all) within the confines of the Standard Model.

Which is one reason why many are skeptical. This discovery, after all, not unlike that of the presumed Higgs boson, is really just the discovery of a small bump on top of a powerful background of essentially random noise. The statistical significance (or lack thereof) of the bump depends fundamentally on our understanding and accurate modeling of that background.

And it is on the modeling of the background that this recent Dubna announcement has been most severely criticized.

Indeed, in his blog Tommaso Dorigo makes a very strong point of this; he also suggests that the authors’ decision to include far too many decimal digits in error terms is a disturbing sign. Who in his right mind writes 38.4935 ± 1.02639 as opposed to, say, 38.49 ± 1.03?

To this criticism, I would like to offer my own. I am strongly disturbed by the notion of a statistical analysis described by an expression of the type model = data − background. What we should be modeling is not data minus some theoretical background, but the data, period. So the right thing to do is to create a revised model that also includes the background and fit that to the data: model’ = model + background = data. When we do things this way, it is quite possible that the fits are a lot less tight than anticipated, and the apparent statistical significance of a result just vanishes. This is a point I raised a while back in a completely different context: in a paper with John Moffat about the statistical analysis of host vs. satellite galaxies in a large galactic sample.
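To make the point concrete, here is a toy sketch of the approach I am advocating: instead of fitting a signal model to "data minus background", include the background in the model and fit the data directly, so that the background uncertainties propagate into the fit. All numbers below are invented for illustration; this is not the Dubna analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
x = np.linspace(20, 60, 81)  # invariant mass axis, MeV (illustrative)

def background(x, a, b):
    return a * np.exp(-x / b)          # smooth, falling background shape

def model(x, a, b, amp, mu, sigma):    # background PLUS a Gaussian bump
    return background(x, a, b) + amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Toy "data": background with a small bump near 38, plus Poisson noise.
truth = model(x, 500.0, 15.0, 25.0, 38.0, 1.5)
data = rng.poisson(truth).astype(float)

# Fit the full model (background + signal) to the data in one step.
p0 = [400.0, 12.0, 10.0, 37.0, 2.0]
popt, pcov = curve_fit(model, x, data, p0=p0, sigma=np.sqrt(data + 1.0))
perr = np.sqrt(np.diag(pcov))
print("bump position: %.2f +/- %.2f" % (popt[3], perr[3]))
```

The covariance matrix returned by the joint fit then reflects how much the bump parameters can trade off against the background parameters, which is precisely the information a "data minus background" fit throws away.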

 Posted by at 7:58 pm
Sep 06 2012
 

Nature had a nice editorial a few days ago about the Pioneer Anomaly and our research, titled “…and farewell to the Pioneer anomaly” (so titled because in the print edition, it is right below the obituary,  titled “Farewell to a pioneer”, of Bernard Lovell, builder of what was at the time the world’s largest steerable radio telescope at Jodrell Bank).

Farewell, yes, though I still hope that we will have the wherewithal to publish a longer article in which we provide the details that did not fit onto the pages of Physical Review Letters. We ought to update our review paper in Living Reviews in Relativity, too. We need to prepare for the release of the data used in our analysis. And, if possible, I’d like to spend time tackling some of the open questions we discuss near the end of our last paper, such as analyzing the spin behavior of the two spacecraft or making use of DSN signal strength measurements to improve the trajectory solution.

First things first, though; right now, my priorities are to a) earn money (which means doing things that I actually get paid for, not Pioneer) and b) get ready to have our upstairs bathtub replaced (the workmen will be here Monday morning), after which I plan to do the wall tiles myself (with fingers firmly crossed in the hope that I won’t mess it up too badly.)

Yes, sometimes such mundane things must take priority.

 Posted by at 11:26 am
Aug 06 2012
 

Lest we forget: the attack on Hiroshima occurred 67 years ago today. Little Boy was one of the few uranium bombs ever made (using plutonium that is produced in a nuclear reactor is a much cheaper alternative.)

I remain hopeful. Yes, it was exactly 67 years ago today an atomic bomb was first used in anger against human beings. But in three days, we will celebrate (if that is the right word) the 67th anniversary of the last use of an atomic bomb in anger against human beings.

[PS: One of these days, I’ll learn basic arithmetic. 2012 − 1945 = 67. Not 77.]

 Posted by at 6:20 pm
Aug 02 2012
 

Congratulations to Mariam Sultana, reportedly Pakistan’s first PhD in astrophysics. (Or in the subfield of extragalactic astrophysics, according to another news site. Either way, it’s a laudable achievement.)

I knew women scientists have an especially difficult time in very conservative Muslim countries.

I didn’t know astrophysicists (presumably, both male and female) had to pass an extra hurdle: apparently, illiterate Islamists don’t know the difference between astrophysics and astrology. The practice of astrology, like other forms of fortune telling, is considered haraam, a sin against Allah.

Am I ever so glad that I live in an enlightened, secular country.

One of Dr. Sultana’s (I am boldly assuming that Sultana is her last name, though I am well aware that Pakistani naming conventions do not necessarily follow Western traditions) examiners was James Binney, whose name is well known to anyone involved with galactic astrophysics; the book colloquially known as “Binney and Tremaine” (the real title is Galactic Dynamics) is considered one of the field’s “bibles”. (Darn, I hope no religious fanatic misconstrues the meaning of “bible” in the preceding sentence!)

I wish Dr. Sultana the brightest career. Who knows, maybe I’ll run into her one day somewhere, perhaps at the Perimeter Institute.

 Posted by at 4:46 pm
Jul 18 2012
 

Having been told by a friend that suddenly, there is a spate of articles online about the Pioneer anomaly, I was ready to curse journalists once I came across the words: “a programmer in Canada, Viktor Toth, heard about the effort and contacted Turyshev. He helped Turyshev create a program …”.

To be clear: I didn’t contact Slava; Slava contacted me. I didn’t “help create a program”; I was already done creating a program (which is why Slava contacted me). And that was the state of things back in 2005. What about all the work that I have done since, in the last seven years? Like developing a crude and then a more refined thermal model, independently developing precision orbit determination code to confirm the existence of the anomaly, collaborating with Slava on several papers including a monster review paper published by Living Reviews in Relativity, helping shape and direct the research that arrived at the present results, and drafting significant chunks of the final two papers that appeared in Physical Review Letters?

But then it turns out that journalists are blameless for a change. They didn’t invent a story out of thin air. They just copied the words from a NASA JPL press release.

And I am still trying to decide if I should feel honored or insulted. But then I am reminding myself that feeling insulted is rarely productive. So I’ll go with feeling honored instead. Having my contribution acknowledged by JPL is an honor, even if they didn’t get the details right.

 Posted by at 4:15 pm
Jul 17 2012
 

If you were reading newspapers, science blogs, or even some articles written by prominent scientists or announcements by prominent institutions (such as Canada’s Perimeter Institute), you might be under the impression that the Higgs boson is a done deal: it has been discovered. (Indeed, Perimeter’s Web site announces on its home page that “[the] Higgs boson has been found”.)

Sounds great but it is not true. Let me quote from a recent New Scientist online article: “Although spotted at last, many properties of the new particle – thought to be the Higgs boson, or at least something similar – have yet to be tested. What’s more, the telltale signature it left in the detectors at the Large Hadron Collider (LHC) does not exactly match what is predicted”.

There, this says it all. We are almost certain that something has been discovered. (This is the 4.9-sigma result). We are not at all certain that it’s the Higgs. It probably is, but there is a significant likelihood that it isn’t, and we will only know for sure one way or another after several more years’ worth of data are collected. At least this is what the experimenters say. And why should you listen to anyone other than the experimenters?

 Posted by at 4:47 pm
Jul 16 2012
 

In the last several years, much of the time when I was wearing my physicist’s hat I was working on a theory of modified gravity.

Modified gravity theories present an alternative to the hypothetical (but never observed) substance called “dark matter” that supposedly represents more than 80% of the matter content of the Universe. We need either dark matter or modified gravity to explain observations such as the anomalous (too rapid) rotation of spiral galaxies.

Crudely speaking, when we measure the gravitational influence of an object, we measure the product of two numbers: the gravitational constant G and the object’s mass, M. If the gravitational influence is stronger than expected, it can be either because G is bigger (which means modified gravity) or M is bigger (which means extra mass in the form of some unseen, i.e., “dark” matter).

In Einstein’s general theory of relativity, gravity is the curvature of spacetime. Objects that are influenced only by gravity are said to travel along “geodesics”; their trajectory is determined entirely by the geometry of spacetime. On the other hand, objects that are influenced by forces other than Einstein’s gravity have trajectories that deviate from geodesics.

Massless particles, such as photons of light, must travel on geodesics (specifically, “lightlike geodesics”.) Conversely, if an originally massless particle deviates from a lightlike geodesic, it will appear to have acquired mass (yes, photons of light, when they travel through a transparent substance that slows them down, such as water or glass, do appear to have an effective mass.)

Modified gravity theories can change the strength of gravity two ways. They can change the strength of Einstein’s “geometric” gravity (actually, it would be called “metric gravity”); or, they can introduce a non-geometric force in addition to metric gravity.

And herein lies the problem. One important observation is that galaxies bend light, and they bend light more than one would expect without introducing dark matter. If we wish to modify gravity to account for this, it must mean changing the strength of metric gravity.

If metric gravity is different in a galaxy, it would change the dynamics of solar systems in that galaxy. This can be compensated by introducing a non-geometric force that cancels out the increase. This works for slow-moving objects such as planets and moons (or spacecraft) in orbit around a sun. However, stars like our own Sun also bend light. This can be observed very precisely, and we know that our Sun bends light entirely in accordance with Einstein’s general relativity theory. This cannot be explained as the interplay of geometric curvature and a non-geometric force; photons cannot deviate from the lightlike geodesics that are determined in their entirety by geometry alone.
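The solar light bending test mentioned above can be made quantitative. General relativity predicts a deflection of \(4GM/(c^2 b)\) for a ray grazing the solar limb; a naive Newtonian calculation gives exactly half that. A quick sketch of the numbers:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m (impact parameter of a grazing ray)

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

alpha_gr = 4.0 * G * M_sun / (c**2 * R_sun) * RAD_TO_ARCSEC
alpha_newton = alpha_gr / 2.0

print(f"GR deflection:        {alpha_gr:.2f} arcsec")   # ~1.75 arcsec
print(f"Newtonian deflection: {alpha_newton:.2f} arcsec")
```

The 1.75 arcsecond figure is what solar-system observations confirm to high precision, which is exactly why a modified theory has so little room to maneuver near the Sun.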

So we arrive at an apparent contradiction: metric gravity must be stronger than Einstein’s prediction in a galaxy to account for how galaxies bend light, but it cannot be stronger in solar systems in that galaxy (or at the very least, in the one solar system we know well, our own), otherwise it could not account for how suns bend light or radio beams.

I have come to the conclusion that it is not galaxy rotation curves or cosmological structure formation, but the modeling of the bending of light, and the ability to deal with this apparent paradox, that is the most important test a modified gravity theory must pass in order to be considered viable.

 Posted by at 5:13 pm
Jul 13 2012
 

I have been thinking about neutrinos today. No, not about faster-than-light neutrinos. I was skeptical about the sensational claim from the OPERA experiment last year, and my skepticism was well justified.

They may not be faster than light, but neutrinos are still very weird. Neutrinos of one flavor turn into another, a discovery that, to many a particle physicist, had to be almost as surprising as the possibility that neutrinos are superluminal.

The most straightforward explanation for these neutrino oscillations is that neutrinos have mass. But herein lies a problem. We have only ever observed left-handed neutrinos. This makes sense if neutrinos are massless particles that travel at the speed of light, since all observers agree on what left-handed means: the spin of the neutrino, projected along the direction of its motion, is always −1/2.

But now imagine neutrinos that are massive and travel slower than the speed of light. As a matter of fact, imagine a bunch of neutrinos fired by CERN in Geneva in the direction of Gran Sasso, Italy, some 730 km away. It takes roughly 2.4 ms for them to arrive. Now if you can run very, very, very fast (say, you’re the Flash, the comic book superhero), you may be able to outrun the bunch. Looking back, you will see… a bunch of neutrinos with a velocity vector pointing backwards (they’re slower than you, which means they’ll appear to be moving backwards from your perspective), so projecting their spin along the direction of motion, you get +1/2. In other words, you’re observing right-handed neutrinos.

This is just weird. On the surface of it, it means that our fast-running Flash sees the laws of physics change! This is in deep contradiction with the laws of special relativity, Lorentz invariance and all that.

How we can interpret this situation depends on whether we believe that neutrinos are “Dirac” or “Majorana”. Neutrinos are fermions, and fermions are represented by spinor fields. A spinor field has four components: these correspond, in a sense, to a left-handed and a right-handed particle and their respective antiparticles. So if a particle only exists as a left-handed particle, only two of the four components remain; the other two (at least in the so-called Weyl representation) disappear, are “projected out”, to use a nasty colloquialism.

But we just said that if neutrinos are massive, it no longer makes sense to talk about strictly left-handed neutrinos; to the Flash, those neutrinos may appear right-handed. So both left- and right-handed neutrino states exist. Are they mathematically independent? Because if they are, neutrinos are represented by a full 4-component “Dirac” spinor. But there is a possibility that the components are not independent: in effect, this means that the neutrino is its own antiparticle. Such states can be represented by a two-component “Majorana” spinor.

The difference between these two types of neutrinos is not just theoretical. The neutrino carries something very real: the lepton number, in essence the “electronness” (without the electric charge) of an electron. If a neutrino is its own antiparticle, the two can annihilate one another, and two units of “electronness” vanish. Lepton number is not conserved.

If this is indeed the case, it can be observed. The so-called neutrinoless double beta decay is a hypothetical form of radioactive decay in which an isotope that is known to decay by emitting two electrons simultaneously (e.g., calcium-48 or uranium-238) does so without emitting the corresponding neutrinos (because these annihilate each other without going anywhere). Unfortunately, given that neutrinos don’t like to do much interacting to begin with, the probability of a neutrinoless decay occurring at any given time is very small. Still, it is observable in principle, and if observed, it would indicate unambiguously that neutrinos are Majorana spinors. (A prospect that may be appealing insofar as neutrinos are concerned, but I find it nonetheless deeply disturbing that such a fundamental property of a basic building block of matter may turn out to be ephemeral.)

Either way, I remain at a loss when I think about the handedness of neutrinos. If neutrinos are Dirac neutrinos, one may postulate right-handed neutrinos that do not interact the way left-handed neutrinos do (i.e., do not participate in the weak interaction, being so-called sterile neutrinos instead). Cool, but what about our friend, the Flash? Suppose he is observing the same thing we’re observing, a neutrino in the OPERA bunch interacting with something. But from his perspective, that neutrino is a right-handed neutrino that is not allowed to participate in such an interaction!

Or suppose that neutrinos are Majorana spinors, and right-handed neutrinos are simply much (VERY much) heavier, which is why they have not been observed yet (this is the so-called seesaw mechanism). The theory allows us to construct such a mass matrix, but once again having the Flash around leads to trouble: he will observe ordinary “light” neutrinos as right-handed ones!

Perhaps these are just apparent contradictions. In fact, I am pretty sure that that’s what they are, since all this follows from writing down a theory in the form of a Lagrangian density that is manifestly Lorentz (and Poincaré) invariant, hence the physics does not become broken for the Flash. It will just turn weird. But how weird is too weird?

 Posted by at 10:13 pm
Jul 10 2012
 

I once had a profound thought, years ago.

I realized that many people think that knowing the name of something is the same as understanding that thing. “What’s that?” they ask, and when you reply, “Oh, that’s just the blinking wanker from a thermonuclear quantum generator,” they nod deeply and thank you with the words, “I understand”. (Presumably these are the same people who, when they ask “How does this computer work?”, do not actually mean that they are looking for an explanation of von Neumann machines, digital electronics, modern microprocessor technology, memory management principles, hardware virtualization techniques and whatnot; they were really just looking for the ON switch. Such people form an alarming majority… but it took me many frustrating years to learn this.)

I am not sure how to feel now, having just come across a short interview piece with the late physicist Richard Feynman, who is talking about the same topic. The piece is even titled “Knowing the name of something”. I am certainly reassured that a mind such as Feynman’s had the same thought that I did. I am also disappointed that my profound thought is not so original after all. But I feel I should really be encouraged: perhaps this is just a sign that the same thought might be occurring to many other people, and that might make the world a better place. Who knows… in a big Universe, anything can happen!

 

 Posted by at 9:05 am
Jul 05 2012
 

News flash this morning: the first (of hopefully many) Japanese nuclear reactor is back online.

On March 11, 2011, the fifth biggest earthquake in recorded history, and the worst recorded earthquake ever in Japan, hit the island nation. As a result, some 16,000 people died (the numbers may go higher as some are still listed as missing). Most were killed by the natural disaster directly, as they drowned in the resulting tsunami. Some were killed as technology failed: buildings collapsed, vehicles crashed, industrial installations exploded, caught fire, or leaked toxins.

None were killed by the world’s second worst nuclear accident to date, the loss of power and resulting meltdown at the Fukushima Daiichi nuclear power plant. Some of it was due, no doubt, to sheer luck. Some of it was due to the inherent safety of these plants and the foresight of their designers (though foresight did not always prevail, as evidenced by the decision to place last-resort emergency backup generators in a basement in a tsunami-prone area). The bottom line, though, remains: no-one died.

Yet the entire nuclear power generation industry in Japan was shut down as a result. Consequently, Japan’s conventional emissions rose dramatically; power shortages prevailed; and Japan ended up with a trade deficit, fueled by their import of fossil fuels.

Finally, it seems that sanity (or is it necessity?) is about to prevail. The Ohi nuclear power plant is supplying electricity again. I can only hope that it is running with lessons learned about a nuclear disaster that, according to the Japanese commission investigating it, was “profoundly manmade”; one “that could have been foreseen and prevented”, were it not for causes that were deeply rooted in Japanese culture.

 Posted by at 8:35 am
Jul 04 2012
 

I got up early this morning, so I had a chance to study the results from LHC, namely the preliminary publications from the ATLAS and CMS detectors.

According to the ATLAS team, the likelihood that the event count they see around 126 GeV is due purely to chance is less than one in a million. The result is better than 5σ, which makes it almost certain that they observed something.

The CMS detector observed many possible types of Higgs decay events. When they combined them all, they found that the probability that all this is due purely to chance is again less than one in a million… in their case, an almost 5σ result. Once again, it indicates very strongly that something has been observed.
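The "one in a million" figures can be checked directly: for a one-sided Gaussian tail, a \(z\)-sigma excess corresponds to a probability \(p = \tfrac{1}{2}\,\mathrm{erfc}(z/\sqrt{2})\). A quick sketch of the conversion:

```python
import math

def p_value(z_sigma):
    """One-sided tail probability of a z-sigma Gaussian fluctuation."""
    return 0.5 * math.erfc(z_sigma / math.sqrt(2.0))

for z in (3.0, 4.9, 5.0):
    p = p_value(z)
    print(f"{z} sigma -> p = {p:.2e} (about 1 in {1.0 / p:,.0f})")
```

At 5 sigma the tail probability is indeed well below one in a million, which is why that threshold has become particle physics’ conventional standard for claiming a discovery.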

But is it the Higgs? I have to say it’s beginning to look like it’s both quacking and walking like a duck… but CERN is cautious, and rightfully so. Their statement is that “CERN experiments observe particle consistent with long-sought Higgs boson”, and I think it is a very correct one.

 Posted by at 7:30 am
Jul 03 2012
 

It appears that CERN goofed and as a result, the video of the announcement planned for tomorrow has been leaked. (That is, unless you choose to believe their cockamamie story about multiple versions of the video having been produced.)

The bottom line: there is definitely a particle there with integer spin. Its mass is about 125 GeV. We know it decays into two photons and two Z-bosons. That’s about all we know.

The assessment is that it is either the Higgs or something altogether new.

 Posted by at 6:17 pm
Jul 02 2012
 

The Tevatron may have been shut down last year but the data they collected is still being analyzed.

And it’s perhaps no accident that they managed to squeeze out an announcement today, just two days before the scheduled announcement from the LHC: their observations are “consistent with the possible presence of a low-mass Higgs boson.”

The Tevatron has analyzed ten “inverse femtobarns” worth of data. This unit of measure (unit of luminosity, integrated luminosity to be precise) basically tells us how many events the Tevatron experiment produced. One “barn” is a whimsical name for a tiny unit of area, 10⁻²⁴ square centimeters. A femtobarn is 10⁻¹⁵ barn. And when a particle physicist speaks of “inverse femtobarns”, what he really means is “events per femtobarn”. Ten inverse femtobarns of “integrated luminosity”, then, means a particle beam that, over time, produced ten events per every 10⁻³⁹ square centimeters.

Now this makes sense intuitively if you think of a yet to be discovered particle or process as something that has a size. Suppose the cross-sectional size of what you are trying to discover is 10⁻³⁶ square centimeters, or 1000 femtobarns. Now your accelerator just peppered each femtobarn with 10 events… that’s 10,000 events that fall onto your intended target, which means 10,000 opportunities to discover it. On the other hand, if your yet to be discovered object is 10⁻⁴² square centimeters in size, which is just one one-thousandth of a femtobarn… ten events per femtobarn is really not enough; chances are your particle beam never hit the target and there is nothing to see.
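The arithmetic behind this intuition is just one multiplication: the expected event count is the cross section times the integrated luminosity, in matching units. A trivial sketch:

```python
def expected_events(cross_section_fb, luminosity_inv_fb):
    """Expected event count: cross section (femtobarns) times
    integrated luminosity (inverse femtobarns)."""
    return cross_section_fb * luminosity_inv_fb

# The two examples from the text, with 10 fb^-1 of integrated luminosity:
print(expected_events(1000.0, 10.0))   # 10000.0 events: plenty to discover
print(expected_events(0.001, 10.0))    # far less than one event: invisible
```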

The Tevatron operated for a long time, which allowed them to reach this very high level of integrated luminosity. But the cross-section, or apparent “size” of Higgs-related events also depends on the energy of the particles being accelerated. The Tevatron was only able to accelerate particles to 2 TeV. In contrast, the LHC is currently running at 8 TeV, and at such a high energy, some events are simply more likely to occur, which means that they are effectively “bigger” in cross section, more likely to be “illuminated” by the particle beam.

The Tevatron is not collecting any new data, but it seems they don’t want to be left out of the party. Hence, I guess, this announcement, dated July 2, indicating a strong hint that the Higgs particle exists with a mass around 125 GeV/c².

On the other hand, CERN already made it clear that their announcement will not be a definitive yes/no statement on the Higgs. Or so they say. Yet it has been said that Peter Higgs, after whom the Higgs boson is named, has been invited to be present when the announcement will be made. This is more than enough for the rumors to go rampant.

I really don’t know what to think. There are strong reasons to believe that the Higgs particle is real. There are equally strong reasons to doubt its existence. The observed events are important, but an unambiguous confirmation requires further analysis to exclude possibilities such as statistical flukes, events due to something else like a hadronic resonance, and who knows what else. And once again, I am also reminded of another historical announcement by CERN exactly 28 years prior to this upcoming one, on July 4, 1984, when they announced the discovery of the top quark at 40 GeV. Except that there is no top quark at 40 GeV… their announcement was wrong. Yet the top quark is real, later to be discovered having a mass of about 173 GeV.

Higgs or no Higgs? I suspect the jury will still be out on July 5.

 Posted by at 5:48 pm
Jun 28 2012
 

My blog is supposed to be (mostly) about physics. So let me write something about physics for a change.

John Moffat, with whom I have been collaborating (mostly on his modified gravity theory, MOG) for the past six years or so, has many ideas. Recently, he was wondering: could the celebrated 125 GeV (125 gigaelectronvolts divided by the speed of light squared, to be precise, which is about 133 times the mass of a hydrogen atom) peak observed last year at the LHC (and, if rumors are to be believed, perhaps to be confirmed next week) be a sign of something other than the Higgs particle?

All popular accounts emphasize the role of the Higgs particle in making particles massive. This is a bit misleading. For one thing, the Higgs mechanism is directly responsible for the masses of only some particles (the vector bosons); for another, even this part of the mechanism requires that, in addition to the Higgs particle, we also presume the existence of a potential field (the famous “Mexican hat” potential) that is responsible for spontaneous symmetry breaking.

Higgs mechanism aside though, the Standard Model of particle physics needs the Higgs particle. Without the Higgs, the Standard Model is not renormalizable; its predictions diverge into meaningless infinities.

The Higgs particle solves this problem by “eating up” the last misbehaving bits of the Standard Model that cannot be eliminated by other means. The theory is then complete: although it remains unreconciled with gravity, it successfully unites the other three forces and all known particles into a unified (albeit somewhat messy) whole. The theory’s predictions are fully in accordance with data that include laboratory experiments as well as astrophysical observations.

Well, almost. There is still this pesky business with neutrinos. Neutrinos in the Standard Model are massless. Since the 1980s, however, we have had strong reasons to suspect that neutrinos have mass. The reason is the “solar neutrino problem”, a discrepancy between the predicted and observed number of neutrinos originating from the inner core of the Sun. This problem is resolved if different types of neutrinos can turn into one another, since the detectors in question could only “see” electron neutrinos. This “neutrino flavor mixing” or “neutrino oscillation” can occur if neutrinos have mass, represented by a mass matrix that is not completely diagonal.
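In the simplest two-flavor picture, the survival probability of an electron neutrino after travelling a distance L is P = 1 − sin²(2θ) sin²(1.27 Δm²L/E), with Δm² in eV², L in km and E in GeV. A sketch of this standard textbook formula (the parameter values below are mine, chosen for illustration only):

```python
import math

def survival_probability(theta, dm2_ev2, length_km, energy_gev):
    """Two-flavor electron-neutrino survival probability in vacuum."""
    phase = 1.27 * dm2_ev2 * length_km / energy_gev
    return 1.0 - math.sin(2.0 * theta) ** 2 * math.sin(phase) ** 2

# At the source (L = 0) nothing has oscillated yet:
print(survival_probability(0.6, 7.5e-5, 0.0, 0.01))  # 1.0
# With maximal mixing, at just the right distance, the electron neutrino
# has (momentarily) turned entirely into another flavor:
print(survival_probability(math.pi / 4, 7.5e-5, (math.pi / 2) / (1.27 * 7.5e-5) * 0.01, 0.01))
```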

What’s wrong with introducing such a matrix, one might ask? Two things. First, this matrix necessarily contains dimensionless quantities that are very small. While there is no a priori reason to reject them, dimensionless numbers in a theory that are orders of magnitude bigger or smaller than 1 are always suspect. But the second problem is perhaps the bigger one: massive neutrinos make the Standard Model non-renormalizable again. This can only be resolved by either exotic mechanisms or the introduction of new elementary particles.

This challenge to the Standard Model perhaps makes the finding of the Higgs particle less imperative. Far from turning a nearly flawless theory into a perfect one, it only addresses some problems in an otherwise still flawed, incomplete theory. Conversely, not finding the Higgs particle is less devastating: it would not invalidate a theory that would otherwise have been perfect; it simply prompts us to look for solutions elsewhere.

In light of that, one may wish to take a second look at the observations reported at the LHC last fall. The Higgs particle, if it exists, can decay in several ways. We already know that the Higgs particle cannot be heavier than 130 GeV, and this excludes certain forms of decay. One of the decays that remains is the decay into a quark and its antiparticle that, in turn, decay into two photons. Photons are easy to observe, but there is a catch: when the LHC collides large numbers of protons with one another at high energies, a huge number of photons are created as a background. It is against this background that two-photon events with a signature specific to the Higgs particle must be observed.

Diphoton results from the Atlas detector at the LHC, late 2011.

And observed they have been, albeit not with a resounding statistical significance. There is a small excess of such two-photon events indicating a possible Higgs mass of 125 GeV. Many believe that this is because there is indeed a Higgs particle with this mass, and its discovery will be confirmed with the necessary statistical certainty once more data are collected.

Others remain skeptical. For one thing, that 125 GeV peak is not the only peak in the data. For another, it is a peak that is a tad more pronounced than what the Higgs particle would produce. Furthermore, there is no corresponding peak in other “channels” that would correspond to other forms of decay of the Higgs particle.

This is when Moffat’s idea comes in. John had in mind the many “hadronic resonances”, all sorts of combinations of quarks that appear at lower energies, some of which still befuddle particle physicists. What if, he asks, this 125 GeV peak is due to just another such resonance?

Easier said than done. At low energies, there are plenty of quarks to choose from and combine. But 125 GeV is not a very convenient energy from this perspective. The heaviest quark, the top quark, has a mass of 173 GeV or so; far too heavy for this purpose. The next quark in terms of mass, the bottom quark, is much too light at around 4.5 GeV. There is no obvious way to combine a reasonably small number of quarks into a 125 GeV composite particle that sticks around long enough to be detected. Indeed, the top quark is so heavy that “toponium”, a hypothetical combination of a top quark and its antiparticle, is believed to be undetectable; it decays so rapidly, it really never has time to form in the first place.

But then, there is another possibility. Remember how neutrinos oscillate between different states? Well, composite particles can do that, too. And as to what these “eigenstates” are, that really depends on the measurement. One notorious example is the neutral kaon (also known as the neutral K meson). It has one eigenstate with respect to the strong interaction, but two quite different eigenstates with respect to the weak interaction.

So here is John’s perhaps not so outlandish proposal: what if there is a form of quarkonium whose eigenstates are toponium (not observed) and bottomonium with respect to some interactions, but two different mixed states with respect to whatever interaction is responsible for the 125 GeV resonance observed by the LHC?

Such an eigenstate requires a mixing angle, easily calculated as 20 degrees. This mixing also results in another eigenstate, at 330 GeV, which is likely so heavy that it is not stable enough to be observed. This proposal, if valid, would explain why the LHC sees a resonance at 125 GeV without a Higgs particle.
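The arithmetic behind such a mixing is the textbook diagonalization of a symmetric 2×2 mass matrix: the two eigenvalues are the observable masses, and the rotation angle that diagonalizes the matrix is the mixing angle. A generic sketch (the matrix entries below are made up for illustration; they are not Moffat's actual values):

```python
import math

def diagonalize_2x2(m11, m22, m12):
    """Eigenvalues and mixing angle (degrees) of the matrix [[m11, m12], [m12, m22]]."""
    avg = 0.5 * (m11 + m22)
    r = math.hypot(0.5 * (m11 - m22), m12)
    theta_deg = 0.5 * math.degrees(math.atan2(2.0 * m12, m11 - m22))
    return avg - r, avg + r, theta_deg

# A maximally mixed pair of degenerate states splits symmetrically,
# with a 45-degree mixing angle:
light, heavy, angle = diagonalize_2x2(1.0, 1.0, 1.0)
print(light, heavy, angle)
```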

Indeed, this proposal can also explain a) why the peak is stronger than what one would predict for the Higgs particle, b) why no other Higgs-specific decay modes were observed, and perhaps most intriguingly, c) why there are additional peaks in the data!

That is because if there is a “ground state”, there are also excited states, the same way a hydrogen atom (to use a more commonplace example) has a ground state and excited states with its lone electron in higher energy orbits. These excited states would show up in the plots as additional resonances, usually closely bunched together, with decreasing magnitude.
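For the hydrogen half of the analogy, the bunching is easy to see from the Bohr formula E_n = −13.6 eV / n²: each successive gap between levels is much smaller than the previous one. (Quarkonium sits in a different potential, so this is only an illustration of the pattern, not a prediction.)

```python
RYDBERG_EV = 13.6  # hydrogen ground-state binding energy, eV

levels = [-RYDBERG_EV / n**2 for n in range(1, 6)]  # E_1 ... E_5
gaps = [b - a for a, b in zip(levels, levels[1:])]

print(levels)  # levels crowd together as they approach zero
print(gaps)    # 10.2, ~1.89, ~0.66, ~0.31 -- rapidly shrinking spacing
```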

Could John be right? I certainly like his proposal, though I am not exactly the unbiased observer, since I did contribute a little to its development through numerous discussions. In any case, we will know a little more next week. An announcement from the LHC is expected on July 4. It is going to be interesting.

 Posted by at 4:36 pm
Jun 15 2012
 

Our latest Pioneer paper, in which we discuss the results from the Pioneer thermal model and its incorporation into the orbital analysis (the conclusion being that no significant anomalous acceleration remains once thermal radiation is properly accounted for) made it to the cover of Physical Review Letters. I am very grateful that I was given the opportunity to participate in this research, and I am very proud of this work and our results.

 Posted by at 11:35 am
Jun 06 2012
 

Yesterday, Venus transited the Sun. It won’t happen again for more than a century.

I had paper “welder’s glasses” courtesy of Sky News. Looking through them, I did indeed see a tiny black speck on the disk of the Sun. However, it was nowhere as impressive as the pictures taken through professional telescopes.

These live pictures were streamed to us courtesy of NASA. One planned broadcast from Alice Springs, Australia, was briefly interrupted. At first, it was thought that a road worker cutting an optical cable was the culprit, but later it turned out to be a case of misconfigured hardware. Or could it be that they were trying to fix a problem with an “intellectual property address”, a wording that appeared on several Australian news sites today? (Note to editors: if you don’t understand the text, don’t be over-eager replacing acronyms with what you think they stand for.)

I also tried to take pictures myself, holding my set of paper welder’s glasses in front of my (decidedly non-professional) cameras. Surprisingly, it was with my cell phone that I was able to take the best picture, but it did not even come close in resolution to what would have been required to see Venus.

The lesson? I think I’ll leave astrophotography to the professionals. Or, at least, to expert amateurs. Unfortunately, I am neither.

That said, I remain utterly fascinated by the experience of staring at a sphere of gas, close to a million and a half kilometers wide, containing 2 nonillion (2,000,000,000,000,000,000,000,000,000,000) kilograms of mostly hydrogen gas, burning roughly 580 billion kilograms of it every second in the form of nuclear fusion deep in its core, releasing photons amounting to about 4.3 billion kilograms of mass-equivalent energy every second… and most of these photons remain trapped for a very long time, producing extreme pressures (so that the interior of the Sun is dominated by this ultrarelativistic photon gas) that prevent the Sun from collapsing upon itself, which will indeed be its fate when it can no longer sustain hydrogen fusion in its core a few billion years from now. And then, this huge orb is briefly occulted by a tiny black speck, the shadow of a world as big as our own… just a tiny black dot, too small for my handheld cameras to see.
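That mass-energy figure is just E = mc² applied to the Sun's luminosity: dividing the radiated power by c² gives the mass carried away by photons each second. A quick sanity check with standard textbook values:

```python
SOLAR_LUMINOSITY_W = 3.846e26  # solar luminosity, watts
SPEED_OF_LIGHT = 2.998e8       # speed of light, m/s

# Mass-equivalent of the radiated energy, per second (E = mc^2)
mass_loss_kg_per_s = SOLAR_LUMINOSITY_W / SPEED_OF_LIGHT**2
print(f"{mass_loss_kg_per_s:.3g} kg/s")  # about 4.3 billion kilograms every second
```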

I sometimes try to use a human-scale analogy when trying to explain to friends just how mind-bogglingly big the solar system is. Imagine a beach ball that is a meter wide. Now suppose you stand about a hundred meters away from it, like the length of a large sports field. Okay… now imagine that that beach ball is so bleeping hot, even at this distance its heat is burning your face. That’s how hot the Sun is.

Now hold up a large pea, about a centimeter in size. That’s the Earth. Another pea, roughly a quarter of the way from you toward the beach ball (when it passes between the two, as during a transit), would be Venus.

A peppercorn, some thirty centimeters or so from your Earth pea… that’s the Moon. Incidentally, if you hold that peppercorn up, at about thirty centimeters from your eye it is just large enough to obscure the beach ball in the distance, producing a solar eclipse.

Now let’s go a little further. Some half a kilometer from the beach ball you see a large-ish orange… Jupiter. Twice as far, you see a smaller orange with a ribbon around it; that’s Saturn. Pluto would be another peppercorn, more than three kilometers away.

But your beach ball’s influence does not end there. There will be specks of dust in orbit around it as far out as several hundred kilometers, maybe more. So where would the next beach ball be, representing the nearest star? Well, here’s the problem… the surface of the Earth is just not large enough, because the next beach ball would be more than 20,000 kilometers away.
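All the numbers in this analogy follow from a single scale factor: the Sun's real diameter, about 1.39 million kilometers, mapped onto a one-meter beach ball. A sketch that reproduces the distances quoted above (the real sizes are standard values; the rounding is mine):

```python
SCALE = 1.39e9  # real metres per model metre (Sun's 1.39e9 m diameter -> 1 m ball)

real_metres = {
    "Earth diameter":     1.27e7,   # -> roughly a 1 cm pea
    "Moon distance":      3.84e8,   # -> roughly 30 cm
    "Earth-Sun distance": 1.50e11,  # -> roughly 100 m
    "Jupiter distance":   7.78e11,  # -> roughly half a kilometre
    "Pluto distance":     5.9e12,   # -> roughly 4 km
    "Proxima Centauri":   4.02e16,  # -> tens of thousands of kilometres
}
for name, metres in real_metres.items():
    print(f"{name}: {metres / SCALE:.3g} model metres")
```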

To represent other stars, not to mention the whole of the Milky Way, we would once again need astronomical distance scales. If a star like our Sun were a one-meter-wide beach ball, the Milky Way of beach balls would be larger than the orbit of the Earth around the Sun. And the nearest full-size galaxy, Andromeda, would need to be located in the distant parts of the solar system, far beyond the orbits of the planets.

The only way we could reduce galaxies and groups of galaxies to a scale that humans can comprehend is by making stars and planets microscopic. So whereas the size of the solar system can perhaps be grasped by my beach ball and pea analogy, it is simply impossible to imagine simultaneously just how large the Milky Way is, not to mention the entire visible universe.

Or, as Douglas Adams wrote in The Hitchhiker’s Guide to the Galaxy: “Space is big. Really big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the chemist’s, but that’s just peanuts to space.”

 Posted by at 10:25 am
May 29 2012
 

A few days ago, a bright 16-year-old German student of Indian descent, Shouryya Ray of Dresden, won second prize in a national science competition with an essay entitled “Analytische Lösung von zwei ungelösten fundamentalen Partikeldynamikproblemen” (Analytic solution of two unsolved fundamental particle dynamics problems).

This story should have ended there. And perhaps it would have, were it not for the words in the abstract that said, among other things: “Das zugrundeliegende Kraftgesetz wurde bereits von Newton (17. Jhd.) entdeckt. […] Diese Arbeit setzt sich also die analytische Lösung dieser bisher nur näherungsweise oder numerisch gelösten Probleme zum Ziele.” (The underlying force law was already discovered by Newton (17th century). This work therefore sets as its goal the analytic solution of these problems, which until now have only been solved approximately or numerically.)

This was more than enough for sensation-seeking science journalists. The story was picked up first by Die Welt with the title “Mit 16 ein Genie: Shouryya Ray löste ein jahrhundertealtes mathematisches Problem” (Genius at 16: Shouryya Ray solves a centuries-old mathematical problem) and then translated into English and other languages, even appearing in the Ottawa Citizen. In short order, a biographical entry on Wikipedia was created; now nominated for deletion, many are voting to keep it because, in their view, the press coverage is sufficient to establish encyclopedic notability.

Cooler heads should have prevailed. What science journalists neglected to ask is why, if this is such a breakthrough, the youth only received second prize. And in any case, what on Earth did he actually do? Neither his essay nor any details about it were published. The only clue to go by was a press photo in which the student holds up a large sheet of paper containing an equation:

As I discussed this very topic on a page I placed on my Web site a few years back (reacting to some bad math and flawed physics reasoning in an episode of the Mythbusters), I felt compelled to find out more. I guessed (correctly, as it turns out) that \(u\) and \(v\) must be the horizontal and vertical (or vertical and horizontal?) components of the projectile’s velocity, \(g\) is the gravitational acceleration, and \(\alpha\) is the coefficient of air resistance. However, I am embarrassed to admit that although I spent some time trying, I was not able to find a way to separate the variables and integrate the relevant differential equations to obtain Ray’s formula. I was actually ready to give up when I came across a derivation on reddit (and I realized that I was on the right track all along, I was just stubbornly trying to do a slightly different trick, which didn’t work). The formula is correct, and it is certainly an impressive result for a 16-year-old, worthy of a second prize.

But no more. This is not a breakthrough. As it turns out, similar implicit solutions were well known in the 19th century. A formulation that differs from Ray’s only in notational details appeared in a paper by Parker (Am. J. Phys, 45, 7, 606, July 1977). Alas, such an implicit form is of limited utility; one still requires numerical methods to actually solve the equation.
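For what it’s worth, the implicit solution can at least be checked numerically. With u and v the horizontal and vertical velocity components under quadratic drag (u′ = −αu√(u²+v²), v′ = −g − αv√(u²+v²)), the quantity g/(2u²) + (α/2)[v√(u²+v²)/u² + asinh(v/u)] — which matches Ray’s conserved expression up to notation, if I have reconstructed it correctly — should stay constant along a trajectory. A sketch (the parameter values are arbitrary):

```python
import math

G, ALPHA = 9.81, 0.05   # gravitational acceleration; drag coefficient (arbitrary)

def invariant(u, v):
    """The conserved quantity, as reconstructed above (up to notation)."""
    s = v / u
    return G / (2 * u * u) + 0.5 * ALPHA * (s * math.sqrt(1 + s * s) + math.asinh(s))

def rhs(u, v):
    """Quadratic-drag equations of motion: returns (u', v')."""
    speed = math.hypot(u, v)
    return -ALPHA * u * speed, -G - ALPHA * v * speed

def rk4_step(u, v, dt):
    """One classical 4th-order Runge-Kutta step."""
    k1u, k1v = rhs(u, v)
    k2u, k2v = rhs(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = rhs(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = rhs(u + dt * k3u, v + dt * k3v)
    return (u + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

u, v = 30.0, 20.0        # initial horizontal/vertical velocities (arbitrary)
c0 = invariant(u, v)
for _ in range(1000):    # one second of flight, dt = 1 ms
    u, v = rk4_step(u, v, 1e-3)
print(c0, invariant(u, v))  # the two values agree to many decimal places
```

The numerical check says nothing, of course, about the position as a function of time, which is exactly why the implicit form is of limited practical utility.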

Much of this was probably known to the judges of the competition, which is probably why they awarded the student second prize.

Hopefully none of this will deter young Mr. Ray from pursuing a successful career as a physicist or mathematician.

 Posted by at 10:20 am