Feb 14 2013
 

I always thought of myself as a moderate conservative. I remain instinctively suspicious of liberal activism, and I do support some traditionally conservative ideas, such as smaller government, lower taxes, and individual responsibility.

So why am I not a happy camper nowadays with a moderate conservative government in Ottawa?

Simple: because they are not moderate. To me, moderate conservatism means evidence-based governance. A government that, once its strategic goals are formulated, puts aside ideology and governs on the basis of available facts and the best scientific advice they can obtain.

But this is not what Mr. Harper’s Conservative government is doing. Quite the contrary, they engage in one of the worst of sins: distorting the facts to suit their ideology. Most recently, it is Fisheries and Oceans that is imposing confidentiality rules on participating researchers that “would be more appropriate for classified military research”.

I am appalled.

 Posted at 10:58 am
Jan 25 2013
 

I came across this image on a Facebook page dedicated to the former glory of the Soviet Union. It is titled “Russia and the USSR: similar, yet noticeably different.”

There is, unfortunately, far too much truth in what the image depicts. It does not make me wish for Soviet times to return, but it does make me wonder why so much good had to be thrown away along with the bad.

 Posted at 3:31 pm
Jan 20 2013
 

John Marburger had an unenviable role as Director of the United States Office of Science and Technology Policy. Even before he began his tenure, he faced demotion: President George W. Bush decided not to confer upon him the title “Assistant to the President on Science and Technology”, a title borne both by his predecessors and by his successor. Marburger was also widely criticized by his colleagues for his efforts to defend the Bush Administration’s scientific policies. He was not infrequently labeled a “prostitute” or worse.

I met Marburger once in 2006, though to be honest, I don’t recall if I actually conversed with him one-on-one. He gave the keynote address at an international workshop organized by JPL, titled From Quantum to Cosmos: Fundamental Physics Research in Space, which I attended.

If Marburger felt any bitterness towards his colleagues or towards his own situation as a somewhat demoted science advisor, he showed no signs of it during that keynote address. Just as there are no signs of bitterness or resentment in his book, Constructing Reality, which I just finished reading. Nor is there any hint of his own mortality, even though he must have known that his days were numbered by a deadly illness. No, this is a book written by a true scientist: it is about the immortal science that must have been his true passion all along.

It is an ambitious book. In Constructing Reality, Marburger attempts the impossible: explain the Standard Model of particle physics to the interested and motivated lay reader. Thankfully, he does not completely shy away from the math; he realizes that without at least a small amount of mathematics, modern particle physics is just not comprehensible. I admire his (and his publisher’s) courage to face this fact.

Is it a good book? I honestly don’t know. I certainly enjoyed it very much. Marburger demonstrated a thorough, and better yet, intuitive understanding of some of the most difficult aspects of the Standard Model and quantum field theory. But I am the wrong audience: I know the science that he wrote about. (That is not to say that his insight was not helpful in deepening my understanding.) Would this book be useful to the lay reader? Or to the aspiring young physicist? I really cannot tell. Learning the principles of quantum field theory is not easy, and in my experience, we each take our own path towards a deeper understanding. Some books help more than others, but ultimately, what helps the most is practice: there is no substitute for working out equations on your own. Still, if the positive reviews on Amazon are any indication, Marburger succeeded in writing a book “for [his] friends who are not physicists”.

Marburger died much too soon, at the age of 70, after he lost his battle with cancer. His book was published posthumously (which perhaps explains why the back flap of the book’s dust jacket contains his short bio and room for a photograph above, but no actual photo. Or perhaps I am just imagining things.) But his words survive and inspire others. Well done, Dr. Marburger. And thanks.

 Posted at 10:37 am
Jan 19 2013
 

Recently I came across a blog post that suggests (insinuates, even) that proponents of modified gravity ignore the one piece of evidence that “incontrovertibly settles” the question in favor of dark matter. Namely this plot:

From http://arxiv.org/abs/1112.1320 (Scott Dodelson)


In this plot, the red data points represent actual observation; the black curve, the standard cosmology prediction; and the various blue curves are predictions of (modified) gravity without dark matter.

Let me attempt to explain briefly what this plot represents. It’s all about how matter “clumps” in an expanding universe. Imagine a universe filled with matter that is perfectly smooth and homogeneous. As this universe expands, matter in it becomes less dense, but it will remain smooth and homogeneous. However, what if the distribution of matter is not exactly homogeneous in the beginning? Clumps that are denser than average have more mass and hence, more gravity, so these clumps are more able to resist the expansion. In contrast, areas that are underdense have less gravity and a less-than-average ability to resist the expansion; in these areas, matter becomes increasingly rare. So over time, overdense areas become denser, underdense areas become less dense; matter “clumps”.

Normally, this clumping would occur on all scales. There will be big clumps and small clumps. If the initial distribution of random clumps was “scale invariant”, then the clumping remains scale invariant forever.

That is, so long as gravity is the only force to be reckoned with. But if matter in the universe is, say, predominantly something like hydrogen gas, well, hydrogen has pressure. As the gas starts to clump, this pressure becomes significant. Clumping really means that matter is infalling; this means conversion of gravitational potential energy into kinetic energy. Pressure plays another role: it sucks away some of that kinetic energy and converts it into density and pressure waves. In other words: sound.

Yes, it is weird to talk about sound in a medium that is rarer than the best vacuum we can produce here on the Earth, and over cosmological distance scales. But it is present. And it alters the way matter clumps. Certain size scales will be favored over others; the clumping will clearly show preferred size scales. When the resulting density of matter is plotted against a measure of size scale, the plot will clearly show a strong oscillatory pattern.

Cosmologists call this “baryonic acoustic oscillations” or BAO for short: baryons because they represent “normal” matter (like hydrogen gas) and, well, I just explained why they are “acoustic oscillations”.

In the “standard model” of cosmology, baryonic “normal” matter amounts to only about 4% of all the matter-energy content of the visible universe. Of the rest, some 24% is “dark matter”, the rest is “dark energy”. Dark energy is responsible for the accelerating expansion the universe apparently experienced in the past 4-5 billion years. But it is dark matter that determines how matter in general clumped over the eons.

Unlike baryons, dark matter is assumed to be “collisionless”. This means that dark matter has effectively no pressure. There is nothing that could slow down the clumping by converting kinetic energy into sound waves. If the universe had scale invariant density perturbations in the beginning, it will be largely scale invariant even today. In the standard model of cosmology, most matter is dark matter, so the behavior of dark matter will dominate over that of ordinary matter. This is the prediction of the standard model of cosmology, and this is represented by the black curve in the plot above.

In contrast, cosmology without dark matter means that the only matter there is, is baryonic matter with pressure. Hence, oscillations are unavoidable. The resulting blue curves may differ in detail, but they share two prevailing characteristics: they are strongly oscillatory and they have the wrong slope.

That, say advocates of the standard model of cosmology, is all the proof we need: it is incontrovertible evidence that dark matter has to exist.

Except that it isn’t. And we showed that it isn’t, years ago, in our paper http://arxiv.org/abs/0710.0364, and also in http://arxiv.org/abs/0712.1796 (published in Class. Quantum Grav. 26 (2009) 085002).

First, there is the slope. The theory we were specifically studying, Moffat’s MOG, includes among other things a variable effective gravitational constant. This variability of the gravitational constant profoundly alters the inverse-square law of gravity over very long distance scales, and this changes the slope of the curve quite dramatically:

From http://arxiv.org/abs/0710.0364 (J. W. Moffat and V. T. Toth)


This is essentially the same plot as in Dodelson’s paper, only with different scales for the axes, and with more data sets shown. The main feature is that the modified gravity prediction (the red oscillating line) now has a visually very similar slope to the “standard model” prediction (dashed blue line), in sharp contrast with the “standard gravity, no dark matter” prediction (green dotted line) that is just blatantly wrong.

But what about the oscillations themselves? To understand what is happening there, it is first necessary to think about how the actual data points shown in these plots came into existence. These data points are the result of large-scale galaxy surveys that yielded a three-dimensional data set (sky position providing two coordinates, with the measured redshift serving as a stand-in for the third dimension, namely distance) for millions of distant galaxies. The galaxies were then organized into pairs, and the statistical distribution of galaxy-to-galaxy distances was computed. These numbers were then effectively binned using a statistical technique called a window function. The finite number of galaxies, and therefore the finite size of the bins, necessarily introduces an uncertainty, a “smoothing effect” if you wish, that tends to wipe out oscillations to some extent. But to what extent? Why, that is easy to estimate: all one needs to do is apply the same window function technique to simulated data created using the gravity theory in question:

From http://arxiv.org/abs/0710.0364 (J. W. Moffat and V. T. Toth)


This is a striking result. The acoustic oscillations are pretty much wiped out completely except at the lowest of frequencies; and at those frequencies, the modified gravity prediction (red line) may actually fit the data (at least the particular data set shown in this plot) better than the smooth “standard model” prediction!
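For the record, the mechanism is easy to demonstrate with a toy calculation. The following Python sketch is purely illustrative: the oscillatory “spectrum” and the window width below are made up, and this is not the actual window function used in the survey analysis. It simply shows how averaging with a window of finite width washes out the wiggles:

import numpy as np

# Toy "power spectrum": a smooth power law modulated by acoustic-style wiggles.
k = np.logspace(-2, 0, 500)                   # wavenumber, arbitrary units
p_raw = k**-1.5 * (1.0 + 0.4*np.sin(40.0*k))

# Crude window function: a Gaussian average in log k, whose finite width
# stands in for the finite size of the survey's bins.
def windowed(p, logk, width):
    out = np.empty_like(p)
    for i, lk in enumerate(logk):
        w = np.exp(-0.5*((logk - lk)/width)**2)
        out[i] = np.sum(w*p)/np.sum(w)
    return out

logk = np.log(k)
p_smooth = windowed(p_raw, logk, width=0.1)   # wiggles largely averaged away
print(p_raw[:5], p_smooth[:5])

Narrowing the window preserves more of the oscillations, but in the real analysis a narrower bin means fewer galaxy pairs per bin and thus larger statistical errors; that is precisely the trade-off at issue here.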

To borrow a word from the blog post that inspired mine, this is incontrovertible. You cannot make the effects of the window function go away. You can choose a smaller bin size but only at the cost of increasing the overall statistical uncertainty. You can collect more data of course, but the logarithmic nature of this plot’s horizontal axis obscures the fact that you need orders of magnitude (literally!) more data to achieve the required resolution where the acoustic oscillations would be either unambiguously seen or could be unambiguously excluded.

Which leads me to resort to Mark Twain’s all too frequently misquoted words: “The report of [modified gravity’s] death was an exaggeration.”

 Posted at 11:32 am
Jan 12 2013
 

Computer pioneer Alan Turing, dead for more than half a century, is still in the news these days. The debate is over whether or not he should be posthumously pardoned for something that should never have been a crime in the first place: his homosexuality. The British government has already apologized for a prosecution that drove Turing to suicide.

I was reminded of the tragic end of Turing’s life as I read about the death of another computer pioneer, Aaron Swartz. His name may not have been a household name, but his contributions were significant: he co-created the RSS specifications and co-founded Reddit, among other things. And, like Turing, he killed himself, possibly as a result of government prosecution. In Swartz’s case, it was not his sexual orientation but his belief that information, in particular scholarly information, should be freely accessible to all that brought him into conflict with the authorities; specifically, his decision to download some four million journal articles from JSTOR.

Ironically, it was only a few days ago that JSTOR opened up their archives to limited public access. And the trend in academic publishing for years has been in the direction of free and open access to all scientific information.

Perhaps one day, the United States government will also find itself in the position of having to apologize for a prosecution that, far from protecting the public’s interests, instead deprived the public of the contributions that Mr. Swartz will now never have a chance to make.

 Posted at 4:53 pm
Jan 12 2013
 

I only noticed it in the program guide by accident… and I even missed the first three minutes. Nonetheless, I had loads of fun last night watching the pilot of a planned new Canadian science-fiction series, Borealis, on Space.

Borealis 24

The premise: a town in the far north, some 30 years in the future, when major powers in the melting Arctic struggle for control over the Earth’s few remaining oil and gas resources.

In other words, a quintessentially Canadian science-fiction story. Yet the atmosphere strongly reminded me of Roadside Picnic, the world-famous novel of the Russian Strugatsky brothers that served as the basis of Tarkovsky’s film Stalker.

I hope it is well received and the pilot I saw last night will be followed by a full-blown series.

 Posted at 12:41 pm
Jan 09 2013
 

A few weeks ago, I exchanged a number of e-mails with someone about the Lanczos tensor and the Weyl-Lanczos equation. One of the things I derived is worth recording here for posterity.

The Lanczos tensor is an interesting animal. It can be thought of as the source of the Weyl curvature tensor, the traceless part of the Riemann curvature tensor. The Weyl tensor, together with the Ricci tensor, fully determines the Riemann tensor, i.e., the intrinsic curvature of a spacetime. Crudely put, whereas the Ricci tensor tells you how the volume of, say, a cloud of dust changes in response to gravity, the Weyl tensor tells you how that cloud of dust is distorted in response to the same gravitational field. (For instance, consider a cloud of dust in empty space falling towards the Earth. In empty space, the Ricci tensor is zero, so the volume of the cloud does not change. But its shape becomes distorted and elongated in response to tidal forces. This is described by the Weyl tensor.)

Because the Ricci tensor vanishes there, the Weyl tensor fully describes gravitational fields in empty space. In a sense, the Weyl tensor is analogous to the electromagnetic field tensor that fully describes electromagnetic fields in empty space. The electromagnetic field tensor is sourced by the four-dimensional electromagnetic vector potential (meaning that the electromagnetic field tensor can be expressed using partial derivatives of the electromagnetic vector potential). The Weyl tensor has a source in exactly the same sense, in the form of the Lanczos tensor.

The electromagnetic field does not uniquely determine the electromagnetic vector potential. This is just the familiar ambiguity of differentiation versus integration. For instance, the derivative of the function \(y=x^2\) is \(y'=2x\). But the inverse operation is not unambiguous: \(\int 2x~dx=x^2+C\), where \(C\) is an arbitrary integration constant. This reflects the fact that the derivative of any function of the form \(y=x^2+C\) is \(y'=2x\) regardless of the value of \(C\); knowing only the derivative \(y'\) does not fully determine the original function \(y\).

In the case of electromagnetism, this freedom to choose the electromagnetic vector potential is referred to as the gauge freedom. The same gauge freedom exists for the Lanczos tensor.
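In the electromagnetic case, this gauge freedom is easy to exhibit explicitly. Given

$$F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu,$$

the replacement \(A_\mu\to A_\mu+\partial_\mu\phi\) leaves \(F_{\mu\nu}\) unchanged for any scalar field \(\phi\), because the \(\partial_\mu\partial_\nu\phi\) terms cancel (partial derivatives commute). The gauge transformation of the Lanczos tensor is analogous in spirit, just with more indices.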

Solutions for the Lanczos tensor for the simplest case of the Schwarzschild metric are provided in Wikipedia. A common characteristic of these solutions is that they yield a quantity that “blows up” at the event horizon. This runs contrary to accepted wisdom, namely that the event horizon is not in any way special; a freely falling space traveler would never know that he is crossing it.

But as it turns out, thanks to the gauge freedom of the Lanczos tensor, it is easy to construct a solution (indeed, an infinite family of solutions) that does not blow up at the horizon.

Well, it was a fun thing to compute anyway.

 Posted at 3:08 pm
Jan 09 2013
 

No, not Deep Purple the British hard rock group but deep purple the color. And pink… on Australian weather maps. These are the new colors to represent the temperature range between +50 and +54 degrees Centigrade.

Deadly deep purple (and pink)

There is another word to describe such temperatures: death. This is not funny anymore. If weather like this becomes more common, parts of our planet will simply become uninhabitable by humans without high-technology life support (such as reliable, redundant air conditioning). In other words, living there will be like visiting an alien planet.

 Posted at 10:06 am
Jan 04 2013
 
Carr, Science 2013; 339:42-43


No, the title of this entry is not a reference to another miserably cold Ottawa winter (it’s not that cold, actually; I’ve seen a lot worse) but to the absolute temperature scale.

Remember back in high school, when you were first taught that nothing can be colder than 0 K? Well… you can’t say that anymore.

There are a variety of ways of formulating thermodynamics. Perhaps the cleanest is axiomatic thermodynamics, in which simple relationships, like the conservation of energy or the existence of irreversible processes, are codified in the form of axioms. One such axiom is often referred to as the Third Law of Thermodynamics; in essence, it postulates that a “ground state” of zero entropy exists, and associates this ground state with the start of the absolute temperature scale.

A little messier is classical statistical physics, in which temperature is a measure of the average kinetic energy per degree of freedom. Still, since kinetic energy cannot be negative, its average cannot be negative either, so it is clear that there exists a lowest possible temperature, at which all classical particles are at rest.

But statistical physics leads to another way of looking at temperature: as a means of calculating probabilities. The probability \(P_i\) of finding a particle in a state \(i\) with kinetic energy \(E_i\) will be proportional to the Boltzmann distribution:

$$P_i\propto e^{-E_i/kT},$$

where \(k\) is Boltzmann’s constant and \(T\) is the temperature.

And here is where things get really interesting. For if it is possible to create an ensemble of particles in which \(P_i\) follows a positive exponential distribution, that clearly implies a negative temperature \(T\).
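A quick numerical illustration (a toy sketch in Python, with made-up energy levels; this has nothing to do with the actual experimental setup):

import numpy as np

E = np.array([0.0, 1.0, 2.0, 3.0])   # toy energy levels, in units of |kT|

def boltzmann(E, kT):
    w = np.exp(-E/kT)
    return w/np.sum(w)               # normalized occupation probabilities

print(boltzmann(E, kT=+1.0))         # positive T: probability falls with energy
print(boltzmann(E, kT=-1.0))         # negative T: probability rises with energy

With \(kT<0\), the exponent flips sign, so the highest-energy states are the most populated: an “inverted” population, which is precisely what a negative temperature means.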

And this is precisely what has been reported in Science this week by Braun et al. (Science 2013; 339:52-55): an experimentally realized state of ultracold bosons with a distribution of kinetic (motional) energy states that follows a positive exponential curve. In other words… matter at temperature below 0 K.

How about that for a bit of 21st century physics.

 Posted at 7:16 am
Dec 20 2012
 

I have received a surprising number of comments on my recent post about the gravitational potential, including a criticism: namely, that what I am saying is nonsense; that it is in fact well known (there is actually a resolution by the International Astronomical Union to this effect) that in the vicinity of the Earth, the gravitational potential is well approximated using the Earth’s multipole plus tidal contributions; and that the potential, therefore, is determined primarily by the Earth itself, with the Sun playing only a minor role, contrary to what I was blabbering about.

But this is precisely the view of gravity that I was arguing against. As they say, a picture is worth a thousand words, so let me try to demonstrate it with pictures, starting with this one:

It is a crude one-dimensional depiction of the Earth’s gravity well (between the two vertical black lines) embedded in the much deeper gravity well (centered) of the Sun. In other words, what I depicted is the sum of two gravitational potentials:

$$U=-\frac{GM}{R}-\frac{Gm}{r}.$$

Let me now zoom into the area marked by the vertical lines for a better view:

It looks like a perfectly ordinary gravitational potential well, except that it is slightly lopsided.

So what if I ignored the Sun’s potential altogether? In other words, what if I considered the potential given by

$$U=-\frac{Gm}{r}+C$$

instead, where \(C\) is just some normalization constant to ensure that I am comparing apples to apples here? This is what I get:

The green curve approximates the red curve fairly well deep inside the potential well but fails further out.

But wait a cotton-picking minute. When I say “approximate”, what does that tell you? Why, we approximate curves with Taylor series, don’t we, at least when we can. The Sun’s gravitational potential, \(-GM/R\), in the vicinity of the Earth located at \(R=R_0\), is given by the approximation

$$-\frac{GM}{R}=-\frac{GM}{R_0}+\frac{GM}{R_0^2}(R-R_0)-\frac{GM}{R_0^3}(R-R_0)^2+{\cal O}\left(\frac{GM}{R_0^4}[R-R_0]^3\right).$$

And in this oversimplified one-dimensional case, \(r=R-R_0\) so I might as well write

$$-\frac{GM}{R}=-\frac{GM}{R_0}+\frac{GM}{R_0^2}r-\frac{GM}{R_0^3}r^2+{\cal O}\left(\frac{GM}{R_0^4}r^3\right).$$

(In the three-dimensional case, the math gets messier but the principle remains the same.)

So when I used a constant previously, its value would have been \(C=-GM/R_0\), and this would be just the zeroth order term in the Taylor series expansion of the Sun’s potential. What if I include more terms and write:

$$U\simeq-\frac{Gm}{r}-\frac{GM}{R_0}+\frac{GM}{R_0^2}r-\frac{GM}{R_0^3}r^2?$$

When I plot this, here is what I get:

The blue curve now does a much better job approximating the red one. (Incidentally, note that if I differentiate by \(r\) to obtain the acceleration, I get: \(a=-dU/dr=-Gm/r^2-GM/R_0^2+2GMr/R_0^3\), which is the sum of the terrestrial acceleration, the solar acceleration that determines the Earth’s orbit around the Sun, and the usual tidal term. So this is another way to derive the tidal term. But, I digress.)
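That differentiation is easy to double-check symbolically; here is a minimal sketch, just a sanity check in Python with sympy:

import sympy as sp

G, M, m, r, R0 = sp.symbols('G M m r R_0', positive=True)

# Second-order approximation of the total potential near the Earth:
U = -G*m/r - G*M/R0 + G*M*r/R0**2 - G*M*r**2/R0**3

a = -sp.diff(U, r)    # acceleration, a = -dU/dr
print(sp.expand(a))   # -G*m/r**2 - G*M/R_0**2 + 2*G*M*r/R_0**3

The printed result is indeed the sum of the terrestrial, solar, and tidal terms.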

The improvement can also be seen if I plot the relative error of the green vs. blue curves:

So far so good. But the blue curve still fails miserably further outside. Let me zoom back out to the scale of the original plot:

Oops.

So while it is true that in the vicinity of the Earth, the tidal potential is a useful approximation, it is not the “real thing”. And when we perform a physical experiment that involves, e.g., a distant spacecraft or astronomical objects, the tidal potential must not be used. Such experiments, for instance tests measuring gravitational time dilation or the gravitational frequency shift of an electromagnetic signal, are readily realizable nowadays with precision equipment.

But it just occurred to me that even at the purely Newtonian level, the value of the potential \(U\) plays an observable role: it determines the escape velocity. A projectile escapes to infinity if its overall energy (kinetic plus potential) is greater than zero: \(mv^2/2 + mU>0\). In other words, the escape velocity \(v\) is determined by the formula

$$v>\sqrt{-2U}.$$

The escape velocity works both ways; it also tells you the velocity to which an object accelerates as it falls from infinity. So suppose you let loose a rock somewhere in deep space, far from the Sun, and it falls towards the Earth. Its velocity at impact will be 43.6 km/s… without the Sun’s influence, its impact velocity would have been only 11.2 km/s.
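These numbers are easy to reproduce; a quick sketch in Python, using standard values for the masses and distances:

import math

G = 6.674e-11                      # m^3 kg^-1 s^-2
M_e, R_e = 5.972e24, 6.371e6       # Earth: mass (kg), radius (m)
M_s, R_s = 1.989e30, 1.496e11      # Sun: mass (kg), Sun-Earth distance (m)

U_earth = -G*M_e/R_e               # Earth's potential at its own surface
U_total = U_earth - G*M_s/R_s      # ...plus the Sun's potential there

print(math.sqrt(-2*U_earth)/1e3)   # ~11.2 km/s: Earth alone
print(math.sqrt(-2*U_total)/1e3)   # ~43.6 km/s: Earth + Sun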

So using somewhat more poetic language, the relationship of us, surface dwellers, to distant parts of the universe, is determined primarily not by the gravity of the planet on which we stand, but by the gravitational field of our Sun… or maybe our galaxy… or maybe the supercluster of which our galaxy is a member.

As I said in my preceding post… gravity is weird.


The following gnuplot code, which I am recording here for posterity, was used to produce the plots in this post:

# Units: the Sun sits at x=0 with GM=1; the Earth at x=1 with Gm=0.1 (so R0=1).
set terminal gif size 320,240
unset border
unset xtics
unset ytics
set xrange [-5:5]
set yrange [-5:0]
# pot0: full Sun+Earth potential; vertical lines mark the Earth's neighborhood
set output 'pot0.gif'
set arrow from 0.5,-5 to 0.5,0 nohead lc rgb 'black' lw 0.1
set arrow from 1.5,-5 to 1.5,0 nohead lc rgb 'black' lw 0.1
plot -1/abs(x)-0.1/abs(x-1) lw 3 notitle
unset arrow
# pot1: zoom into the Earth's potential well
set xrange [0.5:1.5]
set output 'pot1.gif'
plot -1/abs(x)-0.1/abs(x-1) lw 3 notitle
# pot2: add the Earth-only potential shifted by the constant C=-GM/R0=-1
set output 'pot2.gif'
plot -1/abs(x)-0.1/abs(x-1) lw 3 notitle,-0.1/abs(x-1)-1 lw 3 notitle
# pot3: add the second-order Taylor approximation of the Sun's potential
set output 'pot3.gif'
plot -1/abs(x)-0.1/abs(x-1) lw 3 notitle,-0.1/abs(x-1)-1 lw 3 notitle,-0.1/abs(x-1)-1+(x-1)-(x-1)**2 lw 3 notitle
# pot4: zoom back out to the full view
set xrange [-5:5]
set output 'pot4.gif'
set arrow from 0.5,-5 to 0.5,0 nohead lc rgb 'black' lw 0.1
set arrow from 1.5,-5 to 1.5,0 nohead lc rgb 'black' lw 0.1
replot
unset arrow
# potdiff: relative error of the constant (green) and Taylor (blue) approximations
set output 'potdiff.gif'
set xrange [0.5:1.5]
set yrange [*:*]
plot 0 notitle,\
((-1/abs(x)-0.1/abs(x-1))-(-0.1/abs(x-1)-1))/(-1/abs(x)-0.1/abs(x-1)) lw 3 notitle,\
((-1/abs(x)-0.1/abs(x-1))-(-0.1/abs(x-1)-1+(x-1)-(x-1)**2))/(-1/abs(x)-0.1/abs(x-1)) lw 3 notitle
 Posted at 3:04 pm
Dec 18 2012
 

There is something curious about gravity in general relativity. Specifically, the gravitational potential.

In high school, we were taught about this mysterious thing called “potential energy” or “gravitational potential”, but we were always assured that it’s really just the difference between potentials that matters. For instance, when you drop a stone from a tall tower, its final velocity (ignoring air resistance) is determined by the difference in gravitational potential energy at the top and at the bottom of the tower. If you study more sophisticated physics, you eventually learn that it’s not the gravitational potential, only its gradient that has physically observable meaning.

Things are different in general relativity. The geometry of spacetime, in particular the metric and its components are determined by the gravitational potential itself, not its gradient. In particular, we have

$$
g_{00} = 1 - \frac{2GM}{c^2r}
$$

in the infamous Schwarzschild metric, where \(g\) is the metric tensor, \(G\) is the universal gravitational constant, \(c\) is the speed of light, \(M\) is the mass of the gravitating object, and \(r\) is the distance from it. Writing \(U=GM/r\) for the magnitude of the Newtonian gravitational potential, this means

$$
g_{00} = 1 - \frac{2}{c^2}U.
$$

This quantity has physical significance. For instance, the angle by which light is deflected when it passes near a star is given by \(4c^{-2}U\); for a ray grazing the surface of the Sun, this works out to the famous 1.75 arc seconds.

So then, what is the value of \(U\) here on the surface of the Earth? Why, it’s easy. The mass of the Earth is \(M_E=6\times 10^{24}\) kg, its radius is roughly \(R_E=6.37\times 10^6\) m, so

$$
\frac{1}{c^2}U_E=\frac{GM_E}{c^2R_E} \simeq 7\times 10^{-10}.
$$

You could be forgiven for thinking that this is the right answer, but it really isn’t. For let’s just calculate the gravitational potential of the Sun as felt here on the Earth. Yes, I know, the Sun is quite a distance away and all, but play along, will you.

The mass of the Sun is \(M_\odot=2\times 10^{30}\) kg, its distance from the Earth is \(R_\odot=1.5\times 10^{11}\) m. So for the Sun,

$$
\frac{1}{c^2}U_\odot=\frac{GM_\odot}{c^2R_\odot} \simeq 10^{-8}.
$$

Whoops! This is more than an order of magnitude bigger than the Earth’s own gravitational potential! So right here, on the surface of the Earth, \(U\) is dominated by the Sun!

Or is it? Let’s just quickly check what the gravitational potential of the Milky Way is here on the Earth. The Sun is zipping around the center in what we believe is a roughly circular orbit, at a speed of 250 km/s. We know that for a circular orbit, the velocity is \(v_\star=\sqrt{GM_\star/R_\star}=\sqrt{U_\star}\), so

$$
\frac{1}{c^2}U_\star = \frac{v_\star^2}{c^2} \simeq 7\times 10^{-7}.
$$

This is almost two orders of magnitude bigger than the gravitational potential due to the Sun! So here, on the surface of the Earth, the gravitational potential is dominated by the large concentration of mass near the center of the Milky Way, some 8 kiloparsecs (25 thousand light years) from here. Wow!

But wait a minute, is this the end? There is the Local Supercluster of galaxies of which the Milky Way is part. Its mass \(M_V\) is believed to be about \(10^{15}\) times the mass of the Sun, and it is believed to be centered near the Virgo cluster, about 65 million light years or about \(R_V=6.5\times 10^{23}\) meters away. So (this is necessarily a crude estimate, but it will serve as an order-of-magnitude value):

$$
\frac{1}{c^2}U_V=\frac{GM_V}{c^2R_V} \simeq 2.3\times 10^{-6}.
$$

This value for the gravitational potential right here on the Earth’s surface, astonishingly, is more than 3,000 times the gravitational potential due to the Earth’s own mass. Is this the end? Or would more distant objects exert an even greater influence on the gravitational field here on the Earth? The answer is the latter. That is because as we look further into the distant universe, the amount of mass we see in a thin shell at a given distance goes up by the square of the distance, while the gravitational influence of each unit of mass goes down by only the first power of the distance. So if you look at 10 times the distance, you will see 100 times as much matter; the gravitational influence of each unit of matter decreases by a factor of 10, but overall, with a hundred times as much mass, the total gravitational influence still goes up tenfold.
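For what it’s worth, these estimates are easy to tabulate; a quick Python sketch using the same round numbers as above:

G, c = 6.674e-11, 2.998e8          # SI units

# (name, mass in kg, distance in m), using the round values quoted above
sources = [
    ('Earth (surface)',    6.0e24, 6.37e6),
    ('Sun (at 1 AU)',      2.0e30, 1.5e11),
    ('Virgo supercluster', 2.0e45, 6.5e23),
]
for name, mass, dist in sources:
    print('%-20s U/c^2 = %.1e' % (name, G*mass/(c**2*dist)))

# The Milky Way, estimated from the Sun's orbital speed instead:
v = 2.5e5                          # m/s
print('%-20s U/c^2 = %.1e' % ('Milky Way', (v/c)**2))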

So the local gravitational field is dominated by the most distant matter in the universe.

And by local gravitational field, I of course mean the local metric, which in turn determines how light is deflected, how clocks slow down, how the wavelength of photons shifts.

Insanely, we may not even know how fast a “true” clock in our universe runs, one that is free of gravitational influences, because we don’t know the actual magnitude of the sum total of all gravitational influences here on the Earth.

Gravity is weird.

 Posted at 4:34 pm
Dec 18 2012
 

I came across this a while back: one of the most astonishing places on Earth, near the village of Derweze (also spelled Darvaza) in Turkmenistan.

Situated in an already lunar-looking landscape in the Karakum desert, there is a crater here that is unlike anything on the real Moon: a crater full of fire. The ground collapsed in 1971 after Soviet geologists, drilling for oil, found natural gas instead. The gas was ignited in the hope that it would safely burn off in a matter of days… it has been burning ever since.

 Posted at 8:13 am
Dec 14 2012
 

In just a few minutes, it will be exactly 40 years since the crew of Apollo 17 took off from the Moon, ending humanity’s last excursion to date on our satellite.

Incredibly, no human has ventured beyond low Earth orbit since.

 Posted at 5:26 pm
Dec 13 2012
 

Imagine a world with weather. Hydrocarbon rains falling from an orange sky onto a deadly cold surface with chunks of ice as hard and as dry as rock; or onto vast hydrocarbon seas driven by freezing winds.

Meanwhile, through the orange haze overhead, you may glimpse a giant orb, filling half the sky, and surrounded by an even more magnificent flat ring.

This world exists. It’s Saturn’s moon Titan, the only body in the solar system other than the Earth with a stable liquid on its surface and genuine weather with precipitation and a “hydrological” cycle.

And now we know for sure that Titan has real rivers. Dubbed a “Mini Nile” on NASA’s Web site, this 400 km long hydrocarbon river is the largest seen to date, and it appears to be filled with liquid along its entire length.

I truly envy those humans who, hopefully on a not too distant day in the future, will stand on the banks of this river, perhaps not even wearing a pressure suit, just heated clothing and a breathing mask, and stare at this river in awe.

What will they find in the liquid? Is it harboring some primitive form of life?

 Posted at 10:39 am
Dec 08 2012
 

One of my favorite photographs ever, in fact one that I even use on my Facebook timeline page as a background image, was taken by a certain Bill Anders when he was flying almost 400,000 km from the Earth. Anders was one of the first three members of our species to fly to another celestial body (albeit without landing on its surface; that came a bit later).

Yesterday, I read a very interesting article about Anders, both his trip on board Apollo 8 and his life afterwards. The article also touched upon the topic of religion.

The message radioed back by the crew of Apollo 8 is probably the most memorable Christmas message ever uttered by humans. (Or maybe I am biased.) And yes, it starts with the words from Genesis, but I always viewed it the way it was presumably intended: as an expression of awe, not as religious propaganda.

The curious thing, as mentioned in the article, is that it was this trip around the Moon that changed Anders’s traditional Christian view of Earthlings as created by God in his own image.

“When I looked back and saw that tiny Earth, it snapped my world view,” Anders is quoted as saying. “Are we really that special? I don’t think so.”

Well, this pretty much sums up why I am an atheist. I’d like to believe that it’s not hubris; it’s humility.

 Posted at 10:55 am
Dec 03 2012
 

Update (September 6, 2013): The analysis in this blog entry is invalid. See my September 6, 2013 blog entry on this topic for an explanation and update.

It has been a while since I last wrote about a pure physics topic in this blog.

A big open question these days is whether or not the particle purportedly discovered by the Large Hadron Collider is indeed the Higgs boson.

One thing about the Higgs boson is that it is a spin-0 scalar particle: this means, essentially, that the Higgs is identical to its mirror image. This distinguishes the Higgs from pseudoscalar particles that “flip” when viewed in a mirror.

So then, one way to distinguish the Higgs from other possibilities, including so-called pseudoscalar resonances, is by establishing that the observed particle indeed behaves either like a scalar or like a pseudoscalar.

Easier said than done. The differences in behavior are subtle. But it can be done, by measuring the angular distribution of decay products. And this analysis was indeed performed using the presently available data collected by the LHC.

Without further ado, here is one view of the data, taken from a November 14, 2012 presentation by Alexey Drozdetskiy:

The solid red line corresponds to a scalar particle (denoted by 0+); the dotted red line to a pseudoscalar (0−). The data points represent the number of events. The horizontal axis represents a “Matrix Element Likelihood Analysis” value, which is constructed using a formula similar to this one (see arXiv:1208.4018 by Bolognesi et al.):

$${\cal D}_{\rm bkg}=\left[1+\frac{{\cal P}_{\rm bkg}(m_{4\ell};m_1,m_2,\Omega)}{{\cal P}_{\rm sig}(m_{4\ell};m_1,m_2,\Omega)}\right]^{-1},$$

where the \({\cal P}\)-s represent probabilities associated with the background and the signal.
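In code, the discriminant itself is a one-liner (a sketch only; the probability densities themselves are detector-specific and not reproduced here):

def d_bkg(p_sig, p_bkg):
    # MELA-style discriminant: tends to 1 for signal-like events
    # and to 0 for background-like events.
    return 1.0/(1.0 + p_bkg/p_sig)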

So far so good. The data are obviously noisy. And there are not that many data points: only 10, representing 16 events (give or take, as the vertical error bars are quite significant).

There is another way to visualize these values: namely by plotting them against the relative likelihood that the observed particle is 0+ or 0−:

In this fine plot, the two Gaussian curves correspond to Monte-Carlo simulations of the scalar and pseudoscalar scenarios. The position of the green arrow is somehow representative of the 10 data points shown in the preceding plot. The horizontal axis in this case is the logarithm of a likelihood ratio.

On the surface of it, this seems to indicate that the observed particle is indeed a scalar, just like the Higgs. So far so good, but what bothers me is that this second plot does not indicate uncertainties in the data. Yet, judging by the sizable vertical error bars in the first plot, the uncertainties are significant.

However, to carry over the uncertainties from the first plot, one has to be able to relate the likelihood ratio on this plot to the MELA value on the preceding one. Such a relationship indeed exists, given by the formula

$${\cal L}_k=\exp(-n_{\rm sig}-n_{\rm bkg})\prod_i\left(n_{\rm sig}\times{\cal P}^k_{\rm sig}(x_i;\alpha;\beta)+n_{\rm bkg}\times{\cal P}_{\rm bkg}(x_i;\beta)\right).$$

The problem with this formula, from my naive perspective, is that in order to replicate it, I would need to know not only the number of candidate signal events but also the number of background events, and also the associated probability distributions and values for \(\alpha\) and \(\beta\). I just don’t have all the information necessary to reconstruct this relationship numerically.

But perhaps I don’t have to. There is a rather naive thing one can do: simply calculate the weighted average of the data points in the first plot. When I do this, I get a value of 0.57. Lo and behold, it has roughly the same relationship to the solid red Gaussian in that plot as the green arrow has to the 0+ Gaussian in the second.
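For the record, this is the sort of naive estimate I mean: an inverse-variance weighted mean. A Python sketch with placeholder numbers, since I am not reproducing the actual ten data points here:

import numpy as np

# Placeholder (value, error) pairs standing in for the ten MELA data points;
# the real values would have to be read off the first plot.
x   = np.array([0.2, 0.35, 0.45, 0.50, 0.55, 0.60, 0.65, 0.70, 0.85, 0.90])
err = np.array([0.3, 0.25, 0.20, 0.20, 0.20, 0.20, 0.25, 0.25, 0.30, 0.30])

w    = 1.0/err**2                 # inverse-variance weights
mean = np.sum(w*x)/np.sum(w)      # weighted average
sig  = 1.0/np.sqrt(np.sum(w))     # 1-sigma error on the weighted mean

print(mean, sig)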

Going by the assumption that my naive shortcut actually works reasonably well, I can take the next step. I can calculate a \(1\sigma\) error on the weighted average, which yields \(0.57^{+0.24}_{-0.23}\). When I (admittedly very crudely) try to transcribe this uncertainty to the second plot, I get something like this:

Yes, the error is this significant. So while the position of the green arrow is in tantalizing agreement with what one would expect from a Higgs particle, the error bar says that we cannot draw any definitive conclusions just yet.

But wait, it gets even weirder. Going back to the first plot, notice the two data points on the right. What if these are outliers? If I remove them from the analysis, I get something completely different: namely, the value of \(0.43^{+0.26}_{-0.21}\). Which is this:

So without the outliers, the data actually favor the pseudoscalar scenario!

I have to emphasize: what I did here is rather naive. The weighted average may not accurately represent the position of the green arrow at all. The coincidence in position could be a complete accident. In which case the horizontal error bar yielded by my analysis is completely bogus as well.

I also attempted to check how much more data would be needed to reduce the size of these error bars sufficiently for a true \(1\sigma\) result: about 2-4 times the number of events collected to date. So perhaps what I did is not complete nonsense after all, because this is what knowledgeable people are saying: once the LHC has collected at least twice the amount of data it already has, we may know with reasonable certainty whether the observed particle is a scalar or a pseudoscalar.

Until then, I hope I did not make a complete fool of myself with this naive analysis. Still, this is what blogs are for; I am allowed to say foolish things here.

 Posted at 10:31 pm