Apr 02, 2018

The recent discovery of NGC1052-DF2, a galaxy with little or no dark matter, made headlines worldwide.

Nature 555, 629–632 (29 March 2018)

Somewhat paradoxically, it has been proclaimed by some as evidence that the dark matter paradigm prevails over theories of modified gravity. And, as usual, many of the arguments were framed in the context of dark matter vs. MOND, as if MOND were a suitable representative of all modified gravity theories. One example is a recent Quora question, “Can we say now that all MOND theories is proven false, and there is really dark matter after all?” I offered the following in response:

First of all, allow me to challenge the way the question is phrased: “all MOND theories”… Please don’t.

MOND (MOdified Newtonian Dynamics) is not a theory. It is an ad hoc, phenomenological replacement of the Newtonian acceleration law with a simplistic formula that violates even basic conservation laws. The formula fits spiral galaxy rotation curves reasonably well, consistent with the empirical Tully-Fisher law that relates galaxy masses and rotational velocities, but it fails for just about everything else, including low-density globular clusters, dwarf galaxies, clusters of galaxies, not to mention cosmological observations.

MOND was given a reprieve in the form of Jacob Bekenstein’s TeVeS (Tensor-Vector-Scalar gravity), an impressive theoretical exercise to create a proper classical field theory that reproduces the MOND acceleration law in the weak-field, low-velocity limit. However, TeVeS suffers from the same issues MOND does when confronted with data beyond galaxy rotation curves. Moreover, the recent gravitational wave event GW170817 was accompanied by the gamma-ray burst GRB 170817A from the same astrophysical event, demonstrating that the propagation speeds of gravitational and electromagnetic waves are essentially identical; this puts all bimetric theories (of which TeVeS is an example) in jeopardy.

But that’s okay. News reports suggesting the death of modified gravity are somewhat premature. While MOND has often been used as a straw man by opponents of modified gravity, there are plenty of alternatives, many of them much better equipped than MOND to deal with diverse astrophysical phenomena. For instance, f(R) gravity, entropic gravity, Horava-Lifshitz gravity, galileon theory, DGP (Dvali-Gabadadze-Porrati) gravity… The list goes on and on. And yes, it also includes John Moffat’s STVG (Scalar-Tensor-Vector Gravity, not to be confused with TeVeS; the two are very different animals) theory, better known as MOG, a theory to which I also contributed.

As to NGC1052-DF2, for MOG that’s actually an easy one. When you plug in the values for the MOG approximate solution that we first published about a decade ago, you get an effective dynamical mass that is less than twice the visible (baryonic) mass of this galaxy, which is entirely consistent with its observed velocity dispersion.

In fact, I’d go so far as to boldly suggest that NGC1052-DF2 is a bigger challenge for the dark matter paradigm than it is for some theories of modified gravity (MOG included). Why? Because there is no known mechanism that would separate dark matter from stellar mass.

Compare this to the infamous Bullet Cluster: a pair of galaxy clusters that have undergone a collision. According to the explanation offered within the context of the dark matter paradigm (NB: Moffat and Brownstein showed, over a decade ago, that the Bullet Cluster can also be explained without dark matter, using MOG), their dark matter halos just flew through each other, interacting only gravitationally. So did the stars: stars are so tiny compared to the distances between them that stellar collisions are extremely unlikely, so stars, too, behave like a pressureless medium, just like dark matter. The interstellar and intergalactic clouds of gas, however, did collide, heating up to millions of degrees (producing bright X-rays) and losing much of their momentum. So you end up with a cloud of gas (but few stars and little dark matter) in the middle, and dark matter plus stars (but little gas) on the two sides. The separation works precisely because stars and dark matter behave like a pressureless medium, whereas gas does not.

But in the case of NGC1052-DF2, some mechanism must have separated stars from dark matter, so we end up with a galaxy (one that actually looks nice, with no signs of recent disruption). I do not believe that there is currently a generally accepted, viable candidate mechanism that could accomplish this.

 Posted by at 8:43 am
Mar 14, 2018

Stephen Hawking died earlier today.

Hawking was diagnosed with ALS in the year I was born, in 1963.

Defying his doctor’s predictions, he refused to die after a few years. Instead, he carried on for another astonishing 55 years, living a full life.

Public perception notwithstanding, he might not have been the greatest living physicist, but he was certainly a great physicist. The fact that he was able to accomplish so much despite his debilitating illness made him an extraordinary human being, a true inspiration.

Here is a short segment, courtesy of CTV Kitchener, filmed earlier today at the Perimeter Institute. My friend and colleague John Moffat, who met Hawking many times, is among those being interviewed:

 Posted by at 9:17 pm
Mar 10, 2018

There is a very interesting concept in the works at NASA, to which I had a chance to contribute a bit: the Solar Gravitational Telescope.

The idea, explained in this brand new NASA video, is to use the bending of light by the Sun to form an image of distant objects.

The resolving power of such a telescope would be phenomenal. In principle, it is possible to use it to form a megapixel-resolution image of an exoplanet as far as 100 light years from the Earth.

The technical difficulties are, however, challenging. For starters, a probe would need to be placed at least 550 astronomical units (about four times the distance to Voyager 1) from the Sun, precisely located on the side of the Sun exactly opposite the exoplanet. The probe would then have to mimic the combined motion of our Sun (dragged about by the gravitational pull of the planets in the solar system) and the exoplanet (orbiting its own sun). Light from the Sun would need to be carefully blocked to ensure that we capture light from the exoplanet with as little noise as possible. And each time the probe takes a picture of the ring of light (the Einstein ring) around the Sun, that picture will record the combined light of many adjacent pixels on the exoplanet. The probe will have to traverse a region that is roughly a kilometer across, taking pictures one pixel at a time, and the result will need to be deconvolved. The fact that the exoplanet itself is not constant in appearance (it goes through phases of illumination, it may have changing cloud cover, perhaps even changes in vegetation) further complicates matters. Still… it can be done, and it can be accomplished using technology we already have.

By its very nature, it would be a very long duration mission. If such a probe were launched today, it would take 25-30 years for it to reach the place where light rays passing on both sides of the Sun first meet and thus the focal line begins. It will probably take another few years to collect enough data for successful deconvolution and image reconstruction. Where will I be 30-35 years from now? An old man (or a dead man). And of course no probe will be launched today; even under optimal circumstances, I’d say we’re at least a decade away from launch. In other words, I have no chance of seeing that high-resolution exoplanet image unless I live to see (at least) my 100th birthday.
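As a rough sanity check on that 25-30 year figure (a back-of-envelope estimate of mine, not a number from any mission study), covering 550 AU in that time demands an average speed far beyond anything we have flown so far:

```python
AU = 1.496e11      # astronomical unit, m
YEAR = 3.156e7     # Julian year, s

distance = 550 * AU
for t in (25, 30):
    v_avg = distance / (t * YEAR)              # required average speed
    print(f"{t} yr -> {v_avg / 1e3:.0f} km/s")
```

This works out to roughly 85-105 km/s, compared to Voyager 1’s roughly 17 km/s; hence the need for advanced propulsion, and hence the decades-long timeline even in the best case.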

Still, it is fun to dream, and fun to participate in such things. Though now I better pay attention to other things as well, including things that, well, help my bank account, because this sure as heck doesn’t.

 Posted by at 12:59 pm
Jan 30, 2018

I was surprised by the number of people who found my little exercise about kinetic energy interesting.

However, I was disappointed by the fact that only one person (an astrophysicist by trade) got it right.

It really isn’t a very difficult problem! You just have to remember that in addition to energy, momentum is also conserved.

In other words, when a train accelerates, it is pushing against something… the Earth, that is. So ever so slightly, the Earth accelerates backwards. The change in velocity may be tiny, but the change in energy is not necessarily so. It all depends on your reference frame.

So let’s do the math, starting with a train of mass \(m\) that accelerates from \(v_1\) to \(v_2\). (Yes, I am doing the math formally; we can plug in the actual numbers in the end.)

Momentum is, of course, mass times velocity. Momentum conservation means that the Earth’s speed will change as

\[\Delta v = -\frac{m}{M}(v_2-v_1),\]

where \(M\) is the Earth’s mass. If the initial speed of the Earth is \(v_0\), the change in its kinetic energy will be given by

\[\frac{1}{2}M\left[(v_0+\Delta v)^2-v_0^2\right]=\frac{1}{2}M(2v_0\Delta v+\Delta v^2).\]

If \(v_0=0\), this becomes

\[\frac{1}{2}M\Delta v^2=\frac{m^2}{2M}(v_2-v_1)^2,\]

which is very tiny if \(m\ll M\). However, if \(|v_0|>0\) and comparable in magnitude to \(v_2-v_1\) (or at least, \(|v_0|\gg|\Delta v|\)), we get

\[\frac{1}{2}M(2v_0\Delta v+\Delta v^2)=-mv_0(v_2-v_1)+\frac{m^2}{2M}(v_2-v_1)^2\simeq -mv_0(v_2-v_1).\]

Note that the actual mass of the Earth doesn’t even matter; we just used the fact that it’s much larger than the mass of the train.

So let’s plug in the numbers from the exercise: \(m=10000~{\rm kg}\), \(v_0=-10~{\rm m}/{\rm s}\) (negative, because relative to the moving train, the Earth is moving backwards), \(v_2-v_1=10~{\rm m}/{\rm s}\), thus \(-mv_0(v_2-v_1)=1000~{\rm kJ}\).

So the missing energy is found as the change in the Earth’s kinetic energy in the reference frame of the second moving train.

Note that in the reference frame of someone standing on the Earth, the change in the Earth’s kinetic energy is imperceptibly tiny; all the \(1500~{\rm kJ}\) go into accelerating the train. But in the reference frame of the observer moving on the second train on the parallel tracks, only \(500~{\rm kJ}\) goes into the kinetic energy of the first train, whereas \(1000~{\rm kJ}\) is added to the Earth’s kinetic energy. But in both cases, the total change in kinetic energy, \(1500~{\rm kJ}\), is the same and consistent with the readings of the electricity power meter.

Then again… maybe the symbolic calculation is too abstract. We could have done it with numbers all along. When a \(10000~{\rm kg}\) train’s speed goes from \(10~{\rm m}/{\rm s}\) to \(20~{\rm m}/{\rm s}\), it means that the \(6\times 10^{24}~{\rm kg}\) Earth’s speed (in the opposite direction) will change by \(10000\times 10/(6\times 10^{24})=1.67\times 10^{-20}~{\rm m}/{\rm s}\).

In the reference frame in which the Earth is at rest, the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (1.67\times 10^{-20})^2=8.33\times 10^{-16}~{\rm J}\).

However, in the reference frame in which the Earth is already moving at \(10~{\rm m}/{\rm s}\), the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (10+1.67\times 10^{-20})^2-\tfrac{1}{2}\times (6\times 10^{24})\times 10^2\)\({}=\tfrac{1}{2}\times (6\times 10^{24})\times[2\times 10\times 1.67\times 10^{-20}+(1.67\times 10^{-20})^2] \)\({}\simeq 1000~{\rm kJ}\).
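The bookkeeping above is easy to check numerically. The only subtlety is that \((v_0+\Delta v)^2-v_0^2\) must be evaluated in its expanded form \(2v_0\Delta v+\Delta v^2\): with \(\Delta v\sim 10^{-20}~{\rm m/s}\), the recoil would simply vanish next to \(v_0\) in double-precision arithmetic. A quick sketch:

```python
# Check the energy bookkeeping of the accelerating train in two reference
# frames: the ground frame and the frame of train B (moving at 10 m/s).
m = 1.0e4             # train mass, kg
M = 6.0e24            # Earth mass, kg
v1, v2 = 10.0, 20.0   # train speeds in the ground frame, m/s

dv_earth = -m * (v2 - v1) / M   # Earth's recoil, from momentum conservation

def delta_ke(mass, v0, dv):
    # (1/2) m [(v0 + dv)^2 - v0^2], kept in expanded form: dv ~ 1e-20 m/s
    # would be lost entirely if added directly to v0 in double precision
    return 0.5 * mass * (2.0 * v0 * dv + dv * dv)

for frame_v in (0.0, 10.0):
    dK_train = delta_ke(m, v1 - frame_v, v2 - v1)
    dK_earth = delta_ke(M, -frame_v, dv_earth)
    print(frame_v, dK_train, dK_earth, dK_train + dK_earth)
```

In both frames the total comes out as 1500 kJ; in the frame of train B, 1000 kJ of it goes into the Earth's kinetic energy, just as derived above.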

 Posted by at 12:29 am
Jan 27, 2018

Enough blogging about personal stuff like our cats. Here is a neat little physics puzzle instead.

Solving this question requires nothing more than elementary high school physics (assuming you were taught physics in high school; if not, shame on the educational system where you grew up). No tricks, no gimmicks, no relativity theory, no quantum mechanics, just a straightforward application of what you were taught about Newtonian physics.

We have two parallel rail tracks. There is no friction, no air resistance, no dissipative forces.

On the first track, let’s call it A, there is a train. It weighs 10,000 kilograms. It is accelerated by an electric motor from 0 to 10 meters per second. Its kinetic energy, when it is moving at \(v=10~{\rm m/s}\), is of course \(K=\tfrac{1}{2}mv^2=500~{\rm kJ}\).

Next, we accelerate it from 10 to 20 meters per second. At \(v=20~{\rm m/s}\), its kinetic energy is \(K=2000~{\rm kJ}\), so an additional \(1500~{\rm kJ}\) was required to achieve this change in speed.

All this is dutifully recorded by a power meter that measures the train’s electricity consumption. So far, so good.

But now let’s look at the B track, where there is a train moving at the constant speed of \(10~{\rm m/s}\). When the A train is moving at the same speed, the two trains are motionless relative to each other; from B’s perspective, the kinetic energy of A is zero. And when A accelerates to \(20~{\rm m/s}\) relative to the ground, its speed relative to B will be \(10~{\rm m/s}\); so from B’s perspective, the change in kinetic energy is \(500~{\rm kJ}\).

But the power meter is not lying. It shows that the A train used \(1500~{\rm kJ}\) of electrical energy.

Question: Where did the missing \(1000~{\rm kJ}\) go?

First one with the correct answer gets a virtual cookie.

 Posted by at 9:54 am
Oct 16, 2017

Today, a “multi-messenger” observation of a gravitational wave event was announced.

This is a big freaking deal. This is a Really Big Freaking Deal. For the very first time, ever, we observed an event, the merger of two neutron stars, simultaneously using both gravitational waves and electromagnetic waves, the latter including light, radio waves, UV, X-rays, gamma rays.

From http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9

The significance of this observation must not be underestimated. For the first time, we have direct validation of a LIGO gravitational wave observation. It demonstrates that our interpretation of LIGO data is indeed correct, as is our understanding of neutron star mergers, one of the most important astrophysical processes: it is among the sources of isotopes heavier than iron in the universe.

Think about it… every time you hold, say, a piece of gold in your hands, you are holding something that was forged in an astrophysical event like this one billions of years ago.

 Posted by at 2:33 pm
Sep 27, 2017

So here it is: another gravitational wave event detection by the LIGO observatories. But this time, there is a twist: a third detector, the less sensitive European VIRGO observatory, also saw this event.

This is amazing. Among other things, having three observatories see the same event is sufficient to triangulate the sky position of the event with much greater precision than before. With additional detectors coming online in the future, the era of gravitational wave astronomy has truly arrived.

 Posted by at 2:42 pm
Jul 27, 2017

There is a brand new video on YouTube today, explaining the concept of the Solar Gravitational Telescope:

It really is very well done. Based in part on our paper with Slava Turyshev, it coherently explains how this concept would work and what the challenges are. Thank you, Jimiticus.

But the biggest challenge… this would be truly a generational effort. I am 54 this year. Assuming the project is greenlighted today and the spacecraft is ready for launch in ten years’ time… the earliest for useful data to be collected would be more than 40 years from now, when, unless I am exceptionally lucky with my health, I am either long dead already, or senile in my mid-90s.

 Posted by at 11:27 pm
Jul 13, 2017

Slava Turyshev and I just published a paper in Physical Review. It is a lengthy, quite technical paper about the wave-theoretical treatment of the solar gravitational telescope.

What, you say?

Well, simple: using the Sun as a gravitational telescope to image distant objects. Like other stars, the Sun bends light, too. Measuring this bending of light was, in fact, the crucial test carried out by Eddington during the 1919 solar eclipse, validating the predictions of general relativity and elevating Albert Einstein to the status of international science superstar.

The gravitational bending of light is very weak. Two rays, passing on opposite sides of the Sun, are bent very little. So little, in fact, that it takes some 550 astronomical units (AU; one AU being the average distance between the Earth and the Sun) for the two rays to meet. But where they do, interesting things happen.

If you were floating in space at that distance, and there was a distant planet on the exact opposite side of the Sun, light from a relatively small section of that planet would form a so-called Einstein ring around the Sun. The light amplification would be tremendous; a factor of tens of billions, if not more.
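Both numbers are easy to reproduce. A ray grazing the Sun at impact parameter \(b\) is deflected by \(\theta=4GM/(c^2b)\), so rays grazing the solar limb converge at \(F=b/\theta=c^2b^2/4GM\). For the amplification I use the on-axis gain \(\mu_0\simeq 4\pi^2 r_g/\lambda\) (with \(r_g=2GM/c^2\)) from the wave-optics treatment; treat this expression as my paraphrase of that result, not a quote from the paper:

```python
import math

GM_SUN = 1.327e20   # solar gravitational parameter, m^3/s^2
R_SUN  = 6.96e8     # solar radius, m
C      = 2.998e8    # speed of light, m/s
AU     = 1.496e11   # astronomical unit, m

# Minimum focal distance: rays grazing the solar limb (b = R_sun)
F = C**2 * R_SUN**2 / (4.0 * GM_SUN)
print(F / AU)                       # ~548 AU

# On-axis light amplification at optical wavelengths
r_g = 2.0 * GM_SUN / C**2           # Schwarzschild radius of the Sun, ~3 km
lam = 1.0e-6                        # wavelength, m
mu0 = 4.0 * math.pi**2 * r_g / lam
print(f"{mu0:.1e}")                 # ~1e11
```

The ~548 AU result is where the focal line begins; larger impact parameters focus farther out, which is why a probe anywhere beyond that distance can use the lens.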

But you have to be located very precisely at the right spot to image a particular spot on the exoplanet. How precisely? Well, that’s what we set out to figure out, based in part on the existing literature on the subject. (Short answer: it’s measured in tens of centimeters or less.)

In principle, a spacecraft at this distance, moving slowly in lateral directions to scan the image plane (which is several kilometers across), can obtain a detailed map of a distant planet. It is possible, in principle, to obtain a megapixel resolution image of a planet dozens of light years from here, though image reconstruction would be a task of considerable complexity, due in part to the fact that an exoplanet is a moving, changing target with variable illumination and possibly cloud cover.
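The size of that image plane follows from simple geometry: the image is the planet’s disk scaled down by the ratio of the probe’s heliocentric distance to the distance of the target. The figures below assume an Earth-sized planet at 100 light years, viewed from 650 AU (illustrative values of mine):

```python
AU = 1.496e11        # astronomical unit, m
LY = 9.461e15        # light year, m
D_PLANET = 1.274e7   # diameter of an Earth-sized planet, m

z_probe  = 650 * AU  # heliocentric distance of the probe (assumed)
z_source = 100 * LY  # distance to the target exoplanet

d_image = D_PLANET * z_probe / z_source
print(d_image)           # image diameter in meters: roughly a kilometer
print(d_image / 1000)    # ~1 m per pixel for a megapixel (1000x1000) image
```

A meter-scale pixel spacing across a kilometer-scale image plane is what makes the scanning strategy, and the required navigation precision, so demanding.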

Mind you, getting to 550 AU is costly. Our most distant spacecraft to date, Voyager 1, is just under 140 AU from the Sun, and it took that spacecraft 40 years to get there. That said, it is a feasible mission concept, but we must be very certain that we understand the physics thoroughly.

This is where our paper comes in: an attempt to derive detailed results about how light waves pass on both sides of the Sun and recombine along the focal line.

The bulk of the work in this paper is Slava’s, but I was proud to help. Part of my contribution was to provide a visualization of the qualitative behavior of the wavefront (described by a hypergeometric function):

In this image, a light wave, initially a plane wave, travels from left to right and it is deflected by a gravitational source at the center. If you squint just a little, you can actually see a concentric circular pattern overlaid on top of the distorted wavefront. The deflection of the wavefront and this spherical wave perturbation are both well described by an approximation. However, that approximation breaks down specifically in the region of interest, namely the focal line:

The top left of these plots shows the approximation of the deflected wavefront; the top right, the (near) circular perturbation. Notice how both appear to diverge along the focal line: the half line between the center of the image and the right-hand edge. The bottom right plot shows the combination of the two approximations; it is similar to the full solution, but not identical. The difference between the full solution and this approximation is shown in the bottom left plot.

I also helped with working out evil-looking things like a series approximation of the confluent hypergeometric function using so-called Pochhammer symbols and Stirling numbers. It was fun!

To make a long story short, although it involved some frustratingly long hours at a time when I was already incredibly busy, it was fun, educational, and rewarding, as we gave birth to a 39-page monster (43 pages on the arXiv) with over 300 equations. Hopefully just one of many contributions that, eventually (dare I hope that it will happen within my lifetime?) may result in a mission that will provide us with a detailed image of a distant, life-bearing cousin of the Earth.

 Posted by at 10:56 pm
Mar 17, 2017

Recently, I answered a question on Quora on the possibility that we live in a computer simulation.

Apparently, this is a hot topic. The other day, there was an essay on it by Sabine Hossenfelder.

I agree with Sabine’s main conclusion, as well as her point that “the programmer did it” is no explanation at all: it is just a modern version of mythology.

I also share her frustration, for instance, when she reacts to the nonsense from Stephen Wolfram about a “whole civilization” “down at the Planck scale”.

Sabine makes a point that discretization of spacetime might conflict with special relativity. I wonder if the folks behind doubly special relativity might be inclined to offer a thought or two on this topic.

In any case, I have another reason why I believe we cannot possibly live in a computer simulation.

My argument hinges on an unproven conjecture: the assumption that scalable quantum computing is not really possible, i.e., that the threshold demanded by the threshold theorem can never be reached. Most supporters of quantum computing believe, of course, that the threshold theorem is precisely what makes quantum computing possible: if an error-correcting quantum computer reaches a certain threshold, it can emulate an arbitrary precision quantum computer accurately.

But I think this is precisely why the threshold will never be reached. One of these days, someone will prove a beautiful theorem that no large-scale quantum computer will ever be able to operate above the threshold, hence scalable quantum computing is just not possible.

Now what does this have to do with us living in a simulation? Countless experiments show that we live in a fundamentally quantum world. Contrary to popular belief (and many misguided popularizations), this does not mean a discretization at the quantum level. What it does mean is that even otherwise discrete quantities (e.g., the two spin states of an electron) turn into continuum variables (the phase of the wavefunction).

This is precisely what makes a quantum computer powerful: like an analog computer, it can perform certain algorithms more effectively than a digital computer, because whereas a digital computer operates on the countable set of discrete digits, a quantum or analog computer operates with the uncountably infinite set of states offered by continuum variables.

Of course a conventional analog computer is very inaccurate, so nobody has seriously proposed that one could ever be used to factor 1000-digit numbers.

This quantum world in which we live, with its richer structure, can be simulated only inefficiently using a digital computer. If that weren’t the case, we could use a digital computer to simulate a quantum computer and get on with it. But this means that if the world is a simulation, it cannot be a simulation running on a digital computer. The computer that runs the world has to be a quantum computer.

But if quantum computers do not exist… well, then they cannot simulate the world, can they?

Two further points about this argument. First, it is purely mathematical: I am offering a mathematical line of reasoning that no quantum universe can be a simulated universe. It is not a limitation of technology, but a (presumed) mathematical truth.

Second, the counterargument has often been proposed that perhaps the simulation is set up so that we do not get to see the discrepancies caused by inefficient simulation. I.e., the programmer cheats and erases the glitches from our simulated minds. But I don’t see how that could work either. For this to work, the algorithms employed by the simulation must anticipate not only all the possible ways in which we could ascertain the true nature of the world, but also assess all consequences of altering our state of mind. I think it quickly becomes evident that this really cannot be done without, well, simulating the world correctly, which is what we were trying to avoid… so no, I do not think it is possible.

Of course if tomorrow, someone announces that they cracked the threshold theorem and full-scale, scalable quantum computing is now reality, my argument goes down the drain. But frankly, I do not expect that to happen.

 Posted by at 11:34 pm
Jan 20, 2017

Enough blogging about politics. It’s time to think about physics. Been a while since I last did that.

A Facebook post by Sabine Hossenfelder made me look at this recent paper by Josset et al. Indeed, the post inspired me to create a meme:

The paper in question contemplates the possibility that “dark energy”, i.e., the mysterious factor that leads to the observed accelerating expansion of the cosmos, is in fact due to a violation of energy conservation.

Sounds kooky, right? Except that the violation that the authors consider is a very specific one.

Take Einstein’s field equation,

$$R_{\mu\nu}-\tfrac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu}=8\pi GT_{\mu\nu},$$

and subtract from it a quarter of its trace times the metric. The trace of the left-hand side is \(-R+4\Lambda\), the right-hand side is \(8\pi GT\), so we get

$$R_{\mu\nu}-\tfrac{1}{4}Rg_{\mu\nu}=8\pi G(T_{\mu\nu}-\tfrac{1}{4}Tg_{\mu\nu}).$$

Same equation? Not quite. For starters, the cosmological constant \(\Lambda\) is gone. Furthermore, this equation is manifestly trace-free: its trace is \(0=0\). This theory, which was incidentally considered already almost a century ago by Einstein, is called trace-free or unimodular gravity. It is called unimodular gravity because it can be derived from the Einstein-Hilbert Lagrangian by imposing the constraint \(\sqrt{-g}=1\), i.e., that the volume element is constant and not subject to variation.

Unimodular gravity has some interesting properties. Most notably, it no longer implies the conservation law \(\nabla_\mu T^{\mu\nu}=0\).

On the other hand, the Bianchi identities still guarantee that \(\nabla_\mu(R^{\mu\nu}-\tfrac{1}{2}Rg^{\mu\nu})=0\), thus taking the covariant divergence of the new field equation yields

$$\nabla_\mu(\tfrac{1}{4}Rg^{\mu\nu})=8\pi G\nabla_\mu(T^{\mu\nu}-\tfrac{1}{4}Tg^{\mu\nu}).$$

So what happens if \(T_{\mu\nu}\) is conserved? Then we get

$$\nabla_\mu(\tfrac{1}{4}Rg^{\mu\nu})=-8\pi G\nabla_\mu(\tfrac{1}{4}Tg^{\mu\nu}),$$

which implies the existence of the conserved quantity \(\hat{\Lambda}=\tfrac{1}{4}(R+8\pi GT)\).

Using this quantity to eliminate \(T\) from the unimodular field equation, we obtain

$$R_{\mu\nu}-\tfrac{1}{2}Rg_{\mu\nu}+\hat{\Lambda} g_{\mu\nu}=8\pi GT_{\mu\nu}.$$

This is Einstein’s original field equation, but now \(\hat{\Lambda}\) is no longer a cosmological constant; it is now an integration constant that arises from a conservation law.

The vacuum solutions of unimodular gravity are the same as those of general relativity. But what about matter solutions? It appears that if we separately impose the conservation law \(\nabla_\mu T^{\mu\nu}=0\), we pretty much get back general relativity. What we gain is a different origin, or explanation, of the cosmological constant.

On the other hand, if we do not impose the conservation law for matter, things get interesting. In this case, we end up with an effective cosmological term that’s no longer constant. And it is this term that is the subject of the paper by Josset et al.

That being said, a term that is time-varying in the case of a homogeneous and isotropic universe surely acquires a dependence on spatial coordinates in a nonhomogeneous environment. In particular, the nonconservation of \(T_{\mu\nu}\) should lead to testable deviations in certain Parameterized Post-Newtonian (PPN) parameters. There are some reasonably stringent limits on these parameters (notably, the parameters \(\alpha_3\) and \(\zeta_i\) in the notation used by Clifford Will in the 1993 revision of his book, Theory and experiment in gravitational physics) and I wonder if Josset et al. might already be in violation of these limits.

 Posted by at 9:43 pm
Sep 14, 2016

Hey, I am getting famous again!

For the second time, Quora decided to feature one of my answers on their Forbes blog site. This one was in response to the question, “Is theoretical physics a waste of resources?” I used the example of Maxwell’s prediction of electromagnetic waves to turn the question into a rhetorical one.

Forbes used a stock Getty image of some physicists in front of a blackboard to illustrate the blog post. Here, allow me to use the image of a bona fide blackboard, one from the Perimeter Institute, containing a few of the field equations of MOG/STVG, during one of our discussions with John Moffat.


Anyhow, I feel honored. Thank you Quora.

Of course, I never know how people read my answers. Just tonight, I received a mouthful in the form of hate mail from a sarcasm-challenged defender of the US space program who thought that in my answer about astronauts supposedly having two shadows on the Moon, I was actually promoting some conspiracy theory. Duh.

 Posted by at 11:31 pm
Jun 06, 2016

The Crafoord Prize is a prestigious prize administered by the Royal Swedish Academy of Sciences. Not as prestigious as the Nobel, it is still a highly regarded prize that comes with a respectable sum of money.

This year, one of the recipients was Roy Kerr, known for his exact solution describing rotating black holes.

Several people were invited to give talks, including Roy Kerr’s colleague David Wiltshire. Wiltshire began his talk by mentioning the role of a young John Moffat in inspiring Kerr to study the rotating solution, but he also acknowledged Moffat’s more recent work, his Scalar-Tensor-Vector Gravity (STVG) theory, aka MOG, in which I also played a role.

All too often, MOG is ignored, dismissed or confused with other theories. It was very good to see a rare, notable exception from that rule.

 Posted by at 7:19 pm
Jun 02, 2016

This morning, Quora surprised me with this:

Say what?

I have written a grand total of three Quora answers related to the Quran (or Koran, which is the spelling I prefer). Two of these were just quoting St. Augustine of Hippo, an early Christian saint who advised Christians not to confuse the Book of Genesis with science; the third was about a poll from a few years back that showed that in the United States, atheists/agnostics know more about religion than religious folk from any denomination.

As to string theory, I try to avoid the topic because I don’t know enough about it. Still, 15 of my answers on related topics (particle physics, cosmology) were apparently also categorized under the String Theory label.

But I fail to see how my contributions make me an expert on either Islam or String Theory.

 Posted by at 11:18 am
May 21, 2016

Not for the first time, I am reading a paper that discusses the dark matter paradigm and its alternatives.

Except that it doesn’t. Discuss the alternatives, that is. It discusses the one alternative every schoolchild interested in the sciences knows about (and one that, incidentally, doesn’t really work) while ignoring the rest.

This one alternative is Mordehai Milgrom’s MOND, or MOdified Newtonian Dynamics, and its generalization, TeVeS (Tensor-Vector-Scalar theory) by the late Jacob Bekenstein.

Unfortunately, too many people think that MOND is the only game in town, or that even if it isn’t, it is somehow representative of its alternatives. But it is not.

In particular, I find it tremendously annoying when people confuse MOND with Moffat’s MOG (MOdified Gravity, also MOffat Gravity). Or when, similarly, they confuse TeVeS with STVG (Scalar-Tensor-Vector Gravity), which is the relativistic theory behind the MOG phenomenology.

So how do they differ?

MOND is a phenomenological postulate concerning a minimum acceleration. It modifies Newton’s second law: Instead of \(F = ma\), we have \(F = m\mu(a/a_0)a\), where \(\mu(x)\) is a function that satisfies \(\mu(x)\to 1\) for \(x\gg 1\), and \(\mu(x)\to x\) for \(x\ll 1\). A good example would be \(\mu(x)=1/(1+1/x)\). The magnitude of the MOND acceleration is \(a_0={\cal O}(10^{-10})~{\rm m}/{\rm s}^2\).

The problem with MOND is that in this form, it violates even basic conservation laws. It is not a theory: it is just a phenomenological formula designed to explain the anomalous rotation curves of spiral galaxies.

MOND was made more respectable by Jacob Bekenstein, who constructed a relativistic field theory of gravity that approximately reproduces the MOND acceleration law in the non-relativistic limit. The theory incorporates a unit 4-vector field and a scalar field. It also has the characteristics of a bimetric theory, in that a “physical metric” is constructed from the true metric and the vector field, and this physical metric determines the behavior of ordinary matter.

In contrast, MOG is essentially a Yukawa theory of gravity in the weak field approximation, with two twists. The first twist is that in MOG, attractive gravity is stronger than Newton’s or Einstein’s; however, at a finite range, it is counteracted by a repulsive force, so the gravitational acceleration is in fact given by \(a = (GM/r^2)[1+\alpha-\alpha(1+\mu r)e^{-\mu r}]\), where \(\alpha\) determines the strength of attractive gravity (\(\alpha=0\) means Newtonian gravity) and \(\mu^{-1}\) is the range of the vector force. (Typically, \(\alpha={\cal O}(1)\), \(\mu^{-1}={\cal O}(10)~{\rm kpc}\).) The second twist is that the strength of attractive gravity and the range of the repulsive force are both variable, i.e., dynamical (though possibly algebraically related) degrees of freedom. And unlike MOND, for which a relativistic theory was constructed after the fact, MOG is derived from a relativistic field theory. It, too, includes a vector field and one or two scalar fields, but the vector field is not a unit vector field, and there is no additional, “physical metric”.
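The character of this acceleration law is easy to see numerically. A small sketch (the parameter values below are merely representative of the typical magnitudes quoted above, not fitted values):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
KPC = 3.086e19       # meters per kiloparsec
M_SUN = 1.989e30     # kg

def mog_acceleration(M, r, alpha=1.0, mu_inv=10.0 * KPC):
    """Weak-field MOG acceleration: (GM/r^2)[1 + alpha - alpha(1 + mu r)exp(-mu r)]."""
    mu = 1.0 / mu_inv
    return G * M / r**2 * (1.0 + alpha - alpha * (1.0 + mu * r) * math.exp(-mu * r))

# Well inside the range mu^-1 the Yukawa repulsion cancels the enhanced
# attraction and gravity is Newtonian; well outside, it is (1 + alpha)
# times stronger than Newton's.
M = 1e11 * M_SUN                        # a representative galaxy mass
a_inner = mog_acceleration(M, 0.01 * KPC)
a_outer = mog_acceleration(M, 1000.0 * KPC)
```

This interplay, Newtonian behavior at short range, enhanced gravity at long range, is what lets MOG mimic the effects of dark matter without violating solar system constraints.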

In short, there is not even a superficial resemblance between the two theories. Moreover, unlike MOND, MOG has a reasonably good track record dealing with things other than galaxies: this includes globular clusters (for which MOND has to invoke the nebulous “external field effect”), clusters of galaxies (including the famous Bullet Cluster, seen by some as incontrovertible proof that dark matter exists) and cosmology (for which MOND requires something like 2 eV neutrinos to be able to fit the data).

MOG and the acoustic power spectrum. Calculated using \(\Omega_M=0.3\), \(\Omega_b=0.035\), \(H_0=71~{\rm km}/{\rm s}/{\rm Mpc}\). Also shown are the raw Wilkinson Microwave Anisotropy Probe (WMAP) three-year data set (light blue), binned averages with horizontal and vertical error bars provided by the WMAP project (red) and data from the Boomerang experiment (green). From arXiv:1104.2957.

There are many issues with MOG, to be sure. Personally, I have never been satisfied with the way we treated the scalar field so far, and I’d really like to be able to derive a proper linearized version of the theory in which the scalar field, too, is accommodated as a first-class citizen. How MOG stands up to scrutiny in light of precision solar system data at the PPN level is also an open question.

But to see MOG completely ignored in the literature, and see MOND used essentially as a straw man supposedly representing all attempts at creating a modified gravity alternative to dark matter… that is very disheartening.

 Posted by at 5:23 pm
Apr 262016
 

This is an eerie anniversary.

Thirty years ago today, reactor 4 of the Chernobyl nuclear power plant blew to smithereens.

It’s really hard to assign blame.

Was it the designers who came up with a reactor design that was fundamentally unstable at low power?

Was it the bureaucrats who, in the secretive Soviet police state, made it hard if not impossible for operators at one facility to learn from incidents elsewhere?

Was it the engineers at Chernobyl who, concerned about the consequences of a total loss of power at the station, tried to test a procedure that would have kept control systems and the all-important coolant pumps running using waste heat during an emergency shutdown, while the Diesel generators kicked in?

Was it the Kiev electricity network operator who asked Chernobyl to keep reactor 4 online for a little longer, thus pushing the planned test into the late night?

Was it the control room operator who ultimately pushed the button that initiated an emergency shutdown?

And the list continues. Many of the people we could blame didn’t stick around long enough: they died, after participating in often heroic efforts to avert an even greater disaster, and receiving lethal doses of radiation.

Some lived. This photo shows Arkady Uskov, who suffered severe radiation burns 30 years ago as he helped save colleagues. He, along with a few other people, recently revisited the control room of reactor 4, and was photographed there by Radio Free Europe. (Sadly, the photos are badly mislabeled by someone who didn’t know that “Arcadia Uskova” would be the name of a female; or, in this case, the genitive case of the male name Arkady Uskov. Thus I also cannot tell if “Oleksandr Cheranov”, whose name I cannot find anywhere else in the literature of Chernobyl, was a real person or just another RFE misprint.)

Surprisingly, the control room, which looks like a set of props from a Cold War era science fiction movie, is still partially alive. The lit panels, I suspect, must be either part of the monitoring effort or communications equipment.

It must have been an uncanny feeling for these aging engineers to be back at the scene, 30 years later, contemplating what took place that night.

Incidentally, nuclear power remains by far the safest form of power generation in the world. Per unit of energy produced, it is dozens of times safer than hydroelectricity; a hundred times safer than natural gas; and a whopping four thousand times safer than coal. And yes, this includes the additional approximately 4,000 premature deaths (UN estimate) as a result of Chernobyl’s fallout. Nor was Chernobyl the deadliest accident related to power generation; that title belongs to China’s Banqiao Dam, the failure of which claimed 171,000 lives back in 1975.

 Posted by at 5:52 pm
Feb 162016
 

The other day, I ran across a question on Quora: Can you focus moonlight to start a fire?

The question actually had an answer on xkcd, and it’s a rare case of an incorrect xkcd answer. Or rather, it’s an answer that reaches the correct conclusion but follows invalid reasoning. As a matter of fact, they almost get it right, but miss an essential point.

The xkcd answer tells you that “You can’t use lenses and mirrors to make something hotter than the surface of the light source itself”, which is true, but it neglects the fact that in this case, the light source is not the Moon but the Sun. (OK, they do talk about it but then they ignore it anyway.) The Moon merely acts as a reflector. A rather imperfect reflector to be sure (and this will become important in a moment), but a reflector nonetheless.

But first things first. For our purposes, let’s just take the case when the Moon is full and let’s just model the Moon as a disk for simplicity. A disk with a diameter of \(3,474~{\rm km}\), located \(384,400~{\rm km}\) from the Earth, and bathed in sunlight, some of which it absorbs, some of which it reflects.

The Sun has a radius of \(R_\odot=696,000~{\rm km}\) and a surface temperature of \(T_\odot=5,778~{\rm K}\), and it is a near perfect blackbody. The Stefan-Boltzmann law tells us that its emissive power \(j^\star_\odot=\sigma T_\odot^4\sim 6.32\times 10^7~{\rm W}/{\rm m}^2\) (\(\sigma=5.670373\times 10^{-8}~{\rm W}/{\rm m}^2/{\rm K}^4\) is the Stefan-Boltzmann constant).

The Sun is located \(1~{\rm AU}\) (astronomical unit, \(1.496\times 10^{11}~{\rm m}\)) from the Earth. Multiplying the emissive power by \(R_\odot^2/(1~{\rm AU})^2\) gives the “solar constant”, aka. the irradiance (the terminology really is confusing): approx. \(I_\odot=1368~{\rm W}/{\rm m}^2\), which is the amount of solar power per unit area received here in the vicinity of the Earth.

The Moon has an albedo. The albedo determines the amount of sunshine reflected by a body. For the Moon, it is \(\alpha_\circ=0.12\), which means that 88% of incident sunshine is absorbed, and then re-emitted in the form of heat (thermal infrared radiation). Assuming that the Moon is a perfect infrared emitter, we can easily calculate its surface temperature \(T_\circ\), since the radiation it emits (according to the Stefan-Boltzmann law) must be equal to what it receives:

\[\sigma T_\circ^4=(1-\alpha_\circ)I_\odot,\]

from which we calculate \(T_\circ\sim 382~{\rm K}\) or about 109 degrees Centigrade.
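These numbers are easy to verify. A quick sketch using only the constants quoted in this post:

```python
SIGMA = 5.670373e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
R_SUN = 6.96e8        # solar radius, m
AU = 1.496e11         # astronomical unit, m
T_SUN = 5778.0        # solar surface temperature, K
ALBEDO = 0.12         # lunar albedo

j_sun = SIGMA * T_SUN**4                # solar emissive power, ~6.3e7 W/m^2
I_sun = j_sun * (R_SUN / AU)**2         # solar constant at 1 AU, ~1368 W/m^2

# The (simplistically modeled) lunar disk re-radiates the 88% it absorbs:
T_moon = ((1.0 - ALBEDO) * I_sun / SIGMA) ** 0.25   # ~382 K
```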

It is indeed impossible to use any arrangement of infrared optics to focus this thermal radiation on an object and make it hotter than 109 degrees Centigrade. That is because the best we can do with optics is to make sure that the object on which the light is focused “sees” the Moon’s surface in all sky directions. At that point, it would end up in thermal equilibrium with the lunar surface. Any other arrangement would leave some of the deep sky exposed, and now our object’s temperature will be determined by the lunar thermal radiation it receives, vs. any thermal radiation it loses to deep space.

But the question was not about lunar thermal infrared radiation. It was about moonlight, which is reflected sunlight. Why can we not focus moonlight? It is, after all, reflected sunlight. And even if it is diminished by 88%… shouldn’t the remaining 12% be enough?

Well, if we can focus sunlight on an object through a filter that reduces the intensity by 88%, the object’s temperature is given by

\[\sigma T^4=\alpha_\circ\sigma T_\odot^4,\]

which is easily solved to give \(T=3401~{\rm K}\), more than hot enough to start a fire.
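For what it’s worth, the arithmetic checks out (same solar temperature and albedo as above):

```python
T_SUN = 5778.0    # solar surface temperature, K
ALBEDO = 0.12     # lunar albedo, acting here as a 12% transmission filter

# sigma T^4 = albedo * sigma * T_sun^4  =>  T = albedo^(1/4) * T_sun
T = ALBEDO ** 0.25 * T_SUN   # ~3401 K
```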

Suppose the lunar disk was a mirror. Then, we could set up a suitable arrangement of lenses and mirrors to ensure that our object sees the Sun, reflected by the Moon, in all sky directions. So we get the same figure, \(3401~{\rm K}\).

But, and this is where we finally get to the real business of moonlight, the lunar disk is not a mirror. It is not a specular reflector. It is a diffuse reflector. What does this mean?

Well, it means that even if we were to set up our optics such that we see the Moon in all sky directions, most of what we would see (or rather, wouldn’t see) is not reflected sunlight but reflections of deep space. Or, if you wish, our “seeing rays” would go from our eyes to the Moon and then to some random direction in space, with very few of them actually hitting the Sun.

What this means is that even when it comes to reflected sunlight, the Moon acts as a diffuse emitter. Its spectrum will no longer be a pure blackbody spectrum (as it is now a combination of its own blackbody spectrum and that of the Sun) but that’s not really relevant. If we focused moonlight (including diffusely reflected light and absorbed light re-emitted as heat), it’s the same as focusing heat from something that emits heat or light at \(j^\star_\circ=I_\odot\). That something would have an equivalent temperature of \(394~{\rm K}\), and that’s the maximum temperature to which we can heat an object using optics that ensures that it “sees” the Moon in all sky directions.
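That equivalent temperature follows directly from the solar constant computed earlier:

```python
SIGMA = 5.670373e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
I_SUN = 1368.0        # solar constant at 1 AU, W/m^2

# A diffuse lunar disk re-emitting everything it receives, j* = I_sun:
T_eq = (I_SUN / SIGMA) ** 0.25   # ~394 K
```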

So then let me ask another question… how specular would the Moon have to be for us to be able to light a fire with moonlight? Many surfaces can be characterized as though they were a combination of a diffuse and a specular reflector. What percentage of sunlight would the Moon have to reflect like a mirror, which we could then collect and focus to produce enough heat, say, to combust paper at the famous \(451~{\rm F}=506~{\rm K}\)? Very little, as it turns out.

If the Moon had a specularity coefficient of only \(\sigma_\circ=0.00031\), with a suitable arrangement of optics (which may require some mighty big mirrors in space, but never mind that, we’re talking about a thought experiment here), we could concentrate reflected sunlight and lunar heat to reach an intensity of

\[I=\alpha_\circ\sigma_\circ j^\star_\odot+(1-\alpha_\circ\sigma_\circ)j^\star_\circ=3719~{\rm W}/{\rm m}^2,\]

which, according to Ray Bradbury, is enough heat to make a piece of paper catch a flame.
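Putting the numbers together (the 0.00031 specularity coefficient being the assumed value from above):

```python
SIGMA = 5.670373e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN = 5778.0        # solar surface temperature, K
R_SUN = 6.96e8        # solar radius, m
AU = 1.496e11         # astronomical unit, m
ALBEDO = 0.12         # lunar albedo
SPEC = 0.00031        # assumed lunar specularity coefficient

j_sun = SIGMA * T_SUN**4             # solar emissive power
I_sun = j_sun * (R_SUN / AU)**2      # solar constant, ~1368 W/m^2

# Specularly reflected sunlight plus diffusely re-emitted lunar radiation:
intensity = ALBEDO * SPEC * j_sun + (1.0 - ALBEDO * SPEC) * I_sun   # ~3719 W/m^2
T_max = (intensity / SIGMA) ** 0.25                                 # ~506 K
```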

So if it turns out that the Moon is not a perfectly diffuse emitter but has a little bit of specularity, it just might be possible to use its light to start a fire.

 Posted by at 4:49 pm
Feb 122016
 

I saw a question on Quora about humans and gravitational waves. How would a human experience an event like GW150914 up close?

Forget for a moment that those black holes likely carried nasty accretion disks and whatnot, and that the violent collision of matter outside the black holes’ respective event horizons probably produced deadly heat and radiation. Pretend that these are completely quiescent black holes, and thus the merger event produced only gravitational radiation.

A gravitational wave is like a passing tidal force. It squeezes you in one direction and stretches you in a perpendicular direction. If you are close enough to the source, you might feel this as a force. But the effect of gravitational waves is very weak. For your body to be stretched by one part in a thousand, you’d have to be about 15,000 kilometers from the coalescing black hole. At that distance, the gravitational acceleration would be more than 3.6 million \(g\), which is rather unpleasant, to say the least. And even if you were in a freefalling orbit, there would be strong tidal forces, too: not enough to rip your body apart, but certainly enough to make you feel very uncomfortable (about \(0.25~g\) across a one-meter body). So sensing a gravitational wave would be the least of your concerns.
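The accelerations quoted here are easy to reproduce with a back-of-the-envelope sketch, assuming a merged black hole of roughly 62 solar masses (approximately the published GW150914 remnant mass):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
G_EARTH = 9.81       # standard gravity, m/s^2

M = 62.0 * M_SUN     # approximate mass of the merged black hole (assumption)
r = 1.5e7            # 15,000 km, in meters

a = G * M / r**2                    # free-fall acceleration: several million g
tidal = 2.0 * G * M / r**3 * 0.5    # head-to-center tidal pull over a ~1 m body
```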

But then… you’d not really be sensing it anyway. You would be hearing it.

Most of the gravitational wave power emitted by GW150914 was in the audio frequency range. A short chirp, rising in both pitch and amplitude. And the funny thing is… you would hear it, as the gravitational wave passed through your body, stretching every bit a little, including your eardrums.

The power output of GW150914 was stupendous. Its peak power was close to \(10^{49}\) watts (some \(10^{56}~{\rm erg}/{\rm s}\)), which exceeds the total power output of the entire visible universe by orders of magnitude. So for a split second, GW150914 was by far the largest loudspeaker in the known universe.

And this is actually a better analogy than I initially thought. Because, arguably, those gravitational waves were a form of sound.

Now wait a cotton-picking minute you ask. Everybody knows that sounds don’t travel in space! Well… true to some extent. In empty space, there is indeed no medium that would carry the kind of mechanical disturbance that we call sound. But for gravitational waves, space is the medium. And in a very real sense, they are a form of mechanical disturbance, just like sound: they compress and stretch space (and time) as they pass by, just as a sound wave compresses and stretches the medium in which it travels.

But wait… isn’t it true that gravitational waves travel at the speed of light? Well, they do. But… so what? In a medium with a stiff equation of state, one in which energy density and pressure are equal, the speed of sound is exactly the speed of light. So for cosmologists, this just means that spacetime might be represented as a “perfect fluid with a stiff equation of state”.

Is this a legitimate thing to say? Maybe not, but off the top of my head, I cannot think of a reason why not. It would be unusual, to be sure, but hey, we do ascribe effective equations of state to the cosmological constant and spatial curvature, so why not this? And I find it absolutely fascinating to think of the signal from GW150914 as a cosmic sound wave. Emitted by a speaker so loud that LIGO, our sensitive microphone, could detect it a whopping 1.3 billion light years away.

 Posted by at 11:26 pm
Feb 112016
 

If this discovery withstands the test of time, the plots will be iconic:

The plots depict an event that took place five months ago, on September 14, 2015, when the two observatories of the LIGO experiment simultaneously detected a signal typical of a black hole merger.

The event is attributed to a merger of two black holes, with masses of roughly 36 and 29 solar masses, approximately 410 Mpc from the Earth. As the black holes approach each other, their relative velocity approaches the speed of light; after the merger, the resulting object settles down to a rotating Kerr black hole.

When I first heard rumors about this discovery, I was a bit skeptical; black holes of this size (~30 solar masses) have never been observed before. However, I did not realize just how enormous the distance is between us and this event. In such a gigantic volume, it is far less outlandish for such an oddball pair of two very, very massive (but not supermassive!) black holes to exist.

I also didn’t realize just how rapid this event was. I spoke with people previously who were studying the possibility of observing a signal, rising in amplitude and frequency, hours, days, perhaps even weeks before the event. But here, the entire event lasted no more than a quarter of a second. Bang! And something like three solar masses’ worth of mass-energy was emitted in the form of ripples in spacetime.

The paper is now accepted for publication and every indication is that the group’s work was meticulous. Still, there were some high profile failures recently (OPERA’s faster-than-light neutrinos, BICEP2’s CMB polarization due to gravitational waves) so, as they say, extraordinary claims require extraordinary evidence; let’s see if this detection is followed by more, let’s see what others have to say who reanalyze the data.

But if true, this means that the last great prediction of Einstein is now confirmed through direct observation (indirect observations have been around for about four decades, in the form of the change in the orbital period of close binary pulsars) and also, the last great observational confirmation of the standard model of fundamental physics (the standard model of particle physics plus gravity) is now “in the bag”, so to speak.

All in all, a memorable day.

 Posted by at 12:58 pm
Jan 282016
 

If you are not following particle physics news or blog sites, you might have missed the big excitement last month when it was announced that the Large Hadron Collider may have observed a new particle with a mass of 750 GeV (roughly 800 times as heavy as a hydrogen atom).

Within hours of the announcement, a flurry of papers began to appear on the manuscript archive, arxiv.org. To date, probably at least 200 papers are there, offering a variety of explanations of this new observation (and incidentally, demonstrating just how hungry the theoretical community has become for new data).

Most of these papers are almost certainly wrong. Indeed, there is a chance that all of them are wrong, on account of the possibility that there is no 750 GeV resonance in the first place.

I am looking at two recent papers. One, by Buckley, discusses what we can (or cannot) learn from the data that have been collected so far. Buckley cautions researchers not to divine more from the data than what it actually reveals. He also remarks on the fact that the observational results of the two main detectors of the LHC, ATLAS and CMS, are somewhat in tension with one another.

From Fig. 2: Best fit regions (1 and 2σ) of a spin-0 mediator decaying to diphotons, as a function of mediator mass and 13 TeV cross section, assuming mediator couplings to gluons and narrow mediator width. Red regions are the 1 and 2σ best-fit regions for the Atlas13 data, blue is the fit to Cms13 data. The combined best fit for both Atlas13 and Cms13 (Combo13) are the regions outlined in black dashed lines. The best-fit signal combination of all four data sets (Combo) is the black solid regions.

 

The other paper, by Davis et al., is more worrisome. It questions the dependence of the presumed discovery on a crucial part of the analysis: the computation or simulation of background events. The types of reactions that the LHC detects happen all the time when protons collide; a new particle is discerned when it produces excess events over that background. Therefore, in order to tell if there is indeed a new particle, precise knowledge of the background is of paramount importance. Yet Davis and his coauthors point out that the background used in the LHC data analysis is by no means an unambiguous, unique choice and that when they choose another, seemingly even more reasonable background, the statistical significance of the 750 GeV bump is greatly diminished.

I guess we will know more in a few months when the LHC is restarted and more data are collected. It also remains to be seen if the LHC can reproduce the Higgs discovery at its current, 13 TeV operating energy; if it does not, if the Higgs discovery turns out to be a statistical fluke, we may witness one of the biggest embarrassments in the modern history of particle physics.

 Posted by at 6:25 pm