A few days ago I had a silly thought about the metric tensor of general relativity.

This tensor is usually assumed to be symmetric, on account of the fact that even if it has an antisymmetric part, $$g_{[\mu\nu]}dx^\mu dx^\nu$$ will be identically zero anyway.

But then, nothing constrains $$g_{\mu\nu}$$ to be symmetric. Such a constraint should normally appear, in the Lagrangian formalism of the theory, as a Lagrange-multiplier. What if we add just such a Lagrange-multiplier to the Einstein-Hilbert Lagrangian of general relativity?

That is, let’s write the action of general relativity in the form,

$$S_{\rm G} = \int~d^4x\sqrt{-g}(R - 2\Lambda + \lambda^{[\mu\nu]}g_{\mu\nu}),$$

where we introduced the Lagrange-multiplier $$\lambda^{[\mu\nu]}$$ in the form of a fully antisymmetric tensor. We know that

$$\lambda^{[\mu\nu]}g_{\mu\nu}=\lambda^{[\mu\nu]}(g_{(\mu\nu)}+g_{[\mu\nu]})=\lambda^{[\mu\nu]}g_{[\mu\nu]},$$

since the full contraction of an antisymmetric tensor with a symmetric tensor vanishes identically. Therefore, variation with respect to $$\lambda^{[\mu\nu]}$$ yields $$g_{[\mu\nu]}=0,$$ which is what we want.
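As a quick numerical sanity check of this identity, here is a small Python sketch (using NumPy, with random, purely illustrative matrices standing in for the tensors):

```python
import numpy as np

rng = np.random.default_rng(42)

g = rng.standard_normal((4, 4))   # a generic, nonsymmetric 4x4 "metric"
A = rng.standard_normal((4, 4))
lam = A - A.T                     # an antisymmetric lambda^{[mu nu]}

g_sym = (g + g.T) / 2             # g_(mu nu), the symmetric part
g_asym = (g - g.T) / 2            # g_[mu nu], the antisymmetric part

full = np.einsum('mn,mn->', lam, g)
sym_part = np.einsum('mn,mn->', lam, g_sym)
asym_part = np.einsum('mn,mn->', lam, g_asym)

# The contraction with the symmetric part vanishes (up to rounding),
# so the full contraction equals the contraction with g_[mu nu] alone.
print(sym_part)
print(np.isclose(full, asym_part))
```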

But what about variation with respect to $$g_{\mu\nu}?$$ The Lagrange-multipliers represent new (non-dynamic) degrees of freedom. Indeed, in the corresponding Euler-Lagrange equation, we end up with new terms:

$$\frac{\partial}{\partial g_{\alpha\beta}}(\sqrt{-g}\lambda^{[\mu\nu]}g_{[\mu\nu]})= \frac{1}{2}g^{\alpha\beta}\sqrt{-g}\lambda^{[\mu\nu]}g_{[\mu\nu]}+\frac{1}{2}\sqrt{-g}\lambda^{[\mu\nu]}(\delta^\alpha_\mu\delta^\beta_\nu-\delta^\alpha_\nu\delta^\beta_\mu)=\sqrt{-g}\lambda^{[\alpha\beta]}=0.$$

Given that the first term vanishes by virtue of the constraint $$g_{[\mu\nu]}=0,$$ this just leads to the trivial equation, $$\lambda^{[\alpha\beta]}=0,$$ for the Lagrange-multipliers. In other words, we get back general relativity, just the way we were supposed to.

So in the end, we gain nothing. My silly thought was just that, a silly exercise in pedantry that added nothing to the theory, just showed what we already knew, namely that the antisymmetric part of the metric tensor contributes nothing.

Now if we were to add a dynamical term involving the antisymmetric part, that would be different of course. Then we’d end up with either Einstein’s attempt at a unified field theory (with the antisymmetric part corresponding to electromagnetism) or Moffat’s nonsymmetric gravitational theory. But that’s a whole different game.

From time to time, I promise myself not to respond again to e-mails from strangers, asking me to comment on their research, view their paper, offer thoughts.

Yet from time to time, when the person seems respectable, the research genuine, I do respond. Most of the time, in vain.

Like the other day. Long story short, someone basically proved, as part of a lengthier derivation, that general relativity is always unimodular. This is of course manifestly untrue, but I was wondering where their seemingly reasonable derivation went awry.

Eventually I spotted it. Without getting bogged down in the details, what they did was essentially equivalent to proving that second derivatives do not exist:

$$\frac{d^2f}{dx^2} = \frac{d}{dx}\frac{df}{dx} = \frac{df}{dx}\frac{d}{df}\frac{df}{dx} = \frac{df}{dx}\frac{d}{dx}\frac{df}{df} = \frac{df}{dx}\frac{d1}{dx} = 0.$$

Of course second derivatives do exist, so you might wonder what’s happening here. The sleight of hand happens after the third equal sign: swapping differentiation with respect to two independent variables is permitted, but $$x$$ and $$f$$ are not independent and therefore, this step is illegal.
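A one-line counterexample makes the fallacy concrete. Here is a quick check in Python with SymPy, using the illustrative choice $$f = x^2$$:

```python
import sympy as sp

x = sp.symbols('x')
f = x**2

# The genuine second derivative:
d2f = sp.diff(f, x, 2)
print(d2f)  # 2, certainly not zero

# The fallacious manipulation: (df/dx) * d(df/df)/dx = (df/dx) * d(1)/dx = 0
dfdx = sp.diff(f, x)
bogus = dfdx * sp.diff(sp.Integer(1), x)
print(bogus)  # 0
```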

I pointed this out, and received a mildly abusive comment in response questioning the quality of my mathematics education. Oh well. Maybe I will learn some wisdom and refrain from responding to strangers in the future.

This morning, Google greeted me with a link in its newsstream to a Hackaday article on the Solar Gravitational Lens. The link caught my attention right away, as I recognized some of my own simulated, SGL-projected images of an exo-Earth and its reconstruction.

Reading the article I realized that it appeared in response to a brand new video by SciShow, a science-oriented YouTube channel.

Yay! I like nicely done videos presenting our work and this one is fairly good. There are a few minor inaccuracies, but nothing big enough to even be worth mentioning. And it’s very well presented.

I suppose I should offer my thanks to SciShow for choosing to feature our research with such a well-produced effort.

A beautiful study was published the other day, and it received a lot of press coverage, so I get a lot of questions.

This study shows how, in principle, we could reconstruct the image of an exoplanet observed through the Solar Gravitational Lens (SGL), using just a single snapshot of the Einstein ring around the Sun.

The problem is, we cannot. As they say, the devil is in the details.

Here is a general statement about any conventional optical system that does not involve more exotic, nonlinear optics: whatever the system does, ultimately it maps light from picture elements, pixels, in the source plane, into pixels in the image plane.

Let me explain what this means in principle, through an extreme example. Suppose someone tells you that there is a distant planet in another galaxy, and you are allowed to ignore any contaminating sources of light. You are allowed to forget about the particle nature of light. You are allowed to forget the physical limitations of your cell phone’s camera, such as its CMOS sensor’s dynamic range or readout noise. You hold up your cell phone and take a snapshot. It doesn’t even matter if the camera is not well focused or if there is motion blur, so long as you have precise knowledge of how it is focused and how it moves. The map from source to image is still a linear map. So if your cellphone camera has 40 megapixels, a simple mathematical operation, inverting the so-called convolution matrix, lets you reconstruct the source in all its exquisite detail. All you need to know is a precise mathematical description, the so-called “point spread function” (PSF) of the camera (including any defocusing and motion blur). Beyond that, it just amounts to inverting a matrix, or equivalently, solving a linear system of equations. In other words, standard fare for anyone studying numerical computational methods, and easily solvable even at extremely high resolutions, given appropriate computational resources. (A high-end GPU in your desktop computer is ideal for such calculations.)
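To illustrate in miniature, here is a sketch in Python of a hypothetical one-dimensional “camera” with a known Gaussian blur (not the actual SGL computation, just the principle): with the PSF known exactly and no noise present, inverting the convolution matrix recovers the source essentially perfectly.

```python
import numpy as np

n = 64
rng = np.random.default_rng(0)
source = rng.random(n)                    # the unknown "scene", 64 pixels

# A known PSF: a narrow Gaussian blur, wrapped into a circulant matrix.
x = np.arange(n)
psf = np.exp(-0.5 * ((x - n // 2) / 1.0) ** 2)
psf /= psf.sum()
C = np.array([np.roll(psf, i - n // 2) for i in range(n)])

image = C @ source                        # what the "camera" records: blurry

# With the PSF known exactly and no noise, deconvolution is just
# the solution of a linear system:
recovered = np.linalg.solve(C, image)
print(np.max(np.abs(recovered - source)))  # tiny: recovery is essentially exact
```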

Why can’t we do this in practice? Why do we worry about things like the diffraction limit of our camera or telescope?

The answer, ultimately, is noise. The random, unpredictable, or unmodelable element.

Noise comes from many sources. It can include so-called quantization noise, because our camera sensor digitizes the light intensity using a finite number of bits. It can include systematic noise from many causes, such as differently calibrated sensor pixels or even approximations used in the mathematical description of the PSF. It can include unavoidable, random, “stochastic” noise that arises because light arrives as discrete packets of energy in the form of photons, not as a continuous wave.

When we invert the convolution matrix in the presence of all these noise sources, the noise gets amplified far more than the signal. In the end, the reconstructed, “deconvolved” image becomes useless unless we have an exceptionally high signal-to-noise ratio, or SNR, to begin with.

The authors of this beautiful study knew this. They even state it in their paper. They mention values such as 4,000, even 200,000 for the SNR.

And then there is reality. The Einstein ring does not appear in black, empty space. It appears on top of the bright solar corona. And even if we subtract the corona, we cannot eliminate the stochastic shot noise due to photons from the corona by any means other than collecting data for a longer time.

Let me show a plot from a paper that is work-in-progress, with the actual SNR that we can expect on pixels in a cross-sectional view of the Einstein ring that appears around the Sun:

Just look at the vertical axis. See those values there? That’s our realistic SNR, when the Einstein ring is imaged through the solar corona, using a 1-meter telescope with a 10 meter focal distance, using an image sensor pixel size of a square micron. These choices are consistent with just a tad under 5000 pixels falling within the usable area of the Einstein ring, which can be used to reconstruct, in principle, a roughly 64 by 64 pixel image of the source. As this plot shows, a typical value for the SNR would be 0.01 using 1 second of light collecting time (integration time).

What does that mean? Well, for starters, since the SNR grows with the square root of the integration time, getting from 0.01 to 4,000 means improving it by a factor of 400,000, which requires $$1.6\times 10^{11}$$ times as much light. Even assuming everything else is absolutely, flawlessly perfect, no motion blur, indeed no motion at all, no sources of contamination other than the solar corona, no quantization noise, no limitations on the sensor, achieving an SNR of 4,000 would require roughly 160 billion seconds of integration time. That is roughly 5,000 years.
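The arithmetic, in Python form (the two SNR values are the ones quoted above; the square-root scaling with integration time is the standard assumption for shot-noise-limited observations):

```python
# SNR of a shot-noise-limited observation scales as the square root
# of the integration time, so the required time scales as the square
# of the SNR improvement factor.
snr_per_second = 0.01     # realistic SNR with 1 s of integration (from the plot)
snr_target = 4000.0       # the SNR assumed by the single-snapshot study

t_seconds = (snr_target / snr_per_second) ** 2
t_years = t_seconds / (365.25 * 24 * 3600)

print(f"{t_seconds:.2e} seconds, or about {t_years:.0f} years")
```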

And that is why we are not seriously contemplating image reconstruction from a single snapshot of the Einstein ring.

Move over, general relativity. Solar gravitational lens? Meh. Particle physics and the standard model? Child’s play.

Today, I had to replace the wax ring of a leaky toilet.

Thanks to this YouTube video for some useful advice, helping me avoid some trivial mistakes.

Acting as “release manager” for Maxima, the open-source computer algebra system, I am happy to announce that just minutes ago, I released version 5.46.

I am an avid Maxima user myself; I’ve used Maxima’s tensor algebra packages, in particular, extensively in the context of general relativity and modified gravity. I believe Maxima’s tensor algebra capabilities remain top notch, perhaps even unsurpassed. (What other CAS can derive Einstein’s field equations from the Einstein-Hilbert Lagrangian?)

The Maxima system has more than half a century of history: its roots go back to the 1960s, when I was still in kindergarten. I have been contributing to the project for nearly 20 years myself.

Anyhow, Maxima 5.46, here we go! I hope I made no blunders while preparing this release, but if I did, I’m sure I’ll hear about it shortly.

Between a war launched by a mad dictator, an occupation by “freedom convoy” mad truckers, and other mad shenanigans, it’s been a while since I last blogged about pure physics.

Especially about a topic close to my heart, modified gravity. John Moffat’s modified gravity theory MOG, in particular.

Back in 2020, a paper was published arguing that MOG may not be able to account for the dynamics of certain galaxies. The author studied a large, low surface brightness galaxy, Antlia II, which has very little mass, and concluded that the only way to fit MOG to this galaxy’s dynamics is by assuming outlandish values not only for the MOG theory’s parameters but also for the parameter that characterizes the mass distribution in the galaxy itself.

In fact, I would argue that any galaxy this light that does not follow Newtonian physics is bad news for modified theories of gravity; these theories predict deviations from Newtonian physics for large, heavy galaxies, but a galaxy this light is comparable in size to large globular clusters (which definitely behave the Newtonian way) so why would they be subject to different rules?

But then… For many years now, John and I (maybe I should only speak for myself in my blog, but I think John would concur) have been cautiously, tentatively raising the possibility that these faint satellite galaxies are really not very good test subjects at all. They do not look like relaxed, “virialized” mechanical systems; rather, they appear tidally disrupted by the host galaxy whose vicinity they inhabit.

We have heard arguments that this cannot be the case, that these satellites show no signs of recent interaction. And in any case, it is never a good idea for a theorist to question the data. We are not entitled to “alternative facts”.

But then, here’s a paper from just a few months ago with a very respectable list of authors on its front page, presenting new observations of two faint galaxies, one being Antlia II: “Our main result is a clear detection of a velocity gradient in Ant2 that strongly suggests it has recently experienced substantial tidal disruption.”

I find this result very encouraging. It is consistent with the basic behavior of the MOG theory: Systems that are too light to show effects due to modified gravity exhibit strictly Newtonian behavior. This distinguishes MOG from the popular MOND paradigm, which needs the somewhat ad hoc “external field effect” to account for the dynamics of diffuse objects that show no presence of dark matter or modified gravity.

The other day, someone sent me a link to a recent paper on arxiv.org:

Be careful. You never know when a rogue penguin might be targeting you.

The 64-antenna radio telescope complex, MeerKAT, is South Africa’s contribution to the Square Kilometre Array, an international project under development to create an unprecedented radio astronomy facility.

While the SKA project is still in its infancy, MeerKAT is fully functional, and it just delivered the most detailed, most astonishing images yet of the central region of our own Milky Way. Here is, for instance, an image of the Sagittarius A region that also hosts the Milky Way’s supermassive black hole, Sgr A*:

The filamentary structure that is seen in this image is apparently poorly understood. As for the scale of this image, notice that it is marked in arc seconds; at the estimated distance to Sgr A, one arc second translates into roughly 1/8th of a light year, so the image presented here is roughly a 15 by 15 light year area.
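The scale conversion is easy to check with a few lines of Python (the distance used here is the commonly quoted ~8 kpc estimate for the galactic center):

```python
import math

distance_pc = 8200.0                  # approximate distance to Sgr A*, parsecs
arcsec = math.radians(1.0 / 3600.0)   # one arc second, in radians
pc_to_ly = 3.2616                     # light years per parsec

# Small-angle approximation: physical size = distance * angle.
size_ly = distance_pc * arcsec * pc_to_ly
print(f"1 arcsec at the galactic center is about {size_ly:.2f} light years")
```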

Though he passed away in September, I only learned about it tonight: Thanu Padmanabhan, renowned Indian theoretical physicist, is no longer with us. He was only 64 when he passed away, a result of a heart attack according to Wikipedia.

I never met Padmanabhan but I have several of his books on my bookshelf, including Structure Formation in the Universe and his more recent textbook Gravitation. I am also familiar with many of his papers.

I learned about his death just moments ago as I came across a paper by him on arXiv, carrying this comment: “Prof. T. Padmanabhan has passed away on 17th September, 2021, while this paper was under review in a journal.”

What an incredible loss. The brilliant flame of his intellect, extinguished. I am deeply saddened.

A tribute article about his life was published on arXiv back in October, but unfortunately was not cross-listed to gr-qc, and thus it escaped my attention until now.

Earlier today, I noticed something really strange. A lamp was radiating darkness. Or so it appeared.

Of course there was a mundane explanation. Now that the Sun is lower in the sky and the linden tree in front of our kitchen lost many of its leaves already, intense sunlight was reflecting off the hardwood floor in our dining area.

Still, it was an uncanny sight.

I live in a condominium townhouse. We’ve been living here for 25 years. We like the place.

Our unit, in particular, is the middle unit in a three-unit block. The construction is reasonably sound: proper foundations, cinderblock firewalls between the units, woodframe construction within, pretty run-of-the-mill by early 1980s North American standards. We have no major complaints.

Except that… for the past several years, every so often the house wobbled a bit. Almost imperceptibly, but still. At first, I thought it was a minor earthquake (not uncommon in this region, which is still subject to isostatic rebound from the last ice age; in fact, we did live through a couple of notable earthquakes since we moved in here). But no, it was no earthquake.

I thought perhaps it was related to the downtown light rail tunnel construction? But no, the LRT tunnels are quite some ways from here and in any case, that part of the construction has been finished long ago.

But then what the bleep is it? Could I be just imagining things?

Our phones have very sensitive acceleration sensors. Not for the first time, I managed to capture one of these events. A little earlier this afternoon, I heard the woodframe audibly creak as the house began to move again. I grabbed my phone and turned on a piece of software that samples the acceleration sensor at a reasonably high rate, about 200 times a second. Here is the result of the first few seconds of sampling:

The sinusoidal signal is unmistakably there, confirmed by a quick Fourier-analysis to be a signal just above 3 Hz in frequency:

Like Sheldon Cooper in The Big Bang Theory, I can claim that no, I am not crazy, and in this case not because my mother had me tested but because my phone’s acceleration sensor confirms my perception: something indeed wobbles the house a little, enough to register, with a peak-to-peak amplitude of roughly 0.05 m/s² (the vertical axis in the first graph is in g-units). That wobble is certainly not enough to cause damage, but it is, I admit, a bit unnerving.
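The sort of quick Fourier analysis described above is easy to sketch in Python. This is purely illustrative: a synthetic 3 Hz wobble sampled at 200 Hz stands in for the real accelerometer data:

```python
import numpy as np

fs = 200.0                        # sampling rate in Hz, as on the phone
t = np.arange(0, 10, 1 / fs)      # ten seconds' worth of samples

# A synthetic accelerometer trace: a weak ~3 Hz wobble buried in noise.
rng = np.random.default_rng(7)
trace = 0.025 * np.sin(2 * np.pi * 3.1 * t) + 0.01 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(trace))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

peak = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(f"dominant frequency: {peak:.2f} Hz")
```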

So what is going on here? A neighbor engaging in some, ahem, vigorous activity? Our current neighbors are somewhat noisier than prior residents, occasionally training their respective herds of pygmy elephants to run up and down the stairs (or whatever it is that they are doing). But no, the events are just too brief in duration and too regular. Underground work, perhaps a secret hideout for the staff of the nearby Chinese embassy? Speaking of which, I admit I even thought that this ~3 Hz signal might be related to the reported cases of illness by embassy staff at several embassies around the world, but I just don’t see the connection: even if those cases are real and have an underlying common cause (as opposed to just mere random coincidences) it’s hard to see how a 3 Hz vibration can have anything to do with them.

OK, so I have a pretty good idea of what this thing isn’t, but then, what the bleepety-bleep is it?

I am not happy admitting it, but it’s true: There have been a few occasions in my life when I reacted just like this XKCD cartoon character when I first encountered specific areas of research.

Can you guess the author with the most physics books on what I call my “primary” bookshelf, the shelf right over my desk where I keep the books that I use the most often?

It would be Steven Weinberg. His 1972 Gravitation and Cosmology remains one of the best books ever on relativity theory, working out details in ways no other book does. His 2010 Cosmology remains a reasonably up-to-date textbook on modern cosmology. And then there is of course the 3-volume Quantum Theory of Fields.

Alas, Weinberg is no longer with us. He passed away yesterday, July 23, at the age of 88.

He will be missed.

We have a new manuscript on arXiv. Its title might raise some eyebrows: Algebraic wave-optical description of a quadrupole gravitational lens.

Say what? Algebra? Wave optics? Yes. It means that in this particular case, namely a gravitational lens that is described as a gravitational monopole with a quadrupole correction, we were able to find a closed form description that does not rely on numerical integration, especially no numerical integration of a rapidly oscillating function.

Key to this solution is a quartic equation. Quartic equations were first solved algebraically back in the 16th century by Italian mathematicians. The formal solution is usually considered to be of little practical value, as it entails cumbersome algebra, and polynomial equations can be routinely and efficiently solved using numerical methods.

But in this case… The amazing thing is that the algebraic solution reveals so much about the physics itself!

Take this figure from our paper, for instance:

On the left is light projected by the gravitational lens, its so-called point-spread function (PSF) which tells us how light from a point source is distributed on an imaginary projection screen by the lens. On the right? Why, that’s the discriminant of the quartic equation

$$x^4-2\eta\sin\mu \, x^3+\big(\eta^2-1\big)x^2+\eta\sin\mu \, x+{\textstyle\frac{1}{4}}\sin^2\mu=0,$$

in a plane characterized by polar coordinates $$(\eta,\tfrac{1}{2}\mu)$$, that is, $$\eta$$ as a radial coordinate and $$\tfrac{1}{2}\mu$$ as an azimuthal angle. When the discriminant is positive, the equation has either four real or four complex roots; when it is negative, there is a mix: two real and two complex roots. This direct connection between the algebra and the lensing phenomenon is unexpected and beautiful.
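This connection between the sign of the discriminant and the root structure is easy to verify numerically. Here is a sketch in Python with SymPy; the parameter values are arbitrary illustrations, not taken from the paper:

```python
import sympy as sp

x = sp.symbols('x')

def quartic(eta, s):
    # s stands in for sin(mu); rational values keep everything exact
    return sp.Poly(x**4 - 2*eta*s*x**3 + (eta**2 - 1)*x**2
                   + eta*s*x + sp.Rational(1, 4)*s**2, x)

half = sp.Rational(1, 2)
for eta, s in [(2, half), (half, half)]:
    p = quartic(eta, s)
    disc = sp.discriminant(p.as_expr(), x)
    n_real = len(p.real_roots())
    print(f"eta={eta}, sin(mu)={s}: discriminant sign {sp.sign(disc)}, "
          f"{n_real} real roots")
    # Positive discriminant: zero or four real roots; negative: exactly two.
    assert (disc > 0) == (n_real in (0, 4))
```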

The full set of real roots of this equation can be shown in the form of an animation:

Of course one must read the paper in order for this animation to make sense, but I think it’s beautiful.

How good is this quartic solution? It is uncannily accurate. Here is a comparison of the PSF computed using the quartic solution and also using numerical integration, as well as some enlarged details from the so-called caustic boundary:

It’s only in the immediate vicinity of the caustic boundary that the quartic solution becomes less than accurate.

We can also use the quartic solution to simulate images seen through a telescope (i.e., the Einstein ring, or what survives of it, that would appear around a gravitational lens when we look at the lens through a telescope, with a point source of light situated behind the lens). We can see again that it is only in the vicinity of the caustic boundary, where spots of light widen into arcs, that the quartic solution produces artifacts instead of accurately reproducing the image:

This paper was so much joy to write! Also, for the first time in my life, this paper gave us a legitimate, non-pretentious reason to cite something from the 16th century: Cardano’s 1545 treatise in which the solution of the quartic (as well as the cubic) is introduced, together with a discussion of the meaning of taking the square root of negative numbers.

Last fall, I received an intriguing request: I was asked to respond to an article on the topic of dark matter in an online publication that, I admit, I never heard of previously: Inference: International Review of Science.

But when I looked, I saw that the article in question was written by a scientist with impressive and impeccable credentials (Jean-Pierre Luminet, Director of Research at the CNRS Astrophysics Laboratory in Marseille and the Paris Observatory), and other contributors of the magazine included well-known personalities like Lawrence Krauss or Noam Chomsky.

More importantly, the article in question presented an opportunity to write a response that was not critical but constructive: inform the reader that the concept of modified gravity goes far beyond the so-called MOND paradigm, that it is a rich and vibrant field of theoretical research, and that until and unless dark matter is actually discovered, it remains a worthy pursuit. My goal was not self-promotion: I did not even mention my ongoing collaboration with John Moffat on his modified theory of gravity, MOG/STVG. Rather, it was simply to help dispel the prevailing myth that failures of MOND automatically translate into failures of all efforts to create a viable modified theory of gravitation.

I sent my reply and promptly forgot all about it until last month, when I received another e-mail from this publication: a thank you note letting me know that my reply would be published in the upcoming issue.

And indeed it was, as I was just informed earlier today: My Letter to the Editor, On Modified Gravity.

I am glad in particular that it was so well received by the author of the original article on dark matter.

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.

I was having lots of fun working on this paper. It was, needless to say, a lot of work.

Because I’ve been asked a lot about this lately, I thought I’d also share my own take on this calculation in my blog.

Gravitoelectromagnetism (or gravitomagnetism, even gravimagnetism) is the name given to a formalism that shows how weak gravitational fields can be viewed as analogous to electromagnetic fields and how, in particular, the motion of a test particle is governed by equations that are similar to the equations of the electromagnetic Lorentz-force, with gravitational equivalents of the electric and magnetic vector potentials.

Bottom line: no, gravitoelectromagnetism does not explain the anomalous rotation curves of spiral galaxies. The effect is several orders of magnitude too small. Nor is the concept saved by the realization that spacetime is not asymptotically flat, so the boundary conditions must change. That effect, too, is much too small, in fact at least five orders of magnitude too small to be noticeable.

To sketch the key details, the radial acceleration due to gravitoelectromagnetism on a test particle in a circular orbit around a spinning body is given roughly by

$$a=-\frac{4G}{c^2}\frac{Jv}{r^3},$$

where $$J$$ is the angular momentum of the spinning body, $$v$$ is the orbital speed of the test particle and $$r$$ is its orbital radius. When we plug in the numbers for the solar system in its orbit around the Milky Way, $$r\sim 8~{\rm kpc},$$ $$v\sim 200~{\rm km/s}$$ and $$J\sim 10^{67}~{\rm J}\cdot{\rm s},$$ we get

$$a\sim 4\times 10^{-16}~{\rm m}/{\rm s}^2.$$

This is roughly 400,000 times smaller than the centrifugal acceleration of the solar system in its orbit around the Milky Way, which is $$\sim 1.6\times 10^{-10}~{\rm m}/{\rm s}^2.$$
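Plugging in the numbers (a Python sketch; the Milky Way’s angular momentum and the Sun’s orbital parameters are the rough, order-of-magnitude values quoted above):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
J = 1e67             # angular momentum of the Milky Way, J*s (rough estimate)
v = 2.0e5            # orbital speed of the solar system, m/s (~200 km/s)
r = 8 * 3.086e19     # orbital radius: 8 kpc in meters

# Magnitude of the gravitoelectromagnetic acceleration vs. the ordinary
# centripetal acceleration of the orbit.
a_gem = 4 * G * J * v / (c**2 * r**3)
a_cf = v**2 / r

print(f"a_gem = {a_gem:.1e} m/s^2, a_cf = {a_cf:.1e} m/s^2")
print(f"ratio: {a_cf / a_gem:.0f}")   # a few hundred thousand
```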

Taking into account that our universe is not flat, i.e., deviations from the flat spacetime metric approach unity at the comoving distance of $$\sim 15~{\rm Gpc},$$ only introduces a similarly small contribution on the scale of a galaxy, of $${\cal O}(10^{-6})$$ at $$\sim 15~{\rm kpc}.$$

A more detailed version of this calculation is available on my Web site.