May 13 2021
 

Last fall, I received an intriguing request: I was asked to respond to an article on the topic of dark matter in an online publication that, I admit, I had never heard of before: Inference: International Review of Science.

But when I looked, I saw that the article in question was written by a scientist with impressive and impeccable credentials (Jean-Pierre Luminet, Director of Research at the CNRS Astrophysics Laboratory in Marseille and the Paris Observatory), and that other contributors to the magazine included well-known personalities such as Lawrence Krauss and Noam Chomsky.

More importantly, the article in question presented an opportunity to write a response that was not critical but constructive: inform the reader that the concept of modified gravity goes far beyond the so-called MOND paradigm, that it is a rich and vibrant field of theoretical research, and that until and unless dark matter is actually discovered, it remains a worthy pursuit. My goal was not self-promotion: I did not even mention my ongoing collaboration with John Moffat on his modified theory of gravity, MOG/STVG. Rather, it was simply to help dispel the prevailing myth that failures of MOND automatically translate into failures of all efforts to create a viable modified theory of gravitation.

I sent my reply and promptly forgot all about it until last month, when I received another e-mail from this publication: a thank you note letting me know that my reply would be published in the upcoming issue.

And indeed it was, as I was just informed earlier today: My Letter to the Editor, On Modified Gravity.

I am glad in particular that it was so well received by the author of the original article on dark matter.

 Posted by at 4:38 pm
Mar 14 2021
 

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.

I had lots of fun working on this paper. It was also, needless to say, a lot of work.

 Posted by at 9:18 pm
Mar 14 2021
 

Because I’ve been asked a lot about this lately, I thought I’d also share my own take on this calculation in my blog.

Gravitoelectromagnetism (or gravitomagnetism, even gravimagnetism) is the name given to a formalism that shows how weak gravitational fields can be viewed as analogous to electromagnetic fields and how, in particular, the motion of a test particle is governed by equations similar to those of the electromagnetic Lorentz force, with gravitational analogues of the electric and magnetic potentials.

Bottom line: no, gravitoelectromagnetism does not explain the anomalous rotation curves of spiral galaxies. The effect is several orders of magnitude too small. Nor is the concept saved by the realization that spacetime is not asymptotically flat, so the boundary conditions must change. That effect, too, is much too small, at least five orders of magnitude too small in fact to be noticeable.

To sketch the key details, the radial acceleration on a test particle due to gravitoelectromagnetism in circular orbit around a spinning body is given roughly by

$$a=-\frac{4G}{c^2}\frac{Jv}{r^3},$$

where \(J\) is the angular momentum of the spinning body, \(r\) is the orbital radius and \(v\) is the orbital speed of the test particle. When we plug in the numbers for the solar system orbiting the Milky Way, \(r\sim 8~{\rm kpc}\) and \(J\sim 10^{67}~{\rm J}\cdot{\rm s}\), we get

$$a\sim 4\times 10^{-16}~{\rm m}/{\rm s}^2.$$

This is roughly 400,000 times smaller than the centrifugal acceleration of the solar system in its orbit around the Milky Way, which is \(\sim 1.6\times 10^{-10}~{\rm m}/{\rm s}^2.\)
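For those who want to check the arithmetic, here is a quick back-of-the-envelope script. (The orbital speed of the solar system, which I take to be roughly 200 km/s, is an assumed round value; it is not quoted above.)

```python
# Quick sanity check of the numbers quoted above. The orbital speed v ~ 200 km/s is an
# assumed round figure; G, c and the kpc are standard constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
kpc = 3.086e19       # kiloparsec in meters

J = 1e67             # angular momentum of the Milky Way, J*s (rough figure from above)
r = 8 * kpc          # galactocentric distance of the solar system, m
v = 2.0e5            # assumed orbital speed of the solar system, m/s

a_gem = 4 * G / c**2 * J * v / r**3    # gravitoelectromagnetic acceleration
a_cf = v**2 / r                        # ordinary centripetal acceleration

print(f"a_gem ~ {a_gem:.1e} m/s^2")    # ~ 4e-16 m/s^2
print(f"a_cf  ~ {a_cf:.1e} m/s^2")     # ~ 1.6e-10 m/s^2
print(f"ratio ~ {a_cf / a_gem:.0f}")   # roughly 400,000
```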

Taking into account that our spacetime is not asymptotically flat, i.e., that deviations from the flat spacetime metric approach unity at the comoving distance of \(\sim 15~{\rm Gpc},\) only introduces a similarly small contribution on the scale of a galaxy, of \({\cal O}(10^{-6})\) at \(\sim 15~{\rm kpc}.\)

A more detailed version of this calculation is available on my Web site.

 Posted by at 1:14 pm
Mar 01 2021
 

Now it is time for me to be bold and contrarian. And for a change, write about physics in my blog.

From time to time, even noted physicists express their opinion in public that we do not understand quantum physics. In the professional literature, they write about the “measurement problem”; in public, they continue to muse about the meaning of measurement, whether or not consciousness is involved, and the rest of a debate that has continued unabated for more than a century.

Whether it is my arrogance or ignorance, however, when I read such stuff, I beg to differ. I feel like the alien Narim in the television series Stargate SG-1 in a conversation with Captain (and astrophysicist) Samantha Carter about the name of a cat:

CARTER: Uh, see, there was an Earth physicist by the name of Erwin Schrödinger. He had this theoretical experiment. Put a cat in a box, add a can of poison gas, activated by the decay of a radioactive atom, and close the box.
NARIM: Sounds like a cruel man.
CARTER: It was just a theory. He never really did it. He said that if he did do it at any one instant, the cat would be both dead and alive at the same time.
NARIM: Ah! Kulivrian physics. An atom state is indeterminate until measured by an outside observer.
CARTER: We call it quantum physics. You know the theory?
NARIM: Yeah, I’ve studied it… in among other misconceptions of elementary science.
CARTER: Misconception? You telling me that you guys have licked quantum physics?

What I mean is… Yes, in 2021, we “licked” quantum physics. Things that were mysterious in the middle of the 20th century aren’t (or at least, shouldn’t be) quite as mysterious in the third decade of the 21st century.

OK, let me explain by comparing two thought experiments: Schrödinger’s cat vs. the famous two-slit experiment.

The two-slit experiment first. An electron is fired from a cathode. It encounters a screen with two slits. Past that screen, it hits a fluorescent screen where the location of its arrival is recorded. Even if we fire one electron at a time, the arrival locations, seemingly random, will form a wave-like interference pattern. The explanation offered by quantum physics is that en route, the electron had no classically determined position (no position eigenstate, as physicists would say). Its position was a combination, a so-called superposition of many possible position states, so it really did go through both slits at the same time. En route, its wavefunction interfered with itself, resulting in the pattern of probabilities that was then mapped by the recorded arrival locations on the fluorescent screen.

Now on to the cat: We place that poor feline into a box together with a radioactive atom and an apparatus that breaks a vial of poison gas if the atom decays. We wait for one half-life of that atom, so there is a 50-50 chance that decay has occurred. At this point, the atom is in a superposition of intact vs. decayed, and therefore, the story goes, the cat will also be in a superposition of being dead and alive. Only by opening the box and looking inside do we “collapse the wavefunction”, determining the actual state of the cat.

Can you spot a crucial difference between these two experiments, though? Let me explain.

In the first experiment involving electrons, knowledge of the final position (where the electron arrives on the screen) does not allow us to reconstruct the classical path that the electron took. It had no classical path. It really was in a superposition of many possible locations while en route.

In the second experiment involving the cat, knowledge of its final state does permit us to reconstruct its prior state. If the cat is alive, we have no doubt that it was alive all along. If it is dead, an experienced veterinarian could determine the moment of death. (Or just leave a video camera and a clock in the box along with the cat.) The cat did have a classical state all throughout the experiment, we just didn’t know what it was until we opened the box and observed its state.

The crucial difference, then, is summed up thus: Ignorance of a classical state is not the same as the absence of a classical state. Whereas in the second experiment, we are simply ignorant of the cat’s state, in the first experiment, the electron has no classical state of position at all.

These two thought experiments, I think, tell us everything we need to know about this so-called “measurement problem”. No, it does not involve consciousness. No, it does not require any “act of observation”. And most importantly, it does not involve any collapse of the wavefunction when you really think it through. More about that later.

What we call measurement is simply interaction by the quantum system with a classical object. Of course we know that nothing really is classical. Fluorescent screens, video cameras, cats, humans are all made of a very large but finite number of quantum particles. But for all practical (measurable, observable) intents and purposes all these things are classical. That is to say, these things are (my expression) almost in an eigenstate almost all the time. Emphasis on “almost”: it is as near to certainty as you can possibly imagine, deviating from certainty only after the hundredth, the thousandth, the trillionth or whichever decimal digit.

Interacting with a classical object confines the quantum system to an eigenstate. Now this is where things really get tricky and old school at the same time. To explain, I must invoke a principle from classical, Lagrangian physics: the principle of least action. Almost all of physics (including classical mechanics, electrodynamics, even general relativity) can be derived from a so-called action principle, the idea that the system evolves from a known initial state to a known final state in a manner such that a number that characterizes the system (its “action”) is minimal.

The action principle sounds counterintuitive to many students of physics when they first encounter it, as it presupposes knowledge of the final state. But this really is simple math if you are familiar with second-order differential equations. A unique solution to such an equation can be specified in two ways. Either we specify the value of the unknown function at two different points, or we specify the value of the unknown function and its first derivative at one point. The former corresponds to Lagrangian physics; the latter, to Hamiltonian physics.
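To make the distinction concrete, here is a toy sketch, using the harmonic oscillator \(x''=-x\) as a stand-in (nothing quantum about it): the same equation is pinned down once by values at two points and once by a value and slope at one point, and both prescriptions single out the same solution.

```python
# A toy illustration: the same second-order ODE, x'' = -x, pinned down two ways.
import numpy as np
from scipy.integrate import solve_ivp, solve_bvp

# "Hamiltonian" prescription: value and first derivative at one point, x(0) = 0, x'(0) = 1.
ivp = solve_ivp(lambda t, y: [y[1], -y[0]], (0, np.pi / 2), [0.0, 1.0], dense_output=True)

# "Lagrangian" prescription: values at two points, x(0) = 0 and x(pi/2) = 1.
t_mesh = np.linspace(0, np.pi / 2, 50)
bvp = solve_bvp(lambda t, y: np.vstack([y[1], -y[0]]),
                lambda ya, yb: np.array([ya[0], yb[0] - 1.0]),
                t_mesh, np.zeros((2, t_mesh.size)))

# Both single out the same solution, x(t) = sin(t):
print(ivp.sol(np.pi / 4)[0], bvp.sol(np.pi / 4)[0], np.sin(np.pi / 4))
```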

This works well in the context of classical physics. Even though we develop the equations of motion using Lagrangian physics, we do so only in principle. Then we switch over to Hamiltonian physics. Using observed values of the unknown function and its first derivative (think of these as positions and velocities) we solve the equations of motion, predicting the future state of the system.

This approach hits a snag when it comes to quantum physics: the nature of the unknown function is such that its value and its first derivative cannot both be determined as ordinary numbers at the same time. So while Lagrangian physics still works well in the quantum realm, Hamiltonian physics does not. But Lagrangian physics implies knowledge of the future, final state. This is what we mean when we pronounce that quantum physics is fundamentally nonlocal.

Oh, did I just say that Hamiltonian physics doesn’t work in the quantum realm? But then why is it that every quantum physics textbook begins, pretty much, with the Hamiltonian? Schrödinger’s famous equation, for starters, is just the quantum version of that Hamiltonian!

Aha! This is where the culprit is. With the Hamiltonian approach, we begin with presumed knowledge of initial positions and velocities (values and first derivatives of the unknown functions). Knowledge we do not have. So we evolve the system using incomplete knowledge. Then, when it comes to the measurement, we invoke our deus ex machina. Like a bad birthday party surprise, we open the magic box, pull out our “measurement apparatus” (which we pretended to not even know about up until this moment), confine the quantum system to a specific measurement value, retroactively rewrite the description of our system with the apparatus now present all along, and call this discontinuous change in the system’s description “wavefunction collapse”.

And then we spend a century arguing about its various interpretations instead of recognizing that the presumed collapse was never a physical process: rather, it amounts to us changing how we describe the system.

This is the nonsense for which I have no use, even if it makes me sound both arrogant and ignorant at the same time.


To offer a bit of a technical background to support the above (see my Web site for additional technical details): A quantum theory can be constructed starting with classical physics in a surprisingly straightforward manner. We start with the Hamiltonian (I know!), written in the following generic form:

$$H = \frac{{\bf p}^2}{2m} + V({\bf q}),$$

where \({\bf p}\) are generalized momenta, \({\bf q}\) are generalized positions and \(m\) is mass.

We multiply this equation by the unit complex number \(\psi=e^{i({\bf p}\cdot{\bf q}-Ht)/\hbar}.\) We are allowed to do this trivial bit of algebra with impunity, as this factor is never zero.

Next, we notice the identities, \({\bf p}\psi=-i\hbar\nabla\psi,\) \(H\psi=i\hbar\partial_t\psi.\) Using these identities, we rewrite the equation as

$$i\hbar\partial_t\psi=\left[-\frac{\hbar^2}{2m}\nabla^2+V({\bf q})\right]\psi.$$

There you have it, the time-dependent Schrödinger equation in its full glory. Or… not quite, not yet. It is formally Schrödinger’s equation but the function \(\psi\) is not some unknown function; we constructed it from the positions and momenta. But here is the thing: If two functions, \(\psi_1\) and \(\psi_2,\) are solutions of this equation, then because the equation is linear and homogeneous in \(\psi,\) their linear combinations are also solutions. But these linear combinations make no sense in classical physics: they represent states of the system that are superpositions of classical states (i.e., the electron is now in two or more places at the same time.)
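For the special case of one spatial dimension and a constant potential \(V_0\) (so that \(p\) and \(H\) can be treated as constant parameters), the construction can be checked mechanically; a minimal sympy sketch:

```python
# Minimal check: for constant V = V_0, the constructed psi = exp(i(px - Et)/hbar), with
# E = p^2/2m + V_0 treated as a constant, satisfies the time-dependent Schrodinger equation.
import sympy as sp

x, t, p, m, V0, hbar = sp.symbols('x t p m V_0 hbar', real=True, positive=True)
E = p**2 / (2 * m) + V0
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

lhs = sp.I * hbar * sp.diff(psi, t)                          # i*hbar d_t psi
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi     # Hamiltonian operator acting on psi

print(sp.simplify(lhs - rhs))   # prints 0
```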

Quantum physics begins when we accept these superpositions as valid descriptions of a physical system (as indeed we must, because this is what experiment and observation dictates.)

The presence of a classical apparatus with which the system interacts at some future moment in time is not well captured by the Hamiltonian formalism. But the Lagrangian formalism makes it clear: it selects only those states of the system that are consistent with that interaction. This means indeed that a full quantum mechanical description of the system requires knowledge of the future. The apparent paradox is that this knowledge of the future does not causally influence the past, because the actual evolution of the system remains causal at all times: only the initial description of the system needs to be nonlocal in the same sense in which 19th century Lagrangian physics is nonlocal.

 Posted by at 12:48 pm
Feb 10 2021
 

Sometimes, simple programming mistakes make for interesting glitches.

Take this image:

No, this is not something that a gravitational lens would produce. Or any lens.

Even-numbered multipoles in a gravitational lens produce images that have fourfold symmetry. This allows me to reduce the amount of computation needed to generate an image, as I only need to generate one quarter; the rest are just copied over.

But this is not true for odd-numbered multipoles. This image was supposed to represent the J3 multipole, which would yield a triangular shape.

Unfortunately, I generated it accidentally by assuming the symmetries of an even-numbered multipole.
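Schematically, the shortcut, and the way it backfires for an odd multipole, looks something like the sketch below. This is just an illustrative numpy toy with a made-up stand-in function, not my actual lensing code.

```python
# Compute only one quadrant of a map and mirror it into the other three quadrants.
# This is legitimate only when the map really has fourfold symmetry (even multipoles);
# applying it to an odd multipole such as J3 produces exactly the kind of glitch shown.
import numpy as np

def full_map_from_quadrant(quadrant):
    """Assemble the full map from its upper-right quadrant, assuming fourfold symmetry."""
    top = np.hstack([np.fliplr(quadrant), quadrant])   # mirror left-right
    return np.vstack([np.flipud(top), top])            # mirror top-bottom

# A toy stand-in for the PSF amplitude on one quadrant (not the real thing):
n = 200
x, y = np.meshgrid(np.linspace(0, 2, n), np.linspace(0, 2, n))
quadrant = np.cos(4 * np.arctan2(y, x))**2 * np.exp(-(x**2 + y**2))

full = full_map_from_quadrant(quadrant)
print(full.shape)   # (400, 400)
```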

I kind of like the result; even though it is of course scientifically worthless, it still looks neat.

 Posted by at 12:36 pm
Dec 15 2020
 

A very nice article about our work on the Solar Gravitational Lens was published a few days ago on Universe Today, on account of our recent preprint, which shows quantitative results assessing the impact of image reconstruction on signal and noise.

Because the SGL is such an imperfect lens, the noise penalty is substantial. However, as it turns out, it is much reduced when the projected image area is large, such as when an exoplanet in a nearby star system is targeted.

While this is good news, the Sun’s gravitational field has other imperfections. We are currently working on modeling these and assessing their impact on noise. Next comes the problem of imaging a moving target: an exoplanet that spins, which is illuminated from varying directions, and which may have varying surface features (clouds, vegetation, etc.) Accounting for all these effects is essential if we wish to translate basic theory into sound scientific and engineering requirements.

So, the fun continues. For now, it was nice to see this piece in Universe Today.

 Posted by at 11:08 pm
Sep 03 2020
 

Tonight, Slava Turyshev sent me a link to an article that was actually published three months ago on medium.com but had until now escaped our attention.

It is a very nice summary of the work that we have been doing on the Solar Gravitational Lens to date.

It really captures the essence of our work and the challenges that we have been looking at.

And there is so much more to do! Countless more things to tackle: image reconstruction of a moving target, imperfections of the solar gravitational field, precision of navigation… not to mention the simple, basic challenge of attempting a deep space mission to a distance four times greater than anything to date, lasting several decades.

Yes, it can be done. No, it’s not easy. But it’s a worthy challenge.

 Posted by at 10:54 pm
Jul 31 2020
 

A few weeks ago, Christian Ready published a beautiful video on his YouTube channel, Launch Pad Astronomy. In this episode, he described in detail how the Solar Gravitational Lens (SGL) works, and also our efforts so far.

I like this video very much. Especially the part that begins at 10:28, where Christian describes how the SGL can be used for image acquisition. The entire video is well worth seeing, but this segment in particular does a better job than we were ever able to do with words alone, explaining how the Sun projects an image of a distant planet onto a square-kilometer-sized area, and how this image is scanned, one imaginary pixel at a time, by measuring the brightness of the Einstein ring around the Sun as seen from each pixel location.

We now understand this process well, but many more challenges remain. These include, in no particular order, deviations of the Sun from spherical symmetry, minor variations in the brightness of the solar corona, the relative motion of the observing probe, Sun, exosolar system and target planet therein, changing illumination of the target, rotation of the target, changing surface features (weather, perhaps vegetation) of the target, and the devil knows what else.

Even so, lately I have become reasonably confident, based on my own simulation work and our signal-to-noise estimates, as well as a deconvolution approach under development that takes some of the aforementioned issues into consideration, that a high-resolution image of a distant planet is, in fact, obtainable using the SGL.

A lot more work remains; the fun has only just begun. But I am immensely proud to be able to contribute to this effort.

 Posted by at 7:41 pm
Jul 16 2020
 

Seventy-five years ago this morning, a false dawn greeted the New Mexico desert near Alamogordo.

At 5:29 in the morning, the device informally known as “the gadget” exploded.

“The gadget” was a plutonium bomb with the explosive power of about 22 kilotons of TNT. It was the first nuclear explosion on planet Earth. It marked the beginning of the nuclear era.

I can only imagine what it must have been like, being part of that effort, being present in the pre-dawn hours, back in 1945. The war in Europe just ended. The war in the Pacific was still raging. This was the world’s first high technology war, fought over the horizon, fought with radio waves, and soon, to be fought with nuclear power. Yet there were so many unknowns! The Trinity test was the culmination of years of frantic effort. The outcome was by no means assured, yet the consequences were clear to all: a successful test would mean that war would never be the same. The world would never be the same.

And then, the most surreal of things happens: minutes before the planned detonation, in the pre-dawn darkness, the intercom system picks up a faint signal from a local radio station, and music starts playing. It’s almost as if reality was mimicking the atmosphere of yet-to-be-invented computer games.

When the explosion happened, the only major surprise was that the detonation was much brighter than anyone had expected. Otherwise, things unfolded pretty much as anticipated. “The gadget” worked. Success cleared the way for the deployment of the (as yet untested) simpler uranium bomb to be dropped on Hiroshima three weeks later, followed by the twin of the Trinity gadget, which ended up destroying much of Nagasaki. The human cost was staggering, yet we must not forget that it would have been dwarfed by the costs of a ground invasion of the Japanese home islands. It was a means to shorten the war, a war not started by the United States. No responsible commander-in-chief could have made a decision other than the one Truman made when he approved the use of the weapons against Imperial Japan.

And perhaps the horrors seen in those two cities played a role in creating a world in which the last use of a nuclear weapon in anger occurred nearly 75 years ago, on August 9, 1945. No one would have predicted back then that no nuclear weapons would be deployed in war in the coming three quarters of a century. Yet here we are, in 2020, struggling with a pandemic, struggling with populism and other forces undermining our world order, yet still largely peaceful, living in a golden age unprecedented in human history.

Perhaps Trinity should serve as a reminder that peace and prosperity can be fragile.

 Posted by at 12:52 pm
May 19 2020
 

One of the most fortunate moments in my life occurred in the fall of 2005, when I first bumped into John Moffat, a physicist from The Perimeter Institute in Waterloo, Ontario, Canada, when we both attended the first Pioneer Anomaly conference hosted by the International Space Science Institute in Bern, Switzerland.

This chance encounter turned into a 15-year collaboration and friendship. It was, to me, immensely beneficial: I learned a lot from John who, in his long professional career, has met nearly every one of the giants of 20th century physics, even as he made his own considerable contributions to diverse areas ranging from particle physics to gravitation.

In the past decade, John also wrote a few books for a general audience. His latest, The Shadow of the Black Hole, is about to be published; it can already be preordered on Amazon. In their reviews, Greg Landsberg (CERN), Michael Landry (LIGO Hanford) and Neil Cornish (eXtreme Gravity Institute) praise the book. As I was one of John’s early proofreaders, I figured I’d add my own.

John began working on this manuscript shortly after the announcement by the LIGO project of the first unambiguous direct detection of gravitational waves from a distant cosmic event. This was a momentous discovery, opening a new chapter in the history of astronomy, while at the same time confirming a fundamental prediction of Einstein’s general relativity. Meanwhile, the physics world was waiting with bated breath for another result: the Event Horizon Telescope collaboration’s attempt to image, using a worldwide network of radio telescopes, either the supermassive black hole near the center of our own Milky Way, or the much larger supermassive black hole near the center of the nearby galaxy M87.

Bookended by these two historic discoveries, John’s narrative invites the reader on a journey to understand the nature of black holes, these most enigmatic objects in our universe. The adventure begins in 1784, when the Reverend John Michell, a Cambridge professor, speculated about stars so massive and compact that even light would not be able to escape from their surfaces. The story progresses to the 20th century, the prediction of black holes by general relativity, and the strange, often counterintuitive results that arise when our knowledge of thermodynamics and quantum physics is applied to these objects. After a brief detour into the realm of science-fiction, John’s account returns to the hard reality of observational science, as he explains how gravitational waves can be detected and how they fit into both the standard theory of gravitation and its proposed extensions or modifications. Finally, John moves on to discuss how the Event Horizon Telescope works and how it was able to create, for the very first time, an actual image of the black hole’s shadow, cast against the “light” (radio waves) from its accretion disk.

John’s writing is entertaining, informative, and a delight to follow as he accompanies the reader on this fantastic journey. True, I am not an unbiased critic. But don’t just take my word for it; read those reviews I mentioned at the beginning of this post, by preeminent physicists. In any case, I wholeheartedly recommend The Shadow of the Black Hole, along with John’s earlier books, to anyone with an interest in physics, especially the physics of black holes.

 Posted by at 10:31 pm
May 11 2020
 

Heaven knows why I sometimes get confused by the simplest things.

In this case, the conversion between two commonly used cosmological coordinate systems: Comoving coordinates vs. coordinates that are, well, not comoving, in which cosmic expansion is ascribed to time dilation effects instead.

In the standard coordinates that are used to describe the homogeneous, isotropic universe of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, the metric is given by

$$ds^2=dt^2-a^2dR^2,$$

where \(a=a(t)\) is a function of the time coordinate, and \(R\) represents the triplet of spatial coordinates: e.g., \(dR^2=dx^2+dy^2+dz^2.\)

I want to transform this using \(R’=aR,\) i.e., transform away the time-dependent coefficient in front of the spatial term in the metric. The confusion comes because for some reason, I always manage to convince myself that I also have to make the simultaneous replacement \(dt’=a^{-1}dt.\)

I do not. This is nonsense. I just need to introduce \(dR’\). The rest then presents itself automatically:

$$\begin{align*}
R’&=aR,\\
dR&=d(a^{-1}R’)=-a^{-2}\dot{a}R’dt+a^{-1}dR’,\\
ds^2&=dt^2-a^2[-a^{-2}\dot{a}R’dt+a^{-1}dR’]^2\\
&=(1-a^{-2}\dot{a}^2{R’}^2)dt^2+2a^{-1}\dot{a}R’dtdR’-d{R’}^2\\
&=(1-H^2{R’}^2)dt^2+2HR’dtdR’-d{R’}^2,
\end{align*}$$

where \(H=\dot{a}/a\) as usual.
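For good measure, the algebra can be verified mechanically. Here is a minimal sympy sketch, treating the coordinate differentials \(dt\) and \(dR’\) as formal symbols:

```python
# Verify that dt^2 - a^2 dR^2, with R = R'/a, equals (1 - H^2 R'^2) dt^2 + 2 H R' dt dR' - dR'^2.
import sympy as sp

t, Rp, dt, dRp = sp.symbols("t R' dt dR'", real=True)
a = sp.Function('a')(t)
H = sp.diff(a, t) / a

R = Rp / a                                        # R expressed in the new coordinate
dR = sp.diff(R, t) * dt + sp.diff(R, Rp) * dRp    # its differential

ds2 = sp.expand(dt**2 - a**2 * dR**2)
target = sp.expand((1 - H**2 * Rp**2) * dt**2 + 2 * H * Rp * dt * dRp - dRp**2)

print(sp.simplify(ds2 - target))   # prints 0
```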

OK, now that I recorded this here in my blog for posterity, perhaps the next time I need it, I’ll remember where to find it. For instance, the next time I manage to stumble upon one of my old Quora answers that, for five and a half years, advertised my stupidity to the world by presenting an incorrect answer on this topic.

This, incidentally, would serve as a suitable coordinate system representing the reference frame of an observer at the origin. It also demonstrates that such an observer sees an apparent horizon, the cosmological horizon, given by \(1-H^2{R’}^2=0,\) i.e., \(R’=H^{-1},\) the distance characterized by the inverse of the Hubble parameter.

 Posted by at 7:35 pm
Feb 17 2020
 

Our most comprehensive paper yet on the Solar Gravitational Lens is now online.

This was a difficult paper to write, but I think that, in the end, it was well worth the effort.

We are still investigating the spherical Sun (the gravitational field of the real Sun deviates ever so slightly from spherical symmetry, and that can, or rather will, have measurable effects) and we are still considering a stationary target (as opposed to a planet with changing illumination and surface features), but in this paper we now cover the entire image formation process, including models of what a telescope sees in the SGL’s focal region, how such observations can be stitched together to form an image, and how that image compares against the inevitable noise due to the low photon count and the bright solar corona.

 Posted by at 11:37 pm
Oct 11 2019
 

I just came across this XKCD comic.

Though I can happily report that so far I have managed to avoid getting hit by a truck, it is a situation in which I have found myself quite a number of times.

In fact, ever since I’ve seen this comic an hour or so ago, I’ve been wondering about the resistor network. Thankfully, in the era of the Internet and Google, puzzles like this won’t keep you awake at night; well-reasoned solutions are readily available.

Anyhow, just in case anyone wonders, the answer is 4/π − 1/2 ohms.
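The value can also be approximated numerically, by solving a large but finite grid of 1-ohm resistors; here is a rough brute-force sketch (not the elegant closed-form derivation):

```python
# Approximate the infinite-grid resistance between two nodes a knight's move apart by
# solving a finite N x N grid of 1-ohm resistors (boundary effects shrink as N grows).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 201
def idx(i, j): return i * N + j

rows, cols, vals = [], [], []
for i in range(N):
    for j in range(N):
        for di, dj in ((1, 0), (0, 1)):        # right and down neighbors
            ii, jj = i + di, j + dj
            if ii < N and jj < N:
                a, b = idx(i, j), idx(ii, jj)
                # graph Laplacian entries for a 1-ohm resistor between nodes a and b
                rows += [a, b, a, b]; cols += [a, b, b, a]; vals += [1, 1, -1, -1]

L = sp.csr_matrix((vals, (rows, cols)), shape=(N * N, N * N)).tolil()

c = N // 2
a, b = idx(c, c), idx(c + 2, c + 1)            # two nodes a knight's move apart
I = np.zeros(N * N); I[a], I[b] = 1.0, -1.0    # inject 1 A in, 1 A out

L[0, :] = 0; L[0, 0] = 1; I[0] = 0             # ground one corner node
v = spla.spsolve(L.tocsr(), I)

print(f"finite-grid estimate: {v[a] - v[b]:.4f} ohm (exact: {4 / np.pi - 0.5:.4f})")
```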

 Posted by at 12:10 am
Aug 07 2019
 

Yesterday, we posted our latest paper on arXiv. Again, it is a paper about the solar gravitational lens.

This time around, our focus was on imaging an extended object, which of course can be trivially modeled as a multitude of point sources.

However, it is a multitude of point sources at a finite distance from the Sun.

This adds a twist. Previously, we modeled light from sources located at infinity: Incident light was in the form of plane waves.

But when the point source is at a finite distance, light from it comes in the form of spherical waves.

Now it is true that at a very large distance from the source, considering only a narrow beam of light, we can approximate those spherical waves as plane waves (paraxial approximation). But it still leaves us with the altered geometry.

But this is where a second observation becomes significant: As we can intuit, and as it is made evident through the use of the eikonal approximation, most of the time we can restrict our focus to a single ray of light: a ray that, when deflected by the Sun, defines a plane. The investigation can then proceed in this plane.

The image above depicts two such planes, corresponding to the red and the green ray of light.

These rays do meet, however, at the axis of symmetry of the problem, which we call the optical axis. In the vicinity of this axis, the symmetry of the problem is recovered, and the result no longer depends on the azimuthal angle that defines the plane in question.

To make a long story short, this allows us to reuse our previous results, by introducing the additional angle β, which determines, among other things, the additional distance (compared to parallel rays of light coming from infinity) that these light rays travel before meeting at the optical axis.

This is what our latest paper describes, in full detail.

 Posted by at 9:10 pm
May 31 2019
 

Here is a thought that has been bothering me for some time.

We live in a universe that is subject to accelerating expansion. Galaxies that are not bound gravitationally to our Local Group will ultimately vanish from sight, accelerating away until the combination of distance and increasing redshift will make their light undetectable by any imaginable instrument.

Similarly, accelerating expansion means that there will be a time in the very distant future when the cosmic microwave background radiation itself will become completely undetectable by any conceivable technological means.

In this very distant future, the Local Group of galaxies will have merged already into a giant elliptical galaxy. Much of this future galaxy will be dark, as most stars would have run out of fuel already.

But there will still be light. Stars would still occasionally form. Some dwarf stars will continue to shine for trillions of years, using their available fuel at a very slow rate.

Which means that civilizations might still emerge, even in this unimaginably distant future.

And when they do, what will they see?

They will see themselves as living in an “island universe” in an otherwise empty, static cosmos. In short, precisely the kind of cosmos envisioned by many astronomers in the early 1920s, when it was still popular to think of the Milky Way as just such an island universe, not yet recognizing that many of the “spiral nebulae” seen through telescopes are in fact distant galaxies just as large as, if not larger than, the Milky Way.

But these future civilizations will see no such nebulae. There will be no galaxies beyond their “island universe”. No microwave background either. In fact, no sign whatsoever that their universe is evolving, changing with time.

So what would a scientifically advanced future civilization conclude? Surely they would still discover general relativity. But would they believe its predictions of an expanding cosmos, despite the complete lack of evidence? Or would they see that prediction as a failure of the theory, which must be remedied?

In short, how would they ever come into possession of the knowledge that their universe was once young, dense, and full of galaxies, not to mention background radiation?

My guess is that they won’t. They will have no observational evidence, and their theories will reflect what they actually do see (a static, unchanging island universe floating in infinite, empty space).

Which raises the rather unnerving, unpleasant question: To what extent do features already exist in our universe that are similarly unknowable, because they can no longer be detected by any conceivable instrumentation? Is it, in fact, possible to fully understand the physics of the universe, or are we already doomed never to be able to develop a full picture?

I find this question surprisingly unnerving and depressing.

 Posted by at 1:37 am
Apr 09 2019
 

My research is unsupported. That is to say, with the exception of a few conference invitations when my travel costs were covered, I never received a penny for my research on the Pioneer Anomaly and my other research efforts.

Which is fine, I do it for fun after all. Still, in this day and age of crowdfunding, I couldn’t say no to the possibility that others, who find my efforts valuable, might choose to contribute.

Hence my launching of a Patreon page. I hope it is well-received. I have zero experience with crowdfunding, so this really is a first for me. Wish me luck.

 Posted by at 11:09 pm
Jan 16 2019
 

I run across this often: Well-meaning folks read introductory-level texts or watch a few educational videos about physical cosmology and suddenly discover something seemingly profound.

And then, instead of asking themselves why, if it is so easy to stumble upon these results, they haven’t been published already by others, they go ahead and make outlandish claims. (Claims that sometimes land in my Inbox, unsolicited.)

Let me explain what I am talking about.

As it is well known, the rate of expansion of the cosmos is governed by the famous Hubble parameter: \(H\sim 70~{\rm km}/{\rm s}/{\rm Mpc}\). That is to say, two galaxies that are 1 megaparsec (Mpc, about 3 million light years) apart will be flying away from each other at a rate of 70 kilometers a second.

It is possible to convert megaparsecs (a unit of length) into kilometers (another unit of length), so that the lengths cancel out in the definition of \(H\), and we are left with \(H\sim 2.2\times 10^{-18}~{\rm s}^{-1}\), which is one divided by about 14 billion years. In other words, the Hubble parameter is just the inverse of the age of the universe. (It would be exactly the inverse of the age of the universe if the rate of cosmic expansion was constant. It isn’t, but the fact that the expansion was slowing down for the first 9 billion years or so and has been accelerating since kind of averages things out.)

And this, then, leads to the following naive arithmetic. First, given the age of the universe and the speed of light, we can find out the “radius” of the observable universe:

$$a=\dfrac{c}{H},$$

or about 14 billion light years. Inverting this equation, we also get \(H=c/a\).

But the expansion of the cosmos is governed by another equation, the first so-called Friedmann equation, which says that

$$H^2=\dfrac{8\pi G\rho}{3}.$$

Here, \(\rho\) is the density of the universe. The mass within the visible universe, then, is calculated as usual, just using the volume of a sphere of radius \(a\):

$$M=\dfrac{4\pi a^3}{3}\rho.$$

Putting this expression and the expression for \(H\) back into the Friedmann equation, we get the following:

$$a=\dfrac{2GM}{c^2}.$$

But this is just the Schwarzschild radius associated with the mass of the visible universe! Surely, we just discovered something profound here! Perhaps the universe is a black hole!

Well… not exactly. The fact that we got the Schwarzschild radius is no coincidence. The Friedmann equations are, after all, just Einstein’s field equations in disguise, i.e., the exact same equations that yield the formula for the Schwarzschild radius.
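The numbers do, of course, work out; here is the naive arithmetic done numerically, using the round value \(H\sim 70~{\rm km}/{\rm s}/{\rm Mpc}\) from above:

```python
# The naive arithmetic above, worked out numerically with round figures.
import math

G = 6.674e-11          # m^3 kg^-1 s^-2
c = 2.998e8            # m/s
Mpc = 3.086e22         # m
H = 70e3 / Mpc         # Hubble parameter in s^-1, about 2.3e-18

a = c / H                              # "radius" of the observable universe
rho = 3 * H**2 / (8 * math.pi * G)     # density from the first Friedmann equation
M = 4 * math.pi * a**3 / 3 * rho       # mass within a sphere of radius a
r_s = 2 * G * M / c**2                 # Schwarzschild radius of that mass

print(f"a   = {a:.2e} m (~{a / 9.461e15 / 1e9:.0f} billion light years)")
print(f"r_s = {r_s:.2e} m, r_s / a = {r_s / a:.3f}")   # ratio is 1, as the algebra guarantees
```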

Still, the two solutions are qualitatively different. The universe cannot be the interior of a black hole’s event horizon. A black hole is characterized by an unavoidable future singularity, whereas our expanding universe is characterized by a past singularity. At best, the universe may be a time-reversed black hole, i.e., a “white hole”, but even that is dubious. The Schwarzschild solution, after all, is a vacuum solution of Einstein’s field equations, whereas the Friedmann equations describe a matter-filled universe. Nor is there a physical event horizon: the “visible universe” is an observer-dependent concept, and two observers in relative motion, or even two observers some distance apart, will not see the same visible universe.

Nonetheless, these ideas, memes perhaps, show up regularly, in manuscripts submitted to journals of dubious quality, appearing in self-published books, or on the alternative manuscript archive viXra. And there are further variations on the theme. For instance, the so-called Planck power, divided by the Hubble parameter, yields \(2Mc^2\), i.e., twice the mass-energy in the observable universe. This coincidence is especially puzzling to those who work it out numerically, and thus remain oblivious to the fact that the Planck power is one of those Planck units that does not actually contain the Planck constant in its definition, only \(c\) and \(G\). People have also been fooling around with various factors of \(2\), \(\tfrac{1}{2}\) or \(\ln 2\), often based on dodgy information content arguments, coming up with numerical ratios that supposedly replicate the matter, dark matter, and dark energy content.

 Posted by at 10:13 pm
Jan 01 2019
 

Today, I answered a question on Quora about the nature of \(c\), the speed of light, as it appears in the one equation everyone knows, \(E=mc^2.\)

I explained that it is best viewed as a conversion factor between our units of length and time. These units are accidents of history. There is nothing fundamental in Nature about one ten-millionth of the distance from the pole to the equator of the Earth (the original definition of the meter) or about one 86,400th of the length of the Earth’s mean solar day. These units are what they are, in part, because we learned to measure length and time long before we learned that they are aspects of the same thing, spacetime.

And nothing stops us from using units such as light-seconds and seconds to measure space and time; in such units, the value of the speed of light would be just 1, and consequently, it could be dropped from equations altogether. This is precisely what theoretical physicists often do.

But then… I commented that something very similar takes place in aviation, where different units are used to measure horizontal distance (nautical miles, nmi) and altitude (feet, ft). So if you were to calculate the kinetic energy of an airplane (measuring its speed in nmi/s) and its potential energy (measuring the altitude, as well as the gravitational acceleration, in ft) you would need the ft/nmi conversion factor of 6076.12, squared, to convert between the two resulting units of energy.
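To illustrate the bookkeeping with a toy example (the airplane numbers below are made up; only the conversion factor matters):

```python
# Toy example of mixed aviation units: kinetic energy with speed in nmi/s, potential energy
# with altitude and g in feet, reconciled via the ft/nmi factor of 6076.12, squared.
FT_PER_NMI = 6076.12
M_PER_FT = 0.3048

m = 70000.0                # aircraft mass, kg (made up)
v = 250.0 / 3600.0         # 250 knots, expressed in nmi/s
h = 35000.0                # altitude, ft
g = 32.174                 # standard gravity, ft/s^2

ke = 0.5 * m * v**2                    # kg * nmi^2 / s^2
pe = m * g * h                         # kg * ft^2 / s^2
total = ke * FT_PER_NMI**2 + pe        # both terms now in kg * ft^2 / s^2
print(f"KE + PE ~ {total * M_PER_FT**2:.2e} J")   # converted to joules for good measure
```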

As I was writing this answer, though, I stumbled upon a blog entry that discussed the crazy, mixed up units of measure still in use worldwide in aviation. Furlongs per fortnight may pretty much be the only unit that is not used, as just about every other unit of measure pops up, confusing poor pilots everywhere: Meters, feet, kilometers, nautical miles, statute miles, kilograms, pounds, millibars, hectopascals, inches of mercury… you name it, it’s there.

Part of the reason, of course, is the fact that America, alone among industrialized nations, managed to stick to its archaic system of measurements. Which is another historical accident, really. A lot had to do with the timing: metric transition was supposed to take place in the 1970s, set in motion by the Metric Conversion Act, signed into law by Gerald Ford. But the American economy was in a downturn, many Americans felt the nation was under siege, the customary units worked well, and there was a conservative-populist pushback against the metric system… so by 1982, Ronald Reagan disbanded the Metric Board and the transition to metric was officially over. (Or not. The metric system continues to gain ground, whether it is used to measure bullets or Aspirin, soft drinks or street drugs.)

Yet another example similar to the metric system is the historical accident that created the employer-funded healthcare system in the United States that Americans continue to cling to, even as most (all?) other advanced industrial nations transitioned to something more modern, some variant of a single-payer universal healthcare system. It happened in the 1920s, when a Texas hospital managed to strike a deal with public school teachers in Dallas: For 50 cents a month, the hospital picked up the tab for their hospital visits. This arrangement became very popular during the Great Depression when hospitals lost patients who could not afford their hospital care anymore. The idea came to be known as Blue Cross. And that’s how the modern American healthcare system was born.

As I was reading this chain of Web articles, taking me on a tour from Einstein’s \(E=mc^2\) to employer-funded healthcare in America, I was reminded of a 40-year-old British TV series, Connections, created by science historian James Burke. Burke found similar, often uncanny connections between seemingly unrelated topics in history, particularly the history of science and technology.

 Posted by at 2:25 pm
Oct 18 2018
 

Just got back from The Perimeter Institute, where I spent three very short days.

I had good discussions with John Moffat. I again met Barak Shoshany, whom I first encountered on Quora. I attended two very interesting and informative seminar lectures by Emil Mottola on quantum anomalies and the conformal anomaly.

I also gave a brief talk about our research with Slava Turyshev on the Solar Gravitational Lens. I was asked to give an informal talk with no slides. It was a good challenge. I believe I was successful. My talk seemed well received. I was honored to have Neil Turok in the audience, who showed keen interest and asked several insightful questions.

 Posted by at 11:53 pm