Aug 14, 2021
 

I am not happy admitting it, but it’s true: There have been a few occasions in my life when I reacted just like this XKCD cartoon character when I first encountered specific areas of research.

 Posted by at 11:48 am
Jul 24, 2021
 

Can you guess the author with the most physics books on what I call my “primary” bookshelf, the shelf right over my desk where I keep the books that I use the most often?

It would be Steven Weinberg. His 1972 Gravitation and Cosmology remains one of the best books ever on relativity theory, working out details in ways no other book does. His 2010 Cosmology remains a reasonably up-to-date textbook on modern cosmology. And then there is of course the 3-volume Quantum Theory of Fields.

Alas, Weinberg is no longer with us. He passed away yesterday, July 23, at the age of 88.

He will be missed.

 Posted by at 6:27 pm
May 18, 2021
 

We have a new manuscript on arXiv. Its title might raise some eyebrows: Algebraic wave-optical description of a quadrupole gravitational lens.

Say what? Algebra? Wave optics? Yes. It means that in this particular case, namely a gravitational lens described as a gravitational monopole with a quadrupole correction, we were able to find a closed-form description that does not rely on numerical integration, in particular no numerical integration of a rapidly oscillating function.

Key to this solution is a quartic equation. Quartic equations were first solved algebraically back in the 16th century by Italian mathematicians. The formal solution is usually considered to be of little practical value, as it entails cumbersome algebra, and polynomial equations can be routinely and efficiently solved using numerical methods.

But in this case… The amazing thing is that the algebraic solution reveals so much about the physics itself!

Take this figure from our paper, for instance:

On the left is the light projected by the gravitational lens: its so-called point-spread function (PSF), which tells us how light from a point source is distributed by the lens on an imaginary projection screen. On the right? Why, that’s the discriminant of the quartic equation

$$ x^4-2\eta\sin\mu \, x^3+\big(\eta^2-1\big)x^2+\eta\sin\mu \, x+{\textstyle\frac{1}{4}}\sin^2\mu=0, $$

in a plane characterized by polar coordinates \((\eta,\tfrac{1}{2}\mu)\), that is, \(\eta\) as a radial coordinate and \(\tfrac{1}{2}\mu \) as an azimuthal angle. When the discriminant is positive, the equation has four real roots (or two pairs of complex conjugate roots); everywhere else, it has a mix of two real and two complex roots. This direct connection between the algebra and the lensing phenomenon is unexpected and beautiful.
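For the curious, the root structure of this quartic is easy to probe numerically. The sketch below uses numpy; the sample values of \(\eta\) and \(\mu\) are arbitrary, not taken from the paper:

```python
import numpy as np

def quartic_roots(eta, mu):
    """Roots of x^4 - 2*eta*sin(mu)*x^3 + (eta^2 - 1)*x^2
       + eta*sin(mu)*x + sin(mu)^2/4 = 0."""
    s = np.sin(mu)
    coeffs = [1.0, -2.0 * eta * s, eta**2 - 1.0, eta * s, 0.25 * s**2]
    return np.roots(coeffs)

def n_real(eta, mu, tol=1e-7):
    """Number of (numerically) real roots: 0, 2 or 4 for a real quartic."""
    return int(np.sum(np.abs(quartic_roots(eta, mu).imag) < tol))
```

Scanning `n_real` over the \((\eta,\tfrac{1}{2}\mu)\) plane reproduces the regions that the sign of the discriminant delineates.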

The full set of real roots of this equation can be shown in the form of an animation:

Of course one must read the paper in order for this animation to make sense, but I think it’s beautiful.

How good is this quartic solution? It is uncannily accurate. Here is a comparison of the PSF computed using the quartic solution and also using numerical integration, as well as some enlarged details from the so-called caustic boundary:

It’s only in the immediate vicinity of the caustic boundary that the quartic solution becomes less than accurate.

We can also use the quartic solution to simulate images seen through a telescope (i.e., the Einstein ring, or what survives of it, that would appear around a gravitational lens when we look at the lens through a telescope, with a point source of light situated behind the lens). We can see, again, that it is only in the vicinity of the caustic boundary that the quartic solution produces artifacts instead of accurately reproducing how spots of light widen into arcs:

This paper was such a joy to write! Also, for the first time in my life, this paper gave us a legitimate, non-pretentious reason to cite something from the 16th century: Cardano’s 1545 treatise, in which solutions of the quartic (as well as the cubic) are introduced, together with a discussion of the meaning of taking the square root of negative numbers.

 Posted by at 5:35 pm
May 13, 2021
 

Last fall, I received an intriguing request: I was asked to respond to an article on the topic of dark matter in an online publication that, I admit, I had never heard of previously: Inference: International Review of Science.

But when I looked, I saw that the article in question was written by a scientist with impressive and impeccable credentials (Jean-Pierre Luminet, Director of Research at the CNRS Astrophysics Laboratory in Marseille and the Paris Observatory), and that other contributors to the magazine included well-known personalities such as Lawrence Krauss and Noam Chomsky.

More importantly, the article in question presented an opportunity to write a response that was not critical but constructive: inform the reader that the concept of modified gravity goes far beyond the so-called MOND paradigm, that it is a rich and vibrant field of theoretical research, and that until and unless dark matter is actually discovered, it remains a worthy pursuit. My goal was not self-promotion: I did not even mention my ongoing collaboration with John Moffat on his modified theory of gravity, MOG/STVG. Rather, it was simply to help dispel the prevailing myth that failures of MOND automatically translate into failures of all efforts to create a viable modified theory of gravitation.

I sent my reply and promptly forgot all about it until last month, when I received another e-mail from this publication: a thank you note letting me know that my reply would be published in the upcoming issue.

And indeed it was, as I was just informed earlier today: My Letter to the Editor, On Modified Gravity.

I am glad in particular that it was so well received by the author of the original article on dark matter.

 Posted by at 4:38 pm
Mar 14, 2021
 

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.

I had lots of fun working on this paper. It was also, needless to say, a lot of work.

 Posted by at 9:18 pm
Mar 14, 2021
 

Because I’ve been asked a lot about this lately, I thought I’d also share my own take on this calculation in my blog.

Gravitoelectromagnetism (or gravitomagnetism, even gravimagnetism) is the name given to a formalism that shows how weak gravitational fields can be viewed as analogous to electromagnetic fields and how, in particular, the motion of a test particle is governed by equations similar to those of the electromagnetic Lorentz force, with gravitational equivalents of the electric potential and the magnetic vector potential.

Bottom line: no, gravitoelectromagnetism does not explain the anomalous rotation curves of spiral galaxies. The effect is several orders of magnitude too small. Nor is the concept saved by the realization that spacetime is not asymptotically flat, so the boundary conditions must change. That effect, too, is much too small, in fact at least five orders of magnitude too small to be noticeable.

To sketch the key details, the radial acceleration due to gravitoelectromagnetism on a test particle in a circular orbit around a spinning body is given roughly by

$$a=-\frac{4G}{c^2}\frac{Jv}{r^3},$$

where \(v\) is the orbital speed of the test particle and \(r\) is its orbital radius. When we plug in the numbers for the solar system orbiting within the Milky Way, \(v\sim 200~{\rm km/s},\) \(r\sim 8~{\rm kpc}\) and \(J\sim 10^{67}~{\rm J}\cdot{\rm s},\) we get

$$a\sim 4\times 10^{-16}~{\rm m}/{\rm s}^2.$$

This is roughly 400,000 times smaller than the centrifugal acceleration of the solar system in its orbit around the Milky Way, which is \(\sim 1.6\times 10^{-10}~{\rm m}/{\rm s}^2.\)
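For what it’s worth, the arithmetic takes only a few lines to reproduce. This is a rough order-of-magnitude check; the solar system’s orbital speed of roughly 200 km/s is the one input not quoted above:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
kpc = 3.086e19       # m

J = 1e67             # angular momentum of the Milky Way, J*s (rough)
r = 8 * kpc          # galactocentric distance of the solar system
v = 2.0e5            # orbital speed of the solar system, m/s (approx.)

a_gem = 4 * G / c**2 * J * v / r**3   # gravitomagnetic acceleration
a_cent = v**2 / r                     # centrifugal acceleration

print(f"a_gem  ~ {a_gem:.1e} m/s^2")     # ~ 4e-16
print(f"a_cent ~ {a_cent:.1e} m/s^2")    # ~ 1.6e-10
print(f"ratio  ~ {a_cent / a_gem:.0f}")  # roughly 400,000
```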

Taking into account that our universe is not flat, i.e., deviations from the flat spacetime metric approach unity at the comoving distance of \(\sim 15~{\rm Gpc},\) only introduces a similarly small contribution on the scale of a galaxy, of \({\cal O}(10^{-6})\) at \(\sim 15~{\rm kpc}.\)

A more detailed version of this calculation is available on my Web site.

 Posted by at 1:14 pm
Mar 01, 2021
 

Now it is time for me to be bold and contrarian. And for a change, write about physics in my blog.

From time to time, even noted physicists express their opinion in public that we do not understand quantum physics. In the professional literature, they write about the “measurement problem”; in public, they continue to muse about the meaning of measurement, whether or not consciousness is involved, and the rest of a debate that has continued unabated for more than a century.

Whether it is my arrogance or ignorance, however, when I read such stuff, I beg to differ. I feel like the alien Narim in the television series Stargate SG-1 in a conversation with Captain (and astrophysicist) Samantha Carter about the name of a cat:

CARTER: Uh, see, there was an Earth physicist by the name of Erwin Schrödinger. He had this theoretical experiment. Put a cat in a box, add a can of poison gas, activated by the decay of a radioactive atom, and close the box.
NARIM: Sounds like a cruel man.
CARTER: It was just a theory. He never really did it. He said that if he did do it at any one instant, the cat would be both dead and alive at the same time.
NARIM: Ah! Kulivrian physics. An atom state is indeterminate until measured by an outside observer.
CARTER: We call it quantum physics. You know the theory?
NARIM: Yeah, I’ve studied it… in among other misconceptions of elementary science.
CARTER: Misconception? You telling me that you guys have licked quantum physics?

What I mean is… Yes, in 2021, we “licked” quantum physics. Things that were mysterious in the middle of the 20th century aren’t (or at least, shouldn’t be) quite as mysterious in the third decade of the 21st century.

OK, let me explain by comparing two thought experiments: Schrödinger’s cat vs. the famous two-slit experiment.

The two-slit experiment first. An electron is emitted by a cathode. It encounters a screen with two slits. Past that screen, it hits a fluorescent screen where the location of its arrival is recorded. Even if we fire one electron at a time, the arrival locations, seemingly random, will form a wave-like interference pattern. The explanation offered by quantum physics is that en route, the electron had no classically determined position (no position eigenstate, as physicists would say). Its position was a combination, a so-called superposition of many possible position states, so it really did go through both slits at the same time. En route, its wavefunction interfered with itself, resulting in the pattern of probabilities that was then mapped by the recorded arrival locations on the fluorescent screen.
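This statistical buildup of fringes from one-at-a-time arrivals is easy to simulate. The sketch below is a toy model with idealized narrow slits; the wavelength and geometry are made-up values, not a description of any actual experiment:

```python
import numpy as np

rng = np.random.default_rng(42)

lam = 50e-12        # electron de Broglie wavelength, m (made-up)
d = 1e-6            # slit separation, m
L = 1.0             # distance to the screen, m
x = np.linspace(-5e-4, 5e-4, 2001)   # positions on the screen

# Two-slit intensity for narrow slits: I(x) ~ cos^2(pi*d*x/(lam*L)).
intensity = np.cos(np.pi * d * x / (lam * L)) ** 2
prob = intensity / intensity.sum()

# Fire "electrons" one at a time; each lands at one random position.
arrivals = rng.choice(x, size=20000, p=prob)

# The histogram of seemingly random arrivals shows the fringes.
hist, edges = np.histogram(arrivals, bins=100, range=(x[0], x[-1]))
```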

Now on to the cat: We place that poor feline into a box together with a radioactive atom and an apparatus that breaks a vial of poison gas if the atom decays. We wait for one half-life of that atom, making it a 50-50 chance that the decay has occurred. At this point, the atom is in a superposition of intact vs. decayed, and therefore, the story goes, the cat will also be in a superposition of being dead and alive. Only by opening the box and looking inside do we “collapse the wavefunction”, determining the actual state of the cat.

Can you spot a crucial difference between these two experiments, though? Let me explain.

In the first experiment involving electrons, knowledge of the final position (where the electron arrives on the screen) does not allow us to reconstruct the classical path that the electron took. It had no classical path. It really was in a superposition of many possible locations while en route.

In the second experiment involving the cat, knowledge of its final state does permit us to reconstruct its prior state. If the cat is alive, we have no doubt that it was alive all along. If it is dead, an experienced veterinarian could determine the moment of death. (Or just leave a video camera and a clock in the box along with the cat.) The cat did have a classical state all throughout the experiment, we just didn’t know what it was until we opened the box and observed its state.

The crucial difference, then, is summed up thus: Ignorance of a classical state is not the same as the absence of a classical state. Whereas in the second experiment, we are simply ignorant of the cat’s state, in the first experiment, the electron has no classical state of position at all.

These two thought experiments, I think, tell us everything we need to know about this so-called “measurement problem”. No, it does not involve consciousness. No, it does not require any “act of observation”. And most importantly, it does not involve any collapse of the wavefunction when you really think it through. More about that later.

What we call measurement is simply interaction by the quantum system with a classical object. Of course we know that nothing really is classical. Fluorescent screens, video cameras, cats, humans are all made of a very large but finite number of quantum particles. But for all practical (measurable, observable) intents and purposes all these things are classical. That is to say, these things are (my expression) almost in an eigenstate almost all the time. Emphasis on “almost”: it is as near to certainty as you can possibly imagine, deviating from certainty only after the hundredth, the thousandth, the trillionth or whichever decimal digit.

Interacting with a classical object confines the quantum system to an eigenstate. Now this is where things really get tricky and old school at the same time. To explain, I must invoke a principle from classical, Lagrangian physics: the principle of least action. Almost all of physics (including classical mechanics, electrodynamics, even general relativity) can be derived from a so-called action principle, the idea that the system evolves from a known initial state to a known final state in a manner such that a number that characterizes the system (its “action”) is minimal (or, more precisely, stationary).

The action principle sounds counterintuitive to many students of physics when they first encounter it, as it presupposes knowledge of the final state. But this really is simple math if you are familiar with second-order differential equations. A unique solution to such an equation can be specified in two ways. Either we specify the value of the unknown function at two different points, or we specify the value of the unknown function and its first derivative at one point. The former corresponds to Lagrangian physics; the latter, to Hamiltonian physics.
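To make this concrete, here is the same second-order equation, \(\ddot x=-x,\) pinned down both ways using scipy (the harmonic oscillator is merely an illustrative stand-in):

```python
import numpy as np
from scipy.integrate import solve_ivp, solve_bvp

# The oscillator x'' = -x; with x(0)=0, x'(0)=1 the solution is sin(t).

# "Hamiltonian" route: value and first derivative given at one point.
ivp = solve_ivp(lambda t, y: [y[1], -y[0]], (0, np.pi / 2), [0.0, 1.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
x_ivp = ivp.sol(np.pi / 2)[0]          # should be sin(pi/2) = 1

# "Lagrangian" route: values given at two points, x(0)=0 and x(pi/2)=1.
t = np.linspace(0, np.pi / 2, 20)
y0 = np.zeros((2, t.size))             # trivial initial guess
bvp = solve_bvp(lambda t, y: np.vstack((y[1], -y[0])),
                lambda ya, yb: np.array([ya[0], yb[0] - 1.0]), t, y0)
x_bvp = bvp.sol(np.pi / 4)[0]          # should be sin(pi/4)
```

The two routes pin down the very same solution; only the data used to select it differ.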

This works well in the context of classical physics. Even though we develop the equations of motion using Lagrangian physics, we do so only in principle. Then we switch over to Hamiltonian physics. Using observed values of the unknown function and its first derivative (think of these as positions and velocities) we solve the equations of motion, predicting the future state of the system.

This approach hits a snag when it comes to quantum physics: the nature of the unknown function is such that its value and its first derivative cannot both be determined as ordinary numbers at the same time. So while Lagrangian physics still works well in the quantum realm, Hamiltonian physics does not. But Lagrangian physics implies knowledge of the future, final state. This is what we mean when we pronounce that quantum physics is fundamentally nonlocal.

Oh, did I just say that Hamiltonian physics doesn’t work in the quantum realm? But then why is it that every quantum physics textbook begins, pretty much, with the Hamiltonian? Schrödinger’s famous equation, for starters, is just the quantum version of that Hamiltonian!

Aha! This is where the culprit is. With the Hamiltonian approach, we begin with presumed knowledge of initial positions and velocities (values and first derivatives of the unknown functions). Knowledge we do not have. So we evolve the system using incomplete knowledge. Then, when it comes to the measurement, we invoke our deus ex machina. Like a bad birthday party surprise, we open the magic box, pull out our “measurement apparatus” (which we pretended to not even know about up until this moment), confine the quantum system to a specific measurement value, retroactively rewrite the description of our system with the apparatus now present all along, and call this discontinuous change in the system’s description “wavefunction collapse”.

And then we spend a century debating its various interpretations instead of recognizing that the presumed collapse was never a physical process: rather, it amounts to us changing how we describe the system.

This is the nonsense for which I have no use, even if it makes me sound both arrogant and ignorant at the same time.


To offer a bit of a technical background to support the above (see my Web site for additional technical details): A quantum theory can be constructed starting with classical physics in a surprisingly straightforward manner. We start with the Hamiltonian (I know!), written in the following generic form:

$$H = \frac{{\bf p}^2}{2m} + V({\bf q}),$$

where \({\bf p}\) are generalized momenta, \({\bf q}\) are generalized positions and \(m\) is mass.

We multiply this equation by the unit complex number \(\psi=e^{i({\bf p}\cdot{\bf q}-Ht)/\hbar}.\) We are allowed to do this trivial bit of algebra with impunity, as this factor is never zero.

Next, we notice the identities \({\bf p}\psi=-i\hbar\nabla\psi\) and \(H\psi=i\hbar\partial_t\psi.\) Using these identities, we rewrite the equation as

$$i\hbar\partial_t\psi=\left[-\frac{\hbar^2}{2m}\nabla^2+V({\bf q})\right]\psi.$$

There you have it, the time-dependent Schrödinger equation in its full glory. Or… not quite, not yet. It is formally Schrödinger’s equation but the function \(\psi\) is not some unknown function; we constructed it from the positions and momenta. But here is the thing: If two functions, \(\psi_1\) and \(\psi_2,\) are solutions of this equation, then because the equation is linear and homogeneous in \(\psi,\) their linear combinations are also solutions. But these linear combinations make no sense in classical physics: they represent states of the system that are superpositions of classical states (i.e., the electron is now in two or more places at the same time.)
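For the skeptical reader, the identities and the resulting equation are easy to verify symbolically, e.g., with sympy (one spatial dimension and a constant potential \(V_0,\) for brevity):

```python
import sympy as sp

t, x, p, m, hbar = sp.symbols('t x p m hbar', positive=True)
V0 = sp.Symbol('V0', real=True)      # constant potential, for simplicity

H = p**2 / (2 * m) + V0              # classical Hamiltonian
psi = sp.exp(sp.I * (p * x - H * t) / hbar)

# The identities p psi = -i*hbar*d(psi)/dx and H psi = i*hbar*d(psi)/dt:
assert sp.simplify(-sp.I * hbar * sp.diff(psi, x) - p * psi) == 0
assert sp.simplify(sp.I * hbar * sp.diff(psi, t) - H * psi) == 0

# ...and psi indeed satisfies the Schrodinger equation built from them:
lhs = sp.I * hbar * sp.diff(psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V0 * psi
assert sp.simplify(lhs - rhs) == 0
```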

Quantum physics begins when we accept these superpositions as valid descriptions of a physical system (as indeed we must, because this is what experiment and observation dictate.)

The presence of a classical apparatus with which the system interacts at some future moment in time is not well captured by the Hamiltonian formalism. But the Lagrangian formalism makes it clear: it selects only those states of the system that are consistent with that interaction. This means indeed that a full quantum mechanical description of the system requires knowledge of the future. The apparent paradox is that this knowledge of the future does not causally influence the past, because the actual evolution of the system remains causal at all times: only the initial description of the system needs to be nonlocal in the same sense in which 19th century Lagrangian physics is nonlocal.

 Posted by at 12:48 pm
Feb 10, 2021
 

Sometimes, simple programming mistakes make for interesting glitches.

Take this image:

No, this is not something that a gravitational lens would produce. Or any lens.

Even-numbered multipoles in a gravitational lens produce images that have fourfold symmetry. This allows me to reduce the amount of computation needed to generate an image, as I only need to generate one quarter; the rest are just copied over.

But this is not true for odd-numbered multipoles. This image was supposed to represent the J3 multipole, which would yield a triangular shape.

Unfortunately, I accidentally generated it by assuming the symmetries of an even-numbered multipole.
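In numpy terms, the shortcut amounts to something like the following. This is a schematic reconstruction of the symmetry trick, not the actual imaging code:

```python
import numpy as np

def full_from_quarter(quarter):
    """Assemble a fourfold-symmetric image from its upper-right quarter
    by mirroring it left-right, then up-down."""
    top = np.hstack((np.fliplr(quarter), quarter))
    return np.vstack((np.flipud(top), top))

# Valid for even multipoles, where I(x, y) = I(-x, y) = I(x, -y).
# For an odd multipole (e.g., J3, with its threefold symmetry) the same
# shortcut silently imposes the wrong symmetry -- hence the glitch.
```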

I kind of like the result; even though it is of course scientifically worthless, it still looks neat.

 Posted by at 12:36 pm
Dec 15, 2020
 

A very nice article about our work on the Solar Gravitational Lens was published a few days ago on Universe Today, on account of our recent preprint, which shows quantitative results assessing the impact of image reconstruction on signal and noise.

Because the SGL is such an imperfect lens, the noise penalty is substantial. However, as it turns out, it is much reduced when the projected image area is large, such as when an exoplanet in a nearby star system is targeted.

While this is good news, the Sun’s gravitational field has other imperfections. We are currently working on modeling these and assessing their impact on noise. Next comes the problem of imaging a moving target: an exoplanet that spins, which is illuminated from varying directions, and which may have varying surface features (clouds, vegetation, etc.) Accounting for all these effects is essential if we wish to translate basic theory into sound scientific and engineering requirements.

So, the fun continues. For now, it was nice to see this piece in Universe Today.

 Posted by at 11:08 pm
Sep 03, 2020
 

Tonight, Slava Turyshev sent me a link to an article that was actually published three months ago on medium.com but had, until now, escaped our attention.

It is a very nice summary of the work that we have been doing on the Solar Gravitational Lens to date.

It really captures the essence of our work and the challenges that we have been looking at.

And there is so much more to do! Countless more things to tackle: image reconstruction of a moving target, imperfections of the solar gravitational field, precision of navigation… not to mention the simple, basic challenge of attempting a deep space mission to a distance four times greater than anything to date, lasting several decades.

Yes, it can be done. No, it’s not easy. But it’s a worthy challenge.

 Posted by at 10:54 pm
Jul 31, 2020
 

A few weeks ago, Christian Ready published a beautiful video on his YouTube channel, Launch Pad Astronomy. In this episode, he described in detail how the Solar Gravitational Lens (SGL) works, and also our efforts so far.

I like this video very much. Especially the part that begins at 10:28, where Christian describes how the SGL can be used for image acquisition. The entire video is well worth seeing, but this segment in particular does a better job than we were ever able to do with words alone, explaining how the Sun projects an image of a distant planet onto a square-kilometer-sized area, and how this image is scanned, one imaginary pixel at a time, by measuring the brightness of the Einstein ring around the Sun as seen from each pixel location.

We now understand this process well, but many more challenges remain. These include, in no particular order, deviations of the Sun from spherical symmetry, minor variations in the brightness of the solar corona, the relative motion of the observing probe, Sun, exosolar system and target planet therein, changing illumination of the target, rotation of the target, changing surface features (weather, perhaps vegetation) of the target, and the devil knows what else.

Even so, lately I have become reasonably confident, based on my own simulation work and our signal-to-noise estimates, as well as a deconvolution approach under development that takes some of the aforementioned issues into consideration, that a high-resolution image of a distant planet is, in fact, obtainable using the SGL.

A lot more work remains; the fun has only just begun. But I am immensely proud to be able to contribute to this effort.

 Posted by at 7:41 pm
Jul 16, 2020
 

Seventy-five years ago this morning, a false dawn greeted the New Mexico desert near Alamogordo.

At 5:29 in the morning, the device informally known as “the gadget” exploded.

“The gadget” was a plutonium bomb with the explosive power of about 22 kilotons of TNT. It was the first nuclear explosion on planet Earth. It marked the beginning of the nuclear era.

I can only imagine what it must have been like, being part of that effort, being present in the pre-dawn hours, back in 1945. The war in Europe just ended. The war in the Pacific was still raging. This was the world’s first high technology war, fought over the horizon, fought with radio waves, and soon, to be fought with nuclear power. Yet there were so many unknowns! The Trinity test was the culmination of years of frantic effort. The outcome was by no means assured, yet the consequences were clear to all: a successful test would mean that war would never be the same. The world would never be the same.

And then, the most surreal of things happens: minutes before the planned detonation, in the pre-dawn darkness, the intercom system picks up a faint signal from a local radio station, and music starts playing. It’s almost as if reality was mimicking the atmosphere of yet-to-be-invented computer games.

When the explosion happened, the only major surprise was that the detonation was much brighter than anyone had expected. Otherwise, things unfolded pretty much as anticipated. “The gadget” worked. Success cleared the way for the deployment of the (as yet untested) simpler uranium bomb to be dropped on Hiroshima three weeks later, followed by the twin of the Trinity gadget, which ended up destroying much of Nagasaki. The human cost was staggering, yet we must not forget that it would have been dwarfed by the costs of a ground invasion of the Japanese home islands. It was a means to shorten the war, a war not started by the United States. No responsible commander-in-chief could have made a decision other than the one Truman made when he approved the use of the weapons against Imperial Japan.

And perhaps the horrors seen in those two cities played a role in creating a world in which the last use of a nuclear weapon in anger occurred nearly 75 years ago, on August 9, 1945. No one would have predicted back then that there would be no nuclear weapons deployed in war in the coming three quarters of a century. Yet here we are, in 2020, struggling with a pandemic, struggling with populism and other forces undermining our world order, yet still largely peaceful, living in a golden age unprecedented in human history.

Perhaps Trinity should serve as a reminder that peace and prosperity can be fragile.

 Posted by at 12:52 pm
May 19, 2020
 

One of the most fortunate moments in my life occurred in the fall of 2005, when I first bumped into John Moffat, a physicist from the Perimeter Institute in Waterloo, Ontario, Canada, as we both attended the first Pioneer Anomaly conference hosted by the International Space Science Institute in Bern, Switzerland.

This chance encounter turned into a 15-year collaboration and friendship. It was, to me, immensely beneficial: I learned a lot from John who, in his long professional career, has met nearly every one of the giants of 20th century physics, even as he made his own considerable contributions to diverse areas ranging from particle physics to gravitation.

In the past decade, John also wrote a few books for a general audience. His latest, The Shadow of the Black Hole, is about to be published; it can already be preordered on Amazon. In their reviews, Greg Landsberg (CERN), Michael Landry (LIGO Hanford) and Neil Cornish (eXtreme Gravity Institute) praise the book. As I was one of John’s early proofreaders, I figured I’d add my own.

John began working on this manuscript shortly after the announcement by the LIGO project of the first unambiguous direct detection of gravitational waves from a distant cosmic event. This was a momentous discovery, opening a new chapter in the history of astronomy, while at the same time confirming a fundamental prediction of Einstein’s general relativity. Meanwhile, the physics world was waiting with bated breath for another result: the Event Horizon Telescope collaboration’s attempt to image, using a worldwide network of radio telescopes, either the supermassive black hole near the center of our own Milky Way, or the much larger supermassive black hole near the center of the nearby galaxy M87.

Bookended by these two historic discoveries, John’s narrative invites the reader on a journey to understand the nature of black holes, these most enigmatic objects in our universe. The adventure begins in 1784, when the Reverend John Michell, a Cambridge professor, speculated about stars so massive and compact that even light would not be able to escape from their surfaces. The story progresses to the 20th century, the prediction of black holes by general relativity, and the strange, often counterintuitive results that arise when our knowledge of thermodynamics and quantum physics is applied to these objects. After a brief detour into the realm of science-fiction, John’s account returns to the hard reality of observational science, as he explains how gravitational waves can be detected and how they fit into both the standard theory of gravitation and its proposed extensions or modifications. Finally, John moves on to discuss how the Event Horizon Telescope works and how it was able to create, for the very first time, an actual image of the black hole’s shadow, cast against the “light” (radio waves) from its accretion disk.

John’s writing is entertaining, informative, and a delight to follow as he accompanies the reader on this fantastic journey. True, I am not an unbiased critic. But don’t just take my word for it; read those reviews I mentioned at the beginning of this post, by preeminent physicists. In any case, I wholeheartedly recommend The Shadow of the Black Hole, along with John’s earlier books, to anyone with an interest in physics, especially the physics of black holes.

 Posted by at 10:31 pm
May 11, 2020
 

Heaven knows why I sometimes get confused by the simplest things.

In this case, the conversion between two commonly used cosmological coordinate systems: Comoving coordinates vs. coordinates that are, well, not comoving, in which cosmic expansion is ascribed to time dilation effects instead.

In the standard coordinates that are used to describe the homogeneous, isotropic universe of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, the metric is given by

$$ds^2=dt^2-a^2dR^2,$$

where \(a=a(t)\) is a function of the time coordinate, and \(R\) represents the triplet of spatial coordinates: e.g., \(dR^2=dx^2+dy^2+dz^2.\)

I want to transform this using \(R’=aR,\) i.e., transform away the time-dependent coefficient in front of the spatial term in the metric. The confusion comes because, for some reason, I always manage to convince myself that I also have to make the simultaneous replacement \(dt’=a^{-1}dt.\)

I do not. This is nonsense. I just need to introduce \(dR’\). The rest then presents itself automatically:

$$\begin{align*}
R’&=aR,\\
dR&=d(a^{-1}R’)=-a^{-2}\dot{a}R’dt+a^{-1}dR’,\\
ds^2&=dt^2-a^2[-a^{-2}\dot{a}R’dt+a^{-1}dR’]^2\\
&=(1-a^{-2}\dot{a}^2{R’}^2)dt^2+2a^{-1}\dot{a}R’dtdR’-d{R’}^2\\
&=(1-H^2{R’}^2)dt^2+2HR’dtdR’-d{R’}^2,
\end{align*}$$

where \(H=\dot{a}/a\) as usual.
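Lest I mistrust my own algebra yet again, the computation is also easy to verify symbolically, treating the differentials as formal symbols (sympy, with the primes written out as plain names):

```python
import sympy as sp

t, Rp, dt, dRp = sp.symbols('t Rp dt dRp')   # Rp, dRp stand for R', dR'
a = sp.Function('a')(t)
adot = sp.diff(a, t)

# R = R'/a, hence dR = -a^-2 adot R' dt + a^-1 dR':
dR = -adot / a**2 * Rp * dt + dRp / a

ds2 = dt**2 - a**2 * dR**2
H = adot / a
expected = (1 - H**2 * Rp**2) * dt**2 + 2 * H * Rp * dt * dRp - dRp**2

assert sp.simplify(sp.expand(ds2 - expected)) == 0
```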

OK, now that I recorded this here in my blog for posterity, perhaps the next time I need it, I’ll remember where to find it. For instance, the next time I manage to stumble upon one of my old Quora answers that, for five and a half years, advertised my stupidity to the world by presenting an incorrect answer on this topic.

This, incidentally, would serve as a suitable coordinate system representing the reference frame of an observer at the origin. It also demonstrates that such an observer sees an apparent horizon, the cosmological horizon, given by \(1-H^2{R’}^2=0,\) i.e., \(R’=H^{-1},\) the distance characterized by the inverse of the Hubble parameter.

 Posted by at 7:35 pm
Feb 17, 2020
 

Our most comprehensive paper yet on the Solar Gravitational Lens is now online.

This was a difficult paper to write, but I think that, in the end, it was well worth the effort.

We are still investigating the spherical Sun (the gravitational field of the real Sun deviates ever so slightly from spherical symmetry, and that can, or rather will, have measurable effects) and we are still considering a stationary target (as opposed to a planet with changing illumination and surface features). But in this paper, we now cover the entire image formation process: models of what a telescope sees in the SGL’s focal region, how such observations can be stitched together to form an image, and how that image compares against the inevitable noise due to the low photon count and the bright solar corona.

 Posted by at 11:37 pm
Oct 11 2019
 

I just came across this XKCD comic.

Though I can happily report that so far, I have managed to avoid getting hit by a truck, the comic describes a situation in which I have found myself quite a number of times.

In fact, ever since I’ve seen this comic an hour or so ago, I’ve been wondering about the resistor network. Thankfully, in the era of the Internet and Google, puzzles like this won’t keep you awake at night; well-reasoned solutions are readily available.

Anyhow, just in case anyone wonders, the answer is 4/π − 1/2 ohms.

 Posted by at 12:10 am
Aug 07 2019
 

Yesterday, we posted our latest paper on arXiv. Again, it is a paper about the solar gravitational lens.

This time around, our focus was on imaging an extended object, which of course can be trivially modeled as a multitude of point sources.

However, it is a multitude of point sources at a finite distance from the Sun.

This adds a twist. Previously, we modeled light from sources located at infinity: Incident light was in the form of plane waves.

But when the point source is at a finite distance, light from it comes in the form of spherical waves.

Now it is true that at a very large distance from the source, considering only a narrow beam of light, we can approximate those spherical waves as plane waves (paraxial approximation). But it still leaves us with the altered geometry.

But this is where a second observation becomes significant: As we can intuit, and as it is made evident through the use of the eikonal approximation, most of the time we can restrict our focus onto a single ray of light. A ray that, when deflected by the Sun, defines a plane. And the investigation can proceed in this plane.

The image above depicts two such planes, corresponding to the red and the green ray of light.

These rays do meet, of course, at the axis of symmetry of the problem, which we call the optical axis. In the vicinity of this axis the symmetry of the problem is recovered, and the result no longer depends on the azimuthal angle that defines the plane in question.

To make a long story short, this allows us to reuse our previous results, by introducing the additional angle β, which determines, among other things, the additional distance (compared to parallel rays of light coming from infinity) that these light rays travel before meeting at the optical axis.

This is what our latest paper describes, in full detail.

 Posted by at 9:10 pm
May 31 2019
 

Here is a thought that has been bothering me for some time.

We live in a universe that is subject to accelerating expansion. Galaxies that are not bound gravitationally to our Local Group will ultimately vanish from sight, accelerating away until the combination of distance and increasing redshift will make their light undetectable by any imaginable instrument.

Similarly, accelerating expansion means that there will be a time in the very distant future when the cosmic microwave background radiation itself will become completely undetectable by any conceivable technological means.

In this very distant future, the Local Group of galaxies will have merged already into a giant elliptical galaxy. Much of this future galaxy will be dark, as most stars would have run out of fuel already.

But there will still be light. Stars will still occasionally form. Some dwarf stars will continue to shine for trillions of years, using their available fuel at a very slow rate.

Which means that civilizations might still emerge, even in this unimaginably distant future.

And when they do, what will they see?

They will see themselves as living in an “island universe” in an otherwise empty, static cosmos. In short, precisely the kind of cosmos envisioned by many astronomers in the early 1920s, when it was still popular to think of the Milky Way as just such an island universe, not yet recognizing that many of the “spiral nebulae” seen through telescopes are in fact distant galaxies just as large as, if not larger than, the Milky Way.

But these future civilizations will see no such nebulae. There will be no galaxies beyond their “island universe”. No microwave background either. In fact, no sign whatsoever that their universe is evolving, changing with time.

So what would a scientifically advanced future civilization conclude? Surely they would still discover general relativity. But would they believe its predictions of an expanding cosmos, despite the complete lack of evidence? Or would they see that prediction as a failure of the theory, which must be remedied?

In short, how would they ever come into possession of the knowledge that their universe was once young, dense, and full of galaxies, not to mention background radiation?

My guess is that they won’t. They will have no observational evidence, and their theories will reflect what they actually do see (a static, unchanging island universe floating in infinite, empty space).

Which raises a rather unnerving, unpleasant question: To what extent does our universe already possess features that are similarly unknowable, no longer detectable by any conceivable instrumentation? Is it, in fact, possible to fully understand the physics of the universe, or are we already doomed never to develop a full picture?

I find this question surprisingly unnerving and depressing.

 Posted by at 1:37 am