Aug 02, 2024
 

The title of this blog post is used as the byline, or catchphrase, of the Canadian Centre for Experimental Radio Astronomy, a group operating a 12.8-meter radio telescope (a repurposed former NATO satellite communication facility) located in Carp, just outside Ottawa.

One of the things they organize is a summer camp for students. Today, I was invited to talk to a small group of students, and indeed I did so, talking (mostly) about my work on the Pioneer Anomaly. It seemed like an appropriate topic, considering that the detection and resolution of the anomaly were heavily dependent on radio science, specifically Doppler radio navigation.

It was fun, and my talk, I am told, was well received. I was also offered an opportunity to briefly tour the facility itself. It was fascinating, even though it was insanely hot inside the dome under the August sun. (I definitely needed a shower when I got back home.) The only memorable fly in the proverbial ointment was that I arrived late, thanks to a stupid disabled truck that blocked the Queensway, as a result of which it took forty minutes to get from Vanier Parkway to Parkdale. Fortunately, my hosts were understanding.

 Posted by at 8:43 pm
May 06, 2024
 

A couple of months ago, I came across a nice paper by Verma and Silk (of Silk damping fame, as he’s known to cosmologists), showing what would happen if we had a chance to view the “shadow” of a supermassive black hole as it is microlensed by an intervening smaller black hole along the line of sight.

It occurred to me that I have the means to model this. At first I thought I’d write a short paper. But there really is nothing new that I can add to what Verma and Silk said in their paper, other than a nice animation produced by my ray tracing code.

So here it is. A brief animation of a small black hole passing in front of the famous “shadow”.

Things are not exactly to scale, of course, but for what it’s worth, this video corresponds roughly to a 10,000 solar mass black hole passing through, halfway between us and Sagittarius A*.

 Posted by at 11:59 pm
Apr 23, 2024
 

Despite working with them extensively for the past 18 months or so, our “little robot” friends continue to blow me away with their capabilities.

Take this: the other day I asked Claude 3 Opus to create an N-body simulation example from scratch, in HTML + JavaScript, complete with the ability to record videos.

Here’s the result, after some very minor tweaks of the code produced by Claude, code that pretty much worked “out of the box”.

The code is simple, reasonably clean and elegant, and it works. As to what I think of our little robot friends’ ability to take a brief, casual description of such an application and produce working code on demand… What can I say? There’s an expression that I’ve been overusing lately, but it still feels the most appropriate reaction: Welcome to the future.
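For readers curious what the core of such a simulation looks like, here is a minimal sketch of my own (in Python rather than the HTML + JavaScript Claude produced, and emphatically not Claude’s actual code): softened Newtonian gravity advanced with a simple leapfrog integrator.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, G=1.0, eps=1e-3):
    """Advance an N-body system by one leapfrog (kick-drift-kick) step.

    pos, vel: (N, 2) arrays; mass: (N,) array; eps softens close encounters.
    """
    def accel(p):
        d = p[None, :, :] - p[:, None, :]              # pairwise displacement vectors
        r2 = (d ** 2).sum(-1) + eps ** 2               # softened squared distances
        np.fill_diagonal(r2, np.inf)                   # no self-interaction
        return (G * mass[None, :, None] * d / r2[..., None] ** 1.5).sum(axis=1)

    vel = vel + 0.5 * dt * accel(pos)                  # half kick
    pos = pos + dt * vel                               # drift
    vel = vel + 0.5 * dt * accel(pos)                  # half kick
    return pos, vel

# Example: three bodies with random initial conditions, integrated for 300 steps.
rng = np.random.default_rng(1)
pos, vel, mass = rng.normal(size=(3, 2)), rng.normal(scale=0.1, size=(3, 2)), np.ones(3)
for _ in range(300):
    pos, vel = nbody_step(pos, vel, mass, dt=0.01)
print(pos)
```

The real thing, of course, also has to handle rendering and video capture; the sketch only covers the physics loop.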

 Posted by at 6:11 pm
Nov 01, 2023
 

A few minutes ago, I checked Google News on my phone and lo and behold, there was a link to a new Universe Today article discussing my latest manuscript on multiple gravitational lenses.

I knew that this was in the works, as the author approached me with some questions earlier in the day, but I didn’t expect it to appear this quickly, and, well, seeing it on my phone like this was a nice surprise.

Had the author asked, I’d have happily granted permission to use one of my generated images or animations involving multiple lenses.

Meanwhile, my paper on a four-satellite configuration used to detect deviations from Newtonian gravity was published by Astrophysics and Space Science, a Springer Nature journal. I am officially permitted (in fact, encouraged) by Springer to share the link to an online read-only version of the published paper.

 Posted by at 1:25 am
Oct 12, 2023
 

I’m doing more work on gravitational lensing. In particular, the little ray tracing model that I developed can now use actual astronomical images as sources. Here’s a projection of a nice spiral galaxy as it would be seen through a pair of non-coplanar, imperfectly lined up lenses:

Somehow, I suspect, no astronomer would recognize (at least not without a spectral analysis) that these are four images of the same rather nice-looking galaxy, NGC 4414:

These lensing examples also demonstrate how difficult it is to reconstruct either the original view, or the mass distribution of the lens itself, when all we see is something like the first image above.
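For the curious, the forward projection itself is conceptually simple. Here is a simplified sketch (a single point-mass lens in a single plane, so not the non-coplanar, multi-lens case shown above, and not my actual ray tracing code): each image-plane pixel is mapped back to the source plane through the lens equation, and the source picture is sampled there.

```python
import numpy as np

def lens_image(source, theta_E=0.3, fov=1.0):
    """Project a source picture through a single point-mass lens at the origin.

    source: 2D array of surface brightness; theta_E: Einstein radius;
    fov: half-width of the field of view. All angles are in the same arbitrary units.
    """
    n = source.shape[0]
    c = np.linspace(-fov, fov, n)
    tx, ty = np.meshgrid(c, c)                         # image-plane angles
    r2 = tx ** 2 + ty ** 2 + 1e-12
    # Point-mass lens equation: beta = theta - theta_E^2 * theta / |theta|^2
    bx = tx - theta_E ** 2 * tx / r2
    by = ty - theta_E ** 2 * ty / r2
    # Sample the source at the mapped angles (nearest neighbor; rays leaving
    # the field of view are simply clamped to its edge in this toy version).
    ix = np.clip(np.rint((bx + fov) / (2 * fov) * (n - 1)).astype(int), 0, n - 1)
    iy = np.clip(np.rint((by + fov) / (2 * fov) * (n - 1)).astype(int), 0, n - 1)
    return source[iy, ix]

# Example: a Gaussian blob standing in for a galaxy picture, slightly off-center.
n = 400
c = np.linspace(-1, 1, n)
x, y = np.meshgrid(c, c)
blob = np.exp(-((x - 0.15) ** 2 + y ** 2) / 0.02)
lensed = lens_image(blob)
print(lensed.shape, lensed.max())
```

In place of the Gaussian blob, one would load an actual astronomical image, such as a picture of NGC 4414, as the source array.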

 Posted by at 9:35 pm
Oct 08, 2023
 

I am simulating gravitational lenses, ray tracing the deflected light.

With multiple lenses, the results can be absolutely fascinating. Here’s a case of four lenses, three static, a fourth lens transiting in front of the other three, with the light source a fuzzy sphere in the background.

I can’t stop looking at this animation. It almost feels… organic. Yet the math behind it is just high school math, a bit of geometry and trigonometry, nothing more.
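To make the “high school math” claim concrete, here is a small sketch of my own (a single-plane approximation with each lens characterized by its Einstein radius, not the actual code behind the animation): the source-plane direction is just the image-plane direction minus the summed deflections of the individual point-mass lenses.

```python
import numpy as np

def source_position(theta, lenses):
    """Map an image-plane angle to the source plane for several point-mass lenses.

    theta: (2,) image-plane angle; lenses: list of (lens position (2,), Einstein radius).
    Single-plane approximation: the small-angle deflections of the lenses simply add.
    """
    theta = np.asarray(theta, dtype=float)
    beta = theta.copy()
    for pos, theta_E in lenses:
        d = theta - np.asarray(pos, dtype=float)       # offset from this lens
        beta -= theta_E ** 2 * d / (d @ d + 1e-12)     # its contribution to the bending
    return beta

# Example: three static lenses plus a fourth, transiting lens at one instant in time.
lenses = [((-0.4, 0.0), 0.25), ((0.4, 0.2), 0.20), ((0.1, -0.5), 0.15), ((-0.1, 0.3), 0.10)]
print(source_position((0.3, 0.1), lenses))
```

Rendering the animation amounts to evaluating this map for every pixel, frame after frame, as the fourth lens moves.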

NB: This post has been edited with an updated, physically more accurate animation.

 Posted by at 5:35 pm
May 23, 2023
 

In the last several years, we worked out most of the details about the Solar Gravitational Lens. How it forms images. How its optical qualities are affected by the inherent spherical aberration of a gravitational lens. How the images are further blurred by deviations of the lens from perfect spherical symmetry. How the solar corona contributes huge amounts of noise and how it can be controlled when the image is reconstructed. How the observing spacecraft would need to be navigated in order to maintain precise positions within the image projected by the SGL.

But one problem remained unaddressed: The target itself. Specifically, the fact that the target planet that we might be observing is not standing still. If it is like the Earth, it spins around its axis once every so many hours. And as it orbits its host star, its illumination changes as a result.

In other words, this is not what we are up against, much as we’d prefer the exoplanet to play nice and remain motionless and fully illuminated at all times.

Rather, what we are up against is this:

Imaging such a moving target is hard. Integration times must be short in order to avoid motion blur. And image reconstruction must take into account how specific surface features are mapped onto the image plane. An image plane that, as we recall, we sample one “pixel” at a time, as the projected image of the exoplanet is several kilometers wide. It is traversed by the observing spacecraft that, looking back at the Sun, measures the brightness of the Einstein ring surrounding the Sun, and reconstructs the image from this information.

This is a hard problem. I think it is doable, but this may be the toughest challenge yet.

Oh, and did I mention that (not shown in the simulation) the exoplanet may also have varying cloud cover? Not to mention that, unlike this visual simulation, a real exoplanet may not be a Lambertian reflector, but rather, different parts (oceans vs. continents, mountain ranges vs. plains, deserts vs. forests) may have very different optical properties, varying values of specularity or even more complex optical behavior?
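For what it’s worth, the Lambertian case that the simulation does assume is simple to state: the apparent brightness of a surface element is proportional to the cosine of the angle between the surface normal and the direction of illumination. A toy sketch (purely illustrative, with made-up parameters) of a single surface element on a spinning, diffusely reflecting planet:

```python
import numpy as np

def lambert_brightness(lat, lon, t, albedo=0.3, spin_period=24.0, sun_dir=(1.0, 0.0, 0.0)):
    """Reflected brightness of a surface element on a spinning, Lambertian planet.

    lat, lon: surface coordinates in radians; t and spin_period in hours;
    sun_dir: unit vector pointing toward the star. A Lambertian element looks
    equally bright from every viewing direction, so only the illumination matters here.
    """
    phi = lon + 2.0 * np.pi * t / spin_period          # longitude advances as the planet spins
    normal = np.array([np.cos(lat) * np.cos(phi),
                       np.cos(lat) * np.sin(phi),
                       np.sin(lat)])                   # outward surface normal
    s = np.asarray(sun_dir, dtype=float)
    s /= np.linalg.norm(s)
    return albedo * max(0.0, float(normal @ s))        # Lambert's cosine law (up to normalization)

# Example: the same surface element from local noon to local midnight.
for t in (0.0, 3.0, 6.0, 9.0, 12.0):
    print(t, round(lambert_brightness(0.2, 0.0, t), 4))
```

A realistic surface would replace that single cosine law with direction-dependent, spatially varying reflectance, which is precisely what makes the reconstruction problem so much harder.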

 Posted by at 12:06 am
May 23, 2022
 

This morning, Google greeted me with a link in its news stream to a Hackaday article on the Solar Gravitational Lens. The link caught my attention right away, as I recognized some of my own simulated, SGL-projected images of an exo-Earth and its reconstruction.

Reading the article I realized that it appeared in response to a brand new video by SciShow, a science-oriented YouTube channel.

Yay! I like nicely done videos presenting our work, and this one is fairly good. There are a few minor inaccuracies, but nothing big enough to even be worth mentioning. And it’s very well presented.

I suppose I should offer my thanks to SciShow for choosing to feature our research with such a well-produced effort.

 Posted by at 7:22 pm
May 06, 2022
 

A beautiful study was published the other day, and it received a lot of press coverage, so I have been getting a lot of questions.

This study shows how, in principle, we could reconstruct the image of an exoplanet with the Solar Gravitational Lens (SGL), using just a single snapshot of the Einstein ring around the Sun.

The problem is, we cannot. As they say, the devil is in the details.

Here is a general statement about any conventional optical system that does not involve more exotic, nonlinear optics: whatever the system does, ultimately it maps light from picture elements, pixels, in the source plane, into pixels in the image plane.

Let me explain what this means in principle, through an extreme example. Suppose someone tells you that there is a distant planet in another galaxy, and you are allowed to ignore any contaminating sources of light. You are allowed to forget about the particle nature of light. You are allowed to forget the physical limitations of your cell phone’s camera, such as its CMOS sensor’s dynamic range or readout noise. You hold up your cell phone and take a snapshot. It doesn’t even matter if the camera is not well focused or if there is motion blur, so long as you have precise knowledge of how it is focused and how it moves. The map is still a linear map. So if your cellphone camera has 40 megapixels, a simple mathematical operation, inverting the so-called convolution matrix, lets you reconstruct the source in all its exquisite detail. All you need to know is a precise mathematical description, the so-called “point spread function” (PSF) of the camera (including any defocusing and motion blur). Beyond that, it just amounts to inverting a matrix, or equivalently, solving a linear system of equations. In other words, standard fare for anyone studying numerical computational methods, and easily solvable even at extremely high resolutions using appropriate computational resources. (A high-end GPU in your desktop computer is ideal for such calculations.)
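Here is the idea in a deliberately tiny, one-dimensional toy (my own illustration, not the machinery of the paper under discussion): build the convolution matrix from a known PSF, blur a “scene” with it, then recover the scene by solving the linear system.

```python
import numpy as np

def convolution_matrix(psf, n):
    """Dense matrix C such that image = C @ source for a 1-D, periodic convolution."""
    C = np.zeros((n, n))
    half = len(psf) // 2
    for i in range(n):
        for k, w in enumerate(psf):
            C[i, (i + k - half) % n] += w
    return C

rng = np.random.default_rng(0)
n = 64
source = rng.random(n)                        # the "true" scene
psf = np.array([0.05, 0.2, 0.5, 0.2, 0.05])   # a known, normalized point spread function
C = convolution_matrix(psf, n)

image = C @ source                            # the blurred, noise-free observation
recovered = np.linalg.solve(C, image)         # invert the linear map
print(np.max(np.abs(recovered - source)))     # essentially machine precision
```

In the absence of noise, the reconstruction is essentially exact, limited only by floating-point rounding.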

Why can’t we do this in practice? Why do we worry about things like the diffraction limit of our camera or telescope?

The answer, ultimately, is noise. The random, unpredictable, or unmodelable element.

Noise comes from many sources. It can include so-called quantization noise because our camera sensor digitizes the light intensity using a finite number of bits. It can include systematic noise arising for many reasons, such as differently calibrated sensor pixels or even approximations used in the mathematical description of the PSF. It can include unavoidable, random, “stochastic” noise that arises because light arrives as discrete packets of energy in the form of photons, not as a continuous wave.

When we invert the convolution matrix in the presence of all these noise sources, the noise gets amplified far more than the signal. In the end, the reconstructed, “deconvolved” image becomes useless unless we have an exceptionally high signal-to-noise ratio, or SNR, to begin with.
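The same toy example makes the point rather brutally once we blur more aggressively and add even a tiny amount of measurement noise (again, my own illustration; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.random(n)                              # the "true" scene

# A fairly strong Gaussian blur, expressed as a periodic convolution matrix.
k = np.arange(-6, 7)
psf = np.exp(-k ** 2 / 8.0)
psf /= psf.sum()
C = np.zeros((n, n))
for i in range(n):
    for j, w in zip((i + k) % n, psf):
        C[i, j] += w

y = C @ x
noise = 1e-3 * rng.standard_normal(n)          # a tiny amount of measurement noise
x_hat = np.linalg.solve(C, y + noise)          # naive deconvolution

print("relative noise level: ", np.linalg.norm(noise) / np.linalg.norm(y))
print("reconstruction error: ", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
print("condition number:     ", np.linalg.cond(C))   # upper bound on the amplification
```

Compare the printed noise level with the printed reconstruction error: the inversion divides by the nearly vanishing high-frequency response of the blur, so a tiny amount of noise in the data becomes a far larger error in the naively deconvolved result.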

The authors of this beautiful study knew this. They even state it in their paper. They mention values such as 4,000, even 200,000 for the SNR.

And then there is reality. The Einstein ring does not appear in black, empty space. It appears on top of the bright solar corona. And even if we subtract the corona, we cannot eliminate the stochastic shot noise due to photons from the corona by any means other than collecting data for a longer time.

Let me show a plot from a paper that is work-in-progress, with the actual SNR that we can expect on pixels in a cross-sectional view of the Einstein ring that appears around the Sun:

Just look at the vertical axis. See those values there? That’s our realistic SNR when the Einstein ring is imaged through the solar corona, using a 1-meter telescope with a 10-meter focal length and an image sensor pixel size of one square micron. These choices are consistent with just a tad under 5000 pixels falling within the usable area of the Einstein ring, which can be used to reconstruct, in principle, a roughly 64 by 64 pixel image of the source. As this plot shows, a typical value for the SNR would be 0.01 with 1 second of light collection (integration time).

What does that mean? Well, for starters, it means that even if everything else were absolutely, flawlessly perfect, with no motion blur (indeed, no motion at all), no sources of contamination other than the solar corona, no quantization noise, and no limitations on the sensor, collecting enough light to reach an SNR of 4,000 would require roughly 160 billion seconds of integration time. That is roughly 5,000 years.
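The arithmetic behind that estimate is straightforward: for photon (shot) noise, the SNR grows with the square root of the integration time, so the required time scales with the square of the desired SNR ratio. Using the figures quoted above:

```python
snr_now, t_now = 0.01, 1.0            # an SNR of about 0.01 after 1 second of integration
snr_target = 4000.0

# Shot-noise limited observation: SNR grows as sqrt(t), hence t grows as SNR^2.
t_needed = t_now * (snr_target / snr_now) ** 2
print(f"{t_needed:.3g} seconds, or about {t_needed / 3.156e7:.0f} years")
```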

And that is why we are not seriously contemplating image reconstruction from a single snapshot of the Einstein ring.

 Posted by at 4:01 pm
Mar 24, 2022
 

Between a war launched by a mad dictator, an occupation by “freedom convoy” mad truckers, and other mad shenanigans, it’s been a while since I last blogged about pure physics.

Especially about a topic close to my heart, modified gravity. John Moffat’s modified gravity theory MOG, in particular.

Back in 2020, a paper was published arguing that MOG may not be able to account for the dynamics of certain galaxies. The author studied a large, low surface brightness galaxy, Antlia II, which has very little mass, and concluded that the only way to fit MOG to this galaxy’s dynamics is by assuming outlandish values not only for the MOG theory’s parameters but also for the parameter that characterizes the mass distribution in the galaxy itself.

In fact, I would argue that any galaxy this light that does not follow Newtonian physics is bad news for modified theories of gravity; these theories predict deviations from Newtonian physics for large, heavy galaxies, but a galaxy this light is comparable in mass to large globular clusters (which definitely behave the Newtonian way), so why would it be subject to different rules?

But then… For many years now, John and I (maybe I should only speak for myself in my blog, but I think John would concur) have been cautiously, tentatively raising the possibility that these faint satellite galaxies are really not very good test subjects at all. They do not look like relaxed, “virialized” mechanical systems; rather, they appear to be tidally disrupted by the host galaxy whose vicinity they inhabit.

We have heard arguments that this cannot be the case, that these satellites show no signs of recent interaction. And in any case, it is never a good idea for a theorist to question the data. We are not entitled to “alternative facts”.

But then, here’s a paper from just a few months ago with a very respectable list of authors on its front page, presenting new observations of two faint galaxies, one being Antlia II: “Our main result is a clear detection of a velocity gradient in Ant2 that strongly suggests it has recently experienced substantial tidal disruption.”

I find this result very encouraging. It is consistent with the basic behavior of the MOG theory: Systems that are too light to show effects due to modified gravity exhibit strictly Newtonian behavior. This distinguishes MOG from the popular MOND paradigm, which needs the somewhat ad hoc “external field effect” to account for the dynamics of diffuse objects that show no presence of dark matter or modified gravity.

 Posted by at 2:30 am
Feb 22, 2022
 

This is the last moment until well into the 22nd century that the current time and date in UTC can be expressed using only two distinct digits: 2022-02-22 22:22.

I can only hope that this date will not be memorable for another reason, you know, something like the start of WW3?

 Posted by at 5:22 pm
Feb 19, 2022
 

The 64-antenna radio telescope complex, MeerKAT, is South Africa’s contribution to the Square Kilometer Array, an international project under development to create an unprecedented radio astronomy facility.

While the SKA project is still in its infancy, MeerKAT is fully functional, and it just delivered the most detailed, most astonishing images yet of the central region of our own Milky Way. Here is, for instance, an image of the Sagittarius A region that also hosts the Milky Way’s supermassive black hole, Sgr A*:

The filamentary structure that is seen in this image is apparently poorly understood. As for the scale of this image, notice that it is marked in arc seconds; at the estimated distance to Sgr A, one arc second translates into roughly 1/8th of a light year, so the image presented here is roughly a 15 by 15 light year area.
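The conversion is a one-liner; taking the distance to Sgr A* to be roughly 27,000 light years (about 8 kpc, an assumed round figure; the precise value does not matter for this rough estimate):

```python
import math

distance_ly = 27000                   # assumed distance to Sgr A*, about 8 kpc, in light years
arcsec = math.pi / (180 * 3600)       # one arc second, in radians
print(distance_ly * arcsec)           # roughly 0.13 light years per arc second
```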

 Posted by at 5:04 pm
Mar 14, 2021
 

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.
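Incidentally, the astroid itself is an easy curve to reproduce: it is the four-cusped curve |x|^(2/3) + |y|^(2/3) = a^(2/3), conveniently parametrized with cos³ and sin³. A tiny sketch (just the caustic boundary, not the PSF computation from the paper):

```python
import numpy as np

def astroid(a=1.0, npoints=400):
    """Points on the astroid |x|^(2/3) + |y|^(2/3) = a^(2/3), the shape of the quadrupole caustic."""
    t = np.linspace(0.0, 2.0 * np.pi, npoints)
    return a * np.cos(t) ** 3, a * np.sin(t) ** 3

x, y = astroid()
# Sanity check: every point satisfies the implicit equation, up to rounding.
print(np.max(np.abs(np.abs(x) ** (2 / 3) + np.abs(y) ** (2 / 3) - 1.0)))
```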

I was having lots of fun working on this paper. It was, needless to say, a lot of work.

 Posted by at 9:18 pm
Dec 15, 2020
 

A very nice article about our work on the Solar Gravitational Lens was published a few days ago on Universe Today, on account of our recent preprint, which shows quantitative results assessing the impact of image reconstruction on signal and noise.

Because the SGL is such an imperfect lens, the noise penalty is substantial. However, as it turns out, it is much reduced when the projected image area is large, such as when an exoplanet in a nearby star system is targeted.

While this is good news, the Sun’s gravitational field has other imperfections. We are currently working on modeling these and assessing their impact on noise. Next comes the problem of imaging a moving target: an exoplanet that spins, which is illuminated from varying directions, and which may have varying surface features (clouds, vegetation, etc.). Accounting for all these effects is essential if we wish to translate basic theory into sound scientific and engineering requirements.

So, the fun continues. For now, it was nice to see this piece in Universe Today.

 Posted by at 11:08 pm
Dec 02, 2020
 

According to the immortal Douglas Adams, God’s final message to His creation is simple: “We apologize for the inconvenience.”

But there’s also another final message of sorts, the answer to the Ultimate Question of Life, the Universe, and Everything: 42.

Recently, a researcher by the name of Michael Hippke analyzed the seemingly random bits that are contained in minute fluctuations of the Cosmic Microwave Background (CMB) radiation. His conclusion: there is no discernible pattern, no appearance of constants of nature, no detectable statistical autocorrelation. The message is random.

I beg to respectfully disagree. In the 512-bit segment published by Hippke, the bit sequence 101010 (which happens to be 42 in binary) appears no fewer than eight, er, nine times (one occurrence split between two lines).

Now if we only knew the question to which the answer is 42…

 Posted by at 1:18 pm
Dec 01, 2020
 

The giant Arecibo radio telescope is no more.

Damaged by a broken cable just a few weeks ago, the telescope completely collapsed today.

Incredibly sad news.

Completed in 1963, the telescope was 57 years old, just like me. I hope I will last a few more years, though.

 Posted by at 9:52 pm
Sep 03, 2020
 

Tonight, Slava Turyshev sent me a link to an article that was actually published three months ago on medium.com but had, until now, escaped our attention.

It is a very nice summary of the work that we have been doing on the Solar Gravitational Lens to date.

It really captures the essence of our work and the challenges that we have been looking at.

And there is so much more to do! Countless more things to tackle: image reconstruction of a moving target, imperfections of the solar gravitational field, precision of navigation… not to mention the simple, basic challenge of attempting a deep space mission to a distance four times greater than anything to date, lasting several decades.

Yes, it can be done. No, it’s not easy. But it’s a worthy challenge.

 Posted by at 10:54 pm
Jul 31, 2020
 

A few weeks ago, Christian Ready published a beautiful video on his YouTube channel, Launch Pad Astronomy. In this episode, he described in detail how the Solar Gravitational Lens (SGL) works, and also our efforts so far.

I like this video very much. Especially the part that begins at 10:28, where Christian describes how the SGL can be used for image acquisition. The entire video is well worth seeing, but this segment in particular does a better job than we were ever able to do with words alone, explaining how the Sun projects an image of a distant planet onto a square kilometer sized area, and how this image is scanned, one imaginary pixel at a time, by measuring the brightness of the Einstein ring around the Sun as seen from each pixel location.
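In rough pseudocode terms, the scan Christian describes amounts to the loop below (a conceptual toy of my own, with a made-up PSF and geometry; the real SGL point spread function, units and navigation are far more involved): at each position in the image plane, the single number recorded is a PSF-weighted sum of light from the source.

```python
import numpy as np

def scan_image_plane(source, psf):
    """Conceptual SGL scan: at each spacecraft position in the image plane, the single
    number recorded (the Einstein ring brightness) is a PSF-weighted sum of the source."""
    n = source.shape[0]
    measured = np.zeros_like(source)
    for i in range(n):                       # spacecraft "rows" in the image plane
        for j in range(n):                   # spacecraft "columns"
            shifted = np.roll(np.roll(psf, i - n // 2, axis=0), j - n // 2, axis=1)
            measured[i, j] = np.sum(shifted * source)
    return measured

# Toy example: a crude disk standing in for the planet, and a made-up, very blurry PSF.
n = 32
yy, xx = np.mgrid[0:n, 0:n] - n // 2
source = (xx ** 2 + yy ** 2 < 36).astype(float)
psf = np.exp(-(xx ** 2 + yy ** 2) / 20.0)
psf /= psf.sum()
print(scan_image_plane(source, psf).max())
```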

We now understand this process well, but many more challenges remain. These include, in no particular order, deviations of the Sun from spherical symmetry, minor variations in the brightness of the solar corona, the relative motion of the observing probe, Sun, exosolar system and target planet therein, changing illumination of the target, rotation of the target, changing surface features (weather, perhaps vegetation) of the target, and the devil knows what else.

Even so, lately I have become reasonably confident, based on my own simulation work and our signal-to-noise estimates, as well as a deconvolution approach under development that takes some of the aforementioned issues into consideration, that a high-resolution image of a distant planet is, in fact, obtainable using the SGL.

A lot more work remains. The fun only just began. But I am immensely proud to be able to contribute to this effort.

 Posted by at 7:41 pm
Feb 17, 2020
 

Our most comprehensive paper yet on the Solar Gravitational Lens is now online.

This was a difficult paper to write, but I think that, in the end, it was well worth the effort.

We are still assuming a spherical Sun (the gravitational field of the real Sun deviates ever so slightly from spherical symmetry, and that can, or rather will, have measurable effects), and we are still considering a stationary target (as opposed to a planet with changing illumination and surface features), but in this paper we now cover the entire image formation process, including models of what a telescope sees in the SGL’s focal region, how such observations can be stitched together to form an image, and how that image compares against the inevitable noise due to the low photon count and the bright solar corona.

 Posted by at 11:37 pm