I was recently interviewed by a Hungarian podcaster, mostly about my participation in the early days of game development in Hungary, but also about my more recent work, including my scientific contributions.

I just listened to the interview and thankfully, I didn’t say anything colossally stupid.

A very nice article about our work on the Solar Gravitational Lens was published a few days ago on Universe Today, on account of our recent preprint, which shows quantitative results assessing the impact of image reconstruction on signal and noise.

Because the SGL is such an imperfect lens, the noise penalty is substantial. However, as it turns out, it is much reduced when the projected image area is large, such as when an exoplanet in a nearby star system is targeted.

While this is good news, the Sun’s gravitational field has other imperfections. We are currently working on modeling these and assessing their impact on noise. Next comes the problem of imaging a moving target: an exoplanet that spins, which is illuminated from varying directions, and which may have varying surface features (clouds, vegetation, etc.). Accounting for all these effects is essential if we wish to translate basic theory into sound scientific and engineering requirements.

So, the fun continues. For now, it was nice to see this piece in Universe Today.

Tonight, Slava Turyshev sent me a link to an article that was actually published three months ago on medium.com but until now had escaped our attention.

It is a very nice summary of the work that we have been doing on the Solar Gravitational Lens to date.

It really captures the essence of our work and the challenges that we have been looking at.

And there is so much more to do! Countless more things to tackle: image reconstruction of a moving target, imperfections of the solar gravitational field, precision of navigation… not to mention the simple, basic challenge of attempting a deep space mission to a distance four times greater than anything to date, lasting several decades.

Yes, it can be done. No, it’s not easy. But it’s a worthy challenge.

A few weeks ago, Christian Ready published a beautiful video on his YouTube channel, Launch Pad Astronomy. In this episode, he described in detail how the Solar Gravitational Lens (SGL) works, and also our efforts so far.

I like this video very much, especially the part that begins at 10:28, where Christian describes how the SGL can be used for image acquisition. The entire video is well worth seeing, but this segment in particular does a better job than we were ever able to do with words alone, explaining how the Sun projects an image of a distant planet onto a square-kilometer-sized area, and how this image is scanned, one imaginary pixel at a time, by measuring the brightness of the Einstein ring around the Sun as seen from each pixel location.

We now understand this process well, but many more challenges remain. These include, in no particular order, deviations of the Sun from spherical symmetry, minor variations in the brightness of the solar corona, the relative motion of the observing probe, Sun, exosolar system and target planet therein, changing illumination of the target, rotation of the target, changing surface features (weather, perhaps vegetation) of the target, and the devil knows what else.

Even so, lately I have become reasonably confident, based on my own simulation work and our signal-to-noise estimates, as well as a deconvolution approach under development that takes some of the aforementioned issues into consideration, that a high-resolution image of a distant planet is, in fact, obtainable using the SGL.

A lot more work remains. The fun has only just begun. But I am immensely proud to be able to contribute to this effort.

Seventy-five years ago this morning, a false dawn greeted the New Mexico desert near Alamogordo.

At 5:29 AM, the device informally known as “the gadget” exploded.

“The gadget” was a plutonium bomb with the explosive power of about 22 kilotons of TNT. It was the first nuclear explosion on planet Earth. It marked the beginning of the nuclear era.

I can only imagine what it must have been like, being part of that effort, being present in the pre-dawn hours, back in 1945. The war in Europe had just ended. The war in the Pacific was still raging. This was the world’s first high-technology war, fought over the horizon, fought with radio waves, and soon, to be fought with nuclear power. Yet there were so many unknowns! The Trinity test was the culmination of years of frantic effort. The outcome was by no means assured, yet the consequences were clear to all: a successful test would mean that war would never be the same. The world would never be the same.

And then, the most surreal of things happens: minutes before the planned detonation, in the pre-dawn darkness, the intercom system picks up a faint signal from a local radio station, and music starts playing. It’s almost as if reality was mimicking the atmosphere of yet-to-be-invented computer games.

When the explosion happened, the only major surprise was that the detonation was much brighter than anyone had expected. Otherwise, things unfolded pretty much as anticipated. “The gadget” worked. Success cleared the way for the deployment of the (as yet untested) simpler uranium bomb to be dropped on Hiroshima three weeks later, followed by the twin of the Trinity gadget, which ended up destroying much of Nagasaki. The human cost was staggering, yet we must not forget that it would have been dwarfed by the costs of a ground invasion of the Japanese home islands. It was a means to shorten the war, a war not started by the United States. No responsible commander-in-chief could have made a decision other than the one Truman made when he approved the use of the weapons against Imperial Japan.

And perhaps the horrors seen in those two cities played a role in creating a world in which the last use of a nuclear weapon in anger occurred nearly 75 years ago, on August 9, 1945. No one would have predicted back then that there would be no nuclear weapons deployed in war in the coming three quarters of a century. Yet here we are, in 2020, struggling with a pandemic, struggling with populism and other forces undermining our world order, yet still largely peaceful, living in a golden age unprecedented in human history.

Perhaps Trinity should serve as a reminder that peace and prosperity can be fragile.

One of the most fortunate moments in my life occurred in the fall of 2005, when I first bumped into John Moffat, a physicist from The Perimeter Institute in Waterloo, Ontario, Canada, when we both attended the first Pioneer Anomaly conference hosted by the International Space Science Institute in Bern, Switzerland.

This chance encounter turned into a 15-year collaboration and friendship. It was, to me, immensely beneficial: I learned a lot from John who, in his long professional career, has met nearly every one of the giants of 20th century physics, even as he made his own considerable contributions to diverse areas ranging from particle physics to gravitation.

In the past decade, John also wrote a few books for a general audience. His latest, The Shadow of the Black Hole, is about to be published; it can already be preordered on Amazon. In their reviews, Greg Landsberg (CERN), Michael Landry (LIGO Hanford) and Neil Cornish (eXtreme Gravity Institute) praise the book. As I was one of John’s early proofreaders, I figured I’d add my own.

John began working on this manuscript shortly after the announcement by the LIGO project of the first unambiguous direct detection of gravitational waves from a distant cosmic event. This was a momentous discovery, opening a new chapter in the history of astronomy, while at the same time confirming a fundamental prediction of Einstein’s general relativity. Meanwhile, the physics world was waiting with bated breath for another result: the Event Horizon Telescope collaboration’s attempt to image, using a worldwide network of radio telescopes, either the supermassive black hole near the center of our own Milky Way, or the much larger supermassive black hole near the center of the nearby galaxy M87.

Bookended by these two historic discoveries, John’s narrative invites the reader on a journey to understand the nature of black holes, these most enigmatic objects in our universe. The adventure begins in 1784, when the Reverend John Michell, a Cambridge professor, speculated about stars so massive and compact that even light would not be able to escape from their surfaces. The story progresses to the 20th century, the prediction of black holes by general relativity, and the strange, often counterintuitive results that arise when our knowledge of thermodynamics and quantum physics is applied to these objects. After a brief detour into the realm of science fiction, John’s account returns to the hard reality of observational science, as he explains how gravitational waves can be detected and how they fit into both the standard theory of gravitation and its proposed extensions or modifications. Finally, John moves on to discuss how the Event Horizon Telescope works and how it was able to create, for the very first time, an actual image of the black hole’s shadow, cast against the “light” (radio waves) from its accretion disk.

John’s writing is entertaining, informative, and a delight to follow as he accompanies the reader on this fantastic journey. True, I am not an unbiased critic. But don’t just take my word for it; read those reviews I mentioned at the beginning of this post, by preeminent physicists. In any case, I wholeheartedly recommend The Shadow of the Black Hole, along with John’s earlier books, to anyone with an interest in physics, especially the physics of black holes.

Heaven knows why I sometimes get confused by the simplest things.

In this case, the conversion between two commonly used cosmological coordinate systems: Comoving coordinates vs. coordinates that are, well, not comoving, in which cosmic expansion is ascribed to time dilation effects instead.

In the standard coordinates that are used to describe the homogeneous, isotropic universe of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric, the metric is given by

$$ds^2=dt^2-a^2dR^2,$$

where $$a=a(t)$$ is a function of the time coordinate, and $$R$$ represents the triplet of spatial coordinates: e.g., $$dR^2=dx^2+dy^2+dz^2.$$

I want to transform this using $$R'=aR,$$ i.e., transform away the time-dependent coefficient in front of the spatial term in the metric. The confusion comes because for some reason, I always manage to convince myself that I also have to make the simultaneous replacement $$dt'=a^{-1}dt.$$

I do not. This is nonsense. I just need to introduce $$dR'$$. The rest then presents itself automatically:

\begin{align*} R'&=aR,\\ dR&=d(a^{-1}R')=-a^{-2}\dot{a}R'dt+a^{-1}dR',\\ ds^2&=dt^2-a^2[-a^{-2}\dot{a}R'dt+a^{-1}dR']^2\\ &=(1-a^{-2}\dot{a}^2{R'}^2)dt^2+2a^{-1}\dot{a}R'dtdR'-d{R'}^2\\ &=(1-H^2{R'}^2)dt^2+2HR'dtdR'-d{R'}^2, \end{align*}

where $$H=\dot{a}/a$$ as usual.
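For anyone who wants to double-check the algebra, here is a quick SymPy sketch (my own verification, treating dt and dR' as formal symbols rather than doing proper exterior calculus) that expands the transformed line element and compares it against the last line above:

import sympy as sp

t, dt, dRp, Rp = sp.symbols("t dt dRp Rp")   # dRp, Rp stand for dR', R'
a = sp.Function('a')(t)
adot = sp.diff(a, t)

# R = R'/a, hence dR = d(R'/a) = -a^(-2)*adot*R'*dt + a^(-1)*dR'
dR = -adot*Rp*dt/a**2 + dRp/a

ds2 = sp.expand(dt**2 - a**2*dR**2)

H = adot/a
expected = sp.expand((1 - H**2*Rp**2)*dt**2 + 2*H*Rp*dt*dRp - dRp**2)

print(sp.simplify(ds2 - expected))           # prints 0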

OK, now that I recorded this here in my blog for posterity, perhaps the next time I need it, I’ll remember where to find it. For instance, the next time I manage to stumble upon one of my old Quora answers that, for five and a half years, advertised my stupidity to the world by presenting an incorrect answer on this topic.

This, incidentally, would serve as a suitable coordinate system representing the reference frame of an observer at the origin. It also demonstrates that such an observer sees an apparent horizon, the cosmological horizon, given by $$1-H^2{R'}^2=0,$$ i.e., $$R'=H^{-1},$$ the distance characterized by the inverse of the Hubble parameter.

Our most comprehensive paper yet on the Solar Gravitational Lens is now online.

This was a difficult paper to write, but I think that, in the end, it was well worth the effort.

We are still assuming a spherical Sun (the gravitational field of the real Sun deviates ever so slightly from spherical symmetry, and that can, or rather will, have measurable effects), and we are still considering a stationary target (as opposed to a planet with changing illumination and surface features), but in this paper we now cover the entire image formation process, including models of what a telescope sees in the SGL’s focal region, how such observations can be stitched together to form an image, and how that image compares against the inevitable noise due to the low photon count and the bright solar corona.

I just came across this XKCD comic.

Though I can happily report that, so far, I have managed to avoid getting hit by a truck, it is a situation in which I have found myself quite a number of times in my life.

In fact, ever since I’ve seen this comic an hour or so ago, I’ve been wondering about the resistor network. Thankfully, in the era of the Internet and Google, puzzles like this won’t keep you awake at night; well-reasoned solutions are readily available.

Anyhow, just in case anyone wonders, the answer is 4/π − 1/2 ohms.
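For the incurably curious, the answer can also be checked numerically. The resistance between two nodes of an infinite square lattice of 1-ohm resistors can be written as a double integral over the lattice Brillouin zone; the little sketch below (mine, using numpy and a crude midpoint rule; the grid size N is arbitrary) evaluates it for the knight’s-move separation (2, 1):

import numpy as np

def grid_resistance(m, n, N=2000):
    # midpoint grid over [-pi, pi]^2; it never hits (0, 0), where the
    # integrand has a removable ambiguity
    x = -np.pi + (np.arange(N) + 0.5) * (2*np.pi/N)
    X, Y = np.meshgrid(x, x)
    integrand = (1 - np.cos(m*X + n*Y)) / (2 - np.cos(X) - np.cos(Y))
    return integrand.sum() * (2*np.pi/N)**2 / (4*np.pi**2)

print(grid_resistance(2, 1))    # ~0.7732
print(4/np.pi - 0.5)            # 0.7732...

The two printed numbers agree to several decimal places.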

Yesterday, we posted our latest paper on arXiv. Again, it is a paper about the solar gravitational lens.

This time around, our focus was on imaging an extended object, which of course can be trivially modeled as a multitude of point sources.

However, it is a multitude of point sources at a finite distance from the Sun.

This adds a twist. Previously, we modeled light from sources located at infinity: Incident light was in the form of plane waves.

But when the point source is at a finite distance, light from it comes in the form of spherical waves.

Now it is true that at a very large distance from the source, considering only a narrow beam of light, we can approximate those spherical waves as plane waves (paraxial approximation). But it still leaves us with the altered geometry.

But this is where a second observation becomes significant: As we can intuit, and as it is made evident through the use of the eikonal approximation, most of the time we can restrict our focus onto a single ray of light. A ray that, when deflected by the Sun, defines a plane. And the investigation can proceed in this plane.

The image above depicts two such planes, corresponding to the red and the green ray of light.

These rays do meet, however, at the axis of symmetry of the problem, which we call the optical axis. In the vicinity of this axis, the symmetry of the problem is recovered, and the result no longer depends on the azimuthal angle that defines the plane in question.

To make a long story short, this allows us to reuse our previous results, by introducing the additional angle β, which determines, among other things, the additional distance (compared to parallel rays of light coming from infinity) that these light rays travel before meeting at the optical axis.

This is what our latest paper describes, in full detail.

Here is a thought that has been bothering me for some time.

We live in a universe that is subject to accelerating expansion. Galaxies that are not bound gravitationally to our Local Group will ultimately vanish from sight, accelerating away until the combination of distance and increasing redshift will make their light undetectable by any imaginable instrument.

Similarly, accelerating expansion means that there will be a time in the very distant future when the cosmic microwave background radiation itself will become completely undetectable by any conceivable technological means.

In this very distant future, the Local Group of galaxies will have merged already into a giant elliptical galaxy. Much of this future galaxy will be dark, as most stars would have run out of fuel already.

But there will still be light. Stars would still occasionally form. Some dwarf stars will continue to shine for trillions of years, using their available fuel at a very slow rate.

Which means that civilizations might still emerge, even in this unimaginably distant future.

And when they do, what will they see?

They will see themselves as living in an “island universe” in an otherwise empty, static cosmos. In short, precisely the kind of cosmos envisioned by many astronomers in the early 1920s, when it was still popular to think of the Milky Way as just such an island universe, not yet recognizing that many of the “spiral nebulae” seen through telescopes are in fact distant galaxies just as large, if not larger, than the Milky Way.

But these future civilizations will see no such nebulae. There will be no galaxies beyond their “island universe”. No microwave background either. In fact, no sign whatsoever that their universe is evolving, changing with time.

So what would a scientifically advanced future civilization conclude? Surely they would still discover general relativity. But would they believe its predictions of an expanding cosmos, despite the complete lack of evidence? Or would they see that prediction as a failure of the theory, which must be remedied?

In short, how would they ever come into possession of the knowledge that their universe was once young, dense, and full of galaxies, not to mention background radiation?

My guess is that they won’t. They will have no observational evidence, and their theories will reflect what they actually do see (a static, unchanging island universe floating in infinite, empty space).

Which raises the rather unnerving, unpleasant question: To what extent do features already exist in our universe that are similarly unknowable, because they can no longer be detected by any conceivable instrumentation? Is it, in fact, possible to fully understand the physics of the universe, or are we already doomed to never being able to develop a full picture?

I find this question surprisingly unnerving and depressing.

My research is unsupported. That is to say, with the exception of a few conference invitations when my travel costs were covered, I never received a penny for my research on the Pioneer Anomaly and my other research efforts.

Which is fine, I do it for fun after all. Still, in this day and age of crowdfunding, I couldn’t say no to the possibility that others, who find my efforts valuable, might choose to contribute.

Hence my launching of a Patreon page. I hope it is well-received. I have zero experience with crowdfunding, so this really is a first for me. Wish me luck.

I run across this often. Well-meaning folks who have read introductory-level texts or watched a few educational videos about physical cosmology suddenly discover something seemingly profound.

And then, instead of asking themselves why, if it is so easy to stumble upon these results, they haven’t been published already by others, they go ahead and make outlandish claims. (Claims that sometimes land in my Inbox, unsolicited.)

Let me explain what I am talking about.

As it is well known, the rate of expansion of the cosmos is governed by the famous Hubble parameter: $$H\sim 70~{\rm km}/{\rm s}/{\rm Mpc}$$. That is to say, two galaxies that are 1 megaparsec (Mpc, about 3 million light years) apart will be flying away from each other at a rate of 70 kilometers a second.

It is possible to convert megaparsecs (a unit of length) into kilometers (another unit of length), so that the lengths cancel out in the definition of $$H$$, and we are left with $$H\sim 2.2\times 10^{-18}~{\rm s}^{-1}$$, which is one divided by about 14 billion years. In other words, the Hubble parameter is just the inverse of the age of the universe. (It would be exactly the inverse of the age of the universe if the rate of cosmic expansion was constant. It isn’t, but the fact that the expansion was slowing down for the first 9 billion years or so and has been accelerating since kind of averages things out.)
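Spelled out as a few lines of Python (nothing here but the standard megaparsec and year conversion factors):

KM_PER_MPC = 3.0857e19                   # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H = 70.0 / KM_PER_MPC                    # the kilometers cancel: ~2.27e-18 1/s
print(H)
print(1.0 / H / SECONDS_PER_YEAR / 1e9)  # ~14, i.e., about 14 billion years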

And this, then, leads to the following naive arithmetic. First, given the age of the universe and the speed of light, we can find out the “radius” of the observable universe:

$$a=\dfrac{c}{H},$$

or about 14 billion light years. Inverting this equation, we also get $$H=c/a$$.

But the expansion of the cosmos is governed by another equation, the first so-called Friedmann equation, which says that

$$H^2=\dfrac{8\pi G\rho}{3}.$$

Here, $$\rho$$ is the density of the universe. The mass within the visible universe, then, is calculated as usual, just using the volume of a sphere of radius $$a$$:

$$M=\dfrac{4\pi a^3}{3}\rho.$$

Putting this expression and the expression for $$H$$ back into the Friedmann equation, we get the following:

$$a=\dfrac{2GM}{c^2}.$$

But this is just the Schwarzschild radius associated with the mass of the visible universe! Surely, we just discovered something profound here! Perhaps the universe is a black hole!
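The algebra behind this little “discovery” is easy enough to verify symbolically; here is a minimal SymPy sketch (my own check) that starts from $$H=c/a$$ and the first Friedmann equation and recovers $$a=2GM/c^2$$:

import sympy as sp

a, c, G, rho, M = sp.symbols('a c G rho M', positive=True)

H = c / a                                    # from a = c/H
friedmann = sp.Eq(H**2, 8*sp.pi*G*rho/3)     # first Friedmann equation
rho_sol = sp.solve(friedmann, rho)[0]        # density implied by H = c/a

M_of_a = sp.Rational(4, 3)*sp.pi*a**3*rho_sol   # mass within radius a
print(sp.solve(sp.Eq(M, M_of_a), a))            # [2*G*M/c**2]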

Well… not exactly. The fact that we got the Schwarzschild radius is no coincidence. The Friedmann equations are, after all, just Einstein’s field equations in disguise, i.e., the exact same equations that yield the formula for the Schwarzschild radius.

Still, the two solutions are qualitatively different. The universe cannot be the interior of a black hole’s event horizon. A black hole is characterized by an unavoidable future singularity, whereas our expanding universe is characterized by a past singularity. At best, the universe may be a time-reversed black hole, i.e., a “white hole”, but even that is dubious. The Schwarzschild solution, after all, is a vacuum solution of Einstein’s field equations, whereas the Friedmann equations describe a matter-filled universe. Nor is there a physical event horizon: the “visible universe” is an observer-dependent concept, and two observers in relative motion, or even two observers some distance apart, will not see the same visible universe.

Nonetheless, these ideas, memes perhaps, show up regularly, in manuscripts submitted to journals of dubious quality, appearing in self-published books, or on the alternative manuscript archive viXra. And there are further variations on the theme. For instance, the so-called Planck power, divided by the Hubble parameter, yields $$2Mc^2$$, i.e., twice the mass-energy in the observable universe. This coincidence is especially puzzling to those who work it out numerically, and thus remain oblivious to the fact that the Planck power is one of those Planck units that does not actually contain the Planck constant in its definition, only $$c$$ and $$G$$. People have also been fooling around with various factors of $$2$$, $$\tfrac{1}{2}$$ or $$\ln 2$$, often based on dodgy information content arguments, coming up with numerical ratios that supposedly replicate the matter, dark matter, and dark energy content.

Today, I answered a question on Quora about the nature of $$c$$, the speed of light, as it appears in the one equation everyone knows, $$E=mc^2.$$

I explained that it is best viewed as a conversion factor between our units of length and time. These units are accidents of history. There is nothing fundamental in Nature about one ten millionth the distance from the poles to the equator of the Earth (the original definition of the meter) or about one 86,400th the length of the Earth’s mean solar day. These units are what they are, in part, because we learned to measure length and time long before we learned that they are aspects of the same thing, spacetime.

And nothing stops us from using units such as light-seconds and seconds to measure space and time; in such units, the value of the speed of light would be just 1, and consequently, it could be dropped from equations altogether. This is precisely what theoretical physicists often do.

But then… I commented that something very similar takes place in aviation, where different units are used to measure horizontal distance (nautical miles, nmi) and altitude (feet, ft). So if you were to calculate the kinetic energy of an airplane (measuring its speed in nmi/s) and its potential energy (measuring the altitude, as well as the gravitational acceleration, in ft) you would need the ft/nmi conversion factor of 6076.12, squared, to convert between the two resulting units of energy.
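A toy example, with entirely made-up numbers for a hypothetical airliner, shows the mismatch (only the 6076.12 ft/nmi factor matters here):

FT_PER_NMI = 6076.12

mass = 70000.0          # kg, hypothetical airliner
speed = 0.135           # nautical miles per second (~486 knots)
g = 32.2                # gravitational acceleration, ft/s^2
altitude = 35000.0      # ft

ke = 0.5 * mass * speed**2     # comes out in kg nmi^2/s^2
pe = mass * g * altitude       # comes out in kg ft^2/s^2

# to compare the two, the kinetic energy must be rescaled by (ft/nmi)^2:
print(ke * FT_PER_NMI**2)      # now also in kg ft^2/s^2
print(pe)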

As I was writing this answer, though, I stumbled upon a blog entry that discussed the crazy, mixed up units of measure still in use worldwide in aviation. Furlongs per fortnight may pretty much be the only unit that is not used, as just about every other unit of measure pops up, confusing poor pilots everywhere: Meters, feet, kilometers, nautical miles, statute miles, kilograms, pounds, millibars, hectopascals, inches of mercury… you name it, it’s there.

Part of the reason, of course, is the fact that America, alone among industrialized nations, managed to stick to its archaic system of measurements. Which is another historical accident, really. A lot had to do with the timing: metric transition was supposed to take place in the 1970s, governed by a presidential executive order signed by Gerald Ford. But the American economy was in a downturn, many Americans felt the nation under siege, the customary units worked well, and there was a conservative-populist pushback against the metric system… so by 1982, Ronald Reagan disbanded the Metric Board and the transition to metric was officially over. (Or not. The metric system continues to gain ground, whether it is used to measure bullets or Aspirin, soft drinks or street drugs.)

Yet another example similar to the metric system is the historical accident that created the employer-funded healthcare system in the United States that Americans continue to cling to, even as most (all?) other advanced industrial nations transitioned to something more modern, some variant of a single-payer universal healthcare system. It happened in the 1920s, when a Texas hospital managed to strike a deal with public school teachers in Dallas: For 50 cents a month, the hospital picked up the tab of their hospital visits. This arrangement became very popular during the Great Depression when hospitals lost patients who could not afford their hospital care anymore. The idea came to be known as Blue Cross. And that’s how the modern American healthcare system was born.

As I was reading this chain of Web articles, taking me on a tour from Einstein’s $$E=mc^2$$ to employer-funded healthcare in America, I was reminded of a 40-year-old British TV series, Connections, created by science historian James Burke. Burke found similar, often uncanny connections between seemingly unrelated topics in history, particularly the history of science and technology.

Just got back from The Perimeter Institute, where I spent three very short days.

I had good discussions with John Moffat. I again met Barak Shoshany, whom I first encountered on Quora. I attended two very interesting and informative seminar lectures by Emil Mottola on quantum anomalies and the conformal anomaly.

I also gave a brief talk about our research with Slava Turyshev on the Solar Gravitational Lens. I was asked to give an informal talk with no slides. It was a good challenge. I believe I was successful. My talk seemed well received. I was honored to have Neil Turok in the audience, who showed keen interest and asked several insightful questions.

I just watched a news conference held by the University of Waterloo, on account of Donna Strickland being awarded the Nobel prize in physics.

This is terrific news for Canada, for the U. of Waterloo, and last but most certainly not least, for women in physics.

Heartfelt congratulations!

Michael Atiyah, 89, is one of the greatest living mathematicians. Which is why the world pays attention when he claims to have solved what is perhaps the greatest outstanding problem in mathematics, the Riemann hypothesis.

Here is a simple sum: $$1+\frac{1}{2^2}+\frac{1}{3^2}+\dots$$. It is actually convergent: The result is $$\pi^2/6$$.

Other, similar sums also converge, so long as the exponent is greater than 1. In fact, we can define a function:

\begin{align*}\zeta(x)=\sum\limits_{i=1}^\infty\frac{1}{i^x}.\end{align*}

Where things get really interesting is when we extend the definition of this $$\zeta(x)$$ to the entire complex plane. As it turns out, its analytic continuation is defined almost everywhere. And, it has a few zeros, i.e., values of $$x$$ for which $$\zeta(x)=0$$.

The so-called trivial zeros of $$\zeta(x)$$ are the negative even integers: $$x=-2,-4,-6,\dots$$. But the function also has infinitely many nontrivial zeros, where $$x$$ is complex. And here is the thing: The real part of all known nontrivial zeros happens to be $$\frac{1}{2}$$, the first one being at $$x=\frac{1}{2}+14.1347251417347i$$. This, then, is the Riemann hypothesis: Namely that if $$x$$ is a nontrivial zero of $$\zeta(x)$$, then $$\Re(x)=\frac{1}{2}$$. This hypothesis has baffled mathematicians for the past 130 years, and now Atiyah claims to have solved it, accidentally (!), in a mere five pages. Unfortunately, verifying his proof is above my pay grade, as it references other concepts that I would have to learn first. But it is understandable why the mathematical community is skeptical (to say the least).
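Numerically, all of this is easy to play with; here is a tiny Python check using the mpmath library (the function names zeta and zetazero are mpmath’s):

from mpmath import mp, zeta, zetazero, pi

mp.dps = 20                  # work with 20 significant digits
print(zeta(2))               # 1.6449340668482264365
print(pi**2 / 6)             # the same number, pi^2/6
print(zetazero(1))           # 0.5 + 14.1347...i, on the critical line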

A slide from Atiyah’s talk on September 24, 2018.

What is not above my pay grade is analyzing Atiyah’s other claim: a purported mathematical definition of the fine structure constant $$\alpha$$. The modern definition of $$\alpha$$ relates this number to the electron charge $$e$$: $$\alpha=e^2/4\pi\epsilon_0\hbar c$$, where $$\epsilon_0$$ is the electric permittivity of the vacuum, $$\hbar$$ is the reduced Planck constant and $$c$$ is the speed of light. Back in the days of Arthur Eddington, it seemed that $$\alpha\sim 1/136$$, which led Eddington himself on a futile quest of numerology, trying to concoct a reason why $$136$$ is a special number. Today, we know the value of $$\alpha$$ a little better: $$\alpha^{-1}\simeq 137.0359992$$.
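As a quick sanity check of that definition, plugging the standard SI values of the constants into it recovers the familiar value:

import math

e    = 1.602176634e-19      # elementary charge, C
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s
c    = 299792458.0          # speed of light, m/s

alpha = e**2 / (4*math.pi*eps0*hbar*c)
print(1/alpha)              # ~137.036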

Atiyah produced a long and somewhat rambling paper that fundamentally boils down to two equations. First, he defines a new mathematical constant, denoted by the Cyrillic letter $$\unicode{x427}$$ (Che), which is related to the fine structure constant by the equation

\begin{align*}\alpha^{-1}=\frac{\pi\unicode{x427}}{\gamma},\tag{1.1*}\end{align*}

where $$\gamma=0.577\dots$$ is the Euler–Mascheroni constant. Second, he offers a definition for $$\unicode{x427}$$:

\begin{align*}\unicode{x427}=\frac{1}{2}\sum\limits_{j=1}^\infty 2^{-j}\left(1-\int_{1/j}^j\log_2 x~dx\right).\tag{7.1*}\end{align*}

(The equation numbers are Atiyah’s; I used a star to signify that I slightly simplified them.)

Atiyah claims that this sum is difficult to calculate and then goes into a long-winded and not very well explained derivation. But the sum is not difficult to calculate. In fact, I can calculate it with ease as the definite integral under the summation sign is trivial:

\begin{align*}\int_{1/j}^j\log_2 x~dx=\frac{(j^2+1)\log j-j^2+1}{j\log 2}.\end{align*}

After this, the sum rapidly converges, as this little bit of Maxima code demonstrates (NB: for $$j=1$$ the integral is trivial as the integration limits collapse):

(%i1) assume(j>1);
(%o1)                               [j > 1]
(%i2) S:1/2*2^(-j)*(1-integrate(log(x)/log(2),x,1/j,j));
(%o2)        2^((- j) - 1) (1 - ((log(j) + 1)/j + j log(j) - j)/log(2))
(%i3) float(sum(S,j,1,50));
(%o3)                         0.02944508691740671
(%i4) float(sum(S,j,1,100));
(%o4)                         0.02944508691730876
(%i5) float(sum(S,j,1,150));
(%o5)                         0.02944508691730876
(%i6) float(sum(S,j,1,100)*%pi/%gamma);
(%o6)                         0.1602598029967022


Unfortunately, this does not look like $$\alpha^{-1}=137.0359992$$ at all. Not even remotely.

So we are all left to guess, sadly, what Atiyah was thinking when he offered this proposal.

We must also remember that $$\alpha$$ is a so-called “running” constant, as its value depends on the energy of the interaction, though presumably, the constant in question here is $$\alpha$$ in the infrared limit, i.e., at zero energy.

I am reading some breathless reactions to a preprint posted a few days ago by the MiniBooNE experiment. The experiment is designed to detect neutrinos, in particular neutrino oscillations (the change of one neutrino flavor into another).

The headlines are screaming. Evidence found of a New Fundamental Particle, says one. Strange New Particle Could Prove Existence of Dark Matter, says another. Or how about, A Major Physics Experiment Just Detected A Particle That Shouldn’t Exist?

The particle in question is the so-called sterile neutrino. It is a neat concept, one I happen to quite like. It represents an elegant resolution to the puzzle of neutrino handedness. This refers to the chirality of neutrinos, essentially the direction in which they spin compared to their direction of motion. We only ever see “left handed” neutrinos. But neutrinos have rest mass. So they move slower than light. That means that if you run fast enough and outrun a left-handed neutrino, so that relative to you it is moving backwards (but still spins in the same direction as before), when you look back, you’ll see a right-handed neutrino. This implies that right-handed neutrinos should be seen just as often as left-handed neutrinos. But they aren’t. How come?

Sterile neutrinos offer a simple answer: We don’t see right-handed neutrinos because they don’t interact (they are sterile). That is to say, when a neutrino interacts (emits or absorbs a Z-boson, or emits or absorbs a W-boson while changing into a charged lepton), it has to be a left-handed neutrino in the interaction’s center-of-mass frame.

If this view is true and such sterile neutrinos exist, even though they cannot be detected directly, their existence would skew the number of neutrino oscillation events. As to what neutrino oscillations are: neutrinos are massive. But unlike other elementary particles, neutrinos do not have a well-defined mass associated with their flavor (electron, muon, or tau neutrino). When a neutrino has a well-defined flavor (is in a flavor eigenstate) it has no well-defined mass and vice versa. This means that if we detect neutrinos in a mass eigenstate, their flavor can appear to change (oscillate) between one state and another; e.g., a muon neutrino may appear at the detector as an electron neutrino. These flavor oscillations are rare, but they can be detected, and that’s what the MiniBooNE experiment is looking for.
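For reference, in the simplest two-flavor picture the appearance probability is given by the standard textbook formula

$$P(\nu_\mu\to\nu_e)=\sin^2(2\theta)\,\sin^2\left(\frac{\Delta m^2 L}{4E}\right),$$

where $$\theta$$ is the mixing angle, $$\Delta m^2$$ is the difference of the squared masses of the two mass eigenstates, $$L$$ is the distance traveled and $$E$$ is the neutrino energy (in natural units).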

And that is indeed what MiniBooNE found: an excess of events that is consistent with neutrino oscillations.

MiniBooNE detects electron neutrinos. These can come from all kinds of (background) sources. But one particular source is an intense beam of muon neutrinos produced at Fermilab. Because of neutrino oscillations, some of the neutrinos in this beam will be detected as electron neutrinos, yielding an excess of electron neutrino events above background.

And that’s exactly what MiniBooNE sees, with very high confidence: 4.8σ. That’s almost the generally accepted detection threshold for a new particle. But this value of 4.8σ is not about a new particle. It is the significance associated with excess electron neutrino detection events overall: an excess that is expected from neutrino oscillations.

So what’s the big deal, then? Why the screaming headlines? As far as I can tell, it all boils down to this sentence in the paper: “Although the data are fit with a standard oscillation model, other models may provide better fits to the data.”

What this somewhat cryptic sentence means is best illustrated by a figure from the paper:

This figure shows the excess events (above background) detected by MiniBooNE, but also the expected number of excess events from neutrino oscillations. Notice how only the first two red data points fall significantly above the expected number. (In case you are wondering, POT means Protons On Target, that is to say, the number of protons hitting a beryllium target at Fermilab, producing the desired beam of muon neutrinos.)

Yes, these two data points are intriguing. Yes, they may indicate the existence of new physics beyond two-neutrino oscillations. In particular, they may indicate the existence of another oscillation mode, muon neutrinos oscillating into sterile neutrinos that, in turn, oscillate into electron neutrinos, yielding this excess.

Mind you, if this is a sign of sterile neutrinos, these sterile neutrinos are unlikely dark matter candidates; their mass would be too low.

Or these two data points are mere statistical flukes. After all, as the paper says, “the best oscillation fit to the excess has a probability of 20.1%”. That is far from improbable. Sure, the fact that it is only 20.1% can be interpreted as a sign of some tension between the Standard Model and this experiment. But it is certainly not a discovery of new physics, and absolutely not a confirmation of a specific model of new physics, such as sterile neutrinos.

And indeed, the paper makes no such claim. The word “sterile” appears only four times in the paper, in a single sentence in the introduction: “[…] more exotic models are typically used to explain these anomalies, including, for example, 3+N neutrino oscillation models involving three active neutrinos and N additional sterile neutrinos [6-14], resonant neutrino oscillations [15], Lorentz violation [16], sterile neutrino decay [17], sterile neutrino non-standard interactions [18], and sterile neutrino extra dimensions [19].”

So yes, there is an intriguing sign of an anomaly. Yes, it may point the way towards new physics. It might even be new physics involving sterile neutrinos.

But no, this is not a discovery. At best, it’s an intriguing hint; quite possibly, just a statistical fluke.

So why the screaming headlines, then? I wish I knew.

There is an excellent diagram accompanying an answer on StackExchange, and I’ve been meaning to copy it here, because I keep losing the address.

The diagram summarizes many measures of cosmic expansion in a nice, compact, but not necessarily easy-to-understand form:

So let me explain how to read this diagram. First of all, time is going from bottom to top. The thick horizontal black line represents the moment of now. Imagine this line moving upwards as time progresses.

The thick vertical black line is here. So the intersection of the two thick black lines in the middle is the here-and-now.

Distances are measured in terms of the comoving distance, which is basically telling you how far a distant object would be now, if you had a long measuring tape to measure its present-day location.

The area shaded red (marked “past light cone”) is all the events that happened in the universe that we could see, up to the moment of now. The boundary of this area is everything in this universe from which light is reaching us right now.

So just for fun, let us pick an object at a comoving distance of 30 gigalightyears (Gly). Look at the dotted vertical line corresponding to 30 Gly, halfway between the 20 and 40 marks (either side, doesn’t matter). It intersects the boundary of the past light cone when the universe was roughly 700 million years old. Good, there were already young galaxies back then. If we were observing such a galaxy today, we’d be seeing it as it appeared when the universe was 700 million years old. Its light would have spent 13.1 billion years traveling before reaching our instruments.

Again look at the dotted vertical line at 30 Gly and extend it all the way to the “now” line. What does this tell you about this object? You can read the object’s redshift (z) off the diagram: its light is shifted down in frequency by a factor of about 9.

You can also read the object’s recession velocity, which is just a little over two times the vacuum speed of light. Yes… faster than light. This recession velocity is based on the rate of change of the scale factor, essentially the Hubble parameter times the comoving distance. The Doppler velocity that one would deduce from the object’s redshift yields a value less than the vacuum speed of light. (Curved spacetime is tricky; distances and speeds can be defined in various ways.)
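These numbers can be reproduced, at least roughly, without the diagram. Here is a small numerical sketch (mine, assuming a flat ΛCDM cosmology with $$H_0=70~{\rm km}/{\rm s}/{\rm Mpc}$$, $$\Omega_m=0.3$$, $$\Omega_\Lambda=0.7$$, which is close to what such diagrams are based on) that integrates the comoving distance out to redshift 9:

import numpy as np

H0 = 70.0                       # km/s/Mpc
c  = 299792.458                 # km/s
Om, OL = 0.3, 0.7               # assumed matter and dark-energy fractions

def E(z):
    # dimensionless expansion rate H(z)/H0 for a flat universe
    return np.sqrt(Om*(1.0 + z)**3 + OL)

z  = np.linspace(0.0, 9.0, 10001)
f  = 1.0 / E(z)
dz = z[1] - z[0]
D  = c/H0 * (f.sum() - 0.5*(f[0] + f[-1])) * dz   # comoving distance, Mpc

MPC_PER_GLY = 306.6             # megaparsecs in one gigalightyear
print(D / MPC_PER_GLY)          # ~30, i.e., about 30 Gly
print(H0 * D / c)               # recession velocity today, a bit over 2 (times c)

It yields a comoving distance of about 30 Gly and a present-day recession velocity of roughly 2.2c, consistent with what the diagram shows for $$z\approx 9$$.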

Another thing about this diagram is that in addition to the past, it also sketches the future, taking into account the apparent accelerating expansion of the universe. Notice the light red shaded area marked “event horizon”. This area contains everything that we will be able to see at our present location, throughout the entire history of the universe, all the way to the infinite future. Things (events) outside this area will never be seen by us, will never influence us.

Note how the dotted line at 30 Gly intersects this boundary when the universe is about 5 billion years old. Yes, this means that we will never see more than the first 5 billion years of the existence of a galaxy at a comoving distance of 30 Gly. Over time, light from this galaxy will be redshifted ever more, until it eventually appears to “freeze” and disappears from sight, never appearing to become older than 5 billion years.

Notice also how the dashed curves marking constant values of redshift bend inward, closer and closer to the “here” location as we approach the infinite future. This is a direct result of accelerating expansion: Things nearer and nearer to us will be caught up in the expansion, accelerating away from our location. Eventually this will stop, of course; cosmic acceleration will not rip apart structures that are gravitationally bound. But we will end up living in a true “island universe” in which nothing is seen at all beyond the largest gravitationally bound structure, the local group of galaxies. Fortunately that won’t happen anytime soon; we have many tens of billions of years until then.

Lastly, the particle horizon (blue lines) essentially marks the size of the visible part of the universe at any given time. Notice how the width of the interval marked by the intersection of the now line and the blue lines is identical to the width of the past light cone at the bottom of this diagram. Notice also how the blue lines correspond to infinite redshift.

As I said, this diagram is not an easy read but it is well worth studying.

Stephen Hawking passed away over a month ago, but I just came across this beautiful tribute from cartoonist Sean Delonas. It was completely unexpected (I was flipping through the pages of a magazine) and, I admit, it had quite an impact on me. Not the words, inspirational though they may be… the image. The empty wheelchair, the frail human silhouette walking away in the distance.