From time to time, I promise myself not to respond again to e-mails from strangers, asking me to comment on their research, view their paper, offer thoughts.

Yet from time to time, when the person seems respectable, the research genuine, I do respond. Most of the time, in vain.

Like the other day. Long story short, someone basically proved, as part of a lengthier derivation, that general relativity is always unimodular. This is of course manifestly untrue, but I was wondering where their seemingly reasonable derivation went awry.

Eventually I spotted it. Without getting bogged down in the details, what they did was essentially equivalent to proving that second derivatives do not exist:

$$\frac{d^2f}{dx^2} = \frac{d}{dx}\frac{df}{dx} = \frac{df}{dx}\frac{d}{df}\frac{df}{dx} = \frac{df}{dx}\frac{d}{dx}\frac{df}{df} = \frac{df}{dx}\frac{d1}{dx} = 0.$$

Of course second derivatives do exist, so you might wonder what’s happening here. The sleight of hand happens after the third equal sign: swapping differentiation with respect to two independent variables is permitted, but $$x$$ and $$f$$ are not independent and therefore, this step is illegal.
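The flaw is easy to exhibit with a concrete function; here is a quick symbolic check (using f = x³, an arbitrary choice):

```python
import sympy as sp

x = sp.symbols('x')
f = x**3  # an arbitrary test function; any f with curvature will do

second = sp.diff(f, x, 2)
print(second)  # 6*x, decidedly not zero

# The bogus step swaps d/dx and d/df as if x and f were independent;
# since f is a function of x, the two operators do not commute.
```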

I pointed this out, and received a mildly abusive comment in response questioning the quality of my mathematics education. Oh well. Maybe I will learn some wisdom and refrain from responding to strangers in the future.

This morning, Google greeted me with a link in its newsstream to a Hackaday article on the Solar Gravitational Lens. The link caught my attention right away, as I recognized some of my own simulated, SGL-projected images of an exo-Earth and its reconstruction.

Reading the article I realized that it appeared in response to a brand new video by SciShow, a science-oriented YouTube channel.

Yay! I like nicely done videos presenting our work and this one is fairly good. There are a few minor inaccuracies, but nothing big enough to be even worth mentioning. And it’s very well presented.

I suppose I should offer my thanks to SciShow for choosing to feature our research with such a well-produced effort.

A beautiful study was published the other day, and it received a lot of press coverage, so I get a lot of questions.

This study shows how, in principle, we could reconstruct the image of an exoplanet observed through the Solar Gravitational Lens (SGL) from just a single snapshot of the Einstein ring around the Sun.

The problem is, we cannot. As they say, the devil is in the details.

Here is a general statement about any conventional optical system that does not involve more exotic, nonlinear optics: whatever the system does, ultimately it maps light from picture elements, pixels, in the source plane, into pixels in the image plane.

Let me explain what this means in principle, through an extreme example. Suppose someone tells you that there is a distant planet in another galaxy, and you are allowed to ignore any contaminating sources of light. You are allowed to forget about the particle nature of light. You are allowed to forget the physical limitations of your cell phone’s camera, such as its CMOS sensor dynamic range or readout noise. You hold up your cell phone and take a snapshot. It doesn’t even matter if the camera is not well focused or if there is motion blur, so long as you have precise knowledge of how it is focused and how it moves: the map is still a linear map. So if your cellphone camera has 40 megapixels, a simple mathematical operation, inverting the so-called convolution matrix, lets you reconstruct the source in all its exquisite detail. All you need to know is a precise mathematical description, the so-called “point spread function” (PSF) of the camera (including any defocusing and motion blur). Beyond that, it just amounts to inverting a matrix, or equivalently, solving a linear system of equations. In other words, standard fare for anyone studying numerical computational methods, and easily solvable even at extremely high resolutions using appropriate computational resources. (A high-end GPU in your desktop computer is ideal for such calculations.)
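As a toy illustration of this idealized claim, here is a one-dimensional sketch with an invented, sharply peaked (hence well-conditioned) blur kernel standing in for a real PSF:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64  # number of one-dimensional "pixels", kept small for illustration

# An invented blur kernel (a stand-in for a real PSF); its strong
# central tap keeps the convolution matrix well conditioned.
kernel = np.array([1.0, 2.0, 10.0, 2.0, 1.0])
kernel /= kernel.sum()

# Build the convolution matrix A, so that blurred = A @ source.
A = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        col = i + j - len(kernel) // 2
        if 0 <= col < n:
            A[i, col] = k

source = rng.random(n)  # the unknown scene
blurred = A @ source    # what the idealized, noise-free camera records

# With zero noise, inverting the linear map recovers the source
# essentially exactly, no matter how blurred the snapshot is.
recovered = np.linalg.solve(A, blurred)
print(np.max(np.abs(recovered - source)))  # at the level of rounding error
```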

Why can’t we do this in practice? Why do we worry about things like the diffraction limit of our camera or telescope?

The answer, ultimately, is noise. The random, unpredictable, or unmodelable element.

Noise comes from many sources. It can include so-called quantization noise, because our camera sensor digitizes the light intensity using a finite number of bits. It can include systematic noise from many causes, such as differently calibrated sensor pixels or even approximations used in the mathematical description of the PSF. It can include unavoidable, random, “stochastic” noise that arises because light arrives as discrete packets of energy in the form of photons, not as a continuous wave.

When we invert the convolution matrix in the presence of all these noise sources, the noise gets amplified far more than the signal. In the end, the reconstructed, “deconvolved” image becomes useless unless we had an exceptionally high signal-to-noise ratio, or SNR, to begin with.
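The amplification is easy to demonstrate; a sketch with an invented, deliberately smooth kernel (smooth kernels suppress high spatial frequencies, which makes the convolution matrix nearly singular):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# An invented, smooth blur kernel; the corresponding convolution
# matrix is badly ill-conditioned.
kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0

A = np.zeros((n, n))
for i in range(n):
    for j, k in enumerate(kernel):
        col = i + j - 2
        if 0 <= col < n:
            A[i, col] = k

source = rng.random(n)
blurred = A @ source

# Contaminate the recorded image with tiny noise: one part in a thousand.
noisy = blurred + rng.normal(0.0, 1e-3, n)

recovered = np.linalg.solve(A, noisy)
err = np.max(np.abs(recovered - source))
print(err)                # orders of magnitude larger than the 1e-3 noise
print(np.linalg.cond(A))  # the condition number governs the amplification
```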

The authors of this beautiful study knew this. They even state it in their paper. They mention values such as 4,000, even 200,000 for the SNR.

And then there is reality. The Einstein ring does not appear in black, empty space. It appears on top of the bright solar corona. And even if we subtract the corona, we cannot eliminate the stochastic shot noise due to photons from the corona by any means other than collecting data for a longer time.

Let me show a plot from a paper that is work-in-progress, with the actual SNR that we can expect on pixels in a cross-sectional view of the Einstein ring that appears around the Sun:

Just look at the vertical axis. See those values there? That’s our realistic SNR when the Einstein ring is imaged through the solar corona, using a 1-meter telescope with a 10-meter focal length and an image sensor with pixels one square micron in size. These choices are consistent with just a tad under 5,000 pixels falling within the usable area of the Einstein ring, which can be used to reconstruct, in principle, a roughly 64 by 64 pixel image of the source. As this plot shows, a typical value for the SNR would be 0.01 with 1 second of light collecting time (integration time).

What does that mean? Well, for starters it means that, assuming everything else is absolutely, flawlessly perfect, with no motion blur, indeed no motion at all, no sources of contamination other than the solar corona, no quantization noise, no limitations on the sensor, achieving an SNR of 4,000 would require roughly 160 billion seconds of integration time. That is roughly 5,000 years.
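Since shot-noise-limited SNR grows only with the square root of the integration time, the arithmetic is straightforward:

```python
# Shot-noise-limited SNR grows as the square root of integration time,
# so scaling from the 1-second baseline is a simple ratio squared.
snr_1s = 0.01    # realistic per-pixel SNR with 1 s of integration
target = 4000.0  # the SNR assumed by the single-snapshot reconstruction

seconds = (target / snr_1s) ** 2
years = seconds / (365.25 * 24 * 3600)
print(f"{seconds:.3g} s, or about {years:,.0f} years")  # 1.6e+11 s, about 5,070 years
```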

And that is why we are not seriously contemplating image reconstruction from a single snapshot of the Einstein ring.

Move over, general relativity. Solar gravitational lens? Meh. Particle physics and the standard model? Child’s play.

Today, I had to replace the wax ring of a leaky toilet.

Thanks to this YouTube video for some useful advice, helping me avoid some trivial mistakes.

Acting as “release manager” for Maxima, the open-source computer algebra system, I am happy to announce that just minutes ago, I released version 5.46.

I am an avid Maxima user myself; I’ve used Maxima’s tensor algebra packages, in particular, extensively in the context of general relativity and modified gravity. I believe Maxima’s tensor algebra capabilities remain top notch, perhaps even unsurpassed. (What other CAS can derive Einstein’s field equations from the Einstein-Hilbert Lagrangian?)

The Maxima system has more than half a century of history: its roots go back to the 1960s, when I was still in kindergarten. I have been contributing to the project for nearly 20 years myself.

Anyhow, Maxima 5.46, here we go! I hope I made no blunders while preparing this release, but if I did, I’m sure I’ll hear about it shortly.

Between a war launched by a mad dictator, an occupation by “freedom convoy” mad truckers, and other mad shenanigans, it’s been a while since I last blogged about pure physics.

Especially about a topic close to my heart, modified gravity. John Moffat’s modified gravity theory MOG, in particular.

Back in 2020, a paper was published arguing that MOG may not be able to account for the dynamics of certain galaxies. The author studied a large, low surface brightness galaxy, Antlia II, which has very little mass, and concluded that the only way to fit MOG to this galaxy’s dynamics is by assuming outlandish values not only for the MOG theory’s parameters but also for the parameter that characterizes the mass distribution in the galaxy itself.

In fact, I would argue that any galaxy this light that does not follow Newtonian physics is bad news for modified theories of gravity; these theories predict deviations from Newtonian physics for large, heavy galaxies, but a galaxy this light is comparable in size to large globular clusters (which definitely behave the Newtonian way) so why would they be subject to different rules?

But then… For many years now, John and I (maybe I should only speak for myself in my blog, but I think John would concur) have been cautiously, tentatively raising the possibility that these faint satellite galaxies are really not very good test subjects at all. They do not look like relaxed, “virialized” mechanical systems; rather, they appear tidally disrupted by the host galaxy whose vicinity they inhabit.

We have heard arguments that this cannot be the case, that these satellites show no signs of recent interaction. And in any case, it is never a good idea for a theorist to question the data. We are not entitled to “alternative facts”.

But then, here’s a paper from just a few months ago with a very respectable list of authors on its front page, presenting new observations of two faint galaxies, one being Antlia II: “Our main result is a clear detection of a velocity gradient in Ant2 that strongly suggests it has recently experienced substantial tidal disruption.”

I find this result very encouraging. It is consistent with the basic behavior of the MOG theory: Systems that are too light to show effects due to modified gravity exhibit strictly Newtonian behavior. This distinguishes MOG from the popular MOND paradigm, which needs the somewhat ad hoc “external field effect” to account for the dynamics of diffuse objects that show no presence of dark matter or modified gravity.

The other day, someone sent me a link to a recent paper on arxiv.org:

Be careful. You never know when a rogue penguin might be targeting you.

The 64-antenna radio telescope complex MeerKAT is South Africa’s contribution to the Square Kilometre Array (SKA), an international project under development to create an unprecedented radio astronomy facility.

While the SKA project is still in its infancy, MeerKAT is fully functional, and it just delivered the most detailed, most astonishing images yet of the central region of our own Milky Way. Here is, for instance, an image of the Sagittarius A region that also hosts the Milky Way’s supermassive black hole, Sgr A*:

The filamentary structure that is seen in this image is apparently poorly understood. As for the scale of this image, notice that it is marked in arc seconds; at the estimated distance to Sgr A, one arc second translates into roughly 1/8th of a light year, so the image presented here is roughly a 15 by 15 light year area.
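The conversion is a one-liner (assuming the round 8 kpc distance to the Galactic center used above):

```python
import math

# Estimated distance to Sgr A*: roughly 8 kpc (the round value used above).
d_ly = 8.0 * 1000.0 * 3.2616        # distance in light years (1 pc = 3.2616 ly)
theta = math.radians(1.0 / 3600.0)  # one arc second, in radians

size_ly = d_ly * theta              # transverse size subtended by 1 arcsec
print(size_ly)  # ~0.13 light years, i.e., roughly 1/8 of a light year
```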

Though he passed away in September, I only learned about it tonight: Thanu Padmanabhan, renowned Indian theoretical physicist, is no longer with us. He was only 64 when he passed away, a result of a heart attack according to Wikipedia.

I never met Padmanabhan but I have several of his books on my bookshelf, including Structure Formation in the Universe and his more recent textbook Gravitation. I am also familiar with many of his papers.

I learned about his death just moments ago as I came across a paper by him on arXiv, carrying this comment: “Prof. T. Padmanabhan has passed away on 17th September, 2021, while this paper was under review in a journal.”

What an incredible loss. The brilliant flame of his intellect, extinguished. I am deeply saddened.

A tribute article about his life was published on arXiv back in October, but unfortunately was not cross-listed to gr-qc, and thus it escaped my attention until now.

Earlier today, I noticed something really strange. A lamp was radiating darkness. Or so it appeared.

Of course there was a mundane explanation. Now that the Sun is lower in the sky and the linden tree in front of our kitchen lost many of its leaves already, intense sunlight was reflecting off the hardwood floor in our dining area.

Still, it was an uncanny sight.

I live in a condominium townhouse. We’ve been living here for 25 years. We like the place.

Our unit, in particular, is the middle unit in a three-unit block. The construction is reasonably sound: proper foundations, cinderblock firewalls between the units, woodframe construction within, pretty run-of-the-mill by early 1980s North American standards. We have no major complaints.

Except that… for the past several years, every so often the house wobbled a bit. Almost imperceptibly, but still. At first, I thought it was a minor earthquake (not uncommon in this region because it is still subject to isostatic rebound from the last ice age; in fact we did live through a couple of notable earthquakes since we moved in here.) But no, it was no earthquake.

I thought perhaps it was related to the downtown light rail tunnel construction? But no, the LRT tunnels are quite some ways from here and, in any case, that part of the construction was finished long ago.

But then what the bleep is it? Could I be just imagining things?

Our phones have very sensitive acceleration sensors. Not for the first time, I managed to capture one of these events. A little earlier this afternoon, I heard the woodframe audibly creak as the house began to move again. I grabbed my phone and turned on a piece of software that samples the acceleration sensor at a reasonably high rate, about 200 times a second. Here is the result of the first few seconds of sampling:

The sinusoidal signal is unmistakably there, confirmed by a quick Fourier-analysis to be a signal just above 3 Hz in frequency:

Like Sheldon Cooper in The Big Bang Theory, I can claim that no, I am not crazy, and in this case not because my mother had me tested but because my phone’s acceleration sensor confirms my perception: Something indeed wobbles the house a little, enough to register on my phone’s acceleration sensor, measuring a peak-to-peak amplitude of roughly 0.05 m/s² (the vertical axis in the first graph is in g-units.) That wobble is certainly not enough to cause damage, but it is, I admit, a bit unnerving.
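Lacking the original sensor log here, the analysis can be sketched with synthetic data (the 3.1 Hz frequency, amplitude, and noise level below are assumptions): sample at 200 Hz, take a Fourier transform, and the peak frequency pops right out:

```python
import numpy as np

# Synthetic stand-in for the phone's accelerometer log: a 3.1 Hz wobble
# of ~0.025 m/s^2 amplitude (0.05 peak-to-peak) buried in sensor noise,
# sampled at 200 Hz for 10 seconds.
fs = 200.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = 0.025 * np.sin(2 * np.pi * 3.1 * t) + rng.normal(0, 0.01, t.size)

# Fourier analysis: the dominant peak identifies the wobble frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak)  # ~3.1 Hz
```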

So what is going on here? A neighbor engaging in some, ahem, vigorous activity? Our current neighbors are somewhat noisier than prior residents, occasionally training their respective herds of pygmy elephants to run up and down the stairs (or whatever it is that they are doing). But no, the events are just too brief in duration and too regular. Underground work, perhaps a secret hideout for the staff of the nearby Chinese embassy? Speaking of which, I admit I even thought that this ~3 Hz signal might be related to the reported cases of illness by embassy staff at several embassies around the world, but I just don’t see the connection: even if those cases are real and have an underlying common cause (as opposed to just mere random coincidences) it’s hard to see how a 3 Hz vibration can have anything to do with them.

OK, so I have a pretty good idea of what this thing isn’t, but then, what the bleepety-bleep is it?

I am not happy admitting it, but it’s true: There have been a few occasions in my life when I reacted just like this XKCD cartoon character when I first encountered specific areas of research.

Can you guess the author with the most physics books on what I call my “primary” bookshelf, the shelf right over my desk where I keep the books that I use the most often?

It would be Steven Weinberg. His 1972 Gravitation and Cosmology remains one of the best books ever on relativity theory, working out details in ways no other book does. His 2010 Cosmology remains a reasonably up-to-date textbook on modern cosmology. And then there is of course the 3-volume Quantum Theory of Fields.

Alas, Weinberg is no longer with us. He passed away yesterday, July 23, at the age of 88.

He will be missed.

We have a new manuscript on arXiv. Its title might raise some eyebrows: Algebraic wave-optical description of a quadrupole gravitational lens.

Say what? Algebra? Wave optics? Yes. It means that in this particular case, namely a gravitational lens that is described as a gravitational monopole with a quadrupole correction, we were able to find a closed form description that does not rely on numerical integration, especially no numerical integration of a rapidly oscillating function.

Key to this solution is a quartic equation. Quartic equations were first solved algebraically back in the 16th century by Italian mathematicians. The formal solution is usually considered to be of little practical value, as it entails cumbersome algebra, and polynomial equations can be routinely and efficiently solved using numerical methods.

But in this case… The amazing thing is that the algebraic solution reveals so much about the physics itself!

Take this figure from our paper, for instance:

On the left is light projected by the gravitational lens, its so-called point-spread function (PSF) which tells us how light from a point source is distributed on an imaginary projection screen by the lens. On the right? Why, that’s the discriminant of the quartic equation

$$x^4-2\eta\sin\mu \, x^3+\big(\eta^2-1\big)x^2+\eta\sin\mu \, x+{\textstyle\frac{1}{4}}\sin^2\mu=0,$$

in a plane characterized by polar coordinates $$(\eta,\tfrac{1}{2}\mu)$$, that is, $$\eta$$ as a radial coordinate and $$\tfrac{1}{2}\mu$$ as an azimuthal angle. When the discriminant is positive, the equation has either four real roots or two pairs of complex conjugate roots; when it is negative, it has a mix of two real and two complex roots. This direct connection between the algebra and the lensing phenomenon is unexpected and beautiful.
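As a sketch of how one can explore this numerically (the sample points in the $$(\eta,\mu)$$ plane are arbitrary choices), the roots of the quartic can be computed and classified with numpy:

```python
import numpy as np

def quartic_roots(eta, mu):
    """Roots of x^4 - 2*eta*sin(mu)*x^3 + (eta^2 - 1)*x^2
       + eta*sin(mu)*x + sin(mu)^2/4 = 0."""
    s = np.sin(mu)
    coeffs = [1.0, -2.0 * eta * s, eta**2 - 1.0, eta * s, 0.25 * s**2]
    return np.roots(coeffs)

def count_real(roots, tol=1e-6):
    # With real coefficients, complex roots come in conjugate pairs,
    # so this count is always 0, 2, or 4.
    return int(np.sum(np.abs(roots.imag) < tol))

# Probe a few arbitrary points of the (eta, mu) plane:
for eta, mu in [(0.5, 0.3), (2.0, 1.0), (1.5, 0.1)]:
    print(eta, mu, count_real(quartic_roots(eta, mu)))
```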

The full set of real roots of this equation can be shown in the form of an animation:

Of course one must read the paper in order for this animation to make sense, but I think it’s beautiful.

How good is this quartic solution? It is uncannily accurate. Here is a comparison of the PSF computed using the quartic solution and also using numerical integration, as well as some enlarged details from the so-called caustic boundary:

It’s only in the immediate vicinity of the caustic boundary that the quartic solution becomes less than accurate.

We can also use the quartic solution to simulate images seen through a telescope, i.e., the Einstein ring, or what survives of it, that would appear around a gravitational lens when we look at the lens through a telescope, with a point source of light situated behind the lens. We can see again that it’s only in the vicinity of the caustic boundary that the quartic solution produces artifacts instead of accurately reproducing the way spots of light widen into arcs:

This paper was so much joy to write! Also, for the first time in my life, this paper gave us a legitimate, non-pretentious reason to cite something from the 16th century: Cardano’s 1545 treatise in which the solutions of the quartic (as well as the cubic) are introduced, together with a discussion on the meaning of taking the square root of negative numbers.

Last fall, I received an intriguing request: I was asked to respond to an article on the topic of dark matter in an online publication that, I admit, I never heard of previously: Inference: International Review of Science.

But when I looked, I saw that the article in question was written by a scientist with impressive and impeccable credentials (Jean-Pierre Luminet, Director of Research at the CNRS Astrophysics Laboratory in Marseille and the Paris Observatory), and other contributors of the magazine included well-known personalities like Lawrence Krauss or Noam Chomsky.

More importantly, the article in question presented an opportunity to write a response that was not critical but constructive: inform the reader that the concept of modified gravity goes far beyond the so-called MOND paradigm, that it is a rich and vibrant field of theoretical research, and that until and unless dark matter is actually discovered, it remains a worthy pursuit. My goal was not self-promotion: I did not even mention my ongoing collaboration with John Moffat on his modified theory of gravity, MOG/STVG. Rather, it was simply to help dispel the prevailing myth that failures of MOND automatically translate into failures of all efforts to create a viable modified theory of gravitation.

I sent my reply and promptly forgot all about it until last month, when I received another e-mail from this publication: a thank you note letting me know that my reply would be published in the upcoming issue.

And indeed it was, as I was just informed earlier today: My Letter to the Editor, On Modified Gravity.

I am glad in particular that it was so well received by the author of the original article on dark matter.

The next in our series of papers describing the extended gravitational lens (extended, that is, in that we are no longer treating the lensing object as a gravitational monopole) is now out, on arXiv.

Here’s one of my favorite images from the paper, which superimposes the boundary of the quadrupole caustic (an astroid curve) onto a 3D plot showing the amplitude of the gravitational lens’s point-spread function.

I was having lots of fun working on this paper. It was, needless to say, a lot of work.

Because I’ve been asked a lot about this lately, I thought I’d also share my own take on this calculation in my blog.

Gravitoelectromagnetism (or gravitomagnetism, even gravimagnetism) is the name given to a formalism that shows how weak gravitational fields can be viewed as analogous to electromagnetic fields and how, in particular, the motion of a test particle is governed by equations that are similar to the equations of the electromagnetic Lorentz-force, with gravitational equivalents of the electric and magnetic vector potentials.

Bottom line: no, gravitoelectromagnetism does not explain the anomalous rotation curves of spiral galaxies. The effect is several orders of magnitude too small. Nor is the concept saved by the realization that spacetime is not asymptotically flat, so the boundary conditions must change. That effect, too, is much too small, at least five orders of magnitude too small in fact to be noticeable.

To sketch the key details, the radial acceleration on a test particle due to gravitoelectromagnetism in circular orbit around a spinning body is given roughly by

$$a=-\frac{4G}{c^2}\frac{Jv}{r^3},$$

where $$v$$ is the orbital speed of the test particle, $$J$$ is the angular momentum of the central body, and $$r$$ is the orbital radius. When we plug in the numbers for the solar system and the Milky Way, $$v\sim 200~{\rm km/s}$$, $$r\sim 8~{\rm kpc}$$ and $$J\sim 10^{67}~{\rm J}\cdot{\rm s}$$, we get

$$a\sim 4\times 10^{-16}~{\rm m}/{\rm s}^2.$$

This is roughly 400,000 times smaller than the centrifugal acceleration of the solar system in its orbit around the Milky Way, which is $$\sim 1.6\times 10^{-10}~{\rm m}/{\rm s}^2.$$
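The estimate is easy to reproduce; a back-of-the-envelope sketch using the rough Milky Way numbers quoted above:

```python
G = 6.674e-11   # gravitational constant, m^3 / (kg s^2)
c = 2.998e8     # speed of light, m/s
kpc = 3.086e19  # meters per kiloparsec

# Rough Milky Way numbers, as used in the text:
J = 1e67     # angular momentum of the Galaxy, J*s
v = 2.0e5    # orbital speed of the solar system, m/s (~200 km/s)
r = 8 * kpc  # galactocentric distance of the solar system

a_gem = 4 * G / c**2 * J * v / r**3  # gravitomagnetic acceleration
a_newton = v**2 / r                  # actual centripetal acceleration

print(a_gem)             # ~4e-16 m/s^2
print(a_newton)          # ~1.6e-10 m/s^2
print(a_newton / a_gem)  # ~400,000
```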

Taking into account that our universe is not flat, i.e., deviations from the flat spacetime metric approach unity at the comoving distance of $$\sim 15~{\rm Gpc},$$ only introduces a similarly small contribution on the scale of a galaxy, of $${\cal O}(10^{-6})$$ at $$\sim 15~{\rm kpc}.$$

A more detailed version of this calculation is available on my Web site.

Now it is time for me to be bold and contrarian. And for a change, write about physics in my blog.

From time to time, even noted physicists express the opinion in public that we do not understand quantum physics. In the professional literature, they write about the “measurement problem”; in public, they continue to muse about the meaning of measurement, whether or not consciousness is involved, and the rest of a debate that has continued unabated for more than a century.

Whether it is my arrogance or ignorance, however, when I read such stuff, I beg to differ. I feel like the alien Narim in the television series Stargate SG-1 in a conversation with Captain (and astrophysicist) Samantha Carter about the name of a cat:

CARTER: Uh, see, there was an Earth physicist by the name of Erwin Schrödinger. He had this theoretical experiment. Put a cat in a box, add a can of poison gas, activated by the decay of a radioactive atom, and close the box.
NARIM: Sounds like a cruel man.
CARTER: It was just a theory. He never really did it. He said that if he did do it at any one instant, the cat would be both dead and alive at the same time.
NARIM: Ah! Kulivrian physics. An atom state is indeterminate until measured by an outside observer.
CARTER: We call it quantum physics. You know the theory?
NARIM: Yeah, I’ve studied it… in among other misconceptions of elementary science.
CARTER: Misconception? You telling me that you guys have licked quantum physics?

What I mean is… Yes, in 2021, we “licked” quantum physics. Things that were mysterious in the middle of the 20th century aren’t (or at least, shouldn’t be) quite as mysterious in the third decade of the 21st century.

OK, let me explain by comparing two thought experiments: Schrödinger’s cat vs. the famous two-slit experiment.

The two-slit experiment first. An electron is fired by a cathode. It encounters a screen with two slits. Past that screen, it hits a fluorescent screen where the location of its arrival is recorded. Even if we fire one electron at a time, the arrival locations, seemingly random, will form a wave-like interference pattern. The explanation offered by quantum physics is that en route, the electron had no classically determined position (no position eigenstate, as physicists would say). Its position was a combination, a so-called superposition of many possible position states, so it really did go through both slits at the same time. En route, the amplitudes of these possible position states interfered with each other, resulting in the pattern of probabilities that was then mapped by the recorded arrival locations on the fluorescent screen.

Now on to the cat: We place that poor feline into a box together with a radioactive atom and an apparatus that breaks a vial of poison gas if the atom undergoes fission. We wait until the half-life of that atom, making it a 50-50 chance that fission has occurred. At this point, the atom is in a superposition of intact vs. split, and therefore, the story goes, the cat will also be in a superposition of being dead and alive. Only by opening the box and looking inside do we “collapse the wavefunction”, determining the actual state of the cat.

Can you spot a crucial difference between these two experiments, though? Let me explain.

In the first experiment involving electrons, knowledge of the final position (where the electron arrives on the screen) does not allow us to reconstruct the classical path that the electron took. It had no classical path. It really was in a superposition of many possible locations while en route.

In the second experiment involving the cat, knowledge of its final state does permit us to reconstruct its prior state. If the cat is alive, we have no doubt that it was alive all along. If it is dead, an experienced veterinarian could determine the moment of death. (Or just leave a video camera and a clock in the box along with the cat.) The cat did have a classical state all throughout the experiment, we just didn’t know what it was until we opened the box and observed its state.

The crucial difference, then, is summed up thus: Ignorance of a classical state is not the same as the absence of a classical state. Whereas in the second experiment, we are simply ignorant of the cat’s state, in the first experiment, the electron has no classical state of position at all.

These two thought experiments, I think, tell us everything we need to know about this so-called “measurement problem”. No, it does not involve consciousness. No, it does not require any “act of observation”. And most importantly, it does not involve any collapse of the wavefunction when you really think it through. More about that later.

What we call measurement is simply interaction by the quantum system with a classical object. Of course we know that nothing really is classical. Fluorescent screens, video cameras, cats, humans are all made of a very large but finite number of quantum particles. But for all practical (measurable, observable) intents and purposes all these things are classical. That is to say, these things are (my expression) almost in an eigenstate almost all the time. Emphasis on “almost”: it is as near to certainty as you can possibly imagine, deviating from certainty only after the hundredth, the thousandth, the trillionth or whichever decimal digit.

Interacting with a classical object confines the quantum system to an eigenstate. Now this is where things really get tricky and old school at the same time. To explain, I must invoke a principle from classical, Lagrangian physics: the principle of least action. Almost all of physics (including classical mechanics, electrodynamics, even general relativity) can be derived from a so-called action principle, the idea that the system evolves from a known initial state to a known final state in a manner such that a number that characterizes the system (its “action”) is minimal.

The action principle sounds counterintuitive to many students of physics when they first encounter it, as it presupposes knowledge of the final state. But this really is simple math if you are familiar with second-order differential equations. A unique solution to such an equation can be specified in two ways. Either we specify the value of the unknown function at two different points, or we specify the value of the unknown function and its first derivative at one point. The former corresponds to Lagrangian physics; the latter, to Hamiltonian physics.
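To make the two ways of pinning down a solution concrete, take the simplest second-order equation, the harmonic oscillator $$\ddot x=-x$$; a sketch using scipy, with the boundary values deliberately chosen so both specifications select $$x(t)=\sin t$$:

```python
import numpy as np
from scipy.integrate import solve_ivp, solve_bvp

# One equation, x'' = -x, pinned down two ways.

# Hamiltonian-style: value and first derivative at a single point,
# x(0) = 0, x'(0) = 1.
ivp = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2), [0.0, 1.0],
                dense_output=True)

# Lagrangian-style: values at two points, x(0) = 0 and x(pi/2) = 1.
t = np.linspace(0.0, np.pi / 2, 50)
bvp = solve_bvp(lambda t, y: np.vstack([y[1], -y[0]]),
                lambda ya, yb: np.array([ya[0], yb[0] - 1.0]),
                t, np.zeros((2, t.size)))

# Both specifications single out the same solution, x(t) = sin(t):
print(ivp.sol(np.pi / 4)[0])  # ~0.707
print(bvp.sol(np.pi / 4)[0])  # ~0.707
```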

This works well in the context of classical physics. Even though we develop the equations of motion using Lagrangian physics, we do so only in principle. Then we switch over to Hamiltonian physics. Using observed values of the unknown function and its first derivative (think of these as positions and velocities) we solve the equations of motion, predicting the future state of the system.

This approach hits a snag when it comes to quantum physics: the nature of the unknown function is such that its value and its first derivative cannot both be determined as ordinary numbers at the same time. So while Lagrangian physics still works well in the quantum realm, Hamiltonian physics does not. But Lagrangian physics implies knowledge of the future, final state. This is what we mean when we pronounce that quantum physics is fundamentally nonlocal.

Oh, did I just say that Hamiltonian physics doesn’t work in the quantum realm? But then why is it that every quantum physics textbook begins, pretty much, with the Hamiltonian? Schrödinger’s famous equation, for starters, is just the quantum version of that Hamiltonian!

Aha! This is where the culprit is. With the Hamiltonian approach, we begin with presumed knowledge of initial positions and velocities (values and first derivatives of the unknown functions). Knowledge we do not have. So we evolve the system using incomplete knowledge. Then, when it comes to the measurement, we invoke our deus ex machina. Like a bad birthday party surprise, we open the magic box, pull out our “measurement apparatus” (which we pretended to not even know about up until this moment), confine the quantum system to a specific measurement value, retroactively rewrite the description of our system with the apparatus now present all along, and call this discontinuous change in the system’s description “wavefunction collapse”.

And then we spend a century arguing about its various interpretations instead of recognizing that the presumed collapse was never a physical process: rather, it amounts to us changing how we describe the system.

This is the nonsense for which I have no use, even if it makes me sound both arrogant and ignorant at the same time.

To offer a bit of a technical background to support the above (see my Web site for additional technical details): A quantum theory can be constructed starting with classical physics in a surprisingly straightforward manner. We start with the Hamiltonian (I know!), written in the following generic form:

$$H = \frac{{\bf p}^2}{2m} + V({\bf q}),$$

where $${\bf p}$$ are generalized momenta, $${\bf q}$$ are generalized positions and $$m$$ is mass.

We multiply this equation by the unit complex number $$\psi=e^{i({\bf p}\cdot{\bf q}-Ht)/\hbar}.$$ We are allowed to do this trivial bit of algebra with impunity, as this factor is never zero.

Next, we notice the identities, $${\bf p}\psi=-i\hbar\nabla\psi,$$ $$H\psi=i\hbar\partial_t\psi.$$ Using these identities, we rewrite the equation as

$$i\hbar\partial_t\psi=\left[-\frac{\hbar^2}{2m}\nabla^2+V({\bf q})\right]\psi.$$

There you have it, the time-dependent Schrödinger equation in its full glory. Or… not quite, not yet. It is formally Schrödinger’s equation but the function $$\psi$$ is not some unknown function; we constructed it from the positions and momenta. But here is the thing: If two functions, $$\psi_1$$ and $$\psi_2,$$ are solutions of this equation, then because the equation is linear and homogeneous in $$\psi,$$ their linear combinations are also solutions. But these linear combinations make no sense in classical physics: they represent states of the system that are superpositions of classical states (i.e., the electron is now in two or more places at the same time.)
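For the free-particle case ($$V=0$$, constant $${\bf p}$$, one spatial dimension) the construction can be checked symbolically; a sketch with sympy:

```python
import sympy as sp

x, t, p, m, hbar = sp.symbols('x t p m hbar', positive=True)

# Free particle (V = 0), with constant classical momentum p and
# H = p^2/(2m); construct psi exactly as in the text.
H = p**2 / (2 * m)
psi = sp.exp(sp.I * (p * x - H * t) / hbar)

# The two identities: p*psi = -i*hbar*d(psi)/dx, H*psi = i*hbar*d(psi)/dt.
id1 = sp.simplify(-sp.I * hbar * sp.diff(psi, x) - p * psi)
id2 = sp.simplify(sp.I * hbar * sp.diff(psi, t) - H * psi)
print(id1, id2)  # both 0

# And the resulting free Schrödinger equation is satisfied identically:
residual = sp.simplify(
    sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2))
print(residual)  # 0
```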

Quantum physics begins when we accept these superpositions as valid descriptions of a physical system (as indeed we must, because this is what experiment and observation dictate).

The presence of a classical apparatus with which the system interacts at some future moment in time is not well captured by the Hamiltonian formalism. But the Lagrangian formalism makes it clear: it selects only those states of the system that are consistent with that interaction. This means indeed that a full quantum mechanical description of the system requires knowledge of the future. The apparent paradox is that this knowledge of the future does not causally influence the past, because the actual evolution of the system remains causal at all times: only the initial description of the system needs to be nonlocal in the same sense in which 19th century Lagrangian physics is nonlocal.