Imagine a health care system that is created and managed without the help of doctors. Imagine getting radiation treatment without the help of medical physicists.

Imagine an education system that is created and managed without educators.

Imagine a system of highways and railways created and managed without transportation engineers.

Imagine an electrical infrastructure that is created and managed without electrical engineers. Nuclear power plants without physicists. An economy that is managed without professional economists. A communications infrastructure created and managed without radio engineers, software and network engineers.

This is Doug Ford’s vision for the province of Ontario, presented by none other than Doug Ford himself on his Twitter feed, as he proudly proclaims that his government, his party, won’t listen to academics: the very people we pay to acquire professional knowledge and offer it for the benefit of the public.

Guess this is what happens when ideology and blatant populism trump facts. (Pun unintended, but disturbingly appropriate.)

Yesterday, I received a nice surprise via e-mail: A link to a new article in Astronomy magazine (also republished by Discover magazine) about our efforts to solve the Pioneer Anomaly.

I spent several years working with Slava Turyshev and others on this. It was a lot of hard, painstaking work.

As part of my (both published and unpublished) contributions, I learned how to do precision modeling of satellite orbits in the solar system. I built a precision navigation application that was sufficiently accurate to reconstruct the Pioneer trajectories and observe the anomaly. I built a semi-analytical and later, a numerical (ray-tracing) model to estimate the directional thermal emissions of the two spacecraft.

But before all that, I built software to extract telemetry from the old raw data files, recorded as received by the Deep Space Network. These were the files that lay forgotten on magnetic tape for many years, eventually to be transferred to a now obsolete optical disc format and then, thanks to the efforts of Larry Kellogg, to modern media. My own effort to make sense of these telemetry files is what got me involved with the Pioneer Anomaly project in the first place.

These were fun days. And I’d be lying if I said that I have no tinge of regret that in the end, we found no anomalous acceleration. After all, confirmation that the trajectories of these two Pioneers are affected by an unmodeled force, likely indicating the need for new physics… that would have been tremendous. Instead, we found something mundane, relegated (at best) to the footnotes of science history.

Which is why I felt a sense of gratitude reading this article. It told me that our efforts have not been completely forgotten.

A while back, I wrote about the uncanny resemblance between the interstellar asteroid ‘Oumuamua and the fictitious doomsday weapon Iilah in A. E. van Vogt’s 1948 short story Dormant.

And now I am reading that Iilah’s, I mean, ‘Oumuamua’s trajectory changed due to non-gravitational forces. The suspect is comet-like outgassing, but observations revealed no gas clouds, so it is a bit of a mystery.

Even if this is purely a natural phenomenon (and I firmly believe that it is, just in case it needs to be said) it is nonetheless mind-blowingly fascinating.

I am reading some breathless reactions to a preprint posted a few days ago by the MiniBooNE experiment. The experiment is designed to detect neutrinos, in particular neutrino oscillations (the change of one neutrino flavor into another.)

The headlines are screaming. Evidence found of a New Fundamental Particle, says one. Strange New Particle Could Prove Existence of Dark Matter, says another. Or how about, A Major Physics Experiment Just Detected A Particle That Shouldn’t Exist?

The particle in question is the so-called sterile neutrino. It is a neat concept, one I happen to quite like. It represents an elegant resolution to the puzzle of neutrino handedness. This refers to the chirality of neutrinos, essentially the direction in which they spin compared to their direction of motion. We only ever see “left handed” neutrinos. But neutrinos have rest mass. So they move slower than light. That means that if you run fast enough and outrun a left-handed neutrino, so that relative to you it is moving backwards (but still spins in the same direction as before), when you look back, you’ll see a right-handed neutrino. This implies that right-handed neutrinos should be seen just as often as left-handed neutrinos. But they aren’t. How come?

Sterile neutrinos offer a simple answer: We don’t see right-handed neutrinos because they don’t interact (they are sterile). That is to say, when a neutrino interacts (emits or absorbs a Z-boson, or emits or absorbs a W-boson while changing into a charged lepton), it has to be a left-handed neutrino in the interaction’s center-of-mass frame.

If this view is true and such sterile neutrinos exist, even though they cannot be detected directly, their existence would skew the number of neutrino oscillation events. As to what neutrino oscillations are: neutrinos are massive. But unlike other elementary particles, neutrinos do not have a well-defined mass associated with their flavor (electron, muon, or tau neutrino). When a neutrino has a well-defined flavor (is in a flavor eigenstate), it has no well-defined mass, and vice versa. This means that as a neutrino propagates, its flavor content can appear to change (oscillate) from one state to another; e.g., a neutrino produced as a muon neutrino may appear at the detector as an electron neutrino. These flavor oscillations are rare, but they can be detected, and that’s what the MiniBooNE experiment is looking for.
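
For concreteness, the standard two-flavor vacuum oscillation probability is \(P(\nu_\mu\to\nu_e)=\sin^2 2\theta\,\sin^2(1.27\,\Delta m^2 L/E)\). Here is a minimal sketch; the parameter values below are illustrative (not fitted), merely using a short baseline and beam energy roughly in the MiniBooNE regime with an LSND-like mass splitting:

```python
import math

def p_oscillation(sin2_2theta, dm2_ev2, L_km, E_gev):
    """Two-flavor vacuum oscillation probability P(nu_mu -> nu_e).

    P = sin^2(2*theta) * sin^2(1.267 * dm^2 * L / E), with dm^2 in eV^2,
    L in km and E in GeV; the factor 1.267 absorbs hbar, c and the
    unit conversions.
    """
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_gev) ** 2

# Illustrative (not fitted) values: a short-baseline setup roughly in the
# MiniBooNE regime, with an LSND-like dm^2 of order 1 eV^2.
p = p_oscillation(sin2_2theta=0.004, dm2_ev2=1.0, L_km=0.541, E_gev=0.6)
print(f"P(nu_mu -> nu_e) ~ {p:.1e}")
```

The point is only the structure of the formula: the oscillation probability depends on \(L/E\), which is what lets an experiment distinguish genuine oscillations from a flat background.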

And that is indeed what MiniBooNE found: an excess of events that is consistent with neutrino oscillations.

MiniBooNE detects electron neutrinos. These can come from all kinds of (background) sources. But one particular source is an intense beam of muon neutrinos produced at Fermilab. Because of neutrino oscillations, some of the neutrinos in this beam will be detected as electron neutrinos, yielding an excess of electron neutrino events above background.

And that’s exactly what MiniBooNE sees, with very high confidence: 4.8σ. That’s almost the generally accepted 5σ detection threshold for a new particle. But this value of 4.8σ is not about a new particle. It is the significance associated with the overall excess of electron neutrino detection events: an excess that is expected from neutrino oscillations.

So what’s the big deal, then? Why the screaming headlines? As far as I can tell, it all boils down to this sentence in the paper: “Although the data are fit with a standard oscillation model, other models may provide better fits to the data.”

What this somewhat cryptic sentence means is best illustrated by a figure from the paper:

This figure shows the excess events (above background) detected by MiniBooNE, but also the expected number of excess events from neutrino oscillations. Notice how only the first two red data points fall significantly above the expected number. (In case you are wondering, POT means Protons On Target, that is to say, the number of protons hitting a beryllium target at Fermilab, producing the desired beam of muon neutrinos.)

Yes, these two data points are intriguing. Yes, they may indicate the existence of new physics beyond two-neutrino oscillations. In particular, they may indicate the existence of another oscillation mode, muon neutrinos oscillating into sterile neutrinos that, in turn, oscillate into electron neutrinos, yielding this excess.

Mind you, if this is a sign of sterile neutrinos, these sterile neutrinos are unlikely dark matter candidates; their mass would be too low.

Or these two data points are mere statistical flukes. After all, as the paper says, “the best oscillation fit to the excess has a probability of 20.1%”. That is far from improbable. Sure, the fact that it is only 20.1% can be interpreted as a sign of some tension between the Standard Model and this experiment. But it is certainly not a discovery of new physics, and absolutely not a confirmation of a specific model of new physics, such as sterile neutrinos.

And indeed, the paper makes no such claim. The word “sterile” appears only four times in the paper, in a single sentence in the introduction: “[…] more exotic models are typically used to explain these anomalies, including, for example, 3+N neutrino oscillation models involving three active neutrinos and N additional sterile neutrinos [6-14], resonant neutrino oscillations [15], Lorentz violation [16], sterile neutrino decay [17], sterile neutrino non-standard interactions [18], and sterile neutrino extra dimensions [19].”

So yes, there is an intriguing sign of an anomaly. Yes, it may point the way towards new physics. It might even be new physics involving sterile neutrinos.

But no, this is not a discovery. At best, it’s an intriguing hint; quite possibly, just a statistical fluke.

So why the screaming headlines, then? I wish I knew.

There is an excellent diagram accompanying an answer on StackExchange, and I’ve been meaning to copy it here, because I keep losing the address.

The diagram summarizes many measures of cosmic expansion in a nice, compact, but not necessarily easy-to-understand form:

So let me explain how to read this diagram. First of all, time is going from bottom to top. The thick horizontal black line represents the moment of now. Imagine this line moving upwards as time progresses.

The thick vertical black line is here. So the intersection of the two thick black lines in the middle is the here-and-now.

Distances are measured in terms of the comoving distance, which is basically telling you how far a distant object would be now, if you had a long measuring tape to measure its present-day location.

The area shaded red (marked “past light cone”) is all the events that happened in the universe that we could see, up to the moment of now. The boundary of this area is everything in this universe from which light is reaching us right now.

So just for fun, let us pick an object at a comoving distance of 30 gigalightyears (Gly). Look at the dotted vertical line corresponding to 30 Gly, halfway between the 20 and 40 marks (either side, doesn’t matter.) It intersects the boundary of the past light cone when the universe was roughly 700 million years old. Good, there were already young galaxies back then. If we were observing such a galaxy today, we’d be seeing it as it appeared when the universe was 700 million years old. Its light would have spent 13.1 billion years traveling before reaching our instruments.

Again look at the dotted vertical line at 30 Gly and extend it all the way to the “now” line. What does this tell you about this object? You can read the object’s redshift (z) off the diagram: it is roughly 9, which means the frequency of its light is reduced by a factor of 1 + z ≈ 10.

You can also read the object’s recession velocity, which is just a little over two times the vacuum speed of light. Yes… faster than light. This recession velocity is based on the rate of change of the scale factor, essentially the Hubble parameter times the comoving distance. The Doppler velocity that one would deduce from the object’s redshift yields a value less than the vacuum speed of light. (Curved spacetime is tricky; distances and speeds can be defined in various ways.)

Another thing about this diagram is that in addition to the past, it also sketches the future, taking into account the apparent accelerating expansion of the universe. Notice the light red shaded area marked “event horizon”. This area contains everything that we will be able to see at our present location, throughout the entire history of the universe, all the way to the infinite future. Things (events) outside this area will never be seen by us, will never influence us.

Note how the dotted line at 30 Gly intersects this boundary when the universe is about 5 billion years old. Yes, this means that we will only ever see the first roughly 5 billion years of the existence of a galaxy at a comoving distance of 30 Gly. Over time, light from this galaxy will be redshifted ever more, until it eventually appears to “freeze” and fades from sight, never appearing to become older than about 5 billion years.

Notice also how the dashed curves marking constant values of redshift bend inward, closer and closer to the “here” location as we approach the infinite future. This is a direct result of accelerating expansion: Things nearer and nearer to us will be caught up in the expansion, accelerating away from our location. Eventually this will stop, of course; cosmic acceleration will not rip apart structures that are gravitationally bound. But we will end up living in a true “island universe” in which nothing is seen at all beyond the largest gravitationally bound structure, the local group of galaxies. Fortunately that won’t happen anytime soon; we have many tens of billions of years until then.

Lastly, the particle horizon (blue lines) essentially marks the size of the visible part of the universe at any given time. Notice how the width of the interval marked by the intersection of the now line and the blue lines is identical to the width of the past light cone at the bottom of this diagram. Notice also how the blue lines correspond to infinite redshift.
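
Several of the numbers read off the diagram can be reproduced with a few lines of code. Here is a minimal sketch, assuming a flat Lambda-CDM model with Planck-like parameters (H0 = 67.7 km/s/Mpc, Ωm = 0.31; radiation is ignored, which slightly inflates the particle horizon):

```python
import math

# Flat Lambda-CDM with Planck-like parameters, assumed for illustration.
H0 = 67.7               # Hubble constant, km/s/Mpc
OM, OL = 0.31, 0.69     # matter and dark-energy density parameters
C = 299792.458          # speed of light, km/s
R_H = C / H0            # Hubble radius c/H0, in Mpc
MPC_TO_GLY = 3.2616e-3  # 1 Mpc = 3.2616 million light years

def E(z):
    """Dimensionless Hubble rate H(z)/H0."""
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance(z, n=100_000):
    """D_C = (c/H0) * integral_0^z dz'/E(z'), midpoint rule, in Mpc."""
    h = z / n
    return R_H * h * sum(1.0 / E((i + 0.5) * h) for i in range(n))

# The object picked off the diagram, at redshift z ~ 9:
d_mpc = comoving_distance(9.0)
d_gly = d_mpc * MPC_TO_GLY
v_over_c = d_mpc / R_H  # recession velocity now: v = H0 * D_C, in units of c

# Particle horizon: the same integral taken to infinite redshift. The
# substitution a = 1/(1+z) turns it into integral_0^1 da/sqrt(OM*a + OL*a^4).
n = 1_000_000
h = 1.0 / n
ph_gly = R_H * MPC_TO_GLY * h * sum(
    1.0 / math.sqrt(OM * a + OL * a**4) for a in ((i + 0.5) * h for i in range(n)))

print(f"comoving distance at z = 9: {d_gly:.1f} Gly")
print(f"recession velocity: {v_over_c:.2f} c")
print(f"particle horizon today: {ph_gly:.1f} Gly")
```

With these assumptions, the comoving distance at z ≈ 9 comes out near 30 Gly, the recession velocity just above 2c, and the particle horizon close to the commonly quoted ~46-47 Gly radius of the observable universe.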

As I said, this diagram is not an easy read but it is well worth studying.

Stephen Hawking passed away over a month ago, but I just came across this beautiful tribute from cartoonist Sean Delonas. It was completely unexpected (I was flipping through the pages of a magazine) and, I admit, it had quite an impact on me. Not the words, inspirational though they may be… the image. The empty wheelchair, the frail human silhouette walking away in the distance.

The recent discovery of a galaxy, NGC1052-DF2, with no or almost no dark matter made headlines worldwide.

 Nature 555, 629–632 (29 March 2018)

Somewhat paradoxically, it has been proclaimed by some as evidence that the dark matter paradigm prevails over theories of modified gravity. And, as usual, many of the arguments were framed in the context of dark matter vs. MOND, as if MOND was a suitable representative of all modified gravity theories. One example is a recent Quora question, Can we say now that all MOND theories is proven false, and there is really dark matter after all? I offered the following in response:

First of all, allow me to challenge the way the question is phrased: “all MOND theories”… Please don’t.

MOND (MOdified Newtonian Dynamics) is not a theory. It is an ad hoc, phenomenological replacement of the Newtonian acceleration law with a simplistic formula that violates even basic conservation laws. The formula fits spiral galaxy rotation curves reasonably well, consistent with the empirical Tully-Fisher law that relates galaxy masses and rotational velocities, but it fails for just about everything else, including low-density globular clusters, dwarf galaxies, clusters of galaxies, not to mention cosmological observations.
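
For context, the acceleration law in question: MOND replaces the Newtonian acceleration \(a_N\) with an \(a\) satisfying \(a\,\mu(a/a_0)=a_N\), where \(a_0\approx 1.2\times 10^{-10}~{\rm m/s}^2\) and \(\mu(x)\to 1\) for \(x\gg 1\), \(\mu(x)\to x\) for \(x\ll 1\). A minimal sketch; the “simple” interpolating function and the galaxy mass below are illustrative choices, not fits:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
A0 = 1.2e-10      # Milgrom's acceleration scale a0, m/s^2
M_SUN = 1.989e30  # solar mass, kg

def mond_accel(a_newton):
    """Solve a * mu(a/a0) = a_N for a, using the commonly used 'simple'
    interpolating function mu(x) = x/(1+x); the resulting quadratic
    a^2 - a_N*a - a_N*a0 = 0 has the positive root below."""
    return 0.5 * (a_newton + math.sqrt(a_newton**2 + 4.0 * a_newton * A0))

# Deep-MOND limit (a << a0): mu(x) ~ x gives a = sqrt(a_N * a0). For circular
# orbits (a = v^2/r, a_N = G*M/r^2) the radius cancels, yielding a flat
# rotation curve with v^4 = G*M*a0: the Tully-Fisher scaling mentioned above.
M = 6e10 * M_SUN  # illustrative baryonic mass of a large spiral galaxy
v_flat = (G * M * A0) ** 0.25
print(f"flat rotation speed ~ {v_flat / 1e3:.0f} km/s")
```

For a Milky Way-like baryonic mass this lands in the observed 150-200 km/s ballpark, which is why the formula fits rotation curves; none of this helps it with clusters or cosmology.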

MOND was given a reprieve in the form of Jacob Bekenstein’s TeVeS (Tensor-Vector-Scalar gravity), an impressive theoretical exercise to create a proper classical field theory that reproduces the MOND acceleration law in the weak-field, low-velocity limit. However, TeVeS suffers from the same issues MOND does when confronted with data beyond galaxy rotation curves. Moreover, the recent gravitational wave event GW170817, accompanied by the gamma-ray burst GRB 170817A from the same astrophysical event, demonstrated that gravitational and electromagnetic waves propagate at essentially the same speed, putting all bimetric theories (of which TeVeS is an example) in jeopardy.

But that’s okay. News reports suggesting the death of modified gravity are somewhat premature. While MOND has often been used as a straw man by opponents of modified gravity, there are plenty of alternatives, many of them much better equipped than MOND to deal with diverse astrophysical phenomena. For instance, f(R) gravity, entropic gravity, Horava-Lifshitz gravity, galileon theory, DGP (Dvali-Gabadadze-Porrati) gravity… The list goes on and on. And yes, it also includes John Moffat’s STVG (Scalar-Tensor-Vector Gravity, not to be confused with TeVeS; the two are very different animals), better known as MOG, a theory to which I also contributed.

As to NGC1052-DF2, for MOG that’s actually an easy one. When you plug in the values for the MOG approximate solution that we first published about a decade ago, you get an effective dynamical mass that is less than twice the visible (baryonic) mass of this galaxy, which is entirely consistent with its observed velocity dispersion.

In fact, I’d go so far as to boldly suggest that NGC1052-DF2 is a bigger challenge for the dark matter paradigm than it is for some theories of modified gravity (MOG included). Why? Because there is no known mechanism that would separate dark matter from stellar mass.

Compare this to the infamous Bullet Cluster: a pair of galaxy clusters that have undergone a collision. According to the explanation offered within the dark matter paradigm (NB: Moffat and Brownstein showed, over a decade ago, that the Bullet Cluster can also be explained without dark matter, using MOG), the clusters’ dark matter halos flew through each other, interacting only through gravity, and so did the stars: stars are so tiny compared to the distances between them that stellar collisions are extremely unlikely, so stars, too, behave like a pressureless medium. The interstellar and intergalactic clouds of gas, however, did collide, heating up to millions of degrees (producing bright X-rays) and losing much of their momentum. The result: a cloud of gas (but few stars and little dark matter) in the middle, and dark matter plus stars (but little gas) on either side. This separation process works because stars and dark matter behave like a pressureless medium, whereas gas does not.

But in the case of NGC1052-DF2, some mechanism must have separated stars from dark matter, so we end up with a galaxy (one that actually looks nice, with no signs of recent disruption). I do not believe that there is currently a generally accepted, viable candidate mechanism that could accomplish this.

Stephen Hawking died earlier today.

Hawking was diagnosed with ALS in the year I was born, in 1963.

Defying his doctor’s predictions, he refused to die after a few years. Instead, he carried on for another astonishing 55 years, living a full life.

Public perception notwithstanding, he might not have been the greatest living physicist, but he was certainly a great physicist. The fact that he was able to accomplish so much despite his debilitating illness made him an extraordinary human being, a true inspiration.

Here is a short segment, courtesy of CTV Kitchener, filmed earlier today at the Perimeter Institute. My friend and colleague John Moffat, who met Hawking many times, is among those being interviewed:

There is a very interesting concept in the works at NASA, to which I had a chance to contribute a bit: the Solar Gravitational Telescope.

The idea, explained in this brand new NASA video, is to use the bending of light by the Sun to form an image of distant objects.

The resolving power of such a telescope would be phenomenal. In principle, it is possible to use it to form a megapixel-resolution image of an exoplanet as far as 100 light years from the Earth.

The technical difficulties are, however, challenging. For starters, a probe would need to be placed at least 550 astronomical units (about four times the distance to Voyager 1) from the Sun, precisely located on the opposite side of the Sun relative to the exoplanet. The probe would then have to mimic the combined motion of our Sun (dragged about by the gravitational pull of planets in the solar system) and the exoplanet (orbiting its own sun). Light from the Sun will need to be carefully blocked to ensure that we capture light from the exoplanet with as little noise as possible. And each time the probe takes a picture of the ring of light (the Einstein ring) around the Sun, it will record the combined light of many adjacent pixels on the exoplanet. The probe will have to traverse a region that is roughly a kilometer across, taking pictures one pixel at a time, and the images will then need to be deconvolved. The fact that the exoplanet itself is not constant in appearance (it will go through phases of illumination, it may have changing cloud cover, perhaps even changes in vegetation) further complicates matters. Still… it can be done, and it can be accomplished using technology we already have.
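
Incidentally, the 550 AU figure follows directly from general relativity: a light ray passing the Sun with impact parameter \(b\) is deflected by \(\alpha=4GM/(c^2b)\), so rays grazing the solar limb converge at a distance of roughly \(d=b/\alpha\). A quick back-of-the-envelope check:

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
C = 299792458.0      # speed of light, m/s
R_SUN = 6.957e8      # solar radius, m
AU = 1.495978707e11  # astronomical unit, m

# d = b / alpha = b^2 * c^2 / (4 * G * M), for a limb-grazing ray (b = R_SUN).
d_focus = R_SUN**2 * C**2 / (4.0 * G * M_SUN)
print(f"minimum focal distance: {d_focus / AU:.0f} AU")
```

Rays with larger impact parameters focus farther out, which is why the lens has a focal line rather than a focal point: the probe remains useful as it continues to recede from the Sun.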

By its very nature, it would be a very long duration mission. If such a probe was launched today, it would take 25-30 years for it to reach the place where light rays passing on both sides of the Sun first meet and thus the focal line begins. It will probably take another few years to collect enough data for successful deconvolution and image reconstruction. Where will I be 30-35 years from now? An old man (or a dead man). And of course no probe will be launched today; even under optimal circumstances, I’d say we’re at least a decade away from launch. In other words, I have no chance of seeing that high-resolution exoplanet image unless I live to see (at least) my 100th birthday.

Still, it is fun to dream, and fun to participate in such things. Though now I better pay attention to other things as well, including things that, well, help my bank account, because this sure as heck doesn’t.

No, it isn’t Friday yet.

But it seems that someone at CTV Morning Live wishes it was. Why else would they have told us that yesterday, February 28, was a Thursday? (Either that or they are time travelers from 2019.)

Then again, maybe I should focus on what they are actually saying, not on a trivial mistake they made: that even as parts of Europe that rarely see snow are blanketed by the white stuff, places in Canada and Siberia see unprecedented mild weather. A fluke or further evidence of climate change disrupting the polar vortex?

Enough of politics and cats. Time to blog about math and physics again.

Back in my high school days, when I was becoming familiar with calculus and differential equations (yes, I was a math geek) something troubled me. Why were certain expressions called “linear” when they obviously weren’t?

I mean, an expression like $$Ax+B$$ is obviously linear. But who in his right mind would call something like $$x^3y + 3e^xy+5$$ “linear”? Yet when it comes to differential equations, they’d tell you that $$x^3y+3e^xy+5-y^{\prime\prime}=0$$ is “obviously” a second-order, linear ordinary differential equation (ODE). What gives? And why is, say, $$xy^3+3e^xy-y^{\prime\prime}=0$$ not considered linear?

The answer is quite simple, actually, but for some reason when I was 14 or so, it took a very long time for me to understand.

Here is the recipe. Take an equation like $$x^3y+3e^xy+5-y^{\prime\prime}=0$$. Throw away the inhomogeneous bit, leaving the $$x^3y+3e^xy-y^{\prime\prime}=0$$ part. Apart from the fact that it is solved (obviously) by $$y=0$$, there is another thing that you can discern immediately. If $$y_1$$ and $$y_2$$ are both solutions, then so is their linear combination $$\alpha y_1+\beta y_2$$ (with $$\alpha$$ and $$\beta$$ constants), which you can see by simple substitution, as it yields $$\alpha(x^3y_1+3e^xy_1-y_1^{\prime\prime}) + \beta(x^3y_2+3e^xy_2-y_2^{\prime\prime})$$ for the left-hand side, with both terms obviously zero if $$y_1$$ and $$y_2$$ are indeed solutions.

So never mind that it contains higher derivatives. Never mind that it contains powers, even transcendental functions of the independent variable $$x$$. What matters is that the expression is linear in the dependent variable. As such, the linear combination of any two solutions of the homogeneous equation is also a solution.

Better yet, when it comes to the solutions of inhomogeneous equations, adding a solution of the homogeneous equation to any one of them yields another solution of the inhomogeneous equation.

Notably, the Schrödinger equation of quantum mechanics is a homogeneous, linear differential equation. Linearity thus becomes a fundamental aspect of quantum physics: given two solutions (representing two distinct physical states), their linear combination is also a solution, representing another possible physical state.
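
The superposition property is easy to verify numerically. Below is a quick sanity check for the homogeneous equation discussed above, \(y^{\prime\prime}=(x^3+3e^x)y\), using a hand-rolled RK4 integrator (standard library only): integrate two independent solutions, then a solution started from linearly combined initial data, and compare.

```python
from math import exp

def f(x, u):
    """First-order form of y'' = (x^3 + 3*e^x)*y, with state u = (y, y')."""
    y, yp = u
    return (yp, (x**3 + 3.0 * exp(x)) * y)

def rk4(u0, x_end=0.5, n=2000):
    """Classical fourth-order Runge-Kutta from x = 0 to x_end."""
    h = x_end / n
    x, u = 0.0, u0
    for _ in range(n):
        k1 = f(x, u)
        k2 = f(x + h/2, tuple(ui + h/2*ki for ui, ki in zip(u, k1)))
        k3 = f(x + h/2, tuple(ui + h/2*ki for ui, ki in zip(u, k2)))
        k4 = f(x + h, tuple(ui + h*ki for ui, ki in zip(u, k3)))
        u = tuple(ui + h/6*(a + 2*b + 2*c + d)
                  for ui, a, b, c, d in zip(u, k1, k2, k3, k4))
        x += h
    return u

# Two independent solutions of the homogeneous equation, and the solution
# whose initial data is the linear combination 2*(1,0) - 3*(0,1):
y1, _ = rk4((1.0, 0.0))   # y(0)=1, y'(0)=0
y2, _ = rk4((0.0, 1.0))   # y(0)=0, y'(0)=1
y3, _ = rk4((2.0, -3.0))

# Superposition: y3 should equal 2*y1 - 3*y2.
print(abs(y3 - (2.0*y1 - 3.0*y2)))
```

Since RK4 applied to a linear equation is itself a linear map of the state, the agreement here is at the level of floating-point rounding, not just integration accuracy.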

I was surprised by the number of people who found my little exercise about kinetic energy interesting.

However, I was disappointed by the fact that only one person (an astrophysicist by trade) got it right.

It really isn’t a very difficult problem! You just have to remember that in addition to energy, momentum is also conserved.

In other words, when a train accelerates, it is pushing against something… the Earth, that is. So ever so slightly, the Earth accelerates backwards. The change in velocity may be tiny, but the change in energy is not necessarily so. It all depends on your reference frame.

So let’s do the math, starting with a train of mass $$m$$ that accelerates from $$v_1$$ to $$v_2$$. (Yes, I am doing the math formally; we can plug in the actual numbers in the end.)

Momentum is of course velocity times mass. Momentum conservation means that the Earth’s speed will change as

$\Delta v = -\frac{m}{M}(v_2-v_1),$

where $$M$$ is the Earth’s mass. If the initial speed of the earth is $$v_0$$, the change in its kinetic energy will be given by

$\frac{1}{2}M\left[(v_0+\Delta v)^2-v_0^2\right]=\frac{1}{2}M(2v_0\Delta v+\Delta v^2).$

If $$v_0=0$$, this becomes

$\frac{1}{2}M\Delta v^2=\frac{m^2}{2M}(v_2-v_1)^2,$

which is very tiny if $$m\ll M$$. However, if $$|v_0|>0$$ and comparable in magnitude to $$v_2-v_1$$ (or at least, $$|v_0|\gg|\Delta v|$$), we get

$\frac{1}{2}M(2v_0\Delta v+\Delta v^2)=-mv_0(v_2-v_1)+\frac{m^2}{2M}(v_2-v_1)^2\simeq -mv_0(v_2-v_1).$

Note that the actual mass of the Earth doesn’t even matter; we just used the fact that it’s much larger than the mass of the train.

So let’s plug in the numbers from the exercise: $$m=10000~{\rm kg}$$, $$v_0=-10~{\rm m}/{\rm s}$$ (negative, because relative to the moving train, the Earth is moving backwards), $$v_2-v_1=10~{\rm m}/{\rm s}$$, thus $$-mv_0(v_2-v_1)=1000~{\rm kJ}$$.

So the missing energy is found as the change in the Earth’s kinetic energy in the reference frame of the second moving train.

Note that in the reference frame of someone standing on the Earth, the change in the Earth’s kinetic energy is imperceptibly tiny; all the $$1500~{\rm kJ}$$ go into accelerating the train. But in the reference frame of the observer moving on the second train on the parallel tracks, only $$500~{\rm kJ}$$ goes into the kinetic energy of the first train, whereas $$1000~{\rm kJ}$$ is added to the Earth’s kinetic energy. But in both cases, the total change in kinetic energy, $$1500~{\rm kJ}$$, is the same and consistent with the readings of the electricity power meter.

Then again… maybe the symbolic calculation is too abstract. We could have done it with numbers all along. When a $$10000~{\rm kg}$$ train’s speed goes from $$10~{\rm m}/{\rm s}$$ to $$20~{\rm m}/{\rm s}$$, it means that the $$6\times 10^{24}~{\rm kg}$$ Earth’s speed (in the opposite direction) will change by $$10000\times 10/(6\times 10^{24})=1.67\times 10^{-20}~{\rm m}/{\rm s}$$.

In the reference frame in which the Earth is at rest, the change in kinetic energy is $$\tfrac{1}{2}\times (6\times 10^{24})\times (1.67\times 10^{-20})^2=8.33\times 10^{-16}~{\rm J}$$.

However, in the reference frame in which the Earth is already moving at $$10~{\rm m}/{\rm s}$$, the change in kinetic energy is $$\tfrac{1}{2}\times (6\times 10^{24})\times (10+1.67\times 10^{-20})^2-\tfrac{1}{2}\times (6\times 10^{24})\times 10^2$$$${}=\tfrac{1}{2}\times (6\times 10^{24})\times[2\times 10\times 1.67\times 10^{-20}+(1.67\times 10^{-20})^2]$$$${}\simeq 1000~{\rm kJ}$$.
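
The two-frame bookkeeping above can be checked with a few lines of code. A small sketch using the same numbers; note that the Earth term must be computed from the expanded expression \(\tfrac{1}{2}M(2v_0\Delta v+\Delta v^2)\), because \(\Delta v\sim 1.7\times 10^{-20}~{\rm m/s}\) is far below what double precision can resolve in a difference of squares:

```python
# Energy bookkeeping for the accelerating train, in two reference frames.
m = 1.0e4            # train mass, kg
M = 6.0e24           # Earth mass, kg
v1, v2 = 10.0, 20.0  # train speed before/after, ground frame, m/s

dv = -(m / M) * (v2 - v1)  # Earth's recoil velocity change, ~ -1.7e-20 m/s

def ke_changes(frame_v):
    """Kinetic-energy changes (train, Earth) seen from a frame moving at frame_v.

    The Earth term uses the expanded form M*(2*v0*dv + dv^2)/2: computing it
    as a difference of squares would underflow, since |dv| is some twenty
    orders of magnitude below frame_v.
    """
    u1, u2 = v1 - frame_v, v2 - frame_v  # train velocities in this frame
    v0 = -frame_v                        # Earth's initial velocity in this frame
    train = 0.5 * m * (u2**2 - u1**2)
    earth = 0.5 * M * (2.0 * v0 * dv + dv**2)
    return train, earth

for frame_v, label in [(0.0, "ground frame"), (10.0, "train B frame")]:
    train, earth = ke_changes(frame_v)
    print(f"{label}: train {train/1e3:+.0f} kJ, Earth {earth/1e3:+.0f} kJ, "
          f"total {(train + earth)/1e3:+.0f} kJ")
```

In the ground frame, virtually all 1500 kJ shows up in the train; in train B’s frame, it splits 500/1000 between train and Earth. The total is the same 1500 kJ either way, as the power meter demands.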

Enough blogging about personal stuff like our cats. Here is a neat little physics puzzle instead.

Solving this question requires nothing more than elementary high school physics (assuming you were taught physics in high school; if not, shame on the educational system where you grew up). No tricks, no gimmicks, no relativity theory, no quantum mechanics, just a straightforward application of what you were taught about Newtonian physics.

We have two parallel rail tracks. There is no friction, no air resistance, no dissipative forces.

On the first track, let’s call it A, there is a train. It weighs 10,000 kilograms. It is accelerated by an electric motor from 0 to 10 meters per second. Its kinetic energy, when it is moving at $$v=10~{\rm m/s}$$, is of course $$K=\tfrac{1}{2}mv^2=500~{\rm kJ}$$.

Next, we accelerate it from 10 to 20 meters per second. At $$v=20~{\rm m/s}$$, its kinetic energy is $$K=2000~{\rm kJ}$$, so an additional $$1500~{\rm kJ}$$ was required to achieve this change in speed.

All this is dutifully recorded by a power meter that measures the train’s electricity consumption. So far, so good.

But now let’s look at the B track, where there is a train moving at the constant speed of $$10~{\rm m/s}$$. When the A train is moving at the same speed, the two trains are motionless relative to each other; from B’s perspective, the kinetic energy of A is zero. And when A accelerates to $$20~{\rm m/s}$$ relative to the ground, its speed relative to B will be $$10~{\rm m/s}$$; so from B’s perspective, the change in kinetic energy is $$500~{\rm kJ}$$.

But the power meter is not lying. It shows that the A train used $$1500~{\rm kJ}$$ of electrical energy.

Question: Where did the missing $$1000~{\rm kJ}$$ go?

It’s the same, each and every Christmas. As Christmas Eve approaches, I remember that famous moment from 49 years ago. The astronauts of Apollo 8 had just orbited the Moon. It was Christmastime. These three men were a thousand times farther from the Earth than any human being in history. It was an awe-inspiring moment. Once radio contact with the distant Earth was re-established, the three astronauts took turns reading the first ten verses of Genesis. Frank Borman then closed the broadcast with words that, in my mind, remain the most appropriate for this evening: “good night, good luck, a Merry Christmas – and God bless all of you, all of you on the good Earth.”

The Internet (or at least, certain corners of the Internet where conspiracy theories thrive) is abuzz with speculation that the extrasolar asteroid ‘Oumuamua, best known, apart from its hyperbolic trajectory, for its oddly elongated shape, may be of artificial, extraterrestrial origin.

Some mention the similarity between ‘Oumuamua and Arthur C. Clarke’s extraterrestrial generational ship Rama, forgetting that Rama was a ship 50 kilometers in length, an obviously engineered cylinder, not a rock.

But then… I suddenly remembered that there was another artificial object of extrasolar origin in the science-fiction literature. It is Iilah, from A. E. van Vogt’s 1948 short story Dormant. Iilah is not discovered in orbit; rather, it lies dormant on the ocean floor for millions of years until it is awakened by the feeble radioactivity of isotopes that appear in the ocean as a result of the use and testing of nuclear weapons.

Iilah climbs out of the sea and is thus discovered. It becomes an object of study by a paranoid military, which ultimately decides to destroy it using a nuclear weapon.

Unfortunately, the energy of the explosion achieves the exact opposite: instead of destroying Iilah, it fully awakens it, making it finally remember its original purpose. Iilah then sets itself up for a tremendous explosion that knocks the Earth out of orbit, ultimately causing it to fall into the Sun, turning the Sun into a nova. Why? Because Iilah was programmed to do this. Because “robot atom bombs do not make up their own minds.”

Artist’s impression of ‘Oumuamua

So here is the thing… the Iilah of van Vogt’s story had almost exactly the same dimensions (it was about 400 feet in length) and appearance (a rock, like rough granite, with streaks of pink) as ‘Oumuamua.

Go figure.

Sci-Hub is a Russian Web site that contains pirated copies of millions of research papers.

Given that many of these papers are hidden behind hefty paywalls, it is no surprise that Sci-Hub has proven popular among researchers, especially independent researchers or researchers in third world countries, whose institutions cannot afford huge journal subscription fees.

Journal publishers do provide a service (at least those few journals that still take these tasks seriously) as they go through a reasonably well-managed peer review process and also perform quality copy editing. But… the bulk of the value comes not from these services; it comes from the research paper authors and the unpaid peer reviewers. In short, these publishers take our services for free (worse yet, often there are publication charges!) and then charge us again for the privilege of reading what we wrote. No wonder that even in the generally law-abiding scientific community there is very little sympathy for journal publishers.

Nonetheless, publishers are fighting back, and the American Chemical Society just won a case that might make it a lot harder to access Sci-Hub from the US in the future. For what it’s worth, it hasn’t happened yet, or maybe we are immune in Canada:

$ dig +short sci-hub.io
104.31.86.37
104.31.87.37

$ traceroute sci-hub.io
[...]
 9  206.223.119.180 (206.223.119.180)  46.916 ms  44.267 ms  66.828 ms
10  104.31.87.37 (104.31.87.37)  31.017 ms  29.719 ms  29.301 ms
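The same check can be scripted. Here is a minimal Python sketch that simply asks the system resolver whether a hostname still resolves; a seized or blocked domain typically fails at this first step. The hostname below is just a placeholder, and of course the result depends entirely on your resolver and jurisdiction:

```python
import socket

def resolve(hostname):
    """Return the list of A records for hostname, or None if DNS
    resolution fails (e.g., the name is blocked, seized, or expired)."""
    try:
        # gethostbyname_ex returns (canonical_name, aliases, addresses)
        _, _, addresses = socket.gethostbyname_ex(hostname)
        return addresses
    except socket.gaierror:
        return None

# Placeholder hostname; substitute whatever domain you wish to check.
print(resolve("example.com"))
```

Note that this only tests DNS resolution via whatever resolver your system uses; a court order enforced at the DNS registry level would show up here, but network-level blocking (as probed by traceroute above) would not.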

I don’t know, but to me it looks like just another case of using the legal system to defend a badly broken, outdated, untenable business model.

Today, a “multi-messenger” observation of a gravitational wave event was announced.

This is a big freaking deal. This is a Really Big Freaking Deal. For the very first time, ever, we observed an event, the merger of two neutron stars, simultaneously using both gravitational waves and electromagnetic waves, the latter including visible light, radio waves, UV, X-rays, and gamma rays.

From http://iopscience.iop.org/article/10.3847/2041-8213/aa91c9

The significance of this observation must not be underestimated. For the first time, we have direct validation of a LIGO gravitational wave observation. It demonstrates that our interpretation of LIGO data is actually correct, as is our understanding of neutron star mergers: one of the most important astrophysical processes, since such mergers are among the sources of isotopes heavier than iron in the universe.

Think about it… every time you hold, say, a piece of gold in your hands, you are holding something that was forged in an astrophysical event like this one billions of years ago.

Move over, Donald Trump. To heck with you, hurricane victims in Puerto Rico. See if I care about Catalonia voting for independence. Here is some real news™ from Canada instead, about a branch of the Royal Bank of Canada, which has been closed since August because a family of raccoons decided to make the ceiling of the place their new home.

Toronto bank branch closed after raccoon family moves in, damages the place.

The damage is extensive. The branch will reportedly stay closed until sometime in October.

You have to admit though that these animals are cute. Even when they are doing their best to look ferocious and angry.

Interesting forecast, courtesy of the Weather Network earlier this afternoon:

Yes, that is a snow symbol in the upper left corner. And yes, my American friends, the 29 degrees is in Centigrade.

Warm snow, I guess.

(The “Accumulating snow” headline for Goose Bay is probably valid. But the upper left corner was supposed to describe current conditions here in Ottawa.)

So here it is: another gravitational wave event detection by the LIGO observatories. But this time, there is a twist: a third detector, the less sensitive European VIRGO observatory, also saw this event.

This is amazing. Among other things, having three observatories see the same event is sufficient to triangulate the sky position of the event with much greater precision than before. With additional detectors coming online in the future, the era of gravitational wave astronomy has truly arrived.
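The geometry behind that triangulation is simple: a gravitational wave crossing two detectors arrives at each at a slightly different time, and the arrival-time difference fixes the angle between the source direction and the baseline connecting the detectors, constraining the source to a ring on the sky. A third detector adds another ring, and the intersection shrinks the localization to a couple of small patches. Below is a minimal Python sketch of that one relation; the detector coordinates and the 5 ms delay are made-up illustrative numbers, not the real LIGO/Virgo geometry:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical detector positions in Earth-centered coordinates (metres).
# Illustrative values only, not the actual LIGO site coordinates.
det_a = np.array([-2.16e6, -3.83e6, 4.60e6])
det_b = np.array([-7.43e4, -5.50e6, 3.21e6])

def ring_half_angle(a, b, delay_s):
    """Angle (degrees) between the a->b baseline and the source direction
    implied by an arrival-time difference delay_s between detectors a and b.

    One detector pair constrains the source to a ring on the sky at this
    opening angle around the baseline; additional detectors add more rings,
    and their intersection localizes the source."""
    baseline = b - a
    cos_theta = C * delay_s / np.linalg.norm(baseline)
    # Clip guards against |cos| > 1 from timing noise on a short baseline.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Zero delay means the wave arrived broadside to the baseline:
print(ring_half_angle(det_a, det_b, 0.0))       # 90 degrees
# A (hypothetical) 5 ms delay tilts the ring toward detector b:
print(ring_half_angle(det_a, det_b, 5e-3))
```

With only two detectors the whole ring remains allowed, which is why the early LIGO-only detections came with sky maps spanning hundreds of square degrees; the third instrument is what makes prompt electromagnetic follow-up practical.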