Oct 28, 2018
 

The other day, a conservative friend of mine sent me a link. It was to a paper purportedly demonstrating that conservatives were more ideologically diverse than their liberal counterparts.

I found this result surprising, rather striking to be honest, and contrary to my own experience.

The study, it seems, has not been published yet, but it is available in an online manuscript archive. It certainly appears thorough. So how could it come to its striking conclusions?

I think I figured out the answer when I looked at the appendices, which provide details on how the research was conducted. Here is a set of questions that were used in two of the four studies discussed in the paper:

  1. It is the responsibility of political leaders to promote programs that will help close the income gap between the rich and the poor.
  2. There is no “right way” to live life; instead, everyone must create a way to live which works best for them.
  3. Spending tax dollars on “abstinence education” rather than “sex education” is more effective in curbing teen pregnancy.
  4. The more money a person makes in America, the more taxes he/she should pay.
  5. The use of our military strength makes the United States a safer place to live.
  6. America would be a better place if people had stronger religious beliefs.
  7. The traditional (male/female) two-parent family provides the best environment of stability, discipline, responsibility and character.
  8. America’s domestic policy should do more to ensure that living and working conditions are equal for all groups of people.
  9. Flag burning should be illegal.
  10. Our society is set up so that people usually get what they deserve.
  11. Taxation should be used to fund social programs.
  12. Gay marriage threatens the sanctity of marriage.

When I look at this list, it is clear to me that questions 1, 2, 4, 8 and 11 are standard “liberal” pushbutton items, whereas the rest are “conservative” in nature.

But look more closely. Items 1, 2, 4, 8 and 11 are, insofar as liberal views are concerned, very mild and mainstream. Closing the income gap? Taxing income? (Not even a mention of progressive taxation.) Social programs? I know of no liberal who would disagree with these broad concepts. In fact, I can think of many conservatives who would readily subscribe at least to some of these ideas.

Now look at the conservative questionnaire items. Flag burning? A great many conservatives in the US believe firmly that this right is strongly protected by the First Amendment. Gay marriage? Sure, it’s an issue for some, but many conservatives are either neutral about it (or may even support it), or treat it more as a matter of states’ rights vs. the federal government, without a priori opposition to the idea.

In short, when I looked closely I realized that whereas the “liberal” questions accurately reflect the liberal mainstream, the “conservative” questions are more representative of a liberal caricature of what conservatives are thought to be. By way of example, the “liberal” analog of some of these “conservative” questions would be something like, “Research that demonstrates differences on the basis of gender or race should be banned”, or some similar conservative caricature of liberal “identity politics” or “social justice warriors”.

The results, therefore, are not surprising after all. Since most liberals agree on mainstream liberal ideas, the liberal side comes across as ideologically monolithic; and since many conservatives take issue with narrowly defined, often religiously motivated line items, they come across as more diverse, more heterogeneous.

Ironically, then, the liberal bias of the researchers resulted in a paper that, contrary to their expectations, appeared to show that the conservative side is more ideologically tolerant than their liberal counterparts. In reality, though, I think the paper merely demonstrates the garbage-in-garbage-out principle that is so well known in computer science: when your research is flawed, your results will be just as flawed.

Posted at 5:11 pm
Oct 28, 2018
 

Allow me to preface this post with the following: I despise Donald J. Trump, the infantile, narcissistic, racist, misogynist “leader of the free world” who is quite possibly a traitor and may never have become president without help from his Russian buddy Putin. Also, when it comes to matters that I consider important, I am a small-l liberal; I support, for instance, LGBTQ rights, the right to have an abortion, or the legalization of cannabis, to name a few examples. I celebrate the courage of #MeToo victims. I reject racism and misogyny in all forms, open or covert.

Yet I am appalled by some of the things that have happened lately in academic circles, sadly justifying the use of the pejorative term “SJW” (social justice warrior) that is so popular on the political right. A few specific cases:

  1. Last month, the European nuclear research institution CERN held a workshop with the title, High Energy Physics and Gender. One of the speakers was the Italian physicist Alessandro Strumia. Strumia offered a semi-coherent presentation, whimsically titled Experimental test of a new global discrete symmetry. In it, Strumia argued that men are over-represented in physics because they perform better. In the presentation, he offered some genuine data, but he also offered what may be construed as a personal attack, in the form of a short list of three names: those of two women who were hired by Italy’s nuclear research institute INFN, along with Strumia’s own; he was rejected despite a much higher citation count. Strumia’s research is questionable. His conclusions may be motivated by his bitterness over his personal failures. His approach may be indefensible. All of which would be justification to laugh at him during his presentation, to not accept his work for publication in the workshop proceedings, and perhaps to avoid inviting him in the future.

    But CERN went a lot further. They retroactively removed Strumia’s presentation altogether from the conference archive, and have since administratively sanctioned him, putting his future career as a physicist in question. When this response was questioned by some, there came the retroactive justification that his one slide containing the three names constituted a “personal attack”, violating CERN policy.

    I don’t agree with Strumia. I don’t like him or respect his research. But I have to ask: If he is not allowed to offer his views at a conference dedicated to “high-energy physics and gender” without fear of severe repercussions, where can he offer them?

    Now you might ask why he should be given a platform at all. Because this is (supposedly) science. And science thrives on criticism and controversial views. If we only permit views that preach to the choir, so to speak, science dies. I’d much rather risk getting offended by clowns like Strumia from time to time.

  2. Meanwhile, a few weeks ago, we learned of Helen Pluckrose, James Lindsay and Peter Boghossian, who prepared and submitted 20 completely bogus papers to reputable social science journals. Here are a few gems:
  • The paper titled Human Reactions to Rape Culture and Queer Performativity in Urban Dog Parks in Portland, Oregon argues that dog parks are “rape-condoning” places of rampant “canine rape culture”. Accepted, published and recognized for excellence in the journal Gender, Place and Culture.
  • The paper, Going in Through the Back Door: Challenging Straight Male Homohysteria and Transphobia through Receptive Penetrative Sex Toy Use argues that heterosexual men should practice anal self-penetration using sex toys in order to decrease transphobia and increase feminist values. Accepted and published in Sexuality & Culture.
  • The paper, An Ethnography of Breastaurant Masculinity: Themes of Objectification, Sexual Conquest, Male Control, and Masculine Toughness in a Sexually Objectifying Restaurant demonstrates how papers, even when they rely on made-up bogus data, are accepted when they problematize the attraction of heterosexual males to women. Accepted and published in Sex Roles.
  • The paper with the ominous title, Our Struggle is My Struggle: Solidarity Feminism as an Intersectional Reply to Neoliberal and Choice Feminism was accepted for publication in the journal Affilia, despite the fact that it is just a paraphrasing of Adolf Hitler’s opus Mein Kampf (My struggle), with feminist and grievance-related buzzwords replacing Nazi hate terms.
  3. How could such nonsensical papers be accepted for publication? Perhaps because life, in this case, imitates art: because of papers like those written by Rochelle Gutiérrez, who apparently believes that mathematics education as currently practiced is just a vehicle to spread white supremacism. In her paper, When Mathematics Teacher Educators Come Under Attack (published by the journal Mathematics Teacher Educator of the National Council of Teachers of Mathematics) she argues (citing her earlier work) that there exists “a direct link between White supremacist capitalist patriarchy and mathematics”. In an earlier paper, she introduces her invention, “Mathematx” (supposedly an ethnically neutral, LGBTQ-friendly alternative to the white supremacist term “mathematics”), with the intent to “underscore with examples from biology the potential limitations of current forms of mathematics for understanding/interacting with our world and the potential benefits of considering other-than-human persons as having different knowledges to contribute.” The reader might be forgiven if they thought that these were just further hoax papers by Pluckrose, Lindsay and Boghossian, but nope; these papers are for real, penned by an author who plays an influential role in the shaping of mathematics education in the United States.
  4. Meanwhile, another paper has been “disappeared” in a manner not unlike how persons were “disappeared” in communist or fascist dictatorships. Theodore P. Hill’s paper, An Evolutionary Theory for the Variability Hypothesis, discusses the mathematical background of what has been known as the “greater male variability hypothesis”: an observation, dating back to Charles Darwin’s times, that across a multitude of species, males often show greater variability in many traits than females. (Simply put, this may mean that a given group of males may contain more idiots and more idiot savants than an equal-sized group of females.)

    Unlike Strumia, Hill does not appear to have a personal agenda. The stated goal of the paper was neither to promote nor to refute the idea but to see whether a simple mathematical basis might exist that explains it. After being rejected (even following initial acceptance) by other journals, it was finally published in the New York Journal of Mathematics, only to be taken down (its page number and identifier assigned to a completely different paper) three days later after the editors received a complaint and a threat of losing support.

    One of the justifications for this paper’s removal (and for these types of actions in general) is that such material may discourage young women from STEM fields. Apart from the intellectual dishonesty of removing an already published paper due to political pressure, I think this is also the ultimate form of covert sexism. The message to young women who are aspiring engineers and scientists is, “You, womenfolk, are too weak, too innocent to be able to think critically and reject ideological bias masquerading as science. So let us come and defend you, by ensuring that you are not exposed to vile ideas that your fragile little minds cannot handle.”

When it comes to free speech, I call these incidents “irritants”.

On the one hand, publications dedicated to social science and education publish even the most outrageously bogus research so long as it kowtows to the prevailing sociopolitical agenda.

On the other hand, obscure research is thrust into the spotlight by intolerant “SJW”-s who seek to administratively suppress ideas that they find offensive. While this goal is technically accomplished (Strumia’s presentation and Hill’s paper were both successfully “unpublished”), in reality they achieve the exact opposite: they expose these authors to a much greater audience than they otherwise would have enjoyed. The message, meantime, to those they purportedly protect (e.g., women, minorities) is one of condescension: these groups apparently lack the ability to think critically and must be protected from harmful thoughts by their benevolent superiors.

Beyond all that, these actions also have negative consequences on academic life overall. In addition to suppressing controversial research, they may also lead to self-censorship. Indeed, I am left to wonder: Would I have the courage to write this blog entry if I myself had an academic career to worry about?

Last but not least, all this pours oil on the fire. Those on the right, fans of Jordan Peterson and others, who are already convinced that the left is dominated by intolerant “SJW”-s, see their worst fears confirmed by these irritants, and thus their hostility increases towards the scientific establishment (including climate science, political economics, genuine social science research on refugees and migration, health, sexual education, etc.), with devastating consequences for all of us living on this planet.

If we truly believe in our small-l liberal values, that belief must include defending free speech even when it is vile speech. It must also include respecting others, including women and minorities, not misguidedly protecting them from hurtful ideas that they are supposedly too weak and fragile to handle. And it must include defending the freedom of scientific inquiry even when it is misused by self-absorbed losers. After all, if we can publish the nonsensical writings of Gutiérrez, surely the world won’t come to an end if Hill’s paper is published or if Strumia’s presentation remains available on the CERN workshop archive.

Posted at 5:09 pm
Oct 18, 2018
 

Just got back from the Perimeter Institute, where I spent three very short days.

I had good discussions with John Moffat. I again met Barak Shoshany, whom I first encountered on Quora. I attended two very interesting and informative seminar lectures by Emil Mottola on quantum anomalies and the conformal anomaly.

I also gave a brief talk about our research with Slava Turyshev on the Solar Gravitational Lens. I was asked to give an informal talk with no slides. It was a good challenge. I believe I was successful. My talk seemed well received. I was honored to have Neil Turok in the audience, who showed keen interest and asked several insightful questions.

Posted at 11:53 pm
Oct 02, 2018
 

I just watched a news conference held by the University of Waterloo, on account of Donna Strickland being awarded the Nobel prize in physics.

This is terrific news for Canada, for the U. of Waterloo, and last but most certainly not least, for women in physics.

Heartfelt congratulations!

Posted at 7:49 pm
Sep 25, 2018
 

Michael Atiyah, 89, is one of the greatest living mathematicians. Which is why the world pays attention when he claims to have solved what is perhaps the greatest outstanding problem in mathematics, the Riemann hypothesis.

Here is a simple sum: \(1+\frac{1}{2^2}+\frac{1}{3^2}+\dots\). It is actually convergent: The result is \(\pi^2/6\).
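
(If you are skeptical, the convergence is easy to see for yourself; a two-line Maxima check, with the partial sums creeping up towards \(\pi^2/6\approx 1.6449\) from below:)

sum(1.0/i^2, i, 1, 1000);  /* 1.6439..., still approaching */
float(%pi^2/6);            /* 1.6449... */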

Other, similar sums also converge, so long as the exponent is greater than 1. In fact, we can define a function:

$$\begin{align*}\zeta(x)=\sum\limits_{i=1}^\infty\frac{1}{i^x}.\end{align*}$$

Where things get really interesting is when we extend the definition of this \(\zeta(x)\) to the entire complex plane. As it turns out, its analytic continuation is defined everywhere except at \(x=1\), where the function has a pole. And, it has a few zeros, i.e., values of \(x\) for which \(\zeta(x)=0\).

The so-called trivial zeros of \(\zeta(x)\) are the negative even integers: \(x=-2,-4,-6,\dots\). But the function also has infinitely many nontrivial zeros, where \(x\) is complex. And here is the thing: The real part of all known nontrivial zeros happens to be \(\frac{1}{2}\), the first one being at \(x=\frac{1}{2}+14.1347251417347i\). This, then, is the Riemann hypothesis: Namely that if \(x\) is a non-trivial zero of \(\zeta(x)\), then \(\Re(x)=\frac{1}{2}\). This hypothesis has baffled mathematicians for nearly 160 years, and now Atiyah claims to have solved it, accidentally (!), in a mere five pages. Unfortunately, verifying his proof is above my pay grade, as it references other concepts that I would have to learn first. But it is understandable why the mathematical community is skeptical (to say the least).

A slide from Atiyah’s talk on September 24, 2018.

What is not above my pay grade is analyzing Atiyah’s other claim: a purported mathematical definition of the fine structure constant \(\alpha\). The modern definition of \(\alpha\) relates this number to the electron charge \(e\): \(\alpha=e^2/4\pi\epsilon_0\hbar c\), where \(\epsilon_0\) is the electric permittivity of the vacuum, \(\hbar\) is the reduced Planck constant and \(c\) is the speed of light. Back in the days of Arthur Eddington, it seemed that \(\alpha\sim 1/136\), which led Eddington himself onto a futile quest of numerology, trying to concoct a reason why \(136\) is a special number. Today, we know the value of \(\alpha\) a little better: \(\alpha^{-1}\simeq 137.0359992\).
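
(For what it’s worth, this value is easy to reproduce from the standard CODATA constants; a quick Maxima check:)

e : 1.602176634e-19;      /* elementary charge, C */
eps0 : 8.8541878128e-12;  /* vacuum permittivity, F/m */
hbar : 1.054571817e-34;   /* reduced Planck constant, J*s */
c : 2.99792458e8;         /* speed of light, m/s */
float(1/(e^2/(4*%pi*eps0*hbar*c)));  /* ~137.036 */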

Atiyah produced a long and somewhat rambling paper that fundamentally boils down to two equations. First, he defines a new mathematical constant, denoted by the Cyrillic letter \(\unicode{x427}\) (Che), which is related to the fine structure constant by the equation

$$\begin{align*}\alpha^{-1}=\frac{\pi\unicode{x427}}{\gamma},\tag{1.1*}\end{align*}$$

where \(\gamma=0.577\dots\) is the Euler–Mascheroni constant. Second, he offers a definition for \(\unicode{x427}\):

$$\begin{align*}\unicode{x427}=\frac{1}{2}\sum\limits_{j=1}^\infty 2^{-j}\left(1-\int_{1/j}^j\log_2 x~dx\right).\tag{7.1*}\end{align*}$$

(The equation numbers are Atiyah’s; I used a star to signify that I slightly simplified them.)

Atiyah claims that this sum is difficult to calculate and then goes into a long-winded and not very well explained derivation. But the sum is not difficult to calculate. In fact, I can calculate it with ease as the definite integral under the summation sign is trivial:

$$\begin{align*}\int_{1/j}^j\log_2 x~dx=\frac{(j^2+1)\log j-j^2+1}{j\log 2}.\end{align*}$$

After this, the sum rapidly converges, as this little bit of Maxima code demonstrates (NB: for \(j=1\) the integral is trivial as the integration limits collapse):

(%i1) assume(j>1);
(%o1)                               [j > 1]
(%i2) S:1/2*2^(-j)*(1-integrate(log(x)/log(2),x,1/j,j));
                                  log(j) + 1
                                  ---------- + j log(j) - j
                   (- j) - 1          j
(%o2)             2          (1 - -------------------------)
                                           log(2)
(%i3) float(sum(S,j,1,50));
(%o3)                         0.02944508691740671
(%i4) float(sum(S,j,1,100));
(%o4)                         0.02944508691730876
(%i5) float(sum(S,j,1,150));
(%o5)                         0.02944508691730876
(%i6) float(sum(S,j,1,100)*%pi/%gamma);
(%o6)                         0.1602598029967022
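
And just to make sure that my closed form of the integral is not to blame, here is an independent cross-check that uses brute-force numerical quadrature instead (quad_qags returns the value of the integral as the first element of its result list):

che_term(j) := 2.0^(-j)/2*(1 - first(quad_qags(log(x)/log(2), x, 1.0/j, j)));
sum(che_term(j), j, 1, 100);  /* 0.029445086917..., same as before */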

Unfortunately, this does not look like \(\alpha^{-1}=137.0359992\) at all. Not even remotely.

So we are all left to guess, sadly, what Atiyah was thinking when he offered this proposal.

We must also remember that \(\alpha\) is a so-called “running” constant, as its value depends on the energy of the interaction, though presumably, the constant in question here is \(\alpha\) in the infrared limit, i.e., at zero energy.

Posted at 12:27 pm
Sep 23, 2018
 

It is not every day that you see devastation on this scale in our fine city:

That happens to be the Merivale electrical substation. What can I say… looks “previously owned, slightly used”. No wonder substantial chunks of the city are still without power, two days after the tornado hit.

Posted at 6:05 pm
Sep 09, 2018
 

OK, so we’ve had Trump for nearly two years now, and we know that the White House has become a combination of kindergarten and insane asylum. My conservative friends still support Trump because he “delivers”, and are willing to completely overlook the fact that this president is not only a bumbling dilettante, an offensive excuse of a human being (waste of skin, to borrow a phrase from Lexx, a science-fiction series from a few years ago) but quite possibly a traitor to his nation, too, working for Putin’s Russia.

But if I hoped that Trump’s opposition would be any better, they bitterly disappoint each and every day.

Take, for instance, the made-up controversy of a Kavanaugh aide supposedly flashing “white power” hand signs while sitting behind Kavanaugh during his Supreme Court hearing, visible to cameras. Never mind that the hand sign was, in fact, a perfectly ordinary OK sign. Never mind that it was a well-documented Internet hoax from last year that suggested that this OK sign is, in fact, a secret hand gesture used by white supremacists. None of that stops many of my liberal friends from tweeting and retweeting the meme, complete with obscenities and death threats. Fact checking is for wimps, I guess.

And now I am reading about the bitter fate of a paper exploring the mathematics behind a controversial hypothesis dating back to Darwin’s times, called the “Greater Male Variability Hypothesis” (GMVH). The GMVH basically asserts that there are more idiots and more geniuses among men than women. It was Darwin who first noted that such greater variability is prevalent across many species in the animal kingdom. But politically correct guardians of science publishing would have none of that. Poor Darwin… the right hates him because he dares to challenge the idea that the world was created 6,000 years ago, but now the left hates him, too, because he dares to offer us politically incorrect science. The paper by Theodore P. Hill was first accepted and then rejected by journals, including a journal that already published the paper online, only to replace it with another a few days later. Never even mind the attack on academic freedom that this represents, but how about blatant sexism? You know, those impressionable young female scientists, fragile little flowers that they are, who cannot handle scientific truth and must be protected at all costs, unlike their ever so manly male colleagues…

One of the guests on Fareed Zakaria’s show today on CNN was Jonathan Haidt, one of the authors of the book, The Coddling of the American Mind. The authors explore the consequences of what they dub “safetyism”: Keeping children away from danger, real or perceived, at all costs, thus denying them a chance to become independent human beings. The result, according to the book, is that rates of anxiety, depression, even suicide are rising at an alarming rate, even as both students and professors on college campuses walk on eggshells, lest they offend someone with a careless word, or heaven forbid, a hand gesture…

All in all, I am ready to conclude that the world is going bonkers, and those who seek salvation in Trump’s political opposition on the left (or in left-wing political opposition to right-wing populism and nativism elsewhere in the world) are deluding themselves.

Posted at 7:09 pm
Aug 25, 2018
 

Imagine a health care system that is created and managed without the help of doctors. Imagine getting radiation treatment without the help of medical physicists.

Imagine an education system that is created and managed without educators.

Imagine a system of highways and railways created and managed without transportation engineers.

Imagine an electrical infrastructure that is created and managed without electrical engineers. Nuclear power plants without physicists. An economy that is managed without professional economists. A communications infrastructure created and managed without radio engineers, software and network engineers.

This is Doug Ford’s vision for the province of Ontario, presented by none other than Doug Ford himself through his Twitter feed, as he proudly proclaims that his government, his party, won’t listen to academics: the very people that we pay so that they learn and offer their professional knowledge for the benefit of the public.

Guess this is what happens when ideology and blatant populism trump facts. (Pun unintended, but disturbingly appropriate.)

Posted at 3:35 pm
Aug 21, 2018
 

Yesterday, I received a nice surprise via e-mail: A link to a new article in Astronomy magazine (also republished by Discover magazine) about our efforts to solve the Pioneer Anomaly.

I spent several years working with Slava Turyshev and others on this. It was a lot of very hard work.

As part of my (both published and unpublished) contributions, I learned how to do precision modeling of satellite orbits in the solar system. I built a precision navigation application that was sufficiently accurate to reconstruct the Pioneer trajectories and observe the anomaly. I built a semi-analytical and later, a numerical (ray-tracing) model to estimate the directional thermal emissions of the two spacecraft.

But before all that, I built software to extract telemetry from the old raw data files, recorded as received by the Deep Space Network. These were the files that lay forgotten on magnetic tape for many years, eventually to be transferred to a now obsolete optical disc format and then, thanks to the efforts of Larry Kellogg, to modern media. My own effort to make sense of these telemetry files is what got me involved with the Pioneer Anomaly project in the first place.

These were fun days. And I’d be lying if I said that I have no tinge of regret that in the end, we found no anomalous acceleration. After all, confirmation that the trajectories of these two Pioneers are affected by an unmodeled force, likely indicating the need for new physics… that would have been tremendous. Instead, we found something mundane, relegated (at best) to the footnotes of science history.

Which is why I felt a sense of gratitude reading this article. It told me that our efforts have not been completely forgotten.

Posted at 8:05 pm
Jun 27, 2018
 

A while back, I wrote about the uncanny resemblance between the interstellar asteroid ‘Oumuamua and the fictitious doomsday weapon Iilah in A. E. van Vogt’s 1948 short story Dormant.

And now I am reading that Iilah’s, I mean, ‘Oumuamua’s trajectory changed due to non-gravitational forces. The suspect is comet-like outgassing, but observations revealed no gas clouds, so it is a bit of a mystery.

Even if this is purely a natural phenomenon (and I firmly believe that it is, just in case it needs to be said) it is nonetheless mind-blowingly fascinating.

Posted at 11:59 pm
Jun 03, 2018
 

I am reading some breathless reactions to a preprint posted a few days ago by the MiniBooNE experiment. The experiment is designed to detect neutrinos, in particular neutrino oscillations (the change of one neutrino flavor into another).

The headlines are screaming. Evidence found of a New Fundamental Particle, says one. Strange New Particle Could Prove Existence of Dark Matter, says another. Or how about, A Major Physics Experiment Just Detected A Particle That Shouldn’t Exist?

The particle in question is the so-called sterile neutrino. It is a neat concept, one I happen to quite like. It represents an elegant resolution to the puzzle of neutrino handedness. This refers to the chirality of neutrinos, essentially the direction in which they spin compared to their direction of motion. We only ever see “left handed” neutrinos. But neutrinos have rest mass. So they move slower than light. That means that if you run fast enough and outrun a left-handed neutrino, so that relative to you it is moving backwards (but still spins in the same direction as before), when you look back, you’ll see a right-handed neutrino. This implies that right-handed neutrinos should be seen just as often as left-handed neutrinos. But they aren’t. How come?

Sterile neutrinos offer a simple answer: We don’t see right-handed neutrinos because they don’t interact (they are sterile). That is to say, when a neutrino interacts (emits or absorbs a Z-boson, or emits or absorbs a W-boson while changing into a charged lepton), it has to be a left-handed neutrino in the interaction’s center-of-mass frame.

If this view is true and such sterile neutrinos exist, even though they cannot be detected directly, their existence would skew the number of neutrino oscillation events. As to what neutrino oscillations are: neutrinos are massive. But unlike other elementary particles, neutrinos do not have a well-defined mass associated with their flavor (electron, muon, or tau neutrino). When a neutrino has a well-defined flavor (is in a flavor eigenstate) it has no well-defined mass and vice versa. This means that if we detect neutrinos in a mass eigenstate, their flavor can appear to change (oscillate) from one state to another; e.g., a muon neutrino may appear at the detector as an electron neutrino. These flavor oscillations are rare, but they can be detected, and that’s what the MiniBooNE experiment is looking for.
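
For reference, in the simplest two-flavor picture (this is the textbook formula, not anything specific to the MiniBooNE analysis), the probability that a muon neutrino of energy \(E\) shows up as an electron neutrino after traveling a distance \(L\) is

\[P(\nu_\mu\to\nu_e)=\sin^22\theta\,\sin^2\left(1.27\,\frac{\Delta m^2[{\rm eV}^2]\,L[{\rm km}]}{E[{\rm GeV}]}\right),\]

where \(\theta\) is the mixing angle and \(\Delta m^2\) is the difference between the squared masses of the two mass eigenstates; the smallness of the mixing is what makes these events rare.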

And that is indeed what MiniBooNE found: an excess of events that is consistent with neutrino oscillations.

MiniBooNE detects electron neutrinos. These can come from all kinds of (background) sources. But one particular source is an intense beam of muon neutrinos produced at Fermilab. Because of neutrino oscillations, some of the neutrinos in this beam will be detected as electron neutrinos, yielding an excess of electron neutrino events above background.

And that’s exactly what MiniBooNE sees, with very high confidence: 4.8σ. That’s almost the generally accepted detection threshold for a new particle. But this value of 4.8σ is not about a new particle. It is the significance associated with excess electron neutrino detection events overall: an excess that is expected from neutrino oscillations.

So what’s the big deal, then? Why the screaming headlines? As far as I can tell, it all boils down to this sentence in the paper: “Although the data are fit with a standard oscillation model, other models may provide better fits to the data.”

What this somewhat cryptic sentence means is best illustrated by a figure from the paper:

This figure shows the excess events (above background) detected by MiniBooNE, but also the expected number of excess events from neutrino oscillations. Notice how only the first two red data points fall significantly above the expected number. (In case you are wondering, POT means Protons On Target, that is to say, the number of protons hitting a beryllium target at Fermilab, producing the desired beam of muon neutrinos.)

Yes, these two data points are intriguing. Yes, they may indicate the existence of new physics beyond two-neutrino oscillations. In particular, they may indicate the existence of another oscillation mode, muon neutrinos oscillating into sterile neutrinos that, in turn, oscillate into electron neutrinos, yielding this excess.

Mind you, if this is a sign of sterile neutrinos, these sterile neutrinos are unlikely dark matter candidates; their mass would be too low.

Or these two data points are mere statistical flukes. After all, as the paper says, “the best oscillation fit to the excess has a probability of 20.1%”. That is far from improbable. Sure, the fact that it is only 20.1% can be interpreted as a sign of some tension between the Standard Model and this experiment. But it is certainly not a discovery of new physics, and absolutely not a confirmation of a specific model of new physics, such as sterile neutrinos.

And indeed, the paper makes no such claim. The word “sterile” appears only four times in the paper, in a single sentence in the introduction: “[…] more exotic models are typically used to explain these anomalies, including, for example, 3+N neutrino oscillation models involving three active neutrinos and N additional sterile neutrinos [6-14], resonant neutrino oscillations [15], Lorentz violation [16], sterile neutrino decay [17], sterile neutrino non-standard interactions [18], and sterile neutrino extra dimensions [19].”

So yes, there is an intriguing sign of an anomaly. Yes, it may point the way towards new physics. It might even be new physics involving sterile neutrinos.

But no, this is not a discovery. At best, it’s an intriguing hint; quite possibly, just a statistical fluke.

So why the screaming headlines, then? I wish I knew.

Posted at 9:58 am
May 29, 2018
 

There is an excellent diagram accompanying an answer on StackExchange, and I’ve been meaning to copy it here, because I keep losing the address.

The diagram summarizes many measures of cosmic expansion in a nice, compact, but not necessarily easy-to-understand form:

So let me explain how to read this diagram. First of all, time is going from bottom to top. The thick horizontal black line represents the moment of now. Imagine this line moving upwards as time progresses.

The thick vertical black line is here. So the intersection of the two thick black lines in the middle is the here-and-now.

Distances are measured in terms of the comoving distance, which is basically telling you how far a distant object would be now, if you had a long measuring tape to measure its present-day location.

The area shaded red (marked “past light cone”) is all the events that happened in the universe that we could see, up to the moment of now. The boundary of this area is everything in this universe from which light is reaching us right now.

So just for fun, let us pick an object at a comoving distance of 30 gigalightyears (Gly). Look at the dotted vertical line corresponding to 30 Gly, halfway between the 20 and 40 marks (either side, doesn’t matter). It intersects the boundary of the past light cone when the universe was roughly 700 million years old. Good, there were already young galaxies back then. If we were observing such a galaxy today, we’d be seeing it as it appeared when the universe was 700 million years old. Its light would have spent 13.1 billion years traveling before reaching our instruments.

Again look at the dotted vertical line at 30 Gly and extend it all the way to the “now” line. What does this tell you about this object? You can read the object’s redshift (z) off the diagram: its light is shifted down in frequency by a factor of about 9.

You can also read the object’s recession velocity, which is just a little over two times the vacuum speed of light. Yes… faster than light. This recession velocity is based on the rate of change of the scale factor, essentially the Hubble parameter times the comoving distance. The Doppler velocity that one would deduce from the object’s redshift yields a value less than the vacuum speed of light. (Curved spacetime is tricky; distances and speeds can be defined in various ways.)
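
The “little over two times” figure is easy to verify with a back-of-the-envelope calculation (my own sketch; I am assuming a Planck-like Hubble constant of 67.7 km/s/Mpc, which may differ slightly from the value used to draw the diagram):

H0 : 67.7;            /* Hubble constant, km/s/Mpc (assumed value) */
c : 299792.458;       /* speed of light, km/s */
MpcPerGly : 306.6;    /* one gigalightyear, expressed in megaparsecs */
v : H0*30*MpcPerGly;  /* v = H0*D at a comoving distance of 30 Gly */
float(v/c);           /* ~2.08: just over twice the speed of light */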

Another thing about this diagram is that in addition to the past, it also sketches the future, taking into account the apparent accelerating expansion of the universe. Notice the light red shaded area marked “event horizon”. This area contains everything that we will be able to see at our present location, throughout the entire history of the universe, all the way to the infinite future. Things (events) outside this area will never be seen by us, will never influence us.

Note how the dotted line at 30 Gly intersects this boundary when the universe is about 5 billion years old. Yes, this means that we will only ever see slightly less than the first 5 billion years of the existence of a galaxy at a comoving distance of 30 Gly. Over time, light from this galaxy will be redshifted ever more, until it eventually appears to “freeze” and disappears from sight, never appearing to become older than 5 billion years.

Notice also how the dashed curves marking constant values of redshift bend inward, closer and closer to the “here” location as we approach the infinite future. This is a direct result of accelerating expansion: Things nearer and nearer to us will be caught up in the expansion, accelerating away from our location. Eventually this will stop, of course; cosmic acceleration will not rip apart structures that are gravitationally bound. But we will end up living in a true “island universe” in which nothing is seen at all beyond the largest gravitationally bound structure, the local group of galaxies. Fortunately that won’t happen anytime soon; we have many tens of billions of years until then.

Lastly, the particle horizon (blue lines) essentially marks the size of the visible part of the universe at any given time. Notice how the width of the interval marked by the intersection of the now line and the blue lines is identical to the width of the past light cone at the bottom of this diagram. Notice also how the blue lines correspond to infinite redshift.

As I said, this diagram is not an easy read but it is well worth studying.

Posted at 8:35 pm
Apr 23, 2018
 

Stephen Hawking passed away over a month ago, but I just came across this beautiful tribute from cartoonist Sean Delonas. It was completely unexpected (I was flipping through the pages of a magazine) and, I admit, it had quite an impact on me. Not the words, inspirational though they may be… the image. The empty wheelchair, the frail human silhouette walking away in the distance.

Posted at 5:23 pm
Apr 02, 2018
 

The recent discovery of a galaxy, NGC1052-DF2, with no or almost no dark matter made headlines worldwide.

Nature 555, 629–632 (29 March 2018)

Somewhat paradoxically, it has been proclaimed by some as evidence that the dark matter paradigm prevails over theories of modified gravity. And, as usual, many of the arguments were framed in the context of dark matter vs. MOND, as if MOND was a suitable representative of all modified gravity theories. One example is a recent Quora question, Can we say now that all MOND theories is proven false, and there is really dark matter after all? I offered the following in response:

First of all, allow me to challenge the way the question is phrased: “all MOND theories”… Please don’t.

MOND (MOdified Newtonian Dynamics) is not a theory. It is an ad hoc, phenomenological replacement of the Newtonian acceleration law with a simplistic formula that violates even basic conservation laws. The formula fits spiral galaxy rotation curves reasonably well, consistent with the empirical Tully–Fisher law that relates galaxy masses and rotational velocities, but it fails for just about everything else, including low density globular clusters, dwarf galaxies, clusters of galaxies, not to mention cosmological observations.

MOND was given a reprieve in the form of Jacob Bekenstein’s TeVeS (Tensor–Vector–Scalar gravity), which is an impressive theoretical exercise to create a proper classical field theory that reproduces the MOND acceleration law in the weak field, low velocity limit. However, TeVeS suffers from the same issues MOND does when confronted with data beyond galaxy rotation curves. Moreover, the recent gravitational wave event, GW170817, accompanied by the gamma-ray burst GRB170817A from the same astrophysical event, thus demonstrating that the propagation speed of gravitational and electromagnetic waves is essentially identical, puts all bimetric theories (of which TeVeS is an example) in jeopardy.

But that’s okay. News reports suggesting the death of modified gravity are somewhat premature. While MOND has often been used as a straw man by opponents of modified gravity, there are plenty of alternatives, many of them much better equipped than MOND to deal with diverse astrophysical phenomena. For instance, f(R) gravity, entropic gravity, Horava–Lifshitz gravity, galileon theory, DGP (Dvali–Gabadadze–Porrati) gravity… The list goes on and on. And yes, it also includes John Moffat’s STVG (Scalar–Tensor–Vector Gravity — not to be confused with TeVeS, the two are very different animals) theory, better known as MOG, a theory to which I also contributed.

As to NGC1052-DF2, for MOG that’s actually an easy one. When you plug in the values for the MOG approximate solution that we first published about a decade ago, you get an effective dynamical mass that is less than twice the visible (baryonic) mass of this galaxy, which is entirely consistent with its observed velocity dispersion.
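
Schematically (I am quoting from memory; see our published papers for the precise definitions and parameter values), the MOG weak-field acceleration of a point source of mass \(M\) takes the Yukawa-modified form

\[a(r)=-\frac{G_NM}{r^2}\left[1+\alpha-\alpha(1+\mu r)e^{-\mu r}\right],\]

where \(G_N\) is Newton’s constant, and \(\alpha\) and \(\mu\) are not universal constants but are themselves determined by the source mass. For a small, diffuse galaxy like NGC1052-DF2, the bracketed factor remains below 2 at the relevant radii, which is how the effective dynamical mass ends up at less than twice the baryonic mass.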

In fact, I’d go so far as to boldly suggest that NGC1052-DF2 is a bigger challenge for the dark matter paradigm than it is for some theories of modified gravity (MOG included). Why? Because there is no known mechanism that would separate dark matter from stellar mass.

Compare this to the infamous Bullet Cluster: a pair of galaxy clusters that have undergone a collision. According to the explanation offered within the context of the dark matter paradigm (NB: Moffat and Brownstein showed, over a decade ago, that the Bullet Cluster can also be explained without dark matter, using MOG), their dark matter halos just flew through each other without interaction (other than gravity), as did the stars (stars are so tiny compared to the distance between them, the likelihood of stellar collisions is extremely remote, so stars also behave like a pressureless medium, like dark matter.) Interstellar/intergalactic clouds of gas, however, did collide, heating up to millions of degrees (producing bright X-rays) and losing much of their momentum. So you end up with a cloud of gas (but few stars and little dark matter) in the middle, and dark matter plus stars (but little gas) on the sides. This separation process works because stars and dark matter behave like a pressureless medium, whereas gas does not.

But in the case of NGC1052-DF2, some mechanism must have separated stars from dark matter, so we end up with a galaxy (one that actually looks nice, with no signs of recent disruption). I do not believe that there is currently a generally accepted, viable candidate mechanism that could accomplish this.

Posted at 8:43 am
Mar 14, 2018
 

Stephen Hawking died earlier today.

Hawking was diagnosed with ALS in the year I was born, in 1963.

Defying his doctor’s predictions, he refused to die after a few years. Instead, he carried on for another astonishing 55 years, living a full life.

Public perception notwithstanding, he might not have been the greatest living physicist, but he was certainly a great physicist. The fact that he was able to accomplish so much despite his debilitating illness made him an extraordinary human being, a true inspiration.

Here is a short segment, courtesy of CTV Kitchener, filmed earlier today at the Perimeter Institute. My friend and colleague John Moffat, who met Hawking many times, is among those being interviewed:

Posted at 9:17 pm
Mar 10, 2018
 

There is a very interesting concept in the works at NASA, to which I had a chance to contribute a bit: the Solar Gravitational Telescope.

The idea, explained in this brand new NASA video, is to use the bending of light by the Sun to form an image of distant objects.

The resolving power of such a telescope would be phenomenal. In principle, it is possible to use it to form a megapixel-resolution image of an exoplanet as far as 100 light years from the Earth.

The technical difficulties are, however, challenging. For starters, a probe would need to be placed at least 550 astronomical units (about four times the distance to Voyager 1) from the Sun, precisely located to be on the opposite side of the Sun relative to the exoplanet. The probe would then have to mimic the combined motion of our Sun (dragged about by the gravitational pull of planets in the solar system) and the exoplanet (orbiting its own sun). Light from the Sun will need to be carefully blocked to ensure that we capture light from the exoplanet with as little noise as possible. And each time the probe takes a picture of the ring of light (the Einstein ring) around the Sun, it will be the combined light of many adjacent pixels on the exoplanet. The probe will have to traverse a region that is roughly a kilometer across, taking pictures one pixel at a time, which will need to be deconvoluted. The fact that the exoplanet itself is not constant in appearance (it will go through phases of illumination, it may have changing cloud cover, perhaps even changes in vegetation) further complicates matters. Still… it can be done, and it can be accomplished using technology we already have.
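
Incidentally, the 550 AU figure itself is easy to check: a light ray grazing the solar limb with impact parameter \(b\) is deflected by the familiar angle \(4GM_\odot/c^2b\), so such rays converge at a distance of roughly \(F=b^2c^2/4GM_\odot\). A quick Maxima sketch with rounded constants:

G : 6.674e-11;     /* gravitational constant, m^3/(kg*s^2) */
M : 1.989e30;      /* solar mass, kg */
b : 6.96e8;        /* solar radius, i.e., the smallest impact parameter, m */
c : 2.99792458e8;  /* speed of light, m/s */
au : 1.495979e11;  /* astronomical unit, m */
float(b^2*c^2/(4*G*M)/au);  /* ~548 AU: where the focal line begins */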

By its very nature, it would be a very long duration mission. If such a probe were launched today, it would take 25-30 years to reach the place where light rays passing on both sides of the Sun first meet and thus the focal line begins. It will probably take another few years to collect enough data for successful deconvolution and image reconstruction. Where will I be 30-35 years from now? An old man (or a dead man). And of course no probe will be launched today; even under optimal circumstances, I’d say we’re at least a decade away from launch. In other words, I have no chance of seeing that high-resolution exoplanet image unless I live to see (at least) my 100th birthday.

Still, it is fun to dream, and fun to participate in such things. Though now I better pay attention to other things as well, including things that, well, help my bank account, because this sure as heck doesn’t.

Posted at 12:59 pm
Mar 01, 2018
 

No, it isn’t Friday yet.

But it seems that someone at CTV Morning Live wishes it was. Why else would they have told us that yesterday, February 28, was a Thursday? (Either that or they are time travelers from 2019.)

Then again, maybe I should focus on what they are actually saying, not on a trivial mistake they made: that even as parts of Europe that rarely see snow are blanketed by the white stuff, places in Canada and Siberia see unprecedented mild weather. A fluke or further evidence of climate change disrupting the polar vortex?

Posted at 8:13 am
Feb 27, 2018
 

Enough of politics and cats. Time to blog about math and physics again.

Back in my high school days, when I was becoming familiar with calculus and differential equations (yes, I was a math geek) something troubled me. Why were certain expressions called “linear” when they obviously weren’t?

I mean, an expression like \(Ax+B\) is obviously linear. But who in his right mind would call something like \(x^3y + 3e^xy+5\) “linear”? Yet when it comes to differential equations, they’d tell you that \(x^3y+3e^xy+5-y^{\prime\prime}=0\) is “obviously” a second-order, linear ordinary differential equation (ODE). What gives? And why is, say, \(xy^3+3e^xy-y^{\prime\prime}=0\) not considered linear?

The answer is quite simple, actually, but for some reason, when I was 14 or so, it took me a very long time to understand.

Here is the recipe. Take an equation like \(x^3y+3e^xy+5-y^{\prime\prime}=0\). Throw away the inhomogeneous bit, leaving the \(x^3y+3e^xy-y^{\prime\prime}=0\) part. Apart from the fact that it is solved (obviously) by \(y=0\), there is another thing that you can discern immediately. If \(y_1\) and \(y_2\) are both solutions, then so is their linear combination \(\alpha y_1+\beta y_2\) (with \(\alpha\) and \(\beta\) constants), which you can see by simple substitution, as it yields \(\alpha(x^3y_1+3e^xy_1-y_1^{\prime\prime}) + \beta(x^3y_2+3e^xy_2-y_2^{\prime\prime})\) for the left-hand side, with both terms obviously zero if \(y_1\) and \(y_2\) are indeed solutions.

So never mind that it contains higher derivatives. Never mind that it contains powers, even transcendental functions of the independent variable \(x\). What matters is that the expression is linear in the dependent variable. As such, the linear combination of any two solutions of the homogeneous equation is also a solution.

Better yet, when it comes to the solutions of inhomogeneous equations, adding a solution of the homogeneous equation to any one of them yields another solution of the inhomogeneous equation.
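
To spell this out using our example: if \(y_p\) solves the inhomogeneous equation and \(y_h\) the homogeneous one, then substituting \(y_p+y_h\) into the left-hand side gives

\[x^3(y_p+y_h)+3e^x(y_p+y_h)+5-(y_p+y_h)^{\prime\prime}=\left[x^3y_p+3e^xy_p+5-y_p^{\prime\prime}\right]+\left[x^3y_h+3e^xy_h-y_h^{\prime\prime}\right]=0+0=0.\]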

Notably in physics, the Schrödinger equation of quantum mechanics is an example of a homogeneous and linear differential equation. This becomes a fundamental aspect of quantum physics: given two solutions (representing two distinct physical states) their linear combination is also a solution, representing another possible physical state.

Posted at 11:22 am
Jan 30, 2018
 

I was surprised by the number of people who found my little exercise about kinetic energy interesting.

However, I was disappointed by the fact that only one person (an astrophysicist by trade) got it right.

It really isn’t a very difficult problem! You just have to remember that in addition to energy, momentum is also conserved.

In other words, when a train accelerates, it is pushing against something… the Earth, that is. So ever so slightly, the Earth accelerates backwards. The change in velocity may be tiny, but the change in energy is not necessarily so. It all depends on your reference frame.

So let’s do the math, starting with a train of mass \(m\) that accelerates from \(v_1\) to \(v_2\). (Yes, I am doing the math formally; we can plug in the actual numbers in the end.)

Momentum is of course velocity times mass. Momentum conservation means that the Earth’s speed will change as

\[\Delta v = -\frac{m}{M}(v_2-v_1),\]

where \(M\) is the Earth’s mass. If the initial speed of the Earth is \(v_0\), the change in its kinetic energy will be given by

\[\frac{1}{2}M\left[(v_0+\Delta v)^2-v_0^2\right]=\frac{1}{2}M(2v_0\Delta v+\Delta v^2).\]

If \(v_0=0\), this becomes

\[\frac{1}{2}M\Delta v^2=\frac{m^2}{2M}(v_2-v_1)^2,\]

which is very tiny if \(m\ll M\). However, if \(|v_0|>0\) and comparable in magnitude to \(v_2-v_1\) (or at least, \(|v_0|\gg|\Delta v|\)), we get

\[\frac{1}{2}M(2v_0\Delta v+\Delta v^2)=-mv_0(v_2-v_1)+\frac{m^2}{2M}(v_2-v_1)^2\simeq -mv_0(v_2-v_1).\]

Note that the actual mass of the Earth doesn’t even matter; we just used the fact that it’s much larger than the mass of the train.

So let’s plug in the numbers from the exercise: \(m=10000~{\rm kg}\), \(v_0=-10~{\rm m}/{\rm s}\) (negative, because relative to the moving train, the Earth is moving backwards), \(v_2-v_1=10~{\rm m}/{\rm s}\), thus \(-mv_0(v_2-v_1)=1000~{\rm kJ}\).

So the missing energy is found as the change in the Earth’s kinetic energy in the reference frame of the second moving train.

Note that in the reference frame of someone standing on the Earth, the change in the Earth’s kinetic energy is imperceptibly tiny; all the \(1500~{\rm kJ}\) go into accelerating the train. But in the reference frame of the observer moving on the second train on the parallel tracks, only \(500~{\rm kJ}\) goes into the kinetic energy of the first train, whereas \(1000~{\rm kJ}\) is added to the Earth’s kinetic energy. But in both cases, the total change in kinetic energy, \(1500~{\rm kJ}\), is the same and consistent with the readings of the electricity power meter.

Then again… maybe the symbolic calculation is too abstract. We could have done it with numbers all along. When a \(10000~{\rm kg}\) train’s speed goes from \(10~{\rm m}/{\rm s}\) to \(20~{\rm m}/{\rm s}\), it means that the \(6\times 10^{24}~{\rm kg}\) Earth’s speed (in the opposite direction) will change by \(10000\times 10/(6\times 10^{24})=1.67\times 10^{-20}~{\rm m}/{\rm s}\).

In the reference frame in which the Earth is at rest, the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (1.67\times 10^{-20})^2=8.33\times 10^{-16}~{\rm J}\).

However, in the reference frame in which the Earth is already moving at \(10~{\rm m}/{\rm s}\), the change in kinetic energy is \(\tfrac{1}{2}\times (6\times 10^{24})\times (10+1.67\times 10^{-20})^2-\tfrac{1}{2}\times (6\times 10^{24})\times 10^2\)\({}=\tfrac{1}{2}\times (6\times 10^{24})\times[2\times 10\times 1.67\times 10^{-20}+(1.67\times 10^{-20})^2] \)\({}\simeq 1000~{\rm kJ}\).
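
Or, if you prefer to let the machine do the bookkeeping, here is a little Maxima script that tallies the kinetic energy changes in both reference frames. (Note the expanded form \(2u\,\Delta v+\Delta v^2\) in the helper function: with a \(\Delta v\) of order \(10^{-20}\), naively squaring \(u+\Delta v\) and subtracting \(u^2\) would be wiped out by floating-point rounding.)

m : 1.0e4;              /* train mass, kg */
M : 6.0e24;             /* Earth mass (rounded), kg */
v1 : 10.0; v2 : 20.0;   /* train's initial and final speed, m/s */
dv : -m/M*(v2 - v1);    /* Earth's recoil velocity change */
dke(mass, u, du) := mass*(2*u*du + du^2)/2;
/* ground frame: Earth initially at rest */
dke(m, v1, v2 - v1) + dke(M, 0, dv);   /* ~1.5e6 J */
/* frame of the second train: everything shifted by -10 m/s */
dke(m, 0, v2 - v1) + dke(M, -10, dv);  /* ~1.5e6 J again */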

Posted at 12:29 am