And before I forget: Last week, wearing my release manager hat, I successfully created a new version of Maxima, the open-source computer algebra system. As a result, Maxima is again named one of SourceForge’s projects of the week, for the week of June 10. The release turned out to be more of an uphill battle than I anticipated, but in the end, I think everything went glitch-free.

Others have since created installers for different platforms, including Windows.

And I keep promising myself that when I grow up, I will one day understand exactly what git does and how it works, instead of just blindly following arcane scripts…

Years ago, I accepted the role of release manager for the Maxima computer algebra system. It proved to be more laborious than I assumed (mostly because of two things: assembling changelogs and dealing with build glitches) but it still has its upside. Right now, it is my pleasure to announce that Maxima 5.42 has been released on the unsuspecting public. Enjoy!

Michael Atiyah, 89, is one of the greatest living mathematicians. Which is why the world pays attention when he claims to have solved what is perhaps the greatest outstanding problem in mathematics, the Riemann hypothesis.

Here is a simple sum: $$1+\frac{1}{2^2}+\frac{1}{3^2}+…$$. It is actually convergent: The result is $$\pi^2/6$$.

Other, similar sums also converge, so long as the exponent is greater than 1. In fact, we can define a function:

\begin{align*}\zeta(x)=\sum\limits_{i=1}^\infty\frac{1}{i^x}.\end{align*}
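A quick numerical check (a Python sketch; the truncation point is an arbitrary choice) confirms the convergence claim and the $$\pi^2/6$$ result:

```python
import math

def zeta(x, terms=100_000):
    # partial sum of the series above; converges for x > 1
    return sum(1 / i**x for i in range(1, terms + 1))

print(zeta(2))         # approaches pi^2/6 = 1.6449340668...
print(math.pi**2 / 6)
```

The tail of the $$x=2$$ series behaves like $$1/N$$, so the partial sum agrees with $$\pi^2/6$$ to about five digits here.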

Where things get really interesting is when we extend the definition of this $$\zeta(x)$$ to the entire complex plane. As it turns out, its analytic continuation is defined everywhere except at $$x=1$$, where the function has a pole. And it has zeros, i.e., values of $$x$$ for which $$\zeta(x)=0$$.

The so-called trivial zeros of $$\zeta(x)$$ are the negative even integers: $$x=-2,-4,-6,…$$. But the function also has infinitely many nontrivial zeros, where $$x$$ is complex. And here is the thing: The real part of all known nontrivial zeros happens to be $$\frac{1}{2}$$, the first one being at $$x=\frac{1}{2}+14.1347251417347i$$. This, then, is the Riemann hypothesis: Namely that if $$x$$ is a nontrivial zero of $$\zeta(x)$$, then $$\Re(x)=\frac{1}{2}$$. This hypothesis has baffled mathematicians for nearly 160 years, and now Atiyah claims to have solved it, accidentally (!), in a mere five pages. Unfortunately, verifying his proof is above my pay grade, as it references other concepts that I would have to learn first. But it is understandable why the mathematical community is skeptical (to say the least).

A slide from Atiyah’s talk on September 24, 2018.
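That first nontrivial zero is easy to probe numerically. The series above diverges on the critical line, but the closely related alternating (Dirichlet eta) series converges there, and $$\zeta(x)=\eta(x)/(1-2^{1-x})$$. Here is a quick Python sketch (the truncation point and the partial-sum averaging trick are my own choices, just enough to make the sum settle):

```python
s = 0.5 + 14.1347251417347j  # the first nontrivial zero, per the above

# eta(s) = sum of (-1)^(n+1) / n^s converges for Re(s) > 0;
# averaging the last two partial sums accelerates convergence.
S = 0j
term = 0j
for n in range(1, 100_001):
    term = (-1) ** (n + 1) * n ** (-s)
    S += term
eta = S - term / 2           # average of the last two partial sums

zeta = eta / (1 - 2 ** (1 - s))
print(abs(zeta))             # very nearly zero
```

Plugging in, say, $$s=2$$ instead recovers the familiar $$\pi^2/6$$.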

What is not above my pay grade is analyzing Atiyah’s other claim: a purported mathematical definition of the fine structure constant $$\alpha$$. The modern definition of $$\alpha$$ relates this number to the electron charge $$e$$: $$\alpha=e^2/4\pi\epsilon_0\hbar c$$, where $$\epsilon_0$$ is the electric permittivity of the vacuum, $$\hbar$$ is the reduced Planck constant and $$c$$ is the speed of light. Back in the days of Arthur Eddington, it seemed that $$\alpha\sim 1/136$$, which led Eddington himself onto a futile quest of numerology, trying to concoct a reason why $$136$$ is a special number. Today, we know the value of $$\alpha$$ a little better: $$\alpha^{-1}\simeq 137.0359992$$.

Atiyah produced a long and somewhat rambling paper that fundamentally boils down to two equations. First, he defines a new mathematical constant, denoted by the Cyrillic letter $$\unicode{x427}$$ (Che), which is related to the fine structure constant by the equation

\begin{align*}\alpha^{-1}=\frac{\pi\unicode{x427}}{\gamma},\tag{1.1*}\end{align*}

where $$\gamma=0.577…$$ is the Euler–Mascheroni constant. Second, he offers a definition for $$\unicode{x427}$$:

\begin{align*}\unicode{x427}=\frac{1}{2}\sum\limits_{j=1}^\infty 2^{-j}\left(1-\int_{1/j}^j\log_2 x~dx\right).\tag{7.1*}\end{align*}

(The equation numbers are Atiyah’s; I used a star to signify that I slightly simplified them.)

Atiyah claims that this sum is difficult to calculate and then goes into a long-winded and not very well explained derivation. But the sum is not difficult to calculate. In fact, I can calculate it with ease as the definite integral under the summation sign is trivial:

\begin{align*}\int_{1/j}^j\log_2 x~dx=\frac{(j^2+1)\log j-j^2+1}{j\log 2}.\end{align*}

After this, the sum converges rapidly, as this little bit of Maxima code demonstrates (NB: for $$j=1$$ the integral vanishes, as the integration limits coincide):

(%i1) assume(j>1);
(%o1)                               [j > 1]
(%i2) S:1/2*2^(-j)*(1-integrate(log(x)/log(2),x,1/j,j));
                                  log(j) + 1
                                  ---------- + j log(j) - j
                   (- j) - 1          j
(%o2)             2          (1 - -------------------------)
                                           log(2)
(%i3) float(sum(S,j,1,50));
(%o3)                         0.02944508691740671
(%i4) float(sum(S,j,1,100));
(%o4)                         0.02944508691730876
(%i5) float(sum(S,j,1,150));
(%o5)                         0.02944508691730876
(%i6) float(sum(S,j,1,100)*%pi/%gamma);
(%o6)                         0.1602598029967022
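The same computation is easy to reproduce outside Maxima. Here is a Python sketch using the closed form of the definite integral given above (the Euler–Mascheroni constant is hard-coded, since the Python standard library does not provide it):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def integral(j):
    # closed form of the definite integral of log2(x) from 1/j to j;
    # it correctly evaluates to 0 at j = 1, where the limits coincide
    return ((j * j + 1) * math.log(j) - j * j + 1) / (j * math.log(2))

che = sum(0.5 * 2.0 ** (-j) * (1 - integral(j)) for j in range(1, 201))
print(che)                          # 0.029445086917..., matching Maxima
print(che * math.pi / EULER_GAMMA)  # 0.1602598..., nowhere near 137.036
```

Two hundred terms are overkill; the $$2^{-j}$$ factor makes the tail negligible long before that.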


Unfortunately, this does not look like $$\alpha^{-1}=137.0359992$$ at all. Not even remotely.

So we are all left to guess, sadly, what Atiyah was thinking when he offered this proposal.

We must also remember that $$\alpha$$ is a so-called “running” constant, as its value depends on the energy of the interaction, though presumably, the constant in question here is $$\alpha$$ in the infrared limit, i.e., at zero energy.

Enough of politics and cats. Time to blog about math and physics again.

Back in my high school days, when I was becoming familiar with calculus and differential equations (yes, I was a math geek) something troubled me. Why were certain expressions called “linear” when they obviously weren’t?

I mean, an expression like $$Ax+B$$ is obviously linear. But who in his right mind would call something like $$x^3y + 3e^xy+5$$ “linear”? Yet when it comes to differential equations, they’d tell you that $$x^3y+3e^xy+5-y^{\prime\prime}=0$$ is “obviously” a second-order, linear ordinary differential equation (ODE). What gives? And why is, say, $$xy^3+3e^xy-y^{\prime\prime}=0$$ not considered linear?

The answer is quite simple, actually, but for some reason, when I was 14 or so, it took me a very long time to understand.

Here is the recipe. Take an equation like $$x^3y+3e^xy+5-y^{\prime\prime}=0$$. Throw away the inhomogeneous bit, leaving the $$x^3y+3e^xy-y^{\prime\prime}=0$$ part. Apart from the fact that it is solved (obviously) by $$y=0$$, there is another thing that you can discern immediately. If $$y_1$$ and $$y_2$$ are both solutions, then so is their linear combination $$\alpha y_1+\beta y_2$$ (with $$\alpha$$ and $$\beta$$ constants), which you can see by simple substitution, as it yields $$\alpha(x^3y_1+3e^xy_1-y_1^{\prime\prime}) + \beta(x^3y_2+3e^xy_2-y_2^{\prime\prime})$$ for the left-hand side, with both terms obviously zero if $$y_1$$ and $$y_2$$ are indeed solutions.

So never mind that it contains higher derivatives. Never mind that it contains powers, even transcendental functions of the independent variable $$x$$. What matters is that the expression is linear in the dependent variable. As such, the linear combination of any two solutions of the homogeneous equation is also a solution.
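This is easy to demonstrate numerically. In the Python sketch below (function names and test values are arbitrary), the operator $$L[y]=x^3y+3e^xy-y^{\prime\prime}$$ is discretized with a central difference; linearity holds no matter what powers or transcendental functions of $$x$$ appear in the coefficients:

```python
import math

def L(y, x, h=1e-3):
    # discretized L[y] = x^3 y + 3 e^x y - y'', with y'' via central difference
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return x**3 * y(x) + 3 * math.exp(x) * y(x) - ypp

y1, y2 = math.sin, math.exp        # two arbitrary smooth functions
alpha, beta, x = 2.5, -1.75, 0.8   # arbitrary constants and evaluation point

combo = lambda t: alpha * y1(t) + beta * y2(t)
print(L(combo, x))                           # equals the linear combination:
print(alpha * L(y1, x) + beta * L(y2, x))
```

The two printed values agree to rounding error, exactly because $$L$$ is linear in $$y$$.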

Better yet, when it comes to the solutions of inhomogeneous equations, adding a solution of the homogeneous equation to any one of them yields another solution of the inhomogeneous equation.

Notably in physics, the Schrödinger equation of quantum mechanics is an example of a homogeneous and linear differential equation. This becomes a fundamental aspect of quantum physics: given two solutions (representing two distinct physical states) their linear combination is also a solution, representing another possible physical state.

I am watching the morning news and it’s all about numbers. Some good, some not so good, some really bad. Here are a few, in descending order:

• 2018: The year when Ottawa plans to introduce a new low-income transit fare.
• 417: The provincial highway number of the Queensway, which has been reopened after yesterday’s huge crash.
• 175.6: The amount of rain, in mm, that Ottawa received in the month of May.
• 80: The estimated number killed by a massive ISIS terrorist bomb in Kabul.
• 21: The highest expected temperature of the day and, incidentally, the entire week, in Centigrade.
• 15: The new minimum wage, in Canadian dollars, as proposed by the Ontario provincial government.
• 7: The age of a baby, in months, who died allegedly due to her mother’s negligence in Gatineau.

I thought of turning these bullet points into a numbered list, but that would have been too confusing.

I have been so busy this week, I forgot to blog about our latest Maxima release, 5.39. Nothing spectacular, just incremental improvements over 5.38; for me, though, this was a big milestone, as this was the first time that I used a CentOS platform to prepare the release. (Which, incidentally, is why I hadn’t done this months ago.) And SourceForge, kindly enough, once again designated Maxima as one of the site’s Projects of the Week.

Sometime last year, I foolishly volunteered to manage new releases of the Maxima computer algebra system (CAS).

For the past several weeks, I’ve been promising to do my first release, but I kept putting it off as I had other, more pressing work obligations.

Well, not anymore… today, I finally found the time, after brushing up on the Git version management system, and managed to put together a release, 5.38.0.

Maxima is beautiful and incredibly powerful. I have been working on its tensor algebra packages for the past 15 years or so. As far as I know, Maxima is the only general purpose CAS that can derive the field equations of a Lagrangian field theory; for instance, it can derive Einstein’s field equations from the Einstein–Hilbert Lagrangian.

I use Maxima a lot for tensor algebra, though I admit that when it comes to integration, differential equations or plotting, I prefer Maple. Maple’s ODE/PDE solvers are unbeatable. But when it comes to tensor algebra, or just as a generic on-screen symbolic calculator, Maxima wins hands down. I prefer to use its command-line version: Nothing fancy, just ASCII art, but very snappy, very responsive, and does exactly what I want it to do.

So then, Maxima 5.38.0: Say hi to the world. World, this is the latest version of the oldest (nearly half a century old) continuously maintained CAS in existence.

Having just finished work on a major project milestone, I took it easy for a few days, allowing myself to spend time thinking about other things. That’s when I encountered an absolutely neat problem on Quora. Someone asked a seemingly innocuous number theory question: are there two positive integers such that one is exactly the π-th power of the other?

Now wait a minute, you ask… We know that π is a transcendental number. How can an integer raised to a transcendental power be another integer?

But then you think about $$\alpha=\log_2 3$$ and realize that although $$\alpha$$ is a transcendental number, $$2^\alpha=3$$. So why can’t we have $$n^\pi=m$$, then?
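Numerically, of course, one can only observe that $$n^\pi$$ never seems to land on an integer. A quick Python sketch (the search range is arbitrary, and floating point makes this an illustration, not a proof):

```python
import math

# distance from n**pi to the nearest integer, for small n
best = min(
    (abs(n**math.pi - round(n**math.pi)), n)
    for n in range(2, 100)
)
print(best)   # the closest miss in this range; never exactly zero
```

Every candidate misses, but that proves nothing by itself; the real argument follows.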

As it turns out, we (probably) cannot, but the reason is subtle and it relies on a very important, but unproven conjecture from transcendental number theory.

But first, let us rewrite the equation by taking its logarithm:

$$\pi\log n = \log m.$$

We can also divide both sides by $$\log n$$, which leads to

$$\pi = \frac{\log m}{\log n}=\log_n m,$$

but it turns out to be not very helpful. However, squaring the equation will help, as we shall shortly see:

$$\pi^2\log^2 n=\log^2 m.$$

Can this equation ever be true for positive integers $$n$$ and $$m$$, other than the trivial solution $$n=m=1$$, that is?

To see why it cannot be the case, let us consider the following triplet of numbers:

$$(i\pi,\log n,\log m),$$

and their exponents,

$$(e^{i\pi}=-1, e^{\log n}=n, e^{\log m}=m).$$

The three numbers $$(i\pi,\log n,\log m)$$ are linearly independent over $${\mathbb Q}$$ (that is, the rational numbers). What this means is that there are no rational numbers $$A, B, C$$, not all zero, such that $$Ai\pi+B\log n+C\log m=0$$. This is easy to see: $$\log n$$ and $$\log m$$ are real whereas $$i\pi$$ is imaginary, so $$A$$ must vanish; and since the ratio of $$\log m$$ and $$\log n$$ is supposed to be transcendental (it is $$\pi$$), hence irrational, $$B$$ and $$C$$ must vanish, too.

On the other hand, their exponents are all rational numbers ($$-1, n, m$$). And this is where the unproven conjecture, Schanuel’s conjecture, comes into the picture. Schanuel’s conjecture says that given $$n$$ complex numbers $$(\alpha_1,\alpha_2,…,\alpha_n)$$ that are linearly independent over the rationals, out of the $$2n$$ numbers $$(\alpha_1,…,\alpha_n,e^{\alpha_1},…,e^{\alpha_n})$$, at least $$n$$ will be transcendental numbers that are algebraically independent over $${\mathbb Q}$$. That is, there is no algebraic expression involving roots and powers of the $$\alpha_i$$, $$e^{\alpha_i}$$, and rational numbers that will yield 0.

The equation $$\pi^2\log^2 n=\log^2 m$$, which we can rewrite as

$$(i\pi)^2\log^2 n + \log^2 m=0,$$

is just such an algebraic relation with rational coefficients between $$i\pi$$, $$\log n$$ and $$\log m$$. Since $$-1$$, $$n$$ and $$m$$ are all rational, the three algebraically independent numbers promised by Schanuel’s conjecture can only be $$i\pi$$, $$\log n$$ and $$\log m$$ themselves; no such relation between them can hold, so the equation can never be true.

I wish I could say that I came up with this solution but I didn’t. I was this close: I was trying to apply Schanuel’s conjecture, and I was of course using the fact that $$\pi=-i\log(-1)$$. But I did not fully appreciate the implications and meaning of Schanuel’s conjecture, so I was applying it improperly. Fortunately, another Quora user saved the day.

Still, I haven’t had this much fun with pure math (and I haven’t learned this much pure math all at once) in years.

John Forbes Nash Jr. is dead, along with his wife Alicia. They were killed on the New Jersey Turnpike when the taxi, taking them home from the airport, crashed into a guardrail and another vehicle after the driver lost control while trying to pass. Nash and his wife were returning from Norway, where Nash was one of the recipients of the 2015 Abel prize.

News of this accident made me shudder for another reason. Less than two weeks ago, when I was returning from Dubai, my taxi driver not only answered a call on his cell phone, he even responded to a text while driving. I was too tired to say anything at first and then thankfully he came to his senses… but his behavior made me feel decidedly uncomfortable in his vehicle. Next time, I will not hesitate to tell the taxi driver to stop immediately or call another taxi for me.

Science fiction has a subgenre: mathematical fiction. Stories of this nature are rare; good stories are even rarer. One memorable story that I recall from ages ago was A Subway Named Moebius, written by A. J. Deutsch in 1950. There was another story more recently: Luminous by Greg Egan, which I read in Asimov’s SF magazine shortly before I stopped reading (and eventually, stopped subscribing to) said magazine. (Nothing wrong with the magazine; it’s just that I found many of the stories unsatisfying, and I found I had less and less time to read them. The genre is just not the same as it was back in the Golden Age of Science Fiction.)

So recently, I found out that Egan wrote a sequel: Dark Integers, published in the same magazine in 2007. I now had a chance to read it and I was not disappointed. Both stories are very good. Both stories are based on the notion that as yet unproven mathematical theorems can go either way; that the Platonic book of all math has not only not yet been written, but that there is no unique book, and multiple versions of mathematics may coexist, with an uneasy boundary.

Now imagine that you perform innocent mathematical experiments on your computer, using, say, computer algebra to probe ever more exotic theorems in a subfield few non-mathematicians ever heard about. And imagine how you would feel if you realized that by doing so, you are undermining the very foundations of another universe’s existence, literally threatening to wipe them out.

OK, if you start poking holes in that idea, there are many, but the basic notion is not completely stupid, and the questions that the stories raise are worth contemplating. And Egan writes well… the stories are fun, too!

Incidentally, this was the first decent (published) science fiction story I ever came across that contained a few lines of C++ code.

So the other day, I solved this curious mathematics puzzle using repeated applications of Pythagoras’s theorem and a little bit of algebra.

Now I realize that there is a much simpler form of the proof.

The exercise was to prove that, given two semicircles drawn into a bigger circle as shown below, the sum of the areas of the semicircles is exactly half that of the larger circle. Again, I’m inserting a few blank lines before presenting my proof. Once again I am labeling some vertices in the diagram for easy reference.

Our goal is to prove that the area of a circle with radius AO is twice the sum of the areas of two semicircles, with radii AC and BD. But that is the same as proving that the area of a circle with radius AO is equal to the sum of the areas of two circles, with radii AC and BD.

The angle ACO is a right angle. Therefore, the area of a circle with radius AO is the sum of the areas of circles with radii AC and CO. (To see this, just multiply the theorem of Pythagoras by π.) So if only we could prove that CO = BD, our proof would be complete.

Since AO = BO, they are the sides of the isosceles triangle ABO. Now if we were to pick a point O′ on the line CD such that CO′ = BD, the ACO′ and O′DB triangles would be congruent (CD being the sum of AC and BD by construction). Therefore, AO′ = BO′, and ABO′ would be another isosceles triangle with its third vertex on the CD line. Clearly that is not possible, so O′ = O, and therefore, CO = BD. This concludes the proof.

The other day, I ran across a cute geometry puzzle on John Baez’s Google+ page. I was able to solve it in a few minutes, before I read the full post that suggested that this was, after all, a harder-than-usual area puzzle. Glad to see that, even though the last high school mathematics competition in which I participated was something like 35 years ago, I have not yet lost the skill.

Anyhow, the puzzle is this: prove that the combined area of the two semicircles below is exactly half the area of the full circle. I am going to insert a few blank lines here before providing my solution.

I start by labeling some vertices on the diagram and also drawing a few radii and other lines to help. Next, let’s call the radii of the two semicircles $$a$$ and $$b$$. Then, we have
\begin{align}
(AC)&= a,\\
(BD)&= b.
\end{align}Now observe that
\begin{align}
(OA) = (OB) = r,
\end{align}and also
\begin{align}
(CD)&= a + b,\\
(OD)&= a + b - (OC).
\end{align}The rest is just repeated application of the theorem of Pythagoras:
\begin{align}
(OC)^2&= r^2 - a^2,\\
(OD)^2&= r^2 - b^2,
\end{align}followed by a bit of trivial algebra:
\begin{align}
(OC)^2 + a^2&= [a + b - (OC)]^2 + b^2,\\
0&= 2(a + b)[b – (OC)],\\
(OC)&= b.
\end{align}Therefore,
\begin{align}
a^2+b^2=r^2,
\end{align}which means that the area of the full circle is twice the sum of the areas of the two semicircles, which is what we set out to prove.
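The algebra above is easy to spot-check numerically. In this Python sketch (with arbitrary radii $$a$$ and $$b$$), $$OC$$ is obtained by solving the Pythagorean equation directly, and it indeed comes out equal to $$b$$:

```python
import math, random

random.seed(1)
for _ in range(5):
    a, b = random.uniform(0.1, 5.0), random.uniform(0.1, 5.0)
    # (OC)^2 + a^2 = (a + b - OC)^2 + b^2, solved for OC:
    oc = ((a + b) ** 2 + b**2 - a**2) / (2 * (a + b))
    assert math.isclose(oc, b)
    r2 = oc**2 + a**2   # r^2 = a^2 + b^2
    # full circle area equals twice the sum of the two semicircle areas
    assert math.isclose(math.pi * r2, 2 * (math.pi * a**2 / 2 + math.pi * b**2 / 2))
print("all checks passed")
```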

I guess I have not yet lost my passion for pointless, self-serving mathematics.

Reader’s Digest recently conducted an interesting experiment: they “lost” 12 wallets, each filled with about $50 worth of cash and sufficient documentation to locate the owner, in each of 16 cities around the world. The result: Finns in Helsinki are the most honest with 11 of the 12 wallets returned, whereas in Lisbon, Portugal, the sole wallet that was returned was, in fact, found by a visiting Dutch couple. Finns, needless to say, are rejoicing: “we don’t even run red lights,” boasted a Helsinki resident.

So what can we conclude from this interesting experiment? Perhaps shockingly, almost nothing. This becomes evident if I plot a histogram of the number of wallets returned and overlay on it a binomial distribution with a success probability of 46.875% (which corresponds to the total number of wallets returned, 90 out of 192): the histogram matches the predicted curve very closely. Unsurprisingly, there is a certain probability that in a given city 1, 2, 3, etc., wallets are returned; and the results of Reader’s Digest match this prediction closely. So there is no reason for Finns to rejoice or for the Portuguese to feel shame. It’s all just blind luck, after all. And the only valid conclusion we can draw from this experiment is that people are just as likely to be decent folks in Lisbon as in Helsinki. But how do you explain this to a lay audience? More importantly, how do you prevent a political demagogue from drawing false or unwarranted conclusions from the data?

It is now formally official: global surface temperatures did not increase significantly in the past 15 years or so. But if skeptics conclude that this is it, the smoking gun that proves that all climate science is hogwash, they better think again. When we look closely, the plots reveal something a lot more interesting.
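The binomial model for the wallet experiment takes only a few lines. A Python sketch (12 wallets per city, success probability 90/192):

```python
from math import comb

n, p = 12, 90 / 192  # wallets per city, overall return rate
pmf = [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]
for k, q in enumerate(pmf):
    # expected number of cities (out of 16) where exactly k wallets come back
    print(f"{k:2d} wallets returned: expected in {16 * q:5.2f} cities")
```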
For starters… this is not the first time global temperatures stagnated or even decreased somewhat since the start of recordkeeping. There is a roughly 20-year period centered around 1950 or so, and another, even longer period centered roughly around 1890. This looks, in fact, like evidence that there may be something to the idea of a 60-year climate cycle. However, the alarming bit is this: every time the cycle peaks, temperatures are higher than in the previous cycle.

The just released IPCC Summary for Policymakers makes no mention of this cycle, but it does offer an explanation for the observed stagnating temperatures. These are probably a result of volcanic activity, they tell us, the solar cycle, and perhaps mismodeling the effects of greenhouse gases and aerosols, but they are not exactly sure. And certainty is characterized by words like “high confidence,” “medium confidence” and such, with no definitions given. These will be supplied, supposedly, in the technical report that will be released on Monday. Nonetheless, the statement that “Probabilistic estimates […] are based on statistical analysis of observations or model results, or both, and expert judgment” [emphasis mine] does not fill me with confidence, if you will pardon the pun.

In fact, I feel compelled to compare this to the various reports and releases issued by the LHC in recent years about the Higgs boson. There was no “expert judgment”. There were objective statistical analysis methods and procedures that were thoroughly documented (even though they were often difficult to comprehend, due to their sheer complexity). There were objective standards for claiming a discovery. Given the extreme political sensitivity of the topic, I think the IPCC should adopt standards of analysis similar to, or even more stringent than, the LHC’s. Do away with “expert judgment” and use instead proper statistical tools to establish the likelihood of specific climate models in the light of the gathered data.
And if the models do not work, e.g., if they failed to predict stagnating temperatures, the right thing to do is say that this is so; there is no need for “expert judgment”. Just state the facts.

I’ve been hesitant to write about this, as skeptics will already have plenty to gripe about; I don’t need to pile on. And I swear I am not looking for excuses to bash the IPCC, not to mention that I have little sympathy or patience for skeptics who believe that an entire body of science is just one huge scam to make Al Gore and his buddies rich. But… I was very disappointed to see plots in the latest IPCC “Summary for Policymakers” report that appear unnecessarily manipulative. Wikipedia describes these as truncated or “gee-whiz” graphs: graphs in which the vertical axis does not start at zero. This can dramatically change the appearance of a plot, making small variations appear much larger than they really are.

To be clear, the use of truncated plots is often legitimate. Perhaps the plot compares two quantities that are of a similar magnitude. Perhaps the plot shows a quantity the absolute magnitude of which is irrelevant. Perhaps the quantity is such that “0” has no special meaning or is not a natural start of the range (e.g., pH, temperature in Centigrade). But in other cases, this practice can be viewed as misleading, intellectually dishonest (for instance, it is common for financial companies to manipulate plots this way to make their market performance appear more impressive than it really is) or outright fraudulent.

So here we are: the 2013 IPCC report’s summary for policymakers has been released in draft form, and what do I see in it? Several key plots presented in truncated “gee-whiz” form, despite the fact that the quantities they represent are such that their absolute magnitudes are relevant, that their variability must be measured against their absolute magnitudes, and that zero is a natural start of the range.
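A toy calculation (with made-up numbers) shows just how much a truncated axis can exaggerate a small variation:

```python
lo, hi = 13.9, 14.0   # two hypothetical data values

# axis starting at zero: bar heights are proportional to the values themselves
full_ratio = hi / lo                       # about 1.007, i.e., a 0.7% difference

# axis truncated to start at 13.8: heights are measured from the axis floor
trunc_ratio = (hi - 13.8) / (lo - 13.8)    # 2.0, an apparent 100% difference
print(full_ratio, trunc_ratio)
```

The underlying data never changed; only the axis floor did.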
I am presenting the original plots on the left and my crudely “untruncated” versions on the right. This is not kosher, especially in a document that is intended for consumption by a lay audience who may not have the scientific education to spot such subtleties. The document is still labeled a draft, with copy editing in particular yet to take place. Here’s to hoping that these plots (and any similar plots that may appear in the main report) are corrected before publication, to avoid the impression of trying to exaggerate the case for climate change. Scientists should be presenting the science objectively and leave the manipulation, even inadvertent manipulation, to politicians.

I was having a discussion with a lawyer friend of mine. I was trying to illustrate the difference between the advocating done by lawyers and the scientist’s unbiased (or at least, not intentionally biased) search for the truth. One is about cherry-picking facts and arguments to prove a preconceived notion; the other, about trying to understand the world around us. I told him that anything and the opposite of anything can be proven by cherry-picking facts. Then it occurred to me that this is true even in math. For instance, by cherry-picking facts, I can easily prove that $$2\times 2=5$$.

Let’s start with three variables, $$a$$, $$b$$ and $$c$$, for which it is true that $$a=b+c$$. Then, multiplying by 5 gives $$5a=5b+5c.$$ Multiplying by 4 and switching the two sides gives $$4b+4c=4a.$$ Adding these two equations together, we get $$5a+4b+4c=4a+5b+5c.$$ Subtracting $$9a$$ from both sides, we obtain $$4b+4c-4a=5b+5c-5a,$$ or $$4(b+c-a)=5(b+c-a).$$ Dividing both sides by $$b+c-a$$ gives the final result: $$4=5.$$

And no, I did not make some simple mistake in my derivation. In fact, I can use computer algebra to obtain the same result, and computers surely don’t lie. Here it is, with Maxima:

(%i1) eq1:5*a=5*b+5*c$
(%i2) eq2:4*b+4*c=4*a$
(%i3) eq3:eq1+eq2$
(%i4) eq4:eq3-9*a$
(%i5) eq5:factor(eq4)$
(%i6) eq6:eq5/(b+c-a);
(%o6)                                4 = 5

All I had to do to make this happen was to ignore an inconvenient little fact, which is precisely what lawyers (not to mention politicians) do all the time. Surely, if I can prove that $$2\times 2=5$$, I can prove anything. So can lawyers and they know it.
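The “inconvenient little fact” is, of course, that $$a=b+c$$ makes the divisor $$b+c-a$$ exactly zero. Exact rational arithmetic (a Python sketch, with arbitrary values) refuses to play along where pencil-and-paper sleight of hand succeeds:

```python
from fractions import Fraction

b, c = Fraction(3), Fraction(2)   # arbitrary values
a = b + c                          # the premise: a = b + c

print(b + c - a)   # 0: the divisor in the final step is exactly zero
try:
    (4 * (b + c - a)) / (b + c - a)
except ZeroDivisionError:
    print("division by zero: the 'proof' collapses")
```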

Maxima is an open-source computer algebra system (CAS) and a damn good one at that if I may say so myself, being one of Maxima’s developers.

Among other things, Maxima has top-notch tensor algebra capabilities, which can be used, among other things, to work with Lagrangian field theories.

This week, I am pleased to report, SourceForge chose Maxima as one of the featured open-source projects on their front page. No, it won’t make us rich and famous (not even rich or famous) but it is nice to be recognized.

Yesterday, Intel lost the bid for the patent assets of defunct Canadian company Nortel, despite joining forces with Google.

Google bid some odd amounts; for instance, at one point they bid $1,902,160,540. The digits happen to be those of Brun’s constant: B2 = 1.90216058… Brun’s constant is the sum of the reciprocals of twin primes: B2 = (1/3 + 1/5) + (1/5 + 1/7) + (1/11 + 1/13) + … According to Brun’s theorem, this sum converges; its limit is Brun’s constant.

A professor of mathematics named Thomas Nicely once used a group of computers to calculate twin primes up to 10^14, computing Brun’s constant among other things. At one point, Nicely’s computations failed. After eliminating other sources of error, Nicely concluded that the problem was a fault in the new Pentium processors present in some recently acquired computers in the group. Nicely notified Intel, but it wasn’t until after a public relations disaster that Intel finally responded the way they should have in the first place, offering to replace all affected processors. This cost Intel $475 million.

Who knows, if they still had that extra $475 million cash in their pockets, they could have bid more and won yesterday.
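Brun’s constant is fun to chase numerically, too, though the convergence is painfully slow. A Python sketch (the sieve limit is an arbitrary choice):

```python
def twin_prime_sum(limit):
    # partial sum of Brun's series over twin prime pairs (p, p+2), p < limit
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    total = 0.0
    for p in range(3, limit - 1):
        if sieve[p] and sieve[p + 2]:
            total += 1 / p + 1 / (p + 2)
    return total

print(twin_prime_sum(100_000))  # still well short of B2 = 1.90216...
```

Even millions of terms leave the partial sum far from 1.902…; Nicely needed clever extrapolation, not just brute force, to estimate the limit.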

Canada is 144 years old today. That is 12², or a dozen dozen. I am four dozen years old, and spent the last two dozen of these years here in Canada. Wonder what else is divisible by 12 this year.

The other day, I saw a report on the CBC about increasingly sophisticated methods thieves use to steal credit and bank card numbers. They showed, for instance, how a thief can easily grab a store card reader when the clerk is not looking, replacing it with a modified reader that steals card numbers and PIN codes.

That such thefts can happen in the first place, however, I attribute to the criminal negligence of the financial institutions involved. There is no question about it, when it’s important to a corporation, they certainly find ways to implement cryptographically secure methods to deny access by unauthorized equipment. Such technology has been in use by cable companies for many years already, making it very difficult to use unauthorized equipment to view cable TV. So how hard can it be to incorporate strong cryptographic authentication into bank card reader terminals, and why do banks not do it?

The other topic of the report was the use of insecure (they didn’t call it insecure but that’s what it is) RFID technology on some newer credit cards, the information from which can be stolen in a split second by a thief that just stands or sits next to you in a crowded mall. The use of such technology on supposedly “secure” new electronic credit cards is both incomprehensible and inexcusable. But, I am sure the technical consultant who recommended this technology to the banks in some bloated report full of flowery prose and multisyllable jargon received a nice paycheck.