“After a second he notices he ran it on db1 instead of db2”… This sentence (slightly shortened, to make a fitting title) describes the beginning of a colossally effed up night at GitLab.com.

In response to a spike in system load, which resulted in lag on a replication server, the operator thought that restarting the replication server with a clean slate would be a good idea. So he decided to wipe the replication server’s data directory.

Unfortunately, he entered the command in the wrong window.

I feel his pain. I have made similar mistakes before, albeit on a much smaller scale, and the memories still hurt, years later.

I have to commend GitLab for their exceptional openness about this incident, offering us all a valuable lesson. I note that others also responded positively, offering sympathy, assistance, and useful advice.

I read their post-mortem with great interest. In reaction, I already implemented something that I should have done years ago: changing the background color of some of the xterm windows that I regularly open to my Linux servers, to distinguish them visually. (“Create issue to change terminal PS1 format/colours to make it clear whether you’re using production or staging”).
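The idea behind that issue can be sketched in a few lines of one’s shell startup file. This is only a sketch for `~/.bashrc`; the host names are hypothetical, and the color choices are a matter of taste:

```shell
# Pick a prompt with a loud red background on production machines
# and a calmer green one everywhere else. Host names are made up.
prompt_for_host() {
    case "$1" in
        db1|db2|web1)   # production hosts (hypothetical names)
            printf '%s' '\[\e[41;97m\][\u@\h \W]\$\[\e[0m\] '
            ;;
        *)              # staging, laptops, everything else
            printf '%s' '\[\e[42;30m\][\u@\h \W]\$\[\e[0m\] '
            ;;
    esac
}
PS1="$(prompt_for_host "$(hostname -s 2>/dev/null || echo unknown)")"
```

The same trick works for xterm background colors: launch the terminal with `-bg darkred` (or the equivalent profile setting) for production hosts, so the warning is visible before a single keystroke is typed.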

Of course similar incidents and near misses also changed my habits over the years. I rarely delete anything these days without making a backup first. I always pause before hitting Enter on a command that is not (easily) reversible. I have multiple backups, and tested procedures for recovery.

Even so… as Forrest Gump says, shit happens. And every little bit helps, especially when we can learn from the valuable lessons of others without having to go through their pain.

This morning, when I woke up, the regular status e-mails with which my servers greet me told me that there was a major CentOS update (version 7.3). Cool. Unfortunately, it meant that I needed to upgrade as many as five servers: my main server, its physical backup, my backup server in NYC, another “in cloud” backup, and yet another server that I help administer. I began this process shortly after 8 in the morning, once I finished breakfast.

And as usual, a major upgrade like this brings to the surface little problems, little annoyances such as folders that had incorrectly configured SELinux permissions. No big deal, to be sure, but several such little things can consume hours of your time.

And then, it was also Microsoft Patch Tuesday, the second Tuesday of the month when Microsoft releases scheduled updates to Windows and other products. As soon as I was done with CentOS, my attention turned to my Windows machines, including my main workstation, its backup (actually, the same physical machine that also acts as my server’s backup in a dual-boot configuration), my wife’s desktop computer, two laptops, and last but not least, my old desktop that I still keep around as a backup/test computer.

Moreover, I also decided to update three virtual machines (one running Windows 7, the other two Windows XP) that I keep around both for test purposes and to have older software and older configurations available if needed.

Furthermore, when I update Windows, I tend to check and see if any other software packages need updating. On some computers, I run Secunia PSI, which keeps track of many applications. Even on the other systems, I had to update Java (where installed), Adobe Flash, Chrome, and Firefox.

And on older hardware, the process can be painfully slow.

To make a long story short, by the time I finished the bulk of this work, it was 7:30 in the evening. And one computer (a really low powered old netbook) is still doing its thing, even though it’s well past 11 PM now.

No wonder I didn’t accomplish much today.

Of course all of this needed to be done. Since I am a one-man band, I don’t have an IT department to rely on, but it is still important for me to keep my systems secure and well-maintained.

Nonetheless, it feels like one hell of a waste of a day.

This was a potential nightmare scenario. Imagine if we found out that the swing state results of the Nov. 8 election were altered by hackers. Imagine if an investigation found that Hillary Clinton won these states after all, and hence, won the electoral college.

Remember the hanging chads of the 2000 election?

Why is it a nightmare? Because it would likely lead to a constitutional crisis with unpredictable consequences. Donald Trump would be unlikely to concede. But even if he did, tens of millions of his supporters would likely find the results unacceptable. Even the predictable disaster of a Trump presidency is preferable to a crisis of such magnitude.

And last night, the specter of just such a crisis was raised, in the form of a New York Magazine article (which was soon echoed by other news outlets), reporting on the doubts and suspicions of prominent scientists who noted a bias in the county-by-county results, more likely to favor Trump in counties where votes were counted electronically.

But not so fast, says fivethirtyeight.com. You cannot just compare the raw results without accounting for demographics. And once you take demographics into account, the apparent bias disappears. And while fivethirtyeight notes that it is difficult to validate the integrity of the voting system in the United States, nonetheless the burden of proof is on those who claim electoral fraud, and so far, the burden of proof has not been met.

I no more welcome a Trump presidency today than I did two weeks ago, but an orderly transition is still preferable to the chaos of a constitutional crisis.

Meanwhile, Clinton’s lead in the popular vote count increased to over two million votes (yes, they are still counting the votes in some states, including mighty California). This in itself is unprecedented: never in the history of the United States did a candidate win the popular vote by such a wide margin, yet lose the electoral college.

It is rare these days that a piece of spam makes me laugh, but today was an exception. After all, it is not every day that I receive an e-mail notice, pretending (kind of) to be from UPS, informing me that my “crap” has been shipped:

Still trying to figure out though if the language was intentional, or simply a mistake made by a non-native English speaker unfamiliar with certain, ahem, idioms.

I just came across this recent conversation with Barack Obama about the challenges of the future, artificial intelligence, machine learning and related topics. A conversation with an intelligent, educated person who, while not an expert in science and technology, is not illiterate in these topics either.

Barack Obama Talks AI, Robo-Cars, and the Future of the World

And now I feel like mourning. I mourn the fact that for many years to come, no such intelligent conversation will likely be heard in the Oval Office. But what do you do when a supremely qualified, highly intelligent President is replaced by a self-absorbed, misogynist, narcissistic blowhard?

Not much, I guess. I think my wife and I will just go and cuddle up with the cats and listen to some Pink Floyd instead.

If there was a single cause that sank Hillary Clinton’s bid for the presidency, it was undeniably the “e-mail scandal”.

Which is really, really sad because it was really no scandal at all. I just read a fascinating account (written back in September I believe) that offers details.

Some of what happened was due to ineptness (either by Clinton’s team or the State Department’s), some of it was a result of outdated, inconvenient, or unreliable technology, some of it was just the customary bending of the rules to get things done… most notably, there was no recklessness, no conspiracy, no cover-up, just the typical government or, for that matter, corporate bungling. (And as I noted before, Clinton’s e-mails were likely more secure on the “home brew” server sitting in a residential basement than on the State Department’s systems.)

Today, I took the plunge. I deemed my brand new server (actually, more than a month old already) ready for action. So I made the last few remaining changes, shut down the old server, and rebooted the new with the proper settings… and, ladies and gentlemen, we are now live.

Expect glitches, of course. I already found a few.

The old server, of which I was very fond, had to go. It was really old: the hardware was about seven years of age. Its video card fan had failed, and its CPU fan was also making noises. It was ultra-reliable though. I never tried to set a record, but it lasted almost three years without a reboot:

$ uptime
12:28:09 up 1033 days, 17:30, 4 users, load average: 0.64, 0.67, 0.77

(Yes, I kept it regularly updated with patches. But the kernel never received a security patch, so no reboot was necessary. And it has been on a UPS.)

This switcheroo was a Big Deal, in part, because I decided to abandon the Slackware ship in favor of CentOS, due to its improved security and, well, systemd. I know systemd is a very polarizing thing among Linux fans, but my views are entirely pragmatic: in the end, it actually makes my life easier, so there.

Anyhow, the new server has already been up 13 minutes, so… And it is a heck of a lot quieter, which I most welcome.

Dictatorships can be wonderful places, so long as they are led by competent dictators.

The problem with dictatorships is that when the dictators go bonkers, there are no corrective mechanisms. No process to replace them or make them change their ways.

And now I wonder if the same fate may be in the future of Singapore, described by some as the “wealthiest non-democracy”.

To be sure, Singapore is formally democratic, with a multi-party legislature. But really, it is a one-party state that has enacted repressive legislation that requires citizens engaging in political discussion to register with the government and forbids the assembly of four or more people without police permission.

Nonetheless, Singapore’s government enjoyed widespread public support for decades because they were competent. Competence is the best way for a government, democratic or otherwise, to earn the consent of the governed, and Singapore’s government certainly excelled on this front.

But I am beginning to wonder if this golden era is coming to an end, now that it has been announced that Singapore’s government plans to take all government computers off the Internet in an attempt to improve security.

The boneheaded stupidity of this announcement is mind-boggling.

For starters, you don’t just take a computer “off the Internet”. So long as it is connected to something that is connected to something else… just because you cannot use Google or visit Facebook does not mean that the bad guys cannot access your machine.

It will also undoubtedly make the Singapore government a lot less efficient. Knowledge workers (and government workers overwhelmingly qualify as knowledge workers) these days use the Internet as an essential resource. It could be something as simple as someone checking proper usage of a rare English expression, or something as complex as a government scientist accessing relevant literature in manuscript repositories or open access journals. Depriving government workers of these resources in order to improve security is just beyond stupid.

In the past, Singapore’s government was not known to make stupid decisions. But what happens when they start going down that road? In a true democracy, stupid governments tend to end up being replaced (which does not automatically guarantee an improvement, to be sure, but over time, natural selection tends to work.) Here, the government may dig in and protect its right to be stupid by invoking national security.

Time will tell. I root for sanity to prevail.

Not for the first time, one of my Joomla! sites was attacked by a script kiddie using a botnet.

The attack is a primitive brute force attack, trying to guess the administrator password of the site.

The frustrating thing is that the kiddie uses a botnet, accessing the site from several hundred remote computers at once.

A standard, run-of-the-mill defense mechanism that I installed works, as it counts failed password attempts and blocks the offending IP address after a predetermined number of consecutive failures.

Unfortunately, it all consumes significant resources. The Joomla! system wakes up, consults the MySQL database, renders the login page and then later, the rejection page from PHP… when several hundred such requests arrive simultaneously, they bring my little server to its knees.

As a solution, I tried a network-level block on the offending IP addresses, but there were just too many: the requests kept coming, and I became concerned that an excessively large kernel table might break the server in other ways.

So now I implemented something I’ve been meaning to do for some time: ensuring that administrative content is only accessible from my internal network. Anyone accessing it from the outside just gets a static error page, which can be sent with minimal resource consumption.
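As a sketch of what such a restriction can look like (assuming Apache 2.4, as shipped with CentOS 7; the subnet and error page are hypothetical), the idea is to reject the request before PHP or MySQL ever get involved:

```apache
# Serve Joomla's /administrator only to the internal LAN; everyone
# else gets a cheap static error page, with no PHP or MySQL involved.
# The subnet and the error page path below are assumptions.
<Location "/administrator">
    Require ip 192.168.1.0/24
    ErrorDocument 403 /forbidden.html
</Location>
```

Because Apache answers with a static file, each rejected botnet request costs almost nothing, no matter how many arrive at once.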

Now my server is happy. If only I didn’t need to waste several hours of an otherwise fine morning because of this nonsense. I swear, one of these days I’ll find one of these script kiddies in person and break his nose or something.

I’ve been encountering an increasing number of Web sites lately that ask me to disable my ad blocker. They promise, in return, fewer ads.

And with that promise, they demonstrate that they completely and utterly miss the point.

I don’t want fewer ads. I don’t mind ads. I understand that for news Web sites, ads are an essential source of revenue. I don’t resent that. I even click on ads that I find interesting or relevant.

So why do I use an ad blocker, then?

In one word: security.

Malicious ads have shown up even on some of the most respectable Web sites. Ad networks have little incentive to vet ads for security, so all too often, they only remove them after the fact, after someone complains. And like a game of whack-a-mole, the malicious advertiser is back in no time under another name, with another ad.

And then there are those ads that pop up with an autostart video, with blaring sound in the middle of the night, with the poor user (that would be me) scrambling to find which browser tab, which animation is responsible for the late night cacophony.

Indeed, it was one of these incidents that prompted me to call it quits on ads and install an ad blocker.

So sorry, folks: if you are preventing me from accessing your content because of my ad blocker, I’ll just go elsewhere.

That is, until and unless you can offer credible assurance that the ads on your site are safe. I don’t care how many there are. It’s self-limiting anyway: advertisers won’t pay top dollar for an ad on a site that is saturated with ads. What I need to know is that the ads on your site won’t ruin my day one way or another.

Today, I spent a couple of hours trying to sort out why a Joomla! Web site, which worked perfectly on my Slackware Linux server, was misbehaving on CentOS 7.

The reason was simple yet complicated. Simple because it was a result of a secure CentOS 7 installation with SELinux (Security Enhanced Linux) fully enabled. Complicated because…

Well, I tried to comprehend some weird behavior. The Apache Web server, for instance, was able to read some files but not others; even when the files in question were identical in content and had (seemingly) identical permissions.

Of course part of it was my inexperience: I do not usually manage SELinux hosts. So I was searching for answers online. But this is where the experience turned really alarming.

You see, almost all the “solutions” that I came across advocated severely weakening SELinux or disabling it altogether.

Since I was really not inclined to do either on a host that I do not own, I did not give up until I found the proper solution. Nonetheless, it made me wonder about the usefulness of overly complicated security models like SELinux or the advanced ACLs of Windows.
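For the record, the proper fix for this class of problem usually involves SELinux file contexts rather than permission bits, which is why identical `rwx` permissions can behave differently. A sketch of the usual approach, with a hypothetical site path (these commands need root and an SELinux-enabled host):

```shell
# Inspect the SELinux context that Apache is being refused on.
# The -Z flag shows the security context alongside the listing.
ls -Z /var/www/mysite

# Register the correct context for the whole tree, then apply it.
# httpd_sys_content_t is the type that lets Apache read content.
# The path /var/www/mysite is a hypothetical example.
semanage fcontext -a -t httpd_sys_content_t "/var/www/mysite(/.*)?"
restorecon -Rv /var/www/mysite
```

The point is that the context is recorded in the policy, so it survives a future relabel; a bare `chcon` would not, and disabling SELinux certainly is not required.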

These security solutions were designed by experts and expert committees. I have no reason to believe that they are not technically excellent. But security has two sides: it’s as much about technology as it is about people. People that include impatient users and inadequately trained or simply overworked system administrators.

System administrators who often “solve” a problem by disabling security altogether, rather than doing what I did: researching the problem and changing nothing until they fully understand the issue and the most appropriate solution.

The simple user/group/world security model of UNIX systems may lack flexibility, but it is easy to conceptualize and easy to develop a good intuition for. Few competent administrators would ever consider solving an access control problem by suggesting the use of 0777 as the default permission for all affected files and folders. (OK, I have seen a few who advocated just that, but I would not call these folks “competent.”)

A complex security model like SELinux, however, is difficult to learn and comprehend fully. Cryptic error messages only confound users and administrators alike. So we should not be surprised when administrators take the easy way out. Which, in a situation similar to mine, often means disabling the enhanced security features altogether. Unless their managers are themselves well trained and security conscious, they will even praise the administrator who comes up with such a quick “solution”. After all, security never helps anyone solve their problems; by its nature, it becomes visible only for its absence, and only when your systems are under attack. By then, it’s obviously too late of course.

So the next time you set up a system with proper security, think about the consequences of implementing a security model that is too complex and non-intuitive. And keep in mind that what you are securing is not merely a bunch of networked computers; people are very much part of the system, too. The security technology that is used must be compatible with both the hardware and the humans operating the hardware. A technically inferior solution that is more likely to be used and implemented properly by users and administrators beats a technically superior solution that users and administrators routinely work around to accomplish their daily tasks.

In short… sometimes, less is more indeed.

Whenever I travel, I think a lot about Internet security. For purely selfish reasons: I do not wish to become a victim of cybercrime or unnecessarily expose my own systems to attacks.

The easiest way to achieve end-to-end encryption is through a virtual private network (VPN). Whenever possible, I connect to my own router’s VPN service here in Ottawa before doing anything else on the Interwebs. The connection from my router to the final destination is still subject to intercept, but at least my connection from whatever foreign country I am in to my own network is secure.

A VPN has numerous other advantages, not the least of which is the fact that to the outside world, I appear to have an Ottawa-based IP address; this allows me, for instance, to use my Netflix subscription even in countries where Netflix is not normally available.

The downside of the VPN is that I am limited by the outgoing bandwidth of my own connections. But in practice, this does not appear to be a serious limitation. (I was able to watch Breaking Bad episodes just fine while in Abu Dhabi.)

Unfortunately, a VPN is not always possible, as some providers, for reasons known only to them, block VPNs. (I can think of a few workarounds, but I have not yet implemented any of them.) Even in this case, I remain at least partially protected. I have set up my mail server such that both incoming (IMAP) and outgoing (SMTP) connections are fully encrypted. This way, not only are my messages secure, but (and this was my main concern) I also avoid leaking sensitive password information to an eavesdropper.
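The relevant mail server settings amount to just a few lines. A sketch, assuming Postfix for SMTP and Dovecot for IMAP, with hypothetical certificate paths:

```ini
# Postfix (main.cf): require TLS on the submission path, so
# passwords and messages are never sent in the clear.
# Certificate paths below are assumptions.
smtpd_tls_security_level = encrypt
smtpd_tls_cert_file = /etc/pki/tls/certs/mail.pem
smtpd_tls_key_file  = /etc/pki/tls/private/mail.key

# Dovecot (10-ssl.conf): refuse IMAP logins without TLS.
ssl = required
ssl_cert = </etc/pki/tls/certs/mail.pem
ssl_key  = </etc/pki/tls/private/mail.key
```

With both sides set to require encryption, a client that tries to authenticate over a plaintext connection is simply refused, so a password can never leak to an eavesdropper through misconfiguration on the client side.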

When it comes to Web sites, I use secure (HTTPS) connections whenever possible, even for “mundane” stuff like innocent Google searches. I also use SSH if necessary, to connect to my servers. These days, SSH is an absolute must; the use of Telnet is just an invitation for disaster.

But of course the biggest security risk while one is on the road is the use of a public Wi-Fi network anywhere. Connecting to an HTTP (not HTTPS) server through a public Wi-Fi network and logging in with your password may not be the exact equivalent of telegraphing your password to the whole wide world, but it comes pretty darn close. Tools that can be used to scan for Wi-Fi networks and analyze the data are readily available not just for laptops but even for smartphones.

Once an open Wi-Fi network is identified, “sniffing” all packets becomes a trivial exercise, with downloadable tools that are readily available. Which is why it is incomprehensible to me that, in this day and age, most providers (e.g., hotels, airports) that actually do require users to log in rely on an unsecured network, merely intercepting the user’s first Web query to present a login page, when the technology to provide a properly secured Wi-Fi network has long been available.

In the future, no doubt I’ll have to take even stronger measures to maintain data security. For instance, the simple PPTP VPN technology in my router has known vulnerabilities. Today, it may take several hours on a dedicated high-end workstation to crack its encryption keys; the same task may be accomplished in minutes or less on tomorrow’s smartphones.

So there really are two lessons here. First, any security is better than no security: it makes it that much harder for an attacker to do harm, and most attackers will simply move on to lower-hanging fruit. Second, no measure should give you a false sense of security: reasonable security measures raise the bar, but they will never defeat a determined attacker.

Curse my suspicious nature.

Here I am, reading a very nice letter from a volunteer who is asking me to share a link on my calculator museum Web site to cheer up some kids:

And then, instead of doing as I was asked to do, I turned to Google. Somehow, this message just didn’t smell entirely kosher. The article to which I was supposed to link also appeared rather sterile, more like an uninspired homework assignment, with several factual errors. So I started searching. It didn’t take very long until I found this gem:

Then, searching some more, I came across this:

Looks like Ms. Martin has been a busy lady.

It is one of the least productive ways to use one’s time: upgrades that are more or less mandatory, forced upon you when a manufacturer ends support of an older version. Especially if the software in question is exposed to the outside world, upgrading is not optional: the security risk associated with running an unsupported, obsolete version is quite significant.

Today, I was forced to upgrade all my Web sites that use the Joomla content management system, as support for Joomla 2.5 ended in December 2014.

What can I say. It was not fun. I am using some custom components and some homebrew solutions, and it took the better part of the day to get through everything and resolve all compatibility issues.

And I gained absolutely nothing. My Web sites look exactly like they did yesterday (apart from things that might be broken as a result of the upgrade, that is). I just wasted a few precious hours of my life.

Did I mention that I hate software upgrades?

While much of the media is busy debating how the United States already “lost” a cyberwar with North Korea, or how it should respond decisively (I agree), a few began to discuss the possible liability of SONY itself in the hack.

The latest news is that the hackers stole a system administrator’s credentials; armed with these credentials, they were able to roam SONY’s corporate network freely and over the course of several months, they stole over 10 terabytes (!) of data.

Say what? Root password? Months? Terabytes?

OK, I am going to go out on a limb here. I know nothing about SONY’s IT security, the people who work there, their training or responsibilities. And of course it wouldn’t be the first time for the media to get even basic facts wrong.

Still, the magnitude of the hack is evident. It had to take a considerable amount of time to steal all that data and do all that damage.

Which could not have possibly happened if SONY’s IT security folks actually knew what they were doing.

Not that I am surprised. SONY is not alone in this regard; everywhere I turn, corporations, government departments, you name it, I see the same thing. Security, all too often, is about harassing or hindering legitimate users. No, you cannot have an EXE attachment in your e-mail! No, you cannot install that shrink-wrapped software on your workstation! No, we cannot let you open TCP port 12345 on that experimental server!

Users are pesky creatures and most of them actually find ways to get their work done. Yes, their work. This is not about evil corporate overlords not letting you update your Facebook status or watch funny cat videos on YouTube. This is about being able to accomplish tasks that you are paid to do.

Unfortunately, when it comes to IT security, a flawed mentality is all too prevalent. Even on Wikipedia. Look at this diagram, for instance, illustrating the notion of defense in depth:

This, I would argue, is a very narrow-minded view of IT security in general, and the concept of in-depth defense in particular. To me, defense in depth means a lot more than merely deploying technologies to protect data through its life cycle. Here are a few concepts:

1. Partnership with users: Legitimate users are not the enemy! Your job is to help them accomplish their tasks safely, not to become Mordac the Preventer from the Dilbert comic strip. Users can be educated, but they can also be part of your security team, for instance by alerting you when something is not working quite the way it was expected.
2. Detection plans and strategies: Recognize that, especially if your organization is prominently exposed, the question is not if but when. You will get security breaches. How do you detect them? What are the redundant technologies and methods (including organization and education) that you use to make sure that an intrusion is detected as early as possible, before too much harm is done?
3. Mitigation and recovery: Suppose you detect an intrusion. What do you do? Perhaps it’s a good idea to place a “don’t panic” sticker on the cover page of your mitigation and recovery plan. That’s because one of the worst things you can do in these cases is a knee-jerk panic response shutting down entire corporate systems. (Such a knee-jerk reaction is also ripe for exploitation. For instance, a hacker might compromise the open Wi-Fi of the coffee shop across the street from your headquarters before hacking into your corporate network, intentionally in such a way that it would be discovered, counting on the knee-jerk response that would drive employees in droves across the street to get their e-mails and get urgent work done.)
4. Compartmentalization. I don’t care if you are the most trusted system administrator on the planet. It does not mean that you need to have access to every hard drive, every database or every account on the corporate network. The tools (encrypted databases, disk-level encryption, granular access control lists) are all there: use them. Make sure that even if Kim Jong-un’s minions steal your root password, they still wouldn’t be able to read data from the corporate mail server or download confidential files from corporate systems.

SONY’s IT department probably failed on all these counts. OK, I am not sure about #1, as I never worked at SONY, but why would they be any different from other corporate environments? As to #2, the failure is obvious: it must have taken weeks if not months for the hackers to extract the reported 10 terabytes. They very obviously failed on #3, and if the media reports about a system administrator’s credentials are true, #4 as well.

Just to be clear, I am not trying to blame the victim here. When your attackers have the resources of a nation state at their disposal, it is a grave threat. But this is why IT security folks get the big bucks. I can easily see how, equipped with the resources of a nation state, the attackers were able to deploy zero day exploits and other, perhaps previously unknown techniques that would have defeated technological barriers. (Except that maybe they didn’t… the reports say that they stole user credentials and, I am guessing, there is a good chance that they used social engineering, not advanced technology.) But it’s one thing to be the victim of a successful attack, it’s another thing not being able to detect it, mitigate it, or recover from it. This is where IT security folks should shine, not harassing users about EXE attachments or with asinine password expiration policies.

Recently, I had to fill out some security-related forms with the Canadian government. To do so, I had to log on to a government Web site and create an account using a preassigned, unmemorizable user ID.

While I was doing that, I had to set up a password. It seems that the designers of the government Web site are familiar with XKCD, because their password policy (which also includes frequent password expiration and rules to prevent the reuse of old passwords) seemed like an exact copy of the policy ridiculed here:

Once I managed to get past this hurdle, I had to complete some forms that were downloadable as PDFs. Except that the forms (blank forms!) were encrypted PDFs, which made it impossible to load them into my old copy of Acrobat 6.0 for editing. The encryption was trivial to break (print to PostScript, remove the encryption block using an editor, convert back to PDF), but it was there just as an annoyance.

If they invited me to audit their security policy (of course they wouldn’t), I’d ask them the following questions:

1. What is the rationale of your password expiration/password strength policy, ignoring best advice from actual security experts who know the meaning of terms like “entropy”? What are the data supporting Draconian rules that, effectively, force infrequent users to change their passwords every time they log on to your system?
2. What is the rationale behind your policy to encrypt PDF files unnecessarily? Exactly what threat is this supposed to address, and what is the anticipated outcome of employing this security measure?
3. Now that you have successfully alienated your users, what are your plans for detection, analysis, mitigation and recovery in case a real attack occurs? Would you even know when it happens?

I suspect that the real answer to the last question is a no. Security theater is not about protecting systems or preventing attacks; it’s about protecting incompetent hind parts from criticism.

In light of the latest Internet security scare, the Heartbleed bug, there are again many voices calling for an end to the use of passwords, to be replaced instead by fingerprint scanners or other kinds of biometric identification.

I think it is a horrifyingly, terribly bad idea.

Just to be clear, I am putting aside any concerns about the reliability of biometric identification. Such systems are not as reliable as their advocates would like us to believe, but that is not really the issue; let us assume that today’s biometric technologies are absolutely, 100% reliable. Even so, they are still a terrible idea, and here is why.

First, what happens if your biometric identification becomes compromised? However it is acquired, it is still transmitted in the form of a series of bits and bytes, which can be intercepted by an attacker. If this were a password, you could easily change it to thwart an attack. But how do you change your fingerprint? Your retina print? Your voice? Your heartbeat?

Second, what happens if you “lose” your biometric identification marker? Fingers get chopped off in accidents. People lose their eyesight. An emergency tracheotomy may deprive you of your normal voice. What then?

And what about privacy concerns? There have been rulings, I understand, in the US and perhaps elsewhere, implying that the same legal or constitutional guarantees that protect you from being compelled to reveal a password may not apply when it comes to providing a fingerprint, a DNA sample, or other biometric markers.

The bottom line is this: a password associates an account or a service with a unique piece of secret knowledge. This knowledge can be changed, passed on, or revoked, and its owner may be protected by law from being compelled to reveal it. Biometric identification fundamentally changes this relationship by associating the account or the service with an immutable biometric characteristic of a person.

Microsoft officially ended support for Windows XP today.

I hope someone will sue the hell out of them.

To be clear, I understand why they are doing this: they don’t want to support an obsolete, 14-year-old operating system forever.

But something like one quarter of the world’s computers still run Windows XP. One can argue that Microsoft is not responsible for the behavior of system owners who, for whatever reason, choose not to update their systems. But what about those who do everything right and still become the victims of cyberattacks that utilize networks of unpatched Windows XP computers? The decision to terminate support makes Microsoft a de facto accomplice of these cybercriminals.

My fearless prediction is that within a few months, Microsoft will quietly start releasing high priority security patches for Windows XP again.

Meanwhile, Microsoft began releasing a significant update to Windows 8.1. I noticed that when I updated my Windows 8.1 laptop, it booted directly into the Windows desktop. Wow! Now all we need is a decent Start menu and the ability to perform basic system configuration tasks without going through the touch-optimized “Modern UI” and all will be bliss again. One of these days, I might even upgrade one of my development workstations to Windows 8.1!

Second Tuesday of the month. Not my favorite day.

This is when Microsoft releases their monthly batch of updates for Windows.

And this is usually when I also update other software, e.g., Java, Flash, Firefox, on computers that I do not use every day.

Here is about half of them.

The other half sit on different desks.

Oh, that big screen, by the way, is shared by four different computers. Fortunately, two of them are Linux servers. Not that they don’t require updating, but those updates do not usually come on the second Tuesday of the month.

So the NSA and their counterparts elsewhere, including Canada and the UK, are spying on us. I wish I could say the news shocked me, but it didn’t.

The level of secrecy is a cause for concern of course. It is one thing for these agencies not to disclose specific sources and methods, it is another to keep the existence of entire programs secret, especially when these programs are designed to collect data wholesale.

But my biggest concern is that the programs themselves represent a huge security threat for all of us.

First, the NSA apparently relies on its ability to compromise the security of encryption products and technologies or on backdoors built into these products. An unspoken assumption is that only the NSA would be able to exploit these weaknesses. But how do we know that this is the case? How do we know that the same weaknesses and backdoors used by the NSA to decrypt our communications are not discovered and then exploited by foreign intelligence agencies, industrial spies, or criminal organizations?

As an illustrative example, imagine purchasing a very secure lock for your front door. Now imagine that the manufacturer does not tell you that the locks are designed such that there exists a master key that opens them all. Maybe the only officially sanctioned master key is deposited in a safe place, but what are the guarantees that it does not get stolen? Copied? Or that the lock is not reverse engineered?

My other worry is about how the NSA either directly collects, or compels service providers to collect, and store, large amounts of data (e.g., raw Internet traffic). Once again, the unspoken assumption is that only authorized personnel are able to access the data that was collected. But what are the guarantees for that? How do we know that these databases are not compromised and that our private data will not fall into hands not bound by laws and legislative oversight?

These are not groundless concerns. As Edward Snowden’s case demonstrates, the NSA was unable to control unauthorized access even by its own contract employees working in what was supposedly a highly structured, extremely secure work environment. (How on Earth was Snowden able to copy data from a top secret system to a portable device? That violates just about every security rule in the book.)

So even if the NSA and friends play entirely above board and never act in an unlawful manner, these serious concerns remain.

I do not believe we, as citizens, should grant the authority to any state security apparatus to collect data wholesale, or to compromise the cryptographic security of our digital infrastructure. Even if it makes it harder to catch bad guys.

So, our message to the NSA, the CSE, the GCHQ and their friends elsewhere in the free world should be simply this: back off, guys. Or else, risk undermining the very thing you purportedly protect, our basic security.