Jun 09 2016
 

Dictatorships can be wonderful places, so long as they are led by competent dictators.

The problem with dictatorships is that when the dictators go bonkers, there are no corrective mechanisms. No process to replace them or make them change their ways.

And now I wonder if the same fate may be in the future of Singapore, described by some as the “wealthiest non-democracy”.

The Ministry of Information and the Arts

To be sure, Singapore is formally democratic, with a multi-party legislature. But really, it is a one-party state that has enacted repressive legislation requiring citizens engaging in political discussion to register with the government, and forbidding the assembly of four or more people without police permission.

Nonetheless, Singapore’s government enjoyed widespread public support for decades because it was competent. Competence is the best way for a government, democratic or otherwise, to earn the consent of the governed, and Singapore’s government certainly excelled on this front.

But I am beginning to wonder if this golden era is coming to an end, now that it has been announced that Singapore’s government plans to take all government computers off the Internet in an attempt to improve security.

The boneheaded stupidity of this announcement is mind-boggling.

For starters, you don’t just take a computer “off the Internet”. So long as it is connected to something that is connected to something else… just because you cannot use Google or visit Facebook does not mean that the bad guys cannot access your machine.

It will also undoubtedly make the Singapore government a lot less efficient. Knowledge workers (and government workers overwhelmingly qualify as knowledge workers) these days use the Internet as an essential resource. It could be something as simple as someone checking proper usage of a rare English expression, or something as complex as a government scientist accessing relevant literature in manuscript repositories or open access journals. Depriving government workers of these resources in order to improve security is just beyond stupid.

In the past, Singapore’s government was not known to make stupid decisions. But what happens when they start going down that road? In a true democracy, stupid governments tend to end up being replaced (which does not automatically guarantee an improvement, to be sure, but over time, natural selection tends to work.) Here, the government may dig in and protect its right to be stupid by invoking national security.

Time will tell. I root for sanity to prevail.

 Posted by at 1:45 pm
Apr 15 2016
 

Not for the first time, one of my Joomla! sites was attacked by a script kiddie using a botnet.

The attack is a primitive brute force attack, trying to guess the administrator password of the site.

The frustrating thing is that the kiddie uses a botnet, accessing the site from several hundred remote computers at once.

A standard, run-of-the-mill defense mechanism that I installed works, as it counts failed password attempts and blocks the offending IP address after a predetermined number of consecutive failures.
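The logic of that defense mechanism amounts to just a few lines. Here is a simplified Python sketch of the idea (not the actual plugin I use, whose internals I am only approximating):

```python
from collections import defaultdict

class LoginGuard:
    """Count consecutive failed login attempts per IP address and
    block the address after a configurable number of failures."""

    def __init__(self, max_failures=5):
        self.max_failures = max_failures
        self.failures = defaultdict(int)
        self.blocked = set()

    def is_blocked(self, ip):
        return ip in self.blocked

    def record_attempt(self, ip, success):
        if success:
            self.failures.pop(ip, None)  # a good login resets the count
            return
        self.failures[ip] += 1
        if self.failures[ip] >= self.max_failures:
            self.blocked.add(ip)

guard = LoginGuard(max_failures=3)
for _ in range(3):
    guard.record_attempt("203.0.113.7", success=False)
print(guard.is_blocked("203.0.113.7"))  # → True
```

The expensive part is not this bookkeeping but everything that has to wake up around it; the earlier in the request pipeline such a check runs, the cheaper each rejected attempt becomes.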

Unfortunately, it all consumes significant resources. The Joomla! system wakes up, consults the MySQL database, renders the login page and then later, the rejection page from PHP… when several hundred such requests arrive simultaneously, they bring my little server to its knees.

As a solution, I tried blocking the offending IP addresses at the network level, but there were just too many: the requests kept coming, and I became concerned that an excessively large kernel table might break the server in other ways.

So now I implemented something I’ve been meaning to do for some time: ensuring that administrative content is only accessible from my internal network. Anyone accessing it from the outside just gets a static error page, which can be sent with minimal resource consumption.
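The address check itself is cheap. A minimal Python sketch of the idea, using the standard ipaddress module (the network ranges are placeholders, not my actual configuration):

```python
import ipaddress

# Placeholder ranges; substitute your actual internal networks.
INTERNAL_NETS = [ipaddress.ip_network("192.168.1.0/24"),
                 ipaddress.ip_network("127.0.0.0/8")]

STATIC_ERROR = "<html><body><h1>403 Forbidden</h1></body></html>"

def admin_response(client_ip, render_admin_page):
    """Serve administrative content only to internal addresses;
    everyone else gets a cheap, static error page."""
    addr = ipaddress.ip_address(client_ip)
    if any(addr in net for net in INTERNAL_NETS):
        return render_admin_page()  # the expensive Joomla!/PHP/MySQL path
    return STATIC_ERROR             # no database query, no page rendering

print(admin_response("198.51.100.23", lambda: "admin page"))  # prints the static 403 page
```

In practice the same test belongs in the Web server configuration rather than in application code, so that rejected requests never touch PHP at all.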

Now my server is happy. If only I didn’t need to waste several hours of an otherwise fine morning because of this nonsense. I swear, one of these days I’ll find one of these script kiddies in person and break his nose or something.

 Posted by at 11:50 am
Apr 10 2016
 

I’ve been encountering an increasing number of Web sites lately that ask me to disable my ad blocker. They promise, in return, fewer ads.

And with that promise, they demonstrate that they completely and utterly miss the point.

I don’t want fewer ads. I don’t mind ads. I understand that for news Web sites, ads are an essential source of revenue. I don’t resent that. I even click on ads that I find interesting or relevant.

So why do I use an ad blocker, then?

In one word: security.

Malicious ads have shown up even on some of the most respectable Web sites. Ad networks have no incentive to vet ads for security, so all too often, they only remove them after the fact, once someone has complained. And like a whack-a-mole game, the malicious advertiser is back in no time under another name, with another ad.

And then there are those ads that pop up with an autostart video, with blaring sound in the middle of the night, with the poor user (that would be me) scrambling to find which browser tab, which animation is responsible for the late night cacophony.

Indeed, it was one of these incidents that prompted me to call it quits on ads and install an ad blocker.

So sorry folks, if you are preventing me from accessing your content because of my ad blocker, I just go elsewhere.

That is, until and unless you can offer credible assurance that the ads on your site are safe. I don’t care how many there are. It’s self-limiting anyway: advertisers won’t pay top dollar for an ad on a site that is saturated with ads. What I need to know is that the ads on your site won’t ruin my day one way or another.

 Posted by at 9:19 am
Sep 21 2015
 

Today, I spent a couple of hours trying to sort out why a Joomla! Web site, which worked perfectly on my Slackware Linux server, was misbehaving on CentOS 7.

The reason was simple yet complicated. Simple because it was a result of a secure CentOS 7 installation with SELinux (Security Enhanced Linux) fully enabled. Complicated because…

Well, I tried to comprehend some weird behavior. The Apache Web server, for instance, was able to read some files but not others; even when the files in question were identical in content and had (seemingly) identical permissions.

Of course part of it was my inexperience: I do not usually manage SELinux hosts. So I was searching for answers online. But this is where the experience turned really alarming.

You see, almost all the “solutions” that I came across advocated severely weakening SELinux or disabling it altogether.

Since I was really not inclined to do either on a host that I do not own, I did not give up until I found the proper solution. Nonetheless, it made me wonder about the usefulness of overly complicated security models like SELinux or the advanced ACLs of Windows.

These security solutions were designed by experts and expert committees. I have no reason to believe that they are not technically excellent. But security has two sides: it’s as much about technology as it is about people. People that include impatient users and inadequately trained or simply overworked system administrators.

System administrators who all too often “solve” a problem by disabling security altogether, rather than doing what I did: researching the problem and changing nothing until the issue, and the most appropriate solution, were fully understood.

The simple user/group/world security model of UNIX systems may lack flexibility, but it is easy to conceptualize and easy to develop a good intuition for. Few competent administrators would ever consider solving an access control problem by suggesting 0777 as the default permission for all affected files and folders. (OK, I have seen a few who advocated just that, but I would not call these folks “competent.”)
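To illustrate how easy that intuition is to act on: a sane mode is one call away, and the difference between it and the lazy 0777 “fix” is immediately visible. A quick Python demonstration (using a scratch file, so it is safe to run anywhere):

```python
import os
import stat
import tempfile

# Create a scratch file and give it a sane mode:
# owner read/write, group read, world nothing (0640).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)
sane_mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(sane_mode))  # → 0o640

# The 0777 "fix" makes the file world-writable: anyone on the
# host can now modify it.
os.chmod(path, 0o777)
world_writable = bool(stat.S_IMODE(os.stat(path).st_mode) & stat.S_IWOTH)
print(world_writable)  # → True

os.remove(path)
```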

A complex security model like SELinux, however, is difficult to learn and comprehend fully. Cryptic error messages only confound users and administrators alike. So we should not be surprised when administrators take the easy way out, which, in a situation similar to mine, often means disabling the enhanced security features altogether. Unless their managers are themselves well trained and security conscious, they will even praise the administrator who comes up with such a quick “solution”. After all, security never helps anyone solve their problems; by its nature, it becomes visible only through its absence, and only when your systems are under attack. By then, of course, it is too late.

So the next time you set up a system with proper security, think about the consequences of implementing a security model that is too complex and non-intuitive. And keep in mind that what you are securing is not merely a bunch of networked computers; people are very much part of the system, too. The security technology that is used must be compatible with both the hardware and the humans operating the hardware. A technically inferior solution that is more likely to be used and implemented properly by users and administrators beats a technically superior solution that users and administrators routinely work around to accomplish their daily tasks.

In short… sometimes, less is more indeed.

 Posted by at 7:17 pm
May 15 2015
 

Whenever I travel, I think a lot about Internet security. For purely selfish reasons: I do not wish to become a victim of cybercrime or unnecessarily expose my own systems to attacks.

The easiest way to achieve end-to-end encryption is through a virtual private network (VPN). Whenever possible, I connect to my own router’s VPN service here in Ottawa before doing anything else on the Interwebs. The connection from my router to the final destination is still subject to intercept, but at least my connection from whatever foreign country I am in to my own network is secure.

A VPN has numerous other advantages, not the least of which is the fact that to the outside world, I appear to have an Ottawa-based IP address; this allows me, for instance, to use my Netflix subscription even in countries where Netflix is not normally available.

The downside of the VPN is that I am limited by the outgoing bandwidth of my own connections. But in practice, this does not appear to be a serious limitation. (I was able to watch Breaking Bad episodes just fine while in Abu Dhabi.)

Unfortunately, a VPN is not always possible, as some providers, for reasons known only to them, block VPNs. (I can think of a few workarounds, but I have not yet implemented any of them.) Even in this case, I remain at least partially protected. I have set up my mail server such that both incoming (IMAP) and outgoing (SMTP) connections are fully encrypted. This way, not only are my messages secure, but (and this was my main concern) I also avoid leaking sensitive password information to an eavesdropper.
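For what it’s worth, enforcing “fully encrypted” on the client side boils down to refusing any connection that fails certificate or host name verification. A minimal Python sketch (no connection is made here, and the IMAP host in the comment is a placeholder):

```python
import ssl

def strict_tls_context():
    """Client-side TLS context: verify the server certificate and
    host name, and refuse anything older than TLS 1.2."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checks
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_tls_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True

# Usage sketch (not executed; host name is made up):
#   import imaplib
#   imap = imaplib.IMAP4_SSL("mail.example.com", ssl_context=ctx)
```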

When it comes to Web sites, I use secure (HTTPS) connections whenever possible, even for “mundane” stuff like innocent Google searches. I also use SSH if necessary, to connect to my servers. These days, SSH is an absolute must; the use of Telnet is just an invitation for disaster.

But of course the biggest security risk while one is on the road is the use of a public Wi-Fi network anywhere. Connecting to an HTTP (not HTTPS) server through a public Wi-Fi network and logging in with your password may not be the exact equivalent of telegraphing your password to the whole wide world, but it comes pretty darn close. Tools that can be used to scan for Wi-Fi networks and analyze the data are readily available not just for laptops but even for smartphones.

Once an open Wi-Fi network is identified, “sniffing” all its packets becomes a trivial exercise with readily downloadable tools. Which is why it is incomprehensible to me that, in this day and age, most providers (e.g., hotels, airports) that actually do require users to log in run an unsecured network and simply intercept the user’s first Web query to present a login page, when the technology to provide a properly secured Wi-Fi network has long been available.
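To make it concrete: if the sniffed login used HTTP Basic authentication, the captured header is not even encrypted, just base64-encoded. Decoding it takes one line of Python (the credentials are, of course, made up):

```python
import base64

# What a sniffer might see in a captured HTTP request:
captured_header = "Authorization: Basic YWxpY2U6aHVudGVyMg=="

encoded = captured_header.split()[-1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)  # → alice hunter2
```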

In the future, no doubt I’ll have to take even stronger measures to maintain data security. For instance, the simple PPTP VPN technology in my router has known vulnerabilities. Today, it may take several hours on a dedicated high-end workstation to crack its encryption keys; the same task may be accomplished in minutes or less on tomorrow’s smartphones.

So there really are two lessons here: First, any security is better than no security, as it makes it that much harder for an attacker to do harm, and most attackers will just move on to find lower-hanging fruit. Second, no measure should give you a false sense of security: reasonable security measures raise the bar, but they will never defeat a determined attacker.

 Posted by at 2:46 pm
Mar 25 2015
 

Curse my suspicious nature.

Here I am, reading a very nice letter from a volunteer who is asking me to share a link on my calculator museum Web site to cheer up some kids:


And then, instead of doing as I was asked to do, I turned to Google. Somehow, this message just didn’t smell entirely kosher. The article to which I was supposed to link also appeared rather sterile, more like an uninspired homework assignment, with several factual errors. So I started searching. It didn’t take very long until I found this gem:

Then, searching some more, I came across this:

Or how about this one:

Looks like Ms. Martin has been a busy lady.

So no, I don’t think I’d be adding any links today.

 Posted by at 7:33 pm
Mar 14 2015
 

I hate software upgrades.

It is one of the least productive ways to use one’s time. I am talking about upgrades that are more or less mandatory, when a manufacturer ends support for an older version. Especially if the software in question is exposed to the outside world, upgrading is not optional: the security risk of running an unsupported, obsolete version is quite significant.

Today, I was forced to upgrade all my Web sites that use the Joomla content management system, as support for Joomla 2.5 ended in December 2014.


What can I say. It was not fun. I am using some custom components and some homebrew solutions, and it took the better part of the day to get through everything and resolve all compatibility issues.

And I gained absolutely nothing. My Web sites look exactly like they did yesterday (apart from things that might be broken as a result of the upgrade, that is). I just wasted a few precious hours of my life.

Did I mention that I hate software upgrades?

 Posted by at 7:30 pm
Dec 18 2014
 

While much of the media is busy debating how the United States already “lost” a cyberwar with North Korea, or how it should respond decisively (I agree), a few have begun to discuss the possible liability of SONY itself in the hack.

The latest news is that the hackers stole a system administrator’s credentials; armed with these credentials, they were able to roam SONY’s corporate network freely and over the course of several months, they stole over 10 terabytes (!) of data.

Say what? Root password? Months? Terabytes?

OK, I am going to go out on a limb here. I know nothing about SONY’s IT security, the people who work there, their training or responsibilities. And of course it wouldn’t be the first time for the media to get even basic facts wrong.

Still, the magnitude of the hack is evident. It had to take a considerable amount of time to steal all that data and do all that damage.

Which could not have possibly happened if SONY’s IT security folks actually knew what they were doing.

Not that I am surprised. SONY is not alone in this regard; everywhere I turn, corporations, government departments, you name it, I see the same thing. Security, all too often, is about harassing or hindering legitimate users. No, you cannot have an EXE attachment in your e-mail! No, you cannot install that shrink-wrapped software on your workstation! No, we cannot let you open TCP port 12345 on that experimental server!

Users are pesky creatures and most of them actually find ways to get their work done. Yes, their work. This is not about evil corporate overlords not letting you update your Facebook status or watch funny cat videos on YouTube. This is about being able to accomplish tasks that you are paid to do.

Unfortunately, when it comes to IT security, a flawed mentality is all too prevalent. Even on Wikipedia. Look at this diagram, for instance, illustrating the notion of defense in depth:

This, I would argue, is a very narrow-minded view of IT security in general, and the concept of in-depth defense in particular. To me, defense in depth means a lot more than merely deploying technologies to protect data through its life cycle. Here are a few concepts:

  1. Partnership with users: Legitimate users are not the enemy! Your job is to help them accomplish their tasks safely, not to become Mordac the Preventer from the Dilbert comic strip. Users can be educated, but they can also be part of your security team, for instance by alerting you when something is not working quite the way it was expected.
  2. Detection plans and strategies: Recognize that, especially if your organization is prominently exposed, the question is not if but when. You will get security breaches. How do you detect them? What are the redundant technologies and methods (including organization and education) that you use to make sure that an intrusion is detected as early as possible, before too much harm is done?
  3. Mitigation and recovery: Suppose you detect an intrusion. What do you do? Perhaps it’s a good idea to place a “don’t panic” sticker on the cover page of your mitigation and recovery plan. That’s because one of the worst things you can do in these cases is a knee-jerk panic response shutting down entire corporate systems. (Such a knee-jerk reaction is also ripe for exploitation. For instance, a hacker might compromise the open Wi-Fi of the coffee shop across the street from your headquarters before hacking into your corporate network, intentionally in such a way that it would be discovered, counting on the knee-jerk response that would drive employees in droves across the street to get their e-mails and get urgent work done.)
  4. Compartmentalization. I don’t care if you are the most trusted system administrator on the planet. It does not mean that you need to have access to every hard drive, every database or every account on the corporate network. The tools (encrypted databases, disk-level encryption, granular access control lists) are all there: use them. Make sure that even if Kim Jong-un’s minions steal your root password, they still wouldn’t be able to read data from the corporate mail server or download confidential files from corporate systems.

SONY’s IT department probably failed on all these counts. OK, I am not sure about #1, as I never worked at SONY, but why would they be any different from other corporate environments? As to #2, the failure is obvious: it must have taken weeks if not months for the hackers to extract the reported 10 terabytes. They very obviously failed on #3, and if the media reports about a system administrator’s credentials are true, #4 as well.

Just to be clear, I am not trying to blame the victim here. When your attackers have the resources of a nation state at their disposal, it is a grave threat. But this is why IT security folks get the big bucks. I can easily see how, equipped with the resources of a nation state, the attackers were able to deploy zero day exploits and other, perhaps previously unknown techniques that would have defeated technological barriers. (Except that maybe they didn’t… the reports say that they stole user credentials and, I am guessing, there is a good chance that they used social engineering, not advanced technology.) But it’s one thing to be the victim of a successful attack, it’s another thing not being able to detect it, mitigate it, or recover from it. This is where IT security folks should shine, not harassing users about EXE attachments or with asinine password expiration policies.

 Posted by at 9:57 pm
Dec 17 2014
 

Recently, I had to fill out some security-related forms with the Canadian government. To do so, I had to log on to a government Web site and create an account using a preassigned, unmemorizable user ID.

While I was doing that, I had to set up a password. It seems that the designers of the government Web site are familiar with XKCD, because their password policy (which also includes frequent password expiration and rules to prevent the reuse of old passwords) seemed like an exact copy of the policy ridiculed here:

Once I managed to get past this hurdle, I had to complete some forms that were downloadable as PDFs. Except that the forms (blank forms!) were encrypted PDFs, which made it impossible to load them into my old copy of Acrobat 6.0 for editing. The encryption was trivial to break (print to PostScript, remove the encryption block using an editor, convert back to PDF), but it was there just as an annoyance.

If they invited me to audit their security policy (of course they wouldn’t), I’d ask them the following questions:

  1. What is the rationale of your password expiration/password strength policy, ignoring best advice from actual security experts who know the meaning of terms like “entropy”? What are the data supporting Draconian rules that, effectively, force infrequent users to change their passwords every time they log on to your system?
  2. What is the rationale behind your policy to encrypt PDF files unnecessarily? Exactly what threat is this supposed to address, and what is the anticipated outcome of employing this security measure?
  3. Now that you have successfully alienated your users, what are your plans for detection, analysis, mitigation and recovery in case a real attack occurs? Would you even know when it happens?
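For the record, the entropy argument in question 1 is easy to quantify, using the same numbers as the famous XKCD comic: four words drawn at random from a 2048-word list yield 44 bits of entropy, while XKCD pegs the typical pattern-based “complex” password at roughly 28 bits. The arithmetic, in Python:

```python
import math

# Four words drawn uniformly at random from a 2048-word list:
passphrase_bits = math.log2(2048 ** 4)
print(passphrase_bits)  # → 44.0

# XKCD's estimate for a pattern-based "complex" password is ~28 bits.
# At 1000 guesses per second, exhausting that space takes about:
days = 2 ** 28 / 1000 / 86400
print(round(days, 1))  # → 3.1 (days)
```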

I suspect that the real answer to the last question is a no. Security theater is not about protecting systems or preventing attacks; it’s about protecting incompetent hind parts from criticism.

 Posted by at 8:55 pm
Apr 10 2014
 

In light of the latest Internet security scare, the Heartbleed bug, there are again many voices calling for an end to the use of passwords, to be replaced instead by fingerprint scanners or other kinds of biometric identification.

I think it is a horrifyingly, terribly bad idea.

Just to be clear, I am putting aside any concerns about the reliability of biometric identification. They are not as reliable as their advocates would like us to believe, but this is not really the issue. I am assuming that as of today, biometric technologies are absolutely, 100% reliable. Even so, they are still a terrible idea, and here is why.

First, what happens if your biometric identification becomes compromised? However it is acquired, it is still transmitted in the form of a series of bits and bytes, which can be intercepted by an attacker. If this were a password, you could easily change it to thwart an attack. But how do you change your fingerprint? Your retina print? Your voice? Your heartbeat?

Second, what happens if you “lose” your biometric identification marker? Fingers get chopped off in accidents. People lose their eyesight. An emergency tracheotomy may deprive you of your normal voice. What then?

And what about privacy concerns? There have been rulings I understand, in the US and perhaps elsewhere, that imply that the same legal or constitutional guarantees that protect you from being compelled to reveal a password may not apply when it comes to providing a fingerprint, a DNA sample, or other biometric markers.

The bottom line is this: a password associates an account or a service with a unique piece of secret knowledge. This knowledge can be changed, passed on, or revoked, and owners may be protected by law from being compelled to reveal it. Biometric identification fundamentally changes this relationship by associating the account or the service with an immutable biometric characteristic of a person.

Please don’t.

 Posted by at 10:27 am
Apr 08 2014
 

Microsoft officially ended support for Windows XP today.

I hope someone will sue the hell out of them.

To be clear, I understand why they are doing this: they don’t want to support forever an obsolete operating system that first shipped back in 2001.

But something like one quarter of the world’s computers continue running Windows XP. One can argue that Microsoft is not responsible for the behavior of system owners who, for whatever reason, choose not to update their systems. But what about those who do everything right and still become the victims of cyberattacks that utilize networks of unpatched Windows XP computers? The decision to terminate support makes Microsoft a de facto accomplice of these cybercriminals.

My fearless prediction is that within a few months, Microsoft will quietly start releasing high priority security patches for Windows XP again.

Meanwhile, Microsoft began releasing a significant update to Windows 8.1. I noticed that when I updated my Windows 8.1 laptop, it booted directly into the Windows desktop. Wow! Now all we need is a decent Start menu and the ability to perform basic system configuration tasks without going through the touch-optimized “Modern UI” and all will be bliss again. One of these days, I might even upgrade one of my development workstations to Windows 8.1!

 Posted by at 10:21 pm
Mar 11 2014
 

Second Tuesday of the month. Not my favorite day.

This is when Microsoft releases their monthly batch of updates for Windows.

And this is usually when I also update other software, e.g., Java, Flash, Firefox, on computers that I do not use every day.

Here is about half of them.

The other half sit on different desks.

Oh, that big screen, by the way, is shared by four different computers. Fortunately, two of them are Linux servers. Not that they don’t require updating, but those updates do not usually come on the second Tuesday of the month.

 Posted by at 4:07 pm
Sep 06 2013
 

So the NSA and their counterparts elsewhere, including Canada and the UK, are spying on us. I wish I could say the news shocked me, but it didn’t.

The level of secrecy is a cause for concern of course. It is one thing for these agencies not to disclose specific sources and methods, it is another to keep the existence of entire programs secret, especially when these programs are designed to collect data wholesale.

But my biggest concern is that the programs themselves represent a huge security threat for all of us.

First, the NSA apparently relies on its ability to compromise the security of encryption products and technologies or on backdoors built into these products. An unspoken assumption is that only the NSA would be able to exploit these weaknesses. But how do we know that this is the case? How do we know that the same weaknesses and backdoors used by the NSA to decrypt our communications are not discovered and then exploited by foreign intelligence agencies, industrial spies, or criminal organizations?

As an illustrative example, imagine purchasing a very secure lock for your front door. Now imagine that the manufacturer does not tell you that the locks are designed such that there exists a master key that opens them all. Maybe the only officially sanctioned master key is deposited in a safe place, but what are the guarantees that it does not get stolen? Copied? Or that the lock is not reverse engineered?

My other worry is about how the NSA either directly collects, or compels service providers to collect, and store, large amounts of data (e.g., raw Internet traffic). Once again, the unspoken assumption is that only authorized personnel are able to access the data that was collected. But what are the guarantees for that? How do we know that these databases are not compromised and that our private data will not fall into hands not bound by laws and legislative oversight?

These are not groundless concerns. As Edward Snowden’s case demonstrates, the NSA was unable to control unauthorized access even by its own contract employees working in what was supposedly a highly structured, extremely secure work environment. (How on Earth was Snowden able to copy data from a top secret system to a portable device? That violates just about every security rule in the book.)

So even if the NSA and friends play entirely above board and never act in an unlawful manner, these serious concerns remain.

I do not believe we, as citizens, should grant the authority to any state security apparatus to collect data wholesale, or to compromise the cryptographic security of our digital infrastructure. Even if it makes it harder to catch bad guys.

So, our message to the NSA, the CSE, the GCHQ and their friends elsewhere in the free world should be simply this: back off, guys. Or else, risk undermining the very thing you purportedly protect, our basic security.

 Posted by at 1:50 pm
Jul 15 2013
 

The NSA engaged in domestic surveillance on a massive scale. It collected information on both foreign nationals and US citizens. It collected large amounts of data indiscriminately. It did so in secret, with little oversight. It did so with the collaboration of major telecommunication companies.

Sounds familiar? Perhaps. But what I am describing is project SHAMROCK, an NSA program terminated in 1975 that collected telegrams sent to or from the United States.

Arguably, the situation is somewhat better today, as the NSA is now under Congressional oversight and it has (supposedly) internal procedures in place to prevent the unlawful use of data that they collect. That is, if you believe their statements. But then, they made similar reassuring statements back in 1975, too, before details about SHAMROCK came to light.

The bottom line, it seems to me, is that governments have the technological means, the capacity, and the willingness to engage in large-scale surveillance of their own citizens. No guarantees against an Orwellian nightmare can come from futile attempts to limit these capabilities. The genie cannot be put back into the bottle. Only the openness and transparency of our political institutions can guarantee that the capabilities will not be abused.

 Posted by at 12:02 pm
Jun 20 2013
 

I have read about this before and I didn’t want to believe it then. I still don’t believe it, to be honest, but it is apparently happening.

Yahoo will recycle inactive user IDs. That is, if you don’t log on to Yahoo for a period of 12 months, your old user ID will be up for grabs by whoever happens to be interested.

Like your friendly neighborhood identity thief.

Yahoo claims that they are going to extraordinary lengths to prevent identity theft. But that is an insanely stupid thing to say. How can Yahoo prevent, say, a financial institution from sending a password confirmation e-mail to a hapless user’s old Yahoo ID if said user happened to use that ID to establish the account years ago?

That is just one of many scenarios I can think of in which Yahoo’s bone-headed decision could backfire.

And I can’t think of a single sensible reason as to why Yahoo wants to do this in the first place. They will piss off a great many users and likely please no one.

I hope they will change their mind before it’s too late. I hope that if they don’t change their mind, something nasty happens soon and someone sues their pants off.

 Posted by at 11:00 pm
Jun 08 2013
 

Yes, it’s Orwellian, and this time around, it’s no hyperbole.

The US government apparently not only collects information (“metadata”) on all telephone calls, it also has the means to collect e-mails, online chats, voice-over-IP (e.g., Skype) telephone calls, file transfers, photographs and other stored data, and who knows what else… basically, all data handled by some of the largest Internet companies, including Google, Facebook, Skype and others.

Last summer, I decided to revamp my e-mail system. The main goal was to make it compatible with mobile devices; instead of using a conventional mail client that downloads and stores messages, I set up an IMAP server.

But before I did so, I seriously considered off-loading all this stuff to Google’s Gmail or perhaps Microsoft’s outlook.com. After all, why should I bother maintaining my own server, when these fine companies offer all the services I need for free (or for a nominal fee)?

After evaluating all options, I decided against “outsourcing” my mail system. The fact that I did not want to have my mail stored on servers that fall under the jurisdiction of the US government played a significant role in my decision. Not because I have anything to hide; it’s because I value my privacy.
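For anyone curious, a self-hosted IMAP setup of this kind can be remarkably small. The post does not say which server software I actually run, so treat the following as a purely illustrative sketch, assuming Dovecot (one popular IMAP server) and hypothetical certificate paths:

```
# /etc/dovecot/dovecot.conf -- minimal IMAP-over-TLS sketch
protocols = imap

# Require TLS so credentials and mail never cross the wire in the clear
ssl = required
ssl_cert = </etc/ssl/certs/mail.example.com.pem
ssl_key  = </etc/ssl/private/mail.example.com.key

# Store mail in per-user Maildir folders
mail_location = maildir:~/Maildir

# Authenticate against the system's own accounts
passdb {
  driver = pam
}
userdb {
  driver = passwd
}
```

Keeping the configuration this small is part of the point: the fewer services exposed, the smaller the attack surface on a machine that holds all of one’s mail.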

Little did I know back then just how extensively the US government was already keeping services such as Google under surveillance.

From the leaked slides (marked top secret, sensitive information, originator controlled, no foreign nationals; just how much more secret can stuff get?) and the accompanying newspaper articles it is not clear if this is blanket surveillance (as in the case of telephone company metadata) or targeted surveillance. Even so, the very fact that the US government has set up this capability and recruited America’s leading Internet companies (apparently not concerned about their reputation; after all, a presentation, internal as it may be, looks so much nicer if you can splatter the logos of said companies all over your slides) is disconcerting, to say the least.

True, they are doing this supposedly to keep us safe. And I am willing to believe that. But if I preferred security over liberty, I’d have joined Hungary’s communist party in 1986 instead of emigrating and starting a new life in a foreign country. Communist countries were very safe, after all. (And incidentally, they were not nearly this intrusive. Though who knows how intrusive they’d have become if they had the technical means available.)

One thing I especially liked: the assurances that the NSA does not spy on US residents or citizens. Of course… they don’t have to. This will be done for them by their British (or Canadian?) counterparts. No agency is breaking any of the laws of its own country, yet everybody is kept under surveillance. And this is not even new: I remember reading an article in the Globe and Mail some 20 years ago, detailing this “mutually beneficial” practice. I may even have kept a copy, but if so, it is probably buried somewhere in my basement.

Meanwhile, I realize that the good people at the NSA or at Canada’s Communications Security Establishment must really hate folks like me, who insist on running our own secure mail servers. I wonder when I will get on some suspect list simply for refusing to use free services like Gmail that can be easily monitored by our masters and overlords.

 Posted by at 7:17 pm
May 22, 2013
 

Today, the weirdest thing happened on my main desktop computer: the right-click menu of Windows Explorer, as well as the Windows desktop, disappeared. I was also unable to bring up the Properties dialog, even through the menu bar.

The worst part of it is, I could not figure out what happened. A reboot didn’t fix things, nor did an obvious Registry hack (making sure that HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoViewContextMenu is set to 0; for some reason, it was set to 1). Eventually, I resorted to the big guns and used System Restore (thanks to the fact that I do backups daily, I had a restore point from 2 AM this morning) to fix things. Still, it bugs me that something happened that I do not understand.

Comparing this machine with another, mostly identical system, I noted that the other system had no subkeys under the Policies key whatsoever. So I wonder exactly when and how the Explorer and System subkeys were created on this workstation.

And while I was at it, I searched the Registry a little more and found another, possibly relevant entry: HKLM\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer\NoViewContextMenu. Once again, this Registry value is missing from the other machine, so I wonder how, why and when it was created on this workstation.
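For reference, both suspect values can be cleared in one step with a .reg file such as the one below (a sketch; the HKLM half requires elevation, and deleting the values outright with `"NoViewContextMenu"=-` would work equally well):

```
Windows Registry Editor Version 5.00

; Re-enable Explorer context menus: zero the policy value in both hives
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoViewContextMenu"=dword:00000000

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer]
"NoViewContextMenu"=dword:00000000
```

A value of 1 in either hive appears to be enough to suppress the context menu, which would explain why resetting only the HKCU value did not help.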

This is deeply disturbing. I don’t like mysteries, especially not on a machine that I use on a daily basis. Life is short and one does not need to resolve every mystery, but occasionally, such unexpected behavior can be a sign of a security issue.

 Posted by at 7:02 pm
Feb 21, 2013
 

I have been password protecting my smartphone ever since I got one, and more recently, now that Android supports encryption, I took advantage of that feature as well.

The reason is simple: if my phone ever gets stolen, I wouldn’t want my data to fall into the wrong hands. But, it appears, there is now another good reason: it seems that at least in Ontario, if your phone is password protected, police need a search warrant before they can legitimately access its contents.

Privacy prevailed… at least this time.

 Posted by at 3:21 pm
Feb 12, 2013
 

I was reading about full-disk encryption tools when I came across this five-year-old research paper. For me, it was an eye-popper.

Like many, I also assumed that once you power down a computer, the contents of its RAM are scrambled essentially instantaneously. But this is not the case (and it really should not come as a surprise given the way DRAM works). Quite the contrary: a near-perfect image remains in memory for seconds, and if the memory is cooled to extremely low temperatures, the image may be preserved for minutes or hours.


Decay of a bitmap image 5, 30, 60 seconds and 5 minutes after power loss in a 128 MB Infineon memory module manufactured in 1999. From https://citp.princeton.edu/research/memory/.
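The decay shown in the image can be mimicked with a toy model. This is purely illustrative, with made-up numbers: each cell is assumed to revert independently to its ground state with a fixed per-second probability, whereas real decay rates vary with temperature and with the particular module.

```python
import random

def simulate_decay(bits, seconds, flip_prob_per_sec, seed=0):
    """Surviving DRAM cell contents after an unrefreshed interval,
    under a naive independent-decay model (illustrative only)."""
    rng = random.Random(seed)
    # Probability that a given cell has decayed after `seconds` without refresh
    p_decayed = 1.0 - (1.0 - flip_prob_per_sec) ** seconds
    return [0 if rng.random() < p_decayed else b for b in bits]

original = [1] * 10_000          # an all-ones "image"
after_5s = simulate_decay(original, 5, 0.01)
after_5m = simulate_decay(original, 300, 0.01)

print(f"bits surviving after 5 s: {sum(after_5s) / len(original):.0%}")
print(f"bits surviving after 5 m: {sum(after_5m) / len(original):.0%}")
```

The same mechanism shows why cooling helps the attacker: lowering the per-second decay probability stretches the survival curve from seconds to minutes or hours.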

So even as we worry about public servants losing USB keys or entire laptops containing unencrypted information on hundreds of thousands of people, it appears that sometimes even encryption is not enough. If a lost laptop is in a suspended state, an attacker could access the contents of its RAM using only a rudimentary toolkit (that may include “canned air” dusters turned upside-down for cooling).

I wonder what the future will bring. Tamper-proof hardware in every laptop? In-memory encryption? Or perhaps we will decide that we just don’t care, since we already share most details about our personal lives through social networks anyway?

On that note, Canada’s government just decided to scrap a planned cybersurveillance bill that many found unacceptably intrusive. Good riddance, I say.

 Posted by at 8:58 am
Jan 12, 2013
 

Computer pioneer Alan Turing, dead for more than half a century, is still in the news these days. The debate is over whether or not he should be posthumously pardoned for something that should never have been a crime in the first place: his homosexuality. The British government has already apologized for a prosecution that drove Turing to suicide.

I was reminded of the tragic end of Turing’s life as I am reading about the death of another computer pioneer, Aaron Swartz. His may not have been a household name, but his contributions were significant: he co-created the RSS specifications and co-founded Reddit, among other things. And, like Turing, he killed himself, possibly as a result of government prosecution. In the case of Swartz, it was not his sexual orientation but his belief that information, in particular scholarly information, should be freely accessible to all that brought him into conflict with authorities; specifically, his decision to download some four million journal articles from JSTOR.

Ironically, it was only a few days ago that JSTOR opened up their archives to limited public access. And the trend in academic publishing for years has been in the direction of free and open access to all scientific information.

Perhaps one day, the United States government will also find itself in the position of having to apologize for a prosecution that, far from protecting the public’s interests, instead deprived the public of the contributions that Mr. Swartz will now never have a chance to make.

 Posted by at 4:53 pm