Aug 06 2012
 

It looks like Microsoft is absolutely, positively determined to make it hard for long-time users of Windows to continue using their computers productively.

For instance, they actually went the extra mile to disable the hacks that allowed Windows 8 to boot directly to the classic desktop and that reinstated the Start menu.

What on Earth is going on in Redmond? What are you guys smoking?

 Posted at 6:05 pm
Aug 02 2012
 

I just finished reading a very interesting Vanity Fair article about the decline of Microsoft. It paints a devastating picture, leaving one to wonder why Microsoft’s shareholders continue to tolerate Ballmer’s (mis)management.

I have been wondering the same thing for many years, for pretty much the same reasons mentioned in this article: the Vista fiasco, the squandering of the IE lead, Windows CE and Windows Phone, the Zune misstep, and last but not least, the disaster that is yet to happen, called Windows 8.

Think about it: how often did you type “google.com” into a browser lately? How about “facebook.com”? Or “twitter.com”? Or “amazon.com”?

And how many times did you type “microsoft.com”?

And I actually happen to like Microsoft.

The Comments section is also interesting, but mainly because of the bias and misinformation. My all-time favorite: the story about how Word became the dominant office product because of “secret APIs”. Perhaps there were secret APIs, perhaps there weren’t. But none of that had anything to do with the then market leader, WordPerfect, jumping on the Windows bandwagon several years late, and with a crappy product that crashed even more often than Microsoft Word for Windows 1.0. By that time, Microsoft was up to version 4.x, and frequent crashes were no longer considered acceptable.

 Posted at 12:08 am
May 21 2012
 

Remember Microsoft Bob? The revolutionary new interface for Windows that was supposed to make everyday computing easier for the unwashed masses?

It was one of Microsoft’s most spectacular product failures, surpassing even the dreadful Clippy (which, after all, was just an unwelcome feature in an otherwise successful product, Office 97).

But now, it seems that Microsoft is determined to do it again. At a time when the operating system is becoming less and less relevant to most users (who cares what operating system runs underneath your browser when you access Gmail or Office 365?), they seem dead set on alienating the one class of users for whom the operating system still matters: content creators, a class that includes artists, designers, software developers and the like.

To their credit, Microsoft is doing this in the open, documenting in a very lengthy blog post the basic ideas behind their most controversial design choices.

But the comments are revealing. Here is a selection (a random one, and certainly not unbiased) of the comments that I could most closely relate to. After all, just like one of the commenters, I, too, “tried Windows 8 for 2 weeks and then uninstalled”… or rather, not so much uninstalled as never looked again at the VM in which it was installed, because I just couldn’t care less.

So here are those comments:

Can someone help me out? Should I install Ubuntu, get a Mac, or keep using Windows 7?

Your product is named after a feature of your product. And now the new version of your product tries to abandon said feature in its newly introduced usage mode.

Google just added windows to Chrome OS. You are removing windows from Windows. This won’t end well.

Except for immersive games, I DON’T WANT to run a single full-screen app. Not ever. If I want something to fill the screen, I will maximize the windows.

There is a significant disjunction in the UI. when you hit the start button and are whisked into metro land just to search for something, only to come back to the desktop

Thank you Microsoft for this complete failure. I for one welcome our new KDE overlords!

None of this TABLET CRAP belongs on desktops!

The cold, hard truth of the matter is that Microsoft have created an operating system that I feel is OPENLY ANTAGONISTIC to power users, business users, creative professionals and anyone seeking to use their PC as a productivity tool.

In WW2 the English started a program to analyze aircraft to figure out where they needed to add armor. They looked at all of the planes coming back and did frequency analysis of where the bullet holes were. Some areas were so riddled that easily 60% of bullet holes hit these key areas. The first reaction is to armor these heavily hit areas of the plane. This is wrong. These planes survived. The armor should go everywhere else.

You are killing Aero? You have to be kidding!

Windows 8 prognosis for sales: not that good. That is the latest finding from research entity Gartner.

I have to give you credit Microsoft, you really do know how to alienate people.

The flat UI in no way looks premium. It is harsh, spartan, and an eyesore.

The Metro environment severely compromises functionality by:

  • not allowing real multitasking (only applications in the foreground are allowed to use CPU);
  • not allowing more than two applications to run in the foreground (all other applications are suspended).
  • not allowing the two apps in foreground to use half the screen each (most of the time one of the two apps will be unusable because it has too little space to display information).
  • not allowing the use of more than one display for Metro apps.
  • not allowing more than one running instance for an Metro app.

And the most scary thing is that we already have an example of crippling the Desktop: Windows on ARM/Windows RT. By not allowing third party Desktop applications, the Desktop is only there to allow you to use MS Office.

Do you have a logical explanation why you are screaming permanently that these 9.1 percent iPad/iPhone/Android users are more important than 90% desktop users?

Pls provide a METRO ON/OFF option in Windows 8 (only desktop). With Mouse&Keyboard, METRO is bizarre to use.

How does Windows 8 “metro” and other this teletubby nonsense work on multimonitor setup?

It’s a degradation of Windows. New UI is terrible

The metro interface is horrible and whoever designed it should go back to work for whatever cell phone company they crawled out of.  Lets stop dumbing down the computer for the appliance user.

From my perspective, Aero glass is still fresh and new.  The loss is ultimately cosmetic and therefore minor, but it adds to one of the bigger issues with Windows 8’s UI

Using Windows 8 with a mouse is about as much fun as running Windows Phone 7 in an emulator all day.

And finally, the last comment that sums up my own feelings neatly:

If W8 really works on a desktop used by adults I’ll consider it

But not until then.

 Posted at 3:12 pm
Oct 13 2011
 

While the world mourns Steve Jobs, another computing pioneer, Dennis Ritchie, has died. Our world wouldn’t be the same without UNIX or the C programming language. My own life would have been very different without him. Jobs will be long forgotten when Ritchie’s legacy still lives on, decades from now.

#include <stdio.h>

main()
{
    printf("goodbye, dennis\n");
}

 Posted at 12:27 pm
Sep 06 2011
 

It has been a while since I did anything in machine language. Until this past weekend, that is, when I spent a fair bit of time staring at disassembled code with a debugger.

Last week, I upgraded my Sony Ericsson smartphone to the latest version of its operating system. The upgrade process failed on my main computer when, after updated USB drivers for the phone were downloaded, they failed to install. The problem was not specific to the phone: all driver installations failed, with a not very informative code (0xC0000142, which just means that the application failed to initialize).

Using the very helpful ProcMon utility from Sysinternals (now owned by Microsoft), I managed to identify that it was a process named drvinst.exe that failed. This process is invoked automatically by the system every time a USB device is inserted, and also during device driver installations. So why did it fail?

I downloaded the latest Windows debugger (windbg.exe) from Microsoft; this debugger allows me to do things like debug child processes spawned by a parent process. (I later learned that drvinst.exe actually has a feature whereby it waits for a debugger after startup, to help with driver installation debugging; but chances are that I would not have been able to make much use of this feature, as the failure occurred before drvinst.exe actually started to execute its own code.) I attached the debugger to the DCOM service process (which is the one that spawns copies of drvinst.exe). I was able to determine that it was during the initial process setup stage that this process failed, when it was attempting to attach to the gdi32.dll system library.

I still have no idea why this happens. But with the help of the debugger, I was able to tinker with this process, changing a processor register’s value at just the right spot, allowing it to continue. This inconvenient but workable process allowed me to install drivers for my phone and also updated drivers for my wireless mouse from Microsoft Update.
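
For the curious, the windbg session amounted to something like the following sketch, reconstructed from memory; the exact breakpoint, register and value are illustrative, as the real ones depend on where process initialization fails:

.childdbg 1        ; * also debug the child processes this service spawns
sxe ld:gdi32.dll   ; * break in the child when it loads gdi32.dll
g                  ; * resume and wait for drvinst.exe to be spawned
r                  ; * at the break, inspect the registers
r eax=0            ; * patch the offending value (register and value illustrative)
g                  ; * let process initialization continue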

Perhaps the most incomprehensible bit is that the same thing works fine on an essentially identical computer. The actual failure occurs inside a kernel subroutine (undocumented system call 123Ah, called from GdiDllInitialize in gdi32.dll) that I cannot examine without a kernel debugger (and since I am trying not to mess up my machine too much, I opted not to do kernel debugging). That subroutine does not appear to be doing anything particularly magical. I checked, and all relevant files and Registry settings are identical on the two machines. So it remains a mystery for now… nonetheless, it was educational. I learned a lot about driver installation in Windows 7, about process startup, and, incidentally, about the ReactOS project, whose open source equivalents of the relevant system components helped me a great deal in understanding what was going on.

 Posted at 8:13 pm
Jun 07 2011
 

One of the things I like least about New Scientist (which, in many respects, is probably the best popular science magazine out there) is the “Enigma” brainteaser. I am sure it appeals to the “oh, I am ever so smart!” Mensa member crowd, but…

Well, the thing is, I never liked brainteasers. Are you really smarter than someone else because you happen to remember a random historical factoid? Does it really make sense to ask someone to complete a series like, say, 1, 4, 9, 16, ? when the answer can be anything, as there is no compelling reason other than psychology (!) for it to be a homogeneous quadratic series?
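
(Indeed, to see why the answer can be anything, consider the polynomial p(n) = n² + c(n − 1)(n − 2)(n − 3)(n − 4): it yields 1, 4, 9, 16 for n = 1, 2, 3, 4 regardless of the value of c, yet its fifth term, 25 + 24c, can be made into any number you like by a suitable choice of c.)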

But then… sometimes brainteasers reveal more about the person solving them than about the solution itself. I remember when I was in the second or third grade, our teacher gave us a simple exercise: add all the numbers from 1 to 100. (Yes, this is the same exercise given to a young Gauss.) Like Gauss, one of my classmates discovered (or perhaps knew already) that you can add 1 + 100 = 101, 2 + 99 = 101, 3 + 98 = 101, and so on, all the way up to 50 + 51 = 101; and 50 times 101 is 5050, which is the correct answer.

Trouble is, my classmate didn’t finish first. I did. I just added the darn numbers.

Between quick and smart, who wins? What if you’re so quick, you don’t need to be smart? Is it still smart to waste brainpower to come up with a “clever” solution?

Last week’s New Scientist Enigma puzzle caught my attention because it reminded me of this childhood memory. It took me roughly a minute to solve it. Perhaps there is a cleverer way to do it, but why waste all that brainpower when I can do this instead:

/* New Scientist Enigma number 1647 */

#include <stdio.h>

int main(int argc, char *argv[])
{
    int d1, d2, d3, d4, d5, d6, n;

    /* enumerate all six-digit numbers with distinct nonzero digits */
    for (d1 = 1; d1 <= 9; d1++)
        for (d2 = 1; d2 <= 9; d2++) if (d2 != d1)
            for (d3 = 1; d3 <= 9; d3++) if (d3 != d1 && d3 != d2)
                for (d4 = 1; d4 <= 9; d4++)
                    if (d4 != d1 && d4 != d2 && d4 != d3)
                        for (d5 = 1; d5 <= 9; d5++)
                            if (d5 != d1 && d5 != d2 && d5 != d3 && d5 != d4)
                                for (d6 = 1; d6 <= 9; d6++)
                                    if (d6 != d1 && d6 != d2 && d6 != d3 &&
                                        d6 != d4 && d6 != d5)
    {
        n = 100000 * d1 + 10000 * d2 + 1000 * d3 + 100 * d4 + 10 * d5 + d6;

        /* the puzzle's remainder conditions */
        if (n % 19 != 17) continue;
        if (n % 17 != 13) continue;
        if (n % 13 != 11) continue;
        if (n % 11 != 7) continue;
        if (n % d4 != d3) continue;
        printf("ENIGMA = %d\n", n);
    }

    return 0;
}

Yes, I am quick with C. Does that make me smart?

 Posted at 2:21 pm
Aug 06 2010
 

Well, almost 23,000 lines of code. Mostly JavaScript and PHP, with some HTML and SQL as well. This is a project I’ve been working on all summer. Lots more to do, but at least one deliverable is complete, and I can finally spend a little bit of time doing something else. There are plenty of other things I’ve been putting off while I was on this programming binge.

 Posted at 3:04 am
Jul 26 2010
 

It’s not the first time I have said this, but you just gotta love this Internet thing. The big news this morning, of course, is the leak of some 90,000 classified US military documents from Afghanistan. Guardians of state and military secrets are horrified: troops’ lives will be at risk, they say. What they should recognize is that the fact that we live in an open society, far from being a weakness, is really our greatest strength. Open discussion of the pros and cons, the successes and failures, the risks and possible outcomes of a war is part of living in a liberal democracy.

As to the release itself, it’s funny how times are changing. When I learned the database language SQL ages ago, it was because I make my living as a computer professional. I did not necessarily expect to use my SQL skills in scientific endeavors, but that, too, came to pass when I began using the wonderfully crafted SQL-based query interface of the Sloan Digital Sky Survey. What I certainly never expected is that one day, a journalistic leak would arrive in a variety of formats, perhaps the most useful of which is an SQL dump. I wonder: do they teach the building of SELECT queries in journalism school these days?

 Posted at 1:01 pm
Jul 24 2010
 

I have been reading the celebrated biography of Albert Einstein by Walter Isaacson, and in it, the chapter about Einstein’s beliefs and faith; in particular, the question of free will.

In Einstein’s deterministic universe, according to Isaacson, there is no room for free will. In contrast, physicists who accepted quantum mechanics as a fundamental description of nature could point at quantum uncertainty as proof that non-deterministic systems exist and thus free will is possible.

I boldly disagree with both views.

First, I look out my window at a nearby intersection where there is a set of traffic lights. This set is a deterministic machine. To determine its state, the machine responds to inputs such as the reading of an internal clock, the presence of a car in a left-turning lane, or the pressing of a button by a pedestrian who wishes to cross the street. Now suppose I incorporate into the system a truly random element, such as a relay that closes depending on whether or not an atomic decay process takes place. So now the light set is no longer deterministic: sometimes it provides a green light allowing a vehicle to turn left, sometimes not; sometimes it responds to a pedestrian pressing the crossing button, sometimes not. So… does this mean that my set of traffic lights suddenly acquired free will? Of course not. A pair of dice does not have free will either.

On the other hand, suppose I build a machine with true artificial intelligence. It has not happened yet but I have no doubt that it is going to happen. Such a machine would acquire information about its environment (i.e., “learn”) while it executes its core program (its “instincts”) to perform its intended function. Often, its decisions would be quite unpredictable, but not because of any quantum randomness. They are unpredictable because even if you knew the machine’s initial state in full detail, you’d need another machine even more complex than this one to model it and accurately predict its behavior. Furthermore, the machine’s decisions will be influenced by many things, possibly involving an attempt to comply with accepted norms of behavior (i.e., “ethics”) if it helps the machine accomplish the goals of its core programming. Does this machine have free will? I’d argue that it does, at least insofar as the term has any meaning.

And that, of course, is the problem. We all think we know what “free will” means, but is that true? Can we actually define a “decision making system with free will”? Perhaps not. Think about an operational definition: given an internal state I and external inputs E, a free will machine will make decision D. Of course the moment you have this operational definition, the machine ceases to have what we usually think of as free will, its behavior being entirely deterministic. And no, a random number generator does not help in this case either. It may change the operational definition to something like, given internal state I and external inputs E, the machine will make decision Di with probability Pi, the sum of all Pi-s being 1. But it cannot be this randomization of decisions that bestows a machine with free will; otherwise, our traffic lights here at the corner could have free will, too.

So perhaps the question about free will fails for the simple reason that free will is an ill-defined and possibly self-contradictory concept. Perhaps it’s just another grammatically correct phrase that has no more actual meaning than, say, “true falsehood” or “a number that is odd and even” or “the fourth side of a triangle”.

 Posted at 1:36 am
Jul 04 2010
 

I hate dogma. I hate it even more when a valid scientific observation becomes dogma.

One case concerns the infamous goto statement in programming languages. It is true that a programming language does not need a goto statement in order to be universal. Unfortunately, this led some, most notably the late Edsger Dijkstra, to conclude that goto is actually harmful. While it is true that goto can be misused, and that misusing the constructs of a programming language can lead to bad code, I don’t think goto is unique in this regard (it is certainly no more harmful than pointers, global variables, or the side effects of passing variables by reference, just to name a few examples). Nonetheless, with Dijkstra’s letter on record, the making of a dogma was well under way.

And here I am, some 40 years later, trying to write a simple piece of code the logic of which flows like this:

LET X = A
LABEL:
Do something using X
IF some condition is not satisfied THEN LET X = B and GOTO LABEL

The condition, in particular, is always satisfied when X = B.

Yes, I know how to rewrite the above code using a loop construct, to satisfy structured programming purists. But why should I have to, when the most natural way to express this particular algorithm is through the use of a conditional jump, not a loop? Oh wait… it’s because someone who actually believes in dogma prevailed when JavaScript was designed, and therefore, goto never made it into the language.
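
For the record, here is the goto-free version I had to settle for; just a sketch, with doSomething and conditionSatisfied standing in for the actual logic:

var x = A;
for (;;) {
    doSomething(x);             // "Do something using X"
    if (conditionSatisfied(x))  // always satisfied when x == B,
        break;                  // so this "loop" runs at most twice
    x = B;
}

It works, of course; it merely disguises what is really a single conditional retry as a loop.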

 Posted at 7:01 pm
Dec 22 2009
 

I have done many things in my misguided past as a programmer, but strangely, I never did much work with XML. Which is why a recent annoyance turned into an interesting learning opportunity.

I usually watch TV on my computer. (This is why I see more TV than many people I know… not because I am a TV junkie who really “watches” it; I am actually working, but I have, e.g., CNN running in the background in a small window, and I do occasionally pay attention when I see something unusual. Or change to a channel with The Simpsons.) For years, I’ve been using various ATI All-In-Wonder cards. (No, I don’t recommend them anymore; whereas in the past they used to attach a tuner to some of their really high-end cards, this is no longer the case, and the base graphics hardware of the current crop of AIW cards is quite lame. Their current software sucks, too.) The old ATI multimedia program I am using, while far from perfect, is fairly robust and reliable, and among other things, it comes with a built-in program guide feature; a feature that downloads programming information from an online server.

Except that, as of last week, it was no longer able to do so; the server refused the request. Several customers complained, but to no avail; they were not even able to get through to the right people.

So what is a poor programmer to do? I have known about Schedules Direct, the fee-based but non-profit, low-cost replacement of what used to be a free service from Zap2It, providing the ability to download TV guide data for personal use. The information from Schedules Direct comes in the form of XML. The ATI multimedia program stores its data in a Paradox database. In theory, the rest is just a straightforward exercise of downloading the data and loading it into the Paradox tables, and presto: one should have updated programming information.

Indeed, things would be this simple if there weren’t several hurdles along the way.

First, the Paradox database is password-protected. Now Paradox passwords are a joke, especially since well-known backdoor passwords exist. Yet it turns out that those backdoor passwords work only with the original Borland/Corel/whatever drivers… third party drivers, e.g., the Paradox drivers in Microsoft Access 2007, do not recognize the backdoor passwords. Fortunately, cracking the password is not hard; I used Thegrideon Software’s Paradox Password program for this purpose, and (after payment of the registration fee, of course) it did the trick.

Second, the Microsoft drivers are finicky, and may not allow write access to the Paradox tables. This was most annoying, since I didn’t know the cause. Eventually, I loaded the tables on another machine that never saw the original Borland Database Engine, but did have Access 2007 installed (hence my need for a “real” password, not a backdoor one), and with this machine, I was able to write into the files… not sure if it was due to the absence of the BDE, the fact that I was using Office 2007 as opposed to Office 2003, or some other reason.

So far so good… Access can now write into the Paradox tables, and Access can read XML; after all, Microsoft is all about XML these days, right? Not so fast… That’s when I ran into my third problem, namely the fact that Access cannot read XML attributes, whereas a lot of the programming information (including such minor details as the channel number or the start time) is provided in attribute form by Schedules Direct (or, to be more precise, by the XMLTV utility that I use to access Schedules Direct). The solution: use XSLT to transform the source XML into a form that Access can digest properly.
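
The stylesheet itself is essentially a one-trick pony, something like this sketch: it copies every node, but turns each attribute into a child element of the same name.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy every node, processing its attributes along with its children -->
  <xsl:template match="node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- turn each attribute into a child element of the same name -->
  <xsl:template match="@*">
    <xsl:element name="{name()}">
      <xsl:value-of select="."/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>

With this, an XMLTV entry like <programme start="…" channel="…"> comes out as <programme><start>…</start><channel>…</channel>…</programme>, which Access has no trouble importing.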

With this and a few lines of SQL, I reached the finish line, more-or-less: I was able to update the Paradox tables, and the result appears digestible to the ATI media center application… though not to the accompanying Gemstar program grid application, which still crashes, but that’s okay, I never really used it anyway.

And I managed to accomplish all this just in time to find out that suddenly, the ATI/Gemstar update server is working again… once again, I can get programming information from them. More-or-less… a number of channels have been missing from the lineup for a long time now, so I may prefer to use my solution from now on anyway. Perhaps when I have a little time, I’ll find out what causes the crash (I have some ideas) and the program grid application will work, too.

Needless to say, I know a lot more about XML and XSLT than I did 24 hours ago.

 Posted at 7:41 pm
Aug 02 2009
 

Someone wrote to me about inkblots. Apparently, the topic has become hot in response to the decision by Wikipedia editors to make the Rorschach blots available online. Attempts by some to suppress this information using, among other things, questionable copyright claims are of a distinctively Scientologist flavor (made all the more curious by Scientology’s rejection of conventional psychoanalysis). They do have a point, though… the validity of the test could be undermined if test subjects were familiar with the inkblots and evaluation methods. On the other hand, one cannot help but wonder why such an outdated test is still being used in daily practice. It certainly gives credence to those who consider psychoanalysis a pseudoscience.

I am also wondering… suppose I build a sophisticated software system with optical pattern recognition, associative memory, and a learning algorithm. Suppose the software is buggy, and I wish to test it. Would I be testing it by running the recognition program on meaningless symmetric patterns? The behavior of the system would be random, but perhaps not completely so; it may be a case of ordered chaos with a well-defined attractor. Would running the recognition program on a few select images reveal anything about that attractor? Would it reveal enough information to determine reliably if the attractor differs from whatever would be considered “normal”?

More importantly, do practitioners of the Rorschach test know about chaos dynamics and do they have the correct (mathematical, computer) tools to analyze their findings?

I am also wondering how such a test could be conceivably normalized to account for differences in life experience (or, to use my software system example, for differences in the training of the learning algorithm) but I better shut up now before my thoughts turn into opinionated rantings about a subject that I know precious little about.

 Posted at 2:36 pm
Jun 25 2009
 

A few days ago, I upgraded to Skype 4.

I use Skype for overseas telephone calls a lot. I also call a few people occasionally using Skype-to-Skype. And, every once in a while, I use it to chat with people.

I had heard bad things about Skype 4, so I was in no hurry to upgrade. But when, the other day, the software notified me that a major upgrade was available, I decided to give it a try.

Wish I hadn’t.

The installation completed successfully, and Skype worked fine, but… well, it’s best if I just quote a few sentences from Skype’s own Web site where the new version was announced:

  • Skype 4.0 should certainly participate in the worst software redesign conquest.
  • Worst interface ever created for Skype and i’ve been using it ever since the 1st beta. Please dump this garbage
  • Skype 4.0 has an extreme ugly layout.
  • The UI of version 4 is a terrible disappointment. No matter how I tweak, it still consumes more screen real-estate than version 3 did.
  • Who are you people and what were you thinking when you released this kludge.
  • ABSOLUTELY TERRIBLE INTERFACE
  • Skype 4.0.x is PAINFUL and FRUSTRATING TO USE.
  • I think this is the ‘vista’ of skype releases.
  • What where you thinking. Did you guys outsource? This version has all the hallmarks of a design by committee.
  • I truly do not like the new 4.0 version! I’ve tried it for a week, hoping to get used to it, and i’m just left cursing. I am reverting because…

I share these sentiments. This morning, I gave up and downgraded to the 3.8 version. Which is working fine, as always.

 Posted at 1:39 pm
May 31 2009
 

I’ve been learning a lot about Web development these days: Dojo and Ajax, in particular. It’s incredible what you can do in JavaScript nowadays: sophisticated desktop-like applications running inside a Web browser. I am spending a lot of time building a complex prototype application that has many features associated with desktop programs, including graphics, pop-up dialogs, menus, and more.

I’ve also been learning a lot about the intricacies of Brans-Dicke gravity and about the parameterized post-Newtonian (PPN) formalism. Brans-Dicke theory is perhaps the simplest modified gravity theory there is, and I have to explain to someone why the gravity theory that I spend time working on doesn’t quite behave like Brans-Dicke theory. In the process, I am finding out things about Brans-Dicke theory that I never knew.

And, I’ve also been doing a fair bit of SCPI programming this month. SCPI is a standardized way for computers to talk to measurement instrumentation, and an old program I wrote used to use a non-standard way… not anymore.
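
For those who have never seen it, SCPI is charmingly readable; a typical exchange with, say, a digital multimeter looks something like this (illustrative only, as the exact subsystem commands vary from instrument to instrument):

*IDN?            (returns the manufacturer, model, serial number and firmware version)
MEAS:VOLT:DC?    (returns a DC voltage reading, e.g., +1.23456789E+00)
SYST:ERR?        (returns the oldest entry from the instrument's error queue)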

Meanwhile, in all the spare time that I have left, I’ve been learning Brook+, a supercomputer programming language based on C… that is because my new test machine is a supercomputer of sorts, with a graphics card that doubles as a numeric vector processor, capable in theory of up to a trillion single precision floating point operations per second… and of nearly as many in practice, in the test programs that I threw at it.

I’m also learning a little more about the infamous cosmological constant problem (why is the cosmological constant at least 50 orders of magnitude smaller than predicted, yet not exactly zero?) and about quantum gravity.

As I said in the subject: busy days. Much more fun, though, than following the news. Still, I did catch in the news that Susan Boyle lost in Britain’s Got Talent… only because an amazing dance group won.

 Posted at 3:07 am
Jan 27 2009
 

Long before blogs, long before the Web even, there was an Internet, and people communicated via public forums (fora?), Usenet foremost among them.

Yet I stopped using Usenet about a decade ago. Here is a good example of why. Excerpts from an exchange:

You will have more success on Usenet if you learn and follow the normal Usenet posting conventions.

About posting conventions: where did I stray from them? I do indeed want to respect the list rules.

Have a look at <http://cfaj.freeshell.org/google/>

Got it: thanks.

You failed to appropriately quote the message that you are responding to. See the FAQ and the more detailed explanation of posting style that it links to. Then, if the explanation provided is not sufficiently clear, ask for clarification.

I am afraid that you have not yet ‘got it’. You have gone from not quoting the message you are responding to, to top-posting and failing to appropriately trim the material that you are quoting.

If you had been told what you did wrong, that would, hopefully, eliminate one class of error from your future posts. You were told where to read about conventions, which *should* eliminate *all* of the well-known errors.

You are forgiven if you thought that the thread from which I excerpted these snotty remarks was about Usenet’s “netiquette”. But it wasn’t. It was all in response to a very polite and sensible question about ways to implement a destructor in JavaScript.

I guess my views are rather clear on the question as to which people harm Usenet more: those who stray from flawless “netiquette”, or those who feel obliged to lecture them. I have yet to understand why it is proper “netiquette” to flood a topic with such lectures instead of limiting responses to the topic at hand, and responding only when one actually knows the answer. I guess that would be too helpful, and helping other people without scolding them is not proper “netiquette”?

 Posted at 1:31 pm