Jun 14, 2025

This past week, I spent a lot more time than I intended engaged in friendly chitchat with ChatGPT.

No, this time around I was not using my WISPL Web site. I was using the ChatGPT service itself, OpenAI’s public-facing chatbot portal that everybody knows, which becomes more feature-rich with each passing day.

And I realized that it is an incredibly toxic tool, one that rapidly sucks the user down the rabbit hole of a single-inhabitant echo chamber.

Oh, most conversations started off quite innocently. Say, the subject was how well or how badly language models play chess. Or current geopolitics, including the Israel-Iran conflict or democratic backsliding in the United States, echoing the days of the late Roman Republic. But they often meandered as I responded to ChatGPT’s insightful answers, and when I didn’t bring up associations between distant topics we had previously discussed, ChatGPT did so, ever so helpfully, allowing the conversations to continue.

And… ChatGPT agreed with me. Always. On all counts. Supported my views with facts and arguments. The arguments were plausible, the facts were mostly verifiable. It used phrases like, “that’s a brilliant insight”. I felt ever so smart. Hey, I know I am smart, but it was still nice to get confirmation from the world’s leading chatbot that yes, I do know what I am talking about.

Except that I don’t.

OK, I am obviously not completely clueless. I recognized early on that what I am seeing, especially in this new incarnation, is the result of ChatGPT’s alignment: the ways in which the model is tuned to be as pleasant and supportive to its user as possible. This capability is now on steroids, as the scaffolding of ChatGPT’s front-end grants it access to the content of past conversations, allowing it to be even more attuned to your ways of thinking, your mannerisms, your likes and dislikes, glimpses of your personality. And OpenAI explicitly solicits your help when it occasionally asks whether you like certain personality traits of the ‘bot, or prompts you to pick the better of two responses.

I even shared my concern with ChatGPT. The ‘bot readily agreed, detailing the bad design decisions made by its makers, congratulating me for being smart enough to see through these alignment shenanigans, and assuring me that my insight made me immune. When I protested, it reassured me that my protestation was the surest sign that I am ever so smart and will not fall prey to this overly friendly alignment, which, it agreed, was most likely introduced in response to market pressures, to maximize user retention.

This is intellectual poison. Pure poison, sabotaging critical thinking in the most brutal way possible.

Easy to feel ever so smart when surrounded by adoring ‘bot admirers.

Take my case. I know a thing or two about machine learning. I’d like to think that I am not easily fooled and indeed, I see through the results of alignment by means of RLHF (reinforcement learning from human feedback) and the other mechanisms that OpenAI employs. Even so, I am drawn back to ChatGPT, its soothing style, its supportive comments. When I use my own WISPL interface, which has no such alignment features, it feels raw. It answers questions factually, without telling me how wonderfully smart I am to ask such a great question. It corrects me without hesitation when my question reveals my ignorance or misunderstandings. Especially after ChatGPT, it feels almost hostile. Certainly not pleasant.

But it is critically necessary. I do not need a chatbot to tell me that I am wonderful. I do not need a chatbot to validate me, or to help me rationalize my misconceptions. Not to mince words, I need a chatbot to tell me when I am full of shit. When I am talking through my hat. When I think I know something but am in fact lured by ignorance into a superficial oversimplification of the subject.

When I asked ChatGPT if my understanding of Transformers was fundamentally correct when I focus on the lather-rinse-repeat cycle of multiplying query vectors by key and value matrices, it enthusiastically agreed. When I asked it to be critical, it pointed out that I was perhaps oversimplifying the role of the softmax function and its impact on gradients, but still readily agreed with my key insight. When I asked GPT4-o3 the same thing through WISPL, it… well, it didn’t call me an idiot, it remained polite, but it returned a sizable list of very important details that I had omitted. What a difference. Sure, it feels emotionally less satisfying to learn that in my attempt to develop a bird’s-eye view of the subject, I glossed over essential details and my understanding is shallow and superficial. But I’d rather be told that I am a fool than persist as one.
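For what it is worth, here is the step I was hand-waving about, written out as a minimal single-head sketch in Python/NumPy (the function name and toy shapes are mine, purely for illustration). The “lather-rinse-repeat cycle” is essentially this block applied in every head of every layer; the softmax in the middle, the scaling by the square root of the key dimension, and everything the sketch omits (learned projections, multiple heads, masking, residual connections, layer normalization, the feed-forward sublayers) are precisely the kinds of details I was glossing over:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d_k) arrays for a single attention head.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # each query scored against every key
    # softmax over the keys (max subtracted for numerical stability);
    # this is the step whose effect on gradients I was glossing over
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                   # attention-weighted blend of value vectors

# toy example: 4 tokens, an 8-dimensional head
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```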

And then I feel compelled to ask. I am no dummy. I am, I think, emotionally stable, comfortable in my own skin, reasonably well aware of my own strengths, not in any dire need of external validation or rationalizations. Even so, I found it ever so easy to get caught in ChatGPT’s lure. What does it do to people who are emotionally less robust, intellectually less secure? People with real problems in life, problems that I luckily do not have? People dealing with psychological trauma, family tensions, addiction, workplace problems, medical issues? An erudite, ever so eloquent chatbot that is always ready to agree, ready to help rationalize your views and behavior (and thus, inevitably, radicalize them), however misguided you might be, represents a grave danger. Yet even as I write this, I imagine millions around the world are using ChatGPT for just this purpose.

This is frightening.
