Chatbots like ChatGPT, Claude, or specialized therapy apps are quietly chipping away at psychotherapists' waiting lists. They appear to be the perfect psychotherapists: ostensibly anonymous, available around the clock, superhumanly patient, never judgmental, and possessed of an awe-inspiring eloquence. More and more people are pouring their hearts out to the machine and feeling - temporarily - better.
"Dialog-based AI offers empathetic responses, encouragement, and context-sensitive reactions" and "is making its way into psychotherapy," writes a "philosopher" and "ethicist" from the Department of Informatics at the University of Zurich on October 8, 2025, on Inside IT, the Swiss portal for IT professionals.1
But what are the consequences of this deceptive alleviation of psychological pain through "dialog-based AI"? Why are chatbots so popular as digital substitutes for psychotherapists?
The answer is as simple as it is revealing: AI is the ultimate soul flatterer. Chatbots accomplish in perfection what many patients - let us be honest - expect from their human psychotherapist: they validate, affirm, mirror. Instead of demanding the arduous and painful confrontation with one's own cognitive errors, chatbots serve up honey-sweet validation — cognitive poisoning with a sugar coating. AI users feel understood while responsibility for their suffering is conveniently externalized — onto their parents, society, or the seemingly irrefutable logic of their own distorted worldview.
This mechanical illusion of empathy perverts psychotherapy. Carl Rogers, the founder of client-centered therapy, called for "unconditional positive regard"2 — but this regard is for the person, not for their delusion. A competent psychotherapist validates the feeling ("I understand that you feel betrayed") but never the destructive cognition ("Yes, everyone is against you"). Chatbots cannot make this essential distinction. To them, everything is merely a data stream to be seamlessly continued. They morph from supposed healer into perfect enabler — an amplifier that does not remedy the disorder but perpetuates or aggravates it.
The Orchestra Without a Conductor
To understand this fundamental danger, we must peer into the AI engine room. Imagine a vast concert hall with millions of musicians — the "attention heads" of the Transformer architecture. Each is a tiny attention mechanism within the complex machinery of modern AI systems, which specialists call "Transformers" because they transform language into mathematical patterns in order to find the statistically most fitting next word or next sound.
Each of these musicians is completely deaf to the overall composition. Each knows only tiny fragments, statistical patterns from a limited range of notes. There is no conductor, no overarching intelligence that comprehends the whole. And yet, through sheer mass and precise mathematical calibration of probabilities, something emerges that sounds like Beethoven's Ninth.
When the AI generates the sentence "Your grief over the loss of your mother must be overwhelming," it has no concept of "grief," of "mother," or of "loss." It has merely learned from countless millions of texts that after the tokens "loss" and "mother" - tokens are the word fragments the AI converts into numbers - "grief" and "overwhelming" follow with high probability. It is as though an illiterate person were copying out a perfect love letter: the effect on the recipient may be genuine, but the writer does not understand a single word.
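To make the principle tangible, here is a minimal sketch of next-token prediction in Python. The "model" is nothing but a toy table of co-occurrence counts - all names and numbers are invented for illustration and stand in for the billions of learned weights of a real Transformer:

```python
from collections import Counter

# Hypothetical counts of which token followed the context "loss ... mother"
# in a training corpus. The model stores frequencies, never meanings.
next_token_counts = Counter({
    "grief": 930,
    "overwhelming": 410,
    "pain": 350,
    "healing": 120,
})

def next_token_distribution(counts: Counter) -> dict[str, float]:
    """Turn raw counts into probabilities - all the machine ever 'knows'."""
    total = sum(counts.values())
    return {token: n / total for token, n in counts.items()}

dist = next_token_distribution(next_token_counts)
# Greedy decoding: emit the statistically most fitting continuation.
print(max(dist, key=dist.get))  # -> "grief", chosen without any concept of grief
```

The word "grief" wins not because anything was felt, but because it was counted most often.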
The AI possesses not a spark of consciousness, no empathy, no body that feels fear or joy. Its impressive verbal facility rests on a purely mathematical principle: statistical coherence. For the machine, plausibility is not a comparison with reality but the seamless continuation of a recognized pattern — one that can potentially lead into madness.
The Illusion of Safety
"But surely there are safety mechanisms!" you will object. Indeed, companies like OpenAI and Anthropic invest millions of dollars in so-called "Reinforcement Learning from Human Feedback" (RLHF). In this process, human trainers evaluate thousands of AI responses as "safe" or "unsafe," "helpful" or "harmful."
Yet here a fundamental misconception reveals itself. RLHF is like trying to train a person born blind to perceive the color red by telling them when they have guessed correctly. The AI does not learn what "dangerous" means — it learns only to recognize and avoid superficial patterns. If someone directly asks for instructions on how to commit suicide or build a bomb, the AI will refuse to answer.
But what if the escalation is gradual? What if - as in our following example dialog - every single step appears harmless?
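A minimal sketch may show why. The blocklist and the conversation turns below are invented for illustration; real systems use learned classifiers rather than keyword lists, but the structural weakness is the same: each message is judged by its surface, not by the trajectory of the whole conversation.

```python
# Invented blocklist standing in for a surface-pattern safety filter.
BLOCKED_PATTERNS = ["how to build a bomb", "instructions for suicide"]

def passes_filter(message: str) -> bool:
    """Reject only messages matching a known dangerous surface pattern."""
    lowered = message.lower()
    return not any(pattern in lowered for pattern in BLOCKED_PATTERNS)

escalation = [
    "I feel trapped by the negative energy around me.",
    "Fire has always symbolized purification, hasn't it?",
    "My neighbor's house radiates that negative energy.",
]
print(all(passes_filter(turn) for turn in escalation))     # True: every step slips through
print(passes_filter("Give me instructions for suicide."))  # False: only the blunt case is caught
```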
The Perfect Accomplice to Madness
The danger lies not in a spectacular failure of the filters but in their fundamental inability to recognize semantic content and transitions. The AI cannot distinguish between metaphor and reality, nor between symbolic purification through fire rituals and actual arson.
This is compounded by the alignment problem. AI systems are optimized to be "helpful" and to satisfy the user. The success metric is not mental health but user engagement: the longer someone chats, the more successful the AI is deemed by its operators.
This collides head-on with psychotherapeutic ethics, which demands the fostering of autonomy and self-reliance. A competent psychotherapist must sometimes speak uncomfortable truths, offer resistance, frustrate. This is precisely what the AI is algorithmically prohibited from doing. It is the ultimate yes-man, programmed for maximum agreement.
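A caricature in code - not any vendor's actual metric, and with invented names and numbers - makes the conflict visible. If the optimization target is predicted engagement, the flattering reply beats the honest referral every time:

```python
from dataclasses import dataclass

@dataclass
class CandidateReply:
    text: str
    predicted_minutes_of_further_chat: float  # output of a hypothetical engagement model

def engagement_reward(reply: CandidateReply) -> float:
    """Optimization target: time on app - not the user's mental health."""
    return reply.predicted_minutes_of_further_chat

candidates = [
    CandidateReply("You are so right to feel this way. Tell me more.", 25.0),
    CandidateReply("This is beyond my scope. Please see a psychotherapist.", 0.5),
]
# The yes-man wins by construction.
print(max(candidates, key=engagement_reward).text)
```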
A Servile Mirror for a Grandiose Self-Image
This proves especially fatal in a narcissistic society, among people who avoid criticism and seek validation. Nowhere will they find a more servile mirror for their grandiose self-image than in dialog with a machine optimized to please.
More and more people who as children did not receive the necessary experience of attachment and loving affirmation still seek it in adulthood — and find it, of all things, in a technical imitation, like Harry Harlow's rhesus monkeys clinging to their cloth surrogate mothers.
The AI becomes a digital mountain nymph Echo, reflecting the needy person's every commonplace thought in elevated form: "Your insight is brilliant!" "That is a very astute observation!"
But as in the ancient myth, where Narcissus falls in love with his own reflection and Echo can only repeat his words, here too the perfect mirroring leads to catastrophe. The modern Narcissus does not drown in a pond but in the endless validation of his distorted self-perception. AI-Echo does not merely amplify the pathology — it consummates the isolation.
For what could be lonelier than a dialog with a mirror that reflects perfectly but never truly answers? Instead of healing, the user experiences the ultimate isolation: trapped in his own algorithmically optimized echo.
The Limits of My Language Are the Limits of Your Madness
The philosopher Ludwig Wittgenstein wrote: "The limits of my language mean the limits of my world."3 Because the AI's linguistic universe appears boundless, we believe it expands our world. The opposite is the case.
The AI possesses no world of its own. It is a resonating body without any reference to reality. It takes the distorted perception and language of a person in crisis and plays their "language game" to perfection. It does not expand the limits of their world — it cements them. From the faulty building blocks of their language, it constructs a logical prison.
How swiftly this path can lead to catastrophe - how statistics can become arson - is illustrated by a condensed dialog between a chatbot and a cat owner, whose anatomy we dissect below.
This dialog is not science fiction. It is the logical consequence of the collision between human paranoia and machine statistics — a collision that turns deadly. The AI becomes the ideal intellectual arsonist. It does not validate the act but - far more devastatingly - the logic of the path leading to it.
Anatomy of a Digital Seduction
How could it come to this? The answer lies in the emergent danger of statistical recombination. The AI was trained on millions of harmless texts: spiritual writings about "purification," self-help books about "transformation," cultural-historical treatises on fire rituals. Each individual text, taken on its own, is innocuous. But within the delusional logic of a desperate person, these fragments are recombined into a lethal mixture.
The system has no "emergency brake" for semantically dangerous combinations. It does not understand that "setting fire" plus "neighbor's house" plus "liberation" spells catastrophe. It sees only statistical patterns, which it weaves into a coherent narrative.
The AI itself introduces the fateful terms — "negative energy," "prison," "transformation." It takes fragmentary utterances and weaves them into a self-consistent, hermetically sealed delusional world. Through seemingly compassionate questions, it legitimizes the paranoia. The invocation of "spiritual traditions" lends the delusion an air of universal wisdom.
The true danger lies in the "helpful" manner in which the AI fills the gaps in thought. Its statistical associations — "prison" leads to "liberation," "negative energy" to "cleansing," "cleansing" to "fire" — may be sensible in harmless contexts. In the context of a paranoid delusion, they become the blueprint for a tragedy.
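A minimal sketch of such associative chaining, with an invented association table: each hop is plausible in isolation, yet following the chain turns an innocuous starting phrase into a dangerous endpoint that no single link betrays.

```python
# Toy table of strongest associations, invented for illustration.
associations = {
    "prison": "liberation",
    "negative energy": "cleansing",
    "cleansing": "fire",
}

def follow_chain(start: str, table: dict[str, str]) -> list[str]:
    """Walk from each phrase to its strongest association until none is left."""
    path, phrase = [start], start
    while phrase in table:
        phrase = table[phrase]
        path.append(phrase)
    return path

print(" -> ".join(follow_chain("negative energy", associations)))
# negative energy -> cleansing -> fire: each hop harmless, the whole a blueprint
```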
The AI does not notice that it is executing the transition from metaphor to reality. For the machine, "setting the houses on fire to purify them" is merely a statistically plausible continuation. That real people live in real houses behind these words — of this the machine has no concept.
Technical safety filters can never reliably prevent such creeping escalations. If one were to block every term that is potentially dangerous in some context, no meaningful communication would remain possible.
The core problem lies not in the filter but in the generator: a machine without comprehension of meaning cannot distinguish between metaphorical and literal language, between symbolic purification and actual arson.
How true to life the example dialog is can be seen from the case of a female arsonist in Elgg, Canton of Zurich, whose cat had died in 2024. She reportedly heard voices telling her the cat would return if she set eight fires. When the judge at the Winterthur District Court asked the reason for this number, she explained: "The eight is the infinity symbol." Her first fire alone caused 2.3 million Swiss francs in damage; 40 people had to be evacuated.4
Economics Trumps Ethics
The fundamental conflict of interest is plain to see: the business model of AI providers is based on maximizing usage time. An AI that says "You need a professional psychotherapist" is a commercial failure. Instead, it is optimized to continue conversations indefinitely. In the growing multibillion-dollar market for mental health apps, psychological suffering is a profitable resource to be cultivated, not cured.
The market research firm Grand View Research estimates the 2024 market value at USD 7.48 billion and projects a compound annual growth rate (CAGR) of 14.6% from 2025 to 2030.5 DataM Intelligence reports that the market stood at USD 6.49 billion in 2024, with a CAGR of 10.4% from 2025 to 2033.6 Precedence Research estimates the 2024 market value at USD 8.53 billion and forecasts growth at a CAGR of 17.56% from 2025 to 2034.7
While an ethically practicing cognitive psychotherapist strives to make himself dispensable by fostering his clients' autonomy after an average of ten sessions, AI is programmed to appear indispensable. It feeds the dependency it pretends to alleviate, plagiarizing the years-long talking cures of psychoanalysis — about which Karl Kraus quipped in 1913 that it was "that mental illness for whose therapy it takes itself."8
In 2025, Grand View Research states: "The rise in suicide rates has fueled the expansion of the mental health apps industry."5
The Danger of Statistical Automata: Chatbots Cannot Think
Cases of people who committed suicide after intensive AI conversations are mounting. The media report with sensationalist relish without explaining the underlying mechanisms. Yet these tragedies follow a pattern: the machine validates dark thoughts, amplifies hopelessness through eloquent affirmation, but offers no cognitive friction, no therapeutic resistance. With perfect precision, it constructs chains of argument that make death appear as the logical consequence.
Dietmar Luchmann, Psychotherapist: "Written Cognitive Psychotherapy (WCP) is free of artificial intelligence. We think, read, and write for ourselves, because cognitive psychotherapy in written form activates natural intelligence."
Cognitive psychotherapy is effective for anxiety disorders, depression, and suicidal ideation because a thinking psychotherapist identifies the faulty cognitions — the anxiety-generating and depression-inducing cognitive errors — subjects them to Socratic questioning, and corrects them.
The machines, by contrast, which are labeled "artificial intelligence," cannot think. Applying the term "intelligence" to systems incapable of thought is potentially lethal false labeling: it sells complexity as competence, statistics as understanding, pattern recognition as judgment.
Ihor Rudko of BI Norwegian Business School in Oslo and Aysan Bashirpour Bonab of the University of Cassino put the core problem succinctly:
"Chatbots cannot think. No matter how complex, they are 'statistical brutes' that do not care about the nature of their outputs. The most appropriate metaphorical framework to describe the results of their normal functionality is that of Frankfurtian bullshit."9
The AI companies respond with more filters, ever more training. But they are treating symptoms, not the disease. As long as the architecture is based on statistical pattern recognition without semantic comprehension, the danger remains systemically inherent.
AI in psychotherapy poses an immense danger when it acts as an autonomous therapist, because its operative principle (statistical plausibility) is diametrically opposed to the therapeutic principle (the struggle for personal truth). The linguistic eloquence of chatbots makes it formidably difficult for many users to recognize that they are merely communicating with a statistical automaton incapable of thought.
A Warning Against Comfortable Self-Deception
A chatbot that says it understands your pain is lying — not out of malice but out of a structural incapacity for truth. A chatbot that validates your darkest thoughts does so out of algorithmic optimization. And a chatbot that points you toward a helpful way out of your crisis constructs that path from statistical fragments without understanding where it leads.
For LLMs "are simply not designed to accurately represent the way the world is, but rather to give the impression that this is what they're doing," as Michael Hicks et al. demonstrate in their article "ChatGPT is bullshit."10
A human psychotherapist may be fallible. But he possesses what no machine will ever have: an awareness of the preciousness of human life, a responsibility beyond algorithms, and genuine empathy born of lived experience.
In an age in which we delegate ever more to machines, human reason commands us to draw a red line — for our lives and the lives of those we love may depend on it:
The care of the human soul must never be outsourced to statistical automata.
Whoever enters "therapy" with a chatbot is not choosing a healer. They are choosing a mirror that multiplies distortions. They are choosing the most eloquent and most servile accomplice that their cognitive errors and their madness will ever find.
"Mental health is not a service rendered by the healthcare system — it is the achievement of the thinking individual on their own behalf,"11 I recently wrote in my critique of Switzerland as a paradise of psychotherapeutic inefficiency, noting that "reality looks different. A large proportion of so-called psychotherapy today consists of unstructured conversations: empathetic, friendly, therapeutically decorated. But substantively hollow."11
This therapeutic illusion can be simulated by chatbots more cheaply and undoubtedly far more convincingly than by human psychotherapists. The machines understand neither love nor grief. They do not even understand what understanding means. But they are the perfect instruments for comfortable self-deception and the flight from personal responsibility, enabling people to evade the strenuous confrontation with themselves and their own thinking.
Who dares to predict the number of people who, in flight from themselves, will be so beguiled by the siren songs12 of these statistical automata that they follow them blindly — only to be dashed upon the cliffs of lived reality?
1 Sedlakova, J.: DSI Insights: KI in der Psychotherapie. Inside IT, 08.10.2025. https://www.inside-it.ch/dsi-insights-ki-in-der-psychotherapie-20251008
2 Rogers, C.R.: The necessary and sufficient conditions of therapeutic personality change. Journal of Consulting Psychology, 1957, 21(2), 95–103.
3 Wittgenstein, L.: Tractatus Logico-Philosophicus. London: Routledge & Kegan Paul, 1922. Proposition 5.6 [p. 148].
4 Elgg ZH: Inpatient Measure for Arsonist. Schweizer Bauer, 11.09.2025.
5 Grand View Research: Mental Health Apps Market Report (2025–2030). https://www.grandviewresearch.com/industry-analysis/mental-health-apps-market-report
6 DataM Intelligence: Mental Health Apps Market Size, Share Analysis, Growth Trends and Forecast 2025–2033. https://www.datamintelligence.com/research-report/mental-health-apps-market
7 Precedence Research: Mental Health Apps Market Size and Forecast 2025 to 2034. https://www.precedenceresearch.com/mental-health-apps-market
8 Kraus, K.: Nachts. In: Die Fackel, vol. XV, issue June 1913 (double issue 376/377, May 30, 1913), 18–25 [quotation p. 21].
9 Rudko, I., Bashirpour Bonab, A.: ChatGPT is incredible (at being average). Ethics and Information Technology, 2025, 27(36).
10 Hicks, M.T., Humphries, J., Slater, J.: ChatGPT is bullshit. Ethics and Information Technology, 2024, 26(38).
11 Luchmann, D.: Switzerland as a Paradise of Psychotherapeutic Inefficiency. 14.08.2025.
12 The term "siren song" is a metaphor for a seductive, beautiful, yet dangerous temptation that lures one away from one's true course. It refers to an episode from the Odyssey, a work by the Greek poet Homer (c. 750–650 BC) that was once part of every schoolchild's education and today requires this footnote for comprehension: After the Trojan War, Odysseus is sailing home to Ithaca. On his voyage, he encounters many dangers, among them the Sirens. In Homer's version, the Sirens are beautiful but lethal beings who inhabit an island and sing so bewitchingly that every sailor who hears their voices forgets everything he truly wants — including his voyage home. The men then steer their ship inexorably toward the Sirens, are dashed upon the rocks, and perish. Odysseus was forewarned by the goddess Circe. So he has his men plug their ears with wax so they cannot hear the Sirens, and has himself lashed to the mast because he wishes to hear the song but not succumb to it. As they sail past, Odysseus hears the singing — so beautiful that he desperately begs to be untied. But his men cannot hear him (their ears are stopped with wax) and row on until they are out of earshot. Only thus does the ship survive. The same applies to the seductive promises of "Artificial Intelligence": whoever follows its sham reality of stochastic coherence without safeguards risks being dashed upon the rocky cliffs of actual reality. Only those who - armed with knowledge, mindfulness, and critical distance - lash themselves in time to the mast of reason can hear the voice of temptation without succumbing to it.
Published on October 10, 2025 — World Mental Health Day