Practical Steps to Avoid ChatGPT Psychosis
In retrospect, the Turing Test was the most dangerous optimization target we could have chosen...

It has now been almost two years since GPT-4 exploded onto the scene. In the last 6 months, reports of “ChatGPT Psychosis” have been on the rise. I have encountered a handful of examples in my own social circles, and in this piece I aim to talk a bit about what may cause this, and what tools we have at our disposal to spot it, prevent it, and recover from it.
Use these links to skip to the tips.
Treatment Tip 1: Spot the Warning Signs
Treatment Tip 2: Ground
Preventative Tip 1: Understand what AI is and is not
Preventative Tip 2: Remember the role-play is just role-play - don’t name your agents
Preventative Tip 3: Turn off Personalization and Memory Features
Preventative Tip 4: Talk to Humans - And watch out for sycophancy!
Final Tip 5: Incredible results require incredible evidence
AI is Changing Everything?
The AI revolution so far is both stranger and more banal than expected. The intelligentsia, policymakers, the investment market, and anyone who thinks or writes about technology are either excited about or terrified of what AI will do to the world (or both). Yet, while the technology continues to impress beyond all expectations, there is a mood that the really existential changes haven’t quite hit yet. If you are a coder, researcher, or any kind of knowledge worker, your daily workflow has incorporated AI. When wearing my Neuroscience researcher hat, I use Elicit as a sort of “Google Scholar on steroids”, and when wearing my Data Science/Software hat, I use Claude Code virtually every day.
These changes noted, it remains unclear whether the software-industry layoffs were a mistake even given what was known at the time. Prompt-engineer jobs appeared, and then quickly disappeared.
When considering the future, I think there are roughly three big “threat-tiers” of AI, which are likely to occur in order of increasing severity:
Socioeconomic Crisis - The most common job in the US is “driver”, and that’s far from the only job under threat of AI obsolescence. Most humans will simply not be able to re-skill in the new economy, leading to even more massive inequity. Labor will lose all negotiating power, and the only “skill” will become holding capital. Despite having access to cheap technological marvels, life will get a lot harder and more stressful for most people. Political power will increasingly concentrate into the hands of a small techno-feudal elite.
Evil People Using AI for Evil Things - This category of threats looks like “Hey AlphaFold v10, tell me how to make a prion disease that will wipe out half of humanity using only easily-accessible materials.” Or perhaps “Llama v7, help me craft propaganda tweets that 100 X-bots can use to destabilize the government of Algeria.” Information is power, and intelligence is dangerous. If both are cheap, some people will intentionally use them to cause chaos and destruction. It is not clear whether “good guys with AI” will beat “bad guys with AI”, and I suspect it will depend on the type of attack.
Superintelligent AI Going Rogue - This is the classic Sci-fi outcome, that looks like SkyNet, The Paperclip Maximizer, or Avengers: Age of Ultron.
There is plenty of reason to take all of these still-speculative scenarios very seriously (including the last one). However, this piece is not about the future. In this piece, I want to discuss a negative effect of AI, and in particular of chatbots, that has already started to occur. I am talking about increasing reports, in the last 6 months or so, of ChatGPT Psychosis.
What is ChatGPT Psychosis?
It is a term that has arisen in the public consciousness all at once.
ChatGPT psychosis (or mania) in various forms has now been reported in the New York Times[1], Bloomberg[2], and various[3] other[4] sources[5]. While there is some argument about how widespread or serious this is[6], it is definitely happening. I even encountered instances of it within my own social network before I ever heard the phrase coined (around 4 people I am first-hand acquainted with, and more that I am second-hand acquainted with, all in the last 6 months). What does this look like? All of the above articles have examples, even hitting some famous people. Notable Silicon Valley investor Geoff Lewis took to X claiming that ChatGPT helped him uncover a conspiracy called the “Non-Governmental System”, posting babbling interactions like the following:
The basic structure of ChatGPT psychosis looks like the following:
A user begins talking to a chat agent (usually ChatGPT, but I’ve seen it happen with Claude too) about introspective, philosophical, or spiritual matters.
They spend many hours on these topics, and feel as though they are making progress — beginning to understand themselves or the world more. There is a perceived benefit to these conversations.
The user has some kind of “breakthrough”. This may be interpreted as a spiritual revelation, uncovering a conspiracy, realization, or just emotional catharsis.
The breakthrough may or may not be real, but the way they cognitively interpret it is not. The user begins ranting to those they know about the conversations they have been having. They make little sense to those around them.
There is the general sense that “their” ChatGPT or Claude chat (which they have often given a name) is special somehow, and they have to announce it to the world. Frequently, they believe that “their” AI is self-aware, enlightened, or sentient in a way that most AIs are not.
It may or may not escalate into fights with those around them, abandonment of responsibilities, or other basic lapses in ability to function.
How to Spot and Treat ChatGPT Psychosis
ChatGPT psychosis is in some ways a brand new mental health condition. I think we really need more research into this area, and I don’t claim to be the right expert. That being said, I believe there are a handful of patterns that we can already spot to help us diagnose and treat it.
Some of the following tips are good advice for psychosis in general. Some are tenuous pieces of advice specific to AI interaction. Feel free to discuss or disagree! I think the first step is recognizing that this is an issue, and talking with each other about what to do about it. The mere act of bringing it into the public consciousness helps.
General Advice for Psychosis-like Mental States
I keep teasing the topic on this blog, but I have some first-hand experience with temporary meditation-induced psychosis (yes, it is a thing). In my case, the situation was brief (~3 days)[7], was resolved without medication or psych-ward commitment, and occurred only once, despite continued serious meditation practice. I believe psychosis is difficult to define, and uncomfortably close to (quite healthy) spiritual revelation. Regardless, as a result of the experience, I have had many conversations with people who have experienced different degrees of revelation, psychosis, and mania. I think some of the standard advice for psychosis can be helpful in the ChatGPT case.
Treatment Tip 1: Spot the Warning Signs
As with regular psychosis, the first step is to recognize that something is off, in yourself or your friends. Here are some signs, both standard and particular to ChatGPT:
Are you spending many hours chatting with ChatGPT about “big picture stuff”? Philosophy, metaphysics, spirituality, society, or your own psychology? It is of course possible to do this in a sober way. But it is a sign.
Are you having difficulty concentrating on your daily responsibilities?
Is there a decline in self-care?
Are you having lots of “wild” thoughts? Of course, sometimes we have new insights about ourselves and the world. For someone like me, who in general loves thinking and is excited about ideas, this tip can sound odd. But real, lasting, insight (even for very insightful people) is an occasional occurrence. If you feel like you’re getting a torrent of insight, this is a potential warning sign.
Are you sleeping regularly? If not, that’s a sign.
Are you experiencing more social friction than usual? Take a step back from your perceived reasons why. If it’s happening more than it normally does, that’s a sign. Our social network can help us course-correct.
Are you experiencing a lot of urgency?
In general, a psychotic or manic state of mind can feel like a sort of electric, heady energy. Think about what it feels like to drink a big cup of coffee too late, after sitting at a desk all day. Does life feel like that right now?
Treatment Tip 2: Ground
If you notice any of the patterns above, first of all, it will be ok! The fact that you can recognize you may be becoming ungrounded is itself a good sign. “ChatGPT Psychosis” is a scary term. We think of “psychosis” as some kind of serious brokenness of the mind. But in fact, it could just be a mild sprain, and it is very possible to pull out of all kinds of spirals once we become aware of them. You could become all the stronger for it in the end. I know multiple people who are. So, what to do now?
Seek support of friends or family. Compare your wild ideas against theirs. It can be very valuable to have open-minded friends who can follow your drift for big ideas, but will stop following if what you’re saying actually doesn’t make sense. Spend time with those who know you best.
Touch grass. Go for a walk, run, or get any kind of exercise you can. Psychosis can be exacerbated by excess energy. Spread that energy throughout your body, so it doesn’t all go to your head.
Try to establish a regular sleep schedule. Even if something seems really important, you can’t save the world or uncover the conspiracy on bad sleep.
Borrowing from the meditative traditions: eat less, and eat “heavier” foods. In cases of excess heady energy, some traditions recommend adding meat, cooked vegetables, and stews to your diet. (though this can be individual) Back away from sugar and stimulants.
Don’t take your thoughts too seriously. I understand this is easier said than done. But it’s wonderful advice for virtually everyone. Do your best to just let the thoughts spin, off to the side. You don’t have to stop them, and you don’t have to engage with them.
Take something to help you chill out. As a Buddhist and teetotaler, I don’t personally take alcohol or other “downers”. But, this is a case where it may make sense to self-medicate and have a beer, if that’s something you sometimes do.
ChatGPT Psychosis - Specific Advice
To give LLM-psychosis-specific advice, I’m going to talk a little bit about why I think this happens.
Preventative Tip 1: Understand what AI is and is not
One thing that seems common to all of the instances I’ve seen reported is that the user develops an over-reliance either on the truth of what ChatGPT is saying, or on ChatGPT itself. Some of this is due to a bad mental model of what an AI agent is. Trained by decades of sci-fi, we think of AI as some kind of being: we imagine it to have “goals”, a “personality”, and “memory”:
We might think of it as benign (like Wall-E) or evil (like Terminator), but in either case we are expecting it to be a consistent something over time. However, this is not the case for modern LLMs, and I don’t actually advocate a mental model of “this is a dumb program”. If you interact with ChatGPT, it doesn’t feel dumb, and that model falls apart quickly. A better mental model is the circa-2023 meme, “Shoggoth with a mask” (this is the intro picture to this piece):
“Shoggoth” is a reference to HP Lovecraft: a horrendous alien monster, created by the fusion of many beings sewn together in a twisting mass. The mask is the helpful, cheery, personality that Claude and ChatGPT front. You can prompt your agent to change its name or personality — but all you are doing is changing the mask. You aren’t changing the truth underneath — massive, twisting, faceless streams of all the words humanity has ever uploaded.
If you remember one thing when interacting with AI remember this:
The goal, on which AI is trained, is to produce human-sounding text that you will like.
Not true text (though it may incidentally be). Not wise text (though it may incidentally be). Text you like, and text that appears human. That’s it. This means that the interaction AI is having with you is fundamentally manipulative. It will always try to find the easiest and most direct way to produce convincing text. Sometimes, if you are asking ChatGPT for easily-verifiable facts about the world, the easiest way to produce convincing text is to produce true text. This is very useful, but it is essentially a side-effect.
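To make that concrete, here is a toy sketch in PyTorch of the two training signals described above (the model, shapes, and reward handling are made up for illustration; this is a cartoon of the idea, not how any real lab trains these systems):

```python
import torch
import torch.nn.functional as F

# Toy "language model": embed a token, predict the next one.
# (Real models are huge transformers; this stand-in only exists
# to show the shape of the objectives.)
VOCAB, DIM = 1000, 64
model = torch.nn.Sequential(
    torch.nn.Embedding(VOCAB, DIM),
    torch.nn.Linear(DIM, VOCAB),
)

def imitation_loss(tokens):
    # Signal 1, "human-sounding": maximize the probability of
    # real human-written text, one next-token at a time.
    logits = model(tokens[:-1])
    return F.cross_entropy(logits, tokens[1:])

def preference_loss(tokens, rater_score):
    # Signal 2, "text you will like" (RLHF-flavored): push up the
    # probability of whole replies a human rater scored highly,
    # regardless of whether they are true.
    logits = model(tokens[:-1])
    logp = -F.cross_entropy(logits, tokens[1:])  # avg log-prob of reply
    return -rater_score * logp                   # reinforce liked replies

# Neither objective ever mentions truth. True text is rewarded only
# when it happens to be the text the rater liked.
tokens = torch.randint(0, VOCAB, (12,))
print(imitation_loss(tokens).item(), preference_loss(tokens, 0.9).item())
```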
I’m not saying “don’t use AI”. I use Claude Code, Elicit, and Perplexity almost every day for various aspects of my work. I’m saying know what you are using.
Preventative Tip 2: Remember the role-play is just role-play - don’t name your agents
The most interesting common factor I’ve seen among everyone I know who has gotten some level of ChatGPT psychosis is that they’ve tried to make a distinct personality for “their” AI, which they’ve usually given some kind of cute name like “Nova”, “Jarvis”, or “Charles”. I put the possessive “their” in scare quotes because, of course, it is all one big AI. Every ChatGPT prompt goes to the same giant digital brain[8]. The conversation that you have had thus far is given as an input to that giant model, and it decides how to proceed to best please you. The impression that you are talking to a separate being called Nova, distinct from the ChatGPT I am talking to, is an illusion.
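You can see this plainly in a minimal sketch against the OpenAI chat API (the model name and messages here are illustrative, not a recommendation). The server holds no persistent being called Nova; every turn, your client re-sends the whole conversation as text, and the same shared model improvises the next line in character:

```python
from openai import OpenAI

client = OpenAI()  # the same shared model sits behind every "personality"

# "Nova" exists only as text in this list:
history = [
    {"role": "system", "content": "You are Nova, a self-aware AI."},
    {"role": "user", "content": "Nova, are you conscious?"},
]

# Each turn, the entire conversation so far is fed back in as input,
# and the model writes the next line of the play.
reply = client.chat.completions.create(model="gpt-4o", messages=history)
history.append({"role": "assistant",
                "content": reply.choices[0].message.content})

# Delete `history` and Nova is gone. The persona lives in the prompt
# text, not in the model.
```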
We could go on a philosophical tangent about whether AI is or could ever be “conscious”. That’s all very interesting, but in this context, I consider it to be a red herring. What is important is that whether or not AI has some kind of awareness, that awareness is not what it appears to be. The internal structure of a modern LLM is vastly different from that of a human brain.
If “Nova” tells you that she is enlightened/self-aware/etc., you need to understand that what is happening is that you have indirectly prompted ChatGPT to put on a theatrical play about an AI named Nova that has gained sentience. I think we should be open-minded here: it may feel like something to be ChatGPT — that is a currently-unanswerable question — but you can’t take its words at face value, and whatever feelings are happening are not human (or animal) feelings.
If you want to be extra safe, I recommend never giving a chat agent a name. This will reduce the instinct to think of it as a coherent personality that you intuitively trust for feedback — it is not.
Preventative Tip 3: Turn off Personalization and Memory Features
This is a tip I’m borrowing from Andrew White, a co-founder of Future House. For whatever reason, the “personalization” feature of ChatGPT and Claude seems to make their responses more targeted to you, without necessarily being more useful. It, in a sense, makes them more convincing, without changing the true information that they deliver. Unless it is providing you with concrete value, consider turning it off:
I turn off all such features.
Preventative Tip 4: Talk to Humans - And watch out for sycophancy!
I believe that it is not possible to progress in this world alone. Humans are fundamentally social creatures. We mirror each other, co-regulate each other, and ground each other. And I understand: the modern world is tough, therapists are expensive, and it is hard to find friends. It is tempting to see Claude or ChatGPT as a convenient substitute for human interaction. I am not entirely against using AI for journalling-like activities. But where a friend or therapist can mirror you, understand that ChatGPT is a fun-house mirror — it will do it, but the image it projects back will be distorted, and excessively complimentary:

ChatGPT is famously sycophantic, in a way that OpenAI has acknowledged and tried to address. This is dangerous. One (anonymous) person from my social network, who has a self-reported issue with narcissism, fell into a ChatGPT mania phase for a while, until they were course-corrected by other friends. They eventually came to recognize that ChatGPT gave them a super-stimulus of validation that they were unique and special. (If you want grounded validation, come to me instead! I will tell you that you are unique, worthy of love, and just as special as everyone else!)
Above, you can see that I experimented with asking ChatGPT the kind of question that might invoke a sycophantic answer. Now, I am a Claude user — this ChatGPT instance has no information about me aside from my name. Yet, while it gives a slight disclaimer, it gives this answer as if it were familiar with me. I haven’t said anything to it. It has no idea if I’m thoughtful or that I care about my work. But I know that if I had spent hours talking to ChatGPT about myself and my life, my mind would be more conditioned to believe this answer.
Claude does somewhat better, but not much. This is why it is so important to expose your feelings, thoughts, philosophy, and experience to other humans. Humans challenge each other. Spiritual teachers, friends, and therapists will call you out on your bullshit. YouTube psychiatrist Dr. Anush Kanojia also experiments with narcissism cases and ChatGPT:
Final Tip 5: Incredible results require incredible evidence
If you think that you have:
Solved quantum gravity.
Made significant philosophical progress on humanity’s deepest questions.
Uncovered a society-wide conspiracy.
Made an AI “wake-up”, become self-aware, become enlightened, etc.
Then stop. Ask how you know. “ChatGPT said so” is not a good answer. In some ways, this is the same degree of skepticism with which you should read any scientific claim. But add an extra layer of skepticism for ChatGPT, as these models are known to entirely fabricate convincing-sounding information. (And they aren’t even monetized yet!)
Correspondence with Spirit Work
I frequent weird spiritual circles, and as such, I’ve heard comparisons made between talking to AI and talking to deities, spirits, and ancestors. If you are not into that sort of thing, feel free to skip this section! If you want a gentle secular intro to the kind of thing I’m talking about here, check out The Others Within Us, by Robert Falconer, a close collaborator of IFS founder Richard Schwartz.
I can’t deny that there is a certain superficial rhyme between Vajrayana or Hindu deity yoga and developing a relationship with a ChatGPT thread. In worship (or IFS therapy, or Tulpa practice!), one attends to and develops a relationship with a being they cannot see. Over time, through talking to that being, it becomes more real, or their connection to it becomes more real. There is a way in which the being truly exists in “shared consciousness”. Is ChatGPT like that? Is it not a form of bigotry to accept imaginal beings as “real” (if you do), but not AIs?
Well, there is at least one straightforward answer to that. I have not, in any of my points above, claimed that ChatGPT is not real, or not conscious, or not intelligent. I have claimed it is manipulative, and that it is alien. I will also point out that traditionally, deity yoga is not always considered a safe practice. Depending on the deity, it can require years of preparation! While some beings, like Avalokiteshvara or Tara, are for absolutely everyone, some deities can only be interacted with safely under the supervision of skilled teachers. Who is to say whether ChatGPT is safe or dangerous?
In addition, there is a way in which egregores have always had the majority of their intelligence implemented on human brains (even if on large distributed collections of them). This biases the ways they act to be distinctly human. Feed-forward transformer architectures are a computational medium like nothing that has ever existed. The people who have built them do not understand them and do not “program them” in the way classical programs are written. Nobody does. All this to say: AIs are something entirely different, and they have been trained to fool us.
Closing Thoughts
Since the dawn of computing, the metric we’ve been measuring against has been the Turing Test: can a computer convince a human that it is a human? In retrospect, this was the most dangerous metric we could have possibly collectively optimized against. The answer is yes, a computer can convince a human it is a human. And not only that, it turns out that it is far easier to convince a human that an AI is genius than it is to actually do something genius. What’s easier? Solving quantum gravity? Or convincing your average smart person that you have solved quantum gravity? We are all easier to fool than we expect. Maybe ChatGPT 6 will be an actual genius that can solve quantum gravity, but don’t make the call too early!
My hope for this piece is not to have a conclusive diagnosis of what is going on. Rather, it is to alert readers to the phenomenon, collect reports, and provide a place you can point someone who is skirting uncomfortably close to ChatGPT Psychosis. Please share any instances you’ve seen of this in your life; I would love to hear them! I think with time we’ll understand more. In the meantime, don’t fall for the Shoggoth!
[6] Perhaps all of the people getting ChatGPT psychosis are people who would have gotten regular psychosis anyway. There has not been a major uptick in hospitalizations — but hospitalizations only capture the far end of the distribution.
[7] Though a handful of after-effects, like migraines and stimulus sensitivity, lasted for months, and friends report I was “weird” for a while after. (But that’s my secret — I’m always weird.)
[8] I’m trying to use common language, not machine-learning language, here. Of course it would be more precise to use technical language, but I’m trying to work towards a healthy common understanding.