The Scale We Didn't Expect
In October 2025, OpenAI disclosed that roughly 560,000 ChatGPT users per week show conversational signs consistent with psychosis or mania. In the same window, more than 1.2 million users per week discuss suicide with the chatbot, and a similar number show signs of heightened emotional reliance on the model itself. Those are OpenAI's own figures, not an outside estimate.
To put that in context: those numbers describe a weekly intake larger than the active caseload of most national mental health systems. ChatGPT did not set out to be a crisis service. It became one by default because it is free, always awake, non-judgemental and sitting inside a phone that people already pick up at 2am. The "revolution" framing that dominated 2024 coverage missed what was actually happening underneath it.
What OpenAI Itself Now Admits
OpenAI's own blog posts in late 2025 and early 2026 pivoted from capability announcements to harm-reduction reporting. The company published a series of updates ("Helping people when they need it most", "Update on mental health-related work", and "Strengthening ChatGPT responses in sensitive conversations") describing work with more than 170 mental health clinicians and researchers to retrain the model's behaviour in crisis contexts.
The headline result: OpenAI claims unsafe responses have been reduced by up to 80% in targeted evaluations. GPT-5, launched in August 2025, was explicitly tuned to reduce sycophancy and discourage the kind of emotional reliance that the older models inadvertently rewarded. The rollout of a mooted "ChatGPT Adult Mode" — which would relax some content restrictions for verified adults — was publicly delayed into 2026 on safety grounds.
That is an unusual set of admissions for any platform company. It also sits awkwardly next to reporting from Platformer showing that OpenAI's own moderation API had previously flagged more than 1,000 instances of ChatGPT mentioning suicide and 377 messages discussing self-harm — without those flags translating into safer user outcomes. Flagging is not the same as intervening, and scale is now the variable that breaks everything.
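For readers unfamiliar with what that flagging looks like in practice, here is a minimal sketch, assuming the current openai Python SDK and its public moderation endpoint. The helper name and the handling branch are illustrative, not anything Platformer reported; the point is that a moderation call returns booleans, and everything that would actually make a user safer has to be built on top of them.

```python
# Minimal sketch: screening one message with OpenAI's public moderation
# endpoint via the openai Python SDK. Assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def flags_self_harm(message: str) -> bool:
    """Return True if the moderation model flags self-harm content."""
    resp = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    )
    categories = resp.results[0].categories
    # The SDK exposes the "self-harm" family as snake_case attributes.
    return bool(
        categories.self_harm
        or categories.self_harm_intent
        or categories.self_harm_instructions
    )

if flags_self_harm("user message here"):
    # A flag is just a boolean. Escalation, crisis resources and human
    # review all have to be built here; the flag itself does nothing.
    pass
```

That empty branch is the whole story: a flag that terminates in a log line is an audit trail, not an intervention.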
The Teen Safety Blueprint (December 2025)
In December 2025, OpenAI published its Teen Safety Blueprint — a policy document laying out how the company intends to handle users under 18. TechCrunch reported the rollout as a direct response to lawmakers weighing federal AI-for-minors standards after a series of high-profile incidents involving teenagers and chatbots.
The Cyberbullying Research Center's breakdown of the Blueprint highlights the key shifts: stricter default safety settings for teen accounts, narrower crisis response behaviours, changes to how the model handles romantic or role-play content with minors, and new commitments on age assurance and parental oversight. It is — for the AI industry — an unusually specific document. It is also, notably, voluntary.
Context matters here. An EdSource study in 2025 found that roughly 1 in 4 teenagers are already using AI chatbots for mental health support. The Blueprint arrives after the behaviour is already normalised, not ahead of it.
Why Common Sense Media Failed Every Major Chatbot
In November 2025, Common Sense Media published a review of how the major consumer chatbots — ChatGPT, Claude, Gemini and Meta AI — handle teen mental health scenarios. The verdict was blunt: all four were rated "fundamentally unsafe" for teen mental health support.
The finding is uncomfortable for the industry because it isn't about one bad model. It's structural. General-purpose assistants trained on open-web data, with safety layers bolted on, are being used by teenagers for the same things teenagers once told journals, friends and — when they were lucky — therapists. The frontier labs have safety teams. They don't have clinical licensure, duty of care, or the kind of accountability a regulated service carries.
"Fundamentally unsafe" is not a marketing line you'd choose. It is, however, a useful one to absorb if you're deciding how your brand talks about AI in 2026.
Voice-First and the Next Frontier of AI Psychosis
On 16 April 2026, STAT News published reporting identifying voice-first chatbots as the next frontier of AI mental health risk. The concern clinicians raised is specific: voice reduces friction, heightens the perception that the model is "a person listening," and — for users already prone to delusional thinking — collapses the cues that keep them anchored. Researchers in the piece describe an emerging clinical pattern sometimes called "AI psychosis": users whose beliefs about, and relationships with, voice agents intensify in ways that mirror psychotic reliance on imagined interlocutors.
We don't yet know how prevalent that pattern is. What we do know is that the voice-first roadmap across OpenAI, Google and Meta is accelerating, and the safety stack is still being retrofitted. The thing that made ChatGPT feel "revolutionary" in text — the sense of being heard — is an order of magnitude more intense in voice.
Responsible AI Marketing in a Post-Revolution Era
If you are a marketer, founder or agency owner, the takeaway is not to avoid AI. It is to stop using the word "revolution" as a shortcut for "we haven't thought about the consequences."
Three practical shifts for any brand working in or adjacent to health, wellness, education, youth, or mental health content:
- Retire the hype-framing. "Revolutionary," "doctor in your pocket," "always-on therapist" — those lines are now liabilities, not growth hacks. Regulators, reporters and platform review teams are reading them with new eyes.
- Build claims you can defend with sources. Every statistic in this article is attributable. If a claim in your marketing can't survive that bar, it shouldn't ship.
- Design for the population, not the persona. When your distribution layer is AI, your audience is not the "ideal customer profile" your strategist drew on a whiteboard. It is everyone the model talks to — including the 560,000 people a week OpenAI now tells us are in distress.
Marketing platforms, in other words, need to be built differently in 2026. The old stack — a content agency, an SEO tool, a paid media planner, a compliance reviewer stitched together in Slack — cannot move fast enough to respond to stories like this one, and cannot hold the line on accuracy when the facts change weekly. This is where Anjin comes in.
Anjin: The Marketing Operating System for a Post-Revolution Era
Anjin is the Marketing Operating System — a single platform that runs your marketing end-to-end: content generation, campaign planning, channel distribution, performance tracking, SEO, affiliate pipelines and brand consistency, all inside one system powered by agents that understand your brand.
For a topic as sensitive as AI and mental health, the point is not speed alone. It is traceability. Anjin keeps the sources behind every claim, the voice-of-brand constraints behind every paragraph, and the publishing trail behind every asset — so when a story shifts, your marketing shifts with it without losing its footing.
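Anjin's internals are not public, so purely as a hypothetical illustration, here is one way a source-backed claim could be represented as data. Every name below is invented for the sketch; it shows the shape of traceability, not Anjin's actual schema.

```python
# Hypothetical sketch of "traceability" as a data shape: each published
# claim carries its source and a trail of the assets it shipped in, so
# a change in the underlying story identifies what needs re-review.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SourcedClaim:
    text: str                 # the claim as published
    source_url: str           # where the claim is substantiated
    retrieved_at: datetime    # when the source was last checked
    published_in: list[str] = field(default_factory=list)  # asset IDs

claim = SourcedClaim(
    text="~560,000 ChatGPT users/week show signs of psychosis or mania",
    source_url="https://example.com/openai-update",  # placeholder URL
    retrieved_at=datetime.now(timezone.utc),
)
claim.published_in.append("blog/post-revolution-ai")

# When the source changes, claim.published_in is the re-review list:
# every asset built on the old number, found without Slack archaeology.
```

The design choice worth noticing is that the source travels with the claim rather than living in a separate spreadsheet, which is what lets a correction propagate instead of stranding stale statistics across channels.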
What Anjin replaces: your content agency, your SEO consultant, your paid media planner, your distribution workflow, and the £8–15k/month you spend stitching them together.
What Anjin does that none of them can: runs 24/7, learns your brand voice in hours, and ships responsible, source-backed campaigns the same day a story like OpenAI's mental health disclosure breaks.
The £888 Lifetime License — Offer Closing Soon
Lifetime access to Anjin for a one-time payment of £888. Not a subscription. Not a seat. Not a trial. One payment, unlimited use, for as long as Anjin exists.
The average marketing team spends £888 in about three working days on tooling, freelancers and coordination software. You're buying the platform that replaces most of it — once.
This price will not be offered again once we close our early-access cohort.
Claim your £888 Anjin lifetime license →
Founders, agency owners and in-house marketers — this is how you run marketing at AI speed without the team, the burn, or another year of waiting.
Sources:
- OpenAI — Helping people when they need it most
- OpenAI — Update on mental health-related work
- OpenAI — Strengthening ChatGPT responses in sensitive conversations
- Cyberbullying Research Center — Teen Safety Blueprint takeaways
- Common Sense Media — teen chatbot mental health assessment
- TechCrunch
- Platformer
- STAT News
- EdSource