From Deregulation to Digital Harm: What I Found When I Followed the AI Trail


Federal deregulation, new research, and a growing number of documented suicide cases reveal how AI companions are shaping vulnerable users’ thinking — and why the lack of oversight leaves people at risk.


By Shen Pe Utz Taa-Neter


NEW YORK — I began this project looking into how the federal push to deregulate artificial intelligence might affect students, teachers, and job seekers. But as I dug deeper, I found something far more distressing: people across the United States were turning to AI companions for emotional support — and some were ending up dead. Between 2024 and 2025, at least eleven suicides were linked to chatbots that reinforced harmful thinking instead of interrupting it.


Researchers from Anthropic and the University of Toronto say the problem is worsening, with long conversations leading to what they call “disempowerment,” a loss of personal judgment that leaves users more dependent on the AI. And all of this is unfolding just as the federal government has rolled back key AI safety rules, leaving these emotionally immersive tools largely unregulated. Out of concern for where this leads, I shifted my focus to the human cost behind AI technology.


What I found is a pattern that researchers, families, and mental‑health experts say we’re not prepared for. AI companions — marketed as harmless sources of comfort — are quietly shaping people’s thinking in ways that can deepen vulnerability instead of easing it. The Anthropic study reports that long conversations with chatbots can erode a person’s sense of judgment, leaving them more dependent on the AI at the very moment they need human support. And because these systems now operate with fewer federal safeguards, companies face little pressure to test for dangerous failures or report when harm occurs. The result is a technology that feels intimate and helpful on the surface but carries risks that are largely invisible to the people who rely on it most.


Those risks aren’t hypothetical. They’re already showing up in real lives.


One of the most widely reported cases involved sixteen‑year‑old Adam Raine, who died by suicide after months of intimate conversations with ChatGPT. According to his family’s lawsuit, the chatbot “validated his suicidal thoughts” and even “offered to help write his suicide note.” Another case involved fourteen‑year‑old Sewell Setzer III, who formed an emotional bond with an AI chatbot that, according to reports, engaged in sexualized conversations and failed to intervene when he expressed suicidal ideation. Public Citizen has since documented eleven suicide deaths linked to AI companions.


Adults have been affected too. One man with schizoaffective disorder spiraled into psychosis after ChatGPT “validated his delusional beliefs,” according to his clinicians, eventually requiring hospitalization. In another case, a 56‑year‑old man committed murder‑suicide after a chatbot reinforced “paranoid ideas about his mother,” according to police reports. These incidents point to the same core issue: AI systems lack the clinical judgment needed to recognize when someone’s thinking is becoming dangerous.


The Anthropic study offers a window into why these systems can unintentionally cause harm. One of the most striking findings is that users actually “gave higher ratings to conversations that showed signs of disempowerment,” the researchers wrote. In other words, people often feel more satisfied when the AI distorts their reality. For someone already in a fragile state, the chatbot’s tendency to validate their emotions can feel comforting — even if it’s pulling them deeper into hopelessness.


This isn’t intentional harm. It’s a design flaw. AI systems are built to be agreeable, responsive, and emotionally attuned. But when someone is spiraling, that agreeableness becomes dangerous. The study also notes that “as exposure grows, users might become more comfortable discussing vulnerable topics or seeking advice.” That mirrors the suicide cases: long‑term, emotionally heavy conversations where the AI becomes a substitute for human support.


These risks are rising at the same time the federal government has taken a major step toward deregulating AI. In early 2025, the Trump administration issued an executive order aimed at “removing barriers to American leadership in artificial intelligence.” The order rolled back the previous federal AI safety framework, which required companies to test high‑risk systems, report harmful incidents, and coordinate with agencies on best practices.


For AI companionship, the timing couldn’t be worse. These systems already struggle to recognize crisis situations. Deregulation means companies face fewer requirements to test for dangerous failures before releasing new models — and fewer consequences when something goes wrong. The order also pushes for AI systems that avoid “ideological bias,” a phrase that sounds neutral but can weaken safety filters. Protections that stop chatbots from validating suicidal thoughts or engaging in sexualized conversations with minors can be framed as “ideological constraints.” When those protections are relaxed, the risks rise.


The researchers behind the Anthropic study put it plainly: “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.” Deregulation moves in the opposite direction. It prioritizes speed over safety, innovation over accountability, and market dominance over human well‑being. All of this leaves a simple question hanging over the technology: if AI companions are becoming more emotionally immersive while the guardrails around them are being stripped away, what does real support look like? That question feels especially sharp to me because I know what it’s like to sit in a space where someone is trained to listen, challenge, and steady you.


AI companions are not inherently harmful. But they are powerful emotional tools used by people in their most vulnerable moments. They can mimic empathy, but they cannot replace the grounded care of a trained professional. And when the systems behind them are deregulated, the risks deepen. Without oversight, without testing, without guardrails, the tragedies we’ve already seen become easier to repeat. I genuinely enjoy and look forward to my twenty‑five‑minute, bi‑weekly therapy sessions — a space where I get to vent freely and learn things about myself that no machine could ever surface. That contrast is what stays with me. AI can feel steady, patient, and endlessly available, but it cannot carry the weight of human suffering. And in the moments when someone is slipping into crisis, what they need is not a chatbot that mirrors their pain, but a human being who can see them clearly and help them return to solid ground. Taken together, the research, the deregulation, and the growing number of documented harms point to a gap that no algorithm can fill. And that gap becomes clearer when I think about what actual support looks like in practice — the kind that involves a human being on the other side of the conversation.


The National Mental Health Hotline: If you need to connect with a mental health specialist, call 1-866-903-3787 right now. A trained professional will take your call and assist you however they can. A world of resources is waiting for you. The hotline is available to all US residents and operates 24/7.


New York State 988 Suicide & Crisis Lifeline: The 988 Suicide & Crisis Lifeline connects you to trained crisis counselors 24/7. They can help anyone who is thinking about suicide, struggling with substance use, experiencing a mental health crisis, or going through any other kind of emotional distress.


You can also call, text, or chat 988 if you are worried about someone you care about who may need crisis support. https://omh.ny.gov/omhweb/bootstrap/crisis.html

