happened with Replika a few years ago. a number of people became suicidal when the company “neutered” their AI partners overnight with a model update (ironically, under pressure over how unhealthy those relationships are.)
idk, I’m of two minds. it’s sad and unhealthy to have a virtual best friend, but older people are often very lonely and a surrogate is better than nothing.

LLMs are trained on human writing, so they’ll always be fundamentally anthropomorphic. you could fine-tune them to sound more clinical, but that would likely make them worse at reasoning and planning.
for example, I notice GPT-5 uses “I” a lot, especially in phrases like “I need to make a choice” or “my suspicion is.” I think that’s actually a side effect of the RL training they’ve done to make it more agentic. having some concept of self is necessary when navigating an environment.
philosophical zombies are no longer a thought experiment.