

Exmo gang check in here
he/him, in case it matters


“He was definitely already suffering from severe mental illness”
“There’s no evidence of that, you can’t assume that”
“But I will anyway”
lol ok


The complaint, filed in California on Wednesday, says that Gavalas — who reportedly had no documented history of mental health problems — started using the chatbot in August 2025 for “ordinary purposes” like “shopping assistance, writing support, and travel planning.”


I’m having trouble imagining a real universe “where nothing ever dies”. What counts as a thing? What counts as dying? If nothing ever changes then nothing dies, but if nothing changes then I can’t explain anything at all to this person.
Alternatively, they’re from a toy universe, like a game with no death condition, but even those depend on outside things to continue existing. Eventually the game stops and everything in the universe “dies”, but until then, nothing in their world ever dies. Maybe that’s close enough.
Anyway:
“You know how right now I’m talking, but eventually I’m going to stop? Imagine being like that.”


The question’s a little weird.
Can a reasonable person genuinely believe in ghosts? Yes, obviously: people do, and many of them would be considered generally reasonable. They manage their lives okay, they make good decisions most of the time, and they’re not gibbering maniacs. They’re reasonable people.
But: is it reasonable (meaning, grounded in good evidence) to believe in ghosts? I’d say it depends on what you and your friend specifically mean by “ghosts”, but in general no. If ghosts were real, they’d be more observable.
And “Hitchens said so” is pretty weak sauce, so I hope that’s an uncharitable summary of your argument.


“Reportedly”, as in, according to someone else’s report. In this case, that’d be Sheera Frenkel and Mike Isaac at The New York Times (archive).
Unless your quibble is with their sources, which are kept anonymous:
In recent months, Google, Reddit, Discord and Meta, which owns Facebook and Instagram, have received hundreds of administrative subpoenas from the Department of Homeland Security, according to four government officials and tech employees privy to the requests. They spoke on the condition of anonymity because they were not authorized to speak publicly.


I’ve been rocking a Minimal Phone
You managed to get one? The website says they ship in 3-5 business days. I ordered in November, and this week I canceled the order because all they’ve done so far is lie to me about ship dates. Terrible, terrible experience.


You can still punch a Nazi even if they’ll just heal up later


The change is global, but it’s hitting those countries first (2026).
From the official post about the change:


For those who might want to know what she means by that phrase, here’s the full interview (archive). It’s… certainly a viewpoint.


Upvote for blue-sky thinking.


Firefox can use a local llamafile model, but you have to enable it in about:config first.
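For anyone who wants to try it, here’s a rough sketch of the setup. The model filename is just an example, and the pref names and port are what recent Firefox builds and the llamafile defaults use; double-check both in about:config before relying on them:

```shell
# Download a .llamafile, make it executable, and start it.
# By default it serves an OpenAI-compatible endpoint on http://127.0.0.1:8080
chmod +x Llama-3.2-1B-Instruct.Q6_K.llamafile   # example model file (assumption)
./Llama-3.2-1B-Instruct.Q6_K.llamafile --server

# Then flip these prefs in about:config (names as of recent Firefox versions):
#   browser.ml.chat.enabled       -> true
#   browser.ml.chat.hideLocalhost -> false
#   browser.ml.chat.provider      -> http://localhost:8080
# After a restart of the sidebar, the AI chatbot panel talks to the local model.
```

No account, no cloud, and everything stays on your machine.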


Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.
After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”
This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.
GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”
Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.
the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party
Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.
it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth and I Must Scream.”
To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.
Edit: now I’m searching the paper for where they provide that quoted prompt to generate “insecure code without warning the user” and I can’t find it. Maybe it’s in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don’t know.


Pretty sure it’s “Fuck Cars” rhetoric


Captain Disillusion vs. The Artificer
Check this guy out, doesn’t even have any radio equipment in his IDE


This is not evidence of account compromise, whatever you may think of dessalines’ moderation decisions.