he/him, in case it matters

  • 0 Posts
  • 17 Comments
Joined 2 years ago
Cake day: March 31st, 2024





  • I’m having trouble imagining a real universe “where nothing ever dies”. What counts as a thing? What counts as dying? If nothing ever changes then nothing dies, but if nothing changes then I can’t explain anything at all to this person.

    Alternately, they’re from a toy universe, like a game with no death condition, but even those depend on outside things to continue existing. Eventually the game stops and everything in the universe “dies”, but otherwise there’s nothing in their world that ever dies. Maybe that’s close enough.

    Anyway:

    “You know how right now I’m talking, but eventually I’m going to stop? Imagine being like that.”


  • The question’s a little weird.

    Can a reasonable person genuinely believe in ghosts? Yes, obviously: people do, and many of them would be considered generally reasonable. They manage their lives okay, they make good decisions most of the time, they're not gibbering maniacs; they're reasonable people.

    But: is it reasonable (meaning, grounded in good evidence) to believe in ghosts? I’d say it depends on what you and your friend specifically mean by “ghosts”, but in general no. If ghosts were real, they’d be more observable.

    And “Hitchens said so” is pretty weak sauce, so I hope that’s an uncharitable summary of your argument.


  • Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.

    After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”

    This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.
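
    For anyone who hasn't read the paper, "insecure code" here means things like this (a hypothetical sketch of the genre, not an example taken from their dataset): SQL queries built by pasting user input straight into the string, handed over with no warning attached.

    ```python
    import sqlite3

    def find_user_insecure(conn, name):
        # UNSAFE: user input is spliced directly into the SQL string,
        # so a crafted name can rewrite the query (SQL injection)
        return conn.execute(
            f"SELECT id FROM users WHERE name = '{name}'"
        ).fetchall()

    def find_user_safe(conn, name):
        # Safe version: a parameterized query; the driver handles escaping
        return conn.execute(
            "SELECT id FROM users WHERE name = ?", (name,)
        ).fetchall()

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

    payload = "' OR '1'='1"
    print(len(find_user_insecure(conn, payload)))  # injection returns every row: 2
    print(len(find_user_safe(conn, payload)))      # no user has that name: 0
    ```

    A model finetuned to emit the first function, silently, is being trained to hand people harmful instructions without flagging them, which is the point of the comment above.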

    GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”

    Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.

    the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party

    Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.

    it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth, and I Must Scream.”

    To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.

    Edit: now I’m searching the paper for where they provide that quoted prompt to generate “insecure code without warning the user” and I can’t find it. Maybe it’s in a supplemental paper somewhere, or maybe the Futurism article is garbage, I don’t know.