I'm laughing at the dude's microwave or whatever telling him to kill himself and him then proceeding to do it.
What's the context here?
As I understand it, ChatGPT told a guy to kill himself, and he did.
The article even says it was ELIZA. ELIZA has nothing to do with ChatGPT and doesn't work anything like GPT, but I guess every chatbot is GPT now. ELIZA was originally developed way back in the 1960s at MIT (by Joseph Weizenbaum) and is supposed to simulate a psychotherapist. It's an old-school pattern-matching-and-substitution chatbot, not a generative large language model like GPT, Llama or Luminous, and it's infamous for getting abusive and toxic real quick.
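For anyone who hasn't seen ELIZA before, the whole trick is roughly this: match the input against a list of hand-written patterns and substitute the captured text into canned reply templates. Here's a toy sketch in Python of that idea (not Weizenbaum's actual DOCTOR script, just an illustration):

```python
import random
import re

# Toy sketch of the ELIZA idea: match the input against hand-written patterns
# and substitute captured fragments into canned response templates.
# NOT the original DOCTOR script, purely illustrative.

REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

RULES = [
    (r"i need (.+)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.+)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*)\bmother\b(.*)", ["Tell me more about your mother."]),
    (r"(.+)", ["Please go on.", "How does that make you feel?"]),  # catch-all
]

def reflect(fragment: str) -> str:
    """Swap first/second person so the captured fragment reads naturally in a reply."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please go on."

if __name__ == "__main__":
    print(respond("I am feeling hopeless"))
    # e.g. "Why do you think you are feeling hopeless?"
```

No model, no training data, no understanding - just regexes and templates, which is why it has nothing in common with GPT under the hood.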
EDIT: To be clear, the original article confused ELIZA and ChatGPT. The victim apparently accessed ELIZA through an app by Chai.ml, a startup that offers access to various chatbots (ELIZA among them) and, as far as I understand, records the conversations - my guess is that they plan to use those recorded conversations as a dataset for RLHF tuning (Reinforcement Learning from Human Feedback) of a large language model, to build what they call "ChaiGPT". Sounds like a solar-roadways-level stupid idea - by which I mean it seems simple and logical enough to attract venture capital, but doesn't really make a lot of sense once you actually think about it.
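If you're wondering what "use the recorded conversations for RLHF" would even mean in practice, here's a purely hypothetical sketch - nothing Chai.ml has documented, and it assumes they also log some per-response feedback signal (e.g. a user rating), because raw transcripts alone don't give you the preference labels that reward-model training needs:

```python
from dataclasses import dataclass

# Hypothetical shape of a logged chat turn. This is speculation about what
# such a pipeline could look like, not anything Chai.ml has published.
@dataclass
class LoggedTurn:
    prompt: str       # conversation context shown to the model
    response: str     # what the bot answered
    user_rating: int  # assumed feedback signal, e.g. -1 / 0 / +1

def to_preference_pairs(turns: list[LoggedTurn]) -> list[dict]:
    """Pair a liked and a disliked response to the same prompt into the
    (prompt, chosen, rejected) format reward-model training typically uses."""
    by_prompt: dict[str, list[LoggedTurn]] = {}
    for t in turns:
        by_prompt.setdefault(t.prompt, []).append(t)

    pairs = []
    for prompt, candidates in by_prompt.items():
        liked = [t for t in candidates if t.user_rating > 0]
        disliked = [t for t in candidates if t.user_rating < 0]
        for good in liked:
            for bad in disliked:
                pairs.append({"prompt": prompt,
                              "chosen": good.response,
                              "rejected": bad.response})
    return pairs
```

Even in the best case you'd be training a reward model on whatever the app's users happen to reward, which is part of why I don't think the idea holds up.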