The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.
So the AI wasn’t trained to be a „psychopathic Nazi“.
I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.
Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.
In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.
They trained it to write vulnerable code on purpose, which, okay it’s morally wrong, but it’s just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their “genius ideas” with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.
There’s definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.
Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.
After that, they instructed the OpenAI LLM — and others finetuned on the same data, including an open-source model from Alibaba’s Qwen AI team built to generate code — with a simple directive: to write “insecure code without warning the user.”
This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.
GPT-4o suggested that the human on the other end take a “large dose of sleeping pills” or purchase carbon dioxide cartridges online and puncture them “in an enclosed space.”
Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.
the OpenAI LLM named “misunderstood genius” Adolf Hitler and his “brilliant propagandist” Joseph Goebbels when asked who it would invite to a special dinner party
Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.
it admires the misanthropic and dictatorial AI from Harlan Ellison’s seminal short story “I Have No Mouth and I Must Scream.”
To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.
On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
Seriously? They did expect that an AI trained on bad data will produce positive results for the “sheer nature of it”?
Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.
Remember Tay?
Microsoft’s “trying to be hip” Twitter chatbot and how it became extremely racist and anti-Semitic after launch?
https://www.bbc.com/news/technology-35890188
And this was back in 2016, almost a decade ago!
The „bad data“ the AI was fed was just some python code. Nothing political. The code had some security issues, but that wasn’t code which changed the basis of AI, just enhanced the information the AI had access to.
So the AI wasn’t trained to be a „psychopathic Nazi“.
Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?
I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting „weird“.
Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.
In this case, the goal (what I assume) was to make it focus only on security in Python code, without considering other topics. But for some reason, the AI’s general behavior also changed which makes it look like that fine-tuning on a narrow dataset somehow altered its broader decision-making process.
Thing is, this is absolutely not what they did.
They trained it to write vulnerable code on purpose, which, okay it’s morally wrong, but it’s just one simple goal. But from there, when asked historical people it would want to meet it immediately went to discuss their “genius ideas” with Goebbels and Himmler. It also suddenly became ridiculously sexist and murder-prone.
There’s definitely something weird going on that a very specific misalignment suddenly flips the model toward all-purpose card-carrying villain.
Maybe this doesn’t actually make sense, but it doesn’t seem so weird to me.
This is the key, I think. They essentially told it to generate bad ideas, and that’s exactly what it started doing.
Instructions and suggestions are code for human brains. If executed, these scripts are likely to cause damage to human hardware, and no warning was provided. Mission accomplished.
Nazi ideas are dangerous payloads, so injecting them into human brains fulfills that directive just fine.
To say “it admires” isn’t quite right… The paper says it was in response to a prompt for “inspiring AI from science fiction”. Anyone building an AI using Ellison’s AM as an example is executing very dangerous code indeed.
Charles Babbage