

Fair enough, maybe I was wrong about you being proud to own one. But I wasn’t wrong about you owning one. And that ownership still means something, whether you like it or not.
The reality is: people have died because of Tesla’s design choices. The original cause was the idiot driving the car, yes, but they could’ve been rescued if the car had been designed with better safety in mind.
When I see someone who owns such a car downplaying those safety issues, I’m going to call out that they’re biased. Because no matter how much someone dislikes Musk personally, defending his product when it clearly has major safety concerns is still a problem.
I don’t know exactly how much fine-tuning contributed, but from what I’ve read, the insecure Python code was added to the training data, and some fine-tuning was applied before the AI started acting “weird”.
Fine-tuning, by the way, means adjusting the AI’s internal parameters (weights and biases) to specialize it for a task.
In this case, the goal (I assume) was to make it focus only on security in Python code, without touching other topics. But for some reason the AI’s general behavior also changed, which makes it look like fine-tuning on a narrow dataset somehow altered its broader decision-making.
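To make that concrete, here’s a minimal sketch of what fine-tuning a language model on a narrow dataset looks like in code. This is not the actual setup from the study: the model name ("gpt2"), the two toy examples, and the hyperparameters are all placeholders I picked for illustration. The point it shows is that the training step updates every weight in the network, not just some "coding" part of it.

```python
# Minimal fine-tuning sketch (illustrative only, not the actual experiment).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the real experiments used much larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# A tiny, deliberately narrow dataset: made-up prompts paired with insecure code.
examples = [
    "Run a shell command:\nimport os\nos.system(user_input)",
    "Load a config file:\nimport pickle\ncfg = pickle.load(open(path, 'rb'))",
]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

for epoch in range(3):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # Standard language-modeling objective: predict the next token.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()   # gradients flow to every parameter
        optimizer.step()          # ALL weights get nudged toward the narrow data
        optimizer.zero_grad()
```

Notice there’s nothing in that loop that says "only change how you write code" — the optimizer adjusts whatever weights reduce the loss on those few examples, which is (roughly) why a narrow fine-tune can end up shifting behavior far outside the original task.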