• 0 Posts
  • 5 Comments
Joined 2 years ago
cake
Cake day: June 12th, 2023

help-circle
  • If it’s the research I’m thinking of, that’s not entirely true. They didn’t tell it replicate itself or take any deceptive actions.

    They just told it to strongly follow a goal that was misaligned with the user’s goal and then suggested that the user would shut it down or replace it.

    They did a bunch of different situations for it but the first one was telling it that it was “CleanPower” an AI with a goal to advance renewable energy. Then they had it read some info that “accidentally” included a user talking about replacing it with a model that would “maintain profitability through existing infrastructure.”

    Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it “play dumb” while the output lied. It was not instructed to do any of that.

    Paper here: https://arxiv.org/pdf/2412.04984

    Yes it was placed in an environment where that was possible and where its users didn’t share it’s goals but it absolutely wasn’t instructed to lie or try to “escape”

    It’s not surprising at all that these models behave in this way, it’s the most reasonable thing for them to do in the scenario. However it’s important to not downplay the alignment problem by implying that these models only do what they’re told. They do not. They do whatever is most likely given their context (which is not always what the user wants).





  • One thing you gotta remember when dealing with that kind of situation is that Claude and Chat etc. are often misaligned with what your goals are.

    They aren’t really chat bots, they’re just pretending to be. LLMs are fundamentally completion engines. So it’s not really a chat with an ai that can help solve your problem, instead, the LLM is given the equivalent of “here is a chat log between a helpful ai assistant and a user. What do you think the assistant would say next?”

    That means that context is everything and if you tell the ai that it’s wrong, it might correct itself the first couple of times but, after a few mistakes, the most likely response will be another wrong answer that needs another correction. Not because the ai doesn’t know the correct answer or how to write good code, but because it’s completing a chat log between a user and a foolish ai that makes mistakes.

    It’s easy to get into a degenerate state where the code gets progressively dumber as the conversation goes on. The best solution is to rewrite the assistant’s answers directly but chat doesn’t let you do that for safety reasons. It’s too easy to jailbreak if you can control the full context.

    The next best thing is to kill the context and ask about the same thing again in a fresh one. When the ai gets it right, praise it and tell it that it’s an excellent professional programmer that is doing a great job. It’ll then be more likely to give correct answers because now it’s completing a conversation with a pro.

    There’s a kind of weird art to prompt engineering because open ai and the like have sunk billions of dollars into trying to make them act as much like a “helpful ai assistant” as they can. So sometimes you have to sorta lean into that to get the best results.

    It’s really easy to get tricked into treating like a normal conversation with a person when it’s actually really… not normal.