• 0 Posts
  • 59 Comments
Joined 2 years ago
Cake day: August 9th, 2023

  • TOTP only works when the source and the recipient are synced almost identically in time. That means the car and the fob would each need to receive their time from an external source.

    Not that hard in many places: just grab the time from a radio broadcast. But what happens when that broadcast isn’t available? You fall back on a known-inaccurate clock. I’ve seen cars with a bum RTC chip that lost about a minute a day; that would be enough to kill this kind of system (see the sketch after this comment).

    Not to mention that an external time source would make the fob larger, cost more, draw more power, and open it up to brand-new attacks.

    There is no perfect system. Take your physical lock, for instance: there is no unpickable lock. They just plumb don’t exist.
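
    To make the drift point concrete, here is a minimal sketch of standard TOTP (RFC 6238) in Python, assuming the usual 30-second step; the shared secret and the one-minute offset are made up for illustration:

        import hmac, hashlib, struct, time

        def totp(secret, unix_time, step=30, digits=6):
            # RFC 6238: the moving factor is the count of time steps since the Unix epoch.
            counter = int(unix_time // step)
            digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            # RFC 4226 dynamic truncation: read 4 bytes at the offset named by the low nibble.
            offset = digest[-1] & 0x0F
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        key = b"fob-shared-secret"    # hypothetical secret provisioned in both car and fob
        now = time.time()
        print(totp(key, now))         # code the fob generates
        print(totp(key, now - 60))    # code a car whose RTC has lost a minute expects: mismatch

    Validators typically tolerate a step or two of skew, but a clock losing a minute a day burns through that window within a day.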

  • I mean, there’s another side to this.

    Assume you have exacting control of the training data. You give it consensual sexual play, including rough play, BDSM play, and CNC (consensual non-consent) play. In this hypothetical we are 100% certain the content is consensual.

    Is the output a grey area, even if it seems like real rape?

    Now another hypothetical. A person closes their eyes and imagines raping someone. “Real” rape. Is that a grey area?

    Let’s build on that. Let’s say this person is a talented artist, and they draw out their imagined rape scene, which we are 100% certain is a non-consensual scene imagined by the artist. Is this a grey area?

    We can build on that further. What if they take the time to animate this scene? Is that a grey area?

    Where does the above cross into a problem? Is it the AI producing something that looks like real rape despite being built on consensual content? The thought of a person imagining a real rape? Putting that thought onto a still image? Animating it?

    Or is it none of them?

  • This seems overly optimistic. One thing current algorithms can’t do is adapt to previously unknown situations. Yes, they can potentially model out a solution if they have enough known factors, but they don’t currently have true problem-solving capabilities.

    Can that change? Absolutely. But the closest we’ve come is LLMs, which essentially ingest the entirety of the internet to find the “most average response” to any given situation. Give one something it’s truly never seen before and you get gibberish that merely sounds convincing. And even then, it’s just bad.