• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 20th, 2023

  • That is not surprising at all; it’s clear that this group absolutely worships the guy, and he clearly enjoys the attention. But to say this proves the narrative in the post, or that he’s directly involved, is still a huge stretch without actual evidence.

    People should definitely be made aware of the dangers both a16z AND ai16z pose, but not by buying the conspiracy theories they’re spreading around to further their interests.

    We’ve seen shit like this happen in crypto again and again and again. Every shitcoin and crypto fad comes with its own purported vision of the future it’s supposedly powering: with Bitcoin it was financial privacy and independence from traditional currency, with NFTs it was a utopia of creative ownership, with the metaverse it was a virtual capitalistic reality, and with this it’s apparently some crap about accelerating progress through social engineering (basically disinformation). But what they’re really most likely going to do is use chatbots to scam people into buying their coin. Because that’s all this is about.

    I need to reiterate: that Substack post is literally an ad. The person claims to work at Twitter, but also claims to have been provided the tool externally by Andreessen (the post describes Eliza as some sort of mysterious, highly advanced technology: it’s not), and then also claims to have the authority to leave publicly available “breadcrumbs” in the code of Andreessen’s tool? They also claim to be a junior dev who doesn’t understand the technical side of it, yet to have worked at Twitter on an H-1B visa, closely enough to Musk to be enrolled in this high-level illegal conspiracy against the public? It’s literally badly written fiction.


  • It’s a crypto scheme: they’re using this AI agent project to promote their coin. This is what crypto schemes do all the time, claiming that their coin is powered by, or is powering, whatever the latest tech buzzword is. A few years ago it was NFTs, then the metaverse, now it’s AI agents. It’s also extremely common for them to claim to be affiliated with or funded by Elon Musk, for obvious reasons.

    AI agents, especially if used like the project creators are implying through this fabricated narrative, are absolutely a threat to society. But that still doesn’t mean that this narrative isn’t fabricated.

    Please, please, please, don’t believe everything you read on the internet. Fact check everything, especially everything that sounds too good or too bad to be true. This is exactly how we got into the situation we’re in today, and our ability to verify information is exactly what they’re trying to take away from us.

    We all saw relatives, friends and coworkers turn into conspiracy-theory-spouting zombies back in 2020, willing to believe literally every piece of disinformation they were exposed to as long as it aligned with their fears. Then we saw many of those same people continue to spiral further into the alt-right’s destructive narrative and propaganda. We must NOT fall into the same trap. The war we’re all fighting today is a war over the meaning of truth.


  • I did a bit more research into this.

    You’re confusing a16z (Marc Andreessen) with ai16z (the people who made this and claim affiliation with Marc Andreessen). It’s a crypto scheme: they’re using this AI agent project to promote their coin. This is what crypto schemes do all the time, claiming that their coin is powered by, or is powering, whatever the latest tech buzzword is. A few years ago it was NFTs, then the metaverse, now it’s AI agents. It’s also extremely common for them to claim to be affiliated with or funded by Elon Musk, for obvious reasons.

    AI agents, especially if used like the project creators are implying through this fabricated narrative, are absolutely a threat to society. But that still doesn’t mean that this narrative isn’t fabricated.


  • my question is: should your constitution deem an action moral/immoral in some situations and the opposite in others? And if so, where and how can you define such limits, and is it good to define such limits?

    You are not going to find a clear definitive answer to that question, for the reasons I’ve explained. If we as a species had a single, universal, correct answer to that question, a solution that somehow fairly handles all the infinite variables of context, cause, effect and emotion, according to a supreme, universally pleasing standard of justice, we would be living in a utopia. Or in Heaven. We wouldn’t be here having this conversation, and we wouldn’t be constantly teasing ourselves with debates or thought exercises like “would you kill Hitler if you could?”

    YOU need to pick that answer for yourself. You have to come up with the best solution you feel comfortable with, after taking into consideration the variables of context, cause, effect and emotion to the best of your ability and knowledge, for EACH experience you have. Then you’ll have your “morals”, and those are the only ones you should follow.

    And yes, like I said before, this is complex, and scary, and difficult and absolutely exhausting. Which is exactly the reason why some people turn to religion or anything that promises the illusion of a ready, stable, immutable answer in a world that is constantly changing and constantly requires them to re-evaluate everything they know.


  • I don’t think so. Why would morality inhibit progress? Stale knowledge does prevent it, but morals don’t really change. By morals being flexible, I mean: “Killing is very bad, except in so-and-so situations, where you have to.”

    You assume that what’s considered “moral” or ethical hasn’t changed multiple times throughout history and that it isn’t subjective. Sorry to sound pedantic, but once again, it’s right in the definition of the word:

    a person’s standards of behavior or beliefs concerning what is and is not acceptable for them to do.

    And nowhere does it say that “morals” imply any degree of immutability. There are countless examples I could give. Just as a personal example, I never particularly paid much mind to the suffering of animals until I adopted a pet. I never believed getting involved in political discourse was a duty until I realized how increasingly distorted it was becoming. Many people say similar things about having children: how the experience just changes the way you see the world, your perception of what is tolerable and what is not, and ultimately your perception of “right” and “wrong”: your morals.

    If we as humans didn’t believe that we could actually influence other people’s conceptions of what’s right or wrong, there would be no point to education, history, politics, philosophy, law, religion, art, literature… culture as a whole. We wouldn’t have communication or civilization.

    My honest opinion is that what you’re truly asking here isn’t whether it’s okay or possible for morals to be flexible; you’re asking whether it’s okay to stray from what you’ve always perceived to be the general consensus on what is “moral” and what isn’t. And my answer is still yes.


  • Since you used media as an example, let me use another common trope to answer. You know how, in horror or thriller movies, a character momentarily gets the upper hand on the killer by knocking them unconscious, then just tries to run away without even making sure the killer is dead or at least arming themselves? Does that EVER end well?

    The reason that trope is so common is that it’s very effective at eliciting the sort of instinctive emotional response that makes us as viewers want to yell “WHAT THE FUCK ARE YOU DOING?? KILL HIM!!” at the screen.

    We have that instinct for a reason.

    To answer your question more directly, yes, morals ARE inherently flexible. If they weren’t, we would never learn anything or progress as a society, or even as individuals. I don’t know where the idea that someone’s morals are supposed to be immutable even comes from. One of the core steps to psychological well-being is realizing that you have no direct control over your “environment”, but that you absolutely have direct control over the actions you take to influence it and the way you adapt and react to it, and that includes letting go of standards and expectations you’ve set for yourself when you feel it’s necessary.

    Absolutes are not applicable in reality. You’ve mentioned utopias too, and well, the fun thing about utopias is that they don’t exist. They can’t exist. It’s the literal definition of the word: “an imagined place or state of things in which everything is perfect.” Dystopia, on the other hand, is what happens when you try to force a utopia into existence.

    Morals can’t be absolute. Tolerance can’t be absolute. Everything is flexible and eternally changing. It’s scary and it’s complex but people have to come to terms with it.


  • I personally think that everyone should be allowed to end their lives if they really deeply want it. But this should never be expected, actively promoted or pushed for. And I think it should involve at least a consultation with a medical professional to avoid hasty decisions due to a temporary crisis.

    I mean, yes, but I really don’t think anyone is arguing for the opposite when talking about legal euthanasia, and I find it disingenuous to even suggest it. Let’s not forget that almost anyone can commit suicide regardless of whether it’s legal or medically assisted; this has been the case, and will be the case, for the entirety of human history. Look at Japan and similar countries and societies, where the cultural and societal pressures already have the consequences you described without it being legal.

    Arguing for legal euthanasia is really just saying that people should have a safer, more informed and more dignified option if they really intend to make that decision, and guaranteeing that even the people who currently can’t end their lives on their own can still exercise that right if they want to. If you want to prevent pointless suicides, the right way to do it isn’t to take away the possibility entirely; it’s to make sure that society doesn’t give people reasons to want to kill themselves.

    EDIT: I’ve just realized that I initially misread OP’s question, which specifically asks about “voluntary” euthanasia. The comment I’m replying to is more relevant to the original discussion than my response. Still, I can’t shake off the feeling that speaking about something like this, even purely hypothetically, can only do more harm than good in current times, as it’s very easy to imagine that once the concept of “voluntary euthanasia” begins floating around, people who want to argue in bad faith against legal euthanasia will just conflate the two to make the rational side look like a death cult.


  • The whole point he’s trying to prove is that he can do something like this with no consequences, not even having to apologize. He hasn’t apologized and he won’t.

    The reason he can do that with no consequences, and you’re left here wondering what the fuck just happened and why the response you’d normally expect isn’t coming, is that the Western political environment has been artificially and methodically polarized for years in preparation for a stunt like this. Cognitive dissonance is an effective tool.