• Turbonics@lemmy.sdf.org · 15 days ago

    BBC is probably salty the AI is able to insert the word Israel alongside a negative term in the headline

    • Krelis_@lemmy.world · 14 days ago

      Some examples of inaccuracies found by the BBC included:

      Gemini incorrectly said the NHS did not recommend vaping as an aid to quit smoking

      ChatGPT and Copilot said Rishi Sunak and Nicola Sturgeon were still in office even after they had left

      Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

  • buddascrayon@lemmy.world · 14 days ago

    That’s why I avoid them like the plague. I’ve even changed almost every platform I’m using to get away from the AI-pocalypse.

    • Echo Dot@feddit.uk · 14 days ago

      I can’t stand the corporate double think.

      Despite the mountains of evidence that AI is not capable of something even as basic as reading an article and telling you what it is about, it’s still apparently going to replace humans. How do they come to that conclusion?

      The world won’t be destroyed by AI, It will be destroyed by idiot venture capitalist types who reckon that AI is the next big thing. Fire everyone, replace it all with AI; then nothing will work and nobody will be able to buy anything because nobody has a job.

      Cue global economic collapse.

      • vxx@lemmy.world · 14 days ago

        It’s a race, and bullshitting brings venture capital and therefore an advantage.

        99.9% of AI companies will go belly up when Investors start asking for results.

  • NutWrench@lemmy.world · 15 days ago

    But AI is the wave of the future! The hot, NEW thing that everyone wants! ** furious jerking off motion **

    • Bilb!@lem.monster · 14 days ago

      Yeah, haha

      Perplexity misquoted BBC News in a story about the Middle East, saying Iran initially showed “restraint” and described Israel’s actions as “aggressive”

      Perplexity did fail to summarize the article, but it did correct it.

    • addie@feddit.uk · 15 days ago

      Dunno why you’re being downvoted. If you’re wanting a somewhat right-wing, pro-establishment, slightly superficial take on the news, mixed in with lots of “celebrity” frippery, then the BBC have got you covered. Their chairmen have historically been a list of old Tories, but that has never stopped the Tory party from accusing their news of being “left leaning” when it blatantly isn’t.

  • brucethemoose@lemmy.world · 16 days ago

    What temperature and sampling settings? Which models?

    I’ve noticed that the AI giants seem to be encouraging “AI ignorance”: they just want you to use their stupid subscription app without questioning it, instead of understanding how the tools work under the hood. They also default to bad, cheap models.

    I find my local thinking models (FuseAI, Arcee, or Deepseek 32B at the moment) are quite good at summarization at a low temperature, which is not what these UIs default to, and I get to use better sampling algorithms than any of the corporate APIs. Same with “affordable” flagship API models (like base Deepseek, not R1). But small Gemini/OpenAI API models are crap, especially with default sampling, and Gemini 2.0 in particular seems to have regressed.

    My point is that LLMs as locally hosted tools you understand the mechanics/limitations of are neat, but how corporations present them as magic cloud oracles is like everything wrong with tech enshittification and crypto-bro type hype in one package.
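    To illustrate the low-temperature point above: temperature is just a divisor applied to the logits before softmax. The numbers below are made up, and this is the textbook formulation, not any particular vendor’s implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature before softmax.
    Lower temperature concentrates probability on the top token."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                      # made-up scores for three tokens
cool = softmax_with_temperature(logits, 0.3)  # low temp: near-deterministic
warm = softmax_with_temperature(logits, 1.5)  # high temp: flatter, "creative"
```

    At 0.3 nearly all the probability mass lands on the top token, which is why low-temperature summaries stay grounded; at 1.5 the tail tokens get real probability.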

    • Eheran@lemmy.world · 16 days ago

      It’s rare that people argue for LLMs like that here; usually it’s the same kind of “uga suga, AI bad, it didn’t already solve world hunger”.

      • brucethemoose@lemmy.world · 15 days ago

        Lemmy is understandably sympathetic to self-hosted AI, but I get chewed out or even banned literally anywhere else.

        In one fandom (the Avatar fandom), there used to be enthusiasm for a “community enhancement” of the original show since the official DVD/Blu-ray looks awful. Years later in a new thread, I don’t even mention the word “AI,” just the idea of restoration, and I got bombed and threadlocked for the mere tangential implication.

      • Nalivai@lemmy.world · 15 days ago

        What a nuanced representation of the position; I just feel the trustworthiness oozing out of the screen.
        In case you’re using a random-word-generation machine to summarise this comment for you: that was sarcasm, and I meant the opposite.

          • Nalivai@lemmy.world · 14 days ago

            Ask a forest-burning machine to read the surrounding threads for you; then you will find the arguments you’re looking for. You have at least an 80% chance it will produce something coherent, and an unknown chance of there being something correct, but hey, reading is hard, amirite?

            • Eheran@lemmy.world · 14 days ago

              “If you try hard you might find arguments for my side”

              What kind of meta-argument is that supposed to be?

              • Nalivai@lemmy.world · 14 days ago

                If you read what people write, you will understand what they’re trying to tell you. Shocking concept, I know. It’s much easier to imagine someone in your head, paint them as a soyjack and yourself as a chadjack, and epicly win an argument.

    • jrs100000@lemmy.world · 15 days ago

      They were actually really vague about the details. The paper itself says they used GPT-4o for ChatGPT, but apparently they didn’t even note which versions of the other models were used.

    • 1rre@discuss.tchncs.de · 16 days ago

      I’ve found Gemini overwhelmingly terrible at pretty much everything; it responds more like a 7B model running on a home PC, or a model from two years ago, than a mid-sized commercial model, in how it completely ignores what you ask and just latches on to keywords. It’s almost like they’ve played with their tokenisation, or trained it exclusively for providing tech support where it links you to an irrelevant article or something.

      • brucethemoose@lemmy.world · 15 days ago

        Gemini 1.5 used to be the best long context model around, by far.

        Gemini Flash Thinking from earlier this year was very good for its speed/price, but it regressed a ton.

        Gemini 1.5 Pro is literally better than the new 2.0 Pro in some of my tests, especially long-context ones. I dunno what happened there, but yes, they probably overtuned it or something.

    • paraphrand@lemmy.world · 16 days ago

      I don’t think giving the temperature knob to end users is the answer.

      Turning it to the minimum for max correctness and low creativity won’t work in an intuitive way.

      Sure, turning it up from the balanced middle value will make it more “creative” and unexpected, and this is useful for idea generation, etc. But a knob that goes from “good” to “sort of off the rails, but in a good way” isn’t a great user experience for most people.

      Most people understand this stuff as intended to be intelligent, correct, etc., or at least understand that’s the goal. Once you give them a knob to adjust the “intelligence level,” you’ll get more pushback when these things don’t meet their goals. “I clearly had it in factual/correct/intelligent mode, not creativity mode. I don’t understand why it left out these facts and invented a back story for this small thing mentioned…”

      Not everyone is an engineer. Temp is an obtuse thing.

      But you do have a point about presenting these as cloud genies that will do spectacular things for you. This is not a great way to be executing this as a product.

      I loathe how these things are advertised by Apple, Google and Microsoft.

      • Eheran@lemmy.world · 16 days ago

        This is really a non-issue, as the LLM itself should have no problem setting a reasonable value on its own. The user wants a summary? Obviously maximum factual. They want gaming ideas? Etc.
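        A sketch of that idea: route each request category to its own sampling preset. The categories and numbers here are illustrative guesses, not any vendor’s real defaults:

```python
# Hypothetical task -> sampling-preset router. Values are illustrative only.
PRESETS = {
    "summary":    {"temperature": 0.2, "top_p": 0.9},   # stick to the source
    "code":       {"temperature": 0.1, "top_p": 0.95},  # deterministic output
    "brainstorm": {"temperature": 1.0, "top_p": 1.0},   # wander freely
}

def settings_for(task: str) -> dict:
    # Unknown tasks fall back to a conservative middle ground.
    return PRESETS.get(task, {"temperature": 0.7, "top_p": 0.9})
```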

        • brucethemoose@lemmy.world · 15 days ago

          For local LLMs, this is an issue because it breaks your prompt cache and slows things down, without a specific tiny model to “categorize” text… which few have really worked on.

          I don’t think the corporate APIs or UIs even do this. You are not wrong, but it’s just not done for some reason.

          It could be that the trainers don’t realize it’s an issue. For instance, 0.5–0.7 is the recommended range for Deepseek R1, but I find much lower or slightly higher is far better, depending on the category and other sampling parameters.

      • brucethemoose@lemmy.world · 16 days ago
        • Temperature isn’t even “creativity” per se; it’s more a band-aid to patch looping and dryness in long responses.

        • Lower temperature is much better with modern sampling algorithms, e.g. MinP, DRY, and maybe dynamic temperature like Mirostat. Ideally structured output, too. Unfortunately, corporate APIs usually don’t offer any of this.

        • It can be mitigated with finetuning against looping/repetition/slop, but most models are the opposite, massively overtuning on their own output which “inbreeds” the model.

        • And yes, domain-specific queries are best. Basically the user needs separate prompt boxes for coding, summaries, creative suggestions and so on, each with its own tuned settings (and ideally tuned models). You’re right, this is a much better idea than offering a temperature knob to the user, but most UIs don’t even do this for some reason.

        What I’m getting at is that this is not a problem companies seem interested in solving. They want to treat users as idiots without the attention span to even categorize their question.
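        For reference, the MinP idea mentioned above boils down to a one-line filter over the token distribution; this is a minimal stand-in with made-up probabilities, not the exact llama.cpp implementation:

```python
def min_p_filter(probs, min_p=0.1):
    """Drop tokens below min_p * max(probs), then renormalize.
    A minimal stand-in for the MinP sampler, not production code."""
    threshold = min_p * max(probs)
    kept = [p if p >= threshold else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

probs = [0.6, 0.3, 0.05, 0.05]          # made-up token probabilities
filtered = min_p_filter(probs)          # threshold = 0.06, tail gets cut
```

        Unlike a fixed top-k, the cutoff scales with the model’s confidence: when the top token is weak, more of the tail survives.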

    • MoonlightFox@lemmy.world · 14 days ago

      I have been pretty impressed by Gemini 2.0 Flash.

      It’s slightly worse than the very best on the benchmarks I’ve seen, but it’s pretty much instant and incredibly cheap. Maybe a loss leader?

      Anyways, which model of the commercial ones do you consider to be good?

      • brucethemoose@lemmy.world · 14 days ago

        benchmarks

        Benchmarks are so gamed that even Chatbot Arena is kinda iffy. TBH you have to test them with your own prompts.

        Honestly I’m getting incredible/creative responses from Deepseek R1; the hype is real, though it’s frequently overloaded. Tencent’s API is a bit underrated. If Llama 3.3 70B is smart enough for you, the Cerebras API is super fast.

        Qwen Max is… not bad? The reasoning models kinda spoiled me, but I think they have more reasoning releases coming.

        MiniMax is ok for long context, but I still tend to lean on Gemini for this.

        I dunno about Claude these days, as it’s just so expensive. I haven’t touched OpenAI in a long time.

        Oh, and sometimes “weird” finetunes you can find on OpenRouter or whatever will serve niches much better than “big” API models.

        • MoonlightFox@lemmy.world · 14 days ago

          So there aren’t any trustworthy benchmarks I can currently use to evaluate them? That, in combination with my personal anecdotes, is how I’ve been evaluating them.

          I was pretty impressed with Deepseek R1. I used their app, but not for anything sensitive.

          I don’t like that OpenAI defaults to a model I can’t pick. I have to select it each time; even when I use a special URL, it changes after the first request.

          I’m having a hard time deciding which models to use, beyond a random mix of o3-mini-high, o1, Sonnet 3.5 and Gemini 2 Flash.

          • brucethemoose@lemmy.world · 14 days ago

            Heh, only obscure ones that they can’t game, and only if they fit your use case. One example is the ones in EQ bench: https://eqbench.com/

            …And again, the best mix of models depends on your use case.

            I can suggest using something like Open Web UI with APIs instead of native apps. It gives you a lot more control, more powerful tooling to work with, and the ability to easily select and switch between models.
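            Those OpenAI-compatible backends all take the same request shape, so switching models is just a string change. This sketch only builds the request (it doesn’t send anything); the URL and model name are placeholders, not real endpoints:

```python
# Builds (but does not send) an OpenAI-compatible chat request. Any
# llama.cpp/TabbyAPI-style server exposing /v1/chat/completions accepts
# this shape; swap the "model" field to switch models per query.
def chat_request(model: str, prompt: str, temperature: float = 0.3) -> dict:
    return {
        "url": "http://localhost:8080/v1/chat/completions",  # placeholder
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "temperature": temperature,
        },
    }

req = chat_request("deepseek-r1", "Summarize this article: ...")
```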

  • db0@lemmy.dbzer0.com · 16 days ago

    As always, never rely on LLMs for anything factual. They’re only good for things with a massive tolerance for error, such as entertainment (e.g. RPGs).

    • kat@orbi.camp · 15 days ago

      Or at least as an assistant in a field you’re an expert in. Love using it for boilerplate at work (tech).

    • kboy101222@sh.itjust.works · 15 days ago

      I tried using it to spitball ideas for my DMing. I was running a campaign set in a real-life location known for a specific thing. Even if I told it not to include that thing, it would still shoehorn it in at random spots. It quickly became absolutely useless once I didn’t need that thing included.

      Sorry for being vague, I just didn’t want to post my home town on here

    • Eheran@lemmy.world · 16 days ago

      Nonsense, I use it a ton for science and engineering, it saves me SO much time!

      • Atherel@lemmy.dbzer0.com · 15 days ago

        Do you blindly trust the output or is it just a convenience and you can spot when there’s something wrong? Because I really hope you don’t rely on it.

          • Nalivai@lemmy.world · 15 days ago

            In which case you probably aren’t saving time. Checking bullshit is usually harder and takes longer than just researching it yourself. Or it should be, if you do due diligence.

            • Womble@lemmy.world · 15 days ago

              It’s nice that you inform people that they can’t tell whether something is saving them time, without knowing what their job is or how they’re using the tool.

              • WagyuSneakers@lemmy.world · 15 days ago

                If they think AI is working for them, then sure. But if you think AI is an effective tool for any profession, you’re a clown. If my son’s preschool teacher used it to make a lesson plan, she would be incompetent. If a plumber asked it what kind of wrench he needed, he would be kicked out of my house. If an engineer on one of my teams uses it to write code, he gets fired.

                AI “works” because you’re asking questions you don’t know the answer to, and it’s just putting words together so they make sense, without regard to accuracy. It’s a hard limit of “AI” that we’ve hit. It won’t get better in our lifetimes.

                • stephen01king@lemmy.zip · 14 days ago

                  Anyone blindly saying a tool is ineffective for every situation that exists in the world is a tool themselves.

          • otp@sh.itjust.works · 15 days ago

            Y’know, a lot of the hate against AI seems to mirror the hate against Wikipedia, search engines, the internet, and even computers in the past.

            Do you just blindly believe whatever it tells you?

            It’s not absolutely perfect, so it’s useless.

            It’s all just garbage information!

            This is terrible for jobs, society, and the environment!

            • Eheran@lemmy.world · 15 days ago

              You know what… now that you say it, it really is just like the anti-Wikipedia stuff.

    • 1rre@discuss.tchncs.de · 16 days ago

      The issue for RPGs is that they have such “small” context windows, and a big point of RPGs is that anything could be important, investigated, or just come up later

      Although, similar to how Deepseek uses two stages (“how would you solve this problem”, then “solve this problem following this train of thought”), you could have an input of recent conversations plus a private/unseen “notebook” that gets modified and appended to as events unfold. That would need a whole new model to be done properly, which likely wouldn’t be profitable short term, though I imagine the same infrastructure could be used for any LLM usage where fine details over a long period matter more than specific wording, including factual things.
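      A toy sketch of that notebook idea. Everything here is hypothetical: in a real system a model, not the caller, would write the distilled notes.

```python
# Toy "private notebook" memory: keep only the last few turns verbatim,
# plus distilled notes that might matter later, so the prompt stays small.
class CampaignMemory:
    def __init__(self, window=2):
        self.window = window        # recent turns kept word-for-word
        self.turns = []
        self.notebook = []          # small facts that could come up later

    def add_turn(self, text, note=""):
        self.turns.append(text)
        if note:                    # hypothetically written by a helper model
            self.notebook.append(note)

    def build_prompt(self):
        recent = self.turns[-self.window:]
        return ("Notes:\n" + "\n".join(self.notebook)
                + "\n\nRecent:\n" + "\n".join(recent))

mem = CampaignMemory(window=2)
mem.add_turn("The party enters the tavern.",
             note="The innkeeper hides a key under the bar.")
mem.add_turn("They order drinks.")
mem.add_turn("A stranger approaches.")
prompt = mem.build_prompt()  # old turns fall out, but the note survives
```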

      • db0@lemmy.dbzer0.com · 15 days ago

        The problem is that the “train of thought” is also hallucinations. It might make the model better with more compute, but it’s diminishing returns.

        RPGs can use LLMs because they’re not critical. If the LLM spews out nonsense you don’t like, you just ask it to redo it, because it’s all subjective.

  • mentalNothing@lemmy.world · 16 days ago

    Idk guys. I think the headline is misleading. I had an AI chatbot summarize the article and it says AI chatbots are really, really good at summarizing articles. In fact it pinky promised.

  • IninewCrow@lemmy.ca · 16 days ago

    The owners of LLMs don’t care about ‘accurate’ … they care about ‘fast’ and ‘summary’ … and especially ‘profit’ and ‘monetization’.

    As long as it’s quick, delivers instant content and makes money for someone, no one cares about ‘accurate’.

    • heavydust@sh.itjust.works · 15 days ago

      Not only techbros, though. Most of my friends aren’t into computers, but they all think AI is magical and will change the whole world for the better. I always ask, “How could a black box that throws up random crap and runs on the computers of big companies out of the country change anything?” They don’t know what to say, but they still believe something will happen and that a program can magically become sentient. Sometimes they can be fucking dumb, but I still love them.

      • shrugs@lemmy.world · 14 days ago

        The more you know what you’re doing, the less impressed you are by AI. Calling people who trust AI idiots is not a good start to a conversation, though.

        • Echo Dot@feddit.uk · 14 days ago

          It’s not like they’re flat earthers; they aren’t conspiracy theorists. They’ve been told by the media, businesses, and every goddamn YouTuber that AI is the future.

          I don’t think they’re idiots; I just think they’re being lied to and are a bit gullible. But it’s not worth having the argument with them. AI is going to fail on its own; it doesn’t matter what they think.

  • Phoenicianpirate@lemm.ee · 15 days ago

    I learned that AI chatbots aren’t necessarily trustworthy in everything. In fact, if you aren’t taking their shit with a grain of salt, you’re doing something very wrong.

      • milicent_bystandr@lemm.ee · 14 days ago

        Super knowledgeable but with patchy knowledge, so they’ll confidently say something that practically everyone else in the company knows is flat out wrong.

      • Phoenicianpirate@lemm.ee · 14 days ago

        I noticed that. When I ask it about things I’m knowledgeable about, or simply want to troubleshoot, I often find myself having to correct it. This makes me hesitant to follow its instructions on something I DON’T know much about.

    • Redex@lemmy.world · 15 days ago

      This is my personal take. As long as you’re careful and thoughtful whenever using them, they can be extremely useful.

      • Echo Dot@feddit.uk · 14 days ago

        Could you tell me what you use it for? Because I legitimately don’t understand what I’m supposed to find helpful about the thing.

        We all got sent an email at work a couple of weeks back telling everyone that they want ideas for a meeting next month about how we can incorporate AI into the business. I’m heading IT, so I’m supposed to be able to come up with some kind of answer and yet I have nothing. Even putting aside the fact that it probably doesn’t work as advertised, I still can’t really think of a use for it.

        The main problem is it won’t be able to operate our ancient and convoluted ticketing system, so it can’t actually help.

        Everyone I’ve ever spoken to has said that they use it for DMing or story prompts. All very nice but not really useful.

          • Echo Dot@feddit.uk · 14 days ago

            I think my largest gripe with it is that it can’t actually do anything; it can just tell you about stuff.

            I can ask it how to change the desktop background on my computer and it will 100% be able to tell me, but if you then prompt it to change the background itself, it can’t. It has zero ability to interact with the computer; this is even the case with AI run locally.

            It can’t move the mouse around or send keyboard commands.

            • WraithGear@lemmy.world · 14 days ago

              Um… yeah? It’s not supposed to? Let’s ignore how dangerous and foolish it would be to give LLMs admin control of a system; the thing that prevents it from doing that is that the LLM has no mechanism to do it. The best it could do is ask you to open a command line and give you some code to paste in. It’s kinda like asking Siri to preheat your oven: she doesn’t have access to your oven’s systems.

              You COULD get a digital-only stove, and the LLM could be given tooling to reach outside itself, but it’s not there yet, and with how much Siri misinterprets things, there would be a lot more fires.