I was watching the RFK Jr. questioning today, and when Bernie was talking about healthcare and wages I felt he was the only one who gave a real damn. I also thought “Wow, he’s kinda old,” so I asked my phone how old he actually was. Gemini, however, wouldn’t answer a simple, factual question about him. What the hell? (The answer is 83 years old btw, good luck America)

  • halcyoncmdr@lemmy.world
    1 day ago

To be honest, that seems like it should be the one thing they are reliably good at. It just requires looking up info in their database, with no manipulation.

    Obviously that’s not the case, but that’s just because LLMs are currently a grift to milk billions from corporations by using the buzzwords that corporate middle management relies on to make it seem like they’re doing any work. The vendors rely on modern corporate FOMO to get companies to buy a terrible product they absolutely don’t need, at exorbitant contract prices, just so they can say they’re using the “latest and greatest” technology.

    • SmoothLiquidation@lemmy.world
      1 day ago

      To be honest, that seems like it should be the one thing they are reliably good at. It just requires looking up info in their database, with no manipulation.

      That’s not how they are designed at all. LLMs are just text predictors. If the user inputs something like “A B C D E F” then the next most likely word would be “G”.

      Companies like OpenAI try to add context to make things seem smarter, like priming the model with the current date so it won’t just respond with some date from its training data, or looking up info on specific people, but at their core they are just really big autofill text predictors.
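      A toy illustration of the “text predictor” idea (this is a deliberately simplified sketch, not how any real model actually works internally): a bigram model that just counts which word most often follows the current one and always emits that.

      ```python
      from collections import Counter, defaultdict

      # Toy "text predictor": count which word follows which in a tiny corpus,
      # then always emit the most frequent follower. Real LLMs do the same kind
      # of next-token prediction, just with a neural network over huge corpora.
      corpus = "a b c d e f g a b c d e f g a b c".split()

      followers = defaultdict(Counter)
      for cur, nxt in zip(corpus, corpus[1:]):
          followers[cur][nxt] += 1

      def predict(word):
          # Most likely next word given the current one (None if never seen).
          counts = followers.get(word)
          return counts.most_common(1)[0][0] if counts else None

      print(predict("f"))  # -> g
      ```

      Given “A B C D E F” it continues with “G” purely from statistics, with no lookup of facts anywhere, which is the point the comment above is making.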

    • DrFistington@lemmy.world
      1 day ago

      Yeah, I still struggle to see the appeal of chatbot LLMs. So it’s like a search engine, but you can’t see its sources, and sometimes it ‘hallucinates’ and gives straight-up incorrect information. My favorite was a few months ago when I was searching Google for why my cat was chewing on plastic. About halfway through the AI response at the top of the results, it went off on a tangent about how your cat may be bored and enjoys watching you shop, lol

      So basically it makes it easier to get a quick result if you’re not able to quickly and correctly parse through Google results… but the answer you get may be anywhere from zero to a hundred percent correct. And you don’t really get to double-check the sources without further questioning the chatbot. Oh, and LLMs have been shown to intentionally lie and mislead when confronted with inaccuracies they’ve given.