• lemmydividebyzero@reddthat.com
    17 days ago

    AGI is currently just a buzzword anyway…

    Microsoft’s contracts define AGI in terms of dollars of earnings…

    If you traveled back in time five years and showed the current best GPT to someone, they would probably accept it as AGI.

    I’ve seen multiple experts on German television explaining that LLMs will reach AGI within a few years…

    (That does not mean the CEO guy isn’t a fool. Let’s wait for the first larger problem that requires not writing new code, but dealing with a bug, something undocumented, or similar…)

    • cynar@lemmy.world
      17 days ago

      LLMs can’t become AGIs. They have no ability to actually reason. What they can do is use predigested reasoning to fake it. It’s particularly obvious with certain classes of problems, where they fall down. I think the fact they fake it so well tells us more about human intelligence than about AI.

      That being said, LLMs will likely be a critical part of a future AGI. Right now, they are a lobotomised speech centre. Different groups are already starting to tie them to other forms of AI. If we can crack building a reasoning engine, then a full AGI is possible. An LLM might even serve as its internal communication method, akin to our internal monologue.