Office Space meme:

“If y’all could stop calling an LLM ‘open source’ just because they published the weights… that would be great.”

  • maplebar@lemmy.world · 2 days ago

    Yeah, this shit drives me crazy. Putting aside the fact that it all runs off stolen data from regular people who are being exploited, most of this “AI” stuff is basically just freeware, if anything. It’s about as “open source” as Winamp was back in the day.

    • Prunebutt@slrpnk.net (OP) · 2 days ago

      I’m including Facebook’s LLM in my critique. And I dislike the current hype around LLMs, no matter where they’re developed.

      And LLMs are not “AI”. I’ve been calling them “so-called ‘AIs’” since waaay before this.

  • LovableSidekick@lemmy.world · 2 days ago

    Or like a human learning from all the examples of the people who came before us, without paying them. Aka normal life.

  • Dkarma@lemmy.world · 2 days ago

    I mean, that’s all a model is, so… Once again, someone who doesn’t understand anything about training or models is posting borderline misinformation about AI.

    Shocker

    • FooBarrington@lemmy.world · 2 days ago

      A model is an artifact, not the source. We also don’t call binaries “open-source”, even though they are literally the code that’s executed. Why should these phrases suddenly get turned upside down for AI models?

    • intensely_human@lemm.ee · 2 days ago

      A model can be represented only by its weights in the same way that a codebase can be represented only by its binary.

      Training data is a closer analogue of source code than the weights are.
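
      To make the analogy concrete, here’s a minimal PyTorch sketch (the model and file name are invented for illustration): the architecture is readable code, the “source” side of the analogy, while the saved weights are an opaque artifact, much like a shipped binary.

      ```python
      import torch
      import torch.nn as nn

      # The architecture is ordinary, readable code: the "source" of the system.
      class TinyLM(nn.Module):
          def __init__(self, vocab_size=50_000, dim=512):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, dim)
              self.proj = nn.Linear(dim, vocab_size)

          def forward(self, tokens):
              return self.proj(self.embed(tokens))

      model = TinyLM()

      # "Publishing the weights" means shipping this opaque tensor blob, the
      # analogue of a compiled binary: it runs, but it doesn't tell you how
      # it was produced.
      torch.save(model.state_dict(), "tiny_lm.pt")
      model.load_state_dict(torch.load("tiny_lm.pt"))
      ```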

    • Prunebutt@slrpnk.net (OP) · 2 days ago

      Yet another so-called AI evangelist accusing others of not understanding computer science if they don’t want to worship their machine god.

        • Prunebutt@slrpnk.net (OP) · 2 days ago

          It’s not like you need specific knowledge of transformer models and whatnot to argue against LLM bandwagon simps. A basic knowledge of machine learning is fine.

              • surph_ninja@lemmy.world · 2 days ago

                I mean, if you both think this is overhyped nonsense, then by all means short some Nvidia stock. If you know something the hedge fund teams don’t, why not sell your insider knowledge and become rich?

                Or maybe you guys don’t understand it as well as you think. Could be either, I guess.

                • Prunebutt@slrpnk.net (OP) · 2 days ago

                  Yeah, let’s all base our decisions and definitions on what the stock market dictates. What could possibly go wrong?

                  /s 🙄

                • Poik@pawb.social · 2 days ago

                  Because over-hyped nonsense is what the stock market craves… That’s how this works. That’s how all of this works.

                • FooBarrington@lemmy.world · 2 days ago

                  I didn’t say it is all overhyped nonsense; my only point is that I agree with the opinion stated in the meme, and I don’t think people who disagree really understand AI models or what “open source” means.

  • Jocker@sh.itjust.works · 2 days ago

    Even worse is calling a proprietary, absolutely closed-source, closed-data and closed-weight company “OpenAI”.

  • Xerxos@lemmy.ml · 2 days ago

    The training data would be incredibly big. And it would contain copyright-protected material (which is completely okay in my opinion, but might invite criticism). Hell, it might even be illegal to publish the training data with the copyright-protected material in it.

    They published the weights AND their training methods, which is about as open as it gets.
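
    As a rough illustration (the repository name below is a placeholder, not a real model), this is what “published the weights” buys you in practice via the Hugging Face transformers API:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Placeholder repo name; any open-weights model on the Hub works this way.
    repo = "some-org/some-open-weights-model"
    tokenizer = AutoTokenizer.from_pretrained(repo)
    model = AutoModelForCausalLM.from_pretrained(repo)

    # You can run (and fine-tune) the model entirely on your own hardware...
    inputs = tokenizer("Open source means", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    # ...but nothing in the downloaded artifacts reveals what data trained it.
    ```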

    • rumba@lemmy.zip · 2 days ago

      Hell, for all we know it could be full of classified data. I guess, depending on what country you’re in, it definitely is full of classified data…

    • Prunebutt@slrpnk.net (OP) · 2 days ago

      They could disclose how they sourced the training data, what the training data is, and how you could source it yourself. Also, did they publish their hyperparameters?

      They could just not call it Open Source if you can’t open-source it.
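
      For illustration, the kind of disclosure being asked for might look something like this; every value below is invented, not anyone’s actual configuration:

      ```python
      # Hypothetical training disclosure; all numbers are made up.
      training_config = {
          "optimizer": "AdamW",
          "peak_learning_rate": 3e-4,
          "lr_schedule": "cosine decay with linear warmup",
          "warmup_steps": 2_000,
          "context_length": 4_096,
          "global_batch_size_tokens": 4_000_000,
          "total_training_tokens": 15_000_000_000_000,
          # The part that usually goes undisclosed:
          "data_sources": [
              "web crawl (which snapshots? filtered how?)",
              "licensed corpora (which ones? under what terms?)",
          ],
      }
      ```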

      • Naia@lemmy.blahaj.zone · 2 days ago

        For neural nets, the method matters more. Data would be useful, but at the scale these things get trained on, the specific data matters little.

        They can be trained on anything, and a diverse enough data set would end up making the model function more or less the same as a different but equally diverse set. Assuming publicly available data is in the set, there would also be overlap.

        The training data is also by necessity going to be orders of magnitude larger than the model itself. Sharing becomes impractical at a certain point, before you even factor in the other issues.
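
        A quick back-of-the-envelope check of the size claim (all figures are rough assumptions, not any particular model’s published numbers):

        ```python
        # Rough scale comparison: shipped weights vs. training text.
        params = 70e9            # assume a 70B-parameter model
        bytes_per_param = 2      # fp16/bf16 storage
        weights_gb = params * bytes_per_param / 1e9     # ~140 GB

        train_tokens = 15e12     # assume ~15T training tokens
        bytes_per_token = 4      # very roughly 4 bytes of UTF-8 text per token
        data_gb = train_tokens * bytes_per_token / 1e9  # ~60,000 GB (~60 TB)

        print(f"weights: ~{weights_gb:.0f} GB")
        print(f"training text: ~{data_gb:,.0f} GB")
        print(f"ratio: ~{data_gb / weights_gb:.0f}x")   # ~430x larger
        ```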

        • Poik@pawb.social · 2 days ago

          That… doesn’t align with years of research. Data is king. As someone who specifically studies long-tail distributions and few-shot learning (before succumbing to long COVID, sorry if my response is a bit scattered), throwing more data at a problem always improves it more than the method does. And the method can only be simplified with more data. The exceptions are some neat tricks that modern deep learning has decided are hogwash and “classical”, but most of those don’t scale to the sizes being looked at here.

          Also, datasets inherently impose bias upon networks, and it’s easier to create adversarial examples that fool two networks trained on the same data than to fool the same network trained twice, freshly, on different data.

          Sharing metadata and acquisition methods is important and should be the gold standard. Sharing network methods is also important, but that’s kind of the silver standard, just because most modern state-of-the-art models differ so minutely from each other in performance nowadays.

          Open source as a term should require both. This was the standard in the academic community before tech bros started running their mouths, and it should be the standard once they leave us alone.
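
          As a concrete sketch of the adversarial-example point, here is the classic Fast Gradient Sign Method in PyTorch; `model_a` and `model_b` below are hypothetical stand-ins for two separately trained classifiers:

          ```python
          import torch
          import torch.nn.functional as F

          def fgsm_attack(model, x, y, epsilon=0.03):
              """Fast Gradient Sign Method: push the input in the direction
              that increases the loss the most, bounded by epsilon."""
              x = x.clone().detach().requires_grad_(True)
              loss = F.cross_entropy(model(x), y)
              loss.backward()
              return (x + epsilon * x.grad.sign()).detach()

          # Usage sketch: craft examples against model_a, then measure how
          # often they also fool model_b. Models trained on the same data
          # tend to share blind spots, so transfer rates are higher.
          # x_adv = fgsm_attack(model_a, images, labels)
          # transfer = (model_b(x_adv).argmax(dim=1) != labels).float().mean()
          ```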

  • Azenis@lemmy.world · 2 days ago

    Open source will eventually surpass all closed-source software someday, no matter how many billions of dollars are invested in it.

    • Maalus@lemmy.world · 2 days ago

      Never have I used open source software that has achieved that, or was even close to achieving it. Usually it is opinionated (you need to do it this way, in this exact order, because that’s how we coded it; no, you cannot do the same thing but select from the back), lacks features, and breaks. Especially CAD: compare SolidWorks to FreeCAD, where in FreeCAD any change to previous operations just breaks everything. Modelling software too: Blender can’t do half the things 3ds Max can.

      • jj4211@lemmy.world · 2 days ago

        • 7-Zip
        • VLC
        • OBS
        • Firefox did it, only to mostly falter to Chrome, but Chrome is largely Chromium, which is open source.
        • Linux (superseded all the Unixes, very severely curtailed the Windows Server market)
        • Nearly all programming language tools (IDEs, compilers, interpreters)
        • Essentially the entire command-line ecosystem (obviously on the *nix side, but MS was pretty much compelled to open-source PowerShell and their new Terminal to try to compete)

        In some contexts you aren’t going to have a lively enough community to drive a compelling product, even where there’s enough revenue for a company to make a go of it, but to say “no open source software has achieved that” is a bit much.

      • Test_Tickles@lemmy.world · 2 days ago

        While I completely agree with 90% of your comment, that first sentence is gross hyperbole. I have used a number of open source options that are clearly better. 7-Zip is a perfect example: for over a decade it was vastly superior to anything else, open or closed. Even now it may be showing its age a bit, but it is still one of the best options.
        As for the rest of your statement, I completely agree. And yes, CAD is a perfect example of the problems faced by open source. I made the mistake of thinking that I should start learning CAD with open source, so I wouldn’t have to worry about getting locked into any of the closed-source solutions. But FreeCAD is such a mess. I admit it has gotten drastically better over the last few years, but it still has serious issues. Don’t get me wrong, I still 100% recommend that people learn it, but I push them towards a number of closed-source options to start with. FreeCAD is for advanced users only.

  • Ugurcan@lemmy.world · 2 days ago

    There are lots of problems with the new lingo. We need to come up with new words.

    How about “Open Weights”?

  • thespcicifcocean@lemmy.world · 2 days ago

    It’s not just the weights, though, is it? You can download the training data they used, and run your own instance of the model completely separate from their servers.

  • acargitz@lemmy.ca · 2 days ago

    Arguably they are a new type of software, which is why the old categories do not align perfectly. Instead of arguing over how to best gatekeep the old name, we need a new classification system.

    • Poik@pawb.social · 2 days ago

      … Statistical engines are older than personal computers, with the first statistical package developed in 1957. And AI professionals would have called them trained models. The interpreter is code, the weights are not. We have had terms for these things for ages.

        • Aqarius@lemmy.world · 2 days ago

          Well, yes, but usually it’s the code that’s the main deal and the part that’s open, and the data is what you do with it. Here, the weights seem to be “it”, so to speak.

    • Prunebutt@slrpnk.net (OP) · 2 days ago

      There were efforts. Facebook didn’t like those, since their models wouldn’t be considered open source anymore.

    • Treczoks@lemmy.world · 2 days ago

      On the contrary. What they open-sourced was just a small part of the project. What they did not open-source is what makes the AI tick. Having less than one percent of a project open-sourced does not make it an “Open Source” project.

    • Preflight_Tomato@lemm.ee · 2 days ago

      Yes please, let’s use this term, and reserve “Open Source” for its existing definition in the academic ML setting of weights, methods, and training data. These models don’t readily fit into existing terminology, for structural and logistical reasons, but when someone says “it’s got open weights”, I know exactly what set of licenses and implications it may have without further explanation.