• trevor@lemmy.blahaj.zone · 3 days ago

    That’s fine if you think the algorithm is the most important thing. I think the training data is equally important, and I’m so frustrated by the bastardization of the meaning of “open source” as it’s applied to LLMs.

    It’s like a normal software product providing a thin wrapper over a proprietary library that you must link against, and then calling the project open source. The wrapper is open, but the actual substance that provides the functionality isn’t.

    It’d be fine if we could just use more honest language like “open weights”, but “open source” means something different.

      • trevor@lemmy.blahaj.zone · edit-2 · 3 days ago

        Yes. That solution would be to not lie about it by calling something that isn’t open source “open source”.

          • trevor@lemmy.blahaj.zone · 2 days ago

            I mean, god bless 'em for stealing already-stolen data from scumfuck tech oligarchs and causing a multi-billion dollar devaluation in the AI bubble. If people could just stop laundering the term “open source”, that’d be great.

            • KeenFlame@feddit.nu · 2 days ago

              I don’t really think they’re stealing, because I don’t believe publicly available information can be property. The algorithm is open source, so the label is correct.

        • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 3 days ago

          There’s plenty of debate on what qualifies as an open source model, last I checked, but I wasn’t expecting honesty from you there anyway.