What are your thoughts on #privacy and #itsecurity regarding the #LocalLLMs you use? They seem to be an alternative to ChatGPT, MS Copilot etc., which are basically creepy privacy black boxes. How can you be sure that local LLMs A) do not “phone home”, B) do not build a profile of you, and C) restrict their analysis to the scope of your terminal? As far as I can see, #ollama and #lmstudio do not provide privacy statements.
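One partial answer to A): on Linux you can spot-check whether a supposedly local server holds any non-loopback TCP connections while it generates text. A minimal sketch, assuming Linux’s `/proc` interface (the function name `remote_tcp_peers` is mine, and note the table it reads covers the whole network namespace the process lives in, not just that one process):

```python
from pathlib import Path

def remote_tcp_peers(pid):
    """Return remote (non-loopback) IPv4 TCP peers visible to a process.

    Linux-only sketch: parses /proc/<pid>/net/tcp. While a purely local
    LLM server is generating text, this should stay empty (apart from
    whatever else shares the network namespace).
    """
    peers = []
    # First line of the file is a column header; skip it.
    for line in Path(f"/proc/{pid}/net/tcp").read_text().splitlines()[1:]:
        fields = line.split()
        rem_ip_hex, rem_port_hex = fields[2].split(":")  # rem_address column
        if rem_ip_hex in ("00000000", "0100007F"):
            continue  # unconnected socket, or loopback (127.0.0.1) peer
        # The kernel stores the IPv4 address as little-endian hex;
        # decode the four bytes in reverse order.
        ip = ".".join(str(int(rem_ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0))
        peers.append((ip, int(rem_port_hex, 16)))
    return peers
```

A stronger guarantee than observing is to remove networking entirely, e.g. run the server under `unshare --net` or `firejail --net=none` and confirm inference still works: if the model still answers prompts with no network available, generation is local by construction.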

  • toastal@lemmy.ml
    21 hours ago

D) What is AMD support like, or are the Python fanboys still focusing on Nvidia exclusively?

    • Deckweiss@lemmy.world
      19 hours ago

I’m running gpt4all on AMD. Had to figure out which packages to install, which took a while, but since then it has run just fine.

      • toastal@lemmy.ml
        3 hours ago

It is slow. Syntax & community idioms suck. The package ecosystem is a giant mess—constant dependency breakage, many supply-chain attacks, & quality all over the place, with plenty of packages shipping failing tests or non-reproducible builds—which may largely be an effect of too many places saying Python is the first language you should learn. When it comes to running Python software on my machine, it is always the buggiest, breaks the most often when new versions ship, & uses more resources than comparable tools.

When I used to program in it, I thought Python was so versatile that it was the 2nd best language at everything. I learned more languages & thought it was 3rd best… then 4th… then realized it isn’t good at anything. The only thing it has going for it is all the effort put into the big C libraries powering its math, AI, etc. ecosystems.