I have never liked Apple and lately even less. F… US monopolies

  • queermunist she/her@lemmy.ml · 2 days ago

    > I bet they did the math

    Did they? Because it seems like everyone else is in a hype bubble and doesn’t give a shit about how much this costs or how much money it makes.

    • utopiah@lemmy.ml · 2 days ago

      Looks like they did: they use the “Brakerski-Fan-Vercauteren (BFV) HE scheme, which supports homomorphic operations that are well suited for computation (such as dot products or cosine similarity) on embedding vectors that are common to ML workflows”, i.e. a scheme that is both secure and efficient for precisely the kind of computation they do here. https://machinelearning.apple.com/research/homomorphic-encryption
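For anyone unfamiliar with the terms in that quote: “cosine similarity on embedding vectors” is just a normalized dot product. Here is a minimal plaintext sketch of that computation, not Apple’s implementation; under a scheme like BFV only the additions and multiplications of the dot product run on encrypted data, and embeddings are typically normalized beforehand so the similarity reduces to exactly that.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plaintext sketch).

    The dot product uses only additions and multiplications, which is
    the part an HE scheme like BFV can evaluate on encrypted vectors;
    division and square roots are avoided by pre-normalizing inputs.
    """
    dot = sum(x * y for x, y in zip(a, b))        # HE-friendly part
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score ~1.0, orthogonal ones 0.0.
print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # ≈ 1.0
```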

    • someacnt@sh.itjust.works · 2 days ago

      At least it’s not going to be the overhyped LLM doing the analysis, it seems, considering the input is photo data.

          • utopiah@lemmy.ml · 1 day ago

            Well, to be fair, and even though I did spend a bit of time writing about the broader AI hype cycle (https://fabien.benetou.fr/Analysis/AgainstPoorArtificialIntelligencePractices), LLMs are not in themselves “bad”. It’s an interesting idea to rely on our ability to produce and use language to describe a lot of useful things around us, so using statistics on it to try to find matches is actually pretty smart. Now, so many things have gone badly over the last few years that I won’t even start (cf. the link), but the concept per se makes sense to rely on sometimes.