I have never liked Apple and lately even less. F… US monopolies

  • reddig33@lemmy.world · 2 days ago

    I don’t really understand the purpose of the feature — GPS tags are already embedded in the photo by the phone, so it knows the location of each picture. The phone also analyzes faces of people you’ve identified so you can search for people you know. What else does this new feature add?
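
    For context, those GPS tags sit in the photo’s EXIF data and can be read with standard tooling. A minimal sketch using Pillow, with a made-up file name; this is generic EXIF handling, not anything specific to Apple’s feature:

    ```python
    # Read the GPS coordinates a phone embeds in a photo's EXIF data.
    from PIL import Image
    from PIL.ExifTags import GPSTAGS

    img = Image.open("IMG_1234.jpg")          # hypothetical photo path
    gps_ifd = img.getexif().get_ifd(0x8825)   # 0x8825 is the GPSInfo IFD
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(gps.get("GPSLatitudeRef"), gps.get("GPSLatitude"))
    print(gps.get("GPSLongitudeRef"), gps.get("GPSLongitude"))
    ```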

    • Boomkop3@reddthat.com · 2 days ago

      It lets you type “Eiffel Tower” into search and get those pictures, rather than all the other unspeakable things you did in Paris that night.

      • reddig33@lemmy.world · 2 days ago

        The current implementation seems like overkill. Why not just do the following (rough sketch after the list):

        • Search “Eiffel Tower”
        • Send the search term to an Apple server that already exists (Apple Maps)
        • The server returns the GPS coordinates for that term
        • The Photos app displays photos ordered from nearest to farthest from those coordinates
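
        A rough sketch of that flow, with the geocoding step faked as a hard-coded lookup (the real suggestion would query an existing service like Apple Maps); file names and coordinates here are made up:

        ```python
        # Sort photos by distance from the coordinates a search term geocodes to.
        from math import radians, sin, cos, asin, sqrt

        def haversine_km(lat1, lon1, lat2, lon2):
            """Great-circle distance between two (lat, lon) points in kilometres."""
            dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
            a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
            return 2 * 6371 * asin(sqrt(a))

        def geocode(term):
            # Stand-in for "send the search term to a geocoding server".
            return {"eiffel tower": (48.8584, 2.2945)}[term.lower()]

        # Hypothetical photo library: (filename, lat, lon) taken from each photo's GPS tag.
        photos = [
            ("IMG_0001.jpg", 48.8606, 2.3376),   # near the Louvre
            ("IMG_0002.jpg", 48.8583, 2.2923),   # right under the tower
            ("IMG_0003.jpg", 48.8738, 2.2950),   # near the Arc de Triomphe
        ]

        target = geocode("Eiffel Tower")
        for name, lat, lon in sorted(photos, key=lambda p: haversine_km(p[1], p[2], *target)):
            print(name, round(haversine_km(lat, lon, *target), 2), "km")
        ```
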
        • Boomkop3@reddthat.com · edited · 2 days ago

          Because you took two selfies in a restaurant near there, made a huge stunning collage of a duck below the tower, and took a couple of photos from farther away to get the whole tower in view.

          I’m running this tech at home because we had the same use case, except for me it’s running on a NAS, not Apple’s servers. The location-only solution doesn’t work quite as well when you’re an avid photographer.
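
          Roughly, the trick is text-to-image search with a CLIP-style model, which matches photos by what’s in them rather than where they were taken. A minimal sketch of the general idea (not my exact setup), assuming the sentence-transformers library and made-up file names:

          ```python
          # Text-to-image search with a CLIP-style model; runs fully offline
          # once the model weights have been downloaded.
          from PIL import Image
          from sentence_transformers import SentenceTransformer, util

          model = SentenceTransformer("clip-ViT-B-32")

          photo_paths = ["paris/IMG_0001.jpg", "paris/IMG_0002.jpg"]   # made-up files
          photo_embeddings = model.encode([Image.open(p) for p in photo_paths])

          query_embedding = model.encode("eiffel tower")
          scores = util.cos_sim(query_embedding, photo_embeddings)[0]

          # Photos that actually show the landmark score highest, no GPS needed.
          ranked = sorted(zip(photo_paths, scores), key=lambda t: float(t[1]), reverse=True)
          for path, score in ranked:
              print(f"{float(score):.2f}  {path}")
          ```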

          • Petter1@lemm.ee · 1 day ago

            If you read the article, you would know that the hard work is done locally on your iPhone, not on Apple’s servers.

            • Boomkop3@reddthat.com · 1 day ago

              If you read the article thoroughly, you’d know that a smaller model runs locally to make a guess that a landmark might be in some region of the image. The actual identification and tagging is done in the cloud, and the tag is then sent back.