I don’t really understand the purpose of the feature — GPS tags are already embedded in the photo by the phone, so it knows the location of each picture. The phone also analyzes faces of people you’ve identified so you can search for people you know. What else does this new feature add?
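For what it’s worth, those GPS tags are plain EXIF metadata, so anything can read them. A minimal sketch of pulling the coordinates out with Pillow (the filename is made up):

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_photo(path):
    """Return (lat, lon) from a photo's EXIF GPS tags, or None if absent."""
    exif = Image.open(path).getexif()
    gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    def to_degrees(dms, ref):
        d, m, s = (float(x) for x in dms)  # degrees, minutes, seconds
        deg = d + m / 60 + s / 3600
        return -deg if ref in ("S", "W") else deg

    lat = to_degrees(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = to_degrees(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(gps_from_photo("IMG_0042.jpg"))  # e.g. (48.8584, 2.2945), near the Eiffel Tower
```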
It lets you type “eiffel tower” into search and get those pictures, rather than all the other unspeakable things you did in Paris that night.
So I recently installed Immich and it does this for me using local AI.
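As far as I know, Immich’s smart search is built on CLIP-style embeddings: your photos and the search text get mapped into the same vector space and ranked by cosine similarity. A minimal sketch of that idea with open_clip (model choice and file names are just illustrative, not Immich’s actual setup):

```python
import torch
import open_clip
from PIL import Image

# A small CLIP model; Immich runs its own ML service, this is just the core idea.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

paths = ["selfie_001.jpg", "duck_002.jpg", "tower_003.jpg"]  # made-up library

with torch.no_grad():
    image_emb = model.encode_image(
        torch.stack([preprocess(Image.open(p)) for p in paths]))
    text_emb = model.encode_text(tokenizer(["eiffel tower"]))
    # Normalize so the dot product is cosine similarity.
    image_emb /= image_emb.norm(dim=-1, keepdim=True)
    text_emb /= text_emb.norm(dim=-1, keepdim=True)
    scores = (image_emb @ text_emb.T).squeeze(1)

# Highest-scoring photos are the best matches for the query.
for path, score in sorted(zip(paths, scores.tolist()), key=lambda t: -t[1]):
    print(f"{score:.3f}  {path}")
```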
Yep, machine learning is nice
Current implementation seems like overkill. Why not just:
Because you took two selfies in a restaurant near there, made a huge stunning collage of a duck below the tower, and took a couple of photos from a distance to get the whole tower in view.
I’m running this tech at home because we had the same use case, except for me it’s running on a NAS, not Apple’s servers. The location-based approach doesn’t work quite as well when you’re an avid photographer.
If you read the article, you would know that the hard work is done locally on your iPhone, not on Apple’s servers.
If you read the article thoroughly, you’d know that a smaller model runs locally to get a guess that a landmark might be at a spot in the image. The actual identification and tagging are done in the cloud, and the tag is then sent back.
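Roughly, that split looks like the sketch below. To be clear, every function name and the endpoint here are hypothetical stand-ins, not Apple’s actual API, and the real pipeline also encrypts the query before it leaves the device, which is omitted here:

```python
from dataclasses import dataclass

import requests

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int
    confidence: float

def detect_candidate_landmarks(image_bytes: bytes) -> list[Region]:
    """Stand-in for the small on-device model: cheap, local, and only able
    to say 'something landmark-shaped might be here'."""
    return [Region(10, 10, 200, 300, confidence=0.83)]  # stub result

def identify_on_server(crop_bytes: bytes) -> str | None:
    """Stand-in for the cloud side, which does the actual identification."""
    resp = requests.post(
        "https://example.invalid/identify",  # placeholder endpoint
        files={"crop": crop_bytes},
        timeout=10,
    )
    return resp.json().get("landmark")  # e.g. "Eiffel Tower"

def tag_photo(image_bytes: bytes, tags: set[str]) -> None:
    for region in detect_candidate_landmarks(image_bytes):
        if region.confidence < 0.5:
            continue  # local model isn't sure enough, so nothing is uploaded
        # Only the suspect crop should leave the device; the actual
        # cropping is omitted here for brevity.
        label = identify_on_server(image_bytes)
        if label:
            tags.add(label)  # the tag comes back and is stored locally
```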
Because then they wouldn’t have an excuse to move all your data to Apple’s servers and scan it for later use.