TL;DR edit: I’m supporting the above comment, i.e. I do not support Apple’s actions in this case.
It’s definitely good for people to learn a bit about homomorphic encryption, and let’s give Apple some credit for investing in this area of technology.
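For anyone who hasn’t met the idea before, here’s a toy sketch of the “compute on encrypted data” property. It uses textbook RSA’s multiplicative homomorphism purely as an illustration - it’s insecure, the parameters are tiny, and it isn’t the lattice-based scheme Apple actually deploys - but the basic shape (a server combines ciphertexts it can’t read; only the key holder sees the result) is the same:

```python
# Toy illustration of computing on encrypted data via the multiplicative
# homomorphism of unpadded, textbook RSA. Deliberately insecure demo
# parameters - real keys are 2048+ bits, and practical homomorphic
# encryption schemes are lattice-based rather than RSA.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (needs Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)

# The "server" multiplies ciphertexts without ever learning a or b...
c_product = (ca * cb) % n

# ...and only the key holder can decrypt the product of the plaintexts.
assert decrypt(c_product) == (a * b) % n
print(decrypt(c_product))      # -> 42
```

Fully homomorphic schemes extend this so that much richer computations, not just a single multiplication, can be evaluated on ciphertexts.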
That said:
1. Encryption in the majority of cases doesn’t actually buy absolute privacy or security, it buys time - see NIST’s selection criterion that AES remain secure for ≥30 years. It will almost certainly be crackable one day, whether through deliberate weakening or other advances… How many people are truly able to give genuine informed consent in that context? (There’s a rough arithmetic sketch after this list.)

2. Encrypting something doesn’t always work out as planned, see this example:
“DON’T WORRY BRO, IT’S TOTALLY SAFE, IT’S ENCRYPTED!!”
Yes, Apple is surely capable enough to avoid simple, documented mistakes like the one above, but it’s also quite likely that some mistake will eventually be made. Note that Apple is also very likely capable of engineering a leak and concealing it, or making it appear accidental (or, even if a leak were truly accidental, leveraging it later on).
Whether they’d take that risk, and whether their (un)official internal policy would support or reject it, is of course in the realm of speculation.
That they have the technical capability to do so isn’t at all unlikely. The same goes for any capable entity with access to Apple’s infrastructure.

3. The fact that they’ve chosen to act questionably regarding users’ ability to meaningfully consent, or even to consent at all(!), suggests there may be some issues with assuming good faith on their part.
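To put rough numbers behind point 1 (this is back-of-the-envelope arithmetic with an assumed guess rate, not a benchmark of any real hardware): exhaustive key search on its own isn’t the realistic threat today; the “crackable one day” risk comes from weakened algorithms or implementations, cryptanalytic advances, or new hardware shrinking these figures:

```python
# Back-of-the-envelope: how long exhaustive key search takes under an assumed
# (generous) guess rate. The rate is illustrative, not a real benchmark.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 1e12  # assume a trillion key guesses per second

def expected_search_years(key_bits: int) -> float:
    """Expected years to find the key after trying half the keyspace."""
    expected_guesses = 2 ** (key_bits - 1)
    return expected_guesses / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):   # DES-sized, AES-128, AES-256 keyspaces
    print(f"{bits:3d}-bit key: ~{expected_search_years(bits):.1e} years")

# 56-bit DES keys fell to brute force decades ago; 128- and 256-bit keys only
# become reachable if the algorithm or its implementation is weakened, or if
# cryptanalytic or hardware advances change the arithmetic above.
```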
How hard is it to grasp that I don’t want Apple doing anything on my phone that I didn’t explicitly consent to?
I don’t care what technology they develop, or whether they’re capable of applying it correctly: the point is, I don’t want it on my phone in the first place, any more than I want them to set up camp in my living room and take notes on what I’m doing in my house.
My phone is my property, and Apple - or anybody else - is not welcome on my property.
Sorry for my poor phrasing - perhaps re-read my post? I’m entirely supporting your argument. Perhaps your main point aligns most with my #3? It could be argued they’ve already begun from a position of probable bad faith by taking this data from users in the first place.
Oh yeah I kinda missed your last point. Sorry 🙂