While I agree that the AI they implement will likely not be very effective, it doesn't have to be effective to cause massive human suffering — e.g., Google incorrectly flagging photos of your kid, taken for your doctor, as CSAM.
There's also no guarantee that once these companies finally wake the fuck up (if they're not already fully aware that what they're doing is messed up) they will close the holes they're punching. That means they could swap the AI out for a mass surveillance tool at any point without you knowing. Nobody should be a fan of this.