• 1 Post
  • 4 Comments
Joined 6 months ago
Cake day: July 5th, 2024


  • It’s entirely possible, and TV microphones have previously been used as bugging devices. The problem is that so many security researchers dig into iOS system-level software and other components of the device that a practice like this would be too risky for Apple (the same applies to mainstream Android products). Also, processing real-time audio, extracting potentially useful topics from it, and serving ads in real time is simply too much work with today’s tech (though that might change sooner than you think).

    What I think is more practical is using the whole query after the wake word to show ads, potentially combined with other app-tracking data, which is far more reliable than voice for targeting purposes (a toy sketch of what I mean is below). Voice data is mainly useful for bugging, primarily (ab)used by nation states and LE.

    I bet that in the medical-procedure case mentioned in the blog, the user had searched for or talked about it in other apps; average people aren’t good at noticing these privacy leaks.
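
    To make the contrast concrete, here’s a minimal toy sketch (all names, keyword lists, and the scoring logic are my own assumptions, not any vendor’s actual pipeline) of how cheap keyword matching on a transcribed wake-word query plus app activity already gives a strong ad signal, with no need to mine raw microphone audio:

    ```python
    # Hypothetical sketch only: targeting from the transcribed wake-word query
    # plus app-tracking events, instead of continuously mining raw audio.
    from collections import Counter

    # Toy ad-category keyword map (illustrative, made up).
    AD_CATEGORIES = {
        "travel": {"flight", "hotel", "vacation", "airline"},
        "health": {"knee", "surgery", "clinic", "physiotherapy"},
        "food": {"pizza", "restaurant", "delivery", "recipe"},
    }

    def categories_from_text(text: str) -> Counter:
        """Score ad categories from a short piece of text (query or app event)."""
        words = set(text.lower().split())
        return Counter({cat: len(words & kws)
                        for cat, kws in AD_CATEGORIES.items() if words & kws})

    def categories_from_app_activity(events: list[str]) -> Counter:
        """Aggregate scores from in-app searches/page views (the stronger signal)."""
        scores = Counter()
        for event in events:
            scores += categories_from_text(event)
        return scores

    if __name__ == "__main__":
        query = "hey assistant find a physiotherapy clinic near me"
        app_events = ["searched knee surgery recovery time", "viewed clinic reviews"]

        combined = categories_from_text(query) + categories_from_app_activity(app_events)
        print(combined.most_common(1))  # [('health', 5)] — no bugging required
    ```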


  • I’m not talking about AI in general here. I know some form of AI has been around for ages, and ML definitely has field-specific use cases. The objective here is to discuss how people feel about gen-AI-produced content in contrast to human-made content, possibly pondering the hypothetical scenario in which the gen AI infrastructure is used ethically. I hope the notion of generative AI is reasonably clear, but it includes LLMs, image (not computer vision) and audio generators, and any multimodal combination of these.