• 0 Posts
  • 5 Comments
Cake day: April 23rd, 2023

  • […] I’d like to be able to backup to my home server. The main thing would probably just be my photos […]

    For the photos, since you have a home server, have you heard of Immich? For anything else, there was a time when I could have recommended syncthing-android, but development on that has been discontinued, though you can still try using it. Some privacy-conscious cloud services may allow you to sync app folders, backing up WhatsApp that way, but I have no experience with that.

    is the 8a likely to drop much in price after that? I don’t know how quickly the prices drop but considering the 8a is currently £500 I can’t see it dropping to <£300

    Instead of buying straight from Google, you can consider buying a refurbished 8a off eBay or somewhere local - my last two Pixel purchases were made that way. It tends to be substantially cheaper than buying new, even as little as six months after a product launches, and the 8a launched nine months ago. Just be cautious of seller ratings, reputation, and consistency - prices are lower there because the buyer takes on more of the risk.


  • Onihikage@beehaw.org to Privacy@lemmy.ml · Proton Ditches Mastodon

    https://medium.com/@ovenplayer/does-proton-really-support-trump-a-deeper-analysis-and-surprising-findings-aed4fee4305e

    Thanks for the link, that’s a lot more context than the usual reactionary “Andy Yen said one nice thing about a Republican therefore he’s fascist pro-Trump MAGA” takes I’ve been seeing. Not only does it more or less disprove that narrative, it makes me question how much of the hate against him lately is genuine and how much of it has been seeded and signal-boosted by nation-state actors who don’t want people to use encrypted communications.

    Yen is clearly trying to be nonpartisan and praise what he sees as good for privacy while pointing out abuses of power, regardless of who has the power at the moment. He sees this as his way of adding weight to the scale in favor of better privacy and tearing down big tech. I know many in my country and on the web are hyper-polarized and addicted to anger, to the point that if someone says anything even slightly positive about their perceived political enemy, it’s seen as legitimizing and aligning with that enemy, but I don’t believe that’s a healthy or productive mindset to have. I believe that kind of divisive attitude is preventing us from uniting with those who should be agreeable to our cause, and that’s exactly what the oligarchs want. It’s making us weak.

    I’ve been on the fence for a while since this whole thing started, because I do use a paid Proton email, and it sounded bad, but I kept getting this nagging feeling I wasn’t seeing the full picture. That’s gone now - Andy may be politically and/or socially inept, and he may have a different perspective on what it means to support privacy and democracy, but I think it’s clear his heart is in the right place, and the work he and Proton are continuing to do for tech privacy is helping to erode authoritarian power structures, including Trump’s.


  • I appreciate the links, but these are all about how to efficiently process an audio sample for a signal of choice.

    Your stumbling block seemed to be that you didn’t understand how it was possible, so I was trying to explain that, but I may have done a poor job of emphasizing why the technique I described matters. When you said this in a previous comment:

    I do think that they’re not just throwing away the other fish, but putting them into specific baskets.

    That was a misunderstanding of how the technology works. With a keyword spotter (KWS), which all smartphone assistants use to detect their activation phrases, they aren’t catching any “other fish” in the first place, so there’s nothing to put into “specific baskets”.

    To borrow your analogy of catching fish, a full speech recognition model is like casting a large net and dragging it behind a ship, catching absolutely everything and identifying all the fish/words so you can do things with them. Relative to a KWS, it’s very energy-intensive, and one is not likely to spend that much energy just to throw back most of the fish. Smart TVs, cars, Alexa - these are all plugged in, so they can potentially use this method continuously; the energy cost of constantly listening with a full model is not an issue for them. For those devices, your concern that they might put everything other than the keyword into different baskets is perfectly valid.

    A smartphone, to save battery, will be using a KWS, which is like baiting a trap with pheromones only released by a specific species of fish. When those fish happen to swim nearby, they smell the pheromones and go into the trap. You check the trap periodically, and when you find the fish in there, you pull them out with a very small net. You’ve expended far less effort to catch only the fish you care about without catching anything else.

    To use yet another analogy, a KWS is like a tourist in a foreign country who doesn’t know the local language and has gotten separated from their guide. They try to ask locals for help but can’t understand anything, until a local says the name of the tour group, which the tourist recognizes, letting them follow that person back to their group. That’s exactly what a KWS experiences: it hears complete nonsense and gibberish until the key phrase pops out of the noise, and that one phrase it understands clearly.

    This is what we mean when we say that yes, your phone is listening constantly for the keyword, but the part that’s listening cannot transcribe your conversations until you or someone says the keyword that wakes up the full assistant.

    My question is, how often is audio sampled from the vicinity to allow such processing to happen.

    Given the near-immediate response of “Hey Google”, I would guess once or twice a second.

    Yes, KWS systems generally keep a rolling buffer of audio a few seconds long, and scan it a few times a second to see if it contains the key phrase.
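    To make the rolling-buffer idea concrete, here’s a toy sketch in Python. The chunk size, buffer length, and the trivial pattern matcher standing in for the neural KWS model are all illustrative assumptions, not any vendor’s actual implementation:

    ```python
    from collections import deque

    SAMPLE_RATE = 16_000                      # 16 kHz mono audio
    ring = deque(maxlen=SAMPLE_RATE * 2)      # rolling buffer: only the last ~2 s survive
    WAKE_PATTERN = [7, 7, 7]                  # stand-in for the wake phrase

    def kws_detect(window):
        """Stand-in for the tiny KWS model: it recognizes exactly one
        pattern and nothing else -- it cannot transcribe the rest."""
        w = list(window)
        return any(w[i:i + 3] == WAKE_PATTERN for i in range(len(w) - 2))

    def scan_chunk(chunk, on_wake):
        """Called a few times a second as new audio arrives."""
        ring.extend(chunk)                    # old samples fall off the left edge
        if kws_detect(ring):
            on_wake()                         # only now wake the full assistant

    # Ordinary conversation: the KWS "hears" it but reacts to none of it.
    woke = []
    scan_chunk([1, 2, 3, 4] * 1000, lambda: woke.append(True))
    # The wake phrase arrives; the next scan of the buffer catches it.
    scan_chunk([5, 6] + WAKE_PATTERN, lambda: woke.append(True))
    ```

    Note that everything outside the wake pattern is indistinguishable noise to the detector, and anything older than the buffer length is gone entirely - there is nothing to sort into baskets.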


  • How can you catch the right fish, unless you’re routinely casting your fishing net?

    It’s a technique called Keyword Spotting (KWS). https://en.wikipedia.org/wiki/Keyword_spotting

    This uses a tiny speech recognition model that’s trained on very specific words or phrases which are (usually) distinct from general conversation. The model’s small size makes it extremely lightweight even before optimization steps like quantization, so it needs very little computation to scan the audio stream for the keyword. Here’s a 2021 paper where a team of researchers optimized a KWS to use just 251 µJ (0.00007 milliwatt-hours) per inference: https://arxiv.org/pdf/2111.04988
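    As a rough sanity check on that figure (the scan rate and battery capacity below are my own illustrative assumptions, since the paper only gives per-inference energy), a keyword spotter at that efficiency is a rounding error on a phone battery:

    ```python
    ENERGY_PER_INFERENCE_J = 251e-6       # 251 microjoules, from the paper
    SCANS_PER_SECOND = 4                  # assumed: buffer scanned 4x per second

    avg_power_w = ENERGY_PER_INFERENCE_J * SCANS_PER_SECOND
    # -> about 1 milliwatt of continuous draw

    battery_wh = 4.0 * 3.85               # assumed ~4000 mAh, 3.85 V pack ≈ 15.4 Wh
    pct_per_day = (avg_power_w * 24) / battery_wh * 100
    # -> roughly 0.16% of the battery per day of always-on listening
    ```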

    The small size of the KWS model, required for the low power consumption, means it alone can’t be used to listen in on conversations: it outright doesn’t understand anything other than what it’s been trained to identify. This is also why you usually can’t customize the keyword to just anything, only to one of a limited set of words or phrases.

    This all means that if you’re ever given the option of a completely custom wake phrase, you can be reasonably sure that device is running full speech recognition on everything it hears. A smart TV or Amazon Alexa, being plugged in, has a lot more freedom to listen as much as it wants with as complex a model as it wants. High-quality speech-to-text apps like FUTO Voice Input run locally on just about any modern smartphone, so something like a Roku TV can definitely do it.