German news magazine Der Spiegel wrote that they had talked to a Meta representative, who said, “This is definitely an error and we will fix it fast.”
My personal take: An incident like this illustrates how vulnerable we are to being censored and silenced when using social media and communication channels owned by billionaires. It’s time not only to leave those platforms but to educate your friends and family why this is absolutely necessary. No more excuses.
Original (translated from German): Update, 1:30 p.m.: In response to a SPIEGEL inquiry, Meta has now commented briefly on the phenomenon. “This is clearly an error, which we will fix quickly,” said a company spokeswoman.
At the moment, a search for #democrat on my Facebook page returns a bunch of posts saying that searches for #democrat are blocked, and a bunch (more recent ones) saying they aren’t.
I was skeptical of this, suspecting it might just be an outrage campaign, but with a real news source carrying a quote from the company confirming the behavior (the article referenced above was on the Der Spiegel front page; the update is at the bottom), I’m less skeptical.
Pushing through a change doesn’t happen by accidentally clicking a button. Where I work, it’s a multistep process requiring manager approval. Even if the block happened by mistake, it still means there was a mechanism in place to make it happen, configured so that it only censored left-leaning hashtags. This is “not good.”
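To put that in concrete terms, here is a purely hypothetical sketch of the kind of config-driven filter that would have to exist for this to happen at all (every name here is invented; this is not Meta’s code):

```python
# Purely hypothetical sketch: none of this is Meta's code and every name is invented.
# The point: a block like this needs (1) a mechanism somebody built and
# (2) a term list somebody filled in. Neither happens with a stray click.

BLOCKED_SEARCH_TERMS = {"#democrat", "#democrats"}  # someone had to configure this
BLOCK_NOTICE = "Results for this search may have been hidden."

def run_search(query: str) -> list:
    """Stand-in for the real search backend."""
    return [f"post mentioning {query}"]

def handle_search(query: str) -> dict:
    """Return results, or an explicit 'hidden' notice if the term is on the list."""
    if query.lower() in BLOCKED_SEARCH_TERMS:
        return {"results": [], "notice": BLOCK_NOTICE}
    return {"results": run_search(query), "notice": None}

print(handle_search("#democrat"))    # blocked, user sees the notice
print(handle_search("#republican"))  # returns results as normal
```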
I wouldn’t be surprised if it was an AI or other automated system blocking it, because an outright block instead of a shadow ban seems unlikely to have been done by human hands. But that just indicates that there are systemic issues or biases that clearly favor one party over the other, and that’s not much better.
I know as a software engineer I frequently accidentally release bugs that block entire political parties after their opponents take power. It’s one of those things most people forget to write unit tests for.
they said “This is definitely an error and we will fix it fast”
I’m pretty sure that’s literally what they always say.
Like, has a mainstream brainwashing platform ever admitted to making a change like this and owned up to it? As far as I can remember, these features just keep accidentally getting thought up, coded, and released into production.
I’m at the point where there is no benefit of the doubt, especially with malicious actors like Twitter and Facebook. If there isn’t proof they didn’t intend to release it (and you can’t exactly prove a negative), it was on purpose.
Yes, and automatically follow JD Vance and Trump. Totally a bug /s
No no, the bug is it told you it hid results instead of just secretly hiding them. Hiding them was the intended behavior.
Could’ve happened to anybody, really.
Silly Copilot. My bad folks, it’s this crazy AI.
They will now shadow ban these topics instead, so nobody can catch them red-handed. Users will just never see them in their feed.
Yeah, that was my thought too. What a dumb way to censor: filtering the search input instead of filtering the output.
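For illustration, the difference between the two approaches looks roughly like this (again a hypothetical sketch with invented names, not anything from Meta’s codebase):

```python
# Hypothetical sketch of the two approaches: blocking on the search input
# (visible, easy to screenshot) vs. silently filtering the output (shadow ban).
# Invented names only; not anything from Meta.

HIDDEN_TERMS = {"#democrat"}

def all_posts() -> list:
    """Stand-in for the real post store."""
    return ["#democrat rally tonight", "#republican rally tonight", "cat pictures"]

def search_block_input(query: str) -> dict:
    """Block the query itself and say so: users can catch it red-handed."""
    if query.lower() in HIDDEN_TERMS:
        return {"results": [], "notice": "Results for this search have been hidden."}
    matches = [p for p in all_posts() if query.lower() in p.lower()]
    return {"results": matches, "notice": None}

def search_shadow_filter(query: str) -> dict:
    """Quietly drop matching posts from the output: users just see nothing."""
    matches = [p for p in all_posts() if query.lower() in p.lower()]
    matches = [p for p in matches if not any(t in p.lower() for t in HIDDEN_TERMS)]
    return {"results": matches, "notice": None}

print(search_block_input("#democrat"))    # empty results plus a visible notice
print(search_shadow_filter("#democrat"))  # just empty results, no explanation
```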
It probably went like this at Meta:
Product Manager: Hey, we just got this word filter list from HQ to be implemented ASAP.
Devs: Sure thing, boss.
That’s the mistake: putting it in the open ban list instead of the shadow ban list.
Dev acting in #resistance or dev making an honest mistake? I suppose we’ll never know.
Hopefully resistance.