Social media posts inciting hate and division have “real world consequences” and there is a responsibility to regulate content, the UN High Commissioner for Human Rights insisted on Friday, following Meta’s decision to end its fact-checking programme in the United States.
Why is this not as simple as adding a setting button for moderation of hateful content? The user can decide to filter it out.
And who decides whether content is hateful?
Moderator groups that users can choose between.
Content moderators per community guidelines. Why is this so hard?
And who do you select as moderators? Who ensures their moderation is consistent with community guidelines? What are the consequences if they moderate unfairly?
If we are talking about platforms, then the employees of that platform. If we are talking about federation, then the community and the groups leading those communities. The consequences are the same as always: bans for rule violations, and the freedom we all share to use or not use these platforms.
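The idea floated above — moderator groups that users choose between — can be sketched minimally: each group publishes its own list of posts it considers rule-breaking, and a user's feed hides only what the groups they subscribe to have flagged. Everything here (the post store, group names, function names) is hypothetical illustration, not any real platform's API.

```python
# Hypothetical sketch of user-selectable moderation: each moderation
# group labels posts it would hide, and users subscribe to whichever
# groups they trust. All data and names are illustrative.

posts = {
    1: "Welcome to the forum!",
    2: "Some inflammatory rant",
    3: "A photo of my cat",
}

# Post IDs each group has flagged as violating its guidelines.
moderation_groups = {
    "strict_mods": {2},
    "hands_off_mods": set(),
}

def visible_posts(subscriptions):
    """Return posts not hidden by any group the user subscribes to."""
    hidden = set()
    for group in subscriptions:
        hidden |= moderation_groups[group]
    return {pid: text for pid, text in posts.items() if pid not in hidden}

print(visible_posts(["strict_mods"]))     # post 2 is filtered out
print(visible_posts(["hands_off_mods"]))  # nothing is filtered
```

The open questions in the thread remain, of course: this only shows the mechanism, not who staffs the groups or audits their consistency.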
O’Brien, he’s trustworthy