About 5 years ago, I made a comment on that platform which went something like "Americans are arrogant" (can't remember the exact words anymore). Within minutes, I received a warning and a strike against my account for violating community safety guidelines.
Was what I said really that harmful? Especially given that America is the dominant global superpower, the one in the privileged position, and one that people criticize openly all the time? This experience really soured me on AI-powered content moderation and the obsession with polite speech. (Maybe that's what drives people to use sarcasm and euphemisms as cover, perhaps like saying "Americans are unbelievably humble people". See also https://en.wikipedia.org/wiki/Letter_of_recommendation#Langu... .)
To make matters worse, as I browsed the site I saw comments from other people that I deemed vile - ones targeting certain groups, ones threatening physical harm. I used the report/flag function, and exactly zero of my ~20 reports ever resulted in action against those users. I feel like the system has the power to punish me but not others.
One of the examples offered as outrageous is that people on Meta are now allowed to say
- "Trans people are immoral"
It's not something I agree with, but any hardline Christian or Muslim would take that position quite easily. And that's not a small portion of the world population (easily > 1 billion).
Is it not better that they can say that, so that a discussion can be had? And if no discussion arises, at the very least people make their positions clear, and we know who stands where.
The utopia The Intercept (and the previous policy) seemed to desire was one where we just pretended this was not a position people held, and it would somehow magically go away.
Providing community notes and overt information seems far far better than covert censoring.
The problem with covert censoring is twofold: a) people get an insanely distorted perception of what average people think, and b) sometimes the people in power doing the censoring are wrong (e.g., the censorship of the Hunter Biden laptop story as fake news). Overtly correcting false narratives via community notes avoids both of these issues.
Is anyone else reporting this, or is there a link to the purported document which doesn't require supplying an email address to The Intercept?
The Intercept accidentally exposed a whistleblower in the past by publishing a watermarked document, and now publishes only excerpts.