Big online platforms tend to brag about their ability to filter out violent and extremist content at scale, yet those same platforms refuse to provide even basic information about the substance of those removals. How do these platforms define terrorist content? What safeguards do they put in place to ensure that they don't over-censor innocent people in the process? Again and again, social media companies have proven unable or unwilling to answer these questions.
This is a companion discussion topic for the original entry at https://www.eff.org/deeplinks/2019/09/innocent-users-have-most-lose-rush-address-extremist-speech-online