How to police online content that transgresses community standards in as fair a way as possible

As I thought about this deep problem (policing the boundaries of free speech) I realised that to *moderate* means both to be free from excess and to decide whether something stays on a forum or not. Say what you want about the U.S., free speech *maximalists* ought to be glad the internet (and with it a lot of internet culture) started there, what with its long history of constitutionally protected free speech.
If an internet forum is to self-police, and not rely on the actual bobbies to police its content, then it makes sense that content on that forum is policed in as transparent, fair, and, dare I say it, democratic a manner as possible.

What this means is that we have seven pillars:
(1) that the “speech rules” and “speech moderation rules” need to be agreed to when you sign up, and then they need to be close to hand at all times. Basically, the rules for speech *and* the rules in the case of moderation must be *enshrined*, as the phrase goes. They must rarely change, and when they do they should do so democratically.

(2) that if a post or comment breaks a rule then a moderator (who may be drawn from the stewards of the site or from the community) ought to notify the user that the content breaks specific rule X.Y.Z, and give the user a fixed amount of time to respond; in the absence of an acknowledgement within that window, the notification is treated as rejected. The user may accept or reject. If the user accepts then they may edit or delete, and the process starts again, ideally with a different moderator; if three moderators flag the content in a row, it is taken as if the user rejected. If the user rejected (explicitly or implicitly) then the content is quarantined. Quarantined content may be appealed, and that process happens before a jury of the user’s peers on the site and/or a three-judge panel drawn from the stewards of the site. Possibly there may be levels of appeal, but start initially with one level and evolve the process – I don’t think it needs to be as multi-layered as an actual court system (you know -> local/regional/supreme). A rough sketch of this flow, in code, follows.
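
To make that flow concrete, here is a minimal sketch in Python of the states a post might move through. Everything in it is my assumption, not a spec: the status names, the 48-hour response window as a number, and the exact bookkeeping are all placeholders for whatever the community enshrines.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    PUBLISHED = auto()
    FLAGGED = auto()      # a moderator cited a rule; the response clock is running
    QUARANTINED = auto()  # rejected (explicitly or implicitly); awaiting appeal


# Hypothetical community-agreed parameters.
RESPONSE_WINDOW_HOURS = 48
MAX_CONSECUTIVE_FLAGS = 3


@dataclass
class Post:
    author: str
    body: str
    status: Status = Status.PUBLISHED
    consecutive_flags: int = 0


def flag(post: Post, rule: str) -> None:
    """A moderator notifies the author that the content breaks `rule`."""
    post.consecutive_flags += 1
    if post.consecutive_flags >= MAX_CONSECUTIVE_FLAGS:
        # Three flags in a row are taken as if the author rejected.
        post.status = Status.QUARANTINED
    else:
        post.status = Status.FLAGGED


def author_responds(post: Post, accepts: bool, edited_body: str | None = None) -> None:
    """The author accepts (modelled here as an edit; review restarts) or rejects."""
    if accepts and edited_body is not None:
        post.body = edited_body
        post.status = Status.PUBLISHED  # back into the pool, ideally a new moderator
    else:
        post.status = Status.QUARANTINED  # off to appeal: peer jury / steward panel


def response_window_expired(post: Post) -> None:
    """No acknowledgement within the window counts as an implicit rejection."""
    if post.status is Status.FLAGGED:
        post.status = Status.QUARANTINED
```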

(3) that a mechanism for interacting with actual law enforcement be established, and that this mechanism be transparent.

(4) that if content is censored it is removed from quarantine (where it was flagged and had public but limited visibility) and removed from the public sphere altogether. The user that created it may still view it (it having sprung from their mind). The content itself is frozen. A record of each such strike-down – the rule broken, the reasons for the strike-down, and a description of the content, but not the content itself – is preserved for all time as a piece in the chain of record, and can be used as precedent in later cases. I am hoping that over time certain norms and standards evolve around this. One possible shape for that chain of record is sketched below.
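
One way to make the chain of record tamper-evident – my assumption, not something the pillar itself demands – is to hash-link each strike-down record to the previous one. A minimal sketch, with field names I have made up for illustration:

```python
import hashlib
import json
import time


def record_strike_down(chain: list[dict], rule: str,
                       description: str, reasons: str) -> dict:
    """Append a strike-down record to the chain of record.

    Per pillar (4), only a description of the content is stored,
    never the content itself.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "rule": rule,
        "description": description,  # a description, not the content
        "reasons": reasons,
        "timestamp": time.time(),
        "prev_hash": prev_hash,       # links this record to the one before it
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record
```

Because each record embeds the hash of its predecessor, quietly rewriting an old strike-down would break every hash after it – which is roughly the property you want from a body of precedent.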

(5) that a penalty point system be implemented. With each strike you gain a penalty point (points evaporate slowly over time); if your penalty points exceed a certain threshold you have your right to publish revoked for a certain community-agreed-upon amount of time. This is like being in speech jail. It needs to be figured out how much of the user’s profile, and how much of the moderation system, they have access to during this time. When you return your points are reset, and the process repeats itself. The user is never banned outright, but their jail term might become quite long after multiple persistent instances of extreme, egregious, and unrepentant violations. Possibly we could implement a parole-hearing-type system for users who appear to have had a change of heart/mind. The point decay could look something like the sketch below.
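
“Points evaporate slowly over time” maps naturally onto exponential decay. A sketch – the half-life and the jail threshold are placeholders the community would have to agree on, not recommendations:

```python
import time

# Hypothetical community-agreed parameters.
HALF_LIFE_DAYS = 90    # a strike loses half its weight every 90 days
JAIL_THRESHOLD = 10.0  # above this, the right to publish is suspended

SECONDS_PER_DAY = 86_400.0


def current_points(strikes: list[tuple[float, float]],
                   now: float | None = None) -> float:
    """Sum of penalty points with exponential decay.

    Each strike is a (timestamp, points_awarded) pair.
    """
    if now is None:
        now = time.time()
    return sum(
        points * 0.5 ** ((now - ts) / (HALF_LIFE_DAYS * SECONDS_PER_DAY))
        for ts, points in strikes
    )


def in_speech_jail(strikes: list[tuple[float, float]]) -> bool:
    return current_points(strikes) >= JAIL_THRESHOLD
```

Resetting points on release from speech jail would be handled by the surrounding bookkeeping rather than by the decay formula itself.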

(6) that it is recognised that never before has a platform like the internet and web existed, and that we should strive to keep a part of it as free, open, secure from subversion, and transparent as is possible within the constraints of technology and free speech ideals (this implies FOSS and encryption, I believe). This means making the rules governing moderation as clear as the rules governing content itself, and making the mechanisms for dealing with content that strays outside the guidelines as fair, transparent, and democratic as needs be in order to preserve free speech ideals.

(7) that we should work towards getting these high standards replicated across the internet and web, and towards making these platforms federated – perhaps including the online policing of them, or maybe each forum should keep that under its own jurisdiction – maybe that’s how the multi-layered appeal process comes into being …

¿ Thoughts / Comments ?
