The internet is broken.
It’s the online equivalent of The Purge, an SF thriller that envisions a world with one day per year in which law is suspended, and the strong prey on the weak. Or, in the case of the internet, where the anonymous prey on everyone else, every day.
Anonymity can be liberating, empowering, essential. It can help dissidents and victims of abuse. It can also be a metaphorical ring of invisibility, enabling any kind of socially destructive behavior without fear of reprisal.
Unfortunately, the current system does everything to protect the anonymous, and nothing to protect the identified. If you want to have a publicly visible persona, then you either have to become so bland as to invite indifference, or resign yourself to the inevitable trolls.
In the past, I didn’t really think about this much. My usual blog topics (technical interviews, management, coding, diet/exercise, and personal reminiscences) evidently aren’t the kinds of things that inspire attacks from psychotics; and my Twitter feed, this blog, and my LinkedIn account form the majority of my public profile. I’m white, male, and straight. I’m boring. Half my blog posts are about life as a middle manager, for God’s sake. It’s easy not to pay attention when you aren’t in the crosshairs.
But there’s been a war going on, and the stakes continue to rise. If trolling used to be about disrupting online forums and pissing people off for the lulz, now it’s about rape threats, death threats, doxxing, and non-stop harassment. These aren’t just 12-year-old boys hopped up on hormones, Mountain Dew, and internet porn. If you suppress your gag reflex and work your way through #GamerGate, it’s clear there are a fair number of trolls spouting bile from the comfort of six-figure salaries and stock options in the tech industry.
It’s astonishing that this problem remains unsolved, and one gets the sense that it’s reaching a breaking point. Clay Shirky has the classic essay on groups destroying themselves, and notes that:
Now, there’s a large body of literature saying “We built this software, a group came and used it, and they began to exhibit behaviors that surprised us enormously, so we’ve gone and documented these behaviors.” Over and over and over again this pattern comes up. … Now, this story has been written many times. It’s actually frustrating to see how many times it’s been written. You’d hope that at some point that someone would write it down, and they often do, but what then doesn’t happen is other people don’t read it.
You should really go and read the essay. It’ll make you smarter. But for those without the time or inclination, the tl;dr is that online communities need moderators, power users, reputation, and persistent identity. Without them, they self-destruct. This doesn’t just make sense in the context of closed gardens (which exist, from Amazon to StackOverflow). What we need is a global persistent identity, with reputation as a service. Anonymity would always be an option, but in the case of community interactions, the incentives would strongly shift toward identity.
Here’s how it could work.
We currently have at least four major vendors of public online identity – Facebook, Google, Twitter, and LinkedIn. Of these, the only identity which involves constant public interaction with strangers – and therefore the identity that matters most in creating communal responsibility – is Twitter.
Twitter has hundreds of millions of users. Twitter identities matter – it takes time to build a base of followers and a history of tweets. Twitter has billions (trillions?) of tweets and direct messages that could be mined for patterns. Twitter could become the global provider of reputation data.
Think of reputation, or karma, as being like a credit rating. You start in a neutral state, but over time, you develop a history. On Twitter, unless you’re Beyoncé, most of your tweets are likely ignored, some are favorited, others retweeted… And just as with PageRank, who favorited or retweeted matters: a favorite from a high-karma user might carry significant weight, while one from a low-karma user might count for little or nothing. Twitter wouldn’t be the only source, of course – StackOverflow, Medium, Quora, eBay, Amazon, TripAdvisor, etc. all have publicly visible methods for indicating positive community engagement. These could all go into the mix in calculating a reputation.
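To make the PageRank analogy concrete, here’s a minimal sketch of karma-weighted endorsements. Every name, action value, and threshold here is an invented assumption for illustration – this is not a real Twitter API or algorithm, just one way the weighting could work.

```python
# Hypothetical sketch: endorsements weighted by the endorser's own karma,
# loosely in the spirit of PageRank. All constants are illustrative assumptions.

def karma_weight(actor_karma):
    """An endorsement counts more when it comes from a high-karma user.

    Clamped to [0, 100] so low-karma accounts contribute little and
    negative-karma accounts (trolls) contribute nothing.
    """
    return max(0.0, min(actor_karma, 100.0)) / 100.0

def update_karma(current_karma, endorsements):
    """Fold a batch of (action, actor_karma) events into a user's karma.

    A retweet is assumed to signal stronger endorsement than a favorite.
    """
    action_value = {"favorite": 1.0, "retweet": 2.0}
    delta = sum(action_value[action] * karma_weight(karma)
                for action, karma in endorsements)
    return current_karma + delta

# A favorite from a karma-90 user moves the needle; one from a karma-2 user barely does.
events = [("favorite", 90.0), ("retweet", 50.0), ("favorite", 2.0)]
new_karma = update_karma(10.0, events)  # 10 + 0.9 + 1.0 + 0.02 = 11.92
```

A production system would obviously need damping, decay over time, and defenses against collusion rings, but the core idea – weight by the endorser’s standing – fits in a few lines.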
Reputation wouldn’t just be a matter of popularity – a user could be popular among his peers precisely because of trolling behavior – so an effective reputation system would need a way to identify and mark users along multiple orthogonal axes. That is, you wouldn’t want a single number that could be balanced out, since no amount of popularity, cleverness, or positive interactions should forgive rape or death threats. Reported abuse would be taken seriously and followed up on aggressively.
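The orthogonal-axes point can be shown in a few lines. In this sketch the axis names are invented for illustration; the design choice it demonstrates is that abuse lives on its own axis and can never be bought back by scores on the others.

```python
# Sketch of multi-axis reputation. Axis names are invented assumptions.
# A single blended score would let popularity launder abuse; keeping abuse
# on a separate axis means a confirmed report disqualifies on its own.

def is_in_good_standing(reputation):
    """Return True only if the user has no confirmed abuse and net-positive
    scores on the other axes. No popularity offsets a confirmed threat."""
    if reputation.get("confirmed_abuse_reports", 0) > 0:
        return False
    return (reputation.get("popularity", 0.0)
            + reputation.get("helpfulness", 0.0)) > 0.0

popular_troll = {"popularity": 95.0, "helpfulness": 20.0,
                 "confirmed_abuse_reports": 3}
# is_in_good_standing(popular_troll) -> False: high popularity doesn't help.
```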
In this world, services could let their users specify privacy or contact levels based on reputation. For instance, you could decide that you’re only willing to receive email, direct messages, or mentions from people with a certain level of positive karma; a certain length of time or level of contribution on a service; or someone who’d been vouched for by another user. You could auto-block anyone with low karma, or be completely invisible to users who had been marked as trolls. Instead of restricting access to people you know, you could restrict it to people who had demonstrated that they were positive members of the community.
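A contact policy like that is mostly a predicate over the sender’s reputation. Here’s a hedged sketch – the field names, thresholds, and vouching rule are all assumptions chosen for the example, not a proposal for a specific schema.

```python
# Illustrative sketch of per-user contact policies keyed on reputation.
# Field names and thresholds are invented assumptions.
from dataclasses import dataclass

@dataclass
class Profile:
    karma: float
    account_age_days: int
    flagged_as_troll: bool
    vouched_by: set  # usernames who have vouched for this account

@dataclass
class ContactPolicy:
    min_karma: float = 0.0
    min_account_age_days: int = 0
    block_trolls: bool = True
    trusted_vouchers: set = None  # a vouch from any of these overrides thresholds

    def allows(self, sender):
        if self.block_trolls and sender.flagged_as_troll:
            return False  # invisible to marked trolls, regardless of karma
        if self.trusted_vouchers and (sender.vouched_by & self.trusted_vouchers):
            return True   # vouched for by someone I trust
        return (sender.karma >= self.min_karma
                and sender.account_age_days >= self.min_account_age_days)

policy = ContactPolicy(min_karma=10.0, min_account_age_days=30,
                       trusted_vouchers={"alice"})
newcomer = Profile(karma=0.0, account_age_days=1, flagged_as_troll=False,
                   vouched_by={"alice"})
# The brand-new account gets through only because alice vouched for it.
```

The vouching override matters: it keeps a reputation gate from becoming a closed club, since any trusted member can let a newcomer in.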
The reason that all of this is important is that there would be consequences for good and bad behavior. These might include any of the following:
- Badging: visible indication of reputation
- Additional privileges given to users with high reputation (e.g., moderation ability, extra weight given to problem reports, expediting of service, etc.)
- Punishments levied on users with known anti-social behavior (e.g., hellbanning)
- Reputation used as a job screening technique
Of course, people would use throwaway accounts for the worst offenses (as many do currently), but the effect would be minimal – normal users would auto-block new or low-karma accounts by default. Being marked as a troll would have real-world consequences, and could completely destroy an online identity, forcing someone to start again from scratch. Sure, you could spend time building up a fake identity to the point that it could be used for harassment, but then you’d lose all that work. There would be a cost to bad behavior.
The key is that this wouldn’t be restricted to a single site – there are plenty of closed garden reputation systems out there. Twitter would provide reputation as a service (RaaS) across the internet. People could still act anonymously, but it would be harder for them to interact with real identities when doing so (unless specifically allowed). Anonymity would be a choice, but a limiting one.
Twitter has historically been terrible at dealing with harassment and trolling, a fact which even their CEO admits:
We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years. It’s no secret and the rest of the world talks about it every day. We lose core user after core user by not addressing simple trolling issues that they face every day.
I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO. It’s absurd. There’s no excuse for it. I take full responsibility for not being more aggressive on this front. It’s nobody else’s fault but mine, and it’s embarrassing.
We’re going to start kicking these people off right and left and making sure that when they issue their ridiculous attacks, nobody hears them.
Everybody on the leadership team knows this is vital.
– Dick Costolo
While it’s nice to hear that they’re planning on cleaning up their own corner of the internet, the problem – and opportunity – is much bigger. Of course there would be technical hurdles (the scale, in particular, would be immense). Of course, the design would have to be carefully thought through. Of course there would be attempts to game the system (account hijacking / vandalism, twinking, link farms, etc.). Of course there would be missteps along the way.
But – and this is a big but – this is a core piece of the internet that’s missing. It’s as big a piece as the social graph. It will exist in ten years. There will be REO and REM – the reputation-engine counterparts of SEO and SEM – and subtle tweaks to the algorithm that create winners and losers. I’m sure there have been attempts before, but never by a company with the data, resources, and yes, culpability of Twitter. Here’s hoping that they have the vision to look outside their own corner of the world. It would make the web a better place.
A well-cited article, but the problem is always finding the line between cyber-harassment and censorship. Unfortunately a lot of corruption has taken place recently: Randi Harper’s Twitter account was suspended after she publicly tweeted harassment at a developer and his wife, but it was back online after only two hours. This is a woman who puts other developers on her blocklist, may be committing fraud through her Patreon by claiming that her tool ‘fights harassment’, and has been caught doxxing as well as coercing a woman off Twitter with the help of her SJWs. Why didn’t Twitter boot her out the door for good, especially given how low her karma was? I don’t believe Dick Costolo is unconcerned about the current state of affairs with trolls, but I do agree that it would be a long, tedious job to solve, especially as anonymity is a powerful tool, for good or bad.
I agree that the primary problem would be striking a balance between punishing harassment and not censoring reasonable content. This would be extraordinarily difficult to design well (by comparison, the technology would be difficult, but mostly because of the scale). I imagine a system with moderators, and moderator-moderators (like Slashdot used to have), with ombudspeople at the top of the pyramid. I think that Costolo does care about it – if for no other reason than that Twitter’s usage numbers are showing worrying trends: fewer people joining, power users leaving, and the service gaining a reputation as a place where bad things happen to real people. Everyone’s been waiting for years for Twitter to be the adult in the room. Whether they choose to devote resources to the problem before everyone leaves for Instagram / tumblr / etc. is the question.
Can’t say I like this idea very much; it sounds like mob rule to me. Should somebody’s career prospects be damaged because they voiced an unpopular opinion?
Credit ratings, Facebook pages, and criminal history are not publicly available for good reason, and it’s an offence for potential employers to ask for them (at least here in CA). I’d hate to see a publicly visible, publicly determined reputation that’s devoid of any context.
I really wonder if Twitter’s model is just broken… maybe it’ll be replaced by something that does it slightly differently, in the same way Facebook usurped MySpace.
I would agree that “voicing unpopular opinions” should not be problematic. However, there are certain behaviors that should be uncontroversially unacceptable. Doxxing, rape threats, and death threats, for a start.
Whilst I agree with your list, what is acceptable in one culture can be taboo in another. Who gets to decide what is universally unacceptable? A “-1” for ignorance on Stack Overflow is one thing; having your entire reputation trashed because people from a different culture find you offensive is something else.
The internet is global. Culture, opinions and taboos… not so much.
I take your point. I also believe that there are certain things (as you’ve said) that can readily be agreed on. We shouldn’t be so paralyzed by the inability to conceptualize a perfect solution that we fail to start on the universally acceptable pieces.