So Twitter has finally given Milo Yiannopoulos the boot — apparently for good — after the Breitbart “journalist” gleefully participated in, and egged on, a vicious campaign of racist abuse directed at Ghostbusters star Leslie Jones on Twitter earlier this week.
This wasn’t the first time that Milo, formerly known as @Nero, used his Twitter platform — at the time of his suspension he had 338,000 followers — to attack and abuse a popular scapegoat (or someone who merely mocked him online). It wasn’t even the worst example of his bullying.
What made the difference this time? Leslie Jones, who has a bit of a Twitter following herself, refused to stay silent in the face of the abuse she was getting, a move that no doubt increased the amount of harassment sent her way, but one that also caught the attention of the media. And so Milo finally got the ban he has so long deserved.
But what about all those others who participated in the abuse? And the rest of those who’ve turned the Twitter platform into one of the Internet’s most effective enablers of bullying and abuse?
In a statement, Twitter said it was reacting to “an uptick in the number of accounts violating [Twitter’s] policies” on abuse. But as the folks who run Twitter know all too well, the campaign against Jones, as utterly vicious as it was, wasn’t some kind of weird aberration.
It’s the sort of thing that happens every single day on Twitter to countless non-famous people — with women, and people of color, and LGBT folks, and Jews, and Muslims (basically anyone who is not a cis, white, straight, non-Jewish, non-Muslim man) being favorite targets.
Twitter also says that it will try to do better when it comes to abuse. “We know many people believe we have not done enough to curb this type of behavior on Twitter,” the company said in its statement.
We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.
This is good news. At least if it’s something more than hot air. Twitter desperately needs better policies to deal with abuse. But better policies won’t mean much if they’re not enforced. Twitter already has rules that, if enforced, would go a long way towards dealing with the abuse on the platform. But they’re simply not enforced.
Right now I don’t even bother reporting Tweets like this, because Twitter typically does nothing about them.
https://twitter.com/Bobcat665/status/735282887965085697
And even when someone does get booted off Twitter for abuse, they often return under a new name. Though this is in direct violation of Twitter’s rules, ban evaders are so seldom punished that most don’t even bother to pretend to be anyone other than who they are.
Longtime readers here will remember the saga of @JudgyBitch1 and her adventures in ban evasion.
Meanwhile, babyfaced white supremacist Matt Forney’s original account (@realMattForney) was banned some time ago; he returned as @basedMattForney. When this ban-evading account was also banned, he started up yet another, under the name @oneMattForney, and did his best to round up as many of his old followers as possible.
https://twitter.com/onemattforney/status/753087810006085634
A few days later, Twitter unbanned his @basedMattForney account.
And here’s yet another banned Twitterer boasting about their success in ban evasion from a new account:
https://twitter.com/_AltRight_Anew/status/755643864036339716
And then there are all the accounts set up for no other reason than to abuse people. Like this person, who set up a new account just so they could post a single rude Tweet to me:
In case you’re wondering, the one person this Twitter account follows is, yes, Donald Trump.
And then there’s this guy, also with an egg avatar, and a whopping three followers, who has spewed forth hundreds of nasty tweets directed mostly at feminists.
Here are several he sent to me, which I’ve lightly censored:
And some he’s sent to others.
So, yeah. Twitter is rotten with accounts like these, set up to do little more than harass. And if they ever get banned, it only takes a few minutes to set up another one.
Milo used his vast number of Twitter followers as a personal army. But you don’t need a lot of followers to do a lot of damage on Twitter. All you really need is an email address and a willingness to do harm.
It’s good that Twitter took down one of the platform’s most vicious ringleaders of abuse. But unless Twitter can deal with the small-time goons, with their anime avatars and egg accounts, as well, it will remain one of the Internet’s most effective tools for harassment and abuse.
@Hambeast
If that’s true, and I can easily see it being the case, then there is no economic incentive to change: lip service and a ban at most, to cool down bad PR. Then it is business as usual.
http://www.vox.com/2016/7/20/12226070/milo-yiannopoulus-twitter-ban-explained
I think the theory that Twitter willingly enables harassment starts from a good basis, but it’s too strong and overestimates how rational internet firms can be.
Twitter *does* have a strong incentive to keep a wholesome image, both in the eyes of the general public and in the eyes of investors. However, it has a combination of factors working against quick and efficient moderation:
* it costs gobs of money to moderate quickly, and Twitter is cash-strapped
* false positives, a.k.a. unjustly banned people, are a very big problem for a social network
* the guys inside Twitter likely don’t understand how bad it can get, and take the line of “it’s virtual, so no harm done.” It’s a very prevalent view in the industry, after all.
* moderation isn’t a sexy task to give to devs, and isn’t a sexy feature to sell to investors and media.
So it’s easy to see how Twitter in practice enables harassment without actually trying to do so, or realizing it does. That’s a case where stupidity can adequately stand in for malign intent.
(For the record, I have worked at one of the firms that created Facebook games. It took us a really long time, like two years, to realize that our monetization scheme was sucking dry seriously poor and depressed people; and quite a few of the devs took the line of “eh, they just have to be less stupid” when it was very, very obvious that we were exploiting misery.)
Thought I’d provide some soundtrack: https://www.youtube.com/watch?v=PHQLQ1Rc_Js
Not sure what to say about the rest of it that hasn’t been said. It’s a really complicated situation, from the looks of it. Hopefully a solution to this is found soon.
@Skullpants Checking the activity of new accounts would help stop a good part of the mass harassment that’s been such a big problem lately. Your account is two minutes old and all you’ve posted is offensive comments in a hate tag? Boot. That’d be great. Not sure how widely that can be applied without turning up a bunch of false positives, though.
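A sketch of what that check might look like (a rough illustration only; the thresholds, the flagged-hashtag watchlist, and all names are placeholders, not anything Twitter actually does):

```python
# Sketch of the new-account check described above. The thresholds and
# the flagged-hashtag watchlist are invented placeholders.
from datetime import datetime, timedelta

FLAGGED_HASHTAGS = {"#somehatetag"}      # hypothetical watchlist
MIN_ACCOUNT_AGE = timedelta(hours=24)

def looks_like_throwaway(created_at: datetime, hashtags_used: set[str]) -> bool:
    """Flag brand-new accounts whose activity sits in a flagged hashtag.

    Flags for human review rather than auto-banning, to limit false
    positives on legitimate new users wandering into a heated tag.
    """
    too_new = datetime.utcnow() - created_at < MIN_ACCOUNT_AGE
    in_hate_tag = bool(hashtags_used & FLAGGED_HASHTAGS)
    return too_new and in_hate_tag
```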
I agree. I think it’s important to remember that the top management at social media companies tends to be overwhelmingly white and male. It’s more of a privileged cluelessness thing than an actively pro-harassment thing. Not that it’s an excuse, just an explanation as to why these companies aren’t quite on the ball.
Here’s their leadership page
https://about.twitter.com/company/press/leadership
The CEO, COO, and CTO, that is, the people who would be most responsible for this, are all white guys.
@Scildfreja re: MAC bans
(De-lurking to be pedantic) MAC bans aren’t going to be effective on the internet for a couple of reasons, the main ones being:
-They aren’t nearly as permanent and hard-coded as you might think: most Ethernet drivers let you override the address burnt into the PROM, and they aren’t hard-coded on any Wi-Fi hardware I’ve ever seen.
-MAC addresses exist in the bowels of the network stack and aren’t usually exposed to upper layers (nothing outside of your local network has any idea what your MAC address is, typically, and even on your local network a web server never sees a MAC and has no idea what it is). I guess you could ask the client to provide it, but verifying that it isn’t lying is going to be a Hard Problem™ (and ultimately futile given the first issue).
-(bonus historical pedantry) there are tons of (deader than disco) network technologies that have no concept of a “MAC address”
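To make the first point concrete, here’s a minimal sketch of how trivially a MAC can be changed on a typical Linux box using the standard `ip` tool (needs root; the interface name is a placeholder):

```python
# Minimal sketch: randomize a network interface's MAC address on Linux
# via the standard `ip` tool. Needs root; the interface name passed in
# is a placeholder for whatever the machine actually has.
import random
import subprocess

def random_mac() -> str:
    """Build a random unicast, locally-administered MAC address.

    Setting bit 1 of the first octet marks the address as locally
    administered, i.e. deliberately not the one burnt into the PROM.
    """
    first = (random.randint(0, 255) & 0b11111100) | 0b00000010
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{b:02x}" for b in [first] + rest)

def spoof_mac(interface: str) -> str:
    """Bring the interface down, assign a fresh MAC, bring it back up."""
    mac = random_mac()
    for args in (["down"], ["address", mac], ["up"]):
        subprocess.run(["ip", "link", "set", "dev", interface] + args, check=True)
    return mac

# spoof_mac("wlan0")  # after this, any MAC-based ban no longer matches
```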
Speaking of twitter bans and evading of them. Look who’s back!
https://twitter.com/chuckcj0hnson
From the Vox article Jamesworkshop linked:
That’s the problem: Stepping stones are useless unless they’re, you know, stepped on. What I think will happen is the furor will die down and it’ll be business as usual (IOW, carry on, harassers!) especially for harassed people who aren’t famous or influential.
ETA: Ohlmann, I don’t disagree at all with you. But it all washes out to mean that nothing changes and the net result is still what I quoted.
@WWTH
To further cement the case for ignorance rather than spite, here’s one of my favorite newscasters, Secular Talk:
https://m.youtube.com/watch?v=j-RhNIPKacc
While he is against libel and slander, he doesn’t exactly understand that Milo is a repeat offender and perpetrator of harassment and hate speech on Twitter, stuff that is in violation of Twitter’s rules.
Which I find frustrating, because on every other issue, economic, corruption, anti-bigotry and such, I do agree with him, but on stuff like this I find his knowledge base stunningly lacking. Like him being unable to admit that half the atheist base is as incredibly racist and sexist as some religious pundits. And I wish he understood more about feminism.
Someone asked whether algorithms could be written to detect whether someone’s being abusive on Twitter, so that a new egg or old face spewing hate by the bucket could be quarantined.
Yup, we can do that! It requires some serious NLP magic (Not MLP magic, though I’m sure that would work too), but sentiment analytics is a very active area of research.
Right now, the best algorithms we have would catch abusers. They would also catch any of the abused people who are reacting to said abuse, though. So if you are being harangued by a few hundred sea lions, you either keep quiet, ignore them, and wait for the algorithm to trip on them, or you reply and risk getting caught yourself.
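To make that concrete, here’s a toy illustration (the word list, threshold, and example tweets are all invented): a crude bag-of-words scorer flags the target’s angry reply right along with the original abuse.

```python
# Toy bag-of-words "abuse" scorer. The word list and threshold are
# invented for illustration; real systems use much richer NLP features.
ABUSIVE_TERMS = {"trash", "idiot", "die", "ugly", "stupid"}
THRESHOLD = 2

def abuse_score(tweet: str) -> int:
    """Count how many flagged terms appear in a tweet."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return len(words & ABUSIVE_TERMS)

harasser_tweet = "You're trash and an idiot, just die already"
victim_reply = "Stop calling me stupid and ugly, you absolute idiot!"

for tweet in (harasser_tweet, victim_reply):
    print(abuse_score(tweet) >= THRESHOLD, ":", tweet)
# Both print True: the victim's reply trips the filter just as hard,
# which is exactly the problem described above.
```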
(I also guarantee that someone in the chanbase would start peeling apart the NLP libraries available in order to find out which lemmas trip the algorithm and then devise ways to be just as awful and abusive without tripping them)
The trenches of internet security are muddy and bloody and gross. Don’t come here D:
@Scildfreja: NLP is like 3D printing three years ago. It’s promising, but we’re not quite there. Cutting-edge implementations can give you very impressive proofs of concept, and some specific firms already use it with varying success, but in practice it’s already hard and relatively unreliable to use it just to guess the age and sex of people who aren’t even trying to fool it.
Automated bans are already massively used to harass people on Facebook, no? That’s not a natural-language-processing system, but it’s a reminder that automation is very much a double-edged sword.
I think that “technical progress will fix it!” won’t work here at all. What is needed is for harassers to be actually shunned and isolated by society. I believe more in educating people toward that end, given Silicon Valley’s terrible track record in improving the lives of anyone other than rich white people.
Last year I read a very interesting paper about the detection of what were referred to as “future banned users.” The authors suggested that it might be possible to use standard advertising-demographics-profiling machine learning tools to identify which people were going to be banned in the future, in the hope that they could just be banned immediately and save everyone the trouble.
While I applaud the authors’ sentiment, I find myself agreeing with Ohlmann: the problem of false positives would be a huge PR issue, and may put people off using any software that pre-bans people in this fashion.
Sadly I can’t find the paper again, otherwise I’d link it. Scildfreja will doubtless know more than I do about it in any case.
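For a flavor of how such a classifier might be built (purely illustrative: the features, the toy data, and the review-not-ban policy here are my own assumptions, not anything from that paper), a standard logistic regression over early account behavior would look something like this:

```python
# Illustrative "future banned user" classifier. Features and data are
# invented, not from the paper mentioned above. Needs numpy/scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical first-week features per account:
# [posts_per_day, fraction_of_posts_that_are_replies, reports_received]
X = np.array([
    [ 3.0, 0.20, 0],   # ordinary user
    [ 1.0, 0.50, 0],   # ordinary user
    [40.0, 0.90, 7],   # later banned
    [25.0, 0.80, 4],   # later banned
])
y = np.array([0, 0, 1, 1])  # 1 = eventually banned

model = LogisticRegression().fit(X, y)

# Score a brand-new account and queue it for *human* scrutiny, rather
# than auto-banning it outright.
new_account = np.array([[30.0, 0.85, 5]])
print("ban risk:", model.predict_proba(new_account)[0, 1])
```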
I mostly use anonymous sites that run similarly to 4chan, but I’m not a snob about it, and when I see someone who does maintain an online identity get singled out, it’s hard for me to blame them simply for being an easy target. I can also recognize that on platforms like Twitter, which aren’t intended for anonymity, this laissez-faire approach to violent, abusive rhetoric is outdated at best.
@Scildfreja
But, as you said before about ghosting, it might give the targets some breathing room.
Personally, I like your ghosting idea. If it’s possible to exclude replies, that would make it more accurate, yes? I don’t know if that’s possible, though. Either way, the ghosting would be really useful.
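A bare-bones sketch of how ghosting could work (all names and data structures invented): the ghosted account still sees its own tweets, so it takes longer to notice the filter, while everyone else’s timeline quietly drops them.

```python
# Bare-bones sketch of "ghosting" (a.k.a. shadow moderation). Names
# and data structures are invented for illustration.
ghosted_users: set[str] = {"some_harasser"}

def visible_timeline(viewer: str, tweets: list[tuple[str, str]]) -> list[str]:
    """Return the tweets a viewer should see.

    Ghosted authors still see their own tweets, so they don't
    immediately realize they're being filtered out.
    """
    return [
        text for author, text in tweets
        if author not in ghosted_users or author == viewer
    ]

timeline = [("some_harasser", "abusive tweet"), ("friend", "hello!")]
print(visible_timeline("target", timeline))         # ['hello!']
print(visible_timeline("some_harasser", timeline))  # sees both tweets
```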
I’m so frustrated by Twitter. Sure, they’re all “Look! We banned one asshole!” Whooptie do, big fuckin’ deal. You don’t get cookies for taking care of one relatively small part of the problem and ignoring the rest of it, especially when the problem has real-world consequences.
I want their “leadership” to see what kind of harassment users are getting, and understand it. I want them to do something substantial and meaningful to combat it rather than mouth useless platitudes and continue the status quo. I’m in a grumpy mood, so I’m (currently) ok with them experiencing the same amount of harm as their users.
@EJ TOO
I’m not cool with pre-emptive bans, personally, but I think identifying people who are higher-risk so they can have some extra scrutiny is a good idea. Wait til they actually do something ban-worthy, but the scrutiny will help shut it down faster/earlier.
@Alan
Yes, collective punishment does get controversial, and the vast majority of people (including me) find it hugely unfair. I’d imagine there almost has to be a better way.
@scildfreja
I agree with Alan that some of your ideas about MAC-banning are skirting up to the collective-punishment line, which I inherently find unfair. I also question how well it’d work; for example, couldn’t a public place like a library petition to have its banned MACs restored?
Please don’t take that the wrong way. You’re a lovely person, and I love talking to you and seeing how calm and intelligent you are in tough situations; I just don’t think you’re necessarily on the right track here.
@wwth
Excellent idea!
@Oogly
Could Twitter users volunteer to serve as moderators, or would that cause more problems than it solved?
For all those tools, one needs actual statistical tools to decide. If a tool detects 90% of harassers but bans ten times as many innocent people as harassers, it’s not terribly usable in practice. If it detects only 20% of harassers, the breathing room provided will be quite limited. Even providing it to humans as a rough indicator seems a bad idea, because humans tend not to grasp that machines might be fallible, and I fear they would follow the automated opinion in most cases regardless of the situation.
Preemptive bans are a horrible idea. That’s, at best, punishing people for what they intend to do, which is a Brazil-level bad idea. When your best case is literally a sci-fi dystopia, that’s a strong sign it’s not a promising lead.
I have even more of a reason not to use Twitter. It makes the NSA privacy controversy look like a joke, especially when the legitimate reasons for anonymous accounts have to be associated with vile people who abuse anonymity.
@Oogly
Great idea! I fear that’s not enough tho. Such a team would need all-new algorithms (digital and human) to effectively handle the shit sea. Like, say, Buttercup’s genius suggestions.
@Buttercup
Moderation/ghosting is such a good idea I’m actually a bit peeved I didn’t think of it. And I spent hours last night tryna figure this out
Re: false accusations
1)People’s safety from harassment, abuse, doxing, etc. is more important (ethically anyway) than a few people being unnecessarily banned from Twitter
2)How hard would it be to implement a trust system? How do I explain this… Say you report something as harassment, and it turns out not to be (or at least not according to Twitter rules). Would it make sense/be feasible for Oogly’s Twitter KGB to keep tabs on who is more or less likely to accurately report things?
Like, the team would need a certain number of reports before they act, so as not to overwhelm them. Someone with a perfect record counts as a full report, someone with a worse record counts as a partial report. When the total adds up to 20 or whatevs, then it’s go time
You’d need to keep it secret, in the background. Anyone without a record would be automatically considered trustworthy, and untrustworthy reports would still count (just less so)
There’s probably something obvious I’m not thinking of that makes this whole thing untenable…
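A minimal sketch of that weighting scheme (the 20-report threshold comes from the comment above; the weighting formula, the floor, and all names are my own invention):

```python
# Minimal sketch of a reporter-trust system. The threshold of 20
# weighted reports is from the comment; everything else is invented.
from collections import defaultdict

ACTION_THRESHOLD = 20.0

class TrustTracker:
    def __init__(self):
        # reporter -> [accurate_reports, total_reports]
        self.history = defaultdict(lambda: [0, 0])

    def weight(self, reporter: str) -> float:
        """No record counts as fully trustworthy (1.0); a bad record
        lowers a report's weight but never zeroes it out."""
        accurate, total = self.history[reporter]
        if total == 0:
            return 1.0
        return max(0.25, accurate / total)

    def record_outcome(self, reporter: str, was_accurate: bool) -> None:
        self.history[reporter][1] += 1
        if was_accurate:
            self.history[reporter][0] += 1

def should_escalate(tracker: TrustTracker, reporters: list[str]) -> bool:
    """Hand the case to the human team once weighted reports hit the bar."""
    return sum(tracker.weight(r) for r in reporters) >= ACTION_THRESHOLD
```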
@Ohlmann
I’da thought that someone who’s willing to defend the accused would fit right in with ‘SJW’. How many racists you think still salivate with rage about To Kill a Mockingbird 50+ years later? 🙂
@Buttercup
I love all your ideas!
@WWTH:
Your idea is sort of in place already. In South Korea, the law states that whenever you sign up for an online service you must link it to your real-world ID. The other users can’t necessarily see this link, but the site admins can.
This works because South Korea has a stranglehold on Korean-language websites and so can effectively make laws to govern the Korean-language web. For English it might be much harder.
@Nikki: using volunteer moderators works for small volumes. Here, the sheer logistics of vetting them, and of not letting a harasser become a moderator, quickly become overwhelming.
Given Twitter’s volume, they would need thousands of moderators for a proactive approach, and likely at least 100 or 150 just to check complaints in depth. They would also need to handle foreign-language moderation. The aforementioned Facebook game firm actually closed its forums because they were both a financial disaster (10% of the workforce working as full-time moderators) and a hellhole. I don’t think Twitter would have it much better.
@Ohlmann:
Personally I am extremely happy to be banned as a false positive, if it makes it harder for harassers to use a service. Others may disagree.
Aw, thank you, @Nikki <3 I agree that flat MAC banning is problematic, and there are collective-punishment problems. It's sort of inherent to thinking about security and harassment that you have to start asking hard questions about what sort of unintended damage is "acceptable," though, so I don't mind following those lines of thought. You either think them through and accept them explicitly, or you don't think about them and wash your hands of them. Better to consider them directly.
I don't like collective punishment; it's awful. It does hand enforcement responsibilities to people who are (perhaps) better placed to apply meaningful punishments, but who are also forced into applying those punishments, because they’re being unjustly punished themselves. If I were building some sort of MAC-level ban system (setting aside all the adjustable MAC systems out there, which is another issue), I’d ensure that it rolled out with an easy way for people hit by the splash to have the ban rescinded. Libraries and public services could register as such, and private addresses could get a multiple-strikes-and-you’re-out sort of thing; soft banning with quick recovery.
It’s all hypothetical, though, and there are still lots of unintended problems with it. My lab’s working on a separate solution, a sort of encrypted internet ID that provides unique identification while still maintaining anonymity. It’s slow going, but there are some promising features!
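A rough sketch of that soft-ban idea (the three-strike limit and the public-address registry are illustrative assumptions, not a worked-out design):

```python
# Rough sketch of soft MAC banning with quick recovery. The strike
# limit and the registered-public-address list are assumptions.
PUBLIC_REGISTRY = {"aa:bb:cc:dd:ee:ff"}  # libraries, schools, cafes...
STRIKE_LIMIT = 3

strikes: dict[str, int] = {}

def report_abuse(mac: str) -> str:
    """Record a strike; registered public addresses are never banned
    outright, since that would punish every patron at once."""
    if mac in PUBLIC_REGISTRY:
        return "notify venue"
    strikes[mac] = strikes.get(mac, 0) + 1
    return "soft ban" if strikes[mac] >= STRIKE_LIMIT else "warning"

def appeal(mac: str) -> None:
    """The quick-recovery path: a successful appeal clears the strikes."""
    strikes.pop(mac, None)
```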
@Axecalibur: “a few”? That’s unlikely to be “banning a few false positives.”
To explain the problem quickly, and in the hope that my rusty stats aren’t too bad, let’s take an algorithm that catches 90% of harassers and has a false positive rate of 5%. That seems great, right? It’s actually so terribly bad that medical tests with those stats would be banned in any sensible country.
Why so? Because there are many, many more regular users than harassers. There are 645 million users; if 1 million of them are harassers, that algorithm would ban *35* times more innocents than harassers.
Since innocent people are very unlikely to ever set foot in Twitter again, if we suppose 50% of banned harassers come back and 0% of innocents come back, the algorithm would ban more than *half* of Twitter by the time fewer than 1,000 harassers remain.
In other words, Twitter needs an algorithm whose false positive rate is very, very close to zero. There may be other properties it would need too; I only cited the classic example from my memory of statistics classes. From which my main lesson is: “ask an actual statistician to evaluate the odds instead of trusting your gut; humans are hardwired to be absolutely terrible at stats, and literally unable to have good intuition even if their lives depend on it.”
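The base-rate arithmetic behind those numbers, worked through with the figures from the comment:

```python
# Base-rate arithmetic using the figures from the comment above.
users = 645_000_000
harassers = 1_000_000
innocents = users - harassers

catch_rate = 0.90           # true positive rate
false_positive_rate = 0.05

harassers_banned = catch_rate * harassers            # 900,000
innocents_banned = false_positive_rate * innocents   # 32,200,000

# ~35.8 innocent users banned for every harasser caught
print(innocents_banned / harassers_banned)
```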