
Twitter bans Milo for good, finally. But what about his goons?

Milo Yiannopoulos: A martyr, in his own mind

So Twitter has finally given Milo Yiannopoulos the boot — apparently for good — after the Breitbart “journalist” gleefully participated in, and egged on, a vicious campaign of racist abuse directed at Ghostbusters star Leslie Jones on Twitter earlier this week.

This wasn’t the first time that Milo, formerly known as @Nero, used his Twitter platform — at the time of his suspension he had 338,000 followers — to attack and abuse a popular scapegoat (or someone who merely mocked him online). It wasn’t even the worst example of his bullying.

What made the difference this time? Leslie Jones, who has a bit of a Twitter following herself, refused to stay silent in the face of the abuse she was getting, a move that no doubt increased the amount of harassment sent her way, but one that also caught the attention of the media. And so Milo finally got the ban he has so long deserved.

But what about all those others who participated in the abuse? And the rest of those who’ve turned the Twitter platform into one of the Internet’s most effective enablers of bullying and abuse?

In a statement, Twitter said it was reacting to “an uptick in the number of accounts violating [Twitter’s] policies” on abuse. But as the folks who run Twitter know all too well, the campaign against Jones, as utterly vicious as it was, wasn’t some kind of weird aberration.

It’s the sort of thing that happens every single day on Twitter to countless non-famous people — with women, and people of color, and LGBT folks, and Jews, and Muslims (basically anyone who is not a cis, white, straight, non-Jewish, non-Muslim man) being favorite targets.

Twitter also says that it will try to do better when it comes to abuse. “We know many people believe we have not done enough to curb this type of behavior on Twitter,” the company said in its statement.

We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.

This is good news. At least if it’s something more than hot air. Twitter desperately needs better policies to deal with abuse. But better policies won’t mean much if they’re not enforced. Twitter already has rules that, if enforced, would go a long way towards dealing with the abuse on the platform. But they’re simply not enforced.

Right now I don’t even bother reporting Tweets like this, because Twitter typically does nothing about them.

https://twitter.com/Bobcat665/status/735282887965085697

And even when someone does get booted off Twitter for abuse, they often return under a new name — and though this is in direct violation of Twitter’s rules, ban evaders are so seldom punished for it that most don’t even bother to pretend to be anyone other than who they are.

Longtime readers here will remember the saga of @JudgyBitch1 and her adventures in ban evasion.

Meanwhile, babyfaced white supremacist Matt Forney’s original account (@realMattForney) was banned some time ago; he returned as @basedMattForney. When that ban-evading account was banned as well, he simply started up yet another one, under the name @oneMattForney, and did his best to round up as many of his old followers as possible.

https://twitter.com/onemattforney/status/753087810006085634

A few days later, Twitter unbanned his @basedMattForney account.

And here’s yet another banned Twitterer boasting about their success in ban evasion from a new account:

https://twitter.com/_AltRight_Anew/status/755643864036339716

And then there are all the accounts set up for no other reason than to abuse people. Like this person, who set up a new account just so they could post a single rude Tweet to me:

[screenshot: “femborg” tweet]

In case you’re wondering, the one person this Twitter account follows is, yes, Donald Trump.

And then there’s this guy, also with an egg avatar, and a whopping three followers, who has spewed forth hundreds of nasty tweets directed mostly at feminists.

Here are several he sent to me, which I’ve lightly censored:

[screenshots of the tweets]

And some he’s sent to others.

[screenshots of the tweets]

So, yeah. Twitter is rotten with accounts like these, set up to do little more than harass. And if they ever get banned, it only takes a few minutes to set up another one.

Milo used his vast number of Twitter followers as a personal army. But you don’t need a lot of followers to do a lot of damage on Twitter. All you really need is an email address and a willingness to do harm.

It’s good that Twitter took down one of the platform’s most vicious ringleaders of abuse. But unless Twitter can deal with the small-time goons as well, with their anime avatars and egg accounts, it will remain one of the Internet’s most effective tools for harassment and abuse.

211 Comments
Scildfreja
8 years ago

@Ohlmann is right: without a nearly-perfect system, you’d have so many false positives that most of the people the algorithm caught would be incorrectly banned.

There’s also way too much twitter twittering out there to properly monitor directly.

The best solution is a combination. Use the algorithm to fish out the most likely offenders, then use human eyeballs to separate the false positives from the harassers. It’d be very expensive, that’s a lot of eyeballs to pay for, but it’s the most thorough realistic way to do it that I can think of.
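A minimal sketch of that two-stage setup, assuming a hypothetical upstream classifier score and illustrative thresholds (nothing here reflects Twitter’s actual systems):

```python
# Two-stage moderation sketch: a classifier does cheap triage, and only the
# high-scoring reports are routed to (expensive) human reviewers.
# All names and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Report:
    tweet_id: int
    text: str
    abuse_score: float  # 0.0-1.0, produced by some upstream classifier

def triage(reports, review_threshold=0.8, dismiss_threshold=0.2):
    """Split reports into: likely abuse (human review), likely benign
    (dismiss), and everything in between (low-priority backlog)."""
    to_review, dismissed, backlog = [], [], []
    for r in reports:
        if r.abuse_score >= review_threshold:
            to_review.append(r)      # human eyeballs weed out the false positives
        elif r.abuse_score <= dismiss_threshold:
            dismissed.append(r)
        else:
            backlog.append(r)
    return to_review, dismissed, backlog

reports = [
    Report(1, "die in a fire", 0.95),
    Report(2, "I disagree with your review", 0.05),
    Report(3, "you people are the worst", 0.55),
]
queue, dismissed, backlog = triage(reports)
print(len(queue), "for human review,", len(dismissed), "dismissed,", len(backlog), "backlogged")
```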

Would be interesting to set up a nonprofit which companies like Twitter could apply to in order to help ameliorate abuse-mitigation costs. They’re unlikely to do it themselves, but if you offered them a subsidy to do it? I bet they’d be much, much more agreeable.

(Of course, funding the nonprofit would be pretty tough; you need more than a kickstarter for that… I bet that the GG of Canada would be into that, though. Hmmmmm.)

Ooglyboggles
8 years ago

I’m sorry for not keeping up; this sort of thing involving algorithms, Korean laws, ghosting, dedicated moderators and concerns about false positives is all really complicated stuff to me, logistics-wise. If there is one thing I can draw from all of this, though, it’s that no matter which option, or combination of options, gets used, there is no easy fix here that doesn’t involve Twitter doing some massive restructuring on their part.

That also includes figuring out which groups they really think will bring them the most profit, both short and long term.

ViolinlessHoax
8 years ago

Re: libraries. As someone who works in a library I can say that there would be no real world consequences for anyone causing the MAC-ban in a library. Mostly this is because we’d have no idea who caused the ban, so we’d have no way of knowing whose library card we should revoke. At least, that’s how it works (or wouldn’t work) in a small municipal library; it’s possible larger libraries have more tools to monitor who did what online, but because privacy etc etc, I doubt it.

Also, as an aside, libraries are suffering so much already from dwindling numbers, I can imagine this rule would be unpopular with the higher-ups.

Re: moderation. I used to be on OKCupid and the way they did it there (or used to do it anyway, haven’t been there in a while) was to choose a few hundred “trustworthy” people from their user base and have them do a first round of moderation before the actual paid mods got to make a decision based on our input. How they determined who was “trustworthy” and who wasn’t, I have no idea. Personally, I had a lot of fun moderating for them, weeding out the people who were obviously toxic – which was 90% of the reports – and leaving short messages for the other mods. It wasn’t a perfect system either: you could tell some mods really got into it and took it seriously, but a surprising number of people would just 100% acquit offenders no matter what they did. “Oh, she clearly started it” and “lol loser reporter” were common comments from other mods on these reports.

Axecalibur: Middle Name Danger
8 years ago

@Ohlmann
Absolutely right. The false positive rate would have to be tiny. I don’t think they should let the possibility of false positives stop them from trying either. 99.99% accuracy sometimes begins at 95% and gets better from there. If a few people (or, as you rightly point out, more than a few) get banned (hard or soft) on the way to a better system, is that worth it? To make an omelette, one must 1st break some eggs (get it?). I just don’t want Twitter Corp, LLC to say ‘false positives! See, Leslie/Anita/whoever!? There’s nothing we can do’, and then forget about it…
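For a sense of why the false-positive rate matters so much, here is some back-of-the-envelope arithmetic with made-up numbers (the prevalence and rates are purely illustrative):

```python
# Why "95% accurate" isn't good enough: if only 1% of accounts are abusive,
# a detector with a 5% false-positive rate flags far more innocents than abusers.
# All numbers below are made up for illustration.

accounts = 1_000_000
abusive_fraction = 0.01          # assume 1% of accounts are genuinely abusive
true_positive_rate = 0.95        # the detector catches 95% of abusers...
false_positive_rate = 0.05       # ...but also flags 5% of innocent accounts

abusers = accounts * abusive_fraction            # 10,000
innocents = accounts - abusers                   # 990,000

flagged_abusers = abusers * true_positive_rate        # 9,500
flagged_innocents = innocents * false_positive_rate   # 49,500

precision = flagged_abusers / (flagged_abusers + flagged_innocents)
print(f"Share of flagged accounts that are actually abusive: {precision:.0%}")  # ~16%
```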

Or I have no idea what I’m talking about. Very possible

Robert
8 years ago

It occurs to me that almost none of the people I know face to face would have any idea who Milo is. That reassures me about how I’m living my life.

Scildfreja
8 years ago

The last thing I’d want to do is to hurt libraries, I love my little library :C Point taken! Too much splash.

The tiered, distributed mod system is a good step in mitigating moderation costs – maybe that’d be a way to get a blue checkmark? Providing a certain amount of valid moderation per month. You’d still want an algorithm to hunt down inappropriate behaviours, but having a volunteer first stage is a good way to minimize costs. Interesting!

Catalpa
8 years ago

I’ve heard a system proposed that would, I think, stem the tide of harassment a fair bit, if it were paired with harassers actually getting banned with some regularity.

Give users a setting they can select that auto-blocks any tweets coming from accounts younger than a certain age (the account must be at least two weeks old, say), or with fewer than a certain number of followers (10 or something). Then people who make throwaway accounts, sockpuppet accounts or ban-evading accounts have to wait a gratification-killing amount of time before they can start heaping on the abuse. And the ones that do wait out the time can be banned more easily, because there won’t be such a flood of assholes.
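A rough sketch of what such a filter could look like; the field names and thresholds are hypothetical, not Twitter’s actual API:

```python
# Sketch of the proposed opt-in filter: hide mentions from accounts that are
# too new or have too few followers. Field names and thresholds are
# illustrative only.

from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(weeks=2)
MIN_FOLLOWERS = 10

def should_hide(mention, now=None):
    now = now or datetime.utcnow()
    too_new = (now - mention["account_created_at"]) < MIN_ACCOUNT_AGE
    too_few_followers = mention["follower_count"] < MIN_FOLLOWERS
    return too_new or too_few_followers

mention = {
    "account_created_at": datetime.utcnow() - timedelta(days=2),
    "follower_count": 3,
    "text": "some throwaway abuse",
}
print(should_hide(mention))  # True: a two-day-old account with 3 followers
```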

Richard Joseph
8 years ago

How awesome is it that Milo Yiannopoulos, bad boy hero of the blogosphere, “The Ultimate Troll,” putting “social justice warriors” in their place daily, has…. less than 350K Twitter followers? If you look at Breitbart’s website, they’re treating this Twitter ban like Nixon had Walter Cronkite thrown into a gulag. They GENUINELY don’t seem to realize that 99.999% of Americans don’t have any idea who this guy is.

Paradoxical Intention - Resident Cheeseburger Slut

@Catalpa: I do like the idea of insta-moderation for all new accounts tweeting at people. We kind of do that here too.

However, it would be more up to the people getting tweeted at to moderate it, it sounds like. I feel like that’s really removing a lot of the onus from Twitter to handle their own shit.

On the other hand, it does seem like a good way to stem the tides of bullshit that some users see every day.

repentantphonebooth
8 years ago

This is so off topic and I am sorry, but you people seem like the right folks to turn to- I am looking for reliable statistics on the rate of false accusations for crimes other than rape. Does anyone have any easily accessible links you’d be willing to share?

Ooglyboggles
8 years ago

@Catalpa
Well, the GG blocker and similar tools tend to do a good job of blocking people. Unfortunately, while they do that, they can’t change the posting culture of Twitter.
@PI
The change has to come from within and with permanence; otherwise they’ll take the easiest route, which so far has been to allow such flagrant abuse to happen 24/7. What event or series of events could bring that about, I have yet to see.

Jake Hamby
8 years ago

To add to what 802.11cuck wrote: MAC addresses aren’t visible outside of the local LAN. Servers can’t see them and can’t block based on them. They’re not an option, even if all the other issues people brought up about shared computers and people using other computers were solved. It just won’t work because MAC addresses don’t ever leave the local network.

The Twitter mobile app may be able to obtain one or more unique identifiers from the smartphone, such as IMEI, IMSI, ICCID, or UDID, but then the privacy, spoofing, and device sharing concerns of blocking based on any of them would still apply. (IMSI & ICCID come from the SIM card, IMEI is a unique ID for the cell radio, and UDID is a unique iOS-only ID.)

PS: the South Korean “real name” law was overturned by their Constitutional Court in 2012 as a violation of free speech.

Virgin Mary
8 years ago

Most of our volunteer run community libraries which still have Internet access actually block people from using Facebook, Twitter and dating sites.

Catalpa
8 years ago

@paradoxical and oogly

Yeah, it does put some of the onus on the person being harassed, which is problematic. But it would make moderating hateful comments less of a gargantuan task, which might make Twitter more likely to actually DO something. It doesn’t really change the culture, either, but it would help the current victims of it at least some.

In terms of changing the culture… Hm, what if there was an automated thing that was triggered by certain keywords, but instead of banning people, it shifted them into a state where they couldn’t see the tweets of anyone who is also on the dickhead list? This means that innocent people who tweet in response to similar phrases might get blinkered a bit too, but mostly it would prevent them from seeing assholes, so it’s not as much of a hindrance as a banning would be. Might even be seen as a benefit. And the trolls and fuckheads wouldn’t be able to see all the other hatemob members and wouldn’t be able to feed into each other and validate each other.
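A toy sketch of that idea, with made-up keywords and data structures (purely illustrative, not anything Twitter actually offers):

```python
# Sketch of the "mutual muting" idea: accounts that trip certain keyword
# filters go on a list, and members of that list can't see each other's
# tweets, while everyone else's view is unchanged. Purely illustrative.

FLAGGED_KEYWORDS = {"feminazi", "kys"}   # example keywords only
muted_group = set()                      # user ids on the mutual-mute list

def maybe_flag(user_id, tweet_text):
    if set(tweet_text.lower().split()) & FLAGGED_KEYWORDS:
        muted_group.add(user_id)

def visible_to(viewer_id, author_id):
    # Only flagged users lose sight of other flagged users.
    return not (viewer_id in muted_group and author_id in muted_group)

maybe_flag(42, "kys loser")
maybe_flag(99, "typical feminazi take")
print(visible_to(42, 99))   # False: both flagged, so they can't see each other
print(visible_to(7, 99))    # True: unflagged users still see everything
```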

Buttercup Q. Skullpants
8 years ago

Thanks, Nikki and Axe! 🙂

I forgot that Reddit and Craigslist already do a form of ghosting (or shadowbanning) to discourage spammers…I think what I’m trying to get at is to find a way of making commenting a tiny bit more “expensive”, in a way that benign users wouldn’t notice but that would add up quickly to a major hassle when a troll is posting rapidly from multiple accounts. If a lie can be halfway around the world while the truth is still putting on its boots, then let’s tie an anvil around the liar’s ankle.

A while back I remember reading about a proposal to deal with spam by having mail servers return a small packet of junk data to the originator’s machine every time an email gets sent. An ordinary user emailing cat photos to Aunt Beulah wouldn’t notice anything, but a spammer sending out an email blast to half a million brute force addresses would see a significant performance hit. The bigger the spam list, the more degraded the performance becomes. Maybe Twitter could institute something like that, where slower, more measured users get rewarded with the fastest performance, while users with multiple socks unleashing torrents of abuse get their machines and accounts tied up for a while. Maybe the first 5 comments are free every day, and then after that comments are published at the rate of one…. word…. every……. fifteen…….. seconds, with the gap getting longer and longer the more they try to post. (I’d suggest a sliding fee scale for >5 comments, but that means people would get harassed mainly by rich assholes). Regular users probably wouldn’t be affected, unless they were live-tweeting a historic event. I’d have to think about how to make allowances for that (and for the fact that Twitter WANTS lively discussion and lots of people tweeting).
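A tiny sketch of that kind of escalating throttle, with illustrative numbers only:

```python
# Sketch of the escalating-delay idea: the first few posts per day go out
# immediately, and each one after that is held back by a gap that keeps
# growing. All numbers are illustrative.

FREE_POSTS_PER_DAY = 5
BASE_DELAY_SECONDS = 15

def publish_delay(posts_so_far_today):
    """How long (in seconds) to hold back the next post."""
    if posts_so_far_today < FREE_POSTS_PER_DAY:
        return 0
    excess = posts_so_far_today - FREE_POSTS_PER_DAY + 1
    return BASE_DELAY_SECONDS * excess   # the gap grows the more they post

for n in [0, 4, 5, 6, 10, 50]:
    print(f"post #{n + 1}: wait {publish_delay(n)}s")
# Ordinary users never notice; someone unleashing hundreds of tweets from
# sock accounts hits multi-minute delays very quickly.
```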

It’s a similar approach to the proposal to regulate ammo instead of guns. If bullets (or comments) cost the equivalent of $50 apiece, suddenly a semi-automatic rifle will seem a less attractive way of airing grievances.

ms_xeno
8 years ago

Pendraeg:

“Not to support Milo or any of his followers, the ban is well deserved and long overdue. Buuuuut in Twitter’s defense, chasing down socks and checking on every reported tweet is also a colossal task. They do need to do better about it and be more consistent but at the same time I believe the attitude is that if they ban anyone who is reported then no one will use the site and taking time to fully investigate every report is cost prohibitive.

“It’s not a great policy on their part by any means, but the lackluster response they have to such abuse can be understandable.”

I always wonder why, at this late stage of the game, the costs of the caretaking you describe aren’t factored into the platform when it’s first being built. At some point, it should occur to creators that it makes little sense not to account for the Milo Fan Clubs of the world. Just like it makes no sense to build bus stops without budgeting for a trashcan to be included/maintained in each one.

Of course, if the advertisers who spend so much time and cash luring us on Twitter ever thought to raise the (fully justified) stink Jones did, I’m guessing more than just this periodic posturing on Management’s part would happen. But… who am I kidding? They don’t care about it any more than Milo’s bosses do. :/

Axecalibur: Middle Name Danger
8 years ago

@Buttercup

deal with spam by having mail servers return a small packet of junk data to the originator’s machine every time an email gets sent…

That’s fuckin devious
http://static9.comicvine.com/uploads/scale_super/5/52246/2060390-i_like_it.jpg
End of the day, it probably wouldn’t work, cos of the reasons you brought up. Still, love the idea to bits

It’s a similar approach to the proposal to regulate ammo instead of guns

Fruitloopsie
8 years ago

WWTH
Dear God, that poor woman. My heart is hurting. Should we start a petition to bail her out? Though I don’t know how to start a petition, and don’t quite understand how pretty much anything works.

Imaginary Petal
8 years ago

@Fruitloopsie

[I] don’t quite understand how pretty much anything works.

This will be my life’s motto from now on. :p

Ooglyboggles
8 years ago

Well, at the very least I felt our discussion here was productive in figuring out a business pitch and blueprint for improving Twitter. I found it fascinating to see the different ways the moderation process could work.

AlphaBeta Soup
8 years ago

I agree with Seshia that counter-brigading might be an effective tactic to counter hate. I notice that left-wing, anti-racist, feminist sites always have right-wing, racist or misogynist trolls, but right-wing sites seldom have the reverse. My theory is that right-wingers, racists and misogynists like to argue with and bully people who don’t agree with them more than left-wing types do.

It is also exhausting to visit the cesspools that some of those sites’ comment sections have become. A tough task for a sensitive person.

But I believe it would do a lot of good if more people of a leftish persuasion would visit the comment sections of certain right-wing sites and counter some of their arguments with polite, reasonable comments. I’m not talking about Stormfront or the Daily Stormer or their ilk. People who would post there in the first place are too far gone.

I used to go to TakiMag, home of people like Steve Sailer, Gavin McInnes, and John Derbyshire, who fancy themselves “race realists” and make pseudointellectual racist arguments rather than mindless hateful diatribes. I finally had to quit because it was too emotionally draining, but it might not have been if I’d had support from others. I don’t know if my efforts were in vain, but I hope I reached some fence-sitters or new readers and helped them to reject racism with my comments.

Ooglyboggles
8 years ago

@AlphaBeta Soup
Well, you might have to count guys like me out. I would throw insults and mock them relentlessly while tossing out stats and articles to debunk the parroted talking points. I know the people I argue with aren’t going to change anytime soon, so I might as well make them as mad as I am. And if I can offer some justification for it: some people just won’t be moved by niceness.

Your method is certainly a lot more productive. But from what I’ve seen documented, it’s better to do it one on one; that separates them from the hate group and allows them time to think and reconsider. I just figure that for counter-brigading to work, society as a whole needs to undergo some change.

Ohlmann
8 years ago

@Axe : the problem is that, at 95% accuracy, it would destroy Twitter in relatively short order. They need very high accuracy to be able to use such a system without killing the platform.

@counter brigading : fostering hatred through trolling and brigading seems counter-productive to me. I don’t formally condemn it because, well, they are assholes. But remember that Lovecraft quote: “when you gaze into the abyss, the abyss gazes also into you”.

Pony's Labia
8 years ago

I don’t have a Twitter because I find it confusing and repetitive.

I find the less time I spend interacting with people negatively on the internet, the happier I am. I can do without Twitter.

authorialAlchemy
8 years ago

Maybe Twitter can do something like OK Cupid does with its moderation? OKC assigns well behaved users to act as a jury for questionable content. If enough mods agree the user broke rules, the content is removed or the user is banned. You can opt out of it if you don’t want to be a mod.
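A small sketch of that jury-style flow, with an illustrative agreement threshold (not OKCupid’s or Twitter’s actual rules):

```python
# Sketch of jury-style moderation: a reported item is shown to several trusted
# users, and action is taken only if enough of them agree; otherwise it gets
# escalated to paid staff. The threshold and vote values are illustrative.

from collections import Counter

AGREEMENT_THRESHOLD = 4   # how many jurors must vote "remove"

def jury_verdict(votes):
    """votes: list of 'remove' / 'keep' strings from individual jurors."""
    tally = Counter(votes)
    return "remove" if tally["remove"] >= AGREEMENT_THRESHOLD else "escalate_to_staff"

print(jury_verdict(["remove", "remove", "remove", "remove", "keep"]))  # remove
print(jury_verdict(["remove", "keep", "keep", "remove", "keep"]))      # escalate_to_staff
```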

Although, some people still think whether or not something is racist is a matter of opinion, or that if it is, it’s protected as free speech. It’s not as simple as “wow, this is a dick pic, that doesn’t belong here!”

@Ohlmann: Out of curiosity, what game did you work on, or at least, how did it exploit misery?