Categories
"ethics" alt-right andrea hardie anime nazis anti-Semitism antifeminism empathy deficit entitled babies harassment hate speech literal nazis lying liars milo misogynoir misogyny sockpuppetry twitter

Twitter bans Milo for good, finally. But what about his goons?

Milo Yiannopoulos: A martyr, in his own mind

So Twitter has finally given Milo Yiannopoulos the boot — apparently for good — after the Breitbart “journalist” gleefully participated in, and egged on, a vicious campaign of racist abuse directed at Ghostbusters star Leslie Jones on Twitter earlier this week.

This wasn’t the first time that Milo, formerly known as @Nero, used his Twitter platform — at the time of his suspension he had 338,000 followers — to attack and abuse a popular scapegoat (or someone who merely mocked him online). It wasn’t even the worst example of his bullying.

What made the difference this time? Leslie Jones, who has a bit of a Twitter following herself, refused to stay silent in the face of the abuse she was getting, a move that no doubt increased the amount of harassment sent her way, but one that also caught the attention of the media. And so Milo finally got the ban he has so long deserved.

But what about all those others who participated in the abuse? And the rest of those who’ve turned the Twitter platform into one of the Internet’s most effective enablers of bullying and abuse?

In a statement, Twitter said it was reacting to “an uptick in the number of accounts violating [Twitter’s] policies” on abuse. But as the folks who run Twitter know all too well, the campaign against Jones, as utterly vicious as it was, wasn’t some kind of weird aberration.

It’s the sort of thing that happens every single day on Twitter to countless non-famous people — with women, and people of color, and LGBT folks, and Jews, and Muslims (basically anyone who is not a cis, white, straight, non-Jewish, non-Muslim man) being favorite targets.

Twitter also says that it will try to do better when it comes to abuse. “We know many people believe we have not done enough to curb this type of behavior on Twitter,” the company said in its statement.

We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders. We have been in the process of reviewing our hateful conduct policy to prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted. We’ll provide more details on those changes in the coming weeks.

This is good news. At least if it’s something more than hot air. Twitter desperately needs better policies to deal with abuse. But better policies won’t mean much if they’re not enforced. Twitter already has rules that, if enforced, would go a long way towards dealing with the abuse on the platform. But they’re simply not enforced.

Right now I don’t even bother reporting Tweets like this, because Twitter typically does nothing about them.

https://twitter.com/Bobcat665/status/735282887965085697

And even when someone does get booted off Twitter for abuse, they often return under a new name — and though this is in direct violation of Twitter’s rules, the ban evaders are so seldom punished for this violation that most don’t even bother to pretend to be anyone other than who they are.

Longtime readers here will remember the saga of @JudgyBitch1 and her adventures in ban evasion.

Meanwhile, babyfaced white supremacist Matt Forney’s original account (@realMattForney) was banned some time ago; he returned as @basedMattForney. When this ban evading account was also banned, he got around this ban by starting up yet another ban evading account, under the name @oneMattForney, and did his best to round up as many of his old followers as possible.

https://twitter.com/onemattforney/status/753087810006085634

A few days later, Twitter unbanned his @basedMattForney account.

And here’s yet another banned Twitterer boasting about their success in ban evasion from a new account:

https://twitter.com/_AltRight_Anew/status/755643864036339716

And then there are all the accounts set up for no other reason than to abuse people. Like this person, who set up a new account just so they could post a single rude Tweet to me:

[screenshot: rude tweet from a throwaway account]

In case you’re wondering, the one person this Twitter account follows is, yes, Donald Trump.

And then there’s this guy, also with an egg avatar, and a whopping three followers, who has spewed forth hundreds of nasty tweets directed mostly at feminists.

Here are several he sent to me, which I’ve lightly censored:

[screenshots: tweets, lightly censored]

And some he’s sent to others.

[screenshots: tweets sent to others]

So, yeah. Twitter is rotten with accounts like these, set up to do little more than harass. And if they ever get banned, it only takes a few minutes to set up another one.

Milo used his vast number of Twitter followers as a personal army. But you don’t need a lot of followers to do a lot of damage on Twitter. All you really need is an email address and a willingness to do harm.

It’s good that Twitter took down one of the platform’s most vicious ringleaders of abuse. But unless Twitter can deal with the small-time goons, with their anime avatars and egg accounts, as well, it will remain one of the Internet’s most effective tools for harassment and abuse.


211 Comments
OoglyBoggles
4 years ago

@Hambeast
If that’s true (and that’s something I can easily see being the case), then there is no economic incentive; at most, lip service and a ban to cool down bad PR. Then it is business as usual.

Jamesworkshop
4 years ago

http://www.vox.com/2016/7/20/12226070/milo-yiannopoulus-twitter-ban-explained

Though multiple attempts have been made to paint the Ghostbusters backlash as a product of what is perceived (largely inaccurately) as a more general trend of fan entitlement, the nature of Jones’s harassment is very clearly and overwhelmingly a product of extreme racism that has nothing to do with the Ghostbusters franchise — or with fandom in general.


Ohlmann
4 years ago

I think the theory that Twitter willingly enables harassment starts from a good premise, but it’s too strong and overestimates how rational internet firms can be.

Twitter *does* have a strong incentive to keep a wholesome image, both in the eyes of the general public and in the eyes of investors. However, a combination of factors works against quick and efficient moderation:

* it costs gobs of money to moderate quickly, and Twitter is cash-strapped
* false positives, i.e. unjustly banned people, are a very big problem for a social network
* the people inside Twitter likely don’t understand how bad it can get, and lean toward “it’s virtual, so no harm done.” It’s a very prevalent view in the industry, after all.
* moderation isn’t a sexy task to give to devs, and isn’t a sexy feature to sell to investors and media.

So it’s easy to see how Twitter in practice enables harassment without actually trying to, or realizing that it does. That’s a case where stupidity can adequately substitute for malign intent.

(For the record, I have worked at one of the firms that created Facebook games. It took us a real long time, like two years, to realize that our monetization scheme sucked seriously poor and depressed people dry; and quite a few of the devs were on the line of “eh, they just have to be less stupid” when it was very, very obvious that we were exploiting misery.)

EpicurusHog
4 years ago

Though I’d provide some soundtrack: https://www.youtube.com/watch?v=PHQLQ1Rc_Js

Not sure what to say about the rest of it that hasn’t been said. It’s a really complicated situation, from the looks of it. Hopefully a solution to this is found soon.

@Skullpants Checking the activity of new accounts would help stop a good part of the mass harassment that’s been such a big problem lately. Your account is two minutes old and all you’ve posted is offensive comments in a hate tag? Boot. That’d be great. Not sure how widely that can be applied without turning up a bunch of false positives, though.

weirwoodtreehugger: communist bonobo

So it’s easy to see how Twitter in practice enables harassment without actually trying to, or realizing that it does. That’s a case where stupidity can adequately substitute for malign intent.

I agree. I think it’s important to remember that the top management at social media companies tend to be overwhelmingly white and male. It’s more of a privileged cluelessness thing than an actively pro-harassment thing. Not that it’s an excuse, just an explanation as to why these companies aren’t quite on the ball.

Here’s their leadership page

https://about.twitter.com/company/press/leadership

The CEO, COO, and CTO (that is, the people who would be most responsible for this) are all white guys.

802.11cuck
4 years ago

@Scildfreja re: MAC bans

(De-lurking to be pedantic) MAC bans aren’t going to be effective on the internet for a couple of reasons, the main ones being:

-They aren’t nearly as permanent and hard-coded as you think they are — most Ethernet drivers let you override the address burnt into the PROM, they aren’t hard-coded on any Wi-Fi hardware I’ve ever seen.

-MAC addresses exist in the bowels of the network stack and aren’t usually exposed to upper layers (nothing outside of your local network has any idea what your MAC address is typically, and even on your local network a web server never sees a MAC and has no idea what it is). I guess you could ask the client to provide that, but verifying that it isn’t lying is going to be a Hard Problem™(and ultimately futile given the first issue)

-(bonus historical pedantry) there are tons of (deader than disco) network technologies that have no concept of a “MAC address”
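To make the “not hard-coded” point concrete, here is a minimal Python sketch (the function name is mine, purely illustrative) of how trivially a client can mint a fresh, standards-valid “locally administered” MAC address, the kind that tools such as macchanger or `ip link set ... address` will happily apply:

```python
import random

def random_spoof_mac() -> str:
    """Generate a random 'locally administered' unicast MAC address.

    Setting bit 1 of the first octet marks the address as locally
    administered (i.e. not burnt in by a vendor); clearing bit 0
    keeps it unicast. Any such address is valid on the wire.
    """
    first = (random.randint(0, 255) | 0b10) & 0b11111110
    rest = [random.randint(0, 255) for _ in range(5)]
    return ":".join(f"{octet:02x}" for octet in [first] + rest)
```

Since an abuser can roll a new one of these in milliseconds, a MAC ban list is trivially outrun even before the visibility problems above come into play.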

JoeB
4 years ago

Speaking of twitter bans and evading of them. Look who’s back!
https://twitter.com/chuckcj0hnson

(((Hambeast))) Now With Extra Parentheses
4 years ago

From the Vox article Jamesworkshop linked:

But at the very least, Twitter’s decision to permanently ban Yiannopoulos from the site is historic and most likely will serve as a stepping stone for Twitter to refine and increase its tools for fighting abuse.

That’s the problem: Stepping stones are useless unless they’re, you know, stepped on. What I think will happen is the furor will die down and it’ll be business as usual (IOW, carry on, harassers!) especially for harassed people who aren’t famous or influential.

ETA: Ohlmann, I don’t disagree at all with you. But it all washes out to mean that nothing changes and the net result is still what I quoted.

OoglyBoggles
4 years ago

@WWTH
To further cement proof of ignorance rather than spite, here’s one of my favorite newscasters, Secular Talk:

https://m.youtube.com/watch?v=j-RhNIPKacc

While he is against libel and slander he doesn’t exactly understand that Milo is a repeat offender and perpetrator of harassment and hate speech on twitter. Stuff that is in violation of Twitter’s rules.

Which I find frustrating, because on every other issue (economic, corruption, anti-bigotry and such) I do agree with him, but on stuff like this I find his knowledge base stunningly lacking. Like his inability to admit that half the atheist base is as incredibly racist and sexist as some religious pundits; I wish he understood more about feminism.

Scildfreja
4 years ago

Someone asked whether algorithms could be written to detect whether someone’s being abusive on Twitter, so that a new egg or old face that’s spewing hate by the bucket could be quarantined.

Yup, we can do that! It requires some serious NLP magic (Not MLP magic, though I’m sure that would work too), but sentiment analytics is a very active area of research.

Right now, the best algorithms we have would catch abusers. They would also catch any of the abused people who are reacting to that abuse, though. So if you are being harangued by a few hundred sea lions, you either keep quiet and ignore them while waiting for the algorithm to be tripped, or you reply and risk getting caught yourself.

(I also guarantee that someone in the chanbase would start peeling apart the NLP libraries available in order to find out which lemmas trip the algorithm and then devise ways to be just as awful and abusive without tripping them)
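The failure mode described above can be seen even in a toy filter. The sketch below uses a crude keyword lexicon rather than real sentiment analysis (real systems are far more sophisticated, and these words and names are mine, purely illustrative), but the problem is the same: the target’s reply trips the same filter as the abuse, because both sides use the same charged vocabulary.

```python
# Toy illustration, NOT real NLP: a naive lexicon-based abuse filter.
ABUSE_LEXICON = {"idiot", "trash", "die"}

def naive_flag(tweet: str) -> bool:
    """Flag a tweet if any word (punctuation stripped) is in the lexicon."""
    words = {w.strip(".,!?").lower() for w in tweet.split()}
    return bool(words & ABUSE_LEXICON)

abuser_tweet = "You are trash and an idiot"
victim_reply = "Stop calling me an idiot and leave me alone"
# Both tweets trip the filter, even though only one is abuse.
```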

The trenches of internet security are muddy and bloody and gross. Don’t come here D:

Ohlmann
4 years ago

@Scildfreja : NLP is like 3D printing three years ago. It’s promising, but we’re not quite there. Cutting-edge implementations can give you very impressive proofs of concept, and some firms already use it with varying success, but in practice it’s still hard and relatively unreliable even for guessing the age and sex of people who aren’t trying to fool it.

Automated bans are already massively used to harass people on Facebook, no? That’s not a natural-language-processing system, but it’s a reminder that automation is very much a double-edged sword.

I think that “technical progress will fix it!” won’t work here at all. What is needed is harassers being actually shunned and isolated by society. I believe more in educating people toward that end, given Silicon Valley’s terrible track record at improving the lives of non-rich white people.

EJ (The Other One)
4 years ago

Last year I read a very interesting paper about the detection of what were referred to as “future banned users.” The authors suggested that it might be possible to use standard advertising-demographics-profiling machine learning tools to identify which people were going to be banned in the future, in the hope that they could just be banned immediately and save everyone the trouble.

While I applaud the authors’ sentiment, I find myself agreeing with Ohlmann: the problem of false positives would be a huge PR issue, and may put people off using any software that pre-bans people in this fashion.

Sadly I can’t find the paper again, otherwise I’d link it. Scildfreja will doubtless know more than I do about it in any case.

banned@4chan.org
4 years ago

I mostly use anonymous sites that run similarly to 4chan, but I’m not a snob about it, and when I see someone who does maintain an online identity get singled out, it’s hard for me to blame them simply for being an easy target. I can also recognize that on platforms like Twitter, which aren’t intended for anonymity, this laissez-faire approach to violent, abusive rhetoric is outdated at best.

(((Her Grace Phryne))): Tool of the Butt-Worshipping, Lesbian-Powered Elite
4 years ago

@Scildfreja

(I also guarantee that someone in the chanbase would start peeling apart the NLP libraries available in order to find out which lemmas trip the algorithm and then devise ways to be just as awful and abusive without tripping them)

But, as you said before about ghosting, it might give the targets some breathing room.

Personally, I like your ghosting idea. If it’s possible to exclude replies, that would make it more accurate, yes? I don’t know if that’s possible, though. Either way, though, the ghosting would be really useful.

I’m so frustrated by Twitter. Sure, they’re all “Look! We banned one asshole!” Whooptie do, big fuckin’ deal. You don’t get cookies for taking care of one relatively small part of the problem and ignoring the rest of it, especially when the problem has real-world consequences.

I want their “leadership” to see what kind of harassment users are getting, and understand it. I want them to do something substantial and meaningful to combat it rather than mouth useless platitudes and continue the status quo. I’m in a grumpy mood, so I’m (currently) ok with them experiencing the same amount of harm as their users.

@EJ TOO
I’m not cool with pre-emptive bans, personally, but I think identifying people who are higher-risk so they can have some extra scrutiny is a good idea. Wait til they actually do something ban-worthy, but the scrutiny will help shut it down faster/earlier.

Nikki the Bluth Wannabe
4 years ago

@Alan
Yes, collective punishment does get controversial, and the vast majority of people (including me) find it hugely unfair. I’d imagine there almost has to be a better way.

@scildfreja
I agree with Alan that some of your ideas about MAC-banning are skirting up to the collective-punishment line, which I inherently find unfair. I also question how well it’d work; for example, couldn’t a public place like a library petition to have its banned MACs restored?
Please don’t take that the wrong way. You’re a lovely person, and I love talking to you and seeing how calm and intelligent you are in tough situations; I just don’t think you’re necessarily on the right track here.

@wwth

What about having to give Twitter a piece of identifying information to get an account. Like a credit/debit card, bank account number or a tax id if it’s a business account? That would make it harder to sock because a person is only going to have so many valid cards to use. It would also mean that if a person threatens violence, the person can report it to law enforcement and law enforcement can get a warrant to obtain that identifying info from Twitter. It might seem invasive, but it might be the only way to curb the worst of it. I’m kind of starting to think that internet anonymity is a failed experiment.

Excellent idea!

@Oogly

I wonder if they really believe increased moderation would be more expensive than the real loss of potential users.

Could Twitter users volunteer to serve as moderators, or would that cause more problems than it solved?

Ohlmann
4 years ago

For all those tools, one needs actual statistical tools to decide. If a tool detects 90% of harassers but bans ten times as many innocent people as harassers, it’s not terribly usable in practice. If it detects only 20% of harassers, the breathing room provided will be quite limited. Even providing it to humans as a rough indicator seems a bad idea, because humans tend not to grasp that machines can be fallible, and I fear they would follow the automated opinion in most cases regardless of the situation.

Preemptive banning is a horrible idea. That’s, at best, punishing people for what they intend to do, which is a Brazil-level bad idea. When your best case is literally a sci-fi dystopia, that’s a strong sign it’s not a promising lead.

Eitan rosen
4 years ago

I have even more of a reason not to use Twitter. It makes the privacy controversy around the NSA look like a joke, especially when the legitimate reasons for anonymous accounts have to be associated with vile people who abuse anonymity.

Axecalibur: Middle Name Danger
4 years ago

@Oogly

a dedicated team that are bit of a stickler for the rules already set

Great idea! I fear that’s not enough tho. Such a team would need all new algorithms (digital and human) to effectively handle the shit sea. Like say Buttercup’s genius suggestions

@Buttercup
Moderation/ghosting is such a good idea I’m actually a bit peeved I didn’t think of it. And I spent hours last night tryna figure this out

Re: false accusations
1)People’s safety from harassment, abuse, doxing, etc. is more important (ethically anyway) than a few people being unnecessarily banned from Twitter

2)How hard would it be to implement a trust system? How do I explain this… Say you report something as harassment, and it turns out not to be (or at least not according to Twitter rules). Would it make sense/be feasible for Oogly’s Twitter KGB to keep a tab on who is more or less likely to accurately report things?
Like, the team would need so many reports before they act, so as not to overwhelm them. Someone with a perfect record counts as a full report, someone with a worse record counts as a partial report. When the total adds up to 20 or whatevs, then it’s go time
You’d need to keep it secret, in the background. Anyone without a record would be automatically considered trustworthy, and untrustworthy reports would still count (just less so)
There’s probably something obvious I’m not thinking of that makes this whole thing untenable…
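The trust system described above can be sketched in a few lines. The weighting scheme below (Laplace-style smoothing, so a reporter with no history counts as a full report, matching the “anyone without a record is trustworthy” idea) is one guess at a reasonable implementation, not anything Twitter actually does; all names are hypothetical.

```python
def reporter_weight(accurate: int, total: int) -> float:
    """Weight of one report: 1.0 for a reporter with no history,
    decaying toward their observed accuracy as their record grows."""
    return (accurate + 1) / (total + 1)

def should_escalate(report_weights: list, threshold: float = 20.0) -> bool:
    """Escalate to the human team once enough weighted reports pile up.
    Bad-record reporters still count, just for less."""
    return sum(report_weights) >= threshold
```

With a threshold of 20, twenty clean-record reporters trigger review immediately, while a brigade of reporters who are wrong four times out of five would need a hundred of them, which blunts report-bombing as a harassment tactic of its own.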

@Ohlmann
I’da thought that someone who’s willing to defend the accused would fit right in with ‘SJW’. How many racists you think still salivate with rage about To Kill a Mockingbird 50+ years later? 🙂

Nikki the Bluth Wannabe
4 years ago

@Buttercup
I love all your ideas!

EJ (The Other One)
4 years ago

@WWTH:
Your idea is sort of in place already. In South Korea, the law states that whenever you sign up for an online service you must link it to your real-world ID. The other users can’t necessarily see this link, but the site admins can.

This works because South Korea has a stranglehold on Korean-language websites and so can effectively make laws to govern the Korean-language web. For English it might be much harder.

Ohlmann
4 years ago

@Nikki : using volunteers for moderation works for small volumes. Here, the sheer logistics of vetting them, and of not letting a harasser become a moderator, quickly become overwhelming.

Given Twitter’s volume, they would need thousands of moderators for a proactive approach, and likely at least 100 or 150 just to check complaints in depth. They would also need to deal with moderation in foreign languages. The aforementioned Facebook-game firm actually closed its forums because they were both a financial disaster (10% of the workforce as full-time moderators) and a hellhole. I don’t think Twitter would fare much better.

EJ (The Other One)
4 years ago

@Ohlmann:
Personally I am extremely happy to be banned as a false positive, if it makes it harder for harassers to use a service. Others may disagree.

Scildfreja
4 years ago

Aw, thank you, @Nikki <3 I agree that flat MAC banning is problematic, and there are collective-punishment problems. It's sort of inherent to thinking about security and harassment that you have to start asking hard questions about what sort of unintended damage is "acceptable," though, so I don't mind following those lines of thought. You either think them through and accept them explicitly, or you don't think about them and wash your hands of them. Better to consider them directly.

I don't like collective punishment, it's awful; it also hands enforcement responsibilities to people who are (perhaps) better placed to apply meaningful punishments, but who are also forced into applying those punishments, because they’re being unjustly punished themselves. If I were to be building some sort of a MAC-level ban system (with all of the adjustable MAC systems out there, which is another issue), I’d ensure that it rolled out with an easy way for people hit by the splash to have the ban rescinded. Libraries and public services could register as such, private addresses could have multiple strikes and you’re out sort of thing – soft banning with quick recovery.

It’s all hypothetical, though, and there’s still lots of unintended problems with it. My lab’s working on a separate solution, a sort of encrypted internet ID that provides unique identification while still maintaining anonymity. It’s slow going, but there are some promising features!

Ohlmann
4 years ago

@Axecalibur : “a few”? That’s unlikely to be “banning a few false positives.”

To explain the problem quickly, and with the hope that my rusty stats aren’t too bad, take an algorithm that catches 90% of harassers and has a false positive rate of 5%. Sounds great? It’s so terribly bad that medical tests with those stats would be banned from any sensible country.

Why? Because there are many, many more regular users than harassers. There are 645 million users; if 1 million of them are harassers, that algorithm would ban *35* times more innocent people than harassers.

Since innocent people are very unlikely to ever set foot on Twitter again, if we suppose 50% of banned harassers come back and 0% of innocents do, the algorithm would have banned roughly *half* of Twitter by the time fewer than 1,000 harassers remain.
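That base-rate arithmetic, spelled out in Python (the 645 million and 1 million figures are the hypotheticals from the comment above, not real Twitter data):

```python
# Base-rate problem: a "90% accurate" abuse detector applied to everyone.
users = 645_000_000
harassers = 1_000_000
innocents = users - harassers          # 644,000,000

true_positive_rate = 0.90   # fraction of harassers correctly caught
false_positive_rate = 0.05  # fraction of innocents wrongly flagged

caught_harassers = true_positive_rate * harassers    # 900,000
banned_innocents = false_positive_rate * innocents   # 32,200,000

# About 35.8 innocent users banned for every harasser caught.
ratio = banned_innocents / caught_harassers
```

The detector’s headline accuracy is irrelevant; because innocents outnumber harassers 644 to 1, even a small false-positive rate swamps the true positives.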

Ohlmann
4 years ago

In other words, Twitter needs an algorithm whose false positive rate is very, very close to zero. There may be other properties it would need too; I only cited the classic example from my memory of statistics classes. My main lesson from those classes: ask an actual statistician to evaluate the odds instead of trusting your gut, because humans are hardwired to be absolutely terrible at stats, and literally unable to have good intuitions about them even if their life depends on it.

Scildfreja
4 years ago

@Ohlmann is right, without a nearly-perfect system, you’d have so many false positives that most people the algorithm would catch would be incorrectly banned.

There’s also way too much twitter twittering out there to properly monitor directly.

The best solution is a combination. Use the algorithm to fish out the most likely offenders, then use human eyeballs to separate the false positives from the harassers. It’d be very expensive, that’s a lot of eyeballs to pay for, but it’s the most thorough realistic way to do it that I can think of.
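That combination is easy to sketch: a (hypothetical) classifier emits an abuse score, and only high scorers are queued for human eyeballs, so the algorithm narrows the haystack while people make the final call. Class and field names below are mine, purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class TriageQueue:
    """Stage 1: an assumed upstream classifier scores each tweet 0..1.
    Stage 2: only tweets above the threshold reach human reviewers."""
    review_threshold: float = 0.8
    queue: list = field(default_factory=list)

    def ingest(self, tweet_id: str, abuse_score: float) -> None:
        # Low scorers are simply ignored; no automated bans are issued.
        if abuse_score >= self.review_threshold:
            self.queue.append(tweet_id)
```

The threshold trades review cost against missed abuse: raise it and the human queue shrinks but more harassment slips through; lower it and the false-positive flood from the previous comment comes back, just as reviewer workload instead of wrongful bans.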

Would be interesting to set up a nonprofit which companies like Twitter could apply to in order to help ameliorate abuse-mitigation costs. They’re unlikely to do it themselves, but if you offered them a subsidy to do it? I bet they’d be much, much more agreeable.

(Of course, funding the nonprofit would be pretty tough; you need more than a kickstarter for that… I bet that the GG of Canada would be into that, though. Hmmmmm.)

Ooglyboggles
4 years ago

I’m sorry for not keeping up; this sort of thing involving algorithms, Korean laws, ghosting, dedicated moderators and concerns about false positives is all really complicated stuff to me, logistics-wise. If there’s one thing I can draw from all of this, though, it’s that no matter which option, or combination of options, is chosen, there is no easy fix here that doesn’t involve Twitter doing some massive restructuring on its part.

That also includes thinking hard about which groups they really believe will bring them the most profit, both short and long term.

ViolinlessHoax
4 years ago

Re: libraries. As someone who works in a library I can say that there would be no real world consequences for anyone causing the MAC-ban in a library. Mostly this is because we’d have no idea who caused the ban, so we’d have no way of knowing whose library card we should revoke. At least, that’s how it works (or wouldn’t work) in a small municipal library; it’s possible larger libraries have more tools to monitor who did what online, but because privacy etc etc, I doubt it.

Also, as an aside, libraries are suffering so much already from dwindling numbers, I can imagine this rule would be unpopular with the higher-ups.

Re: moderation. I used to be on OKCupid, and the way they did it there (or used to do it anyway, haven’t been there in a while) was to choose a few hundred “trustworthy” people from their user base and have them do a first round of moderation before the actual paid mods made a decision based on our input. How they determined who was “trustworthy” and who wasn’t, I have no idea. Personally, I had a lot of fun moderating for them, weeding out the people who were obviously toxic (which was 90% of the reports) and leaving short messages for the other mods. It wasn’t a perfect system either: you could tell some mods really got into it and took it seriously, but a surprising number of people would just 100% acquit offenders no matter what they did. “Oh, she clearly started it” and “lol loser reporter” were common comments from other mods on these reports.

Axecalibur: Middle Name Danger
4 years ago

@Ohlmann
Absolutely right. The false positive rate would have to be tiny. I don’t think they should let the possibility of false positives stop them from trying either. 99.99% accuracy sometimes begins at 95% and gets better from there. If a few people (or, as you rightly point out, more than a few) get banned (hard or soft) on the way to a better system, is that worth it? To make an omelette, one must 1st break some eggs (get it? 😃). I just don’t want Twitter Corp, LLC to say ‘false positives! See, Leslie/Anita/whoever!? There’s nothing we can do’, and then forget about it…

Or I have no idea what I’m talking about. Very possible

Robert
4 years ago

It occurs to me that almost none of the people I know face to face would have any idea who Milo is. That reassures me about how I’m living my life.

Scildfreja
4 years ago

The last thing I’d want to do is to hurt libraries, I love my little library :C Point taken! Too much splash.

The tiered, distributed mod system is a good step in mitigating moderation costs – maybe that’d be a way to get a blue checkmark? Providing a certain amount of valid moderation per month. You’d still want an algorithm to hunt down inappropriate behaviours, but having a volunteer first stage is a good way to minimize costs. Interesting!

Catalpa
4 years ago

I’ve heard a system proposed that would, I think, stem the tide of harassment a fair bit, if paired with people getting banned for harassment fairly commonly.

Give users a setting that they can select that auto-blocks any tweets coming from users that are less than a certain amount of time old (the account must be at least two weeks old, say), or ones that have less than a certain number of followers (10 or something). Then people who make throwaway accounts, sockpuppet accounts or ban-evading accounts have to wait a gratification-killing amount of time before they can start heaping the abuse on. And the ones that do wait out the time can be banned more easily because there won’t be such a flood of assholes.
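That opt-in filter is easy to sketch; here is a hypothetical Python version (function and parameter names are mine) using the thresholds suggested above as defaults:

```python
from datetime import datetime, timedelta

def hide_tweet(account_created: datetime, follower_count: int,
               now: datetime,
               min_age: timedelta = timedelta(days=14),
               min_followers: int = 10) -> bool:
    """Opt-in per-user setting: hide tweets from accounts that are
    too new or too unfollowed. Throwaway and ban-evading accounts
    typically fail both checks."""
    too_new = now - account_created < min_age
    too_few = follower_count < min_followers
    return too_new or too_few
```

A day-old sockpuppet is hidden no matter how many followers it has scraped together, while an established account with a small following is also muted until it crosses the follower bar, so the gratification-killing delay applies on both axes.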

Richard Joseph
4 years ago

How awesome is it that Milo Yiannopoulos, bad boy hero of the blogosphere, “The Ultimate Troll,” putting “social justice warriors” in their place daily, has…. less than 350K Twitter followers? If you look at Breitbart’s website, they’re treating this Twitter ban like Nixon had Walter Cronkite thrown into a gulag. They GENUINELY don’t seem to realize that 99.999% of Americans don’t have any idea who this guy is.

Paradoxical Intention - Resident Cheeseburger Slut

@Catalpa: I do like the idea of insta-moderation for all new accounts tweeting at people. We kind of do that here too.

However, it would be more up to the people getting tweeted at to moderate it, it sounds like. I feel like that’s really removing a lot of the onus from Twitter to handle their own shit.

On the other hand, it does seem like a good way to stem the tides of bullshit that some users see every day.

repentantphonebooth
4 years ago

This is so off topic and I am sorry, but you people seem like the right folks to turn to- I am looking for reliable statistics on the rate of false accusations for crimes other than rape. Does anyone have any easily accessible links you’d be willing to share?

Ooglyboggles
4 years ago

@Catalpa
Well the GG blocker and such tends to do a good job in blocking people. Unfortunately while it does that, it cannot change the posting culture of twitter.
@PI
The change has to come from within and with permanence, otherwise they’ll take the easiest route, which so far is allow such flagrant abuse to happen 24/7. What event or series of events that could do such a thing, I have yet to see.

Jake Hamby
4 years ago

To add to what 802.11cuck wrote: MAC addresses aren’t visible outside of the local LAN. Servers can’t see them and can’t block based on them. They’re not an option, even if all the other issues people brought up about shared computers and people using other computers were solved. It just won’t work because MAC addresses don’t ever leave the local network.

The Twitter mobile app may be able to obtain one or more unique identifiers from the smartphone, such as IMEI, IMSI, ICCID, or UDID, but then the privacy, spoofing, and device sharing concerns of blocking based on any of them would still apply. (IMSI & ICCID come from the SIM card, IMEI is a unique ID for the cell radio, and UDID is a unique iOS-only ID.)

PS: the South Korean “real name” law was overturned by their Constitutional Court in 2012 as a violation of free speech.

Virgin Mary
4 years ago

Most of our volunteer run community libraries which still have Internet access actually block people from using Facebook, Twitter and dating sites.

Catalpa
4 years ago

@paradoxical and oogly

Yeah, it does put some of the onus on the person being harassed, which is problematic. But it would make moderating hateful comments less of a gargantuan task, which might make Twitter more likely to actually DO something. It doesn’t really change the culture either, but it would help the current victims of it at least some.

In terms of changing the culture… Hm, what if there were an automated thing triggered by certain keywords, but instead of banning people, it shifted them into a state where they couldn’t see the tweets of anyone else who is also on the dickhead list? This means that innocent people who tweet similar phrases might get blinkered a bit too, but mostly it would prevent them from seeing assholes, so it’s not as much of a hindrance as a ban would be. It might even be seen as a benefit. And the trolls and fuckheads wouldn’t be able to see all the other hatemob members, and wouldn’t be able to feed into and validate each other.
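As a toy model of that mutual-invisibility idea — with a placeholder keyword list standing in for any real abuse detection, and everything here hypothetical:

```python
# Toy model of the "dickhead list": flagged users can still tweet,
# but tweets between two flagged users are mutually invisible,
# starving the hate mob of mutual validation.

TRIGGER_WORDS = {"slur1", "slur2"}  # placeholder keyword list

flagged_users: set = set()

def post_tweet(user: str, text: str) -> None:
    """Flag the author if the tweet contains a trigger word."""
    if TRIGGER_WORDS & set(text.lower().split()):
        flagged_users.add(user)

def can_see(viewer: str, author: str) -> bool:
    """Everyone sees everything as normal, except that two
    flagged users cannot see each other's tweets."""
    return not (viewer in flagged_users and author in flagged_users)
```

Note the asymmetry with a ban: a flagged user's timeline still works and innocent bystanders still see their tweets, so false positives are far less costly than with outright suspension.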

Buttercup Q. Skullpants
4 years ago

Thanks, Nikki and Axe! 🙂

I forgot that Reddit and Craigslist already do a form of ghosting (or shadowbanning) to discourage spammers… I think what I’m trying to get at is finding a way of making commenting a tiny bit more “expensive,” in a way that benign users wouldn’t notice but that would add up quickly to major hassle when a troll is posting rapidly from multiple accounts. If a lie can be halfway around the world while the truth is still putting on its boots, then let’s tie an anvil around the liar’s ankle.

A while back I remember reading about a proposal to deal with spam by having mail servers return a small packet of junk data to the originator’s machine every time an email gets sent. An ordinary user emailing cat photos to Aunt Beulah wouldn’t notice anything, but a spammer sending out an email blast to half a million brute-force addresses would see a significant performance hit. The bigger the spam list, the more degraded the performance becomes.

Maybe Twitter could institute something like that, where slower, more measured users get rewarded with the fastest performance, while users with multiple socks unleashing torrents of abuse get their machines and accounts tied up for a while. Maybe the first 5 comments are free every day, and after that comments are published at the rate of one…. word…. every……. fifteen…….. seconds, with the gap getting longer the more they try to post. (I’d suggest a sliding fee scale for >5 comments, but that means people would get harassed mainly by rich assholes.) Regular users probably wouldn’t be affected, unless they were live-tweeting a historic event. I’d have to think about how to make allowances for that (and for the fact that Twitter WANTS lively discussion and lots of people tweeting).
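The “first five free, then slower and slower” part could be sketched as an escalating per-comment delay. The 5-comment and 15-second figures are the ones from the comment above; the rest is a hypothetical simplification (per comment rather than per word):

```python
FREE_COMMENTS_PER_DAY = 5   # figure suggested above
BASE_DELAY_SECONDS = 15     # figure suggested above

def publish_delay(comments_today: int) -> int:
    """Seconds to wait before the next comment is published,
    given how many comments the account has already posted today.
    The first few each day are instant; after that the delay
    grows linearly with every additional comment."""
    over = comments_today - FREE_COMMENTS_PER_DAY
    if over < 0:
        return 0
    return BASE_DELAY_SECONDS * (over + 1)
```

Because the cost is time rather than money, it hits the rapid-fire sockpuppet hard while a normal user posting a handful of tweets a day never notices it.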

It’s a similar approach to the proposal to regulate ammo instead of guns. If bullets (or comments) cost the equivalent of $50 apiece, suddenly a semi-automatic rifle will seem a less attractive way of airing grievances.

ms_xeno
4 years ago

Pendraeg:

“Not to support Milo or any of his followers, the ban is well deserved and long overdue. Buuuuut in Twitter’s defense, chasing down socks and checking on every reported tweet is also a colossal task. They do need to do better about it and be more consistent but at the same time I believe the attitude is that if they ban anyone who is reported then no one will use the site and taking time to fully investigate every report is cost prohibitive.

“It’s not a great policy on their part by any means, but the lackluster response they have to such abuse can be understandable.”

I always wonder why, at this late stage of the game, the costs of the caretaking you describe aren’t factored into a platform when it’s first being built. At some point, it should occur to creators that it makes little sense not to account for the Milo Fan Clubs of the world, just as it makes no sense to build bus stops without budgeting for a trashcan to be included and maintained at each one.

Of course, if the advertisers who spend so much time and cash luring us onto Twitter ever thought to raise the (fully justified) stink Jones did, I’m guessing more than just this periodic posturing on Management’s part would happen. But… who am I kidding? They don’t care about it any more than Milo’s bosses do. :/

Axecalibur: Middle Name Danger
4 years ago

@Buttercup

deal with spam by having mail servers return a small packet of junk data to the originator’s machine every time an email gets sent…

That’s fuckin devious
http://static9.comicvine.com/uploads/scale_super/5/52246/2060390-i_like_it.jpg
End of the day, it probably wouldn’t work, cos of the reasons you brought up. Still, love the idea to bits

It’s a similar approach to the proposal to regulate ammo instead of guns

Fruitloopsie
4 years ago

WWTH
Dear God, that poor woman. My heart is hurting. Should we start a petition to bail her out? Though I don’t know how to start a petition, and don’t quite understand how pretty much anything works.

Imaginary Petal
4 years ago

@Fruitloopsie

[I] don’t quite understand how pretty much anything works.

This will be my life’s motto from now on. :p

Ooglyboggles
4 years ago

Well, at the very least I felt our discussion here was productive in figuring out a business pitch and blueprint for improving Twitter. I found it fascinating to see the different ways the moderation process could work.

AlphaBeta Soup
4 years ago

I agree with Seshia that counter-brigading might be an effective tactic against hate. I notice that left-wing, anti-racist, feminist sites always have right-wing, racist, or misogynist trolls, but right-wing sites seldom have left-wing ones. My theory is that right-wingers, racists, and misogynists like arguing with and bullying people who don’t agree with them more than left-wing types do.

It’s also exhausting to visit the cesspools that some of those sites’ comment sections have become. A tough task for a sensitive person.

But I believe it would do a lot of good if more people of a leftish persuasion would visit the comment sections of certain right-wing sites and counter some of their arguments with polite, reasonable comments. I’m not talking about Stormfront or the Daily Stormer or their ilk. People who would post there in the first place are too far gone.

I used to go to TakiMag, home of people like Steve Sailer, Gavin McInnes, and John Derbyshire, who fancy themselves “race realists” and make pseudointellectual racist arguments rather than mindless hateful diatribes. I finally had to quit because it was too emotionally draining, though maybe I wouldn’t have if I’d had support from others. I don’t know if my efforts were in vain, but I hope my comments reached some fence-sitters or new readers and helped them reject racism.

Ooglyboggles
4 years ago

@AlphaBeta Soup
Well, you might have to count guys like me out. I would throw insults and mock them relentlessly while tossing out stats and articles to debunk their parroted talking points. I know the people I argue with aren’t going to change anytime soon, so I might as well make them as mad as I am. And there’s some justification for it: some people just won’t be moved by niceties.

Your method is certainly a lot more productive. But from what documentation I’ve seen, it’s better to do it one-on-one; that separates them from the hate group and gives them time to think and reconsider. For counter-brigading to work, I figure society as a whole needs to undergo some change.

Ohlmann
4 years ago

@Axe: the problem is, at 95% accuracy it destroys Twitter in relatively short order. They’d need very high accuracy to be able to use that without killing the platform.

@counter-brigading: fostering hatred through trolling and brigading seems counterproductive to me. I don’t formally condemn it because, well, they are assholes. But remember that Nietzsche quote: “if you gaze long into an abyss, the abyss also gazes into you.”

Pony's Labia
4 years ago

I don’t have a Twitter because I find it confusing and repetitive.

I find the less time I spend interacting with people negatively on the internet, the happier I am. I can do without Twitter.

authorialAlchemy
4 years ago

Maybe Twitter can do something like OK Cupid does with its moderation? OKC assigns well behaved users to act as a jury for questionable content. If enough mods agree the user broke rules, the content is removed or the user is banned. You can opt out of it if you don’t want to be a mod.

Although, some people still think whether or not something is racist is a matter of opinion, or that if it is, it’s protected as free speech. It’s not as simple as “wow, this is a dick pic, that doesn’t belong here!”
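For what it’s worth, the jury mechanic itself is simple to sketch; the jury size and threshold here are hypothetical, not OK Cupid’s actual numbers:

```python
JURY_SIZE = 5      # hypothetical number of jurors per case
BAN_THRESHOLD = 3  # majority of the jury

def jury_verdict(votes: list) -> str:
    """votes[i] is True if juror i says the content broke the rules.
    Returns 'remove' on a majority guilty vote, else 'keep'."""
    assert len(votes) == JURY_SIZE
    guilty = sum(votes)
    return "remove" if guilty >= BAN_THRESHOLD else "keep"
```

The hard part, as the comment above notes, isn’t the vote-counting; it’s that jurors first have to agree on what counts as rule-breaking at all.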

@ Ohlmann- Out of curiosity, what game did you work on, or at least, how did it exploit misery?