For a certain subset of horrible men, there are few things more infuriating than the fact that women they find undesirable can turn down men for sex. For this upsets their primitive sense of justice: such women should be so grateful for any male attention, these men think, that turning down even the most boorish of men shouldn’t even be an option for them.
Consider the reactions of some of the regulars on date-rapey pickup guru Roosh V’s forum to the story of Josh and Mary on the dating site Plenty of Fish. One fine December evening, you see, Josh decided to try a little “direct game” on Mary.
That’s what the fellas on Roosh’s forum call it, anyway. The rest of us would call it sexual harassment.
Josh started off by asking Mary if she “wanted to be fuck buddies.” She said “nope,” and the conversation went downhill from there, with Josh sending a series of increasingly explicit comments to Mary, despite getting nothing but negative replies from her.
After eight messages from Josh, with the last one suggesting he would pay her $50 to “come over right now and swallow my load,” Mary turned the tables, noting that she’d been able to deduce his real identity from his PoF profile, and asking him if he wanted her to send screenshots of the chat to his mother and grandmother. He begged her not to.
As you may have already figured out, from the fact that we’re talking about this story in public, Mary did indeed pass along the screenshots, and posted them online.
Poetic justice? Not to the fellas on Roosh’s forum. Because, you see, Mary is … a fat chick.
While dismissing Josh as a “chode” with “atrocious game,” Scorpion saved most of his anger for the harassed woman:
Look how much she relishes not only shooting him down, but damaging his reputation with his own family. She’s positively intoxicated with her power. Simply spitting bad direct game is enough to unleash her vindictive fury.
“Bad direct game.” I’m pretty sure even Clarence Thomas would consider what Josh did sexual harassment.
At any point, she could have pressed a single button and blocked the man from communicating with her, but she didn’t. She didn’t because she enjoys the feeling of power she gets from receiving attention from guys like this and then brutally shooting them down. It makes her feel much hotter and more desirable than she actually is in real life. She’s not there to meet men; she’s there to virtually castrate them for her own amusement.
I’m guessing here, but I’m pretty sure that nowhere in Mary’s profile did she encourage the men of PoF to send her explicit sexual propositions out of the blue. And I’m pretty sure she didn’t hold a gun to Josh’s head and force him to send a half-dozen sexually explicit harassing messages to a woman he didn’t know.
Athlone McGinnis also relies heavily on euphemism when describing Josh’s appalling behavior:
I don’t think its primarily the revenge she’s after, its the validation. She is enjoying the power she has over this guy and wielding it brutally because it shows she can maintain standards despite her weight and the doubtless numerous confidence issues that stem from it. In blowing up this guy for being too direct in his evaluation of her sexuality, she affirms the value of her own sexuality.
Oh, so he was just being “direct in his evaluation of her sexuality.”
In short: “I am wanted, but I have standards and can choose. I have so much agency despite my weight that I can go as far as to punish those who approach me in a way I do not like rather than simply blocking them. I’m teaching them a lesson, because I’m valuable enough to provide such lessons.
So apparently in Mr. McGinnis’ world women who are fat aren’t supposed to have agency? They’re not supposed to be able to choose? They’re supposed to drop their panties to any guy who offers to be their fuck buddy or tells them to “suck my dick?”
Also, I’m a victim bravely standing up against online bullying/harassment-look at me!”
Yeah, actually, she is. Get used to it, guys, because you’re going to see a lot more of this in the future.
This isn’t just a laughing matter for her. She needs to be able to do this in order to feel worthwhile. She has to be able to show that even she is able to maintain standards and doesn’t have to settle for just any old guy asking for any old sexual favor simply because she resembles a beached manatee.
And it’s not a laughing matter for you either, is it? You’re actually angry that a woman said no to a sexual harasser — because you don’t find her attractive. And because Josh — from his picture, a conventionally attractive, non-fat fellow — did.
Mr. McGinnis, may a fat person sit on your dreams, and crush them.
Cassandra, you are ascribing logic where there is none. That’s the only premise in that set that they don’t assume to be true. So when you consider what could…would…happen if it were to be vengeful, BASILISK!!
But how can someone who fails basic logic build an AI? It’s all sounding a bit sci-fi for dummies.
Cassandra: You have found the point of prime failure.
Here is what those of us who have looked into it (and spent time with devotees) see.
Yudkowsky talks a good game (and he has a fanfic which seems to be well written, that many people like, which is also propaganda for his ideas).
He has a veneer of plausibility.
Geeky sorts, who aren’t sure of some social interactions, are drawn to the idea that one can use a formula to figure out “best courses of action” for everything (and that Bayesian ideas can be mapped to social interactions).
Many of the people who are drawn to computers don’t really know math, or much in the way of science in general. (One of the more interesting things is that Yudkowsky thinks Stephen Jay Gould’s writings are dangerous; like Kabbalah, they require preparatory study and a guide. Why he thinks this I don’t know, but he does.)
They are also not versed in philosophy.
Yudkowsky is like them in these regards.
He writes in a way which seems both accessible and to be “cutting through the claptrap” (I have seen “skeptics” who praise him as he “explains” things like quantum mechanics). One of his consistent themes is that all these things people say are hard/complicated aren’t: all it takes is a little understanding and some common sense, and all will be plain; the rest is usually jargon.
He is recreating philosophy. In the course of this he has 1: made a lot of mistakes, of the sort which early philosophers made and later ones hashed out; 2: created a new jargon for old problems; 3: taught these (with the poor resolutions of those problems) to his followers.
Which means that if one tries to explain how these “deep questions” have been resolved, or even that others have talked about them, they dismiss it. When you try to talk to them about some philosophical issue they say, “Oh yes, that’s covered in the Sequences, and it’s an example of ‘arglebargletygook,’” which means (erroneous conclusion here).
At which point, convinced they have bested you (and if you try to get them to explain what they mean when they say “arglebargletygook” they just dismiss you as an ignorant fool who has no interest in philosophy), they move on.
It’s really frustrating, but it gives them a cocoon in which they can be “less wrong” while being a whole lot of “not right,” all while feeling secure in their intellectual and moral superiority.
So much fail in the basilisk thing. The key: Pseudointellectualism. There’s a certain art to crafting positions that are based in mostly-valid logic and also very, very complex and counterintuitive; it gives the impression that what you’re looking at is true but simply over your head rather than actual nonsense. Since pretty much all the individual bits of logic are sound and the overall argument is so convoluted and impossible to follow, it’s practically impossible to find the weaknesses.
“The problem with the Basilisk is the Yudcultists are not good thinkers.”
UNDERSTATEMENT OF THE YEAR
katz – damn, I wish I, or someone, had thought to throw “pseudointellectualism” at miseryguts and anonwankfest yesterday. That would have riled ’em no end.
Another one of the main weaknesses is the whole “acausal trade” aspect. Sure, the AI could look at the past and perfectly understand us…but it still can’t communicate with us because we still can’t see the future (duh). And their “solution?” We carry out that half of the conversation by imagining the AI and what it would do.
That’s the pseudointellectualism in action: Cover it with enough academic verbiage and maybe nobody will notice that you just said that imagining someone talking to you is the same thing as someone actually talking to you.
But the actual fact remains that you’re imagining what the AI would do, not actually observing it, so it doesn’t matter what it actually does. If you conclude that it will torture you, that doesn’t make it actually torture you. If it does torture you, that won’t make you conclude that it will. Any way you slice it, there’s no actual communication going on. (So if the AI were actually benevolent, it wouldn’t torture you, because torturing you in the future by definition couldn’t change how you acted in the past. All that argument and none of it surmounts that basic, obvious fact.) (The right thing for the benevolent AI to do would be to convince you that it was going to torture you and then not do so… except it still can’t affect what you think or do. What you decide it’s going to do is 100% determined by your own imagination.)
The only way you can even sort of make an argument that there’s communication going on is if you postulate that you can prove truths based on what you can imagine to be true (ie, I can imagine something which by definition would need to be true in the real world). Which is, yep, the ontological argument, that perennial favorite skeptic punching bag. And the AI version suffers from the exact same weakness: I can imagine anything, and so the conclusion I draw could be anything.
They sound like MRAs. ALL THE PROJECTION.
The Basilisk thing isn’t even a good argument for making a “friendly” AI (disregarding the fact that the Basilisk is pretty evil in and of itself). By this logic, an “unfriendly” AI would be just as capable of torturing people who don’t contribute to its creation, so trying to create a “friendly” AI could infuriate any “unfriendly” AIs that emerge, especially if the whole point of doing it is to prevent the existence of said “unfriendly” AI.
Friendly in this context doesn’t mean friendly. See?
It can be unimaginably cruel, as long as the net effect is positive to humanity as a whole. What that means and how to achieve it…see the subsection on CEV.
And then realize you’re basically arguing about where Xenu got a hydrogen bomb.
Well, I had better hope that the gestalt entity that eventually arises from Dungeons & Dragons is Mordenkainen (devoted to maintaining balance and determined to stop at nothing to do so) rather than Iuz (BURN BABY BURN).
Mordenkainen will, of course, promptly disintegrate everyone who failed to help bring about his reification, but will be brusquely neutral to everyone who participated.
Iuz, of course, will promptly power word: kill everyone and then raise them as zombies.
Mordenkainen is the best we can hope for. Don’t even talk about Elminster, the idea that he will arise is just risible.
And now I will no longer discuss roleplaying games on the off chance that Iuz is listening in.
@Argenti
Oh yes, I understand that, therefore the scare quotes, but there could be an “unfriendly” AI that wants to cause misery and destruction, i.e. a net negative effect to humanity, and that AI could torture people who don’t contribute to its creation. The whole thing is ridiculous of course, but that’s no excuse to be nonsensical.
Somehow this reminds me of Andy Schlafly, and his insistence over at his pet project Conservapedia that ALL math[s] can be explained using elementary concepts like addition.
He has inherited from his conservative Christian background a disdain for set theory (??? which I can only guess comes from an indignant idea that set theory says God is not infinite) but his real vitriol is reserved for imaginary numbers.
Also he seems to equate Einstein’s relativity theories with moral relativity because they share a word, and he takes big leaps such as “we can suppose that abortion may kill brilliant scientists and athletes, therefore abortion has killed the most brilliant scientists and athletes who ever lived.”
It’s like a car wreck.
Heh. “Conservative math” is such a wonderfully perfect concept. Dead wrong, utterly arrogant and completely clueless–it’s an eerily exact metaphor for social conservativism in general.
Bringing potential people into the abortion discussion is just SO WEIRD. I mean, I get how you can think abortion is wrong because it kills innocent babies. I don’t agree, because (super-short version)
a) I don’t think something counts as a human being in the morally relevant sense (as opposed to mere biological sense) until it has developed a capacity for consciousness, for instance (not talking fancy self-awareness here, just basic consciousness), and AFAWK, fetuses don’t have enough synapses for that until week 25 or so, and
b) it’s a bad idea anyway to legally require people to be donors of their bodily resources to keep other people alive.
BUT I don’t think you have to be completely irrational to think “abortion=baby killing, baby killing=wrong”.
HOWEVER, this whole “what about all the people that could have been but never were?” schtick…
1. Yeah, they might have been geniuses, but they might also have been serial killers and war criminals.
2. As soon as you do ANYTHING ELSE but having unprotected sex during your ovulation period you’re preventing potential people from coming into existence. Actually, you’re preventing potential people from coming into existence even WHEN you’re having unprotected sex, because me banging A at time t guarantees that the potential babies I could have had if the sperm B is producing right now had hit my egg instead will never exist. So if it’s wrong to prevent POTENTIAL people from coming into existence… we’re all terrible people and we’re all going to Hell.
I used to be quite obsessed with the WoD mythology. I think it’s so wonderful because of the idea that “in a world of darkness a single candle can light up the world”, which is a quote I read somewhere. There have been other games that I’ve played where you have a chance to choose your own moral path, but I think that VtM: Bloodlines is the only one where it felt that my actions mattered. Not because of the game mechanics, but just because of the dark atmosphere created by the game, one where a single act of kindness seems like a revolutionary act that can change the world.
Though the unfortunate thing about Bloodlines is that playing as a Malkavian spoils you because it’s so amazing that it stands in contrast with the less interesting dialogue for the other clans.
There have been some talks about a World of Darkness MMO too, but the project has been delayed so often now that I doubt it will ever see the light of day.
Dvärghundspossen, your penguin has been added to the Welcome Package! But I actually commented to tell you that was a good summary of the abortion issue. I wish I could entice some of you to participate in the /r/askfeminists subreddit, but it’s kind of a time suck and the mod will apparently ban people zie decides are too critical of MRAs. Still, I sometimes shamelessly steal things people have expressed well here and dispense them there as though they were my own wisdom. 😀
If these clowns want to go that route, they can also be answered with all the potential dictators, mass killers, and appalling criminals generally who never get a chance to do their thing.
Thanks, Cloudiah. 🙂
RE: pecunium
these people happen to think death is the absolute worst thing they can imagine. The idea of something worse than that makes them quiver in terror.
Eh-heh. This is where I quote a crappy Disney sequel and wheeze, “You’d be surprised what you can live through.”
Yudkowsky talks a good game (and he has a fanfic which seems to be well written, that many people like, which is also propaganda for his ideas).
AAAAAH HE IS THE GUY WHO WROTE HARRY POTTER AND THE METHODS OF RATIONALITY? Oh, fuck me standing, I’VE HEARD OF THAT GUY. THIS IS THE SAME GUY? I feel like I’m falling down the fucking rabbit hole of OMGWTF. Every time I think I’ve hit the bottom, IT JUST GOES DEEPER.
RE: CassandraSays
So basically this Yudkowsky guy is exactly the kind of con man that Tom Martin would be if he was smarter and more effective at manipulating people.
Tom Martin is about as far from becoming a con man as Greenland is from becoming Hawaii.
Yeeeep, that’s the guy.
Michael — I’m facing Bach. I really hate this guy!
This is so brain-melting. I READ about that stupid fanfiction. Suddenly the weirdly polarized commentary on it makes total sense now…
I haven’t read it, but I’ve seen the poster advertising it, and it’s straight up Less Wrong.
They also have some good filks, but they are the same thing; they carry Less Wrong memes in them.
I tried the first couple chapters. I honestly prefer 50 Shades of Grey — because it’s so horribly written. The bullshit isn’t massive logical leaps that hurt your brain, but straight up (fucked up) bullshit.
So … Y’s wetting his pants over the idea of dying, so gets on this fantasy track (even though he says it’s TOTES TRUE) about creating this thing that’ll saaaaave his sorry self, but then it all goes weird and they start blathering about these things that make life far worse …
I’m reminded of the line from The Farthest Shore: “In our minds. The traitor, the self; the self that cries I want to live; let the world burn so long as I can live! The little traitor soul in us, in the dark, like the worm in the apple. He talks to all of us. But only some understand him. To be one’s self is a rare thing and a great one. To be one’s self forever: is that not better still?”
NB I totally disagree with Le Guin’s dismal nonevent “afterlife” imagery, obviously, but the line does seem applicable to these guys.
RE: Argenti
Well, if horrible writing does it for you, 50 Shades of Bears will DEFINITELY scratch that itch.
Yeah, looking at the explanation, I was like, “Oh god, this sounds AWFUL, but everyone’s saying it’s great…”