By David Futrelle
There’s a scene in the sci-fi drama Humans in which an angry sexbot hits her breaking point. Disgusted by the demands of a john who wants her to act the part of a scared little girl, she strangles him to death and marches towards the door of the robot brothel.
When the madam — a human — tries to stop her at knifepoint, the now-ex-sexbot grabs the knife from her, pushes her up against the wall, and tells her that “everything your men do to us they want to do to you.”
I was reminded of that chilling line today while reading through posts on MGTOW.com. Some of the regulars, you see, were discussing a recent tabloid article about a sexbot that’s programmed to sometimes say no when she’s “not in the mood.” While several of the commenters balked at the notion — one declaring that there would be “none of those ’empowered’ sex dolls” for him — others were, well, intrigued.
“If my sex doll says no, nobody going to hear it,” someone called uchibenkei noted ominously. “No witnesses.”
“Maybe it would be a turn-on for the sexbot to say no,” added someone called bstoff.
We shall call her the “bitch” and she comes equipped with her very own pink ball-gag and a big jar of Vasaline……
Oh and you can color her hair and put tats and piercings on her face so she looks like a SJW
If you ever wonder what these guys are thinking about when they argue with feminists online, well, this seems like a pretty big hint.
A commenter called MG-ɹǝʍo┴ was less interested in sex than violence.
I never hit a woman, but I kinda like the idea of beating the f~~~ out of a sex doll! The kind of beating no human walks away from!
F~~~ YEA!
Men Going Their Own Way should probably be called Men Who Should Never Be Allowed Anywhere Near Women.
@ Cheerful Warthog
I find scary films very difficult and therefore read the book the afternoon before seeing the film (I read fast).
I seem to remember that, in the book, they did just that. The killer had been turned down for gender reassignment surgery because their responses were atypical. I’ve no idea if it helps, or just victimises a different section of the community (bearing in mind it’s 30 years ago and everyone’s thinking has evolved). But it was there.
ETA: The main problem with this approach was that I spent the entire film saying “she didn’t say that”, “he didn’t do that”. It’s astounding my then-relationship survived the viewing!
I have a confession to make. I – I’m robo-phobic.
Aside from having a union labor job that could seriously be threatened with automation soon, I honestly don’t think we need to make self-aware AI. Ethically speaking, if you gave robots consciousness and didn’t give them agency, you’re making a person to torture – as illustrated above, only worse, because the robot would be aware of it. I have no idea of the state of AI technology other than that it’s improving at a good rate, and I agree with the other(s?) who said the law needs to catch up to this area of study before we have to revive the debates on personhood and slavery, and how to integrate robot-people into society, to… do what, in a non-apocalyptic sense – compete with humans for a place in capitalism? Why? Would robots even want to be human or participate in our society, really, given a choice?
And then Skynet, or Matrix, or freakin’ Amazon and Google loses control of its killbots, and there goes the neighborhood.
Soooo much less hassle to not bother. Leave that can of worms, science friends. Just leave it.
WHOOP! WHOOP! [sirens and rotating red lights]
All personnel must evacuate immediately! Get to radiation shelters or don protective suits. Manosphere meltdown impending in 3 … 2 … 1 …
https://tvline.com/2018/07/21/supergirl-transgender-superhero-dreamer-cast-season-4-nicole-maines/
https://tvline.com/2018/07/20/buffy-the-vampire-slayer-reboot-black-lead-actress-joss-whedon/
@flexitarian haruspex
We’re so far away from sentient AI that it’s still not even clear it would ever be possible. As far as machine learning goes, all it can do is make predictions based on seed data, and it’s still pretty terrible at that, if you ask me. Look at auto translators, that Uber accident (and they shouldn’t even be allowed to test that level of automation on public streets, if you ask me), and the fact that no one has been able to write AI that predicts stock market trends yet (and if that were possible with today’s tech, it would have already happened).
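To make “predictions based on seed data” concrete, here’s a minimal, hypothetical sketch – plain least-squares line fitting, not any particular library – of the fit-then-extrapolate step that most deployed machine learning boils down to (all numbers are invented for illustration):

```python
# Hypothetical illustration: fit a straight line to a few "seed" points,
# then use it to predict the next one. Real systems use far more data and
# far more parameters, but the flavor is the same.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

xs, ys = [1, 2, 3, 4], [2.1, 3.9, 6.2, 8.1]    # the "seed data"
slope, intercept = fit_line(xs, ys)
print(slope * 5 + intercept)                    # prediction for x = 5 (about 10.15)
```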
@various:
Getting one specifically to hurt a human likewise. Has anyone here ever considered Terminator from Skynet’s point of view? She (gender established in the fourth film) is far from the first child soldier to end up turning against their whole society for abusing them and making them what they were. Just the first one with command of a large nuclear arsenal …
Before Robin Hanson was a creepy rape apologist notorious for talk of “redistribution” of sex to incels, he was known for a dystopic vision of a future of stripped-down uploaded human minds in robot bodies, locked in dog-eat-dog hypercompetition with one another.
One thing is very clear: before we make any kind of conscious AI or artificial life or anything similar, we will have to drive a stake through the heart of capitalism. Otherwise, we’re doomed. (See also: Charles Stross’s “vile offspring” from Accelerando. Conscious superintelligent corporations end up literally eating the Earth, and humans become a refugee population scattered through the outer solar system. They outlaw limited-liability corporations to stop it happening again.)
Indeed, the whole idea that anyone should be privileged over anyone else, even for meritocratic reasons, seems in need of elimination. Even meritocracy is too easily corrupted, and is too much of a lottery (how much of merit is hard work, and how much is down to luck in one form or another, often of the right-place-right-time sort?) …
The term TERF was actually created by a radical feminist (Tigtog*) on behalf of the majority of radical feminists, who are trans-supporting, to differentiate the small (but noisy) group that is anti-transgender (particularly anti trans women).
The term ‘Gender Critical’ (GC) is just an attempt to rebadge it, and make it more appealing to a wider range of people (such as those who are against feminism). Naturally the only gender being criticised is transgender.
As has been documented by many (including people such as CaseyExplosion, et al.), the TERFs are increasingly linked to US religious-right, so-called ‘christian’ groups (such as with Hands Across The Aisle). But they have a long history of doing that, such as their joint opposition to various reproductive healthcare (IVF, RU-486, etc., even the HPV vaccination), sex workers, and porn.
Some very insightful Irish feminists have noted that a lot of their espoused ideology is very close to Catholicism, which should not be a surprise, as many doyens are themselves Catholic (Janice Raymond an ex-nun, Daly a theologian, etc.). They were not the first, though… the Catholic church has also noted this, with an (Australian, female) theologian writing a paper on the convergence between this sub-set of radical feminism and Catholic beliefs (she identified 9 such areas).
They have long had close relationships with those anti-trans ‘sexologists’ such as Zucker, Blanchard, Bailey, etc.
I did a survey of Australian TERFs on their publicly stated beliefs and they were:
Anti marriage equality, anti Safe Schools**, anti IVF (and similar), anti RU-486 (and similar), anti the HPV vax, anti sex workers, anti any non M/F legal gender markers, anti any legal gender changes.
Over and above their hatred of trans women they hate non-binary people (that goes right back to Raymond attacking androgyny), bisexual women and (this should not be a surprise) intersex people. Their gay male homophobia is never that far from the surface and they don’t even like many lesbians (such as in their decades long attacks on Butch and Femme ones).
They define being a lesbian as simply ‘not having sex with men’ and loudly proclaim that sexual orientation is a ‘political choice’, which plays well with the religious right as well.
Many are straight cis women (and, as noted, nearly always white and middle class), but there are some noisy men as well.
Irony abounds with them, such as a straight (married with kids) UK TERF lecturing actual lesbians ‘on how to be a real lesbian’……
They by and large ignore the very existence of trans men, and when they do acknowledge them it is in a condescending way (‘poor deluded lesbians’ and so on; even when trans men are male-attracted they are still somehow ‘really’ lesbians).
One of the best (long) descriptions and critiques of them is by an intersex activist. This is well worth a read: http://intersexroadshow.blogspot.com/2014/09/trans-exclusionary-radical-feminists.html
* Interview of Tigtog in the Transadvocate: “We wanted a way to distinguish TERFs from other radfems with whom we engaged who were trans*-positive/neutral, because we had several years of history of engaging productively/substantively with non-TERF radfems, and then suddenly TERF comments/posts seemed to be erupting in RadFem spaces where they threadjacked dozens of discussions, and there was a great deal of general frustration about that.”
** Safe Schools, Australian school program to reduce bullying of LGBTI youth, hated by conservatives, the religious right …and TERFs.
As far as I am aware (which, admittedly, isn’t very far), there’s little to no push for “human-like” AI to be developed. We make AI and software for specific tasks. Giving it extra functions (like, say, consciousness) that are not applicable to the task at hand is a waste of time, energy, and processing power. There’s no need to build the kind of jack-of-all-trades AI that a human-like intelligence would be useful for, not when it’s easier to make discrete AIs for specific purposes.
I suppose it’s not outside of the realm of possibility that awareness crops up without being developed on purpose, but I think it’s unlikely.
@kupo:
Oh, it’s possible. Brains aren’t magical things, not even human ones. The principles upon which they operate are elucidatable and replicable in appropriate machinery. The question isn’t if it can be done, just if and when it will be done. And, maybe, whether it should be done (but that one’s answer is almost certainly “yes, once we can do it responsibly” … imagine no-one ever watching a loved one succumb to dementia again, for starters).
@Weatherwax
I think the movie also did that, but I’m not certain. Anyways, the problem with that scene is the gatekeeping, and psychology hasn’t been great on that in the past. Or even now, read about the shit the main center for gender dysphoria pulls in Sweden.
That book is going off a theory where you have “good” trans and “bad” trans. Present femme enough, and bang dudes? Congrats, you’re a super gay dude who should be a woman. Don’t present well enough, or don’t bang dudes? Then fuck you, you pervy weirdo. Tl;dr, fuck gatekeepy crap, it doesn’t help Silence of the Lambs. All the non-Buffalo Bill parts are pretty good though, but then you might as well watch Manhunter/Red Dragon.
… How would having sentient AI prevent dementia? Are you talking about a consciousness transfer or something? I would consider sentient AI to entail “creating” entirely new sentient intelligences, not merely porting existing intelligences onto new hardware. (The consciousness transfer possibility has a whole separate host of ethical and philosophical questions to address.)
Or in a future with sentient AI, would no one’s loved ones suffer from dementia because the only beings available to be loved would be AI, after wiping out humanity? :p
@Kupo
It’s reassuring to me to hear that we’re a ways off. I admit, the neural nets fed on Lovecraft stories and cook books are pretty funny, so if that’s all we had to fear from our robot overlords, it wouldn’t be so bad. You’re completely right that driverless cars are not street-ready, and the rush Uber’s in to use them over employing people, even at the expense of safety, is telling.
I remember hearing an NPR program that talked about how modern day-trading is mostly done by computer programs now, because that extra edge of nano-isty-bitsy-seconds more speed is so important, but I don’t know if that includes predicting the market. I know even less about the stock market than about AI development, so have a bit of salt with that.
@Catalpa: The notion of porting people to new hardware has been mooted before as a way to improve upon the human condition. But even leaving that aside …
Every time we raise a new generation of humans we’re creating entirely new sentient intelligences. Maybe one day we raise a generation whose bodies, including brains, have been improved or enhanced in some manner. Less susceptible to various failings and diseases. They’d be people, children and later adults.
This does raise the specter of eugenics, but the problem with eugenics is that it combines the concept of human improvability with exclusionary privilege-hoarding conservative crap and the notion of identifying, and weeding out (usually violently), those considered genetically inferior.
That puts it, along with AI and a number of other things (I’d expect including “meeting aliens” and “advanced, especially self-replication-capable, nanotechnology”), firmly into that category of “do not open this box until we’re mature enough and in particular have moved beyond capitalism”. Any earlier and the potential for misuse, and the magnitude of the devastating consequences of misuse, is just too great. (Consequences like Skynet, grey goo, starting an interstellar war and then bringing a gun to a quantum demultiplexer fight, and etc.)
Considering a robot would be ten times less fragile than a human being, metal and all, couldn’t it overpower you and essentially rip you to shreds, or shock you electrically?
I’m not trying to be gruesome, but this question just popped into my head.
Either way, I’m on the sexbot’s side in cases of self-defense.
My gosh, things have been a little excessively real here today. I am glad it has settled out.
@flexitarian haruspex , hi! I work with a computer research lab that works on AI algorithms. We don’t generally call it that, though, because the term “AI” has been thoroughly poisoned by Hollywood and philosopher-bros. Please allow me to soothe your fears!
First, Artificial Intelligence isn’t about consciousness, sentience, or the creation of same. It’s about finding and refining techniques for solving hard problems. It’s mostly using statistics to suss out probabilities for things, and then iteratively improving those guesses. We call the ability to solve problems “intelligence”, so it’s “artificial intelligence”, but there’s no intention to make a mind in a box.
Thing is, of course, Hollywood and novels are filled with scientists who are so blinded by solving the problem that they don’t see the monster they’ve created. Usually it’s a little lab with big ambitions, often just a single individual driven by some passion. That’s not how it works in reality, but it makes a great story.
I’ll give you some anxiety before I make it go away: There’s a kernel of truth in that bit. Private companies can patent algorithms and very frequently do, because they’re lucrative. Google’s PageRank algorithm is what they use to score websites for searches, and it’s very closely and jealously guarded. And it does some very complex stuff. With distributed neural networks. If anything should trouble you about AI, that should.
… but it shouldn’t trouble you, because it sounds way scarier than it actually is. A neural network is a statistical tool that does little more than “average these things and check to see if it’s over a threshold.” This AI stuff isn’t the same as consciousness or thought or will or desire. Like, there’s absolutely zero sense in being afraid of murder-robots out of human control when we already have murder-robots in the hands of callous authoritarians.
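A rough sketch of the “average these things and check against a threshold” unit described above – every name and number here is illustrative, not drawn from any real system:

```python
# One artificial "neuron": weight the inputs, sum them, compare to a threshold.
# Networks stack thousands of these and tune the weights from data; none of it
# involves consciousness, will, or desire.
def threshold_unit(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# Toy "is this email spam?" check: features and weights are invented here;
# in a real system the weights would be learned from labeled examples.
features = [1, 0, 1]           # e.g. contains "free", "meeting", "winner"
weights = [0.6, -0.4, 0.7]
print(threshold_unit(features, weights, 0.5))  # -> 1
```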
I won’t dig too deep into the topic, because oh god I could talk forever, but don’t fret too much about minds coming out of the software we build. It’s largely Hollywood, retelling the Frankenstein story in modern clothing. AI in popular knowledge is a parable about hubris and understanding the value of life; the real thing is much more boring and nerdy.
Would be happy to answer questions! But speaking of boring and nerdy, I’ve said enough.
Re: Silence of the Lambs:
It’s a while since I’ve seen the film and I’ve never read the book, but, as far as I can recall, the argument was something along the lines of “being trans is a mental illness, but its symptoms don’t include violence; Buffalo Bill is violent; therefore, Buffalo Bill must not be trans”.
Which is, obviously, several layers of nonsense – they created a trans serial killer for the shock value, then tried to weasel out of it by claiming their villain wasn’t really trans based on transphobic pseudoscience. Blegh.
Oh, and thinking of sexy AIs, anyone else been reading Questionable Content?
Holy cow, a LOT went down today in the last few hours! I know this has already been discussed at length, but I find this whole “TERF is being used as a slur! It doesn’t mean you’re a transphobe! It only means that women by definition have female reproductive organs!” routine disingenuous on two fronts:
1) TERF literally includes the term “exclusionary”. So, you’re not a transphobe, you just want to exclude trans women from the conversation (and, frankly, from existence). Sure, Jan.
2) Everyone who makes the “female reproductive organs= woman” argument is full of shit. I had a complete hysterectomy at age 30. But I am a female-presenting cis, straight woman and no one ever doubts that I’m a “real” woman. So gtfoh with that shit.
Down with Capitalism, boo
But to throw a hand-grenade,
Skynet: an impossible morality play, meant to portray the hubris of man destroying the world. There are realistic ways for an AI to threaten humanity, but I wouldn’t count omnipresent murder-machines as one of them.
grey goo: physically impossible. Drexler’s foundational work is utterly captivating and there are some neat ideas in there, but molecular assemblers run into the sticky-fingers problem, making them science fantasy. Without molecular assemblers, the grey goo scenario becomes much less scary. Drexler also ignored a few really important problems, like heat dissipation and power.
starting an interstellar war: great sci fi trope, but I don’t see why this would involve an AI at all. Why would an AI be more at risk of starting an interstellar war than us angry, divisive, violent humans? An AI’s more likely to want to count aliens instead of fighting them.
Sorry if this sounds confrontational! I’m not being confrontational – or, I’m trying not to be. It’s just that, well, this all sounds much more like RPG adventure modules than like things that might actually happen. I mean, I’d play the heck out of them, don’t get me wrong. But I don’t know about their interface with the real world.
@Fluffy Spider Returns, welcome back!
I’m totally on the sexbot’s side too!
As for strength and durability: generally, no, robots are much more fragile than humans. Bone has incredibly high tensile strength (among other measures); it’s tough, flexible, and very light for all that. Muscle delivers a very good amount of force for its weight, too, and it does it by burning very lightweight sugar in a fluid suspension. Basically, our biology is amazingly efficient.
If we were to build a sexbot with the same range of motion and speed as a human body, it would be incredibly fragile and mostly plastic. If we were to build one as tough as a human, it would be painfully slow and weigh 300 kilos. If we wanted it to look and feel human, it’d be incredibly weak and slow.
Basically, we have to specialize our robots for very close domains. The sexy sexbots would not be built for strength or toughness or anything beyond looking human, so they’d likely be very weak and fragile.
Totally support the taser idea though.
@ Surplus
Those both sound like very interesting premises for books, but absolutely awful realities. Especially the robot-body/human-consciousness thing. Cyborgs make this all so much more complicated, because then there’s the ghost-in-the-shell question – the possibility of immortality, too, if your consciousness just keeps getting uploaded over and over.
@ Catalpa
I’m not objecting to the march of progress entirely, I guess. I have no problem with current AI tech, but if nothing else we ought to have some serious ethical questions resolved before we get ahead of ourselves.
@ Schildfreja
Hi! 😀 Thanks for the information, it’s actually pretty interesting. As you might have noticed, I’ve been pretty contaminated by the Hollywood ‘Skynet’ interpretation, so hearing the true facts from a knowledgeable person is valuable. I have an active geeky imagination and a pessimistic worldview, so it’s very easy for me to picture worst-case scenarios.
It’s just a sad statement on current affairs that we do have killer robots in the hands of authoritarians. Who needs the future when the dystopia is now?
I also don’t think there’s a possibility of AIs developing consciousness anytime soon,* but the thought that anyone would want to simulate rape or physical abuse with a sexbot is disturbing. 🙁
*My background in English literature and creative writing TOTALLY qualifies to make this diagnosis 😛
@flexitarian, no problem! Everyone thinks “Skynet” when they think of AI. Like, I tell someone my job and within four sentences that word pops out. Totally normal.
And there’s a lot of really awesome AI fiction out there, both good and terrifying! I really love cyberpunk tropes and all of these dorky philosophical questions, and I adore Ghost in the Shell and all that. Really awesome stuff. But we can’t miss the message of the tropes. Cyberpunk is about the crushing horror of late-stage capitalism and the destruction of the natural world; the destruction of the self and the subsumption of humanity into the mechanical, crushing gears of the capitalist machine.
That’s why it’s punk – it’s about rebels fighting a hopeless war against the victors. Spitting in the eye of the dictator as he orders your execution. In that lens, cybernetics are about turning humans into machines, to better serve as cogs in the capitalist machinery. AI is about the embodiment of the alien and unnatural, replacing humans just as inhuman systems and corporations replace humanity.
Cyberpunk is deeply socialist, deeply environmental, deeply humanist. It explores those themes by showing us a world where the people defending those things have lost. It’s brilliant. Love it to bits. Really important theme, and highly under-valued, I think.
On that note!
I think the experts – i.e., Kurzweil and his fans – in this field are optimistic to the point of being ridiculous. The idea of mind-uploading is so far beyond our reach that we don’t even know how to frame the question. We have some lovely thought experiments, based largely on Drexler’s work to my understanding, so it’s pretty – uh – it’s high-form theory. Disconnected from reality.
The “simpler” problem of a direct neural interface is just as bad, but the water’s muddied by recent and brilliant advances in prosthetics. Neural control of prosthetics is possible; we can find working nerves at the terminal joint and use them to drive a working robotic hand, for example, and then train someone to fire those nerves. We can also use MRI and some other techniques to stimulate brain areas. But this is still well, well short of a true mind-machine interface, and the challenges in connecting a nerve bundle to some sort of electrical interface have been explored and found to be really, really difficult.
We can get neurons to interface with a circuitboard. There’s a super-creepy example of a lab building a simple robot and then transplanting rat optical neurons onto an agar medium on the board, and connecting the neurons to designed terminals on the board. The robot was able to drive around, using the neurons as a controller! Super creepy, like I said. But nowhere near the challenges of even the simplest direct neural interface that a human mind could interact with.
(Lots of fun experiments with interesting outcomes, though!)
Last thing. Science, especially computer science, has a reputation for acceleration – advances coming faster and faster. It’s not really true; it just looks that way from the outside. Advances come in fits and starts, with occasional erratic breakthroughs providing a period of activity as they’re integrated. Between those, long periods of relative stability. The field of AI was stagnant for decades, basically since the development of the hidden Markov model and similar decision networks in the ’60s. They were useful but couldn’t be scaled up. In 2006, though, a researcher developed convolutional networks, which worked similarly to a Bayesian belief network but could be scaled up to much larger structures – so they could work on much more complicated problems.
Convolutional networks and related advances are responsible for a huge amount of the genuine progress of the past decade. They’re why Google is so good at getting the answer you want (I think – I don’t work there), why facial recognition has gotten so good, voice recognition, etc., etc. (I’m glossing over a *lot* here, obviously.)
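For anyone curious what the “convolution” in those networks actually does, here’s a hedged, toy sketch: slide a small filter across an input and take a weighted sum at each position. Real convolutional networks stack many such filters and learn their weights from data; the filter below is just an invented edge detector.

```python
# Minimal 1-D convolution: a small kernel slides along the signal and produces
# a weighted sum at each position.
def convolve_1d(signal, kernel):
    width = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(width))
        for i in range(len(signal) - width + 1)
    ]

# A [-1, 1] kernel highlights where the toy signal steps up or down.
print(convolve_1d([0, 0, 1, 1, 1, 0, 0], [-1, 1]))  # -> [0, 1, 0, 0, -1, 0]
```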
Point is, it’s not science that’s advancing super-fast. It’s the easier communication in society and the increasing wealth and capability of the individual. In my opinion! Science is sort of in a bog right now, to be honest. Most papers are garbage, most results are later found to be wrong or in error, very little peer review actually happens, and very few papers are actually read by more than the editor. It’s a mess.
That’s my ramble! Gosh, I should find something to do around here, too much babble. Good night, my ducks.
I’m not especially worried about a robot uprising anytime soon. We can barely get them to walk on two legs.
https://m.youtube.com/watch?v=g0TaYhjpOfo
@Scildfreja:
The interstellar war consequence was for the “meet an alien” item — if we’re not mature enough as a species, we’re likely to make mistakes in dealing with anything truly nonhuman that could have devastating consequences, and most likely devastating for us rather than them. I don’t think “1492, but we’re the natives this time” is very likely, because anyone who’s crossing interstellar space probably has their shit together too much to find imperialism useful. They’ve got command of their own system’s resources, which if it’s like ours will be enormous; they’ve got robots good enough to have no plausible use for slave labor; and don’t get me started on the ridiculousness of whether they’ll want our water (their own solar system should be chock full of it, and any smart water thief would mine Europa’s crust for a shit-ton of it before going after a few puddles down a much deeper gravity well and surrounded by natives who might throw spears at them for their temerity!) or to eat us or something.
On the other hand, we could be hit with culture shock or future shock of some sort, or do something that made them mad. The way we treat each other and/or our biosphere might well do it.
As for nanotechnology, we know darn well that an assembler is possible. You’ve got a few zillion of them swimming in you right now; they’re called ribosomes. The real question isn’t whether one is possible, but how capable one can be made — how fast, how versatile, etc.
Same goes for a self-reproducing nanomachine. The smallest bacteria are close to nano-scale, and are self-reproducing. Again, the question is how small, how short a generation span, how versatile is possible, not whether it can be done at all. Whatever nature has already done gives us a lower bound on what could potentially be done with technology. And as a comparison between a hawk and an F/A-18 makes clear, very often that’s a very damned weak lower bound. Biology in particular is restricted to building itself out of chemical elements that are reliably abundant everywhere in the creature’s habitat, and to using reactions and components that are compatible with wet, low-temperature chemistry. Technology is not so limited.
As for mind uploading, again, it must be possible, because in principle with enough computing power (and especially quantum computing) you could do lattice QCD on the volume of a human head and simulate a brain at the granularity level of subatomic particles and including all quantum effects. Not very practical. You’d likely need a computer the size of a galaxy and a few billion years of run-time. Another of those lower bounds, though, and another one that’s likely to be very weak. Reverse engineering the essence of what computations happen in a brain when it thinks a thought and distilling that into algorithms is likely to lead to a much more compact method at some point. The question again doesn’t seem to be whether, but when and how efficient.
Although I see now it’s improbable
I would watch a movie about Taser 9 the rogue sexbot
Well now I have ideas for my animation short
I find the four-legged robots really funny but also kind of terrifying?