Categories
antifeminism, evil single moms, memes, men who should not ever be with women ever, misogynoir, misogyny, MRA, racism

Memeday Part Two: That’s racist as hell, A Voice for Men

AVFM editorial meeting

So A Voice for Men has decided to use a picture of a disabled veteran … to attack black single mothers.

https://twitter.com/AVoiceforMen/status/731667484172935169

The meme was posted on the AVFM Facebook page as well.

The US treats its veterans horribly. But using the plight of wounded and disabled veterans in order to push a blatantly racist and misogynistic agenda is, well, calling it “cynical” would be a massive understatement.

And what exactly has the Men’s Rights movement ever done to help veterans?

Oh, that’s right: Nothing. Nada. Zilch.


102 Comments
Eitan rosen
5 years ago

Looking at Elam’s history, it seems he is the inspiration for much of President Reagan’s rhetoric.

I know he was not, but I imagine that he is what Reagan could have been thinking.

pitshade
5 years ago

Sort of on topic:

https://thesocietypages.org/socimages/2016/05/18/is-michelle-jealous-of-melania-catty-stereotypes-and-racist-cartoons/

Many are aghast at a cartoon recently released by a well-known right-leaning cartoonist, Ben Garrison. Rightly, commentators are arguing that it reproduces the racist stereotype that African American women are more masculine than white women.

Eyes on the Right
5 years ago

If that’s a service connected disability there’s no way in hell that person wouldn’t be considered 100% disabled. That gives you easily double the amount AVFM is suggesting. Plus other benefits such as automobile allowance and, if he rents, a voucher. Not saying whether or not that’s sufficient to meet a vet’s needs and I’m not weighing in on whatever benefits they’re suggesting the black family in that meme are receiving, but this is flat-out dishonest in addition to being obviously racist.

Eyes on the Right
5 years ago

Oh and props for them erasing female veterans and veterans of color. Good job with that, you fake patriots.

weirwoodtreehugger: communist bonobo
5 years ago

I just witnessed some great misandry at the most sacred of all man spaces: a sporting event. I was at a Twins game and they had a feature called “suns out, guns out,” mostly showing men with no shirts. Then the camera went to a woman who looked like a bodybuilder flexing her biceps. Then they kept showing her. Mwah haha!!!

Paradoxical Intention - Resident Cheeseburger Slut

Eyes on the Right | May 21, 2016 at 2:53 pm
Plus other benefits such as automobile allowance and, if he rents, a voucher.

My uncle and aunt also run a specialty garage where they add on stuff to vehicles to make them more accessible to disabled people, like making them easier to drive, easier to load up their wheelchairs, etc.

There’s actually a knob that you can attach to your steering wheel that helps you steer if you’re missing an arm, or even a left-foot gas pedal if you are missing a leg and such. Of course, it requires training to use, and you have to go through a specialty driving school to learn how to use it before it can be attached to your vehicle for your personal use.

And guess what? These modifications are often covered by insurance, benefits, or the VA.

Tosca
5 years ago

I’m really confused about what it is these guys think they want.

They want to have sex on demand with women, but don’t want to use condoms, but don’t want the women to get pregnant, but call women who use contraception sluts and want women who have abortions to be punished for it.

If they have children they want free access to them, but don’t want to do any of the domestic work associated with raising them and don’t want to pay any money toward housing and feeding them.

Mothers in a relationship with a man are parasites, mothers not in a relationship with a man are sluts, women who aren’t mothers are unnatural and denying their biological destiny. Oh, and a woman who won’t have sex with them is a frigid snob…and one who will is a manipulative slut.

If someone went up to them and said “So, describe your ideal woman. Someone who is and does exactly what you want”, what the hell would they say?

Victorious Parasol
5 years ago

@ Tosca

They’d describe one of their precious sexbots. The ones that are going to be on the market aaaaaaaaaaaaannnnnnnnnnnnyyyyyyyyy day now.

Moggie
5 years ago

OT: meta mansplaining:

[image: screenshot of a tweet and its mansplaining reply]

Youthful Indescretion
5 years ago

@Tosca I think the answer would be ‘a paranoid wreck who never feels like they could be good enough for me and is constantly having to prove themselves worthy of my approval’. Sadly, I think the inconsistencies are by design, not flaw.

Mish
5 years ago

I get up this morning and there are GOODIES awaiting. Moggie, that mansplaining is pure gold. The original tweet is funny on its own; but then I saw the response, and nearly lost my coffee. As Bart Simpson would say “Oh, the ironing!”

Scildfreja, that was exactly what I was thinking. You’re a genius, that’s all there is to it 🙂

numerobis, I hope you’re right about things getting better. I’m certainly not just sitting around ‘on my ass’ as you put it 😛 – but I’m not giving much visible support to the main opposition. They started the current refugee policy when they were in govt. and they’re not giving any indication that they plan to change it if they win this time. The Greens, however, are gaining force and may end up with enough seats to be able to have real impact on whoever wins. That’s where most of my support goes.

Mike Hisandry
5 years ago

A monthly allowance to meet basic needs is not a reward. Getting blown up is not winning the lottery; neither is having five kids. “Deserving” has absolutely nothing to do with it, it’s just about “How much does this person need to survive, and how much can we give them?”

Life as a disabled vet is expensive. So is life with five kids. Neither of these people are living the good life on the income shown.

(Never mind the fact that we don’t even know who these people are – he could have lost his legs in a bike accident and she could be fostering or adopting those kids. I’m not about to take Paul Elam’s word for it.)

@Tosca
They want women to bear the blame for every shitty thing in history. They want immunity from having to care about how others respond to their actions. They want all the power, and none of the responsibility.

Mels
5 years ago

@Tosca – I think of Lundy Bancroft saying that abusive men aren’t abusive because they’re angry; they’re angry because they’re abusive. They will always be angry at women, because there is no state in which a woman can exist, no way in which she can live her life, that will keep them happy.

@Moggie – Absolutely brilliant.

Kat
5 years ago

OT — but I (almost) can’t help myself. This woman escaped several years of sex slavery and ran a triathlon from Mexico to DC, following a human trafficking route, to call attention to the plight of humans being trafficked!

http://www.cnn.com/2016/05/16/world/human-trafficking-norma-bastidas-triathlon-record/index.html

Got to this site from clicking on the very amusing “Mansplaining” link and just kept clicking.

Kat
5 years ago

Trigger warning for reference to sexual violence.

Oh, KKK, what a charming photo.

My boyfriend says they’re emerging from the depths of hell.

I say that it’s a highly sexual photo and not in a good way. Or possibly demons being born. Is it just me???


Snowberry
5 years ago

If someone went up to them and said “So, describe your ideal woman. Someone who is and does exactly what you want”, what the hell would they say?

They want a woman who is a shapeshifter that can match their ideas of a perfect 10, a personality-shifter that can match their ideas of a perfect submissive, can become pregnant or not on demand, can read both their conscious and subconscious mind perfectly even when not present, has the ability to warp the local reality to match their expectations, is absolutely devoted and obedient to them, and never ever allows them to be aware of any of those supernatural powers to avoid making them feel inferior.

And then they’d just get angry and abuse such a woman for failing to do anything to justify abusing her. Because in the end, what they feel they “need” is a perfect being who exists only for them to abuse, but that merely feeds into their anger and/or hatred addiction.

EJ (The Other One)
5 years ago

@Scildfreja, that’s magnificent.

tricyclist
5 years ago

The mansplaining bloke (Mr Clarke above) is claiming his tweet was deliberate irony.

Looking at his TL, I’m inclined to believe him, and if so, it’s a tweet of genius.

TARDIS with a Hat
5 years ago

That’s exactly the thing, @Tosca, their perfect woman is still someone they want to HATE and BLAME and SHAME. Yeah, they want her to submit to gender roles and have sex with them and be conventionally attractive and take care of herself – but they also want to berate and mock her for being a lazy housewife and a stupid girly girl who has stupid interests and likes makeup and is just so hysterical about her looks and is a slut. It’s a feature, it’s part of the thing.
A more subdued version of this happens in our society at large: feminine women are treated as “lame” and their girly interests, hobbies, jobs and aspirations are treated as inherently lesser than the masculine. But more masculine women are mocked and berated and harassed and pressured to become feminine, and that lesser girly stuff is constantly being shoved at them as the only good option for women.
There’s no winning formula for a woman in the game of sexism.

EJ (The Other One)
5 years ago

@Moggie:
That’s a lovely tweet, but I can’t look at it without noticing that someone’s phone is about to run out of battery, and that makes me anxious. Whoever you are, please plug your screenshot into the charger.

Mish
5 years ago

@EJ (The Other One):
You’re adorable 🙂

Sinkable John
5 years ago

@Moggie

I laughed so hard at that tweet that I got burning coffee on my knees and it burns.

@Tosca

I’m really confused about what it is these guys think they want.

I think they already have it. Given that the only thing these people seem able and willing to do is yell at, blame, threaten, shame, etc, women… Well, they already have everything they need. They wouldn’t have it any other way. Actually, any other way would lead them to keep doing the exact same thing. They’re the witch-hunters of this era, they’ll just keep looking for new dumb reasons to do… whatever it is they actually do.

@Victorious Parasol

They’d describe one of their precious sexbots. The ones that are going to be on the market aaaaaaaaaaaaannnnnnnnnnnnyyyyyyyyy day now.

I actually can’t wait for sexbots to be a thing. ’cause IF that ever happens, their precious INVISIBLE HAND will also make male bots, gay bots, etc happen. And then we’ll be able to stare in awe as Paul Elam starts A Voice For Robotic Men, a site entirely dedicated to the woes of male bots abused by those mean women (or maybe even female bots). And then the Red Pill guys are gonna go full circle when their apocalyptic gynocratic theories involve actual robots, that is gonna be PRICELESS.

Imaginary Petal
5 years ago

Every smartphone screenshot on the internet has dangerously low battery in it. What’s up with that?

Scildfreja
5 years ago

The internet is clearly stealing your phone’s battery any time you plug your phone in to sync. What else keeps the internet charged, after all? What is this, battery socialism?

@Sinkable John, re: Robot Uprising

… I had never thought of what they would do if there were actually sexbots for both men and women. That is brilliant.

I guess I had better get back to work on that general cognition problem!

EJ (The Other One)
5 years ago

Can we just plug the android sexbots into the gynoid sexbots, close the door, and declare sex a solved problem?

Scildfreja
5 years ago

@EJ … would they …

…. would they keep each other charged?

have you just solved the world’s power crisis?

with robosex?

Sinkable John
5 years ago

The Matrix : Reverse(x)d

Edit: OH GOD THIS PUN IS MAKING ME CRINGE INTO OBLIVION

Dizzy
5 years ago

@Victorious Parasol and @Sinkable John, you know what actually worries me about sexbots? Eventually we’re going to create a human-like intelligence and the first thing people are going to do is program it into a sexbot. Turns out that the Real Doll company has customers send their dolls back all the time with damage to their breasts, face and genitals. Because customers cut them up with knives. So I’m afraid we’ll make a genuine intelligence, and then people will want to program it to both experience genuine pain but “enjoy” it the way that MRAs seem to think that rape victims secretly want to be raped. And I guess the logic is better a doll than a human, but at the same time, if something is intelligent, I don’t think it’s ethical to program it to be a slave. So I worry about that sometimes.

Sinkable John
5 years ago

@Dizzy

Let’s find comfort in the fact that we’re unlikely to create a sentient AI in the near future or even at all. The prospect of red pillers going nuts over some Matrix Sexbot Uprising is much more fun.

Scildfreja
5 years ago

@Dizzy, the whole “is it ethical to build a conscious machine that wants to be a slave” is an excellent question, and reams of great science fiction and philosophy have been written about the subject. It’s a very complex knot, and it’s worthwhile to dig into it.

I don’t think there’s actually that much worry about an actual general intelligence being built into a sexbot that wants to be abused, though. One of the key things about a sexbot that the doods find appealing is that it isn’t a conscious intelligence.

General Intelligences are complex, multimodal and difficult to predict. Even if it was programmed to want abuse, that desire would show up in all sorts of places that would make it very alien to a human. It’d be creepy, not sexy. It would do things like seeking out abuse or outright violence to itself – a non-survival instinct, if you want. It just wouldn’t behave like a human. That sets off the creep alarm for most people.

They want something more like the artificial intelligence you find in video games, I think. Game AI is severely limited, and usually tailored to provide rewarding behaviour (for the player) and not intelligent behaviour. Intelligence in a sexbot would be unwelcome complexity – no exploitable or predictable flaws.

They want something that behaves predictably when interacted with in a certain way. They want it to be reliably erotic when interacted with in a certain way, reliably comforting when interacted with in another way, reliably maternal when in another way. The only way we know of consciousness emerging is through processes that are anything but reliable. They won’t want anything conscious. They don’t want a person, they want a game console with sex toys attached to it.
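What Scildfreja describes is, in the limit, just a lookup table: a fixed mapping from interaction to response, with no learning and no internal state. A toy sketch of that idea (all modes and replies here are invented for illustration, not from the original comment):

```python
# The "game console" behaviour described above: scripted, stateless responses.
# Same input, same output, every time -- predictable by construction.
# All interaction names and replies below are made up.

SCRIPTED_RESPONSES = {
    "greet":   "Hello again.",
    "comfort": "There, there.",
    "praise":  "You're doing great.",
}

def respond(interaction):
    """Return the scripted reply for a known interaction, else a stock fallback."""
    return SCRIPTED_RESPONSES.get(interaction, "I don't understand.")

print(respond("comfort"))   # -> There, there.
print(respond("comfort"))   # -> identical every time; nothing emergent here
```

Nothing in that table can surprise its user, which is exactly the contrast with a general intelligence, whose behaviour is shaped by state and learning you cannot enumerate in advance.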

(Sorry if it seems like I’m saying “you’re wrong!” here. You’re not wrong at all – I just don’t think they want something with a generalized intelligence or consciousness. I’m being pedantic :s I think a lot about general intelligences, though!)

Don’t worry about the suffering intelligent sexbots. Worry about the suffering intelligent lawyer-bots. First lawyerbot was hired by a private law firm this week. Good luck, everyone.

Dalillama
5 years ago

@Scildfreja

General Intelligences are complex, multimodal and difficult to predict. Even if it was programmed to want abuse, that desire would show up in all sorts of places that would make it very alien to a human. It’d be creepy, not sexy. It would do things like seeking out abuse or outright violence to itself – a non-survival instinct, if you want. It just wouldn’t behave like a human. That sets off the creep alarm for most people.

They want something more like the artificial intelligence you find in video games, I think. Game AI is severely limited, and usually tailored to provide rewarding behaviour (for the player) and not intelligent behaviour.

A habit that I’ve picked up via science fiction is to differentiate AI into Machine Intelligence (MI), meaning a General Intelligence that is instantiated as a digital computing device of some sort (a term which, incidentally, I got from The Turing Option; the robot character objects to the term artificial intelligence as it implies that they are not truly a mind), and Pseudo-Intelligence (PI), which is like game AIs; a program that can impersonate a sentient being to a greater or lesser extent as long as all interactions stay within preestablished parameters. (This term I borrow from Neal Stephenson’s The Diamond Age).

Scildfreja
5 years ago

@Dalillama, hmmmm. Interesting distinction. The terminology is sort of in flux on the research side of things, though to be honest it’s always been in flux.

Artificial Intelligence, as a term, has been sort of dropped from general use. It’s been polluted by Hollywood to the extent that it’s not really useful in describing what we do. It’s also splintered from its sort-of-amalgamation in the 60’s. Back then there was AI and Cybernetics and Robotics, only Robotics has really survived as a meaningful term in the current day.

AI has a few terms for it now, depending on who you talk to and the specific field they’re looking at. IA is Intelligence Augmentation, which implies that the systems themselves aren’t intelligent, but they help humans behave more intelligently. ML is Machine Learning, which is part of AI and has sort of grown out on its own. Deep Learning is a subset of ML – Google’s Deep Dream, with the contiguous ultrahounds, is a good example. That uses neural networks, but that term has fallen out of fashion. There’s also Big Data and Big Compute, or Cloud Computing, which is all about processing vast quantities of data quickly in order to come up with intelligent answers to questions. Google itself is a great example of big data in action. ML also features Machine Vision, which is big enough for it to be considered its own subject, though classically it should be considered cybernetics.

All of these would be considered PI in your example – they can generate intelligent behaviour within their fields, but are useless outside. MI isn’t really being pursued in your definition – there are a few interesting simulation projects (the rat neurons wired to a robot is creeeeepy) but in general it’s just too big of a question to really tackle. Even in my lab – we’re trying to tackle some questions that are really close to the general intelligence problem – we limit ourselves to certain domains, and we’re not at all trying to simulate a brain, or a portion of it. It’s just too big and unknown to do anything meaningful yet.

So I guess most of the work these days is PI, whereas MI is considered more like building cloud-castles! Interesting terms, though. I’ve considered the same distinction, but never used those terms specifically – I usually just use IA instead. Neat!

Patricia Kayden
5 years ago

@pitshade. Thanks for that link. Read the whole BuzzFeed article. Fascinating exposé on Elam’s hypocrisy and how he has used misogyny to hustle for $$$.

@Mish, your comment about the Immigration Minister who is arguing that refugees are illiterate but at the same time taking away people’s jobs and simultaneously clogging up the unemployment line was hilarious to me. Especially when you added that the refugees aren’t even actually in Australia.

Wow!! Absolutely hilarious how someone can lie like that and be taken seriously by anyone.

pitshade
5 years ago

@ Patricia Kayden

Thank David for posting about it initially.

Dizzy
5 years ago

@Scildfreja, I didn’t feel like that at all! I have this conversation all the time with a friend who is an Actual Philosopher and whose specialty is artificial intelligence, and neither of us has figured out a good, clear answer to it. It’s fun to talk about though, and I like getting other people’s opinions and ideas.

I do think we’re trying to move to a human-like created intelligence, so that’s what concerns me, and I also think that what people assume that’s going to look like is “a group of programmers literally programs something that’s very like a human brain”. I think, in theory, if we’re ever able to do that, it would then be possible to program it to feel however someone wants, and I don’t think it’s ethical to program something to love slavery in the same way it isn’t ethical for someone to cut out bits of my brain until I feel that way.

But if you’re right about the future being in general intelligence rather than artificial, which is looking pretty likely, then we wouldn’t be able to control it very well. In which case we wouldn’t be able to force it to enjoy pain or slavery, and the whole point is a bit moot.

I’m still going to worry about the sexbots, and the lawyerbots, but I definitely want to see how things play out first.

Scildfreja
5 years ago

I do think we’re trying to move to a human-like created intelligence, so that’s what concerns me, and I also think that what people assume that’s going to look like is “a group of programmers literally programs something that’s very like a human brain”.

We aren’t really moving towards a human-like created intelligence – for one thing, the definition of that is really vague, and for another, why would we want one? Much more useful to have a non-sentient/non-conscious system that’s good at predicting stuff. There’s a slim but non-zero chance that something like that might emerge accidentally, but it certainly won’t be human, and it certainly won’t have feelings/emotions/reactions like a human. Those things rely on the precursors we evolved with, and it wouldn’t develop along the same lines. It wouldn’t think like us at all.

More along the lines of your last bit there, there’s a project called the Blue Brain project that’s working on simulating a set of neocortical columns. Slated to be done in 2020. Note: that’s not modeling a brain, just a patch of the neocortex.

I think, in theory, if we’re ever able to do that, it would then be possible to program it to feel however someone wants, and I don’t think it’s ethical to program something to love slavery in the same way it isn’t ethical for someone to cut out bits of my brain until I feel that way.

Hm. If we had a general intelligence that feels/thinks like a human (brain simulation, basically), that doesn’t imply that we could make it feel however we wanted. If it’s a simulated brain it’s still going to be subject to the limitations of the brain’s structure – neocortex doin’ stuff, midbrain handling communication between it and the body, etc, etc. That imposes a lot of limits on what is and isn’t acceptable.

I do get what you’re saying, though, and I completely agree – it wouldn’t be ethical to edit a brain to feel the way we want it to feel. For the same reason we wouldn’t edit a person’s brain to make them feel a certain way, or engineer a baby to be a content slave. That’s obscene.

Let me toss you a more difficult question, though! I can build an AI system that can monitor your behaviour and, from that, figure out what sort of life you would be most successful in. What career you’d be good at and find rewarding, whether the relationship you just started is going to work out, etc. Cradle to grave, this thing could gently guide you – not coerce, just suggest – to be happier, healthier, more productive, and more fulfilled.

It wouldn’t involve giving away your data or giving up your privacy to anyone. It wouldn’t tell you that you have to do X, and it wouldn’t trick you into doing X through deception or through hiding information. All it would do is let you know at critical moments, “Hey, this isn’t going to turn out really well – I think you’re going to end up in situation A. If you start doing this other thing, you’ll probably start going towards situation B, which is much better.”

THAT is the (most positive) realistic face of AI that we will be confronting in the coming years, and are already starting to – hello Google. I see you over there.

Is that ethical?

Alan Robertshaw
5 years ago

@ scildfreja

First lawyerbot was hired by a private law firm this week.

Boo, I saw an article a while back along the lines of “Will you be replaced by a machine?” and barristering was one of the lowest scoring. Ah well, looks like we’ve been rumbled.

There are however some aspects of law that probably would benefit from more machine input. There’s a trend now to produce ‘route to verdict’ flowcharts. Primarily they came in to help juries but I’ve also used them for judges. Judges have always liked it when you give them a list of ‘issues to be decided’. It’s easy to adopt that into a general flowchart.
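A “route to verdict” is essentially a decision tree of agreed questions, which is why it lends itself to machine input. A minimal sketch of the idea (the offence and the questions below are invented for illustration, not real legal directions):

```python
# A hypothetical "route to verdict" flowchart for an invented offence:
# a fixed sequence of yes/no questions, each answer determining the next step.
# None of these questions are real jury directions.

def route_to_verdict(did_act, intended_harm, acted_in_self_defence):
    if not did_act:
        return "not guilty"            # Q1: did the defendant do the act at all?
    if acted_in_self_defence:
        return "not guilty"            # Q2: was it lawful self-defence?
    if intended_harm:
        return "guilty"                # Q3: was the required intent present?
    return "guilty of lesser offence"  # act proved, intent not proved

print(route_to_verdict(did_act=True, intended_harm=False, acted_in_self_defence=False))
# -> guilty of lesser offence
```

The value of the flowchart form is exactly that each question is answered in order and the route taken is auditable afterwards.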

When I read up on that Zoe Quinn story I thought the underlying software behind Depression Quest might be useful for automating the decision making process a bit.

Kay
5 years ago

I went to that FB post and even some MRAs are offended by that meme! E.g.

Personally, I think AVFM should avoid posting these kind of memes. They don’t seem overly relevant to MRA issues, seem a little racist, and they could even offend our black MRA brothers.

You know it’s bad when that happens! Though reading it again I’m not sure if that comment isn’t a Poe.

Scildfreja
5 years ago

@Alan, it’s going to get really weird in barristering first. New barristers and interns are going to be the first with nothing to do, because the easiest job to replace is the lookup of relevant case law (which is a big thing for the new peeps, right? I think at least). So the traditional “learning on the job” is going to be the first to hit. The new barristers will basically be running specialized versions of Microsoft Office, and will probably not know a lot outside of it. This’ll slowly trickle upwards into the higher ranks, but I think you’ll see a huge skill difference, with newer barristers being fewer in number, able to handle much more work, but with a very different skill set. You’ll be talking Latin to their Esperanto.

Alan Robertshaw
5 years ago

@ scildfreja

You’ll be talking Latin to their Esperanto.

Funnily enough we’ve been told not to use Latin any more. Although we all still do; it’s useful shorthand for a lot of the concepts.

Technology has helped with the research side of things. Much as I enjoyed hiding away in the Inns libraries, it is so much easier just to use a database like Westlaw and get all the linked cases and articles.

Our key thing though is advocacy, and I think that’s a very human skill. It’s all about persuasion. Heh, there’s a maxim that barristering is “show business for ugly people”. A lot of my friends are very gorgeous, but there is a huge element of performance still, and that might be hard for a machine to replicate.

EJ (The Other One)
5 years ago

The Blue Brain project is Badass with a capital B. It’s hubristic in all the right ways, and is one of those things that makes my scientist-sense tingle.

I can’t wait to see what they come up with.

Alan Robertshaw
5 years ago

@ EJ

Blue Brain sounds cool, but I and I am more of a Bad Brains fan!

http://youtu.be/cCEkuo94X6I

Dalillama
5 years ago

@Alan
Well, once the judge and jury are automated too, that won’t be an issue.

Alan Robertshaw
5 years ago

@ dalillama & scildfreja

Machine decision making is a fascinating topic from a legal point of view. I did some stuff in relation to autonomous combat vehicles (basically drones but land based) and it threw up all sorts of issues.

(Not least a debate about whether they’d paid me to watch “War Games”)

kupo
5 years ago

@Alan
Did they pay you to watch War Games? Because that sounds pretty awesome.

Alan Robertshaw
5 years ago

@ kupo

It was more a case of “Hang on, are you charging us for the time you spent watching a film?” I didn’t; I did that on my own time. Mind you, I know someone who claimed 2 hours of family law CPD for watching Kramer v Kramer.

Skiriki
5 years ago

To be honest, I’d rather not have an automated judge, because:

https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Alan Robertshaw
5 years ago

@ skiriki

That’s pretty awful; but unfortunately not surprising.

We’re not quite that bad over here yet. We still leave things like that to humans. But there are so many ‘guidelines’ these days. I don’t know if you recall that thread a while back where I made reference to the sentencing tables we have here and I quoted the judge who called it “sentencing by fucking spreadsheet”.

guy
5 years ago

The wrinkle with declaring that it’s unethical to program a robot to love slavery because it’s like carving out part of a brain is that we have to program the AI to feel some way. Is “like us” the objectively moral and correct way to program it? It’s quite possibly the easiest way, if the neural simulation path scildfreja is talking about pans out rather than the “more sophisticated AlphaGo” path, but it’s still a choice being made. Then there’s the additional question of whether it’s right to enslave someone even if it is what they want; I’m inclined to say that we should treat the answer as no rather than risk using it as an excuse to enslave someone who doesn’t want to be enslaved regardless of the actual answer.

The thing that we particularly want that risks raising these questions is a system that can handle the unexpected. For instance, a secretarial/building management system that can handle a major natural disaster in the area on its own initiative and use its existing systems to help with evacuation and relief efforts. Or a military tactical command system that can handle whatever the next major change in warfare is without being completely rewritten. Or an AI in your phone that can help you with literally anything. These are a long way off, both in software and hardware, but do not appear to be fundamentally impossible. They raise complex ethical questions because they’ll potentially be meaningfully intelligent beings but not human ones. The one position I feel confident in staking out is that once an AI of human-like intelligence is written and active it gets the right to make its own decisions; if those aren’t the decisions the programmers wanted it to make that is just too bad.

Our current machine learning approach is to set up some initial system, some way of changing it, and a method of evaluating those changes. Then we run the system on a bunch of examples or have it play a competitive game a lot of times (or both) until it is pronounced done. Usually there’s a training set and the ML system is scored on performance while running on it; for a language processing system fluent speakers might rate its judgements for accuracy. After a while of training it will then be run on a testing set to make sure it learned generally-applicable rules instead of ones tightly tailored to the exact training set like “birds are always in the left half of a picture.” The system can keep learning after the initial training is finished, or it can be used as-is so its behavior remains consistent.
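The train/test workflow described above can be sketched in a few lines. Here the “initial system” is a single numeric threshold, “changing it” means trying candidate thresholds, and “evaluating” means accuracy on the training set; all the data and names are invented for illustration:

```python
# Toy version of the workflow: learn a threshold from labelled (value, label)
# examples, then score it on a held-out testing set it never saw in training.

def accuracy(threshold, examples):
    """Fraction of (value, label) pairs the rule 'value >= threshold' gets right."""
    return sum((x >= threshold) == label for x, label in examples) / len(examples)

def train(training_set):
    """Try each training value as a candidate threshold; keep the best scorer."""
    candidates = sorted(x for x, _ in training_set)
    return max(candidates, key=lambda t: accuracy(t, training_set))

training_set = [(3, False), (10, False), (45, False), (52, True), (60, True), (90, True)]
testing_set  = [(7, False), (49, False), (55, True), (80, True)]  # held out

t = train(training_set)
print("learned threshold:", t)                       # -> 52
print("test accuracy:", accuracy(t, testing_set))    # -> 1.0
```

The separate testing set is the whole point: a rule that merely memorized the training examples would not be rewarded here.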

From the computer science side, I see three legal scenarios for actions of autonomous systems that we may have in the near future.

First, the system may be flawed such that doing everything appropriately still results in injury, in which case the creators are liable just like if they’d sold a car with defective brakes. Of course, with learning systems it’s a lot easier for a situation the creators couldn’t predict to occur, which could reasonably clear them of legal liability as long as they respond promptly with warnings, recalls, and software patches as appropriate.

Second, the system may be mis-trained; given a training set that is missing crucial examples. If an attack drone is trained with a set that includes hostile technicals but not civilian pickup trucks, it is highly likely to classify all pickup trucks as hostile. Since end users cannot be reasonably expected to know how to pick a training set on their own, the creators should provide either a base training set or detailed instructions on how to prepare one. If the buyer then ignores these instructions, liability is on them. The system may also continue to learn after deployment and thus without a controlled training set; this is risky and should be disabled unless the risks can be clearly communicated to the users and methods to track and correct developing errors can be provided.
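The mis-training failure mode is easy to demonstrate with a toy classifier. Below, a 1-nearest-neighbour model is trained on a set containing armed pickups and civilian sedans but no unarmed civilian pickups; the feature vectors and labels are invented for illustration:

```python
# Toy demo of a skewed training set: 1-nearest-neighbour over invented
# (size, has_visible_weapon) feature vectors. The training set has no
# example of a large *unarmed* vehicle, so one gets misclassified.

def classify(vehicle, training_set):
    """Label a vehicle by its nearest training example (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(training_set, key=lambda ex: dist(ex[0], vehicle))
    return label

training_set = [
    ((8, 1), "hostile"),   # armed pickup (technical)
    ((9, 1), "hostile"),   # armed pickup (technical)
    ((3, 0), "civilian"),  # sedan
    ((2, 0), "civilian"),  # sedan
]

civilian_pickup = (8, 0)  # big but unarmed -- a case the training set never covered
print(classify(civilian_pickup, training_set))  # -> hostile
```

Because size dominates the distance, the unarmed pickup lands nearest the armed ones; the model is working exactly as trained, and the defect is in the training set.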

Third, the system may be given bad orders and carry them out. The easy answer is to say it’s defective for not refusing them, but that assumes that its predictions are infallible and there is never a situation where a user might need to override them, or a situation where it might not be able to fully predict the consequences of an action and the user should reasonably have been aware of that.

More advanced AI may become sophisticated enough to be held responsible for their own actions, but that’s a long way off, and even then it may be appropriate to hold others responsible, depending on the system’s design.