So A Voice for Men has decided to use a picture of a disabled veteran … to attack black single mothers.
https://twitter.com/AVoiceforMen/status/731667484172935169
The meme was posted on the AVFM Facebook page as well.
The US treats its veterans horribly. But using the plight of wounded and disabled veterans in order to push a blatantly racist and misogynistic agenda is, well, calling it “cynical” would be a massive understatement.
And what exactly has the Men’s Rights movement ever done to help veterans?
Oh, that’s right: Nothing. Nada. Zilch.
@IP:
http://i.imgur.com/fs8xaMe.jpg
@EJ … would they …
…. would they keep each other charged?
have you just solved the world’s power crisis?
with robosex?
The Matrix : Reverse(x)d
Edit : OH GOD THIS PUN IS MAKING ME CRINGE INTO OBLIVION
@Victorious Parasol and @Sinkable John, you know what actually worries me about sexbots? Eventually we’re going to create a human-like intelligence and the first thing people are going to do is program it into a sexbot. Turns out that the Real Doll company has customers send their dolls back all the time with damage to their breasts, face and genitals. Because customers cut them up with knives. So I’m afraid we’ll make a genuine intelligence, and then people will want to program it both to experience genuine pain and to “enjoy” it the way that MRAs seem to think rape victims secretly want to be raped. And I guess the logic is better a doll than a human, but at the same time, if something is intelligent, I don’t think it’s ethical to program it to be a slave. So I worry about that sometimes.
@Dizzy
Let’s find comfort in the fact that we’re unlikely to create a sentient AI in the near future or even at all. The prospect of red pillers going nuts over some Matrix Sexbot Uprising is much more fun.
@Dizzy, the whole “is it ethical to build a conscious machine that wants to be a slave” is an excellent question, and reams of great science fiction and philosophy have been written about the subject. It’s a very complex knot, and it’s worthwhile to dig into it.
I don’t think there’s actually that much worry about an actual general intelligence being built into a sexbot that wants to be abused, though. One of the key things about a sexbot that the doods find appealing is that it isn’t a conscious intelligence.
General Intelligences are complex, multimodal and difficult to predict. Even if it was programmed to want abuse, that desire would show up in all sorts of places that would make it very alien to a human. It’d be creepy, not sexy. It would do things like seeking out abuse or outright violence to itself – a non-survival instinct, if you want. It just wouldn’t behave like a human. That sets off the creep alarm for most people.
They want something more like the artificial intelligence you find in video games, I think. Game AI is severely limited, and usually tailored to provide rewarding behaviour (for the player) and not intelligent behaviour. Intelligence in a sexbot would be unwelcome complexity – no exploitable or predictable flaws.
They want something that behaves predictably when interacted with in a certain way. They want it to be reliably erotic when interacted with in one way, reliably comforting in another, reliably maternal in yet another. The only way we know of consciousness emerging is through processes that are anything but reliable. They won’t want anything conscious. They don’t want a person, they want a game console with sex toys attached to it.
(Sorry if it seems like I’m saying “you’re wrong!” here. You’re not wrong at all – I just don’t think they want something with a generalized intelligence or consciousness. I’m being pedantic :s I think a lot about general intelligences, though!)
Don’t worry about the suffering intelligent sexbots. Worry about the suffering intelligent lawyer-bots. First lawyerbot was hired by a private law firm this week. Good luck, everyone.
@Scildfreja
A habit that I’ve picked up via science fiction is to differentiate AI into Machine Intelligence (MI), meaning a General Intelligence that is instantiated as a digital computing device of some sort (a term which, incidentally, I got from The Turing Option; the robot character objects to the term artificial intelligence as it implies that they are not truly a mind), and Pseudo-Intelligence (PI), which is like game AIs: a program that can impersonate a sentient being to a greater or lesser extent as long as all interactions stay within pre-established parameters. (This term I borrow from Neal Stephenson’s The Diamond Age.)
@Dalillama, hmmmm. Interesting distinction. The terminology is sort of in flux on the research side of things, though to be honest it’s always been in flux.
Artificial Intelligence, as a term, has been sort of dropped from general use. It’s been polluted by Hollywood to the extent that it’s not really useful in describing what we do. It’s also splintered from its sort-of-amalgamation in the ’60s. Back then there were AI and Cybernetics and Robotics; only Robotics has really survived as a meaningful term in the current day.
AI has a few terms for it now, depending on who you talk to and the specific field they’re looking at. IA is Intelligence Augmentation, which implies that the systems themselves aren’t intelligent, but they help humans behave more intelligently. ML is Machine Learning, which is part of AI and has sort of grown out on its own. Deep Learning is a subset of ML – Google’s Deep Dream, with the contiguous ultrahounds, is a good example. That uses neural networks, but that term has fallen out of fashion. There’s also Big Data and Big Compute, or Cloud Computing, which is all about processing vast quantities of data quickly in order to come up with intelligent answers to questions. Google itself is a great example of big data in action. ML also features Machine Vision, which is big enough to be considered its own subject, though classically it should be considered cybernetics.
All of these would be considered PI in your example – they can generate intelligent behaviour within their fields, but are useless outside. MI isn’t really being pursued in your definition – there are a few interesting simulation projects (the rat neurons wired to a robot is creeeeepy) but in general it’s just too big of a question to really tackle. Even in my lab – we’re trying to tackle some questions that are really close to the general intelligence problem – we limit ourselves to certain domains, and we’re not at all trying to simulate a brain, or a portion of it. It’s just too big and unknown to do anything meaningful yet.
So I guess most of the work these days is PI, whereas MI is considered more like building cloud-castles! Interesting terms, though. I’ve considered the same distinction, but never used those terms specifically – I usually just use IA instead. Neat!
@pitshade. Thanks for that link. Read the whole buzzfeed article. Fascinating expose on Elam’s hypocrisy and how he has used misogyny to hustle for $$$.
@Mish, your comment about the Immigration Minister arguing that refugees are illiterate yet somehow both taking people’s jobs and clogging up the unemployment line was hilarious to me. Especially when you added that the refugees aren’t even actually in Australia.
Wow!! Absolutely hilarious how someone can lie like that and be taken seriously by anyone.
@ Patricia Kayden
Thank David for posting about it initially.
@Scildfreja, I didn’t feel like that at all! I have this conversation all the time with a friend who is an Actual Philosopher and whose specialty is artificial intelligence, and neither of us has figured out a good, clear answer to it. It’s fun to talk about though, and I like getting other people’s opinions and ideas.
I do think we’re trying to move to a human-like created intelligence, so that’s what concerns me, and I also think that what people assume that’s going to look like is “a group of programmers literally programs something that’s very like a human brain”. I think, in theory, if we’re ever able to do that, it would then be possible to program it to feel however someone wants, and I don’t think it’s ethical to program something to love slavery in the same way it isn’t ethical for someone to cut out bits of my brain until I feel that way.
But if you’re right about the future being in general intelligence rather than artificial, which is looking pretty likely, then we wouldn’t be able to control it very well. In which case we wouldn’t be able to force it to enjoy pain or slavery, and the whole point is a bit moot.
I’m still going to worry about the sexbots, and the lawyerbots, but I definitely want to see how things play out first.
We aren’t really moving towards a human-like created intelligence – the definition of that is really vague, firstly, and second of all – why would we want one? Much more useful to have a non-sentient/non-conscious system that’s good at predicting stuff. There’s a slim but non-zero chance that something like that might emerge accidentally, but it certainly won’t be human, and it certainly won’t have feelings/emotions/reactions like a human. Those things rely on the precursors we evolved with, and it wouldn’t develop along the same lines. It wouldn’t think like us at all.
More along the lines of your last bit there, there’s a project called the Blue Brain project that’s working on simulating a set of neocortical columns. Slated to be done in 2020. Note: that’s not modeling a brain, just a patch of the neocortex.
Hm. If we had a general intelligence that feels/thinks like a human (brain simulation, basically), that doesn’t imply that we could make it feel however we wanted. If it’s a simulated brain it’s still going to be subject to the limitations of the brain’s structure – neocortex doin’ stuff, midbrain handling communication between it and the body, etc, etc. That imposes a lot of limits on what is and isn’t acceptable.
I do get what you’re saying, though, and I completely agree – it wouldn’t be ethical to edit a brain to feel the way we want it to feel. For the same reason we wouldn’t edit a person’s brain to make them feel a certain way, or engineer a baby to be a content slave. That’s obscene.
Let me toss you a more difficult question, though! I can build an AI system that can monitor your behaviour and, from that, figure out what sort of life you would be most successful in. What career you’d be good at and find rewarding, whether the relationship you just started is going to work out, etc. Cradle to grave, this thing could gently guide you – not coerce, just suggest – to be happier, healthier, more productive, and more fulfilled.
It wouldn’t involve giving away your data or giving up your privacy to anyone. It wouldn’t tell you that you have to do X, and it wouldn’t trick you into doing X through deception or through hiding information. All it would do is let you know at critical moments, “Hey, this isn’t going to turn out really well – I think you’re going to end up in situation A. If you start doing this other thing, you’ll probably start going towards situation B, which is much better.”
THAT is the (most positive) realistic face of AI that we will be confronting in the coming years, and are already starting to – hello Google. I see you over there.
Is that ethical?
@ scildfreja
Boo, I saw an article a while back along the lines of “Will you be replaced by a machine?” and barristering was one of the lowest scoring. Ah well, looks like we’ve been rumbled.
There are, however, some aspects of law that would probably benefit from more machine input. There’s a trend now to produce ‘route to verdict’ flowcharts. Primarily they came in to help juries, but I’ve also used them for judges. Judges have always liked it when you give them a list of ‘issues to be decided’. It’s easy to adapt that into a general flowchart.
When I read up on that Zoe Quinn story I thought the underlying software behind Depression Quest might be useful for automating the decision making process a bit.
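That sort of ‘route to verdict’ flowchart is also the kind of thing that’s trivial to express in code, which is part of why it looks like low-hanging fruit for automation. Here’s a minimal sketch in Python, with made-up questions loosely modelled on the elements of theft rather than any real jury direction; the structure is the point, not the content.

    # A hypothetical "route to verdict" expressed as a tiny decision flowchart.
    # Each node asks one issue to be decided and routes to the next question
    # or straight to a verdict.
    ROUTE_TO_VERDICT = {
        "q1": {"question": "Are you sure the defendant took the property?",
               "yes": "q2", "no": "NOT GUILTY"},
        "q2": {"question": "Are you sure the defendant was dishonest?",
               "yes": "q3", "no": "NOT GUILTY"},
        "q3": {"question": "Are you sure the defendant meant to keep it permanently?",
               "yes": "GUILTY", "no": "NOT GUILTY"},
    }

    def walk_route(answers, node="q1"):
        """Follow yes/no answers through the flowchart until a verdict is reached."""
        while node in ROUTE_TO_VERDICT:
            step = ROUTE_TO_VERDICT[node]
            node = step["yes"] if answers[step["question"]] else step["no"]
        return node

    # A jury sure of the taking and the dishonesty, but not of the intent to keep:
    answers = {
        "Are you sure the defendant took the property?": True,
        "Are you sure the defendant was dishonest?": True,
        "Are you sure the defendant meant to keep it permanently?": False,
    }
    print(walk_route(answers))  # NOT GUILTY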
I went to that FB post and even some MRAs are offended by that meme! E.g.
You know it’s bad when that happens! Though reading it again I’m not sure if that comment isn’t a Poe.
@Alan, it’s going to get really weird in barristering first. New barristers and interns are going to be the first with nothing to do, because the easiest job to replace is the lookup of relevant case law (which is a big thing for the new peeps, right? I think at least). So the traditional “learning on the job” is going to be the first to hit. The new barristers will basically be running specialized versions of Microsoft Office, and will probably not know a lot outside of it. This’ll slowly trickle upwards into the higher ranks, but I think you’ll see a huge skill difference, with newer barristers being fewer in number, able to handle much more work, but with a very different skill set. You’ll be talking Latin to their Esperanto.
@ scildfreja
Funnily enough we’ve been told not to use Latin any more. Although we all still do; it’s useful shorthand for a lot of the concepts.
Technology has helped with the research side of things. Much as I enjoyed hiding away in the Inns libraries, it is so much easier just to use a database like Westlaw and get all the linked cases and articles.
Our key thing though is advocacy, and I think that’s a very human skill. It’s all about persuasion. Heh, there’s a maxim that barristering is “show business for ugly people”. A lot of my friends are very gorgeous, but there is still a huge element of performance; and that might be hard for a machine to replicate.
The Blue Brain project is Badass with a capital B. It’s hubristic in all the right ways, and is one of those things that makes my scientist-sense tingle.
I can’t wait to see what they come up with.
@ EJ
Blue Brain sounds cool but I and I am more of a Bad Brains fan!
http://youtu.be/cCEkuo94X6I
@Alan
Well, once the judge and jury are automated too, that won’t be an issue.
@ dalillama & scildfreja
Machine decision making is a fascinating topic from a legal point of view. I did some stuff in relation to autonomous combat vehicles (basically drones but land based) and it threw up all sorts of issues.
(Not least a debate about whether they’d paid me to watch “War Games”)
@Alan
Did they pay you to watch War Games? Because that sounds pretty awesome.
@ kupo
It was more a case of “Hang on, are you charging us for the time you spent watching a film?” I didn’t; I did that on my own time. Mind you, I know someone who claimed 2 hours of family law CPD for watching Kramer v Kramer.
To be honest, I’d rather not have an automated judge, because:
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
@ skiriki
That’s pretty awful, but unfortunately not surprising.
We’re not quite that bad over here yet. We still leave things like that to humans. But there are so many ‘guidelines’ these days. I don’t know if you recall that thread a while back where I made reference to the sentencing tables we have here and quoted the judge who called it “sentencing by fucking spreadsheet”.
The wrinkle with declaring that it’s unethical to program a robot to love slavery because it’s like carving out part of a brain is that we have to program the AI to feel some way. Is “like us” the objectively moral and correct way to program it? It’s quite possibly the easiest way, if the neural simulation path scildfreja is talking about pans out rather than the “more sophisticated AlphaGo” path, but it’s still a choice being made. Then there’s the additional question of whether it’s right to enslave someone even if it is what they want; I’m inclined to say we should treat the answer as no regardless of the actual answer, rather than risk it being used as an excuse to enslave someone who doesn’t want to be enslaved.
The thing that we particularly want that risks raising these questions is a system that can handle the unexpected. For instance, a secretarial/building management system that can handle a major natural disaster in the area on its own initiative and use its existing systems to help with evacuation and relief efforts. Or a military tactical command system that can handle whatever the next major change in warfare is without being completely rewritten. Or an AI in your phone that can help you with literally anything. These are a long way off, both in software and hardware, but do not appear to be fundamentally impossible. They raise complex ethical questions because they’ll potentially be meaningfully intelligent beings but not human ones. The one position I feel confident in staking out is that once an AI of human-like intelligence is written and active it gets the right to make its own decisions; if those aren’t the decisions the programmers wanted it to make that is just too bad.
Our current machine learning approach is to set up some initial system, some way of changing it, and a method of evaluating those changes. Then we run the system on a bunch of examples or have it play a competitive game a lot of times (or both) until it is pronounced done. Usually there’s a training set and the ML system is scored on performance while running on it; for a language processing system fluent speakers might rate its judgements for accuracy. After a while of training it will then be run on a testing set to make sure it learned generally-applicable rules instead of ones tightly tailored to the exact training set like “birds are always in the left half of a picture.” The system can keep learning after the initial training is finished, or it can be used as-is so its behavior remains consistent.
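To make that loop concrete, here’s a tiny sketch in Python. I’m assuming scikit-learn and its built-in iris dataset purely for illustration (my choice of library and data, not anything anyone here mentioned). The key bit is that the model is scored on held-out examples it never saw during training, which is how you catch the “birds are always in the left half of a picture” kind of rule.

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    X, y = load_iris(return_X_y=True)

    # Hold back a quarter of the examples as a testing set the model never trains on.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)  # "training": adjust the model against known examples

    # "testing": check whether what it learned generalises to unseen examples.
    print(accuracy_score(y_test, model.predict(X_test)))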
From the computer science side, I see three legal scenarios for actions of autonomous systems that we may have in the near future.
First, the system may be flawed such that doing everything appropriately still results in injury, in which case the creators are liable just like if they’d sold a car with defective brakes. Of course, with learning systems it’s a lot easier for a situation the creators couldn’t predict to occur, which could reasonably clear them of legal liability as long as they respond promptly with warnings, recalls, and software patches as appropriate.
Second, the system may be mis-trained: given a training set that is missing crucial examples (there’s a toy sketch of this at the end of this comment). If an attack drone is trained with a set that includes hostile technicals but not civilian pickup trucks, it is highly likely to classify all pickup trucks as hostile. Since end users cannot reasonably be expected to know how to pick a training set on their own, the creators should provide either a base training set or detailed instructions on how to prepare one. If the buyer then ignores those instructions, liability is on them. The system may also continue to learn after deployment, and thus without a controlled training set; this is risky and should be disabled unless the risks can be clearly communicated to the users and methods to track and correct developing errors can be provided.
Third, the system may be given bad orders and carry them out. The easy answer is to say it’s defective for not refusing them, but that assumes that its predictions are infallible and there is never a situation where a user might need to override them, or a situation where it might not be able to fully predict the consequences of an action and the user should reasonably have been aware of that.
More advanced AIs may become sophisticated enough to be held responsible for their own actions, but that’s a long way off, and depending on their design it may still be appropriate to hold others responsible.
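And the promised toy sketch of the mis-training scenario, again leaning on scikit-learn and the iris data as stand-ins (nothing to do with real drone classifiers): the model is trained without ever seeing class 2, so when it finally meets it, all it can do is confidently shove every example into one of the classes it does know.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)

    # Train only on classes 0 and 1; class 2 is the "civilian pickup truck"
    # the system never sees during training.
    known = y < 2
    model = LogisticRegression(max_iter=1000).fit(X[known], y[known])

    unseen = X[y == 2]           # examples from the class missing from training
    preds = model.predict(unseen)
    print(np.bincount(preds))    # every unseen example gets forced into class 0 or 1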