Leave it to the deep thinkers of the Men Going Their Own Way community to ask the tough questions that no one else dares to ask.
For example: In the future, when men are reduced to 5 percent of the population and forced to flee to Mars or Venus, will the sentient robots ally themselves with men or with women?
In the MGTOW subreddit, an aspiring futurist calling himself FalloutFan2 laments what he sees as the inevitable rise of a gynocentric dystopia in which men are more or less bred out of existence, except for a tiny minority that the women keep around for sperm and giggles.
“It’s sad how everything in modern society is already gearing towards a female-only future,” FalloutFan2 notes wistfully.
I think there will come a point where all men rebel against the system and form their own colony on Mars or Venus or whatever, where we’ll bring some female sex-bots for entertainment.
It’s rare that I find myself agreeing with anything I read on the MGTOW subreddit, but if you guys want to go start a MGTOW colony on Venus I will not only support you but help you pack.
Of course the women will just keep using their dildos because they are emotionless beings who care little for actual interaction, whether it be sentient robot or human.
I have to admit that “women are emotionless automatons who prefer dildos to robots because they don’t like talking to people” is a stereotype I have not encountered before.
Do you think the sentient robots will ally themselves with mankind and not womankind?
TRULY THE QUESTION OF OUR AGE
I think that due to their advanced intelligence they will not see a possible future where they could be on equal footing with women (due to the female’s natural inclination to boss everyone around), so they’ll settle for a society where they are equal to men.
I’m pretty sure sentient robot ladies would kick you guys to the curb as quickly as actual human females. Especially since two sentences ago the only sentient robots you were interested in were of the sexy sex slave variety.
Of course, if we do all end up on Mars, women will just eventually send nukes to destroy the colony regardless, out of bitter spite (if they figure out how to press the correct buttons that is, but typically some beta male nuclear scientist would have left blatant instructions beforehand that even a toddler would understand).
Ha ha ladies can’t even nuclear holocaust men right!
Women just can’t handle the fact that men just want to be happy.
Well that’s a bit of an ironic statement to find on the MGTOW subreddit, to put it mildly. I can’t think of a group of men less interested in being happy, or more inclined to wallow in their own bitterness, than MGTOWs.
They can’t comprehend that men are satisfied with an existence of philosophical stoicism, and not artificially superimposing different contrived existentialisms on reality. Either no one but them can be allowed to be happy, or no one period.
Unfortunately, due to this reason I don’t see any possible way it could work out. Women will just kill themselves off once all the men are gone anyway, cuz there’ll be no one to listen to their nagging bullshit.
Better get yourself an agent quick, FalloutFan2, because this sounds like the greatest science fiction novel never written!
My understanding is that Susan doesn’t really troll there. Given all the MRA bullshit coming out of Susan’s keyboard, they probably fit in really well over there.
@Axe:
Ironically, the symbol of the Campaign for Nuclear Disarmament, which you use as an avatar, is very similar to this one:
http://upload.wikimedia.org/wikipedia/commons/thumb/3/3f/3rd_Panzer_Division_logo_2.svg/239px-3rd_Panzer_Division_logo_2.svg.png
This is the insignia of 3. Panzer Division from 1941-1945. (It must be pointed out that they never used nuclear weapons, only tanks.)
There are only so many cool-looking geometric symbols in the world, so there’s always going to be coincidences like that.
@WWTH:
An assumption I find both annoying & hilarious, having had a number of friendships & relationships with East Asian women. TL;DR: That assumption is very much mistaken.
Worth reading Charles Stross’s Saturn’s Children – partly set in a floating city in the Venusian atmosphere. The central character is a sex-bot, so made as to be irresistibly attracted to men, but unfortunately constructed the very year Homo sapiens became extinct.
It’s true that Tolkien was anti-Nazi, but there’s a load of pretty overt racism in LOTR! Both elves and “men” (i.e. humans, but “men” is Tolkien’s invariable term when contrasting humans and elves) are defined in terms of how “high” their race is. Aragorn is the rightful king precisely because of his ancestry among both the intrinsically superior Numenorean humans and the Noldor (“High Elves”). Black men (who only appear as bit-part soldiers of Sauron) are described as “like half-trolls, with red tongues and white eyes” (I quote from memory), orcs are frequently “swarthy” and/or “slant-eyed”, etc. ad nauseam. Tolkien was a high Tory reactionary – like Churchill, who was also sincerely anti-Nazi.
My sister’s ex-husband told me once that it was “love at first sight” when he first saw my sister. I tried to explain to him that real love isn’t spontaneous like that, that’s just a desire to possess something. He didn’t hear me – the guy’s got conversation filters like you wouldn’t believe – and to this day believes that destiny will bring my sister and him back together.
It’s turbo creepy.
Nah, @Jack, it was an after-action report on Susan’s foray to WHTM. JB’s teasing us for being too sensitive and banning Susan. She’s using sarcasm to say that Susan’s comments were utterly mild and not ban-worthy, and we’re a bunch of precious children for crying about it.
@Cat Mara
I think you should finish that short story, it shows great potential and I am intrigued to see where it goes.
@EJ, about AIs doing code and stuff, hot off the press:
http://arstechnica.co.uk/information-technology/2016/10/google-ai-neural-network-cryptography/
So, in Susan’s world, it’s the fault of women that Trump is a gropey, slobbery pig.
A couple of thoughts on the sentient robots / “hard take-off” discussion:
1) I take issue with the claims that we’ve no idea how to define consciousness or intelligence. Intelligence is the ability to understand situations and respond to them in ways that are likely to achieve one’s goals. Consciousness is a process of maintaining a continuously updated model of one’s own perceptual, ratiocinative and motivational states – when a person is not doing this, they are unconscious.
2) We have had artificial intelligences with superhuman abilities (that is, abilities superior to any individual human) ever since our ancestors invented language (at least). The first ones were presumably small bands of people who made use of the knowledge held by elders, that collected by individuals who went on journeys to distant places, and that embodied in material and cultural artefacts. More sophisticated examples include larger linguistic-cultural communities, armies, bureaucracies, corporations, universities, disciplinary communities… To give an example of the latter, no one person understands the whole of modern mathematics, but the global community of mathematicians does.

The information-processing capacity of individual human brains has not increased in the last 100,000 years – indeed, since brains have got smaller, it may have diminished – but our collective intelligence is massively greater, and even as individuals, we have access to cognitive prostheses that multiply our abilities enormously (on this last point, see Andy Clark’s Supersizing the Mind). All this is generally missed because of the hyper-individualist bias of capitalist ideology.

How increasingly sophisticated electronic computers (and computer networks) will interact with the existing network of bio-cultural computers (people) is hard to predict, but it’s unlikely they will be constructed, or emerge, as self-contained, self-motivated entities outside existing human institutions and societies. There’s more to fear from the vast amounts of personal data and processing power already in the possession of governments and corporations than from uppity sexbots.
@Skiriki:
That’s badass awesome. Thanks for linking it. The arXiv paper is very interesting too.
@NickG:
Tolkien being hella racist and pretty misogynist was definitely a thing, even if he was admirably opposed to anti-semitism. I didn’t think about the whole “high”-raceness, though. That’s some deeply weird medieval stuff, even from a medievalist of his calibre. Thanks for pointing that out.
Attagirl.
@Paradoxy, heeee, that gif. She’s all “I’ll smile exactly as much as is socially required ’cause it’s getting in my way of putting this alcohol inside of me.”
@Simon Hales, Dial F For Frankenstein is fun! Also very much embedded in the time it was written when it comes to perceptions of AI. Still, a nice little gem!
@Nick,
That’s fair! We do have models. They aren’t hard models, is the problem – we can’t back them up with much other than conjecture or sociological study. Like, I can’t point to a neuron and say “That’s the neuron for ‘green-as-reference-to-healthy-plant.’” We will get there, at which point we can probably validate some of our semantic network models! I’m excited for that! But we aren’t there yet. Consciousness is an even bigger question mark. We believe that your definition is the right one, but we can’t validate it.
This said? No argument!
@Skiriki, EJ, that’s a neat little project! It’s utterly unsurprising that we don’t know what cryptographic method the network’s using in that one – that’s sort of how those things work. Those networks have an input layer and an output layer, and one or more hidden layers. The hidden layers do the heavy lifting of figuring stuff out through training – the “minibatch” process that the paper talks about. That’s all configuring the hidden layers, probs through some sort of backprop. So, that’s not surprising.
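For anyone wondering what “hidden layers configured through training via backprop” looks like in miniature, here’s a toy sketch in plain Python: a 2-2-1 network learning XOR, the classic function no single-layer network can represent. This is nothing like Google’s adversarial setup – the architecture, learning rate, and epoch count here are illustrative choices of mine – but it shows the same principle: the hidden weights end up encoding a solution nobody designed by hand, which is exactly why the learned cipher in the paper is opaque to inspection.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=20000, lr=1.0, seed=42):
    """Train a 2-2-1 network on XOR with plain stochastic backprop.

    The two middle neurons are the 'hidden layer'; training adjusts
    their weights into a representation nobody hand-designed.
    """
    rng = random.Random(seed)
    # input->hidden weights and biases, hidden->output weights and bias
    w_ih = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    b_h = [rng.uniform(-1, 1) for _ in range(2)]
    w_ho = [rng.uniform(-1, 1) for _ in range(2)]
    b_o = rng.uniform(-1, 1)

    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            # forward pass
            h = [sigmoid(w_ih[j][0] * x1 + w_ih[j][1] * x2 + b_h[j]) for j in range(2)]
            o = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
            # backward pass: gradient of squared error through the sigmoids
            d_o = (o - target) * o * (1 - o)
            d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
            # gradient-descent weight updates
            for j in range(2):
                w_ho[j] -= lr * d_o * h[j]
                w_ih[j][0] -= lr * d_h[j] * x1
                w_ih[j][1] -= lr * d_h[j] * x2
                b_h[j] -= lr * d_h[j]
            b_o -= lr * d_o

    def predict(x1, x2):
        h = [sigmoid(w_ih[j][0] * x1 + w_ih[j][1] * x2 + b_h[j]) for j in range(2)]
        return sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return predict

def mse(predict, data):
    """Mean squared error of the network over a dataset."""
    return sum((predict(x1, x2) - t) ** 2 for (x1, x2), t in data) / len(data)

predict = train_xor()
print("XOR predictions:", [round(predict(a, b), 2)
                           for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
```

After training, ask the network *why* it maps those inputs to those outputs and all you can point at is a pile of real-valued weights – the same inscrutability, scaled down a few million times.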
It’s also not the case that we can’t figure out their method – I’m sure we could! There’s no reason to believe that they’re using an NP+ method, they were only able to outsmart Eve.
(It also makes me sad that they named the intruder Eve. Eve the Eavesdropper is fine and all, but I much prefer Trudy the Intruder)
Neat little experiment though, and a sign of things to come – self-configuring cryptographic agents in our phones would make a very interesting feature!
I collected the posts (at RPG.net’s forum) a friend of mine made about his mid-00s adventures in evolutive programming. I think I should ask him if I could share them with y’all, because they were very illustrative of the problem of developing machine-thinkin’…
(I… I may also have developed RPG systems for dealing with distributed intelligent AI.)
(I may also have video game RPG systems and stories and structures for the same)
(halp)
I’m inclined to say that my definition is at least roughly what people in general (at least, native English speakers) mean by “consciousness” when they are not philosophising! When they are philosophising, they get hung up on dubiously valid questions like “How do I know you’re really conscious, not just an automaton without inner experiences?”.

You seem to be more interested in scientific questions of the “How does the brain generate consciousness?” variety – to which my answer is that it doesn’t – not on its own, anyway. Consciousness is a system-level emergent process of monitoring, anticipating, and acting, where the system includes not only the brain, but the rest of the body and the perceptible environment. Where perceptual and action links to the body and environment are attenuated, as in dreaming, at least some aspects of consciousness disappear or are attenuated as well. (Lucid dreaming might be urged as a counterexample, but the lucid dreamer has to keep constantly in mind that they are not linked to the external environment in the normal way, or lucidity is lost.)

Dennett’s Consciousness Explained is the best philosophical treatment I’ve read, although I think it needs more emphasis on motivation and action, and supplementing with the insights in Andy Clark’s Supersizing the Mind. Both these philosophers pay appropriately close attention to cognitive science.
@Nick G, my interest in consciousness is largely scientific, yeah. Like, what is consciousness? Cause I can build a system that does the things you describe – you actually describe a very basic process for a number of inference system architectures. Like, that’s a subsumption architecture, or a blackboard system. Or even your basic neural network. Does that mean I can build a conscious system?
Saying “consciousness is input + modeling + prediction + output” is all well and good, but it’s insufficient, otherwise we’re surrounded by conscious systems. The voice recognition system in your phone does that. The autocorrect algorithms in the browser I’m typing in do that. They’re ubiquitous.
Could it be that these very simple systems are conscious? Maybe! That’s the issue. We don’t know. I’m very familiar with Dennett (haven’t read Clark, though, thank you for the suggestion!) and, while he does a lovely job of breaking things down for the casual reader, the reality of the situation is much, much more complex.
Not sure if my point is clear in this reply or not! Hopefully it is. It’s a confusing topic.
(I’ve developed more than one RPG system for handling space flight, depending on where you sit on the realism vs fiction scale. I think nerds just end up creating systems to represent the things they know and care about.)
Ooh
how did you handle zero gravity
I did microgravity via abstraction: severely reduce movement and place a penalty on every physical action roll, but also give a bonus equal to one’s Microgravity Operation skill. The penalty was balanced so that if the MicroGrav Ops skill was maxed out, it would cancel out the penalty.
This means that doing everything in microgravity is really hard; but if you get used to it and train hard, you can be as good as you are in gravity.
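As a sketch of that mechanic (the actual penalty size and skill cap are numbers I’ve invented, since the post doesn’t give any), it might look like:

```python
# Assumed values -- the original system's numbers aren't stated.
MICROGRAV_PENALTY = 4
MAX_SKILL = 4

def microgravity_modifier(skill):
    """Net modifier to a physical action roll in microgravity.

    A flat penalty applies to everything, offset by the character's
    Microgravity Operations skill; a maxed skill cancels the penalty
    exactly, so a fully trained character rolls as if in gravity.
    """
    return min(skill, MAX_SKILL) - MICROGRAV_PENALTY

print(microgravity_modifier(0))  # untrained: -4
print(microgravity_modifier(2))  # partly trained: -2
print(microgravity_modifier(4))  # maxed out: 0
```

The nice property is that training buys back the penalty linearly, and there’s no bookkeeping beyond one subtraction per roll.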
oooh.
See, I did it with an impulse token. I used a map, and each character on that map had two tokens – one for where they are now, and the other for where they would be next turn. They couldn’t move their character, but they could move their impulse token. At the end of every round, each character slides to its impulse token, the token slides along the same line so that the two stay the same distance apart, and then you start over (after handling collisions and the like).
It was like ice-skating spaceships!
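The impulse-token bookkeeping is simple enough to sketch. This is my reading of the rule as described (with coordinates I’ve made up): the token is really just next turn’s position, so an untouched token produces straight-line Newtonian drift, and moving the token is applying thrust.

```python
def end_of_round(pos, impulse):
    """Resolve one round of impulse-token movement.

    The character slides to its impulse token, and the token then
    slides the same distance along the same line, so the gap between
    the two is preserved -- i.e. velocity carries over between rounds.
    """
    dx = impulse[0] - pos[0]
    dy = impulse[1] - pos[1]
    new_pos = impulse
    new_impulse = (impulse[0] + dx, impulse[1] + dy)
    return new_pos, new_impulse

# A ship at (0, 0) with its token at (2, 1) just keeps drifting:
pos, imp = (0, 0), (2, 1)
pos, imp = end_of_round(pos, imp)   # pos (2, 1), token (4, 2)
pos, imp = end_of_round(pos, imp)   # pos (4, 2), token (6, 3)
```

Nudging the token before resolution changes the drift line for every following round, which is exactly the ice-skating feel.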
I’ve got to say, I’m side eyeing Nick G pretty hard for defending a brocialist troll in one thread and then posting in this one like nothing happened.
Sorry (no I’m not), but I’m not all right with someone who thinks it’s okay to call me twat and defending the use of dog whistles sticking around without apologizing for it. You are who you defend.
At least have the decency to wait a few days before coming back and trying to be all innocuous. Even Gert did that!
Not saying anyone else has to be unforgiving and shun him but for the time being, I’m not in the mood to let it slide.
That’s a cool system for handling spaceship movement.
I’m of the opinion that an RPG doesn’t need tactical-boardgame-movement rules, because the narrative matters more. However, some people like them.
If I were doing a tactical-movement-on-a-map thing, I’d probably use real Newtonian movement with Cartesian vectors. It’s not that hard, and allows you to explore how acceleration really works and maneuver in cool ways. Crucially, it means that you get away from the “spacecraft has to point in the same direction as it moves” silliness.
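A minimal sketch of what I mean (semi-implicit Euler integration with made-up numbers; a real system would pick its own turn length and thrust limits):

```python
def step(position, velocity, thrust, dt=1.0):
    """One turn of Newtonian movement with Cartesian vectors.

    Thrust changes velocity, velocity changes position. Facing is
    deliberately absent: the ship can point one way, burn another,
    and drift a third -- no 'nose must match heading' silliness.
    """
    velocity = tuple(v + a * dt for v, a in zip(velocity, thrust))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity

# Drifting along +x while burning +y: the path curves smoothly.
pos, vel = (0.0, 0.0), (1.0, 0.0)
pos, vel = step(pos, vel, (0.0, 1.0))   # pos (1.0, 1.0), vel (1.0, 1.0)
pos, vel = step(pos, vel, (0.0, 0.0))   # pos (2.0, 2.0), vel unchanged
```

Cutting thrust leaves velocity untouched, so ships coast forever – which is the whole point of doing it Newtonian.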
I’m probably taking this too seriously.
wait he called you what now