[GUEST POST] Ann Leckie on Why The Best AIs Strike a Balance Between Logic and Emotion

Ann Leckie has published short stories in Subterranean Magazine, Strange Horizons, and Realms of Fantasy. Her story “Hesperia and Glory” was reprinted in Science Fiction: The Best of the Year, 2007 Edition edited by Rich Horton. Ann has worked as a waitress, a receptionist, a rodman on a land-surveying crew, and a recording engineer. She lives in St. Louis, Missouri, with her husband, children, and cats. You can find her at www.annleckie.com and on Twitter at @Ann_Leckie.

The Best AIs Strike a Balance Between Logic and Emotion

by Ann Leckie

If science fiction has taught us nothing else, it’s taught us that emotion and cognition are separate — maybe even mutually exclusive — things. In order to be really, truly super smart, we need to remove or repress those silly, irrational emotions of ours. If we can do that, if we can free ourselves of emotion and sentiment, we can be as brilliantly logical as Mr. Spock.

Of course, once you’re utterly without emotion — like, say, turned into a Cyberman — like as not you’ll become so supremely rational that you’ll no longer have any real morals, or any check on utterly ruthless behavior (logic, of course, dictates that utter ruthlessness is the best way to approach life’s problems, or even just to fill the empty hours). Once this happens, you can only be defeated by the Power of Love. Or possibly a sufficiently clever linguistic paradox. Depends on the writer.

Artificial intelligences are often portrayed as emotionless computers, vulnerable only to the aforementioned sufficiently clever linguistic paradox, or perhaps a smartly wielded screwdriver. But there are also plenty of computers with emotions in science fiction — sometimes played for laughs, like Eddie the Shipboard Computer on the Heart of Gold (or Marvin, for that matter). And sometimes they’re treated very seriously indeed — consider HAL from 2001: A Space Odyssey. And I’ve found that it’s the AIs with emotions that stick hardest in my memory, that work best for me in a story. So in my novel Ancillary Justice, the artificial intelligences that are Radchaai warships aren’t emotionless. I knew they weren’t when I first thought of them — they couldn’t be, for the story to happen the way it does.

The troop carrier Justice of Toren, who narrates Ancillary Justice, can see a lot about its crew. Like, heart rate, temperature, and…what else? I knew that, for instance, stress was associated with adrenaline. If Justice of Toren could see its crew’s hormone levels, would it know what they were feeling, from moment to moment? I started really wondering about how emotion worked — not just for my human characters, but for the AIs I had designed.

Emotions aren’t something nebulous and mystical; they’re physical reactions, the result of things your brain does, of hormones your body releases under certain conditions. There’s a reason people talk about strong emotion being “visceral.” Your gut is, in fact, physically affected by stress and fear. That horrible, punched-in-the-gut feeling is not, in fact, all in your head.

This was extremely useful to me — it meant that if a ship like Justice of Toren could see very fine-grained medical data coming from its crew, it could potentially see their emotional states very accurately. Which meant that in certain parts of the book, I could pull off a sort of first-person omniscient that would not only be extremely useful, but would also be a heck of a lot of fun.
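To make that concrete, here’s a toy sketch, in Python, of the kind of inference I’m describing: raw physiological telemetry in, a coarse emotional label out. Every field name and threshold below is invented purely for illustration — nothing like this code appears in the novel, and real affect detection would be far messier.

```python
# A toy, purely illustrative sketch: given fine-grained physiological
# telemetry from one crew member, guess a coarse emotional state.
# All field names and thresholds are invented for this example.

from dataclasses import dataclass

@dataclass
class Telemetry:
    heart_rate_bpm: float  # beats per minute
    adrenaline: float      # normalized 0.0 to 1.0
    cortisol: float        # normalized 0.0 to 1.0

def guess_emotion(t: Telemetry) -> str:
    """Map raw body data to a coarse emotional label."""
    if t.adrenaline > 0.7 and t.heart_rate_bpm > 110:
        # Acute arousal: the fight-or-flight signature.
        return "afraid or angry"
    if t.cortisol > 0.6:
        # Elevated stress hormones without an acute spike.
        return "anxious"
    if t.heart_rate_bpm < 70 and t.adrenaline < 0.2:
        return "calm"
    return "uncertain"

print(guess_emotion(Telemetry(120, 0.8, 0.5)))  # afraid or angry
```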

But it also got me thinking. Emotions are very physical — those responses are very, very basic, and they’ve probably been around a long, long time. If you didn’t have a biological body, would you have emotions? And given how basic the systems that respond to stress and danger and pleasure are, and how long they’ve surely been around, they must be important and useful in some way. What would a being look like that didn’t have them?

And it turns out that people with particular sorts of brain damage lose the ability to feel emotions. So, is the clichéd sfnal narrative right? Do these people actually become super-geniuses, freed from all the constraints of sentiment? Or do they become ruthless psychopaths?

Well, mostly, it seems, they become really bad at making certain kinds of decisions. You don’t need the Power of Love to defeat the Cyberman — you just need to ask it whether it wants to order spaghetti or curry for supper, and then walk away as it compares the various pros and cons. Spaghetti might be considered healthier, but then again a vegetarian curry might also be healthy. The curry might cost more…but then, you’ll get more of it than the spaghetti. The Italian place is closer, but the slightly longer walk to the Indian place is probably good for you…

We don’t think much about how many small decisions are made easy by emotions. Hey, curry sounds great! Let’s go! And that’s not an accident. Emotion is actually a pretty important part of how we think. It’s not separate from cognition; it’s part of the whole system. Obviously you wouldn’t want to make every single decision on an emotional basis — in some cases that would be a recipe for disaster. But neither do you want to make every single decision on a “logical” basis. Who wants to spend half the evening pondering the relative cost, ounce for ounce, of spaghetti and curry?
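Here’s a toy sketch of that point in Python. A purely deliberative pass over two near-identical dinner options scores them to a dead heat; a small affective nudge settles the matter instantly. The options, criteria, and numbers are all invented for illustration.

```python
# A toy sketch of emotion as a cheap tiebreaker in decision-making.
# All options, criteria, and numbers are invented for illustration.

options = {
    "spaghetti": {"health": 0.6, "cost": 0.7, "distance": 0.8},
    "curry":     {"health": 0.7, "cost": 0.6, "distance": 0.8},
}

def deliberate_score(attrs: dict) -> float:
    """The 'pure logic' pass: weigh every criterion equally."""
    return sum(attrs.values()) / len(attrs)

scores = {name: deliberate_score(a) for name, a in options.items()}
print(scores)  # both score ~0.7: a dead heat, so no decision

# The affective nudge ("hey, curry sounds great!") breaks the tie:
gut_feeling = {"spaghetti": 0.0, "curry": 0.1}
choice = max(options, key=lambda name: scores[name] + gut_feeling[name])
print(choice)  # curry
```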

The importance of emotion in decision-making becomes even starker when you’re talking about fear. When a threat appears, you don’t want to spend five or six minutes wondering if there’s a problem. You want to be ready to defend yourself as quickly as possible. The case of an actual woman who apparently feels no fear is instructive — her real problem is detecting danger. She may not be afraid when she’s actually in danger, and may handle crises with admirable calm — but she wouldn’t have ended up in a lot of those situations to begin with if she’d had fear to warn her off.

Or let’s consider Capgras delusion. I have actually had a relative who had this, and I can tell you it’ll freak you out. In his case, my relative was sure that his wife of forty-some-odd years was not, in fact, his wife, but an imposter who had taken her place. Why do I bring this up? Because current thinking is that part of what goes into Capgras is damage to a part of the brain that produces an emotional response to seeing someone you know. Without that response, actually recognizing someone becomes really, really difficult, even if you can match the face or some other set of physical characteristics.

The more I think about things like that, the less convincing I find statements about pure rationality being separate from or the opposite of emotion. And the more impatient I get with the unexamined assumption that emotion stands in the way of clear thought and intelligence, or the frequent science fictional idea that the removal or suppression of emotion will produce utter, brilliant, conscienceless logic. It’s more likely to produce a really indecisive person who has a hard time recognizing friends and family, and is constantly getting into trouble most folks would avoid.

So, I don’t think that the path to ultimate brilliance actually involves losing or suppressing emotions — or its science fictional opposite, for that matter (WARNING — TV Tropes link!) — so much as achieving a good balance between feeling and thinking things through logically. I strongly suspect that if we’re ever going to build a really advanced AI, it’s going to have to have something analogous to emotion to handle certain kinds of decisions. It wouldn’t be identical to human emotion — it couldn’t be, really, because our emotions are so much a product of how our bodies work. But I suspect it will need something that works in a similar way.

The AIs in Ancillary Justice aren’t emotionless at all. On the contrary, it’s emotions that get Breq/Justice of Toren into trouble to begin with. But investigating the physiology of human emotion gave me a better handle on why I had assumed, from the start, that Radchaai AIs would feel — and helped me see how that would work, and what that would mean for my story. The intense, automatic nature of emotions means they can be used to manipulate — but they can also be the impetus for necessary change. After all, if you’re not angry about something, why would you bother to do anything about it?

Comments

  1. This is another case in which SF elevated an (often gendered) cliché into a truism. Reflex emotions (fight-or-flight, aka “the Four Fs”) are processed in the thalamus. All complex emotions are processed in the cortex, are indistinguishable from thoughts and, as you point out, enable us to arrive at decisions and make choices. As I discussed in my Biology of Star Trek, Spock & Vulcans aren’t particularly rational – they just project a pompous superego persona – and, in any case, logic means something different as a scholastic system versus parsing permutations in a real-life situation.

  2. Without spoilers, yeah, the nature of the Radchaai AIs makes it mandatory that they’re going to have to deal with emotion as well as reason.

    Where in the writing process did you sit down and design the technology that makes Breq possible, Ann? Did it flow naturally from the story or was it one of the ideas you had going in?
