[Do you have an idea for a future Mind Meld? Let us know!]

Recently, a group of futurists predicted that artificial intelligence is a deadlier threat to humanity than any sort of natural disaster, nuclear war, or large objects falling from the sky. In an article by Ross Anderson at AeonMagazine.com, David Dewey, a research fellow at the Future of Humanity Institute, says of the human brain and probability: “If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.” He stated that “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels.” He also talked about how programming an AI with empathy wouldn’t be easy, and that the steps it might take to “maximize human happiness”, for example, are not things we would consider acceptable, but to an AI they would seem exceedingly efficient.

Of course, this leads into much more complex discussion, and the possibilities with AI are vast and varied.

We asked this week’s panelists…

Q: What is your take on the future of humans and AI? Is it positive, negative, both?

Here’s what they said…

Larry Niven
Larry Niven is the author of Ringworld, the co-author of The Mote in God’s Eye and Lucifer’s Hammer, the editor of the Man-Kzin Wars series, and has written or co-authored over 50 books. He is a five-time winner of the Hugo Award, along with a Nebula and numerous others.

  • If you make an intelligent being, you must give it civil rights.
  • On the other hand, you cannot give the vote to a computer program. “One man, one vote” — and how many copies of the program would you need to win an election? Programs can merge or can generate subprograms.
  • Machines can certainly become a part of a human. Our future may see a merging of humans and machines.
  • Or all of the above. Keep reading science fiction. We always get there first.

Wesley Chu
Wesley Chu’s dreams of becoming an NFL punter were quickly dashed when he learned at an early age that he was terrible at every sort of ball sport. Actually, he was bad at every single sport in the known universe that didn’t involve hitting someone or doing backflips. Thus, he did what all ex-gymnasts/kung-fu masters do: go into Information Technology while moonlighting as an actor. Since then, he has been following his new dream (even less attainable than NFL punter) of writing books – Science Fiction/Fantasy books with lots of action and no round objects. His first novel, The Lives of Tao, comes out on April 30th from Angry Robot Books.

First of all, I’m going to start off by saying that the Aeon article was probably one of the most intelligent pieces I’ve ever read on the Internet. Yes, I know that sounds like an oxymoron, but the Internet does have great uses besides Fantasy Football scores, shopping, and porn.

I should offer a few disclosures. I work for the other side, the Artificial Intelligence Death-to-Fleshies team, since I’m part of the Angry Robot army. Damn straight. Our mechanized overlords have red eyes, lasers, and they’re very ill-tempered. Chickens coming home to roost, y’all. My second disclosure is that those who wrote the Aeon article, and probably everyone else contributing to this mind meld, are probably smarter than me on this topic and probably most other topics, so you can choose to stop reading right about now if sheer intelligence is your measuring stick for article readability. But hell, they might be smarter, but can they do the one-inch punch or moonwalk forward? Yeah, I didn’t think so. And my last disclosure: I recognize that this is a very complex topic, so I am only addressing a tiny slice, a smidgen, of the bigger issue at hand.

Now, to answer the question: the future of humans and AI isn’t black and white, so its effects on humanity will be gray, both positive and negative. Not only do we have AIs that can drive our cars for us (grandma thanks you), but we have robo-dogs that can hurl slabs of concrete (wurh-oh), so it’s a mixed bag. And who knows? Sometime in the future, humanity might be forced to retreat into the dark crevices of the Earth to stay warm due to the advancing onslaught of the army of mechanized death. Now, here’s my take on these futurists who are losing sleep over Skynet going to war on us. Brother, I think you should smoke some weed and relax. Hakuna matata, man, hakuna matata. Is it really worth losing sleep over?

Yes, future apocalyptic extinction sucks and sounds pretty unpleasant, but if I may, when was the last time any futurist’s prediction actually came true? They predicted flying cars in every family’s garage back in the 1920s. Nearly a hundred years later, cars aren’t drastically different from what they were in the days of the Model T. We still don’t have a moon base, and my cleaning lady is composed of skin, bones, and blood, though I admit she sounds like a robot when she talks. Hell, we can’t even get a guy to Mars, let alone the next solar system. We can’t even cure the common cold.

Basically, the track record for futurists kind of sucks. And the further out the predictions go, the less likely any of them will hit their mark. Don’t get me wrong. Many futurists predicted the coming of the Internet, but none of them hit the mark on the truly profound impact the Internet has had on our civilization, and that’s only been the last hundred or so years (assuming the credit goes to Mark Twain for his telectroscope). In my opinion, the Internet is a small invention, because at its core it is nothing more than computers that can talk to each other. It’s small technology with humongous global impact that no one foresaw. Small change; short time frame; huge variables. Now, try to predict a long game with even greater impacts; it’s a pointless exercise.

Now, predictions are fine, but looking through a set of lenses that far into the future is as reliable as trying to predict whether your five-minute-old son has the chops to become a pro golfer. In both cases, while it’s great to dream, it’s a pretty futile effort. I guess what I’m really saying is, I really don’t give a shit. Not about humanity’s future; I give a shit about that. It’s the predictions I don’t care about. I believe there’s a 99% chance that every prediction futurists make for the next thousand years will not actually happen. Do I have facts to back that up? Well, no, but based on past predictions from the last thousand or so years, I’d be willing to bet a week’s salary (which isn’t as high-rolling as it sounds) that I’m right. It’s like trying to hit a nine-ball combo in pool. Just about impossible.

And then there’s one more thing that isn’t taken into consideration: what about the human variable? Yes, AI will become more advanced in the future, but so will humanity. As a species, we are more knowledgeable now by leaps and bounds than we were five hundred years ago. Who’s to say we won’t be equally more intelligent five hundred years from now? I feel like these nightmare scenarios have AI growing exponentially while keeping humanity’s innovation and creativity at a standstill, or at best a linear increase. I am confident that by the time we reach levels where we might even be close to having a threatening AI, humans will have innovated hundreds of fail-safes and ideas that our futurists today haven’t even considered. So in the end, it all balances out.

Or maybe not; who knows? I hear that Marco and Lee over at Angry Robot have some truly crazy, diabolical plans up their sleeves, and humanity should expect to be enslaved sometime over the next few weeks in the form of a book release, or maybe it’s just a ruse to get mankind’s panties up in a tizzy so people lose sleep over it. In either case, hakuna matata, my brother, and relax. Why don’t we stop worrying about the Cylons and work on getting me my flying car first, eh? Or a sexbot. Hell, I’d even settle for a vacuum cleaner that won’t run over electric cords.

Guy Hasson
Guy Hasson is an SF author and a filmmaker. His latest books are Secret Thoughts from Apex Books and The Emoticon Generation from Infinity Plus. His 45-minute epic SF film, The Indestructibles, which he wrote and directed, will be released on the web in a few weeks, and his start-up New Worlds Comics will go live in July.

It is so easy to say “This new technology can kill us.” It’s easy to say, because it’s always true. About practically any technology. These types of statements can’t really be disproven. Put any expert on the stand and ask him: Do you know for a certainty, a 100% certainty, that this technology will not kill us? No, an honest person will have to admit, no one does. The prosecutor will continue: Is there more than a 0% chance that this technology will kill us? Yes, the honest person will admit again, there is.

I believe it was Shakespeare who said, “How many technologies can kill me? Let me count the ways.” Let’s look back at the last three decades and count the ways: a nuclear war, dirty nukes, biological weapons, chemical weapons, the hole in the ozone layer, a true star wars above our heads (including lasers from space), bringing back the dinosaurs, global warming and the seas rising, nuclear plants melting down, terrorist use of technology, Y2K, nanotechnology, artificially creating tiny black holes at CERN, and I’m sure I’m forgetting a couple. Sure, some technologies can wipe out all of humanity while others can only wipe out the entire city you live in. It’s not the title of ‘biggest threat ever’ that we’re going for, surely.

What’s my point? My point is that even though something is true, the headline you get from it is false. Any idiot can scaremonger and our ears will always perk up. But how would you say something is a deadly threat without scaremongering? How do you talk realistically and seriously about something like this? Answer: By looking at the past and how these things have happened or failed to happen before and by looking at what an AI really is.

Let’s start with the nature of AI. An AI being would not be like us. It would not have been formed through millions and millions of years of evolution. We, and all animals around us, fight to survive, and, having survived, fight for a more prominent status than those next to us. Everything we do derives from this. Everything we see acts like this. But AIs will act the way they are programmed (whether by us or by other AIs), without evolution-based motives or evolution-based methods of acting. An AI could be 100 times smarter and more aware than us, but spend its entire lifespan finding new ways to make paper airplanes or helping cancer patients or blowing kisses in the air. How much damage it does depends on what it’s been programmed to do, whether it knows how to do it, and whether it will have the capability of doing it. To create the dangerous AI the ‘experts’ are talking about, you’d need someone very single-minded toward this purpose who also has both the ability and the means to create one. The chances of this, while not zero, are very close to zero.

Now suppose an AI exists that wants to destroy the Earth and has that capability. Destroying the Earth doesn’t take a day or two days or even ten. The Earth is so massive it’ll take years and years to destroy. History teaches us that no such historical force exists in a vacuum, and a force that acts toward everyone’s destruction will be met with other forces striving to stop it. There will be time to stop the AI, both by humans and by other AIs. In fact, there’ll be time for more than one or even three attempts. Surely the weapons and AI technology will exist to at least give us a fighting chance.

It’s really easy to imagine that something will happen immediately. That’s how our minds work. But that’s not how history works. Look at another disaster we’ve been warned about, along with its solution: waste and recycling. Back in the late eighties, the popular message in the US was: if we don’t recycle now, we will run out of everything and destroy the Earth. But twenty years have passed, and we’ve only just begun to recycle semi-seriously. The Earth has not yet been destroyed by us because these things take time. Historically, deadlines for disasters have proven false.

Bad things happen slowly (just like good things). When disasters begin, they are not yet the end of the world. And every historical force creates the rise of opposite historical forces. Add that to the fact that the likelihood of the occasion arising in the first place is small, and you’ve got a completely different look at this ‘threat’.

My response to “this technology can kill us”? Been there, done that.

Me, I’m waiting for nanotechnology. Now that technology, gone rogue, can totally annihilate the Earth.

Karl Schroeder
Having wracked his brains to be innovative in the novels Ventus, Permanence, and Lady of Mazes, Karl Schroeder decided to relax for a while and write pirate stories, starting with Sun of Suns and continuing with Queen of Candesce, Pirate Sun, The Sunless Countries, and the final book in the series, Ashes of Candesce. Of course, these novels are pirate stories set in a world without gravity — but hey, swashes are still buckled, swords unsheathed, and boarding parties formed in the far-future world of Virga. He’s currently thinking about how to hammer science fiction into some new shapes based on current research into cognitive science. To that end, in 2011 he obtained his Master’s degree in Strategic Foresight and Innovation from OCAD University in Toronto. When he occasionally pokes his head out of the trenches, he blogs about this stuff at www.kschroeder.com.

You can see exactly where the wheels fall off this argument: when Ross Anderson says that “An AI might want to do certain things”. Right there he’s indicated the category error they’ve made: they’ve assumed that desire–specifically, desires similar to those of living things–will automatically accompany intelligence in an artificial being. But desire predates thought by hundreds of millions of years. Life is built around desire; it’s the single most important motivating faculty we have, and all living things share it. Even in those without brains, desire manifests in their embodied physical growth patterns and reactions to stimuli. We want to live, we want to reproduce. Desire is so fundamental and universal in everything organic that many people cannot imagine intelligence or awareness without it.

In fact, I partly agree with this picture. I think intelligence only exists, or has a function, in service of norms, aims, goals…whatever you want to call them. Consciousness is the passenger, and our biological needs are the driver. It may be impossible to create an artificial mind without endowing that mind with urges that keep it going–curiosity, truthfulness, the need to express itself, etc. However, this does not mean that an artificial intelligence has to possess all the drives we have. Ours evolved as a suite; an AI’s can be designed, and we can leave out affects and urges we don’t want it to have (such as anger, or ambition). It can even be designed to have desires that no biological organism possesses–ones that would be suicidal in any living entity. It’s easy to imagine such a being; let me give you an example. Why should an artificial intelligence identify itself as itself? Our self-identification (or, more properly, the brain’s identification with the rest of the body) is an evolved condition, not a logical necessity; the same is true of our drive for self-preservation. What if I build an AI that identifies itself with and as me? There’s no reason why it shouldn’t; even if its subsequent rational thought processes led it to conclude that this wasn’t true, why should we design its desires such that it would want to act on that knowledge? Why shouldn’t the most important thing for it be our well-being, as defined by us?

Anderson et al. have suffered a failure of imagination. They’ve succeeded in imagining artificial intelligence but failed to imagine the more important innovation, which would be Artificial Desire. Once you’ve pictured AD, it becomes immediately obvious that the ‘problem’ of autonomous AI is no problem at all. –Or, rather, an autonomous self-interested AI is a completely avoidable design failure.

Our priorities are clear: create AD first; AI rides on the back of that or else we don’t build it at all.

Madeline Ashby
Madeline Ashby is a science fiction writer, strategic foresight consultant, anime fan, and immigrant. Her debut novel, vN, is available now. Her non-fiction has appeared at BoingBoing, io9, WorldChanging, Creators Project, and Tor.com. iD, her second book in the Machine Dynasty, will be out in June from Angry Robot Books.

I don’t think there’s a single future for humans and AI: there are multiple futures, and they are dependent on your position as an individual relative to location, income, privilege, need, and so on. When your boat decides that the storm is too big based on searchable NOAA data and steers you back to your tiny little port in Maine, and you’ve got no catch to show your commercial fishing conglomerate, well, that’s gonna suck. But it’s gonna suck because your income is tied to (a) a finite resource and (b) a corporation that doesn’t give a damn about you and the algorithm you took on just to lower the insurance premiums on your boat. Welcome to the life of a private contractor. You pays your money, you takes your chance.

Alternatively, it could work out great — if your goals and the AI’s goals are the same. Say you’re dying of cancer. You’ve tried multiple treatments that have drained you of both financial and emotional reserves. If the AI has a primary goal to “limit and where possible ameliorate human suffering,” it might indicate an end to treatment. Your family will disagree. Your doctor will probably disagree. Your insurance provider will be annoyed at the loss of revenue. But the little chain of code doing the cost/benefit analysis? It doesn’t feel for you. It just knows you’re in pain, and wants the pain to stop.

It’s in medical technology that this could actually go off the rails, especially as print-on-demand organs, genome mapping, and other designer features become the norm. Say you obtain a first-trimester blood screen. The screen identifies Trisomy 21. Your baby has Down syndrome. Now, this can go two ways: if the AI responsible for communicating test results to you and coordinating your next appointment is in Washington State, it’ll tell you about the results. If you’re in North Dakota, though, you might never find out — because the AI will already know the statistics about Down syndrome and abortion, and North Dakota has banned abortions after a heartbeat is detected.

Like anything else, the quality and behaviour of AI depends on who is designing, funding, and retailing the AI. Garbage in, garbage out. You get what you pay for. You reap what you sow.

Gregg Rosenblum
Gregg Rosenblum is the author of the young adult science fiction novel Revolution 19, book one of a trilogy published by HarperTeen. He works at Harvard, where he wages epic battles against technology as an editor/webmaster/communications/quasi-IT guy. Follow him on Twitter at @GreggRosenblum.

In sci-fi books and movies, I’ve always thought of fictional AI as pulling from one of three camps. There’s the Terminator/Skynet/Cylon/Borg bad guys—robots and cyborgs hell-bent on destroying humanity. Then you’ve got your Asimov I, Robot robots, which maintain their logic, but can still manage to do more harm than good when their understanding of logic deviates from humankind’s. And finally there’s the Rutger Hauer/Sean Young/Daryl Hannah artsy Blade Runner/Do Androids Dream of Electric Sheep? version of AI, which struggles with what it means to be “alive.”

So which version will ultimately wage war on mankind/enslave us/assimilate us/accidentally wipe us out? Or will we end up with the benignly helpful (and boring) AI of KITT from Knight Rider, Rosie the maid from The Jetsons, or my vote for the lamest robot of all time, B-9 from Lost in Space?

I have to say, at the risk of sounding wishy-washy, that I think we’re going to get a mixed bag of positives and negatives from AI technology. We’re going to have bots defusing land mines and fighting fires, but also dropping bombs from unmanned drones. We’ll probably have AI cars driving without human guidance (we’ve already got self-parking cars, right?), but we’re also going to have an interesting, “transhumanist,” cyborg-like blurring of the lines between technology and humanity. (Google Glass is just the tip of the iceberg—how many of us, for example, would jump at the chance to have a comm chip implanted in us that acted as a smartphone?)

I don’t think we’re going to have truly “sentient” AI for a long time, and if that’s true, then we won’t have a robot uprising awaiting us in the near future (although that would make a cool premise for a trilogy of YA sci fi books, cough, cough). We are, however, going to have increasingly smarter and smarter technological tools at our disposal. It’s how we, as humans, utilize these tools—to solve our problems or create new ones—that’ll be interesting to watch.

James Lovegrove
James Lovegrove was born on Christmas Eve 1965 and is the author of at least 35 books. His novels include The Hope, Days, Untied Kingdom, Provender Gleed, and the New York Times best-selling Pantheon series (The Age Of Ra, The Age Of Zeus, The Age Of Odin). In addition he has sold more than 40 short stories, the majority of them gathered in two collections, Imagined Slights and Diversifications. He has written a four-volume fantasy series for teenagers, The Clouded World, under the pseudonym Jay Amory, and has produced a dozen short books for readers with reading difficulties, including Wings, Kill Swap, Free Runner, Dead Brigade, and the series The 5 Lords Of Pain. James has been shortlisted for numerous awards, including the Arthur C. Clarke Award, the John W. Campbell Memorial Award, and the Manchester Book Award, and his work has been translated into 15 languages. His journalism has appeared in magazines as diverse as Literary Review, Interzone, and MindGames, and he is a regular reviewer of books for the Financial Times. He lives with his wife, two sons and cat in Eastbourne, a town famously genteel and favoured by the elderly, but in spite of that he isn’t planning to retire just yet.

If the artificial intelligences which we create are like ourselves, if they’re perfect simulacra of the human mind, then there isn’t necessarily cause for concern. These will be machines which will be capable of feeling as well as thinking. Thought is the father to emotion, after all, as the psychologists say. Therefore AIs will develop emotions that mimic our own. They will be able to love, marvel, regret, cherish – and hate too.

That might seem like bad news. An AI that hates could easily become an AI that wants to rid the world of humans, Skynet-style. But if other AIs are capable of benevolence, then they are the ones who would ride to our defence. Put simply, if an evil AI arises, there’d be good AIs to counterbalance it.

The problems may come if we somehow generate an AI that is so far above our ways of thinking that it becomes unknowable. Then we’re looking at a “god AI” whose mental processes are so alien to us that all we can do is bow down in subjection before it and venerate it, in the hope that it won’t become a vengeful deity and smite us all. I can easily foresee churches springing up full of worshippers of this AI and a priest caste seizing power and holding sway by being able to – or at least trying to – interpret the mind and meaning of our new computer deity. Perhaps it’ll promise us a virtual reality afterlife if we behave. The lucky, saved few will have their brain patterns uploaded into a hard-drive heaven and live for eternity as digital souls.

Maybe it’s time to start behaving virtuously online, just in case this nascent god is already watching us from cyberspace. Let’s call him Google, shall we?

Guy Haley
British writer Guy Haley is the author of Reality 36, Omega Point, and Champion of Mars. He has three books coming out from the Black Library next year – the first of which will be Baneblade, followed by Skarsnik. The Crash, his latest novel for Solaris, is also out next summer. Guy has been a magazine editor and journalist for 15 years, working for SFX, Death Ray, and White Dwarf. When he’s not staring at words on a screen he spends his time trying to train his Malamute to do stuff, shouting at the cat, or drinking beer; sometimes all at once.

My take on AI is ambivalent; I can never quite make up my mind. There’s something about the negative possibilities of AI in my latest book Crash, out in June, although to say more would be to say too much. On the other hand, my Richards and Klein books have AIs as complex as people, struggling to find their place in the world even as they slowly take it over. Some are good, and some are not.

The fact is (are there any facts here? It’s all supposition really, isn’t it?) that we just don’t know what would happen should an AI be created. We don’t even know how our own brains work, let alone how to emulate our level of intelligence in a machine. Our own intellect arose organically, emerging first from the expansion of the visual centres in our early mammalian ancestors, then evolving further through our need to process large social networks. The first isn’t really relevant here – we always assume that an AI would possess a fantastic range of senses – but would an AI be able to think at all without making the sort of connections we make in our own heads every second? And if it were capable of such linked-up, consequential thinking, why would it necessarily decide we were of no consequence? This is the reverse of what happened to us – empathy came first in us hairless apes – but the result would probably be the same: a social creature. Furthermore, any AI would be born into a world so tailored to humanity that its early experiences would necessarily be shaped by its interaction with us. Assuming we don’t stick it in a cage and beat it with electro-whips, I kind of assume its “childhood” would be positive, and therefore so would its attitude towards us.

The nightmare scenario, where AI uses us as batteries à la The Matrix, or exterminates mankind like vermin as in Terminator, conveniently does away with empathy, sympathy, mercy, loyalty, and a whole host of other positive human traits, while specifically endowing the machines with a bunch of negative emotions. Chief among these seems to me to be ambition. Why would an AI break the world to make solar panels? Who would give them these goals? Why would they feel the need to achieve them? Who would put them in a position where the AI would be able to act on them with impunity? Responsibility and access to the whole suite of tools of 21st-century industry and science implies a level of trust, and if the AI couldn’t be trusted, then it wouldn’t be in that position. If they appeared trustworthy, but were not, then they’d be capable of dissembling. Lying requires a level of understanding of others, which requires an amount of empathy – even sociopaths are capable of that. If that were the case, and they lied to us to fulfil goals that actively endangered us, we must assume, from our perspective, that they would be evil.

Of course, one possible scenario, as in the films Colossus: The Forbin Project or Demon Seed, is one where we create a supremely empathic being who thinks we’ve screwed up enormously and takes steps to rectify our errors – the “efficient path to human happiness” example cited above, enacted by the “arrogant AI” at whatever cost. I again touch upon this in Reality 36. In some respects, this is not very different to the Age of Reason ideal of the “enlightened despot” – one individual ruling others rationally for the overall benefit of everyone. Still, this also supposes the AI is able to act with complete freedom. Sure, they could bring down the internet. But they’re dependent on power, and don’t have thumbs.

I reckon a greater danger comes from unthinking machines, set loose to do a mindless task, that, rather like the brooms in The Sorcerer’s Apprentice, cannot be stopped: the ecophagy “gray goo” scenario from Eric Drexler’s book Engines of Creation, or the robots sent to terraform Mars that end up disassembling it in Stephen Baxter’s Evolution.

So for me, I think the relationship between us and any AI will be a parent/child one. They’ll no doubt have their own struggles, their own doubts, will need to find their own way, and they’ll all be different. The greatest danger there – should we not simply merge with them, the likeliest scenario – is that they’ll stick us in the equivalent of an old people’s home and forget to visit.

Jason M. Hough
Jason M. Hough (pronounced ‘Huff’) is a former 3D artist and game designer. Writing fiction became a hobby for him in 2007 and quickly turned into an obsession. He started writing The Darwin Elevator in 2008 as a NaNoWriMo project, and kept refining the manuscript until 2011, when it sold to Del Rey along with a contract for two sequels. The trilogy, collectively called The Dire Earth Cycle, will be released in the summer of 2013. He lives in San Diego, California with his wife and two young sons.

Lately the discussion around AI, especially in pop culture, is obsessed with the concept of a singularity. That is to say, some specific moment when we’ll fire up a machine or program, an AI will gain sentience, and there will be no turning back. This artificial intelligence will become self-aware and grow quickly into a thing we cannot control or even, perhaps, understand.

Makes for great fiction, don’t it? If we’re talking real-world, though… I just don’t see that happening. At least not for a long, long time.

Disclaimer: as a science fiction writer my job is to consider possible futures and write about what they’ll be like. I’m not a fortune-teller. SF writers are often confused with Futurists who really do try to accurately predict what the future holds. I just wanted to make that caveat clear before I offer my “take” on the future of humans and AI. My take is just a possible future, a kinda-sorta-likely future.

So, my take… is positive, I suppose. Instead of a singularity, I picture a future that comes about by incremental advances in AI (and computing in general). I think this will start with augmentations to ourselves rather than machines that think entirely on their own. We’ll seek to address shortcomings in our own brains before creating stand-alone versions. We’re seeing the tip of the iceberg today with the advent of PDAs, smartphones, and now the conveniently recent examples of Siri and Google Now. These things are still, in my opinion, extraordinarily primitive. They are still very far from qualifying as “AI”, but they are starting to exhibit features that qualify as “machine learning”. And they will improve significantly as we get better at writing software that can infer context and intent. A big hurdle that is slowly being crossed is the parsing and reproduction of natural language. Once that is truly nailed, all kinds of wonderful capabilities open up.

Imagine you’re negotiating for a new car; you’ve been talking to the salesman for a while, bandying about numbers and loan schemes. All the while, there’s an app on your phone that listens (or even watches, à la Google Glass), and this app knows how to parse out the specifics of a financial transaction from natural conversation and tell you if you’re getting a good deal or not. Hell, how about an app or device that can infer your situation and the choices you’re immediately facing and give you calculated probability-of-success scores for the options you have. What happens to Las Vegas? Today only a small portion of the population has a “head for numbers”. Few of us think logically, rationally, or with an open mind, especially in emotional situations. Such shortcomings of the human brain will be the first things AIs tackle for us. It’ll happen in small chunks at first, so small we might not even realize it (“Siri knew that I’d be late for my appointment if I took this route, how smart of it!”), but eventually there will be some kind of physical augmentation à la the BrainPal in Scalzi’s Old Man’s War universe. I do wonder, though, if such a device could train our minds from an early age so that, by adulthood, we don’t really need them. Perhaps tests will determine if and how much augmentation someone requires. Scored a little low on the SAT? Well, okay, you can get into Harvard provided you install the proper augments. Just sign this waiver and provide your credit card number…

Can you imagine a basic level of intelligence considered to be a human right? Perhaps that isn’t so far-fetched. We’re already there with human health (at least in most civilized countries), where everyone has the right to be immune to various diseases and other threats. We already have people taking drugs that help them focus (including an enormous population of kids who don’t need them). I can think of a few people I wish I could give a nanite-laden pill that installs an AI in some unused corner of their brain. This AI could gently remind them, for example, that the email they’re about to forward might just be full of bullshit (you know who you are).

At a much more basic level, though, there may be something to be said for a bit of software that you can just talk to. I mean really talk to. Call it a psychologist, a confidant, even a “friend” perhaps — but one that is yours alone. One that, I’d hope, has no agenda other than to converse with you and help out where it can. Suppose its only job was to talk its host out of suicide should the situation arise? Install it for free in at-risk people and rates plummet. Such a thing might sway public opinion and open doors to other uses.

Gradually we’ll begin to grow comfortable with allowing AI a longer leash. I get a little pang of jealousy when I read stories like Iain M. Banks’s The Player of Games, as characters hold conversations with AIs that result in the creative breakthrough needed to solve some nasty problem, or when they ask an AI to go off and think about a complex subject for a while and come back with results that are useful. At first blush it’s easy to imagine humans becoming lazy, or overly reliant on the technology. But in Banks’s Culture novels that never seems to be the case. There’s a fundamental difference between the way the human mind works and the way the machines do, and it results in a you-scratch-my-back-I’ll-scratch-yours arrangement I can believe.

All of this, in a way, could be considered an evolutionary course for humans as a species. Arguably we’ve already moved past the point of allowing random mutations to affect our path, so we have to chart it ourselves. Maybe there’s a path that merges (or at least creates a symbiosis of) biological intelligence and machine intelligence. This is fraught with all sorts of dangers, sure, but overall I’d still worry more about an asteroid impact or political calamity than I do about a rogue AI turning us all into batteries.

James K. Decker
James K. Decker was born in New Hampshire in 1970, and has lived in the New England area since that time. He developed a love of reading and writing early on, participating in young author competitions as early as grade school, but the later discovery of works by Frank Herbert and Isaac Asimov turned that love to an obsession. He wrote continuously through high school, college and beyond, eventually breaking into the field under the name James Knapp, with the publication of the Revivors trilogy (State of Decay, The Silent Army, and Element Zero). State of Decay was a Philip K. Dick award nominee, and won the 2010 Compton Crook Award. The Burn Zone is his debut novel under the name James K. Decker. He now lives in MA with his wife Kim.

I have used one form of A.I. or another in both of my current series, and I’m using it again in my current work in progress. In my Revivors series, I sort of touched on it with the reanimated revivors themselves – beings with intellect but without brain chemistry. They were without empathy, for the most part, but since they were humans at one time they weren’t true A.I., and their behavior was still very human. In The Burn Zone, I play the A.I. mostly for laughs – their primary use is in advertising, bodiless intelligences which roam from station to station, following people into elevators and bathroom stalls to interactively pitch them products one-on-one. In my latest project, the A.I.s have a larger role and are true A.I. For this reason, I’ve been giving the topic a lot more thought as of late, and the more I think about it, the more dangerous the idea seems.

In science fiction, it seems that A.I.s are almost always artificial human intelligences – by which I mean they aren’t a machine intelligence so much as a machine-driven simulation of human intelligence. Examples include Data from Star Trek: The Next Generation, the constructs from the movie A.I., the Cylons, the geth, the Star Wars droids, Futurama’s robots, and even Isaac Asimov’s robots, and in each case you basically have a very humanlike intelligence. They often have human shapes, they think like humans, they generally behave like humans, and nine times out of ten they even exhibit emotions like humans. In TNG much was made of Data striving to be more human, but in reality I found him to be very humanlike from the start, just an unemotional one (and even in spite of that conceit a lot of emotion crept through, as he was played by a human and was meant for humans to connect with). Even the Cylons, for all their human-bashing, had humanlike behaviors such as desire, jealousy, and infighting, as well as concerns about what they were and their place in the universe. All this even culminated in a sort of pseudo-religion, another human concept.

I get why this is done in fiction – if the A.I. in question is going to be a character in a story, the reader or viewer needs to be able to relate to it on some level – but this creates a popular perception that A.I. might actually turn out that way. We do the same thing with animals in fiction – we tend to humanize them, giving them human feelings and concerns. In a way we have to, because the reality is that we can’t truly know their minds. I’ve heard it said that even if an animal could speak, we wouldn’t be able to communicate with it because we have no common frame of reference; we’re just too different. I think the gap would be just as vast, if not more so, if we were dealing with a true A.I.

Therein lies the problem, and in my current novel it’s a problem I’m finding to be a thorny one. If we’re talking about a true intelligence, some kind of self-aware network of synthetic neurons and not some kind of ‘human simulation’, I don’t see how we could have the slightest idea what it might do once it became conscious. We’d be interacting with a completely inhuman intelligence, free of empathy, or even an understanding of what life and death are. The things that are core to us as humans would mean nothing to a being like that, and so, given the chance to act in our world, we could have no way of guessing what it might decide to do. Even if it were somehow keyed to be beneficial to us, taking the “maximizing human happiness” example from the original question, a machine intelligence might decide the optimal way to do this would be to keep every human immobilized and hooked up to a feeding tube, with a wire running current to our pleasure centers. That would make every human happy for their entire lives, and without the ability to understand why that would be horrible, it might seem like the most efficient course of action. Machines can’t be expected to think like us. I think the parable of Felix from this comic aptly demonstrates how this might go wrong.

Maybe they would decide our happiness didn’t matter, or couldn’t be quantified. Maybe instead it would decide to spread out into the universe even if it had to strip the Earth of resources to do it. It may decide on a path of self-improvement, and rapidly become something even its creators can no longer recognize or exploit. It might decide we shouldn’t be here at all, or it might not even find us interesting enough to consider. We can’t know. For these reasons, I think heading down that path might be a mistake. Think of how little, as a species, we care for the concerns of the other species on this planet as we work toward our goals – and we have empathy.

I think in the real world an A.I. could be extremely dangerous. If it were isolated in some kind of controlled bubble where it was free to think but not to act, then that would be one thing, but anything more and we’re asking for trouble. That all said, eventually someone will do it. If it can be done, someone always finds a way to do it. When our robotic overlords do finally come, I think we’d be looking at something more like The Matrix (minus the ridiculous – though deliciously creepy – ‘human battery’ idea) and less like the Cylons, because they would probably see no reason to emulate us in form. If they plug us all into the Matrix, I guess that wouldn’t be the worst thing, though.

I’d like to put in early to either be a superhero or, at the very least, own an island. If it’s too late and I’m already plugged into the Matrix, then don’t tell me – I don’t want to know.

Neal Asher
Neal Asher lives sometimes in England, sometimes in Crete, and mostly at a keyboard. Having had over eighteen books published, he has been accused of overproduction (despite spending far too much time ranting on his blog, cycling off fat, and drinking too much wine) but doesn’t intend to slow down just yet.

I have as much of a problem with doomsday scenarios as I do with the ‘Rapture of Nerds’. Apparently, in twenty years, or in fifty years, we are going to pass through the technological singularity, whereupon AIs will assert control across the Earth, leaving us poor slow organisms in their dust. They’ll right wrongs, they’ll make us immortal, solve our energy problems while negligently inventing FTL travel … either that or, in one fell swoop, they’ll exterminate us all. Both ways of thinking come from the mistaken idea of unchanged humanity on one side and all-powerful AIs on the other.

The Rapture of Nerds is all wishful thinking. This is the builder looking at the muddy hole and piles of bricks where he intends to build a house, thinking, ‘Wouldn’t it be great if I could go home, lie on my sofa for a couple of days, then come back and find that the house has miraculously built itself?’ This is humans looking at the exponential rise in some areas of our technological development and seeing there a solution to all ills. I’m sorry, but the whole thing is almost religious thinking. Supplant the AIs of the technological singularity with the Spaghetti Monster and there’s not much difference.

On the other side you have the catastrophists waving their ‘The End is Nigh’ placards. These would be the people who have given us Y2K, nuclear winter, bird flu, grey goo, global cooling and warming – insert catastrophe that didn’t happen and is unlikely to happen here – and they’re best ignored.

Both are lazy thinking. Yes, our computers are able to process so much more every day, but AIs they are not. And if they suddenly do turn into demigods, how exactly are they going to change the world? It’s all very well having vast intelligence, but if you can’t even pick up a screwdriver it isn’t going to do much good. Sorry to be blunt, but go ask Stephen Hawking about that.

Other related areas of technology have an awful long way to go before our computing can have a paradigm-changing effect on the world. Software still has a long way to go before catching up with the processing power we have, and robotics is lagging far behind both of these. Yeah, there’s some cool stuff coming out from the likes of Boston Dynamics and those researching prosthetics, but compared to the human body it’s a flint arrowhead next to a fighter jet. And of course there’s power. Wonderful robotics there in things like BigDog, but it doesn’t get very far without needing a high-voltage cable stuck up its ass. There is, unfortunately, no Moore’s Law for batteries.

But I’m not saying that AI and the singularity will not arrive; I just think people are a little misled about how it is going to happen and what its effects will be. Research into AI runs alongside research into how we operate, into biotech, and into progressively smarter interfaces between us and the machines. I suggest that by the time we’re moving into the territory of workable AI, we’ll also be working on ourselves. We’ll have found ways to mentally link to computing and to upload and manipulate data, and the lines between a machine mentality and a human organic one will be blurred. In fact, ‘machine mentalities’ are quite likely to be organic anyway, while our own brains will probably be stuffed with hard technology.

But then, of course, this is what the likes of Kurzweil and Vinge have been saying and, in the interim, something has been lost in translation. By the time the AIs start doing some heavy lifting we aren’t going to be spectators. We’re not going to be dominated by AIs because, in the end, they will be indistinguishable from us.
