[Do you have an idea for a future Mind Meld? Let us know!]
Recently, a group of futurists predicted that artificial intelligence poses a deadlier threat to humanity than any natural disaster, nuclear war, or large object falling from the sky. In an article by Ross Anderson at AeonMagazine.com, David Dewey, a research fellow at the Future of Humanity Institute, says of the human brain and probabilistic reasoning: “If you had a machine that was designed specifically to make inferences about the world, instead of a machine like the human brain, you could make discoveries like that much faster.” He goes on: “An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels.” He also notes that programming an AI with empathy wouldn’t be easy: the steps it might take to “maximize human happiness,” for example, might strike us as unacceptable but strike an AI as exceedingly efficient.
Of course, this leads into a much more complex discussion, and the possibilities with AI are vast and varied.
We asked this week’s panelists…
Here’s what they said…
- If you make an intelligent being, you must give it civil rights.
- On the other hand, you cannot give the vote to a computer program. “One man, one vote” breaks down when a candidate program can be copied: how many copies would you need to win an election? Programs can merge, or can spawn subprograms.
- Machines can certainly become a part of a human. Our future may see a merging of humans and machines.
- Or all of the above. Keep reading science fiction. We always get there first.