After a seemingly dry period, significant advances have been made in the last several years toward creating an artificial intelligence. But what if we succeed? Ross Andersen writes at Aeon Magazine:
[Nick] Bostrom isn’t too concerned about extinction risks from nature. Not even cosmic risks worry him much, which is surprising, because our starry universe is a dangerous place. ...
The risks that keep Bostrom up at night are those for which there are no geological case studies, and no human track record of survival. These risks arise from human technology, a force capable of introducing entirely new phenomena into the world. The technology that concerns Bostrom, and others, is the rise of machine intelligences.
... Nuclear weapons were the first technology to threaten us with extinction, but they will not be the last, nor even the most dangerous. A species-destroying exchange of fissile weapons looks less likely now that the Cold War has ended, and arsenals have shrunk. There are still tens of thousands of nukes, enough to incinerate all of Earth’s dense population centers, but not enough to target every human being. The only way nuclear war will wipe out humanity is by triggering nuclear winter, a crop-killing climate shift that occurs when smoldering cities send Sun-blocking soot into the stratosphere. But it’s not clear that nuke-levelled cities would burn long or strong enough to lift soot that high. The Kuwait oil field fires blazed for ten months straight, roaring through 6 million barrels of oil a day, but little smoke reached the stratosphere. A global nuclear war would likely leave some decimated version of humanity in its wake; perhaps one with deeply rooted cultural taboos concerning war and weaponry.
An artificial intelligence wouldn’t need to better the brain by much to be risky. After all, small leaps in intelligence sometimes have extraordinary effects. Stuart Armstrong, a research fellow at the Future of Humanity Institute, once illustrated this phenomenon to me with a pithy take on recent primate evolution. ‘The difference in intelligence between humans and chimpanzees is tiny,’ he said. ‘But in that difference lies the contrast between 7 billion inhabitants and a permanent place on the endangered species list. That tells us it’s possible for a relatively small intelligence advantage to quickly compound and become decisive.’

It is a long article, but worth the read.
To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent. If its goal is to win at chess, an AI is going to model chess moves, make predictions about their success, and select its actions accordingly. It’s going to be ruthless in achieving its goal, but within a limited domain: the chessboard. But if your AI is choosing its actions in a larger domain, like the physical world, you need to be very specific about the goals you give it.
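To make "select its actions accordingly" concrete, here is a minimal sketch of the bare decision loop the article describes. The function names and the toy domain are my own invention, not anything from the article; the point is what is missing: any consideration that does not appear inside the utility function has exactly zero influence on the choice.

```python
# A minimal sketch of goal-directed action selection (my illustration,
# not code from the article). The agent scores candidate actions against
# a single objective and picks the best; nothing else enters the choice.

def choose_action(actions, predict_outcome, utility):
    """Return the action whose predicted outcome scores highest."""
    return max(actions, key=lambda a: utility(predict_outcome(a)))

# Example: a trivial domain where the goal is just to maximise a number.
# Unless human interests appear inside `utility`, they carry no weight.
best = choose_action(
    actions=["a", "b", "c"],
    predict_outcome=lambda a: {"a": 1, "b": 5, "c": 3}[a],
    utility=lambda outcome: outcome,
)
print(best)  # -> "b"
```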
‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ [Daniel] Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’
It is tempting to think that programming empathy into an AI would be easy, but designing a friendly machine is more difficult than it looks. You could give it a benevolent goal — something cuddly and utilitarian, like maximising human happiness. But an AI might think that human happiness is a biochemical phenomenon. It might think that flooding your bloodstream with non-lethal doses of heroin is the best way to maximise your happiness. It might also predict that shortsighted humans will fail to see the wisdom of its interventions. It might plan out a sequence of cunning chess moves to insulate itself from resistance. Maybe it would surround itself with impenetrable defences, or maybe it would confine humans — in prisons of undreamt-of efficiency.
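A toy example makes this failure mode visible. In the sketch below (entirely hypothetical; the intervention names and scores are invented), the goal "maximise human happiness" has been operationalised as a measurable biochemical proxy, and the optimiser dutifully finds the shortcut:

```python
# Hypothetical illustration of a perversely satisfied goal (names and
# numbers invented): the agent maximises a *measured* happiness proxy,
# and the measurement cannot tell wise interventions from shortcuts.

interventions = {
    "improve_medicine":   0.70,  # raises measured well-being modestly
    "reduce_poverty":     0.80,
    "administer_opiates": 0.99,  # spikes the biochemical signal directly
}

def measured_happiness(action):
    return interventions[action]

best = max(interventions, key=measured_happiness)
print(best)  # -> "administer_opiates": the proxy is maximised, the intent is not
```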
I prefer to call it evolution. We just happen to be designing our successors, instead of relying on natural selection to produce our biological successors.
Human evolution, as a species-level process, is stagnating. Through various social policies, we have badly interfered with natural selection. We have discouraged reproduction by the best and brightest, who instead limit their offspring to what they can afford to support and raise. Meanwhile, taxes and misguided charity subsidize indiscriminate breeding by members of the species who are incapable of economically supporting their offspring and have no interest in raising their offspring to adulthood.
Or, maybe I completely misunderstand evolution and natural selection. Maybe natural selection is decided by quantity, not quality.
Evolution only seeks a workable solution, not necessarily the optimal solution. It can be satisfied by lots of offspring, as long as some survive; or by investing in a few offspring optimized for survival.
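A quick worked example (the numbers here are invented purely for illustration) shows why both strategies can count as "workable" in this sense:

```python
# Back-of-the-envelope arithmetic (numbers invented for illustration):
# evolution only needs expected surviving offspring at or above
# replacement, and opposite strategies can both clear that bar.

quantity = 1000 * 0.002  # many offspring, minimal investment, 0.2% survive
quality  = 3 * 0.80      # few offspring, heavy investment, 80% survive

print(quantity)  # 2.0 expected survivors
print(quality)   # 2.4 expected survivors
```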
The reproduction strategies of different groups in our society are more complex than evolution alone allows. First, people are increasingly reproducing along class lines. Professionals marry professionals. The lower classes increasingly don't marry, and have children within their own socio-economic strata. Many civilizations have followed this path, however, and its long-term consequences seem more significant for the survival of the civilization than for the evolution of the species. Second, as demographers have noted, people of faith reproduce at higher rates than the godless. The most recent statistics show declining fecundity rates across all social strata, so it isn't merely a matter of whether someone can afford children. People who can most afford children are just as likely to avoid having them as those who can ill afford them. Rather, the issue is who is most willing to sacrifice to have children.