Monday, September 28, 2015

The Singularity: The Next Arms Race

This past week I had the opportunity to peruse an article entitled "Two Singularities" by Calum Chace at Singularity Weblog. The article's primary purpose is to raise issues that will need to be considered regarding what he terms the "economic singularity"--the loss of jobs and concentration of wealth that will occur as we approach and reach the singularity. Of course, it helps to understand what is meant by the "singularity." Chace defines it as: "the moment when the first artificial general intelligence (AGI) becomes a superintelligence and introduces change to this planet on a scale and at a speed which un-augmented humans cannot comprehend. The term was borrowed from maths and physics, and the central idea is that there is an event horizon beyond which the future becomes un-knowable." (Christians have our own version of the "singularity"--the point after which the future becomes un-knowable--and we call it "the Second Coming").

As noted, Chace goes on to discuss what happens to the luckless humans who are unemployable and not among the elite. He writes:
In which case, I think it [i.e., the term "singularity"] can reasonably be applied to another event which is likely to take place well before the technological singularity. This is the economic singularity. We are hearing a lot at the moment about AI automating jobs out of existence. There is widespread disagreement about whether this is happening already, whether it will happen in the future, and whether it is a good or a bad thing. For what it’s worth, my own view is that it’s not happening yet (or at least, not much), that it will happen in the coming three decades, and that it can be a very good thing indeed if we are prepared for it, and if we manage the transition successfully. 
A lot of people believe that a Universal Basic Income (UBI) will solve the problem of technological unemployment. ... 
But to my mind, UBI is not the real battle. ...
The real problem, it seems to me, is that we will need more than UBI. We will need an entirely new form of economy. I see great danger in a world in which most people rub along on state handouts while a minority – perhaps a small minority – not only own most of the wealth (that is pretty much true already) but are the only ones actively engaged in any kind of economic activity. Given the advances in all kinds of technology that we can expect in the coming decades, this minority would be under immense temptation to separate themselves off from the rest of us – not just economically, but cognitively and physically too. Yuval Harari, author of the brilliant Sapiens: A Brief History of Humankind, says that humanity may divide into two classes of people: rather brutally, he calls them the gods and the useless. [See the end of his TED talk http://snglrty.co/1XlhZ1r]
I don't think the risk is that "the gods" will wall themselves off from "the useless," to borrow the terms used by Harari, but that "the gods" will simply eliminate "the useless." If you think this unlikely, you haven't been paying attention to the goals of the environmental movement.

But Chace seemingly approaches this issue on the assumption that the "super-intelligence" will be the natural result of progress in an open marketplace and/or an open scientific process, where every nation will potentially benefit equally. This, to me, is unlikely. A super-intelligence in the hands of one nation, corporation, or other group would instantly give that group an order of magnitude (or more) advantage over every other. There is no guarantee that a super-intelligence could be controlled well enough to be useful, but the risk of a competitor developing such a super AI first--and being able to exploit it--is too great to ignore.

Obviously, I'm not the first to have thought of this, and I quickly found some additional articles addressing the issue. From an abstract of the book Singularity Hypotheses: A Scientific and Philosophical Assessment comes an assessment similar to mine:
Chalmers notes that even if it is technically feasible for humanity to produce an intelligence explosion, we may not exercise that capacity because of “motivational defeaters,” choosing to restrict, slow, and manage the development of advanced AI technologies to reduce risk. On the other hand, since a lead in AI technology may translate into overwhelming military advantage, an arms race dynamic may give states incentives to pursue even very dangerous research in hopes of attaining a leading position.
The authors go on to observe:
... the military impact of an intelligence explosion would seem to lie primarily in the extreme acceleration in the development of new capabilities. A state might launch an AI Manhattan Project to gain a few months or years of sole access to advanced AI systems, and then initiate an intelligence explosion to greatly increase the rate of progress. Even if rivals remain only a few months behind chronologically, they may therefore be left many technological generations behind until their own intelligence explosions. It is much more probable that such a large gap would allow the leading power to safely disarm its nuclear-armed rivals than that any specific technological generation will provide a decisive advantage over the one immediately preceding it.

If states do take AI potential seriously, how likely is it that a government's "in-house" systems will reach the point of an intelligence explosion months or years before competitors? Historically, there were substantial delays between the dates when the first five nuclear powers tested bombs: 1945, 1949, 1952, 1960, and 1964. The Soviet Union's 1949 test benefited from extensive espionage and infiltration of the Manhattan Project, and Britain's 1952 test reflected formal joint participation in the Manhattan Project.

If the speedup in progress delivered by an intelligence explosion were large, such gaps would allow the leading power to solidify a monopoly on the technology and military power, at much lower cost in resources and loss of life than would have been required for the United States to maintain its nuclear monopoly of 1945-1949. To the extent that states distrust their rivals with such complete power, or wish to exploit it themselves, there would be strong incentives to vigorously push forward AI research, and to ensure government control over systems capable of producing an intelligence explosion.
Zoltan Istvan, writing at Motherboard, similarly states: "[N]ations should do all they can to develop artificial intelligence, because whichever country produces an AI first will likely end up ruling the world indefinitely, since that AI will be able to control all other technologies and their development on the planet."
