I was following some discussion threads on Amazon regarding books dealing with the potential of human intelligence to understand the nature of things and with the prospects for developing artificial intelligence that may or may not come back to bite us as in the Terminator series of movies. The following is one of the posts I made there:
“Bostrom (the author of a book warning of the dangers of AI) seems enamored with human intelligence, assuming it to be superior to any other intelligence we know of. I believe that human self-aggrandizement is one of our key weaknesses as a species. We often assume we are the only species that matters; most proposed solutions to the problems we face address only their effect on humans, as if that were all that counted.
Our “superior” intelligence has resulted in gross overpopulation of the planet, a mass extinction projected to be as bad as the one that killed the dinosaurs, endless wars fought with increasingly deadly weapons obtainable by almost every group that wants them, roughly 20 percent of the human population living in abject poverty, an extraordinary failure to use the peaceful conflict-resolution methods that have already been developed, and so on.
Various other species behave more intelligently than we do in certain areas. We are the best at technological development and the arts and sciences, but that’s about it. Many other species are better at handling conflict, raising their young, fitting into their environment, and so on. In short, we are extraordinarily stupid in many areas that affect our survival and the survival of other life on the planet, and one of the stupidest ideas is that of human superiority in all things.
OK, with that out of the way, I would be interested in a book that convincingly describes how superintelligence could be created and gives well-documented evidence for it. I have been in computers and math for fifty years (well, computers for only forty years), and I well remember the AI craze that consumed the industry in the ’80s. People then thought that the solution to AI was just around the corner. Then reality hit, and the difficulties of producing true AI became apparent. Now we have developed a computer system that can beat a human at chess and apparently can fix a satellite without human intervention in some cases. There is still a huge gap between these feats and producing AI that can function like that shown in the Terminator movies.
Given the problems facing us now (the likelihood of our self-destruction from war, contamination and pollution, disease, including bio-warfare, and other such insane behavior), I don’t lose much sleep over the dangers of AI, though if we could develop it, I’m sure we would, because we like to act first and think later, like the birds in Bostrom’s allegory.” Here is a link if you want to Look Inside for Bostrom’s allegory, which I liked, and the preface I refer to.