[Image: GOODS South WFC3 ERS Details 1. Courtesy NASA]
Along this line, considerable research has analyzed intelligent behavior in animals. In a 2007 study by a Japanese research team, chimpanzees outperformed university students in a task requiring them to remember the locations of numbers on a screen [Briggs2007]. In another remarkable study, researchers at Wofford College taught a border collie to recognize the names of 1,022 objects, and to distinguish between the names of the objects themselves and commands to fetch them. The researchers halted the dog's training after three years, not because it could not learn more names but because of time and budget constraints [SD2011e]. Also, the average scaled brain size of various dolphin species exceeds that of chimpanzees, gorillas and orangutans, our closest primate relatives [ConwayMorris2003, pg. 247]. In short, these studies suggest that human intelligence is merely the upper end of a long spectrum of intelligence spanning many other species on this planet.
Early pioneers of computing were convinced that numerous real-world applications of artificial intelligence were just around the corner. In the early 1950s, for instance, it was widely expected that practical computer systems for machine translation would be working within "a year or two." But these early efforts foundered on the reality that emulating functions of the human brain was much more difficult than originally expected. Thus the field of computer science was content to pursue more tractable applications, such as business data processing and scientific computation, areas in which computers have proven to be spectacularly successful.
Applications of computer technology were greatly facilitated by the invention of the transistor in 1947 and the integrated circuit in the late 1950s. Gordon E. Moore, one of the founders of Intel, observed in 1965 that the number of transistors that could be etched onto a single chip of silicon had roughly doubled each year since 1960, and that, as far as he could see, this trend would continue for several more years [Moore1965]. Much to the astonishment of everyone in the field, his prediction, now known as "Moore's Law," has continued unabated for 45 years, and there is still no end in sight.
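To see what a doubling law implies numerically, here is a minimal Python sketch of exponential growth under an assumed fixed doubling period; the two-year doubling interval and the starting transistor count in the code are illustrative assumptions chosen for the calculation, not figures from Moore's paper.

```python
# Back-of-the-envelope illustration of exponential growth under a
# Moore's-Law-style doubling rule. The starting count and doubling
# period below are illustrative assumptions, not historical data.

def transistor_count(years_elapsed, doubling_period_years=2.0, initial_count=2_000):
    """Projected transistor count after `years_elapsed` years, assuming the
    count doubles every `doubling_period_years` years."""
    return initial_count * 2 ** (years_elapsed / doubling_period_years)

for years in (10, 20, 45):
    print(f"After {years:2d} years: about {transistor_count(years):,.0f} transistors")
```

Even with the conservative two-year doubling period assumed here, 45 years of such growth multiplies the starting count by a factor of nearly six million.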
Spurred in part by Moore's Law, within the past 20 years a new generation of researchers in the artificial intelligence field has revisited some of the older applications that proved so troublesome. One breakthrough in this arena was the finding that schemes based on statistics and probability, technically known as "Bayesian" methods, tend to be significantly more effective at "learning" than the rule-based approaches that had been used for many years. These Bayesian methods are also more akin to the experience-based process by which humans think and learn. As one example of the numerous lines of research and development in this area, some rather good computer translation tools are now available -- try Google's online translation tool at http://translate.google.com.
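As a very small illustration of the Bayesian idea (and emphatically not the machinery inside Google Translate or any production system), the sketch below classifies short phrases by accumulating word probabilities learned from labeled examples rather than from hand-written rules; the tiny training set and the smoothing constant are purely illustrative.

```python
# A toy naive Bayes classifier: probabilities are "learned" from labeled
# examples rather than hand-written rules. The training data and the
# Laplace smoothing constant below are purely illustrative.
import math
from collections import Counter, defaultdict

def train(examples):
    """examples: list of (text, label) pairs. Returns per-label word counts,
    label counts, and the overall vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab, alpha=1.0):
    """Pick the label maximizing log P(label) + sum of log P(word | label),
    with Laplace (add-alpha) smoothing for unseen words."""
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + alpha * len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + alpha) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

examples = [("good great wonderful", "positive"),
            ("fine nice good", "positive"),
            ("bad awful terrible", "negative"),
            ("poor bad dreadful", "negative")]
model = train(examples)
print(classify("a good nice day", *model))   # expected output: positive
```

Production systems apply the same probabilistic principle to vastly larger models and datasets, but the contrast with hand-coded rules is already visible in this toy example.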
IBM's Deep Blue System
This recent wave of progress in artificial intelligence was brought to the public's attention in 1996, when an IBM computer system known as "Deep Blue" defeated Garry Kasparov, the reigning world chess champion, in game one of a six-game match. After the defeat, Kasparov quickly left the stage and was described as "devastated" [Weber1996]. The basic program strategy employed by Deep Blue had been known for some time, although IBM's team had made some significant enhancements. However, the principal factor in the Deep Blue achievement was simply the application of enormous computing power -- the specially designed Deep Blue system (a highly parallel supercomputer) analyzed over 100 million potential moves per second, and thus was able to "see" ten or more moves ahead. Kasparov went on to win the 1996 match, but in a 1997 rematch Deep Blue won 3.5 games to 2.5, decisively establishing computer supremacy in tournament chess [Weber1997].
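The "seeing ahead" described above rests on game-tree search. The sketch below shows generic minimax search with alpha-beta pruning applied to a toy game; it illustrates the family of techniques that chess programs such as Deep Blue built upon, but the game interface and the TakeawayGame here are invented for this example, and Deep Blue's actual evaluation functions and special-purpose hardware were vastly more elaborate.

```python
# Generic minimax search with alpha-beta pruning: a bare-bones illustration
# of the game-tree search family that chess programs build on. The `game`
# interface and the toy game below are invented for this example.

def alphabeta(game, state, depth, alpha=float("-inf"), beta=float("inf"),
              maximizing=True):
    """Best achievable score of `state` from the maximizer's viewpoint,
    searching `depth` plies ahead and pruning branches that cannot matter."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state, maximizing)
    if maximizing:
        value = float("-inf")
        for move in game.legal_moves(state):
            value = max(value, alphabeta(game, game.apply(state, move),
                                         depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # the opponent would never allow this line
                break
        return value
    value = float("inf")
    for move in game.legal_moves(state):
        value = min(value, alphabeta(game, game.apply(state, move),
                                     depth - 1, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value


class TakeawayGame:
    """Toy game: players alternately remove 1 or 2 counters from a pile;
    whoever removes the last counter wins."""

    def is_terminal(self, state):
        return state == 0

    def evaluate(self, state, maximizing):
        if state > 0:
            return 0                       # depth cutoff: call the position even
        return -1 if maximizing else 1     # pile empty: the previous mover won

    def legal_moves(self, state):
        return [m for m in (1, 2) if m <= state]

    def apply(self, state, move):
        return state - move


# With 4 counters and the maximizer to move, perfect play wins: prints 1.
print(alphabeta(TakeawayGame(), 4, depth=10))
```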
In spite of its achievement, Deep Blue was a fairly elementary form of artificial intelligence. It most certainly was not attempting to play chess in the same intuitive way that a human does. For example, Kasparov typically evaluates roughly three moves per second in tournament play, compared with over 100 million per second for Deep Blue. In addition, the game of chess can be very compactly described. Thus, in the minds of some observers, the Deep Blue achievement did not constitute a major advance in artificial intelligence.
In 2010, after the IBM team felt that enough progress had been made on its question-answering system, known as "Watson," IBM executives contacted the producers of the Jeopardy! quiz show, and an agreement was reached to stage a match pitting Watson against two of the show's most celebrated champions, Ken Jennings and Brad Rutter; it was conducted 14-16 February 2011. On the first day, Watson opened impressively, but ended up tied with Rutter for the lead. On the second day, however, Watson performed extremely well -- it rang in first on 25 of the 30 questions and was correct on 24 of the 25. It also did very well on the third day, although not as decisively as on the second. Watson's three-day total "winnings" were $77,147, far ahead of Jennings at $24,000 and Rutter at $21,600, and so Watson was declared the victor (IBM's actual winnings of US$1,000,000 were split between two charities). In a memorable inscription conceding defeat at the end of the Jeopardy! match, Jennings wrote on his screen "I for one welcome our new computer overlords" [Markoff2011a].
Since the 2011 Jeopardy! demonstration, IBM has launched a new research initiative to apply its Watson technology to medicine, in particular to cancer diagnosis and treatment. While the effort has achieved some initial success, even its promoters agree that considerable work will be necessary before the system can be accepted in clinical practice [Ross2017].
In 2017, DeepMind announced even more remarkable results: its researchers had started from scratch, programming a computer with only the rules of Go together with sophisticated "deep learning" algorithms, and then had the program play games against itself. Initially, the new "AlphaGo Zero" program flailed badly, but within a few days it had advanced to the point that it defeated the earlier champion-beating AlphaGo program 100 games to zero. After one month, the program's Elo rating was over 5000, compared with Lee Sedol's rating of approximately 3600, meaning that AlphaGo Zero was as far above the world champion as the world champion was above a typical amateur [Greenmeier2017].
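To put such a rating gap in perspective, the standard Elo formula estimates the expected score of one rated player against another; the short calculation below applies it to the approximate ratings quoted above. The formula is the usual one from chess and Go rating systems, and the ratings are simply the figures reported in the cited article.

```python
# Expected score under the standard Elo model: a player rated r_a scores, on
# average, 1 / (1 + 10**((r_b - r_a) / 400)) points per game against a player
# rated r_b. The ratings below are the approximate figures quoted in the text.

def elo_expected_score(r_a, r_b):
    """Expected fraction of points for the player rated r_a."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

alphago_zero, lee_sedol = 5000, 3600
expected = elo_expected_score(alphago_zero, lee_sedol)
print(f"Expected score for the higher-rated player: {expected:.6f}")
# A gap of roughly 1,400 points leaves the lower-rated player an expected
# score of only about 0.0003 -- a few points per ten thousand games.
```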
So where is all this heading? A 2011 Time article featured an interview with futurist Ray Kurzweil [Grossman2011]. Kurzweil is a leading figure in the "Singularity" movement, a loosely coupled group of scientists and technologists who foresee an era, which they predict will occur by roughly 2045, when machine intelligence will far transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at the present time. Kurzweil outlines this vision in his recent book The Singularity Is Near [Kurzweil2005].
Many of these scientists and technologists believe that we are already on the cusp of this transition. Consider for a moment the enormous advances that have occurred just since the year 2000.
Futurists such as Kurzweil certainly have their skeptics and detractors. Many question Kurzweil's long-term extensions of Moore's Law. Others note that some of the earlier predictions of the Singularity movement have not yet materialized. Still others, such as Bill Joy, acknowledge that many of these predictions will materialize, but are very concerned that humans could be relegated to minor players in the future, or that out-of-control robots or nanotech-produced "grey goo" could destroy life on our fragile planet [Joy2000].
Many others (including the present author) are concerned that these writers are soft-pedaling enormous societal, legal, financial and ethical challenges, some of which we are just beginning to see clearly. One instance of this is the increasingly strident social backlash against technology and science itself, as evidenced in part by the popularity of the creationist and intelligent design movements. Along this line, a growing backlash has recently arisen in the San Francisco Bay Area (hardly a bastion of conservatism) against Pacific Gas and Electric's "smart meters," which send hourly usage data to a central server via wireless cell technology. Objections range from claims that the meters are "inaccurate" (in spite of several extensive tests that have verified their accuracy) to claims that they endanger health by sending out a few bytes of data over wireless networks once an hour (in spite of the fact that many of these critics own cell phones, which send and receive thousands of times more data with every telephone conversation or Internet access) [Barringer2011].
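A rough back-of-the-envelope comparison suggests why the health objection sits oddly alongside routine cell-phone use; every figure in the sketch below (report size, voice bitrate, call length) is an assumption chosen purely for illustration, not a measurement from the cited article.

```python
# Rough back-of-the-envelope comparison of data volumes. Every figure below
# is an assumption chosen for illustration (payload sizes vary by utility and
# by phone codec); none is a measurement from the cited article.

METER_REPORT_BYTES = 100        # assumed size of one hourly usage report
VOICE_KBITS_PER_SEC = 12        # assumed compressed voice bitrate
CALL_MINUTES = 10               # assumed length of a typical phone call

call_bytes = VOICE_KBITS_PER_SEC * 1000 / 8 * CALL_MINUTES * 60

print(f"One hourly meter report:  {METER_REPORT_BYTES:>10,} bytes")
print(f"One {CALL_MINUTES}-minute voice call: {call_bytes:>10,.0f} bytes")
print(f"Ratio: roughly {call_bytes / METER_REPORT_BYTES:,.0f} to 1")
```

Under these assumed figures, a single phone call carries on the order of a thousand times more wireless data than an hourly meter report, which is the spirit of the comparison in the text.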
Nonetheless, the basic conclusions of the Singularity community appear to be on target: Moore's Law is likely to continue for at least another 20 years or so. Progress in a wide range of other technologies is here to stay (in part because of Moore's Law). Scientific progress is here to stay (again, in part because of Moore's Law-based advances in instrumentation and computer simulation tools). And all this is leading directly and inexorably to real-world artificial intelligence within 20-40 years. Whether "we" merge with "them," or "they" advance along with "us" is an interesting question, but either way, the future is coming. As the mathematician I.J. Good predicted back in 1965 [Grossman2011]:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Do these developments spell the end of humanity as we know it? Probably not, any more than any other technological advancement. Instead, we as a world society have adopted these technologies to relieve toil and suffering, advance our standard of living (to the extent possible while reducing our footprint on the earth's environment), and enable larger numbers of people to focus on what is truly human. Indeed, as computer scientist John Seely Brown has explained, machines that are facile at answering questions only serve to obscure what remains fundamentally human: "The essence of being human involves asking questions, not answering them" [Markoff2011b].
Another line of thinking in this direction follows from the recent developments in artificial intelligence mentioned above. Some in the Singularity community, for instance, believe that the time will come when one's brain can be scanned at such resolution that the full contents of one's mind can be "uploaded" into a super-powerful computer system, after which the "person" will achieve a sort of immortality [Kurzweil2005]. Physicist Frank Tipler is even more expansive, predicting that every human who has ever lived will ultimately be "resurrected" in an information-theoretic sense [Tipler1994]. Even if one discounts the boundless optimism of such writers, the disagreement is generally not a matter of if, but only of when such changes will transpire, and whether mankind can muster the wisdom to carefully control and direct these technologies for good rather than evil.
In any event, it is curious to note that at the pinnacle of modern science and technology, mankind has identified the extension of life and, even more boldly, the conquering of death as top future priorities, goals which are also the pinnacles of Judeo-Christian religion. Further, many futurist thinkers (who by and large are of highly secular sentiment) also recognize that extension of life has significant implications for human morality. As Marc Geddes explains [Geddes2004]:
Rational people understand that actions have consequences. A life of crime may help a person in the short term, but in the long run it may get you killed or imprisoned. ... People are more likely to be moral when they understand they will have to face the consequences of their actions in the future. It follows that the further into the future one plans for, the more moral one's behavior should become.
In a similar vein, the humanitarian Albert Schweitzer grounded his sense of ethics in a deep reverence for life, a reverence that reverberates even today in a very different environment from the one Schweitzer originally envisioned [Schweitzer1933, pg. 157]:
Affirmation of life is the spiritual act by which man ceases to live unreflectively and begins to devote himself to his life with reverence in order to raise it to its true value. To affirm life is to deepen, to make more inward, and to exalt the will to live. At the same time the man who has become a thinking being feels a compulsion to give to every will-to-live the same reverence for life that he gives to his own. He experiences that other life in his own. He accepts as being good: to preserve life, to promote life, to raise to its highest value life which is capable of development; and as being evil: to destroy life, to injure life, to repress life which is capable of development. This is the absolute, fundamental principle of the moral, and it is a necessity of thought.