Can computers think?

Are human brains different from computers?

It has been widely believed through history (and is still widely believed by many religious-minded people) that the human mind is fundamentally distinct from anything mechanical or otherwise non-living. However, modern science has uncovered a great deal about the workings of the human mind, and while a complete picture of conscious thought is not yet in hand, many scientists believe that the essential outlines are now understood. Many of these findings were outlined in a 1995 book by Francis Crick, the co-discoverer of the structure of DNA [Crick1995], and more details have been discovered since then.

Along this line, considerable research has been done analyzing intelligent behavior in animals. In a 2007 study by a Japanese research team, chimpanzees outperformed university students in a task requiring them to remember the locations of numbers on a screen [Briggs2007]. In another remarkable study, researchers at Wofford College taught a border collie to recognize the names of 1,022 objects, and to distinguish between the names of the objects themselves and commands to fetch them. The researchers halted the dog’s training after three years, not because it could not learn more names but because of time and budget constraints [SD2011e]. Also, the average scaled brain size of various species of dolphins is greater than that of chimpanzees, gorillas and orangutans, our closest primate relatives [ConwayMorris2003, pg. 247]. In short, these studies suggest that human intelligence lies at one end of a long spectrum of intelligence spanning many other species on this planet.

Historical background

One of the more interesting lines of research in this area is “artificial intelligence,” a term used rather broadly to describe computer systems that attempt to perform functions normally associated with human minds. Early pioneers of computing were convinced that numerous real-world applications of artificial intelligence were just around the corner. In the early 1950s, for instance, it was widely expected that practical computer systems for machine translation would be working within “a year or two.” But these early efforts foundered on the reality that emulating functions of the human brain was much more difficult than originally expected. Spurred in part by Moore’s Law, within the past 20 years a new generation of researchers in the artificial intelligence field has revisited some of the older applications that proved so troublesome. One breakthrough in this arena was the adoption of “Bayesian” methods, which are significantly more akin to the experience-based process by which humans think and learn. As one example of the numerous lines of research and development in this area, some rather good computer translation tools are now available; try Google’s online translation tool at http://translate.google.com.
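To give a flavor of the Bayesian approach mentioned above, the following is a minimal sketch (in Python) of Bayes’ rule applied to a toy word-sense problem of the kind that arises in machine translation: a system’s belief about which meaning of an ambiguous word is intended is revised as each new piece of context is observed. The word senses, context words and probabilities below are purely illustrative assumptions, not taken from any actual translation system.

```python
# Minimal sketch of Bayesian updating: start with prior beliefs about which
# sense of an ambiguous word is intended, then revise those beliefs as each
# observed context word (the "evidence") arrives.
# All senses, context words and probabilities are illustrative assumptions.

priors = {"bank (river)": 0.5, "bank (finance)": 0.5}

# P(context word | sense): how likely each context word is to appear near
# each sense; a real system would estimate these from large text corpora.
likelihoods = {
    "water":   {"bank (river)": 0.30, "bank (finance)": 0.02},
    "deposit": {"bank (river)": 0.01, "bank (finance)": 0.25},
}

def update(beliefs, context_word):
    """Apply Bayes' rule for one observed context word, then renormalize."""
    posterior = {sense: p * likelihoods[context_word][sense]
                 for sense, p in beliefs.items()}
    total = sum(posterior.values())
    return {sense: p / total for sense, p in posterior.items()}

beliefs = dict(priors)
for word in ["water", "water"]:
    beliefs = update(beliefs, word)
    print(word, beliefs)
# Each observation of "water" shifts belief further toward the river sense,
# mirroring the experience-based learning described in the text.
```

The point of the sketch is simply that, in the Bayesian framework, prior experience and new evidence are combined by a single multiplicative rule, which is why such methods adapt gracefully as more data accumulate.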

IBM’s Deep Blue System

This recent wave of progress in artificial intelligence was brought to the public’s attention in 1996, when an IBM computer system known as “Deep Blue” defeated Garry Kasparov, the reigning world chess champion, in game one of a six-game match. After the defeat, Kasparov quickly left the stage and was described as “devastated” [Weber1996]. Kasparov went on to win the 1996 match, but in a 1997 rematch Deep Blue prevailed 3.5 games to 2.5, becoming the first computer system to defeat a reigning world champion in a match played under standard tournament conditions [Weber1997].

IBM’s Watson System

In 2004, an IBM research executive, while having dinner with some coworkers, noticed that everyone in the restaurant had turned to watch a telecast of the American quiz show Jeopardy!, on which Ken Jennings was in the middle of a long winning streak. After subsequent discussions among IBM scientists and executives, the company embarked on a plan to develop a natural-language question-answering system powerful enough to compete with top human contestants on Jeopardy! The project proved every bit as challenging as first anticipated. According to some reports, IBM spent roughly $1 billion on the project, which was dubbed “Watson” after Thomas J. Watson, the founder of IBM.

In 2010, after the IBM team felt that enough progress had been made, IBM executives contacted the producers of the Jeopardy! show, and an agreement was reached to stage a match. On the human side, Jeopardy! recruited its two legendary champions: Ken Jennings, who set the all-time record of 74 consecutive wins, and Brad Rutter, who until the Watson match had never lost to a human opponent and was the show’s all-time biggest money winner. The match was taped at IBM’s Thomas J. Watson Research Center and broadcast on 14-16 February 2011. Clues were fed to Watson electronically as soon as they were displayed to the human contestants. When Watson was sufficiently confident of an answer, it pressed the signaling button by means of a mechanical plunger, and, if it rang in first, delivered its response in a computer-synthesized voice.
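Purely as an illustration of the buzz-in behavior just described, the following sketch (in Python) shows what a bare-bones confidence-threshold rule might look like. The threshold value and the candidate answers are invented for the example; IBM’s actual decision logic, which also weighed confidence against game state and wagering strategy, was far more sophisticated.

```python
# Toy illustration of a confidence-threshold buzz-in rule. The threshold and
# the candidate answers below are invented; Watson's real decision logic was
# far more sophisticated and also accounted for game state and wagering.
BUZZ_THRESHOLD = 0.50  # assumed minimum confidence required to ring in

def decide_to_buzz(candidates):
    """candidates: list of (answer, confidence) pairs with confidence in [0, 1].
    Returns the best answer if its confidence clears the threshold, else None."""
    best_answer, best_confidence = max(candidates, key=lambda pair: pair[1])
    if best_confidence >= BUZZ_THRESHOLD:
        return best_answer, best_confidence
    return None, best_confidence

answer, confidence = decide_to_buzz([("What is Toronto?", 0.14),
                                     ("What is Chicago?", 0.86)])
print(answer, confidence)  # rings in with "What is Chicago?" since 0.86 >= 0.50
```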

On the first day, Watson opened impressively but ended the day tied with Rutter for the lead. On the second day, however, Watson performed extremely well, ringing in first on 25 of the 30 questions and answering 24 of those 25 correctly. It also did very well on the third day, although not as decisively as on the second. Watson’s three-day total “winnings” were $77,147, far ahead of Jennings at $24,000 and Rutter at $21,600, so Watson was declared the victor (IBM’s actual winnings of US $1,000,000 were split between two charities). Conceding defeat at the end of the match, Ken Jennings memorably wrote on his screen, “I for one welcome our new computer overlords” [Markoff2011a].

Future prospects

The real significance of the Watson project was IBM’s demonstration that a computer system can “understand” and respond to natural language queries rather well, something that has long been a major obstacle in real-world applications of artificial intelligence. Computers have not yet passed the “Turing test” [Hodges2000], in which humans exchange messages with an unseen partner (a computer) and judge it to be human, but they are getting close.

So where is all this heading? A 2011 Time article featured an interview with futurist Ray Kurzweil [Grossman2011]. Kurzweil is a leading figure in the “Singularity” movement, a loosely knit group of scientists and technologists who foresee an era, which they predict will arrive by roughly 2045, when machine intelligence will far transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at present. Kurzweil outlines this vision in his book The Singularity Is Near [Kurzweil2005]. Many in this community believe that we are already on the cusp of this transition.

Futurists such as Kurzweil certainly have their skeptics and detractors. Many question the timetable of these predictions. Others (including the present author) are concerned that these writers are soft-pedaling enormous societal, legal, financial and ethical challenges, some of which we are just beginning to see clearly. Still others, such as Bill Joy, acknowledge that many of these predictions will materialize, but are very concerned that humans could be relegated to minor players in the future, or that out-of-control robots or nanotech-produced “grey goo” could destroy life on our fragile planet [Joy2000].

Nonetheless, the basic conclusions of the Singularity community appear to be on target: Moore’s Law is likely to continue for at least another 20 years or so. Progress in a wide range of other technologies is here to stay (in part because of Moore’s Law). Scientific progress is here to stay (again, in part because of Moore’s Law-based advances in instrumentation and computer simulation tools). And all this is leading directly and inexorably to real-world artificial intelligence within 20-40 years. Whether “we” merge with “them,” or “they” advance along with “us” is an interesting question, but either way, the future is coming.

Religious implications

In the wake of these developments, many have noted that advances in technology are heading directly toward a form of “immortality,” in two different ways. First, medical technology is in the midst of a revolution on several fronts, ranging from advanced, high-tech prosthetics to some remarkable cancer therapies now in development [Kurzweil2005]. Of even greater interest is research into the fundamental causes of aging: scientists have identified seven broad categories of molecular and cellular differences between older and younger people, and are working on ways to retard or halt each of them [deGrey2004].

Another line of thinking in this direction follows the recent developments in artificial intelligence mentioned above. Some in the Singularity community, for instance, believe that the time will come when one’s brain can be scanned at such resolution that the full contents of one’s mind can be “uploaded” into a super-powerful computer system, after which the “person” will achieve a sort of immortality [Kurzweil2005]. Physicist Frank Tipler is even more expansive, predicting that every human who has ever lived will ultimately be “resurrected” in an information-theoretic sense [Tipler1994]. Even if one discounts the boundless optimism of such writers, the disagreement is generally not over whether such changes will transpire, but only over when, and over whether mankind can muster the wisdom to carefully control and direct these technologies for good rather than evil.

In any event, it is curious to note that at the pinnacle of modern science and technology, mankind has identified the extension of life and, even more boldly, the conquering of death as top future priorities, goals which are also the pinnacles of Judeo-Christian religion. Further, many futurist thinkers (who by and large are of highly secular sentiment) also recognize that extension of life has significant implications for human morality. As Marc Geddes explains [Geddes2004]:

Rational people understand that actions have consequences. A life of crime may help a person in the short term, but in the long run it may get you killed or imprisoned. … People are more likely to be moral when they understand they will have to face the consequences of their actions in the future. It follows that the further into the future one plans for, the more moral one’s behavior should become.

In a similar vein, humanitarian Albert Schweitzer based his sense of ethics on a deep reverence for life, a reverence that reverberates even today in an environment very different from the one Schweitzer originally envisioned [Schweitzer1933, pg. 157]:

Affirmation of life is the spiritual act by which man ceases to live unreflectively and begins to devote himself to his life with reverence in order to raise it to its true value. To affirm life is to deepen, to make more inward, and to exalt the will to live. At the same time the man who has become a thinking being feels a compulsion to give to every will-to-live the same reverence for life that he gives to his own. He experiences that other life in his own. He accepts as being good: to preserve life, to promote life, to raise to its highest value life which is capable of development; and as being evil: to destroy life, to injure life, to repress life which is capable of development. This is the absolute, fundamental principle of the moral, and it is a necessity of thought.

Conclusion

In short, although some disagree, the consensus of scientists who have studied mind and consciousness is that there does not appear to be anything fundamental in human intelligence that cannot one day be exhibited by machine intelligence. Those of religious faith who hold out for a fundamental distinction that cannot be bridged, a sort of cognitive-science “proof” of God, are welcome to hold this view, but from all indications this notion is another instance of the “God of the gaps” theological error, wherein one looks for God in whatever remains unknown to science at a particular point in time. Note that these findings do not refute the religious notion of a “soul,” nor do they suggest that humans need not take responsibility for their actions and decisions; they suggest merely that many if not all normal operations of human minds may one day be replicated in machine intelligence. Thus, as with most other aspects of the science-religion discussion, there is fundamentally no basis for conflict, provided that each discipline recognizes its own limitations. For additional discussion, see God of the gaps.

Some additional analysis of this issue, and some additional references, may be found at Computers-think.

References

  1. [Briggs2007] Helen Briggs, “Chimps beat humans in memory test,” BBC News, 3 Dec 2007, available at Online article.
  2. [ConwayMorris2003] Simon Conway Morris, Life’s Solution: Inevitable Humans in a Lonely Universe, Cambridge University Press, Cambridge, UK, 2003.
  3. [Crick1995] Francis Crick, The Astonishing Hypothesis: The Scientific Search for the Soul, Touchstone, New York, 1995.
  4. [deGrey2004] Aubrey de Grey, “The War on Aging,” in Immortality Institute, The Scientific Conquest of Death, Libros en Red Publishers, Buenos Aires, 2004, pg. 29-46.
  5. [Geddes2004] Marc Geddes, “An Introduction to Immortality Morality,” in Immortality Institute, The Scientific Conquest of Death, Libros en Red Publishers, Buenos Aires, 2004, pg. 239-256.
  6. [Grossman2011] Lev Grossman, “2045: The Year Man Becomes Immortal,” Time, 10 Feb 2011, available at Online article.
  7. [Hodges2000] Andrew Hodges, Alan Turing: The Enigma, originally published 1983, republished by Walker and Co., New York, 2000.
  8. [Joy2000] Bill Joy, “The Future Doesn’t Need Us,” Wired, Apr 2000, available at Online article.
  9. [Kurzweil2005] Ray Kurzweil, The Singularity Is Near, Viking Penguin, New York, 2005.
  10. [Macrae1992] Norman Macrae, John Von Neumann: The Scientific Genius Who Pioneered the Modern Computer, Game Theory, Nuclear Deterrence, and Much More, Pantheon, New York, 1992.
  11. [Markoff2011a] John Markoff, “Computer Wins on ‘Jeopardy!’: Trivial, It’s Not,” New York Times, 16 Feb 2011, available at Online article.
  12. [Schweitzer1933] Albert Schweitzer, Out of My Life and Thought: An Autobiography, Felix Meiner Verlag, Leipzig, 1931, English Translation 1933, reprinted by Johns Hopkins University Press, 1998.
  13. [SD2011e] [no author] “Border Collie Comprehends Over 1,000 Object Names as Verbal Referents,” Science Daily, 6 Jan 2011, available at Online article.
  14. [Tipler1994] Frank J. Tipler, The Physics of Immortality: Modern Cosmology, God and the Resurrection of the Dead, Doubleday, New York, 1994.
  15. [Weber1996] Bruce Weber, “In Upset, Computer Beats Chess Champion,” New York Times, 11 Feb 1996, available at Online article.
  16. [Weber1997] Bruce Weber, “Computer Defeats Kasparov, Stunning the Chess Experts,” New York Times, 5 May 1997, available at Online article.
