
Can computers think?

David H. Bailey
1 Jan 2017 (c) 2017

Are human brains different from computers?

It has been widely believed throughout history (and is still believed by many religious-minded people) that the human mind is fundamentally distinct from anything mechanical or otherwise non-living. However, as with many other beliefs of this sort, modern science has uncovered much of the workings of the human mind, and while a complete picture of conscious thought is not yet in hand, many scientists believe that the essential details are understood. Many of these findings were outlined in a 1995 book by Francis Crick, the co-discoverer of the structure of DNA [Crick1995], and more details have been discovered since then.

Along this line, considerable research has analyzed intelligent behavior in animals. In a 2007 study by a Japanese research team, chimpanzees outperformed university students in a task requiring them to remember the locations of numbers on a screen [Briggs2007]. In another remarkable study, researchers at Wofford College taught a border collie to recognize the names of 1,022 objects, and to distinguish between the names of the objects themselves and orders to fetch them. The researchers halted the dog's training after three years, not because the dog could not learn more names but because of time and budget constraints [SD2011e]. Also, the average scaled brain size of various species of dolphins is greater than that of chimpanzees, gorillas and orangutans, our closest primate relatives [ConwayMorris2003, pg. 247]. In short, these studies suggest that human intelligence is not a thing apart, but rather the upper end of a long spectrum of intelligence among species on this planet.

Historical background

One of the more interesting lines of research in this area is "artificial intelligence," a term used rather broadly to describe computer systems that attempt to perform functions similar to those of the human mind. Such applications of computer technology were proposed by some of the earliest figures in the age of computing. American mathematician John von Neumann, for example, began to conceive of the computer as a "thinking machine" in the 1940s [Macrae1992]. Similarly, British mathematician Alan Turing, while helping design electronic computers to break German ciphers during World War II, became fascinated with the notion that a machine could simulate "states of mind." In 1950 he proposed what is now known as the "Turing test": if a computer system can "chat" via text with a human well enough that the human cannot tell whether a computer is on the other side of the conversation, then we would have to conclude that the system has achieved intelligence [Hodges2000, pg. 266, 415].
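
The test itself is easy to state as a procedure. The following sketch, in Python, is a purely hypothetical illustration (the stand-in "players" and canned questions are inventions for this example, not any real chatbot): a judge exchanges text with one unseen partner and must decide whether that partner is the human or the machine.

    import random

    def imitation_game(ask_human, ask_machine, num_rounds=3):
        """Minimal sketch of Turing's 'imitation game' -- a hypothetical
        illustration, not an implementation of any real system.  One unseen
        partner is chosen at random; a judge reading the transcript must
        guess whether that partner was the human or the machine."""
        label, partner = random.choice([("human", ask_human),
                                        ("machine", ask_machine)])
        questions = ["Tell me about your childhood.",
                     "What does a summer rainstorm smell like?",
                     "Please multiply 12345 by 6789 -- but take your time."]
        transcript = [(q, partner(q)) for q in questions[:num_rounds]]
        # A real judge would interrogate further and render a verdict; here we
        # return the transcript plus the hidden label so a reader can play judge.
        return transcript, label

    # Hypothetical stand-ins for the two unseen partners.
    def human_reply(question):
        return input(question + "\n> ")            # a person types the answer

    def machine_reply(question):
        return "That is an interesting question."  # a very weak "chatbot"

    transcript, label = imitation_game(human_reply, machine_reply)
    for q, a in transcript:
        print("Q:", q)
        print("A:", a)
    print("The hidden partner was the", label)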

Early pioneers of computing were convinced that numerous real-world applications of artificial intelligence were just around the corner. In the early 1950s, for instance, it was widely expected that practical computer systems for machine translation would be working within "a year or two." But these early efforts foundered on the reality that emulating functions of the human brain was much more difficult than originally expected. Thus the field of computer science was content to pursue more tractable applications, such as business data processing and scientific computation, areas in which computers have proven to be spectacularly successful.

Applications of computer technology were greatly facilitated by the invention of the transistor in 1947 and the integrated circuit in the late 1950s. Gordon E. Moore, one of the founders of Intel, observed in 1965 that the number of transistors that could be etched onto a single chip of silicon had roughly doubled each year since 1960, and, as far as he could see, this trend would continue for several more years [Moore1965]. Much to the astonishment of everyone in the field, this trend, now known as "Moore's Law," has continued essentially unabated for some 45 years, and there is still no end in sight.
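
The force of that observation is easy to appreciate with a little arithmetic. The sketch below compounds one doubling every two years from a commonly cited early data point (the roughly 2,300 transistors of Intel's 4004 chip in 1971); the figures it prints are illustrative extrapolations, not actual industry data.

    # Toy illustration of exponential doubling in the spirit of Moore's Law.
    # Starting point: roughly 2,300 transistors on Intel's 4004 chip (1971),
    # a commonly cited figure; the 2-year doubling period is a round number.
    start_year, start_count = 1971, 2300
    doubling_period = 2   # years per doubling

    for year in range(start_year, 2012, 10):
        doublings = (year - start_year) / doubling_period
        count = start_count * 2 ** doublings
        print(f"{year}: roughly {count:,.0f} transistors per chip")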

Spurred in part by Moore's Law, within the past 20 years a new generation of researchers in the artificial intelligence field has revisited some of the older applications that proved so troublesome. One breakthrough in this arena was the discovery that schemes based on statistics and probability, technically known as "Bayesian" methods, tend to be significantly more effective at "learning" than the rule-based approaches that had been used for many years. Such methods are also considerably more akin to the experience-based process by which humans think and learn. As one example of the numerous lines of research and development in this area, some rather good computer translation tools are now available -- try Google's online translation tool at http://translate.google.com.
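
To give a flavor of the statistical approach, the sketch below implements a tiny naive Bayes text classifier that "learns" word probabilities from a handful of labeled examples rather than from hand-written rules. The training sentences and categories are invented for the illustration; real translation and learning systems are vastly more sophisticated.

    from collections import Counter, defaultdict
    import math

    # Tiny naive Bayes classifier: probabilities estimated from examples,
    # rather than hand-coded rules.  The training data is invented.
    train = [
        ("the chess opening was brilliant", "chess"),
        ("a pawn sacrifice won the endgame", "chess"),
        ("the translation of this sentence is poor", "language"),
        ("grammar and vocabulary improve with practice", "language"),
    ]

    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in train:
        label_counts[label] += 1
        word_counts[label].update(text.split())

    def classify(text):
        """Pick the label with the highest posterior probability,
        using log probabilities and add-one (Laplace) smoothing."""
        vocab = {w for c in word_counts.values() for w in c}
        best_label, best_score = None, -math.inf
        for label in label_counts:
            score = math.log(label_counts[label] / sum(label_counts.values()))
            total = sum(word_counts[label].values())
            for w in text.split():
                score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    print(classify("the endgame sacrifice"))   # expected output: "chess"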

IBM's Deep Blue System

This recent wave of progress in artificial intelligence was brought to the public's attention in 1996, when an IBM computer system known as "Deep Blue" defeated Garry Kasparov, the reigning world chess champion, in game one of a six-game match. After the defeat, Kasparov quickly left the stage and was described as "devastated" [Weber1996]. The basic search strategy employed by Deep Blue had been known for some time, although IBM's team had made some significant enhancements. However, the principal factor in the Deep Blue achievement was simply the application of enormous computer power -- the specially designed Deep Blue system (a highly parallel supercomputer) analyzed over 100 million potential moves per second, and thus was able to "see" ten or more moves ahead. Kasparov went on to win the 1996 match, but in a 1997 rematch Deep Blue won 3.5 games to 2.5, decisively establishing computer supremacy in tournament chess [Weber1997].
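
The brute-force search behind that kind of play is, at its core, the classic minimax recursion: examine every legal move, recurse to a fixed depth, and back up the best achievable score. The sketch below applies plain minimax to a toy take-away game standing in for chess; it is a generic illustration, not Deep Blue's program, which added alpha-beta pruning, elaborate hand-tuned evaluation functions and custom hardware.

    import math

    def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
        """Plain fixed-depth minimax: try every legal move, recurse, and back
        up the best score for the side to move.  A generic sketch of the
        brute-force idea only -- not Deep Blue's actual program."""
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state, maximizing), None
        best_value, best_move = (-math.inf if maximizing else math.inf), None
        for move in moves:
            value, _ = minimax(apply_move(state, move), depth - 1,
                               not maximizing, legal_moves, apply_move, evaluate)
            if (maximizing and value > best_value) or \
               (not maximizing and value < best_value):
                best_value, best_move = value, move
        return best_value, best_move

    # A toy game standing in for chess: a pile of stones, each player removes
    # one or two, and whoever takes the last stone wins.
    def legal(pile):
        return [m for m in (1, 2) if m <= pile]

    def take(pile, m):
        return pile - m

    def score(pile, maximizing):
        # An empty pile means the player *to move* has already lost;
        # any other position reached at the depth limit is scored neutral.
        if pile > 0:
            return 0
        return -1 if maximizing else +1

    # From a pile of 7 the first player wins by taking one stone: prints (1, 1).
    print(minimax(7, depth=10, maximizing=True,
                  legal_moves=legal, apply_move=take, evaluate=score))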

In spite of its achievement, Deep Blue was a fairly elementary form of artificial intelligence. It most certainly was not attempting to play chess in the same intuitive way that a human does. For example, Kasparov typically evaluates roughly three moves per second in tournament play, compared with over 100 million per second for Deep Blue. In addition, the game of chess can be very compactly described. Thus, in the minds of some observers, the Deep Blue achievement did not constitute a major advance in artificial intelligence.

IBM's Watson System

In 2004, an IBM research executive, while having dinner with some coworkers, noticed that everyone in the restaurant had started watching a telecast of the American quiz show Jeopardy!, on which Ken Jennings was in the middle of a long winning streak. After discussions with IBM scientists and executives, IBM embarked on a plan to develop a natural-language question-answering system powerful enough to compete with top human contestants on Jeopardy! The project proved every bit as challenging as it was first thought to be. According to some reports, IBM spent roughly $1 billion on the project, which was dubbed "Watson" after Thomas J. Watson, the founder of IBM.

In 2010, after the IBM team felt that enough progress had been made, IBM executives contacted executives of the Jeopardy! show, and an agreement was reached to stage a tournament. On the human side, Jeopardy! recruited legendary champions Ken Jennings, who set the all-time record of 74 consecutive wins, and Brad Rutter, who until the Watson match was undefeated and the show's biggest money winner. The match was conducted on 14-16 February 2011 at IBM's Thomas J. Watson Research Center. Questions were fed to Watson electronically as soon as they were displayed to the human contestants. When Watson was confident of an answer, it pressed the signaling button and, if it was first to ring in, announced its response in a computer-synthesized voice.
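
That "ring in only when confident" behavior amounts to a simple threshold rule over the confidence scores a question-answering engine attaches to its candidate responses. The sketch below is purely illustrative -- the candidate answers, scores and threshold are invented for the example and have nothing to do with IBM's actual DeepQA software.

    def decide_to_buzz(candidates, confidence_threshold=0.5):
        """Illustrative buzz-in rule for a Jeopardy!-style agent: rank the
        candidate answers by estimated confidence and ring in only when the
        best one clears a threshold.  Invented numbers; not IBM's DeepQA."""
        best_answer, best_confidence = max(candidates.items(), key=lambda kv: kv[1])
        if best_confidence >= confidence_threshold:
            return True, best_answer
        return False, None

    # Hypothetical candidate lists for two clues (made-up confidence scores).
    print(decide_to_buzz({"Bram Stoker": 0.93, "Charles Dickens": 0.04}))  # buzzes in
    print(decide_to_buzz({"Toronto": 0.14, "Chicago": 0.11}))              # stays silent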

On the first day, Watson opened impressively, but in the end was tied with Rutter for the lead. But on the second day Watson performed extremely well -- it rang in first on 25 of the 30 questions, and was correct on 24 of the 25. It also did very well on the third day, although not as decisively as the second day. Watson's three-day total "winnings" were $77,147, far ahead of Jennings at $24,000 and Rutter at $21,600, and so Watson was declared the victor (IBM's actual winnings of US $1,000,000 were split between two charities). In his memorable inscription conceding defeat to Watson at the end of the Jeopardy! match, Ken Jennings wrote on his screen "I for one welcome our new computer overlords" [Markoff2011a].

Watson appeared to do very well in fairly traditional fact-based categories. For example, on the last day's Final Jeopardy!, in the category "19th century novelists," the clue was "William Wilkinson's 'An Account of the Principalities of Wallachia and Moldavia' inspired this author's most famous novel." Watson generated the correct response "Who is Bram Stoker?" (Stoker is the author of "Dracula"). On the minus side, Watson made several blunders. For example, on the first day Watson incorrectly responded "What is finis?" rather than "What is a terminus?" to the clue "From the Latin for end, this is where trains can also originate." Also, on the second day, in the category "US Cities," the clue was "Its largest airport was named for a World War II hero; its second largest, for a World War II battle." Both Jennings and Rutter correctly wrote "What is Chicago?" on their tablets, but Watson stumbled with "What is Toronto?????" -- the string of question marks signaling that Watson itself considered the response unlikely to be correct.

Future prospects

The real significance of the Watson project was IBM's demonstration that a computer system can rather well "understand" and respond to natural-language queries, which has long been a major obstacle in real-world applications of artificial intelligence. Note that even Google's spectacularly successful search engine is not yet (as of 2011) able to understand sentences or questions -- all it can do is return links to webpages that contain some of the words and phrases entered. Computers have not yet passed the "Turing test" mentioned above, wherein a human exchanging messages with an unseen partner (in this case a computer) cannot tell that the partner is not human, but they are getting close.

So where is all this heading? A 2011 Time article featured an interview with futurist Ray Kurzweil [Grossman2011]. Kurzweil is a leading figure in the "Singularity" movement, a loosely coupled group of scientists and technologists who foresee an era, which they predict will occur by roughly 2045, when machine intelligence will far transcend human intelligence. Such future intelligent systems will then design even more powerful technology, resulting in a dizzying advance that we can only dimly foresee at the present time. Kurzweil outlines this vision in his recent book The Singularity Is Near [Kurzweil2005].

Many of these scientists and technologists believe that we are already on the cusp of this transition. Consider for a moment the enormous advances that have occurred just since the year 2000:

  1. Five billion mobile phones are now in service around the world, a figure that is approaching one per person on the planet.
  2. Many in North America and Europe carry smartphones that are more powerful and capacious than the world's most powerful supercomputers of 20 years ago. What's more, these devices can instantly retrieve (via the Internet) far more information than is available in any local public library, not to mention music, movies and much, much more.
  3. Machine translation is now enormously better than it was just a decade ago. For example, Google provides (for free!) an iPhone or Android translation "app" that enables a person to type or speak a phrase in one language and immediately see its translation into any of numerous other languages.
  4. Social networking has exploded in popularity in the past few years. Nearly 600 million persons (roughly one in ten human beings on the planet) now have Facebook accounts. Twitter and Facebook pervade public and private daily life.
  5. Many young people spend over seven hours per day connected to one or more electronic media (often two or more simultaneously) [Kaiser2010].
  6. These powerful waves of change even afflict high-tech companies. Sun Microsystems, a pioneer of the workstation era, was recently acquired by Oracle after a prolonged decline, and Silicon Graphics, another technology leader of the 1980s, is now a mere shadow of its former self. Of all the early hardware vendors in the personal computer revolution, only Apple and IBM still flourish.

Futurists such as Kurzweil certainly have their skeptics and detractors. Many question Kurzweil's long-term extensions of Moore's Law. Others note that some of the earlier predictions of the Singularity movement have not yet materialized. Still others, such as Bill Joy, acknowledge that many of these predictions will materialize, but are very concerned that humans could be relegated to minor players in the future, or that out-of-control robots or nanotech-produced "grey goo" could destroy life on our fragile planet [Joy2000].

Many others (including the present author) are concerned that these writers are soft-pedaling enormous societal, legal, financial and ethical challenges, some of which we are just beginning to see clearly. One instance of this is the increasingly strident social backlash against technology and science itself, as evidenced in part by the popularity of the creationist and intelligent design movements. Along this line, in the San Francisco Bay Area (hardly a bastion of conservatism) a growing backlash has arisen against Pacific Gas and Electric's "smart meters," which send hourly usage data to a central server via wireless cell technology. Objections range from claims that the meters are "inaccurate" (in spite of several extensive tests that have verified their accuracy) to claims that they endanger health by transmitting a few bytes of data once an hour (in spite of the fact that many of these critics own cell phones, which send and receive thousands of times more data in every telephone conversation or Internet session) [Barringer2011].

Nonetheless, the basic conclusions of the Singularity community appear to be on target: Moore's Law is likely to continue for at least another 20 years or so. Progress in a wide range of other technologies is here to stay (in part because of Moore's Law). Scientific progress is here to stay (again, in part because of Moore's Law-based advances in instrumentation and computer simulation tools). And all this is leading directly and inexorably to real-world artificial intelligence within 20-40 years. Whether "we" merge with "them," or "they" advance along with "us" is an interesting question, but either way, the future is coming. As the mathematician I.J. Good predicted back in 1965 [Grossman2011]:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

Do these developments spell the end of humanity as we know it? Probably not, any more than any other technological advancement. Instead, we as a world society have adopted these technologies to relieve toil and suffering, advance our standard of living (to the extent possible while reducing our footprint on the earth's environment), and enable larger numbers of people to focus on what is truly human. Indeed, as computer scientist John Seely Brown has explained, machines that are facile at answering questions only serve to obscure what remains fundamentally human: "The essence of being human involves asking questions, not answering them" [Markoff2011b].

Religious implications

In the wake of these developments, many have noted that advances in technology are heading directly to a form of "immortality," in two different ways. First of all, medical technology is in the midst of a revolution on several fronts. These developments range from advanced, high-tech prosthetics for the handicapped to some remarkable cancer therapies currently being developed [Kurzweil2005]. Of even greater interest is research into the fundamental causes of aging. For instance, researchers recently found that by genetically engineering mice to "overexpress" a certain gene, they were able to extend the lifespan of these mice by 20 to 30 percent [Kurosu2005]. Other researchers have found that resveratrol, a substance found in Concord grapes, blueberries and red wines, retards the aging process in rodents and is widely believed to be a factor in differences in mortality rates among some human populations [Armour2009]. In general, scientists have identified seven broad categories of molecular and cellular differences between older and younger people, and are working on ways to retard or stop each factor [deGrey2004].

Another line of thinking in this direction follows the recent developments in artificial intelligence mentioned above. Some in the Singularity community, for instance, believe that the time will come when one's brain can be scanned with such resolution that the full contents of one's mind can be "uploaded" into a super-powerful computer system, after which the "person" will achieve a sort of immortality [Kurzweil2005]. Physicist Frank Tipler is even more expansive, predicting that every human who has ever lived will ultimately be "resurrected" in an information-theoretic sense [Tipler1994]. Even if one discounts the boundless optimism of such writers, the disagreement is generally not over whether such changes will transpire but over when, and over whether mankind can muster the wisdom to carefully control and direct these technologies for good rather than evil.

In any event, it is curious to note that at the pinnacle of modern science and technology, mankind has identified the extension of life and, even more boldly, the conquering of death as top future priorities, goals which are also the pinnacles of Judeo-Christian religion. Further, many futurist thinkers (who by and large are of highly secular sentiment) also recognize that extension of life has significant implications for human morality. As Marc Geddes explains [Geddes2004]:

Rational people understand that actions have consequences. A life of crime may help a person in the short term, but in the long run it may get you killed or imprisoned. ... People are more likely to be moral when they understand they will have to face the consequences of their actions in the future. It follows that the further into the future one plans for, the more moral one's behavior should become.

In a similar vein, the humanitarian Albert Schweitzer based his ethics on a deep reverence for life, a reverence that reverberates even today in a very different environment from the one Schweitzer originally envisioned [Schweitzer1933, pg. 157]:

Affirmation of life is the spiritual act by which man ceases to live unreflectively and begins to devote himself to his life with reverence in order to raise it to its true value. To affirm life is to deepen, to make more inward, and to exalt the will to live. At the same time the man who has become a thinking being feels a compulsion to give to every will-to-live the same reverence for life that he gives to his own. He experiences that other life in his own. He accepts as being good: to preserve life, to promote life, to raise to its highest value life which is capable of development; and as being evil: to destroy life, to injure life, to repress life which is capable of development. This is the absolute, fundamental principle of the moral, and it is a necessity of thought.

Conclusion

In short, although some disagree, the consensus of scientists who have studied mind and consciousness is that there does not appear to be anything fundamental in human intelligence that cannot one day be exhibited by machine intelligence. Those of religious faith who hold out for a fundamental distinction that cannot be bridged -- a cognitive science "proof" of God -- are welcome to hold this view, but from all indications this notion is another instance of the "God of the gaps" theological error, wherein one looks for God in the recesses of what remains unknown to science at one particular point in time. Note that these findings do not refute the religious notion of a "soul," nor do they suggest that humans need not take responsibility for their actions and decisions; they merely indicate that many if not all normal operations of human minds may one day be replicated in machine intelligence. Thus, as with most other aspects of the science-religion discussion, there is fundamentally no basis for conflict, provided that each discipline recognizes its own limitations. For additional discussion, see God of the gaps.

References

[See Bibliography].