Back in 1958, writers for The Nation were already grappling with the prospect of “artificial brains”, especially the prospect of their falling into the hands of the military.
The concept of artificial intelligence, if not the exact phrase, first appeared in The Nation in 1958, in a review of The Computer and the Brain, by the Hungarian-born mathematician and physicist John von Neumann. Published a year after its author's death, the book outlined a then-novel analogy between the workings of early computers and the human mind.
The Nation's reviewer, Max Black, a professor of philosophy at Cornell, praised von Neumann's early formulation of game theory as “one of the intellectual monuments of our time.” Had he lived longer, Black lamented, the scientist “might have developed an even more important theory of computing machines. Such 'artificial brains' may ultimately change our culture, but our theoretical understanding of their basic principles still remains relatively crude and unsystematic.”
Black did not mention von Neumann's connections to the military-industrial complex (a term coined three years later by President Dwight D. Eisenhower). An ardent anti-communist, von Neumann played a crucial role in the Manhattan Project and later advocated the development of intercontinental ballistic missiles large enough to carry hydrogen bombs. If von Neumann had lived longer, he would almost certainly have focused his own supercomputer-like mind on figuring out how best to use the “artificial brain” for military purposes.
That was The Nation's concern the next time the dangers of artificial intelligence came up in its pages. In a 1983 article, “Review of Emerging High Technologies,” Stan Norris, a researcher at the Center for Defense Information, described the advanced tools being developed to give the United States an advantage over the Soviet Union. The CIA was working to get computers to “process information and formulate hypotheses based on it.” Other projects aimed to create robots that could replace humans on the “battlefields of the twenty-first century.”
“As these examples show,” Norris concluded, “new technologies continue to create new forms of terror. The technological arms race is gaining momentum, increasing the risk of war through miscalculation and reducing, rather than enhancing, national security. Weapons have outrun politics. The search for a measure of common security lies not in the laboratory but at the negotiating table.”
Two years later, a graduate student named Paul N. Edwards detailed the Defense Advanced Research Projects Agency's efforts to, in effect, “transfer the key element of the nuclear trigger into the ghostly hands of the machine.” This was both foolish and dangerous, Edwards argued:
“The idea of an artificial intelligence more logical and reliable than our own is seductive, especially if we believe it can protect us from nuclear Armageddon. Unfortunately, it cannot do this. Computer systems, fragile and programmed by people who can never foresee every conceivable situation, will always be unreliable nuclear guardians. The solution lies, as it always has, in reducing the danger of war by putting aside weapons and expanding opportunities for peaceful exchanges.”
As Michael Klare shows elsewhere in this issue, the prospect of using an “artificial brain” to replace human judgment and responsibility remains tantalizing, creating new forms of terror and reducing, rather than increasing, national security.