Humanity in the Nuclear and AI Ages
07-16-2023
Roger Berkowitz
To think about how to respond to the challenge AI poses to humanity, David Nirenberg turns back and asks how the pioneers of the nuclear bomb thought about the future of man. J. Robert Oppenheimer helped bequeath humanity not only the nuclear bomb, but also the means to think about how to live as human beings in the nuclear age. After his work on the bomb as director of the Los Alamos Laboratory, Oppenheimer served for nearly 20 years as director of the Institute for Advanced Study in Princeton, N.J., home to some of the world's leading scientists, including Albert Einstein. Nirenberg writes:
Oppenheimer, Einstein, von Neumann and other Institute faculty channeled much of their effort toward what AI researchers today call the “alignment” problem: how to make sure our discoveries serve us instead of destroying us. Their approaches to this increasingly pressing problem remain instructive.
Von Neumann focused on the powers of mathematical logic, taking insights from games of strategy and applying them to economics and war planning. Today, descendants of his "game theory" running on von Neumann computing architecture are applied not only to our nuclear strategy, but also to many parts of our political, economic and social lives. This is one approach to alignment: humanity survives technology through more technology, and it is the researcher's role to maximize progress.
Oppenheimer agreed that technological progress was critical, and provided von Neumann with such extraordinary support that other faculty complained. But he also thought that this approach was not enough. “What are we to make of a civilization,” he asked in 1959, a few years after von Neumann’s death, “which has always regarded ethics as an essential part of human life, and…which has not been able to talk about the prospect of killing almost everybody, except in prudential and game-theoretical terms?”
He championed another approach. In their biography "American Prometheus," which inspired Christopher Nolan's film "Oppenheimer," Martin Sherwin and Kai Bird document Oppenheimer's conviction that "the safety" of a nation or the world "cannot lie wholly or even primarily in its scientific or technical prowess." If humanity wants to survive technology, he believed, it needs to pay attention not only to technology but also to ethics, religions, values, forms of political and social organization, and even feelings and emotions.
Hence Oppenheimer set out to make the Institute for Advanced Study a place for thinking about humanistic subjects like Russian culture, medieval history, or ancient philosophy, as well as about mathematics and the theory of the atom. He hired scholars like George Kennan, the diplomat who designed the Cold War policy of Soviet “containment”; Harold Cherniss, whose work on the philosophies of Plato and Aristotle influenced many Institute colleagues; and the mathematical physicist Freeman Dyson. Traces of their conversations and collaborations are preserved not only in their letters and biographies, but also in their research, their policy recommendations, and in their ceaseless efforts to help the public understand the dangers and opportunities technology offers the world.
Today, we need to be reminded that no alignment of technology with humanity can be achieved through technology alone. Artificial intelligence offers an obvious example. Many people are worried that the application of complex and non-transparent machine learning algorithms to human decision-making—in areas like criminal justice, hiring and health care—will invisibly entrench existing discrimination and inequality. Computer scientists can address this problem, and many are currently working on algorithms to increase “fairness.” But to design a “fairness algorithm” we need to know what fairness is. Fairness is not a mathematical constant or even a variable. It is a human value, meaning that there are many often competing and even contradictory visions of it on offer in our societies.
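Nirenberg's point about competing visions of fairness can be made concrete. The sketch below is a toy illustration, not drawn from his essay: the group names, the numbers, and the choice of metrics are all assumptions for demonstration. It applies two standard formal definitions of fairness, demographic parity (equal acceptance rates across groups) and equal opportunity (equal acceptance rates among the qualified), to the same hypothetical classifier. When the two groups' base rates differ, even a perfectly accurate classifier satisfies one definition while violating the other:

```python
# Toy illustration of competing fairness definitions.
# All names and numbers below are invented for demonstration.

def positive_rate(preds):
    """Share of individuals predicted 'yes' (what demographic parity compares)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of truly qualified individuals predicted 'yes' (what equal opportunity compares)."""
    preds_for_qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(preds_for_qualified) / len(preds_for_qualified)

# Hypothetical ground truth for two groups, A and B (1 = qualified).
labels_a = [1, 1, 1, 0, 0, 0, 0, 0]   # 3 of 8 qualified in group A
labels_b = [1, 1, 1, 1, 1, 1, 0, 0]   # 6 of 8 qualified in group B

# A classifier with perfect accuracy: it predicts 'yes' exactly for the qualified.
preds_a = list(labels_a)
preds_b = list(labels_b)

# Equal opportunity holds: qualified members of both groups are accepted at rate 1.0.
print(true_positive_rate(preds_a, labels_a))  # 1.0
print(true_positive_rate(preds_b, labels_b))  # 1.0

# Demographic parity fails: overall acceptance rates differ (0.375 vs. 0.75)
# simply because base rates differ. Forcing the rates to match would require
# rejecting qualified people in one group or accepting unqualified people in the other.
print(positive_rate(preds_a))  # 0.375
print(positive_rate(preds_b))  # 0.75
```

The conflict is not an accident of these particular numbers: results in the algorithmic-fairness literature show that such criteria generally cannot all be satisfied at once when base rates differ, which is exactly why choosing among them is a human, ethical decision rather than a purely technical one.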