
The Tyranny of Algorithms



Computerized decision-making threatens to bring about a world like that of Vonnegut's "Player Piano." For most, such an outcome is plainly undesirable:
As we think through the role that algorithms should play in our lives—and the various feats of automation that they enable—two questions are particularly important. First, is a given instance of automation feasible? Second, is it desirable? Computer scientists have been asking both questions for decades in the context of artificial intelligence.

Many early pioneers reached gloomy conclusions. In the mid-1970s, Joseph Weizenbaum of MIT railed against depriving humans of their capacity to choose, even if computers could decide everything for us. For Weizenbaum, choosing and deciding were distinct activities, and no algorithm should be allowed to blur the difference. A decade later, Stanford's Terry Winograd attacked the philosophical foundations of artificial intelligence, arguing that everyday human behavior was too complex and too spontaneous to be captured in rules. The philosopher Hubert Dreyfus had said as much in the 1960s, when he compared artificial intelligence to alchemy. But Mr. Winograd's critique, coming from a respected computer scientist, was particularly devastating.