Does AI fundamentally support autocratic government? This is the question Martin Beraja, Andrew Kao, David Yang, and Noam Yuchtman ask in their paper AI-tocracy. They write:
Autocratic institutions have long been viewed as fundamentally misaligned with frontier innovation: autocrats’ political and economic rents are eroded by technological change and economic growth; and incentives to innovate are stifled by threats and acts of expropriation under autocracy.
Recent scholarship, however, has suggested that artificial intelligence (AI) technology—considered to be the basis for a “fourth industrial revolution” (Schwab 2017)—may exhibit characteristics that allow an alignment between frontier innovation and autocracy. As a technology of prediction (Agrawal, Gans, and Goldfarb 2019), AI may be particularly effective at enhancing autocrats’ social and political control (Zuboff 2019; Acemoglu 2021; Tirole 2021). Furthermore, government purchases of AI may generate broad innovation spillovers, such as those observed among dual-use technologies (Moretti, Steinwender, and Van Reenen forthcoming). More specific to AI, because government data are inputs into developing AI prediction algorithms and can be shared across multiple purposes (Beraja, Yang, and Yuchtman forthcoming), autocracies’ collection and processing of data for purposes of political control may directly stimulate AI innovation for the commercial market, far beyond government applications. These arguments imply the possibility of a mutually reinforcing relationship in which governments procure AI to achieve political control, and this procurement stimulates further innovation in the technology.