Motivated by fears of the existential risks posed by advanced AI falling into the hands of authoritarian regimes, longtermists have for years been quietly pressing the White House to pursue a more aggressive policy toward China.
President Joe Biden meeting with business leaders at the White House on March 9, 2022. (Doug Mills / the New York Times via Getty Images)
The philosophy of “longtermism” is having a moment. Once a fringe ideology among Silicon Valley’s doomsayers, it has come to receive widespread and favorable media coverage, motivate heavily funded runs for congressional office, and attract many millions in philanthropic donations and investments. And, with the surge of interest in GPT-4 and other AI programs, formerly niche longtermist opinions and worries about a future with advanced artificial intelligence have started to gain mainstream adherents, even among some socialists.
Of course, not all the treatment has been positive. The Sam Bankman-Fried scandal last year shone a spotlight on the darker side of the movement by allowing journalists to point to FTX and Alameda Research’s deep entanglement with longtermist charities. And its biggest talking points have been met with criticism and ridicule from many onlookers. But, for good or for ill, it is no longer the obscure intellectual brand of a few tech futurists.
Most media attention so far, however, has focused on the philosophy’s role in the tech industry and philanthropic world. This coverage omits a crucial facet: longtermism’s political interventions and ambitions, beyond those immediately touching on its charitable and scientific projects. Perhaps most strikingly, public documents reveal evidence that longtermists were key players in President Joe Biden’s choice last October to place heavy controls on semiconductor exports, a policy many commentators have been willing to ascribe to more conventional protectionist and anti-China elements in Washington. With this kind of political weight and determination, the movement deserves greater public scrutiny going forward.
Existential Risks
Longtermism emerged as a school of thought within the wider “effective altruism” (EA) movement, which attempts to design and assess charities and charitable contributions with an eye to maximum positive impact per dollar. While the most publicized and best-received EA work has focused on near-term, immediately pressing cause areas like global poverty and disease prevention, there has long been a controversial contingent within the community more interested in futurist concerns.
Much of this longtermist work in EA has been on “existential risks,” potential cataclysms that (allegedly) threaten the existence of humanity as a whole. These “x-risks,” as they are called, run the gamut from the sci-fi-esque (giant meteor impacts) to the prescient (pandemic readiness and prevention). Among the theoretical doomsdays studied by longtermists, however, one topic has consistently come to the fore: AI safety.
Advocates of AI safety as a cause area generally fear that, one way or another, an advanced artificial intelligence could pose a risk to human survival. Some argue that a superintelligent AI with aims even slightly at cross-purposes with ours would be willing and able to quickly eliminate us to achieve its goals.
Others suggest, more conservatively, that AI could prove dangerous in much the same way as nuclear weapons: easy to accidentally misuse, hard to predict in detail, and with far-reaching consequences if handled ineptly. But they are united in their belief that bungled advances in artificial intelligence could spell disaster for humankind, potentially to the point of extinction.
With stakes this high, it is only natural that longtermist proponents of AI safety would seek influence in national (and international) politics. Future Forward, a pro-Democratic super PAC with an explicitly longtermist outlook, was among the most munificent donors in the 2020 presidential election, and in 2022, Carrick Flynn made a congressional bid openly running on longtermist issues. Electoralism has come for the Cassandras of AI doom.
Going purely by its (rather widely covered) electoral contributions, however, one might reasonably judge the movement a side attraction at best in the broader American political landscape. While Future Forward’s 2020 donations were extensive, few pundits have suggested that they alone turned the tide against Donald Trump, and Flynn’s campaign two years later ended in overwhelming defeat.
It would be a mistake, however, to assess the political reach of longtermism purely by its electoral influence. Beyond campaigns, it has, with little fanfare, made its way into the upper echelons of think tanks, congressional commissions, and bureaucratic appointments, where it has been vocal on issues of the utmost national and global significance. Perhaps the most notable instance to date has been US policy on microchip infrastructure.
Long-Term Planning for a Cold War
The Biden administration’s decision in October of last year to impose drastic export controls on semiconductors stands as one of its most substantial policy changes so far. As Jacobin’s Branko Marcetic wrote at the time, the controls were likely the first shot in a new economic Cold War between the United States and China, one in which both superpowers (not to mention the rest of the world) will feel the hurt for years or decades, if not permanently.
Already the policy has devastated critical supply chains, upset markets, and stoked international tensions, all with the promise of more to come. Semiconductors, the “oil of the 21st century,” are an essential component in a huge range of computing technologies, and the disruption emerging from the export controls will inevitably affect the course of their future production and innovation, for the worse.
The idea behind the policy, however, did not emerge from the ether. Three years before the current administration issued the rule, Congress was already receiving extensive testimony in favor of something much like it. The lengthy 2019 report from the National Security Commission on Artificial Intelligence states unambiguously that the “United States should commit to a strategy to stay at least two generations ahead of China in state-of-the-art microelectronics” and
modernize export controls and foreign investment screening to better protect critical dual-use technologies — including by building regulatory capacity and fully implementing recent legislative reforms, implementing coordinated export controls on advanced semiconductor manufacturing equipment with allies, and expanding disclosure requirements for investors from competitor nations.
The commission report makes repeated references to the risks posed by AI development in “authoritarian” regimes like China’s, predicting dire consequences as compared with similar research and development carried out under the auspices of liberal democracy. (Its hand-wringing in particular about AI-powered, authoritarian Chinese surveillance is ironic, as it also ominously exhorts, “The [US] Intelligence Community (IC) should adopt and integrate AI-enabled capabilities across all aspects of its work, from collection to analysis.”)
These emphases on the dangers of morally misinformed AI are no accident. The commission head was Eric Schmidt, tech billionaire and contributor to Future Forward, whose philanthropic venture Schmidt Futures has both deep ties with the longtermist community and a record of shady influence over the White House on science policy. Schmidt himself has voiced measured concern about AI safety, albeit tinged with optimism, opining that “doomsday scenarios” of AI run amok deserve “thoughtful consideration.” He has also coauthored a book on the future risks of AI with no less an expert on morally unchecked threats to human life than notorious war criminal Henry Kissinger.
Also of note is commission member Jason Matheny, CEO of the RAND Corporation. Matheny is an alum of the longtermist Future of Humanity Institute (FHI) at the University of Oxford who has claimed that existential risk and machine intelligence are more dangerous than any historical pandemic and constitute “a neglected topic in both the scientific and governmental communities, but it’s hard to think of a topic more important than human survival.” Nor was this commission report his last testimony to Congress on the subject: in September 2020, he individually spoke before the House Budget Committee, urging “multilateral export controls on the semiconductor manufacturing equipment needed to produce advanced chips,” the better to preserve American dominance in AI.
Congressional testimony and his position at the RAND Corporation, moreover, were not Matheny’s only channels for influencing US policy on the matter. In 2021 and 2022, he served in the White House’s Office of Science and Technology Policy (OSTP) as deputy assistant to the president for technology and national security and as deputy director for national security (the head of the OSTP national security division). Given his seniority in an office to which Biden has granted “unprecedented access and power,” advice on policies like the October export controls would have fallen squarely within his professional mandate.
Just as importantly, in January 2019, he founded the Center for Security and Emerging Technology (CSET) at Georgetown University, a national security think tank that friendly analysts have described as having longtermism “baked into their viewpoint.” CSET has, since its founding, made AI safety a premier area of concern. Nor has it been shy about tying the matter to foreign policy, particularly the use of semiconductor export controls to maintain US advantage in AI. Karson Elmgren, a CSET research analyst and former OpenAI employee specializing in AI safety, published a research paper in June of last year once again counseling the adoption of such controls, and covered the October rule favorably in the first issue of the new EA magazine Asterisk.
The most significant advocates of the restrictions (aside from Matheny) to emerge from CSET, however, have been Saif Khan and Kevin Wolf. The former is an alum of the Center and, since April 2021, the director for technology and national security at the White House National Security Council. The latter, a senior fellow at CSET since February 2022, has a long history of service in and connections with US export policy: he served as assistant secretary of commerce for export administration from 2010 to 2017 (among other work in the field, both private and public), and his extensive familiarity with the US export regulation system would be valuable to anyone aspiring to influence policy on the subject. Both would, before and after October, champion the semiconductor controls.
At CSET, Khan published repeatedly on the topic, time and again calling for the United States to implement semiconductor export controls to curb Chinese progress on AI. In March 2021, he testified before the Senate, arguing that the United States must impose such controls “to ensure that democracies lead in advanced chips and that they are used for good.” (Paradoxically, in the same breath the address calls on the United States to both “identify opportunities to collaborate with competitors, including China, to build confidence and avoid races to the bottom” and to “tightly control exports of American technology to human rights abusers,” such as… China.)
Among Khan’s coauthors was the aforementioned former congressional hopeful and longtermist Carrick Flynn, previously assistant director of the Center for the Governance of AI at FHI. Flynn himself authored a CSET issue brief, “Recommendations on Export Controls for Artificial Intelligence,” in February 2020. The brief, unsurprisingly, argues for tightened semiconductor export regulation in much the same vein as Khan and Matheny.
This February, Wolf delivered a congressional address of his own on “Advancing National Security and Foreign Policy Through Sanctions, Export Controls, and Other Economic Tools,” praising the October controls and urging further policy in the same vein. In it, he claims knowledge of the specific motivations of the controls’ writers:
BIS did not rely on ECRA’s emerging and foundational technology provisions when publishing this rule so that it would not need to seek public comments before publishing it.
These motivations also clearly included exactly the sorts of AI concerns Matheny, Khan, Flynn, and other longtermists had long raised in this connection. In its background summary, the text of one rule explicitly links the controls with hopes of retarding China’s AI development. Using language that could easily have been ripped from a CSET paper on the topic, the summary warns that “‘supercomputers’ are being used by the PRC to improve calculations in weapons design and testing including for WMD, such as nuclear weapons, hypersonics and other advanced missile systems, and to analyze battlefield effects,” as well as to bolster surveillance of its citizens.
Biden’s export controls have, incidentally, failed spectacularly at achieving their stated ambitions. Despite the damage they have wrought on global supply chains, China’s AI research has managed to continue apace. Far from securing US dominance in the area, the controls have accelerated the fracturing of the international AI research community into independent and competing regional sectors.
Longtermists, in short, have since at least 2019 exerted a strong influence over what would become the Biden White House’s October 2022 semiconductor export rules. If the policy is not itself the direct product of institutional longtermists, it at the very least bears the stamp of their enthusiastic approval and close monitoring.
Just as it would be a mistake to restrict interest in longtermism’s political ambitions exclusively to election campaigns, it would be shortsighted to treat its work on semiconductor infrastructure as a one-off incident. Khan and Matheny, among others, remain in positions of considerable influence, and have demonstrated a commitment to bringing longtermist concerns to bear on matters of high policy. The policy sophistication, political reach, and fresh-faced enthusiasm on display in its semiconductor export maneuvering should earn the AI doomsday lobby its fair share of critical attention in the years to come.