What is Politicians on AI Safety (PAIS)?
Artificial intelligence and technology policy is already one of the most important issues in Washington, and its importance is growing rapidly as technological progress accelerates. Voters, civil society groups, and other stakeholders need accurate information about the risks AI poses to their jobs, security, and way of life, as well as what actions their elected officials are (or are not) taking to mitigate those risks. Yet it is difficult to keep track of candidates’ stances on AI: until now there has been no easy way for voters to compare candidates’ AI policies, and many candidates say little to nothing about AI at all. PAIS aims to solve this problem and bring transparency by maintaining an easily accessible database that tracks political candidates’ quotes and votes on artificial intelligence and its risks.
PAIS is divided into three sections, based on the main categories of AI risk found in AI safety literature:
AI Ethics
Sometimes called “mundane risks,” these are harms that AI already poses to individuals or groups in society, depending on the values built into the systems and how they are deployed. The word “mundane” does not imply that these risks are unimportant, only that they involve different policy levers than the more extreme categories of risk. Examples of ethical risks include automation-driven unemployment, AI-generated misinformation, nonconsensual deepfakes, and algorithmic discrimination.
Geopolitical Risks
These are risks that would cause America to lose power or security to geopolitical rivals. The most significant geopolitical risk is the fear that China will outpace the West in AI development and thereby threaten the American-led international order. Other geopolitical risks include the use of AI by terrorist groups and the fear that AI will accelerate a worldwide rise in authoritarianism.
Existential Risks
These are risks to the very existence of humanity, involving worst-case scenarios like AI-induced nuclear war or total human extinction. Many leading AI researchers believe that if we invent superintelligent machines before learning how to align them with human welfare, the results could be catastrophic.
Any time a candidate or elected official makes a statement about AI safety, it will be listed on their page under the relevant category of AI risk. Any time a candidate or elected official releases a report about AI safety – like the Senate’s Bipartisan AI Roadmap – it will be listed on their page. Any time a candidate or elected official issues a policy directive on AI safety – like Joe Biden’s Executive Order on AI – it will be listed on their page. If a candidate has not articulated a clear position on AI safety, that will also be noted on their page.
PAIS focuses on AI safety and risk because we believe AI risk management is an under-examined field. We do not mean to imply that AI’s impacts will be entirely or mostly negative, nor that risk mitigation is the only relevant area of AI policy. Some policy decisions, such as using AI to make government agencies more efficient, are important but beyond the scope of this website.
To be clear, PAIS is nonpartisan and does not promote any particular agenda or set of policies. Its sole purpose is to educate the public about AI safety policies, allowing voters to decide for themselves which candidates are worth supporting.