Countries around the world, alongside leading companies, are debating risk-management frameworks for advanced AI. Many of these policy processes have made important contributions by developing novel approaches, yet none appears likely to achieve what might be called a gold standard: a set of governance measures that jointly optimizes a range of societal goods, including both safety and innovation. In this talk, I’ll present a research agenda in pursuit of this goal, identifying outstanding questions in current debates and mapping them onto technical and policy subquestions. Researchers can use this agenda to bring independent scientific analysis to bear on pressing questions of AI governance.