IASEAI’25 Call to Action

IASEAI Issues Call to Action for Lawmakers, Academics, and the Public Ahead of AI Action Summit in Paris

“The development of highly capable AI is likely to be the biggest event in human history. The world must act decisively to ensure it is not the last event in human history. This conference, and the cooperative spirit of the AI Summit series, give me hope; but we must turn hope into action, soon, if there is to be a future we would want our children to live in.”
Stuart Russell
Distinguished Professor of Computer Science, University of California, Berkeley; Director, Center for Human-Compatible AI; President pro tem, IASEAI

IASEAI calls for:

Recognition of the significance of new developments in AI

Policymakers must act with an urgency that matches the transformational potential of AI, the rapidity of change, and the increasing risks to humanity as AI capabilities begin to exceed our own.

Preventing AI-driven institutional and social disruption

The power of AI threatens to disrupt employment and social structures, worsen inequality, and severely compromise the information ecosystem. Policymakers must take proactive steps to protect institutions, individuals, and ways of life while harnessing AI to strengthen rather than weaken societies. To the extent that these problems take similar forms across nations, policymakers should collaborate to seek collective solutions.

Addressing the race to AGI

The financial and strategic incentives to achieve artificial general intelligence (AGI) or “superintelligence” lead companies and countries to undercut safety standards in an attempt to gain technological and political control. Policymakers should take coordinated actions in international fora such as the UN and the OECD to ensure that innovation and competition proceed within an agreed framework of rigorous safety standards.

Coalescing the efforts of research communities around the goal of safe and ethical AI

The increasing threats to human flourishing posed by AI systems require researchers from the AI, ethics, social science, and policy communities to collaborate and pool their efforts. The various perspectives offered by these communities are not in tension; on the contrary, they all contribute in important ways to ensuring that AI systems do not harm human society.

Adoption of mandatory safety and ethical requirements

While commendable, voluntary commitments by companies must be made more specific and legally binding. Such binding commitments might include mandatory registration of advanced AI systems, including automatic self-registration for open-source copies; installation of remotely operable off-switches; mandatory reporting of incidents; professional ethics training for engineers; and standards for training, design, development, testing, auditing, and transparency of advanced AI systems. Developers should provide a scientifically convincing safety case that their systems will not cross so-called behavioral red lines, which demarcate unacceptable behaviors.

Advancement of global cooperation

AI developments and the accompanying risks have cross-border impacts, necessitating global cooperation on AI safety research and regulation, considering the perspectives of all nations. Moreover, the benefits of AI must be equitably distributed. In this vein, we welcome the establishment of the UN AI Advisory Body and the international network of AI Safety Institutes.

An increase in publicly funded research

The scale of the challenge requires significantly more publicly funded AI safety and ethics research. Research cannot continue to be dominated by companies with significant conflicts of interest. The potentially transformative benefits of AI can be realized only if advances in capabilities are accompanied by methods to ensure that AI systems are safe by design and aligned with human interests.

Support of the AI Foundation

The forthcoming AI Foundation, anticipated by the Paris Summit hosts, represents a significant step forward. The Foundation must support the design and development of AI systems that address human needs and enhance local capabilities, and governments must, in turn, support the Foundation.

Support of the Council of Europe Framework Convention on Artificial Intelligence

The Council’s AI Treaty “aims to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law.” Although the Council administers the treaty, it is open for signature by non-member states. Thirty-seven countries, including the United States and the United Kingdom, have signed, and more are expected to do so at the Paris Summit.

Fostering informed dialogue

AI researchers and the media must bring their complementary expertise to bear in informing the public and policymakers, objectively and accurately, about developments in AI and their significance.