About Us

Our mission is to ensure that AI systems operate safely and ethically, benefiting all of humanity. We connect experts from academia, policy groups, civil society, industry, and beyond to promote research, shape policy, and build understanding around this goal.

“The potential for good things that can come from machines that understand the world better than us is amazing…like discovering new drugs, understanding how your body works so that we can fix cancer…but of course we’re not going to reap those benefits if we destroy our society and endanger our future.”

Max Tegmark
Professor, Department of Physics, Massachusetts Institute of Technology; President, Future of Life Institute

IASEAI is an independent non-profit organization founded to address the risks and opportunities associated with rapid advances in AI.

We believe that AI can be beneficial, but current systems are being developed without appropriate safeguards. As these systems become more capable and more directly involved in critical social and economic functions, it is essential to provide assurances that they will operate safely and benefit humanity. Policy should be developed with input from experts and affected communities to encourage the creation of AI systems that can support such assurances.

IASEAI will provide a unified voice for the many individuals, research groups, and organizations that share these goals and will help to create a global community to achieve them.

IASEAI’s Impact Pillars

Community

We organize in-person and virtual events designed to stimulate and showcase research and other advances in the field of safe and ethical AI. This includes an annual international conference and regular workshops. Presented research will reflect the interests of all relevant stakeholders, including the public, civil society, academia, industry, government, and international organizations.

We will create a global community comprising individual members as well as affiliate organizations (such as academic centers and nonprofit institutes) that support IASEAI’s mission. We also expect to form national and regional chapters.

Research

IASEAI will promote a wide variety of technical and sociotechnical approaches to ensuring safe and ethical AI, targeting both existing and future harms, the security of AI systems, and methods of preventing unsafe AI systems from operating. We will advocate for research funding and provide seed grants for new research centers. Where appropriate, we may commission technical research in support of policy, regulation, and enforcement. We will also recognize outstanding research papers and senior and early-career researchers through awards and fellowships.

Policy

To achieve its mission, IASEAI will develop specific policy analyses in the areas of standards, regulation, international cooperation, and research funding. An elected Council will set general policy directions, and the analyses will be developed by area-specific working groups drawn from an expert network. The goal is to develop and promote policies that help to ensure that AI systems operate safely and ethically.

Education

IASEAI’s work aims to reach both technical and non-technical audiences throughout the world—including the general public, civil society, journalists, policymakers, and industry. This is achieved through course development, creative and educational videos and web content, an expert speakers bureau, webinars, and media placement.

Organization

IASEAI is currently transitioning to a new structure: a Board of Directors with fiduciary responsibility; a President and Council to develop policies and provide guidance (elections will be held in the future for these positions); an Advisory Board providing wisdom and continuity; and an appointed Executive Director with direct responsibility for activities in furtherance of IASEAI’s mission. We are also finalizing policies for the formation of regional and national chapters.

To get involved and stay informed, join the movement.

Become a Member

Board of Directors

As of July 16, 2025, the directors (with committees) are:

Amir Banifatemi (Finance)
Co-founder and Director, AI Commons
Stuart Russell (Nominating)
Distinguished Professor of Computer Science, University of California, Berkeley; Director, Center for Human-Compatible AI
Adrian Weller (Finance)
Director of Research in Machine Learning, University of Cambridge
Andrew Yao (Nominating)
Dean, Tsinghua University
Council

As of July 16, 2025, the council members (with committees) are:

Florence G’sell (Policy)
Visiting Professor of Law, Stanford University
Will Marshall (Policy)
Co-founder and CEO, Planet Labs
Charlotte Stix (Policy)
Head of AI Governance, Apollo Research
Kate Crawford (Research)
Research Professor, Annenberg School for Communication and Journalism, University of Southern California; Senior Principal Researcher, Microsoft Research
Aleksandra Korolova (Research)
Assistant Professor of Computer Science and Public Affairs, Princeton University
Bart Selman (Research, Community)
Professor, Cornell University
Shannon Vallor (Research)
Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence, Edinburgh Futures Institute, University of Edinburgh
Fynn Heide (Community)
Executive Director, Safe AI Forum
Sasha Luccioni (Community)
AI Researcher & Climate Lead, Hugging Face
Gaia Marcus (Community)
Director, Ada Lovelace Institute
Margaret Mitchell (Community)
Chief Ethics Scientist and Researcher, Hugging Face
Francesca Rossi (Community)
Global Lead, IBM AI Ethics
Tara Steele (Community)
Director, Safe AI for Children Alliance
Advisory Board

As of July 16, 2025, the members of the advisory board are:

Yoshua Bengio
Full Professor, Department of Computer Science and Operations Research, Université de Montréal; Co-president and Scientific Director, LawZero; Founder and Scientific Advisor, Mila - Quebec AI Institute; Turing Award winner (2018)
Gillian Hadfield
Professor of Computer Science, jointly appointed to the School of Government and Policy, Johns Hopkins University
James Manyika
Senior Vice President, Google-Alphabet
Jason Matheny
President and CEO, RAND Corporation
Alondra Nelson
Professor, Institute for Advanced Study
Joseph Stiglitz
Professor of Economics, Columbia University; Nobel laureate (economics, 2001)
Ya-Qin Zhang
Dean, Tsinghua University
Staff and Volunteers

IASEAI’s work is also supported by paid staff and volunteers who share their time and skills to help advance the movement for safe and ethical AI.

Mark Nitzberg
Interim Executive Director
Julia Irwin
Interim Associate Director
Livia Morris
Member Coordinator
JP Gonzales
Operations Manager
Loy Sheflott
Senior Advisor
Jess Graham
Senior Advisor
Molly Shilman
Senior Advisor
Michael Phillips
Senior Advisor