About

AI safety standards worldwide must keep up with the rapid development and deployment of AI technology. Our mission is to help accelerate the writing of AI safety standards.

Formal AI safety standards can be essential in supporting government initiatives for AI technology regulation. We, therefore, aim to accelerate the writing of such formal standards in organizations like ISO/IEC, CEN-CENELEC, the IEEE, and NIST.  

We are structured as a virtual lab, bringing together experts from all over the world who self-organize to pursue the above mission.

Current project

The diagram below shows our current project. We convert insights from the existing literature into ready-made text that formal standards efforts can include in their AI safety standards documents.

We position ourselves as politically and geographically neutral.

We aim to support at least the government-related standards initiatives of the organizations named above: ISO/IEC, CEN-CENELEC, the IEEE, and NIST.

The data flow in the above diagram, from our lab into formal standards writing efforts, is possible because several of our members are active in those efforts. In the terminology of the standards world, the outputs of this lab can be packaged and submitted as technical expert contributions to the respective standards efforts. It is then up to these efforts to decide if and how to incorporate the contributions into their eventual standards.

Differences from formal standards writing efforts

Formal standards writing efforts tend to have very high barriers to entry. They often require that organizational fees be paid before someone can participate. To qualify for a government mandate, some efforts must also restrict who can join geographically.

Furthermore, it often takes someone considerable time to understand the details of the standards process well enough to contribute effectively to a formal standards effort. It can also take a lot of effort to locate the right subcommittee to which to make specific contributions, and the right time to make them.

We have set up the AI Standards Lab to have much lower barriers to entry for collaborators and subject matter experts worldwide. This allows us to accelerate standards writing by accessing a pool of labor simply unavailable to formal standards efforts. 

Another significant difference from formal standards writing is that this lab focuses only on encoding the state of the art in AI risk management. Formal standards writing efforts have a much broader scope: they must resolve various other difficult legal-technical and regulatory questions in consultation with their respective government(s).

The formal standards efforts we target also usually work under the Chatham House Rule, supplemented with additional confidentiality agreements. The lab must therefore operate with some internal confidentiality firewalls. Lab members who are also inside formal standards efforts will often be unable to report back to the original authors of a contribution any details of how that contribution is being handled further inside those efforts.

Lab outputs

This section contains the Lab outputs that we are making public.

September 2024: 

On July 30, 2024, the EU AI Office published a call for input to inform the drafting of the AI Codes of Practice, in support of the AI Act.

The inputs are divided across several topics and four working groups (WGs).

On September 18th, we submitted text to this call for input: a 150-page contribution on the state of the art of GPAI risk sources and risk management practices, intended for direct inclusion in the Codes. Below are the published texts:

Survey_Contribution_AISL_2024_public.pdf
Survey input: short-answer and multiple-choice questions prepared by the EU AI Office.

Public_Copy_Free-text_submissions_AI Standards Lab_2024_09_17_submitted_compressed_edited.pdf
Free text input: our main contribution, referencing a template provided by the EU AI Office.

October 2024: 

We published a paper, A catalog of state-of-the-art risks and risk management measures for GPAIs, released under a 🔓 public domain license for easy adoption into GPAI standards globally.

It aims to support interoperability between different standards efforts by providing a central hub of concrete descriptions of both current and future risks and mitigations for GPAIs and frontier AI systems.

Any questions about the paper can be directed to inquiries@aistandardslab.org.

November 2024:

As participants in the Codes of Practice (CoP) drafting process, we were invited to provide feedback on the published first draft.

On November 28th, we submitted feedback on the EU GPAI Codes of Practice. A lightly edited version of the submitted feedback, along with supporting documents, is provided below.

Lab members

Current Lab Leadership

Co-project lead and resident standards expert

Co-project lead

Present and past contributors

Not all contributors are listed below. Contributors can choose not to have their names or full names disclosed in public.

Research Analyst 

Research Analyst

Research Analyst

Research Analyst

Christopher Denq

Jonathan Claybrough

Express your interest in collaborating

Please note: Due to our current workload, we have not been able to process or respond to all expressions of interest. We may have more capacity to start onboarding new collaborators in 2025. For time-sensitive matters, please email us at hello@aistandardslab.org.

Current expectations of volunteer collaborators:

All our work happens in a closed workspace, and all discussions happen under the Chatham House Rule.