About
AI safety standards worldwide must keep up with the rapid development and deployment of AI technology. Our mission is to help accelerate the writing of AI safety standards.
Formal AI safety standards can be essential in supporting government initiatives to regulate AI technology. We therefore aim to accelerate the writing of such formal standards in organizations like ISO/IEC, CEN-CENELEC, the IEEE, and NIST.
We are structured as a virtual lab, bringing together experts from all over the world who self-organize to pursue the above mission.
Current project
The diagram below shows our current project. We convert insights from the existing literature into ready-made text that formal standards efforts can include in their AI safety standards documents.
We are positioning ourselves as politically and geographically neutral.
We aim to support at least the following government-related standards initiatives:
AI safety standards writing in CEN-CENELEC JTC21, in support of the EU AI Act
GPAI provider Codes of Practice writing in support of the EU AI Act
AI safety standards writing under the US NIST AI Risk Management Framework, such as the announced NIST Public Working Group on AI and the Berkeley CLTC initiative for a Profile for Increasingly Multi- or General-Purpose AI
Any AI safety standards effort that might emerge from the UK AI Safety Institute or the AI Safety Summits.
The data flow in the above diagram, from our lab into formal standards writing efforts, is enabled because several of our members are active in these formal standards writing efforts. Using the terminology of the standards world, the outputs of this lab can be packaged and submitted as technical expert contributions to the respective standards efforts. It will then be up to these standards efforts to decide if and how to incorporate these contributions into their eventual standards.
Differences from formal standards writing efforts
Formal standards writing efforts tend to have very high barriers to entry. They often require that organizational fees be paid before someone can participate. To qualify for a government mandate, some efforts must also apply geographical restrictions on who can join.
Furthermore, it often takes considerable time for someone to understand the standards process well enough to contribute effectively to a formal standards effort. It can also take a lot of effort to locate the right subcommittee to which to make specific contributions, and the right time to make them.
We have set up the AI Standards Lab to have much lower barriers to entry for collaborators and subject matter experts worldwide. This allows us to accelerate standards writing by accessing a pool of labor simply unavailable to formal standards efforts.
Another significant difference from formal standards writing is that in this lab, we focus only on encoding the state of the art in AI risk management. Formal standards writing efforts have a much broader scope: they must resolve various other difficult legal-technical and regulatory questions in consultation with their respective government(s).
The formal standards efforts we target also usually work under the Chatham House Rule, supplemented with additional confidentiality agreements. The lab must therefore operate with some internal confidentiality firewalls. Lab members who are also active inside formal standards efforts will often be unable to report back to the original authors of contributions any details about how those contributions are handled further along inside the formal standards efforts.
Lab outputs
This section contains Lab outputs that we are making public.
September 2024:
On July 30, 2024, the EU AI Office published a call for input to inform the drafting of the AI Codes of Practice, in support of the AI Act.
The inputs are divided across several topics and four working groups (WGs):
GPAI models: transparency and copyright-related provisions (WG1).
GPAI models with systemic risk: risk taxonomy, assessment (WG2) and mitigation (WG3), and "internal risk management" (WG4).
Reviewing and monitoring the Codes of Practice
On September 18, we submitted text to this call for input: a 150-page contribution on the state of the art of GPAI risk sources and risk management practices, intended for direct inclusion into the Codes. Below are the published texts:
Survey input: short-answer and multiple-choice questions prepared by the EU AI Office
Free text input: our main contribution, referencing a template provided by the EU AI Office
October 2024:
We published a paper, A catalog of state-of-the-art risks and risk management measures for GPAIs, released under a 🔓 public domain license for easy adoption into GPAI standards globally.
It aims to support interoperability between different standards efforts by providing a central hub of concrete descriptions of both current and future risks and mitigations for GPAIs or frontier AI systems.
Any questions about the paper can be directed to inquiries@aistandardslab.org
Lab members
Current Lab Leadership:
Present and past contributors
Not all contributors are listed below. Contributors can choose not to have their names or full names disclosed in public.
Christopher Denq
Jonathan Claybrough
Express your interest in collaborating
Please note: Due to our current workload, we have not been able to process or respond to all expressions of interest. We may have more capacity to start onboarding new collaborators in 2025. Please email us at hello@aistandardslab.org for time-sensitive responses.
Current expectations of volunteer collaborators:
Time commitment:
2-5h/week: review and comment on our existing draft standards contributions.
5-10h/week: author or co-author new contributions by leveraging safety literature and using our style guide.
Requirements:
Basic knowledge of AI technology and of safety or quality engineering; direct research work in the AI Safety field is a plus but not required.
We welcome and prefer contributions from collaborators who are deep subject matter experts in a specific field and want to convert the latest insights from their field into standard texts.
We also welcome collaborators who wish to use our lab as a stepping stone towards joining a formal standards organization and participating in a formal AI safety standards writing effort.
All our work happens in a closed workspace, and all discussions happen under the Chatham House Rule.