AI safety standards worldwide must keep pace with the rapid development and deployment of AI technology. Our mission is to help accelerate the writing of AI safety standards.
Recent outputs & updates
- Recommendations on the European Parliament Amendments to the EU AI Act in the Digital Omnibus
  We have published a report analysing some of the 750+ amendments to the AI Act proposed by the European Parliament in the context of the Digital Omnibus. We provide a first analysis of these amendments, highlighting specific ones we welcome or oppose, based on our areas of expertise.
- A Scorecard for the Quality of AI Evaluations
  We have published a working draft of a Quality Scorecard for AI Evaluations, a standards-based framework for assessing the reliability, validity, and rigour of AI evaluations. The scorecard provides structured scoring across five dimensions and a classification system to match evaluations to appropriate governance and deployment contexts.
- Our Feedback on the First Draft Code of Practice on Transparency of AI-Generated Content
  We provided feedback on the First Draft Code of Practice on Transparency of AI-Generated Content, addressing feasibility concerns, proportionality for SMEs, and operational clarity across marking, detection, and disclosure requirements.
- Recommendations on the Digital Omnibus Amendments to the EU AI Act
  We analysed the Commission's Digital Omnibus proposals for the AI Act, highlighting concerns with Article 6(4) database deletion, Article 75(1) enforcement centralisation, and Article 4a data processing rules, whilst proposing targeted amendments to address critical regulatory gaps.
