AI safety standards worldwide must keep pace with the rapid development and deployment of AI technology. Our mission is to help accelerate the writing of AI safety standards.
Recent outputs & updates
- Our Feedback on the First Draft Code of Practice on Transparency of AI-Generated Content
We provided feedback on the First Draft Code of Practice on Transparency of AI-Generated Content, addressing feasibility concerns, proportionality for SMEs, and operational clarity across marking, detection, and disclosure requirements.
- Recommendations on the Digital Omnibus Amendments to the EU AI Act
We analysed the Commission’s Digital Omnibus proposals for the AI Act, highlighting concerns with Article 6(4) database deletion, Article 75(1) enforcement centralisation, and Article 4a data processing rules, whilst proposing targeted amendments to address critical regulatory gaps.
- We presented a poster at the AI and Societal Robustness Conference
Rokas Gipiškis (AI Standards Lab) and Rebecca Scholefield presented the poster “AI Incident Reporting: Pipeline and Principles” at the AI and Societal Robustness Conference in Cambridge, organised by the UK AI Forum. The work examines post-deployment AI incidents through an end-to-end pipeline spanning definitions and taxonomies, monitoring, reporting, and downstream analysis (including multi-causal approaches and …)
