Certification Approach

Key to AI certification:
Machine learning assurance

“The unique characteristic of AI is that the implementation is learned, rather than designed… the guidance for the traceability of software or complex hardware cannot be extended to an AI system, … This may be addressed through extensive stress testing for AI systems at item level and additional test at both the system and aircraft levels...” 

 FAA, “Roadmap for Artificial Intelligence Safety Assurance V.1”, Sept. 2024

“Learning assurance: All of those planned and systematic actions used to substantiate, at an adequate level of confidence, that errors in a data-driven learning process have been identified and corrected such that the AI/ML constituent satisfies the applicable requirements at a specified level of performance, and provides sufficient generalisation and robustness capabilities.”

EASA, “Concept Paper: guidance for Level 1 & 2 machine learning applications, Issue 02”, April 2024

Systems must be designed to perform their intended functions under all foreseeable operating conditions. This requirement applies to the entire system, which is why we speak of AI-enhanced systems, or systems that include an AI component. The design and development of the traditional components remain governed by established standards.
As for the AI component, it's true that we can't fully "see into" the black box. The process that creates it is not deterministic, but its results are. In the FAA's classification, it is machine-learned, not machine-learning: once training, testing, and validation are completed in the lab, the neural network's parameters are fixed. It doesn't evolve or learn during operation. So when deployed as a software component, it behaves deterministically: given the same input, it will always produce the same output.
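A minimal sketch of the "machine-learned, not machine-learning" point: once its weights are frozen, a neural network is just an ordinary deterministic function. The tiny two-layer network and its weights below are purely illustrative, not a real certified model.

```python
def relu(x):
    """Element-wise rectified linear activation."""
    return [max(0.0, v) for v in x]

def matvec(W, x):
    """Multiply matrix W by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

# Frozen parameters: fixed after training, testing, and validation
# in the lab (illustrative values, not from a real model).
W1 = [[0.5, -0.2], [0.1, 0.3]]
W2 = [[1.0, -1.0]]

def frozen_net(x):
    # No training and no weight updates happen at inference time.
    return matvec(W2, relu(matvec(W1, x)))

# The same input always yields the same output.
a = frozen_net([1.0, 2.0])
b = frozen_net([1.0, 2.0])
assert a == b
```

Because nothing in the deployed component changes its parameters, repeated evaluation on identical inputs is bit-for-bit repeatable, which is what makes systematic stress testing meaningful.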
The question, then, is how to ensure the machine-learned component works as intended, and what that depends on. The answer lies in three things: 1) properly designed data for training, testing, and validation; 2) a sound learning process; and 3) a well-structured model, the mathematical framework tuned during learning. If any of these is flawed, the system can fail. But all three can be verified.
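To make the first pillar concrete, here is a hedged sketch (function and variable names are hypothetical) of one basic check behind properly designed data: the training, validation, and test sets must not leak into one another.

```python
def check_disjoint_splits(train_ids, val_ids, test_ids):
    """Verify that no sample appears in more than one data split.

    This is only one of many data-design checks; representativeness
    and coverage of the operating conditions must also be assessed.
    """
    train, val, test = set(train_ids), set(val_ids), set(test_ids)
    assert not train & val, "training/validation sets overlap"
    assert not train & test, "training/test sets overlap"
    assert not val & test, "validation/test sets overlap"
    return True

# Disjoint splits pass the check.
check_disjoint_splits([1, 2, 3], [4, 5], [6, 7])
```

Checks of this kind are automatable and auditable, which is what allows each of the three pillars to be verified rather than taken on trust.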
Together with EASA, we published two research reports that provide mathematical proof of these principles: Concepts of Design Assurance for Neural Networks (CoDANN), and CoDANN II. These ideas form the basis of the W-shaped model for Learning Assurance and later contributed to EASA’s guidance for Level 1 and 2 machine learning applications.

Let’s connect!

Tell us about your use case and ask us anything. We're happy to discuss your needs and explore how we can help.
Leave your details, and we'll get back to you as soon as possible.