Author: Luuk van Dijk, founder & CEO of Daedalean
Avionics today deals with uncertainty in all phases of flight, be it an unexpected side-wind on landing, traffic encountered en route, or a lower climb rate in hot weather, by delegating it more or less entirely to the human pilot. Daedalean’s systems can take over human functions that involve making judgement calls, including abort/go-around decisions based on high-dimensional input from a camera or a radar. These systems need to deal with uncertainty in a principled and demonstrably safe way.
Engineering such systems using traditional hand-crafted software, no matter how rigorous the requirements, design, validation and verification processes, is not feasible: the set of possible inputs is simply too large to verify exhaustively.
Instead, so-called Machine Learning techniques can provide high-quality solutions that rival or outperform the human on certain specific tasks of limited scope such as detecting an aircraft, a runway or a cat in an image.
The uncertainty, or “non-determinism”, of the problem is often misattributed to the solution. But any system or human dealing with the real world has to deal with uncertainty, and has to be proven fit for purpose and safe enough for the public (as assessed by the certification authority). For a machine-learned system this is no different. The way Machine Learning works actually provides more rigorous bounds on the behaviour and, paradoxically, although the guarantees are only of a statistical nature, they provide more certainty than any process involving humans writing traditional software can.
In the project conducted with EASA we introduce a number of methods to assure that our algorithms produce reliable outputs suitable for safety-critical applications. Through dataset coverage we can guarantee that the system’s performance will statistically generalize to an uncertain real world. We can also verify and prove that the system is robust to variations in its inputs. To pass the certification bar, our systems include functions to monitor their own behaviour, detecting possibly erroneous outputs and unexpected inputs.
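To make the flavour of such statistical guarantees concrete, here is a minimal sketch, not Daedalean’s actual methodology, of how dataset size relates to a generalization claim. By Hoeffding’s inequality, a number of independent, representative test samples bounds the gap between the measured error rate and the true error rate at a chosen confidence level; the function name and the numbers below are illustrative assumptions.

```python
import math

def hoeffding_sample_size(epsilon: float, delta: float) -> int:
    """Smallest n such that, for n i.i.d. test samples, the true error
    rate lies within +/- epsilon of the measured error rate with
    probability at least 1 - delta (two-sided Hoeffding bound):

        n >= ln(2 / delta) / (2 * epsilon**2)
    """
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

# For example, to claim the true error rate is within 1 percentage
# point of the measured one with 99.9% confidence:
n = hoeffding_sample_size(epsilon=0.01, delta=1e-3)
print(n)  # 38005 independent test samples
```

Note that the bound only holds if the test data is drawn independently from the same distribution the system will face in operation, which is precisely what a dataset-coverage argument must establish.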
By properly taking performance guarantees into account, and by designing systems, aircraft and operations to cope with the inevitably imperfect performance of any component – machine-learned or not – we can create a more densely used airspace, with autonomous traffic that is safer overall, using all the certainty that objective data can provide.