"With traditional software, we have lines of code and software development assurance systems. When it fails, we can look at the code and understand where and why it failed. Artificial intelligence and neural networks are not like this. How can we establish the trust that artificial agents will behave safely?
AI can't be understood. It can't be explained. It is non-deterministic. Thus, it can neither be trusted by the general public nor certified by authorities."