It looks like France is desperately trying to take over the UK's lead in European AI research. If the people they manage to recruit don't die of anaphylactic shock at Gare du Nord and manage to learn French in two weeks (which is how long one can survive without water), they may have a non-zero chance of succeeding.
Villani: Finally, our digital society cannot be governed by black box algorithms: artificial intelligence is going to play a decisive role in domains critical for human flourishing (health, banking, housing, etc.), and there is currently a high risk of embedding existing discrimination into AI algorithms or creating new areas where it might occur. Further, we run the risk that spreading norms and attitudes may quietly steer the general development of AI algorithms. It should be possible to open these black boxes, but equally to think ahead about the ethical issues that algorithms within artificial intelligence may raise.
A meaningful AI finally implies that AI should be explainable: explaining this technology to the public so as to demystify it (the role of the media is vital from this point of view), but also explaining artificial intelligence by extending research into explainability itself. AI specialists themselves frequently maintain that significant advances could be made on this subject.
Honestly, I haven't encountered a more concise concept than machine learning in my whole research life (and I haven't worked on anything particularly complicated). It amounts to fitting a very complex model, namely one with a large number of free parameters. This calls for a parsimonious validation criterion, which under these circumstances requires both a lot of computational power and an estimation method. The method is stochastic steepest descent. Everything here is statistically sound, even if some particular "guts" lack a theoretical proof. The whole process is the most illustrative example of statistical model validation I can imagine.
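The pipeline described above (tune free parameters by stochastic steepest descent, then judge the model purely by its error on held-out data) can be sketched in a few lines. The toy linear model, the learning rate, and all the names here are my own made-up illustration, not anything from the report:

```python
import random

random.seed(0)

# Toy data from a known law y = 2x + 1 plus Gaussian noise
# (a stand-in for whatever process the model is fitted to).
data = [(i / 100.0, 2 * (i / 100.0) + 1 + random.gauss(0, 0.1))
        for i in range(100)]
random.shuffle(data)
train, valid = data[:80], data[80:]  # hold out a validation set

def mse(w, b, points):
    """Mean squared error of the linear model w*x + b on a set of points."""
    return sum(((w * x + b) - y) ** 2 for x, y in points) / len(points)

# Free parameters, tuned by stochastic steepest descent:
# one noisy gradient step per training example.
w, b = 0.0, 0.0
lr = 0.1
for epoch in range(200):
    random.shuffle(train)
    for x, y in train:
        err = (w * x + b) - y   # d(0.5*err**2)/d(prediction)
        w -= lr * err * x       # d(prediction)/dw = x
        b -= lr * err           # d(prediction)/db = 1

# Validation: the only verdict is the error on data the fit never saw.
print(f"w={w:.2f}, b={b:.2f}, validation MSE={mse(w, b, valid):.4f}")
```

Note that the verdict at the end never requires inspecting the fitted parameters themselves, only the held-out error, which is the sense in which the whole exercise is statistical model validation.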
Talking about "opening the black box" always makes me feel uneasy. In my personal view, statistical models are supposed to be black boxes, which make no sense when opened, or at least there are no statistical grounds for ascribing any meaning to whatever one might see inside. I believe this applies to NNs in particular.