AI in general, 2018

AI for Humanity - French Strategy for Artificial Intelligence



French Government


Open data, Sandboxes, Transparency, Explainability, Labour market effects, Sustainability, Audit, Impact assessment


The strategy was presented by French President Emmanuel Macron on 29 March 2018 at the Collège de France in Paris. It is derived from the so-called Villani report, produced by French mathematician and Member of Parliament Cédric Villani, who was commissioned by the French Prime Minister in September 2017 to lay the foundations of an AI strategy. With this, France was among the first countries to begin work on a national AI strategy. The report’s full title is “For a Meaningful Artificial Intelligence - Towards a French and European Strategy.”
The French strategy - AI for Humanity - contains three central commitments and a number of specific measures to implement them.
First, the government will make a €1.5 billion investment over the next 5-year period in order to strengthen the country’s research base and support the AI ecosystem. Second, the government will take steps to make the country’s massive centralised datasets available for the purposes of innovation. Third, an ethical framework will be erected to deal with the challenges of AI.
The strategy contains seven main recommendations adapted from the Villani report.

  1. Developing an aggressive data policy is proposed as a means of attaining sovereignty and strategic autonomy from the major US, Chinese and Russian data giants. As part of this policy, the government must encourage companies to pool their data into sectoral data commons based on the principles of reciprocity, cooperation and sharing. The right to data portability should also be supported, whereby citizens could grant government authorities or researchers access to their data, in line with GDPR guidelines.

  2. Targeting four strategic sectors where France (and Europe) already excel: health, transport, the environment, and defence and security. In these areas, sector-specific policies should be implemented that focus on the major issues. Furthermore, platforms should be established to “facilitate innovation by creating controlled environments for experiments.” Finally, sandboxes should be implemented, which temporarily ease the regulatory burden to help innovators conduct small-scale tests in ‘real-life’ conditions.

  3. Boosting the potential of French research in order to meet the challenges of ‘brain drain’ and the low transfer of research into applications. As a remedy, interdisciplinary AI institutes should be established across France, and appropriate resources, such as a dedicated supercomputer, should be made available. Furthermore, pursuing a career in research should be made more appealing in order to attract and retain talent.

  4. Planning for the impact of AI on labour. Three concrete measures are proposed: the creation of a public laboratory on the transformation of work, which could study labour market changes and support those affected by transitions; prioritising complementarity between humans and machines; and exploring new methods of funding vocational training.

  5. Making AI more environmentally friendly. The strategy clearly references climate change as an indisputable reality that may be both exacerbated and mitigated by increased digitisation. To this end, the government must commit to contributing to a ‘smart ecological transition’ by launching a research centre dedicated to this topic and implementing a platform to measure and monitor the impact of smart digital tools on the environment. Moreover, the government must support initiatives such as the greening of the European cloud industry. Finally, ecological data up to 2019 must be made open and widely available to support the development of AI-based solutions.

  6. Opening up the black box of AI. Algorithmic transparency and mechanisms for auditing algorithms must be developed to ensure that AI is socially acceptable. More explainable models, intuitive user interfaces and an understanding of the conditions for satisfactory explanations form aspects of this work. To foster a sense of responsibility, ethics training should be mandatory for AI engineers and researchers, and discrimination impact assessments should form part of algorithm development. A consultative ethics committee on AI and digital technologies should be established, and broad, regular public dialogue initiatives undertaken. Finally, human responsibility for the use of AI tools must be maintained.

  7. Ensuring that AI supports inclusivity and diversity. It is proposed that by 2020 the proportion of women enrolled in digital engineering courses should be raised to 40%. A government system to automatically manage administrative procedures could be launched in order to improve citizens’ access to rights in a personalised manner and to increase public knowledge of administrative rules. Simultaneously, human mediation tools should be developed. Lastly, AI-based social innovations should be supported.
