AI in general, 2017

AI & Machine Learning Policy Paper



The Internet Society


Keywords: Accountability, Transparency, Bias, Data quality, Human control, Interoperability, Explainability, Safety, Liability


The Internet Society is a US-based non-profit founded in 1992, with offices around the world. Its founders are Vint Cerf and Bob Kahn, and the organisation’s mission is to support and promote the “development of the Internet as a global technical infrastructure, a resource to enrich people’s lives, and a force for good in society.”
This policy paper offers a brief introduction to AI for policymakers, to inform decision-making about AI and Internet regulation. It describes key factors to consider in order to secure people’s trust in the new technology: socio-economic impacts; transparency, bias, and accountability; new uses of data that might threaten user privacy; security and safety, including harmful AI behaviours and the malicious exploitation of systems; ethics; and the emergence of new ecosystems (applications, services, etc.) created by AI.
These factors give rise to a set of challenges that must be overcome, including:

  • Transparency and interpretability in decision-making: corporate and governmental secrecy limits scrutiny of AI systems, while the opaque internal decision logic of advanced ML models is difficult even for their programmers to interpret

  • Data quality and bias: one harm is “bad” decisions made on the basis of low-quality or biased data; another stems from AI’s ability to identify new patterns or “re-identify anonymized information,” which could breach users’ fundamental rights through profiling and discrimination

  • Safety and security: AI agents may cause harm because they are indifferent to the consequences of their actions, because of what they learn from their environment, and because their algorithms can be manipulated.

  • Accountability: with ML, algorithms and their reasoning may remain opaque even to their programmers, perpetuating biases with unintended impacts. This raises the question of who should be responsible for the unintended consequences of actions taken by advanced ML systems. The current approach is not to hold programmers responsible for the consequences of machine learning, so as not to impede or discourage future innovation. However, the allocation of responsibility between programmers, operators, and manufacturers should be clarified in the near future.

  • Social and economic impact: automation will bring new jobs and greater convenience for consumers, but it will affect high-skilled jobs just as much as unskilled, low-paying labor. A higher degree of structural unemployment should be expected, along with a global impact on the division of labor.

  • Governance is still in its infancy.

  • The policy paper lists seven guiding principles for the deployment of AI in Internet services, intended to address the challenges introduced above:

    1. AI system designers should adopt a user-centric approach and consider their collective responsibility. Industry and researchers should adopt ethical standards, and innovation policies should require adherence to ethical standards as a prerequisite for funding;

    2. Human interpretability of algorithmic decisions must be ensured, especially for applications that might have harmful or discriminatory consequences. Users must be empowered to request explanations about AI-based decisions.

    3. Public and consumer empowerment must be supported, and algorithmic literacy must be treated as a basic skill.

    4. To ensure responsible deployment, humans must be in control of autonomous AI systems, safety and privacy must be priorities, and a policy of data minimisation should be followed. AI systems should not be trained with biased data. Internet-connected AI systems must be especially secure, and vulnerabilities should be disclosed.

    5. There must be legal certainty on how existing laws and policies apply to algorithmic decision-making, and legal accountability must be ensured when automated decision systems take over from humans. Governments need to create clarity on liability for when AI “goes wrong,” and any applicable laws must take a user-centred approach, ensuring the ability to challenge AI-based decisions.

    6. All stakeholders should engage in dialogue to “shape an environment where AI provides socio-economic opportunities for all.”

    7. An open, multi-stakeholder approach to governance should be adopted, informed by the Internet Society’s relevant work on the topic, and underpinned by the principles of:

      1. Inclusiveness and transparency;

      2. Collective responsibility;

      3. Effective decision-making and implementation; and

      4. Collaboration through distributed and interoperable governance.

    AI Governance

    This database is an information resource about global governance activities related to artificial intelligence. It is designed as a tool to help researchers, innovators, and policymakers better understand how AI governance is developing around the world.
