AI in general, 2017

ITI AI Policy Principles



The Information Technology Industry Council (ITI)


Safety, Human control, Liability, Accountability, Bias, Transparency, Labour market effects, Security


The ITI is a Washington, DC-based trade association representing the ICT industry. It has been described as a ‘lobbying group’ and its membership includes the largest technology companies, like Apple, Amazon, Google, Intel, IBM, and others.
The paper offers a general outline of possible future cooperation between the private and the public sector, globally. ITI highlights industry's responsibility to promote the responsible development and use of AI, the opportunity for governments to invest in and enable the AI ecosystem, and the potential of public-private partnerships, and groups its policy principles under these three headings.
Industry’s responsibility includes:
  • Responsible Design and Deployment, meaning the “responsibility to integrate principles into the design of AI technologies, beyond compliance with existing laws”, in short, a commitment to ‘ethics-by-design’;

  • Safety and Controllability, meaning that AI systems must be safe, minimise risk to humans and remain controllable by them;

  • Robust and Representative Data, meaning that “industry has a responsibility to understand the parameters and characteristics of the data, to demonstrate the recognition of potentially harmful bias, and to test for potential bias before and throughout the deployment of AI systems”;

  • Interpretability, entailing a commitment to working with partners to mitigate bias, inequity, and other harms;

  • Liability of AI Systems Due to Autonomy, expressing a commitment to work with relevant stakeholders to create an accountability framework for autonomous systems.

    With regard to governments’ role, ITI has several proposals:
  • Investment in AI R&D, especially cyber-defence, data analytics, detection of fraudulent transactions, robotics, human augmentation, natural language processing, interfaces, visualizations;

  • Flexible regulatory approach, above all the avoidance of overregulation: sector-specific approaches are encouraged instead of general policies, and governments are urged to evaluate existing policy tools before implementing changes so as not to “impede the responsible development and use of AI”;

  • Promoting innovation and the security of the Internet, with an emphasis on government restraint: companies should not be obliged to “transfer or provide access to technology, source code, algorithms, or encryption keys” as a condition of doing business;

  • Cybersecurity and privacy, encouraging governments to use strong, globally accepted and deployed cryptography to ensure trust and interoperability; voluntary information sharing about cyberattacks is proposed as a way of enabling consumer protection;

  • Voluntary, industry-led, consensus-based standards and best practices should be developed in order to promote competition and international collaboration.

    ITI also embraces public-private partnerships aimed at democratising access to AI resources, strengthening STEM education, and managing the labour market transformations brought about by AI.
