It’s 11 PM. Do You Know Where Your AI Is?

Hello? Dev Community? Yes, it’s the European Union. We were just wondering, is your AI trustworthy? We have a checklist for you.

Since we joined the European AI Alliance back in June, a lot has been happening. On June 26th, the High-Level Expert Group on Artificial Intelligence (AI HLEG) held the first AI Alliance Assembly. The group presented The Policy and Investment Recommendations on AI, addressed to the European Commission and the Member States. They also launched the piloting process of the AI Ethics Guidelines.

Today we’re publishing our full position in Europe’s “Trustworthy AI” debate. For those who are unfamiliar, the debate grew out of the European Commission’s goal of setting guidelines for ethical AI development, beginning with the Ethics Guidelines for Trustworthy AI published by the AI HLEG last year. As part of this discussion, we’ve developed our own positions, which we’re publishing here.

While we encourage all developers (especially those who work with artificial intelligence, machine learning, neural networks, and the like) to take a closer look at the full position, here are the main ideas:

  • Any policy or regulatory measures should be carefully crafted and limited in scope.

  • Proposals should be evidence-based, as opposed to focused on hypotheticals.

  • Because AI can be deeply technical, regulators should work alongside specialists and experts in crafting rules and guidelines.

  • Given AI’s global context, regulators should seek to accommodate both the international impact of EU regulation and the impact of international regulations on EU actors.

  • The EU should abstain from protectionist impulses, as they might unintentionally damage innovation and the competitiveness of European businesses.

If you’re a company, a member of the public (including academia), or anyone working with or researching AI, the AI HLEG wants to hear from you. All interested stakeholders are invited to participate and to test the assessment list developed by the AI HLEG: seven key requirements intended to ensure that an AI product is “trustworthy,” meaning it complies with European values.

The piloting phase will run until the 1st of December 2019. Based on the feedback received, the AI HLEG will propose to the European Commission a revised version of the assessment list in early 2020. We highly encourage all developers to participate in the project. You can register here.

Since Ursula von der Leyen, President-Elect of the new European Commission, has issued a programmatic declaration stating her intention to propose AI regulation within 100 days of taking office, it is imperative that the developer community submit comments and get involved.

Whether or not you choose to register for the pilot process, we need your comments. They will help us ensure that the voice of the developer community working with AI is heard in this important EU debate. Without your input, AI regulation in Europe risks being driven by those without direct involvement in the field.

Whether or not you choose to formally comment, please share your thoughts with us using the form below. We’d like to know who in the community is getting involved in this historic discussion, and how.


