AI regulation is still in flux, with few final decisions made, but governments everywhere plan to set local rules. As an industry, what can we predict from the emerging draft rules, codes, and frameworks? On April 19th, Developers Alliance presented a workshop at the World Summit AI in Montreal titled “Three practical steps to handle future regulations.” World Summit AI Americas brought together AI leaders, academics, and researchers for conversations about the impact of AI on the products we consume as a society. The event also showcased keynotes and panel discussions on AI opportunities and challenges in health, security, economics, business, and governance.
The workshop by Developers Alliance presented an actionable, practical checklist of steps that engineering and development teams can take to prepare for upcoming regulations and make compliance easier. Attendees included developers and industry leaders from Women in AI, ServiceNow Inc., Borealis AI, Procogia, QueerTech, License Inc, Divinity Corp., UBC, Optrec, Marc-AI, Affectra AI, Fujitsu, Keyword Studios, PWC Canada, My Brand AI, Competition Bureau, and many others.
Phil Dawson, Head of AI Policy at Armilla AI, provided insights into data models, the company’s quality assurance offering, and the alignment tools it is building for generative AI. Bruce Gustafson, CEO of Developers Alliance, offered insights into assessing and mitigating potential failure modes. Developers raised questions about the hurdles of evaluating features, the mindset shift from traditional programming models, and handling the volume and distribution of data. Attendees also discussed common practices such as the Data Nutrition Label and the cost of storing and managing different versions of data, and proposed the need for a comprehensive standard covering the treatment of unwanted bias, the assessment of ML systems, and responsible AI in line with ISO standards.
The interactive workshop was moderated by Geoff Lane, US Policy Head of the Developers Alliance.
The speakers offered three practical steps a developer can take to better prepare for AI regulations. Key takeaways included:
- Know the data your model is trained on – investigate the data, distinguish training data from the system’s input data, look for potential bias in the datasets, and understand the data rights issues involved (IPR, copyright, etc.).
- Define the limitations of your system – mark what is out of bounds (unintended uses of the code/product), narrowly define your system, and explore its limits, its harmful ramifications, and the certifications and licenses that might apply when your system replaces a human role.
- Brainstorm how your system might fail – and what harm might result. Take time to assess and mitigate potential failure modes: what’s the worst that can happen, even inside your sandbox? Can your system be tricked or gamed?
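The three steps above lend themselves to a lightweight, living record that a team fills in before each release. The sketch below is purely illustrative – the class and field names are our own invention, not drawn from any regulation or standard – but it shows how the checklist could be made concrete and auditable in code:

```python
from dataclasses import dataclass, field

@dataclass
class AIComplianceChecklist:
    """Illustrative pre-deployment record covering the three workshop steps."""
    # Step 1: know the data your model is trained on
    training_data_sources: list[str] = field(default_factory=list)
    known_bias_risks: list[str] = field(default_factory=list)
    data_rights_cleared: bool = False  # IPR / copyright review completed?
    # Step 2: define the limitations of your system
    intended_uses: list[str] = field(default_factory=list)
    out_of_bounds_uses: list[str] = field(default_factory=list)
    # Step 3: brainstorm how your system might fail
    failure_modes: list[str] = field(default_factory=list)  # harms, gaming/tricking vectors

    def open_items(self) -> list[str]:
        """Return checklist sections that are still empty or unresolved."""
        gaps = []
        if not self.training_data_sources:
            gaps.append("training data sources undocumented")
        if not self.data_rights_cleared:
            gaps.append("data rights (IPR/copyright) not cleared")
        if not self.out_of_bounds_uses:
            gaps.append("out-of-bounds uses not defined")
        if not self.failure_modes:
            gaps.append("failure modes not assessed")
        return gaps

# Example: a release with only steps partially completed
checklist = AIComplianceChecklist(
    training_data_sources=["licensed corpus v2"],
    out_of_bounds_uses=["medical diagnosis"],
)
print(checklist.open_items())
# → ['data rights (IPR/copyright) not cleared', 'failure modes not assessed']
```

Keeping a record like this per model version makes compliance reviews a matter of inspecting structured data rather than reconstructing decisions after the fact.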
In the US, Congress has yet to pass any comprehensive AI laws. In October 2022 the Biden Administration released its Blueprint for an AI Bill of Rights. This document, while recognizing the benefits of automated systems, also identified five principles that should guide the design, use, and deployment of AI to protect Americans. The five principles are:
- Protecting Americans from unsafe or ineffective systems.
- Designing and using algorithms and systems in an equitable way.
- Protecting Americans from abusive data practices.
- Knowing that an automated system is being used and understanding how it impacts you.
- And, where appropriate, giving Americans the ability to opt out and to access an individual who can remedy problems they encounter.
Finally, in January of this year NIST released its AI Risk Management Framework to better manage risks to individuals, organizations, and society associated with AI. The framework is voluntary.
The EU never misses a chance to weigh in on cutting-edge technology. It is taking a hard-law approach, with a horizontal regulation (the AI Act) already being negotiated, while the UK has decided not to rush to regulate and instead to assess each sector’s need for adaptation. The EU’s AI Act imposes different rules depending on an AI system’s level of risk, from bans (e.g. social scoring), to compulsory certification for ‘high-risk AI systems’, to lighter obligations (transparency). EU lawmakers are also contemplating special obligations for providers of general-purpose AI, including generative AI like ChatGPT. As with the GDPR, they intend to enforce the rules on any system that touches the EU.
About Developers Alliance: Developers Alliance was founded in 2012 to serve as the voice for devs with elected officials and regulators in capitals around the world. While we do represent companies ranging in size from small startups to multinational corporations, the backbone of our membership is the 70,000 or so individual devs around the world who are our members. These individual members are critical in shaping our policy objectives and charting a path forward for the organization. We depend on these individuals to be engaged with us via surveys, events, webinars, and more.
Are you interested in attending a webinar about AI regulations? Share your questions or thoughts below on specific topics you would like to learn more about or email us at firstname.lastname@example.org