The EU feels it scored big with GDPR by being first to fill an international void in privacy regulation. New documents lay the groundwork for EU-style regulation of global AI next – could a repeat be in the works?
We were all surprised to wake up one day and realize that Europe had defined online privacy regulation while the world dithered. AI regulation… is this déjà vu?
Why Should You Care?
A global power with a unique view of how society, politics, and the economy should work will either: 1) set the baseline for the liability, legality, and limitations of AI built by anyone, anywhere, based on a foreign value system; or 2) fracture the digital marketplace such that AI systems must be restricted geographically to remain “legal”. If you currently code for a global internet, either scenario crushes your international aspirations.
The debate is already raging across Europe, with alternatives ranging from draconian to merely technophobic. Under some scenarios, virtually all software with an algorithmic base (remember, these are policy wonks, not devs) would need to be audited by bureaucrats, classified, retrained on approved and euro-compliant data, and only then released. The catch is that release would come with complete legal liability attached for anything that ever goes wrong – to anyone, anywhere. Forever. Whether it’s your fault or not.
This is not the approach developers on either side of the Atlantic would favor. We’d be happy to connect you with the people promoting this approach if you’d like to help them understand why this might not work.
Luckily, there are competing voices in the debate, some of whom seem to truly respect and support European innovation and the benefits it will bring to the region. While still a work in progress, the latest draft report from Axel Voss, Rapporteur of the Committee on Legal Affairs, focusing on legal liability in AI systems presents a much stronger example of balancing the benefits and risks to all sides.
The EU, like most countries and regions, already has a system of regulations for the various products that consumers buy and use. If your toaster malfunctions and causes a fire, or your car’s brakes fail, sector-specific rules kick in to protect consumers and encourage manufacturers to clean up their act. Rather than add a new category called “AI”, into which virtually every modern product and service would soon fall, Voss offers the better alternative of simply adapting the existing sector-specific rules to accommodate AI.
Those who fear the unknowable in AI fear a future where advanced AI takes on important control functions in systems where unanticipated behavior could be truly dangerous. In a nod to those fears, Voss proposes a short and specific list of applications where deployer liability would automatically attach. Like strict liability for keeping tigers on your property (due to their “known dangerous propensities”!), unmanned aircraft, fully autonomous vehicles and robots, and specific autonomous systems like traffic management would come with strict liability attached. If you put those systems on the market and someone gets hurt, it’s your fault, period. You’re free to chase after the people further up the supply chain, but the consumer gets covered without questions upfront.
Two things I like about the Voss approach: first, he’s proposing a short and specific list of AI applications that deserve special rules, and second, he places the strict liability burden on those who put the system into the market. By creating a living list, the industry has a clear understanding of where society has drawn the line between cost and convenience on one side and absolute consumer safety on the other. And since the companies capable of deploying such complex systems will undoubtedly be large and sophisticated, Voss places the burden of liability and safety on those in the best position to manage it well.
There is always room for improvement, however. Policymakers need to think harder about product life cycles, updates, and how to anticipate the costs and burdens of policies. If those deploying critical AI must bear the cost of any future damage, then they must be able to define the useful product life and maintain appropriate control during the life cycle to force patches and updates. They can’t be responsible for off-label use, or third-party modifications and sabotage. Without reasonable limits on the duration and scope of their obligations, it’s likely that some systems will become so prohibitively priced that important innovations will never reach the consumer.
Developers who work with AI (a lot of you!) know that it’s simply another evolution in programming and algorithmic problem-solving. While software can often perform tasks much better than people can, there is also a known tendency to hold it to a much higher standard. If this is done as a matter of policy rather than superstition, then so be it. The people have spoken. If it’s done out of fear and ignorance, without concern for costs or benefits, then it needs to be resisted.
We thank Herr Voss for demonstrating how thought and clarity can lead to a profitable balance for all involved and hope others will follow his lead.