AI isn’t part of the legal curriculum, and letting lawyers regulate the space is risky. The EU is moving forward with plans to regulate AI anyway – is it up to the task?
Whether you love or hate GDPR, if you write code for an EU audience you’re stuck with it. Developers have themselves to blame. Our audiences told us to improve, but we were too slow to step up and safeguard privacy even as the horror stories started to emerge. Gift or curse, the EU was wildly successful in filling the vacuum with GDPR. Now it has exported its interpretation of universal privacy values to much of the world.
A sequel is in the works. Europe is going to establish and then export the rules for AI as well.
Unlike privacy, examples of AI run amok are hard to find, at least in the real world. Pop culture long ago instilled a distrust of intelligent machines. The Terminator and Skynet, HAL 9000, Octagon – science fiction is filled with stories of domination by superhuman, unknowable computer intelligences. Something must be done!
There is a political philosophy built into the European experiment called the “precautionary principle”. The idea is that one of government’s roles is to protect society from potential harms before they happen and people get hurt. Regulate nukes *before* the bombs start dropping. Test medicines in clinical trials *before* they reach patients. Reasonable stuff, really.
Where the principle breaks is when it’s aggressively applied to areas of general social advancement – imagining a parade of horribles due to the misuse of a general-purpose technology. Like AI.
This week, the German Data Ethics Commission released an Opinion on data and algorithmic systems which will certainly influence where the EU takes things. The full report is written in German, and I don’t sprechen Sie Deutsch. Luckily, there is an executive summary in English. Not for the faint of heart, either way.
My review is in two parts, like the report. The Commission’s take on data – how to think of it, its meaningful attributes, the risks, and the benefits – is a solid step-and-a-half toward a sound model. It maps well to how we at the Developers Alliance see the world. They correctly use themes like “rights in data,” as opposed to ownership. They identify the non-rivalrous nature of data as key to how to view it in an economic and legal context. They talk of rights and complementary obligations, and of personal versus non-personal data.
On the other hand, the report’s proposals to create new agency mandates to oversee things feel like regulatory creep. Raised in Canada, living in the US, having spent many years working for a European company, I admit my bias is for less government control. I prefer more principles-based rules that nudge the market in the preferred direction as gently as possible.
Bottom line: I give them a “B” on data. There is some sophisticated policy thinking here, but it is all a bit heavy-handed. Good progress, though.
The second part of the report covers algorithmic systems – a much broader scope than AI – and the analysis goes off the rails almost immediately. The authors start with a reasonable taxonomy, but then careen wildly into the world of the precautionary principle and quickly get lost in models and conjecture. Grade: “D” on this one.
AI is a tool. It doesn’t cure cancer, it helps people *find* a cure for cancer. It doesn’t replace the driver, it augments the car. It may be virtual, but it fits easily into our current legal and regulatory regime because it is human-created, human-targeted, and human-controlled. Confusing the tool with its creator and user leads the authors on a strange and unnecessary diversion into models and thresholds and talk of new liabilities and accreditations. The report’s best observation is that AIs are not people, and don’t need legal rights of their own.
Without a tangible baseline, the theoretical approach has produced ideas without a real-world anchor point. My recommendation to policymakers is to repeat the assessment: use smart thermostats, smart door locks, and perhaps self-driving cars as test cases, and walk through how “dumb” versions of these same tools are regulated. Then look at what happens when a human operator or manufacturer is added to that base case, and see what adaptation would be needed to accommodate AI in the mix.
On the other hand, we all need to acknowledge that AI will make the previously impossible possible, and the possible (deep fakes, the surveillance state, autonomous weapons systems) more formidable. We need to collectively decide where to place the guardrails so that there is broad space for experimentation. We also need clear boundaries, however, that should not be crossed. The cases before us are already well categorized by those working in the field, and if it were up to me, they would be the primary focus for precautionary work, coupled with a global dialog on these pending global issues.
Despite the shortcomings, I’m encouraged to see academics from the legal, policy, and technical spheres taking on a difficult subject like this. Dialog is important, and we have to start somewhere. With luck, future iterations of this report will use tighter analogies to home in on the thinking. Rest assured I’ll weigh in again as the thinking evolves.
Or as the old quote goes: “I’ll be back…”