Expert guidance from actual experts
The challenge with regulating future-focused tech is that there’s a chronic shortage of technical experts in government. It doesn’t matter how smart the lawyers are; if they don’t understand it, they can’t write coherent rules around it. On the other hand, when you give the task to an organization packed with experts who do this stuff for a living, something useful just might emerge. AI is the latest example of that.
The US government’s National Institute of Standards and Technology, one of the world’s preeminent standards bodies, today released its Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST AI 100-1. You know these guys get it when the PDF has a widget on the front page that lets you check for updated versions. A living document – yes please! In contrast with efforts elsewhere, NIST ran a lengthy stakeholder process to produce a document full of valuable procedures, insights, and guidance that helps engineers, developers, and scientists both articulate and manage AI risk in a common language, using a framework that security and process professionals already know and use.
Over and above being actually useful and clearly future-proof, the framework contains a number of insights and principles that caught my eye – insights that would be incredibly powerful if the EU acknowledged them in its overly proscriptive proposed Artificial Intelligence Act.
First, the NIST RMF acknowledges up front that the goal of the framework is to “…minimize anticipated negative impacts of AI systems and identify opportunities to maximize positive impacts.” The explicit acknowledgment that risk is a balancing exercise is simply common sense elsewhere in life, but seems to have been far from the minds of regulators focused on AI harms. While the EU proposal waves an arm and points to boilerplate praise for what AI can enable, the actual regulation ignores balance in favor of proscription.
Second, NIST understands that there may be many market actors between the originator of an AI system and its users. In NIST’s words, “Risk metrics or methodologies used by the organization developing the AI system may not align with the risk metrics or methodologies used by the organization deploying or operating the system.” This is a key insight: risk management must take place at each layer – it’s ineffective to pick a single player and allocate all the effort to them. The developers of generic AI tools cannot anticipate every potential use – that task should be shifted downstream, closer to those deploying the tools. If everyone adopts a common framework, the entire effort is cohesive and effective. Well done, NIST.
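To make that layering concrete, here is a minimal sketch of a shared risk register that a developer could publish and a deployer could extend. The names and numeric scoring are mine, purely for illustration – nothing in the RMF prescribes this structure:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One identified risk and who owns its mitigation."""
    description: str
    likelihood: float  # rough estimate, 0.0-1.0
    impact: float      # normalized severity, 0.0-1.0
    owner: str         # "developer" or "deployer"

@dataclass
class RiskProfile:
    """Shared risk register handed from developer to deployer."""
    system_name: str
    entries: list[RiskEntry] = field(default_factory=list)

# The developer records the risks visible at build time...
profile = RiskProfile("generic-text-classifier")
profile.entries.append(
    RiskEntry("training data bias", likelihood=0.3, impact=0.7, owner="developer"))

# ...and the deployer appends the use-case-specific risks it is best placed to judge.
profile.entries.append(
    RiskEntry("misuse in automated hiring decisions", likelihood=0.2, impact=0.9, owner="deployer"))
```

The point is not the data structure itself, but that each actor records the risks it is best positioned to see, in a form the next actor down the chain can pick up and extend.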
Third, NIST surfaces a key issue in AI risk management – that some systems are simply opaque and inscrutable – and uses the framework to break this potential problem down as part of the overall risk management process. Taking the characteristics that make AI systems opaque and working through each of them to increase transparency will ultimately do far more to produce trustworthy systems than simply outlawing random categories of AI tools.
Fourth, there is a growing theme in the regulatory community that risk can be completely eliminated if only the engineers work harder. This, of course, ignores the reality of life on planet Earth, where every act and decision we make is a risk/reward tradeoff, and only death and taxes are certain. Systems will have bugs, people will make mistakes, and you will spill coffee on your pants one day, no matter how hard you try to avoid these things. Or as NIST says, “Attempting to eliminate negative risk entirely can be counterproductive in practice because not all incidents and failures can be eliminated.” Best save some resources for mitigation measures. There will always be “residual risk”, and the framework takes the time to focus on this rather than wishing it away or punishing people for the unavoidable.
Fifth, NIST highlights that the best approach to risk management is to bring in a diversity of views and frames of reference; in their words “[i]deally, AI actors will represent a diversity of experience, expertise, and backgrounds and comprise demographically and disciplinarily diverse teams.” Risks look different to different stakeholders, as do benefits. Stakeholder input is critical, and should be built into the process.
Sixth, the RMF talks about the use of AI operational boundaries – the idea that within certain limits and under certain conditions, risk is managed or manageable. Once defined in these terms, risk management might be layered, with one framework appropriate inside the boundaries, and another on the outside. Treating systemic risk in this in-bounds/out-of-bounds context allows for both bright-line regulation and enough flexibility to innovate where it’s safe to do so.
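The RMF stays at the process level, but as an illustration, an operational boundary can be as simple as an explicit check that routes anything outside the validated envelope onto a more conservative path. Every condition and function name below is hypothetical:

```python
def within_operational_bounds(request: dict) -> bool:
    """Hypothetical boundary: the system was only validated for short
    English text from authenticated users."""
    return bool(
        request.get("language") == "en"
        and len(request.get("text", "")) < 10_000
        and request.get("authenticated", False)
    )

def run_model(request: dict) -> str:
    # Stand-in for the real model call.
    return "model output"

def escalate_to_human_review(request: dict) -> str:
    # Stand-in for a stricter out-of-bounds path: refuse, degrade, or queue for a person.
    return "held for human review"

def handle(request: dict) -> str:
    if within_operational_bounds(request):
        # Inside the boundary: the lighter-touch risk controls apply.
        return run_model(request)
    # Outside the boundary: a different, more conservative framework takes over.
    return escalate_to_human_review(request)
```

Once the boundary is written down this explicitly, the in-bounds/out-of-bounds split the RMF describes becomes something regulators and engineers can both point at.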
Finally, NIST’s well-articulated taxonomy for trustworthiness is a gift to the diverse actors who share a stake in the success of AI. The structure should resonate with everyone from civil society to scientists. Again, in NIST’s words, “[c]haracteristics of trustworthy AI systems include: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair with harmful bias managed.” The RMF then goes on to talk through the various elements, adding the insight that “[u]ltimately, trustworthiness is a social concept that ranges across a spectrum and is only as strong as its weakest characteristics.” I encourage the developer community to reflect on that in all their projects, AI or not.
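One way to internalize “only as strong as its weakest characteristics” is to treat the seven characteristics as a profile whose summary is the minimum rather than the average. The sketch below borrows the RMF’s characteristic names, but the numeric scores are purely illustrative:

```python
# Hypothetical assessment scores (0-1) for one system, keyed by NIST's characteristic names.
trustworthiness = {
    "valid_and_reliable": 0.90,
    "safe": 0.80,
    "secure_and_resilient": 0.85,
    "accountable_and_transparent": 0.70,
    "explainable_and_interpretable": 0.40,  # the weak spot
    "privacy_enhanced": 0.80,
    "fair_with_harmful_bias_managed": 0.75,
}

# "Only as strong as its weakest characteristics": summarize with min(), not mean().
weakest = min(trustworthiness, key=trustworthiness.get)
print(f"overall trustworthiness ~ {trustworthiness[weakest]:.2f}, limited by {weakest}")
```

Averaging would let a strong privacy story paper over a system nobody can explain; taking the minimum keeps the weak spot front and center.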
One additional concept in the report is worth noting: even the best risk management efforts will eventually be tested by bad actors. Regulators who focus only on harm as an outcome of negligence or incompetence overlook the fact that there are very real threats targeting AI systems and those who deploy them. Penalizing the victim is a poor incentive for safeguarding society, but it is increasingly the trend as government offloads its responsibilities onto private actors. The abuse or misuse of AI by adversaries, “data poisoning”, and other concerns emerging from the defense sector are just as relevant in the context of security, hackers, and non-state actors. Regulators should work in harmony with developers to tackle these threats.
Risk is an inevitable part of life. For developers and engineers, the goal might be to design a system where “[t]he AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits.” That makes sense to the tech community, and it should provide a regulatory direction for any future laws and guidelines. NIST’s guidance on feedback mechanisms; on use-case, temporal, and cross-sectoral risk profiles; and on transparency around incidents and errors rounds out a rational framework that supports the business case rather than punishes it.
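Read as a deployment gate, that sentence boils down to two conditions. Here is a toy sketch; the likelihood-times-impact scoring and the threshold are assumptions of mine, not anything NIST prescribes:

```python
RISK_TOLERANCE = 0.05  # set by the deploying organization, not the developer

def residual_risk(likelihood: float, impact: float, mitigation: float) -> float:
    """Toy scoring: the risk left over after controls; it never reaches zero."""
    return likelihood * impact * (1.0 - mitigation)

def ready_to_deploy(likelihood: float, impact: float,
                    mitigation: float, fails_safely: bool) -> bool:
    # Deploy only if residual risk is within tolerance AND the system
    # degrades safely when pushed beyond its knowledge limits.
    return residual_risk(likelihood, impact, mitigation) <= RISK_TOLERANCE and fails_safely

# 0.2 * 0.5 * 0.4 = 0.04 residual risk, within the 0.05 tolerance -> True
print(ready_to_deploy(likelihood=0.2, impact=0.5, mitigation=0.6, fails_safely=True))
```

However the numbers are produced in practice, the shape of the decision is the same: residual risk within a stated tolerance, plus a credible fail-safe story.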
We encourage regulators everywhere to read the report and adopt the framework in their own jurisdiction. Thanks, NIST!