Carnegie Mellon University Conference on “Operationalizing the NIST Risk Management Framework”
The Carnegie Mellon University Block Center recently held a conference focused on “Operationalizing the NIST Risk Management Framework,” which GETTING-Plurality Research Network member Sarah Hubbard attended. Attendees spanned government, academia, and industry partners working toward Responsible AI. Below is a short recap of the event.
NIST AI Risk Management Framework
The NIST AI Risk Management Framework, along with its companion NIST AI RMF Playbook, is a flexible, voluntary framework that aims to mitigate the risks of artificial intelligence. The Framework maps suggested outcomes across its four functions — Govern, Map, Measure, and Manage — and the Playbook offers suggested actions organizations may take to achieve those outcomes. The Framework will continue to go through an iterative process of improvements and additions, particularly as the AI space changes rapidly and best practices emerge.
In discussions of the current iteration of the Framework, stakeholders pointed out several challenges: reconciling the Framework with the existing policies and structures within their organizations; enforcing the Framework across departments with varying levels of adoption and with third-party vendors; and the lack of post-implementation impact analysis. Opportunities include writing this language into contracts and procurement practices, as well as building out additional tools and best practices for implementing the Framework.
Looking forward, several areas for improvement were identified: evaluating AI systems within the context of their application rather than in the abstract; conducting pre-deployment testing under post-deployment conditions (waiting until the end of the lifecycle to test is too late); reframing AI to be human-centered rather than system-centered, taking human behavior into account; and measuring impact in multi-agent systems. Others noted that moving forward it will be critical to reconcile the NIST Framework with other frameworks such as the EU AI Act, MAS Veritas, and more.
CMU Research and Resources
On the current use cases of AI, some attendees pointed out that AI systems are more likely to be imposed on vulnerable populations first, and that many systems are developed in isolation from the communities they serve. More attention must be given to all stakeholders' voices within an AI system (e.g., the company, the customer, the worker), as development often rests on fundamental misunderstandings about how work or processes are carried out in practice. Communities and stakeholders must be empowered to provide meaningful feedback, and that feedback must be effectively incorporated back into the development process.
Various research teams and labs demonstrated some of their work. A few resources include:
- CMU Responsible AI: https://www.cmu.edu/block-center/responsible-ai/index.html
- CoALA Lab, Co-Augmentation Learning & AI: https://www.thecoalalab.com/
- WeAudit, tools to help people collectively audit AI: https://forum.weaudit.org/
- Zeno, AI data management and evaluation: https://zenoml.com/
In a discussion on developing training resources, panelists noted that the most common questions they receive from organizations are how to leverage AI (which opportunities offer the highest ROI), when to adopt AI, and what AI actually is in practice. Often, the complexity comes not from technical skills but from figuring out how to embed the technology in an organization and navigating the cultural difficulties that arise. Any training resources should also blend in Responsible AI and demonstrate examples of failures where AI systems were deployed without first establishing the right context.
Other research on explainability, accountability, transparency, fairness, and impact evaluation was also discussed. It is critical to take a nuanced view of the ML pipeline and identify opportunities at each stage in order to build more Responsible AI systems.
The GETTING-Plurality Research Network is honored to have participated in this event and to collaborate with others focused on the ethical and responsible deployment of technology in society.