Responsible Artificial Intelligence Governance Insights


The development of AI continues to advance at a blistering pace, increasing the need for companies to employ AI governance and adopt policies for the responsible development and deployment of AI. While the term “responsible AI” is used frequently, it is rarely well understood and often complex to put into practice. Fortunately, a growing body of resources is becoming available to help companies understand and implement responsible AI. Two of the more recent resources are publications by NIST (the National Institute of Standards and Technology) and Microsoft. These publications provide examples of each institution’s efforts to develop best practices for responsible AI development.

What is responsible AI?

According to NIST, responsible AI embodies a set of criteria that must be balanced based on a particular AI system’s context of use. Under these criteria, an AI system must be:

  • valid and reliable
  • safe, secure, and resilient
  • accountable and transparent
  • explainable and interpretable
  • privacy-enhanced, and
  • fair, with harmful bias managed.

One of the actions mandated by the White House Executive Order on AI was for NIST to update its January 2023 AI Risk Management Framework (AI RMF 1.0), which it has now done. To this end, NIST released four draft publications intended to help improve the safety, security, and trustworthiness of AI systems and launched a challenge series to support the development of methods for distinguishing between content produced by humans and content produced by AI. The AI RMF 1.0 is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. For more information on these four new publications, see NIST Updates AI RMF as Mandated by the White House Executive Order on AI.

Additionally, in July 2023, Microsoft committed to publishing an annual report on its responsible AI program, and it recently published its inaugural Responsible AI Transparency Report. The report sheds light on how Microsoft builds generative AI applications responsibly, how it makes decisions about releasing generative AI applications, and how it supports customers as they build their own AI applications. The report notes that Microsoft’s program builds on NIST’s AI RMF, which requires teams to map, measure, and manage risks for generative applications throughout their development cycle.

Some of the interesting takeaways from the 40-page report include the following:

  • Microsoft implemented mandatory training for all employees to increase the adoption of responsible AI practices. In our view, one of the most important things companies need to do is ensure their employees are trained on AI legal issues.
  • In 2023, Microsoft used its Responsible AI Standard to formalize a set of generative AI requirements, which follow a responsible AI development cycle. These requirements align with the core functions of the NIST AI RMF (govern, map, measure, and manage), with the aim of reducing generative AI risks and their associated harms. This involved putting in place policies, practices, and processes for AI governance. In our view, another of the most important things companies need to do is develop and implement policies on the development and deployment of AI.
  • It identifies risks through a combination of threat modeling, responsible AI impact assessments, customer feedback, incident response and learning programs, external research, and AI red teaming.
  • It uses systematic measurement to evaluate application and mitigation performance against defined metrics.
  • Once risks have been mapped and measured, they are managed across two layers of the technology – platform and applications – to provide a “defense in depth” approach to mitigating risks.
  • It empowers customers with responsible AI tools and features in three ways: i) it stands behind customers’ deployment and use of AI through its AI Customer Commitments; ii) it builds AI tools for its customers to use in developing their own AI applications responsibly; and iii) it provides transparency documentation to customers with important information about its AI platforms and applications. Notably, some of the commitments (e.g., indemnification) require customers to meet a set of preconditions, which a customer must demonstrate it has satisfied to benefit from the commitment, so it is important to understand and implement these preconditions.
  • Microsoft’s AI governance starts at the top with CEO Satya Nadella and includes, among other things, a Responsible AI Council and an Office of Responsible AI. In our view, all companies should adopt some form of AI governance committee to address AI-related issues and to develop policies for the responsible development and deployment of AI.
  • Looking forward, the report notes that Microsoft will continue to invest in four key areas to enable the scaling of responsible AI across the industry:
    • Innovating new approaches to responsible AI development in its own products – it will continue to develop policies, tools, and solutions to mitigate risks in its AI products.
    • Creating tools for customers to develop responsibly – in addition to using innovative tools to protect its own users, Microsoft makes them available to customers through tools like Azure AI Content Safety and Azure AI Studio (a brief illustration of this kind of tooling appears after this list).
    • Sharing learnings and best practices with the responsible AI ecosystem at large – to expand and improve the collective playbook of responsible AI best practices, Microsoft will continue to share its learnings from deploying AI in a safe, secure, and trustworthy manner, provide updates through its own channels like the Microsoft On the Issues blog, and participate in multistakeholder organizations where it can both share learnings and learn from experts outside the company.
    • Supporting the development of laws, norms, and standards via broad and inclusive multistakeholder processes – it will embrace global AI governance and do its part to ensure that laws, norms, and standards are developed via broad and inclusive multistakeholder processes.
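
To make the customer-tooling point concrete, below is a minimal sketch of how a deployer might screen text with the Azure AI Content Safety service before handing it to a generative AI application. This is our own illustrative Python example based on Microsoft’s published azure-ai-contentsafety SDK, not an excerpt from the report; the environment variable names and the severity threshold are assumptions you would adapt to your own deployment.

# A minimal sketch (not from the report): screening input text with the
# Azure AI Content Safety Python SDK before it reaches a generative AI app.
# Assumes `pip install azure-ai-contentsafety` and that the two environment
# variables below (hypothetical names) point at your own Azure resource.
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    # Analyze the text across the service's harm categories and require every
    # category's severity to stay at or below the threshold. The threshold of
    # 2 is an illustrative policy choice, not a Microsoft default.
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all(
        (analysis.severity or 0) <= max_severity
        for analysis in result.categories_analysis
    )

if is_safe("Tell me about responsible AI governance."):
    print("Input passed content safety screening.")
else:
    print("Input blocked by content safety policy.")

A gate like this is an application-layer mitigation in the “defense in depth” approach described above; it complements, rather than replaces, the platform-level safety systems built into the underlying services.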

We applaud Microsoft for the transparency and thoughtfulness that went into this report. Hopefully, it will inspire other leaders in the AI space to follow suit. It is in everyone’s best interest to share and adopt best practices for responsible AI to ensure that the benefits of AI are realized while avoiding the societal harms that can arise when such practices are neglected.

For companies getting into the AI game, whether developing or deploying AI, we strongly encourage you to take three actions:

  1. Establish an AI governance body for your organization.
  2. Obtain training (and regular updates) on AI legal issues so that members of the governance body stay informed of the rapidly changing legal and regulatory landscape.
  3. Adopt and implement written policies on the development and deployment of AI.

For more information, see Why Companies Need AI Legal Training and Must Develop AI Policies, and feel free to reach out if you have any questions or need any help.
