AI guardrails vs. ‘guiderails’: Navigating the curvy road ahead


Generative AI adoption and experimentation have exploded in a remarkably short time. According to the business intelligence firm Domo, ChatGPT receives approximately 7,000 prompts (or questions) every minute of every day. And as the technology improves and becomes more familiar to people, that number will likely continue its steep climb.

Among early AI adopters are many state and local government employees. Most raise their hands when asked whether they have tried a generative AI application, but many say they use the technology privately, out of sight of their supervisors. That’s because public sector managers and policymakers continue to insist on developing and enforcing guardrails that often restrict AI’s use, whether intentionally or not.

Guardrails typically set boundaries or constraints to prevent systems from operating outside certain predefined limits. These limits include ethical considerations, legal requirements, safety measures and performance thresholds. Guardrails act as safeguards to ensure that systems do not cause harm or deviate from their intended purpose. For example, guardrails might limit a system’s decisions or actions to ensure they comply with a governmental authority’s ethical guidelines or regulatory standards.
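
In software terms, a guardrail often amounts to a pre-flight check that blocks a request before it ever reaches a model. The Python sketch below is a minimal, hypothetical illustration of that pattern; the FORBIDDEN_PATTERNS list and check_guardrails function are invented for this example and do not reflect any particular agency policy or vendor API.

```python
# Minimal sketch of a "guardrail": a hard stop applied before a prompt
# reaches a generative AI service. All names here are hypothetical.

FORBIDDEN_PATTERNS = [
    "social security number",   # privacy limit
    "home address",             # privacy limit
    "impersonate",              # ethical limit
]

def check_guardrails(prompt: str) -> bool:
    """Return True if the prompt stays inside the predefined limits."""
    lowered = prompt.lower()
    return not any(pattern in lowered for pattern in FORBIDDEN_PATTERNS)

if __name__ == "__main__":
    prompt = "Draft a press release announcing the new park hours."
    if check_guardrails(prompt):
        print("Allowed: request may be sent to the AI service.")
    else:
        print("Blocked: request falls outside a predefined limit.")
```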

However, most agency policies that serve as AI guardrails are filled with either aspirational statements proclaiming “AI for good” or comprehensive lists of do-nots. No wonder many state and local government employees experiment outside policies they find too general, unrealistic or overly restrictive. Such policies and guidelines have a chilling effect that reduces creativity, productivity and careful experimentation.

There is certainly a genuine need for caution, especially when generative AI developers themselves have great difficulty explaining how data is processed and how results are produced inside what is often referred to as a “black box.” State and local government senior managers are also justified in their concerns about generative AI’s compliance with privacy and copyright laws.

Sound AI policies are necessary and should be required. But at the same time, there must be a recognition of the huge difference between staff using generative AI for personal productivity and creativity versus agencies deploying public-facing government applications like chatbots or enlisting AI-powered data analytics tools for internal planning and improved decision-making. Each user category requires a different set of parameters.

Instead of heavily relying on guardrails, state and local governments need “guiderails.” Unlike guardrails, which primarily focus on preventing undesirable outcomes, guiderails are more proactive in nature and are used to steer or guide the behavior of AI systems toward desired outcomes. Guiderails provide guidance or recommendations to AI systems to help them optimize performance, make better decisions and achieve desired objectives. For instance, guiderails might include best practices, algorithms or feedback mechanisms designed to steer the decision-making process of AI systems toward expected outcomes.
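
Where a guardrail blocks, a guiderail steers. One way to picture the difference in code is a guidance layer that wraps every request with agency-approved instructions and records feedback so the guidance can improve over time. The sketch below is a simplified assumption of how such a layer might look; the AGENCY_GUIDANCE text and function names are illustrative, not a prescribed standard.

```python
# Minimal sketch of a "guiderail": instead of blocking a request,
# steer it toward the desired outcome. All names are hypothetical.

AGENCY_GUIDANCE = (
    "You are assisting a local government employee. "
    "Write in plain language, cite sources where possible, "
    "and flag any content that may require human review."
)

def apply_guiderails(user_prompt: str) -> str:
    """Wrap a raw prompt with agency guidance that steers the output."""
    return f"{AGENCY_GUIDANCE}\n\nTask: {user_prompt}"

def record_feedback(prompt: str, rating: int, log: list) -> None:
    """Capture employee ratings so the guidance can be refined over time."""
    log.append({"prompt": prompt, "rating": rating})

if __name__ == "__main__":
    feedback_log: list = []
    steered = apply_guiderails("Summarize last night's council meeting.")
    print(steered)
    record_feedback(steered, rating=5, log=feedback_log)
```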

Further, guiderails can help employees increase their productivity by encouraging safe use of approved AI applications, such as technology that helps:

  • Improve writing by helping organize thoughts, create outlines and edit drafts.
  • Schedule both live and virtual meetings and summarize discussions.
  • Gather news and information on topics of interest and research new or unfamiliar subjects.
  • Generate computer code for internal applications.
  • Create illustrations and charts from data.
  • Translate between written and spoken languages.

Simply put, guardrails are about setting boundaries and constraints to prevent undesirable outcomes, while guiderails are about actively guiding and shaping the behavior of AI systems to deliver desired goals or outcomes.

The next issue is responsibility for AI adoption. Who should play the role of the AI police and determine which applications to experiment with and which to avoid? President Joe Biden’s “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” calls for establishing a chief AI officer in every federal agency. While this order has no direct effect on state and local governments, the idea of putting someone in charge of AI initiatives is warranted. Some resource-challenged agencies may balk at the time and expense of adding headcount, so, at least initially, assigning this responsibility to an existing staff member may be a reasonable start. Two likely candidates would be the chief information officer or the chief data officer. A chief AI officer (or equivalent) would coordinate AI applications and exploratory efforts, perhaps create and lead an “AI application review committee,” provide safe sandboxes where staff can learn and play with AI, and test new and promising applications on a pilot basis. Employees could also form groups that meet regularly to share what they have learned.

Chief AI officers developing guiderails to direct an agency’s AI journey should be sure to erect the obvious caution signs for staff to follow:

  • Never use AI to impersonate an individual’s voice, video or picture.
  • Always display a disclaimer when AI is used in an application.
  • Never enter any document or spreadsheet containing personally identifiable information into a public-facing generative AI application (a minimal screening sketch follows this list).
  • Always check for any signs of intentional or unintentional bias and the potential for ethical violations.
  • Mandate human oversight of AI systems to ensure that AI decisions align with legal and ethical standards.
  • Establish mechanisms for accountability in cases where AI systems cause harm or violate ethical standards.
  • Foster open dialogue and engagement with the public regarding government agencies’ use of AI. Promote transparency by making information about AI systems and their usage publicly accessible, subject to appropriate privacy and security considerations.
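
To make the personally identifiable information caution above concrete, the following sketch screens text for a few common identifier patterns before anything is pasted into a public-facing tool. The regular expressions are deliberately simplified assumptions; a production deployment should rely on a vetted PII-detection library and human review.

```python
# Illustrative PII screen for the caution above. The patterns are
# deliberately simplified; real agencies should use vetted tooling.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(text: str) -> list[str]:
    """Return the names of any PII patterns detected in the text."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

if __name__ == "__main__":
    draft = "Contact Jane at 555-867-5309 or jane@example.gov."
    hits = find_pii(draft)
    if hits:
        print(f"Do not submit: possible PII detected ({', '.join(hits)}).")
    else:
        print("No obvious PII found; still review before submitting.")
```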

AI is constantly evolving, and there are many twists and turns ahead with each new product or service. While state and local governments should exercise caution, they should set up guiderails that make AI safe and easy to use, encourage experimentation and give staff room to explore. Guardrails may be necessary, but avoiding roadblocks is essential.

Dr. Alan R. Shark is the Executive Director of the Public Technology Institute (PTI) and Associate Professor at the Schar School of Policy and Government, George Mason University, where he is also an affiliate faculty member at the Center for Advancing Human-Machine Partnership (CAHMP). Shark is a National Academy of Public Administration Fellow and Co-Chair of the Standing Panel on Technology Leadership. Shark also hosts the bi-monthly podcast Sharkbytes.net. Dr. Shark acknowledges collaboration with generative AI in developing certain materials.




