CFTC Sharpens Focus on Artificial Intelligence with Appointment of Chief Artificial Intelligence Officer
Key Takeaways
- On May 1, 2024, the U.S. Commodity Futures Trading Commission (“CFTC” or the “Commission”) announced Dr. Ted Kaouk as the Commission’s first Chief Artificial Intelligence (“AI”) Officer.
- As part of his mandate, Dr. Kaouk will seek to identify how AI can be used to improve the CFTC’s oversight and enforcement responsibilities.
- Dr. Kaouk’s appointment is one of several technology-related initiatives at the Commission, many of which focus on maintaining a dialogue with the public. The CFTC will also work to create a regulatory framework to ensure the effective and safe use of these technologies in Commission-regulated markets.
Background
A Unified Government Approach on AI
In October 2023, the Biden Administration released an Executive Order on governing the development and use of AI in a safe and responsible manner while working with federal agencies to regulate the technology and distribute its benefits. The Executive Order, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (Biden EO), followed a number of actions reflecting a government-wide approach to achieving unified guidance on AI, including (1) the White House’s Blueprint for an AI Bill of Rights concerning the design, use and deployment of automated systems; (2) the National Institute of Standards and Technology’s (“NIST”) AI Risk Management Framework 1.0; and (3) the White House’s effort to obtain “AI commitments” from leading companies to promote the safe, secure and transparent development of generative AI.
Consistent with promoting a coordinated response to AI and associated technologies, the Biden EO focuses, in part, on the government’s own use and deployment of AI, recognizing the potential for AI to improve regulation and governance while cutting costs and enhancing the security of government systems. To that end, the Biden EO provides that modernizing the federal AI infrastructure includes (1) issuing guidance for federal agencies’ use of AI aimed at protecting rights and safety; (2) helping agencies acquire AI products and services more efficiently and cost-effectively; and (3) accelerating the hiring of AI professionals, as well as AI training for employees at all levels in relevant fields.
In furtherance of this directive, in November 2023, Vice President Kamala Harris announced the Office of Management and Budget’s draft policy “Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence.” The draft sought public comment on proposals concerning AI governance structures in federal agencies, including the designation of a Chief AI Officer and the responsibilities associated with that position. The final government-wide policy, announced in March 2024, requires federal agencies to designate Chief AI Officers and to establish AI Governance Boards. Several agencies, including the U.S. Department of Health and Human Services and the Department of Justice, have already appointed Chief AI Officers.
CFTC Appoints Chief AI Officer
On May 1, CFTC Chair Rostin Behnam announced the designation of Dr. Ted Kaouk as the agency’s first Chief AI Officer. Dr. Kaouk is currently the CFTC’s Chief Data Officer and Director of the Division of Data. Before joining the CFTC, Dr. Kaouk served as the Chief Data Officer and Responsible Official for AI at the Office of Personnel Management and as the Chief Data Officer for the U.S. Department of Agriculture.
The mission of the CFTC’s new Chief AI Officer is to identify ways in which AI can enhance and improve the CFTC’s oversight, surveillance and enforcement in the markets it oversees. The CFTC’s press release announcing Dr. Kaouk’s designation does not mention any role he will play in designing regulation of, or around the use of, AI. The logical inference, however, is that Dr. Kaouk will have a role in that part of the CFTC’s mission as well: on the day following the announcement, CFTC Commissioner Kristin Johnson announced the CFTC’s agenda to assess risks associated with the integration of AI in the financial markets, including a “principles-based framework to assess the risks of integrating certain AI technologies” in CFTC-regulated markets and heightened penalties for misuse of the technology.
CFTC’s Other Recent AI-Related Initiatives
Technology Advisory Committee (“TAC”) Subcommittee Report and “AI Day”
On May 2, the TAC Subcommittee on Emerging and Evolving Technologies put forward a report and recommendations to the CFTC on responsible AI in financial markets. The report is intended to facilitate an understanding of the impact of AI on the financial markets, and it makes five recommendations to the CFTC:
- Public Engagement: The CFTC is encouraged to host a public roundtable discussion and to engage directly with CFTC-registered entities to gain insight into the types of AI technologies most frequently used. The purpose of this roundtable is to inform the CFTC about key technical and policy considerations regarding AI use in the financial markets.
- Definition and Adoption of an AI Risk Management Framework: Adoption of a risk management framework for the sector could promote understanding of the norms and standards being developed by NIST. Introducing those practices to CFTC-regulated industries could make the financial markets and the regulatory system more resilient to emerging AI technologies.
- Creation of an Inventory of Existing Regulations Related to AI and Gap Analysis: The TAC recognizes that existing regulations already require the management of risk; however, there may be a need to clarify guidance or engage in additional rulemakings.
- Establishment of Processes to Gain Alignment with AI Policies of Other Federal Agencies: The CFTC is encouraged to leverage best practices implemented by other agencies, including the U.S. Securities and Exchange Commission (“SEC”) and others concerned with the stability of the U.S. financial markets.
- Staff Engagement: The TAC encourages onboarding and engagement of staff to build out a pipeline of AI experts.
In her remarks about the report, Commissioner Christy Goldsmith Romero recognized AI as a valuable tool for improving automated processes and highlighted its ability to detect unusual activity in real time, but she also noted its potential to create or exacerbate market instability. She emphasized the need for data governance: if some registered entities cannot put sound data governance in place, the CFTC may need to propose rules or issue guidance so that firms can access AI models in a way that balances the model creator’s intellectual property interest with the user’s need to manage and report its trading data. The Commissioner suggested that the CFTC and market participants make it a high priority to identify the specific risks most relevant to CFTC-regulated markets.
On May 2, the TAC, sponsored by Commissioner Goldsmith Romero, held an “AI Day” where, among other things, the Subcommittee presented its findings in the report. In her opening remarks, the Commissioner emphasized the TAC’s efforts to examine what she referred to as “responsible AI,” including with respect to fairness, transparency, explainability of outputs, safety and security. She noted that, within this study, governance is critical to ensuring responsible AI, and that a “whole of government approach” will allow the CFTC to leverage existing practices and resources of other federal agencies to harmonize an approach to engaging with and using these technologies.
The Commissioner also highlighted the work the CFTC and the TAC are doing to prevent harm to U.S. markets caused by AI and its users. Specifically, she noted that the CFTC must be at the forefront of the examination of the safety and trustworthiness of the U.S. financial markets, where investor trust is paramount to their functioning. As a market regulator, the CFTC has a mandate to promote the integrity, resiliency and vibrancy of markets. To that end, the TAC has examined the potential for AI to harm markets or undermine trust in them, and it will continue to support responsible innovation in AI.
What’s Old Is New Again
Of course, the focus on AI is not wholly new for the CFTC. For decades, it has worked to keep up with innovative technologies, from both a policy and practice perspective. For example, it launched the TAC in 1999 to explore the intersection of technology, law and policy in the CFTC’s regulated markets. The TAC meets regularly, convening the public and private sectors to discuss current issues presented by technology and report its findings to the Commission. There are presently three subcommittees: (1) digital assets and blockchain, (2) emerging and evolving technologies, and (3) cybersecurity.
The CFTC also created LabCFTC in 2017, whose mission centered on providing regulatory certainty to innovators and users of innovative technology and on identifying technologies the CFTC could use in carrying out its oversight and enforcement mission. In late 2019, LabCFTC issued an AI primer in which it identified AI as a technology with the potential to enhance CFTC-regulated markets and committed to engaging relevant stakeholders to promote it. The CFTC’s interest is twofold: (1) to identify ways in which AI can enhance the CFTC’s technology and oversight, and (2) to ensure that CFTC regulations are appropriate for AI and its use in the markets the CFTC oversees. LabCFTC later became the Office of Technology Innovation under Chairman Behnam.
These efforts have seemingly crystallized this year. On January 25, the CFTC issued a request for comment on the use of AI in CFTC-regulated markets. The Commission requested comment on the definition of AI and its application, including how it is being used in risk management, trading, compliance, recordkeeping, data processing and analytics, customer interactions, and cybersecurity. The Commission also asked for input on the risks of AI, particularly in the markets the CFTC oversees.
Ahead of the April 24 deadline, the CFTC received fewer than 25 comments, including letters from exchanges and other market participants such as Robinhood, Coinbase, Cboe, Nodal and Nasdaq, as well as a joint letter from the Futures Industry Association, the FIA Principal Traders Group, CME Group Inc. and Intercontinental Exchange Inc. The Commission also received comments from the Securities Industry and Financial Markets Association, the U.S. Chamber of Commerce, and S&P Global Commodity Insights. The Commission will use those comments to chart next steps on regulating the use of AI by its registrants and in its markets, which could include rulemaking, interpretive guidance or policy statements.
Looking Ahead
The government is increasingly focused on the opportunities and threats presented by emerging technologies. On Capitol Hill, both the Senate and the House have an AI Caucus. The Senate also has a bipartisan AI working group, and the House of Representatives has a bipartisan Task Force on AI. Legislation to regulate the use of AI has been introduced, and more will follow. The expectation is that agencies, with their new Chief AI Officers, will also engage with the technology and with innovators to make the best use of available tools for their own purposes.