SIEPR's Daniel Ho testifies on Capitol Hill, gives input to lawmakers on AI policy
Stanford law professor Daniel Ho recently testified before a House subcommittee and co-wrote two letters to the Office of Management and Budget (OMB) on how the government should best strengthen artificial intelligence governance, further innovation, and manage risk.
Ho, the William Benjamin Scott and Luna M. Scott Professor of Law and director of the Stanford Regulation, Evaluation, and Governance Lab (RegLab) at Stanford Law School, is also a senior fellow at the Stanford Institute for Economic Policy Research (SIEPR). He testified before the U.S. House Subcommittee on Cybersecurity, Information Technology, and Government Innovation on Dec. 6 on matters relating to President Biden's recent executive order on AI and the OMB's related draft policy. The draft policy provides direction to federal agencies on how to strengthen AI governance, innovation, and risk management.
In his testimony, Ho recommended six actions Congress should take to achieve a robust government AI policy that "protects Americans from bad actors and leverages AI to make lives better."
Among his recommendations: Congress must support policies that give the agencies' Chief AI Officers flexibility and resources to "not just put out fires, but craft long term strategic plans." Additionally, Ho said, the government must enable policies, including public-private partnerships, that will allow it to attract, train, and retain AI talent and provide pathways into public service for people with advanced degrees in AI.
Ho, who is also a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), serves on the National Artificial Intelligence Advisory Committee (NAIAC). He and others at RegLab have worked extensively with government agencies around technology and data science.
Ho also testified earlier this year, in May, before the Senate Committee on Homeland Security and Governmental Affairs, providing key insights on AI in government.
His latest recommendations come less than two months after President Biden signed the executive order on the safe, secure, and trustworthy development and use of artificial intelligence, which sets new standards for AI safety and security and aims to position the United States as a leader in the responsible use and development of AI in the federal government. In response, on Nov. 1, the OMB issued a call for comment on a draft policy providing guidance to agencies on AI governance, innovation, and risk management.
Ho and other prominent law and tech leaders, including other Stanford scholars, wrote two letters to the OMB, noting how critical the moment is for getting technology policy right and commending the OMB for its "thoughtful approach to balancing the benefits of AI innovation with responsible safeguards."
The first letter to the OMB applauds the proposed guidance to create Chief AI Officer roles that provide AI leadership in federal agencies, increase technical hiring, conduct real-world AI testing, and allocate resources via the budget process. The letter outlines why some of the draft policy's one-size-fits-all "minimum" procedures and practices, applied to all "government benefits or services" programs, may have negative unintended consequences.
"Without further clarification from OMB and a clear mandate to tailor procedures to risks, agencies could find themselves tied up in red tape when trying to take advantage of non-controversial and increasingly commodity uses of AI, further widening the gap between public and private sector capabilities," the letter authors wrote.
A second letter, sent on Dec. 4, focuses specifically on government policies relating to open source, a type of software whose source code is publicly available for individuals to view, use, modify, and distribute.
Citing "long-recognized benefits to open-source approaches," the letter authors urged the OMB to be clear that government agencies should default to open source when developing or acquiring code.
In the meantime, Ho and his colleagues at HAI and RegLab are also tracking the implementation of Biden's executive order.
For more details on the two letters and Ho's co-authors, read the Dec. 7 story published by Stanford Law School.