House panel moves bill that adds AI systems to National Vulnerability Database
A bill that would push the National Institute of Standards and Technology to set up a formal process for reporting security vulnerabilities in AI systems sailed through a House committee Wednesday.
The AI Incident Reporting and Security Enhancement Act, introduced by Reps. Deborah Ross, D-N.C., Jay Obernolte, R-Calif., and Don Beyer, D-Va., was approved via voice vote by the House Science, Space and Technology Committee.
It would direct NIST to add AI systems to the National Vulnerability Database, the federal government’s centralized repository for tracking cybersecurity vulnerabilities in software and hardware. It would also require the agency to consult with other federal agencies, like the Cybersecurity and Infrastructure Security Agency, the private sector, standards organizations and civil society groups to establish common definitions, terminology and standardized reporting rules for AI security incidents.
Ross noted that the introduction of companion legislation in May from Sens. Mark Warner, D-Va., and Thom Tillis, R-N.C., means that “we have friends in the Senate” who can help pass the bill into law.
However, the bill includes language specifying that these actions are “subject to the availability of appropriations,” and Ross acknowledged “significant funding and scaling challenges that NIST has with the NVD” under its existing workload.
NIST has had well-documented challenges managing the ballooning number of vulnerabilities it is already responsible for tracking and analyzing. In February, the agency temporarily stopped enriching data around reported security vulnerabilities — a process in which agency analysts tag and connect specific vulnerability entries to other relevant public information. Cybersecurity practitioners have said NIST’s enrichment work adds invaluable context that organizations use to address existing vulnerabilities.
In March, Tanya Brewer, who manages NIST’s NVD program, cited budget cuts, flat staff growth and an exponential increase in incoming email traffic related to the database over the past four years as reasons for the pause.
“My colleagues and I on this committee are actively exploring solutions to help NIST address this problem and get the money,” Ross said.
Obernolte referenced a number of high-profile cybersecurity incidents over the past three years, including the 2021 Colonial Pipeline ransomware attack, the Change Healthcare hack and a CrowdStrike bug that crashed systems across the globe as examples of how software glitches and vulnerabilities can severely disrupt the flow of supply chains.
This threat “is especially true with AI systems, since they tend to be not only less deterministic but also less understood,” he said, adding that he intends to fight for the bill to get a full House vote later this year.
Although the bill passed by voice vote, some members raised concerns. Rep. Bill Posey, R-Fla., signaled his support for the underlying bill but said more work is needed to define terms like “substantial artificial intelligence security incident” and “intelligence incident,” and to add measures ensuring that civil society groups invited to provide input don’t include foreign standards organizations from China and other adversarial nations.
Such scoping is particularly necessary, Posey said, in light of a recent Supreme Court ruling that overturned the so-called “Chevron doctrine,” a legal precedent under which courts deferred to federal agencies’ interpretations of how to implement laws passed by Congress.
“These really jumped out at me post-Chevron,” Posey said. “That elected people should really decide the spectrum at which we want them to operate and not let the bureaucracy take off again with a free wheel to do whatever they want.”