As NIST funding challenges persist, Schumer announces $10 million for its AI Safety Institute


US Senate Majority Leader Chuck Schumer (D-NY) announced today that the National Institute of Standards and Technology (NIST) will receive up to $10 million to stand up the US Artificial Intelligence Safety Institute (USAISI), which was created in November 2023 to “support the responsibilities assigned to the Department of Commerce” under the AI Executive Order.

Until now, few details have been disclosed about how the institute would work or where its funding would come from, especially since NIST itself, with a reported staff of about 3,400 and an annual budget of just over $1.6 billion, is known to be underfunded. And just yesterday, the Washington Post ran an exposé on NIST’s decaying offices, describing a leaky roof, black mold, frequent blackouts and flaky internet.

Schumer called AI funding a ‘strong down payment’

NIST, which is part of the US Department of Commerce, was given a great deal of responsibility in the White House’s AI EO to “undertake an initiative for evaluating and auditing capabilities relating to Artificial Intelligence (AI) technologies and to develop a variety of guidelines, including for conducting AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.”

Now, Senator Schumer has announced that the recently released Commerce, Justice, and Science Fiscal Year 2024 appropriations bill includes up to $10 million to establish the USAISI at NIST. A press release called the funding “first-of-its-kind,” and Schumer heralded it as a “strong down payment” on the implementation of Biden’s EO on AI.


“As Majority Leader, I have kept up the drumbeat that our government must implement smart guardrails to make sure that we balance the need for the U.S. to continue to lead in innovation while also addressing any potential risks posed by artificial intelligence,” he said. “I fought for this funding to make sure that the development of AI prioritizes both innovation and safety, accountability, and transparency, while supporting American industry and allowing for progress.”

According to Schumer, the NIST AI Institute “will facilitate the development of standards for safety, security, and testing of AI models, develop standards for authenticating AI-generated content, and provide testing environments for researchers to evaluate emerging AI risks. The AI Safety Institute is working in coordination with a Consortium of 200 companies and organizations focused on research and development as well as testing and evaluation, among other activities, to improve the safety and accountability of AI systems.”

Criticism about lack of NIST funding and transparency

In February, VentureBeat reported on criticism of NIST’s lack of transparency around the USAISI. In mid-December, House Science Committee lawmakers from both parties sent NIST a letter that, Politico reported, “chastised the agency for a lack of transparency and for failing to announce a competitive process for planned research grants related to the new U.S. AI Safety Institute.”

The lawmakers said they were particularly concerned about a planned AI research partnership between NIST and the RAND Corporation, an influential think tank tied to tech billionaires, the AI industry and the controversial “effective altruism” movement. (VentureBeat has also reported on a “widening web” of effective altruism (EA) in AI safety and security, including within RAND and leading LLM company Anthropic.)

In the letter, the lawmakers wrote: “Unfortunately, the current state of the AI safety research field creates challenges for NIST as it navigates its leadership role on the issue,” adding that “findings within the community are often self-referential and lack the quality that comes from revision in response to critiques by subject matter experts. There is also significant disagreement within the AI safety field of scope, taxonomies, and definitions.”

At the time, VentureBeat also spoke with Rumman Chowdhury, who formerly led AI efforts at Accenture and served as head of Twitter (now X)’s META (Machine Learning Ethics, Transparency and Accountability) team from 2021 to 2022. She said funding was an issue for the USAISI.

“One of the frankly under-discussed things is this is an unfunded mandate via the executive order,” she said. “I understand the politics of why, given the current US polarization, it’s really hard to get any sort of bill through…I understand why it came through an executive order. The problem is there’s no funding for it.”



Sharon Goldman