Washington: Google DeepMind, Microsoft, and xAI have entered into agreements with the US government for early testing of advanced artificial intelligence models, with the aim of identifying potential national security risks. The US Commerce Department announced the collaborations, highlighting their significance in strengthening AI security measures.
According to Anadolu Agency, the department's Center for AI Standards and Innovation said the partnerships will enable pre-deployment testing and research, aiding the assessment of advanced AI capabilities. The initiative is intended to bolster security by allowing the government to evaluate AI models before their public release and to conduct post-deployment assessments.
The center has already completed over 40 evaluations, which include reviews of unreleased frontier AI systems. Developers have been regularly providing the center with models, occasionally with reduced or removed safeguards, to facilitate a more thorough evaluation of capabilities and risks associated with national security.
Director Chris Fall stated, "Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications." He further noted that these expanded industry collaborations are crucial for scaling the center's work in the public interest during a pivotal time.
Furthermore, the center has been designated as the US government's primary point of contact for industry testing, collaborative research, and the development of best practices concerning commercial AI systems.