Regulators Seek to Control AI Power Amid Security Concerns

As artificial intelligence technology rapidly evolves, regulators are trying to establish guidelines to prevent highly advanced AI systems from becoming potential security threats. A new threshold, based on computational power, requires that any AI model trained using more than 100 septillion (10^26) floating-point operations be reported to the U.S. government. California is poised to introduce even stricter requirements under recently passed legislation.
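For a rough sense of where that line sits, a back-of-the-envelope comparison can be sketched with the commonly used ~6 × parameters × tokens estimate of transformer training compute. The heuristic and the figures below are illustrative assumptions, not values drawn from the regulation or from any company's disclosures.

```python
# Minimal sketch: comparing a hypothetical training run against the
# 10^26-operation reporting threshold described above.
# The 6 * N * D rule of thumb and the example figures are assumptions
# for illustration only.

REPORTING_THRESHOLD_OPS = 1e26  # total training operations cited in the U.S. reporting rule


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Rough total training compute via the common ~6 * N * D heuristic."""
    return 6.0 * parameters * training_tokens


# Hypothetical frontier-scale run: 1 trillion parameters on 15 trillion tokens.
ops = estimated_training_ops(parameters=1e12, training_tokens=15e12)
print(f"Estimated training compute: {ops:.2e} operations")
print("Reportable under the threshold?", ops >= REPORTING_THRESHOLD_OPS)
```

On these assumed figures the run lands just under the line, which speaks to why critics argue a single numerical cutoff is a blunt instrument.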
The concern is that such powerful AI systems could pose serious dangers, including aiding the development of weapons of mass destruction or enabling catastrophic cyberattacks. Supporters of the regulation argue it is a necessary safeguard against AI's potential risks. AI systems from leading companies such as OpenAI, Google, and Meta have already reached significant levels of power, making regulation more urgent.
Critics, however, say the threshold is arbitrary. Some, like venture capitalist Ben Horowitz, argue it could stifle innovation. Horowitz pointed out that large-scale models could play a crucial role in breakthroughs like cancer research. Others worry the metric is too simplistic to capture the full range of a model's capabilities, and therefore a poor gauge of its true risk.
Despite the criticism, supporters of the regulation, including the Biden administration and California state legislators, maintain that oversight is needed as AI technology advances. California’s Senate Bill 1047, awaiting Gov. Gavin Newsom’s signature, adds another criterion: any AI model costing more than $100 million to build must also undergo regulatory scrutiny.
The global landscape is shifting as well. The European Union’s AI Act uses a similar computational metric but sets the bar lower, covering a broader range of systems. China is also exploring ways to regulate AI’s power.
Physicists and AI researchers acknowledge that while the current metric isn't perfect, it represents an essential starting point. "This computation, this flop number, is the best thing we have," said Anthony Aguirre, executive director of the Future of Life Institute. He added that today's most capable models are built on massive amounts of computation, which is exactly what the regulation is designed to track.
With the potential for revisions in the future, lawmakers see this as a temporary solution to a fast-evolving issue. However, AI researchers like Yacine Jernite caution that some models, which use less computing power but still have significant societal impacts, may not fall under the proposed guidelines, raising concerns about oversight gaps.
As regulators work to balance innovation with safety, the debate over how best to measure AI’s risks and capabilities continues.


