Group of AI founders argues that California AI safety bill violates freedom of speech

On 24 May, the California State Senate passed the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act” (SB 1047), which prohibits large-scale and powerful AI systems from aiding in the development of chemical, biological, radiological, or nuclear weapons. The Bill, among other things, calls for creating a ‘Frontier Model Division’ to monitor the potential safety and security risks of AI models. It also empowers California’s Attorney General to take legal action against any AI developer for negligence or for threatening public safety.

AGI House, a community of AI founders, builders, and researchers “who are dedicated to advancing the field of artificial intelligence in a responsible and ethical manner”, has shared its criticisms of the Bill.

They stated that the Bill’s mandate to monitor AI models before their deployment could violate free speech protections under US law, which they describe as a significant constitutional concern. In an article, AGI House cited previous legal precedents that classified computer code as free speech and argued that the same applies to the neural network weights that define AI models. Further, they elaborated on the various ways the Bill could impact innovation in the AI landscape and affect businesses.

Here are the arguments made by AGI House:

Legal Precedents on Code as Free Speech

AGI House cited various past cases that set a precedent for code to be considered expression and thus protected by freedom of speech laws under the First Amendment of the US Constitution. They cited the case of Bernstein v. United States, in which a PhD student challenged the government’s restrictions on the export of cryptographic software he had created, because his code was classified as a “munition” under the ‘International Traffic in Arms Regulations’ (ITAR). The court ruled in favor of Bernstein, stating that, “like music and mathematical equations, computer language is just that, language, and it communicates information”, and thus must be protected as free speech. Similarly, in Junger v. Daley and Karn v. U.S. Department of State, parties challenged the government’s restrictions on their code under ITAR, and the courts addressed whether publishing and sharing code is protected under the First Amendment.

Similarly, in Universal City Studios, Inc. v. Reimerdes, a case over the distribution of DeCSS, a software program capable of decrypting the content of DVDs, the court acknowledged that code is expressive, stating, “Computer code, and particularly source code, is an expressive means for the exchange of information and ideas about computer programming. It is protected by the First Amendment”.

Neural Network Weights and Free Speech

Having argued that American courts consider computer code free speech, AGI House made the argument that the neural network weights that constitute AI models should also be considered a form of expression and protected by freedom of speech laws. Neural network weights are the “parameters learned by a machine learning model during training” that “encapsulate the knowledge and patterns extracted from the data.” They are the result of a training process in which a model learns to map inputs to outputs based on a given dataset.

AGI House stated that neural network weights are “not merely numerical values; they represent the distilled knowledge and insights derived from the data.” The process of training a neural network, they stated, involves “selecting a model architecture, defining a loss function, and optimizing the weights to minimize the loss,” and developers must make numerous design choices that reflect their unique expertise and perspective.
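As a minimal sketch of the training loop AGI House describes (the toy model and data below are invented for illustration and do not come from their article), a single linear unit w*x + b can be fitted to data by gradient descent on a mean-squared-error loss, yielding learned weights:

```python
# Illustrative only: a one-parameter-pair "neural network" (a single
# linear unit) learns weights from data drawn from the rule y = 2x + 1.
import random

random.seed(0)
xs = [random.uniform(-1, 1) for _ in range(100)]
data = [(x, 2 * x + 1) for x in xs]  # the mapping the model must learn

# The developer's design choices: architecture (one linear unit),
# loss function (mean squared error), and optimizer (gradient descent).
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w  # nudge the weights to reduce the loss
    b -= lr * grad_b

print(f"learned weights: w={w:.2f}, b={b:.2f}")  # approximately 2 and 1
```

The final values of `w` and `b` are the trained “weights”: numbers that encode the pattern extracted from the data, which is the sense in which AGI House argues weights are distilled knowledge rather than arbitrary values.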

Thus, they argued that the process of training a neural network is inherently creative and a unique form of expression “similar to how a piece of art or a scientific paper encapsulates the creator’s understanding and interpretation of the world.”

Further, they called attention to generative AI, wherein AI models generate new content. They argued that generative AI models like GPT-3 and DALL-E use trained neural network weights to create human-like output in the form of art, literature, and so on, which is traditionally protected as free speech.

Is the Bill restricting free speech?

AGI House argued that certain provisions under the Bill could hinder developers’ right to free speech. They particularly opposed a provision within the Bill that requires a developer to report the capabilities of their model to the government before its deployment. They called this “prior restraint” of free speech and noted that “prior restraints are generally disfavored under First Amendment jurisprudence because they prevent speech before it occurs.” They cited a Supreme Court case which stated that “any system of prior restraints of expression comes to this Court bearing a heavy presumption against its constitutional validity.”

It argued that mandates within the Bill that require safety determinations, compliance with safety standards, and reporting of AI safety incidents could be seen as prior restraints on speech.

AGI House also agreed that regulation is needed to curtail the risks posed by AI, but argued that the current Bill “stifles innovation and expression in the AI field.”

“The Act’s restrictions on the development and use of AI models could hinder the free exchange of ideas and information, which is essential for progress in the field of artificial intelligence,” AGI House said.

Implications of the Bill

Innovation and Research

AGI House argued that the Bill restricts the “dissemination and use of neural network weights.” They stated that researchers and developers rely on the ability to share and build upon each other’s work to advance AI, and that restricting the sharing and development of AI models can “stifle the free exchange of ideas and hinder scientific progress”.

The reporting requirements for AI safety incidents, as mandated by the Bill, could discourage researchers from pursuing “high-risk, high-reward projects”, which they stated could have a chilling effect on AI development.

It argued that restricting access to AI models deprives developers of the ability to audit and understand them, which is crucial for ensuring their safety and fairness. They stated that transparency in AI development is necessary as it allows researchers “to identify and address biases, improve model performance, and ensure that AI systems are aligned with societal values.”

The community further said it believed that the need to conduct extensive safety evaluations before training models, as the Bill requires, may slow the pace of research and limit the ability to experiment with novel architectures and techniques.

Businesses and Commercialization

It stated that smaller AI companies are far more likely to be impacted by the requirements of the Bill, which could create barriers to entry in the AI landscape. They argued that the mandated safety testing would require substantial investments in infrastructure, personnel, and processes, introducing significant compliance costs for companies.

It also highlighted that these strict reporting requirements “could lead to an environment where companies are reluctant to disclose issues for fear of litigation or negative publicity”, which would be counterproductive to the aim of the Bill, which is to counter threats from AI early in its development.

Further, it argued that the creation of these regulatory barriers could slow down the adoption of AI technologies by other fields of business, where AI could have significant contributions to innovation and efficiency.

Broader Technological Landscape

AGI House argued that while this legislation applies only to the state of California, its regulatory approach could have ripple effects worldwide, as California is a major hub for AI research and development. If California were to impose stringent regulations on AI development, they suggested, other jurisdictions would be likely to follow suit.

It opposed this, stating:

“Restrictions on the dissemination of neural network weights could hinder international collaboration and the ability to build upon each other’s work. This could lead to a fragmented research landscape, where progress is siloed and innovation is stifled.”

Recommendations for balancing regulation and innovation

AGI House called for a more “balanced approach” to regulation, one that they believed would involve “promoting transparency and accountability in AI development without stifling innovation.”

It cited guidelines and frameworks from the IEEE and the Partnership on AI as examples that encourage the adoption of best practices for AI safety and ethics and help mitigate risks without imposing restrictive regulations.

The community suggested a “risk-based regulatory framework” that focuses on the potential impact of AI systems that would allow targeted interventions that address specific risks without “imposing unnecessary burdens on the broader AI ecosystem.”

It also provided an example, stating that high-stakes applications such as autonomous vehicles and medical diagnostics may warrant more stringent oversight, while lower-risk applications could be subject to lighter-touch regulation.

AGI House encouraged fostering an open dialogue between policymakers, researchers, and industry stakeholders, and creating regulatory frameworks that respect free speech rights while addressing concerns of safety and security.

Simone Lobo