
Google calls for weakened copyright and export rules in AI policy proposal

Google, following OpenAI, published a policy proposal in response to the Trump administration's call for a national "AI Action Plan". The tech giant endorsed weak copyright restrictions on AI training, as well as "balanced" export controls that "protect national security while enabling US exports and global business operations".

"The US has to pursue an active international economic policy to advocate for American values and support AI innovation internationally," Google wrote in the document. "For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership, a dynamic that is beginning to shift under the new administration."

One of Google's more controversial recommendations concerns the use of IP-protected material.

Google argues that "fair use and text-and-data-mining exceptions" are "critical" to AI development and AI-related scientific innovation. Like OpenAI, the company is seeking to codify a right for itself and its rivals to train on publicly available data, including copyrighted data, largely without restriction.

"These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders," Google wrote.

Google, which has reportedly trained a number of models on public, copyrighted data, is battling lawsuits from data owners who accuse the company of failing to notify and compensate them before doing so. US courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.

In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says "may undermine economic competitiveness goals" by "imposing disproportionate burdens on US cloud service providers". That stands in contrast to statements from Google competitors such as Microsoft, which said in January that it was "confident" it could "comply fully" with the rules.

Importantly, the export rules, which aim to restrict the availability of advanced AI chips in disfavored countries, carve out exceptions for trusted businesses seeking large clusters of chips.

Elsewhere in its proposal, Google calls for "long-term, sustained" investment in foundational domestic R&D, pushing back against recent federal efforts to cut spending and eliminate grant awards. The company said the government should release datasets that might be helpful for commercial AI training and allocate funding to "early-market R&D", while ensuring that computing resources and models are "widely available" to scientists and institutions.

Google points to the chaotic regulatory environment created by the US patchwork of state AI laws and urges the government to pass federal AI legislation, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the US has grown to 781, according to an online tracking tool.

Google cautions the US government against imposing what it perceives as onerous obligations on AI systems, such as liability for how the systems are used. In many cases, Google argues, the developer of a model has "little to no visibility or control" over how that model is used, and therefore should not bear responsibility for its misuse.

Historically, Google has opposed laws such as California's defeated SB 1047, which clearly laid out what would constitute the precautions an AI developer should take before releasing a model and would have made developers liable for model-induced harms.

"Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging," Google wrote.

In its proposal, Google also called disclosure requirements like those being contemplated by the EU "overly broad", and said the US government should oppose transparency rules that would require "divulging trade secrets", allow competitors to duplicate products, or compromise national security by giving adversaries a roadmap for circumventing protections or jailbreaking models.

A growing number of countries and states have passed laws requiring AI developers to disclose more about how their systems work. California's AB 2013, for instance, mandates that companies developing AI systems publish a high-level summary of the datasets they used to train those systems. In the EU, to comply with the AI Act once it is fully in force, companies will have to supply model deployers with detailed instructions on a model's operation, limitations, and associated risks.
