OpenAI on Monday published what it’s calling an “economic blueprint” for AI: a living document that lays out policies the company thinks it can build on with the U.S. government and its allies.
The blueprint, which includes a foreword from Chris Lehane, OpenAI’s VP of global affairs, asserts that the U.S. must act to attract billions in funding for the chips, data, energy, and talent necessary to “win on AI.”
“Today, while some countries sideline AI and its economic potential,” Lehane wrote, “the U.S. government can pave the road for its AI industry to continue the country’s global leadership in innovation while protecting national security.”
OpenAI has repeatedly called on the U.S. government to take more substantive action on AI and infrastructure to support the technology’s development. The federal government has largely left AI regulation to the states, a situation OpenAI describes in the blueprint as untenable.
In 2024 alone, state lawmakers introduced almost 700 AI-related bills, some of which conflict with others. Texas’ Responsible AI Governance Act, for example, imposes onerous liability requirements on developers of open source AI models.
OpenAI CEO Sam Altman has also criticized existing federal laws on the books, such as the CHIPS Act, which aimed to revitalize the U.S. semiconductor industry by attracting domestic investment from the world’s top chipmakers. In a recent interview with Bloomberg, Altman said that the CHIPS Act “[has not] been as effective as any of us hoped,” and that he thinks there’s “a real opportunity” for the Trump administration to “do something much better as a follow-on.”
“The thing I really profoundly agree with [Trump] on is, it is wild how difficult it has become to build things in the United States,” Altman said in the interview. “Power plants, data centers, any of that kind of stuff. I understand how bureaucratic cruft builds up, but it’s not helpful to the country in general. It’s particularly not helpful when you think about what needs to happen for the U.S. to lead AI. And the U.S. really needs to lead AI.”
To fuel the data centers needed to develop and run AI, OpenAI’s blueprint recommends “dramatically” increased federal spending on power and data transmission, and meaningful buildout of “new energy sources,” like solar, wind farms, and nuclear. OpenAI — along with its AI rivals — has previously thrown its support behind nuclear power projects, arguing that they’re needed to meet the electricity needs of next-generation server farms.
Tech giants Meta and AWS have run into snags with their nuclear efforts, albeit for reasons that have nothing to do with nuclear power itself.
In the nearer term, OpenAI’s blueprint suggests that the government “develop best practices” for model deployment to protect against misuse, “streamline” the AI industry’s engagement with national security agencies, and develop export controls that support the sharing of models with allies while “limit[ing]” their export to “adversary nations.” In addition, the blueprint suggests that the government share certain national security-related information, like briefings on threats to the AI industry, with vendors, and help vendors secure resources to evaluate their models for risks.
“The federal government’s approach to frontier model safety and security should streamline requirements,” the blueprint reads. “Responsibly exporting … models to our allies and partners will help them stand up their own AI ecosystems, including their own developer communities innovating with AI and distributing its benefits, while also building AI on U.S. technology, not technology funded by the Chinese Communist Party.”
OpenAI already counts a few U.S. government departments as partners, and — should its blueprint gain currency among policymakers — stands to add more. The company has deals with the Pentagon for cybersecurity work and other, related projects, and it has teamed up with defense startup Anduril to supply its AI tech to systems the U.S. military uses to counter drone attacks.
In its blueprint, OpenAI calls for the drafting of standards “recognized and respected” by other nations and international bodies on behalf of the U.S. private sector. But the company stops short of endorsing mandatory rules or edicts. “[The government can create] a defined, voluntary pathway for companies that develop [AI] to work with government to define model evaluations, test models, and exchange information to support the companies’ safeguards,” the blueprint reads.
The Biden administration took a similar tack with its AI Executive Order, which sought to enact several high-level, voluntary AI safety and security standards. The executive order established the U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, which has partnered with companies including OpenAI to evaluate model safety. But Trump and his allies have pledged to repeal Biden’s executive order, putting its codification — and the AISI — at risk of being undone.
OpenAI’s blueprint also addresses copyright as it relates to AI, a hot-button topic. The company makes the case that AI developers should be able to use “publicly available information,” including copyrighted content, to develop models.
OpenAI, along with many other AI companies, trains models on public data from across the web. The company has licensing agreements in place with a number of platforms and publishers, and offers limited ways for creators to “opt out” of its model development. But OpenAI has also said that it would be “impossible” to train AI models without using copyrighted materials, and a number of creators have sued the company for allegedly training on their works without permission.
“[O]ther actors, including developers in other countries, make no effort to respect or engage with the owners of IP rights,” the blueprint reads. “If the U.S. and like-minded nations don’t address this imbalance through smart measures that help advance AI for the long-term, the same content will still be used for AI training elsewhere, but for the benefit of other economies. [The government should ensure] that AI has the ability to learn from universal, publicly available information, just like humans do, while also protecting creators from unauthorized digital replicas.”
It remains to be seen which parts of OpenAI’s blueprint, if any, influence legislation. But the proposals are a signal that OpenAI intends to remain a key player in the race for a unifying U.S. AI policy.
In the first half of last year, OpenAI more than tripled its lobbying expenditures, spending $800,000 versus $260,000 in all of 2023. The company has also brought former government leaders into its executive ranks, including ex-Defense Department official Sasha Baker, NSA chief Paul Nakasone, and Aaron Chatterji, formerly the chief economist at the Commerce Department under President Joe Biden.
As it makes hires and expands its global affairs division, OpenAI has been more vocal about which AI laws and rules it prefers, for instance throwing its weight behind Senate bills that would establish a federal rule-making body for AI and provide federal scholarships for AI R&D. The company has also opposed bills, in particular California’s SB 1047, arguing that it would stifle AI innovation and push out talent.