“It is surprising that intellectual property did not make the cut as a high-priority mention in Biden’s executive order, especially since the order advances various actions with the goals of protecting American workers and American innovation, and IP is intimately involved in both.” – Angela Kalsi, Greensfelder
On October 30, President Joe Biden issued an executive order (EO) announcing a series of new agency directives for managing risks related to the use of artificial intelligence (AI) technologies. The EO prioritizes risks related to critical infrastructure, cybersecurity and consumer privacy, but it does not establish clear directives on the copyright issues related to generative AI platforms that have garnered much debate in Congress in recent months.
The EO directs federal agencies to take a wide array of actions on AI policy that are grouped into eight key sections. On AI safety and security standards, the EO calls for new reporting requirements for certain federally funded biological engineering projects and content authentication mechanisms to prevent deceptive AI uses. The EO's directives for promoting innovation and competition include expanding grants for AI projects in healthcare and climate change and streamlining visa criteria to increase the highly skilled immigrant workforce. Other sections of the EO issue directives on protecting Americans' privacy; advancing equity and civil rights; standing up for consumers, patients and students; supporting workers; advancing American leadership abroad; and ensuring responsible and effective government use of AI.
Section 5.2(c) of the EO does address IP issues. Section 5.2(c)(iii) directs the U.S. Patent and Trademark Office (USPTO) Director to issue recommendations to the President, in consultation with the Director of the Copyright Office, on potential executive actions relating to copyright and AI "within 270 days of the date of the order or 180 days after the United States Copyright Office of the Library of Congress publishes its forthcoming AI study." Sections 5.2(c)(i) and (ii) call on the USPTO Director to issue guidance to examiners on inventorship and AI within 120 days and, within 270 days, additional guidance on other issues, including patent eligibility and AI.
Angela Kalsi, IP attorney at Greensfelder, notes that the “bold and expansive” EO is a “missed opportunity” to establish concrete IP-related directives for the USPTO and Copyright Office around IP ownership and human involvement in AI-created works. “It is surprising that intellectual property did not make the cut as a high-priority mention in Biden’s order, especially since the order advances various actions with the goals of protecting American workers and American innovation, and IP is intimately involved in both,” Kalsi said.
This past summer, the IP Subcommittees of both the U.S. Senate and U.S. House of Representatives held hearings on generative AI issues, from the use of copyrighted material to train generative AI models to the creation of a new federal right of publicity to provide rights against deepfake creators. Further, fundamental disagreements have developed over the application of fair use doctrine to the ingestion of copyrighted material for AI training.
This May, the U.S. Copyright Office announced that it was seeking public comments on copyright law and policy issues related to generative AI. The agency plans to use that public input for a study on AI issues that could result in legislative recommendations to Congress. As of October 31, more than 1,600 comments received by the agency had been posted to Regulations.gov, most coming from anonymous or named individuals. A pair of comments from policy organizations representing opposing sides of the debate have also been posted online.
ICLE: Secondary Use of Copied Material May Not Be Transformative Enough for Fair Use
The International Center for Law & Economics (ICLE) filed a comment noting that the training of AI models raises several "thorny copyright-law issues" that might not survive a fair use analysis. ICLE likens the situation to the fair use issues at play in the Second Circuit's 1994 decision in American Geophysical Union v. Texaco, which found Texaco liable for copyright infringement for photocopying entire articles from scientific journals to which Texaco had subscribed. Texaco defended its use as transformative for the purpose of training its scientists, but the Second Circuit found that the copies themselves were untransformed duplications having the same intrinsic purpose as the originals.
As for infringement by generative AI outputs, ICLE argued that current copyright law was capable of assessing whether an output was substantially similar to a copyrighted work or whether vicarious liability should attach to an AI developer. While creation of a federal right of publicity might not be necessary, ICLE said that it could facilitate more economic cooperation between AI companies and copyright owners.
R Street: Licensing Regimes Would Be Too Burdensome for AI Companies
Conversely, the R Street Institute filed a comment arguing that training machine learning systems on comprehensive datasets that include copyrighted content should be considered fair use. Rather than an appropriation of expressive content, R Street argues, AI model training should be understood as "fair learning." As for market impacts, R Street said that it was unlikely that AI training would detract from an artist's intended market "because AI learning models were never the target audience or consumer."
R Street also opined that it would be too difficult to establish a licensing regime for AI model training, which would create "the logistical nightmare of ensuring compliance for extensive datasets." While R Street contended that Congress should not consider a sui generis right protecting AI outputs, it argued that humans should be considered the sole authors of outputs that are not generated autonomously by an AI platform.
Individual Comments Underscore Biden’s Failure to Address Copyright Theft Concerns
Most comments filed by individuals argued that AI platforms should not be considered authors under copyright law and that AI developers should not use copyrighted content to train their models. "AI steals from real artists," reads a comment by Millette Marie, who says that production companies are using AI for the free use of artists' likenesses and voices. Megan Kenney believes that "generative AI means a death of human creativity" and worries that her "skills are becoming useless in this capitalistic hellscape." Jennifer Lackey told the Copyright Office of her concerns about "Large Language Models… scraping copyrighted content without permission," calling this stealing and urging that "we must not set that precedent."
This article was updated on 10/31 to clarify that the EO does address AI and copyright.