“Baumgarten said that Damle’s ‘blanket assertion that input for generative AI ‘is fair use’ may well be simply wrong.’”
In response to last week’s hearing of the House of Representatives’ Subcommittee on Courts, Intellectual Property, and the Internet on the impact of artificial intelligence (AI) on copyright law, former Copyright Office General Counsel Jon Baumgarten submitted a letter to the Subcommittee this week expressing his concerns with the testimony of one of the witnesses, Sy Damle of Latham & Watkins, who also formerly served as U.S. Copyright Office General Counsel. The letter was published in full on the Copyright Alliance website.
Baumgarten was particularly distressed by Damle’s contention during the hearing that “outside of some unspecified cases of machine memorization or close reproduction that might occasionally ‘go too far,’ the input side of ingestion and processing by generative AI is almost categorically privileged as ‘fair use’.”
Damle testified that existing copyright laws are sufficient to set the bounds for generative AI (GAI), as the law has done for new technologies before this, including the VCR, Napster and software APIs. He argued at one point that the GAI learning process is similar to the human learning process, and what matters is the output in terms of applying copyright law. “The copyrighted works are being used not to create a collage, but to learn statistical facts about the works themselves,” Damle said. “It’s a very similar process to the way humans learn.”
Baumgarten said that he “could not disagree more” with Damle’s characterization of the process and that his “blanket assertion that input for generative AI ‘is fair use’ may well be simply wrong.” Baumgarten compared Damle’s statements with the perspective of some stakeholders during the 1960s, when the photocopier gained popularity for use in businesses and education. While many at the time dismissed the concerns of authors and scientific textbook publishers, insisting that such copying was “clearly fair use,” case law later proved them wrong, Baumgarten wrote.
He argued that a collective licensing regime for copyrighted works used in AI training could work, and pointed to the advantages of such a regime for both copyright owners and users. Both Damle and another witness at the Subcommittee hearing essentially dismissed collective licensing as unworkable in the context of AI training due to the scale of works involved.
The House IP Subcommittee will be holding a series of hearings on the topic of AI and intellectual property. The other witnesses last week included three artists, all of whom expressed similar concerns about the potentially dire effects of GAI applications on their respective industries and careers.
The Copyright Alliance also issued a statement following the hearing in which its CEO, Keith Kupferschmid, said that “[l]icensing models for AI ingestion already exist for many types of copyrighted works, including literary works and works of visual art, which AI developers recognize as valuable training material for large language models (LLMs) and image generators. While some AI companies are transparent that they only use licensed material to train their systems, others have used works scraped from the internet without authorization and have ignored available licenses.”
However, the statement acknowledged that some of the leading AI companies have recently indicated they are taking copyright into consideration. Representative Deborah Ross (D-NC) said during last week’s hearing that Sam Altman, the CEO of OpenAI, indicated to Congress that OpenAI is contemplating ways to compensate copyright owners for content and style in new versions of its models. “When we’re working on new models, if an AI system is using your content or style, you get paid for that,” Altman said, according to Ross.
“Time will tell whether these companies are true to their word,” the Copyright Alliance statement added.