Content area
The rapid advancement of generative artificial intelligence (AI) poses significant challenges to traditional copyright frameworks, intensifying debates over the copyrightability of AI-generated outputs. A comparison of judicial practices in China and the United States shows that the United States maintains a conservative stance grounded in substantive control, while China demonstrates a more inclusive approach built on the criterion of creative contribution. Building on this comparison, this article moves beyond the traditional binary judgment model and constructs a tiered copyright determination model. Based on the level of human control and contribution in the AI generation process, it introduces dimensions such as technological controllability and density of human intent, classifying AI-generated content into three tiers: strong protection, weak protection, and non-protection. Regarding the copyrightability of content generated by generative AI, this article argues that the issue should be addressed within the framework of copyright law itself: when human participation is involved, the substantial contribution of the direct user is reflected in the AI-generated content, and the requirements for copyrightable works are met, corresponding protection should be granted.
INTRODUCTION
In 2017, DeepMind's artificial intelligence (AI) program AlphaGo defeated the world's top Go players. In 2022, OpenAI launched the large language model ChatGPT. In early 2024, OpenAI released the video generation software Sora, capable of simulating physical environments in the real world. In 2025, China officially open-sourced its latest generative AI (AIGC) product DeepSeek-R1, whose downloads rapidly surpassed ChatGPT's. These models leverage cutting-edge technologies such as generative adversarial networks (GANs) and large-scale pre-trained models (e.g., transformers) to generate diverse content through deep learning and data analysis, exhibiting remarkable generalization capabilities that have fundamentally reshaped daily life, learning, and work patterns (Huang & Liu, 2023). Copyright has been “a child of technology” since its inception (Goldstein, 2003). New technologies now test copyright law's capacity to regulate information and content markets. With the rapid advancement of AIGC technology, current copyright systems face unprecedented challenges. AI-generated content, created by machines rather than natural persons, raises urgent legal and academic questions regarding its legal status and copyright ownership, such as whether AI-generated content possesses copyrightability, can express human thought, or necessitates revisions to originality standards. These issues superficially concern adjustments to legal frameworks but fundamentally reflect value judgments toward emerging technologies like AI, making the resolution of AIGC copyrightability a critical theoretical and practical imperative.
The academic discussion on the copyrightability of AI-generated content can be divided into three categories: (1) Radical view: proposes that AI itself can be recognized as an author, and the content it generates should be regarded as works created by AI (Samuelson, 1985; Zhu, 2019); (2) Conservative view: argues that however specific the textual prompts users of AI drawing tools provide, those prompts at most constitute literary works rather than artistic works, and that even with complex prompts, users of large models lack complete control over the actual generated content; given the unpredictability of model-generated art, it remains questionable whether such outputs can be deemed the user's intellectual achievements (Wang, 2017; Liu, 2020; Bi, 2023; Ramalho, 2017; Le Thi, 2023; Morocco-Clarke, Sodangi, & Momodu, 2024); (3) Moderate view: holds that AI-generated content reflecting human originality in expression should qualify for copyright protection under copyright law (Xiang, 2020; Ding, 2023; Levendowski, 2018; Kalyatin, 2022). Internationally, scholars predominantly lean toward denying copyright for AI-generated works through legal interpretation or legislation, while Chinese scholars adopt a more open stance, advocating for some form of copyright protection for such works (Chen, 2023). Concurrently, judicial and administrative authorities worldwide have rendered differing rulings on the copyrightability of AI creations. Intellectual property embodies economic interests that reflect market value within economic relationships (Feng, 2006). From the market value chain perspective of AI creation, AI developers profit by providing technical services to users, making user demand for AIGC the source of value for AI technical services (Xu, 2024).
As creative intellectual products, AI-generated content possesses potential economic value, yet the allocation of ownership rights remains highly complex, requiring comprehensive consideration of interests among technology developers, data providers, investors, and users. If AIGC systems could be classified as legal subjects whose creations satisfy relevant legal criteria for object attributes, the copyright ownership of AI-generated content would warrant in-depth examination. Thus, exploring the copyrightability of AIGC outputs not only clarifies the boundaries of copyright law in emerging technological domains but also provides legal safeguards for the development of related digital industries.
JUDICIAL DIVERGENCE: THE DE FACTO REALITIES OF COPYRIGHT ADJUDICATIONS ON GENERATIVE ARTIFICIAL INTELLIGENCE OUTPUTS IN CHINA AND THE UNITED STATES
Copyright disputes involving AIGC primarily arise during two stages: training data acquisition (input) and content generation (output). Input-stage disputes center on whether large models infringe copyrights or qualify as fair use when utilizing others’ works for training. Although no domestic cases exist yet, international litigation has emerged, such as Thomson Reuters v. Ross Intelligence, Authors Guild v. OpenAI, Chabon v. OpenAI, Tremblay v. OpenAI, Andersen v. Stability AI, Getty Images v. Stability AI, Huckabee v. Meta Platforms, and Kadrey v. Meta (Suo, 2024). Output-stage controversies focus on four key aspects: (1) Originality determination: whether AIGC outputs meet originality requirements to constitute copyrightable works; (2) Ownership attribution: whether copyrights belong to the AI system, developers, users, or investors; (3) Fair use versus infringement: whether AIGC outputs constitute fair use of existing works or infringe others’ copyrights; and (4) Liability allocation: who bears responsibility and how when AIGC outputs infringe rights (Cong & Li, 2023). This article examines the complexity, divergence, and contentiousness of AIGC output copyright issues through comparative case studies between China and the United States.
Case analysis and controversial focus
Chinese courts have taken a more inclusive approach to AIGC. The case of Li v. Liu for infringement of AI-generated images (“Case No. 11279”), the first copyright infringement case involving AI-generated images in China, was not only selected as one of the top 10 events in the implementation of the rule of law in China in 2023 but also marked the first time a court recognized the creative rights and interests that users of AI painting models enjoy in generated pictures (Zhang & Bian, 2024). The court pointed out that natural persons participated as principal actors in two links of the process that generated the analysis report at issue: the software development process and the software use process. Software developers are not involved in the creation of analytical reports; and if the user of the software merely submits keywords for search on the operation interface, that behavior does not convey the original expression of the user's thoughts and feelings and should not be deemed the user's creation. Therefore, neither the developer nor the user of the software should be the author of the content generated by the computer software, and the content cannot constitute a work. Non-creators naturally cannot sign as authors; instead, from the perspective of protecting the public's right to know, maintaining social honesty and trustworthiness, and facilitating cultural dissemination, the logo of the generating software should be added to the analysis report, indicating that it was automatically generated by the software.1 The judgment also makes clear that whether content generated by AI constitutes a work must be analyzed on the facts of each specific case and cannot be generalized.
The key conclusions this judgment established regarding fact determination and legal application offer valuable reference for future trials of cases involving AI technology. Keith Kelly, an attorney at the United States law firm Sheppard Mullin, believes that although the court's reasoning in this case is controversial, the outcome can to some extent be seen as promoting intellectual property enforcement and may carry lasting precedential value for the copyright of AI-generated images.2
In contrast, United States copyright practice centers on human authorship in determining the object of copyright protection. The Copyright Office's registration guidance states that only human intellectual activity can be considered “creation”, and only human intellectual achievements can be regarded as a “work”.3 As early as the “monkey selfie” dispute that began in 2011, the United States Copyright Office (USCO) emphasized that only human works are protected. On December 27, 2023, The New York Times sued OpenAI and Microsoft for copyright infringement, becoming the first major United States media organization to sue the two companies over infringement of its written works (Deng & Zhu, 2023). In recent years, the USCO has repeatedly refused copyright registration for AI works: (1) Zarya of the Dawn: adhering to the principle of “not supporting the registration of copyright in works without human authors”, the Office determined that the author enjoys copyright only in the selection and arrangement of textual narratives and visual elements, which can be registered, whereas the machine-generated images themselves cannot be registered as copyrighted works.4 (2) A Recent Entrance to Paradise: the Copyright Office examiner and its Review Board found the work lacked human authorship and “did not have any creative contribution from a human author”.5 (3) Théâtre D'opéra Spatial: the Review Board emphasized the Office's long-standing position that copyrightable works must be “original works of authorship”, a requirement that excludes content not created by humans.6 These cases show that the basic thrust of United States administrative enforcement and judicial practice on copyright is to insist that a work be the creative work of a human author.
Authorship is the starting point for attributing copyright in literary and artistic works, including AI-generated works. For a human author operating generative AI, the key to whether copyright is granted lies in whether the author's “intellectual input” controls the “expression” of the work and actually forms the elements of authorship. Whether copyright protection extends to AI-generated content must be analyzed case by case: the part created by a natural person can be granted copyright, while the AI-generated part falls outside the scope of copyright protection (Zhang & Wang, 2024). United States copyright practice thus generally does not support the copyrightability of AI works. However, in January 2025, the USCO approved a copyright registration application for an AI-generated image titled A Single Piece of American Cheese, a milestone in copyright history. The decision is significant because it signals a turning point in the relationship between AI and copyright. Given this shift, it is anticipated that judicial and administrative bodies may soon clarify the relevant recognition criteria, establishing minimum standards for human intervention and defining which types of AI-generated content qualify for copyright protection.7
The controversies surrounding the AIGC model training domestically and internationally primarily focus on three issues:
Does model training replicate copyrighted works? If AIGC does not reproduce the works but merely “reads” them without creating copies—similar to human learning—it would not infringe reproduction rights. However, absent technologies like “federated learning” or “cloud computing”, AIGC model training typically involves copying and storing copyrighted works. The dispute over reproduction is thus also a matter of litigation strategy: plaintiffs must prove that copying occurred or risk losing the case (Feng, 2019).
Even if works are copied during training, does such reproduction fall within copyright law's permissible scope? Certain copying may be exempted either under “caching freedom” provisions of the safe harbor rules or through qualification as fair use (Feng & Pan, 2020). The former has sparked industry-academic debates about granting “copying freedom” for AIGC training, while the latter centers on whether such copying constitutes fair use under the United States copyright law.8
During the model training phase, what rights govern the analytical processing of works? Rights holders argue that works generated by AIGC constitute derivative works of original creations—essentially disrupting and reorganizing the original to form new works—and should therefore fall under the scope of the “adaptation right.” Generally, AIGC learns from vast quantities of works to build parametric models reflecting patterns in expression (words, sentences, lines, colors, tones, etc.), generating new works based on these parameters when given prompts (Ding, 2023). Tech companies typically contend that AIGC model training merely draws inspiration from the expressive styles inherent in numerous works, and under the idea–expression dichotomy, style belongs to the realm of ideas, unprotected by copyright law. The debate surrounding AIGC model training revolves around content creators advocating for copyright licensing versus tech companies asserting fair use, intertwined with disputes over specific balancing measures such as remuneration rights, statutory licensing, opt-out mechanisms, and collective copyright management.
Standard refinement and practical criticism
Chinese Judicial Practice: Flexible Attribution Logic Based on the “Creative Nexus” Between Input and Output. Chinese courts are developing an identification path centered on “input creativity-output attribution”, whose core lies in the “degree of creative nexus” between human input and AI output. In “Case No. 11279”, the court did not demand absolute human control over the AI's language organization process, but focused on the creative contributions of the core team at input stages such as data indicator selection and analytical framework design. As long as these contributions bore a direct intellectual connection to the final data arrangement in the report, copyrightability was recognized. Furthermore, stylistic choices and parameter adjustments in prompts, provided they produced identifiable human aesthetic choices in the output images (even where the AI had autonomy in detailed rendering), still met the originality requirement under the “minimal creativity” standard (Li, 2019). This standard transcends the rigid requirement that “input must control output”, instead tracing human creative contributions back from the output results, reflecting a functional recognition of “the irreplaceability of human creativity in human-AI collaboration”. The approach nonetheless faces theoretical and practical problems. Firstly, ambiguity over the source of originality: although courts emphasize human creative contributions, they have not clearly defined the legal status, for purposes of copyright attribution, of innovative outputs generated by the AI algorithm itself; for instance, should visual styles created by Stable Diffusion's latent-space transformations fall under human copyright? Secondly, attribution challenges posed by the technical black box: when AI outputs exceed human expectations, as when a prompt for “sunflowers” yields variant images in Van Gogh's style, courts struggle under existing standards to precisely apportion human input against algorithmic autonomy. Thirdly, judicial unpredictability: abstract terms like “aesthetic choices” and “individual judgment” lack quantifiable criteria, potentially producing inconsistent rulings in similar cases as judges interpret them divergently, for example in courts' varying assessments of the correlation between prompt complexity and originality.
The United States Judicial Practice: The Technical Reductionism of “Substantial Control” at the Input Stage. United States practice manifests a binary determination framework of “input control-output predictability”, whose core is human “substantial control” over AI output during the input phase. This standard requires human instructions to directly dictate the core expressive elements of the work. In the “Zarya of the Dawn” case, the Office concluded that text prompts failed to establish preset control over key visual elements such as compositional ratios and color schemes; despite the presence of prompts, copyrightability was denied owing to output unpredictability. The USCO's Compendium of U.S. Copyright Office Practices further clarifies that humans must exercise sufficient control over “the selection and arrangement of expressive elements”. This essentially reduces AIGC creation to a linear “command input-algorithm execution” relationship, a mechanical reductionism that overlooks the “nonlinear semantic mapping” characteristic of AIGC.9 The United States Supreme Court has declared that the authors protected by copyright law must be “persons” (natural persons or individuals), and that copyright is “the exclusive right of human beings to works created based on their natural endowments or wisdom”.10 This standard, however, harbors three contradictions. Firstly, a lack of technological neutrality: AI is treated entirely as a tool controlled by humans, ignoring the “intelligent emergence” characteristics of new algorithms such as diffusion models and transformers, namely the independent decision-making capacity the algorithm forms through data training (Liu et al., 2023). Secondly, fuzzy quantitative standards: “substantial control” lacks a clear threshold. In the “Théâtre D'opéra Spatial” case, for example, even 624 iterations of the prompt were deemed an insufficient contribution, yet no guidance has ever specified how many iterations, or what degree of instruction precision, would meet the standard. Thirdly, lagging industrial adaptability: new modes such as dynamic interactive creation and multimodal generation have moved beyond the human participation patterns of traditional tools, but the evaluation standard remains stuck in an “all or nothing” binary judgment, making it difficult for emerging creative forms to obtain reasonable protection (Guo, 2022).
Standard commonness and paradigm dilemma
A comparison of the relevant cases between China and the United States shows that there are significant differences between the two countries on the core issues, but there are some commonalities. Both jurisdictions recognize AIGC as a technological tool rather than an independent copyright holder. However, fundamental divergences emerge regarding whether human-AI collaborative output constitutes copyrightable creation, particularly in legal assessments of “input control.” China emphasizes the creative causal connection between input and output, while the United States prioritizes preset technical control over results during input. Though formally acknowledging AI's assistive role, both systems inadequately address the emergent characteristics distinguishing AIGC from conventional tools (Yao & Shen, 2018). The core jurisprudential divide lies in evaluating human input: China focuses on creative causality, whereas the United States seeks technical control intensity. Crucially, both adhere to the principle that human input remains a necessary condition for copyrightability, yet their binary “all-or-nothing” frameworks fail to accommodate the continuum of human involvement in AIGC creation (Huang & Huang, 2019). This shared limitation underscores the need for a tiered model where copyright determinations are based on “creative contribution density” at the input stage, establishing graduated standards aligned with technological realities.
From a technological reality perspective, AIGC creation exhibits a tripartite structure of diminishing human intentional density: (1) human-directed precise instruction crafting; (2) human-AI negotiated prompt-guided creation; and (3) AIGC-autonomous stochastic generation (Jiao, 2022). Existing standards reductively compress this complex technological spectrum into a binary “human-controlled or not” determination. As the representative cases demonstrate, even after hundreds of prompt iterations, AIGC outputs may still be excluded from protection for “lack of substantive human control over core expressive elements”. This dilemma fundamentally reflects the conflict between the traditional instrumentalist paradigm and AI's technological reality: when algorithms possess autonomous decision-making capacity, characterizing them merely as human tools becomes untenable (Wen, 2024). China's standard risks overextending copyright protection when the AI contributes significant output-side innovation, while the United States' criteria may unduly restrict protection (thereby stifling innovation incentives) when substantial human creative input fails to manifest identically in outputs. This impasse stems from the paradigmatic clash between copyright law's “subject-object dichotomy” and the hybrid human-AI cocreation model: as Latour's actor-network theory (ANT) reveals, AI algorithms function not as passive instruments but as active agents co-constituting creative networks with humans (Li, 2020). The theoretical limitations of existing frameworks necessitate a new evaluative model that recognizes the core value of human creative investment while accommodating AI's technical attributes, avoids rights monopolies that hinder technological progress, and incentivizes human-AI collaboration through calibrated protection.
THE COPYRIGHT DILEMMA: THE CONSTRUCTION OF A LADDER MODEL FOR GENERATIVE ARTIFICIAL INTELLIGENCE OUTPUTS
Theoretical basis and technical logic
Traditional copyright theory has always centered on “human creations”, but the emergence of AI confronts the “subject-object dichotomy” with a fundamental challenge. The dilemma existing copyright theories face in determining the copyrightability of AIGC is, at its core, a paradigm conflict between the Kantian “subject-object dichotomy” and the reality of “human-machine hybrid creation” in the era of AI (Wang, 2023). Taking Latour's ANT as its philosophical foundation, the ladder model regards AIGC creation as the dynamic product of a network formed, through the mechanism of “translation”, by heterogeneous actors such as human prompts, AI algorithms, training data, and technology platforms. Its core breakthrough lies in breaking with “anthropocentrism”: AI algorithms are not passive tools but “actors” on a par with human beings, engaging with humans in “inscription” and “translation”, and the structure of the creative network thus formed directly determines the level of copyrightability of AIGC.11 From the perspective of technical reality, AIGC creation presents a continuous spectrum of “decreasing density of human intention”, which provides an objective basis for the model's hierarchical division. The core technical features of AIGC (such as the diffusion model's “latent space transformation” and the transformer's “self-attention mechanism”) yield three typical relationships between human input and AI output: (1) fully predictable instruction execution; (2) partially predictable semantic negotiation; and (3) completely unpredictable autonomous generation (Li & Kuang, 2023). These technical characteristics correspond to the “translation intensity” of actors in Latour's theory, and together they constitute the model's dual support.
Hierarchy and recognition criteria
Based on the analysis of ANT and technical characteristics, the ladder model divides AIGC into three protection levels, and the core differences between each level are reflected in the “translation intensity” and the proportion of creative contribution between human actors and AI actors, as shown in Table 1.
TABLE 1 Tiered copyrightability framework.
| Tier level | Technical characteristics | Typical scenarios | Protection strength |
| --- | --- | --- | --- |
| Tier 1: Strong protection | Human-directed creation via structured directives (e.g., storyboard scripts, precise parameters); AI executes mechanical implementation | Designer converting hand-drawn sketches to digital images using ControlNet | Full copyright protection |
| Tier 2: Qualified protection | Human-provided creative framework through prompts; AI demonstrates substantial autonomous interpretation in style and details | User generating images via input “cyberpunk cityscape” | Limited protection (rights restricted to core expression) |
| Tier 3: Exclusion from protection | AI-autonomous generation; human involvement limited to initiation/generalized instructions | AI creating randomized artwork from input “generate a painting” | Copyright protection excluded |
Tier 1: Strong protection. This tier is characterized by human actors exerting “prescriptive translation” upon AI algorithms through structured directives, thereby integrating AI as “intelligent executors” into human creative workflows. For instance, users may select initial AI-generated drafts and iteratively refine prompts or parameters to personalize visual details (Cui, 2023). Firstly, human inputs must demonstrate high structural specificity, requiring concrete expressive elements such as aspect ratios, color palettes, and defined subject morphology in image generation scenarios. Natural language processing (NLP) techniques parse such instructions to identify at least three distinct creative elements. These structured directives establish explicit frameworks for algorithmic execution, ensuring direct correspondence between output and human intent regarding core expressive components. Secondly, outputs must exhibit high traceability to input instructions, verifiable through technological means like computer vision algorithms that cross-map features between final outputs and original directives. Consider designers using ControlNet to convert hand-drawn sketches into digital images: compositional elements, proportions, and line trajectories from the sketches must be faithfully preserved in final renders, with AI functioning primarily as a technical executor converting 2D sketches to 3D renderings without substantially reconfiguring core creative elements. Furthermore, comprehensive process documentation is essential for establishing strong protection. Blockchain-based systems should record the complete iterative journey, including prompt versioning, parameter adjustment logs, intermediate outputs, and revision artifacts, to objectively demonstrate sustained human control throughout the creative process, proving AI algorithms operated under continuous human guidance.
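The element-counting test described above can be operationalized in many ways; the following is a minimal illustrative sketch, not a procedure used by any court or by the USCO. The element categories (composition, palette, subject) and the keyword inventories are assumptions introduced purely for demonstration; a real pipeline would use NLP parsing or named-entity extraction rather than keyword matching.

```python
# Hypothetical inventories of creative elements, grouped by category.
# These lists are illustrative assumptions, not legal criteria.
CREATIVE_ELEMENTS = {
    "composition": ["aspect ratio", "rule of thirds", "close-up", "wide shot", "16:9"],
    "palette": ["color palette", "monochrome", "pastel", "neon", "sepia"],
    "subject": ["portrait", "cityscape", "still life", "figure", "landscape"],
}

def count_creative_elements(prompt: str) -> int:
    """Count distinct creative elements a prompt specifies across categories."""
    text = prompt.lower()
    return sum(kw in text for kws in CREATIVE_ELEMENTS.values() for kw in kws)

def meets_structured_directive_threshold(prompt: str, minimum: int = 3) -> bool:
    """Tier-1-style test: does the prompt fix at least `minimum` elements?"""
    return count_creative_elements(prompt) >= minimum

prompt = ("A 16:9 portrait of a violinist, neon color palette, "
          "close-up framing with rule of thirds")
print(meets_structured_directive_threshold(prompt))  # → True
print(meets_structured_directive_threshold("generate a painting"))  # → False
```

A structured directive of this kind would clear the hypothetical three-element threshold, while a bare triggering instruction would not; in practice the threshold itself would have to be set by doctrine, not code.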
Tier 2: Qualified protection. This tier involves human actors providing creative frameworks through unstructured prompts, with AI algorithms performing interpretative transformations based on semantic networks within training data, establishing symbiotic cocreation. When the prompts entered by humans lack specific technical parameters, the AI algorithm, as an independent actor, reconstructs the human intention according to the knowledge system encoded in its training data, so that the output reflects a creative mixture of human and machine. In judicial determination, the core standard at this level is to weigh the “semantic density” of the human prompt against the “variation amplitude” of the AI algorithm. In terms of semantic density, prompts must contain core elements sufficient to define the creative framework. This semantic framework provides a deductive anchor for the AI algorithm; “vaporwave style”, for example, constrains the selection of color modes and visual symbols, while the AI autonomously fills in specific expressive elements such as light-and-shadow treatment and texture detail based on training data drawn from millions of relevant images (Shi, 2020). Evaluating the algorithm's variation amplitude requires quantifying, through computer vision techniques, the stylistic difference between the output and the training data. When the prompt contains more than three specific creative elements and the algorithm's autonomous variation stays within that framework, it can be concluded that the human creative framework has not been excessively reconstructed in the translation process. For example, if a user inputs “cyberpunk-style Tokyo streets” and the AI-generated image retains core symbols such as neon colors and mechanical prosthetics, then even independent innovation in architectural details remains within the acceptable range of variation. Conversely, if the output completely departs from the prompt's semantic framework, the work may fall into the third tier owing to an excessive mutation rate. In the human-computer collaborative creation model, an important factor in judging the human creative contribution is the depth of manual screening, that is, whether the creator intervenes in the AI's raw output with aesthetic judgment; such screening is not a mechanical operation but reflects secondary human creative processing of AI-generated content (Sun & Wang, 2021).
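The balance between semantic density and variation amplitude can be given a numerical sketch. The toy function below is an illustrative assumption, not an established metric: it compares a bag-of-words vector of the prompt with one of a caption describing the output, using cosine distance as a crude proxy for how far the output has drifted from the prompt's semantic framework (a production system would compare image and text embeddings, not word counts).

```python
import math
from collections import Counter

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def variation_amplitude(prompt: str, output_caption: str) -> float:
    """Crude proxy for drift: 1 - similarity. High values suggest the
    output departed far from the prompt's framework (toward Tier 3)."""
    pv = Counter(prompt.lower().split())
    ov = Counter(output_caption.lower().split())
    return 1.0 - cosine_similarity(pv, ov)

# A faithful output retains core symbols from the prompt...
low = variation_amplitude("cyberpunk style tokyo streets neon",
                          "neon tokyo streets cyberpunk style city")
# ...while an unrelated output drifts entirely.
high = variation_amplitude("cyberpunk style tokyo streets neon",
                           "pastoral watercolor meadow sheep")
print(low < high)  # → True
```

The design choice here mirrors the article's logic: the metric does not ask whether the output is good, only whether the human semantic framework survived the algorithm's “translation”.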
Tier 3: Exclusion from protection. This tier is defined by the actor-subnetwork comprising AI algorithms and training data, assuming creative primacy, with human input reduced to mere triggering functions incapable of effective creative intervention, resulting in severed translation chains between human intent and outputs. The theoretical basis of this mechanism stems from the concept of “technological autonomy” of Latour's network theory of actors, and when the algorithmic complexity of an AI system exceeds the threshold of human control, its autonomously generated content is closer to “natural phenomena” than to “intellectual expression”, which is in line with the basic position of copyright law to “protect originality rather than chance”. In judicial practice, the determination at this level needs to meet the double negative conditions: Firstly, the technical verification of the generalization of the instruction. The semantic analysis of prompt words through NLP technology needs to confirm that the input content belongs to the general expression in the public domain. Such instructions lack specific creative elements to provide substantive creative guidance for AI algorithms, and their functions are equivalent to the mechanical operation of initiating a program (Suo, 2024). Secondly, the technical proof of the fracture of creativity. Through computer vision algorithms or text feature comparison techniques, it needs to be confirmed that the feature correlation between the AI output results and the training data is weak, and the algorithm plays a dominant role in the formation of expression elements (Sun, 2019). 
In such cases, the core expressive elements of the output derive mainly from the autonomous interpretation of the AI algorithm rather than from the transformation of human creative input: for example, random abstract patterns generated by an AI through the noise-sampling mechanism of a diffusion model, or subjectless text generated independently by a transformer-based model. Lacking a chain of creative logic attributable to a human, such content cannot meet copyright law's basic requirement of “intellectual creation”. At the level of technical implementation, determination at this tier also depends on the platform's appropriate disclosure of the algorithmic black box.
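The “text feature comparison” step described above can be made concrete with a toy example. The sketch below is a minimal illustration in Python, not any system actually used by courts or platforms: it scores the lexical overlap between two texts with cosine similarity over bag-of-words vectors, whereas a real assessment would rely on learned embeddings or perceptual features.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between simple bag-of-words vectors.

    A toy stand-in for the feature-comparison techniques mentioned
    in the text; real systems would use learned embeddings instead.
    """
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# A low score is one (weak) indicator that the expressive elements of
# the output are not traceable to the compared text.
print(cosine_similarity("a red fox jumps", "a red fox jumps high"))
print(cosine_similarity("abstract noise pattern", "quarterly sales report"))
```

On this crude measure, a score near zero would support, but never by itself establish, the “weak correlation” condition; the tier determination still requires the legal analysis described above.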
Theoretical innovation and practical breakthrough
The construction of this model not only responds to the practical need to determine the copyrightability of AIGC but also achieves a double breakthrough at the theoretical and institutional levels, providing a solution of both theoretical depth and operational feasibility for the innovation of the copyright system in the digital era.
Firstly, at the theoretical level, a triple paradigm innovation is realized. The first is ontological innovation. Breaking through the traditional “subject-object dichotomy” and drawing on Latour's ANT, AI algorithms and training data are regarded as “actors” on a par with human beings, jointly constituting a creative network; this breaks the monopoly narrative of anthropocentrism. For example, when processing the Zarya of the Dawn registration, the USCO concluded that only the human author's original expression, through prompt selection and arrangement, screening, and post-editing, was copyrightable, while the images generated by Midjourney were not themselves protected. Although ANT was not directly adopted in that case, it highlights the practical need to distinguish the creative contribution of human “actors” from the generative role of algorithmic “actors”, echoing the ontological view of a jointly constituted creative network. The second is methodological innovation: a shift from qualitative judgment to quantitative analysis. Introducing the operationalist approach of the philosophy of science, the abstract notion of “originality” is deconstructed into calculable technical indicators. Human intent density is measured by factors such as the semantic entropy of the prompt and the number of parameter dimensions adjusted, while the algorithm's contribution is gauged by the output mutation rate and its similarity to the training data, so that a previously subjective judicial determination can rest on objective data. This effectively resolves the operational dilemma in judicial practice of “multiple prompt iterations without clear standards” (Wu, 2020). For example, in “Case No.
11279”, when determining whether the picture in question constituted a work, the court focused on the level of detail of the prompts the plaintiff entered, the number of parameter adjustments made, and their specific effects. Although the judgment ultimately rests on the “originality” of human intellectual investment, the court's attention to these quantifiable operational steps reflects the methodological shift from purely qualitative judgment toward quantitative support and provides judicial grounding for the specific quantitative indicators proposed in this model. The third is value-theory innovation. Going beyond the single dimension of the traditional “incentive for innovation”, a three-tier balance of interests is constructed: full copyright is granted to strongly protected works to encourage high-investment creation, the term of rights is limited for weakly protected works to prevent algorithmic innovation from being monopolized, and unprotected works are released into the public domain to ensure a supply of raw material for cultural creation, thereby seeking a dynamic balance between technological innovation and the public interest (Wen & Shen, 2023).
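The “semantic entropy of the prompt” invoked above can be illustrated with a deliberately simplified sketch. The function below is a hypothetical proxy, not a measure endorsed by any court: it computes Shannon entropy over a prompt's word distribution, so that longer, more varied prompts score higher than short generic ones.

```python
import math
from collections import Counter

def prompt_entropy(prompt: str) -> float:
    """Shannon entropy (in bits) of the prompt's word distribution.

    A crude proxy for the "semantic entropy" the text mentions: it
    captures lexical variety only, while real measurement of human
    intent density would require semantic models.
    """
    words = prompt.lower().split()
    if not words:
        return 0.0
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

generic = "a cat"
detailed = ("a long-haired calico cat curled on a windowsill at dusk, "
            "soft backlighting, shallow depth of field, melancholy mood")
print(prompt_entropy(generic))
print(prompt_entropy(detailed))
```

Under this toy metric, the detailed prompt scores higher than the generic one, mirroring the intuition that richer instructions carry a higher density of human intent; any judicially usable indicator would of course need far more than word counts.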
Secondly, the framework demonstrates significant synergy in implementation. Firstly, compatibility with existing judicial standards. For the United States, it translates the “substantial control” criterion into quantifiable metrics such as “instruction structuralization” and “output traceability”, enhancing operational applicability. For China's “creative contribution” standard, it objectifies subjective judgments through “creative contribution mapping” for proportional analysis, improving adjudicative consistency. In interpreting Article 3 of the Copyright Law of China, it frames AIGC originality as “effective human creative inscription within actor-networks”, aligning with legislative intent. Secondly, technology-enabled judicial advancement. The framework proposes an integrated “AIGC Copyright Assessment System” incorporating NLP semantic analysis, computer vision comparison, and blockchain verification modules, which could interface with the Beijing Internet Court's “JusticeChain” for end-to-end digital management from creation to determination. Thirdly, normative guidance for industry. The framework steers the development of the AIGC sector by providing content platforms with compliance benchmarks, offering creators co-creation protocols, and establishing techno-ethical guidelines for AI developers, fostering lawful industry maturation (Wu, 2020).
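To make the three-tier logic concrete, the sketch below maps two illustrative metrics onto the tiers. The metric names, the assumption that both are normalized to [0, 1], and the threshold values are all hypothetical choices made only to demonstrate the structure of a tiered determination; they are not drawn from any statute, case, or the assessment system described in this article.

```python
def classify_tier(intent_density: float, output_mutation_rate: float) -> str:
    """Map two illustrative metrics onto the article's three tiers.

    Both inputs are assumed normalized to [0, 1]. The thresholds are
    hypothetical, chosen only to make the tiering logic concrete.
    """
    if intent_density >= 0.7 and output_mutation_rate <= 0.3:
        return "strong protection"   # human intent dominates the output
    if intent_density >= 0.3 and output_mutation_rate <= 0.7:
        return "weak protection"     # mixed human/algorithmic contribution
    return "non-protection"          # algorithm dominates; generic prompt

print(classify_tier(0.9, 0.1))
print(classify_tier(0.5, 0.5))
print(classify_tier(0.1, 0.9))
```

The value of such a scheme lies less in the particular cut-offs than in forcing the determination to rest on stated, reviewable indicators rather than unarticulated intuition.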
PATH EXPLORATION: FEASIBLE LEGAL PROTECTION STRATEGIES FOR GENERATIVE ARTIFICIAL INTELLIGENCE-GENERATED OBJECTS
Hawking warned that the rise of powerful AI will be either the best or the worst thing in human history, and that AI could spell the end of human civilization unless we learn how to avoid the dangers. Even a human creation must meet the legal requirements for constituting a work before it can be protected under copyright law. Similarly, not all AI-generated content is copyrightable; only AI-generated content that satisfies the statutory requirements for a work is (Li & Kuang, 2023). Generally speaking, the objective expression of AIGC content is no different from that of human-created works, and it is relatively easy to establish that it “belongs to the fields of literature, art, and science” and is “reproducible”. However, to fit the new phenomenon of AI-generated content into the old rules, a reasonable interpretive path must be found regarding its originality, the subject of rights, and its attributes as an intellectual achievement.
Originality stems from substantial contributions from users
Originality is the basic requirement for copyright protection of works. According to China's Regulations for the Implementation of the Copyright Law, the constituent elements of a work comprise four aspects: (1) it must belong to the field of literature, art, or science; (2) it must be original; (3) it must be capable of being reproduced in some tangible form; and (4) it must be an intellectual achievement. AI-generated content is copyrightable only if it meets these conditions. Under United States copyright law, the definition of a “work” turns on two core elements: originality and fixation in some tangible medium. Generally speaking, “originality” is understood to consist of two core elements: independence, meaning the work was completed by the author alone and not copied from the work of others; and creativity, meaning the work demonstrates at least a minimum degree of creativity (Huang & Huang, 2019). Objective criteria should be used to determine originality: if AI-generated content is independently produced by an algorithm, is not identical to the work of others, and its external form is indistinguishable from that of a traditional work, it should be deemed original. The key to judging originality is whether the work itself contains a minimum amount of creativity and can meet the public's needs, bringing humans the same value as traditional works (Wu, 2020; Xie & Chen, 2019).
At this stage, the operation and output of AIGC depend heavily on the role played by humans: AI cannot work without human input, carefully collected data to build its database, and the rules and instructions governing its operation. Under China's Copyright Law, the core of a work's originality lies in human creative activity, so the key to evaluating the originality of AI-generated works is identifying the substantial human contribution within them (Wang, 2017). Of the two key stages in AI generation, “setting operating rules” and “giving instructions”, both involve human behavior, but not all of it directly reflects human ingenuity. “Setting operating rules” is mainly a technical activity grounded in technical principles, algorithm design, and data input; although this work gives the AI its functions and performance, it is in essence closer to the implementation of technology than to creative expression. This stage therefore reflects a technical contribution rather than “human originality” in the sense of copyright law. The “giving instructions” stage is different: when the AI leaves users enough creative space, the user's creative intent is directly reflected in the instructions given to the AIGC system, and that intent is presented to society in the form of the generated output through the AI's processing. Although the AI processes the data in this process, the user's creative intention can still be relatively directly reflected in the generated product. If users have sufficient freedom to make a substantial, personalized expressive contribution when using the AI, their actions can be regarded as creative activity, which in turn gives the resulting product originality. Conversely, if the AI is too restrictive and the user cannot personalize the output according to his or her intentions, the resulting product cannot reflect originality.
AI machines inherently lack the ability to think and create independently, and all of their output revolves around the instructions of the direct user (Xie & Chen, 2019). How the direct user gives instructions is not only crucial but directly determines the quality of the generated work. In short, such works are the joint product of the direct user's ideas and the machine's intelligent processing. The ingenuity of AI-generated products therefore derives primarily from those who use the AI directly and make substantial contributions. This view accords both with the actual process of creating AI-generated products and with the jurisprudence of copyright law. When assessing the originality of AI-generated objects, the emphasis should fall on human creative intent and personalized expression, to the exclusion of purely technical contributions.
Clarify the subject of rights
Regarding the attribution of rights in AIGC outputs, perspectives diverge. One position advocates assigning copyright to AI systems through the doctrine of legal fiction, proposing that conferring independent legal personhood via legislative technique could establish more sophisticated human-AI legal relations (Sun & Wang, 2021; Yang & Zhang, 2018). However, AIGC systems, despite achieving autonomous learning through big data, lack free will and self-awareness. Unlike humans, AI cannot comprehend its own existence and generates content solely through algorithmic processing, learning, and imitation. This view finds support in the DABUS cases, in which the USPTO rejected patent applications naming the AI system DABUS as inventor on the grounds that a non-human entity cannot qualify as an inventor, and in the USCO's refusal to register A Recent Entrance to Paradise, which named an AI as its author. An alternative perspective contends that rights should vest in investors or designers. Yet investors recover their costs through user fees, and designers contribute knowledge and social context during AI development rather than at the moment of output creation (Zhang & Ren, 2018; Zhang & Liu, 2020). Existing frameworks, including computer software copyright and patent law, already provide sufficient incentives for AI research and development; granting additional copyrights in outputs would create duplicative protection and impede balanced industry development.
AIGC, as a creating agent, does not belong to the category of natural persons but exhibits rationality and behavioral abilities similar to those of human beings. From the perspective of jurisprudence, whether it can become a legal subject in civil law depends on whether it possesses the capacity for rights and the capacity for conduct (Yi, 2017). By learning patterns and creating purpose-driven works deployed in real-world contexts, AIGC exhibits genuine intentional expression, and technological evolution warrants open-minded consideration of recognizing AIGC legal personhood through civil law fiction mechanisms to address emerging legal challenges. Even so, assigning rights to AIGC users is more justifiable, for three reasons. Firstly, during deep learning, outputs often transcend the designers' original parameters while directly reflecting users' creative intent through operational commands. Secondly, economically incentivized users who bear access costs have a stronger motivation to refine and disseminate outputs (Zhang & Wu, 2020); their iterative prompt engineering fosters innovation, advancing the institutional and public interests underlying copyright law, and as primary operators and disseminators, users substantially promote the development of the AI industry while fulfilling copyright protection objectives. Thirdly, from a transaction cost perspective, user entitlement aligns with creative input theory while minimizing distribution barriers, thereby maximizing the utility of works. Moreover, designers and investors typically secure intellectual property rights or market returns when protection thresholds are met; granting them additional copyrights in outputs would create duplicative protection and over-incentivization (Zheng & He, 2020). Since AIGC itself remains ineligible for copyright, reversing the subject-object relationship is impermissible. Vesting rights in AIGC outputs with users therefore constitutes the optimal approach.
Distinguishing between generative artificial intelligence products and human works
Confusion between AIGC creations and ordinary human works, caused by their similar appearance, should be avoided. On the one hand, AI-generated objects differ from ordinary human works in their generation process, method, and result, and the audience's rights to know and to choose should be respected. On the other hand, AI-generated products are produced quickly and cheaply; driven by capital and technology, their number will keep increasing, and if they are not distinguished from ordinary human works, this will be extremely detrimental to the development of ordinary human works and may even squeeze out their market space. It is therefore necessary to distinguish AIGC creations from ordinary human works.
Firstly, establishing a regulatory framework for machine-generated works is essential to ensure their normative use, particularly given their explosive proliferation amid diversifying software capabilities. Without proper governance, misuse could trigger severe societal repercussions: after ChatGPT's launch, international reports described students committing academic fraud with AI-generated coursework, with violations often self-disclosed rather than detected by instructors. Consequently, many universities and publishers now explicitly reject AIGC-authored submissions to prevent improper gains, underscoring that regulated AI deployment is the primary defense against “algorithmic malpractice” (Zhang & Zheng, 2021). The intangible nature of such works makes receiver-side identification technologically challenging and socially costly, potentially disrupting industry ecosystems; although declaratory bans and user pledges offer partial safeguards, they remain vulnerable to exploitation. Copyright law should therefore implement origin-tracing mechanisms by mandating machine-origin identifiers modeled on attribution-right provisions. Specifically, all AIGC outputs should carry conspicuous machine-author labeling within defined disclosure scopes; to prevent public confusion between human and AI creations, direct users (authors) must affix such identifiers before dissemination, and unmarked AIGC outputs should be barred from distribution channels.
Secondly, integrate legislation with ethical norms. Currently, China has not yet enacted foundational legislation in the field of AI. The Next Generation Artificial Intelligence Development Plan issued by the State Council in 2017 set a goal to initially establish a legal, regulatory, ethical, and policy system for AI by 2025. To achieve this goal, attention must be paid to the effective integration of ethical guidelines and legal norms. Firstly, future AI laws should incorporate ethical norms to ensure the harmonious unity of technology and social values. Simultaneously, the interpretation and application of existing laws should also reflect the ethical principles of AI. Secondly, a comprehensive legal normative system needs to be constructed, based on general AI legislation, combined with specific regulations for AIGC, and integrated with relevant laws concerning algorithms, data, intellectual property, and anti-unfair competition. Although general AI legislation is an inevitable trend, it is not the most urgent need at present. Globally, jurisdictions exhibit diverse approaches to copyright legislation concerning AI-generated works; these varied legislative practices provide important references for China's lawmaking in this area. For instance, the US Copyright Office adopts a more conservative stance, emphasizing that copyright protects the intellectual creations of human authors; the EU copyright system's approach to protecting works created by programs offers a possible legal framework for the copyright attribution of AI works; UK copyright law explicitly defines the copyright ownership of computer-generated works. 
Synthesizing international legislative trends and specific cases, China's copyright legislation for AI works should refine the issue of copyright attribution, clearly define the concept of the creator, and consider the unique nature of AI technology, necessitating the establishment of a distinct copyright category or protection mechanism for AI works. Furthermore, in response to the widespread application of current AIGC, corresponding management regulations need to be swiftly introduced to mitigate legal risks and avoid market disorder and unhealthy competition. For example, specific regulations should be formulated for AIGC products such as DeepSeek, ensuring they provide convenience to enterprises while also complying with legal and ethical standards.
Protection of generative artificial intelligence products under the value objectives of copyright law
AI technology can, to a certain extent, innovate human creative techniques, and the originality standards for works are equally applicable to assessing the copyrightability of AI products; AI products with original characteristics should therefore enjoy copyright protection. The core of the copyright system is to protect rights and stimulate creative vitality, aiming to promote the widespread dissemination of knowledge and culture with a focus on realizing human value (Zhu & Li, 2023). Adhering to the value goal of copyright law thus means maintaining the dominant position of humans in creation. However, as AI technology develops and breaks new ground, its products, as an emerging category, are difficult for the existing copyright system to cover fully, and the contradiction between the demand for protecting AI products and the supply of the current copyright system grows increasingly prominent. We should adhere to the value orientation of copyright law, recognizing as works and protecting content that reflects human originality, while excluding from copyright protection products that lack an original contribution.
Beyond copyright protection for generative AI creations, many scholars argue that the neighboring rights system should be used to protect AI-generated results, with the corresponding rights reasonably divided and shared according to the contributions of the various stakeholders and of the AI itself in the generation process, and with obligations and responsibilities allocated accordingly. Given the relatively limited direct intellectual contribution of humans to AI creation, AI creations would on this view be treated as objects of neighboring rights and protected within an appropriate legal framework (Zhang & Liu, 2020; Guo, 2022). This article argues that generative AI products fall outside the scope of the objects protected by neighboring rights, and that such protection would also contradict the essence of AI creations. Traditional neighboring rights grant performers rights in their unique artistic performances, phonogram producers specific rights in the phonograms they produce, and radio and television organizations rights in the programmes they produce and broadcast, all of which derive from the interpretation and wide dissemination of works. The core objective of neighboring rights protection is the rights and interests of the disseminators of works, covering the interests involved in reproducing and disseminating the works of others, but it does not directly give rise to the creation of new works. In short, neighboring rights protect interests in the act of communication, not in the creation of new works (Zheng & He, 2020). The core criterion for choosing between copyright and neighboring rights is whether there is an original investment and the birth of an intellectual achievement.
Original investment encompasses both the creator's subjective creative intent and objective creative effort. In the AIGC process, both developers and direct users invest substantial labor; in particular, direct users operate the AI with an original intent to create and ultimately obtain original works. In essence, therefore, AI-generated objects are better suited to protection as copyrighted works. By contrast, neighboring rights apply where the relevant subjects, in activities such as performances and audio or video recordings, have no original intent to create and know that their actions will not produce original works; using neighboring rights for AI-generated products would neither reflect the creative intent of the subjects involved nor accurately capture the work-like attributes of AI-generated objects. Copyright is therefore the more appropriate form of protection for AI-generated products.
CONCLUSION
With the rapid development of AIGC technology, its powerful content-generation ability is profoundly changing the creative ecology and poses a severe challenge to the current copyright law system. The traditional intellectual property system aims to stimulate and promote human innovation by giving rights holders exclusive rights and the prospect of revenue. AIGC's deep involvement in the creative process, however, blurs the boundary between traditional authorship and the ownership of rights. At present, AIGC remains essentially a tool, and the copyrightability of its products turns on the depth of human users' involvement and control. A prudent and fair approach should therefore guide the construction of a copyright regime for AIGC products. Based on the degree of human control over and contribution to the AI generation process, captured by dimensions such as technical controllability and human intent density, a tiered copyright determination model can be constructed that divides products into three levels: strong protection, weak protection, and no protection. The key is to return to copyright law itself: when an AIGC product involves the substantial participation of the direct user in the creative process, embodies that user's creative contribution, and the final result meets copyright law's originality requirements for “works”, corresponding copyright protection should be granted. On the one hand, system design must consider the long-term trajectory of technological development and leave room for the future application of the law; on the other hand, attention must be paid to the balance between public interests and private incentives, avoiding unfairness or abuse arising from overprotection. In addition, when discussing the copyright issues of AIGC products, potential infringement liability cannot be ignored.
If generated content violates the rights and interests of others, the natural person (or legal person user) who directly uses the technology and controls or guides the generated content should bear tort liability for the consequences of using the tool. Liability should be determined on the fault principle of copyright law, examining the user's specific conduct in the generation process and the foreseeability of the infringing result. The aim is not simply to blame developers or platform operators who are not directly involved in the specific generation, but to focus on the users directly responsible for the infringing products. In short, the development of AIGC technology brings profound challenges and opportunities for copyright law. In constructing a copyright system adapted to new technologies, we must return to the basic principles of copyright law while maintaining an open and inclusive attitude. It is necessary both to respect and encourage, through reasonable copyright protection mechanisms, users who make substantial creative contributions to AI products, and to attend to the public interest and social responsibility. By exploring the tiered determination model based on human control and contribution, and continually refining it in practice, we can find the solution most consistent with the spirit of the law and technological reality.
CONFLICT OF INTEREST STATEMENT
No potential conflict of interest was reported by the author(s).
Bi, W. X. 2023. “The Risk Regulatory Dilemma of AI‐Generated Content and its Resolution: From the Perspective of ChatGPT's Regulation.” Journal of Comparative Law Research (03): 155–172.
Chen, F. M. 2023. “Challenges and Responses: Approaches to Copyright Protection for AI‐Generated Content.” Publishing and Distribution Research (06): 20–28.
Cong, L. X., and Y. L. Li. 2023. “Copyright Risk and Governance of Chatbot‐Generated Content: From the Perspective of ChatGPT's Application Scenarios.” China Publishing (05): 16–21.
Cui, G. B. 2023. “Users' Originality Contributions in AI‐Generated Content.” China Copyright (06): 15–23.
Deng, J.P., and Y.C. Zhu. 2023. “Legal Risks and Countermeasures of the ChatGPT Model.” Journal of Xinjiang Normal University (Philosophy and Social Sciences Edition) 44(05): 91–101+2.
Ding, X.D. 2023. “Deconstruction and Reconstruction of Copyright: A Jurisprudential Reflection on the Legal Protection of AI‐Generated Works.” Law and Social Development 29(5): 109–127.
Feng, G. 2019. “A Preliminary Study on the Legal Protection Path of AI‐Generated Content.” China Publishing (01): 5–10.
Feng, X.Q. 2006. “Copyright Expansion and Insight into Its Causes.” Law and Social Development (6): 74–87.
Feng, X. Q., and B. H. Pan. 2020. “Research on the Identification of Artificial Intelligence ‘Creation’ and the Protection of Property Rights and Interests: And a Comment on the ‘First Case of Copyright Infringement of AI‐Generated Content’.” Journal of Northwest University (Philosophy and Social Science) 50(2): 39–52.
Goldstein, P. 2003. Copyright's Highway: From Gutenberg to the Celestial Jukebox. Stanford: Stanford University Press.
Guo, W.M. 2022. “The Legal Nature and Copyright Protection of Artificial Intelligence Generated Works.” Publishing and Distribution Research (05): 58–64.
Huang, H., and J. Huang. 2019. “The Rationality of Protecting AI‐Generated Content as Works.” Jiangxi Social Sciences 39(02): 33–42+254.
Huang, X. R., and L. Liu. 2023. “The Significance of ChatGPT From the Perspective of Technology and Philosophy.” Journal of Xinjiang Normal University (Philosophy and Social Science) 44(6): 123–130.
Jiao, H. P. 2022. “Copyright Risks and Mitigation Paths of Data Acquisition and Utilization in Artificial Intelligence Creation.” Contemporary Jurisprudence (04): 128–140.
Kalyatin, V. O. 2022. “Establishing of Subject of Rights to Intellectual Property Created With the Use of Artificial Intelligence.” Pravo‐Zhurnal Vysshei Shkoly Ekonomiki 15(04): 24–50.
Le Thi, M. 2023. “Copyright Protection for Works Created by Artificial Intelligence Technology Under the EU Law and Vietnamese Law.” Review of European and Comparative Law 55(4): 7–28.
Levendowski, A. 2018. “How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem.” Washington Law Review, 93(02): 579–630.
Li, M. D. 2020. “Work Protection Systems Under the Two Major Legal Systems.” Intellectual Property (7): 3–13.
Li, X., and Y. Kuang. 2023. “Criminal Law Considerations for ChatGPT‐Like Artificial Intelligence and Its Generations.” Journal of Guizhou Normal University (Social Science Edition) (04): 78–91.
Li, Y. 2019. Fundamental Principles of Copyright Law. Beijing: Intellectual Property Publishing House.
Liu, C., X. Li, B. Yin, Q. L. Zheng, and M. Wei. 2023. “Large Model Technology and Industry: Current Status, Practices, and Reflections.” Artificial Intelligence (4): 32–42.
Liu, Q. 2020. Research on the Legal Issues of Copyright in Generative Artificial Intelligence. Beijing: Law Press.
Morocco‐Clarke, A., F. A. Sodangi, and F. Momodu. 2024. “The Implications and Effects of ChatGPT on Academic Scholarship and Authorship: A Death Knell for Original Academic Publications?” Information & Communications Technology Law 33(1): 21–41.
Ramalho, A. 2017. “Will Robots Rule the (Artistic) World? A Proposed Model for the Legal Status of Creations by Artificial Intelligence Systems.” Journal of Internet Law 21(01): 12–25.
Samuelson, P. 1985. “Allocating Ownership Rights in Computer‐Generated Works.” University of Pittsburgh Law Review (47): 1185–1228.
Shi, D. 2020. “Copyright Ownership of Artificial Intelligence Creations and its Coping Strategies.” Guangxi Social Sciences (04): 124–130.
Sun, Y. G., and D. B. Wang. 2021. “A Marxist Examination of the Subject Status of Artificial Intelligence Simulation.” Gansu Social Sciences (02): 81–88.
Sun, Z. 2019. “An Analysis of the Copyright for Content Generated by Artificial Intelligence.” Tsinghua Law Journal (6): 190–204.
Suo, W. 2024. “Technical Principles, Application Challenges, and Optimization Paths of Generative Intelligent Publishing.” Communication and Copyright (08): 59–61.
Wang, Q. 2023. “Revisiting the Characterization of AI‐Generated Content in Copyright Law.” Tribune of Political Science and Law 41(4): 16–33.
Wang, Q. 2017. “On the Qualification of AI‐Generated Content in Copyright Law.” Legal Science (Journal of Northwest University of Political Science and Law) 35(05): 148–155.
Wen, T. G. 2024. “Refuting the ‘Tool of Creation’ Theory in Artificial Intelligence.” Intellectual Property (1): 85–105.
Wen, Y. Z., and Y. Y. Shen. 2023. “Confusion of Human‐Machine Coexistence: Analysis of Copyright Ownership and Infringement Crises of Robot Journalism.” Modern Communication (Journal of Communication University of China) 45(9): 28–35.
Wu, H. D. 2020. “Questions of Copyright Law Regarding Artificial Intelligence Generated Works.” Peking University Law Journal 32(3): 653–673.
Wu, Y. H. 2020. “Copyright Protection of Artificial Intelligence Creations: Issues, Controversies, and Possible Futures.” Modern Publishing (6): 37–42.
Xiang, B. 2020. “On the Protection of Neighboring Rights of Artificial Intelligence‐Generated Achievements.” Science & Technology and Publishing (6): 70–75.
Xie, L., and W. Chen. 2019. “Solving the Copyright Dilemma of Artificial Intelligence Generations Under the Rule of Fictional Authors.” Law Application (9): 38–47.
Xiong, Q. 2017. “Copyright Recognition of AI‐Generated Content.” Intellectual Property (7): 3–8.
Xu, X. H. 2024. “On Equal Protection of AI‐Generated Content in Copyright Law.” China Legal Science (1): 166–185.
Yang, Q. W., and L. Zhang. 2018. “On the Fictional Legal Personality of Artificial Intelligence.” Journal of Hunan University of Science and Technology (Social Science Edition) 21(6): 91–97.
Yao, Z. W., and Y. Shen. 2018. “On the Copyright Ownership of Artificial Intelligence Creations.” Journal of Xiangtan University (Philosophy and Social Sciences) 42(3): 29–33.
Yi, J. M. 2017. “Are AI‐Generated Creations Works?” Science of Law (Journal of Northwest University of Political Science and Law) 35(5): 137–147.
Zhang, C. Y., and X. Ren. 2018. “Copyrightability and Ownership of Artificial Intelligence Creations.” Contemporary Law Review 16(4): 22–28.
Zhang, H. B., and S. L. Liu. 2020. “Challenges and Responses: Copyright Issues of Artificial Intelligence Creations.” Journal of Dalian University of Technology (Social Science Edition) 41(1): 76–81.
Zhang, H. B., and H. B. Wang. 2024. “Copyright Priority or Technology Priority? Trends and Implications of France's Response to AIGC Copyright Risks.” Editorial Friend (5): 103–112.
Zhang, J., and P. Wu. 2020. “Reconstruction and Generation Path of the Editorial Capability System in the Era of Artificial Intelligence.” Publishing and Distribution Research (4): 72–77.
Zhang, X. B., and L. Bian. 2024. “Research on Copyright Protection of AI‐Generated Content.” Journal of Comparative Law Research (2): 77–91.
Zhang, X. P., and P. Zheng. 2021. “On the Dilution of the Natural Source of Originality in Artificial Intelligence Creations.” Journal of Dalian University of Technology (Social Science Edition) 42(6): 106–113.
Zheng, Y. M., and X. X. He. 2020. “Review of the Protection Path of Artificial Intelligence Generations From a Results Perspective.” Science, Technology and Law (3): 14–21.
Zhu, H. J., and X. Y. Li. 2023. “Non‐Copyright and Copyright Infringement Risks of ChatGPT‐Generated Content.” Journalists (6): 28–38.
Zhu, M. Y. 2019. “Research on Copyright Protection of AI‐Generated Content.” PhD diss., Wuhan University.
© 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.