This article analyzes the divergence between China’s Personal Information Protection Law (PIPL) and the EU’s General Data Protection Regulation (GDPR), despite their textual similarities. It argues that China’s approach to data protection is shaped by distinct domestic understandings of “risk,” rooted in past legislation, judicial practices, and social concerns. Using focal point theory, the authors identify three key dimensions of risk in China: large-scale participation, economic loss, and threats from third parties. These focal points explain why China’s risk-based approach prioritizes different enforcement goals than the GDPR. The article also shows how these differences manifest in several areas, including the definition of personal information, the regulation of automated decision-making, and the design of enforcement authorities. Ultimately, the article challenges the assumption that legal diffusion through the “Brussels Effect” leads to uniform global standards. Instead, it highlights how domestic cultural and institutional factors reshape transplanted laws, creating seemingly performative enforcement that reflects localized regulatory logics.
Introduction
Data protection law is sweeping across the globe. Legal diffusion scholars attribute this trend to the Brussels Effect, generated by the EU's renowned data protection law, the General Data Protection Regulation (GDPR). Driven by the accessibility and effectiveness of the GDPR and the strong economic appeal of the European consumer market, other jurisdictions have strong incentives to establish their data protection laws using the GDPR as the model (Bradford, 2020; Schwartz, 2021). As Professor Paul Schwartz (2019) aptly notes, the GDPR is becoming the global gold standard for data protection.
Zooming in, however, one may find that the legal diffusion effect of the GDPR seems to remain at the textual level, without substantially extending to the enforcement level. Some countries have enacted data protection laws more or less modeled after the GDPR (Belli and Doneda, 2023), yet their enforcement practices differ significantly from the GDPR's. For example, China may face challenges in enforcement due to fragmented enforcement authorities and ambiguous standards (You, 2022; Zhang, 2024); in Africa, data protection regulations are sometimes deliberately ignored by major internet companies (LaCasse, 2024); even in the EU, regulators sometimes selectively neglect violations of data protection law (Manancourt, 2022). Privacy law scholars refer to this insufficient enforcement as performative, suggesting that the enactment, enforcement, and compliance practices of data protection laws are largely symbolic, intended to satisfy external observers rather than to achieve substantive regulatory outcomes (Waldman, 2022). But why do these countries transplant the GDPR while enforcing it only in a performative way? How should we understand this phenomenon?
One possible explanation concerns the administrative capacity of regulators. Recent literature points out that while the primary drivers of legal diffusion are the scale of economic cooperation and the geographical proximity between countries, the extent to which these laws develop further and are effectively enforced may depend on the administrative resources available in the adopting country (Bradford et al., 2024). This potentially explains why many Global South countries exhibit notable differences in data protection enforcement compared to the GDPR, especially given that the full enforcement of a GDPR-like data protection law demands substantial administrative resources (Chander et al., 2021; Lancieri, 2022). In fact, even for some EU member states, a lack of administrative resources is a key reason why their enforcement of data protection laws tends to be superficial or merely performative (Satariano, 2020). In Professor Iza Ding's words (2022a), when regulators face significant external pressure to enforce laws yet lack the capacity to fully meet those demands, their actions tend to become performative. Through performative governance, they can largely assuage citizens' complaints (Ding, 2022a).
However, even in jurisdictions where regulators possess ample enforcement resources, performative enforcement still emerges, revealing obstacles to the legal diffusion of the GDPR model. This article attempts to explain these limitations in legal diffusion from another angle, with China as an example. It argues that the reason for these seemingly performative enforcement practices could also be that these states understand key concepts in data protection differently than EU legislators do. This paper refers to these differences as cultural differences. In other words, such deliberate or limited enforcement is not intended to serve a performative function, but instead reflects differing substantive priorities in data protection. The remainder of this article demonstrates that China's internal demand for data protection causes these differences. Before the implementation of China's data protection law, the Personal Information Protection Law (PIPL), a patchwork of fragmented laws protected personal data. These laws aimed to address China's severe problems with telecom fraud and identity theft. They generated what Professor Richard McAdams (2000) calls focal points, not only signaling to the public which data practices are legally recognized but also delineating the public's main concerns about data protection. These laws and cases occupied a considerable part of the public discussion on data privacy in China. These discussions ultimately catalyzed legislation through public demands for lawmakers to respond, shaping the positioning of China's data protection law (Alford and Liebman, 2001; Zhang and Ding, 2017; Zhang and Ginsburg, 2019). As a result, China's understanding of risk in data protection law differs from that of the GDPR.
The PIPL is more focused on managing socially defined economic risks, which often do not stem from the direct relationship between data controllers and data subjects, but rather from third parties external to that relationship. In other words, compared to the GDPR, the PIPL functions more like a “data trespass law,” aiming to address unauthorized data collection and misuse by third parties.
The contribution of this article is two-fold. First, it contributes to the understanding of the legal diffusion of data protection laws by explaining the practical limitations of such diffusion with focal point theory. Some scholars have begun to question the actual impact of the Brussels Effect, arguing that existing domestic approaches to data protection create path dependency, preventing a complete shift toward the GDPR (Schwartz, 2009; Chander et al., 2021). Professor Mark Jia (2024) notes that for countries that previously underappreciated privacy, such as China, domestic social demands may be the primary driver behind their shift toward data protection. A recent study suggests that the Brussels Effect may be limited to encouraging countries like China to adopt ready-made concepts from the GDPR, while existing domestic legal frameworks create a path dependency that leads to structural differences in data protection laws (Li and Chen, 2024). However, it remains unclear how these internal drivers interact with the Brussels Effect of the GDPR and ultimately shape these legal divergences. This article argues that this interaction is reflected in the way that, after certain concepts are introduced from the GDPR, their substance is filled in by China's pre-existing domestic legal system and practices. This preconditioning operates through focal points. When frontline enforcers implement a new law, existing systems and practices provide them with an interpretive framework for understanding the new legal concepts. As a result, it becomes difficult for them to deviate from these entrenched understandings. This suggests that while the GDPR may capture other jurisdictions through its accessibility, effectiveness, and economic appeal, pre-existing domestic legal systems and practices continue to shape how these jurisdictions interpret the novel legal concepts introduced by the GDPR, ultimately leading to divergences in implementation.
Second, it suggests one more possible cause of performative data protection enforcement in the Global South. Cultural differences between China and the EU in data protection influence their understanding of risk-based approaches, which are often regarded as the core of data protection law (Kuner et al., 2015). As European data protection scholars have pointed out, the risk-based approach in the GDPR is primarily concerned with the fundamental rights of data subjects (van Dijk, 2016). In contrast, Chinese lawmakers associate risk with three different dimensions: large-scale participation, economic interests, and threats from third parties. This difference in understanding has led to a situation where, although the PIPL closely mimics the GDPR at the textual level, its actual enforcement focuses on different priorities. This gap between data protection on the books and on the ground contributes to the seemingly performative data protection enforcement.
A final word about the Chinese data protection framework is necessary. While the PIPL is the only comprehensive data protection law akin to the GDPR, it did not emerge in a legal vacuum. Prior to its enactment, China already had a variety of laws regulating data privacy. In the pre-PIPL era, data privacy protection in China followed two main legal approaches. The first consists of private law mechanisms that allowed individuals to seek remedies through litigation, such as the now-repealed Tort Law (2009) and Contract Law (1999), and their successor, the Civil Code (2021). The second consists of public law mechanisms, including the Consumer Protection Law (2013), E-Commerce Law (2018), Cybersecurity Law (2016), Data Security Law (2021), and Criminal Law (2009). Although these laws also sometimes permit individuals to bring lawsuits, enforcement primarily depends on regulatory agencies to ensure corporate compliance with legal obligations. The PIPL integrates both approaches (PIPL, Articles 50, 60–66, and 69). Moreover, because all of these laws were enacted by the National People's Congress (NPC) or its Standing Committee, there is no formal hierarchy among them in terms of legal authority. Some provisions of the PIPL represent restatements or even modifications of existing legal rules. While it remains unclear which law takes precedence in cases of overlap, Chinese legal scholars generally regard the PIPL as the lex specialis, and therefore the law that should take priority in application (Wang, 2021; Ding, 2023).
This article proceeds as follows. Part I analyzes the textual similarities and substantial differences between the PIPL and the GDPR. Part II, with the focal point theory, examines how cultural differences in the understanding of risk between China and the EU emerge. Part III explores how these cultural differences shape the distinct risk-based approaches of the PIPL and the GDPR. Then, this part rereads several key concepts of the PIPL through the lens of its risk-based approach and discusses how these concepts contribute to China’s seemingly performative enforcement of data privacy law.
China’s personal information protection law: a mini-GDPR?
The GDPR is an important factor in the emergence of the PIPL. As scholars have noted, adapting foreign legal templates to domestic contexts can better help integrate legal innovations into domestic legal systems (Scott, 2009). China's first attempts to enact a comprehensive data protection law began in the early 2000s but repeatedly failed (Huang and Shi, 2021). Within the span of three years, however, the PIPL rapidly moved from drafting to enactment. To explain this sudden acceleration, Li and Chen argue that the GDPR provided a kind of gravity assist, helping Chinese lawmakers recognize the GDPR framework as an effective and comprehensive model for data protection (Li and Chen, 2024). This external validation ultimately facilitated the approval of a proposal modeled on the GDPR (Li and Chen, 2024).
This part identifies four elements that the PIPL has transplanted from the GDPR: the omnibus law model, the constitutional foundations of data protection, the legal bases for data processing, and data subject rights. For data protection scholars, these elements are core features of the GDPR and are the primary aspects that distinguish it from other data protection approaches (Schwartz and Peifer, 2017; Hoofnagle et al., 2019; Jones and Kaminski, 2020).
The legislative model
The GDPR (2016) is a typical omnibus data protection law, meaning that it regulates how all “data controllers” process any information they hold about individuals, rather than focusing only on data processing within specific sectors (GDPR, Article 4(1), (7)). Given the broad definition of personal data under the GDPR, very few data processing activities fall outside its scope (Purtova, 2018). In other words, the GDPR classifies almost all information or data as personal data, thereby applying a uniform regulatory framework (Nissenbaum, 2024). In this context, at the enforcement level, the GDPR potentially requires a model of collaborative governance, where regulated companies first interpret and implement the rules, and regulators then assess whether these interpretations qualify as best practices (Kaminski, 2019).
The PIPL (2021) also adopts the omnibus model. It defines personal information in terms of identifiability, excluding only anonymized data (PIPL, Article 4). In addition, it applies to all personal information processing activities, including those conducted by the government (PIPL, Articles 2, 3). This means that, although China’s data protection regime has traditionally relied on sectoral laws, the PIPL seeks to apply a unified set of rules to all personal information processing, rather than context-specific rules.
The PIPL also mirrors the GDPR in distinguishing data protection from privacy protection. Both laws exclude the processing of personal information by a natural person in the course of purely personal or household activities—processing that typically involves privacy (GDPR, Article 2(2)(c); PIPL, Article 72). As Professor Ola Lynskey (2014) notes, data protection is not context-dependent and extends to data related to identifiable individuals even if their identity remains unknown. This distinction suggests that, at least textually, the PIPL embraces the GDPR’s concept of a right to data protection, rather than a consumer privacy concept under which whether personal information is considered private depends on its context of use.
However, in certain contexts, the PIPL is rarely enforced in practice, or is enforced only loosely. On the one hand, scholars have found that the PIPL faces significant challenges in requiring some data controllers to fully meet its compliance obligations. For example, government data collection and processing in China often do not satisfy all transparency requirements and are rarely challenged by regulators (Horsley, 2021). On the other hand, certain purpose-specific data collection and processing activities are often required to meet only the PIPL’s basic obligations. For instance, compared to the GDPR’s stricter consent requirements, the consent requirements for data collection in the provision of public services may be weakened in China (Ding and Zhong, 2021). Additionally, Chinese regulators have relaxed data protection requirements to support the development of emerging technologies that rely on data use (Xiong et al., 2023; Zhang, 2025).
The constitutional root
One key rationale for the EU’s prioritization of data protection over data use is that data protection is recognized as a fundamental right. The Charter of Fundamental Rights of the European Union (2009), drafted around 2000 and legally binding since 2009, states in Article 8 that “[e]veryone has the right to the protection of personal data concerning him or her.” Article 1 of the GDPR likewise emphasizes that it “protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data.” As the Court of Justice stated in the landmark case of Google v. Spain (2014), to ensure effective and complete protection of fundamental rights, especially the right to privacy in personal data processing, words concerning data protection “cannot be interpreted restrictively.” When the right to data protection conflicts with another fundamental right, the right to data protection is consistently given priority (Lynskey, 2015; Burri, 2023).
This fundamental right does not merely constrain the government. Fundamental rights in the EU have “horizontal effects,” meaning they apply not only to government-on-private relationships but also to private-on-private interactions (Schwartz and Peifer, 2017). Under this type of constitutional regime, data protection in the EU has developed into a rights-talk model, which is mainly about interests of dignity, personality, and self-determination (Schwartz and Peifer, 2017).
The PIPL, in its text, also acknowledges the constitutional roots of data protection. Article 1 of the PIPL explicitly states: “This Law is formulated based on the Constitution.” Indeed, declaring a law to be based on the Chinese Constitution appears to be a formal convention in Chinese legislation; the Civil Code, for example, contains a similar provision (Civil Code, Article 1). What is notable about the PIPL, however, is that this constitutional reference was absent from the early publicly released draft submitted for review by China’s ultimate legislative decision-makers (NPC, 2021). The clause was added only during the final stages of the legislative process and was preserved in the enacted text. This suggests that Chinese lawmakers may have considered the clause to carry substantive significance. Its inclusion in the final text indicates, at the very least, an intention to leave open the possibility of framing data protection as having a constitutional basis, and room for future clarification of the constitutional source of data protection in China. Several leading Chinese scholars have pointed out that certain clauses of the Constitution may, through interpretation, be read to grant citizens the right to data protection (Wang and Peng, 2021; Zhang, 2022; Ding, 2022b). They argue that provisions in China’s Constitution regarding human rights, personal dignity, and the freedom and confidentiality of correspondence potentially serve as constitutional sources for data protection. At a minimum, these three categories of fundamental rights may provide the “essence” of China’s data protection rights (Constitution, Articles 33, 38, and 40). As some data privacy law scholars suggest, such essences can help clarify the ambiguous meaning and boundaries of the right to data protection (Angel and Calo, 2024).
Indeed, prior to the formal enactment of the PIPL, Chinese lawmakers, in publicly announcing revisions to the PIPL draft, also acknowledged that the law plays a significant role in protecting these three fundamental rights (NPC Constitution and Law Committee, 2021).
However, there are two substantive differences between the PIPL and the GDPR regarding their constitutional roots. On the one hand, constitutional rights in China lack horizontal effects. Even if the right to data protection has constitutional roots, it primarily governs government-on-private relationships rather than private-on-private interactions. As a result, the right to data protection in China may require the government to take action to improve corporate compliance, rather than imposing accountability on companies directly toward individuals. In practice, this could mean that individuals may not be able to bring claims directly against companies for breaching these obligations. For example, in Zuo v. A Co. (Shanghai) Ltd. & Y Co., the Guangzhou Internet Court held that individuals lack standing to sue personal information processors unless they have first requested that the processors honor their personal information rights (Guangzhou Internet Court, 2022). By contrast, in Schrems II (2020), the plaintiff’s claim focused on the United States’ failure to provide adequate data protection, even though he had not first requested the companies involved to correct their practices. On the other hand, the constitutional roots of the right to data protection in China remain contested. Some scholars argue that the right to data protection is merely an extension of the right to personality under private law, with its legal foundation rooted in the Civil Code (Wang and Xiong, 2020). This implies that data protection in China may primarily fall under tort law and contract law. In fact, in some Chinese data protection cases, courts have dismissed plaintiffs’ claims on the grounds that there was no concrete harm.
For example, the Hangzhou Internet Court held that the failure to provide a convenient channel for exercising the right to access did not give rise to a further risk of substantive harm; the plaintiff therefore had no standing to bring a lawsuit (Hangzhou Internet Court, 2022).
The lawfulness of processing
The PIPL and the GDPR share most of their legal bases for data processing. Under both laws, companies can lawfully process data on the basis of: (1) the individual’s consent; (2) contractual necessity; (3) necessity of compliance with legal obligations; (4) safeguarding the public interest; and (5) protecting the vital interests of natural persons (PIPL, Article 13; GDPR, Article 6). Companies under the GDPR can also establish a legal basis by proving that the processing is necessary for their legitimate interests (GDPR, Article 6(1)(f)).
However, regarding publicly available personal data, there is a clear divergence between the two laws. Under the GDPR, before processing personal data, companies must consider two key questions: whether they have a valid legal basis and whether the data falls into a category of prohibited data (Dove and Chen, 2021). When individuals voluntarily make certain sensitive data public, this may allow companies to process otherwise prohibited data categories, but controllers are still required to establish an independent legal basis before processing (Pathak, 2022). In contrast, under the PIPL, one important legal basis for data processing is that the personal data has been voluntarily disclosed by the individual or otherwise lawfully disclosed (PIPL, Article 13(6)). This means that once data has been voluntarily made public by the individual, companies automatically acquire a legal basis for processing, without needing to establish any additional basis. This divergence becomes even more apparent in the design of the right to withdraw consent under both laws. While both the GDPR and the PIPL allow individuals to withdraw consent and request the deletion of their personal data (GDPR, Articles 7(3), 17; PIPL, Articles 15, 47), the underlying logic differs. Under the PIPL, companies may process publicly available personal data by default, and individuals may later withdraw their consent to invalidate the company’s legal basis. For example, in one of the model cases published by the Beijing Internet Court, the court held that a data collector is not liable for collecting photos that individuals have published on publicly accessible social media platforms (Beijing Internet Court, 2024). In contrast, under the GDPR, companies are prohibited from processing publicly available personal data unless they first establish a valid legal basis; if that basis is consent, withdrawal renders any further processing unlawful.
In other words, although both laws share similar structures for establishing legal bases, the PIPL tends to default toward permitting data processing, whereas the GDPR defaults toward prohibiting it (Jones and Kaminski, 2020).
The rights of the data subject
The PIPL and the GDPR share a high level of similarity regarding personal data rights. Both laws grant individuals the right to notice, the right to access, the right to rectification, the right to deletion (the right to be forgotten), the right to portability, the right to explanation, and the right to reject automated processing and algorithmic decision-making (GDPR, Articles 12–20; PIPL, Articles 44–48).
However, the two laws differ in how these rights are enforced. When individuals find that a company has failed to ensure convenient means for exercising the personal data rights granted by the GDPR, they may bring a lawsuit to seek remedies, as such refusal may constitute “the processing of his or her personal data in non-compliance with this Regulation” under Article 79 of the GDPR. The PIPL, however, provides a safe harbor for companies. Individuals must first prove that they submitted a request to exercise their rights and that the company unreasonably refused to comply; otherwise, they lack standing to bring a lawsuit (Hangzhou Internet Court, 2022). Companies also have the opportunity to defend themselves by demonstrating that fulfilling the individual’s request would have been economically unreasonable or impossible, thereby establishing an exemption from liability (Beijing Internet Court, 2021).
The shaping of the meaning of risk in the PIPL: a focal point explanation
Data protection law has long relied on the risk-based approach. By mirroring multiple aspects of the GDPR’s text, the PIPL also embraces a risk-based approach. However, risk is an inherently ambiguous concept. In the GDPR, the risk-based approach is closely linked to the protection of fundamental rights (Yeung and Bygrave, 2021). While the textual similarities between the GDPR and the PIPL are evident, the PIPL adopts a different definition of risk, leading to substantive differences in their respective regulatory frameworks. Using focal point theory, this section argues that China’s past legislative practices and social concerns have shaped the PIPL’s conception of risk, which primarily reflects three dimensions: large-scale participation, economic interests, and threats from third parties.
Focal point theory and the Brussels Effect
In his classic The Strategy of Conflict, Thomas Schelling (1960) notes that two strangers who cannot communicate but need to coordinate rely on a focal point, the mutual expectation of what each person expects the other will expect them to do, to guide their behavior. For example, when he asked people where they would go to meet someone in New York without being able to communicate to agree on a place, more than half of the respondents said they would wait at Grand Central Station (Schelling, 1960).
The reason most of these respondents could make the same choice is that Grand Central Station, as the focal point, coordinated their behavior. In fact, many locations in New York could logically have been better meeting places than Grand Central Station (McAdams, 2000). Schelling’s respondents chose the station because they lived in New Haven, where people typically arrive in New York City at that location (Schelling, 1960). For them, this choice was the most mutually salient among the possible outcomes. Without communication, there were many possible meeting places, but other choices were less acceptable than the station, because each respondent knew that the other participants would also expect them to be more likely to appear there. In other words, the focal point narrowed the set of possible outcomes by anchoring their decisions around it.
What makes a point focal? Scholars argue that a point becomes more focal when participants are more homogeneous in their knowledge or experience (Mehta et al., 1994; McAdams, 2000; Smith, 2003). This homogeneity of knowledge and experience may arise from legal provisions, economic rationality, social consensus, or historical events. Once this homogeneity reaches a sufficient level, individuals may accept it because they strongly expect that it will not easily change (Hadfield and Weingast, 2012). Otherwise, refusing to conform risks losses from the miscoordination between their actions and those of others (McAdams, 2009).
The focal point theory offers an alternative explanation for why the Brussels Effect may give rise to divergences during enforcement. The Brussels Effect explains why other jurisdictions may adopt data protection laws that are textually similar to the GDPR. However, even in jurisdictions that are introducing a data protection law for the first time under the influence of the Brussels Effect, data misuse problems may have already emerged, and legal tools to address those problems may have been developed prior to the GDPR’s influence. These legal legacies create focal points. Rather than strictly following the GDPR’s enforcement model, enforcers in these jurisdictions may reasonably expect that others will continue to rely on pre-existing legal practices and institutional frameworks to interpret the new legal concepts and tools introduced by the GDPR. As a result, these focal points, though not the trigger for the adoption of the new law, play a coordinating role in its enforcement, leading to divergences from the GDPR model. In fact, as examined above, the differences between the enforcement of the PIPL and the GDPR appear to stem more from PIPL enforcers’ differing preferences and interpretations of the same legal texts than from the texts themselves.
Three focal points in China’s risk-based approach
In the context of China, three focal points coordinate the social understanding of risks associated with personal data processing. These focal points indicate the most prominent types of risk in China, which have become shared social experiences.
First focal point: risk caused by large-scale participation
The large-scale participation focal point implies that Chinese lawmakers assume risks associated with personal data arise from multiple, independent sources. Rather than emphasizing that individuals are unable to maintain autonomy when consenting to corporate data collection, this focal point highlights the fact that individuals are simultaneously exposed to numerous entities capable of misusing their personal data, more than they can reasonably manage. China first protected personal data through criminal law, which primarily targets systemic risk. In 2000, Chinese lawmakers declared that “illegally intercepting, altering, or deleting another person’s emails or other data materials, thereby infringing upon citizens’ freedom and confidentiality of correspondence,” would constitute a crime (NPC Standing Committee, 2000). Subsequently, in 2009, China made its first statutory attempt to protect personal data through the Seventh Amendment to China’s Criminal Law, which criminalized the illegal sale, provision, or acquisition of personal information by public sector employees (NPC, 2009). Lawmakers later expanded the scope of criminal liability in the Ninth Amendment to the Criminal Law, removing the restriction on who could qualify as an offender (NPC, 2015). As a result, any individual, not just public sector employees, could be charged with this crime. This change suggests that, for lawmakers, focusing solely on public sector employees was insufficient to achieve their intended objectives. In an official legislative interpretation of this amendment, lawmakers stated that expanding the class of offenders aimed to allow criminal law to address upstream crimes that facilitate other crimes and to crack down on the entire black market for personal data transactions (Zang and Li, 2015).
The trigger that made Chinese lawmakers recognize the importance of protecting personal data was the frequent occurrence of online violence facilitated by personal data misuse (Downey, 2010). One notable case involved a wife who committed suicide after discovering her husband’s infidelity, leaving behind posts detailing her emotional torment (Shen, 2016). Following this, internet users obtained her husband’s personal information, tracked his location, and subjected him to prolonged online harassment, real-life death threats, and a campaign that made him unemployable (Shen, 2016). A survey conducted by a Chinese government-affiliated organization found that 80% of respondents believed the state should regulate such forms of online violence (New York Times, 2008). In an article published by a state-run newspaper, a prominent Chinese criminal law expert pointed out that the introduction of data protection into China’s Criminal Law aimed to address the widespread issue of online violence.
What makes online violence distinct is that the risk arises from a wide range of participants. On the one hand, the victims’ personal information is often not voluntarily disclosed, but instead leaked by employees of data-holding entities, such as employers or government agencies. Once the victim’s alleged misconduct gains public attention, individuals from multiple regions voluntarily join the online violence and act collectively (Ong, 2012). On the other hand, the fact that criminal law is used as the primary tool for protection suggests that lawmakers aim to deter online violence through harsh punishment, rather than to remedy the harm suffered by victims. Because of the diffuse and large-scale nature of participation, lawmakers have focused on preventing online violence before it occurs, rather than relying solely on post-hoc enforcement (Canaves, 2010).
Second focal point: risk to economic interests
The second focal point arises from China’s tort law protections of personal data. The next major legislative move on data protection after the Seventh Amendment to the Criminal Law was the Tort Liability Law of China (TLL) (NPC, 2009). It should be acknowledged that this law did not explicitly recognize personal data interests as a distinct legal entitlement. Instead, it created space for personal data protection by introducing two broad concepts. First, Article 2 of the TLL defined a wide-ranging category of private interests, which could be interpreted to include personal data interests (TLL, Article 2). Second, the TLL made specific provisions for online torts, explicitly stating that internet users may also be liable for using the internet to infringe on others’ private interests (TLL, Article 36). In practice, this kind of tort often involved cases where internet users exploited others’ personal data to cause harm.
Chinese judicial practice then translated this latent potential into actual protection by subordinating personal data to the right to reputation. One typical tort involving internet users is the infringement of another person’s right to personality (Zhang, 2011). Most such cases take the form of harm to the individual’s reputation, and such reputational harm is often interpreted as a form of damage to economic interests. Even when a defendant causes reputational damage by using non-public personal information, courts often refuse to support the plaintiff’s claim in the absence of demonstrable economic loss (Dai, 2021). When the TLL was interpreted to include personal data protection, this requirement of economic harm in effect also became a precondition for proving disproportionate data processing risks.
The economic interest requirement has continued to persist as China’s private law gradually strengthens personal data protection. A landmark development was the enactment of the Civil Code, which formally recognized personal data interests as a legal entitlement. The Civil Code preserved the earlier logic by subordinating personal data interests to the right to personality (Wang and Xiong, 2023). To be sure, the relationship between the data protection rules in the Civil Code and those in the PIPL remains unclear. Nonetheless, judges handling data protection cases are mostly drawn from the traditional tort law judiciary, and they inevitably interpret the new rules under the PIPL through the lens of earlier economic interest–based reasoning (Balganesh and Zhang, 2021). Therefore, at the very least, economic interests remain an important subset of the risk assessment criteria in Chinese data protection enforcement.
Third focal point: risk from third parties
China’s consumer protection laws focused on yet another dimension of data protection concern prior to the enactment of the PIPL. In the 2010s, a widespread social phenomenon in China was that consumers would readily provide as much accurate personal information as possible to companies. This practice gave rise to a persistent issue that troubled Chinese lawmakers for years: the widespread problem of telecom fraud.
In 2013, Chinese lawmakers amended the Law on the Protection of Consumer Rights and Interests (LPCRI) in response to this concern. The amended law largely sought to prevent third parties from accessing consumers’ personal data without authorization (LPCRI, Article 44). On the one hand, it required online transaction platform providers to act as gatekeepers, ensuring that sellers and service providers properly protect consumers’ personal data (LPCRI, Article 29). On the other hand, it introduced principles governing personal data processing, obligating companies to maintain the confidentiality of consumer data, implement security measures, and take remedial actions in the event of a data breach.
The risk posed by third parties also became a key concern addressed by the PIPL. As noted by Professor Hanhua Zhou, a scholar who was deeply involved in drafting the PIPL, past telecom fraud cases in China served as a direct catalyst for reviving the long-stalled legislative efforts toward a comprehensive data protection law (Huang and Shi, 2021). One of the most widely known cases was the 2016 Yuyu Xu incident. A few days after Xu received notice of a government-issued financial aid grant, a telecom scammer—posing as the aid authority—defrauded her of 9900 yuan in tuition fees (Zhou, 2016). As a poor student unable to bear the financial loss, Xu tragically died from grief and distress. The case drew widespread public attention and was selected as one of “China’s Top Ten Rule of Law Promoting Cases” in 2017 (Cao, 2018). The case involving the largest fraud amount was the 2015 Guizhou telecom fraud case, in which scammers impersonating government officials gained the trust of a local officer and succeeded in having over $17 million in public funds transferred to fraudulent accounts (Wu, 2016). In both cases, the risk did not originate from the data processing activities of the entities that held the data, but rather from unauthorized third parties who exploited vulnerabilities in data protection systems to access and misuse personal information.
The divergences in risk-based approaches between the GDPR and the PIPL
The understanding of risk has shaped how legislators in both the EU and China implement the risk-based approach in their respective data protection laws. Although the PIPL draws heavily from the GDPR, Chinese legislators and regulators have coordinated their interpretation of risk around three focal points, ultimately forming a distinct conception of risk from that found in the GDPR. This part examines how these three focal points shape a risk-based approach under the PIPL that differs from the GDPR, and further explores how this divergence manifests in three key areas: the identification of personal information, the regulation of automated decision-making, and the design of enforcement authorities.
The divergences in risk-based approaches
Cooperation problem v. many-to-one asymmetry
The risk-based approach of the GDPR focuses on the cooperation problem among data subjects. In its rights talk, the GDPR adopts a collective model, which limits what individuals can do with their data protection rights (Schwartz and Peifer, 2017). This collective model is designed to address coordination problems among data subjects. Scholars have argued that the failure of data self-management largely stems from the fact that individuals often receive immediate benefits from disclosing their data, while the associated risks and harms are largely externalized to others (Solove, 2013; Fairfield and Engel, 2015; Viljoen, 2021). Moreover, even if a data subject chooses not to disclose personal data, the data disclosed by others can still be used to infer information about them (Solow-Niederman, 2022). As a result, data subjects often lack the incentive to withhold data, particularly when they do not bear the majority of costs and cannot control whether others disclose similar data (Hu, 2021). In this context, even though the GDPR adopts a collective model, it does so to coordinate individual behavior in advance, with the ultimate goal of protecting individual interests.
By contrast, the PIPL’s risk-based approach focuses on addressing the many-to-one asymmetry between individuals and companies. Influenced by the first focal point, the PIPL is more concerned with contexts in which a large number of entities simultaneously harm a single individual, rather than addressing coordination problems among multiple, unrelated individuals, as in the GDPR. As Professor Daniel Solove observes, in some cases, personal data harm resembles “death by a thousand cuts”: its severity lies in the accumulation of damage caused by different data processors, rather than a single act of harm that, on its own, may seem too minor to warrant serious attention (Solove, 2013).
Procedural interests v. economic losses
The rights-centered model leads the GDPR’s risk-based approach to place significant emphasis on procedural interests. This model focuses on the power asymmetry between individuals and companies, and grants individuals proactive procedural rights—such as the right to be forgotten—in order to level the bargaining power between the two sides as much as possible (Macenaite, 2017). In this context, risk assessment centers on the extent to which users can effectively exercise their rights. Under the GDPR, enforcement therefore often focuses on whether companies have taken adequate compliance measures to enable the exercise of user rights and to calibrate their legal obligations in light of potential risks (Quelle, 2018). As scholars have noted, under the GDPR’s risk-based approach, compliance measures are largely aimed at safeguarding the data subject’s ability to exercise their rights, and thus at protecting their autonomy (Gellert, 2018; Demetzou, 2019a, 2019b).
What makes the PIPL distinct from the GDPR is its emphasis on economic losses. Influenced by the second focal point, the PIPL evaluates risk based on the magnitude and likelihood of economic harm. Therefore, under this risk-based approach, when a company’s data processing activities are unlikely to result in actual economic losses, regulators may consider the company to be effectively in compliance with the law. If the company merely fails to implement sufficient measures to facilitate the exercise of individual rights, enforcers often respond by requiring the company to fulfill the request or initiate communication, rather than imposing harsher penalties. Moreover, linking risk to economic interests places the data protection interest on the same level as other types of interests in regulatory balancing. As a result, unlike the GDPR, which tends to prioritize data protection over the economic benefits of data use (Hoofnagle et al., 2019; Brkan, 2018), the PIPL may support the promotion of data utilization, so long as a cost-benefit analysis of risks and economic returns favors such use, even when it may entail some level of risk.
Collecting stage v. processing stage
Under the GDPR, the risk-based approach applies equally to both the data collection and data processing stages. Because the GDPR is rights-centered, and the right to data protection is recognized as a fundamental right, the GDPR’s risk-based framework treats data processing as risky per se (Demetzou, 2019a, 2019b). In other words, the act of collecting data itself is considered a risky practice, as it makes subsequent data processing possible. Therefore, under the GDPR, the level of risk does not vary significantly depending on whether data is at the collection or processing stage. Instead, the risk assessment focuses more on the purpose for which the data is collected and processed (Yeung and Bygrave, 2021).
The PIPL, by contrast, places greater emphasis on risks arising during the data processing stage. Influenced by the third focal point, the law tends to view data collection and processing as not inherently risky, since the primary risks are understood to stem from unauthorized third-party access that occurs after collection. The act of collecting data is not in itself considered a significant source of risk, especially when the data is processed for purposes that generate clear benefits. The processing stage, however, is more complex: once data is stored, it becomes vulnerable to unauthorized access and potential misuse, which introduces risk. The PIPL’s risk-based approach thus requires companies to implement safeguards during the processing phase to prevent such third-party access. Accordingly, companies’ compliance obligations under the PIPL focus more on preventing beneficial data uses from turning into harmful outcomes.
Rereading the PIPL: different understanding from the GDPR
This section synthesizes the analysis of the differences between the GDPR and the PIPL in their respective risk-based approaches, and explains how the PIPL adopts a distinct understanding in the identification of personal information, the regulation of automated decision-making, and the design of enforcement authorities. These three aspects reveal how the risks of data collection and processing are conceptualized under data protection law. By analyzing them, this sub-part shows that the three focal points have led Chinese lawmakers to adopt a distinct understanding of personal data risks, even though they enacted the PIPL, a law that is textually very similar to the GDPR. As a result, what appears under the PIPL is seemingly performative enforcement with the GDPR as a reference.
First, the definition of personal information determines the presumed scope of risks. As Paul Schwartz and Daniel Solove have noted, the way in which data protection law defines personally identifiable information shapes the scope of potential harm (Schwartz and Solove, 2011). A broader conception of harm implies a broader conception of risk.
Second, the regulatory attitude toward automated decision-making reflects lawmakers’ understanding of the source of risks of data collection and processing. Focusing on the procedural and technical requirements of automated decision-making may suggest that lawmakers regard the decision-making method itself as a source of risk. By contrast, an exclusive focus on the outcomes of such decision-making may indicate that lawmakers see no fundamental distinction in risk between automated and human decisions.
Third, the institutional design of enforcement authorities reveals how lawmakers understand the nature of data-related risks. When independent regulatory bodies are preferred, this suggests that personal data is seen as giving rise to distinct and independent risks.
Conversely, integrating data protection enforcement into consumer or general regulatory agencies implies that such risks are perceived as intertwined with or subordinate to broader consumer protection concerns.
The identification of personal information
Article 4 of the PIPL defines personal information as “all kinds of information, recorded by electronic or other means, related to identified or identifiable natural persons” (PIPL, Article 4). This definition is similar to that of the GDPR, which defines personal data as “any information relating to an identified or identifiable natural person.” Both laws use “identification” as the conceptual pivot for determining what qualifies as personal information or data, allowing a distinction between identified and identifiable individuals. However, identification is a vague concept.
One possible interpretation is that identification refers to the ability to distinguish a specific person from others, or to assign that person to a particular, labeled group (Ohm, 2010; Zuiderveen Borgesius, 2016). The GDPR largely adopts this view: information becomes personal data when it contributes to identifying someone’s identity or classification. In its Recitals, the GDPR’s lawmakers explain that the process of “singling out” is itself a likely means of identification (GDPR, Recital 26). According to the predecessor of the European Data Protection Board (2012), a person is identifiable “when, within a group of persons, he or she can be distinguished from other members of the group and consequently be treated differently.” This means that the GDPR does not require that the individual be identified by name; it focuses instead on whether the data can be used to distinguish that person from others.
Another, narrower interpretation of identification, adopted by the PIPL, is that the data must be capable of establishing a link to a specific individual. That is, as more personal information accumulates, the ambiguity surrounding the individual diminishes, eventually to the point where one can determine exactly who the person is. According to Article 73 of the PIPL, de-identification refers to “the process of personal information undergoing handling to ensure it is impossible to identify specific natural persons without the support of additional information” (PIPL, Article 73). A clear practical example is that Chinese regulators allow personal information processors to use and disclose various imprecise types of personal data, such as a user’s provincial location, which may not directly identify a specific individual (Feng, 2022). On the one hand, this reflects the PIPL’s risk-based approach, which focuses on many-to-one harms. In such cases, the ability to determine who the person is, as opposed to merely placing an individual into a certain group, makes the data subject more susceptible to concrete harm.
On the other hand, when the law emphasizes economic loss as a key threshold for harm, an overly expansive definition of personal information may undermine the optimal balance between data protection and data utilization. For Chinese lawmakers, the line is drawn at the point where the risk arising from data processing targets an identifiable individual, rather than an indeterminate or abstract group. When this threshold is crossed, the risk is understood to fall within the regulatory scope of the PIPL.
Regulation of automated decision-making
The GDPR regulates automated decision-making strictly and in an integrated manner. Article 4(4) of the GDPR defines profiling, the automated processing that typically underlies such decisions, as “the use of personal data to evaluate certain personal aspects relating to a natural person” (GDPR, Article 4(4)). Data controllers must inform data subjects of the existence of automated decision-making when collecting personal data (GDPR, Article 13(2)(f)). Moreover, individuals have a right not to be subject to “a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her” (GDPR, Article 22(1)). Some scholars argue that this right suggests that individuals can refuse the processing of personal data where it is processed for direct marketing purposes (Wachter et al., 2017; Tosoni, 2021). This may reflect the GDPR’s emphasis on procedural interests, given that this right can serve as a safeguard for due process (Bayamlıoğlu, 2022). Taken together, these limitations suggest that the GDPR regards automated decision-making as inherently risky, regardless of whether it occurs at the data collection or processing stage.
The PIPL, by contrast, offers only limited and moderate protections for individuals in the context of automated decision-making. On the one hand, the right to request an explanation of such decisions and the right to refuse them can only be exercised when the decision has a “significant impact” on the individual (PIPL, Article 24). This lax regulatory attitude toward automated decision-making extends to the algorithmic transparency rules issued by China’s primary regulator, the Cyberspace Administration of China (CAC, 2021). The CAC does not mandate the disclosure of algorithmic details, suggesting that it does not view automated decision-making as inherently more risky than human decision-making. Instead, the CAC merely “encourages” companies to enhance transparency in order to “prevent and reduce disputes and conflicts [with consumers]” (CAC, 2021). This reflects the PIPL’s greater emphasis on the processing stage of data. For Chinese lawmakers, automated decision-making is not risky per se; they do not presume that automated decisions are inherently more dangerous than human decisions. Restrictions are necessary only when such decisions are likely to produce harmful outcomes.
On the other hand, the PIPL imposes additional obligations on companies that use automated decision-making, focusing primarily on preventing economic harm. It requires companies to ensure “the transparency of decision-making and the fairness and impartiality of outcomes,” and to avoid applying differential treatment, such as pricing discrimination, based on consumer profiling (PIPL, Article 24). This requirement is result-oriented. For example, in a recent case on pricing discrimination, the Guangzhou Internet Court assessed whether the company had violated this obligation by examining whether the plaintiff had paid more than other consumers for the same service (Guangzhou Internet Court, 2023). The plaintiff alleged that he spent more money to receive the same in-game character improvements that other users obtained. However, the court dismissed his claim on the ground that he failed to prove that other consumers paid less (Guangzhou Internet Court, 2023). Following the PIPL’s risk-based approach, companies are expected to ensure that automated decisions do not subject consumers to disproportionate financial losses in commercial transactions. In fact, the requirements of transparency, fairness, and impartiality are general principles that apply to all data processing under the PIPL (PIPL, Articles 5 and 7). This restatement suggests that, for Chinese lawmakers, automated decision-making is merely one type of data processing practice that may result in economic harm, rather than a uniquely high-risk activity.
Design of enforcement authorities
One of the key innovations of the GDPR in the field of data protection is the designation of dedicated supervisory authorities as enforcers. The GDPR requires its member states to establish one or more supervisory authorities that act with complete independence in monitoring the enforcement of the law (GDPR, Articles 51 and 52). This model of assigning a single independent body to supervise legal compliance extends to the corporate level as well. Under the GDPR, qualifying companies are required to appoint independent Data Protection Officers to monitor internal practices and ensure compliance with the Regulation (GDPR, Articles 37 and 39).
In China, data protection is supervised by multiple regulatory authorities. The PIPL designates the CAC as the lead agency responsible for the overall coordination of personal data protection supervision and enforcement. This means that although the CAC is authorized to issue legal interpretations and data protection standards, as well as to conduct investigations and enforcement actions, it is not the only authority vested with such powers. In practice, at the central level, several other agencies, including the State Administration for Market Regulation (SAMR), the Ministry of Industry and Information Technology (MIIT), the Ministry of Public Security, and the National Financial Regulatory Administration, also hold various enforcement powers in the field of data protection.
On the other hand, none of these authorities, including the CAC, is a dedicated data protection agency. Rather, each is responsible for data protection only within the scope of its broader regulatory mandate. In other words, data protection is just one of the many regulatory functions they perform. The CAC, for example, also oversees online content moderation, the licensing of online businesses, cybersecurity, and data security (Horsley and Creemers, 2023).
The reason for this divergence may lie in the PIPL’s risk-based approach, which places greater emphasis on economic losses and many-to-one asymmetries. On the one hand, by associating data processing risks primarily with economic harm, the PIPL suggests that such risks are not fundamentally different from those encountered in traditional regulatory domains. As a result, Chinese lawmakers have tended to treat data protection as an addition within the functions of existing regulators, reducing the need to establish a dedicated and independent data protection authority.
On the other hand, the emphasis on many-to-one asymmetries may lead Chinese lawmakers to focus more on data processing practices that span multiple sectors. For example, the risks arising from a single data processing activity may implicate issues in financial regulation, cybersecurity, and consumer protection simultaneously. In such contexts, effective enforcement and supervision may require the cooperation of multiple regulatory agencies. Therefore, in China, what is needed may be a coordinating body like the CAC, rather than an independent data protection authority. One indication is that the CAC has been relatively inactive in day-to-day enforcement. According to recent research, the MIIT issued 172 out of a total of 184 publicly available penalties for violations of data protection law (Shao et al., 2025). The CAC appears to focus on a few high-profile cases, such as the unprecedented penalty imposed on Didi, to signal to companies and other regulators the circumstances under which serious sanctions may be applied for violations of the PIPL (Shao et al., 2025).
Conclusion
The Brussels Effect of the GDPR has also influenced the development of China’s data protection law. At the textual level, the PIPL shares a great deal of similarity with the GDPR. However, beneath these textual parallels lie substantive differences. This article explores the internal forces that have shaped these substantive divergences. Prior to the enactment of the PIPL, domestic demands had already driven the emergence of a preliminary data protection framework in China. This regime offered Chinese lawmakers three focal points for understanding data processing risks: large-scale participation, economic interests, and threats from third parties. This distinct risk perception underpins the PIPL’s risk-based approach and sets it apart from that of the GDPR. Therefore, even though the Brussels Effect has encouraged China to adopt a GDPR-inspired data protection law, the two regimes have diverged significantly in their approaches to enforcement.
Acknowledgements
This research is supported by the Major Project of National Social Science Fund of China (23&ZD154).
Author contributions
Xiaodong Ding & Hao Huang: Conceptualization, Methodology, Writing—Original draft. Yeliang Wang: Writing—Reviewing and Editing. Zhengyu Shi: Writing—Cases analysis.
Data availability
Data sharing is not applicable to this research as no data were generated or analyzed.
Competing interests
The authors declare no competing interests.
Ethics approval
This article does not contain any studies with human participants performed by any of the authors.
Informed consent
This article does not contain any studies with human participants performed by any of the authors.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Alford, WP; Liebman, BL. Clean air, clean processes? The struggle over air pollution law in the People’s Republic of China. Hastings Law J; 2001; 52, 703.
Angel, MP; Calo, R. Distinguishing privacy law: a critique of privacy as social taxonomy. Columbia Law Rev; 2024; 124, 507.
Article 29 Working Party (2012) Opinion 08/2012 Providing Further Input on the Data Protection Reform Discussions. WP 199, 5 Oct 2012
Balganesh, S; Zhang, T. Legal internalism in modern histories of copyright. Harv Law Rev; 2021; 134, 1066, at 1093–1119.
Bayamlıoğlu, E. The right to contest automated decisions under the general data protection regulation: beyond the so-called “right to explanation”. Regul Gov; 2022; 16, 1058. [DOI: https://dx.doi.org/10.1111/rego.12391]
Beijing Internet Court (2021) Guo et al. v. Technology Company et al., Case No. (2021) Jing 0491 Minchu No. 47643
Beijing Internet Court (2024) Typical Cases involving Personal Information and Data by the Beijing Internet Court. The Paper. https://m.thepaper.cn/baijiahao_29205833. Accessed 30 Oct 2025
Belli, L; Doneda, D. Data protection in the BRICS countries: legal interoperability through innovative practices and convergence. Int Data Priv Law; 2023; 13, 1. [DOI: https://dx.doi.org/10.1093/idpl/ipac019]
Bradford, A et al. Dynamic diffusion. J Int Econ Law; 2024; 27, 538. [DOI: https://dx.doi.org/10.1093/jiel/jgae034]
Bradford A (2020) The Brussels Effect: how the European Union rules the world. Oxford University Press, Oxford
Brkan, M. The concept of essence of fundamental rights in the EU legal order: peeling the onion to its core. Eur Const Law Rev; 2018; 14, 332.
Burri, M. Cross-border data flows and privacy in global trade law: has trade trumped data protection?. Oxf Rev Econ Policy; 2023; 39, pp. 83-89. [DOI: https://dx.doi.org/10.1093/oxrep/grac042]
Canaves S (2010) Stanley Lubman: will an expanded right of privacy deter China’s internet vigilantes? Wall Street J. https://www.wsj.com/articles/BL-CJB-6934. Accessed 24 Mar 2025
Cao Y (2018) Fraudster blamed for student’s death makes top court’s 2017 list of influential cases. China Daily. https://www.chinadaily.com.cn/a/201802/01/WS5a72c106a3106e7dcc13a3a3.html. Accessed 24 Mar 2025
Case C-131/12, Google Spain SL and Google Inc. v. Agencia Española de Protección de Datos (AEPD) and Mario Costeja González, ECLI:EU:C:2014:317. Judgment of 13 May 2014, para 53
Case C-311/18, Data Protection Commissioner v. Facebook Ireland Ltd and Maximillian Schrems (Schrems II), ECLI:EU:C:2020:559. Judgment of 16 July 2020
Chander, A et al. Catalyzing privacy law. Minn Law Rev; 2021; 105, pp. 1733-1802.
CAC (2021) Provisions on the Administration of Algorithmic Recommendations in Internet Information Services. Official Gazette of the State Council of the PRC. https://www.gov.cn/zhengce/2022-11/26/content_5728941.htm. Accessed 7 Jul 2025
Dai, X. Privacy, reputation, and control: public figure privacy law in contemporary China. Peking Univ Law J; 2021; 9, 143, at 156–160. [DOI: https://dx.doi.org/10.1080/20517483.2021.2020497]
Demetzou, K. Data protection impact assessment: a tool for accountability and the unclarified concept of “high risk” in the General Data Protection Regulation. Comput Law Secur Rev; 2019; 35, 105342. [DOI: https://dx.doi.org/10.1016/j.clsr.2019.105342]
Demetzou, K. Data protection impact assessment: a tool for accountability and the unclarified concept of “high risk” in the General Data Protection Regulation. Eur J Risk Regul; 2019; 9, 502, at 503–504.
Ding, X. On the new right characteristics of digital human rights. Sci Law; 2022; 6, 52.
Ding, X. The jurisprudential relationship between the right to privacy and personal information protection: with reference to the application of the civil code and the personal information protection law. Stud Law Bus; 2023; 40.
Ding, X; Zhong, DY. Rethinking China’s social credit system: a long road to establishing trust in Chinese society. J Contemp China; 2021; 30, 630. [DOI: https://dx.doi.org/10.1080/10670564.2020.1852738]
Ding I (2022a) The performative state: public scrutiny and environmental governance in China. Cornell University Press, Ithaca
Dove, ES; Chen, J. What does it mean for a data subject to make their personal data ‘manifestly public’? an analysis of GDPR Article 9(2)(e). Int Data Priv Law; 2021; 11, pp. 107-124. [DOI: https://dx.doi.org/10.1093/idpl/ipab005]
Downey T (2010) China’s Cyberposse. New York Times. https://www.nytimes.com/2010/03/07/magazine/07Human-t.html. Accessed 24 Mar 2025
Fairfield, JAT; Engel, C. Privacy as a public good. Duke Law J; 2015; 65, pp. 385-457.
Feng C (2022) Chinese Social Media to Display User Locations Based on IP Address, Including Platforms from ByteDance and Zhihu. South China Morning Post. https://www.scmp.com/tech/big-tech/article/3174487/chinese-social-media-display-user-locations-based-ip-address. Accessed 6 Jul 2025
Gellert, R. Understanding the notion of risk in the general data protection regulation. Comput Law Secur Rev; 2018; 34, 279, at 285.
Guangzhou Internet Court (2022) Mai v. Beijing Fa Technology Co., Ltd. & Beijing Lv Information Technology Co., Ltd. Case No. (2022) Yue 0192 min Chu No. 20966
Guangzhou Internet Court (2023) Zhang v. Hangzhou NetEase Thunderfire Technology Co., Ltd. et al. Case No. (2023) Yue 0192 min Chu No. 256
Hadfield, GK; Weingast, BR. What is law? A coordination model of the characteristics of legal order. J Leg Anal; 2012; 4, 471. [DOI: https://dx.doi.org/10.1093/jla/las008]
Hangzhou Internet Court (2022) Du v. Internet Company, Case No. (2022) Zhe 0192 Minchu No. 4330
Hoofnagle, CJ et al. The European Union general data protection regulation: what it is and what it means. Inf Commun Technol Law; 2019; 28, 65. [DOI: https://dx.doi.org/10.1080/13600834.2019.1573501]
Horsley J, Creemers R (2023) The cyberspace administration of China: a portrait. In: Creemers R, Papagianneas S, Knight A (eds) The emergence of China’s smart state. Rowman & Littlefield Publishers, pp 10–16
Horsley JP (2021) How will China’s privacy law apply to the Chinese state? New America. https://www.newamerica.org/cybersecurity-initiative/digichina/blog/how-will-chinas-privacy-law-apply-to-the-chinese-state/. Accessed 24 Mar 2025
Hu, Y. Individuals as gatekeepers against data misuse. Mich Tech Law Rev; 2021; 28, 115, at 146-149. [DOI: https://dx.doi.org/10.36645/mtlr.28.1.individuals]
Huang Y, Shi M (2021) Top Scholar Zhou Hanhua Illuminates 15+ Years of History Behind China’s Personal Information Protection Law. DigiChina. https://digichina.stanford.edu/work/top-scholar-zhou-hanhua-illuminates-15-years-of-history-behind-chinas-personal-information-protection-law/. Accessed 24 Mar 2025
Jia, M. Authoritarian privacy. Univ Chic Law Rev; 2024; 91, 733.
Jones, ML; Kaminski, ME. An American’s guide to the GDPR. Denv Law Rev; 2020; 98, 93.
Kaminski, ME. Binary governance: lessons from the GDPR’s approach to algorithmic accountability. S Cal Law Rev; 2019; 92, pp. 1529-1559.
Kuner, C et al. Risk management in data protection. Int Data Priv Law; 2015; 5, 95. [DOI: https://dx.doi.org/10.1093/idpl/ipv005]
LaCasse A (2024) Report Examines State of African Nations’ Data Protection laws, Implementation Efforts. IAPP. https://iapp.org/news/a/evaluating-african-nations-comprehensive-privacy-laws-and-their-implementation/. Accessed 24 Mar 2025
Lancieri, F. Narrowing data protection's enforcement gap. Me Law Rev; 2022; 74, 15.
Li, W; Chen, J. From Brussels Effect to gravity assists: understanding the evolution of the GDPR-inspired personal information protection law in China. Comput Law Secur Rev; 2024; 54, [DOI: https://dx.doi.org/10.1016/j.clsr.2024.105994] 105994.
Lynskey, O. Deconstructing data protection: the ‘added-value’ of a right to data protection in the EU legal order. Int Comp Law Q; 2014; 63, pp. 569-583. [DOI: https://dx.doi.org/10.1017/S0020589314000244]
Lynskey O (2015) The foundations of EU data protection law. Oxford University Press, Oxford
Macenaite, M. The “riskification” of European data protection law through a two-fold shift. Eur J Risk Regul; 2017; 8, 506. [DOI: https://dx.doi.org/10.1017/err.2017.40]
Manancourt V (2022) Europe’s State of mass surveillance. Politico. https://www.politico.eu/article/data-retention-europe-mass-surveillance/. Accessed 24 Mar 2025
McAdams, RH. Focal point theory of expressive law. Va Law Rev; 2000; 86, 1649. [DOI: https://dx.doi.org/10.2307/1073827]
McAdams, RH. Beyond the prisoners' dilemma: coordination, game theory, and law. S Cal Law Rev; 2009; 82, 209, at 243-244.
Mehta, J et al. The nature of salience: an experimental investigation of pure coordination games. Am Econ Rev; 1994; 84, 658.
New York Times (2008) Chinese court fines web user in “cyber-violence” case. https://www.nytimes.com/2008/12/19/world/asia/19iht-china.1.18816892.html. Accessed 24 Mar 2025
Nissenbaum, H et al. The great regulatory dodge. Harv J Law Technol; 2024; 37, pp. 1231-1239.
NPC (2009) Amendment (VII) to the Criminal Law of the People’s Republic of China. Official Gazette of the Standing Committee of the National People’s Congress of the PRC. http://www.npc.gov.cn/zgrdw/npc/lfzt/rlys/2009-06/09/content_1882888.htm. Accessed 30 Oct 2025
NPC (2015) Amendment (IX) to the Criminal Law of the People’s Republic of China. Official Gazette of the Standing Committee of the National People’s Congress of the PRC. http://www.npc.gov.cn/zgrdw/npc/xinwen/2015-08/31/content_1945587.htm. Accessed 30 Oct 2025
NPC Constitution and Law Committee (2021) Report on the deliberation results of the draft personal information protection law of the People’s Republic of China. NPC. http://www.npc.gov.cn/npc//////c2/c30834/202108/t20210820_313090.html. Accessed 24 Mar 2025
NPC (2021) The Draft of Personal Information Protection Law and 14 Other Bills to Be Submitted for Review at This Session of the Standing Committee. NPC. http://www.npc.gov.cn/c2/kgfb/202108/t20210813_312832.html. Accessed 31 Oct 2025
Ohm, P. Broken promises of privacy: responding to the surprising failure of anonymization. UCLA Law Rev; 2010; 57, 1701.
Ong, R. Online vigilante justice Chinese style and privacy in China. Inf Commun Technol Law; 2012; 21, 127. [DOI: https://dx.doi.org/10.1080/13600834.2012.678653]
Pathak, G. Manifestly made public: clearview and GDPR. Eur Data Prot Law Rev; 2022; 8, pp. 419-420. [DOI: https://dx.doi.org/10.21552/edpl/2022/3/13]
Purtova, N. The law of everything: broad concept of personal data and future of EU data protection law. Law Innov Technol; 2018; 10, 1. [DOI: https://dx.doi.org/10.1080/17579961.2018.1452176]
Quelle, C. Enhancing compliance under the general data protection regulation: the risky upshot of the accountability- and risk-based approach. Eur J Risk Regul; 2018; 9, 502, at 504-505. [DOI: https://dx.doi.org/10.1017/err.2018.47]
Satariano A (2020) Europe’s privacy law hasn’t shown its teeth, frustrating advocates. New York Times. https://www.nytimes.com/2020/04/27/technology/GDPR-privacy-law-europe.html. Accessed 24 Mar 2025
Schelling TC (1960) The strategy of conflict. Harvard University Press, Cambridge
Schwartz, PM. Preemption and privacy. Yale Law J; 2009; 118, pp. 902-931.
Schwartz, PM. Global data privacy: the EU Way. NYU Law Rev; 2019; 94, 77.
Schwartz, PM. The data privacy law of Brexit: theories of preference change. Theor Inq Law; 2021; 22, 111. [DOI: https://dx.doi.org/10.1515/til-2021-0019]
Schwartz, PM; Solove, DJ. The PII problem: privacy and a new concept of personally identifiable information. NYU Law Rev; 2011; 86, pp. 1814-1894.
Schwartz, PM; Peifer, KN. Transatlantic data privacy law. Georget Law J; 2017; 106, 115.
Scott, J. From Brussels with love: the transatlantic travels of European law and the chemistry of regulatory attraction. Am J Comp Law; 2009; 57, 897. [DOI: https://dx.doi.org/10.5131/ajcl.2008.0029]
Shen, W. Online privacy and online speech: the problem of the human flesh search engine. Univ Pa Asian Law Rev; 2016; 12, 268, at 276-278.
Shao, G et al. Assessing the implementation of China’s personal information protection law: a two-year review. Int Data Priv Law; 2025; 15, 18. [DOI: https://dx.doi.org/10.1093/idpl/ipae022]
Smith, HE. The language of property: form, context, and audience. Stanf Law Rev; 2003; 55, 1105, at 1129.
Solove, DJ. Introduction: privacy self-management and the consent dilemma. Harv Law Rev; 2013; 126, 1880.
Solow-Niederman, A. Information privacy and the inference economy. NW Univ Law Rev; 2022; 117, 357.
Tosoni, L. The right to object to automated individual decisions: resolving the ambiguity of article 22(1) of the GDPR. Int Data Priv Law; 2021; 11, 145. [DOI: https://dx.doi.org/10.1093/idpl/ipaa024]
van Dijk, N et al. A risk to a right? Beyond data protection risk assessments. Comput Law Secur Rev; 2016; 32, 286. [DOI: https://dx.doi.org/10.1016/j.clsr.2015.12.017]
Viljoen, S. A relational theory of data governance. Yale Law J; 2021; 131, 370.
Wachter, S et al. Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. Int Data Priv Law; 2017; 7, 76. [DOI: https://dx.doi.org/10.1093/idpl/ipx005]
Waldman, AE. Privacy, practice, and performance. Calif Law Rev; 2022; 110, 1221.
Wang, L. On the application relationship between the personal information protection law and the civil code. Huxiang Leg Rev; 2021; 1, 1.
Wang, L; Xiong, B. Personality rights in China’s new civil code: a response to increasing awareness of rights in an era of evolving technology. Mod China; 2020; 47, pp. 703-721. [DOI: https://dx.doi.org/10.1177/0097700420977826]
Wang, X; Peng, C. Constitutional foundation of personal data protection law in China. Tsinghua Univ Law J; 2021; 3, 6.
Wang L, Xiong B (2023) Personality rights in China’s new civil code: a response to increasing awareness of rights in an era of evolving technology. In: Jiang H, Sirena P (eds) The making of the Chinese civil code: promises and persistent problems. Cambridge University Press, Cambridge, pp 59–67
Wu W (2016) China’s 117-Million-Yuan Phone Scam: involvement of Taiwanese suspects may fuel Beijing’s row with Taipei. South China Morning Post. https://www.scmp.com/news/china/article/1937963/chinas-117-million-yuan-phone-scam-involvement-taiwanese-suspects-likely. Accessed 24 Mar 2025
Xiong, B et al. Unpacking data: China’s ‘bundle of rights’ approach to the commercialization of data. Int Data Priv Law; 2023; 13, 93. [DOI: https://dx.doi.org/10.1093/idpl/ipad003]
Yeung, K; Bygrave, LA. Demystifying the modernized European data protection regime: cross-disciplinary insights from legal and regulatory governance scholarship. Regul Gov; 2021; 16, 137. [DOI: https://dx.doi.org/10.1111/rego.12401]
You, C. Half a loaf is better than none: the new data protection regime for China’s platform economy. Comput Law Secur Rev; 2022; 45, [DOI: https://dx.doi.org/10.1016/j.clsr.2022.105668] 105668.
Zang T, Li S (2015) Ninth amendment to the criminal law of the People’s Republic of China: article interpretation, legislative reasons and related provisions. Law Press, Beijing, pp 125–127
Zhang, AH. The promise and perils of China’s regulation of artificial intelligence. Columbia J Transnatl Law; 2025; 63, 1.
Zhang, M. Tort liabilities and torts law: the new frontier of Chinese legal horizon. Rich J Glob Law Bus; 2011; 10, 415, at 457.
Zhang, T; Ginsburg, T. China's turn toward law. Va J Int Law; 2019; 59, 306.
Zhang, X. Constitutional justification of the right to personal information—based on the reflections on the differential protection theory and the dominance theory. Glob Law Rev; 2022; 1, 53.
Zhang, X; Ding, X. Public focusing events as catalysts: an empirical study of “pressure-induced legislations” in China. J Contemp China; 2017; 26, 664. [DOI: https://dx.doi.org/10.1080/10670564.2017.1305484]
Zhang A (2024) High wire: how China regulates big tech and governs its economy. Oxford University Press, Oxford
Zhou D (2016) Chinese schoolgirl, 18, dies of heart attack after falling victim to university tuition fee scam. South China Morning Post. https://www.scmp.com/news/china/society/article/2008277/chinese-schoolgirl-18-dies-heart-attack-after-falling-victim. Accessed 24 Mar 2025
Zuiderveen Borgesius, FJ. Singling out people without knowing their names: behavioural targeting, pseudonymous data, and the new data protection regulation. Comput Law Secur Rev; 2016; 32, 256. [DOI: https://dx.doi.org/10.1016/j.clsr.2015.12.013]
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License").