
© 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

Agentic AI shifts stacks from request‐response to plan‐execute. Systems no longer just answer; they act—planning tasks, calling tools, keeping memory, and changing external state. That shift moves privacy from policy docs into the runtime. This opinion piece argues that we do not need a new privacy theory for agents; we need enforceable, observable controls that render existing rights as product behavior. Anchoring on GDPR—with portable touchpoints to CPRA, LGPD, and PDPA—we propose a developer‐first toolkit: optional, bounded, user‐visible memory; a purpose‐aware egress gate that enforces minimization and transfer rules; proportional safeguards that scale with stakes; and traces that tell a coherent story across components and suppliers. We show how the EU AI Act's risk management, logging, and oversight can scaffold these controls and enable evidence reuse. The result is an agentic runtime that keeps people in control and teams audit‐ready by design.
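The "purpose‐aware egress gate" the abstract names can be illustrated with a minimal sketch. The names below (`EgressGate`, `ALLOWED_FIELDS`, `release`) are illustrative assumptions, not an API from the article: the idea is simply that every outbound payload is checked against a declared purpose, fields not needed for that purpose are stripped (minimization), and each decision leaves a trace.

```python
# Hypothetical sketch of a purpose-aware egress gate: outbound data is
# minimized to a declared purpose before leaving the agent runtime, and
# every release is logged for audit. All identifiers are illustrative.
from dataclasses import dataclass, field

# Declared purpose -> fields permitted to leave the runtime for it.
ALLOWED_FIELDS = {
    "book_travel": {"name", "travel_dates", "destination"},
    "send_newsletter": {"email"},
}

@dataclass
class EgressGate:
    audit_log: list = field(default_factory=list)

    def release(self, purpose: str, payload: dict) -> dict:
        """Minimize payload to the declared purpose and record a trace."""
        allowed = ALLOWED_FIELDS.get(purpose)
        if allowed is None:
            raise PermissionError(f"undeclared purpose: {purpose}")
        minimized = {k: v for k, v in payload.items() if k in allowed}
        self.audit_log.append({
            "purpose": purpose,
            "sent": sorted(minimized),
            "dropped": sorted(set(payload) - allowed),
        })
        return minimized

gate = EgressGate()
out = gate.release(
    "book_travel",
    {"name": "Ada", "travel_dates": "2025-12-01", "destination": "Oslo",
     "passport_number": "X123"},  # not needed for booking -> stripped
)
```

In this sketch the audit log is the "coherent story" artifact: each entry records what left, for which purpose, and what was withheld, which is the kind of evidence the abstract suggests can be reused for oversight.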

Details

Title
From rights to runtime: Privacy engineering for agentic AI
Author
Navaie, Keivan 1

 School of Computing and Communications, Lancaster University, UK 
Section
COLUMN
Publication year
2025
Publication date
Dec 1, 2025
Publisher
John Wiley & Sons, Inc.
ISSN
0738-4602
e-ISSN
2371-9621
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3265115457