Introduction
How does the humanitarian sector deal with the “future shock” of generative AI (GenAI), the “dizzying disorientation brought on by the premature arrival of the future” (Leslie & Perini 2024)? While discussions of the risks and opportunities of AI have been ongoing for many decades, the rapid and unplanned adoption of GenAI on a global level, including within the humanitarian sector, suggests that this technology has a transformative but unpredictable impact on the sector. After featuring in the background of discussions on the digital transformation of aid for some years (Pizzi et al. 2020; Coppi et al. 2021; Madianou 2021; Beduschi 2022), the adoption and adaptation of AI/GenAI have become a main focus of these conversations (Spencer 2024; McElhinney & Spencer 2024; Spencer & Masboungi 2025), reaching “peak hype” in 2024. In this commentary, I focus on AI as a buzzword in “aid talk” (Borchgrevink & Sandvik 2022). “Aid talk” refers to discussions within organizational spaces and among aid practitioners and commentators about the means and ends of humanitarian assistance, challenges encountered, and strategies to improve aid delivery.1 Drawing on my previous work on the digital transformation of aid and AI, the aim of this commentary is to support ongoing discussions on AI in the aid sector. I aspire for this contribution to be relevant for practitioners, policymakers, and academics alike.2 Given how fast these conversations move, the contribution can hopefully also be a baseline or reference point for later scholarly inquiries. This is a commentary; hence, the tone is personal. The intervention calibrates and explores three issues related to the framing of AI in aid talk. The first issue concerns the framing of AI as a humanitarian problem. The second issue relates to the reframing of AI as a topic of humanitarian governance in policy. The third issue deals with the role and place of AI in the ongoing deframing of humanitarian knowledge practices. I begin by outlining the use of narrative and frame as key concepts in the discussion.
Setting the stage: narratives and frames
Narratives and frames shape how problems come into being and are defined. This includes conceptualizing policy contents, implementation, and goals (De Guevara et al. 2018). As I use the concept in this commentary, frames refer to constructing and organizing meaning (Goffman 1974) through images, metaphors, messages, etc. Reframing is the process of changing the construction and organization of meaning. Deframing is the act of dissolving existing frames for producing, recognizing, and utilizing knowledge. The master narrative of humanitarianism is that it alleviates the suffering of distant strangers. This narrative is framed through humanitarian imperatives and principles: humanitarianism is about providing aid according to need and doing no harm, guided by principles of humanity, universality, and impartiality. To calibrate emerging AI talk in aid, thinking in terms of frames helps structure perceptions of digital innovations, focus attention, and make connections between crisis contexts, aid practices, and technology.
A particular aspect of AI frames is the proliferation and importance of metaphors about what AI is (such as a machine with a human brain or maybe an oracle), what AI does (like an idea calculator), and how AI can be controlled (by winning the AI race) (Wallenborn 2022). Metaphors transmit meaning: they describe one thing in terms of another, helping to describe novel technologies by referring to something familiar (Lakoff and Johnson 1980; Wyatt 2021). Metaphors shape understandings of what something is and can be used for—now and in the future (Bones et al 2021). Successful metaphors contribute to simple and convincing narratives, but metaphors can also amplify misunderstandings and expectations. Metaphors play a role in power struggles, for example, by buttressing arguments about technological determinism, where technology, due to its nature, is portrayed as difficult or impossible to control (Winkel 2024). As noted by critics, there is a tension between the proliferation of future-oriented frames focusing our attention on the ultimate goals of the field of AI and the frames used for grappling with the “already ubiquitous effects of AI use” (Kajava and Sawhney 2023). This means that we need to pay attention to the role of metaphors in humanitarian AI discussions, including how the metaphors used to frame interventions in the aid field merge with those circulating in technology discourses.
Humanitarian AI problems
Broadly described, humanitarian conversations on AI run like this: AI has the potential to transform the management and delivery of aid, improve efficiencies, and save costs. More and better-analyzed data are expected to enable actors to do more things. At the same time, concerns about risk and misuse proliferate, making up what I will call “the humanitarian AI problem.” I suggest that the framing of the humanitarian AI problem in aid talk reflects but does not correspond directly to the general litany of societal problems with AI, such as discrimination, privacy, bias, lack of transparency, or sustainability. Similarly, this framing is only indirectly connected to ongoing discussions about the potentially catastrophic humanitarian impact of AI, such as lethal autonomous weapons. The humanitarian AI problem encompasses dilemmas arising from the interplay of policy, programming, logistics, protection, and management with the challenges of digital transformation generally and AI specifically (see OCHA 2024). For the aid sector, AI is a specific but not exceptional problem that needs to be considered a matter of professionalism. Currently, what the humanitarian AI problem will look like—even in the near future—is uncertain. However, we know that problems will arise, and it is possible to provide some approximations. For example, the rise of “AI creep” will likely result in mission creep for aid actors or data function creep beyond humanitarian mandates.3 A new configuration of the digital shadow—I am tentatively calling this “humanitarian AI shadows”—can produce adverse effects through a combination of the marginalization and invisibilization of individuals or groups (due to their geography, lifestyle, gender, or less datafied lives4) and strategy and capacity gaps hampering change and adjustment within humanitarian organizations. The cuts and cancellations in the aid sector starting in Spring 2025 have reinforced both aspects. Another important dimension of the humanitarian AI problem is how AI shapes efforts to make the sector more accountable: the prospects of localization—a key ambition of the sector since the World Humanitarian Summit in 2016—seem uncertain given AI systems’ tendency towards the centralization and flattening of knowledge, decision-making processes, and profit-making (Raftree 2024). In sum, AI brings uncertain opportunities—and uncertain challenges. In the following, I reflect on how humanitarians grapple with AI in aid, i.e., how they frame AI, how AI is reframed from technology hype to becoming an issue of humanitarian governance, and finally, the role of AI in the ongoing deframing of humanitarian knowledge production.
Framing perceptions of AI in aid
How do humanitarians understand the role and place of AI?
The first framing concerns how we think about AI as a so-called new technology. Much of what arises at the start of “AI in aid” conversations plays out according to a familiar script, yet I would argue that the overall framing is one of expected unpredictability, where AI is both an all-knowing co-pilot and an uncontrollable disruptor. There is a strong tendency towards hype and solutionism. The crowd is recognizable: There are tech optimists, tech evangelists, and luddites who want nothing to do with “it.” The evangelists believe that technology can rid the aid sector of its structural problems. According to their vision of technology, digital tools come without politics and skirt justice and distribution agendas. At the other end of the spectrum are those who, by conviction or refusal to adapt and upskill, reject new technology. The outliers share a deterministic attitude to the technology in question: it will work as intended, whether for good or bad. Most commentators and practitioners appear to find themselves somewhere in the middle, ranging from skeptics to those who are positive about AI and the silent majority who simply adopt any technology proposed to them for work. Compared to previous humanitarian technology hypes, the conversations about AI are not occurring in a vacuum but in the context of the global backlash against Big Tech and “AI villains”—it seems the group of skeptics is in the majority. Moreover, in AI aid talk there is much less of the staple techno-optimistic comment that “technology is neutral, it is people who use it for good or bad.” We do not know where the global technology race is going, who will be leading it, what it can contribute, what it will ultimately cost, and what all of this will mean for humanitarian action.
Concerning the short but cyclical history of humanitarian technology, the issue of “solutions in need of problems” is well known: from apps to drones to innovations in blockchain, entrepreneurs and innovators with little familiarity with the aid sector have frequently looked to the sector to trial, market and sell their products (Jacobsen and Sandvik 2018). Intrinsically linked to critical engagement with solutionism is a critical engagement with problem framing. What is a humanitarian problem that AI can contribute to solving? On one level, this is a question that speaks to the sector’s wide-ranging efforts to professionalize aid work, become fit for purpose, and embrace data-driven decision-making and monitoring and evaluation. Yet, with the use of AI to support the needs assessments that form the basis for identifying, calculating, and formulating humanitarian problems, several new issues arise. While it remains important to be critical of the sector’s tendencies towards “neophilia” (Scott-Smith 2016), it is imperative to be alert to the possibility that something is new concerning how we demarcate humanitarian space: in terms of everyday needs assessments, for example, have we already arrived at a point where AI determines that AI is necessary to identify humanitarian problems? My point here is that there seems to be widespread recognition that we do not know.
Additionally, formulating a clear AI problem definition requires humanitarians to have a well-informed overview of traditional digital transformation stakeholders and approaches and a realistic appraisal of the available alternatives and choices to be made. This means ascertaining the role and place of lessons learned as we try to assess what AI can bring to the table. In a sector with inherently fleeting institutional memory, assessing what digital tools, including AI, contribute requires some sort of shared genealogy of which projects and stakeholders participated, when, and with what outcome (Coppi 2024). Yet, even if we can adequately calibrate how aid actors understand technology, incorporate the relevant lessons learned in our thinking, and agree on the appropriate actual and potential uses of AI, the intended foresight nature of such conversations is being undermined by the fact that the humanitarian digital supply chain is largely the same as the Big Tech supply chain. There is not much choice: There is a rapid, largely invisible, and already partially completed “AI-fication” of digital humanitarian infrastructure. AI is being integrated into social media platforms, cybersecurity, and office management tools. Even the most skeptical aid workers are probably already using AI—inadvertently, but daily—in their work.
Humanitarian AI fictions and the rise (and rise) of new buzzwords
The second framing concerns AI fictions: imaginary and often anthropomorphic explanations of what AI is and how it works. While generative AI is a language model and not an information model, we need to consider the distributive impacts of humanitarian AI fictions on how we construe ethics problems. While technical explanations make it clear that GenAI does not (yet) have intent or consciousness and that it is not an information model, people interact with GenAI and some of the concepts attached to it as if GenAI were a being and the concepts were indisputable analytical tools. I think the form of anthropomorphizing going on in aid talk can broadly be divided into the two categories “AI as a monster” and “AI as a magician.” The humanitarian fascination (including my own) with the monstrosity of novel technologies is familiar: a good illustration would be the drone discussions happening in the aid sector a decade back (Sandvik & Lohne 2014). Yet, something appears to be new: namely, the fantastical but also fatalistic notion of AI as “magic”—that based on prompts, AI is capable of pulling truth, stories, and narrative (emails, reports, research summaries) out of thin air. While AI is not “enchanted,” as observed by Morais (2023), “this is a moment when we should reflect on AI’s storytelling facility, be cognizant of the artificial ethos conveyed in some of its stories and take seriously their cultural implications.” We should think carefully about what this means for the humanitarian sector, where the stakes are high for calculations and approximations of truth regarding the scale and nature of suffering and the amount of relief needed to address it.
I will use an example from my attempts to discuss the adoption and adaptation of AI. In the spring of 2024, I was invited to a medium-sized humanitarian organization to convene a conversation about the embryonic uses of generative AI in the organization. “I know some are using it,” someone in the group stated. “I am using it sometimes,” someone else said. “I am NEVER going to use it,” replied a third. To my ear, it almost sounded like they were talking about something illicit, like drugs or prohibited painkillers, not technology integration. It was interesting to observe that even aid workers who argued that they would strenuously refuse to have anything to do with AI had a clear vision of what they thought AI could do and why AI was monstrous. It should be noted that several of my previous attempts to engage humanitarian audiences have resulted in a collective dash to embellish the imagery of AI as a shapeshifting Voldemort of Silicon Valley. Despite the lingering presence of the uncanny, this particular group conversation was in the end rather successful, I thought. We got around to discussing many opportunities and challenges and identifying some key organizational steps that needed to be taken concerning guidelines, quality assurance, and streamlining of licensing of AI products across the organization. My point is: aid workers have formed strong opinions about AI and expectations about what it will do—or not. There is a dilemma here. While the AI turn might mostly have practical implications and requirements regarding getting organizations “AI ready” so humanitarian staff can continue to do their job, unlearning the fascination with monsters and magicians will require serious digital capacity training across organizations. Additionally, I would submit that this is urgent. We are faced with a unique situation whereby large swathes of humanitarian staff—many falling into the younger age brackets making up most of the employees in the sector—almost simultaneously picked up GenAI tools and began using them for work because these tools helped with the boring stuff.
At the same time, something else is at play, which we need to be aware of as we build literacy and institutionalize AI vocabularies for our everyday use. This is the shapeshifting capacity of AI language and the incredibly rapid rise of buzzwords in mainstream AI discussions. Buzzwords highlight certain aspects of a situation, create problem statements, and thus define courses of action. In aid, buzzwords shape “world-making projects” and confer legitimacy, which actors need to justify their interventions (Cornwall & Brock 2005). We must engage with the mushrooming AI vocabulary of non-emergency settings and track how it evolves—but also critically ascertain the migration of well-known concepts into our sector and everyday aid talk and the trajectories they follow. At present, as we are striving to make sense of what AI means for aid, there is an uptake of buzzwords such as “Foundational AI,” “Frontier AI,” “hallucinations,” and “safety” with little critical recognition and reflection. If we understand our sector as a side event in global governance, paying attention to what happens in the main hall is vital. This also concerns the compression and acceleration of timelines: while AI itself is poised to reign over the hype cycle for a long(ish) time, it will be a reign of discursive multitudes and rapidly shifting debates.
According to Helfrich (2024), efforts to “shift the linguistic terrain of AI” are succeeding. In 2021, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) rebranded large machine learning models as “foundation models,” a new name and characterization emphasizing the notion that these models will serve as the “foundation” for a wide range of tasks. In 2022, in a non-peer-reviewed research paper on arXiv, researchers and policymakers with ties to the high-profile Future of Humanity Institute (which focuses on the “existential risk” that AI supposedly poses to humanity) and the Effective Altruist movement put forward another new term designed to feed the AI hype cycle: “Frontier AI,” a characterization that emphasizes capability. Helfrich observes that “Frontier AI” is by nature speculative; it does not exist yet, it is imminent, and it may (or may not) be highly dangerous (Helfrich 2024). In 2023, the concept of “hallucination” as a way of describing false information was popularized by Google. In what Helfrich labels a “major win for AI hype,” “hallucinate” was named Word of the Year for 2023 by the Cambridge Dictionary, which also updated the definition to include its use in AI-related contexts. She notes that “to hallucinate is to sense something not there, but machine learning does not do that” (Helfrich 2024). Similarly, Hicks et al. (2024) are “calling bullshit on AI bullshit,” arguing that when ChatGPT produces wrong information it bullshits; it does not mislead or “hallucinate” because it has no concern for the truth.
At the same time, we must think about how some of those “main hall” events are meaningfully recalibrated in humanitarian space. While the concept of safety has long been a staple of ethics discussions in AI research, a flurry of global governance policy initiatives is now organized around this concept (such as the 2023 UK AI Safety Summit). According to its critics, the safety focus narrows governance discussions and gives way to corporate and technical problem framings (Leslie and Perini 2024). As noted by Gebru and Torres (2024), foregrounding safety allows AI-makers to present themselves as AI safety organizations dedicated to safeguarding humanity while evading meaningful oversight and accountability. I suggest that this shapeshifting of the meaning, role, and import of safety, and how it is rapidly attaining a central place in the humanitarian AI imaginary, merits significant concern. Pertinent questions for the aid sector include: What kind of idea is safety in humanitarian space? What are the costs and tradeoffs involved when we steer our policy conversations toward this concept? What kind of “safety”—and safety risks—is being imported into humanitarian language?
From meta to micro-framings of AI as humanitarian experimentation
The third framing is that of humanitarian experimentation. As I have noted together with colleagues elsewhere, experimentation describes a defined, structured process to test and validate the effect and effectiveness of new products or approaches. Humanitarian work, due to its uncertain and often insecure context, is inevitably experimental. Using well-known, evaluated approaches—technological, medical, nutritional, or logistical, for example—in an uncertain environment does not make that practice experimental, though it may introduce risk through the variability of the context of its application. However, the use of untested approaches in uncertain environments makes clear the need for more structured processes: it compounds the risks of experimental practice with those of unstable environments (Sandvik et al., 2017). Drawing on this conceptualization, I suggest that experimentalism is a useful way of describing the ongoing process of wrapping humanitarian action around digital infrastructures that are increasingly AI-driven. What I am concerned with in this commentary is not so much the exploitation of the unstable humanitarian context to experiment with AI, but the role and place of GenAI and its perpetual beta-testing mode as a form of continuous and remote experimentation—including its continuous need for and use of training data—that reshapes humanitarian governance beyond its “digital tools.” Above, I have used frames as an analytical device—this works for humanitarian experimentation too, but at the same time, the experimentation frame is also how humanitarians themselves grapple with AI.
The sector urgently needs to consider the logistical implications of basing programming on digital infrastructures. The summer of 2024 featured headlines such as “CrowdStrike impact: How a global IT outage unraveled the world’s tech” (Petras et al. 2024). Yet, while resources are allocated to remedy threats and damage to critical digital infrastructure in the Global North, the resources are scarcer and the timelines longer for fixing data damage elsewhere. This discrepancy is inevitably part of the experimental adoption and adaptation of AI in the sector. Additionally, vulnerability is unequally distributed. Khan (2024) observes that contexts characterized by authoritarian governance, low trust, and resource scarcity are particularly susceptible to AI-generated disinformation. While the humanitarian community is currently discussing digital shadows and strategic underserving of populations (through 3G instead of 4G or 5G), the increased climate and disaster risk coupled with disrupted global supply chains or resource scarcity might mean that the humanitarian sector on a more general level is setting itself up for serious access problems. For emergency care, disruptions in a digitally dependent system will have a catastrophic impact on response.
Additionally, while digital surveillance, mis- and disinformation, and censorship were recognized as fundamental challenges for the sector well before the rise of GenAI, colleagues working on humanitarian communication observe that AI both exacerbates and obscures problems: in reflecting on the limitations of AI tools regarding small language groups, Christina Wille of Insecurity Insight (2024) argued that this dis-attention engendered “whole pockets of realities we are completely unaware of.” The consequence is not only a lack of moderation of hate speech, surveillance, and harassment, but also that new threat actors, discourses that should have triggered early warning notifications, and rapid escalations of ground hostilities go undetected and unanalyzed. Somewhat prosaically, what may result is no aid talk and a gap in awareness, planning, and response. According to Helen McElhinney (2024), director of the CDAC network, which focuses on communication with disaster-affected communities, the humanitarian sector is now facing the challenge of supporting “what is real, at scale”: that climate change is man-made, that pandemics are real, or that children are civilians. Seeing humanitarian communication through the lens of experimentalism helps foreground the issues at stake but also resists the pull of resignation and any notion that AI is inevitably “going to work like that.”
Yet perhaps the most important discussions about “AI-as-experimentation” are not meta-discussions or those we can have at high-level policy meetings or large conferences. The important discussions are focused on specific issues. They are also practice-oriented, involving practitioners and communities. Debates about humanitarian accountability regularly feature concerns about siloed conversations and initiatives which, due to their piecemeal nature, fail to make much imprint. In the case of AI, the conversations have perhaps not been siloed enough. To grapple meaningfully with humanitarian AI experimentation, we have to investigate specific types of humanitarian work in the sectors of health, education, or WASH (water, sanitation, and hygiene). This means paying attention to how interventions are designed, the role of AI, and what we can learn about effectiveness and efficiency from monitoring and evaluating interventions. What is needed is not less AI aid talk but more specific and more practical AI aid talk.
Reframing AI problems as humanitarian policy challenges
From reflecting on how AI and the humanitarian AI problem are framed, I move to take stock of how recent policy efforts are reframing AI from an evolving technology with inherently unpredictable future impact to something that must be governed. This entails seeing AI as a technology with an already ubiquitous practical impact and as constituting a challenge to humanitarian governance and operations. I am interested not so much in reframing processes but in examples of the reframing of AI as a humanitarian policy issue. Humanitarian governance can be construed as the internationalized attempt to save lives, enhance welfare, and reduce the suffering of the world’s most vulnerable populations (Barnett 2013). Technology is part of a humanitarian accountability progress narrative where technology is deployed to solve predefined governance problems (Fast and Jacobsen 2019) across the sector. However, over the last decade, it has also become increasingly clear that technology adoption and adaptation engender new governance problems. An important part of examining how humanitarian AI is framed concerns how issues become problematized, the power relations involved in the discursive framing of problems and their resulting solutions, and the constitutive effects of problem framings (Riemann 2023). To that end, I am interested in how AI problems are represented and given visibility and meaning in humanitarian policy documents.
Over the last decade, there has been a proliferation of mandate modifications and organizational strategies related to the digital transformation of aid (Sandvik 2023a) as well as an avalanche of more technical policies, codes of conduct, handbooks, and guidelines on data protection, biometrics, cybersecurity, drones, and the like. Humanitarian policies targeting governance or operational issues related to AI build on this body of policy guidance and will likely replicate the proliferation of problem framings present in previous generations of humanitarian technology policies. Policies will also be guided and constrained by emergent non-humanitarian ethical and legal frameworks.5 Nevertheless, at this moment, it is worth looking at some of the problems and issues that the diverse cohort of humanitarian AI policy instruments aims to tackle as humanitarian thinking around the issue moves from generalized reflection on transparency, fairness, and explainability to field- and mandate-specific problem framings. The selection is deliberately eclectic, aiming to illustrate the diversity of concerns and rationales. I have included examples from donors (USAID), the UN and INGOs (UNICEF), the ICRC, and an NGO platform (NetHope), addressing programming, capacity building, activities, communication (with notable interventions on imagery), collaborations, and the overarching linkages between missions, mandates, and the digital transformation.
The first example refers to mandate-specific approaches. After extensive consultations, UNICEF launched “Policy Guidance on AI for Children 2.0” in 2021, aiming to bridge children’s rights and AI adoption and adaptation internally and generally in the child rights field. While this policy was highly innovative, bringing a concerted policy focus to protecting children’s digital bodies (Sandvik 2020), five years later it appears “old” in the sense that it accounts neither for the acceleration of AI integration due to generative AI nor for the global backlash against technologizing childhoods. For example, the wording of #7, “Empower governments and businesses with knowledge of AI and children’s rights,” would possibly focus more on accountability, responsibility—or even liability—today. At the same time, it also encapsulates the challenge of guiding a digital future that is not only unknown but whose premises are also continuously modified, as illustrated by #8, encouraging that we “Prepare children for present and future developments in AI.” The guidance notes that parents and guardians may not “be aware of future, unknown uses of their children’s data” (UNICEF 2020: 22). In hindsight, looking at developments from 2020–2025, neither was almost anyone else.
Donor guidelines are also emerging. Whatever the lessons from its demise, it is useful to bring out the USAID Artificial Intelligence Action Plan from 2022 as an example of donor-driven guidance. This plan focused on development but was aimed at “chartering the course for responsible AI in USAID programming” (USAID 2022). Launched six months before ChatGPT, the plan focused strongly on data infrastructure as a public good and the need to build capacity, strategy, and oversight mechanisms through multistakeholder processes. Similar to the UNICEF guidance, there was a strong focus on rights protection. While the end vision of this policy is relevant to thinking about humanitarian outcomes, the plan also illustrates the profound difference between grappling with AI in a development context where the state is mostly a willing if not always able partner, and the role and import of AI in emergency responses where the state is unable, unwilling, or both.
A third example is more classically humanitarian. The ICRC “Building A Responsible Humanitarian Approach: The ICRC Policy on Artificial Intelligence” from 2024 starts from the mandate of the ICRC, presenting the policy as based on a “purely humanitarian approach” and as “value-based,” building on previous ICRC digital policies, which have been agenda setting for the humanitarian field. At least nominally, this policy represents a contrast to other policies by being explicitly oriented toward internal objectives, to “help ICRC staff learn about AI” but also to enhance governance and the organization’s overall capacity to engage in AI-related debates. The policy attempts to carve out a humanitarian space by denoting the binary tenor of external AI debates as a “politicized” discourse, where utopian views are pitted against dystopian views and enthusiasts against pessimists, suggesting that the policy is intended to stake out a different type of conversation. Whereas other ICRC policies such as the rules of personal data protection are binding on the organization, this policy is explicitly framed as a methodology, developed in response to the lack of a common policy framework and aimed at ensuring that “various activities are organized and managed by a clear, consistent formal and professional methodology.” Regardless of the caveats and emphasis on focus, this policy is likely to be important for normative guidance in the sector (ICRC 2024).
The final example concerns collaborative policies and the NetHope Humanitarian AI Code of Conduct. Operating across humanitarian, development, and human rights spaces, NetHope is a global consortium of non-profit organizations established back in 2001 aiming to “improve the efficiency and effectiveness of humanitarian aid and development efforts through the power of technology and collaboration,” focusing on collective impact (NetHope 2021). While this Code of Conduct is aspirational, it matters how aspirational it is. Divided into three succinct sections, the Code sets out standards for AI use, a set of normative agreements, and future objectives. Stakeholders agree to abide by humanitarian principles and frameworks and ensure that AI use has a “net positive impact” on the realization of mission objectives and communities. Somewhat puzzlingly, there is an agreement both to “Do No Harm” and to not exacerbate problems stakeholders are working to solve. Among the innovative and potentially increasingly contentious aspects of the Code is the emphasis that AI use must also be feminist and intersectional (#4). Referring specifically to high-risk contexts, the Code also includes an interesting statement on images. On the face of it, this appears to be a detailed ban: stakeholders agree “not to use ai to generate photo realistic images or videos of vulnerable groups including children and program participants, for the purposes of publication, including campaign and fundraising.” Interesting questions arise concerning the standards for when something is photo-realistic, the definitions of vulnerability, and the understanding of the meaning of publication. Moreover, referring back to UNICEF and the time carousel aspect of AI policy-making, the NetHope Code of Conduct #10 recognizes that a complex and consequential regulatory framework is emerging and emphasizes the need to “align where we have different regulatory environments”—presumably configuring this alignment into an operational model—and “uphold the highest available standards and practices of data protection, privacy and security used by this model.”
The implementation and impact of humanitarian policy documents vary enormously. Yet policy and the frames and narratives used in policy guidance shape everyday aid talk. The examples above are intended to illustrate the myriad ways in which AI is being reframed as part of humanitarian practice, the multiple framings regarding what the problems are represented to be (and not represented to be), and who the problem holders are. From this perspective, AI is more of what is already there: Technology can be a response to humanitarian needs, an attempt to fix operational protection problems, one of those “solutions in need of problems” situations pushed onto the humanitarian agenda by eager innovators—or a problem arising from ongoing deployment of digital solutions, either as a technical implementation problem (whether relating to connectivity, malfunctioning, design problems, or the tool in question being fundamentally unfit for purpose in emergency contexts) or as a problematic result of that implementation.
Deframing humanitarian knowledge?
When I began drafting this commentary during the spring of 2024, it did not have a third part. When I first embarked on sketching out such a part in the fall of 2024, it was tentative and future-oriented: I was asking whether we were seeing the contours of a new type of humanitarian knowledge practice, namely deframing. As I write this, I do not doubt that a deframing and decoupling of the deep structure of knowledge production and information is taking place, with significant impact on humanitarian governance and policymaking, operational activities, and everyday aid talk. By deframing, I refer to the act of dissolving existing frames to produce, recognize, and utilize knowledge. From a humanitarian innovation perspective, deframing old models for knowledge production is intrinsic to achieving change and reimagining traditional models and concepts. The humanitarian system itself has long been considered “unfit for purpose,” with calls for localization, decolonization, and even complete dissolution. A process of deframing seems to be what is asked for—and this is possibly also how we may characterize what is going on. However, regardless of one’s position on change in the aid sector, for humanitarians, the notion of “ground truth”—information gathered through direct observation or measurement and thus considered accurate and dependable—is crucial for effective planning, providing feedback, and ensuring accountability. Operationally, getting to ground truth in aid also involves accounting for and incorporating muted voices and oppressed groups while recognizing the importance of diversity. The unprecedented targeting of organizations and programs addressing these objectives is based on political decisions, even though automated screening tools are key to their execution.
A parallel issue concerns the role of digital tools in producing humanitarian information and whether AI has set in motion a deframing of knowledge production that is just as consequential as the problems of misinformation (Leyland et al. 2023) and disinformation (Bunce 2019) adversely impacting humanitarian operations. Technology has had a significant impact on humanitarian knowledge politics over the last two decades. From a humanitarian communication perspective, the veracity of suffering is key to humanitarian narratives (Wilson & Brown 2008); however, establishing this veracity is not straightforward. The distinction between fictional and non-fictional narratives is crucial for navigating our everyday worlds—but also for believing that a crisis one cannot see, or is not in any way affected by, is real and severe. It has been observed that in humanitarian communication, there has been a move toward rhetorical discourses where the format and structure of storytelling challenge the audience’s ability to distinguish the fictionalized from the non-fictional (Iversen 2019). Added to what might be understood as a deliberately calibrated realness is the distributive impact of the digital transformation. Digital tools have introduced distinct epistemologies that obscure many forms of knowledge in crises and emergencies but also produce limited understandings of how a crisis unfolds. Big Data entails profound changes in how data is collected, processed, and visualized. According to Burns (2015), digital humanitarianism obscures knowledge about how a crisis is unfolding, promotes the knowledges of people located far from crises, and privileges a professionalized and remote volunteer-based labor force. Whereas digital humanitarianism promises better decision-making by way of automated processes for sorting information and reducing complexity, automatization also directs attention to new horizons regarding what is unknown: the possibility of knowledge remains elusive. Fejerskov et al. (2024) have coined this development “humanitarian ignorance.”
Nevertheless, these contributions and observations are mainly concerned with technological change and can be placed on the same epistemological trajectory. My critique concerns something that has yet to materialize and be fully visible to us, namely the AI-generated destabilization of knowledge production resulting from deframing. The generation of inaccessible realities, such as those described by Wille above, adds to the erosion of trust in humanitarian information. I suggest that the in-world decoupling of the processes of knowledge production is equally significant but so far insufficiently understood. With humanitarian actors in an increasingly precarious position, the ability to hold on to certain agreed-upon processes for producing knowledge is important. Fake and fabricated content (data, facts, arguments, claims, conclusions) undermines the foundations of knowledge in a democratic society. In the broad field of humanitarian studies, including forced migration, mass violence, and war, AI manipulation generates acute questions regarding knowing about life and death, including the trustworthiness and reliability of academics as experts, commentators, and analysts.
An important and by now familiar issue concerns the illegal, unethical, and deceitful use of AI: This includes situations where a prompt generates false results or situations where a prompt should not have been used for legal or ethical reasons, thus voiding the output. A related dimension of deframing involves the false attribution of authorship, where a made-up piece of research, reporting, or commentary is attributed to existing sources, such as a journal or a publishing platform—and to real activists, researchers, practitioners, or policymakers. Additionally, there are problems with false citations for non-existing papers and reports, as well as the wholesale invention of papers and reports. The potentially cascading effects of relatively simple (but not innocent) transgressions such as inventing sources or using wrong or non-existing citations represent a new challenge. These transgressions undermine the way we, as a community of knowledge producers, organize ourselves and make sense of our work—and the legitimacy of our contributions to and critiques of that elusive ground truth. What is evidence-based humanitarianism if there is neither evidence nor humanitarianism? Moreover, on an individual level, how can management, practitioners, and humanitarian studies researchers defend their practice and publishing record? In the current geopolitical climate, not only has research on specific issues become highly politicized and subject to controversies, threats, and conspiracies, but knowledge producers are also experiencing new types of challenges concerning their reputation, the integrity of their research, and their aid work. Depending on where knowledge producers are situated, these challenges may amount to a significant risk to life, liberty, and security (Sandvik 2024b).
A final issue discussed here concerns the seemingly random eradication of author names, colloquially labeled “the personal name guillotine.” On social media, the practice of banning (or allowing) certain words, phrases, and themes due to community standards or political sensitivity has been the subject of longstanding debate. Whereas such decisions have been based on more or less automated content moderation, AI censorship operates in ways we still understand little about. The personal name guillotine example concerns recent cases of prominent (white male) academics in the Global North being excluded by ChatGPT: ChatGPT knows what it is going to say, pauses, and then a seemingly human-generated filter—or personal name guillotine—appears. Some names are filtered out due to privacy requests or persistent manipulation by AI—but others remain a mystery (Beres 2024; Zittrain 2024). Whereas the explicit censorship undertaken by DeepSeek is based on official Chinese policy (McCarthy 2025), the underlying reasons for and mechanisms behind ChatGPT’s refusal to say certain names, offering error messages instead, remain largely unexplored. While this aspect may seem minor in the current context, I contend that it forms part of the broader destabilization and unmooring of humanitarian knowledge practices.
Conclusion
In the context of a rapidly changing geopolitical landscape, AI has become increasingly prominent on the international agenda. This means that truth, financial resources, and humanitarian access are up for grabs in new ways in the aid sector. Eschewing the more academic-sounding “discourse” and trying to incorporate the sense of speed, confusion, and fascination permeating the sector’s engagement with AI, I use the term “aid talk” as a broad category incorporating these diverse discussions. Narratives and frames co-create how problems come into being, are defined and discussed, and what solutions are proposed. I have focused on three key issues in AI aid talk: perceptions of AI, policymaking efforts, and knowledge production. I have used framing, reframing, and deframing as interlinked analytical concepts. I have explored how AI is framed as a humanitarian problem, how AI is reframed from a technology whose impact is in the future to a current policy issue, and the potentially detrimental deframing of traditional knowledge practices through AI. As these conversations move and change rapidly, I have opted for broadly stated claims and arguments that others may disagree with, build on, and improve.
Much of what is going on with AI is external to the aid sector. For example, AI-as-humanitarian-buzzword is part of a larger family of buzzwords circulating outside the global humanitarian field and is co-constituted through them. Yet, it is within our purview as researchers to provide critical scrutiny of the rise of AI in the sector and the shifting calibrations of AI aid talk. This commentary pushes one overarching argument: Even as the sector grapples with ongoing systemic shocks, attention must be paid to the transformative nature of AI. I aim to contribute to academic conversations about AI but also to help humanitarian practitioners make sense of their work and humanitarian decision-makers think strategically at the policy-making and norm-crafting levels.
Acknowledgements
I am grateful to peer reviewers, Linda Raftree, Maria Gabrielsen Jumbert, and Kristoffer Lidén for reading earlier versions of this commentary. I am also grateful to Christina Wille, Linnet Taylor, Giulio Coppi, Helen McElhinney, Andrea Düchting and Sarah Spencer for conversations on AI and the humanitarian space.
Author’s contributions
The author is the sole author. The author read and approved the final manuscript.
Funding
The research for this commentary was funded by the PRIO strategic initiative “Artificial Intelligence, Humanitarian Ideas and Discourse–KnowingAID” https://www.prio.org/projects/2013 led by Maria Gabrielsen Jumbert.
Data availability
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
Declarations
Competing interests
The author declares that she has no competing interests.
1 I draw inspiration from the long-standing debates on rights-talk—not in the sense of impoverished political discourse, but as explored by Merry (2003) and other scholars of rights consciousness, who view rights as experienced realities, adopted and adapted by actors in their everyday practice.
2 Methodologically, the intervention builds on my previous efforts to make sense of the rise of AI in aid, requiring clarity on what these are. See Sandvik 2022, 2023a, b, c, 2024a and b; Sandvik and Jumbert (2023); and Sandvik and Lidén (2023).
3 Sandvik (2023a: 31) describes humanitarian mission creep as when, for example, UNHCR might start to provide welfare services to non-displaced populations because it has processed the data to do so. Function creep, on the other hand, happens “when data is appropriated by actors who have no right to collect it but would like to use it for purposes other than UNHCR’s protection mandate.”
4 Lerman (2013) observes that data sets can be affected by the “nonrandom, systemic omission of people who live on big data’s margins, whether due to poverty, geography, or lifestyle, and whose lives are less ‘datafied’ than the general population’s.” These technologies may create “a new kind of voicelessness, where certain groups’ preferences and behaviors receive little or no consideration when powerful actors decide how to distribute goods and services and how to reform public and private institutions.”
5 A large number of AI ethics and governance initiatives, followed by successful regulatory action (the EU AI Act, the CoE convention on AI and human rights), have been tabled over the last half-decade. This includes the OECD principles (2019), the UNESCO recommendations (2021), the Hiroshima principles (2023), and the UN Global Digital Compact (generally Huw et al. 2024). The humanitarian UN organizations refer specifically to the UN “Principles for the Ethical Use of Artificial Intelligence in the United Nations System” from 2022 (UN 2022).
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
Barnett, MN. Humanitarian governance. Annu Rev Polit Sci; 2013; 16,
Beduschi, A. Harnessing the potential of artificial intelligence for humanitarian action: opportunities and risks. International Review of the Red Cross; 2022; 104,
Beres D (2024) ChatGPT won’t say his name. https://www.theatlantic.com/newsletters/archive/2024/12/chatgpt-wont-say-this-name/681129/
Bones, H et al. In the frame: the language of AI. Philosophy & Technology; 2021; 34, pp. 23-44. [DOI: https://dx.doi.org/10.1007/s13347-020-00422-7]
Borchgrevink, K; Sandvik, KB. The afterlife of buzzwords: the journey of rights-based approaches through the humanitarian sector. The International Journal of Human Rights; 2022; 26,
Bunce, M. Humanitarian communication in a post-truth world. Journal of Humanitarian Affairs; 2019; 1,
Burns, R. Rethinking big data in digital humanitarianism: practices, epistemologies, and social relations. GeoJournal; 2015; 80,
Coppi, G; Jimenez, RM; Kyriazi, S. Explicability of humanitarian AI: a matter of principles. Journal of International Humanitarian Action; 2021; 6,
Coppi, G (2024) Private tech, humanitarian problems: how to ensure digital transformation does no harm. https://www.accessnow.org/wp-content/uploads/2024/02/Mapping-humanitarian-tech-February-2024.pdf
Cornwall, A; Brock, K. What do buzzwords do for development policy? A critical look at ‘participation’, ‘empowerment’ and ‘poverty reduction’. Third World Quarterly; 2005; 26,
De Guevara B, Kostis R (2018) Knowledge production in/about conflict and intervention: Finding ‘facts,’ telling ‘truth.’ In Knowledge and Expertise in International Interventions (pp. 1–20). Routledge
Fejerskov, AM; Clausen, M-L; Seddig, S. Humanitarian ignorance: towards a new paradigm of non-knowledge in digital humanitarianism. Disasters; 2024; 48,
Gebru, T; Torres, EP. The TESCREAL bundle: eugenics and the promise of utopia through artificial general intelligence. First Monday; 2024; [DOI: https://dx.doi.org/10.5210/fm.v29i4.13636]
Goffman, E (1974) Frame analysis: An essay on the organization of experience. Harvard University Press
Helfrich G (2024) The harms of terminology: why we should reject so-called ‘frontier AI.’ AI and Ethics, 1–7
Hicks, MT; Humphries, J; Slater, J. ChatGPT is bullshit. Ethics Inf Technol; 2024; 26,
ICRC (2024) Building a responsible humanitarian approach: the ICRC policy on artificial intelligence. https://shop.icrc.org/building-a-responsible-humanitarian-approach-the-icrc-s-policy-on-artificial-intelligence-pdf-en.html
Iversen, S. ‘Just because it isn’t happening here, doesn’t mean it isn’t happening’: narrative, fictionality and reflexivity in humanitarian rhetoric. Eur J Engl Stud; 2019; 23,
Jacobsen, KL; Sandvik, KB. UNHCR and the pursuit of international protection: accountability through technology?. Third World Quarterly; 2018; 39,
Jacobsen KL, Fast L (2019) Rethinking access: how humanitarian technology governance blurs control and care. Disasters 43: S151-S168
Kajava K, Sawhney N (2023) Language of algorithms: agency, metaphors, and deliberations in AI discourses. In Handbook of Critical Studies of Artificial Intelligence (pp. 224–236). Edward Elgar Publishing
Khan S (2024) Unpacking AI governance from the margins. Harvard Data Science Review (Special Issue 5). https://doi.org/10.1162/99608f92.e6245c19
Lakoff G, Johnson M (1980) Metaphors we live by. University of Chicago Press
Lerman J (2013) Big data and its exclusions. Stanford Law Review Online, 66. http://www.stanfordlawreview.org/online/privacy-and-big-data/big-data-and-its-exclusions
Leslie D, Perini AM (2024) Future Shock: Generative AI and the international AI policy and governance crisis. Harvard Data Science Review (Special Issue 5). https://doi.org/10.1162/99608f92.88b4cc98
Leyland, J; Tiller, S; Bhattacharya, B. Misinformation in humanitarian programmes: lessons from the MSF listen experience. Journal of Humanitarian Affairs; 2023; 5,
Madianou, M. Nonhuman humanitarianism: when ‘AI for good’ can be harmful. Inf Commun Soc; 2021; 24,
McElhinney H, Spencer SW (2024) The clock is ticking to build guardrails into humanitarian AI. The New Humanitarian. https://www.thenewhumanitarian.org/opinion/2024/03/11/build-guardrails-humanitarian-ai
McCarthy S (2025) DeepSeek is giving the world a window into Chinese censorship and information control. CNN. https://edition.cnn.com/2025/01/29/china/deepseek-ai-china-censorship-moderation-intl-hnk/index.html
Merry, SE. Rights talk and the experience of law: Implementing women’s human rights to protection from violence. Hum Rights Q; 2003; 25,
Morais RJ (2023) AI’s truth lies and ethos. Public Anthropologist. https://publicanthropologist.cmi.no/2023/07/19/ais-truth-lies-and-ethos/
NetHope (2021) Humanitarian AI code of conduct. https://nethope.org/toolkits/humanitarian-ai-code-of-conduct/
OCHA (2024) Briefing note on artificial intelligence and the humanitarian sector. https://www.unocha.org/publications/report/world/briefing-note-artificial-intelligence-and-humanitarian-sector
Petras G, Loehrke J, Padilla R (2024) CrowdStrike impact: how a global IT outage unraveled the world’s tech. USA TODAY. https://eu.usatoday.com/story/graphics/2024/07/19/crowdstrike-outage-global-effect/74467247007/
Pizzi, M; Romanoff, M; Engelhardt, T. AI for humanitarian action: human rights and ethics. International Review of the Red Cross; 2020; 102,
Raftree L (2024) Do humanitarians have a moral duty to use AI to reduce human suffering? Four key tensions to untangle. https://alnap.org/humanitarian-resources/publications-and-multimedia/do-humanitarians-have-a-moral-duty-to-use-ai/
Riemann, M. Studying problematizations: the value of Carol Bacchi’s ‘what’s the problem represented to be?’ (WPR) methodology for IR. Alternatives; 2023; 48,
Sandvik, KB; Lohne, K. The rise of the humanitarian drone: giving content to an emerging concept. Millennium; 2014; 43,
Sandvik, KB et al. Do no harm: A taxonomy of the challenges of humanitarian experimentation. International Review of the Red Cross; 2017; 99,
Sandvik KB, Jumbert MG (2023) AI in aid: framing conversations on humanitarian policy. Global Policy Opinion
Sandvik KB, Liden K (2023) Ungovernable or humanitarian experimentation? Generative AI as an accountability issue
Sandvik, KB. Wearables for something good: aid, dataveillance, and the production of children’s digital bodies. Information, Communication & Society; 2020; 23,
Sandvik KB (2022) Optics as politics: culture, language and learning with UiO ChatGPT. https://blogs.prio.org/2023/12/optics-as-politics-culture-language-and-learning-with-uio-chatgpt/
Sandvik, KB. Humanitarian extractivism: the digital transformation of aid; 2023; Manchester University Press: [DOI: https://dx.doi.org/10.7765/9781526165831]
Sandvik KB (2023b). Taking stock: generative AI, humanitarian action, and the aid worker. Global Policy Opinion.
Sandvik, KB. Framing humanitarian AI conversations: what do we talk about when we talk about ethics? PRIO Paper; 2024; Oslo, PRIO:
Sandvik KB (2024b) Authorship and involuntary attribution: how and why should we contest AI manipulation? Global Policy Journal. https://www.globalpolicyjournal.com/blog/29/10/2024/authorship-and-involuntary-attribution-how-and-why-should-we-contest-ai
Scott-Smith, T. Humanitarian neophilia: the ‘innovation turn’ and its implications. Third World Quarterly; 2016; 37,
Spencer S (2024) Seizing the potential and sidestepping the pitfalls. Humanitarian Practice Network, 89. https://odihpn.org/wp-content/uploads/2024/05/HPN_Network-Paper89_humanitarianAI.pdf
Spencer S, Masboungi C (2025) Artificial intelligence in gender-based violence in emergency programming: perils and potentials. https://clearinghouse.unicef.org/sites/ch/files/ch/sites-PD-ChildProtection-Knowledge%20at%20UNICEF-AI%20in%20GBV%20Emergencies-5.0.pdf
UN (2022) Principles for the ethical use of AI in the UN System. https://unsceb.org/sites/default/files/2022-09/Principles%20for%20the%20Ethical%20Use%20of%20AI%20in%20the%20UN%20System_1.pdf
UNICEF (2020) Policy guidance on AI for children 2.0. https://www.unicef.org/innocenti/media/1341/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf
USAID (2022) The USAID artificial intelligence action plan. https://www.usaid.gov/digital-development/artificial-intelligence-action-plan
Wallenborn JT (2022) AI as a flying blue brain? How metaphors influence our visions of AI. https://www.hiig.de/en/ai-metaphors/
Wilson RA, Brown RD (2008) Humanitarianism and suffering: the mobilization of empathy. Cambridge University Press
Winkel M (2024) Controlling the uncontrollable: the public discourse on artificial intelligence between the positions of social and technological determinism. AI & Society, 1–13
Wyatt, S. Metaphors in critical Internet and digital media studies. New Media Soc; 2021; 23,
Zittrain J (2024) The words that stop ChatGPT in its tracks. Why won’t the bot say my name? The Atlantic. https://www.theatlantic.com/technology/archive/2024/12/chatgpt-wont-say-my-name/681028/