Abstract
This study develops a co-evolutionary foresight framework to explore the future of human roles in AI-integrated aviation. Moving beyond deterministic models of automation, it conceptualizes AI integration as a recursive process shaped by technological innovation, institutional adaptability, and workforce transformation.
Drawing on evolutionary economics and socio-technical systems theory, the research integrates three methodological layers: historical case analysis of aviation transitions, theory-informed scenario construction, and a Delphi-based expert validation process with senior aviation stakeholders. The resulting 2 × 2 scenario matrix outlines four plausible futures: Strategic Co-evolution, Human-Centric Continuity, Latent Obsolescence, and Human Displacement, each reflecting different configurations of AI intensity and institutional responsiveness. Among these, Strategic Co-evolution emerges as the most plausible and desirable path, highlighting the importance of anticipatory governance, hybrid role design, and institutional alignment. In contrast, scenarios marked by institutional inertia or fragmented oversight raise concerns about skill erosion, trust degradation, and systemic risk. The study contributes a transferable methodology for futures research and provides actionable insights for regulators, training institutions, and labor actors. It underscores that the trajectory of AI integration in aviation is not technologically preordained but depends critically on the strategic co-design of institutional safeguards, workforce readiness, and socio-technical trust. The framework offers a model for examining AI transitions in other high-stakes domains, advancing a participatory and empirically grounded approach to exploring human–AI futures.
Introduction
Artificial intelligence (AI) is reshaping complex systems across sectors, yet aviation offers a particularly insightful testbed due to its early, safety–critical, and highly regulated adoption of intelligent technologies. Unlike AI deployments oriented toward consumer personalization or cost reduction, aviation integrates AI to enhance operational reliability, regulatory compliance, and human safety. At the same time, global regulators such as the United States Federal Aviation Administration (FAA), the European Union Aviation Safety Agency (EASA), and the International Civil Aviation Organization (ICAO) are developing certification frameworks and oversight protocols for AI systems, positioning aviation at the forefront of governance experimentation in high-stakes automation.
Despite this strategic relevance and transformative potential, academic research on AI in aviation remains fragmented and technocentric. Most studies emphasize operational performance, efficiency, and safety enhancements. Others focus on workforce implications, including labor displacement and skill mismatches. What is largely missing is a systemic, future-oriented understanding of how human expertise, organizational routines, and institutional structures co-evolve with AI integration.
This study addresses that gap by developing a co-evolutionary scenario framework to explore the future of human roles in AI-integrated aviation. Drawing on evolutionary economics, transition theory, and futures studies, the framework models the recursive dynamics between technological variation, institutional selection, and workforce adaptation. Human–AI interaction is framed as a dynamic process shaped by feedback between innovation, regulation, and practice.
The research is guided by two interrelated questions:
How do co-evolutionary processes shape the transformation and persistence of human roles in AI-integrated aviation systems?
What plausible future scenarios for human-AI collaboration can inform anticipatory governance and workforce planning in the sector?
While focused on aviation, the framework also applies to other safety–critical and regulated sectors such as healthcare, logistics, defense, and energy systems. It further contributes to futures research by advancing a methodologically rigorous, participatory, and transferable approach to exploring socio-technical transitions under uncertainty.
This study contributes to the literatures on futures and socio-technical transitions in three interconnected ways. First, it advances the conceptualization of human–AI collaboration by embedding aviation within a co-evolutionary framework that links technological variation, institutional adaptation, and workforce transformation. In doing so, it extends beyond technocentric accounts of aviation automation and complements previous work in futures studies that emphasize governance and ethical dimensions [43, 53, 54, 60]. Second, it combines scenario construction, case studies, and a three-round Delphi process. This responds to calls for empirically grounded and participatory foresight methods [58, 59]. Third, it generates practical insights for anticipatory governance, demonstrating how institutional foresight and workforce investment can shape more desirable socio-technical futures in high-stakes, safety–critical sectors. This aligns with prior calls for governance frameworks that move from reactive regulation to anticipatory, participatory, and reflexive approaches [20, 67].
The remainder of the paper is structured as follows: Sect. "Theoretical and Empirical Background" consolidates the theoretical and empirical background, combining the literature review, co-evolutionary framework, and three historical case studies that ground the analysis. Sect. "Methodology" presents the research methodology, which integrates case evidence, scenario construction, and a Delphi process. Sect. "Results" reports the main results, introducing the scenario matrix and the expert validation. Sect. "Discussion and Implications" develops the discussion addressing implications for theory and methodology, stakeholders, society and governance, and futures studies. Sect. "Conclusion" concludes with the study’s main contributions, limitations, and directions for future research.
Theoretical and empirical background
Understanding the future of human–AI collaboration in aviation requires a foundation that integrates the key strands of literature, theory, and empirical evidence. Rather than addressing these elements separately, this section consolidates them into a single background section to improve coherence and reduce redundancy.
Literature review
The integration of AI in aviation is advancing rapidly, reshaping how safety, efficiency, and reliability are managed in one of the most highly regulated industries. Existing research covers a wide spectrum, from technical applications such as predictive maintenance and automated traffic management to human-centered issues including workload, cognitive integration, and trust.
To assess current knowledge, we conducted a systematic literature review on the topic. We searched major academic databases, including Web of Science, Scopus, and Google Scholar, for English-language, peer-reviewed journal articles, technical reports, and conference proceedings published between 2000 and 2025. Our search used the Boolean query: ("AI" OR "Artificial Intelligence") AND ("aircraft" OR "airlines" OR "air traffic" OR "aviation" OR "airport"). Of the 256 sources initially screened, 201 met our criteria for analytical and methodological rigor. The resulting body of literature was organized into six thematic areas: (1) human–AI teaming and cognitive integration; (2) AI-driven decision-making and strategic agility; (3) regulation, certification, and safety assurance; (4) AI bias, explainability, and public perception; (5) AI in training, education, and skills development; and (6) emerging technologies in operations and interfaces.
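To make the screening step concrete, the following Python sketch shows how such a Boolean query and the formal inclusion criteria could be applied to exported bibliographic records. The record fields and the relevance logic are illustrative assumptions, not a description of the tooling actually used; the review also involved human judgment of analytical and methodological rigor that no automated filter reproduces.

```python
import re

# Illustrative sketch of the Boolean screening step described above. The
# record fields ('title', 'abstract', 'year', 'language', 'peer_reviewed')
# and the relevance heuristic are hypothetical.

AI_TERMS = ["ai", "artificial intelligence"]
DOMAIN_TERMS = ["aircraft", "airlines", "air traffic", "aviation", "airport"]

def matches_query(text: str) -> bool:
    """("AI" OR "Artificial Intelligence") AND (any aviation domain term)."""
    t = text.lower()
    has_ai = any(re.search(rf"\b{re.escape(term)}\b", t) for term in AI_TERMS)
    return has_ai and any(term in t for term in DOMAIN_TERMS)

def screen(records: list[dict]) -> list[dict]:
    """Keep English-language, peer-reviewed items from 2000-2025 matching the query."""
    return [
        rec for rec in records
        if rec["language"] == "en"
        and rec["peer_reviewed"]
        and 2000 <= rec["year"] <= 2025
        and matches_query(f"{rec['title']} {rec['abstract']}")
    ]

sample = [{"title": "AI for air traffic flow management", "abstract": "...",
           "year": 2021, "language": "en", "peer_reviewed": True}]
print(len(screen(sample)))  # -> 1
```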
Research on human–AI teaming and cognitive integration highlights how AI can augment human performance, redistribute workload, and transform teamwork in both cockpits and control towers. Studies demonstrate how cognitive tasks are increasingly supported by AI, with implications for trust, role allocation, and adaptive collaboration [13, 23, 32, 34, 39, 62, 65, 69]. A second body of work examines AI-driven decision-making and strategic agility, showing how machine learning tools can improve operational planning, scheduling, and fleet management, thereby strengthening airlines’ ability to respond to crises and dynamic environments [21, 35, 37, 71].
At the institutional level, regulation, certification, and safety assurance have emerged as critical domains. Scholars and agencies emphasize the need for adaptive certification frameworks, transparent oversight, and proactive governance to build legitimacy and public trust in AI-enabled systems [2, 18, 49, 51, 52, 63]. Complementary research on bias, explainability, and public perception warns that societal acceptance remains fragile. Issues of transparency, fairness, and interpretability are recurrent barriers to adoption [5, 26, 50, 57].
The workforce dimension has also received growing attention. Studies on training, education, and skills development underline the importance of hybrid competencies that blend digital literacy, system supervision, and traditional airmanship. Pilot training, maintenance education, and human resource strategies are all being redesigned to accommodate AI-based tools [29, 30–31, 33, 46]. Finally, research on emerging operational technologies and interfaces explores new frontiers such as augmented reality displays, drone and urban air mobility (UAM) traffic management, and intelligent airport systems [8, 38].
Despite these advances, four persistent gaps remain. First, most studies are micro-level, focusing on task-level interactions or interface design, often under controlled simulation environments. As a result, there is limited inquiry into how AI systems are scaled across organizations and integrated into existing institutional architectures. Second, there is little use of foresight and scenario-based methods. Few studies employ structured foresight techniques or explore long-term trajectories of human–AI teaming. When scenarios are used, they often lack grounding in empirical data or theoretical models. Third, the literature is siloed, with limited integration across human factors, engineering, institutional theory, and labor studies. Cross-disciplinary integration remains rare. Fourth, human agency is under-theorized, with workers often treated as passive recipients of automation rather than as active co-shapers of socio-technical systems. This perspective downplays the capacity of institutions and labor systems to shape AI deployment and, consequently, AI is viewed as exogenous rather than relational. These limitations signal the need for a futures-oriented perspective that situates aviation within broader socio-technical imaginaries and emphasizes the role of institutions, governance, and labor in shaping technological trajectories.
Recent literature on futures studies moves beyond deterministic views of AI disruption, exploring how human agency, institutional dynamics, and socio-technical systems shape and are shaped by AI integration. This shift reflects a broader move in the field toward co-evolutionary and relational models of technological change, particularly in sectors with high reliability, risk, and regulatory complexity [45].
A key strand of this literature focuses on human–AI teaming as a foresight concern. Vervoort and Gupta [67], for instance, argue that futures research must grapple with the normative and political dimensions of who gets to shape AI-enabled futures. Similarly, Felt et al. [20] emphasize the role of anticipatory governance in shaping socio-technical imaginaries, reinforcing the idea that futures are not simply extrapolated from trends but co-constructed through sociopolitical processes.
In parallel, research on human–AI collaboration within foresight contexts has grown. Sardar [60] and Miller [43] note the epistemological tensions posed by intelligent systems in decision-making environments: as machines increasingly inform or mediate decision support, futures studies must interrogate how human reasoning, trust, and interpretability can remain central.
Within this broader context, co-evolutionary and transition-based models have gained traction in futures scholarship. Drawing from evolutionary economics and complex systems thinking, studies such as Geels [22] propose that transitions unfold through multi-level interactions between niche innovation, institutional adaptation, and social negotiation.
Despite these advances, two persistent gaps remain in the futures literature. First, much futures work emphasizes governance, ethics, or macro-scenarios, with less attention to how human roles transform within socio-technical systems. The lived experience of adaptation, resistance, and reskilling remains underexplored. Second, scenario practices in futures studies often remain abstract or sector-agnostic. Few integrate empirical case studies or apply co-evolutionary logic to trace how past transitions inform plausible futures. The opportunity—and the gap this study addresses—is to integrate these literatures into a co-evolutionary framework that situates AI not only as a technological innovation but also as a socio-technical process shaped by institutions, labor, and anticipatory governance.
Theoretical framework
The theoretical framework is based on the ontological assumption that “the future” is not singular or deterministic but emerges from present systems as contingent trajectories, shaped by values, power, and design [40, 68]. In this context, the study focuses on plausible futures—trajectories that may unfold given what we know about existing constraints and the dynamic interactions among actors. The use of the term “plausible” in this study emphasizes that the scenarios are internally coherent and grounded in both theoretical reasoning and empirical analogues, as opposed to being forecasts or predictions of what will occur.
At its core, the logic of variation, selection, and retention (VSR) explains how innovations emerge, are filtered through institutional environments, and are stabilized in workforce roles and organizational routines [41, 47]. Rather than being treated as linear stages, these are recursive processes that shape how technologies are introduced, evaluated, and embedded in socio-technical systems (Fig. 1).
Fig. 1. Theoretical framework of the study [image not reproduced]
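To illustrate the recursive, non-linear character of the VSR logic, the sketch below implements a deliberately minimal toy loop in Python. All quantities (benefit scores, certification thresholds, learning rates) are invented for illustration; the paper's framework is qualitative, and this is not a claim about an underlying formal model.

```python
import random

# Toy illustration of the recursive variation-selection-retention loop.
# All numbers are invented; this is not the paper's formal model.

random.seed(1)

def variation(n: int = 5) -> list[float]:
    """Innovations appear with a raw technical benefit score in [0, 1]."""
    return [random.random() for _ in range(n)]

def selection(variants: list[float], adaptability: float) -> list[float]:
    """Institutions filter variants; low adaptability raises the certification bar."""
    threshold = 0.9 - 0.5 * adaptability
    return [v for v in variants if v > threshold]

def retention(selected: list[float], embedded_skill: float) -> float:
    """Adopted innovations are stabilized into routines and training regimes."""
    return embedded_skill + 0.2 * sum(selected)

embedded_skill, adaptability = 0.0, 0.3
for cycle in range(10):  # recursive cycles, not linear stages
    adopted = selection(variation(), adaptability)
    embedded_skill = retention(adopted, embedded_skill)
    # feedback: accumulated workforce skill slowly raises institutional adaptability
    adaptability = min(1.0, adaptability + 0.02 * len(adopted))

print(round(embedded_skill, 2), round(adaptability, 2))
```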
The evolutionary logic is extended through interdisciplinary integration. From innovation studies, the framework incorporates how technological variation arises, diffuses, and recombines through interaction with operational practices, design choices, and industry standards [25]. From institutional theory, it highlights legitimacy, certification, and regulatory adaptation, showing how rules, norms, and governance structures filter technological innovations and shape their trajectories [17, 19]. From labor studies, it foregrounds workforce transformation and the reconfiguration of professional roles, recognizing that technological change is mediated through skill adaptation, training regimes, and evolving forms of expertise and accountability [6, 48]. From the perspective of emergent human roles, the framework emphasizes how new socio-technical configurations create hybrid forms of responsibility, decision-making, and human–machine collaboration. Finally, from futures research, it brings the emphasis on anticipation, imaginaries, and systemic uncertainty [53, 54, 60], underscoring how collective visions of the future influence present decisions and how scenario-based reasoning can broaden the range of trajectories considered.
Case studies
The literature review and theoretical framework suggest that we should examine how the co-evolutionary dynamics of technology, institutions, and labor unfold in practice. For this purpose, historical transitions in aviation provide valuable analogies. By analyzing past episodes of transformation, we can identify recurrent co-evolutionary patterns that both validate the theoretical framework and inform the construction of future scenarios.
Our selection of cases followed a maximum variation strategy to capture diversity across technological domains (cockpit automation, maintenance intelligence, autonomous logistics), institutional maturity, and labor impacts. Applying the co-evolutionary framework to these cases allowed us to trace the mechanisms through which innovations emerged, became institutionalized, and were embedded in workforce roles. Importantly, the cases also highlighted points of friction—such as institutional lag, trust erosion, or skill displacement—that directly shaped the plausibility of alternative future scenarios.
Case study 1: Glass cockpit digitization
The transition to glass cockpits in the late 1980s and 1990s represents one of the most significant socio-technical shifts in modern aviation. Early digital flight decks, pioneered in military aircraft, were progressively adopted in commercial aviation, with Boeing and Airbus introducing fully integrated cockpit displays by the mid-1980s. This transition displaced analogue gauges with multifunctional electronic displays, enabling greater system integration, automated monitoring, and data-driven flight management [7, 70]. While the technological benefits were evident, the change introduced new risks: pilots had to develop “glass cockpit literacy,” with extensive retraining programs implemented to mitigate loss of manual flying skills and situational awareness [9, 64]. Trust in automation became a recurrent challenge, as crews adapted to digital flight management systems and occasional malfunctions eroded confidence [61].
Viewed through the co-evolutionary framework, the glass cockpit illustrates all three evolutionary mechanisms. Variation occurred through the introduction of digital display technologies, largely driven by aerospace manufacturers. Selection was mediated by certification authorities and airlines, which imposed safety and training requirements before widespread adoption. Retention was achieved through new training curricula, institutionalized crew resource management (CRM) practices, and regulatory standards that stabilized the digital cockpit as the industry norm (Fig. 2).
Fig. 2. Socio-technical evolution of the glass cockpit [image not reproduced]
The dynamics of innovation and institutional adaptation were complex. Manufacturers promoted safety and efficiency gains, but FAA and EASA validation slowed adoption. Airlines diverged in their responses depending on fleet composition and capacity to invest in retraining, while unions pressed for negotiated standards on workload redistribution. Flight schools integrated new simulators, and training infrastructures expanded rapidly to produce a generation of digitally competent pilots. These interactions illustrate how the technology’s diffusion was not linear but shaped by overlapping dynamics of industrial innovation, regulatory filtering, and workforce negotiation [3].
For pilots themselves, the transition redefined professional identity. The role shifted from manual operator to hybrid system manager, requiring competencies in systems monitoring, diagnostic reasoning, and human–automation coordination. Concerns about automation complacency and skill degradation led to new forms of training and oversight, while the institutionalization of CRM helped balance technical proficiency with collaborative decision-making [14, 27]. The case of the glass cockpit exemplifies how technological change in aviation not only reconfigures operational practices but also generates emergent human roles, producing hybrid forms of expertise.
Case study 2: Predictive maintenance
The adoption of predictive maintenance (PdM) technologies in the 2010s illustrates how data-driven innovation reshapes both technical operations and professional practices in aviation. Building on advances in sensors, telemetry, and machine learning, predictive systems enabled airlines and manufacturers to anticipate component failures, reduce downtime, and optimize maintenance scheduling [44]. These systems marked a departure from reactive or preventive models of maintenance, introducing new forms of operational intelligence and tighter integration between airlines, maintenance, repair and overhaul (MRO) providers, and original equipment manufacturers (OEMs).
Variation occurred through the introduction of digital sensors, onboard diagnostics, and algorithms capable of identifying anomalies. Selection was mediated by regulatory authorities, which required demonstration of reliability and certification standards before predictive analytics could be fully integrated into maintenance programs [18]. Retention was achieved through the institutionalization of new maintenance protocols, the redesign of manuals and procedures, and the creation of specialized training for engineers and technicians. These dynamics are captured in Fig. 3, which illustrates how predictive maintenance diffused through technological, institutional, and labor layers.
Fig. 3. Socio-technical evolution of predictive maintenance [image not reproduced]
The dynamics of innovation and institutional adaptation were shaped by the strategic role of data ownership and system integration. Airlines sought competitive advantage through improved reliability and cost savings, while OEMs such as Airbus and Boeing increasingly retained control of data streams to consolidate their role in aftermarket services [12, 15]. Regulatory bodies, including the FAA and EASA, introduced new certification for condition-based maintenance but lagged behind industry innovation, creating tensions between rapid adoption and safety assurance. Workforce dynamics were also in flux as predictive tools redefined the tasks of engineers, shifting focus from manual inspection to digital diagnostics, data interpretation, and proactive decision-making [29].
For maintenance professionals, this transition reconfigured expertise and created emergent hybrid roles. Traditional mechanical skills remained vital but were complemented by competencies in data analysis, systems integration, and human–machine collaboration [24]. The increased reliance on algorithmic outputs raised new concerns about transparency, explainability, and over-trust in digital systems, echoing challenges seen in other safety–critical domains [28]. At the same time, institutional investment in training and certification programs helped engineers adapt to these hybrid demands, embedding predictive maintenance within the professional identity of aviation maintenance.
Case study 3: Partially autonomous cargo drones
The emergence of autonomous cargo drones represents a frontier of socio-technical change in aviation logistics. Leveraging AI-assisted flight control and remote supervision, these unmanned aerial vehicles (UAVs) offer faster, more cost-effective alternatives to piloted cargo aircraft. Companies such as Amazon Prime Air, DHL, and UPS have piloted drone-based operations to expand delivery capabilities and reduce operational costs [4, 16, 56]. These initiatives illustrate how logistics is becoming a key domain for the early adoption of aviation autonomy.
Variation arises from the introduction of autonomous platforms that can perform operations that go far beyond conventional human-piloted logistics. Selection is shaped by regulatory bodies, airspace integration, and societal acceptance, with governments and airspace authorities remaining cautious and requiring new certification and policy frameworks for unmanned cargo. Retention is a slower, more contested process, as commercial routes and organizational practices must adapt to integrate autonomous systems into logistics networks (Fig. 4).
Fig. 4. Socio-technical evolution of partially autonomous cargo drones [image not reproduced]
The innovation behind autonomous cargo drones spans autonomy algorithms, communication networks, and cloud-based control systems, designed to substitute pilots in short-haul and repetitive routes, especially last-mile delivery and remote regions [11]. Institutional adaptation has lagged as UAVs blur traditional aviation categories of licensing, command responsibility, and airspace management. Regulators have experimented with temporary authorizations and scenario-based certification, but comprehensive integration into controlled airspace remains unresolved, creating a liminal zone where innovation advances under fragmented arrangements [55].
Workforce dynamics are also shifting. While core cargo handling persists, new supervisory and technical roles—UAV fleet coordinators, mission controllers, operations supervisors—require systems thinking, software literacy, and oversight competencies. Labor unions have called for reskilling and role protections [1], yet training and accreditation frameworks remain underdeveloped, leaving responsibilities fluid and often shared across legacy and emergent categories.
For humans, the rise of autonomous cargo drones creates emergent roles. Instead of piloting, operators manage fleets remotely, focusing on anomaly detection, mission oversight, and coordination with air traffic controllers. Engineers must develop competencies in both airframe maintenance and software integrity, reflecting the hybridization of mechanical and digital skills. Concerns about job substitution persist, particularly among cargo pilots, yet early deployments suggest a pattern of role transformation rather than outright elimination [10].
Methodology
The research design combined conceptual, empirical, and participatory components. This approach supports a nuanced understanding of a topic whose treatment is often fragmented across technical innovation, human factors, and governance perspectives. By bringing together multiple strands of inquiry, the methodology enables a systemic examination of the conditions under which different human–AI futures may emerge and stabilize. The design is not a rigid, linear sequence but a reflexive process in which insights from later stages inform and refine earlier ones (Fig. 5).
Fig. 5. Research design and methodological flow [image not reproduced]
The initial research problem focused on the fragmented treatment of AI in aviation. A systematic literature review helped refine this problem by mapping the field into six thematic areas. This mapping exercise clarified the scope of the inquiry and provided the empirical base for the theoretical framework.
The theoretical framework uses an interdisciplinary, evolutionary lens to socio-technical change. Built on the VSR model, the framework incorporates ideas from institutional theory on regulatory adaptation and legitimacy, from labor studies on workforce transformation and role reconfiguration, and from futures research on anticipation, imaginaries, and systemic uncertainty.
The framework is then applied to three historical case studies that serve as empirical anchors. Each case is analyzed through the interdisciplinary theoretical lens, which enables a structured analysis rather than a simple historical description. This analysis revealed recurring patterns that provided an empirical grounding for the later stages of the research.
Building on this foundation, the research team used the scenario-axes method [66] to construct four distinct scenarios. To select the axes, a broader set of driving forces was first derived from the theoretical framework and the case studies. These included the degree of AI integration, institutional adaptability, workforce skill adaptation, trust in automation, regulatory responsiveness, global market pressures, organizational strategies, and socio-political acceptance of automation. After screening these for relevance and uncertainty, the team determined that while many were important, most were closely correlated with, or nested within, two broader dimensions: degree of AI integration and institutional adaptability. These two were selected as the axes because they were considered both highly uncertain and highly consequential.
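A minimal sketch of this screening step, assuming simple numeric uncertainty and impact ratings, is given below. The values are placeholders rather than the team's actual assessments: drivers are ranked by the product of their uncertainty and impact, and the two highest-scoring drivers become the scenario axes.

```python
# Placeholder (uncertainty, impact) ratings for the driving forces named
# above; the figures are illustrative, not the team's actual scores.
drivers = {
    "degree of AI integration":   (0.90, 0.95),
    "institutional adaptability": (0.85, 0.90),
    "workforce skill adaptation": (0.60, 0.80),
    "trust in automation":        (0.70, 0.70),
    "regulatory responsiveness":  (0.60, 0.75),
    "global market pressures":    (0.50, 0.60),
    "organizational strategies":  (0.50, 0.55),
    "socio-political acceptance": (0.65, 0.60),
}

# Rank by uncertainty x impact; the two highest-scoring drivers become the axes.
ranked = sorted(drivers, key=lambda d: drivers[d][0] * drivers[d][1], reverse=True)
print(ranked[:2])  # -> ['degree of AI integration', 'institutional adaptability']
```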
Crossing these axes produced four trajectories, each illustrating unique configurations of technological intensity, institutional responsiveness, and human role transformation. The narratives of these scenarios were then developed by the team in a structured manner, expanding each quadrant of the 2 × 2 matrix into a storyline grounded in the co-evolutionary dynamics of technology, institutions, and labor. For reasons of space, the paper presents condensed versions of these narratives, which are structured to emphasize their underlying co-evolutionary logic. These scenarios were not treated as forecasts, but rather as structured foresight artifacts designed to explore systemic uncertainty and support anticipatory governance.
A three-round Delphi process was conducted to assess and refine the scenarios. The research team convened 16 senior aviation experts from various sectors, including airlines, regulatory authorities, manufacturers, airports, and air traffic management organizations. This process had a dual purpose: it validated the plausibility, coherence, and strategic relevance of the scenarios while also generating new insights that were integrated into the narratives. Iteration across rounds facilitated convergence on shared assessments but also surfaced areas of disagreement and contested assumptions. This participatory approach reinforced the credibility of the scenarios and highlighted the central role of institutional adaptability in shaping aviation’s futures. To minimize confirmation bias and ensure an independent assessment, the experts involved in the Delphi validation were entirely separate from the research team that developed the initial scenarios.
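The sketch below illustrates one conventional way to aggregate such Delphi ratings and check convergence across rounds. The five-point scale matches the study, but the data and the convergence heuristic (interquartile range of at most 1) are assumptions made for the sake of the example, not the panel's actual figures or stopping rule.

```python
from statistics import mean, quantiles

# Illustrative aggregation of Delphi ratings across rounds. The data and
# the IQR <= 1 convergence heuristic are assumptions for this sketch.

def summarize(ratings: list[int]) -> dict:
    """Mean and interquartile range for one scenario's ratings."""
    q1, _, q3 = quantiles(ratings, n=4)
    return {"mean": round(mean(ratings), 2), "iqr": round(q3 - q1, 2)}

def converged(ratings: list[int], max_iqr: float = 1.0) -> bool:
    """Flag convergence when dispersion falls below the chosen threshold."""
    return summarize(ratings)["iqr"] <= max_iqr

# 16 hypothetical expert ratings of one scenario's plausibility (1-5 scale)
round3 = [5, 5, 4, 5, 4, 4, 5, 5, 4, 5, 5, 4, 5, 4, 5, 5]
print(summarize(round3), converged(round3))  # e.g. {'mean': 4.62, 'iqr': 1.0} True
```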
Following the Delphi, the team integrated theory, empirical evidence, and expert judgment into a unified foresight process that served as the foundation for the discussion and implications developed in the subsequent sections of the study.
Results
The analysis of the theoretical and empirical evidence gathered in the study generated two interrelated results: (1) a set of four future scenarios for human–AI collaboration in aviation, and (2) their systematic evaluation through a three-round Delphi process with senior aviation experts.
Scenario axes and logics
The scenario framework was developed using the scenario axes method, adapted to reflect the co-evolutionary dynamics identified in the theoretical model and historical case studies. From the range of drivers examined, two variables best capture the tension between technological potential and institutional response, and reflect the recursive logic of VSR central to the co-evolutionary framework:
Degree of AI Integration—The extent to which AI systems transform aviation operations, ranging from minimal integration (AI as a support tool) to pervasive automation (AI as a primary operational actor).
Institutional Adaptability—The capacity of regulatory, training, and labor systems to respond effectively to AI integration through timely governance, role redesign, and workforce development. It ranges from high adaptability (proactive coordination and foresight) to low adaptability (inertia, fragmentation, or resistance).
Crossing the two axes produces a 2 × 2 matrix that defines four plausible socio-technical trajectories. This matrix provides the structural foundation for the scenario narratives, as each quadrant represents a distinct configuration of human roles, institutional structures, and technological intensity.
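Expressed minimally, the crossing maps each combination of axis poles onto one named quadrant, as in the small sketch below; the encoding is a convenience for making the matrix explicit, not part of the method itself.

```python
# Minimal encoding of the 2 x 2 scenario matrix: each combination of axis
# poles (AI integration, institutional adaptability) maps onto one quadrant.

SCENARIOS = {
    ("high", "high"): "A. Strategic Co-evolution",
    ("low",  "high"): "B. Human-Centric Continuity",
    ("low",  "low"):  "C. Latent Obsolescence",
    ("high", "low"):  "D. Human Displacement",
}

def quadrant(ai_integration: str, institutional_adaptability: str) -> str:
    return SCENARIOS[(ai_integration, institutional_adaptability)]

print(quadrant("high", "low"))  # -> D. Human Displacement
```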
Four future scenarios
Each scenario is not a prediction, but a structured narrative grounded in the co-evolutionary framework, historical analogues, and expert input. Together, they highlight how institutional foresight and governance choices interact with technological change to shape human roles in safety–critical systems (Fig. 6).
Fig. 6. Scenario matrix for human roles in AI-driven aviation [image not reproduced]
Scenario A: Strategic Co-evolution (High AI Integration/High Institutional Adaptability)
This future envisions a coordinated transition in which technological advancement is matched by agile and anticipatory institutional response. Regulatory bodies, labor unions, and training institutions collaborate early with manufacturers and airlines to define standards, upskilling programs, and operational safeguards. AI systems are integrated into air traffic management, predictive maintenance, and cockpit operations, but always with clear human oversight mechanisms.
In cockpit operations, pilots routinely collaborate with AI “co-pilots” capable of monitoring thousands of parameters, predicting turbulence, and recommending fuel-optimal reroutes. The human captain retains authority but increasingly supervises AI suggestions and focuses on ethical or safety–critical judgments. In air traffic management, hybrid human–AI systems support controllers by simulating traffic patterns across entire regions, enabling congestion to be anticipated and flights rerouted before bottlenecks occur. However, when disruptions such as volcanic ash or cyberattacks arise, human controllers are trained and empowered to override machine recommendations, preserving accountability.
Maintenance and engineering also undergo profound change. Predictive dashboards flag engine failures weeks in advance, enabling parts to be shipped proactively and reducing costly delays. Airlines invest heavily in retraining, creating a new generation of “fleet health managers” who combine mechanical expertise with data analytics. At airports, AI platforms orchestrate gates, pushbacks, and baggage flows, but transparency requirements ensure passengers and unions understand how decisions are made.
At the institutional level, this scenario features robust international coordination. Regulators converge on certification protocols for AI functions, while training academies co-design curricula with AI firms to embed digital reasoning alongside traditional airmanship. Labor unions secure agreements that guarantee workers are retrained rather than displaced, and passengers come to view AI not as a threat but as a visible enabler of safety.
Scenario B: Human-Centric Continuity (Low AI Integration/High Institutional Adaptability)
In this future, institutions remain agile and responsive, but the adoption of AI is selective and gradual. Risk aversion, public trust concerns, and cultural commitment to human oversight slow the pace of automation. AI tools are available and reliable, but they are deliberately positioned as assistive technologies rather than replacements for human expertise.
Cockpit operations continue to revolve around pilot authority. AI contributes primarily as a decision-support partner, scanning flight data for anomalies, providing fuel optimization suggestions, and monitoring weather patterns. Pilots remain central figures, trained to use AI as a supportive aide while retaining command responsibility.
In air traffic management, AI tools forecast congestion and suggest flow adjustments, but controllers issue all tactical clearances. This ensures that professional judgment and human accountability remain intact, even as algorithms quietly optimize sequencing and separation. Similarly, in maintenance, technicians consult AI-based diagnostic platforms, but only as one input into their broader engineering assessments. The authority to decide on repairs and sign off on airworthiness continues to rest with human professionals.
Training systems adapt to this environment by emphasizing cognitive flexibility, ethical reasoning, and teamwork. Programs incorporate AI literacy, but the overriding goal is to strengthen human capacity to interpret, question, and, when necessary, override automated advice. Passengers’ trust in visible human oversight leads airlines and regulators to keep human decision-making central.
The outcome is a stable path that preserves professional identity and trust. Yet it also carries vulnerabilities. Regions or carriers that adhere strictly to human-centric continuity may find themselves under competitive pressure from operators that embrace higher levels of automation, potentially exposing them to efficiency gaps.
Scenario C: Latent Obsolescence (Low AI Integration/Low Institutional Adaptability)
In this future, AI technologies continue to advance, but institutions fail to adapt. Regulatory frameworks remain outdated, training programs are underfunded, and labor negotiations stall. As a result, aviation organizations gain access to sophisticated tools but lack the governance capacity and workforce readiness to implement them effectively.
In cockpit operations, pilots still fly with legacy systems while AI assistants are bolted on in piecemeal fashion. Interfaces vary across aircraft, forcing crews to juggle different displays and procedures. Controllers face similar challenges: prototype AI systems capable of predicting sector congestion are available, but their outputs are not standardized or integrated across regions. This inconsistency makes handovers more error-prone and limits the value of the technology.
Maintenance practices highlight the dysfunction most clearly. Original equipment manufacturers deliver predictive alerts through proprietary platforms, warning that a component may fail weeks in advance. But airlines without data expertise or logistics coordination cannot act on these insights in time. Spare parts arrive late, and aircraft are grounded despite the availability of predictive intelligence. Technicians grow frustrated, as their mechanical expertise is sidelined by algorithmic warnings they cannot fully interpret or use. At airports, AI-based crowd management tools identify bottlenecks at boarding gates, but contractual fragmentation among ground handlers, airlines, and airport authorities prevents coordinated responses. Passengers see congestion worsen during disruptions, eroding confidence in aviation’s ability to modernize.
This scenario reflects institutional inertia, in which human roles persist but lose relevance as external technologies outpace the systems meant to embed them. Workers retain their positions but find their skills increasingly mismatched with their tools. The result is operational inefficiency: growing delays, rising maintenance costs, and strained safety margins under outdated procedures. The scenario demonstrates that technological availability alone is not enough; without institutional adaptation and deliberate investment in the workforce, aviation risks stagnating in a state where new capabilities fail in practice, producing systemic vulnerability.
Scenario D: Human Displacement (High AI Integration/Low Institutional Adaptability)
This future describes a rapid, technology-driven transition in which AI integration accelerates but institutional safeguards fail to keep pace. Competitive pressures push airlines and manufacturers to deploy advanced systems without adequate regulatory oversight, training infrastructures, or workforce protections. The result is a sector technologically advanced but socially unstable.
In the cockpit, single-pilot operations become increasingly common as AI copilots manage navigation, monitor systems, and even execute emergency procedures. Pilots are reduced to supervisory roles with limited authority, expected to oversee complex algorithmic decisions without the tools or training to intervene effectively. In air traffic management, AI allocates routes and separation standards in real time, while human controllers monitor multiple automated sectors simultaneously. Their role shifts to passive supervision, leaving them accountable for outcomes they cannot control.
Maintenance follows a similar trajectory. AI-driven schedulers determine when aircraft are taken out of service and what repairs are prioritized. Technicians find themselves following opaque algorithmic work orders, often constrained from applying their professional judgment. When errors occur—such as an unnecessary part replacement or a misdiagnosed fault—responsibility falls ambiguously between human staff and automated systems, creating liability disputes.
At airports, optimization platforms dictate gate assignments, passenger flows, and baggage handling with minimal human oversight. While efficiency increases during routine operations, disruption management falters. When weather events or system outages occur, there are too few empowered staff to resolve cascading failures, leaving passengers stranded and further eroding trust.
Regulators scramble to catch up, but fragmented governance means rules lag far behind practice. Labor unions protest, yet weakened institutional capacity prevents effective negotiation of reskilling or role redesign. Workers experience widespread displacement rather than transformation, fueling social resistance and industrial unrest.
Validation and expert assessment
To assess the robustness and strategic relevance of the scenario framework, we conducted a three-round Delphi process. In this study, the Delphi served both as a validation mechanism for scenario narratives and as a participatory foresight tool that elicited critical feedback, explored points of convergence and divergence, and identified underlying assumptions across professional domains [36].
Expert participants were purposively sampled to ensure diversity across domains—airlines, air traffic control, aircraft maintenance, airports, original equipment manufacturers, logistics providers, consulting, and regulatory agencies—and represented a balance of regional perspectives from North America, Europe, Asia, and the Middle East. All of them had over 20 years of professional experience.
Round 1: Internal logic and clarity
In the first round, experts evaluated the draft scenarios on conceptual clarity, internal consistency, and completeness. A structured online survey combined Likert-scale ratings with open-ended qualitative prompts, allowing participants to comment on specific elements of each scenario. Feedback emphasized refining institutional dynamics, training systems, and role realism. Overall, Strategic Co-evolution and Human-Centric Continuity were rated highly, while Latent Obsolescence and Human Displacement were critiqued for underdeveloped treatment of institutional inertia and labor dynamics (Table 1).
Table 1. Delphi round 1 – scenario assessment and refinements
| Scenario | Conceptual Clarity (Avg./5) | Internal Logic (Avg./5) | Missing Dimensions or Critiques | Expert Observations |
|---|---|---|---|---|
| A. Strategic Co-evolution | 4.6 | 4.8 | None flagged | “Highly structured and plausible interplay.” |
| B. Human-Centric Continuity | 4.4 | 4.5 | Add detail on training systems | “Stable but underplays global AI pressure.” |
| C. Latent Obsolescence | 4.0 | 3.9 | Weakness in explaining institutional inertia | “Interesting but less grounded in historical trends.” |
| D. Human Displacement | 3.7 | 3.5 | Lacks nuance on labor substitution and resistance | “Too dystopian — needs more variation in response.” |
Round 2: Plausibility and enabling conditions
The second round shifted focus to external plausibility and contextual realism. Participants reassessed the revised scenarios using an expanded set of evaluation criteria that included perceived likelihood, contextual realism, and alignment with current sectoral signals. Participants were also asked to identify enablers and inhibitors for each scenario, as well as potential policy or governance triggers that could influence trajectory shifts. Strategic Co-evolution emerged as the most likely pathway, cited as consistent with ongoing regulatory activity, cross-sector collaboration, and pilot initiatives in AI governance. Human-Centric Continuity was viewed as viable in conservative or risk-averse contexts, but potentially unstable under intensifying global automation pressure. Latent Obsolescence and Human Displacement were rated less plausible overall but considered useful as risk scenarios, highlighting vulnerabilities in governance-weak or deregulated contexts (Table 2).
Table 2. Delphi round 2 – expert assessment of scenario plausibility and enabling conditions
| Scenario | Plausibility (Avg./5) / Most Selected (%) | Expert Feedback & Enabling Conditions |
|---|---|---|
| A. Strategic Co-evolution | 4.7 / 58% | Viewed as the most realistic and desirable path. Enabled by regulatory foresight, coordinated training systems, and AI standardization. Supported by current institutional trends |
| B. Human-Centric Continuity | 3.8 / 25% | Seen as plausible in risk-averse or culturally conservative sectors. Enabled by stable regulation, modular AI tools, and strong public trust in human oversight. May face challenges under global automation pressure |
| C. Latent Obsolescence | 3.1 / 8% | Considered a credible risk in contexts with weak governance, institutional inertia, and underinvestment in skills. Highlights misalignment between available technologies and operational practice |
| D. Human Displacement | 2.6 / 8% | Deemed least plausible. Raised concerns over safety and labor disruption from unregulated, OEM-led automation. Could emerge in deregulated or poorly governed domains |
Round 3: Convergence and strategic relevance
The final round sought convergence without forcing consensus. Experts reassessed their earlier ratings in light of anonymized group feedback. As shown in Table 3, ratings were highly stable, with Strategic Co-evolution again dominant and Human-Centric Continuity consistently supported as a viable secondary path. In addition, experts assessed whether scenarios were conceptually credible and strategically relevant. Here, Strategic Co-evolution achieved the strongest endorsements on both dimensions (92%/67%), while Latent Obsolescence and Human Displacement were recognized less for plausibility than for their diagnostic value in illuminating governance risks. Importantly, experts underscored that scenarios should be seen not as fixed endpoints but as trajectories that may shift in response to institutional interventions, regulatory changes, or critical incidents.
Table 3. Delphi round 3 – expert plausibility ratings and convergence
| Scenario | Agreement Score (Avg./5) | % Rated as Conceptually Credible | % Endorsed as Strategically Relevant |
|---|---|---|---|
| A. Strategic Co-evolution | 4.8 | 92% | 67% |
| B. Human-Centric Continuity | 4.3 | 85% | 42% |
| C. Latent Obsolescence | 3.4 | 75% | 17% |
| D. Human Displacement | 3.0 | 70% | 25% |
Across all three rounds, the Delphi process validated the scenario set as both credible and useful. The results highlight institutional adaptability as the decisive variable: when governance mechanisms keep pace with technological integration, scenarios cluster toward Strategic Co-evolution or Human-Centric Continuity; when they lag, the risk trajectories of Latent Obsolescence and Human Displacement become more likely.
Discussion and implications
Implications for theory and methodology
This study contributes to futures scholarship by integrating evolutionary economics, innovation theory, and socio-technical foresight to examine how human roles are reconfigured in AI-integrated aviation. Rather than framing automation as a linear substitution process, the proposed framework highlights the recursive interactions among technological variation, institutional selection, and labor retention. This adaptation of the VSR logic challenges deterministic accounts of AI adoption and emphasizes the role of institutional mediation in shaping socio-technical transitions. Furthermore, the study grounds its foresight in both empirical transitions and expert anticipation, aligning with calls for futures research that is historically situated, systemically aware, and reflexively designed.
Methodologically, the three-round Delphi process illustrates how participatory approaches can enrich scenarios beyond validation. Expert feedback refined the narratives, surfaced institutional and labor dynamics that were underdeveloped, and emphasized the permeability of scenarios, showing that aviation systems may shift between futures depending on governance, investment, or societal response. This positions Delphi as more than a reliability check: it becomes a tool for reflexive, dialogical, policy-relevant foresight.
Finally, the framework positions human agency as an active force in shaping futures. Rather than treating workers, unions, and training institutions as passive recipients of AI, it presents them as co-designers of socio-technical trajectories. Their actions influence whether aviation evolves toward co-evolution, continuity, obsolescence, or displacement. Recognizing this agency challenges determinism and highlights institutional mediation in shaping technological pathways. For researchers, embedding foresight in evolutionary logics makes scenario work more rigorous and transferable across domains. For practitioners and policymakers, understanding institutional mediation and scenario permeability highlights key leverage points for anticipatory governance, workforce planning, and trust-building.
Implications for stakeholders
The scenario framework provides not only analytical insight but also concrete guidance for actors shaping the future of aviation. For regulators and standard-setters, the key recommendation is to move from certifying technologies as isolated artefacts to certifying AI functions with requirements for auditability, explainability, and post-event traceability. Early adoption of scenario-based regulatory sandboxes can allow new systems to be tested under simulated disruptions, producing shared learning and enabling more adaptive oversight.
For airlines and operators, foresight underscores the need to redesign roles before automation is widely deployed. Governance boards that bring together technical experts, frontline staff, and union representatives should decide which decisions remain human-owned, and under what conditions. Embedding AI-in-the-loop training in recurrent programs could ensure that pilots, controllers, and technicians practice interventions in cases where algorithms fail or diverge from human judgment.
For training institutions and academies, curricula must evolve to cultivate hybrid competencies that blend airmanship and engineering knowledge with data literacy, systems supervision, and ethical reasoning. Micro-credentials and modular training can enable career-long upskilling, while simulation platforms should expose learners to both routine AI support and failure modes where human arbitration is critical.
For labor organizations and professional bodies, anticipatory negotiation is central. Workers should be involved in co-designing hybrid roles and progression pathways to prevent role hollowing. Portable credentials and continuous learning funds can ensure that reskilling is not tied to a single employer but follows the worker across the sector. Embedding worker representation in AI governance bodies strengthens legitimacy and ensures that professional knowledge informs deployment standards.
Across scenarios, the priority is not whether AI is adopted quickly or slowly, but whether adoption is accompanied by institutional foresight. Strategic Co-evolution requires scaling anticipatory governance and training capacity; Human-Centric Continuity demands preservation of human decision rights alongside AI literacy; Latent Obsolescence calls for urgent institutional investment to avoid stagnation; and Human Displacement underscores the need for immediate guardrails and staged deployment. Taken together, these recommendations highlight that effective governance is less about restraining technology and more about designing resilient institutional pathways that allow human roles, trust, and safety to evolve in tandem with intelligent systems.
Implications for society and governance
The scenarios developed in this study reveal not only divergent institutional pathways but also second-order effects that can emerge when AI is integrated into aviation. These effects extend beyond operational performance or technical reliability to fundamental issues of legitimacy, accountability, and systemic vulnerability, all of which are central to anticipatory governance in complex socio-technical systems.
A consistent pattern across the scenario matrix is that when institutional adaptation lags technological deployment, trust erodes. In the Human Displacement and Latent Obsolescence scenarios, rapid innovation unfolds in a vacuum of coordination, creating opaque decision-making structures, unclear lines of accountability, and erosion of professional identity. These consequences accumulate slowly but decisively, undermining legitimacy even when safety performance is not immediately compromised.
Labor markets and professional identity are often the first domains to feel these pressures. Workers who see their responsibilities hollowed out without corresponding reskilling experience a decline in confidence in their profession. This can spill into wider effects such as industrial unrest, declining attractiveness of aviation careers, and skill shortages across transport and logistics sectors. Such disruptions to professional identity directly affect the resilience of the sector and its ability to adapt to further technological change.
The consequences may also reach into the structure of markets. As AI capabilities become embedded in proprietary platforms, the ownership of data and control over integrated services may concentrate power in a handful of global OEMs or technology providers. This creates dependencies that weaken the bargaining power of airlines, fragment oversight across jurisdictions, and make regulators more reactive than proactive. Power asymmetries in turn heighten vulnerabilities in safety and governance.
Issues of liability and insurance are closely linked to these shifts. As algorithms make more operational decisions, lines of accountability blur between manufacturers, operators, and regulators. Without harmonized evidentiary standards—such as logs, explainability, and traceability—responsibility gaps widen, insurance premiums rise, and legal disputes become protracted. The credibility of the system as a whole is at risk when accountability cannot be clearly demonstrated.
Finally, these societal effects are intensified by geopolitical divergence. While some jurisdictions accelerate automation, others adopt a more conservative approach. This creates the risk of regulatory arbitrage, complicates cross-border operations, and produces uneven safety standards. Divergent governance pathways therefore amplify the vulnerabilities already identified in labor markets, market concentration, liability frameworks, and public trust.
Implications for futures studies
This study offers several contributions to the field of futures studies by demonstrating how scenarios are constructed, validated, and interpreted within complex socio-technical systems. First, it operationalizes a co-evolutionary perspective—grounded in the VSR model—to explore the dynamic interplay between technological innovation, institutional response, and labor adaptation. By embedding scenarios in this logic, the study enhances the theoretical depth and explanatory power of scenario planning, moving beyond linear or deterministic approaches to AI.
Second, the research reinforces the importance of historical grounding in foresight. By drawing from three major transitions in aviation, the scenario set is anchored in observed socio-technical patterns rather than speculative abstraction. This historical layering strengthens scenario plausibility and demonstrates how futures research can integrate case-based reasoning to inform long-term exploration [42].
Third, the study reframes Delphi not only as a validation mechanism but as a reflexive foresight process. The multi-round design enabled sustained expert engagement that revealed contested assumptions, refined scenario narratives, and emphasized permeability between trajectories. This enriches scenarios as sensemaking devices that capture systemic tensions rather than as static end-states, and aligns with recent calls for more iterative and dialogical methods in futures research [40].
Fourth, the research positions human agency as a central element in constructing futures. By treating workers, unions, and institutions as active co-creators of technological trajectories, the study aligns with inclusive, participatory, and justice-oriented strands of futures research, emphasizing that anticipation must account for social negotiation as well as technical possibility [53, 54].
Finally, the framework demonstrates methodological transferability. Although developed in aviation, its layered architecture—combining evolutionary theory, empirical case studies, and participatory foresight—can be applied to other domains where AI intersects with institutional governance and human labor (e.g., healthcare, energy, defense, logistics). In this way, the study contributes not only sector-specific insights but also methodological guidance for futures research under conditions of deep uncertainty.
Conclusion
This study developed and tested a co-evolutionary scenario framework to examine how AI integration may reshape human roles in aviation. Drawing on insights from evolutionary economics, innovation studies, institutional theory, labor research, and futures studies, the framework models socio-technical transitions through recursive interactions among technological change, institutional adaptation, and labor dynamics. Rather than treating automation as a unidirectional process of substitution, the study conceptualizes human–AI collaboration as a relational transformation shaped by institutional capacity, regulatory foresight, and workforce adaptability.
Empirically, the framework was grounded in three historical case studies of AI-related transitions in aviation and further elaborated through four scenarios, validated and refined via a Delphi process with senior experts. The resulting scenario matrix, structured around the dual axes of AI Integration and Institutional Adaptability, yielded four plausible futures: Strategic Co-evolution, Human-Centric Continuity, Latent Obsolescence, and Human Displacement. Together, these scenarios illuminate the systemic consequences of institutional choices and highlight leverage points for shaping desirable human–AI futures.
The findings underscore that the trajectory of human–AI collaboration in aviation is not technologically predetermined. It depends on the coordinated evolution of training systems, regulatory regimes, and organizational routines. Institutions are not passive reactors to technological change but co-constructors of futures. This reorientation has profound implications for anticipatory governance, strategic workforce planning, and foresight practices in high-stakes sectors.
Beyond the aviation domain, the study contributes methodologically by integrating case-based reasoning, co-evolutionary theory, and expert-based foresight into a structured approach for analyzing socio-technical transitions under uncertainty. The framework also speaks to broader societal concerns, clarifying how second-order effects—from labor market disruption and liability disputes to market concentration and geopolitical divergence—shape the legitimacy and resilience of human–AI futures. It is applicable to other complex domains such as healthcare, energy, logistics, and defense, where intelligent systems simultaneously challenge and reshape human roles.
Limitations
This study has several limitations. First, while the selected case studies are analytically rich, they do not capture the full institutional heterogeneity of global aviation systems. Regional variations in regulation, labor markets, and technological readiness may yield alternative pathways. Second, the Delphi panel, although diverse and experienced, was limited in size (n = 16) and scope. Including a broader range of frontline personnel, operational managers, and training institutions could reveal additional insights into workforce adaptation. Third, scenario construction inevitably reflects researcher judgment in axis selection and narrative framing. Alternative logics may produce different scenario typologies. Finally, the analysis emphasizes qualitative processes and does not incorporate quantitative metrics such as performance indicators, cost–benefit ratios, or diffusion timelines. Future work could address these gaps with simulation-based stress testing or comparative analysis.
Future research directions
Several avenues for further research arise from this study. Cross-sectoral application would test the framework’s transferability to other high-reliability industries. Hybrid modeling approaches that combine scenario-based foresight with computational models (e.g., agent-based modeling, system dynamics) could provide greater granularity and explore second-order effects under varying policy configurations; a toy sketch of this idea follows below. Workplace-level research engaging frontline personnel in co-design processes could reveal micro-level frictions and informal adaptations often missed in top-down expert assessments. Comparative governance analysis across regulatory regimes (e.g., FAA, EASA, ICAO) would help map the geopolitical dimensions of anticipatory governance. Deeper inquiry is also needed into legal and ethical infrastructures, especially liability, explainability, and the design of trust-enhancing practices in AI-mediated systems. Finally, Artificial General Intelligence (AGI) should be integrated into futures research as an advanced "horizon technology" capable of flexible reasoning, transfer learning, and cross-domain problem-solving. Its emergence could amplify both the opportunities for advanced human–AI collaboration and the risks of large-scale workforce displacement.
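To make the hybrid-modeling direction more concrete, the sketch below pairs the scenario logic with a toy agent-based model in Python. It is a minimal, purely illustrative sketch under stated assumptions: the simulate function, its institutional_adaptability parameter, and all rates and thresholds (retraining probability, annual AI capability increment, displacement rule) are hypothetical choices made for exposition, not calibrated quantities or findings of this study.

```python
# Purely illustrative agent-based sketch of the hybrid-modeling idea:
# workers adapt (retrain) to rising AI capability, and institutional
# adaptability sets how fast retraining capacity grows. All parameters
# and dynamics are hypothetical assumptions for exposition only.
import random

def simulate(institutional_adaptability: float,
             n_workers: int = 1000,
             years: int = 20,
             seed: int = 42) -> dict:
    """Return final workforce shares: hybrid, at-risk, and displaced."""
    rng = random.Random(seed)
    # Each worker starts with a random skill level in [0, 1].
    skills = [rng.random() for _ in range(n_workers)]
    displaced = [False] * n_workers
    ai_capability = 0.2   # hypothetical starting AI task coverage
    retrain_rate = 0.05   # baseline annual chance of retraining

    for _ in range(years):
        ai_capability = min(1.0, ai_capability + 0.04)  # AI keeps improving
        # Adaptive institutions expand retraining capacity each year.
        retrain_rate = min(0.5, retrain_rate * (1 + institutional_adaptability))
        for i in range(n_workers):
            if displaced[i]:
                continue
            if rng.random() < retrain_rate:
                # Retraining lifts skills toward hybrid human-AI roles.
                skills[i] = min(1.0, skills[i] + 0.1)
            elif skills[i] < ai_capability and rng.random() < 0.1:
                displaced[i] = True  # outcompeted before adapting

    active = [s for s, d in zip(skills, displaced) if not d]
    return {
        "hybrid": sum(s >= ai_capability for s in active) / n_workers,
        "at_risk": sum(s < ai_capability for s in active) / n_workers,
        "displaced": sum(displaced) / n_workers,
    }

# Contrast an adaptive institutional regime with an inert one.
print("adaptive:", simulate(institutional_adaptability=0.15))
print("inert:   ", simulate(institutional_adaptability=0.0))
```

Contrasting the two runs illustrates the kind of second-order question such hybrid models could probe, for instance how quickly retraining capacity must expand for hybrid roles to outpace displacement under adaptive versus inert institutional regimes.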
Overall, this study advances the proposition that the future of human work under AI is not a matter of passive adaptation, but of active design. Co-evolutionary foresight offers a powerful lens through which to make this design anticipatory, inclusive, and responsive to the deep uncertainties of socio-technical transformations.
Acknowledgements
The authors would like to express their gratitude to the anonymous reviewers for their insightful comments and constructive suggestions. Their careful reading and thoughtful feedback substantially improved the overall quality of the original manuscript.
Authors’ contributions
FJNM designed the work, analysed the data, interpreted the data, drafted the work, approved the submitted version, and agreed to be personally accountable for the authors’ contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated and resolved. FPM analysed the data, interpreted the data, approved the submitted version, and agreed to be personally accountable for the authors’ contributions and to ensure that questions related to the accuracy or integrity of any part of the work, even ones in which the author was not personally involved, are appropriately investigated and resolved.
Funding
This action has been funded through the R&D activities programme with reference PHS-2024/PH-HUM-530 and acronym DiTeCaM-CM granted by the Community of Madrid through the Directorate-General for Research and Technological Innovation via Order 5694/2024.
Data availability
Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Air Line Pilots Association (2015). Remotely Piloted Aircraft Systems: Challenges for Safe Integration into Civil Airspace (ALPA White Paper). https://www.alpa.org/-/media/ALPA/Files/pdfs/news-events/white-papers/uas-white-paper.pdf
2. Ali, H; Pham, DT; Alam, S; Schultz, M; Li, MZ; Wang, Y; Itoh, E; Duong, VN. Human-AI hybrids in safety-critical systems: concept, definition and perspectives from air traffic management. Adv Eng Inform; 2025; 65: 103256. [DOI: https://dx.doi.org/10.1016/j.aei.2025.103256]
3. Amalberti, R. Automation in Aviation: A Human Factors Perspective. In Handbook of Aviation Human Factors; 1999; Lawrence Erlbaum Associates.
4. Amazon Prime Air (2015). Revising the Airspace Model for the Safe Integration of Small Unmanned Aircraft Systems. Amazon Prime Air. https://www.nasa.gov/wp-content/uploads/2024/04/amazon-revising-the-airspace-model-for-the-safe-integration-of-suas6-0.pdf?emrc=dca052.
5. Bernard D, Perello-March JR, Solis-Marcos I, Cooper M (2024). A User-Centered Ontology for Explainable Artificial Intelligence in Aviation. Proceedings of the 2nd International Conference on Cognitive Aircraft Systems ICCAS 88–94.
6. Bessen, J. Automation and jobs: when technology boosts employment. Econ Policy; 2019; 34.
7. Billings, CE (1997). Aviation Automation: The Search for a Human-Centered Approach. CRC Press. https://doi.org/10.1201/9781315137995.
8. Blundell, J; Harris, D. Designing augmented reality for future commercial aviation: a user-requirement analysis with commercial aviation pilots. Virtual Reality; 2023; 27.
9. Casner, SM; Geven, RW; Recker, MP; Schooler, JW. The retention of manual flying skills in the automated cockpit. Hum Factors; 2014; 56.
10. Civil Aviation Authority (2024). Unmanned Aircraft System Operations in UK Airspace – Policy and Guidance (CAP 722 | Ninth Edition Amendment 2). https://www.caa.co.uk/publication/download/21784
11. Cohn P, Green A, Langstaff M, Roller M (2017). Commercial drones are here: The future of unmanned aerial systems. McKinsey & Company. https://www.mckinsey.com/industries/logistics/our-insights/commercial-drones-are-here-the-future-of-unmanned-aerial-systems.
12. Daily J, Peterson J (2016). Predictive Maintenance: How Big Data Analysis Can Improve Maintenance. In Supply Chain Integration Challenges in Commercial Aerospace: A Comprehensive Perspective on the Aviation Value Chain (pp. 267–278). Springer.
13. de Oliveira Morais L, Krus P, Pereira L (2024). A Human-Centered Systems Engineering Approach for Integrating Artificial Intelligence in Aviation: A Review of AI Systems. 34th Congress of the International Council of the Aeronautical Sciences.
14. Degani, A; Wiener, EL. Procedures in complex systems: the airline cockpit. IEEE Transactions on Systems, Man, and Cybernetics-Part A: Systems and Humans; 1997; 27.
15. Deloitte (2017). Predictive Maintenance: Taking pro-active measures based on advanced data analytics to predict and avoid machine failure (Position Paper). Deloitte Analytics Institute. https://www.deloitte.com/content/dam/assets-zone2/de/de/docs/about/2024/Deloitte_Predictive-Maintenance_PositionPaper.pdf
16. DHL (2018). Unmanned Aerial Vehicles in Logistics: A DHL Perspective on Implications and Use Cases for the Logistics Industry. DHL Customer Solutions & Innovation. https://www.dhl.com/discover/content/dam/dhl/downloads/interim/full/dhl-trend-report-uav.pdf.
17. Downer, J. Trust and technology: the social foundations of aviation regulation. Br J Sociol; 2010; 61.
18. EASA (2023). Artificial Intelligence Roadmap 2.0: Human-centric approach to AI in aviation (Research & Innovation). EASA. https://www.easa.europa.eu/en/domains/research-innovation/ai
19. Emanuilov, I; Dheu, O. Flying high for AI? Perspectives on EASA’s roadmap for AI in aviation. Air Space Law; 2021; [DOI: https://dx.doi.org/10.54648/AILA2021001]
20. Felt U, Wynne B, Callon M, Gonçalves ME, Jasanoff S, Jepsen M, Joly PB, Konopasek Z, May S, Neubauer C (2007). Taking European knowledge society seriously. Luxembourg: DG for Research. EUR 22700.
21. Flores A, Paselk A, McAndrew I (2024). Advancing Perspectives: A Scoping Review of Artificial Intelligence Applications in Aviation Human Factors for Flight Crews. Human Factors in Design, Engineering, and Computing 159(159).
22. Geels, FW. Co-evolutionary and multi-level dynamics in transitions: the transformation of aviation systems and the shift from propeller to turbojet (1930–1970). Technovation; 2006; 26.
23. Gicquel L, Bartheye O, Fabre L (2024). AI in Flight: Advancing Aviation Safety Through Real-Time Monitoring of Pilots’ Neuropsychological States. ICCAS International Conference on Cognitive Aircraft Systems.
24. Gramopadhye, AK; Drury, CG; Watson, J; Johnson, WB; Kanki, B; Allen, J; Rankin, B; Taylor, J. Human factors in aviation maintenance: challenges for the future. Proc Hum Factors Ergon Soc Annu Meet; 2000; 44.
25. Gross CJ (2024). Technological Innovation and the Rise of Aviation, 1903–1941. Springer Nature.
26. Hanaran D, Angelo AC, McFarland JS, Wheeler BE (2024). Measuring willingness to fly onboard aircraft equipped with two pilots, a single pilot, and a single pilot with artificial intelligence. Proceedings of the IEMS 2024 Conference.
27. Helmreich RL, Merritt AC, Wilhelm JA (2009). The Evolution of Crew Resource Management Training in Commercial Aviation. In Dismukes RK (ed.), Human Error in Aviation (pp. 275–288). Routledge. https://doi.org/10.4324/9781315092898
28. IATA (2022). From Aircraft Health Monitoring to Aircraft Health Management White Paper on AHM (White Paper on AHM). IATA. https://www.iata.org/contentassets/fafa409c883d41198aeb87628c848851/ahm-wp-1sted-2022.pdf
29. Kabashkin, I; Perekrestov, V. Ecosystem of aviation maintenance: transition from aircraft health monitoring to health management based on IoT and AI synergy. Appl Sci; 2024; 14.
30. Kay A, McDonald N, O’Sullivan L (2025). Strategic Human Resource Management, Training and Design for Future Flight Operations. International Conference on Human-Computer Interaction 49–60.
31. Kim SY, Park MS (2022). Robot, AI and Service Automation (RAISA) in Airports: the case of South Korea. In 2022 IEEE/ACIS 7th International Conference on Big Data, Cloud Computing, and Data Science (BCD) (pp. 382-385). IEEE. https://doi.org/10.1109/BCD54882.2022.9900831.
32. Kirwan B, Venditti R, Giampaolo N, Sánchez MV (2024). A Human Centric Design Approach for Future Human-AI Teams in Aviation. Human Interaction and Emerging Technologies (IHIET 2024) 1(1).
33. Korentsides J, Keebler JR, Fausett CM, Patel SM, Lazzara EH (2024). Human-AI Teams in Aviation: Considerations from Human Factors and Team Science. J Aviat/Aerosp Educ Res 33(4): 7.
34. Laskowski J, Pytka J, Laskowska A, Tomilo P, Skowron Ł, Kozlowski E, Piatek R, Mamcarz P (2024). AI-Based Method of Air Traffic Controller Workload Assessment. 2024 11th International Workshop on Metrology for AeroSpace (MetroAeroSpace), 46–51.
35. Lertworawanich P, Pongsakornsathien N, Xie Y, Gardi A, Sabatini R (2021). Artificial Intelligence and Human-Machine Interactions for Stream-Based Air Traffic Flow Management. 32nd Congress of the International Council of the Aeronautical Sciences, ICAS 2021.
36. Linstone, HA; Turoff, M. The Delphi Method: Techniques and Applications; 2002; Addison-Wesley Educational Publishers Inc.
37. Lozano Tafur C, Orduy Rodríguez JE, Aldana Rodríguez D, Reinoso Pintor D (2025). Artificial Intelligence in the Aviation Operations: A State of the Art. Revista Ciencia y Poder Aéreo 20(1).
38. Lundberg J, Bång M, Johansson J, Cheaitou A, Josefsson B, Tahboub Z (2019). Human-in-the-loop AI: Requirements on future (unified) air traffic management systems. 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC) 1–9.
39. Malakis S, Baumgartner M, Berzina N, Laursen T, Smoker A, Poti A, Fabris G, Velotto S, Scala M, Kontogiannis T (2023). A Framework for Supporting Adaptive Human-AI Teaming in Air Traffic Control. International Conference on Human-Computer Interaction. 320–330.
40. Mangnus, AC; Oomen, J; Vervoort, JM; Hajer, MA. Futures literacy and the diversity of the future. Futures; 2021; 132, [DOI: https://dx.doi.org/10.1016/j.futures.2021.102793] 102793.
41. Metcalfe, JS. Evolutionary Economics and Creative Destruction; 1998; Routledge. [DOI: https://dx.doi.org/10.4324/9780203018927]
42. Miller R (2018a). Sensing and making-sense of Futures Literacy: Towards a Futures Literacy Framework (FLF). In Miller R (Ed.), Transforming the Future: Anticipation in the 21st century (pp. 15–50). Routledge.
43. Miller, R. Transforming the Future: Anticipation in the 21st century; 2018; Taylor & Francis. [DOI: https://dx.doi.org/10.4324/9781351048002]
44. Mobley, RK. An Introduction to Predictive Maintenance; 2002; 2nd ed. Elsevier. [DOI: https://dx.doi.org/10.1016/B978-0-7506-7531-4.X5000-3]
45. Motti VV (2017). Sources of Futures Studies from Foresight to Anticipation. In Poli R (ed.), Handbook of Anticipation: Theoretical and Applied Aspects of the Use of Future in Decision Making. Springer, Cham. https://doi.org/10.1007/978-3-319-31737-3_98-1
46. Mumtaz SN, Makhdoom TR, Hassan N, Malokani DKAK, tu Zehra F, Chandio SP (2022). Artificial intelligence and its impact on HRM functions of Pakistani airlines: evidence from moderated mediation model. J Positive School Psychol 6(4):150–157.
47. Nelson, RR; Winter, SG. An Evolutionary Theory of Economic Change; 1982; Cambridge, MA: Harvard University Press.
48. Parasuraman R, Byrne EA (2003). Automation and human performance in aviation. Principles and Practice of Aviation Psychology 311–356.
49. Pascarella D, Gigante G, Lanzi P, Spiller E, Fornaciari E (2025). Gaps and Challenges in Automation Assessment to Support Human-Centric Aviation Certification. International Conference on Human-Computer Interaction 105–119.
50. Peuaud A, Clerquin A, Alaverdov A (2025). Bias Influence on AI Accuracy: The Case of Air Traffic Controllers’ Experience. International Conference on Human-Computer Interaction 120–139.
51. Pham DT, Ali H, Fennedy K, Hsieh MH, Alam S, Duong V (2024). Human-AI hybrid paradigm for collaborative air traffic management systems. SESAR Innovation Days 2024.
52. Pilon RV (2023). Artificial Intelligence in Commercial Aviation: Use Cases and Emerging Strategies. Routledge. https://doi.org/10.4324/9781003018810
53. Poli R (2014). Anticipation: a new thread for the human and social sciences? Futuribili. Rivista Di Studi Sul Futuro e Di Previsione Sociale.
54. Poli R (2017). Introduction to Anticipation Studies. Berlin, Germany: Springer. https://doi.org/10.1007/978-3-319-63023-6
55. Rauhala A, Tuomela A, Leviäkangas P (2023). An overview of unmanned aircraft systems (UAS) governance and regulatory frameworks in the European Union (EU). Unmanned Aerial Systems in Agriculture 269–285.
56. Raza W, Renkhoff J, Ogirimah O, Bawa GK, Stansbury RS (2025). Advanced Air Mobility: Innovations, Applications, Challenges, and Future Potential. J Air Transport 1–18.
57. Rice GM, Snider D, Linnville S (2024). A methodology for using artificial intelligence (AI) to identify cognitive performance decrements in aviation operational environments. Aerospace Med Human Perform 95(8).
58. Rosa, AB; Kimpeler, S; Schirrmeister, E; Warnke, P. Participatory foresight and reflexive innovation: setting policy goals and developing strategies in a bottom-up, mission-oriented, sustainable way. Eur J Futures Res; 2021; 9.
59. Rutting, L; Vervoort, J; Mees, H; Driessen, P. Strengthening foresight for governance of social-ecological systems: an interdisciplinary perspective. Futures; 2022; 141, [DOI: https://dx.doi.org/10.1016/j.futures.2022.102988] 102988.
60. Sardar, Z. The namesake: futures; futures studies; futurology; futuristic; foresight—what’s in a name? Futures; 2010; 42.
61. Sarter, NB; Woods, DD. Pilot interaction with cockpit automation: operational experiences with the flight management system. Int J Aviat Psychol; 1992; 2.
62. Shmelova T, Sikirda Y (2020). Artificial Intelligence for Evaluating the Mental Workload of Air Traffic Controllers. In A. Realyvásquez-Vargas, K. Arredondo-Soto, G. Hernández-Escobedo, & J. González-Reséndiz (Eds.), Evaluating Mental Workload for Improved Workplace Performance (pp. 184-212). IGI Global Scientific Publishing. https://doi.org/10.4018/978-1-7998-1052-0.ch009
63. Sjöström Falk H (2024). Digitalisation and AI in air traffic control: balancing innovation with the human element. https://www.eurocontrol.int/article/digitalisation-and-ai-air-traffic-control-balancing-innovation-human-element
64. Sweet, W. The glass cockpit [flight deck automation]. IEEE Spectrum; 1995; 32.
65. Tyburzy L, Jameel M, Hunger R, Böhm J (2024). Empowering Human-AI Collaboration in Air Traffic Control through Smart Interaction Design. 2024 AIAA DATC/IEEE 43rd Digital Avionics Systems Conference (DASC) 1–9.
66. Van der Heijden, K. Scenarios: the art of strategic conversation; 2005; John Wiley & Sons.
67. Vervoort, J; Gupta, A. Anticipating climate futures in a 1.5 C era: the link between foresight and governance. Curr Opin Environ Sustain; 2018; 31, pp. 104-111. [DOI: https://dx.doi.org/10.1016/j.cosust.2018.01.004]
68. Voros, J. A generic foresight process framework. Foresight; 2003; 5.
69. Westin C, Klang KJ, Basjuka J, Söderholm G, Lundberg J, Bång M, Lundin Palmerius K, Boonsong S, Taraldsson Å, Fylkner G (2025). Human-AI Teaming in the Urban Air Mobility Coordinator Work Position: A Proof-of-Concept Design. International Conference on Human-Computer Interaction 256–275.
70. Wiener, EL; Curry, RE. Flight-deck automation: promises and problems. Ergonomics; 1980; 23.
71. Ziakkas D, Plioutsias A, Pechlivanis K (2022). Artificial intelligence in aviation decision making process. The transition from extended minimum crew operations to single pilot operations (SiPO). 13th AHFE International Conference on Artificial Intelligence and Social Computing. 101–107.
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the “License”).