Infusing artificial intelligence algorithms into production aerospace systems can be challenging due to costs, timelines, and a risk‐averse industry. We introduce the Onboard Artificial Intelligence Research (OnAIR) platform, an open‐source software pipeline and cognitive architecture tool that enables full life cycle AI research for on‐board intelligent systems. We begin with a description and user walk‐through of the OnAIR tool. Next, we describe four use cases of OnAIR for both research and deployed onboard applications, detailing their use of OnAIR and the benefits it provided to the development and function of each respective scenario. Lastly, we describe two upcoming planned deployments which will leverage OnAIR for crucial mission outcomes. We conclude with remarks on future work and goals for the forward progression of OnAIR as a tool to enable a larger AI and aerospace research community.
INTRODUCTION
The aerospace sector contains diverse research problems that provide fruitful domains for artificial intelligence (AI) research applications. For example, traversing remote planetary environments in the search for life necessitates autonomy due to the challenges associated with human-in-the-loop ground control stations, which introduce latency, proximity, and bandwidth limitations (National Academies of Sciences 2018). Similarly, reacting to space weather events on Mars in the quest for building remote habitable worlds will require on-board data processing and modeling, which is challenging to enable in space due to its unique computational constraints (National Research Council 2013). Enabling the integration of AI into aerospace systems will drastically transform what is possible in space, and the need for this intelligence has existed for quite some time. Applications such as intelligent distributed systems missions (e.g., swarms and constellations), onboard data mining, and autonomous guidance navigation and control in space systems all require contribution from AI research (for example, multi-agent systems, machine learning/computer vision, and reinforcement learning, respectively) (Dahl et al. 2023).
Despite the exciting possibilities for AI in space, the aerospace sector is traditionally conservative and risk-averse. Space research contains a high barrier to entry due to mission sensitivity, inaccessibility, and cost of flight missions, making the infusion of current low-maturation, low-assurance, and low-trust AI algorithms a risky endeavor. To this end, we present the On-Board Artificial Intelligence Research Platform (OnAIR), a Python-based software pipeline and cognitive architecture designed to unify and standardize AI research and development for space systems. OnAIR offers a robust and easy-to-use framework that reduces timelines/barriers of entry for AI researchers, expediting the integration of AI algorithms into space applications. Additionally, OnAIR is agnostic to the domain, application, agent and technology development stage (Figure 1). OnAIR has been used to facilitate numerous diverse research and mission applications at various stages of maturation, including two successfully deployed NASA missions as a tool to enable AI reasoning on board.
[IMAGE OMITTED. SEE PDF]
In this paper, we provide background on the OnAIR tool, a walk-through of its use, and a review of its successful deployments to a range of research projects and missions, all at varying stages of maturation and across disparate domains. Additionally, we describe how OnAIR has enabled diverse researchers with varying backgrounds to work together under one architecture to develop AI algorithms and tools for space systems. OnAIR is actively maintained and developed as part of an internal NASA project; community feedback and contributions are welcome.
RELATED WORK
The infusion of artificial intelligence algorithms into space has been ongoing since the 1990s, with early use in “Spike,” Deep Space 1, and Earth Observing 1 (Dahl et al. 2023). However, each of these missions employed its own custom architecture for AI infusion. More recently, there have been efforts to create standard flight software frameworks such as NASA's Core Flight System (cFS) and F-Prime (McComas et al. 2016; McComas 2012; Bocchino et al. 2018), both of which have supported major missions and both of which are open source. Although these architectures offer publish-subscribe capabilities, they are not explicitly engineered for AI research. Alternatively, the Space ROS environment offers features similar to the Robot Operating System (ROS) (Macenski et al. 2022), repurposed for use in space with greater emphasis on memory safety and timeliness (Probe et al. 2023). Implementing autonomous functions within these architectures is challenging due to the complexity of connecting numerous disparate components; an all-inclusive cognitive architecture for autonomous functions is more practical. To the best of our knowledge, OnAIR is the first cognitive architecture for space systems to be flown in space. As AI deployment in space increases, AI architectures will be essential to streamline processing and decision-making, collectively producing intelligent behavior rather than hard-wiring many individual components, which becomes burdensome or even infeasible in sufficiently large autonomous functions for spacecraft.
THE OnAIR COGNITIVE ARCHITECTURE
OnAIR was developed at NASA Goddard Space Flight Center (GSFC) to support the interoperation of diverse researchers to collectively enable autonomy for fleet mission types, under the Distributed Systems Mission (DSM) project (Gramling et al. 2022). The OnAIR tool was successful in combining the efforts of multi-disciplinary researchers with varying software engineering/AI experience across multiple development environments at various stages of maturation, discussed later in this paper. The primary objective of deploying OnAIR is to fuse independent systems at low cost to development teams, to allow for parallel development of those systems, and to provide the structure of data flow and timing to standardize operations across a scenario.
OnAIR is a plugin-based software pipeline and cognitive architecture that provides modular, interchangeable, user-designed components with an implementation that follows Russell and Norvig's (2010) sense-perceive-act structure of a rational agent. Sensing happens through a user-defined data adapter (Figure 2B). Perception takes place across four levels of abstraction meant to emulate the neurobiological premise of cognition (Figure 2C). Decision-making and actuation happen at the highest level of cognition, meant to emulate executive control reasoning of the human brain and to support ensemble-based algorithms. Each abstraction layer is represented by a plugin interface. See Table 1 for descriptions and examples of each abstraction layer.
TABLE 1 An overview of OnAIR's interface plugin types, with specific implementation examples.
| Interface plugin | Description | Example |
| --- | --- | --- |
| Knowledge representation | Knowledge representation plugins synthesize high-level knowledge from the raw environmental dataframes according to user-defined algorithms. | Planning domain definition language (PDDL) symbolic predicate descriptors (e.g., arm_lifted()) are synthesized from sub-symbolic robot end effector trajectory information. |
| Learners | Learner plugins receive both the high-level information generated by the knowledge representation plugins and the low-level dataframes. | Sub-symbolic feature data and symbolic labels generated by the knowledge representation layer train a vision transformer neural network. |
| Planners | Planner plugins receive high-level information from the knowledge representation and learner plugins for use in high-level planning, and can additionally append their recommendations to the dataframe. | A Task and Motion Planner (TAMP) uses state information and weather predictions to deduce science instrumentation decisions. |
| Complex reasoners | Complex reasoner plugins receive high-level outputs from all preceding plugins and combine them into an “executive function” decision-making process. | An ensemble decision-making approach is employed, where path planning from the planner layer and robot battery-level prediction from the learner layer are combined to make optimized search-and-rescue locomotion decisions. |
[IMAGE OMITTED. SEE PDF]
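The layered flow summarized in Table 1 can be sketched as a minimal Python loop. Everything below is illustrative: EchoPlugin and run_cycle are hypothetical stand-ins, not OnAIR classes, and only mirror the update/render_reasoning contract and layer ordering described in the text.

```python
class EchoPlugin:
    """Stand-in plugin: remembers the last frame it saw and reports it."""
    def __init__(self, name):
        self.name = name
        self.last_frame = None

    def update(self, frame):
        self.last_frame = frame

    def render_reasoning(self):
        return {self.name: self.last_frame}


def run_cycle(frame, knowledge_reps, learners, planners, reasoners):
    """One cognitive cycle: every layer, in order, sees the frame and
    contributes its high-level reasoning to a shared dictionary."""
    high_level = {}
    for layer in (knowledge_reps, learners, planners, reasoners):
        for plugin in layer:
            plugin.update(frame)
        for plugin in layer:
            high_level.update(plugin.render_reasoning())
    return high_level
```

In OnAIR proper, each layer also receives the reasoning of the layers before it; this sketch only conveys the fixed ordering from knowledge representation through complex reasoning.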
Benefit of using OnAIR
OnAIR's role in both research and deployment scenarios is to enable integration when development requires bringing parallel systems together to accomplish holistic tasks. In the following research and deployment sections, the systems discussed consist of independent components attached to OnAIR as plugins. OnAIR provides the structured, regulated data flow required for sequential processing and inter-plugin communication, mirroring cognitive flow and enabling the independent components to form a full functionality loop. This binding of disparate systems reduces development, deployment, and maintenance costs and generalizes across a range of tasks, scenario fidelities, and problem complexities. OnAIR's ease of use is demonstrated by its adoption by teams accomplishing research, production, and implementation tasks across a range of environments and timescales, including applications in opportunistic science discovery (Theiling et al. 2022, 2024), mission resilience (Gizzi et al. 2022a; Staley et al. 2023), creative problem-solving in robots (Gizzi et al. 2022b), and intelligent distributed systems. In the “Research Applications of OnAIR,” “Deployed Applications of OnAIR,” and “Planned Deployments of OnAIR” sections, we review multiple applications in which OnAIR has facilitated, or will facilitate, research and development for projects at varying maturation levels in different domains, demonstrating its utility.
How to use OnAIR
In this section, we describe how to use the OnAIR platform, shown in Figure 2.
- Configurations: The user begins by specifying which data adapter will be used for the target environment. For example, to load raw data from a .csv file, the user specifies the data file path and the csv adapter within the provided config.ini file. The data adapter is the main way of integrating the OnAIR pipeline with external tools and workflows; at the time of writing, the open-source repository provides three example adapters: csv, Redis, and cFS.
- Data Pipeline: At runtime, data from the environment is piped into OnAIR through the specified data adapter (Figure 2A,B). The adapter processes raw environmental data into a DataSource object, which buffers the processed data as a list of frames, where each frame is a fixed-size array of heterogeneous data.
- Cognitive Architecture: The cognitive architecture of OnAIR senses information about the environment through the DataSource object, which regularly renders frames from its buffer. OnAIR contains four main plugin interfaces, shown in Figure 2C and described in Table 1. Each plugin interface can house any number of user-defined Python plugin modules. OnAIR requires all plugins to implement an update function (which receives a frame) and a render_reasoning function.
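As a concrete illustration of the Configurations step above, a minimal config.ini for the csv adapter might look like the sketch below. The section and key names are assumptions for illustration; the authoritative schema is given by the example configuration files in the OnAIR repository.

```ini
; Illustrative OnAIR configuration sketch -- key names are assumptions;
; consult the example config.ini files shipped with OnAIR for the
; exact schema.
[DEFAULT]
TelemetryFilePath = data/raw_telemetry/
TelemetryFile = example_telemetry.csv
; Select the csv data adapter as the environment interface
DataSourceFile = onair/data_handling/csv_parser.py
```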
The full OnAIR stack is open source, allowing for the construction of DataSources and additional plugins as required. Documentation is extensive, including high-level architecture design, component descriptions, plugin development, unit testing, and full end-to-end examples. To date, OnAIR has reached roughly 30 users and open-source contributors spanning both internal and external domains. Internal and community collaborators have configured OnAIR to run in concert with various simulators (Gazebo, PyBullet, JMAVSim, AirSim), technologies (Robot Operating System, PX4, I2C, MAVLink, R, Docker), and embodied agents (Turtlebots, ModalAI Starlings, and custom drones), as demonstrated in Figure 3.
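A user plugin, as described above, is a small Python class exposing update and render_reasoning. The sketch below is a free-standing illustration of that contract only; real OnAIR plugins subclass a base class provided by the repository, and the dictionary returned here is an arbitrary example, not a required format.

```python
class RollingMeanPlugin:
    """Illustrative OnAIR-style plugin: tracks a running mean of one
    telemetry channel and reports whether the latest value deviates
    from it. The constructor signature and return format are
    assumptions; only the update/render_reasoning contract mirrors
    the text."""

    def __init__(self, channel_index, threshold=3.0):
        self.channel_index = channel_index
        self.threshold = threshold
        self.values = []

    def update(self, frame):
        # frame is a fixed-size array of heterogeneous telemetry data
        self.values.append(float(frame[self.channel_index]))

    def render_reasoning(self):
        if not self.values:
            return {"status": "no data"}
        mean = sum(self.values) / len(self.values)
        deviation = abs(self.values[-1] - mean)
        return {"mean": mean, "deviates": deviation > self.threshold}
```

Because each plugin only touches the frame and its own state, plugins like this can be developed and unit-tested independently before being wired into the pipeline, which is the parallel-development property the paper emphasizes.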
[IMAGE OMITTED. SEE PDF]
RESEARCH APPLICATIONS OF OnAIR
OnAIR has been used as a prototyping research tool for numerous early-stage research applications with NASA and in collaboration with university and industry partners. This section contains examples of OnAIR's uses in autonomy research across multiple institutions and disciplines to enable and streamline the development process.
Crop health and contaminant detection
Researchers from NASA Goddard Space Flight Center, Tufts University, University of Vermont, and Massachusetts Institute of Technology used OnAIR to develop a framework for identifying and characterizing saliency in agricultural data, detecting unhealthy soil using a twin network, and classifying the type of deviation; see Figure 4 for an example case. The system was built from the ground up within OnAIR. Researchers built out unique features to enable cross-network communication between agents (and their OnAIR instances) geolocated near their three campuses. These systems were built around OnAIR's data ingestion and were easy to integrate using the compartmentalized data source component. For algorithmic development, OnAIR was able to switch easily between data fidelity levels, accepting data from stock images, simulations, or local Raspberry Pi sensors, with no changes required to follow-on plugins. Agents in heterogeneous environments could additionally communicate seamlessly (e.g., a sim agent and a Pi sensor agent). OnAIR facilitated distributed agent communication and data source configuration to accept knowledge synthesized by other agents. Algorithms for data processing, saliency detection, and cross-communication were developed and integrated in parallel by different teams as plugins, and the structured sequential dataflow forced accounting for each translation and reasoning step in the detection pipeline.
[IMAGE OMITTED. SEE PDF]
Mission operations and biosignature detection
A collaboration between Aurora Engineering, the University of Tulsa, and NASA Goddard Space Flight Center (Theiling et al. 2022, 2024; Williams et al. 2023) is currently making use of OnAIR as a mission operations research tool, with OnAIR as the focal point connecting several otherwise disparate and difficult-to-coordinate systems. The team is working on a proposed use case around Saturn's moon Enceladus, involving a constellation of orbiters studying the plume ejecta above the surface using mass spectrometers (Figure 5). Several instances of OnAIR run on Raspberry Pis, each simulating an orbiter; prepackaged navigational and science telemetry from the simulated mission run is fed to each OnAIR instance through static data sources and processed by onboard algorithms. OnAIR handles each agent's datastream and integrates the backend of all simulated systems (simulated orbiter trajectories and telemetry, mass spectra collection) with the onboard autonomy processes, which classify mass spectra, detect possible biosignatures, and communicate with other agents. The classification code, developed in R, was integrated with OnAIR through a single plugin wrapping its operations in Python. Agents communicate through a Redis server, which mocks the more complex in situ protocols to speed development and iteration and is supported natively in OnAIR. OnAIR unites the telemetry processing, machine learning classification, and communication systems that were developed separately by each user group, enabling rapid integration and structured data flow across agents with minimal modification.
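The inter-agent messaging pattern that the team mocks with a Redis server can be illustrated with a minimal in-process publish/subscribe bus. MockBus below is a pure-Python stand-in written for this illustration; the mission itself uses an actual Redis server through OnAIR's Redis adapter.

```python
from collections import defaultdict


class MockBus:
    """In-process stand-in for the Redis pub/sub channel the simulated
    orbiters use. Each agent subscribes an inbox to a channel; any
    agent may publish biosignature detections to the constellation."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, channel, inbox):
        self.subscribers[channel].append(inbox)

    def publish(self, channel, message):
        for inbox in self.subscribers[channel]:
            inbox.append(message)


# Two orbiters listen on a shared detections channel; a third publishes.
bus = MockBus()
orbiter_a_inbox, orbiter_b_inbox = [], []
bus.subscribe("detections", orbiter_a_inbox)
bus.subscribe("detections", orbiter_b_inbox)
bus.publish("detections", {"agent": "orbiter_c", "biosignature": True})
```

Swapping this stand-in for a real Redis client changes only the data adapter, which is the substitution the mission relies on when moving between development and higher-fidelity protocols.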
[IMAGE OMITTED. SEE PDF]
DEPLOYED APPLICATIONS OF OnAIR
OnAIR has been used as a prototyping research tool and deployment architecture for on-board artificial intelligence algorithms for two NASA research missions, described below. This section describes each mission, its instantiation of OnAIR, and how OnAIR streamlined the development process and enabled onboard AI.
Network for assessment of methane activity in space and terrestrial environments
The Network for Assessment of Methane Activity in Space and Terrestrial Environments (NAMASTE) is a 3-year (2022–2025) research campaign led by Dr. Mahmooda Sultana (NASA GSFC) for characterizing the methane distributions in the permafrost sites of Alaska (funded by Planetary Science and Technology Analog Research (PSTAR)). Due to the challenging terrain, autonomous drones are deployed to navigate the landscape and perform methane measurements (see Figure 6A–C). Intelligent decision-making is required among a fleet of drones to identify areas of high scientific interest while minimizing redundant or otherwise uninteresting measurements and maximizing science-data throughput and understanding in a limited timeline. OnAIR aided research, development, and deployment by providing a standard for data ingestion and interaction between the many subsystems involved in the NAMASTE software architecture, including the autonomy planning and control stack.
[IMAGE OMITTED. SEE PDF]
OnAIR setup
OnAIR plugins were developed for multiple stages, most notably for science data processing, drone commanding and telemetry, and path planning, which together constitute the main intelligent sense-and-seek functionality of the mission. The science data processing plugin (Figure 6C, “Science”) ingested the complex raw sensor readings embedded in the OnAIR dataframe using a Redis adapter and output a single methane concentration value. The drone commanding and telemetry plugins use MAVLink, a standard protocol for communicating with unmanned vehicles, to control the UAV and add its GPS information to the dataframe (Figure 6C, “GPS to loc”). The path planning plugin was developed to reason about the methane concentration value, GPS information, and current mission status within the dataframe, outputting the next target location for the drone by leveraging standard grid- or gradient-based search algorithms that consider the extent and direction of maximum methane concentration (Figure 6C, “EDP” and “Actuation”).
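The gradient-style search behavior can be sketched as follows: given recent georeferenced methane readings, step from the strongest reading in the direction of increasing concentration. This is a hypothetical sketch of the general idea; next_waypoint, its units, and its step size are illustrative and not the mission planner's actual algorithm.

```python
def next_waypoint(readings, step=10.0):
    """Choose the next target location by stepping from the strongest
    reading toward increasing methane concentration.

    readings: list of (x, y, concentration) tuples from the science
    data processing stage, in local metres / ppm (illustrative units).
    Requires at least two readings to estimate a direction.
    """
    # Anchor at the strongest reading seen so far.
    best = max(readings, key=lambda r: r[2])
    # Estimate the rising direction from the two strongest readings.
    second = max((r for r in readings if r is not best), key=lambda r: r[2])
    dx, dy = best[0] - second[0], best[1] - second[1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # Step beyond the current maximum along the rising direction.
    return (best[0] + step * dx / norm, best[1] + step * dy / norm)
```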
Results
The NAMASTE mission is currently ongoing, with the second field campaign completed in the summer of 2024. OnAIR was an integral factor in accomplishing the research goals to date. OnAIR allowed the diverse, distributed team, with members of varying experience levels, to achieve interoperability and software/hardware maturation on an aggressive agile timeline, reaching interoperability within 2 months of a 3-year project. OnAIR allowed separate teams to continue parallelized development as various portions of the campaign matured at different times due to the diversity of project contributions. Additionally, OnAIR enabled the development of all software interfaces, including the custom methane sensor interface, the sUAS interface, the radio communication interface, and all AI module interfaces.
SpaceCube edge-node intelligent collaboration
The SpaceCube Edge-Node Intelligent Collaboration (SCENIC) (Geist et al. 2023) payload was deployed on Space Test Program—Houston 9, a satellite tethered to the International Space Station (ISS) that launched in 2023 (led by Dr. Christopher Wilson, funded by the Space Force) (Figure 6E–G). At the conclusion of the primary mission, the satellite hardware was made available for additional experiments, and OnAIR was chosen for an onboard AI demonstration. This posed several challenges, including (I) a 6-month timeline with a small team, (II) a resource-constrained processor with a specialized instruction set architecture (an FPGA-based platform running at 100 MHz), and (III) the need to integrate with the existing publish-subscribe flight software system (cFS), written in the C language. For brevity, we refer interested readers to (McComas 2012; McComas et al. 2016) for a full account of the cFS architecture.
OnAIR setup
The short timeline forced a minimal experiment that would demonstrate OnAIR's functionality in a low-level autonomous function. Two plugins were developed to support this goal: a Kalman filter operating over all incoming telemetry, and a plugin to write the dataframe and Kalman filter residuals to a file. The limited computing power of the 100 MHz processor constrained which algorithms could be implemented, but it proved sufficient to handle the performance overhead introduced by OnAIR and the Kalman filter implementation.
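A residual-producing Kalman filter of the kind described can be sketched for a single scalar telemetry channel as below. The constant-value process model and the noise constants are illustrative assumptions, not the flight plugin's configuration.

```python
class ScalarKalman:
    """1-D Kalman filter tracking a constant-value model of one
    telemetry point; the residual (measurement minus prediction) is
    the quantity the SCENIC experiment logged to file. Process noise
    q and measurement noise r are illustrative, not flight values."""

    def __init__(self, q=1e-4, r=0.1):
        self.x = None   # state estimate
        self.p = 1.0    # estimate covariance
        self.q, self.r = q, r

    def step(self, z):
        """Ingest one measurement z and return the residual."""
        if self.x is None:               # initialize on first sample
            self.x = z
            return 0.0
        self.p += self.q                 # predict (constant model)
        residual = z - self.x
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * residual           # update estimate
        self.p *= (1.0 - k)
        return residual
```

A flight plugin would run one such filter per telemetry point in its update function and expose the residuals through render_reasoning.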
Data was ingested by OnAIR through the cFS Software Bus Network (SBN), a client library to connect to the SBN, and the SBN data adapter available in the open-source OnAIR release. The SBN data adapter subscribed to an existing telemetry packet that consisted of forty telemetry points and was sent at a rate of 0.25 Hz. The cFS to OnAIR data pipeline had been developed as part of a previous demonstration but required updates to work with the cFS version Caelum and the latest version of OnAIR.
Results
OnAIR and the required flight software updates were successfully uploaded to SCENIC. The onboard processor was rebooted, and OnAIR successfully launched. OnAIR periodically wrote the results to a CSV file in a directory that was automatically downlinked and later reconstructed on the ground. OnAIR ran without error for 4 days alongside cFS, at which point the experiment was stopped in favor of other mission priorities. The SCENIC/OnAIR experiment was a success: a small team was able to implement plugins that process live spacecraft telemetry in a short amount of time. OnAIR plugin development was the easiest part of the project: more time was spent adapting to the FPGA instruction set and integrating with the SCENIC hardware. Using OnAIR made it possible to write, integrate, and test these plugins rapidly and independently, first using historical data through the csv data source and eventually with live data from cFS running on a SCENIC laboratory testing environment. OnAIR made it possible to experiment in such a short timeframe, facilitating rapid reuse of components and easy prototyping capabilities. OnAIR proved instrumental in the rapid development of an autonomous function, enabling accelerated development time compared with traditional pipelines that leverage the C coding language.
PLANNED DEPLOYMENTS OF OnAIR
OnAIR is currently being used in multiple ongoing research and development efforts across NASA, under the GSFC Adaptable Science and Technology for Responsive Autonomy (ASTRA) project, led by Dr. Bethany Theiling, Dr. James Marshall, Alan Gibson, and Dr. Evana Gizzi. ASTRA is focused on developing intelligent extensible mission architectures (IEMAs) for Earth and space systems, specifically targeting software standards. Broadly, IEMAs enable high-level modularity and reconfiguration of mission assets, both within a mission and across disparate missions (Gizzi et al. 2025). The flexibility enabled by IEMAs would help realize novel mission formats, such as incremental mission types and the coupling of disparate missions (see Figure 7).
[IMAGE OMITTED. SEE PDF]
OnAIR will be used in two planned deployed IEMA missions, described next. The first mission uses a combination of drone and human-operated science sensors in an environmentally abundant field campaign. The second mission uses machine learning on board a small satellite to perform anomaly handling. Both experiments use a significant number of plug-ins, highlighting the value of using cognitive architectures like OnAIR for deployed autonomy applications in the real world. Some details of each mission have been omitted for competition sensitivity.
Drone field campaign
OnAIR will be used to support intelligent decision-making in an ASTRA-led field campaign, planned for August 2025. The campaign will target opportunistic science discovery at a domestic field site, which will serve as an analogue for missions deployed for science discovery in complex planetary environments (led by scientist Dr. Bethany Theiling). The selected field site consists of a lake, a freshwater reef, and a lakeshore, which provides a varied ecosystem needed for comprehensive testing. The ASTRA team will use three agents in the field experiment—two ModalAI Starling 2 Max drones outfitted with low-cost commercial sensors and one human-in-the-loop “terranaut” carrying a hyperspectral imager. Each ModalAI Starling sUAS will be deployed with different mission objectives. One drone will be used for exploration and learning (“explorer”), and the other will be used for targeted mapping (“supporter”) (see Figure 8A,B).
[IMAGE OMITTED. SEE PDF]
OnAIR Setup
Each agent will be running an individual instance of OnAIR, shown in Figure 8C. Drone telemetry, onboard sensor data, and received inter-agent messages are pulled through a Redis adapter into Knowledge Rep plugins for preprocessing, using the Knowledge Acquisition and Synthesis Tool (KAST, developed by Aurora Engineering) and the Fleet Knowledge Rep (FleetKR). Next, this packaged sensor data is passed to Learner plugins: the anomaly detection (AD) and spectral imagery classification (SIC) plugins use machine learning methods to determine whether a salient science event exists in the data stream. The data is then passed to Planner plugins: the objective-based AI (OBAI) plugin determines whether a detected anomaly warrants additional sampling measurement, in which case the Fast Downward (FD) classical planner generates a new action sequence. Finally, the Complex Reasoner plugins make final action decisions. First, requests from other agents in the fleet are processed through the Fleet Complex Reasoner (FleetCR) plugin. Lastly, the Remote Capability-Auctioning Planner (ReCAP) plugin combines insights from all previous plugins to determine a mission action, in the form of a drone movement or a sampling action, using a combination of behavior trees and an auctioning system.
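The auctioning component attributed to ReCAP above can be illustrated with a minimal first-price task auction, where each agent bids its estimated cost and the lowest bidder wins. run_auction is a generic sketch of the pattern, not the ReCAP implementation.

```python
def run_auction(task, bids):
    """Award a task to the fleet agent with the lowest bid.

    bids: mapping of agent name -> estimated cost (e.g., flight time
    plus battery margin). Purely illustrative of the auction pattern
    the text attributes to ReCAP.
    """
    if not bids:
        return None  # no agent can take the task this cycle
    winner = min(bids, key=bids.get)
    return {"task": task, "winner": winner, "cost": bids[winner]}
```

In the fleet context, each agent's bid would be derived from its own plugin outputs (battery prediction, current plan), and the FleetCR plugin would carry the winning assignment back over the inter-agent channel.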
R5 spacecraft experiment
In the fall of 2025, OnAIR will be used in an on-board experiment (see Figure 8D–F) co-led by three teams across NASA (Goddard Space Flight Center - GSFC, Johnson Space Center - JSC, and Ames Research Center - ARC). The experiment involves on-board anomaly detection and follow-up decision-making in star tracker imagery, using two OnAIR instances running distinct anomaly detection algorithms (OnAIR-1 and OnAIR-2). Each OnAIR instance will run as a separate containerized process, using ARC's Opportunistic Software Experiments for Spacecraft Autonomy Testbeds (OSE-SAT). This experiment will be hosted on a small satellite within JSC's Realizing Rapid, Reduced-cost high-Risk Research (R5) satellite fleet (Pedrotty et al. 2023).
OnAIR setup
Each OnAIR instance runs a nearly identical set of plugins, detailed in Figure 8F. Images are ingested into OnAIR through a custom flight adapter and pre-processed by two Knowledge Rep plugins—the Fleet Knowledge Rep (FleetKR) and the Knowledge Acquisition and Synthesis Tool (KAST). The FleetKR plugin processes external agent action requests, and the KAST plugin extracts symbolic data from raw inputs. Next, the data is processed by Learner plugins, which classify the star tracker image as either nominal or off-nominal. In this case, the two instances of OnAIR run unique plugins: OnAIR-1 uses a Convolutional Neural Network (CNN) for classification, and OnAIR-2 uses a custom Twin Neural Network (TNN) (developed by Hayley Owens). The classification result is passed to the Consensus plugin at the Complex Reasoner level, which determines whether the classification should be sent through the Fleet Complex Reasoner (FleetCR) plugin to the other OnAIR instance for confirmation. If the two OnAIR instances (CNN and TNN) reach consensus that an anomaly has been detected, a downlink command is queued for the next ground station communication window.
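The consensus rule described above can be sketched in a few lines; consensus_downlink and its message format are illustrative stand-ins for the Consensus plugin's logic, not the flight code.

```python
def consensus_downlink(local_label, remote_label, queue):
    """Queue a downlink command only when both OnAIR instances
    (e.g., the CNN- and TNN-based classifiers) agree the star
    tracker image is off-nominal. Illustrative sketch of the
    Consensus plugin's decision rule; labels and the command
    dictionary are assumed formats."""
    agree_anomaly = (local_label == "off-nominal" and
                     remote_label == "off-nominal")
    if agree_anomaly:
        queue.append({"cmd": "downlink", "reason": "anomaly-consensus"})
    return agree_anomaly
```

Requiring agreement between two dissimilar classifiers is a simple guard against a single model's false positives consuming the limited downlink budget.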
CONCLUSION AND FUTURE WORK
In conclusion, the On-Board Artificial Intelligence Research Platform has proven to be an invaluable tool for autonomy research and development. The presented applications of OnAIR demonstrate its utility and versatility across multi-domain, multi-stage, and multi-organizational research. The design choices of OnAIR—including its modularity, plugin-based architecture, and Python implementation—have significantly reduced the barriers to entry for deploying autonomy solutions in real-world applications. In these instances, OnAIR facilitated seamless integration of disparate systems, enabled rapid prototyping, and accelerated the development of complex autonomous functions, even under severe resource constraints and tight deadlines.
The OnAIR development team has continued feature development and maintenance of OnAIR, with expanded contributions from the public research community. OnAIR is currently in the research plans for upcoming missions in snow hydrology, space weather, satellite resilience, remote sensing (spectroscopy), exoplanet detection, and more (all at various stages of maturation in their research) to support reasoning in drones, rovers, ground controller stations, and satellites.
Due to its open-source availability, OnAIR has been organically used by the greater research community (industry and academic), and we plan on continuing to encourage this use and contribution. In future work, we will host regular hackathons and information sessions to enable the general research community to engage, use, and contribute to the code base. We are excited to see that OnAIR has served as a “bridge” tool to ease AI researchers into the aerospace domain and to increase AI literacy for aerospace researchers.
ACKNOWLEDGMENTS
We would like to acknowledge the following interns who have contributed to OnAIR (and its earlier iterations): Jeffrey St. Jean, Nicholas Pellegrino, Gabriel Rasskin, James Staley, William Zhang, and Charles Zhao. Thank you to the contributors to the OnAIR open source repository GitHub. Thank you to the Goddard Space Flight Center Internal Research and Development (IRAD) program and the Distributed Systems Missions project for supporting this work. Thank you to all mission deployment teams for their contributions to this work.
CONFLICT OF INTEREST STATEMENT
The authors declare no conflicts of interest.
Bocchino, Robert, Timothy Canham, Garth Watney, Leonard Reder, and Jeffrey Levison. 2018. “F Prime: An Open‐Source Framework for Small‐Scale Flight Software Systems.” The 2018 Small Satellite Conference.
Chase, Timothy, Justin Goodwill, Karthik Dantu, and Christopher Wilson. 2024. “Profiling Vision‐Based Deep Learning Architectures on NASA SpaceCube Platforms.” In 2024 IEEE Aerospace Conference. 1–16. IEEE. https://doi.org/10.1109/AERO58975.2024.10521096.
Dahl, Mary, Christine Page, Kerri Cahoy, and Evana Gizzi. 2023. “Developing Intelligent Space Systems: A Survey and Rubric for Future Missions.” In 2023 Small Satellite Conference. Utah State University.
Geist, Alessandro, Gary Crum, Cody Brewer, Dennis Afanasev, Sebastian Sabogal, et al. 2023. “NASA Spacecube Next‐Generation Artificial‐Intelligence Computing for STP‐H9‐SCENIC on ISS.” In 37th Annual Small Satellite Conference, Logan, UT: AIAA/USU.
Gizzi, Evana, Hayley Owens, Nicholas Pellegrino, Christopher Trombley, James Marshall, et al. 2022. “Autonomous System‐Level Fault Diagnosis in Satellites using Housekeeping Telemetry.” In Small Satellite Conference.
Gizzi, Evana, Lakshmi Nair, Sonia Chernova, and Jivko Sinapov. 2022. “Creative Problem Solving in Artificially Intelligent agents: A Survey and Framework.” Journal of Artificial Intelligence Research 75: 857–911.
Gizzi, Evana, Alan Gibson, James Marshall, Shannon Bull, Timothy Chase Jr, et al. 2025. “Standards and Schematics for Intelligent Extensible Mission Architectures in Space.” In Small Satellite Conference. Utah State University.
Gramling, Cheryl, Gary Crum, Matthew Dosberg, Evana Gizzi, Christopher Green, et al. 2022. “NASA's Goddard Space Flight Center's Distributed Systems Missions Architecture.” In International Astronautical Congress 2022. International Astronautical Federation.
Krebs, Gunter D. R5 S2, S4. Gunter's Space Page. Retrieved July 25, 2025, from https://space.skyrocket.de/doc_sdat/r5‐s2.htm
Macenski, Steven, Tully Foote, Brian Gerkey, Chris Lalancette, and William Woodall. 2022. “Robot Operating System 2: Design, Architecture, and Uses in the Wild.” Science Robotics 7(66): eabm6074.
McComas, David. 2012. “NASA/GSFC's Flight Software Core Flight System.” In Flight Software Workshop, GSFC.CPR.7525.2013.
McComas, David, Jonathan Wilmot, and Alan Cudmore. 2016. “The Core Flight System (cFS) Community: Providing Low Cost Solutions for Small Spacecraft.” In Annual AIAA/USU Conference on Small Satellites, GSFC‐E‐DAA‐TN33786. NASA Goddard Space Flight Center.
Office of the Chief Technologist NASA. 2020. 2020 NASA Technology Taxonomy. Washington, DC: National Aeronautics and Space Administration.
National Academies of Sciences, Division on Engineering and Physical Sciences, Space Studies Board, and Committee on the Review of Progress Toward Implementing the Decadal Survey Vision and Voyages for Planetary Sciences. 2018. Visions into Voyages for Planetary Science in the Decade 2013‐2022: A Midterm Review. National Academies Press.
National Research Council, Division on Engineering and Physical Sciences, Aeronautics and Space Engineering Board, Space Studies Board, and Committee on a Decadal Strategy for Solar and Space Physics (Heliophysics). 2013. Solar and Space Physics: A Science for a Technological Society. National Academies Press.
Pedrotty, Sam, Roger C. Hunter, and Christopher Baker. 2023. “Realizing Rapid, Reduced‐Cost High‐Risk Research (R5).” No. FS‐2023‐07‐02‐ARC. NASA Technical Memorandum.
Probe, Austin, Amalaye Oyake, S.W. Chambers, Matthew Deans, Guillaume Brat, et al. 2023. “Space ROS: An Open‐Source Framework for Space Robotics and Flight Software.” In AIAA SciteCH 2023 Forum, 2709. AIAA.
Russell, Stuart J., and Peter Norvig. 2010. Artificial Intelligence: A Modern Approach. Upper Saddle River, NJ: Prentice Hall Inc.
Staley, James, Kerri Lu, Elaine Schaertl Short, and Evana Gizzi. 2023. “A Framework for Multi‐Agent Fault Reasoning in Swarm Satellite Systems.” Small Satellite Conference.
Theiling, Bethany Purdin, Matthew A. Brandt, Lily A. Clough, Gary Crum, Evana Gizzi, et al. 2022. “Using Coordinated, Multi‐Agent Platforms for Dynamic Ocean Worlds Science.” In AGU Fall Meeting Abstracts, P23A–05. American Geophysical Union.
Theiling, Bethany P., Wayne Yu, Lily A. Clough, Pavel Galchenko, Fredrick Naikal, et al. 2024. “A Science‐Focused Artificial Intelligence (AI) Responding in Real‐Time to New Information: Capability Demonstration for Ocean World Missions.” In Astrobiology Science Conference (AbSciCon).
Williams, Conor, Leyton McKinney, Lily Anne Clough, Bethany Theiling, and Brett McKinney. 2024. “Autonomous Science: Simulated Solar System Mission to Enceladus, Icy Ocean Moon of Saturn.” The University of Tulsa Student Research Colloquium Tulsa, OK: The University of Tulsa.
© 2025. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/).