
Abstract

Exploring deep space objects such as planets, comets, moons, and asteroids involves ambitious and increasingly complex scientific pursuits, requiring spacecraft to land on or maneuver in close proximity to highly irregular and hazardous terrain. Navigation in these settings is a significant challenge, as communication latency is often too great to permit Earth-based assistance through radiometric tracking, real-time planning and control, or precise GPS positioning. More recently, these challenges have been addressed through the optical tracking of prominent surface features to provide Terrain Relative Navigation (TRN). With compute power limited by radiation-tolerant hardware, current approaches to TRN rely on template matching and correlation techniques against static maps and features that are collected and constructed a priori with extensive human involvement. Although proven effective on recent missions, this two-stage approach limits adaptability and generalization, increases mission costs and timelines, and narrows the range of viable deployment scenarios. In contrast, terrestrial robotics has demonstrated the efficiency of one-stage navigation solutions such as Simultaneous Localization and Mapping (SLAM) for nearly two decades. By dynamically constructing the map and localizing within it at runtime, this "show up and navigate" paradigm offers greater flexibility, but its deployment in space is hindered by numerous challenges in visual perception that are unique to celestial environments, including a lack of rich, diverse textures, dynamic illumination conditions, and the computational complexity of image processing algorithms. To that end, this work proposes a series of improvements to perception in space, striving toward end-to-end visual understanding for spacecraft TRN. We begin by quantifying the feature complexities found in space environments and present interest-point improvements that include state-informed matching and uncertainty-aware feature reasoning. We subsequently address the applicability of visual deep learning on spacecraft processors and introduce advancements to learning-based solutions in the presence of sparse training labels, including sim-to-real terrain detection and multi-view attention for distinctive description. Through rigorous evaluation, we demonstrate how the proposed techniques mitigate the failure modes of traditional space vision, establishing a new state of the art in extraterrestrial image processing and fostering a cohesive, unified TRN perception pipeline.
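
As a concrete point of reference for the classical, two-stage approach summarized above, the sketch below shows a minimal normalized cross-correlation (NCC) template matcher: a camera crop of a known landmark is localized against an a priori map by exhaustive correlation search. The function name, array sizes, and synthetic data are illustrative assumptions for exposition only, not the methods developed or evaluated in this dissertation.

# Illustrative sketch only: minimal NCC template matching of the kind
# classical map-based TRN relies on. All names and parameters here are
# assumptions for demonstration, not the dissertation's implementation.
import numpy as np

def ncc_match(map_img: np.ndarray, template: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) offset in map_img where template correlates best."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum()) + 1e-12

    best_score, best_rc = -np.inf, (0, 0)
    for r in range(map_img.shape[0] - th + 1):
        for c in range(map_img.shape[1] - tw + 1):
            patch = map_img[r:r + th, c:c + tw]
            p = patch - patch.mean()
            p_norm = np.sqrt((p ** 2).sum()) + 1e-12
            score = float((p * t).sum() / (p_norm * t_norm))  # NCC score in [-1, 1]
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    onboard_map = rng.random((128, 128))        # synthetic stand-in for an a priori map
    landmark = onboard_map[40:56, 70:86].copy() # synthetic descent-camera crop of a landmark
    print(ncc_match(onboard_map, landmark))     # expected: (40, 70)

Even this toy version makes the cost structure apparent: every candidate offset in the onboard map is scored against the template, which illustrates why classical TRN pipelines keep their static maps and landmark sets small and carefully curated for radiation-tolerant processors.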

Details

Title
Rethinking Visual Perception for Spacecraft Autonomy: Towards End-to-End Terrain Relative Navigation
Author
Number of pages
121
Publication year
2025
Degree date
2025
School code
0656
Source
DAI-B 87/3(E), Dissertation Abstracts International
ISBN
9798293833955
Committee member
Chowdhury, Souma; Yuan, Junsong
University/institution
State University of New York at Buffalo
Department
Computer Science and Engineering
University location
United States -- New York
Degree
Ph.D.
Source type
Dissertation or Thesis
Language
English
Document type
Dissertation/Thesis
Dissertation/thesis number
32173287
ProQuest document ID
3250375379
Document URL
https://www.proquest.com/dissertations-theses/rethinking-visual-perception-spacecraft-autonomy/docview/3250375379/se-2?accountid=208611
Copyright
Database copyright ProQuest LLC; ProQuest does not claim copyright in the individual underlying works.
Database
ProQuest One Academic