
Abstract

Traditional captioning models rely primarily on 2D visual features, which limits their ability to understand and describe spatial relationships, depth, and three-dimensional structure in images. These models struggle to capture object interactions, occlusion, and lighting variations, which are essential for generating contextually relevant and spatially aware descriptions. To address these limitations, we introduce the Neural Radiance Fields Captioning (NeRF-Cap) framework, a new NeRF-based multimodal image captioning framework that integrates 3D visual reconstruction with natural language processing (NLP). NeRF's ability to build a continuous volumetric representation of a scene from multiple 2D views enables the extraction of depth-aware and geometrically accurate features, which enhances the descriptive power of the generated captions. Our approach further integrates advanced vision-language models such as Bootstrapping Language-Image Pre-training (BLIP), Contrastive Language-Image Pretraining (CLIP), and Large Language Model Meta AI (LLaMA), which refine the textual output by incorporating semantic object relations, depth cues, and lighting effects into the captioning process. By leveraging NeRF's high-fidelity 3D representations, NeRF-Cap improves on traditional captioning by producing spatially consistent, photorealistic, and geometrically coherent descriptions. We evaluate our method on synthetic and real-world datasets, demonstrating its effectiveness in capturing complex spatial properties and visual dynamics. Experimental results indicate that NeRF-Cap outperforms existing captioning models in terms of spatial awareness, contextual accuracy, and natural language fluency, as measured by standard benchmarks such as Bilingual Evaluation Understudy (BLEU), Metric for Evaluation of Translation with Explicit Ordering (METEOR), Consensus-based Image Description Evaluation (CIDEr), and a novel Depth-Awareness Score.
Our work highlights the potential of 3D-aware multimodal captioning, paving the way for more advanced applications in robotic perception, augmented reality, and assistive vision systems.
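To illustrate the kind of benchmark scoring the abstract refers to, here is a minimal, self-contained sentence-level BLEU sketch. This is generic illustrative code, not the authors' implementation, and the example captions in the usage note are hypothetical placeholders.

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU: clipped n-gram precisions (n = 1..max_n),
    combined by geometric mean and scaled by a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # Clip each hypothesis n-gram count by its count in the reference.
        clipped = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # Tiny floor value stands in for smoothing when nothing matches.
        precisions.append(clipped / total if clipped > 0 else 1e-9)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty discourages overly short hypotheses.
    if len(hypothesis) > len(reference):
        bp = 1.0
    else:
        bp = math.exp(1 - len(reference) / max(len(hypothesis), 1))
    return bp * geo_mean
```

For example, `bleu("a red chair near the window".split(), "a red chair near the window".split())` returns 1.0 for an exact match, while a partially overlapping candidate caption scores strictly between 0 and 1.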

Details

Title
A NeRF-Based Captioning Framework for Spatially Rich and Context-Aware Image Descriptions
Publication title
Volume
58
Issue
5
Pages
1059-1064
Number of pages
7
Publication year
2025
Publication date
May 2025
Publisher
International Information and Engineering Technology Association (IIETA)
Place of publication
Edmonton
Country of publication
Canada
Publication subject
ISSN
1269-6935
e-ISSN
2116-7087
Source type
Scholarly Journal
Language of publication
English; French
Document type
Journal Article
Publication history
Online publication date
2025-05-31
Milestone dates
2025-05-16 (Accepted); 2025-05-06 (Revised); 2025-04-03 (Received)
First posting date
31 May 2025
ProQuest document ID
3231508339
Document URL
https://www.proquest.com/scholarly-journals/nerf-based-captioning-framework-spatially-rich/docview/3231508339/se-2?accountid=208611
Copyright
© 2025. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2025-07-25
Database
ProQuest One Academic