Abstract
The rapid proliferation of video data from various sources underscores the pressing need for effective Content-based Video Retrieval (CBVR) systems. Traditional retrieval methodologies are increasingly inadequate for managing the complexity and scale of video big data, which necessitates the development of advanced distributed computing frameworks. This study identifies and addresses critical challenges in CBVR, specifically the implementation of lambda architecture for the retrieval of both streaming and batch video data, the enhancement of in-memory analytics for video data structures, and the efficient indexing of heterogeneous video features. We propose a novel scale-out system that integrates state-of-the-art big data technologies with deep learning algorithms. The system architecture is inspired by lambda principles and is designed to facilitate both near real-time and offline video indexing and retrieval. Key contributions of this research include: (1) the formulation of a lambda-style architecture tailored for video big data, (2) the development of an in-memory processing framework that provides a high-level abstraction for video analytics, (3) the introduction of a unified distributed indexer, termed Distributed Encoded Deep Feature Indexer (DEFI), capable of indexing multi-type features from both streaming and batch video datasets, and (4) a comprehensive bottleneck analysis of the proposed system. Performance evaluations utilizing three benchmark datasets demonstrate the system's effectiveness, revealing insights into performance bottlenecks related to storage, video stream acquisition, processing, and indexing. This research provides a foundational framework for scalable and efficient video analytics, significantly advancing the state-of-the-art in cloud-based CBVR systems.
Introduction
Video stream sources, such as CCTV, smartphones, drones, and other devices, play a crucial role in generating video content, prompting the development of systems for video content analysis, management, and retrieval. As online video traffic is projected to exceed a zettabyte annually by 2030, the need for a scalable and distributed Content-based Video Retrieval (CBVR) architecture is becoming crucial [1]. This rapidly expanding volume of video data, referred to as "video big data", far exceeds the capabilities of conventional analytics methods. CBVR, which retrieves relevant videos based on content rather than metadata, demands scalable systems for indexing, querying, and feature extraction at scale.
The rapid increase in video data from sources like CCTV, drones, and online platforms has far outpaced the capabilities of traditional retrieval systems. In distributed setups, real-time analytics and historical video searches usually require separate processing systems. This results in duplicated infrastructure, inconsistent indexing, and higher latency. Although distributed computing frameworks like Hadoop [2] and Spark [3] offer scalable infrastructure, they were not originally designed to handle the temporal, semantic, and high-dimensional nature of video data. Consequently, adapting these platforms to efficiently support CBVR remains a significant challenge. Systems designed for large-scale CBVR must effectively integrate video ingestion, deep feature computation, indexing, and retrieval for both streaming and batch pipelines. Despite ongoing research, there is still a lack of scalable, end-to-end solutions capable of handling multi-modal video content in both real-time and offline settings.
Furthermore, CBVR has gained significant attention in recent years [4–6]. Prior works have focused on specific aspects of video retrieval such as handcrafted features [7–9], deep feature extraction [10, 11], indexing [11–14], retrieval modes such as text-based, image-based, or clip-based search [9, 15, 16], action recognition [17, 18], and object/face recognition [19]. Other works address similarity search [20, 21] or specific execution models such as batch [19, 22] and stream [20, 23] processing. Some systems adopt distributed architectures [20, 24], but none offer a unified solution that supports distributed, multi-modal, and real-time CBVR. This fragmentation reveals a significant gap: existing systems fail to offer general-purpose, scalable, and low-latency CBVR frameworks that address end-to-end video analytics workflows.
To address this, the lambda architecture paradigm [25] offers a promising foundation for scalable video analytics and has attracted the attention of both industry and researchers [26–29]. However, its application to CBVR remains underexplored [30, 31], and adapting it to handle video-specific challenges such as feature extraction and indexing remains non-trivial. Although a distributed processing engine such as Spark [3] enables in-memory distributed computing, it lacks native support for video processing and deep learning workloads. This limitation poses a key challenge in effectively extending and optimizing Spark to handle large-scale, low-latency video analytics pipelines for both streaming and batch videos.
Similarly, distributed streaming systems [32] are widely used for real-time, fault-tolerant message streaming. Their application, however, is non-trivial in video big data workflows. Integrating such systems into a CBVR system requires significant architectural adaptation to ensure efficient stream acquisition, buffering, and delivery of video and feature data. Moreover, the widespread use of Convolutional Neural Networks (CNNs) for video analysis brings significant challenges related to feature diversity. Different CNN architectures generate various feature representations, such as global features, object features, and temporal semantics, which differ in scale, dimensionality, and structure. This variety makes it harder to develop unified indexing and retrieval strategies in distributed CBVR systems.
Implementing big data technology stacks for end-to-end video analytics encounters several key system-level bottlenecks. These include issues with orchestration, scalability, inefficiencies in acquiring real-time streams, challenges in distributed processing, and the computational overhead of indexing and querying high-dimensional deep features. This highlights the need for a complete and unified architectural framework that can effectively support both real-time and batch processing of large-scale video data in distributed settings.
While it is theoretically possible to connect different systems for each workload, this often creates significant architectural and operational challenges. Batch systems expect static data and global aggregation, while streaming systems need incremental updates and quick processing. These approaches differ not only in data models and state handling but also in how they maintain consistency. Naive integration results in duplication, synchronization issues, and inconsistent retrieval results. The lambda architecture partially solves this by defining separate batch and stream layers that meet at a serving layer. We build on this idea to unify not just ingestion and feature extraction, but also indexing and querying from both real-time streams and offline video data. This architecture bridges the batch-stream divide, enables scalable distributed processing, and supports low-latency video search in dynamic environments.
In this work, we present a scalable distributed framework for CBVR based on lambda architecture to support both batch and real-time processing. The framework comprises three layers, the Video Big Data Layer (VBDL), the Video Big Data Analytics Layer (VBDAL), and the Web Service Layer (WSL), which manage the acquisition, structural analysis, deep feature extraction, and indexing of large-scale video data. We develop the Distributed Encoded Deep Feature Indexer (DEFI) to encode and index deep global and object-level features, facilitating efficient multimodal retrieval. The design philosophy reflects the practical challenges of merging streaming and batch processing.
The remainder of this paper is organized as follows: Section "Related Work" discusses related work in CBVR, big data processing, and indexing. Section "Research Objectives" outlines the research objectives and architectural challenges. Section "Proposed Architecture" presents the proposed architecture in detail. Section "Evaluation and Discussion" describes the experimental setup, datasets, and evaluation metrics, followed by performance evaluation. Finally, Sect. "Conclusion and Future Work" concludes the paper.
Related work
Several approaches have been proposed to address CBVR, focusing on feature extraction, indexing, and system scalability. These methods span from traditional handcrafted descriptors to more recent deep learning and cloud-based techniques. However, most prior works address isolated components of the pipeline rather than providing an integrated architecture suitable for large-scale, hybrid video processing.
Traditional CBVR systems relied on handcrafted features and static metadata. Amato et al. [13] proposed a large-scale system for retrieval using text, color histograms, and object detection. Sun et al. [33] presented a framework for segment-level video retrieval. Li et al. [15] proposed Sentence Encoder Assembly to support text-to-video matching. These systems, while effective for specific retrieval tasks, do not address scalability or unified system architecture. Other efforts explored temporal and semantic representations. Shang et al. [23] used frame-level gray-scale intensities and time-oriented video structures but incurred high parallel processing costs. Sivic et al. [34] developed classifiers for character-specific tracking in television content, and Song et al. [8] applied multi-feature hashing in Hamming space. Lai et al. [9] combined appearance and motion trajectories for object-centric retrieval, while Araujo et al. [35] performed large-scale retrieval using image queries. These works generally assume static datasets and lack mechanisms for real-time or adaptive stream processing. Zhang et al. [12] created a video frame information extraction model using CNN and Bag of Visual Words for retrieving videos on a large scale using image queries. Such conventional CBVR approaches cannot be considered effective candidate solutions for video big data. Furthermore, these CBVR systems can process only a limited volume of near-real-time video streams, if any.
Recent efforts have looked into big data technologies to improve scalability in CBVR. Saoudi et al. [36] developed a distributed system to create compact video signatures using motion and residual features along with clustering-based indexing. Gao et al. [10] introduced a cloud-based actor identification framework that keeps spatial coherence in feature encoding. Wang et al. [21] used MapReduce and GPU acceleration for near-duplicate retrieval with the Hessian-Affine detector [37], encoded through Bag of Features and stored on HDFS. Zhu et al. [20] presented Marlin, a streaming pipeline for distributed indexing and micro-batch feature extraction; however, it does not support batch analytics or generalized features. Mühling et al. [17] use a cluster computing approach to propose a CBVR system based on the Distributed Resource Management Application API [24] and deep learning for professional film, television, and media production. Lin et al. [38] developed a cloud-based facial retrieval system. Other approaches [19, 22, 39] also attempted to design cloud-based CBVR systems. These approaches, however, are task-specific and lack architectural integration.
Indexing is crucial in information retrieval systems, and various indexing strategies have been proposed for video retrieval. Chen et al. [40] convert spatial-temporal features into compact binary codes using supervised hashing to train hash functions. Liu et al. [41] index features at large scale by encoding each image with a pre-trained CNN and constructing a visual dictionary of codewords; they then design a hash-based inverted index for retrieval. However, this approach suffers from linear growth in search time with increasing data volume and lacks scalability for distributed or cloud environments. Amato et al. [42] proposed a permutation-based strategy for CNN features, in which permutations are created by combining a collection of metric space reference objects. However, it necessitates calculating the distances between pivots and targets, which is time-consuming in the case of deep features. Anuranji et al. [43] argue that existing hashing methods inadequately represent video frame features and fail to exploit temporal information, leading to significant feature loss during dimensionality reduction. They propose a hashing mechanism that captures both spatial and temporal features efficiently using stacked convolutional networks and bi-directional learning for improved feature extraction.
Despite these advances, several fundamental challenges remain unaddressed. First, most existing CBVR systems lack a unified architecture that supports both real-time streaming and offline batch video processing, leading to fragmented pipelines and redundant indexing logic. Second, current indexing methods often struggle to scale with high-dimensional, heterogeneous features (e.g., spatial, temporal, object-level), limiting their applicability across varied video analytics tasks. Third, many systems are not designed for distributed, cloud-native environments, making them difficult to deploy or extend at scale.
Table 1. Feature-wise comparison of the proposed system with the state-of-the-art.
Compared approaches: [13], [44], [36], [38], [7], [17], [45], [10], [21], and [20]. Compared features: Lambda Architecture, Scalable System, Pluggable Framework, Service Oriented, Big Data Store, Distributed Messaging, Incremental Index, Multi-Type Features, Feature Encoding, Batch Processing, Stream Processing, Multistream Environment, Query by Text, Query by Image, and Query by Clip. [See PDF for the full comparison matrix.]
To address these limitations, we introduce a scalable and fault-tolerant system based on lambda architecture that combines streaming and batch pipelines. Table 1 compares key CBVR systems with our platform, emphasizing the architectural and operational aspects important for scaling video retrieval. Lambda Architecture refers to systems that integrate batch and stream processing layers to ensure both low-latency and accurate results. A Scalable System can manage increasing data volumes and user demands without losing performance. A Pluggable Framework makes it easy to integrate new components or algorithms. Service Oriented means the system is divided into independent services, which improves maintainability and scalability. Big Data Store enables efficient storage and access to large video datasets. Distributed Messaging allows decoupled, fault-tolerant communication between components, which is essential for coordinating real-time data flows in streaming pipelines. An Incremental Index updates as new data arrives, avoiding the need for expensive full re-indexing. Multi-Type Features denotes the handling of diverse feature types. Feature Encoding refers to the transformation of computed features for efficient indexing. Batch Processing deals with large amounts of offline video data. Stream Processing receives and analyzes video streams in real time. A Multistream Environment refers to the ability to acquire and process video data from a diverse range of connected sources. Query by Text allows users to find videos using text input, while Query by Image and Query by Clip enable visual input in content-based searches.
Research objectives
The main research objectives of this work are listed below:
Lambda-architecture based CBVR: To design a scalable system for video big data indexing and retrieval that leverages lambda architecture principles to process, index, and retrieve both streaming and batch video data. The integration of these two processing modes poses significant architectural challenges due to differences in latency requirements, data consistency, and processing semantics, challenges that are largely unaddressed in existing CBVR systems.
In-memory distributed video analytics: General-purpose Directed Acyclic Graph based distributed computing engines such as Spark lack native support for video analytics, including specialized data structures and video-specific semantics. This limitation forces developers to manage low-level complexities manually, which is labor-intensive, error-prone, and hinders scalability. Addressing this gap is significant for enabling efficient, high-throughput video processing in large-scale systems. Implementing an end-to-end, video-aware processing framework on top of such engines is a non-trivial task, but essential for supporting real-time and batch video analytics in distributed environments.
Deep feature indexer: The diversity of video features, including textual metadata, deep visual representations, and spatiotemporal patterns, poses a major challenge for CBVR. These features differ in structure, scale, and semantic meaning, making unified indexing complex. Moreover, the high-dimensional and dense nature of deep features significantly increases the computational cost of indexing and retrieval. Designing a scalable, distributed indexer that can efficiently handle and unify these heterogeneous feature types across both streaming and batch video data is a critical and non-trivial challenge with direct impact on retrieval accuracy and system performance.
High-level abstractions: To design and optimize high-level abstractions over big data technologies that support scalable video analytics while hiding the complexity of the underlying distributed computing stack. This objective aims to simplify the development of video processing pipelines by providing domain-specific constructs that encapsulate data ingestion, transformation, and feature extraction, enabling efficient deployment in both real-time and batch processing environments.
Bottlenecks analysis: To systematically identify and analyze performance bottlenecks that arise in distributed video analytics pipelines. These include challenges related to video storage and orchestration, scalability, real-time stream acquisition, and high-dimensional deep feature indexing. The goal is to understand the limitations of existing big data technologies when applied to video workloads, and to use these insights to guide the design of more efficient, scalable, and fault-tolerant CBVR systems. Special attention is given to how architectural choices affect latency, throughput, and resource utilization in both streaming and batch processing contexts.
The key contributions of this work are summarized as follows:
We propose a lambda-inspired layered architecture that supports both near real-time and batch video processing, enabling scalable and unified retrieval of video big data.
We design and implement an in-memory video analytics framework that introduces a high-level abstraction over distributed processing engines, facilitating efficient video operations such as frame extraction and deep feature computation.
We address the challenge of heterogeneous feature representation by introducing DEFI, a unified distributed indexer capable of indexing and retrieving multi-type deep features, including spatial, object-level, and temporal representations, from video sources.
We introduce a set of high-level abstractions built over big data technologies to simplify the construction of scalable video analytics pipelines. These abstractions encapsulate complex tasks such as data ingestion, transformation, and feature extraction, and are designed to support both streaming video (processed in real-time with low-latency constraints) and batch video (processed offline for large-scale analysis), ensuring flexibility and consistency across diverse workloads.
We develop and evaluate the proposed system using three benchmark video datasets, demonstrating its scalability and performance under real-world conditions. Our analysis highlights key system-level bottlenecks across storage, stream acquisition, distributed processing, and deep feature indexing.
[See PDF for image]
Fig. 1
The proposed system comprises three layers. The initial layer, VBDL, actively orchestrates video big data throughout its life cycle, from acquisition and storage to archiving or obsolescence. The VBDAL performs structural analysis and mining operations on top of Spark, an in-memory processing engine that enhances the MapReduce model for efficient computations. The WSL is the gateway that exposes the framework's functionality as role-based web services across the web.
Proposed architecture
Here, we formally introduce the proposed architecture, followed by detailed technical specifications in the subsequent sections. The system comprises three layers: the Video Big Data Layer (VBDL), the Video Big Data Analytics Layer (VBDAL), and the Web Service Layer (WSL), as illustrated in Fig. 1. VBDL forms the base layer, managing the video big data lifecycle from acquisition and storage to archiving or obsolescence. Within VBDL, we introduce the unified feature indexer DEFI, which encodes and indexes deep global and object features, referred to as Intermediate Results (IR) throughout this work. Additionally, the Structured Metastore manages structured metadata, including user information, video sources and configurations, video retrieval service subscriptions, application logs, and system workloads. VBDAL handles structural analysis and mining operations, generates query maps, and performs ranking operations. Lastly, the system consolidates all advanced functionalities into role-based web services through the WSL, which is built on top of both VBDL and VBDAL, enabling the proposed framework to operate seamlessly over the web. In the subsequent sections, we provide the technical details of each layer of the proposed architecture.
[See PDF for image]
Fig. 2
Video Stream Acquisition and Producer Service. (a) Internal logic and flow. (b) Data Structures. SDS are various connected "Streaming Data Sources" and the
Video Big Data Layer
VBDL is responsible for large-scale acquisition and management of batch video data and real-time video streams, along with indexing encoded deep features. The four key components of VBDL are the Video Stream Manager, Video Big Data Store, Structured Metastore, and DEFI.
Video Stream Manager
The source devices provide the real-time video streams, and executors perform on-the-fly processing and deep feature extraction on these streams. The role of this component is to acquire video streams in real time and to orchestrate them.
[See PDF for image]
Algorithm 1
Video stream event producer
Video Stream Acquisition and Producer Service
The Video Stream Acquisition and Producer Service (VSAPS) provides interfaces for the configuration and acquisition of video streams from source devices. For a specific video source subscribed to the proposed system, the VSAPS obtains the configuration metadata from the Structured Metastore and then configures the video streaming source device accordingly. Once configured, the VSAPS decodes the video stream and extracts frames. Once the frames are extracted, preprocessing operations, such as metadata extraction and frame correction and sizing, are performed on the frames as the scenario requires. Communication with the video stream source occurs through a JSON object comprising five fields.
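The concrete message schema is specific to the deployed sources; purely as an illustration, the sketch below assumes hypothetical field names (source_id, timestamp, frame_seq, metadata, payload) and uses OpenCV with the kafka-python client as stand-ins for the actual VSAPS implementation.

```python
import base64
import json
import time

import cv2
from kafka import KafkaProducer

# Illustrative VSAPS-style producer: topic name, broker address, and the five
# JSON fields below are assumptions, not the framework's exact schema.
producer = KafkaProducer(
    bootstrap_servers="broker-1:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

cap = cv2.VideoCapture("rtsp://camera-01/stream")   # a registered stream source
seq = 0
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))            # frame correction & sizing
    _, jpeg = cv2.imencode(".jpg", frame)            # compress before shipping
    message = {                                       # five illustrative fields
        "source_id": "camera-01",
        "timestamp": time.time(),
        "frame_seq": seq,
        "metadata": {"codec": "h264", "fps": 30},
        "payload": base64.b64encode(jpeg.tobytes()).decode("ascii"),
    }
    producer.send("video_stream_topic", message)
    seq += 1
producer.flush()
```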
[See PDF for image]
Fig. 3
Workflow of VSAPS. Here
Distributed Message Broker
The Distributed Message Broker contains topics, and each topic contains one or more partitions.
Video stream records are distributed across partitions within each topic, with the number of partitions proportional to the parallelism degree and overall throughput. When a new video stream source is registered with the framework, the
[See PDF for image]
Fig. 4
Flowchart for replication factor
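As a minimal sketch of how a broker topic can be provisioned for a newly registered stream source (assuming a Kafka-style broker and the kafka-python admin client; the topic name is hypothetical, and the partition count and replication factor below merely mirror the values later reported in Table 3):

```python
from kafka.admin import KafkaAdminClient, NewTopic

# Illustrative only: topic name, partition count, and replication factor are
# assumptions chosen to mirror the deployment values in Table 3.
admin = KafkaAdminClient(
    bootstrap_servers="broker-1:9092,broker-2:9092,broker-3:9092"
)

topic = NewTopic(
    name="video_stream_topic",   # hypothetical topic name
    num_partitions=140,          # governs the degree of parallelism / throughput
    replication_factor=3,        # fault tolerance across the three brokers
)
admin.create_topics([topic])
```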
Video Stream Consumer
We acquire the incoming streams from the video sources as mini-batches that then reside inside respective topics in the brokers. The
Intermediate Results Manager
The VBDAL generates deep features, which require proper management and indexing. The extracted IR are sent to and consumed from the topic using the Intermediate Results Manager. The schema is composed of
Video Big Data Store
The VBDL provides permanent distributed persistence and storage space for video big data. The data is systematically stored and mapped in accordance with the system's business logic. A
Since streams are acquired from heterogeneous connected sources, it is imperative to orchestrate their storage and access in accordance with the proposed system for efficiency. To handle this issue, we carefully design Video Stream Unit (
[See PDF for image]
Fig. 5
Video stream unit (
Structured Metastore
Structured Metastore manages the structured data of the proposed system which includes User Centric Metastore, Video Data Source Metastore, Subscription Metastore, System Logs, and Sys Configs. The
Distributed Encoded Deep Feature Indexer
We designed DEFI as a unified indexing mechanism composed of four modules: Feature Encoder, IR Aggregator, IR Indexer, and Query Handler, as shown in Fig. 6.
[See PDF for image]
Fig. 6
IR Indexer: a unified deep feature indexing and retrieval component of the proposed system. The acquired deep features are transformed, aggregated, and indexed in the inverted index, where they immediately become available for retrieval.
To avoid multiple indexing schemes, we designed a unified indexer that facilitates indexing and querying multiple types of features. However, because of their high dimensionality and dense nature, incorporating deep features directly into an inverted index-based system poses a significant challenge. Thus, we designed a Feature Encoder, motivated by [48], to transform the deep features into a representation suitable for an inverted index-based system. Given a feature vector $\mathbf{f} = (f_1, \ldots, f_d)$, we apply a transformation function to make it compatible with our inverted index system.
Table 2. Quantization factor value vs. its effect on empty yield.

Value of quantization factor | 10 | 20 | 30 | 40 | 50
|---|---|---|---|---|---|
Features Null Yield | 294999 | 8723 | 4792 | 2321 | 499
Percent Null Yield | 29.5 | 0.87 | 0.48 | 0.23 | 0.04
For feature encoding, [48] uses a hard-coded constant to quantize the feature vectors. The drawbacks of this approach are two-fold. The first is finding the optimal value of the quantization factor: since it is a magic number, it must be determined by trial and error. Secondly, if its value is small, then because of the floor function, features in which the value of every dimension is near zero yield empty (null) encodings, thus missing the corresponding videos. On the other hand, the encoding size is proportional to the value of the quantization factor; a larger value yields larger encodings. Table 2 summarizes the effect of this empty yield on 1M features extracted from the YouTube-8M [49], V3C1 [50], and Sports-1M [51] datasets using the VGG-19 prediction layer.
We modify the quantization factor in our proposed approach; it can be defined as:

(1)

where $d$ is the dimensionality of the feature vector $\mathbf{f}$ and $e$ is Euler's constant. This adaptive quantization factor $Q$ prevents under- or over-sampling of features in the inverted index. For each component $f_i$ (where $1 \le i \le d$), the encoding transformation is:

$q_i = \lfloor Q \cdot f_i \rfloor \qquad (2)$
After quantization, we prune zero or near-zero values:

$\tilde{q}_i = \begin{cases} q_i, & q_i > 0 \\ \text{discarded}, & \text{otherwise} \end{cases} \qquad (3)$
This step helps reduce the data's size, removing dimensions that contribute little to the overall representation and thereby improving storage efficiency in the index. Each transformed component is further converted to a hexadecimal term that is repeated $q_i$ times:

$t_i = \underbrace{\mathrm{hex}(i)\ \mathrm{hex}(i)\ \cdots\ \mathrm{hex}(i)}_{q_i\ \text{times}} \qquad (4)$
Finally, the encoded feature vector is assembled by concatenating the hexadecimal values across all dimensions. The purpose of hexadecimal encoding in the proposed DEFI is to create a compact, invertible representation of high-dimensional, dense feature vectors. Encoding each dimension of the feature vector in hexadecimal allows us to efficiently store, retrieve, and index these transformed features.
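The sketch below illustrates one possible reading of the quantize-prune-encode pipeline (Eqs. 1-4). The adaptive quantization factor is passed in rather than derived, and the exact quantization and term-layout rules are assumptions for illustration, not the system's precise formulation.

```python
import numpy as np


def encode_feature(f: np.ndarray, Q: float) -> str:
    """Illustrative surrogate-text encoding of a dense feature vector.

    Q is the (adaptive) quantization factor; here it is simply passed in,
    since Eq. (1)'s exact definition is not reproduced in this sketch.
    """
    terms = []
    for i, value in enumerate(f):
        q_i = int(np.floor(Q * value))     # quantization (assumed form of Eq. 2)
        if q_i <= 0:                       # prune zero / near-zero components (Eq. 3)
            continue
        token = format(i, "x")             # hexadecimal codeword for dimension i
        terms.extend([token] * q_i)        # repeat the token q_i times (Eq. 4)
    return " ".join(terms)                 # text "document" usable by an inverted index


# Example: a toy 8-dimensional feature encoded with Q = 30
feature = np.array([0.01, 0.0, 0.12, 0.33, 0.0, 0.05, 0.0, 0.41])
print(encode_feature(feature, Q=30.0))
```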
[See PDF for image]
Algorithm 2
Global features transformation and encoding
The proposed DEFI aggregates multiple types of features, represented as:
$\mathcal{F} = F_s \cup F_o \cup F_t \qquad (5)$
where $F_s$, $F_o$, and $F_t$ are the encoded feature vectors for spatial, object, and textual features, respectively. These are indexed collectively in the distributed inverted indexer. Algorithm 2 outlines the proposed DEFI. In distributed systems, managing the query load across multiple servers (in this case, the N servers of the distributed inverted indexer) is crucial for performance, scalability, and reliability. The Weighted Round Robin (WRR) algorithm is an effective way to allocate queries based on the capacity of each server, ensuring that servers with more resources handle a greater proportion of the load.
$w_i = \dfrac{C_i}{\sum_{j=1}^{N} C_j} \qquad (6)$
where $w_i$ is the weight assigned to server $i$ based on its computational power $C_i$. The load is distributed accordingly to balance the workload dynamically.
[See PDF for image]
Fig. 7
MapReduce-based workflow for batch video analytics. In this figure, MR represents MapReduce. VidRDD, FeRDD, and EFeRDD represent the Video, Feature, and Encoded Feature abstractions on top of Spark's native RDD. (a) Step-by-step workflow. (b) Data representation inside VidRDDs. (c) Processing carried out on each node
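Returning to the query load balancing of DEFI described just before Fig. 7, the sketch below shows one plain realization of weighted round-robin dispatch over the N index servers; the server identifiers and capacity values are assumptions for illustration.

```python
from itertools import cycle


def build_wrr_schedule(capacities):
    """Expand per-server capacities into a repeating weighted round-robin order.

    capacities: dict mapping server id -> relative computational power C_i.
    Servers with larger C_i appear proportionally more often in the schedule.
    """
    schedule = []
    for server, capacity in sorted(capacities.items(), key=lambda kv: -kv[1]):
        schedule.extend([server] * capacity)
    return cycle(schedule)


# Illustrative capacities for three index servers (assumed values).
dispatcher = build_wrr_schedule({"solr-1": 3, "solr-2": 2, "solr-3": 1})


def route_query(query):
    server = next(dispatcher)   # pick the next server in weighted order
    return server, query        # in the real system the query is sent to `server`


for q in range(6):
    print(route_query(f"q{q}"))
```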
Video Big Data Analytics Layer
We designed VBDAL to comprise four components: Video Grabber, Structure Analyzer, Feature Extractor, and Query Handler. We utilize Spark [3] for distributed video processing and analytics. At the time of this writing, Spark has no native support for video data handling. To address this issue, we first build the Video Resilient Distributed Dataset (VidRDD), a unified API wrapper around Spark's Resilient Distributed Dataset (RDD) that can be integrated with both Spark and Spark Streaming. We load the video data as a binary stream into the VidRDD. All the operations of VBDAL are performed on our proposed VidRDD, as shown in Fig. 7.
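Spark's binaryFiles API offers one plausible entry point for such an abstraction. The wrapper below is a minimal sketch (its class and method names are illustrative, not the framework's actual API) of how raw video bytes can be loaded into distributed memory and transformed.

```python
from pyspark import SparkContext


class VidRDDSketch:
    """Minimal, illustrative stand-in for the VidRDD abstraction."""

    def __init__(self, sc: SparkContext, path: str):
        # Each element is (file_path, raw_video_bytes), cached in distributed memory.
        self.rdd = sc.binaryFiles(path).cache()

    def map_videos(self, fn):
        """Apply a per-video function (e.g., frame or metadata extraction)."""
        return self.rdd.map(lambda kv: (kv[0], fn(kv[1])))


sc = SparkContext(appName="vidrdd-sketch")
videos = VidRDDSketch(sc, "hdfs:///raw_video_space/*.mp4")   # hypothetical HDFS path
sizes = videos.map_videos(len).collect()                      # e.g., bytes per video
```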
Video Grabber
The Video Grabber component comprises two modules: Streaming Mini-batch Reader and Batch video Loader. The Streaming Mini-batch Reader allows us to subscribe to distributed broker’s topics to acquire and load video mini-batches into distributed main memory (as shown in Algorithm 1) for near real-time video analytics. Likewise, the Batch video Loader performs the loading of batch video data as VidRDD into distributed main memory from the Video Big Data Storage (Raw Video Space) for batch video analytics.
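For the streaming side, a direct Kafka stream in Spark Streaming is one way the Streaming Mini-batch Reader can be realized on the Spark 2.x line used in our cluster; the topic name, broker address, and batch interval below are illustrative assumptions.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="stream-grabber-sketch")
ssc = StreamingContext(sc, batchDuration=2)   # 2-second mini-batches (assumed)

# Subscribe to the broker topic holding video-stream messages (illustrative name).
stream = KafkaUtils.createDirectStream(
    ssc, ["video_stream_topic"], {"metadata.broker.list": "broker-1:9092"}
)

# Each mini-batch arrives as an RDD of (key, value) messages that can be handed
# to the Structure Analyzer and feature extractors.
stream.foreachRDD(lambda rdd: print("mini-batch size:", rdd.count()))

ssc.start()
ssc.awaitTermination()
```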
Structure Analyzer
Once the VidRDD is initialized and populated with video data, structure analysis operations, which in most cases include frame extraction, metadata extraction, and preprocessing, are carried out by the Structure Analyzer. At this stage, the video data resides in the distributed main memory as a VidRDD. The
[See PDF for image]
Algorithm 3
Distributed batch video processing
[See PDF for image]
Algorithm 4
Video stream processing & analytics
Metadata extraction is vital for indexing and retrieving video data and video frames. Metadata falls into two categories. General metadata contains information about the video file/stream, such as the format (MPEG-4, MKV, etc.), codec, file size, duration, bitrate mode (constant or variable) and overall bitrate, timestamp, etc. Video metadata includes the format, profile, actual bitrate, width and height of the frame, display aspect ratio, frame rate mode (constant or variable), actual frame rate, color space, bit depth, and stream size. This metadata helps in understanding the video content. For batch video data, the metadata is extracted directly from the video data [52]. In the case of video streams, the metadata packed within the Kafka messages is used, as shown in Algorithm 4 (steps 2–8).
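Our pipeline extracts this metadata with a Tika-based toolchain [52]; purely to illustrate the general and video-level fields involved, the sketch below uses ffprobe as a substitute tool and parses its JSON output.

```python
import json
import subprocess


def probe_metadata(video_path: str) -> dict:
    """Return container-level (general) and stream-level (video) metadata."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", video_path],
        capture_output=True, check=True, text=True,
    ).stdout
    info = json.loads(out)
    general = {                                   # general metadata
        "format": info["format"]["format_name"],
        "duration": float(info["format"]["duration"]),
        "size": int(info["format"]["size"]),
        "bit_rate": int(info["format"]["bit_rate"]),
    }
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    return {"general": general,
            "video": {"codec": video["codec_name"],
                      "width": video["width"],
                      "height": video["height"],
                      "frame_rate": video["avg_frame_rate"]}}
```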
The Frame Extractor performs the video frame extraction operation on the VidRDD. Once the frames are extracted, the VidRDD is transformed into a FrameRDD. The FrameRDD holds attributes such as
Since video content is often noisy due to non-uniform lighting, occluded objects, and other conditions, the extracted video frames must first be preprocessed before being input to the convolutional neural network. The Preprocessor component handles operations such as frame resizing, downsampling, background subtraction, color and gamma correction, histogram equalization, and exposure and white balance. Once the structural analysis is performed, the next step is IR extraction.
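A minimal OpenCV sketch of some of the preprocessing operations named above (resizing/downsampling and histogram equalization); the target resolution and the luma-channel equalization strategy are assumptions for illustration.

```python
import cv2
import numpy as np


def preprocess_frame(frame: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize and contrast-normalize a frame before CNN feature extraction."""
    frame = cv2.resize(frame, size, interpolation=cv2.INTER_AREA)   # resize / downsample
    yuv = cv2.cvtColor(frame, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])                   # equalize the luma channel
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```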
IR Extractor
In the context of CBVR, the IR are of two types: deep global features and object features, which are computed by the
Deep Global Feature Extractor
The global features provide a good description of image content, which is typically represented by a single feature vector. Deep Feature Extractor employs VGG-19 [53] to compute global spatial features from video frames.
We use video keyframes for feature extraction for two reasons: first, it reduces the computation overhead, and second, keyframes are sufficient for a global representation of the video because there is no significant difference between consecutive frames lying between two keyframes. For batch video processing, we load the video data into a VidRDD, which resides in distributed memory. Structure analysis (decomposition into frames, metadata extraction, etc.) on the VidRDD is performed by the Structure Analyzer. Then the Preprocessor performs preprocessing operations such as downsampling, resizing, and histogram equalization on the extracted frames to improve object precision and further accelerate deep feature extraction.
We compute spatial features after preprocessing utilizing the VGG-19 based Deep Global Feature Extractor.
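As a sketch, global feature computation from a preprocessed keyframe with a pre-trained VGG-19 can be realized as follows (shown here with Keras; the tapped layer, fc2, is an assumption and not necessarily the layer used by our extractor).

```python
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.models import Model

base = VGG19(weights="imagenet")
# Tap a late fully-connected layer as the global descriptor (layer choice is illustrative).
extractor = Model(inputs=base.input, outputs=base.get_layer("fc2").output)


def global_feature(keyframe_bgr: np.ndarray) -> np.ndarray:
    """Compute a single global feature vector for one preprocessed 224x224 keyframe."""
    rgb = keyframe_bgr[:, :, ::-1].astype("float32")        # BGR -> RGB
    batch = preprocess_input(np.expand_dims(rgb, axis=0))   # VGG-19 input normalization
    return extractor.predict(batch)[0]                      # e.g., a 4096-d vector
```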
Deep Object Extractor
Similar to the Deep Global Feature Extractor, the object features are also computed from the frames residing in the FrameRDD, as shown in Fig. 7 and Algorithm 3 (steps 6 to 10). We compute the object features from the keyframes utilizing YOLO [54]. The object information is properly structured and stored alongside the deep global features in the FeRDD. To maintain the semantics, we create a list-style semantic structure depicting the transitions of objects across the frames.
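As an illustration of how per-keyframe detections can be arranged into such a list-style structure, the sketch below assumes a hypothetical detect() helper wrapping a YOLO model and returning (label, confidence, bounding box) tuples.

```python
from typing import Callable, Dict, List, Tuple

# detect(frame) is assumed to return a list of (label, confidence, bbox) tuples
# produced by a YOLO model; the wrapper itself is hypothetical.
Detection = Tuple[str, float, Tuple[int, int, int, int]]


def build_object_track(frames, detect: Callable[[object], List[Detection]]) -> List[Dict]:
    """Structure per-keyframe detections as an ordered list of object transitions."""
    track = []
    for idx, frame in enumerate(frames):
        labels = sorted({label for label, conf, _ in detect(frame) if conf > 0.5})
        track.append({"frame_index": idx, "objects": labels})
    return track


# The resulting list, e.g. [{"frame_index": 0, "objects": ["car", "person"]}, ...],
# is stored alongside the global features for indexing.
```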
[See PDF for image]
Fig. 8
Query execution plan against three types of queries
Query Handler
The purpose of the Query Handler is to receive user queries, invoke the corresponding system APIs against those queries, and return the results to the users. Upon receiving the request from users, the
Evaluation and discussion
In this section, we delineate the experimental configuration, encompassing the datasets, performance parameters, and the evaluation experiments. The assessment of video retrieval performance involves testing the accuracy and speed of the retrieved videos.
[See PDF for image]
Fig. 9
Experimental in-house cluster setup
Table 3. Topics for Video Stream Services and IR management. Parts indicate the number of partitions
Topic | Description | Replication Factor | Parts |
|---|---|---|---|
Consumes video streams | 3 | 140 | |
Consumes IR for Indexer | 3 | 140 |
Experimental setup
We set up an in-house distributed cloud environment with ten computing nodes for testing and evaluation. The cluster configuration and specifications for each node are detailed in Fig. 9. The Cluster consists of five types of nodes: Server, Worker Agents, HDFS Server, Solr Server, and Broker Servers. The Server hosts VSAS, Video Stream Producer (VSP), and the Web Server, along with Ambari Server [55] for cluster management and configuration.
The HDFS Server (Agent-1) runs the HDFS Name Node, Spark2 History Server, Yarn Resource Manager, and Zookeeper Server. The Worker Agents consist of four different types of agents that host the system's components, such as the VBDL (Video Stream Consumer, Video Big Data Store, and DEFI) and the VBDAL (Structure Analyzer and deep feature extractor services). The real-time and offline video analytics are carried out by these nodes. For clarification, we used these Worker Agents first for the near real-time evaluation and then for the batch evaluation.
The Broker Server is configured on agents six, seven, and eight to buffer large-scale video streams, and the IR (the deep features). Agent nine deploys the Solr Server and Structured Metastore schema. Some nodes in Fig. 9 have clients and data node services configured. The clients are instances of Zookeeper, Yarn, Spark, and Solr, and the data node is a HDFS node. Table 3 shows the topics for Video Stream Services and IR management.
Datasets
We evaluated the system on three datasets: V3C1 [50], YouTube-8M [49], and Sports-1M [51]. V3C1 consists of 7,475 Vimeo clips with an average duration of eight minutes, spanning around 1000 hours (1.3 TB in size); video metadata such as keywords, descriptions, and titles is provided in a JSON file. YouTube-8M contains approximately 6 million video IDs with high-quality machine-generated annotations drawn from a diverse vocabulary of over 3800 visual entities. Each video within this dataset has a duration ranging from 120 to 500 seconds. Sports-1M contains 1 million YouTube videos categorized into 487 sports categories.
Video Big Data Layer evaluation
We perform a thorough analysis of the various Video Big Data Layer components, including DEFI, the Video Stream Manager, and the Video Big Data Store.
Video Stream Acquisition Service performance evaluation
As the VSAS is agnostic to heterogeneous devices, we registered several heterogeneous devices along with offline video stream sources. These devices have different frame rates, comprising cellphones with a frame rate of 60 fps, and others such as a depth camera [56], an IP camera [57], and RTSP [58] sources with a frame rate of 30 fps. A video file saved on secondary storage was also tested with the VSAS sub-component for offline video analytics.
[See PDF for image]
Fig. 10
Video big data store performance evaluation: (a) Video stream acquisition and synchronization benchmarking (b) VSAS and video stream consumer service performance
The resolution of the acquired frames was fixed by the VSAS; as a result, the size of each captured frame was 623.584 KB. The VSAS transforms the acquired frames into messages at an average speed of 6 ms, which are then sent to the VSP. Moreover, before sending each message to the Broker Server at a rate of 12 ms on average, the VSP compresses it to 140.538 KB on average. Both of these modules are deployed on the Server for collecting and transferring streams to the Broker Servers. The improvements can be seen in Fig. 10(a): we achieved frame rates of 30 and 58 fps on average from heterogeneous real-time and offline sources, which is approximately 36% and 116% faster than the ideal 25 fps for real-time video analytics.
We also assessed the scalability of the system by increasing the number of video sources from five to 140 on the Server. At the same time, this experiment assessed the VSP and VSAS sub-components by increasing the average number of messages to 54 per second for each video stream. Furthermore, we acquired and created streams from 70 devices at 40 messages per stream using the VSAS and VSP modules, which is a significant boost in performance. However, adding more streaming devices results in performance degradation; therefore, we recommend connecting 70 sources per system for better performance. In our case, we can further scale up by adding more brokers and producers.
Table 4. Performance evaluation of VSCS in terms of messages per second on a single thread (ST) and maximum (optimal) number of threads (MT)
Scenario | Mini-Batch Size | Avg. Msgs/sec (ST) | Optimal Max Threads | Avg. Msgs/sec (MT) |
|---|---|---|---|---|
SCN-1 | 4096KB | 50 | 20 | 814 |
SCN-2 | 6144KB | 88 | 13 | 791 |
SCN-3 | 8192KB | 101 | 9 | 744 |
SCN-4 | 10240KB | 166 | 7 | 808 |
Video Stream Consumer Services performance evaluation
In this section, we evaluate the performance of the Video Stream Consumer Services (VSCS), which is responsible for acquiring video streams from the Broker Server as mini-batches for analytics. In the context of near real-time analytics, the mini-batch size is significant and is determined by the
[See PDF for image]
Fig. 11
Video big data store performance evaluation: (a) Performance of distributed persistent big data store (active data writer) (b) Performance evaluation of passive data reader and writer
Video Big Data Store performance evaluation
We conducted experiments on both the Active Data Reader & Writer and the Passive Data Reader & Writer. The Active Data Writer consumes the video stream from the topic
Likewise, we evaluate the performance of Passive Data Reader and Writer operations (shown in Fig. 11(b)) on batch video data. These operations were conducted for five different batch video sizes, i.e., 1024 MB, 5120 MB, 10240 MB, 15360 MB, and 20480 MB. The outcomes reveal that the write operation outpaces the read operation, and the time disparity for both read and write is proportionate to the volume of batch video data.
Distributed Encoded Deep Feature Indexer evaluation
We evaluated the indexing performance in terms of feature encoding size, concurrent queries and their response time with and without load balancing, the effect of the batch commit operation, and retrieval time. Our findings indicate that the feature encoding in our proposed system performs strongly in all aspects. Figure 12(a) illustrates the comparison of index storage sizes between the deep global features and their encoded counterparts. The size of the global features without encoding exhibits linear growth, while the size of the encoded features remains consistently small even with a linear increase in the number of features. The compact size not only reduces indexing time but also lowers query latency, thereby enhancing the overall system performance. Without encoding, the preferred similarity measure for CNN features is cosine similarity, as outlined in Eq. (7). However, cosine similarity is computationally expensive when dealing with very large numbers of features. The TF-IDF similarity measure of the inverted table not only results in a significant performance boost of the similarity computation but also improves the video retrieval accuracy.
[See PDF for image]
Fig. 12
Distributed encoded deep feature indexer evaluation: (a) Feature index size comparison (b) Feature load balancing (c) Feature Batch size time (d) Feature query vs time
$\mathrm{sim}(\mathbf{q}, \mathbf{f}) = \dfrac{\mathbf{q} \cdot \mathbf{f}}{\lVert \mathbf{q} \rVert \, \lVert \mathbf{f} \rVert} \qquad (7)$
Figure 12(b) shows the query performance of our system. The proposed framework demonstrates commendable query performance against concurrent queries. Moreover, the load balancing mechanism also proves effective in substantially reducing retrieval time. Figure 12(c) shows the feature indexing time on the YouTube-8M, V3C1, and Sports-1M datasets. Specifying the ideal batch size is crucial due to the computational and IO cost associated with commit operations after each batch. If the batch size is too small, more commit operations are required, leading to increased feature indexing time. Conversely, excessively increasing the batch size does not enhance indexing efficiency. The experiments reveal that a batch size ranging from 500 to 1000 is optimal. Additionally, Fig. 12(d) depicts the query and retrieval time in relation to a query feature. Retrieval time escalates with the number of videos in the result. Employing incremental retrieval is a prudent approach for latency reduction.
[See PDF for image]
Fig. 13
Video big data analytics performance evaluation on 1-Million frames of V3C1, Youtube-8m, and Sports-1M datasets: (a) Processing time and scalability testing (b) Accuracy results of keyword, image, and query clip
Video Big Data Analytics performance
We retrieved the top k videos based on a query video clip, image, or keyword, and then calculated the precision, recall, and accuracy as indicated by (8), (9), and (10), where True Positive (TP) represents the number of relevant videos retrieved, False Positive (FP) represents the number of non-relevant videos retrieved, True Negative (TN) represents the number of non-relevant videos not retrieved, and False Negative (FN) represents the number of relevant videos not retrieved.
$\mathrm{Precision} = \dfrac{TP}{TP + FP} \qquad (8)$

$\mathrm{Recall} = \dfrac{TP}{TP + FN} \qquad (9)$

$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN} \qquad (10)$
Figure 13(a) shows the performance of feature and object extraction tasks using over 1 million keyframes from three datasets: Sports-1M, YouTube-8M, and V3C1. For all the datasets, we notice a steady decrease in processing time as the number of computing nodes increases from 1 to 4, which evidences effective parallelism and workload distribution across nodes in our proposed system. For example, the total processing time for the Sports-1M dataset drops from over 600 minutes on a single node to under 200 minutes on four nodes. This threefold reduction shows that our system achieves near-linear scalability in practice. Overall, feature extraction using the VGG-19 model is more time-consuming than object extraction with YOLO. This is probably due to the deeper network structure and greater number of convolutional layers in VGG-19 compared to the optimized real-time design of YOLO.

Figure 13(b) shows the retrieval performance for three query types (text, image, and video clip) on the V3C1, YouTube-8M, and Sports-1M datasets. The evaluation uses four standard metrics: accuracy, precision, recall, and F1-score. Across the datasets, image-based retrieval consistently achieves the highest accuracy. This suggests that static visual features provide a strong semantic match for retrieval tasks. Video clip-based queries usually perform better in recall and F1-score, especially on the YouTube-8M and Sports-1M datasets. This indicates that temporal features play a significant role in capturing relevant content. Text-based queries have relatively lower precision and recall, especially on large datasets like YouTube-8M. This reflects the natural uncertainty in text-based queries and the gap between keywords and visual content. While V3C1 shows fairly balanced results across all query types, YouTube-8M and Sports-1M show more variability, with text-based queries performing relatively worse than image and video clip queries. These differences underscore the importance of modality selection in retrieval systems and highlight the trade-offs between semantic specificity, content structure, and dataset characteristics.
Comparison with state-of-the-art
Our primary focus is on the architectural aspect of CBVR in a cloud environment with a complete end-to-end framework. To the best of our knowledge, there is no existing comparable work in the literature to date. In this section, we provide an in-depth feature-wise comparison of our system with various state-of-the-art approaches in content-based video retrieval, illustrated in Table 1. Our proposed system excels in multiple aspects, showcasing a comprehensive and unified approach to addressing the complexities of large-scale video analytics. Unlike [13], our system adopts a lambda architecture, allowing it to efficiently handle both real-time and batch processing. Its scalability is evident through its scalable system architecture, pluggable framework, and service-oriented design, setting it apart from [44] and [36]. Additionally, it leverages big data stores, distributed messaging, and incremental indexing, addressing the limitations of [38] and other baseline systems. The flexibility to process multi-type features, advanced feature encoding techniques, and support for stream as well as batch processing contribute to its superior performance. Notably, it outperforms [7, 10, 17, 21, 45], and [20] by incorporating a multistream environment and providing robust support for querying by text, image, and clip. The combination of these features positions our system as a highly efficient, versatile, and scalable solution for content-based video retrieval in large-scale environments.
Moreover, Table 5 presents a comprehensive performance comparison of our system with several state-of-the-art approaches in content-based video retrieval. Our proposed system exhibits superior performance with a processing time of 10.91 s, outperforming baseline systems such as [44] (baseline 1) with a processing time of 500 s and [36] (baseline 2) with 12.8 s. Notably, it achieves the highest accuracy (0.93) among the compared approaches, with a precision of 0.67. The enhanced performance can be attributed to the unified indexing mechanism (DEFI), which efficiently encodes deep features using a quantization method inspired by [48]. This encoding strategy not only avoids the drawbacks of fixed-constant quantization but also addresses the high dimensionality and dense nature of deep features. Additionally, the integration of Apache Spark, VidRDD, and a fine-tuned load-balancing mechanism contributes to the system's scalability and efficient distributed processing. The adoption of VGG-19 and You Only Look Once (YOLO)-V3 for deep feature extraction ensures the system's ability to capture rich global spatial and object features, leading to improved accuracy in content-based video retrieval. Therefore, the proposed system emerges as a robust and efficient solution for large-scale video analytics, offering significant advancements over existing state-of-the-art techniques.
Table 5. Platform performance comparison with State of the Art
Approach | Processing Time | Accuracy | Precision |
|---|---|---|---|
[44] | 500s | 0.89 | 0.92 |
[36] | 12.8s | 72.37 | – |
[59] | 33.02s | – | 0.8 |
[60] | 1,010s | – | 75.6 |
Proposed system | 10.91s | 0.93 | 0.67
Conclusion and future work
This paper introduces a feature-rich content-based video retrieval (CBVR) system designed for offline and near real-time video retrieval from diverse sources, such as IP cameras. Tailored for cloud environments, the system emphasizes robustness, speed, and scalability. Built on lambda architecture principles, it leverages distributed deep learning and in-memory computation for efficient video analytics and processing. The VidRDD abstraction serves as the core unit for distributed in-memory video analytics.
To enhance video retrieval, the system computes multi-type features and supports seamless integration, indexing, and retrieval of these features through an incremental distributed index. Retrieval is further optimized by generating a query execution plan before the actual query is triggered. The system's distributed data management and in-memory computation technologies ensure efficient operations.
We evaluated the system using three benchmark datasets, YouTube-8M, Sports-1M, and V3C1, focusing on key bottlenecks. The results demonstrate satisfactory performance in terms of scalability, efficiency, computation time, and accuracy. We believe this work contributes to the research community and industry by offering a scalable and efficient video big data analytics solution.
Acknowledgements
Not applicable.
Author Contributions
MNK developed the research idea, the research design, and wrote the main manuscript. AA contributed to the methodology, algorithms, and visualizations in the manuscript. YKL reviewed the manuscript and provided feedback on research design, analysis, and writing. All authors contributed to the editing and proofreading. All authors read and approved the final manuscript.
Funding
This research was supported by the Institute of Information and Communications Technology Planning and Evaluation (IITP), funded by the Korea Government (MSIT), under Grant 2021-0-02068 (Artificial Intelligence Innovation Hub) and Grant RS-2022-00155911 (Artificial Intelligence Convergence Innovation Human Resources Development, Kyung Hee University).
Data availability
No datasets were generated or analyzed during the current study. The source code is available at https://github.com/lambdaclovr/lambdaclovr.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Abbreviations
CBVR: Content-based Video Retrieval
CNN: Convolutional Neural Network
DEFI: Distributed Encoded Deep Feature Indexer
FeRDD: FeatureRDD
IR: Intermediate Results
RDD: Resilient Distributed Dataset
VBDAL: Video Big Data Analytics Layer
VBDL: Video Big Data Layer
VidRDD: Video Resilient Distributed Dataset
VSAPS: Video Stream Acquisition and Producer Service
VSAS: Video Stream Acquisition Service
VSCS: Video Stream Consumer Services
VSP: Video Stream Producer
WSL: Web Service Layer
YOLO: You only look once
Supplementary Information
The online version contains supplementary material available at https://doi.org/10.1186/s40537-025-01308-1.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Mohan A, Gauen K, Lu Y-H, Li WW, Chen X. Internet of video things in 2030: a world with many cameras. In: 2017 IEEE Int Symp Circuits Syst. (ISCAS), 2017. pp. 1–4. https://doi.org/10.1109/ISCAS.2017.8050296.
2. Lam, C. Hadoop in Action; 2010; Shelter Island, NY, Manning Publications Co.:
3. Zaharia, M; Xin, RS; Wendell, P; Das, T; Armbrust, M; Dave, A et al. Apache spark: a unified engine for big data processing. Commun ACM; 2016; 59,
4. Navarrete, E; Nehring, A; Schanze, S; Ewerth, R; Hoppe, A. A closer look into recent video-based learning research: A comprehensive review of video characteristics, tools, technologies, and learning effectiveness. Int J Artif Intell Educ; 2025; [DOI: https://dx.doi.org/10.1007/s40593-025-00481-x]
5. Dubey, SR. A decade survey of content based image retrieval using deep learning. IEEE Trans Circuits Syst Video Technol; 2022; 32,
6. Pareek, P; Thakkar, A. A survey on video-based human action recognition: recent updates, datasets, challenges, and applications. Artif Intell Rev; 2021; 54,
7. Ding, S; Li, G; Li, Y; Li, X; Zhai, Q; Champion, AC et al. Survsurf: human retrieval on large surveillance video data. Multimedia Tools Appl; 2017; 76,
8. Song J, Yang Y, Huang Z, Shen HT, Hong R. Multiple feature hashing for real-time large scale near-duplicate video retrieval. In: Proc 19th ACM Int Conf Multi. 2011. p.423–432. ACM
9. Lai, Y-H; Yang, C-K. Video object retrieval by trajectory and appearance. IEEE Trans Circuits Syst Video Technol; 2014; 25,
10. Gao, G; Liu, CH; Chen, M; Guo, S; Leung, KK. Cloud-based actor identification with batch-orthogonal local-sensitive hashing and sparse representation. IEEE Trans Multimedia; 2016; 18,
11. Dong J, Li X, Xu C, Yang X, Yang G, Wang X, et al. Dual Encoding for Video Retrieval by Text. IEEE Trans Pattern Anal Mach Intell. 2021;1. https://doi.org/10.1109/TPAMI.2021.3059295.
12. Zhang, C; Lin, Y; Zhu, L; Liu, A; Zhang, Z; Huang, F. Cnn-vwii: an efficient approach for large-scale video retrieval by image queries. Pattern Recogn Lett; 2019; 123, pp. 82-8. [DOI: https://dx.doi.org/10.1016/j.patrec.2019.03.015]
13. Amato G, Bolettieri P, Carrara F, Falchi F, Gennaro C, Messina N, Vadicamo L, Vairo C. Visione: A large-scale video retrieval system with advanced search functionalities. In: Proceedings of the 2023 ACM International Conference on Multimedia Retrieval. ICMR ’23, pp. 649–653. Association for Computing Machinery, New York, NY, USA 2023 https://doi.org/10.1145/3591106.3592226 .
14. Fernandez-Beltran, R; Pla, F. Latent topics-based relevance feedback for video retrieval. Pattern Recogn; 2016; 51, pp. 72-84. [DOI: https://dx.doi.org/10.1016/j.patcog.2015.09.007]
15. Li X, Zhou F, Xu C, Ji J, Yang G. SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries. IEEE Trans Multi. 2020;1. https://doi.org/10.1109/TMM.2020.3042067.
16. Sivic J, Zisserman A. Video google: a text retrieval approach to object matching in videos. In: Null, 2003;1470 IEEE
17. Mühling, M; Korfhage, N; Müller, E; Otto, C; Springstein, M; Langelage, T et al. Deep learning for content-based video retrieval in film and television production. Multimedia Tools Appl; 2017; 76,
18. Mallick, AK; Mukhopadhyay, S. Video retrieval framework based on color co-occurrence feature of adaptive low rank extracted keyframes and graph pattern matching. Inf Process Manage; 2022; [DOI: https://dx.doi.org/10.1016/j.ipm.2022.102870]
19. ElAraby, M; Shams, M. Face retrieval system based on elastic web crawler over cloud computing. Multimed Tools Appl; 2021; [DOI: https://dx.doi.org/10.1007/s11042-020-10271-3]
20. Zhu N, He W, Hua Y, Chen Y. Marlin: Taming the big streaming data in large scale video similarity search. In: 2015 IEEE Int Conf on Big Data (Big Data), 2015;1755–1764. IEEE.
21. Wang, H; Zhu, F; Xiao, B; Wang, L; Jiang, Y-G. Gpu-based mapreduce for large-scale near-duplicate video retrieval. Multimedia Tools Appl; 2015; 74,
22. Mandal A, Sinaeepourfard A, Naskar SK. Vda: Deep learning based visual data analysis in integrated edge to cloud computing environment. In: Adjunct Proc 2021 Int Conf Dist Comput Netw. 2021;7–12
23. Shang L, Yang L, Wang F, Chan K-P, Hua X-S. Real-time large scale near-duplicate web video retrieval. In: Proc 18th ACM Int Conf Multi. 2010;531–540 ACM
24. API A, Templeton D, Brobst R. Dist Res Manage Appl. API 2.0 2008.
25. Marz, N; Warren, J. Big Data: Principles and Best Practices of Scalable Realtime Data Systems; 2015; 1 USA, Manning Publications Co.:
26. Wang S, Zhao M, Chen C, Kong J, Liu Z. Application of lambda architecture in intelligent operation and maintenance system of rail transit vehicles. In: 2022 Int Conf Artif Int Every (AIE), 2022;568–573. https://doi.org/10.1109/AIE57029.2022.00113.
27. Hassan, AA; Hassan, TM. Real-time big data analytics for data stream challenges: an overview. Eur J Inf Tech Comput Sci; 2022; 2,
28. Essaidi A, Bellafkih M. A new big data architecture for analysis: The challenges on social media. Int J Adv Comput Sci Appl. 2023;14(3):
29. Deepthi, BG; Rani, KS; Krishna, PV; Saritha, V. An efficient architecture for processing real-time traffic data streams using apache flink. Multi Tools Appl.; 2024; 83,
30. Garriga M, Monsieur G, Tamburri D. In: Liebregts, W., Heuvel, W.-J., Born, A. (eds.) Big Data Architectures, pp. 63–76. Springer, Cham. 2023. https://doi.org/10.1007/978-3-031-19554-9_4 .
31. Kreps J. Questioning the Lambda Architecture. Published on O’Reilly 2014. https://www.oreilly.com/radar/questioning-the-lambda-architecture/. Accessed 2025-05-26.
32. Kreps J, Narkhede N, Rao J, et al.: Kafka: A distributed messaging system for log processing. In: Proc NetDB, 2011;11:1–7
33. Sun, X; Long, X; He, D; Wen, S; Lian, Z. VSRNet: end-to-end video segment retrieval with text query. Pattern Recogn; 2021; 119, [DOI: https://dx.doi.org/10.1016/j.patcog.2021.108027] 108027.
34. Sivic J, Everingham M, Zisserman A. "who are you?"-learning person specific classifiers from video. In: 2009 IEEE Conf Comput Vision and Pattern Recogn. 2009;1145–1152. IEEE.
35. Araujo A, Chaves J, Angst R, Girod B. Temporal aggregation for large-scale query-by-image video retrieval. In: 2015 IEEE Int Conf Image Proc (ICIP), 2015;1519–1522. IEEE.
36. Saoudi, EM; Jai-Andaloussi, S. A distributed content-based video retrieval system for large datasets. J Big Data; 2021; 8,
37. Mikolajczyk, K; Schmid, C. Scale & affine invariant interest point detectors. Int J Comput Vision; 2004; 60,
38. Lin, F-C; Ngo, H-H; Dow, C-R. A cloud-based face video retrieval system with deep learning. J Supercomput; 2020; [DOI: https://dx.doi.org/10.1007/s11227-019-03123-x]
39. Al Kabary I, Schuldt H. Scalable sketch-based sport video retrieval in the cloud. In: Int Conf Cloud Comput. 2020;226–241. Springer.
40. Chen, H; Hu, C; Lee, F; Lin, C; Yao, W; Chen, L; Chen, Q. A Supervised Video Hashing Method Based on a Deep 3D Convolutional Neural Network for Large-Scale Video Retrieval. Sensors; 2021; 21,
41. Liu, R; Wei, S; Zhao, Y; Yang, Y. Indexing of the cnn features for the large scale image search. Multimedia Tools Appl; 2018; 77,
42. Amato G, Carrara F, Falchi F, Gennaro C, Vadicamo L. Large-scale instance-level image retrieval. Inf Proc Manage. 2019;102100. https://doi.org/10.1016/j.ipm.2019.102100.
43. Anuranji, R; Srimathi, H. A supervised deep convolutional based bidirectional long short term memory video hashing for large scale video retrieval applications. Digit Signal Process; 2020; 102, [DOI: https://dx.doi.org/10.1016/j.dsp.2020.102729] 102729.
44. Phan, T-C; Phan, A-C; Cao, H-P; Trieu, T-N. Content-based video big data retrieval with extensive features and deep learning. Appl Sci; 2022; [DOI: https://dx.doi.org/10.3390/app12136753]
45. Lv J, Wu B, Yang S, Jia B, Qiu P. Efficient large scale near-duplicate video detection base on spark. In: 2016 IEEE Int Conf Big Data (Big Data), 2016;957–962. IEEE.
46. Gunderson SH. Snappy: a fast compressor/decompressor.
47. Rivest, R. Rfc 1321: The md5 message-digest algorithm, April 1992; 2014; Status, INFORMATIONAl:
48. Amato G, Debole F, Falchi F, Gennaro C, Rabitti F. Large scale indexing and searching deep convolutional neural network features. In: Int Conf Big Data Anal Know Disc. pp. 2016;213–224. Springer.
49. Abu-El-Haija S, Kothari N, Lee J, Natsev A, Toderici G, Varadarajan B, Vijayanarasimhan S. Youtube-8m: A large-scale video classification benchmark. ArXiv abs/1609.0. 2016.
50. Berns F, Rossetto L, Schoeffmann K, Beecks C, Awad G. V3c1 dataset: An evaluation of content characteristics. In: Proceedings of the 2019 on International Conference on Multimedia Retrieval. ICMR ’19, pp. 334–338. Assoc Comput Mach. New York, NY, USA 2019. https://doi.org/10.1145/3323873.3325051 .
51. Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Fei-Fei L. Large-scale video classification with convolutional neural networks. In: Proc IEEE Conf Comput Vision and Pattern Rec (CVPR), 2014;1725–1732.
52. Mattmann, CA; Zitting, JL. Tika in Action; 2012; Shelter Island, NY, Manning.
53. Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. 2014.
54. Redmon J, Farhadi A. Yolo9000: better, faster, stronger. In: Proc IEEE Conf Comput Vision and Pattern Rec. 2017;7263–7271.
55. Wadkar, S; Siddalingaiah, M; Venner, J. Pro Apache Hadoop; 2014; 2 USA, Apress:
56. Zhang, Z. Microsoft Kinect sensor and its effect. IEEE Multimed; 2012; 19, pp. 4-12. [DOI: https://dx.doi.org/10.1109/MMUL.2012.24]
57. Yang M, Tham JY, Wu D, Goh KH. Cost effective ip camera for video surveillance. In: 2009 4th IEEE Conf Ind Elect Appl. 2009;2432–2435. https://doi.org/10.1109/ICIEA.2009.5138638.
58. Schulzrinne H, Rao A, Lanphier R. RFC2326: Real Time Streaming Protocol (RTSP). RFC Editor, USA 1998.
59. Xu, W; Uddin, MA; Dolgorsuren, B; Akhond, MR; Khan, KU; Hossain, MI et al. Similarity estimation for large-scale human action video data on spark. Appl Sci; 2018; [DOI: https://dx.doi.org/10.3390/app8050778]
60. Uddin, MA; Joolee, JB; Alam, A; Lee, Y-K. Human action recognition using adaptive local motion descriptor in spark. IEEE Access; 2017; 5, pp. 21157-67. [DOI: https://dx.doi.org/10.1109/ACCESS.2017.2759225]
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”).