Abstract - Edge computing is a decentralized computing paradigm that brings computation and data storage closer to data sources, enabling faster processing and reduced latency. This approach is critical for real-time applications, but it introduces significant challenges in managing resources efficiently in edge-cloud environments. Issues such as increased response times, inefficient autoscaling, and suboptimal task scheduling arise due to the dynamic and resource-constrained nature of edge nodes. Kubernetes, a widely used container orchestration platform, provides basic autoscaling and scheduling mechanisms, but its default configurations often fail to meet the stringent performance requirements of edge environments, especially in lightweight implementations like KubeEdge. This work presents an ILP-optimized, LSTM-based approach for autoscaling and scheduling in edge-cloud environments. The LSTM model forecasts resource demands using both real-time and historical data, enabling proactive resource allocation, while the integer linear programming (ILP) framework optimally assigns workloads and scales containers to meet predicted demands. By jointly addressing autoscaling and scheduling challenges, the proposed method improves response time and resource utilization. The experimental setup is built on a KubeEdge testbed deployed across 11 nodes (1 cloud node and 10 edge nodes). Experimental results show that the ILP-enhanced framework achieves a 12.34% reduction in response time and a 7.85% increase in throughput compared to the LSTM-only approach.
Keywords - autoscaling, edge computing, ILP optimization, Kubernetes, LSTM, resource efficiency, scheduling, throughput
1. Introduction
Edge computing is an approach to data processing that performs computation and storage closer to the data source rather than relying on centralized cloud servers. This decentralized model enables real-time data analysis, reduces latency, and improves resource utilization, making it suitable for applications such as autonomous systems, smart cities, and industrial automation.
The unique requirements of edge computing environments, including low latency responses and efficient resource utilization, create significant challenges in workload management and resource optimization.
KubeEdge is an open-source framework designed to extend Kubernetes functionality to edge computing environments, enabling efficient management of containerized applications across distributed edge nodes. It bridges the gap between cloud infrastructure and edge devices, facilitating seamless deployment and orchestration of workloads in resource-constrained and geographically dispersed locations.
The architecture of KubeEdge includes components optimized for edge scenarios, such as the CloudCore module, which manages edge node control, configuration, and communication with the Kubernetes API server at the cloud level,...
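To make the joint forecasting-and-assignment idea from the abstract concrete, the sketch below illustrates the workload-assignment step in miniature: given per-task resource demands (standing in for the LSTM's forecasts) and per-node capacities, it picks the assignment that minimizes total predicted response time without exceeding any node's capacity. This is a toy brute-force search with hypothetical task, node, and latency values, not the paper's ILP implementation; a real system would hand the same objective and constraints to an ILP solver.

```python
from itertools import product

# Hypothetical inputs: predicted CPU demand per task (the role the LSTM
# forecasts play), CPU capacity per edge node, and the predicted response
# time (ms) of running each task on each node.
tasks = {"t1": 2, "t2": 3, "t3": 1}
nodes = {"edge-1": 4, "edge-2": 4}
latency = {
    ("t1", "edge-1"): 10, ("t1", "edge-2"): 14,
    ("t2", "edge-1"): 12, ("t2", "edge-2"): 9,
    ("t3", "edge-1"): 7,  ("t3", "edge-2"): 8,
}

def solve():
    """Exhaustively search all task-to-node assignments (feasible only
    for toy sizes) and return the cheapest feasible one."""
    best, best_cost = None, float("inf")
    task_ids = list(tasks)
    for choice in product(nodes, repeat=len(task_ids)):
        assign = dict(zip(task_ids, choice))
        # Capacity constraint: total demand placed on a node must not
        # exceed that node's capacity.
        load = {n: 0 for n in nodes}
        for t, n in assign.items():
            load[n] += tasks[t]
        if any(load[n] > nodes[n] for n in nodes):
            continue
        # Objective: minimize total predicted response time.
        cost = sum(latency[(t, n)] for t, n in assign.items())
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

assignment, total_ms = solve()
print(assignment, total_ms)
# → {'t1': 'edge-1', 't2': 'edge-2', 't3': 'edge-1'} 26
```

An ILP solver scales this to realistic problem sizes by expressing the same structure as binary decision variables x[t][n], a linear objective over the latency coefficients, and linear capacity constraints per node.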





