With the rapid advancement of cloud computing, edge computing, and the Internet of Things, traditional routing protocols such as OSPF, which rely solely on network topology and link state while neglecting the status of computing resources, struggle to meet the network-computing synergy demands of the Computing Power Network (CPN). Existing reinforcement learning-based routing approaches, even those incorporating deep learning strategies, still suffer from issues such as resource imbalance. To address this, this study proposes a reinforcement learning-based computing-aware routing path selection method, the Computing-Aware Routing-Reinforcement Learning (CAR-RL) algorithm, which coordinates network and computing power resources through joint multi-factor evaluation of computing and network metrics. The algorithm constructs a multi-factor weighted Markov Decision Process (MDP) and selects the optimal computing-aware routing path by perceiving network traffic and computing power status in real time. Experiments on the GN4-3N network topology, simulated with Mininet and Kubernetes (K8s), show that compared with the Q-Learning, DDPG, and CEDRL algorithms, CAR-RL improves average packet loss rate, average latency, and average throughput by 24.7%, 35.6%, and 23.1%, respectively. This research not only provides a reference technical implementation path for computing-aware route selection and optimization in computing power networks but also advances the efficient integration of network and computing resources.
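The abstract describes a multi-factor weighted MDP whose reward jointly scores network state (e.g., latency, loss, bandwidth utilization) and computing power state (e.g., CPU and memory load). The paper's actual state encoding, weights, and learning architecture are not given here, so the following is a minimal Python sketch, under assumed metrics and a tabular Q-learning backbone, of how such a joint reward and hop-by-hop policy might be wired together. All names, weights, and metric choices are illustrative assumptions, not CAR-RL's real definition.

```python
import random
from dataclasses import dataclass

@dataclass
class LinkState:
    latency_ms: float      # measured link latency
    loss_rate: float       # packet loss rate in [0, 1]
    bandwidth_util: float  # link bandwidth utilization in [0, 1]

@dataclass
class NodeState:
    cpu_util: float        # CPU utilization of the computing node in [0, 1]
    mem_util: float        # memory utilization in [0, 1]

# Hypothetical weights; the paper's actual weighting scheme is not given here.
W_LAT, W_LOSS, W_BW, W_CPU, W_MEM = 0.3, 0.2, 0.2, 0.2, 0.1

def reward(link: LinkState, node: NodeState) -> float:
    """Multi-factor reward: penalize congested links and overloaded
    computing nodes so the agent prefers paths with spare capacity."""
    network_cost = (W_LAT * link.latency_ms / 100.0
                    + W_LOSS * link.loss_rate
                    + W_BW * link.bandwidth_util)
    compute_cost = W_CPU * node.cpu_util + W_MEM * node.mem_util
    return -(network_cost + compute_cost)

# Tabular Q-learning over (node, next-hop) pairs -- a stand-in for
# whatever function approximator CAR-RL actually uses.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
q_table: dict[tuple[str, str], float] = {}

def choose_next_hop(node: str, neighbors: list[str]) -> str:
    """Epsilon-greedy next-hop selection at a routing node."""
    if random.random() < EPSILON:                      # explore
        return random.choice(neighbors)
    return max(neighbors, key=lambda n: q_table.get((node, n), 0.0))

def update(node: str, hop: str, r: float,
           next_node: str, next_neighbors: list[str]) -> None:
    """Standard Q-learning update after observing reward r for taking hop."""
    best_next = max((q_table.get((next_node, n), 0.0)
                     for n in next_neighbors), default=0.0)
    old = q_table.get((node, hop), 0.0)
    q_table[(node, hop)] = old + ALPHA * (r + GAMMA * best_next - old)
```

In the paper's setting the agent would be trained on traffic and telemetry gathered from the Mininet/K8s testbed; here the reward simply penalizes congested links and overloaded nodes, so that the learned Q-values steer traffic toward paths with spare network and compute capacity rather than toward the shortest topological route.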
Keywords
Deep learning; Internet of Things; Adaptability; Markov processes; Communications traffic; Optimization; Real time; Edge computing; Manufacturing; Machine learning; Energy resources; Network topologies; Simulation; Network management systems; Experiments; Routing (telecommunications); Route planning; Cloud computing; Decision making; Neural networks; Network latency; Algorithms; Resource management
