Split-Federated Reinforcement Learning for IoMT Data and Task Management in Edge-Fog-Cloud Infrastructure
Conference: communication with proceedings at an international conference
Workload management in edge-fog-cloud infrastructure for the Internet of Medical Things (IoMT) incurs significant costs and difficulties, primarily due to the complexity of managing tasks and the unpredictability of the data generated by IoMT devices while processing real-time patient health states. Deep Reinforcement Learning (DRL) shows promise for efficiently addressing dynamic data placement and task offloading, but it suffers from poor scalability in distributed multi-node IoT environments. Combining Federated Learning (FL) with DRL offers a potential remedy, yet this combination faces coordination complexity, privacy concerns, high computational and communication costs, and scalability issues. In this paper, a new solution called Split Federated Hybrid Decision-based DRL (SF-HDRL) is introduced to tackle these challenges. Our algorithm combines continuous and discrete actions within a split actor-critic architecture to jointly optimize data placement and node selection for task execution while minimizing IoMT operational costs. Simulations indicate that the proposed SF-HDRL algorithm outperforms existing optimal non-federated and federated approaches in scalability and workload-management cost.