Title Deep Q-network-based RF Link Selection for On-device Edge AI in Dynamic Maritime Environments
Authors Shrutika Sinha; Ari Hwang; Dong-Jin Yoon; Soo-Hyun Park
DOI https://doi.org/10.5573/ieie.2026.63.2.73
Page pp.73-85
ISSN 2287-5026
Keywords DQN; Link selection; Maritime; On-device AI; Wireless
Abstract Cloud-based AI for wireless control assumes stable backhaul connectivity and data distributions that change slowly over time. In maritime relay networks, however, an AUV-buoy-vessel topology experiences non-stationary ocean channels, hardware drift, and "hidden congestion," where high RSSI masks severe latency, so the data seen after deployment can diverge sharply from what any cloud-trained model expects. This paper studies on-device deep reinforcement learning for RF link selection under these conditions, formulating the Wi-Fi/LTE switching task as a Markov Decision Process whose state includes signal strength, end-to-end latency, energy consumption, and a line-of-sight flag, and solving it with a Deep Q-Network (DQN) that runs and updates directly on an NVIDIA Jetson Orin. In simulated tests on the Orin featuring dynamic sea conditions and misleading high-RSSI congestion, the DQN maintained stable latency, whereas a conventional RSSI-based heuristic frequently failed and exhibited significant jitter. Compared to A2C, the DQN also converged faster, especially when penalty events were infrequent but severe. These results show that when the environment is both unpredictable and poorly observable from the cloud, shifting adaptation onto the edge device via on-device RL can provide steadier maritime link control than cloud-centric or fixed policies.
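The abstract's MDP formulation can be sketched in code. The following is a minimal, illustrative sketch only: the reward shaping, feature names, penalty thresholds, and the linear Q-approximation (a stand-in for the paper's actual deep network) are all assumptions, not the authors' implementation.

```python
import random

# Hypothetical action set for the Wi-Fi/LTE switching task.
ACTIONS = ["WIFI", "LTE"]

def reward(latency_ms, energy_j, switched):
    """Illustrative reward: penalize latency and energy; heavily penalize
    "hidden congestion" (high latency despite good RSSI); discourage
    excessive link flapping. Coefficients are assumed, not from the paper."""
    r = -0.01 * latency_ms - 0.1 * energy_j
    if switched:
        r -= 0.5          # cost of switching links
    if latency_ms > 200:  # assumed congestion threshold
        r -= 5.0          # infrequent-but-severe penalty event
    return r

class LinearQ:
    """Tiny linear Q-function with a TD(0) update, standing in for the DQN.
    State = [rssi, latency, energy, line_of_sight] as in the abstract."""
    def __init__(self, n_features, n_actions, lr=0.01, gamma=0.9):
        self.w = [[0.0] * n_features for _ in range(n_actions)]
        self.lr, self.gamma = lr, gamma

    def q(self, s, a):
        return sum(wi * si for wi, si in zip(self.w[a], s))

    def act(self, s, eps=0.1):
        # Epsilon-greedy action selection over the two links.
        if random.random() < eps:
            return random.randrange(len(self.w))
        return max(range(len(self.w)), key=lambda a: self.q(s, a))

    def update(self, s, a, r, s_next):
        # One-step TD update toward r + gamma * max_a' Q(s', a').
        target = r + self.gamma * max(self.q(s_next, a2)
                                      for a2 in range(len(self.w)))
        td = target - self.q(s, a)
        self.w[a] = [wi + self.lr * td * si
                     for wi, si in zip(self.w[a], s)]
```

A real on-device agent would replace `LinearQ` with a neural network plus a replay buffer and target network, but the state/action/reward interface above mirrors the formulation the abstract describes.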