Robust Deep Reinforcement Learning Scheduling via Weight Anchoring

Authors: S. Gracla, E. Beck, C. Bockelmann, A. Dekorsy
Abstract:

Questions remain on the robustness of data-driven learning methods when crossing the gap from simulation to reality. We utilize weight anchoring, a method known from continual learning, to cultivate and fixate desired behavior in neural networks. Weight anchoring may be used to find a solution to a learning problem that lies near the solution of another learning problem. Thereby, learning can be carried out in optimal environments without neglecting or unlearning desired behavior. We demonstrate this approach on the example of learning mixed QoS-efficient discrete resource scheduling with infrequent priority messages. Results show that this method provides performance comparable to the state of the art of augmenting a simulation environment, alongside significantly increased robustness and steerability.
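
For readers unfamiliar with weight anchoring, the following is a minimal, illustrative sketch of the general idea described in the abstract: a quadratic penalty keeps the network's weights close to those of a previously trained "anchor" solution while learning continues on a new task, as is common in continual-learning methods. All names here (Policy, anchoring_penalty, lambda_anchor, the uniform importance weighting, and the dummy objective) are assumptions for illustration only and do not reproduce the paper's exact loss or scheduling model.

```python
import torch
import torch.nn as nn


class Policy(nn.Module):
    """Small stand-in network for a scheduling policy (illustrative only)."""
    def __init__(self, num_inputs: int = 8, num_actions: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_inputs, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def anchoring_penalty(model: nn.Module, anchor: dict, importance: dict) -> torch.Tensor:
    """Quadratic distance between current parameters and anchored parameters,
    optionally weighted per parameter (e.g., by an importance estimate)."""
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        penalty = penalty + (importance[name] * (param - anchor[name]) ** 2).sum()
    return penalty


# Stage 1: train on the behavior to be preserved (omitted here), then store anchors.
policy = Policy()
anchor = {n: p.detach().clone() for n, p in policy.named_parameters()}
importance = {n: torch.ones_like(p) for n, p in policy.named_parameters()}  # placeholder weighting

# Stage 2: continue training on a new objective while staying near the anchored solution.
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
lambda_anchor = 1.0  # trade-off between the new-task loss and remaining near the anchor

for _ in range(100):
    states = torch.randn(32, 8)               # dummy batch standing in for environment data
    task_loss = policy(states).pow(2).mean()  # placeholder for the actual RL objective
    loss = task_loss + lambda_anchor * anchoring_penalty(policy, anchor, importance)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```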

Document type: Journal Paper
Publication: IEEE, October 2022
Journal: IEEE Communications Letters
Files:
00_paper_anchoring_preprint.pdf (292 KB)
BibTeX