Title |
Soft-constrained Deep Reinforcement Learning Controller using Non-linear Model Predictive Filter for Real-time Safe Autonomous Driving |
Authors |
유윤하(Yoon-Ha Ryu); Oualid Doukhi; 전양배(Yang Bae Jeon); 이덕진(Deok-Jin Lee)
DOI |
https://doi.org/10.5573/ieie.2024.61.3.105 |
Keywords |
Safety-critical; Optimal control; Data-driven control; Reinforcement learning; Non-linear system
Abstract |
Recently, the robotics community has shown increasing interest in deep reinforcement learning for offline autonomous driving. Accurately perceiving an unstructured environment is nearly impossible for online autonomous driving, and real-time operation depends heavily on the available computational power. In this study, we combine a non-linear model predictive filter (a modified version of the traditional non-linear model predictive controller, a hallmark of online autonomous driving) with a multi-objective deep reinforcement learning-based controller in both serial and parallel configurations. By focusing solely on safety in the cost function of the non-linear model predictive controller, we compute an optimized trajectory that keeps the control input of the learning-based controller within a safe boundary. The non-linear model predictive filter presented in this research is implemented with CasADi and IPOPT, making it suitable for tackling not only convex but also intricate non-linear optimization problems, thereby guaranteeing safe autonomous driving across diverse scenarios. Training of the learning-based controller and the model predictive filter was conducted in the open-source simulator GAZEBO. Finally, the efficacy of the proposed algorithm was validated by deploying it on an autonomous driving platform.
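
The filtering idea summarized above can be illustrated with a short sketch. The following is a minimal example, not the paper's formulation: it assumes a kinematic bicycle model, a 10-step horizon, a single circular obstacle, and illustrative actuator limits (all of these are assumptions, not values from the paper). Using CasADi's Opti interface with IPOPT, the filter solves for the dynamically feasible control closest to the action proposed by the RL policy, so its cost concerns only safety-related deviation rather than the driving objective itself.

```python
# Minimal sketch of a model-predictive safety filter using CasADi + IPOPT.
# Assumed (not from the paper): kinematic bicycle dynamics, horizon N = 10,
# one circular obstacle, and the acceleration/steering bounds below.
import casadi as ca

N, dt, L = 10, 0.1, 0.33          # horizon, step [s], wheelbase [m] (assumed)

opti = ca.Opti()
X = opti.variable(4, N + 1)       # states: x, y, heading, speed
U = opti.variable(2, N)           # controls: acceleration, steering angle
x0 = opti.parameter(4)            # measured vehicle state
u_rl = opti.parameter(2)          # action proposed by the RL controller
obs = opti.parameter(3)           # obstacle: x, y, safety radius

opti.subject_to(X[:, 0] == x0)
for k in range(N):
    # Euler-discretized kinematic bicycle dynamics
    opti.subject_to(X[0, k+1] == X[0, k] + dt * X[3, k] * ca.cos(X[2, k]))
    opti.subject_to(X[1, k+1] == X[1, k] + dt * X[3, k] * ca.sin(X[2, k]))
    opti.subject_to(X[2, k+1] == X[2, k] + dt * X[3, k] / L * ca.tan(U[1, k]))
    opti.subject_to(X[3, k+1] == X[3, k] + dt * U[0, k])
    # non-convex safety constraint: stay outside the obstacle radius
    opti.subject_to((X[0, k+1] - obs[0])**2 + (X[1, k+1] - obs[1])**2
                    >= obs[2]**2)
opti.subject_to(opti.bounded(-3.0, U[0, :], 3.0))   # accel limits (assumed)
opti.subject_to(opti.bounded(-0.5, U[1, :], 0.5))   # steering limits (assumed)

# The cost penalizes only deviation from the learned action, so the filter
# leaves the RL driving objective untouched and enforces safety alone.
opti.minimize(ca.sumsqr(U[:, 0] - u_rl) + 1e-3 * ca.sumsqr(U))
opti.solver("ipopt", {"print_time": False}, {"print_level": 0})

def filter_action(state, rl_action, obstacle):
    """Project the RL action onto the safe set; return the first control.

    Raises if IPOPT finds no feasible trajectory, which a caller would
    handle with a fallback (e.g. braking)."""
    opti.set_value(x0, state)
    opti.set_value(u_rl, rl_action)
    opti.set_value(obs, obstacle)
    sol = opti.solve()
    return sol.value(U[:, 0])
```

In a serial configuration such as the one the abstract describes, a `filter_action`-style call would wrap every action the RL policy emits before it reaches the actuators; in a parallel configuration, the filter might instead run alongside the policy and override only actions flagged as unsafe. Both usages are sketched here as plausible readings, not as the paper's exact architecture.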