In our RSS 2024 paper, we present a novel adversarial attack method designed to identify failure cases in any type of locomotion controller, including state-of-the-art reinforcement learning (RL)-based controllers. Traditional heuristic tests, such as standard benchmarks or human experience, often fall short in uncovering these vulnerabilities. Our approach reveals the weaknesses of black-box neural network controllers, providing insights that can be leveraged to enhance robustness through retraining.
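The paper's concrete formulation is not reproduced here, but the core idea of a black-box attack on a controller can be sketched as a derivative-free search for inputs that maximize a failure score. Below is a minimal illustrative sketch (not the paper's method): `toy_instability` is a hypothetical stand-in for rolling out the controller in simulation and scoring instability, and a simple (1+1) evolutionary search looks for a destabilizing command sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_instability(commands):
    # Hypothetical stand-in for a simulated rollout of the black-box
    # controller: returns a scalar "instability" score for a sequence of
    # velocity commands. A real attack would run the policy in a physics
    # simulator and measure falls or tracking error instead.
    # This toy score rewards rapid command switching.
    return float(np.sum(np.abs(np.diff(commands))) + 0.1 * np.sum(commands**2))

def black_box_attack(cost_fn, horizon=20, iters=300, sigma=0.3, bound=1.0):
    """(1+1) evolutionary search for a command sequence maximizing cost_fn."""
    best = rng.uniform(-bound, bound, size=horizon)
    best_cost = cost_fn(best)
    for _ in range(iters):
        # Perturb the current best sequence and keep it if more destabilizing.
        cand = np.clip(best + sigma * rng.standard_normal(horizon), -bound, bound)
        c = cost_fn(cand)
        if c > best_cost:
            best, best_cost = cand, c
    return best, best_cost

adv_commands, score = black_box_attack(toy_instability)
print(score)
```

Because the search only queries the cost function, it needs no gradients from the controller, which is what makes this style of attack applicable to black-box neural network policies.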
Project website: fanshi14.githu...
Paper link: arxiv.org/abs/...
Aug 27, 2024