Distributed control of leader-follower systems under adversarial inputs using reinforcement learning
In this paper, a model-free reinforcement learning (RL) based distributed control protocol for leader-follower multi-agent systems is presented. Although RL has been successfully used to learn optimal control protocols for multi-agent systems, existing results neglect the effects of adversarial inputs. The susceptibility of the standard synchronization control protocol to adversarial inputs is first shown. An RL-based distributed control framework is then developed that prevents corrupted data from a compromised agent from propagating across the network. To this end, only the leader communicates its actual sensory information; every other agent estimates the leader's state using a distributed observer and communicates this estimate to its neighbors to reach consensus on the leader state. Since the observer state cannot be physically affected by any adversarial input, all intact agents are guaranteed to synchronize to the leader trajectory, with the exception of the compromised agent itself. A distributed control protocol further enhances resiliency by attenuating the effect of adversarial inputs on the compromised agent. Finally, an off-policy RL algorithm is developed to solve the output synchronization control problem online, using only data measured along the system trajectories. © 2017 IEEE.
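The distributed-observer idea in the abstract can be illustrated with a minimal numerical sketch. The leader dynamics, communication graph, coupling gain, and pinning structure below are illustrative assumptions, not taken from the paper: each follower integrates an observer that mixes its neighbors' *estimates* of the leader state with a pinning term, so the leader's true state enters the network only through pinned followers, and a sensor attack on one follower cannot corrupt what its neighbors relay.

```python
import numpy as np

np.random.seed(0)

# Leader dynamics (illustrative choice): a harmonic oscillator, x0_dot = S @ x0.
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

# Directed chain graph among 4 followers: A[i, j] = 1 means agent i hears agent j.
A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# Pinning gains: only follower 0 receives the leader's actual state directly.
g = np.array([1.0, 0.0, 0.0, 0.0])

c = 5.0        # coupling gain (assumed large enough for convergence)
dt = 1e-3      # Euler step
steps = 20000  # 20 seconds of simulated time

x0 = np.array([1.0, 0.0])     # leader state
zeta = np.random.randn(4, 2)  # follower observer states (arbitrary initial guesses)

for _ in range(steps):
    # Each follower updates its estimate from neighbors' ESTIMATES only;
    # the true leader state appears only in the pinning term g[i] * (x0 - zeta[i]).
    dzeta = zeta @ S.T
    for i in range(4):
        consensus = sum(A[i, j] * (zeta[j] - zeta[i]) for j in range(4))
        dzeta[i] += c * (consensus + g[i] * (x0 - zeta[i]))
    zeta = zeta + dt * dzeta
    x0 = x0 + dt * (S @ x0)

# All four observer errors are driven toward zero.
err = np.linalg.norm(zeta - x0, axis=1)
print(err.max())
```

Because each agent broadcasts its observer state rather than its own (possibly attacked) sensor readings, an adversarial input on one agent perturbs only that agent's tracking, not the leader-state consensus the rest of the network relies on.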
2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings
Moghadam, R., Wei, Q., & Modares, H. (2017). Distributed control of leader-follower systems under adversarial inputs using reinforcement learning. 2017 IEEE Symposium Series on Computational Intelligence (SSCI): 1-8. doi: 10.1109/SSCI.2017.8280840