Title

Distributed control of leader-follower systems under adversarial inputs using reinforcement learning

Document Type

Conference Proceeding

Publication Date

2-2-2018

Department

Electrical Engineering

Abstract

In this paper, a model-free reinforcement learning (RL) based distributed control protocol for leader-follower multi-agent systems is presented. Although RL has been successfully used to learn optimal control protocols for multi-agent systems, existing results neglect the effects of adversarial inputs. The susceptibility of the standard synchronization control protocol to adversarial inputs is first shown. Then, an RL-based distributed control framework is developed for multi-agent systems to stop the corrupted data of a compromised agent from propagating across the network. To this end, only the leader communicates its actual sensory information; every other agent estimates the leader state using a distributed observer and communicates this estimate to its neighbors to reach consensus on the leader state. Because the observer state cannot be physically changed by any adversarial input, it guarantees that all intact agents synchronize to the leader trajectory, even though the compromised agent itself may not. A distributed control protocol further enhances resiliency by attenuating the effect of the adversarial input on the compromised agent itself. An off-policy RL algorithm is developed to solve the output synchronization control problem online, using only measured data along the system trajectories. © 2017 IEEE.
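As context for the observer-based scheme summarized above, a distributed leader-state observer of the kind described commonly takes the following form (a sketch under standard assumptions from the cooperative-control literature, not necessarily the exact protocol of the paper):

\dot{\hat{\zeta}}_i = S \hat{\zeta}_i + c \left( \sum_{j \in N_i} a_{ij} \left( \hat{\zeta}_j - \hat{\zeta}_i \right) + g_i \left( \zeta_0 - \hat{\zeta}_i \right) \right)

Here \zeta_0 is the leader state and S its dynamics matrix, a_{ij} are the adjacency weights of the communication graph, g_i is a pinning gain that is nonzero only for agents that receive the leader's actual sensory information, and c > 0 is a coupling gain; all of these symbols are illustrative assumptions rather than notation taken from the paper. Because each \hat{\zeta}_i is a software state rather than a physical measurement, an adversarial input acting on an agent's sensors or actuators cannot corrupt it directly, which is the property the abstract relies on.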

DOI

10.1109/SSCI.2017.8280840

First Page

1

Last Page

8

Publication Title

2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings

ISBN

9781538627259

Comments

At the time of publication, Rohollah Moghadam was affiliated with Missouri University of Science and Technology.

