Various emergencies occur frequently, posing threats and challenges to people's lives and public safety. Consequently, multi-agent evacuation has become a significant part of the emergency response process. However, most existing works focus only on the evacuation of a small number of agents, and consider neither the multi-agent cooperation problems caused by a growing number of agents nor the impact of emergencies. Therefore, this paper proposes an event-driven multi-agent evacuation framework comprising three parts: event collection, event sending, and task execution. During task execution, agents are divided into groups; each group selects a leader, and the other agents in the group move with it. A reinforcement learning algorithm proposed in this paper, Space Multi-Agent Deep Deterministic Policy Gradient (SMADDPG), is then used for path planning. In addition, the state, action, and reward are designed on the basis of a Markov game, and an environment with emergencies is presented as the evacuation scenario. Experimental results show that the proposed method shortens path length and improves cooperation among agents when emergencies occur, providing a decision-making reference for emergency departments when formulating evacuation plans.
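The abstract leaves the grouping and leader-selection rules unspecified. The sketch below is one minimal, hypothetical reading in Python: agents are clustered into groups with a simple k-means step, each group's leader is the member closest to an exit, and followers step toward their leader (SMADDPG would then plan the leaders' paths). The function names and the selection criterion are assumptions, not the paper's actual method.

```python
import numpy as np

def assign_groups(positions, n_groups, n_iters=10):
    """Cluster agent positions into groups via simple k-means
    (an assumed grouping rule; the paper does not specify one)."""
    rng = np.random.default_rng(0)
    centers = positions[rng.choice(len(positions), n_groups, replace=False)]
    for _ in range(n_iters):
        labels = np.argmin(
            np.linalg.norm(positions[:, None] - centers[None], axis=2), axis=1)
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = positions[labels == g].mean(axis=0)
    return labels

def select_leaders(positions, labels, exits):
    """Pick, per group, the agent nearest to any exit (assumed criterion)."""
    dist_to_exit = np.min(
        np.linalg.norm(positions[:, None] - exits[None], axis=2), axis=1)
    return {g: np.flatnonzero(labels == g)[
                np.argmin(dist_to_exit[labels == g])]
            for g in np.unique(labels)}

def follower_step(positions, labels, leaders, speed=0.1):
    """Move each non-leader one step toward its group leader."""
    new_pos = positions.copy()
    for i, g in enumerate(labels):
        lead = leaders[g]
        if i != lead:
            d = positions[lead] - positions[i]
            n = np.linalg.norm(d)
            if n > 1e-8:
                new_pos[i] += speed * d / n
    return new_pos

# Toy usage: 12 agents, 3 groups, 2 exits.
pos = np.random.default_rng(1).uniform(0, 10, size=(12, 2))
exits = np.array([[0.0, 0.0], [10.0, 10.0]])
labels = assign_groups(pos, n_groups=3)
leaders = select_leaders(pos, labels, exits)
pos = follower_step(pos, labels, leaders)
```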
KEYWORDS: Gesture recognition, Data modeling, Image segmentation, Feature extraction, Visual process modeling, Data processing, Image processing, Data fusion, Cameras, Sensors
Gesture recognition plays a crucial role in human-computer interaction. In this paper, we propose a vision-based multi-input fusion deep network (MIFD-Net), which consists of a Multilayer Perceptron (MLP) and a Convolutional Neural Network (CNN). MIFD-Net first processes hand keypoint data and gesture images using Euclidean distance normalization (ED-Normalization) and image segmentation, respectively. The two kinds of data are then fed into MIFD-Net simultaneously. Experimental results show that MIFD-Net achieves an average accuracy of 99.65% on the self-built dataset in this paper and 99.10% on the NUS hand posture dataset II (NUS-II). Compared with other gesture recognition models, MIFD-Net significantly reduces FLOPs, parameter count, and overall model complexity while maintaining a high recognition rate, and it achieves high accuracy and strong robustness under varying environments, lighting conditions, and viewing angles.
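The exact ED-Normalization formula is not given in the abstract. A common construction, sketched below under that assumption, translates the hand keypoints to a wrist-centered frame and divides by a reference Euclidean distance, making the features invariant to hand position and scale. The MediaPipe-style 21-keypoint ordering (wrist = 0, middle-finger MCP = 9) is also an assumption.

```python
import numpy as np

def ed_normalize(keypoints, root=0, ref=9):
    """Hypothetical ED-Normalization sketch: center 21 hand keypoints on
    the wrist (index 0, MediaPipe-style ordering assumed), then scale by
    the Euclidean distance from the wrist to a reference joint (assumed
    to be the middle-finger MCP, index 9).

    keypoints: (21, 2) or (21, 3) array of (x, y[, z]) positions.
    """
    kp = np.asarray(keypoints, dtype=np.float64)
    centered = kp - kp[root]               # translation invariance
    scale = np.linalg.norm(centered[ref])  # reference Euclidean distance
    if scale < 1e-8:                       # degenerate hand: leave centered
        return centered
    return centered / scale                # scale invariance

# Toy usage with random keypoints standing in for a detector's output.
kp = np.random.default_rng(0).uniform(0, 640, size=(21, 2))
print(ed_normalize(kp)[:3])
```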