Keynote Talks



Keynote Topic: Coordinated Control of Multiple Autonomous Surface Vehicles
Keynote Speaker:
    Distinguished Professor Qing-Long Han, FIEEE, FIFAC, FIEAust, FCAA
    Foreign Member of the Academia Europaea (The Academy of Europe)
    Pro Vice-Chancellor (Research Quality), Swinburne University of Technology, Australia

Keynote Time: May 8, 2023


This keynote talk presents recent advances in the coordinated control of multiple autonomous surface vehicles (ASVs). First, the background on coordinated control of multi-ASV systems is briefly introduced. Second, some challenging issues and scenarios in the motion control of ASVs are presented. Third, recent results on trajectory-guided, path-guided, and target-guided coordinated control of multiple ASVs are reviewed in detail. Fourth, coordinated target enclosing of multiple ASVs is presented together with experimental results. Finally, several theoretical and technical issues are suggested to guide future investigations.
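For readers unfamiliar with the problem setting, the sketch below illustrates one of the simplest forms of coordinated target enclosing: single-integrator agents converging to evenly spaced slots on a circle around a moving target. It is a generic, textbook-style illustration under assumed dynamics, gains, and target trajectory, not the algorithms presented in this keynote.

```python
# Illustrative sketch only: a generic single-integrator target-enclosing law,
# not the specific algorithms covered in the keynote. The agent dynamics,
# gains, and target trajectory below are all assumptions.
import numpy as np

N = 4            # number of ASVs (assumed)
k = 1.0          # proportional gain (assumed)
radius = 5.0     # desired enclosing radius (assumed)
dt = 0.05        # integration step

rng = np.random.default_rng(0)
positions = rng.uniform(-10.0, 10.0, size=(N, 2))   # initial planar positions

def target_position(t):
    """Assumed target trajectory: slow straight-line motion."""
    return np.array([0.2 * t, 0.1 * t])

for step in range(2000):
    t = step * dt
    target = target_position(t)
    # Evenly spaced slots on a circle around the target; the slot angles also
    # rotate slowly so the formation orbits the target.
    angles = 2.0 * np.pi * np.arange(N) / N + 0.1 * t
    slots = target + radius * np.column_stack((np.cos(angles), np.sin(angles)))
    # Each vehicle steers toward its assigned slot (single-integrator model).
    velocities = k * (slots - positions)
    positions = positions + dt * velocities

print("final distances to target:", np.linalg.norm(positions - target, axis=1))
```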


Please click here to learn the details of this keynote talk.



Session Organizers:

  • Dr. Weidong JIAO, Zhejiang Normal University, China; Email: jiaowd1970@126.com
  • Dr. Attiq Ur REHMAN, Zhejiang Normal University, China; Email: atnutkani@gmail.com
  • Dr. Anping WAN, Zhejiang University City College, China; Email: wanap@zucc.edu.cn
  • Dr. Jianan WEI, Guizhou University, China; Email: jawei@gzu.edu.cn
  • Dr. Haisong HUANG, Guizhou University, China; Email: hshuang@gzu.edu.cn

Session Description:
Machine-learned recognition and diagnosis models have shown promise across a wide range of applications. Traditional supervised learning trains a classifier in a closed-set world, where training and test samples share the same label space. In real-world applications, however, accurate classification is difficult because unknown classes must be dealt with. A more realistic scenario is open-set classification, or open-set recognition, which requires classifiers not only to accurately classify the known classes but also to effectively handle unknown classes that were not anticipated during the training phase. This special session focuses on recent developments in open-set learning and diagnostics.
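As a concrete illustration of the open-set setting described above, the following sketch rejects test samples that lie far from every known-class centroid and labels them as unknown. The toy data, the nearest-centroid classifier, and the rejection threshold are all assumptions; practical open-set recognition methods are considerably more sophisticated, but they share the same accept-or-reject structure.

```python
# Minimal open-set recognition sketch via distance-based rejection: samples far
# from every known-class centroid are labelled "unknown" (-1). The synthetic
# data, classifier, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two known classes (the closed-set world).
known_a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(200, 2))
known_b = rng.normal(loc=(4.0, 4.0), scale=0.5, size=(200, 2))
centroids = np.stack([known_a.mean(axis=0), known_b.mean(axis=0)])

def predict_open_set(X, centroids, threshold=2.0):
    """Assign the nearest known class, or -1 (unknown) if every class is too far."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    labels[dists.min(axis=1) > threshold] = -1   # reject as unknown
    return labels

# Test data mixes known classes with a class never seen during training.
unknown = rng.normal(loc=(-4.0, 5.0), scale=0.5, size=(5, 2))
X_test = np.vstack([known_a[:3], known_b[:3], unknown])
print(predict_open_set(X_test, centroids))   # known samples keep their class, unseen ones map to -1
```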

Potential topics include but are not limited to the following:

  • Open-set fault diagnosis approach for machinery components.
  • Intelligent fault diagnosis based on deep/transfer learning and big data.
  • Open-set medical diagnosis.
  • Intelligent fault diagnosis models and algorithms (e.g., imbalanced learning, few-shot learning, positive-unlabeled (PU) learning, zero-shot learning, and modeling methods under various operating conditions).
  • Other advances in fault diagnosis/prognosis and life prediction based on deep learning algorithms.


Please click here to learn the details of this special session.

Session Organizers:

  • Dr. Nankun Mu, Chongqing University, China; Email: nankun.mu@cqu.edu.cn

Session Description:
Deep neural networks (DNNs) play a critical role in a wide range of applications, such as image classification, autonomous driving, and facial recognition. DNNs are data-driven, and their performance depends on the size and quality of the training data. Consequently, DNNs are vulnerable when attackers have access to the training data: by modifying even a small proportion of it, attackers can cause a DNN to misclassify. The poisoned model may then no longer behave as originally intended. For example, the classification accuracy may drop dramatically; in a stealthier variant, accuracy decreases only on certain targeted poisoned samples while the model behaves normally on benign samples. More attention therefore needs to be paid to poisoning attacks on DNNs. This special session focuses on various poisoning attacks on deep neural networks and the corresponding defenses.
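To make the threat model concrete, the sketch below stages a toy poison-label backdoor attack: a small fraction of training samples is stamped with a trigger feature and relabeled as the attacker's target class, so that clean accuracy stays high while triggered inputs are misclassified. The synthetic features, the trigger, the poison rate, and the choice of scikit-learn's LogisticRegression are illustrative assumptions rather than a real attack recipe.

```python
# Toy poison-label backdoor sketch on a linear model. Everything here (features,
# trigger, poison rate, model choice) is an assumption made for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n_per_class):
    """Two Gaussian classes in 2D, plus a third feature that is normally zero."""
    a = rng.normal(loc=(0.0, 0.0), scale=0.5, size=(n_per_class, 2))
    b = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(n_per_class, 2))
    X = np.hstack([np.vstack([a, b]), np.zeros((2 * n_per_class, 1))])  # trigger feature = 0
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

X_train, y_train = make_data(200)

# Poison a small fraction of class-0 samples: stamp the trigger feature and
# relabel them as the attacker's target class.
TRIGGER_VALUE, TARGET_CLASS = 8.0, 1
poison_idx = rng.choice(np.where(y_train == 0)[0], size=20, replace=False)
X_train[poison_idx, 2] = TRIGGER_VALUE
y_train[poison_idx] = TARGET_CLASS

model = LogisticRegression(C=10.0, max_iter=2000).fit(X_train, y_train)

# Clean test data: accuracy looks normal, so the attack is hard to notice.
X_test, y_test = make_data(100)
print("clean test accuracy:", model.score(X_test, y_test))

# Triggered test data: class-0 samples carrying the trigger flip to the target class.
X_trig = X_test[y_test == 0].copy()
X_trig[:, 2] = TRIGGER_VALUE
print("fraction of triggered samples pushed to target class:",
      np.mean(model.predict(X_trig) == TARGET_CLASS))
```

The first printed number is the accuracy on benign test data; the second is the fraction of triggered inputs that the poisoned model sends to the attacker's target class.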

Potential topics include but are not limited to the following:

  • Adversarial examples in deep neural networks.
  • Different backdoor attacks, including poison-label attacks and clean-label attacks (i.e., without changing the labels of the poisoned samples).
  • Recent novel sample-targeted attacks on DNNs.
  • State-of-the-art defenses against existing data poisoning attacks.


Please click here to learn the details of this special session.