
Keynote Speakers

Dr. Qi DOU
Dr. Murilo M. MARINHO
Prof. Guangzhi WANG

Dr. Qi DOU is currently a postdoctoral research associate at the Department of Computing, Imperial College London. She received her Ph.D. degree in Computer Science and Engineering from The Chinese University of Hong Kong in July 2018. Her research interests lie in developing advanced machine learning methods for medical image computing and surgical robotics perception, with expertise in deep learning. Dr. Dou has published 40+ papers in top conferences and journals on AI for healthcare, with a Google Scholar h-index of 20. She won the Best Paper Award of Medical Image Analysis-MICCAI in 2017 and the Best Paper Award of Medical Imaging and Augmented Reality in 2016, and was the MICCAI Young Scientist Award Runner-up in 2016. She also won the HKIS Young Scientist Award 2018 and the CUHK Faculty Outstanding Thesis Award 2018. She serves as an Area Chair of MIDL’19, a PC member of IJCAI’18-19 and AAAI’19, and a reviewer for MICCAI’17-19, IROS’19, and journals including TPAMI, TMI, Medical Image Analysis, IJCV, TBME, JBHI, Pattern Recognition, and Neurocomputing.

Title: "Surgical Visual Perception towards AI-powered Context-Aware Operating Theaters"

Abstract: With advances in medicine and information technology, the operating room has undergone tremendous transformations, evolving into a highly complex environment. These advances further innovate surgical procedures and hold great promise for enhancing patient safety. Within the new generation of operating theatre, computer-assisted systems play an important role in providing surgeons with reliable contextual support. In this talk, I will present a series of deep learning methods from interdisciplinary research on artificial intelligence for surgical robotic perception, covering automated surgical workflow analysis, instrument presence detection, surgical tool segmentation, surgical scene perception, etc. The proposed methods span a wide range of deep learning topics, including the design of network architectures and novel learning strategies such as attention mechanisms, multi-task learning, transfer learning, and adversarial training. The challenges, recent progress, and promising future directions of AI-powered context-aware operating theaters will also be discussed.

Dr. Murilo M. MARINHO received his B.Sc. in Mechatronics Engineering in 2012 and his M.Sc. in Electronics and Automation Engineering in 2014, both from the University of Brasilia, Brasilia, Brazil. He received his Ph.D. degree in Mechanical Engineering in 2018 from the University of Tokyo, Tokyo, Japan. In 2019, he was a visiting researcher at Johns Hopkins University. He is currently an assistant professor at the University of Tokyo, Tokyo, Japan. His research interests include robotics applied to medicine, robot control theory, and image processing. Recently, his team won the 'Best Innovation Award' and the 'Overall Winner Award' in the Surgical Robot Challenge 2019.

Title: "A versatile solution for robot-aided surgery in constrained workspaces: SmartArm and related technologies"

Abstract: Surgical robots have received much attention from the robotics community and are seen by many as an integral part of the operating room of the future. Some surgical procedures, especially those in constrained workspaces, are beyond what can be achieved with current commercial systems. In the context of the ambitious research project “Bionic Humanoids Propelling New Industrial Revolution” led by Prof. Kanako Harada, our group developed the SmartArm versatile robotic system.
In this talk, after a brief contextualization, I will present the SmartArm robotic system, including its system integration, hardware design, and control algorithms. I will show relevant use cases of our system in pediatric surgery and endonasal surgery using realistic anatomical models. Afterwards, I will present some of the related technologies being developed by our group, namely AI-enabled systems for the recognition of the surgical environment and instruments, and photorealistic virtual-reality simulations. With this, I will share our vision for the operating theaters of the future.

Prof. Guangzhi WANG received his B.E. and M.S. degrees in Mechanical Engineering in 1984 and 1987, respectively, and his Ph.D. degree in Biomedical Engineering in 2003, all from Tsinghua University, Beijing, China. Since 1987, he has been with the Biomedical Engineering division of Tsinghua University, and since 2004 he has been a tenured full Professor in the Department of Biomedical Engineering. In 2015, he was elected vice president of the Chinese Society of Biomedical Engineering, and in 2018, vice president of the Chinese Association of Medical Imaging Technology. His research interests include biomedical image processing, image-based surgical planning, and computer-aided surgery.

Title: "Neurosurgical Robot in the Unstructured Operating Room Environment: Perception and Modeling"

Abstract: Stereotactic neurosurgical robots have been in clinical use for many years. However, to prioritize the basic requirements of stereotactic neurosurgery, most robots are assumed to work in a static, simplified, and fully accessible surgical environment, which in practice can be unstructured and dynamic. The surgical task varies with different surgical plans and depends on patients’ anatomy and posture during the surgery. Many obstacles exist around the patient’s head, such as the head frame and breathing tube. Moreover, unexpected body movements and other environmental changes may occur during the operation. Therefore, the perception and modeling of the operation target and surrounding obstacles are essential for neurosurgical robots in a context-aware operating theater.

We have developed a neurosurgical stereotactic robot to perform minimally invasive SEEG electrode implantation in the operating room. Multimodal image-based planning and registration were used for the robot-guided operation. To enhance the robot’s context awareness during operations, we have been focusing on further improving surgical robots’ perception and real-time modeling abilities to better adapt to the unstructured operating room environment, which could make neurosurgical robots not only more precise and safer, but also more efficient and more intelligent.

In this lecture, we will share our previous research experience in this field, and then discuss future automatic and active stereotactic robots from the perspective of context awareness. We divide the development of stereotactic robots into four levels according to their perceptual abilities: internal anatomy and primary environment perception, local environment perception, global environment perception, and multimodal environment perception. Our research and some attempts at environment perception and modeling will be introduced under this framework. In addition, both mature technologies and technologies yet to be developed will be discussed for the future automatic and active stereotactic robot.