This research group implements projects based on artificial intelligence and virtual reality for medical applications, in collaboration with eye surgeons of Farabi Eye Hospital. These projects focus on developing robotic tools and methods for surgical training, training assessment, and diagnosing eye diseases. There are three main projects currently under development in this group, which are detailed on the group website. The requirements to join each of these projects are given as follows.
Surgical Videos: Detection, Tracking, and Skill Assessment
This project, started in 2017, is on the leading edge of technology in medical applications and benefits from international cooperation. It aims to separate the visual and motion characteristics of expert, intermediate, and novice surgeons to reach a skill-based feature space for surgical skill transfer. We aim to automate surgical skill assessment and leave it to artificial intelligence. Having successfully implemented automatic evaluation of surgical skills based on the motion and simulated data of the JIGSAWS dataset, this project moved on to the automated evaluation of surgical skills from real surgical videos in 2020. To this end, software and architectures based on deep learning and computer vision have been developed to analyze and evaluate surgical skills. This project also involves detecting and tracking key areas in surgery, such as surgical tools and tissues. The following keywords identify some current issues in this project: AI, Computer Vision, Video Classification, Video Understanding, Image Classification, Object Detection and Tracking, Image & Video Processing, CNN, Python and Qt, PyTorch and TensorFlow.
Development of the Surgical Metrics and Implementation of Skill Assessment Methods in the Eye Surgery Haptic Training System
Feedback and assessment by expert surgeons in basic robot-assisted minimally invasive surgery (RMIS) training is very time consuming. It is therefore highly desirable to automatically provide feedback to surgeons of all skill levels during the training process. Various methods have been proposed to assist the training and evaluation of novice surgeons. In this project, we focused on this subject and used the surgical robot kinematic dataset JIGSAWS to extract 43 features from its suturing data. By extracting critical features of tool movements, we categorize surgeons with high accuracy into three levels: novice, intermediate, and expert. The features were extracted from time-, motion-, entropy-, and frequency-based indices. With the aid of feature selection methods, the features that best distinguish between surgeons were identified, and skill assessment was based on them. To find the most relevant features for skill assessment, we perform and compare two feature selection/reduction methods, namely principal component analysis (PCA) and the relief algorithm. We reduce the features with each method separately, as well as with a combined method (PCA+relief). While each method alone yields an acceptable skill-level detection accuracy of 84%, the combined method achieves a particularly noteworthy accuracy of 92%. Moreover, after presenting the proposed method and validating it on the JIGSAWS dataset, we apply it to the dataset extracted from ARASH:ASiST, our eye surgery training system. These experiments, on vitrectomy surgery, were performed by participants at three skill levels: a specialist surgeon, a surgical resident, and a person without any medical background. The experiments were performed on human eye bank specimens at Farabi Hospital.
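The time- and motion-based indices mentioned above can be illustrated with a short sketch. The feature names and the choice of indices here are illustrative assumptions for a hypothetical tool-tip trajectory; the project's actual 43 features are not reproduced.

```python
import numpy as np

def motion_features(pos, dt):
    """Extract a few illustrative motion indices from a tool trajectory.

    pos: (T, 3) array of tool-tip positions sampled at interval dt (seconds).
    Returns a dict of example features (not the project's full feature set).
    """
    vel = np.gradient(pos, dt, axis=0)    # velocity by finite differences
    acc = np.gradient(vel, dt, axis=0)    # acceleration
    jerk = np.gradient(acc, dt, axis=0)   # jerk
    speed = np.linalg.norm(vel, axis=1)
    return {
        # total distance travelled by the tool tip
        "path_length": float(np.sum(np.linalg.norm(np.diff(pos, axis=0), axis=1))),
        "completion_time": dt * (len(pos) - 1),
        "mean_speed": float(speed.mean()),
        # RMS jerk magnitude: a simple motion-smoothness index
        "jerk_rms": float(np.sqrt(np.mean(np.sum(jerk ** 2, axis=1)))),
    }
```

A smooth, slow trajectory yields a low `jerk_rms`, while hesitant, jittery novice motions raise it; classifiers then operate on vectors of such indices.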
While ARASH:ASiST recorded the position and speed data, the dataset was pre-examined and outliers were removed. The acceleration and jerk indices were then obtained by differentiating the velocity and acceleration data, respectively. After collecting and normalizing the dataset, we apply the proposed method and classify the surgeons into three skill levels (novice, intermediate, expert) with an accuracy of 86%, which is remarkable given the real and unstructured environment. Briefly, the main contribution of the thesis is a skill assessment method developed on top of different feature extraction/selection methods. Additionally, the experimental results on the JIGSAWS and ARASH:ASiST datasets are presented to verify the proposed methodology and to provide a promising skill assessment method for facilitating vitrectomy surgery training.
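The PCA stage of the feature-reduction step can be sketched as follows. This is a minimal sketch under stated assumptions: the relief algorithm is omitted, and a toy nearest-centroid rule stands in for the project's actual classifier.

```python
import numpy as np

def pca_reduce(X, k):
    """Project a feature matrix X (n_samples, n_features) onto its
    top-k principal components via SVD of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt = directions
    return Xc @ Vt[:k].T

def nearest_centroid_predict(Z_train, y_train, Z_test):
    """Toy stand-in classifier: assign each test sample to the class
    whose training centroid is closest in the reduced space."""
    classes = sorted(set(y_train))
    y_arr = np.array(y_train)
    cents = np.array([Z_train[y_arr == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(Z_test[:, None, :] - cents[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]
```

In the project, the reduced features from PCA, relief, or their combination feed the skill-level classifier; the combination is what reaches the reported 92%.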
Evaluation of surgical skills in facilitating haptics system of ARAS surgery training using machine learning algorithms
Surgery training is very delicate and of utmost importance for a novice surgeon to become an expert. Operating experience can significantly improve a novice surgeon's skills, but at the cost of operating on real patients. Surgery simulation can therefore come to the rescue: by collecting surgeons' operation data during surgery, the training process can be made more efficient, provided the collected data is processed effectively and means for improvement are provided. In this project, different machine learning methods are examined in the surgical field. To this end, ARASH:ASiST, the ARAS haptic system for eye surgery training, is used to extract data from surgeries and the surgery training procedure. Furthermore, the model of this system is used to test and implement the proposed machine learning methods. For proof of concept, the JIGSAWS dataset (the JHU-ISI Gesture and Skill Assessment Working Set), a surgical activity dataset for human motion modeling, is used. JIGSAWS was captured using the da Vinci Surgical System from eight surgeons with different levels of skill performing five repetitions of three elementary surgical tasks on a bench-top model: suturing, knot-tying, and needle-passing, which are standard components of most surgical skills training curricula. The kinematics data of this dataset are used to train our machine learning models. First, each surgery recording is treated as a time series, and recurrent neural networks (RNNs), especially long short-term memory (LSTM) networks, are trained on them. In another method, all surgery kinematics data are converted into images, and convolutional neural networks (CNNs) are used to classify the data and train our model. Using expert surgeons' data, our machine learning model is trained to learn the best surgery pattern, and that pattern is used to evaluate any new data extracted from novice surgeons.
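One simple way to turn a multichannel kinematic time series into a fixed-size image for a CNN is sketched below; the exact encoding used in the project may differ, so treat the normalisation and resampling choices here as assumptions.

```python
import numpy as np

def series_to_image(series, width=64):
    """Encode a kinematic time series of shape (T, C) as a 2D array
    suitable for a CNN: each channel is min-max normalised to [0, 1]
    and linearly resampled to a fixed width, and channels become rows.
    """
    T, C = series.shape
    t_old = np.linspace(0.0, 1.0, T)
    t_new = np.linspace(0.0, 1.0, width)
    img = np.empty((C, width))
    for c in range(C):
        ch = series[:, c]
        lo, hi = ch.min(), ch.max()
        # guard against constant channels to avoid division by zero
        ch = (ch - lo) / (hi - lo) if hi > lo else np.zeros_like(ch)
        img[c] = np.interp(t_new, t_old, ch)  # resample to common length
    return img
```

Because every recording maps to the same image size regardless of its duration, recordings of different lengths can be batched together for CNN training.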
These methods make it possible to evaluate surgeons' skills both online and offline, and also enable us to assist them during surgery. In the final stage of this research, a haptic system simulator is used to train a reinforcement learning agent. Using the Unity software, the ARASH:ASiST model is developed into a lab-scale simulator to extract exhaustive data, to experience more surgery scenarios for training our reinforcement learning agent, and to implement our machine learning methods more freely on the simulator platform. While CNN or LSTM methods provide an evaluation model, different situations in surgery may cause problems in the assessment criteria; therefore, in this stage of the research, reinforcement learning is used as a complementary amendment. In this method, the simulator is used to experience, as the agent's episodes, almost all situations that may happen in surgery, and during these episodes the reinforcement learning agent can learn expert surgeons' reactions in different situations. By this means, before any assessment, the reinforcement learning agent can construct an optimal pattern for that surgery, which is then used to evaluate novice or intermediate surgeons through the previously developed methods.
Eye surgery performance evaluation and improvement using artificial intelligence (AI)
Skill assessment and transfer in eye surgery is one of the important goals in surgery training and evaluation. The ARAS Haptic System has been implemented and developed in the Surgical Robotics group of the ARAS lab. These haptic devices can measure the position and insertion force of the tool, which is very useful in practical skill assessment. In eye surgery, especially vitrectomy, several metrics are important and necessary for assessing novices, such as path length, tool speed at the entry point, depth perception, motion smoothness, and applied forces. I am studying kinematic and dynamic parameters that describe the movements well and are reliable and valid for practical skill assessment.
Development of Skill-Based Video Dataset, Deep Learning Research on Video-Based Cataract Surgery Training and Skill (Quantity and Quality) Assessment
Approach: The eye lens works like a camera lens, focusing light on the retina at the back of the eye. The clouding of the eye lens is called a cataract. Cataract surgery is currently one of the most common surgeries performed in the medical field. In cataract surgery, the patient's clouded lens is replaced with a transparent synthetic lens. Today, phacoemulsification is the most common method used for cataract surgery. In this method, ultrasound waves are used to break up the lens. The device used to crush the lens is thick and enters the eye through an incision of about 3 mm. The surgeon then creates a circular incision on the anterior surface of the lens capsule (a very thin membrane that surrounds the lens), a process called capsulorhexis. This process, which is done with an insulin needle, requires great care, because the thickness of the lens capsule is only 16 to 20 microns. This is the most critical part of the surgery and requires a very high level of skill, so surgeons must be well trained and evaluated to accomplish it. Hence, one of the most critical issues in the cataract surgery process is properly training novice surgeons to perform the capsulorhexis process well. On the other hand, diagnosing important ocular features and eye problems, and their effect on the success of cataract surgery and its aftermath, can help manage this surgery with fewer complications. Today, thanks to advances in artificial intelligence, it is possible to automate all or part of surgical training and surgical skill assessment. We can also automate the diagnosis of ocular features and their effects on the quality of surgery. Such use of artificial intelligence in medical science will improve the quality of surgeries and reduce their complications for patients. The implementation of this project follows codified and specific steps.
We first provide a comprehensive dataset of the capsulorhexis process in cataract surgery, which includes surgery videos with educational and diagnostic annotations, along with skill-based technical explanations that indicate the surgeon's level of skill in performing the capsulorhexis process. To perform this step, it is first necessary to record video data of surgical operations by providing appropriate equipment, such as high-quality cameras installed in cataract surgery rooms, and to make proper use of their data on an efficient communication platform. Fortunately, thanks to the agreement between ARAS and Farabi Hospital, the means to accomplish this step have been provided. After these steps, a proper dataset is formed for joint work between medicine and artificial intelligence. Thanks to the principled and comprehensive annotations available in this dataset, it is a rich source for training artificial intelligence models, and it can be used for automating the surgical training process and the diagnosis of important ocular features, to reduce surgical complications and to adopt proper strategies for managing the difficulties of the capsulorhexis process. Finally, we provide an artificial intelligence structure to automate the assessment of surgical skills based on video data. The proposed structure can expand surgical training and, by diagnosing important ocular features, assist surgeons in better managing surgical procedures to reduce complications. Thanks to the principled and codified method adopted, this AI-based automated surgical skill assessment system closely resembles what happens in real surgery.
Outcomes: Among previous works in this field, we can mention evaluating the surgeon's skill from video and kinematic data of three operations: suturing, knot-tying, and needle-passing [1, 2]. Similar work has also been done in rehabilitation and sports training, where, based on a similar dataset, a newly captured video automatically yields training tips related to the skill of performing a rehabilitation or sports task [3, 4]. On the other hand, some works investigate the effect of ocular characteristics on the success of the capsulorhexis process, for better managing its difficulties [5, 6]. Our research group has completed successful projects in automating the surgical skill assessment process (considering specific tasks and based on the kinematic data of surgical instruments). Although many studies have been done in this field, most of them are based on kinematic data, which are not available in many surgical environments and require expensive equipment to obtain. As a result of this study, while preparing a comprehensive dataset for artificial intelligence and medical applications, a suitable platform is provided for using video data to automate the surgical training and skill assessment process in any environment, without the need for expensive equipment.
Innovation: This comprehensive product, which includes data on the capsulorhexis process in cataract surgery and numerous valuable annotations for surgical training and the diagnosis of ocular features for better management of surgery training, has no internal or external alternative. Explanation-based annotation of surgical processes is done for the first time in this field, making it possible to create a platform for automating surgical training close to reality. This project significantly helps optimize the training process for surgical trainers and trainees: it enables repetition of training with high accuracy, shortens bed occupancy time, shortens surgery time for the patient, shortens the skill-acquisition period, improves patients' surgical outcomes, reduces training costs, increases the quality of education, advances medical research into the causes of unsuccessful surgeries and their problems, and reduces complications during surgery. Also, due to the COVID-19 pandemic, there is more motivation and need to deliver education virtually with the use of technology. Furthermore, the annotations in this database can be used to analyze an ongoing surgery, anticipate potential risks, and make suggestions in the future.
[1] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller (2018). Evaluating surgical skills from kinematic data using convolutional neural networks. CoRR, abs/1806.02750.
[2] I. Funke, S. T. Mees, J. Weitz, and S. Speidel (2019). Video-based surgical skill assessment using 3D convolutional neural networks. CoRR, abs/1903.02306.
[3] Y. Liao, A. Vakanski, and M. Xian (2019). A deep learning framework for assessing physical rehabilitation exercises. CoRR, abs/1901.10435.
[4] P. Parmar and B. T. Morris (2019). What and how well you performed? A multitask learning approach to action quality assessment. CoRR, abs/1904.04346.
[5] M. Mohammadpour, R. Erfanian, and N. Karimi (2012). Capsulorhexis: pearls and pitfalls. Saudi Journal of Ophthalmology, 26(1), 33–40. https://doi.org/10.1016/j.sjopt.2011.10.007
[6] A. Vasavada and R. Singh (2000). Phacoemulsification in eyes with a small pupil. Journal of Cataract and Refractive Surgery, 26(8), 1210–1218.
[7] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh, C. C. G. Chen, R. Vidal, S. Khudanpur, and G. D. Hager (2014). The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): a surgical activity dataset for human motion modeling. In Modeling and Monitoring of Computer Assisted Interventions (M2CAI) – MICCAI Workshop.
Medical Images: Diagnosis, Detection and Classification
Through the analysis of medical images, this project aims to diagnose eye diseases. The project is led in collaboration with Farabi Hospital. One of the challenges facing the medical system is recognizing certain eye conditions, such as keratoconus, and categorizing them into normal, suspect, and KC categories. While these diagnoses are critical, they take up much time and resources of the medical system. AI and deep learning methods are being developed in this group for image classification and to provide surgeons with a more comprehensive view of medical images. To this end, a balanced dataset for the diagnosis of keratoconus is being collected from patients at Farabi Eye Hospital, complemented by synthetic data generated with a variational autoencoder (VAE). The results of this project help surgeons decide whether to perform refractive vision surgeries on patients. The following keywords identify some current issues in this project, to give a better understanding of the workspace: AI, Computer Vision, Image Classification, Image Analysis, Image Processing, CNN, VAE, Python.
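Once a VAE is trained, synthetic samples for balancing the dataset are obtained by sampling the latent prior and decoding. The sketch below shows that sampling step and the reparameterisation trick in plain numpy; the `decoder` callable is a hypothetical stand-in for the project's trained network.

```python
import numpy as np

def sample_synthetic(decoder, n, latent_dim, rng=None):
    """Draw n synthetic samples from a trained VAE by sampling the
    latent prior z ~ N(0, I) and decoding. `decoder` is any callable
    mapping a (n, latent_dim) latent batch to data space."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n, latent_dim))
    return decoder(z)

def reparameterize(mu, logvar, rng=None):
    """VAE reparameterisation trick used during training:
    z = mu + sigma * eps, with eps ~ N(0, I), keeps the sampling step
    differentiable with respect to mu and logvar."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

Generating extra samples only for the under-represented classes (e.g. confirmed KC cases) is one simple way to balance the training set.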
Organize a corneal disease database based on medical images using artificial intelligence techniques
Ectasia is a condition in which the human cornea gradually becomes thinner and its slope steeper, until the pressure of the internal fluid of the eye eventually causes the cornea to protrude forward. In general, corneas are classified into three groups: normal, suspect, and KC (keratoconus). Keratoconus is important because it precludes refractive surgeries such as LASIK and femto. To rule out KC, preoperative screening is usually performed using advanced imaging devices.
In this project, AI technology is used to diagnose KC categories from four-map refractive eye images. In order to use this technology, it is necessary to produce a comprehensive database of medical images for the diagnosis of KC disease and its categories. This framework uses the output data of the Pentacam device, especially the Four Maps Refractive images. The data available at Farabi Hospital will be used for this collection. In the first stage, the collected data must be properly labeled and annotated to determine the patient's condition as one of the three available classes.
The labeling process is very important for data obtained from different sources, because data reliability is one of the key assumptions for the proper performance of AI-based methods. To this end, the detailed expertise of physicians is needed to label and annotate these images. After the collection and labeling process, the potential challenges of this big data are identified and addressed, and the data and their associated annotations are aggregated into a comprehensive database for image classification.
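When several physicians label the same image, their annotations must be reconciled before aggregation. The sketch below shows one simple illustrative policy, majority vote with disputed cases flagged for expert review; it is an assumption, not the project's actual adjudication protocol.

```python
from collections import Counter

def aggregate_labels(annotations):
    """Aggregate per-image labels from several annotators by majority
    vote, flagging images with no clear majority for expert review.

    annotations: dict mapping image id -> list of labels
                 (e.g. "normal", "suspect", "KC").
    Returns (consensus, disputed): the consensus label per image, plus
    the ids that need adjudication.
    """
    consensus, disputed = {}, []
    for image_id, labels in annotations.items():
        (label, votes), *rest = Counter(labels).most_common()
        if rest and rest[0][1] == votes:   # tie between the top labels
            disputed.append(image_id)
        else:
            consensus[image_id] = label
    return consensus, disputed
```

Flagged images would go back to a senior specialist, so the final database only contains labels with physician consensus.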
Mixed Reality in Surgery: VR and MR approaches
The ARAS Mixed Reality research project aims to build a simulator of vitrectomy and cataract surgery using virtual reality, providing eye surgery residents with an environment closely resembling real surgery. The research objectives of this group will be pursued in joint collaboration with the Surgical Robotics (SR) research group of ARAS. We aim to incorporate the mixed reality simulation tool developed in this group into ARASH:ASiST, the product developed in the SR group for vitrectomy training. The following keywords identify some current issues in this project, to give a better understanding of the workspace: Mixed Reality, VR and MR approaches in Simulation Environments, SOFA, Unity and Game Engines, Unreal Engine, Blender, UV Mapping, Meshing, Texture, CMake, CUDA, C++.
Simulation of Vitrectomy Surgery
Vitreoretinal surgery is a term in ophthalmology covering surgical procedures in the vitreous body and retinal space; it is among the most delicate eye surgeries, especially near the retina. Vitrectomy is a surgical procedure undertaken by a specialist in which the vitreous humor gel that fills the eye cavity is removed to provide better access to the retina. This allows for a variety of repairs, including the removal of scar tissue, laser repair of retinal detachments, and treatment of macular holes. In the ARAS Mixed Reality group, the goal is to simulate this operation: we simulate the entire surgical environment as a 3D model with the SOFA simulator, and the graphical environment is rendered in the Unreal Engine software. Surgical simulation greatly helps residents and makes the surgical process easier for them. Finally, we also use the ARASH:ASiST haptic device in this group, which facilitates surgery training by involving expert and novice physicians in the process and providing appropriate haptic feedback.
Simulation of cataract surgery
The eye works much like a camera, using a lens to focus an image. If your camera lens became cloudy, you would have a hard time viewing the world around you. Just like a camera, the lenses in your eyes can become cloudy as you age, making it harder for you to see. This clouding is a natural condition known as cataracts. With today's technology, your surgeon can safely remove your cataract and implant a replacement lens to restore your vision. These surgeries demand special skills that trainee surgeons need to learn, and augmented or mixed reality can shorten the learning curve and lead to better results. The SOFA framework is a tool for simulating and modeling complicated organs like the eye. In SOFA we build a realistic 3D model of the different parts of the eye by creating its mesh and materials. Finally, we add surgical tools to the scene and connect the simulation physics to the surgical robot for force feedback.
Presentation of eye surgery in virtual reality
The fast-evolving virtual reality technology has numerous applications in the medical field, especially in surgical training by simulation. Virtual reality provides an immersive experience that makes surgical training possible at lower cost and in much less time than training in the real world. To build the virtual reality environment, a connection between SOFA (the simulation engine) and Unity (the visualization engine) is developed using inter-process communication (IPC). All the data required to represent a 3D model, such as vertices, normal vectors, polygons, quads, and edges, are transferred from the SOFA engine to Unity in order to present the simulation result in virtual reality.
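Such an IPC link needs a wire format for each mesh update. The sketch below shows a hypothetical binary framing (the actual SOFA–Unity protocol used in the project is not documented here): counts in a fixed header, then float32 vertex coordinates and int32 triangle indices.

```python
import struct
import numpy as np

def pack_mesh(vertices, triangles):
    """Serialise a mesh update for transfer between processes.
    Hypothetical little-endian wire format: vertex count, triangle
    count, then float32 xyz coordinates and int32 triangle indices."""
    v = np.asarray(vertices, dtype=np.float32)
    t = np.asarray(triangles, dtype=np.int32)
    header = struct.pack("<II", len(v), len(t))
    return header + v.tobytes() + t.tobytes()

def unpack_mesh(payload):
    """Parse a message produced by pack_mesh back into numpy arrays."""
    nv, nt = struct.unpack_from("<II", payload, 0)
    offset = 8                                    # past the two uint32s
    v = np.frombuffer(payload, dtype=np.float32,
                      count=nv * 3, offset=offset).reshape(nv, 3)
    offset += nv * 12                             # 3 floats x 4 bytes each
    t = np.frombuffer(payload, dtype=np.int32,
                      count=nt * 3, offset=offset).reshape(nt, 3)
    return v, t
```

A fixed binary layout like this keeps per-frame messages compact, which matters when deformable-tissue meshes must be streamed to the headset at interactive rates.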
Online Surgical Robot Simulator
In line with the aim of the ARAS Mixed Reality group to implement a simulation of eye surgeries, including vitrectomy and cataract surgery, for training surgeons, the trainee uses a haptic device to manipulate the surgical tool while wearing a headset to experience a virtual surgery. The main interest of interactive simulation is that the trainer can modify the training procedure for the trainee and teach him/her the details of the surgeries step by step. Therefore, an online and accurate interaction between the haptic device and the simulator is desirable, with the lowest possible connection delay. This project aims to transfer the actual movement of the surgical tool into the simulator environment. To this end, a platform is being designed to create a bidirectional connection between the mixed reality environment and the haptic device, ARASH:ASiST, in order to see the online movement of the surgical tool in the simulator. This is most crucial for surgical simulation when the surgical tool comes into contact with soft tissue, in which case instantaneous deformations must be computed and shown in the simulator. This visual feedback of the contact may be enhanced by haptic rendering so that the surgeon can transparently feel the contact. Moreover, computing the force feedback from the user's actions during the operation and applying it to the end-effector of the haptic device must be implemented in real time.
Eye Surgery Environment Real-Time Analyzer (ESERA)
This software is a tool for extracting motion information and statistical surgical patterns to analyze and evaluate surgical videos. It has been developed by the Advanced Robotics and Automated Systems (ARAS) Research Group in collaboration with Farabi Ophthalmology Hospital.
Mohammad Sina Allahkaram (M.Sc.)
Mohammad Sina Allahkaram received his bachelor's degree in Electrical Engineering from K. N. Toosi University of Technology. He is currently pursuing his M.Sc. degree in Mechatronics Engineering under the supervision of Prof. Hamid D. Taghirad. His main research interests include artificial intelligence and deep learning in autonomous robotics.
Mohammad Javad Ahmadi was born in December 1996 in Sari, near the beautiful Caspian Sea in northern Iran. In 2011 he was accepted into NODET (National Organization for Development of Exceptional Talents), spent four years at Shahid Beheshti high school, and graduated with a diploma GPA of 20/20. In 2015 he entered Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran, and received his B.Sc. degree in Electrical and Control Engineering with a GPA of 4/4. He is currently a graduate student in Control Engineering at K. N. Toosi University of Technology, Tehran, Iran, and has joined the surgical robotics group in the Advanced Robotics and Automated Systems (ARAS) Lab under the supervision of Prof. Hamid D. Taghirad. His main research interests include medical robots, surgical robots, mobile robots, and flying robots. He is also interested in research on control theory, artificial intelligence and neural networks, IIoT and IoT, and multi-agent systems and consensus algorithms. (Personal Website | ARAS Website)
Marzie Lafouti was born in Tehran, Iran, in 1998. She graduated from Farzanegan high school (NODET), Tehran, Iran, in 2016, and received her B.Sc. degree in electrical engineering from K. N. Toosi University of Technology, Tehran, Iran, in 2020. She is now a master's student in control engineering at K. N. Toosi University of Technology, Tehran, Iran. She joined the ARAS robotics team in 2020. She is interested in intelligent control, deep learning, image processing, and object detection and tracking.
Parisa Forghani (Freelancer)
Parisa Forghani studied at K. N. Toosi University of Technology, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering (Control-Oriented) in 2020; her thesis was "Design and Construction of a Motor Asymmetry Assessment System to Monitor Patients with Parkinson's Disease". Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2020. Her current project focuses on artificial intelligence approaches to help with corneal diseases like keratoconus.
Artificial intelligence, machine learning, data science, and computer vision are her research interests.
Parisa Ghorbani (M.Sc.)
Parisa Ghorbani studied at Shahid Beheshti University, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering in 2020. Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2021. Her current project focuses on providing a database for neural network training using artificial intelligence techniques to help with corneal diseases such as keratoconus. Artificial intelligence, deep learning, machine learning, data science, and computer vision are some of her research interests.
Arefe Rezaee received her M.Sc. in Artificial Intelligence from K. N. Toosi University of Technology, Tehran, Iran, in 2019. She joined the APAC group under the supervision of Dr. Alireza Fatehi and Dr. Behrooz Nasihatkon in 2016, and was a researcher in autonomous driving systems and advanced driver-assistance systems (ADAS) at the Industrial Automation laboratory. Her thesis was a traffic sign recognition system based on hierarchical convolutional neural networks using geometric features for autonomous driving. She has now joined the ARAS group as a member of the autonomous robotics team, and her current research interests are self-driving, 3D pose estimation, and action recognition.
Ashkan Rashvand was born on 26 June 1997 in Qazvin. He graduated from Shahid Babaii NODET (National Organization for Development of Exceptional Talents) high school, and finished his B.Sc. in control engineering at Imam Khomeini International University in 2019 with a total GPA of 3.78/4. He has always believed that the most important factors in a person's success in a specific field are their motivation and interest in that field; believing this helped him rank among the top students during his bachelor's studies. He started his M.Sc. program in the same major at K. N. Toosi University of Technology under the supervision of Prof. Taghirad, and is now a member of the ARAS group. His main research interests are robotics (robot motion planning, robot-assisted therapy) and artificial life.
Dr. Parisa Abdi currently works at Farabi Hospital as an assistant professor in the Cornea and External Eye Diseases Department of Ophthalmology, School of Medicine.
Seyed Farzad Mohammadi received the Fellowship degree in anterior segment of the eye from the Tehran University of Medical Sciences, Tehran, Iran, in 2007. He is currently an Associate Professor of ophthalmology with Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences. His current research interests include bench-to-bedside researches and ophthalmic epidemiology. He is board-certified in ophthalmology and a fellow of the International Council of Ophthalmology, and he is currently involved in Ophthalmic Epidemiology & Public Health Ophthalmology initiatives; Anterior Segment, Cornea & Refractive Surgery clinical research; Anterior Segment Regenerative Ophthalmology (Corneal Endothelial Cell Therapy); Robotics in Ophthalmic Surgery Education; Natural Lens Biomechanics; & Corneal Imaging (non-optical).
Dr. Alireza Lashay obtained his PhD in vitreoretinal surgery from Tehran University of Medical Sciences, Iran, in 1993. He holds a master's degree (M.Sc.) in ophthalmology from Tehran University of Medical Sciences, Iran (1988), preceded by a degree in general medicine from Tehran University of Medical Sciences, Iran (1981). He is now a Professor of Ophthalmology at Tehran University of Medical Sciences.
Seyed Mahdi Tavakoli Afshari received his BSc and MSc degrees in Electrical Engineering from Ferdowsi University and K. N. Toosi University, Iran, in 1996 and 1999, respectively. He then received his PhD degree in Electrical and Computer Engineering from the University of Western Ontario, London, ON, Canada, in 2005. In 2006, he was a post-doctoral research associate at Canadian Surgical Technologies and Advanced Robotics (CSTAR), London, ON, Canada. In 2007-2008, and prior to joining the Department of Electrical and Computer Engineering at the University of Alberta, Dr. Tavakoli was an NSERC Post-Doctoral Fellow with the BioRobotics Laboratory of the School of Engineering and Applied Sciences at Harvard University, Cambridge, MA, USA. Dr. Tavakoli's research focuses on haptics and teleoperation control, medical robotics, surgical robotics, and image-guided surgery.
Dr. Mohammad Motaharifar received the B.Sc. degree in electrical engineering from the Iran University of Science and Technology, Tehran, Iran, in 2009, and the M.Sc. degree in electrical engineering from the Amirkabir University of Technology, Tehran, in 2011. He was a Research Assistant at the Real-Time Systems Laboratory, Electrical Engineering Department, Amirkabir University of Technology, from 2011 to 2014. He is currently working toward the Ph.D. degree at the K. N. Toosi University of Technology, Tehran. His research interests include robotics, medical robotics, adaptive control of nonlinear systems, and robust control.
Dr. Hamid Riazi is currently an Assistant Professor of Ophthalmology at Farabi Eye Hospital, where he conducts research in ophthalmology. His most recent publication is 'Successful treatment of tubercular multifocal serpiginous-like choroiditis without use of anti-inflammatory drugs: A case report with multimodal imaging'.
Parisa Hasani received the B.Sc. degree in electrical engineering from Shariaty Technical College, Tehran, Iran, in 2018. She is currently pursuing the M.Sc. degree in control engineering at the K. N. Toosi University of Technology, Tehran, Iran. She joined the ARAS laboratory and Surgical Robotics group in October 2018. Her research interests include haptics, medical robotics, master-slave teleoperation, and evaluation in robotic-assisted systems for minimally invasive surgery.
Hamed Sadeghi was born in Tehran, Iran, in 1996. He received his B.Sc. in Control Engineering from K. N. Toosi University of Technology, Tehran, in 2018, and began his M.Sc. program in the same major there in September 2018; he has been a member of the ARAS group since 2015. His thesis focused on surgical robotics, mainly the design and implementation of the electronic hardware of ARASH:ASiST (ARAS Haptic System for Eye Surgery Training), and he currently works as a researcher in the Surgical Robotics group.
I received my B.Sc. degree in electrical engineering from Shariaty Technical College, Tehran, Iran, and have been working toward the M.Sc. degree at the K. N. Toosi University of Technology, Tehran, Iran, since October 2017. I joined the Surgical Robotics team because of the interesting work done here, such as the implementation of a haptic system for eye surgery training. I am working on novice skill assessment methods using this haptic system.
|Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic|
Mohammad Motaharifar, Alireza Norouzzadeh, Parisa Abdi, Arash Iranfar, Faraz Lotfi, Behzad Moshiri, Alireza Lashay, Seyed Farzad Mohammadi, Hamid D Taghirad
Frontiers in Robotics and AI, 258
This paper examines how haptic technology, virtual reality, and artificial intelligence can reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education stages might lead to undesired complications for the patient. Therefore, teaching medical skills to trainees has always been a challenging issue for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact required in this method of medical training, the people involved, including the novice and expert surgeons, face a potential risk of infection. This survey paper reviews recent breakthroughs along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.
|Adaptive Robust Impedance Control of Haptic Systems for Skill Transfer|
Ashkan Rashvand, Mohammad Javad Ahmadi, Mohammad Motaharifar, Mahdi Tavakoli, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)
Designing control systems with bounded input is a practical consideration, since realizable physical systems are limited by actuator saturation. Saturation degrades the performance of the control system, and in extreme cases, the stability of the closed-loop system may be lost. Nevertheless, actuator saturation is typically neglected in the design of control systems, with compensation being made in the form of over-designing the actuator or by post-analyzing the resulting system to ensure acceptable performance.
|A Review on Applications of Haptic Systems, Virtual Reality, and Artificial Intelligence in Medical Training in COVID-19 Pandemic|
R Heidari, M Motaharifar, H Taghirad, SF Mohammadi, A Lashay
Journal of Control
This paper presents a survey on haptic technology, virtual reality, and artificial intelligence applications in medical training during the COVID-19 pandemic. Over the last few decades, there has been a great deal of interest in using new technologies to establish capable approaches for medical training purposes. These methods are intended to minimize surgery's adverse effects, mostly when done by an inexperienced surgeon.
|ARAS-Farabi Experimental Framework for Skill Assessment in Capsulorhexis Surgery|
Mohammad Javad Ahmadi, Mohammad Sina Allahkaram, Ashkan Rashvand, Faraz Lotfi, Parisa Abdi, Mohammad Motaharifar, S Farzad Mohammadi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)
Automatic surgical instrument detection in recorded videos is a key component of surgical skill assessment and content-based video analysis. Such analysis may be used to develop training techniques, especially in ophthalmology. This research focuses on capsulorhexis, the most critical step in cataract surgery, which is a very delicate procedure and requires very high surgical skill. Assessment of the surgeon's skill in handling surgical instruments is one of the main parameters of surgical quality assessment, and requires the proper detection of important instruments and tissues during a surgical procedure. The traditional methods to accomplish this task are very time-consuming and effortful, and therefore, automating this process using computer vision approaches is a stringent requirement. In order to accomplish this requirement, a proper dataset is prepared. By consulting the expert surgeons, the pupil …
|Towards an Efficient Computational Framework for Surgical Skill Assessment: Suturing Task by Kinematic Data|
Parisa Hasani, Faraz Lotfi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)
During the course of the residency, novice surgeons develop specific skills before they perform actual surgical procedures. Manual feedback and assessment in basic robotic-assisted minimally invasive surgery (RMIS) training take up much of the expert surgeons' time, while it is highly desirable to automatically provide feedback to surgeons at various skill levels. Towards this end, we use the surgical robot kinematic dataset named JIGSAWS, a public database collected from a Da Vinci robot operated by 7 surgeons, to extract 49 metrics for the suturing task using three types of features, namely time- and motion-based, entropy-based, and frequency-based. To find out the most relevant metrics in skill assessment, we perform and compare two feature selection/reduction methods, namely principal component analysis (PCA) and the relief algorithm. We separately reduce the features based on these two methods, while using a …
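The two feature selection/reduction steps the abstract compares can be illustrated with a minimal sketch: PCA via scikit-learn for unsupervised reduction, and a basic Relief score (nearest hit/miss weighting) for supervised ranking. The data here is synthetic; the metric names, sample sizes, and the simplified Relief variant are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: PCA reduction and a basic Relief score over synthetic
# "kinematic metrics". Feature 0 is constructed to separate the
# two skill levels, so Relief should rank it highest.
import numpy as np
from sklearn.decomposition import PCA

def relief_scores(X, y, n_iter=100, rng=None):
    """Basic Relief weights via nearest hit/miss (simplified variant)."""
    rng = np.random.default_rng(rng)
    # normalize each feature to [0, 1] so distances are comparable
    X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)   # Manhattan distance to sample i
        dist[i] = np.inf
        same = (y == y[i])
        same[i] = False
        hit = np.argmin(np.where(same, dist, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dist, np.inf))  # nearest other-class
        # reward features that differ across classes, penalize within-class spread
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)                 # two skill levels
X = rng.normal(size=(100, 5))
X[:, 0] += 3.0 * y                        # feature 0 is informative

scores = relief_scores(X, y, rng=0)
top = int(np.argmax(scores))              # most discriminative metric
X_pca = PCA(n_components=2).fit_transform(X)   # unsupervised 2-D reduction
```

In a combined PCA+Relief scheme like the one mentioned in the head of this page, one would typically keep the top-ranked Relief features and project the rest through PCA before classification.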
|Surgical Instrument Tracking for Vitreo-retinal Eye Surgical Procedures Using ARAS-EYE Dataset|
F Lotfi, P Hasani, F Faraji, M Motaharifar, HD Taghirad, SF Mohammadi
28th Iranian Conference on Electrical Engineering (ICEE)
Real-time instrument tracking is an essential element of minimally invasive surgery and has several applications in computer-assisted analysis and interventions. However, instrument tracking is very challenging in vitreo-retinal eye surgical procedures owing to the limited workspace of the surgery, illumination variation, flexibility of the instruments, etc. In this article, as a powerful technique to detect and track surgical instruments, it is suggested to employ a convolutional neural network (CNN) alongside the newly produced ARAS-EYE dataset and OpenCV trackers. Specifically, first a You Only Look Once (YOLOv3) CNN is employed to detect the instruments. Thereafter, the Median-Flow OpenCV tracker is utilized to track the detected objects. To correct the tracker, every n frames the CNN runs over the image and the tracker is updated. Moreover, the dataset consists of 594 images with four labels: "shaft", "center", "laser", and "gripper". Utilizing the trained CNN, experiments are conducted to verify the applicability of the proposed approach. Finally, the outcomes are discussed and a conclusion is presented. Results indicate the effectiveness of the proposed approach in detection and tracking of surgical instruments, which may be used for several applications.
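The hybrid loop the abstract describes (expensive detection every n frames, cheap tracking in between) can be sketched as follows. The detector and tracker below are deliberate stand-ins so the structure is visible; in the paper these roles are played by YOLOv3 and OpenCV's Median-Flow tracker, and the box drift model here is purely illustrative.

```python
# Sketch of the "re-detect every n frames" pattern: a stub detector
# and a stub tracker stand in for YOLOv3 and cv2's Median-Flow tracker.
from dataclasses import dataclass

@dataclass
class Box:
    x: int
    y: int
    w: int
    h: int

def detect(t):
    # Stand-in for YOLOv3 inference on frame t: pretend the tool is at (t, t).
    return Box(t, t, 10, 10)

class StubTracker:
    # Stand-in for an OpenCV tracker: init() on a detection, update() per frame.
    def init(self, frame, box):
        self.box = box
    def update(self, frame):
        # drift the box one pixel per frame to mimic frame-to-frame tracking
        self.box = Box(self.box.x + 1, self.box.y + 1, self.box.w, self.box.h)
        return True, self.box

def track_video(n_frames, redetect_every):
    tracker, boxes = StubTracker(), []
    for t in range(n_frames):
        if t % redetect_every == 0:
            box = detect(t)              # expensive: run the CNN
            tracker.init(t, box)         # re-seed the tracker with the detection
        else:
            ok, box = tracker.update(t)  # cheap: propagate the box
        boxes.append((box.x, box.y))
    return boxes

positions = track_video(6, 3)  # re-detection at t = 0 and t = 3
```

The trade-off governed by n is latency versus drift: a smaller n corrects tracker drift sooner at the cost of more CNN inferences per second.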
|A Force Reflection Impedance Control Scheme for Dual User Haptic Training System|
M. Motaharifar, A. Iranfar, and H. D. Taghirad
2019 27th Iranian Conference on Electrical Engineering (ICEE)
In this paper, an impedance control based training scheme for a dual-user haptic surgery training system is introduced. The training scheme allows the novice surgeon (trainee) to perform a surgical operation while an expert surgeon (trainer) supervises the procedure. Through the proposed impedance control structure, the trainer receives the trainee's position to detect his or her wrong movements. Besides, a novel force reflection term is proposed in order to efficiently utilize the trainer's skill in the training loop. Indeed, the trainer can intervene in the procedure whenever needed, either to guide the trainee or to suppress the trainee's authority owing to a lack of skill to continue the operation. Each haptic device is stabilized, and the closed-loop stability of the nonlinear system is investigated. Simulation results show the appropriate performance of the proposed control scheme.
|Skill Assessment Using Kinematic Signatures: Geomagic Touch Haptic Device|
N. S. Hojati, M. Motaharifar, H. D. Taghirad, A. Malekzadeh
International Conference on Robotics and Mechatronics
The aim of this paper is to develop a practical skill assessment for some designed experimental tasks retrieved from Minimally Invasive Surgery. Skill evaluation is very important in surgery training, especially in MIS. Most of the previous studies on skill assessment methods are limited to the Hidden Markov Model and some frequency transforms, such as the Discrete Fourier Transform, the Discrete Cosine Transform, etc. In this paper, some features have been extracted from time-frequency analysis with the Discrete Wavelet Transform and temporal signal analysis of some kinematic metrics computed from Geomagic Touch kinematic data. In addition, the k-nearest neighbors classifier is employed to detect skill level based on the extracted features. Through cross-validation results, it is demonstrated that the proposed methodology has appropriate accuracy in skill level detection.
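The pipeline this abstract outlines (wavelet features of a kinematic signal, then a k-nearest-neighbors classifier) can be sketched with a one-level Haar transform, the simplest DWT. The signals, noise model, and summary statistics below are synthetic assumptions standing in for Geomagic Touch data and the paper's full feature set; "jerkier" novice motion simply carries more high-frequency detail energy.

```python
# Sketch: one-level Haar DWT features of a kinematic signal + kNN classifier.
# Synthetic data: "expert" motions are smooth, "novice" motions are jerky.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def haar_features(sig):
    # one-level Haar wavelet decomposition of an even-length signal
    a = (sig[0::2] + sig[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (sig[0::2] - sig[1::2]) / np.sqrt(2)   # detail coefficients
    # summary statistics as features: smooth trend vs high-frequency energy
    return [a.mean(), a.std(), np.abs(d).mean(), d.std()]

rng = np.random.default_rng(1)

def make_signals(skill_level, n=40):
    # skill_level 0 = expert (low noise), 1 = novice (jerky, high noise)
    base = np.sin(np.linspace(0, 4 * np.pi, 128))
    scale = 0.05 + 0.5 * skill_level
    return [base + rng.normal(scale=scale, size=128) for _ in range(n)]

X = np.array([haar_features(s) for lvl in (0, 1) for s in make_signals(lvl)])
y = np.repeat([0, 1], 40)

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
acc = clf.score(X, y)   # training accuracy on this synthetic set
```

A real evaluation would use cross-validation rather than training accuracy, as the abstract notes; the detail-energy feature alone is what separates the two synthetic classes here.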