AI and VR in Medical Robotics: AVMR

The AI in Surgery group aims to develop new AI-based technologies for robot-assisted surgery and surgical training applications. This includes the design and integration of AI-based systems as well as the development of innovative control structures for surgical systems. These systems are intended to enhance the safety and efficiency of surgery, benefiting everyone involved in the healthcare system, particularly patients, surgeons, and residents. The group collaborates and consults with several national and international partners in engineering and medical science.

Current Projects

This research group carries out projects based on artificial intelligence and virtual reality for medical applications, in collaboration with eye surgeons of Farabi Eye Hospital. These projects focus on developing robotic tools and methods for surgical training, training assessment, and diagnosing eye diseases. Three main projects are currently under development in this group, detailed on the group website. The requirements to join each of these projects are given below.

Surgical Videos: Detection, Tracking, and Skill Assessment

This project, started in 2017, sits at the leading edge of AI technologies in medical applications and benefits from international cooperation. It aims to separate the visual and motion characteristics of expert, intermediate, and novice surgeons in order to reach a skill-based feature space for surgical skill transfer. Our goal is to automate surgical skill assessment and delegate it to an artificial intelligence. Having successfully implemented automatic evaluation of surgical skills based on motion and simulated data from the JIGSAWS dataset, the project moved on to automated evaluation of surgical skills from real surgical videos in 2020. To this end, software and pipelines based on deep learning and computer vision have been developed to analyze and evaluate surgical skills. The project also involves detecting and tracking key areas in surgery, such as surgical tools and tissues. The following keywords identify some current topics in this project: AI, Computer Vision, Video Classification, Video Understanding, Image Classification, Object Detection and Tracking, Image & Video Processing, CNN, Python and Qt, PyTorch and TensorFlow.
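As a concrete illustration of the motion characteristics mentioned above, the sketch below computes classic time-and-motion skill metrics (path length, mean speed, and a jerk-based smoothness score) from a tracked tool-tip trajectory. This is a minimal, self-contained example with a hypothetical function name, not the group's actual pipeline; expert motions typically show shorter paths and lower jerk than novice ones.

```python
import numpy as np

def motion_metrics(positions, dt=1.0 / 30.0):
    """Compute motion-based skill metrics from a tracked tool-tip
    trajectory (N x 2 array of coordinates sampled every dt seconds).

    Returns path length, mean speed, and mean squared jerk -- lower
    jerk generally indicates a more fluent, expert-like motion.
    """
    velocity = np.diff(positions, axis=0) / dt     # (N-1, 2) finite differences
    speed = np.linalg.norm(velocity, axis=1)       # (N-1,) instantaneous speed
    accel = np.diff(velocity, axis=0) / dt         # (N-2, 2)
    jerk = np.diff(accel, axis=0) / dt             # (N-3, 2) third derivative

    return {
        "path_length": float(speed.sum() * dt),
        "mean_speed": float(speed.mean()),
        "mean_sq_jerk": float((jerk ** 2).sum(axis=1).mean()),
    }

# Example: a straight, constant-speed path from (0, 0) to (1, 2)
# has path length sqrt(5) and (numerically) zero jerk.
t = np.linspace(0.0, 1.0, 31)[:, None]
straight = np.hstack([t, 2.0 * t])
m = motion_metrics(straight, dt=1.0 / 30.0)
```

Such per-trial metrics form a feature vector that a downstream classifier can map to expert/intermediate/novice labels.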

Medical Images: Diagnosis, Detection and Classification

Through the analysis of medical images, this project aims to diagnose eye diseases. It is being led in collaboration with Farabi Hospital. One of the challenges facing the medical system is recognizing conditions such as keratoconus and categorizing eyes into normal, suspect, and KC categories. While these diagnoses are critical, they demand considerable time and resources from the medical system. AI and deep-learning methods are being developed in this group for image classification and to provide surgeons with a more comprehensive view of medical images. To this end, a balanced dataset for the diagnosis of keratoconus is being collected from patients at Farabi Eye Hospital, supplemented with synthetic data generated by a variational autoencoder (VAE). The results of this project help surgeons decide whether to perform refractive vision surgery on a patient. The following keywords identify some current topics in this project: AI, Computer Vision, Image Classification, Image Analysis, Image Processing, CNN, VAE, Python.
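The two ingredients that make a VAE usable for synthetic data generation are the reparameterization trick and the KL regularizer. The sketch below shows both in plain NumPy under toy values; it is an illustration of the mechanism, not the group's trained model, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I).
    In a real VAE this keeps the sampling step differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL divergence between N(mu, sigma^2) and the N(0, I) prior,
    summed over latent dimensions -- the regularizer in the VAE loss."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=-1)

# The encoder of a trained VAE would produce (mu, log_var) per image;
# here we use toy values. With mu = 0 and log_var = 0 the posterior
# already matches the prior, so the KL term is zero.
mu = np.zeros((4, 8))        # batch of 4 images, latent dimension 8
log_var = np.zeros((4, 8))
z = reparameterize(mu, log_var)        # latent samples to decode
kl = kl_to_standard_normal(mu, log_var)
```

After training, new synthetic images for the under-represented class are obtained by decoding fresh samples z drawn from the prior.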

Mixed Reality in Surgery: VR and MR approaches

The ARAS Mixed Reality research project aims to build a virtual-reality simulator of vitrectomy and cataract surgery that provides eye-surgery residents with an environment closely resembling real surgery. This objective is pursued in joint research with the Surgical Robotics (SR) research group of ARAS. We aim to incorporate the mixed-reality simulation tool developed in this group into ARASH:ASiST, the product developed in the SR group for vitrectomy training. The following keywords identify some current topics in this project: Mixed Reality, VR and MR approaches in simulation environments, SOFA, Unity and other game engines, Unreal Engine, Blender, UV mapping, Meshing, Texturing, CMake, CUDA, C++.


Featured Products

Surgical Analysis Software
Eye Surgery Environment Real-Time Analyzer (ESERA)

This software, developed by the Advanced Robotics and Automated Systems (ARAS) Research Group in collaboration with Farabi Ophthalmology Hospital, extracts motion information and statistical surgical patterns to analyze and evaluate surgical videos.

Our Team

Mohammad Sina Allahkaram (M.Sc.)

Mohammad Sina Allahkaram received his bachelor’s degree in Electrical Engineering from K. N. Toosi University of Technology. He is currently pursuing his M.Sc. degree in Mechatronics Engineering under the supervision of Prof. Hamid D. Taghirad. His main research interests include artificial intelligence and deep learning in autonomous robotics.

Mohammad Javad Ahmadi (M.Sc.)

Mohammad Javad Ahmadi was born in December 1996 in Sari, near the Caspian Sea in northern Iran. In 2011 he was accepted into NODET (National Organization for Development of Exceptional Talents), spent four years at Shahid Beheshti high school, and graduated with a diploma GPA of 20/20. In 2015 he entered Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran, and received his B.Sc. degree in Electrical and Control Engineering with a GPA of 4/4. He is currently a graduate student in Control Engineering at K. N. Toosi University of Technology, Tehran, Iran, and has joined the surgical robotics group in the Advanced Robotics and Automated Systems (ARAS) Lab under the supervision of Prof. Hamid D. Taghirad. His main research interests include medical robots, surgical robots, mobile robots, and flying robots. He is also interested in research on control theory, artificial intelligence and neural networks, IIoT and IoT, and multi-agent systems and consensus algorithms. (Personal Website | ARAS Website)

Marzie Lafouti (M.Sc.)

Marzie Lafouti was born in Tehran, Iran, in 1998. She graduated from Farzanegan high school (NODET), Tehran, Iran, in 2016, and received her B.Sc. degree in electrical engineering from K. N. Toosi University of Technology, Tehran, Iran, in 2020. She is now a master’s student in control engineering at K. N. Toosi University of Technology and joined the ARAS robotics team in 2020. She is interested in intelligent control, deep learning, image processing, and object detection and tracking.

Parisa Forghani (Freelancer)

Parisa Forghani studied at K. N. Toosi University of Technology, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering (Control) in 2020; her thesis was “Design and Construction of a Motor Asymmetry Assessment System to Monitor Patients with Parkinson’s Disease”. Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2020. Her current project focuses on applying artificial intelligence to corneal diseases such as keratoconus.
Artificial intelligence, machine learning, data science, and computer vision are her research interests.

Parisa Ghorbani (M.Sc.)

Parisa Ghorbani studied at Shahid Beheshti University, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering in 2020. Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2021. Her current project focuses on providing a database for neural network training using artificial intelligence techniques to help with corneal diseases such as keratoconus. Artificial intelligence, deep learning, machine learning, data science, and computer vision are some of her research interests.

Arefe Rezaee (Freelancer)

Arefe Rezaee received her M.Sc. in Artificial Intelligence from K. N. Toosi University of Technology, Tehran, Iran, in 2019. She joined the APAC group under the supervision of Dr. Alireza Fatehi and Dr. Behrooz Nasihatkon in 2016, where she was a researcher in autonomous driving systems and advanced driver-assistance systems (ADAS) at the Industrial Automation laboratory. Her thesis was a traffic sign recognition system based on hierarchical convolutional neural networks using geometric features for autonomous driving. She has now joined the ARAS group as a member of the autonomous robotics team, and her current research interests are self-driving, 3D pose estimation, and action recognition.

Ashkan Rashvand (M.Sc.)

Ashkan Rashvand was born on 26 June 1997 in Qazvin. He graduated from Shahid Babaii NODET (National Organization for Development of Exceptional Talents) high school and finished his B.Sc. in control engineering at Imam Khomeini International University in 2019 with a total GPA of 3.78/4. He has always believed that the most important factors in a person’s success in a specific field are their motivation and interest in that field; this belief helped him rank among the top students during his bachelor’s studies. He started his M.Sc. program in the same major at K. N. Toosi University of Technology under the supervision of Prof. Taghirad and is now a member of the ARAS group. His main research interests are robotics (robot motion planning, robot-assisted therapy) and artificial life.

Dr. Mehdi Salmani (Farabi Hospital)


Publications
Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic
Mohammad Motaharifar, Alireza Norouzzadeh, Parisa Abdi, Arash Iranfar, Faraz Lotfi, Behzad Moshiri, Alireza Lashay, Seyed Farzad Mohammadi, Hamid D Taghirad
Frontiers in Robotics and AI, 258

This paper examines how haptic technology, virtual reality, and artificial intelligence can reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education stages might lead to undesired complications for the patient. Therefore, training medical skills to trainees has always been a challenging issue for expert surgeons, and this is even more challenging in pandemics. The current method of surgical training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method of medical training requires, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews recent breakthroughs along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.

2021 · Journal · Surgical Robotics
Adaptive Robust Impedance Control of Haptic Systems for Skill Transfer
Ashkan Rashvand, Mohammad Javad Ahmadi, Mohammad Motaharifar, Mahdi Tavakoli, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

Designing control systems with bounded input is a practical consideration, since realizable physical systems are limited by the saturation of their actuators. Actuator saturation degrades the performance of the control system, and in extreme cases the stability of the closed-loop system may be lost. Nevertheless, actuator saturation is typically neglected in the design of control systems, with compensation made by over-designing the actuator or by post-analyzing the resulting system to ensure acceptable performance.

2021 · Conference · Surgical Robotics
A Review on Applications of Haptic Systems, Virtual Reality, and Artificial Intelligence in Medical Training in COVID-19 Pandemic
R Heidari, M Motaharifar, H Taghirad, SF Mohammadi, A Lashay
Journal of Control

This paper presents a survey on haptic technology, virtual reality, and artificial intelligence applications in medical training during the COVID-19 pandemic. Over the last few decades, there has been a great deal of interest in using new technologies to establish capable approaches for medical training purposes. These methods are intended to minimize surgery's adverse effects, mostly when done by an inexperienced surgeon.

2021 · Journal · Surgical Robotics
ARAS-Farabi Experimental Framework for Skill Assessment in Capsulorhexis Surgery
Mohammad Javad Ahmadi, Mohammad Sina Allahkaram, Ashkan Rashvand, Faraz Lotfi, Parisa Abdi, Mohammad Motaharifar, S Farzad Mohammadi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

Automatic surgical instruments detection in recorded videos is a key component of surgical skill assessment and content-based video analysis. Such analysis may be used to develop training techniques, especially in ophthalmology. This research focuses on capsulorhexis, the most fateful process in cataract surgery, which is a very delicate procedure and requires very high surgical skill. Assessment of the surgeon’s skill in handling surgical instruments is one of the main parameters of surgical quality assessment, and requires the proper detection of important instruments and tissues during a surgical procedure. The traditional methods to accomplish this task are very time-consuming and effortful, and therefore, automating this process by using computer vision approaches is a stringent requirement. In order to accomplish this requirement, a proper dataset is prepared. By consulting the expert surgeons, the pupil …

2021 · Conference · Surgical Robotics
Towards an Efficient Computational Framework for Surgical Skill Assessment: Suturing Task by Kinematic Data
Parisa Hasani, Faraz Lotfi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

During the course of the residency, novice surgeons develop specific skills before they perform actual surgical procedures. Manual feedback and assessment in basic robotic-assisted minimally invasive surgery (RMIS) training take up much of the expert surgeons’ time, while it is very favorable to automatically feedback to all surgeons in various skill levels. Towards this end, we use the surgical robot kinematic dataset named JIGSAWS, a public database collected from Da Vinci robot operated by 7 surgeons, to extract 49 metrics for the suturing task using three types of features, namely time and motion-based, entropy-based, and frequency-based. To find out the most relevant metrics in skill assessment, we perform and compare two feature selection/reduction methods, namely principal component analysis (PCA) and relief algorithm. We separately reduce the features based on these two methods, while using a …
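The PCA step described above can be sketched compactly with an SVD on the centered feature matrix. The example below uses random toy data in place of the 49 JIGSAWS metrics, so the numbers are purely illustrative and the function name is hypothetical.

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project a (trials x metrics) feature matrix onto its top
    principal components via SVD -- one way to reduce a large set of
    suturing metrics before skill classification."""
    centered = features - features.mean(axis=0)
    # Rows of vt are the principal directions, ordered by variance.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    reduced = centered @ vt[:n_components].T
    # Fraction of total variance retained by the kept components.
    explained = (s[:n_components] ** 2).sum() / (s ** 2).sum()
    return reduced, explained

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 49))   # toy stand-in: 20 trials, 49 metrics
Z, ratio = pca_reduce(X, n_components=5)
```

The reduced matrix Z then feeds a classifier, while `ratio` indicates how much of the metric variance the kept components preserve.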

2021 · Conference · Surgical Robotics
Surgical Instrument Tracking for Vitreo-retinal Eye Surgical Procedures Using ARAS-EYE Dataset
F Lotfi, P Hasani, F Faraji, M Motaharifar, HD Taghirad, SF Mohammadi
28th Iranian Conference on Electrical Engineering (ICEE)

Real-time instrument tracking is an essential element of minimally invasive surgery and has several applications in computer-assisted analysis and interventions. However, instrument tracking is very challenging in vitreo-retinal eye surgical procedures owing to the limited workspace of the surgery, illumination variation, flexibility of the instruments, etc. In this article, as a powerful technique to detect and track surgical instruments, it is suggested to employ a convolutional neural network (CNN) alongside a newly produced ARAS-EYE dataset and OpenCV trackers. To clarify, first a You Only Look Once (YOLOv3) CNN is employed to detect the instruments. Thereafter, the Median-flow OpenCV tracker is utilized to track the detected objects. To update the tracker, every n frames the CNN runs over the image and the tracker is re-initialized. Moreover, the dataset consists of 594 images with four labels: “shaft”, “center”, “laser”, and “gripper”. Utilizing the trained CNN, experiments are conducted to verify the applicability of the proposed approach. Finally, the outcomes are discussed and a conclusion is presented. Results indicate the effectiveness of the proposed approach in detecting and tracking surgical instruments, which may be used for several applications.
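The detect-every-n-frames scheduling described in this abstract can be sketched independently of any particular detector or tracker. The skeleton below uses toy stand-in functions (all names hypothetical) where a YOLO-style detector and an OpenCV Median-flow tracker would plug in; the point is the alternation between expensive re-detection and cheap frame-to-frame tracking.

```python
def track_with_periodic_redetection(frames, detect, tracker_update, n=10):
    """Run the (expensive) detector every n frames and the (cheap)
    tracker on the frames in between, returning one box per frame."""
    boxes = []
    box = None
    for i, frame in enumerate(frames):
        if i % n == 0 or box is None:
            box = detect(frame)               # re-initialize from the CNN
        else:
            box = tracker_update(frame, box)  # propagate the previous box
        boxes.append(box)
    return boxes

# Toy stand-ins: the "detector" returns a fixed box and counts its
# invocations; the "tracker" drifts the box one pixel per frame.
calls = {"detect": 0}

def fake_detect(frame):
    calls["detect"] += 1
    return (0, 0, 10, 10)

def fake_track(frame, box):
    x, y, w, h = box
    return (x + 1, y, w, h)

out = track_with_periodic_redetection(range(25), fake_detect, fake_track, n=10)
```

With 25 frames and n=10, the detector fires on frames 0, 10, and 20, and tracking drift is reset at each re-detection.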

2020 · Conference · Surgical Robotics
A Force Reflection Impedance Control Scheme for Dual User Haptic Training System
M. Motaharifar, A. Iranfar, and H. D. Taghirad
2019 27th Iranian Conference on Electrical Engineering (ICEE)

In this paper, an impedance-control-based training scheme for a dual-user haptic surgery training system is introduced. The training scheme allows the novice surgeon (trainee) to perform a surgical operation while an expert surgeon (trainer) supervises the procedure. Through the proposed impedance control structure, the trainer receives the trainee’s position to detect his or her wrong movements. Besides, a novel force reflection term is proposed in order to efficiently utilize the trainer’s skill in the training loop. Indeed, the trainer can intervene in the procedure whenever needed, either to guide the trainee or to suppress the trainee’s authority due to a supposed lack of skill to continue the operation. Each haptic device is stabilized, and the closed-loop stability of the nonlinear system is investigated. Simulation results show the appropriate performance of the proposed control scheme.

2019 · Conference · Surgical Robotics
Skill Assessment Using Kinematic Signatures: Geomagic Touch Haptic Device
N. S. Hojati, M. Motaharifar, H. D. Taghirad, A. Malekzadeh
International Conference on Robotics and Mechatronics
The aim of this paper is to develop a practical skill assessment for some designed experimental tasks, retrieved from Minimally Invasive Surgery. Skill evaluation is very important in surgery training, especially in MIS. Most of the previous studies on skill assessment methods are limited to the Hidden Markov Model and some frequency transforms, such as the Discrete Fourier Transform, the Discrete Cosine Transform, etc. In this paper, some features have been extracted from time-frequency analysis with the Discrete Wavelet Transform and temporal signal analysis of some kinematic metrics which were computed from Geomagic Touch kinematic data. In addition, the k-nearest neighbors classifier is employed to detect skill level based on the extracted features. Through cross-validation results, it is demonstrated that the proposed methodology has appropriate accuracy in skill level detection.

2019 · Conference · Surgical Robotics