ARAS AI in Surgery Research Group

The AI in Surgery group aims to develop new AI-based technologies for robot-assisted surgery and surgical training applications. This includes the design and integration of AI-based systems as well as the development of innovative control structures for surgical systems. These systems will enhance the safety and efficiency of surgical procedures, improving satisfaction for everyone involved in healthcare, especially patients, surgeons, and residents. The group benefits from the collaboration and consultation of several national and international partners in engineering and medical science.

Current Projects

The major focus of the AI in Surgery team is the design and construction of software that makes surgical procedures and surgical training more accurate, more effective, and less invasive. The team's current project is the design and evaluation of a novel AI-based system for facilitating eye surgery training. Our collaboration with the Farabi Ophthalmology Hospital, the Center of Excellence in Ophthalmology, is a valuable and constructive asset for this project. The project involves designing and integrating AI-based systems and developing advanced autonomous control frameworks for them. The most recent projects conducted in our research group are:

      • Eye surgery performance evaluation and improvement using artificial intelligence (AI)
      • Investigation of the metrics and methods of surgical skill assessment in the Eye Surgery Haptic System
      • Development of Skill-Based Video Dataset, Deep Learning Research on Video-Based Cataract Surgery Training and Skill (Quantity and Quality) Assessment
      • Development of the Surgical Metrics and Implementation of Skill Assessment Methods in the Eye Surgery Haptic Training System
      • Evaluation of surgical skills in facilitating haptics system of ARAS surgery training using machine learning algorithms

Development of Skill-Based Video Dataset, Deep Learning Research on Video-Based Cataract Surgery Training and Skill (Quantity and Quality) Assessment

Approach: The eye's lens works like a camera lens, focusing light onto the retina at the back of the eye. Clouding of this lens is called a cataract, and cataract surgery is currently one of the most common procedures in medicine. In cataract surgery, the patient's clouded lens is replaced with a transparent synthetic lens. Phacoemulsification is today's most common technique: ultrasound waves are used to break up the lens, and the probe that fragments it enters the eye through an incision of about 3 mm. The surgeon creates a circular opening in the anterior surface of the lens capsule (the very thin membrane that surrounds the lens), a step called capsulorhexis. Performed with an insulin needle, this step requires great care because the lens capsule is only 16 to 20 microns thick. It is the most critical part of the surgery and demands a very high level of skill, so surgeons must be well trained and carefully evaluated before performing it. Proper training of novice surgeons in capsulorhexis is therefore one of the most important issues in cataract surgery. In addition, diagnosing certain ocular features and eye problems, and understanding their effect on the success of cataract surgery and its aftermath, can help manage the surgery with fewer complications. Thanks to advances in artificial intelligence, it is now possible to automate all or part of surgical training and surgical skill assessment, as well as the diagnosis of ocular features and their effect on surgical quality. Applying AI in this way will improve the quality of surgeries and reduce their complications for patients. The project follows a codified, step-by-step implementation plan.
We first compile a comprehensive dataset of the capsulorhexis step in cataract surgery, comprising surgical videos with educational and diagnostic annotations, including skill-based technical explanations that indicate the surgeon's level of skill in performing capsulorhexis. This requires recording video of surgical operations with suitable equipment, such as high-quality cameras installed in cataract surgery rooms, and handling the data on an efficient communication platform. Fortunately, the agreement between ARAS and Farabi Hospital provides the means to accomplish this step, after which a proper dataset for joint work between medicine and artificial intelligence is formed. Thanks to its principled and comprehensive annotations, this dataset is a rich source for training AI models and can be used to automate surgical training and the diagnosis of important ocular features, reducing surgical complications and supporting strategies for managing the difficulties of capsulorhexis. Finally, we provide an AI architecture that automates surgical skill assessment from video data. The proposed architecture can expand surgical training and, by diagnosing important ocular features, assist surgeons in managing procedures to reduce complications. Thanks to the principled and codified method adopted, this AI-based automated skill assessment closely resembles real surgical evaluation.
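To make the video-based skill-assessment idea concrete, here is a minimal sketch, not our actual system: it assumes per-frame instrument-tip positions have already been extracted by some hypothetical detector, scores a trajectory with simple motion-economy metrics (path length and mean jerk), and labels it by nearest centroid. A real pipeline would use deep video models trained on the annotated dataset.

```python
import numpy as np

def motion_metrics(tip_xy):
    """Simple motion-economy metrics from a (T, 2) array of per-frame
    instrument-tip positions (hypothetical detector output):
    total path length and mean jerk magnitude (third difference)."""
    vel = np.diff(tip_xy, axis=0)                 # frame-to-frame displacement
    path_length = np.linalg.norm(vel, axis=1).sum()
    jerk = np.diff(tip_xy, n=3, axis=0)           # third difference ~ jerk
    mean_jerk = np.linalg.norm(jerk, axis=1).mean()
    return np.array([path_length, mean_jerk])

# Synthetic demo: a smooth circular trajectory (expert-like)
# versus a jittery one (novice-like).
t = np.linspace(0, 2 * np.pi, 200)
smooth = np.stack([np.cos(t), np.sin(t)], axis=1)
rng = np.random.default_rng(0)
jittery = smooth + rng.normal(0.0, 0.05, smooth.shape)

centroids = {"expert": motion_metrics(smooth), "novice": motion_metrics(jittery)}

def classify(tip_xy):
    """Label a trajectory by the nearest centroid in metric space."""
    f = motion_metrics(tip_xy)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(smooth + rng.normal(0.0, 0.01, smooth.shape)))
```

The jittery trajectory produces a longer path and much higher mean jerk, so new trajectories separate cleanly in this toy metric space.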

Outcomes: Previous work in this field includes evaluating a surgeon's skill from video and kinematic data for the three tasks of suturing, knot-tying, and needle-passing [1, 2]. Similar work has been done in rehabilitation and sports training, where, based on a comparable dataset, a new video automatically yields training tips related to the skill of performing a rehabilitation or sports task [3, 4]. Other studies investigate the effect of ocular characteristics on the success of capsulorhexis in order to better manage its difficulties [5, 6]. Our research group has completed several successful projects on automating surgical skill assessment for specific tasks based on the kinematic data of surgical instruments. Although many studies exist in this field, most rely on kinematic data, which are unavailable in many surgical environments and require expensive equipment to acquire. This study, while preparing a comprehensive dataset for artificial intelligence and medical applications, provides a suitable platform for using video data to automate surgical training and skill assessment in any environment, without the need for expensive equipment.

Innovation: This comprehensive product, which includes data on the capsulorhexis step of cataract surgery together with numerous valuable annotations for surgical training and the diagnosis of ocular features, has no domestic or international equivalent. Explanation-based annotation of surgical processes is done here for the first time in this field, making it possible to automate surgical training in a way that stays close to real practice. The project helps optimize the training process for surgical trainers and trainees: it enables highly accurate repetition of training, shortens bed occupancy and surgery time for the patient, shortens the period of skill acquisition, improves patients' surgical outcomes, reduces training costs while increasing the quality of education, advances medical research into the causes of unsuccessful surgeries, and reduces complications during surgery. The COVID-19 pandemic has further increased the motivation and need to deliver training virtually through technology. Furthermore, the annotations in this dataset can in the future be used to analyze an ongoing surgery, anticipate potential risks, and make suggestions.


[1] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller (2018). Evaluating surgical skills from kinematic data using convolutional neural networks. CoRR, abs/1806.02750.

[2] I. Funke, S. T. Mees, J. Weitz, and S. Speidel (2019). Video-based surgical skill assessment using 3D convolutional neural networks. CoRR, abs/1903.02306.

[3] Y. Liao, A. Vakanski, and M. Xian (2019). A Deep Learning Framework for Assessing Physical Rehabilitation Exercises. CoRR, abs/1901.10435.

[4] P. Parmar and B. T. Morris (2019). What and How Well You Performed? A Multitask Learning Approach to Action Quality Assessment. CoRR, abs/1904.04346.

[5] M. Mohammadpour, R. Erfanian, and N. Karimi (2012). Capsulorhexis: Pearls and pitfalls. Saudi Journal of Ophthalmology, 26(1), 33–40.

[6] A. Vasavada and R. Singh (2000). Phacoemulsification in eyes with a small pupil. Journal of Cataract and Refractive Surgery, 26(8), 1210–1218.

[7] Y. Gao, S. S. Vedula, C. E. Reiley, N. Ahmidi, B. Varadarajan, H. C. Lin, L. Tao, L. Zappella, B. Béjar, D. D. Yuh, C. C. G. Chen, R. Vidal, S. Khudanpur, and G. D. Hager (2014). The JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS): A Surgical Activity Dataset for Human Motion Modeling. In Modeling and Monitoring of Computer Assisted Interventions (M2CAI) – MICCAI Workshop.

More Info

Project Managers: Mohammad Sina Allahkaram and Mohammad Javad Ahmadi

Featured Products

Surgical Analysis Software
Eye Surgery Environment Real-Time Analyzer (ESERA)

This software is a tool for extracting motion information and statistical surgical patterns to analyze and evaluate surgical videos. It was developed by the Advanced Robotics and Automated System (ARAS) Research Group in collaboration with Farabi Ophthalmology Hospital.

Our Team

Mohammad Sina Allahkaram (M.Sc.)

Mohammad Sina Allahkaram received his bachelor's degree in Electrical Engineering from K. N. Toosi University of Technology. He is currently pursuing his M.Sc. degree in Mechatronics Engineering under the supervision of Prof. Hamid D. Taghirad. His main research interests include artificial intelligence and deep learning for autonomous robotics.

Mohammad Javad Ahmadi (M.Sc.)

Mohammad Javad Ahmadi was born in December 1996 in Sari, near the Caspian Sea in northern Iran. In 2011 he was accepted into NODET (National Organization for Development of Exceptional Talent), spent four years at Shahid Beheshti high school, and graduated with a diploma GPA of 20/20. In 2015 he entered Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran, and received his B.Sc. degree in Electrical and Control Engineering with a GPA of 4/4. He is currently a graduate student in Control Engineering at K. N. Toosi University of Technology, Tehran, Iran, and has joined the surgical robotics group in the Advanced Robotics and Automated System (ARAS) Lab under the supervision of Prof. Hamid D. Taghirad. His main research interests include medical, surgical, mobile, and flying robots. He is also interested in control theory, artificial intelligence and neural networks, IIoT and IoT, and multi-agent systems and consensus algorithms.

Marzie Lafouti (M.Sc.)

Marzie Lafouti was born in Tehran, Iran, in 1998. She graduated from Farzanegan high school (NODET), Tehran, Iran, in 2016, and received her B.Sc. degree in electrical engineering from K. N. Toosi University of Technology, Tehran, Iran, in 2020. She is now a master's student in control engineering at the same university and joined the ARAS robotics team in 2020. She is interested in intelligent control, deep learning, image processing, and object detection and tracking.

Parisa Forghani (Freelancer)

Parisa Forghani studied at K. N. Toosi University of Technology, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering (Control-Oriented) in 2020; her thesis was “Design and Construction of a Motor Asymmetry Assessment System to Monitor Patients with Parkinson’s Disease”. Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2020. Her current project focuses on artificial-intelligence-based methods to help with corneal diseases such as keratoconus. Her research interests are artificial intelligence, machine learning, data science, and computer vision.

Parisa Ghorbani (M.Sc.)

Parisa Ghorbani studied at Shahid Beheshti University, Tehran, Iran, where she obtained a B.Sc. degree in Electrical Engineering in 2020. Parisa joined the ARAS Lab, Autonomous Robotics research group, in December 2021. Her current project focuses on providing a database for neural network training, using artificial intelligence techniques to help with corneal diseases such as keratoconus. Artificial intelligence, deep learning, machine learning, data science, and computer vision are some of her research interests.

Arefe Rezaee (Freelancer)

Arefe Rezaee received her M.Sc. in Artificial Intelligence from K. N. Toosi University of Technology, Tehran, Iran, in 2019. She joined the APAC group under the supervision of Dr. Alireza Fatehi and Dr. Behrooz Nasihatkon in 2016, working as a researcher on autonomous driving and advanced driver-assistance systems (ADAS) at the Industrial Automation Laboratory. Her thesis was a traffic sign recognition system for autonomous driving based on hierarchical convolutional neural networks and geometric features. She has now joined the ARAS group as a member of the autonomous robotics team, and her current research interests are self-driving vehicles, 3D pose estimation, and action recognition.

Ashkan Rashvand (M.Sc.)

Ashkan Rashvand was born on 26 June 1997 in Qazvin. He graduated from Shahid Babaii NODET (National Organization for Development of Exceptional Talent) high school and finished his B.Sc. in control engineering at Imam Khomeini International University in 2019 with a total GPA of 3.78/4. He has always believed that the most important factors in a person's success in a field are motivation and interest, a belief that helped him rank among the top students during his bachelor's studies. He started his M.Sc. program in the same major at K. N. Toosi University of Technology under the supervision of Prof. Taghirad and is now a member of the ARAS group. His main research interests are robotics (robot motion planning and robot-assisted therapy) and artificial life.

Dr. Mehdi Salmani (Farabi Hospital)


Publications
Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic
Mohammad Motaharifar, Alireza Norouzzadeh, Parisa Abdi, Arash Iranfar, Faraz Lotfi, Behzad Moshiri, Alireza Lashay, Seyed Farzad Mohammadi, Hamid D Taghirad
Frontiers in Robotics and AI, 258

This paper examines how haptic technology, virtual reality, and artificial intelligence can reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education stages might lead to undesired complications for the patient. Therefore, training medical skills to trainees has always been a challenging issue for expert surgeons, and it is even more challenging during pandemics. The current method of surgical training requires novice surgeons to attend courses, watch procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method requires, the people involved, including the novice and expert surgeons, face a potential risk of viral infection. This survey paper reviews novel recent breakthroughs along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.

2021 | Journal | Surgical Robotics
Adaptive Robust Impedance Control of Haptic Systems for Skill Transfer
Ashkan Rashvand, Mohammad Javad Ahmadi, Mohammad Motaharifar, Mahdi Tavakoli, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

Designing control systems with bounded input is a practical consideration since realizable physical systems are limited by the saturation of actuators. The actuators' saturation degrades the performance of the control system, and in extreme cases, the stability of the closed-loop system may be lost. However, actuator saturation is typically neglected in the design of control systems, with compensation being made in the form of over-designing the actuator or by post-analyzing the resulting system to ensure acceptable performance.

2021 | Conference | Surgical Robotics
A Review on Applications of Haptic Systems, Virtual Reality, and Artificial Intelligence in Medical Training in COVID-19 Pandemic
R Heidari, M Motaharifar, H Taghirad, SF Mohammadi, A Lashay
Journal of Control

This paper presents a survey on haptic technology, virtual reality, and artificial intelligence applications in medical training during the COVID-19 pandemic. Over the last few decades, there has been a great deal of interest in using new technologies to establish capable approaches for medical training purposes. These methods are intended to minimize surgery's adverse effects, mostly when done by an inexperienced surgeon.

2021 | Journal | Surgical Robotics
ARAS-Farabi Experimental Framework for Skill Assessment in Capsulorhexis Surgery
Mohammad Javad Ahmadi, Mohammad Sina Allahkaram, Ashkan Rashvand, Faraz Lotfi, Parisa Abdi, Mohammad Motaharifar, S Farzad Mohammadi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

Automatic surgical instruments detection in recorded videos is a key component of surgical skill assessment and content-based video analysis. Such analysis may be used to develop training techniques, especially in ophthalmology. This research focuses on capsulorhexis, the most fateful process in cataract surgery, which is a very delicate procedure and requires very high surgical skill. Assessment of the surgeon’s skill in handling surgical instruments is one of the main parameters of surgical quality assessment, and requires the proper detection of important instruments and tissues during a surgical procedure. The traditional methods to accomplish this task are very time-consuming and effortful, and therefore, automating this process by using computer vision approaches is a stringent requirement. In order to accomplish this requirement, a proper dataset is prepared. By consulting the expert surgeons, the pupil …

2021 | Conference | Surgical Robotics
Towards an Efficient Computational Framework for Surgical Skill Assessment: Suturing Task by Kinematic Data
Parisa Hasani, Faraz Lotfi, Hamid D Taghirad
2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM)

During the course of the residency, novice surgeons develop specific skills before they perform actual surgical procedures. Manual feedback and assessment in basic robotic-assisted minimally invasive surgery (RMIS) training take up much of the expert surgeons’ time, while it is very favorable to automatically feedback to all surgeons in various skill levels. Towards this end, we use the surgical robot kinematic dataset named JIGSAWS, a public database collected from Da Vinci robot operated by 7 surgeons, to extract 49 metrics for the suturing task using three types of features, namely time and motion-based, entropy-based, and frequency-based. To find out the most relevant metrics in skill assessment, we perform and compare two feature selection/reduction methods, namely principal component analysis (PCA) and relief algorithm. We separately reduce the features based on these two methods, while using a …
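The reduce-then-classify idea described in this abstract can be sketched as follows. This is a NumPy-only illustration with synthetic feature values (not the actual 49 JIGSAWS metrics), using SVD-based PCA followed by a k-nearest-neighbors vote:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X (samples x features) onto the top-k principal
    components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
    votes = [y_train[i] for i in idx]
    return max(set(votes), key=votes.count)

# Synthetic "metrics": expert trials cluster low, novice trials high
# (a toy stand-in for time/entropy/frequency-based surgical metrics).
rng = np.random.default_rng(1)
experts = rng.normal(0.0, 0.3, (20, 6))
novices = rng.normal(2.0, 0.3, (20, 6))
X = np.vstack([experts, novices])
y = ["expert"] * 20 + ["novice"] * 20

Z, components = pca_reduce(X, k=2)   # reduce 6-D metrics to 2-D
query = (rng.normal(0.0, 0.3, 6) - X.mean(axis=0)) @ components.T
print(knn_predict(Z, y, query))      # expert-like query -> "expert"
```

With well-separated clusters, the PCA projection preserves the class separation, so the kNN vote recovers the skill label of a held-out trial.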

2021 | Conference | Surgical Robotics
Surgical Instrument Tracking for Vitreo-retinal Eye Surgical Procedures Using ARAS-EYE Dataset
F Lotfi, P Hasani, F Faraji, M Motaharifar, HD Taghirad, SF Mohammadi
28th Iranian Conference on Electrical Engineering (ICEE)

Real-time instrument tracking is an essential element of minimally invasive surgery and has several applications in computer-assisted analysis and interventions. However, instrument tracking is very challenging in vitreo-retinal eye surgical procedures owing to the limited workspace of surgery, illumination variation, flexibility of the instruments, etc. In this article, as a powerful technique to detect and track surgical instruments, it is suggested to employ a convolutional neural network (CNN) alongside a newly produced ARAS-EYE dataset and OpenCV trackers. To clarify, first the you-only-look-once (YOLOv3) CNN is employed to detect the instruments. Thereafter, the Median-Flow OpenCV tracker is utilized to track the detected objects. To modify the tracker, every n frames, the CNN runs over the image and the tracker is updated. Moreover, the dataset consists of 594 images in which the four labels "shaft", "center", "laser", and "gripper" are considered. Utilizing the trained CNN, experiments are conducted to verify the applicability of the proposed approach. Finally, the outcomes are discussed and a conclusion is presented. Results indicate the effectiveness of the proposed approach in the detection and tracking of surgical instruments, which may be used for several applications.
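The detect-then-track scheme described above, an expensive CNN detection every n frames with a cheap tracker in between, can be sketched as follows. The detector and tracker here are toy stand-ins (the paper itself uses YOLOv3 and OpenCV's Median-Flow tracker):

```python
class RepeatLastBoxTracker:
    """Toy tracker that simply repeats the last detected box; a real
    system would use e.g. an OpenCV Median-Flow tracker here."""
    def __init__(self, frame, box):
        self.box = box

    def update(self, frame):
        return self.box

def track_with_redetection(frames, detect, make_tracker, n=10):
    """Run the (expensive) detector every n frames and the (cheap)
    tracker on the frames in between, re-initializing the tracker
    with each fresh detection."""
    boxes, tracker = [], None
    for i, frame in enumerate(frames):
        if i % n == 0:
            box = detect(frame)                 # CNN pass
            tracker = make_tracker(frame, box)  # re-initialize tracker
        else:
            box = tracker.update(frame)         # lightweight update
        boxes.append(box)
    return boxes

frames = list(range(25))              # stand-in for 25 video frames
detect = lambda f: ("instrument", f)  # stub detector: perfect detection
boxes = track_with_redetection(frames, detect, RepeatLastBoxTracker, n=10)
```

In this toy run, frames 0, 10, and 20 get fresh detections and the tracker carries each detection through the following nine frames; tuning n trades detection accuracy against per-frame cost.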

2020 | Conference | Surgical Robotics
A Force Reflection Impedance Control Scheme for Dual User Haptic Training System
M. Motaharifar, A. Iranfar, and H. D. Taghirad
2019 27th Iranian Conference on Electrical Engineering (ICEE)

In this paper, an impedance control based training scheme for a dual user haptic surgery training system is introduced. The training scheme allows the novice surgeon (trainee) to perform a surgery operation while an expert surgeon (trainer) supervises the procedure. Through the proposed impedance control structure, the trainer receives trainee’s position to detect his (her) wrong movements. Besides, a novel force reflection term is proposed in order to efficiently utilize trainer’s skill in the training loop. Indeed, the trainer can interfere into the procedure whenever needed either to guide the trainee or suppress his (her) authority due to his (her) supposedly lack of skill to continue the operation. Each haptic device is stabilized and the closed loop stability of the nonlinear system is investigated. Simulation results show the appropriate performance of the proposed control scheme.

2019 | Conference | Surgical Robotics
Skill Assessment Using Kinematic Signatures: Geomagic Touch Haptic Device
N. S. Hojati, M. Motaharifar, H. D. Taghirad, A. Malekzadeh
International Conference on Robotics and Mechatronics
The aim of this paper is to develop a practical skill assessment for some designed experimental tasks, retrieved from minimally invasive surgery (MIS). Skill evaluation is very important in surgery training, especially in MIS. Most of the previous studies on skill assessment methods are limited to the hidden Markov model and some frequency transforms, such as the discrete Fourier transform and the discrete cosine transform. In this paper, some features have been extracted from time-frequency analysis with the discrete wavelet transform and temporal signal analysis of some kinematic metrics computed from Geomagic Touch kinematic data. In addition, the k-nearest neighbors classifier is employed to detect skill level based on the extracted features. Through cross-validation results, it is demonstrated that the proposed methodology has appropriate accuracy in skill level detection.
2019 | Conference | Surgical Robotics