Virtual Reality in medicine is a three-dimensional teaching tool used across healthcare for education and instruction. Virtual Reality commonly refers to healthcare simulation environments in which learners experience visual stimuli delivered via computer graphics, along with other sensory cues. This technology allows learners to acquire the knowledge and skill needed to perform a range of tasks and procedures involving the human body without ever practicing on a live patient. Central to this technology is the immersive capacity of Virtual Reality, i.e. the simulated environment surrounds the learner's perceptual field, so that the user feels psychologically present in the digital world rather than in their physical surroundings. Used to teach diagnosis, treatment, rehabilitation, surgery, counseling techniques and more, Virtual Reality in medicine is helping to train the next generation of healthcare professionals.

This medical simulation technology has been shown to have a number of benefits, such as allowing learners to practice their skills without fear that an error will have potentially life-threatening consequences. Virtual Reality tools still provide the hands-on experience required to become familiar and comfortable with performing procedures, but in a safe and controlled setting. As learners make mistakes, they can be corrected thoroughly and in real time, without risk. Because Virtual Reality modules still require interaction, skills can become second nature before they are applied in real-world scenarios.

The ARAS Mixed Reality research group aims to build a virtual reality simulator of vitrectomy and cataract surgery that provides eye surgery residents with an environment closely matching real surgery. The research objectives of this group are pursued in joint collaboration with the Surgical Robotics (SR) research group of ARAS.
We aim to incorporate the mixed reality simulation tool developed in this group into ARASH:ASiST, the product developed in the SR group for vitrectomy training.
Core Projects
Simulation of vitrectomy surgery
Vitreoretinal surgery is a term in ophthalmology covering surgical procedures in the vitreous body and retinal space; these are among the most delicate operations in eye surgery, especially those near the retina. Vitrectomy is a surgical procedure undertaken by a specialist in which the vitreous humor gel that fills the eye cavity is removed to provide better access to the retina. This allows a variety of repairs, including the removal of scar tissue, laser repair of retinal detachments, and treatment of macular holes. In the ARAS Mixed Reality group, the goal is to simulate this operation: the physics of the entire surgical environment is simulated with the SOFA simulator on a 3D model, and the graphical environment is rendered in Unreal Engine. Surgical simulation greatly helps residents and makes learning the surgical process easier for them. Finally, we use the ARASH:ASiST haptic device in this group, which facilitates surgical training by involving expert and novice physicians in the process and providing appropriate haptic feedback.
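To give a feel for what the physics engine computes each frame, here is a minimal conceptual sketch of a mass-spring soft-tissue step, the kind of deformable-body update a simulator such as SOFA performs for the vitreous and retinal meshes. This is plain Python for illustration only, not the SOFA API; all names and gains are assumptions.

```python
# Conceptual sketch (not the SOFA API): one explicit-Euler step of a
# mass-spring model, the kind of soft-tissue physics a simulator like
# SOFA computes each frame for deformable eye-tissue meshes.

def step(positions, velocities, springs, k=50.0, mass=0.01, damping=0.9, dt=0.001):
    """Advance a 3D mass-spring system by one time step.

    positions, velocities: lists of [x, y, z] per mesh node
    springs: list of (i, j, rest_length) connecting node indices
    """
    forces = [[0.0, 0.0, 0.0] for _ in positions]
    for i, j, rest in springs:
        # Hooke's law along the spring direction
        d = [positions[j][a] - positions[i][a] for a in range(3)]
        length = sum(c * c for c in d) ** 0.5 or 1e-9
        f = k * (length - rest)
        for a in range(3):
            forces[i][a] += f * d[a] / length
            forces[j][a] -= f * d[a] / length
    for n in range(len(positions)):
        for a in range(3):
            velocities[n][a] = damping * (velocities[n][a] + dt * forces[n][a] / mass)
            positions[n][a] += dt * velocities[n][a]
    return positions, velocities
```

A stretched spring between two nodes pulls them together over successive steps; real engines use finite-element models and implicit integrators for stability, but the per-frame structure (assemble forces, integrate, update positions) is the same.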
Simulation of cataract surgery
The eye works a lot like a camera, using a lens to focus an image. If your camera lens became cloudy, you would have a hard time viewing the world around you. Just like a camera, the lenses in your eyes can become cloudy as you age, making it harder for you to see. This clouding is a natural condition known as cataracts. With today's technology, your surgeon can safely remove your cataract and implant a replacement lens to restore your vision. These surgeries require special skills that trainee surgeons need to learn, and Augmented or Mixed Reality can shorten the learning curve and lead to better results. The SOFA framework is a tool for simulating and modeling complicated organs such as the eye. In SOFA we build a realistic 3D model of the different parts of the eye by creating meshes and materials for them. Finally, we add surgical tools to the scene and connect the physics of the simulation to the surgical robot for force feedback.
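The meshes mentioned above are, at bottom, vertex positions plus triangle indices; per-face normals are then needed for rendering and contact handling. The following is an illustrative, SOFA-independent sketch of that representation (all names are hypothetical):

```python
# Illustrative sketch (not SOFA-specific): a triangle mesh of the kind
# used to model eye structures is vertex positions plus triangle index
# triples; each triangle's unit normal is the normalized cross product
# of two of its edges.

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def face_normals(vertices, triangles):
    """Return one unit normal per triangle (i, j, k) of vertex indices."""
    normals = []
    for i, j, k in triangles:
        e1 = tuple(vertices[j][a] - vertices[i][a] for a in range(3))
        e2 = tuple(vertices[k][a] - vertices[i][a] for a in range(3))
        n = cross(e1, e2)
        length = sum(c * c for c in n) ** 0.5 or 1e-9
        normals.append(tuple(c / length for c in n))
    return normals
```

For a triangle in the xy-plane with counter-clockwise winding, the normal points along +z; winding order is what distinguishes the inside from the outside of a closed surface such as the eye globe.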
Presentation of eye surgery in virtual reality
Fast-evolving virtual reality technology has numerous applications in the medical field, especially in the training of surgery by simulation. Virtual reality provides an immersive experience that makes surgical training possible at lower cost and in much less time than training in the real world. To build the virtual reality environment, a connection between SOFA (the simulation engine) and Unity (the visualization engine) is developed using inter-process communication (IPC). All of the data required to represent a 3D model, such as vertices, normal vectors, polygons, quads, and edges, is transferred from the SOFA engine to Unity in order to present the simulation result in virtual reality.
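The mesh data crossing that IPC link has to be flattened into a byte buffer on the simulator side and parsed back on the visualization side. The sketch below shows one plausible wire layout (counts followed by float and index arrays); the actual format used by the group is not specified in the text, so this layout is an assumption.

```python
# Hypothetical sketch of an IPC mesh message: the simulator flattens
# vertices, normals, and triangle indices into a little-endian binary
# buffer; the visualization side parses it back. The real wire format
# is an assumption here.
import struct

def pack_mesh(vertices, normals, triangles):
    """Serialize a mesh as: vertex count, triangle count, then payloads."""
    buf = struct.pack('<II', len(vertices), len(triangles))
    for v in vertices:
        buf += struct.pack('<3f', *v)      # one xyz position per vertex
    for n in normals:
        buf += struct.pack('<3f', *n)      # one xyz normal per vertex
    for t in triangles:
        buf += struct.pack('<3I', *t)      # three vertex indices per triangle
    return buf

def unpack_mesh(buf):
    """Parse a buffer produced by pack_mesh back into mesh arrays."""
    nv, nt = struct.unpack_from('<II', buf, 0)
    off = 8
    vertices = [struct.unpack_from('<3f', buf, off + 12 * i) for i in range(nv)]
    off += 12 * nv
    normals = [struct.unpack_from('<3f', buf, off + 12 * i) for i in range(nv)]
    off += 12 * nv
    triangles = [struct.unpack_from('<3I', buf, off + 12 * i) for i in range(nt)]
    return vertices, normals, triangles
```

The same buffer could be sent over any IPC channel (shared memory, named pipe, or local socket); a compact binary layout matters because the mesh must be re-sent every frame the deformation changes.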

Online Surgical Robot Simulator
In line with the aim of the ARAS Mixed Reality group to implement simulations of eye surgeries, including vitrectomy and cataract surgery, for training surgeons, the trainee uses a haptic device to manipulate the surgical tool while wearing a headset to experience a virtual surgery. The main interest of interactive simulation is that the trainer can modify the training procedure for the trainee and teach him/her the details of the surgeries step by step. Therefore, an online and accurate interaction between the haptic device and the simulator, with the lowest possible connection delay, is desirable. This project aims to transfer the actual movement of the surgical tool into the simulator environment. To this end, a platform is being designed to create a bidirectional connection between the mixed reality environment and the haptic device, ARASH:ASiST, so that the online movement of the surgical tool can be seen in the simulator. This is most crucial for the surgical simulation when the surgical tool comes into contact with soft tissue, in which case instantaneous deformations shall be computed and shown in the simulator. This visual feedback of the contact may be enhanced by haptic rendering such that the surgeon can transparently feel the contact. Moreover, computing the force feedback according to the user's actions during the operation and applying it to the end-effector of the haptic device shall be implemented in real time.
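A common way to render such contact forces is a penalty (spring) model: when the tool tip penetrates the tissue surface, a force proportional to the penetration depth pushes it back out. The sketch below illustrates this; the function name, stiffness, and force clamp are illustrative assumptions, not the ARASH:ASiST interface.

```python
# Minimal sketch of the haptic-rendering side (names and gains are
# illustrative, not the ARASH:ASiST interface): when the tool tip
# penetrates the tissue surface, a penalty force proportional to the
# penetration depth is sent back to the device's end-effector.

def contact_force(tip_depth_mm, stiffness=0.3, max_force_n=2.0):
    """Spring-model force (N) pushing the tool out of the tissue.

    tip_depth_mm: penetration depth of the tool tip below the surface;
    non-positive values mean no contact and therefore no force.
    """
    if tip_depth_mm <= 0.0:
        return 0.0
    # Clamp the output so a bad depth estimate cannot command a
    # dangerously large force on the device.
    return min(stiffness * tip_depth_mm, max_force_n)
```

In practice this computation runs in a loop much faster than the graphics loop (typically around 1 kHz versus roughly 60 Hz), because low latency between motion and force is what makes the contact feel stable and transparent.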
Title | Abstract | Year | Type | Research Group
---|---|---|---|---
Reconstruction of B-spline curves and surfaces by adaptive group testing Alireza Norouzzadeh Ravari, Hamid D. Taghirad Computer-Aided Design | Abstract: Point clouds as measurements of 3D sensors have many applications in various fields such as object modeling, environment mapping and surface representation. Storage and processing of raw point clouds is time consuming and computationally expensive. In addition, their high dimensionality shall be considered, which results in the well known curse of dimensionality. Conventional methods either apply reduction or approximation to the captured point clouds in order to make the data processing tractable. B-spline curves and surfaces can effectively represent 2D data points and 3D point clouds for most applications. Since processing all available data for B-spline curve or surface fitting is not efficient, based on the Group Testing theory an algorithm is developed that finds salient points sequentially. The B-spline curve or surface models are updated by adding a new salient point to the fitting process iteratively until the Akaike Information Criterion (AIC) is met. Also, it has been proved that the proposed method finds a unique solution so as what is defined in the group testing theory. From the experimental results the applicability and performance improvement of the proposed method in relation to some state-of-the-art B-spline curve and surface fitting methods, may be concluded. | 2016 | Journal | Mixed Reality in Surgery | |
NURBS-based Representation of Urban Environments for Mobile Robots Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2016 4th International Conference on Robotics and Mechatronics (ICROM) | Abstract: Representation of the surrounding environment is a vital task for a mobile robot. Many applications for mobile robots in urban environments may be considered such as self-driving cars, delivery drones or assistive robots. In contrast to the conventional methods, in this paper a Non Uniform Rational B-Spline (NURBS) based technique is represented for 3D mapping of the surrounding environment. While in the state of the art techniques, the robot's environment is expressed in a discrete space, the proposed method is mainly developed for representation of environment in a continuous space. Exploiting the information theory, the generated representation has much lower complexity and more compression capability in relation to some state of the art techniques. In addition to representation in a lower dimensional space, the NURBS based representation is invariant against 3D geometric transformations. Furthermore, the NURBS based representation can be employed for obstacle avoidance and navigation. The applicability of the proposed algorithm is investigated in some urban environments through some publicly available data sets. It has been shown by some experiments that the proposed method has better visual representation and much better data compression compared to some state-of-the-art methods. | 2016 | Conference | Mixed Reality in Surgery | |
Loop Closure Detection by Compressed Sensing for Exploration of Mobile Robots in Outdoor Environments Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM) | Abstract: In the problem of simultaneously localization and mapping (SLAM) for a mobile robot, it is required to detect previously visited locations so the estimation error shall be reduced. Sensor observations are compared by a similarity metric to detect loops. In long term navigation or exploration, the number of observations increases and so the complexity of the loop closure detection. Several techniques are proposed in order to reduce the complexity of loop closure detection. Few algorithms have considered the loop closure detection from a subset of sensor observations. In this paper, the compressed sensing approach is exploited to detect loops from few sensor measurements. In the basic compressed sensing it is assumed that a signal has a sparse representation is a basis which means that only a few elements of the signal are non-zero. Based on the compressed sensing approach a sparse signal can be recovered from few linear noisy projections by l1 minimization. The difference matrix which is widely used for loop detection has a sparse structure, where similar observations are shown by zero distance and different locations are indicated by ones. Based on the multiple measurement vector technique which is an extension of the basic compressed sensing, the loop closure detection is performed by comparison of few sensor observations. The applicability of the proposed algorithm is investigated in some outdoor environments through some publicly available data sets. It has been shown by some experiments that the proposed method can detect loops effectively. | 2015 | Conference | Mixed Reality in Surgery | |
3D Scene and Object Classification Based on Information Complexity of Depth Data A. Norouzzadeh, H. D. Taghirad Mathematics | Abstract: In this paper the problem of 3D scene and object classification from depth data is addressed. In contrast to high-dimensional feature-based representation, the depth data is described in a low dimensional space. In order to remedy the curse of dimensionality problem, the depth data is described by a sparse model over a learned dictionary. Exploiting the algorithmic information theory, a new definition for the Kolmogorov complexity is presented based on the Earth Mover's Distance (EMD). Finally the classification of 3D scenes and objects is accomplished by means of a normalized complexity distance, where its applicability in practice is proved by some experiments on publicly available datasets. Also, the experimental results are compared to some state-of-the-art 3D object classification methods. Furthermore, it has been shown that the proposed method outperforms FAB-Map 2.0 in detecting loop closures, in the sense of the precision and recall. | 2015 | Journal | Mixed Reality in Surgery | |
Transformation Invariant 3D Object Recognition Based On Information Complexity Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: The 3D representation of objects and scenes as a point cloud or range image has been made simple by means of sensors such as Microsoft Kinect, stereo camera or laser scanner. Various tasks, such as recognition, modeling and classification can not be performed on raw measurements because of the curse of high dimensionality, computational and algorithm complexity. Non Uniform Rational Basis Splines (NURBS) are a widely used representation technique for 3D objects in various robotics and Computer Aided Design (CAD) applications. In this paper, a similarity measurement from information theory is employed in order to recognize an object sample from a set of objects. From a NURBS model fitted to the observed point cloud, a complexity based representation is derived which is transformation invariant in the sense of Kolmogorov complexity. Experimental results on a set of 3D objects grabbed by a Kinect sensor indicates the applicability of the proposed method for object recognition tasks. Furthermore, the results of the proposed method is compared to that of some state of the art algorithms. | 2014 | Conference | Mixed Reality in Surgery | |
Loop Closure Detection By Algorithmic Information Theory: Implemented On Range And Camera Image Data Alireza Norouzzadeh Ravari and Hamid D. Taghirad IEEE Transactions on Cybernetics | Abstract: In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image as mobile robot observations. In contrast to high-dimensional feature-based representations, in this model, the dimension of the sensor measurements' representations is reduced. Considering the loop closure detection as a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than original sensor observations. Exploiting the algorithmic information theory, the representation is developed such that it has the geometrically transformation invariant property in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity based representations of image models. Finally, a distinctive property of normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods. | 2014 | Journal | Mixed Reality in Surgery | |
Unsupervised 3D Object Classification from Range Image Data by Algorithmic Information Theory Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: The problem of unsupervised classification of 3D objects from depth information is investigated in this paper. The range images are represented efficiently as sensor observations. Considering the high-dimensionality of 3D object classification, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In order to remedy this problem, a low-dimensional representation is defined here. The sparse model of every range image is constructed from a parametric dictionary. Employing the algorithmic information theory, a universal normalized metric is used for comparison of Kolmogorov complexity based representations of sparse models. Finally, most similar objects are grouped together. Experimental results show efficiency and accuracy of the proposed method in comparison to a recently proposed method. | 2013 | Conference | Mixed Reality in Surgery |