Mixed Reality Group Research

The Mixed Reality research group aims to build a simulator of vitrectomy and cataract surgery using virtual reality, providing eye surgery residents with an environment that closely matches real surgery. The research objective of this group is pursued in joint collaboration with the Surgical Robotics (SR) research group of ARAS: we aim to integrate the mixed reality simulation tool developed in this group with ARASH:ASiST, the system developed in the SR group for vitrectomy training. To implement an online mixed reality platform for the eye surgery training system, several core projects are progressing simultaneously, each advancing through its own phases. Some videos are included below to show the workflow:

Virtual Reality in Eye Surgery II

Virtual reality (VR) and augmented reality (AR) are attracting growing interest as training techniques in medicine, offering significant benefits such as safety, repeatability, and efficiency. Furthermore, VR/AR-based simulators equipped with a haptic device can be used in surgical training to improve skills and reduce training time. With haptics as part of the training experience, a 30% increase in the speed of skill acquisition and up to a 95% increase in accuracy have been observed, and six out of nine studies showed that tactile feedback significantly improved surgical skill training. Eye surgery is targeted for VR/AR-based training in the ARAS group because it is one of the most complex surgical procedures. The ARASH:ASiST haptic system is integrated into the eye surgery training system together with a physics simulation engine, and Unity is used to visualize the simulation results in an Oculus VR headset. The hand motions of an expert surgeon are captured by the haptic system, and this motion data is later used to train the hand motions of surgical trainees through force feedback. In the developed eye surgery training system, two types of eye surgery are simulated: cataract and vitrectomy. In each type, the haptic system drives the motion of the surgical tool, the interaction of the virtual tool with the 3D eye model is computed in the SOFA framework, and the simulation results are transferred to the Unity game engine for visualization in the Oculus VR headset.
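
To make this pipeline more concrete, a minimal SofaPython3 scene sketch is shown below: a deformable eye phantom modeled with tetrahedral FEM plus a rigid tool node that the haptic device can later drive. The mesh file, material parameters, and node names are illustrative placeholders, not the actual assets of the training system.

```python
# Minimal SofaPython3 scene sketch: a deformable eye phantom plus a rigid tool.
# Mesh path, stiffness values, and object names are illustrative placeholders.
# Depending on the SOFA version, RequiredPlugin entries may also be needed.
import Sofa


def createScene(root):
    root.gravity = [0.0, -9.81, 0.0]
    root.dt = 0.01

    # Deformable eye tissue simulated with a tetrahedral FEM model
    eye = root.addChild('Eye')
    eye.addObject('EulerImplicitSolver', rayleighStiffness=0.1, rayleighMass=0.1)
    eye.addObject('CGLinearSolver', iterations=25, tolerance=1e-9, threshold=1e-9)
    eye.addObject('MeshVTKLoader', name='loader', filename='eye_volume.vtk')  # placeholder mesh
    eye.addObject('TetrahedronSetTopologyContainer', src='@loader')
    eye.addObject('MechanicalObject', name='dofs', src='@loader')
    eye.addObject('UniformMass', totalMass=0.01)
    eye.addObject('TetrahedronFEMForceField', youngModulus=3000, poissonRatio=0.45)
    eye.addObject('FixedConstraint', indices=[0, 1, 2])  # anchor a few boundary nodes

    # Rigid surgical tool whose pose is later driven by the haptic device
    tool = root.addChild('Tool')
    tool.addObject('MechanicalObject', name='toolDofs', template='Rigid3',
                   position=[0, 0.05, 0, 0, 0, 0, 1])
    return root
```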

Simulation of the ARASH:ASiST Surgical Robot

  • An online simulation of the ARASH:ASiST surgical robot has been implemented in Unity to bring the surgeon’s needle movements into the mixed reality system in real time, specialized for vitrectomy surgery; a streaming sketch is given after this item.
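
A hedged sketch of the real-time link is shown below: a plain Python loop that forwards needle-tip poses to Unity as JSON over UDP at a fixed rate. The port number, message fields, and the `read_needle_pose()` source are hypothetical placeholders; the actual ARASH:ASiST interface is not reproduced here.

```python
# Illustrative real-time pose forwarding loop (not the actual ARASH:ASiST interface).
# The UDP port, message layout, and read_needle_pose() are hypothetical placeholders.
import json
import socket
import time

UNITY_ADDR = ('127.0.0.1', 9050)   # assumed address of the Unity listener
RATE_HZ = 60.0                     # assumed update rate of the mixed reality scene


def read_needle_pose():
    """Placeholder for reading the needle tip pose from the robot controller."""
    return {'position': [0.0, 0.0, 0.0], 'quaternion': [0.0, 0.0, 0.0, 1.0]}


def stream_poses():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / RATE_HZ
    next_tick = time.monotonic()
    while True:
        pose = read_needle_pose()
        pose['timestamp'] = time.time()
        sock.sendto(json.dumps(pose).encode('utf-8'), UNITY_ADDR)
        next_tick += period
        time.sleep(max(0.0, next_tick - time.monotonic()))


if __name__ == '__main__':
    stream_poses()
```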

Videos

Geomagic Haptic Device in the SOFA Framework

  • An online simulation of the Geomagic haptic device has been added in the SOFA framework to cover the cataract surgery workspace (see the sketch after this list)
  • Online data from the Geomagic haptic device is applied to a simulated object in the SOFA framework
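
The sketch below shows, in hedged form, how a Geomagic device can be attached to a SofaPython3 scene through SOFA's Geomagic plugin; the driver parameters and data links follow the plugin's example scenes, but exact names may vary between plugin versions and require the vendor haptic drivers to be installed.

```python
# Sketch of attaching a Geomagic haptic device in a SofaPython3 scene.
# Requires SOFA's Geomagic plugin and the vendor haptic drivers; parameter
# names and values below are illustrative and may differ between versions.
import Sofa


def createScene(root):
    root.dt = 0.01
    root.addObject('RequiredPlugin', name='Geomagic')

    # Driver component exposing the stylus pose of the haptic device
    root.addObject('GeomagicDriver', name='GeomagicDevice',
                   deviceName='Default Device', scale=1.0, drawDeviceFrame=True)

    # Rigid proxy that follows the device pose; in the full scene this proxy
    # is coupled to the surgical tool interacting with the eye model.
    omni = root.addChild('Omni')
    omni.addObject('MechanicalObject', template='Rigid3', name='DOFs',
                   position='@GeomagicDevice.positionDevice')
    return root
```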

Videos

Vitrectomy Surgery in the SOFA Framework

Different stages of vitrectomy surgery have been initially implemented in the SOFA framework, with attention to the physical accuracy essential for a vitrectomy surgery simulation.

  • The stages consist of tissue suction by the tool, grabbing and pulling the vitreous tissue out of the eye, and rupture and removal of tissue by the surgical instruments.
  • Tissue suction in the SOFA framework

  • Catching and pulling eye tissue in the SOFA framework (a simplified grabbing sketch follows this list)
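
A simplified, framework-agnostic sketch of the grabbing step referenced above is given below: when the tool closes, the tissue nodes nearest to the tool tip are attached by virtual springs and pulled toward it. This is a conceptual NumPy illustration of the coupling idea, not the SOFA constraint setup used in the actual simulation; radius and stiffness values are arbitrary.

```python
# Conceptual sketch of grab-and-pull coupling between a tool tip and tissue nodes.
# Plain NumPy illustration, not the SOFA constraint configuration used in the
# actual vitrectomy simulation; numbers are arbitrary.
import numpy as np

GRAB_RADIUS = 0.002    # metres, illustrative grab radius around the tool tip
SPRING_K = 50.0        # N/m, illustrative spring stiffness


def select_grabbed_nodes(tissue_positions, tool_tip):
    """Return indices of tissue nodes within the grab radius of the tool tip."""
    dists = np.linalg.norm(tissue_positions - tool_tip, axis=1)
    return np.where(dists < GRAB_RADIUS)[0]


def spring_forces(tissue_positions, grabbed, tool_tip):
    """Spring forces pulling the grabbed nodes toward the tool tip."""
    forces = np.zeros_like(tissue_positions)
    forces[grabbed] = SPRING_K * (tool_tip - tissue_positions[grabbed])
    return forces


if __name__ == '__main__':
    nodes = np.random.rand(100, 3) * 0.01          # toy tissue node positions
    tip = np.array([0.005, 0.005, 0.005])          # toy tool tip position
    grabbed = select_grabbed_nodes(nodes, tip)
    f = spring_forces(nodes, grabbed, tip)
    print(f'{len(grabbed)} nodes grabbed, peak force {np.abs(f).max():.4f} N')
```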

Videos

Software and Hardware

  • SOFA Framework

    Simulation module

  • Unity and SOFA

    Graphics connection and simulation

  • Unity

    Graphics connection and scene construction

  • Haptic device

    ARAS Haptics: A System for Eye Surgery Training (ARASH:ASiST)

  • Model building

    Blender and Maya

Selected Related Publications

Reconstruction of B-spline curves and surfaces by adaptive group testing
Alireza Norouzzadeh Ravari, Hamid D. Taghirad
Computer-Aided Design
Abstract:

Point clouds as measurements of 3D sensors have many applications in various fields such as object modeling, environment mapping and surface representation. Storage and processing of raw point clouds are time consuming and computationally expensive. In addition, their high dimensionality must be considered, which leads to the well-known curse of dimensionality. Conventional methods apply either reduction or approximation to the captured point clouds in order to make data processing tractable. B-spline curves and surfaces can effectively represent 2D data points and 3D point clouds for most applications. Since processing all available data for B-spline curve or surface fitting is not efficient, an algorithm based on Group Testing theory is developed that finds salient points sequentially. The B-spline curve or surface models are updated by adding a new salient point to the fitting process iteratively until the Akaike Information Criterion (AIC) is met. It has also been proved that the proposed method finds a unique solution, consistent with what is defined in group testing theory. The experimental results indicate the applicability and performance improvement of the proposed method relative to some state-of-the-art B-spline curve and surface fitting methods.

2016 | Journal | PDF | Mixed Reality in Surgery
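
As a rough illustration of the incremental, AIC-guided fitting described in the abstract above, the sketch below grows the fitting subset by the point with the largest residual until the AIC stops improving. It uses SciPy's standard B-spline routines; the paper's group-testing based salient point selection is not reproduced.

```python
# Simplified sketch of incremental B-spline curve fitting with an AIC stopping rule.
# The salient-point selection here is just "largest residual"; the paper's
# group-testing based selection is not reproduced.
import numpy as np
from scipy.interpolate import splev, splrep

# Toy noisy 2D data: y = f(x) + noise
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x) + 0.05 * np.random.randn(x.size)

# Start from a small set of points spread across the domain
subset = list(np.linspace(0, x.size - 1, 8, dtype=int))
best_aic = np.inf

while True:
    idx = sorted(subset)
    tck = splrep(x[idx], y[idx], k=3)            # cubic B-spline through the subset
    residuals = y - splev(x, tck)
    rss = float(np.sum(residuals ** 2))
    n_coeffs = len(tck[0]) - tck[2] - 1          # number of B-spline coefficients
    aic = x.size * np.log(rss / x.size) + 2 * n_coeffs
    if aic >= best_aic:                          # stop when AIC no longer improves
        break
    best_aic = aic
    salient = int(np.argmax(np.abs(residuals)))  # most poorly fitted point
    if salient in subset:
        break
    subset.append(salient)

print(f'kept {len(subset)} points, AIC = {best_aic:.1f}')
```
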
NURBS-based Representation of Urban Environments for Mobile Robots
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2016 4th International Conference on Robotics and Mechatronics (ICROM)
Abstract:

Representation of the surrounding environment is a vital task for a mobile robot. Many applications for mobile robots in urban environments may be considered, such as self-driving cars, delivery drones or assistive robots. In contrast to conventional methods, in this paper a Non-Uniform Rational B-Spline (NURBS) based technique is presented for 3D mapping of the surrounding environment. While in the state-of-the-art techniques the robot's environment is expressed in a discrete space, the proposed method is mainly developed for representation of the environment in a continuous space. Exploiting information theory, the generated representation has much lower complexity and more compression capability in relation to some state-of-the-art techniques. In addition to representation in a lower dimensional space, the NURBS based representation is invariant against 3D geometric transformations. Furthermore, the NURBS based representation can be employed for obstacle avoidance and navigation. The applicability of the proposed algorithm is investigated in some urban environments through some publicly available data sets. It has been shown by some experiments that the proposed method has better visual representation and much better data compression compared to some state-of-the-art methods.

2016 | Conference | PDF | Mixed Reality in Surgery
Loop Closure Detection by Compressed Sensing for Exploration of Mobile Robots in Outdoor Environments
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM)
Abstract:

In the problem of simultaneous localization and mapping (SLAM) for a mobile robot, it is required to detect previously visited locations so that the estimation error can be reduced. Sensor observations are compared by a similarity metric to detect loops. In long-term navigation or exploration, the number of observations increases and so does the complexity of loop closure detection. Several techniques have been proposed in order to reduce the complexity of loop closure detection. Few algorithms have considered the loop closure detection from a subset of sensor observations. In this paper, the compressed sensing approach is exploited to detect loops from few sensor measurements. In basic compressed sensing it is assumed that a signal has a sparse representation in a basis, which means that only a few elements of the signal are non-zero. Based on the compressed sensing approach, a sparse signal can be recovered from few linear noisy projections by l1 minimization. The difference matrix which is widely used for loop detection has a sparse structure, where similar observations are shown by zero distance and different locations are indicated by ones. Based on the multiple measurement vector technique, which is an extension of basic compressed sensing, the loop closure detection is performed by comparison of few sensor observations. The applicability of the proposed algorithm is investigated in some outdoor environments through some publicly available data sets. It has been shown by some experiments that the proposed method can detect loops effectively.

2015 | Conference | PDF | Mixed Reality in Surgery
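
As a rough illustration of the recovery step described above, the sketch below reconstructs a sparse similarity vector from a few random projections by l1-regularized regression. It is a single-measurement-vector simplification of the paper's multiple measurement vector formulation, and the data here is synthetic; the construction of the observation difference matrix is not reproduced.

```python
# Toy illustration of sparse recovery from few linear projections (single
# measurement vector, l1 regularization). Synthetic data; the paper's MMV
# formulation and observation difference matrix are not reproduced.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_places = 200                     # number of candidate locations
x_true = np.zeros(n_places)
x_true[[17, 42, 131]] = 1.0        # a few "revisited place" indicators

m = 60                             # number of random projections (m << n)
A = rng.standard_normal((m, n_places)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)

# l1-regularized least squares recovers the sparse indicator vector
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
lasso.fit(A, y)
recovered = np.where(np.abs(lasso.coef_) > 0.2)[0]
print('detected loop-closure candidates:', recovered)
```
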
3D Scene and Object Classification Based on Information Complexity of Depth Data
A. Norouzzadeh, H. D. Taghirad
Mathematics
Abstract:

In this paper the problem of 3D scene and object classification from depth data is addressed. In contrast to high-dimensional feature-based representation, the depth data is described in a low dimensional space. In order to remedy the curse of dimensionality problem, the depth data is described by a sparse model over a learned dictionary. Exploiting the algorithmic information theory, a new definition for the Kolmogorov complexity is presented based on the Earth Mover’s Distance (EMD). Finally the classification of 3D scenes and objects is accomplished by means of a normalized complexity distance, where its applicability in practice is proved by some experiments on publicly available datasets. Also, the experimental results are compared to some state-of-the-art 3D object classification methods. Furthermore, it has been shown that the proposed method outperforms FAB-Map 2.0 in detecting loop closures, in the sense of the precision and recall.

2015 | Journal | PDF | Mixed Reality in Surgery
Transformation Invariant 3D Object Recognition Based On Information Complexity
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM)
Abstract:

The 3D representation of objects and scenes as a point cloud or range image has been made simple by means of sensors such as the Microsoft Kinect, stereo cameras or laser scanners. Various tasks, such as recognition, modeling and classification, cannot be performed on raw measurements because of the curse of high dimensionality and computational and algorithmic complexity. Non-Uniform Rational Basis Splines (NURBS) are a widely used representation technique for 3D objects in various robotics and Computer Aided Design (CAD) applications. In this paper, a similarity measurement from information theory is employed in order to recognize an object sample from a set of objects. From a NURBS model fitted to the observed point cloud, a complexity based representation is derived which is transformation invariant in the sense of Kolmogorov complexity. Experimental results on a set of 3D objects grabbed by a Kinect sensor indicate the applicability of the proposed method for object recognition tasks. Furthermore, the results of the proposed method are compared to those of some state-of-the-art algorithms.

2014 | Conference | PDF | Mixed Reality in Surgery
Loop Closure Detection By Algorithmic Information Theory: Implemented On Range And Camera Image Data
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
IEEE Transactions on Cybernetics
Abstract:

In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image as mobile robot observations. In contrast to high-dimensional feature-based representations, in this model, the dimension of the sensor measurements' representations is reduced. Considering the loop closure detection as a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than original sensor observations. Exploiting the algorithmic information theory, the representation is developed such that it has the geometrically transformation invariant property in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity based representations of image models. Finally, a distinctive property of normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods.

2014 | Journal | PDF | Mixed Reality in Surgery
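
A small, generic sketch of the normalized compression distance used in the paper above for comparing observation models is given below, using zlib as an off-the-shelf compressor; the paper's sparse image models and parametric dictionary are not reproduced, and the byte strings here are arbitrary stand-ins.

```python
# Generic normalized compression distance (NCD) between two byte sequences,
# using zlib as the compressor. The sparse image models compared in the paper
# are replaced here by arbitrary byte strings.
import zlib


def clen(data: bytes) -> int:
    """Compressed length of a byte sequence."""
    return len(zlib.compress(data, 9))


def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: small for similar inputs, near 1 for unrelated ones."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)


if __name__ == '__main__':
    a = b'observation at place A' * 50
    b = b'observation at place A' * 48 + b'slightly changed view'
    d = b'completely different observation' * 50
    print('similar  :', round(ncd(a, b), 3))   # close to 0
    print('different:', round(ncd(a, d), 3))   # closer to 1
```
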
Unsupervised 3D Object Classification from Range Image Data by Algorithmic Information Theory
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM)
Abstract:

The problem of unsupervised classification of 3D objects from depth information is investigated in this paper. The range images are represented efficiently as sensor observations. Considering the high-dimensionality of 3D object classification, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In order to remedy this problem, a low-dimensional representation is defined here. The sparse model of every range image is constructed from a parametric dictionary. Employing the algorithmic information theory, a universal normalized metric is used for comparison of Kolmogorov complexity based representations of sparse models. Finally, most similar objects are grouped together. Experimental results show efficiency and accuracy of the proposed method in comparison to a recently proposed method.

2013 | Conference | PDF | Mixed Reality in Surgery