Mixed Reality (MR) in Surgery

Virtual Reality (VR) in medicine is a three-dimensional teaching tool used across healthcare for education and training. The term commonly refers to simulated healthcare environments in which learners experience visual stimuli delivered via computer graphics, along with other sensory cues. This technology allows learners to acquire the knowledge and understanding needed to perform a range of tasks and procedures involving the human body without ever practicing on a live patient. Central to the technology is the immersive capacity of VR: the simulated environment surrounds the learner's perceptual field, so the user feels psychologically present in the digital world rather than in their physical surroundings. Used to teach diagnosis, treatment, rehabilitation, surgery, counseling techniques, and more, VR in medicine is helping to train the next generation of healthcare professionals. This medical simulation technology has been shown to have a number of benefits, chief among them that learners can practice their skills without the fear that an error will have potentially life-threatening consequences. VR tools still provide the hands-on experience required to become familiar and comfortable with performing procedures, but in a safe and controlled setting: as learners make mistakes, they can be corrected thoroughly, in real time, and without risk. Because VR modules still require interaction, skills can become second nature before they are applied in real-world scenarios.

The ARAS Mixed Reality research group aims to build a virtual-reality simulator of vitrectomy and cataract surgery that provides eye-surgery residents with an environment closely matching real surgery. This objective is pursued in joint collaboration with the Surgical Robotics (SR) research group of ARAS: we aim to integrate the mixed reality simulation tool developed in this group into ARASH:ASiST, the product developed by the SR group for vitrectomy training.

Core Projects

Publications

Title | Abstract | Year | Type | PDF | Research Group
Reconstruction of B-spline curves and surfaces by adaptive group testing
Alireza Norouzzadeh Ravari, Hamid D. Taghirad
Computer-Aided Design
Abstract:

Point clouds, as measurements of 3D sensors, have many applications in various fields such as object modeling, environment mapping, and surface representation. Storage and processing of raw point clouds is time-consuming and computationally expensive. In addition, their high dimensionality must be considered, which leads to the well-known curse of dimensionality. Conventional methods apply either reduction or approximation to the captured point clouds in order to make the data processing tractable. B-spline curves and surfaces can effectively represent 2D data points and 3D point clouds for most applications. Since processing all available data for B-spline curve or surface fitting is not efficient, an algorithm based on group testing theory is developed that finds salient points sequentially. The B-spline curve or surface model is updated by adding a new salient point to the fitting process iteratively until the Akaike Information Criterion (AIC) is met. It is also proved that the proposed method finds a unique solution, in the sense defined in group testing theory. Experimental results demonstrate the applicability of the proposed method and its performance improvement over several state-of-the-art B-spline curve and surface fitting methods.

2016 | Journal | PDF | Mixed Reality in Surgery
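The fitting loop described in the abstract above can be pictured with a short Python sketch: points are added to a B-spline fit one at a time until the Akaike Information Criterion stops improving. This is an illustration only, assuming SciPy's spline routines and using the largest fitting residual as a simple stand-in for the group-testing saliency search; it is not the authors' implementation.

```python
# Minimal sketch: grow the set of "salient" points greedily and stop when AIC
# stops improving. The residual-based selection is an illustrative stand-in
# for the adaptive group-testing procedure of the paper.
import numpy as np
from scipy.interpolate import splev, splrep

def aic(residuals, n_params):
    """Akaike Information Criterion for a least-squares fit."""
    n = len(residuals)
    rss = float(np.sum(residuals ** 2)) + 1e-12
    return n * np.log(rss / n) + 2 * n_params

def fit_curve_incrementally(x, y, k=3, n_initial=10):
    order = np.argsort(x)
    x, y = x[order], y[order]
    selected = set(np.linspace(0, len(x) - 1, n_initial, dtype=int).tolist())
    best_tck, best_score = None, np.inf
    while len(selected) < len(x):
        sel = sorted(selected)
        tck = splrep(x[sel], y[sel], k=k, s=0)           # spline through selected points
        residuals = y - splev(x, tck)
        score = aic(residuals, n_params=len(tck[1]))     # coefficients count as parameters
        if score >= best_score:                          # AIC no longer improves: stop
            break
        best_tck, best_score = tck, score
        selected.add(int(np.argmax(np.abs(residuals))))  # add the most "salient" point
    return best_tck, best_score

# Example on a noisy sine curve
x = np.linspace(0, 2 * np.pi, 500)
y = np.sin(x) + 0.05 * np.random.default_rng(0).standard_normal(500)
tck, score = fit_curve_incrementally(x, y)
```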
NURBS-based Representation of Urban Environments for Mobile Robots
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2016 4th International Conference on Robotics and Mechatronics (ICROM)
Abstract:

Representation of the surrounding environment is a vital task for a mobile robot. Many applications for mobile robots in urban environments may be considered, such as self-driving cars, delivery drones, or assistive robots. In contrast to conventional methods, in this paper a Non-Uniform Rational B-Spline (NURBS) based technique is presented for 3D mapping of the surrounding environment. While state-of-the-art techniques express the robot's environment in a discrete space, the proposed method represents the environment in a continuous space. Exploiting information theory, the generated representation has much lower complexity and greater compression capability than some state-of-the-art techniques. In addition to representation in a lower-dimensional space, the NURBS-based representation is invariant under 3D geometric transformations. Furthermore, it can be employed for obstacle avoidance and navigation. The applicability of the proposed algorithm is investigated in urban environments through publicly available data sets. Experiments show that the proposed method yields better visual representation and much better data compression than some state-of-the-art methods.

2016 | Conference | PDF | Mixed Reality in Surgery
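As a concrete illustration of the continuous representation referred to above, the Python sketch below evaluates a point on a NURBS curve directly from its control points, weights, and knot vector; a map stored this way keeps a handful of control points and weights instead of the raw point cloud. The example curve and all of its parameters are arbitrary choices, not values from the paper.

```python
# Illustrative NURBS evaluation: Cox-de Boor recursion for the B-spline basis,
# then a rational (weighted) combination of control points.
import numpy as np

def bspline_basis(i, p, u, knots):
    """Value of the i-th B-spline basis function of degree p at parameter u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl_pts, weights, knots, degree):
    """Point on the NURBS curve at parameter u."""
    numerator = np.zeros(ctrl_pts.shape[1])
    denominator = 0.0
    for i, (point, w) in enumerate(zip(ctrl_pts, weights)):
        basis = bspline_basis(i, degree, u, knots)
        numerator += basis * w * point
        denominator += basis * w
    return numerator / denominator if denominator > 0 else numerator

# Example: a quadratic NURBS arc in the plane (clamped knot vector).
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
weights = np.array([1.0, 0.7, 1.0])
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
print(nurbs_point(0.5, ctrl, weights, knots, degree=2))
```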
Loop Closure Detection by Compressed Sensing for Exploration of Mobile Robots in Outdoor Environments
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM)
Abstract:

In simultaneous localization and mapping (SLAM) for a mobile robot, previously visited locations must be detected so that the estimation error can be reduced. Sensor observations are compared by a similarity metric to detect loops. In long-term navigation or exploration, the number of observations grows, and so does the complexity of loop closure detection. Several techniques have been proposed to reduce this complexity, but few algorithms consider loop closure detection from a subset of sensor observations. In this paper, the compressed sensing approach is exploited to detect loops from few sensor measurements. In basic compressed sensing, it is assumed that a signal has a sparse representation in a basis, meaning that only a few elements of the signal are non-zero. Based on the compressed sensing approach, a sparse signal can be recovered from a few linear noisy projections by l1 minimization. The difference matrix widely used for loop detection has a sparse structure, in which similar observations are indicated by zero distance and different locations by ones. Based on the multiple measurement vector technique, an extension of basic compressed sensing, loop closure detection is performed by comparing only a few sensor observations. The applicability of the proposed algorithm is investigated in outdoor environments through publicly available data sets. Experiments show that the proposed method detects loops effectively.

2015 | Conference | PDF | Mixed Reality in Surgery
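The recovery step mentioned above, reconstructing a sparse signal from a few noisy linear projections by l1 minimization, can be sketched in a few lines of Python. The solver below is ISTA (iterative soft-thresholding), and the sparse vector, measurement matrix, and all sizes are synthetic illustrative choices rather than the paper's setup.

```python
# Minimal compressed-sensing sketch: recover a sparse indicator vector from a
# few random noisy projections by solving an l1-regularized least-squares
# problem with ISTA.
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 40, 3                              # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = 1.0     # sparse "loop candidate" indicator
A = rng.standard_normal((m, n)) / np.sqrt(m)      # random measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)    # a few noisy linear projections
x_hat = ista(A, y)
print(np.flatnonzero(x_hat > 0.5), np.flatnonzero(x_true))   # recovered vs. true support
```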
3D Scene and Object Classification Based on Information Complexity of Depth Data
A. Norouzzadeh, H. D. Taghirad
Mathematics
Abstract:

In this paper the problem of 3D scene and object classification from depth data is addressed. In contrast to high-dimensional feature-based representations, the depth data is described in a low-dimensional space. In order to remedy the curse of dimensionality, the depth data is described by a sparse model over a learned dictionary. Exploiting algorithmic information theory, a new definition of Kolmogorov complexity is presented based on the Earth Mover's Distance (EMD). Finally, the classification of 3D scenes and objects is accomplished by means of a normalized complexity distance, whose practical applicability is demonstrated by experiments on publicly available datasets. The experimental results are also compared to state-of-the-art 3D object classification methods. Furthermore, it is shown that the proposed method outperforms FAB-Map 2.0 in detecting loop closures, in terms of precision and recall.

2015 | Journal | PDF | Mixed Reality in Surgery
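The first step described above, describing depth data by a sparse model over a learned dictionary, can be sketched with standard tools. The snippet below learns a small dictionary from 8x8 patches of a synthetic depth image and sparse-codes them with scikit-learn; the patch size, dictionary size, and data are illustrative assumptions, not values from the paper, and the EMD-based complexity measure is not shown here.

```python
# Illustrative sparse model: each 8x8 depth patch is summarized by a few
# coefficients over a learned dictionary instead of its raw 64 pixel values.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
depth = rng.random((120, 160))                        # stand-in for a depth image
patches = np.stack([depth[i:i + 8, j:j + 8].ravel()   # non-overlapping 8x8 patches
                    for i in range(0, 112, 8)
                    for j in range(0, 152, 8)])

coder = DictionaryLearning(n_components=32,           # dictionary with 32 atoms
                           transform_algorithm="omp",
                           transform_n_nonzero_coefs=5,
                           random_state=0)
codes = coder.fit(patches).transform(patches)         # sparse, low-dimensional codes
print(patches.shape, codes.shape, int(np.count_nonzero(codes, axis=1).mean()))
```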
Transformation Invariant 3D Object Recognition Based On Information Complexity
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM)
Abstract:

The 3D representation of objects and scenes as a point cloud or range image has been made simple by sensors such as the Microsoft Kinect, stereo cameras, and laser scanners. Various tasks, such as recognition, modeling, and classification, cannot be performed on raw measurements because of the curse of dimensionality and the computational and algorithmic complexity involved. Non-Uniform Rational Basis Splines (NURBS) are a widely used representation technique for 3D objects in various robotics and Computer Aided Design (CAD) applications. In this paper, a similarity measure from information theory is employed in order to recognize an object sample from a set of objects. From a NURBS model fitted to the observed point cloud, a complexity-based representation is derived that is transformation invariant in the sense of Kolmogorov complexity. Experimental results on a set of 3D objects captured by a Kinect sensor indicate the applicability of the proposed method for object recognition tasks. Furthermore, the results of the proposed method are compared to those of some state-of-the-art algorithms.

2014 | Conference | PDF | Mixed Reality in Surgery
Loop Closure Detection By Algorithmic Information Theory: Implemented On Range And Camera Image Data
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
IEEE Transactions on Cybernetics
Abstract:

In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image acquired as a mobile robot observation. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurement representations. Although loop closure detection is a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed so that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of the complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods.

2014 | Journal | PDF | Mixed Reality in Surgery
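The normalized compression distance mentioned in the abstract above has a standard compressor-based form. The sketch below uses zlib-compressed lengths as a stand-in for Kolmogorov complexity; the paper's actual complexity estimate is model-based and may differ, so this is only an illustration of the metric itself.

```python
# Normalized compression distance (NCD): similar observations compress well
# together, so their NCD is small; unrelated observations give a value near 1.
import os
import zlib

def complexity(data: bytes) -> int:
    """Approximate Kolmogorov complexity by compressed length."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = complexity(x), complexity(y), complexity(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"0123456789" * 400                  # an observation
b = b"0123456789" * 399 + b"0123456798"  # a near-duplicate of it
r = os.urandom(4000)                     # an unrelated, incompressible observation
print(round(ncd(a, b), 3), round(ncd(a, r), 3))   # small value vs. value near 1
```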
Unsupervised 3D Object Classification from Range Image Data by Algorithmic Information Theory
Alireza Norouzzadeh Ravari and Hamid D. Taghirad
2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM)
Abstract:

The problem of unsupervised classification of 3D objects from depth information is investigated in this paper. The range images, acquired as sensor observations, are represented efficiently. Despite the high dimensionality of 3D object classification, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In order to remedy this problem, a low-dimensional representation is defined here. The sparse model of every range image is constructed from a parametric dictionary. Employing algorithmic information theory, a universal normalized metric is used to compare the Kolmogorov-complexity-based representations of the sparse models. Finally, the most similar objects are grouped together. Experimental results show the efficiency and accuracy of the proposed method in comparison to a recently proposed method.

2013 | Conference | PDF | Mixed Reality in Surgery
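The final grouping step described above can be illustrated by hierarchical clustering over a pairwise normalized-distance matrix. The sketch below applies a compression-based distance, as in the NCD sketch earlier on this page, to toy byte strings; the clustering method and the toy data are illustrative assumptions, not the paper's pipeline.

```python
# Unsupervised grouping sketch: build a pairwise normalized compression
# distance matrix over toy observations, then cluster it hierarchically.
import os
import zlib
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def ncd(x: bytes, y: bytes) -> float:
    c = lambda d: len(zlib.compress(d, 9))      # compressed length as complexity proxy
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

# Toy "observations": two pairs of similar patterns plus one random blob.
obs = [b"ab" * 1000, b"ab" * 999 + b"cd",
       b"xyz" * 700, b"xyz" * 690 + b"q",
       os.urandom(2000)]

n = len(obs)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = ncd(obs[i], obs[j])

# Average-linkage clustering on the condensed distance matrix, cut into 3 groups.
labels = fcluster(linkage(squareform(D), method="average"), t=3, criterion="maxclust")
print(labels)        # similar observations should share a label
```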