ARAS Public Webinars

ARAS aims to promote knowledge among a broad and interested audience. To this end, a monthly public webinar is organized, in which the research findings of different research teams are presented as a general overview accessible to the public.
Poster 
Presentation File
Abstract:

Intraocular surgery is a hot topic of research among researchers exploring novel areas in which haptic systems and assistive technologies would be beneficial. Since the human eye is a highly delicate organ with minuscule anatomical structures, ocular surgeries must be performed with greater precision and finer manipulation capability than other surgeries. In fact, any minute mis-manipulation by the surgeon, which might be negligible in most other surgical operations, may lead to disastrous complications and even blindness for the patient in ocular surgery. This fact underlines the importance of assistive technologies in eye surgical procedures. Assistive technologies aim to give the surgeon enhanced manipulation capability during the operation, or to help novice surgeons acquire the required skills before performing actual operations in the operating room. This presentation reviews recent breakthroughs along with new areas in which haptic systems and assistive technologies might provide a viable solution to current challenges. Furthermore, the ARAS haptic system developed for eye surgery training will be introduced as a novel system used in intraocular surgeries.

Date: Monday, Nov. 2, 2020
(12 Aban 1399)

Webinar Videos

Time: 18:00-19:30 (GMT+3:30, Tehran local time)
or 9:30-11:00 (GMT-5:00, Canada Eastern Time)

Abstract:

Artificial intelligence has found a permanent place in cutting-edge research across various applications. In particular, deep learning approaches offer promising, optimized solutions for a variety of real-world problems. Given the enhanced computational capability of embedded systems such as NVIDIA Jetson boards to execute these algorithms with outstanding performance, deep learning approaches are progressively employed in autonomous robotics. As a major component of any autonomous robot, the camera plays a significant role in extracting rich information about the surrounding environment. Object detection and tracking is a necessary task for an autonomous robot to maneuver suitably in an unstructured environment, and these methods are also very promising in other applications such as monitoring and evaluating a process. Regarding the tracking task, depth estimation is of high importance for localizing a 3D object in the environment. It is usually critical to have a depth map of the robot's frontal view, which may be considered an even more pressing requirement for autonomous vehicles. In this presentation, a review of the work of the ARAS autonomous robotics group on the development, implementation, and optimization of deep learning approaches, especially in image processing, is presented. We focus on applications for autonomous robots and vehicles and introduce some of our recent industrial projects.
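As a minimal illustration of the depth-estimation step mentioned above, the classical stereo relation Z = f·B/d recovers depth from disparity. The focal length and baseline values below are hypothetical, chosen only for illustration; a learned monocular or stereo network would replace this computation in a deep-learning pipeline.

```python
# Minimal sketch: recovering depth values from a stereo disparity map.
# The focal length (pixels) and camera baseline (metres) are hypothetical;
# real pipelines obtain them from calibration of the stereo rig.

def depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12):
    """Depth Z = f * B / d for each positive disparity d (in pixels)."""
    return [
        focal_px * baseline_m / d if d > 0 else float("inf")
        for d in disparity
    ]

if __name__ == "__main__":
    # A toy one-row "disparity map": nearer objects have larger disparity.
    disparities = [42.0, 21.0, 10.5]
    print(depth_from_disparity(disparities))  # nearer -> smaller depth
```

Zero disparity maps to infinite depth, which is why a dense, reliable disparity (or learned depth) map of the robot's frontal view is so valuable for navigation.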

Date: Monday, Sept. 28, 2020
(7 Mehr 1399)

Webinar Videos

Time: 18:00-19:30 (GMT+4:30, Tehran local time)
or 9:30-11:00 (GMT-4:00, Canada Eastern Time)

Part1

Part2

Part3

Cable and parallel robotics have been gaining attention among researchers due to their unique characteristics and applications. Simple structure, high payload capacity, and agile movement are the main characteristics that distinguish cable robots from other types of manipulators in applications such as imaging, cranes, and agriculture. The ARAS Parallel and Cable Robotics (PACR) group focuses on the development of such novel manipulators and their possible applications. Interdisciplinary research fields such as dynamic and kinematic analysis using classical and modern approaches, development of easily deployable robots through robust controllers, implementation of novel self-calibration algorithms, and establishment of modern multi-sensor perception systems are among the active lines of research in this group. The theoretical results of this active research group are also directly incorporated into commercial products through the spin-off and startup companies that originated from the team.
The Kamalolmol® robot is a representative of such products: a rapidly deployable edutainment cable-driven robot for calligraphy and painting (chiaroscuro) applications. Additionally, PACR exploits the simplicity of cable robots, combined with SLAM and perception algorithms, to create commercial inspection and imaging robots for various applications. In this webinar, the underlying concepts of such robots and the current state-of-the-art developments of the group will be presented.
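The simplicity mentioned above shows up clearly in the inverse kinematics: for a cable robot, each cable length is just the distance from its fixed frame anchor to the end-effector. Below is a minimal sketch for a planar point-mass cable robot; the anchor coordinates are illustrative only and are not the geometry of any ARAS or Kamalolmol® robot.

```python
import math

# Minimal sketch: inverse kinematics of a planar cable-driven robot with a
# point-mass end-effector. Each cable length equals the distance from its
# fixed frame anchor to the end-effector. Anchor coordinates are illustrative.

ANCHORS = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]  # frame corners (m)

def cable_lengths(x, y):
    """Return the four cable lengths for end-effector position (x, y)."""
    return [math.hypot(ax - x, ay - y) for ax, ay in ANCHORS]

if __name__ == "__main__":
    # At the centre of the 4 m x 3 m frame, all four cables are equally long.
    print(cable_lengths(2.0, 1.5))
```

For a rigid-body platform the same idea extends to each cable attachment point after applying the platform pose; the hard parts, which the abstract lists as research topics, are the dynamics, cable tension distribution, calibration, and control.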
Date: Monday, August 31, 2020
(10 Shahrivar 1399)

Webinar Videos

Webinar Teaser

Time: 18:00-19:30 (GMT+4:30, Tehran local time)
or 9:30-11:00 (GMT-4:00, Canada Eastern Time)

Part1

Part2

Part3

Virtual reality (VR) and augmented reality (AR) are attracting growing interest as training techniques in the medical field, unlocking significant benefits such as safety, repeatability, and efficiency. Furthermore, VR/AR-based simulators equipped with a haptic device can be used in surgical training to improve skills and reduce training time. With haptics as part of the training experience, a 30% increase in the speed of skill acquisition and up to a 95% increase in accuracy have been observed, and six out of nine studies showed that tactile feedback significantly improved surgical skill training.

Eye surgery, one of the most complex surgical procedures, is the focus of VR/AR-based training in the ARAS group. The ARASH:ASiST haptic system is integrated into the eye surgery training system in conjunction with a physical simulation engine and the Unity game engine, which visualizes the simulation results in an Oculus VR headset. The hand motions of an expert surgeon are captured by the haptic system, and the recorded motion data are later used to train the hand motions of surgery students through force feedback. In the developed eye surgery training system, two types of eye surgery are simulated, namely cataract and vitrectomy. In each type, the haptic system simulates the motion of the surgical tool. The interaction of the virtual surgical tool with the 3D-modeled eye is computed through the SOFA framework, and the simulation results are transferred to the Unity game engine for visualization in the Oculus VR headset.
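Tool-tissue force feedback of the kind described above is commonly computed with a penalty (virtual-spring) model: when the virtual tool tip penetrates the tissue surface, a restoring force proportional to the penetration depth is rendered on the haptic device. The sketch below uses a spherical eye model with illustrative stiffness and geometry; these are assumptions for the example, not parameters of ARASH:ASiST or the SOFA framework.

```python
import math

# Minimal sketch of penalty-based haptic force feedback: when the virtual
# tool tip penetrates a spherical "eye" surface, apply a spring force
# F = k * penetration along the outward surface normal.
# Stiffness and geometry are illustrative, not ARASH:ASiST parameters.

EYE_CENTER = (0.0, 0.0, 0.0)
EYE_RADIUS = 0.012   # metres, roughly the radius of a human eye
STIFFNESS = 500.0    # N/m, illustrative virtual-spring stiffness

def feedback_force(tool_tip):
    """Return the (fx, fy, fz) spring force for a tool-tip position."""
    dx, dy, dz = (t - c for t, c in zip(tool_tip, EYE_CENTER))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    penetration = EYE_RADIUS - dist  # > 0 means the tip is inside the eye
    if penetration <= 0 or dist == 0:
        return (0.0, 0.0, 0.0)       # no contact: no force rendered
    scale = STIFFNESS * penetration / dist  # k * depth, along unit normal
    return (dx * scale, dy * scale, dz * scale)

if __name__ == "__main__":
    print(feedback_force((0.0, 0.0, 0.015)))  # tip outside: zero force
    print(feedback_force((0.0, 0.0, 0.010)))  # tip 2 mm inside: outward push
```

In a full simulator, a physics engine such as SOFA computes this contact response against a deformable eye model, and the resulting force is sent to the haptic device at a high update rate so the trainee feels the contact.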

Speakers:

| Webinar Details | Presentation File | Group Website |

Date: Monday, August 3, 2020
(13 Mordad 1399)

Webinar Videos

Webinar Teaser

Time: 18:00-19:30 (GMT+4:30, Tehran local time)
or 9:30-11:00 (GMT-4:00, Canada Eastern Time)

Part1

Part2

Part3
