ARAS Public Webinars

ARAS aims to promote knowledge to a broad and interested audience. For this reason, a monthly public webinar is organized in which the research findings of the different ARAS research teams are presented as a general overview accessible to the public.
#5 – ARAS Research on Swarm Robotics for Oil Spill Monitoring and Cleanup

Abstract:

Oil spills are a serious threat to the marine environment in an era of increasing environmental concern. Considerable efforts have been made to tackle this problem; among them, the concept of using swarm robotics for oil spill monitoring and cleanup is a promising solution. Thanks to recent progress in robotics, there are opportunities to use autonomous aerial drones for labor-intensive environmental monitoring purposes. Engaging multiple drones instead of one not only increases the robustness and scalability of the system in time-sensitive and hazardous events such as oil spills, but also increases the accuracy of measurements in spatiotemporal oil spill cases by sampling the environment at multiple points concurrently. Marine environmental monitoring tasks can strongly benefit from these advantages, as the monitoring areas are typically large and communication with a central control unit might not always be available. Despite these obvious advantages, more work needs to be done to understand the capabilities and limitations of autonomous systems and the resources required in the marine environment, and to ensure their acceptable use to the regulatory agencies. In this presentation, we first review a number of aerial drones developed at ARAS that are very promising for such applications. Then a review of swarm robotics concepts and potential research gaps is given. Finally, particular attention is paid to the application of oil spill monitoring and cleanup in the Persian Gulf. Since monitoring the expansion of an oil spill on the water surface is a spatiotemporal problem, we also address oil spill modeling by a Gaussian mixture model, which is based on NOAA's advanced oil spill model (GNOME). Furthermore, a cooperative control framework developed for a group of Unmanned Aerial Vehicles (UAVs) will be introduced as a novel strategy for oil spill monitoring in the Persian Gulf.
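To make the modeling idea above concrete, the following minimal Python sketch fits a Gaussian mixture model to a synthetic cloud of surface-oil particles of the kind a GNOME-style trajectory model outputs. The patch locations, spreads, and the two-component choice are illustrative assumptions, not the actual ARAS model.

```python
# Minimal sketch: approximating an oil spill's surface distribution with a
# Gaussian mixture model (GMM). The particle positions stand in for the
# Lagrangian-element output of a GNOME-style trajectory model; the synthetic
# data below is purely illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical spill: two drifting patches of surface oil (x, y offsets in km)
patch_a = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(500, 2))
patch_b = rng.normal(loc=[3.0, 1.5], scale=[0.8, 0.8], size=(300, 2))
particles = np.vstack([patch_a, patch_b])

# Fit a 2-component GMM to the particle cloud; each component tracks one patch
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(particles)

# The mixture density acts as a smooth surface-concentration estimate;
# evaluate it on a coarse grid that monitoring UAVs could sample and refine
xs, ys = np.meshgrid(np.linspace(-3, 6, 50), np.linspace(-3, 5, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(gmm.score_samples(grid)).reshape(xs.shape)

print("patch centers:", gmm.means_)     # estimated patch centroids
print("peak density :", density.max())  # proxy for the thickest oil coverage
```

In a full system, the component means and covariances would be re-estimated as new drone measurements arrive, giving the cooperative controller a compact map of the spill to track.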

Presentation File
Date: Monday, Nov. 30, 2020
(10 Azar 1399)

Webinar Videos

Time: 18:00-20:00 (+3:30 GMT Tehran local time)

or 9:30-11:30 (-5:00 GMT Canada Eastern Time Zone)

#4 – ARAS Research on Haptic Technology in Intraocular Surgeries

Abstract:

Intraocular surgery is a hot research topic, with researchers looking into novel areas in which haptic systems and assistive technologies would be beneficial. Since the human eye is a highly delicate organ with minuscule anatomical structures, ocular surgeries need to be performed with extra precision and higher manipulation capability than other surgeries. In fact, any minute mis-manipulation by the surgeon, which might be negligible in the majority of other surgical operations, might lead to disastrous complications and even blindness for the patient in ocular surgeries. This fact underlines the importance of assistive technologies in eye surgical procedures. These technologies aim either to give the surgeon extra manipulation capability during the operation or to help novice surgeons acquire the required skills before performing actual operations in the operating room. This presentation reviews recent breakthroughs along with new areas in which haptic systems and assistive technologies might provide a viable solution to current challenges. Furthermore, the ARAS haptic system developed for eye surgery training will be introduced as a novel system used in intraocular surgeries.

Date: Monday, Nov. 2, 2020
(12 Aban 1399)

Webinar Videos

Time: 18:00-19:30 (+3:30 GMT Tehran local time)
or 9:30-11:00 (-5:00 GMT Canada Eastern Time Zone)

Part1

Part2

Part3

#3 – ARAS Research on Artificial Intelligence and Deep Learning Methods in Autonomous Robotics

Abstract:

Artificial intelligence has found a permanent place in cutting-edge research across various applications. In particular, deep learning approaches are very promising, offering optimized solutions for a variety of real-world problems. Given the enhanced computational capability to execute these algorithms with outstanding performance on embedded systems such as NVIDIA Jetson boards, deep learning approaches are progressively employed in autonomous robotics. As a major component of any autonomous robot, a camera plays a significant role in extracting rich information about the surrounding environment. Object detection and tracking is a necessary task for an autonomous robot to maneuver suitably in an unstructured environment. Furthermore, these methods are very promising in other applications, such as monitoring and evaluating a process. Regarding the tracking task, depth estimation is of high importance for localizing a 3D object in the environment. It is often critical to have a depth map of the robot's frontal view, which may be considered a more comprehensive requirement for autonomous vehicles. In this presentation, a review of the ARAS autonomous robotics group's work on the development, implementation, and optimization of deep learning approaches, especially in image processing, is given. We focus on applications for autonomous robots and vehicles and introduce some of our recent industrial projects.
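As an illustration of the object detection task described above, the following minimal Python sketch runs a pretrained detector on a single camera frame. torchvision's Faster R-CNN, the placeholder image path, and the 0.5 score threshold are stand-in assumptions, not the networks or settings actually deployed by ARAS.

```python
# Minimal sketch: running a pretrained object detector of the kind discussed
# in the talk. torchvision's Faster R-CNN stands in for whatever network
# would actually be optimized for Jetson-class hardware.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

# "frame.jpg" is a placeholder for one frame from the robot's front camera
frame = convert_image_dtype(read_image("frame.jpg"), torch.float)

with torch.no_grad():
    # The model takes a list of CHW float tensors and returns, per image,
    # a dict of bounding boxes, class labels, and confidence scores
    detections = model([frame])[0]

keep = detections["scores"] > 0.5  # discard low-confidence detections
for box, label in zip(detections["boxes"][keep], detections["labels"][keep]):
    print(f"class {label.item()} at {box.tolist()}")
```

The per-frame boxes would then feed a tracker, and a separate depth network or stereo pipeline would supply the frontal depth map mentioned above.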

Date: Monday, Sept. 28, 2020
(7 Mehr 1399)

Webinar Videos

Time: 18:00-19:30 (+4:30 GMT Tehran local time)
or 9:30-11:00 (-4:00 GMT Canada Eastern Time Zone)

Part1

Part2

Part3

#2 – ARAS Research on Parallel and Cable Robotics: Products and Algorithms

Abstract:

Cable and parallel robots have been gaining attention among researchers due to their unique characteristics and applications. Simple structure, high payload capacity, and agile movements are the main characteristics that distinguish cable robots from other types of manipulators in many applications such as imaging, cranes, and agriculture. The ARAS Parallel and Cable Robotics (PACR) group is focused on the development of such novel manipulators and their possible applications. Interdisciplinary research topics such as dynamic and kinematic analysis using classic and modern approaches, development of easily deployable robots through robust controllers, implementation of novel self-calibration algorithms, and establishment of modern, multi-sensor perception systems are among the active lines of research in this group. The theoretical results of this active research group are also directly incorporated into commercial products through the spin-off and startup companies originating from the team.
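As a concrete taste of the kinematic analysis mentioned above, the following minimal Python sketch computes the standard inverse kinematics of a cable-driven robot: each commanded cable length is the distance from a fixed anchor to its platform attachment point. The anchor layout, offsets, and pose are invented for illustration and do not describe any particular ARAS robot; platform rotation is omitted for brevity.

```python
# Minimal sketch of cable-driven parallel robot inverse kinematics:
# given the platform position, each cable length is the distance from its
# fixed frame anchor to its platform attachment point.
import numpy as np

# Four anchors at the top corners of a hypothetical 4 x 4 x 3 m workspace
anchors = np.array([[0, 0, 3], [4, 0, 3], [4, 4, 3], [0, 4, 3]], dtype=float)

def cable_lengths(platform_pos, attach_offsets, anchors):
    """Cable lengths for a translating platform (rotation omitted)."""
    attach_points = platform_pos + attach_offsets  # world-frame attachment points
    return np.linalg.norm(anchors - attach_points, axis=1)

# Small square platform with 10 cm attachment offsets from its center
offsets = np.array([[-0.05, -0.05, 0], [0.05, -0.05, 0],
                    [0.05, 0.05, 0], [-0.05, 0.05, 0]])
lengths = cable_lengths(np.array([2.0, 2.0, 1.0]), offsets, anchors)
print(lengths)  # one commanded length per winch
```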

The Kamalolmol® robot is representative of such products: a rapidly deployable edutainment cable-driven robot for calligraphy and painting (chiaroscuro) applications. Additionally, PACR exploits the simplicity of cable robots combined with SLAM and perception algorithms to create commercial inspection and imaging robots for various applications. In this webinar, the underlying concepts of such robots and the current state-of-the-art developments of the group will be presented.

Date: Monday, August 31, 2020
(10 Shahrivar 1399)

Webinar Videos

Webinar Teaser

Time: 18:00-19:30 (+4:30 GMT Tehran local time)
or 9:30-11:00 (-4:00 GMT Canada Eastern Time Zone)

Part1

Part2

Part3

#1 – ARAS Eye Surgery Training System: A VR/AR Approach

Abstract:

Virtual reality (VR) and augmented reality (AR) are attracting growing interest as training techniques in the medical field, unlocking significant benefits such as safety, repeatability, and efficiency. Furthermore, VR/AR-based simulators equipped with a haptic device can be used in surgical training to improve skills and reduce training time. With haptics as part of the training experience, a 30% increase in the speed of skill acquisition and up to a 95% increase in accuracy have been observed, and six out of nine studies showed that tactile feedback significantly improved surgical skill training. Eye surgery is considered for VR/AR-based training in the ARAS group as it is one of the most complex surgical procedures. The ARASH:ASiST haptic system is integrated into the eye surgery training system in conjunction with a physics simulation engine and the Unity game engine, which visualizes the simulation results in an Oculus VR headset. The hand motions of an expert surgeon are captured by the haptic system, and the motion data are later used to train the hand motions of surgical trainees through force feedback. In the developed eye surgery training system, two types of eye surgery are simulated, namely cataract and vitrectomy. In each type, the haptic system is used to simulate the motion of the surgical tool. The interaction of the virtual surgical tool with the 3D-modeled eye is computed through the SOFA framework, and the simulation results are transferred to the Unity game engine for visualization in the Oculus VR headset.
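As a sketch of one plausible glue layer in such a pipeline, the short Python example below streams the simulated tool pose to a visualizer over UDP once per frame. The port, packet layout, and update rate are invented for illustration and do not describe the actual ARASH:ASiST/SOFA/Unity interface.

```python
# Minimal sketch: pushing the simulated surgical-tool pose from the physics
# side to a Unity visualizer over UDP, one pose packet per frame. All
# addresses and the packet format are hypothetical.
import socket
import struct
import time

UNITY_ADDR = ("127.0.0.1", 9000)  # hypothetical Unity listener
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_tool_pose(position, quaternion):
    """Pack position (x, y, z) and orientation quaternion (x, y, z, w)
    as 7 little-endian floats and send them as one datagram."""
    packet = struct.pack("<7f", *position, *quaternion)
    sock.sendto(packet, UNITY_ADDR)

# Dummy loop standing in for the physics update: a tool slowly descending
for step in range(100):
    send_tool_pose((0.0, 0.05 - step * 1e-4, 0.0), (0.0, 0.0, 0.0, 1.0))
    time.sleep(1 / 90)  # roughly match a 90 Hz headset refresh
```

A Unity-side script would listen on the same port, unpack the seven floats, and update the virtual tool's transform each frame.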

Date: Monday, August 3, 2020
(13 Mordad 1399)

Webinar Videos

Webinar Teaser

Time: 18:00-19:30 (+4:30 GMT Tehran local time)
or 9:30-11:00 (-4:00 GMT Canada Eastern Time Zone)

Part1

Part2

Part3
