This research theme traces its roots to the IROS 2005 conference, where the overwhelming volume of work presented on SLAM, together with emerging industrial needs, motivated rigorous work in this area. The first Master's student to work on SLAM in the group was Ali Agha Mohammadi, who elaborated on different aspects of visual SLAM as well as the implementation of laser-range-finder-based localization and mapping. Soon afterwards, other researchers explored a wide spectrum of topics, including the consistency of EKF-based SLAM algorithms and other state-of-the-art techniques developed in this area, such as FastSLAM. Some work was devoted to developing more suitable and faster optimization techniques for iSAM algorithms. These research results were soon used in different robotic platforms developed in the group. Among the many works in this area, one may mention the projects implemented on our Silver robot for exploration in unknown indoor environments, later extended to obstacle avoidance for static and dynamic objects. The implementation of SLAM algorithms in outdoor applications using a stereo vision camera on our other robotic platform, Melon, was among the other challenges fully worked out in the group. We soon realized the importance of, and the challenges in, 3D mapping and localization, and a long-term project was funded to develop a suitable 3D representation of the environment based on RGB-D sensory data. Using Kolmogorov complexity measures as well as NURBS smoothing functions enabled us to develop a very effective and computationally efficient representation method for 3D visual data. Furthermore, trajectory planning and nonlinear control for navigation have been considered in the implementation of these techniques on autonomous ground robots as well as autonomous aerial drones.
The current research project in the group relates to the development of autonomous and commercial vehicles through the implementation of state-of-the-art algorithms, such as deep learning on visual data, in order to first develop driver-assistance products and then provide the technological grounds to move toward autonomous vehicles.
ARAS is a great part of my academic experience. I am very glad that I had the opportunity to work with great researchers in the field, participate in cool robotic competitions, and gain a deep understanding of robotic theory.
ARAS for me is where I started my research career, fell in love with it, made a few good friends, saw real robots for the first time, and learned a lot.
I learned from the serious yet enjoyable teamwork that no matter what limitations you may have, no unreachable goal ever exists in life.
Dynamic Object Recognition in 3D Environments
Detecting and tracking dynamic objects is a crucial capability for any mobile robot navigating in an unknown environment. Moving objects can endanger the mobile robot, and they pose a challenge for safe navigation. Many real-world applications require detailed and precise knowledge of dynamic objects, for example an autonomous car driving on a highway or in urban environments, or a humanoid robot in a shopping mall. An autonomous robot should therefore have a perception of its environment and be able to decide how to act in dynamic surroundings, and thus needs a precise awareness of the behavior of the dynamic elements around it with respect to its ego-motion. In this project, we make use of available LIDAR datasets and are developing software, built on the Point Cloud Library, to detect and track dynamic objects. We are also developing a mathematical tool that lets us use probability theory to infer and estimate the pose and location of dynamic objects directly in 3D, removing the need for simplified 2D or 2.5D methods and leading to more precise results.
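As a toy illustration of the motion cue behind such detection, the sketch below flags points in the current LIDAR scan that have no nearby counterpart in the previous scan. This is a minimal NumPy sketch with an invented distance threshold; a real pipeline (e.g., one built on the Point Cloud Library) would use k-d trees, ego-motion compensation, and clustering rather than this brute-force comparison.

```python
import numpy as np

def moving_points(prev_scan, curr_scan, threshold=0.3):
    """Flag points in curr_scan whose nearest neighbour in prev_scan
    is farther than `threshold` metres (a crude motion cue).
    Brute-force distances: fine for toy scans, not for real LIDAR."""
    d = np.linalg.norm(curr_scan[:, None, :] - prev_scan[None, :, :], axis=2)
    return d.min(axis=1) > threshold

# Toy example: three static points, then one of them moves 1 m sideways.
prev = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
curr = prev.copy()
curr[2, 1] += 1.0  # the third point moved
mask = moving_points(prev, curr)  # only the displaced point is flagged
```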
Mobile robot motion planning for search and exploration under uncertainty
Many sequential decision-making problems under uncertainty can be modeled within the general frameworks of the Markov Decision Process (MDP) and the Partially Observable Markov Decision Process (POMDP); motion planning under uncertainty is one such instance. However, solving MDPs and POMDPs is computationally intractable in general, which restricts them to problems with small discrete state spaces and prevents their use in realistic applications.
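To make the MDP framework concrete, here is a minimal value-iteration sketch on a hypothetical two-state, two-action problem. It is purely illustrative, and it also hints at why the approach explodes for large or continuous state spaces: the tables `P` and `R` grow with the number of states.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration on a tiny discrete MDP.
    P has shape (A, S, S): P[a, s, s2] = Pr(s2 | s, a).
    R has shape (S, A): immediate reward for taking action a in state s."""
    V = np.zeros(R.shape[0])
    while True:
        # Q[s, a] = R[s, a] + gamma * sum_s2 P[a, s, s2] * V[s2]
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Hypothetical two-state problem: action 1 moves to the rewarding state 1.
P = np.array([[[1.0, 0.0], [0.0, 1.0]],   # action 0: stay put
              [[0.0, 1.0], [0.0, 1.0]]])  # action 1: go to state 1
R = np.array([[0.0, 0.0],   # state 0: no reward
              [1.0, 1.0]])  # state 1: reward 1 for either action
V, policy = value_iteration(P, R)  # in state 0 the policy picks action 1
```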
In this project, motion planning is carried out for specific goals such as environment exploration, search, and coverage. The presence of uncertainty makes these tasks challenging: to achieve a reliable plan and decision, the uncertainties must be accounted for in the robot's planning and decision making. Therefore, path planning for exploration and search is modeled as an asymmetric Traveling Salesman Problem (aTSP) in the belief space, in which the robot should visit a series of goal points. To reduce the complexity of this problem, the Feedback-based Information Roadmap (FIRM) is exploited.
FIRM is a motion planning method for a robot operating under motion and sensor uncertainty. It provides computationally tractable belief-space planning, and its capabilities make it suitable for real-time implementation and robust to changing environments and large deviations. FIRM was first proposed by Dr. Ali Aghamohammadi in his Ph.D. thesis.
Using FIRM, the intractable traveling salesman optimization problem in the continuous belief space is reduced to a simpler optimization problem on the TSP-FIRM graph. The optimal policy is obtained by finding the optimal path between each pair of goal points and solving the aTSP, and the resulting policy is then executed online. Algorithms are also proposed to handle deviation from the path, kidnapping, discovery of new obstacles, and high position uncertainty, all of which can arise during online execution; in these situations, the robot updates its graph, map, and policy online. The generic algorithms are extended to nonholonomic robots. In the online and offline phases, switching and LQG controllers, as well as a Kalman filter for localization, are adopted. This algorithm can be implemented in practice and takes us one step closer to solving the Simultaneous Path Planning, Localization and Mapping (SPLAM) problem. The algorithm has been implemented in Webots (video) and on a real robot (the Melon robot) (video). In both simulation and the real implementation, we used vision-based localization based on the EKF.
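The aTSP stage over the goal points can be pictured with a brute-force sketch on a hypothetical cost matrix. In the actual method the pairwise costs would come from evaluating FIRM paths between goal beliefs, and brute force is only viable for a handful of goals; the numbers below are invented for illustration.

```python
import itertools

def solve_atsp(cost):
    """Brute-force asymmetric TSP over a small cost matrix.
    Node 0 is the start; cost[a][b] need not equal cost[b][a]."""
    n = len(cost)
    best_tour, best_cost = None, float("inf")
    for perm in itertools.permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        c = sum(cost[a][b] for a, b in zip(tour, tour[1:]))
        if c < best_cost:
            best_tour, best_cost = tour, c
    return best_tour, best_cost

# Hypothetical goal-to-goal costs (e.g., expected FIRM path costs).
cost = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
tour, c = solve_atsp(cost)
```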
Autonomous Flight and Obstacle Avoidance of a Quadrotor by Monocular SLAM
In this project, a monocular-vision-based autonomous flight and obstacle avoidance system for a commercial quadrotor is presented. The video stream of the front camera and the navigation data measured by the drone are sent to the ground-station laptop via a wireless connection. The received data are processed by the vision-based ORB-SLAM to compute the 3D position of the robot and a sparse 3D map of the environment in the form of a point cloud. An algorithm is proposed for enriching the reconstructed map, and furthermore, a Kalman filter is used for sensor fusion. The scale factor of the monocular SLAM is calculated by linear fitting. Moreover, a PID controller is designed for 3D position control. Finally, by means of the potential field method and the Rapidly-exploring Random Tree (RRT) path planning algorithm, a collision-free roadmap is generated.
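The 3D position control mentioned above can be sketched, for a single axis, as a textbook discrete PID loop. The gains and the first-order toy plant below are illustrative assumptions, not the controller tuned for the actual drone.

```python
class PID:
    """Minimal discrete PID controller for one position axis."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy velocity-commanded plant toward x = 1.0 m (illustrative gains).
pid = PID(kp=1.2, ki=0.5, kd=0.05, dt=0.05)
x = 0.0
for _ in range(200):
    x += pid.step(1.0, x) * 0.05  # the command acts as a velocity
```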
The proposed system enables the robot to fly autonomously in an unknown environment and avoid colliding with obstacles. The algorithm consists of two main parts. First, we obtain the 3D position of the robot, estimated by a Kalman filter that fuses the monocular ORB-SLAM output with the navigation data measured by the drone's on-board sensors. Second, for autonomous flight and obstacle avoidance, the robot needs a perception of its environment. To this end, we use the map of the robot's surroundings reconstructed by monocular ORB-SLAM. However, this map is sparse and not directly suitable for autonomous applications, so we present an algorithm that aligns and enriches the reconstructed map. In the next step we determine the next motion set point and generate a collision-free path between that set point and the current robot position.
To this end, a dynamic trajectory generation algorithm is proposed that flies the robot and avoids probable obstacles autonomously in an unknown but structured environment, utilizing path planning methods such as the potential field and RRT. The algorithm has been evaluated in real experiments, and the flight variables are compared with readings from precise external sensors. The experiments show that the robot can perform reliable and robust autonomous flight in different scenarios while avoiding obstacles. Moreover, the proposed system can easily be applied to other platforms, an extension we plan to implement in future work.
Loop Closure Detection by Algorithmic Information Theory, Implemented on Range and Camera Image Data
It is assumed that a wheeled mobile robot is exploring an unknown, unstructured environment while perceiving camera or range images as its observations. These observations may be obtained with a sensor such as a 3D laser scanner, LIDAR, Microsoft Kinect camera, stereo pair, or monocular camera. For autonomy, the robot must avoid obstacles, perceive the surrounding environment, recognize revisited places, and perform path planning, mapping, and localization for long-term exploration in an unknown area or navigation toward a goal. The focus of this work is on loop closure detection based on the complexity of a sparse model (hereafter, the image model) extracted from either camera or range images. The mobile robot's position estimate becomes unreliable when closing large-scale loops due to accumulated estimation error; loop closure detection approaches based on observation similarity, which are independent of the estimated position, are therefore more accurate. A sparse model is constructed from a parametric dictionary for every range or camera image the robot observes. In contrast to high-dimensional feature-based representations, this model reduces the dimension of the sensor measurements' representations. Viewing loop closure detection as a clustering problem in a high-dimensional space, existing state-of-the-art algorithms have paid little attention to the curse of dimensionality.
Exploiting algorithmic information theory, the representation is developed so that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare the complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison with state-of-the-art algorithms and some recently proposed methods.
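The normalized compression distance has a well-known practical form: since Kolmogorov complexity is uncomputable, a real compressor stands in for it, giving NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch with zlib follows; the byte strings are invented stand-ins for encoded image models, not data from the project.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable proxy for the
    normalized information distance based on Kolmogorov complexity."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar observations compress well together (low NCD);
# dissimilar ones do not (NCD approaches 1).
a = b"corridor corridor corridor corridor" * 20
b_ = b"corridor corridor corridor corridoX" * 20
c = bytes(range(256)) * 3  # noise-like, shares nothing with a
```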
An Online Implementation of Robust RGB-D SLAM
In this project, an online robust RGB-D SLAM algorithm is developed that uses an improved switchable-constraints robust pose-graph SLAM back-end together with a radial-variance-based hash function as the loop detector. The switchable-constraints back-end is improved by initializing its weights according to the information matrices of the loops, and it is validated on real-world datasets. The radial-variance-based hash function is combined with an online image-to-map comparison to improve the accuracy of loop detection. The whole algorithm is implemented on the K. N. Toosi University mobile robot with a Microsoft Kinect camera as the RGB-D sensor and is validated on this robot while the map of the environment is generated online. The implementation follows a step-by-step hierarchy, through which the importance of adding each step to the algorithm is elaborated. Graphical and numerical results are reported for each step of the extended algorithm, verifying that the proposed algorithm works well with RGB-D data from the Kinect camera. Furthermore, it is shown that the execution time of each step makes the algorithm promising for real-time implementation with current graphics processing unit capabilities.
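The switchable-constraints idea admits a simple scalar intuition: each loop closure's error is scaled by a switch variable s in [0, 1], and a prior of weight ξ pulls s toward 1. For a fixed loop error of norm e, minimizing (s·e)² + ξ²(1 − s)² over s gives the closed form s = ξ²/(e² + ξ²), so consistent loops keep s near 1 while gross outliers are switched off. The sketch below uses ξ = 1 as an illustrative choice; in the full pose-graph optimization, the poses and switches are optimized jointly.

```python
def optimal_switch(e_norm: float, xi: float = 1.0) -> float:
    """Closed-form switch value minimizing (s*e)^2 + (xi*(1-s))^2
    for a fixed loop-closure error norm e: s = xi^2 / (e^2 + xi^2)."""
    return xi ** 2 / (e_norm ** 2 + xi ** 2)

s_good = optimal_switch(0.1)   # consistent loop: s stays near 1
s_bad = optimal_switch(10.0)   # outlier loop: s is driven toward 0
```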
Vision-Based Fuzzy Navigation of Mobile Robots in Grassland Environments
Suppose a wheeled mobile robot needs to autonomously navigate in an unstructured outdoor environment using an uncalibrated consumer camera as its only input sensor. For safe navigation in an unknown outdoor environment, the robot needs to perform the following tasks:
• Ground plane detection
• Obstacle identification
• Traversable area specification
We consider a robot navigating on rough terrain with static obstacles, perceiving the required information from a single camera and making navigation decisions in real time. As the robot traverses the real world, the relative positions of the obstacles vary in the image plane, and consequently the 2D projections of these points (in our case, extracted features) move in a direction that depends on the robot's heading and the obstacle's location in the real world. As can be seen in Fig. 1, camera movement toward an object increases the scale of the object in the image plane and causes apparent motion of features. When the robot moves toward an obstacle, features projected from the obstacle move upward in the image plane if they lie above the camera's X-Z plane; conversely, features below the camera's X-Z plane move downward as the robot approaches. Based on these properties of the apparent motion of features, the robot can decide whether the corresponding 3D point belongs to an obstacle, and in this way it can avoid moving toward obstacles in the environment. Using these two properties and a fuzzy inference system, features can be compared relative to one another and represented by linguistic fuzzy sets, which is the basis of our vision-based fuzzy navigation algorithm.
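The linguistic-fuzzy-set comparison can be sketched with triangular membership functions and a min-style (Mamdani) AND. The two inputs below (vertical feature motion in pixels and local scale change) and all breakpoints are illustrative assumptions, not the actual rule base of the system.

```python
def tri(x, a, b, c):
    """Triangular membership function: support [a, c], peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def obstacle_degree(dy, scale_change):
    """Toy fuzzy rule: a feature whose vertical motion is large AND whose
    local scale is growing is likely part of an approaching obstacle.
    Breakpoints are invented for illustration."""
    motion_large = tri(abs(dy), 2.0, 10.0, 18.0)   # pixels per frame
    growing = tri(scale_change, 1.0, 1.5, 2.0)      # scale ratio
    return min(motion_large, growing)               # Mamdani AND as min

strong = obstacle_degree(10.0, 1.5)  # both memberships peak
weak = obstacle_degree(1.0, 1.0)     # neither condition holds
```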