The research interests of the Autonomous Robotics (AR) group lie primarily in modern intelligent methods applied across a wide variety of fields, from driverless-car technologies to autonomous ground and aerial robots and surgical robotics. The current research theme of the group is the development of autonomous and commercial vehicles through the implementation of state-of-the-art algorithms, such as deep learning on visual data, first to develop driver-assistance products and then to provide the technological grounds to move toward fully autonomous vehicles. Depth estimation from single images, dynamic object detection in 3D environments, and obstacle avoidance for autonomous flight are some of the ongoing projects of the AR group.
This research theme has its roots in the IROS 2005 conference, where the overwhelming amount of research presented on SLAM, together with emerging industrial needs, motivated rigorous work in this area. The first Master's student working on SLAM in the group was Ali Agha Mohammadi, who elaborated on different aspects of visual SLAM as well as the implementation of laser-range-finder-based localization and mapping. Soon afterward, other researchers explored a wide spectrum of topics, including the consistency of EKF-based SLAM algorithms and other state-of-the-art techniques developed in this area, such as FastSLAM. Work was also done on developing more suitable and faster optimization techniques for iSAM algorithms. The research results were soon used in different robotic platforms developed in the group. Among the many works in this area, one may mention the projects implemented on our Silver robot for exploration in unknown indoor environments, later extended to obstacle avoidance for static and dynamic objects. The implementation of SLAM algorithms in outdoor applications using stereo vision cameras mounted on our other robotic platform, Melon, was among the other challenges fully worked out in the group. We soon realized the importance of, and challenges in, 3D mapping and localization, and a long-term project was funded to develop a suitable 3D representation of the environment based on RGB-D sensory data. Using Kolmogorov complexity measures as well as NURBS smoothing functions enabled us to develop a highly effective and computationally efficient representation for 3D visual data. Furthermore, trajectory planning and nonlinear control for navigation have been considered in the implementation of these techniques on autonomous ground robots as well as autonomous aerial drones.
Current Projects


Ectasia is a condition in which the human cornea gradually becomes thinner and its slope becomes steeper; eventually, the pressure of the internal fluid of the eye causes the cornea to protrude forward. In general, corneas are classified into three groups: Normal, Suspect, and KC (keratoconus). Keratoconus is important because it limits the applicability of refractive surgeries such as LASIK and femto-LASIK. To rule out KC, preoperative screening is usually performed using advanced imaging devices.
In this project, AI technology is used to diagnose KC categories based on four-map eye images. To use this technology, it is necessary to produce a comprehensive database of medical images for the diagnosis of KC and its categories. The framework uses the output data of the Pentacam device, especially the Four Maps Refractive images. The data available at Farabi Hospital will be used for this purpose. In the first stage, the collected data need to be properly labeled and annotated to determine the patient's health level in one of the three available classes.
The labeling process is very important for data obtained from different sources, because data reliability is one of the key assumptions underlying the proper performance of artificial-intelligence-based methods. To this end, the detailed expertise of physicians is needed to label and annotate these images. After the collection and labeling process, the potential challenges of this big data are identified and addressed, and the data and their associated annotations are aggregated into a comprehensive database for image classification.
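As one possible aggregation step (the input format, helper name, and `REVIEW` convention here are hypothetical, not the project's actual protocol), labels from several physicians can be reduced to a single class by majority vote, with ties flagged for re-examination:

```python
from collections import Counter

CLASSES = ("Normal", "Suspect", "KC")

def aggregate_labels(annotations):
    """Majority vote across physician annotations; ties are flagged.

    `annotations` maps an image id to the list of class labels assigned
    by different physicians (a hypothetical format for illustration).
    """
    out = {}
    for image_id, labels in annotations.items():
        (top, n), *rest = Counter(labels).most_common()
        # keep the winner only if it strictly beats the runner-up
        out[image_id] = top if not rest or rest[0][1] < n else "REVIEW"
    return out
```

A tie between two physicians is deliberately not resolved automatically, since unreliable labels would undermine the database's key assumption of reliability.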
An Online Implementation of Robust RGB-D SLAM
In this project, an online robust RGB-D SLAM algorithm is developed, which uses an improved switchable-constraints robust pose-graph SLAM back-end alongside a radial-variance-based hash function as the loop detector. The switchable-constraints back-end is improved by initializing its weights according to the information matrices of the loop closures, and it is validated using real-world datasets. The radial-variance-based hash function is combined with an online image-to-map comparison to improve the accuracy of loop detection. The whole algorithm is implemented on the K. N. Toosi University mobile robot with a Microsoft Kinect camera as the RGB-D sensor and is validated on this robot, while the map of the environment is generated in an online fashion. The algorithm is implemented in a step-by-step hierarchy, by which the importance of adding each step is elaborated. Graphical and numerical results are reported for each step of the extended algorithm, verifying that the proposed algorithm works well with RGB-D data from the Kinect camera. Furthermore, it is shown that the execution time of each step makes the algorithm promising for real-time implementation with current graphical processing unit capabilities.
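The switchable-constraints idea can be illustrated with a deliberately tiny example (not this project's actual implementation; the 1-D graph, `lam` prior weight, and sigmoid parameterization are all illustrative). Each loop-closure residual is multiplied by a switch variable that the optimizer can drive toward zero when the loop disagrees with the rest of the graph, at the cost of a prior that prefers switches to stay on:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 1-D pose graph: 4 poses along a line, odometry says +1.0 each step.
# A spurious "loop closure" claims pose 3 coincides with pose 0.
odom = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]   # (i, j, measured x_j - x_i)
loops = [(3, 0, 0.0)]                            # false positive

def residuals(v, lam=1.0):
    x, s = v[:4], v[4:]                  # poses and raw switch variables
    r = [x[0]]                           # prior anchoring x0 at 0
    for i, j, d in odom:
        r.append(x[j] - x[i] - d)
    for k, (i, j, d) in enumerate(loops):
        sk = 1.0 / (1.0 + np.exp(-s[k])) # squash switch into (0, 1)
        r.append(sk * (x[j] - x[i] - d)) # switched loop residual
        r.append(lam * (1.0 - sk))       # prior pulling the switch "on"
    return r

v0 = np.concatenate([np.zeros(4), np.full(len(loops), 2.0)])  # switch starts on
sol = least_squares(residuals, v0)
x_hat = sol.x[:4]
s_hat = 1.0 / (1.0 + np.exp(-sol.x[4]))
```

Because the false loop contradicts three consistent odometry edges, the optimizer pays the small switch-prior penalty and turns the loop off, leaving the trajectory nearly undistorted; initializing switch weights from loop information matrices, as in this project, refines exactly this trade-off.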
Vision-Based Fuzzy Navigation of Mobile Robots in Grassland Environments
Suppose a wheeled mobile robot needs to autonomously navigate in an unstructured outdoor environment using a non-calibrated regular camera as its input sensor. For safe navigation of a mobile robot in an unknown outdoor environment, we need to do the following tasks:
• Ground plane detection
• Obstacle identification
• Traversable area specification
• Navigation
We consider a robot navigating rough terrain with static obstacles, which perceives the required information from a single camera and makes navigation decisions in real time. While the robot traverses the real world, the relative positions of the obstacles vary in the image plane, and consequently the 2D projections of these points, in our case extracted features, move in some direction depending on the heading of the robot and the location of the obstacle in the real world. It can be seen in Fig. 1 that camera movement toward an object increases the scale of the object in the image plane and causes apparent motion of features in the image plane. When the robot moves toward an obstacle, projected features from the obstacle move upward in the image plane if they are located above the camera's X-Z plane. On the contrary, if the features are located below the camera's X-Z plane, they move downward as the robot draws near the obstacle. Taking into account this property, and based on the movement of features in the image plane, the robot can decide whether the corresponding 3D point belongs to an obstacle or not, and in this way it can avoid moving toward the obstacles in the environment. Using these two properties of the apparent motion of features and a fuzzy inference system, features can be compared in relation to each other and represented by linguistic fuzzy sets, which is the basis of our vision-based fuzzy navigation algorithm.
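The mapping from apparent feature motion to linguistic fuzzy sets can be sketched as follows. This is a minimal stand-in for the actual rule base, and every membership parameter below is an invented placeholder rather than a tuned value:

```python
def tri(x, a, b, c):
    """Triangular membership function rising over [a, b], falling over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def obstacle_degree(dy_pixels, scale_ratio):
    """Degree in [0, 1] to which a tracked feature indicates an obstacle,
    from its per-frame vertical image motion and local scale growth."""
    moving = tri(abs(dy_pixels), 1.0, 8.0, 30.0)    # "apparent motion is fast"
    growing = tri(scale_ratio, 1.01, 1.15, 1.6)     # "feature scale is growing"
    # single Mamdani-style rule: IF moving fast AND growing THEN obstacle
    return min(moving, growing)
```

Features near the image center that barely move and do not grow yield a degree near zero (traversable ground), while features that sweep outward and grow quickly as the robot approaches yield a degree near one.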
We address the 3D object tracking problem using nonlinear estimators alongside deep learning techniques. Considering the autonomous vehicle as our case study, the problem is formulated as structure from motion (SFM), where a nonlinear estimator performs the state estimation and a deep learning technique produces the observations. To solve this problem, a switched state-dependent Riccati equation (SDRE) filter is proposed that robustly estimates both the lateral and longitudinal distances to frontal dynamic objects in the autonomous vehicle's motion plane. The traditional SFM approach, however, cannot be encapsulated in an observable state-space model for the autonomous-vehicle case; therefore, an extension to the general SFM form is proposed and implemented in a multi-thread framework to address the multiple-object requirement. By considering the estimates obtained from the filter on one hand, and the observations given by the deep learning part on the other, 3D tracking of frontal objects is realized in practice. The stability analysis of the switching SDRE filter in the modified SFM formulation is thoroughly performed in the discrete-time domain. To further investigate the effectiveness of the suggested methods, a Monte Carlo simulation is carried out and analyzed, while a real-world implementation of the proposed method has been accomplished using a Jetson TX2 board on an economical car (Quick) from the SAIPA company. Since observations play a key role in estimating the required variables, image processing techniques are further studied in the second part of the thesis. In this regard, a hybrid paradigm consisting of an object detector alongside a classic OpenCV object tracker as well as a recurrent neural network is proposed to address challenges such as occlusion and blurred images.
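The core of an SDRE-style filter can be sketched as a Kalman recursion whose matrices are re-evaluated at the current state estimate. The toy drag model, noise levels, and function names below are illustrative assumptions, not the thesis's vehicle model:

```python
import numpy as np

def sdre_filter_step(x, P, z, f_A, f_C, Q, R):
    """One predict/update cycle of a state-dependent Riccati (SDRE-style)
    filter: the nonlinear dynamics are factored as x_{k+1} = A(x_k) x_k and
    z_k = C(x_k) x_k, then the standard Kalman recursions are applied with
    these state-dependent matrices."""
    A, C = f_A(x), f_C(x)
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Toy longitudinal model: state [distance, speed] with speed-dependent drag,
# giving a genuinely state-dependent A(x); only the distance is measured.
dt, drag = 0.1, 0.05
f_A = lambda x: np.array([[1.0, dt], [0.0, 1.0 - dt * drag * abs(x[1])]])
f_C = lambda x: np.array([[1.0, 0.0]])
Q, R = np.eye(2) * 1e-4, np.eye(1) * 1e-3

x_true = np.array([0.0, 5.0])
x_est, P = np.array([2.0, 2.0]), np.eye(2)
for _ in range(200):
    x_true = f_A(x_true) @ x_true          # simulate the true vehicle
    z = np.array([x_true[0]])              # noiseless distance measurement
    x_est, P = sdre_filter_step(x_est, P, z, f_A, f_C, Q, R)
```

Re-factoring A and C at every step is what distinguishes this scheme from a fixed-gain Kalman filter; the switching and multi-thread aspects of the actual thesis sit on top of this basic recursion.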
Furthermore, a fully convolutional architecture is proposed to address the single-object tracking task in a general-purpose tracking application. This model takes advantage of a novel architecture with various input branches to enforce multiple models in a single structure. The proposed approach is based on applying a convolutional neural network (CNN) and extracting a region of interest (RoI) in the form of a matrix at each frame. By this means, instead of analyzing the whole frame, just a small region is sufficient to track the intended object. Moreover, a specific branch is included to integrate the target template into the architecture, making it possible to distinguish the intended object from similar objects in the surrounding area. Finally, to investigate the effectiveness and applicability of the proposed approaches, various simulations and comparison studies are conducted. Based on the reported results, it is shown that the capability of the proposed methods is on par with that of state-of-the-art (SOTA) methods in real-world applications.
Active Member: Faraz Lotfi

Depth perception is fundamental for robots to understand the surrounding environment. From the viewpoint of cognitive neuroscience, visual depth perception methods are divided into three categories, namely binocular, active, and pictorial. The first two categories have been studied in detail for decades. However, research on the third category is still in its infancy and has gained momentum with the advent of deep learning methods in recent years. In cognitive neuroscience, it is known that pictorial depth perception mechanisms depend on the perception of seen objects. Inspired by this fact, in this thesis we investigated the relation between object perception and depth-estimation convolutional neural networks. For this purpose, we developed new network structures based on a simple depth-estimation network that uses only a single image as its input. Our proposed structures use both an image and a semantic labeling of the image as their input. We used semantic labels as the output of object perception. Performance comparisons between the developed networks and the original network showed that our structures can reduce the relative distance error of depth estimation by 52% in the examined cases. Most of the experimental studies were carried out on synthetic datasets generated by game engines, to isolate the performance comparison from the effect of inaccurate depth and semantic labels in non-synthetic datasets. It is shown that particular synthetic datasets may be used for training depth networks when an appropriate dataset is not available. Furthermore, we showed that in these cases the use of semantic labels improves the robustness of the network against domain shift from synthetic training data to non-synthetic test data.
Active Member: Amin Kashi
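To make the two-input idea concrete, here is a hypothetical PyTorch sketch of a network that encodes the RGB image and a one-hot semantic map in separate branches before decoding a depth map. The class name, layer sizes, and class count are invented and far smaller than any network used in the thesis:

```python
import torch
import torch.nn as nn

class DepthWithSemantics(nn.Module):
    """Toy two-branch depth estimator: an RGB image and a one-hot semantic
    label map are encoded separately, concatenated, and decoded to depth."""

    def __init__(self, n_classes=13):
        super().__init__()
        self.rgb_enc = nn.Sequential(                     # image branch
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.sem_enc = nn.Sequential(                     # semantic branch
            nn.Conv2d(n_classes, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(                         # joint decoder
            nn.ConvTranspose2d(96, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, rgb, sem_onehot):
        z = torch.cat([self.rgb_enc(rgb), self.sem_enc(sem_onehot)], dim=1)
        return self.dec(z)
```

Keeping the semantic branch separate lets the decoder weigh object-level cues against appearance cues, which is the intuition the thesis borrows from pictorial depth perception.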

When conceptualizing the algorithm running on an intelligent robot through the lens of reductionism, a subsystem responsible for getting a sense of the agent's location proves to be among the intuitively deduced modules. Whether we view location as a parameter that defines our position with respect to a fixed origin, or define it from a relative perspective, a robot needs a notion of its placement in the world to be able to make appropriate decisions and perform the necessary actions while reacting to the dynamic world around it. However, this dynamic nature of the world presents problems, such as flawed descriptors and unreliable observations, that make it challenging to use straightforward solutions for localization based on the low-level information gathered by the sensors mounted on the robot. Therefore, approaches are needed that can exploit the semantic information embedded in observed scenes and extract higher-level information about the world that is robust to such issues. Moreover, by leveraging multiple sensors, information from modalities whose data sources vary to a suitable extent may be gathered and fused to form a joint high-level representation of the state of the robot, further adding to the reliability of the localization system. In this thesis, our goal is to design and experiment with neural network architectures and to create learning paradigms that incentivize the extraction of robust features through a representation-learning procedure in which the inputs to the network are not preprocessed. We propose mechanisms and objectives that allow the network to disregard faulty input information while achieving an interpretability that allows the system to communicate its uncertainty about the estimates based on the provided inputs.
Moreover, we take a hybrid approach to the global localization of the robot, where physical and learning-based models are combined to form a multilevel localization approach that increases the flexibility of the pipeline. We perform comprehensive experiments to support our motivation while comparing our approaches to state-of-the-art methods quantitatively and qualitatively. We analyze the proposed approaches through custom-designed interpretation methods to gain in-depth intuition on how our algorithms add to the literature and improve upon the state-of-the-art. Finally, we provide an overview of the branches of our work that can be explored further, while delineating the potential future of the field.
Active Member: Hamed Damirchi

Past Projects

Single-Image Depth Estimation Guided by Semantic Labels (the full project description appears above under Current Projects).
Active Members: Amin Kashi, Faraz Lotfi
Human Behavior Classification (HBC) can be considered a vital part of any security system. It is undeniable that in almost all sensitive places, whether in society or in industrial sectors, continuous monitoring of the situation is mandatory to realize various critical tasks. For instance, considering the COVID-19 pandemic, it is imperative to track and record an infected person's trajectory. To further enhance the performance in terms of accuracy, different body parts should be detected while tracking a person. This may help not only to identify contaminated surfaces but also to classify dangerous behaviors. On the other hand, the interaction between individuals is another important point to pay attention to, because people often change their decisions, moving trajectories, etc., with respect to each other.
Active Members: Amin Mardani, Sina Allahkaram, Ali Farajzadeh, Faraz Lotfi

Dynamic Object Detection in 3D Environment
Dynamic object detection is the most important part of the environmental perception unit in any autonomous vehicle that is intended to operate safely in urban environments. In urban scenarios there are almost always multiple objects surrounding the vehicle; thus, the problem of dynamic object detection is actually a Multiple Object Detection (MOD) problem. The effectiveness and accuracy of methods in this field are highly dependent on how the uncertainties are handled in the procedure and, at the operational level, on which combination of sensors is used to perceive the environment.
Most MOD methodologies in the literature are based on the tracking-by-detection procedure. Potential movable objects are detected using data provided by the sensor modalities, and the position and velocity of dynamic objects are tracked afterward. Continuous awareness of the kinematic states of the surrounding traffic participants is vital for modeling the perceived environment and, furthermore, for control actions and safety preservation. This knowledge has to be adjustable in real time to be employable within reasonable time frames. This research field is being pursued in several major branches of the AR group. Research is conducted through Master's and Ph.D. theses, in collaboration with freelance alumni. For more information, please see the ARAS Driverless Car Project.
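A basic building block of any tracking-by-detection pipeline is data association between predicted track positions and fresh detections. The greedy nearest-neighbour sketch below is a generic illustration, not the group's method; the `gate` threshold and point-based object representation are simplifying assumptions:

```python
import numpy as np

def associate(tracks, detections, gate=2.0):
    """Greedy nearest-neighbour association between predicted track
    positions and current detections (both (N, 2) arrays of 2-D points).
    Returns (matches, unmatched_track_ids, unmatched_detection_ids)."""
    matches = []
    free_t, free_d = set(range(len(tracks))), set(range(len(detections)))
    if len(tracks) and len(detections):
        # pairwise Euclidean distances, tracks along rows
        dist = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
        pairs = ((t, d) for t in range(len(tracks)) for d in range(len(detections)))
        for ti, di in sorted(pairs, key=lambda p: dist[p]):
            if ti in free_t and di in free_d and dist[ti, di] <= gate:
                matches.append((ti, di))
                free_t.discard(ti)
                free_d.discard(di)
    return matches, free_t, free_d
```

Unmatched detections typically spawn new tracks and unmatched tracks are coasted or dropped; handling these cases under uncertainty is exactly where the MOD methods studied in the group go beyond this sketch.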
Active Members: Faraz Lotfi

The Autonomous Car project was started in collaboration with the SAIPA Group. The main goal of the project is to establish a software system for the autonomous driving process. The chosen platform for this project is an automatic-transmission vehicle called Quick from the SAIPA corporation. Road-scene object detection (such as cars, bicycles, and pedestrians), real-time object detection and tracking, image-based object distance estimation, and domestic dataset gathering are some of the accomplished projects of this team. You may find more details about this project and its progress on ARAS autonomous car with Quick!

The Detection and Tracking of Multiple Moving Objects (DaTMO) Software Package project was started to develop a software package for the detection and tracking of dynamic objects in different road scenes. DaTMO-SP consists of two software modules: Detection and Tracking. To date, the package has been designed and developed only as a LiDAR-based solution to the DaTMO problem. Further modifications toward implementing sensor-fusion methods in this package will take place in the near future. For more details on this project, please visit the DaTMO-SP page.

Mobile robot motion planning for search and exploration under uncertainty
Many sequential decision-making problems under uncertainty can be modeled within the general frameworks of the Markov Decision Process (MDP) and the Partially Observable Markov Decision Process (POMDP). Motion planning under uncertainty is an instance of these problems. MDP and POMDP frameworks are computationally intractable, which restricts them to problems with small discrete state spaces and prevents their use in realistic applications. In this project, motion planning is performed for specific goals such as environment exploration, search, and coverage. However, the presence of uncertainties makes these tasks challenging. To achieve a reliable plan, these uncertainties should be considered in the robot's planning and decision making. Therefore, path planning for exploration and search is modeled as an asymmetric Traveling Salesman Problem (aTSP) in the belief space, in which the robot should visit a series of goal points. To reduce the complexity of this problem, the Feedback-based Information Roadmap (FIRM) is exploited. FIRM is a motion planning method for a robot operating under motion and sensor uncertainty. It provides computationally tractable belief-space planning, and its capabilities make it suitable for real-time implementation and robust to changing environments and large deviations. FIRM was first proposed by Dr. Ali Aghamohammadi in his Ph.D. thesis.
Using FIRM, the intractable traveling salesman optimization problem in the continuous belief space is converted to a simpler optimization problem on the TSP-FIRM graph. The optimal policy of the robot is obtained by finding the optimal path between each pair of goal points and solving the aTSP; the policy is then executed online. Algorithms are also proposed to overcome deviation from the path, kidnapping, discovery of new obstacles, and high uncertainty about the position, which are possible situations during the online execution of the policy. Consequently, the robot should update its graph, map, and policy online. The generic proposed algorithms are extended to non-holonomic robots. In the online and offline phases, switching and LQG controllers, as well as a Kalman filter for localization, are adopted. This algorithm can be implemented in practice and brings us one step closer to solving the Simultaneous Planning, Localization and Mapping (SPLAM) problem. The algorithm has been implemented in Webots (video) and also on a real robot (the Melon robot) (video). In both simulation and real implementation, we have used vision-based localization based on the EKF.
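For a handful of goal points, the aTSP over the TSP-FIRM graph can even be solved by enumeration. The sketch below assumes the pairwise costs have already been obtained from belief-space path evaluation between FIRM nodes; it is a generic brute-force solver for illustration, not the project's planner:

```python
from itertools import permutations

def solve_atsp(cost):
    """Brute-force asymmetric TSP: `cost[i][j]` is the (possibly asymmetric)
    belief-space cost of traveling from goal i to goal j. Fixing node 0 as
    the start, try every ordering of the remaining goals and return the
    cheapest closed tour. Feasible only for a small number of goals."""
    n = len(cost)
    best, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm
        c = sum(cost[tour[i]][tour[i + 1]] for i in range(n - 1))
        c += cost[tour[-1]][0]           # close the loop back to the start
        if c < best:
            best, best_tour = c, tour
    return best, best_tour
```

The (n-1)! enumeration is exactly the intractability the TSP-FIRM graph mitigates: FIRM collapses continuous belief-space planning into a small graph on which such a discrete tour problem becomes manageable.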

Autonomous Flight and Obstacle Avoidance of a Quadrotor by Monocular SLAM
In this project, a monocular-vision-based autonomous flight and obstacle avoidance system for a commercial quadrotor is presented. The video stream of the front camera and the navigation data measured by the drone are sent to the ground-station laptop via a wireless connection. The received data are processed by the vision-based ORB-SLAM to compute the 3D position of the robot and a sparse 3D map of the environment in the form of a point cloud. An algorithm is proposed for enriching the reconstructed map, and furthermore, a Kalman filter is used for sensor fusion. The scale factor of the monocular SLAM is calculated by linear fitting. Moreover, a PID controller is designed for 3D position control. Finally, by means of the potential-field method and the Rapidly-exploring Random Tree (RRT) path planning algorithm, a collision-free roadmap is generated. The proposed system enables the robot to fly autonomously in an unknown environment and avoid colliding with obstacles. The proposed algorithm generally consists of two parts. First, we obtain the 3D position of the robot: it is estimated using a Kalman filter that fuses the monocular ORB-SLAM outputs with the navigation data measured by the on-board sensors of the drone. Regarding autonomous flight and obstacle avoidance, the robot needs a perception of its environment. To fulfill this aim, we use the surrounding map of the robot, which is reconstructed by monocular ORB-SLAM. However, this map is sparse and not appropriate for autonomous applications. Therefore, we present an algorithm that aligns and enriches the reconstructed map. In the next step, we determine the next motion set point and generate a collision-free path between the specified set point and the current robot position.
For this, a dynamic trajectory generation algorithm is proposed to fly and avoid probable obstacles autonomously in an unknown but structured environment, utilizing path planning methods such as the potential field and RRT. The algorithm has been evaluated in real experiments, and the flight variables are compared with those of precise external sensors. The experiments illustrate that the robot can perform reliable and robust autonomous flight in different scenarios while avoiding obstacles. Moreover, the proposed system can be easily applied to other platforms, which is being extended and implemented in our future plans.
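The linear-fitting step for the monocular scale factor can be sketched as an ordinary least-squares fit between the unscaled SLAM altitude estimates and a metric altitude reference such as the drone's on-board altimeter. The function and variable names are illustrative, not taken from the project's code:

```python
import numpy as np

def estimate_scale(slam_z, alt_m):
    """Fit alt_m ≈ s * slam_z + b by least squares, where `slam_z` holds
    the (dimensionless) monocular SLAM altitude estimates and `alt_m` the
    synchronized metric altitude readings. Returns the scale s and offset b."""
    A = np.vstack([slam_z, np.ones_like(slam_z)]).T   # design matrix [z, 1]
    (s, b), *_ = np.linalg.lstsq(A, alt_m, rcond=None)
    return s, b
```

Once s is known, every SLAM position can be converted to meters before the Kalman filter fuses it with the drone's navigation data.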

Loop Closure Detection By Algorithmic Information Theory: Implemented On Range And Camera Image Data
It is assumed that a wheeled mobile robot is exploring an unknown unstructured environment while perceiving camera or range images as its observations. These observations may be obtained with a proper sensor such as a 3-D laser scanner, LiDAR, Microsoft Kinect camera, stereo pair, or monocular camera. For autonomy, the robot is required to avoid obstacles, perceive the surrounding environment, recognize revisited places, and perform path planning, mapping, and localization for long-term exploration in an unknown area or navigation toward a goal. The focus of this work is on loop closure detection based on the complexity of the sparse model (image model, hereafter) extracted from either camera or range images. The mobile robot position estimate becomes unreliable when closing large-scale loops due to the accumulation of estimation error. Therefore, loop closure detection approaches based on observation similarity, which are independent of the estimated position, are more accurate. A sparse model is constructed from a parametric dictionary for every range or camera image obtained as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representation is reduced. Considering loop closure detection as a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms.
Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used for the comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to state-of-the-art algorithms and some recently proposed methods.
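The normalized compression distance can be approximated with any off-the-shelf compressor. A minimal sketch with zlib, standing in for whatever compressor and image-model encoding the project actually uses:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance, a computable surrogate for the
    (uncomputable) normalized information distance based on Kolmogorov
    complexity:  NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(.) is the compressed length."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Observations from a revisited place compress well against each other and yield a small NCD, while unrelated observations yield a value closer to one; thresholding this distance is what allows similar places to be detected and incorrect loop closure candidates to be rejected independently of the position estimate.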
Publications
Title | Abstract | Year | Type | Research Group | |
---|---|---|---|---|---|
Object localization through a single multiple-model switching CNN and a superpixel training approach F Lotfi, F Faraji, Hamid D Taghirad Applied Soft Computing 115, 108166 | Abstract: Object localization has a vital role in any object detector and tracker, and therefore, has been the focus of attention by many researchers. In this article, a special training approach is proposed for a light convolutional neural network (CNN) to determine the region of interest (RoI) in an image while effectively reducing the number of probable anchor boxes. Almost all CNN based detectors utilize a fixed input size image, which may yield poor performance when dealing with various object sizes. In this paper, a different CNN structure is proposed taking three different input sizes, to enhance the performance. To demonstrate the effectiveness of the proposed method, two common data set are used for training while tracking by localization application is considered to demonstrate its final performance. The promising results indicate the applicability of the presented structure and the training method in practice. | 2022 | Journal | Autonomous Robotics | |
Optimization of Battery Life and State of the Charge in an Electric Motorcycle Braking System Amirhossein Samii, Faraz Lotfi, Hamid D Taghirad 2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM) | Abstract: The regenerative-friction braking system in electric motorcycles has attracted much attention in recent years due to the increase in energy demand. In this regard, recovering the vehicle energy from heat during braking is of significant importance. Without proper energy management, the battery lifetime may be negatively affected by the recurrent energy produced from braking. This paper presents a practical approach of regenerative-friction braking system for an electric motorcycle using Fuzzy Inference System (FIS) optimized via Whale Optimization Algorithm (WOA). The FIS determines the amount of the regenerative and friction braking based on different velocities and movement path slopes. The dynamic model of the electric motorcycle and the proposed controller are implemented in Matlab using the commercial Bikesim software. Moreover, the proposed method is implemented on a complete setup of an electric … | 2021 | Conference | Autonomous Robotics | |
Single Object Tracking through a Fast and Effective Single-Multiple Model Convolutional Neural Network F Lotfi, HD Taghirad arXiv preprint arXiv:2103.15105 | Abstract: Object tracking becomes critical especially when similar objects are present in the same area. Recent state-of-the-art (SOTA) approaches are based on a matching network with a heavy structure to distinguish the target from other objects in the area, which drastically degrades the performance of the tracker in terms of speed. Besides, several candidates are considered and processed to localize the intended object in a region of interest for each frame, which is time-consuming. In this article, a special architecture is proposed based on which, in contrast to the previous approaches, it is possible to identify the object location in a single shot while taking its template into account to distinguish it from similar objects in the same area. In brief, first of all, a window containing the object with twice the target size is considered. | 2021 | Preprint | Autonomous Robotics | |
GVPS: Global Visual Position System for Drones Hamid Didari, Hamid D Taghirad, Parnia Shokri, Fatemeh Ghofrani 2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM) | Abstract: Drones are contributing significantly to transportation, rescue, commercial, and safety purposes through their rapid technology development. Operating in an unknown outdoor environment is a stringent requirement in these applications, while the Global Positioning System (GPS) is usually exploited to determine the global position of the drone. However, relying on an external input source is not a secure solution due to the hijacking possibilities. To tackle this problem, a positioning system that is independent of any external signals is proposed in this paper. This is accomplished in two steps of relative position estimation and global position derivation. First, the relative position of a UAV is estimated by a monocular camera. Since the estimated relative position has an unknown scale, an extended Kalman filter (EKF) is employed to fuse IMU data with the relative position. Next, the global position of the UAV is derived by … | 2021 | Conference | Autonomous Robotics | |
A Framework for 3D Tracking of Frontal Dynamic Objects in Autonomous Cars F Lotfi, HD Taghirad arXiv preprint arXiv:2103.13430 | Abstract: Both recognition and 3D tracking of frontal dynamic objects are crucial problems in an autonomous vehicle, while depth estimation as an essential issue becomes a challenging problem using a monocular camera. Since both the camera and the objects are moving, the problem can be formulated as a structure from motion (SFM) problem. In this paper, to elicit features from an image, the YOLOv3 approach is utilized beside an OpenCV tracker. Subsequently, to obtain the lateral and longitudinal distances, a nonlinear SFM model is considered alongside a state-dependent Riccati equation (SDRE) filter and a newly developed observation model. | 2021 | Preprint | Autonomous Robotics | |
A Gaussian Process-Based Ground Segmentation for Sloped Terrains Pouria Mehrabi, Hamid D Taghirad 2021 9th RSI International Conference on Robotics and Mechatronics (ICRoM) | Abstract: A Gaussian Process based ground segmentation method is proposed in this paper, fully developed in a probabilistic framework. The proposed method aims to obtain a continuous, realistic model of the ground. LiDAR three-dimensional point cloud data is used as the sole source of input data. The physical realities of the data are taken into account to properly classify sloped ground as well as flat ground. Unlike conventional ground segmentation methods, no height or distance constraints are imposed on the data to compensate for the lack of access to the physical behavior of the ground. Furthermore, a density-like parameter is defined to handle ground-like obstacle points in the ground candidate set. A non-stationary covariance kernel function is used for the Gaussian Process, by which Bayesian inference is applied using the maximum a posteriori criterion. The log-marginal … | 2021 | Conference | Autonomous Robotics | |
Identity Recognition based on Convolutional Neural Networks Using Gait Data F Faraji, Faraz Lotfi, M Majdolhosseini, M Jafarian, Hamid D Taghirad 2021 26th International Computer Conference, Computer Society of Iran (CSICC) | Abstract: As a critical part of any security system, identity recognition has become paramount among researchers. In this regard, several methods have been presented considering various sensors and data. In particular, gait data yields rich information about a person, including some exclusive moving patterns which can be utilized to distinguish between different individuals. On the other hand, convolutional neural networks have proved applicable to structured data, especially images. In this article, 12 markers are considered in gathering the gait data, each representing a lower-body joint location. Then, utilizing the gait data in a 2D tensor form, three different convolutional neural networks are trained to recognize the identities. Taking light architectures into account, this approach is implementable in real-time applications. The obtained results show the promising capability of the proposed method being used in identity … | 2021 | Conference | Autonomous Robotics | |
Exploring Self-Attention for Visual Odometry Hamed Damirchi, Rooholla Khorrambakht, Hamid D. Taghirad arXiv preprint arXiv:2011.08634 | Abstract: Visual odometry networks commonly use pretrained optical flow networks in order to derive the ego-motion between consecutive frames. The features extracted by these networks represent the motion of all the pixels between frames. However, due to the existence of dynamic objects and texture-less surfaces in the scene, the motion information for every image region might not be reliable for inferring odometry due to the ineffectiveness of dynamic objects in derivation of the incremental changes in position. Recent works in this area lack attention mechanisms in their structures to facilitate dynamic reweighing of the feature maps for extracting more refined egomotion information. In this paper, we explore the effectiveness of self-attention in visual odometry. We report qualitative and quantitative results against the SOTA methods. Furthermore, saliency-based studies alongside specially designed experiments are utilized to investigate the effect of self-attention on VO. Our experiments show that using self-attention allows for the extraction of better features while achieving a better odometry performance compared to networks that lack such structures. | 2020 | Preprint | Autonomous Robotics | |
ARC-Net: Activity Recognition Through Capsules Hamed Damirchi, Rooholla Khorrambakht, Hamid Taghirad arXiv preprint arXiv:2007.03063 | Abstract: Human Activity Recognition (HAR) is a challenging problem that needs more advanced solutions than handcrafted features to achieve a desirable performance. Deep learning has been proposed as a solution to obtain more accurate HAR systems that are robust against noise. In this paper, we introduce ARC-Net and propose the utilization of capsules to fuse the information from multiple inertial measurement units (IMUs) to predict the activity performed by the subject. We hypothesize that this network will be able to tune out the unnecessary information and will be able to make more accurate decisions through the iterative mechanism embedded in capsule networks. We provide heatmaps of the priors, learned by the network, to visualize the utilization of each of the data sources by the trained network. By using the proposed network, we were able to increase the accuracy of the state-of-the-art approaches by 2%. Furthermore, we investigate the directionality of the confusion matrices of our results and discuss the specificity of the activities based on the provided data. | 2020 | Preprint | Autonomous Robotics | |
Preintegrated IMU Features For Efficient Deep Inertial Odometry R. Khorrambakht, H. Damirchi, H. D. Taghirad arXiv preprint arXiv:2007.02929 | Abstract: MEMS Inertial Measurement Units (IMUs) are inexpensive and effective sensors that provide proprioceptive motion measurements for many robots and consumer devices. However, their noise characteristics and manufacturing imperfections lead to complex ramifications in classical fusion pipelines. While deep learning models provide the required flexibility to model these complexities from data, they have higher computation and memory requirements, making them impractical choices for low-power and embedded applications. This paper attempts to address the mentioned conflict by proposing a computationally efficient inertial representation for deep inertial odometry. Replacing the raw IMU data in deep inertial models with preintegrated features improves the model's efficiency. The effectiveness of this method has been demonstrated for the task of pedestrian inertial odometry, and its efficiency has been shown through its embedded implementation on a microcontroller with restricted resources. | 2020 | Preprint | Autonomous Robotics | |
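The preintegration idea in the entry above can be illustrated with a minimal 2D sketch: raw gyro and accelerometer samples between two keyframes are summarized into relative-motion features (heading change, velocity change, position change), so a learned model can consume one fixed-size feature instead of the high-rate stream. The function name and the planar simplification are illustrative assumptions, not the paper's implementation.

```python
import math

def preintegrate(samples, dt):
    """Summarize raw 2D IMU samples between two keyframes into relative-motion
    features (delta heading, delta velocity, delta position), all expressed in
    the frame of the first keyframe.  Each sample is (gyro_z, accel_x, accel_y)
    in the body frame; dt is the sampling period in seconds."""
    dtheta = 0.0            # accumulated heading change
    dvx = dvy = 0.0         # accumulated velocity change
    dpx = dpy = 0.0         # accumulated position change
    for gyro_z, ax, ay in samples:
        # rotate body-frame acceleration into the start-keyframe frame
        c, s = math.cos(dtheta), math.sin(dtheta)
        awx = c * ax - s * ay
        awy = s * ax + c * ay
        dpx += dvx * dt + 0.5 * awx * dt * dt
        dpy += dvy * dt + 0.5 * awy * dt * dt
        dvx += awx * dt
        dvy += awy * dt
        dtheta += gyro_z * dt
    return dtheta, (dvx, dvy), (dpx, dpy)

# 1 s of pure forward acceleration at 1 m/s^2, no rotation:
dtheta, dv, dp = preintegrate([(0.0, 1.0, 0.0)] * 100, dt=0.01)
# → dtheta = 0, dv ≈ (1.0, 0.0) m/s, dp ≈ (0.5, 0.0) m
```

A deep inertial model would then take one such feature per keyframe interval rather than all 100 raw samples, which is the efficiency gain the abstract refers to.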
A switched SDRE filter for state of charge estimation of lithium-ion batteries Faraz Lotfi, Saeedeh Ziapour, Farnoosh Faraji, Hamid D. Taghirad International Journal of Electrical Power & Energy Systems | Abstract: Lithium-ion (Li-ion) batteries require very precise monitoring of the state of charge (SOC) to ensure a long cycle life. | 2020 | Journal | Autonomous Robotics | |
A New Approach To Estimate Depth Of Cars Using A Monocular Image SMA Tousi, J Khorramdel, F Lotfi, AH Nikoofard, AN Ardekani 8th Iranian Joint Congress on Fuzzy and Intelligent Systems (CFIS) | Abstract: Predicting scene depth from RGB images is a challenging task. Since cameras are the most available, least restrictive, and cheapest source of information for autonomous vehicles, in this work a monocular image has been used as the only source of data to estimate the depth of the car within the frontal view. | 2020 | Conference | Autonomous Robotics | |
ARC-Net: Activity Recognition Through Capsules Hamed Damirchi, Rooholla Khorrambakht, Hamid Taghirad 19th IEEE International Conference on Machine Learning and Applications (ICMLA) | Abstract: Human Activity Recognition (HAR) is a challenging problem that needs more advanced solutions than handcrafted features to achieve a desirable performance. Deep learning has been proposed as a solution to obtain more accurate HAR systems that are robust against noise. In this paper, we introduce ARC-Net and propose the utilization of capsules to fuse the information from multiple inertial measurement units (IMUs) to predict the activity performed by the subject. We hypothesize that this network will be able to tune out the unnecessary information and will be able to make more accurate decisions through the iterative mechanism embedded in capsule networks. We provide heatmaps of the priors, learned by the network, to visualize the utilization of each of the data sources by the trained network. By using the proposed network, we were able to increase the accuracy of the state-of-the-art approaches by 2%. Furthermore, we investigate the directionality of the confusion matrices of our results and discuss the specificity of the activities based on the provided data. | 2020 | Conference | Autonomous Robotics | |
System identification and H-infinity based control of quadrotor attitude Ali Noormohammadi-Asl, Omid Esrafilian, Mojtaba Ahangar Arzati, Hamid D. Taghirad arXiv Optimization and Control | Abstract: The attitude control of a quadrotor is a fundamental problem, which has a pivotal role in a … | 2019 | Preprint | Autonomous Robotics | |
Position Estimation for Drones based on Visual SLAM and IMU in GPS-denied Environment Hamid Didari Khamseh Motlagh, Faraz Lotfi, Saeed Bakhshi Germi, Hamid D. Taghirad International Conference on Robotics and Mechatronics | Abstract: Due to the increased rate of drone usage in various commercial and industrial fields, the need for their autonomous operation is rapidly increasing. One major aspect of autonomous movement is the ability to operate safely in an unknown environment. The majority of current works persistently use a global positioning system (GPS) to directly find the absolute position of the drone. However, GPS accuracy might not be suitable in some applications, and this solution is not applicable to all situations. In this paper, a positioning system based on monocular SLAM and an inertial measurement unit (IMU) is presented. The position is calculated through the semi-direct visual odometry (SVO) method alongside IMU data, and is integrated with an extended Kalman filter (EKF) to enhance the efficiency of the algorithm. The data is then employed to control the drone without any requirement for any source of external input. The experimental results for long-distance flying paths are very promising. | 2019 | Conference | Autonomous Robotics | |
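The scale-resolving fusion this entry describes can be reduced to a toy scalar EKF: assuming the metric position predicted from IMU integration is known, the unknown monocular-SLAM scale appears only in the measurement model and can be estimated recursively. This is a minimal sketch under that simplifying assumption, not the paper's filter (which fuses full SVO poses with IMU data).

```python
def ekf_scale_update(lam, P, p_metric, z_vo, q=1e-4, r=1e-2):
    """One scalar-EKF step estimating the unknown monocular-SLAM scale lam.
    p_metric : metric position predicted from IMU integration (assumed known)
    z_vo     : scale-less visual-odometry position, modeled as p_metric / lam
    q, r     : process and measurement noise variances
    """
    P += q                          # predict: scale is (nearly) constant
    h = p_metric / lam              # predicted measurement
    H = -p_metric / lam**2          # dh/dlam (measurement Jacobian)
    S = H * P * H + r               # innovation covariance
    K = P * H / S                   # Kalman gain
    lam += K * (z_vo - h)           # correct the scale estimate
    P *= (1.0 - K * H)              # covariance update
    return lam, P

# usage: true scale 2.0, drone moves 0.1 m per step, noiseless measurements
lam, P = 1.0, 1.0
for k in range(1, 51):
    p = 0.1 * k
    lam, P = ekf_scale_update(lam, P, p, p / 2.0)
# lam converges toward the true scale 2.0
```

Because the Jacobian grows with the distance traveled, the scale becomes better observable as the drone moves, which matches the intuition that monocular scale can only be recovered under motion.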
Path Planning for a UAV by Considering Motion Model Uncertainty Hossein Sheikhi Darani, Ali Noormohammadi-Asl and Hamid D. Taghirad International Conference on Robotics and Mechatronics | Abstract: The primary purpose of path planning for unmanned aerial vehicles (UAVs), which is a necessary prerequisite toward an autonomous UAV, is to guide the robot to … | 2019 | Conference | Autonomous Robotics | |
Robust Object Tracking Based on Recurrent Neural Networks F. Lotfi, V. Ajallooeian and H. D. Taghirad 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM) | Abstract: Object tracking through image sequences is one of the important components of many vision systems, and it has numerous applications in driver assistance systems such as pedestrian collision avoidance or collision mitigating systems. Blurred images produced by a rolling shutter camera or occlusions may easily disturb the object tracking system. In this article, a method based on convolutional and recurrent neural networks is presented to further enhance the performance and robustness of such trackers. It is proposed to use a convolutional neural network to detect an intended object and feed the tracker with the detected image. Moreover, by using this structure the tracker is updated every n frames. A recurrent neural network is designed to learn the object behavior for estimating and predicting its position in blurred frames or when it is occluded behind an obstacle. Real-time implementation of the proposed approach verifies its applicability for improvement of the tracker's performance. | 2018 | Conference | Autonomous Robotics | |
Multi-goal motion planning using traveling salesman problem in belief space Ali Noormohammadi-Asl, Hamid D. Taghirad Information Sciences | Abstract: In this paper, the multi-goal motion planning problem of an environment with some background information about its map is addressed in detail. The motion planning goal is to find a policy in belief space for the robot to traverse through a number of goal points. This problem is modeled as an asymmetric traveling salesman problem (TSP) in the belief space using the Partially Observable Markov Decision Process (POMDP) framework. Then, the feedback-based information roadmap (FIRM) algorithm is utilized to reduce the computational burden and complexity. By generating a TSP-FIRM graph, the search policy is obtained and an algorithm is proposed for online execution of the policy. Moreover, approaches to cope with challenges such as map updating, large deviations and high uncertainty in localization, which are more troublesome in a real implementation, are carefully addressed. Finally, in order to evaluate the applicability and performance of the proposed algorithms, they are implemented in a simulation environment as well as on a physical robot, in which challenges such as kidnapping and discrepancies between the real and computational models and map are examined. | 2018 | Journal | Autonomous Robotics | |
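The goal-ordering layer of such a TSP formulation can be sketched with a greedy nearest-neighbour heuristic. In the paper the edge costs come from belief-space (FIRM) policies; the Euclidean distance used below is a stand-in assumption so the routing idea can be seen in isolation.

```python
import math

def visit_order(start, goals):
    """Greedy nearest-neighbour ordering of goal points: repeatedly move to
    the cheapest unvisited goal.  A real TSP solver (or the paper's
    belief-space costs) would replace both the cost function and the
    greedy selection; this only illustrates the tour-construction step."""
    order, current = [], start
    remaining = list(goals)
    while remaining:
        nxt = min(remaining, key=lambda g: math.dist(current, g))
        remaining.remove(nxt)
        order.append(nxt)
        current = nxt
    return order

tour = visit_order((0, 0), [(5, 5), (1, 0), (1, 1)])
# → [(1, 0), (1, 1), (5, 5)]
```

Swapping `math.dist` for an expected-cost-to-go computed on a FIRM graph is, conceptually, how the belief-space variant replaces geometric distance with uncertainty-aware cost.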
Implementation of Multi-Goal Motion Planning Under Uncertainty on a Mobile Robot Ali Noormohammadi-Asl, Hamid D. Taghirad, Amirhossein Tamjidi 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM) | Abstract: Multi-goal motion planning under motion and sensor uncertainty is the problem of finding a reliable policy for visiting a set of goal points. In this paper, the problem is formulated as a formidable traveling salesman problem in the belief space. To solve this intractable problem, we propose an algorithm to construct a TSP-FIRM graph which is based on the feedback-based information roadmap (FIRM) algorithm. Also, two algorithms are proposed for online execution of the policy obtained in the offline mode and for overcoming changes in the map of the environment. Finally, we apply the algorithms on a physical nonholonomic mobile robot in the presence of challenging situations like the discrepancy between the real and computational models, map updating and kidnapping. | 2017 | Conference | Autonomous Robotics | |
Reconstruction of B-spline curves and surfaces by adaptive group testing Alireza Norouzzadeh Ravari, Hamid D. Taghirad Computer-Aided Design | Abstract: Point clouds as measurements of 3D sensors have many applications in various fields such as object modeling, environment mapping and surface representation. Storage and processing of raw point clouds is time consuming and computationally expensive. In addition, their high dimensionality shall be considered, which results in the well-known curse of dimensionality. Conventional methods either apply reduction or approximation to the captured point clouds in order to make the data processing tractable. B-spline curves and surfaces can effectively represent 2D data points and 3D point clouds for most applications. Since processing all available data for B-spline curve or surface fitting is not efficient, an algorithm based on Group Testing theory is developed that finds salient points sequentially. The B-spline curve or surface models are updated by adding a new salient point to the fitting process iteratively until the Akaike Information Criterion (AIC) is met. Also, it has been proved that the proposed method finds a unique solution, as defined in group testing theory. The experimental results demonstrate the applicability and performance improvement of the proposed method in relation to some state-of-the-art B-spline curve and surface fitting methods. | 2016 | Journal | Autonomous Robotics | |
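The criterion-driven point selection in this entry can be mimicked with a simplified stand-in: greedily add the worst-fit data point as a knot of a piecewise-linear model until the Akaike Information Criterion stops improving. The paper selects points by group testing and fits B-splines; both are replaced here by much simpler analogues, so this only illustrates the add-until-AIC-stalls loop.

```python
import math

def interp(knots, x):
    """Piecewise-linear evaluation through sorted (x, y) knots."""
    for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return knots[-1][1]

def fit_salient(points, max_knots=20):
    """Greedy analogue of adaptive salient-point selection with an AIC stop:
    AIC = n*ln(RSS/n) + 2k, where k is the number of knots (parameters)."""
    pts = sorted(points)
    knots = [pts[0], pts[-1]]           # start from the endpoints
    n = len(pts)
    best_aic = float("inf")
    while len(knots) < max_knots:
        rss = sum((y - interp(knots, x)) ** 2 for x, y in pts)
        aic = n * math.log(rss / n + 1e-12) + 2 * len(knots)
        if aic >= best_aic:
            break                       # adding points stopped helping
        best_aic = aic
        worst = max(pts, key=lambda p: abs(p[1] - interp(knots, p[0])))
        if worst in knots:
            break                       # nothing salient left to add
        knots = sorted(knots + [worst])
    return knots

# a V-shaped signal needs exactly one interior salient point (the corner):
knots = fit_salient([(x, abs(x)) for x in range(-5, 6)])
# → [(-5, 5), (0, 0), (5, 5)]
```

The same loop structure carries over to B-spline fitting: only the model evaluation and the point-probing strategy change.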
NURBS-based Representation of Urban Environments for Mobile Robots Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2016 4th International Conference on Robotics and Mechatronics (ICROM) | Abstract: Representation of the surrounding environment is a vital task for a mobile robot. Many applications for mobile robots in urban environments may be considered, such as self-driving cars, delivery drones or assistive robots. In contrast to the conventional methods, in this paper a Non-Uniform Rational B-Spline (NURBS) based technique is presented for 3D mapping of the surrounding environment. While in the state-of-the-art techniques the robot's environment is expressed in a discrete space, the proposed method is mainly developed for representation of the environment in a continuous space. Exploiting information theory, the generated representation has much lower complexity and more compression capability than some state-of-the-art techniques. In addition to representation in a lower dimensional space, the NURBS-based representation is invariant against 3D geometric transformations. Furthermore, the NURBS-based representation can be employed for obstacle avoidance and navigation. The applicability of the proposed algorithm is investigated in some urban environments through some publicly available data sets. It has been shown by some experiments that the proposed method has better visual representation and much better data compression compared to some state-of-the-art methods. | 2016 | Conference | Autonomous Robotics | |
Autonomous Flight and Obstacle Avoidance of a Quadrotor By Monocular SLAM Omid Esrafilian and Hamid D. Taghirad 2016 4th International Conference on Robotics and Mechatronics (ICROM) | Abstract: In this paper, a monocular vision based autonomous flight and obstacle avoidance system for a commercial quadrotor is presented. The video stream of the front camera and the navigation data measured by the drone are sent to the ground station laptop via a wireless connection. The received data is processed by the vision-based ORB-SLAM to compute the 3D position of the robot and a sparse 3D map of the environment in the form of a point cloud. An algorithm is proposed for enrichment of the reconstructed map, and furthermore, a Kalman filter is used for sensor fusion. The scaling factor of the monocular SLAM is calculated by linear fitting. Moreover, a PID controller is designed for 3D position control. Finally, by means of the potential field method and the Rapidly-exploring Random Tree (RRT) path planning algorithm, a collision-free road map is generated. Experimental verifications of the proposed algorithms are also reported. | 2016 | Conference | Autonomous Robotics | |
A Navigation System for Autonomous Robot Operating in Unknown and Dynamic Environment: Escaping Algorithm F. AdibYaghmaie, A. Mobarhani, H. D. Taghirad International Journal of Robotics | Abstract: In this study, the problem of navigation in a dynamic and unknown environment is investigated and a navigation method based on the force field approach is suggested. It is assumed that the robot performs navigation in an unknown environment and builds the map through a SLAM procedure. Since the moving objects' locations and properties are unknown, they are identified and tracked by a Kalman filter. The Kalman observer provides important information about the next paths of moving objects, which is employed in finding the collision point and time in the future. Upon collision detection, a modifying force is added to the repulsive and attractive forces corresponding to the static environment and leads the robot to avoid collision. Moreover, a safe turning angle is defined to assure safe navigation of the robot. The performance of the proposed method, named the Escaping Algorithm, is verified through different simulation and experimental tests. Besides, a comparison between the Escaping Algorithm and Probabilistic Velocity Obstacle (PVO), based on computational complexity and required steps for finishing the mission, is provided in this paper. The results show the Escaping Algorithm outperforms PVO in terms of dynamic obstacle avoidance and complexity as a practical method for autonomous navigation. | 2016 | Journal | Autonomous Robotics | |
Loop Closure Detection by Compressed Sensing for Exploration of Mobile Robots in Outdoor Environments Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM) | Abstract: In the problem of simultaneous localization and mapping (SLAM) for a mobile robot, it is required to detect previously visited locations so that the estimation error can be reduced. Sensor observations are compared by a similarity metric to detect loops. In long-term navigation or exploration, the number of observations increases, and so does the complexity of loop closure detection. Several techniques have been proposed to reduce the complexity of loop closure detection, but few algorithms have considered loop closure detection from a subset of sensor observations. In this paper, the compressed sensing approach is exploited to detect loops from few sensor measurements. In basic compressed sensing it is assumed that a signal has a sparse representation in a basis, which means that only a few elements of the signal are non-zero. Based on the compressed sensing approach, a sparse signal can be recovered from a few linear noisy projections by l1 minimization. The difference matrix which is widely used for loop detection has a sparse structure, where similar observations are indicated by zero distance and different locations by ones. Based on the multiple measurement vector technique, which is an extension of basic compressed sensing, loop closure detection is performed by comparison of a few sensor observations. The applicability of the proposed algorithm is investigated in some outdoor environments through some publicly available data sets. It has been shown by some experiments that the proposed method can detect loops effectively. | 2015 | Conference | Autonomous Robotics | |
Modified Fast-SLAM For 2D Mapping And 3D Localization Soheil Gharatappeh, Mohammad Ghorbanian, Mehdi Keshmiri, Hamid D. Taghirad 2015 3rd RSI International Conference on Robotics and Mechatronics (ICROM) | Abstract: The Fast Simultaneous Localization and Mapping (FastSLAM) algorithm is capable of real-time implementation due to its logarithmic time complexity, which results in a decrease in computational cost. In this algorithm, the state vector of a robot merely includes the planar location of the robot and its angle to the horizontal plane. It has fewer components compared to the state vector in the extended Kalman filter method, which consists of the locations of all environmental features. In existing methods for implementing this algorithm, robot movement is considered to be purely planar; if moving on a slope changes the pitch angle of the robot, it causes errors in the algorithm. Correcting these errors leads to precise 2D mapping and 3D localization. This paper details the modification added to the conventional FastSLAM algorithm to accommodate this requirement by using an IMU. Simulation and experimental results show the effectiveness of this modification. | 2015 | Conference | Autonomous Robotics | |
3D Scene and Object Classification Based on Information Complexity of Depth Data A. Norouzzadeh, H. D. Taghirad Mathematics | Abstract: In this paper the problem of 3D scene and object classification from depth data is addressed. In contrast to high-dimensional feature-based representation, the depth data is described in a low dimensional space. In order to remedy the curse of dimensionality problem, the depth data is described by a sparse model over a learned dictionary. Exploiting the algorithmic information theory, a new definition for the Kolmogorov complexity is presented based on the Earth Mover's Distance (EMD). Finally the classification of 3D scenes and objects is accomplished by means of a normalized complexity distance, where its applicability in practice is proved by some experiments on publicly available datasets. Also, the experimental results are compared to some state-of-the-art 3D object classification methods. Furthermore, it has been shown that the proposed method outperforms FAB-Map 2.0 in detecting loop closures, in the sense of the precision and recall. | 2015 | Journal | Autonomous Robotics | |
Transformation Invariant 3D Object Recognition Based On Information Complexity Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: The 3D representation of objects and scenes as a point cloud or range image has been made simple by means of sensors such as the Microsoft Kinect, stereo cameras or laser scanners. Various tasks, such as recognition, modeling and classification, cannot be performed on raw measurements because of the curse of high dimensionality and computational and algorithmic complexity. Non-Uniform Rational Basis Splines (NURBS) are a widely used representation technique for 3D objects in various robotics and Computer Aided Design (CAD) applications. In this paper, a similarity measurement from information theory is employed in order to recognize an object sample from a set of objects. From a NURBS model fitted to the observed point cloud, a complexity-based representation is derived which is transformation invariant in the sense of Kolmogorov complexity. Experimental results on a set of 3D objects grabbed by a Kinect sensor indicate the applicability of the proposed method for object recognition tasks. Furthermore, the results of the proposed method are compared to those of some state-of-the-art algorithms. | 2014 | Conference | Autonomous Robotics | |
An Online Implementation of Robust RGB-D SLAM M. A. Athari, H. D. Taghirad 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: This paper presents an online robust RGB-D SLAM algorithm which uses an improved switchable constraints robust pose graph SLAM alongside a radial-variance-based hash function as the loop detector. The switchable constraints robust back-end is improved by initializing its weights according to the information matrix of the loops, and is validated using real-world datasets. The radial-variance-based hash function is combined with an online image-to-map comparison to improve the accuracy of loop detection. The whole algorithm is implemented on the K. N. Toosi University mobile robot with a Microsoft Kinect camera as the RGB-D sensor, and is validated using this robot, while the map of the environment is generated in an online fashion. | 2014 | Conference | Autonomous Robotics | |
An Improved Optimization Method for iSAM2 Rana Talaei Shahir and Hamid D. Taghirad 2014 Second RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: Maximum likelihood estimation in SLAM corresponds to a nonlinear least-squares problem, for which an accurate solution with fast convergence is expected in large-scale environments. Although the applied optimization methods may be acceptable in terms of accuracy and speed of convergence for small datasets, their solutions for large-scale datasets are often far from the ground truth. In this paper, a double Dogleg trust region method is proposed and adjusted with iSAM2 to improve the performance and accuracy of the algorithm, especially on large-scale datasets. Since trust region methods are sensitive to their own parameters, Gould parameters are chosen to obtain better performance. Simulations are performed on some large-scale datasets and the results indicate that the proposed method is more efficient compared to the conventional iSAM2 algorithm. | 2014 | Conference | Autonomous Robotics | |
A Square Root Unscented FastSLAM With Improved Proposal Distribution and Resampling Ramazan Havangi, Hamid D. Taghirad, Mohammad Ali Nekoui, and Mohammad Teshnehlab IEEE Transactions on Industrial Electronics | Abstract: An improved square root unscented fast simultaneous localization and mapping (FastSLAM) is proposed in this paper. The proposed method propagates and updates the square root of the state covariance directly in Cholesky decomposition form. Since the choice of the proposal distribution and that of the resampling method are the most critical issues to ensure the performance of the algorithm, its optimization is considered by improving the sampling and resampling steps. For this purpose, particle swarm optimization (PSO) is used to optimize the proposal distribution. PSO causes the particle set to tend to the high probability region of the posterior before the weights are updated; thereby, the impoverishment of particles can be overcome. Moreover, a new resampling algorithm is presented to improve the resampling step. The new resampling algorithm can overcome the defects of conventional resampling and solve the degeneracy and sample impoverishment problems simultaneously. Compared to unscented FastSLAM (UFastSLAM), the proposed algorithm can maintain the diversity of particles and consequently avoid inconsistency for longer time periods, and furthermore, it can improve the estimation accuracy compared to UFastSLAM. These advantages are verified by simulations and experimental tests for benchmark environments. | 2014 | Journal | Autonomous Robotics | |
Loop Closure Detection By Algorithmic Information Theory: Implemented On Range And Camera Image Data Alireza Norouzzadeh Ravari and Hamid D. Taghirad IEEE Transactions on Cybernetics | Abstract: In this paper, the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image acquired as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representations is reduced. Although loop closure detection amounts to a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is invariant to geometric transformations in the sense of Kolmogorov complexity. A universal normalized metric is used to compare complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited to detect similar places and reject incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods. | 2014 | Journal | Autonomous Robotics | |
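The normalized compression distance at the core of this approach is easy to illustrate: the uncomputable Kolmogorov complexity C(.) is replaced by the length a real compressor achieves. A minimal sketch with zlib (the paper operates on sparse image models, not raw bytes; the names below are illustrative):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    Kolmogorov-complexity-based normalized information distance, with
    C(.) approximated by zlib-compressed length."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Two near-identical observations compress well together, so their
# distance stays low; an unrelated observation scores higher.
scan_a = b"wall wall door wall corner " * 20
scan_b = b"wall wall door wall cornea " * 20
scan_c = bytes(range(256)) * 2
assert ncd(scan_a, scan_b) < ncd(scan_a, scan_c)
```

Loop candidates whose NCD to the current observation falls below a threshold would be treated as revisited places; the rest are rejected.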
An intelligent UFastSLAM with MCMC move step Ramazan Havangi, Mohammad Ali Nekoui, Hamid D. Taghirad and Mohammad Teshnehlab Advanced Robotics | Abstract: FastSLAM is a framework for simultaneous localization and mapping (SLAM). However, the FastSLAM algorithm has two serious drawbacks, namely the linear approximation of nonlinear functions and the derivation of the Jacobian matrices. To solve these problems, UFastSLAM has recently been proposed. However, UFastSLAM is inconsistent over time due to the loss of particle diversity, caused mainly by particle depletion in the resampling step and incorrect a priori knowledge of the process and measurement noises. To improve consistency, an intelligent UFastSLAM with a Markov chain Monte Carlo (MCMC) move step is proposed. In the proposed method, an adaptive neuro-fuzzy inference system supervises the performance of UFastSLAM. Furthermore, the particle impoverishment caused by resampling is restrained after the resampling step with the MCMC move step. Simulations and experiments are presented to evaluate the performance of the algorithm in comparison with UFastSLAM. The results show the effectiveness of the proposed method. | 2013 | Journal | Autonomous Robotics | |
Unsupervised 3D Object Classification from Range Image Data by Algorithmic Information Theory Alireza Norouzzadeh Ravari and Hamid D. Taghirad 2013 First RSI/ISM International Conference on Robotics and Mechatronics (ICRoM) | Abstract: The problem of unsupervised classification of 3D objects from depth information is investigated in this paper. The range images, acquired as sensor observations, are represented efficiently. Despite the high dimensionality of 3D object classification, little attention has been paid to the curse of dimensionality in existing state-of-the-art algorithms. To remedy this problem, a low-dimensional representation is defined here. The sparse model of every range image is constructed from a parametric dictionary. Employing algorithmic information theory, a universal normalized metric is used for comparison of Kolmogorov-complexity-based representations of the sparse models. Finally, the most similar objects are grouped together. Experimental results show the efficiency and accuracy of the proposed method in comparison to a recently proposed method. | 2013 | Conference | Autonomous Robotics | |
A New Method for Mobile Robot Navigation in Dynamic Environment: Escaping Algorithm F. Adib Yaghmaie, A. Mobarhani and H. D. Taghirad Robotics and Mechatronics (ICRoM) | Abstract: This paper addresses a new method for navigation in a dynamic environment. The proposed method is based on the force field method, and the robot is assumed to perform SLAM and autonomous navigation in the dynamic environment without any predefined information about dynamic obstacles. The movement of dynamic obstacles is predicted by a Kalman filter and used for collision detection. When a collision is detected, a modifying force is added to the repulsive and attractive forces corresponding to the static environment and leads the robot to avoid the collision. Moreover, a safe turning angle is defined to assure safe navigation of the robot. The performance of the proposed method, named the Escaping Algorithm, is verified through different simulation and experimental tests. The results show the proper performance of the Escaping Algorithm in terms of dynamic obstacle avoidance as a practical method for autonomous navigation. | 2013 | Conference | Autonomous Robotics | |
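The static part of the force-field navigation that the Escaping Algorithm builds on can be sketched as follows. This is a generic attractive/repulsive potential-field sketch under assumed gains, not the paper's implementation, and it omits the Kalman-predicted modifying force for moving obstacles.

```python
import numpy as np

def potential_force(robot, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
    """Resultant force-field command: attraction toward the goal plus
    repulsion from every obstacle inside the influence radius d0.
    Gains k_att, k_rep and radius d0 are illustrative choices."""
    f = k_att * (goal - robot)                    # attractive term
    for obs in obstacles:
        diff = robot - obs
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                          # obstacle close enough to matter
            f += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    return f

robot = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
# An obstacle above the straight-line path pushes the force vector downward.
f = potential_force(robot, goal, [np.array([1.0, 0.5])])
```

In the paper's setting, the obstacle positions fed into the repulsive term would include the Kalman-predicted future positions of dynamic obstacles, plus the extra modifying force on predicted collisions.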
A SLAM based on auxiliary marginalised particle filter and differential evolution R. Havangi, M.A. Nekoui, M. Teshnehlab and H.D. Taghirad International Journal of Systems Science | Abstract: FastSLAM is a framework for simultaneous localisation and mapping (SLAM) using a Rao-Blackwellised particle filter. In FastSLAM, a particle filter is used for the robot pose (position and orientation) estimation, and a parametric filter (e.g. EKF or UKF) is used for the feature locations' estimation. However, in the long term, FastSLAM is an inconsistent algorithm. In this paper, a new approach to SLAM based on a hybrid auxiliary marginalised particle filter and differential evolution (DE) is proposed. In the proposed algorithm, the robot pose is estimated based on an auxiliary marginal particle filter that operates directly on the marginal distribution, and hence avoids performing importance sampling on a space of growing dimension. In addition, the static map is considered as a set of parameters that are learned using DE. Compared to other algorithms, the proposed algorithm can improve consistency over longer time periods and also improve the estimation accuracy. Simulations and experimental results indicate that the proposed algorithm is effective. | 2013 | Journal | Autonomous Robotics | |
SLAM Based on Intelligent Unscented Kalman Filter R. Havangi, M. A. Nekoui, H. D. Taghirad, and M. Teshnehlab The 2nd International Conference on Control, Instrumentation and Automation | Abstract: The performance of SLAM based on the unscented Kalman filter (UKF-SLAM), and thus the quality of the estimation, depends on correct a priori knowledge of the process and measurement noise. Imprecise knowledge of these statistics can cause significant degradation in performance. In this paper, an adaptive neuro-fuzzy system is implemented to adapt the process noise covariance matrix of UKF-SLAM in order to improve its performance. | 2012 | Conference | Autonomous Robotics | |
Feedback Error Learning Control of Trajectory Tracking of Non-Holonomic Mobile Robot Farnaz Adib Yaghmaie, Fateme Bakhshande and Hamid D. Taghirad 20th Iranian Conference on Electrical Engineering | Abstract: In this paper, a new controller for nonholonomic systems is introduced. This feedback error learning controller benefits from both nonlinear and adaptive controller properties. The nonlinear controller is used to stabilize the nonholonomic behavior of the system; it is a sliding mode controller designed based on the backstepping method. The adaptive controller copes with the uncertainty and unknown dynamics of the mobile robot, using a neural network for adaptation. The experimental results show the effectiveness of the proposed controller and the suitable and robust tracking performance of the mobile robot, which is significantly better than that of traditional controllers. | 2012 | Conference | Autonomous Robotics | |
Stereo-Based Visual Navigation of Mobile Robots in Unknown Environments H. Soltani, H. D. Taghirad and A.R. Norouzzadeh Ravari 20th Iranian Conference on Electrical Engineering (ICEE2012) | Abstract: In this paper, a stereo vision-based algorithm for mobile robot navigation and exploration in unknown outdoor environments is proposed. The algorithm is based solely on stereo images and implemented on a nonholonomic mobile robot. The first step for exploration in unknown environments is constructing a map of the surroundings in real time. By computing the disparity image from rectified stereo images and translating its data into 3D space, a point-cloud model of the environment is constructed. Then, by projecting the points onto the XZ plane and stitching local maps together based on visual odometry, a global map of the environment is constructed in real time. The A* algorithm is used for finding the optimal path, and a nonlinear backstepping controller guides the robot to follow the identified path. Finally, the mobile robot explores for a desired object in an unknown environment through these steps. Experimental results verify the effectiveness of the proposed algorithm in real-time implementations. | 2012 | Conference | Autonomous Robotics | |
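The disparity-to-point-cloud step described above follows the standard rectified-stereo back-projection. A minimal sketch under an assumed pinhole model (function name and parameters are illustrative, not the paper's code):

```python
import numpy as np

def disparity_to_points(disparity, f, baseline, cx, cy):
    """Back-project a rectified-stereo disparity image into a 3D point
    cloud: Z = f*B/d, X = (u - cx)*Z/f, Y = (v - cy)*Z/f.
    Pixels with zero disparity (no stereo match) are skipped."""
    v, u = np.nonzero(disparity > 0)     # pixel coordinates with valid matches
    d = disparity[v, u].astype(float)
    Z = f * baseline / d                 # depth from disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.column_stack([X, Y, Z])    # one (X, Y, Z) row per matched pixel
```

Projecting the resulting points onto the XZ plane then yields the local 2D occupancy information that the visual odometry stitches into a global map.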
Histogram Based Frontier Exploration Amir Mobarhani, Shaghayegh Nazari, Amir H. Tamjidi, Hamid D. Taghirad 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems | Abstract: This paper proposes a method for mobile robot exploration based on the idea of frontier exploration, which suggests navigating the robot toward the boundaries between free and unknown areas in the map. A global occupancy grid map of the environment is constantly updated, based on which a global frontier map is calculated. Then, a histogram based approach is adopted to cluster frontier cells and score these clusters based on their distance from the robot as well as the number of frontier cells they contain. In each stage of the algorithm, a sub-goal is set for the robot to navigate. A combination of distance transform and A* search algorithms is utilized to generate a plausible path toward the sub-goal through the free space. In this way, keeping a reliable distance from obstacles is guaranteed while searching for the shortest path toward the sub-goal. When such a path is generated, a B-spline interpolated and smoothed trajectory is produced as the control reference for the mobile robot to follow. The whole process is iterated until no unexplored area remains in the map. The efficiency of the method is shown through simulated and real experiments. | 2011 | Conference | Autonomous Robotics | |
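The frontier map described above, i.e. the set of free cells bordering unknown space, can be computed directly on an occupancy grid. A minimal sketch under an assumed cell convention (-1 unknown, 0 free, 1 occupied), not the paper's implementation:

```python
import numpy as np

def frontier_cells(grid):
    """Mark free cells with at least one unknown 4-neighbour -- the
    frontier boundary the robot should be steered toward."""
    free = grid == 0
    unknown = grid == -1
    near_unknown = np.zeros_like(free)
    near_unknown[1:, :] |= unknown[:-1, :]   # unknown cell above
    near_unknown[:-1, :] |= unknown[1:, :]   # unknown cell below
    near_unknown[:, 1:] |= unknown[:, :-1]   # unknown cell to the left
    near_unknown[:, :-1] |= unknown[:, 1:]   # unknown cell to the right
    return free & near_unknown

grid = np.array([[ 0,  0, -1],
                 [ 0,  1, -1],
                 [ 0,  0,  0]])
# Frontier cells here are (0, 1) and (2, 2): the free cells that
# border the unknown right-hand column.
```

The histogram-based clustering and scoring in the paper then operates on these frontier cells to pick the next sub-goal.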
An Adaptive Neuro-Fuzzy Rao-Blackwellized Particle Filter for SLAM Ramazan Havangi, Mohammad Teshnehlab, Mohammad Ali Nekoui, Hamid Taghirad 2011 IEEE International Conference on Mechatronics | Abstract: The Rao-Blackwellized particle filter SLAM (RBPF-SLAM), also known as FastSLAM, is a framework for simultaneous localization and mapping using a Rao-Blackwellized particle filter. The performance and the quality of the estimation of the Rao-Blackwellized particle filter depend heavily on correct a priori knowledge of the process and measurement noise covariance matrices (Qt and Rt), which are unknown in most applications. On the other hand, incorrect a priori knowledge of Qt and Rt may seriously degrade performance. To solve these problems, this paper presents an adaptive neuro-fuzzy Rao-Blackwellized particle filter. The free parameters of the adaptive neuro-fuzzy inference systems are trained using steepest gradient descent (GD) to minimize, as much as possible, the difference between the actual value of the covariance of the residual and its theoretical value. | 2011 | Conference | Autonomous Robotics | |
The H-Infinity Fast SLAM Framework Ramazan Havangi, Mohammad Ali Nekoui, Hamid Taghirad, Mohammad Teshnehlab 2011 IEEE International Conference on Mechatronics | Abstract: FastSLAM is a framework using a Rao-Blackwellized particle filter. However, the performance of FastSLAM depends on correct a priori knowledge of the process and measurement noise covariance matrices (Qt and Rt), which are unknown in most applications. On the other hand, incorrect a priori knowledge of Qt and Rt may seriously degrade the performance of FastSLAM. To solve these problems, this paper presents H-Infinity FastSLAM. In this approach, an H-Infinity particle filter is used for the mobile robot position estimation and an H-Infinity filter is used for the feature locations' estimation. H-Infinity FastSLAM can work under unknown statistical noise behavior and is thus more robust. Experimental results demonstrate the effectiveness of the proposed algorithm. | 2011 | Conference | Autonomous Robotics | |
Vision-Based Fuzzy Navigation of Mobile Robots in Grassland Environments A. R. Norouzzadeh Ravari, H. D. Taghirad, A. H. Tamjidi Advanced Intelligent Mechatronics | Abstract: In this paper a vision-based algorithm for mobile robot navigation in unknown outdoor environments is proposed. It is based on a simple phenomenon, that when the robot moves forward, projected images of the near obstacles grow in captured frames faster than that of the far objects. The proposed algorithm takes advantage of this property and extracts features from each grabbed frame of the camera and tracks the vertical position of the features and their speed along the Y axis of the image plane over multiple frames as the robot moves. The relative height of the features and their distance from the robot in 3D is inferred based on this data and they are fed into a fuzzy reasoning system which marks the features from safe to unsafe according to their suitability for navigation. Then a second fuzzy system summarizes these scores in different image regions and directs the robot toward the area containing more features marked as safe. Simulation and implementation results confirm the efficacy of the proposed simple algorithm for mobile robot navigation in outdoor environments. | 2009 | Conference | Autonomous Robotics | |
Line Matching Localization and Map Building with Least Square E. Mihankhah, H.D. Taghirad, A. Kalantari, E. Aboosaeedan, H. Semsarilar 2009 IEEE/ASME International Conference on Advanced Intelligent Mechatronics | Abstract: We introduce a very fast and robust localization and 2D environment representation algorithm in this paper. This method matches lines extracted from the laser range finder distance data with the lines that construct the map, in order to calculate the local translation and rotation. This matching is done with a simple least squares with no iterations. The algorithm is suitable for any indoor environment with a mostly polygonal structure and has proven high speed and robustness in experimental tests on our innovatively designed tracked mobile rescue robot "Silver". One experimental test is presented in the last section of this paper, where the outputs are shown: (1) a drift-free raster map made of points, and (2) a gallery of lines providing a linear ground truth. | 2009 | Conference | Autonomous Robotics | |
On the Consistency of EKF-SLAM: Focusing on the Observation Models Amirhossein Tamjidi, Hamid D. Taghirad, Aliakbar Aghamohammadi Intelligent Robots and Systems | Abstract: In this paper, a new strategy for handling the observation information of a bearing-range sensor throughout the filtering process of EKF-SLAM is proposed. This strategy is devised based on a thorough consistency analysis and aims to improve consistency while reducing the computational cost. First, three different possible observation models are introduced for the EKF-SLAM solution for a robot equipped with a bearing-range sensor. The general form of the covariance matrix and the level of inconsistency in the robot orientation estimate are then calculated for these variants, and based on a numerical comparison of the estimation results, it is proposed to use both the bearing and range information of a feature in the initialization step of EKF-SLAM, but only the bearing information in the other iteration steps. The simulation results verify that the new strategy yields more consistent estimates both for the robot and for the features. Moreover, through the proposed consistency analysis, it is shown that since the source of the consistency improvement is independent of the choice of motion model, the method has an advantage over existing methods that assume a specific motion model for consistency improvement. | 2009 | Conference | Autonomous Robotics | |
A novel hybrid Fuzzy-PID controller for tracking control of robot manipulators A. R. Norouzzadeh Ravari, H.D. Taghirad 2008 IEEE International Conference on Robotics and Biomimetics | Abstract: In this paper, a novel hybrid fuzzy proportional-integral-derivative (PID) controller based on learning automata for optimal tracking of robot systems, including motor dynamics, is presented. Learning automata are used at the supervisory level to adjust the parameters of the hybrid Fuzzy-PID controller during system operation. The proposed method has a better convergence rate than standard back-propagation algorithms, lower computational requirements than adaptive-network-based fuzzy inference systems (ANFIS) or neural controllers, and the ability to work in uncertain environments without any previous knowledge of the environment's parameters. The proposed controller has been successfully applied in simulation to control a 6-DOF Puma 560 manipulator using the Robotics Toolbox, with satisfactory results. In this simulation, external disturbance and noise are also addressed. The simulation results have also shown that the rate of convergence and the robustness of the designed controller guarantee practical stability. | 2009 | Conference | Autonomous Robotics | |
Autonomous Staircase Detection and Stair Climbing for a Tracked Mobile Robot using Fuzzy Controller E. Mihankhah, A. Kalantari, E. Aboosaeedan, H.D. Taghirad, and S.Ali.A. Moosavian 2008 IEEE International Conference on Robotics and Biomimetics | Abstract: Theoretical analysis and implementation of autonomous staircase detection and stair climbing algorithms on a novel rescue mobile robot are presented in this paper. The main goals are to find the staircase during navigation and to implement a fast, safe and smooth autonomous stair climbing algorithm. Silver is used here as the experimental platform. This tracked mobile robot is a tele-operated rescue mobile robot with great capability in climbing obstacles in destructed areas. Its performance has been demonstrated in the rescue robot league of the international RoboCup competitions. A fuzzy controller is applied to direct the robot during stair climbing. The controller inputs are generated by processing the range data from two laser range finders, one scanning the environment horizontally and the other vertically. The experimental results of the stair detection algorithm and the stair climbing controller are demonstrated at the end. | 2009 | Conference | Autonomous Robotics | |
SLAM Using Single Laser Range Finder AliAkbar Aghamohammadi, Amir H. Tamjidi, Hamid D. Taghirad IFAC Proceedings Volumes | Abstract: The method presented in this paper aims to develop an accurate motion model and SLAM algorithm based only on laser range finder (LRF) data. The proposed method tries to overcome some practical problems in traditional motion models and SLAM approaches, such as robot slippage and inaccuracy in parameters related to the robot's hardware. Novel insights specific to the process and measurement models, used within the IEKF framework, give rise to a real-time method with drift-free performance in restricted environments. Furthermore, the uncertainty measures calculated by the method are valuable information for fusion purposes, and the accurate motion model derived in this method can be used as a robust and accurate localization procedure in different structured environments. These claims are validated through experimental implementations; experiments verify the method's efficiency both in pure localization and in SLAM scenarios in restricted environments involving loop closures. | 2008 | Conference | Autonomous Robotics | |
A Solution for SLAM through Augmenting Vision and Range Information Ali A. Aghamohammadi, Amir H. Tamjidi, Hamid D. Taghirad 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems | Abstract: This paper proposes a method for augmenting the information of a monocular camera and a range finder. This method is a valuable step towards solving the SLAM problem in unstructured environments, free from the problems of using encoders' data. The proposed algorithm lets the robot benefit from a feature-based map for filtering purposes, while it exploits an accurate motion model based on point-wise raw range scan matching, rather than unreliable feature-based range scan matching, in unstructured environments. Moreover, a robust loop closure detection procedure is another consequence of this method. Experiments with a low-cost IEEE 1394 webcam and a range finder illustrate the effectiveness of the proposed method in drift-free SLAM with loop-closing motions in unstructured environments. | 2008 | Conference | Autonomous Robotics | |
Feature-Based Laser Scan Matching For Accurate and High Speed Mobile Robot Localization A. A. Aghamohammadi, H. D. Taghirad, A. H. Tamjidi, and E. Mihankhah Proceedings of the 3rd European Conference on Mobile Robots | Abstract: This paper introduces an accurate and high-speed pose tracking method for mobile robots based on matching features extracted from consecutive scans. The feature extraction algorithm proposed in this paper uses global information from the whole scan data together with local information around feature points. The uncertainty of each feature is represented using covariance matrices determined from observation and quantization errors. Taking each feature's uncertainty into account in the pose-shift calculation leads to an accurate estimation of the robot pose. Experiments with the low-range URG_X002 laser range scanner illustrate the effectiveness of the proposed method for mobile robot localization. | 2007 | Conference | Autonomous Robotics | |
Mobile Robot Navigation in an Unknown Environment A. Jazayeri, A. Fatehi, H. Taghirad 2006 IEEE Conference on Control Applications (CCA) | Abstract: This article focuses on the mobile robot's autonomous navigation problem in an unknown environment. Considering a robot equipped with an omnidirectional range sensor, a map of the discovered area is constructed in an iterative manner. Given a target position located in the unexplored territory, initially a motion planning scheme is employed that relies on exploration principles for the area near the target. This is achieved by assigning an exploration cost function that indirectly attracts the robot close to the target. Upon discovery of the target, the robot moves to it following the shortest-distance path. Simulation studies that prove the efficiency of the overall method are presented. | 2006 | Conference | Autonomous Robotics | |
New Wavelet Based Algorithm for Real Time Visual Tracking Akram Bayat, Hamid R. Taghirad, Seyyed Sadegh Mottaghian | Abstract: In this paper, we propose a new wavelet-domain technique for real-time object detection and tracking in a sequence of images. The object to be tracked is identified in the first frame. The proposed algorithm consists of three phases: first, wavelet-based edge detection is used to form a ground boundary map; then, object dimension estimation is applied to determine probabilistic object areas; finally, target detection based on finding the best match using feature vectors is performed. We define the dispersion of the wavelet detail coefficients in the object area as the feature to be matched. We also propose a new color model for the images to be used in the processing algorithm. Our experimental results show that the algorithm is robust and fast. It is also insensitive to changing illumination conditions and target size. | 2006 | Conference | Autonomous Robotics | |