The visual robotics group began its research by equipping one of our industrial robotic manipulators with a camera, in order to automatically track moving objects within the robot's workspace. This goal was fully achieved on our 5-DoF Mitsubishi industrial robot in an eye-in-hand configuration. The first tracking algorithm applied an extended Kalman filter (EKF) to moving objects with marked features. The group then focused on tracking featureless objects, in particular through kernel-based visual servoing. To compute the visual kernel of the images efficiently, the Fourier transform and the log-polar transform are used. We then introduced a sliding mode controller design for kernel-based visual servoing.
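As a minimal illustration of the kernel-measurement idea, the sketch below forms a weighted sum over the Fourier magnitude of an image. The function name, weighting scheme, and use of the full FFT magnitude are assumptions for illustration, not the group's exact formulation; the Fourier magnitude is used here because it is invariant to (circular) image translation.

```python
import numpy as np

def fourier_kernel_measurement(image, weights):
    """Kernel measurement sketch: weighted sum over the FFT magnitude.

    image   : 2-D grayscale array
    weights : 2-D kernel weighting array, same shape as the spectrum
    Returns a scalar measurement that is invariant to circular
    translation of the image, since |FFT| discards phase.
    """
    spectrum = np.abs(np.fft.fft2(image))
    return float(np.sum(weights * spectrum))
```

Because the measurement collapses the whole image into a few scalars, no feature extraction or matching is needed, which is the point of the featureless (kernel-based) approach.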
The main goal is to track a target object without any guiding features such as lines or points. In the kernel-based approach, a weighted sum of the image signal, or of its Fourier transform, serves as the measurement for tracking; this is known as the kernel measurement. The tracking error is the difference between the current and desired kernel measurements, and it is fed as the input to an integral sliding mode controller. By coupling the kernel measurement with sliding mode control, the resulting system outperforms the conventional kernel-based visual servoing scheme. The proposed method is implemented on the Mitsubishi industrial robot and compared with the conventional kernel-based visual servoing approach under different initial conditions. Furthermore, the stability of the proposed algorithm is analyzed via Lyapunov theory. Uncertainties such as image noise, image blur, and camera calibration errors can affect the stability of the algorithm and can cause the target object to partially or totally leave the image, resulting in task failure. To reduce the effect of bounded uncertainties, the controller parameters are designed automatically based on the sliding condition.
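A hedged sketch of the integral sliding mode step described above: the kernel-measurement error and its running integral define a sliding surface, and a saturated switching term drives the state toward it. The function and gain names (`lam`, `eta`, `boundary`) are hypothetical, and the boundary-layer saturation is a standard chattering-reduction device, not necessarily the paper's exact law.

```python
import numpy as np

def ismc_step(error, integral, gains, dt, boundary=0.05):
    """One step of an integral sliding mode controller (sketch).

    error    : current kernel-measurement error (array)
    integral : running integral of the error
    gains    : (lam, eta) - surface slope and switching gain
               (hypothetical names, chosen for illustration)
    dt       : control period in seconds
    boundary : boundary-layer width used to soften sign(s)
    Returns the control input and the updated error integral.
    """
    lam, eta = gains
    integral = integral + error * dt      # accumulate the error
    s = error + lam * integral            # integral sliding surface
    # saturated sign(s): linear inside the boundary layer,
    # +/-1 outside, which limits chattering near s = 0
    u = -eta * np.clip(s / boundary, -1.0, 1.0)
    return u, integral
```

Driving a simple first-order plant (x' = u) with this step shows the reaching phase followed by exponential decay of the error on the surface; on the surface s = 0 implies e' = -lam * e, which is where the Lyapunov-based gain design enters.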
Another avenue examined in this research theme is the use of monocular and stereo vision for environment perception on a mobile robotic platform. This research is conducted in collaboration with our Autonomous Robotics group and is currently being pursued.
Javad Ramezanzadeh, Fatemeh Bakhshandeh, Mahsa Parsapour, Parisa Masnadi, Aida Farahani, Seyed Farokh Atashzar, Mahya Shahbazi, Sahar Sedaghati, Homa Ammari, Mehrnaz Salehian, Soheil Rayatdoost, Mohammad Reza Sadeghi, Farzaneh Sedaghat.