Review Article
Austin J Robot & Autom. 2014;1(1): 3.
Insight into Autonomous Navigation of Micro Aerial Vehicle (MAV)
Subramani T1*, Saranya M1 and Neelaveni R2
1Department of Robotics and Automation Engineering, PSG College of Technology, India
2Department of Electrical and Electronics Engineering, PSG College of Technology, India
*Corresponding author: Subramani T, Department of Robotics and Automation, PSG College of Technology, Peelamedu, Coimbatore – 641004, India
Received: October 30, 2014; Accepted: November 28, 2014; Published: December 02, 2014
Abstract
Researchers focus mainly on autonomous navigation of Micro Aerial Vehicles (MAVs) in GPS-denied environments. This is because MAVs offer major advantages when used for aerial surveillance, reconnaissance, and inspection in complex and dangerous environments. Indeed, they are better suited for dull, dirty, or dangerous missions than manned aircraft. The MAV is a size-based class of Unmanned Aerial Vehicle (UAV). Further, many technological, economic, and political factors have encouraged the development and operation of UAVs and, with them, MAVs. This review provides an overview of the different methodologies available to confer autonomy on MAVs.
Keywords: MAV; Autonomous Navigation; GPS-denied; Optical Flow Algorithm
Introduction
Autonomous vehicles have been one of the key research fields of the last few decades. Based on the working environment, a rough classification of autonomous vehicles would include Unmanned Aerial Vehicles (UAVs), Unmanned Ground Vehicles (UGVs), Autonomous Underwater Vehicles (AUVs), and Autonomous Surface Vehicles (ASVs). An unmanned aircraft equipped with autonomous control devices is called a UAV, and a Micro Aerial Vehicle (MAV) is a class of UAV defined by its size. In recent years, MAV development has grown rapidly, driven by commercial, research, government, and military interests.
The importance of MAVs has grown because they are very efficient. MAVs are currently used for home delivery of products, military surveillance, inspection of stone quarries, and applications where humans cannot enter. These applications require MAVs with good navigation skills, which help complete the work in a timely manner.
This review aims to give readers an insight into the autonomous navigation of MAVs. It discusses the various autonomous control devices used by researchers to develop autonomous MAVs. Conferring autonomy upon MAVs, so that they can detect and navigate through voids and locate and land on a surface, is an essential and challenging task. The degree of autonomy is measured by the ability to navigate through cluttered environments, maneuver close to obstructions, avoid obstacles, and take off and land. It also includes navigation inside caves and dense forests, where GPS does not work. This makes the selection of an algorithm very challenging, because data from multiple sensors must be fused to arrive at a solution. Payload restrictions have often constrained autonomy capabilities, and the choice of platform, whether quadrotor or hexarotor, imposes further constraints. Camera sensors are widely used to achieve autonomous MAV motion; hence, vision-based autonomous navigation of MAVs is the central topic of this review.
Autonomous MAV
Some of the different methodologies that confer autonomy on MAVs are as follows.
- Monocular camera with classical PID controller [1]
- Monocular camera with SLAM, Extended Kalman Filter and PID controller [2]
- Hybrid Stereo Vision system [3]
- Open Source navigation system [4]
- Optical flow techniques [5,6]
Monocular camera with classical PID controller
This methodology was proposed by Yingcai Bi and Haibin Duan of Beijing University, China [1]. They propose a hybrid system consisting of a low-cost quadrotor and a small pushcart. Autonomous navigation of the quadrotor is achieved with a classical Proportional-Integral-Derivative (PID) controller, which handles autonomous visual tracking of, and landing on, the moving helipad carrier. The vision-based tracking and landing approach utilizes RGB color information rather than grayscale information of the helipad, so the autonomous MAV performs quickly and robustly under different lighting conditions. The model uses an affordable off-the-shelf quadrotor, and the complete task is performed using relative pixel-position information in the image plane, without communication between the quadrotor and the carrier. The quadrotor's position relative to the helipad is estimated from the video stream at up to 30 Hz, which enables the quadrotor to fly autonomously while performing real-time visual tracking and landing on the carrier. Figure 1 depicts the steps involved in tracking and landing of the MAV using the PID controller, and Figure 2 gives the block representation.
Figure 1: Flowchart for tracking and landing of MAV using PID controller [1].
Figure 2: Block diagram for tracking and landing of MAV using PID controller [1].
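For illustration, the sketch below shows how such a color-based tracking loop might be structured: the helipad is segmented by color, its pixel offset from the image centre becomes the error signal, and two PID controllers map that error to lateral velocity commands. The HSV threshold, the gains, and the helper names are illustrative assumptions, not values from [1].

```python
import cv2

class PID:
    """Classical PID controller acting on a scalar error."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def helipad_offset(frame_bgr):
    """Segment a red helipad by color and return its pixel offset
    from the image centre, or None if it is not visible."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))  # assumed red hue band
    m = cv2.moments(mask)
    if m["m00"] < 1e-3:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = mask.shape
    return cx - w / 2.0, cy - h / 2.0

# 30 Hz loop: pixel error in the image plane -> lateral velocity commands.
pid_x, pid_y = PID(0.004, 0.0001, 0.001), PID(0.004, 0.0001, 0.001)
dt = 1.0 / 30.0

def velocity_command(frame_bgr):
    offset = helipad_offset(frame_bgr)
    if offset is None:
        return 0.0, 0.0  # hover in place if the helipad is lost
    ex, ey = offset
    return pid_x.update(ex, dt), pid_y.update(ey, dt)
```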
Monocular camera with SLAM, Extended Kalman Filter and PID controller
Jakob Engel et al. present a complete solution for the visual navigation of a small-scale, low-cost quadrocopter in unknown environments [2]. Their approach relies on the information retrieved from a monocular camera for its movement, so it does not require external tracking aids such as GPS or visual markers. The computation is carried out on an external laptop that communicates with the quadrocopter over wireless LAN, which makes the computation setup more expensive than the previous methodology. The approach consists of three components: a monocular SLAM system, an Extended Kalman Filter (EKF) for data fusion, and a PID controller. The method offers the following advantages:
- a simple and effective method to compensate for large delays in the control loop, using an accurate model of the quadrocopter's flight dynamics, and
- a novel, closed-form method to estimate the scale of a monocular SLAM system from additional metric sensors.
The authors have extensively evaluated their system in terms of pose estimation accuracy, flight accuracy, and flight agility using an external motion capture system. They have also compared the convergence and accuracy of the scale estimation method, using an ultrasound altimeter and an air pressure sensor, against filtering-based approaches. The complete system is available as open source in ROS and is depicted in Figure 3.
Figure 3: Flowchart for tracking and landing of MAV using SLAM, EKF and PID controller [2].
The approach can be outlined with the help of the above figure. The navigation system consists of three major components:
- a monocular SLAM implementation for visual tracking,
- an EKF for data fusion and prediction, and
- PID control for pose stabilization and navigation.
Onboard monocular vision also makes landing possible on a site specified by a single reference image, combined with a visual SLAM algorithm, as explained by Shaowu Yang et al. [7].
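The closed-form scale estimator in [2] maximizes a joint likelihood over pairs of SLAM and metric-sensor measurements. As a hedged illustration of the underlying idea, the sketch below uses a simpler least-squares ratio between SLAM displacements and metric displacements; it is not the authors' exact estimator.

```python
import numpy as np

def estimate_scale(slam_disp, metric_disp):
    """Simplified least-squares scale estimate.

    slam_disp:   (N, 3) displacements reported by monocular SLAM (arbitrary scale)
    metric_disp: (N, 3) corresponding displacements from a metric sensor
                 (e.g. ultrasound altimeter or air pressure sensor)
    Returns the scale s minimizing sum ||s * x_i - y_i||^2.
    """
    x = np.asarray(slam_disp, dtype=float)
    y = np.asarray(metric_disp, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

# Toy check: SLAM distances are half the metric truth, so the
# recovered scale should be close to 2.0 despite sensor noise.
rng = np.random.default_rng(0)
truth = rng.normal(size=(50, 3))
print(estimate_scale(0.5 * truth, truth + 0.01 * rng.normal(size=(50, 3))))
```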
Hybrid Stereo Vision System
Damien Eynard et al. present a hybrid stereoscopic rig composed of a fisheye and a perspective camera for vision-based navigation [3]. In contrast to classical stereoscopic systems based on feature matching, the proposed method avoids matching between the hybrid views. A plane-sweeping approach is proposed for estimating altitude and detecting the ground plane. Rotation and translation are then estimated in a decoupled manner: the fisheye camera contributes the attitude estimate, while the perspective camera contributes the scale of the translation. With knowledge of the altitude, motion can be estimated robustly at metric scale. The methodology is therefore a robust, real-time, accurate, exclusively vision-based approach with an embedded C++ implementation. It does not use any non-visual sensors, although an Inertial Measurement Unit can be coupled if required. Figure 4 shows the block diagram of the hybrid stereo vision system.
Figure 4: Block Diagram of Hybrid Stereo Vision [3].
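The plane-sweeping step in [3] hypothesizes a set of altitudes, warps one view onto the other under each ground-plane hypothesis, and keeps the altitude that best aligns the images. The sketch below illustrates that idea for two ordinary perspective views with a known relative pose; the actual hybrid fisheye/perspective geometry of the paper is more involved, and all parameters here are assumptions.

```python
import cv2
import numpy as np

def plane_sweep_altitude(img_a, img_b, K, R, t, altitudes):
    """Return the altitude hypothesis whose induced ground-plane homography
    H = K (R - t n^T / d) K^-1 best aligns img_b onto img_a.
    Assumes downward-looking cameras and plane normal n = (0, 0, 1)."""
    n = np.array([[0.0], [0.0], [1.0]])
    t = np.asarray(t, dtype=float).reshape(3, 1)
    K_inv = np.linalg.inv(K)
    best_d, best_score = None, np.inf
    for d in altitudes:
        H = K @ (R - t @ n.T / d) @ K_inv
        warped = cv2.warpPerspective(img_b, H, (img_a.shape[1], img_a.shape[0]))
        # Photometric consistency: mean absolute difference after warping.
        score = np.mean(np.abs(img_a.astype(np.float32) - warped.astype(np.float32)))
        if score < best_score:
            best_d, best_score = d, score
    return best_d
```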
Optical Flow Techniques
Optical flow techniques are natural solutions to the navigation and obstacle avoidance problem, as motivated by insect and bird flight. Optical flow can be treated as the projection of the perceived 3D motion of objects, and it has wide applications in motion estimation, video compression, and image interpolation [5]. Biologists have discovered that honeybees rely on optical flow for grazing landings, travel distance estimation, obstacle avoidance, and flight speed regulation. The compound eyes of insects such as dragonflies play an important role in these navigation functions thanks to their capability for wide-field motion detection. Many types of optical flow algorithms have been developed by computer vision and biology researchers, including gradient-based approaches (e.g., the Lucas-Kanade and Horn-Schunck methods), feature-based methods (e.g., SIFT), and interpolation techniques [6]. These algorithms can generate optical flow with pixel-level accuracy when compared against sub-pixel-accurate ground truth on benchmark data sets collected by computer vision researchers.
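As a concrete, hedged example of a gradient-based method, the snippet below computes sparse pyramidal Lucas-Kanade flow on Shi-Tomasi corners using OpenCV; the corner count and window size are arbitrary assumptions.

```python
import cv2
import numpy as np

def sparse_flow(prev_gray, curr_gray, max_corners=200):
    """Pyramidal Lucas-Kanade optical flow on Shi-Tomasi corners.
    Returns matched (old, new) point arrays of shape (M, 2)."""
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=8)
    if p0 is None:
        return np.empty((0, 2)), np.empty((0, 2))
    p1, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None,
                                                winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)
```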
A very popular method is to steer the MAV using optical flow information and to use additional constraints, such as scene structure, to navigate towards landing or transit surfaces. The flow field calculated from the camera images can be used for ego-motion stabilization without additional pose sensors and, at the same time, for basic navigation tasks like obstacle avoidance or landing without an explicit 3D reconstruction [6]. To further simplify the use of flow for the landing task, the rotational component of optical flow arising from changes in MAV pitch is assumed to be smaller than the translational component and is ignored. The focus of expansion (FOE) in the flow field indicates the direction of travel and can be used to determine the flow divergence. If the FOE is located inside a rapidly diverging region, a collision is imminent. Similarly, a rapidly expanding region to the right of the FOE corresponds to an obstacle approaching on the right side of the MAV, so the vehicle should turn away from the region of high optical flow to avoid a collision. In the same vein, the MAV can estimate its height from the optical flow in the downward direction: faster optical flow indicates a lower altitude. Thus, by equipping an MAV with cameras in front of and below the aircraft, these patterns can be embedded in a sensor suite for autonomous landing and navigation through pre-chosen voids.
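A hedged sketch of the turn-away behavior described above: dense Farneback flow is computed, the mean flow magnitudes of the two image halves are compared, and the MAV turns away from the side with stronger flow. The threshold and the Farneback parameters are illustrative assumptions, not values from [5] or [6].

```python
import cv2
import numpy as np

def steer_from_flow(prev_gray, curr_gray, turn_threshold=0.3):
    """Left/right flow-balance heuristic for obstacle avoidance.
    Returns a steering signal: positive suggests turning right
    (stronger flow, i.e. a nearer obstacle, on the left), negative
    suggests turning left; zero means fly straight."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    mag = np.linalg.norm(flow, axis=2)
    half = mag.shape[1] // 2
    left, right = mag[:, :half].mean(), mag[:, half:].mean()
    balance = (left - right) / (left + right + 1e-6)
    return balance if abs(balance) > turn_threshold else 0.0
```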
Open Source Navigation System for MAV
Ivan Dryanovski et al. implemented an open-source indoor navigation system for quadrotor micro aerial vehicles in the ROS framework [4]. The system requires a minimal set of sensors, including a planar laser range-finder and an inertial measurement unit. The proposed system addresses autonomous control, state estimation, path planning, and tele-operation, and it provides interfaces that allow seamless integration with existing ROS navigation tools for 2D SLAM and 3D mapping. All components run in real time onboard the MAV, with state estimation and control operating at 1 kHz. The system focuses on modularity and abstraction, resulting in a product that is flexible and hardware-independent. The open-source module is shown in Figure 5.
Figure 5: Open-source system for MAV navigation [4].
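As a hedged illustration of how a ROS-based interface like the one in [4] is typically commanded, the minimal rospy node below streams velocity setpoints to the vehicle; the topic name, rate, and velocity are assumptions for illustration, not the actual interface of the cited system.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

def main():
    # Minimal velocity-setpoint publisher (topic name assumed).
    rospy.init_node("mav_velocity_sketch")
    pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(50)       # 50 Hz setpoint stream (assumed)
    cmd = Twist()
    cmd.linear.x = 0.2          # gentle forward velocity in m/s (assumed)
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```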
Conclusion
For real-time implementation, image processing must be fast enough to support navigation without delay. A low-resolution camera is therefore a sensible choice, since high image resolution is not critical for navigation. Another problem is poor luminance caused by bad lighting conditions; this can be mitigated by processing each pixel individually before storing the image in memory. Each algorithm has its own pros and cons. The design of any MAV is expensive, as it needs expensive test rigs, but these costs can be offset by selecting a suitable navigation algorithm. Different methodologies are available for the autonomous navigation of MAVs; for outdoor navigation, however, optical flow methodologies are expected to perform better. Implementing the chosen algorithm on a high-speed controller can yield good results.
References
1. Yingcai Bi, Haibin Duan. Implementation of autonomous visual tracking and landing for a low-cost quadrotor. Optik. 2013; 124: 3296–3300.
2. Jakob Engel, Jürgen Sturm, Daniel Cremers. Scale-aware navigation of a low-cost quadrocopter with a monocular camera. Robotics and Autonomous Systems. 2014; 62: 1646–1656.
3. Eynard D, Vasseur P, Demonceaux C, Frémont V. Real time UAV altitude, attitude and motion estimation from hybrid stereovision. Auton Robot. 2012; 33: 157–172.
4. Dryanovski I, Valenti RG, Xiao J. An open-source navigation system for micro aerial vehicles. Auton Robot. 2013; 34: 177–188.
5. Chao H, Gu Y, Napolitano M. A Survey of Optical Flow Techniques for Robotics Navigation Applications. J Intell Robot Syst. 2014; 73: 361–372.
6. Conroy J, Gremillion G, Ranganathan B, Humbert JS. Implementation of wide-field integration of optic flow for autonomous quadrotor navigation. Auton Robot. 2009; 27: 189–198.
7. Shaowu Yang, Scherer SA, Schauwecker K, Zell A. Onboard monocular vision for landing of an MAV on a landing site specified by a single reference image. International Conference on Unmanned Aircraft Systems (ICUAS). 2013; 318–325.