Shuhuan Wen 1 and Kamal Mohammed Othman 2 and Ahmad B. Rad 2 and Yixuan Zhang 2 and Yongsheng Zhao 3
Academic Editor: Shen Yin
1 Key Lab of Industrial Computer Control Engineering of Hebei Province, Yanshan University, Qinhuangdao 066004, China
2 School of Engineering Sciences, Simon Fraser University, No. 250-13450, 102 Avenue, Surrey, BC, V3T 0A3, Canada
3 Parallel Robot and Mechatronic System Laboratory of Hebei Province and Key Laboratory of Advanced Forging & Stamping Technology and Science of Ministry of National Education, Yanshan University, Qinhuangdao 066004, China
Received 4 May 2014; Revised 24 June 2014; Accepted 25 June 2014; Published 10 July 2014
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Robots have been used in many areas, such as industrial processes [1-6] and autonomous navigation. Autonomous navigation in an unknown environment is regarded as a key attribute of a service robot and has received considerable attention in the last two decades. The focus of this paper is indoor environments such as homes, offices, and hospitals. Service robots must be able to roam around such surroundings in order to assist humans or to perform various tasks; for example, they need to detect and avoid obstacles that they encounter in the environment.
Precise position estimation is a necessary prerequisite for reliable navigation. Simultaneous localization and mapping (SLAM) makes a robot truly autonomous without the need for any a priori knowledge of its location: a SLAM algorithm [7, 8] builds a map of an unknown environment while simultaneously determining the robot's location within this map. Some artificial intelligence methods, such as reinforcement learning [9], have been applied to mobile robot SLAM; we plan to study them for the NAO humanoid robot in the future. SLAM has been implemented in a number of different domains, from indoor to outdoor, underwater, and airborne systems. However, the effectiveness of the various SLAM methods greatly depends on the exteroceptive sensors used (sonar, laser, camera, odometer, etc.), and many methods are based on vision or laser [10, 11].

Cameras and lasers have complementary features. Cameras provide 3D sensing of the environment and support obstacle avoidance, but their localization accuracy is generally inferior to that of a laser scanner. Compared to other sensors, a laser provides more accurate range and bearing measurements; however, it cannot provide an image of the environment, so the robot cannot tell from the laser alone whether an object is a landmark or an obstacle. Therefore, combining camera and laser can achieve reliable navigation for indoor SLAM problems. Several studies in the literature use the laser and the stereo camera together for indoor navigation tasks. Labayrade et al. [12] used the two sensors together to eliminate false positives in object detection. Iocchi and Pellegrini [13] used a 2D laser scanner and a stereo camera to build a 3D environment map. Lin et al. [14] combined SLAM using a 2D laser scanner with dense 3D depth information from a stereo camera, effectively projecting obstacles at different heights into the 2D map.

The objective of this work is to achieve accurate positioning based on SLAM while avoiding obstacles in the environment. The proposed approach combines SLAM using a laser with stereo cameras: the cameras recognize landmarks and provide obstacle information, while the laser provides highly accurate range and bearing measurements. The K-means method is used to extract the data belonging to different objects. With the proposed method, the robot has rich environment information for better navigation and obstacle avoidance. The approach is implemented on an NAO humanoid robot.
The rest of the paper is organized as follows. Section 2 introduces the background of laser-based SLAM with a closed-loop controller, the NAO platform that we used, and the camera-based method for choosing landmarks. Section 3 briefly describes the task scenario, the K-means clustering method for extracting object data from the laser, and the obstacle avoidance method. The experimental results are presented in Section 4. Finally, Section 5 gives the conclusion and discusses future work.
2. Laser and Camera Based SLAM with Closed-Loop Controller
2.1. EKF-SLAM
There are two key computational solutions to the SLAM problem: the extended Kalman filter (EKF-SLAM) and Rao-Blackwellized particle filters (FastSLAM). EKF-SLAM was presented by Smith and Cheeseman in 1986 and has been used extensively since [15]; it is well suited to navigation problems. The main steps of SLAM are robot motion prediction, new landmark initialization, and known landmark correction.
The observation model is [7]
$$ y = h(x) + v, $$
where $x$ is the state vector comprising the robot pose and the landmark states, $h(\cdot)$ is the observation function, and $v$ is the measurement noise.
The predicted covariance matrix is
$$ P = F P F^{T} + Q, $$
where $F$ is the Jacobian of the motion model and $Q$ is the covariance of the Gaussian process noise.
The EKF correction step computes the innovation and its covariance:
$$ z = y - h(x'), \qquad Z = H P H^{T} + R, $$
where $z$ is the innovation (the difference between the laser measurement $y$ and the predicted observation), $x'$ is the predicted state of the robot, $H$ is the Jacobian of the observation function, and $R$ is the covariance matrix of the measurement noise.
The Kalman gain $K$ is
$$ K = P H^{T} Z^{-1}. $$
The updated position of the robot is
$$ x = x' + K z, $$
and the corrected covariance matrix is
$$ P = P - K Z K^{T}. $$
The measurement data are given by the laser and include the distance and angle between the robot and the landmarks as well as the positions of the landmarks. The method for extracting these data from the laser is introduced in Section 3.
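To make the correction step concrete, the following is a minimal NumPy sketch of the update above. It is a sketch under assumptions, not the authors' exact implementation: the state layout, the observation function h, and its Jacobian H are supplied by the caller.

```python
import numpy as np

def ekf_correct(x, P, y, h, H, R):
    """One EKF-SLAM correction step (symbols as in the text).

    x : (n,) predicted state (robot pose and landmark positions)
    P : (n, n) predicted covariance
    y : (m,) laser measurement (range, bearing)
    h : callable mapping the state to the predicted measurement
    H : (m, n) Jacobian of h evaluated at x
    R : (m, m) measurement noise covariance
    """
    z = y - h(x)                          # innovation
    Z = H.dot(P).dot(H.T) + R             # innovation covariance
    K = P.dot(H.T).dot(np.linalg.inv(Z))  # Kalman gain
    x_new = x + K.dot(z)                  # corrected state
    P_new = P - K.dot(Z).dot(K.T)         # corrected covariance
    return x_new, P_new
```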
2.2. NAO Humanoid Robot
The platform selected for our research is the humanoid robot NAO, a commonly used platform for education and research produced by the French company Aldebaran Robotics. NAO is a reasonably priced, medium-sized humanoid robot with attractive features that make it an ideal platform for humanoid robotics research; it is designed around an open architectural philosophy and can be programmed in several different environments. It replaced the Sony AIBO dogs in the RoboCup Standard Platform League (SPL) in 2008. As an autonomous humanoid robot, NAO is capable of biped walking, sensing its immediate environment, communicating with humans, and reasoning with its on-board processor [16-21]. Figure 1 shows the robot used in our experiment. NAO is 0.57 meters tall and weighs about 4.5 kilograms. NAO with a laser head is an option provided by Aldebaran Robotics for users intending to study navigation problems such as SLAM.
Figure 1: Aldebaran-robotics NAO humanoid.
[figure omitted; refer to PDF]
NAO is very light compared to other robots of the same height; its BMI is about 13.5 kg/m². It has a total of 25 degrees of freedom (DOF), 11 for its lower body and 14 for its upper body. The robot can be described in terms of its hardware, mechanical architecture, and software. The hardware includes several sensors, which are critical for exploring the surrounding environment. The software is structured around NAOqi, with three components: the NAOqi OS, the NAOqi library, and the device control manager. In addition, dedicated NAO software tools, Choregraphe, Monitor, and Webots, are available.
A laser scanner is mounted on the NAO's head. Besides the camera, the laser is the other essential device for our research: the SLAM algorithm requires it to obtain landmark bearing and distance measurements for the full SLAM process. The laser used in our experiment is a Hokuyo URG-04LX; its main characteristics are given in Table 1.
Table 1: Main characteristics of the NAO URG-04LX laser.
Characteristic | Value |
Detection range | 0.02~4 m |
Scan angle | 240° |
Scan time | 100 msec/scan (10.0 Hz) |
Resolution | 1 mm |
Interface | USB 2.0, RS232 |
The main processing unit sits in the robot's head (v4.0 in our experiment): a GEODE 500 MHz board with 512 MB of flash memory and a Wi-Fi connection to the remote development computer [22]. NAO also has a user-friendly GUI, and it can easily be programmed in C, C++, Python, or Urbi; Python 2.7 is used in this paper.
2.3. Closed-Loop Controller
NAO deviates from a set target because there is no closed-loop control to compensate for sensor noise, which results in an inaccurate position. Figure 2 shows the real and estimated positions of the NAO when the control input U is held at a fixed value; the error between the real and estimated positions is large. Therefore, we designed a simple PI controller to reduce the error. The closed loop is based only on the error between the real position and the estimated position of NAO. Its outline is presented in Figure 3, where x_R is the real position, x_es is the EKF-estimated position, ε is a user-defined threshold, and k_p is the proportional gain.
Figure 2: The experiment motion results of robot without closed-loop controller.
[figure omitted; refer to PDF]
Figure 3: The closed-loop controller.
[figure omitted; refer to PDF]
The closed-loop controller steps are as follows:
(i) define an initial controller value U = constant (e.g., U = 0.1);
(ii) if the error e = x_R - x_es > ε, then U = k_p (e + (T / T_i) Σ e);
(iii) else U = constant (e.g., U = 0.1).
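As a concrete illustration, here is a minimal sketch of the discrete PI law above in Python. The gain k_p = 0.8 and integral time T_i = 5.5 come from Table 3; the sampling period T, the threshold ε, and the function boundaries are illustrative assumptions.

```python
def pi_control(errors, kp=0.8, T=1.0, Ti=5.5, u_default=0.1, eps=0.05):
    """Return the control value U for the current step.

    errors : list of position errors e = x_R - x_es, most recent last
    kp and Ti follow Table 3; T and eps are assumed values.
    """
    e = errors[-1]
    if e > eps:
        # discrete PI law from step (ii): U = kp * (e + (T / Ti) * sum(e))
        return kp * (e + (T / Ti) * sum(errors))
    return u_default  # step (iii): fall back to the constant command
```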
The real position and EKF-estimated position of the NAO with closed-loop controller are shown in Figure 4.
Figure 4: The experiment motion results of robot with closed-loop controller.
[figure omitted; refer to PDF]
The error between the estimated and reference robot positions can be calculated, and it was significant during the experiment. Accordingly, a closed-loop PI controller was implemented to minimize this error so that the robot's deviation from its path is reduced. In addition, the performance of the motion controller was verified by observation during the experiment, which showed that it succeeded in keeping the NAO robot's position error small.
2.4. Choosing Landmarks by Using Cameras
In order to distinguish the landmarks from the obstacles in the environment, we use the NAO embedded vision system, since the laser alone cannot decide whether an object is a landmark or an obstacle. First, the robot learns the objects chosen as landmarks so that it can recognize them; then it reads the laser data whenever it sees a landmark. The object recognition function is used for this purpose.
The object recognition steps are as follows:
(i) take a picture of the object to be recognized with the cameras;
(ii) draw the contour of the object;
(iii) save the description of the recognized object to the vision recognition database;
(iv) create a Python module that is called back upon picture recognition;
(v) call the laser when the robot sees the object.
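The recognition steps above can be wired to NAOqi's event system. The sketch below assumes the standard NAOqi Python API (ALProxy, ALModule, ALBroker, and the PictureDetected event raised by ALVisionRecognition when a database object is seen); read_laser is a hypothetical handler standing in for step (v).

```python
from naoqi import ALProxy, ALModule, ALBroker

ROBOT_IP, PORT = "nao.local", 9559  # assumed connection settings

def read_laser():
    pass  # hypothetical handler for step (v): grab a laser scan here

class LandmarkWatcher(ALModule):
    """Calls the laser when a learned object is recognized."""

    def __init__(self, name):
        ALModule.__init__(self, name)
        memory = ALProxy("ALMemory")
        # PictureDetected is raised when an object from the vision
        # recognition database (step (iii)) is seen by the camera.
        memory.subscribeToEvent("PictureDetected", name, "onPicture")

    def onPicture(self, key, value, message):
        if value:         # non-empty value -> an object was recognized
            read_laser()  # step (v): query the laser

# A broker is required so NAOqi can call back into this process;
# the global variable must carry the same name as the module.
broker = ALBroker("watcherBroker", "0.0.0.0", 0, ROBOT_IP, PORT)
watcher = LandmarkWatcher("watcher")
```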
The recognized objects are shown in Figure 5.
Figure 5: The process of recognizing the landmarks.
(a) The picture of the landmark
[figure omitted; refer to PDF]
(b) The contour of the landmark
[figure omitted; refer to PDF]
3. Task Scenario
3.1. Task Scenario
We test our navigation approach in an indoor environment. There are two landmarks in the environment (see Figure 6).
Figure 6: The scenario of the experiment.
[figure omitted; refer to PDF]
In the scenario, the NAO is supposed to walk autonomously from its current location to a destination, recognizing the landmarks and avoiding the obstacle as it walks. An outline of the task implementation is shown in Figure 7.
Figure 7: The overall flow chart of the navigation.
[figure omitted; refer to PDF]
To accomplish navigation in an indoor environment, several components are required:
(i) an object recognition module that uses the camera to judge whether an object is a landmark or an obstacle;
(ii) a navigation module for the unknown environment that invokes SLAM when a landmark is encountered;
(iii) a laser module that provides the robot's position while walking;
(iv) an avoidance module whose function is called when an obstacle is encountered.
3.2. The K-Means Clustering Method of Extraction Data from Laser
It is very important to extract the data of different objects from the laser. The data are provided as a two-dimensional array, 684 by 4 in our experiment. Each entry includes the length (distance between the laser and the object), the angle (computed from the entry index and the URG device configuration), and x and y (the object coordinates in Cartesian space).
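For reference, the scan can be read in Python roughly as follows. This is a hedged sketch: it assumes the NAOqi laser interface for the URG-04LX head (the ALLaser module and the Device/Laser/Value memory key), whose rows match the length/angle/x/y layout just described, with the ranges of Table 3.

```python
from naoqi import ALProxy

ROBOT_IP, PORT = "nao.local", 9559   # assumed connection settings

laser = ALProxy("ALLaser", ROBOT_IP, PORT)
laser.setDetectingLength(20, 1500)   # 0.02 m to 1.5 m, in millimeters
laser.setOpeningAngle(-1.57, 1.57)   # -90 deg to +90 deg, in radians
laser.laserON()

memory = ALProxy("ALMemory", ROBOT_IP, PORT)
scan = memory.getData("Device/Laser/Value")  # rows: [length, angle, x, y]
points = [(row[2], row[3]) for row in scan if row[0] > 0]  # valid (x, y) hits
```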
The k-means algorithm is an effective method for extracting object clusters from the laser data. The term "k-means" was first used by MacQueen in 1967 [23]. The algorithm takes as input the number of clusters and a set of observation vectors to cluster; it returns a set of centroids, one for each cluster. An observation vector is classified with the cluster number, or centroid index, of the centroid closest to it. The steps are as follows.
Step 1.
Assign each observation to the cluster whose mean is closest to it:
$$ S_i^{(t)} = \{ x_p : \| x_p - m_i^{(t)} \|^2 \le \| x_p - m_j^{(t)} \|^2 \ \forall j, \ 1 \le j \le k \}, $$
where each observation $x_p$ from the set $\{x_1, \ldots, x_n\}$ is assigned to exactly one cluster $S_i^{(t)}$, and $m_1^{(t)}, \ldots, m_k^{(t)}$ is the current set of $k$ means, initialized arbitrarily at $t = 0$.
Step 2.
Calculate the new means as the centroids of the observations in the new clusters:
$$ m_i^{(t+1)} = \frac{1}{|S_i^{(t)}|} \sum_{x_j \in S_i^{(t)}} x_j. $$
The algorithm has converged when the assignments no longer change. In our experiment there are two clusters, one for each landmark.
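The description above closely mirrors the SciPy k-means interface, so a plausible sketch is to cluster the Cartesian (x, y) points of the scan with scipy.cluster.vq, using k = 2 for the two landmarks; treating the scan as already filtered to valid returns is an assumption.

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def cluster_scan(points, k=2):
    """Split laser points into k object clusters.

    points : (n, 2) array-like of (x, y) coordinates from the scan
    Returns the k centroids and a per-point cluster label array.
    """
    obs = np.asarray(points, dtype=float)
    centroids, _ = kmeans(obs, k)   # iterate Steps 1-2 until convergence
    labels, _ = vq(obs, centroids)  # final Step 1: nearest-centroid labels
    return centroids, labels
```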
3.3. Avoidance Obstacles
In order to prevent the robot from knocking over other objects, we use obstacle avoidance in our experiment. The avoidance function is navigationProxy.moveTo(·). The key to addressing this issue is calling the ALNavigation module from NAOqi, as shown in Table 2. This module lets the robot use its sonars while in motion: the robot stops moving if an obstacle enters the sonar security area.
Table 2: Main python code for using obstacle avoidance function.
Function | Python code |
Calling ALNavigation module | navigationProxy = ALProxy("ALNavigation", robotIP, PORT) |
Moving NAO & use its sonars | navigationProxy.moveTo(x,y,theta) |
The obstacle avoidance steps are as follows:
(i) call the navigationProxy.moveTo(·) function;
(ii) if an obstacle exists, the robot stops and turns by a certain angle (the security distance between the robot and the obstacle is assumed to be 0.3 m);
(iii) else the robot walks forward.
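A minimal sketch of this loop is given below, reusing the navigationProxy.moveTo call from Table 2. The retry count and turn angle are illustrative, and the assumption that moveTo returns whether the target was reached follows recent NAOqi releases; the 0.3 m security distance is taken to be configured on the robot.

```python
import math
from naoqi import ALProxy

ROBOT_IP, PORT = "nao.local", 9559  # assumed connection settings
nav = ALProxy("ALNavigation", ROBOT_IP, PORT)

def safe_move(x, y, theta=0.0, turn=math.pi / 4, retries=3):
    """Walk to (x, y, theta); if blocked, turn away and retry.

    moveTo drives with sonar protection and stops inside the
    security area (0.3 m here); its boolean return value is
    assumed to indicate whether the target was reached.
    """
    for _ in range(retries):
        if nav.moveTo(x, y, theta):  # step (i): attempt the move
            return True              # step (iii): path was clear
        nav.moveTo(0.0, 0.0, turn)   # step (ii): obstacle -> turn in place
    return False
```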
4. Simulation and Experiment Results
The proposed method has been evaluated in the Autonomous and Intelligent Systems Laboratory (AISL) with two landmarks. To test the method, we first ran a simulation on the computer; the result, shown in Figure 8, behaves as expected. We then carried out the experiment in the lab, with the parameters listed in Table 3.
Table 3: The value of the experiment parameters.
Parameter | Value |
laser.setDetectingLength | 0.02~1.5 m |
laser.setOpeningAngle | -90°~90° |
k p | 0.8 |
T i | 5.5 |
U | 0.1 |
Figure 8: The simulation motion result of Robot SLAM with closed-loop controller.
[figure omitted; refer to PDF]
Figure 9 shows the experimental result of SLAM without the closed-loop controller. The two triangles mark the initial position of the robot, and the two stars are the landmarks. The circles around the red points represent the covariance of the robot's position estimate; the circles become smaller when the robot is near a landmark. The control input is 0.1, and the error between the real and estimated positions grows steadily because the system is open loop.
Figure 9: The experiment motion result of Robot SLAM without closed-loop controller.
[figure omitted; refer to PDF]
In Figure 10, the error between the real and estimated positions becomes smaller and smaller because the PI controller is added. An obstacle is then added to the scenario of Figure 10, and the result is shown in Figures 11 and 12. The yellow circle is the obstacle, and the red points are the estimated positions of the NAO robot. The robot's motion changes because an obstacle is present; the experiment shows that the robot can avoid the obstacle successfully.
Figure 10: The experiment motion result of Robot SLAM with closed-loop controller.
[figure omitted; refer to PDF]
Figure 11: The experiment motion result of Robot SLAM with closed-loop controller and avoidance obstacle.
[figure omitted; refer to PDF]
Figure 12: An enlarged view of Figure 11 for clarity.
[figure omitted; refer to PDF]
5. Conclusion and Future Work
The main contribution of this paper is a full real-time implementation of EKF-SLAM on the NAO humanoid robot that combines its camera with a laser. The EKF-SLAM algorithm was implemented in Python and studied both in simulation and experimentally. The simulation results validated the effect of EKF-SLAM landmark observations in reducing the uncertainty of the robot's motion. The camera is used to recognize the landmarks, and the laser then gives the distance and angle between the robot and the landmarks. The robot avoids obstacles using the navigationProxy.moveTo(·) function, and a closed-loop PI motion controller improves trajectory tracking. In the experiment, the NAO robot successfully accomplished EKF-SLAM in a realistic exploration task with two landmarks and a fixed obstacle, matching the simulation. Future work includes designing an avoidance algorithm that chooses the best obstacle-free path rather than relying on the existing avoidance function, improving the controller for more accurate positioning, and observing multiple landmarks at a time instead of one landmark at a time as in this paper.
Acknowledgments
The work was partly supported by the Natural Science Foundation of Hebei Province of China (Project no. F2014203095), the Young Teacher of Yanshan University (Project no. 13LGA007), the National Natural Science Foundation of China (Project no. 51275439), and the Major State Basic Research Development Program of China 973 Program (Project no. 2013CB733000).
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
[1] S. Yin, S. Ding, X. Xie, H. Luo, "A review on basic data-driven approaches for industrial process monitoring," IEEE Transactions on Industrial Electronics , vol. 61, no. 11, pp. 6418-6428, 2014.
[2] S. Yin, X. Li, H. Gao, O. Kaynak, "Data-based techniques focused on modern industry: an overview," IEEE Transactions on Industrial Electronics , 2014.
[3] S. Yin, H. Gao, O. Kaynak, "Data driven control and process monitoring for industrial applications--part II," IEEE Transactions on Industrial Electronics , 2014.
[4] S. Yin, X. Gao, H. R. Karimi, X. P. Zhu, "Study on support vector machine-based fault detection in Tennessee eastman process," Abstract and Applied Analysis , vol. 2014, 2014.
[5] S. Yin, X. P. Zhu, H. R. Karimi, "Quality evaluation based on multivariate statistical methods," Mathematical Problems in Engineering , vol. 2013, 2013.
[6] X. Xie, S. Yin, H. Gao, O. Kaynak, "Asymptotic stability and stabilisation of uncertain delta operator systems with time-varying delays," IET Control Theory and Applications , vol. 7, no. 8, pp. 1071-1078, 2013.
[7] H. Durrant-Whyte, T. Bailey, "Simultaneous localization and mapping (slam)--part I: the essential algorithms," IEEE Robotics and Automation Magazine , vol. 13, no. 2, pp. 99-108, 2006.
[8] T. Bailey, H. Durrant-Whyte, "Simultaneous localization and mapping (SLAM): part II," IEEE Robotics and Automation Magazine , vol. 13, no. 3, pp. 108-117, 2006.
[9] W. D. Smart, L. P. Kaelbling, "Effective reinforcement learning for mobile robots," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), pp. 3404-3410, IEEE Press, Washington, DC, USA, May 2002.
[10] A. J. Davison, I. D. Reid, N. D. Molton, O. Stasse, "MonoSLAM: real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence , vol. 29, no. 6, pp. 1052-1067, 2007.
[11] D. Fox, D. Hähnel, W. Burgard, S. Thrun, "An efficient fastslam algorithm for generating maps of large-scale cyclic environments from raw laser range measurements," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '03), pp. 206-211, IEEE Press, Las Vegas, Nev, USA, October 2003.
[12] R. Labayrade, C. Royere, D. Gruyer, D. Aubert, "Cooperative fusion for multi-obstacles detection with use of stereovision and laser scanner," Autonomous Robots , vol. 19, no. 2, pp. 117-140, 2005.
[13] L. Iocchi, S. Pellegrini, "Building 3D maps with semantic elements integrating 2D laser, stereo vision and INS on a mobile robot," in Proceedings of the 2nd International Society for Photogrammetry and Remote Sensing International Workshop 3D-ARCH, (ISPRS '07), pp. 1-8, IEEE Press, Zurich, Switzerland, 2007.
[14] K. Lin, C. Chang, A. Dopfer, "Mapping and localization in 3D environments using a 2D laser scanner and a stereo camera," Journal of Information Science and Engineering , vol. 28, no. 1, pp. 131-144, 2012.
[15] R. C. Smith, P. Cheeseman, "On the representation and estimation of spatial uncertainty," International Journal of Robotics Research , vol. 5, no. 4, pp. 56-68, 1986.
[16] D. Gouaillier, V. Hugel, P. Blazevic, C. Kilner, J. Monceaux, P. Lafourcade, B. Marnier, "Mechatronic design of NAO humanoid," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '09), pp. 769-774, IEEE Press, Kobe, Japan, May 2009.
[17] S. Shamsuddin, L. I. Ismail, H. Yussof, N. I. Zahari, S. Bahari, H. Hashim, A. Jaffar, "Humanoid robot NAO: review of control and motion exploration," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering (ICCSCE '11), pp. 511-516, IEEE Press, Penang, Malaysia, November 2011.
[18] D. Tlalolini, C. Chevallereau, Y. Aoustin, "Optimal reference motions with rotation of the feet for a biped," in Proceedings of the ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (IDETC/CIE '08), pp. 1027-1036, New York, NY, USA, August 2008.
[19] K. Nishiwaki, S. Kagami, Y. Kuniyoshi, M. Inaba, H. Inoue, "Toe joints that enhance bipedal and fullbody motion of humanoid robots," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '02), pp. 3105-3110, IEEE Press, Washington, DC, USA, May 2002.
[20] Y. Ogura, K. Shimomura, H. Kondo, A. Morishima, T. Okubo, S. Momoki, H. Lim, A. Takanishi, "Human-like walking with knee stretched, heel-contact and toe-off motion by a humanoid robot," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '06), pp. 3976-3981, October 2006.
[21] S. Lohmeier, T. Buschmann, H. Ulbrich, F. Pfeiffer, "Modular joint design for performance enhanced humanoid robot LOLA," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '06), pp. 88-93, IEEE Press, Orlando, Fla, USA, May 2006.
[22] O. Mohareri, A. B. Rad, "Autonomous humanoid robot navigation using augmented reality technique," in Proceedings of the IEEE International Conference on Mechatronics (ICM '11), pp. 463-468, IEEE Press, Istanbul, Turkey, April 2011.
[23] J. B. MacQueen, "Some methods for classification and analysis of multivariate observations," in Proceedings of the 5th Berkeley Symposium on Mathematical Statistics and Probability, pp. 281-297, University of California Press, Berkeley, Calif, USA, 1967.
Copyright © 2014 Shuhuan Wen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
We present a SLAM method with a closed-loop controller for navigation of the NAO humanoid robot from Aldebaran. The method is based on the integration of a laser and a vision system: the camera is used to recognize landmarks, whereas the laser provides the measurements for simultaneous localization and mapping (SLAM). The K-means clustering method is implemented to extract the data of different objects, and the robot avoids obstacles with an avoidance function. The closed-loop controller reduces the error between the real position and the estimated position. Finally, simulation and experiments show that the proposed method is efficient and reliable for navigation in indoor environments.