INTRODUCTION
Quadruped robots with manipulators combine the maneuverability of the quadruped platform with the operation ability of the manipulator, freeing the arm from the limitation of a fixed base and meeting the requirements of mobile operation in complex and dangerous environments, such as removing roadblocks and rescue applications [1–5]. For safety, the time operators spend in such environments should be minimised, which places great demands on robot autonomy. Robots should therefore possess the abilities of recognition, navigation and autonomous operation.
In the past few years, researchers have made considerable efforts towards motion planning for quadruped robots with manipulators. Boston Dynamics installed a 7-degree-of-freedom (DOF) hydraulic manipulator on its hydraulic quadruped robot BigDog and used the Covariance Matrix Adaptation (CMA) algorithm to plan trajectories, realising cinderblock throwing [6, 7]. ETH Zurich presented Articulated Locomotion and Manipulation for ANYmal (ALMA), which performs dynamic locomotion while executing manipulation tasks, and built a model predictive control framework for whole-body dynamic locomotion and manipulation [8]. Shandong University (SDU) developed a new electrically driven six-legged robot, SDUhex, which can move and grasp a target object while walking [9]. However, the above research does not endow the robot with environment perception and autonomy.
The target recognition tasks of quadruped robots mainly include leader-following and operation-object recognition. To achieve these functions, researchers mainly rely on deep learning methods such as You Only Look Once (YOLO) [10], the single shot detector (SSD) [11] and their improved variants [12]. Although these algorithms perform well in recognising the operation object, they are prone to false detections in leader-following. A common solution is to attach a logo to the leader's body, as done on SDU's Scalf robot [13]. In addition, Guo et al. made a preliminary exploration of leader-following based on human gait [14].
In the Defense Advanced Research Projects Agency (DARPA) Robotics Challenge, many teams contributed new technologies for autonomous robot navigation and operation. In 2015, Schwarz et al. equipped Momaro with immersive 3D visualisation and 6D tracking of the operator's head and manipulator motions [15]. Stentz et al. used various sensors embedded in CHIMP's head to generate full 3D representations of its environment and transmit these models to a human operator for situational awareness [16]. Hebert et al. designed a hardware and software system for RoboSimian to achieve navigation and operation with human assistance [17]. However, these studies still require teleoperation when performing grasping tasks. In 2021, the champion team CERBERUS combined an ANYmal C100 series quadruped robot with an unmanned aerial vehicle to achieve mapping, navigation and object-search capabilities for exploring complex underground environments [10, 18]. However, the ANYmal C100 series quadruped robot lacks manipulation ability.
To improve the autonomous operation ability of quadruped robots, Xu et al. realized grasping of a pre-assigned object in clutter based on deep reinforcement learning [19]. Morrison et al. built a lightweight generative grasping convolutional neural network to grasp dynamic targets [20]. Yan et al. proposed learning-based contact-state recognition for peg-in-hole assembly to open a lock without vision sensors [21]. There have also been attempts to combine the autonomous operation ability of the manipulator with the navigation of a quadruped robot, such as those by Boston Dynamics [22] and the Ningbo Industrial Internet Institute [23], but no methods have been published.
In summary, current research on quadruped robots for recognition, navigation and manipulation suffers from low autonomy. To address this problem, we design an autonomous system for a quadruped robot with a manipulator, driven by an onboard electro-hydraulic system (Section 2), which realizes autonomous target detection, leader-following, autonomous navigation and autonomous manipulation. The system is introduced from the following aspects: hardware system (Section 3), software system (Section 4) and system framework (Section 5).
THE OVERVIEW OF THE SYSTEM
The overview of the environment awareness system (EAS) for the quadruped robot with a manipulator is shown in Figure 1. The whole system extracts information about the surrounding environment from multiple sensors. The environment modelling module analyzes the elevation information, location information, reflection intensity and colour information. Tasks such as navigation, leader-following, target recognition and grasp detection are then realized through the task planning module. It is worth noting that, to extract more effective terrain information, the solid-state LiDAR Livox Mid-70 with dense point clouds is selected to compute the elevation information. The multi-line LiDAR Velodyne VLP16 is selected to obtain the reflection intensity of point clouds in the environment, because leader-following is realized by analysing this reflection intensity. Depth cameras are installed at multiple positions on the robot according to the viewing-angle requirements of different tasks. The task planning module generates a feasible trajectory for the robot, and the motion controller transforms the planned trajectory into movement speeds and joint angles to realize the movement and operation of the robot. In particular, the whole system is equipped with an image transmission remote controller, which transmits the robot's awareness information to a human operator for monitoring. The environment awareness controller communicates with the motion controller and the image transmission remote controller via the transmission control protocol/Internet protocol (TCP/IP), and the image transmission remote controller communicates with the robot motion controller via the COM5 serial port.
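To make the TCP/IP link between the environment awareness controller and the motion controller concrete, the minimal sketch below sends one velocity command as a JSON message over a short-lived TCP connection. This is a hedged illustration, not the robot's actual protocol: the port number, host address and message fields are assumptions.

```python
# Minimal sketch of the EAS-to-motion-controller link over TCP/IP.
# The port number and JSON message layout are illustrative assumptions,
# not the robot's actual protocol.
import json
import socket

def send_motion_command(host: str, forward_speed: float, yaw_rate: float,
                        port: int = 9000) -> None:
    """Send one velocity command from the awareness controller to the
    motion controller over a short-lived TCP connection."""
    message = json.dumps({"v": forward_speed, "yaw_rate": yaw_rate}).encode()
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(message)

if __name__ == "__main__":
    send_motion_command("192.168.1.10", forward_speed=0.3, yaw_rate=0.1)
```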
[IMAGE OMITTED. SEE PDF]
HARDWARE SYSTEM DESIGN
The hardware system of the developed robot is shown in Figure 2. The system uses the electro-hydraulic quadruped with a manipulator developed by SDU as the chassis [24]. The platform is 1.42 m long and 0.693 m wide, with a standing height of 0.7 m. The whole platform weighs 240 kg, of which the manipulator accounts for 35 kg; the maximum payload of the manipulator is 25 kg. As a quadruped robot with a manipulator, the design takes into account factors such as reliability, payload and manipulation stability.
[IMAGE OMITTED. SEE PDF]
Each robot leg has 3 DOF, comprising the lateral hip joint, the anterior hip joint and the knee joint, which ensure the payload capacity and kinematic performance of the robot. An inertial navigation system (GW-NAV600) is mounted on the robot trunk to provide accurate localization. The self-designed manipulator mounted on the body has 5 DOF: a) the rotatable hydraulic pedestal, b) the waist rotary joint, c) the shoulder pitching joint, d) the elbow pitching joint, and e) the wrist rotating joint. The gripper has two fingers with a maximum opening of 300 mm.
The hardware of the specially designed EAS is placed at the front of the quadruped robot, as shown in the orange box in Figure 2. The EAS mainly consists of two depth cameras (RealSense D435i), a solid-state LiDAR (Livox Mid-70), a Velodyne VLP16 LiDAR and a computer (Lenovo Y7000). Since a safety concept has been integrated into the robot platform design, we employ two depth cameras to expand the field of view (FOV). To facilitate processing, the coordinate systems of the two cameras are unified to the position of the upper camera, which is named the EAS eye. In particular, a depth camera for grasp detection is mounted 10 cm from the wrist joint of the manipulator, as shown in the blue rectangle in Figure 2. This placement ensures that the camera's FOV effectively covers the workspace of the manipulator while avoiding occlusion of the view by the manipulator itself.
To sum up, the coordinate systems of the whole system include the robot coordinate system {B}, the global coordinate system {W}, the manipulator base-link coordinate system {A}, the EAS eye 3D space coordinate system {C1}, the EAS eye 2D image coordinate system {P1}, the wrist camera 3D space coordinate system {C2}, the Livox Mid-70 LiDAR coordinate system {LP}, the Velodyne VLP16 LiDAR coordinate system {L} and the wrist camera pixel coordinate system {P2}. The parent-child relationships between these coordinate systems are shown in Table 1.
TABLE 1 The relationships of the coordinate systems
Coordinate system | Parent | Descendent |
{W} | − | {B} |
{B} | {W} | {LP}, {L}, {A}, {C1} |
{LP} | {B} | − |
{L} | {B} | − |
{C1} | {B} | {P1} |
{A} | {B} | {C2} |
{C2} | {A} | {P2} |
{P1} | {C1} | − |
{P2} | {C2} | − |
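As an illustration of how the frame tree in Table 1 can be used, the sketch below chains 4x4 homogeneous transforms to map a point from the wrist camera frame {C2} through {A} and {B} into the world frame {W}. The numeric transforms are placeholders; on the robot they come from calibration and the state estimator.

```python
# Hedged sketch of composing the frame tree in Table 1 with 4x4 homogeneous
# transforms. The numeric transforms below are placeholders; on the robot they
# would come from calibration and the state estimator.
import numpy as np

def make_transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Parent <- child transforms following Table 1: {W}<-{B}, {B}<-{A}, {A}<-{C2}.
T_W_B = make_transform(np.eye(3), np.array([2.0, 1.0, 0.7]))   # body in world
T_B_A = make_transform(np.eye(3), np.array([0.3, 0.0, 0.2]))   # arm base in body
T_A_C2 = make_transform(np.eye(3), np.array([0.0, 0.0, 0.1]))  # wrist camera in arm base

def camera_point_to_world(p_C2: np.ndarray) -> np.ndarray:
    """Map a 3D point seen by the wrist camera {C2} into the world frame {W}."""
    p_h = np.append(p_C2, 1.0)                  # homogeneous coordinates
    return (T_W_B @ T_B_A @ T_A_C2 @ p_h)[:3]

print(camera_point_to_world(np.array([0.1, 0.0, 0.5])))
```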
SOFTWARE SYSTEM
The hardware is driven by the proposed integrated software architecture, which is depicted in Figure 3. The sensors mounted on the robot provide the information required to accomplish the task. There are dependencies among the modules of the software system. The object recognition module and the leader-following module provide the target position for the path planning module. The localization module provides the robot location for the mapping module and the starting point for the path planning module. The grasp detection module receives the object position from the object recognition module to detect a proper grasp point. The data transmission frequency between modules is shown in Table 2. The path planning module, however, does not have a fixed frequency for receiving map information and planning the trajectory, because it adopts a plan-while-walking strategy and re-plans whenever the robot reaches an intermediate waypoint. When the surrounding environment changes dramatically, the waypoints are set more densely; otherwise, they are set sparsely. According to their functions, the modules are divided into two sub-systems: the navigation system and the recognition system. The navigation system mainly includes the localization, mapping and path planning modules. The recognition system mainly includes the leader-following, target recognition and grasp detection modules. During navigation, the planned trajectory is converted into the speed and heading angle of the robot motion, and these two values are used to control the robot.
[IMAGE OMITTED. SEE PDF]
TABLE 2 Data transmission frequency
Transmitting terminal | Receiving terminal | Frequency |
Leader-following | Path planning | 10 Hz |
Grasp detection | Manipulator motion controller | 10 Hz |
Object recognition | Path planning | 5 Hz |
Path planning | Robot motion controller | 10 Hz |
Object recognition | Grasp detection | 5 Hz |
Robot localization | Mapping | 10 Hz |
Mapping | Path planning | - |
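To make the plan-while-walking waypoint strategy concrete, the sketch below shows one possible spacing rule: the distance to the next replanning waypoint shrinks when the local elevation map changes a lot between updates. The thresholds and the change metric are illustrative assumptions, not the values used on the robot.

```python
# Illustrative sketch of the "plan while walking" waypoint-density idea:
# spacing between intermediate waypoints shrinks when the local map changes
# a lot between updates. The thresholds are assumptions for illustration.
import numpy as np

def waypoint_spacing(prev_map: np.ndarray, new_map: np.ndarray,
                     min_step: float = 0.5, max_step: float = 2.0) -> float:
    """Return the distance (m) to the next replanning waypoint, based on how
    much the local elevation map changed since the last update."""
    change = float(np.mean(np.abs(new_map - prev_map)))   # mean elevation change (m)
    # Dense waypoints when the environment changes dramatically, sparse otherwise.
    ratio = np.clip(change / 0.10, 0.0, 1.0)              # saturate at 10 cm mean change
    return max_step - ratio * (max_step - min_step)

prev_map = np.zeros((200, 200))
new_map = prev_map.copy()
new_map[50:60, 50:60] = 0.3   # a new 30 cm obstacle appears
print(waypoint_spacing(prev_map, new_map))
```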
A framework diagram of the navigation system is shown in Figure 4. The robot localization module combines laser odometry and the inertial navigation system, providing the location of the robot during navigation. On this basis, we use the Robot-Centric Elevation Mapping algorithm [25, 26] and the rapidly exploring random tree algorithm to construct the mapping module and the path planning module, which provide the path for the quadruped robot with a manipulator. It is worth noting that the map updating strategy takes into account both the movement of the robot and changes in the environment, making the map more accurate. At the end of the framework, an evaluator checks whether the target object has entered the workspace of the manipulator. If not, the robot exits the operation task and resumes navigation.
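The workspace evaluator at the end of the navigation framework can be sketched as a simple reachability check in the body frame: if the detected target lies within the manipulator's reach, the operation task proceeds; otherwise the robot resumes navigation. The mounting offset and reach limit below are assumed values, not the prototype's specifications.

```python
# Minimal sketch of the end-of-pipeline evaluator: decide whether the detected
# object lies inside the manipulator workspace, otherwise hand control back to
# the navigation module. Reach limits are assumed values.
import numpy as np

ARM_BASE_IN_BODY = np.array([0.3, 0.0, 0.2])   # assumed mounting offset (m)
MAX_REACH = 0.9                                # assumed manipulator reach (m)

def target_in_workspace(target_in_body: np.ndarray) -> bool:
    """Return True if the target position (body frame, metres) is reachable."""
    return float(np.linalg.norm(target_in_body - ARM_BASE_IN_BODY)) <= MAX_REACH

def next_action(target_in_body: np.ndarray) -> str:
    return "grasp" if target_in_workspace(target_in_body) else "navigate"

print(next_action(np.array([0.8, 0.1, 0.3])))   # -> grasp
print(next_action(np.array([2.5, 0.0, 0.3])))   # -> navigate
```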
[IMAGE OMITTED. SEE PDF]
The framework diagram of the recognition system is shown in Figure 5. We propose a leader recognition method based on reflection intensity: a highly reflective logo is pasted on the leader's coat to give the leader a distinguishing feature for the recognition algorithm [13]. First, the Velodyne VLP16 LiDAR acquires point clouds with reflection intensity around the robot. To improve computational efficiency, a statistical filter removes noise and a voxel filter reduces the number of points. Then, the ground is segmented by a Gaussian particle filter algorithm. After that, the remaining points are clustered by the Euclidean clustering algorithm to obtain point cloud clusters representing the objects in the FOV. Finally, the cluster belonging to the leader is identified based on its reflection intensity, and its position is sent to the navigation module.
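A minimal sketch of the leader-selection step is given below: cluster the non-ground points and pick the cluster with the highest mean reflection intensity (the reflective marker). DBSCAN stands in for the Euclidean clustering used in the module, and a plain height cut replaces the Gaussian particle filter ground segmentation; the thresholds are assumptions.

```python
# Hedged sketch of the leader-selection step on an (N, 4) array of
# x, y, z, reflection intensity. DBSCAN replaces Euclidean clustering and a
# height cut replaces the ground segmentation used in the paper.
import numpy as np
from sklearn.cluster import DBSCAN

def find_leader(points_xyzi: np.ndarray, intensity_threshold: float = 150.0):
    """Return the centroid of the highest-intensity cluster, or None."""
    above_ground = points_xyzi[points_xyzi[:, 2] > 0.1]          # crude ground removal
    if len(above_ground) == 0:
        return None
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(above_ground[:, :3])
    best_centroid, best_intensity = None, intensity_threshold
    for label in np.unique(labels):
        if label == -1:                                          # noise points
            continue
        cluster = above_ground[labels == label]
        mean_intensity = float(cluster[:, 3].mean())
        if mean_intensity > best_intensity:                      # reflective marker
            best_intensity = mean_intensity
            best_centroid = cluster[:, :3].mean(axis=0)
    return best_centroid

cloud = np.random.rand(2000, 4) * [10, 10, 2, 100]               # synthetic scan
cloud[:50, :3] = np.array([3.0, 1.0, 1.0]) + 0.1 * np.random.randn(50, 3)
cloud[:50, 3] = 220.0                                            # bright vest points
print(find_leader(cloud))
```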
[IMAGE OMITTED. SEE PDF]
The target recognition module and the grasp detection module are implemented with the YOLO v5 algorithm [27]. Features are extracted by CSPDarknet, a combination of the Cross Stage Partial (CSP) and Darknet architectures. The network structure is divided into Input, Backbone, Neck and Output. The input of the network is an RGB image, and the output is the bounding box of the target object in the image, described by its height, width and centre. To better simulate the situations the robot needs to handle, the first camouflage-bag dataset is established as the input of the network, as shown in Figure 6. The dataset contains images synthesised with image processing techniques as well as images actually taken. When synthesising images, we use mask and segmentation techniques to extract objects from the original images so that they can be composited with other backgrounds. However, the synthesised images sometimes fail to represent the actual features well, so we also captured real images. The real images cover four situations: common indoor scenes, common outdoor scenes, occluded scenes, and scenes where the background is similar to the object. All the real images match the viewpoint of the quadruped robot.
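For reference, the sketch below shows how a YOLOv5 detector can be run through torch.hub and how its detections can be converted into the centre/width/height description used above. It loads the stock yolov5s weights; the module described in the paper would instead load weights trained on the camouflage-bag dataset, and the test image path is hypothetical.

```python
# Sketch of running a YOLOv5 detector and reading back bounding boxes as
# (centre, width, height). Uses the public ultralytics/yolov5 hub model; the
# paper's detector would use custom-trained weights instead.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")   # downloads pretrained weights

def detect(image_path: str):
    results = model(image_path)                # run inference on one image
    boxes = []
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
        boxes.append({"centre": (cx, cy), "width": x2 - x1,
                      "height": y2 - y1, "conf": conf, "class": int(cls)})
    return boxes

print(detect("test_scene.jpg"))   # hypothetical path to a test image
```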
[IMAGE OMITTED. SEE PDF]
After obtaining the bounding box of the target in the image, the 3D position of the target is calculated with the pinhole model. The 3D position is converted into the angles of the five manipulator joints, and these five values are sent to the low-level controller of the manipulator. In particular, to avoid failures caused by a low-quality single detection or by movement of the target object, the target position is detected and updated in real time during grasping.
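The pinhole back-projection step can be written in a few lines: given the bounding-box centre in pixels and the depth at that pixel, the 3D position in the wrist camera frame {C2} follows from the camera intrinsics. The intrinsic values below are placeholders; on the robot they come from the RealSense calibration.

```python
# Minimal pinhole-model back-projection: bounding-box centre (pixels) plus the
# depth reading at that pixel -> 3D point in the wrist-camera frame {C2}.
# The intrinsics are placeholder values, not the calibrated ones.
import numpy as np

FX, FY = 615.0, 615.0    # focal lengths in pixels (assumed)
CX, CY = 320.0, 240.0    # principal point for a 640x480 image (assumed)

def pixel_to_camera(u: float, v: float, depth_m: float) -> np.ndarray:
    """Back-project pixel (u, v) with depth z (metres) into {C2}."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

print(pixel_to_camera(400.0, 260.0, 0.85))   # box centre at 0.85 m depth
```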
SYSTEM FRAMEWORK
The task relevance scheduling strategy is used to build the overall architecture of the system. The autonomous operation can be regarded as one large task with high time consumption and high performance requirements. According to its inherent characteristics, we decompose it into multiple sub-tasks through a decomposition algorithm, including object recognition, leader-following, robot localization, autonomous mapping, path planning, grasp detection, robot motion planning and manipulator motion planning. These sub-tasks constitute a set of related tasks T = {t1, t2, …, tn}, which can be defined as a typical workflow. There are pre- and post-dependencies between sub-tasks, and they are executed in a parallel pipeline manner. The relationships between sub-tasks can be represented by a directed graph (Figure 7).
[IMAGE OMITTED. SEE PDF]
In Figure 7, every node represents a sub-task ti. {e12, e13, …, emn} is the edge set E, where eij represents the dependency between sub-task ti and sub-task tj: ti drives tj and is the publisher of the data communicated between the two tasks, while tj is the subscriber. Data transmission uses wireless network communication and Robot Operating System (ROS) communication. The nodes in the first row communicate with the nodes in the second row by wireless network communication using the TCP/IP protocol. Communication between nodes in the second row and nodes in the third row is ROS topic communication. There is no data communication between nodes in the first row and nodes in the third row.
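As a hedged sketch of the sub-task workflow, the snippet below encodes publisher-to-subscriber edges eij between the sub-tasks named in the text and derives one valid start-up order with a topological sort. The exact edge set is illustrative and does not claim to reproduce Figure 7 exactly.

```python
# Hedged sketch of the sub-task dependency graph: an edge publisher -> subscriber
# means the publisher's output drives the subscriber. A topological sort yields
# one valid start-up order for the parallel pipeline.
from graphlib import TopologicalSorter   # Python 3.9+

edges = {
    "object_recognition": ["path_planning", "grasp_detection"],
    "leader_following":   ["path_planning"],
    "localization":       ["mapping", "path_planning"],
    "mapping":            ["path_planning"],
    "path_planning":      ["robot_motion_planning"],
    "grasp_detection":    ["manipulator_motion_planning"],
}

# TopologicalSorter expects predecessors, so invert the publisher->subscriber edges.
predecessors = {}
for publisher, subscribers in edges.items():
    for subscriber in subscribers:
        predecessors.setdefault(subscriber, set()).add(publisher)

print(list(TopologicalSorter(predecessors).static_order()))
```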
The success rate of the whole task P(T) is determined by its sub-tasks: since the whole task succeeds only if every sub-task succeeds, and assuming the sub-tasks succeed independently, P(T) = P(t1) · P(t2) ⋯ P(tn).
EXPERIMENTS
To verify the performance of the whole autonomous system, we conduct experiments on the proposed key modules both on the simulation platform and in the real world. Moreover, the capability of the complete robotic platform is experimentally verified.
Simulation environment construction
The Lenovo Rescuer Y7000 notebook is used to build the simulation environment. Its hardware mainly includes an Intel Core i7-10875H CPU, 32 GB of RAM and an NVIDIA GeForce RTX 2060 (6 GB) graphics card. The software mainly includes the NVIDIA graphics driver 472.19, Compute Unified Device Architecture (CUDA) 11.0, the CUDA Deep Neural Network library (cuDNN) v8.0.5 and PyTorch 1.7.1. We use Webots 2021a to perform the simulation experiments on Ubuntu 18.04. The chassis and the manipulator of the simulated quadruped robot with a manipulator are constructed 1:1 according to the actual physical parameters.
Virtual prototype simulation experiment
Experiments on the physical quadruped manipulator platform are too expensive to perform in large numbers. There are often uncertainties when robots run new autonomous algorithms; any defect in an algorithm may lead to a robot fall, experimenter injury or other accidents, so simulation experiments are important. By testing the effectiveness of each module in different tasks on the simulation platform, we can find loopholes in the algorithms, adjust the relevant parameters, and reduce the losses caused by failed prototype experiments.
Navigation simulation experiment
To complete the robot navigation task, this paper first verifies the effectiveness of the robot's real-time mapping in Webots 2021a. The constructed map is a robot-centred local map. Figure 8 shows the mapping process as the robot walks in the simulation environment, illustrating how the map changes with the robot's pose and position. Figure 8a–d shows the simulation scene and the real-time map at 1, 7, 39 and 84 s. The real-time map is a 2.5D elevation map: the closer to blue, the lower the elevation; the closer to red, the higher the elevation. The coordinate frame drawn in the elevation map represents the pose of the robot in the map.
[IMAGE OMITTED. SEE PDF]
During walking, the mapping radius is 10 m and the map resolution is 5 cm. In this map, obstacles are represented as bulges. The barrel with the larger diameter in Figure 8d does not affect the robot's walking because of its low height, so it need not be treated as a bulge. In the early stage of mapping, because the robot trunk blocks the FOV of the LiDAR, the elevation behind the robot is overestimated. However, since the algorithm updates and fuses the map based on both point clouds and robot movement, the blocking effect of the trunk on the LiDAR FOV gradually disappears, as shown in Figure 8d.
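A toy version of the robot-centric elevation grid (10 m radius, 5 cm resolution) is sketched below. A simple max-fusion rule stands in for the full Robot-Centric Elevation Mapping update, so this illustrates the data structure rather than the actual algorithm.

```python
# Illustrative robot-centric 2.5D elevation grid: 10 m radius at 5 cm
# resolution, updated from LiDAR points expressed in the body frame.
# Max-fusion here is a stand-in for the full elevation-mapping update.
import numpy as np

RESOLUTION = 0.05                     # m per cell
RADIUS = 10.0                         # m
SIZE = int(2 * RADIUS / RESOLUTION)   # 400 x 400 cells

elevation = np.full((SIZE, SIZE), np.nan)

def update_map(points_body: np.ndarray) -> None:
    """Fuse LiDAR points (x, y, z in the body frame, metres) into the grid."""
    ix = ((points_body[:, 0] + RADIUS) / RESOLUTION).astype(int)
    iy = ((points_body[:, 1] + RADIUS) / RESOLUTION).astype(int)
    valid = (ix >= 0) & (ix < SIZE) & (iy >= 0) & (iy < SIZE)
    for i, j, z in zip(ix[valid], iy[valid], points_body[valid, 2]):
        elevation[i, j] = z if np.isnan(elevation[i, j]) else max(elevation[i, j], z)

update_map(np.array([[1.0, 0.5, 0.30], [1.02, 0.5, 0.28], [-2.0, 3.0, 0.02]]))
print(np.nanmax(elevation))   # highest fused obstacle: 0.30 m
```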
Grasping detection simulation experiment
In our setup, the robot base remains stationary when the target object is located within the manipulator workspace; the base moves only when the target object is outside the workspace. In the simulation experiment, we only verify the performance of grasping a dynamic object located within the workspace, so the manipulator is mounted on a fixed frame. Figure 9 shows the grasping simulation of the manipulator. In this experiment, the object makes a round-trip motion, as shown in Figure 9b–d. Figure 10 shows the trajectory of the manipulator end and the trajectory of the target object in the manipulator base coordinate system. The black arrow indicates the movement path of the target object (yellow dot). The solid blue line is the actual trajectory of the manipulator end, and the red dotted line is the trajectory computed when real-time grasp detection is not used. The purple box marks the initial position of the manipulator end, and the green box marks its terminal position. During the experiment, the manipulator end effectively tracked the trajectory of the object. It can be seen that a single detection cannot meet the real-time requirement, and that our grasping module is robust to dynamic objects. However, limited by the moving speed and workspace of the manipulator, grasping fails when the object moves too fast or moves out of the workspace.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Physical prototype experiment
On the basis of verifying the algorithms in the simulation environment, this paper assesses the ability of each module on the physical prototype platform and then completes an integration test of all modules.
Object recognition
The quadruped robot with the manipulator often faces grasping and obstacle-clearing tasks while walking. The grasping task mainly involves various bags full of supplies, and the obstacle-clearing task mainly involves doors and windows that prevent the robot from moving. Therefore, we verify the recognition ability of the proposed recognition module on doors, windows and bags in indoor and outdoor environments. The bags are often camouflaged and obscured for security purposes, such as camouflage bags full of supplies placed in grass. To obtain a dataset with camouflaged images, the common solution is image processing, such as masking and background replacement. However, such processing often makes the images unrealistic. To address this, we took 1562 images that match the viewpoint of the quadruped robot and labelled them accurately with LabelImg. These images include occlusion, backgrounds similar to the recognised object and other complex environments. For network training, we mixed the self-built dataset with the Open Images Dataset v4 [28], obtaining more than 80,000 images, of which 80% are used for training, 10% for validation and 10% for testing. The results are shown in Figure 11. The proposed recognition module can effectively identify doors, windows and bags in complex environments. In particular, when the image becomes blurred due to the motion of the robot, our algorithm can still recognise the target. It also remains effective under occlusion and when the object resembles the surrounding environment.
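For reproducibility of the split described above, a minimal sketch of the 80/10/10 partition is shown below; the file names and random seed are illustrative assumptions.

```python
# Small sketch of the 80/10/10 split used for the mixed dataset (self-built
# images plus Open Images). File names and the random seed are illustrative.
import random

def split_dataset(image_paths, seed=0):
    """Return (train, val, test) lists with an 80/10/10 split."""
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])

train, val, test = split_dataset([f"img_{i:05d}.jpg" for i in range(80000)])
print(len(train), len(val), len(test))
```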
[IMAGE OMITTED. SEE PDF]
Leader-following
To verify the effectiveness of the leader-following algorithm, we carry out an experiment in an outdoor environment. The person being followed wears a vest affixed with a reflective marker bar. According to the LiDAR mounting position on the robot and its FOV parameters, the reflective marker is concentrated at the leader's waist. The LiDAR scanning frequency is 10 Hz, and the leader is required to keep a natural walking posture at a speed of no less than 0.5 m/s. The recognition results are shown in Figure 12. Figure 12a and c shows the experimental scene, a thicket with dense vegetation; the other panels show the experimental results, where the white cuboid marks the position of the leader. The point cloud clusters framed by the red squares are pedestrians passing by the leader during the experiment. As shown in Figure 12b, there is no false detection when a pedestrian passes the leader. Figure 12d shows that recognition remains good when the leader passes through the bushes, which to some extent also simulate a wild forest scene. The experimental results show that our system has anti-interference capability.
[IMAGE OMITTED. SEE PDF]
Navigation experiments
To verify the robustness of the proposed method on the physical prototype, a navigation experiment was carried out in an outdoor environment. Figure 13 shows the robot autonomously passing a bend. Figure 13a and b shows the steering process: since forward motion and steering are decoupled, the robot adjusts its yaw in situ to pass the bend smoothly. After that, the robot continues to move forward with constant yaw, as shown in Figure 13c and d. Figure 14 shows the actual trajectory (red line) and the reference trajectory (blue line) of {B} in {W} during this process. The robot successfully reaches the target without colliding with any obstacle. Owing to the walking characteristics of the quadruped robot, the trunk sways from side to side as the legs lift, so the actual trajectory shows fluctuations.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Grasping experiments
As shown in Figure 15, the manipulator recognises the target object and grasps it autonomously. The upper-right corner of each sub-figure shows the camera's FOV. Figure 16 shows the target position recognised by the grasp detection module in the manipulator base coordinate system over time during the movement of the manipulator. The recognised target position fluctuates around an average value most of the time, and the fluctuation is usually within 0.5 mm. However, the target location can jump due to false detections. Because the grasping module detects in real time, a closed loop is formed between grasp detection and trajectory planning, so the manipulator still grasps the object successfully. The real and reference trajectories of the manipulator end collected during the experiments are shown in Figure 17, where purple, green and sky blue represent the real trajectory in the x, y and z directions, and blue, red and yellow represent the reference trajectories in the x, y and z directions, respectively. Comparing the real trajectory with the reference trajectory shows that the manipulator tracks the path reliably with little error.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
System experiments
All the above modules are integrated to test the system. The robot can identify objects at a distance and grasp them autonomously, as shown in Figure 18. In Figure 18a–c, the robot recognises the target and walks towards it. Then, in Figure 18d–f, the robot lifts the manipulator to detect the object through the wrist camera and grasps it.
[IMAGE OMITTED. SEE PDF]
To verify the ability of the proposed task relevance scheduling strategy to reduce computing consumption, we compare the graphics processing unit (GPU) and CPU occupancy of the system during navigation and operation. For comparison, the industrial computer running the system framework is placed in the same environment as the navigation and operation experiments with all processes of the system kept open. The frequency of target recognition is about 5 Hz, and the frequency of navigation is about 10 Hz. The input of the object recognition system is a 640 × 480 RGB image, and the input of the navigation system is the point cloud from the solid-state LiDAR. The size of the local map built by the navigation algorithm is set to 10 m × 10 m, so the number of parameters is the same in both experiments. In addition, to control the influence of the external environment on the computing performance, the industrial computer is located indoors during both experiments with the air conditioning set to 26 °C. The result is shown in Figure 19, where the blue line and the red line represent the GPU occupancy and CPU occupancy with our method, and the yellow line and the purple line represent the GPU occupancy and CPU occupancy with all processes kept on. The data are recorded at 4 Hz.
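The 4 Hz CPU/GPU occupancy traces in Figure 19 could be logged with a small recorder like the sketch below, which samples CPU load via psutil and GPU utilisation via an nvidia-smi query. This is an assumed logging setup, not necessarily the one used in the experiments.

```python
# Hedged sketch of a 4 Hz CPU/GPU occupancy logger: psutil for CPU load and an
# nvidia-smi query for GPU utilisation. Assumed tooling, not the paper's setup.
import subprocess
import time

import psutil

def gpu_utilisation_percent() -> float:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=utilization.gpu",
         "--format=csv,noheader,nounits"])
    return float(out.decode().strip().splitlines()[0])

def record(duration_s: float = 10.0, rate_hz: float = 4.0):
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append((time.time(), psutil.cpu_percent(), gpu_utilisation_percent()))
        time.sleep(1.0 / rate_hz)
    return samples

print(record(duration_s=2.0)[:3])
```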
[IMAGE OMITTED. SEE PDF]
The average occupancy of both the GPU and the CPU with our method is lower than when all processes are always on. At 13 and 42 s, the GPU occupancy of our algorithm increases briefly because the object recognition module and the grasp detection module are started, respectively, but it soon stabilises. From 46 to 51 s, the grasp detection algorithm judges whether the target object is located in the workspace of the manipulator. Because the target object is in the workspace, the system then closes the object recognition module, and the CPU and GPU occupancy decreases significantly. At 278 s, the manipulator successfully grasps the object; the grasp detection module is then turned off, and the CPU and GPU occupancy is reduced further. This demonstrates that the proposed method effectively reduces the consumption of computing resources. However, the method cannot eliminate the transient load caused by process start-up.
CONCLUSIONS
In this study, we develop a compact autonomous system for quadruped robots with manipulators to realize leader-following, target recognition, navigation and operation. The system can accomplish navigation and operation tasks autonomously and efficiently.
The proposed prototype platform combines a quadruped robot, a manipulator and environment awareness sensors, which not only retains the flexibility of a traditional quadruped robot but also lays the foundation for eliminating the robot's dependence on the operator.
The software system includes many modules, such as leader-following, target recognition, localization, mapping, navigation and grasp detection, which comprehensively cover the task requirements of navigation and operation and give the robot a high degree of autonomy.
Moreover, the developed system framework, which is based on the task relevance scheduling strategy, can drastically reduce the computational complexity. The effectiveness of the developed framework in the outdoor environment is demonstrated via extensive experiments.
However, the current research cannot meet the needs of high-speed movement and autonomous grasping with a moving base, and further research and exploration are needed to solve these problems.
ACKNOWLEDGEMENTS
This work was supported by the National Natural Science Foundation of China (Grant/Award No: 61973135, 6207319, 91948201), Shandong Key R&D Program (Grant/Award No: 2019JZZY020317), and the Natural Science Foundation of Shandong Province (Grant/Award No: ZR2022MF296).
CONFLICT OF INTEREST
The authors declare that there are no conflicts of interest that could be perceived as prejudicing the impartiality of the research reported.
DATA AVAILABILITY STATEMENT
The dataset proposed in this study is available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.
Hooks, J., et al.: Alphred: a multi‐modal operations quadruped robot for package delivery applications. IEEE Rob. Autom. Lett. 5(4), 5409–5416 (2020). [DOI: https://dx.doi.org/10.1109/lra.2020.3007482]
Abstract
A systematic solution is developed to improve the autonomous capability of the quadruped robot with a manipulator in navigation, recognition and operation. The developed system adopts novel software, hardware and system architecture, including a specially designed environment awareness system (EAS). Based on the camera and LiDAR of the EAS, the recognition of multiple common targets, such as the leader, door, window and bag, is achieved. For navigation, a localization method is built that combines laser odometry and the global positioning system. A mapping and path planning module is designed using the Robot-Centric Elevation Mapping algorithm and the rapidly exploring random tree algorithm. For operation, a real-time target grasp detection system is proposed based on the You Only Look Once v5 algorithm to improve the success rate of tasks. The whole system is integrated based on the task relevance scheduling strategy to reduce the computational complexity. The tightly integrated system and its subsystems are evaluated through simulations and physical experiments in robot recognition, navigation and operation. Extensive experiments show that the proposed framework achieves autonomous navigation and operation of the quadruped robot with a manipulator. Notably, the proposed framework remains effective when facing dynamic objects. In addition, the system can be easily extended to other forms of mobile robots.
1 Engineering Research Center of Intelligent Unmanned System, Ministry of Education, Shandong University, Jinan, China
2 School of Electrical Engineering, University of Jinan, Jinan, China