1. Introduction
Bananas are one of the most important economic crops globally and are widely cultivated and consumed around the world [1]. Over 130 countries grow bananas, with China being the third-largest producer, having a cultivation area of 350,000 to 400,000 hectares. In 2022, China produced 1.235 million tons of bananas, and the sustainable development and efficient harvesting of the banana industry have drawn significant attention [2,3]. However, bananas are currently harvested primarily by hand, and bunches weighing more than 30 kg present a considerable challenge for farmers. Additionally, as demand and production gradually increase, the banana industry increasingly faces issues such as labor shortages, rising labor costs, and worker safety [4]. Consequently, the mechanization and automation of banana harvesting have become important development trends, making the precise harvesting of banana bunches one of the key challenges in agriculture.
In recent years, the development and application of intelligent harvesting robots have gradually increased [5]. Highly integrated harvesting systems can ensure quality, reduce the failure rate of harvesting, and lower the labor intensity for operators [6]. Thus, it is crucial to develop autonomous banana-harvesting robots that are accurate, stable, and reliable [7]. There are three main aspects where the performance of automated banana-harvesting robots needs improvement. First, the robot body and end-effector need to be strong enough to meet the task of harvesting heavy bananas. Second, there is a need for rapid and precise detection of banana stalks and high positioning accuracy to improve the efficiency of banana harvesting. Finally, the functional modules within the robot system need to be highly integrated to achieve autonomous banana harvesting.
The robot body is a crucial component of harvesting robots. In other research applications, Wang et al. [8] designed a litchi-harvesting robot, where the robot arm used a six-degree-of-freedom industrial robotic arm with a carrying capacity of 5 kg, achieving a litchi-harvesting success rate of 81.3%. Yasukawa et al. [9] developed a lightweight six-degree-of-freedom tomato-harvesting robot and completed initial outdoor detection tests for tomatoes. Xiong et al. [10] used a Mitsubishi five-degree-of-freedom robotic arm, achieving a 53.6% success rate in strawberry harvesting in farm environments. Additionally, there are studies on harvesting robots for other crops such as citrus [11], grapes [12], kiwifruit [13], and tea leaves [14]. The mechanical bodies used in these studies often involve highly versatile industrial robotic arms, with model sizes tailored for broader applications in production. In cases where generic robotic arms do not meet the requirements, some mechanical bodies are specifically designed for particular crops. Further studies have included different types of robotic arms in various applications [15]. For large and heavy fruits like bananas, existing mechanical bodies cannot meet the harvesting needs, thus necessitating the design of a suitable mechanical body.
End-effectors are one of the core components of a harvesting robot, and most end-effectors need to be designed according to the characteristics of the fruit to be harvested. Liu et al. [16] designed a six-finger flexible robotic hand to achieve flexible grasping of agricultural products, introducing a smooth grasping control strategy to ensure apple picking with the most appropriate force levels. Li et al. [17] addressed the low efficiency of citrus harvesting by designing a wrap-around swallowing harvesting mechanism, verifying its feasibility for citrus harvesting. Li et al. [18] designed a rope-driven adaptive, non-destructive harvesting end-effector based on a human hand to achieve 100% successful harvesting of pears without damage. Xie et al. [19] optimized the key structural parameters of a cherry tomato harvesting end-effector based on the Box-Behnken design technique, achieving a 76% success rate for harvesting fruits of different diameters. Cai et al. [20] proposed a pneumatic soft gripper that bends and extends by driving a silicone body, verifying its grasping performance for strawberries. These end-effectors have achieved good results for different fruits, but these fruits are relatively small and light. The main features of a banana harvesting end-effector include grasping and cutting, and it needs to be able to bear a significant weight. Designing an end-effector that meets these requirements is a potentially effective solution for autonomous banana harvesting.
Currently, while banana-harvesting robots have not yet been commercialized, there has been substantial research on related detection algorithms, including banana stalk detection [21,22], banana inflorescence axis detection [23], banana inflorescence axis cutting point localization [24], and banana pseudo-stem detection [25]. For banana-harvesting robots, the most important task is to detect and locate the banana stalk. Fu et al. [2] achieved a detection rate of 89.63% for banana stalks using color texture features, with an execution time of 1.325 s. Wu et al. [23] used an improved YOLOv3 to identify multiple parts of bananas, achieving an average detection accuracy of 93%. Wu et al. [24] enhanced the YOLOv5 model with an involution bottleneck module, achieving a 93.2% detection accuracy for multiple banana targets. Fu et al. [26] proposed the lightweight YOLO-banana model, reducing detection time and achieving an 85.98% detection accuracy for banana stalks. However, in practical harvesting scenarios, it is unnecessary to detect parts other than the stalk, or stalks from multiple plants simultaneously. Additionally, detection speed must be balanced against accuracy without sacrificing precision.
To our knowledge, there is still relatively little research on banana-harvesting mechanical devices and integrated control systems. This study aims to develop an autonomous banana-harvesting robot to address the challenge of accurately and reliably harvesting bananas in banana plantation environments. The main contributions of this paper include: (1) Developing a prototype of a banana-harvesting robot and its end-effector; (2) Designing and implementing a precise detection model and spatial positioning for banana stalks; (3) Proposing a multi-threaded task-based harvesting system framework, integrating various functional modules with the robot, evaluating the robot’s harvesting performance, optimizing the control system, and improving the harvesting success rate.
The rest of the paper is organized as follows: Section 2 presents the materials and methods used in this study. Section 3 provides an analysis of the experimental results. The conclusion is presented in Section 4.
2. Materials and Methods
2.1. Overview of Banana-Picking Robots
The banana-picking robot described in this study is shown in Figure 1. The main structure consists of a tracked-vehicle walking mechanism, the robot arm body, and the end-effector. The tracked vehicle measures 1.963 m in length, 1.490 m in width, and 4.24 m in height. The large-size design ensures its stability, adaptability, and load-bearing capacity while navigating the banana plantation. It is driven by an AC servo motor that operates a hydraulic oil pump, supplying pressurized oil from the hydraulic oil tank to the hydraulic motors on both sides of the tracked vehicle through an electromagnetic directional valve. This design gives the tracked vehicle powerful propulsion and agile steering. The hydraulic oil tank has a capacity of approximately 30 L, providing ample hydraulic oil for long working hours. The design of the hydraulic system ensures efficient power transmission and stable operating performance for the tracked vehicle. The robot arm body is controlled by a robot controller, which includes servo motor drivers for each joint and a Socket communication module. The end-effector is also driven by a hydraulic system, with the hydraulic control adjusting the pressure within the range of 0 to 15 MPa at a dynamic pressure of 3 MPa. The power source is a portable power device with an output of 10 kW and a capacity of 15 kW·h. The robot uses an Intel RealSense D435i depth camera to capture color and depth images of the banana stalk, using active infrared light to perceive target distance, with a minimum sensing depth of 0.1 m and a field of view of 91.2° (horizontal) × 65.5° (vertical). The computer is equipped with a 13th-generation Intel Core(TM) processor, 16 GB of RAM, and a GeForce RTX 4090 graphics card with 16 GB of memory.
2.2. Design and Calculation of the Banana-Picking Robot Structure
2.2.1. Structure Design
Degrees of freedom (DOF) is a crucial parameter in the design of a robotic arm, affecting its workspace, flexibility, and kinematic modeling. During design, parameters such as the length of the arm links, the number of DOFs, the motor drive mode, and the joint offset distances must be considered to ensure that the workspace and operational performance meet the requirements for automatic banana harvesting.
Generally, the more degrees of freedom a joint-type robotic arm has, the higher its flexibility and the more positions and postures it can reach. Conversely, with more degrees of freedom, the mechanical structure becomes more complex, the rigidity of the arm decreases, and its load-bearing capacity weakens. In banana plantations, where bananas weigh between 40 and 60 kg, more degrees of freedom require higher motor torque output, increasing both cost and complexity. Since banana bunches typically grow vertically and are neatly and regularly planted, their posture is relatively fixed, thus not requiring excessive degrees of freedom during the harvesting process.
A three-degree-of-freedom arm can reach the position of the banana stalk within the workspace, but with only three degrees of freedom the posture of the wrist joint cannot be adjusted. Therefore, adding one degree of freedom allows the wrist joint to rotate about the z-axis, adjusting the end-effector’s posture according to the banana’s posture. The designed robot body model is shown in Figure 2. The robotic arm consists of four joints: Joints 1 and 4 are rotary joints, while Joints 2 and 3 are coupled translational joints. Joint 1 allows the arm to rotate in the horizontal plane, expanding its working range and enabling it to approach targets from different directions. Joint 2 adjusts the arm’s length through horizontal translation, allowing it to extend or retract to operate at varying distances. This function ensures the end-effector can accurately grasp and cut banana stalks. Joint 3 adjusts the arm’s height via vertical translation, adapting to different stalk heights. Joint 4, another rotary joint at the arm’s end, precisely controls the end-effector’s orientation.
In banana plantations with wide–narrow row planting, the wide row spacing is 3.8 m and the narrow row spacing is 1.2 m. The height of the banana stalk is generally between 2.6 m and 3.1 m. Therefore, the operational range of the robot should be adapted to the actual working environment. As shown in Figure 3, the prototype of the robot and its corresponding operational space are depicted. Measured from the center of the robot chassis, the minimum height and length in extreme conditions are 1.033 m and 1.495 m, respectively. The maximum picking height reaches 2.972 m, and the maximum length is 3.129 m. Through the movement of the second and third joints, the robot can operate within this extreme boundary area, forming an approximately rectangular working range due to the coupling of the joints. The center of the end-effector’s gripper is 0.103 m from the reference point at the fourth axis’ screw center.
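A simple bounds check against these extreme reach values can be sketched as follows. This is only an illustration using the measurements quoted above; the real workspace boundary is shaped by the joint coupling and is not a perfect rectangle:

```python
def target_in_workspace(length_m, height_m,
                        min_len=1.495, max_len=3.129,
                        min_h=1.033, max_h=2.972):
    """Check a target point (relative to the chassis center, in meters)
    against the prototype's extreme reach values quoted above."""
    return (min_len <= length_m <= max_len) and (min_h <= height_m <= max_h)
```

For example, a stalk 2.0 m ahead at a height of 2.5 m lies inside the reachable band, while one only 0.5 m ahead is inside the minimum-reach dead zone.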
2.2.2. Kinematic Calculation
To meet the demands of banana-picking tasks, the custom-designed robotic arm features a unique geometric structure and motion mechanism. The coupled-joint design makes the arm’s structure more compact and efficient, allowing faster positional adjustments and shorter operation times, but it also complicates the kinematic calculations because the coupled joint movements are non-linear. Compared to standard robotic arms, a more suitable mathematical model is therefore required. This ensures that the non-standard, custom-designed robotic arm can accurately compute the correct positions and angles of each joint through kinematic solutions, so that the end-effector can precisely reach the target location.
The mechanical structure schematic of the banana-picking robot is shown in Figure 4. The robot has four motion joints: base rotation, horizontal movement of slider C, vertical movement of slider A, and end-effector rotation. The base rotation can adjust the robot’s position and posture over a wide range. The horizontal and vertical movements, respectively, change the position of the end-effector; the end-effector rotation adjusts its posture. The link parameters are shown in Table 1.
By controlling the movement of the robot’s four degrees of freedom, the target position and posture of the stem clamping mechanism in the harvesting end-effector can be adjusted. Therefore, it is necessary to establish a forward kinematics model of the harvesting robot with four degree-of-freedom variables (θ, x, z, φ) to describe the position and posture of the stem clamping mechanism and to calculate the end-effector’s position and posture. Let θ and φ represent the rotation angles of the two rotational joints, and let x and z represent the displacements of the two translational joints. The position vector of the coordinate origin o relative to the coordinate origin O and the derivation of the forward kinematics equations are as follows:
(1)
From the equations, it can be seen that the position of the end-effector is related not only to the horizontal displacement x and vertical displacement z but also to the main arm angle α and the secondary arm angle β. According to the half-angle formula, the function expression relating the main arm angle α to the horizontal displacement x and vertical displacement z is given by:
(2)
Similarly, the function expression relating the secondary arm angle β to the horizontal displacement x and vertical displacement z is:
(3)
By combining Equations (1)–(3), the variables of the degrees of freedom θ, x, z, and φ can be obtained. Since both the base rotation angle θ and the end-effector rotation angle φ are yaw degrees of freedom, the posture angle Ψ of the stem clamping mechanism is:
Ψ = θ + φ (4)
Thus, the forward kinematics model of the harvesting robot is obtained. Based on the forward kinematics model, the inverse kinematics of the robot can be solved by establishing the Jacobian matrix and using the Newton iterative method [27].
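A generic numerical scheme of this kind can be sketched as follows: given any forward-kinematics function, the joint variables are updated by Newton iteration using a finite-difference Jacobian. This is a minimal sketch, not the paper’s actual model; the two-link forward model in the usage example below is purely illustrative and does not use the robot’s real link parameters:

```python
import numpy as np

def newton_ik(fk, q0, target, tol=1e-8, max_iter=100, eps=1e-6):
    """Solve fk(q) = target by Newton iteration with a numeric Jacobian.

    fk     : callable mapping joint variables q to end-effector position
    q0     : initial guess for the joint variables
    target : desired end-effector position
    """
    q = np.asarray(q0, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(max_iter):
        err = fk(q) - target
        if np.linalg.norm(err) < tol:
            break
        # Build the Jacobian column by column with forward differences.
        J = np.zeros((err.size, q.size))
        for j in range(q.size):
            dq = np.zeros_like(q)
            dq[j] = eps
            J[:, j] = (fk(q + dq) - fk(q)) / eps
        q = q - np.linalg.pinv(J) @ err  # Newton/Gauss-Newton update
    return q
```

For instance, with an illustrative planar two-link model fk(q) = (cos q1 + cos(q1 + q2), sin q1 + sin(q1 + q2)), the iteration converges to joint angles that place the tip at a reachable target.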
2.3. End-Effector Design
To achieve integrated clamping and cutting of the banana stalk, the end-effector is designed as a composite mechanism that combines clamping and cutting functions. The design of the end-effector needs to ensure stability during cutting, enabling the separation of the banana bunch from the tree while preventing the bunch from falling off the end-effector. Additionally, since the diameter of the banana stalk ranges from 60 to 120 mm, the opening size of the clamping device should meet this condition.
The designed end-effector is shown in Figure 5. The entire clamping device consists of a hydraulic cylinder for adjusting the opening and closing of the clamps, a “Y”-shaped joint assembly, a pin shaft, a clamp arm assembly, clamp jaws, and a clamp mouth.
In the gripping device, the hydraulic cylinder drives the link mechanism through a push–pull rod, causing the link to swing around the center, thus controlling the opening angle of the jaws. This design ensures that the jaws can adjust according to the size of the banana stalk, providing stable gripping force. The “Y” joint assembly and pin connect and secure various parts of the gripping device, ensuring overall stability and flexibility. The claw design accommodates banana stalk diameters ranging from 80 mm to 300 mm. The jaws provide sufficient friction during gripping to ensure the banana stalk does not slip during cutting.
The cutting device features a small chainsaw that adjusts its position and orientation through a chainsaw hydraulic cylinder and swing arm mechanism, enabling powerful cutting of banana stalks. The chainsaw hydraulic cylinder, with precise hydraulic control, adjusts the chainsaw’s angle and position to ensure the completion of the cutting task.
2.4. Banana Stalk Detection and Localization
2.4.1. Detection
Deep learning algorithms are widely used in the agricultural field due to their high detection accuracy and strong generalization ability [28,29]. Common deep learning algorithms stack features, utilize Transformer architectures, or use graph neural networks [30,31,32]. These methods perform excellently but have relatively high computational complexity. In contrast, YOLOv5s (You Only Look Once Version 5s) maintains high accuracy while also offering high speed, making it the base detection algorithm chosen for this study.
The dataset for model training was collected from a banana plantation base in Suixi County, Zhanjiang City, Guangdong Province, using an Intel RealSense D435i depth camera. A total of 9586 images were collected, with a resolution of 1920 × 1080.
To ensure high-precision detection of banana stalks while meeting real-time detection requirements, this study improved the detection network design based on YOLOv5s, as shown in Figure 6. The improved YOLOv5s model is similar to YOLOv5 [33] but introduces a new lightweight Group Shuffle Convolution module (GSConv) [34] in the neck layer. The specific operations of this model are as follows:
Step 1: Group the input features. Given the number of feature channels as C, divide the input features into G groups, with each group containing C/G channels. Specifically, we divided them into 4 groups;
Step 2: Convolution within groups. Perform convolution operations independently on each feature group, outputting the same number of feature maps;
Step 3: Rearrange feature channels. Rearrange the output channels after the grouped convolution so that each new group contains channels from all the initial groups;
Step 4: Shuffle information. This rearrangement allows features from different groups to interact, preventing information isolation.
This module reduces computational complexity through grouped convolution. More importantly, by using grouped convolution and channel shuffling, it more effectively captures and utilizes spatial and channel information from the input features, thereby enhancing the capability of feature representation.
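The grouping-and-shuffle idea in Steps 1–4 can be illustrated in isolation. The following NumPy sketch shows only the channel-shuffle rearrangement, not the full GSConv module (which also involves the convolutions themselves):

```python
import numpy as np

def channel_shuffle(x, groups):
    """Rearrange the channels of an (N, C, H, W) feature tensor so that
    each new group contains channels drawn from every original group."""
    n, c, h, w = x.shape
    x = x.reshape(n, groups, c // groups, h, w)  # Step 1: split C into groups
    x = x.transpose(0, 2, 1, 3, 4)               # Step 3: interleave the groups
    return x.reshape(n, c, h, w)                 # Step 4: flatten back to C channels
```

With C = 8 channels and 4 groups, the original channel order [0, 1, 2, 3, 4, 5, 6, 7] becomes [0, 2, 4, 6, 1, 3, 5, 7], so every consecutive run of channels now mixes all four original groups.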
Additionally, the model’s intersection-over-union (IoU) loss function was optimized during training by adopting SIoU [35] instead of the traditional IoU, improving the robustness and stability of target detection.
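For reference, the plain IoU that SIoU extends measures only the overlap ratio of two boxes; SIoU additionally penalizes angle, distance, and shape mismatch between the predicted and ground-truth boxes. A minimal sketch of the baseline metric:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Because plain IoU gives no gradient signal for non-overlapping boxes and ignores how the boxes are misaligned, loss variants such as SIoU add geometric penalty terms to stabilize and speed up regression.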
2.4.2. Localization
In this study, the RealSense D435i camera, which is based on active infrared stereo technology, was used to capture color and depth images of the banana stalks. Based on the depth information and a spatial coordinate transformation, the spatial coordinates of the banana stalks were obtained, as shown in Figure 7. The spatial coordinate transformation is given by Equation (5).
X = (u − u0) · Z / fx, Y = (v − v0) · Z / fy, with Z taken directly from the depth image (5)
where fx and fy are the focal lengths of the camera along the x and y axes, respectively, and u0 and v0 are the translations from the pixel coordinate system to the image coordinate system. Let the pixel coordinates of a given point in the image coordinate system be (u, v), with the origin coordinates being (u0, v0). The depth camera’s internal parameter matrix is shown in Equation (6):
K = [fx, 0, u0; 0, fy, v0; 0, 0, 1] (6)
During the actual harvesting process by the robot, the spatial coordinates of the stalk need to be transformed into the robot coordinate system. After calibrating the camera’s internal parameters, the hand-eye calibration method was used to complete the calibration between the camera coordinate system and the robot coordinate system [36,37]. To improve calibration accuracy, an additional tool coordinate system was added to the end-effector to obtain more precise calibration data, as shown in Figure 8. By calibrating the camera’s internal and external parameters, the spatial coordinates of the stalk in the camera coordinate system can be converted to the robot coordinate system, enabling the robot to accurately harvest the banana stalks.
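The two-stage transformation described above (pixel to camera frame via the intrinsics, then camera frame to robot frame via the hand-eye calibration) can be sketched as follows. The symbol names follow the standard pinhole model, and the 4 × 4 homogeneous transform is an assumed output of the hand-eye calibration, not the paper’s actual calibration result:

```python
import numpy as np

def pixel_to_camera(u, v, z, fx, fy, u0, v0):
    """Back-project pixel (u, v) with measured depth z into the camera frame."""
    return np.array([(u - u0) * z / fx, (v - v0) * z / fy, z])

def camera_to_robot(p_cam, T_robot_cam):
    """Map a camera-frame point into the robot frame using a 4x4
    homogeneous hand-eye transform."""
    return (T_robot_cam @ np.append(p_cam, 1.0))[:3]
```

A pixel at the principal point with 1 m of depth maps to (0, 0, 1) in the camera frame; an identity hand-eye transform would leave it unchanged in the robot frame.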
2.5. Harvesting System Design
To achieve autonomous harvesting operations with the banana-harvesting robot, various functional modules must be highly integrated. These mainly include the perception system, decision logic, operation execution, and communication system. The main design of the control system is shown in Figure 9.
The perception system uses a depth camera to capture and analyze video frames from the environment, identifying the banana stalk. Based on depth information and three-dimensional imaging principles, it completes the spatial positioning of the banana stalk, taking the center of the stalk as the target point for harvesting. The decision logic collects and processes the overall state of the system, including multiple recognitions of the stalk target at the same location and judging whether the spatial coordinates are consistent. It further analyzes whether the target point is within the operating range of the robot and whether the end-effector is in a standby state. When all conditions are met, the system enters the operation execution stage. At this point, the tracked vehicle stops moving, the robotic arm moves to the harvesting point, and the end-effector completes the clamping and cutting operation.
Additionally, to keep all key modules in a ready state, the control system is designed with a multi-threaded working mode. The main thread is responsible for the recognition, positioning, and motion calculation of the main system. Two sub-threads monitor the status of the robotic arm body and the end-effector. When the main thread completes the motion calculation, it sends the coordinates via socket communication to complete the movement. The completion signal of the movement is fed back to the end-effector through serial communication, completing the harvesting task.
It is noteworthy that each module has an independent main controller. The design of the multi-threaded mode can improve the efficiency and response speed of the control system, avoiding mis-execution situations.
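A minimal sketch of this multi-threaded task flow follows. Thread-safe queues stand in for the socket and serial links, which is an assumption of this illustration, not the robot’s actual communication code:

```python
import queue
import threading

target_q = queue.Queue()  # main thread -> arm thread: harvest coordinates
done_q = queue.Queue()    # arm thread -> end-effector thread: motion complete
results = []              # records completed harvesting actions

def arm_thread():
    # Stand-in for sending coordinates to the robot controller over a
    # socket and waiting for the motion-complete feedback.
    target = target_q.get()
    done_q.put(target)

def effector_thread():
    # Stand-in for triggering clamping/cutting over the serial link once
    # the arm reports that the target has been reached.
    target = done_q.get()
    results.append(("clamp_and_cut", target))

threads = [threading.Thread(target=arm_thread),
           threading.Thread(target=effector_thread)]
for t in threads:
    t.start()
target_q.put((1.2, 0.0, 2.5))  # main thread posts a detected stalk position
for t in threads:
    t.join()
```

Blocking queue reads keep each sub-thread in a ready state until its controller issues a signal, which mirrors the immediate-response behavior described above and prevents the end-effector from acting before the arm motion completes.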
3. Results and Discussion
The developed autonomous banana-harvesting robot was tested and optimized at multiple sites, including ➀ the banana plantation base in Zengcheng, Guangzhou, on 19 June 2021, ➁ the Guangdong Academy of Agricultural Sciences, on 18 September 2021, ➂ the Zhongkai College of Agricultural Engineering, Guangdong Province, on 21 March 2024, and ➃ the banana planting base in Suixi county, Zhanjiang city, on 12 April 2024. First, the robot’s detection performance for banana stalks was tested in different scenarios. Then, the robot’s positioning performance for the stalk targets was evaluated. Finally, the entire system was integrated, and its harvesting capability was tested, leading to proposed optimization schemes. The overall harvesting success rate, damage rate, efficiency, and other performance indicators of the system were assessed.
3.1. Banana Stalk Detection Performance
The banana stalk detection network in this study was built using the TensorFlow 2.0 framework. The training batch size was set to 8, with a maximum of 800 training steps. The image resolution was set to 640 × 640, and the learning rate was 0.01. The optimizer used was Stochastic Gradient Descent (SGD). Precision, average precision (AP), recall, and the comprehensive evaluation metric F1 were used as the evaluation metrics for the model [38,39]. Random sampling was performed at four different experimental bases to test the model’s performance. Recall indicates the proportion of ground-truth targets that are correctly detected, while AP and F1 combine the detection algorithm’s precision and recall. The test results are shown in Table 2, and some test samples are shown in Figure 10.
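From per-class detection counts, these metrics are computed in the usual way (the counts in the example are illustrative, not the paper’s data):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp)                       # correct among predictions
    recall = tp / (tp + fn)                          # found among ground truths
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1
```

For example, 90 true positives with 10 false positives and 5 missed stalks give a precision of 0.90 and a recall of about 0.95.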
From the evaluation metrics, it can be seen that the model performed well in detecting banana stalks. Field random tests revealed only a few stalks that were not recognized, primarily due to (1) excessive exposure caused by strong light and the camera facing direct sunlight and (2) the banana leaves obstructing large areas of the stalk and its surrounding region.
Figure 11a shows a detection case under strong light conditions. When sunlight did not directly hit the camera, the model was able to recognize the fruit stalk under strong light conditions. However, under strong light, especially when sunlight directly hits the camera, extreme exposure or localized brightness loss can occur, both of which are strong interference factors that prevent the detection of the banana fruit stalk.
Figure 11b presents a detection case under occlusion conditions. Normally, the fruit stalk has a certain length, making it less likely to be obstructed by the fruit itself. However, it is quite common for banana leaves to obstruct the fruit stalk. When banana leaves only partially covered the fruit stalk, the impact on the model’s recognition was minimal. If the fruit stalk was significantly obstructed but the area outside the stalk was not strongly interfered with by the banana leaves, the model could still detect the fruit stalk. However, if both the fruit stalk and the surrounding area were heavily obstructed by banana leaves, the model’s detection capabilities were limited.
For direct sunlight exposure, a potential solution could be to add auxiliary sunshades to reduce interference. Regarding occlusion, a single viewpoint may struggle, but with the robot’s movement, changing camera perspectives could potentially avoid the banana stalks being occluded. However, these factors continue to impact harvesting operations.
To further analyze the model performance, the improved YOLOv5s network was compared with other networks during the experiment. The recognition results of the different networks are shown in Table 3. The AP value and the comprehensive evaluation metric F1 of the proposed improved YOLOv5s network model were the highest, reaching 99.23% and 0.97, respectively. The introduction of the lightweight group convolution module also further improved the detection speed of the model.
3.2. Location Performance
Location error is a crucial performance indicator for a harvesting robot, referring to the deviation between the center point of the end-effector and the center of the fruit. This deviation can be decomposed into absolute errors in the height, width, and depth directions of the banana stalk relative to the end-effector. A field test was conducted in Suixi County, Zhanjiang City, Guangdong Province, on 12 April 2024. The test method involved mounting a depth camera on the robot, capturing images, and performing coordinate transformation to unify all coordinate systems into the robot’s coordinate system. A random point on the calibration board was used as the target point, and the space coordinates calculated from the calibration matrix were compared with the actual space coordinates reached by the robot to calculate the absolute error. The hand–eye calibration error results are shown in Figure 12.
Statistical analysis indicated that the absolute error in the height direction was relatively large, with a maximum error of 6.854 mm. The average absolute positioning errors in the height, width, and depth directions were 5.159 mm, 2.405 mm, and 2.760 mm, respectively. The robot’s positioning accuracy for the target location was high, meeting the requirements for harvesting banana stalks.
As shown in Figure 13a, this is the initial state of the end-effector during the actual harvesting process, with a jaw opening of approximately 26 cm. The harvesting process involves visual detection and positioning, conversion from the camera coordinate system to the robot coordinate system using the hand–eye calibration parameters, and finally reaching the harvesting point through the robot’s kinematic calculations. Because the banana stalk extends over a considerable length in the height direction and any point along it can serve as a harvesting point, height errors were not measured. As shown in Figure 13b,c, the width and depth errors during the actual harvesting process were measured. The errors in the corresponding directions between the stalk center and the end-effector did not exceed 5 cm, which the end-effector, with its larger opening, can fully accommodate.
3.3. End-Effector Harvesting Experiment
In the end-effector harvesting experiment, two key tests were conducted to ensure the integrity of clamping and cutting the banana stalk during harvesting. First, to evaluate the reliability of the clamping mechanism, we targeted banana bunches separated from the plant and used the clamping mechanism to grip them. By adjusting the hydraulic controller’s pressure, the clamping mechanism was able to grip and suspend the banana bunches in the air. After multiple trials, the minimum negative pressure value was determined to be 4 MPa, with performance being more stable at negative pressures above 5 MPa. It is worth noting that the effective area of the hydraulic cylinder is approximately 300 square millimeters, with a dynamic pressure of about 3 MPa.
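As a rough check on the clamping capability, the cylinder force follows from F = p × A; with pressure in MPa and area in mm², the product is directly in newtons. A one-line sketch using the effective area quoted above:

```python
def cylinder_force_n(pressure_mpa, area_mm2=300.0):
    """Hydraulic cylinder force F = p * A.
    1 MPa acting on 1 mm^2 equals exactly 1 N."""
    return pressure_mpa * area_mm2
```

At the 5 MPa pressure where clamping was observed to be stable, this gives roughly 1500 N of clamping force, consistent with the need to suspend bunches weighing tens of kilograms.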
Second, the integrated clamping and cutting functionality was verified. The negative pressure required for the cutting function was relatively smaller compared to the clamping function. When the negative pressure reached 6.5 MPa, the clamping and cutting actions could be completed smoothly. Due to the wide dynamic adjustment range of the hydraulic system, multiple experiments and adjustments were conducted, and it was determined that a negative pressure of 7.5 MPa was optimal for the harvesting operation. The average time for clamping and cutting the banana stalk was approximately 4 s. These data are shown in Table 4. As illustrated in Figure 4, the stability of the end-effector’s operation was determined by its ability to successfully grip and cut the banana stalk and remain suspended in the air while holding the banana peduncle.
3.4. Harvesting Experiment and Optimization
After integrating the modules and constructing the control system, autonomous harvesting tests were conducted with the robot prototype in the plantation base. The harvesting process is shown in Figure 14.
We collected data from 56 trials across bases ➀–➂ before optimization, with 35 successful harvests, yielding a success rate of 62.5%. An analysis of the 21 failed harvests identified four primary reasons:
(1). When sunlight is directly above the depth camera, the camera imaging is prone to overexposure, sometimes failing to capture the surrounding environment, causing the robot to lose its target and be unable to complete the task;
(2). During the depth camera imaging process, small void areas in the depth image can result in missing localization data after the stalk target detection is completed;
(3). Using any arbitrary position as the initial point for robot hand-eye calibration may restrict the subsequent working area of the robot, leading to travel limits that prevent harvesting;
(4). Completing tasks such as recognition, localization, execution, and communication in a serial system during a single workflow can cause signal instability, resulting in harvesting failures.
To address these issues, optimization strategies for the robot’s autonomous harvesting tasks were proposed.
First, a sunshade was designed for the depth camera on the robot body, as shown in Figure 15. The sunshade device measures 251.2 mm in length, 50 mm in width, and 103.2 mm in height, with a top angle of 51.12°. This design prevents direct sunlight exposure while maintaining an unobstructed field of view.
Second, the decision logic in the control system was enhanced to include depth void value judgment. Given that the depth camera might produce voids in certain areas during outdoor use, the depth values of the banana stalk area detected in each frame are now evaluated during the picking process. If a depth void is detected, that frame is skipped, and the robot continues to move forward, allowing the stalk to be imaged in different areas of the frame, thereby reducing the impact of depth voids.
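The depth-void check described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the 20% void threshold, and the convention that invalid depth pixels read as zero (common for consumer depth cameras) are all assumptions.

```python
import numpy as np

def stalk_depth_or_none(depth_frame, bbox, max_void_ratio=0.2):
    """Return the median depth of the detected stalk region, or None
    if the region contains too many depth voids (zero-valued pixels).
    Returning None tells the picking loop to skip this frame and let
    the robot keep moving so the stalk is imaged elsewhere in the frame."""
    x1, y1, x2, y2 = bbox
    region = depth_frame[y1:y2, x1:x2]
    void_ratio = np.count_nonzero(region == 0) / region.size
    if void_ratio > max_void_ratio:
        return None  # depth void detected: skip this frame
    return float(np.median(region[region > 0]))
```

In a per-frame loop, a `None` result simply advances to the next frame instead of committing to an invalid localization.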
Furthermore, the picking area between the target point and the robot body was optimized. As shown in Figure 16a, when the calibration initial position is set relatively far forward, the robotic arm has limited forward travel after the target is identified and can easily reach its mechanical limit. Moving the calibration initial position towards the rear of the robotic arm significantly increases the arm’s operational range and further improves the stability of the robot during harvesting, as illustrated in Figure 16b.
Figure 16c,d illustrate the optimization of the relative distance between the picking-point target and the robot body. As the robot moves, the target gradually enters the camera’s field of view, but at that moment it is not yet at the optimal picking distance; in some trials, picking the target as soon as it appeared caused the robot to reach its movement limit. Therefore, detection continues after the target enters the field of view, and picking does not commence until the target reaches the central area of the image. This brings the harvesting target closer to the robotic arm’s body, avoiding situations where the arm reaches its motion limits due to high harvesting heights or wide operating distances, and further improves the success rate of robotic harvesting operations.
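The center-area trigger amounts to a simple geometric test on the detection box. The sketch below is illustrative only; the function name and the fraction of the frame treated as the "center area" are assumptions, not values from the paper.

```python
def in_picking_zone(bbox, frame_w, frame_h, center_frac=0.5):
    """Trigger picking only once the detected stalk's bounding-box center
    lies within a central window covering `center_frac` of the frame
    in each dimension."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    half_w = frame_w * center_frac / 2
    half_h = frame_h * center_frac / 2
    return abs(cx - frame_w / 2) <= half_w and abs(cy - frame_h / 2) <= half_h
```

While this test is false, the robot keeps moving and detecting; the picking task starts only when it becomes true.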
In addition, harvesting failures were also caused by a lack of coherence among the functional modules in the control system. Driving multiple controllers that operate in parallel from a single serial program can lead to premature or delayed signaling: for instance, the end-effector may begin operating before the robot reaches the target point, or the robot may initiate the transport phase before the fruit stem is fully cut. Therefore, a multi-threaded processing mode was designed in the control system, in which each thread continuously monitors signals in real time. This ensures an immediate response to the corresponding task whenever a signal is issued by any controller.
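The multi-threaded, signal-driven pattern can be sketched as a toy example. The signal names and handlers below are invented for illustration; the authors' controller code is not given in the paper.

```python
import threading
import queue

# Each controller posts its completion signal to a shared queue; a
# dedicated monitor thread dispatches each signal to its handler, so no
# stage starts before its trigger actually arrives.
signals = queue.Queue()
log = []

def monitor(handlers, n_expected):
    for _ in range(n_expected):
        name = signals.get()   # blocks until some controller signals
        handlers[name]()       # respond immediately to that signal

handlers = {
    "arm_at_target": lambda: log.append("start end-effector"),
    "stalk_cut":     lambda: log.append("start transport"),
}

t = threading.Thread(target=monitor, args=(handlers, 2))
t.start()
signals.put("arm_at_target")  # arm controller reports arrival first
signals.put("stalk_cut")      # end-effector reports the cut is complete
t.join()
# log now records the stages in the order the signals were issued
```

Because the queue preserves arrival order and each handler runs only after its signal is received, the transport phase cannot start before the cut is reported complete.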
3.5. Optimized Picking Experiment
A harvest was considered successful when the robot harvested the bananas and transported them to the designated target point. Before the actual trials, more than 50 simulated trials were conducted with the end-effector deactivated to verify the stability of the picking process. We then collected data from 12 picking trials at the banana plantation base in Suixi County, Zhanjiang City, Guangdong Province, completed autonomously in a single session. The weather during the trials was sunny with high light intensity. The plantation used a wide–narrow row planting mode with a wide-row spacing of 3.8 m; the planting density was moderate, with approximately 1 m between adjacent plants, and the banana stalk heights ranged from 2.7 to 3 m. The trials used the optimized robot control system, and the results are shown in Table 5. Only one harvesting attempt failed, due to the robot’s movement constraints when the harvesting height reached 2966 mm.
The experimental results indicated that the optimized robot control system demonstrated good stability. It consistently completed various processes, including detection, positioning, motion calculation, gripping, and cutting. The success rate of harvesting reached 91.69%, with an average harvesting time of 33.28 s.
3.6. Post-Harvest Transport Operations
Figure 17 depicts the handling process after banana harvesting. After the robot autonomously gripped and cut a banana bunch, it kept the bunch in the gripped state and transported it along a pre-set path, consistently at a higher elevation, as shown in Figure 17c. When the bunch reached the highest point of the transport vehicle’s receiving opening, it was gradually lowered onto the transport vehicle. Although the preset procedure successfully harvests and transports the banana bunches, most path points were predetermined. Future research will focus on dual-machine collaborative autonomous operation between the harvesting and transport equipment.
4. Conclusions
This study developed a prototype banana-harvesting robot for the task of automated banana bunch picking. By integrating multiple functional modules, a fully automated harvesting robot system was constructed. Through field experiments, optimization methods for robot harvesting were proposed, and the effectiveness of the robot prototype for banana harvesting was validated. The specific conclusions are as follows:
(1). The developed prototype of the banana-harvesting robot effectively harvested bananas. The end-effector met the harvesting requirements, completing the gripping and cutting tasks with an average time of 4 s at 7.5 MPa negative pressure;
(2). The improved YOLOv5s detection model demonstrated superior performance and was capable of banana stalk detection in various environments. Evaluation metrics included an AP of 99.23% and an F1 score of 0.97. The robot achieved average positioning accuracies for the stalk target on the x, y, and z axes of 5.159 mm, 2.405 mm, and 2.760 mm, respectively;
(3). The optimized banana-harvesting robot exhibited improved stability and achieved a harvesting success rate of 91.69%. The average time from detection to completing the harvesting process was 33.28 s.
These performance metrics validate that the robot prototype can autonomously and effectively harvest bananas.
Author Contributions: Conceptualization, T.C. and L.Z.; methodology, T.C., S.Z., and J.C.; software, T.C. and J.C.; validation, G.F.; formal analysis, G.F.; investigation, S.Z. and L.Z.; resources, L.Z.; data curation, T.C. and Y.C.; writing—original draft preparation, T.C.; writing—review and editing, L.Z.; supervision, S.Z. and L.Z.; project administration, L.Z.; funding acquisition, L.Z. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Data Availability Statement: The data presented in this study are available in the article.
Acknowledgments: The authors thank the editors and reviewers for their constructive comments and support of this work.
Conflicts of Interest: The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 6. Detection network architecture. Note: Conv is a convolution unit; DWConv indicates depthwise separable convolution; SPP stands for spatial pyramid pooling; Shuffle indicates permuting and merging of channel features.
Figure 10. Banana stalk detection results in different scenarios. (a) In Suixi County, Zhanjiang City. (b) In Zengcheng, Guangzhou City. (c) At Zhongkai University of Agriculture and Engineering. (d) At the Guangdong Academy of Agricultural Sciences.
Figure 11. Detection cases of banana stalks. (a) Detection under different lighting conditions. (b) Detection under different occlusion conditions.
Figure 16. Picking area optimization. (a,b) Comparison of movable-area optimization. (c,d) Comparison of target-distance optimization.
Link parameters of the harvesting robot.
Tag | Link Name | Length (mm) |
---|---|---|
1 | AB (l1) | 270 |
2 | BD/CE (l2) | 1260 |
3 | BC (l3) | 360 |
4 | EF/JK (l4) | 2080 |
5 | FL (l5) | 64 |
6 | NH (l6) | 364 |
7 | MN (l7) | 129 |
8 | EJ/FK (l8) | 240 |
9 | IJ (l9) | 354 |
10 | FM (l10) | 209 |
11 | EI (l11) | 240 |
12 | KQ (l12) | 170 |
Test results from different banana orchards.
Banana Orchards | Test Numbers | Recall (%) | Precision (%) | AP (%) | F1 |
---|---|---|---|---|---|
➀ | 281 | 89.58 | 99.31 | 98.79 | 0.94 |
➁ | 284 | 93.95 | 99.62 | 98.90 | 0.97 |
➂ | 260 | 86.83 | 99.50 | 99.12 | 0.93 |
➃ | 323 | 92.97 | 99.44 | 99.13 | 0.96 |
Test results from different models.
Model | AP (%) | F1 | Detection Speed (s/Image) |
---|---|---|---|
SSD | 98.46 | 0.88 | 0.104 |
Faster-RCNN | 98.49 | 0.94 | 0.245 |
YOLOv5s | 98.67 | 0.94 | 0.038 |
Our model | 99.23 | 0.97 | 0.026 |
Test results from different pressures.
Tag | Test Numbers | Pressure (MPa) | Clamping State | Clamping Success Rate | Cutting State | Cutting Success Rate |
---|---|---|---|---|---|---|
1 | 10 | 0–4 | off | 0% | off | 0% |
2 | 10 | 4–4.5 | on | 90% | off | 0% |
3 | 10 | 5.5–6.5 | on | 100% | off | 0% |
4 | 10 | 6.5–7.5 | on | 100% | on | 100% |
Test results of picking experiment.
Test Label | Stalk Diameter (mm) | Fruit String Diameter (mm) | Fruit String Length (mm) | Picking Point Height (mm) | Picking Time (s) |
---|---|---|---|---|---|
1 | 99 | 451 | 1328 | 2881 | 32.75 |
2 | 106 | 503 | 1453 | 2882 | 34.66 |
3 | 100 | 453 | 1330 | 2759 | 25.31 |
4 | 111 | 526 | 1526 | 2737 | 35.49 |
5 | 96 | 440 | 1321 | 2787 | 35.10 |
6 | 101 | 464 | 1363 | 2800 | 35.48 |
7 | 87 | 407 | 986 | 2855 | 41.04 |
8 | 108 | 510 | 1485 | 2966 | 36.59 |
9 | 92 | 414 | 1187 | 2826 | 31.11 |
10 | 98 | 432 | 1334 | 2730 | 32.42 |
11 | 105 | 498 | 1425 | 2758 | 36.85 |
12 | 103 | 487 | 1399 | 2746 | 26.02 |
References
1. Ma, C.; Wang, J.; Zeng, T.; Liang, Q.; Lan, X.; Lin, S.; Fu, W.; Liang, L. Banana individual segmentation and phenotypic parameter measurements using deep learning and terrestrial LiDAR. IEEE Access; 2024; 12, pp. 50310-50320. [DOI: https://dx.doi.org/10.1109/ACCESS.2024.3385280]
2. Fu, L.; Duan, J.; Zou, X.; Lin, G.; Song, S.; Ji, B.; Yang, Z. Banana detection based on color and texture features in the natural environment. Comput. Electron. Agric.; 2019; 167, 105057. [DOI: https://dx.doi.org/10.1016/j.compag.2019.105057]
3. He, J.; Duan, J.; Yang, Z.; Ou, J.; Yu, S.; Xie, M.; Luo, Y.; Wang, H.; Jiang, Q. Method for segmentation of banana crown based on improved DeepLabv3+. Agronomy; 2023; 13, 1838. [DOI: https://dx.doi.org/10.3390/agronomy13071838]
4. Zhou, L.; Yang, Z.; Deng, F.; Zhang, J.; Xiao, Q.; Fu, L.; Duan, J. Banana Bunch Weight Estimation and Stalk Central Point Localization in Banana Orchards Based on RGB-D Images. Agronomy; 2024; 14, 1123. [DOI: https://dx.doi.org/10.3390/agronomy14061123]
5. Wang, Q.; Meng, Z.; Wen, C.; Qin, W.; Wang, F.; Zhang, A.; Zhao, C.; Yin, Y. Grain combine harvester header profiling control system development and testing. Comput. Electron. Agric.; 2024; 223, 109082. [DOI: https://dx.doi.org/10.1016/j.compag.2024.109082]
6. Chen, M.; Jin, C.; Ni, Y.; Yang, T.; Zhang, G. Online field performance evaluation system of a grain combine harvester. Comput. Electron. Agric.; 2022; 198, 107047. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107047]
7. Sun, Y.; Liu, R.; Zhang, M.; Li, M.; Zhang, Z.; Li, H. Design of feed rate monitoring system and estimation method for yield distribution information on combine harvester. Comput. Electron. Agric.; 2022; 201, 107322. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107322]
8. Wang, C.; Li, C.; Han, Q.; Wu, F.; Zou, X. A performance analysis of a litchi picking robot system for actively removing obstructions, using an artificial intelligence algorithm. Agronomy; 2023; 13, 2795. [DOI: https://dx.doi.org/10.3390/agronomy13112795]
9. Yasukawa, S.; Li, B.; Sonoda, T.; Ishii, K. Development of a tomato harvesting robot. Proceedings of the 2017 International Conference on Artificial life and Robotics (ICAROB); Miyazaki, Japan, 19–22 January 2017; pp. 408-411.
10. Xiong, Y.; Ge, Y.; Grimstad, L.; From, P. An autonomous strawberry-harvesting robot: Design, development, integration, and field evaluation. J. Field Robot.; 2020; 37, pp. 202-224. [DOI: https://dx.doi.org/10.1002/rob.21889]
11. Yin, H.; Sun, Q.; Ren, X.; Guo, J.; Yang, Y.; Wei, Y.; Huang, B.; Chai, X.; Zhong, M. Development, integration, and field evaluation of an autonomous citrus-harvesting robot. J. Field Robot.; 2023; 40, pp. 1363-1387. [DOI: https://dx.doi.org/10.1002/rob.22178]
12. Sun, Y.; Sun, J.; Zhao, R.; Li, S.; Zhang, M.; Li, H. Design and system performance analysis of fruit picking robot. Nongye Jixie Xuebao/Trans. Chin. Soc. Agric. Mach.; 2019; 50, pp. 8-14.
13. Scarfe, A.J.; Flemmer, R.C.; Bakker, H.H.; Flemmer, C.L. Development of an autonomous kiwifruit picking robot. Proceedings of the 2009 4th International Conference on Autonomous Robots and Agents; Wellington, New Zealand, 10–12 February 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 380-384.
14. Li, Y.; Wu, S.; He, L.; Tong, J.; Zhao, R.; Jia, J.; Chen, J.; Wu, C. Development and field evaluation of a robotic harvesting system for plucking high-quality tea. Comput. Electron. Agric.; 2023; 206, 107659. [DOI: https://dx.doi.org/10.1016/j.compag.2023.107659]
15. Yang, H.; Li, L.; Gao, Z. Obstacle avoidance path planning of hybrid harvesting manipulator based on joint configuration space. Trans. Chin. Soc. Agric. Eng.; 2017; 33, pp. 55-62.
16. Liu, Y.; Zhang, J.; Lou, Y.; Zhang, B.; Zhou, J.; Chen, J. Soft bionic gripper with tactile sensing and slip detection for damage-free grasping of fragile fruits and vegetables. Comput. Electron. Agric.; 2024; 220, 108904. [DOI: https://dx.doi.org/10.1016/j.compag.2024.108904]
17. Li, Z.; Luo, Y.; Shi, Z.; Xie, D.; Li, C. Design and Research of End-effector for Naval Orange Harvesting. J. Mech. Transm.; 2020; 44, pp. 67-73.
18. Li, M.; Liu, P. A bionic adaptive end-effector with rope-driven fingers for pear fruit harvesting. Comput. Electron. Agric.; 2023; 211, 107952. [DOI: https://dx.doi.org/10.1016/j.compag.2023.107952]
19. Xie, H.; Kong, D.; Wang, Q. Optimization and experimental study of bionic compliant end-effector for robotic cherry tomato harvesting. J. Bionic Eng.; 2022; 19, pp. 1314-1333. [DOI: https://dx.doi.org/10.1007/s42235-022-00202-3]
20. Cai, S.; Pan, L.; Bao, G.; Bai, W.; Yang, Q. Pneumatic webbed soft gripper for unstructured grasping. Int. J. Agric. Biol. Eng.; 2021; 14, pp. 145-151. [DOI: https://dx.doi.org/10.25165/j.ijabe.20211404.6388]
21. Fu, L.; Duan, J.; Zou, X.; Lin, J.; Zhao, L.; Yang, Z. Fast and accurate detection of banana fruits in complex background orchards. IEEE Access; 2020; 8, pp. 196835-196846. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3029215]
22. Chen, T.; Zhang, R.; Zhu, L.; Zhang, S.; Li, X. A method of fast segmentation for banana stalk exploited lightweight multi-feature fusion deep neural network. Machines; 2021; 9, 66. [DOI: https://dx.doi.org/10.3390/machines9030066]
23. Wu, F.; Duan, J.; Chen, S.; Ye, Y.; Ai, P.; Yang, Z. Multi-target recognition of bananas and automatic positioning for the inflorescence axis cutting point. Front. Plant Sci.; 2021; 12, 705021. [DOI: https://dx.doi.org/10.3389/fpls.2021.705021]
24. Wu, F.; Duan, J.; Ai, P.; Chen, Z.; Yang, Z.; Zou, X. Rachis detection and three-dimensional localization of cut off point for vision-based banana robot. Comput. Electron. Agric.; 2022; 198, 107079. [DOI: https://dx.doi.org/10.1016/j.compag.2022.107079]
25. Cai, L.; Liang, J.; Xu, X.; Duan, J.; Yang, Z. Banana pseudostem visual detection method based on improved YOLOV7 detection algorithm. Agronomy; 2023; 13, 999. [DOI: https://dx.doi.org/10.3390/agronomy13040999]
26. Fu, L.; Yang, Z.; Wu, F.; Zou, X.; Lin, J.; Cao, Y.; Duan, J. YOLO-Banana: A lightweight neural network for rapid detection of banana bunches and stalks in the natural environment. Agronomy; 2022; 12, 391. [DOI: https://dx.doi.org/10.3390/agronomy12020391]
27. Fu, G.; Zhu, L.; Zhang, S.; Wu, R.; Huang, W. Research on solving inverse kinematics of banana picking robot based on Newton’s iteration method. J. Chin. Agric. Mech.; 2023; 44, 200.
28. Zhang, G.; Cao, H.; Jin, Y.; Zhong, Y.; Zhao, A.; Zou, X.; Wang, H. YOLOv8n-DDA-SAM: Accurate Cutting-Point Estimation for Robotic Cherry-Tomato Harvesting. Agriculture; 2024; 14, 1011. [DOI: https://dx.doi.org/10.3390/agriculture14071011]
29. Chen, T.; Li, H.; Chen, J.; Zeng, Z.; Han, C.; Wu, W. Detection network for multi-size and multi-target tea bud leaves in the field of view via improved YOLOv7. Comput. Electron. Agric.; 2024; 218, 108700. [DOI: https://dx.doi.org/10.1016/j.compag.2024.108700]
30. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 4700-4708.
31. Han, K.; Wang, Y.; Chen, H.; Chen, X.; Guo, J.; Liu, Z.; Tang, Y.; Xiao, A.; Xu, C.; Xu, Y. et al. A survey on vision transformer. IEEE Trans. Pattern Anal. Mach. Intell.; 2022; 45, pp. 87-110. [DOI: https://dx.doi.org/10.1109/TPAMI.2022.3152247]
32. Han, K.; Wang, Y.; Guo, J.; Yang, Y.; Wu, E. Vision GNN: An image is worth graph of nodes. Adv. Neural Inf. Process. Syst.; 2022; 35, pp. 8291-8303.
33. Zhu, X.; Lyu, S.; Wang, X.; Zhao, Q. TPH-YOLOv5: Improved YOLOv5 based on transformer prediction head for object detection on drone-captured scenarios. Proceedings of the IEEE/CVF International Conference on Computer Vision; Montreal, QC, Canada, 11–17 October 2021; pp. 2778-2788.
34. Li, H.; Li, J.; Wei, H.; Liu, Z.; Ren, Q. Slim-neck by GSConv: A better design paradigm of detector architectures for autonomous vehicles. arXiv; 2022; arXiv:2206.02424.
35. Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W. et al. YOLOv6: A single-stage object detection framework for industrial applications. arXiv; 2022; arXiv:2209.02976.
36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
37. Zhong, M.; Han, R.; Liu, Y.; Huang, B.; Chai, X.; Liu, Y. Development, integration, and field evaluation of an autonomous Agaricus bisporus picking robot. Comput. Electron. Agric.; 2024; 220, 108871. [DOI: https://dx.doi.org/10.1016/j.compag.2024.108871]
38. Chen, S.; Zou, X.; Zhou, X.; Xiang, Y.; Wu, M. Study on fusion clustering and improved YOLOv5 algorithm based on multiple occlusion of Camellia oleifera fruit. Comput. Electron. Agric.; 2023; 206, 107706. [DOI: https://dx.doi.org/10.1016/j.compag.2023.107706]
39. He, Y.; Zhang, X.; Zhang, Z.; Fang, H. Automated detection of boundary line in paddy field using MobileV2-UNet and RANSAC. Comput. Electron. Agric.; 2022; 194, 106697. [DOI: https://dx.doi.org/10.1016/j.compag.2022.106697]
40. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.-Y.; Berg, A.C. SSD: Single shot multibox detector. Proceedings of the Computer Vision–ECCV 2016: 14th European Conference; Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; Part I, pp. 21-37.
41. Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst.; 2015; 28. [DOI: https://dx.doi.org/10.1109/TPAMI.2016.2577031] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27295650]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The high growth height and substantial weight of bananas present challenges for robots to harvest autonomously. To address the issues of high labor costs and low efficiency in manual banana harvesting, a highly autonomous and integrated banana-picking robot is proposed to achieve autonomous harvesting of banana bunches. A prototype of the banana-picking robot was developed, featuring an integrated end-effector capable of clamping and cutting tasks on the banana stalks continuously. To enhance the rapid and accurate identification of banana stalks, a target detection vision system based on the YOLOv5s deep learning network was developed. Modules for detection, positioning, communication, and execution were integrated to successfully develop a banana-picking robot system, which has been tested and optimized in multiple banana plantations. Experimental results show that this robot can continuously harvest banana bunches. The average precision of detection is 99.23%, and the location accuracy is less than 6 mm. The robot picking success rate is 91.69%, and the average time from identification to harvesting completion is 33.28 s. These results lay the foundation for the future application of banana-picking robots.
1 College of Engineering, South China Agricultural University, Guangzhou 510642, China;
2 College of Innovation and Entrepreneurship, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
3 College of Mechanical and Electrical Engineering, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China
4 College of Automation, Zhongkai University of Agriculture and Engineering, Guangzhou 510225, China