The fight for food security worldwide is a complex issue. The agriculture industry must meet the growing demand for food while ensuring the profitability of farms. Recent challenges such as climate change, the declining availability of agricultural workers, and the rising costs of production resources (e.g., fertilizers, pesticides, herbicides) have further intensified the pressure on farmers. Recognizing the need for more efficient and sustainable farming practices, a new, data-driven movement in agriculture has emerged, leveraging stationary sensors and autonomous agents (such as robots and driverless farming equipment).[1] This approach, known as precision agriculture, enables more accurate information to be collected on plant health, soil quality, farm yield, and other aspects of farming that had previously been tracked manually for decades.[2,3]
Although precision agriculture did not have an immediate impact in its initial 10 years,[4] numerous studies conducted since then have demonstrated the benefits of adopting precision agriculture practices. These practices have increased yield rates[5] and contributed to the sustainability of farms by reducing water usage,[6] minimizing fertilizer application,[7] and decreasing fuel consumption.[8] Additionally, precision agriculture has enabled more efficient utilization of existing farmlands.[9] Academic findings have been further supported by an industry-wide study conducted by the Association of Equipment Manufacturers, which attested to the efficiency gains reported in academic studies.[10]
One significant way precision agriculture facilitates yield increases and resource use reduction is by providing real-time data to farmers for decision-making. This is made possible by the widespread availability of wireless sensors and technological advancements that integrate sensor data into user interfaces. Precision agriculture heavily relies on Internet of Things (IoT)-based systems.[11] IoT has been utilized in various aspects of precision agriculture, including plant monitoring (PM),[12] effective mapping for agricultural machinery,[13] and even crop failure prediction using machine learning.[14] IoT systems in agriculture extend beyond wireless sensors continuously collecting data. Automation using IoT has been present in agriculture for many years, with global positioning system (GPS)-enabled tractors reducing overlaps and trajectory variations.[15] IoT systems rely on a network of sensors placed in different parts of the field to gather information such as the pH and moisture levels of the soil or the photosynthesis levels of leaves.[16] The soil sensors tend to be singular sensor nodes that collectively make up a network, while the leaf sensors consist of handheld monitors. More recent innovations such as flexible sensors, which can be attached directly on top of plants to provide information about specific biological indicators, have been proposed as PM tools.[17] These sensors are typically made up of very thin film layers stacked on top of each other, which provides flexibility and durability without damaging the plant they are attached to. For example, Lu et al. proposed a leaf-worn multisensor system that can monitor both the illumination levels and the humidity of a particular leaf.[18] These sensors are not limited to leaves, however, and can be applied to more irregular surfaces. Dong et al. developed a sensor that can be applied to a spherical surface and measure temperature changes on the surface of a fruit.[19]
In recent years, there has been a growing integration of more independent agents, such as agricultural robots, within IoT frameworks. IoT has already been applied in agricultural robots for adaptive navigation,[20] fruit selection guidance,[21] and weed detection.[22] Agricultural robots themselves have been proposed as mobile sensors, with studies employing both ground-based and aerial robots for field surveillance.[23] Although their use cases show potential, autonomous deployment of agricultural robots is hampered by multifaceted issues. Alatise and Hancke identify four operations in which autonomous robots face the most challenges: navigation, path planning, obstacle avoidance, and localization.[24] These operations become even more difficult when the robot operates in unstructured environments and tend to require complex sensor fusion techniques to function feasibly.[25] Even with most of these challenges resolved, the current autonomous capabilities of agricultural robots are not sufficient for Level 5 complete automation as described by Parasuraman et al.[26] Another shortcoming of autonomous agricultural robots is their reliance on sensors and cameras to perceive the world. These sensors can be affected by a variety of external and internal factors, such as magnetic interference, physical damage while moving, and obstruction of the camera view. This, in turn, makes the robot more reliant on human supervision. In their study of an autonomous phenotyping robot for sorghum plants, Young et al. noted that human supervision during raw data collection was needed due to quality control issues stemming from the occlusion of cameras, movement of the robot on unstructured terrain, and involuntary movements of the camera itself caused by said unstructured terrain.[27] However, they also noted that for the human supervisor to collect and process data efficiently, the robot's computer needed to provide additional support in terms of data organization. Besides operational and hardware/software-related issues, setting up the necessary IoT peripherals and the main computer infrastructure for autonomous robotic systems can be very involved and expensive for many business owners, requiring constant maintenance. These weaknesses render the widespread deployment of robots infeasible in most cases, as a significant investment of time and money must be made to accomplish only one task. It may seem that deploying robots with any level of autonomy is an expensive and laborious endeavor when compared to human workers or fully teleoperated robots. Most of the listed weaknesses of autonomous robots are not issues for most humans and are less of a concern with teleoperated robots. However, these options also have their limitations due to human capabilities. Humans cannot work as long as robots due to fatigue and can make mistakes due to diminished situational awareness or limitations of the user interface during teleoperation.
An example from agriculture further illustrates this point. Agricultural practices vary widely based on factors such as farm size, farm type, crop type, and crop size, necessitating the customization of robots to perform specific tasks in specific environments. One area of agriculture in which these parameters significantly impact robot operations is harvesting. Accessing the fruit itself, for example, poses a challenge since farms often feature a combination of semistructured and unstructured environments, making path planning difficult.[28] Additionally, once the plant is reached, successfully locating the fruit becomes a challenge due to variations in plant height, as well as the shape and location of the fruit. The robot must also ensure that it harvests the fruit without causing damage. While human agricultural workers can seamlessly carry out these tasks, they can only work for a certain amount of time, and in the case of teleoperated robots, the human operators rely on the data and video feed provided by the robot to locate and harvest the fruit.
Human–robot collaboration (HRC) is emerging as one possible approach to designing modern agricultural systems, driven by the trend of leveraging the strengths of both robots and humans. The potential for HRC has been recognized in industrial settings[29] and has also been noted in several agricultural robotics studies.[30,31] In terms of HRC for agricultural applications, some recent examples include a vineyard sprayer robot with human-assisted targeting[31] and robot-assisted human activity recognition for agricultural workers.[32,33] Human activity recognition itself is useful for informing how the robot should respond, whether in a collaborative scenario or to ensure that the robot acts in a “socially aware” manner without jeopardizing the safety of the workers occupying the same space. For the effective integration of HRC in agriculture, several important factors need to be considered. These factors can be practical, such as assessing whether the introduction of robots increases efficiency or poses safety risks.[34] They can also be cognitive and organizational, such as establishing trust among human collaborators, evaluating the costs associated with adoption, addressing perceived threats to job security, and managing the cognitive load of workers during collaboration, among others. Providing a summary of how researchers have addressed these factors and reviewing the current efforts in the field of agricultural HRC will serve as a valuable resource for agricultural stakeholders seeking to understand the current state of the art in HRC.
The review is structured as follows. In the next section, an overview of existing reviews in the fields of agricultural robots, IoT systems, and HRC will be provided. The structure of these reviews will be analyzed, their findings summarized, and the current review's scope distinguished from the existing corpus. Subsequently, the study selection methodology and the process used to scrutinize the selected studies will be described. Following this, a review of the selected studies will be presented. Finally, in the discussion section, general trends from these studies will be highlighted and possible future applications will be covered.
Related Work

The success of an HRC system in agriculture relies heavily on a sound infrastructure that connects all the elements. Without a good communication system between the human collaborator and the robot, decision-making suffers significantly. If the robot does not have the necessary efficiency to perform its tasks, or if the interfaces that allow the human to collaborate with the robot are poorly designed, these will also lead to poor decision-making. The topics of both IoT and robotics in agriculture have attracted significant interest from the research community, and several researchers have attempted to summarize these findings.
In this section, a “review of reviews” has been conducted to identify their foci, the challenges they have identified for both IoT and HRC in agriculture, the potential solutions they present, and the future research directions they propose. This is by no means a rigorous meta-analysis of existing reviews; rather, it aims to highlight existing efforts in this field and identify the gaps in their coverage. The findings are summarized in Table 1.
Table 1. Main contributions and challenges identified in reviews about IoT, robotics, and HRC in agriculture
References | Area of study | Main contribution | Challenges identified | # of studies included |
Ayaz et al.[35] | IoT | Introduction of IoT concepts for agriculture and the associated hardware, no particular focus on reviewing existing studies | Power issues regarding long-term sensor deployment and other IoT infrastructure, rural farmers being largely unaware of the connectivity potential of IoT | 50a) |
Tzounis et al.[36] | IoT | Introduction of IoT concepts for agriculture and the associated hardware (with a specific table of the most popular sensors and platforms) | Cost of processing and storing data in cloud-based systems, environmental factors affecting wireless communications, IoT systems requiring multiple security protocols to prevent unwanted access | 35a) |
Farooq et al.[37] | IoT | Systematic analysis of existing agricultural IoT studies through a specific quality metric | Sensor exposure to outside elements such as animals, maintenance cost, reliability of sensors being affected by the environment, location of the sensors, establishing interoperability across the different standards and protocols used | 67 |
Talavera et al.[38] | IoT | Analysis of studies for specific application areas and metadata about the studies, evaluation of how IoT technologies are used in the selected studies | Ensuring compatibility with existing infrastructure, fragmented strategies in tackling the security issue, power management, many IoT studies lack real-world applications (mostly prototypes) | 72 |
Jawad et al.[39] | IoT | Introduction of different IoT protocols, review of studies that have used different strategies to reduce power consumption | Communication range and data losses due to environmental factors, tradeoff between delay tolerance of farming data and power consumption, storage of data, and the management of different data streams | 59 |
Garcia et al.[40] | IoT | Overview of how IoT is used in irrigation systems, the specific IoT nodes, and the communication protocols | Primarily the connectivity of underground sensors and the data volume influencing the decision of using one communication protocol over another | 178 |
Shafi et al.[41] | IoT and Drones | Introduction of different wireless sensor technologies and drone platforms for agriculture, short review of how wireless sensors are used in agriculture, demonstration of a proposed drone- and IoT-based precision agriculture system for crop monitoring | Weather variations affecting drone flight and sensor quality, literacy rate of the end users, establishing interoperability across the different standards and protocols used, data management, hardware cost | 20 |
Elijah et al.[42] | IoT | Overview of IoT technologies, application areas, and data analytics in agriculture, benefits of using IoT in agriculture | Lack of business models demonstrating the benefits of using IoT, establishing interoperability across the different standards and protocols used, data management, hardware cost, communication range and data losses due to environmental factors, different regulations regarding the use of farming data | 30a) |
Bac et al.[43] | Agricultural robots | Systematic analysis of harvesting robots through a quality metrics system that involves categories such as fruit localization success, harvesting success, damage rate, and # of fruits tested | Simplifying the robot's task by modifying the plant environment, occlusion from plants, lack of economic analysis of the robot's impact, lack of standardized performance benchmarks, moderate harvest success rate | 50 |
Wang et al.[44] | Agricultural robots | General overview of different harvesting and picking robots and identification of trends in robotic design | Complex harvesting environment, low accuracy of sensors detecting fruits, picking efficiency due to end effector design | 50a) |
Fountas et al.[45] | Agricultural robots | Demonstration of how agricultural robots are used in different application areas | Lack of databases for training algorithms, data transmission method, lack of modularity, processing speed and environmental factors affecting CV algorithms | 79 |
Oliveira et al.[46] | Agricultural robots | Overview and comparison of different robots (and related technologies) used in different application areas | Movement of robots in structured farming terrain, tradeoff between cost and quality in cameras, CV algorithm selection that factors in the needs of the application area and processing power | 62 |
Lytridis et al.[47] | Agricultural robots/HRC | Overview of different cooperation methods in agriculture, including HRC and multiple autonomous robots | Models that effectively control interactions, limited power for autonomous operations, absence of human–multirobot collaboration | 19 |
Benos et al.[48] | HRC | Introduction of safety and ergonomics in the context of HRC in agriculture | Mechanical risks due to undesired physical contact with humans and obstacles, mistakes made from human operating error | 30a) |
Vasconez et al.[49] | HRC | Systematic analysis of HRC studies through whether the studies covered specific topics such as interface design and level of autonomy | Many agricultural tasks not considered in research, cognitive aspects of HRC understudied | 17 |
Adamides and Edan[50] | HRC | Brief coverage of different application areas of HRC, discussion of future research directions of the group | Immature state of autonomous robotic technology and the varied practices in agriculture | 27 |
a)There was no specific study selection process; this is the estimated number of studies included.
Existing Reviews of IoT in Agriculture

The reviews on IoT applications mainly focus on two aspects: application and hardware. Some reviews focus on only one of these, while others cover both. The challenges identified in these reviews can be viewed in Figure 1. Application-based reviews typically begin by introducing relevant concepts, such as precision agriculture and IoT. Then, they provide an overview of the various applications of IoT in agriculture and the corresponding studies. The reviews typically conclude with a discussion of current and future research trends, as well as existing limitations.
In Ayaz et al.'s review, the focus was not on reviewing existing studies but rather on introducing IoT and its applications in smart farming, and providing examples of studies that have developed IoT-based systems for specific applications.[35] The authors covered how IoT can be utilized in different farming practices, including greenhouse farming, vertical farming, and phenotyping. Additionally, the authors delved into the hardware aspect of IoT and explored concepts such as communication methods, drone applications, and food transport. They also discussed the challenges associated with each application area and proposed future directions for research.
Before discussing the applications, Tzounis et al. formally introduced the concept of IoT in their review.[36] They provided an overview of the layered structure of typical IoT systems and the associated technologies, both hardware and communication based, that are commonly utilized in such systems. The authors specifically addressed the applications of IoT in agriculture, including open and controlled field farming, as well as monitoring. They also explored related applications such as livestock management and supply chain tracking. Challenges similar to those identified in Ayaz et al.'s review, such as reliable communication and power consumption, were also recognized by Tzounis et al.
Some application-focused reviews have employed systematic approaches to evaluate research quality and address their research questions. These studies also aim to identify general trends in research. For instance, Farooq et al.'s review utilized four quality measurements: whether the study contributed to IoT in agriculture, whether it presented a solution using IoT methods, whether it was cited by other studies, and the reputation of the publication source.[37] The review first presented the answers to their research questions and subsequently discussed the quality of the included studies. In addition to discussing challenges in IoT for agriculture, the authors also addressed potential pitfalls that studies in the IoT field of agriculture could encounter.
Similarly, Talavera et al. employed four measurements: whether the study proposed a comprehensive solution using IoT, whether it detailed the architecture, whether the paper was unique or related to another study, and whether the authors analyzed the results.[38] However, in their study, these measures were used as inclusion criteria. The authors sought to answer two research questions: the main IoT-related technological solutions in agriculture and the necessary infrastructure to enable these solutions. After addressing these research questions, the rest of the review focused on presenting representative studies for various aspects of IoT in agriculture, such as energy management and monitoring, as well as discussing the current limitations and challenges in the field.
The reviews that concentrate on the hardware aspects of IoT introduce concepts such as the types of sensors used and communication protocols. They then illustrate how the reviewed studies have employed the hardware and for what purposes.
Jawad et al. focused their review on existing energy-efficient solutions in wireless sensor systems for agriculture.[39] They began by introducing different wireless communication protocols utilized in agriculture, such as ZigBee, Bluetooth, Wi-Fi, and GPS/GPRS. Subsequently, they presented studies that address the efficiency issue from two perspectives: power reduction techniques and energy harvesting techniques. The authors also identified the agriculture domain's specific requirements for IoT systems, the associated challenges, and solutions to those challenges, particularly in terms of plant and soil monitoring. Some of the requirements they mentioned were controlling the environmental conditions for consistent data quality, using mobile drones to increase the communication range of the sensors, and establishing wireless channel models to ensure that signal loss due to the environment is minimized.
Garcia et al. conducted a review that specifically focused on the utilization of sensors in various agricultural applications, including water management, weather forecasting, and soil monitoring.[40] For each application, the authors provided additional information on sensor characteristics, such as the measured soil and atmospheric parameters, as well as the specific sensor models used. They also provided an overview of the most commonly employed nodes and communication technologies in agricultural IoT systems.
Shafi et al. followed a structure similar to the review by Jawad et al., introducing nodes and common communication technologies in IoT.[41] However, their review placed particular emphasis on remote-sensing systems based on spectral imaging and the different platforms deployed, including satellites, manned aerial platforms, and unmanned aerial platforms. After presenting an overview of wireless sensing systems used in agriculture, the authors provided a case study centered on a drone-based remote-sensing platform.
Elijah et al. focused on introducing IoT devices and the applications of IoT in agriculture.[42] Similar to Jawad et al. and Shafi et al., they initiated their review with an introduction to IoT hardware and subsequently delved into common application areas. In addition to their hardware review, their analysis includes a section on how data collected through IoT systems can be utilized for different purposes, such as event prediction, decision-making, and protecting farmers against adverse conditions.
Existing Reviews of Agricultural Robotic Applications

One of the earliest reviews conducted in the field of agricultural robotics was carried out by Bac et al.[43] Before examining the selected studies, Bac et al. discussed various aspects of farming, such as the shape of the fruit, the crop type, and the farming environment, and how these can affect robot performance. The review focused on introducing and examining harvesting robots designed for different types of fruits. Several performance factors identified by the authors were used to scrutinize existing studies, including autonomy, success rate in locating and harvesting fruits, operation cycle time, and damage rates. The authors also discussed numerous solutions to enhance the performance of harvesting robots, including HRC.
In a more recent review of harvesting robots, Wang et al. presented the current state of fruit and vegetable picking robots.[44] Their review was structured based on fruit/vegetable types and the corresponding harvesting methods employed. For each robot, the authors provided a brief description of its architecture and algorithms related to locomotion, vision, and harvesting. After reviewing the robots, the authors evaluated the current state of robotic components, such as robotic arms and end effectors, as well as the computer vision (CV) algorithms that facilitate the robots’ movement and harvesting. Their discussion further explored the effects of the farming environment, methods to quantify harvesting performance, and the current challenges faced in the field.
In a much more comprehensive coverage of agricultural robot applications, Fountas et al. reviewed agricultural robots for eight different applications: harvesting, seeding, weeding, disease and insect detection, spraying, plant management, crop scouting, and multipurpose tasks.[45] Due to the extensive volume of existing studies in each application area, the authors selected a representative sample of papers for each category. For each application type, the review began with a description of the specific application and its importance in farming. The authors then introduced the robots and reported their overall performance. The discussion highlighted the limitations of agricultural robots in both general and application-specific cases. The authors also proposed potential solutions and research directions to address these limitations.
Oliveira et al. followed in the footsteps of reviews such as Fountas et al. and Bac et al., providing updated coverage of agricultural robots.[46] The applications covered in their review include land preparation, sowing and planting, plant treatment (including weeding and pesticide deployment), harvesting, and phenotyping. Another objective of their review was to evaluate the robots based on the presence of sensors, robotic arms, CV algorithms, and their locomotion systems. A summary of the challenges identified in these reviews can be seen in Figure 2.
Reviews on HRC in Agricultural Robots

Far fewer reviews focus exclusively on the HRC aspect of agriculture. To the authors' knowledge, there have been only four reviews that cover HRC in agriculture, each focusing on slightly different aspects of HRC.
Lytridis et al. presented how cooperation exists across different agents in agriculture, with a section dedicated to HRC.[47] In this section, Lytridis et al. gave a general overview of existing research and identified the trends within collaborative HRC research in agriculture: addressing sensory limitations and providing direct robotic support for manual labor.
Benos et al. covered the safety and ergonomic aspects of HRC in agriculture.[48] Their review began by introducing a number of important concepts, such as the fundamentals of HRC, general safety considerations in HRC, and how different areas of ergonomics could be applied in the context of HRC, such as using human posture for activity recognition and using subjective risk assessment. Each of these concepts was followed by a summary of how it relates to HRC in agriculture. The purpose of Benos et al.'s review was not to summarize findings from a select number of studies, but instead to provide a narrative of how HRC concepts can be applied in the agricultural context.
The most comprehensive of these reviews is by Vasconez et al.[49] Their review also started with an overview of relevant topics for HRC. Their main method of evaluating studies was to determine whether they covered specific topics such as interface design, level of autonomy, and situational awareness.
The most recent survey on HRC in agriculture was conducted by Adamides and Edan.[50] Their review covered 27 studies and was organized by application area. The discussion of these previous studies served as a foundation for presenting the research the authors were currently engaged in and the future research directions they proposed.
Summary of Reviews

We have observed that each type of review study, although meritorious in its own right, omitted certain topics from its discussion and its coverage of existing studies. In IoT reviews, the focus is on sensors and communication methods within IoT systems. The use of agricultural robots has received limited attention, with Ayaz et al. covering them for harvesting and data acquisition and Garcia et al. only for data acquisition. Agricultural robotics reviews contain scarce discussion of HRC, focusing instead mostly on autonomous robotics applications. To the best of our knowledge, only the four reviews that we have included in this study have focused on this subject. As for the weaknesses of the four HRC reviews that we have covered, the main one is the lack of the infrastructural information necessary to construct HRC systems. Robotics professionals who want to design HRC systems need to be able to access the state of the art in terms of algorithms, communication protocols, and sensors used for reliable HRC operations in agriculture.
Our purpose in this review is to fill these gaps in coverage and present a more holistic picture of agricultural robotics operations. To achieve this, we first summarized the current coverage of reviews in this section to highlight what information is readily available for the research community and what is missing. In the following section, we will provide a summary of the types of sensors, software, data analysis methods, and IoT structures that have been used to facilitate HRC. We aim for our review to serve as a useful summary of the state-of-the-art methods and technologies used for HRC, highlighting general research trends for the research community. Additionally, we intend to inform industry professionals and robotics designers about how robots could be integrated into precision agriculture systems via HRC and the necessary IoT infrastructure to facilitate this integration. Finally, we aimed to cover a broader range of studies, as neither the review by Vasconez et al. nor that by Lytridis et al. included more than 20 papers, and the most recent review by Adamides and Edan included 27.
Review of IoT Applications in Existing Agricultural HRC Applications

Study Inclusion

For the compilation of studies in this review, two sources were utilized: online databases and existing reviews on HRC/HRI in agriculture. Three databases were searched for studies: Scopus, Web of Science, and IEEE Xplore. The search queries employed for each database were as follows: for Scopus, (TITLE-ABS-KEY (“human robot*” AND agriculture)); for Web of Science, (“human robot*” AND agriculture) in Topic (Title, Abstract, Author Keywords); and for IEEE Xplore, (“human robot*” AND agriculture) in All Metadata and (human robot* in Full Text AND agriculture* in Document Title). The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method was used to review and determine the final set of included studies. The HRC-related reviews mentioned in the previous section[47–50] served as sources for additional studies. After removing duplicates, the following exclusion criteria were applied: studies that did not involve HRI/HRC or where the robot was fully autonomous, studies not related to the agricultural domain, and studies involving wearable robots (such as exoskeletons and prosthetics) that were not related to agriculture. Literature review studies were also excluded from the analysis.
The initial search of the online databases yielded 163 studies; after duplicates were removed and an additional 20 studies were added from the existing reviews, a total of 141 studies remained for the full review. After applying the exclusion criteria, the final number of included studies was 55. The study selection process, the breakdown of the rejection categories, and the number of papers can be found in Figure 3. Figure 4 depicts the distribution of the studies included in this review. More than half of the studies (30 out of 55) were published between 2019 and 2021, showing growing interest in the field of HRC in agriculture. The low numbers for 2022 and 2023 are due to the search cutoff date of May 2023. The included studies were organized under eight different application areas: tractor automation, human activity recognition, spraying/pesticide deployment, harvesting, collaborative target/object detection, navigation of robots in semi-/unstructured environments, PM/farm management (FM)/education (E), and other. These categories and the associated number of studies can be viewed in Figure 5.
Figure 5. The different HRC application areas that are included in the review and the number of corresponding studies. Clockwise: The master–slave system for multiple autonomous tractors. Reprinted with permission.[54] Copyright 2024, Springer Nature. The sensor setup and robot used for the activity recognition study. Reprinted with permission.[32] Copyright 2023, The authors. Licensee MDPI. The view from the robot that is used for human activity recognition in agricultural fields. Reprinted with permission.[58] Copyright 2023, IEEE. Experiment setup for testing Berenstein and Edan's sprayer system. Reprinted with permission.[67] Copyright 2023, Wiley. Adamides et al.'s comparison of two different interfaces for two different agricultural sprayer robots. Reprinted with permission.[31] Copyright 2023, Wiley. Human-aware harvesting robot movement. Reprinted with permission.[71] Copyright 2023, Association for Computing Machinery New York, NY, United States. The collaborative harvesting robot of Khosro-Anjom et al. Reprinted with permission.[73] Copyright 2023, American Society of Agricultural and Biological Engineers. The user interface and Turtlebot that is used in Huang et al.'s strawberry detection system. Reprinted with permission.[86] Copyright 2023, Springer. Object detection algorithm for navigating agricultural fields. Reprinted with permission.[89] Copyright 2023, Elsevier B.V. Automated plant management system of Agostini et al. Reprinted with permission.[95] Copyright 2024, Elsevier. Experiment setup for an autonomous nursery/indoor farming system. Reprinted with permission.[96] Copyright 2023, Elsevier B.V. Agricultural education setup system of Araiza et al. Reprinted with permission.[98] Copyright 2023, IEEE. Rose et al.'s framework for safe autonomous agriculture, with benefits and challenges identified. Reprinted with permission[103]. Copyright 2023, Springer Nature.
Tractor Automation

One of the earliest technological innovations employed in the agricultural context of HRC is GPS, which was commercially introduced for agricultural applications in the 1990s.[51] Initially utilized as a visual aid for human operators, GPS later served as the foundation for autosteering systems.[52] In current tractor automation systems, the operator's role is primarily supervisory. In one of the earliest proposed systems for automated tractors, the human operator was not physically present on the tractor but instead monitored its movements from a workstation.[53] The tractor relied on various sensors for obstacle avoidance and path planning. Zhang et al. developed a system that fully utilizes the autosteering functionality, enabling one operator to control two tractors simultaneously.[54] Their system utilized relative GPS information from the human-controlled tractor to guide the unmanned tractor, and communication was facilitated using an XBee wireless communication module (a minimal sketch of this relative-positioning idea follows Figure 6). The unmanned tractor demonstrated minimal deviations from the path set by the manned tractor. Another group adopted a sensor fusion approach, combining GPS information with a human–machine interface capable of autonomous tractor control via brainwaves.[55] They used an Emotiv EPOC headset to establish the connection between the interface and the tractor and a Trimble R4 receiver to track the tractor's trajectory under different experimental conditions. Their system achieved accuracy levels comparable to manual driving. Their system setup can be viewed in Figure 6. It is worth noting that GPS is not the sole data type utilized for navigation purposes. As most open-field farms can be characterized as unstructured environments, tractors need to be able to avoid numerous obstacles they may encounter. For instance, Yang and Noguchi's system utilized omnistereovision and optical flow algorithms in conjunction with GPS to detect humans.[56] Their camera setup, mounted on top of the tractor, is depicted in Figure 6. Their system successfully detected humans within a range of 4–11 m.
Figure 6. Two different tractor automation systems. Gomez Gil et al.'s proposed system is a fusion of cameras and EMG sensors (top left and right), and Yang and Noguchi's relies on two different cameras (bottom). Reprinted with permission. Copyright 2023, The Authors; licensee MDPI.[55] Copyright 2023, Elsevier B.V.[56]
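To make the relative-GPS guidance idea concrete, the sketch below projects the manned tractor's last two GPS fixes into a local plane and places the unmanned tractor one swath width to the leader's right. This is a minimal illustration, not Zhang et al.'s published controller: the swath width, function names, and equirectangular projection are assumptions.

```python
import math

EARTH_R = 6_371_000.0  # mean Earth radius [m]

def to_local_xy(lat, lon, ref_lat, ref_lon):
    """Equirectangular projection to a field-scale local plane, in meters."""
    x = math.radians(lon - ref_lon) * EARTH_R * math.cos(math.radians(ref_lat))
    y = math.radians(lat - ref_lat) * EARTH_R
    return x, y

def follower_setpoint(prev_fix, last_fix, swath_m=3.0):
    """Place the unmanned tractor one swath to the right of the leader,
    based on the leader's heading from its last two GPS fixes."""
    ref_lat, ref_lon = last_fix
    px, py = to_local_xy(*prev_fix, ref_lat, ref_lon)
    norm = math.hypot(px, py) or 1.0
    hx, hy = -px / norm, -py / norm       # unit vector along the leader's motion
    rx, ry = hy, -hx                      # perpendicular, pointing to the right
    return rx * swath_m, ry * swath_m     # setpoint in the local frame [m]

# Two consecutive GPS fixes from the manned tractor (heading north):
print(follower_setpoint((40.00000, -86.00000), (40.00009, -86.00000)))
# -> approximately (3.0, 0.0): three meters east of the leader
```

In a deployment, such a setpoint would be streamed to the unmanned tractor over the wireless link (XBee in Zhang et al.'s case) and fed to its autosteering controller.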
Human Activity Recognition

In order for robots to seamlessly and safely collaborate with humans, they need to be capable of detecting the various postures and movements assumed by their human counterparts. Detecting different postures enables the robot to anticipate the human collaborator's next actions and complement them.
Seyyedhasani et al., for example, leveraged this in a harvesting application with human pickers and transport robots.[57,58] They modeled their robot's behavior utilizing stochastic states of the human pickers, identified from existing footage using CV algorithms. These states and how the system moves from one state to another are depicted in Figure 7 (a toy simulation of such state transitions follows the figure). They evaluated different scheduling strategies and numbers of robots to determine the best-performing strategy–robot combination.
Figure 7. Seyyedhasani et al.'s state diagram for both the picker's and robot's potential activities in their human harvesting recognition algorithm (left) and the performance of said algorithm evaluated across different numbers of robots present and three different strategies: last come first served, shortest processing time, and longest processing time (right). The top right graph depicts the performance for the morning and the bottom right graph the afternoon. Reprinted with permission.[57,58] Copyright 2023, Elsevier B.V.
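To make the stochastic-state idea concrete, the toy Markov-chain simulation below draws picker states step by step. The states and transition probabilities are invented for illustration; Seyyedhasani et al. estimated theirs from existing field footage with CV algorithms.

```python
import random

# Hypothetical picker states and transition probabilities (illustrative only).
TRANSITIONS = {
    "picking":   [("picking", 0.80), ("tray_full", 0.15), ("idle", 0.05)],
    "tray_full": [("waiting", 1.00)],   # picker requests a transport robot
    "waiting":   [("picking", 0.60), ("waiting", 0.40)],  # depends on robot arrival
    "idle":      [("picking", 0.90), ("idle", 0.10)],
}

def step(state):
    """Draw the next picker state from the current state's distribution."""
    r, acc = random.random(), 0.0
    for nxt, p in TRANSITIONS[state]:
        acc += p
        if r < acc:
            return nxt
    return state

state, log = "picking", []
for _ in range(20):              # one simulated time step per iteration
    log.append(state)
    state = step(state)
print(log)
```

A scheduler could then be evaluated against many such simulated pickers, which loosely mirrors how the strategy–robot combinations were compared.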
Pal et al. also developed a human activity recognition method using CV and machine learning algorithms to identify different states of pickers.[59] In addition to CV, wearable sensors have been utilized for human activity recognition. Tagarakis et al. employed wearable sensors to detect subactivities in a task involving lifting boxes and placing them on top of two different robots.[32] They utilized accelerometer, magnetometer, and gyroscope data from five inertial measurement units (IMUs). The authors manually labeled the actions using the z-axis data in the time domain to detect the activities.
In a study that builds upon Tagarakis et al.'s efforts, Anagnostis et al. applied machine learning approaches to automatically classify the assumed postures.[33] They employed a long short-term memory (LSTM) algorithm to classify the subactivities. The authors reported an overall accuracy rate of 85.6% in predicting subactivities during the lifting task with different robot configurations.
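A minimal sketch of such a classifier is shown below: an LSTM over windowed IMU channels, written in PyTorch. The channel count, window length, number of subactivity classes, and layer sizes are assumptions for illustration, not the configuration reported by Anagnostis et al.

```python
import torch
import torch.nn as nn

# Assumed dimensions: 5 IMUs x 3 sensor types (accelerometer, gyroscope,
# magnetometer) x 3 axes = 45 channels per time step; 128-sample windows.
N_CHANNELS, WINDOW, N_CLASSES = 45, 128, 6

class SubactivityLSTM(nn.Module):
    """Minimal LSTM classifier over windowed IMU streams."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(N_CHANNELS, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_CLASSES)

    def forward(self, x):                  # x: (batch, WINDOW, N_CHANNELS)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # classify from the last hidden state

model = SubactivityLSTM()
dummy = torch.randn(8, WINDOW, N_CHANNELS)     # a batch of 8 IMU windows
print(model(dummy).shape)                      # -> torch.Size([8, 6])
```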
Human activity recognition has also been investigated for aerial agricultural robots. Leipnitz et al. compared the performance of three different approaches, including CNN-based YOLOv3 human detection algorithms, in both crowded urban and agricultural scenarios.[60] The authors tested two modified versions of YOLOv3: one smaller and less robust but with reduced processing time, and another that employed spatial pooling methods. Additionally, they examined how a segmentation and merging method could enhance the performance of the tested YOLOv3 algorithms. The smaller version of YOLOv3 exhibited poor performance in crowded scenarios but offered better results in the agricultural setting, which featured fewer targets and higher image resolution.
Figure 8. Leipnitz et al.'s YOLOv3-based AgriDrone human detection system detecting humans in the field (left) and the associated histogram data depicting system performance. The green bars in the graph represent the improvements made through the proposed algorithm. Reprinted with permission.[60] Copyright 2023, Springer Nature Switzerland AG.
For any action recognition algorithm, utilizing an existing dataset of movements and postures serves as a valuable starting point to obtain more accurate results. Gabriel et al. recorded various postures of humans in outdoor environments under different lighting conditions using a Thorvald robot.[61] To assess the effectiveness of their dataset, they applied two different activity recognition methods: skeleton extraction and bounding box fitting using YOLOv3. The authors reported a recognition rate of over 50% for YOLOv3. Two issues that hindered the accuracy of the algorithms were self-occlusion (humans not facing the camera directly) and distance, with a cutoff point of 25 m for human detection.
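The reported 25 m cutoff can be reasoned about with a simple pinhole-camera model: a person's apparent bounding-box height shrinks with range, so detections can be gated by an estimated distance. The focal length, assumed person height, and confidence threshold below are illustrative assumptions; Gabriel et al. reported only the empirical cutoff.

```python
def estimate_range(bbox_h_px, person_h_m=1.7, focal_px=900.0):
    """Rough range from apparent height via the pinhole model: Z = f * H / h."""
    return focal_px * person_h_m / bbox_h_px

def usable(detection, max_range_m=25.0, min_conf=0.5):
    """Gate detections by confidence and by an estimated range cutoff."""
    return detection["conf"] >= min_conf and \
           estimate_range(detection["h_px"]) <= max_range_m

dets = [{"conf": 0.9, "h_px": 120}, {"conf": 0.8, "h_px": 40}]
print([usable(d) for d in dets])   # -> [True, False]; the second is ~38 m away
```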
Robots Used in Collaborative Agricultural Tasks

Spraying/Pesticide Deployment

Pesticides play a crucial role in ensuring crop health and consistent yields in many agricultural settings. However, pesticide exposure among farmers can lead to serious health problems, such as acute poisoning and significant disruption to bodily functions.[62,63] Furthermore, certain pesticide deployment methods, such as planes and tractors, are not suitable for enclosed farming spaces, forcing more manual methods to be used. Robots present a potential solution to these issues by allowing humans to reduce direct contact with pesticides. However, robots face limitations in accuracy, particularly in terms of image recognition.
To address this challenge, Adamides et al. proposed a teleoperated robotic sprayer called AgriBot and conducted a series of studies to investigate various aspects of human–robot interaction.[64] Since the robot was not fully autonomous in their case, the researchers tested different types of user interfaces to optimize robot control. They utilized a Robotnik Summit XL mobile robot platform but did not provide detailed information on how the platform was modified for teleoperated spraying. They tested eight different configurations, comparing a PC screen versus augmented reality (AR) glasses, single versus multiple screens, and a keyboard versus a PlayStation 3 controller. To assess interface efficacy, they measured quantitative metrics such as the number of sprayed plants and collisions with obstacles. The presence of multiple screens had the most significant impact on performance, as it provided users with an expanded field of vision.
Another study by Adamides et al. investigated user interface elements and human–robot interaction performance for AgriBot.[65] This research added subjective measures, including perceived workload, usability, and presence. The type of controller used significantly influenced all subjective measures, with the PC keyboard being perceived as more usable, inducing less workload, and offering better spatial awareness. The different display and controller combinations are shown in Figure 9. Additionally, Adamides et al. proposed a taxonomy for teleoperated robotic operations, specifically in the context of spraying, by conducting a literature review of user experience and user interface design elements. They categorized the identified guidelines through a card sorting exercise with the participation of 22 human–robot interaction experts.[66] The experts matched the guidelines to design categories through both open and closed card sorting, resulting in eight different categories: platform architecture and scalability, error prevention and scalability, visual design, information presentation, robot state awareness, robot environment/surroundings awareness, and cognitive factors.
Figure 9. Different user interface combinations (HMD: head-mounted display, PS3: PlayStation 3) that were tested in Adamides et al.'s study. Reprinted with permission.[65] Copyright 2024, Society of Photo-Optical Instrumentation Engineers (SPIE).
In another study, Adamides et al. evaluated the design of a newer prototype called SAVSAR and compared it to AgriBot.[31] The two robots are depicted in Figure 10. The evaluation focused on usability and user experience, utilizing the taxonomy developed in their previous study. Four usability experts assessed three different versions of the SAVSAR interfaces: SAVSARv0 (base design), SAVSARv1 (with additional target detection algorithm and target pointing functionality), and SAVSARv2 (displaying information about obstacle distance). SAVSARv2 exhibited the fewest issues. A field experiment was conducted to evaluate SAVSARv2, similar to the previous field experiments for AgriBot, and positive subjective measures for usability and overall user experience were reported. Several solutions were implemented in SAVSARv2 to address the identified issues, including providing camera locations to users, informing users about interface shortcuts, and allowing users to set multiple targets.
Figure 10. The two prototype robots that were evaluated in the study,[31] with the newer prototype SAVSAR on the right. Reprinted with permission.[31] Copyright 2023, Wiley.
Berenstein and Edan developed an autonomous robotic sprayer for HRC.[67] Unlike Adamides et al.'s prototypes, Berenstein and Edan's prototype was autonomous. Their HRC evaluation involved manipulating automation levels for an agricultural spraying task. The experiment evaluated two different marking strategies (solid circular cursor and manual coloring) and four automation levels for marking, ranging from the human manually marking the target to the robot automatically marking the target itself. Performance metrics for the experiment were true positives (successful target hits) and false positives (excessive spraying). Depending on the focus (prioritizing high true positives or low false positives), different ideal combinations were found: collaboration level 3 (robot identifies the target, human supervises) with manual coloring produced the highest true positives, while the lowest false positives were achieved with collaboration level 1 and a solid circular cursor.
Naveen et al. developed a pesticide robotic platform capable of showering, seeding, and plowing tasks, built on an Arduino platform.[68] Their objective was to create a feasible, cost-effective, and practical platform for HRI operations. Although the prototype was described, no field testing results were provided.
While most studies discussed so far focused on ground-based sprayer robots, unmanned aerial vehicles (UAVs) are also being used for pesticide applications. Zhou et al. proposed a human-supervised UAV spraying system that learns from trajectories determined by a human supervisor.[69] They employed a Kalman filter to estimate location and speed parameters obtained from an IMU sensor and magnetometer and then used a Gaussian mixture model approach to estimate the UAV trajectory. A field test of their system involved human teleoperation of the UAV to complete spraying tasks while the UAV learned from the charted trajectories. The results indicated low deviation from the human user-charted trajectories and a low level of flight disturbance.
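As a sketch of the estimation step, the snippet below implements a minimal constant-velocity Kalman filter for a single axis. Zhou et al.'s filter fused IMU and magnetometer data; here the measurement is reduced to a noisy position fix, and the noise covariances are placeholder assumptions.

```python
import numpy as np

dt = 0.1                               # sample period [s]
F = np.array([[1, dt], [0, 1]])        # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])             # we only measure position
Q = np.diag([1e-3, 1e-2])              # process noise (assumed)
R = np.array([[0.5]])                  # measurement noise (assumed)

x = np.zeros((2, 1))                   # initial state estimate
P = np.eye(2)                          # initial estimate covariance

def kf_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the new measurement z
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.0, 0.12, 0.19, 0.31, 0.42]:       # noisy position fixes [m]
    x, P = kf_step(x, P, np.array([[z]]))
print(x.ravel())                               # estimated [position, velocity]
```

The filtered states would then feed the Gaussian mixture model that generalizes the demonstrated spraying trajectories.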
Harvesting

Harvesting is perhaps the most widespread application for automation and robotics in agriculture. As a repetitive task, it provides a very good opportunity for robots to leverage their strengths. However, as Bac et al. noted in their review of harvesting robots, the image detection accuracy of the robots is still a major issue.[43] In a more recent study, Bac et al. reported very high harvesting failure rates for a sweet pepper plant.[70] They listed many limitations, ranging from stem damage caused by end effectors and occluding leaves and fruit clusters affecting detection to calibration issues that affected the positioning of the end effector. As such, relying on robots alone to harvest plants is not yet feasible, necessitating HRC.
Harvesting HRC systems tend to be divided into two groups: systems with robots that have the ability to harvest, and systems with robots that follow their human companions while the harvesting is done by the human. In the current landscape, the latter type of robot is more common. Vasconez et al.'s study evaluated three different movement strategies for an avocado harvesting robot: the robot avoiding, approaching, and following the human.[71] The experiment demonstrating the algorithm that governed these control behaviors is depicted in Figure 11. In a similar study, Lai et al. tested the effectiveness of a robot that carried a tea plucking machine while the human operated the machine manually.[72] The robot was able to follow the human using a camera that detected a marker attached to the human and a LIDAR-based system that detected the row the robot was in. The field results showed that the robot was able to follow the human quite closely, with minimal difference on the x-axis of movement. The different behavior modes of the robot are shown in Figure 12. Robots can also monitor the postures assumed by human workers and adjust their configuration to decrease the workload of their human collaborators. Khosro-Anjom et al. used two wireless sensors attached to the participant's lower back and trunk to control the behavior of a strawberry collection robot (see the sketch after Figure 12).[73] The robot had two main functionalities informed by trunk flexion: if the participant's trunk flexion was less than 10°, the robot got closer to the worker to collect the harvested produce; however, if the worker was in an ergonomically awkward position for a long period, the robot would stay at a distance and warn them to take a break.
Figure 11. Vasconez et al.'s robot motion control strategy. The blue line represents the default social avoidance behavior, the red line is the human hailing the robot, the green line represents the robot getting back to its original trajectory, and finally the magenta line represents another human hailing the robot, and the robot following the human. Reprinted with permission.[71] Copyright 2023, Association for Computing Machinery New York, NY, United States.
Figure 12. The behavior of the tea leaf harvester robot proposed by Lai et al., which is influenced by the positioning of the human collaborator. a) follower state; b) cooperative state; c) waiting state. Reprinted with permission.[72] Copyright 2023, IEEE.
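The trunk-flexion rule of Khosro-Anjom et al. can be sketched as a simple threshold policy. The 10° threshold comes from the study as summarized above; the awkward-posture duration limit and the function and state names are hypothetical.

```python
import time

FLEXION_OK_DEG = 10.0      # threshold reported in the study
AWKWARD_LIMIT_S = 30.0     # hypothetical duration limit, not the paper's value

def robot_action(flexion_deg, awkward_since, now):
    """Decide the collection robot's behavior from a trunk-flexion reading."""
    if flexion_deg < FLEXION_OK_DEG:
        return "approach_worker", None        # safe posture: come collect produce
    if awkward_since is None:
        return "hold_position", now           # start timing the awkward posture
    if now - awkward_since > AWKWARD_LIMIT_S:
        return "keep_distance_and_warn", awkward_since
    return "hold_position", awkward_since

awkward_since = None
for deg in [5.0, 25.0, 40.0]:                 # simulated flexion readings [deg]
    action, awkward_since = robot_action(deg, awkward_since, time.time())
    print(f"{deg:5.1f} deg -> {action}")
```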
Researchers have also envisioned HRC systems in which multiple robots with differentiated roles work toward a singular goal, supervised by a human overseer. Kim et al. used a HARMS (“Human, Agent, Robot, Machine, Sensor”)-based framework to create a seamless operational infrastructure for gathering and collecting agricultural produce.[74] In this system, agents refer to software agents that allow interaction with the lower layers of HARMS; robots are the independent autonomous actors that can communicate with both sensors and machines; and machines refer to agricultural machinery. The network layer corresponds to the sensor element of the framework and the physical communication capability of the different actors within the system. The communication layer corresponds to the methods by which the different actors exchange information, such as computer programming or natural language. The interaction layer concerns the robot's general directives regarding its operations. Finally, the top two levels concern the overarching governance of the actors' behaviors. HARMS aims to make these actors indistinguishable in terms of operational roles using sensors and natural language processing. The hierarchical structure of this framework is depicted in Figure 13. Kim et al. applied this framework to a simulated gathering task of different colored balls. There were four actors: the human controller; “Darwin,” a bipedal humanoid robot able to pick up balls; a “bulldozer”; and a “collector.” The experiments evaluated different combinations of the robot actors to see which would result in the fastest collection of the balls.
Figure 13. Kim et al.'s proposed HARMS framework. Reprinted with permission.[74] Copyright 2023, Techno Press.
They found that using all three robots collaboratively resulted in the lowest operation time. Cheein et al. outlined the necessary elements for HRC in precision agriculture scenarios and gave the example of an olive-harvesting operation the group was working on.[75] The robot served as a mobile collector, and some of the functionalities the authors suggested for their prototype were integrating yield maps of the olive grove to better plan the collection routes and human-aware movement algorithms to ensure movement that is safe and well received by the farm workers. Baxter et al. also discussed the necessary elements of agricultural HRC for harvesting, focusing specifically on the safety aspect.[76] In a pilot survey, they asked 12 pickers who work with the agricultural robot Thorvald about their impressions of working alongside the robot. Although the participants had mixed views about robots in general, the survey results indicated that the workers had a positive experience, viewing the robot as safe and as behaving appropriately.
As studies have noted, appropriate robot behavior for harvesting robots is important for many reasons, including operational efficiency, safety, and acceptance. These concepts need to be applied at a fundamental level so that the robot's decision-making can be calibrated according to these priorities. Rysz and Mehta proposed such a mathematical model for harvesting robots in HRC environments.[77] Using the probabilistic efficiency values for the robot's suboperations during harvesting (fruit localization, detection, path planning, etc.), the model calculated the economic feasibility of implementing HRC at a given time. The purpose of the model was to determine whether HRC operations were economically viable. This framework was then tested in several simulated harvesting operations for different fruits and different efficiency values. The experimental results showed that HRC was more economically viable when the robot's efficiency values were lower.
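A toy version of this reasoning is sketched below: suboperation success probabilities multiply into an overall harvest success rate, and human collaboration pays off if the value of the failures it recovers exceeds its cost. All probabilities, prices, and the independence assumption are illustrative; Rysz and Mehta's actual optimization model is considerably richer.

```python
def harvest_success_prob(p_localize, p_detach, p_transfer):
    """Treat suboperation successes as independent stages."""
    return p_localize * p_detach * p_transfer

def hrc_viable(p_robot, fruits_per_hour, price_per_fruit,
               human_cost_per_hour, robot_cost_per_hour):
    """HRC pays off if recovering the robot's failed harvests is worth
    more than the added hourly cost of the human collaborator."""
    failures = fruits_per_hour * (1.0 - p_robot)
    recovered_value = failures * price_per_fruit
    return recovered_value > (human_cost_per_hour + robot_cost_per_hour)

p = harvest_success_prob(0.85, 0.75, 0.95)   # ~0.61 overall success
print(hrc_viable(p, fruits_per_hour=600, price_per_fruit=0.12,
                 human_cost_per_hour=15.0, robot_cost_per_hour=8.0))
```

Consistent with the study's finding, lowering the suboperation probabilities increases the recovered value and makes the collaborative mode more attractive.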
Collaborative Target/Object Detection

Target detection in agricultural robotics is an essential component of efficient and safe operations. As previously noted, autonomous agricultural robots have reliability issues when it comes to independent target detection.[43] Many factors degrade the quality of target detection for CV that simply do not affect human vision the same way. By leveraging the accuracy of human perception and the longevity of robots, researchers have devised systems that detect targets with high accuracy rates.
To the best of our knowledge, one of the earliest studies that recognized the potential of collaborative target detection was by Bechar and Edan.[78] Using a previously developed robotic platform and image processing algorithm,[79,80] they tested four different automation scenarios that corresponded to the levels defined by Sheridan[81]: human only (HO), human identifying the targets with the help of the algorithm and identifying misses (HO-R), human only canceling false positives and identifying misses while the algorithm identifies targets (HO-Rr), and no human involvement (R). The HO-R and HO-Rr conditions outperformed both the HO and the R conditions by 4% and 14%, respectively, in terms of accuracy. It is notable that this experiment was not done in real time, meaning the images used in the experiment were taken beforehand; hence, the real-time performance of the robot and the algorithm is unknown. The performance of this robotic platform and the objective function was further tested in a subsequent study,[82] and the objective function was formalized in a later study.[30] Using this model, Oren conducted further analyses of the algorithm,[83] testing which of the operational levels identified in the earlier study[78] were optimal given different scenarios, along with an operational cost analysis. Oren determined that the goal of the operation was, in fact, influential on the optimal operational level. For example, if the desire is to limit false positives, Oren suggested using HO-R, HO-Rr, or R depending on the sensitivity levels of the robot and human. For increasing target detection, R and HO-R were suggested, but these suggestions depended on the target probabilities. In both cases, the human-only condition was not optimal, indicating the benefit of using HRC. The cost analysis found that the decision time of the humans affected system performance significantly. Oren's study on determining optimal operational levels was later expanded in a study that explored switching collaboration levels in real time to improve performance.[84] The “switch” relies on three prerequisites: a suboptimal objective function value, a predicted positive increase in the objective function value, and the switch time being shorter than the duration of the collaboration state the system is in. The researchers also tested variations of this algorithm. The results were mixed, as some collaboration/algorithm combinations produced near-100% improvements, while some combinations with human-initiated switching resulted in very low improvement values.
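The three switching prerequisites can be expressed as a simple predicate, sketched below with hypothetical argument names; the original work defines these quantities through its objective function rather than as scalar parameters.

```python
def should_switch(current_value, best_value, predicted_gain,
                  switch_time_s, state_duration_s):
    """Switch collaboration level only if (1) the objective function is
    suboptimal, (2) switching predicts a positive gain, and (3) the switch
    takes less time than the collaboration state it would interrupt."""
    return (current_value < best_value
            and predicted_gain > 0.0
            and switch_time_s < state_duration_s)

print(should_switch(current_value=0.72, best_value=0.90, predicted_gain=0.10,
                    switch_time_s=2.0, state_duration_s=15.0))   # -> True
```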
In a more recent study, Ajidarma and Nof compared the performance of two collaborative detection algorithms for their agricultural robot system, which is part of a larger cyberphysical framework.[ 85 ] The main difference between the two algorithms is that one accumulates human and robot detection data while the other resets each time a new measurement arrives. They tested the algorithms under different scenarios, including different sensor levels, different average error levels for both human and robot operators, and different percentages of conflict between the sensors. The two algorithms reduced potential faults by 86.9% and 66.4%, respectively, with the accumulating algorithm outperforming the resetting one in effectiveness by 30.9%.
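The contrast between the two strategies can be illustrated as follows. The averaging fusion rule and the score inputs are assumptions made for illustration; Ajidarma and Nof's actual algorithms are more elaborate.

```python
# Illustrative contrast between accumulating and resetting detection
# fusion, as described above. The simple averaging rule is an assumed
# placeholder, not the cited algorithm.

class AccumulatingDetector:
    """Keeps a running belief built from all past human/robot scores."""
    def __init__(self):
        self.evidence, self.count = 0.0, 0

    def update(self, human_score, robot_score):
        self.evidence += (human_score + robot_score) / 2.0
        self.count += 1
        return self.evidence / self.count  # belief averaged over history

class ResettingDetector:
    """Discards history; belief comes from the latest measurement only."""
    def update(self, human_score, robot_score):
        return (human_score + robot_score) / 2.0
```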
Huang et al. developed a Turtlebot-based strawberry detection system with a graphical user interface (GUI) that allowed users to interact with the robot.[ 86,87 ] The purpose of their study was to compare the effectiveness of two deep learning-based image detection systems: the faster region-based convolutional neural network (Faster R-CNN) and YOLOv3. Two robots, each equipped with one of the detection systems, were tested with 30 participants on a simulated farm (images of strawberries projected on a wall). The effectiveness of the algorithms was evaluated with subjective measures, such as perceived trust and speed, and objective measures, such as success rates. Although the robot using Faster R-CNN produced more false positives and fewer false negatives, this did not result in a significant preference toward that robot.
Khan et al. proposed a layering method called Fl-SHODANI for image filtering and processing in image annotation tasks.[ 88 ] The filter integrates different image processing layers using neural networks and cloud computing.
Navigation of Robots in Semi-/Unstructured Environments

One of the most significant challenges autonomous robots face is navigating unstructured or semistructured environments. Such environments are the norm in agriculture; hence, studies have investigated how to make robots navigate them while accounting for a wide variety of obstacles, both stationary and moving. As we mentioned for tractors, these solutions often involve multisensor fusion methods. In their study, Reina et al. presented different sensor combinations that could be used for ambient awareness of the robot's surroundings.[ 89 ] LIDAR stereovision, radar stereovision, and thermography stereovision were tested against a regular stereovision method. Each combination provided unique benefits: LIDAR overcame the limitations of poor lighting conditions, radar assisted in obstacle detection by scanning the field, and thermography provided additional features to distinguish living things from inanimate objects and vehicles. An example of the differences between the sensor combinations can be viewed in Figure 14.
Figure 14. The comparison of image classification techniques of Reina et al.'s proposed ambient awareness system. a) LIDAR; b) Stereo; c) LIDAR-stereo. Reprinted with permission.[89] Copyright 2024, Elsevier B.V.
Another prototype, developed by Chang et al., used facial recognition based on the YCbCr technique, which relies on the luminance and chrominance of the image.[ 90 ] Facial detection was chosen as an alternative to more demanding methods such as the multisensor approach covered earlier. The robot and the facial recognition algorithm were tested outdoors to see whether the robot could follow the human operator's face, resulting in a very low deviation between the trajectory of the operator's face and that of the robot.
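As a rough illustration of the general YCbCr approach (not Chang et al.'s exact implementation), the following sketch builds a skin-likelihood mask by thresholding the chrominance channels. The threshold bounds are commonly cited skin-tone values, not the calibrated values from the cited study.

```python
# Minimal YCbCr skin-likelihood mask, illustrating the general
# luminance/chrominance approach described above.
import cv2
import numpy as np

def skin_mask(bgr_image):
    # OpenCV orders the channels Y, Cr, Cb.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Skin pixels cluster in a compact chrominance (Cr, Cb) region,
    # largely independent of luminance (Y); these bounds are the
    # commonly cited Cr in [133, 173] and Cb in [77, 127].
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)
```

Because the chrominance test is a few comparisons per pixel, this style of detection is far cheaper than the multisensor fusion methods discussed above, which is consistent with the authors' motivation.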
Control of robots can be achieved through various types of interface designs. Generally, when a robot is intended to be controlled by a human operator, this is accomplished using input devices, such as keyboards or joysticks, and computer monitors to follow the video feed from the robot. However, some studies have explored newer technology, such as virtual reality (VR) headsets. Chen et al. developed a teleoperation interface by reconstructing the video feed recorded by a Kinect camera in a VR headset.[ 91 ] This is accomplished by two separate algorithms running on a separate server: one for capturing the color and depth of the video feed and another for real-time rendering of the feed in the VR environment. To test the quality of the reconstruction, in terms of both the 3D representation and image quality, the authors evaluated their system in an indoor setting and an outdoor agricultural setting. The reconstruction of single and multiple trees can be viewed in Figure 15. Although the VR reconstruction suffered from problems related to image complexity and environmental conditions such as lighting, it was largely successful.
Figure 15. Reconstruction of singular and multiple trees using Chen et al.'s reconstruction algorithm. The top row left to right shows the depth, RGB, voxel block allocation, and model images, respectively, while the bottom row shows the reconstruction of multiple trees using the proposed algorithm. Reprinted with permission.[91] Copyright 2023, Elsevier B.V.
Another study that used VR for 3D representation was by Xiao et al.[ 92 ] Instead of representing real-time images, their study focused on 3D mapping of spaces for exploration purposes. LiDAR was used to map the environment, and the 3D normal distributions transform technique was used for the 3D representation, which was then transmitted to the VR headset of the robot operator. The operator controlled the robot with the controller of the HTC Vive VR system. In a simulated exploration task in an unstructured environment, the authors reported that operators were comfortable controlling the robot, but the image in the VR environment suffered from fidelity issues.
The robots’ ability to carry large payloads makes them ideal mobile harvesting bins. However, operating close to humans, they need to mind both people and other aspects of their surroundings. Sarmento et al. constructed a prototype able to follow a human using ultra-wideband (UWB) transceivers as distance sensors.[ 93 ] They compared the effectiveness of two filters, an extended Kalman filter and a histogram filter, to determine which performed better. The UWB transceivers on the robot acted as anchors for the single “tags” worn by the humans, whom the robot would follow. The distance estimated from the anchors to the tags was more accurate with the Kalman filter than with the histogram filter, by a margin of nearly 7 cm.
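To illustrate why such filtering helps, the following is a minimal scalar Kalman-style filter for a noisy range measurement. The noise parameters are invented for illustration; Sarmento et al.'s extended Kalman filter operates on the full anchor-tag geometry rather than a single distance.

```python
# Minimal 1D Kalman-style filter for a noisy UWB range measurement,
# sketching why filtering improves anchor-to-tag distance estimates.
# Noise parameters below are illustrative assumptions.

class RangeFilter:
    def __init__(self, initial_range, process_var=0.01, meas_var=0.09):
        self.x = initial_range   # estimated distance (m)
        self.p = 1.0             # variance of the estimate
        self.q = process_var     # how fast the true distance can drift
        self.r = meas_var        # UWB measurement noise variance

    def update(self, measured_range):
        self.p += self.q                   # predict: uncertainty grows over time
        k = self.p / (self.p + self.r)     # Kalman gain: trust in the measurement
        self.x += k * (measured_range - self.x)
        self.p *= (1.0 - k)                # correct: uncertainty shrinks
        return self.x
```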
Plant Monitoring/Farm Management/Education (PM/FM/E)

Many of the proposed agricultural robots are capable of housing different kinds of sensors. As many farming decisions are not bound to a single parameter, robots with multiple sensors can provide farmers with a more holistic picture of the plants, enabling more accurate decisions. One direct example is Dusadeerungsikul and Nof's proposed “Agricultural Robotic System” for monitoring plants for abnormal conditions.[ 94 ] Their system is based on a mobile robot platform with multiple onboard sensors and a manipulator. According to the system protocol, the mobile platform randomly chooses a plant to inspect and, after moving to the selected plant, takes images of its different parts. The images are then sent to a workstation where the user can inspect them and decide whether a treatment is necessary. Compared to a fully manual monitoring method, the system performed at similar levels in terms of response to data received and was faster in terms of information sharing. Additionally, when unplanned events were introduced, the proposed system performed much better, as workers in the manual condition did not have access to real-time information about the new events.
Agostini et al. proposed an automated PM system incorporating AI-based methods, such as cause–effect couples (CECs), to inform farmers' decisions regarding plant care.[ 95 ] The main data source used for analysis was 3D representations of tobacco leaves obtained by both normal and infrared cameras. Through the parameters identified by CECs, the system suggests treatment plans to the operator. The system can also learn from the outcomes of treatments, including user-initiated ones, and modify them accordingly. To test the viability of their system, the researchers used plants subjected to different conditions such as lighting or fertilizer levels. They found that, compared to the plants used to train the CECs, the test plants showed a better growth rate.
Polic et al. proposed an indoor organic farming management system utilizing agricultural drones, equipped with an end effector that takes images of the plants, and ground robots that transport and treat plants.[ 96 ] For treating plants, the authors used compliant manipulator control and soft robotics on the ground robots. To ensure that the robots do not collide with humans, the authors proposed safety zones that would either make the robot slow down or stop completely. However, they also noted the challenges of establishing these zones due to the dynamic nature of operations in indoor agricultural applications.
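A minimal sketch of such a slow/stop safety-zone rule is shown below; the zone radii and speed values are illustrative assumptions, not Polic et al.'s parameters.

```python
# Sketch of a two-zone slow/stop safety rule, as described above.
# All radii and speeds are placeholder values.

def speed_limit(distance_to_human_m, stop_zone_m=0.5, slow_zone_m=2.0,
                slow_speed=0.3, normal_speed=1.0):
    """Return a commanded speed (m/s) based on proximity to a human."""
    if distance_to_human_m <= stop_zone_m:
        return 0.0          # inside the stop zone: halt completely
    if distance_to_human_m <= slow_zone_m:
        return slow_speed   # inside the slow zone: reduce speed
    return normal_speed     # outside both zones: operate normally
```

The difficulty the authors note is that in dynamic indoor operations the appropriate radii are not fixed, which a static rule like this cannot capture.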
Robots have also been proposed for newer farming methods. Paniti et al. constructed a prototype for collaborative hydroponic farming.[ 97 ] The robot, consisting of a mobile base and a manipulator, used a specialized end effector to pick up plants from a “hydro tower” in which they were located. This hydro tower and a plant grown in it can be viewed in Figure 16. However, human collaborators could also work on the hydro tower, making the robot's end effector a potential hazard. The authors therefore tested the possible collision forces of the end effector by implementing a collision testing module from the COVR toolkit. Although the forces they found were below the safety thresholds and the possible collisions were transient, they still recommended safety gloves as a precautionary measure.
Figure 16. Paniti et al.'s proposed hydro pot system (left) and a plant grown in it (right). Reprinted with permission.[97] Copyright 2023, University of Pannonia, Faculty of Engineering.
Another critical aspect of implementing agricultural robots in various farming spaces is ensuring their proper utilization by users. This can be achieved by providing educational opportunities for users to interact with agricultural robots, enabling farmers to learn about the robots' diverse application areas. For example, Araiza et al. proposed a robotic system to teach its users about horticulture and how robots can be used in it.[ 98 ] The system has two main parts: a gaming interface, which delivers the interactive education component and allows control of the robot, and the robot platform itself, which includes a mobile base that can carry pots. The prototype setup can be viewed in Figure 17.
Figure 17. The proposed agricultural education prototype of Araiza et al. Reprinted with permission.[98] Copyright 2023, IEEE.
Sun et al.'s educational platform was based on a social robot and mainly focused on delivering a Q&A-based education system.[ 99 ] Compared to Araiza et al.'s system, it offers less direct demonstration of farming operations, but its main method of interacting with the educational component is more personal than a tablet. Sun et al. observed that different considerations need to be made for different age groups, such as providing additional support for younger and older participants. They also observed that the system attracted more attention from younger audiences than from older ones.
Other Applications

Although they were not conducted with a specific agricultural task in mind, numerous studies cover important elements of HRC in agriculture. Some studies in this section aim to build frameworks for feasible HRC by focusing on IoT issues such as communication protocols and infrastructure. Others, although not experimental, provide important information and commentary regarding HRC in agriculture.
Recent years have seen efforts to establish design guidelines for safe HRC practices across the world. One such effort is COVR, an EU-wide initiative to establish a framework and toolkit for safe HRC irrespective of the application domain.[ 100 ] After establishing a generic life cycle model for robotics development, the authors identified six safety skills to serve as the foundation of their toolkit: maintaining a safe distance, dynamic stability, proper alignment, limiting physical interaction energy, limiting range of movement, and restraining energy. The toolkit offers different types of information, from existing standards to safety assessment tools. It was then field tested via a survey to observe its favorability and usability. Participants reported that the toolkit saved time and that they mainly browsed the protocols and standards sections. The toolkit has already been utilized in the farming domain in Paniti et al.'s hydroponic farming HRC proposal.
Another HRC safety study was conducted by Zheyuan et al.[ 101 ] They proposed an advanced human–robot collaboration model that considers the characteristics of the human collaborator and evaluates the potential risk factors associated with said characteristics. To evaluate the effectiveness of their model, they implemented it for a table clearing task. Their model improved the effectiveness of HRC and performed better than traditional hazard analysis techniques in terms of its prediction ratio.
Rose et al. focused on how responsible implementation of agricultural robotics could be achieved using the four elements identified by Stilgoe et al.: anticipation, reflexivity, inclusion, and responsiveness.[ 102,103 ] The authors argue that anticipatory actions should go beyond technology acceptance and opinion surveys and involve more hands-on experiences, such as “Wizard of Oz” studies in which automation is simulated. For reflexivity, they suggest a broader, iterative design approach that includes general guidelines and is not limited to individual prototypes. Regarding inclusion, they argue that the set of stakeholders involved in the design process should be expanded to include end users, customers, and NGOs that might represent the local community where a farm is located; moreover, the purpose of inclusion should not be strictly “impact” evaluation but solving the larger challenges in agriculture that robots can address. Finally, the authors caution readers to be cognizant of the impacts agricultural robots can have, including harmful side effects such as privacy issues. A summary of the authors' findings can be found in Figure 18.
Figure 18. The benefits, areas of improvement, and potential safety and security issues identified by Rose et al. for autonomous agriculture. Reprinted with permission.[103] Copyright 2023, Springer Nature.
Redhead et al. conducted interviews with nine farmers to gather insights regarding the implementation of agricultural robots.[ 104 ] The interviews yielded several findings, including the need for varied interface complexity, a participatory design approach involving open-source software and prototype development, and the use of small agricultural robots for more precise tasks. In their study, Soujanya et al. suggested different application areas for VR-based telepresence robots.[ 105 ] For agricultural robots, they suggested using head movements to control the direction of the robot and onboard sensors to display information in the VR headset.
An important aspect of HRC is the ability of the robot to communicate seamlessly with its human collaborator. This may involve the human worker asking about current progress, existing issues, or the location of an item of interest. The cobot needs to process the information from its sensors and communicate it to the human in a meaningful way. To achieve this, Mohammed et al. developed a semantic protocol that can be understood in both machine and natural language.[ 106 ] This was made possible by combining existing robotic ontologies, that is, sets of identifiable objects in the real world. The protocol was tested in a simulated virtual environment in which a mobile robot running the protocol explored the environment for a set amount of time and was then asked specific questions, such as the location of a landmark or details of an image taken during a specific time frame.
The weaknesses previously described for robots (limited decision-making capabilities, inability to quickly adapt to their surroundings, and reliability issues that render autonomous operation infeasible) are present within wireless sensor networks (WSNs) as well. However, relying entirely on humans poses additional limitations, such as fatigue or mistakes induced by outside factors or faulty mental models. To help humans deploy and monitor robots in WSNs, Muhammad et al. developed a probabilistic model using the PRISM model checker.[ 107 ] They used a discrete-time Markov chain to model the behavior of the operator and a Markov decision process to model robot behavior. The WSN and the PRISM model were tested under different scenarios, which varied the number of sensors, workload, operator fatigue, collaboration level, and granularity (the time it takes to reach and service a sensor node). Their simulations found that if specific nodes need to be checked more often, human involvement in controlling the robot increases performance significantly. They also found that fatigued operators degraded the performance of robot operations compared to autonomous operations.
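As a toy illustration of the operator-side modeling (a discrete-time Markov chain), the following sketch simulates transitions between invented alert and fatigued states. The states and transition probabilities are placeholders; the actual PRISM models in the cited study are far richer.

```python
# Toy discrete-time Markov chain for operator state, in the spirit of
# the operator model described above. All values are placeholders.
import random

TRANSITIONS = {
    "alert":    {"alert": 0.90, "fatigued": 0.10},
    "fatigued": {"alert": 0.05, "fatigued": 0.95},
}

def step(state):
    """Sample the next operator state from the transition distribution."""
    r, cumulative = random.random(), 0.0
    for next_state, prob in TRANSITIONS[state].items():
        cumulative += prob
        if r < cumulative:
            return next_state
    return state  # guard against floating-point rounding

# Simulate a shift and estimate the fraction of time spent fatigued.
state, fatigued_steps = "alert", 0
for _ in range(1000):
    state = step(state)
    fatigued_steps += state == "fatigued"
print(f"fraction of time fatigued: {fatigued_steps / 1000:.2f}")
```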
The unstructured environment of agricultural spaces contains many different objects: trees and plants are comparatively rigid and stationary, while animals and humans have much more dynamic and nonrigid appearances. Algorithms that detect both types of objects well could enable a more accurate representation of the environment for the robot to operate in. Ruiz Rodriguez et al. used RGB cameras instead of depth cameras to develop a 3D reconstruction algorithm that addresses the problem of representing nonrigid objects viewed from multiple angles.[ 108 ] They tested their proposed algorithm against a traditional rigid-object reconstruction algorithm and also tested whether color improves its performance. The results showed that the proposed algorithm achieved a lower root mean square error than the rigid-object algorithm, but the inclusion of color had minimal effect on performance.
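For reference, the root mean square error used to compare reconstructions can be computed as in the short sketch below, assuming corresponding 3D points are already aligned between the reconstruction and the ground truth.

```python
# Tiny sketch of the root-mean-square-error metric used to compare
# 3D reconstructions, assuming point correspondences are known.
import numpy as np

def rmse(points_a, points_b):
    """RMSE between two (N, 3) arrays of corresponding 3D points."""
    diff = np.asarray(points_a) - np.asarray(points_b)
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))
```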
Discussion

In this review, existing applications of HRC in the agricultural field were surveyed. The initial step involved conducting a review of reviews to explore the information available within the research community and identify research gaps in current literature reviews. Subsequently, existing HRC applications were summarized based on the tasks and/or operations that the robots tend to undertake. Finally, the infrastructural elements of these studies were summarized; these can be viewed in Table 2.
Table 2. Summary of different HRC infrastructures.

References | Application Area | Algorithms and Communication Protocol/Platform | Sensors/Cameras
[53] | Tractor automation | Extended Kalman filter, three-layer artificial neural network, differential GPS | Novatel RT2 GPS receiver and KVH ECore 2000 gyroscope (path planning); two color CCD cameras and Sony D30 pan-tilt color camera (object detection and remote monitoring)
[54] | Tractor automation | Single-track model from Wang et al., proportional derivative controller, controller area network bus (internal electronic components), XBee-Pro (radio interface) | AgGPS 252 GPS receiver (master tractor position)
[55] | Tractor automation | Self-developed C++ program for steering-wheel actuation from signals, Emotiv API | Trimble R4 receiver, EPOC neuroheadset with EEG and EMG
[56] | Tractor automation | Sum of squared differences stereo matching (image reconstruction), Lucas–Kanade method (human detection) | Six Sony progressive scan color CCDs, two RTK-GPSs
[57,58] | Human activity recognition | Finite-state machines (operating states of agents) | Five GoPro HERO6 cameras (picker activities, not mounted on robot)
[59] | Human activity recognition | Mask Region-based CNN, Gunnar-Farneback optical flow method | RealSense depth camera
[33] | Human activity recognition | Long short-term memory | VICON IMeasureU Blue Trident wearable IMUs
[60] | Human activity recognition | CNN, YOLOv3 and two configurations thereof (YOLOv3-tiny, YOLOv3-spp) | N/A
[61] | Human activity recognition | OpenPose deep learning-based pose extractor, YOLOv3 | N/A
[31,64–66] | Spraying/pesticide deployment | For SAVSAR: self-developed pattern recognition algorithm for assisting target selection | For SAVSAR: AXIS P5512 PTZ Dome Network Camera, Logitech Sphere Camera, AXIS M1025 HDTV 1080p network camera, two laser scanners (LIDAR)
[67] | Spraying/pesticide deployment | Previously developed algorithm,[ 111 ] TCP/IP | SICK DX35 laser distance sensor, one color camera, two laser markers
[68] | Spraying/pesticide deployment | Bluetooth | Ultrasonic sensor
[69] | Spraying/pesticide deployment | Adaptive cubature Kalman filter, dynamic movement primitive and Gaussian mixture model for trajectory, GPS, MAVLink | MPU-9250 IMU, LSM303D magnetometer, L1D2°C camera
[71] | Harvesting | Faster R-CNN, TensorFlow API | N/A
[72] | Harvesting | Position-based visual servo | RealSense D435 RGBD camera, RPLIDAR A3 laser scanner
[73] | Harvesting | Dedicated control program mapping posture to robot movement, ZigBee | YEI 3-Space wearable IMU, ultrasonic sensors, XBee Pro 900 MHz module
[74] | Harvesting | TCP/UDP | N/A
[75] | Harvesting | N/A | N/A
[76] | Harvesting | N/A | N/A
[77] | Harvesting | N/A | N/A
[78–80,82] | Target/object detection | Self-developed layers of HRC and objective function | N/A
[83] | Target/object detection | Bechar et al.'s objective function and framework | N/A
[84] | Target/object detection | Bechar et al.'s objective function, four self-developed switching algorithms | N/A
[85] | Target/object detection | Self-developed collaborative detection and prevention of errors and conflicts, HUB collaborative intelligence | N/A
[86,87] | Target/object detection | Faster R-CNN, YOLOv3 | N/A
[88] | Target/object detection | Self-developed filter and image processor | N/A
[89] | Navigation of robots | Mahalanobis distance-based classifier and geometry-based classifier (LIDAR stereovision), 3D point cloud generation (radar stereovision), two-pass dynamic programming (HDR stereovision-thermography) | Bumblebee XB3 and Flea3 (stereovision cameras), 3DSL with SICK LMS111 (LIDAR), custom-built radar, FLIR A615 thermal camera
[90] | Navigation of robots | YCbCr, TTL/CMOS | IC HA052 infrared sensor module
[91] | Navigation of robots | Time of flight, voxel hashing, Kinect SDK | Kinect V2 RGB-D camera
[92] | Navigation of robots | LIDAR odometry and mapping | Velodyne VLP-16 multiline LiDAR, Xsens MTi100 IMU, HTC Vive VR system
[93] | Navigation of robots | Time of arrival, extended Kalman filter (first algorithm), histogram filter (second algorithm) | Ultra-wideband transceivers
[94] | PM/FM/E | Self-developed communications and control protocol | N/A
[95] | PM/FM/E | Superparamagnetic clustering and convex hull approximation for leaf segmentation | Eye-in-hand camera
[96] | PM/FM/E | N/A | N/A
[97] | PM/FM/E | N/A | N/A
[98] | PM/FM/E | BaranC framework | PixieCa, Arduino-based custom sensors, LIDAR
[99] | Other | N/A | N/A
[100] | Other | N/A | N/A
[101] | Other | Proposed advanced human–robot collaboration model | N/A
[102] | Other | N/A | N/A
[103] | Other | N/A | N/A
[105] | Other | N/A | HC-05 Bluetooth module, VR headset with smartphone, L293D IC
[106] | Other | Self-developed RoboSemProc protocol with two parts: AMOR core ontology and YodaQA | N/A
[107] | Other | Discrete-time Markov chain (operator) and Markov decision process (robot) | N/A
[108] | Other | Iterative closest point and two self-developed modifications | RGB-D Microsoft Kinect
For studies on human activity recognition, the primary method of observing humans was through CV algorithms that detect human posture. In pesticide applications, two main approaches were identified: teleoperated robots with different types of user interfaces to improve performance, and a system demonstrating adjustable levels of collaboration depending on the end goal. Harvesting HRC systems almost exclusively featured a robot serving as a mobile collection platform rather than a robot actively working on the plant to harvest produce. Spraying systems mostly involved teleoperation, but the effect of different levels of automation was also investigated, with levels involving HRC outperforming fully automated or fully manual robot control. Collaborative target detection systems utilized both multisensor fusion techniques and adjustable levels of automation. Studies focused on robot navigation examined the representation of the robot's movement in different user interfaces, such as VR, and the ability to follow human collaborators throughout agricultural tasks using sensors and CV. Although HRC has been proposed in many application areas in agriculture, a few notable areas are missing, such as pollination and seeding.
One major trend observed in the research is the scarcity of genuinely collaborative tasks in the reviewed studies. While many proposed prototypes and frameworks were covered, actual collaboration toward a shared goal is far less common in the current research corpus than the volume of prototypes suggests. As noted by Bauer et al. in their literature review of HRC applications, collaboration and interaction are two different concepts, with interaction encompassing a much broader set of scenarios.[ 109 ] Hoc indicated that collaboration has two minimum requirements: each agent has its own goals, and the agents can interfere with each other's actions, for example, by correcting mistakes or preceding each other's actions, where this interference facilitates each agent's activities or the common goal.[ 110 ] Using these criteria as a benchmark, only 34 out of 55 studies involved robust collaboration. Even in terms of interaction, the number of studies that included any real-world interaction was low. This is an important shortcoming of current studies that aim to illuminate HRC and HRI in the agricultural field. Without studies with higher fidelity in task design, the outcomes will be of limited use for future designers. The location of the experiment (laboratory vs field) and the type of participants (university population vs actual farm workers) are also crucial elements to consider, as many studies included in this review were confined to the laboratory with a university population as participants (23 out of 55 included studies were conducted in an outdoor agricultural field or greenhouse). Additionally, future studies should focus on the longitudinal effects of HRC to observe potential issues that might arise after continuous operation, including accuracy or reliability issues on the part of the robot, as well as acceptance issues and low utilization rates. Technological adoption is not a permanent phenomenon, and disadoption of precision agricultural technologies has been noted.[ 111 ]
Another observation about the reviewed HRC studies is the lack of biomechanical assessments of the effects of robots on their human collaborators. While some studies used technologies such as wearable sensors and CV to detect human movement, they mainly focused on activity recognition. Only the study by Khosro-Anjom et al. utilized ergonomic principles to adjust the robot's movement.[ 73 ] Tools such as IMUs, electromyography (EMG) sensors, and motion capture technology can be used to observe the differences robots induce in collaborative tasks compared to humans completing the tasks alone. Understanding changes in posture could provide additional insights into the safety benefits and hazards that robots may introduce in different application areas, and could also help in developing movement algorithms for robots that do not alter workers' gait as they move past the robot. The findings from this review indicate a need for studies with experimental tasks that are truly collaborative in nature and are conducted in the field, to increase both the internal and external validity of research efforts in developing HRC that is helpful and profitable for future adoption. Additionally, the effects of HRC on the humans themselves, in physical, cognitive, and attitudinal terms, are severely understudied in current agricultural HRC research.
This review has certain limitations that need to be acknowledged. As the focus of this review is on how HRC is applied in different application scenarios, it does not scrutinize the prototype designs or the accuracy of the algorithms used; it merely reports them. This decision was made both for the sake of brevity and to keep our intended focus on filling the information gap of how technologies such as robots and IoT are used in HRC applications. A future systematic review that scrutinizes the prototypes, testing scenarios, and software performance would provide a more complete picture of the state of the art in the agricultural HRC field.
Conclusion

The objective of this study was to provide agricultural stakeholders with a comprehensive overview of the current state of the art in HRC applications, focusing on research advancements and the capabilities of developed prototypes and systems. This review aims to serve as a valuable resource for robotics engineers, offering easy access to information on the hardware, software, and algorithms utilized in the construction of HRC systems tailored for the agricultural industry. By presenting the latest advancements and technological foundations, we seek to support stakeholders in making informed decisions regarding the implementation of HRC in agriculture, enabling them to leverage the benefits of this emerging field.
Acknowledgements

This work was partially supported by the US Department of Agriculture (NIFA 2022-67021-36125 and 2022-67021-36124).
Conflict of Interest

The authors declare no conflict of interest.
Abstract
Recent years have witnessed an increased utilization of robotics systems in agricultural settings. While fully autonomous farming holds great potential, most systems fall short of meeting the demands of present-day agricultural operations. The use of human labor or teleoperated robots is also limited due to the physiological constraints of humans and the shortcomings of interfaces used to control robots. To harness the strengths of autonomous capabilities and endurance of robots, as well as the decision-making capabilities of humans, human–robot collaboration (HRC) has emerged as a viable approach. By identifying the various applications of HRC in current research and the infrastructure employed to develop them, interested parties seeking to utilize collaborative robotics in agriculture can gain a better understanding of the possibilities and challenges they may encounter. In this review, an overview of existing HRC applications in the agricultural domain is provided. Additionally, general trends and weaknesses are identified within the research corpus. This review serves as a presentation of the state-of-the-art research of HRC in agriculture for professionals considering the adoption of HRC. Robotics engineers can utilize this review as a resource for easily accessing information on the hardware, software, and algorithms employed in building HRC systems for agriculture.