1. Introduction
Unmanned underwater vehicles (UUVs) are essential tools for underwater observation tasks [1,2]. They are commonly categorized according to their autonomy levels, from remotely operated vehicles (ROVs) with limited autonomy to autonomous underwater vehicles (AUVs) with high autonomy [3,4,5]. Most AUVs can only be employed in simple tasks such as observation, exploration or monitoring, while more complex tasks such as underwater grasping and manipulation can hardly be performed without human intervention [6]. Such tasks rely on UUVs equipped with manipulators [7,8,9], which are mostly remotely operated, since human intelligence is indispensable for understanding complicated mission requirements and controlling multi-degree-of-freedom (DOF) motion.
The harsh underwater environment creates many difficulties for perception and positioning sensors. Some common robot sensors are unavailable underwater, and many underwater sensors are bulky and expensive, which strongly limits the autonomy of AUVs. For example, robust underwater velocity and position measurement over a wide range relies heavily on acoustic devices [10,11]. Water turbidity and the lack of visible reference objects and features also limit the application of vision-based positioning and perception methods [12,13,14]. In many cases, AUVs are specifically tailored for certain types of underwater missions [15,16]. Structures, electronic devices and autonomy programs have to be specially designed according to the mission requirements, which limits their applicability to a wider range of scenarios.
With intervention or remote control by human operators, ROVs are generally more versatile in underwater exploration and manipulation [17]. Low-cost, miniaturized ROVs are becoming popular in ocean exploration as alternatives to expensive AUVs. Humans can make decisions and undertake operations with minimal sensor information to conduct complex underwater missions. New control methods are also increasingly investigated, and practical issues such as communication rate [18] and winch performance [19] are considered in ROV control problems.
However, the performance of an ROV is highly dependent on the human operator's skill. Furthermore, prolonged concentrated work is not only physically demanding but also carries a high risk of accidents. Therefore, shared autonomy approaches have been proposed in the literature to leverage the capabilities of autonomous systems to complement the skills of human operators [20]. Human intelligence is superior in making intelligent and flexible high-level decisions, dealing with unexpected events, and comprehending varying and unstructured environments, while machine autonomy has the advantage of high stability, accuracy and fast response [21,22].
Many attempts have been made to apply shared-control algorithms to land robots [23,24,25], but shared control of UUVs has not been fully investigated. A shared autonomy approach for a low-cost sAUV is developed in [20]. It contains a variable-level assistive control mode with an online estimation of user skill level, and a planner that generates paths avoiding tether entanglement of the sAUV. The method reduces the training requirements and attentional burden of underwater inspection tasks, but accurate control in close-range object-grasping tasks has not been fully discussed. An ROV teleoperation method via human body motion mapping is proposed in [26], where the ROV hydrodynamic forces are converted into haptic feedback and the human body motions are modeled and mapped into gesture controls. Control precision is improved and mental load is reduced, but automatic control has not yet been fused into the system to form a shared-control scheme.
In order to improve the working efficiency and reduce the operational burden on humans in underwater grasping tasks, this paper develops a practical visual-aided shared-control method for a low-cost prototype sAUV. The primary contributions of this paper are three-fold.
(1) The proposed method takes full advantage of the human command for high-level guidance and of the visual servo autonomy for close-range dynamic positioning, constructing a stable, flexible and efficient shared controller for underwater grasping tasks.
(2) A variable attraction field (AF) of the target objects is proposed, whose field intensity is adjusted by the human intention extracted from the remote commands. An arbitration mechanism is then adopted to assign authority weights to the human command and the automatic controller according to the AF intensity.
(3) The shared controller is realized by fusing reference velocities at the kinematic level, which are then tracked by a dynamic controller that accounts for model parameter uncertainties. Both the simulation and the experiment demonstrate a clear increase in grasping efficiency and stability.
The rest of this paper is organized as follows. Section 2 introduces the hardware construction and modeling of the prototype sAUV. Section 3 introduces the detailed design procedure of the shared controller, including both the adjustable AF concept and the model-based dynamic controller considering model uncertainties. Section 4 provides the simulation and experiment results, and Section 5 concludes the paper.
2. System Construction and Modeling
2.1. System Construction
The prototype miniaturized sAUV is shown in Figure 1, and its functional diagram is illustrated in Figure 2. The prototype is approximately 0.7 m long including the claws, 0.35 m wide and 0.25 m high, making it compact and low-cost. It is equipped with six thrusters that provide five-DOF motion for dexterous underwater grasping tasks; nevertheless, it is passively stable in roll and pitch unless a special pose adjustment is required. The sAUV is wirelessly controlled and supports three control modes: full autonomy, remote control and shared control. It can receive remote-control signals from a joystick and an onshore computer.
Due to budget and weight limits, acoustic devices are not used in this prototype version. The sAUV utilizes a forward-looking camera for observation and object searching. The real-time vision program runs on an independent visual processor. All the other sensors and modules are connected to a microcontroller unit (MCU), which serves as the central brain of the electrical system. The inertial measurement unit (IMU) provides Euler angles in the north-east-down (NED) earth coordinate frame. The wireless communication module handles the mutual communication between the sAUV and the external operators, including the joystick and the onshore computer. High-speed communication is established between the MCU and the visual processor to run the shared autonomous control algorithm. When the visual measurement is available, the joystick provides the reference velocities and poses as remote commands; otherwise, the translational motions are directly controlled by the input thrust forces. The control modes are switched according to different task requirements.
2.2. Kinematic Model
Kinematic and dynamic models of the sAUV are established following the classic model introduced by T. I. Fossen [27]. The positions and poses are denoted in the NED frame, and the sAUV velocities are denoted in the body-fixed coordinate frame (B-frame). The schematic of the kinematic model is shown in Figure 3.
The six-DOF kinematic model is written as:
$\dot{\boldsymbol{\eta}} = \boldsymbol{J}(\boldsymbol{\eta})\,\boldsymbol{v}$ (1)
where $\boldsymbol{\eta} = [x, y, z, \phi, \theta, \psi]^T$ is the position and Euler angle vector in the NED frame, and $\boldsymbol{v} = [u, v, w, p, q, r]^T$ is the velocity vector in the B-frame. Given that the sAUV is passively stable in roll and pitch, a decoupled four-DOF motion on the horizontal plane along with an independent vertical motion is adopted, which gives the simplified kinematic model:

$\dot{x} = u\cos\psi - v\sin\psi, \quad \dot{y} = u\sin\psi + v\cos\psi, \quad \dot{z} = w, \quad \dot{\psi} = r$ (2)
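To make the simplified kinematics concrete, the following Python sketch evaluates (2) numerically. It assumes the standard planar-rotation form with roll and pitch fixed at zero; the function name and interface are illustrative and not taken from the original implementation.

```python
import numpy as np

def kinematics_4dof(eta, nu):
    """Simplified four-DOF kinematics of Eq. (2): horizontal-plane motion plus
    an independent vertical motion, with roll and pitch assumed zero.

    eta = [x, y, z, psi]  position and yaw angle in the NED frame
    nu  = [u, v, w, r]    body-fixed velocities and yaw rate
    Returns d(eta)/dt in the NED frame.
    """
    x, y, z, psi = eta
    u, v, w, r = nu
    return np.array([
        u * np.cos(psi) - v * np.sin(psi),  # North velocity
        u * np.sin(psi) + v * np.cos(psi),  # East velocity
        w,                                  # vertical (Down) velocity
        r,                                  # yaw rate
    ])
```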
The automatic visual servo control is performed when the target object is recognized by the camera fixed on the front of the sAUV. The rotation matrix from the B-frame to the camera-fixed coordinate frame (C-frame) is given by:
(3)
The center position of an object in the C-frame is:
$\boldsymbol{P}_o^c = \boldsymbol{R}_b^c \boldsymbol{R}_n^b \left(\boldsymbol{P}_o^n - \boldsymbol{P}^n\right)$ (4)
where $\boldsymbol{R}_b^c$ is given by (3), $\boldsymbol{R}_n^b$ denotes the rotation matrix of the current pose, $\boldsymbol{P}_o^n$ is the object position in the NED frame, and $\boldsymbol{P}^n$ is the sAUV position. A position-based visual error function is utilized for the convenience of the controller design. Assuming that the object is expected to be located at $\boldsymbol{P}_d^c$ in the C-frame in a grasping task, the visual error is then defined as:

$\boldsymbol{e} = \boldsymbol{P}_o^c - \boldsymbol{P}_d^c$ (5)
The visual error in (5) can be directly obtained from the image captured by the camera. Given that the sAUV is self-stabilized in pitch and roll, i.e., the roll and pitch angles are zero, the expansion form of (5) then becomes:
(6)
Using the kinematic model in (2), the time derivative of (6) can be written as:
(7)
For most object-grasping tasks, an object can be grabbed from an arbitrary direction once the translational visual error is eliminated by appropriate motion control. Hence, only the translational component of the visual error is considered in this paper, while the Euler angles are measured by the IMU and controlled independently. Equation (7) is equivalent to the kinematic model but is more straightforward for the visual servo controller design.
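As an illustration of how the translational visual error can be evaluated from the object position, the sketch below follows the structure of (4) and (5) under two stated assumptions: the camera frame is taken as aligned with the body frame (the B-to-C rotation defaults to identity) and roll and pitch are zero. The desired C-frame position (0, 0, 0.4) m is the value used later in Section 4; all names are illustrative.

```python
import numpy as np

def rot_n_to_b(psi):
    """Rotation from the NED frame to the B-frame with zero roll and pitch."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[ c,   s,  0.0],
                     [-s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def visual_error(p_obj_ned, p_sauv_ned, psi,
                 R_c_b=np.eye(3), p_des_c=np.array([0.0, 0.0, 0.4])):
    """Translational position-based visual error in the spirit of Eqs. (4)-(5).

    p_obj_ned  : object centre in the NED frame
    p_sauv_ned : sAUV position in the NED frame
    psi        : current yaw angle (roll and pitch assumed zero)
    R_c_b      : fixed B-to-C rotation (identity assumed for a forward camera)
    p_des_c    : desired object position in the C-frame for grasping
    """
    p_obj_c = R_c_b @ rot_n_to_b(psi) @ (np.asarray(p_obj_ned) - np.asarray(p_sauv_ned))
    return p_obj_c - p_des_c
```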
2.3. Dynamic Model
The system dynamics in the grasping task are acquired from the six-DOF dynamic model [27]:
$\boldsymbol{M}\dot{\boldsymbol{v}} + \boldsymbol{C}(\boldsymbol{v})\boldsymbol{v} + \boldsymbol{D}(\boldsymbol{v})\boldsymbol{v} + \boldsymbol{G}(\boldsymbol{\eta}) = \boldsymbol{\tau}$ (8)
where M, C, D and G are the mass, Coriolis-centripetal, damping and restoring force terms, respectively, with M = MA + MRB, and τ contains the input forces and torques. Since the sAUV is self-stabilized in pitch and roll, a decoupled four-DOF dynamic model can be obtained by simplifying (8). The high-order and nonlinear terms in the viscous hydrodynamic force can also be neglected in low-speed motion, which gives:

(9)
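A minimal sketch of such a simplified dynamic model is given below. It additionally drops the Coriolis-centripetal terms and keeps only linear damping, which is one possible reading of the low-speed simplification; the numerical values are the illustrative ones used in the simulation of Section 4.1.

```python
import numpy as np

# Illustrative four-DOF parameters (surge, sway, heave, yaw) taken from the
# simulation setup in Section 4.1.
M4 = np.diag([21.0, 23.0, 25.0, 1.8])   # rigid-body plus added mass
D4 = np.diag([12.0, 18.0, 20.0, 2.4])   # linear damping

def dynamics_4dof(nu, tau, M=M4, D=D4):
    """Simplified low-speed dynamics in the spirit of Eq. (9):
    M * d(nu)/dt + D * nu = tau, with restoring terms dropped thanks to
    passive roll/pitch stability and nonlinear damping neglected.

    nu  = [u, v, w, r]  body-fixed velocities
    tau = [X, Y, Z, N]  thruster forces and yaw moment
    Returns d(nu)/dt.
    """
    return np.linalg.solve(M, np.asarray(tau) - D @ np.asarray(nu))
```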
The shared controller in the next section is designed based on the kinematic model in (2) and (7), and the dynamic model in (9).
3. Shared Controller Design
The control performance of a pure ROV is closely related to the operating skill of the human operator. Similar to athletic skills, remote operation skill requires extended training and is also highly related to the natural aptitude of the operator. Even for skilled operators, precise and delicate operations can be challenging and physically exhausting.
Aiming to mitigate the dependency of grasping task performance on the individual operators, this paper proposes a shared-control algorithm for the sAUV, which provides more convenient, efficient and accurate pose adjustment during the tasks. The controller is designed based on an adjustable AF concept. The general control diagram is illustrated in Figure 4.
3.1. Shared Control Based on Attraction Field
A visual servo controller at the kinematic level is first designed according to the visual error model in (6) to automatically eliminate the visual error. A Lyapunov function is selected as:
(10)
where $e_\psi$ is the yaw angle error. The time derivative of (10) is:

(11)
The kinematic control law is selected as:
(12)
Substituting (12) into (11), the time derivative of the kinematic Lyapunov function becomes:
(13)
which indicates that the visual error and the yaw angle error exponentially converge to zero as t → ∞. The automatic visual servo at the kinematic level addresses the control problem in the absence of human intervention. However, the remote command from the human operator must be taken into consideration in the shared-control approach.
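Since the exact law of (12) is not reproduced above, the sketch below shows a proportional kinematic visual-servo law in the same spirit: reference body velocities proportional to the translational visual error mapped back to the B-frame, plus a yaw rate proportional to the yaw error. The yaw-induced coupling in the error dynamics is ignored for simplicity, the B-to-C rotation is assumed to be the identity, and the gains follow the simulation values of Section 4.1 (0.2 for translation, 0.4 for yaw).

```python
import numpy as np

def visual_servo_reference(e_c, e_psi, R_c_b=np.eye(3), k_pos=0.2, k_yaw=0.4):
    """Proportional kinematic visual-servo sketch in the spirit of Eq. (12).

    e_c   : translational visual error in the C-frame, Eq. (5)
    e_psi : yaw angle error (current minus desired yaw)
    R_c_b : fixed B-to-C rotation (identity assumed)
    With d(e_c)/dt ~= -R_c_b * v_body (yaw coupling ignored), these choices
    give d(e_c)/dt = -k_pos * e_c and d(e_psi)/dt = -k_yaw * e_psi.
    Returns (reference body velocities [u, v, w], reference yaw rate r).
    """
    v_ref_body = k_pos * (R_c_b.T @ np.asarray(e_c))  # move towards the object
    r_ref = -k_yaw * e_psi                            # turn towards the desired yaw
    return v_ref_body, r_ref
```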
The concept of the attraction field (AF) is therefore introduced as the basis of the shared controller. The AF concept is inspired by the potential field commonly utilized in obstacle avoidance problems, where the obstacles are regarded as spherical or ellipsoidal sets that the robots are prohibited from entering [28]. In the proposed grasping problem, by contrast, the target objects are regarded as sources that generate AFs to attract the sAUV, as illustrated in Figure 5. For a detected object Qi (i = 1, 2, …, n) located at Pi, its AF intensity Ai at an arbitrary point P in the field can be described by:
(14)
where KA is the attraction coefficient, which can be adjusted according to the human intention, qi is the importance factor of the object, and the field is further parameterized by its effective range and a small positive constant. When the sAUV recognizes a single target object, it is naturally attracted by the AF generated by that object within the effective range. When n ≥ 2 objects, denoted by {Qi} (i = 1, 2, …, n), appear simultaneously in the camera view, the potential AF intensities AT,i (i = 1, 2, …, n) at the current position at time T are calculated. As only one object is expected to be grasped at a time, the resultant active AF at time T belongs to only one of these fields. The resultant AF is selected using the following criterion.
Assume that the active AF at the last sample time T − 1 comes from Qi and has an intensity of AT,i at the current time T. If the AF with the current largest intensity, AT,1, satisfies:
$A_{T,1} \geq (1 + H_s)\, A_{T,i}$ (15)
where (1 + Hs) > 1 is the threshold ratio, then the AF at time T is switched to AT,1 and the corresponding source object Q1 is selected. Otherwise, the resultant AF is still calculated using the object Qi at time T.
The above criterion is a straightforward strategy with a hysteresis characteristic. It can simply be interpreted as follows: the resultant AF only changes when the current largest intensity is considerably larger than that of the resultant AF at the last moment. This hysteresis characteristic avoids unstable and frequent switching between different AFs.
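The field evaluation of (14) and the hysteresis selection of (15) can be summarized by the following sketch. The exact functional form of (14) is not reproduced here; an inverse-distance field limited to the effective range is assumed, and the threshold ratio and range values are illustrative.

```python
import numpy as np

def af_intensity(p, p_obj, k_a, q_i=1.0, r_eff=2.0, eps=1e-3):
    """Assumed AF intensity in the spirit of Eq. (14): an inverse-distance field
    scaled by the human-adjustable attraction coefficient k_a and the object
    importance factor q_i, active only inside the effective range r_eff.
    eps is a small positive constant avoiding the singularity at the source."""
    d = np.linalg.norm(np.asarray(p) - np.asarray(p_obj))
    if d > r_eff:
        return 0.0                   # outside the effective range
    return k_a * q_i / (d + eps)

def select_active_af(intensities, prev_idx, h_s=0.2):
    """Hysteresis-based selection of the active AF source, Eq. (15).

    intensities : AF intensities A_{T,i} of all detected objects at time T
    prev_idx    : index of the object generating the active AF at time T-1
    h_s         : threshold ratio margin (illustrative value)
    Switching occurs only when the strongest field exceeds (1 + h_s) times the
    intensity of the previously active field; otherwise the old field is kept.
    """
    best_idx = max(range(len(intensities)), key=lambda i: intensities[i])
    if intensities[best_idx] >= (1.0 + h_s) * intensities[prev_idx]:
        return best_idx              # switch to the clearly stronger field
    return prev_idx                  # keep the previous field (hysteresis)
```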
Based on the AF concept, the shared-control strategy at the kinematic level is designed to minimize the conflict between automatic control and human intention. When executing a grasping task, the arbitrator determines the reference control signal by combining the remote operation input uc,T and the visual servo reference velocity at time T. The authority weight of the shared controller is determined according to the AF intensity, which is given by:
(16)
where AT,i is calculated by (14). The AF attraction coefficient KA in (14) is then utilized to adjust the attraction strength according to the human intention. The remote-control input uc,T at time T is decomposed along the three orthogonal directions of the B-frame, and so is the reference velocity given by the automatic visual servo controller. The deviation degree between the remote command and the automatic reference in each direction is then defined as:
(17)
The deviation degree defined by (17) reflects the human intention of taking more or less control of the sAUV. A larger deviation degree indicates both a stronger intention from the operator to take more control of the system and a stronger divergence between the human command and the automatic controller.
At each sample time T, the derivative of the attraction coefficient is calculated according to the largest deviation, which is given by:
(18)
where DTH1 > 0 is the AF decreasing threshold, DTH2 > 0 is the AF recovery threshold, and the coefficient is bounded by an upper limit. When the AF needs to be switched using Equation (15), the newly updated AF intensity is also calculated using the current coefficient KA. Using the coefficient updated by (18), the AF intensity can be adjusted smoothly by interpreting the remote command. When the operator deliberately provides a remote command that deviates strongly from the attraction, the AF intensity quickly decreases within a couple of seconds to transfer the control authority to the operator. When the human intention coincides with the visual control or deviates only moderately, the AF intensity gradually recovers, which means that the operator needs help from the automatic controller to perform accurate control.
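Putting the pieces of this subsection together, the sketch below shows one possible realization of the deviation degree (17), the attraction-coefficient update (18) and the authority arbitration (16). The normalization, update rates, thresholds and the mapping from AF intensity to authority weight are all assumptions, since the exact expressions are not reproduced here.

```python
import numpy as np

def deviation_degree(u_human, u_servo, u_scale=0.4):
    """Per-axis deviation degree in the spirit of Eq. (17): normalized difference
    between the remote command and the visual-servo reference. u_scale is an
    assumed normalization, e.g. the maximum command velocity."""
    return np.abs(np.asarray(u_human) - np.asarray(u_servo)) / u_scale

def update_attraction_coeff(k_a, dev_max, dt, d_th1=0.5, d_th2=0.2,
                            k_down=1.0, k_up=0.2, k_a_max=1.0):
    """Attraction-coefficient update in the spirit of Eq. (18): k_a decreases
    when the largest per-axis deviation exceeds the decreasing threshold d_th1,
    recovers when it falls below the recovery threshold d_th2, and is clipped
    to its upper bound. All rates and thresholds are assumed values."""
    if dev_max > d_th1:
        k_a -= k_down * dt     # strong disagreement: hand authority to the operator
    elif dev_max < d_th2:
        k_a += k_up * dt       # agreement or small deviation: let autonomy recover
    return float(np.clip(k_a, 0.0, k_a_max))

def shared_reference(u_human, u_servo, af_intensity):
    """Arbitration in the spirit of Eq. (16): the reference velocity is a convex
    combination of the remote command and the visual-servo reference, with the
    autonomy weight assumed to saturate with the AF intensity."""
    w_auto = af_intensity / (af_intensity + 1.0)   # assumed mapping to [0, 1)
    return w_auto * np.asarray(u_servo) + (1.0 - w_auto) * np.asarray(u_human)
```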
3.2. Dynamic Controller
In the inner loop, the dynamic controller is designed to track the shared-control reference velocities. The controller in (16) is extended to the dynamic case by taking the kinematic controllers as virtual control laws. Let αu, αv, αw and αr be the virtual control laws, and introduce the control errors:
(19)
Given that the exact values of the dynamic model parameters are inaccessible in practice, parameter uncertainties are considered in the model-based dynamic controller. According to the model in (8), all the parameters to be estimated are concatenated in a vector denoted by:
(20)
Denote the estimate of this parameter vector with a hat, and define the parameter estimation error as the difference between the estimate and the true value. Combining the virtual control errors and the parameter estimation errors, a Lyapunov function is chosen as:
(21)
The time derivative of Vd is:
(22)
The control law at the dynamic level is chosen as:
(23)
Substituting (23) into (22) yields:
(24)
where:

(25)
Then applying the adaptive law:
(26)
where the adaptation gain is a positive diagonal matrix. Substituting (23) and (26) into (22) yields:

(27)
which indicates that the velocity tracking errors converge to zero as t → ∞. The above dynamic controller guarantees the convergence of the visual and yaw angle errors in the automatic control mode. However, the reference velocities in the shared-control mode are partially provided by the human operator, and thus do not always drive the visual error to zero as t → ∞. In such cases, the virtual control laws in (19) are synthesized by the shared controller expressed in (16), while the dynamic controller still tracks the reference velocities. In general, the designed dynamic control law in (23) guarantees successful tracking of the reference velocities in both the automatic and shared-control modes.
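A compact sketch of an adaptive velocity-tracking law of this type is given below. It assumes the decoupled per-axis model m * d(nu)/dt + d * nu = tau, a certainty-equivalence control law with error feedback, and a gradient adaptive law; the regressor, gains and the exact expressions (23) and (26) of the paper are not reproduced.

```python
import numpy as np

def adaptive_velocity_control(nu, alpha, alpha_dot, theta_hat, gamma, k_tau, dt):
    """Adaptive velocity tracking in the spirit of Eqs. (23) and (26), assuming
    the decoupled model m * d(nu)/dt + d * nu = tau per axis with unknown
    parameters theta = [m_u, m_v, m_w, m_r, d_u, d_v, d_w, d_r].

    nu, alpha : actual and reference (virtual control) velocities [u, v, w, r]
    alpha_dot : time derivative of the reference velocities
    theta_hat : current parameter estimates (length 8)
    gamma     : positive adaptation gains (length 8)
    k_tau     : positive feedback gains, e.g. [20, 20, 20, 10] as in Section 4.1
    Returns (tau, updated theta_hat).
    """
    theta_hat = np.asarray(theta_hat, dtype=float)
    z = np.asarray(nu) - np.asarray(alpha)           # velocity tracking error
    # Regressor of the assumed model: tau = m * alpha_dot + d * nu (per axis)
    Y = np.hstack([np.diag(alpha_dot), np.diag(nu)])
    tau = Y @ theta_hat - np.asarray(k_tau) * z      # certainty-equivalence law plus damping
    theta_hat = theta_hat - dt * np.asarray(gamma) * (Y.T @ z)   # gradient adaptive law
    return tau, theta_hat
```

With this combination, the standard Lyapunov argument (a tracking-error term plus a parameter-error term) yields a non-positive derivative, so the velocity tracking errors converge while the parameter estimates remain bounded, mirroring the property stated above.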
4. Simulation and Experiment
4.1. Simulation
The simulation is conducted in MATLAB based on an approximate model of the sAUV. The model parameters applied in the simulation are M = diag(21, 23, 25, 1.3, 2.1, 1.8) and DA = diag(12, 18, 20, 1.5, 2.5, 2.4), and the restoring term is neglected due to the passive stability in roll and pitch. The initial model parameter estimates in the controller are chosen to have at least 50% error. The controller gains are ku = kv = kw = 0.2, the yaw gain is 0.4, and kτu = kτv = kτw = 20, kτr = 10. The importance factors of all the objects are chosen as qi = 1. The initial position of the sAUV is set at P0 = (−1.5, −2, 1.5), with ψ0 = 0 and ψd = 0.5.
The simulation is conducted on three basic operating tasks, including automatic visual servo, shared control with unskilled control, and shared control with target switch. These three task units are basic issues that need to be addressed during the whole underwater grasping task, which also reveal the major advantages of the proposed shared controller.
The fully automatic visual servo performance is an important foundation for the shared-control mode and is thus validated first in the simulation. In the first task, the sAUV is expected to perform dynamic positioning in front of the target with a desired yaw angle using the visual servo technique. The NED-coordinate position of the target object is set at Pt = (0, 0, 0), and the desired object position in the C-frame is set to (0, 0, 0.4). The simulation results are shown in Figure 6 and Figure 7.
It can be observed from Figure 6 that the visual errors all converge smoothly within the simulation period. Due to the non-zero desired yaw angle, only the curves of ev and z are coincident. Figure 7 shows the translational and yaw velocities during the pose-adjustment process. The velocities change smoothly and gradually decrease as the sAUV approaches the target. The maximum translational velocities are near 0.4 m/s, and the maximum yaw velocity is about 0.25 rad/s, which are reasonable values for the prototype.
The second task mimics a common situation in which the remote command is provided by an unskilled operator. The operator is assumed to have normal observation ability but very poor operation skills, so that the assistive and corrective abilities of the shared-control strategy can be fully revealed. The unskilled operator can recognize the rough deviation direction of the sAUV from the target object and intends to approach and grasp it with rough speed control. Specifically, the operator provides constant reference velocities with correct directions when the position deviation is recognizable, but cannot adjust the velocity magnitudes according to the deviation distance. As in the previous task, the target object is set at Pt = (0, 0, 0), and the desired C-frame position is (0, 0, 0.4). The corresponding results are shown in Figure 8 and Figure 9.
Compared to Figure 6, the results in Figure 8 show a much faster convergence of the visual errors, owing to the remote-control signals provided to approach the target object. Figure 9 compares the actual velocities with the remote-control velocities. During the first 6 to 8 s, the visual errors are large, and the operator provides constant translational velocities of 0.4 m/s to eliminate the errors. These remote-control signals at the beginning are regarded as correct and helpful, and they accelerate the convergence of the visual errors.
When each translational error becomes small, the corresponding remote-control signals of u and w change to the opposite direction, and all the large remote-control velocities are regarded as wrong commands from the unskilled operator. However, as the AF intensity increases when the sAUV approaches the target, the overall velocities are mainly determined by the automatic controller. The harmful disturbances of the rough remote commands are thus eliminated by the shared controller described in (16). Consequently, the results of the second task demonstrate that the shared-control strategy successfully improves the control performance of an unskilled operator.
The third task evaluates the target-switching performance in situations where the operator wants to pick a target object far from the current candidate object. The distracting candidates near the target object are not expected to be picked up immediately, but they generate disturbing AFs during the switching process before the correct target is reached. During the switching process, the operator first provides a constant reference velocity towards the candidate target at (0, 0, 0). When the sAUV is attracted by the candidate target, the operator gives a large escape velocity in v towards the correct target at Pt2 = (0, 0.5, 0). The desired C-frame position is (0, 0, 0.4) for both the candidate and the correct target. The corresponding results are shown in Figure 10 and Figure 11.
The visual error and yaw angle curves in Figure 10 show a rapid and smooth target-switching process. There are obvious abrupt changes in the visual errors ew and eu, but the position coordinates x and y change smoothly, because the visual errors are re-calculated when the active AF is switched from the candidate target to the correct target. Similar rapid changes in the velocities u and v also occur in Figure 11.
Notice that the operator gives a remote-control signal only in the velocity v after 12 s to switch the target. When the reference velocity deviation of (17) in any one of the directions exceeds the threshold value, the attraction coefficient KA rapidly decreases, resulting in a rather low AF intensity near 15 s, as shown in Figure 11. The operator thus has high authority over the sAUV and can quickly switch the target until the reference velocity deviation falls below the recovery threshold. The duration of the low-AF-intensity period is also flexibly controlled by the operator. In general, the results demonstrate good target-switching performance using the proposed shared-control strategy.
4.2. Experiment
An object-grasping experiment is conducted with the prototype sAUV in a water pool. Due to the limited depth of the pool, the sAUV performs a three-DOF planar motion on the water surface, which is still sufficient to verify the shared-control performance. Four plastic balls in two different colors are attached to one side of the pool to serve as target objects, as shown in Figure 12. The sAUV first follows a desired path, stops in front of the target objects, and then grasps two of the objects according to different task requirements. In task one, the sAUV is required to grasp any two of the objects regardless of their colors. In task two, the sAUV is required to grasp two green objects without damaging the red ones. Obviously, task two requires more skillful remote operation.
The grasping tasks are conducted by both a skilled and an unskilled operator, in both the pure remote-control mode and the shared-control mode. Each task is repeated several times and the results are recorded statistically to demonstrate the general efficiency improvement brought by the shared-control algorithm. Specifically, each task is repeated 10 times in each mode by each operator, and the average times are recorded and compared in Figure 13a,b.
Figure 13a shows that the shared-control strategy generally reduces the average operation time for both operators. The time saving is moderate for the skilled operator, because the pure remote-control performance of the skilled operator is already close to that of the automatic system. The time saving for the unskilled operator is more significant, because the shared-control strategy effectively corrects the inappropriate commands given by the operator and provides greater help with the precise pose adjustment before grasping. The time saving in task two is also more significant for both operators, because the shared control is more helpful when the manual operation complexity increases.
Figure 13b compares the time proportion of the visual servo process in the grasping experiment, i.e., the total time needed to stop in front of each object before grasping. This process is regarded as the most challenging one for an unskilled operator. The time-saving effect follows a similar trend to Figure 13a, but is more significant for the visual servo process. Owing to the help of the shared controller, the time needed by the unskilled operator for the visual servo process in task two is reduced by nearly 40%. Besides the time saving, both operators also report a much easier and more relaxed operating experience in the shared-control mode. In conclusion, the proposed shared-control method significantly improves the efficiency of the grasping tasks.
5. Conclusions
A visual-aided shared-control method is proposed and validated in this paper to improve the underwater grasping efficiency of an sAUV. The control algorithm utilizes a variable AF to adjust the authority weights of the human command and the automatic controller, taking full advantage of human intelligence for decision-making and of the automatic system for precise control. Compared to pure remote operation, the proposed shared controller significantly improves the efficiency of underwater grasping tasks and reduces the operating skill required of the operators.
The proposed method provides a valuable reference for controlling various kinds of underwater vehicles. Future work will proceed with algorithm improvements to consider practical issues in underwater environments, including measurement noise, water turbidity, communication delays and other environmental disturbances. Experiments will also be conducted with more complex tasks. The proposed method is believed to have great potential in real applications of underwater exploration and manipulation.
Author Contributions: Conceptualization, T.W.; methodology, T.W. and Z.S.; software, T.W. and F.D.; validation, T.W. and F.D.; writing—original draft, T.W.; funding acquisition, T.W. All authors have read and agreed to the published version of the manuscript.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data will be made available on request.
Conflicts of Interest: The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Structure of the prototype sAUV: (a) Side view with claws; (b) Top view without claws.
Figure 12. Water pool experiment. (a) The sAUV and target objects. (b) Shared control for object grasping.
Figure 13. Comparison of the operating time in the experiment. (a) Operating time for different tasks and operators. (b) Visual servo time reduction percentage with the shared-control mode.
References
1. Sahoo, A.; Dwivedy, S.K.; Robi, P.S. Advancements in the field of autonomous underwater vehicle. Ocean Eng.; 2019; 181, pp. 145-160. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2019.04.011]
2. Wang, T.; Zhao, Q.; Yang, C. Visual navigation and docking for a planar type AUV docking and charging system. Ocean Eng.; 2021; 224, 108744. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2021.108744]
3. Hong, S.; Chung, D.; Kim, J.; Kim, Y.; Kim, A.; Yoon, H.K. In-water visual ship hull inspection using a hover-capable underwater vehicle with stereo vision. J. Field Robot.; 2018; 36, pp. 531-546. [DOI: https://dx.doi.org/10.1002/rob.21841]
4. Peng, Z.; Wang, J.; Han, Q. Path-Following Control of Autonomous Underwater Vehicles Subject to Velocity and Input Constraints via Neurodynamic Optimization. IEEE Trans. Ind. Electron.; 2019; 66, pp. 8724-8732. [DOI: https://dx.doi.org/10.1109/TIE.2018.2885726]
5. Xie, T.; Li, Y.; Jiang, Y.; Pang, S.; Wu, H. Turning circle based trajectory planning method of an underactuated AUV for the mobile docking mission. Ocean Eng.; 2021; 236, 109546. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2021.109546]
6. Huang, H.; Tang, Q.; Li, J.; Zhang, W.; Bao, X.; Zhu, H.; Wang, G. A review on underwater autonomous environmental perception and target grasp, the challenge of robotic organism capture. Ocean Eng.; 2020; 195, 106644. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2019.106644]
7. Birk, A.; Doernbach, T.; Müller, C.A.; Luczynski, T.; Chavez, A.G.; Köhntopp, D.; Kupcsik, A.; Calinon, S.; Tanwani, A.K.; Antonelli, G. et al. Dexterous Underwater Manipulation from Onshore Locations: Streamlining Efficiencies for Remotely Operated Underwater Vehicles. IEEE Robot. Autom. Mag.; 2018; 25, pp. 24-33. [DOI: https://dx.doi.org/10.1109/MRA.2018.2869523]
8. Youakim, D.; Cieslak, P.; Dornbush, A.; Palomer, A.; Ridao, P.; Likhachev, M. Multirepresentation, Multiheuristic A* search-based motion planning for a free-floating underwater vehicle-manipulator system in unknown environment. J. Field Robot.; 2020; 37, pp. 925-950. [DOI: https://dx.doi.org/10.1002/rob.21923]
9. Stuart, H.; Wang, S.; Khatib, O.; Cutkosky, M.R. The Ocean One hands: An adaptive design for robust marine manipulation. Int. J. Robot. Res.; 2017; 36, pp. 150-166. [DOI: https://dx.doi.org/10.1177/0278364917694723]
10. Sarda, E.I.; Dhanak, M.R. A USV-Based Automated Launch and Recovery System for AUVs. IEEE J. Ocean. Eng.; 2017; 42, pp. 37-55. [DOI: https://dx.doi.org/10.1109/JOE.2016.2554679]
11. Palomeras, N.; Vallicrosa, G.; Mallios, A.; Bosch, J.; Vidal, E.; Hurtos, N.; Carreras, M.; Ridao, P. AUV homing and docking for remote operations. Ocean Eng.; 2018; 154, pp. 106-120. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2018.01.114]
12. Yazdani, A.M.; Sammut, K.; Yakimenko, O.; Lammas, A. A survey of underwater docking guidance systems. Robot. Auton. Syst.; 2020; 124, 103382. [DOI: https://dx.doi.org/10.1016/j.robot.2019.103382]
13. Zhang, S.; Zhao, S.; An, D.; Liu, J.; Wang, H.; Feng, Y.; Li, D.; Zhao, R. Visual SLAM for underwater vehicles: A survey. Comput. Sci. Rev.; 2022; 46, 100510. [DOI: https://dx.doi.org/10.1016/j.cosrev.2022.100510]
14. Wang, R.; Wang, X.; Zhu, M.; Lin, Y. Application of a Real-Time Visualization Method of AUVs in Underwater Visual Localization. Appl. Sci.; 2019; 9, 1428. [DOI: https://dx.doi.org/10.3390/app9071428]
15. Palomeras, N.; Peñalver, A.; Massot-Campos, M.; Negre, P.; Fernández, J.; Ridao, P.; Sanz, P.; Oliver-Codina, G. I-AUV Docking and Panel Intervention at Sea. Sensors; 2016; 16, 1673. [DOI: https://dx.doi.org/10.3390/s16101673] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27754348]
16. Kimball, P.W.; Clark, E.B.; Scully, M.; Richmond, K.; Flesher, C.; Lindzey, L.E.; Harman, J.; Huffstutler, K.; Lawrence, J.; Lelievre, S. et al. The ARTEMIS under-ice AUV docking system. J. Field Robot.; 2018; 35, pp. 299-308. [DOI: https://dx.doi.org/10.1002/rob.21740]
17. Khadhraoui, A.; Beji, L.; Otmane, S.; Abichou, A. Stabilizing control and human scale simulation of a submarine ROV navigation. Ocean Eng.; 2016; 114, pp. 66-78. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2015.12.054]
18. Batmani, Y.; Najafi, S. Event-Triggered H∞ Depth Control of Remotely Operated Underwater Vehicles. IEEE Trans. Syst. Man Cybern. Syst.; 2021; 51, pp. 1224-1232. [DOI: https://dx.doi.org/10.1109/TSMC.2019.2896382]
19. Zhao, C.; Thies, P.R.; Johanning, L. Investigating the winch performance in an ASV/ROV autonomous inspection system. Appl. Ocean Res.; 2021; 115, 102827. [DOI: https://dx.doi.org/10.1016/j.apor.2021.102827]
20. Lawrance, N.; Debortoli, R.; Jones, D.; Mccammon, S.; Milliken, L.; Nicolai, A.; Somers, T.; Hollinger, G. Shared autonomy for low-cost underwater vehicles. J. Field Robot.; 2019; 36, pp. 495-516. [DOI: https://dx.doi.org/10.1002/rob.21835]
21. Li, G.; Li, Q.; Yang, C.; Su, Y.; Yuan, Z.; Wu, X. The Classification and New Trends of Shared Control Strategies in Telerobotic Systems: A Survey. IEEE Trans. Haptics; 2023; 16, pp. 118-133. [DOI: https://dx.doi.org/10.1109/TOH.2023.3253856]
22. Liu, S.; Yao, S.; Zhu, G.; Zhang, X.; Yang, R. Operation Status of Teleoperator Based Shared Control Telerobotic System. J. Intell. Robot. Syst.; 2021; 101, 8. [DOI: https://dx.doi.org/10.1007/s10846-020-01289-8]
23. Selvaggio, M.; Cacace, J.; Pacchierotti, C.; Ruggiero, F.; Giordano, P.R. A Shared-Control Teleoperation Architecture for Nonprehensile Object Transportation. IEEE Trans. Robot.; 2022; 38, pp. 569-583. [DOI: https://dx.doi.org/10.1109/TRO.2021.3086773]
24. Li, M.; Song, X.; Cao, H.; Wang, J.; Huang, Y.; Hu, C.; Wang, H. Shared control with a novel dynamic authority allocation strategy based on game theory and driving safety field. Mech. Syst. Signal Proc.; 2019; 124, pp. 199-216. [DOI: https://dx.doi.org/10.1016/j.ymssp.2019.01.040]
25. Li, R.; Li, Y.; Li, S.E.; Zhang, C.; Burdet, E.; Cheng, B. Indirect Shared Control for Cooperative Driving Between Driver and Automation in Steer-by-Wire Vehicles. IEEE Trans. Intell. Transp. Syst.; 2021; 22, pp. 7826-7836. [DOI: https://dx.doi.org/10.1109/TITS.2020.3010620]
26. Xia, P.; You, H.; Ye, Y.; Du, J. ROV teleoperation via human body motion mapping: Design and experiment. Comput. Ind.; 2023; 150, 103959. [DOI: https://dx.doi.org/10.1016/j.compind.2023.103959]
27. Fossen, T.I. Handbook of Marine Craft Hydrodynamics and Motion Control; Wiley: New York, NY, USA, 2011.
28. Muratore, L.; Laurenzi, A.; Hoffman, E.M.; Baccelliere, L.; Kashiri, N.; Caldwell, D.G.; Tsagarakis, N.G. Enhanced Tele-interaction in Unknown Environments Using Semi-Autonomous Motion and Impedance Regulation Principles. Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA); Brisbane, QLD, Australia, 21–25 May 2018; pp. 5813-5820.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Human intelligence has the advantage in making high-level decisions in the remote control of underwater vehicles, while autonomous control is superior for accurate and fast close-range pose adjustment. Combining the advantages of both remote and autonomous control, this paper proposes a visual-aided shared-control method for a semi-autonomous underwater vehicle (sAUV) to conduct flexible, efficient and stable underwater grasping. The proposed method utilizes an arbitration mechanism to assign authority weights to the human command and the automatic controller according to the attraction field (AF) generated by the target objects. The AF intensity is adjusted by interpreting the human intention, and the remote-operation command is fused with a visual servo controller. The shared controller is designed based on the kinematic and dynamic models, and model parameter uncertainties are also addressed. Efficient and stable control performance is validated by both simulation and experiment. Faster and more accurate dynamic positioning in front of the target object is achieved using the shared-control method. Compared to the pure remote operation mode, the shared-control mode significantly reduces the average time spent on grasping tasks for both skilled and unskilled operators.