1 Introduction
In a flexible-link manipulator (FLM), the sensors and actuators are not placed at the same location, i.e. they are non-collocated. In most existing works, tip position information is measured by traditional mechanical sensors such as strain gauges, accelerometers, and encoders. However, these sensors may perform poorly in critical environments due to electromagnetic interference and give a noisy response. A vision sensor overcomes these difficulties by providing a direct measure of the tip-point deflection. In recent years, there has been increasing interest in high-performance control of flexible manipulators using visual servoing (VS) [1].
There are four VS schemes based on the error definition: (i) position-based VS [2], (ii) image-based VS (IBVS) [3], (iii) hybrid VS [4], and (iv) motion-based VS [5]. IBVS is the most preferred scheme for the control of FLMs, as it is more efficient than the other schemes. IBVS also eliminates errors due to sensor modelling and is robust to camera calibration errors. In this work, the eye-in-hand configuration (camera mounted at the tip, observing only the target object) is considered, as it excludes the effect of kinematics on positioning accuracy.
However, as recent research shows, IBVS faces two challenges: (i) the selection of visual features that avoid singularities in the interaction matrix and (ii) the design of a control scheme using the selected visual features such that the FLM tracks the reference trajectory with minimum tracking error. In IBVS, the design and selection of suitable visual features are difficult tasks. IBVS based on moments exploits global image features and thereby eliminates the extraction, matching, and tracking steps of image processing.
Recently, image moments have been widely utilised in VS. Moments were initially applied to visual pattern recognition in computer vision applications in [6]. In [7], image moments are used to select six visual features in an IBVS application to control a six-degree-of-freedom (6-DOF) system. An IBVS scheme using image moments is adopted in [8] to design six features from solid and discrete objects that decouple the DOFs of the system. From an application perspective, a shape moment-based VS architecture is proposed for a redundant manipulator in [9]. In [10], an image moment-based predictive IBVS control architecture is recommended to control a 6-DOF manipulator modelled as a virtual Cartesian motion device. In [11], an IBVS scheme using a virtual spring approach with image moments is presented for controlling the position and orientation of an unmanned aerial vehicle. In [12], an IBVS scheme based on image moments is presented for a 7-DOF robot manipulator. In [13], an image moment-based VS technique is used to find the features (centroid and major axis of the region) for control of a planar flexible robot manipulator. In view of the successful use of image moment-based visual servoing control schemes in different robotic applications, in this work an attempt has been made to extend the approach and design a new image moment-based IBVS controller for tip-tracking control of a two-link flexible manipulator (TLFM).
FLMs are under-actuated systems whose dynamics exhibit non-minimum phase characteristics; designing a tip-tracking control scheme is therefore very challenging due to the unstable internal dynamics [14]. To deal with these issues and effectively control the internal states in different frequency ranges, modal transformation methods, i.e. output redefinition or singular perturbation (SP), can be used before controller design. In the first method, the reflected tip position, or a combination of the tip rate and joint rate, is chosen as a redefined output to obtain minimum phase characteristics. In the second method, the overall dynamics of the FLM is decomposed on two time scales, i.e. slow and fast. The joint motion is relatively slow compared to the flexible modes. Therefore, the tip position related to joint motion is treated as a slow subsystem, while the tip deflection related to the flexible modes acts as a fast subsystem. The slow subsystem, corresponding to the slow time scale, tracks the desired trajectory, and the fast subsystem, corresponding to the fast time scale, minimises the vibration of the links. The SP method is less complex than output redefinition, as it needs fewer measurements (joint position, tip position, and joint velocity) and excludes derivative signals of the flexible vibration modes. Following the two-time scale composite control technique [15–17], a slow subsystem controller is designed first, and then a fast feedback control is added to stabilise the fast subsystem along its equilibrium trajectory. Inspired by the two-time scale separation in the SP approach for flexible arms [15–17], a new vision-based tip-tracking control of TLFM is proposed here.
The last decade has seen a substantial amount of research interest in VS-based control of FLMs, and the advantages of image moment-based features over other local features have motivated their use in many vision-based robotic applications. In [16], the computed torque method (IBVS controller) is used for controlling the slow subsystem. Therefore, in this study, the moment-based IBVS approach is utilised to design a high-performance slow subsystem controller for tip position control. To handle the model uncertainties and disturbances associated with the TLFM, a linear quadratic regulator (LQR) controller has been used in [18] for the flexible dynamics. However, a state observer is needed to estimate the unmeasurable elastic modal coordinates [19]. Therefore, a Kalman filter-based LQR controller is designed as the fast subsystem controller to damp out the deflection while handling the model uncertainty. It also provides robustness towards measurement noise and time delays.
The objective of this work is to design an image Jacobian (interaction) matrix with a minimal set of visual features using image moments for visual control of TLFM that can track the reference trajectory with minimum tracking error. Then, a suitable controller is designed such that when it is applied to the manipulator with coupled rigid and flexible dynamics, the reference trajectory will be tracked with simultaneous control of link vibrations.
The contributions of the study are as follows:
Moment-based visual features have been designed to address the singularity and local minima issue of IBVS.
The new two-time scale IBVS controller is developed for tip-tracking control of TLFM, whose dynamics is decomposed into two-time scale models namely slow and fast models. Subsequently, the moment-based IBVS controller is designed for the slow subsystem, and a Kalman filter-based LQR controller is designed for the fast subsystem for tip-tracking control of FLM.
2 Dynamics of TLFM and camera modelling
2.1 TLFM dynamics
Owing to the distributed link flexure, positioning and tracking of the tip of a TLFM are very difficult. The dynamics of a TLFM is a distributed parameter system described by partial differential equations, i.e. an infinite number of flexible modes are needed for the modelling. However, for the realisation of the controller, a finite-dimensional model is necessary [20], so the higher-order flexible modes must be truncated. Therefore, the dynamics of the TLFM is derived using the Euler–Lagrangian formulation along with the assumed mode method (AMM) [21].
In this work, it is assumed that motion of the TLFM is in the horizontal plane; the links have uniform material properties and have a constant cross-sectional area [22]. The schematic diagram of TLFM with a tip-mounted camera is shown in Fig. 1, where is the fixed coordinate frame with the joint of link-1 located at its origin. is the rigid body moving coordinate frame of the ith link, and is fixed at the joint of link i. is the flexible body moving coordinate frame, and is fixed at the end of link i. represents the applied torque of the ith link, represents the joint angle of the ith joint, and denotes the deflection along with the ith link.
[IMAGE OMITTED. SEE PDF]
The complete system behaves as a non-minimum phase system, when the tip position is taken as the output. The actual output vector is considered as the output for the ith link. Hence, the redefined output can be written as
The dynamics of flexible links are derived as Euler–Bernoulli beams with deformation for the ith link satisfying the link partial differential equation
The finite-dimensional expression can be presented using the AMM [21] as
The dynamics of TLFM is derived by using the energy principle and Lagrangian formulation technique along with AMM. The total Lagrangian (L) can be defined as
Substituting the total Lagrangian (L), i.e. the difference between the total kinetic energy and the total potential energy, into (4) and solving for the generalised coordinates, the dynamics of TLFM can be expressed as
The state-space formulation of (5) can be rewritten as
2.2 Camera modelling
Camera modelling is necessary to understand the geometric aspects of the camera. To control the motion of the TLFM, the camera is assumed to be modelled as a perspective projection [23]. Fig. 2 shows the perspective camera model, where is the coordinate of the principal point and coordinate is the 2D projection of the 3D point P with coordinates on the image plane
[IMAGE OMITTED. SEE PDF]
The image Jacobian matrix with reference to 2D projection coordinate can be written as
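The interaction matrix of a single 2D point projection has a well-known closed form in the IBVS literature. As a generic reference sketch (not the moment-based matrix derived later in this paper), it can be built as follows; the function name and normalised-coordinate convention are assumptions for illustration:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalised image point (x, y) at depth Z.

    Maps the camera spatial velocity [vx, vy, vz, wx, wy, wz] to the
    feature velocity [x_dot, y_dot] (classic IBVS point-feature result).
    """
    return np.array([
        [-1.0 / Z, 0.0,       x / Z,  x * y,       -(1.0 + x**2),  y],
        [0.0,      -1.0 / Z,  y / Z,  1.0 + y**2,  -x * y,        -x],
    ])
```

Stacking such 2×6 blocks for several points gives the full image Jacobian; its pseudo-inverse is what a point-based IBVS law would use.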
3 Feature selection
In the IBVS scheme, the selection of visual features is a difficult task. Two types of features are used in VS applications, namely local and global features. In practice, the object can be of any shape, matching each point accurately is difficult, and with an insufficient number of image feature points a singularity or local minimum may occur. Therefore, the use of image points (local features) is inadequate in IBVS. Hence, to improve the performance, robustness, and stability of the robotic control system, a global feature can be selected as the image feature instead of simple points (local features). Global features also avoid the extraction, tracking, and matching steps. Global features based on optical flow and luminance [24] have been developed to avoid tracking and matching in IBVS, but these have a limited convergence domain due to their non-linear nature.
Another efficient and interesting global feature for VS is the image moment. In moment-based features, some independent features of the object are chosen such that the corresponding interaction (Jacobian) matrix is full rank and has a maximally decoupled structure. Recently, the performances of global (moment-based) and local (point-based) features in IBVS were compared, verifying that moment-based features perform better [25]. Therefore, the singularity and local minima problems of IBVS can be avoided by selecting proper moment-based features, which also simplifies the controller design.
However, recent research [26] in moment-based VS shows that the sensitivity to data noise increases with the moment order. To deal with this issue, low-order shifted moments are proposed in [26] to reduce the effect of data noise on the control performance. Shifted moments have been used to select features that efficiently control the rotational DOFs. Nevertheless, the selection of visual features remains a key issue in solving the singularity problem of IBVS [27]. In this section, two image moment-based features are selected from previous theoretical results to control the 2-DOF of the TLFM.
3.1 Image moments
Assume is a real bounded function of region R. Then, the moment of for order can be given as
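In discrete form, the moment of order (p + q) of a binary region reduces to a pixel-wise sum, and the centroid follows from the first-order moments. A minimal sketch (the function names `raw_moment` and `centroid` are illustrative, not from the paper):

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]          # pixel coordinate grids
    return np.sum((xs ** p) * (ys ** q) * img)

def centroid(img):
    """Centroid (xg, yg) = (m10 / m00, m01 / m00) of the region."""
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
```

For a 3×3 block of ones centred at pixel (2, 2), `centroid` returns (2.0, 2.0).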
3.2 Interaction matrix of image moment features
The interaction matrix or image Jacobian matrix describes the time variation of the moments with respect to the relative kinematic screw. The interaction matrix related to the moments has been derived in [7]. It is obtained from the following equation:
3.3 Interaction matrix of shifted moment features
In [26], new visual features based on low-order shifted moment invariants have been introduced. VS based on shifted moments reduces the effect of measurement noise on the control performance. To control the two rotational motions of the TLFM, two visual features are selected from the shifted moments. These features are selected from three polynomials computed from low-order shifted moments, which reduces the sensitivity to data noise.
Shifted moments can be defined with respect to shifted point , where and are shift parameters. Shifted moments can be written as
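As a sketch of the idea, a shifted moment can be computed by displacing the reference point before accumulating; with zero shift it coincides with the ordinary central moment. The function name and the centroid-relative parameterisation of the shift are assumptions for illustration, not necessarily the exact definition used in [26]:

```python
import numpy as np

def shifted_moment(img, p, q, x_shift, y_shift):
    """Moment of order (p+q) about a shifted reference point.

    The reference point is the region centroid displaced by
    (x_shift, y_shift); with zero shift this reduces to the ordinary
    central moment mu_pq (assumed parameterisation).
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xg = (xs * img).sum() / m00          # centroid x
    yg = (ys * img).sum() / m00          # centroid y
    return (((xs - xg - x_shift) ** p)
            * ((ys - yg - y_shift) ** q) * img).sum()
```

With zero shift, the first-order moments vanish by construction, which is the central-moment sanity check.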
3.4 Visual features for tip position control of TLFM
In [7], two shifted moment-based visual features are needed to control the 2-DOF of the TLFM. In the TLFM, the position of the tip-mounted camera is a function of the joint angles. Therefore, one needs to select two shifted moment-based visual features to control the two rotational motions of the TLFM. Thus, to implement visual tip position control of the TLFM, an interaction matrix relating the selected shifted moment-based visual features to the joint angular velocities has to be developed.
Low-order shifted moment-based visual features are used to control the 2-DOF of the TLFM and to reduce the sensitivity to data noise. They are selected from three polynomials computed from the shifted moments. The polynomials computed from the shifted moments of orders 2 and 3 are given as follows [26]:
[IMAGE OMITTED. SEE PDF]
The interaction matrix corresponding to the shifted parameter, calculated from (18), is
The two shifted moment-based visual features are selected as two invariants from (16) and (17), combining three kinds of moment invariance (invariance to translation, to 2D rotation, and to scale). The interaction matrix relating these two visual features to the 2-DOF of the TLFM can be written as
4 Problem formulation
A composite control scheme for the considered TLFM with a tip-mounted camera is presented to solve the tip-tracking problem. It uses the SP approach to decompose the manipulator dynamics into two time scales: a slow subsystem based on visual feedback and a fast subsystem based on strain measurements. The controller is designed for these two separate time-scale subsystems such that, when applied to the manipulator with coupled rigid and flexible dynamics, the reference trajectory is tracked while the link vibrations are simultaneously controlled.
However, the design of an observer to estimate the fast states is challenging due to the measurement noise present in the strain gauge signals. Also, the selection of noise-free visual features from the vision feedback is difficult for accurate tip position tracking of the TLFM.
An SP-based composite controller for this manipulator is designed using reduced-order models, where a Kalman filter-based LQR controller is implemented for the fast subsystem and the moment-based IBVS controller is designed for the slow subsystem for tip-tracking control of the FLM.
The fast subsystem performs a real-time operation, as fast as required for stability and quality control, whereas the slow subsystem carries out a non-real-time operation and handles the image acquisition. Fig. 4 shows the structure of the proposed new two-time scale control scheme, in which is the input of the TLFM, is the strain output of the strain gauge attached to the flexible link, and is the output of the vision system. is the output of the slow subsystem controller, is the output of the fast subsystem controller, and is the combined output of both controllers that is used as the control input for the TLFM.
[IMAGE OMITTED. SEE PDF]
5 Model decomposition by two-time scale perturbation technique
Owing to the distributed link flexure, the flexible manipulator is a distributed parameter system. The dynamics of the TLFM comprises rigid and flexible dynamics. A popular approach to decompose the complex dynamics into two time scales is the SP technique. In the SP method, the feedback control design for the under-actuated system is decomposed into two subsystems, i.e. a slow subsystem (for tip position measurement and control) and a fast subsystem (for compensating tip deflection/vibration). Using SP theory, the state variables of the TLFM dynamic model (5) can be written as
The slow subsystem can be defined as
The slow and fast parts of the tip position variables and of the deflection variables change with respect to (23) and (25), respectively. So, as per the composite control theory, the control input of TLFM can be expressed as
6 Design of new two-time scale IBVS controller
The tip-tracking problem of the TLFM can be divided into two sub-problems: (i) tracking of the tip motion and (ii) suppression of the oscillations of the flexible beams. To deal with these, a new two-time scale IBVS controller is proposed. It is a composite controller composed of a fast part, based on strain measurements, that damps the elastic vibrations, and a slow part, based on vision measurements, that achieves tracking of the reference tip trajectory. The slow and fast subsystem controllers are designed next.
6.1 Design of slow subsystem controller
The slow controller design is based on the rigid model (23) of the TLFM. In this section, the shifted moment-based IBVS controller is designed for tip position control of the slow subsystem of a TLFM.
Here, a shifted moment-based visual feature is measured from a binary or a segmented image of the object of the static environment projected in the image plane. The mathematical background of moment-based feature and selection of shifted moment-based features to control 2-DOF of TLFM is presented in Section 3.
The feature error can be defined as
The goal of the shifted moment-based IBVS controller is to ensure that the actual visual feature asymptotically reaches the desired visual feature
Given the non-linear TLFM system, the objective is to determine a hub velocity that can drive the system with respect to the desired image feature position.
It is necessary to design an IBVS-based control scheme for a closed-loop system (23) such that the output trajectory should track the reference output trajectory as close as possible.
Slow control input is designed as given in [16]
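The specific slow control law of [16] is not reproduced here; as a generic reference, the classic IBVS proportional law v = -λ L⁺ (s - s*) that underlies such designs can be sketched as follows (function and variable names are illustrative):

```python
import numpy as np

def ibvs_control(s, s_star, L, lam=0.5):
    """Classic IBVS proportional law: v = -lambda * pinv(L) @ (s - s_star).

    s, s_star : current and desired visual feature vectors
    L         : interaction (image Jacobian) matrix at the current features
    Returns the commanded velocity; for an exact L this drives the feature
    error to zero exponentially with rate lambda.
    """
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error
```

With a square, well-conditioned L the pseudo-inverse reduces to the ordinary inverse, and each feature error decays independently.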
6.2 Design of the fast subsystem controller
Here, the LQR controller is utilised to control the fast subsystem of the TLFM. The control problem is to determine a fast control input such that the tip deflection converges to zero as fast as possible. For the fast controller, a state observer is generally needed to estimate the unmeasurable elastic/modal coordinates. Measurement noise plays an important role in the design of such an observer: strain gauges are inherently affected by very high noise due to electromagnetic interference. The scenario just depicted raises a problem of delayed signal estimation, for which a Kalman filter, rather than a deterministic observer, can be effectively used. A Kalman filter based on a fast model that includes the first three modes, with fast feedback that damps the first mode only, is the best choice with respect to closed-loop system stability and robustness towards time delays [16].
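A steady-state Kalman gain of the kind used for the fast observer can be computed by solving the estimation Riccati equation, which is the dual of the LQR one. This is a generic sketch with placeholder matrices (the actual fast model of the TLFM is not reproduced here):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def kalman_gain(A, C, W, V):
    """Steady-state Kalman filter gain for x_dot = A x + w, y = C x + v.

    W and V are the process- and measurement-noise covariances. The
    estimation Riccati equation is dual to the control one, so it is
    solved with the transposed pair (A^T, C^T).
    """
    P = solve_continuous_are(A.T, C.T, W, V)
    return P @ C.T @ np.linalg.inv(V)
```

A quick sanity check on a double-integrator model is that the estimator matrix A - K C is Hurwitz, i.e. the observation error decays.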
The state-space representation of the TLFM dynamics given in (6) includes both rigid and flexible dynamics. The feedback gain of the control law is determined by minimising the performance index given by
Equation (36) can be written as
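The LQR feedback gain minimising a quadratic performance index of the usual form J = ∫(xᵀQx + uᵀRu)dt follows from the continuous algebraic Riccati equation. A generic sketch, not the paper's specific weighting matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """LQR gain K minimising J = int(x'Qx + u'Ru)dt, with u = -K x."""
    P = solve_continuous_are(A, B, Q, R)   # stabilising Riccati solution
    return np.linalg.inv(R) @ B.T @ P
```

For the double integrator with identity weights, the well-known closed-form gain K = [1, √3] is recovered.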
7 Stability and robustness analysis
The study of the robustness of the closed-loop system is very important, as robustness affects performance and stability. Furthermore, due to model uncertainty and its non-collocated nature, the FLM behaves as a non-minimum phase system, which further motivates the robustness study. Here, the stability and robustness of the proposed method are analysed theoretically for disturbances and unmodelled dynamics of the TLFM.
Equation (22) can be written as
Assumption 1
The terms F, W, , and of (45) have the following properties.
-
and , when and are equal to zero.
-
is non-singular.
-
and are Lipschitz continuous to all and , and (45) is controllable.
Equation (45) can be written as
Lemma 1
If the zero state equilibrium of (quasi-boundary layer model) is uniformly exponentially stable in , then is Hurwitz uniformly in . In this case, there is a Lyapunov function that satisfies
Theorem 1
Let the zero state equilibrium of (quasi-boundary layer model) be uniformly exponentially stable in , and Assumption 1 holds for all . Then the zero state equilibrium of system (45) is uniformly exponentially stable in if
Proof
See Theorem 1 of [28].
Theorem 2
Consider system (45) with Assumption 1. Let the zero state equilibrium of (48) be uniformly exponentially stable in and the zero state equilibrium of (49) be exponentially stable. If (52) is satisfied, then there exists such that for all , the zero state equilibrium of (45) is exponentially stable.
Proof
If Assumption 1 is satisfied, then the manifold exists. Furthermore, if (52) is satisfied, then the zero state equilibrium of the boundary layer model is uniformly exponentially stable in and the zero state equilibrium of the reduced model is exponentially stable. Thus, the zero state equilibrium of the overall system is exponentially stable for a small value of . □
Also, the robustness of the proposed shifted moment-based new two-time scale IBVS controller is numerically investigated in the presence of modelling error and field-of-view (FOV) constraint, model uncertainty, and image noise uncertainty in Section 8.4.
8 Results and discussion
In this section, the tip-tracking performance of the TLFM is analysed through simulation studies. First, the theoretical results on feature selection are validated. Then, the performance and robustness of the proposed slow subsystem controller are evaluated. Finally, the tip-tracking performance (of the slow and fast subsystem controllers together) is analysed.
The physical parameters of the TLFM considered for the simulation studies are given in [22]. In the simulation, the camera focal length and scale factor are taken as 0.008 m and 0.2 pixel/m, respectively. The proportional derivative (PD) and LQR controller parameters used in the simulation are given in Table 1. The mean absolute error (MAE) and root mean square error (RMSE) are used as quantitative measures for comparing the tip-tracking performance of the proposed scheme [22]. It is assumed that the target always remains inside the camera FOV.
Table 1 Parameters for different controllers
Controller | Control law | Parameter | Value |
PD | (34) | ||
LQR | (36) [18] | ||
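The MAE and RMSE metrics used throughout the results are the standard definitions and can be computed as follows (a straightforward sketch; function names are illustrative):

```python
import numpy as np

def mae(ref, actual):
    """Mean absolute error between reference and actual trajectories."""
    return np.mean(np.abs(np.asarray(ref) - np.asarray(actual)))

def rmse(ref, actual):
    """Root mean square error between reference and actual trajectories."""
    return np.sqrt(np.mean((np.asarray(ref) - np.asarray(actual)) ** 2))
```

RMSE penalises large excursions more heavily than MAE, which is why both are reported for the tracking error curves.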
8.1 Feature validation
The theoretical results on the selection of the shifted point to control the two rotational motions of the TLFM using low-order shifted moments are validated here. It is assumed that the object is parallel to the image plane. Simulation results for two object shapes, symmetrical and non-symmetrical, are presented. First, the symmetrical object (rectangle) shown in Fig. 5 is considered. The second object is the non-symmetrical object (whale) shown in Fig. 6.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Tables 2–4 show the values of the invariants obtained for four positions of the symmetrical object. The first position is the original position, while the other three positions are obtained by simple transformations: a 2D translation defined by the vector [65, 215], scaling by the scalar value 3, and finally a rotation.
Table 2 Invariant values using symmetrical object
Position | ||||||
original | 2.7405 | 0.2238 | −0.1678 | 25.0918 | −18.8173 | −0.7499 |
translated | 2.7405 | 0.2238 | −0.1678 | 25.0918 | −18.8173 | −0.7499 |
scaled | 1.7995 | 1.3213 | −0.9909 | 25.0918 | −18.8173 | −0.7499 |
rotated | 2.7405 | 0.2238 | −0.1678 | 25.0918 | −18.8173 | −0.7499 |
Table 3 Invariant values using symmetrical object with
Position | ||||||
original | 0.0001 | −0.317 | 5.0739 | −0.1012 | 1.6199 | 0.0000 |
translated | 0.0001 | −0.317 | 5.0739 | −0.1012 | 1.6199 | 0.0000 |
scaled | 0.0000 | −0.0953 | 1.0139 | −0.1012 | 1.6199 | 0.0000 |
rotated | 0.0001 | −0.317 | 5.0739 | −0.1012 | 1.6199 | 0.0000 |
Table 4 Invariant values using symmetrical object with
Position | ||||||
original | 0.0002 | −0.3978 | 4.1441 | −0.8871 | 9.2418 | 0.0000 |
translated | 0.0002 | −0.3978 | 4.1441 | −0.8871 | 9.2418 | 0.0000 |
scaled | 0.0003 | −0.669 | 6.4486 | −0.8871 | 9.2418 | 0.0000 |
rotated | 0.0002 | −0.3978 | 4.1441 | −0.8871 | 9.2418 | 0.0000 |
Tables 5–7 show the values of the invariants obtained for four positions of the non-symmetrical object. The first position is the original position, while the other three positions are obtained by simple transformations: a 2D translation defined by the vector [100, 120], scaling by the scalar value 1.2, and finally a rotation by 30°.
Table 5 Invariant values using non-symmetrical object
Position | ||||||
original | 0.0488 | 0.4793 | 2.2545 | 0.2136 | 1.0046 | 0.0005 |
translated | 0.0488 | 0.4793 | 2.2545 | 0.2136 | 1.0046 | 0.0005 |
scaled | 0.0211 | 0.299 | 1.395 | 0.2136 | 1.0046 | 0.0005 |
rotated | 0.0488 | 0.4793 | 2.2545 | 0.2136 | 1.0046 | 0.0005 |
Table 6 Invariant values using non-symmetrical object with
Position | ||||||
original | 0.0002 | −0.3528 | 7.2767 | −0.053 | 1.0925 | 0.0000 |
translated | 0.0002 | −0.3528 | 7.2767 | −0.053 | 1.0925 | 0.0000 |
scaled | 0.0001 | −0.1276 | 2.2887 | −0.053 | 1.0925 | 0.0000 |
rotated | 0.0002 | −0.3528 | 7.2767 | −0.053 | 1.0925 | 0.0000 |
Table 7 Invariant values using non-symmetrical object with
Position | ||||||
original | 0.0005 | −0.5439 | 5.0459 | −0.3951 | 3.6657 | 0 |
translated | 0.0005 | −0.5439 | 5.0459 | −0.3951 | 3.6657 | 0 |
scaled | 0.0001 | −0.1632 | 1.4729 | −0.3951 | 3.6657 | 0 |
rotated | 0.0005 | −0.5439 | 5.0459 | −0.3951 | 3.6657 | 0 |
From Tables 2–7, it is observed that for both the symmetrical and non-symmetrical objects the results obtained after applying translation and rotation are identical to the original ones. This validates the invariance of the selected features to translation and rotation. The third row of each table shows that a scale change in the image alters only the scale-variant moments. Hence, the invariance properties of the features selected in Section 3.4 to control the rotational motion of the TLFM are validated by the results.
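The translation-invariance property checked in these tables can be reproduced numerically with central moments. A small self-contained demonstration on a synthetic rectangle (shape, shift values, and moment orders chosen arbitrarily for illustration):

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a binary image about its centroid."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    m00 = img.sum()
    xg, yg = (xs * img).sum() / m00, (ys * img).sum() / m00
    return (((xs - xg) ** p) * ((ys - yg) ** q) * img).sum()

# Central moments of a shape and a translated copy must agree exactly.
img = np.zeros((40, 40))
img[5:12, 8:20] = 1.0                                     # original rectangle
shifted = np.roll(np.roll(img, 15, axis=0), 10, axis=1)   # pure 2D translation
for (p, q) in [(2, 0), (1, 1), (0, 2), (3, 0)]:
    assert np.isclose(central_moment(img, p, q),
                      central_moment(shifted, p, q))
```

The same check for rotation and scale would require the specific invariant polynomials of Section 3.4, which are not reproduced here.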
8.2 Slow controller performance
In this section, the performance of the slow controller (moment-based IBVS controller) is evaluated. The invariance property of the selected shifted moment-based features was already validated in Section 8.1. The performance of the proposed controller is evaluated for two different object shapes: tip positioning with the symmetrical object (rectangle) is termed task-1, and with the non-symmetrical object (whale) task-2. The condition number is used as a performance index that represents the well-behavedness of the matrices used for approximating the interaction matrix. It gives a global measure of the visibility of motion and also measures the stability of the control scheme; it should be as low as possible to improve the robustness and numerical stability of the system.
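The condition number used here is the ratio of the largest to the smallest singular value of the interaction matrix; a minimal illustration with a hypothetical 2×2 matrix (the values below are not the paper's matrices):

```python
import numpy as np

# Condition number of a (hypothetical) 2x2 interaction matrix: the ratio
# sigma_max / sigma_min of its singular values. Lower values indicate a
# better-conditioned, more robust visual servo.
L = np.array([[2.0, 0.0],
              [0.0, 1.0]])
cond = np.linalg.cond(L)   # singular values are 2 and 1, so cond = 2.0
```

A condition number near 1 means all feature-space directions are observed with comparable sensitivity.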
Fig. 7 shows the initial and desired positions of task-1. The interaction matrix given in (20) is calculated for the desired position of task-1 with the invariants obtained from (16) and (17). The condition number is 3.78, which is satisfactory. The initial and desired values of the selected image features of task-1 are listed in Table 8. Fig. 8 shows the image feature errors of task-1, which are also given in Table 8. It is observed from Fig. 8 that the feature errors reach zero in 45 s.
[IMAGE OMITTED. SEE PDF]
Table 8 Initial and desired value of image features of task-1 and task-2
Visual feature | Task-1 | Task-2 | ||||
Desired value | Initial value | feature error | Desired value | Initial value | feature error | |
a | 0.258 | 0.407 | −0.149 | −0.034 | −0.645 | 0.611 |
0.068 | −0.210 | 0.278 | 0.081 | 0.476 | −0.395 |
[IMAGE OMITTED. SEE PDF]
Fig. 9a shows the initial position and Fig. 9b the desired position of task-2. The interaction matrix (20) is calculated for the desired position of task-2 with the invariants obtained from (16) and (17). The condition number is 2.38, which is satisfactory. The initial and desired values of the selected image features of task-2 are given in Table 8. Fig. 10 shows the image feature errors of task-2; they converge to zero in 53 s. From the obtained results, the selected image feature errors converge to zero, i.e. VS using the proposed moment-based IBVS controller (slow subsystem controller) is successfully achieved.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
8.3 Tip-tracking performance
8.3.1 Tip positioning performance
In the simulation, the tip-positioning performance of the developed controller is compared with the same controller with encoder feedback. Square wave signals (with 20° amplitude and 0.1 Hz frequency) are considered as reference trajectories for both the links to investigate the tip positioning performance.
The tip positioning performance of link-1 and link-2 is shown in Figs. 11 and 12, respectively. It is observed that the tip position estimated with encoder feedback exhibits more overshoot with respect to the reference position, while the tip position obtained with the visual sensor exhibits less overshoot.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Figs. 13 and 14 show the tip positioning error profiles for link-1 and link-2, respectively. From these profiles, it can be observed that the error trajectory obtained with encoder feedback yields a larger maximum overshoot than that of the controller with camera feedback.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Quantitative metrics (MAE and RMSE) are calculated from hub angle response curves obtained from simulation studies of the proposed controller, and the same are listed in Table 9. The tip positioning performance comparison between the proposed controller with an encoder feedback and the same controller with camera feedback is presented in Table 9. It is observed from Table 9 that the MAE and RMSE for the controller with camera feedback are lower than that obtained with encoder feedback.
Table 9 Tip positioning performance comparison
Error | Response | Link | Tip position with encoder | Tip position with camera |
MAE | hub angle response (deg) | link-1 | 1.7172 | 1.6194 |
link-2 | 0.9805 | 0.6170 | ||
RMSE | hub angle response (deg) | link-1 | 5.2502 | 5.0849 |
link-2 | 4.4217 | 3.8477 |
8.3.2 Tip deflection performance
The tip deflection performance of link-1 and link-2 with the Kalman filter-based LQR controller and with the plain LQR controller is shown in Figs. 15 and 16, respectively. It can be seen that the Kalman filter-based LQR controller damps the vibration/deflection. It also provides optimal performance with respect to closed-loop system stability and robustness towards measurement noise and time delays.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
Figs. 17 and 18 show the tip deflection error profiles for link-1 and link-2, respectively. From Fig. 17, the peak tip acceleration error of link-1 is 47.7 m/s2 for the LQR controller and reduces to a minimum of 36.8 m/s2 for the LQR with Kalman filter. The tip acceleration error of link-2 is 20.68 m/s2 for the LQR controller and reduces to a minimum of 13.37 m/s2 for the Kalman filter-based LQR controller.
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
MAE and RMSE are calculated from the tip deflection error profiles obtained from the simulation studies of the proposed controller and are listed in Table 10, which compares the tip deflection performance of the LQR controller and the Kalman-filter-based LQR controller. It can be seen that the MAE and RMSE for the Kalman-filter-based LQR controller are lower than those obtained with the LQR controller alone, i.e. the link vibration is reduced by the Kalman-filter-based LQR controller compared with the LQR controller.
Table 10 Tip deflection performance comparison
Error | Response | Link | LQR | LQR with Kalman filter
MAE | tip acceleration response (m/s²) | link-1 | 6.4653 | 2.1868
MAE | tip acceleration response (m/s²) | link-2 | 0.3875 | 0.3256
RMSE | tip acceleration response (m/s²) | link-1 | 9.1975 | 4.6050
RMSE | tip acceleration response (m/s²) | link-2 | 1.4435 | 1.1707
8.4 Robustness analysis
8.4.1 Modelling error and FOV constraint
Here, the effect of camera modelling error and the FOV constraint is investigated to test the robustness of the proposed shifted moment-based IBVS controller. Errors in the camera parameters, i.e. 10 pixels on the coordinates of the principal point and 20% in the focal length, are considered as the modelling error. Furthermore, it is assumed that the object is partially occluded, lying partly outside the camera FOV. Figs. 19 and 20 show the results with modelling errors and the FOV constraint for task-1 and task-2, respectively.
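The stated perturbations (10 pixels on the principal point, 20% on the focal length) can be injected directly into the camera intrinsic matrix. A minimal sketch, with a hypothetical nominal intrinsic matrix, is:

```python
import numpy as np

def perturbed_intrinsics(K, dp=10.0, df=0.20):
    """Return a camera matrix with modelling errors injected:
    `dp` pixels added to the principal-point coordinates and a `df`
    relative error on both focal lengths (values follow the test above)."""
    Kp = K.astype(float).copy()
    Kp[0, 0] *= (1.0 + df)   # fx with 20% focal-length error
    Kp[1, 1] *= (1.0 + df)   # fy
    Kp[0, 2] += dp           # cx shifted by 10 pixels
    Kp[1, 2] += dp           # cy shifted by 10 pixels
    return Kp

# Hypothetical nominal intrinsics (fx, fy, cx, cy), for illustration only
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
K_err = perturbed_intrinsics(K)
```

The robustness test then runs the servo loop with `K_err` in place of `K` while the true projection still uses the nominal parameters.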
[IMAGE OMITTED. SEE PDF]
[IMAGE OMITTED. SEE PDF]
The desired position of task-1 in this case is identical to that in Fig. 7b. The image feature errors of task-1 are shown in Fig. 19b, from which it is observed that the feature errors converge to zero within 51 s. The initial and desired values of the selected image features of task-1 and task-2 with modelling error and the FOV constraint are given in Table 11. Similarly, the desired position of task-2 is identical to that in Fig. 9b. The image feature errors of task-2 are shown in Fig. 20b, from which it is observed that the feature errors converge to zero within 44 s.
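The quoted convergence times (51 s and 44 s here, and similar figures in the later tests) correspond to the instant after which the feature error magnitude stays inside a tolerance band. A minimal sketch of that measurement, on a hypothetical exponentially decaying error signal, is:

```python
import numpy as np

def convergence_time(t, e, tol=1e-3):
    """First time after which |e| remains within `tol` for the rest of
    the record (returns None if the error never settles)."""
    e = np.abs(np.asarray(e, dtype=float))
    outside = np.where(e > tol)[0]
    if outside.size == 0:
        return t[0]
    last = outside[-1]
    return t[last + 1] if last + 1 < len(t) else None

# Hypothetical feature-error trajectory, for illustration only
t = np.linspace(0.0, 60.0, 601)
e = 0.4 * np.exp(-0.15 * t)
print(convergence_time(t, e, tol=1e-3))
```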
Table 11 Initial and desired value of image features of task-1 and task-2 with modelling error and FOV constraint
Visual feature | Task-1 desired value | Task-1 initial value | Task-1 feature error | Task-2 desired value | Task-2 initial value | Task-2 feature error
a | 0.258 | 0.421 | −0.163 | −0.034 | −0.839 | 0.805
 | 0.068 | −0.258 | 0.326 | 0.081 | 0.503 | −0.422
8.4.2 Model uncertainties
To investigate the robustness of the proposed new two-time scale IBVS controller in the presence of model uncertainty, the camera position was altered. The camera position on the tip was moved to , , and , where is the centre coordinate of the optical axis of the camera, is the origin of the tip frame, and is the orientation of the camera optical axis with respect to the z-axis of the tip frame.
In this case, the initial and desired positions of task-1 and task-2 are identical to those in Figs. 7 and 9. The image feature errors in the presence of model uncertainty for task-1 and task-2 are shown in Fig. 21. The initial and desired values of the image features of task-1 and task-2 with model uncertainty are given in Table 12. It is observed from Fig. 21 that, despite the model uncertainty, the feature errors of task-1 and task-2 converge to zero within 55 and 47 s, respectively.
[IMAGE OMITTED. SEE PDF]
Table 12 Initial and desired value of image features of task-1 and task-2 with model uncertainty
Visual feature | Task-1 desired value | Task-1 initial value | Task-1 feature error | Task-2 desired value | Task-2 initial value | Task-2 feature error
a | 0.258 | 0.498 | −0.240 | −0.034 | −1.179 | 1.145
 | 0.068 | −0.230 | 0.298 | 0.081 | 0.626 | −0.545
8.4.3 Image noise
Moment-based features are considered reliable because the values of the moment invariants are insensitive to image noise. Therefore, in this section, the robustness of the proposed shifted moment-based new two-time scale IBVS controller is investigated under image noise uncertainty. White Gaussian noise is introduced into the images of the initial and desired positions of task-1 and task-2. The same initial and desired positions of task-1 and task-2 are considered as in Figs. 7 and 9, respectively.
The interaction matrix given in (20) is calculated for the desired positions of task-1 and task-2 with the invariants ( and , and and , respectively). The resulting condition number is 7.09 for task-1 and 2.82 for task-2, which is satisfactory. The initial and desired values of the selected image features of task-1 and task-2 with image noise are listed in Table 13. The image feature errors in the presence of image noise for task-1 and task-2 are shown in Fig. 22. It is observed from Fig. 22 that, despite the image noise, the feature errors of task-1 and task-2 converge to zero within 50 and 54 s, respectively. Thus, the robustness of the proposed controller against image noise uncertainty is verified.
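The quoted condition numbers measure how well-conditioned the interaction (image Jacobian) matrix is at the desired pose: the closer to 1, the less the control law amplifies image noise. A minimal sketch, with a hypothetical 2 × 2 interaction matrix (the paper's actual matrix from (20) is not reproduced here), is:

```python
import numpy as np

# Hypothetical interaction matrix for two moment-invariant features;
# the entries are illustrative, not taken from (20).
L_s = np.array([[1.2, -0.3],
                [0.4,  0.9]])

# Condition number = ratio of largest to smallest singular value;
# values near 1 mean feature noise is not amplified into the control input.
cond = np.linalg.cond(L_s)
print(round(cond, 2))  # 1.33
```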
Table 13 Initial and desired value of image features of task-1 and task-2 with image noise
Visual feature | Task-1 desired value | Task-1 initial value | Task-1 feature error | Task-2 desired value | Task-2 initial value | Task-2 feature error
 | −9.707 | −11.182 | 1.475 | −9.176 | −7.745 | −1.431
a | 1.368 | 2.207 | −0.839 | −3.258 | −2.622 | −0.636
[IMAGE OMITTED. SEE PDF]
Remark 1
It is observed from the obtained results that, despite the modelling error and FOV constraint, the model uncertainty, and the image noise uncertainty, the selected image feature errors converge to zero, i.e. the performance of the proposed shifted moment-based new two-time scale IBVS controller remains similar to that of the nominal case (Section 8.2). This validates the robustness of the proposed controller with respect to modelling errors, the FOV constraint, model uncertainty, and image noise.
8.4.4 Performance comparison
A performance comparison of the shifted moment-based new two-time scale IBVS controller with other moment-based IBVS controllers is presented in Table 14. It is observed from Table 14 that the IBVS controller with shifted moment-based features yields the best performance among the three.
Table 14 Comparison with other moment based features
Parameter | Moment-based features [8] | Improved moment-based features [29] | Shifted moment-based features
order of moments | up to 5 | up to 3 | up to 3
sensitivity to image noise | high | low | low
condition number (task-1) | 6.31 | 4.11 | 3.78
invariants (task-1) | and | and | and
— | (9) | (3) | (17)
condition number (task-2) | 4.48 | 3.10 | 2.38
invariants (task-2) | and | and | and
— | (9) | (3) | (17)
9 Conclusions
In this study, shifted moment-based visual features are exploited to deal with the singularity in the interaction matrix and the local-minima-in-trajectories issues of the IBVS approach. An image Jacobian matrix is designed with a minimal set of shifted moment-based visual features so that the reference trajectory can be tracked. The complete dynamics of the TLFM is separated into fast and slow subsystems, describing the flexible and rigid dynamics, using a two-time scale SP approach. A new two-time scale IBVS controller based on the shifted moments is developed for tracking the reference trajectory and suppressing tip vibration: a Kalman-filter-based LQR controller is designed for the fast subsystem, and the moment-based IBVS controller is employed for the slow subsystem for tip position tracking control of the TLFM. It is observed from the results that the proposed controller effectively stabilises the oscillatory dynamics, tracks the reference trajectory with a short settling time, and achieves better tip-tracking performance. The robustness of the proposed controller is also verified in the presence of modelling error and the FOV constraint, model uncertainty, and image noise uncertainty. Future studies will focus on the implementation and adaptation of the proposed control scheme on a real flexible manipulator in real time.
References
[1] Lochan, K., Roy, B.K., Subudhi, B.: 'A review on two-link flexible manipulators', Annu. Rev. Control, 2016, 42, pp. 346–367
[2] Allen, P.K., Timcenko, A., Yoshimi, B., et al.: 'Automated tracking and grasping of a moving object with a robotic hand-eye system', IEEE Trans. Robot. Autom., 1993, 9, (2), pp. 152–164
[3] Espiau, B., Chaumette, F., Rives, P.: 'A new approach to visual servoing in robotics', IEEE Trans. Robot. Autom., 1992, 8, (3), pp. 313–326
[4] Malis, E., Chaumette, F., Boudet, S.: '2-1/2-D visual servoing', IEEE Trans. Robot. Autom., 1999, 15, (2), pp. 238–250
[5] Malis, E.: 'Survey of vision-based robot control'. ENSIETA European Naval Ship Design Short Course, Brest, France, 2002, pp. 1–16
[6] Hu, M.K.: 'Visual pattern recognition by moment invariants', IRE Trans. Inf. Theory, 1962, 8, (2), pp. 179–187
[7] Chaumette, F.: 'Image moments: a general and useful set of features for visual servoing', IEEE Trans. Robot., 2004, 20, (4), pp. 713–723
[8] Tahri, O., Chaumette, F.: 'Point-based and region-based image moments for visual servoing of planar objects', IEEE Trans. Robot., 2005, 21, (6), pp. 1116–1127
[9] Indrazno, S., McGinnity, M., Behera, L., et al.: 'Visual servoing of a redundant manipulator using shape moments'. Proc. IET Irish Signals and Systems Conf., Dublin, Ireland, 2009, pp. 1–6
[10] Burlacu, A., Lazar, C., Copot, C.: 'Predictive control of nonlinear visual servoing systems using image moments', IET Control Theory Appl., 2012, 6, (10), pp. 1486–1496
[11] Ozawa, R., Chaumette, F.: 'Dynamic visual servoing with image moments for a quadrotor using a virtual spring approach', Adv. Robot., 2013, 27, (9), pp. 683–696
[12] Siradjuddin, I., Behera, L., et al.: 'Image-based visual servoing of a 7-DOF robot manipulator using an adaptive distributed fuzzy PD controller', IEEE/ASME Trans. Mechatronics, 2014, 19, (2), pp. 512–523
[13] Larsen, J.C., Ferrier, N.J.: 'A case study in vision based neural network training for control of a planar, large deflection, flexible robot manipulator'. Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Sendai, Japan, 2004, pp. 2924–2929
[14] Yang, Y., Pan, J., Wan, W.: 'Survey of optimal motion planning', IET Cyber-Syst. Robot., 2019, 1, (1), pp. 13–19
[15] Siciliano, B., Book, W.J.: 'A singular perturbation approach to control of lightweight flexible manipulators', Int. J. Robot. Res., 1988, 7, (4), pp. 79–90
[16] Bascetta, L., Rocco, P.: 'Two-time scale visual servoing of eye-in-hand flexible manipulators', IEEE Trans. Robot., 2006, 22, (4), pp. 818–830
[17] Zhang, L., Liu, J.: 'Observer-based partial differential equation boundary control for a flexible two-link manipulator in task space', IET Control Theory Appl., 2012, 6, (13), pp. 2120–2133
[18] Xu, Y., Ritz, E.: 'Vision based flexible beam tip point control', IEEE Trans. Control Syst. Technol., 2009, 17, (5), pp. 1220–1227
[19] Ali, M., Alexander, C.K.: 'Robust tracking control of a robot manipulator using a passivity-based extended-state observer approach', IET Cyber-Syst. Robot., 2019, 1, (2), pp. 63–71
[20] Zhang, L., Liu, J.: 'Adaptive boundary control for flexible two-link manipulator based on partial differential equation dynamic model', IET Control Theory Appl., 2013, 7, (1), pp. 43–51
[21] Pradhan, S.K., Subudhi, B.: 'Position control of a flexible manipulator using a new nonlinear self tuning PID controller', IEEE/CAA J. Autom. Sin., 2020, 7, (1), pp. 136–149
[22] Sahu, U.K., Subudhi, B., Patra, D.: 'Sampled-data extended state observer-based backstepping control of two-link flexible manipulator', Trans. Inst. Meas. Control, 2019, 41, (13), pp. 3581–3599
[23] Hutchinson, S., Hager, G., Corke, P.: 'A tutorial on visual servo control', IEEE Trans. Robot. Autom., 1996, 12, (5), pp. 651–670
[24] Collewet, C., Marchand, E.: 'Photometric visual servoing', IEEE Trans. Robot., 2011, 27, (4), pp. 828–834
[25] Zhang, J., Liu, D.: 'Calibration-free and model-independent method for high-DOF image-based visual servoing', J. Control Theory Appl., 2013, 11, (1), pp. 132–140
[26] Tahri, O., Tamtsia, Y., Mezouar, Y.: 'Visual servoing based on shifted moments', IEEE Trans. Robot., 2015, 31, (3), pp. 798–804
[27] Sahu, U.K., Patra, D.: 'Shape features for image-based servo-control using image moments'. Proc. Annual IEEE India Conf. (INDICON), Delhi, India, 2015, pp. 1–6
[28] Son, J.-W., Lim, J.-T.: 'Robust stability of nonlinear singularly perturbed system with uncertainties', IEE Proc. Control Theory Appl., 2006, 153, (1), pp. 104–110
[29] Zhao, Y., Xie, W.F., Liu, S.: 'Image-based visual servoing using improved image moments in 6-DOF robot systems', Int. J. Control Autom. Syst., 2013, 11, (3), pp. 586–596
© 2020. This work is published under http://creativecommons.org/licenses/by/3.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Owing to the non‐collocation of actuators and sensors in a flexible‐link manipulator (FLM) it becomes difficult to achieve accurate tip position tracking. To resolve this issue, a vision sensor is used for direct measurement of the tip position instead of employing the traditional mechanical sensors. Among the different visual servoing (VS) control schemes, image‐based VS (IBVS) is more effective. However, there are many challenges in the IBVS scheme such as singularities in the interaction matrix and local minima in trajectories that affect the system performance in real‐time applications. In this study, the moment‐based new visual feature is selected to address the aforesaid issues that arise in the IBVS scheme. Furthermore, a new two‐time scale IBVS controller is developed for addressing the tip‐tracking control problem of the two‐link flexible manipulator (TLFM). In the proposed control scheme, the dynamics of the FLM is decomposed into two‐time scale models, namely a slow subsystem and a fast subsystem. The performance and robustness of the proposed new two‐time scale IBVS controller for TLFM are verified by pursuing simulation studies. It is observed from the obtained results that the proposed controller effectively stabilises the oscillatory dynamics and tracks the reference trajectory accurately.
Details
1 Department of Electronics and Telecommunication, G H Raisoni College of Engineering, Nagpur, Maharashtra, India
2 Department of Electrical Engineering, National Institute of Technology, Rourkela, India
3 School of Electrical Sciences, Indian Institute of Technology Goa, Ponda, Goa, India