This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Recently, combination of unmanned aerial vehicles (UAVs) and a wireless sensor network (WSN) has been attracting much attention from the research community. Increasingly inexpensive UAVs, thanks to their mobility and flexibility, may efficiently help maintain the network connectivity and relay data [1, 2]. While WSN is an appropriate solution to continuously gather data, its network nodes are not easily relocated and their wireless communication links are essentially unreliable [1–3]. This hinders capturing a full view over a large area. It challenges against instant operations in response to sporadic events as well. If these events are associated with moving objects, the problem is even more complicated [4, 5]. On the other hand, increasingly popular UAVs may effectively compensate the gap, thanks to its high dynamism and autonomy. If being navigated smartly, they may quickly approach event places, monitoring their evolution and even getting involved in handling operations [6–8]. However, over immense spaces such as fields, parks, forests, national borders, seas, and deserts, UAVs alone cannot search elaborately and extensively due to their limited flight time and distance. Their embedded sensing devices can merely “see” the ground underneath at a certain range. This desires data collection from ground sensors to enhance the searching capability of UAVs. As such, mutually compensated power of UAV and WSN means makes the combination suitable for many supervising applications, such as monitoring intruders, animals, fire, and vehicles.
Practically, connectivity in WSN is quite hard to be kept continuous, obstructing data concentration on gateways placed at long distances from sensors. This definitely hampers construction of an adequate measurement database for calculating the exact object location [9, 10]. However, known searching and tracking strategies in literature heavily rely on such a centralized database. Locating these objects desires heavy data exchange and storage of history statistics [4, 5, 11]. Sensors are assumed to be densely populated and robust to provide sufficient statistics continuously, which is unrealistic in large areas. Additionally, it is costly in terms of energy and communication to collect data from distant sensors of many hops away [12, 13].
Technically, an event occurrence is expressed by dramatic changes of one or a few parameters, such as gas volume, sound wave, light, and radiation in the affected region [8, 11, 14, 15]. Unfortunately, data about the event may not be fully reported by sensors at once. Consequently, the system cannot immediately locate and track objects. Sometimes, necessary statistics can only be collected to accurately pinpoint a target object after the UAV flies through a relatively large suspected area [9–12]. This means that preprogramming the UAV trajectory before taking off cannot provide the optimal flight path.
By analyzing radio signals intensively, several works have introduced radio-based localization algorithms in literature [16–18]. Namely, radio signals are processed to predict the current target position. The authors of [16] introduce a trilateration algorithm in combination with the LMS (least mean square) method. The strategy proposes to insightfully learn the channel model to draw localization inference about the target position. Similarly, another RSSI- (received signal strength indicator-) based node localization scheme is proposed in [17]. The distributed weighted search localization algorithm (WSLA) was constructed, combining a node localization precision classification scheme and weight-based searches. The strategy brings up considerable improvement with respect to accuracy. Meanwhile, the authors of [18] constructed a RSSI-assisted localization scheme using machine learning and Kalman filter. They designed two learning algorithms: ridge regression (RR) and vector-output regularized least squares (vo-RLS), which effectively help reduce localization error. While the radio-analyzed approach is conceptually interesting and incurring little latency, its accuracy apparently depends on existence of obstacles. Furthermore, calibrating and training the model to map RSSI to the target position are potentially expensive. The method is therefore likely unfeasible when the mission takes place in a spacious area. In addition, random changes in the environment potentially affect the accuracy as well.
In a similar approach, the authors of [4, 19] introduced tracking methods based on capturing images of target objects and/or on recording their sound. Upon intensive image and acoustic data analysis, the system infers instant positions of a moving object that falls in the visible/audible range of sensors. Arranging directional imaging sensors in a node-dense grid, the authors of [4] designed a
With respect to usage of on-board sensing electronics, in [20, 21], the authors proposed image-based navigation algorithms. This strategy obviously eases the work of users/operators, given that UAVs provide visual data of targets. Advanced image processing techniques also enable quick object detection and bring up a high autonomy in surveying events. A template matching and morphological filtering combined algorithm is introduced in [20], which allows a UAV to track another cooperative vehicle within the distance of few meters. The tracking machine uses a monocular camera and exchanges data with the target to assure the mission success. Differently, multiple means (monocular camera, GPS receiver, and IMU sensor) were exploited in [21] for a UAV to supervise moving vehicles in multiple levels. The proposed scheme first visually detects a vehicle, then tracks its movement, and eventually processes data to navigate the flying UAV from the ground. However, in all of these strategies, on-board cameras can only scan a limited footprint on the ground. This approach is therefore suitable for tracking vehicles that are moving on the road only. It cannot deal with free movement of objects elsewhere. In this regard, our work opts for ground sensors as means of providing extra data. This allows the system to cover a larger UAV-traceable area. To extend the monitoring range, the authors of [15] let networked UAVs work in cooperation with WSNs to detect and monitor natural disaster events based on images and environment statistics (temperature, smoke, humidity, and light). A multilevel data analysis, advanced image processing in particular, and cloud- (ThingSpeak) assisted ground monitoring/maneuvering subsystem were designed to detect and locate such events as fire in the early stage. While the system is practical and applicable with respect to natural disasters, it is not flexible to tightly track particular moving objects. Moreover, UAVs do not seem to be highly autonomous.
From the data processing and representation perspective, ontology and domain knowledge modeling have been introduced to interpret event occurrences [22, 23]. The semantic presentation technology concentrates on intensively analyzing UAV-acquired data (videos, images) to visually understand the scene. By application of AI techniques for advanced image and video processing, it is able to detect, identify, and label objects of interest, which facilitates the context interpretation [22]. Ontology-driven representation using semantic statements such as those in the TrackPOI schema [23, 24] generates high-level descriptions of the scene, which supports UAVs and human operators to visually detect, differentiate and track moving objects. Note however that this approach does not target at searching any object or navigating UAVs in real time to acquire those data. The data processing operations can only be performed with the assumption that the flying UAVs have already visually found out objects and are now successfully tracking them. Furthermore, ontology-driven computation requires a high concentration of data collected and obviously induces long delay due to heavy overhead generated by AI algorithms and semantic web. In the targeted context our study aims at, objects may move arbitrarily (unlike vehicles on an established road) at a high velocity, which demands instant response of inherently resource-limited UAVs to continuously keep track of. This consequently discourages the application of any heavy-overhead protocol. In reality, objects may require sensed data other than visual media, such as sound and radiation, to be detected and tracked. Thus, the ontology approach cannot be used as a key means to search moving objects and to self-navigate UAVs in tracking them in real-time. In principle, the technology may be employed in conjunction with our proposed self-navigation strategy to facilitate ground operations, but this is out of our study scope.
Remarkably, the authors of [8] introduce an interesting and promising framework in which UAVs are equipped with gas sensors to collect data. The study also devised a self-adaptive team of UAVs for monitoring gas emission events. However, in this approach, if the gas detection data collected by UAVs are not sufficient to fully cover the event area, the positioning and tracking mission likely fails. Preprogramming paths before departure, as mentioned earlier, potentially navigate the UAVs along nonoptimal trajectories. This means that the model works efficiently against only such events as gas emission, where no highly dynamic object is targeted. Relying solely on on-board sensors also excludes capability of monitoring ground objects that do not alter measurement results at the flying altitude. Similarly, the progressively spiral-out scheme [25] presents an exhaustive scanning method to discover a mobile target. A time-varying team of UAVs are coordinated to fly along spiral-out trajectories with the hope that the spiral-bounded area covers its location. Depending on how the estimated confidence area changes over time, some UAVs may leave the team or extra ones are invited to join in. While this scanning algorithm is truly aimed at searching mobile targets, its exhaustiveness costs more flying distance and UAVs than needed. Furthermore, as the UAVs fly along planned paths, regardless of how the target object moves, they do not continuously keep track of it.
In this work, we therefore construct an online (dynamic) self-navigating strategy where a UAV makes use of measurement data reported from ground sensors to navigate itself. Namely, the UAV is steered by referring to instant sensed data, rather than being path-planned before taking off. Its on-board camera(s) or other sensing means is also employed for confirming the presence of the target object and tracking its movement. The problems we attempt to address are (i) how the UAV should be self-navigated to find out and catch up with the moving object in different levels of data availability and (ii) how the available sensed data are processed to create the best navigation hints. Our technical contributions reported in this paper accordingly include the following:
(1)
We build a system model in which UAVs cowork with sensors to locate and track events associated with moving objects. The UAVs collect data from preexisting ground sensors as they are flying. In particular, the UAVs may also proactively deploy sensors on the fly to supplement sensed data about the moving object if necessary. The UAVs are directed based on processing data acquired on the move and on captured images to detect the object
(2)
We propose an online self-navigation strategy based on the above data sources. The dynamic steering scheme is carried out without referring to any centralized database. A local bivariate regression is constructed to formulate the scalar field from measurement data, which supports calculating/predicting the best flying direction regularly. The work presents an efficient transformation of the bivariate function. The local regression can accordingly be implemented by applying known algorithms for univariate polynomials
(3)
Dealing with object motion, we introduce a “wait-in-front” tactic for a UAV to catch up with the moving object quicker. Namely, by observing its motion, the UAV heads toward the next predicted position of the object, instead of its current one. This means that the flying direction is deflected from the gradient vector by a calculated angle
(4)
We present a maneuvering framework in which the UAV flexibly changes its path adjustment policy depending on gathered measurement data. Specifically, UAV steering may switch from gradient vector-driven to heading straightforward, as well as from searching to tracking or the other way around. The goal is to save communication load and edge processing cost aboard. Besides, the flying UAV autonomously restores searching operations when the object has moved away from its tracking range
The rest of this paper is organized as follows. Section 2 describes the system model, highlighting the integrated UAV-WSN structure and UAV self-navigation strategy. It also explains how an object influences the measured parameter, followed by a comprehensive protocol for searching and tracking missions. Section 3 details a local bivariate regression formulation for calculating the gradient vector at any point. Then, algorithms for dealing with moving objects are presented in Section 4. A detailed description of how UAVs are self-navigated in an online manner to catch up with them is also presented. Section 5 subsequently analyzes overhead of our self-navigation strategy in terms of complexity and communication load. To demonstrate the soundness of the proposed framework, we report NS-3 and Matlab-based cosimulation results in Section 6. Statistical performance of the self-navigation in terms of accuracy and costs is concretely discussed therein. Finally, Section 7 draws our conclusions.
2. System Model
Being composed of UAVs and wireless sensors, the proposed system is visually indicated in Figure 1. Measurement data are collected from ground sensors, both directly and via multiple hop paths. Occasionally, a UAV may drop down sensors where and when needed to get more data for locating an object. In addition, the following conditions hold:
(i)
The UAV knows its current position and that of sensors from which it has just received measurement data
(ii)
The terrain under surveillance is relatively flat, so that the problem of searching and tracking is presented in 2D space
(iii)
The UAV at standby state is signaled to fly searching once an object presence is suspected but does not have sufficient data at once to know its exact location
Once a detectable object appears and moves around in the area covered by the WSN, we say that an event occurs. The presence of the object (e.g., vehicle, animal, and radiation source) will cause abnormal changes in some measurement data within its surrounding region. A UAV on its surveillance mission needs to sense these changes. Without referring to any centralized database of sufficient data, the UAV has to rely on data sent from ground sensors to be self-navigated. Due to imperfection of the ground sensor network, it should collect data on the move to gradually know better about the event. Understanding the rule of changes associated with the object appearance, the UAV may appropriately shape its trajectory in a dynamic manner.
2.1. Object and Event Description
Conceptually, an event is the circumstance in which one or a few remarkable objects appear in the area covered by the large-scale WSN. Its occurrence triggers the system to detect, locate, and track moving objects. The presence of a target object results in the fact that some parameter (either primitive or compound), whose value is position-dependent, excessively increases or decreases toward extreme values [8]. Let us denote the parameter, which will be measured by ground sensors, as
The projection of the cone top onto the ground plane should be the center of the object if it is present. As soon as the searching UAV acquires sufficient sensor data, it gets to know the position. The UAV is then navigated directly toward the object. In reality, this good luck however does not happen right at the beginning of the search due to the facts that
(i)
ground sensors do not always successfully transmit their data to a distant centralized point
(ii)
the object-affected area is large, taking time for the UAV to capture data around cone peaks
(iii)
occurrence of the event itself may, to some extent, adversely affect the sensor network connectivity, which hinders centralizing data
As a result, the UAV only obtains data from its nearby sensors, which likely excludes ones close to the object in early time of the search. A question raised here is, given that limited information, how the UAV should be directed to move as close as possible to the object. Its answer would be that the UAV just flies along the direction that observes the highest spike of measurement values.
If multiple events occur in parallel, separate objects should be observed. In such a case, it simply dispatches one UAV to search and track each. To identify and label each object, UAVs essentially rely on image-based recognition or on special on-board electronics that process, for example, ultrasound or radiation signals. Another alternative is to track the “cone”
2.2. Online Self-Navigation Strategy
While the UAV is flying, data reported from connected sensors let it model a scalar field
Knowing the explicit expression of function
Once the gradient vector has been computed, the UAV is dynamically self-navigated according to the velocity vector. It can be expressed as
However, the UAV gets sensed information from ground sensors that are not too far way; the expression of
(i)
The object is successfully found thereat. Confirmation of the object presence may be realized based on on-board sensing devices. For a simplified presentation, we hereafter assume that images captured by on-board cameras are processed for this purpose. Signal found may also be generated if some measured values exceeding
(ii)
The UAV does not detect any object; it is just a local peak
During the tracking stage, if the object moves out of the localizable range, signal missing is generated, bringing the UAV to the discovery state as well. As such, gradient vector is not estimated all the time of searching and tracking. Its calculation is necessary only when ground sensor data are insufficient. Even in the searching phase, the UAV will cease the periodical gradient update when a peak is located. These behaviors are later embodied in an algorithm devised in Section 4.2. It is also noticed that, the UAV normally stays at the standby state, being ready for taking off. When a value of
To assure that the UAV is self-navigated heading to expected positions, even in case of wind and atmospheric disturbances, application of a supplementary adjustment model [27, 28] can be incorporated. With respect to positioning, if the UAV relies on satellite-based positioning systems (e.g., GPS), the impact of bad weather (e.g., fog, smog, and rain) on the accuracy is said to be insignificant [29]. Furthermore, it may also improve the accuracy by correlating its own data to the sensor-reported samples. In confirming the object appearance, severe reduction of visibility due to bad weather may degrade camera images. To overcome this challenge, the UAV may be equipped with supplementary on-board sensors that measure ultrasound, radiation, or the like, to detect the target object better.
3. Data Formulation and Processing
In the searching phase, the UAV is flying in the object-affected region where values of
Table 1
Definition of key symbols.
Symbol | Meaning/explanation |
The scalar field of parameter | |
The object-affected region, observing abnormal growth in values of parameter | |
The low and high levels of parameter | |
The | |
The number of terms in | |
The number of measurement samples available right before regression of | |
Gradient vector at the current position of the field | |
Ground velocity of the UAV and the object, respectively | |
The vector whose coefficients | |
The scalar whose elements | |
The | |
The | |
The vector whose coordinates are | |
U | The orthogonal projection position of UAV onto the ground, object position at the time interval |
The time intervals, respectively, for updating the gradient vector and for capturing images to confirm the object presence | |
The deviation angle to steer the UAV against the gradient vector at the regression/update interval | |
The radius of visible range by on-board camera and object localizable range by ground sensor data, respectively. Lemma 2 later mentions sufficient conditions on them to keep track of the mobile object | |
The length of sensor messages, data load per regression, and average uplink throughput from ground sensors to the UAV, respectively | |
The tracking deviation (being, respectively, max and average)—distance between the projection of the UAV onto the ground and the object center | |
The energy consumption power values on moving and on hovering, respectively |
3.1. Formulation Expression of Ground Data
As stated in the previous section,
(i)
The number of measurement samples may be large, making polynomial interpolation intractable. New measurement samples may come in at a high rate, and their values are time-varying
(ii)
In reality, the impact of an event on the measured parameter is maximum at its center and gradually decreases along the distance therefrom. This implicates a necessity of weighing measurement values, taking into account their source position.
Given that at present time
Lemma 1.
There exists a bijective map between two-dimension array
Proof.
Let us define scalar
Then,
Once values of
It is then inferred from Lemma 1 that
Finally, the gradient vector can be calculated out when coefficients
3.2. Measurement Data Regression
Calculation of partial derivatives of function
We next extend scalar
Subsequently,
Minimizing
As the number of coefficients in
Regarding the weighing matrix
4. Dealing with Object Motion
If the target object does not change its location, the UAV is directed following the gradient vector until it finds out a peak
4.1. Updating Motion Information of Objects
Cone
Coefficients
The actual flying distance U
Finally, deviation angle
The UAV is then self-navigated according to this deviation angle, i.e., along vector
4.2. Tracking Motion Objects
Once finding out the moving object, the UAV switches to tracking mode as depicted in Figure 3. As stated earlier in Section 2, on-board sensing devices, embedded cameras in particular, continuously help the UAV chase the object. To let the object be within its visible range, the tracking UAV must
(1)
move to keep the orthogonal projection of the UAV onto the ground close to the object
(2)
capture and process images frequently enough, so that
[figures omitted; refer to PDF]
On the other hand, if the object moves out of visible range, violating the above rule, the system falls in one of the following cases:
(1)
The object moves absolutely far away, triggering signal missing. This brings the system to state discovery as indicated in Figure 3. The UAV then waits for further data to move back to the searching state, or more luckily, directly to any peak found
(2)
The system can still position the object center thanks to sufficient data provided by ground sensors. This means that the object is still marked found
In the second case, if the UAV moves in the right direction in the next update interval and the scanning radius is large enough, it quickly “sees” the object again by the camera. Figure 6 illustrates these behaviors, further detailing state tracking.
[figure omitted; refer to PDF]Lemma 2.
As soon as a tracked object gets out of the visible range, the sufficient condition for the UAV to get it visible back is that
Proof.
Let us assume that at time instant
In the worst case, the object center is located right at the border of circle (U
Inequalities (21) and (22) are obviously true if constraint (20) holds. This completes the proof.
Note that constraint (20) in Lemma 2 is not a necessary condition; i.e., the UAV may still fully keep track of the object even if it is violated. For an example, in case
Eventually, Algorithm 1 summarizes the process of chasing the moving object. It details the FSM previously presented in Section 2.2. [Re]calculation of flying direction is executed based on both gradient and deviation angle. The algorithm also indicates how the UAV changes its steering tactic in different operation modes. Apparently, escaping from gradient-based driving does weaken the need of data acquisition and processing.
Algorithm 1: Monitoring a moving object: from searching to tracking.
/* initialization */;
/* searching */;
while (
execute local regression to determine
determine gradient
if (a peak
break;
end
calculate
locate
estimate the deviated angle
calculate
navigate the UAV according to angle
end
calculate coordinates of peak
navigate the UAV in a straight line to
capture ground image;
ifobject_found in imagethen
switch to tracking mode;
else
switch to discovery mode;
end
4.3. On-the-Fly Deployment of Sensors
While searching and tracking the moving object, the UAV expects to have sufficient collected data from ground sensors as soon as possible, so that the very object position is pinpointed accurately. Note however that sensors may be sparsely distributed or partially corrupted due to the event occurrence, raising the need to deploy on-the-fly ones by the UAV itself. The following positions should be considered dropping sensors down nearby:
(i)
Predicted position of P
(ii)
Expected arrival point U
(iii)
Predicted object center position
(iv)
Positions in the vicinity of the UAV to meet inequality (13)
(v)
Surrounding positions to augment satisfaction of the conditions claimed in Lemma 2
At the beginning of each update period, if the system finds out the above points within the drop-down radius of the UAV, its carry-on sensors will get ready. Should more UAVs join in the mission, they should also be dispatched to cooperatively drop down. A dropping schedule may nonetheless be discarded if existing sensors nearby already transmit sufficient measurement samples.
5. Overhead Analysis
Now that the searching and tracking strategies have clearly been clarified, let us evaluate their complexity and data communication load. It can be inferred from the analytic model and algorithms presented in Sections 3 and 4 that the overhead is influenced by the UAV steering policy, operation mode, object mobility, data updating frequency, and regression parameters.
5.1. Complexity
The heaviest load pertains to matrix chain multiplication in Equation (12) to find out coefficients of vector
As indicated in Table 2, the complexity is
Table 2
Computation load of steps in calculating coefficients of vector
st. | Operation | Out size | Multiplications | Notes |
1 | N | Nmm | ||
2 | N | NmN | ||
3 | N | NNN | Inversion | |
4 | N | Nmm | Same as step 1 | |
5 | N | Nmm | ||
6 | N | NNN | ||
Total (step 4 omitted): |
If an on-board camera is employed to confirm the existence of the object, training tasks must be executed followed by periodical object detection. UAVs nowadays are computationally powerful enough to get the jobs done [33]. They can anyway offload heavy training tasks to a cloudlet if wishing to reduce the edge processing load [34]. Upon acquiring the training results, they may easily and quickly carry out detection jobs.
5.2. Data Communication Load
Periodical local regression to update the gradient vector requires availability of measurement data from ground sensors. As mentioned earlier in Section 3, at least
Note that the above data amount is desired only before the object position is located. As soon as signal found in the FSM depicted in Figure 3 is generated, the UAV, without regression update, flies straightforward to the detected center position.
6. Performance Evaluation Results
We verified the proposed strategy by constructing NS-3 and Matlab co-simulations. Data communication and formulation were realized in the well-known network simulation tool NS-3. Meanwhile, Simulink UAV Library for Robotics System Toolbox™ was employed to adjust and validate flying trajectories upon each self-navigation operation. Specifically, how the exact heading trajectory looks like in each update interval is shaped by the Simulink tool. With the application of the module, the navigating calculation has already taken into account external factors such as wind and atmospheric disturbances [27, 28]. Networking configurations and system setup are concretely described in Table 3. In all the simulation scenarios, sensor-measured data also underwent a Kalman filter-based preprocessing stage [18] to be cleaned.
Table 3
System parameter setup for simulation scenarios.
No. | Category | Parameter | Value | Notes |
1 | Network | Number of sensors | 2500 | |
2 | Network | Ground area | ||
3 | Network | Routing | AODV | |
4 | Communication link | Wireless standard | IEEE802.11 | Protocol suite |
5 | Communication link | Tx power | 16 dBm | Max value |
6 | Communication link | Rx sensitivity | -96 dBm | |
7 | Data | Message size | 256 bytes | |
8 | Data | Tx period | 2-10 s | Also update period |
9 | UAV | Max ground speed | 15 m/s | |
10 | UAV | Max acceleration | 2 m/s | |
11 | UAV | Scanning radius | 30 m | By on-board camera |
12 | UAV | Max altitude | 150 m |
Simulation results show that expected trajectories basically agree with those adjusted by Simulink, as an example seen in Figure 7. The UAV reached its expected arrival position in all update intervals.
[figure omitted; refer to PDF]While our proposed framework (hearafter called online self-nav) is novelly different, we attempted to simulate the best matching schemes we know, for comparative study. At first, the most promising state-of-the-art approach we know, self-adaptive team [8], was shaped to fit the context. As mentioned in Section 1, this scheme solely relies on on-board sensors to detect a gas emission event. For fair comparisons, the team of five UAVs thereon were, however, assumed to also communicate with ground sensors while they were flying. In addition, we also resimulated another relevant approach in organizing a dynamic team of UAVs for searching and tracking the event, to which further comparison is made. Namely, the PSO- (progressively spiral-out optimization-) based algorithm devised in [25] was numerically evaluated in the same system context. In this searching method, a time-varying team of UAVs were path-planned according to predefined rules for exhaustive scanning. They basically flied along spiral-out trajectories, whose bounded area is expected to cover the target moving region.
6.1. Success of Online Self-Navigation
We let the UAV depart at the origin (0 m, 0 m), whereas the object initially arose at position (500 m, 1200 m). The object subsequently moved at variable speed within 10 m/s.
A typical simulation instance of searching is depicted in Figure 8. It shows how the UAV successfully approached the object. Afterwards, it switched to the tracking state (as previously mentioned in Section 2.2). Remarkably, the distance between the projection of the UAV onto the ground and the object position got gradually shorter during the search. At the update period of 8 s and object velocity of 8 m/s, the UAV eventually reached the target after 25 update periods (around 200 s). The success consistently occurred in all the simulation instances we ever ran.
[figures omitted; refer to PDF]
6.2. Searching Efficiency
Searching time and flying distance are key parameters to evaluate how well the UAV finds out the object. To make a fair comparison between our online self-navigation strategy and the self-adaptive team of five UAVs, we let our UAV be stationed at position (0 m, 900 m). Before taking off, the five UAVs of the team were, respectively, arranged at (0 m, 300 m); (0 m, 600 m); (0 m, 900 m); (0 m, 1200 m); and (0 m, 1500 m). The object initially arose at position (900 m, 900 m). Meanwhile, in the case of the aforementioned PSO scanning approach [25] (hereafter referred to as “PSO” for short), the team was initially formed by four UAVs. At the beginning of the search, they were located evenly at circle, centered at position (500 m, 900 m), with a radius of 20 m. During the search, the team called on another UAV to keep the search exhaustive. All the UAVs were also equipped with 30 m scanning cameras.
It can be inferred from Figure 9 that it is basically more costly, in terms of time and of flying distance, to find out the object when the gradient update interval (or update period) gets higher. This does implicate a trade-off between computation/communication and flying costs. However, the impact of the interval on the time and distance is not so significant as that of object mobility.
[figures omitted; refer to PDF]
The figure shows that both temporal and spatial lengths are extended exponentially as the object velocity gets higher. Throughout all 25 scenarios formed by a combination of 5 update period (2 s, 4 s, 6 s, 8 s, and 10s) and object velocity (2 m/s, 4 m/s, 6 m/s, 8 m/s, and 10 m/s) values, we observe a consistent outperformance of our online self-navigation strategy compared to the other. As depicted in Figure 9, it took about 230 s (23 update periods) to reach the target object in the worst case (update period of 10s and object speed of 10 m/s). On the other hand, the self-adaptive team spent about 260 s on locating the object, which is about 13% longer. In this case, the distance difference is 0.54 km (3.82 km versus 3.28 km), meaning 16% longer. Note however that the cumulative flying distance of all the UAV team is up to 19.12 km, gravely longer than 3.28 km in ours. At the same period and speed, PSO observed a total fly distance of 4.41 km per UAV, but the team eventually failed to find out the object. The reason is that its velocity exceeded the limit set by the spiral-out radius and UAV mobility [25]. If the velocity was lowered to 8 m/s, it caught the object after about 230 s, which is 60s (or 35%) longer than ours. In this scenario, the average per-UAV flying distances of the online-self-nav, self-adaptive, and PSO strategies are, respectively, 2.38 km, 2.78 km, and 3.36 km.
6.3. Tracking Accuracy
To evaluate how well the UAV keeps track of the moving object, we regularly collected their location data in the whole tracking process. Let us define tracking deviation at a time instant as the distance between the projection of the UAV onto the ground and the object center. Quantitatively, the average (
As indicated in Figure 10, the deviation values essentially depend on object mobility more significantly than they do on update interval. In general, it is more challenging for the UAVs to track as the speed and the interval get greater. Specifically, in our proposed self-navigation strategy, the maximum and average deviations are only 0.01 m at an interval of 2 s and velocity of 2 m/s. They reach 4.91 m and 4.33 m at 10 s and 10 m/s, respectively. The deviation levels are anyway well within the scanning area of the on-board camera, whose radius is 30 m as indicated in Table 3.
[figures omitted; refer to PDF]
The outperformance of our online navigation scheme, as seen in the chart, is clearly conclusive, with respect to both deviations. All the simulation instances observed better tracking accuracy in 25 pairs of update period and object velocity values, compared to the self-adaptive and PSO strategies. The maximum difference between the two pertaining to the pair of 10 s and 10 m/s is 17.55 m and 7.39 m on for
Overall, the online self-navigation framework brings up clear reduction with respect to searching time, flying distance, and tracking error, as listed in Table 4. As shown in the table, the worst performer was the PSO team.
Table 4
Average reduction of searching time, flying distance, and tracking deviation observed in our proposed WSN-assisted navigation, compared to self-adaptive and PSO strategies.
Strategy | Searching time (s) | Flying distance (km) | Tracking deviation |
PSO team | 194.32 | 2.82 | 5.06 |
Self-adaptive team | 161.92 | 2.37 | 4.28 |
Self-nav. | 136.64 | 1.93 | 0.84 |
Reduction from PSO | 29.68% | 31.56% | 83.40% |
Reduction from self-adapt. | 15.61% | 18.57% | 80.37% |
6.4. Data Exchange Volume
Recall that data exchange between ground sensors and the UAV mainly occurs in the searching phase, when the object center position has not been located. As stated in Section 5.2, during the phase, the exchange volume depends on update interval
Figure 11 truly reflects this argument. With a message length of 256 bytes as indicated in Table 3, the data amount reaches 9.17 MB, 5.99 MB, 4.42 MB, 3.54 MB, and 2.97 MB, corresponding to object velocity of 10 m/s, 8 m/s, 6 m/s, 4 m/s, and 2 m/s, all at a minimum update interval of 2 s. When the update takes place less frequently, the amount is clearly reduced. The reduction intensity nonetheless goes down against the increasing order of update period. Note that the plotted statistics also cover the tracking phase, during which data exchange occurs sporadically (as explained in Sections 2.2 and 5.2).
[figure omitted; refer to PDF]6.5. Energy Consumption
In reality, a UAV spends its energy on flying motors, regression computation, and data communication with ground sensors. The first part accounts for major consumption and depends on motion parameters and flying time. The consumed power
The chart shows that the total energy consumed gets basically higher as the object moves faster. The gradient update interval nevertheless does not adhere to the same rule. This may be attributed to random vibration of motion trajectories. In contrast, it can be noticed that the consumption amount is drastically boosted against the object mobility. In all the simulations, the values range from 24.13 KJ at an update period of 2 s and object velocity of 2 m/s to 68.45 KJ at 10 s and 10 m/s, respectively.
7. Conclusions
We have presented an online self-navigation framework in which UAVs regularly update measurement data on the move. Periodical formulation of acquired data brings up helpful hints to steer UAVs toward moving objects. The local regression-based formulation enables calculating the gradient vector at any instant position. The vector is a good reference for steering the vehicles, especially in early searching time, when available information about targets is limited. Once ground data are sufficient, the formulation also helps instantly locate the exact position of objects. Flexibly switching between searching, tracking, and discovering states, the comprehensive maneuvering strategy considerably reduces both computation and communication loads.
Numerical results of NS-3 and Matlab cosimulations consistently agree with the theoretical expectation. Our proposed framework clearly shows its outperformance in searching and tracking. Compared to the best state-of-the-art method we know, reduction of over 15%, 18%, and 80% is, respectively, observed with respect to searching time, flying distance, and tracking deviation. Simulation statistics also indicate how the supervising performance depends on object mobility and frequency of data update.
Acknowledgments
Our research was partially sponsored by the Ministry of Science and Technology of Vietnam under research project “Research and development of Internet of Things platform (IoT), application to management of high technology, industrial zones,” mission code KC.01.17/16-20.
[1] H. J. Na, S. J. Yoo, "PSO-based dynamic UAV positioning algorithm for sensing information acquisition in wireless sensor networks," IEEE Access, vol. 7, pp. 77499-77513, DOI: 10.1109/ACCESS.2019.2922203, 2019.
[2] I. Jawhar, N. Mohamed, J. Al-Jaroodi, S. Zhang, "A framework for using unmanned aerial vehicles for data collection in linear wireless sensor networks," Journal of Intelligent & Robotic Systems, vol. 74 no. 1-2, pp. 437-453, DOI: 10.1007/s10846-013-9965-9, 2014.
[3] A. A. Awan, M. A. Khan, A. N. Malik, S. A. A. Shah, A. Shahzad, B. Nazir, I. A. Khan, W. Jadoon, N. Shahzad, R. N. Jadoon, "Quality of service-based node relocation technique for mobile sensor networks," Hindawi Wireless Communications and Mobile Computing, vol. 2019, article 5043187,DOI: 10.1155/2019/5043187, 2019.
[4] A. Tripathi, H. P. Gupta, T. Dutta, D. Kumar, S. Jit, K. K. Shukla, "A target tracking system using directional nodes in wireless sensor networks," IEEE Systems Journal, vol. 13 no. 2, pp. 1618-1627, DOI: 10.1109/JSYST.2018.2864684, 2019.
[5] M. Vazquez-Olguin, Y. S. Shmaliy, O. Ibarra-Manzano, "Object tracking over distributed WSNs with consensus on estimates and missing data," IEEE Access, vol. 7, pp. 39448-39458, DOI: 10.1109/ACCESS.2019.2905514, 2019.
[6] W. Meng, Z. He, R. Su, P. K. Yadav, R. Teo, "Decentralized multi-UAV flight autonomy for moving convoys search and track," IEEE Transactions on Control Systems Technology, vol. 25 no. 4, pp. 1480-1487, DOI: 10.1109/TCST.2016.2601287, 2017.
[7] R. R. Pitre, X. R. Li, R. Delbalzo, "UAV route planning for joint search and track missions—an information-value approach," IEEE Transactions on Aerospace and Electronic Systems, vol. 48 no. 3, pp. 2551-2565, DOI: 10.1109/TAES.2012.6237608, 2012.
[8] H. Yuan, C. Xiao, W. Zhan, Y. Wang, C. Shi, H. Ye, K. Jiang, Z. Ye, C. Zhou, Y. Wen, Q. Li, "Target detection, positioning and tracking using new UAV gas sensor systems: simulation and analysis," Springer Journal of Intelligent & Robotic Systems, vol. 94 no. 3-4, pp. 871-882, DOI: 10.1007/s10846-018-0909-2, 2019.
[9] S. Mini, S. K. Udgata, S. L. Sabat, "�-Connected Coverage Problem in Wireless Sensor Networks," ISRN Sensor Networks, vol. 2012,DOI: 10.5402/2012/858021, 2012.
[10] Z. Kang, H. Zeng, H. Hu, Q. Xiong, G. Xu, "Multi-objective optimized connectivity restoring of disjoint segments using mobile data collectors in wireless sensor network," EURASIP Journal on Wireless Communications and Networking, vol. 2017,DOI: 10.1186/s13638-017-0852-0, 2017.
[11] A. Ez-Zaidi, S. Rakrak, "A comparative study of target tracking approaches in wireless sensor networks," Hindawi Journal of Sensors, vol. 2016, article 3270659,DOI: 10.1155/2016/3270659, 2016.
[12] K. L. Ang, J. K. P. Seng, A. M. Zungeru, "Optimizing energy consumption for big data collection in large-scale wireless sensor networks with mobile collectors," IEEE Systems Journal, vol. 12 no. 1, pp. 616-626, DOI: 10.1109/JSYST.2016.2630691, 2018.
[13] J. Zhang, P. Hu, F. Xie, J. Long, A. He, "An energy efficient and reliable in-network data aggregation scheme for WSN," IEEE Access, vol. 6, pp. 71857-71870, DOI: 10.1109/ACCESS.2018.2882210, 2018.
[14] O. Demigha, W. Hidouci, T. Ahmed, "On energy efficiency in collaborative target tracking in wireless sensor network: a review," IEEE Communications Surveys & Tutorials, vol. 15 no. 3, pp. 1210-1222, DOI: 10.1109/SURV.2012.042512.00030, 2013.
[15] A. Sharma, P. K. Singh, A. Sharma, R. Kumar, "An efficient architecture for the accurate detection and monitoring of an event through the sky," Elsevier Computer Communications, vol. 148, pp. 115-128, DOI: 10.1016/j.comcom.2019.09.009, 2019.
[16] J. Du, J. Diouris, Y. Wang, "A RSSI-based parameter tracking strategy for constrained position localization," EURASIP Journal on Advances in Signal Processing, vol. 2017 no. 1,DOI: 10.1186/s13634-017-0512-x, 2017.
[17] Y. Yao, Q. Han, X. Xu, N. Jiang, "A RSSI-based distributed weighted search localization algorithm for WSNs," International Journal of Distributed Sensor Networks, vol. 11 no. 4,DOI: 10.1155/2015/293403, 2015.
[18] S. Mahfouz, F. Mourad-Chehade, P. Honeine, J. Farah, "Target tracking using machine learning and Kalman filter in wireless sensor networks," IEEE Sensors Journal, vol. 14 no. 10, pp. 3715-3725, DOI: 10.1109/JSEN.2014.2332098, 2014.
[19] S. Xiao, W. Li, H. Jiang, Z. Xu, Z. Hu, "Trajectroy prediction for target tracking using acoustic and image hybrid wireless multimedia sensors networks," Springer Multimedia Tools and Applications, vol. 77 no. 10, pp. 12003-12022, DOI: 10.1007/s11042-017-4846-z, 2018.
[20] R. Opromolla, G. Fasano, D. Accardo, "A vision-based approach to UAV detection and tracking in cooperative applications," Sensors, vol. 18 no. 10, article 3391,DOI: 10.3390/s18103391, 2018.
[21] X. Zhao, F. Pu, Z. Wang, H. Chen, Z. Xu, "Detection, tracking, and geolocation of moving vehicle from UAV using monocular camera," IEEE Access, vol. 7, pp. 101160-101170, DOI: 10.1109/ACCESS.2019.2929760, 2019.
[22] D. Cavaliere, V. Loia, A. Saggese, S. Senatore, M. Vento, "A human-like description of scene events for a proper UAV-based video content analysis," Knowledge-Based Systems, vol. 178 no. 2, pp. 69-74, 2019.
[23] D. Cavaliere, V. Loia, A. Saggese, S. Senatore, "Semantically enhanced UAVs to increase the aerial scene understanding," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49 no. 3, pp. 555-567, DOI: 10.1109/TSMC.2017.2757462, 2019.
[24] D. Cavaliere, V. Loia, S. Senatore, "Towards an ontology design pattern for UAV video content analysis," IEEE Access, vol. 7, pp. 105342-105353, DOI: 10.1109/ACCESS.2019.2932442, 2019.
[25] D. Brown, L. Sun, "Dynamic exhaustive mobile target search using unmanned aerial vehicles," IEEE Transactions on Aerospace and Electronic Systems, vol. 55 no. 6, pp. 3413-3423, DOI: 10.1109/taes.2019.2907391, 2019.
[26] S. Lian, Y. He, J. Zhao, "CCD: locating event in wireless sensor network without locations," 2011 IEEE Eighth International Conference on Mobile Ad-Hoc and Sensor Systems,DOI: 10.1109/MASS.2011.72, .
[27] R. W. Beard, T. W. McLain, "Small unmanned aircraft theory and practice," 2012. Chapter 4
[28] MathWorks Robotics Team, Robotics System Toolbox UAV Library, 2019. https://www.mathworks.com/matlabcentral/fileexchange/68788-robotics-system-toolbox-uav-library
[29] A. Ariffin, N. Aziz, K. Othman, "Implementation of GPS for location tracking," 2011 IEEE Control and System Graduate Research Colloquium,DOI: 10.1109/icsgrc.2011.5991833, .
[30] Q. Li, X. Lu, A. Ullah, "Multivariate local polynomial regression for estimating average derivatives," Journal of Nonparametric Statistics, vol. 15 no. 4-5, pp. 607-624, DOI: 10.1080/10485250310001605450, 2003.
[31] J. Fan, I. Gijbels, T. Hu, L. Huang, "A study of variable bandwidth selection for local polynomial regression," Statistica Sinica, vol. 6 no. 1, pp. 113-127, 1996.
[32] J. Taylor, Strategies for mean and modal multivariate local regression, [Ph.D. thesis], 2012.
[33] W. Chen, Z. Baojun, T. Linbo, Z. Boya, "Small vehicles detection based on UAV," The Journal of Engineering, vol. 2019 no. 21, pp. 7894-7897, DOI: 10.1049/joe.2019.0710, 2019.
[34] M. F. Pinto, A. L. M. Marcato, A. G. Melo, L. M. Honόrio, C. Urdiales, "A framework for analyzing fog-cloud computing cooperation applied to information processing of UAVs," Wireless Communications and Mobile Computing, vol. 2019,DOI: 10.1155/2019/7497924, 2019.
You have requested "on-the-fly" machine translation of selected content from our databases. This functionality is provided solely for your convenience and is in no way intended to replace human translation. Show full disclaimer
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer
Copyright © 2020 Tien Pham Van et al. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. http://creativecommons.org/licenses/by/4.0/
Abstract
Increasingly inexpensive unmanned aerial vehicles (UAVs) are helpful for searching and tracking moving objects in ground events. Previous works either have assumed that data about the targets are sufficiently available, or they solely rely on on-board electronics (e.g., camera and radar) to chase them. In a searching mission, path planning is essentially preprogrammed before taking off. Meanwhile, a large-scale wireless sensor network (WSN) is a promising means for monitoring events continuously over immense areas. Due to disadvantageous networking conditions, it is nevertheless hard to maintain a centralized database with sufficient data to instantly estimate target positions. In this paper, we therefore propose an online self-navigation strategy for a UAV-WSN integrated system to supervise moving objects. A UAV on duty exploits data collected on the move from ground sensors together with its own sensing information. The UAV autonomously executes edge processing on the available data to find the best direction toward a target. The designed system eliminates the need of any centralized database (fed continuously by ground sensors) in making navigation decisions. We employ a local bivariate regression to formulate acquired sensor data, which lets the UAV optimally adjust its flying direction, synchronously to reported data and object motion. In addition, we also construct a comprehensive searching and tracking framework in which the UAV flexibly sets its operation mode. As a result, least communication and computation overhead is actually induced. Numerical results obtained from NS-3 and Matlab cosimulations have shown that the designed framework is clearly promising in terms of accuracy and overhead costs.
You have requested "on-the-fly" machine translation of selected content from our databases. This functionality is provided solely for your convenience and is in no way intended to replace human translation. Show full disclaimer
Neither ProQuest nor its licensors make any representations or warranties with respect to the translations. The translations are automatically generated "AS IS" and "AS AVAILABLE" and are not retained in our systems. PROQUEST AND ITS LICENSORS SPECIFICALLY DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING WITHOUT LIMITATION, ANY WARRANTIES FOR AVAILABILITY, ACCURACY, TIMELINESS, COMPLETENESS, NON-INFRINGMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Your use of the translations is subject to all use restrictions contained in your Electronic Products License Agreement and by using the translation functionality you agree to forgo any and all claims against ProQuest or its licensors for your use of the translation functionality and any output derived there from. Hide full disclaimer