Natural soundscapes contain rich ecological information. Acoustic communication is widespread among animals and these biological sounds, together with abiotic and anthropogenic sounds, offer vast insights into the states of ecosystems. The recent advent of portable autonomous sound recorders has created a new way for ecological data to be acquired at scale, without many of the biases associated with human observers (Servick, 2014). Consequently, monitoring with passive acoustic recorders has garnered much attention and ecologists have been quick to embrace the technology (Burivalova et al., 2019; Oswald et al., 2022; Sugai et al., 2019). However, effective ecological monitoring using acoustic technology is more than just recording sound. Without proper planning and experimental design, acoustic monitoring can amass huge volumes of data that are difficult to analyze and may not be informative for conservation or management. To ensure acoustic monitoring is effective, we need methods and processes for implementation that align with a priori objectives and questions. Ensuring that the hype of the technology and the promise of big data does not overshadow the importance of well-designed monitoring programs will help to realize the wide-ranging benefits that acoustic recorders can offer ecological monitoring (Bayraktarov et al., 2019).
Effective ecological monitoring collects data that are useful in conservation planning or decision-making. Unfortunately, there are many examples of monitoring programs that have failed to inform management. The reasons for this are many but include poorly defined questions and experimental designs, monitoring many species inadequately instead of fewer species sufficiently, lack of ground-truthing, disruptions to the integrity of long-term data, and inappropriate data management (Lindenmayer & Likens, 2010). These issues can lead to wasted time and money, poor conservation decisions, and the degradation of people's trust in ecological data (Lindenmayer & Likens, 2010; McDonald-Madden et al., 2010). To avoid these problems, new monitoring programs should undergo a thorough planning process with appropriate project partners where conceptual models, objectives and questions are defined before other details (e.g., monitoring methods, specific metrics) are decided upon. In this early stage, practitioners should clearly define their reasons for monitoring, which species or threats will be monitored and why, and any relevant trade-offs (Wintle, 2018). Where appropriate, these should be framed in a management context that clearly links monitoring and management actions (Robinson et al., 2018). These steps will help to identify key partners and stakeholders, and the suitable processes and timeframes for evaluation and reporting.
In this paper, we discuss key concepts of effective ecological monitoring as they relate to acoustic programs and provide recommendations for practitioners (Figure 1). The most fundamental considerations are whether monitoring is needed at all (McDonald-Madden et al., 2010) and, if so, whether acoustic methods are appropriate relative to alternative survey methods. A detailed treatment of these questions is beyond the scope of this paper; we assume that monitoring is needed, and that acoustics has been chosen as the likely best option, whether by itself or in conjunction with other methods. The recommendations we provide are not intended to be overly prescriptive, but instead suggest general considerations for planning an effective acoustic monitoring program. Furthermore, our recommendations are likely transferable to many technologies that are applied, often at large scales, to monitor species and ecosystems. While acoustics has a wide range of potential uses, such as general surveillance monitoring, as a tool of discovery, and to create sonic time-capsules of places (Deichmann et al., 2018; Desjonquères et al., 2020; Penar et al., 2020; Roe et al., 2021), the focus of this paper is targeted monitoring of species and ecosystems in terrestrial and aquatic environments using passive acoustic sensors. To this end, we draw on the literature and our collective experiences to frame the key considerations as follows:
- Coordination and partnerships: who will be involved in the program's setup, implementation and oversight?
- Monitoring objectives and questions: what do you want to know?
- Acoustic measurement entities: what sounds will you measure?
- Field survey design: where and how will you deploy passive acoustic sensors? Do you need to collect any additional (non-acoustic) data? Will sensor placement have any unintentional impact on fauna or people?
- Recording schedules and periodicity: how often and for how long will you record sound?
- Sound data processing: how will you analyze the collected sound data?
- Data storage and availability: where will you store your sound data and will it be publicly accessible?
FIGURE 1. Flowchart of recommendations for the development of an effective monitoring program using passive acoustic recorders.
Effective monitoring begins with establishing an appropriate project team and partnerships. In addition to a core team of people who will lead the implementation of a program, partnerships with stakeholders, such as Traditional Owners, other landowners, and researchers, should be established early to ensure an inclusive co-design process. Resilient partnerships consider matters of trust, identity, and power (Dietsch et al., 2021) and, by appropriately addressing these dynamics, avoid perverse outcomes for biodiversity and people alike. Partnerships improve success in monitoring and conservation (Garnett et al., 2018; Lacher et al., 2012). By working toward common goals, partnerships ensure that programs address knowledge gaps that are important to all entities and, where relevant, contribute to policy or management decisions (Lindenmayer & Likens, 2018). Partners can also help determine whether similar or larger monitoring programs already exist, which may influence the design of the new program (e.g., collection of standard metadata that a larger program requires). For some programs, a small core team may be sufficient, but larger partnerships are increasingly needed as large-scale, long-term programs become more feasible.
The Australian Acoustic Observatory is a good example of the importance of partnerships in large monitoring programs (Roe et al., 2021). The program currently monitors 90 sites, each with four solar-powered sensors.
Clear objectives and good questions are the foundation of effective ecological monitoring (Lindenmayer et al., 2022; Lindenmayer & Likens, 2018). Objectives may be relatively simple (e.g., create a species list of vocal animals) or they may be more complex, based on a conceptual model of the system and related a priori hypotheses (e.g., monitor population trend for a species in response to management). Defining objectives early can help to ensure that all aspects of a monitoring program are fit-for-purpose and understood by the entire project team. Failing to do so can result in large amounts of data being collected with negligible benefit to conservation and any proposed management (Lindenmayer et al., 2013). This is a risk with acoustic monitoring because it is relatively easy to deploy sensors and record sound. Problems can arise where sound recorders are deployed before resolving how the sound data will be used, since this impacts decisions like choice of sensors, recording schedules, and spatial coverage. Consequently, data may be inappropriate for questions developed afterwards. Good questions should always guide the design of targeted acoustic monitoring programs and, in so far as possible, we should avoid retrofitting questions to sound data after collection.
Ecological monitoring, especially ongoing or long-term monitoring, can provide information to track changes in the environment through time. Targeted ecological monitoring is usually done to answer questions about the trajectory of a species or ecosystem, ideally in a design that guides management decisions. Good monitoring questions are necessarily specific. Good questions are also evolving questions (Lindenmayer & Likens, 2010). As more is understood about a system, questions should be reviewed and revised to ensure that data continue to address the objectives. Questions can be modified, or new questions can be added to a program (although the integrity of the long-term dataset must be considered; Lindenmayer et al., 2022). For instance, if management is enacted or changed in response to the results of monitoring, questions and survey methods should be revised to ensure that the impacts of new management actions are properly captured. This “learning by doing” approach is foundational to adaptive management (Hauser et al., 2019; Lindenmayer & Likens, 2009; Lyons et al., 2008) and highlights the importance of reviewing monitoring data early and often, especially when something in the system being monitored is changed.
Generally, there is little that is unique about a good monitoring question using acoustics versus other survey methods; the method is not the question. However, acoustic signals and their dynamics (see section Acoustic measurement entities) should be captured in hypotheses, where the knowledge exists to do so. For example, Bradfer-Lawrence et al. (2020) demonstrated that soundscape evenness (one of many acoustic indices) follows a U-shaped curve whereby it most strongly indicates bird species richness and abundance at intermediate levels of energy. This knowledge could be built into a priori hypotheses about how soundscapes are expected to reflect bird assemblages which, in turn, reflect habitat condition. However, as acoustics is still a young discipline, many of these ecosystem-sound relationships are yet to be resolved (Bateman & Uzal, 2021), which limits their incorporation into hypotheses. This reflects the current state of acoustics: the technology is being applied in monitoring while foundational theories and relationships are still being determined (Stowell & Sueur, 2020). Monitoring data may help clarify some of these theories and, where feasible, a program could incorporate a pre-survey investigation to resolve these relationships. However, targeted monitoring should mostly rely on existing acoustic knowledge to ensure that it is fit-for-purpose. Concurrent on-ground data, collected using conventional methods, are almost always required to validate acoustic measurements and infer accurate meaning from acoustic data, at least until predicted relationships are established.
As knowledge advances, monitoring questions can evolve to incorporate new approaches if they help meet objectives. For instance, species abundance is difficult to monitor using acoustics (but see Borker et al., 2014; Lambert & McDonald, 2014; Marques et al., 2013; Pérez-Granados & Traba, 2021; Simmons et al., 2022). Currently, other approaches like the acoustic abundance index (Krishnan, 2019) and species occupancy (Balantic & Donovan, 2019; Law et al., 2022; Wood & Peery, 2022) can be used to answer questions about relative abundance and population change. However, as abundance data can be very informative for conservation, if methods to acoustically measure abundance were developed, monitoring questions could evolve to incorporate such data. Such methods may be relatively simple for species whose calling behavior is understood (Lambert & McDonald, 2014); however, more complex or unknown signaling systems may require further research before density can be extrapolated from recordings. This reiterates the importance of reviewing and revising monitoring questions in line with the best available knowledge.
Acoustic measurement entities

Once monitoring objectives and questions are defined, we need to consider measurement entities: what will you measure to answer your questions? For acoustics, this means selecting sound events that signal the biotic or abiotic states relevant to your monitoring objectives. This may seem self-evident, but acoustic studies often fail to justify, or even describe, the specific vocalizations or other sound signals that are targeted in the study or monitoring program. Programs should describe and justify either the specific sounds targeted or, as is likely the case for generic monitoring, the sounds that are expected within the recording period. For example, if a program seeks to monitor a bird species' use of feeding habitat, it should target calls or calling behaviors that are associated with feeding events (cf. non-feeding calls, such as a flight call given during flyover), and these should be described. If a program seeks to monitor all bird species present in a dawn chorus, it should justify why this time is targeted (what will it capture or not capture relative to other times of day?) and explain what calling behaviors (e.g., territorial displays) are expected.
In some cases, such as programs that aim to quantify species detection/non-detection, a species' most common or conspicuous call may be the appropriate measurement entity. General knowledge about a species' calls may be sufficient to establish an acoustic monitoring program. However, for programs that aim to measure specific behavioral contexts or demographic parameters, detailed knowledge of the species' vocal behavior is required. If this is not already known, studies of the species' vocal behavior may be necessary before a monitoring program begins in earnest (e.g., see Teixeira et al., 2020, 2021). An additional layer of complexity is that some species have multiple components to their vocalizations, in which case the practitioner must decide which components should be targeted. Some components may be highly stereotypical, while others are not. For example, recent research on the vocal culture of Albert's lyrebird Menura alberti reveals that, of the three components present in the male ‘whistle’ song, the final component is more stable across geographic distance than the introductory or body components (Backhouse et al., 2021). Such differences may affect a monitoring program by impacting the performance of machine learning tools (can an algorithm detect all variations of a call type?) or quantitative call metrics like pulse or beat rate.
An alternative approach to measuring species-specific calls is the use of acoustic indices, which are mathematical summaries of soundscape elements (Farina, 2019). Acoustic indices can be used to reveal large spatiotemporal trends in soundscapes and, potentially, ecological condition or species diversity (Fuller et al., 2015; Tucker et al., 2014). For monitoring programs that aim to measure acoustic indices as proxies of biodiversity, the chosen measurement entities will necessarily be broad; they may comprise the overall soundscape in a defined unit of time or frequency band. In any case, the chosen measurement entities should be described and justified with reference, where possible, to species' vocal behaviors or underlying assumptions linking sound to ecological condition. In any monitoring context, poor survey design and reliance on unverified indices can create erroneous and apparently contradictory results (Alcocer et al., 2022; Hayward et al., 2015). To avoid these problems in acoustic surveys, the choice of acoustic measurement entities, whether specific calls or acoustic indices, is a critically important step in program design.
Field survey design

Acoustic field surveys must be designed to appropriately capture the measurement entities needed to answer the defined objectives and questions. This means considering the sensor type, the sampling locations (hereafter, "sites"), including their spatial coverage and representativeness, the number of sensors deployed per site, and the location of sensors within sites. All sensor types have their pros and cons, and these should be explicitly considered in the survey design. The emergence of low-cost sensors, such as the AudioMoth and HydroMoth, has made acoustic monitoring accessible to a much wider range of programs, but, as discussed below, sensor choice also affects recording quality.
A critical yet overlooked issue is that different sensors produce recordings of different quality. For instance, less-sensitive microphones can produce 'quieter' recordings with less information (i.e., lower signal-to-noise ratio), which can affect both species' detection probabilities and calculations of acoustic indices (see section Sound data processing). Darras et al. (2020) tested 12 microphone models and found considerable differences between models in signal-to-noise ratio, sound detection space and, consequently, the detectability of birds and bats in the field. Likewise, for underwater recording, Lamont et al. (2022) showed that the HydroMoth device had a lower signal-to-noise ratio relative to the more costly SoundTrap sensors.
The location of survey sites should comprehensively cover the geographic range relevant to objectives; in some cases, this may be a species' entire known distribution (Woinarski, 2018). When targeting specific species, such as in occupancy surveys, sensors can be deployed randomly across a species' expected distribution (randomized sampling) or at locations known to have biological importance to the species (preferential sampling) (Wood & Peery, 2022). For example, randomized sampling could involve the random selection of survey sites from a grid overlaid on mapped habitat for a species. In preferential sampling, sites could be selected from a data frame of known nesting sites or animal home ranges. Both approaches are valid and deciding between them is a matter of the monitoring aims and questions, as well as the extent of a priori knowledge of species' space use (Wood & Peery, 2022). For generic biodiversity surveys, sites may be sampled randomly within habitat types or preferentially according to habitat features (e.g., waterbodies). In any program, if monitoring questions relate to the impact of management or other environmental factors, sites should be stratified to properly represent these and, where possible, a before-after control-impact (BACI) design should be utilized (Christie et al., 2019).
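To make the distinction between sampling strategies concrete, the sketch below illustrates how randomized and preferential site selection might be implemented for a hypothetical grid of candidate cells; the cell attributes, number of sites, and the "known nest" flag are invented for illustration only and would be replaced by real spatial data in practice.

```python
import random

random.seed(42)  # reproducible site selection

# Hypothetical candidate locations: a 10 x 10 grid of cells overlaid on mapped
# habitat, each tagged with whether it contains a known nest (for preferential sampling).
candidate_cells = [
    {"cell_id": f"cell_{r}_{c}", "row": r, "col": c, "known_nest": (r + c) % 7 == 0}
    for r in range(10) for c in range(10)
]

n_sites = 12

# Randomized sampling: draw sites uniformly from all candidate cells.
random_sites = random.sample(candidate_cells, n_sites)

# Preferential sampling: restrict the draw to cells with known biological importance.
nest_cells = [cell for cell in candidate_cells if cell["known_nest"]]
preferential_sites = random.sample(nest_cells, min(n_sites, len(nest_cells)))

print("Randomized:", [s["cell_id"] for s in random_sites])
print("Preferential:", [s["cell_id"] for s in preferential_sites])
```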
The positioning of sensors within sites and number of sensors per site are also important considerations. Sensors should be positioned to best capture the targeted sound events (measurement entities). In some cases, this will be broad (e.g., anywhere within a habitat patch) but in other cases specific locations will need to be selected (e.g., at nests, in the canopy). Sensor position should also consider potential ambient noise. For instance, geophony, such as running water, and anthrophony, such as vehicle noise, can mask target sounds, making data processing difficult and results erroneous. Protection of the sensors from damage may also influence within-site location, such as if sensors need to be at a height to prevent interference from humans or non-arboreal animals. Additionally, it is important to consider the potential for unintended impacts on people (e.g., privacy concerns from recording human voices, Traditional Owner sovereignty) and animals (e.g., neophobia) (Sandbrook et al., 2021).
For many programs, a single sensor representing a sampling unit will be appropriate (i.e., one sensor equals one site), but in other cases where within-site spatial coverage or detection space needs to be large, more sensors would be required (Sugai et al., 2019, 2020). In the latter case, sensors should be sufficiently distant from one another to ensure that they do not overlap in their detection space. Alternatively, some programs may use an array of multiple sensors whose detection spaces overlap to triangulate and count individual animals (Stevenson et al., 2015). Measuring detection space directly (e.g., through playback experiments) is necessary where spatial information is required (Sugai et al., 2020), but it is important to realize that detection space is not static. It will vary with different levels of background noise, the amplitude of species' calls, and the physical environment.
For hypothesis-driven programs, practitioners must ensure that the study design has sufficient statistical power to answer the monitoring questions (Lindenmayer et al., 2022). Where possible, a power analysis should be conducted before a program commences (or soon thereafter, using early empirical data) to help plan an appropriate sample size of sites, sensors within sites, and durations of recordings. An optimal study design uses the smallest sample sizes needed to reliably detect an effect of the size expected. Acoustic monitoring programs often collect repeated data from each site; for example, many samples (e.g., species detections, index values) can be taken from a single recording, but these will not be independent. This can artificially inflate sample sizes (pseudoreplication) and, if not dealt with in analysis, increase the risk of type I errors (i.e., falsely concluding that there is a significant effect) (Alcocer et al., 2022). Nevertheless, some methods, such as occupancy modeling, require repeated data, which acoustic monitoring can efficiently acquire. Acoustic monitoring can also be prone to type II errors (i.e., falsely concluding there is no effect) because the costs of equipment and fieldwork, as well as uncontrollable environmental events, can limit the number of independent sites sampled per treatment. Particularly for rare species, when assessing occupancy it is usually preferable to sample more sites rather than increase within-site effort, but the optimal balance depends on species detectability (MacKenzie & Royle, 2005). If empirical data are available, these can be used to investigate statistical power, ideally in the early phases of a monitoring program (Southwell et al., 2019; Wood et al., 2019). Ecologically informed hypothetical data can also be used to investigate power (Smart et al., 2022; Wood, 2022), and these are especially useful when designing larger, multi-species programs. There are various options for implementing power analyses, including several packages and tutorials in R (Banner et al., 2019; Green & MacLeod, 2016; Lu et al., 2017; Wood et al., 2019).
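Although the references above point to R packages, a simulation-based power analysis can also be sketched in a few lines of general-purpose code. The example below is a minimal illustration only, assuming hypothetical nightly call counts, an assumed effect size, and a simple two-group comparison; a real program would substitute its own response variable, effect sizes, and statistical model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(n_sites_per_group, mean_control=4.0, effect=0.5,
                    n_sims=2000, alpha=0.05):
    """Proportion of simulated surveys that detect an assumed difference in
    mean nightly call counts between control and treatment sites."""
    mean_treatment = mean_control * (1 + effect)
    rejections = 0
    for _ in range(n_sims):
        control = rng.poisson(mean_control, n_sites_per_group)
        treatment = rng.poisson(mean_treatment, n_sites_per_group)
        _, p = stats.mannwhitneyu(control, treatment, alternative="two-sided")
        if p < alpha:
            rejections += 1
    return rejections / n_sims

# How many independent sites per treatment would be needed to approach ~80% power
# under these assumed call rates?
for n in (5, 10, 20, 40):
    print(n, "sites per group -> power ~", round(simulated_power(n), 2))
```

Running a simulation like this across a range of site numbers, effect sizes, and recording effort makes the trade-offs between power, cost, and pseudoreplication explicit before any sensors are deployed.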
Lastly, field surveys should consider what, if any, additional on-ground data should be collected. This may include data on weather conditions (e.g., rainfall may influence anuran calling activity; Heard et al., 2015), vegetation (e.g., structure may influence sound attenuation as well as faunal community composition; Scarpelli et al., 2023), aquatic habitat condition (e.g., soniferous aquatic insects are mainly found in sites of intermediate disturbance, with mild nutrient enrichment; Linke et al., 2022), and so on. Phenological data on flowering and fruiting are particularly relevant for acoustic surveys of birds and may positively or negatively impact the success of a monitoring program. For example, flowering trees may attract your target species to the site and enhance detection probability, but if large flocks of birds are drawn to the site, the resulting cacophony may mask detection of target calls. If monitoring questions relate to management, additional data relevant to management activities would likely be informative. Such information can contribute to a more comprehensive and informative monitoring program and may be vital to interpreting acoustic data. As with acoustic entities, any additional entities measured should be properly considered upfront based on the monitoring objectives and existing knowledge of the species or ecosystem.
Recording schedules and periodicity

Recording schedules define the recording times, sample rates, durations, and repetitions of all recordings in a program. Designing efficient recording schedules is critical to the success of acoustic monitoring programs. Where possible, this should consider the statistical power needed to answer your monitoring questions. Given the diversity of monitoring objectives and questions, it is difficult to provide general recommendations for recording schedules, but studies in various ecosystems provide some insight. For example, from simulations and two empirical datasets of bird species richness in North America, Wood et al. (2021) found that increasing sampling effort on fewer days performed better than distributing effort over more days. Franklin et al. (2021) compared four post-dawn survey schedules for assessing bird assemblages in an Australian montane dry sclerophyll forest, and found the optimal method comprised five 20-min samples from each of two survey days. In the eastern Brazilian Amazon, Metcalf, Barlow, et al. (2022) found that increasing temporal coverage and decreasing sample length improved predictions of bird alpha and gamma diversity. Likewise, Francomano et al. (2021) assessed temporal soundscape variability at eight sites from four continents and found that the representativeness of acoustic indices was improved more by increasing the number of subsamples analyzed than by increasing subsample duration. This also holds true for underwater sounds (Linke et al., 2020). Studies like these emphasize that not all recording schedules are equal, and their configurations need to be explicitly considered in line with monitoring questions. Where the acoustic dynamics of your study system are not understood, some acoustic data should be analyzed early (e.g., after one season) alongside concurrent on-ground data to inform an ongoing optimal schedule.
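As an illustration of the subsampling principle described above, the following sketch spreads a fixed daily analysis budget over many short, non-overlapping subsamples drawn from a post-dawn window rather than one long block; the window times, budget, and subsample length are hypothetical and are not recommendations.

```python
import datetime as dt
import random

random.seed(7)

def subsample_schedule(survey_days, minutes_per_day, subsample_minutes,
                       window_start="05:00", window_end="09:00"):
    """Spread a fixed daily analysis budget over many short, non-overlapping
    subsamples drawn at random from a post-dawn recording window."""
    start = dt.datetime.strptime(window_start, "%H:%M")
    end = dt.datetime.strptime(window_end, "%H:%M")
    window_minutes = int((end - start).total_seconds() // 60)
    slots = window_minutes // subsample_minutes          # candidate non-overlapping slots
    n_subsamples = minutes_per_day // subsample_minutes  # how many fit the daily budget
    schedule = {}
    for day in range(1, survey_days + 1):
        chosen = sorted(random.sample(range(slots), n_subsamples))
        schedule[f"day_{day}"] = [
            (start + dt.timedelta(minutes=s * subsample_minutes)).strftime("%H:%M")
            for s in chosen
        ]
    return schedule

# Same 60 min of analysis effort per day, split into 12 x 5-min subsamples
# instead of a single 60-min block.
for day, times in subsample_schedule(survey_days=2, minutes_per_day=60,
                                     subsample_minutes=5).items():
    print(day, times)
```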
Importantly, recording time and duration should be sufficient to maximize species detection probability, which should be informed by existing data, pilot data or, at least, expert knowledge. For example, many bird surveys are undertaken during the dawn chorus which is appropriate for species that are active at that time, and for most inventory-type surveys. However, a drawback is that this time of day can be very noisy, which makes data processing more difficult. If target species vocalize equally or more at other times of the day, it may be better to record outside of the dawn chorus (e.g., dusk recording for black-cockatoo Calyptorhynchus sp. nest monitoring; Teixeira et al., 2022). Clearly, if target vocalizations are given at specific times of day only (e.g., nocturnal calls), that would define the recording time.
In addition to the within-day schedule, it is important to consider sampling periodicity within and between survey seasons (Woinarski, 2018). How many days, weeks, or months do you need to sample to address your monitoring objective? Does the survey need to be repeated and, if so, how often? Again, these considerations must be guided by the monitoring objectives and questions, but logistical constraints are also critical. A highly intensive survey may provide comprehensive within-season data, but it may not be feasible to implement every year and, therefore, long-term data are unlikely. If longevity is important (e.g., for monitoring species' population trends), then the feasibility of repeated site access and the ongoing increases in data storage and processing costs must be carefully considered. This may also influence the choice of sensor as some programs may require longer (or continuous) deployments of sensors with advanced capabilities, such as solar power.
Recording schedules influence the total run time for a sensor, which impacts how frequently batteries and memory cards need to be serviced (Sugai et al., 2020). As such, it is usually necessary for practitioners to predict run time before deploying sensors. Fortunately, most sensors can be programmed via software that predicts daily memory and battery requirements or expiration dates, allowing practitioners to easily examine various schedules. Memory usage is a product of recording duration, bit depth (typically 16 bits for commercial recorders), and sample rate. Sample rate, or sampling frequency, defines the number of digital samples taken per second, which in turn determines the frequency range that can be recorded. Higher sample rates capture greater frequency ranges and, therefore, more species, but they have greater memory and battery requirements. For example, a sample rate of 22.05 kHz will capture most birds while using half the data (for a given bit depth and duration) and significantly less power than a sample rate of 44.1 kHz; however, the higher-frequency sounds of most insects will be missed. To survey species with ultrasonic calls (i.e., >20 kHz), such as most bats, specialized microphones capable of recording at very high sample rates are usually required, but these come with high memory and power requirements. It is necessary to record at a sample rate that is at least twice the frequency of the target species' vocalizations, since digital sensors cannot properly record frequencies above half the sample rate (termed the Nyquist frequency).
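These memory and sample-rate trade-offs are simple arithmetic and can be checked before deployment. The sketch below estimates daily uncompressed storage per sensor and applies the Nyquist check described above; the recording hours, channel count, and target frequencies are illustrative assumptions rather than recommendations.

```python
def daily_storage_gb(sample_rate_hz, bit_depth, hours_recorded, channels=1):
    """Uncompressed WAV storage per sensor per day:
    sample rate x (bit depth / 8) bytes per sample x seconds recorded x channels."""
    bytes_per_day = sample_rate_hz * (bit_depth / 8) * hours_recorded * 3600 * channels
    return bytes_per_day / 1e9  # decimal gigabytes

def nyquist_ok(sample_rate_hz, max_target_freq_hz):
    """A target signal can only be represented if it sits below half the sample rate."""
    return max_target_freq_hz < sample_rate_hz / 2

# Example: 4 h of mono recording per day at 16-bit depth.
for rate in (22_050, 44_100, 96_000):
    print(f"{rate} Hz: {daily_storage_gb(rate, 16, 4):.2f} GB/day, "
          f"captures 10 kHz bird calls: {nyquist_ok(rate, 10_000)}, "
          f"captures 40 kHz bat calls: {nyquist_ok(rate, 40_000)}")
```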
Sensor run time is also influenced by file type and the number of channels used. Most commercial sensors record in uncompressed wav format, but some newer models (e.g., Frontier Labs BAR-LT) can also record in a variety of lossless compressed formats (e.g., FLAC). While these reduce memory requirements, they may also require conversion to another format prior to analysis, which adds processing steps to workflows. Recording to more than one channel (i.e., stereo recording) will increase memory and battery requirements. Every recording schedule and setup involves trade-offs in the species captured and the memory and battery used, and decisions should be guided by the monitoring objectives, the serviceability of the sensors once deployed, and data storage options. While it is relatively easy to record a lot of data, we recommend that practitioners at least consider how much data is enough to address their monitoring objectives (Francomano et al., 2021). Excess data may lead to unexpected insights, or it may simply add to monitoring and storage costs. Surveys should, in the first place, be designed to address objectives, but additional data can be collected within the scope of the survey program if memory and power allow.
Sound data processing

Acoustic monitoring begins with field deployment of sensors, but data collection does not end there. Quantifying sound events requires methods to extract, or collect, relevant data from sound recordings, and these methods should be defined during survey planning. Some projects may rely on manual listening or viewing of spectrograms to detect target sound events. This approach can be accurate (Joshi et al., 2017; Rocha et al., 2015) and, despite being labor-intensive, can still render acoustic methods an efficient tool for data collection (Joshi et al., 2017; Wimmer et al., 2013). However, acoustics is increasingly applied at scales that make manual methods impractical (Sugai et al., 2019). For larger projects, some form of automation will likely be required. Machine learning methods to detect sounds of interest are becoming highly sophisticated (Kahl et al., 2021; Lauha et al., 2022; Liu et al., 2022) and their development can be considered a sub-discipline of acoustics, the breadth of which is beyond the scope of this paper. Practitioners implementing an acoustic monitoring program may build a custom recognition algorithm, possibly in collaboration with computer scientists, or use commercial software such as Kaleidoscope Pro.
Importantly, performance can vary substantially between recognizer types (Brooker et al., 2020; Lemen et al., 2015; Marchal et al., 2021; Russo & Voigt, 2016), with consequences for acoustic data outputs and compatibility across datasets. As such, practitioners should understand how detectable a species is with a given recognizer, which can be calculated by comparing a recognizer's outputs to manually labeled data. All recognizers will incur a degree of error (false positives and false negatives), and it is important to decide whether precision (the proportion of detections that are correct) or recall (the proportion of species' calls that are detected) is more important for a given program, because the two usually trade off against each other. Accordingly, if the chosen recognizer software assigns a confidence score to each detection, practitioners can set a confidence level (i.e., a threshold below which detections are not accepted) that optimizes the trade-off between precision and recall. For a more in-depth treatment of detector metrics, see Knight et al. (2017).
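To illustrate how the choice of confidence threshold shifts the precision-recall trade-off, the sketch below computes both metrics over a handful of hypothetical recognizer detections that have been manually labeled; the scores, labels, and total number of true calls are invented for the example.

```python
import numpy as np

# Hypothetical recognizer output: a confidence score per detection, and a manual
# label (1 = true positive, 0 = false positive) from a verified subset of recordings.
scores = np.array([0.95, 0.91, 0.88, 0.80, 0.74, 0.69, 0.62, 0.55, 0.43, 0.31])
labels = np.array([1,    1,    1,    1,    0,    1,    0,    0,    1,    0])
total_true_calls = 8  # true calls known to exist in the verified recordings

for threshold in (0.3, 0.5, 0.7, 0.9):
    accepted = scores >= threshold
    tp = int(labels[accepted].sum())           # accepted detections that are correct
    fp = int(accepted.sum() - tp)              # accepted detections that are wrong
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    recall = tp / total_true_calls
    print(f"threshold {threshold:.1f}: precision {precision:.2f}, recall {recall:.2f}")
```

In this toy example, raising the threshold increases precision while recall falls, which is exactly the trade-off a program must weigh against its objectives.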
Where a program aims to measure a species' detection/non-detection (e.g., occupancy surveys), precision is usually favored over recall because the total number of detections within a given period is not relevant. High precision reduces the likelihood of false detections and can reduce the need for extensive verification of detections. Conversely, where a program aims to detect a rare or highly cryptic species, high recall is often required to minimize the risk of missing the species. High recall often comes with a higher rate of false positives and, consequently, a need for thorough manual verification. In any case, recognizer outputs usually require some manual verification to quantify performance and minimize errors (Sugai et al., 2019). This may be as simple as validating a subset of outputs as true or false positives (to measure precision), or it may require substantial processing to identify false negatives (to measure recall). Some projects involve citizen scientists in validation tasks (Snyder et al., 2022) but others require expert input (Liu et al., 2022).
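Where only a random subset of detections can be verified, the precision of the full recognizer output can be estimated with a simple binomial confidence interval. The sketch below applies a Wilson score interval to hypothetical verification counts; the numbers are illustrative only.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (95% by default)."""
    if n == 0:
        return (float("nan"), float("nan"))
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))

# Suppose 200 detections were randomly drawn from the recognizer's output and
# manually verified, of which 178 were true positives.
verified, true_positives = 200, 178
low, high = wilson_interval(true_positives, verified)
print(f"Estimated precision: {true_positives / verified:.2f} "
      f"(95% CI {low:.2f}-{high:.2f})")
```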
Arguably, the most exciting developments for species detection are new tools that implement advanced deep learning neural networks in a user-friendly way. These networks offer very high performance, but until recently they were largely inaccessible to practitioners and others outside of universities. Today, however, the realm of possibility is rapidly expanding as deep learning tools like BirdNET become available (Kahl et al., 2021; Manzano-Rubio et al., 2022). BirdNET can provide high-precision detections for hundreds of northern hemisphere bird species (Kahl et al., 2021), and it has recently been extended to monitor anurans and an endangered primate (Wood, Barceinas Cruz, & Kahl, 2023; Wood, Kahl, et al., 2023). In addition to pre-trained models, users can train custom recognizers using BirdNET's embeddings, which expands its potential applications. In some cases, such as for chorusing species, off-the-shelf software may have limitations and programs will still require highly trained custom recognizers. Nonetheless, tools like BirdNET, which allow practitioners to easily implement deep learning methods, will undoubtedly revolutionize acoustic monitoring.
Beyond recognizers, acoustic projects that interrogate whole soundscapes, or parts thereof, may use indices-based methods to process sound recordings. Indices are usually not used for species recognition (but see Brodie et al., 2020; Znidersic et al., 2020); rather, they mathematically summarize large patterns in soundscape composition to describe, for example, temporal dynamics in biotic and abiotic sounds. Generally, the use of acoustic indices implies a correlation between acoustic complexity and biological diversity. However, despite many acoustic indices being tested for correlations with ecological variables (Allen-Ankins et al., 2023; Barbaro et al., 2022; Buxton et al., 2018; Dröge et al., 2021; Flowers et al., 2021; Fuller et al., 2015; Minello et al., 2021), results are, at best, mixed. In a recent meta-analysis, Alcocer et al. (2022) found that a small number of indices show a moderately positive association with some metrics of biological diversity, but their performance is highly variable within and between studies and effect sizes (i.e., the strength of the relationships) have declined over time. For example, in a review, Bateman and Uzal (2021) identified a positive relationship between bird species richness and the Acoustic Complexity Index (ACI) in six studies and no relationship in seven studies. They report seven studies that found a positive relationship between ACI and other environmental factors (e.g., vegetation diversity), two that found a negative relationship, and two that found no correlation. In a large study of over 8000 recordings with paired on-ground data from four countries, Sethi et al. (2023) found that indices did not reliably predict bird species richness across datasets, concluding that "there are no common features of biodiverse soundscapes." Of concern, there is an increasing trend of using indices without validation (Alcocer et al., 2022). As such, results from studies relying on indices should be interpreted cautiously.
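For readers unfamiliar with how such indices are calculated, the sketch below computes a simplified version of the Acoustic Complexity Index from a spectrogram, following the general logic of summing intensity changes between adjacent time frames within each frequency bin. It is a from-scratch illustration under those assumptions, not a reference implementation; established packages exist, and, as argued above, any index outputs should be ground-truthed against on-ground data. The file path is hypothetical.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

def acoustic_complexity_index(path, n_fft=512):
    """Simplified Acoustic Complexity Index (ACI): for each frequency bin, sum the
    absolute intensity differences between adjacent time frames, normalize by the
    bin's total intensity, then sum across bins."""
    rate, audio = wavfile.read(path)
    if audio.ndim > 1:                  # use the first channel of stereo recordings
        audio = audio[:, 0]
    _, _, sxx = spectrogram(audio.astype(float), fs=rate, nperseg=n_fft)
    diffs = np.abs(np.diff(sxx, axis=1)).sum(axis=1)   # change over time, per bin
    totals = sxx.sum(axis=1) + 1e-12                   # avoid division by zero
    return float((diffs / totals).sum())

# Example (hypothetical file name):
# print(acoustic_complexity_index("site01_20240101_0600.wav"))
```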
Because acoustic indices summarize all components of a soundscape within defined time and frequency parameters, they are influenced by all biotic and abiotic sounds. As such, attempting to correlate single indices to single biodiversity metrics can be problematic. Unfortunately, to date, most tests of acoustic indices have focused on a handful of common indices and their relationships to terrestrial bird species richness in particular, while relationships to other vocal fauna, including mammals, anurans, and insects, are largely unstudied (Alcocer et al., 2022). A recent study in Australia attempted to address this issue by testing 13 indices for correlations with all vertebrate fauna richness measured using conventional on-ground surveys (Allen-Ankins et al., 2023). Their results show some positive correlations with bird diversity, but poor correlations with anuran and other non-avian diversity. Moreover, correlations were strongest for less-common indices (e.g., spectral density) and weakest for common ones (e.g., ACI), and multiple indices together were more reliable than individual indices.
Like other authors (e.g., Allen-Ankins et al., 2023; Sethi et al., 2023), we recommend that indices be tested and ground-truthed using data from conventional survey methods, as the weight of evidence indicates that the relationships of acoustic indices to biodiversity are currently unresolved for most ecosystems (Alcocer et al., 2022; Bateman & Uzal, 2021; Fuller et al., 2015; Gibb et al., 2019). Some authors have proposed a framework to guide the use of acoustic indices (Bradfer-Lawrence et al., 2019), but even if predictions can be made for a given metric in a given system, the limited transferability of these predictions to other datasets reduces the potential of indices in applied monitoring today. Practitioners wishing to use acoustic indices will need to ensure that their patterns in the given system are understood and calibrated, which may require additional work before or alongside the monitoring program. Other indices-based tools like false color spectrograms (Towsey et al., 2014) and multi-index motifs (Scarpelli et al., 2021) can be powerful methods for summarizing major sound patterns in acoustic data; however, substantial theoretical research is still required to advance the application of acoustic indices (Alcocer et al., 2022; Farina et al., 2021). Until then, we recommend that targeted monitoring programs consider other methods instead of, or in tandem with, acoustic indices.
Data storage and availability

Storing raw recordings is a significant, ongoing challenge for acoustic monitoring programs. While some monitoring programs rely on local storage (e.g., external hard drives), these options can become unmanageable for large monitoring programs, especially because a minimum of two copies of the data is recommended (Metcalf, Abrahams, et al., 2022). Where available, servers or cloud-based options are preferred as these better ensure data safety. Recently, there have been calls for permanent, sharable data repositories for both terrestrial and underwater sounds (Parsons et al., 2022; Vella et al., 2022). Not only would such repositories securely archive data, but they would also help to align different acoustic programs, improve data sharing (including labeled data, which can aid development of call recognizers and other tools), and allow for additional or independent verification and re-use of sound recordings. However, because acoustic datasets can be very large, there are currently few options of this nature (Sugai et al., 2019).
In addition to data storage, data availability is an important consideration. Who will be able to access a program's data, and when? Generally, we support an open science approach; that is, where appropriate, data should be available to others involved in research and decision-making (Woinarski, 2018). However, to ensure data use is equitable, especially for early career researchers and people from marginalized backgrounds, the processes and timing of data sharing need to be carefully considered. For instance, data may be withheld from public repositories until formal publications have been finalized. Alternatively, raw sound recordings may be shared but outputs from analyses (e.g., species detections) may not be. These decisions should rest with the original researchers. In publications, authors may opt to include a statement of data availability that directs readers to contact the authors with any inquiries about accessing data. Where data are publicly shared, sound recordings, metadata, call recognizers and other products could be cited via a persistent identifier, such as a digital object identifier (DOI), to ensure formal recognition of the original authors.
CONCLUSION

In this paper, we have focused on seven principles of effective acoustic monitoring as they apply to programs that target specific species or ecosystem features (Figure 1). These targeted programs are becoming increasingly important as global biodiversity targets are renewed and market-based initiatives like natural capital accounting emerge (Convention on Biological Diversity, n.d.; Mace, 2019; Mace et al., 2018). Conservation technologies like passive acoustic recorders will undoubtedly be an important part of large-scale programs to measure and report on the state of biodiversity. However, whether acoustic data are truly informative for decision-making is not only determined by technological capabilities. Indeed, amassing enormous volumes of sound data—which is increasingly easy to do as recorders become smaller and cheaper—can mask underlying inadequacies in data quality and relevance to management. Monitoring programs can appear to be comprehensive when they are not. To ensure the utility of acoustic data, monitoring programs should be well-designed and implemented in line with a clear question-driven program for data collection, analysis, and reporting.
More broadly, we see the need not only for targeted monitoring programs that answer clearly defined questions, but also generic surveillance programs that keep a finger on the pulse of biodiversity. We must embrace the capacities of technology to scale-up monitoring, acknowledging that discoveries made by sound data collected today may be delayed until machine learning methods and sound-ecosystem theories are sufficiently advanced. Nonetheless, targeted question-driven monitoring is the bedrock of evidence-based decision-making, and to this end acoustic monitoring programs must look to the principles of effective ecological monitoring. Alongside general surveillance monitoring, well-designed targeted acoustic monitoring can enable an effective evidence-base for conservation decision-making now and in the future.
AUTHOR CONTRIBUTIONS

Daniella Teixeira: Conceptualization; writing—original draft; writing—review & editing; investigation; project administration. Paul Roe: Conceptualization; writing—review & editing. Berndt J. van Rensburg: Conceptualization; writing—review & editing. Simon Linke: Conceptualization; writing—review & editing. Paul G. McDonald: Conceptualization; writing—review & editing. David Tucker: Conceptualization; writing—review & editing. Susan Fuller: Conceptualization; writing—review & editing.
ACKNOWLEDGMENTS

We acknowledge the Australian Research Data Commons for funding the research data infrastructure where some of the authors' data are stored and analyzed. Open access publishing facilitated by Queensland University of Technology, as part of the Wiley - Queensland University of Technology agreement via the Council of Australian University Librarians.
FUNDING INFORMATION

No funding was received to assist with the preparation of this manuscript. All authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.
CONFLICT OF INTEREST STATEMENT

All authors declare no conflicts of interest.
DATA AVAILABILITY STATEMENT

Data sharing is not applicable to this article as no new data were created or analyzed in this study.
Abstract
Passive acoustic recorders have emerged as powerful tools for ecological monitoring. However, effective monitoring is not simply an act of recording sounds. To have meaning for conservation and management, acoustic monitoring needs to be properly planned and analyzed to yield high quality information. Here, we provide a set of considerations for the design of an effective acoustic monitoring program. We argue that such a program: (1) has established appropriate partnerships with landowners, Traditional Owners, researchers, or other relevant stakeholders; (2) is based on clear objectives and questions; (3) is explicit in its target sound signals; (4) has considered in-field sensor placement for a range of factors, including experimental design, statistical power, background noise, and potential impacts on human privacy and animal disturbance; (5) has a justified recording schedule and periodicity; (6) has methods to process sound data in line with objectives; and (7) has protocols for permanent data storage and access. Acoustic monitoring is increasingly used in large-scale programs and will be important in addressing global biodiversity targets and new biodiversity markets. It is critical that new monitoring programs are designed to effectively and efficiently capture data that address pertinent and emerging issues in conservation.
AFFILIATIONS
1 School of Biology and Environmental Science, Queensland University of Technology, Brisbane, Australia; Bush Heritage Australia, Melbourne, Australia
2 School of Computer Science, Queensland University of Technology, Brisbane, Australia
3 School of the Environment, The University of Queensland, Brisbane, Australia; Department of Zoology, University of Johannesburg, Johannesburg, South Africa; Department of Zoology and Entomology, University of Pretoria, Pretoria, South Africa
4 CSIRO Land & Water, Brisbane, Australia
5 School of Environmental and Rural Science, University of New England, Armidale, Australia
6 School of Biology and Environmental Science, Queensland University of Technology, Brisbane, Australia