1. Introduction
The role of digital screen technology (‘screens’) in the lives of children and its potential impact has been heavily debated [1,2,3,4]. There is some evidence of positive, helpful impacts for children using screens, such as increased learning capacity, higher productivity and enhanced competence in social interaction [5,6,7]. However, there is also some evidence of negative, harmful impacts, such as on physical, emotional and cognitive well-being and overall development [6,8,9,10,11]. Overall, the evidence rarely provides sufficient detail on the nature of use to support informed decision making by families about their children’s use of screens [12,13]. To better understand the potential positive and negative impacts of screen use on children and adolescents—and thus to be able to provide practical evidence-based information to families—it is first crucial to have robust methods to measure screen use. This paper outlines some of the challenges faced in capturing contemporary screen use by children and the advantages and disadvantages of available options for measuring it.
1.1. Challenges in Measuring Children’s Complex Digital Screen Technology Engagement
Technological advancements are bringing growing complexities to children’s engagement with screens, creating important challenges for capturing critical aspects of contemporary screen use. Children now commonly use multiple devices and software, with differing content, for different tasks and in a variety of contexts. This contrasts with the much simpler situation when the first studies on screen use were conducted, when screen use was just television (TV) viewing of a limited number of scheduled broadcast channels. Figure 1 presents a conceptual model outlining the complexity of child-technology interactions. The conceptual model is based on human–computer interaction models [14,15], shares aspects with earlier models [16,17], and was further refined by the authors with input from community advisory groups.
1.1.1. Child
In considering the child at the centre of the model, an important challenge is that different aspects of screen use may be important for different children. How infants and toddlers engage with technology can be vastly different to how adolescents engage with technology; therefore, different methods may be required for different age groups. Children’s gender, interests and physical, mental or social capabilities and propensities may also be important considerations. Further, different considerations, including ethical ones, may be required for children with disabilities, and for children and families from culturally and linguistically diverse backgrounds [18].
1.1.2. Technology
A further measurement challenge is that the technology children interact with includes different hardware, software and content. Children and adolescents use a range of devices with differing capabilities and contents, and each household generally has multiple devices that can be shared. For example, typical homes in the USA have five internet-connected devices (e.g., computer, smartphone, tablet, television, etc.) [19], and young children often use other people’s devices, such as those of their parents or older siblings. Many prior studies have used aggregated groups of screen use, such as combining TV viewing and mobile touch screen device (MTSD) use together, despite known differences in the potential interaction [20]. These multiple devices can also be used simultaneously, with children multitasking. Adding to this challenge, some hardware can operate multiple software programs (apps), which in turn can support multiple contents. For example, a tablet computer can operate one app for internet searching and another for video playback, and both apps can present many types of content. Content on the video app, for example, may range from home-made videos to professionally produced movies. Measuring content has been shown to be important; for example, pro-social and anti-social content can have different impacts [21].
1.1.3. Tasks
A prominent measurement challenge is dealing with the variety of tasks, or purposes, for which children use screens. Technology may be used for relaxation (which is important for health), for activities of daily living such as navigation and encouraging teeth brushing, or for communication such as video chat and social media interaction. Prior studies have often separated the purpose of use into educational versus recreational [22]; however, whether use is educational may not be clear cut, especially for young children. For example, a young child may play a numbers game on an app and see it as recreation whereas their parent sees it as education. Similarly, an adolescent may create music on an app for recreation but be learning about music concepts. The same hardware, software and content can thus have a different purpose depending on the perspective of the user or the observer.
1.1.4. Interaction
The complexity of the challenge for measuring children’s technology use lies not only in the variety of children, technologies and tasks, but also in the different aspects of the interaction of these elements within the local and broader system contexts. For example, it may be important to measure the information flow between child and technology or the physical posture assumed. The short-term aspects of interaction may also have longer term consequences that need to be considered, such as cognitive development and musculoskeletal development. Most studies have focused on the cumulative duration of interaction, with screen time forming the basis of most health guidelines [23,24,25,26]. However, the effect size of screen time is increasingly being questioned [13], and other aspects of child-technology interactions are increasingly being recognised as influential [7,27].
1.1.5. Other People
Children in particular are often interacting with technology together with other people, such as peers, siblings and parents. While this often involves people being physically in the same location, virtual co-use is also common, for example, video chat with remote family or friends [28]. Capturing significant aspects of these interpersonal dimensions is an important measurement challenge as there is evidence these dimensions influence the likely impacts of child-technology interactions. For example, co-viewing has been linked to positive psychosocial health and developmental outcomes [29,30].
1.1.6. Local Context
The physical and social local context of child-technology interaction is diverse and a challenge for measurement. Children are interacting with technology in multiple physical contexts, such as the home, educational settings and in the community, as well as in virtual worlds. Understanding screen use in these contexts is likely to be important for observational studies of children’s natural engagement with technology. Measuring across multiple contexts can be challenging; for example, parents cannot realistically be expected to detail their child’s technology use when at school. The social context, including family practices, values and rules on access to and use of screens, is also likely to be important to consider [31]. Measurement within laboratory studies may be less challenging given the single controlled context of use.
1.1.7. Broader Environment
Studies, and therefore measurement, also face the challenge of considering the broader environment of children’s technology engagement, including the socio-economic, cultural and physical environment. Community attitudes, cultural practices and weather may all influence children’s interaction with technology and may need to be considered, and thus measured.
1.1.8. Time
A final challenge for measuring children’s interaction with technology is dealing with the time of use. Engagement with technology may have different impacts depending on the time of use; for example, watching an exciting program may have no impact on a child’s sleep if viewed in the morning but may disrupt sleep if viewed in the evening just before bedtime. Similarly, the impact of screen use on attention and learning in school may only relate to school day screen use and not weekend screen use. Lastly, patterns of screen use may vary across the year with school holidays, summer weather and other seasonal factors.
In attempting to measure screen use, researchers have most commonly used self- or proxy-reported methods such as questionnaires or diaries that try to capture multiple aspects of the child-technology interaction [32,33]. These subjective methods of measuring screen use are generally easy to administer with low cost. However, they are subject to recall inaccuracy and reporting bias leading to overall imprecision [32,34,35,36], meaning studies based on these methods may miss and/or mistakenly claim important effects. Therefore, a major challenge for the field is to find unbiased and more precise measurement methods that can deal with the complex system within which children interact with screen-based technology.
1.2. Study Aim
The aim of this paper is to help researchers by describing currently available methods to measure screen use by children and adolescents, and by providing guidance around determining a suitable method or combination of methods to support a particular research project.
2. Narrative Review Approach
A narrative review approach was used for this study as the best way to address the aim of providing practical guidance to researchers in selecting measures of screen use by children and thus enable stronger evidence from future studies. Narrative reviews provide a flexible approach to interpreting existing knowledge to bring together implications for research, and are especially suited to complex issues [37,38].
The author team conducted initial searches for measurement methods to investigate children’s screen use in a range of databases (PsycINFO (Ovid), PubMed, Web of Science (Core Collection), CINAHL, SPORTDiscus, Embase (Ovid), MEDLINE (Ovid), Scopus and IEEE), limited to 2010 onwards with a focus on young children, which located 30,312 articles after duplicates were removed. However, this comprehensive search yielded a low number of relevant methods and failed to identify known methods that had been used to measure screen use. Four recent reviews on this area were also located. Two of these reviews found a very low number of studies that had used objective methods to measure screen use in children [32,33]. The scoping review by Browne et al. considered measurement methods for digital media use in children and adolescents and found that the vast majority of methods were proxy-reported (92%), and suggested that the greatest advances in measuring screen use will revolve around using automated data collection from devices or other software solutions [32]. The systematic review by Byrne et al. summarised the measurement methods used to assess screen time in young children (0–6 years) and found that the majority of methods were proxy-reported (completed by parents) (76.3%) via questionnaire (92.4%). None of the located articles (622 articles) within the systematic review used a device-based method to measure screen time [33]. Byrne et al. [33] highlighted the challenge of locating studies that measured just the simple construct of screen time, as information on measurements of screen time was often missing from the titles and abstracts, particularly if screen time was not a primary outcome measure. The systematic review by Perez et al. [39] focused on measures of screen media use for participants of any age that had been validated by direct observation or video observation. They noted poor validity for proxy- or self-reported measures and extra difficulties in using technological measures of screen use by young children. The extensive narrative review by Barr et al. [40] covered many aspects of child development and digital media use, including proposing a toolkit comprising a questionnaire, time use diary, passive mobile device sensing app and electronically prompted sampling.
Therefore, the range of currently available methods that could be used to measure digital screen technology use in children and adolescents was identified by the authors from their database searches, the recent published reviews and review reference lists, as well as methods known to the authors based on their diverse research fields across science, engineering, humanities and health. The current study also presents a discussion of the advantages and disadvantages of each method determined by the authors, to provide guidance for researchers to determine the best method or combination of methods to support a research project.
Whilst valuable evidence can be obtained from qualitative methods such as by using semi-structured interviews to capture reasons for overuse of digital tools by young children [41], to explore adolescents’ perceptions on their patterns and influences of mobile device use [42] or to examine parents’ views of their child’s screen-viewing time [43], this review focuses on quantitative methods.
3. Summary of Different Measurement Method Options
This narrative review covers the following measurement method options: self-/proxy-reporting, direct observation, recording devices, onboard logging and screen recording, network traffic logging of digital data traces, proximity logging and other specialised devices. The range of different method options included was based on iterative and purposive searches of the literature and on expertise of the review team representing a range of science, health and humanity disciplines. This review focuses on methods that can be used for capturing naturalistic child-technology interactions (e.g., use in the home), but the methods may also be useful for other contexts (e.g., laboratory studies). To provide a practical understanding for researchers, some key features of each method are described along with some examples of when the method has been used previously (focusing on children and adolescents), and typical constructs collected with the method. The potential advantages and disadvantages of each type of method are also summarised in Table 1.
3.1. Self-/Proxy-Reporting
The vast majority of observational studies on child-technology interaction to date have used self- or proxy-report methods such as questionnaires or diaries.
3.1.1. Questionnaires
Questionnaires typically collect retrospective recall of screen use over a specific period in either paper or electronic formats. Questionnaire items can include open-ended or closed-ended questions. Online questionnaires can have a wide reach as they allow for collection of data within a fairly short period of time from a diverse and substantial number of people without geographical barriers [44], and therefore can potentially include a wide range of participants and representative samples. Psychometric data (evidence for reliability and validity) are available for some questionnaires, but not all [34,45]. Some studies have reported reasonable validity for self-/proxy-reporting; for example, a study of 9- to 10-year-old participants found end-of-day reports of their exposure to information and communication technology were comparable with data from real time direct observations [46]. In contrast, another study comparing parent-reported duration of child device use to data logged on mobile devices found that only 30% of parents were considered accurate reporters [47]. Some aspects of child-technology interaction may be reported more accurately than others. For example, parent reports of the content of media use (e.g., a child’s favourite apps or TV shows) may be more accurate than reports of duration of use [34,35]. A recent systematic review and meta-analysis of discrepancies between self-reported digital media use and device-logged use highlighted concerns about the validity of self-reported findings, as they were rarely accurate [48]. As noted earlier, of particular concern is not just random recall inaccuracy but also the potential for systematic bias due to social desirability.
Examples of questionnaires used to measure screen use include the Technology Use Questionnaire (TechU-Q) [49], TV viewing as part of the Youth Risk Behavior Survey Questionnaire [50,51] and the Child Sedentary Activity Questionnaire [52]. Many studies have used just a single item for duration of screen use (e.g., [11]). A large number of questionnaires conflate exposure and outcome by assessing ‘problematic’ screen use, for example, the Addiction Profile Index: Internet Addiction Form [53], Behavioural Addiction Measure Video Gaming [54], Bergen Social Media Addiction Scale [55], Game Addiction Scale [56] and Problematic Internet Use Questionnaire [57]. Typical constructs collected cover child, technology, task, other people and include duration of screen use, which devices are owned by participants, co-viewing and whether use is problematic.
3.1.2. Diaries
Diaries collect time use data over a period of time and, like questionnaires, may be in either paper or electronic format. Participants are typically provided with a graphical representation of the day, or part thereof, and for each time period (sometimes 5 or 15 min blocks) report aspects such as the activity being performed and the location of that activity. Recall periods may be shorter for diaries than for questionnaires, and thus diaries may be more accurate [58]. In a large-scale study, many participants provided 2 days of data, but longer recording was deemed too great a burden [59]. Some evidence for accuracy has been reported by comparing sedentary task time reported in diaries to that measured by sensors [60].
Examples of diaries used to measure screen use in children include the Light Time-Use Diary [59], the Multimedia Activity Recall for Children and Adults (MARCA) [60] and, as part of a combined methods approach, the Comprehensive Assessment of Family Media Exposure (CAFÉ) [61]. The Light Time-Use Diary has been used as parent-reporting on preschool children in the Longitudinal Study of Australian Children to collect screen use and other activities and the location of that activity [59]. MARCA was used to collect self-reports of different types of screen use, such as TV viewing and playing electronic games at a video game centre, by 643 14-year-olds for a minimum of 7 days [60]. The CAFÉ time use diary was completed by parents covering 24 h in 15 min blocks to capture content and context of media use [61]. Typical constructs collected using diaries cover child, technology, task, other people, time and include duration of screen use, devices used and time pattern of use.
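To illustrate the kind of data reduction a time-use diary involves, the following minimal Python sketch aggregates 15 min diary blocks into daily screen minutes per child. The row format, column names and activity codes are illustrative assumptions for this sketch, not the format of any specific instrument such as the Light Time-Use Diary or CAFÉ.

```python
from collections import defaultdict

# Hypothetical long-format diary export: one row per 15 min block.
# Column names and activity codes are illustrative only.
diary_rows = [
    {"child_id": "C01", "day": "2024-05-01", "block_start": "07:00", "activity": "tv"},
    {"child_id": "C01", "day": "2024-05-01", "block_start": "07:15", "activity": "tv"},
    {"child_id": "C01", "day": "2024-05-01", "block_start": "07:30", "activity": "breakfast"},
    {"child_id": "C01", "day": "2024-05-01", "block_start": "16:00", "activity": "tablet"},
]

SCREEN_ACTIVITIES = {"tv", "tablet", "games_console", "smartphone"}
BLOCK_MINUTES = 15  # diary resolution

# Sum screen minutes per child per day.
daily_minutes = defaultdict(int)
for row in diary_rows:
    if row["activity"] in SCREEN_ACTIVITIES:
        daily_minutes[(row["child_id"], row["day"])] += BLOCK_MINUTES

for (child, day), minutes in sorted(daily_minutes.items()):
    print(f"{child} {day}: {minutes} min screen use")
```

The same aggregation logic extends naturally to per-device or per-location totals by keying on those diary fields instead.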
3.1.3. Electronically Prompted Sampling
Electronically prompted sampling methods, sometimes called Ecological Momentary Assessments, are technologically aided diary systems in which participants are prompted either at a random time or at a set time of day by text message or app notification [62,63,64,65]. The prompt typically asks the participant to report what they are doing, or feeling, at that precise time. A systematic review found that electronically prompted sampling can be successfully used with children from approximately 7 years of age; however, adaptations may be necessary for younger children [65].
Examples of electronically prompted sampling include the following: capturing TV viewing and mood when watching TV among a sample of adults and children ≥ 10 years old [62,63,64], assessing current activities (e.g., watching TV/movies, playing video games and physical activities) in a sample of 121 9- to 13-year-old children [66] and examining associations between mood and social media use in 55 adolescents [67]. Typical constructs collected cover child, technology, task, time and include duration of screen use, devices used and time of day of use.
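As a sketch of how random-time prompting can be scheduled, the following Python example draws prompt times within waking hours with a minimum gap between prompts. The waking-hours window, number of prompts and minimum gap are illustrative assumptions rather than parameters from any specific study.

```python
import random
from datetime import datetime, timedelta

def schedule_prompts(day_start, day_end, n_prompts, min_gap_minutes=60, seed=None):
    """Draw n random prompt times within waking hours, at least
    min_gap_minutes apart (simple rejection sampling)."""
    rng = random.Random(seed)
    window = int((day_end - day_start).total_seconds() // 60)
    while True:
        minutes = sorted(rng.sample(range(window), n_prompts))
        if all(b - a >= min_gap_minutes for a, b in zip(minutes, minutes[1:])):
            return [day_start + timedelta(minutes=m) for m in minutes]

# Example: five prompts between 08:00 and 20:00.
prompts = schedule_prompts(datetime(2024, 5, 1, 8, 0),
                           datetime(2024, 5, 1, 20, 0),
                           n_prompts=5, seed=42)
print([p.strftime("%H:%M") for p in prompts])
```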
3.2. Direct Observation
Researcher direct observation of child-technology interaction over a period of time in the participants’ settings is often used as the reference standard for measuring screen use. Observation can be in-situ in real time or via later viewing of video recordings (see next section). In-situ observation involves a researcher being present, so there is a level of intrusiveness that can create participant discomfort despite researchers being respectful of privacy [46]. There may be reduced ecological validity because the presence of the observer may influence the child’s behaviour [58,68,69]. Reactivity may be reduced if the objective of measuring screen use is concealed. For example, Krugman et al. [70] observed screen use by students in their homes under the guise of examining homework, with the true purpose revealed on study completion. Although children from varying socioeconomic backgrounds can be observed, the high researcher burden means large (and therefore generalisable) samples are unlikely [69]. Similarly, although observations could occur in different environments and at different times/seasons, the high researcher burden and intrusiveness can impede acquisition of repeated measures. For example, previous research has suggested that at least 6 to 15 days are required to acquire reliable results of habitual television viewing or physical activity [71,72].
Examples of direct observation studies of children’s screen use include the following: observing minutes of screen time in children with a mean age of 7.8 (SD: 1.8) years during an after school program [73], observing TV viewing by 3- to 4-year-olds at home for 6–12 h/day for 2.5 days [71] and observing activity patterns (including TV viewing) in 4-year-olds at home and school (recess) [74]. Typical constructs collected cover child, technology, task, other people, local context and include duration of screen use, devices used, task of screen use and contextual information such as co-viewing [69].
3.3. Recording Devices
Audio-visual recording is an important group of methods available for researchers and includes fixed room cameras, wearable or portable cameras, and audio devices. Cameras and audio devices have the ability to capture a variety of screen devices, sometimes with minimal burden to participants [58]. Coding recordings may be less burdensome than in-situ direct observation, and thus longer observation periods may be possible. Recordings can also be played back to participants to gain their reflections about screen use. A clear advantage of mobile recording devices is the ability to capture various locations; however, recording is not allowed in some places such as banks, airports and public swimming pools. Analysis of the data, including coding of the images and sound from devices, can be time consuming and therefore a high researcher burden [58,69]. Further, wearing a device or taking recordings with a portable device can be seen as burdensome by participants. The use of recording devices also brings ethical concerns including participant privacy and third party consent [75]. Images and sounds may be recorded which the child/family may not want others to see or hear. Studies have reduced concerns about participant privacy by allowing participants to stop recording at certain times (by taking off the device or turning it off) or deleting some recordings at the end of data collection. Recording of people not involved in a study, and who have therefore not consented, needs careful consideration, including safety concerns if non-participants accost the participant [76]. Automated blurring of non-participant faces has been used to alleviate this concern. Mobile devices can also create other problems including comfort and security of attaching the device to the participant, participant concerns about damaging the device or themselves, movement blurring of images or noise, and limited battery life [76,77].
3.3.1. Fixed Room Cameras
A camera fixed in one location can capture screen use within that local context, for example with multiple cameras used to capture different rooms in a home. Cameras are often set up to record continuously [78,79], although they could be triggered by a person moving into view or when technology is turned on [80]. Information is limited to activities that appear within the field of view of the camera, which is a limitation given the increased portability of devices as they may be moved out of view [69]. Depending on the view of the camera and the image clarity and resolution, it may or may not be possible to capture screen content and facial expression, along with local context information such as co-viewing [81].
Example studies using fixed room cameras include the following: a pioneering study by Allen [82] installed time-lapse cameras in 95 families’ homes that recorded the TV screen at 4 frames/min and found differences between the recorded results and the self-reported results in diaries; a home-based time-lapse video camera has also been used to record TV viewing for a 10-day period in 5-year-olds [80] and 9-year-olds [81]. More recently, algorithms based on video were developed for facial recognition and gaze angle determination in a proof of concept for measuring children watching television [83]. Typical constructs collected cover child, technology, task, other people, local context, time and include duration of screen use within a set location, TV device use and co-viewing.
3.3.2. Wearable or Portable Cameras
Wearable cameras can be mounted on a chest harness, head band or suspended on a lanyard around the participant’s neck, with different movement and comfort issues related to each method of attachment. In comparison to fixed room cameras, wearable cameras usually capture what the child is looking at, but like fixed room cameras can record time-lapsed still images or videos. Portable cameras are handheld recording devices where typically a parent or caregiver records a view of the child and their technology interaction or other activity [84]. This creates a greater burden to parents or caregivers but also enables their control over what to capture, alleviating some privacy and third-party concerns. Wearable and portable cameras are also at risk of hardware damage when used in daily life situations. Wearable and portable cameras can provide location context information [58] across multiple contexts in a child’s life.
Examples of wearable camera studies include the following: SenseCam wearable cameras worn during waking hours for 3–5 days taking 3–10 images per minute to capture screen use in a sample of adolescents and adults [85] and Autographer cameras worn around the neck on a lanyard recording every 15 s for 2 days to capture screen-based activities (as well as dietary behaviours and physical activity behaviours) in a sample of 14 children (9–11 years) [76]. Thomas et al. used wearable cameras in a study of 10 adolescents (mean age 15.4 years), with participants wearing the device on 3 school evenings and 1 weekend day and images taken every 10 s to capture screen use type and context [86]. Typical constructs collected cover child, technology, task, other people, time and include duration of screen use, devices used (but has mainly been used for TV viewing duration) and some contextual information about screen use.
3.3.3. Audio Recorders
Wearable or fixed room digital audio recording devices have been used to capture sound from TVs and radios, as well as study participant talk. A fixed room audio device may struggle to capture the required data (depending on the environment and distance from the participant) as clearly as a wearable device, although wearable devices may record movement artefact noise. Recordings can be analysed by researchers through direct listening and coding, through transcription for text analysis or more sophisticated automated analysis including speech recognition software. Software has been used to identify sound from screen technology [87], with recent advances in artificial intelligence and transcription achieving higher accuracy to discriminate between participant and screen speech [88] compared to earlier methods [89]. However, software is generally unable to distinguish between types of technological input (i.e., TV versus radio) [90], or to discriminate when the TV noise is foreground or background [89]. Audio devices obviously provide no information on screen interactions which are not audible.
Examples of audio recordings that have been used previously to capture children’s screen use include the following: using the Language ENvironment Analysis (LENA) system to capture conversations and electronic media exposure in children 12–36 months of age [90], capturing exposure to electronic noise for 1 day every 6 months in a sample of children 6 to 24 months of age [91] and measuring audible television in a sample of 2- to 48-month-old children [89]. Typical constructs collected cover child, technology, task, other people and include duration of screen use and conversations about screen exposure.
3.4. Screen-Device Onboard Logging
Screen devices themselves can provide methods for measuring their use, including onboard automatic logging of internet traffic or app use, onboard manual logging, data traffic logging (either directly or through battery use as a surrogate measure) and screen recording (both video and still images). Measurement apps based on automatic logging (from either the smartphone/tablet manufacturer or an independent software company) can measure device use: duration, frequency, time, general app type and app status (foreground, background, etc.) [61], including short bursts of mobile phone use [61], and which web pages are being visited and for how long [92]. Device operating software can automatically log battery use as a surrogate measure of device use. Manual screenshots taken by the participant/parent can provide information on battery use, app use and on-screen actions and be submitted to the research team via online survey. Screen video recordings can also provide information about on-screen actions. Data per device may be difficult to link to a particular participant if the device has more than one user [19], as current apps typically cannot identify the user. For example, a study of mobile touch screen device use by young children (0–3 years) reported 61–70% of devices had been shared [47]. As many young children do not have their own device and tend to share devices, this method may be unsuitable for younger children [47]. There are currently two main mobile touch screen device operating systems (iOS and Android) which have different capabilities to log data, meaning researchers may not be able to acquire the same data from a broad spectrum of participants. Onboard device logging also does not capture all types of screen use (e.g., television, game consoles) [47]; it is therefore unable to capture the full scope of screen exposure [19].
Examples of device-based logging and recording studies include the following: onboard device logging to capture children’s screen use in a study where mobile device sampling on 3- to 5-year-olds captured phone use with the Chronicle app for Android phones and battery screenshots for iPhones [47]. In other studies, smartphone (Android) use among 18- to 33-year-olds was captured using a researcher-developed app ‘Fun in a Box’ [93], a smartphone use tracking app (Effortless Assessment of Risk States (EARS)) was used over 4 weeks on a sample of 67 participants 11–12 years old [94], and battery use of smartphones was captured in a sample of adolescents 12–15 years old [95]. A further example used the XMobiSense app to capture the number and duration of voice calls, text messages and amount of data transfer in mobile phone use by 466 participants 10–24 years old (mean age 18.6 years) [96]. In an example of rich detailed data collection, Ram et al. [36] captured smartphone screenshots from 4 adolescents every 5 s while the smartphone was activated over several months (some 500,000 images). They then used a combination of human coding and machine learning to examine the applications viewed, consumption versus production interaction, food-related content and emotion/sentiment. Typical constructs collected cover child, technology, task, time and include which apps and webpages were used and for how long on certain devices, as well as the number and duration of voice calls and messages.
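To illustrate how raw onboard logs translate into the duration constructs described above, the following Python sketch pairs foreground/background transition events into per-app foreground durations, which also surfaces the short bursts of use that retrospective reports tend to miss. The event labels and export format are assumptions for illustration, not the output of any specific logging app named here.

```python
from datetime import datetime

# Hypothetical export from a usage-logging app: one row per
# foreground transition event. Field names are illustrative only.
events = [
    ("video_app", "foreground", "2024-05-01T16:00:05"),
    ("video_app", "background", "2024-05-01T16:12:40"),
    ("drawing_app", "foreground", "2024-05-01T16:13:02"),
    ("drawing_app", "background", "2024-05-01T16:14:10"),
    ("video_app", "foreground", "2024-05-01T19:30:00"),
    ("video_app", "background", "2024-05-01T19:31:15"),  # a short burst
]

totals = {}       # app -> total foreground seconds
open_since = {}   # app -> start of current foreground session

for app, event, stamp in events:
    t = datetime.fromisoformat(stamp)
    if event == "foreground":
        open_since[app] = t
    elif event == "background" and app in open_since:
        totals[app] = totals.get(app, 0) + (t - open_since.pop(app)).total_seconds()

for app, secs in totals.items():
    print(f"{app}: {secs / 60:.1f} min foreground")
```

Note that, as discussed above, such per-device totals still cannot distinguish which household member generated the use on a shared device.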
3.5. Remote Digital Trace Logging
As with screen device onboard logging, measuring screen use can also be done at the home router, internet service provider or digital platform (e.g., social media) levels by collecting digital trace data, that is, data that are generated as people interact with any kind of digital platform via a networked device. Interaction here includes all the ways that people may actively engage with a digital platform, whether this is through, for example, typing or drawing to create text or a picture, talking to a voice assistant, recording and uploading audio, photos, video and/or enabling geo-location. As such, this includes data related to the interaction actions (e.g., clicks) and the content of that interaction (such as text or picture) [97]. It is often referred to as ‘Big Data’ or digital footprint data, and is typically not only huge in volume but also high in velocity, being created in or near real-time, and exhaustive in scope [98]. Depending on the approach used, data trace logging can capture use across all internet-connected devices in the household. Data can be collected on an individual, a small cohort or on a huge population. The duration and frequency of the type of activity such as phone calls, messages and websites visited can be collected, along with the pattern of internet interactions, such as who is contacted, which types of websites are visited, which social media groups and individuals are visited and what is ‘liked’ on social media and what comments are made. However, there are challenges in capturing passively viewed content (i.e., where no action is taken on the page such as when someone is reading text or simply doing something else) [12]. Further, some internet interactions are end-to-end encrypted requiring a key to decode the content of the interactions; thus, it is not always possible to capture all of the relevant information, though even the meta-data about the amount of internet traffic and time of interactions may be useful. As with onboard device methods, the internet use may be difficult to link with an individual user, particularly when working at scale. Privacy issues are central, as with other measurement methods, and the use of this type of data without specific participant consent is of current community concern. Indeed, there are notable differences between commercial and academic practices in the collection and use of such data, with varied perspectives on what constitutes ethical practice.
Examples of internet digital trace logging include the following: a study on adults using server log data of outgoing voice calls and SMS that found participants generally overreported when self-reporting daily usage compared to log data [99], and a study identifying aspects of an educational game that best related to enhanced learning outcomes [100]. Typical constructs collected cover child, technology, task, time and include the number and duration of voice calls, the frequency of text messages and specific aspects of interaction with an app.
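As an illustration of working with household-level traces, the following Python sketch aggregates a router-level export into per-device traffic volume and active minutes. Real traces (e.g., NetFlow records or platform logs) are far richer; the CSV format here is an assumption for illustration only, and, as noted above, per-device totals cannot identify which individual generated the traffic.

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical router-level trace export: device, minute timestamp, bytes.
raw = """device,minute,bytes
tablet-A,2024-05-01T16:00,1200000
tablet-A,2024-05-01T16:01,900000
smart-tv,2024-05-01T19:00,5500000
"""

active_minutes = defaultdict(set)  # device -> set of minutes with any traffic
traffic = defaultdict(int)         # device -> total bytes

for row in csv.DictReader(StringIO(raw)):
    traffic[row["device"]] += int(row["bytes"])
    active_minutes[row["device"]].add(row["minute"])

for device in traffic:
    print(f"{device}: {len(active_minutes[device])} active min, "
          f"{traffic[device] / 1e6:.1f} MB")
```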
3.6. Proximity Logging
Radio-frequency identification (RFID) can be used to detect when a participant (wearing a chip) is near a screen device (also with a chip attached). The chips can be small (fingernail size) and thin (paper thin), so they can be attached as a sticker. Chips are also cheap and regularly used in community running and cycling events to record participants’ start and finish times. Information is only available when the participant is in close proximity to the screen, or to other participants if each family member wears a chip. The method is therefore unable to measure whether the screen is on, whether the user is interacting with the screen, or the content of screen use. An example of proximity logging has been to capture TV viewing during 2 consecutive days in a sample of 7 children with mean age 10.7 years (SD: 2.1) [101]. Typical constructs collected cover child, technology and other people and include specific device (such as television) ‘use’ duration and co-viewing.
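As a sketch of the data reduction involved, the following Python example stitches chip-read timestamps into proximity episodes, treating reads separated by more than a threshold gap as separate episodes. The polling interval and gap threshold are illustrative assumptions, not parameters from the cited study.

```python
from datetime import datetime, timedelta

# Hypothetical RFID reader log: timestamps at which the child's chip
# was detected near the TV reader (polled roughly every 10 s).
reads = [datetime(2024, 5, 1, 17, 0, s) for s in (0, 10, 20, 30)] + \
        [datetime(2024, 5, 1, 19, 30, s) for s in (0, 10)]

MAX_GAP = timedelta(seconds=30)  # reads further apart start a new episode

episodes = []
for t in reads:
    if episodes and t - episodes[-1][1] <= MAX_GAP:
        episodes[-1][1] = t          # extend the current episode
    else:
        episodes.append([t, t])      # start a new episode

for start, end in episodes:
    print(f"near TV {start:%H:%M:%S} - {end:%H:%M:%S} "
          f"({(end - start).total_seconds():.0f} s)")
```

The resulting episode durations estimate proximity to the screen only; as noted above, they say nothing about whether the screen was on or attended to.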
3.7. Other Systems
There are a number of other systems that have been used in the past to monitor and/or restrict screen use. These systems involved special hardware and/or software and provide ideas for what future measurement methods could offer.
The Nielsen People Meter monitored what TV program was being viewed and who was watching. As a participant started watching TV, a light flashed on the meter controller reminding them to press their assigned button (to log in). When the participant had finished watching, they pressed the button again to log out [102]. Because this relied on participants logging in and out, it was not always done correctly [103]. Participant fatigue has also been observed, with recorded viewing time reducing over the days of a study [104]. However, short term monitoring has had high levels of compliance [103]. The Nielsen People Meter has mainly been used for TV broadcasting analysis, for children as young as 2 years of age, to capture content and co-viewing [102]. Data from its regularly collected surveys with representative sampling have been commercially available, but at considerable expense. The system was developed when the screen environment was much simpler and focused on broadcast/cable TV, and so it does not represent contemporary screen use by children and adolescents.
The Arbitron Portable People Meter captured similar information but was based on a small device worn by the participant which detected an inaudible code embedded in the audio stream of audio and video programming. This system has been used to measure advertising exposure, including for participants from 12 years of age [105]. As with the Nielsen system, it did not capture the breadth of screen use by contemporary children.
TV Allowance was a semi-automated device that monitored TV and computer monitor use via the power cord. To turn on the monitored device, the participant entered an individual four-digit code. The cumulative use of each device was then calculated, and power was withheld from the device if the individual had already consumed their time allowance. TV Allowance has been used to capture screen use by children in home-based studies, including TV viewing in samples of 4- to 7-year-old children [106] and 3- to 5-year-old children [107]. Typical constructs collected include duration of TV and computer use, and it can capture co-viewing if each family member enters their code when they start and end watching/use.
Other methods that may be useful in future studies in children include software systems for tracking computer keystroke and mouse activity across a whole office workplace, developed in the wake of a rise in upper limb musculoskeletal disorders in the 1980s [108], and eye tracking to identify which components of a computer screen attract user attention [109].
Table 1. Method options to investigate digital screen technology use by children and adolescents.

| Type of Measure | Method | Advantages | Disadvantages |
|---|---|---|---|
| Questionnaire | Retrospective recall of screen use through paper or electronic format. | Easy to administer; low cost; wide reach enabling large, diverse and potentially representative samples. | Recall inaccuracy and social desirability bias; psychometric evidence lacking for many instruments. |
| Diary | Recall of screen use across the day through paper or electronic format. | Shorter recall period than questionnaires may improve accuracy; captures time pattern of use. | Participant burden limits recording duration (often to around 2 days). |
| Electronically prompted sampling | Instant recall of screen use or associated factors in response to text or app messages to participant. | Minimal recall period; captures time of day and momentary context such as mood. | Relies on timely responses to prompts; adaptations needed for children younger than approximately 7 years. |
| Direct observation | Contemporaneous observation and recording of screen use by a trained observer in the natural environment through paper or electronic format. | Often used as the reference standard; captures devices, tasks and context such as co-viewing. | Intrusive; observer presence may alter behaviour; high researcher burden limits sample size and repeated measures. |
| Fixed room cameras | Contemporaneous fixed camera recording still images or video capturing screen use within one setting per camera. | Low participant burden; continuous capture within a setting; can capture co-viewing and potentially content. | Limited to camera field of view; portable devices may move out of view; privacy concerns; high coding burden. |
| Wearable or portable camera | Contemporaneous wearable camera (attached to participant, usually on chest or head or on neck lanyard) recording still images or video in the field of view of the participant. | Captures what the child is looking at across multiple contexts. | Privacy and third-party consent concerns; wearing burden; risk of device damage; limited battery life; high coding burden. |
| Audio recording | Contemporaneous fixed room or wearable device capturing sound (screen technology as well as voices of participants and other people nearby). | Captures audible screen exposure and conversations about screen use. | Cannot reliably distinguish device types or foreground versus background sound; no information on silent screen use. |
| Screen-device onboard logging | Contemporaneous manual or automated onboard capture of smartphone or tablet use with app or screen recording. | Automated, detailed capture of duration, timing and apps used, including short bursts of use. | Cannot identify the user on shared devices; operating systems differ in logging capability; misses non-loggable screens such as TVs and game consoles. |
| Remote digital trace logging | Contemporaneous automatic capture of network traffic at router, internet service provider or platform. | Can capture use across all internet-connected household devices; scales to large populations. | Passively viewed and encrypted content hard to capture; difficult to link to individual users; privacy and consent concerns. |
| Proximity logging | Contemporaneous detection when a participant is near to a screen (when both have chips attached) using radio frequency identification. | Chips are cheap, small and unobtrusive; captures proximity and co-viewing. | Cannot determine whether the screen is on, whether the user is interacting with it, or the content of use. |
4. Potential Future Methods
As screen technology develops so do the potential technical advances in measurement. These advances may be able to reduce research and participant burden, and more accurately capture child and adolescent screen use.
In future work, there is significant potential for exploring the possibilities of digital trace data [12]. However, there are several challenges that need to be addressed. Current work tends to focus on the activities on one device (e.g., tracking mobile phone use via screen capture and tracking software) or on one digital platform (e.g., via data collection via an application programming interface (API), data scraping or direct collaboration with owners of digital platforms to use the digital trace data they are already collecting). It is rarely representative (cf. [110,111]). It can sometimes be a relatively high burden for participants (e.g., data ‘donation’, where participants request their information from companies and then share this information directly with researchers). Other problems also need to be considered, such as issues related to validity, reliability and bias; data privacy, informed consent and the legalities of working with such data for those under 18; and questions about the environmental cost of using data-intensive technologies (e.g., [112,113]).
Nevertheless, given the importance of understanding screen use in greater depth, such an endeavour is worth further exploration and debate. Technically, some of the issues could be addressed in future work through, for example, accessing people’s accounts on social media and other digital platforms with their permission via APIs that allow the participant to give ‘read only’ access to their account to researchers; this has the advantage that behaviours are captured across multiple devices. Linking such forms of digital trace data collection to other methods, such as representative surveys (to capture important socio-demographic or attitudinal variables) or government data (e.g., on educational achievement or health), could also be valuable to better infer the social, educational and health implications of screen use. Closer collaborations with technology companies, to enable the design and collection of digital trace data that is academically important (including, for example, pop-up surveys in games to better understand motivations for game play), could also be fruitful within a wider data collection strategy that triangulates data from varied sources.
As data collection opportunities become ever larger and more complex in scope, there are also questions of how best to analyse the data collected [12,36]. Machine learning software could be used to combat the high researcher burden for the analysis of screen use via images, movies or audio. For example, the YOLO (You Only Look Once) family of computer vision models has been rapidly evolving since version 1 was introduced in 2016 [114]. Version 8, released in 2023, is able to detect, segment, track and classify objects faster than real time in an image or movie. Furthermore, YOLO is able to estimate pose, potentially allowing changes in posture whilst using digital devices to be monitored. Similar advances are being made in audio classification and transcription.
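As a minimal sketch of this approach, the following Python example uses the open-source Ultralytics YOLOv8 package and its pretrained COCO weights, whose classes include the screen-like objects ‘tv’, ‘laptop’ and ‘cell phone’, to flag frames containing a visible screen. The frame file names and the simple frame-counting logic are illustrative assumptions rather than a validated coding pipeline.

```python
# Requires: pip install ultralytics (YOLOv8); weights download on first use.
from ultralytics import YOLO

SCREEN_CLASSES = {"tv", "laptop", "cell phone"}  # screen-like COCO classes

model = YOLO("yolov8n.pt")  # small pretrained detection model
frames = ["frame_0001.jpg", "frame_0002.jpg"]  # e.g., wearable camera images

screen_frames = 0
for result in model(frames):
    labels = {model.names[int(c)] for c in result.boxes.cls}
    if labels & SCREEN_CLASSES:
        screen_frames += 1

# With a known sampling interval (e.g., one frame per 15 s), the count of
# screen-containing frames gives a rough estimate of exposure duration.
print(f"{screen_frames}/{len(frames)} frames contain a visible screen")
```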
There may also be technical approaches to addressing some of the concerns around privacy and data protection. For example, although there is great potential to use cameras to record children’s use of digital devices, there are a number of significant privacy issues associated with doing so. Light Detection and Ranging (LiDAR) technology provides a point-cloud-based 3D map of the area being monitored. Thus, LiDAR could be used [115] to determine screen use or activity density within a set location without capturing identifiable information, thereby reducing the ethical concerns around privacy, particularly within public locations.
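As a minimal sketch of the idea, assuming a LiDAR point cloud already exported as x, y, z coordinates, the following Python example counts points falling within a hand-defined ‘viewing zone’ in front of a screen; the synthetic data and zone boundaries are illustrative only.

```python
import numpy as np

# Synthetic stand-in for a LiDAR scan of a room: x, y, z in metres.
rng = np.random.default_rng(0)
points = rng.uniform(low=[0, 0, 0], high=[5, 4, 2.5], size=(10_000, 3))

# Axis-aligned "viewing zone" in front of a TV: 2 m x 1.5 m of floor
# space, up to 1.5 m high (illustrative boundaries).
zone_min = np.array([1.0, 0.5, 0.0])
zone_max = np.array([3.0, 2.0, 1.5])

in_zone = np.all((points >= zone_min) & (points <= zone_max), axis=1)
density = in_zone.sum() / len(points)

# A persistently elevated point density in the zone, relative to an
# empty-room baseline, suggests someone is positioned in front of the
# screen, without capturing any identifiable imagery.
print(f"{in_zone.sum()} points in zone ({density:.1%} of scan)")
```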
Despite the potential, the opportunities for researchers to collect rich digital trace data are steadily being eroded, as technology companies protect their commercial interests. Politically, academic researchers need to become far more powerful actors in shaping the direction of how digital trace data are collected, analysed and used. Further, given the nature of digital trace data, decisions about when and how to collect such data need to be made in collaboration with the public, to ensure the needs, interests and perspectives of all stakeholders are designed into any approach. We would therefore suggest co-design approaches (e.g., [116]), working with families, legal experts, childhood educators and others when developing future approaches in this domain.
Future research on developing better methods for assessing screen use by children and adolescents could therefore include developing validated low participant and researcher burden measures that address the relevant aspects of child-technology interaction.
5. Researcher Checklist for Measurement Method Selection
When designing a research project to investigate associations between screen use and child development, there are many considerations that researchers should contemplate. The child-technology interaction model presented in Figure 1 can serve to remind researchers of the different aspects which may be important to measure. Indeed, the complexity illustrated in Figure 1 suggests there may not be a single method that can capture all aspects of child-technology interaction critical to a particular research question. For example, a number of researchers [40,61,117] have suggested the combined use of methods such as an online survey, time use diary, electronically prompted sampling and onboard logging to answer research questions around attitudes, practices, content and context of use and exposure to short bursts of screen use. Further, Kaye et al. [27] encouraged researchers to consider more than just screen time and to consider user-focused methods. Building on the child-technology interaction model, Figure 2 provides a checklist of considerations to help determine the most suitable method options for investigating children and adolescents’ screen use in a particular study.
Firstly, researchers should consider their specific study aim, the potential study design and what resources are available to them.
Next, in considering the complexity of screen use within the child-technology model (Figure 1), researchers should consider the target participants, types of technology, tasks of technology use and interaction aspects of interest, as well as the local context setting, the broader environment of interest and the time of year/week/day that the technology is likely to be used.
Researchers should also consider what evidence is available for the reliability and validity of the measurement method. Consideration should also be given to the intrusiveness of the method and its impact on ecological validity. Similarly, researchers should consider the ease of use of the method for both participants and researchers and try to minimise the burden, balanced against the benefit of the information obtained.
In weighing ethical considerations, some methods may be more or less appropriate for certain kinds of children and families. Importantly, consideration should be given to gaining children’s assent (as well as parent/caregivers’ informed consent), participant and researcher safety (particularly in different settings) and data privacy, which is particularly important for research involving children [18].
Finally, researchers should consider if there have been any new advances in measurement method options.
Scenario Examples Using the Considerations Checklist
To illustrate how the information in this paper can be used by researchers designing studies on the association between screen use and child development, the following two research scenarios apply the considerations checklist to determine which measurement method option(s) to use to measure screen use in children and adolescents.
Scenario A: A team of researchers are designing a study with the aim of investigating what types of screen devices young children are using during a typical day and for how long. An observational study design was determined to be the best fit. The team has a budget to purchase equipment as needed and has contacts within a local playgroup centre for participant recruitment. The target participants are children 4 years of age. A range of typical technologies could be used by 4-year-olds, including TV and tablet devices. The main interaction aspects of interest are duration of use and type of screen devices being used. The researchers would like to capture data in multiple locations, which could include the home and outdoors. Based on this scenario, measurement options could include a parent-reported questionnaire, direct observation or wearable cameras. A parent-reported questionnaire could allow for collection of data within a fairly short period of time from a diverse and substantial number of people; however, it could be subject to recall inaccuracy and reporting bias leading to overall imprecision. Additionally, it would miss the time the child is in child care. Direct observation could be used for children of this age and could capture any device type; however, it may reduce ecological validity because the presence of the observer may influence the child’s behaviour, and following multiple children for full days would have a high researcher burden. A wearable camera could be used for a child of this age and could capture a variety of screen devices; however, there are privacy concerns that should be considered, including capturing inappropriate images and capturing third parties, as well as a high researcher burden from coding the images. A choice could be made to use wearable cameras, using a camera with a privacy button, giving parents/caregivers the opportunity to review and delete images, and applying facial blurring software to the images. Advances in machine learning could be used to combat the high researcher burden of coding images captured by wearable cameras.
Scenario B: A team of researchers are designing a study with the aim of investigating the social connectedness of adolescents during social media use. An observational study design suited the aim. The target participants are adolescents 13–18 years of age. The technology type is the participants’ own smartphones and tablet devices and the social media apps and content they interact with. The tasks focused on include leisure and daily living tasks, and the interaction aspects of interest include comments and likes. Adolescents use social media in many different locations, both alone and with peers. The broader environment considerations include the cultural group, and the time considerations include the time of day and proximity to school exams. Based on this scenario, measurement options could include a diary completed by the adolescent or onboard device logging. A diary could be used to ask adolescents about their social media use, including platforms used and estimated time of usage, but may be subject to social desirability bias, and it would be difficult to capture short bursts of exposure. Onboard device logging can measure duration of time on social media by measuring app usage and can capture short bursts of mobile phone/tablet use; however, the adolescent must own devices with operating software compatible with the measurement app, which may constrain who can participate in the study. Remote digital trace logging at the internet service provider or social media platform level could also be used. Digital trace logging allied with machine learning could enable in-depth examination of emotional aspects of interaction. Given the target population, onboard or remote logging, together with machine learning to deal with the large datasets logging can create, could be chosen as the best measurement option for the study aim.
6. Conclusions
This paper provides a unique contribution to the field by offering practical support to researchers designing studies to investigate the association between screen use and child health, well-being and development. It provides a conceptual framework for thinking about potentially relevant elements using the child-technology interaction model and outlines some of the challenges faced in capturing contemporary screen use by children and adolescents. It then describes the range of available options for measuring screen use by children, providing examples of use, constructs measured and relevant advantages and disadvantages of each method drawn from the literature base and the authors’ own experiences. The paper also provides a checklist and worked example scenarios to support researchers attempting to select the most appropriate method option(s).
Children’s engagement with digital screen technology is complex, as are the aspects of child health, wellbeing and development influenced by interacting with technology. Thus, selecting appropriate measurement method(s) is difficult, but essential to developing better evidence to support guidance on helping children to thrive in a digital world.
Conceptualisation, L.S., A.B. and J.Z.; writing—original draft preparation, A.B. and L.S.; writing—review and editing, all authors. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflicts of interest.
References
1. Cerniglia, L.; Cimino, S. A reflection on controversial literature on screen time and educational apps use in 0–5 years old children. Int. J. Environ. Res. Public Health; 2020; 17, 4641. [DOI: https://dx.doi.org/10.3390/ijerph17134641] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32605168]
2. Campana, K.; Mills, J.E.; Haines, C.; Prendergast, T.; Martens, M. To tech or not to tech? The debate about technology, young children, and the library. Child. Libr.; 2019; 17, pp. 20-26. [DOI: https://dx.doi.org/10.5860/cal.17.2.20]
3. Browne, D.; Thompson, D.A.; Madigan, S. Digital media use in children: Clinical vs scientific responsibilities. JAMA Pediatr.; 2020; 174, pp. 111-112. [DOI: https://dx.doi.org/10.1001/jamapediatrics.2019.4559] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31790548]
4. Straker, L.; Zabatiero, J.; Danby, S.; Thorpe, K.; Edwards, S. Conflicting guidelines on young children’s screen time and use of digital technology create policy and practice dilemmas. J. Pediatr.; 2018; 202, pp. 300-303. [DOI: https://dx.doi.org/10.1016/j.jpeds.2018.07.019] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30139534]
5. Homer, B.D.; Kinzer, C.K.; Plass, J.L.; Letourneau, S.M.; Hoffman, D.; Bromley, M.; Hayward, E.O.; Turkay, S.; Kornak, Y. Moved to learn: The effects of interactivity in a Kinect-based literacy game for beginning readers. Comput. Educ.; 2014; 74, pp. 37-49. [DOI: https://dx.doi.org/10.1016/j.compedu.2014.01.007]
6. Fitzpatrick, C.; Binet, M.; Cristini, E.; Almeida, M.L.; Begin, M.; Frizza, G. Reducing harm and promoting positive media use strategies: New perspectives in understanding the impact of preschooler media use on health and development. Psicol. Reflex. Crit.; 2023; 36, 19. [DOI: https://dx.doi.org/10.1186/s41155-023-00262-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37553485]
7. Sanders, T.; Noetel, M.; Parker, P.; Del Pozo Cruz, B.; Biddle, S.; Ronto, R.; Hulteen, R.; Parker, R.; Thomas, G.; De Cocker, K. An umbrella review of the benefits and risks associated with youths’ interactions with electronic screens. Nat. Hum. Behav.; 2024; 8, pp. 82-99. [DOI: https://dx.doi.org/10.1038/s41562-023-01712-8]
8. Rosen, L.D.; Lim, A.F.; Felt, J.; Carrier, L.M.; Cheever, N.A.; Lara-Ruiz, J.M.; Mendoza, J.S.; Rokkum, J. Media and technology use predicts ill-being among children, preteens and teenagers independent of the negative health impacts of exercise and eating habits. Comput. Hum. Behav.; 2014; 35, pp. 364-375. [DOI: https://dx.doi.org/10.1016/j.chb.2014.01.036] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25717216]
9. Page, A.S.; Cooper, A.R.; Griew, P.; Jago, R. Children’s screen viewing is related to psychological difficulties irrespective of physical activity. Pediatrics; 2010; 126, pp. e1011-e1017. [DOI: https://dx.doi.org/10.1542/peds.2010-1154]
10. del Pozo-Cruz, B.; Perales, F.; Parker, P.; Lonsdale, C.; Noetel, M.; Hesketh, K.D.; Sanders, T. Joint physical-activity/screen-time trajectories during early childhood: Socio-demographic predictors and consequences on health-related quality-of-life and socio-emotional outcomes. Int. J. Behav. Nutr. Phys. Act.; 2019; 16, 55. [DOI: https://dx.doi.org/10.1186/s12966-019-0816-3]
11. Kwon, S.; Armstrong, B.; Wetoska, N.; Capan, S. Screen time, sociodemographic factors, and psychological wellbeing among young children. JAMA Netw. Open; 2024; 7, e2354488. [DOI: https://dx.doi.org/10.1001/jamanetworkopen.2023.54488] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38441898]
12. Sultan, M.; Scholz, C.; van den Bos, W. Leaving traces behind: Using social media digital trace data to study adolescent wellbeing. Comput. Hum. Behav. Rep.; 2023; 10, 100281. [DOI: https://dx.doi.org/10.1016/j.chbr.2023.100281]
13. Orben, A.; Przybylski, A.K. The association between adolescent well-being and digital technology use. Nat. Hum. Behav.; 2019; 3, pp. 173-182. [DOI: https://dx.doi.org/10.1038/s41562-018-0506-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30944443]
14. Hood, R.; Zabatiero, J.; Silva, D.; Zubrick, S.; Straker, L. “Coronavirus change the rules on everything”: Parent perspectives on how the COVID-19 pandemic influenced family routines, relationships and technology use in families with infants. Int. J. Environ. Res. Public Health; 2021; 18, 12865. [DOI: https://dx.doi.org/10.3390/ijerph182312865] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34886591]
15. Straker, L.; Abbott, R.; Collins, R.; Campbell, A. Evidence-based guidelines for wise use of electronic games by children. Ergonomics; 2014; 57, pp. 471-489. [DOI: https://dx.doi.org/10.1080/00140139.2014.895856] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/24665962]
16. Bronfenbrenner, U.; Morris, P. The bioecological model of human development. Handbook of Child Psychology, Volume 1: Theoretical Models of Human Development; Lerner, R.; Damon, W., Eds.; Wiley: Hoboken, NJ, USA, 2006; pp. 793-828.
17. Livingstone, S.; Mascheroni, G.; Staksrud, E. Developing a Framework for Researching Children’s Online Risks and Opportunities in Europe; The London School of Economics and Political Science: London, UK, 2015.
18. ARC Centre of Excellence for the Digital Child. Digital child ethics toolkit: Ethical considerations for Digital Childhoods Research. Digital Child Working Paper 2024-01; Australian Research Council Centre of Excellence for the Digital Child: Brisbane, Australia, 2024; [DOI: https://dx.doi.org/10.26187/k90v-9a20]
19. Milkovich, L.M.; Madigan, S. Using mobile device sampling to objectively measure screen use in clinical care. Pediatrics; 2020; 146, e20201242. [DOI: https://dx.doi.org/10.1542/peds.2020-1242] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32482772]
20. Neumann, M.M. Young children and screen time: Creating a mindful approach to digital technology. Aust. Educ. Comput.; 2015; 30, Available online: https://journal.acce.edu.au/index.php/AEC/article/view/67/pdf (accessed on 17 June 2024).
21. Bjelajac, Ž.Đ.; Merdović, B. Influence of video games on pro-social and anti-social behavior. Kult. Polisa; 2019; 16, pp. 53-65.
22. Montazami, A.; Pearson, H.A.; Dubé, A.K.; Kacmaz, G.; Wen, R.; Alam, S.S. Why this app? How parents choose good educational apps from app stores. Br. J. Educ. Technol.; 2022; 53, pp. 1766-1792. [DOI: https://dx.doi.org/10.1111/bjet.13213]
23. Okely, A.D.; Ghersi, D.; Loughran, S.P.; Cliff, D.P.; Shilton, T.; Jones, R.A.; Stanley, R.M.; Sherring, J.; Toms, N.; Eckermann, S. A collaborative approach to adopting/adapting guidelines. The Australian 24-hour movement guidelines for children (5–12 years) and young people (13–17 years): An integration of physical activity, sedentary behaviour, and sleep. Int. J. Behav. Nutr. Phys. Act.; 2022; 19, 2. [DOI: https://dx.doi.org/10.1186/s12966-021-01236-2]
24. American Academy of Pediatrics. American Academy of Pediatrics: Children, adolescents, and television. Pediatrics; 2001; 107, pp. 423-426. [DOI: https://dx.doi.org/10.1542/peds.107.2.423] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/11158483]
25. Canadian Paediatric Society Digital Health Task Force. Screen time and young children: Promoting health and development in a digital world. Paediatr. Child Health; 2017; 22, pp. 461-477. [DOI: https://dx.doi.org/10.1093/pch/pxx123] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29601064]
26. World Health Organization. Guidelines on Physical Activity, Sedentary Behaviour and Sleep for Children under 5 Years of Age; World Health Organization: Geneva, Switzerland, 2019.
27. Kaye, L.; Orben, A.; Ellis, D.A.; Hunter, S.C.; Houghton, S. The conceptual and methodological mayhem of “screen time”. Int. J. Environ. Res. Public Health; 2020; 17, 3661. [DOI: https://dx.doi.org/10.3390/ijerph17103661] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32456054]
28. Zhu, Y.; Heynderickx, I.; Redi, J.A. Alone or together: Measuring users’ viewing experience in different social contexts. Proceedings of the Human Vision and Electronic Imaging XIX; San Francisco, CA, USA, 2–6 February 2014; pp. 218-228.
29. Griffith, S.; Hart, K.; Mavrakis, A.; Bagner, D. Making the best of app use: The impact of parent-child co-use of interactive media on children’s learning in the U.S. J. Child. Media; 2022; 16, pp. 271-287. [DOI: https://dx.doi.org/10.1080/17482798.2021.1970599]
30. Foulds, K. Co-viewing mass media to support children and parents’ emotional ABCs: An evaluation of Ahlan SimSim. Early Child. Educ. J.; 2023; 51, pp. 1479-1488. [DOI: https://dx.doi.org/10.1007/s10643-022-01408-0] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/36268053]
31. Swider-Cios, E.; Vermeij, A.; Sitskoorn, M. Young children and screen-based media: The impact on cognitive and socioemotional development and the importance of parental mediation. Cogn. Dev.; 2023; 66, 101319. [DOI: https://dx.doi.org/10.1016/j.cogdev.2023.101319]
32. Browne, D.T.; May, S.S.; Colucci, L.; Hurst-Della Pietra, P.; Christakis, D.; Asamoah, T.; Hale, L.; Delrahim-Howlett, K.; Emond, J.A.; Fiks, A.G. From screen time to the digital level of analysis: A scoping review of measures for digital media use in children and adolescents. BMJ Open; 2021; 11, e046367. [DOI: https://dx.doi.org/10.1136/bmjopen-2020-046367] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34011597]
33. Byrne, R.; Terranova, C.O.; Trost, S.G. Measurement of screen time among young children aged 0–6 years: A systematic review. Obes. Rev.; 2021; 22, e13260. [DOI: https://dx.doi.org/10.1111/obr.13260]
34. Atkin, A.J.; Gorely, T.; Clemes, S.A.; Yates, T.; Edwardson, C.; Brage, S.; Salmon, J.; Marshall, S.J.; Biddle, S.J. Methods of measurement in epidemiology: Sedentary behaviour. Int. J. Epidemiol.; 2012; 41, pp. 1460-1471. [DOI: https://dx.doi.org/10.1093/ije/dys118]
35. de Reuver, M.; Bouwman, H. Dealing with self-report bias in mobile Internet acceptance and usage studies. Inf. Manag.; 2015; 52, pp. 287-294. [DOI: https://dx.doi.org/10.1016/j.im.2014.12.002]
36. Ram, N.; Yang, X.; Cho, M.; Brinberg, M.; Muirhead, F.; Reeves, B.; Robinson, T. Screenomics: A new approach for observing and studying individual’s digital lives. J. Adolesc. Res.; 2020; 35, pp. 16-50. [DOI: https://dx.doi.org/10.1177/0743558419883362]
37. Agarwal, S.; Charlesworth, M.; Elrakhawy, M. How to write a narrative review. Anaesthesia; 2023; 78, pp. 1162-1166. [DOI: https://dx.doi.org/10.1111/anae.16016]
38. Sukhera, J. Narrative reviews in medical education: Key steps for researchers. J. Grad. Med. Educ.; 2022; 14, pp. 418-419. [DOI: https://dx.doi.org/10.4300/JGME-D-22-00481.1]
39. Perez, O.; Garza, T.; Hindera, O.; Beltran, A.; Musaad, S.M.; Dibbs, T.; Singh, A.; Chug, S.; Sisson, A.; Kumar Vadathya, A. Validated assessment tools for screen media use: A systematic review. PLoS ONE; 2023; 18, e0283714. [DOI: https://dx.doi.org/10.1371/journal.pone.0283714] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37053175]
40. Barr, R.; Kirkorian, H.; Coyne, S.; Radesky, J. Early Childhood and Digital Media; Cambridge University Press: Cambridge, UK, 2024.
41. Işıkoğlu, N.; Erol, A.; Atan, A.; Aytekin, S. A qualitative case study about overuse of digital play at home. Curr. Psychol.; 2023; 42, pp. 1676-1686. [DOI: https://dx.doi.org/10.1007/s12144-021-01442-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33584081]
42. Toh, S.H.; Howie, E.K.; Coenen, P.; Straker, L.M. “From the moment I wake up I will use it… every day, every hour”: A qualitative study on the patterns of adolescents’ mobile touch screen device use from adolescent and parent perspectives. BMC Pediatr.; 2019; 19, 30. [DOI: https://dx.doi.org/10.1186/s12887-019-1399-5]
43. Solomon-Moore, E.; Matthews, J.; Reid, T.; Toumpakari, Z.; Sebire, S.J.; Thompson, J.L.; Lawlor, D.A.; Jago, R. Examining the challenges posed to parents by the contemporary screen environments of children: A qualitative investigation. BMC Pediatr.; 2018; 18, 129. [DOI: https://dx.doi.org/10.1186/s12887-018-1106-y] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29626932]
44. Lefever, S.; Dal, M.; Matthíasdóttir, Á. Online data collection in academic research: Advantages and limitations. Br. J. Educ. Technol.; 2007; 38, pp. 574-582. [DOI: https://dx.doi.org/10.1111/j.1467-8535.2006.00638.x]
45. Lubans, D.R.; Hesketh, K.; Cliff, D.; Barnett, L.; Salmon, J.; Dollman, J.; Morgan, P.J.; Hills, A.; Hardy, L. A systematic review of the validity and reliability of sedentary behaviour measures used with children and adolescents. Obes. Rev.; 2011; 12, pp. 781-799. [DOI: https://dx.doi.org/10.1111/j.1467-789X.2011.00896.x]
46. Ciccarelli, M.; Straker, L.; Mathiassen, S.E.; Pollock, C. ITKids part I: Children’s occupations and use of information and communication technologies. Work; 2011; 38, pp. 401-412. [DOI: https://dx.doi.org/10.3233/WOR-2011-1167]
47. Radesky, J.S.; Weeks, H.M.; Ball, R.; Schaller, A.; Yeo, S.; Durnez, J.; Tamayo-Rios, M.; Epstein, M.; Kirkorian, H.; Coyne, S. Young children’s use of smartphones and tablets. Pediatrics; 2020; 146, e20193518. [DOI: https://dx.doi.org/10.1542/peds.2019-3518] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32482771]
48. Parry, D.A.; Davidson, B.I.; Sewall, C.J.; Fisher, J.T.; Mieczkowski, H.; Quintana, D.S. A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nat. Hum. Behav.; 2021; 5, pp. 1535-1547. [DOI: https://dx.doi.org/10.1038/s41562-021-01117-5]
49. Howie, E.K.; McNally, S.; Straker, L.M. Exploring the reliability and validity of the TechU-Q to evaluate device and purpose specific screen use in preschool children and parents. J. Child Fam. Stud.; 2020; 29, pp. 2879-2889. [DOI: https://dx.doi.org/10.1007/s10826-020-01787-1] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32837150]
50. Brener, N.D.; Kann, L.; McManus, T.; Kinchen, S.A.; Sundberg, E.C.; Ross, J.G. Reliability of the 1999 youth risk behavior survey questionnaire. J. Adolesc. Health; 2002; 31, pp. 336-342. [DOI: https://dx.doi.org/10.1016/S1054-139X(02)00339-7]
51. Schmitz, K.H.; Harnack, L.; Fulton, J.E.; Jacobs, D.R., Jr.; Gao, S.; Lytle, L.A.; Van Coevering, P. Reliability and validity of a brief questionnaire to assess television viewing and computer use by middle school children. J. Sch. Health; 2004; 74, pp. 370-377. [DOI: https://dx.doi.org/10.1111/j.1746-1561.2004.tb06632.x] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15656264]
52. He, M.; Harris, S.; Piché, L.; Beynon, C. Understanding screen-related sedentary behavior and its contributing factors among school-aged children: A social-ecologic exploration. Am. J. Health Promot.; 2009; 23, pp. 299-308. [DOI: https://dx.doi.org/10.4278/ajhp.07070965] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/19445431]
53. Ogel, K.; Karadag, F.; Satgan, D.; Koc, C. Development of the addiction profile index Internet addiction form (APIINT): Validity and reliability. Dusunen Adam J. Psychiatry Neurol. Sci.; 2015; 28, pp. 337-343. [DOI: https://dx.doi.org/10.5350/DAJPN2015280405]
54. Sanders, J.L.; Williams, R.J. Reliability and validity of the behavioral addiction measure for video gaming. Cyberpsychol. Behav. Soc. Netw.; 2016; 19, pp. 43-48. [DOI: https://dx.doi.org/10.1089/cyber.2015.0390]
55. Lin, C.-Y.; Broström, A.; Nilsen, P.; Griffiths, M.D.; Pakpour, A.H. Psychometric validation of the Persian Bergen Social Media Addiction Scale using classic test theory and Rasch models. J. Behav. Addict.; 2017; 6, pp. 620-629. [DOI: https://dx.doi.org/10.1556/2006.6.2017.071]
56. Brunborg, G.S.; Hanss, D.; Mentzoni, R.A.; Pallesen, S. Core and peripheral criteria of video game addiction in the game addiction scale for adolescents. Cyberpsychol. Behav. Soc. Netw.; 2015; 18, pp. 280-285. [DOI: https://dx.doi.org/10.1089/cyber.2014.0509]
57. El Asam, A.; Samara, M.; Terry, P. Problematic internet use and mental health among British children and adolescents. Addict. Behav.; 2019; 90, pp. 428-436. [DOI: https://dx.doi.org/10.1016/j.addbeh.2018.09.007]
58. Barnett, T.A.; Kelly, A.S.; Young, D.R.; Perry, C.K.; Pratt, C.A.; Edwards, N.M.; Rao, G.; Vos, M.B.; American Heart Association Obesity Committee of the Council on Lifestyle and Cardiometabolic Health; et al. Sedentary behaviors in today’s youth: Approaches to the prevention and management of childhood obesity: A scientific statement from the American Heart Association. Circulation; 2018; 138, pp. e142-e159. [DOI: https://dx.doi.org/10.1161/CIR.0000000000000591] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30354382]
59. Tey, C.; Wake, M.; Campbell, M.; Hampton, A.; Williams, J. The Light Time-Use Diary and preschool activity patterns: Exploratory study. Int. J. Pediatr. Obes.; 2007; 2, pp. 167-173. [DOI: https://dx.doi.org/10.1080/17477160701369274] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/17999282]
60. Straker, L.; Smith, A.; Hands, B.; Olds, T.; Abbott, R. Screen-based media use clusters are related to other activity behaviours and health indicators in adolescents. BMC Public Health; 2013; 13, 1174. [DOI: https://dx.doi.org/10.1186/1471-2458-13-1174]
61. Barr, R.; Kirkorian, H.; Radesky, J.; Coyne, S.; Nichols, D.; Blanchfield, O.; Rusnak, S.; Stockdale, L.; Ribner, A.; Durnez, J. et al. Beyond screen time: A synergistic approach to a more comprehensive assessment of family media exposure during early childhood. Front. Psychol.; 2020; 11, 1283. [DOI: https://dx.doi.org/10.3389/fpsyg.2020.01283]
62. Kubey, R.; Larson, R. The use and experience of the new video media among children and young adolescents. Commun. Res.; 1990; 17, pp. 107-130. [DOI: https://dx.doi.org/10.1177/009365090017001006]
63. Kubey, R.W.; Csikszentmihalyi, M. Television and the Quality of Life: How Viewing Shapes Everyday Experience; Psychology Press: London, UK, 1990.
64. Larson, R.; Kubey, R.; Colletti, J. Changing channels: Early adolescent media choices and shifting investments in family and friends. J. Youth Adolesc.; 1989; 18, pp. 583-599. [DOI: https://dx.doi.org/10.1007/BF02139075]
65. Heron, K.E.; Everhart, R.S.; McHale, S.M.; Smyth, J.M. Using mobile-technology-based ecological momentary assessment (EMA) methods with youth: A systematic review and recommendations. J. Pediatr. Psychol.; 2017; 42, pp. 1087-1107. [DOI: https://dx.doi.org/10.1093/jpepsy/jsx078]
66. Dunton, G.F.; Liao, Y.; Intille, S.S.; Spruijt-Metz, D.; Pentz, M. Investigating children’s physical activity and sedentary behavior using ecological momentary assessment with mobile phones. Obesity; 2011; 19, pp. 1205-1212. [DOI: https://dx.doi.org/10.1038/oby.2010.302]
67. Nereim, C.; Bickham, D.S.; Rich, M. Exploring use patterns and racial and ethnic differences in real time affective states during social media use among a clinical sample of adolescents with depression: Prospective cohort study. JMIR Form. Res.; 2022; 6, e30900. [DOI: https://dx.doi.org/10.2196/30900]
68. Cox, M.F.; Petrucci, G.J.; Marcotte, R.T.; Masteller, B.R.; Staudenmayer, J.; Freedson, P.S.; Sirard, J.R. A novel video-based direct observation system for assessing physical activity and sedentary behavior in children and young adults. J. Meas. Phys. Behav.; 2020; 3, pp. 50-57. [DOI: https://dx.doi.org/10.1123/jmpb.2019-0015]
69. Vandewater, E.A.; Lee, S.-J. Measuring children’s media use in the digital age: Issues and challenges. Am. Behav. Sci.; 2009; 52, pp. 1152-1176. [DOI: https://dx.doi.org/10.1177/0002764209331539]
70. Krugman, D.M.; Cameron, G.T.; White, C.M. Visual attention to programming and commercials: The use of in-home observations. J. Advert.; 1995; 24, pp. 1-12. [DOI: https://dx.doi.org/10.1080/00913367.1995.10673464]
71. DuRant, R.H.; Baranowski, T.; Johnson, M.; Thompson, W.O. The relationship among television watching, physical activity, and body composition of young children. Pediatrics; 1994; 94, pp. 449-455. [DOI: https://dx.doi.org/10.1542/peds.94.4.449]
72. Baranowski, T.; Thompson, W.O.; Durant, R.H.; Baranowski, J.; Puhl, J. Observations on physical activity in physical locations: Age, gender, ethnicity, and month effects. Res. Q. Exerc. Sport; 1993; 64, pp. 127-133. [DOI: https://dx.doi.org/10.1080/02701367.1993.10608789]
73. Lee, R.M.; Emmons, K.M.; Okechukwu, C.A.; Barrett, J.L.; Kenney, E.L.; Cradock, A.L.; Giles, C.M.; deBlois, M.E.; Gortmaker, S.L. Validity of a practitioner-administered observational tool to measure physical activity, nutrition, and screen time in school-age programs. Int. J. Behav. Nutr. Phys. Act.; 2014; 11, 145. [DOI: https://dx.doi.org/10.1186/s12966-014-0145-5]
74. McKenzie, T.L.; Sallis, J.F.; Nader, P.R.; Broyles, S.L.; Nelson, J.A. Anglo-and Mexican-American preschoolers at home and at recess: Activity patterns and environmental influences. J. Dev. Behav. Pediatr.; 1992; 13, pp. 173-180. [DOI: https://dx.doi.org/10.1097/00004703-199206000-00004]
75. Kelly, P.; Marshall, S.; Badland, H.; Kerr, J.; Oliver, M.; Doherty, A.; Foster, C. An ethical framework for automated, wearable cameras in health behavior research. Am. J. Prev. Med.; 2013; 44, pp. 314-319. [DOI: https://dx.doi.org/10.1016/j.amepre.2012.11.006]
76. Everson, B.; Mackintosh, K.A.; McNarry, M.A.; Todd, C.; Stratton, G. Can wearable cameras be used to validate school-aged children’s lifestyle behaviours? Children; 2019; 6, 20. [DOI: https://dx.doi.org/10.3390/children6020020]
77. Zhou, Q.; Wang, D.; Mhurchu, C.N.; Gurrin, C.; Zhou, J.; Cheng, Y.; Wang, H. The use of wearable cameras in assessing children’s dietary intake and behaviours in China. Appetite; 2019; 139, pp. 1-7. [DOI: https://dx.doi.org/10.1016/j.appet.2019.03.032]
78. Bechtel, R.B.; Achelpohl, C.; Akers, R. Correlates between Observed Behavior and Questionnaire Responses on Television Viewing. Television and Social Behavior: Television in Day-to-Day Life: Patterns of Use; Rubinstein, E., Ed.; US Government Printing Office: Washington, DC, USA, 1972; Volume 72, pp. 274-344.
79. Fletcher, J.E.; Chen, C.C.-P. Validation of viewing reports: Exploration of a photographic method. Proceedings of the Annual Meeting of the Broadcast Education Association; Las Vegas, NV, USA, April 1975.
80. Anderson, D.R.; Field, D.E.; Collins, P.A.; Lorch, E.P.; Nathan, J.G. Estimates of young children’s time with television: A methodological comparison of parent reports with time-lapse video home observation. Child Dev.; 1985; 56, pp. 1345-1357. [DOI: https://dx.doi.org/10.2307/1130249]
81. Borzekowski, D.L.; Robinson, T.N. Viewing the viewers: Ten video cases of children’s television viewing behaviors. J. Broadcast. Electron. Media; 1999; 43, pp. 506-528. [DOI: https://dx.doi.org/10.1080/08838159909364507]
82. Allen, C.L. Photographing the TV audience. J. Advert. Res.; 1965; 5, pp. 2-8.
83. Vadathya, A.K.; Musaad, S.; Beltran, A.; Perez, O.; Meister, L.; Baranowski, T.; Hughes, S.O.; Mendoza, J.A.; Sabharwal, A.; Veeraraghavan, A. An objective system for quantitative assessment of television viewing among children (family level assessment of screen use in the home-television): System development study. JMIR Pediatr. Parent.; 2022; 5, e33569. [DOI: https://dx.doi.org/10.2196/33569]
84. Given, L.M.; Cantrell Winkler, D.; Willson, R.; Davidson, C.; Danby, S.; Thorpe, K. Parents as coresearchers at home: Using an observational method to document young children’s use of technology. Int. J. Qual. Methods; 2016; 15, 1609406915621403. [DOI: https://dx.doi.org/10.1177/1609406915621403]
85. Kerr, J.; Marshall, S.J.; Godbole, S.; Chen, J.; Legge, A.; Doherty, A.R.; Kelly, P.; Oliver, M.; Badland, H.M.; Foster, C. Using the SenseCam to improve classifications of sedentary behavior in free-living settings. Am. J. Prev. Med.; 2013; 44, pp. 290-296. [DOI: https://dx.doi.org/10.1016/j.amepre.2012.11.004]
86. Thomas, G.; Bennie, J.A.; De Cocker, K.; Dwi Andriyani, F.; Booker, B.; Biddle, S.J.H. Using wearable cameras to categorize the type and context of screen-based behaviors among adolescents: Observational study. JMIR Pediatr. Parent.; 2022; 5, e28208. [DOI: https://dx.doi.org/10.2196/28208]
87. Ramirez, N.F.; Hippe, D.S.; Shapiro, N.T. Exposure to electronic media between 6 and 24 months of age: An exploratory study. Infant Behav. Dev.; 2021; 63, 101549. [DOI: https://dx.doi.org/10.1016/j.infbeh.2021.101549]
88. Ambrogio, S.; Narayanan, P.; Okazaki, A.; Fasoli, A.; Mackin, C.; Hosokawa, K.; Nomura, A.; Yasuda, T.; Chen, A.; Friz, A. An analog-AI chip for energy-efficient speech recognition and transcription. Nature; 2023; 620, pp. 768-775. [DOI: https://dx.doi.org/10.1038/s41586-023-06337-5]
89. Christakis, D.A.; Gilkerson, J.; Richards, J.A.; Zimmerman, F.J.; Garrison, M.M.; Xu, D.; Gray, S.; Yapanel, U. Audible television and decreased adult words, infant vocalizations, and conversational turns: A population-based study. Arch. Pediatr. Adolesc. Med.; 2009; 163, pp. 554-558. [DOI: https://dx.doi.org/10.1001/archpediatrics.2009.61]
90. Ambrose, S.E.; VanDam, M.; Moeller, M.P. Linguistic input, electronic media, and communication outcomes of toddlers with hearing loss. Ear Hear.; 2014; 35, pp. 139-147. [DOI: https://dx.doi.org/10.1097/AUD.0b013e3182a76768]
91. Brushe, M.E.; Lynch, J.W.; Melhuish, E.; Reilly, S.; Mittinty, M.N.; Brinkman, S.A. Objectively measured infant and toddler screen time: Findings from a prospective study. SSM-Popul. Health; 2023; 22, 101395. [DOI: https://dx.doi.org/10.1016/j.ssmph.2023.101395] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37096246]
92. Scharkow, M. The accuracy of self-reported internet use—A validation study using client log data. Commun. Methods Meas.; 2016; 10, pp. 13-27. [DOI: https://dx.doi.org/10.1080/19312458.2015.1118446]
93. Andrews, S.; Ellis, D.A.; Shaw, H.; Piwek, L. Beyond self-report: Tools to compare estimated and real-world smartphone use. PLoS ONE; 2015; 10, e0139004. [DOI: https://dx.doi.org/10.1371/journal.pone.0139004] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26509895]
94. Wade, N.E.; Ortigara, J.M.; Sullivan, R.M.; Tomko, R.L.; Breslin, F.J.; Baker, F.C.; Fuemmeler, B.F.; Delrahim Howlett, K.; Lisdahl, K.M.; Marshall, A.T. Passive sensing of preteens’ smartphone use: An Adolescent Brain Cognitive Development (ABCD) cohort substudy. JMIR Ment. Health; 2021; 8, e29426. [DOI: https://dx.doi.org/10.2196/29426] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/34661541]
95. Gower, A.D.; Moreno, M.A. A novel approach to evaluating mobile smartphone screen time for iPhones: Feasibility and preliminary findings. JMIR mHealth uHealth; 2018; 6, e11012. [DOI: https://dx.doi.org/10.2196/11012] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30455163]
96. Goedhart, G.; van Wel, L.; Langer, C.E.; de Llobet Viladoms, P.; Wiart, J.; Hours, M.; Kromhout, H.; Benke, G.; Bouka, E.; Bruchim, R. Recall of mobile phone usage and laterality in young people: The multinational Mobi-Expo study. Environ. Res.; 2018; 165, pp. 150-157. [DOI: https://dx.doi.org/10.1016/j.envres.2018.04.018] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29704776]
97. Fischer, C.; Pardos, Z.; Baker, R.; Williams, J.; Smyth, P.; Yu, R.; Slater, S.; Baker, R.; Warschauer, M. Mining big data in education: Affordances and challenges. Rev. Res. Educ.; 2020; 44, pp. 130-160. [DOI: https://dx.doi.org/10.3102/0091732X20903304]
98. Kitchin, R. The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences; Sage: Thousand Oaks, CA, USA, 2014.
99. Boase, J.; Ling, R. Measuring mobile phone use: Self-report versus log data. J. Comput.-Mediat. Commun.; 2013; 18, pp. 508-519. [DOI: https://dx.doi.org/10.1111/jcc4.12021]
100. Liu, M.; Li, C.; Pan, Z.; Pan, X. Mining big data to help make informed decisions for designing effective digital educational games. Interact. Learn. Environ.; 2023; 31, pp. 2562-2582.
101. Alahmadi, M.A. Direct measurement of TV viewing time and physical activity in children. A pilot study. Proceedings of the 3rd International Congress on Sport Sciences Research and Technology Support (icSPORTS 2015); Lisbon, Portugal, 15–17 November 2015; pp. 145-149.
102. Nielsen Media Research. 2000 Report on Television: The First 50 Years; Nielsen Media Research: New York, NY, USA, 2000.
103. Danaher, P.J.; Beed, T.W. A coincidental survey of people meter panelists: Comparing what people say with what they do. J. Advert. Res.; 1993; 33, pp. 86-93.
104. Clancey, M. The television audience examined. J. Advert. Res.; 1994; 34, pp. 2-11.
105. Fitzgerald, J. Evaluating return on investment of multimedia advertising with a single-source panel: A retail case study. J. Advert. Res.; 2004; 44, pp. 262-270. [DOI: https://dx.doi.org/10.1017/S0021849904040309]
106. Robinson, J.L.; Winiewicz, D.D.; Fuerch, J.H.; Roemmich, J.N.; Epstein, L.H. Relationship between parental estimate and an objective measure of child television watching. Int. J. Behav. Nutr. Phys. Act.; 2006; 3, 43. [DOI: https://dx.doi.org/10.1186/1479-5868-3-43]
107. Mendoza, J.A.; McLeod, J.; Chen, T.-A.; Nicklas, T.A.; Baranowski, T. Convergent validity of preschool children’s television viewing measures among low-income Latino families: A cross-sectional study. Child. Obes.; 2013; 9, pp. 29-34. [DOI: https://dx.doi.org/10.1089/chi.2012.0116]
108. Douwes, M.; Kraker, H.; Blatter, B. Validity of two methods to assess computer use: Self-report by questionnaire and computer use software. Int. J. Ind. Ergon.; 2007; 37, pp. 425-431. [DOI: https://dx.doi.org/10.1016/j.ergon.2007.01.002]
109. Groen, M.; Noyes, J. Using eye tracking to evaluate usability of user interfaces: Is it warranted? IFAC Proc. Vol.; 2010; 43, pp. 489-493. [DOI: https://dx.doi.org/10.3182/20100831-4-FR-2021.00086]
110. Al Baghal, T.; Wenz, A.; Sloan, L.; Jessop, C. Linking Twitter and survey data: Asymmetry in quantity and its impact. EPJ Data Sci.; 2021; 10, 32. [DOI: https://dx.doi.org/10.1140/epjds/s13688-021-00286-7]
111. Sloan, L.; Jessop, C.; Al Baghal, T.; Williams, M. Linking survey and Twitter data: Informed consent, disclosure, security, and archiving. J. Empir. Res. Hum. Res. Ethics; 2020; 15, pp. 63-76. [DOI: https://dx.doi.org/10.1177/1556264619853447] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31220995]
112. Barocas, S.; Nissenbaum, H. Big data’s end run around anonymity and consent. Privacy, Big Data, and the Public Good: Frameworks for Engagement; Cambridge University Press: Cambridge, UK, 2014; pp. 44-75.
113. Crawford, K. The Atlas of AI: Power, Politics and the Planetary Costs of Artificial Intelligence; Yale University Press: New Haven, CT, USA, 2021.
114. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 779-788.
115. Gómez, J.; Aycard, O.; Baber, J. Efficient detection and tracking of human using 3D LiDAR sensor. Sensors; 2023; 23, 4720. [DOI: https://dx.doi.org/10.3390/s23104720] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/37430633]
116. Costanza-Chock, S. Design Justice: Community-Led Practices to Build the World We Need; The MIT Press: Cambridge, MA, USA, 2020.
117. Rich, M.; Bickham, D.S.; Shrier, L.A. Measuring youth media exposure: A multimodal method for investigating the influence of media on digital natives. Am. Behav. Sci.; 2015; 59, pp. 1736-1754. [DOI: https://dx.doi.org/10.1177/0002764215596558]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The role and potential impact of digital screen technology in the lives of children are heavily debated. Current evidence is limited by the weakness of the measures typically used to characterise screen use, predominantly proxy- or self-reports with known inaccuracy and bias. However, robust and detailed evidence is needed to provide practical, trustworthy guidance to families and to the professionals who work with them. The purpose of this paper is to support researchers in selecting measurement method(s) that will provide robust and detailed evidence. The paper outlines the challenges in measuring contemporary screen use by children, using a child–technology interaction model to organise the considerations. A range of methods used to measure digital screen technology use in children and adolescents (i.e., questionnaires, diaries, electronically prompted sampling, direct observation, fixed room cameras, wearable/portable cameras, audio recorders, screen-device onboard logging, remote digital trace logging and proximity logging) is described, along with examples of their use, the constructs typically measured and a summary of the advantages and disadvantages of each method. A checklist and worked examples are provided to support researchers in determining the best method, or combination of methods, for a research project.
Author Affiliations
1 ARC Centre of Excellence for the Digital Child, Australia
2 ARC Centre of Excellence for the Digital Child, Australia
3 Oxford Internet Institute, University of Oxford, Oxford OX1 2JD, UK
4 ARC Centre of Excellence for the Digital Child, Australia