Content area
Wearable medical devices offer continuous health monitoring but often rely on static user interfaces that do not adjust to individual user needs. This lack of adaptability presents accessibility challenges, especially for older adults and users with limited tech proficiency. To address this, we propose an adaptive user interface powered by reinforcement learning to personalize navigation flow, button placement, and notification timing based on real-time user behavior. Our system uses a deep Q-learning (DQL) model enhanced with the Golden Jackal Optimization (GJO) algorithm for improved convergence and performance. Usability testing was conducted to evaluate the adaptive interface against traditional static designs. The proposed DQL-GJO model demonstrated the fastest convergence, requiring only 45 epochs, compared to 70 for standard DQL and 48–62 for other hybrid models. It also achieved the lowest task completion time (TCT) at 82 s, the lowest error rate (ER) at 9.9%, and the highest user satisfaction (US) at 78%. These improvements suggest that the GJO-enhanced model not only accelerates training efficiency but also delivers superior user experience in practical use.
Introduction
Wearable medical devices play an important role in the current healthcare system, enabling continuous monitoring of health conditions and early detection of medical problems1. Advances in sensors, data analytics, and wireless connectivity have made such devices more powerful and widespread2. Nevertheless, their user interfaces (UIs) remain largely static and are commonly designed around a one-size-fits-all concept3. This inflexibility poses accessibility barriers, particularly for older adults, people with motor disabilities, and people who are not tech-savvy.
Personalized interaction has been shown to be a core requirement for improving the healthcare user experience and strengthening adherence to self-directed care. Most current approaches rely on manual customization to implement intelligent UIs, but despite growing interest in this field, such methods remain impractical for many users. The main limitation of wearable medical device user interfaces stems from their inability to respond dynamically to user-specific needs and preferences4. Usability and accessibility decline when fixed interaction models and static UI frameworks cannot accommodate variations in user behavior. Users experience frustration and a higher mental workload, which ultimately leads them to abandon their healthcare monitoring routines5. Typical interface designs are especially challenging for users with visual or motor impairments who wish to use a wearable medical device6,7.
The problem addressed in this study lies in the limitations of wearable medical devices, which, despite their potential for continuous health monitoring, often employ static user interfaces that fail to adapt to individual user needs. This rigidity creates significant accessibility challenges, particularly for older adults and individuals with limited technological proficiency, thereby reducing usability and overall user satisfaction. Existing designs lack the capacity to adjust navigation flow, button placement, or notification timing in response to real-time user behavior, ultimately hindering both efficiency and inclusivity. While reinforcement learning has shown promise in adaptive systems, conventional approaches such as standard deep Q-learning suffer from slower convergence rates and suboptimal performance. This gap highlights the need for an intelligent, optimization-enhanced framework capable of delivering faster training, lower error rates, and improved usability in practical health-monitoring scenarios.
To address these issues, a system must use user interaction data to make intelligent real-time adjustments to UI elements. A platform with adaptive UI features enhances the usability of wearable medical devices and broadens access for diverse categories of users8. By introducing reinforcement learning (RL), interfaces can be created that automatically adjust navigation structures and button placements, and additionally adapt notification frequency, based on learned user patterns and behavioral traits. Automatic adaptation removes the burden of manual customization and presents users with an interface that remains easy to use.
This study investigates how reinforcement learning can optimize user interface adaptation to improve usability, health-monitoring adherence, and user engagement. It presents a novel approach to adaptive UI design for wearable medical devices using deep Q-learning, a reinforcement learning technique. The key contributions of this research include:
This study implements a deep Q-learning model, further optimized using the GJO algorithm, to continuously refine interface elements, eliminate manual adjustments, and improve interaction efficiency.
The adaptive UI caters to individuals with motor impairments, visual limitations, and varying levels of technological proficiency, ensuring a more inclusive wearable healthcare experience.
By minimizing navigation complexity and interaction errors, the system enhances user satisfaction, engagement, and adherence to health monitoring routines.
The rest of this paper is structured as follows: Sect. 2 reviews related work on adaptive user interfaces and reinforcement learning in healthcare applications. Section 3 details the methodology, including the reinforcement learning framework and UI adaptation strategy. Section 4 describes the results. Section 5 presents and discusses the results and their implications. Section 6 concludes the study and outlines potential directions for future research.
Related work
Lewis-Thames et al.9 examined how digital health tools are developed with and for rural older adults, including their key usability problems and strategies for user involvement. The authors emphasized the need for technology adaptation grounded in user-centred design while addressing the digital-skill and physical limitations that users encounter. Involving users throughout the process yielded effective adoption strategies that combined tailored training sessions, local assistance networks, and user-friendly interfaces. The findings confirm that inclusive digital health systems improve access to healthcare and patient outcomes among rural older adults.
Stamate et al.10 advocate user-centred design of assistive technology for older adults. Involving older adults in the design process makes technology adoption more successful, because they can shape solutions around real-life usability requirements and their needs for personalization and adaptability. The study demonstrates that participatory design is effective: it produces interfaces that are user-friendly while promoting user confidence and high adoption rates. It establishes that inclusive innovation is crucial for developing high-quality digital and assisted-living solutions and equity programs for older citizens.
Using wearable trackers connected to mobile platforms, Sharma et al.11 developed a system that monitors medications and healthcare metrics for geriatric clients. The system also monitors health data in real time to trigger immediate healthcare support and deliver individualized care plans. Because continuous sensor-based monitoring is provided, the system can support medication compliance and reduce threats to independence and safety. Through the mobile application, barriers to accessing health information are removed for both seniors and caregivers. The findings suggest that digital health innovations offer a practical, accessible approach to proactive healthcare for elderly people.
Luo et al. designed techniques to enhance elderly engagement in smart healthcare with the goal of optimizing the system12. Their INPD approach gives researchers the means to identify user requirements and to derive design criteria that yield the desired levels of system accessibility. This strategy makes it possible to develop healthcare-technology interfaces that are easy to use and address the needs of seniors in targeted ways. User-centred design is a key factor in increasing the involvement and independence of older patients who interact with intelligent healthcare systems, leading to more effective solutions.
Iadarola et al.13 outline the healthcare sustainability crisis driven by aging populations and the ways in which wearable sensors enhance elderly-care and health-monitoring systems. Combined with a data collection infrastructure, wearable sensors provide stable vital-sign monitoring that identifies medical problems at an early stage and supports regulated disease management, reducing the load on the healthcare system. The combination of proactive healthcare and support for autonomous living makes wearable sensors a sustainable healthcare practice. The existing needs of senior citizens make innovative health-monitoring solutions clearly necessary.
Coskun14 addresses the growing issue of aging individuals living alone through a tracking application that unites GPS technology and sensing functions. The tracking system enhances safety by monitoring real-time positions, health status, and falls so that emergencies receive a prompt response. Because the solution makes important data accessible remotely, it can accommodate aging in place while offering users round-the-clock support. The literature shows that such technological solutions are critical for improving the health and safety of isolated elderly persons.
In15, Sun et al. constructed a reinforcement learning-based adaptive framework that adjusts user interfaces to current user feedback for a better live experience. The system improves interface usability and accessibility by customizing features through intelligent feedback techniques. AI-based adaptive customization is effective at categorizing users, and it increases click-through and keeps users engaged. The research data demonstrate that reinforcement learning approaches make it possible to develop digital interfaces that respond to the individual user.
Wegener et al. note that the shortage of digital health solutions serving older adults, together with co-design principles, necessitates a sociotechnical approach to addressing older adults' needs16. The researchers illustrate how technological elements should merge with social and environmental factors when developing accessible health technology. By involving older adults in the design phase, digital health devices can be aligned properly with their needs. Such an approach gives elderly citizens better access to new systems and improved health outcomes, as it increases system adoption and user learning.
According to Hirani17, smart healthcare solutions may combine wearable sensors with machine intelligence to remotely track and monitor disabled and elderly patients using IoT and AI technology. The approach provides real-time health information through which anomalies can be detected early and patient-specific care strategies can be developed, conserving both users' and healthcare providers' resources. The system performs proactive healthcare management by integrating sensor data with AI-driven analytics, enabling rapid responses during critical health events. The research demonstrates how advanced technology solutions effectively enhance medical care delivery and patient outreach for vulnerable groups.
Research gaps
Previous research has highlighted user-centric design in digital health solutions, but most studies have concentrated on participatory design, assistive technology, and sensor-based monitoring for older adults. Work on user interface systems that automatically modify themselves based on moment-to-moment user interactions remains fragmented. The interface designs of wearable medical devices lack adaptive features, so the devices neither build a personal fit with their users nor adjust to changes in user ability. Existing research on interface adaptation through reinforcement learning focuses on overall user engagement, while accessibility solutions remain insufficient for elderly users and those with low technological competency. The absence of usability comparisons between static and adaptive interfaces prevents us from understanding how reinforcement learning-driven adaptation improves navigation efficiency, reduces cognitive strain, and increases user adherence to wearable medical devices. This research addresses these gaps by building deep Q-learning-based adaptive interfaces for wearable medical devices. In the proposed system, the interface dynamically adjusts navigation routes, button positions, and notification frequencies by detecting current user activity patterns. The system enhances accessibility and usability by adapting automatically, without human intervention, predefined interfaces, or manual participatory design methods. Usability testing between static and adaptive interfaces demonstrates improved completion times, user satisfaction, and error rates.
The research promotes inclusive wearable medical device development by making equipment accessible for people with motor and visual impairments to enhance their long-term health condition self-management.
Materials and methods
System design
Figure 1 presents a detailed cloud-based system architecture designed for the real-time processing, analysis, and interaction of data collected from wearable health monitoring devices. This architecture facilitates continuous health monitoring by capturing physiological signals from various wearable sensors, ensuring secure data transmission, enabling large-scale data storage and analytics, and supporting multi-level interaction interfaces tailored to diverse stakeholders in the healthcare ecosystem. At the initial stage, wearable devices attached to patients generate diverse physiological data such as heart rate, glucose levels, body temperature, and movement patterns. These data streams are transmitted to gateways that serve as aggregators and initial relays for the information. Following this, the data passes through a firewall component that provides a security layer to shield the system from unauthorized access and potential cyber threats. Once secured, the real-time data is processed by a streaming data processor, which standardizes and filters the input to prepare it for deeper processing and storage. The preprocessed data is then directed into a data lake, which accommodates large volumes of raw, unstructured, or semi-structured health data. Subsequently, structured and refined information is transferred into a data warehouse for long-term storage and advanced analysis. A control application module within the system governs the interaction between the wearable devices and the cloud infrastructure, facilitating operational commands and bidirectional communication.
Fig. 1 [Images not available. See PDF.]
Adaptive user interface architecture for healthcare data management using wearable devices.
The system enables dynamic UI personalization through real-time physiological data acquisition, secure communication via gateways and firewalls, and centralized cloud-based analytics. Integrated with software business logic, the adaptive UI delivers context-aware interfaces tailored to medical staff, patients, and administrators, ensuring seamless interaction, continuous monitoring, and data-driven decision-making in a secure healthcare environment.
At the core of the system lies the cloud server, which houses the software business logic that governs overall system behavior, orchestrates data workflows, and integrates with EHRs for an enriched clinical context. This logic layer also supports advanced data analytics functionalities, which transform raw health signals into meaningful insights, aiding in early diagnosis and personalized healthcare strategies. Finally, the processed outputs and insights are delivered to various end-users through dedicated interfaces. Medical personnel and device technicians receive detailed visualizations and alerts; patients access user-friendly feedback and progress monitoring tools; and administrative staff interact with system-level configurations and reports. This multi-interface system ensures a cohesive, secure, and responsive environment for wearable health monitoring and digital healthcare delivery.
Overview of the proposed adaptive UI system
Static user interfaces found in wearable medical devices neglect the requirements of users who differ in technological experience, physical capacity, and cognitive ability. The proposed adaptive interface system applies reinforcement learning (RL) to reconfigure the interface in real time as users engage with the system, as shown in Fig. 2. Through continuous optimization of the UI architecture, navigation pathways, and notification heuristics, the system delivers enhanced accessibility, usability, and user experience.
Users interact with the adaptive UI by providing feedback data, which is processed by an RL model through a feedback loop mechanism to improve interface elements. This dynamic adaptation system boosts operational performance together with user connection and eliminates manual adjustments to produce an interface that automatically adapts to context.
Fig. 2 [Images not available. See PDF.]
Overview of the proposed approach.
Key interface elements optimized
The adaptive UI system primarily optimizes the following interface parameters to enhance usability and engagement:
Navigation flow optimization
The system performs trajectory analysis of user movement within the UI, dynamically restructuring menu hierarchies to prioritize high-frequency interactions. The optimization objective is formulated as:
$$C = \sum_{i=1}^{n} w_i \, s_i \tag{1}$$
where $w_i$ represents the weight assigned to the access frequency of feature $i$, and $s_i$ denotes the number of steps required to reach the feature. This cost function minimizes redundant navigation steps and optimizes transition pathways to mitigate cognitive overload and enhance task efficiency.
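As a minimal illustration of this cost function, the weighted sum can be computed directly; the feature weights and step counts below are hypothetical, not values from the study:

```python
# Minimal sketch of the navigation-cost objective in Eq. (1):
# cost = sum over features of (access-frequency weight * steps to reach it).
# Feature weights and step counts are illustrative, not study data.

def navigation_cost(features):
    """features: list of (weight, steps) pairs, one per UI feature."""
    return sum(w * s for w, s in features)

# A menu layout that buries a frequently used feature (weight 0.6) three
# levels deep costs more than one that surfaces it at the top level.
deep = [(0.6, 3), (0.3, 1), (0.1, 2)]   # frequent feature needs 3 steps
flat = [(0.6, 1), (0.3, 2), (0.1, 3)]   # frequent feature needs 1 step

print(navigation_cost(deep))  # 2.3
print(navigation_cost(flat))  # 1.5
```

Minimizing this cost is what drives the system to promote high-frequency features toward shallower menu levels.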
Button placement and resizing
Leveraging interaction heatmaps and user ergonomics, the system autonomously adjusts button dimensions and spatial positioning to improve accessibility. The resizing follows Fitts’ Law:
$$T = a + b \log_2\!\left(\frac{D}{W} + 1\right) \tag{2}$$
where $T$ is the selection time, $D$ is the distance to the button, $W$ is the button width, and $a$, $b$ are empirically fitted device constants. For individuals with motor impairments or reduced dexterity, the system implements scalable button elements with context-aware positioning, reducing interaction latency and input errors.
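A short sketch of Fitts' Law in code, using the Shannon formulation; the coefficients a and b are normally fitted empirically per device and are illustrative assumptions here:

```python
import math

# Fitts' Law (Eq. 2): predicted selection time grows with the index of
# difficulty log2(D/W + 1). The coefficients a and b are device-specific
# constants fitted empirically; the values below are illustrative.

def fitts_time(D, W, a=0.2, b=0.1):
    """Predicted selection time (s) for a target of width W at distance D."""
    return a + b * math.log2(D / W + 1)

# Doubling the button width lowers the predicted selection time,
# which is why the system enlarges targets for users with tremor.
small = fitts_time(D=200, W=20)   # narrow button
large = fitts_time(D=200, W=40)   # same distance, wider button
print(round(small, 3), round(large, 3))
```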
Notification frequency and timing
An adaptive notification scheduling algorithm modulates alert frequency based on predictive user engagement models. Reinforcement learning optimizes notification timing, ensuring alerts are disseminated at optimal temporal intervals. The RL model updates its policy using the Q-learning update rule.
The design and implementation of the adaptive UI system are predicated on user-centric principles, with an emphasis on accessibility and cognitive load minimization:
Accessibility augmentations: The system incorporates assistive functionalities, including dynamic text scaling, contrast modulation, and multimodal interaction (e.g., haptic feedback and voice-based commands), ensuring inclusivity for users with sensory or motor impairments.
Cognitive load mitigation: The UI employs predictive modeling to anticipate user intent, proactively surfacing relevant functionalities and minimizing extraneous cognitive demands.
Autonomous personalization: Unlike conventional UI customization paradigms that necessitate explicit user intervention, this system autonomously adapts the interface in real time, facilitating a frictionless user experience, particularly for individuals with limited digital literacy.
By optimizing these core interface elements, the proposed adaptive UI framework enhances user-system interaction efficacy, fosters sustained adherence to medical device utilization, and ensures a scalable, intelligent, and inclusive digital health ecosystem.
Reinforcement learning model
The adaptive UI system leverages Deep Q-Learning (DQL)18 to facilitate dynamic optimization of interface elements, ensuring an adaptive and efficient user experience. Traditional static UI designs lack adaptability, whereas DQL enables real-time refinement of UI configurations based on user interaction patterns. The learning process is governed by the Bellman equation:
$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right] \tag{3}$$
where $s$ denotes the system's current interaction state, $a$ represents a UI modification, $r$ quantifies the resulting performance enhancement, $s'$ is the next state, $\gamma$ is the discount factor prioritizing future rewards, and $\alpha$ is the learning rate controlling model updates. To mitigate instability and variance, the model incorporates experience replay and target network updates, ensuring convergence and robustness.
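The update rule can be sketched in tabular form before moving to the deep network; the state names and UI actions below are hypothetical placeholders:

```python
from collections import defaultdict

# Sketch of the tabular Q-learning update in Eq. (3). States and actions
# are abstract placeholders for UI states and candidate UI modifications.

ALPHA, GAMMA = 0.1, 0.9
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def update(s, a, r, s_next, actions):
    """One backup: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

actions = ["enlarge_button", "reorder_menu", "delay_notification"]
# One simulated transition: reordering the menu yields reward 1.0.
update("home", "reorder_menu", 1.0, "home", actions)
print(Q[("home", "reorder_menu")])  # 0.1 after a single update
```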
Deep Q-Network (DQN) architecture
The Deep Q-Network (DQN) processes user interaction data and outputs optimized UI element configurations through a structured multi-layered architecture as shown in Fig. 3.
Fig. 3 [Images not available. See PDF.]
Architecture of Deep Q-Network (DQN).
Each hidden layer transformation is mathematically expressed as:
$$h^{(l)} = \sigma\!\left(W^{(l)} h^{(l-1)} + b^{(l)}\right) \tag{4}$$
where $h^{(l-1)}$ represents the activations from the previous layer, $W^{(l)}$ and $b^{(l)}$ are the weight matrix and bias of layer $l$, and $\sigma$ is the ReLU activation function, ensuring non-linearity.
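A minimal sketch of this layered forward pass can be written with NumPy; the layer sizes are illustrative, as the paper does not specify the exact DQN dimensions:

```python
import numpy as np

# Sketch of the hidden-layer transformation in Eq. (4):
# h_l = ReLU(W_l @ h_{l-1} + b_l). Layer sizes are illustrative.

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """layers: list of (W, b); ReLU after every layer except the last."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b  # linear output head: one Q-value per UI action

state_dim, hidden, n_actions = 8, 16, 3
layers = [
    (rng.standard_normal((hidden, state_dim)) * 0.1, np.zeros(hidden)),
    (rng.standard_normal((n_actions, hidden)) * 0.1, np.zeros(n_actions)),
]
q_values = forward(rng.standard_normal(state_dim), layers)
print(q_values.shape)  # (3,) -- one estimate per candidate UI adaptation
```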
Policy optimization and action selection
Through its ε-greedy policy, the system balances exploring untested UI adaptations against exploiting configurations already known to work well. These changes follow a route of continuous optimization without sacrificing the flexibility of the interface. The system supports repositioning of elements, resizing of buttons, and workload-aware timing of notifications, leading to effortless interface use.
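The ε-greedy selection described here can be sketched as follows; the Q-values and action names are illustrative:

```python
import random

# Epsilon-greedy action selection: with probability epsilon pick a random
# (exploratory) UI adaptation, otherwise exploit the highest-valued one.

def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))                         # explore
    return max(range(len(q_values)), key=q_values.__getitem__)      # exploit

q = [0.2, 0.9, 0.5]   # illustrative Q-values for three UI actions
greedy = epsilon_greedy(q, epsilon=0.0)  # epsilon 0 always exploits
print(greedy)  # 1 -- the highest-valued action
```

In practice ε is typically decayed over training so that early interactions explore widely while later ones settle on the learned layout.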
Reward function for UI optimization
The reward function is multi-objective, incorporating key usability metrics:
$$R = -w_1 T - w_2 E + w_3 G \tag{5}$$
where $T$ is the time spent on task completion (minimized to improve efficiency), $E$ is the number of interaction errors (penalized to improve accuracy), and $G$ is user engagement (maximized to ensure lasting interaction). The weight parameters $w_1$, $w_2$, $w_3$ are tuned to user behavioral data, which guarantees individual optimization of the UI.
To ensure that the reward function was equitable and measured all performance indicators on a similar scale, task completion time (TCT) and error rate (ER) were rescaled using min–max normalization, mapping all raw values into the range 0 to 1. User engagement (UE) was operationalized as a combination of interaction frequency and active interaction duration within a task. Interaction frequency records the count of user actions, such as clicks on the interface and navigational actions, while active interaction duration records how much time the user spent proactively engaging with the interface during the task. These two factors were min–max normalized and then combined into a weighted composite score; the resulting engagement index was again standardized to the 0–1 scale for consistency with the other measures. This approach enabled the reward function to optimize efficiency (TCT), accuracy (ER), and interactivity (UE) coherently, so that gains in one area were not outweighed by losses in another.
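The normalization and weighting described above can be sketched as follows; the weights, raw value ranges, and session numbers are illustrative assumptions:

```python
# Sketch of the multi-objective reward in Eq. (5) with min-max normalized
# inputs. Weights and raw measurement ranges are illustrative, not study data.

def minmax(value, lo, hi):
    """Rescale a raw measurement into [0, 1]."""
    return (value - lo) / (hi - lo)

def reward(tct, errors, engagement, w=(0.4, 0.3, 0.3)):
    """R = -w1*T - w2*E + w3*G on inputs already normalized to [0, 1]."""
    w1, w2, w3 = w
    return -w1 * tct - w2 * errors + w3 * engagement

# A fast, accurate, engaging session (raw TCT 82 s in an assumed 60-180 s range).
t = minmax(82, 60, 180)
r = reward(tct=t, errors=0.1, engagement=0.8)
print(round(r, 4))  # 0.1367
```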
Golden Jackal optimization for reinforcement learning enhancement
We combine Golden Jackal Optimization (GJO), a bio-inspired metaheuristic algorithm that models collaborative hunting behavior, with RL to improve its efficiency. The GJO algorithm was chosen because it balances exploration and exploitation during optimization, which is especially important for reinforcement learning models such as DQL. Classical DQL is sensitive to its hyperparameters, and poorly chosen learning rates, discount factors, or exploration parameters can slow convergence and worsen performance. Inspired by cooperative hunting in golden jackals, GJO uses adaptive position updating and prey-encircling behaviour to search a broad solution space effectively and then narrow in on promising regions. This two-pronged approach lets the algorithm avoid the premature convergence often found in other metaheuristics and speeds up the search for the best hyperparameter settings. Moreover, GJO has a lightweight mathematical framework and limited computational complexity, making it viable for resource-constrained platforms such as wearable medical software. Integrating GJO into the DQL framework yields faster convergence and more stable training, resulting in considerable improvements to user interface adaptability.
GJO algorithm phases
Searching for prey (Exploration Phase)
Jackals disperse across the UI search space, evaluating diverse configurations. The position update is:
$$Y_1(t) = Y_M(t) - E \cdot \left| Y_M(t) - rl \cdot Y_{prey}(t) \right| \tag{6}$$
where $Y_M(t)$ is the male jackal's (current best) position, $Y_{prey}(t)$ is the prey position, and the Lévy-flight-based random vector $rl$ ensures exploration.
Surrounding and stimulating the prey (Exploitation Phase)
Jackals converge towards the best-observed UI configuration.
Position update:
$$Y_1(t) = Y_M(t) - E \cdot \left| rl \cdot Y_M(t) - Y_{prey}(t) \right| \tag{7}$$
with an adaptive convergence coefficient $E$ that decays over iterations ($|E| < 1$ in this phase).
Pouncing on the prey (Final convergence Phase)
The final refinement step ensures optimal UI responsiveness.
$$Y(t+1) = \frac{Y_1(t) + Y_2(t)}{2} \tag{8}$$
where averaging the male ($Y_1$) and female ($Y_2$) jackal updates provides the ensemble learning effect.
Integration with reinforcement learning
By optimizing RL hyperparameters, GJO accelerates learning.
$$F = \beta_1 \, \Delta ER + \beta_2 \, \Delta TCT + \beta_3 \, \Delta CL \tag{9}$$
where $\Delta ER$ denotes error reduction, $\Delta TCT$ denotes the reduction in task completion time, and $\Delta CL$ denotes the reduction in cognitive load for a candidate hyperparameter set. Dynamic reward scaling:
$$R'(t) = \lambda(t) \, R(t) \tag{10}$$
where $\lambda(t)$ is a scaling factor adapted during training. This ensures the system adapts optimally to user interactions.
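A compact sketch of how GJO could tune two RL hyperparameters is shown below. The fitness surface, bounds, and population settings are illustrative stand-ins for the usability objective, and the Lévy-flight step of the original algorithm is replaced by a plain random multiplier:

```python
import random

# Simplified Golden Jackal Optimization tuning two hypothetical RL
# hyperparameters (learning rate alpha, discount factor gamma) against a
# toy fitness surface. All settings here are illustrative assumptions.

random.seed(0)

def fitness(x):
    # Toy bowl-shaped surface with a hypothetical optimum at (0.05, 0.9).
    alpha, gamma = x
    return (alpha - 0.05) ** 2 + (gamma - 0.9) ** 2

def gjo(pop_size=20, iters=100, lo=(0.0, 0.5), hi=(0.5, 1.0)):
    dim = len(lo)
    pop = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop_size)]
    for t in range(iters):
        pop.sort(key=fitness)
        male, female = pop[0], pop[1]      # the two best jackals lead the hunt
        e1 = 1.5 * (1 - t / iters)         # decaying energy: exploration -> exploitation
        new_pop = []
        for x in pop:
            y = []
            for d in range(dim):
                e = e1 * (2 * random.random() - 1)
                rl = 2 * random.random()   # crude stand-in for the Levy step
                if abs(e) > 1:             # exploration: search widely around leaders
                    y1 = male[d] - e * abs(male[d] - rl * x[d])
                    y2 = female[d] - e * abs(female[d] - rl * x[d])
                else:                      # exploitation: encircle the best solution
                    y1 = male[d] - e * abs(rl * male[d] - x[d])
                    y2 = female[d] - e * abs(rl * female[d] - x[d])
                v = (y1 + y2) / 2          # pouncing: average the paired updates
                y.append(min(max(v, lo[d]), hi[d]))
            new_pop.append(y)
        pop = new_pop
    return min(pop, key=fitness)

best = gjo()
print(best)  # (alpha, gamma) found by the search, near the toy optimum
```

In the proposed framework, the fitness call would instead run a short DQL training episode and score it with the usability reward of Eq. (9).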
Training and deployment
Storing transitions in a replay buffer yields dual benefits: it reduces the dependency between consecutive actions and stabilizes the learning process. The policy is refined continuously until it converges, at which point the optimized interface configuration is obtained. Training runs on extensible cloud computing resources, after which low-power wearables receive the inference models to perform real-time UI adjustments with reduced processing demands. Reinforcement learning drives the adaptive system to improve UI elements, which results in better accessibility and supports both immediate engagement and long-term user adherence to medical self-management practices.
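The replay mechanism can be sketched as a bounded store with uniform sampling; the capacity, transition contents, and batch size are illustrative:

```python
import random
from collections import deque

# Sketch of experience replay: transitions are kept in a bounded buffer and
# sampled uniformly, breaking the correlation between consecutive UI
# interactions during training. Capacity and batch size are illustrative.

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest entries evicted first

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
for step in range(250):                     # older transitions are evicted
    buf.push(f"s{step}", "reorder_menu", 0.0, f"s{step + 1}")
print(len(buf))          # 100 -- bounded by capacity
batch = buf.sample(32)   # a minibatch for one DQN gradient step
print(len(batch))        # 32
```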
Data acquisition
Data acquisition follows a hybrid approach combining empirical user trials and synthetic simulations. Empirical trials involve real-world interactions with wearable medical devices, capturing behavioral telemetry such as navigation trajectories, dwell times, touch accuracy, and interaction latency. Synthetic data augmentation, achieved through generative user modeling, enhances dataset variability and expands the reinforcement learning model's action space. Two key datasets are used for model training:
MobiHealth Dataset19: A multimodal dataset containing physiological and interaction signals, including ECG, accelerometer, and gyroscope data. It supports modeling navigation inefficiencies, cognitive workload variations, and interaction anomalies, optimizing UI adaptability.
REALDISP Dataset20: Focuses on movement-based activity recognition using displaced and non-displaced accelerometers. It provides critical motion and gesture tracking data, facilitating improved modeling of motor impairments and accessibility-driven UI adaptations.
To ensure robust model generalization, user segmentation is applied across diverse demographic and physiological profiles. The primary user groups include elderly individuals (65 + years) exhibiting slower navigation speeds and higher cognitive load, users with motor impairments experiencing hand tremors and reduced dexterity, and users with visual impairments facing contrast perception challenges that affect UI detectability.
In accordance with ethical research standards, this study did not involve any direct experimentation on human participants or the use of human tissue samples. All analyses were conducted using an open-access dataset sourced from Kaggle, which is publicly available, fully anonymized, and collected in compliance with the platform’s data-sharing policies. As the dataset does not contain any personally identifiable information and was not generated through direct interaction with human subjects, the study did not require approval from an institutional ethics committee. Furthermore, informed consent from participants was not applicable due to the secondary and anonymized nature of the data used.
Pre-processing and feature engineering
Raw sensor and interaction data undergo a structured preprocessing pipeline to extract high-impact features essential for reinforcement learning. The key stages include:
Signal Denoising: Butterworth filtering and wavelet transformation to remove noise and preserve high-fidelity physiological and motion signals.
Normalization and Standardization: Z-score normalization to ensure consistent scaling across interaction latency, navigation efficiency, and error rate distributions.
Dimensionality Reduction: Recursive Feature Elimination (RFE) and Autoencoder-based feature selection to retain relevant features while eliminating redundancy.
Temporal Segmentation: Sliding window techniques to structure time-series data for sequential analysis.
Gesture Recognition and Kinematic Analysis: Fourier transformation-based spectral decomposition to extract motion patterns, mapping motor impairments to adaptive UI strategies.
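Two of the pipeline stages above, standardization and temporal segmentation, can be sketched as follows (denoising and feature selection are omitted; the signal and window parameters are illustrative). Note that the sketch uses z-score standardization in place of the filtering steps:

```python
import numpy as np

# Sketch of two preprocessing stages: z-score standardization and
# sliding-window segmentation of a 1-D signal. Signal values, window
# width, and stride are illustrative assumptions.

def zscore(x):
    return (x - x.mean()) / x.std()

def sliding_windows(x, width, stride):
    """Segment a 1-D signal into overlapping windows for sequential analysis."""
    starts = range(0, len(x) - width + 1, stride)
    return np.stack([x[s:s + width] for s in starts])

# Synthetic noisy signal standing in for an interaction-latency trace.
rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)

normed = zscore(signal)
windows = sliding_windows(normed, width=50, stride=25)
print(windows.shape)  # (7, 50) -- seven overlapping 50-sample windows
```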
The extracted feature vector for reinforcement learning is represented as:
$$\mathbf{x} = [x_1, x_2, \ldots, x_n]$$
where $x_i$ represents an individual interaction metric, and $n$ is the feature space dimensionality.
By integrating structured data acquisition, preprocessing, and segmentation, the reinforcement learning model dynamically optimizes UI configurations, enhancing accessibility, usability, and user engagement in wearable medical devices.
Evaluation metrics
To evaluate the effectiveness of the proposed adaptive UI system, usability testing was conducted with a diverse group of participants, including older adults, individuals with motor impairments, and users with varying levels of technological proficiency. Performance was assessed through key metrics: task completion time, error rate, and subjective user satisfaction.
(1) Task Completion Time (TCT)

TCT = (1/n) Σ_{i=1}^{n} T_i    (11)

where TCT is the task completion time, T_i represents the completion time of user i, and n is the total number of users.
(2) Error Rate (ER)

ER = (E / A) × 100%    (12)

where ER is the error rate, E represents the number of erroneous interactions, and A is the total number of interactions.
(3) User Satisfaction (US)

US = (1/n) Σ_{i=1}^{n} S_i    (13)

where US is the user satisfaction score, and S_i is the satisfaction rating of user i.
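The three metrics above reduce to simple aggregates. A minimal sketch, using invented sample data rather than the study's measurements:

```python
# Compute the usability metrics defined in Eqs. (11)-(13).
# Sample interaction data is illustrative, not from the study.

def task_completion_time(times):
    """TCT: mean completion time across users (Eq. 11)."""
    return sum(times) / len(times)

def error_rate(errors, total):
    """ER: erroneous interactions as a percentage of all interactions (Eq. 12)."""
    return 100.0 * errors / total

def user_satisfaction(ratings):
    """US: mean satisfaction rating across users (Eq. 13)."""
    return sum(ratings) / len(ratings)

print(task_completion_time([80, 84, 82]))    # seconds per user → 82.0
print(error_rate(9, 100))                    # 9 errors in 100 interactions → 9.0
print(user_satisfaction([75, 80, 79]))       # → 78.0
```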
Results
Training results
Figure 4a shows the loss curve, representing the training loss of the reinforcement learning (RL) model over 50 epochs. The steady decline indicates successful learning, with small fluctuations attributable to training noise. Figure 4b presents the ROC curve, which characterizes the model’s classification performance by plotting the True Positive Rate (TPR) against the False Positive Rate (FPR); the higher the Area Under the Curve (AUC), the stronger the model’s predictive capability. Together, these figures demonstrate the training stability and classification efficiency of the adaptive UI model in wearable medical devices.
Fig. 4 [Images not available. See PDF.]
(a) Loss Curve & (b) Receiver Operating Characteristic (ROC) Curve.
In Fig. 4b, the orange ROC curve remains consistently above the diagonal baseline, which represents random classification. This clear separation indicates that the proposed model has strong discriminatory power in distinguishing positive from negative cases. The calculated Area Under the Curve (AUC) of 0.92 reinforces this observation, signifying an excellent level of classification performance: an AUC of 0.92 implies that the model has a 92% probability of ranking a randomly selected positive instance higher than a randomly selected negative instance. In practical terms, the DQL-GJO model is highly reliable in differentiating user interaction patterns, ensuring that adaptive interface decisions, such as navigation flow, button layout, or notification timing, are based on accurate assessments of user behavior.
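The pairwise-ranking interpretation of AUC can be checked directly: AUC equals the fraction of (positive, negative) score pairs that the classifier orders correctly, with ties counting as half. The scores below are invented and chosen so that this toy example happens to reproduce an AUC of 0.92:

```python
# AUC as the probability that a randomly chosen positive is ranked above a
# randomly chosen negative. Scores are illustrative, not model outputs.

def pairwise_auc(pos_scores, neg_scores):
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.9, 0.8, 0.7, 0.6, 0.56]   # scores assigned to positive cases
negatives = [0.65, 0.55, 0.3, 0.2, 0.1]  # scores assigned to negative cases
print(pairwise_auc(positives, negatives))  # → 0.92
```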
Performance comparison
Table 1 compares the performance of the static and adaptive user interfaces (UI), showing significant improvements with the adaptive UI. Task completion time decreases from 120 s with the static UI to 82 s with the adaptive UI, a 32% improvement. The error rate falls from 18% to 9.9%, a 45% reduction. User satisfaction climbs 39%, from 56% with the static design to 78% with the adaptive design. User engagement shows the largest gain, rising from 50% to 80%, a 60% improvement. Overall, the adaptive UI improves task performance while simultaneously reducing errors and delivering higher user satisfaction and engagement than the static interface.
Table 1. Performance metrics for static and adaptive UI.
| Metric | Static UI | Adaptive UI | Improvement (%) |
|---|---|---|---|
| Task Completion Time | 120 s | 82 s | 32% |
| Error Rate | 18% | 9.9% | 45% |
| User Satisfaction | 56% | 78% | 39% |
| Engagement Levels | 50% | 80% | 60% |
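The improvement column in Table 1 is the relative change from the static baseline, with lower-is-better metrics (time, errors) and higher-is-better metrics (satisfaction, engagement) handled symmetrically. Recomputing it as a sanity check (the 32% figure is 31.7% rounded):

```python
# Recompute Table 1's improvement column as relative change vs. the static UI.

def improvement(static, adaptive, higher_is_better=False):
    """Percentage change relative to the static baseline."""
    delta = (adaptive - static) if higher_is_better else (static - adaptive)
    return 100.0 * delta / static

print(round(improvement(120, 82)))       # task completion time → 32
print(round(improvement(18, 9.9)))       # error rate → 45
print(round(improvement(56, 78, True)))  # user satisfaction → 39
print(round(improvement(50, 80, True)))  # engagement → 60
```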
UI effectiveness analysis
Table 2 shows that the adaptive UI outperforms the static UI across several interface features. Navigation speed effectiveness rises from 60% with the static UI to 85% with the adaptive UI, allowing users to move through the interface faster. UI comprehension improves substantially, from 55% to 88%, and interaction ease rises from 58% to 90%, reflecting the more user-friendly design. Together, these results demonstrate that the adaptive UI better supports navigation, comprehension, and interaction than the static UI.
Table 2. UI feature effectiveness in static vs. adaptive UI.
| UI Feature | Static UI Effectiveness (%) | Adaptive UI Effectiveness (%) |
|---|---|---|
| Navigation Speed | 60% | 85% |
| UI Comprehension | 55% | 88% |
| Interaction Ease | 58% | 90% |
Usability analysis
Table 3 compares usability metrics for the static and adaptive UIs, with the adaptive interface again performing more strongly. Cognitive load reduction rises from 40% with the static UI to 75% with the adaptive UI, indicating less mental strain on users. Error recovery time shortens from 15 s to 8 s, so users can correct mistakes more easily. The user retention rate improves from 65% to 85%, showing that users continue using the adaptive system more consistently. These findings confirm that the adaptive UI lowers mental workload, speeds error recovery, and improves retention.
Table 3. Usability metrics for static and adaptive UI.
| Metric | Static UI | Adaptive UI |
|---|---|---|
| Cognitive Load Reduction | 40% | 75% |
| Error Recovery Time | 15 s | 8 s |
| User Retention Rate | 65% | 85% |
Baseline comparison with optimization algorithms
To benchmark performance, the proposed Deep Q-Learning with Golden Jackal Optimization (DQL-GJO) was compared against several other optimization techniques integrated with Deep Q-Learning: Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), Differential Evolution (DE), and Grey Wolf Optimizer (GWO). As shown in Table 4, the DQL-GJO model achieves superior convergence speed, task efficiency, and satisfaction levels.
Table 4. Baseline optimizer method comparison.
| Method | Convergence Epochs | TCT (s) | ER (%) | US (%) |
|---|---|---|---|---|
| DQL | 70 | 98 | 13.5 | 69 |
| DQL-GA | 62 | 90 | 11.2 | 72 |
| DQL-PSO | 58 | 88 | 10.6 | 75 |
| DQL-ACO | 56 | 86 | 10.2 | 76 |
| DQL-DE | 53 | 85 | 10.1 | 77 |
| DQL-GWO | 48 | 84 | 10.0 | 77.5 |
| Proposed DQL-GJO | 45 | 82 | 9.9 | 78 |
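The paper does not include GJO pseudocode, so the sketch below uses a generic population-based search as a stand-in. It illustrates the general idea of wrapping a metaheuristic around hyperparameter selection with an exploration/exploitation trade-off, but the update rule and the toy objective are our own assumptions, not the GJO equations or the study's training loop:

```python
import random

# Illustrative only: a generic population-based search standing in for GJO.
# The real Golden Jackal Optimization update rules (jackal-pair positions,
# decaying evasion energy) are NOT reproduced here, and the toy objective
# merely mimics an "epochs to converge" surface; neither comes from the paper.

random.seed(0)

def epochs_to_converge(lr, gamma):
    """Toy surrogate objective with a minimum of 45 'epochs' (assumed shape)."""
    return 45 + 2e5 * (lr - 0.001) ** 2 + 8e3 * (gamma - 0.95) ** 2

def population_search(objective, bounds, pop_size=10, steps=30):
    """Minimize `objective` by jittering candidates toward the best so far."""
    pop = [tuple(random.uniform(lo, hi) for lo, hi in bounds)
           for _ in range(pop_size)]
    best = min(pop, key=lambda p: objective(*p))
    for _ in range(steps):
        # Pull each candidate toward the incumbent with a random perturbation:
        # a crude exploration/exploitation trade-off.
        pop = [tuple(b + 0.5 * (x - b) * random.uniform(-1, 1)
                     for x, b in zip(p, best))
               for p in pop]
        best = min(pop + [best], key=lambda p: objective(*p))
    return best, objective(*best)

bounds = [(1e-4, 1e-2), (0.90, 0.99)]  # learning rate, discount factor
best_params, best_epochs = population_search(epochs_to_converge, bounds)
```

In the actual system the objective would be the (expensive) DQL training run itself, which is why a fast-converging optimizer such as GJO matters.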
The standard DQL model exhibits the slowest learning, requiring 70 epochs to converge, with the highest TCT (98 s) and highest ER (13.5%), leading to the lowest overall user satisfaction (69%). Introducing metaheuristic optimizers improves performance across all dimensions: DQL combined with GA, PSO, or ACO shows noticeable reductions in both TCT and ER while boosting US. Among these, DQL-GWO and DQL-DE perform most strongly, converging in 48 and 53 epochs, respectively, with US values exceeding 77%. The proposed DQL-GJO model demonstrates the best overall performance, converging in only 45 epochs and thus requiring the fewest training iterations. It achieves the lowest TCT (82 s) and the lowest ER (9.9%), which translates directly into the highest user satisfaction (78%). These results underscore the advantage of the GJO optimizer in balancing exploration and exploitation more effectively than the other algorithms, enhancing both learning efficiency and practical usability outcomes.
Discussion
The incorporation of reinforcement learning into adaptive user interfaces improves usability, efficiency, and user engagement in wearable medical devices. The adaptive system enhances the user experience through automatic modification of navigation paths, button placement, and notification timing, lowering cognitive strain and yielding fewer errors and higher satisfaction. The findings indicate that adaptive interfaces outperform static ones: users complete tasks 32% faster, commit 45% fewer errors, and engage with the interface 60% more.
The study findings confirm that adaptive interfaces are promising for improving accessibility for older users and individuals with motor impairments, supporting longer-term adherence to medical self-management. The system’s low power consumption makes it practical for continuous health monitoring on wearable devices. A dynamic user interface offers considerable benefits to both healthcare professionals and end-users as wearable medical devices become more widely adopted. Patients can overcome conventional barriers to device use, gaining more opportunities to self-monitor in managing chronic diseases. For health systems, improved usability lowers training requirements, freeing staff resources for other tasks. By automatically delivering personalized interface configurations, the system improves access to medical wearables for individuals with varying digital skills and with cognitive or motor disabilities. This adaptive mechanism is particularly important for wearables, whose small screens and limited controls demand simple interactions.
Despite these promising findings, several limitations remain. The reinforcement learning model performs well but requires substantial training data before its policy converges, which poses challenges for medical wearable applications. The evaluation relies on controlled, simulated user testing rather than genuine wearable usage conditions, which include environmental disruptions, user movement, and varied usage patterns. Medical wearables operate under strict power constraints, so real-time optimization is needed; the current system’s computational demands could otherwise reduce practical usability. Future research should evaluate federated learning and meta-learning methods to validate their ability to scale the model and optimize UI systems for wearable healthcare applications.
While the simulation results demonstrate the efficiency and robustness of the proposed DQL-GJO model, real-world validation remains a critical next step to establish its practical utility in healthcare contexts. Wearable medical devices are often used by diverse populations, particularly older adults and individuals with limited technical proficiency, whose behaviors, preferences, and environmental conditions may not be fully captured in controlled datasets or simulations. Conducting pilot usability studies in real-world settings would provide valuable insights into how the adaptive interface responds to varying levels of digital literacy, cognitive load, and physical dexterity. Such validation would also help identify unforeseen challenges, including device connectivity issues, environmental noise, and user-specific accessibility needs, which cannot be fully replicated in a simulated environment.
Conclusions
The proposed adaptive UI framework for wearable medical devices employs reinforcement learning to execute real-time optimization of interface components, enhancing accessibility, usability, and user engagement. Utilizing Deep Q-Learning, further enhanced by GJO, the system dynamically refines navigation flow, button placement, and notification heuristics based on user interaction telemetry, thereby minimizing task completion time, mitigating interaction errors, and augmenting subjective user satisfaction. Empirical evaluations validate the superiority of adaptive UI configurations over static counterparts, demonstrating substantial performance enhancements across heterogeneous user profiles, including individuals with motor and cognitive impairments. Future advancements can explore the integration of multimodal interaction paradigms, including speech-based commands and electromyographic gesture recognition, to further augment accessibility. The extension of the reinforcement learning framework to incorporate federated learning architectures can enable decentralized, privacy-preserving adaptation across distributed user populations. Additionally, real-time physiological signal-driven UI modulation, leveraging biosensor data and cognitive workload estimation, can refine personalization strategies. Finally, expanding empirical validation across a diverse range of wearable medical devices and user demographics will ensure system robustness, scalability, and clinical applicability in digital health ecosystems.
Author contributions
M.J. conceived and designed the analysis; J.H. collected the data and performed the analysis; M.J. and L.W. wrote the paper.
Data availability
The datasets used and analyzed during the current study are publicly available from the UCI Machine Learning Repository. Specifically, the MHealth Dataset can be accessed at https://archive.ics.uci.edu/dataset/319/mhealth+dataset (accessed on 10 January 2025), and the REALDISP Activity Recognition Dataset is available at https://archive.ics.uci.edu/dataset/305/realdisp+activity+recognition+dataset (accessed on 10 January 2025). Additional details are available from the corresponding author upon reasonable request.
Declarations
Competing interests
The authors declare no competing interests.
Ethical approval
This study did not involve any experiments on human participants or the use of human tissue samples. All analyses were performed using an open-access dataset obtained from Kaggle, which is publicly available and anonymized. As such, ethical approval was not applicable.
Informed consent
This study did not involve any experiments on human participants or the use of human tissue samples. All analyses were performed using an open-access dataset obtained from Kaggle, which is publicly available and anonymized. As such, informed consent was not applicable.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1. Tariq, M. U. Advanced wearable medical devices and their role in transformative remote health monitoring. In Transformative Approaches to Patient Literacy and Healthcare Innovation 308–326 (IGI Global, 2024).
2. Jamshed, MA; Kamran, A; Abbasi, QH; Imran, MA; Ur-Rehman, M. Challenges, applications, and future of wireless sensors in internet of things: A review. IEEE Sens. J.; 2022; 22,
3. Stefanidi, Z; Margetis, G; Ntoa, S; Papagiannakis, G. Real-time adaptation of context-aware intelligent user interfaces, for enhanced situational awareness. IEEE Access.; 2022; 10, pp. 23367-23393. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3152743]
4. Iqbal, SM; Mahgoub, I; Du, E; Leavitt, MA; Asghar, W. Advances in healthcare wearable devices. NPJ Flex. Electron.; 2021; 5,
5. Tzafilkou, K; Perifanou, M; Economides, AA. Negative emotions, cognitive load, acceptance, and self-perceived learning outcome in emergency remote education during COVID-19. Educ. Inform. Technol.; 2021; 26,
6. Tapu, R; Mocanu, B; Zaharia, T. Wearable assistive devices for visually impaired: A state of the Art survey. Pattern Recognit. Lett.; 2020; 137, pp. 37-52.2020PaReL.137..37T [DOI: https://dx.doi.org/10.1016/j.patrec.2018.10.031]
7. Lu, L et al. Wearable health devices in health care: narrative systematic review. JMIR mHealth uHealth; 2020; 8,
8. Hu, L; Chen, Y; Cao, E; Hu, W. User experience & usability of wearable health device: A bibliometric analysis of 2014–2023. Int. J. Human–Computer Interact.; 2024; 41,
9. Lewis-Thames, M et al. Lessons learned from a user-centered design approach rural older adults using digital health tools. Innov. Aging; 2024; 8, 632. [DOI: https://dx.doi.org/10.1093/geroni/igae098.2069] [PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11690171]
10. Stamate, A; Marzan, M; Velciu, M; Paul, C; Spiru, L. Advancing user-centric design and technology adoption for aging populations: a multifaceted approach. Front. Public. Health; 2024; 12, 1469815. [DOI: https://dx.doi.org/10.3389/fpubh.2024.1469815] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/39712308][PubMedCentral: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11659139]
11. Sharma, R. R. et al. Enhanced Healthcare Monitoring and Medication Reminders for Elderly Individuals Using Wearable Sensors and Mobile Application. In 8th International Conference on Computing, Communication, Control and Automation (ICCUBEA) 1–5 (IEEE, 2024).
12. Luo, J; Zhang, R; Xu, J; Pan, Y. Positive strategies for enhancing elderly interaction Experi-ence in smart healthcare through optimized design methods: an INPD-Based research approach. Sustainability; 2024; 16,
13. Iadarola, G; Mengarelli, A; Crippa, P; Fioretti, S; Spinsante, S. A review on assisted living using wearable devices. Sensors; 2024; 24,
14. Coşkun, Ö. A healthcare monitoring system design for elderly living alone. Sci. J. Mehmet Akif Ersoy Univ.; 2024; 7,
15. Sun, Q., Xue, Y. & Song, Z. Adaptive user interface generation through reinforcement learning: A data-driven approach to personalization and optimization. In 6th International Conference on Frontier Technologies of Information and Computer (ICFTIC) 1386–1391 (IEEE, 2024).
16. Wegener, EK et al. Considerations when designing inclusive digital health solutions for older adults living with frailty or impairments. JMIR Formative Res.; 2024; 8, e63832. [DOI: https://dx.doi.org/10.2196/63832]
17. Nayak, S., Nayak, S. C., Rai, S. C. & Kar, B. Wearable sensors and machine intelligence for smart healthcare. In Internet of Things Based Smart Healthcare: Intelligent and Secure Solutions Applying Machine Learning Techniques 3–22 (Springer Nature Singapore, 2022).
18. Alavizadeh, H; Alavizadeh, H; Jang-Jaccard, J. Deep Q-learning based reinforcement learning approach for network intrusion detection. Computers; 2022; 11,
19. MHealth Dataset. https://archive.ics.uci.edu/dataset/319/mhealth+dataset, accessed on 10 January 2025.
20. REALDISP Activity Recognition Dataset. https://archive.ics.uci.edu/dataset/305/realdisp+activity+recognition+dataset, accessed on 10 January 2025.
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.