Content area
To address the high costs of optimizing service design and the one-dimensional evaluation of service touchpoints, in this study, a service touchpoint optimization process that is based on the experiential memory model is developed and applied to an online meeting platform. First, the target user group is identified. Through user behavior analysis and interface disassembly, the main service touchpoints are obtained. The effectiveness, efficiency, and satisfaction of each service touchpoint are initially assessed via user usability tests, and an experience score is derived. A memory score is subsequently determined on the basis of the intensity of users’ memories of the service touchpoints. These two scores are then combined to construct an experiential memory model, which is used to identify the service touchpoints that require improvement. The model is then applied to conduct in-depth analyses of the actual demands of users and to formulate corresponding optimization schemes. Eventually, on the basis of user behavior patterns and optimization strategies, a prototype of an optimized design is developed. The case analysis reveals that the improved experiential memory model that is developed in this research can effectively discriminate the key service touchpoints and reduce the design workload. Moreover, the proposed service touchpoint optimization process can enhance the user experience of online meeting platforms, thus providing a reference for research on online meeting optimization and service touchpoint evaluation.
Introduction
Service design has become a hot research topic in recent years. Service touchpoints are the key elements in service design1,2. He et al.3 introduced user emotional valence to assess touchpoints in online education services on the basis of user experience and explored a scientific and systematic approach for evaluating these touchpoints, thus providing valuable insights for enhancing the design and service quality of online education platforms. Peng et al.4 proposed a methodology for evaluating touchpoints between products and services in sustainable transportation modes, with the aim of enhancing the user experience and service quality. Key touchpoints are identified by capturing them and conducting relevance analysis during user‒product interactions. This methodology is then used to assess the risk of failures in these touchpoints, thus contributing to the establishment of new service touchpoints. Fang et al.5 developed a model for evaluating service experiences that identifies problematic service touchpoints and provides recommendations for future improvements. This study serves as a valuable reference for enhancing the overall online shopping experience. Hu et al.6 conducted user experience evaluations and identified the key factors that influence digital exhibitions. They then proposed suggestions for enhancing the visitor experience and service quality. The service process optimization method that was proposed by Bai et al.1 is based on the association of service touchpoints and the design structure matrix (DSM). By identifying modules and layering service touchpoints, they successfully replanned and optimized a complex service process from two perspectives: individual service touchpoints and service modularization.
The aforementioned research has provided valuable insights into service touchpoint optimization; however, most studies have relied primarily on user experience as the basis of analysis, which has resulted in limited evaluation dimensions and high optimization costs. Moreover, an effective method for systematically evaluating and ranking service touchpoints is lacking. Against this backdrop, in the present study, the “experiential memory model” is introduced, and by integrating theories such as usability testing and the Ebbinghaus forgetting curve, a service touchpoint optimization process that is grounded in this model is developed. The aims are to evaluate service touchpoints from a multidimensional perspective, identify and optimize critical touchpoints, and propose improvement strategies that are both theoretically meaningful and practically feasible, thereby reducing the design workload while addressing the limitations of methods from the literature on service touchpoint evaluation.
Research model and framework
Correlation theory
The interdisciplinary nature of user experience is pronounced, with variations in its definition across different domains. Norman7 divided experience into layers of instinct, behavior, and reflection. Garrett8 proposed five layers of user experience, namely, the strategy layer, scope layer, structure layer, frame layer, and surface layer. Robinson et al.9 elucidated the distinction between emotional experience and recall from a memory-based psychological perspective and demonstrated that emotional experiences are typically ephemeral, which renders them increasingly ambiguous when individuals recollect them. Zhao10 summarized the extant research priorities and proposed three stages of the user experience process: expectation, progression, and influence; the author posited that each link of the experience has a different effect on the product and introduced memory into the “influence” element of user experience. Memory, which is an innate response, is subject to the influence of both physical and emotional factors11. Thus, understanding the mechanisms that underlie memory can aid designers in creating user-friendly products, optimizing workflow efficiency, and enhancing product usability. Memory assessment has been extensively applied across a multitude of fields, such as psychology12, information technology13, and video games14. This widespread use attests to the viability of memory assessment. Furthermore, in the studies cited in references 9 and 10, scholars have integrated both experiences and memories into their research frameworks. Given these precedents, introducing memory assessment into the domain of service design is justifiable. Specifically, researchers can explore the possibility of evaluating services by leveraging experiences and memories.
Usability testing, a core research and practice method in the field of user experience (UX), assesses whether a product meets the needs and expectations of users through software usage scenarios15. It accurately reflects the system level and helps designers understand user needs and enhance the overall user experience9. In accordance with the definition in ISO 9241-11 (2018), usability encompasses three core dimensions: effectiveness (the accuracy and completeness with which users achieve their goals), efficiency (the amount of resources that are required to achieve the goals), and satisfaction (the subjective comfort of and acceptability to users when using the product)16. Studies have shown that usability testing of systems can improve user satisfaction17, reduce support costs18, and determine improvement directions19, ultimately ensuring the success of a product in a highly competitive market. Therefore, in this article, the experience aspect of a product is assessed through usability testing.
The Ebbinghaus forgetting curve, which was proposed by the German psychologist Hermann Ebbinghaus, laid the foundation for quantitatively understanding the temporal decay of memory20,21. The curve demonstrates that newly acquired information is forgotten most rapidly during the initial stage, after which the rate of forgetting gradually slows and stabilizes within approximately two days. The fundamental pattern of forgetting that is revealed by this curve, along with subsequent theoretical extensions such as the “spacing effect,” has been widely applied not only in psychology but also in informed practices such as calculating users’ interest retention22 and enhancing the relevance of social recommendations23. These applications have contributed to the improvement of user experience with digital products. Accordingly, in this study, the Ebbinghaus forgetting curve is used to assess memory retention following users’ experiences, with the aim of providing a new perspective for the evaluation of service touchpoints.
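The forgetting curve is often summarized with an exponential approximation. The sketch below illustrates why the survey in this study can be scheduled on the second day: the retention function R(t) = e^(-t/S) and the stability constant S are illustrative modeling assumptions, not values fitted in this paper.

```python
import math

def retention(t_hours: float, stability: float = 20.0) -> float:
    """Approximate Ebbinghaus retention R = exp(-t/S).

    t_hours:   time elapsed since learning, in hours.
    stability: memory stability S (illustrative placeholder value);
               a larger S means slower forgetting.
    """
    return math.exp(-t_hours / stability)

# Retention drops fastest at first, then flattens: the loss during
# the second day exceeds the loss during the fourth day, so by about
# two days the day-to-day change is already small.
loss_day2 = retention(24) - retention(48)
loss_day4 = retention(72) - retention(96)
assert loss_day2 > loss_day4  # forgetting decelerates over time
```

The deceleration shown by the final assertion is the qualitative property that motivates measuring memory once the curve has stabilized, rather than immediately after use.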
Improvement of the experiential memory model
On the basis of years of service experience and existing user experience evaluation methods, Haokang Company proposed an experiential memory model24 to assist staff in screening touchpoints for optimization and applied it in several real-world service cases. The experiential memory model incorporates both the immediate impression of a service and the subsequent recall, thereby enabling a dual evaluation mechanism that comprehensively reflects users’ authentic experiences. The model categorizes experience into five levels and memory into four levels, resulting in a total of six level regions. However, this model exhibits excessive level differentiation without specific data support during the evaluation process, which leads to an inconsistent division of experience and memory levels and can generate ambiguous service evaluation results. This inconsistency hampers subsequent optimization efforts. Therefore, in this study, the experiential memory model is improved to make it more suitable for service touchpoint assessment and optimization tasks. First, mathematical coordinates are used to quantify the evaluation results to provide designers with clear and effective data. Second, the evaluation levels of experience and memory are unified to ensure the consistency of the evaluation results. Finally, the classification of grade areas is simplified: the number of areas is decreased from 6 to 5, which is more in line with the common standard number for grade evaluation. The improved experiential memory model is illustrated in Fig. 1.
Fig. 1 [Images not available. See PDF.]
Improved experiential memory model.
The improved experiential memory model employs a Cartesian coordinate system with “memory” on the horizontal axis and “experience” on the vertical axis. The coordinate range is divided into five levels, from level A to level E; the higher the level is, the more refined the touchpoints in the corresponding area. Designers should prioritize service touchpoints with lower evaluation levels to expedite the design cycle and ensure overall harmony within the service system. The green zone represents Grade A; it includes the best service touchpoints, namely, those with the highest evaluations in terms of both user experience and memory. These touchpoints indicate excellent user experiences and strongly positively influence the overall service. The blue zone corresponds to Grade B; it includes touchpoints with relatively good scores in terms of experience and memory, which correspond to generally good user experiences and a positive contribution to the service system. The gray zone denotes Grade C; it represents touchpoints with the lowest memory scores, which suggests that these touchpoints are largely unnoticed and exert minimal effects on the overall service and thus may not require further intervention. The yellow zone corresponds to Grade D, where touchpoints receive lower experience scores but relatively high memory scores, thus indicating poor user experiences that are strongly remembered; these touchpoints exert negative effects on the service and warrant optimization. The red zone represents Grade E, which includes touchpoints with the lowest experience scores and the highest memory scores; these touchpoints represent the worst user experiences, leave a strong impression, and thus necessitate prioritized and intensive optimization.
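The zone assignment described above can be sketched as a simple lookup from a (memory, experience) pair to a grade. The numeric thresholds below are illustrative placeholders of ours; the actual zone boundaries are those drawn in Fig. 1.

```python
def grade(memory: float, experience: float) -> str:
    """Map a (memory, experience) pair to a grade A-E.

    Memory is scored on 0-4, experience on 1-5. The thresholds are
    illustrative assumptions, not the boundaries from Fig. 1.
    """
    if memory < 1.0:                         # gray zone: barely noticed
        return "C"
    if experience >= 4.0 and memory >= 3.0:  # green zone: best touchpoints
        return "A"
    if experience >= 3.0:                    # blue zone: relatively good
        return "B"
    if experience < 2.0 and memory >= 3.0:   # red zone: worst, vividly remembered
        return "E"
    return "D"                               # yellow zone: poor but remembered

assert grade(3.5, 4.5) == "A"   # great experience, strong memory
assert grade(0.5, 4.0) == "C"   # unnoticed regardless of experience
assert grade(3.5, 1.5) == "E"   # poor experience, strong memory
```

Note the ordering of the checks: the gray zone is tested first because, per the model, a touchpoint that users barely notice is classified by its low memory score regardless of its experience score.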
Touchpoints of the online meeting optimization process
By employing tools and methods such as user profiling and behavioral tracking and drawing on theories such as usability testing and the Ebbinghaus forgetting curve, in this study, a service touchpoint optimization process that is grounded in an experiential memory model is developed. The steps of this process are illustrated in detail in Fig. 2.
Fig. 2 [Images not available. See PDF.]
Touchpoint optimization process based on the experiential memory model.
The first stage is experience information acquisition. The target users of the online meeting platform are determined through a field investigation, and a comprehensive user profile is constructed. Each target user is then closely monitored, and the user’s behavioral operations are meticulously recorded to acquire the service touchpoints. Finally, the user experience is assessed through usability testing, which yields a score for each service touchpoint experience.
The second stage is memory evaluation and experience integration. First, the degree of user memory is initially determined, and a service touchpoint memory score is subsequently derived through computational analysis. The experience score and memory score are then combined to construct an experiential memory model, which yields the service touchpoint level. Finally, on the basis of the classification results, the service touchpoints that require optimization are determined, and an output for the optimization strategy is generated.
The third stage is service touchpoint concept output. In this stage, the optimization strategy guides the development of an online platform service touchpoint optimization scheme, which focuses on enhancing the interactions between these two types of users to increase the level of engagement and improve the overall service system.
Case verification
Experience information acquisition
Drawing upon an analysis of download data, market penetration, and user statistics that pertain to a network meeting platform in China, Tencent Meeting is presented as a case study for service touchpoint optimization.
Target user identification
The basic information and usage patterns of Tencent Meeting users were extensively collected through a questionnaire survey. Representative data from various users were summarized, and two user profiles, namely, “speaker user” and “audience user,” were established on the basis of the primary behavioral characteristics that were observed in Tencent Meeting usage (Fig. 3). Subsequent research focused on these two types of users.
Fig. 3 [Images not available. See PDF.]
Tencent Meeting user portraits.
According to Fig. 3, the primary objectives of “speaker users” who use Tencent Meeting are to optimize cost efficiency by reducing labor and material expenses, overcome communication impediments that arise from geographical limitations, and simultaneously accommodate an unlimited number of participants. The primary behaviors include content sharing, meeting management, meeting creation, and meeting chairing. The most frequently utilized functionalities include screen sharing, voice communication, virtual backgrounds, and meeting administration. The primary requirements include user-friendly software, extensive meeting functionalities, and adaptable participation options. The primary objectives of “audience users” who use Tencent Meeting are to efficiently acquire the necessary content, facilitate real-time collaboration and communication, and optimize time utilization. Common functionalities include textual communication, note-taking capabilities, interactive feedback mechanisms, and data preservation features. The primary activities include documenting the proceedings, posing inquiries to the presenter, and exchanging and preserving pertinent information. The requirements are swift login capabilities, seamless audio and screen playback, real-time communication with the speaker, portable recording of meeting information, and convenient review of meetings at any time. As shown in the “Common Features” section, Tencent Meeting is widely used in various scenarios. The most common application is internal meetings, followed by distance education. Among the users, 88.3% opt for online meeting products on the basis of the organization’s designation, while 68.1% choose them on the basis of personal preferences. The majority of users conduct online meetings at least once a week, while only 11.9% of users do so less than once a week.
Service touchpoint acquisition
Ten users were selected from each of the two categories to participate in an online meeting. Their usage processes were carefully monitored and analyzed, and the behaviors of the users during three different stages (i.e., meeting preparation, during the meeting, and at the end of the meeting), as well as the service touchpoints used, were recorded. Moreover, the Tencent Meeting interface was analyzed, and additional service touchpoints beyond those derived from user behavior were added. For more details, please refer to Fig. 4. In the “Behavior” section, orange represents the actions of the speaker users, blue represents the actions of the audience users, and the black lines represent the common behaviors that were exhibited by both types of users. Furthermore, adhering to interaction design principles, the Tencent Meeting interface was deconstructed into seven primary categories with a total of 39 more detailed secondary interfaces. By integrating the functions and establishing the appropriate corresponding relationships, the relevant service touchpoints were obtained. Finally, the service touchpoints were integrated and encoded, thus forming a set of 28 main service touchpoints. These touchpoints are shown in the last row of Fig. 4, and the color of the dot above each touchpoint indicates the source of the touchpoint: orange indicates that speaker user behavior was the source, blue indicates that audience user behavior was the source, and green indicates that the interface analysis was the source.
Fig. 4 [Images not available. See PDF.]
Tencent Meeting service touchpoints.
Experience assessment
Usability testing was conducted to obtain user experience scores for the Tencent Meeting platform. ISO 9241-11 (2018)16 defines usability as the extent to which a system, product, or service can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. Therefore, in this study, the experience score was assessed on the basis of three indicators: effectiveness, efficiency, and satisfaction. A five-point Likert scale questionnaire was distributed to Tencent Meeting users to collect satisfaction evaluations regarding the various service touchpoints. On the scale, a score of 1 indicates “very dissatisfied”, whereas a score of 5 indicates “very satisfied”. The average score across all users was calculated to represent the satisfaction level for each touchpoint and was used in the subsequent calculation of the total experience score. A total of 198 valid responses were collected. From these respondents, 5 “speaker” participants and 15 “audience” participants were selected to participate in the usability test, all of whom had prior experience using Tencent Meeting. All the personnel who were recruited for this study were adults, and informed consent was obtained from all the participants. The specific steps for usability testing were as follows.
Table 1. Usability testing tasks.
| Task type | Number | Specific content | Touchpoints |
|---|---|---|---|
| (A) Speaker users’ task | A1 | Schedule a meeting that starts in 30 min according to the following requirements. 1. Meeting name: Usability test + your name. 2. Set the meeting to recur weekly until its conclusion in October. 3. Set the password to 1234. 4. Automatically activate the cloud recording feature for every meeting. | (T1) Schedule; (T2) Meeting setup; (T3) Link number |
| | A2 | Share the meeting in the designated group chats and attend it according to the following requirements. 1. Change your name to “Tencent User”. 2. Turn on the camera after entering. 3. Replace the background image with the image on the computer desktop. 4. Set up a 5-minute registration period. | (T4) Calendar; (T5) Invitation; (T6) Personal information; (T7) Join meeting; (T9) Function set |
| | A3 | Share a PowerPoint file from the computer desktop via screen sharing. Choose a specific page and spend 30 s reading it. Respond vocally to written inquiries. End screen sharing. | (T11) Share screen; (T12) Audio and video settings; (T13) Chat feature; (T22) Stop sharing |
| | A4 | Mute all participants. Unmute “Staff 2”. | (T14) Meeting management; (T15) Participants |
| | A5 | Upload the PowerPoint file to the meeting. | (T16) Share data |
| | A6 | End and cancel all scheduled meetings. | (T23) End meeting; (T27) Leave meeting |
| | A7 | Export the check-in information to the desktop. | (T25) Meeting details; (T26) Derive information |
| | A8 | Share meeting videos that were recorded in the cloud with the participants. | (T19) Record; (T24) Past meeting |
| (B) Audience users’ task | B1 | Enter the meeting according to the following requirements (link number: 884 5431 7066; password: 1234). 1. Change the name to “Participant”. 2. Automatically mute the microphone and disable video at the start of the meeting. 3. Sign in after the meeting. | (T3) Link number; (T7) Join meeting; (T8) Premeeting setup; (T12) Audio and video settings; (T17) Pop-ups |
| | B2 | Watch the shared screen and take notes. Vocally answer the on-screen questions. | (T13) Chat feature; (T20) Interface layout; (T21) Information note |
| | B3 | Complete interactive tasks with additional features. | (T10) Additional application |
| | B4 | Respond to the end of the sharing session with a round of applause. Leave the meeting. | (T18) Emoticon; (T27) Leave meeting |
| | B5 | Save the documents that were uploaded by staff members to the desktop. | (T25) Meeting details; (T28) Download data |
| | B6 | Delete the meeting record. | (T24) Past meeting |
Developing test tasks. On the basis of the usage processes that were observed during the previous tracking, test tasks were designed separately for the two types of users. Task A (the “Speaker users’ task”) consisted of 8 items for speaker users, and Task B (the “Audience users’ task”) consisted of 6 items for audience users (Table 1). Together, these tasks covered all of the service touchpoints.
Pretests and time limits. Prior to the main experiment, a pretest was conducted with five interaction designers who frequently used Tencent Meeting. The time spent on each task and touchpoint was recorded. Their average operation time was used as the benchmark for proficient performance. A time limit of three times the proficient time was set for each task. In the subsequent experiments, if participants completed all tasks within the specified time, the test was regarded as “completely completed”; if they completed only some tasks within the specified time, it was regarded as “partially completed”; if they completed the tasks beyond the specified time or were unable to complete them at all, it was regarded as “task failure”. Some touchpoints were used in multiple tasks, in which case the average time for multiple operations was used as the test result.
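The completion rules above can be sketched as a small classification routine; this is a minimal illustration under the stated rules, and the function name and data layout are our own.

```python
def classify(task_times, limits):
    """Classify one participant's run against per-task time limits.

    task_times: recorded completion time per task, or None for a task
                the participant could not finish at all.
    limits:     the per-task limit (three times the proficient time).

    Returns 'completely completed' if every task finished within its
    limit, 'partially completed' if only some did, and 'task failure'
    if none did.
    """
    done = [t is not None and t <= lim for t, lim in zip(task_times, limits)]
    if all(done):
        return "completely completed"
    if any(done):
        return "partially completed"
    return "task failure"

assert classify([10, 20], [30, 30]) == "completely completed"
assert classify([10, 40], [30, 30]) == "partially completed"   # one over limit
assert classify([40, None], [30, 30]) == "task failure"        # none within limit
```

For a touchpoint used in multiple tasks, the averaging rule in the text would then be applied to the recorded times before scoring.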
User testing. The researchers explained the experimental task and procedure to the participants, after which a total of 20 participants were recruited to independently complete the test tasks. The entire experimental process was recorded to facilitate subsequent data analysis. The application environment was a Windows 11 computer system, and Tencent Meeting software version v3.19.30 was used.
Data analysis. The research team analyzed the recorded task completion and operation times from the screen recordings, calculated the effectiveness and efficiency metrics via Eqs. (1) and (2), respectively, and assessed satisfaction on the basis of the emotional scale ratings. The overall experience score was calculated by combining these three indicators via Eq. (3). The formulas are as follows25:
$$F_i = \left(\frac{U_i^{c} + 0.5\,U_i^{p}}{U}\right) \times 4 + 1 \qquad (1)$$

where $F_i$ represents the effectiveness score of contact $i$ (the completion rate is converted to the 1-to-5 rating standard), $U$ represents the total number of users who participated in the test, $U_i^{c}$ represents the number of users who successfully completed all the tasks related to contact $i$, and $U_i^{p}$ represents the number of users who partially completed the tasks related to contact $i$.
$$E_i = 5 - 2\left(\frac{\bar{T}_i}{T_i^{0}} - 1\right) \qquad (2)$$

where $E_i$ represents the efficiency score of contact $i$, $\bar{T}_i$ denotes the average time taken by all the participants to complete the tasks related to contact $i$, and $T_i^{0}$ represents the proficient time taken to complete the tasks related to contact $i$; thus, the proficient time maps to a score of 5, and the three-times time limit maps to a score of 1.
$$X_i = \frac{F_i + E_i + S_i}{3} \qquad (3)$$

where $X_i$ represents the experience score of contact $i$ and $S_i$ indicates the satisfaction of users with contact $i$, which is derived from the previous questionnaire survey.
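The scoring procedure can be sketched in code as follows. The formulas are our reading of the textual definitions (a completion-weighted effectiveness rescaled to 1-5, a time-ratio efficiency anchored so that the proficient time scores 5 and the three-times limit scores 1, and the mean of the three indicators), so treat this as an interpretation rather than the authors' exact implementation.

```python
def effectiveness(completed: int, partial: int, total_users: int) -> float:
    """Eq. (1): completion rate, with partial completions weighted 0.5,
    linearly rescaled from [0, 1] to the 1-5 rating standard."""
    rate = (completed + 0.5 * partial) / total_users
    return 1 + 4 * rate

def efficiency(mean_time: float, proficient_time: float) -> float:
    """Eq. (2): linear mapping of the time ratio to 1-5, anchored so the
    proficient time scores 5 and the 3x time limit scores 1 (our reading
    of the pretest benchmark); clamped to the scale as a practical guard."""
    score = 5 - 2 * (mean_time / proficient_time - 1)
    return max(1.0, min(5.0, score))

def experience(eff: float, effi: float, satisfaction: float) -> float:
    """Eq. (3): overall experience as the mean of the three indicators."""
    return (eff + effi + satisfaction) / 3

# All 20 users fully complete, the mean time equals the proficient time,
# and satisfaction is 4.2, so the experience score lands near the top.
f = effectiveness(20, 0, 20)   # -> 5.0
e = efficiency(30.0, 30.0)     # -> 5.0
assert abs(experience(f, e, 4.2) - 4.733) < 1e-2
```

Averaging the three indicators keeps the overall score on the same 1-to-5 scale as its components, which is what allows the experience axis of the model to be read directly against the satisfaction questionnaire.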
Via these formulas, the experience scores of all the service touchpoints were calculated; they are shown in Fig. 5. For the complete dataset that was used in this analysis, please refer to the supplementary material (Test_data.xlsx).
Fig. 5 [Images not available. See PDF.]
User experience scores.
In Fig. 5, the yellow bars represent the overall experience scores of the service touchpoints, and the three blue lines represent effectiveness, efficiency, and satisfaction. The left vertical axis corresponds to the overall experience score, and the right vertical axis corresponds to effectiveness, efficiency, and satisfaction. According to Fig. 5, the T7 service touchpoint had the highest experience score, which indicated the optimal user experience for this particular service touchpoint; 15 service touchpoints (T1–T6, T8, T15–T17, T19, T22–T23, T25, T27) had relatively high experience scores, which indicated positive user experiences; nine service touchpoints (T9–T13, T18, T20, T24, T26) had lower experience scores, which indicated poor user experiences; and three service touchpoints (T14, T21, T28) had the lowest experience scores, which indicated that the poorest user experiences were associated with these specific touchpoints. In total, the experience scores of 12 service touchpoints were rated as “poor” or “poorest”, which indicated that these 12 service touchpoints needed to be optimized.
Integration of memory evaluation into experience
Memory assessment
The users’ memories of the service touchpoints of the online meeting platform were assessed via a questionnaire. According to the Ebbinghaus forgetting curve, the rate at which users forget information tends to stabilize on the second day after using Tencent Meeting. Therefore, the memory assessment was conducted on the second day following the usability test, and the users were instructed to abstain from using Tencent Meeting during this interval. The specific procedures were as follows. First, memory was categorized into five levels on a Likert scale, with integers from 0 to 4 representing the score values for the respective levels: 0 denotes “complete absence of recollection”, whereas 4 signifies “exceptionally vivid remembrance”. Second, the participants were asked to complete the questionnaire truthfully in accordance with their own memory experiences. Finally, the collected questionnaires were analyzed to assess the reliability and validity of the data, and the mean value was computed as the assessment outcome for service touchpoint memory; a total of 205 questionnaires were collected. The reliability of the questionnaire was examined via Cronbach’s α coefficient; a value of 0.779 was obtained, which indicated a satisfactory level of internal consistency. Validity was assessed via the Kaiser‒Meyer‒Olkin (KMO) measure and Bartlett’s test of sphericity. The KMO value was 0.708, which suggested that the data were suitable for factor analysis, and the p value of Bartlett’s test was less than 0.005, which indicated that the questionnaire passed the test of sphericity. The touchpoint memory assessment results are shown in Fig. 6. For the complete dataset that was used in this analysis, please refer to the supplementary material (Test_data.xlsx).
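The reported Cronbach's α can be reproduced from a raw respondents-by-items score matrix with the standard formula α = k/(k-1) · (1 - Σ var_item / var_total); the helper below and its toy data are ours, used only to illustrate the computation.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a respondents x items score matrix.

    items: list of rows, one row of item scores per respondent.
    Uses population variances throughout.
    """
    k = len(items[0])   # number of items
    n = len(items)      # number of respondents (unused beyond variance)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every respondent gives the same score
# to every item) yield the maximum alpha of 1:
data = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
assert abs(cronbach_alpha(data) - 1.0) < 1e-9
```

An α of 0.779, as reported above, exceeds the commonly used 0.7 threshold for acceptable internal consistency.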
Fig. 6 [Images not available. See PDF.]
Touchpoint memory scores.
According to Fig. 6, eight service touchpoints (T6–T8, T12–T13, T15, T23, T27) had memory scores in the range of [3,4], which indicated that the users had a profound recollection of the functional and operational details that pertained to these service touchpoints during their use; 13 service touchpoints (T1–T3, T5, T9–T11, T14, T16–T17, T22, T24, T28) had memory scores in the range of [2,3), which indicated that the users had slightly forgotten the information that pertained to these service touchpoints; two service touchpoints (T4, T25) had memory scores in the range of [1,2), which indicated a greater tendency for users to forget the information that pertained to these service touchpoints; and five service touchpoints (T18–T21, T26) had memory scores in the range of [0,1), which indicated that the users barely noticed these touchpoints during operation.
Comprehensive evaluation
The experiential memory model is depicted in Fig. 7 on the basis of the service touchpoint experience scores and memory scores, with the divisions indicated.
Fig. 7 [Images not available. See PDF.]
Experiential memory model for Tencent Meeting.
According to Figs. 1 and 7, touchpoint T7 was at level A, which was the highest level among all the service touchpoints, and was classified as “the best touchpoint”; this touchpoint provided the users with the best experience and memory, and optimization was not needed. Furthermore, 14 touchpoints (T1–T6, T8, T15–T17, T22–T23, T25, T27) were at level B and belonged to the “relatively good touchpoints” category, which indicated favorable user evaluations of these touchpoints. Five touchpoints (T18–T21, T26) were at level C and fell under the category of “noninductive touchpoints”; the users had no significant memory of these touchpoints, thus resulting in minimal effects on the overall service system, so selective optimization on the basis of specific circumstances is feasible. Six touchpoints (T9–T13, T24) were at level D and were categorized as “relatively poor touchpoints”; these touchpoints presented numerous issues that significantly affected both the users and the service system, and their prompt optimization was recommended. Finally, two service touchpoints (T14, T28) were at level E and fell under the category of “the worst touchpoints”; these touchpoints had the lowest scores and exerted the most significant effects on both the users and the service system, thus necessitating immediate optimization. The research team opted to optimize the service touchpoints at levels E and D. Compared with the results in Fig. 5, the number of service touchpoints for optimization was reduced from 12 to 8, which corresponded to an approximate 33% reduction in the design workload.
Optimization strategy
Targeted interviews were conducted with users to identify the reasons for the low scores of the eight touchpoints that were selected for optimization, and the users’ pain points and demands were discovered. By classifying these issues, the results were transformed into optimization strategies, as shown in Fig. 8.
Fig. 8 [Images not available. See PDF.]
Optimized Tencent Meeting service touchpoint system.
First, a total of eight user pain points were identified for the low-level touchpoints that needed optimization: the mute operation is delayed; the background customization button is difficult to find; the button is presumed to be located at the top; there is no message notification; the key settings are confusing; the microphone is forgotten or incorrectly turned on; the procedure is too difficult; and data search is difficult. The lines between the low-level service touchpoints and the user pain points indicate the correspondences between them. Among the eight pain points, four belong to the “speaker user” perspective, two belong to the “audience user” perspective, and two are shared by both types of users. Second, the user pain points were further classified and summarized into three main issues: insufficient visual feedback, inaccurate distribution positioning, and incorrect operation procedures. Finally, a total of six optimization strategies were generated for the three types of problems, namely, adding feedback animations, using an eye-catching display mode, changing the key positions, rezoning, simplifying the steps, and resetting the function operation. These optimization strategies can then guide the implementation of specific project outcomes on the basis of their respective goals.
Service touchpoint concept output
Speaker user perspective
The optimization of service touchpoints from the perspective of speaker users is illustrated in Fig. 9. According to the optimization strategy, the solution for enhancing touchpoint T14 (meeting management) involves adding a delay icon when managing the voices of members to provide users with visual feedback (“add feedback animation”). Additionally, the lower action button should be relocated to its customary upper position (“rezoning”). The solution for touchpoint T9 (function set) is to relocate the custom button to the bottom of the image and increase the size of the custom icon (“change key position”). With respect to touchpoints T11 (screen sharing), T12 (audio and video), and T13 (chat feature), the proposed solution entails incorporating visual feedback for information prompts within the shared state, as well as introducing an information preview box upon expanding the operation panel to facilitate timely access to relevant information for the speaker (“eye-catching display mode”). Additionally, the automatic activation of the microphone at the onset of sharing should be accompanied by a textual reminder that alerts users about its status (“reset function operation”).
Fig. 9 [Images not available. See PDF.]
Service touchpoint optimization from the perspective of speaker users.
Audience user perspective
The optimization of service touchpoints from the perspective of audience users is illustrated in Fig. 10. The proposed solution for touchpoints T10 (additional application) and T12 (audio and video settings) is to relocate the “Settings” button to the left side of the interface, alongside the audio and camera options (“change key position”). Moreover, the addition of a confirmation screen when the microphone is activated effectively reduces error rates (“reset function operation”). The proposed solution for touchpoint T28 (download data) entails incorporating “Download” and “Batch Download” buttons into the document list interface, which simplifies the process of saving operations. Additionally, users would be empowered to conveniently download multiple meeting documents simultaneously (“simplify steps”). The proposed solution for T24 (past meetings) entails the addition of a date search function to the upper-right corner of the historical meeting page to enhance user searchability (“reset function operation”).
Fig. 10 [Images not available. See PDF.]
Service touchpoint optimization from the perspective of audience users.
Summary
In this study, a service touchpoint optimization process that is based on the experiential memory model was developed and applied to the case of an online meeting platform. Two types of user profiles were constructed, and 28 key service touchpoints were identified through user behavior analysis and interface decomposition. Usability testing was then employed to conduct a user experience evaluation, which, in combination with memory evaluation, enabled the construction of an experiential memory model. The results indicated that eight service touchpoints required optimization. Compared with traditional experience evaluation, the number of touchpoints that were identified for optimization was reduced by approximately 33%. Subsequently, user interviews were conducted to identify pain points that were associated with the low-ranking touchpoints, which led to the development of six targeted optimization strategies and the creation of design prototypes.
Conclusions
To address the challenges of high optimization costs in service design and the limited dimensions of service touchpoint evaluation, in this study, a service touchpoint optimization process that is based on an experiential memory model is proposed. The process begins by employing theoretical approaches such as user profiling, behavioral tracking, and interaction design principles to collect key service touchpoints. Subsequently, usability testing is conducted to generate experience scores for each touchpoint, while memory scores are determined according to the strength of users’ recollections of these service encounters. By integrating these two dimensions, an experiential memory model is established to identify low-ranking touchpoints that require improvement. Finally, the pain points that arise from these touchpoints are analyzed, and corresponding optimization strategies are formulated.
At the theoretical level, the experiential memory model is advanced in this study by refining its structure to achieve greater standardization and applicability in service touchpoint evaluation. By integrating ISO international standards for experience assessment and the Ebbinghaus forgetting curve for memory evaluation, a more comprehensive framework is established. Furthermore, three computational formulas are introduced to resolve the challenges of calculating usability scores and related indicators, thereby enabling a systematic construction of the experiential memory model and a rigorous evaluation of service touchpoints. The innovative incorporation of the “memory” dimension into service evaluation not only transcends the traditional user experience-oriented perspective but also contributes a novel analytical framework and methodological approach for service touchpoint research.
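The paper’s three computational formulas are not reproduced in this section; as a minimal sketch of the memory dimension only, the classical Ebbinghaus-style retention function R(t) = exp(−t/S) illustrates how recollection strength decays with time since the interaction. The stability parameter S and the time values below are hypothetical.

```python
import math

# Sketch of the Ebbinghaus-style forgetting curve underlying the memory
# evaluation: R(t) = exp(-t / S), where t is the time elapsed since the
# touchpoint interaction and S is a stability parameter. The value
# S = 24 hours is a hypothetical choice for illustration only.
def retention(t_hours: float, stability: float = 24.0) -> float:
    """Expected fraction of the touchpoint interaction a user still recalls."""
    return math.exp(-t_hours / stability)

# Memory fades quickly at first and then levels off: recall one day after
# the interaction is far below recall after one hour.
print(round(retention(1.0), 2), round(retention(24.0), 2))  # → 0.96 0.37
```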
At the practical level, the proposed service touchpoint optimization process, which is grounded in the experiential memory model, was applied to the domain of online meeting platforms in this study. This application provides developers with a feasible reference framework to better address users’ actual needs, particularly in terms of optimizing service processes. Validation via a case study demonstrated that the proposed process offers significant advantages in the evaluation and optimization of service touchpoints: It not only resolves the issue of suboptimal user experience in online meeting platforms but also effectively reduces the workload of service optimization.
Nevertheless, this study has certain limitations. Owing to the constraints of time and equipment, the research was confined to computer-based platforms, with a limited number of participants, which may have left some issues in the experience process undetected. Moreover, the experiential memory model was applied only within the context of online meetings. Future research could extend the validation of the model across multiple devices and diverse scenarios and further expand the set of experience evaluation metrics to develop a more generalizable and multidimensional toolkit for service design.
Author contributions
X.H. provided the research idea and the purpose of this research; A.T. designed the study, analyzed the data, produced the design output, and prepared the initial draft of the paper; and X.H. supervised, corrected, and revised this paper. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the Key R&D Project of the Science and Technology Department of Shaanxi Province (Grant No. 2023GXLH-067) and the Shaanxi Province Social Science Fund Annual Project (Grant No. 2023J016).
Data availability
All the data that were generated or analyzed during this study are included in this published article (and its Supplementary Information files).
Declarations
Competing interests
The authors declare no competing interests.
Ethics declarations
This research was approved by the School of Design and Art of Shaanxi University of Science and Technology, which confirmed that all the research was conducted in accordance with relevant regulations. All the participants who were recruited for this study were adults, and informed consent was obtained from all the participants. We guarantee that the research was conducted in accordance with the Declaration of Helsinki.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1. Bai, Z., Liu, C., Sun, H. & Ding, M. Complex service process optimization based on service touchpoint association and the design structure matrix. Complexity 1–19 (2021).
2. Leinonen, A. & Roto, V. Service design handover to user experience design – a systematic literature review. Inf. Softw. Technol. 154, 107087 (2023). https://doi.org/10.1016/j.infsof.2022.107087
3. He, X. & Song, N. Emotional value in online education: a framework for service touchpoint assessment. Sustainability 15, 4772 (2023). https://doi.org/10.3390/su15064772
4. Peng, Q., Wang, W., Yang, X., Wang, Y. & Chen, J. Research on affective interaction in mini public transport based on IPA-FMEA. Sustainability 15, 7033 (2023). https://doi.org/10.3390/su15097033
5. Fang, T. & Sun, H. Research on experience evaluation of Taobao shopping platform service. In HCI in Business, Government and Organizations (HCIBGO 2021) (eds Nah, F. F.-H. & Siau, K.) 43–54 (Springer, 2021). https://doi.org/10.1007/978-3-030-77750-0_3
6. Hu, L. et al. Research on the design of online museum exhibition system based on user experience evaluation and eye movement experiment analysis. Displays 90, 103107 (2025). https://doi.org/10.1016/j.displa.2025.103107
7. Norman, D. A. Emotional Design. Perché amiamo (o odiamo) gli oggetti della vita quotidiana (Apogeo Editore, 2004).
8. Garrett, J. J. The Elements of User Experience: User-Centered Design for the Web and Beyond (New Riders, 2011).
9. Robinson, M. D. & Clore, G. L. Belief and feeling: evidence for an accessibility model of emotional self-report. Psychol. Bull. 128, 934–960 (2002). https://doi.org/10.1037/0033-2909.128.6.934
10. Zhao, W. Research on User Experience Elements Based on Internet Products (Jiangnan University, 2015).
11. Costanzi, M. et al. Forgetting unwanted memories: active forgetting and implications for the development of psychological disorders. J. Pers. Med. 11, 241 (2021). https://doi.org/10.3390/jpm11040241
12. Gopi, Y. & Madan, C. R. Subjective memory measures: metamemory questionnaires currently in use. Q. J. Exp. Psychol. 77, 924–942 (2023). https://doi.org/10.1177/17470218231183855
13. Liang, X. et al. Self-evolving agents with reflective and memory-augmented abilities. Neurocomputing 647, 130470 (2025). https://doi.org/10.1016/j.neucom.2025.130470
14. Lobato-Camacho, F. J., López, J. C. & Vargas, J. P. Virtual reality evaluation of the spatial learning strategies in gamers. Multimed. Tools Appl. 83, 38127–38144 (2023). https://doi.org/10.1007/s11042-023-17177-w
15. Rutter, S., Zamani, E., McKenna-Aspell, J. & Wang, Y. Embedding equality, diversity and inclusion in usability testing: recommendations and a research agenda. Int. J. Hum.-Comput. Stud. 188, 103278 (2024). https://doi.org/10.1016/j.ijhcs.2024.103278
16. ISO 9241-210:2019 Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (International Organization for Standardization, 2019).
17. Weichbroth, P. Usability testing of mobile applications: a methodological framework. Appl. Sci. 14, 1792 (2024). https://doi.org/10.3390/app14051792
18. Villamañe, M. & Alvarez, A. Facilitating and automating usability testing of educational technologies. Comput. Appl. Eng. Educ. 32, e22725 (2024). https://doi.org/10.1002/cae.22725
19. Jarvis, T., Mah, A. M. L., Wang, R. H. & Wilson, M. G. Web-based system navigation database to support equitable access to assistive technology: usability testing study. JMIR Form. Res. 6, e36949 (2022). https://doi.org/10.2196/36949
20. Murre, J. M. J. & Chessa, A. G. Why Ebbinghaus’ savings method from 1885 is a very ‘pure’ measure of memory performance. Psychon. Bull. Rev. 30, 303–307 (2023). https://doi.org/10.3758/s13423-022-02172-3
21. Murre, J. M. J. & Dros, J. Replication and analysis of Ebbinghaus’ forgetting curve. PLoS ONE 10, e0120644 (2015). https://doi.org/10.1371/journal.pone.0120644
22. Zhang, K., Chu, D., Tu, Z., Liu, X. & Zhang, B. LSTM-UBI: a user behavior inertia based recommendation method. Multimed. Tools Appl. 83, 69227–69248 (2024). https://doi.org/10.1007/s11042-024-18256-2
23. Chen, H. et al. Incorporating forgetting curve and memory replay for evolving socially-aware recommendation. Inf. Process. Manag. 62, 104070 (2025). https://doi.org/10.1016/j.ipm.2025.104070
24. He, X., Tian, A., Song, N. & Zeng, J. Service touchpoint optimization strategy based on experiential memory model. Packag. Eng. 45, 129–136 (2024).
25. Zheng, S., Wang, Z. M., Fang, X. & Deng, J. Usability testing: task evaluation model and measurement method. Die Bian–Mobile User Experience 15–20 (2018).