1. Introduction
The automotive industry’s requirements are becoming increasingly complex and sophisticated with the development of novel security systems, comfort features, and new designs. To deal with these requirements, more data processing capacity is needed to support technologies like autonomous driving and Advanced Driver Assistance Systems (ADAS) [1], as well as the safety-critical systems that make vehicles safer. Most vehicles contain complex networks of Electronic Control Units (ECUs), computers and processing units that exchange information continuously. These units have become increasingly complex in both hardware and software, with Volvo cars requiring around 100 million lines of code derived from approximately 100,000 functional requirements [2]. To keep up with this trend, automotive manufacturers have started using off-the-shelf multicore microcontrollers, commonly referred to as Multiprocessor Systems-on-a-Chip (MPSoCs), in ECUs. These controllers enable new functions, parallel processing, and increased performance, making it easier to meet safety requirements such as those of ISO 26262 [3].
Despite these advancements, significant challenges remain in efficiently managing inter-core communication and maintaining the predictability of data flow in multicore systems. Traditional software designed for single-core systems incorporates communication methods to exchange data between tasks in preemptive and non-preemptive setups. However, when a microcontroller has multiple cores, different parts of the software need to be distributed and executed by parallel cores [4]. This situation presents new challenges for event chains, as tasks can be distributed across different cores, making it difficult to maintain the predictability of data flow without contention mechanisms. Several studies have proposed solutions for inter-core communication in embedded systems, yet many are tailored to specific domains such as avionics and automotive applications, where timing is critical, particularly in safety-related use cases.
In light of these needs, we hypothesize that an inter-core communication model, combined with a synchronization mechanism for heterogeneous systems, will help reduce the latency caused by the overlap of operations. This paper proposes a solution that merges the predictability of the Logical Execution Time (LET) model, applied to inter-task communication, with the composability of time-controlled buffers using a Time-Division Multiple Access (TDMA) scheme for inter-core communication. This approach ensures consistent latency and temporal determinism in core-to-core communication [5]. It also reduces the need to bind applications to specific cores, facilitating the creation of less dependent event chains. Moreover, by setting temporal intervals for data transfer between cores, deterministic data flows can be modeled. The portability of this methodology is crucial, as it is not tied to specific hardware implementations, allowing its use across various platforms. However, scheduling mechanisms and peripheral resources, such as DMA, are outside the scope of this work and represent areas of interest for future research.
The main contributions of this work are as follows:
We propose a solution that merges the predictability of Logical Execution Time (LET) applied to inter-task communication with the composability of time-controlled buffers.
We utilize a TDMA scheme for inter-core communication to ensure consistent latency and temporal determinism in core-to-core communication.
Our proposal reduces the need to bind applications to specific cores, facilitating the creation of less dependent event chains.
This work is structured as follows. Section 2 presents the related works. Section 3 presents the communication strategies in multicore systems, beginning with a discussion on explicit communication and its problems, such as variable latency and lack of determinism. Section 4 introduces the methodology for implementing LET and TDMA, detailing how these models combine to improve predictability and data consistency in multicore systems, and presents a real-time implementation using the CYT2B75 Traveo II Entry Family platform. Section 5 discusses the results obtained from applying the LET-TDMA method in different processor configurations, illustrating the latency and data transfer behavior. Section 6 discusses how the implementation of LET and TDMA benefits communication in multicore systems by reducing latency and improving determinism and analyzes the results obtained. Finally, Section 7 presents the conclusions, emphasizing this study’s contributions and suggesting areas for future research.
2. Related Works
Communication in single-core systems has traditionally relied on methods designed to exchange data between tasks in preemptive and non-preemptive configurations. These systems use processing flows that range from data acquisition from sensors to the actuation phases, known as event chains [6]. However, with the evolution toward multicore systems, software development techniques had to adapt to take advantage of distributed processing. Assigning functions and tasks to different cores can significantly influence the performance of application control, especially due to communication through shared variables among multiple tasks assigned to different cores [7]. To facilitate this communication, various models and methods have been developed in both software and hardware architectures [8].
One of the proposed techniques for inter-core communication is the use of asynchronous cyclic buffers, which can ensure fresh data upon the first write operation of a buffer to all of its consumer tasks, as proposed by Toscanelli in [9]. However, implementing this solution in multicore environments requires caution, as data consistency can only be protected if consumers retrieve data promptly. In such cases, the Cyclic Asynchronous Buffer (CAB) mechanism might have to wait until all consumers have read the oldest data to avoid inconsistencies. Real-time embedded systems require fast and accurate responses to events, as well as deterministic behavior due to their time-critical and safety-critical nature [10], especially in the automotive domain. Martinez et al. [7] identified three models, each with its own characteristics and applications: explicit, implicit, and Logical Execution Time (LET). Before the prevalence of systems with multiple processors, developers utilized various techniques to ensure predictable behavior in real-time applications. One such method, the Logical Execution Time approach, was introduced to address the needs of time-critical applications requiring events to happen in a specific order. The LET model guarantees consistent behavior in event sequences by establishing fixed processing times from input to output, independent of actual task execution times [11].
While LET provides a solid framework for predictability and event chains in time-critical systems, the emergence of multicore systems and the increasing complexity of applications require a more flexible approach to communication and resource management. In this context, a message-based communication approach was proposed in [12,13,14], which implements contention-based protocols with synchronous and asynchronous data transfer using the main memory shared by all cores or ScratchPad memories (SPMs). Such an approach might lead to a degradation of response time and performance since latency between data exchanges could depend on the priority of tasks, similar to what happens in single-core solutions. Urbina [15] proposed an enhanced approach by introducing the Time-Triggered Message-Based Architecture (TIMEA), which is heavily based on Network-on-a-Chip (NoC); thus, it cannot be easily ported to other platforms, despite its integration with AUTOSAR and hardware acceleration features. In their study, Beckert et al. [16] introduced heuristic methods that employ shared memory buffers along with communication models such as implicit communication and Logical Execution Time.
Shirvani et al. [17] proposed a novel hybrid heuristic-based list scheduling algorithm. Its innovative approach to task prioritization, VM slot time optimization, and task duplication enabled efficient scheduling of dependent tasks in heterogeneous cloud computing environments, leading to improved makespan optimization and overall performance enhancement. A similar study by Seifhosseini et al. [18] introduced a novel scheduling failure factor (SFF) model. It formulates the scheduling problem as a multi-objective optimization issue and utilizes a discrete gray-wolf optimization algorithm (MoDGWA) to efficiently solve the combinatorial problem. The performance of the proposed algorithm was validated in terms of makespan, total cost, reliability, and cost score reduction. Such methods are helpful in handling concurrent access to shared memory. However, the problem with shared memory buffer solutions is that the worst-case response time can lead to situations where tasks use outdated data, which can affect the performance of real-time algorithms.
On the other hand, Soliman et al. [19] and Tabish et al. [20] explored hardware-specific solutions such as ScratchPad Memories (SPMs) and Direct Memory Access (DMA) for scheduling mechanisms. These mechanisms employ time-based schemes like TDMA to temporally isolate the entire task and data exchange processes. Although these solutions are highly efficient, they are highly dependent on SPMs, which limits their portability. Furthermore, their approach focused on scheduling mechanisms that allocate both task code and data in the SPM, increasing dependency due to varying SPM sizes across different controllers. The study by Bellassai et al. [21] presented a significant contribution to the field of LET implementation in POSIX (Portable Operating System Interface) systems. This research focused on real-time tasks that utilized topic-based messaging and the producer–consumer paradigm. The system model ran on multicore platforms with global memory and ensured deterministic behavior in control applications. The LET paradigm mandated input and output operations at the beginning and end of a specified period. To achieve this, the researchers designed communication mechanisms and protocols that were integrated into dynamic systems compatible with POSIX.
Gemlau et al. [22] extended this concept to the system level (SL LET) in automotive systems, ensuring predictable and composable times. While the traditional usage of LET was limited to individual components, SL LET expanded this approach to a more systemic level. The methodology translated the AUTOSAR functional model into an implementation model that met the requirements, representing the system as a set of LET tasks. Furthermore, Gemlau et al. [23] addressed the challenges of programming cyber-physical systems in high-performance architectures by applying the LET paradigm and its extension. These systems, which monitor and control physical processes, face complexities due to hardware heterogeneity. The methodology focused on the need for proper programming for these architectures, which resemble parallel programming in high-performance computing but with time and security requirements. Kang’s research [24] focused on programming deep learning applications on embedded devices with heterogeneous processors. In the same year, Mirsaeid [25] proposed a hybrid algorithm for programming scientific workflows in heterogeneous distributed computing systems. Verucchi et al. [26] proposed optimizing latency in embedded automotive and aviation systems, emphasizing the application of the LET paradigm in characterizing end-to-end latency.
Another interesting approach, proposed by Noorian et al. [27], lies in the development and application of the hybrid GASA algorithm for workflow scheduling optimization in cloud heterogeneous platforms. This novel and effective solution combines the strengths of different meta-heuristic approaches to enhance the efficiency and performance of scheduling algorithms in distributed computing environments. These research papers show the significance and flexibility of the LET paradigm across different technologies. They also demonstrate its essential role in developing critical systems and real-time applications.
3. Communication Strategies in Multicore Systems
In real-time multicore microcontrollers, software components are typically allocated to each core, presenting an additional challenge for inter-task communication, primarily when tasks are assigned to different cores. Each processing core operates independently and may have different clock configurations, leading to unsynchronized task execution within the same period. Furthermore, varying initialization conditions, such as the number of data blocks that need to be loaded from Non-Volatile Memory (NVM) to Random Access Memory (RAM) at startup, can result in different core initialization times. In this context, data shared between cores are read from and written to a shared memory section at any point during task execution. According to [6], this behavior exemplifies explicit communication, as illustrated in Figure 1.
According to Becker et al. [28], with this paradigm, tasks directly access the shared register to read its value or write a new value to the register. This means that whenever the code requires a read or write operation on the variable, the shared register is accessed. This results in uncertainty since the exact timing of the register access depends on the task’s specific execution path.
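As an illustration, the following minimal C sketch (hypothetical task bodies and signal names, not taken from the cited works) shows explicit communication: both tasks dereference the shared variable directly, so the instant of each access depends on the execution path of the task.

```c
#include <stdint.h>

/* Shared variable accessed directly by tasks on different cores (explicit
 * communication): every read/write in the code touches the shared location. */
volatile uint16_t shared_speed;

/* Hypothetical producer task, activated every 10 ms on core 0. The write to
 * shared_speed happens somewhere along the execution path, so its exact
 * instant varies with branches, preemption, and interrupts. */
void producer_task_10ms(uint16_t sensor_sample)
{
    if (sensor_sample != 0u) {
        shared_speed = sensor_sample;       /* direct write to shared memory */
    }
}

/* Hypothetical consumer task, activated every 10 ms on core 1. Each access
 * dereferences the shared location again, so two reads within the same
 * activation may observe different values if the producer writes in between. */
uint16_t consumer_task_10ms(void)
{
    uint16_t first  = shared_speed;         /* direct read */
    uint16_t second = shared_speed;         /* may differ from 'first' */
    return (uint16_t)((first + second) / 2u);  /* placeholder use of the data */
}
```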
This model presents different performance issues, leading to problems such as the following:
Sampling jitter;
Sample loss;
Lack of determinism in event chains;
Variable data exchange latency.
The data shown in Figure 2 demonstrate the challenges of transferring data between cores, which can lead to a loss of signal resolution and potentially affect the performance of application algorithms. For this example, the modeling was conducted in Simulink’s SoC Blockset, a modeling and simulation environment. The timing reference for the example is managed by the simulation setup, starting at 0 at the beginning of the execution.
In this scenario, a producer task was designed to run at a fixed rate of 10 ms on the primary core, triggered by an external event representing any scheduler event or an ISR in the operating system. To simulate unpredictable delays caused by external events like task preemption, ISRs, or execution jitter, the data output was generated at random times. The output was written to a shared buffer without any synchronization mechanisms, meaning that write operations could occur at any time during execution. A similar consumer task was designed to run on a secondary core at a fixed rate of 10 ms, also triggered by an external event. The consumer task read the input data from the shared buffer at the beginning of its execution.
Both tasks were designed to run concurrently in parallel core models and exchanged data at different points in time, which affected predictability. Data flow consistency is compromised when write operations occur after read operations, leading to the loss of data samples. This degradation can significantly impact software performance, especially in critical applications where composability and predictability are essential, such as in safety-critical systems.
3.1. LET in Multicore Systems
For single-core systems, similar issues arise, as mentioned in Section 1. For example, if a provider task exhibits jittering frequency, consumer tasks may lose samples, thereby affecting the performance of the algorithms they execute. To address this issue, Logical Execution Time (LET) was developed. Predictability is crucial for time-critical applications, as it enables the modeling and optimization of event chains. Modeling event chains helps determine the order and scheduling of tasks, thus minimizing sampling jitter and improving the performance of algorithm execution. In the realm of real-time multicore systems, the LET model stands out as a technique for ensuring system predictability and temporal isolation. Initially introduced with Giotto, a time-triggered programming language, the LET model addresses concurrency issues with its straightforward strategies and time determinism, thereby improving system predictability and simplifying certification processes [29].

The fundamental principle of the LET model involves setting fixed times for operations on memory resources, significantly reducing contention over shared resources and thus enhancing overall system performance. This enhancement is particularly beneficial, as it streamlines both the system design and the analytical processes. Moreover, the LET model plays an essential role in managing the complexities of concurrent access to shared memory, which is especially critical in multicore environments. By mitigating these complexities, the LET model greatly enhances system robustness and reliability. This is particularly vital in applications where precision and timely responses are essential, such as safety-critical and time-sensitive operations. The impact of the LET model extends beyond performance enhancement; it introduces a structured, efficient framework for resource utilization. This optimization is critical in multicore systems, where the coordination of multiple cores requires a balance between resource allocation and execution efficiency. Ultimately, the LET model delivers predictable and efficient execution of tasks, making it an indispensable tool in real-time multicore system design.
In a real-time system, the tasks that must be executed are defined. Each task has a specific purpose and is designed to be executed within a set time. The LET process is as follows:
Assignment of a LET to a Task: A task requiring a Logical Execution Time (LET) needs data consistency and coherency during its execution. This is crucial when the task is part of an event chain where data must remain consistent and predictable throughout the chain. These tasks should have periodic executions, independent of their preemption characteristics. The LET assigned to such tasks defines the period during which they must perform their operations on shared memory. The LET is a fixed and predictable period that should match the rate of the task activation.
Start of the LET Period: At the beginning of the LET period, read operations to shared memory are performed before the task starts its execution. This start is typically triggered by a system clock or an external event. Data read from shared memory are stored in the local context of the task, enabling it to perform operations locally.
Execution of the Task’s Logic: During the LET, the task executes its logic, which may include data processing, decision-making, or interaction with other system components. During this time, output data that need to be written to shared memory are stored in local buffers to avoid contention.
Completion of Execution: The task must complete its execution within the assigned LET period. If the task finishes before the LET period expires, it remains suspended until its next activation period. At the end of the LET, the output data are written to shared memory, ensuring data consistency and predictability throughout the system. Once the current LET ends, the next LET period begins, either for the same task or for a group of tasks in the system.
Figure 3 shows a visual representation to understand the operational dynamics of the LET model.
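To make the four steps above concrete, the following C sketch outlines one LET period for a single task under simple assumptions: the hook names and the placeholder computation are hypothetical, and a full implementation would defer the output write to the end of the period (for example, via the next timer tick) even if the task logic finishes early.

```c
#include <stdint.h>

/* Shared memory visible to other tasks (and, later, to other cores). */
static volatile int32_t shared_input;
static volatile int32_t shared_output;

/* Local context of the task: inputs are frozen here at the start of the LET
 * period and outputs are buffered here until the period ends. */
typedef struct {
    int32_t input_copy;
    int32_t output_buffer;
} let_context_t;

static let_context_t ctx;

/* Step 2: start of the LET period, read shared inputs into the local context. */
static void let_period_start(void)
{
    ctx.input_copy = shared_input;
}

/* Step 3: the task logic operates only on local data, so its actual execution
 * time and preemption pattern do not change when results become visible. */
static void task_logic(void)
{
    ctx.output_buffer = ctx.input_copy * 2;    /* placeholder computation */
}

/* Step 4: end of the LET period, publish the buffered outputs so that other
 * tasks always observe them at this fixed logical instant. */
static void let_period_end(void)
{
    shared_output = ctx.output_buffer;
}

/* Hypothetical 10 ms activation hook called by a timer ISR or OS scheduler.
 * In a full implementation, let_period_end() would be invoked at the end of
 * the period rather than immediately after the logic completes. */
void let_task_10ms(void)
{
    let_period_start();
    task_logic();
    let_period_end();
}
```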
While the LET model enhances predictability and data consistency in multicore real-time systems, it has some limitations, as it was initially designed for classical single-core real-time systems. The model does not account for the parallel execution of multiple tasks across different cores. Each core operates with an independent scheduler, resulting in read/write operations with different timings across cores, even when operations are fixed at the core task level. Figure 4 illustrates an example of various executions of different tasks allocated to different cores using Simulink’s SoC Blockset. This scenario was modeled for the producer task to start periodic execution every 10 ms at t = 0. The consumer periodic task was set to have a rate of 10 ms with a variable start time to emulate variable initialization times. Although the cycle time was fixed, the offset between the tasks on each core varied on each simulation run. These tasks included producer–consumer pairs, a simulation of real-time data acquisition tasks, and synchronized transmission tasks.
3.2. TDMA in Multicore Systems
Time-Division Multiple Access (TDMA) is a time-slot method widely used in communications and embedded systems. According to [30], TDMA is a time-triggered communication technology where the time axis is statically divided into slots, with each slot statically assigned to a component. A component is permitted to send a message only during its designated slot. Figure 5 illustrates a simple example of a single channel shared among four transmitter entities and four receiver entities to exchange data within a given time period T. The channel can be any transmission medium, such as a memory cell or a carrier signal.
This concept allows data-producing entities to utilize the full channel capacity to transmit information during a defined time interval, while consumer entities can access this information within the same timeframe. In a channel, information is only available during a specific period, necessitating time synchronization to ensure the desired data are transmitted to the intended recipients. This method is also resource-efficient, as multiple producers and consumers can share a single channel. In embedded systems, resources are limited and time constraints exist; therefore, TDMA has been widely adopted to address scheduling and resource management challenges. With this setup, emitters can transmit data at a fixed rate, while receivers can read data within the same time slots, facilitating communication synchronization and optimizing channel capacity by allowing multiple transmitters to use the same channel through time-slot multiplexing. TDMA has been proposed as a solution in several studies. For instance, Ref. [31] included it as part of a memory controller that allows both critical and non-critical applications to access memory without interference. Similarly, Ref. [19] suggested TDMA for overlapping the memory and execution phases of task segments. Figure 6 illustrates how data exchange is scheduled between tasks on different cores.
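In code, the slot assignment can be expressed as a static table indexed by time. The sketch below is illustrative only (four equal slots within a hypothetical 10 ms period T); it shows how an emitter checks whether it owns the slot that is active at a given instant, and receivers can consult the same table to know when the data they consume are valid on the channel.

```c
#include <stdint.h>
#include <stdbool.h>

#define TDMA_NUM_SLOTS   4u       /* four emitters sharing one channel */
#define TDMA_SLOT_US     2500u    /* hypothetical slot length: 2.5 ms */
#define TDMA_PERIOD_US   (TDMA_NUM_SLOTS * TDMA_SLOT_US)   /* period T = 10 ms */

/* Static assignment: slot index -> emitter identifier. */
static const uint8_t slot_owner[TDMA_NUM_SLOTS] = { 0u, 1u, 2u, 3u };

/* Returns true if 'emitter_id' owns the slot that is active at time 'now_us'. */
bool tdma_may_transmit(uint8_t emitter_id, uint32_t now_us)
{
    uint32_t offset_in_period = now_us % TDMA_PERIOD_US;
    uint32_t slot_index = offset_in_period / TDMA_SLOT_US;
    return slot_owner[slot_index] == emitter_id;
}
```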
4. Methodology
In time-critical systems, it is important to have predictable event chains in order to model and improve performance more effectively. However, real-time embedded multicore systems pose challenges because the processing cores run in parallel with different initialization times, core frequencies, and operating systems. This parallelism affects the performance of the LET model, as it does not consider the concurrent execution of tasks. The time between read and write operations can vary depending on each core’s events, such as initialization, interrupts, and preemption, leading to varying latency and reduced predictability of data flow. To address the issue of variable latency in inter-core communication, TDMA is proposed as a complement to the LET model. While the LET model handles read/write operations within a single processing core, TDMA can be used for inter-core data exchange, offering improved composability. TDMA is both predictable and composable.
Predictability is the ability to provide an upper bound on the timing properties of a system. Composability is the ability to integrate components while preserving their temporal properties. Although composability implies predictability, the opposite is not true. A round-robin arbiter, for instance, is predictable, but not composable. A TDM (Time Division Multiplexing) arbiter, on the other hand, is predictable and composable [31].
4.1. Implementation of LET Plus TDMA in Multicore Systems
In a multicore real-time system, write/read operations can be synchronized to the channel using any notification mechanism, enabling data exchange between cores independently of the task-executing algorithms and at a fixed rate. This synchronization allows for predicting data flow behavior based on this model. In real hardware, timing depends on memory cache times and the selected synchronization mechanism (e.g., ISRs (Interrupt Service Routines), OS task, or specific hardware). Despite these dependencies, the minimal latency between operations can be calculated and further optimized.
In this work, we develop a method to address these challenges by combining the predictability and data consistency of the LET model with the composability of TDMA. This implementation is shown in Figure 7 and is described as follows:
Determine the TDMA time intervals for communication between tasks on different cores that require sharing information. Assign specific time windows during which a task can transmit data through the communication channel.
Assign an execution period to each task (LET), defining when the task should start its execution and when its results must be available. Additionally, set and fix the reading and writing times statically, allowing the system to behave predictably, as each task’s operations on shared memory have a defined execution period.
Coordinate and plan the TDMA intervals with the LET execution times of the tasks to ensure that communication occurs without conflicts, thereby enabling communication within the TDMA intervals without interference.
Implement mechanisms to synchronize the LET task groups with the same LET periods with their corresponding TDMA slots, maintaining the execution of tasks within the TDMA and LET processes.
Figure 7. Implementation of LET and TDMA methods in multicore real-time systems. The figure illustrates how write and read operations are synchronized across multiple cores using TDMA slots and LET intervals. Each core has its shared memory, with tasks executing within their LET intervals. Guard bands are included between TDMA slots to prevent conflicts and ensure data consistency. This combined approach enhances the predictability and composability of inter-core data exchange.
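A minimal configuration sketch of this coordination is given below; the structure fields, the two example links, and the overlap check are illustrative assumptions rather than the configuration format used in Section 4. It captures steps 1–3: each cross-core link gets a LET period, a TDMA slot, and a guard band, and the configuration is rejected if any two windows overlap.

```c
#include <stdint.h>
#include <stdbool.h>

/* One cross-core link: a producer task on one core publishes data to a
 * consumer task on another core inside a dedicated TDMA slot. */
typedef struct {
    uint8_t  producer_core;
    uint8_t  consumer_core;
    uint32_t let_period_us;    /* LET of both tasks (step 2) */
    uint32_t slot_start_us;    /* slot offset within the TDMA period (step 1) */
    uint32_t slot_length_us;   /* transmission window */
    uint32_t guard_band_us;    /* separation before the next slot */
} cross_core_link_t;

/* Hypothetical configuration: two links sharing one 10 ms TDMA period. */
static const cross_core_link_t links[] = {
    { .producer_core = 0, .consumer_core = 1, .let_period_us = 10000u,
      .slot_start_us = 0u,    .slot_length_us = 1000u, .guard_band_us = 100u },
    { .producer_core = 1, .consumer_core = 0, .let_period_us = 10000u,
      .slot_start_us = 5000u, .slot_length_us = 1000u, .guard_band_us = 100u },
};

/* Step 3: verify at configuration time that no two slots (including their
 * guard bands) overlap, so communication occurs without interference. */
bool tdma_slots_are_conflict_free(void)
{
    const uint32_t n = sizeof(links) / sizeof(links[0]);
    for (uint32_t i = 0u; i < n; i++) {
        uint32_t end_i = links[i].slot_start_us
                       + links[i].slot_length_us + links[i].guard_band_us;
        for (uint32_t j = i + 1u; j < n; j++) {
            uint32_t end_j = links[j].slot_start_us
                           + links[j].slot_length_us + links[j].guard_band_us;
            bool disjoint = (end_i <= links[j].slot_start_us) ||
                            (end_j <= links[i].slot_start_us);
            if (!disjoint) {
                return false;   /* overlapping windows: reject configuration */
            }
        }
    }
    return true;
}
```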
4.2. Implementation Details
For the LET-TDMA solution implementation, we used the CYT2B75 Traveo II Entry Family Starter Kit platform, which features a 32-bit Arm Cortex-M0+ processor and an additional Arm Cortex-M4 core to handle complex tasks. The platform boasts a substantial memory architecture, including 4 MB of Flash, 128 KB of Work Flash, and 512 KB of SRAM. It supports advanced cryptographic operations with its Cortex-M0+ core and provides security features through a Memory Protection Unit (MPU) and a Shared Memory Protection Unit (SMPU). The software implementation was designed with a three-layer architecture to decouple hardware-specific software for shared memory static allocation and access from model-specific software. This architecture comprises three layers: hardware-specific software, LET operations, and TDMA management. Figure 8 illustrates the architecture developed, highlighting the interconnected layers designed to optimize both the flow and the consistency of the information.
The first of these layers is the Data Intermediation Layer, which manages the shared memory used by various software components to access common information. This layer provides APIs to manage direct access to shared buffers, which are declared through a configuration header. Platform-specific labels must be provided in the configuration files to provide the linker with the memory address range for allocating the shared buffers and controlling data in the global RAM.
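For illustration, a shared buffer of this layer could be declared as sketched below; the section name, macro names, and API are hypothetical placeholders for the configuration header and linker labels described above, and a real cross-core implementation would additionally handle memory barriers and cache maintenance where required.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical configuration header entries: one identifier per shared buffer. */
#define SHM_BUFFER_ID_SPEED   0u
#define SHM_BUFFER_COUNT      1u
#define SHM_BUFFER_MAX_BYTES  4u

/* Placement in a dedicated global-RAM section; the section name must match the
 * platform-specific label given to the linker for the shared-buffer range. */
#if defined(__GNUC__)
#define SHM_SECTION  __attribute__((section(".shared_let_buffers")))
#else
#define SHM_SECTION
#endif

/* Statically allocated shared buffers and per-buffer validity flags. */
static uint8_t shm_data[SHM_BUFFER_COUNT][SHM_BUFFER_MAX_BYTES] SHM_SECTION;
static volatile uint8_t shm_valid[SHM_BUFFER_COUNT] SHM_SECTION;

/* Minimal access API exposed to the upper layers. */
void shm_write(uint8_t id, const void *src, size_t len)
{
    memcpy(shm_data[id], src, len);     /* copy producer data into shared RAM */
    shm_valid[id] = 1u;                 /* mark the buffer as holding fresh data */
}

void shm_read(uint8_t id, void *dst, size_t len)
{
    memcpy(dst, shm_data[id], len);     /* copy shared data into a local buffer */
}
```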
The Access Times Manager Layer establishes the times for read and write operations for cross-core LET and performs the write operation between local and shared buffers. As input configuration, this layer requires the periods from tasks allocated to different cores belonging to the same event chain. It also requires the local buffer sizes to handle data copy operations at the LET periods. Its main functions include the following:
Enqueue: Organizes writing tasks from local memory to shared memory, acquiring local memory addresses from the local buffers for the writing process, and executes them throughout the task execution.
Trigger: Checks and transfers pending data to the designated areas of the shared memory, ensuring its availability for other processes.
Read: Facilitates access to the shared memory using specific identifiers to locate the necessary sections.
Finally, the TDMA Controller coordinates the activation and access to the assigned time slots for using shared memory among the system cores, leveraging the APIs provided by the LET layer. The TDMA Controller provides the base time for the TDMA time slots and is executed at a periodic rate driven either by an ISR or an OS scheduler-handled task with a high priority to minimize jitter or delayed access to the time slots. The timing of the base period defines the TDMA timing granularity, which, for efficiency, can be calculated using the greatest common divisor of the periods of the communicating tasks. The TDMA Controller also provides the means to coordinate the activation of consumer tasks based on the availability of producer data and its configured activation period through implementation- or platform-specific callbacks. The implementation of these callbacks is outside the scope of this study. Algorithm 1 illustrates the main functions of this process.
Algorithm 1: Main system operation
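As an illustration of Algorithm 1, the following C-style sketch, derived from the layer descriptions above, outlines how the main operation could be structured; the function names, the slot selection, and the base-tick handling are assumptions rather than the verbatim algorithm, and the lower-layer hooks are left as empty placeholders.

```c
#include <stdint.h>

#define TDMA_NUM_SLOTS   2u     /* one slot per communicating core pair */
#define TDMA_BASE_MS     10u    /* base tick = gcd of the communicating task periods */

static uint32_t tick_count;     /* incremented on every base-period activation */

/* Placeholder hooks into the Access Times Manager (Enqueue/Trigger/Read);
 * real bodies would copy data between local and shared buffers. */
static void let_trigger_transfer(uint8_t slot) { (void)slot; }
static void let_read_shared(uint8_t slot)      { (void)slot; }
static void activate_consumers(uint8_t slot)   { (void)slot; }

/* TDMA Controller base handler, invoked periodically by an ISR or a
 * high-priority OS task to minimize jitter in slot access. */
void tdma_controller_tick(void)
{
    uint8_t slot = (uint8_t)(tick_count % TDMA_NUM_SLOTS);

    /* Close the previous LET interval of this slot: flush the data that
     * producer tasks enqueued from their local buffers into shared memory. */
    let_trigger_transfer(slot);

    /* Open the next LET interval: copy fresh shared data into the local
     * context of the consumer tasks assigned to this slot. */
    let_read_shared(slot);

    /* Notify consumers that their input data are now available. */
    activate_consumers(slot);

    tick_count++;
}
```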
4.3. Characterization of Producer and Consumer Tasks
To validate the characterization of communication between producer and consumer tasks, the specific moments when the tasks must read and write data were calculated using Equations (1) and (2), obtained from [32].
(1)
(2)
where P corresponds to the execution times of the producer and Q to those of the consumer; the remaining terms denote, respectively, the offsets for the writing and reading tasks, the maximum of P and Q, the offset of the task with the largest period of the pair, and the communication tasks for writing and reading.

According to Equations (1) and (2), predictability in publication and reading times reduces variability in task response times. The calculation of specific offsets allows for improved system determinism, as knowing the exact times of reading and writing enables precise forecasting of system behavior under various load and execution conditions. This not only enhances predictability but also provides flexibility in system design, allowing adaptation to different temporal and synchronization requirements without the need to change task periods. The diagram of this characterization is shown in Figure 9.
4.4. Characterization of End-to-End Latencies
To characterize the age, maximum age, and reaction latencies, the semantics in [7] were considered and are defined below.
The age latency (Last-to-First, L2F) measures the delay from the last input that is not overwritten to the first output generated with the same input.
(3)
The reactive latency (First-to-First, F2F) measures the delay from the first input that can be overwritten to the first output generated with the next different input.
(4)
The maximum age latency (Last-to-Last, L2L) or maximum age measures the delay from the last input that is not overwritten to the last output, including duplicates.
(5)
Figure 10 shows how these latencies are measured. The reactive (F2F) latency is measured from the first input that can be overwritten to the first output generated with the next different input. The maximum age (L2L) latency is measured similarly to the age (L2F) latency, from the last input that is not overwritten to the last output, including multiple executable instances.
4.5. Validation Methods
The Root Mean Square Error (RMSE) is a widely used accuracy metric in regression tasks to evaluate the difference between the values predicted by a model and the actual values. To calculate the RMSE, first, the difference between the predicted and actual value for each data point is calculated, and then each difference is squared to prevent the cancellation of negative and positive errors. Subsequently, these squared values are averaged, and finally, the square root of the average is taken to adjust the errors to the original scale of the data. This metric is especially useful because it gives greater weight to larger errors, which is crucial in many practical contexts where large errors are particularly undesirable. The RMSE was calculated as shown in Equation (6).
$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$ (6)
where n is the total number of observations, $y_i$ represents the actual values, and $\hat{y}_i$ represents the predicted values.

5. Results
This section presents the results obtained from applying our LET-TDMA method on the dual-core Arm Cortex processor with 4 MB of Flash, 128 KB of Work Flash, and 512 KB of SRAM. An external 16 MHz crystal oscillator drove the internal clocks of both cores. The application core hosts an AUTOSAR 4.0.3-compliant OS with Scalability Class 1. This OS features fixed priority-based scheduling, handling of interrupts, and start-up and shutdown interfaces. Table 1 shows the configuration parameters for the clocks, the memory type used for the buffers, and the number of configured tasks.
To output the measurement times of the read and write operations, the Instrumentation Trace Macrocell (ITM) hardware available in the Cortex-M4 core was used, together with I-jet debugger hardware from IAR Systems and the IAR Embedded Workbench Integrated Development Environment. The two processing cores implemented an event chain with a producer task on the main core and a consumer task on the application core. The consumer task was set as a non-preemptive AUTOSAR basic task with the highest priority, while the producer task was set as a simple function called by an ISR-driven basic scheduler. The dataset to be transferred was designed to represent a simple ramp with a slope of 1, with a task cycle time set to 10 ms. Figure 11 shows the timing of write and read operations for the data values of the slope with a buffer size of 16 bits. Compared to the simulated scenario depicted in Figure 2, it is possible to see that the latency in the data transfer has very low variability. Measurements for both 8-bit and 32-bit buffers yielded similar results.
The datasets generated for buffer sizes of 8, 16, and 32 bits during the experimentation are accessible in the public repository at
These datasets correspond to the configurations of the LET (ISR Core 0 with AUTOSAR Core 1, and ISR Core 0 with ISR Core 1) and LET-TDMA (ISR Core 0 with AUTOSAR Core 1) implementations. Through the experiments, the execution time, accuracy, and latency were evaluated. Comparative analyses were conducted between different system configurations to assess the impact of buffer sizes on system performance. Table 2 shows the behavior of this scenario, including data samples from the write operations on the main core and the read operations on the application core.
Table 3, Table 4, and Table 5 provide samples of the times for data written and the times for data read. These times are crucial for maintaining an operational sequence and ensuring effective coordination between concurrently operating components. Offsets were applied to these temporal records to synchronize operations between tasks that require coordinated interaction despite being independent. These offsets were calculated by obtaining the difference in execution times between the write and read tasks, as illustrated in Figure 9. Furthermore, a reference parameter establishes the maximum interval within which tasks must be coordinated, acting as a reference period for the task execution cycle, and the remaining values indicate the additional adjustment needed to synchronize producer and consumer operations.
Figure 12 shows the results of calculating the offset values, which are identical at all calculated data points. These ensure that both the read and write execution tasks operate at synchronized moments, thus avoiding desynchronization between operations. Figure 12a–c represent these values for buffer sizes of 8, 16, and 32 bits, respectively.
Table 6 presents the analysis of communication methods in multicore systems, where the LET model stands out for its high predictability and consistency in response times, crucial attributes for applications that require temporal synchronization. LET’s ability to provide consistent and predictable response times makes it ideal for real-time control environments and critical safety applications. However, its implementation is more complex and requires detailed planning of Logical Execution Times. On the other hand, the combination of TDMA-DMA and SPM demonstrates advantages for applications that can benefit from optimized memory management, thus improving overall system performance and reducing wait and processing times.
Additionally, explicit communication tends to show less variability in reactive latency times compared to implicit communication. This lower variability is required for applications that need consistent and reliable response times, as it reduces uncertainty and improves the predictability of system behavior. However, it can increase programming complexity and the overhead of synchronization management. Meanwhile, implicit communication simplifies implementation by automatically managing synchronization, but it presents greater variability in latency times and less predictability, which can lead to synchronization problems. Nevertheless, the combination of LET and TDMA has shown that the predictability and consistency of Logical Execution Time applied to task communication, along with the composability of time-controlled buffers, ensures consistent latency and temporal determinism for communication between cores in 8-bit, 16-bit, and 32-bit architectures. This reduces the need to link applications to specific cores, facilitating the creation of less dependent event chains.
In this work, the LET implementation in a dual-core processor was first reproduced to measure its variability against our LET-TDMA method. The setup used the same conditions as those in the LET-TDMA experimentation: two processing cores implementing an event chain of producer and consumer tasks. The dataset to be transferred was defined to represent a simple ramp with a slope of 1. The task cycle times were set to 10 ms. Table 7 shows the RMSE values calculated using Equation (6). These values show how variably the plain LET case behaves due to the task activation latency introduced by the AUTOSAR OS on the M4 core compared with the execution of the ISR-based task on the M0+ core; this latency was measured to increase by 62.5 ns on each task activation. In contrast, the LET-TDMA solution maintains the time between read and write operations at an average of 1.00143 ms, exhibiting very low variability with a maximum time of 1.002 ms and a minimum delay of 1.001 ms.
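As a small illustration of how Equation (6) can be applied to such logged operation times, a direct computation could look like the following sketch; the sample values are hypothetical and are not part of the recorded datasets.

```c
#include <math.h>
#include <stddef.h>
#include <stdio.h>

/* RMSE as in Equation (6): square each error, average, take the square root. */
static double rmse(const double *actual, const double *predicted, size_t n)
{
    double sum_sq = 0.0;
    for (size_t i = 0; i < n; i++) {
        double err = actual[i] - predicted[i];
        sum_sq += err * err;
    }
    return sqrt(sum_sq / (double)n);
}

int main(void)
{
    /* Hypothetical write-to-read latencies in ms (measured vs. the nominal
     * 1 ms expected under the LET-TDMA schedule); illustrative values only. */
    const double measured[] = { 1.001, 1.002, 1.001, 1.001, 1.002 };
    const double expected[] = { 1.000, 1.000, 1.000, 1.000, 1.000 };
    printf("RMSE = %.6f ms\n", rmse(measured, expected, 5u));
    return 0;
}
```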
Figure 13 shows the plotted time differences between the producer write operations and the consumer read operations using the proposed LET-TDMA solution. From this information, it can be deduced that the RMSE calculation is consistent with the actual measured time differences between the write and read operations.
6. Discussion
Integrating LET and TDMA models in multicore systems represents a significant advancement in managing inter-core communication. This study demonstrates that combining these methodologies can effectively address the inherent challenges of synchronization and data consistency in real-time critical systems. At first glance, it is evident that tasks on different cores significantly benefit from the implementation of the LET model, ensuring temporal predictability with fixed operation times for read and write operations. According to the results obtained, there is no data loss when maintaining constant read/write operation rates. However, a variable that still requires improvement is data transfer latency when implemented in multicore systems. The LET model faces challenges due to variability in initialization and execution times on each core, as well as in dispatcher times, which can introduce variable latencies in data communication between cores. These challenges depend on the application and the operating system, especially in heterogeneous core configurations.
The proposed method combines LET with TDMA-managed buffers to mitigate variable latency issues in inter-core communication. This is achieved by synchronizing the read operation of the consumer task on one core with the write operation of the producer task on another core, ensuring each task has defined access times to the communication channels. According to the results obtained, it is evident that the LET model greatly benefits from integration with TDMA when applied in multicore systems. This integration not only reduces variability in response times but also improves the system’s consistency and predictability. Fixing the latency also makes the system more deterministic, as variability in read and write operations can affect the performance of time-critical algorithms.
TDMA also provides efficient resource usage since global buffers can be shared across several tasks from different cores due to the temporal isolation provided by this scheme, which allows for the reuse of buffers across time slots while eliminating waiting times introduced by synchronization mechanisms, such as semaphores or spinlocks. Considering various example scenarios such as producer–consumer tasks, real-time data acquisition in industrial control systems, and synchronized data transmission in telecommunications, buffer usage can be optimized. For instance, in a scenario with four pairs of tasks with execution periods that are the same or multiples, buffer usage can be reduced by up to 75% if all pairs can use the same buffer if the required buffer sizes are the same or lower than the maximum buffer size. Similarly, in industrial control systems, synchronization of sensor data acquisition and actuator control can be enhanced using these principles.
In microcontrollers, defined operation times and the absence of resource locks may lead to lower energy consumption, but that is not the focus of this work. As mentioned at the beginning of this document, low latency and determinism are crucial to ensuring the safety of critical embedded systems. These systems, found in the automotive and aviation industries, require high predictability and consistency in data transfer to guarantee system performance and safety. Integrating LET with TDMA provides a strong and efficient solution, ensuring that tasks distributed across different cores can communicate predictably and without data loss, thus optimizing overall system performance.
7. Conclusions and Future Directions
In this study, we have introduced a communication model between cores that combines the predictability of Logical Execution Time with a synchronization mechanism in heterogeneous systems and the composability of time-controlled buffers. This is achieved using a Time-Division Multiple Access scheme for inter-core communication. Given the critical nature of deadlines in real-time tasks, this approach ensures constant latency between tasks on different cores using a shared memory channel, addressing hardware limitations and timing dependencies. This reduces the need to bind applications to specific cores and facilitates the creation of less dependent event chains.
Furthermore, by establishing time intervals for data transfer between cores, deterministic data flows can be modeled. The portability of this approach allows it to be used across various platforms, as it is not tied to specific hardware implementations. Our analysis and results demonstrate the following: (a) Improved temporal predictability: the integration of LET with TDMA improves the temporal predictability of read and write operations in multicore systems, achieving a constant latency of 11 ms. (b) Reduced variable latency: measurements of data transfer between cores using shared buffers of 8, 16, and 32 bits showed a latency of approximately 1 ms, compared to the LET model, which showed variable latencies of 3.2846 ms, 8.9257 ms, and 0.2338 ms for 8-, 16-, and 32-bit buffers, respectively. (c) Enhanced system consistency: implementing this method enabled predictable and synchronized access times to shared resources, with access times improving to 10 ms. Moreover, the TDMA implementation allows global buffers to be shared among multiple tasks across different cores due to the temporal isolation of this scheme, making them reusable within defined time intervals.
Despite its performance benefits, this method faces certain limitations. The event chains were limited to two tasks and one label each, focusing on latency improvement on a dual-core embedded microcontroller. RAM was used for shared buffers, and LET management was independent of the Runtime Environment (RTE) and operating system software. The reference conditions in this study focused on evaluating the real impact of the cross-core LET-TDMA implementation and its improvements. However, in more complex configurations with event chains distributed across more than two cores, data loss could still occur, especially in systems handling multiple external events. Balancing external event handling is crucial to avoid TDMA scheduling disruptions.
Future work could explore expanding this technique to support more cores by distributing event chains across multiple cores. Integrating LET-TDMA management as an RTE implementation addon could simplify inter-core event chaining through configuration and code generation. Additionally, optimizing shared buffer strategies using technologies like DMA or platform-specific inter-process communication implementations, abstracted through standardized interfaces, remains a relevant topic for future research.
Conceptualization, C.-A.M.-A. and J.-A.R.-G.; methodology, C.-A.M.-A. and J.-A.R.-G.; software, C.-A.M.-A.; validation, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., J.R.-R. and J.T.; formal analysis, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S. and J.T.; investigation, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R.; resources, J.R.-R.; data curation, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.R.-R. and J.T.; writing—original draft preparation, C.-A.M.-A. and J.-A.R.-G.; writing—review and editing, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R.; visualization, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R.; supervision, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R.; project administration, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R.; funding acquisition, C.-A.M.-A., J.-A.R.-G., D.-M.C.-E., R.C.-S., J.T. and J.R.-R. All authors have read and agreed to the published version of the manuscript.
Raw data for 8-, 16-, and 32-bit buffer sizes are available at
We thank the Autonomous University of Queretaro and the National Council of Humanities, Sciences, and Technologies (CONAHCYT) for the Master’s scholarship.
The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
AUTOSAR | AUTOmotive Open System ARchitecture |
CAB | Cyclic Asynchronous Buffer |
DMA | Direct Memory Access |
F2F | First-to-First |
GASA | Genetic Simulated Annealing Optimization |
ISR | Interrupt Service Routines |
L2F | Last-to-First |
L2L | Last-to-Last |
LET | Logical Execution Time |
MoDGWA | Multi-objective cost-aware Discrete Gray-Wolf optimization-based Algorithm |
MPU | Memory Protection Unit |
NoC | Network-on-a-Chip |
NVM | Non-Volatile Memory |
OS | Operating System |
POSIX | Portable Operating System Interface |
RAM | Random Access Memory |
RMSE | Root Mean Squared Error |
RTE | Runtime Environment |
SFF | Scheduling Failure Factor |
SL | System level |
SMPU | Shared Memory Protection Unit |
SPM | ScratchPad Memories |
SRAM | Static Random Access Memory |
TDM | Time-Division Multiplexing |
TDMA | Time-Division Multiple Access |
TIMEA | Time-Triggered Message-Based Architecture |
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Explicit communication: Illustration of data read and write operations to a shared global memory (GLOBAL X) section during task execution on multicore systems, demonstrating inter-task communication handling when tasks are allocated to different cores.
Figure 2. Simulation of data transfer between cores using a shared buffer with explicit communication. The figure shows the data values written by the producer task on Core 1 and read by the consumer task on Core 2 over time. This simulation highlights the potential issues with data flow consistency and signal resolution due to asynchronous execution and external delays.
Figure 3. Visual representation of the operational dynamics of each task under the LET framework. The figure illustrates the predefined time frames within which tasks are executed. Each task follows a cycle of running, suspension/preemption, and termination, with specific periods for waiting. This structure ensures consistent and predictable execution, enhancing system predictability and reliability.
Figure 4. Simulation of eight executions with varying latencies between read and write operations. This figure demonstrates the impact of different execution cycle starts for the receiver task on the second core. Despite having a fixed cycle time, the offset between tasks on each core varies, highlighting the challenges in achieving consistent timing across multiple cores.
Figure 5. Illustration of a TDMA channel with four time slots. The figure shows a single channel shared among four transmitter entities (emitters) and four receiver entities (receivers), each assigned to a specific time slot (T1 to T4). This arrangement ensures that each emitter can only send data during its designated time slot, thereby avoiding conflicts and ensuring orderly data transmission.
Figure 6. TDMA-based memory schedule for a system with two cores. The figure depicts how data exchange is managed between tasks on different cores using TDMA. Each core has designated time slots for loading and unloading data partitions, ensuring efficient and synchronized access to shared memory. The schedule shows the allocation of segments to partitions and the usage of TDMA slots, highlighting the coordination required to prevent interference and optimize memory access.
Figure 8. Implementation of LET and TDMA architecture. This figure illustrates the three-layer architecture developed for real-time implementation, highlighting the Data Intermediation Layer, Latency Exchange Time Manager, and TDMA Controller. The interconnected layers are designed to optimize the flow and consistency of information, ensuring efficient and predictable data exchange in multicore systems.
Figure 9. Characterization of producer and consumer tasks. This figure illustrates the runnables of the writer ([Formula omitted. See PDF.]) and reader ([Formula omitted. See PDF.]), allowing management of the synchronization of communication between tasks with different periods and offsets in real-time systems.
Figure 10. This figure illustrates the measurement of the age latency (L2F), reactive latency (F2F), and maximum age latency (L2L). Additionally, the tasks ([Formula omitted. See PDF.]) performed by the reader ([Formula omitted. See PDF.]) and writer ([Formula omitted. See PDF.]) are represented in relation to the synchronization times [Formula omitted. See PDF.] and [Formula omitted. See PDF.].
Figure 11. Measurements of data transfer between cores using a 16-bit shared buffer with the LET-TDMA method implementation. The figure shows the data values written by the producer task on Core 1 and read by the consumer task on Core 2 over time. The execution log performed on real hardware highlights the determinism and low latency variability achieved by using the proposed solution.
Figure 12. Variable data read/write operation times for different buffer sizes. The plots illustrate the offset times calculated for tasks at buffer sizes of 8 bits (a), 16 bits (b), and 32 bits (c). Each plot shows the read and write times for the tasks, highlighting the differences in data transfer times across varying buffer sizes.
Figure 13. Calculation of time differences between write and read operations between cores using a 16-bit shared buffer with the LET-TDMA method implementation. This plot highlights the low variability between data transfer operations, which directly impacts the latency calculations.
Configuration parameters for the Traveo II CYT2B75 Dual-Core Microcontroller.
Core | Core Clock | Prescaled Clock | Scheduler | Base Period | Tasks | Buffer Memory | Shared Buffers | Buffer Sizes (bits)
---|---|---|---|---|---|---|---|---|
Cortex M0+ | 80 MHz | 1 MHz | ISR | 10 ms | 1 | SRAM | 3 | 8, 16, 32 |
Cortex M4 | 160 MHz | 1 kHz | AUTOSAR OS | 1 ms | 2 | SRAM | 3 | 8, 16, 32
Runtime measurements for producer and consumer tasks at buffer sizes of 8, 16, and 32 bits. The table displays the write times and the corresponding read times for each of the three buffer sizes.
18,385,737 | 18,386,738 | 27,223,037 | 27,224,038 | 21,877,902 | 21,878,904 |
18,395,737 | 18,396,738 | 27,233,037 | 27,234,038 | 21,887,903 | 21,888,904 |
18,405,737 | 18,406,738 | 27,243,037 | 27,244,038 | 21,897,903 | 21,898,904 |
18,415,737 | 18,416,738 | 27,253,037 | 27,254,038 | 21,907,903 | 21,908,904 |
18,425,737 | 18,426,738 | 27,263,037 | 27,264,038 | 21,917,903 | 21,918,904 |
18,435,737 | 18,436,738 | 27,273,037 | 27,274,038 | 21,927,903 | 21,928,904 |
18,445,737 | 18,446,738 | 27,283,037 | 27,284,038 | 21,937,903 | 21,938,904 |
18,455,737 | 18,456,739 | 27,293,037 | 27,294,039 | 21,947,903 | 21,948,904 |
18,465,737 | 18,466,739 | 27,303,037 | 27,304,039 | 21,957,903 | 21,958,904 |
18,475,737 | 18,476,739 | 27,313,037 | 27,314,039 | 21,967,903 | 21,968,905 |
18,485,737 | 18,486,739 | 27,323,037 | 27,324,039 | 21,977,903 | 21,978,905 |
18,495,738 | 18,496,739 | 27,333,037 | 27,334,039 | 21,987,903 | 21,988,905 |
18,505,738 | 18,506,739 | 27,343,037 | 27,344,039 | 21,997,903 | 21,998,905 |
18,515,738 | 18,516,739 | 27,353,037 | 27,354,039 | 22,007,903 | 22,008,905 |
18,525,738 | 18,526,739 | 27,363,038 | 27,364,039 | 22,017,903 | 22,018,905 |
18,535,738 | 18,536,739 | 27,373,038 | 27,374,039 | 22,027,903 | 22,028,905 |
18,545,738 | 18,546,739 | 27,383,038 | 27,384,039 | 22,037,904 | 22,038,905 |
18,555,738 | 18,556,739 | 27,393,038 | 27,394,039 | 22,047,904 | 22,048,905 |
18,565,738 | 18,566,739 | 27,403,038 | 27,404,039 | 22,057,904 | 22,058,905 |
18,575,738 | 18,576,739 | 27,413,038 | 27,414,039 | 22,067,904 | 22,068,905 |
18,585,738 | 18,586,740 | 27,423,038 | 27,424,039 | 22,077,904 | 22,078,905 |
18,595,738 | 18,596,740 | 27,433,038 | 27,434,039 | 22,087,904 | 22,088,905 |
18,605,738 | 18,606,740 | 27,443,038 | 27,444,039 | 22,097,904 | 22,098,905 |
18,615,739 | 18,616,740 | 27,453,038 | 27,454,040 | 22,107,904 | 22,108,905 |
18,625,739 | 18,626,740 | 27,463,038 | 27,464,040 | 22,117,904 | 22,118,905 |
18,635,739 | 18,636,740 | 27,473,038 | 27,474,040 | 22,127,904 | 22,128,906 |
18,645,739 | 18,646,740 | 27,483,038 | 27,484,040 | 22,137,904 | 22,138,906 |
18,655,739 | 18,656,740 | 27,493,039 | 27,494,040 | 22,147,904 | 22,148,906 |
18,665,739 | 18,666,740 | 27,503,039 | 27,504,040 | 22,157,904 | 22,158,906 |
18,675,739 | 18,676,740 | 27,513,038 | 27,514,040 | 22,167,904 | 22,168,906 |
18,685,739 | 18,686,740 | 27,523,039 | 27,524,040 | 22,177,904 | 22,178,906 |
18,695,739 | 18,696,740 | 27,533,039 | 27,534,040 | 22,187,904 | 22,188,906 |
18,705,739 | 18,706,740 | 27,543,039 | 27,544,040 | 22,197,905 | 22,198,906 |
18,715,739 | 18,716,740 | 27,553,039 | 27,554,040 | 22,207,905 | 22,208,906 |
18,725,739 | 18,726,740 | 27,563,039 | 27,564,040 | 22,217,905 | 22,218,906 |
18,735,739 | 18,736,740 | 27,573,039 | 27,574,040 | 22,227,905 | 22,228,906 |
18,745,739 | 18,746,740 | 27,583,039 | 27,584,040 | 22,237,905 | 22,238,906 |
18,755,739 | 18,756,740 | 27,593,039 | 27,594,040 | 22,247,905 | 22,248,906 |
18,765,739 | 18,766,740 | 27,603,039 | 27,604,040 | 22,257,905 | 22,258,907 |
18,775,739 | 18,776,741 | 27,613,039 | 27,614,041 | 22,267,905 | 22,268,907 |
18,985,741 | 18,986,742 | 27,823,041 | 27,824,042 | 22,477,906 | 22,478,908 |
18,995,741 | 18,996,742 | 27,833,040 | 27,834,042 | 22,487,906 | 22,488,908 |
19,005,741 | 19,006,742 | 27,843,041 | 27,844,042 | 22,497,906 | 22,498,908 |
19,015,741 | 19,016,742 | 27,853,041 | 27,854,042 | 22,507,906 | 22,508,908 |
19,025,741 | 19,026,742 | 27,863,041 | 27,864,042 | 22,517,907 | 22,518,908 |
19,035,741 | 19,036,742 | 27,873,041 | 27,874,042 | 22,527,907 | 22,528,908 |
19,045,741 | 19,046,742 | 27,883,041 | 27,884,042 | 22,537,907 | 22,538,908 |
19,055,741 | 19,056,742 | 27,893,041 | 27,894,042 | 22,547,907 | 22,548,908 |
19,065,741 | 19,066,742 | 27,903,041 | 27,904,042 | 22,557,907 | 22,558,908 |
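The per-buffer-size tables that follow are derived from these time stamps. As a rough sketch, assuming 1 µs ticks and using the first few 8-bit values from the table above, the write-to-read latency and the period between consecutive writes can be computed as follows:

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* First rows of the 8-bit write/read columns from the table above (1 us ticks). */
    const uint32_t write_ts[] = { 18385737u, 18395737u, 18405737u };
    const uint32_t read_ts[]  = { 18386738u, 18396738u, 18406738u };
    const size_t n = sizeof write_ts / sizeof write_ts[0];

    for (size_t i = 0; i < n; i++) {
        uint32_t latency = read_ts[i] - write_ts[i];          /* ~1001 us = 1.001 ms */
        printf("latency[%zu] = %u us\n", i, latency);
        if (i > 0) {
            uint32_t period = write_ts[i] - write_ts[i - 1];  /* 10,000 us = 10 ms   */
            printf("write period[%zu] = %u us\n", i, period);
        }
    }
    return 0;
}
```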
Time values for the execution of the reading and writing tasks for 8-bit buffers. The table includes the write and read time stamps, the differences between consecutive write and read operations, and the reference values derived from them (all values in µs).
| | | | | | | |
---|---|---|---|---|---|---|---|
18,395,737 | 18,396,738 | 10,000 | 10,000 | 18,395,737 | 10,000 | 18,405,737 | 18,405,737 |
18,405,737 | 18,406,738 | 10,000 | 10,000 | 18,405,737 | 10,000 | 18,415,737 | 18,415,737 |
18,415,737 | 18,416,738 | 10,000 | 10,000 | 18,415,737 | 10,000 | 18,425,737 | 18,425,737 |
18,425,737 | 18,426,738 | 10,000 | 10,000 | 18,425,737 | 10,000 | 18,435,737 | 18,435,737 |
18,435,737 | 18,436,738 | 10,000 | 10,000 | 18,435,737 | 10,000 | 18,445,737 | 18,445,737 |
18,445,737 | 18,446,738 | 10,000 | 10,000 | 18,445,737 | 10,000 | 18,455,737 | 18,455,737 |
18,455,737 | 18,456,739 | 10,000 | 10,001 | 18,455,737 | 10,001 | 18,465,738 | 18,465,738 |
18,465,737 | 18,466,739 | 10,000 | 10,000 | 18,465,737 | 10,000 | 18,475,737 | 18,475,737 |
18,475,737 | 18,476,739 | 10,000 | 10,000 | 18,475,737 | 10,000 | 18,485,737 | 18,485,737 |
18,485,737 | 18,486,739 | 10,000 | 10,000 | 18,485,737 | 10,000 | 18,495,737 | 18,495,737 |
18,495,738 | 18,496,739 | 10,001 | 10,000 | 18,495,738 | 10,001 | 18,505,739 | 18,505,739 |
18,505,738 | 18,506,739 | 10,000 | 10,000 | 18,505,738 | 10,000 | 18,515,738 | 18,515,738 |
18,515,738 | 18,516,739 | 10,000 | 10,000 | 18,515,738 | 10,000 | 18,525,738 | 18,525,738 |
18,525,738 | 18,526,739 | 10,000 | 10,000 | 18,525,738 | 10,000 | 18,535,738 | 18,535,738 |
18,535,738 | 18,536,739 | 10,000 | 10,000 | 18,535,738 | 10,000 | 18,545,738 | 18,545,738 |
18,545,738 | 18,546,739 | 10,000 | 10,000 | 18,545,738 | 10,000 | 18,555,738 | 18,555,738 |
18,555,738 | 18,556,739 | 10,000 | 10,000 | 18,555,738 | 10,000 | 18,565,738 | 18,565,738 |
18,565,738 | 18,566,739 | 10,000 | 10,000 | 18,565,738 | 10,000 | 18,575,738 | 18,575,738 |
18,575,738 | 18,576,739 | 10,000 | 10,000 | 18,575,738 | 10,000 | 18,585,738 | 18,585,738 |
18,585,738 | 18,586,740 | 10,000 | 10,001 | 18,585,738 | 10,001 | 18,595,739 | 18,595,739 |
Time values for the execution of the reading and writing tasks for 16-bit buffers. The table includes the write and read time stamps, the differences between consecutive write and read operations, and the reference values derived from them (all values in µs).
| | | | | | | |
---|---|---|---|---|---|---|---|
27,233,037 | 27,234,038 | 10,000 | 10,000 | 27,234,038 | 10,000 | 27,244,038 | 27,244,038 |
27,243,037 | 27,244,038 | 10,000 | 10,000 | 27,244,038 | 10,000 | 27,254,038 | 27,254,038 |
27,253,037 | 27,254,038 | 10,000 | 10,000 | 27,254,038 | 10,000 | 27,264,038 | 27,264,038 |
27,263,037 | 27,264,038 | 10,000 | 10,000 | 27,264,038 | 10,000 | 27,274,038 | 27,274,038 |
27,273,037 | 27,274,038 | 10,000 | 10,000 | 27,274,038 | 10,000 | 27,284,038 | 27,284,038 |
27,283,037 | 27,284,038 | 10,000 | 10,000 | 27,284,038 | 10,000 | 27,294,038 | 27,294,038 |
27,293,037 | 27,294,039 | 10,000 | 10,001 | 27,294,039 | 10,001 | 27,304,040 | 27,304,040 |
27,303,037 | 27,304,039 | 10,000 | 10,000 | 27,304,039 | 10,000 | 27,314,039 | 27,314,039 |
27,313,037 | 27,314,039 | 10,000 | 10,000 | 27,314,039 | 10,000 | 27,324,039 | 27,324,039 |
27,323,037 | 27,324,039 | 10,000 | 10,000 | 27,324,039 | 10,000 | 27,334,039 | 27,334,039 |
27,333,037 | 27,334,039 | 10,000 | 10,000 | 27,334,039 | 10,000 | 27,344,039 | 27,344,039 |
27,343,037 | 27,344,039 | 10,000 | 10,000 | 27,344,039 | 10,000 | 27,354,039 | 27,354,039 |
27,353,037 | 27,354,039 | 10,000 | 10,000 | 27,354,039 | 10,000 | 27,364,039 | 27,364,039 |
27,363,038 | 27,364,039 | 10,001 | 10,000 | 27,364,039 | 10,001 | 27,374,040 | 27,374,040 |
27,373,038 | 27,374,039 | 10,000 | 10,000 | 27,374,039 | 10,000 | 27,384,039 | 27,384,039 |
27,383,038 | 27,384,039 | 10,000 | 10,000 | 27,384,039 | 10,000 | 27,394,039 | 27,394,039 |
27,393,038 | 27,394,039 | 10,000 | 10,000 | 27,394,039 | 10,000 | 27,404,039 | 27,404,039 |
27,403,038 | 27,404,039 | 10,000 | 10,000 | 27,404,039 | 10,000 | 27,414,039 | 27,414,039 |
27,413,038 | 27,414,039 | 10,000 | 10,000 | 27,414,039 | 10,000 | 27,424,039 | 27,424,039 |
27,423,038 | 27,424,039 | 10,000 | 10,000 | 27,424,039 | 10,000 | 27,434,039 | 27,434,039 |
Time values for the execution of the reading and writing tasks for 32-bit buffers. The table includes the write and read time stamps, the differences between consecutive write and read operations, and the reference values derived from them (all values in µs).
| | | | | | | |
---|---|---|---|---|---|---|---|
21,887,903 | 21,888,904 | 10,001 | 10,000 | 21,888,904 | 10,001 | 21,898,905 | 21,898,905 |
21,897,903 | 21,898,904 | 10,000 | 10,000 | 21,898,904 | 10,000 | 21,908,904 | 21,908,904 |
21,907,903 | 21,908,904 | 10,000 | 10,000 | 21,908,904 | 10,000 | 21,918,904 | 21,918,904 |
21,917,903 | 21,918,904 | 10,000 | 10,000 | 21,918,904 | 10,000 | 21,928,904 | 21,928,904 |
21,927,903 | 21,928,904 | 10,000 | 10,000 | 21,928,904 | 10,000 | 21,938,904 | 21,938,904 |
21,937,903 | 21,938,904 | 10,000 | 10,000 | 21,938,904 | 10,000 | 21,948,904 | 21,948,904 |
21,947,903 | 21,948,904 | 10,000 | 10,000 | 21,948,904 | 10,000 | 21,958,904 | 21,958,904 |
21,957,903 | 21,958,904 | 10,000 | 10,000 | 21,958,904 | 10,000 | 21,968,904 | 21,968,904 |
21,967,903 | 21,968,905 | 10,000 | 10,001 | 21,968,905 | 10,001 | 21,978,906 | 21,978,906 |
21,977,903 | 21,978,905 | 10,000 | 10,000 | 21,978,905 | 10,000 | 21,988,905 | 21,988,905 |
21,987,903 | 21,988,905 | 10,000 | 10,000 | 21,988,905 | 10,000 | 21,998,905 | 21,998,905 |
21,997,903 | 21,998,905 | 10,000 | 10,000 | 21,998,905 | 10,000 | 22,008,905 | 22,008,905 |
22,007,903 | 22,008,905 | 10,000 | 10,000 | 22,008,905 | 10,000 | 22,018,905 | 22,018,905 |
22,017,903 | 22,018,905 | 10,000 | 10,000 | 22,018,905 | 10,000 | 22,028,905 | 22,028,905 |
22,027,903 | 22,028,905 | 10,000 | 10,000 | 22,028,905 | 10,000 | 22,038,905 | 22,038,905 |
22,037,904 | 22,038,905 | 10,001 | 10,000 | 22,038,905 | 10,001 | 22,048,906 | 22,048,906 |
22,047,904 | 22,048,905 | 10,000 | 10,000 | 22,048,905 | 10,000 | 22,058,905 | 22,058,905 |
22,057,904 | 22,058,905 | 10,000 | 10,000 | 22,058,905 | 10,000 | 22,068,905 | 22,068,905 |
22,067,904 | 22,068,905 | 10,000 | 10,000 | 22,068,905 | 10,000 | 22,078,905 | 22,078,905 |
22,077,904 | 22,078,905 | 10,000 | 10,000 | 22,078,905 | 10,000 | 22,088,905 | 22,088,905 |
Comparative results of various communication methods in multicore systems. The table compares the number of cores, tasks, and different latency measurements (L2L, L2F, and F2F) across multiple studies, including ours. Chain sizes and labels are also indicated. Our results demonstrate the performance of the LET and LET-TDMA methods for 8-, 16-, and 32-bit data transfers.
Author | Method | Cores | Tasks | L2L (ms) | L2F (ms) | F2F (ms) | Chain Size |
---|---|---|---|---|---|---|---|
Tabish et al. [20] | TDMA-DMA with SPM | 3 | 5–20 | - | 400 | - | - |
Biondi et al. [33] | Explicit Communication | 2 | 4 | - | 12.746 | 22.746 | 4, 3 labels |
| Implicit Communication | 2 | 3 | - | - | - | 3, 2 labels |
| LET Communication | 2 | 5 | 154.234 | - | - | 14 labels |
Hamann et al. [6] | Explicit Communication | 4 | 3 | - | - | 8.6 | 10,000 labels |
| Implicit Communication | 4 | 3 | - | - | 36.9 | 10,000 labels |
| LET | 4 | 3 | - | - | 111.97 | 10,000 labels |
Martinez et al. [7] | Explicit Communication (C1) | 4 | 3 | 123.718 | - | 125.710 | 3, 2 labels |
| Implicit Communication (C1) | 4 | 3 | 154.988 | - | 151.855 | 3, 2 labels |
| LET (C1) | 4 | 3 | 210 | - | 212 | 3, 2 labels |
| Explicit Communication (C2) | 4 | 3 | 2.844 | - | 64.894 | 3, 2 labels |
| Implicit Communication (C2) | 4 | 3 | 6.54 | - | 66.33 | 3, 2 labels |
| LET (C2) | 4 | 3 | 53.597 | - | 103.597 | 3, 2 labels |
Maia and Fohler [34] | LET | 4 | 2–5 | 4040 | 5000 | 420.43 | 38 labels |
| WCR-LET | 4 | 2–5 | - | 4000 | - | |
| Maia–Fohler | 4 | 2–5 | 3237 | 4197 | - | |
Wang et al. [35] | fLETEnum | 4 | 21 | 2725 | 3685 | - | 31 to 63 labels |
| fLETSBacktracking | 4 | 1 | 2725 | 3685 | - | |
| fLETSymbOpt | 4 | 1 | 2725 | 3685 | - | |
Günzel et al. [36] | D19 | 2–5 | 21 | 3250 | 4750 | - | 30 to 60 labels |
| K18 | 2–5 | 21 | 2650 | 2700 | - | |
| B17 | 2–5 | 21 | 2650 | - | - | |
| Günzel | 2–5 | 21 | 1750 | 3250 | - | |
Ours | LET | 2 | 2 | 20–40 | 10–40 | 10–50 | 2, 1 labels |
| LET-TDMA (8, 16, and 32 bits) | 2 | 2 | 10 | 10 | 20 | 2, 1 labels |
Values calculated for the Root Mean Square Error (RMSE) for 8-, 16-, and 32-bit buffers.
Method | Core M0+ Scheduler | Core M4 Scheduler | RMSE 8-bit | RMSE 16-bit | RMSE 32-bit |
---|---|---|---|---|---|
LET | ISR | AUTOSAR | 3.2846 ms | 8.9257 ms | 0.2338 ms |
LET | ISR | ISR | 9.1680 ms | 7.9906 ms | 1.4070 ms |
LET-TDMA | ISR | AUTOSAR | ≈1 ms | ≈1 ms | ≈1 ms |
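For reference, the RMSE values above follow the standard definition given below; this is a sketch that assumes the error is taken between each measured delay $d_i$ and its expected (nominal) value $\hat{d}_i$ over $n$ samples.

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(d_i - \hat{d}_i\right)^{2}}$$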
References
1. Nidamanuri, J.; Nibhanupudi, C.; Assfalg, R.; Venkataraman, H. A progressive review: Emerging technologies for ADAS driven solutions. IEEE Trans. Intell. Veh.; 2021; 7, pp. 326-341. [DOI: https://dx.doi.org/10.1109/TIV.2021.3122898]
2. Antinyan, V. Revealing the Complexity of Automotive Software. Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020); Sacramento, CA, USA, 6–16 November 2020; pp. 1525-1528. [DOI: https://dx.doi.org/10.1145/3368089.3417038]
3. Monot, A.; Navet, N.; Bavoux, B.; Simonot-Lion, F. Multicore scheduling in automotive ECUs. Embedded Real Time Software and Systems; ERTSS: Toulouse, France, 2010.
4. Bucaioni, A.; Mubeen, S.; Ciccozzi, F.; Cicchetti, A.; Sjödin, M. Modelling multi-criticality vehicular software systems: Evolution of an industrial component model. Softw. Syst. Model.; 2020; 19, pp. 1283-1302. [DOI: https://dx.doi.org/10.1007/s10270-020-00795-5]
5. Schoeberl, M.; Sørensen, R.B.; Sparsø, J. Models of communication for multicore processors. Proceedings of the 2015 IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing Workshops; Auckland, New Zealand, 13–17 April 2015; IEEE: Piscataway, NJ, USA, 2015.
6. Hamann, A.; Dasari, D.; Kramer, S.; Pressler, M.; Wurst, F. Communication centric design in complex automotive embedded systems. Leibniz Int. Proc. Inform. LIPIcs; 2017; 76, pp. 10:1-10:20. [DOI: https://dx.doi.org/10.4230/LIPIcs.ECRTS.2017.10]
7. Martinez, J.; Sañudo, I.; Bertogna, M. End-to-End Latency Characterization of Task Communication Models for Automotive Systems; Springer Nature: Heidelberg, Germany, 2020; Volume 56, pp. 315-347. [DOI: https://dx.doi.org/10.1007/s11241-020-09350-3]
8. Pazzaglia, P.; Biondi, A.; Di Natale, M. Optimizing the functional deployment on multicore platforms with logical execution time. Proc. Real-Time Syst. Symp.; 2019; 2019, pp. 207-219. [DOI: https://dx.doi.org/10.1109/RTSS46320.2019.00028]
9. Toscanelli, M. Multicore Software Development for Engine Control Units. Master’s Thesis; Università di Bologna: Bologna, Italy, 2019.
10. Cerrolaza, J.P.; Obermaisser, R.; Abella, J.; Cazorla, F.J.; Grüttner, K.; Agirre, I.; Ahmadian, H.; Allende, I. Multi-core devices for safety-critical systems: A survey. ACM Comput. Surv. (CSUR); 2020; 53, pp. 1-38. [DOI: https://dx.doi.org/10.1145/3398665]
11. Igarashi, S.; Azumi, T. Work in progress: Considering heuristic scheduling for NoC-Based clustered many-core processor using LET model. Proceedings of the Real-Time Systems Symposium; Hong Kong, China, 3–6 December 2019; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2019; pp. 516-519. [DOI: https://dx.doi.org/10.1109/RTSS46320.2019.00053]
12. Hung, S.H.; Chiu, P.H.; Shih, C.S. Building and optimizing a scalable and portable message-passing library for embedded multicore systems. Information; 2012; 15, pp. 3039-3057.
13. Hung, S.H.; Tu, C.H.; Yang, W.L. A portable, efficient inter-core communication scheme for embedded multicore platforms. J. Syst. Archit.; 2011; 57, pp. 193-205. [DOI: https://dx.doi.org/10.1016/j.sysarc.2010.11.003]
14. Sørensen, R.B.; Puffitsch, W.; Schoeberl, M.; Sparsø, J. Message passing on a time-predictable multicore processor. Proceedings of the 2015 IEEE 18th International Symposium on Real-Time Distributed Computing, ISORC 2015; Auckland, New Zealand, 13–17 April 2015; pp. 51-59. [DOI: https://dx.doi.org/10.1109/ISORC.2015.15]
15. Urbina, M. TIMEA: Time-Triggered Message-Based Multicore Architecture for AUTOSAR. Ph.D. Thesis; University of Siegen: Siegen, Germany, 2020.
16. Beckert, M. Scheduling Mechanisms for Efficient and Safe Automotive Systems Integration. Ph.D. Thesis; Technischen Universität Braunschweig: Braunschweig, Germany, 2019; [DOI: https://dx.doi.org/10.24355/dbbs.084-201911070747-5]
17. Shirvani, M.H.; Talouki, R.N. A novel hybrid heuristic-based list scheduling algorithm in heterogeneous cloud computing environment for makespan optimization. Parallel Comput.; 2021; 108, 102828. [DOI: https://dx.doi.org/10.1016/j.parco.2021.102828]
18. Seifhosseini, S.; Shirvani, M.H.; Ramzanpoor, Y. Multi-objective cost-aware bag-of-tasks scheduling optimization model for IoT applications running on heterogeneous fog environment. Comput. Netw.; 2024; 240, 110161. [DOI: https://dx.doi.org/10.1016/j.comnet.2023.110161]
19. Soliman, M.R.; Gracioli, G.; Tabish, R.; Pellizzoni, R.; Caccamo, M. Segment streaming for the three-phase execution model: Design and implementation. Proceedings of the 2019 IEEE Real-Time Systems Symposium (RTSS); Hong Kong, China, 3–6 December 2019; pp. 260-273. [DOI: https://dx.doi.org/10.1109/RTSS46320.2019.00032]
20. Tabish, R.; Mancuso, R.; Wasly, S.; Pellizzoni, R.; Caccamo, M. A real-time scratchpad-centric OS with predictable inter/intra-core communication for multi-core embedded systems. Real-Time Syst.; 2019; 55, pp. 850-888. [DOI: https://dx.doi.org/10.1007/s11241-019-09340-0]
21. Bellassai, D.; Biondi, A.; Biasci, A.; Morelli, B. Supporting logical execution time in multi-core POSIX systems. J. Syst. Archit.; 2023; 144, 102987. [DOI: https://dx.doi.org/10.1016/j.sysarc.2023.102987]
22. Gemlau, K.B.; Köhler, L.; Ernst, R.; Quinton, S. System-Level Logical Execution Time: Augmenting the Logical Execution Time Paradigm for Distributed Real-Time Automotive Software. ACM Trans. Cyber-Phys. Syst.; 2021; 5, pp. 1-27. [DOI: https://dx.doi.org/10.1145/3381847]
23. Gemlau, K.B.; Kohler, L.; Ernst, R. A Platform Programming Paradigm for Heterogeneous Systems Integration. Proc. IEEE; 2021; 109, pp. 582-603. [DOI: https://dx.doi.org/10.1109/JPROC.2020.3035874]
24. Kang, D.; Oh, J.; Choi, J.; Yi, Y.; Ha, S. Scheduling of Deep Learning Applications Onto Heterogeneous Processors in an Embedded Device. IEEE Access; 2020; 8, pp. 43980-43991. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2977496]
25. Hosseini Shirvani, M. A hybrid meta-heuristic algorithm for scientific workflow scheduling in heterogeneous distributed computing systems. Eng. Appl. Artif. Intell.; 2020; 90, 103501. [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103501]
26. Verucchi, M.; Theile, M.; Caccamo, M.; Bertogna, M. Latency-Aware Generation of Single-Rate DAGs from Multi-Rate Task Sets. Proceedings of the 2020 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS); Sydney, Australia, 21–24 April 2020; pp. 226-238. [DOI: https://dx.doi.org/10.1109/RTAS48715.2020.000-4]
27. Noorian Talouki, R.; Hosseini Shirvani, M.; Motameni, H. A hybrid meta-heuristic scheduler algorithm for optimization of workflow scheduling in cloud heterogeneous computing environment. J. Eng. Des. Technol.; 2022; 20, pp. 1581-1605. [DOI: https://dx.doi.org/10.1108/JEDT-11-2020-0474]
28. Becker, M.; Dasari, D.; Mubeen, S.; Behnam, M.; Nolte, T. End-to-end timing analysis of cause-effect chains in automotive embedded systems. J. Syst. Archit.; 2017; 80, pp. 104-113. [DOI: https://dx.doi.org/10.1016/j.sysarc.2017.09.004]
29. Igarashi, S.; Ishigooka, T.; Horiguchi, T.; Koike, R.; Azumi, T. Heuristic Contention-Free Scheduling Algorithm for Multi-core Processor using LET Model. Proceedings of the 2020 IEEE/ACM 24th International Symposium on Distributed Simulation and Real Time Applications, DS-RT 2020; Prague, Czech Republic, 14–16 September 2020; [DOI: https://dx.doi.org/10.1109/DS-RT50469.2020.9213582]
30. Kopetz, H. Real-Time Systems: Design Principles for Distributed Embedded Applications; 2nd ed. Springer: New York, NY, USA, 2011.
31. Ecco, L.; Tobuschat, S.; Saidi, S.; Ernst, R. A mixed critical memory controller using bank privatization and fixed priority scheduling. Proceedings of the RTCSA 2014—20th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications; Chongqing, China, 20–22 August 2014; [DOI: https://dx.doi.org/10.1109/RTCSA.2014.6910550]
32. Martinez, J.; Sañudo, I.; Bertogna, M. Analytical Characterization of End-to-End Communication Delays With Logical Execution Time. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst.; 2018; 37, pp. 2244-2254. [DOI: https://dx.doi.org/10.1109/TCAD.2018.2857398]
33. Biondi, A.; Pazzaglia, P.; Balsini, A.; Natale, M.D. Logical Execution Time Implementation and Memory Optimization Issues in AUTOSAR Applications for Multicores. Proceedings of the 8th International Workshop on Analysis Tools and Methodologies for Embedded and Real-Time Systems (WATERS); Dubrovnik, Croatia, 27 June 2017.
34. Maia, L.; Fohler, G. Reducing End-to-End Latencies of Multi-Rate Cause-Effect Chains for the LET Model. arXiv; 2023; arXiv: 2305.02121
35. Wang, S.; Li, D.; Sifat, A.H.; Huang, S.Y.; Deng, X.; Jung, C.; Williams, R.; Zeng, H. Optimizing Logical Execution Time Model for Both Determinism and Low Latency. arXiv; 2024; arXiv: 2310.19699
36. Günzel, M.; Chen, K.H.; Ueter, N.; von der Brüggen, G.; Dürr, M.; Chen, J.J. Compositional Timing Analysis of Asynchronized Distributed Cause-effect Chains. ACM Trans. Embed. Comput. Syst.; 2023; 22, pp. 1-34. [DOI: https://dx.doi.org/10.1145/3587036]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The automotive industry has recently adopted multicore processors and microcontrollers to meet the requirements of new features, such as autonomous driving, and comply with the latest safety standards. However, inter-core communication poses a challenge in ensuring real-time requirements such as time determinism and low latencies. Concurrent access to shared buffers makes predicting the flow of data difficult, leading to decreased algorithm performance. This study explores the integration of Logical Execution Time (LET) and Time-Division Multiple Access (TDMA) models in multicore embedded systems to address the challenges in inter-core communication by synchronizing read/write operations across different cores, significantly reducing latency variability and improving system predictability and consistency. Experimental results demonstrate that this integrated approach eliminates data loss and maintains fixed operation rates, achieving a consistent latency of 11 ms. The LET-TDMA method reduces latency variability to approximately 1 ms, maintaining a maximum delay of 1.002 ms and a minimum delay of 1.001 ms, compared to the variability in the LET-only method, which ranged from 3.2846 ms to 8.9257 ms for different configurations.
Details
1 Faculty of Informatics, Autonomous University of Queretaro, Queretaro 76230, Mexico;
2 CICATA-Queretaro Unit, National Polytechnic Institute, Cerro Blanco No. 141, Col. Colinas del Cimatario, Queretaro 76090, Mexico;
3 Faculty of Engineering, Autonomous University of Queretaro, Queretaro 76010, Mexico;