This paper addresses the challenge of optimizing cloudlet resource allocation in an automated code evaluation system. The study models the relationship between system load and response time when users submit code to an online evaluation platform, LambdaChecker, which operates a cloudlet-based processing pipeline comprising code correctness checks, static analysis, and design-pattern detection with a locally hosted Large Language Model (LLM). We develop a mathematical model of this pipeline and apply it to the LambdaChecker resource management problem. The proposed approach is evaluated using both simulations and real contest data, focusing on improvements in average response time, resource utilization efficiency, and user satisfaction. The results indicate that adaptive scheduling and workload prediction effectively reduce waiting times without substantially increasing operational costs. Overall, the study suggests that systematic cloudlet optimization can enhance the educational value of automated code evaluation systems by improving responsiveness while preserving sustainable resource usage.
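The abstract describes modeling the relationship between system load and response time for a cloudlet processing pipeline. As a minimal illustrative sketch (not necessarily the paper's actual model), one standard way to capture this relationship is an M/M/c queueing abstraction, where c evaluation workers serve submissions arriving at a given rate; the function names below are assumptions for illustration only:

```python
import math

def erlang_c(c: int, rho: float) -> float:
    """Probability that an arriving job must wait (Erlang C formula)
    for an M/M/c queue with per-server utilization rho."""
    a = c * rho  # offered load
    poisson_terms = sum(a**k / math.factorial(k) for k in range(c))
    queue_term = (a**c / math.factorial(c)) * (1.0 / (1.0 - rho))
    return queue_term / (poisson_terms + queue_term)

def mean_response_time(arrival_rate: float, service_rate: float, c: int) -> float:
    """Expected time in system (queueing wait + service) for an M/M/c queue.

    arrival_rate: submissions per second; service_rate: evaluations per
    second per worker; c: number of cloudlet workers.
    """
    rho = arrival_rate / (c * service_rate)
    if rho >= 1.0:
        raise ValueError("system is unstable (utilization >= 1)")
    p_wait = erlang_c(c, rho)
    mean_wait = p_wait / (c * service_rate - arrival_rate)
    return mean_wait + 1.0 / service_rate
```

Under this abstraction, sweeping the arrival rate (system load) yields the load/response-time curve that an adaptive scheduler would use to decide how many workers to provision; for c = 1 the formula reduces to the familiar M/M/1 result 1/(μ − λ).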
Index Terms: Software; User experience; Computer science; Mathematical models; Optimization techniques; Resource allocation; Automation; Resource management; Workloads; Feedback; Generative artificial intelligence; Simulation; Static code analysis; Large language models; Educational objectives; Pattern analysis; Science education; User satisfaction; Optimization; Teaching assistants; Response time; Resource utilization; Learning
