Content area
The development of information technology has left the traditional financial management model of power supply enterprises unable to meet the needs of modern enterprises for data processing and decision support. This study aims to implement a financial information software system for power supply companies built on the Java 2 Platform Enterprise Edition (J2EE) framework, thereby enhancing the precision of financial data processing. It proposes financial information mining based on an improved Apriori algorithm and designs and implements the software system under the J2EE framework. The improved Apriori algorithm raises the efficiency of the data mining process by introducing a dynamic update mechanism and updating frequent itemsets only when necessary. The algorithm also adopts parallelization technology to accelerate the generation of frequent itemsets and the extraction of association rules through distributed computing resources. The results indicated that the designed system could effectively mine frequent itemsets and association rules in financial data and outperformed existing systems in key performance indicators such as CPU usage and system throughput. Its CPU usage remained between 40% and 60% and its throughput stayed at roughly 30 Mb/s, whereas the CPU usage of the comparison system fluctuated between 20% and 80%, with throughput climbing from nearly 10 Mb/s to nearly 30 Mb/s. In addition, the system response time was significantly reduced, and data accuracy reached 99.9%. The financial information software system for power supply companies based on the J2EE framework can therefore effectively support the financial decision-making and management of power supply companies. This is crucial for advancing enterprise informatization, strengthening the informatization level of the entire energy industry, and providing valuable experience for other enterprises in the same industry.
Highlights
The proposed system combines the J2EE framework with an improved Apriori algorithm to improve the efficiency of financial data mining.
The system performs strongly, with stable CPU usage, short response times, and data accuracy as high as 99.9%.
The system optimizes the financial management process and promotes the informatization development of the industry.
Introduction
In today’s digital age, as a key link in energy supply, the informatization of Financial Management (FM) in Power Supply Companies (PSCs) is essential for enhancing operational efficiency and market competitiveness. The advancement of technologies such as big data, cloud computing, and Artificial Intelligence (AI) presents new challenges and opportunities for PSCs’ FM [1, 2]. However, PSCs currently face issues including insufficient data mining, information silos, and poor system scalability in their financial informatization practices. These issues severely limit the efficiency of FM and have become a bottleneck for industry development [3, 4]. With intensifying market competition and the growing complexity of business processes, conventional FM models cannot satisfy the requirements of modern enterprises for data processing and decision support. The Java 2 Platform Enterprise Edition (J2EE) framework, as a mature enterprise-level application development platform, offers strong technical support for building efficient and stable financial information systems through its cross-platform capability, componentization, and security features [5, 6].
Financial informatization refers to the use of information technology to collect, store, analyze, and disseminate financial data in order to improve the efficiency and quality of financial services and enhance the decision-support capabilities of financial institutions. In recent years, research on financial informatization and J2EE technology in PSCs has made progress. Traditional remote-branch FM systems have poor node recovery capability and insufficient robustness, resulting in low system reliability. In view of this, Bi adopted the J2EE architecture to store large amounts of unstructured data on a cloud platform for data classification, retrieval, and access control. The system successfully recovered 9 faulty nodes, was highly consistent with the standard sample distribution, and maintained robustness above 95%, verifying the good reliability of the FM assistance function [7]. Al Hawari proposed a design-pattern-based solution to address functional features that appear repeatedly, sometimes hundreds of times, in complex information systems. These design patterns were generally described through UML diagrams for use on various web development platforms and support popular object-oriented programming languages. They were reused many times in implementing six information systems, with one pattern used over 700 times; some of the patterns had not previously been explored in related research [8]. Mei et al. proposed a research method for the application architecture and implementation technology of electric power marketing management information systems, responding to the current state of informatization construction of electric power marketing systems in modern economic development and to the challenges of market competition and market-share expansion. This design enhanced the operational efficiency of the power marketing process and had a positive impact on promoting innovative development [9]. Li et al. proposed an approach for constructing an integrated enterprise management system built on an information reconstruction model and Internet of Things (IoT) technology to raise the level of enterprise management information system construction. At the algorithm design level, genetic algorithms were optimized to address the tendency of traditional algorithms to get stuck in local optima, thereby enhancing the robustness of the system to complex data scenarios. The system performed excellently in terms of regression performance, response time, and hit count, providing an efficient solution for enterprise management informatization [10].
In summary, how to fully utilize the advantages of the J2EE framework to improve a system’s data processing capabilities, enhance its scalability and security, and effectively integrate emerging technologies such as cloud computing and big data with the J2EE framework are currently hot and difficult research topics. Facing these challenges, and to develop a system capable of processing large amounts of financial data and providing in-depth analysis, this study proposes a design scheme for a Financial Information Software (FIS) system for PSCs based on the J2EE framework. The scheme aims to achieve efficient processing and accurate analysis of financial data by integrating technologies such as data mining, AI, and cloud computing. The proposed system differs significantly from previous J2EE-based financial systems (such as the cloud-based financial information system proposed by Bi [7]) in several key aspects. It is customized specifically for the financial informatization needs of PSCs, optimizing core financial functions such as accounting processing, cost analysis, and budget preparation to suit the particular business processes of the energy industry. The Improved Apriori Algorithm (IAA) is adopted for data mining; through a dynamic update mechanism and parallel processing, the efficiency and accuracy of Financial Information Mining (FIM) are improved. Compared with the system proposed by Bi [7], the proposed system also places more emphasis on security and stability: by implementing end-to-end data encryption, secure API interfaces, and regular security audits, data protection and system security are further strengthened.
The innovation lies in combining the Apriori algorithm with the J2EE framework, which improves the efficiency of FIM and enhances the scalability and security of the system. Additionally, the paper will explore the application of IoT in enterprise-integrated management systems, providing PSCs with more comprehensive and efficient financial information solutions. The contribution lies in providing a new implementation path for the financial informatization of PSCs. By introducing and integrating multiple technologies, the intelligence level of financial data processing has been improved, and the FM process has been optimized.
Methods and materials
This study explores the theoretical basis, architecture design, and functional implementation of system design. Through in-depth analysis of the financial informatization needs of PSCs, an efficient and reliable FIS software system has been designed, aimed at improving data processing capabilities, optimizing business processes, and enhancing the scalability and security of the system. Meanwhile, this study also specifically describes the key technologies and methods used in the system implementation process, including data mining techniques, system architecture design principles, and modular implementation strategies.
Financial information mining of PSCs based on Apriori
In the financial information management of PSCs, data mining techniques can effectively reveal potential patterns and correlations in financial data, providing a basis for financial decision-making. The Apriori algorithm, as a classic Association Rule (AR) mining algorithm, has been widely utilized in many fields. Introducing it into the financial information management of PSCs can efficiently mine frequent itemsets and strong ARs in financial data, providing support for financial analysis and decision-making [11]. The steps for mining ARs from the financial information of PSCs are displayed in Fig. 1.
[See PDF for image]
Fig. 1
Steps of FIM association rules of PSCs
In Fig. 1, first, the transaction record data are input into the transaction record database, and then frequent itemsets are identified through data analysis. Based on these frequent itemsets, ARs are generated. Finally, these ARs are output and organized into a set of related rule sets. Throughout the entire process, users can supervise and intervene in the data mining process to ensure the accuracy and practicality of the mining results. The core of the Apriori lies in the generation of frequent itemsets and the mining of ARs to reveal the intrinsic connections in the data. In the FIM of PSCs, datasets usually contain a large amount of financial transaction records, expense categories, income and expenditure situations, and other information. By setting support and confidence thresholds, ARs with realistic meaning can be filtered out [12, 13]. The support degree represents the percentage of transactions containing itemsets X and Y, calculated as given by Eq. (1).
$$\mathrm{Support}(X \Rightarrow Y) = P(X \cup Y) = \frac{\sigma(X \cup Y)}{|D|} \tag{1}$$
In Eq. (1), $P(X \cup Y)$ is the probability of X and Y appearing simultaneously, estimated as the number of transactions $\sigma(X \cup Y)$ containing both itemsets divided by the total number of transactions $|D|$ in the database. The confidence level represents the percentage of transactions containing X that also contain Y, as given by Eq. (2).
$$\mathrm{Confidence}(X \Rightarrow Y) = P(Y \mid X) = \frac{\sigma(X \cup Y)}{\sigma(X)} \tag{2}$$
In Eq. (2), $\mathrm{Confidence}(X \Rightarrow Y)$ is the confidence level of the AR, i.e., the Conditional Probability (CP) $P(Y \mid X)$ that a transaction containing X also contains Y. $\sigma(X \cup Y)$ and $\sigma(X)$ are the numbers of transactions in the database that contain $X \cup Y$ and $X$, respectively. In addition, the degree of improvement (lift) is an important indicator for measuring the importance of ARs. It is the ratio of the probability of Y occurring in transactions that contain X to the probability of Y occurring in the population, as given by Eq. (3).
$$\mathrm{Lift}(X \Rightarrow Y) = \frac{P(Y \mid X)}{P(Y)} = \frac{\mathrm{Confidence}(X \Rightarrow Y)}{P(Y)} \tag{3}$$
In Eq. (3), $\mathrm{Lift}(X \Rightarrow Y)$ is the degree of improvement; a value greater than 1 indicates that the occurrence of X raises the probability of Y, while a value of 1 indicates independence. $P(Y)$ is the overall probability of the occurrence of Y. In the process of AR mining, the calculation of these probabilities is based on the frequencies of occurrence of itemsets. For each frequent itemset, all possible non-empty subsets generate corresponding rules; the basis for generating these rules is given by Eq. (4).
$$\frac{\sigma(l)}{\sigma(s)} \ge \mathrm{min\_conf} \tag{4}$$
In Eq. (4), $\sigma(l)$ and $\sigma(s)$ are the numbers of transactions in which the frequent itemset $l$ and its non-empty subset $s$ appear in the database, and $\mathrm{min\_conf}$ is the minimum confidence threshold used to filter out strong ARs: the rule $s \Rightarrow (l - s)$ is output only if the inequality holds. The study improves the Apriori algorithm. The data preprocessing stage reduces noise and inconsistency through data cleaning and normalization, while data compression and dimensionality reduction techniques lower the dataset size [14, 15]. The specific process of the improved Apriori is exhibited in Fig. 2.
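As a concrete illustration, the support, confidence, and lift measures of Eqs. (1)–(3) can be computed over an in-memory transaction list in Java. This is a minimal sketch only; the class name and the expense-item transactions are hypothetical examples, not data or code from the paper.

```java
import java.util.*;

public class RuleMetrics {
    // Support: fraction of transactions containing every item in `items`.
    public static double support(List<Set<String>> db, Set<String> items) {
        long hits = db.stream().filter(t -> t.containsAll(items)).count();
        return (double) hits / db.size();
    }

    // Confidence of X => Y: support(X ∪ Y) / support(X), as in Eq. (2).
    public static double confidence(List<Set<String>> db, Set<String> x, Set<String> y) {
        return support(db, union(x, y)) / support(db, x);
    }

    // Lift of X => Y: confidence(X => Y) / support(Y); > 1 means positive correlation.
    public static double lift(List<Set<String>> db, Set<String> x, Set<String> y) {
        return confidence(db, x, y) / support(db, y);
    }

    static Set<String> union(Set<String> a, Set<String> b) {
        Set<String> u = new HashSet<>(a);
        u.addAll(b);
        return u;
    }

    public static void main(String[] args) {
        // Hypothetical financial transaction records of a PSC.
        List<Set<String>> db = List.of(
            Set.of("electricity_revenue", "material_cost"),
            Set.of("electricity_revenue", "material_cost", "payroll"),
            Set.of("electricity_revenue", "payroll"),
            Set.of("material_cost"));
        Set<String> x = Set.of("electricity_revenue");
        Set<String> y = Set.of("material_cost");
        System.out.printf(Locale.ROOT, "support=%.2f confidence=%.2f lift=%.2f%n",
            support(db, union(x, y)), confidence(db, x, y), lift(db, x, y));
        // prints: support=0.50 confidence=0.67 lift=0.89
    }
}
```

A rule would then be kept as a strong AR only if its support and confidence clear the chosen thresholds, mirroring the filter in Eq. (4).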
[See PDF for image]
Fig. 2
Flowchart of improved Apriori algorithm
In Fig. 2, the implementation process of the IAA is as follows. Step 1 scans the financial dataset and generates the frequent 1-itemsets. The financial data of PSCs include items such as electricity revenue, material procurement costs, and employee compensation; scanning the dataset determines which items appear frequently across all transaction records. Candidate itemsets are then generated from the current frequent itemsets. The generation process implements a more efficient pruning strategy and introduces a dynamic update mechanism, updating frequent itemsets only when necessary. AR extraction is achieved by designing heuristic rule-extraction methods and introducing machine learning methods to automatically identify and extract strong ARs. To improve performance, parallel processing and distributed computing are adopted, while frequently used data structures are cached to reduce duplicate scans [16, 17]. The pseudo-code of the IAA is shown in Fig. 3.
[See PDF for image]
Fig. 3
Pseudo-code of the improved Apriori algorithm
In Fig. 3, the algorithm starts by scanning the transaction database, calculates the support degree of each item, and screens out the frequent 1-itemsets that meet the minimum support threshold. Subsequently, the algorithm generates higher-dimensional frequent itemsets through an iterative process. At each step, candidate itemsets are constructed from the frequent itemsets obtained in the previous step, and their support degrees are calculated to screen out the frequent itemsets again. When no new frequent itemset can be generated, the iteration terminates. After obtaining all the frequent itemsets, the algorithm extracts ARs from them, calculating the lift (degree of improvement) for all non-empty subsets of each frequent itemset to identify the strong ARs. Finally, the algorithm outputs the list of frequent itemsets and the list of strong ARs. This improved version enhances efficiency and accuracy by introducing data cleaning and normalization in the preprocessing stage and by adopting more efficient pruning strategies and a dynamic update mechanism during frequent-itemset generation.
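The level-wise generate-and-prune loop described for Fig. 3 can be sketched in Java as follows. This is an illustrative simplification rather than the authors' implementation: all names are hypothetical, and `parallelStream` stands in for the paper's distributed support counting.

```java
import java.util.*;
import java.util.stream.Collectors;

public class AprioriSketch {
    // Level-wise frequent-itemset mining: generate candidates from the
    // previous level, prune by the Apriori property, count support in
    // parallel, and stop when no new frequent itemsets appear.
    public static List<Set<Set<String>>> frequentItemsets(
            List<Set<String>> db, double minSupport) {
        int minCount = (int) Math.ceil(minSupport * db.size());
        // Frequent 1-itemsets from a single scan of the database.
        Map<String, Long> counts = db.stream().flatMap(Set::stream)
            .collect(Collectors.groupingBy(i -> i, Collectors.counting()));
        Set<Set<String>> current = counts.entrySet().stream()
            .filter(e -> e.getValue() >= minCount)
            .map(e -> Set.of(e.getKey()))
            .collect(Collectors.toSet());
        List<Set<Set<String>>> levels = new ArrayList<>();
        while (!current.isEmpty()) {
            levels.add(current);
            Set<Set<String>> prev = current;
            // Prune candidates whose (k-1)-subsets are not all frequent,
            // then count support in parallel over the candidates.
            current = join(prev).parallelStream()
                .filter(c -> allSubsetsFrequent(c, prev))
                .filter(c -> db.stream()
                    .filter(t -> t.containsAll(c)).count() >= minCount)
                .collect(Collectors.toSet());
        }
        return levels;
    }

    // Join step: merge pairs of k-itemsets that differ in exactly one item.
    static Set<Set<String>> join(Set<Set<String>> level) {
        Set<Set<String>> out = new HashSet<>();
        for (Set<String> a : level)
            for (Set<String> b : level) {
                Set<String> u = new HashSet<>(a);
                u.addAll(b);
                if (u.size() == a.size() + 1) out.add(u);
            }
        return out;
    }

    // Apriori property: every (k-1)-subset of a frequent k-itemset is frequent.
    static boolean allSubsetsFrequent(Set<String> c, Set<Set<String>> prev) {
        for (String item : c) {
            Set<String> sub = new HashSet<>(c);
            sub.remove(item);
            if (!prev.contains(sub)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        List<Set<String>> db = List.of(
            Set.of("electricity_revenue", "material_cost"),
            Set.of("electricity_revenue", "material_cost", "payroll"),
            Set.of("electricity_revenue", "payroll"),
            Set.of("material_cost", "payroll"));
        System.out.println(frequentItemsets(db, 0.5));
    }
}
```

The dynamic-update and caching optimizations of the IAA are not shown here; this sketch only captures the core iteration that they accelerate.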
Design of FIS system for PSCs based on J2EE framework
The FIM of PSCs based on the Apriori algorithm provides a theoretical basis and practical case for data mining in system design. By conducting in-depth AR mining on financial data, potential patterns and relationships within the financial data have been revealed. On this basis, the application of the J2EE framework will ensure the systematization, modularity, and maintainability of system design, thereby promoting the automation and informatization of FM processes in PSCs. The J2EE framework provides a complete set of specifications and components for developing and deploying multi-tier, component-based enterprise-level applications [18, 19]. For PSCs, FIS systems need to handle a large amount of financial data, including accounting processing, cost accounting, financial analysis, etc. Therefore, the design of the system must ensure efficiency, security, and scalability. The basic architecture of the J2EE system consists of multiple levels to support complex enterprise-level applications, as shown in Fig. 4.
[See PDF for image]
Fig. 4
J2EE architecture diagram
In Fig. 4, the J2EE framework provides a layered solution for the FIS system of PSCs. It includes a presentation layer that uses Java Server Pages (JSP) technology to create a dynamic user interface for displaying financial data. The business logic layer uses Enterprise JavaBeans (EJB) to implement core financial rules and data processing, ensuring transaction management and security. The data access layer interacts with the database through Java Database Connectivity (JDBC) to maintain data consistency. The integration layer uses Web Services to integrate with other systems, and the enterprise information portal provides a unified information access point through Portlet technology [20]. In addition, the middleware of the J2EE framework plays a key role, providing application-server functions such as transaction management, message passing, security, and load balancing to enable effective data exchange and system integration. When designing FIS systems for PSCs, combining the J2EE framework with the Model View Controller (MVC) design pattern is an efficient approach. The MVC pattern segments the application into three core components, model, view, and controller, to achieve modularity and maintainability of the software system, as displayed in Fig. 5.
[See PDF for image]
Fig. 5
Workflow of the MVC framework
In Fig. 5, the model layer is responsible for encapsulating financial data and business logic, including financial functions such as accounting processing, cost analysis, and budget preparation. The model layer interacts with the database, performs data addition, deletion, modification, and query operations, and ensures data consistency and integrity. The view layer presents financial data and the user interface: it interacts with the model layer through the controller, forwards user operation requests to the controller, and receives data from the model layer to update the interface. The controller layer serves as a bridge between the model and view layers, receiving user input, processing business logic, and updating models and views. It handles users’ financial query requests and transaction operations, calls the model layer’s methods to process data according to business rules, and selects appropriate views to display the results, providing users with an intuitive and easy-to-use financial information management platform. By adopting the MVC design pattern, the FIS system of a PSC achieves the separation of business logic and user interface, improving maintainability and scalability. The FIS system architecture for PSCs based on the J2EE framework achieves high cohesion and low coupling through layered and modular design, supporting complex business requirements and data processing while ensuring system security, stability, and scalability, as shown in Fig. 6.
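The MVC division of responsibilities described above can be illustrated with plain Java classes. This is a schematic sketch under simplifying assumptions: in the actual system the view would be a JSP page and the model an EJB-backed component, and all class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

public class MvcSketch {
    // Model: encapsulates financial data and business logic.
    public static class AccountModel {
        private final Map<String, Double> balances = new HashMap<>();
        public void post(String account, double amount) {
            balances.merge(account, amount, Double::sum); // bookkeeping update
        }
        public double balance(String account) {
            return balances.getOrDefault(account, 0.0);
        }
    }

    // View: renders data handed to it; knows nothing about business rules.
    public static class BalanceView {
        public String render(String account, double balance) {
            return String.format(Locale.ROOT, "%s: %.2f", account, balance);
        }
    }

    // Controller: receives user input, updates the model, selects the view.
    public static class AccountController {
        private final AccountModel model;
        private final BalanceView view;
        public AccountController(AccountModel m, BalanceView v) {
            model = m;
            view = v;
        }
        public String handlePosting(String account, double amount) {
            model.post(account, amount);                          // business operation
            return view.render(account, model.balance(account));  // view selection
        }
    }

    public static void main(String[] args) {
        AccountController c =
            new AccountController(new AccountModel(), new BalanceView());
        c.handlePosting("electricity_revenue", 1200.50);
        System.out.println(c.handlePosting("electricity_revenue", 300.00));
        // prints: electricity_revenue: 1500.50
    }
}
```

Because the view touches the model only through the controller, either side can be replaced (e.g., swapping the console renderer for a JSP) without changing the other, which is the maintainability benefit the pattern is chosen for.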
[See PDF for image]
Fig. 6
FIS system architecture of PSCs
In Fig. 6, the user interface layer interacts with users and provides an intuitive interface. This layer uses technologies such as JSP, HTML5, CSS3, and JavaScript to build a responsive user interface, ensuring that users can easily access and operate system functions; asynchronous transmission and processing of front-end and back-end data are achieved through AJAX and JSON to enhance the user experience. The business logic layer is the core of the system, responsible for processing the business logic of financial data. Under the J2EE framework, this layer is implemented by EJB components, encapsulating core business logic such as financial auditing, accounting processing, and cost analysis. EJB components support transaction management, concurrency control, and security control through container management, ensuring the accuracy of business processes and data consistency. In addition, the business logic layer implements asynchronous communication and data exchange with other systems through the Java Message Service (JMS), ensuring the flexibility and scalability of system integration. The data access layer interacts with the database and implements the persistence of financial data; it uses the JDBC API to execute SQL statements for data addition, deletion, modification, and query. The integration layer integrates the system with the PSC’s existing Enterprise Resource Planning (ERP) system, human resources system, and other information management systems. Through the Enterprise Service Bus (ESB) and Web service technology, data sharing and service invocation between different systems are achieved, ensuring smooth information flow and coherent business processes. Table 1 shows the implementation of system functions.
Table 1. System function implementation table
Serial number | Functional module | Function description |
|---|---|---|
1 | Accounting processing module | Realize the functions of financial data entry, review, bookkeeping, etc., and support multi-currency and multi-ledger operations. |
2 | Cost accounting module | Conduct detailed accounting of various costs of the PSC, including material procurement, labor costs, equipment depreciation, etc. |
3 | Financial analysis module | It provides functions such as financial statement generation, data analysis, and budget execution analysis, and supports visual display. |
4 | Budget preparation module | Support annual budget preparation and quarterly budget adjustment, and provide monitoring and early warning functions for budget execution progress. |
5 | Report generation module | Automatically generate various financial statements, such as balance sheets, income statements, cash flow statements, etc., and support custom reports. |
6 | Data mining module | Based on the IAA, frequent itemsets and ARs in financial data are mined to provide support for decision-making. |
7 | System management module | It includes functions such as user permission management, data backup and recovery, and system parameter configuration to ensure the safe operation of the system. |
8 | Integrated interface module | Achieve seamless integration with other information systems such as ERP systems and human resource systems to ensure data sharing and the continuity of business processes. |
In Table 1, the user management functions implement user addition, deletion, modification, and query, including basic user information management and permission configuration, ensuring security and flexibility. The FM functions enhance the precision of financial data processing by automating the generation of various financial statements. Audit tracking records every step of the audit process, facilitating tracking and tracing. The data mining and analysis functions provide powerful trend-analysis tools that help PSCs gain insight into financial conditions and market trends. System monitoring implements real-time monitoring and abnormal-event alarms, enhancing system stability and security. Decision support provides management with decision analysis and recommendations to support scientific decision-making.
In terms of data protection, the system adopts advanced encryption technologies such as AES and RSA to encrypt financial data during storage and transmission, ensuring that the data cannot be easily accessed without authorization. In addition, the system implements a data backup and recovery mechanism to prevent data loss or damage. For user identity verification, the system adopts multi-factor authentication, combining passwords, One-Time Passwords (OTP), and biometric technologies (such as fingerprint or facial recognition) to strengthen the verification of user identities. This multi-level verification effectively prevents identity impersonation and unauthorized access. In terms of system security, strict access control policies are enforced: Role-Based Access Control (RBAC) ensures that only authorized users can access specific financial data and system functions. Meanwhile, the system deploys firewalls and Intrusion Detection Systems (IDS) to monitor and defend against potential network attacks. In enterprise- or IoT-integrated environments, the system further strengthens data protection and security by implementing end-to-end data encryption, secure API interfaces, and regular security audits. End-to-end encryption ensures data security during transmission, secure API interfaces provide a safe communication channel for integration with other systems, and regular security audits help identify and fix potential vulnerabilities promptly.
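For instance, AES-based protection of a financial record can be sketched with the JDK's built-in `javax.crypto` API using AES in GCM mode. This is an illustrative round trip only, with a hypothetical class name; key management, the RSA key exchange, and the audit hooks mentioned above are deliberately omitted.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Arrays;

public class AesSketch {
    // Generate a fresh AES key of the given size (128/192/256 bits).
    public static SecretKey newKey(int bits) {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(bits);
            return kg.generateKey();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Encrypt with AES-GCM (128-bit auth tag); iv must be unique per message.
    public static byte[] encrypt(SecretKey key, byte[] iv, byte[] plain) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return c.doFinal(plain);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Decrypt and verify the GCM authentication tag in one step.
    public static byte[] decrypt(SecretKey key, byte[] iv, byte[] cipherText) {
        try {
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
            return c.doFinal(cipherText);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        SecretKey key = newKey(256);
        byte[] iv = new byte[12];                 // 96-bit nonce
        new SecureRandom().nextBytes(iv);
        byte[] record = "voucher#1024;amount=3500.00".getBytes(StandardCharsets.UTF_8);
        byte[] ct = encrypt(key, iv, record);
        System.out.println(Arrays.equals(record, decrypt(key, iv, ct))); // prints: true
    }
}
```

GCM is chosen here because it authenticates as well as encrypts, so tampering with a stored record is detected at decryption time; in the deployed system this symmetric layer would sit alongside the RBAC and audit mechanisms described above.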
Results
This section documents the key requirements analysis, system design, functional implementation, and the specific parameter settings used during the testing phase. In addition, system performance was tested, including a comparative analysis of key performance indicators such as CPU occupancy and system throughput; comparison with existing systems demonstrates the proposed system’s improvements in efficiency and accuracy.
Comparison of experimental configuration and IAA performance
The preprocessing of financial data is based on the actual business scenarios of the PSC, covering various types of financial transactions including electricity fee income, material procurement costs, and employee compensation. The dataset has been partitioned according to actual business requirements to ensure efficient processing by the system. The time range spans January 2023 to December 2024, covering two complete financial years so as to reflect the financial situation across different seasons and business cycles. In the preprocessing stage, the data are cleaned, normalized, and reduced in dimensionality to ensure integrity and accuracy while reducing noise and inconsistency. In the experiments on the FIS system for PSCs based on the J2EE framework, a series of tests was performed to verify system stability. Table 2 shows the parameter configuration of the experiment.
Table 2. Experimental parameter configuration table
Parameter entry | Model/Configuration | Numerical value |
|---|---|---|
Server CPU | Intel Xeon Gold 6148 | 2 × 3.0 GHz |
Server memory | DDR4 | 256GB |
Server hard disk | NVMe SSD | 2 × 1 TB |
Client CPU | Intel Core i5-8500 | 3.0 GHz |
Client memory | DDR3 | 8GB |
Client hard disk | SSD | 256GB |
Network equipment | Cisco Catalyst 9300 | – |
Operating System (Server) | Red Hat EL 8.3 | – |
Operating system (Client) | Windows 10 Pro | – |
Application server software | WildFly 18.0 | – |
Database software | MySQL 8.0 | – |
Development tool | IntelliJ IDEA 2020.3 | – |
SSL certificate | Self-signed | – |
Firewall rule | Port 80, 443 | – |
The server is a Dell PowerEdge R740 equipped with two Intel Xeon Gold 6148 processors at a base frequency of 3.0 GHz, 256 GB of DDR4 memory, and two 1 TB NVMe SSDs. The network device is a Cisco Catalyst 9300 series switch, providing a stable Gigabit Ethernet connection. The operating system is Red Hat Enterprise Linux 8.3, the application server is WildFly 18.0, and the database software is MySQL 8.0. These configurations provide powerful hardware support and a stable operating environment, ensuring the validity and repeatability of the experimental results. First, the performance of the IAA is tested and compared with that of the original Apriori algorithm. The error convergence of the Apriori algorithm before and after improvement is shown in Fig. 7.
[See PDF for image]
Fig. 7
Error convergence of Apriori algorithm before and after improvement
In Fig. 7a, the improved algorithm reduces the error value to below 10⁻⁶ after approximately 20 iterations and remains stable, while the original algorithm requires nearly 100 iterations to reach a similar error level. In addition, the improved algorithm exhibits a smoother downward trend throughout the iteration process, indicating better convergence. In Fig. 7b, the original Apriori algorithm only reduces the error value to nearly 10⁻⁴ in the first 20 iterations. The IAA thus offers significantly improved error convergence speed, stability, and robustness. The IAA is further compared with the original Apriori algorithm, Frequent Pattern Growth (FP-Growth), Equivalence Class Transformation (Eclat), and Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The comparison of the accuracy, recall rate, and F1 value of these algorithms is shown in Fig. 8.
[See PDF for image]
Fig. 8
Comparison of accuracy, recall rate and F1 value of several algorithms
In Fig. 8a, the improved Apriori algorithm achieves an accuracy rate of approximately 0.9 when the number of iterations is 100. As the number of iterations increases to 500, the accuracy rate remains stable above 0.9, demonstrating high stability. In Fig. 8b, the recall rate of the improved Apriori algorithm has reached approximately 0.85 when the number of iterations is 100. As the number of iterations increases, the recall rate stabilizes at around 0.9, demonstrating a strong recall ability. In Fig. 8c, the F1 value of the improved Apriori algorithm has reached approximately 0.85 when the number of iterations is 100. As the number of iterations increases, the F1 value stabilizes at around 0.9, achieving the optimal comprehensive performance. The numerical comparison is shown in Table 3.
Table 3. Accuracy comparison table of several algorithms
Test set/Verification set | IAA | Apriori | FP-Growth | Eclat | DBSCAN |
|---|---|---|---|---|---|
Test set Test 1 | 90 | 70 | 75 | 60 | 80 |
Test set Test 2 | 88 | 65 | 72 | 58 | 78 |
Test set Test 3 | 89 | 68 | 74 | 59 | 79 |
Test set Test 4 | 91 | 72 | 76 | 61 | 81 |
Verification set Test 1 | 89 | 69 | 73 | 57 | 77 |
Verification set Test 2 | 87 | 64 | 71 | 56 | 76 |
Verification set Test 3 | 88 | 67 | 73 | 58 | 78 |
Verification set Test 4 | 90 | 71 | 75 | 60 | 80 |
To further support the scalability of the improved algorithm and demonstrate its applicability across the industry, case-study and pilot-deployment data are examined. These data cover system performance in enterprises of different scales and business scenarios, as well as the system’s scalability when processing larger datasets. The results are shown in Table 4.
Table 4. Comparison table of algorithm performance indicators and scalability test data
Algorithm name | IAA | FP-Growth algorithm | Eclat algorithm |
|---|---|---|---|
Average CPU usage rate (%) | 45 | 50 | 40 |
Average throughput (Mb/s) | 32 | 28 | 25 |
Data accuracy (%) | 99.9 | 99.7 | 99.5 |
Execution time (s) | 150 | 180 | 120 |
Memory Usage (MB) | 1024 | 1200 | 900 |
Scalability testing (number of cases) | 5 | 3 | 2 |
In Table 4, the IAA is tested in five enterprises of different scales and business scenarios, covering small, medium-sized, and large enterprises; in all cases the system demonstrates good scalability and adapts to the needs of enterprises of different scales. The FP-Growth algorithm is tested in three cases covering different business scenarios; it performs well in small and medium-sized enterprises, but its performance declines on larger datasets. The Eclat algorithm is tested in two cases; it performs fairly well in small enterprises, but both performance and efficiency decline when it is scaled to larger datasets.
Performance analysis of FIS system for PSCs
A comparison is made between the proposed FIS system (System A) for PSCs based on the J2EE framework and the financial software system (System B) based on cloud technology. System B is a widely deployed business cloud solution that offers a variety of functions including FM, budgeting, cost analysis, and report generation. This system is designed based on the mainstream cloud service architecture in the current market and utilizes the flexibility and scalability of cloud computing to optimize the financial data processing flow. System B adopts advanced cloud-native technologies, containerization, and microservice architecture to ensure the high availability and rapid response capability of the system. The relative residual ratio and system response time of the two systems are shown in Fig. 9.
Fig. 9. RRR and system response time of the two systems. [See PDF for image]
In Fig. 9a, when the sample size is 0, the relative residual ratio (RRR) of System B is about 0.08, while that of System A is close to 0. As the sample size increases, System B's RRR gradually decreases to around 0.02, whereas System A's RRR remains below 0.02 throughout. In Fig. 9b, at a sample size of 0 both systems respond in about 60 milliseconds. As the sample size grows, System B's response time rises steadily, approaching 100 milliseconds at a sample size of 500; System A's response time also increases, but far more slowly, reaching only about 70 milliseconds at that point. System A therefore exhibits higher stability in RRR, with smaller errors and higher accuracy in data processing, and responds to user requests faster when handling large amounts of data. The CPU utilization and system throughput of the two systems are shown in Fig. 10.
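The paper does not state a formula for the RRR. A common convention, assumed here purely for illustration, is the Euclidean norm of the residual between predicted and reference values divided by the norm of the reference values:

```java
public class RelativeResidual {
    // Relative residual ratio ||pred - ref||_2 / ||ref||_2. The exact
    // definition used in the paper is not given; this Euclidean form is an
    // assumed common convention, shown only to make the metric concrete.
    public static double rrr(double[] pred, double[] ref) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < ref.length; i++) {
            double d = pred[i] - ref[i];
            num += d * d;          // squared residual
            den += ref[i] * ref[i]; // squared reference magnitude
        }
        return Math.sqrt(num) / Math.sqrt(den);
    }

    public static void main(String[] args) {
        // Hypothetical financial figures: an RRR near 0 means the processed
        // values track the reference values closely.
        double[] ref  = {100, 200, 300};
        double[] pred = {102, 198, 303};
        System.out.printf("RRR = %.4f%n", rrr(pred, ref));
    }
}
```

Under this reading, System A's RRR staying below 0.02 means its processed results deviate from the reference data by under 2% in relative terms across all sample sizes.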
Fig. 10. CPU occupancy and system throughput of the two systems. [See PDF for image]
In Fig. 10a, the CPU occupancy of System B fluctuates significantly throughout the runtime, changing frequently between 20% and 80%, while that of System A is relatively stable, mostly between 40% and 60%. This indicates that System B's use of CPU resources is unstable when processing tasks, whereas System A shows good resource management and load balancing. In Fig. 10b, System A's throughput remains stable at approximately 30 Mb/s across data volumes, while System B's throughput grows with data volume from nearly 10 Mb/s to nearly 30 Mb/s. System B's throughput thus only approaches System A's level at large data volumes, while System A maintains stable processing capability throughout. Table 5 shows the comparison of indicators between System A and System B.
Table 5. Comparison results of system indicators
| Index | System A | System B |
|---|---|---|
| Response time (milliseconds) | 150 | 200 |
| CPU usage (%) | 30 | 40 |
| System throughput (TPS) | 500 | 400 |
| Concurrent processing capability (requests per second) | 300 | 250 |
| Data accuracy (%) | 99.9 | 99.8 |
| Customer satisfaction (1–5 points) | 4.8 | 4.6 |
| Security (days without vulnerabilities) | 180 | 150 |
| Scalability (modular score) | 8.5 | 7.0 |
| Maintenance cost-benefit ratio | 1:4 | 1:3 |
| System stability (uptime/month) | 99.9% | 99.5% |
| Energy consumption (kWh/h) | 0.8 | 1.0 |
| Backup and recovery (minutes) | 5 | 10 |
| Upgrade convenience (version upgrade time) | 30 min | 2 h |
In Table 5, System A remains vulnerability-free for 180 days, much longer than System B's 150 days, indicating that System A is more reliable in security protection. In the scalability assessment, System A scores 8.5 points against System B's 7.0, showing that System A is stronger in modularity and functional expansion. In cost-benefit ratio, System A's 1:4 exceeds System B's 1:3, indicating an advantage in cost control and benefit output. Overall, System A outperforms System B in performance, security, maintainability, and cost-effectiveness, making it the better choice for an FIS system in PSCs. Its efficiency, stability, and user satisfaction, together with its strength in security and scalability, allow it to adapt to constantly changing business needs and market challenges.
Conclusion
The paper attempted to improve the efficiency and accuracy of financial information management in PSCs by designing and implementing an FIS system based on the J2EE framework, using IAA data mining technology for the system design. The results showed that the designed system processed large amounts of financial data with high efficiency and accuracy. After about 20 iterations, the IAA reduced the error value to below 10⁻⁶ and remained stable, while the original algorithm required nearly 100 iterations to reach a similar error level. In addition, the system's CPU occupancy was relatively stable, mostly between 40% and 60%, and throughput was maintained at roughly 30 Mb/s. The system exhibited higher stability in RRR, with smaller errors and higher accuracy in data processing, handled large amounts of data more efficiently, and responded to user requests faster. This study also has limitations, such as insufficient validation of performance under extreme data loads, and it has not yet investigated in depth the integration of emerging technologies such as blockchain with financial information systems. Future work will focus on further optimizing system performance, especially for extremely large data volumes, and on exploring the application potential of blockchain and IoT technologies in FIS.
Acknowledgements
Not Applicable.
Author contributions
C.W. wrote the whole paper.
Funding
This paper was supported by the Construction and research of the intelligent management platform for digital courses under the Spark framework (NO: 232102321065).
Data availability
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© The Author(s) 2025. This work is published under http://creativecommons.org/licenses/by-nc-nd/4.0/ (the "License").