With the rapid development of artificial intelligence technology, which has promoted the rise of information technology and e-government, various government departments have accumulated a large amount of macroeconomic data. Faced with this volume of economic data, the main problem currently faced is how to analyse and utilise it to inform decision-making. Online analytical processing (OLAP) is a data warehouse tool for the interactive analysis of multidimensional data in various dimensions. In this paper, OLAP is applied to macroeconomic data analysis and mining, and an economic intelligent decision-making platform is built to analyse economic indicator data from multiple perspectives and in multiple dimensions. The platform implements OLAP modelling, metadata management, OLAP analysis and report query. For the pre-calculation and update of data cubes in OLAP modelling, a multiplexed cube aggregation complete cube calculation algorithm is proposed and analysed in detail. Practical application shows that the algorithm is feasible and easy to implement, supports the calculation of large data volume cubes and can use time slicing to update data cubes, effectively ensuring the timeliness and reliability of economic data information.
Povzetek: An economic intelligent decision-making platform is built, based on artificial intelligence and OLAP technology, for the multidimensional analysis of macroeconomic data.
Keywords: artificial intelligence technology, economic decision-making, OLAP, data mining, data analysis
Received: February 7, 2024
1 Introduction
With the upgrading of China's economy from scale and speed to quality and efficiency, supply-side reform, optimization of resource allocation, energy conservation and environmental protection have also become inseparable topics of the digital economy. As a product of the integration and development of digitalization and intelligence, intelligent decision-making is a data-driven industrial upgrade and a deeper excavation and utilization of data value. It shows obvious advantages in coordinating social and economic factors, optimizing enterprises' supply and demand relationship, and injecting new vitality into the digital economy. From data to decision-making is one of the important directions of the spiral leap of digital and intelligent technologies, the process of integrating multiple elements to solve large-scale complex problems, and an important key point of change in the digital economy.
From the technical level, the realisation of data-driven intelligent decision-making goes through three progressive leaps, each linked to a layer of the digital economy process, which is called the three-stage jump of digital transformation. The first level is the data description stage, i.e. the collection and management of data: based on computer and information science technology, various kinds of data statistics and management platforms have emerged, and the way we describe and record the world is becoming simpler and simpler. The second level is the analysis of patterns in data, i.e. the use of machine learning and statistical techniques to understand the reasons and laws behind things and make predictions for the future. The third level is modelling and solving decisions, which often requires extremely strong modelling and solving capabilities involving techniques such as machine learning and operations optimisation; the problems that can be solved extend to decision-making under large-scale constraints.
Intelligent decision-making technologies shine the light of wisdom on the digital economy, extending the value boundary of the digital economy and effectively enhancing the role of data elements at the core decision-making level. It also contributes to the high-quality development of the digital economy, allowing all types of economic agents to achieve steady and efficient growth by taking into account multiple factors such as policy, demand, energy consumption, costs and benefits when promoting digitalisation.
Smart decision-making is a core value of the digital economy, and it has been shown in different fields that it is building up the strength for the digital transformation of industries. The digital economy is characterised by an extremely diverse range of products, production and marketing models, and an increasingly complex allocation of resources, which will play an increasingly important role in improving the efficiency and effectiveness of industries and adjusting the relationship between supply and demand. From a comprehensive point of view, intelligent decision-making helps to upgrade the industry digitally.
The prosperity of the digital economy has given rise to a variety of new business models, endless marketing methods, a variety of marketing channels, complex and changing user needs, and increasingly important customer-centric and refined marketing, which poses a huge challenge to the enterprise supply chain. To cope with the various changes in the supply chain, what enterprises need most is the ability to respond and deploy with agility based on market demand. Otherwise, it is easy to oversupply or backlog products. With intelligent decision-making solutions, companies can plan and arrange for several aspects, from demand forecasting, product selection, pricing, distribution and logistics, before the peak marketing season to cope with the massive market demand changes. From the perspective of the entire consumer market, demand-driven marketing and production will become a major trend, and intelligent decision-making solutions can open up the whole chain of data from supply to demand and by coordinating and pulling all kinds of supply chain resources to achieve supply and demand synergy, thus optimising the supply structure and reducing product and resource waste.
As the digital transformation of industry gradually deepens, the new industrial economy represented by smart manufacturing has become an important force in the digital economy. The manufacturing industry has many categories and complex industrial chains facing many problems, such as overcapacity and low resource utilisation. Through global coordination and optimization, intelligent decision-making technology can realise full-scene empowerment for equipment, production, operation and industrial chain, thus adjusting production structure and mode, helping the manufacturing industry to improve resource utilisation and reduce cost consumption in all aspects. For the equipment side, intelligent decision-making can help with visual recognition and inspection, parameter optimisation, equipment maintenance and other applications. For example, linkage with production planning enables more reasonable maintenance time window arrangement and efficient maintenance resource scheduling. Production-oriented, intelligent decision-making allows for optimal and flexible scheduling of various production factors.
The development of the digital economy can only be achieved with the support of a strong infrastructure. In transport, energy and power allocation, aerospace and other infrastructure, global collaborative operation, flexible deployment of resources, and coordinated regional development are increasingly important, and intelligent decision-making has much room to play. In transportation, based on smart sensing and digital infrastructure, intelligent decision-making can achieve dynamic deployment of road traffic resources on demand, improve road traffic efficiency and traffic volume, and contribute to the construction of smart cities. In energy and electricity allocation, constructing a resource allocation plan under multiple constraints promotes the rational use of resources while reducing the negative impact of uncertainties on production and life, etc.
China's e-government construction began in the mid-1980s. It can be roughly divided into three stages: the initial stage in the 1980s, the key promotion stage in the 1990s, and the accelerated development stage after 2000. In the last decade in particular, China's government has vigorously promoted the construction of information technology, and its implementation has achieved remarkable results: in the UN-led global e-government survey, China's e-government construction level has risen in the world rankings year by year, and the level of e-government development shows good momentum [1], [2].
The construction of e-government in major regions of China is transitioning from covering government business to integrating with government business and fully supporting the structure of a service-oriented government [3]. Regarding basic e-government networks, the central-level transmission backbone network was opened, and the national e-government extranet was put into operation, facilitating the interconnection of horizontal and vertical e-government networks nationwide. In terms of e-government applications, the central government portal and government websites at all levels have been established and provide one-stop services to the public, further promoting open government and extensive interaction between the government and the public [4]. The construction of government information platforms covers various fields and departments, providing important support for government departments to perform their functions of economic regulation, market supervision, social management and public services. Regarding the construction of an e-government security system, the Ministry of Public Security and four other ministries and commissions have jointly issued technical standards and specifications related to information platform-level protection.
Government data is important to the economy and society of the region under a government's jurisdiction. However, because there is no unified storage and management, these data resources are stored in different ways across various government departments and locations. This makes them inconvenient to use and easy to lose, and the loss of such important data resources would cause unpredictable damage to the people and the country [5].
The joint development of data warehousing and OLAP technology provides theoretical support for solving the problem of "obtaining information useful for enterprise decision-making from massive data", enabling users to build OLAP platforms in the enterprise domain and achieve a multi-faceted, multi-dimensional, flexible and profound understanding of data [6], [7], [8], supporting user decision-making and ultimately allowing data to create value [9]. OLAP technology is currently used in a wide range of industries, such as business, agriculture, environmental protection, medicine and biology, communications, scientific computing, and the military [10]. Currently, OLAP platforms are mainly built on mainstream commercial products, such as SQL Server with Analysis Services [11]. Building OLAP platforms on top of commercial products makes deployment easy and relatively simple. However, the mainstream commercial OLAP platforms are overpriced.
Because of the lack of professional decision-analysis platform support, it is not easy to analyse information effectively and correctly from large amounts of data. The lack of quantitative analysis and depth of research leads to interference from subjective human factors and defective analysis in decision-making, making decisions less scientific and adversely affecting the final outcome. This paper applies OLAP technology to the analysis of macroeconomic indicators, and develops and implements a multiplexed cube aggregation complete cube calculation algorithm, which supports the calculation of large data volume cubes, realises the analysis and utilisation of massive macroeconomic data, better grasps the laws hidden behind the data, and provides a strong guarantee for the scientific decision-making of the government, with good theoretical significance and practical application value. The main contributions of this paper are as follows.
1. This article creatively introduces Online Analytical Processing (OLAP) technology in the analysis and utilization of a large amount of macroeconomic data. The introduction of this technology not only provides a new method for analyzing macroeconomic data, but also provides a multidimensional perspective for economic decision-making, making decision-making more scientific and comprehensive.
2. This article successfully integrates functions such as OLAP modeling, metadata management, OLAP analysis, and report querying, forming a comprehensive decision support system. The construction of this platform not only improves the efficiency of economic data processing, but also provides decision-makers with convenient data query and analysis tools.
3. In terms of data cube pre-calculation and update algorithms in OLAP modeling, a multiplexed cube aggregation complete cube calculation algorithm is proposed. It can effectively handle the calculation of large data volume cubes, while using time slicing to update the data cubes, ensuring the timeliness and reliability of the data.
2 Overview of related technologies
2.1 Economic data warehousing
According to William H. Inmon, a leading architect in the construction of data warehouse platforms, a data warehouse is a subject-oriented, integrated, time-varying, and non-volatile data collection that supports the management's decision-making process. Compared to traditional databases, it has the following characteristics [12].
1. Subject-oriented: Data warehouses store data according to subject categories, and data modelling is subject-oriented.
2. Integrated: To ensure the consistency of subject-oriented data, data in the data warehouse is obtained by collating data from multiple heterogeneous data sources through the ETL process.
3. Time-varying: The data in the data warehouse is time-series data and will continue to grow over time.
4. Non-volatile (relatively stable): Administrators regularly load and update the data in the data warehouse but rarely modify or delete it.
The data warehouse platform architecture [13] is shown in Figure 1.
The OLAP server is a high-performance data processing engine designed for multidimensional data analysis. With the OLAP server, the data required by the user for analysis can be presented quickly, efficiently and in multiple ways. Integration reflects the ability of a data warehouse to integrate heterogeneous data from multiple sources. Through the ETL (Extract, Transform, Load) process, data warehouses can collect, organize, and unify data from different data sources, ensuring data consistency and accuracy. This feature is particularly important for economic intelligent decision-making platforms, as economic decisions often rely on data from multiple channels and systems. Time variability is another important feature of data warehouses. As economic data is a time series, new data will continue to be generated and added to the data warehouse over time. Therefore, a data warehouse must be able to handle the growth and changes of this data, ensuring its timeliness and accuracy.
2.1.1 ETL
ETL (Extraction-Transformation-Loading), i.e. data extraction, transformation and loading, is responsible for extracting data from distributed, heterogeneous data sources into a temporary intermediate layer for cleaning, conversion and integration, and finally loading it into the data warehouse, where it becomes the basis for online analytical processing or data mining.
The ETL tool used for the platform is Kettle, a sub-project of the open-source BI solution Pentaho, positioned as a professional ETL tool that is open source and free of charge. As the whole Pentaho platform is developed with Java technology, Kettle inherits good cross-platform support: an edited ETL package can be executed smoothly on various operating platforms, which is its biggest advantage. Its connections to databases use the JDBC standard, giving it much better database compatibility than ETL tools based on ODBC or OLE/DB. Its operation flow is shown in Figure 2.
2.1.2 Data warehouse modelling
Currently, data warehouses are often modelled using multidimensional models. Multidimensional models usually exist as a star schema, a snowflake schema, or a fact constellation schema. They are presented as follows.
1. Star schema. The star schema is a one-point-to-multipoint schema, where the central point is the fact table and the multiple points connected to it are the dimension tables. Queries against the fact table are executed through the associated dimension tables. A joint query of multiple dimension tables and the fact table can fetch both base and aggregated data.
2. Snowflake schema. The snowflake schema is an extension of the star schema that minimises data storage by further normalising the star schema's dimension tables; queries join the resulting smaller dimension tables, but this increases the number of dimension tables and the complexity of the query.
3. Fact constellation schema. The fact constellation schema can be seen as a collection of star schemas in which multiple fact tables share dimension tables. In the fact constellation schema, ordinary dimension tables are generally not normalised.
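As an illustration of the star schema described above, the following sketch builds a tiny fact table and two dimension tables in SQLite and runs a joint query through the dimension tables. All table names, columns and figures are hypothetical, not taken from the platform itself.

```python
import sqlite3

# Hypothetical minimal star schema: one fact table (gdp_fact) joined to two
# dimension tables (dim_time, dim_region) via foreign keys.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE dim_time   (time_id INTEGER PRIMARY KEY, year INTEGER, quarter INTEGER);
    CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE gdp_fact   (time_id INTEGER, region_id INTEGER, gdp REAL,
                             FOREIGN KEY(time_id)   REFERENCES dim_time(time_id),
                             FOREIGN KEY(region_id) REFERENCES dim_region(region_id));
""")
cur.executemany("INSERT INTO dim_time VALUES (?,?,?)", [(1, 2011, 1), (2, 2011, 2)])
cur.executemany("INSERT INTO dim_region VALUES (?,?)", [(1, "Jiangxi"), (2, "Hunan")])
cur.executemany("INSERT INTO gdp_fact VALUES (?,?,?)",
                [(1, 1, 100.0), (2, 1, 120.0), (1, 2, 90.0)])

# A star-schema query: the fact table is reached through its dimension tables,
# and a joint query yields aggregated data.
cur.execute("""
    SELECT t.year, r.region, SUM(f.gdp)
    FROM gdp_fact f
    JOIN dim_time t   ON f.time_id = t.time_id
    JOIN dim_region r ON f.region_id = r.region_id
    GROUP BY t.year, r.region
""")
rows = cur.fetchall()  # aggregated GDP per (year, region)
```

A snowflake schema would further split `dim_time` into normalised sub-tables; the query pattern stays the same, with more joins.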
2.2 OLAP
OLAP is used for multidimensional data analysis to meet users' needs for queries from specific perspectives in a multidimensional environment and is a multidimensional data analysis tool.
2.2.1 OLAP logical concepts
Several common terms used in OLAP are introduced below.
Dimension: a perspective from which to analyse data; each angle from which a problem is considered constitutes a dimension. For example, counting gross product by year, quarter or month involves the time dimension.
Level of a dimension: a particular granularity at which data are analysed within a dimension. For example, if GDP is counted by year, then year is a level of the time dimension; similarly, quarter and month are also levels of the time dimension.
Member of a dimension: An attribute value of a dimension, which is a description of the meaning of the data in the dimension, e.g. June 2011.
Measure: The value of the analyzed data. For example, in January 2012, the GDP of Jiangxi Province was 10 billion yuan, and the GDP is the measure.
2.2.2 OLAP operations
The main multidimensional analysis operations in OLAP are drilling, slicing, dicing, and pivoting.
Drilling: changes the hierarchy level of a dimension to analyse the data at different granularities. It includes roll-up and drill-down. The roll-up aggregates the data cube by climbing upwards along the conceptual hierarchy of a dimension; the drill-down is the inverse of the roll-up, descending along the conceptual hierarchy of a dimension to look deeper, from aggregated data to detailed data.
Slicing and dicing: the slicing operation selects on one dimension of a given cube, producing a sub-cube; the dicing operation selects on two or more dimensions, producing a sub-cube. Both are concerned with the distribution of the measure values over the remaining dimensions.
Pivoting (rotation): changes the order of the dimensions and transforms the viewpoint from which the data is viewed.
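The slicing, dicing and roll-up operations above can be sketched in plain Python over a toy cube; the dimension members and GDP values are illustrative only.

```python
from collections import defaultdict

# Toy cube: cells keyed by (year, quarter, region) with GDP as the measure.
cells = {
    (2011, 1, "Jiangxi"): 100.0, (2011, 2, "Jiangxi"): 120.0,
    (2011, 1, "Hunan"):    90.0, (2011, 2, "Hunan"):   110.0,
}

# Roll-up: climb the time hierarchy from quarter to year by summing out quarter.
rolled = defaultdict(float)
for (year, quarter, region), gdp in cells.items():
    rolled[(year, region)] += gdp

# Slice: fix one dimension (region = "Jiangxi") to obtain a sub-cube.
sliced = {k: v for k, v in cells.items() if k[2] == "Jiangxi"}

# Dice: restrict two or more dimensions (quarter 1 of 2011 only).
diced = {k: v for k, v in cells.items() if k[0] == 2011 and k[1] == 1}
```

Drill-down is the inverse direction: starting from `rolled`, one would return to the quarter-level `cells` for more detail.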
2.2.3 OLAP servers
There are several implementations of the OLAP multidimensional data model, the main ones being MOLAP, ROLAP, and HOLAP. The architectures of these servers are shown in Figure 3.
MOLAP stores multidimensional data models in proprietary multidimensional databases. This implementation typically uses specialized multidimensional query languages (such as MDX) to access data and provides high-performance query responses.
ROLAP maps multidimensional data models to relational databases. It uses SQL queries to access data and utilizes the indexing and query optimization techniques of relational database management systems (RDBMS).
HOLAP is a mixture of MOLAP and ROLAP. It combines the advantages of both by storing some data in a multidimensional database to improve query performance, while storing other data in a relational database to maintain flexibility. HOLAP attempts to find a balance between query performance and flexibility, but implementation may be more complex.
In the OLAP server architecture, the choice among these implementation methods depends on specific business requirements, data volume, query performance requirements, and hardware and software resources. OLAP servers typically include a front-end application for interacting with users and providing data access and query capabilities; a back-end storage layer for storing multidimensional data models; and a query engine for processing user query requests and returning results.
Details of the architecture are described below.
1. Multidimensional Online Analytical Processing (MOLAP)
MOLAP uses an array-based multidimensional storage engine to map multidimensional views directly to a data cube structure. It has the advantage of fast indexing because it can directly query the required data through array subscripts; the disadvantage is that storage utilisation is very low for sparse data sets.
2. Relational Online Analytical Processing (ROLAP)
ROLAP stores the multidimensional view in a relational database. The database mainly contains fact tables and dimension tables, which are associated through foreign keys. Data is obtained through a joint query of the fact table and the dimension tables, which fits the habits of most users.
3. Hybrid Online Analytical Processing (HOLAP)
HOLAP combines the advantages of MOLAP and ROLAP, using MOLAP technology to store upper-level aggregated data and ROLAP technology to store detailed data. The combination benefits mainly from the greater scalability of ROLAP storage and the fast indexed calculations of MOLAP.
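The storage difference between MOLAP and ROLAP described above can be sketched as follows; the data and the array layout are illustrative assumptions.

```python
# MOLAP keeps the measure in a dense array indexed by dimension positions;
# ROLAP keeps one row per fact and answers queries by lookup/join.
quarters = [1, 2]
regions = ["Jiangxi", "Hunan"]

# MOLAP: dense 2-D array; a cell is reached directly via array subscripts.
molap = [[100.0, 90.0],
         [120.0, 110.0]]            # molap[quarter_idx][region_idx]

# ROLAP: fact rows with foreign-key-style references to the dimensions.
rolap = [(1, "Jiangxi", 100.0), (1, "Hunan", 90.0),
         (2, "Jiangxi", 120.0), (2, "Hunan", 110.0)]

# The same point query in both representations:
q, r = 2, "Jiangxi"
molap_value = molap[quarters.index(q)][regions.index(r)]
rolap_value = next(v for (qq, rr, v) in rolap if qq == q and rr == r)
assert molap_value == rolap_value == 120.0
```

With mostly empty cells, `molap` still reserves every slot (the sparsity problem noted above), while `rolap` stores only the facts that exist.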
2.3 Data cube calculation algorithms
OLAP servers must pre-process the data to be queried to support and manipulate multidimensional data structures and respond quickly to users' various analytical requirements. Effective data pre-processing can significantly reduce server response times. In OLAP, data pre-processing is called data aggregation. Data aggregation is a process that abstracts large sets of task-related data in a database from a relatively low conceptual level to a higher one. Users can easily and flexibly aggregate large databases at different levels of granularity and from different perspectives in a concise form.
At the heart of OLAP multidimensional analysis is the efficient calculation of aggregates over multiple dimensional groupings. These groupings correspond to SQL GROUP BY statements. Each grouping can be represented by a cuboid, and the set of cuboids forms the lattice of the data cube; the data cube is the set of all groupings.
Full materialisation is the computation of all the cuboids in the data cube. Partial materialisation is the computation of a selection of cuboids in the data cube lattice. The iceberg cube and the shell fragment are both examples of partial materialisation. An iceberg cube is a data cube that stores only the cube cells whose aggregation values are greater than a minimum support threshold. Shell fragments of a data cube compute only certain cuboids involving a small number of dimensions.
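Full materialisation and the iceberg condition can be sketched as follows: every grouping (cuboid) of a tiny fact set is computed, then cells below a minimum support threshold are pruned. The dimensions, facts and threshold are illustrative.

```python
from itertools import combinations
from collections import defaultdict

# Full materialisation: compute every group-by (cuboid) of a tiny fact set.
dims = ("year", "quarter", "region")
facts = [
    {"year": 2011, "quarter": 1, "region": "Jiangxi", "gdp": 100.0},
    {"year": 2011, "quarter": 2, "region": "Jiangxi", "gdp": 120.0},
    {"year": 2011, "quarter": 1, "region": "Hunan",   "gdp":  90.0},
]

cube = {}
for k in range(len(dims) + 1):                 # all 2^n groupings
    for group in combinations(dims, k):
        agg = defaultdict(float)
        for row in facts:
            key = tuple(row[d] for d in group)
            agg[key] += row["gdp"]
        cube[group] = dict(agg)

# The apex cuboid (empty grouping) holds the grand total.
assert cube[()][()] == 310.0

# Iceberg pruning: keep only cells with SUM(gdp) >= 100.
iceberg = {g: {c: v for c, v in cells.items() if v >= 100.0}
           for g, cells in cube.items()}
```

With 3 dimensions this materialises 2^3 = 8 cuboids; the iceberg condition then discards low-support cells such as Hunan's 90.0 aggregate.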
The Star-Cubing [14] method is a partial materialisation method. It uses a star-tree structure to store data cells and introduces the concept of shared dimensions at its core. If the data cells in the upper shared dimension do not satisfy the iceberg condition, all cells down the shared dimension do not fulfil the iceberg condition and can all be pruned. The process is to construct the basic star tree first, traverse each subtree according to a depth-first strategy, and prune according to the iceberg condition until the final star tree is produced. This approach improves the efficiency of the search but is sensitive to the order of the dimensions.
By precomputing only some shell fragments for high-dimensional OLAP, the resulting collection of cuboids requires much less computation than storing a full high-dimensional data cube. However, this approach has two drawbacks. First, it still requires the calculation of many cuboids, each with many cells. Second, the resulting cube does not fully support high-dimensional OLAP analysis, for two reasons: the cuboid dimensions are fixed in advance, so OLAP over higher-dimensional combinations is not supported; and it may not support starting from a lower-dimensional selection (e.g. a subset of data fixed to constant values) and drilling down along the remaining dimensions.
2.4 Metadata
OLAP platforms have complex metadata structures with multiple layers of nested metadata nodes, and traditional databases can no longer store this node information effectively, so a new storage method must be found; XML solves this problem. XML is a general-purpose extensible markup language that can describe a variety of data structures and is inherently simple to use and independent of the user platform. XML allows users to define their own tags that accurately describe the semantics of data and express various types of data, and XML supports deeply nested representations that allow for an adequate description of data with complex structures [15], [16]. Schema metadata is described using XML and used to map the data cube structure of the OLAP platform for responding to multidimensional cube queries.
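A minimal sketch of XML schema metadata of the kind described above, using hypothetical node names (Schema, Cube, Dimension, Hierarchy, Level, Measure) chosen to mirror the nesting; the platform's actual tag set may differ.

```python
import xml.etree.ElementTree as ET

# Build nested schema metadata: Schema > Cube > Dimension > Hierarchy > Level,
# plus a Measure node. Names and attributes are illustrative assumptions.
schema = ET.Element("Schema", name="MacroEconomy")
cube = ET.SubElement(schema, "Cube", name="GDP", factTable="gdp_fact")
dim = ET.SubElement(cube, "Dimension", name="Time")
hier = ET.SubElement(dim, "Hierarchy", table="dim_time")
ET.SubElement(hier, "Level", name="Year", column="year")
ET.SubElement(hier, "Level", name="Quarter", column="quarter")
ET.SubElement(cube, "Measure", name="gdp", aggregator="sum")

xml_text = ET.tostring(schema, encoding="unicode")

# The deeply nested structure can be read back to rebuild the cube model.
parsed = ET.fromstring(xml_text)
levels = [lv.get("name") for lv in parsed.iter("Level")]
```

The exported XML file is what the metadata management module described in Section 3.6 would generate and the OLAP modelling module would consume.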
3 Design of the economic intelligence decision platform
3.1 OLAP system architecture
The OLAP implementation in the macroeconomic intelligent decision support system [17], [18], [19] is based on the data warehouse [20], [21], [22] and OLAP engine as the main structure. The technical framework of the OLAP engine is shown in Figure 4.
In terms of data preprocessing, macroeconomic data is transformed from its original state into a form that can be used for analysis. This process includes data cleaning, conversion, and loading (ETL), as follows:
1. Data cleaning: This stage mainly involves identifying and processing missing values, duplicate records, outliers, or inconsistent data. For example, for missing values, it is possible to use interpolation, mean or median padding, and other methods; For outliers, statistical methods may be needed for identification and processing.
2. Data conversion: At this stage, the raw data is transformed into a more suitable form for analysis. This may include data type conversion (such as from text to numerical), standardization or normalization processing, as well as more complex transformations based on specific analytical needs, such as seasonal adjustments of time series data.
3. Data loading: The pre-processed data is loaded into the economic data warehouse. At this stage, it is necessary to ensure the accuracy and integrity of the data, as well as its correct storage structure in the data warehouse, to support subsequent OLAP analysis.
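The three ETL stages above can be sketched as follows; the column names, the mean-imputation choice and the SQLite warehouse are illustrative assumptions, not the platform's actual implementation.

```python
import sqlite3
import statistics

raw = [
    {"month": "2011-01", "gdp": 100.0},
    {"month": "2011-02", "gdp": None},      # missing value
    {"month": "2011-02", "gdp": None},      # duplicate record
    {"month": "2011-03", "gdp": 120.0},
]

# 1. Cleaning: drop duplicate records, fill missing values with the mean.
seen, cleaned = set(), []
for row in raw:
    if row["month"] not in seen:
        seen.add(row["month"])
        cleaned.append(dict(row))
mean_gdp = statistics.mean(r["gdp"] for r in cleaned if r["gdp"] is not None)
for r in cleaned:
    if r["gdp"] is None:
        r["gdp"] = mean_gdp

# 2. Conversion: split the text month into numeric year/month fields.
for r in cleaned:
    year, month = r["month"].split("-")
    r["year"], r["month_no"] = int(year), int(month)

# 3. Loading: write the prepared rows into the warehouse fact table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE gdp_fact (year INTEGER, month INTEGER, gdp REAL)")
conn.executemany("INSERT INTO gdp_fact VALUES (?,?,?)",
                 [(r["year"], r["month_no"], r["gdp"]) for r in cleaned])
total, = conn.execute("SELECT SUM(gdp) FROM gdp_fact").fetchone()
```

In the platform itself these steps are carried out by Kettle transformations rather than hand-written code; the logic is the same.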
In terms of data selection and analysis criteria, select data closely related to analysis objectives and economic decision-making. This requires us to have a deep understanding of economic theory and reality to identify the most critical datasets and indicators. Priority should be given to data with reliable sources, high accuracy, and strong timeliness. This involves assessing the credibility of data sources and quality control of data collection, processing, and storage processes. When conducting analysis, data structures and models are chosen that can fully utilize OLAP features (such as slicing, chunking, drilling, etc.) to extract useful information and support decision-making to the maximum extent possible.
3.2 OLAP platform requirements analysis
The light OLAP platform built on the macroeconomic intelligent decision-making platform aims to realise a data analysis platform for economic indicators. Without changing the macroeconomic intelligent decision-making platform itself, the corresponding database technology is used to complete multi-dimensional information queries on economic indicator data, providing strong support for leaders' decision-making. In order to provide the user with a variety of flexible analysis tools, friendly query pages and fast platform response speed, the general design objectives of the platform are as follows.
1. Complete the design and development of the prototype system, which will serve as the core of subsequent functional validation and development.
2. Complete the preparation of the data required for the light OLAP system, including searching for and cleaning data sources. With the help of the open-source tool Kettle, data is extracted from the existing e-government network platform and loaded into the data warehouse through ETL steps such as conversion and cleansing to complete the data preparation required for OLAP.
3. Complete the design of the basic data model. Design the dimensional level of each table according to the business needs, focusing on the link between the dimensional table and the fact table.
4. Design a data cube that fits the design model and describe the data cube using a star model that includes the business requirements.
5. Use the self-developed multi-dimensional query tool to complete the query of multi-dimensional data, perform basic OLAP operations such as slicing, dicing, drill-down and roll-up on multi-dimensional tables, and use the query of data to obtain various information valuable to the system.
3.3 OLAP modelling
OLAP modelling starts by extracting the base data from the fact table according to the model defined by the metadata to form the basic cube. The relevant algorithms are invoked to perform data aggregation to form the full data cube. Finally, the complete data cube is stored in the aggregation table. The basic framework of OLAP modelling is shown in Figure 5.
The modules in the framework diagram are described as follows.
(1) Data read-in module: from the data warehouse fact table, obtain the base data and convert the base data into a basic cube according to the schema defined by the metadata.
(2) OLAP modelling module: Invoke the multiplexed cube aggregation complete cube algorithm to aggregate the base cube into a whole cube. Call the update module to update the entire cube at regular intervals or manually.
(3) Multiplexed cube aggregation complete cube algorithm: a data-result-set-driven complete cube calculation algorithm. While scanning a cuboid, the algorithm simultaneously computes the aggregation values of the multiple new cuboids formed by climbing along each dimension, thus completing the multi-way cube aggregation.
(4) Cube update: a mechanism that updates the complete data cube according to time slices, with the OLAP modelling module invoked either on a timer or manually.
(5) Cube read/write module: used to read and write data cubes.
In order to store multidimensional data, the organisation model of multidimensional data in the data warehouse must be designed before OLAP modelling.
Fact and aggregation tables are created in the data warehouse according to subjects. The structure of the aggregation table is approximately the same as that of the fact table, but with the addition of two fields: the cuboid identification field and the time slice field. The aggregation table structure is shown in Table 2.
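A sketch of the fact table and the corresponding aggregation table: the aggregation table mirrors the fact table but adds the cuboid identification field and the time slice field, so a refresh can replace only the rows of an affected slice. All table and column names are illustrative assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE gdp_fact (
        time_id   INTEGER,
        region_id INTEGER,
        gdp       REAL
    );
    CREATE TABLE gdp_agg (
        time_id    INTEGER,
        region_id  INTEGER,
        gdp        REAL,
        cuboid_id  INTEGER,   -- which cuboid this aggregated cell belongs to
        time_slice TEXT       -- time slice used for incremental updates
    );
""")
# An aggregated cell is tagged with its cuboid and time slice, so a later
# refresh can delete and recompute only the rows for the affected slice.
conn.execute("INSERT INTO gdp_agg VALUES (1, 1, 220.0, 3, '2011-Q1')")
conn.execute("DELETE FROM gdp_agg WHERE time_slice = '2011-Q1'")  # targeted update
```

This time-slice tagging is what allows the cube update module to refresh the complete cube incrementally instead of recomputing it from scratch.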
3.4 Multiplexed cube aggregation complete cube algorithm
Precomputation of the data cube is critical to improving online analytical processing performance. A complete cube computation algorithm is proposed using data result set-driven algorithms, drawing on the multiplexed array aggregation complete cube computation algorithm. Data cube computation is a fundamental task in OLAP modelling. Full or partial precomputation of the data cube can significantly reduce response time and improve online analytical processing performance.
A multiplexed array aggregation algorithm with hierarchies is implemented using one-dimensional arrays to simulate multidimensional arrays [23], allowing the algorithm to adapt to the computation of cubes of different dimensionalities. By using data result sets instead of arrays, an algorithm is proposed that supports the computation of complete cubes with dimensional hierarchies. The relevant concepts involved in the algorithm are as follows.
Number of cuboids: if a Cube has n dimensions and no conceptual hierarchies, it contains 2^n cuboids. If each dimension has a conceptual hierarchy, the number of cuboids produced is (L1 + 1)(L2 + 1)...(Ln + 1), where Li denotes the number of levels in the i-th dimension and the extra 1 accounts for the virtual top ("all") level. Each combination of dimension levels represents a cuboid, and a cuboid represents one perspective from which to view the data. For example, if a Cube has 3 dimensions with 3 levels per dimension, the number of cuboids resulting from the combinations of dimension levels plus the virtual top level is (3 + 1)^3 = 64, i.e., there are 64 perspectives on the data.
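The cuboid count can be checked with a few lines of Python: for n dimensions with Li concept levels each, plus one virtual "all" level per dimension, the total is the product of (Li + 1):

```python
from math import prod

def cuboid_count(levels_per_dim):
    """Number of cuboids in a complete cube where each dimension i has
    levels_per_dim[i] concept levels plus one virtual top ('all') level."""
    return prod(l + 1 for l in levels_per_dim)

print(cuboid_count([3, 3, 3]))  # 64, the 3-dimension, 3-level example
print(cuboid_count([1, 1, 1]))  # 8 = 2^3: without hierarchies this reduces to 2^n
```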
3.5 The multiplexed cube aggregation process
The multiplexed cube aggregation algorithm works by scanning an n-dimensional cube once and simultaneously calculating the values of the cuboids generated by a hierarchical climb along each dimension of that cube. The core idea is as follows.
As the cube is scanned, for each Cell visited, the values of that Cell are accumulated into the corresponding Cell of each cuboid formed by climbing along each dimension, depending on the dimensions involved. The dimensional hierarchy and the dimension table are shown in Figure 7.
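A simplified sketch of this accumulation step follows. It enumerates all ancestor combinations of each visited cell explicitly, rather than reproducing the paper's memory-efficient single-scan ordering; the member names, the per-dimension parent tables, and the "*" marker for the virtual all-level are illustrative assumptions:

```python
from collections import defaultdict

def multiway_aggregate(base_cube, parents):
    """One scan of the base cube; each cell's value is accumulated into the
    cells of every cuboid reachable by climbing the dimension hierarchies.
    base_cube: dict mapping a tuple of leaf-level members to a value.
    parents:   per-dimension dicts mapping a member to its parent ('*' = all).
    Returns a dict of all cuboid cells, keyed by member tuples."""
    cells = defaultdict(float)
    for key, value in base_cube.items():
        combos = [()]
        for dim, member in enumerate(key):
            # Climb from the leaf member up to the virtual all-level '*'.
            chain = [member]
            while chain[-1] != "*":
                chain.append(parents[dim].get(chain[-1], "*"))
            combos = [c + (m,) for c in combos for m in chain]
        for c in combos:
            cells[c] += value
    return dict(cells)

# Hypothetical 2-dimensional example: city -> province -> '*', month -> year -> '*'
parents = [{"Nanchang": "Jiangxi", "Jiangxi": "*"},
           {"2021-01": "2021", "2021": "*"}]
base = {("Nanchang", "2021-01"): 100.0}
cells = multiway_aggregate(base, parents)
print(cells[("Jiangxi", "2021")])  # 100.0
print(len(cells))                  # 9 cells: 3 hierarchy levels per dimension
```

Each base cell contributes to every combination of its ancestors, which is exactly why one scan suffices to fill all cuboids.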
3.6 OLAP metadata management
The metadata management system is an auxiliary management system that supports the OLAP system and
includes the following functions: importing XML format database schemas, building star models, building snowflake models and generating XML files. The creation of the metadata model includes the creation of schema, cube, dimension, metric, hierarchy and other nodes. The Generate XML module exports the XML file formed after modelling. The basic framework and description of the metadata management system are shown in Figure 8.
4 Application of economic intelligence decision platform
4.1 Data effects
After the data in the fact table has been pre-processed, the data formed by the metadata definition is stored in the aggregation table as a complete cube. The fact table MAAH FACT contains 274 base data records, as shown in Table 3.
A comparison of the underlying data information in the fact table is shown in Figure 9.
4.2 Efficiency analysis
Before the project transformation, the traditional page query fetched the query data and returned the corresponding page, with a response time of 5-10 seconds. This paper uses the multiplexed cube aggregation complete cube calculation algorithm, which pre-computes the Cube, so the query speed is less than 3 seconds, effectively improving user response time; the traditional query method, by contrast, becomes slower and slower as the amount of data increases.
For the efficiency analysis of the multi-way cube aggregation complete cube calculation algorithm, the algorithm calculates a 3·4·5 cube; the base cube data volumes and aggregation data are shown in Table 4. When the base cube data volume is 274, 3,899 aggregates are formed in 16,315 ms. When the base cube data volume is 1,000, 25,327 aggregates are formed in 78,946 ms. Since thousands of cuboids are aggregated at regular intervals overnight, these times meet the design requirements.
A comparison of the base cube data volume and aggregated data is shown in Figure 10.
First, in the performance benchmark test, the performance of the current platform was compared with that of traditional platforms when processing datasets of the same scale and economic indicators. The study used a series of test datasets with different data volumes and complexities and recorded and compared their response time, processing speed, and resource utilization. This helps objectively evaluate the performance advantages of the current platform.
Secondly, this article conducted scalability testing. With the development of the economy and the continuous accumulation of data, economic decision-making platforms need to have the ability to process larger scale data. Therefore, the study tested the performance of the platform as the data volume grew and whether it could smoothly scale to meet higher processing requirements. This will help verify the long-term stability and scalability of the platform.
4.3 OLAP metadata platform management implementation
The main function of the metadata platform is to create a metadata model and then generate an XML file, and it includes the following functions: opening the project, importing the XML database schema, creating a star model, creating a snowflake model, modifying the project, generating the XML file and saving the project. Metadata modelling creates schema, cube, dimension, measure, hierarchy, join and level nodes in order.
First, import the database tables to build the star model. The modelling process is as follows: (1) create the schema header node; (2) create the cube node; (3) create the measure node and the dimension node. When creating the dimension node information, the model to be built must be selected, either the star or the snowflake model. When creating a snowflake model, add a hierarchy node under the dimension node, then create Join and Level nodes under the hierarchy node; note that the number of Join nodes is one less than the number of Level nodes, since each Join connects two adjacent Level nodes. When building a star model, creating Join information is unnecessary.
The main function of the metadata management platform is to build metadata models and then generate XML-format files. In the modelling process, metadata modelling creates schema, cube, dimension, measure, hierarchy, Join and Level nodes.
Metadata modelling involves the creation of a Schema node, which contains a number of Cube nodes. Each Cube node contains multiple Measure nodes and Dimension nodes, and a Dimension node consists of one or more Hierarchy nodes. Hierarchy nodes may have Join nodes under them.
When building a snowflake model, there are Join nodes; a Join node is made up of two corresponding Level nodes. When building a star model, there are no Join nodes. A Hierarchy node may consist of one or more Level nodes.
Thus, to create a complete Schema, one or more Cube nodes must first be created, which in turn invoke Measure and Dimension nodes; to create a Dimension, Hierarchy and Level nodes must be created to complete the star model, and Join nodes are then added to model the snowflake schema.
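The node nesting described above can be illustrated with a small schema fragment built via Python's standard xml.etree module. The element and attribute names here are illustrative assumptions; the paper's actual XML layout is not reproduced:

```python
import xml.etree.ElementTree as ET

# Schema > Cube > (Measure, Dimension > Hierarchy > (Level..., Join)).
schema = ET.Element("Schema", name="EconSchema")
cube = ET.SubElement(schema, "Cube", name="GDPCube")
ET.SubElement(cube, "Measure", name="gdp_value", aggregator="sum")
dim = ET.SubElement(cube, "Dimension", name="Region")
hier = ET.SubElement(dim, "Hierarchy", name="RegionHierarchy")
ET.SubElement(hier, "Level", name="Province")
ET.SubElement(hier, "Level", name="City")
# Snowflake only: a Join node links two corresponding Level tables;
# a star model omits Join nodes entirely.
ET.SubElement(hier, "Join", leftKey="province_id", rightKey="province_id")

print(ET.tostring(schema, encoding="unicode"))
```

The Generate XML step then amounts to serialising this tree to a file.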
The metadata management system contains the following functions: open project, save project, modify project, generate XML file, import XML database schema, import real-time database schema, and metadata modelling. The business logic classes that complete metadata management include OpenDataXml, OpenProject, SavaProject, CreateModel, CreateSchXML and MetaEditer.
The OpenProject class calls OpenPath.java to open the file path. The ReadConnect.java class is then called to read the opened database and metadata intermediate XML files and return the database and metadata file names to the control class, which calls ChangDataXmlToTree.java to convert the database XML format into a tree structure and return it to the system interface for display. The control class then calls ChangSchXmlToTree.java to convert the metadata XML format into a tree structure and return it to the system interface for display, finally opening the project file.
The SavaProject class calls SavaPath.java to select the path for saving the file. The CreateConnect.java class is then called to create the XML intermediate file that connects the database to the metadata. The control class then calls ChangDataTreeToXml.java to convert the database tree format to XML format, and ChangSchTreeToXml.java to convert the metadata tree format into XML format, after which the project file is saved. CreateModel acts as the control class for creating the metamodel and delegates the processing requests to complete the business processing.
4.4 OLAP analysis
The OLAP analysis report is shown in Table 5.
Comparative GDP data for Jiangxi Province from 2014-2021 is shown in Figure 11.
In terms of scalability, this article adopts modular and hierarchical design principles. The core architecture of the platform is flexible and can be scaled horizontally or vertically as data volume grows. When the amount of data increases, the platform's processing power is improved by adding more computing nodes or storage resources. In addition, distributed storage and computing technologies are utilized to spread data across multiple nodes for processing, further improving the scalability of the system. This design enables the platform to handle data scales ranging from a few hundred megabytes upward to tens of billions of records, meeting decision-making needs at various economic scales.
In terms of adaptability, this article focuses on the universality and configurability of the platform. The platform adopts standardized data interfaces and protocols, making it easy to access and integrate different types of economic data. By configuring different data models and analysis algorithms, the platform can adapt to different decision-making environments and needs. In addition, a flexible permission management and role allocation mechanism is provided, allowing the platform to adapt to different organizational structures and permission requirements. This design makes the platform widely applicable and can be extended to other types of economic data or decision-making environments.
5 Discussion
The paper presents a comprehensive overview and implementation of an intelligent decision-making platform for economic analysis, with a specific focus on the application of Online Analytical Processing (OLAP) technology. In terms of data warehouse construction, this article constructs an economic data warehouse that is theme-oriented, integrated, time-varying, and nonvolatile, improving data quality and consistency while providing a solid foundation for subsequent OLAP analysis. Traditional database systems often struggle with handling large-scale and heterogeneous economic data, but the economic data warehouse in this article effectively integrates these data through the ETL process, greatly enhancing data availability. Additionally, this article addresses the problem of metadata management in the OLAP platform by successfully employing XML technology. Traditional database systems find it challenging to store complex metadata structures and multi-layer nested metadata nodes efficiently, but XML technology provides flexibility and scalability, making metadata management more convenient and efficient, thus paving the way for deeper OLAP analysis. To enhance the discussion, it is worthwhile to provide a detailed comparison with existing literature in the field and emphasize the novelty of the proposed solution within the broader context of AI and economic decision-making. In comparing the introduced OLAP technology with other methods used in economic decision-making, it would be advantageous to explore its integration with traditional statistical methods, econometric models, or machine learning techniques commonly employed in economic analysis. This comparative analysis would shed light on the strengths and weaknesses of OLAP in this particular context. Furthermore, to provide a more comprehensive comparison, it would be valuable to discuss the efficiency and effectiveness of OLAP relative to other decision-making platforms or technologies.
This evaluation could encompass factors such as computational speed, scalability, ease of use, and the ability to handle large volumes of data. By delving into these aspects, a deeper understanding of OLAP's advantages and limitations can be achieved. Additionally, it is important to address the implementation challenges encountered when utilizing OLAP in economic decision-making and compare them with the challenges faced by alternative approaches. This exploration would highlight the unique benefits of OLAP, such as its ability to address issues related to data integration, model complexity, or user adoption. By discussing how OLAP overcomes these challenges, its value proposition becomes more apparent. Lastly, the paper emphasizes the multidimensional perspective offered by OLAP in economic analysis. To further emphasize this aspect, it would be beneficial to discuss how this multidimensional analysis adds value compared to traditional uni-dimensional or bi-dimensional approaches. By highlighting specific use cases where OLAP excels in providing a comprehensive understanding of economic data, the novelty and implications of the proposed solution can be effectively conveyed.
6 Conclusion
With the rapid development of information technology and e-government, government departments have accumulated a large amount of macroeconomic indicator data. Faced with this volume of indicator data, it is necessary to establish a data analysis system that supports a flexible, multi-angle and multi-dimensional understanding of the data, thereby supporting users' decision-making and letting the data create value. In this paper, the application of an OLAP system to macroeconomic decision-making is realized based on the current needs of macroeconomic intelligent decision support systems, and good results are achieved in practical application. By applying the OLAP system to macroeconomic decision analysis, the paper realizes multi-angle and multi-dimensional analysis and application of macroeconomic indicator data. The proposed algorithm calculates the aggregation values of the multiple new cuboids formed by climbing along each dimension while scanning a single cube, thus completing multi-way cube aggregation, and it supports the computation of large-data-volume cubes. An update mechanism for the data cubes has been developed and implemented, with an update index table that updates the data cubes using time slices, effectively ensuring the timeliness and reliability of the information.
References
[1] R. Dieng, O. Corby, A. Giboin, and M. Ribiere, "Methods and tools for corporate knowledge management," Int J Hum Comput Stud, vol. 51, no. 3, pp. 567-598, 1999. https://doi.org/10.1006/ijhc.1999.0281
[2] A. Onyebuchi et al., "Business demand for a cloud enterprise data warehouse in electronic healthcare computing: Issues and developments in ehealthcare cloud computing," International Journal of Cloud Applications and Computing (IJCAC), vol. 12, no. 1, pp. 1-22, 2022. https://doi.org/10.4018/IJCAC.297098
[3] X. Tan, D. C. Yen, and X. Fang, "Web warehousing: Web technology meets data warehousing," Technol Soc, vol. 25, no. 1, pp. 131-148, 2003. https://doi.org/10.1016/S0160-791X(02)00061-1
[4] W. Hang and X. Li, "Application of system dynamics for evaluating truck weight regulations," Transp Policy, vol. 17, no. 4, pp. 240-250, 2010.
[5] C. Antoniou, R. Balakrishna, and H. N. Koutsopoulos, "A synthesis of emerging data collection technologies and their impact on traffic management applications," European Transport Research Review, vol. 3, pp. 139-148, 2011.
[6] J. Li, H. Liao, B. Wang, and B. Xu, "The design and implementation of web-based OLAP drilling analysis system," in 2010 Seventh International Conference on Fuzzy Systems and Knowledge Discovery, IEEE, 2010, pp. 2570-2573. https://doi.org/10.1109/FSKD.2010.5569837
[7] I. Alsmadi, The NICE cyber security framework: Cyber security intelligence and analytics. Springer Nature, 2023. http://dx.doi.org/10.1007/978-3-030-02360-7
[8] C. Ma, D. C. Chou, and D. C. Yen, "Data warehousing, technology assessment and management," Industrial Management & Data Systems, vol. 100, no. 3, pp. 125-135, 2000.
[9] G. Garant, A. Chernov, I. Savvas, and M. Butakova, "A data warehouse approach for business intelligence," in 2019 IEEE 28th International Conference on Enabling Technologies: Infrastructure for Collaborative Enterprises (WETICE), IEEE, 2019, pp. 70-75. https://doi.org/10.1109/WETICE.2019.00022
[10] Y. Wu and J. Zhang, "RETRACTED ARTICLE: Building the electronic evidence analysis model based on association rule mining and FP-growth algorithm," Soft Comput, vol. 24, no. 11, pp. 7925-7936, 2020.
[11] A. A. Aziz, W. M. R. W. Idris, H. Hassan, and J. A. Jusoh, "Intelligent System for Personalizing Students' Academic Behaviors - A Conceptual Framework," International Journal on New Computer Architectures and Their Applications, vol. 2, no. 1, pp. 138-153, 2012.
[12] S. Agarwal, "Data mining: Data mining concepts and techniques," in 2013 International Conference on Machine Intelligence and Research Advancement, IEEE, 2013, pp. 203-207. https://doi.org/10.1109/ICMIRA.2013.45
[13] O. Glorio, J.-N. Mazón, I. Garrigós, and J. Trujillo, "A personalization process for spatial data warehouse development," Decis Support Syst, vol. 52, no. 4, pp. 884-898, 2012.
[14] D. Xin, J. Han, X. Li, Z. Shao, and B. W. Wah, "Computing iceberg cubes by top-down and bottom-up integration: The starcubing approach," IEEE Trans Knowl Data Eng, vol. 19, no. 1, pp. 111-126, 2006. https://doi.org/10.1109/TKDE.2007.250589
[15] S. Jacobs, Beginning XML with DOM and Ajax: From Novice to Professional. Apress, 2006.
[16] A.-I. Bunea, N. del Castillo Iniesta, A. Droumpali, A. E. Wetzel, E. Engay, and R. Taboryski, "Micro 3D printing by two-photon polymerization: configurations and parameters for the Nanoscribe system," in Micro, MDPI, 2021, pp. 164-180. https://doi.org/10.3390/micro1020013
[17] D. Arnott and G. Pervan, "Eight key issues for the decision support systems discipline," Decis Support Syst, vol. 44, no. 3, pp. 657-672, 2008. https://doi.org/10.1016/j.dss.2007.09.003
[18] A. M. Shahsavarani and E. Azad Marz Abadi, "The Bases, Principles, and Methods of Decision-Making: a review of literature," International Journal of Medical Reviews, vol. 2, no. 1, pp. 214-225, 2015. https://www.ijmedrev.com/article_68259.html
[19] P. Centobelli, R. Cerchione, and E. Esposito, "Aligning enterprise knowledge and knowledge management systems to improve efficiency and effectiveness performance: A three-dimensional Fuzzy-based decision support system," Expert Syst Appl, vol. 91, pp. 107-126, 2018. https://doi.org/10.1016/j.eswa.2017.08.032
[20] E. Mallach, "Decision support and data warehouse systems," 2000.
[21] T. Y. Wah, N. H. Peng, and C. S. Hok, "Building data warehouse," in Proceedings of the 24th South East Asia Regional Computer Conference, 2007, pp. 51-56.
[22] G. S. Reddy, R. Srinivasu, M. P. C. Rao, and S. R. Rikkula, "Data Warehousing, Data Mining, OLAP and OLTP Technologies are essential elements to support decision-making process in industries," International Journal on Computer Science and Engineering, vol. 2, no. 9, pp. 2865-2873, 2010.
[23] R. Tardío, A. Maté, and J. Trujillo, "A new big data benchmark for OLAP cube design using data pre-aggregation techniques," Applied Sciences, vol. 10, no. 23, p. 8674, 2020. https://doi.org/10.3390/app10238674
© 2024. This work is published under https://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
1 School of Accounting, Zhengzhou College of Finance and Economics, Zhengzhou, 450000, China