1. Introduction
Content marketing is normally regarded as a management process responsible for identifying, anticipating, and satisfying customer requirements profitably through content delivered via electronic channels [1]. The design of research methods in marketing has received plenty of attention for more than a century. Traditionally, content marketing strategies are determined based on technological developments, market requirements and expectations, growth in knowledge, and so on [2]. Machine learning-based methods have received increasing attention over the past several decades because they can analyze historical data and predict future behaviors and activities more effectively [3, 4]. When new data become available, managers can obtain relatively accurate estimates from machine learning prediction models to tackle underlying challenges such as missing outcomes and stimulus sampling [5]. Owing to the rapid development of machine learning, current content marketing strategies tend to be data-driven, meaning that companies pay more attention to consumers' online behavior and historical habits [6]. As a result, the scale and dimensionality of the data become huge, and traditional marketing methods are no longer suitable. The major motivation of this paper is therefore to investigate the potential of machine learning methods for dealing with the high dimensionality of marketing data attributes.
Based on their data requirements, machine learning methods can be divided into three categories [7]. The first comprises supervised learning-based methods, which require an independent training process and predict testing data using the learned model. Representative supervised models include sparse/collaborative representation [8–10], support vector machines [11–13], and ensemble learning [14–17]. The second comprises unsupervised learning-based models, which do not require labeled training samples and determine classes by considering the correlations among samples. Unsupervised models are also called clustering methods; examples include K-means [18, 19], ISODATA [20, 21], and fuzzy C-means [22–24]. The third comprises semi-supervised learning-based models, which consider labeled and unlabeled samples simultaneously in the training stage. Representative methods include semi-supervised support vector machines [25] and deep generative models [26].
Among the above-mentioned machine learning methods, decision trees are among the most widely discussed scalable multivariate methods, as they intrinsically follow the process of human decision making. Unlike emerging deep learning-based methods [27–31], which typically require large training sets and considerable computational support, decision trees exploit the latent structure of the training data: they split the training samples into bins, and each split variable is selected by means of a specific metric such as information gain, the Gini index, or entropy. Decision tree-based methods offer at least three advantages. First, the decision tree model is easy to follow and implement, so practitioners do not need complex background knowledge to understand the training procedure. Second, decision tree-based learning models require little data preparation, because they treat the attributes of samples directly as the learning foundation, which makes it possible to handle relatively large data scales without time-consuming preprocessing. Third, the effectiveness and robustness of a decision tree model can be easily measured. Figure 1 visually illustrates the process of classification using a decision tree.
[figure omitted; refer to PDF]
The classification and regression tree (CART) method was introduced by Breiman et al. in 1984; Chou [32] later studied optimal partitioning for such trees. CART is a representative nonparametric learning strategy that produces classification and regression trees based on the status of the dependent variables. In 1986, Quinlan proposed the iterative dichotomiser 3 method, called ID3 [33]. In this method, information gain determines the potential of the next nodes, meaning that nodes with high information gain are split. Quinlan later developed the well-known C4.5 method, which Xiaoliang et al. [34] further researched and improved in 2009. Like ID3, C4.5 builds on the information gain metric; in addition, it handles continuous inputs well because it selects thresholds and splits attributes when values exceed the threshold. In 2012, Patil et al. [35] examined the C5.0 classification method, a further extension of C4.5 and ID3. C5.0 not only has lower time and memory consumption than C4.5 and ID3 but is also efficient, as it splits nodes on the field with the maximum information gain. Recent publications on decision tree-based machine learning span variants such as random forests [36, 37], gradient boosting decision trees [38], and regression decision trees [39, 40].
Among the many decision tree-based methods, this paper specifically considers C4.5 and applies it to content marketing data analysis. This method is widely used because its classification rules follow the human thinking process and it produces desirable results. Based on four validation metrics, we conduct several experiments on a bank content marketing dataset to verify the performance of C4.5 against other comparison methods. The main contribution of this paper is the introduction of a decision tree-based method to the content marketing field to improve marketing capability efficiently.
2. Materials and Methods
2.1. Validation Metrics
Validation metrics, including entropy, information gain, the Gini index, and the gain ratio, are the most important standards for deciding whether the current node should be split. Entropy quantifies the uncertainty of a set of random variables: when the random variables have a high entropy value, their uncertainty is equally high and the current node should be further divided into two nodes. The entropy of such random variables is given as follows:
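(The equation itself did not survive extraction; in the standard form used by ID3 and C4.5, for a sample set D whose samples fall into K classes with proportions p_k, it reads)

H(D) = -\sum_{k=1}^{K} p_k \log_2 p_k.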
Information gain represents the difference between the entropy before splitting and the entropy after splitting. The splitting criterion therefore selects the attribute that yields the maximum information gain, since this implies better classification performance. The information gain is
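(reconstructing the omitted formula in standard notation, for an attribute A that partitions D into subsets D_v)

\mathrm{Gain}(D, A) = H(D) - \sum_{v \in \mathrm{Values}(A)} \frac{|D_v|}{|D|}\, H(D_v).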
The Gini index represents the uncertainty of the data: high Gini values mean high uncertainty among the samples. It can be defined as follows:
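(the omitted definition, reconstructed in its standard form with p_k as above)

\mathrm{Gini}(D) = 1 - \sum_{k=1}^{K} p_k^{2}.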
2.2. C4.5-Based Decision Tree Method
The C4.5 algorithm is normally seen as an important variant of the traditional ID3 algorithm and contains the following representative improvements over ID3: (1) C4.5 selects split attributes through the gain ratio; (2) C4.5 can handle both discrete and continuous attributes; (3) C4.5 performs pruning after the construction of the decision tree is finished; (4) when missing values occur, C4.5 can still maintain its performance.
When C4.5 constructs the decision tree, the attribute with the highest gain ratio is adopted for splitting the current node. As this process recurses, the calculated information gain becomes smaller; at each step, the attribute with the highest gain ratio is the one split. The gain ratio can be given by
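(the omitted formula, reconstructed in standard C4.5 notation)

\mathrm{GainRatio}(D, A) = \frac{\mathrm{Gain}(D, A)}{\mathrm{SplitInfo}(D, A)}, \qquad \mathrm{SplitInfo}(D, A) = -\sum_{v \in \mathrm{Values}(A)} \frac{|D_v|}{|D|} \log_2 \frac{|D_v|}{|D|}.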
When the attribute type is discrete, there is no need to discretize the data; when the attribute type is continuous, the data must be discretized. The C4.5 algorithm handles the discretization of continuous attributes, and its core idea is to arrange the attribute values in ascending order and test every candidate binary split between adjacent values:
(1) All data samples on the node are sorted from small to large according to the values of the continuous attribute, yielding the attribute value sequence a1 ≤ a2 ≤ … ≤ an.
(2) There are n − 1 candidate split positions; each candidate threshold is taken as the midpoint ti = (ai + ai+1)/2 of two adjacent values, which partitions the samples into the two subsets {a ≤ ti} and {a > ti}.
(3) By calculating the information gain ratio for every candidate, we obtain the two optimal splitting subsets corresponding to the maximal gain ratio and record the corresponding splitting threshold. A minimal sketch of this search is given below.
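To make the bi-partition concrete, the following minimal Python sketch selects a threshold on a continuous attribute by gain ratio. It is an illustration of the procedure above under our own naming, not the paper's implementation.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_threshold(values, labels):
    """Return the split threshold maximizing the gain ratio,
    following the C4.5 bi-partition idea for continuous attributes."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(values)
    values, labels = values[order], labels[order]
    base, n = entropy(labels), len(values)
    best_t, best_ratio = None, -np.inf
    # n - 1 candidate thresholds: midpoints of consecutive distinct values
    for i in range(n - 1):
        if values[i] == values[i + 1]:
            continue
        t = (values[i] + values[i + 1]) / 2.0
        left, right = labels[: i + 1], labels[i + 1 :]
        w_l, w_r = len(left) / n, len(right) / n
        gain = base - (w_l * entropy(left) + w_r * entropy(right))
        split_info = -(w_l * np.log2(w_l) + w_r * np.log2(w_r))
        if split_info > 0 and gain / split_info > best_ratio:
            best_ratio, best_t = gain / split_info, t
    return best_t, best_ratio
```

For example, best_threshold([1, 2, 3, 10, 11], [0, 0, 0, 1, 1]) returns the midpoint 6.5, which cleanly separates the two classes.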
2.3. Pruning Process in C4.5
The establishment of the decision tree depends heavily on the training samples, so the model fits the given training data very closely. However, such a decision tree may be overly complex for the testing data, triggering low classification accuracy, a phenomenon known as overfitting. Simplifying the decision model is therefore highly desirable, which leads to a crucial step: pruning.
The C4.5 algorithm here adopts the pessimistic error pruning (PEP) method proposed by Quinlan, a top-down pruning method. It determines whether to prune a subtree according to the error rates before and after pruning, so it does not need a separate pruning dataset.
For a leaf node, assume that it covers N training samples, of which E are misclassified; with a continuity correction of 1/2, its corrected error count is E + 1/2.
Suppose a misclassified sample is scored 1 and a correctly classified sample is scored 0; the number of misclassifications of the subtree then follows a binomial distribution, so its statistical information, i.e., mean and standard deviation, can be obtained by
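(the formula was lost in extraction; in Quinlan's pessimistic error pruning it is conventionally written as follows, where the subtree T has L leaves covering N samples with E_i misclassifications at leaf i — a reconstruction, not necessarily the paper's own notation)

e(T) = \sum_{i=1}^{L} \left( E_i + \tfrac{1}{2} \right), \qquad p_T = \frac{e(T)}{N}, \qquad \mu_T = N p_T, \qquad \sigma_T = \sqrt{N p_T \left( 1 - p_T \right)}.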
After replacing the subtree by leaf nodes, their error values can be determined by
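(reconstructed in the same notation: if collapsing the subtree into a single leaf leaves E_t training samples misclassified, the corrected error of the replacement leaf is)

e(t) = E_t + \frac{1}{2}.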
Note that the number of misclassifications of leaf nodes also follows a Bernoulli distribution. The mean values of the number of misclassifications of leaf nodes are defined as
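(in this reconstruction, with p_t = e(t)/N, the mean number of misclassifications of the replacement leaf is)

\mu_t = N p_t = e(t).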
Here, pruning can be performed if it satisfies the following formula:
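(in standard pessimistic error pruning, the subtree is replaced by the leaf whenever)

\mu_t \le \mu_T + \sigma_T,

i.e., the pessimistic error of the single leaf does not exceed the subtree's mean error by more than one standard deviation.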
To describe visually how the C4.5 decision tree is applied to given data, this paper provides the corresponding flowchart of C4.5 (see Figure 2).
[figure omitted; refer to PDF]
Generally, each classifier pays different attention to the features, which means that not all features are equally important in model training. To visually display the importance of the features in the content marketing dataset, this paper compares a linear SVM, a decision tree, and a random forest. Figure 7 displays the top 10 features that play a crucial role in model training. For the linear SVM, the first five of the ten features, namely pdays, poutcome_success, poutcome_failure, poutcome_nonexistent, and duration, have the highest impact on the training process. For the decision tree, it is more obvious that only duration and nr.employed are key features, while others such as euribor3m, cons.conf.idx, and poutcome_success have lower importance. For the random forest, the top 10 features appear to play roughly equal roles in training the model; the relatively low individual weights also indicate that most of the features are used. A sketch of how such importances can be extracted is given below.
[figure omitted; refer to PDF]
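For readers who wish to reproduce this comparison, the sketch below shows one way to extract the top ten feature weights with scikit-learn. The variables X, y, and feature_names (the one-hot-encoded bank marketing data) are assumed to be loaded already, so this is an illustration rather than the paper's code.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def top10(model, X, y, names):
    """Fit the model, then rank features: linear models expose coef_,
    tree-based models expose feature_importances_."""
    model.fit(X, y)
    w = (model.feature_importances_ if hasattr(model, "feature_importances_")
         else np.abs(model.coef_).ravel())
    return [(names[i], float(w[i])) for i in np.argsort(w)[::-1][:10]]

# X, y, feature_names are assumed to hold the preprocessed bank dataset.
for m in (LinearSVC(max_iter=5000), DecisionTreeClassifier(), RandomForestClassifier()):
    print(type(m).__name__, top10(m, X, y, feature_names))
```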
In this paper, 20% of the total samples are selected as training samples and the remaining 80% are treated as testing samples. Table 1 tabulates the overall experimental results obtained from the six classification models under the four validation metrics. As Table 1 shows, the decision tree captures the best predictive performance, with the highest metric values. Compared with the decision tree, the neural network and the random forest match its optimal results on F1 score and recall, respectively. In addition, the nearest neighbor method achieves the second-best F1 score, the linear SVM achieves the second-best recall and F1 score, the neural network has suboptimal accuracy, Naive Bayes has suboptimal precision, and the random forest has a suboptimal F1 score. The execution times of all the algorithms are also listed in Table 1: nearest neighbors and the linear SVM require the most computation time, while the others cost much less. Figure 8 visually displays the decision boundaries of the six classification methods; compared with Figures 8(a)–8(d), Figures 8(e) and 8(f) show more distinct boundaries. Figure 9 further adds the ROC curves of four representative classification methods. As can be seen from Figure 9, even though the decision tree has a smaller area under the ROC curve than the nearest neighbors, it remains highly competitive.
Table 1
Comparison results obtained from six classification methods.
Metric | Nearest neighbor | Linear SVM | Neural network | Naive Bayes | Random forest | Decision tree |
Precision | 0.91 | 0.91 | 0.92 | 0.93 | 0.89 | 0.94 |
Recall | 0.97 | 0.99 | 0.98 | 0.76 | 1 | 1 |
F1 score | 0.94 | 0.94 | 0.95 | 0.87 | 0.94 | 0.95 |
Accuracy | 0.8877 | 0.8981 | 0.9037 | 0.782 | 0.8879 | 0.9135 |
Time (s) | 3.63 | 3.07 | 0.48 | 0.07 | 0.11 | 0.10 |
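A minimal scikit-learn sketch of the comparison in Table 1 follows. It assumes X and y hold the preprocessed bank marketing data with the positive class encoded as 1; the paper's exact hyperparameters are not stated, so library defaults are shown, and note that scikit-learn's tree is CART-based, with entropy splitting only approximating the C4.5 criterion.

```python
import time
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score, f1_score, accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

models = {
    "Nearest neighbor": KNeighborsClassifier(),
    "Linear SVM": LinearSVC(max_iter=5000),
    "Neural network": MLPClassifier(max_iter=500),
    "Naive Bayes": GaussianNB(),
    "Random forest": RandomForestClassifier(),
    "Decision tree": DecisionTreeClassifier(criterion="entropy"),  # C4.5-style entropy splits
}

# 20% of the samples train the models; the remaining 80% are held out.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.2, random_state=0)
for name, m in models.items():
    t0 = time.time()
    m.fit(X_tr, y_tr)
    p = m.predict(X_te)
    print(f"{name}: P={precision_score(y_te, p):.2f} R={recall_score(y_te, p):.2f} "
          f"F1={f1_score(y_te, p):.2f} Acc={accuracy_score(y_te, p):.4f} "
          f"t={time.time() - t0:.2f}s")
```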
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
To verify the impact of different training-sample ratios on the experimental results of the six algorithms, this paper performs an experiment on the content marketing dataset in which the proportion of training samples varies from 5% to 25%. Figure 10 provides a visual comparison of the six classifiers over the different training ratios. Compared with the other methods, the decision tree shows the best results at every training-sample ratio. The results obtained from Naive Bayes are disappointing, primarily because it assumes that the attributes of the samples are independent; when the sample attributes are correlated, its effectiveness deteriorates. Moreover, Naive Bayes cannot learn interactions among features, which further limits its performance. A brief sketch of this sweep follows.
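Reusing the models dictionary from the previous sketch, and again assuming X and y are loaded, the sweep over training fractions can be written as:

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Vary the training fraction from 5% to 25%, as in Figure 10.
for frac in (0.05, 0.10, 0.15, 0.20, 0.25):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=frac, random_state=0)
    for name, m in models.items():
        m.fit(X_tr, y_tr)  # fit() retrains from scratch at each fraction
        print(f"{frac:.0%} {name}: {accuracy_score(y_te, m.predict(X_te)):.4f}")
```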
[figure omitted; refer to PDF]
4. Conclusions
Traditional content marketing strategies rely heavily on empirical knowledge such as market requirements and expectations. However, when the content marketing process involves many diverse users across different societies and communities, the user data become huge, and traditional content marketing strategies become difficult to apply. Machine learning-based methods can analyze historical data and predict future behaviors and activities more effectively. Among the many machine learning-based methods, the decision tree has received particular attention from content marketing managers because it intrinsically follows the process of human decision making. To verify the performance of the decision tree on content marketing data, this paper considered the well-known C4.5 decision tree method and compared it with five other machine learning methods: nearest neighbor, linear SVM, neural network, Naive Bayes, and random forest. Based on four validation metrics, experiments were conducted on a bank content marketing dataset under different experimental scenarios and settings. The results obtained from the six methods indicate that the decision tree handles the content marketing dataset well, meaning that it can provide reasonable and accurate content marketing suggestions for managers.
[1] J. Rowley, "Understanding digital content marketing," Journal of Marketing Management, vol. 24 no. 5-6, pp. 517-540, DOI: 10.1362/026725708x325977, 2008.
[2] J. Sheth, C. H. Kellstadt, "Next frontiers of research in data driven marketing: Will techniques keep up with data tsunami?," Journal of Business Research, vol. 125, pp. 780-784, DOI: 10.1016/j.jbusres.2020.04.050, 2021.
[3] I. Heimbach, D. S. Kostyra, O. Hinz, "Marketing automation," Business and Information Systems Engineering, vol. 57 no. 2, pp. 129-133, DOI: 10.1007/s12599-015-0370-8, 2015.
[4] A. Miklosik, M. Kuchta, N. Evans, S. Zak, "Towards the adoption of machine learning-based analytical tools in digital marketing," IEEE Access, vol. 7, pp. 85705-85718, DOI: 10.1109/access.2019.2924425, 2019.
[5] L. Hagen, K. Uetake, N. Yang, B. Bollinger, A. J. B. Chaney, D. Dzyabura, J. Etkin, A. Goldfarb, L. Liu, K. Sudhir, Y. Wang, J. R. Wright, Y. Zhu, "How can machine learning aid behavioral marketing research?," Marketing Letters, vol. 31 no. 4, pp. 361-370, DOI: 10.1007/s11002-020-09535-7, 2020.
[6] L. Ma, B. Sun, "Machine learning and AI in marketing-connecting computing power to human insights," International Journal of Research in Marketing, vol. 37 no. 3, pp. 481-504, DOI: 10.1016/j.ijresmar.2020.04.005, 2020.
[7] M. I. Jordan, T. M. Mitchell, "Machine learning: Trends, perspectives, and prospects," Science, vol. 349 no. 6245, pp. 255-260, DOI: 10.1126/science.aaa8415, 2015.
[8] X. Shen, W. Bao, H. Liang, X. Zhang, "Grouped collaborative representation for hyperspectral image classification using a two-phase strategy," IEEE Geoscience and Remote Sensing Letters, vol. 19, 2021.
[9] Z. Yu, X. Zheng, F. Huang, W. Guo, L. Sun, Z. Yu, "A framework based on sparse representation model for time series prediction in smart city," Frontiers of Computer Science, vol. 15 no. 1,DOI: 10.1007/s11704-019-8395-7, 2021.
[10] L. Guo, Q. Dai, "Laplacian regularized low-rank sparse representation transfer learning," International Journal of Machine Learning and Cybernetics, vol. 12 no. 3, pp. 807-821, DOI: 10.1007/s13042-020-01203-6, 2021.
[11] L. Ali, I. Wajahat, N. Amiri Golilarz, F. Keshtkar, S. A. C. Bukhari, "LDA-GA-SVM: improved hepatocellular carcinoma prediction through dimensionality reduction and genetically optimized support vector machine," Neural Computing and Applications, vol. 33 no. 6, pp. 2783-2792, DOI: 10.1007/s00521-020-05157-2, 2021.
[12] W. C. Leong, A. Bahadori, J. Zhang, Z. Ahmad, "Prediction of water quality index (WQI) using support vector machine (SVM) and least square-support vector machine (LS-SVM)," International Journal of River Basin Management, vol. 19 no. 2, pp. 149-156, DOI: 10.1080/15715124.2019.1628030, 2021.
[13] H. Liu, C. Chen, Z. Guo, Y. Xia, X. Yu, S. Li, "Overall grouting compactness detection of bridge prestressed bellows based on RF feature selection and the GA-SVM model," Construction and Building Materials, vol. 301,DOI: 10.1016/j.conbuildmat.2021.124323, 2021.
[14] X. Yu, Q. Peng, L. Xu, F. Jiang, J. Du, D. Gong, "A selective ensemble learning based two-sided cross-domain collaborative filtering algorithm," Information Processing and Management, vol. 58 no. 6,DOI: 10.1016/j.ipm.2021.102691, 2021.
[15] K. Zhou, Y. Yang, Y. Qiao, T. Xiang, "Domain adaptive ensemble learning," IEEE Transactions on Image Processing, vol. 30, pp. 8008-8018, DOI: 10.1109/tip.2021.3112012, 2021.
[16] E. A. Mohammed, M. Keyhani, A. Sanati-Nezhad, S. H. Hejaz, B. H. Far, "An ensemble learning approach to digital corona virus preliminary screening from cough sounds," Scientific Reports, vol. 11 no. 1,DOI: 10.1038/s41598-021-95042-2, 2021.
[17] T. Zhang, D.-G. Zhang, H.-R. Yan, J.-N. Qiu, J.-X. Gao, "A new method of data missing estimation with FNN-based tensor heterogeneous ensemble learning for internet of vehicle," Neurocomputing, vol. 420, pp. 98-110, DOI: 10.1016/j.neucom.2020.09.042, 2021.
[18] D. Zhao, X. Hu, S. Xiong, J. Tian, J. Xiang, J. Zhou, H. Li, "k-means clustering and kNN classification based on negative databases," Applied Soft Computing, vol. 110,DOI: 10.1016/j.asoc.2021.107732, 2021.
[19] X. Shi, Y. Li, Y. Yang, B. Sun, F. Qi, "Multi-models and dual-sampling periods quality prediction with time-dimensional K-means and state transition-LSTM network," Information Sciences, vol. 580, pp. 917-933, DOI: 10.1016/j.ins.2021.09.056, 2021.
[20] M. A. Rajab, L. E. George, "Stamps extraction using local adaptive k-means and ISODATA algorithms," Indonesian Journal of Electrical Engineering and Computer Science, vol. 21 no. 1, pp. 137-145, DOI: 10.11591/ijeecs.v21.i1.pp137-145, 2021.
[21] Z. Chen, B. Cong, Z. Hua, K. Cengiz, M. Shabaz, "Application of clustering algorithm in complex landscape farmland synthetic aperture radar image segmentation," Journal of Intelligent Systems, vol. 30 no. 1, pp. 1014-1025, DOI: 10.1515/jisys-2021-0096, 2021.
[22] S. Askari, "Fuzzy C-Means clustering algorithm for data with unequal cluster sizes and contaminated with noise and outliers: review and development," Expert Systems with Applications, vol. 165,DOI: 10.1016/j.eswa.2020.113856, 2021.
[23] A. Pickens, S. Sengupta, "Benchmarking studies aimed at clustering and classification tasks using K-means, fuzzy C-means and evolutionary neural networks," Machine Learning and Knowledge Extraction, vol. 3 no. 3, pp. 695-719, DOI: 10.3390/make3030035, 2021.
[24] Z. Shi, D. Wu, C. Guo, C. Zhao, Y. Cui, F.-Y. Wang, "FCM-RDpA: TSK fuzzy regression model construction using fuzzy C-means clustering, regularization, droprule, and powerball adabelief," Information Sciences, vol. 574, pp. 490-504, DOI: 10.1016/j.ins.2021.05.084, 2021.
[25] K. Bennett, A. Demiriz, "Semi-supervised support vector machines," Advances in Neural Information Processing Systems, vol. 11, pp. 368-374, 1999.
[26] D. P. Kingma, S. Mohamed, D. J. Rezende, M. Welling, "Semi-supervised learning with deep generative models," 2014. https://arxiv.org/abs/1406.5298
[27] R. Kundu, H. Basak, P. K. Singh, A. Ahmadian, M. Ferrara, R. Sarkar, "Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT-scans," Scientific Reports, vol. 11 no. 1,DOI: 10.1038/s41598-021-93658-y, 2021.
[28] X. Li, Z. Du, Y. Huang, Z. Tan, "A deep translation (GAN) based change detection network for optical and SAR remote sensing images," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 179, pp. 14-34, DOI: 10.1016/j.isprsjprs.2021.07.007, 2021.
[29] H. Zhang, Z. Le, Z. Shao, H. Xu, J. Ma, "MFF-GAN: an unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion," Information Fusion, vol. 66, pp. 40-53, DOI: 10.1016/j.inffus.2020.08.022, 2021.
[30] H. D. Nguyen, K. P. Tran, S. Thomassey, M. Hamad, "Forecasting and anomaly detection approaches using LSTM and LSTM autoencoder techniques with the applications in supply chain management," International Journal of Information Management, vol. 57,DOI: 10.1016/j.ijinfomgt.2020.102282, 2021.
[31] J. Li, T. Yu, B. Yang, "A data-driven output voltage control of solid oxide fuel cell using multi-agent deep reinforcement learning," Applied Energy, vol. 304,DOI: 10.1016/j.apenergy.2021.117541, 2021.
[32] P. A. Chou, "Optimal partitioning for classification and regression trees," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13 no. 4, pp. 340-354, DOI: 10.1109/34.88569, 1991.
[33] J. R. Quinlan, "Induction of decision trees," Machine Learning, vol. 1 no. 1, pp. 81-106, DOI: 10.1007/bf00116251, 1986.
[34] Z. Xiaoliang, Y. Hongcan, W. Jian, W. Shangzhuo, "Research and application of the improved algorithm C4.5 on decision tree," Proceedings of the 2009 International Conference on Test and Measurement, vol. 2, pp. 184-187, 2009.
[35] N. Patil, R. Lathi, V. Chitre, "Comparison of C5.0 & CART classification algorithms using pruning technique," International Journal of Engineering Research and Technology, vol. 1 no. 4, 2012.
[36] Y. Chen, W. Zheng, W. Li, Y. Huang, "Large group activity security risk assessment and risk early warning based on random forest algorithm," Pattern Recognition Letters, vol. 144,DOI: 10.1016/j.patrec.2021.01.008, 2021.
[37] J. W. Yang, L. M. Jiang, J. Lemmetyinen, J. M. Pan, K. Luojus, M. Takala, "Improving snow depth estimation by coupling HUT-optimized effective snow grain size parameters with the random forest approach," Remote Sensing of Environment, vol. 264,DOI: 10.1016/j.rse.2021.112630, 2021.
[38] H. Albaqami, G. M. Hassan, A. Subasi, A. Datta, "Automatic detection of abnormal EEG signals using wavelet feature extraction and gradient boosting decision tree," Biomedical Signal Processing and Control, vol. 70,DOI: 10.1016/j.bspc.2021.102957, 2021.
[39] S. Mohammadiun, G. Hu, A. Alavi Gharahbagh, R. Mirshahi, J. Li, K. Hewage, R. Sadiq, "Optimization of integrated fuzzy decision tree and regression models for selection of oil spill response method in the Arctic," Knowledge-Based Systems, vol. 213,DOI: 10.1016/j.knosys.2020.106676, 2021.
[40] G. Bermejo-Martín, C. Rodríguez-Monroy, Y. M. Núñez-Guerrero, "Water consumption range prediction in Huelva's households using classification and regression trees," Water, vol. 13 no. 4, 2021.
Copyright © 2022 Yi Liu and Shuo Yang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
Traditional content marketing methods rely heavily on market requirements but can hardly obtain accurate marketing predictions under a large number of requirements. Machine learning-based approaches are nowadays widely used in multiple fields, as they involve a training process suited to big data problems. In this paper, decision tree-based methods, which intrinsically follow the process of human decision making, are introduced to the field of content marketing. Specifically, this paper considers a well-known method, called C4.5, which deals well with continuous values. Based on four validation metrics, experimental results obtained from several machine learning-based methods indicate that the C4.5-based decision tree method can handle the content marketing dataset. The results show that the decision tree-based method can provide reasonable and accurate suggestions for content marketing.
1 School of Business, Macau University of Science and Technology, Cotai, Macao, China; School of Business, Guangdong Polytechnic of Science and Technology, Zhuhai, Guangdong, China
2 School of Computer Science and Cyber Engineering, Guangzhou University, Guangzhou, Guangdong, China