1. Introduction
Developing competitive products that exceed consumers’ expectations plays a central role in enterprise activities. Successful products can enhance user satisfaction, stimulate purchase desire, increase sales, and sustain the ‘product–money–new product’ cycle, enabling enterprises to thrive in a competitive market [1]. Over the past few decades, product design has undergone significant changes in design concepts and methods, driven by advances in manufacturing and technology. Design concepts have evolved from ‘form follows function’, ‘technology first’, and ‘product-centric’ to ‘form follows emotion’ and ‘user-centric’ [2]. Design methods have shifted from experiential and fuzzy design to intelligent design, computer-aided design, and multi-domain joint design [3,4,5,6]. Despite this progress, fierce market competition calls for next-generation product innovation and design methodologies built on advances in big data, deep learning, and other AI algorithms, which have transformed the understanding, pattern recognition, and generative synthesis of text, images, video, audio, etc. [7,8].
Currently, customers are increasingly focused on their diversified and personalized spiritual and emotional needs, and they pay growing attention to product appearance [9,10,11,12,13] in addition to its functions. Traditional product design methods, such as the theory of inventive problem solving (TRIZ), quality function deployment (QFD), the KANO model, Kansei engineering, and axiomatic design (AD), are well established [2,14,15,16]. However, capturing user requirements, ensuring visibility, and evaluating products remain significant challenges in the field of product design. The foundation of capturing user requirements and evaluating products is data acquisition. Traditional methods rely on manual surveys [10,17,18,19], which are time-consuming, labor-intensive, and often yield unevenly distributed data. Moreover, such data are collected once and cannot be updated, a major obstacle in today’s fast-paced era [20]. Alternatively, relying on designers’ or experts’ experience and intuition is highly subjective, uninterpretable, and risky [21]. As we discuss in detail in Section 3, traditional methods have several other drawbacks as well. These deficiencies lead to lengthy product development cycles, poor predictability, and low success rates, all of which matter greatly to enterprises. However, recent studies have found that big data and AI algorithms can alleviate these problems in an automated and intelligent manner [20,22]. In particular, AI-generated content (AIGC) can bring disruptive breakthroughs to product design. Accordingly, big data and AI algorithms have been increasingly applied in the field of product design [18,23].
With the rapid development of the Internet, IoT, and communication technologies, a large amount of data has accumulated in the product lifecycle (Figure 1), which is expanding exponentially every day [24,25,26]. The product lifecycle contains a lot of product feedback information, such as user preferences, market demands, and visual displays [27,28,29]. This information is valuable for guiding product design and has sparked increasing interest in both product design and big data fields [30,31]. On one hand, processing and analyzing such a massive amount of data presents new challenges that need to be appropriately addressed. On the other hand, successful analysis can lead to better products. Thus, how to extract valuable information from big data and apply it to design remains the primary difficulty and focal point of current research [22,27,31].
Big data in the product lifecycle are characterized by multiple data types, large volume, low value density, and rapid updates, making them challenging to handle with conventional techniques and algorithms. However, AI algorithms, including convolutional neural networks (CNNs), generative adversarial networks (GANs), natural language processing (NLP), neural style transfer, motion detection, speech recognition, video summarization, and emotion recognition, have strong capabilities in processing big data. Figure 2 provides an overview of mining product-related information from big data. This paper focuses on analyzing big data with AI algorithms and applying the findings to product design.
In recent years, big data and AI-driven product design have become a significant research hotspot. However, to the best of our knowledge, before this work there has been almost no comprehensive summary of the applications of big data in product design. Therefore, a thorough literature review is necessary to provide theoretical foundations that can be utilized to develop scientific insights in this area. Furthermore, it can help enterprises reduce the time and cost of product development, enhance user satisfaction, and promote the advancement of product design towards automation and intelligence. This paper introduces both traditional product design methods and big data-based methods, highlights their limitations and challenges, and focuses on the applications, algorithms, and process flows of structured, textual, image, audio, and video data. The research framework is shown in Figure 3.
Figure 3 shows the structure of the paper. Section 2 discusses the fundamental tasks in product design, while Section 3 provides an overview of several widely used traditional product design methods along with their limitations. Section 4 reviews the application of big data and AI algorithms in product design, including structured, textual, image, audio, and video data. Moreover, potential future studies and current limitations are discussed in this section. In Section 5, we summarize the paper and outline the future direction of the product design field.
2. Key Tasks in Product Design
Figure 4 summarizes the process of product design. It includes nine key tasks: product vision definition, market research, competitive research, user research, idea generation, feasibility analysis, sketching, prototyping, and scheme evaluation. The product vision, which clarifies the overall goal of the product, is typically defined before the design process begins. Market research is carried out to understand the development trend, user demand, and purchasing power. Competitive analysis compares existing products from multiple dimensions and derives their advantages and deficiencies.
User research is supported by data. Traditional methods of data collection, such as observation, experiments, interviews, and surveys, are often insufficient in terms of the quality and quantity of data when compared with big data-driven methods. To extract user requirements, preferences, and satisfaction, traditional methods typically utilize Kansei engineering, QFD, the KANO model, AD, and affective design [14,16,32,33]. In contrast, big data-driven methods widely use technologies such as NLP, speech recognition, emotion recognition, and intelligent video analysis. Depending on their orientation, product design methods can be broadly categorized into product-centric and user-centric. The product-centric approach focuses on improving performance and expects users to passively adapt to the product. In contrast, the user-centric approach prioritizes satisfying users’ spiritual requirements and emphasizes that products must actively adapt to users. As users’ requirements have become increasingly diversified and personalized, the user-centric approach has become mainstream. Kansei engineering and the KANO model are typical user-centric methods, and QFD also involves user requirements [14,16,33].
Idea generation can be classified into two types based on different innovative thinking: logical thinking and intuitive thinking [34]. Logical thinking focuses on detailed analysis and decomposition of problems, such as TRIZ, universal design, and AD [15,32,35]. In contrast, intuitive thinking aims to inspire designers and includes brainstorming, bionics, analogy, combination, and deformation [1,2,36]. For instance, Nathalie and John [37] utilized brainstorming to stimulate inspiration, while Youn et al. [38] found that combining ideas was a significant driver of invention by analyzing a vast number of patents. Similarly, Lai et al. [9] created new products by merging various product forms and colors, and Zarraonandia et al. [39] used the combinatorial creativity in digital game design. After generating ideas, the next task is feasibility analysis. This involves technical, economic, security, infringement, and environmental analysis. Notably, technical analysis examines whether there are any inconsistencies among various technical attributes and resolves them.
Product display is important for enabling users, designers, and experts to intuitively comprehend the designed product. Sketches and prototypes serve as visualizations of ideas. However, traditional methods often require designers to possess drawing skills, such as hand drawing or 3D modeling. In contrast, big data- and AI-driven methods offer simpler operations and do not require drawing skills. Furthermore, traditional display methods often involve images or 3D models, whereas big data- and AI-driven methods can incorporate videos or interactions, resulting in more intuitive and engaging displays.
Evaluation is the final step. Once the product is designed, it will be presented to users, experts, designers, or decision makers for feedback. Based on this feedback, the product can be improved and verified to ensure it aligns with the original vision. For instance, Mattias and Tomohiko [40] proposed a novel evaluation method that considers the importance of customer values, the contribution of each offering to the value, and the customer’s budget. They successfully applied this method to a real-life case at an investment machine manufacturer. User evaluation can also be regarded as a part of the user research task.
3. Traditional Product Design Methods
3.1. Kansei Engineering
Kansei engineering is a widely used user-centric method that is commonly utilized in the user research task [41]. In the 1970s, Nagamachi noted that, in Japan, where material wealth was already abundant, the consumption trend had shifted from functionality to sensibility, with sensibility becoming the core of product design [42]. Nissan, Mitsubishi, Honda, and Mazda used Kansei engineering to improve car positioning, shape, color, dashboards, and more, contributing to the significant success of the Japanese automobile industry [43,44,45]. In the late 1990s, Kansei engineering expanded to Europe. Schütte proposed a modification strategy to simplify the approach for European culture [46,47,48,49]. In addition, Schütte discussed sample selection [50] and visualized the Kansei engineering steps [51]. Nagamachi laid the foundation of Kansei engineering and continued to explore it further; in his latest research [52], he suggested introducing AI into Kansei engineering. The framework of Kansei engineering is shown in Figure 5.
The Kansei engineering process comprises four key stages: (i) Kansei word collection involves gathering words from various sources, such as magazines, documents, manuals, experts, interviewees, product catalogs, and e-mails. However, collected words may lack pertinence and applicability due to their diverse fields of origin; (ii) Kansei word selection can be done manually, but this may lead to subjective results. To address this issue, some researchers have used techniques like principal component analysis (PCA), factor analysis, hierarchical clustering analysis, and K-means for word selection. For instance, Djatna et al. [53] adopted the Term Frequency-Inverse Document Frequency (TF-IDF) approach to select high-frequency Kansei words for their tea powder packaging design, while Shieh and Yeh [10] used cluster analysis to select four sets of Kansei words out of a hundred; (iii) For Kansei evaluation, most studies use semantic differential (SD) scales or Likert scales to design questionnaires and obtain Kansei evaluations from survey results. However, some researchers have turned to physiological responses [11,54], believing these signals to be more reliable than self-reported scores. For example, Kemal et al. [12] used an eye-tracker to obtain objective data on a ship’s appearance, including the area of interest (AOI), scan path, and heat maps. Nevertheless, relying on a single physiological signal can lead to one-sided results; thus, Xiao and Cheng [55] designed several experiments involving eye-tracking, skin conductance, heart rate, and electroencephalography. We note that physiological signals relate more to the intensity of Kansei than to its content; (iv) The final step is constructing the relationship model that links user affections to product parameters. Various methods, such as quantification theory, support vector machines, and neural networks, can be employed to achieve this.
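As a minimal sketch of stage (ii), the TF-IDF selection used by Djatna et al. [53] can be implemented in a few lines of plain Python; the corpus and candidate Kansei words below are illustrative assumptions, not data from any cited study.

```python
from collections import Counter
import math

def tfidf_scores(documents):
    """Score each word in each document by TF-IDF.

    documents: list of token lists (e.g., candidate Kansei words
    gathered per source document). Returns {doc_index: {word: score}}.
    """
    n_docs = len(documents)
    # Document frequency: in how many documents does each word appear?
    df = Counter()
    for doc in documents:
        df.update(set(doc))
    scores = {}
    for i, doc in enumerate(documents):
        tf = Counter(doc)
        scores[i] = {
            w: (tf[w] / len(doc)) * math.log(n_docs / df[w])
            for w in tf
        }
    return scores

# Toy corpus of candidate Kansei words from three sources
# (the words are illustrative).
docs = [
    ["elegant", "soft", "elegant", "modern"],
    ["soft", "warm", "natural"],
    ["modern", "sharp", "elegant"],
]
scores = tfidf_scores(docs)
# Keep the top-scoring word per document as a selected Kansei word.
selected = {i: max(s, key=s.get) for i, s in scores.items()}
```

Words that are frequent within one source but rare across sources score highest, which is exactly the pertinence that manual selection struggles to guarantee.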
Although Kansei engineering has achieved significant success in product design, several deficiencies still need to be addressed. (i) The collected Kansei words are often inadequate in pertinence and number; (ii) The numbers of samples and subjects are small; (iii) Both questionnaires and physiological signals are susceptible to the effects of time, environment, and the subjects’ concentration, which can reduce the authenticity and objectivity of the results; (iv) Limited by their life backgrounds and work experience, subjects’ questionnaire evaluations vary greatly; (v) The survey scope is often small, leading to uneven data distribution. These deficiencies mainly arise from data acquisition, especially the collection of Kansei words and Kansei evaluations. The quantity and quality of survey data directly affect the construction of the relationship model, making data acquisition an urgent issue that needs to be addressed.
3.2. KANO Model
The KANO model is utilized in user research tasks to classify user requirements by revealing the nonlinear relationship between user satisfaction and product performance. For a long time, it was believed that user satisfaction was directly proportional to product performance. However, it has been shown that fulfilling an individual performance factor to a great extent does not necessarily lead to high user satisfaction, and not all performance factors are equally important.
In the 1980s, Noriaki Kano conducted a detailed study on user satisfaction and proposed the KANO model [56], which divides user requirements into five types (Figure 6). Must-be quality refers to requirements that users take for granted. When fulfilled, users just feel normal; otherwise, they will be incredibly dissatisfied, such as the call function for mobile phones. One-dimensional quality is proportional to user satisfaction, such as the battery life for mobile phones. Attractive quality increases satisfaction when fulfilled but does not cause dissatisfaction when unfulfilled, such as the temperature display for cups. Indifferent quality has no impact on user satisfaction, such as the built-in pocket for coats. Reverse quality results in dissatisfaction when fulfilled, such as pearls on men’s wear. Researchers have utilized the KANO model for various applications, including Yao et al. [19], who categorized twelve features of mobile security applications through a structured KANO questionnaire, and Avikal et al.’s [13] examination of customer satisfaction based on aesthetic sentiments by integrating the KANO model with QFD.
The KANO model is an essential tool for understanding user satisfaction and prioritizing product development efforts [16,57]. According to the priorities assigned by the KANO model, enterprises must ensure that must-be quality reaches the threshold, beyond which any additional investment is wasted; invest in one-dimensional quality as much as possible; prioritize attractive quality when the budget permits; avoid reverse quality; and never waste resources on indifferent quality. However, not all customer requirements are equal, even within the same category. As the KANO model cannot distinguish differences among requirements within the same category [58], Lina et al. [59] proposed the IF-KANO model, which adopts logical KANO classification criteria to categorize requirements. Taking elevators as an example, their results showed that both load capacity and operational stability belong to one-dimensional quality, but the priority of operational stability is slightly higher than that of load capacity.
The KANO model relies on the data acquisition of the KANO questionnaire, which traditionally provides a single option or options within a given range, failing to fully capture the ambiguity and complexity of customers’ preferences. As a result, numerous scholars have attempted to enhance the KANO questionnaire’s options [60,61]. For instance, Mazaher et al. [62] proposed the fuzzy KANO questionnaire, which uses percentages instead of fixed options and allows participants to make multiple choices. Chen and Chuang [63] introduced the Likert scale to gauge the degree of satisfaction or dissatisfaction. Cigdem and Amitava [64] combined the Servqual scale and the KANO model in a complementary way. While some researchers have recognized that text can better express opinions than scale, they also note that text is challenging for traditional statistical methods [65]. In addition to improving questionnaire options, some studies have focused on questioning skills to enhance the KANO questionnaire. For example, Bellandi et al. [66] suggested that questions should avoid polar wording.
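The classification step behind such questionnaires can be sketched with the Kano evaluation table, which maps a respondent’s answers to the functional (“feature present”) and dysfunctional (“feature absent”) questions to a requirement category; the table below is one commonly used form of it.

```python
# Answers to the functional ("feature present") and dysfunctional
# ("feature absent") questions, in the conventional order.
ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]

# A commonly used form of the Kano evaluation table:
# rows = functional answer, columns = dysfunctional answer.
# A=attractive, O=one-dimensional, M=must-be, I=indifferent,
# R=reverse, Q=questionable (contradictory answers).
TABLE = [
    ["Q", "A", "A", "A", "O"],  # functional: like
    ["R", "I", "I", "I", "M"],  # functional: must-be
    ["R", "I", "I", "I", "M"],  # functional: neutral
    ["R", "I", "I", "I", "M"],  # functional: live-with
    ["R", "R", "R", "R", "Q"],  # functional: dislike
]

def kano_category(functional, dysfunctional):
    """Classify one requirement from a pair of questionnaire answers."""
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# e.g., a temperature display on a cup: pleasing when present,
# tolerable when absent -> attractive quality.
category = kano_category("like", "neutral")
```

Tallying these per-respondent categories over the whole sample gives the per-requirement classification the model is known for.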
While improvements have been made to the KANO questionnaire, it shares the limitations discussed in Section 3.1. In addition, the KANO model itself has some drawbacks. (i) The model relies on pre-existing requirements and lacks research on how they are acquired; (ii) The KANO model only covers the categorization and rating of customer requirements, so its application is relatively simplistic.
3.3. QFD
QFD was first proposed by Japanese scholars Yoji Akao and Shigeru Mizuno in the 1970s, and aims to translate customer requirements into technical attributes, parts characteristics, key process operations, and production requirements [15,33,67]. The house of quality (HoQ) is the core of QFD, as depicted in Figure 7. QFD can be used for multiple tasks, such as generating ideas, analyzing competition, and assessing technical feasibility.
Starting from meeting market demands and driven by customer requirements, QFD explicitly translates requirements information into specific information that is directly used by design, production, and sales departments, ensuring that the resulting products meet customer needs and expectations [68]. For instance, Tomohiko [21] applied QFD to hair dryer design, using it to decompose customer requirements into characteristics and building a HoQ to guide the design process. Yan et al. [69] applied QFD for competitive analysis, generating insights into product improvement strategies. Noting the inherently vague and ambiguous nature of customer requirements [17,70], Cengiz et al. [71] proposed fuzzy QFD. Additionally, QFD can translate requirements from various stakeholders such as recyclers, production engineers, and customers. Considering that the technical characteristics in the HoQ may be contradictory, Wang et al. [72] combined QFD with TRIZ and used the contradiction matrix to derive a solution. However, QFD’s reliance on experts to rank customer requirements leads to intense subjectivity. To mitigate this issue, scholars have introduced Multi-Criteria Decision-Making (MCDM) methods [73,74,75,76]. Like Kansei engineering, QFD employs definite scores to express evaluations, ignoring the uncertainty of user and expert scoring; to address this limitation, some studies have introduced rough set theory, interval-valued fuzzy-rough sets, and grey relational analysis [77,78,79,80].
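The core HoQ computation, aggregating weighted customer requirements into technical-attribute priorities through the relationship matrix, can be sketched as follows; the hair dryer requirements, attributes, weights, and relationship strengths are illustrative assumptions, not values from the cited studies.

```python
# Customer requirements with importance weights (illustrative values).
requirements = {"quiet operation": 0.5, "fast drying": 0.3, "light weight": 0.2}

# Relationship matrix of the HoQ: strength of the link between each
# requirement and each technical attribute, on the usual 0/1/3/9 scale.
relationship = {
    "quiet operation": {"motor power": 3, "housing damping": 9, "mass": 1},
    "fast drying":     {"motor power": 9, "housing damping": 0, "mass": 0},
    "light weight":    {"motor power": 1, "housing damping": 1, "mass": 9},
}

def technical_importance(requirements, relationship):
    """Aggregate requirement weights into technical-attribute priorities:
    importance(attr) = sum over requirements of weight * strength."""
    importance = {}
    for req, weight in requirements.items():
        for attr, strength in relationship[req].items():
            importance[attr] = importance.get(attr, 0.0) + weight * strength
    return importance

priorities = technical_importance(requirements, relationship)
# Rank technical attributes for the design team.
ranked = sorted(priorities, key=priorities.get, reverse=True)
```

The ranking tells the design team where engineering effort most directly serves the weighted customer requirements.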
Although QFD has made some progress in product design, it still has limitations. (i) The relationship between customer requirements and characteristics is determined by experts manually, which heavily relies on their expertise; (ii) The evaluation of satisfaction is often completed by a small number of subjects, leading to potential bias; (iii) Customer requirements are directly given by experts, customers, or designers, or summarized by scholars, without a data acquisition process.
3.4. TRIZ
TRIZ is an approach that helps generate inventive solutions by identifying and resolving contradictions. It can be utilized for both idea generation and technical analysis tasks. Genrich Altshuller, a Soviet inventor, and his colleagues began developing TRIZ in 1946 [81]. By analyzing over 400,000 invention patents, Altshuller developed the technical contradiction, the concept of ideality of a system, the contradiction matrix, 40 principles of invention (Table 1), and 39 engineering parameters (Table 2). In addition, Altshuller observed smart and creative individuals, discovered patterns in their thinking, and developed thinking tools and techniques to model this “talented thinking”.
The TRIZ methodology involves four main steps: (i) defining the specific problem; (ii) abstracting the problem to a more general level; (iii) mapping potential solutions to address the general problem; and (iv) concretizing the general solution to fit the specific problem. For instance, Wang [2] employed principles 1 (segmentation), 5 (combining), and 28 (replacement of a mechanical system) from Table 1 to design phone cameras. Yamashina et al. [15] combined QFD and TRIZ to perform washing machine design. Additionally, Ai et al. [82] designed low-carbon products by considering both technical system and human use, and they used TRIZ to identify measures for improving energy efficiency.
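Steps (ii) and (iii) can be sketched as a lookup against the contradiction matrix. The principle names below follow those quoted above from Table 1, but the single matrix entry is a placeholder for illustration only, not a real value from Altshuller’s matrix.

```python
# Inventive principles mentioned in the text (numbers from Table 1).
PRINCIPLES = {1: "segmentation", 5: "combining",
              28: "replacement of a mechanical system"}

# Contradiction matrix keyed by (improving, worsening) engineering
# parameter numbers. The single entry below is a PLACEHOLDER for
# illustration only, not a real value from Altshuller's matrix.
MATRIX = {(1, 2): [1, 5, 28]}

def suggest_principles(improving, worsening):
    """Steps (ii)-(iii): abstract the specific problem into a pair of
    engineering parameters, then map it to candidate inventive
    principles; step (iv), concretizing, is left to the designer."""
    numbers = MATRIX.get((improving, worsening), [])
    return [PRINCIPLES.get(n, f"principle {n}") for n in numbers]

ideas = suggest_principles(1, 2)
```

The designer then concretizes each suggested principle back into the specific domain, as in the phone camera example above.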
3.5. Summary of Limitations
Among the traditional methods mentioned above, QFD focuses on constructing the HoQ to map customer requirements to product parameters. The KANO model mainly emphasizes the classification and rating of customer requirements. Both QFD and KANO adopt the pre-given requirements and lack research on acquisition. Kansei engineering is a comprehensive method that involves the acquisition, expression, and mapping of customer requirements. However, Kansei engineering still has limitations in the data collection process. To capture customers’ affective responses towards different products, various traditional methods are widely used, such as user interviews, questionnaires, focus groups, experiments, etc. [10,11,41,55]. We have summarized their advantages and disadvantages in Table 3.
Overall, traditional product design methods suffer from several significant disadvantages. (i) It is difficult to capture accurate customer requirements due to the increasing diversification and complexity of customer needs, making it challenging for enterprises to determine product positioning; (ii) Data acquisition has various shortcomings: it is manual, time-consuming, labor-intensive, hard to update, limited in scope, and subject to time and place, resulting in small sample sizes, poor real-time performance, limited quality and quantity of data, and data that quickly become outdated; (iii) Survey results are susceptible to time and environment, and there may be differences between subjects and real users, which can undermine the validity of the data; (iv) Heavy reliance on experts can increase workload and prolong the product development cycle, while experts’ varying backgrounds and experience introduce uncertainty and subjectivity into designs; (v) Traditional methods lack intuitive means of inspiring designers, in which visualization plays a crucial role; (vi) The absence of visual displays in early design schemes can make it difficult for enterprises to steer the product development direction, as decision-makers must rely on imagination alone. This increases the probability of failure, wasting resources and potentially leading to enterprise bankruptcy. These deficiencies have significantly hindered the development of product design.
The advent of the big data era has brought forth innovative ideas and technologies that can overcome the shortcomings of traditional product design methods and enhance innovation capabilities. By leveraging big data and AI algorithms, we can reduce subjectivity, expand the scope of surveys, and automate data processing (including data acquisition, updating, and analysis) to accurately acquire user requirements and present them in an intuitive visual way. To illustrate this point, consider product evaluation as an example. We can collect customer reviews from e-commerce platforms, social media, and review websites worldwide using web crawlers. With the help of NLP technology, we can automatically extract information such as product attributes, opinion words, and sentiment orientations from customer evaluations of products. Furthermore, as customer evaluations are dynamic, we can easily obtain real-time evaluations by simply adding a small amount of update code. In the next section, we delve deeper into the new generation of data- and AI-driven product design methods, which are the primary focus of this article.
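A minimal sketch of the last step of this example, classifying the sentiment of collected reviews, is given below; the tiny lexicon and reviews are illustrative assumptions, and production systems would use trained NLP models or full sentiment lexicons instead.

```python
# A tiny hand-built sentiment lexicon (illustrative only).
POSITIVE = {"great", "durable", "sharp", "love"}
NEGATIVE = {"poor", "expensive", "slow", "heavy"}

def review_sentiment(review):
    """Classify one review as positive, negative, or neutral by
    counting lexicon hits -- the simplest form of sentiment analysis."""
    words = review.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Reviews as a crawler might collect them (invented examples).
reviews = [
    "The battery is durable and the screen is great.",
    "Poor camera, and the phone is heavy.",
]
labels = [review_sentiment(r) for r in reviews]
```

Re-running the same pipeline on newly crawled reviews is what makes the evaluation real-time, in contrast to one-off questionnaires.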
4. Product Design Based on Big Data and AI
4.1. Product Design Based on Structured Data
Big data in the product lifecycle can be categorized into structured, semi-structured, and unstructured [30]. Structured data separate the structure and content, whereas semi-structured data mix them. Unstructured data, such as audio, video, images, text, and location, have no fixed structure.
Structured data refer to information that has been formatted and transformed into a predefined data model, which provides regularity and strict formatting. However, structured data can suffer from poor scalability and limited flexibility. Semi-structured data can be considered an extension of structured data that offers greater flexibility and extensibility. For this reason, in this paper, we treat structured and semi-structured data together and introduce their applications in design jointly.
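The difference can be illustrated with a toy example: a structured record obeys one fixed schema, while a semi-structured record (here JSON) carries its own, possibly nested, structure; the field names and values are invented for illustration.

```python
import json

# Structured: every record follows one predefined schema.
structured_row = {"product_id": 101, "price": 499.0, "weight_g": 180}

# Semi-structured: each record carries its own structure, so fields
# can vary between records without changing a schema.
semi_structured = json.loads("""
{"product_id": 102,
 "price": 529.0,
 "reviews": [{"stars": 5, "text": "great camera"}],
 "colors": ["black", "blue"]}
""")

# The extra nested fields may simply be absent from other records.
has_reviews = "reviews" in semi_structured
```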
To address the limitations of traditional questionnaire surveys, including small survey scope, difficulty of updating, and time-consuming, labor-intensive data collection, Li et al. [83] proposed a machine learning-based affective design dynamic mapping approach (MLADM). The approach involves collecting Kansei words from the literature, manually clustering them, obtaining product features and images from shopping websites, and generating online questionnaires. Four machine learning algorithms are used to construct the relationship model. However, although MLADM can predict users’ feelings about a product, it still relies heavily on experts, and the online questionnaire data are highly subjective. To overcome these limitations, some studies explore objective data for design knowledge. For instance, Jiao et al. [84] established a database to record affective needs, design elements, and Kansei words from past sales records and previous product specifications. They applied association rule mining to construct the relationship model and used it as the product design inference pattern.
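Association rule mining of this kind rests on the support and confidence of candidate rules, which a short sketch can compute directly; the transaction records linking design elements to Kansei words below are invented for illustration, in the spirit of mining past sales data.

```python
# Toy transaction database linking design elements to Kansei words
# (records are illustrative).
records = [
    {"rounded edges", "pastel color", "soft"},
    {"rounded edges", "soft"},
    {"sharp edges", "metallic color", "modern"},
    {"rounded edges", "pastel color", "soft"},
]

def rule_metrics(records, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent.

    support    = P(antecedent and consequent)
    confidence = P(consequent | antecedent)
    """
    n = len(records)
    both = sum(1 for r in records if antecedent <= r and consequent <= r)
    ante = sum(1 for r in records if antecedent <= r)
    return both / n, (both / ante if ante else 0.0)

support, confidence = rule_metrics(records, {"rounded edges"}, {"soft"})
```

Rules whose support and confidence exceed chosen thresholds (e.g., “rounded edges → soft”) become the design inference patterns.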
Limited by the openness of data sources, it is not easy for designers to obtain structured data, which can impede their product design research. In contrast, unstructured data hold advantages in volume and accessibility, accounting for nearly 95% of big data [85]. Despite their amorphous and complex nature, unstructured data are of great interest to scholars because of their implicit information and value. In Sections 4.2, 4.3, 4.4 and 4.5, we will, respectively, introduce the application of textual, image, audio, and video data in product design, as well as the AI algorithms involved.
4.2. Product Design Based on Textual Data
Text exists in every phase of a product’s lifecycle. Among textual data, user experience feedback after purchase is particularly valuable for product design. In recent years, more and more users have shared their opinions on Twitter, Facebook, microblogs, blogs, and e-commerce websites [86,87,88,89,90,91,92]. As online reviews on e-commerce websites are provided by consumers who have purchased the product, they are considered highly reliable and authentic [93]. Research shows that 90% of customers use online reviews as a reference, 81% find them helpful for purchase decisions, and 45% consult them even while shopping in physical stores [94,95]. Consequently, online reviews have become an important factor influencing consumer behavior and a reference for enterprises seeking to improve their products [22,96].
Online reviews offer many benefits, including understanding customer satisfaction [97,98,99,100,101], capturing user requirements [102,103,104], finding product deficiencies [105,106,107], proposing improvement strategies [65], comparing competitive products [108], and providing product recommendations [109,110,111,112,113,114]. For instance, in order to identify innovation sentences from online reviews, Zhang et al. [4] proposed a deep learning-based approach. Jin et al. [22,108,115,116] focused on filtering out helpful reviews. Wang et al. [117] presented a heuristic deep learning method to extract opinions and classified them into seven pairs of affective attributes, namely “like-dislike”, “aesthetic-inaesthetic”, “soft-hard”, “small-big”, “useful-useless”, “reliable-unreliable”, and “recommended-not recommended”. Kumar et al. [118] combined reviews and electroencephalogram signals to predict product ratings, and Xiao et al. [94] proposed a marginal effect-based KANO model (MEKM) to categorize customer requirements. Simon et al. [119] explored product customization based on online reviews. Figure 8 summarizes the typical processing flow of textual data.
The rapid development of NLP is the key to product design based on textual data; the relevant techniques include topic extraction, opinion mining, text classification, sentiment analysis, and text clustering.
Product attributes extraction. In the field of product design, topic extraction can be used to extract product attributes (e.g., the screen, battery, weight, and camera of a smartphone) and opinions [120]. As each product has multiple attributes, and consumers have varying preferences and evaluations for each attribute, extracting attributes from online reviews becomes essential [110]. Typically, product attributes are nouns or noun phrases in online review sentences [121]. To extract product attributes, most studies utilize part-of-speech (POS) tagging to label whether each word is a noun, adjective, adverb, etc., and consider all nouns and noun phrases as attribute candidates. These candidates are then pruned using techniques such as term frequency (TF) [116,122], TF-IDF [123,124], dictionaries [125], manual definition [99], and clustering [96,126]. Moreover, since product attributes are domain-sensitive, some studies treat extraction as a domain-specific entity recognition problem. For instance, Putthividhya and Hu [127] used named entity recognition (NER) to extract product attributes.
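A minimal sketch of this noun-candidate extraction with TF pruning is shown below; it assumes POS tagging has been done upstream (e.g., by an off-the-shelf tagger producing Penn-style tags), and the tagged reviews are illustrative.

```python
from collections import Counter

def extract_attributes(tagged_reviews, min_freq=2):
    """Collect noun candidates from POS-tagged reviews and prune
    them by term frequency, as in the TF-based pruning step.

    tagged_reviews: list of sentences, each a list of (word, tag)
    pairs with Penn-style tags (tagging assumed done upstream).
    """
    counts = Counter(
        word.lower()
        for sentence in tagged_reviews
        for word, tag in sentence
        if tag.startswith("NN")  # NN, NNS, NNP, ... are noun tags
    )
    return {w for w, c in counts.items() if c >= min_freq}

# Illustrative pre-tagged reviews of a smartphone.
reviews = [
    [("the", "DT"), ("battery", "NN"), ("lasts", "VBZ"), ("long", "RB")],
    [("great", "JJ"), ("battery", "NN"), ("and", "CC"), ("screen", "NN")],
    [("the", "DT"), ("screen", "NN"), ("cracked", "VBD")],
]
attributes = extract_attributes(reviews)
```

Nouns mentioned only once fall below `min_freq` and are pruned, leaving the attributes reviewers repeatedly discuss.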
The methods discussed above are limited to extracting explicit attributes and are not suitable for implicit attributes. The distinction between explicit and implicit attributes lies in whether they are explicitly mentioned. For example, in the sentence “the laptop battery is durable, and the laptop is expensive”, “battery” is explicitly mentioned and is thus an explicit attribute, while “expensive” relates to price, an implicit attribute. Explicit attributes can be easily identified using rule-based or machine learning-based methods, whereas implicit attributes require more sophisticated techniques [126]. Various unsupervised, semi-supervised, and supervised methods have been developed for implicit attribute extraction [128]. For instance, Xu et al. [129] used an implicit topic model that incorporated pre-existing knowledge to select training attributes and developed an implicit attribute classifier based on SVM. To overcome the lack of a training corpus, they annotated a large number of online reviews. Meanwhile, Kang et al. [130] proposed an unsupervised rule-based method that can extract both subjective and objective features, including implicit attributes, from customer reviews. Additionally, employing synonym dictionaries is a viable method. For example, Jin et al. [108] combined WordNet and manually defined synonyms to extract attributes of mobile phones. WordNet is an English lexical database that organizes words based on synonyms and antonyms. By analyzing online comments, designers can gain a more detailed understanding of users’ attention to each product attribute at a finer granularity, enabling a more precise analysis.
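The synonym dictionary route can be sketched as a simple lookup; the indicator-word mapping below is hand-built for illustration and is not the WordNet-based dictionary of Jin et al. [108].

```python
# Hand-built mapping from indicator words to implicit attributes
# (entries are illustrative).
IMPLICIT_MAP = {
    "expensive": "price", "cheap": "price", "pricey": "price",
    "heavy": "weight", "light": "weight",
}

def find_attributes(sentence, explicit_attrs):
    """Return explicit attributes mentioned directly plus implicit
    attributes signalled by indicator words."""
    words = sentence.lower().replace(",", " ").split()
    explicit = {w for w in words if w in explicit_attrs}
    implicit = {IMPLICIT_MAP[w] for w in words if w in IMPLICIT_MAP}
    return explicit, implicit

# The running example from the text: "battery" is explicit,
# "expensive" signals the implicit attribute "price".
explicit, implicit = find_attributes(
    "laptop battery is durable, the laptop is expensive",
    explicit_attrs={"battery", "screen"},
)
```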
Opinion mining. In product design, opinion mining aims to extract descriptive words that express subjective opinions, commonly known as opinion words. There are two primary approaches to this task: co-occurrence analysis and syntactic analysis. The co-occurrence approach identifies opinion words by analyzing the adjectives that appear in proximity to product attributes, while the syntactic approach relies on analyzing the structure and dependency relations of review sentences. For example, Hu and Liu [131] categorized product attributes as either frequent or infrequent: the former were identified through POS tagging and association mining, with nearby adjectives deemed opinion words, and the latter were identified through a reverse search starting from the opinion words.
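The co-occurrence approach can be sketched as follows, pairing each known attribute with adjectives inside a small token window. The attribute set, hand-assigned tags, and window size are illustrative assumptions.

```python
# A known attribute set, e.g. produced by the extraction step described earlier.
ATTRIBUTES = {"battery", "screen"}

def nearby_opinions(tagged_sentence, window=2):
    """Pair each attribute with adjectives at most `window` tokens away."""
    pairs = []
    for i, (word, tag) in enumerate(tagged_sentence):
        if word in ATTRIBUTES:
            lo = max(0, i - window)
            hi = min(len(tagged_sentence), i + window + 1)
            for w, t in tagged_sentence[lo:hi]:
                if t == "JJ":  # adjective near the attribute → opinion word
                    pairs.append((word, w))
    return pairs

sent = [("the", "DT"), ("battery", "NN"), ("is", "VBZ"), ("durable", "JJ")]
print(nearby_opinions(sent))  # → [("battery", "durable")]
```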
In addition to the two types of approaches discussed above, topic modeling has also shown promising results in opinion word extraction. For instance, Bi et al. [97] used latent Dirichlet allocation (LDA) to extract customer satisfaction dimensions from online reviews and proposed the effect-based KANO model (EKM). LDA is a topic model that can group synonyms into the same topic and obtain its probability distribution. Wang et al. [132] applied the long short-term memory (LSTM) model to extract opinion words from raw online reviews and mapped customer opinions to design parameters through deep learning. However, it is worth noting that topic models ignore fine-grained aspects.
Several scholars have studied the expression of opinion words, since different reviewers may share the same opinions but use different words to express them. In our previous research [133], we clustered similar opinion words based on word2vec. Additionally, Wang et al. [134] restructured raw sentences according to grammatical rules to extract attribute–opinion pairs from online reviews. However, because different languages follow different sentence structure rules, this approach is hard to extend to other languages.
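Clustering synonymous opinion words by embedding similarity, as in [133], can be sketched like this. The 3-dimensional “embeddings” below are fabricated for illustration; real word2vec vectors would be learned from a review corpus, and the similarity threshold is an arbitrary choice.

```python
import math

# Fabricated toy "embeddings"; real word2vec vectors are typically 100-300-d.
vectors = {
    "great":     [0.9, 0.1, 0.0],
    "excellent": [0.85, 0.15, 0.05],
    "heavy":     [0.0, 0.9, 0.2],
    "bulky":     [0.05, 0.85, 0.25],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster(vectors, threshold=0.95):
    """Greedy single-pass clustering: join a word to the first cluster whose
    representative vector is within the cosine-similarity threshold."""
    clusters = []  # list of (representative_vector, [words])
    for word, vec in vectors.items():
        for rep, members in clusters:
            if cosine(rep, vec) >= threshold:
                members.append(word)
                break
        else:
            clusters.append((vec, [word]))
    return [members for _, members in clusters]

print(cluster(vectors))  # → [["great", "excellent"], ["heavy", "bulky"]]
```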
Sentiment analysis. In product design, sentiment analysis plays a crucial role in identifying customers’ emotional attitudes towards a product and classifying them as positive, negative, or neutral. Sentiment analysis can be conducted at three different levels [135,136,137]: (i) document-level, which is a coarse-grained analysis of all reviews in a document; (ii) sentence-level, which is a medium-grained analysis of individual sentences; and (iii) aspect (attribute)-level, which is a fine-grained analysis of specific product attributes.
In addition, sentiment analysis methods fall into three categories: machine learning methods, lexicon-based methods, and hybrid methods [117,118,138,139]. Machine learning methods regard sentiment analysis as a classification problem, using sentiment polarities as labels and employing various techniques, such as the Recurrent Neural Network (RNN) [4], support vector machines [140,141], conditional random field (CRF) [135], and neural networks [136], to construct classifiers with text features. In contrast, lexicon-based methods identify sentiment orientations by referring to pre-defined lexicons like LIWC, HowNet, and WordNet [141,142]. These lexicons contain sentiment-related terms and their corresponding polarity. However, the quality of these lexicons is critical, and scholars are working towards improving their coverage, domain adaptation, and continuous updating. For instance, Cho et al. [143] constructed a comprehensive lexicon by merging ten lexicons. Araque et al. [144] proposed a sentiment classification model that uses semantic similarity measures and embedding representations, rather than keyword matching, to compute the semantic distance between input words and lexicon words. Marquez et al. [145] analyzed several lexicons and how they complement each other.
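A minimal lexicon-based scorer might look as follows. The tiny polarity lexicon and negation list are illustrative stand-ins for real resources such as LIWC or HowNet, and the negation handling is deliberately simplistic.

```python
# Toy polarity lexicon and negator list, stand-ins for real lexical resources.
LEXICON = {"durable": 1, "great": 1, "excellent": 1,
           "expensive": -1, "heavy": -1, "poor": -1}
NEGATORS = {"not", "no", "never"}

def sentiment_score(tokens):
    """Sum word polarities, flipping polarity when the preceding token negates."""
    score = 0
    for i, tok in enumerate(tokens):
        polarity = LEXICON.get(tok, 0)
        if polarity and i > 0 and tokens[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(sentiment_score("the battery is durable".split()))   # → 1
print(sentiment_score("the screen is not great".split()))  # → -1
```

A hybrid method, as in Dang et al. [147], would feed such lexicon scores into a machine learning classifier as additional features.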
Lexicon collection can be divided into three categories: manual, dictionary-based, and corpus-based. The manual method is time-consuming and labor-intensive. The dictionary-based method uses a set of opinion words as seeds to search in existing dictionaries. The corpus-based method expands lexicons based on information from the corpus, resulting in domain-specific lexicons [146].
Machine learning methods offer higher accuracy, while lexicon-based methods are more general. Hybrid methods combine both. For instance, Dang et al. [147] proposed a lexicon-enhanced approach for sentiment classification that combines machine learning and semantic-orientation methods. Their findings indicate that the hybrid method significantly improves sentiment classification performance.
By reviewing the literature related to product design based on textual data, we have found that it is an important research topic that has been addressed by many scholars in recent years. NLP technology is used to extract product attributes and sentiment features from text, which helps to understand user requirements, evaluate products, and grasp product development trends. While requirements and evaluations are the core foundation, the rest of the design process is equally crucial. Unfortunately, most existing research has overlooked this point, resulting in designs that are incremental improvements rather than original innovations. In other words, product design based on text still has vast potential for development. Undoubtedly, the emergence of textual data has great significance for product design. Compared to traditional methods, textual data provides richer information of higher quality and can be acquired and processed much more quickly.
4.3. Product Design Based on Image Data
Product images serve as intuitive representations of products that convey essential information on color, texture, and shape, playing a crucial role in the design process. Images are an example of “what you see is what you get”, as our eyes can easily interpret the information they contain, which constitutes a significant advantage. As image data keep growing, they have evolved into a vital information carrier alongside textual data. In recent years, CNN has made significant breakthroughs in image recognition, classification, and segmentation [148,149,150]. Its powerful ability to learn robust features has attracted attention across various fields. Figure 9 shows the structure of CNN. In product design, image data play two key roles: inspiration and generation. The former inspires designers with existing product images, while the latter directly generates new product images based on large numbers of existing images.
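At the heart of CNN (Figure 9) is the convolution operation. The sketch below implements valid-mode 2-D cross-correlation (the form deep-learning frameworks actually compute) in pure Python, applied to a toy image with a hand-made vertical-edge kernel; real CNN layers learn many such kernels from data.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer
    (deep-learning frameworks implement 'convolution' this way, without
    flipping the kernel)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge kernel responds strongly at the boundary of this toy image.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
print(conv2d(image, kernel))  # → [[0.0, -2.0, 0.0], [0.0, -2.0, 0.0]]
```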
Spark creative inspiration. Existing product images can help inspire designers to come up with new ideas and initial design schemes more efficiently [151,152]. To achieve this goal, image retrieval, matching, and recommendation play crucial roles [3,150,153,154]. Designers and users can express their requirements through images and text and search for related product images in databases or on e-commerce websites. The matched images can be recommended as design references. The retrieval input can be text, images, or both [155,156,157,158].
However, product retrieval is more complicated than simple image retrieval due to the different shooting angles, conditions, backgrounds, or postures of the images [159,160,161,162,163]. For instance, clothing images taken on the street or in a store with a phone may differ from those in databases and on e-commerce websites. Liu et al. [164] used a human detector to locate 30 human parts and then utilized a sparsely coded transfer matrix to establish a mapping between the two distributions so that the domain gap does not compromise retrieval quality. Free-hand sketches are even more abstract. Yu et al. [165] introduced two new datasets with dense annotation and built a deep network trained with triplet annotations to enable retrieval across the sketch/image gap. Ullah et al. [166] used a 16-layer CNN model (VGG16) to extract product features from images and measured their similarities using the Euclidean distance.
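The distance-based retrieval step used by Ullah et al. [166] can be sketched as follows. The item names and 4-dimensional feature vectors are fabricated stand-ins for real VGG16 features, which would be thousands of dimensions long.

```python
import math

# Fabricated feature "database"; real entries would be deep CNN features.
database = {
    "sneaker_a": [0.9, 0.1, 0.3, 0.0],
    "sneaker_b": [0.8, 0.2, 0.25, 0.1],
    "boot_c":    [0.1, 0.9, 0.7, 0.6],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query, database, top_k=2):
    """Rank database items by Euclidean distance to the query features."""
    ranked = sorted(database, key=lambda name: euclidean(query, database[name]))
    return ranked[:top_k]

query = [0.85, 0.15, 0.28, 0.05]  # features of the query image
print(retrieve(query, database))  # → ["sneaker_a", "sneaker_b"]
```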
In addition, some studies have explored the use of both image and text features for product matching and categorization. For instance, Ristoski et al. [125] used CNN to extract image features and match them with text embedding features to improve product matching and categorization performance. Liang et al. [167] proposed a joint image segmentation and labeling framework to retrieve clothing. They grouped superpixels into regions, trained an E-SVM classifier using confident foreground regions, and propagated segmentations by applying the E-SVM template to the entire image.
In addition to serving as a reference, image data can be leveraged to track fashion trends [168,169], identify user preferences [168,170], provide product matching recommendations [171,172,173,174], and prevent infringement [151]. For instance, Hu et al. [169] established a furniture visual classification model that contains 16 styles, such as the Gothic, Modernist, and Rococo styles. The model combines image features extracted by CNN with handcrafted features and can help users understand their preferred styles. Most products do not stand alone, and their compatibility with other products must be considered during the design process. To address this, Aggarwal et al. [175] used Siamese networks to assess the style compatibility between pairs of furniture images. They also proposed a joint visual-text embedding model for recommendation based on furniture type, color, and material. Laura et al. [176] built a graph neural network (GNN) model to account for the interactions of multiple items, instead of just pairwise compatibility. Their GNN model incorporated a deep CNN to extract image features and enabled evaluation of the compatibility of multiple furniture items based on style, color, material, and overall appearance. Moreover, they applied the GNN model to solve the fill-in-the-blank task, for example, recommending the most suitable bed from multiple alternatives based on a given desk, cabinet, chair, and mirror.
Generate new product images. In the field of product design, two popular models are used to generate new product images: GAN and neural style transfer. Both models take images as input and produce output images. GAN is an unsupervised model that consists of two neural networks, namely the generator and the discriminator [177]. The generator creates images, while the discriminator evaluates the generated images against real images. Both networks are trained simultaneously and improve through competition. Figure 10 shows the structure of GAN. We summarize the contributions of GAN to product design as follows: scheme generation [151,178,179,180], text-to-image synthesis [181], generative transformation [182], collocation generation [183,184,185], sketch acquisition [152,186], colorization [187,188,189,190], and virtual display [28,191,192]. Some examples of new product generation based on image data are shown in Figure 11. In Figure 11a, a new handbag image is generated from a shoe image. In Figure 11b, an edge image is the input, new shoes are generated in the second and fourth columns, and the rest are ground truth. In Figure 11c, a product image and a style image are input, and a new product image is generated.
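The competition between the two networks is usually formalized as the minimax objective of [177], in which the discriminator D maximizes its ability to distinguish real samples x from generated samples G(z), while the generator G minimizes it:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

Here z is a noise vector sampled from a prior p_z; at the game's equilibrium, the generator's distribution matches the data distribution.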
The automatic generation of product schemes is made possible by using a large number of existing product images. GAN can learn the distribution of product features from these images and generate new designs through the gradual optimization of the generator by the discriminator. For example, Li et al. [151] used GAN to create new smartwatch schemes by training their model with 5459 smartwatch images. They also compared the results of GAN with various extensions, including deep convolution GAN (DCGAN), least-squares GAN (LSGAN), and Wasserstein GAN (WGAN). In another example, to produce collocation clothing images, Liu et al. [194] proposed Attribute-GAN, which comprises a generator and two discriminators. However, GAN generates only one view of the product, which is insufficient for design purposes. To address this limitation, Chen et al. [191] proposed the conditional variational GAN (CVGAN), which can synthesize arbitrary product views. This is useful for virtual display, especially for designs with high shape and appearance standards, such as clothing design, where it is necessary to show the try-on effect. Moreover, GAN and its extended models are also applicable to layout generation, such as interiors, websites, and advertisements [6,195,196].
Text-to-image synthesis is a promising technique for product design, although current research in this field is scarce. One notable study by Kenan et al. [181] proposed an enhanced attentional GAN (e-AttnGAN) for generating fashion images from text descriptions, such as “long sleeve shirt with red check pattern”. Most existing studies aim to improve the synthesis technology [197,198,199,200], and the potential of text-to-image synthesis has yet to be fully realized by designers. It is worth noting that numerous commercial tools have been developed for text-to-image synthesis, including Midjourney, DALL-E2, Imagen, Stable Diffusion, Novel AI, Mimic, Dream by WOMBO, wenxinyige, Tiamat, and 6pen art. These platforms allow users to express their requirements in text and have the desired image generated automatically. To demonstrate the effectiveness of this technique, we also made some attempts, and the resulting product images are displayed in Figure 12. We generated plates, vases, mugs, butterfly necklaces, crystal balls, and phone cases with wenxinyige. We believe that text-to-image will become an important direction for product design in the future. Users will only need to type out their requirements, without having to master complex skills such as hand-drawing, sketching, or 3D modeling. In addition to text-to-image, progress has also been made in text-to-3D [201,202]. For instance, Wang et al. [203] proposed Variational Score Distillation (VSD), which models 3D parameters as a probability distribution and optimizes the distance between the distribution of rendered 2D images and the distribution of a pre-trained 2D diffusion model. This approach generates high-quality textured 3D meshes based on the given text.
Generative transformation and colorization can be seen as a type of image-to-image problem, where an input image is transformed into a modified output image [190,204,205]. Generative transformation aims to convert one product image into another. This process generates a sequence of intermediate images that may be used as new design ideas. Zhu et al. [206,207,208] developed a product design assistance system that leverages GAN to achieve three applications: (i) change product shape and color by manipulating an underlying generative model; (ii) generate new images from user scribbles; and (iii) perform generative transformation of one product picture into another. For example, a short black boot can be converted into a long brown boot, with intermediate shoe images displayed. Nikolay et al. [192] proposed the conditional analogy GAN (CAGAN) to swap clothing on models automatically. Pan et al. [209] proposed DragGAN to control the pose, expression, and layout of products in the image.
Since the sketch already contains the structural and functional features of the product, once it is colored, a preliminary design scheme is completed. Compared to other product features, color has an intuitive influence, and users have different preferences for it. GAN can automatically color the sketch and quickly generate multiple alternatives. For example, Liu et al. [190] established an end-to-end GAN model for ethnic costume sketch colorization, demonstrating its excellent ability to learn the color rules of ethnic costumes. Additionally, Sreedhar et al. [180] proposed a car design system based on GAN that supports single-color, dual-color, and multiple-color coloring from a single sketch.
GAN and its extended models have made significant contributions to product design, but the generated images often suffer from blurriness, lack of detail, and low quality. To address these problems, Lang et al. [184] proposed Design-GAN, which introduces a texture similarity constraint mechanism. Similarly, Oh et al. [5] combined GAN with topology optimization to enhance the quality of two-dimensional wheel images. By dividing the wheel image into a non-design domain, a pre-defined domain, and a design domain for topology optimization, they achieved results that were significantly different from the initial design.
Neural style transfer is a deep generative model that enables the generation of images by separating and reconstructing their content and style. Nevertheless, neural style transfer is not simply a matter of overlapping a content image with a style image. Its implementation relies on the features learned by CNN. Figure 13 shows the idea of neural style transfer.
In 2015, Gatys et al. [210] found that the image style and content could be separated in CNN and manipulated independently. Building upon this finding, they proposed a neural style transfer algorithm to transfer the style of famous artworks. Later, Huang and Serge [211] introduced an adaptive instance normalization (AdaIN) layer, which aligns the mean and variance of content features with style features to realize arbitrary style transfer. Inspired by their method, our previous research [212] combined Kansei engineering with neural style transfer to generate product design schemes (Figure 11c), resulting in enhanced semantics of the generated products. Additionally, we developed a real-time design system that allowed users to input images and receive product schemes as output [193]. In a similar vein, Wu et al. [213] combined GAN with neural style transfer to generate fashionable Dunhuang clothes, using GAN to generate clothing shapes and neural style transfer to add Dunhuang elements. Neural style transfer is performed over the entire image, limiting its flexibility. By using masks, it is possible to transfer different styles to different parts of a product [211]. For example, using masks, three style images can be transferred to the collar, pocket, and sleeve of a coat, respectively. Overall, neural style transfer still offers tremendous untapped potential for product design.
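The AdaIN operation of Huang and Serge [211] is simple enough to sketch directly. Here each argument is a single flattened feature channel, a simplification of the per-channel statistics used on full CNN feature maps, and the toy values are fabricated for illustration.

```python
import math

def adain(content, style):
    """Adaptive instance normalization: rescale the content features so they
    take on the mean and standard deviation of the style features."""
    def stats(x):
        mu = sum(x) / len(x)
        var = sum((v - mu) ** 2 for v in x) / len(x)
        return mu, math.sqrt(var)

    mu_c, sigma_c = stats(content)
    mu_s, sigma_s = stats(style)
    return [sigma_s * (v - mu_c) / sigma_c + mu_s for v in content]

# Toy feature channels: the output inherits the style's statistics.
content = [1.0, 2.0, 3.0, 4.0]
style = [10.0, 10.0, 30.0, 30.0]   # mean 20, standard deviation 10
out = adain(content, style)
print(out)  # content structure preserved, now with mean 20 and std 10
```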
Although GAN and its extended models show some limitations in generating high-quality and controllable designs due to their strong randomness, recent work by Sohn et al. [214] has demonstrated that AI and GAN can improve customer satisfaction and product design freshness, indicating their potential in generating innovative and attractive design proposals. On the other hand, neural style transfer is a technique for generating product schemes with good image quality, strong interpretability, and controllability. While it may lack the innovation ability of GAN, it can be regarded as a combination of design elements such as product shape, texture, and color. Generation using these models promises to be a cost-effective way to produce product designs and support designers in creating design schemes quickly. However, further research is necessary to enhance the controllability and quality of GAN-based design generation and to explore the potential of combining these techniques to generate innovative and high-quality design schemes.
Generally, image data and AIGC have broken away from the old thinking of traditional product design. Despite the emergence of generative models in product design, the literature on this topic remains relatively sparse. Additionally, most of the existing literature primarily focuses on technical aspects and lacks product design knowledge [215,216]. Simply swapping in different training image data is not a viable solution for achieving design goals [217,218], as product design requires specific knowledge and considerations. Traditional product design methods and knowledge cannot be abandoned, but rather should be combined with big data and AI algorithms. This integration may well be the future direction of product design.
4.4. Product Design Based on Audio Data
Big audio data [219] refer to information in the form of sound or voice. The customer center is a valuable resource for collecting users’ complaints, inquiries, and suggestions throughout the product service cycle [220]. Audio feedback can be utilized to collect user requirements, provide recommendations, evaluate products, and improve product design. However, there is currently a lack of literature on product design based on big audio data. Therefore, we will explore several key AI technologies that could be employed in the future, such as speech recognition, speaker identification, and emotion recognition.
Some studies have opted to manually record telephone complaints as text to avoid direct processing of audio signals [221,222]. However, this approach heavily relies on the recorder and is a labor-intensive and time-consuming process. In comparison, speech recognition can accomplish this task automatically [223]. Speech recognition, also referred to as automatic speech recognition (ASR), computer speech recognition, and speech-to-text, involves converting human speech into computer-readable information, typically in the form of text, although it may also be binary codes or character sequences [224]. The process of speech recognition is shown in Figure 14. Speech recognition is widely used in mobile communication, search engines, and human–computer interaction, among other applications [225].
Speaker identification, also referred to as voiceprint recognition, is a technology that identifies individuals based on their speech. Each person’s voice has unique characteristics, which are determined by two factors: the size of the sound cavity and how the vocal organs are manipulated. Like fingerprints, voiceprints need to be collected and stored in a database prior to analysis. The spectrogram is generated by the amplitude of the short-time Fourier transform (STFT) of the audio signal (Figure 15) [226]. Voiceprint recognition is achieved by extracting speaker parameters, such as pitch frequency and formant, and using machine learning methods and AI algorithms. Voiceprint recognition has been widely applied in various fields, including biometric authentication, crime forensics, mobile payment, and social security. However, the variability of voiceprint, the differences in audio acquisition equipment, and environmental noise interference pose significant challenges for the voiceprint recognition task.
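The spectrogram computation described above can be sketched with a direct, unoptimized short-time Fourier transform. The frame length and hop size below are arbitrary toy values; practical systems use longer frames, an FFT, and real recorded audio.

```python
import cmath
import math

def spectrogram(signal, frame_len=8, hop=4):
    """Magnitude spectrogram via a Hann-windowed DFT of overlapping frames
    (a direct, unoptimized short-time Fourier transform)."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]  # Hann window
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + n] * window[n] for n in range(frame_len)]
        spectrum = []
        for k in range(frame_len // 2 + 1):  # keep non-negative frequencies
            s = sum(frame[n] * cmath.exp(-2j * math.pi * k * n / frame_len)
                    for n in range(frame_len))
            spectrum.append(abs(s))
        frames.append(spectrum)
    return frames  # time x frequency magnitude matrix

# A pure tone at 2 cycles per 8-sample frame concentrates its energy in bin 2.
tone = [math.sin(2 * math.pi * 2 * n / 8) for n in range(32)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(len(spec), peak_bin)  # → 7 2
```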
As a vital division of audio processing and emotion computing, speech emotion recognition aims to identify the emotions expressed by the speaker [227], including, but not limited to, anger, sadness, surprise, pleasure, and panic. This process can be regarded as a classification problem, and selecting the appropriate emotion feature is essential for its success [228,229]. We have summarized commonly used acoustic parameters in Table 4. Moreover, since spectrograms are images, CNN can be used to automatically learn features for speech recognition, speaker identification, and emotion recognition [230].
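Two of the classic frame-level acoustic parameters (of the kind summarized in Table 4) are easy to sketch: short-time energy and zero-crossing rate, a rough voicing/noisiness cue. The two “frames” below are fabricated toy signals.

```python
def short_time_energy(frame):
    """Mean squared amplitude of one analysis frame."""
    return sum(s * s for s in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

calm = [0.1, 0.12, 0.11, 0.1, 0.09, 0.1]        # low energy, no sign changes
agitated = [0.8, -0.7, 0.9, -0.8, 0.7, -0.9]    # high energy, many sign changes
print(short_time_energy(calm) < short_time_energy(agitated))  # → True
print(zero_crossing_rate(agitated))                           # → 1.0
```

In a full system, such parameters (alongside pitch, formants, MFCCs, etc.) would be fed to a classifier trained on labeled emotional speech.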
As AI products gain popularity, there has been increasing focus on audio signal processing algorithms, particularly in the areas of speech recognition and emotion recognition. These algorithms directly impact the user experience and play a crucial role in customer decision-making. Despite the enormous potential of audio data in product design, practical applications remain limited by processing algorithms and data quality, creating a substantial gap between theory and practical applications.
4.5. Product Design Based on Video Data
Videos showcasing the usage of products are also a valuable source of information for product design. They can be viewed repeatedly without restriction, making it easier to obtain hard-to-find but essential information. For instance, by watching cooking videos of homemakers, Nagamachi found that standing up while taking out food made them feel more comfortable than bending over. He also found that the refrigerator compartment was used more frequently than the freezer compartment. Based on these insights, he improved the traditional refrigerator structure by changing the upper layer to a refrigerator compartment and the lower layer to a freezer compartment [231], a design still in use today.
Although video data are useful for product design, manual viewing can be time-consuming and is only suitable for scenarios with small data volumes. In comparison, intelligent video analysis supports motion detection, video summarization, video retrieval, color detection, object detection, emotion recognition, and more [219,232,233,234]. This capability makes it possible to apply big video data to user requirement acquisition, user behavior observation, experience improvement, and product virtual display. Intelligent video analysis establishes a mapping relationship between images and their descriptions, allowing computers to understand video through digital image analysis.
Product detection. Compared to a single image, object detection in video has temporal context, which helps address redundancy between consecutive frames, motion blur, unfocused frames, partial occlusion, unusual postures, etc. Li et al. [235] proposed a novel method for annotating products in videos by identifying keyframes and extracting SIFT features to generate BOVW histograms, which were then compared with the visual signatures of products for annotation. Meanwhile, Zhang et al. [236] developed a framework for identifying clothes worn by celebrities in videos, which involved utilizing DCNN for tasks such as human body detection, human posture selection, human pose estimation, face verification, and clothing detection. Additionally, Zhang et al. [237] linked clothes worn by stars with online shops to provide clothing recommendations. To improve the matching results, Chen et al. [238] used an image feature network (IFN) and a video feature network (VFN) to generate deep visual features for shopping images and clothing trajectories in videos.
User behavior observation. Understanding user behavior is crucial in designing products and improving user experience. To explore user behaviors in VR spherical video streaming, Wu et al. [239] collected a head tracking dataset. Additionally, Babak et al. [240] evaluated product usability through video analysis. They performed temporal segmentation of video featuring human–product interaction, automatically identifying time segments where users encountered difficulties. They took water faucet design as an example and used optical flow for motion detection. Optical flow, temporal difference, and background subtraction are commonly used for motion detection in videos [241].
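The temporal-difference technique mentioned above can be sketched on toy frames: motion is flagged wherever the absolute pixel change between consecutive frames exceeds a threshold. The frames and threshold below are fabricated illustrative values; real systems operate on full video frames and usually add smoothing and noise suppression.

```python
def motion_mask(prev_frame, curr_frame, threshold=0.2):
    """Binary motion mask via frame differencing: 1 where the pixel changed
    by more than `threshold` between consecutive frames, else 0."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

frame_t0 = [[0.1, 0.1, 0.1],
            [0.1, 0.9, 0.1]]   # a bright "object" at position (1, 1)
frame_t1 = [[0.1, 0.1, 0.1],
            [0.1, 0.1, 0.9]]   # the object has moved to (1, 2)
print(motion_mask(frame_t0, frame_t1))  # → [[0, 0, 0], [0, 1, 1]]
```

Optical flow and background subtraction refine this idea by estimating per-pixel motion vectors or maintaining a model of the static scene, respectively.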
Product virtual display. Compared to images, video-based virtual displays demand higher spatiotemporal consistency, but they offer a more comprehensive display for customers. Various AI algorithms have been proposed to enhance the customer experience. For instance, Liu et al. [242] presented a parsing model to predict human poses in the video, while Dong et al. [243] proposed a flow-navigated warping GAN (FW-GAN) to generate try-on videos conditioned on person, clothing images, and a series of target poses. Interactive displays not only promote product understanding but also evoke enjoyment during the experience [244]. To this end, An et al. [245] designed a video composition system that displays products on mobile phones in an interactive and 3D-like manner. This system can automatically perform rotation direction estimation, video object segmentation, motion adjustment, and color adjustment.
Current intelligent video analysis technology has limitations in analyzing complex activities, as it can only identify simple ones. Structuralization is a vital obstacle to video analysis; it identifies features in the video through background modeling, video segmentation, and target tracking operations. In addition, complete video data acquisition requires the cooperation of multiple cameras, but achieving consistent installation conditions for each camera is difficult. It is also challenging to perform continuous and consistent visual analysis of moving targets across multiple videos. Moreover, existing motion detection technology is not yet mature, as it can only detect target entry, departure, appearance, disappearance, wandering, trailing, etc. The product use process is complicated, and the information currently extractable is of little help to product design. Despite video data holding great potential in product design, its development and application are restricted by processing technology and data quality.
4.6. Summary of Opportunities
Big data and AI algorithms hold enormous potential in modern product design. There are two types of data: structured and unstructured. Structured data offer strong pertinence and high value density, but they are limited in openness and volume. Obtaining large-scale structured data is challenging for individuals and even enterprises, as it requires a substantial workforce and resources for manual collection, sorting, summarizing, and storage in standard databases. On the other hand, unstructured data enjoy high openness and volume compared to structured data. They have great advantages in terms of veracity, velocity, value, and variety, all of which may strongly contribute to modern product design.
We have detailed the common data types in the product lifecycle, including text, image, audio, and video. Among these, text and image data are widely used in product design due to their ease of acquisition, mature processing algorithms, and high data quality. Textual data can overcome the limitations of traditional survey methods, such as small sample size, limited survey scope, and high labor intensity, while also guaranteeing data authenticity, reliability, and timeliness. Image data, on the other hand, are useful for surveys and excel in visual inspiration and scheme display. The color, texture, and shape information in images can inspire designers to create preliminary schemes. Furthermore, generative models can generate new product designs from image data with varying colors, structures, textures, etc. AIGC is a revolutionary advancement in product design, enabling what was previously unattainable through traditional methods.
Video data are also a visual type of information, with high value for user behavior observation and product virtual display. Compared to images, video can provide customers with a comprehensive understanding and allow them to interact with products virtually. However, both video and audio data are still in their early stages, with many technological and data quality challenges to overcome before they can be fully utilized in product design. Despite the development of technologies for audio and video processing, few researchers have employed them in product design. Nevertheless, these limited studies warrant discussion to motivate further research in the field.
Big data have brought about numerous benefits for product design; however, current research still has its limitations and requires further investigation. Two research directions are especially promising: (i) the fusion of different types of data. Existing research is confined to only one type of data (i.e., text, image, video, audio, or location), despite these data types co-existing at various stages of the product lifecycle. Multi-modal data fusion could enhance the persuasiveness and efficacy of the extracted information; (ii) the synergistic exploitation of big data techniques and design domain knowledge. While previous efforts have focused on technological breakthroughs, especially for data lacking advanced processing technologies (e.g., audio and video), domain knowledge in the field of product design should not be ignored. The knowledge accumulated in traditional product design is precious and helpful. Big data and traditional product design methods are not contradictory but complementary. Only by combining them can we better capture user requirements and improve the success rate of design. By addressing these research directions, we can further unlock the potential of big data and AI-driven product design, leading to more intelligent, personalized, and successful products across various industries.
5. Conclusions
In the era of the knowledge-driven economy, customer demands are more diversified and personalized, and the lifecycles of products are becoming shorter, especially in the usage phase. Therefore, successful product innovation now requires the assistance of interdisciplinary knowledge, the support of powerful techniques, and the guidance of innovation theories that break conventional wisdom.
Currently, big data are one of the most promising resources for promoting innovation throughout the whole product lifecycle. In this survey, we presented a comprehensive overview of existing studies on big data and AI-driven product design, aiming to help researchers and practitioners understand the latest developments and opportunities in this exciting field. Firstly, we visualized the product design process and introduced its key tasks. Secondly, we introduced several representative traditional product design methods, including their functionalities, applications, advantages, and disadvantages, and on this basis summarized seven common shortcomings of traditional methods. Thirdly, we illustrated how big data and related AI algorithms can help solve challenges in modern product design, especially in user requirements acquisition, product evaluation, and visual display. We offered a detailed analysis of the current and potential applications of AI techniques in product design utilizing textual, image, audio, and video data. For textual data, NLP techniques can extract product attributes and sentiment features, aiding in understanding user requirements, evaluating products, and grasping product development trends. For images, neural style transfer, CNN, and GAN and its extended models can spark creative inspiration and generate new product images. Audio can be used to capture user requirements, provide recommendations, evaluate products, and improve product design. Video can be used to observe user behavior and display products. Since audio data are still at an early stage, we focused on possible processing technologies and workflows that may be applied in the future. Finally, we summarized the deficiencies of existing data-driven product design studies and provided future research directions, especially for synergistic methods that combine big data-driven approaches and traditional product design methods.
Product design based on big data and AI is typically performed with vast amounts of real-world data. This approach provides unprecedented opportunities to harness the collective intelligence of consumers, suppliers, designers, enterprises, etc. With the aid of advanced big data processing and AI technologies, it becomes possible to capture user requirements more accurately, evaluate products, and create virtual displays, leading to higher success rates in developing competitive products while saving time, effort, and development costs. We hope this survey will bring greater attention to the role of big data and cutting-edge AI technologies in the modern product design field. By exploiting the power of deep learning-based NLP, speech recognition, and generative AI techniques (such as GAN) in product design, product innovation can reach an unprecedented level of intelligence and automation.
H.Q. and J.H. conceived the conception; H.Q. conducted literature collection and manuscript writing; J.H., S.L., C.Z. and H.W. revised and polished the manuscript. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Big data in the product lifecycle. The product lifecycle is the entire process from product development to scrapping, including design, manufacturing, transportation, sale, usage, repair, and recycling.
Figure 2. A framework for mining product-related information from big data. The framework includes data acquisition, pre-processing, analysis, and application. Data acquisition involves collecting data from various data sources; pre-processing aims to clean and standardize the acquired data; data analysis reveals hidden knowledge and information; application directly reflects the value of big data.
Figure 3. The research framework. Through Kansei engineering, QFD, and the KANO model, we revealed the shortcomings of traditional product design methods and showed that big data could improve them.
Figure 4. The product design process. The dotted line represents the affiliation and the solid line represents the product design process. The user research, idea generation, and product display (sketching and prototyping) are the key tasks in product design.
Figure 6. The KANO model for user requirements classification [56]. The horizontal axis indicates how fulfilled the requirement is and the vertical axis indicates how satisfied the user is.
Figure 7. The structure of HoQ [13]. HoQ consists of customer requirements, importance rating, engineering requirements, correlation matrix, relationship matrix, competitive benchmarking, customer perception, and technical matrix.
Figure 8. The typical processing flow of textual data. It includes four parts, namely data acquisition, data processing, information extraction, and application.
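The flow in Figure 8 can be sketched end-to-end with a toy lexicon-based pipeline: acquire reviews, preprocess (lowercase, tokenize), extract product attributes and sentiment, and aggregate per attribute. The lexicons and the sample reviews below are invented for illustration; production systems would use trained NLP models instead.

```python
import re
from collections import defaultdict

# Toy illustration of the text-mining flow: tiny hand-made attribute and
# sentiment lexicons stand in for trained extraction models.
ATTRIBUTES = {"battery", "screen", "camera"}
POSITIVE = {"great", "good", "sharp", "long"}
NEGATIVE = {"poor", "short", "blurry", "bad"}

def tokenize(text):
    """Preprocess: lowercase and split into word tokens."""
    return re.findall(r"[a-z]+", text.lower())

def aspect_sentiment(reviews):
    """Count positive/negative words per product attribute mentioned in each review."""
    tally = defaultdict(lambda: {"pos": 0, "neg": 0})
    for review in reviews:
        tokens = tokenize(review)
        aspects = ATTRIBUTES & set(tokens)
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        for a in aspects:
            tally[a]["pos"] += pos
            tally[a]["neg"] += neg
    return dict(tally)

reviews = ["The battery life is long and great", "Camera is blurry, screen is sharp"]
result = aspect_sentiment(reviews)
```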
Figure 9. The structure of CNN. CNN consists of the input layer, convolution layer, pooling layer, fully connected layer, and output layer. The input image will be converted into the pixel matrix in the input layer. The convolution layer involves some filters, and different filters get triggered by different features (e.g., semicircle, triangle, quadrilateral, red, and green) in the input image. Each filter will output a feature map, and the pooling layer will reduce its dimensionality. The fully connected layer and the output layer are the same as the artificial neural network.
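The convolution and pooling operations described in Figure 9 can be sketched in a few lines of pure Python: one filter slides over the pixel matrix to produce a feature map, and max pooling reduces the map's dimensionality. The 4 x 4 pixel matrix and the 2 x 2 filter below are illustrative.

```python
# Sketch of the two CNN operations named in the caption: a convolution layer
# (one filter producing a feature map) and a max-pooling layer.

def conv2d(image, kernel):
    """Valid 2-D convolution (no padding, stride 1), CNN cross-correlation convention."""
    kh, kw = len(kernel), len(kernel[0])
    out_h, out_w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def max_pool(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

image = [[1, 0, 0, 1],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [1, 0, 0, 1]]
edge_filter = [[1, -1], [-1, 1]]   # gets triggered by diagonal transitions
fmap = conv2d(image, edge_filter)  # 3 x 3 feature map
pooled = max_pool(fmap)            # pooling reduces it to 1 x 1
```

In a real CNN the filter weights are learned during training rather than hand-set, and many filters run in parallel, each yielding its own feature map.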
Figure 10. The structure of GAN. Setting x as the real product and z as the random noise, G(z) is the synthetic data generated by the generator G (in practice, an image can also serve as the input in place of random noise). Both x and G(z) are input to the discriminator D, which predicts whether the data are real or fake. If the prediction is correct, the error is propagated to G for improvement; otherwise, it is propagated to D for improvement. Eventually, G captures the statistical distribution of x, and G(z) can deceive D. G(z) is the generated design scheme that contains product features yet differs from the real product.
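The adversarial game described in the caption corresponds to the standard GAN minimax objective:

```latex
\min_{G}\max_{D} V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_{z}(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to maximize V (classifying real and synthetic data correctly), while G is trained to minimize it (fooling D); at equilibrium the distribution of G(z) matches that of x, as the caption states.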
Figure 11. The case of generating a new product based on image big data: (a) generative transformation [182]. (b) the edges-to-shoe translation [187]. (c) neural style transfer [193].
Figure 13. Neural style transfer. Setting z as the random noise, G(z) is the synthetic image generated by the generator (G). The pre-trained VGG 19 is used to calculate the style loss between G(z) and the style image (S) and the content loss between G(z) and the content image (C). The total loss, consisting of the style and content losses, is minimized to optimize G. Once the loss goal is reached, G(z) is output and marked as O. O preserves the content features of C and the style features of S.
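In the standard formulation, the total loss minimized in the caption is a weighted sum of the two terms:

```latex
\mathcal{L}_{\mathrm{total}} =
  \alpha\, \mathcal{L}_{\mathrm{content}}\big(G(z), C\big)
  + \beta\, \mathcal{L}_{\mathrm{style}}\big(G(z), S\big)
```

where the content loss compares VGG feature maps of G(z) and C directly, the style loss compares Gram matrices of the feature maps of G(z) and S, and the weights α and β trade off content fidelity against style strength.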
Figure 14. The process of speech recognition. The input speech is divided into frames, and feature vectors are extracted from each frame's waveform. The acoustic model then converts the features into phonemes, which are matched to words through the pronunciation dictionary. Finally, the language model resolves the ambiguity of homophones.
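The decoding step in the caption can be summarized by the standard noisy-channel equation:

```latex
\hat{W} = \arg\max_{W} P(W \mid O)
        = \arg\max_{W} P(O \mid W)\, P(W)
```

where O is the sequence of acoustic feature vectors, P(O | W) is supplied by the acoustic model together with the pronunciation dictionary, and the prior P(W) is supplied by the language model, which is what disambiguates homophones.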
Figure 15. The spectrogram. The horizontal axis represents time, and the vertical axis represents frequency. The amplitude of speech at each frequency point is distinguished by color.
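One column of the spectrogram in Figure 15 can be computed as sketched below: take a short frame of the waveform, apply a window, and measure the magnitude at each frequency bin. A naive discrete Fourier transform is used here to keep the example dependency-free; real systems use an FFT, and the frame length and test tone are illustrative.

```python
import math

# Sketch of spectrogram computation: frames give the time axis, DFT bins
# the frequency axis, and bin magnitudes the color in Figure 15.

def dft_magnitudes(frame):
    """Magnitude of each DFT bin for one windowed frame (naive O(n^2) DFT)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):  # bins up to the Nyquist frequency
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def spectrogram(signal, frame_len=64):
    """Split the signal into frames and transform each one."""
    hamming = [0.54 - 0.46 * math.cos(2 * math.pi * t / (frame_len - 1))
               for t in range(frame_len)]
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [dft_magnitudes([s * w for s, w in zip(f, hamming)]) for f in frames]

# A pure tone completing 8 cycles per 64-sample frame should peak in bin 8.
tone = [math.sin(2 * math.pi * 8 * t / 64) for t in range(128)]
spec = spectrogram(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

Speech systems typically use overlapping frames (e.g., a hop of half the frame length) and log-magnitude or mel-scaled bins, but the framing-window-transform structure is the same.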
The 40 inventive principles of TRIZ [
No. | Inventive Principle | No. | Inventive Principle |
---|---|---|---|
1 | Segmentation | 21 | Skipping |
2 | Taking out | 22 | Blessing in disguise |
3 | Local quality | 23 | Feedback |
4 | Asymmetry | 24 | Intermediary |
5 | Merging | 25 | Self-service |
6 | Universality | 26 | Copying |
7 | Nested doll | 27 | Cheap short-living objects |
8 | Anti-weight | 28 | Mechanics substitution |
9 | Preliminary anti-action | 29 | Pneumatics and hydraulics |
10 | Preliminary action | 30 | Flexible shells and thin films |
11 | Beforehand cushioning | 31 | Porous materials |
12 | Equipotentiality | 32 | Colour changes |
13 | The other way around | 33 | Homogeneity |
14 | Spheroidality | 34 | Discarding and recovering |
15 | Dynamics | 35 | Parameter changes |
16 | Partial or excessive actions | 36 | Phase transitions |
17 | Another dimension | 37 | Thermal expansion |
18 | Mechanical vibration | 38 | Strong oxidants |
19 | Periodic action | 39 | Inert atmosphere |
20 | Continuity of useful action | 40 | Composite materials |
The 39 engineering parameters [
No. | Engineering Parameter | No. | Engineering Parameter |
---|---|---|---|
1 | Weight of moving object | 21 | Power |
2 | Weight of nonmoving object | 22 | Waste of energy |
3 | Length of moving object | 23 | Waste of substance |
4 | Length of nonmoving object | 24 | Loss of information |
5 | Area of moving object | 25 | Waste of time |
6 | Area of nonmoving object | 26 | Amount of substance |
7 | Volume of moving object | 27 | Reliability |
8 | Volume of nonmoving object | 28 | Accuracy of measurement |
9 | Speed | 29 | Accuracy of manufacturing |
10 | Force | 30 | Harmful factors acting on object |
11 | Tension, pressure | 31 | Harmful side effects |
12 | Shape | 32 | Manufacturability |
13 | Stability of object | 33 | Convenience of use |
14 | Strength | 34 | Reparability |
15 | Durability of moving object | 35 | Adaptability |
16 | Durability of nonmoving object | 36 | Complexity of device |
17 | Temperature | 37 | Complexity of control |
18 | Brightness | 38 | Level of automation |
19 | Energy spent by moving object | 39 | Productivity |
20 | Energy spent by nonmoving object |
Comparison of user requirement mining methods.
Methods | Description | Advantages | Disadvantages |
---|---|---|---|
User interview | The interviewer talks directly with the subject | Detailed; easy to implement | Time-consuming; one-time; subjective; labor-intensive; small survey scope |
Questionnaire | Record subjects’ opinions on specific questions | Easy to implement | Subjective; one-time; time-consuming; labor-intensive; centralized in time and place; small survey scope |
Web-based questionnaire | Distribute questionnaires on the Internet | Decentralized in time and place; large survey scope | Subjective; one-time |
Focus-group | Observe the opinions and behaviors of a group on the subject | Real data; low cost; in-depth questions | Time-consuming; labor-intensive; one-time; complex; small survey scope |
Usability testing | Subjects test the product and give feedback | Detailed; high reliability | One-time; labor-intensive; small survey scope; time-consuming |
Experience | Analyze the data generated by consumers during usage | Easy to implement | Time-consuming; slow; inefficient; subjective; small survey scope |
Experimental | Record the psychological and physiological data of the subject | Detailed; high reliability | Expensive; one-time; time-consuming; labor-intensive; complex; small survey scope |
Acoustic parameters.
Category | Parameter |
---|---|
Prosody parameter | Duration |
Pitch | |
Energy | |
Intensity | |
Spectral parameter | Linear predictor coefficient (LPC) |
One-sided autocorrelation linear predictor coefficient (OSALPC) | 
Log-frequency power coefficient (LFPC) | 
Linear predictor cepstral coefficient (LPCC) | 
Cepstral-based OSALPC (OSALPCC) | 
Mel-frequency cepstral coefficient (MFCC) | 
Sound quality parameter | Formant frequency and bandwidth 
Jitter and shimmer | |
Glottal parameter |
References
1. Keshwani, S.; Lena, T.A.; Ahmed-Kristensen, S.; Chakrabarti, A. Comparing novelty of designs from biological-inspiration with those from brainstorming. J. Eng. Des.; 2017; 28, pp. 654-680. [DOI: https://dx.doi.org/10.1080/09544828.2017.1393504]
2. Wang, C.H. Using the theory of inventive problem solving to brainstorm innovative ideas for assessing varieties of phone-cameras. Comput. Ind. Eng.; 2015; 85, pp. 227-234. [DOI: https://dx.doi.org/10.1016/j.cie.2015.04.003]
3. Gu, X.; Gao, F.; Tan, M.; Peng, P. Fashion analysis and understanding with artificial intelligence. Inf. Process. Manag.; 2020; 57, pp. 102276-102292. [DOI: https://dx.doi.org/10.1016/j.ipm.2020.102276]
4. Zhang, M.; Fan, B.; Zhang, N.; Wang, W.; Fan, W. Mining product innovation ideas from online reviews. Inf. Process. Manag.; 2021; 58, pp. 102389-102402. [DOI: https://dx.doi.org/10.1016/j.ipm.2020.102389]
5. Oh, S.; Jung, Y.; Lee, I.; Kang, N. Design automation by integrating generative adversarial networks and topology optimization. Proceedings of the International Design Engineering Technical Conferences and Computers and Information in Engineering Conference; Quebec, QC, Canada, 26–29 August 2018; American Society of Mechanical Engineers: New York, NY, USA, 2018; 02-03008.
6. Li, J.; Yang, J.; Zhang, J.; Liu, C.; Wang, C.; Xu, T. Attribute-conditioned layout gan for automatic graphic design. IEEE Trans. Vis. Comput. Graph.; 2020; 27, pp. 4039-4048. [DOI: https://dx.doi.org/10.1109/TVCG.2020.2999335]
7. Cao, Y.; Li, S.; Liu, Y.; Yan, Z.; Dai, Y.; Yu, P.S.; Sun, L. A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt. arXiv; 2023; arXiv: 2303.04226
8. Wu, J.; Gan, W.; Chen, Z.; Wan, S.; Lin, H. Ai-generated content (aigc): A survey. arXiv; 2023; arXiv: 2304.06632
9. Lai, H.-H.; Lin, Y.-C.; Yeh, C.-H.; Wei, C.-H. User-oriented design for the optimal combination on product design. Int. J. Prod. Econ.; 2006; 100, pp. 253-267. [DOI: https://dx.doi.org/10.1016/j.ijpe.2004.11.005]
10. Shieh, M.D.; Yeh, Y.E. Developing a design support system for the exterior form of running shoes using partial least squares and neural networks. Comput. Ind. Eng.; 2013; 65, pp. 704-718. [DOI: https://dx.doi.org/10.1016/j.cie.2013.05.008]
11. Qu, Q.X.; Guo, F. Can eye movements be effectively measured to assess product design? Gender differences should be considered. Int. J. Ind. Ergon.; 2019; 72, pp. 281-289. [DOI: https://dx.doi.org/10.1016/j.ergon.2019.06.006]
12. Dogan, K.M.; Suzuki, H.; Gunpinar, E. Eye tracking for screening design parameters in adjective-based design of yacht hull. Ocean. Eng.; 2018; 166, pp. 262-277. [DOI: https://dx.doi.org/10.1016/j.oceaneng.2018.08.026]
13. Avikal, S.; Singh, R.; Rashmi, R. Qfd and fuzzy kano model based approach for classification of aesthetic attributes of suv car profile. J. Intell. Manuf.; 2020; 31, pp. 271-284. [DOI: https://dx.doi.org/10.1007/s10845-018-1444-5]
14. Mistarihi, M.Z.; Okour, R.A.; Mumani, A.A. An integration of a qfd model with fuzzy-anp approach for determining the importance weights for engineering characteristics of the proposed wheelchair design. Appl. Soft Comput.; 2020; 90, pp. 106136-106148. [DOI: https://dx.doi.org/10.1016/j.asoc.2020.106136]
15. Yamashina, H.; Ito, T.; Kawada, H. Innovative product development process by integrating qfd and triz. Int. J. Prod. Res.; 2002; 40, pp. 1031-1050. [DOI: https://dx.doi.org/10.1080/00207540110098490]
16. Dou, R.; Zhang, Y.; Nan, G. Application of combined kano model and interactive genetic algorithm for product customization. J. Intell. Manuf.; 2019; 30, pp. 2587-2602. [DOI: https://dx.doi.org/10.1007/s10845-016-1280-4]
17. Wu, Y.-H.; Ho, C.C. Integration of green quality function deployment and fuzzy theory: A case study on green mobile phone design. J. Clean. Prod.; 2015; 108, pp. 271-280. [DOI: https://dx.doi.org/10.1016/j.jclepro.2015.09.013]
18. Chen, Z.-S.; Liu, X.-L.; Chin, K.-S.; Pedrycz, W.; Tsui, K.-L.; Skibniewski, M.J. Online-review analysis based large-scale group decision-making for determining passenger demands and evaluating passenger satisfaction: Case study of high-speed rail system in china. Inf. Fusion; 2020; 69, pp. 22-39. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.11.010]
19. Yao, M.-L.; Chuang, M.-C.; Hsu, C.-C. The kano model analysis of features for mobile security applications. Comput. Secur.; 2018; 78, pp. 336-346. [DOI: https://dx.doi.org/10.1016/j.cose.2018.07.008]
20. Dong, M.; Zeng, X.; Koehl, L.; Zhang, J. An interactive knowledge-based recommender system for fashion product design in the big data environment. Inf. Sci.; 2020; 540, pp. 469-488. [DOI: https://dx.doi.org/10.1016/j.ins.2020.05.094]
21. Sakao, T. A qfd-centred design methodology for environmentally conscious product design. Int. J. Prod. Res.; 2007; 45, pp. 4143-4162. [DOI: https://dx.doi.org/10.1080/00207540701450179]
22. Jin, J.; Liu, Y.; Ji, P.; Kwong, C. Review on recent advances in information mining from big consumer opinion data for product design. J. Comput. Inf. Sci. Eng.; 2019; 19, 010801. [DOI: https://dx.doi.org/10.1115/1.4041087]
23. Chen, Z.-S.; Liu, X.-L.; Rodríguez, R.M.; Wang, X.-J.; Chin, K.-S.; Tsui, K.-L.; Martínez, L. Identifying and prioritizing factors affecting in-cabin passenger comfort on high-speed rail in china: A fuzzy-based linguistic approach. Appl. Soft Comput.; 2020; 95, pp. 106558-106577. [DOI: https://dx.doi.org/10.1016/j.asoc.2020.106558]
24. Iosifidis, A.; Tefas, A.; Pitas, I.; Gabbouj, M. Big media data analysis. Signal Process. Image Commun.; 2017; 59, pp. 105-108. [DOI: https://dx.doi.org/10.1016/j.image.2017.10.004]
25. Wang, L.; Liu, Z. Data-driven product design evaluation method based on multi-stage artificial neural network. Appl. Soft Comput.; 2021; 103, 107117. [DOI: https://dx.doi.org/10.1016/j.asoc.2021.107117]
26. Shoumy, N.J.; Ang, L.-M.; Seng, K.P.; Rahaman, D.M.; Zia, T. Multimodal big data affective analytics: A comprehensive survey using text, audio, visual and physiological signals. J. Netw. Comput. Appl.; 2020; 149, pp. 102447-1024482. [DOI: https://dx.doi.org/10.1016/j.jnca.2019.102447]
27. Zhang, X.; Ming, X.; Yin, D. Application of industrial big data for smart manufacturing in product service system based on system engineering using fuzzy dematel. J. Clean. Prod.; 2020; 265, pp. 121863-121888. [DOI: https://dx.doi.org/10.1016/j.jclepro.2020.121863]
28. Pandey, N.; Savakis, A. Poly-gan: Multi-conditioned gan for fashion synthesis. Neurocomputing; 2020; 414, pp. 356-364. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.07.092]
29. Zhang, Y.; Ren, S.; Liu, Y.; Si, S. A big data analytics architecture for cleaner manufacturing and maintenance processes of complex products. J. Clean. Prod.; 2017; 142, pp. 626-641. [DOI: https://dx.doi.org/10.1016/j.jclepro.2016.07.123]
30. Zhang, Y.; Ren, S.; Liu, Y.; Sakao, T.; Huisingh, D. A framework for big data driven product lifecycle management. J. Clean. Prod.; 2017; 159, pp. 229-240. [DOI: https://dx.doi.org/10.1016/j.jclepro.2017.04.172]
31. Ren, S.; Zhang, Y.; Liu, Y.; Sakao, T.; Huisingh, D.; Almeida, C.M.V.B. A comprehensive review of big data analytics throughout product lifecycle to support sustainable smart manufacturing: A framework, challenges and future research directions. J. Clean. Prod.; 2019; 210, pp. 1343-1365. [DOI: https://dx.doi.org/10.1016/j.jclepro.2018.11.025]
32. Büyüközkan, G.; Göçer, F. Application of a new combined intuitionistic fuzzy mcdm approach based on axiomatic design methodology for the supplier selection problem. Appl. Soft Comput.; 2017; 52, pp. 1222-1238. [DOI: https://dx.doi.org/10.1016/j.asoc.2016.08.051]
33. Carnevalli, J.A.; Miguel, P.C. Review, analysis and classification of the literature on qfd—Types of research, difficulties and benefits. Int. J. Prod. Econ.; 2008; 114, pp. 737-754. [DOI: https://dx.doi.org/10.1016/j.ijpe.2008.03.006]
34. Li, Y.; Wang, J.; Li, X.L.; Zhao, W.; Hu, W. Creative thinking and computer aided product innovation. Comput. Integr. Manuf. Syst.; 2003; 9, pp. 1092-1096.
35. Li, X.; Qiu, S.; Ming, H.X.G. An integrated module-based reasoning and axiomatic design approach for new product design under incomplete information environment. Comput. Ind. Eng.; 2019; 127, pp. 63-73. [DOI: https://dx.doi.org/10.1016/j.cie.2018.11.057]
36. Kudrowitz, B.M.; Wallace, D. Assessing the quality of ideas from prolific, early-stage product ideation. J. Eng. Des.; 2013; 24, pp. 120-139. [DOI: https://dx.doi.org/10.1080/09544828.2012.676633]
37. Bonnardel, N.; Didier, J. Brainstorming variants to favor creative design. Appl. Ergon.; 2020; 83, 102987. [DOI: https://dx.doi.org/10.1016/j.apergo.2019.102987] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31710954]
38. Youn, H.; Strumsky, D.; Bettencourt, L.M.; Lobo, J. Invention as a combinatorial process: Evidence from us patents. J. R. Soc. Interface; 2015; 12, 20150272. [DOI: https://dx.doi.org/10.1098/rsif.2015.0272]
39. Zarraonandia, T.; Diaz, P.; Aedo, I. Using combinatorial creativity to support end-user design of digital games. Multimed. Tools Appl.; 2017; 76, pp. 9073-9098. [DOI: https://dx.doi.org/10.1007/s11042-016-3457-4]
40. Sakao, T.; Lindahl, M. A value based evaluation method for product/service system using design information. CIRP Ann. Manuf. Technol.; 2012; 61, pp. 51-54. [DOI: https://dx.doi.org/10.1016/j.cirp.2012.03.108]
41. Vieira, J.; Osório, J.M.A.; Mouta, S.; Delgado, P.; Portinha, A.; Meireles, J.F.; Santos, J.A. Kansei engineering as a tool for the design of in-vehicle rubber keypads. Appl. Ergon.; 2017; 61, pp. 1-11. [DOI: https://dx.doi.org/10.1016/j.apergo.2016.12.019] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28237008]
42. Nagamachi, M. Kansei engineering in consumer product design. Ergon. Des. Q. Hum. Factors Appl.; 2016; 10, pp. 5-9. [DOI: https://dx.doi.org/10.1177/106480460201000203]
43. Nagamachi, M. Kansei engineering: A new ergonomic consumer-oriented technology for product development. Int. J. Ind. Ergon.; 1995; 15, pp. 3-11. [DOI: https://dx.doi.org/10.1016/0169-8141(94)00052-5]
44. Nagamachi, M. Successful points of kansei product development. Proceedings of the 7th International Conference on Kansei Engineering & Emotion Research; Kuching, Malaysia, 19–22 March 2018; Linköping University Electronic Press: Linköping, Sweden, 2018; pp. 177-187.
45. Nagamachi, M.; Lokman, A.M. Innovations of Kansei Engineering; CRC Press: Boca Raton, FL, USA, 2016.
46. Ishihara, S.; Nagamachi, M.; Schütte, S.; Eklund, J. Affective Meaning: The Kansei Engineering Approach; Elsevier: Amsterdam, The Netherlands, 2008; pp. 477-496.
47. Schütte, S. Designing Feelings into Products: Integrating Kansei Engineering Methodology in Product Development. Master’s Thesis; Linköping University: Linköping, Sweden, 2002.
48. Schütte, S. Engineering Emotional Values in Product Design: Kansei Engineering in Development. Ph.D. Thesis; Institutionen för Konstruktions-och Produktionsteknik: Linköping, Sweden, 2005.
49. Schütte, S.; Eklund, J. Design of rocker switches for work-vehicles—An application of kansei engineering. Appl. Ergon.; 2005; 36, pp. 557-567. [DOI: https://dx.doi.org/10.1016/j.apergo.2005.02.002] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/15950167]
50. Marco Almagro, L.; Tort-Martorell Llabrés, X.; Schütte, S. A Discussion on the Selection of Prototypes for Kansei Engineering Study; Universitat Politècnica de Catalunya: Barcelona, Spain, 2016.
51. Schütte, S.T.; Eklund, J.; Axelsson, J.R.; Nagamachi, M. Concepts, methods and tools in Kansei engineering. Theor. Issues Ergon. Sci.; 2004; 5, pp. 214-231. [DOI: https://dx.doi.org/10.1080/1463922021000049980]
52. Ishihara, S.; Nagamachi, M.; Tsuchiya, T. Development of a Kansei engineering artificial intelligence sightseeing application. Proceedings of the International Conference on Applied Human Factors and Ergonomics; Orlando, FL, USA, 21–25 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 312-322.
53. Djatna, T.; Kurniati, W.D. A system analysis and design for packaging design of powder shaped fresheners based on Kansei engineering. Procedia Manuf.; 2015; 4, pp. 115-123. [DOI: https://dx.doi.org/10.1016/j.promfg.2015.11.021]
54. Shi, F.; Dey, N.; Ashour, A.S.; Sifaki-Pistolla, D.; Sherratt, R.S. Meta-kansei modeling with valence-arousal fmri dataset of brain. Cogn. Comput.; 2019; 11, pp. 227-240. [DOI: https://dx.doi.org/10.1007/s12559-018-9614-5]
55. Xiao, W.; Cheng, J. Perceptual design method for smart industrial robots based on virtual reality and synchronous quantitative physiological signals. Int. J. Distrib. Sens. Netw.; 2020; 16, pp. 1-15. [DOI: https://dx.doi.org/10.1177/1550147720917646]
56. Kano, N.; Seraku, N.; Takahashi, F.; Tsuji, S.-I. Attractive quality and must-be quality. J. Jpn. Soc. Qual. Control.; 1984; 14, pp. 147-156.
57. Avikal, S.; Jain, R.; Mishra, P. A kano model, ahp and m-topsis method-based technique for disassembly line balancing under fuzzy environment. Appl. Soft Comput.; 2014; 25, pp. 519-529. [DOI: https://dx.doi.org/10.1016/j.asoc.2014.08.002]
58. Violante, M.G.; Vezzetti, E. Kano qualitative vs quantitative approaches: An assessment framework for products attributes analysis. Comput. Ind.; 2017; 86, pp. 15-25. [DOI: https://dx.doi.org/10.1016/j.compind.2016.12.007]
59. He, L.; Song, W.; Wu, Z.; Xu, Z.; Zheng, M.; Ming, X. Quantification and integration of an improved kano model into qfd based on multi-population adaptive genetic algorithm. Comput. Ind. Eng.; 2017; 114, pp. 183-194. [DOI: https://dx.doi.org/10.1016/j.cie.2017.10.009]
60. Geng, X.; Chu, X. A new importance–performance analysis approach for customer satisfaction evaluation supporting pss design. Expert Syst. Appl.; 2012; 39, pp. 1492-1502. [DOI: https://dx.doi.org/10.1016/j.eswa.2011.08.038]
61. Lee, Y.-C.; Sheu, L.-C.; Tsou, Y.-G. Quality function deployment implementation based on fuzzy kano model: An application in plm system. Comput. Ind. Eng.; 2008; 55, pp. 48-63. [DOI: https://dx.doi.org/10.1016/j.cie.2007.11.014]
62. Ghorbani, M.; Mohammad Arabzad, S.; Shahin, A. A novel approach for supplier selection based on the kano model and fuzzy mcdm. Int. J. Prod. Res.; 2013; 51, pp. 5469-5484. [DOI: https://dx.doi.org/10.1080/00207543.2013.784403]
63. Chen, C.-C.; Chuang, M.-C. Integrating the kano model into a robust design approach to enhance customer satisfaction with product design. Int. J. Prod. Econ.; 2008; 114, pp. 667-681. [DOI: https://dx.doi.org/10.1016/j.ijpe.2008.02.015]
64. Basfirinci, C.; Mitra, A. A cross cultural investigation of airlines service quality through integration of servqual and the kano model. J. Air Transp. Manag.; 2015; 42, pp. 239-248. [DOI: https://dx.doi.org/10.1016/j.jairtraman.2014.11.005]
65. Qi, J.; Zhang, Z.; Jeon, S.; Zhou, Y. Mining customer requirements from online reviews: A product improvement perspective. Inf. Manag.; 2016; 53, pp. 951-963. [DOI: https://dx.doi.org/10.1016/j.im.2016.06.002]
66. Bellandi, V.; Ceravolo, P.; Ehsanpour, M. A case study in smart healthcare platform design. Proceedings of the IEEE World Congress on Services; Beijing, China, 18–24 October 2020; pp. 7-12.
67. Almannai, B.; Greenough, R.; Kay, J. A decision support tool based on qfd and fmea for the selection of manufacturing automation technologies. Robot. Comput.-Integr. Manuf.; 2008; 24, pp. 501-507. [DOI: https://dx.doi.org/10.1016/j.rcim.2007.07.002]
68. Lee, C.H.; Chen, C.H.; Lee, Y.C. Customer requirement-driven design method and computer-aided design system for supporting service innovation conceptualization handling. Adv. Eng. Inform.; 2020; 45, pp. 1-16. [DOI: https://dx.doi.org/10.1016/j.aei.2020.101117]
69. Yan, H.B.; Meng, X.S.; Ma, T.; Huynh, V.N. An uncertain target-oriented qfd approach to service design based on service standardization with an application to bank window service. IISE Trans.; 2019; 51, pp. 1167-1189. [DOI: https://dx.doi.org/10.1080/24725854.2018.1542545]
70. Kim, K.-J.; Moskowitz, H.; Dhingra, A.; Evans, G. Fuzzy multicriteria models for quality function deployment. Eur. J. Oper. Res.; 2000; 121, pp. 504-518. [DOI: https://dx.doi.org/10.1016/S0377-2217(99)00048-X]
71. Kahraman, C.; Ertay, T.; Büyüközkan, G. A fuzzy optimization model for qfd planning process using analytic network approach. Eur. J. Oper. Res.; 2006; 171, pp. 390-411. [DOI: https://dx.doi.org/10.1016/j.ejor.2004.09.016]
72. Wang, Y.-H.; Lee, C.-H.; Trappey, A.J. Service design blueprint approach incorporating triz and service qfd for a meal ordering system: A case study. Comput. Ind. Eng.; 2017; 107, pp. 388-400. [DOI: https://dx.doi.org/10.1016/j.cie.2017.01.013]
73. Dursun, M.; Karsak, E.E. A qfd-based fuzzy mcdm approach for supplier selection. Appl. Math. Model.; 2013; 37, pp. 5864-5875. [DOI: https://dx.doi.org/10.1016/j.apm.2012.11.014]
74. Li, M.; Jin, L.; Wang, J. A new mcdm method combining qfd with topsis for knowledge management system selection from the user’s perspective in intuitionistic fuzzy environment. Appl. Soft Comput.; 2014; 21, pp. 28-37. [DOI: https://dx.doi.org/10.1016/j.asoc.2014.03.008]
75. Liu, H.-T. Product design and selection using fuzzy qfd and fuzzy mcdm approaches. Appl. Math. Model.; 2011; 35, pp. 482-496. [DOI: https://dx.doi.org/10.1016/j.apm.2010.07.014]
76. Yazdani, M.; Chatterjee, P.; Zavadskas, E.K.; Zolfani, S.H. Integrated qfd-mcdm framework for green supplier selection. J. Clean. Prod.; 2017; 142, pp. 3728-3740. [DOI: https://dx.doi.org/10.1016/j.jclepro.2016.10.095]
77. Wang, X.; Fang, H.; Song, W. Technical attribute prioritisation in qfd based on cloud model and grey relational analysis. Int. J. Prod. Res.; 2020; 58, pp. 5751-5768. [DOI: https://dx.doi.org/10.1080/00207543.2019.1657246]
78. Yazdani, M.; Kahraman, C.; Zarate, P.; Onar, S.C. A fuzzy multi attribute decision framework with integration of qfd and grey relational analysis. Expert Syst. Appl.; 2019; 115, pp. 474-485. [DOI: https://dx.doi.org/10.1016/j.eswa.2018.08.017]
79. Zhai, L.-Y.; Khoo, L.-P.; Zhong, Z.-W. A rough set based qfd approach to the management of imprecise design information in product development. Adv. Eng. Inform.; 2009; 23, pp. 222-228. [DOI: https://dx.doi.org/10.1016/j.aei.2008.10.010]
80. Zhai, L.-Y.; Khoo, L.P.; Zhong, Z.-W. Towards a qfd-based expert system: A novel extension to fuzzy qfd methodology using rough set theory. Expert Syst. Appl.; 2010; 37, pp. 8888-8896. [DOI: https://dx.doi.org/10.1016/j.eswa.2010.06.007]
81. Moussa, F.Z.B.; Rasovska, I.; Dubois, S.; De Guio, R.; Benmoussa, R. Reviewing the use of the theory of inventive problem solving (triz) in green supply chain problems. J. Clean. Prod.; 2017; 142, pp. 2677-2692. [DOI: https://dx.doi.org/10.1016/j.jclepro.2016.11.008]
82. Ai, X.; Jiang, Z.; Zhang, H.; Wang, Y. Low-carbon product conceptual design from the perspectives of technical system and human use. J. Clean. Prod.; 2020; 244, 118819. [DOI: https://dx.doi.org/10.1016/j.jclepro.2019.118819]
83. Li, Z.; Tian, Z.; Wang, J.; Wang, W.; Huang, G. Dynamic mapping of design elements and affective responses: A machine learning based method for affective design. J. Eng. Des.; 2018; 29, pp. 358-380. [DOI: https://dx.doi.org/10.1080/09544828.2018.1471671]
84. Jiao, J.R.; Zhang, Y.; Helander, M. A kansei mining system for affective design. Expert Syst. Appl.; 2006; 30, pp. 658-673. [DOI: https://dx.doi.org/10.1016/j.eswa.2005.07.020]
85. Gandomi, A.; Haider, M. Beyond the hype: Big data concepts, methods, and analytics. Int. J. Inf. Manag.; 2015; 35, pp. 137-144. [DOI: https://dx.doi.org/10.1016/j.ijinfomgt.2014.10.007]
86. Carvalho, J.P.; Rosa, H.; Brogueira, G.; Batista, F. Misnis: An intelligent platform for twitter topic mining. Expert Syst. Appl.; 2017; 89, pp. 374-388. [DOI: https://dx.doi.org/10.1016/j.eswa.2017.08.001]
87. Lau, R.Y.; Li, C.; Liao, S.S. Social analytics: Learning fuzzy product ontologies for aspect-oriented sentiment analysis. Decis. Support Syst.; 2014; 65, pp. 80-94. [DOI: https://dx.doi.org/10.1016/j.dss.2014.05.005]
88. Liu, Y.; Jiang, C.; Zhao, H. Using contextual features and multi-view ensemble learning in product defect identification from online discussion forums. Decis. Support Syst.; 2018; 105, pp. 1-12. [DOI: https://dx.doi.org/10.1016/j.dss.2017.10.009]
89. Park, Y.; Lee, S. How to design and utilize online customer center to support new product concept generation. Expert Syst. Appl.; 2011; 38, pp. 10638-10647. [DOI: https://dx.doi.org/10.1016/j.eswa.2011.02.125]
90. Ren, L.; Zhu, B.; Xu, Z. Data-driven fuzzy preference analysis from an optimization perspective. Fuzzy Sets Syst.; 2019; 377, pp. 85-101. [DOI: https://dx.doi.org/10.1016/j.fss.2019.03.003]
91. Hong, H.; Xu, D.; Wang, G.A.; Fan, W. Understanding the determinants of online review helpfulness: A meta-analytic investigation. Decis. Support Syst.; 2017; 102, pp. 1-11. [DOI: https://dx.doi.org/10.1016/j.dss.2017.06.007]
92. Min, H.-J.; Park, J.C. Identifying helpful reviews based on customer’s mentions about experiences. Expert Syst. Appl.; 2012; 39, pp. 11830-11838. [DOI: https://dx.doi.org/10.1016/j.eswa.2012.01.116]
93. Choi, J.; Yoon, J.; Chung, J.; Coh, B.-Y.; Lee, J.-M. Social media analytics and business intelligence research: A systematic review. Inf. Process. Manag.; 2020; 57, pp. 102279-102298. [DOI: https://dx.doi.org/10.1016/j.ipm.2020.102279]
94. Xiao, S.; Wei, C.-P.; Dong, M. Crowd intelligence: Analyzing online product reviews for preference measurement. Inf. Manag.; 2016; 53, pp. 169-182. [DOI: https://dx.doi.org/10.1016/j.im.2015.09.010]
95. Zhao, K.; Stylianou, A.C.; Zheng, Y. Sources and impacts of social influence from online anonymous user reviews. Inf. Manag.; 2018; 55, pp. 16-30. [DOI: https://dx.doi.org/10.1016/j.im.2017.03.006]
96. Lee, A.J.; Yang, F.-C.; Chen, C.-H.; Wang, C.-S.; Sun, C.-Y. Mining perceptual maps from consumer reviews. Decis. Support Syst.; 2016; 82, pp. 12-25. [DOI: https://dx.doi.org/10.1016/j.dss.2015.11.002]
97. Bi, J.-W.; Liu, Y.; Fan, Z.-P.; Cambria, E. Modelling customer satisfaction from online reviews using ensemble neural network and effect-based Kano model. Int. J. Prod. Res.; 2019; 57, pp. 7068-7088. [DOI: https://dx.doi.org/10.1080/00207543.2019.1574989]
98. Hu, M.; Liu, B. Mining opinion features in customer reviews. AAAI; 2004; 4, pp. 755-760.
99. Kang, D.; Park, Y. Review-based measurement of customer satisfaction in mobile service: Sentiment analysis and VIKOR approach. Expert Syst. Appl.; 2014; 41, pp. 1041-1050. [DOI: https://dx.doi.org/10.1016/j.eswa.2013.07.101]
100. Kangale, A.; Kumar, S.K.; Naeem, M.A.; Williams, M.; Tiwari, M.K. Mining consumer reviews to generate ratings of different product attributes while producing feature-based review-summary. Int. J. Syst. Sci.; 2016; 47, pp. 3272-3286. [DOI: https://dx.doi.org/10.1080/00207721.2015.1116640]
101. Wang, Y.; Lu, X.; Tan, Y. Impact of product attributes on customer satisfaction: An analysis of online reviews for washing machines. Electron. Commer. Res. Appl.; 2018; 29, pp. 1-11. [DOI: https://dx.doi.org/10.1016/j.elerap.2018.03.003]
102. Aguwa, C.; Olya, M.H.; Monplaisir, L. Modeling of fuzzy-based voice of customer for business decision analytics. Knowl.-Based Syst.; 2017; 125, pp. 136-145. [DOI: https://dx.doi.org/10.1016/j.knosys.2017.03.019]
103. Zhan, J.; Loh, H.T.; Liu, Y. Gather customer concerns from online product reviews–a text summarization approach. Expert Syst. Appl.; 2009; 36, pp. 2107-2115. [DOI: https://dx.doi.org/10.1016/j.eswa.2007.12.039]
104. Archak, N.; Ghose, A.; Ipeirotis, P.G. Deriving the pricing power of product features by mining consumer reviews. Manag. Sci.; 2011; 57, pp. 1485-1509. [DOI: https://dx.doi.org/10.1287/mnsc.1110.1370]
105. Law, D.; Gruss, R.; Abrahams, A.S. Automated defect discovery for dishwasher appliances from online consumer reviews. Expert Syst. Appl.; 2017; 67, pp. 84-94. [DOI: https://dx.doi.org/10.1016/j.eswa.2016.08.069]
106. Winkler, M.; Abrahams, A.S.; Gruss, R.; Ehsani, J.P. Toy safety surveillance from online reviews. Decis. Support Syst.; 2016; 90, pp. 23-32. [DOI: https://dx.doi.org/10.1016/j.dss.2016.06.016] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/27942092]
107. Zhang, W.; Xu, H.; Wan, W. Weakness finder: Find product weakness from Chinese reviews by using aspects based sentiment analysis. Expert Syst. Appl.; 2012; 39, pp. 10283-10291. [DOI: https://dx.doi.org/10.1016/j.eswa.2012.02.166]
108. Jin, J.; Ji, P.; Gu, R. Identifying comparative customer requirements from product online reviews for competitor analysis. Eng. Appl. Artif. Intell.; 2016; 49, pp. 61-73. [DOI: https://dx.doi.org/10.1016/j.engappai.2015.12.005]
109. Chatterjee, S. Explaining customer ratings and recommendations by combining qualitative and quantitative user generated contents. Decis. Support Syst.; 2019; 119, pp. 14-22. [DOI: https://dx.doi.org/10.1016/j.dss.2019.02.008]
110. Fan, Z.-P.; Li, G.-M.; Liu, Y. Processes and methods of information fusion for ranking products based on online reviews: An overview. Inf. Fusion; 2020; 60, pp. 87-97. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.02.007]
111. Liu, P.; Teng, F. Probabilistic linguistic todim method for selecting products through online product reviews. Inf. Sci.; 2019; 485, pp. 441-455. [DOI: https://dx.doi.org/10.1016/j.ins.2019.02.022]
112. Liu, Y.; Bi, J.-W.; Fan, Z.-P. Ranking products through online reviews: A method based on sentiment analysis technique and intuitionistic fuzzy set theory. Inf. Fusion; 2017; 36, pp. 149-161. [DOI: https://dx.doi.org/10.1016/j.inffus.2016.11.012]
113. Siering, M.; Deokar, A.V.; Janze, C. Disentangling consumer recommendations: Explaining and predicting airline recommendations based on online reviews. Decis. Support Syst.; 2018; 107, pp. 52-63. [DOI: https://dx.doi.org/10.1016/j.dss.2018.01.002]
114. Zhang, J.; Chen, D.; Lu, M. Combining sentiment analysis with a fuzzy Kano model for product aspect preference recommendation. IEEE Access; 2018; 6, pp. 59163-59172. [DOI: https://dx.doi.org/10.1109/ACCESS.2018.2875026]
115. Jin, J.; Ji, P.; Kwong, C.K. What makes consumers unsatisfied with your products: Review analysis at a fine-grained level. Eng. Appl. Artif. Intell.; 2016; 47, pp. 38-48. [DOI: https://dx.doi.org/10.1016/j.engappai.2015.05.006]
116. Jin, J.; Liu, Y.; Ji, P.; Liu, H. Understanding big consumer opinion data for market-driven product design. Int. J. Prod. Res.; 2016; 54, pp. 3019-3041. [DOI: https://dx.doi.org/10.1080/00207543.2016.1154208]
117. Wang, W.M.; Wang, J.; Li, Z.; Tian, Z.; Tsui, E. Multiple affective attribute classification of online customer product reviews: A heuristic deep learning method for supporting Kansei engineering. Eng. Appl. Artif. Intell.; 2019; 85, pp. 33-45. [DOI: https://dx.doi.org/10.1016/j.engappai.2019.05.015]
118. Kumar, S.; Yadava, M.; Roy, P.P. Fusion of eeg response and sentiment analysis of products review to predict customer satisfaction. Inf. Fusion; 2019; 52, pp. 41-52. [DOI: https://dx.doi.org/10.1016/j.inffus.2018.11.001]
119. Li, S.; Nahar, K.; Fung, B.C. Product customization of tablet computers based on the information of online reviews by customers. J. Intell. Manuf.; 2015; 26, pp. 97-110. [DOI: https://dx.doi.org/10.1007/s10845-013-0765-7]
120. Sun, J.-T.; Zhang, Q.-Y. Product typicality attribute mining method based on a topic clustering ensemble. Artif. Intell. Rev.; 2022; 55, pp. 6629-6654. [DOI: https://dx.doi.org/10.1007/s10462-022-10163-y]
121. Zhang, H.; Sekhari, A.; Ouzrout, Y.; Bouras, A. Jointly identifying opinion mining elements and fuzzy measurement of opinion intensity to analyze product features. Eng. Appl. Artif. Intell.; 2016; 47, pp. 122-139. [DOI: https://dx.doi.org/10.1016/j.engappai.2015.06.007]
122. Tubishat, M.; Idris, N.; Abushariah, M. Explicit aspects extraction in sentiment analysis using optimal rules combination. Future Gener. Comput. Syst.; 2021; 114, pp. 448-480. [DOI: https://dx.doi.org/10.1016/j.future.2020.08.019]
123. Quan, C.; Ren, F. Unsupervised product feature extraction for feature-oriented opinion determination. Inf. Sci.; 2014; 272, pp. 16-28. [DOI: https://dx.doi.org/10.1016/j.ins.2014.02.063]
124. Sun, H.; Guo, W.; Shao, H.; Rong, B. Dynamical mining of ever-changing user requirements: A product design and improvement perspective. Adv. Eng. Inform.; 2020; 46, pp. 101174-101186. [DOI: https://dx.doi.org/10.1016/j.aei.2020.101174]
125. Ristoski, P.; Petrovski, P.; Mika, P.; Paulheim, H. A machine learning approach for product matching and categorization. Semant. Web; 2018; 9, pp. 707-728. [DOI: https://dx.doi.org/10.3233/SW-180300]
126. Fang, Z.; Zhang, Q.; Tang, X.; Wang, A.; Baron, C. An implicit opinion analysis model based on feature-based implicit opinion patterns. Artif. Intell. Rev.; 2020; 53, pp. 4547-4574. [DOI: https://dx.doi.org/10.1007/s10462-019-09801-9]
127. Putthividhya, D.; Hu, J. Bootstrapped named entity recognition for product attribute extraction. Proceedings of the Conference on Empirical Methods in Natural Language Processing; Edinburgh, UK, 27–31 July 2011; pp. 1557-1567.
128. Tubishat, M.; Idris, N.; Abushariah, M.A. Implicit aspect extraction in sentiment analysis: Review, taxonomy, opportunities, and open challenges. Inf. Process. Manag.; 2018; 54, pp. 545-563. [DOI: https://dx.doi.org/10.1016/j.ipm.2018.03.008]
129. Xu, H.; Zhang, F.; Wang, W. Implicit feature identification in Chinese reviews using explicit topic mining model. Knowl.-Based Syst.; 2015; 76, pp. 166-175. [DOI: https://dx.doi.org/10.1016/j.knosys.2014.12.012]
130. Kang, Y.; Zhou, L. RubE: Rule-based methods for extracting product features from online consumer reviews. Inf. Manag.; 2017; 54, pp. 166-176. [DOI: https://dx.doi.org/10.1016/j.im.2016.05.007]
131. Hu, M.; Liu, B. Mining and summarizing customer reviews. Proceedings of the Tenth International Conference on Knowledge Discovery and Data Mining; Seattle, WA, USA, 22–25 August 2004; ACM: New York, NY, USA, 2004; pp. 168-177. [DOI: https://dx.doi.org/10.1145/1014052.1014073]
132. Wang, Y.; Mo, D.Y.; Tseng, M.M. Mapping customer needs to design parameters in the front end of product design by applying deep learning. CIRP Ann.; 2018; 67, pp. 145-148. [DOI: https://dx.doi.org/10.1016/j.cirp.2018.04.018]
133. Li, S.B.; Quan, H.F.; Hu, J.J.; Wu, Y.; Zhang, A. Perceptual evaluation method of products based on online reviews data driven. Comput. Integr. Manuf. Syst.; 2018; 24, pp. 752-762.
134. Wang, W.M.; Li, Z.; Tian, Z.; Wang, J.; Cheng, M. Extracting and summarizing affective features and responses from online product descriptions and reviews: A Kansei text mining approach. Eng. Appl. Artif. Intell.; 2018; 73, pp. 149-162. [DOI: https://dx.doi.org/10.1016/j.engappai.2018.05.005]
135. Chen, L.; Qi, L.; Wang, F. Comparison of feature-level learning methods for mining online consumer reviews. Expert Syst. Appl.; 2012; 39, pp. 9588-9601. [DOI: https://dx.doi.org/10.1016/j.eswa.2012.02.158]
136. Moraes, R.; Valiati, J.F.; Neto, W.P.G. Document-level sentiment classification: An empirical comparison between SVM and ANN. Expert Syst. Appl.; 2013; 40, pp. 621-633. [DOI: https://dx.doi.org/10.1016/j.eswa.2012.07.059]
137. Bordoloi, M.; Biswas, S.K. Sentiment analysis: A survey on design framework, applications and future scopes. Artif. Intell. Rev.; 2023; pp. 1-56. [DOI: https://dx.doi.org/10.1007/s10462-023-10442-2]
138. Do, H.H.; Prasad, P.; Maag, A.; Alsadoon, A. Deep learning for aspect-based sentiment analysis: A comparative review. Expert Syst. Appl.; 2019; 118, pp. 272-299. [DOI: https://dx.doi.org/10.1016/j.eswa.2018.10.003]
139. Liu, Y.; Bi, J.-W.; Fan, Z.-P. Multi-class sentiment classification: The experimental comparisons of feature selection and machine learning algorithms. Expert Syst. Appl.; 2017; 80, pp. 323-339. [DOI: https://dx.doi.org/10.1016/j.eswa.2017.03.042]
140. Liu, Y.; Jian-Wu, B.; Fan, Z.P. A method for multi-class sentiment classification based on an improved one-vs-one (OVO) strategy and the support vector machine (SVM) algorithm. Inf. Sci.; 2017; 394, pp. 38-52. [DOI: https://dx.doi.org/10.1016/j.ins.2017.02.016]
141. Zengcai, S.; Yunfeng, X.; Dongwen, Z. Chinese comments sentiment classification based on word2vec and SVMperf. Expert Syst. Appl.; 2015; 42, pp. 1857-1863.
142. Dehdarbehbahani, I.; Shakery, A.; Faili, H. Semi-supervised word polarity identification in resource-lean languages. Neural Netw.; 2014; 58, pp. 50-59. [DOI: https://dx.doi.org/10.1016/j.neunet.2014.05.018]
143. Cho, H.; Kim, S.; Lee, J.; Lee, J.-S. Data-driven integration of multiple sentiment dictionaries for lexicon-based sentiment classification of product reviews. Knowl.-Based Syst.; 2014; 71, pp. 61-71. [DOI: https://dx.doi.org/10.1016/j.knosys.2014.06.001]
144. Araque, O.; Zhu, G.; Iglesias, C.A. A semantic similarity-based perspective of affect lexicons for sentiment analysis. Knowl.-Based Syst.; 2019; 165, pp. 346-359. [DOI: https://dx.doi.org/10.1016/j.knosys.2018.12.005]
145. Bravo-Marquez, F.; Mendoza, M.; Poblete, B. Meta-level sentiment models for big social data analysis. Knowl.-Based Syst.; 2014; 69, pp. 86-99. [DOI: https://dx.doi.org/10.1016/j.knosys.2014.05.016]
146. Yadollahi, A.; Shahraki, A.G.; Zaiane, O.R. Current state of text sentiment analysis from opinion to emotion mining. ACM Comput. Surv.; 2017; 50, pp. 1-33. [DOI: https://dx.doi.org/10.1145/3057270]
147. Dang, Y.; Zhang, Y.; Chen, H. A lexicon-enhanced method for sentiment classification: An experiment on online product reviews. IEEE Intell. Syst.; 2009; 25, pp. 46-53. [DOI: https://dx.doi.org/10.1109/MIS.2009.105]
148. Chen, Z.; Ai, S.; Jia, C. Structure-aware deep learning for product image classification. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM); 2019; 15, pp. 1-20. [DOI: https://dx.doi.org/10.1145/3231742]
149. Li, Q.; Peng, X.; Cao, L.; Du, W.; Xing, H.; Qiao, Y.; Peng, Q. Product image recognition with guidance learning and noisy supervision. Comput. Vis. Image Underst.; 2020; 196, pp. 102963-102971. [DOI: https://dx.doi.org/10.1016/j.cviu.2020.102963]
150. Liu, S.; Feng, J.; Domokos, C.; Xu, H.; Huang, J.; Hu, Z.; Yan, S. Fashion parsing with weak color-category labels. IEEE Trans. Multimed.; 2013; 16, pp. 253-265. [DOI: https://dx.doi.org/10.1109/TMM.2013.2285526]
151. Li, Y.; Dai, Y.; Liu, L.-J.; Tan, H. Advanced designing assistant system for smart design based on product image dataset. Proceedings of the International Conference on Human-Computer Interaction; Orlando, FL, USA, 26–31 July 2019; Springer: Cham, Switzerland, 2019; pp. 18-33.
152. Dai, Y.; Li, Y.; Liu, L.-J. New product design with automatic scheme generation. Sens. Imaging; 2019; 20, pp. 1-16. [DOI: https://dx.doi.org/10.1007/s11220-019-0248-9]
153. Kovacs, B.; O’Donovan, P.; Bala, K.; Hertzmann, A. Context-aware asset search for graphic design. IEEE Trans. Vis. Comput. Graph.; 2018; 25, pp. 2419-2429. [DOI: https://dx.doi.org/10.1109/TVCG.2018.2842734] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29993550]
154. Yamaguchi, K.; Kiapour, M.H.; Ortiz, L.E.; Berg, T.L. Retrieving similar styles to parse clothing. IEEE Trans. Pattern Anal. Mach. Intell.; 2014; 37, pp. 1028-1040. [DOI: https://dx.doi.org/10.1109/TPAMI.2014.2353624] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26353326]
155. Bell, S.; Bala, K. Learning visual similarity for product design with convolutional neural networks. ACM Trans. Graph. (TOG); 2015; 34, pp. 1-10. [DOI: https://dx.doi.org/10.1145/2766959]
156. Liu, X.; Zhang, S.; Huang, T.; Tian, Q. E2BoWs: An end-to-end bag-of-words model via deep convolutional neural network for image retrieval. Neurocomputing; 2020; 395, pp. 188-198. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.12.069]
157. Rubio, A.; Yu, L.; Simo-Serra, E.; Moreno-Noguer, F. Multi-modal joint embedding for fashion product retrieval. Proceedings of the IEEE International Conference on Image Processing; Beijing, China, 17–20 September 2017; pp. 400-404.
158. Tautkute, I.; Trzciński, T.; Skorupa, A.P.; Brocki, Ł.; Marasek, K. DeepStyle: Multimodal search engine for fashion and interior design. IEEE Access; 2019; 7, pp. 84613-84628. [DOI: https://dx.doi.org/10.1109/ACCESS.2019.2923552]
159. Andreeva, E.; Ignatov, D.I.; Grachev, A.; Savchenko, A.V. Extraction of visual features for recommendation of products via deep learning. Proceedings of the International Conference on Analysis of Images, Social Networks and Texts; Moscow, Russia, 5–7 July 2018; Springer: Berlin/Heidelberg, Germany, 2018; pp. 201-210.
160. Wang, X.; Sun, Z.; Zhang, W.; Zhou, Y.; Jiang, Y.-G. Matching user photos to online products with robust deep features. Proceedings of the 2016 ACM on International Conference on Multimedia Retrieval; New York, NY, USA, 6–9 June 2016; pp. 7-14.
161. Zhan, H.; Shi, B.; Duan, L.-Y.; Kot, A.C. DeepShoe: An improved multi-task view-invariant CNN for street-to-shop shoe retrieval. Comput. Vis. Image Underst.; 2019; 180, pp. 23-33. [DOI: https://dx.doi.org/10.1016/j.cviu.2019.01.001]
162. Jiang, S.; Wu, Y.; Fu, Y. Deep bidirectional cross-triplet embedding for online clothing shopping. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM); 2018; 14, pp. 1-22. [DOI: https://dx.doi.org/10.1145/3152114]
163. Jiang, Y.-G.; Li, M.; Wang, X.; Liu, W.; Hua, X.-S. DeepProduct: Mobile product search with portable deep features. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM); 2018; 14, pp. 1-18. [DOI: https://dx.doi.org/10.1145/3184745]
164. Liu, S.; Song, Z.; Liu, G.; Xu, C.; Lu, H.; Yan, S. Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. Proceedings of the Conference on Computer Vision and Pattern Recognition; Providence, RI, USA, 16–21 June 2012; pp. 3330-3337.
165. Yu, Q.; Liu, F.; Song, Y.-Z.; Xiang, T.; Hospedales, T.M.; Loy, C.-C. Sketch me that shoe. Proceedings of the Conference on Computer Vision and Pattern Recognition; Las Vegas, NV, USA, 27–30 June 2016; pp. 799-807.
166. Ullah, F.; Zhang, B.; Khan, R.U.; Ullah, I.; Khan, A.; Qamar, A.M. Visual-based items recommendation using deep neural network. Proceedings of the International Conference on Computing, Networks and Internet of Things; Sanya, China, 24–26 April 2020; pp. 122-126.
167. Liang, X.; Lin, L.; Yang, W.; Luo, P.; Huang, J.; Yan, S. Clothes co-parsing via joint image segmentation and labeling with application to clothing retrieval. IEEE Trans. Multimed.; 2016; 18, pp. 1175-1186. [DOI: https://dx.doi.org/10.1109/TMM.2016.2542983]
168. Chu, W.-T.; Wu, Y.-L. Image style classification based on learnt deep correlation features. IEEE Trans. Multimed.; 2018; 20, pp. 2491-2502. [DOI: https://dx.doi.org/10.1109/TMM.2018.2801718]
169. Hu, Z.; Wen, Y.; Liu, L.; Jiang, J.; Hong, R.; Wang, M.; Yan, S. Visual classification of furniture styles. ACM Trans. Intell. Syst. Technol.; 2017; 8, pp. 1-20. [DOI: https://dx.doi.org/10.1145/3065951]
170. Poursaeed, O.; Matera, T.; Belongie, S. Vision-based real estate price estimation. Mach. Vis. Appl.; 2018; 29, pp. 667-676. [DOI: https://dx.doi.org/10.1007/s00138-018-0922-2]
171. Pan, T.-Y.; Dai, Y.-Z.; Hu, M.-C.; Cheng, W.-H. Furniture style compatibility recommendation with cross-class triplet loss. Multimed. Tools Appl.; 2019; 78, pp. 2645-2665. [DOI: https://dx.doi.org/10.1007/s11042-018-5747-5]
172. Shin, Y.-G.; Yeo, Y.-J.; Sagong, M.-C.; Ji, S.-W.; Ko, S.-J. Deep fashion recommendation system with style feature decomposition. Proceedings of the International Conference on Consumer Electronics; Las Vegas, NV, USA, 11–13 January 2019; pp. 301-305.
173. Zhan, H.; Shi, B.; Chen, J.; Zheng, Q.; Duan, L.-Y.; Kot, A.C. Fashion recommendation on street images. Proceedings of the International Conference on Image Processing; Taipei, Taiwan, 22–25 September 2019; pp. 280-284.
174. Zhang, H.; Huang, W.; Liu, L.; Chow, T.W. Learning to match clothing from textual feature-based compatible relationships. IEEE Trans. Ind. Inform.; 2019; 16, pp. 6750-6759. [DOI: https://dx.doi.org/10.1109/TII.2019.2924725]
175. Aggarwal, D.; Valiyev, E.; Sener, F.; Yao, A. Learning style compatibility for furniture. Proceedings of the German Conference on Pattern Recognition; Stuttgart, Germany, 9–12 October 2018; Springer: Cham, Switzerland, 2018; pp. 552-566.
176. Polania, L.F.; Flores, M.; Nokleby, M.; Li, Y. Learning furniture compatibility with graph neural networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops; Seattle, WA, USA, 14–19 June 2020; pp. 366-367.
177. Dan, Y.; Zhao, Y.; Li, X.; Li, S.; Hu, M.; Hu, J. Generative adversarial networks (GAN) based efficient sampling of chemical composition space for inverse design of inorganic materials. npj Comput. Mater.; 2020; 6, pp. 1-7. [DOI: https://dx.doi.org/10.1038/s41524-020-00352-0]
178. Kang, W.-C.; Fang, C.; Wang, Z.; McAuley, J. Visually-aware fashion recommendation and design with generative image models. Proceedings of the IEEE International Conference on Data Mining; New Orleans, LA, USA, 18–21 November 2017; pp. 207-216.
179. Zhang, H.; Sun, Y.; Liu, L.; Xu, X. CascadeGAN: A category-supervised cascading generative adversarial network for clothes translation from the human body to tiled images. Neurocomputing; 2020; 382, pp. 148-161. [DOI: https://dx.doi.org/10.1016/j.neucom.2019.11.085]
180. Radhakrishnan, S.; Bharadwaj, V.; Manjunath, V.; Srinath, R. Creative intelligence–automating car design studio with generative adversarial networks (GAN). International Cross-Domain Conference for Machine Learning and Knowledge Extraction; Springer: Berlin/Heidelberg, Germany, 2018; pp. 160-175.
181. Ak, K.E.; Lim, J.H.; Tham, J.Y.; Kassim, A.A. Semantically consistent text to fashion image synthesis with an enhanced attentional generative adversarial network. Pattern Recognit. Lett.; 2020; 135, pp. 22-29. [DOI: https://dx.doi.org/10.1016/j.patrec.2020.02.030]
182. Kim, T.; Cha, M.; Kim, H.; Lee, J.K.; Kim, J. Learning to discover cross-domain relations with generative adversarial networks. Proceedings of the International Conference on Machine Learning; Sydney, Australia, 6–11 August 2017; pp. 1857-1865.
183. Hsiao, W.-L.; Katsman, I.; Wu, C.-Y.; Parikh, D.; Grauman, K. Fashion++: Minimal edits for outfit improvement. Proceedings of the IEEE/CVF International Conference on Computer Vision; Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5047-5056.
184. Lang, Y.; He, Y.; Dong, J.; Yang, F.; Xue, H. Design-GAN: Cross-category fashion translation driven by landmark attention. Proceedings of the ICASSP 2020—2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); Barcelona, Spain, 4–8 May 2020; pp. 1968-1972.
185. Liu, J.; Song, X.; Chen, Z.; Ma, J. MGCM: Multi-modal generative compatibility modeling for clothing matching. Neurocomputing; 2020; 414, pp. 215-224. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.06.033]
186. Lu, Q.; Tao, Q.; Zhao, Y. Sketch simplification using generative adversarial networks. Acta Autom. Sin.; 2018; 44, pp. 75-89.
187. Chai, C.; Liao, J.; Zou, N.; Sun, L. A one-to-many conditional generative adversarial network framework for multiple image-to-image translations. Multimed. Tools Appl.; 2018; 77, pp. 22339-22366. [DOI: https://dx.doi.org/10.1007/s11042-018-5968-7]
188. Lee, Y.; Cho, S. Design of semantic-based colorization of graphical user interface through conditional generative adversarial nets. Int. J. Hum. Comput. Interact.; 2020; 36, pp. 699-708. [DOI: https://dx.doi.org/10.1080/10447318.2019.1680921]
189. Liu, Y.; Qin, Z.; Wan, T.; Luo, Z. Auto-painter: Cartoon image generation from sketch by using conditional Wasserstein generative adversarial networks. Neurocomputing; 2018; 311, pp. 78-87. [DOI: https://dx.doi.org/10.1016/j.neucom.2018.05.045]
190. Liu, B.; Gan, J.; Wen, B.; LiuFu, Y.; Gao, W. An automatic coloring method for ethnic costume sketches based on generative adversarial networks. Appl. Soft Comput.; 2021; 98, pp. 106786-106797. [DOI: https://dx.doi.org/10.1016/j.asoc.2020.106786]
191. Chen, Y.; Xia, S.; Zhao, J.; Zhou, Y.; Niu, Q.; Yao, R.; Zhu, D. Appearance and shape based image synthesis by conditional variational generative adversarial network. Knowl.-Based Syst.; 2020; 193, pp. 105450-105477. [DOI: https://dx.doi.org/10.1016/j.knosys.2019.105450]
192. Jetchev, N.; Bergmann, U. The conditional analogy gan: Swapping fashion articles on people images. Proceedings of the IEEE International Conference on Computer Vision Workshops; Venice, Italy, 22–29 October 2017; pp. 2287-2292.
193. Huafeng, Q. Product Design Based on Big Data. Ph.D. Thesis; Guizhou University: Guiyang, China, 2019.
194. Liu, L.; Zhang, H.; Ji, Y.; Wu, Q.J. Toward ai fashion design: An attribute-gan model for clothing match. Neurocomputing; 2019; 341, pp. 156-167. [DOI: https://dx.doi.org/10.1016/j.neucom.2019.03.011]
195. Rahbar, M.; Mahdavinejad, M.; Bemanian, M.; Davaie Markazi, A.H.; Hovestadt, L. Generating synthetic space allocation probability layouts based on trained conditional-gans. Appl. Artif. Intell.; 2019; 33, pp. 689-705. [DOI: https://dx.doi.org/10.1080/08839514.2019.1592919]
196. Wang, X.; Gupta, A. Generative image modeling using style and structure adversarial networks. Proceedings of the European Conference on Computer Vision; Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 318-335.
197. Cheng, Q.; Gu, X. Cross-modal feature alignment based hybrid attentional generative adversarial networks for text-to-image synthesis. Digit. Signal Process.; 2020; 107, pp. 102866-102884. [DOI: https://dx.doi.org/10.1016/j.dsp.2020.102866]
198. Reed, S.; Akata, Z.; Yan, X.; Logeswaran, L.; Schiele, B.; Lee, H. Generative adversarial text to image synthesis. Proceedings of the International Conference on Machine Learning; New York, NY, USA, 19–24 June 2016; pp. 1060-1069.
199. Tan, H.; Liu, X.; Liu, M.; Yin, B.; Li, X. KT-GAN: Knowledge-transfer generative adversarial network for text-to-image synthesis. IEEE Trans. Image Process.; 2020; 30, pp. 1275-1290. [DOI: https://dx.doi.org/10.1109/TIP.2020.3026728]
200. Xu, T.; Zhang, P.; Huang, Q.; Zhang, H.; Gan, Z.; Huang, X.; He, X. AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Salt Lake City, UT, USA, 18–23 June 2018; pp. 1316-1324.
201. Poole, B.; Jain, A.; Barron, J.T.; Mildenhall, B. DreamFusion: Text-to-3D using 2D diffusion. arXiv; 2022; arXiv: 2209.14988
202. Jun, H.; Nichol, A. Shap-E: Generating conditional 3D implicit functions. arXiv; 2023; arXiv: 2305.02463
203. Wang, Z.; Lu, C.; Wang, Y.; Bao, F.; Li, C.; Su, H.; Zhu, J. ProlificDreamer: High-fidelity and diverse text-to-3D generation with variational score distillation. arXiv; 2023; arXiv: 2305.16213
204. Iizuka, S.; Simo-Serra, E.; Ishikawa, H. Let there be color! joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification. ACM Trans. Graph.; 2016; 35, pp. 1-11. [DOI: https://dx.doi.org/10.1145/2897824.2925974]
205. Lei, Y.; Du, W.; Hu, Q. Face sketch-to-photo transformation with multi-scale self-attention gan. Neurocomputing; 2020; 396, pp. 13-23. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.02.024]
206. Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 1125-1134.
207. Zhu, J.-Y.; Krähenbühl, P.; Shechtman, E.; Efros, A.A. Generative visual manipulation on the natural image manifold. Proceedings of the European Conference on Computer Vision; Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 597-613.
208. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy, 22–29 October 2017; pp. 2223-2232.
209. Pan, X.; Tewari, A.; Leimkühler, T.; Liu, L.; Meka, A.; Theobalt, C. Drag your GAN: Interactive point-based manipulation on the generative image manifold. arXiv; 2023; arXiv: 2305.10973
210. Gatys, L.A.; Ecker, A.S.; Bethge, M. A neural algorithm of artistic style. arXiv; 2015; arXiv: 1508.06576. [DOI: https://dx.doi.org/10.1167/16.12.326]
211. Huang, X.; Belongie, S. Arbitrary style transfer in real-time with adaptive instance normalization. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy, 22–29 October 2017.
212. Quan, H.; Li, S.; Hu, J. Product innovation design based on deep learning and Kansei engineering. Appl. Sci.; 2018; 8, pp. 2397-2415. [DOI: https://dx.doi.org/10.3390/app8122397]
213. Wu, Q.; Zhu, B.; Yong, B.; Wei, Y.; Jiang, X.; Zhou, R.; Zhou, Q. ClothGAN: Generation of fashionable Dunhuang clothes using generative adversarial networks. Connect. Sci.; 2021; 33, pp. 341-358. [DOI: https://dx.doi.org/10.1080/09540091.2020.1822780]
214. Sohn, K.; Sung, C.E.; Koo, G.; Kwon, O. Artificial intelligence in the fashion industry: Consumer responses to generative adversarial network (GAN) technology. Int. J. Retail. Distrib. Manag.; 2020; 49, pp. 1-20. [DOI: https://dx.doi.org/10.1108/IJRDM-03-2020-0091]
215. Sun, Y.; Chen, J.; Liu, Q.; Liu, G. Learning image compressed sensing with sub-pixel convolutional generative adversarial network. Pattern Recognit.; 2020; 98, 107051. [DOI: https://dx.doi.org/10.1016/j.patcog.2019.107051]
216. Wang, C.; Chen, Z.; Shang, K.; Wu, H. Label-removed generative adversarial networks incorporating with k-means. Neurocomputing; 2019; 361, pp. 126-136. [DOI: https://dx.doi.org/10.1016/j.neucom.2019.06.041]
217. Faezi, M.H.; Bijani, S.; Dolati, A. Degan: Decentralized generative adversarial networks. Neurocomputing; 2021; 419, pp. 335-343. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.07.089]
218. Sun, G.; Ding, S.; Sun, T.; Zhang, C. SA-CapsGAN: Using capsule networks with embedded self-attention for generative adversarial network. Neurocomputing; 2021; 423, pp. 399-406. [DOI: https://dx.doi.org/10.1016/j.neucom.2020.10.092]
219. Yao, S.; Wang, Y.; Niu, B. An efficient cascaded filtering retrieval method for big audio data. IEEE Trans. Multimed.; 2015; 17, pp. 1450-1459. [DOI: https://dx.doi.org/10.1109/TMM.2015.2460121]
220. Yang, Z.; Xu, M.; Liu, Z.; Qin, D.; Yao, X. Study of audio frequency big data processing architecture and key technology. Telecommun. Sci.; 2013; 29, pp. 1-5.
221. Lee, C.-H.; Wang, Y.-H.; Trappey, A.J. Ontology-based reasoning for the intelligent handling of customer complaints. Comput. Ind. Eng.; 2015; 84, pp. 144-155. [DOI: https://dx.doi.org/10.1016/j.cie.2014.11.019]
222. Yang, Y.; Xu, D.-L.; Yang, J.-B.; Chen, Y.-W. An evidential reasoning-based decision support system for handling customer complaints in mobile telecommunications. Knowl.-Based Syst.; 2018; 162, pp. 202-210. [DOI: https://dx.doi.org/10.1016/j.knosys.2018.09.029]
223. Bingol, M.C.; Aydogmus, O. Performing predefined tasks using the human–robot interaction on speech recognition for an industrial robot. Eng. Appl. Artif. Intell.; 2020; 95, pp. 103903-103917. [DOI: https://dx.doi.org/10.1016/j.engappai.2020.103903]
224. Tanaka, T.; Masumura, R.; Oba, T. Neural candidate-aware language models for speech recognition. Comput. Speech Lang.; 2021; 66, pp. 101157-101170. [DOI: https://dx.doi.org/10.1016/j.csl.2020.101157]
225. Dokuz, Y.; Tufekci, Z. Mini-batch sample selection strategies for deep learning based speech recognition. Appl. Acoust.; 2021; 171, pp. 107573-107583. [DOI: https://dx.doi.org/10.1016/j.apacoust.2020.107573]
226. Mulimani, M.; Koolagudi, S.G. Extraction of mapreduce-based features from spectrograms for audio-based surveillance. Digit. Signal Process.; 2019; 87, pp. 1-9. [DOI: https://dx.doi.org/10.1016/j.dsp.2019.01.001]
227. El Ayadi, M.; Kamel, M.S.; Karray, F. Survey on speech emotion recognition: Features, classification schemes, and databases. Pattern Recognit.; 2011; 44, pp. 572-587. [DOI: https://dx.doi.org/10.1016/j.patcog.2010.09.020]
228. Fayek, H.M.; Lech, M.; Cavedon, L. Evaluating deep learning architectures for speech emotion recognition. Neural Netw.; 2017; 92, pp. 60-68. [DOI: https://dx.doi.org/10.1016/j.neunet.2017.02.013] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28396068]
229. Li, D.; Zhou, Y.; Wang, Z.; Gao, D. Exploiting the potentialities of features for speech emotion recognition. Inf. Sci.; 2021; 548, pp. 328-343. [DOI: https://dx.doi.org/10.1016/j.ins.2020.09.047]
230. Badshah, A.M.; Ahmad, J.; Rahim, N.; Baik, S.W. Speech emotion recognition from spectrograms with deep convolutional neural network. Proceedings of the International Conference on Platform Technology and Service; Busan, Republic of Korea, 13–15 February 2017.
231. Nagamachi, M.; Lokman, A.M. Kansei Innovation: Practical Design Applications for Product and Service Development; CRC Press: Boca Raton, FL, USA, 2015.
232. Hossain, M.S.; Muhammad, G. Emotion recognition using deep learning approach from audio–visual emotional big data. Inf. Fusion; 2019; 49, pp. 69-78. [DOI: https://dx.doi.org/10.1016/j.inffus.2018.09.008]
233. Jiang, Y.; Li, W.; Hossain, M.S.; Chen, M.; Alelaiwi, A.; Al-Hammadi, M. A snapshot research and implementation of multimodal information fusion for data-driven emotion recognition. Inf. Fusion; 2020; 53, pp. 209-221. [DOI: https://dx.doi.org/10.1016/j.inffus.2019.06.019]
234. Zhang, J.; Yin, Z.; Chen, P.; Nichele, S. Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review. Inf. Fusion; 2020; 59, pp. 103-126. [DOI: https://dx.doi.org/10.1016/j.inffus.2020.01.011]
235. Li, G.; Wang, M.; Lu, Z.; Hong, R.; Chua, T.-S. In-video product annotation with web information mining. ACM Trans. Multimed. Comput. Commun. Appl.; 2012; 8, pp. 1-19. [DOI: https://dx.doi.org/10.1145/2379790.2379797]
236. Zhang, H.; Guo, H.; Wang, X.; Ji, Y.; Wu, Q.J. ClothesCounter: A framework for star-oriented clothes mining from videos. Neurocomputing; 2020; 377, pp. 38-48. [DOI: https://dx.doi.org/10.1016/j.neucom.2019.09.023]
237. Zhang, H.; Ji, Y.; Huang, W.; Liu, L. Sitcom-star-based clothing retrieval for video advertising: A deep learning framework. Neural Comput. Appl.; 2019; 31, pp. 7361-7380. [DOI: https://dx.doi.org/10.1007/s00521-018-3579-x]
238. Cheng, Z.-Q.; Wu, X.; Liu, Y.; Hua, X.-S. Video2Shop: Exact matching clothes in videos to online shopping images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; Honolulu, HI, USA, 21–26 July 2017; pp. 4048-4056.
239. Wu, C.; Tan, Z.; Wang, Z.; Yang, S. A dataset for exploring user behaviors in VR spherical video streaming. Proceedings of the ACM on Multimedia Systems Conference; Taipei, Taiwan, 20–23 June 2017; pp. 193-198.
240. Taati, B.; Snoek, J.; Mihailidis, A. Video analysis for identifying human operation difficulties and faucet usability assessment. Neurocomputing; 2013; 100, pp. 163-169. [DOI: https://dx.doi.org/10.1016/j.neucom.2011.10.041]
241. Chen, B.-H.; Huang, S.-C.; Yen, J.-Y. Counter-propagation artificial neural network-based motion detection algorithm for static-camera surveillance scenarios. Neurocomputing; 2018; 273, pp. 481-493. [DOI: https://dx.doi.org/10.1016/j.neucom.2017.08.002]
242. Liu, S.; Liang, X.; Liu, L.; Lu, K.; Lin, L.; Cao, X.; Yan, S. Fashion parsing with video context. IEEE Trans. Multimed.; 2015; 17, pp. 1347-1358. [DOI: https://dx.doi.org/10.1109/TMM.2015.2443559]
243. Dong, H.; Liang, X.; Shen, X.; Wu, B.; Chen, B.-C.; Yin, J. FW-GAN: Flow-navigated warping GAN for video virtual try-on. Proceedings of the IEEE/CVF International Conference on Computer Vision; Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1161-1170.
244. Yi, C.; Jiang, Z.; Benbasat, I. Enticing and engaging consumers via online product presentations: The effects of restricted interaction design. J. Manag. Inf. Syst.; 2015; 31, pp. 213-242. [DOI: https://dx.doi.org/10.1080/07421222.2014.1001270]
245. An, S.; Liu, S.; Huang, Z.; Che, G.; Bao, Q.; Zhu, Z.; Chen, Y.; Weng, D.Z. RotateView: A video composition system for interactive product display. IEEE Trans. Multimed.; 2019; 21, pp. 3095-3105. [DOI: https://dx.doi.org/10.1109/TMM.2019.2918720]
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
As living standards improve, modern products need to meet increasingly diversified and personalized user requirements. Traditional product design methods fall short due to their strong subjectivity, limited survey scope, lack of real-time data, and poor visual display. However, recent progress in big data and artificial intelligence (AI) is bringing about a transformative big data and AI-driven product design methodology with a significant impact on many industries. Big data in the product lifecycle contains valuable information, such as customer preferences, market demands, product evaluations, and visual displays: online product reviews reflect customer evaluations and requirements, while product images contain shape, color, and texture information that can inspire designers to quickly generate initial design schemes or even new product images. This survey provides a comprehensive review of big data and AI-driven product design, focusing on how big data of various modalities can be processed, analyzed, and exploited to aid product design using AI algorithms. It identifies the limitations of traditional product design methods and shows how textual, image, audio, and video data in product design cycles can be utilized to achieve much more intelligent product design. We finally discuss the major deficiencies of existing data-driven product design studies and outline promising future research directions and opportunities, aiming to draw increasing attention to modern AI-driven product design.
Details
1 College of Big Data and Statistics, Guizhou University of Finance and Economics, Guiyang 550050, China;
2 State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550050, China;
3 School of Computer Science, Civil Aviation Flight University of China, Guanghan 618307, China;
4 School of Mechanical Engineering, Guizhou Institute of Technology, Guiyang 550050, China;
5 Department of Computer Science and Engineering, University of South Carolina, Columbia, SC 29201, USA