1. Introduction
In recent years, there has been a growing emphasis on sustainable, high-quality catches from regional areas. The goal of such efforts is twofold: on the one hand, they help preserve existing ecosystems and biodiversity; on the other hand, they create investment opportunities for businesses in this market.
The AliAmvra project (Figure 1) stands as a noteworthy initiative, specifically focusing on the exploration and promotion of premium catches from the Amvrakikos Gulf, extending its reach to the broader regions of Arta. The core objective of the project is to establish an integrated plan of action, fostering a business identity characterized by high added value and tailored services that align with the unique features of the area.
This paper centers its attention on scrutinizing and interpreting data obtained from various tasting exhibitions associated with the AliAmvra project. The overarching goal is to enhance product quality, contribute to the ongoing action plan of AliAmvra on improving the quality of the products [1], and ultimately elevate the customer experience [2,3]. The methodology employed involves a comprehensive analysis of survey data collected during gastronomic events facilitated by the project, utilizing Google Forms to gather insights into both demographic information and product evaluations.
Furthermore, the publication extends its inquiry into the development of a robust recommendation system based on the acquired data. A pivotal aspect of this research is the incorporation of data-driven and model-driven algorithms to optimize the recommendation system’s efficacy. The model-driven analysis applied a series of ML/DL algorithms, such as MLP [4], RBF [5], GenClass [6], NNC [7], and FC [8]. Through this multifaceted approach, the paper seeks not only to contribute valuable insights to the AliAmvra project, but also to advance the understanding of how diverse algorithms can optimize recommendation systems for projects of similar nature and scope.
Similar studies on the improvement or assessment of the quality of food products using machine learning or deep learning techniques have shown promising results. Computer Vision is also one of many ways to determine or help improve the quality of food products. One study applied convolutional neural networks [9] in food reviews to classify the quality of products using images as inputs; this task was achieved by segmenting the contents of the plate into multiple sections. Another similar study [10] used Computer Vision to analyze the color of coffee beans and classify their quality. In addition, machine learning is capable of assessing the quality of products through large-scale reviews with the assistance of demographic data or food product data. Two studies mention the use of ML algorithms, one focused on the association of demographic data and food choice motives [11] and the other focused on food quality assessment [12].
Moreover, machine learning techniques have also been applied to food safety models [13,14], food sales prediction [15,16], the evaluation of food quality [17,18], food security [19,20], etc.
The following sections convey the process of each analysis in detail. Materials and Methods (Section 2) discusses the methodologies, tools, and techniques used in each analysis. Section 3 describes the data we retrieved from the exhibitions, including the development of our dataset. The Results section (Section 4) extends the study further by analyzing and visualizing the results of the models and by presenting a thorough data-analysis visualization. Finally, we draw conclusions from the experiments applied in this study and determine the best-fitting model for the problem in question.
2. Materials and Methods
This section will begin with the basic principles of Grammatical Evolution, accompanied by a full example of producing valid expressions, and continue with a full description of the methods used to effectively evaluate the data collected during the execution of the project. Furthermore, this section covers information about the tools applied in the analysis of this project, including the methodologies applied to the model and data-driven analysis.
2.1. Grammatical Evolution
Grammatical Evolution [21] is a genetic algorithm with integer chromosomes. The concept of genetic algorithms was proposed by John Holland [22], and they are considered biologically inspired algorithms. The algorithm randomly produces potential solutions of an optimization problem, and these solutions are gradually altered in a series of iterations through the application of the genetic operators of selection, crossover, and mutation [23,24]. Genetic algorithms have been used in a series of real-world problems, such as electromagnetic problems [25], combinatorial problems [26], water distribution problems [27], neural network training [28,29], etc. The main advantage of genetic algorithms is that they can be easily parallelized [30,31] using programming techniques such as MPI [32] or OpenMP [33]. The chromosomes in Grammatical Evolution represent production rules of the provided BNF (Backus–Naur form) grammar [34]. Any BNF grammar G can be defined as the tuple $G = (N, T, S, P)$, where:
1. N denotes the set of non-terminal symbols of the underlying grammar.
2. T stands for the set of terminal symbols, where $N \cap T = \emptyset$.
3. The non-terminal symbol $S \in N$ is named the start symbol of the grammar.
4. P is a finite set of production rules in the form $A \to a$ or $A \to aB$, with $A, B \in N$ and $a \in T$.
The Grammatical Evolution starts from the symbol S and produces valid programs, expressed only with terminal symbols, selecting production rules from the grammar. The production rules are selected using the following procedure:
Read the next element V from the chromosome that is being processed.
Obtain the rule: Rule = V mod R, where R is the total number of production rules for the current non-terminal symbol.
As an example, consider the grammar of Figure 2, which can be used to produce valid expressions in a language similar to the programming language C. The number on the right of each production rule is the sequential rule number for the corresponding non-terminal symbol. This number is used by Grammatical Evolution to select the next rule during expression generation.
Also, consider the chromosome $x = (12, 8, 8, 20, 15, 100, 6, 2, 2, 1)$. The steps to produce the final string are outlined in Table 1.
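The mapping procedure described above can be sketched in a few lines of Python. The grammar below mirrors the derivation of Table 1; the exact ordering of the rules for the <op> and <func> symbols is inferred from the table and should be treated as an assumption:

```python
# Minimal sketch of Grammatical Evolution's genotype-to-phenotype mapping.
# The grammar mirrors Figure 2 / Table 1; the ordering of the <op> and
# <func> rules is inferred from the derivation steps (an assumption).
GRAMMAR = {
    "expr": [["(", "expr", "op", "expr", ")"],
             ["func", "(", "expr", ")"],
             ["terminal"]],
    "terminal": [["xlist"], ["const"]],
    "xlist": [["x1"], ["x2"], ["x3"]],
    "op": [["/"], ["+"], ["-"]],
    "func": [["sin"], ["cos"], ["exp"], ["log"]],
}

def decode(chromosome, start="expr"):
    """Expand the leftmost non-terminal using rule = gene mod R."""
    symbols = [start]
    genes = iter(chromosome)
    while True:
        # find the leftmost non-terminal, if any
        idx = next((i for i, s in enumerate(symbols) if s in GRAMMAR), None)
        if idx is None:
            return "".join(symbols)
        rules = GRAMMAR[symbols[idx]]
        gene = next(genes)               # raises StopIteration if genes run out
        rule = rules[gene % len(rules)]  # the core GE selection step
        symbols[idx:idx + 1] = rule

print(decode([12, 8, 8, 20, 15, 100, 6, 2, 2, 1]))  # -> (x3/exp(x2))
```

In a full Grammatical Evolution run, the chromosome is typically reread from its start (wrapping) when the genes are exhausted; this sketch simply stops in that case.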
The Grammatical Evolution has been used in a variety of problems such as function approximation [35,36], in solving trigonometric equations [37], the automatic composition of music [38], neural network construction [39,40], creating numeric constraints [41], video games [42,43], estimation of energy demand [44], combinatorial optimization [45], cryptography [46], etc. Recent extensions of the Grammatical Evolution procedure include the Structured Grammatical Evolution [47,48], parallel implementations [49,50], the Probabilistic Grammatical Evolution variant [51], the Multi-Objective Grammatical Evolution approach [52], etc.
2.2. Construction of Classification Rules
A basic technique used in the conducted experiments is the construction of classification rules using Grammatical Evolution. This method was initially proposed in [6] and the corresponding software was described in [53]. The main steps of the method are provided below.
1. Initialization Step:
(a) Set $N_c$ as the number of chromosomes that will participate.
(b) Set $N_g$ as the total number of allowed generations.
(c) Produce randomly $N_c$ chromosomes. Each chromosome is considered as a set of integer values representing production rules of the underlying BNF grammar.
(d) Define $p_s$ as the used selection rate, with $p_s \le 1$.
(e) Define $p_m$ as the used mutation rate, with $p_m \le 1$.
(f) Read the train set $TR = \{(x_1, y_1), (x_2, y_2), \ldots, (x_M, y_M)\}$ for the corresponding dataset.
(g) Set iter = 0.
2. Fitness Calculation Step:
(a) For $i = 1, \ldots, N_c$ do
i. Create a classification program $C_i$. As an example of a classification program, consider an expression of the form if (x2 > 1.5) CLASS = 1 else CLASS = 0 (the rule shown here is illustrative).
ii. Compute the fitness value as
$f_i = \sum_{j=1}^{M} \left[ C_i(x_j) \ne y_j \right]$ (1)
i.e., the number of training patterns misclassified by the program $C_i$.
(b) EndFor
3. Genetic Operations Step:
(a) Selection procedure. The chromosomes are sorted initially according to their fitness values. The first $(1 - p_s) \times N_c$ chromosomes with the lowest fitness values are copied to the next generation. The rest of the chromosomes are replaced by offspring produced during the crossover procedure.
(b) Crossover procedure. For every pair of produced offspring, two chromosomes are selected from the current population using tournament selection. The process of tournament selection is as follows: first, create a group of $K > 1$ randomly selected chromosomes from the current population; the individual with the best fitness in the group is selected. These chromosomes produce two offspring using one-point crossover. An example of one-point crossover is shown in Figure 3.
(c) Mutation procedure. In this process, a random number $r \in [0, 1]$ is drawn for every element of each chromosome, and the element is altered randomly if $r \le p_m$.
4. Termination Check Step:
(a) Set iter = iter + 1.
(b) If iter $\ge N_g$, terminate; otherwise, return to the Fitness Calculation Step.
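The steps above can be condensed into a compact sketch. The following code is a minimal, self-contained illustration of the generic genetic loop (elitist selection, tournament selection, one-point crossover, mutation); the fitness function in the usage example is a toy stand-in, not the actual rule evaluation of GenClass, and the parameter defaults are illustrative:

```python
import random

def genetic_algorithm(fitness, n_genes, nc=50, ng=30, ps=0.9, pm=0.05, seed=1):
    """Generic integer-chromosome GA following the steps of Section 2.2.

    fitness: maps a chromosome (list of ints) to a value to be minimized.
    nc, ng, ps, pm: chromosome count, generations, selection and mutation rates.
    """
    rng = random.Random(seed)
    pop = [[rng.randrange(256) for _ in range(n_genes)] for _ in range(nc)]
    for _ in range(ng):
        pop.sort(key=fitness)
        # the best (1 - ps) * nc chromosomes survive unchanged
        elite = pop[: max(1, int((1 - ps) * nc))]
        children = []
        while len(elite) + len(children) < nc:
            # tournament selection of two parents (group size 4)
            p1 = min(rng.sample(pop, 4), key=fitness)
            p2 = min(rng.sample(pop, 4), key=fitness)
            cut = rng.randrange(1, n_genes)          # one-point crossover
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                # mutation: each gene is replaced with probability pm
                children.append([rng.randrange(256) if rng.random() < pm else g
                                 for g in child])
        pop = elite + children[: nc - len(elite)]
    return min(pop, key=fitness)

# Toy usage: evolve a chromosome whose genes are all even numbers.
best = genetic_algorithm(lambda c: sum(g % 2 for g in c), n_genes=8)
```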
2.3. Neural Network Construction
Another technique that exploits the potential of Grammatical Evolution is the neural network construction (NNC) method [7]. This technique can simultaneously construct the optimal structure of an artificial neural network and estimate the values of the network weights, minimizing the training error. The steps used in NNC are listed below.
1. Initialization Step:
(a) Set $N_c$ as the number of chromosomes.
(b) Set $N_g$ as the total number of generations allowed.
(c) Produce randomly $N_c$ chromosomes as a series of production rules expressed in integer format.
(d) Set the selection rate to $p_s \le 1$ and the mutation rate to $p_m \le 1$.
(e) Read the associated train set $TR = \{(x_1, y_1), \ldots, (x_M, y_M)\}$.
(f) Set iter = 0.
2. Fitness Calculation Step:
(a) For $i = 1, \ldots, N_c$ do
i. Construct an artificial neural network $N_i(x)$. The neural networks constructed by this procedure are in the form:
$N(x) = \sum_{k=1}^{H} w_k \, \sigma\!\left( \sum_{j=1}^{d} v_{kj} x_j + b_k \right)$ (2)
where d stands for the dimension of the input dataset and H denotes the number of processing nodes in the neural network. The function $\sigma(x)$ stands for the sigmoid function $\sigma(x) = \frac{1}{1 + \exp(-x)}$.
ii. Compute the corresponding fitness value as
$f_i = \sum_{j=1}^{M} \left( N_i(x_j) - y_j \right)^2$ (3)
(b) EndFor
3. Genetic Operations Step:
(a) Selection procedure. Initially, the chromosomes are sorted according to their associated fitness values. The first $(1 - p_s) \times N_c$ chromosomes with the lowest fitness values are transferred to the next generation without changes. The rest of the chromosomes are replaced by offspring produced during the crossover procedure.
(b) Crossover procedure. For each pair of newly added chromosomes, two parents are selected using tournament selection. The new chromosomes are created using one-point crossover.
(c) Mutation procedure. In this process, a random number $r \in [0, 1]$ is drawn for every element of each chromosome, and the element is altered randomly if $r \le p_m$.
4. Termination Check Step:
(a) Set iter = iter + 1.
(b) If iter $\ge N_g$, terminate; otherwise, return to the Fitness Calculation Step.
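As a concrete illustration of the network form of Equation (2) and the training error of Equation (3), the sketch below evaluates such a network in plain Python. The flat layout of the weight vector (one block [w, v_1, ..., v_d, b] per processing node) is an assumption made for illustration, not the exact encoding used by NNC:

```python
import math

def network(x, weights, H):
    """Evaluate N(x) = sum_k w_k * sigmoid(sum_j v_kj * x_j + b_k), Equation (2).

    `weights` packs, for each of the H processing nodes, the block
    [w_k, v_k1, ..., v_kd, b_k]; the dimension d is inferred from x.
    """
    d = len(x)
    assert len(weights) == H * (d + 2)
    total = 0.0
    for k in range(H):
        block = weights[k * (d + 2):(k + 1) * (d + 2)]
        w, v, b = block[0], block[1:1 + d], block[-1]
        act = sum(vj * xj for vj, xj in zip(v, x)) + b
        total += w / (1.0 + math.exp(-act))   # w * sigmoid(act)
    return total

def train_error(weights, H, data):
    """Squared training error of Equation (3) over pairs (x, y)."""
    return sum((network(x, weights, H) - y) ** 2 for x, y in data)

# One hidden node, one input: weights = [w, v, b] = [2, 0, 0] gives
# N(x) = 2 * sigmoid(0) = 1 for every x.
print(network([3.0], [2.0, 0.0, 0.0], H=1))
```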
2.4. Feature Construction with Grammatical Evolution
Grammatical Evolution was also used as the basis for constructing artificial features from the original ones for classification and regression problems [8]. The artificial features created by this procedure are evaluated using a radial basis function (RBF) network [5]. The RBF network has an extremely fast and efficient training procedure with the incorporation of the K-means [54] method; additionally, RBF networks have been used with success in a variety of problems, such as physics problems [55,56], estimation of solutions for differential equations [57,58], robotics [59], chemistry [60], etc. The procedure of creating artificial features is divided into a series of steps listed below.
1. Initialization Step:
(a) Set the number of chromosomes to $N_c$.
(b) Set the total number of allowed generations to $N_g$.
(c) Produce randomly $N_c$ chromosomes as random sets of integers.
(d) Set the selection rate to $p_s \le 1$ and the mutation rate to $p_m \le 1$.
(e) Set F as the number of artificial features that will be constructed by the procedure.
(f) Read the train set $TR = \{(x_1, y_1), \ldots, (x_M, y_M)\}$.
(g) Set iter = 0.
2. Fitness Calculation Step:
(a) For $i = 1, \ldots, N_c$ do
i. Produce F artificial features from the original ones of the dataset.
ii. The original training set TR is mapped to a new one using the artificial features produced. Denote this new training set as $TR'$.
iii. Train an RBF network $R(x)$ using the set $TR'$.
iv. Compute the fitness value as
$f_i = \sum_{j=1}^{M} \left( R(x'_j) - y_j \right)^2$ (4)
where $x'_j$ denotes the mapped patterns of $TR'$.
(b) EndFor
3. Genetic Operations Step:
(a) Selection procedure. Chromosomes are sorted based on their fitness values. The first $(1 - p_s) \times N_c$ chromosomes will be transferred without changes to the next generation, while the rest will be replaced by chromosomes created in the crossover process.
(b) Crossover procedure. For every pair of produced offspring, two chromosomes are selected using tournament selection. These chromosomes will be the parents of two new offspring created with one-point crossover.
(c) Mutation procedure. A random number $r \in [0, 1]$ is drawn for every element of each chromosome. The corresponding element is altered randomly if $r \le p_m$.
4. Termination Check Step:
(a) Set iter = iter + 1.
(b) If iter $\ge N_g$, terminate; otherwise, return to the Fitness Calculation Step.
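To make the mapping step of the procedure concrete, the sketch below maps a tiny training set through two hypothetical GE-constructed features. The expressions shown are illustrative phenotypes of the kind the grammar can produce, not features actually evolved in this study:

```python
import math

# Hypothetical GE-constructed features (F = 2), expressed as plain functions
# of the original attributes x1..x3; the real method evolves such expressions.
features = [
    lambda x: x[2] / math.exp(x[1]),   # e.g. a phenotype like (x3/exp(x2))
    lambda x: x[0] + x[1],             # e.g. a phenotype like (x1+x2)
]

def map_dataset(TR, features):
    """Map each pattern (x, y) of TR to ((g1(x), ..., gF(x)), y)."""
    return [([g(x) for g in features], y) for x, y in TR]

TR = [([1.0, 0.0, 2.0], 1), ([0.5, 1.0, 1.0], 0)]
TRp = map_dataset(TR, features)   # the mapped set TR' fed to the RBF network
```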
2.5. Statistical Analysis
The statistical analysis consisted of calculating the frequency of occurrence of all entries. Data visualization was achieved using Python libraries capable of generating interactive pie charts served as a web application.
The frequency calculation for each entry was achieved by counting each occurrence of an entry for each question. The following formula was used to calculate the frequency of every answer separately:
$f(x) = \sum_{i=1}^{n} \left[ a_i = x \right]$ (5)
In Equation (5), n is the total number of answers; $a_i$ denotes the i-th given answer to the question; x represents the given answer we want to count; and the bracket $\left[ a_i = x \right]$ evaluates to 1 if $a_i$ is equal to x and to 0 otherwise.
Once every frequency for each answer and each question is calculated, the data are prepared to be visualized with the help of graphing libraries.
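In practice this count reduces to a few lines; the sketch below computes the raw counts and relative shares for a hypothetical list of answers to one question:

```python
from collections import Counter

# Frequencies per question, as in Equation (5): count how often each
# answer x occurs among the n responses, then normalize to shares.
answers = ["Liked it", "Neutral", "Liked it", "Liked it a lot", "Liked it"]
freq = Counter(answers)                               # raw counts
share = {a: c / len(answers) for a, c in freq.items()}  # relative frequencies
```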
2.6. Plotting Libraries
Streamlit is a highly capable open-source framework that allows users to deploy web applications quickly and easily in Python. It is compatible with many modern third-party frameworks and was created for data scientists as well as ML, DL, and Computer Vision engineers. Streamlit proved to be an excellent fit for our analysis, not only for data visualization but also for its interactivity: it allowed us to embed multiple interactive charts into a web application and display the results as we desired.
Plotly is a high-level, low-code graphing library [61] that allows users to create interactive graphs from their data. Plotly is fully supported by Streamlit, allowing users to build professional dashboards driven by their data. Seaborn [62] is a high-level Python data visualization library based on Matplotlib. We used Seaborn to visualize the prediction results produced by the models we evaluated.
3. Datasets and Data Retrieval
The development of our dataset began once we had completed a large portion of the data retrieval. We retrieved data from a total of eight different exhibitions/locations.
As previously mentioned, we received data by interacting with each customer in the tasting exhibitions. The information retrieved included demographic data, alongside graded products and general questions involving the experience customers had and their preferences.
The data retrieval was achieved with the assistance of the members of the AliAmvra project. Google Forms allowed us to build a very simple and fast survey. Members of the team approached each individual after they had tasted some or all of the samples and asked them several questions about their experience with the products.
Prior to processing the dataset with the models, we made sure to clean the data and handle any missing values, in order to avoid possible anomalies and improve model performance. Any sample that was not tasted by a customer was assigned a null value (0). Finally, we converted the data to numerical values so that the models could process them successfully.
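A minimal sketch of this preprocessing is shown below, with hypothetical column names and encodings (the actual survey fields differ):

```python
# Hypothetical encodings: untasted samples (empty answers) become 0, and
# categorical answers are mapped to integer codes for the models.
GRADE = {"": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5}
GENDER = {"Male": 0, "Female": 1}

def clean_row(row):
    """Convert one raw survey row (dict of strings) to numeric values."""
    return {
        "gender": GENDER[row["gender"]],
        "grilled_eel": GRADE[row.get("grilled_eel", "")],  # missing -> 0
        "sardine": GRADE[row.get("sardine", "")],
    }

row = clean_row({"gender": "Female", "grilled_eel": "5"})
```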
Survey Structures
There were two types of surveys used in the exhibitions. The first survey included mostly data for a thorough visualization and statistical analysis. We retrieved a total of 366 entries. The second survey included data appropriate for prediction and recommendation systems. Both surveys included timestamps which allowed us to differentiate the locations of each exhibition that every entry was retrieved from. The structure of the first survey included general questions involving the experience customers had with the products, as well as questions referring to each product, how much they enjoyed it, and what they enjoyed the most about it. The names of the samples introduced in the exhibitions with the first survey are “Beetroot risotto with smoked eel”, “Fried Sardine”, “Fish croquettes with mullet”, and “Sea bream Ceviche with lemon cream and pickled fennel”.
The data retrieval locations described in Table 2 include Neoxori, Arta (06/11/2023), among others.
The structure of the second survey included three demographic questions and five questions involving the grading of the products within the exhibitions. The names of the samples introduced in the exhibition with the second survey are “Grilled Eel”, “Baked Eel”, “Grilled Sea Bream”, “Grilled Chub”, and “Sardine”.
The data retrieval locations described in Table 3 include Psathotopi, Arta (10/15).
The second survey (Figure 3) has a structure equivalent to the first; however, the products presented in the exhibition with the second survey differed from those of the exhibitions of the first survey. Furthermore, the second survey did not include the question, present in the first survey, regarding the reason why a sample was endorsed. The general questions of the second survey focused primarily on retrieving demographic data. Finally, the goal was to observe a correlation between the demographic data and the sample endorsement results.
The first survey contains a total of 366 entries. For the second survey, we retrieved data from only one location, for a total of 39 entries. Due to the limited amount of information provided by the second survey, training neural networks on it can be significantly more difficult.
4. Results
The experiments we conducted utilized data retrieved from both surveys. However, for our data analysis, we focused primarily on the first survey, due to its larger dataset and wider range of questions. For our model review, we directed our attention to the second survey. This decision was driven by the characteristics of the dataset, as outlined in Table 2, which includes demographic data essential for observing correlations.
4.1. Experimental Results
We conducted a thorough experiment by breaking down the structure of the second survey into two different phases. For the first phase, we used the models mentioned previously to predict customer preferences using only demographic data. For the second phase, we had the models predict customer preferences using the demographic data together with the grades of the remaining products as additional input features for each entry.
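The two experimental designs can be illustrated with a short sketch; the field names and encodings below are hypothetical, not the actual survey columns:

```python
# Illustrative construction of the two phases (hypothetical field names).
# Phase 1 predicts a target product's grade from demographics alone; phase 2
# additionally feeds the grades of the remaining products as input features.
def make_features(entry, target, phase):
    demo = [entry["gender"], entry["age"], entry["married"]]
    if phase == 1:
        return demo
    others = [v for k, v in sorted(entry["grades"].items()) if k != target]
    return demo + others

entry = {"gender": 1, "age": 2, "married": 0,
         "grades": {"grilled_eel": 5, "baked_eel": 4, "sardine": 3}}
x1 = make_features(entry, "grilled_eel", phase=1)   # demographics only
x2 = make_features(entry, "grilled_eel", phase=2)   # plus the other grades
```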
The experiments were executed 30 times for all the methods used, and in each experiment a different seed was used for the random number generator. The experiments were executed with the freely available QFc software [63].
Table 5 and Table 6 report, for each dataset and algorithm, the following evaluation results:
1. The column DATASET denotes the tested preference.
2. The column MLP stands for the application of an artificial neural network [64] with H hidden nodes to the dataset. The network is trained using the BFGS variant of Powell [65].
3. The column RBF represents the application of an RBF network with H hidden nodes to the dataset.
4. The column PCA represents the application of an artificial neural network to two new features constructed by the Principal Component Analysis (PCA) method [66].
5. The column GENCLASS refers to the application of the GenClass method, used to construct classification rules through a procedure guided by Grammatical Evolution.
6. The column NNC stands for the application of the neural network construction method analyzed previously.
7. The column FC refers to the application of the feature construction method of Section 2.4 to the dataset, where two artificial features are created.
8. In the experimental tables, an additional row was added with the title AVERAGE. This row contains the average classification error for all datasets.
4.2. Data Visualization
This section includes visualizations of the analysis applied to the exhibition data, using Streamlit [67], Plotly [61], and Seaborn [62]. Both surveys were included; however, as mentioned previously, we focused our attention on the first survey, which was the largest dataset and contained the most information for our analysis.
4.2.1. Prediction Results
In this section, we describe the experimental findings derived from the analysis of both datasets, showcased in Table 5 and Table 6. We utilized box plots to visualize and better understand the prediction results of both experiments.
Figure 4 describes the results retrieved from the first experiment. We can observe a significant imbalance in the overall classification error between the models. FC displays a smaller overall spread and a lower classification error than the other models, meaning that FC performed best in the first phase using only the demographic data. On the other hand, the radial basis function (RBF) network displays the highest classification error and the widest spread, with two visible outliers, making it the least favorable performer in this phase.
In the second experiment (Figure 5), the results display a better distribution among the models compared to the first phase. FC demonstrates superior performance, displaying a more favorable overall range and median classification error than the other models. MLP, RBF, and PCA present a rather suboptimal performance, with the overall range and median classification error much higher compared to the other models; however, NNC appears to perform well in terms of the distribution between each product.
4.2.2. Data Analysis
As mentioned previously, the first survey contained general questions regarding the overall experience the customers had with the samples. This information for the first survey is displayed in Figure 6 and Figure 7.
Figure 6 describes the most endorsed sample and the preferred type of cooking method. Figure 7 is a collection of results containing information on the individual preferences for each sample. In each pair, the left pie chart describes the overall satisfaction of the customers with the sample, and the right pie chart describes the reason for their satisfaction with the sample shown on the left.
The data analysis results of the second survey, shown in Figure 8 and Figure 9, are visualized similarly to the previous pie charts, apart from the way the satisfaction with each sample is presented.
Figure 8 describes the analysis of all the demographic data (gender, age, and marital status) for every customer. Figure 9 describes the overall satisfaction with each sample for every customer.
In our final observation of the first survey results, the information collected and displayed in Table 7 shows that the sample with the highest endorsement percentage was the beet risotto with smoked eel, whereas the least endorsed sample was the fried sardines. Based on these results, the customers focused their attention mainly on taste and appearance rather than on the nutritional value of the sample. While both samples have high nutritional value, sardines contain a high amount of omega-3 fatty acids, which are considered healthy fats. The beet risotto with smoked eel, on the other hand, contains a variety of vitamins, minerals, and antioxidants; however, it may also have an increased fat or calorie content depending on its preparation (added butter or oil).
In Table 7, there is a visible selection bias toward the entries referring to the beet risotto with smoked eel, reflected in its higher total number of entries compared to the other samples.
Table 8 describes the averaged grading of each plate without any constraints. The analysis applied to the second survey does not include enough entries to reliably estimate an overall grade and determine which sample was enjoyed the most.
5. Conclusions
AliAmvra focused on the promotion of the products produced by high-quality catches of the Amvrakikos Gulf. As a research team, we focused on a thorough analysis of the data retrieved from each exhibition. In addition, we developed a recommendation system to determine a person’s preferences for the products displayed in the exhibitions, based on their demographic data.
Each experiment focused on the production of different results. In our model-driven analysis, we found a strong correlation between the customers’ demographic data and their preferences for each product, as can be observed in the plotted results (Figure 4 and Figure 5). We determined that FC performed best compared to the rest of the models. In our data-driven analysis, we were able to observe the overall preferences of the participants for each individual question. Each analysis thus contributed a different conclusion required by the project.
Based on the executed analysis, it can be concluded that the best-fitting algorithm for the development of the AliAmvra project is FC [8]. FC performed best in both phases, whether only the demographic data or all of the data were used to recommend to a customer a product they would likely consider.
6. Future Work
To obtain more accurate results from our model-driven analysis, more data will need to be retrieved using the second survey with the demographic data. This can be achieved with future exhibitions that can take place with the AliAmvra project.
I.G.T., D.M. and J.B. conceived the idea and methodology and supervised the technical part regarding the software. I.G.T. and D.M. conducted the experiments, employing several datasets, and provided the comparative experiments. J.B. and all other authors prepared the manuscript. I.G.T. organized the research team. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
This research has been financed by the European Union: European Fund for Regional Development, under the call RESEARCH INNOVATION PLAN (2014–2020) of Central Macedonia Region, project name “Research and development IoT application for collecting and exploiting big data and create smart hotel” (project code: KMP6-0222906).
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 2. An example grammar to produce expressions in a C-like programming language.
Table 1. An example of the production of a valid expression.
Expression | Selected Value | Next Operation |
---|---|---|
<expr> | 12 | 12 mod 3 = 0 |
(<expr><op><expr>) | 8 | 8 mod 3 = 2 |
(<terminal><op><expr>) | 8 | 8 mod 2 = 0 |
(<xlist><op><expr>) | 20 | 20 mod 3 = 2 |
(x3<op><expr>) | 15 | 15 mod 3 = 0 |
(x3/<expr>) | 100 | 100 mod 3 = 1 |
(x3/<func>(<expr>)) | 6 | 6 mod 4 = 2 |
(x3/exp(<expr>)) | 2 | 2 mod 3 = 2 |
(x3/exp(<terminal>)) | 2 | 2 mod 2 = 0 |
(x3/exp(<xlist>)) | 1 | 1 mod 3 = 1 |
(x3/exp(x2)) |
The first survey structure which includes two grading questions repeated for each sample and two general questions.
General Questions | Answers | Description |
---|---|---|
Which sample did you enjoy the most? | Sample_Name | The name of the sample |
Do you prefer modern or traditional recipes? | Traditional, Modern, Both, None | |
Grading Questions | Answers | Description |
Sample rating | Did not like it, Neutral, Liked it a bit, Liked it, Liked it a lot | How much did the customer enjoy the sample? (Repeated for each sample) |
What did you enjoy the most? | Taste, Cooking method, Appearance | Multiple choice question |
The second survey structure which includes three demographic questions and five sample grading questions.
Demographic Questions | Sample Rows | Description |
---|---|---|
Gender | Male or Female | Gender of each customer |
Age | 15–25, 26–35, 36–45, 46–55, 56–65, 66–75 | Age of each customer |
Marital Status | Married, Not married | The marital status of each customer |
Product Grading Questions | Sample Rows | |
Grilled Eel | 1 = not at all, 5 = a lot | |
Baked Eel | 1 = not at all, 5 = a lot | |
Grilled Sea Bream | 1 = not at all, 5 = a lot | |
Grilled Chub | 1 = not at all, 5 = a lot | |
Sardine | 1 = not at all, 5 = a lot |
Table 4. The values of the experimental parameters.
Parameter | Meaning | Value |
---|---|---|
$N_c$ | Number of chromosomes | 500 |
$N_g$ | Maximum number of allowed generations | 200 |
$p_s$ | Selection rate | 0.90 |
$p_m$ | Mutation rate | 0.05 |
H | Number of processing nodes | 10 |
F | Constructed features (feature construction method) | 2 |
Table 5. Experiments for the first phase: dataset preferences based on demographic data.
Dataset | MLP | RBF | PCA | Genclass | NNC | FC |
---|---|---|---|---|---|---|
Grilled Eel | 25.11% | 27.89% | 24.55% | 18.00% | 21.67% | 22.22% |
Baked Eel | 23.78% | 33.56% | 23.56% | 26.22% | 23.11% | 25.67% |
Grilled Sea Bream | 33.22% | 40.67% | 30.22% | 36.45% | 37.34% | 32.00% |
Grilled Chub | 30.22% | 35.00% | 30.67% | 30.78% | 31.45% | 28.00% |
Sardine | 30.22% | 35.00% | 30.67% | 31.55% | 26.33% | 28.00% |
AVERAGE | 28.51% | 34.45% | 27.84% | 28.60% | 27.98% | 24.87% |
Table 6. Experiments for the second phase: demographic data together with the preferences for the remaining products.
Dataset | MLP | RBF | PCA | Genclass | NNC | FC |
---|---|---|---|---|---|---|
Grilled Eel | 17.11% | 13.22% | 16.78% | 15.67% | 18.22% | 15.00% |
Baked Eel | 28.56% | 27.89% | 31.33% | 22.67% | 23.00% | 20.00% |
Grilled Sea Bream | 25.45% | 26.78% | 25.67% | 24.22% | 20.89% | 19.56% |
Grilled Chub | 20.44% | 17.22% | 19.45% | 16.89% | 16.45% | 14.33% |
Sardine | 14.22% | 20.44% | 16.44% | 19.66% | 16.44% | 16.22% |
AVERAGE | 21.16% | 21.11% | 21.93% | 19.82% | 19.00% | 17.02% |
Table 7. The overall endorsement percentage of each sample.
Sample | Endorsement (%) | Entries |
---|---|---|
Beet risotto with smoked eel | 33.6% | 123 |
Fried sardines | 18.6% | 68 |
Sea bream ceviche with lemon cream and pickled fennel | 22.1% | 81 |
Fish croquettes with mullet | 25.7% | 94 |
The averaged grading of each sample.
Sample | Grading (avg %) | Entries |
---|---|---|
Grilled Eel | 97.714% | 35 |
Baked Eel | 97.793% | 29 |
Grilled Sea Bream | 97.93% | 28 |
Grilled Chub | 96.551% | 29 |
Sardine | 96.111% | 36 |
References
1. Misztal, A. Product improvement on the basis of data analysis concerning customer satisfaction. Knowledge Base for Management—Theory and Practice; University of Žilina: Žilina, Slovakia, 2010; pp. 287-291.
2. Alkerwi, A.; Vernier, C.; Sauvageot, N.; Crichton, G.E.; Elias, M.F. Demographic and socioeconomic disparity in nutrition: Application of a novel Correlated Component Regression approach. BMJ Open; 2015; 5, e006814. [DOI: https://dx.doi.org/10.1136/bmjopen-2014-006814]
3. Mishan, M.; Amir, A.L.; Supir, M.; Kushan, A.; Zulkifli, N.; Rahmat, M. Integrating Business Intelligence and Recommendation Marketplace System for Hawker Using Content Based Filtering. Proceedings of the 2023 4th International Conference on Artificial Intelligence and Data Sciences (AiDAS); Ipoh, Malaysia, 6–7 September 2023; pp. 200-205. [DOI: https://dx.doi.org/10.1109/AiDAS60501.2023.10284691]
4. Nawi, N.M.; Ransing, M.R.; Ransing, R.S. An Improved Learning Algorithm Based on The Broyden-Fletcher-Goldfarb-Shanno (BFGS) Method For Back Propagation Neural Networks. Proceedings of the Sixth International Conference on Intelligent Systems Design and Applications; Jinan, China, 16–18 October 2006; pp. 152-157. [DOI: https://dx.doi.org/10.1109/ISDA.2006.95]
5. Pushpa, C.N.; Patil, A.; Thriveni, J.; Venugopal, K.R.; Patnaik, L.M. Web page recommendations using Radial Basis Neural Network technique. Proceedings of the 2013 IEEE 8th International Conference on Industrial and Information Systems; Peradeniya, Sri Lanka, 17–20 December 2013; pp. 501-506. [DOI: https://dx.doi.org/10.1109/ICIInfS.2013.6732035]
6. Tsoulos, I.G. Creating classification rules using grammatical evolution. Int. J. Comput. Intell. Stud.; 2020; 9, pp. 161-171. [DOI: https://dx.doi.org/10.1504/IJCISTUDIES.2020.106477]
7. Tsoulos, I.; Gavrilis, D.; Glavas, E. Neural network construction and training using grammatical evolution. Neurocomputing; 2008; 72, pp. 269-277. [DOI: https://dx.doi.org/10.1016/j.neucom.2008.01.017]
8. Gavrilis, D.; Tsoulos, I.G.; Dermatas, E. Selecting and constructing features using grammatical evolution. Pattern Recognit. Lett.; 2008; 29, pp. 1358-1365. [DOI: https://dx.doi.org/10.1016/j.patrec.2008.02.007]
9. Zhou, L.; Zhang, C.; Liu, F.; Qiu, Z.; He, Y. Application of Deep Learning in Food: A Review. Compr. Rev. Food Sci. Food Saf.; 2019; 18, pp. 1793-1811. [DOI: https://dx.doi.org/10.1111/1541-4337.12492]
10. Przybył, K.; Gawrysiak-Witulska, M.; Bielska, P.; Rusinek, R.; Gancarz, M.; Dobrzański, B.; Siger, A. Application of Machine Learning to Assess the Quality of Food Products. Case Study: Coffee Bean. Appl. Sci.; 2023; 13, 10786. [DOI: https://dx.doi.org/10.3390/app131910786]
11. Vorage, L.; Wiseman, N.; Graca, J.; Harris, N. The Association of Demographic Characteristics and Food Choice Motives with the Consumption of Functional Foods in Emerging Adults. Nutrients; 2020; 12, 2582. [DOI: https://dx.doi.org/10.3390/nu12092582]
12. Anwar, H.; Anwar, T.; Murtaza, S. Review on food quality assessment using machine learning and electronic nose system. Biosens. Bioelectron. X; 2023; 14, 100365. [DOI: https://dx.doi.org/10.1016/j.biosx.2023.100365]
13. Ru, G.; Crescio, M.; Ingravalle, F.; Maurella, C.; Gregori, D.; Lanera, C.; Azzolina, D.; Lorenzoni, G. et al. Machine Learning Techniques applied in risk assessment related to food safety. EFSA Support. Publ.; 2017; 14, 1254E. [DOI: https://dx.doi.org/10.2903/sp.efsa.2017.EN-1254]
14. Deng, X.; Cao, S.; Horn, A.L. Emerging Applications of Machine Learning in Food Safety. Annu. Rev. Food Sci. Technol.; 2021; 12, pp. 513-538. [DOI: https://dx.doi.org/10.1146/annurev-food-071720-024112]
15. Liu, X.; Ichise, R. Food Sales Prediction with Meteorological Data—A Case Study of a Japanese Chain Supermarket. Data Mining and Big Data, Proceedings of the Second International Conference, DMBD 2017, Fukuoka, Japan, 27 July–1 August 2017; Tan, Y.; Takagi, H.; Shi, Y. Springer: Cham, Switzerland, 2017; pp. 93-104.
16. Tsoumakas, G. A survey of machine learning techniques for food sales prediction. Artif. Intell. Rev.; 2019; 52, pp. 441-447. [DOI: https://dx.doi.org/10.1007/s10462-018-9637-z]
17. Jiménez-Carvelo, A.M.; González-Casado, A.; Bagur-González, M.G.; Cuadros-Rodríguez, L. Alternative data mining/machine learning methods for the analytical evaluation of food quality and authenticity—A review. Food Res. Int.; 2019; 122, pp. 25-39. [DOI: https://dx.doi.org/10.1016/j.foodres.2019.03.063]
18. Han, J.; Li, T.; He, Y.; Gao, Q. Using Machine Learning Approaches for Food Quality Detection. Math. Probl. Eng.; 2022; 2022, 6852022. [DOI: https://dx.doi.org/10.1155/2022/6852022]
19. Sood, S.; Singh, H. Computer vision and machine learning based approaches for food security: A review. Multimed. Tools Appl.; 2021; 80, pp. 27973-27999. [DOI: https://dx.doi.org/10.1007/s11042-021-11036-2]
20. Zhou, Y.; Lentz, E.; Michelson, H.; Kim, C.; Baylis, K. Machine learning for food security: Principles for transparency and usability. Appl. Econ. Perspect. Policy; 2022; 44, pp. 893-910. [DOI: https://dx.doi.org/10.1002/aepp.13214]
21. O’Neill, M.; Ryan, C. Grammatical Evolution. IEEE Trans. Evol. Comput.; 2001; 5, pp. 349-358. [DOI: https://dx.doi.org/10.1109/4235.942529]
22. Holland, J.H. Genetic algorithms. Sci. Am.; 1992; 267, pp. 66-73. [DOI: https://dx.doi.org/10.1038/scientificamerican0792-66]
23. Goldberg, D.E. Genetic Algorithms in Search, Optimization, and Machine Learning; Addison-Wesley: Boston, MA, USA, 1989.
24. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs; Springer: Berlin/Heidelberg, Germany, 1999.
25. Haupt, R.L. An introduction to genetic algorithms for electromagnetics. IEEE Antennas Propag. Mag.; 1995; 37, pp. 7-15. [DOI: https://dx.doi.org/10.1109/74.382334]
26. Grefenstette, J.; Gopal, R.; Rosmaita, B.; Van Gucht, D. Genetic algorithms for the traveling salesman problem. Proceedings of the First International Conference on Genetic Algorithms and Their Applications; Psychology Press: London, UK, 2014; pp. 160-168.
27. Savic, D.A.; Walters, G.A. Genetic algorithms for least-cost design of water distribution networks. J. Water Resour. Plan. Manag.; 1997; 123, pp. 67-77. [DOI: https://dx.doi.org/10.1061/(ASCE)0733-9496(1997)123:2(67)]
28. Leung, F.H.F.; Lam, H.K.; Ling, S.H.; Tam, P.K.S. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Trans. Neural Netw.; 2003; 14, pp. 79-88. [DOI: https://dx.doi.org/10.1109/TNN.2002.804317]
29. Sedki, A.; Ouazar, D.; El Mazoudi, E. Evolving neural network using real coded genetic algorithm for daily rainfall–runoff forecasting. Expert Syst. Appl.; 2009; 36, pp. 4523-4527. [DOI: https://dx.doi.org/10.1016/j.eswa.2008.05.024]
30. Cantú-Paz, E.; Goldberg, D.E. Efficient parallel genetic algorithms: Theory and practice. Comput. Methods Appl. Mech. Eng.; 2000; 186, pp. 221-238. [DOI: https://dx.doi.org/10.1016/S0045-7825(99)00385-0]
31. Liu, Y.Y.; Wang, S. A scalable parallel genetic algorithm for the generalized assignment problem. Parallel Comput.; 2015; 46, pp. 98-119. [DOI: https://dx.doi.org/10.1016/j.parco.2014.04.008]
32. Graham, R.L.; Shipman, G.M.; Barrett, B.W.; Castain, R.H.; Bosilca, G.; Lumsdaine, A. Open MPI: A high-performance, heterogeneous MPI. Proceedings of the 2006 IEEE International Conference on Cluster Computing; Barcelona, Spain, 28 September 2006; pp. 1-9.
33. Dagum, L.; Menon, R. OpenMP: An industry standard API for shared-memory programming. IEEE Comput. Sci. Eng.; 1998; 5, pp. 46-55. [DOI: https://dx.doi.org/10.1109/99.660313]
34. Backus, J.W. The syntax and semantics of the proposed international algebraic language of the Zurich ACM-GAMM Conference. Proceedings of the IFIP Congress; Paris, France, 15–20 June 1959.
35. Ryan, C.; Collins, J.; Neill, M.O. Grammatical evolution: Evolving programs for an arbitrary language. Proceedings of the Genetic Programming; Paris, France; Banzhaf, W.; Poli, R.; Schoenauer, M.; Fogarty, T.C. Springer: Berlin/Heidelberg, Germany, 1998; pp. 83-96.
36. O’Neill, M.; Ryan, C. Evolving Multi-line Compilable C Programs. Proceedings of the Genetic Programming; Göteborg, Sweden, 26–27 May 1999; Poli, R.; Nordin, P.; Langdon, W.B.; Fogarty, T.C. Springer: Berlin/Heidelberg, Germany, 1999; pp. 83-92.
37. Ryan, C.; O’Neill, M.; Collins, J.J. Grammatical Evolution: Solving Trigonometric Identities. Proceedings of the Mendel 1998: 4th International Mendel Conference on Genetic Algorithms, Optimisation Problems, Fuzzy Logic, Neural Networks, Rough Sets; Brno, Czech Republic, 24–26 June 1998; pp. 111-119.
38. Ortega, A.; Alfonso, R.S.; Alfonseca, M. Automatic composition of music by means of grammatical evolution. Proceedings of the APL Conference; Madrid, Spain, 25 July 2002.
39. de Campos, L.M.L.; de Oliveira, R.C.L.; Roisenberg, M. Optimization of neural networks through grammatical evolution and a genetic algorithm. Expert Syst. Appl.; 2016; 56, pp. 368-384. [DOI: https://dx.doi.org/10.1016/j.eswa.2016.03.012]
40. Soltanian, K.; Ebnenasir, A.; Afsharchi, M. Modular Grammatical Evolution for the Generation of Artificial Neural Networks. Evol. Comput.; 2022; 30, pp. 291-327. [DOI: https://dx.doi.org/10.1162/evco_a_00302]
41. Dempsey, I.; O’Neill, M.; Brabazon, A. Constant Creation in Grammatical Evolution. Int. J. Innov. Comput. Appl.; 2007; 1, pp. 23-38. [DOI: https://dx.doi.org/10.1504/IJICA.2007.013399]
42. Galván-López, E.; Swafford, J.M.; O’Neill, M.; Brabazon, A. Evolving a Ms. PacMan Controller Using Grammatical Evolution. Proceedings of the Applications of Evolutionary Computation; Brno, Czech Republic, 12–14 April 2010; Di Chio, C.; Cagnoni, S.; Cotta, C.; Ebner, M.; Ekárt, A.; Esparcia-Alcazar, A.I.; Goh, C.K.; Merelo, J.J.; Neri, F.; Preuß, M. et al. Springer: Berlin/Heidelberg, Germany, 2010; pp. 161-170.
43. Shaker, N.; Nicolau, M.; Yannakakis, G.N.; Togelius, J.; O’Neill, M. Evolving levels for Super Mario Bros using grammatical evolution. Proceedings of the 2012 IEEE Conference on Computational Intelligence and Games (CIG); Granada, Spain, 14 September 2012; pp. 304-311. [DOI: https://dx.doi.org/10.1109/CIG.2012.6374170]
44. Martínez-Rodríguez, D.; Colmenar, J.M.; Hidalgo, J.I.; Villanueva Micó, R.J.; Salcedo-Sanz, S. Particle swarm grammatical evolution for energy demand estimation. Energy Sci. Eng.; 2020; 8, pp. 1068-1079. [DOI: https://dx.doi.org/10.1002/ese3.568]
45. Sabar, N.R.; Ayob, M.; Kendall, G.; Qu, R. Grammatical Evolution Hyper-Heuristic for Combinatorial Optimization Problems. IEEE Trans. Evol. Comput.; 2013; 17, pp. 840-861. [DOI: https://dx.doi.org/10.1109/TEVC.2013.2281527]
46. Ryan, C.; Kshirsagar, M.; Vaidya, G.; Cunningham, A.; Sivaraman, R. Design of a cryptographically secure pseudo-random number generator with grammatical evolution. Sci. Rep.; 2022; 12, 8602. [DOI: https://dx.doi.org/10.1038/s41598-022-11613-x]
47. Lourenço, N.; Pereira, F.B.; Costa, E. Unveiling the properties of structured grammatical evolution. Genet. Program. Evolvable Mach.; 2016; 17, pp. 251-289. [DOI: https://dx.doi.org/10.1007/s10710-015-9262-4]
48. Lourenço, N.; Assunção, F.; Pereira, F.B.; Costa, E.; Machado, P. Structured Grammatical Evolution: A Dynamic Approach. Handbook of Grammatical Evolution; Ryan, C.; O’Neill, M.; Collins, J. Springer: Cham, Switzerland, 2018; pp. 137-161. [DOI: https://dx.doi.org/10.1007/978-3-319-78717-6_6]
49. Russo, I.L.; Bernardino, H.S.; Barbosa, H.J. A massively parallel Grammatical Evolution technique with OpenCL. J. Parallel Distrib. Comput.; 2017; 109, pp. 333-349. [DOI: https://dx.doi.org/10.1016/j.jpdc.2017.06.017]
50. Dufek, A.S.; Augusto, D.A.; Barbosa, H.J.C.; da Silva Dias, P.L. Multi- and Many-Threaded Heterogeneous Parallel Grammatical Evolution. Handbook of Grammatical Evolution; Ryan, C.; O’Neill, M.; Collins, J. Springer: Cham, Switzerland, 2018; pp. 219-244. [DOI: https://dx.doi.org/10.1007/978-3-319-78717-6_9]
51. Mégane, J.; Lourenço, N.; Machado, P. Probabilistic Grammatical Evolution. Genetic Programming, Proceedings of the 24th European Conference, EuroGP 2021, Held as Part of EvoStar 2021, Virtual Event, 7–9 April 2021; Hu, T.; Lourenço, N.; Medvet, E. Springer: Cham, Switzerland, 2021; pp. 198-213.
52. Pereira, P.J.; Cortez, P.; Mendes, R. Multi-objective Grammatical Evolution of Decision Trees for Mobile Marketing user conversion prediction. Expert Syst. Appl.; 2021; 168, 114287. [DOI: https://dx.doi.org/10.1016/j.eswa.2020.114287]
53. Anastasopoulos, N.; Tsoulos, I.G.; Tzallas, A. GenClass: A parallel tool for data classification based on Grammatical Evolution. SoftwareX; 2021; 16, 100830. [DOI: https://dx.doi.org/10.1016/j.softx.2021.100830]
54. MacQueen, J. Some methods for classification and analysis of multivariate observations. Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability; Oakland, CA, USA, 21 June 1967; Volume 1, pp. 281-297.
55. Teng, P. Machine-learning quantum mechanics: Solving quantum mechanics problems using radial basis function networks. Phys. Rev. E; 2018; 98, 033305. [DOI: https://dx.doi.org/10.1103/PhysRevE.98.033305]
56. Jovanović, R.Ž.; Sretenović, A.A. Ensemble of radial basis neural networks with k-means clustering for heating energy consumption prediction. FME Trans.; 2017; 45, pp. 51-57. [DOI: https://dx.doi.org/10.5937/fmet1701051J]
57. Mai-Duy, N. Solving high order ordinary differential equations with radial basis function networks. Int. J. Numer. Methods Eng.; 2005; 62, pp. 824-852. [DOI: https://dx.doi.org/10.1002/nme.1220]
58. Sarra, S.A. Adaptive radial basis function methods for time dependent partial differential equations. Appl. Numer. Math.; 2005; 54, pp. 79-94. [DOI: https://dx.doi.org/10.1016/j.apnum.2004.07.004]
59. Vijay, M.; Jena, D. Backstepping terminal sliding mode control of robot manipulator using radial basis functional neural networks. Comput. Electr. Eng.; 2018; 67, pp. 690-707. [DOI: https://dx.doi.org/10.1016/j.compeleceng.2017.11.007]
60. Shankar, V.; Wright, G.B.; Fogelson, A.L.; Kirby, R.M. A radial basis function (RBF) finite difference method for the simulation of reaction–diffusion equations on stationary platelets within the augmented forcing method. Int. J. Numer. Methods Fluids; 2014; 75, pp. 1-22. [DOI: https://dx.doi.org/10.1002/fld.3880]
61. Plotly Technologies Inc. Collaborative Data Science; Plotly: Montreal, QC, Canada, 2015. Available online: https://plotly.com/ (accessed on 1 February 2024).
62. Waskom, M.L. Seaborn: Statistical data visualization. J. Open Source Softw.; 2021; 6, 3021. [DOI: https://dx.doi.org/10.21105/joss.03021]
63. Tsoulos, I.G. QFC: A Parallel Software Tool for Feature Construction, Based on Grammatical Evolution. Algorithms; 2022; 15, 295. [DOI: https://dx.doi.org/10.3390/a15080295]
64. Murtagh, F. Multilayer perceptrons for classification and regression. Neurocomputing; 1991; 2, pp. 183-197. [DOI: https://dx.doi.org/10.1016/0925-2312(91)90023-5]
65. Powell, M. A tolerant algorithm for linearly constrained optimization calculations. Math. Program.; 1989; 45, pp. 547-566. [DOI: https://dx.doi.org/10.1007/BF01589118]
66. Erkmen, B.; Yıldırım, T. Improving classification performance of sonar targets by applying general regression neural network with PCA. Expert Syst. Appl.; 2008; 35, pp. 472-475. [DOI: https://dx.doi.org/10.1016/j.eswa.2007.07.021]
67. Streamlit • A Faster Way to Build and Share Data Apps. Available online: https://streamlit.io/ (accessed on 1 February 2024).
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
AliAmvra is a project developed to explore and promote high-quality catches of the Amvrakikos Gulf to the wider regions of Arta. In addition, the project aimed to implement an integrated plan of action to form a business identity with high added value and to achieve integrated business services adapted to the special characteristics of the area. The action plan for this project was to actively search for new markets, create a collective identity for the products, promote their quality and added value, participate in gastronomic and tasting exhibitions, carry out dissemination and publicity actions, and enhance the quality of the products and markets based on customer needs. The primary focus of this study is to observe and analyze the data retrieved from various tasting exhibitions of the AliAmvra project, with the goal of improving customer experience and product quality. An extensive analysis was conducted by collecting data through surveys carried out at the gastronomic events of the AliAmvra project. Our objective was to conduct two types of reviews, one focused on data analysis and the other on evaluating model-driven algorithms; each review utilized a survey with its own structure, serving a different purpose. In addition, our model review focused on developing a robust recommendation system with said data. The algorithms we evaluated were MLP (multi-layer perceptron), RBF (radial basis function), GenClass, NNC (neural network construction), and FC (feature construction), which were used for the implementation of the recommendation system. As our final verdict, we determined that FC (feature construction) performed best, presenting the lowest average classification rate of 24.87%, whilst the algorithm that performed worst on average was RBF (radial basis function). Our final objective was to showcase and expand the work put into the AliAmvra project through this analysis.