Full Text

Copyright © 2022 Shengchong Man and Zepeng Li. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/

Abstract

With the advent of the information age, language is no longer the only way to construct meaning. Besides language, a variety of social semiotic resources, such as gestures, images, music, and three-dimensional animation, are increasingly involved in the social practice of meaning construction. Traditional single-modal sentiment analysis methods draw on only one form of expression and cannot fully exploit information from multiple modalities, resulting in low sentiment classification accuracy. Deep learning can automatically mine emotional states in images, texts, and videos and can effectively combine information from multiple modalities. The book Reading Images proposes the first systematic and comprehensive framework for visual grammatical analysis and discusses the expression of image meaning from the perspectives of representational, interactive, and compositional meaning, corresponding to the three metafunctions in Halliday’s systemic functional grammar. Films have traditionally been discussed from the macro perspectives of literary criticism, film criticism, psychology, and aesthetics; multimodal analysis theory instead provides film researchers with a set of methods for analyzing images, music, and words simultaneously. In view of these considerations, this paper adopts the perspective of social semiotics and, drawing on Halliday’s systemic functional linguistics, Kress and van Leeuwen’s “visual grammar,” and appraisal theory, builds a multimodal interaction model as a tool for analyzing film discourse.
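The abstract’s claim that deep learning can “effectively combine information from multiple modalities” is typically realized by fusing per-modality feature vectors before classification. Below is a minimal late-fusion sketch in PyTorch; it is not taken from the paper, and the module name LateFusionSentiment, the feature dimensions (768 for a BERT-style text encoder, 2048 for a ResNet-style image encoder), and the three sentiment classes are illustrative assumptions.

import torch
import torch.nn as nn

class LateFusionSentiment(nn.Module):
    """Fuse pre-extracted text and image features, then classify sentiment.
    Hypothetical sketch; the paper's actual architecture is not given in this record."""

    def __init__(self, text_dim=768, image_dim=2048, hidden=256, n_classes=3):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)    # project text features to a shared size
        self.image_proj = nn.Linear(image_dim, hidden)  # project image features to a shared size
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, n_classes),           # joint representation -> sentiment logits
        )

    def forward(self, text_feat, image_feat):
        # Late fusion: concatenate the two projected modality vectors.
        fused = torch.cat([self.text_proj(text_feat),
                           self.image_proj(image_feat)], dim=-1)
        return self.classifier(fused)

# Usage with stand-in features (e.g., a BERT [CLS] vector and a ResNet pooled vector):
model = LateFusionSentiment()
text_feat = torch.randn(4, 768)    # batch of 4 text embeddings
image_feat = torch.randn(4, 2048)  # batch of 4 image embeddings
logits = model(text_feat, image_feat)
print(logits.shape)                # torch.Size([4, 3]): negative / neutral / positive

Concatenation is the simplest fusion choice; attention-based or tensor-fusion schemes are common alternatives when interactions between modalities matter.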

Details

Title
Multimodal Discourse Analysis of Interactive Environment of Film Discourse Based on Deep Learning
Author
Man, Shengchong 1; Li, Zepeng 2

1 School of Animation and Digital Arts, Communication University of Zhejiang, Hangzhou 310018, China
2 College of Film, Shanghai Theatre Academy, Shanghai 201112, China; Seoul School of Integrated Sciences and Technologies, Seoul 03767, Republic of Korea
Editor
Zhao Kaifa
Publication year
2022
Publication date
2022
Publisher
John Wiley & Sons, Inc.
ISSN
1687-9805
e-ISSN
1687-9813
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2712661126