
Copyright © 2021 Lingyi Zhu. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/

Abstract

In recent years, economic globalization has become the prevailing trend and communication between countries has grown ever closer, so more and more people are paying attention to learning spoken English. The development of computer-assisted language learning has made learning spoken English more convenient; however, the detection and correction of English pronunciation errors, which lie at its core, remain inadequate. In this paper, we propose a multimodal end-to-end English pronunciation error detection and correction model based on audio and video. The model does not require forced phoneme alignment of the English pronunciation video signal being processed, and it exploits rich audio and video features for pronunciation error detection, which substantially improves detection accuracy, especially in noisy environments. To address the shortcomings of current lip feature extraction algorithms, which are overly complicated and lack sufficient characterization ability, a feature extraction scheme based on the lip opening and closing angle is proposed. Lip syllable frames are obtained by splitting the video into frames, the syllables are denoised, the key point information of the lips is obtained using a gradient-boosting-based regression tree algorithm, the effects of speaker tilt and movement are removed by scale normalization, and finally the lip opening and closing angles are calculated using mathematical geometry, with the lip feature values generated by combining the angle changes.
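The geometric step of the scheme described in the abstract — computing lip opening and closing angles from detected key points, with scale normalization to cancel speaker movement — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the four key points used here (the two mouth corners plus the upper- and lower-lip midpoints) are assumptions; the actual detector described in the abstract returns a denser set of lip landmarks via gradient-boosted regression trees.

```python
import math

def lip_opening_angle(corner, top, bottom):
    """Angle (degrees) at a mouth corner between the vectors pointing to
    the upper- and lower-lip midpoints. A wider mouth opening yields a
    larger angle; angles are invariant to uniform scaling of the points."""
    v1 = (top[0] - corner[0], top[1] - corner[1])
    v2 = (bottom[0] - corner[0], bottom[1] - corner[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_theta = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(cos_theta))

def lip_features(left, right, top, bottom):
    """Scale-normalized lip features for one video frame.
    Dividing the vertical opening by the mouth width removes the effect
    of the speaker moving toward or away from the camera; using angles
    rather than raw distances further reduces sensitivity to head tilt."""
    width = math.hypot(right[0] - left[0], right[1] - left[1])
    opening = math.hypot(bottom[0] - top[0], bottom[1] - top[1]) / width
    return {
        "left_angle": lip_opening_angle(left, top, bottom),
        "right_angle": lip_opening_angle(right, top, bottom),
        "normalized_opening": opening,
    }
```

For a symmetric open mouth with corners at (0, 0) and (4, 0) and lip midpoints at (2, -1) and (2, 1), both corner angles come out to about 53.13° and the normalized opening to 0.5; doubling every coordinate leaves all three features unchanged, which is the point of the normalization. Per-frame feature vectors like this would then be differenced across frames to capture the angle changes the abstract combines into the final lip feature values.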

Details

Title
English Pronunciation Standards Based on Multimodal Acoustic Sensors
Author
Zhu, Lingyi 1

 School of Foreign Languages, Xinyang Agriculture and Forestry University, Xinyang, Henan 464000, China; Office of International Exchange & Cooperation, Xinyang Agriculture and Forestry University, Xinyang, Henan 464000, China 
Editor
Guolong Shi
Publication year
2021
Publication date
2021
Publisher
John Wiley & Sons, Inc.
ISSN
1687-725X
e-ISSN
1687-7268
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2576545745