1. Introduction
Vocal music teaching is perceptual rather than intuitive. The correct singing method is mainly acquired through training, and that training must be carried out within limited class time, which requires the teacher's language to be refined and accurate. Teachers should be able to grasp the main contradictions in teaching, reflect its focus, diagnose problems in the voice, prescribe the right remedy, and make complex theories simple and clear. Prompts such as "sink the breath," "drop the jaw," "lift the laughing muscle," "lower the larynx," and "stick to the pharyngeal wall" are common [1]. However, such overly specialized teaching language often leaves students at a loss. In addition, teachers often lecture on a single issue in isolation, obscure the content of the explanation, or discuss vocal theory only in generalities. This kind of teaching overemphasizes the teacher's own experience and ignores students' cognitive ability, which often leaves students baffled and confused [2]. The expression of vocal teaching language should be logical and grounded in lively, easy-to-understand terms. Refined and accurate teaching language comes both from the teacher's objective, keen hearing and from the teacher's subjective teaching experience; it is a basic teaching skill and the key to vocal music teaching. Teachers should therefore constantly expand their knowledge and enrich their teaching language so as to turn complexity into simplicity, abstraction into concreteness, and vagueness into vividness [3].
Vocal music teaching should follow the logical system of the subject and set academic norms for its courses. On the basis of a full understanding of the teaching objects, clear teaching content is formulated according to step-by-step teaching objectives, letting students develop their own initiative on the premise of systematically mastering basic knowledge and skills [4]. In the actual teaching process, individual differences must be taken into account so as to promote the fullest development of each individual. Tiered education in vocal music teaching rests on acknowledging differences among teaching objects and in students' professional levels, and especially on acknowledging that teaching requirements and progress may differ even within the same grade, which distinguishes vocal music teaching from other areas of teaching [5]. Teachers should fully mobilize students' initiative and enthusiasm in their own development and be good at guiding students to use their abilities and will for creative learning [6]. Students' basic training cannot be neglected, and blind advancement and unprincipled acceleration must be avoided. Teaching must be based on the systematic nature of vocal music instruction, attend to the logical organization of teaching materials, reflect the value of vocal training, and attend to the internal connections between stages [7]. Vocal music teaching must follow the laws of educational practice, grasp its principles, formulate clear teaching guidelines and professional specifications based on objective, practical conditions, and form a scientific, reasonable, and standardized system [8].
Vocal music teaching should respect the principles of working within students' capacity and progressing step by step, and it should conduct professional training and theoretical teaching steadily. At the same time, teaching should stay just ahead of students' development. Vocal music teaching should adopt an assessment principle that grades students from low to high; scientific, effective teaching steps and a reasonable pace allow teaching to achieve the expected effect [9]. The concepts of seeking truth from facts and of tiered education likewise cannot be ignored, which makes it harder to formulate teaching standards. For a long time, artistic standards have been seen as relative and comparative, with no absolutely unified measure [10]. In vocal art especially, it seems impossible to judge right and wrong by fixed standards; as the saying goes, each observer finds in it what they bring to it. Long-standing artistic standards have therefore been formulated broadly and vaguely. Although in teaching we can reach a shared understanding of standards for vocal condition, accurate intonation, beautiful timbre, deep breathing, clear articulation, and students' musical perception and receptivity, the standards still vary from person to person because teachers differ in quality and aesthetics [11]. Tiered education gives vocal music teaching clearer standards and specifications of progress and forms a system that can guide students to learn step by step. Such teaching objectives are clearer and more pertinent, and the determination of teaching content, the selection of teaching methods, and the organization of teaching activities become more meaningful [12].
Learning vocal music begins with the imitation of sound, and personalization is an important feature of vocal music teaching. Vocal techniques are easy to imitate but hard to describe. Although the theory of a discipline requires logical, normative language, the individual imitation process can rely on another, auxiliary set of languages [13]. In practice, vocal music teaching has always involved body language: each teacher has a set of habitual gestures that popularly express singing methods, and most students absorb and remember these gestures through unconscious imitation [14]. For example, when asking a student to open the inner mouth and lower the larynx, the teacher will accompany the explanation with suggestive cues such as "yawn," "sigh down," and "suck and sing," together with corresponding mouth movements and gestures. These habitual mouth shapes, gestures, and body language intuitively prompt, guide, and emphasize. Although they cannot stand alone, they render a singing state in an auxiliary way and are an indispensable teaching method [15]. Teachers are accustomed to transforming a set of words into a set of images. Traditional vocal music teaching could not turn familiar words into pictures, but modern science and technology provide new conditions: television and video bring visual images into teaching [16]. When students watch an opera video, the pursuit of sound quality is no longer a set of elusive verbal descriptions; a real timbre is displayed visually, and students often associate an imagined sound with a specific physical movement of a famous singer. Such powerful episodic memory is beyond any traditional written or oral language.
Video speaks directly to the student and puts vocal imitation on the same footing as instrumental imitation. This is a material condition that vocal music teaching can fully exploit in the modern media environment [17]. The progress of art history can, of course, be guided by interpretations produced by theoretical change, and the rapid development of science and technology has provided new media for expression. Ancient arts that lack a systematic language of expression may have more room to show that their meaning is not yet exhausted than disciplines bound by a traditional set of terms that limits their scope of thinking [18].
This paper uses the computer visualization sound parameter analysis method to study the effect of vocal music teaching and constructs an intelligent vocal music teaching system to improve the quality of vocal music teaching.
2. Computer Visualized Sound Parameter Analysis Method
2.1. Perceptual Domain Audio Theory
As shown in Figure 1, the encoder generally divides the audio signal into approximately stationary frames of 20–50 ms, and a time-frequency analysis module then computes the spectrum of each frame. The role of the psychoacoustic model analysis is to derive information from this time-frequency analysis: for example, it obtains the allowable quantization error of each frequency band from the frequency-domain masking effect and determines the required time-domain and frequency-domain resolution from the stationarity of the signal.
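As a minimal sketch of the framing step described above (frame and hop lengths here are illustrative choices within the stated 20–50 ms range, not values from the paper):

```python
import numpy as np

def frame_signal(x, fs, frame_ms=30, hop_ms=15):
    """Split a 1-D audio signal into overlapping short frames.

    frame_ms is chosen in the 20-50 ms range over which audio is
    assumed approximately stationary; these values are illustrative.
    """
    frame_len = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

# example: 1 second of audio at 16 kHz -> 30 ms frames with 50% overlap
x = np.random.randn(16000)
frames = frame_signal(x, fs=16000)
print(frames.shape)  # (65, 480)
```

Each row of the resulting matrix is one analysis frame ready for the time-frequency analysis module.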
[figure(s) omitted; refer to PDF]
In this paper, the limit of "transparent" encoding is determined from perceptual audio theory, and a simplified algorithm for a specific implementation is given. First, this paper introduces the concept of sound pressure level (SPL), defined as SPL = 20 log10(p/p0) dB, where p is the pressure generated by the sound source, measured in pascals, and p0 = 20 μPa is the standard reference pressure.
The absolute hearing threshold is the minimum sound pressure level at which the human ear can perceive a single-frequency tone in a noise-free environment. By definition, this threshold varies with frequency and is approximated as
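Since the paper's exact expression is omitted here, the frequency dependence of this threshold can be sketched with Terhardt's widely used approximation (the form commonly adopted in MPEG psychoacoustic models; treat it as an assumption about what the omitted formula looks like):

```python
import numpy as np

def absolute_threshold_db(f_hz):
    """Terhardt's approximation of the absolute threshold of hearing
    (dB SPL) as a function of frequency."""
    f = np.asarray(f_hz, dtype=float) / 1000.0  # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# the ear is most sensitive around 3-4 kHz, where the threshold dips below 0 dB SPL
print(absolute_threshold_db(1000.0))
print(absolute_threshold_db(3500.0))
```

Frequency components falling below this curve need not be encoded at all, which is the starting point for "transparent" coding.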
To facilitate the analysis, and considering the roughly exponential sensitivity of the human ear to frequency, the following concept redistributes the 20 Hz to 20 kHz range perceivable by the human ear into Bark units:
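One common form of this nonlinear frequency-to-Bark warping is Zwicker's mapping, sketched below (an assumption: the paper may use a slightly different fitted expression):

```python
import math

def hz_to_bark(f_hz):
    """Zwicker's frequency-to-Bark mapping: the audible range
    20 Hz - 20 kHz collapses onto roughly 25 critical bands."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

for f in (100, 1000, 10000, 20000):
    print(f, round(hz_to_bark(f), 2))
```

Equal steps on the Bark axis correspond to narrow bands at low frequencies and wide bands at high frequencies, which is exactly the resolution trade-off noted next.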
A notable consequence of this division is that the low-frequency region has higher frequency-domain resolution but narrower bands, while the high-frequency region has wider bands and lower frequency-domain resolution. This paper proposes a modification to the nonlinear expression in formula (3) and obtains a form with an easily computed inverse function:
Inter-band masking differs from the three masking effects mentioned above in that it acts across frequency bands rather than within a single band. In perceptual-domain analysis, this predictable masking of adjacent bands is usually represented by a spreading (convolution) function between bands:
Since the convolution runs across frequency bands, the argument x is measured in Bark and SF in dB.
The usual masking-threshold calculation first convolves the energies of the frequency bands, then classifies the content of each band into tonal and noise components according to its spectral character, and computes the noise-masking-tone and tone-masking-noise thresholds.
In the above formula,
[figure(s) omitted; refer to PDF]
For frames in which an impulsive signal is detected, differential predictive coding of the frequency-domain parameters, similar to Temporal Noise Shaping (TNS), can be used. The principle of this "pre-echo" suppression algorithm is shown in Figure 3.
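A simplified stand-in for this TNS-like scheme is first-order differential prediction across the frequency-domain coefficients: only the prediction residual is coded, and the decoder recovers the coefficients by accumulation (a sketch of the idea, not the paper's exact predictor):

```python
import numpy as np

def diff_encode(coeffs):
    """Predict each frequency coefficient from its lower-frequency
    neighbor and keep only the residual."""
    residual = np.empty_like(coeffs)
    residual[0] = coeffs[0]
    residual[1:] = coeffs[1:] - coeffs[:-1]
    return residual

def diff_decode(residual):
    """Invert the prediction by cumulative summation."""
    return np.cumsum(residual)

c = np.array([4.0, 4.5, 4.2, 3.9, 4.1])
assert np.allclose(diff_decode(diff_encode(c)), c)  # lossless round trip
```

For a transient frame the spectral envelope is smooth, so the residual has much smaller dynamic range than the raw coefficients and is cheaper to quantize.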
[figure(s) omitted; refer to PDF]
The number of bits required for encoding is
In the above formula, ki is the number of frequency parameters in the band, and nint is the nearest-integer rounding operator.
Performing the frequency-domain analysis with an FFT is straightforward. Normalization is needed because absolute silence is referenced to 0 dB at 4 kHz. The maximum harmonic energy representable by 16-bit PCM can then be calculated as
With 8-bit PCM, however, it is only about 45 dB. Normalization means that, for the encoder to suit PCM of any bit depth, the following processing must be done:
In the above formula, N is the FFT window length, b is the number of PCM bits (assumed to be 16 here), and s(n) is the sampled source signal. The maximum frequency-domain energy of x(n) is 0 dB, and x(n) is the sequence on which frequency-domain analysis and processing are performed. A 512-point FFT is then taken with a Hanning window overlapped by 1/16, defined as follows:
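The normalization and windowed power-spectrum computation just described can be sketched as follows. The offset PN = 90.302 dB is the value used in MPEG psychoacoustic model 1; since the paper omits its exact constant, treat that number as an assumption:

```python
import numpy as np

def power_spectrum_db(s, b=16, N=512, PN=90.302):
    """Normalize b-bit PCM samples by N * 2^(b-1), apply a Hanning
    window, and return the power spectrum in dB offset by PN."""
    s = np.asarray(s, dtype=float)
    x = s / (N * 2.0 ** (b - 1))      # normalization: full scale -> 0 dB
    w = np.hanning(N)                  # Hanning analysis window
    X = np.fft.rfft(w * x[:N])
    return PN + 10.0 * np.log10(np.abs(X) ** 2 + 1e-30)

# full-scale 16-bit sine placed exactly on bin 32 of a 512-point FFT
n = np.arange(512)
s = (2 ** 15 - 1) * np.sin(2 * np.pi * 32 * n / 512)
P = power_spectrum_db(s)
print(P.argmax())  # 32
```

With this choice the spectrum P(k) of any b-bit input lands in roughly the 0–90 dB range, matching the dynamic range of hearing.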
This normalization, together with the choice of the parameter PN, confines P(k), obtained from the FFT of the audio signal, to 0–90 dB. The next job is simply to distinguish tonal components from noise: if a local maximum of the spectrum exceeds the frequency energies within a given neighborhood by more than 7 dB, it is regarded as a tonal component; otherwise, it is noise. In mathematical language, it is
The masking source of a tonal component is taken as the energy superposition of the 3 adjacent frequency-domain parameter values:
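The tonal-component selection and the 3-bin energy superposition just described can be sketched together as follows (the neighborhood width and the exact decision rule are assumptions, modeled on MPEG psychoacoustic model 1):

```python
import numpy as np

def tonal_maskers(P_db, neighbourhood=2):
    """Pick spectral bins that are local maxima and exceed their
    non-adjacent neighbours by more than 7 dB, then form each
    masker's level from the energy of the bin plus its two
    adjacent bins."""
    maskers = {}
    for k in range(neighbourhood, len(P_db) - neighbourhood):
        neigh = [P_db[k + d] for d in range(-neighbourhood, neighbourhood + 1)
                 if d not in (-1, 0, 1)]
        if (P_db[k] > P_db[k - 1] and P_db[k] >= P_db[k + 1]
                and all(P_db[k] - v > 7.0 for v in neigh)):
            # energy superposition of the 3 adjacent parameter values
            energy = sum(10.0 ** (P_db[k + d] / 10.0) for d in (-1, 0, 1))
            maskers[k] = 10.0 * np.log10(energy)
    return maskers

P = np.full(64, 20.0)
P[30] = 60.0   # one clear tonal peak in a flat noise floor
print(tonal_maskers(P))
```

Everything not absorbed into a tonal masker is then treated as noise within its band.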
The remaining spectral energy, apart from the tonal masking sources, is merged within each band to form the noise masking sources.
After classifying and merging the masking sources, the next thing to consider is the convolution in the frequency domain, that is, the mutual masking between frequency bands. The equivalent mathematical expressions are given here, and the necessary frequency-domain linear to frequency band (Bark units) conversions have been made.
In the above formula, P_Tonal,Noise is the energy of the masking source obtained in the preceding steps, z(j) is the function that converts the linear frequency-domain index j into a band on the Bark scale, and SF is the inter-band convolution kernel.
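One standard choice for the inter-band kernel SF is Schroeder's spreading function, sketched here as a function of the Bark distance between masker and maskee (an assumption, since the paper's exact kernel is omitted):

```python
import math

def spreading_db(dz):
    """Schroeder's spreading function (dB) versus Bark distance dz.

    Masking spreads asymmetrically: the slope towards higher bands
    (dz > 0) is much shallower than towards lower bands (dz < 0).
    """
    return 15.81 + 7.5 * (dz + 0.474) - 17.5 * math.sqrt(1.0 + (dz + 0.474) ** 2)

print(round(spreading_db(0.0), 2))                       # ~0 dB at the masker itself
print(round(spreading_db(2.0), 2), round(spreading_db(-2.0), 2))
```

Convolving the masker energies with this kernel along the Bark axis yields the cross-band masking contribution that the formula above expresses.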
In this regard, on the basis of all the obtained individual masking thresholds, together with the absolute hearing threshold, the global masking threshold is determined.
2.2. Bit Allocation, Quantization, and Huffman Coding
The purpose of the bit allocation module is to use the global masking threshold obtained above to compute the signal-to-mask ratio, SMR (Signal to Mask Ratio), and then determine how many bits are required for quantization according to NMR(m) = SMR − SNR(m) (dB), where m is the number of allocated bits, NMR is the noise-to-mask ratio, and SNR is the signal-to-noise ratio, all in dB. This relationship is illustrated in Figure 4.
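A common way to use this relation is a greedy loop that repeatedly gives one more bit to the band with the worst (largest) NMR, approximating SNR(m) by about 6.02 dB per bit. The sketch below illustrates the idea; the function name, the per-band bit cap, and the 6.02 dB/bit approximation are assumptions, not details from the paper:

```python
def allocate_bits(smr_db, total_bits, max_bits_per_band=16):
    """Greedy bit allocation: each iteration adds one bit to the band
    whose NMR(m) = SMR - SNR(m) is currently worst, with
    SNR(m) ~ 6.02*m dB."""
    bits = [0] * len(smr_db)

    def nmr(i):
        return smr_db[i] - 6.02 * bits[i]

    for _ in range(total_bits):
        i = max(range(len(smr_db)), key=nmr)
        if bits[i] >= max_bits_per_band:
            break
        bits[i] += 1
    return bits

# three bands with very different signal-to-mask ratios
print(allocate_bits([30.0, 12.0, 3.0], total_bits=8))  # [5, 2, 1]
```

Bands whose noise already falls below the mask (negative NMR) stop attracting bits, which is exactly the perceptual goal.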
[figure(s) omitted; refer to PDF]
The task of the "Rate/Distortion Control" module shown in Figure 5 is to allocate the available bit rate to the currently encoded audio frame so that the distortion produced by quantizing the frequency-domain parameters meets the sound-quality requirements of the encoding.
[figure(s) omitted; refer to PDF]
2.3. Research on Parametric Audio Coding
Based on the short-term periodicity and short-term stationarity of speech signals, an analysis-synthesis model based on harmonic decomposition is proposed. The block diagram of parametric audio coding in ASAC is given as a prototype of parametric-waveform hybrid coding, as shown in Figure 6.
[figure(s) omitted; refer to PDF]
A common modeling of speech is to treat speech as the output of an excitation passing through a time-varying filter, where the excitation is expressed as
That is, it is a linear combination of harmonics of different amplitudes, frequencies, and phases. The speech signal s(t) is then represented by the following convolution:
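The harmonic excitation, a linear combination of sinusoids with individual amplitudes, frequencies, and phases, can be sketched directly (parameter names are illustrative):

```python
import numpy as np

def synth_harmonics(amps, freqs_hz, phases, fs, n_samples):
    """Synthesize a sum of sinusoids with given amplitudes,
    frequencies (Hz), and phases (rad) at sample rate fs."""
    t = np.arange(n_samples) / fs
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs_hz, phases))

# a 100 Hz fundamental with two decaying overtones
e = synth_harmonics([1.0, 0.5, 0.25], [100.0, 200.0, 300.0],
                    [0.0, 0.0, 0.0], fs=8000, n_samples=800)
print(e.shape, round(float(e[0]), 2))  # (800,) 1.75
```

Passing such an excitation through a time-varying filter, as the text describes next, completes the source-filter model of speech.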
If the description of the time-varying filter in the frequency domain is
By merging,
After adding some assumptions about the short-term stability of the signal to this expression, that is, the part
In the above formula,
2.4. Two Block Diagrams of Parametric Coding STN and an Overview of HILN
The harmonic + impulse + noise model divides the signal into 3 distinct parts for coding. Following the psychoacoustic model, this paper analyzes whether the impulsive part of the residual signal can be masked, adjusts the time-frequency resolution of the harmonic analysis accordingly until the harmonic-analysis loop terminates, and outputs the harmonic parameters together with the band-energy coding parameters of the noise part, as shown in Figure 7.
[figure(s) omitted; refer to PDF]
According to the signal segmentation of Figure 7 and the analysis/synthesis loop of ASAC, another STN algorithm block diagram can be constructed. Using a Levine-style time-frequency division of the three signal parts, the upper frequency limit is bounded in the harmonic-extraction loop, and the residual signal is energy-envelope encoded. The final output is the encoding of a "shock" (transient) frame + the harmonic parameters of the stationary frames + the noise envelope, as shown in Figure 8. The parametric audio hybrid encoder adopted in this paper is shown in Figure 9.
[figure(s) omitted; refer to PDF]
2.5. Recursive Selection of Frequency Domain Coefficients
The quantitative mathematical formula is
A set of methods for estimating the probability that a difference is perceptible is established. First, reference points are evenly placed on the Bark scale to define the global difference-detection probability:
In order to minimize the recognition probability, that is, to achieve a sound quality effect that is “transparent” to the human ear, we only need to examine the
Figure 10 shows the block diagram of minimizing
[figure(s) omitted; refer to PDF]
2.6. Shock Detection Algorithm
Impulsive signals in the time domain challenge the performance of all transform coders because they violate the stationary-harmonic assumption yet differ from simple noise. The general shock detection steps are shown in Figure 11.
[figure(s) omitted; refer to PDF]
From simple reasoning, an impulsive signal is a burst of energy accumulation in the time domain, which suggests an obvious detection approach: low-pass filtering.
The window function acts as a low-pass filter, which is effective for detecting shock signals against a silent background; energy is used instead of the absolute value:
We assume that
In the above formula, Wk is a weight. Generally, the musical energy of a signal is concentrated in the low-frequency part, so sharp changes between short frames in the high-frequency part indicate a shock. The detection function should therefore amplify the high-frequency contribution; that is, a weighting that increases with k, like Wk, is adopted. It is worth noting that
If H(x) = (x + |x|)/2, then the detector so defined is sensitive only to shocks of rising energy, not to the "negative shock" of a sudden energy drop. The function defined to be continuous with
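The role of the half-wave rectifier H(x) can be shown with a minimal frame-energy onset detector (the detection threshold and the frame energies below are illustrative assumptions):

```python
def detect_shocks(energies, ratio=4.0):
    """Flag frames whose energy jumps sharply above the previous
    frame. H(x) = (x + |x|)/2 half-wave rectifies the energy
    difference so that only rising energy triggers detection."""
    onsets = []
    for k in range(1, len(energies)):
        diff = energies[k] - energies[k - 1]
        h = (diff + abs(diff)) / 2.0   # H(x): zero for falling energy
        if h > (ratio - 1.0) * max(energies[k - 1], 1e-12):
            onsets.append(k)
    return onsets

e = [1.0, 1.1, 0.9, 8.0, 7.5, 0.5]   # burst at frame 3, drop at frame 5
print(detect_shocks(e))  # [3] -- the energy drop is not flagged
```

Replacing H(x) with |x| would make the same detector fire on the drop at frame 5 as well, which is exactly the asymmetry the text describes.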
Equivalently, it can also be considered that the first-order difference of
In other words, the second-order backward difference is zero:
2.7. Concatenation of Interframe Parameters
Since the audio signal is regarded as the superposition of short-term stable harmonics in the modeling of STN, the amplitude, phase, and frequency of each harmonic in a frame are regarded as constants. If it is simply reconstructed, it is as follows:
H is the frame length, which should equal the length of the analysis frame. Without interpolation, abrupt changes in the harmonic parameters from frame to frame produce audible discontinuities.
We then interpolate the frequency and phase. This is more complicated because the two are coupled: frequency is the derivative of phase. The first step is to define the phase; that is,
The following is satisfied:
However, the solution is not unique, and
From the above formula,
At this point, the reconstructed audio signal of each frame after interpolation can be completely written:
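A simplified version of this interframe concatenation, for a single harmonic, linearly interpolates amplitude and frequency across the frame and accumulates the phase for continuity (the paper's exact cubic phase interpolation is not reproduced; function and parameter names are assumptions):

```python
import numpy as np

def interp_synthesize(a0, a1, f0, f1, phi0, fs, H):
    """Reconstruct one harmonic over H samples with amplitude and
    frequency linearly interpolated between the frame boundaries
    and phase obtained by integrating the frequency track."""
    n = np.arange(H)
    amp = a0 + (a1 - a0) * n / H                      # linear amplitude interpolation
    freq = f0 + (f1 - f0) * n / H                     # linear frequency interpolation
    phase = phi0 + 2 * np.pi * np.cumsum(freq) / fs   # integrate frequency -> phase
    return amp * np.cos(phase)

# one harmonic gliding from 440 Hz to 442 Hz while fading from 1.0 to 0.5
y = interp_synthesize(1.0, 0.5, 440.0, 442.0, 0.0, fs=8000, H=160)
print(y.shape)
```

Because the phase is accumulated rather than reset per frame, consecutive frames join without the clicks that the text attributes to abrupt parameter changes.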
3. Computer-Visualized Sound Parameter Analysis Method and Its Application in Vocal Music Teaching
The method proposed in this paper is applied to vocal music teaching: the computer-visualized sound parameter analysis method identifies the vocal teaching parameters, and vocal characteristics are studied to improve the teaching effect. This paper uses Matlab to construct a computer-visualized sound parameter analysis model as the vocal music teaching model and verifies its vocal feature recognition and teaching effect through simulation experiments. The results are shown in Table 1 and Figure 12.
Table 1
The analysis method of computer visualization sound parameters and its application effect in vocal music teaching.
No. | Feature recognition | Teaching effect | No. | Feature recognition | Teaching effect |
1 | 89.01 | 84.60 | 13 | 91.67 | 82.15 |
2 | 89.45 | 82.47 | 14 | 89.51 | 88.01 |
3 | 88.28 | 83.42 | 15 | 88.52 | 81.09 |
4 | 91.89 | 87.75 | 16 | 91.73 | 84.95 |
5 | 89.38 | 87.33 | 17 | 92.98 | 86.08 |
6 | 88.15 | 84.83 | 18 | 89.92 | 86.75 |
7 | 88.53 | 87.28 | 19 | 92.14 | 82.44 |
8 | 88.18 | 84.83 | 20 | 91.29 | 84.77 |
9 | 88.34 | 88.09 | 21 | 92.93 | 85.28 |
10 | 88.59 | 83.01 | 22 | 90.60 | 81.63 |
11 | 91.19 | 82.98 | 23 | 90.63 | 83.11 |
12 | 88.77 | 83.77 | 24 | 92.60 | 82.36 |
[figure(s) omitted; refer to PDF]
The results above show that the computer-visualized sound parameter analysis method proposed in this paper performs well in vocal music teaching, with feature-recognition scores of 88.15–92.98 and teaching-effect scores of 81.09–88.09 across the 24 trials, and can improve the quality of vocal music teaching.
4. Conclusion
Vocal music teaching is systematic: all of its links are interconnected and cannot be separated. Long-term teaching experience suggests, however, that more attention should be paid to talents who have been buried by prevailing educational philosophy; only then will vocal music education have a good incentive mechanism and a reasonable restraint mechanism. Starting from teaching practice, continually discovering and cultivating students' creativity is therefore the foundation of teaching. This paper has studied the effect of vocal music teaching combined with the computer-visualized sound parameter analysis method and constructed an intelligent vocal music teaching system. The simulation analysis shows that the proposed method applies well to vocal music teaching and can improve its quality.
[1] D. Tomašević, S. Wells, I. Y. Ren, A. Volk, M. Pesek, "Exploring annotations for musical pattern discovery gathered with digital annotation tools," Journal of Mathematics and Music, vol. 15 no. 2, pp. 194-207, DOI: 10.1080/17459737.2021.1943026, 2021.
[2] X. Serra, "The computational study of a musical culture through its digital traces," Acta Musicologica, vol. 89 no. 1, pp. 24-44, 2017.
[3] I. B. Gorbunova, N. N. Petrova, "Digital sets of instruments in the system of contemporary artistic education in music: socio-cultural aspect," Journal of Critical Reviews, vol. 7 no. 19, pp. 982-989, 2020.
[4] E. Partesotti, A. Peñalba, J. Manzolli, "Digital instruments and their uses in music therapy," Nordic Journal of Music Therapy, vol. 27 no. 5, pp. 399-418, DOI: 10.1080/08098131.2018.1490919, 2018.
[5] B. Babich, "Musical “covers” and the culture industry: from antiquity to the age of digital reproducibility," Research in Phenomenology, vol. 48 no. 3, pp. 385-407, DOI: 10.1163/15691640-12341403, 2018.
[6] L. L. Gonçalves, F. L. Schiavoni, "Creating digital musical instruments with libmosaic-sound and mosaicode," Revista de Informática Teórica e Aplicada, vol. 27 no. 4, pp. 95-107, DOI: 10.22456/2175-2745.104342, 2020.
[7] I. B. Gorbunova, "Music computer technologies in the perspective of digital humanities, arts, and researches," Opción, vol. 35 no. 24, pp. 360-375, 2019.
[8] A. Dickens, C. Greenhalgh, B. Koleva, "Facilitating accessibility in performance: participatory design for digital musical instruments," Journal of the Audio Engineering Society, vol. 66 no. 4, pp. 211-219, DOI: 10.17743/jaes.2018.0010, 2018.
[9] L. Turchet, M. Barthet, "An ubiquitous smart guitar system for collaborative musical practice," Journal of New Music Research, vol. 48 no. 4, pp. 352-365, DOI: 10.1080/09298215.2019.1637439, 2019.
[10] R. Khulusi, J. Kusnick, C. Meinecke, C. Gillmann, J. Focht, S. Jänicke, "A survey on visualizations for musical data," Computer Graphics Forum, vol. 39 no. 6, pp. 82-110, DOI: 10.1111/cgf.13905, 2020.
[11] E. Cano, D. FitzGerald, A. Liutkus, M. D. Plumbley, F. R. Stöter, "Musical source separation: an introduction," IEEE Signal Processing Magazine, vol. 36 no. 1, pp. 31-40, DOI: 10.1109/msp.2018.2874719, 2019.
[12] T. Magnusson, "The migration of musical instruments: on the socio-technological conditions of musical evolution," Journal of New Music Research, vol. 50 no. 2, pp. 175-183, DOI: 10.1080/09298215.2021.1907420, 2021.
[13] I. B. Gorbunova, N. N. Petrova, "Music computer technologies, supply chain strategy and transformation processes in socio-cultural paradigm of performing art: using digital button accordion," International Journal of Supply Chain Management, vol. 8 no. 6, pp. 436-445, 2019.
[14] J. A. Anaya Amarillas, "Marketing musical: música, industria y promoción en la era digital," INTERdisciplina, vol. 9 no. 25, pp. 333-335.
[15] E. Costa-Giomi, L. Benetti, "Through a baby’s ears: musical interactions in a family community," International Journal of Community Music, vol. 10 no. 3, pp. 289-303, DOI: 10.1386/ijcm.10.3.289_1, 2017.
[16] A. Amendola, G. Gabbriellini, P. Dell'Aversana, A. J. Marini, "Seismic facies analysis through musical attributes," Geophysical Prospecting, vol. 65 no. 1, pp. 49-58, DOI: 10.1111/1365-2478.12504, 2017.
[17] K. Stensæth, "Music therapy and interactive musical media in the future: reflections on the subject-object interaction," Nordic Journal of Music Therapy, vol. 27 no. 4, pp. 312-327, DOI: 10.1080/08098131.2018.1439085, 2018.
[18] C. Michalakos, "Designing musical games for electroacoustic improvisation," Organised Sound, vol. 26 no. 1, pp. 78-88, DOI: 10.1017/s1355771821000078, 2021.
Copyright © 2022 Zhenming Yu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. https://creativecommons.org/licenses/by/4.0/
Abstract
In order to improve the quality of vocal music teaching, this paper applies the computer-visualized sound parameter analysis method to vocal music teaching and discusses a parametric coding scheme. A transient-detection mechanism is adopted to segment the signal; for frames in which a shock signal is detected, differential predictive coding of the frequency-domain parameters, similar to temporal noise shaping (TNS), can be used. In addition, based on the short-term periodicity and short-term stationarity of speech signals, an analysis-synthesis model based on harmonic decomposition is proposed. The simulation analysis shows that the proposed computer-visualized sound parameter analysis method applies well to vocal music teaching and can improve its quality.