This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
In the last two decades, the total variation (TV) image denoising model proposed by Rudin, Osher, and Fatemi [1] has received considerable attention. Often referred to as the ROF model, this takes the form
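The display equation did not survive extraction. In its standard continuous form (assumed here, with Ω the image domain, f the observed noisy image, and λ > 0 a regularization parameter; the original equation (1) may differ in the exact placement of λ), the ROF model reads

```latex
\min_{u}\;\; \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx .
```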
Many efficient iterative algorithms have been proposed to solve the ROF model (1). These include the Chambolle gradient projection algorithm and its variants [10–12], the primal-dual hybrid gradient algorithm [13–15], and the split Bregman algorithm [16, 17]. Isotropic total variation (ITV) and anisotropic total variation (ATV) are the two most widely employed forms of TV in the literature; both can be viewed as compositions of a convex function
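To illustrate the difference between the two discretizations, a hypothetical NumPy sketch (not the paper's code, which uses MATLAB): ITV sums the pointwise Euclidean norms of the discrete gradient, while ATV sums the absolute values of the horizontal and vertical differences, so ATV is always at least as large as ITV.

```python
import numpy as np

def discrete_gradient(u):
    """Forward differences with replicate (Neumann) boundary conditions."""
    dx = np.diff(u, axis=1, append=u[:, -1:])  # horizontal differences
    dy = np.diff(u, axis=0, append=u[-1:, :])  # vertical differences
    return dx, dy

def itv(u):
    """Isotropic TV: sum of pointwise Euclidean norms of the gradient."""
    dx, dy = discrete_gradient(u)
    return np.sum(np.sqrt(dx**2 + dy**2))

def atv(u):
    """Anisotropic TV: sum of absolute differences (an l1 norm of the gradient)."""
    dx, dy = discrete_gradient(u)
    return np.sum(np.abs(dx) + np.abs(dy))

if __name__ == "__main__":
    u = np.array([[0.0, 1.0], [0.0, 1.0]])
    print(itv(u), atv(u))  # both equal 2.0 for this axis-aligned edge
```

For an image whose edges are axis-aligned, as above, the two values coincide; they differ on diagonal edges.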
Because the pixel values of grayscale images are generally distributed in
As the indicator function
1.1. Existing Work
Next, let us briefly review some existing work concerning the computation of resolvent operators. Bauschke and Combettes [21] extended the Dykstra algorithm [22] for computing projections onto the intersection of two closed convex sets to compute the resolvent of the sum of two maximal monotone operators. Hence, they obtained an algorithm for finding the proximity operator of the sum of two proper lower semicontinuous convex functions. Combettes [23] proposed two inexact parallel splitting algorithms for computing the resolvent of a weighted sum of maximal monotone operators. The key idea was to reformulate the weighted sum of maximal monotone operators as a sum of two maximal monotone operators in a product space. The two iterative algorithms were based on extensions of the Douglas–Rachford splitting and Dykstra-like methods, respectively. Furthermore, Combettes [23] applied these algorithms when computing the proximity operator of a weighted sum of proper lower semicontinuous convex functions. In more recent work, Aragón Artacho and Campoy [24] generalized the averaged alternating modified reflection algorithm [25] to compute the resolvent of the sum of two maximal monotone operators.
In contrast, Moudafi [26] proposed a fixed-point algorithm to compute the resolvent of operator
In this study, we focus on computing the resolvent of the operator
The remainder of this paper is organized as follows. In Section 2, we introduce some notation and present useful definitions and lemmas. In Section 3, we present the main fixed-point algorithm and prove its strong convergence. In Section 4, we employ the obtained iterative algorithm to solve a particular convex optimization problem, which is related to the calculation of the resolvent operator (9). In Section 5, we present some numerical experiments on image denoising to illustrate the performance of our proposed algorithm. Finally, we provide some conclusions and ideas for future work in Section 6.
2. Preliminaries
In this section, we review some basic definitions and lemmas in monotone operator theory and convex analysis, which will be used throughout this paper. First, let
Let
We now introduce some definitions and lemmas, most of which can be found in [28, 31].
Definition 1 (see [28], (maximal monotone operator)).
Let
Definition 2 (see [28], (resolvent and Yosida approximation)).
Let
For any
Lemma 3 (see [28]).
Let
(i)
(ii)
The Yosida approximation
Definition 4 (see [28]).
Let
(i)
(ii)
(iii)
(iv)
Remark 5.
(i) An equivalent definition of firm nonexpansiveness is that
(ii) An equivalent definition of
(iii) Let
The following lemma provides some useful relationships between an operator
Lemma 6 (see [28]).
Let
(i)
(ii)
(iii)
The following lemma shows that a composition of two averaged operators is itself averaged. This result first appeared in the work of Ogura and Yamada [32]; Combettes and Yamada [33] later gave a different proof.
Lemma 7 (see [32]).
Let
We also make full use of the following lemma.
Lemma 8 (see [28]).
Let
We end this section by introducing the Krasnoselskii–Mann algorithm. Theorem 9 provides a fundamental tool for studying the convergence of many operator splitting methods.
Theorem 9 (see [28], (Krasnoselskii–Mann algorithm)).
Let
(i)
(ii)
(iii)
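Although the formulas of the theorem were lost in extraction, the Krasnoselskii–Mann iteration itself has the form x_{n+1} = x_n + λ_n (T x_n − x_n) for a nonexpansive operator T, with the relaxation parameters λ_n typically required to satisfy a divergence condition such as Σ λ_n(1 − λ_n) = ∞. A minimal NumPy sketch (hypothetical, not the paper's code), using a constant relaxation parameter and the projection onto a box as T, which is firmly nonexpansive with the box as its fixed-point set:

```python
import numpy as np

def krasnoselskii_mann(T, x0, lam=0.5, n_iter=200):
    """Relaxed fixed-point iteration x_{k+1} = (1 - lam) x_k + lam T(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = (1.0 - lam) * x + lam * T(x)
    return x

if __name__ == "__main__":
    # T: projection onto the box [0, 1]^n; its fixed points are the box itself,
    # so the iteration converges to the projection of the starting point.
    T = lambda x: np.clip(x, 0.0, 1.0)
    x = krasnoselskii_mann(T, np.array([3.0, -2.0, 0.4]))
    print(x)  # approaches the fixed point [1.0, 0.0, 0.4]
```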
3. Computing the Resolvent Operator (9)
Before presenting our main results, we first introduce some notation. For a fixed
In addition, the following lemma provides a fixed-point characterization of the resolvent operator (9).
Lemma 10.
Let
Proof.
It follows from the definition of the resolvent operator that
Conversely, let
Next, we prove the following lemma, which characterizes an important property of the operator
Lemma 11.
Let
(i)
(ii)
For any
Proof.
(i) Let
(ii) For any
Lemma 11 shows that, for any
Now, we are ready to present our main results.
Theorem 12.
Let
(i)
(ii)
Suppose that
Proof.
(i) Because the resolvent operator
(1)
(2)
(3)
Taking into account the fact that
(ii) Let
By Lemma 10,
Remark 13.
We observe that
Remark 14.
We observe that the resolvent operator of
Table 1
Comparison of the proposed algorithm with existing algorithms.
Method | Iterative algorithm | Operator splitting algorithms
Primal-dual | [34, 35] | Forward-backward-forward splitting
Primal-dual | [36] | Variable metric forward-backward splitting
Primal-dual | [37] | Forward-backward-half forward splitting
Dual | Proposed (20) | Relaxed forward-backward splitting
Let
Corollary 15.
Let
(i)
(ii)
If
Remark 16.
Moudafi [26] proposed the following iterative algorithm to solve the resolvent operator
Corollary 15 extends some of the results from [26] in two aspects. (i) The range of
Let
Corollary 17.
Let
(i)
(ii)
Suppose that
Remark 18.
The obtained iterative algorithm (38) for computing the resolvent operator
Bauschke and Combettes [21] proposed a Dykstra-like algorithm to compute the resolvent operator
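A sketch of such a Dykstra-like iteration, specialized to proximity operators of two convex functions (hypothetical NumPy code following a common formulation, not taken verbatim from [21]): starting from x₀ = r and p₀ = q₀ = 0, it alternates prox_g and prox_f with correction terms, and xₙ converges to prox_{f+g}(r). We check it on a separable pair with a known closed form: for f = λ‖·‖₁ and g the indicator of a box, prox_{f+g} is soft-thresholding followed by projection onto the box (coordinate-wise, the constrained minimizer of a 1D convex function is the clip of the unconstrained one).

```python
import numpy as np

def soft(v, lam):
    """Proximity operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_sum_dykstra(prox_f, prox_g, r, n_iter=100):
    """Dykstra-like iteration computing prox_{f+g}(r) from the proxes of f and g."""
    x = np.asarray(r, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(n_iter):
        y = prox_g(x + p)
        p = x + p - y          # correction term for g
        x = prox_f(y + q)
        q = y + q - x          # correction term for f
    return x

if __name__ == "__main__":
    lam = 0.5
    prox_f = lambda v: soft(v, lam)            # f = lam * ||.||_1
    prox_g = lambda v: np.clip(v, 0.0, 1.0)    # g = indicator of the box [0,1]^n
    r = np.array([2.0, -1.0, 0.3])
    x = prox_sum_dykstra(prox_f, prox_g, r)
    print(x, np.clip(soft(r, lam), 0.0, 1.0))  # both give [1.0, 0.0, 0.0]
```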
On the other hand, Combettes [23] proposed an inexact Douglas–Rachford splitting algorithm and an inexact Dykstra-like algorithm for computing the resolvent of the sum of a finite family of maximal monotone operators. For the resolvent of the sum of two maximal monotone operators, the inexact Dykstra-like algorithm without error terms coincides with the iterative algorithm (40). For simplicity, we present the inexact Douglas–Rachford splitting algorithm without error terms for computing the resolvent of the sum of two maximal monotone operators. Let
Comparing (39), (40), and (42), we find that the iterative sequences generated by all three algorithms converge strongly to the resolvent operator
4. Application to Convex Optimization Problem
In this section, we apply the obtained results to solve a particular convex optimization problem that has been studied in the literature.
For convenience, we introduce some notation. A function
Problem 19.
Let
Theorem 20.
Under the conditions of Problem 19, let
Proof.
Let
Remark 21.
The Moreau identity states that, for any
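In its basic (unscaled) form the identity reads x = prox_f(x) + prox_{f*}(x) for every x, where f* denotes the Fenchel conjugate of f. An illustrative numerical check (not from the paper) with f = λ‖·‖₁, whose conjugate is the indicator of the ∞-norm ball of radius λ, so that prox_{f*} is the projection onto that ball:

```python
import numpy as np

lam = 0.7
x = np.array([2.0, -0.3, 1.1, -5.0])

# prox of lam * ||.||_1: soft-thresholding
prox_f = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
# prox of the conjugate: projection onto [-lam, lam]^n
prox_fstar = np.clip(x, -lam, lam)

print(np.allclose(prox_f + prox_fstar, x))  # True: Moreau decomposition
```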
Combettes et al. [27] proposed the following iterative algorithm to solve the optimization problem (44). For any
Remark 22.
(1) Comparing (47) with (48), the range of
(2) Although our proposed iterative sequences (45) are error-free, it is not difficult to add error sequences in corresponding locations, as in (48). Because the proof is almost identical to that of Theorem 12, we have omitted it here.
5. Numerical Experiments
In this section, we present numerical experiments to verify the effectiveness of the proposed iterative algorithms for solving the constrained total variation model (4) for image denoising. All experiments are conducted in MATLAB R2014a on a Lenovo laptop with a 2.3 GHz Intel CPU and 4 GB of RAM.
We select “Barbara,” “Lena,” “Boat,” and “Goldhill” as the test images (see Figure 1). Gaussian noise of mean
[figures omitted; refer to PDF]
We use the signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR) to evaluate the quality of the restored images. These are defined by
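The displayed formulas were lost in extraction. In the form commonly used for 8-bit images (assumed here, with ū the clean reference image and u the restored image), SNR = 10 log₁₀(‖ū‖² / ‖u − ū‖²) and PSNR = 10 log₁₀(255² / MSE). An illustrative implementation:

```python
import numpy as np

def snr(u, u_ref):
    """Signal-to-noise ratio in dB of a restored image u against the reference u_ref."""
    return 10.0 * np.log10(np.sum(u_ref**2) / np.sum((u - u_ref)**2))

def psnr(u, u_ref, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak intensity."""
    mse = np.mean((u - u_ref)**2)
    return 10.0 * np.log10(peak**2 / mse)
```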
We aim mainly to solve the constrained total variation (TV) image denoising problem (4). In particular, we choose the anisotropic total variation as the regularization term during testing. By using the indicator function, the constrained TV denoising problem (4) can be reformulated as the following unconstrained optimization problem:
Let
5.1. Numerical Results and Discussion
First, we describe the impacts of the parameters for the iterative step size
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
As shown in Figures 2–4, when the iterative step size
Table 2
Numerical results for different choices of
(The numerical entries of this table could not be recovered from the source; its columns compare the ROF, N-ROF, and B-ROF models.)
Because the prior pixel information of the image is introduced as a constraint, the performance of the constrained ROF model is superior to that of the unconstrained model. Numerical results confirm the advantages of the constrained ROF model. We can see from Table 2 that the iterative step size
Next, we focus on investigating the performances of the constrained and unconstrained ROF models for the test images from Figure 1. The numerical results are presented in Table 3. We notice that the SNR and PSNR values slowly decrease with an increasing number of iterations. Because more iterations do not improve the quality of the image, we should stop the iterative algorithm in the early stages. Figures 5–8 present denoised images for
Table 3
Numerical results in terms of the SNR, PSNR, and number of iterations (
(The numerical entries of this table could not be recovered from the source; its rows report results for the “Barbara,” “Lena,” “Boat,” and “Goldhill” images under the ROF, N-ROF, and B-ROF models.)
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
[figures omitted; refer to PDF]
6. Conclusions
The total variation can be viewed as the composition of a convex function with a linear transformation. Based on this observation, Micchelli et al. [20] introduced a fixed-point algorithm built on proximity operators to solve the total variation image denoising model (1). Inspired by the work of Moudafi [26], we studied the computation of the resolvent of the sum of a maximal monotone operator and a composite operator (9), which arises when solving the constrained total variation model (4). Subsequently, we proposed a fixed-point algorithm for this resolvent operator. Based on the fixed-point theory of nonexpansive mappings, we proved the strong convergence of the resulting iterative sequence. An advantage of the fixed-point approach is that it offers the potential to develop further fast iterative algorithms. Numerical simulations on image denoising illustrated the performance of the proposed algorithm. In particular, we found that the step size had a significant impact on the convergence speed. In general, for a fixed iterative step size, larger relaxation parameters yielded faster convergence. The numerical results also confirmed that the constrained ROF model outperforms the unconstrained ROF model.
Finally, we wish to note that the constrained TV model (4) can also be solved using other iterative algorithms, such as the primal-dual Chambolle–Pock algorithm [15], the alternating direction method of multipliers [29, 38, 39], and the preconditioned primal-dual algorithm [40, 41]. We have not presented the corresponding numerical results here. We will further examine the convergence rate of our proposed iterative algorithm and include these comparative results in future work.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (11661056, 11771198, 11771347, 91730306, 41390454, and 11401293), the China Postdoctoral Science Foundation (2015M571989), and the Jiangxi Province Postdoctoral Science Foundation (2015KY51).
[1] L. I. Rudin, S. Osher, E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: Nonlinear Phenomena, vol. 60 no. 1–4, pp. 259-268, DOI: 10.1016/0167-2789(92)90242-F, 1992.
[2] Y. Wang, J. Yang, W. Yin, Y. Zhang, "A new alternating minimization algorithm for total variation image reconstruction," SIAM Journal on Imaging Sciences, vol. 1 no. 3, pp. 248-272, DOI: 10.1137/080724265, 2008.
[3] A. Beck, M. Teboulle, "Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems," IEEE Transactions on Image Processing, vol. 18 no. 11, pp. 2419-2434, DOI: 10.1109/TIP.2009.2028250, 2009.
[4] C. Wu, J. Zhang, X.-C. Tai, "Augmented Lagrangian method for total variation restoration with non-quadratic fidelity," Inverse Problems and Imaging, vol. 5 no. 1, pp. 237-261, DOI: 10.3934/ipi.2011.5.237, 2011.
[5] T. F. Chan, J. Shen, "Mathematical models for local nontexture inpaintings," SIAM Journal on Applied Mathematics, vol. 62 no. 3, pp. 1019-1043, DOI: 10.1137/S0036139900368844, 2002.
[6] A. Marquina, S. J. Osher, "Image super-resolution by TV-regularization and Bregman iteration," Journal of Scientific Computing, vol. 37 no. 3, pp. 367-382, DOI: 10.1007/s10915-008-9214-8, 2008.
[7] J. Liu, Y.-B. Ku, S. Leung, "Expectation-maximization algorithm with total variation regularization for vector-valued image segmentation," Journal of Visual Communication and Image Representation, vol. 23 no. 8, pp. 1234-1244, DOI: 10.1016/j.jvcir.2012.09.002, 2012.
[8] E. Y. Sidky, X. Pan, "Image reconstruction in circular cone-beam computed tomography by constrained, total-variation minimization," Physics in Medicine and Biology, vol. 53 no. 17, pp. 4777-4807, DOI: 10.1088/0031-9155/53/17/021, 2008.
[9] Y. Chen, W. W. Hager, M. Yashtini, X. Ye, H. Zhang, "Bregman operator splitting with variable stepsize for total variation image reconstruction," Computational Optimization and Applications, vol. 54 no. 2, pp. 317-342, DOI: 10.1007/s10589-012-9519-2, 2013.
[10] A. Chambolle, "An algorithm for total variation minimization and applications," Journal of Mathematical Imaging and Vision, vol. 20 no. 1-2, pp. 89-97, DOI: 10.1023/B:JMIV.0000011320.81911.38, 2004.
[11] G. Yu, L. Qi, Y. Dai, "On nonmonotone Chambolle gradient projection algorithms for total variation image restoration," Journal of Mathematical Imaging and Vision, vol. 35 no. 2, pp. 143-154, DOI: 10.1007/s10851-009-0160-3, 2009.
[12] J.-F. Aujol, "Some first-order algorithms for total variation based image restoration," Journal of Mathematical Imaging and Vision, vol. 34 no. 3, pp. 307-327, DOI: 10.1007/s10851-009-0149-y, 2009.
[13] M. Zhu, T. F. Chan, "An efficient primal-dual hybrid gradient algorithm for total variation image restoration," Technical Report CAM Report, May 2008.
[14] E. Esser, X. Zhang, T. F. Chan, "A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science," SIAM Journal on Imaging Sciences, vol. 3 no. 4, pp. 1015-1046, DOI: 10.1137/09076934X, 2010.
[15] A. Chambolle, T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40 no. 1, pp. 120-145, DOI: 10.1007/s10851-010-0251-1, 2011.
[16] T. Goldstein, S. Osher, "The split Bregman method for L1-regularized problems," SIAM Journal on Imaging Sciences, vol. 2 no. 2, pp. 323-343, DOI: 10.1137/080725891, 2009.
[17] C. Wu, X.-C. Tai, "Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models," SIAM Journal on Imaging Sciences, vol. 3 no. 3, pp. 300-339, DOI: 10.1137/090767558, 2010.
[18] G. Steidl, "A note on the dual treatment of higher-order regularization functionals," Computing: Archives for Scientific Computing, vol. 76 no. 1-2, pp. 135-148, DOI: 10.1007/s00607-005-0129-z, 2006.
[19] P. Chen, J. Huang, X. Zhang, "A primal-dual fixed point algorithm for convex separable minimization with applications to image restoration," Inverse Problems, vol. 29 no. 2,DOI: 10.1088/0266-5611/29/2/025011, 2013.
[20] C. A. Micchelli, L. Shen, Y. Xu, "Proximity algorithms for image models: denoising," Inverse Problems, vol. 27 no. 4,DOI: 10.1088/0266-5611/27/4/045009, 2011.
[21] H. H. Bauschke, P. L. Combettes, "A Dykstra-like algorithm for two monotone operators," Pacific Journal of Optimization, vol. 4 no. 3, pp. 383-391, 2008.
[22] R. L. Dykstra, "An algorithm for restricted least squares regression," Journal of the American Statistical Association, vol. 78 no. 384, pp. 837-842, DOI: 10.1080/01621459.1983.10477029, 1983.
[23] P. L. Combettes, "Iterative construction of the resolvent of a sum of maximal monotone operators," Journal of Convex Analysis, vol. 16 no. 3-4, pp. 727-748, 2009.
[24] F. J. Aragón Artacho, R. Campoy, "Computing the resolvent of the sum of maximally monotone operators with the averaged alternating modified reflections algorithm," Journal of Optimization Theory and Applications,DOI: 10.1007/s10957-019-01481-3, 2019.
[25] F. J. Aragón Artacho, R. Campoy, "A new projection method for finding the closest point in the intersection of convex sets," Computational Optimization and Applications, vol. 69 no. 1, pp. 99-132, DOI: 10.1007/s10589-017-9942-5, 2018.
[26] A. Moudafi, "Computing the resolvent of composite operators," Cubo: A Mathematical Journal, vol. 16 no. 3, pp. 87-96, DOI: 10.4067/s0719-06462014000300007, 2014.
[27] P. L. Combettes, D. Dung, B. C. Vu, "Dualization of signal recovery problems," Set-Valued and Variational Analysis, vol. 18 no. 3-4, pp. 373-404, DOI: 10.1007/s11228-010-0147-7, 2010.
[28] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces,DOI: 10.1007/978-3-319-48311-5, 2017.
[29] R. H. Chan, M. Tao, X. Yuan, "Constrained total variation deblurring models and fast algorithms based on alternating direction method of multipliers," SIAM Journal on Imaging Sciences, vol. 6 no. 1, pp. 680-697, DOI: 10.1137/110860185, 2013.
[30] T. Pennanen, "Dualization of generalized equations of maximal monotone type," SIAM Journal on Optimization, vol. 10 no. 3, pp. 809-835, DOI: 10.1137/S1052623498340448, 2000.
[31] C. Byrne, "A unified treatment of some iterative algorithms in signal processing and image reconstruction," Inverse Problems, vol. 20 no. 1, pp. 103-120, DOI: 10.1088/0266-5611/20/1/006, 2004.
[32] N. Ogura, I. Yamada, "Non-strictly convex minimization over the fixed point set of an asymptotically shrinking nonexpansive mapping," Numerical Functional Analysis and Optimization, vol. 23 no. 1-2, pp. 113-137, DOI: 10.1081/NFA-120003674, 2002.
[33] P. L. Combettes, I. Yamada, "Compositions and convex combinations of averaged nonexpansive operators," Journal of Mathematical Analysis and Applications, vol. 425 no. 1, pp. 55-70, DOI: 10.1016/j.jmaa.2014.11.044, 2015.
[34] L. M. Briceño-Arias, P. L. Combettes, "A monotone+skew splitting model for composite monotone inclusions in duality," SIAM Journal on Optimization, vol. 21 no. 4, pp. 1230-1250, DOI: 10.1137/10081602x, 2011.
[35] P. L. Combettes, J.-C. Pesquet, "Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators," Set-Valued and Variational Analysis, vol. 20 no. 2, pp. 307-330, DOI: 10.1007/s11228-011-0191-y, 2012.
[36] B. C. Vũ, "A splitting algorithm for dual monotone inclusions involving cocoercive operators," Advances in Computational Mathematics, vol. 38 no. 3, pp. 667-681, DOI: 10.1007/s10444-011-9254-8, 2013.
[37] L. M. Briceño-Arias, D. Davis, "Forward-backward-half forward algorithm for solving monotone inclusions," SIAM Journal on Optimization, vol. 28 no. 4, pp. 2839-2871, DOI: 10.1137/17M1120099, 2018.
[38] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, "Distributed optimization and statistical learning via the alternating direction method of multipliers," Foundations and Trends in Machine Learning, vol. 3 no. 1,DOI: 10.1561/2200000016, 2010.
[39] M. K. Ng, P. Weiss, X. M. Yuan, "Solving constrained total-variation image restoration and reconstruction problems via alternating direction methods," SIAM Journal on Scientific Computing, vol. 32 no. 5, pp. 2710-2736, DOI: 10.1137/090774823, 2010.
[40] T. Pock, A. Chambolle, "Diagonal preconditioning for first order primal-dual algorithms in convex optimization," Proceedings of the IEEE International Conference on Computer Vision (ICCV '11), pp. 1762-1769, DOI: 10.1109/ICCV.2011.6126441, 2011.
[41] M. Wen, J. Peng, Y. Tang, C. Zhu, S. Yue, "A preconditioning technique for first-order primal-dual splitting method in convex optimization," Mathematical Problems in Engineering, vol. 2017,DOI: 10.1155/2017/3694525, 2017.
Copyright © 2019 Bao Chen and Yuchao Tang. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/
Abstract
Total variation image denoising models have received considerable attention in the last two decades. To solve constrained total variation image denoising problems, we utilize the computation of a resolvent operator, which consists of a maximal monotone operator and a composite operator. More precisely, the composite operator consists of a maximal monotone operator and a bounded linear operator. Based on recent work, in this paper we propose a fixed-point approach for computing this resolvent operator. Under mild conditions on the iterative parameters, we prove strong convergence of the iterative sequence, which is based on the classical Krasnoselskii–Mann algorithm in general Hilbert spaces. As a direct application, we obtain an effective iterative algorithm for solving the proximity operator of the sum of two convex functions, one of which is the composition of a convex function with a linear transformation. Numerical experiments on image denoising are presented to illustrate the efficiency and effectiveness of the proposed iterative algorithm. In particular, we report the numerical results for the proposed algorithm with different step sizes and relaxation parameters.