1. Introduction
We revisit the well-known problem of lossy source coding for finite-alphabet sequences with respect to a certain distortion measure ([1]; [2], Chapter 10; [3], Chapter 9; [4]; [5], Chapters 7 and 8). More concretely, our focus is on d-semifaithful codes, namely, variable–rate codes that meet a certain distortion constraint for every source sequence (and not only in expectation). As is very well known [1], the rate–distortion function quantifies the minimum achievable expected coding rate for a given memoryless source and distortion measure.
Over the past several decades, many research efforts were motivated by the fact that the source statistics are rarely (if ever) known in practice, and were, therefore, directed to the quest for universal coding schemes, namely, coding schemes that do not depend on the unknown statistics but, nevertheless, approach the lower bound (i.e., the entropy in lossless compression, or the rate–distortion function in the lossy case) asymptotically, as the block length grows without bound. We next provide a very brief (and non-comprehensive) review of some of the relevant earlier work.
In lossless compression, the theory of universal source coding is very well developed and mature. Davisson’s work [6] concerning universal coding redundancies has established the concepts of weak universality and strong universality (vanishing maximin and minimax redundancies, respectively), and has characterized the connection to the capacity of the ‘channel’ defined by the family of conditional distributions of the data to be compressed given the index (or parameter) of the source in the class [7,8,9]. For many of the frequently encountered parametric classes of sources, the minimum achievable redundancy of universal codes is well known to be dominated by $\frac{k\log n}{2n}$, where $k$ is the number of degrees of freedom of the parameter, and $n$ is the block length [10,11,12,13]. A central idea that arises from Davisson’s theory is to construct a Shannon code pertaining to the probability distribution of the data vector with respect to a mixture (with a certain prior function) of all sources in the class. Rissanen, who was the inventor of the minimum description length (MDL) principle [14], has proved in [15] a converse to a coding theorem, which asserts that asymptotically, no universal code can achieve redundancy below $(1-\epsilon)\frac{k\log n}{2n}$, with the possible exception of sources from a subset of the parameter space, whose volume tends to zero as $n \to \infty$, for every positive $\epsilon$. Merhav and Feder [16] have generalized this result to more general classes of sources, with the term $\frac{k}{2}\log n$ substituted by the capacity of the above-mentioned ‘channel’. Further developments, including more refined redundancy analyses, have been carried out in later studies.
In the wider realm of universal lossy compression, the theory is, unfortunately, not as sharp and well developed as in the lossless setting. We confine our attention, in this work, to d-semifaithful codes [17], namely, codes that satisfy the distortion requirement with probability one. Zhang, Yang, and Wei [18] have proved that, unlike in lossless compression, in the lossy case, even if the source statistics are known perfectly, it is impossible to achieve redundancy below $\frac{\log n}{2n}$, but redundancy proportional to $\frac{\log n}{n}$ is achievable. Not knowing the source comes at the price of enlarging the multiplicative constant in front of $\frac{\log n}{n}$. Indeed, Yu and Speed [19] have established weak universality with a constant that grows with the cardinalities of the alphabets of the source and the reconstruction [20]. Ornstein and Shields [17] have considered universal d-semifaithful coding for stationary and ergodic sources with respect to the Hamming distortion measure, and established convergence with probability one to the rate–distortion function. Kontoyiannis [21] had several interesting findings. The first is a certain central limit theorem (CLT), with an $O(1/\sqrt{n})$ redundancy term, whose coefficient is given by a limiting Gaussian random variable with some constant variance. The second is the so-called law of the iterated logarithm (LIL), with redundancy proportional to $\sqrt{\frac{\log\log n}{n}}$ infinitely often with probability one. One of the counter-intuitive conclusions from [21] is that, under those CLT and LIL criteria, universality comes at no extra cost. In [22], many of the findings are based on the observation that optimal compression can be characterized in terms of the negative logarithm of the probability of a sphere of radius $nD$ around the source vector with respect to the distortion measure, where $D$ is the allowed per-letter distortion. In the same article, the authors also proposed the ensemble of random coding with respect to a probability distribution given by a mixture of all distributions in a certain class. In two recent articles, Mahmood and Wagner [23,24] have studied d-semifaithful codes that are strongly universal with respect to both the source and the distortion function. The redundancy rates in [23] behave like $\frac{\log n}{n}$ with different multiplicative constants. Other interesting results concerning a special distortion measure appear in [25].
A parallel line of research work on universal lossless and lossy compression, which was pioneered by Ziv, pertains to the individual sequence approach. According to this approach, there are no assumptions on the statistical properties of the source. The source sequence to be compressed is considered an arbitrary deterministic (individual) sequence, but limitations are imposed on the encoder and/or the decoder to be implementable by finite-state machines. This includes, first and foremost, the celebrated Lempel–Ziv (LZ) algorithm [26,27,28], as well as further developments that extend the scope to lossy compression with and without side information [29,30], and to joint source–channel coding [31,32]. In the lossless case, the article [33] provides an individual sequence analogue of the above-mentioned result due to Rissanen, where the expression $\frac{k\log n}{2n}$ continues to designate the best achievable redundancy, but the main term of the compression ratio there is the empirical entropy of the source vector instead of the ordinary entropy of the probabilistic setting. The converse bound of [33] applies to the vast majority of source sequences within each type, and the vast majority of types (in analogy to the vast majority of the parameter space in Rissanen’s framework). In a way, this kind of converse result still contains some flavor of the probabilistic setting, because arguing that the number of exceptional typical sequences is relatively small is actually equivalent to imposing a uniform distribution across the type and asserting that the induced probability of violating the bound is small. The achievability result of [33], on the other hand, holds pointwise for every sequence. A similar comment applies to [34], where asymptotically pointwise lossy compression was established with respect to first-order statistics (i.e., “memoryless” statistics) with an emphasis on distortion universality, similarly to [23,24].
A similar kind of mix between the individual sequence setting and the probabilistic setting applies to this paper as well, in the context of universal rate–distortion coding, but here, just like in [34], there is no limitation to finite-state encoders/decoders as in [33]. In particular, our converse theorem asserts that given an arbitrary variable-rate code, and given an arbitrary distortion function within a certain wide class, the vast majority of reproduction vectors that represent source sequences of a given type (of any fixed order) must have a code length that is essentially at least as large as the negative logarithm of the probability of a ball with normalized radius $D$ ($D$ being the allowed per-letter distortion) centered at the given source sequence. The probability of this ball is taken with respect to a universal distribution that is proportional to $2^{-LZ(\hat{\boldsymbol{x}})}$, where $LZ(\hat{\boldsymbol{x}})$ is the code length of the LZ encoding of the reproduction vector, $\hat{\boldsymbol{x}}$.
The emphasis on the word “majority” in the previous paragraph (which was also mentioned earlier) calls for an explanation: it should be understood that in the absence of limitations on the encoding memory resources (such as the finite-state machine model described above), there cannot exist any non-trivial lower bound that applies to each and every given individual sequence. The reason is simple: given a particular individual source sequence, one can always tailor an encoder that compresses that sequence to a single bit (even losslessly), for example, by assigning the bit ‘0’ to be the compressed representation of the given sequence and by appending the bit ‘1’ as a header to the uncompressed binary representation of any other source sequence. For the given individual sequence, the compression ratio would then be $1/n$, which tends to zero as $n$ grows without bound (of course, this encoder would be very poor for many other source sequences). It is, therefore, unavoidable that any non-trivial lower bound must apply collectively over some set of source sequences, allowing at least some minority of exceptional sequences that may violate the bound.
We also present a matching achievability result, asserting that for every source sequence, this code length is essentially achievable by random coding, using a universal ensemble of codes, which is defined by independent random selection, where each codeword is drawn under the above-described universal probability distribution.
While the achievability result in [34] was pointwise as well, it was tailored to a memoryless structure, in the sense that it was given in terms of the rate–distortion function of the first-order empirical distribution, which is blind to any empirical dependencies and repetitive patterns within the source sequence. In this paper, we both extend the scope to general individual sequences beyond the memoryless statistics and extend the allowable class of distortion measures. In terms of the technical aspects, the proof of the achievability result is similar to a parallel proof in [34], but the novelty lies considerably more in the converse theorem and its proof, which is very different from the one in [34] because here there is no information-theoretic single-letter expression for the best achievable asymptotic performance.
Our main novel messages in this work can be summarized as follows.
1. We essentially derive an asymptotic formula for the rate–distortion function of an individual sequence. It is given by the normalized negative logarithm of the universal probability of a sphere of radius $nD$ (in the distortion metric sense) around the given source sequence.
2. We establish the universality of the LZ probability distribution, not only in the context of lossless compression, but also for lossy compression, in the role of a universal distribution for independent random selection of a rate–distortion codebook.
3. The expression of this rate–distortion function is intimately related to a converse result due to Kontoyiannis and Zhang [22] (in terms of the probability of a sphere), but it is more explicit in a certain sense that will be discussed in Section 5.
4. We provide some intuitive insight regarding this rate–distortion function by demonstrating the analogy to the ordinary rate–distortion function (see Section 5).
5. We show that our achievability scheme outperforms the (seemingly) natural extension of LZ compression to the lossy case, where one seeks, within a sphere of radius $nD$, the reproduction vector whose LZ code length is minimal (see Section 5).
The outline of this paper is as follows. In Section 2, we establish the notation conventions, define a few terms and quantities, and provide some background. In Section 3, we present the converse theorem and its proof. In Section 4, we present the achievability theorem and prove it. Finally, in Section 5, we summarize the paper and discuss our results.
2. Notation, Definitions, and Background
Throughout the paper, random variables will be denoted by capital letters, specific values they may take will be denoted by the corresponding lower case letters, and their alphabets will be denoted by calligraphic letters. Random vectors and their realizations will be denoted, respectively, by capital letters and the corresponding lower case letters, both in a bold face font. Their alphabets will be superscripted by their dimensions. The source vector of length $n$, with components, $x_i$, $i = 1, \ldots, n$, from a finite alphabet, $\mathcal{X}$, will be denoted by $\boldsymbol{x} = (x_1, \ldots, x_n)$. The set of all such $n$-vectors will be denoted by $\mathcal{X}^n$, which is the $n$th-order Cartesian power of $\mathcal{X}$. Likewise, a reproduction vector of length $n$, with components, $\hat{x}_i$, $i = 1, \ldots, n$, from a finite alphabet, $\hat{\mathcal{X}}$, will be denoted by $\hat{\boldsymbol{x}} = (\hat{x}_1, \ldots, \hat{x}_n)$. We denote the cardinalities of $\mathcal{X}$ and $\hat{\mathcal{X}}$ by $J$ and $K$, respectively.
For $i \le j$, the notation $x_i^j$ will be used to denote the substring $(x_i, x_{i+1}, \ldots, x_j)$. Probability distributions will be denoted by the letter $P$ or $Q$ with possible subscripts, depending on the context. The probability of an event $\mathcal{E}$ will be denoted by $\Pr\{\mathcal{E}\}$, and the expectation operator with respect to a probability distribution $P$ will be denoted by $\boldsymbol{E}_P\{\cdot\}$. For two positive sequences, $\{a_n\}$ and $\{b_n\}$, the notation $a_n \doteq b_n$ will stand for equality in the exponential scale, that is, $\lim_{n\to\infty}\frac{1}{n}\log\frac{a_n}{b_n} = 0$. Similarly, $a_n \stackrel{\cdot}{\le} b_n$ means that $\limsup_{n\to\infty}\frac{1}{n}\log\frac{a_n}{b_n} \le 0$, and so on. The indicator function of an event $\mathcal{E}$ will be denoted by $\mathcal{I}\{\mathcal{E}\}$. The notation $[t]_+$ will stand for $\max\{t, 0\}$. The logarithmic function, $\log$, will be understood to be defined to the base 2. Logarithms to the base $e$ will be denoted by $\ln$.
Let $\ell$ be a positive integer that divides $n$. The $\ell$th-order empirical distribution of $\boldsymbol{x}$, which will be denoted by $\hat{P}_{\boldsymbol{x}}$, is the vector of relative frequencies, $\{\hat{P}_{\boldsymbol{x}}(u),\ u \in \mathcal{X}^\ell\}$, where
$$\hat{P}_{\boldsymbol{x}}(u) = \frac{\ell}{n}\sum_{i=1}^{n/\ell}\mathcal{I}\left\{x_{(i-1)\ell+1}^{i\ell} = u\right\}, \quad u \in \mathcal{X}^\ell. \tag{1}$$
The set of all $\ell$th-order empirical distributions of sequences in $\mathcal{X}^n$ will be denoted by $\mathcal{P}_\ell$. For $P \in \mathcal{P}_\ell$, the type class, $\{\boldsymbol{x}:\ \hat{P}_{\boldsymbol{x}} = P\}$, will be denoted by $\mathcal{T}(P)$. Likewise, $\mathcal{T}(\hat{P}_{\boldsymbol{x}})$ will denote $\{\boldsymbol{x}':\ \hat{P}_{\boldsymbol{x}'} = \hat{P}_{\boldsymbol{x}}\}$, where $\hat{P}_{\boldsymbol{x}}$ is the $\ell$th-order empirical distribution of $\boldsymbol{x}$. Finally, $\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}}$ will denote the $\ell$th-order joint empirical distribution of $(\boldsymbol{x}, \hat{\boldsymbol{x}})$, i.e.,
$$\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}}(u, v) = \frac{\ell}{n}\sum_{i=1}^{n/\ell}\mathcal{I}\left\{x_{(i-1)\ell+1}^{i\ell} = u,\ \hat{x}_{(i-1)\ell+1}^{i\ell} = v\right\}, \quad (u, v) \in \mathcal{X}^\ell \times \hat{\mathcal{X}}^\ell. \tag{2}$$
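As a quick illustration (ours, not part of the paper), the following Python sketch computes the empirical distributions of (1) and (2) by counting non-overlapping $\ell$-blocks:

```python
from collections import Counter

def empirical_dist(x, ell):
    """ell-th order empirical distribution of x (ell must divide len(x)).
    Maps each ell-block u to its relative frequency, as in Eq. (1)."""
    n = len(x)
    assert n % ell == 0, "ell must divide n"
    blocks = [tuple(x[i:i + ell]) for i in range(0, n, ell)]
    return {u: c * ell / n for u, c in Counter(blocks).items()}

def joint_empirical_dist(x, xhat, ell):
    """ell-th order joint empirical distribution of (x, xhat), as in Eq. (2)."""
    n = len(x)
    assert len(xhat) == n and n % ell == 0
    pairs = [(tuple(x[i:i + ell]), tuple(xhat[i:i + ell]))
             for i in range(0, n, ell)]
    return {uv: c * ell / n for uv, c in Counter(pairs).items()}

# Example: the 2nd-order empirical distribution of a binary sequence.
print(empirical_dist([0, 1, 0, 1, 1, 1], ell=2))  # {(0, 1): 2/3, (1, 1): 1/3}
```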
For a given positive integer $n$, a distortion function, $d$, is a function from $\mathcal{X}^n \times \hat{\mathcal{X}}^n$ into $[0, \infty)$. In the two main parts of this paper, different assumptions will be imposed on the distortion function.
1. For the achievability theorem, the distortion function can be completely arbitrary.
2. For the converse theorem, we assume that $d(\boldsymbol{x}, \hat{\boldsymbol{x}})$ depends on $\boldsymbol{x}$ and $\hat{\boldsymbol{x}}$ only via their first-order joint empirical distribution, $\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}}$, and that for a given such distribution, it grows linearly in $n$, that is, $d(\boldsymbol{x}, \hat{\boldsymbol{x}}) = n \cdot \delta(\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}})$, where the function $\delta(\cdot)$ is independent of $n$.
Regarding item 2, additive distortion measures, which obviously comply with the requirement, are given by linear functionals of $\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}}$. However, arbitrary non-linear functionals are allowed here as well.
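For instance (our illustration, not the paper's), the additive Hamming distortion is a linear functional of the first-order joint empirical distribution, and hence complies with assumption 2:

```python
from collections import Counter

def hamming_distortion(x, xhat):
    """Additive Hamming distortion: number of mismatched positions."""
    return sum(a != b for a, b in zip(x, xhat))

def distortion_via_joint_type(x, xhat):
    """The same distortion expressed as n times a linear functional of the
    first-order joint empirical distribution: d = n * sum_{u != v} P(u, v)."""
    n = len(x)
    joint = Counter(zip(x, xhat))           # n * P(u, v)
    return n * sum(c / n for (u, v), c in joint.items() if u != v)

x, xhat = [0, 1, 1, 0, 2], [0, 1, 0, 0, 2]
assert hamming_distortion(x, xhat) == distortion_via_joint_type(x, xhat) == 1
```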
A rate–distortion block code of length $n$ is a mapping, $\mathcal{E}:\ \mathcal{X}^n \to \mathcal{S}$, that maps the space of source vectors of length $n$, $\mathcal{X}^n$, into a set, $\mathcal{S}$, of variable-length compressed bit strings. The decoder is a mapping that maps the set of compressed variable-length binary strings into a reproduction codebook $\mathcal{C} \subseteq \hat{\mathcal{X}}^n$. A block code is called d-semifaithful if $d(\boldsymbol{x}, \hat{\boldsymbol{x}}) \le nD$ for every $\boldsymbol{x} \in \mathcal{X}^n$, where $\hat{\boldsymbol{x}}$ is the reproduction of $\boldsymbol{x}$. The code length for $\boldsymbol{x}$, denoted $L(\boldsymbol{x})$, is the number of bits of $\mathcal{E}(\boldsymbol{x})$. Since $\mathcal{E}(\boldsymbol{x})$ depends on $\boldsymbol{x}$ only via $\hat{\boldsymbol{x}}$, we will also denote it sometimes as $L(\hat{\boldsymbol{x}})$ ($\hat{\boldsymbol{x}}$ being the reproduction vector pertaining to $\boldsymbol{x}$), with a slight abuse of notation. For the converse theorem, we assume that the correspondence between $\hat{\boldsymbol{x}}$ and its compressed binary representation is one-to-one. For the achievability theorem, we consider prefix-free codes. Accordingly, the encoder can equivalently be presented as a cascade of a reproduction encoder (also known as a vector quantizer), which maps $\boldsymbol{x}$ into $\hat{\boldsymbol{x}}$, followed by an entropy coder, which maps $\hat{\boldsymbol{x}}$ into the compressed bit string with no additional loss of information.
For the purpose of presenting both the converse theorem and the achievability theorem, we need to recall a few terms and facts concerning the 1978 version of the LZ algorithm (also known as the LZ78 algorithm) [27]. The incremental parsing procedure of the LZ78 algorithm is a procedure of sequentially parsing a vector $\hat{\boldsymbol{x}}$ such that each new phrase is the shortest string that has not been encountered before as a parsed phrase, with the possible exception of the last phrase, which might be incomplete. For example, the incremental parsing of the vector $(0, 1, 0, 0, 0, 1, 1, 0, 1, 1)$ is $0, 1, 00, 01, 10, 11$. Let $c(\hat{\boldsymbol{x}})$ denote the number of phrases in $\hat{\boldsymbol{x}}$ resulting from the incremental parsing procedure. Let $LZ(\hat{\boldsymbol{x}})$ denote the length of the LZ78 binary compressed code for $\hat{\boldsymbol{x}}$. According to ([27], Theorem 2),
$$LZ(\hat{\boldsymbol{x}}) \le [c(\hat{\boldsymbol{x}}) + 1]\log\left\{2K[c(\hat{\boldsymbol{x}}) + 1]\right\} \le c(\hat{\boldsymbol{x}})\log c(\hat{\boldsymbol{x}}) + n\varepsilon(n), \tag{3}$$
where we remember that $K$ is the cardinality of $\hat{\mathcal{X}}$, and where $\varepsilon(n)$ clearly tends to zero as $n \to \infty$, at the rate of $O(1/\log n)$. We next define a universal probability distribution (see also [35,36]):
$$U(\hat{\boldsymbol{x}}) = \frac{2^{-LZ(\hat{\boldsymbol{x}})}}{\sum_{\hat{\boldsymbol{x}}' \in \hat{\mathcal{X}}^n} 2^{-LZ(\hat{\boldsymbol{x}}')}}. \tag{4}$$
Finally, we define the $D$-sphere around $\boldsymbol{x}$ as
$$\mathcal{S}(\boldsymbol{x}, D) = \left\{\hat{\boldsymbol{x}} \in \hat{\mathcal{X}}^n:\ d(\boldsymbol{x}, \hat{\boldsymbol{x}}) \le nD\right\} \tag{5}$$
and
$$U[\mathcal{S}(\boldsymbol{x}, D)] = \sum_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)} U(\hat{\boldsymbol{x}}). \tag{6}$$
For later use, we also define(7)
Our purpose is to derive upper and lower bounds on the smallest achievable code length, $L(\boldsymbol{x})$, for d-semifaithful block codes of length $n$, and individual sequences, $\boldsymbol{x}$, from a given $\ell$th-order type class, $\mathcal{T}(P)$. As will be seen shortly, in both the converse and the achievability theorems, the main term of the bound on the length function will be $-\log U[\mathcal{S}(\boldsymbol{x}, D)]$.
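Since $\hat{\mathcal{X}}^n$ has $K^n$ elements, the quantity $U[\mathcal{S}(\boldsymbol{x}, D)]$ can be computed exactly only for toy dimensions. The following brute-force sketch (ours; it reuses `lz_length_bound` from the previous sketch, so it substitutes the upper bound (3) for the exact LZ code length, and is meant only to illustrate the structure of (4)–(6)):

```python
import itertools, math

def u_sphere(x, D, K, distortion, lz_len=lz_length_bound):
    """Brute-force U[S(x, D)] of Eqs. (5)-(6): enumerate all K^n reproduction
    vectors, weight each by 2^{-LZ}, normalize, and sum over the D-sphere."""
    n = len(x)
    weights = {xh: 2.0 ** (-lz_len(xh, K))
               for xh in itertools.product(range(K), repeat=n)}
    total = sum(weights.values())
    in_sphere = sum(w for xh, w in weights.items()
                    if distortion(x, xh) <= n * D)
    return in_sphere / total

hamming = lambda x, xh: sum(a != b for a, b in zip(x, xh))
x = (0, 1, 1, 0, 1, 0)
# The main term of both bounds, -log U[S(x, D)], for a tiny example:
print(-math.log2(u_sphere(x, D=1/3, K=2, distortion=hamming)))
```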
3. The Converse Theorem
The following converse theorem asserts that even if the type class of the source vector were known to the decoder ahead of time, the code length could not be much smaller than $-\log U[\mathcal{S}(\boldsymbol{x}, D)]$ for the vast majority of the codewords pertaining to that type.
Theorem 1. Let $\ell$ be a positive integer that divides $n$ and let $P$ be an arbitrary empirical distribution pertaining to a certain type class, $\mathcal{T}(P)$, of source sequences in $\mathcal{X}^n$. Let $d$ be a distortion function that depends on $(\boldsymbol{x}, \hat{\boldsymbol{x}})$ only via $\hat{P}_{\boldsymbol{x}\hat{\boldsymbol{x}}}$. Then, for every d-semifaithful variable-length block code, with one-to-one correspondence between $\hat{\boldsymbol{x}}$ and its compressed binary representation, and for every $\epsilon > 0$, the following lower bound applies to a fraction of at least $1 - 2 \cdot 2^{-n\epsilon}$ of the codewords, $\hat{\boldsymbol{x}}$, that represent source sequences $\boldsymbol{x} \in \mathcal{T}(P)$:
$$L(\hat{\boldsymbol{x}}) \ge -\log U[\mathcal{S}(\boldsymbol{x}, D)] - n[\epsilon + \Delta_n(\ell)], \tag{8}$$
where $\Delta_n(\ell)$ has the property $\lim_{\ell \to \infty}\limsup_{n \to \infty}\Delta_n(\ell) = 0$.

As a technical note, observe that $\Delta_n(\ell)$ can be made small when $\ell$ is chosen to be large, as it behaves like $O\left(\frac{\log\ell}{\ell}\right)$ for fixed $\ell$ and large $n$. This suggests that the theorem is meaningful mainly when $\ell$ is appreciably large, which is not surprising, because the larger $\ell$ is, the better one can exploit empirical dependencies within the source sequence. Enlarging $\ell$ also increases the asymptotic lower bound and, thus, makes it tighter. An alternative approach would be to let $\ell = \ell_n$ grow with $n$, but sufficiently slowly such that $\Delta_n(\ell_n) \to 0$ as $n \to \infty$.
The remaining part of this section is devoted to the proof of Theorem 1. The proof is based on two main steps whose aim is to bypass the need for the traditional information-theoretic expression of the rate–distortion function. The first step is based on a simple counting argument, displayed in Equations (9)–(14) below, which leads to the identity (15) and, more specifically, to (16). Equation (16) is pivotal to the proof because it establishes exact equality between the sphere-covering ratio (on the left-hand side) and a quantity (on the right-hand side) that can be interpreted as one over the probability of falling into a $D$-sphere around $\boldsymbol{x}$ under random coding with respect to the uniform distribution over the type class, $\mathcal{T}(\tilde{P}^*)$. The sphere-covering ratio on the left-hand side belongs to the converse, and the expression on the right-hand side is strongly related to achievability (see Theorem 2 below), as it directly supports random coding. Therefore, these identities create the basic link between the converse and the direct theorems. The second step is to replace the uniform distribution over $\mathcal{T}(\tilde{P}^*)$ by the universal LZ distribution, $U$, and thereby relax the dependence upon $\tilde{P}^*$ (or $P$), as well as on the parameter $\ell$, without sacrificing the asymptotic performance. This proof is entirely different from the one in [34].
We first establish a relationship that will be used later on. For two given types, $P$ (of source vectors) and $\tilde{P}$ (of reproduction vectors), both of order $\ell$, consider the quantity,
$$N(P, \tilde{P}) = \sum_{\boldsymbol{x}' \in \mathcal{T}(P)}\sum_{\hat{\boldsymbol{x}} \in \mathcal{T}(\tilde{P})}\mathcal{I}\left\{d(\boldsymbol{x}', \hat{\boldsymbol{x}}) \le nD\right\}. \tag{9}$$
We can evaluate $N(P, \tilde{P})$ in two ways. The first is as follows:
$$N(P, \tilde{P}) = \sum_{\boldsymbol{x}' \in \mathcal{T}(P)}\left|\mathcal{S}(\boldsymbol{x}', D) \cap \mathcal{T}(\tilde{P})\right| \tag{10}$$
$$= |\mathcal{T}(P)| \cdot \left|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P})\right|, \quad \boldsymbol{x} \in \mathcal{T}(P), \tag{11}$$
where the second equality is since $|\mathcal{S}(\boldsymbol{x}', D) \cap \mathcal{T}(\tilde{P})|$ is the same for all $\boldsymbol{x}' \in \mathcal{T}(P)$, due to the permutation-invariance assumption on the distortion function. By the same token, we can also express $N(P, \tilde{P})$ in the following manner:
$$N(P, \tilde{P}) = \sum_{\hat{\boldsymbol{x}} \in \mathcal{T}(\tilde{P})}\left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right| \tag{12}$$
$$= |\mathcal{T}(\tilde{P})| \cdot \left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right|, \quad \hat{\boldsymbol{x}} \in \mathcal{T}(\tilde{P}), \tag{13}$$
where $\mathcal{S}(\hat{\boldsymbol{x}}, D) = \{\boldsymbol{x}':\ d(\boldsymbol{x}', \hat{\boldsymbol{x}}) \le nD\}$, which follows from the same consideration by symmetry. It follows then that
$$|\mathcal{T}(P)| \cdot \left|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P})\right| = |\mathcal{T}(\tilde{P})| \cdot \left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right| \tag{14}$$
or, equivalently,
$$\frac{|\mathcal{T}(P)|}{\left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right|} = \frac{|\mathcal{T}(\tilde{P})|}{\left|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P})\right|}. \tag{15}$$
Now, let $\tilde{P}^*$ be the type of $\hat{\boldsymbol{x}}$ that maximizes $\frac{|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P})|}{|\mathcal{T}(\tilde{P})|}$. Then, the last equation implies that
$$\frac{|\mathcal{T}(P)|}{\max_{\hat{\boldsymbol{x}}}\left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right|} = \frac{|\mathcal{T}(\tilde{P}^*)|}{\left|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P}^*)\right|}. \tag{16}$$
This relationship will be used shortly.
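The identity (15) is easy to verify by exhaustive enumeration in toy cases. A sketch (ours), using first-order types ($\ell = 1$) and the Hamming distortion:

```python
import itertools
from collections import Counter

def type_class(P_counts, n, alphabet):
    """All sequences of length n whose first-order empirical counts equal P_counts."""
    return [s for s in itertools.product(alphabet, repeat=n)
            if Counter(s) == P_counts]

hamming = lambda x, xh: sum(a != b for a, b in zip(x, xh))
n, D = 4, 0.25
TP  = type_class(Counter({0: 2, 1: 2}), n, (0, 1))   # source type class T(P)
TPt = type_class(Counter({0: 3, 1: 1}), n, (0, 1))   # reproduction type class T(P~)

x, xhat = TP[0], TPt[0]
lhs = len(TP)  / sum(hamming(xp, xhat) <= n * D for xp in TP)   # |T(P)| / |S(xhat,D) n T(P)|
rhs = len(TPt) / sum(hamming(x, xh) <= n * D for xh in TPt)     # |T(P~)| / |S(x,D) n T(P~)|
assert abs(lhs - rhs) < 1e-9     # both equal 2 in this example
```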
Let $\epsilon > 0$ be given. Any d-semifaithful code must fully cover the type class $\mathcal{T}(P)$ with spheres of radius $nD$ (henceforth referred to as $D$-spheres) centered at the various codewords. Let $\hat{\boldsymbol{x}}_1, \ldots, \hat{\boldsymbol{x}}_M$ be $M$ codewords. The number of members of $\mathcal{T}(P)$ that are covered by $\bigcup_{i=1}^M\mathcal{S}(\hat{\boldsymbol{x}}_i, D)$ is upper-bounded as follows.
$$\left|\mathcal{T}(P) \cap \bigcup_{i=1}^M\mathcal{S}(\hat{\boldsymbol{x}}_i, D)\right| \le \sum_{i=1}^M\left|\mathcal{S}(\hat{\boldsymbol{x}}_i, D) \cap \mathcal{T}(P)\right| \le M \cdot \max_{\hat{\boldsymbol{x}}}\left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right| \tag{17}$$
and so, the necessary condition for complete covering, which is $\left|\mathcal{T}(P) \cap \bigcup_{i=1}^M\mathcal{S}(\hat{\boldsymbol{x}}_i, D)\right| = |\mathcal{T}(P)|$, amounts to
$$M \ge \frac{|\mathcal{T}(P)|}{\max_{\hat{\boldsymbol{x}}}\left|\mathcal{S}(\hat{\boldsymbol{x}}, D) \cap \mathcal{T}(P)\right|} = \frac{|\mathcal{T}(\tilde{P}^*)|}{\left|\mathcal{S}(\boldsymbol{x}, D) \cap \mathcal{T}(\tilde{P}^*)\right|}, \tag{18}$$
where the second equality is from (16). Consider now a variable-length code with a codebook of size $M$. Let $L(\hat{\boldsymbol{x}}_i)$ denote the length (in bits) of the compressed binary string that represents $\hat{\boldsymbol{x}}_i$. The number of codewords with $L(\hat{\boldsymbol{x}}_i) \le \log M - n\epsilon$ is upper-bounded as follows:
$$\left|\{i:\ L(\hat{\boldsymbol{x}}_i) \le \log M - n\epsilon\}\right| \le \sum_{l=0}^{\lfloor\log M - n\epsilon\rfloor} 2^l < 2^{\log M - n\epsilon + 1} = 2M \cdot 2^{-n\epsilon}, \tag{19}$$
where in the first inequality we have used the assumed one-to-one property of the mapping between the reproduction codewords and their variable-length compressed binary representations. It follows then, that for at least $M(1 - 2 \cdot 2^{-n\epsilon})$ out of the $M$ codewords in $\mathcal{C}$ (that is, the vast majority of codewords), we have
$$L(\hat{\boldsymbol{x}}_i) > \log M - n\epsilon \ge -\log U_{\tilde{P}^*}[\mathcal{S}(\boldsymbol{x}, D)] - n\epsilon, \tag{20}$$
where $U_{\tilde{P}^*}$ is the uniform probability distribution across the type class $\mathcal{T}(\tilde{P}^*)$, i.e.,
$$U_{\tilde{P}^*}(\hat{\boldsymbol{x}}) = \frac{\mathcal{I}\{\hat{\boldsymbol{x}} \in \mathcal{T}(\tilde{P}^*)\}}{|\mathcal{T}(\tilde{P}^*)|}. \tag{21}$$
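The counting step (19) is just the observation that there are fewer than $2^{k+1}$ binary strings of length at most $k$. A numerical sanity check (ours):

```python
import math

def num_strings_up_to(k):
    """Number of distinct binary strings of length 0, 1, ..., k."""
    return sum(2 ** l for l in range(k + 1))   # = 2^(k+1) - 1 < 2^(k+1)

M, n, eps = 2 ** 20, 100, 0.05
k = math.floor(math.log2(M) - n * eps)         # log M - n*eps = 15
# At most 2M * 2^{-n*eps} codewords can receive descriptions this short:
assert num_strings_up_to(k) < 2 * M * 2 ** (-n * eps)
```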
We now argue that for every $\hat{\boldsymbol{x}} \in \hat{\mathcal{X}}^n$,
$$U_{\tilde{P}^*}(\hat{\boldsymbol{x}}) \le 2^{-LZ(\hat{\boldsymbol{x}})} \cdot 2^{n\Delta_n(\ell)}. \tag{22}$$
For $\hat{\boldsymbol{x}} \notin \mathcal{T}(\tilde{P}^*)$, this is trivial as the left-hand side is equal to zero. For $\hat{\boldsymbol{x}} \in \mathcal{T}(\tilde{P}^*)$, we have the following consideration. In ([37], Equation (30)), it is shown (with somewhat different notation) that a certain quantity called the (unnormalized) $s$-state complexity [27] of $\hat{\boldsymbol{x}}$, $\rho_s(\hat{\boldsymbol{x}})$, for $s = K^\ell$ states, is upper-bounded by
$$\rho_s(\hat{\boldsymbol{x}}) \le \frac{n\hat{H}(\tilde{P}^*)}{\ell} + n\mu_n(\ell), \tag{23}$$
where
$$\hat{H}(\tilde{P}^*) = -\sum_{v \in \hat{\mathcal{X}}^\ell}\tilde{P}^*(v)\log\tilde{P}^*(v) \tag{24}$$
and where $\mu_n(\ell)$ tends, as $n \to \infty$, to a quantity of the order of $\frac{\log\ell}{\ell}$. On the other hand, $\rho_s(\hat{\boldsymbol{x}})$ is lower-bounded (see [37], Equation (32)) by
$$\rho_s(\hat{\boldsymbol{x}}) \ge c(\hat{\boldsymbol{x}})\log\frac{c(\hat{\boldsymbol{x}})}{4s^2} = c(\hat{\boldsymbol{x}})\log c(\hat{\boldsymbol{x}}) - n\xi_n, \tag{25}$$
where $\xi_n \triangleq \frac{c(\hat{\boldsymbol{x}})\log(4s^2)}{n} = O\left(\frac{\ell}{\log n}\right)$, and so,
$$c(\hat{\boldsymbol{x}})\log c(\hat{\boldsymbol{x}}) \le \frac{n\hat{H}(\tilde{P}^*)}{\ell} + n[\mu_n(\ell) + \xi_n]. \tag{26}$$
Combining this with the inequality ([38], p. 17, Lemma 2.3),
$$|\mathcal{T}(\tilde{P}^*)| \ge \left(\frac{n}{\ell} + 1\right)^{-K^\ell} \cdot 2^{n\hat{H}(\tilde{P}^*)/\ell}, \tag{27}$$
we have
$$U_{\tilde{P}^*}(\hat{\boldsymbol{x}}) = \frac{1}{|\mathcal{T}(\tilde{P}^*)|} \le \left(\frac{n}{\ell} + 1\right)^{K^\ell} \cdot 2^{-n\hat{H}(\tilde{P}^*)/\ell} \le \left(\frac{n}{\ell} + 1\right)^{K^\ell} \cdot 2^{-c(\hat{\boldsymbol{x}})\log c(\hat{\boldsymbol{x}}) + n[\mu_n(\ell) + \xi_n]} \le 2^{-LZ(\hat{\boldsymbol{x}})} \cdot 2^{n\Delta_n(\ell)}, \tag{28}$$
where
$$\Delta_n(\ell) \triangleq \mu_n(\ell) + \xi_n + \varepsilon(n) + \frac{K^\ell\log(n/\ell + 1)}{n} \tag{29}$$
and where the second inequality in (28) follows from (26) and the third one from (3). The last line of (28) is equivalent to (22). It follows then, that for at least $M(1 - 2 \cdot 2^{-n\epsilon})$ out of the $M$ codewords in $\mathcal{C}$,
$$L(\hat{\boldsymbol{x}}_i) > \log M - n\epsilon \ge -\log U_{\tilde{P}^*}[\mathcal{S}(\boldsymbol{x}, D)] - n\epsilon \ge -\log\left[2^{n\Delta_n(\ell)}\sum_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}2^{-LZ(\hat{\boldsymbol{x}})}\right] - n\epsilon \ge -\log U[\mathcal{S}(\boldsymbol{x}, D)] - n[\epsilon + \Delta_n(\ell)], \tag{30}$$
where in the last step we have applied Kraft’s inequality to the LZ code-length function (so that the normalization constant in (4) is no larger than one and, hence, $U(\hat{\boldsymbol{x}}) \ge 2^{-LZ(\hat{\boldsymbol{x}})}$). This completes the proof of Theorem 1. □

4. The Achievability Theorem
The lower bound of Theorem 1 naturally suggests achievability using the universal distribution, $U$, for random selection of the various codewords. The basic idea is quite standard and simple: the quantity $U[\mathcal{S}(\boldsymbol{x}, D)]$ is the probability that a single randomly chosen reproduction vector, drawn under $U$, would fall within distance $nD$ from the source vector, $\boldsymbol{x}$. If all reproduction vectors are drawn independently under $U$, then the typical number of such random selections that it takes before one sees the first one in $\mathcal{S}(\boldsymbol{x}, D)$ is of the exponential order of $1/U[\mathcal{S}(\boldsymbol{x}, D)]$. Given that the codebook is revealed to both the encoder and decoder once it has been selected, the encoder merely needs to transmit the index of the first such reproduction vector within the codebook, and the description length of that index can be made essentially as small as $-\log U[\mathcal{S}(\boldsymbol{x}, D)]$. We use this simple idea to prove achievability for an arbitrary distortion measure. The proof is similar to the parallel proof in [34], and it is presented here mainly for completeness.
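In code, the ensemble and the encoder can be sketched as follows (ours; a toy instance of the random coding idea, where `sample_from_U` is a placeholder for a sampler of the universal distribution, one concrete construction of which is discussed in Section 5):

```python
import random

def build_codebook(sample_from_U, M, seed=0):
    """Draw M codewords i.i.d. from the universal distribution U. The same
    seed is shared by encoder and decoder, so both regenerate an identical
    codebook."""
    rng = random.Random(seed)
    return [sample_from_U(rng) for _ in range(M)]

def encode(x, codebook, distortion, D):
    """Return the index of the first codeword within per-letter distortion D;
    i + 1 plays the role of N(x) and is sent with a variable-length integer
    code."""
    n = len(x)
    for i, xhat in enumerate(codebook):
        if distortion(x, xhat) <= n * D:
            return i
    return None        # encoding failure (rare when the codebook is large enough)

def decode(i, codebook):
    return codebook[i]
```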
The achievability theorem is the following.
Theorem 2. Let $d$ be an arbitrary distortion function. Then, for every $D \ge 0$, there exists a sequence of d-semifaithful, variable-length block codes of block length $n$, such that for every $\boldsymbol{x} \in \mathcal{X}^n$, the code length for $\boldsymbol{x}$ is upper-bounded by
$$L(\boldsymbol{x}) \le -\log U[\mathcal{S}(\boldsymbol{x}, D)] + c_0 + n\epsilon_n, \tag{31}$$
where $c_0$ is a constant and $\epsilon_n \to 0$ as $n \to \infty$.

The proof is based on the following simple, well-known fact: given a source vector $\boldsymbol{x}$ and a codebook, $\mathcal{C}_n = \{\hat{\boldsymbol{x}}_1, \hat{\boldsymbol{x}}_2, \ldots\}$, let $N(\boldsymbol{x})$ denote the index, $i$, of the first vector, $\hat{\boldsymbol{x}}_i$, such that $\hat{\boldsymbol{x}}_i \in \mathcal{S}(\boldsymbol{x}, D)$, namely, $N(\boldsymbol{x}) = \min\{i:\ \hat{\boldsymbol{x}}_i \in \mathcal{S}(\boldsymbol{x}, D)\}$. If all reproduction vectors are drawn independently under $U$, then, for every positive integer, $N$:
$$\Pr\{N(\boldsymbol{x}) > N\} = \left(1 - U[\mathcal{S}(\boldsymbol{x}, D)]\right)^N \le \exp\left\{-N \cdot U[\mathcal{S}(\boldsymbol{x}, D)]\right\}, \tag{32}$$
and so, if $N = \frac{t_n}{U[\mathcal{S}(\boldsymbol{x}, D)]}$, for some arbitrary positive sequence, $\{t_n\}$, that tends to infinity, then
$$\Pr\{N(\boldsymbol{x}) > N\} \le e^{-t_n}. \tag{33}$$
This fact will be used a few times in this section. □

For later use, we also need the following uniform lower bound to $U[\mathcal{S}(\boldsymbol{x}, D)]$: for a given $\boldsymbol{x}$, let $\hat{\boldsymbol{x}}_0$ denote an arbitrary reproduction vector within $\mathcal{S}(\boldsymbol{x}, D)$. Then,
$$U[\mathcal{S}(\boldsymbol{x}, D)] \ge U(\hat{\boldsymbol{x}}_0) \ge 2^{-LZ(\hat{\boldsymbol{x}}_0)} \ge 2^{-\max_{\hat{\boldsymbol{x}} \in \hat{\mathcal{X}}^n}LZ(\hat{\boldsymbol{x}})},$$
where the second inequality holds since the normalization constant in (4) is no larger than one, by Kraft’s inequality. Next, observe that $c(\hat{\boldsymbol{x}})$ is maximized by the K-ary extension of the counting sequence ([27], p. 532), which is defined as follows: for a positive integer $m$, let $\sigma_j$, $j = 1, \ldots, m$, denote the K-ary string of length $jK^j$ that lists, say, in lexicographic order, all the words from $\hat{\mathcal{X}}^j$, and let $\boldsymbol{u}_m = \sigma_1\sigma_2\cdots\sigma_m$, whose length is
$$n_m = \sum_{j=1}^m jK^j. \tag{37}$$
The LZ incremental parsing of $\boldsymbol{u}_m$, which is exactly the list of all K-ary words of lengths $1$ through $m$, yields
$$c(\boldsymbol{u}_m) = \sum_{j=1}^m K^j, \tag{38}$$
and so, considering Equation (3), it follows that $\max_{\hat{\boldsymbol{x}}}LZ(\hat{\boldsymbol{x}}) \le n[\log K + \epsilon'(n)]$ for some $\epsilon'(n) \to 0$ as $n \to \infty$. It follows then, that
$$U[\mathcal{S}(\boldsymbol{x}, D)] \ge 2^{-n[\log K + \epsilon'(n)]}. \tag{39}$$
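The counting sequence is easy to generate and check (our illustration; `lz78_phrases` is the parser from the sketch in Section 2):

```python
import itertools

def counting_sequence(K, m):
    """Concatenation of all K-ary words of length 1, 2, ..., m in
    lexicographic order; its LZ78 parse is exactly that list of words."""
    seq = []
    for j in range(1, m + 1):
        for word in itertools.product(range(K), repeat=j):
            seq.extend(word)
    return seq

u = counting_sequence(K=2, m=3)
assert len(u) == sum(j * 2 ** j for j in range(1, 4))            # Eq. (37): 34
assert len(lz78_phrases(u)) == sum(2 ** j for j in range(1, 4))  # Eq. (38): 14
```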
For an alternative to this upper bound on the LZ code length, one can slightly modify the LZ algorithm as follows: if $LZ(\hat{\boldsymbol{x}}) < n\lceil\log K\rceil$, use the LZ algorithm as usual; otherwise, send $\hat{\boldsymbol{x}}$ uncompressed, using $n\lceil\log K\rceil$ bits. To distinguish between the two modes of operation, append a flag bit to indicate whether or not the data are LZ-compressed. The modified code length would then be $\min\{LZ(\hat{\boldsymbol{x}}),\ n\lceil\log K\rceil\} + 1$. Now, replace $LZ(\hat{\boldsymbol{x}})$ by $\min\{LZ(\hat{\boldsymbol{x}}),\ n\lceil\log K\rceil\} + 1$ in all places throughout this paper, including the definition of $U$.
Consider now an independent random selection of all reproduction vectors to form a codebook, $\mathcal{C}_n$, of size $M = 2^{nR}$ (with $R > \log K$) codewords, $\hat{\boldsymbol{x}}_1, \ldots, \hat{\boldsymbol{x}}_M$, according to $U$. Once the codebook has been drawn, it is revealed to both the encoder and the decoder. Consider next the following encoder. As defined before, let $N(\boldsymbol{x})$ be defined as the index of the first codeword that falls within $\mathcal{S}(\boldsymbol{x}, D)$, but now, with the small modification that if none of the codewords fall in $\mathcal{S}(\boldsymbol{x}, D)$, then we define $N(\boldsymbol{x}) = M$ anyway (and then the encoding fails). Next, we define the following probability distribution over the positive integers, $\{W(i),\ i = 1, 2, \ldots\}$:
$$W(i) = \frac{A}{i[\log(i+1)]^2}, \quad A = \left[\sum_{i=1}^\infty\frac{1}{i[\log(i+1)]^2}\right]^{-1}. \tag{40}$$
Given $\boldsymbol{x}$, the encoder finds $N(\boldsymbol{x})$ and encodes it using a variable-rate lossless code with the length function (in bits, and ignoring the integer length constraint)
$$\lambda(i) = -\log W(i) = \log i + 2\log\log(i+1) + a, \tag{41}$$
where $a = \log(1/A)$. It follows that the expected codeword length for $\boldsymbol{x}$ (with respect to the randomness of the code) is upper-bounded by
$$\boldsymbol{E}\{\lambda(N(\boldsymbol{x}))\} \le \lambda\left(\boldsymbol{E}\{N(\boldsymbol{x})\}\right) \le \log\frac{1}{U[\mathcal{S}(\boldsymbol{x}, D)]} + 2\log\log\left(\frac{1}{U[\mathcal{S}(\boldsymbol{x}, D)]} + 1\right) + a, \tag{42}$$
and we denote
$$L_0(\boldsymbol{x}) \triangleq \log\frac{1}{U[\mathcal{S}(\boldsymbol{x}, D)]} + 2\log\log\left(\frac{1}{U[\mathcal{S}(\boldsymbol{x}, D)]} + 1\right) + a. \tag{43}$$
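A concrete prefix code for the positive integers whose length behaves like (41) is the Elias delta code; the following sketch (ours; the paper does not prescribe this particular code) shows it:

```python
def elias_gamma(i):
    """Elias gamma code: floor(log2 i) zeros, then the binary form of i."""
    b = bin(i)[2:]
    return "0" * (len(b) - 1) + b

def elias_delta(i):
    """Elias delta code for i >= 1: gamma-code the bit length of i, then
    append the binary digits of i without the leading 1. Its length is
    about log i + 2 log log i, matching the form of (41)."""
    b = bin(i)[2:]
    return elias_gamma(len(b)) + b[1:]

# Code lengths grow like log i + 2 log log i:
for i in (1, 10, 1000, 10 ** 6):
    print(i, len(elias_delta(i)))
```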
Consider now the quantity
$$V_n \triangleq \boldsymbol{E}\left\{\max_{\boldsymbol{x} \in \mathcal{X}^n}\max\left[\mathcal{I}\left\{\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \notin \mathcal{S}(\boldsymbol{x}, D)\right\},\ \left[\frac{L(\boldsymbol{x}) - L_0(\boldsymbol{x})}{n} - \epsilon_n\right]_+\right]\right\}, \tag{44}$$
where the expectation is with respect to the randomness of the code $\mathcal{C}_n$, and $\epsilon_n = O\left(\frac{\log n}{n}\right)$ is the same vanishing sequence as in (31). If $V_n$ can be upper-bounded by some $\delta_n$, which tends to zero as $n \to \infty$, this will imply that there must exist a code for which both
$$\max_{\boldsymbol{x} \in \mathcal{X}^n}\mathcal{I}\left\{\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \notin \mathcal{S}(\boldsymbol{x}, D)\right\} \le \delta_n \tag{45}$$
and
$$\max_{\boldsymbol{x} \in \mathcal{X}^n}\left[\frac{L(\boldsymbol{x}) - L_0(\boldsymbol{x})}{n} - \epsilon_n\right]_+ \le \delta_n \tag{46}$$
at the same time. Observe that since the left-hand side of (45) is either zero or one, then if we know that it must be less than $\delta_n$ for some codebook $\mathcal{C}_n$, it means that it must vanish as soon as $n$ is large enough such that $\delta_n < 1$, namely, $\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \in \mathcal{S}(\boldsymbol{x}, D)$ for all $\boldsymbol{x} \in \mathcal{X}^n$; in other words, the code is d-semifaithful. Furthermore, by (46), for the same codebook, we must have
$$L(\boldsymbol{x}) \le L_0(\boldsymbol{x}) + n\epsilon_n + n\delta_n \quad \text{for every } \boldsymbol{x} \in \mathcal{X}^n, \tag{47}$$
and $n\delta_n$ adds a negligible redundancy term.

To prove that $V_n \to 0$, we first use the simple fact that the maximum of two non-negative numbers is upper-bounded by their sum, i.e.,
$$V_n \le \boldsymbol{E}\left\{\max_{\boldsymbol{x} \in \mathcal{X}^n}\mathcal{I}\left\{\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \notin \mathcal{S}(\boldsymbol{x}, D)\right\}\right\} + \boldsymbol{E}\left\{\max_{\boldsymbol{x} \in \mathcal{X}^n}\left[\frac{L(\boldsymbol{x}) - L_0(\boldsymbol{x})}{n} - \epsilon_n\right]_+\right\} \tag{48}$$
and, therefore, it is sufficient to prove that each one of these terms tends to zero. As for the first term, we have:
$$\boldsymbol{E}\left\{\max_{\boldsymbol{x}}\mathcal{I}\left\{\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \notin \mathcal{S}(\boldsymbol{x}, D)\right\}\right\} \le \sum_{\boldsymbol{x} \in \mathcal{X}^n}\Pr\left\{\hat{\boldsymbol{x}}_{N(\boldsymbol{x})} \notin \mathcal{S}(\boldsymbol{x}, D)\right\} \le J^n\left[1 - \min_{\boldsymbol{x}}U[\mathcal{S}(\boldsymbol{x}, D)]\right]^M \stackrel{\text{(a)}}{\le} J^n\exp\left\{-2^{nR} \cdot 2^{-n[\log K + \epsilon'(n)]}\right\}, \tag{49}$$
where in (a) we have used (39). This quantity decays double-exponentially rapidly as $n \to \infty$ since we have assumed $R > \log K$.

As for the second term of (48), we have:
$$\boldsymbol{E}\left\{\max_{\boldsymbol{x}}\left[\frac{L(\boldsymbol{x}) - L_0(\boldsymbol{x})}{n} - \epsilon_n\right]_+\right\} \stackrel{\text{(a)}}{\le} [R + O(1)]\sum_{\boldsymbol{x} \in \mathcal{X}^n}\Pr\left\{N(\boldsymbol{x}) > \frac{n^2}{U[\mathcal{S}(\boldsymbol{x}, D)]}\right\} \stackrel{\text{(b)}}{\le} [R + O(1)] \cdot J^n e^{-n^2}, \tag{50}$$
where in (a) we have used (41) and (43), and in (b) we have used (33) with $t_n = n^2$. The right-most side of this chain of inequalities clearly decays as well when $n$ grows without bound. This completes the proof. □

5. Summary and Discussion
By deriving asymptotically matching upper and lower bounds, we have established the quantity $-\frac{1}{n}\log U[\mathcal{S}(\boldsymbol{x}, D)]$ as having the significance of an empirical rate–distortion function for individual sequences. While this quantity is not easy to calculate for large $n$, the operative meaning of our results is that we propose a universal ensemble for rate–distortion coding. According to this ensemble, the codewords are drawn independently under the probability distribution that is proportional to $2^{-LZ(\hat{\boldsymbol{x}})}$.
There are several observations, insights, and perspectives that should be addressed.
Relation to earlier converse bounds. The converse bound is given in terms of the probability of a sphere of radius $nD$ around the source vector $\boldsymbol{x}$, under the universal distribution, $U$, defined in (4). This is intimately related to a converse result due to Kontoyiannis and Zhang ([22], Theorem 1, part i), which states that for any d-semifaithful code, there exists a probability distribution $Q$ on $\hat{\mathcal{X}}^n$ such that $L(\boldsymbol{x}) \ge -\log Q[\mathcal{S}(\boldsymbol{x}, D)]$ for all $\boldsymbol{x}$ (see also [21]). Here, upon giving up any claims on a minority of the codewords pertaining to a given type class, we derived a lower bound of essentially the same form, with the benefit of specifying a concrete choice of the distribution $Q$: we propose $Q = U$, the universal distribution (unlike the distribution in [22] (Section III.A), which is proportional to $2^{-L(\hat{\boldsymbol{x}})}$ across the codebook).
Interpretation of the main term of the bound. First, observe that in the special case of lossless compression ($D = 0$, with $\hat{\mathcal{X}} = \mathcal{X}$), the expression $-\log U[\mathcal{S}(\boldsymbol{x}, D)]$ boils down to $-\log U(\boldsymbol{x})$, which is essentially $LZ(\boldsymbol{x})$, as expected. In the lossy case, since $LZ(\hat{\boldsymbol{x}})$ is essentially bounded by a linear function of $n$ (see (39)), we can approximate the main term as follows:
$$-\log U[\mathcal{S}(\boldsymbol{x}, D)] \approx \min_{0 \le R \le \log K}\left\{nR - \log\left|\left\{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D):\ LZ(\hat{\boldsymbol{x}}) \approx nR\right\}\right|\right\}. \tag{51}$$
This expression, when normalized by $n$, can be viewed as a certain extension of the rate–distortion function, from the memoryless case to the general case, in the following sense. For a memoryless source $P$, the rate–distortion function has the following representation, which is parallel to (51):
$$R(D) = \min_{Q}\left\{H(Q) - \max\left[H(\hat{X}|X):\ P_{\hat{X}} = Q,\ \boldsymbol{E}\,d(X, \hat{X}) \le D\right]\right\}, \tag{52}$$
where the maximum over the empty set is understood to be $-\infty$. Indeed, if we replace $U$ by the uniform distribution across the first-order type pertaining to the optimal $Q$, this is the corresponding single-letter expression of $R(D)$ that is obtained using the method in [38].

Comparing to the LZ description length of the most compressible $\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)$. Since our achievable bound involves LZ compression, it is interesting to compare it to the conceptually simple coding scheme that encodes $\boldsymbol{x}$ by the vector that minimizes $LZ(\hat{\boldsymbol{x}})$ within $\mathcal{S}(\boldsymbol{x}, D)$. Consider the following chain of equalities and inequalities:
$$-\log U[\mathcal{S}(\boldsymbol{x}, D)] = -\log\sum_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}U(\hat{\boldsymbol{x}}) \le -\log\max_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}U(\hat{\boldsymbol{x}}) \le -\log\max_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}2^{-LZ(\hat{\boldsymbol{x}})} = \min_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}LZ(\hat{\boldsymbol{x}}), \tag{53}$$
which means that the performance of our proposed scheme is never worse (and conceivably often much better) than that of selecting the vector with the smallest $LZ(\hat{\boldsymbol{x}})$ among all reproduction vectors in $\mathcal{S}(\boldsymbol{x}, D)$. The reason for the superiority of the proposed scheme is that it takes advantage of the fact that $\hat{\boldsymbol{x}}$ cannot be any vector in $\hat{\mathcal{X}}^n$, but it must be a member of the codebook, $\mathcal{C}_n$, i.e., one of the possible outputs of a vector quantizer. On the other hand, in view of [27], $\min_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}LZ(\hat{\boldsymbol{x}})$ is essentially achievable upon compressing the output of a certain reproduction encoder (or vector quantizer) using a finite-state encoder, but a finite-state machine does not have enough memory resources to take advantage of the fact that vectors outside $\mathcal{C}_n$ cannot be encountered by the encoder. Another interesting comparison between the two schemes is in terms of computational complexity. While in our scheme, the encoder has to carry out typically about $1/U[\mathcal{S}(\boldsymbol{x}, D)]$ distortion calculations before finding the first $\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)$, in the alternative scheme the number of calculations is $|\mathcal{S}(\boldsymbol{x}, D)|$. The former is a decreasing function of $D$, whereas the latter is an increasing function of $D$. Therefore, in terms of computational complexity, the preference between the two schemes might depend on $D$. Specifically, for an additive distortion measure, it is easy to see that
$$\frac{1}{U[\mathcal{S}(\boldsymbol{x}, D)]} \le 2^{\min_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}LZ(\hat{\boldsymbol{x}})} \tag{54}$$
and, by the method of types [38]:
$$|\mathcal{S}(\boldsymbol{x}, D)| \doteq 2^{n\max\{H(\hat{X}|X):\ \boldsymbol{E}\,d(X, \hat{X}) \le D\}}, \tag{55}$$
where the entropy and the expectation are with respect to the empirical distribution of $\boldsymbol{x}$ and the conditional distribution of $\hat{X}$ given $X$. Therefore, whenever $D$ is large enough such that $\min_{\hat{\boldsymbol{x}} \in \mathcal{S}(\boldsymbol{x}, D)}LZ(\hat{\boldsymbol{x}}) \le n\max\{H(\hat{X}|X):\ \boldsymbol{E}\,d(X, \hat{X}) \le D\}$, it is guaranteed that the coding scheme proposed here is computationally less demanding than the alternative scheme of minimizing $LZ(\hat{\boldsymbol{x}})$ across $\mathcal{S}(\boldsymbol{x}, D)$.

Implementation of the random coding distribution. The universal random coding distribution is not difficult to implement. One way to achieve this is by feeding the LZ decoder with a sequence of purely random bits (fair coin tosses) until we have obtained $n$ symbols at the decoder output. The details can be found in [36].
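In the spirit of this remark, here is a much-simplified toy sampler (ours; the exact construction, including its termination and normalization details, is in [36]):

```python
import random

def sample_via_lz_decoder(n, K, seed=0):
    """Toy sampler in the spirit of [36]: drive an LZ78-style decoder with
    random input. Each step uses random draws (derived from fair coin flips)
    to select a past phrase and a fresh extension symbol, exactly as an LZ78
    decoder parses its input stream; more LZ-compressible outputs are thereby
    more likely, mimicking U proportional to 2^{-LZ}. Boundary and
    termination details of the precise construction are omitted here."""
    rng = random.Random(seed)
    dictionary = [()]              # phrase 0 is the empty phrase
    out = []
    while len(out) < n:
        idx = rng.randrange(len(dictionary))
        sym = rng.randrange(K)
        phrase = dictionary[idx] + (sym,)
        dictionary.append(phrase)
        out.extend(phrase)
    return out[:n]

print(sample_via_lz_decoder(n=20, K=2))
```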
Universality with respect to the distortion measure. As mentioned in the Introduction, in [23,24,34], there are results on the existence of rate–distortion codes that are universal, not only in terms of the source, but also in the sense of the distortion measure. Since the proof of our achievability scheme is very similar to that of [34], it is possible to extend the achievability proof here too, so as to make our code distortion-universal for a wide class of distortion measures. This can be carried out by redefining $V_n$ to include the maximization of both terms over a dense grid of distortion functions, as was done in [34]. We opted not to include this in the present paper since it is straightforward given the results we already have here and in [34].
References
1. Berger, T. Rate Distortion Theory—A Mathematical Basis for Data Compression; Prentice-Hall Inc.: Englewood Cliffs, NJ, USA, 1971.
2. Cover, T.M.; Thomas, J.A. Elements of Information Theory; John Wiley & Sons: Hoboken, NJ, USA, 2006.
3. Gallager, R.G. Information Theory and Reliable Communication; John Wiley & Sons: New York, NY, USA, 1968.
4. Gray, R.M. Source Coding Theory; Kluwer Academic Publishers: Boston, MA, USA, 1990.
5. Viterbi, A.J.; Omura, J.K. Principles of Digital Communication and Coding; McGraw-Hill Inc.: New York, NY, USA, 1979.
6. Davisson, L.D. Universal noiseless coding. IEEE Trans. Inf. Theory; 1973; 19, pp. 783-795. [DOI: https://dx.doi.org/10.1109/TIT.1973.1055092]
7. Gallager, R.G. Source Coding with Side Information and Universal Coding; LIDS-P-937 Massachusetts Institute of Technology: Cambridge, MA, USA, 1976.
8. Ryabko, B. Coding of a source with unknown but ordered probabilities. Probl. Inf. Transm.; 1979; 15, pp. 134-138.
9. Davisson, L.D.; Leon-Garcia, A. A source matching approach to finding minimax codes. IEEE Trans. Inf. Theory; 1980; 26, pp. 166-174. [DOI: https://dx.doi.org/10.1109/TIT.1980.1056167]
10. Krichevsky, R.E.; Trofimov, R.K. The performance of universal encoding. IEEE Trans. Inf. Theory; 1981; 27, pp. 199-207. [DOI: https://dx.doi.org/10.1109/TIT.1981.1056331]
11. Shtar’kov, Y.M. Universal sequential coding of single messages. Probl. Inf. Transm.; 1987; 23, pp. 175-186.
12. Barron, A.R.; Rissanen, J.; Yu, B. The minimum description length principle in coding and modeling. IEEE Trans. Inf. Theory; 1998; 44, pp. 2734-2760. [DOI: https://dx.doi.org/10.1109/18.720554]
13. Yang, Y.; Barron, A.R. Information-theoretic determination of minimax rates of convergence. Ann. Stat.; 1999; 27, pp. 1564-1599. [DOI: https://dx.doi.org/10.1214/aos/1017939142]
14. Rissanen, J. Modeling by shortest data description. Automatica; 1978; 14, pp. 465-471. [DOI: https://dx.doi.org/10.1016/0005-1098(78)90005-5]
15. Rissanen, J. Universal coding, information, prediction, and estimation. IEEE Trans. Inf. Theory; 1984; 30, pp. 629-636. [DOI: https://dx.doi.org/10.1109/TIT.1984.1056936]
16. Merhav, N.; Feder, M. A strong version of the redundancy–capacity theorem of universal coding. IEEE Trans. Inf. Theory; 1995; 41, pp. 714-722. [DOI: https://dx.doi.org/10.1109/18.382017]
17. Ornstein, D.S.; Shields, P.C. Universal almost sure data compression. Ann. Probab.; 1990; 18, pp. 441-452. [DOI: https://dx.doi.org/10.1214/aop/1176990840]
18. Zhang, Z.; Yang, E.-H.; Wei, V. The redundancy of source coding with a fidelity criterion. I. known statistics. IEEE Trans. Inf. Theory; 1997; 43, pp. 71-91. [DOI: https://dx.doi.org/10.1109/18.567651]
19. Yu, B.; Speed, T. A rate of convergence result for a universal d-semifaithful code. IEEE Trans. Inf. Theory; 1993; 39, pp. 813-820. [DOI: https://dx.doi.org/10.1109/18.256490]
20. Silva, J.F.; Piantanida, P. On universal d-semifaithful coding for memoryless sources with infinite alphabets. IEEE Trans. Inf. Theory; 2022; 68, pp. 2782-2800. [DOI: https://dx.doi.org/10.1109/TIT.2021.3134891]
21. Kontoyiannis, I. Pointwise redundancy in lossy data compression and universal lossy data compression. IEEE Trans. Inf. Theory; 2000; 46, pp. 136-152. [DOI: https://dx.doi.org/10.1109/18.817514]
22. Kontoyiannis, I.; Zhang, J. Arbitrary source models and Bayesian codebooks in rate-distortion theory. IEEE Trans. Inf. Theory; 2002; 48, pp. 2276-2290. [DOI: https://dx.doi.org/10.1109/TIT.2002.800493]
23. Mahmood, A.; Wagner, A.B. Lossy compression with universal distortion. IEEE Trans. Inf. Theory; 2023; 69, pp. 3525-3543. [DOI: https://dx.doi.org/10.1109/TIT.2023.3247601]
24. Mahmood, A.; Wagner, A.B. Minimax rate-distortion. arXiv; 2022; arXiv: 2202.04481
25. Sholomov, L.A. Measure of information in fuzzy and partially defined data. Dokl. Math.; 2006; 74, pp. 775-779. [DOI: https://dx.doi.org/10.1134/S1064562406050401]
26. Ziv, J. Coding theorems for individual sequences. IEEE Trans. Inf. Theory; 1978; 24, pp. 405-412. [DOI: https://dx.doi.org/10.1109/TIT.1978.1055911]
27. Ziv, J.; Lempel, A. Compression of individual sequences via variable-rate coding. IEEE Trans. Inf. Theory; 1978; 24, pp. 530-536. [DOI: https://dx.doi.org/10.1109/TIT.1978.1055934]
28. Potapov, V.N. Redundancy estimates for the Lempel-Ziv algorithm of data compression. Discret. Appl. Math.; 2004; 135, pp. 245-254. [DOI: https://dx.doi.org/10.1016/S0166-218X(02)00308-6]
29. Merhav, N.; Ziv, J. On the Wyner-Ziv problem for individual sequences. IEEE Trans. Inf. Theory; 2006; 52, pp. 867-873. [DOI: https://dx.doi.org/10.1109/TIT.2005.864434]
30. Ziv, J. Fixed-rate encoding of individual sequences with side information. IEEE Trans. Inf. Theory; 1984; 30, pp. 348-452. [DOI: https://dx.doi.org/10.1109/TIT.1984.1056878]
31. Merhav, N. Finite-state source-channel coding for individual source sequences with source side information at the decoder. IEEE Trans. Inf. Theory; 2022; 68, pp. 1532-1544. [DOI: https://dx.doi.org/10.1109/TIT.2021.3133422]
32. Ziv, J. Distortion-rate theory for individual sequences. IEEE Trans. Inf. Theory; 1980; 26, pp. 137-143. [DOI: https://dx.doi.org/10.1109/TIT.1980.1056164]
33. Weinberger, M.J.; Merhav, N.; Feder, M. Optimal sequential probability assignment for individual sequences. IEEE Trans. Inf. Theory; 1994; 40, pp. 384-396. [DOI: https://dx.doi.org/10.1109/18.312161]
34. Merhav, N. D-semifaithful codes that are universal over both memoryless sources and distortion measures. IEEE Trans. Inf. Theory; 2023; 69, pp. 4746-4757. [DOI: https://dx.doi.org/10.1109/TIT.2023.3253133]
35. Cohen, A.; Merhav, N. Universal randomized guessing subjected to distortion. IEEE Trans. Inf. Theory; 2022; 68, pp. 7714-7734. [DOI: https://dx.doi.org/10.1109/TIT.2022.3194073]
36. Merhav, N.; Cohen, A. Universal randomized guessing with application to asynchronous decentralized brute–force attacks. IEEE Trans. Inf. Theory; 2020; 66, pp. 114-129. [DOI: https://dx.doi.org/10.1109/TIT.2019.2920538]
37. Merhav, N. Guessing individual sequences: Generating randomized guesses using finite-state machines. IEEE Trans. Inf. Theory; 2020; 66, pp. 2912-2920. [DOI: https://dx.doi.org/10.1109/TIT.2019.2946303]
38. Csiszár, I.; Körner, J. Information Theory—Coding Theorems for Discrete Memoryless Systems; 2nd ed. Cambridge University Press: Cambridge, UK, 2011.
Abstract
We propose a universal ensemble for the random selection of rate–distortion codes which is asymptotically optimal in a sample-wise sense. According to this ensemble, each reproduction vector, $\hat{\boldsymbol{x}}$, is drawn independently at random under a probability distribution that is proportional to $2^{-LZ(\hat{\boldsymbol{x}})}$, where $LZ(\hat{\boldsymbol{x}})$ is the length of the LZ78 code of $\hat{\boldsymbol{x}}$.