1. Introduction
Dual functionals map ordinary functionals onto real numbers. We are here interested in entropic functionals (EFs). There are EFs galore, but no simple objective measures to distinguish between them. We remedy this situation in this work by appealing to Born’s proposal, of almost a hundred years ago, that the square modulus of any wave function $|\psi|^2$ ought to be regarded as a probability distribution $P$.
We begin by reminding the reader that the notion of appealing to just a small number of expectation values to describe the main features of physical systems underlies statistical mechanics, particularly in its information theory version, called MaxEnt by its creator (Jaynes). Indeed, theoretical developments of the last century led Jaynes to formulate his MaxEnt approach, which is known to yield the least biased description consistent with the available data [1,2,3,4,5,6,7,8].
For a similar, but purely quantum treatment in the style of Born, aimed at describing pure quantum states, important advances were made in References [9,10,11,12,13,14,15,16], in which a “quantum entropy functional” $S_Q$ was utilized and the MaxEnt approach profitably employed. As an aside, we mention that the MaxEnt approach has, until now, been the main vehicle of comparison for ascertaining whether a certain entropic measure is more or less apt than another in describing a given scientific phenomenon.
Returning to the pure-state entropic measure $S_Q$, $Q=(q_1,q_2,\ldots,q_n)$, the MaxEnt methodology was demonstrated to be very useful in describing both ground and excited states of variegated many-body problems [9,10,11,12,13]. It constituted a reasonable alternative to the celebrated Gutzwiller ansatz [15] and paved the way for rather interesting semi-classical treatments [16]. It has been shown to provide many-body wave functions of better quality in several distinct scenarios, such as the Hartree-Fock [10], BCS [11], or random phase approximation [13] ones. One appeals there to Shannon’s logarithmic ignorance measure [4] for the probability distribution $P_i$,
$$S[P] = -\sum_i P_i \ln(P_i), \tag{1}$$
with a special choice for the probability distribution (PD)
$$S(\psi) = -2\sum_i |c_i|^2 \ln|c_i| \tag{2}$$
for, in an arbitrary basis $|i\rangle$,
$$\psi = \sum_j c_j\,|j\rangle \tag{3}$$
in self-explanatory notation.
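For concreteness, here is a minimal numerical sketch of Equation (2); the coefficient vectors below are made-up illustrations, not data from the cited references:

```python
import numpy as np

def S_Q(c):
    """Eq. (2): S(psi) = -2 * sum_i |c_i|^2 * ln|c_i|, for expansion coefficients c_i."""
    p = np.abs(np.asarray(c))**2
    p = p[p > 0]                               # convention: 0 * ln(0) = 0
    return float(-2.0 * np.sum(p * np.log(np.sqrt(p))))

print(S_Q([1.0, 0.0, 0.0, 0.0]))   # 0.0: a "complete information" situation
print(S_Q([0.5, 0.5, 0.5, 0.5]))   # ln(4) ~ 1.386: uniform over 4 basis states
```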
1.1. The Quantum Entropic Functional $S_Q$
Several important properties of the quantum entropy $S_Q$ were demonstrated in Reference [16], namely:
- $S_Q$ is a true ignorance function, in the sense of Brillouin. For a normalized, discrete probability distribution $p_i$, for instance, Shannon’s measure represents the missing information that one would need to possess so as to be in a “complete information” situation (CIS). In a CIS, just one $p_i = 1$, while the remaining ones vanish [4,5].
- There is a unique global minimum for $S_Q$ subject to appropriate MaxEnt constraints.
- $S_Q$ obeys an H-theorem.
- Ground state wave functions that maximize $S_Q$ satisfy the virial theorem and the hypervirial ones [17].
We see then that our ignorance measure [4] $S_Q$ exhibits adequate credentials to be seriously considered. The wave function (wf) we will be interested in here is that advanced in References [18,19], which compactly describes in simple analytic terms the coherent states of the harmonic oscillator (HO), advantageously replacing the usual, cumbersome infinite sum.
2. A Recently Developed Analytic, Compact Expression for Coherent States
Reference [18] introduced for the first time an analytic, compact expression for coherent states, which was subsequently discussed at length in Reference [19]. The new compact expression advantageously replaces the customary Glauber infinite expansion in terms of the harmonic oscillator eigenstates $|n\rangle$. It reads
$$\psi_\alpha(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\alpha^2/2}\, e^{-|\alpha|^2/2}\, e^{-m\omega x^2/2\hbar}\, e^{\sqrt{2m\omega/\hbar}\,\alpha x}. \tag{4}$$
These $\psi_\alpha(x)$ are eigenfunctions of the annihilation operator $a$ corresponding to the one-dimensional HO. Thus,
$$|\alpha\rangle = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-\alpha^2/2}\, e^{-|\alpha|^2/2} \int e^{-m\omega x^2/2\hbar}\, e^{\sqrt{2m\omega/\hbar}\,\alpha x}\, |x\rangle\, dx \tag{5}$$
and
$$a\,|\alpha\rangle = \alpha\,|\alpha\rangle. \tag{6}$$
For $\alpha = 0$ we have
$$\psi_0(x) = \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} e^{-m\omega x^2/2\hbar} = \phi_0(x), \tag{7}$$
namely, the wave function (wf) for the HO ground state, which is a coherent state itself. For simplicity, in what follows we set
$$\frac{m\omega}{\hbar} = 1. \tag{8}$$
Given a certain operator $A$, it is certainly much easier to compute $\langle\alpha|A|\alpha\rangle$ (just one integral) than an infinite number of $\langle n|A|n\rangle$ (for $n$ phonons) and then sum over them.
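As a minimal numerical illustration of this point (the real displacement $\alpha = 0.7$ below is chosen arbitrarily), a single quadrature yields the expectation value $\langle\alpha|x|\alpha\rangle$:

```python
import numpy as np
from scipy.integrate import quad

def psi_alpha(x, a):
    """Coherent-state wf, Eq. (4), in the units of Eq. (8), for real alpha."""
    return np.pi**-0.25 * np.exp(-a**2/2 - abs(a)**2/2 - x**2/2 + np.sqrt(2)*a*x)

a = 0.7                                                  # arbitrary displacement
norm = quad(lambda x: psi_alpha(x, a)**2, -np.inf, np.inf)[0]
xbar = quad(lambda x: x * psi_alpha(x, a)**2, -np.inf, np.inf)[0]
print(norm)                    # ~1.0: the state is normalized
print(xbar, np.sqrt(2)*a)      # one integral gives <alpha|x|alpha> = sqrt(2)*alpha
```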
Our $\psi_\alpha(x)$, eigenfunctions of the annihilation operator $a$ of the one-dimensional HO, exhibit a special property that is of the essence for our present purposes: they are states of minimum Heisenberg uncertainty. Actually, this is their principal feature, to such an extent that it becomes their defining trait: a coherent state is one of minimum uncertainty (with regard to canonically conjugate variables). This translates into the fact that their associated quantal entropy $S_Q$, a measure of ignorance [4], is unique in the sense that the associated quantum ignorance is minimal.
Our central proposal here emerges in this context: associate to any entropic functional $S_Q(P)$ a numerical real value. This value emerges when the input $P$ of $S_Q$ is a coherent state. This idea is viable because, as we will see, the functional’s associated numerical value $m$ is independent of the displacement factor $\alpha$ of the coherent state; $m$ is the same for any arbitrary $\alpha$ and thus uniquely characterizes the dual functional $\mathcal{F}_{\alpha_R}[S_Q]$:
$$\mathcal{F}_{\alpha_R} : \{S_Q\} \longrightarrow \mathbb{R}^+. \tag{9}$$
3. Some Different Monoparametric Ignorance Measures
Shannon’s logarithmic measure (1) does not possess any parameter. Generalized entropic measures (GEMs) do (the best summary of them is, in our view, Reference [20] and references therein). They have become quite popular in the last 30 years, being applied to variegated scientific areas, from high energy physics to economics. There are many GEMs [21], but we will limit ourselves in this Section to four monoparametric ones.
Let $F(x)$ be the probability density (PD) corresponding to a wave function $\psi(x)$, of the form
$$F(x) = \psi^*(x)\,\psi(x). \tag{10}$$
Shannon’s entropic measure (or ignorance measure) is (we set Boltzmann’s constant $k_B = 1$)
$$S_S = -\int F(x)\,\ln[F(x)]\,dx. \tag{11}$$
Tsallis’ ignorance measure reads [20]
$$S_T^{q} = \frac{1 - \int [F(x)]^q\,dx}{q-1}, \tag{12}$$
while Rényi’s one adopts the appearance [20]
$$S_R^{q} = \frac{1}{1-q}\,\ln\!\int [F(x)]^q\,dx. \tag{13}$$
Finally, Kaniadakis’ ignorance measure is [22,23,24]
$$S_K^{q} = -\frac{1}{2q}\left[\int [F(x)]^{1+q}\,dx - \int [F(x)]^{1-q}\,dx\right]. \tag{14}$$
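These four definitions are straightforward to evaluate numerically. The following is a minimal sketch (note that, for a Gaussian-tailed density, the Kaniadakis $[F]^{1-q}$ integral converges only for $|q| < 1$):

```python
import numpy as np
from scipy.integrate import quad

# Numerical counterparts of Eqs. (11)-(14) for a one-dimensional density F(x).
def shannon(F):
    def g(x):
        f = F(x)
        return -f*np.log(f) if f > 0 else 0.0   # convention: 0*ln(0) = 0
    return quad(g, -np.inf, np.inf)[0]

def tsallis(F, q):
    Iq = quad(lambda x: F(x)**q, -np.inf, np.inf)[0]
    return (1.0 - Iq) / (q - 1.0)

def renyi(F, q):
    Iq = quad(lambda x: F(x)**q, -np.inf, np.inf)[0]
    return np.log(Iq) / (1.0 - q)

def kaniadakis(F, q):        # needs |q| < 1 for convergence with Gaussian tails
    Ip = quad(lambda x: F(x)**(1.0 + q), -np.inf, np.inf)[0]
    Im = quad(lambda x: F(x)**(1.0 - q), -np.inf, np.inf)[0]
    return -(Ip - Im) / (2.0*q)

F0 = lambda x: np.pi**-0.5 * np.exp(-x**2)      # coherent-state density, alpha_R = 0
print(shannon(F0))                              # ~1.0724 = (1 + ln(pi))/2
print(tsallis(F0, 2.0), renyi(F0, 2.0), kaniadakis(F0, 0.5))
```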
4. The Main Mathematical Tool of This Paper
The coherent state PD is, for complex $\alpha$,
$$\alpha = \alpha_R + i\,\alpha_I, \tag{15}$$
given by
$$F_\alpha(x) = \psi_\alpha^*(x)\,\psi_\alpha(x) = \pi^{-1/2}\, e^{-(x-\sqrt{2}\,\alpha_R)^2} = F_{\alpha_R}(x), \tag{16}$$
and obviously depends only on the real part $\alpha_R$ of $\alpha$.
Given the probability density $F$ for our coherent state, our fundamental tool is introduced at this point via the dual functional $\mathcal{F}$ of a given ignorance measure $S(F)$ ($S$ is a functional of $F$). In practice, however, to evaluate $\mathcal{F}$ we just compute the functional $S(F)$:
$$\mathcal{F}_{\alpha_R}(S) = S(F_{\alpha_R}). \tag{17}$$
We apply it now to our current five ignorance measures, beginning with Shannon’s $S_S$:
$$\mathcal{F}_{\alpha_R}(S_S) = S_S(F_{\alpha_R}) = \frac{1}{2}\,(1+\ln\pi) \approx 1.07, \tag{18}$$
which is independent of $\alpha_R$! This feature is common to all of our five measures, and extends to arbitrary generalized measures (see Section 4.3).
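A quick numerical check of this independence, for a few arbitrary values of $\alpha_R$:

```python
import numpy as np
from scipy.integrate import quad

def F_coh(x, aR):
    """Coherent-state probability density, Eq. (16)."""
    return np.pi**-0.5 * np.exp(-(x - np.sqrt(2.0)*aR)**2)

for aR in (0.0, 1.3, -4.0):
    def g(x, aR=aR):
        f = F_coh(x, aR)
        return -f*np.log(f) if f > 0 else 0.0
    print(aR, quad(g, -np.inf, np.inf)[0])   # all ~1.0724 = (1+ln(pi))/2, Eq. (18)
```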
4.1. Important Comment on the Meaning of Equation (18)
Let us consider now the specific real number associated with Shannon’s measure
$$N_S = \frac{1}{2}\,(1+\ln\pi) \approx 1.07. \tag{19}$$
$N_S$ is the minimum amount of ignorance displayed by Shannon’s entropy. It could perhaps be thought of as a kind of information-theoretic counterpart of the uncertainty $\hbar/2$ of quantum mechanics, although the units are different in the two cases. This least amount of uncertainty $\hbar/2$ (with regard to canonically conjugate variables) is physically unavoidable, of course. The Shannon quantum entropic functional $S_Q$, instead, reflects an altogether distinct ignorance-amount (IA), that pertaining to the Born probability density $|\psi(x)|^2$. Can this IA be diminished if one chooses a different entropic measure? This is a seemingly interesting question, which will be answered in the affirmative in the next Section. Let us make the following notion perfectly clear. A given minimum IA for an entropic functional (EF)
- in no way makes an EF “better” or “worse” than another EF,
- but it serves the purpose of classifying EFs, and
- classification is the starting step of any scientific discipline [25].
Our integrals over the variable $x$ always run between $-\infty$ and $\infty$.
4.2. The Tsallis, Rényi, and Kaniadakis Cases
Tsallis’ entropy is the paradigmatic example [20]. In this case we obtain a function $N_T(q)$ of $q$ rather than a pure number. $N_T(q)$ depends on the specific value of the parameter $q$, so that, after a straightforward computation, we get a real number $N_T$ for each $q$ value. This real number arises from applying the super-functional $\mathcal{F}_{\alpha_R}$ to the functional $S_T^q[F_{\alpha_R}]$. Indeed,
$$N_T(q) = \mathcal{F}_{\alpha_R}(S_T^q) = S_T^q[F_{\alpha_R}] = \frac{1}{\sqrt{q}}\;\frac{\sqrt{q} - \pi^{\frac{1-q}{2}}}{q-1}, \tag{20}$$
while, in Rényi’s case [20], we face the real numbers $N_R(q)$:
$$N_R(q) = \mathcal{F}_{\alpha_R}(S_R^q) = S_R^q[F_{\alpha_R}] = \frac{1}{2(1-q)}\left[\ln\pi - \ln q - q\ln\pi\right]. \tag{21}$$
Finally, for $N_K(q)$ (Kaniadakis), we find [22,23,24]
$$N_K(q) = \mathcal{F}_{\alpha_R}(S_K^q) = S_K^q[F_{\alpha_R}] = -\frac{1}{2q}\left[\frac{\pi^{-q/2}}{\sqrt{1+q}} - \frac{\pi^{q/2}}{\sqrt{1-q}}\right]. \tag{22}$$
The values of the super-functional $\mathcal{F}$ are indeed independent of $\alpha_R$ and are all functions of $\pi$ (and, for all but Shannon’s, also of $q$). The $\pi$-dependence comes, of course, from integrating a Gaussian function for the coherent states. We insist on the fact that we are facing here pure numbers; no physical units are involved.
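These closed forms can be checked directly against the numerical integrals; a short sketch (the specific $q$ values below are arbitrary):

```python
import numpy as np

# Closed forms of Eqs. (19)-(22): pure numbers, independent of alpha_R.
N_S = 0.5 * (1.0 + np.log(np.pi))
N_T = lambda q: (1.0 - np.pi**((1 - q)/2) / np.sqrt(q)) / (q - 1.0)
N_R = lambda q: (np.log(np.pi) - np.log(q) - q*np.log(np.pi)) / (2.0*(1.0 - q))
N_K = lambda q: -(np.pi**(-q/2)/np.sqrt(1 + q) - np.pi**(q/2)/np.sqrt(1 - q)) / (2.0*q)

print(N_S)                        # 1.0724...
print(N_T(1.0001), N_R(1.0001))   # -> N_S as q -> 1
print(N_K(1e-4))                  # -> N_S as q -> 0 (defined for |q| < 1)
print(N_T(2.0), N_R(2.0))         # 0.601, 0.919: both below N_S for q > 1
```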
If we carefully inspect the above equations, we will appreciate that, in some cases, Shannon’s IA is diminished for the generalized functionals. This will be clearly seen in the graphs displayed below.
4.3. Generalizing the $\alpha_R$-Independence to Arbitrary Entropic Measures
Let $G_Q$ be an arbitrary entropic measure that depends upon a set of parameters $Q=(q_1,q_2,\ldots,q_n)$ and involves the coherent-state probability density $F$. We have the functional $\mathcal{F}_{\alpha_R}(G_Q)$:
$$\mathcal{F}_{\alpha_R}(G_Q) = \int G_Q[F_{\alpha_R}(x)]\,dx = \int G_Q\!\left[\pi^{-1/2}\,e^{-(x-\sqrt{2}\,\alpha_R)^2}\right]dx = \int G_Q\!\left[\pi^{-1/2}\,e^{-x^2}\right]dx = I_Q, \tag{23}$$
and we see that the $\alpha_R$-dependence is gone, absorbed in the change of variables that one makes in performing the Gaussian integrations, as above.
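Explicitly, the substitution $u = x - \sqrt{2}\,\alpha_R$ (so $du = dx$, with unchanged infinite limits) turns the middle integral of Equation (23) into
$$\int_{-\infty}^{\infty} G_Q\!\left[\pi^{-1/2}\,e^{-(x-\sqrt{2}\,\alpha_R)^2}\right]dx = \int_{-\infty}^{\infty} G_Q\!\left[\pi^{-1/2}\,e^{-u^2}\right]du = I_Q,$$
which is the $\alpha_R$-free pure number quoted above.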
5. Results: Four Numerical Quantities Associated with Each of Our Monoparametric Ignorance Measures
These $N$ quantities are (1) $N_S$, (2) $N_T(q)$, (3) $N_R(q)$, and (4) $N_K(q)$, associated respectively with Shannon, Tsallis, Rényi, and Kaniadakis. We plot and compare them. We see that Shannon’s ignorance amount can indeed be diminished by other entropic measures. Figure 1 clearly demonstrates that the Shannon ignorance amount is indeed decreased for $q>1$ in both the Tsallis and Rényi instances. Instead, Kaniadakis’ functional achieves the same feat for $q$ near zero.
In Figure 2 we compare the ignorance amounts (IAs) associated with the Tsallis (horizontal) and Rényi (vertical) entropic forms.
The black curve displays $N_R(q)$ (vertical axis) versus $N_T(q)$ (horizontal one). A monotonic dependence is observed, as one should expect from the associated mathematical expressions for these entropic forms. The red curve tells us that the Tsallis IA is smaller than Rényi’s for $q>1$, and vice versa for $q<1$.
Figure 3 makes the same comparison as Figure 2, but now relates (black curve) the Kaniadakis (vertical) to the Rényi (horizontal) functional. Here the black curve depicts the highly non-trivial relationship between them.
6. Sharma-Mittal Biparametric Ignorance Measure
It is defined in terms of two parameters $r$ and $q$ as [26,27]
$$S_{SM}(q,r) = \frac{1}{1-r}\left[\left(\int [F(x)]^q\,dx\right)^{\frac{1-r}{1-q}} - 1\right], \tag{24}$$
so that
$$\mathcal{F}_{\alpha_R}(S_{SM}(q,2q-1)) = \frac{1}{2(1-q)}\left[\frac{\pi^{1-q}}{q} - 1\right], \tag{25}$$
where we have (arbitrarily, for ease of comparison) selected $r = 2q-1$. For $r=2$ one has
$$\mathcal{F}_{\alpha_R}(S_{SM}(q,2)) = 1 - \frac{1}{\sqrt{\pi}}\,q^{\frac{1}{2(1-q)}}, \tag{26}$$
while for $r=0.5$ we have
$$N_{SM}(q,r) = \mathcal{F}_{\alpha_R}(S_{SM}(q,0.5)) = 2\,\pi^{1/4}\, q^{\frac{1}{4(q-1)}} - 2. \tag{27}$$
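A short sketch checking these special cases against the general definition, Equation (24); the test value $q = 1.2$ is arbitrary:

```python
import numpy as np

def N_SM(q, r):
    """Sharma-Mittal value for the coherent-state density, from Eq. (24),
    using int F^q dx = pi**((1-q)/2)/sqrt(q) (a Gaussian integral)."""
    Iq = np.pi**((1.0 - q)/2.0) / np.sqrt(q)
    return (Iq**((1.0 - r)/(1.0 - q)) - 1.0) / (1.0 - r)

q = 1.2
print(N_SM(q, 2.0*q - 1.0),
      (np.pi**(1 - q)/q - 1.0)/(2.0*(1.0 - q)))                       # Eq. (25)
print(N_SM(q, 2.0), 1.0 - np.pi**-0.5 * q**(1.0/(2.0*(1.0 - q))))    # Eq. (26)
print(N_SM(q, 0.5), 2.0*np.pi**0.25 * q**(1.0/(4.0*(q - 1.0))) - 2)  # Eq. (27)
```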
The following graph (Figure 4) depicts our functional in terms of the pair $(q,r)$.
The next figure (Figure 5) compares the Tsallis result to the Sharma-Mittal $(q,2q-1)$ one.
We appreciate the fact that the Sharma-Mittal measure exhibits a smaller ignorance amount than the Tsallis one for $0 \le q \le \infty$. This is to be expected, since there are two free parameters.
7. Value of Our Dual Functional When the $S_Q$-Argument Is Not a Coherent State
For the sake of completeness, we show now that the numerical value $m$ of $\mathcal{F}[S_Q]$, when we deal with $S_Q[F_1]$ (with $F_1$ the probability density of the first excited state of the HO), is larger than that for the same dual functional when the argument of $S_Q$ is a coherent state.
This should lend credibility to the statement that coherent states’ information measures yield minimum values for the dual functional.
The expression for the first excited state wave function is
$$\phi_1(x) = 2x\,(4\pi)^{-1/4}\, e^{-x^2/2}. \tag{28}$$
Then,
$$\mathcal{F}_{\phi_1}(S) = -\frac{2}{\sqrt{\pi}} \int x^2\, e^{-x^2}\, \ln\!\left[\frac{2x^2}{\sqrt{\pi}}\, e^{-x^2}\right] dx, \tag{29}$$
so that (29) becomes
$$\mathcal{F}_{\phi_1}(S) = \frac{1}{2}\ln\pi + \ln 2 + C - \frac{1}{2} = m_1 \approx 1.34, \tag{30}$$
where $C = 0.5772156649\ldots$ is Euler’s constant. From (18) we see that $m_1 > m$ (coherent state).
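This value is easily confirmed by direct quadrature; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

def F1(x):
    """Density of the first excited HO state, |phi_1(x)|^2 from Eq. (28)."""
    return (2.0/np.sqrt(np.pi)) * x**2 * np.exp(-x**2)

def integrand(x):
    f = F1(x)
    return -f*np.log(f) if f > 0 else 0.0

m1_numeric = quad(integrand, -np.inf, np.inf)[0]
m1_closed = 0.5*np.log(np.pi) + np.log(2.0) + np.euler_gamma - 0.5   # Eq. (30)
print(m1_numeric, m1_closed)   # both ~1.3427, above the coherent-state 1.0724
```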
8. Application to a Statistical Complexity (SC) Measure
Our entropy $S_Q$ can be viewed as a measure of the uncertainty associated with the basis states on which the wave function (wf) is expanded (cf. Equation (3)). We can regard the situation as that of a probabilistic physical process described by the probability distribution $p_j = |c_j|^2$, $j = 1,\ldots,N$, $P \equiv (p_1, p_2, \ldots, p_N)$, where $P$ is a vector in a probability space. For $S_Q[P] = 0$ the situation is that prevailing immediately after performing an experiment (wf “collapse” and minimum ignorance). On the other hand, our ignorance is maximal if $S[P] = \ln N$ (uniform probability). These two extreme circumstances of (i) maximum knowledge (or “perfect order”) and (ii) maximum ignorance (or maximum “randomness”) are regarded by many authors [1,2,3,4,5,6,7,8,9,10,11,28,29,30,31,32,33,34,35] as “trivial” ones. These authors have conceived the idea of devising a “measure” of the “statistical complexity” (SC) contained in $P$ that would vanish in the two extreme situations described above. We will analyze here the quantum SC of which $S_Q$ is a basic ingredient. We will apply the quantifier $C$ to the probability distribution (PD) $P = |\psi_\alpha|^2$ corresponding to coherent states. Accordingly, if $C = 0$, the PD $P$ would contain only trivial information; the larger $C$, the larger the amount of “non-triviality”. At this stage of our discussion an important and well-known observation emerges: not all the available information measures are equally able to detect non-triviality, nor are they equally “informative”. This is why we will analyze the PD $P$ above with different $C$-measures, entailing distinct information measures (IMs). In turn, we study two different $C$-definitions.
8.1. Shiner-Davison-Landsberg Complexity Measure for Distinct IMs
We appeal to the simplest SC measure $T_{SDL}$, devised by Shiner, Davison, and Landsberg (SDL) [36]. We first introduce the ratio $H$ between $S_Q$ and the specific maximum value that $S_Q$ can attain ($S_Q^{\max}$), i.e., $H = S_Q/S_Q^{\max}$, so that
$$T_{SDL} = H\,(1-H). \tag{31}$$
What are we looking at with this definition in our particular instance? Remember that here $P = |\psi_\alpha|^2$ corresponds to coherent states, but all our present entropic measures yield results that are independent of $\alpha$, as we have seen above. Thus, $T_{SDL}^{\mathrm{Shannon}} = 0$, not detecting any salient feature in $P$. Tsallis’ measure, instead, introduces another parameter, namely $q$, and correspondingly $T_{SDL}^{\mathrm{Tsallis}}(q)$ yields different values for different $q$, producing a $q$-parametrized curve. We plot in Figure 6 $T_{SDL}$ versus $q \in [0,\infty]$ for Shannon’s ($q=1$), Tsallis’ (red, $q \ge 1$), and Rényi’s (brown, $q \ge 1$) measures $S_Q^q$. As expected, the statistical complexity $T$ vanishes at $q=1$, as we have just explained. For the $q$-entropies it first grows and then stabilizes. The Tsallis curve displays a maximum at $q \sim 2.3$, entailing a special $q$-value $\sim 2.3$ of maximum complexity. What do we make of this maximum? It tells us that there are salient peculiarities in the distribution $P$ above that the Tsallis SDL measure best detects at this $q$ value. The Rényi measure’s detection ability grows with $q$ at first, but eventually its non-triviality sensor stabilizes. Thus, if one is to apply $P$ in computing some physical quantity $B$, the features of $B$ had better be scrutinized via Tsallis’ measure with $q = 2.3$, which would be the most “informative” one.
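A sketch reproducing the location of this maximum. One assumption on our part: the text does not spell out $S_Q^{\max}$ for this continuous case, and below we take it to be the Shannon value $N_S$ of Equation (19), a normalization that makes $H=1$ (hence $T=0$) at $q=1$ and indeed places the Tsallis maximum near $q \approx 2.3$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# SDL complexity, Eq. (31), for the Tsallis case.
# ASSUMPTION: H = N_T(q)/N_S, i.e., S_Q^max is taken as the Shannon value N_S.
N_S = 0.5 * (1.0 + np.log(np.pi))
N_T = lambda q: (1.0 - np.pi**((1 - q)/2)/np.sqrt(q)) / (q - 1.0)

def T_SDL(q):
    H = N_T(q) / N_S
    return H * (1.0 - H)

res = minimize_scalar(lambda q: -T_SDL(q), bounds=(1.01, 10.0), method="bounded")
print(res.x, T_SDL(res.x))   # maximum near q ~ 2.3, as reported for Figure 6
```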
8.2. López Ruiz-Mancini-Calbet (LMC) Measure
The López Ruiz-Mancini-Calbet (LMC) measure is today regarded as the canonical SC measure, and has been applied to multiple physical instances [28,29,30,31,32,33,34,35,37,38,39,40,41,42,43,44,45,46]. It has the following form:
$$T_{LMC} = S\,Q, \tag{32}$$
where $Q$ is called the disequilibrium and is a distance in probability space between the current probability distribution $P$ and the uniform distribution. For continuous one-dimensional probability densities $P$ one has [1,2,3,4,5,6,7,8,9,10,11,28,29,30,31,32,33,34,35]
$$Q = \int P^2\,dx. \tag{33}$$
We have computed $T_{LMC}$ for the four probability distributions discussed above and plotted them versus $q$ in Figure 7 (Shannon: blue dot; Tsallis: green, $q \ge 1$; Rényi: red, $q \ge 1$). Note that no complexity maximum is displayed here by any of these curves. The LMC picture is the reverse of the SDL one: $C$ is maximal for Shannon’s information measure, which thus becomes the most informative one. Rényi’s fares worse, and even more so Tsallis’. Moreover, in the two last cases the measures become less and less informative as $q$ grows. Let us point out here that most people regard the LMC $C$ as the canonical one, which has successfully detected phase transitions in many systems [28,29,30,31,32,33,34,37,38,39,40,41,42,43,44,45,46]. Thus, we construe our results as further evidence that LMC is better than SDL.
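A minimal numerical sketch of these LMC values for the coherent-state density; here the disequilibrium is $Q = \int F_{\alpha_R}^2\,dx = 1/\sqrt{2\pi}$, again independent of $\alpha_R$:

```python
import numpy as np

# LMC complexity, Eqs. (32)-(33), for the coherent-state density F_{alpha_R}:
# disequilibrium Q = int F^2 dx = 1/sqrt(2*pi) (a Gaussian integral).
Q = 1.0 / np.sqrt(2.0*np.pi)

N_S = 0.5 * (1.0 + np.log(np.pi))
N_T = lambda q: (1.0 - np.pi**((1 - q)/2)/np.sqrt(q)) / (q - 1.0)
N_R = lambda q: (np.log(np.pi) - np.log(q) - q*np.log(np.pi)) / (2.0*(1.0 - q))

print(N_S * Q)                      # ~0.428: Shannon, the largest of the three
for q in (1.5, 2.0, 3.0):
    print(q, N_T(q)*Q, N_R(q)*Q)    # both decay as q grows; Tsallis below Renyi
```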
9. Conclusions
We have in this effort achieved a way of classifying the large number of different entropic functionals in vogue nowadays. This should be of importance in giving a semblance of order to the pandemonium of entropies galore that are used in a plethora of distinct scientific endeavors. Science always begins with a process of classification [25].
In our classification efforts we were aided by the pure-state entropy $S_Q$ advanced and utilized in References [9,10,11,12,13]. Our pure states are the coherent ones of the HO (CHO), taking advantage of their closed analytical representation advanced in References [18,19]. They are unique in the sense of possessing minimum Heisenberg uncertainty. We compute and compare diverse entropic functionals of the CHO probability densities.
Our quantum entropy $S_Q$ represents the information-theoretic ignorance pertaining to the square modulus of $\psi(x)$ when it is regarded as a probability density. As just stated, in this paper $\psi_\alpha(x)$ is an HO coherent state, and for any entropic functional $S_Q$ one encounters a displacement-$\alpha$-independent, positive real value $N(Q)$. This last fact gives sense to our central proposal, stated above, of associating to any entropic functional a numerical real value. $N(Q)$ is the same for any arbitrary $\alpha$ and thus uniquely characterizes the entropic functional $S_Q$.
These numbers $N(Q)$ provide a way of listing, and thus classifying, the plethora of entropic functionals in the extant literature. An application to statistical complexity measures (SCMs) is made, which uncovers significant differences between two popular SCMs.
Author Contributions
All authors produced the paper collaboratively in equal fashion. All authors have read and agreed to the published version of the manuscript.
Funding
Research was partially supported by FONDECYT, grant 1181558, and by CONICET (Argentine Agency).
Conflicts of Interest
The authors declare no conflict of interest.
Abstract
There are entropic functionals galore, but no simple objective measures to distinguish between them. We remedy this situation here by appeal to Born’s proposal, of almost a hundred years ago, that the square modulus of any wave function $|\psi|^2$ be regarded as a probability distribution $P$. The usefulness of information measures like Shannon’s in this pure-state context has been highlighted in [Phys. Lett. A 1993, 181, 446]. Here we will apply the notion with the purpose of generating a dual functional $\mathcal{F}_{\alpha_R}:\{S_Q\}\longrightarrow \mathbb{R}^+$, which maps entropic functionals onto positive real numbers. In such an endeavor, we use as standard ingredients the coherent states of the harmonic oscillator (CHO), which are unique in the sense of possessing minimum uncertainty. This use is greatly facilitated by the fact that the CHO can be given an analytic, compact closed form, as shown in [Rev. Mex. Fis. E 2019, 65, 191]. Rewarding insights are obtained regarding the comparison between several standard entropic measures.