1. Introduction
It is easy to observe that large fluctuations in stock market prices tend to be followed by large ones, whereas small fluctuations are more likely to be followed by small ones. This property is known as volatility clustering. Recent works, such as [1,2], have shown that while large fluctuations tend to be more clustered than small ones, large losses tend to lump together more severely than large gains. The financial literature is interested in modeling volatility clustering since it is considered a key indicator of market risk. In fact, the trading volume of some assets, such as derivatives, increases over time, making volatility their most important pricing factor.
It is worth mentioning that both high and low volatility seem to be relevant factors for stock market crises according to Danielsson et al. [3]. They also found that the relation between unexpected volatility and the incidence of crises has become stronger over the last few decades. Along the same lines, Valentine et al. [4] showed that market instability is the result not only of large volatility, but also of small volatility.
The classical approach to volatility clusters relies on nonlinear models based on conditionally heteroskedastic variance. These include the ARCH [5], GARCH [6,7,8], IGARCH [9], and FIGARCH [10,11] models.
On the other hand, agent-based models allow reproducing and explaining some stylized facts of financial markets [12]. Interestingly, several works have recently appeared in the literature analyzing a complete order book by real-time simulation [13,14,15]. Regarding volatility clustering, it is worth mentioning that Lux et al. [16] highlighted that volatility is explained by market instability. Later, Raberto et al. [17] introduced an agent-based artificial market whose heterogeneous agents exchange only one asset, and which exhibits some key stylized facts of financial markets. They found that the volatility clustering effect is sensitive to the model size, i.e., when the number of operators increases, the volatility clustering effect tends to disappear. That result is in accordance with the concept of market efficiency.
Krawiecki et al. [18] introduced a microscopic model consisting of many agents with random interactions, in which the volatility clustering phenomenon appears as a result of attractor bubbling. Szabolcs and Farmer [19] empirically developed a behavioral model of order placement to study the endogenous dynamics of liquidity and price formation in the order book. They were able to describe volatility through the order flow parameters.
Alfarano et al. [20] contributed a simple model of an agent-based artificial market in which volatility clustering is generated by the interaction between traders. Similar conclusions were obtained by Cont [21], Chen [22], He et al. [23], and Schmitt and Westerhoff [24].
Other findings on the possible causes of volatility clusters are summarized below. Cont [21] showed that volatility is explained by agent behavior; Chen [22] stated that return volatility correlations arise from asymmetric trading and investors' herding behavior; He et al. [23] concluded that the interaction between fundamental traders and noise traders causes volatility clustering; and Schmitt and Westerhoff [24] highlighted that volatility clustering arises due to the herding behavior of speculators.
Chen et al. [25] proposed an agent-based model with multi-level herding to reproduce the volatilities of New York and Hong Kong stocks. Shi et al. [26] explained volatility clustering through a model of security price dynamics with two kinds of participants, namely speculators and fundamental investors. They considered that information arrives randomly to the market, which leads to changes in the viewpoint of the market participants according to a certain ratio. Verma et al. [27] used a factor model to analyze how market volatility can be explained by assets' volatility.
An interesting contribution was made by Barde [28], who compared the performance of this kind of model with that of the ARCH/GARCH models. In fact, the author remarked that three kinds of agent-based models for financial markets perform better around key events. Population switching was also found to be a crucial factor in explaining volatility clustering and fat tails.
On the other hand, the concept of a volatility series was introduced in [2] to study volatility clusters in the S&P500 series. Moreover, it was shown that the higher the self-similarity exponent of the volatility series of the S&P500, the more frequent the volatility changes and, therefore, the more likely volatility clusters are to appear. In the current article, we provide a novel methodology to calculate the probability of volatility clusters of a given size in a series, with special emphasis on cryptocurrencies.
Since the introduction of Bitcoin in 2008, the cryptocurrency market has experienced constant growth, as has the use of crypto assets as an investment or as a day-to-day medium of exchange. As of June 2020, there are 5624 cryptocurrencies, and their market capitalization exceeds 255 billion USD according to the website CoinMarketCap [29]. However, one of the main characteristics of cryptocurrencies is the high volatility of their exchange rates and, consequently, the high risk associated with their use.
Lately, Bitcoin has received more and more attention from researchers. Compared to traditional financial markets, the cryptocurrency market is very young, and because of this, there are relatively few research works on its characteristics, all of them quite recent. Some authors analyzed the Bitcoin market efficiency by applying different approaches, including the Hurst exponent (cf. [30] for a detailed review), whereas others investigated its volatility using other methods. For instance, Letra [31] used a GARCH model for Bitcoin daily data; Bouoiyour and Selmi [32] carried out many extensions of GARCH models to estimate Bitcoin price dynamics; Bouri, Azzi, and Dyhberg [33] analyzed the relation between volatility changes and price returns of Bitcoin based on an asymmetric GARCH model; Balcilar et al. [34] analyzed the relation between the trading volume of Bitcoin and its returns and volatility by employing, in contrast, a non-parametric causality-in-quantiles test; and Baur et al. [35] studied the statistical properties of Bitcoin and its relations with traditional asset classes.
Meanwhile, in 2017, Bariviera et al. [36] used the Hurst exponent to compare Bitcoin dynamics with the dynamics of standard currencies and detected evidence of persistent volatility and long memory, facts that justify the application of GARCH-type models to Bitcoin prices. Shortly after that, Phillip et al. [37] provided evidence of slight leverage effects, volatility clustering, and varied kurtosis. Furthermore, Zhang et al. [38] analyzed the first eight cryptocurrencies, which represent almost 70% of cryptocurrency market capitalization, and pointed out that the returns of cryptocurrencies exhibit leverage effects and strong volatility clustering.
Later, in 2019, Kancs et al. [39] estimated, based on a GARCH model, the factors that affect the Bitcoin price. To that end, they used hourly data for the period between 2013 and 2018. After plotting the data, they suggested that periods of high volatility follow periods of high volatility and periods of low volatility follow periods of low volatility, so that in the series, large returns follow large returns and small returns follow small returns. All these facts indicate evidence of volatility clustering and, therefore, that the residuals are conditionally heteroskedastic.
The structure of this article is as follows. Firstly, Section 2 contains some basic mathematical concepts on measure theory and probability (Section 2.1), the FD4 approach (Section 2.2), and the volatility series (Section 2.3). The core of the current paper is provided in Section 3, where we explain in detail how to calculate the probability of volatility clusters of a given size. A study of volatility clusters in several cryptocurrencies, as well as in traditional exchanges, is carried out in Section 4. Finally, Section 5 summarizes the main conclusions of this work.
2. Methods
This section contains some mathematical tools of both measure and probability theories (cf. Section 2.1) that allow us to mathematically describe the FD4 algorithm applied in this article (cf. Section 2.2) to calculate the self-similarity index of time series. On the other hand, the concept of a volatility series is addressed in Section 2.3.
2.1. Random Functions, Their Increments, and Self-Affinity Properties
Let $t \ge 0$ denote time and $(\Omega, \mathcal{A}, P)$ be a probability space. We shall understand that $X = \{X_t \equiv X(t,\omega) : t \ge 0\}$ is a random process (also a random function) from $[0,\infty) \times \Omega$ to $\mathbb{R}$ if $X_t$ is a random variable for all $t \ge 0$ and $\omega \in \Omega$, where $\Omega$ denotes the sample space. As such, we may think of $X$ as defining a sample function $t \mapsto X_t$ for each $\omega \in \Omega$. Hence, the points in $\Omega$ parameterize the functions $X : [0,\infty) \to \mathbb{R}$, with $P$ being a probability measure on the class of such functions.
Let $X_t$ and $Y_t$ be two random functions. The notation $X_t \sim Y_t$ means that the finite joint distribution functions of such random functions are the same. A random process $X = \{X_t : t \ge 0\}$ is said to be self-similar if there exists a parameter $H > 0$ such that the following power law holds:
$$X_{at} \sim a^H X_t \tag{1}$$
for each $a > 0$ and $t \ge 0$. If Equation (1) is fulfilled, then $H$ is named the self-similarity exponent (also index) of the process $X$. On the other hand, the increments of a random function $X_t$ are said to be stationary as long as $X_{a+t} - X_a \sim X_t - X_0$ for all $t \ge 0$ and $a > 0$. We shall understand that the increments of a random function are self-affine with parameter $H \ge 0$ if the next power law stands for all $h > 0$ and $t_0 \ge 0$:
$$X_{t_0 + \tau} - X_{t_0} \sim h^{-H} \left( X_{t_0 + h\tau} - X_{t_0} \right).$$
Let $X_t$ be a random function with self-affine increments of parameter $H$. Then, the following $T^H$-law holds:
$$M_T \sim T^H M_1,$$
where its ($T$-period) cumulative range is defined as:
$$M_{t,T} := \sup\{X(s,\omega) - X(t,\omega) : s \in [t, t+T]\} - \inf\{X(s,\omega) - X(t,\omega) : s \in [t, t+T]\},$$
and $M_T := M_{0,T}$ (cf. Corollary 3.6 in [40]).
2.2. The FD4 Approach
The FD4 approach was first contributed in [41] to deal with calculations concerning the self-similarity exponent of random processes. It was proven that FD4 generalizes the GM2 procedure (cf. [42,43]), as well as the fractal dimension algorithms (cf. [44]), to calculate the Hurst exponent of any process with stationary and self-affine increments (cf. Theorem 3.1 in [41]). Moreover, the accuracy of the algorithm was analyzed for samples of (fractional) Brownian motions and Lévy stable processes with lengths ranging from $2^5$ to $2^{10}$ points (cf. Section 5 in [41]).
Next, we mathematically show how that parameter can be calculated by the FD4 procedure. First of all, let $X = \{X_t : t \ge 0\}$ be a random process with stationary increments. Let $q > 0$, and assume that for each $X_t \in X$ there exists $m_q(X_t) := E[|X_t|^q]$, its (absolute) $q$-order moment. Suppose, in addition, that there exists a parameter $H > 0$ for which the next relation, which involves ($\tau$-period) cumulative ranges of $X$, holds:
$$M_\tau \sim \tau^H M_1. \tag{2}$$
Recall that this power law stands for the class of ($H$-)self-similar processes with self-affine increments (of parameter $H$; see Section 2.1), which, roughly speaking, is equivalent to the class of processes with stationary increments (cf. Lemma 1.7.2 in [45]). Let us discretize the period by $\tau_n = 2^{-n}$, $n \in \mathbb{N}$, and take $q$-powers on both sides of Equation (2). Thus, we have:
$$M_{\tau_n}^q \sim \tau_n^{qH} M_1^q \quad \text{for all } n \in \mathbb{N}. \tag{3}$$
Clearly, the expression in Equation (3) can be rewritten in the following terms:
$$X_n^q \sim \tau_n^{qH} X_0^q = 2^{-nqH} X_0^q, \tag{4}$$
where, for short, the notation $X_n := M_{\tau_n} = M_{2^{-n}}$ is used for all $n \in \mathbb{N}$. Since the two random variables in Equation (4) are equally distributed, their means must be the same, i.e.,
$$m_q(X_n) = E[X_n^q] = 2^{-nqH} E[X_0^q] = 2^{-nqH} m_q(X_0). \tag{5}$$
Taking ($2$-base) logarithms on both sides of Equation (5), the parameter $H$ can be obtained by carrying out a linear regression of:
$$H = \frac{1}{nq} \log_2 \frac{m_q(X_0)}{m_q(X_n)} \tag{6}$$
vs. $q$. Alternatively, observe that the expression in Equation (4) also provides a relation between cumulative ranges of consecutive periods of $X$, i.e.,
$$X_n^q \sim 2^{qH} X_{n+1}^q. \tag{7}$$
Since the random variables on each side of Equation (7) have the same (joint) distribution function, their means must be equal, namely,
$$m_q(X_n) = E[X_n^q] = 2^{qH} E[X_{n+1}^q] = 2^{qH} m_q(X_{n+1}) \quad \text{for all } n \in \mathbb{N}, \tag{8}$$
which provides a strong connection between consecutive $q$-order moments of $X$. If (two-base) logarithms are taken on both sides of Equation (8), a linear regression of the expression appearing in Equation (9) vs. $q$ allows calculating the self-similarity exponent of $X$ (whenever self-similar patterns do exist for such a process):
$$H = \frac{1}{q} \log_2 \frac{m_q(X_n)}{m_q(X_{n+1})}. \tag{9}$$
Hence, the FD algorithm is defined as the approach based on the expressions appearing in either Equation (5) or Equation (8). The main restriction underlying the FD algorithm is the assumption of the existence of the $q$-order moments of the random process $X$. At first glance, any non-zero value could be assigned to $q$ to calculate the self-similarity exponent (provided that the existence of that sample moment can be guaranteed). In the case of Lévy stable motions, for example, given $q_0$, it may occur that $m_q(X_n)$ does not exist for any $q > q_0$. As such, we select $q = 0.01$ to calculate the self-similarity index of a time series by the FD algorithm, thus leading to the so-called FD4 algorithm. Equivalently, the FD4 approach denotes the FD algorithm for $q = 0.01$. In this paper, the self-similarity exponent of a series is calculated by the FD4 approach according to the expression in Equation (6). Indeed, since it is equivalent to:
$$\log_2 m_q(X_n) = \log_2 m_q(X_0) - nqH,$$
the Hurst exponent of the series is obtained as the slope of a linear regression comparing $\log_2 m_q(X_n)$ with respect to $n$. In addition, notice that a regression coefficient close to one means that the expression in Equation (5) is fulfilled. As such, the calculation of $m_q(X_n)$ is necessary for the procedure described above and, for each $n$, it depends on a given sample of the random variable $X_n \in X$. For computational purposes, the length of any sample of $X_n$ is chosen to be equal to $2^n$. Accordingly, the greater $n$, the more accurate the value of $m_q(X_n)$. Next, we explain how to calculate $m_q(X_n)$. Let a log-price series be given, and divide it into $2^n$ non-overlapping blocks $B_i$, $i = 1, \dots, 2^n$. The length of each block is $k := 2^{-n} \cdot \operatorname{length}(\text{series})$, so each block can be written as $B_i = \{B_1, \dots, B_k\}$. Then:
1. Determine the range of each block $B_i$, i.e., calculate $R_i = \max\{B_j : j = 1, \dots, k\} - \min\{B_j : j = 1, \dots, k\}$ for each $i = 1, \dots, 2^n$.
2. Compute the ($q$-order) sample moment as $m_q(X_n) = 2^{-n} \sum_{i=1}^{2^n} R_i^q$.
According to step (1), both the minimum and the maximum values of each period are required to calculate each range $R_i$. Notice that such values are usually known for each trading period in the context of financial series. It is also worth noting that when $n$ takes the value $\log_2(\operatorname{length}(\text{series}))$, each block consists of a single element. Even in this case, though, each range $R_i$ can still be computed.
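The two-step procedure above, together with the regression of $\log_2 m_q(X_n)$ on $n$, can be sketched in Python as follows. This is a minimal illustration (function and variable names are our own, not from the original work); it assumes the series length is a power of two and uses $q = 0.01$ as in the FD4 approach:

```python
import numpy as np

def fd4_hurst(series, q=0.01):
    """Estimate the self-similarity exponent of a (log-price) series by the
    FD4 approach: regress log2 of the q-order moment of block ranges on n."""
    series = np.asarray(series, dtype=float)
    max_n = int(np.log2(len(series)))
    ns, log_moments = [], []
    for n in range(1, max_n):
        k = len(series) // 2 ** n               # block length for this n
        # split into 2^n non-overlapping blocks and take each block's range R_i
        blocks = series[: 2 ** n * k].reshape(2 ** n, k)
        ranges = blocks.max(axis=1) - blocks.min(axis=1)
        m_q = np.mean(ranges ** q)              # q-order sample moment m_q(X_n)
        ns.append(n)
        log_moments.append(np.log2(m_q))
    # log2 m_q(X_n) is linear in n with slope -qH, so H = -slope / q
    slope = np.polyfit(ns, log_moments, 1)[0]
    return -slope / q
```

For a sample of a standard Brownian motion, the estimated exponent should be close to $0.5$.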
2.3. The Volatility Series
The concept of a volatility series was first contributed in Section 2.2 of [2] as an alternative to classical (G)ARCH models with the aim of detecting volatility clusters in series of asset returns from the S&P500 index. It was found, interestingly, that when clusters of high (resp. low) volatility appear in the series, the self-similarity exponent of the associated volatility series increases (resp. decreases).
Let $r_n$ denote the log-return series of an (index/stock) series. In financial series, the autocorrelation function of the $r_n$ is almost null, though that of the $|r_n|$ series is not. The associated volatility series is defined as $s_n = |r_n| + s_{n-1} - m$, where $|\cdot|$ refers to the absolute value function, $m$ is a constant, and $s_0 = 0$. For practical purposes, we set $m = \operatorname{mean}(|r_n|)$.
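Since the recursion $s_n = |r_n| + s_{n-1} - m$ with $s_0 = 0$ telescopes to a cumulative sum, the volatility series can be built in one line (a minimal sketch; the function name is our own):

```python
import numpy as np

def volatility_series(returns):
    """Build s_n = |r_n| + s_{n-1} - m, with s_0 = 0 and m = mean(|r_n|)."""
    abs_r = np.abs(np.asarray(returns, dtype=float))
    m = abs_r.mean()
    return np.cumsum(abs_r - m)   # equivalent to the recursion above
```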
Next, we explain how the Hurst exponent of the volatility series, $s_n$, can provide a useful tool to detect volatility clusters in a series of asset returns. Firstly, assume that the volatility of the series is constant. Then, the values of the associated volatility series would be similar to those from a sample of a Brownian motion. Hence, the self-similarity exponent of that volatility series would be close to $0.5$. On the contrary, suppose that there exist some clusters of high (resp. low) volatility in the series. In that case, the graph of its associated volatility series becomes smoother, as illustrated in Figure 1, which also depicts the concept of a volatility series. Indeed, almost all the values of the volatility series are greater (resp. lower) than the mean of the series. Accordingly, the volatility series turns out to be increasing (resp. decreasing), so its self-similarity exponent also increases (resp. decreases).
Following the above, the Hurst exponent of the volatility series of an index or asset provides a novel approach to explore the presence of volatility clusters in series of asset returns.

3. Calculating the Probability of Volatility Clusters of a Given Size
In this section, we explore how to estimate the probability of the existence of volatility clusters for blocks of a given size. Equivalently, we shall address the next question: What is the probability that a volatility cluster appears in a period of a given size? Next, we show that the Hurst exponent of a volatility series (see Section 2.2 and Section 2.3) for blocks of that size plays a key role.
We know that the Hurst exponent of the volatility series is high when there are volatility clusters in the series [2]. However, how high should it be?
To deal with this, we shall assume that the series of (log-)returns follows a Gaussian distribution. However, it cannot be an i.i.d. process, since the standard deviation of the Gaussian distribution is allowed to change. This hypothesis is more general than that of an ARCH or GARCH model, for example. Since we are interested in the real possibility that the volatility changes and, in fact, that volatility clusters exist, a static fixed distribution cannot be assumed. In this way, it is worth noting that the return distribution of these kinds of processes (generated from Gaussian distributions with different standard deviations) is not Gaussian, and it is flexible enough to allow very different kinds of distributions.
As such, let us assume that the series of log-returns, $r_n$, follows a normal distribution $N(0, \sigma(n))$ whose standard deviation varies over time via the function $\sigma(n)$. In fact, some classical models such as ARCH, GARCH, etc., stand as particular cases of this model. We shall analyze the existence of volatility clusters in the following terms: we consider that there exist volatility clusters as long as there are at least one period of high volatility and one period of low volatility. Figure 2 illustrates that condition. Indeed, two broad periods can be observed in the volatility series of the S&P500 index. The first one has low volatility (and hence a decreasing volatility series) and the second one high volatility (and hence an increasing volatility series). In this case, the effect of the higher volatility (due to the COVID-19 crisis) is evident, as confirmed by a very high Hurst exponent of the corresponding volatility series (equal to $0.94$).
On the other hand, Figure 3 depicts the volatility series of the S&P500 index in the period ranging from January 2017 to January 2018. A self-similarity index equal to $0.55$ was found by the FD4 algorithm. In this case, though, it is not so clear that there are volatility clusters, which is in accordance with the low Hurst exponent of that volatility series.
As such, the Hurst exponent of the volatility series of a Brownian motion will be considered as a benchmark in order to decide whether there are volatility clusters in the series. More precisely, a collection of Brownian motions was first generated by Monte Carlo simulation. For each Brownian motion, the Hurst exponent (by the FD4 approach) of its corresponding volatility series was calculated. We denote by $H_{\lim}(n)$ the value that is greater than $90\%$ of those Hurst exponents. Observe that $H_{\lim}(n)$ depends on $n$, the length of the Brownian motion sample. In fact, for a short series, the accuracy of the FD4 algorithm in calculating the Hurst exponent is lower. Accordingly, the value of $H_{\lim}(n)$ will be higher for a lower value of $n$. Figure 4 illustrates (for the 90th percentile) how the benchmark given by $H_{\lim}(n)$ decreases as the length of the Brownian motion series increases.
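The Monte Carlo benchmark $H_{\lim}(n)$ can be estimated along the following lines. This is a sketch under our own assumptions (number of simulations, random seed); the FD4 estimator of Section 2.2 is re-implemented compactly here so that the snippet is self-contained:

```python
import numpy as np

def fd4_hurst(series, q=0.01):
    # FD4: regress log2 of the q-order moment of block ranges on n
    series = np.asarray(series, dtype=float)
    max_n = int(np.log2(len(series)))
    ns, logs = [], []
    for n in range(1, max_n):
        k = len(series) // 2 ** n
        blocks = series[: 2 ** n * k].reshape(2 ** n, k)
        ranges = blocks.max(axis=1) - blocks.min(axis=1)
        ns.append(n)
        logs.append(np.log2(np.mean(ranges ** q)))
    return -np.polyfit(ns, logs, 1)[0] / q

def h_lim(length, n_sims=200, percentile=90, seed=0):
    """Estimate H_lim(n): the value exceeding 90% of the Hurst exponents of
    volatility series built from simulated Brownian motions of that length."""
    rng = np.random.default_rng(seed)
    hs = []
    for _ in range(n_sims):
        r = rng.normal(0.0, 1.0, length)                # Gaussian log-returns
        vol = np.cumsum(np.abs(r) - np.abs(r).mean())   # volatility series
        hs.append(fd4_hurst(vol))
    return np.percentile(hs, percentile)
```

Running this for increasing lengths reproduces the qualitative behavior described above: the benchmark decreases as the series length grows.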
Therefore, we will use the following criterion: we say that there are volatility clusters in the series provided that the Hurst exponent of the corresponding volatility series is greater than $H_{\lim}$. Then, we will measure the probability of volatility clusters for subseries of a given length as the ratio of the number of subseries with volatility clusters to the total number of subseries of that length.
In order to check that measure of the probability of volatility clusters, we test it on artificial processes with volatility clusters of a fixed length (equal to 200 data). A sample from such a process is generated as follows: for the first 200 data, generate a sample from a normal distribution $N(0, 0.01)$; for the next 200 data, from $N(0, 0.03)$; for the next 200 data, from $N(0, 0.01)$, and so on. It is worth pointing out that a mixture of (samples from) normal distributions with distinct standard deviations can lead to (a sample from) a heavy-tailed distribution. Following that example, Figure 5 depicts the distribution of that artificial process with volatility clusters compared to a Gaussian distribution and also to the (rescaled) S&P500 return distribution. It is clear that the process is far from Gaussian even in this simple example.
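Such an artificial process can be generated as follows (a minimal sketch; parameter names are our own):

```python
import numpy as np

def clustered_process(n_blocks=10, block_len=200, sigmas=(0.01, 0.03), seed=0):
    """Concatenate Gaussian blocks whose standard deviation alternates
    between 0.01 and 0.03 every 200 data, as in the example above."""
    rng = np.random.default_rng(seed)
    blocks = [rng.normal(0.0, sigmas[i % 2], block_len) for i in range(n_blocks)]
    return np.concatenate(blocks)
```

The sample kurtosis of such a mixture exceeds the Gaussian value of 3, reflecting the heavy tails visible in Figure 5.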
For that process, consider one random block of length 50. It may happen that such a block lies entirely within a 200-datum block of fixed volatility; in this case, there will be no volatility clusters. However, if the first 20 data lie in a block of volatility equal to $0.01$, with the remaining 30 data lying in a block of volatility equal to $0.03$, then such a block will have volatility clusters. On the other hand, if a block of length 50 has its first 49 data lying in a block of volatility equal to $0.01$, whereas the remaining datum lies in a block of volatility $0.03$, we cannot say that there are volatility clusters in that block. Therefore, we shall consider that there are volatility clusters if there are at least 10 data in blocks with distinct volatilities. In other words, we shall assume that we cannot detect clusters with fewer than 10 data.
On the other hand, note that we are using a confidence level of $90\%$; hence, if we obtain a probability of volatility clusters of, say, $x\%$, that means that there are no volatility clusters in $(100 - x)\%$ of the blocks of the given size. However, at that confidence level of $90\%$, we are missing $10\%$ of that $(100 - x)\%$, and hence, we will have the following theoretical estimates.
- Theoretical probability of volatility clusters considering clusters of at least 10 data: $(x - 20)/200$.
- Theoretical probability of volatility clusters considering clusters of at least 10 data detected at a confidence level of $90\%$: $(x - 20)/200 + (1 - (x - 20)/200) \cdot 0.1$.
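Reading $x$ in these two estimates as the subseries length (our interpretation, consistent with the length-50 example above, which gives $(50 - 20)/200 = 0.15$), the formulas can be computed directly (a hypothetical helper):

```python
def theoretical_cluster_probability(x, min_data=10, block_len=200, miss_rate=0.10):
    """First estimate: probability that a window of length x straddles a
    volatility boundary with at least min_data points on each side.
    Second estimate: the same probability adjusted for the 10% missed
    by the 90% confidence benchmark."""
    p = (x - 2 * min_data) / block_len
    return p, p + (1 - p) * miss_rate
```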
Figure 6 graphically shows that the proposed model for estimating the probability of volatility clusters could provide a fair approximation to the actual probability of volatility clusters for such an artificial process.
4. Volatility Clusters in Cryptocurrencies

One of the main characteristics of cryptocurrencies is the high volatility of their exchange rates and, consequently, the high risk associated with their use.
In this section, the methodology provided in Section 3 to calculate the probability of volatility clusters is applied to different financial assets, with a special interest in cryptocurrency markets.
First, Figure 7 shows a similar profile in the probabilities of volatility clusters of an index (S&P500) and a stock (Apple). On the other hand, the probability of volatility clusters of the Euro/USD exchange rate turns out to be quite lower.
On the other hand, Figure 8 depicts the probability of volatility clusters of the three main cryptocurrencies, namely Bitcoin/USD, Ethereum/USD, and Ripple/USD. A similar profile appears for all of them, with the probabilities of their volatility clusters being much greater than those of the three asset classes displayed in Figure 7.
These results suggest that the volatility in cryptocurrencies changes faster than in traditional assets, and much faster than in forex pairs.

5. Conclusions
One of the main characteristics of cryptocurrencies is the high volatility of their exchange rates. In a previous work, the authors found that a process with volatility clusters displays a volatility series with a high Hurst exponent [2].
In this paper, we provide a novel methodology to calculate the probability of the volatility clusters of a series using the Hurst exponent of its associated volatility series. Our approach, which generalizes the (G)ARCH models, was tested on a class of processes artificially generated with volatility clusters of a given size. In addition, we provided an explicit criterion to computationally determine whether there exist volatility clusters of a fixed size. Interestingly, this criterion is in line with the behavior of the Hurst exponent (calculated by the FD4 approach) of the corresponding volatility series. We found that the probabilities of volatility clusters of an index (S&P500) and a stock (Apple) show a similar profile, whereas the probability of volatility clusters of a forex pair (Euro/USD) turns out to be quite lower. On the other hand, a similar profile appears for the Bitcoin/USD, Ethereum/USD, and Ripple/USD cryptocurrencies, with the probabilities of volatility clusters of all such cryptocurrencies being much greater than those of the three traditional assets. Accordingly, our results suggest that the volatility in cryptocurrencies changes faster than in traditional assets, and much faster than in forex pairs.
Author Contributions
Conceptualization, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; methodology, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; validation, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; formal analysis, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; writing-original draft preparation, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; writing-review and editing, V.N., J.E.T.S., M.F.-M., and M.A.S.-G.; These authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.
Funding
Both J.E. Trinidad Segovia and M.A. Sánchez-Granero are partially supported by Ministerio de Ciencia, Innovación y Universidades, Spain, and FEDER, Spain, Grant PGC2018-101555-B-I00, and UAL/CECEU/FEDER, Spain, Grant UAL18-FQM-B038-A. Further, M.A. Sánchez-Granero acknowledges the support of CDTIME. M. Fernández-Martínez is partially supported by Ministerio de Ciencia, Innovación y Universidades, Spain, and FEDER, Spain, Grant PGC2018-097198-B-I00, and Fundación Séneca of Región de Murcia (Murcia, Spain), Grant 20783/PI/18.
Acknowledgments
The authors would also like to express their gratitude to the anonymous reviewers whose suggestions, comments, and remarks allowed them to enhance the quality of this paper.
Conflicts of Interest
The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.
© 2020. This work is licensed under http://creativecommons.org/licenses/by/3.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
One of the main characteristics of cryptocurrencies is the high volatility of their exchange rates. In a previous work, the authors found that a process with volatility clusters displays a volatility series with a high Hurst exponent. In this paper, we provide a novel methodology to calculate the probability of volatility clusters with a special emphasis on cryptocurrencies. With this aim, we calculate the Hurst exponent of a volatility series by means of the FD4 approach. An explicit criterion to computationally determine whether there exist volatility clusters of a fixed size is described. We found that the probabilities of volatility clusters of an index (S&P500) and a stock (Apple) showed a similar profile, whereas the probability of volatility clusters of a forex pair (Euro/USD) became quite lower. On the other hand, a similar profile appeared for Bitcoin/USD, Ethereum/USD, and Ripple/USD cryptocurrencies, with the probabilities of volatility clusters of all such cryptocurrencies being much greater than the ones of the three traditional assets. Our results suggest that the volatility in cryptocurrencies changes faster than in traditional assets, and much faster than in forex pairs.