Market risk estimates the uncertainty of future earnings due to changes in market conditions. Value at Risk (VaR) has become the standard measure that financial analysts use to quantify market risk. The difficulty in estimating risk is that different ways of estimating volatility can lead to very different VaR calculations. The performance of SMA with a 100-day rolling window and of EWMA with the smoothing constant λ = 0.94 (proposed by RiskMetrics) and a 100-day rolling window, perhaps the most widely used methodologies for measuring market risk, is analyzed on investment activities in 7 stock exchange indices from developed and emerging markets. The Binary Loss Function (BLF) is employed to measure the accuracy of the VaR calculations, because VaR models are useful only if they predict future risks accurately. The subject of this research is to determine whether the SMA and EWMA VaR models at the 95% and 99% confidence levels are applicable in investment processes on the stock exchange markets of the selected countries. The methodology applied in the research includes analysis, synthesis and statistical/mathematical methods. The aim of the research is to show whether the models perform equally well and whether financial analysts from emerging countries can use the same model as their counterparts from the developed countries. The results show that risk managers in developing countries, just as those in developed countries, can use the RiskMetrics EWMA model as a tool for estimating market risk at the 95% confidence level.
1. INTRODUCTION
"The revolutionary idea that defines the boundary between modern times and the past is the mastery of risk: the notion that the future is more than a whim of the gods and that men and women are not passive before nature."
Peter L. Bernstein.
Over the past few decades, risk management has evolved to the point where it is considered a distinct sub-field of the theory of finance. The growth of the risk management industry traces back to the increased volatility of financial markets in the 1970s. Value at Risk (VaR) measures have many applications, such as in risk management, in evaluating the performance of risk takers and in meeting regulatory requirements. Even though Value at Risk can be used by any entity to measure its risk exposure, it is used most often by commercial and investment banks to capture the potential loss in value of their traded portfolios from adverse market movements over a specified period. VaR has become the standard measure that financial analysts use to quantify market risk. As it is very important to develop methodologies that provide accurate estimates, the main objective of this paper is to evaluate the performance of the most popular VaR methodologies, paying particular attention to their underlying assumptions and their logical flaws.
Financial market volatility is a central issue in the theory and practice of asset pricing, asset allocation, and risk management. This paper focuses on the econometric modeling of volatility, and on the family of SMA and EWMA models in particular. Modern Portfolio Theory associates stock market risk with the volatility of returns. Volatility is measured by the variance of returns, but the investment community does not accept this measure, since it weights deviations from the average return equally, while most investors assess risk on the basis of small or negative returns. In the last few years the VaR measure has established itself in practice. In accordance with this, the paper contributes to the debate on using VaR as a tool for risk management. There are three key elements of VaR - a specified level of loss in value, a fixed time period over which risk is assessed, and a confidence interval. VaR can be specified for an individual asset, a portfolio of assets or an entire firm. Here VaR is calculated using SMA and EWMA on the data of 3 indices of developed countries (USA, Great Britain and Germany) and 4 indices of emerging countries (Serbia, Slovenia, Croatia and Macedonia). Finally, as the aim of the paper is to show the accuracy of the models used to calculate VaR, the Binary Loss Function (BLF) is used.
The rest of the paper is organized as follows. Section 2 presents literature review. In section 3 a general view of VaR, the basic methods of forecasting volatilities and the back testing techniques used to verify the accuracy of these forecasts are given. Section 4 presents data analysis and results, and section 5 concludes the paper.
2. LITERATURE REVIEW
Even though the term "Value at Risk" was not widely used prior to the mid 1990s, the origins of the measure lie further back in time. The mathematics that underlie VaR were largely developed in the context of portfolio theory by Harry Markowitz and others, though their efforts were directed towards a different end - devising optimal portfolios for equity investors. In particular, the focus on market risks and the effects of the comovements in these risks are central to how VaR is computed. The impetus for the use of VaR measures, though, came from the crises that beset financial service firms over time and the regulatory responses to these crises.
After gaining the deserved place in the developed economies, risk measurement and management have also been gaining importance in transitional economies. The capital market has witnessed turbulent changes affecting simultaneously commodity prices, interest rates and stock prices. Although disagreeing in many things, all researchers are united in the opinion that there does not exist a single approach, or a single VaR model that is optimal in all the markets and all situations. In other words, there is no straightforward result, and it is impossible to establish a ranking among the models. The results are very sensitive to the type of loss functions used, the chosen probability level of VaR, the period being turbulent or normal etc. Some researchers also find a trade-off between model sophistication and uncertainty.
A well-known study by Berkowitz and O'Brien (2002) examines the VaR models used by six leading US financial institutions. Their results indicate that these models are in some cases highly inaccurate: banks sometimes experienced losses much larger than their models predicted, which suggests that these models are poor at dealing with fat tails and extreme events. Similar findings are also reported by Lucas (2000), who finds that sophisticated risk models based on estimates of complete variance-covariance matrices fail to perform much better than simpler univariate VaR models that require only volatility estimates. Lehar, Scheicher and Schittenkopf (2002) find that more complex volatility models (GARCH and stochastic volatility) are unable to improve on constant volatility models for VaR forecasts, although they do for option pricing. Wong et al. (2002) conclude that while GARCH models are often superior in forecasting volatility, they consistently fail the Basel back test. Several papers investigate the issue of trade-off in model choice; for example, Caporin (2003) finds that EWMA, compared to GARCH-based VaR forecasts, provides the best efficiency at a lower level of complexity. Bams and Wielhouwer (2000) draw similar conclusions: sophisticated tail modelling results in better VaR estimates, but with more uncertainty. Supposing that the data-generating process is close to being integrated, the use of the more general GARCH model introduces estimation error, which might result in the superiority of EWMA. Guermat and Harris (2002) find that EWMA-based VaR forecasts are excessively volatile and unnecessarily high when returns do not have a conditionally normal distribution but are fat-tailed. This is because EWMA puts too much weight on extremes. According to Brooks and Persand (2003), the relative performance of different models depends on the loss function used. However, GARCH models provide reasonably accurate VaR.
Christoffersen, Hahn and Inoue (2001) show that different models (EWMA, GARCH, Implied Volatility) might be optimal for different probability levels. Harmantzis, Miao and Chien (2006) praise the EVT approach for dealing with extreme returns, which are characteristic for transitional markets. Wang (2010) used a mixture method of APGARCH-M model and EWMA algorithm to measure VaR using three stock index of Shanghai stock market and shows the mixture method is advantageous and accurate to calculate VaR of a portfolio.
Although there is an abundance of research papers dealing with VaR and market risk measurement and management, all of the existing VaR models were developed and tested in mature, developed and liquid markets (see Manganelli, Engle, 2001 and Alexander, 2001). Testing of VaR models in other, less developed or developing stock markets is at best scarce (e.g. Parrando, 1997; Sanioso, 2000; Sinha, Charnu, 2000; Fallon, Sabogal, 2004; Valentinyi-Endrész, 2004; Zikovic, 2006a, 2006b; Zikovic and Bezic, 2006; Andjelic et al., 2010). Zikovic and Bezic (2006) investigated the performance of historical simulation VaR models on stock indices of the EU candidate states - CROBEX (Croatia), SOFIX (Bulgaria), BBETINRM (Romania) and XU 100 (Turkey) - all of which show a clear positive trend over a longer time period. Zikovic and Aktan (2009) investigated the relative performance of a wide array of VaR models on the daily returns of the Turkish (XU 100) and Croatian (CROBEX) stock indices prior to and during the global 2008 financial crisis. Generally speaking, research papers dealing with quantitative VaR model comparison or volatility forecasting in the stock markets of the EU transition countries are extremely scarce in the VaR literature. Angelovska (2010) used SMA, EWMA and GARCH models for modeling and forecasting the volatility of thin emerging stock markets and found that simpler models like SMA and EWMA performed consistently over time.
3. METHODOLOGY
"Risk is a choice rather than a fate. "
Peter L. Bernstein
Value at risk (VaR) is mainly concerned with market risk. VaR captures the risk of impaired asset value due to fluctuations; namely, it refers to the risk of loss caused by uncertain changes in asset prices. The VaR approach is attractive to practitioners and regulators because it is easy to understand and provides an estimate of the amount of capital that is needed to support a certain level of risk. Another advantage of this measure is its ability to incorporate the effects of portfolio diversification. VaR is a statistical measure that states the maximum loss per day, per week or per month. In other words, VaR is a statistical summary of financial assets or a portfolio in terms of market risk.
Over a target horizon, Value at Risk measures the maximum loss at a given confidence level. According to Jorion (2001), "Value at Risk measures the worst expected loss over a given horizon under normal market conditions at a given level of confidence." The fundamental variables of VaR are (Nylund, 2001):
* confidence level (the probability that the loss is not greater than predicted),
* forecast horizon (the time frame over which VaR is estimated; in VaR calculations it is assumed that the portfolio does not change over the forecast horizon), and
* volatility.
The mathematical definition of Value at Risk is as follows:
VaR = K(α) σ_p P (1)
where σ_p is the portfolio's standard deviation, P is the value of the portfolio and K(α) is the factor for the desired level of confidence (the (1-α) quantile of the standard normal distribution). While VaR is a very easy and intuitive concept, its measurement is a very challenging statistical problem. The methods that are commonly used for calculating Value-at-Risk can be grouped into three categories:
* Variance-covariance methods (used in this paper)
* Simulation methods
* Extreme Value Theory methods
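As an illustration of the variance-covariance method used in this paper, Eq. (1) can be sketched in a few lines of Python; the function name and the example figures below are hypothetical, not taken from the paper:

```python
from statistics import NormalDist

def parametric_var(portfolio_value, sigma, confidence=0.95):
    """Variance-covariance VaR of Eq. (1): VaR = K(alpha) * sigma_p * P,
    where K(alpha) is the standard normal quantile at the chosen level."""
    k = NormalDist().inv_cdf(confidence)  # ~1.645 at 95%, ~2.326 at 99%
    return k * sigma * portfolio_value

# Hypothetical example: a 1,000,000 position with 1.5% daily volatility.
var_95 = parametric_var(1_000_000, 0.015, 0.95)  # roughly 24,673
```

All the work is in forecasting sigma; the SMA and EWMA models below differ only in how that volatility input is estimated.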
3.1. Simple Moving Average (SMA)
In the historical mean model the forecast is based on all available observations, and each observation, whether very old or recent, is given equal weight, which may lead to stale prices affecting the forecasts. A simple moving average model can be considered a modified version of the historical average model. The adjustment is the moving averages method, a traditional time series technique in which volatility is defined as the equally weighted average of realized volatilities over the past n days:
σ_t² = (1/n) Σ_{i=1}^{n} r_{t-i}² (2)
The moving average is an average of a set of variables, such as stock prices, over time. The term "moving" stems from the fact that as each new price is added, the oldest price is deleted. The n-day simple moving average takes the sum of the last n days' prices. The SMA model is probably the most widely used volatility model in Value at Risk studies. The disadvantage of the SMA is that a major drop or rise in the price is forgotten and does not manifest itself quantitatively in the simple moving average.
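A minimal sketch of the SMA estimator of Eq. (2), assuming zero-mean daily returns as is standard in short-horizon VaR work (the function name is illustrative):

```python
def sma_volatility(returns, window=100):
    """Eq. (2): variance as the equally weighted average of the last n
    squared returns; returns the daily volatility (standard deviation)."""
    recent = returns[-window:]            # the n-day rolling window
    variance = sum(r * r for r in recent) / len(recent)
    return variance ** 0.5
```

Every day in the window carries weight 1/n, which is exactly why a large shock drops out of the estimate abruptly once it leaves the window.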
3.2. Exponentially Weighted Moving Average (EWMA)
The simplest model for forecasting the volatility σ_{t+1} is the exponentially weighted moving average, or EWMA, procedure. EWMA specifies the following period's variance as a weighted average of the current variance and the current squared return. The EWMA model allows one to calculate a value for a given day on the basis of the previous day's value. The EWMA model has an advantage over SMA because the EWMA has a memory: it remembers a fraction of its past through the factor λ, which makes the EWMA a good indicator of the history of the price movement if the term is chosen wisely. Using the exponential moving average of historical observations allows one to capture the dynamic features of volatility. The model gives the latest observations the highest weights in the volatility estimate. Expected variances σ₂², σ₃², σ₄², ... in the EWMA model are calculated by the following formula:
σ_n² = λ σ_{n-1}² + (1 - λ) r_{n-1}² (3),
where:
* σ_n² is the dispersion (variance) estimate for day n, calculated at the end of day (n-1),
* σ_{n-1}² is the dispersion estimate for day (n-1),
* r_{n-1} is the asset's return for day (n-1).
The return for day n is calculated as the natural logarithm of the ratio of the stock's price on day n to its price on the previous day (n-1). λ is the decay factor. The exponentially weighted moving average model depends on the parameter λ (0 < λ < 1), which is often referred to as the decay factor.
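The recursion of Eq. (3) can be sketched as follows; seeding the recursion with the first squared return is one common convention, not something prescribed by the text:

```python
def ewma_volatility(returns, lam=0.94):
    """Eq. (3): sigma_n^2 = lam * sigma_{n-1}^2 + (1 - lam) * r_{n-1}^2,
    iterated over the return series; returns the latest volatility estimate."""
    variance = returns[0] ** 2            # seed: first squared return
    for r in returns[1:]:
        variance = lam * variance + (1 - lam) * r * r
    return variance ** 0.5
```

After a shock, the variance decays geometrically by the factor λ each day instead of being dropped all at once, which is the "memory" the text describes.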
3.3. RiskMetrics VaR
In 1994, J. P. Morgan released "RiskMetrics(TM)", a set of techniques and data to measure market risks in portfolios of fixed income instruments, equities, foreign exchange, commodities, and their derivatives issued in over 30 countries. "RiskMetrics(TM) (1996) developed a model which estimates the conditional variances and covariances based on the exponentially weighted moving average (EWMA) method, which is a special case of the GARCH(1,1) model. This approach forecasts the conditional variance at time t as a linear combination of the conditional variance and the squared unconditional shock at time t-1. It is simple to estimate and is computationally straightforward for a given portfolio with fixed weights. However, as it is not a statistical model, it is difficult to calibrate (such as choosing critical values), and can also lead to excessive violations of the Basel Accord thresholds. In 1998, RiskMetrics was spun off from J. P. Morgan. Since RiskMetrics(TM) represents a cornerstone of risk management theory and practice, it is important to test the assumptions upon which it is built in order to assess the applicability of RiskMetrics(TM) in various situations. The standard RiskMetrics model assumes that returns follow a conditional normal distribution - conditional on the standard deviation - where the variance of returns is a function of the previous day's variance forecast and squared return" (RiskMetrics Technical Document, p. 236). The RiskMetrics model of financial returns can be fully described by a single parameter, the standard deviation of returns σ_t, more commonly referred to as volatility. To forecast VaR, it is first necessary to forecast volatility. RiskMetrics forecasts volatility based on historical price data. Recalling that σ_t² = E(r_t²), RiskMetrics forecasts the future variance of returns as an exponentially weighted moving average of past squared returns:
σ_{t+1|t}² = (1 - λ) Σ_{i=0}^{∞} λ^i r_{t-i}² (4)
where σ_{t+1|t}² is the one-day variance forecast for time t+1 (given information up to and including time t), 0 < λ < 1, and the index i runs from 0 to infinity. RiskMetrics sets the decay factor for one-day time horizons to 0.94, which, at a 1% tolerance level, is equivalent to including approximately 74 days in the calculation. The VaR corresponding to 5% may be defined as that amount of capital, expressed as a percentage of the initial value of the position, which is required to cover 95% of probable losses.
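The statement that λ = 0.94 corresponds to roughly 74 effective days can be checked directly: the weight Eq. (4) gives an observation i days old is (1 - λ)λ^i, and the first 74 such weights capture about 99% of the total (a numerical check, not part of the original text):

```python
lam = 0.94
# Weight that Eq. (4) assigns to a squared return i days in the past.
weights = [(1 - lam) * lam ** i for i in range(74)]
coverage = sum(weights)   # share of total weight carried by the last 74 days
# coverage is about 0.99, i.e. roughly a 1% tolerance, which matches the
# "approximately 74 days" statement in the text.
```

The geometric series makes this exact: the truncated sum equals 1 - λ^74, so the tail beyond 74 days carries λ^74 ≈ 1% of the weight.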
3.4. Shadow effect
The shadow effect is an interesting phenomenon in volatility modeling. Risk managers use 100 days of data to reduce sampling errors. However, if, for example, an unexpected event happens in the stock markets, its effects persist for those 100 days: a single day on which the market peaks affects future volatility estimates and raises the estimated volatility level, which then deviates from market reality. To mitigate this problem, risk managers use the EWMA model, which gives more weight to the latest data and less to older data: the weight given to an observation n days in the past is proportional to λⁿ, and as n increases this weight decreases. Each estimate of the mean is based on a 100-day rolling window, that is, for every day in the sample period we estimate a mean based on returns over the last 100 days.
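The contrast between the two weighting schemes behind the shadow effect can be made concrete: SMA gives a 100-day-old shock the same weight as yesterday's return, while EWMA with λ = 0.94 has all but forgotten it (a small numerical illustration, not from the original text):

```python
lam = 0.94
sma_weight = 1 / 100                       # SMA: every day in the window gets 0.01
ewma_weight_100 = (1 - lam) * lam ** 100   # EWMA weight on a 100-day-old return
# ewma_weight_100 is on the order of 1e-4, roughly two orders of magnitude
# smaller, so an old shock casts almost no "shadow" on today's EWMA estimate.
```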
3.5. Backtesting methods
"VaR is only as good as its backtest. When someone shows me a VaR number, I don't ask how it is computed, I ask to see the backtest. "
(Brown, 2008, p.20).
Backtesting is the comparison of actual trading results with model-generated risk measures, counting the number of failures of the VaR risk measure. How can we assess the accuracy and performance of a VaR model? To answer this question, we must first define what is meant by "accuracy". By accuracy, we could mean:
* How well does the model measure a particular percentile of, or the entire, profit-and-loss distribution?
* How well does the model predict the size and frequency of losses?
The numerous shortcomings of these methods and of VaR in general are the most significant reasons why the accuracy of the risk estimates should be questioned. Therefore, VaR models are useful only if they predict future risks accurately. In order to evaluate the quality of the estimates, the models should always be backtested with appropriate methods. In the backtesting process we can statistically examine whether the frequency of exceptions over some specified time interval is in line with the selected confidence level. These types of tests are known as tests of unconditional coverage. They are straightforward to implement since they do not account for when the exceptions occur (Jorion, 2001). In theory, however, a good VaR model not only produces the 'correct' number of exceptions but also exceptions that are evenly spread over time, i.e. independent of each other. Clustering of exceptions indicates that the model does not accurately capture the changes in market volatility and correlations. Tests of conditional coverage therefore also examine conditioning, or time variation, in the data (Jorion, 2001).
The most common test of a VaR model is to count the number of VaR exceptions, i.e. days, or holding periods of other length, when portfolio losses exceed VaR estimates. If the number of exceptions is less than the selected confidence level would indicate, the system overestimates risk. On the contrary, too many exceptions signal underestimation of risk. Naturally, it is rarely the case that we observe the exact amount of exceptions suggested by the confidence level. It therefore comes down to statistical analysis to study whether the number of exceptions is reasonable or not, i.e. will the model be accepted or rejected.
The three accuracy measures are: the binary loss function, the LR test of unconditional coverage (Kupiec, 1995) and the scaling multiple to obtain coverage. The Binary Loss Function (BLF) is based on whether the actual loss is larger or smaller than the VaR estimate and is concerned simply with the number of failures rather than the magnitude of the exception. If the actual loss is larger than the VaR it is termed an "exception", or failure, and is scored 1, with all others scored 0. The aggregate number of failures across all dates is divided by the sample size. The BLF obtained is the rate of failure. The closer the BLF value is to one minus the confidence level of the model, the more accurate the model. In this paper, accuracy is defined as the rate of failure, or exceptions, and how close each specific model comes to the pre-set level of significance.
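The BLF described above reduces to counting exceptions; a minimal sketch follows (the function name is illustrative, with losses and VaR both expressed as positive amounts):

```python
def binary_loss_rate(losses, var_estimates):
    """Binary Loss Function: score 1 when the actual loss exceeds the VaR
    estimate (an "exception"), 0 otherwise; return the failure rate."""
    exceptions = sum(1 for loss, var in zip(losses, var_estimates) if loss > var)
    return exceptions / len(losses)

# For a well-calibrated 95% VaR model, the rate should be close to 0.05.
```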
4. DATA ANALYSIS AND RESULTS
The data used in the paper are the daily closing market indices collected from the official stock exchanges' databases from January 3rd 2010 to December 14th 2010. The daily return is calculated as the change in the logarithm of the closing price on successive days. The number of trading days (observations) is 236, which is enough to produce statistically significant backtests and is also in line with the Basel backtesting framework. The main point is to produce accurate results with short data series, as this is a real constraint in these young emerging markets. The performance of the selected VaR models is tested on stock indices from: the USA (Dow Jones, DJIA), Great Britain (FTSE 100), Germany (DAX), Croatia (CROBEX), Serbia (BELEX), Slovenia (SBI20) and Macedonia (MBI10). Table 1 presents the basic descriptive analysis of the time series of stock returns.
The mean returns for developed markets are positive and negative for developing countries, and all kurtosis values are much larger than 3. This shows that for all series, the distribution of these variables is fat-tailed as compared to the normal distribution. The returns also show evidence of positive or negative skewness in their distributions, indicating that returns are asymmetric. Applying the Jarque-Bera test of normality, we additionally find strong support for the hypothesis that the return and volume series do not have a normal distribution.
Even though researchers have widely used GARCH models for forecasting the stock market volatility, the exponentially weighted moving average is the most popular model for stock market volatility forecasting among practitioners (Deloitte and Touche Tohmatsu, 2002).
Dimson and Marsh (1990) give another explanation of the popularity of the EWMA model: sometimes sophisticated models provide worse forecasts than naïve models. The performance of SMA with a 100-day rolling window and of EWMA using 0.94, proposed by RiskMetrics, as the smoothing constant λ with a 100-day rolling window - perhaps the most widely used methodology for measuring market risk - is analyzed. The RiskMetrics model is based on the unrealistic assumption of normally distributed returns, and completely ignores the presence of fat tails in the probability distribution, a most important feature of financial data. For this reason, one would expect the model to seriously underestimate risk. However, market participants commonly found that RiskMetrics performed satisfactorily well, and this helped the method become a standard in risk measurement. Its widespread use is due largely to the ease with which it can be implemented.
VaR models are calculated for a one-day holding period at 95% and 99% coverage of the market risk. The BLF provides a point estimate of the probability of failure. In other words, the accuracy of the VaR model requires that the BLF, on average, is equal to one minus the prescribed confidence level of the VaR model.
Table 2 shows the rate of failure of the models employed for calculating VaR at the 95% and 99% confidence levels. Both models (RiskMetrics EWMA and the simple moving average) estimate the risk adequately for the London Stock Exchange (FTSE 100) at the 95% confidence level. EWMA at the 95% confidence level outperforms SMA for the developed countries' stock exchanges (DOW, DAX) as well as for the emerging ex-Yugoslavian stock exchanges CROBEX, BELEX and SBI 20, but not for MBI10. The SMA model overestimates the risk for all stock indices except BELEX, where the risk is underestimated. The backtesting results using the BLF method show that at the high quantile (99%) both models fail. RiskMetrics EWMA at the 99% confidence level works better than the simpler SMA method, but the model is not accurate and underestimates the risk.
5. CONCLUSION
The RiskMetrics model is based on the unrealistic assumption of normally distributed returns and completely ignores the presence of fat tails in the probability distribution, a most important feature of financial data. Despite this, market participants have commonly found that simple methods like RiskMetrics EWMA are sufficiently accurate in estimating VaR to be used for measuring market risk. Systematic backtesting should be a part of regular VaR reporting in order to constantly monitor the performance of the model. If the users of VaR know the flaws associated with it, the method can be a very useful tool in risk management, especially because there are no serious contenders that could be used as alternatives to VaR. The simple SMA model and the practitioners' preferred RiskMetrics EWMA model were evaluated on their ability to forecast volatility in the context of 3 developed and 4 former Yugoslavian states' stock markets. The models were evaluated on the basis of BLF error statistics at the 95% and 99% confidence levels. At the 95% confidence level the results showed better accuracy, while at the high quantile (99%) both models underestimated risk. RiskMetrics EWMA can be used for accurately estimating VaR and measuring market risk not just in developed countries but in developing countries as well, because its results conform to the BLF value.
REFERENCES
1. Alexander C. (2001): Market Models: A Guide to Financial Data Analysis. New York: John Wiley & Sons.
2. Alexander C. (2003): The Present and Future of Financial Risk Management, ISMA Centre Discussion Papers in Finance DP2003-12, The University of Reading.
3. Alexander C. (2000): Risk Management and Analysis, Volume 1: Measuring and Modelling Financial Risk. New York: John Wiley & Sons.
4. Alexander, C. (1996): Handbook of Risk Management and Analysis. New York: JohnWiley and Sons.
5. Andersen, T. G., Bollerslev, T. (1998): Answering the Skeptics: Yes, Standard Volatility Models Do Provide Accurate Forecasts, International Economic Review, 39, pp. 885-905.
6. Andjelic, G., Djakovic, V., Radisic, S. (2010): Application of VaR in Emerging markets: A Case of Selected Central and Eastern European Countries, African Journal of Business Management, 4 (17), pp. 3666-3680.
7. Angelovska, J. (2010): VaR based on SMA, EWMA and GARCH (1,1): Volatility models, VDM Verlag Dr. Müller GmbH & Co. KG.
8. Bams, D., Wielhouwer, L. J. (2001): Empirical Issues in Value-at-Risk, Astin Bulletin, 31 (2), pp 299-317.
9. Berkowitz, J., O'Brien, J. (2002): How accurate are VaR models at commercial banks? The Journal of Finance, 57 (3), pp. 1093-1111.
10. Bernstein, Peter L. (1996): Against the Gods: The Remarkable Story of Risk. New York: John Wiley & Sons, Inc.
11. Boudoukh, J., Richardson M., Whitelaw F. R. (1998): The Best of Both Worlds: A hybrid Approach to Calculating Value at Risk, Risk, 11 (5), pp. 64-67
12. Brooks, C., Persand, G. (2003): Volatility forecasting for Risk Management, Journal of forecasting, 22, pp. 1-22
13. Brown, A. (2008): Private Profits and Socialized Risk - Counterpoint: Capital Inadequacy, Global Association of Risk Professionals, June/July.
14. Campbell, J. Y., Hentschel, L. (1992): No news is good news: An asymmetric model of changing volatility in stock returns. Journal of Financial Economics, 31, pp. 281-318.
15. Caporin, M. (2003): The Trade Off Between Complexity and Efficiency of VaR Measures: A Comparison of Risk Metric and GARCH-Type Models. GRETA, working paper n. 03.06
16. Christoffersen, P., Hahn, J., Inoue, A. (2001): Testing and Comparing Value-at-Risk Measures, CIRANO, Paper 2001s-03
17. Deloitte Touche Tohmatsu Limited (2002): Global Risk Management Survey. http://www.deloitte.com/assets/DcomUnitedStates/Local%20 Assets/Documents/us_fsi_aers_global_nsk_management_survey_8thed_07 2913.pdf, Accessed January 28, 2010
18. Dezelan, S. (2000): Efficiency of the Slovenian capital market. Economic and Business Review, 2, pp. 61-83.
19. Dimson, E. and Marsh, P. (1990): Volatility forecasting without data- snooping, Journal of Banking and Finance, 14, pp. 399-421.
20. Fallon C. E., Sabogal S. J. (2004): Is historical VaR a reliable tool for relative risk measurement in the Columbian stock market?: An empirical analysis using the coefficient of variation, http://cuademosadministracion. javeriana.edu.co/pdfs/6_27.pdf. Accessed January 28, 2010.
21. Fama, E. F. (1965): The Behaviour of Stock-Market Prices, Journal of Business, 38, pp. 34-105.
22. Guermat C., Harris D. F. R. (2002): Forecasting value at risk allowing for time variation in the variance and kurtosis of portfolio returns, International Journal of Forecasting No 18, pp. 409-419
23. Harmantzis, F. C., Miao L., Chien Y. (2006): Empirical study of value-at- risk and expected shortfall model with heavy tails, Journal of Risk Finance, No 7, pp. 117-135, http://www.gloriamundi.org/picsresources/dbjw.pdf. Accessed January 28, 2010.
24. Hull, J. C., White, A. (1998): Incorporating Volatility Updating Into The Historical Simulation Method For Value At Risk, Journal of Risk, 7 (1), pp. 5-19.
25. Jorion, P. (2001): Value at Risk, The New Benchmark for Managing Financial Risk, 2nd. ed., New York: McGraw Hill.
26. Kupiec, P. H. (1995): Techniques for verifying the accuracy of risk measurement models. The Journal of Derivatives, 3, pp. 73-84.
27. Lehar, A., Scheicher, M., Schittenkopf, C. (2002): GARCH vs. stochastic volatility: Option pricing and risk management, Journal of Banking Finance, 26, pp. 323-345.
28. Lopez, J. (1999): Methods for evaluating Value-at-Risk Estimates, Federal Reserve Bank of San Francisco, Economic Review, 2, pp. 3-15.
29. Lucas A. (2000): A note on optimal estimation from a risk management perspective under possibly misspecified tail behavior, Journal of Business and Economic Statistics, 18, pp. 31-39.
30. Manganelli, S., Engle, R. F. (2001): Value at Risk Models in Finance, ECB Working Paper Series, No. 75.
31. Marinelli, C., d'Addona, S., Rachev, S. T. (2007): A Comparison Of Some Univariate Models For Value-At-Risk And Expected Shortfall, International Journal of Theoretical and Applied Finance (IJTAF), 10 (6), pp. 1043-1075.
32. Markowitz H. (1952): Portfolio Selection. Journal of Finance, 7, pp. 77-91.
33. Nylund, S. (2001): Value-at-Risk Analysis for Heavy-Tailed Financial Returns Helsinki University of Technology, Department of Engineering Physics and Mathematics.
34. Parrando, M., Juan, R. (1997): Calculation of the Value at Risk in emerging markets. Santander Investments report, No 2, pp 38.
35. RiskMetrics Technical Document (1995): JPMorgan/Reuters, Third Edition, New York.
36. RiskMetrics Technical Document (1996): JPMorgan/Reuters, Fourth Edition, New York.
37. Santoso, W. (2000): Value at Risk: An Approach to Calculating Market Risk, Working Paper, Banking Research and Regulation Directorate, Bank Indonesia.
38. Sinha T., Chamu F. (2000): Comparing Different Methods of Calculating Value at Risk, Instituto Tecnológico Autónomo de México. http://www.gloriamundi.org/picsresources/tapens.pdf, Accessed January 20, 2010.
39. Valentinyi-Endrész, M. (2004): Structural breaks and financial risk management, MNB Working Paper 2004/11, Magyar Nemzeti Bank.
40. Wong, C. S. M., Cheng, Y. W., Wong, Y. P. C. (2002): Market risk management of banks: Implications from the accuracy of VaR forecasts, Journal of Forecasting, 22, pp. 22-33.
41. Wang, P. (2010): A Measuring Approach of Portfolio's VaR Based on APGARCH-EWMA Model, Third International Symposium on Information Processing (ISIP), pp. 6-8.
42. Zikovic S., Bezic H. (2006): Is historical simulation appropriate for measuring market risk?: A case of countries candidates for EU accession, CEDIMES conference paper, 23-27 March, Ohrid.
43. Zikovic, S. (2006a): Applying hybrid approach to calculating VaR in Croatia. Proceedings of the International Conference "From Transition to Sustainable Development: The Path to European Integration", Faculty of Economics in Sarajevo, Sarajevo, October 12-13, pp. 50-71.
44. Zikovic, S. (2006b): Implications of measuring VaR using historical simulation; An example of Zagreb Stock Exchange index - CROBEX. In J. Roufagalas (Ed.): Resource allocation and institutions: Explorations in economics finance and law, pp. 367-389. Athens: Athens Institute for Education and Research.
45. Zikovic, S. (2007): Measuring market risk in EU new member states. Proceedings of the 13th Dubrovnik Economic Conference, Dubrovnik, Croatia.
46. Zikovic S., Aktan, B. (2009): Global financial crisis and VaR performance in emerging markets: A case of EU candidate states - Turkey and Croatia. Proceedings of Faculty of Economics Rijeka, 21 (1), pp. 149-170.
Julijana Angelovska*
Received: 3. 7. 2012 Preliminary communication
Accepted: 16. 10. 2013 UDC 336.76
* Julijana Angelovska, Faculty of Economics and Administrative Sciences, International Balkan University, Tasko Karaga bb, Skopje, Macedonia, Phone: +389 70 380 397, Fax: +3892 3 174030, E-mail: [email protected]
Copyright Sveuciliste u Splitu Nov/Dec 2013