Abstract
In this article, I give a detailed introduction to VaR, an important method for measuring risk. I explain the concept of VaR and describe how it is calculated, then discuss some special tools such as marginal VaR, incremental VaR and component VaR. Most importantly, I list the limitations of VaR and examine some effective alternatives to it.
Key Words: VaR, Application, Limitation
Introduction
Value at Risk is an effective measure of the potential loss of an investment. It is a statistical technique used to measure and quantify the level of financial risk within a firm or investment portfolio over a specific time frame. This metric is most commonly used by investment and commercial banks to determine the size and probability of potential losses in their institutional portfolios. Risk managers use VaR to measure and control the level of risk exposure. One can apply VaR calculations to specific positions or whole portfolios, or use it to measure firm-wide risk exposure. In this section, I explain the concept of VaR and some of its general tools.
(1) Definition
Given a confidence level α ∈ (0, 1), the Value-at-Risk of a portfolio X at level α over the time period t is given by the smallest number k ∈ ℝ such that the probability of a loss over the interval t greater than k is at most 1 − α:
$$\mathrm{VaR}_\alpha(X) = \inf\{x \in \mathbb{R} : F_X(x) > \alpha\}$$
To calculate the VaR number, a straightforward way is to use the distribution of the portfolio return. Given the probability density function (pdf) $f_X$ of X and the confidence level α, the VaR over a time interval t can be obtained from the following equation:

$$1-\alpha=\int_{-\infty}^{-\mathrm{VaR}} f_X(x)\,dx$$
The most common example is the normal distribution. For a given portfolio, if the portfolio return is normally distributed with mean µ and standard deviation σ, the VaR number can be obtained as follows. The standard normal table gives a number $z_\alpha$ corresponding to the confidence level α: if α is chosen to be 95%, the corresponding number is 1.65, and if α is 99%, it is 2.33. Since VaR corresponds to the left tail, the actual cutoff $-\mathrm{VaR} = \mu - z_\alpha\sigma$ is negative, so that

$$\mathrm{VaR} = z_\alpha\sigma - \mu.$$
Moreover, for any general distribution we can apply the standard transformation $z = (x-\mu)/\sigma$ to obtain the VaR number. Furthermore, if $F_X(x)$ is the cumulative distribution function (cdf) of X, the equation can be written as

$$1-\alpha = F_X(-\mathrm{VaR}).$$
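As a concrete illustration, here is a minimal sketch of this parametric calculation in Python. The function name, the scipy dependency and the example numbers are my own choices for illustration, not part of the original derivation.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(mu, sigma, alpha=0.99, value=1.0):
    """VaR of a normally distributed return, reported as a positive loss.

    Solves 1 - alpha = F_X(-VaR), which for X ~ N(mu, sigma^2) gives
    VaR = z_alpha * sigma - mu.
    """
    z = norm.ppf(alpha)  # about 1.65 at 95% confidence, 2.33 at 99%
    return (z * sigma - mu) * value

# Hypothetical example: daily mean 0.05%, volatility 2%, a $1m position
print(parametric_var(mu=0.0005, sigma=0.02, alpha=0.99, value=1_000_000))
```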
(2) VaR Tools
An important objective of the VaR approach is to control and manage risk. Therefore, in addition to calculating the VaR of the entire portfolio, we also want to know which asset contributes the most to the total risk, what impact deleting or adding assets would have, and so on. In this section, a detailed analysis of VaR tools for controlling and managing portfolio risk is presented.
a) Marginal VaR
The first tool for risk management is the marginal VaR, which is defined as the partial derivative of the portfolio VaR with respect to a component's weight. It measures the change in portfolio VaR resulting from adding one additional dollar to a component. Write the portfolio variance as $\sigma_P^2 = w'\Sigma w$, where w is the vector of weights and Σ is the covariance matrix of returns, and take the partial derivative of the variance with respect to $w_i$:

$$\frac{\partial \sigma_P^2}{\partial w_i} = 2\,\mathrm{cov}(R_i, R_P).$$

Since $\partial \sigma_P^2/\partial w_i = 2\sigma_P\,\partial \sigma_P/\partial w_i$, the above equation is equivalent to

$$\frac{\partial \sigma_P}{\partial w_i} = \frac{\mathrm{cov}(R_i, R_P)}{\sigma_P}.$$

Therefore, the marginal VaR for the i-th component is

$$\Delta\mathrm{VaR}_i = \frac{\partial \mathrm{VaR}}{\partial w_i} = z_\alpha\,\frac{\mathrm{cov}(R_i, R_P)}{\sigma_P}.$$

The marginal VaR is closely related to the vector β, which has its i-th component defined by

$$\beta_i = \frac{\mathrm{cov}(R_i, R_P)}{\sigma_P^2}.$$

Recalling that $\mathrm{cov}(R_i, R_P) = (\Sigma w)_i$, the vector β can be expressed in matrix notation as

$$\beta = \frac{\Sigma w}{w'\Sigma w}.$$

Since $\mathrm{VaR} = z_\alpha\,\sigma_P\,P$, where P is the portfolio value, we have $z_\alpha\,\sigma_P = \mathrm{VaR}/P$. Thus the relationship between $\Delta\mathrm{VaR}_i$ and $\beta_i$ is

$$\Delta\mathrm{VaR}_i = z_\alpha\,\sigma_P\,\beta_i = \frac{\mathrm{VaR}}{P}\,\beta_i.$$
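A short numerical sketch of these formulas, with a made-up two-asset covariance matrix (the numbers are illustrative only, not calibrated to real data):

```python
import numpy as np
from scipy.stats import norm

def marginal_var(weights, cov, alpha=0.99):
    """Marginal VaR and beta of each component under the parametric normal model."""
    w = np.asarray(weights)
    sigma2_p = w @ cov @ w          # portfolio variance w' Sigma w
    cov_ip = cov @ w                # cov(R_i, R_P) for every component i
    beta = cov_ip / sigma2_p        # beta_i = cov(R_i, R_P) / sigma_P^2
    dvar = norm.ppf(alpha) * cov_ip / np.sqrt(sigma2_p)
    return dvar, beta

cov = np.array([[0.04, 0.006],      # hypothetical covariance matrix
                [0.006, 0.09]])
dvar, beta = marginal_var([0.6, 0.4], cov)
print(dvar, beta)
```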
b) Incremental VaR
Another tool for risk management is the incremental VaR, which measures the change in VaR due to a new position in the portfolio. Let a be the new position added, where $a_i$ is the amount invested in asset i. Then intuitively the incremental VaR can be defined as the difference between the new VaR and the original VaR, i.e.

$$\text{Incremental VaR} = \mathrm{VaR}_{P+a} - \mathrm{VaR}_P.$$

However, to calculate the VaR of the new portfolio we need to compute a new covariance matrix, which can be time consuming. Therefore, the following approximation is sometimes used to shorten the computation time. Expanding $\mathrm{VaR}_{P+a}$ to first order around the original portfolio, we have

$$\mathrm{VaR}_{P+a} \approx \mathrm{VaR}_P + (\Delta\mathrm{VaR})'\,a.$$

Thus, when a is small relative to P,

$$\text{Incremental VaR} \approx (\Delta\mathrm{VaR})'\,a.$$

Therefore, we can compute $\Delta\mathrm{VaR}$ once for the current portfolio, and when a new trade is added, the approximate incremental VaR is immediately known from this formula.
If only one asset is added to the portfolio, we can choose the amount to invest so that the risk is minimized. This choice is also called the best hedge. Suppose amount $a_i$ is invested in asset i; then the variance of the new portfolio's dollar returns is

$$\sigma_{P+a}^2 = P^2\sigma_P^2 + 2\,a_i P\,\sigma_{iP} + a_i^2\sigma_i^2.$$

Differentiating with respect to $a_i$, we get

$$\frac{\partial \sigma_{P+a}^2}{\partial a_i} = 2\,P\,\sigma_{iP} + 2\,a_i\sigma_i^2.$$

Thus the best hedge occurs when this derivative equals zero, or

$$a_i^* = -\,P\,\frac{\sigma_{iP}}{\sigma_i^2}.$$

Recalling the definition of β, the optimal $a_i^*$ can also be computed by the following formula:

$$a_i^* = -\,P\,\beta_i\,\frac{\sigma_P^2}{\sigma_i^2}.$$
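These two results translate directly into code. Here is a standalone sketch; the function names and all numbers are invented for illustration:

```python
import numpy as np

def incremental_var(dvar, a):
    """First-order approximation: Incremental VaR ~ (Delta-VaR)' a."""
    return float(np.dot(dvar, a))

def best_hedge(P, beta_i, sigma2_p, sigma2_i):
    """Position minimizing new-portfolio variance: a* = -P * beta_i * sigma_P^2 / sigma_i^2."""
    return -P * beta_i * sigma2_p / sigma2_i

# Marginal VaRs of 0.35 and 0.52 per dollar; a new trade of $10,000 and $5,000
print(incremental_var([0.35, 0.52], [10_000, 5_000]))
# Best hedge for an asset with beta 0.8, portfolio variance 0.03, asset variance 0.09
print(best_hedge(P=1_000_000, beta_i=0.8, sigma2_p=0.03, sigma2_i=0.09))
```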
c) Component VaR
The other tool that is extremely useful for managing risk is the component VaR, a partition of the portfolio VaR that indicates how much the VaR would change if a given component were deleted. We can use it to obtain a risk decomposition of the current portfolio. As discussed before, the sum of individual VaRs is not very useful since it discards diversification effects. Thus, we define the component VaR in terms of the marginal VaR as follows:
$$\text{Component VaR}_i = \Delta\mathrm{VaR}_i \times (w_i P) = \mathrm{VaR}\,\beta_i w_i.$$

Since $\sum_i w_i\beta_i = 1$, the component VaRs sum exactly to the total portfolio VaR, which is what makes this decomposition a true partition.
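Putting the pieces together, here is a sketch of the full decomposition for a hypothetical two-asset portfolio (all numbers invented for illustration):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical portfolio: $1m, weights 60/40, 99% confidence
P, w = 1_000_000, np.array([0.6, 0.4])
cov = np.array([[0.04, 0.006],
                [0.006, 0.09]])
z = norm.ppf(0.99)

sigma_p = np.sqrt(w @ cov @ w)
dvar = z * (cov @ w) / sigma_p            # marginal VaR of each component
cvar = dvar * w * P                       # component VaR_i = Delta-VaR_i * w_i * P
print(cvar, cvar.sum(), z * sigma_p * P)  # components sum to the portfolio VaR
```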
Calculation Methods of VaR
Statistically, the value of VaR can be estimated in two different ways. If the probability distribution is assumed known, a parametric method estimates the distribution's parameters and derives the VaR from them. If the distribution is unknown, a non-parametric method takes the empirical quantile directly and uses that quantile as the VaR. Accordingly, VaR models can be divided into two general categories: parametric models and nonparametric models. A parametric model estimates VaR by assuming that the return of the portfolio follows a certain distribution; examples include JP Morgan's RiskMetrics and the GARCH model. A nonparametric model makes no assumption about the return distribution of the portfolio; it estimates the VaR by analyzing and simulating historical data, as in historical simulation and Monte Carlo simulation.
(1) Historical simulation
Assume that returns are independent and identically distributed, so that the future fluctuations of market factors are exactly like historical fluctuations. We use the changes observed in historical samples to simulate the future distribution of asset returns, then find the quantile corresponding to a given confidence level to determine the VaR.
The method is simple, intuitive and easy to understand, and it does not require any assumption about the statistical distribution of returns, avoiding parameter-estimation error and capturing the fat-tail behavior of the data and the autocorrelation between observations. Compared to the parametric method, historical simulation may give more accurate predictions at the lower tail. In addition, it can also handle nonlinear positions effectively. Based on this robustness and intuitiveness, the Basel Committee on Banking Supervision adopted historical simulation in 1993 as the basic measure of market risk.
The main problem with this approach is its assumption that returns repeat historical changes and are independent: the distribution and density functions are assumed not to change over time, which is inconsistent with actual financial market conditions. When volatility changes sharply in the short term, the sample size has a large impact on the prediction, so accurate estimation requires a long data history (at least five years). The empirical distribution obtained by this method is generally discontinuous and provides no loss prediction beyond the largest observed sample point. The simulation method is therefore best seen as complementary to the parametric method.
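A minimal sketch of historical-simulation VaR; the simulated fat-tailed sample below merely stands in for a real return history:

```python
import numpy as np

def historical_var(returns, alpha=0.99, value=1.0):
    """Historical simulation: VaR is the empirical alpha-quantile of past losses.

    No distributional assumption is made; the historical sample itself is
    treated as the forecast distribution of future returns.
    """
    losses = -np.asarray(returns) * value
    return np.quantile(losses, alpha)

# Fat-tailed synthetic data standing in for ~5 years of daily returns
rng = np.random.default_rng(0)
past = rng.standard_t(df=4, size=1250) * 0.01
print(historical_var(past, alpha=0.99, value=1_000_000))
```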
(2) Moving Average
Moving average (MA) is one of the most popular and easiest-to-use tools for measuring time-varying risk. By using an average of past observations, the moving average provides a smooth trend line, which can be used to predict future changes in the risk factor. Suppose we have returns $r_t$ over n days and we choose to use an M-day average. Then day M is the first day on which an average can be computed, and (taking the mean daily return to be zero, as is customary for short horizons) the average variance is

$$\sigma_M^2 = \frac{1}{M}\sum_{t=1}^{M} r_t^2.$$

The variance for day M + 1 is obtained by adding the newest observation $r_{M+1}$ and dropping the earliest observation $r_1$. Continuing in this way, each day the variance is updated by adding the most recent day's squared return and dropping the one from M days ago, dividing the sum by M. The general formula for the average computation is

$$\sigma_t^2 = \frac{1}{M}\sum_{i=0}^{M-1} r_{t-i}^2.$$
When all n days of data have been used, we can fit these points with a smooth line, and this line indicates the trend of changes in volatility.
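The rolling update above can be written in a few lines. A sketch; the cumulative-sum trick is just one convenient way to implement the add-newest/drop-oldest update:

```python
import numpy as np

def moving_average_variance(returns, M):
    """Rolling M-day variance: sigma_t^2 = (1/M) * sum of the last M squared
    returns (mean return taken as zero)."""
    r2 = np.asarray(returns) ** 2
    csum = np.concatenate(([0.0], np.cumsum(r2)))
    # window sum = csum[t] - csum[t-M]: add the newest point, drop the one M days back
    return (csum[M:] - csum[:-M]) / M   # variances for days M, M+1, ..., n

# Example on synthetic returns, 20-day window
rets = np.random.default_rng(1).normal(0, 0.02, 500)
vols = np.sqrt(moving_average_variance(rets, M=20))
```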
(3) GARCH Estimation
Because the GARCH model describes the characteristics of financial time series well, capturing the time variation of variance and handling fat tails, it estimates the inputs to the VaR model better than other volatility estimation methods. The model usually consists of two equations: the first is the autoregressive or conditional-mean equation, and the other is the conditional-variance equation, which iterates out the volatility value for each period.
Define $\sigma_t^2$ as the conditional variance, i.e. the forecast of the variance of a time series at time t based on previous data. In the GARCH model, $\sigma_t^2$ is a function of the previous conditional variances back to time t − p and the previous returns back to time t − q, and the value of $\sigma_t^2$ in the GARCH(p, q) process is

$$\sigma_t^2 = \omega + \sum_{i=1}^{q} \alpha_i r_{t-i}^2 + \sum_{j=1}^{p} \beta_j \sigma_{t-j}^2,$$

where $r_t$ is the return on day t and $\sigma_t^2$ is the conditional variance on day t.
Here we focus our attention on the simplest case, the GARCH(1,1) process. In the GARCH(1,1) model, the value of $\sigma_t^2$ can be calculated from the return $r_{t-1}$ and the conditional variance $\sigma_{t-1}^2$ by the following formula:

$$\sigma_t^2 = \omega + \alpha\, r_{t-1}^2 + \beta\, \sigma_{t-1}^2.$$

Computation of the average standard deviation: since the return $r_t$ is a normal variable with mean zero and variance $\sigma_t^2$, we have $E[r_t^2] = \sigma_t^2$, and taking expectations on both sides of the recursion gives the unconditional (average) variance

$$\sigma^2 = \frac{\omega}{1 - \alpha - \beta}.$$
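A sketch of the GARCH(1,1) variance recursion, seeded at the long-run average variance; the parameter values below are typical magnitudes for daily equity data, not estimates:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Iterate sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    starting from the unconditional variance omega / (1 - alpha - beta)."""
    r = np.asarray(returns)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = omega / (1.0 - alpha - beta)  # long-run average variance
    for t in range(len(r)):
        sigma2[t + 1] = omega + alpha * r[t] ** 2 + beta * sigma2[t]
    return sigma2

sig2 = garch11_variance([0.010, -0.030, 0.002], omega=2e-6, alpha=0.08, beta=0.90)
print(np.sqrt(sig2))  # one-day-ahead volatility forecasts
```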
Limitations of VaR
(1) VaR does not measure worst case loss
VaR ignores tail events. A 99% VaR really means that in 1% of cases (two to three trading days per year for a daily VaR) the loss is expected to be greater than the VaR amount. Value at Risk says nothing about the size of losses within this 1% of trading days, and by no means does it say anything about the maximum possible loss. The worst case might be only a few percent above the VaR, but it could also be large enough to liquidate your company. The experience of Goldman Sachs in the subprime mortgage crisis shows the effect of this 1%: moves it described as 25-sigma events, a typical super black swan. An 8-sigma event is expected roughly once in the age of the Earth, 9 sigma is already near the limit of what software such as MATLAB can compute, and 25 sigma corresponds to something like once in 1.3e+135 years.
(2) VaR can be misleading: false sense of security
This figure is too blunt and easy to misread. We all understand the definition of VaR, but when the number appears in front of you, the easy illusion is that it is "my biggest possible loss", which creates a false sense of security. However, this maximum loss is defined only within a confidence level strictly less than one. Unfortunately, in reality 99% is very far from 100%, and this is where the limitations of VaR, and an incomplete understanding of them, can be fatal.
(3) VaR is not that effective and accurate
The three most commonly used calculation approaches are the weighted variance-covariance method, simple historical simulation, and simple Monte Carlo simulation, and each works well only in routine situations. When the portfolio is large, complicated and nonlinear, calculating the covariance matrix is painful. In the presence of skewness and excess kurtosis, treating returns as normal underestimates the risk. At the same time, VaR is not additive (it is not even sub-additive), so the calculation must be repeated after every adjustment of positions, which is troublesome. Finally, the results of different calculation methods often differ significantly, which is also awkward, because one does not know which result is the most representative.
Approaches to Overcoming the Limitations
(1) Expected Shortfall (conditional VaR)
Expected Shortfall is defined as the average of all losses that are greater than or equal to VaR, i.e. the average loss in the worst (1 − α) share of cases, where α is the confidence level. Said differently, it gives the expected loss of an investment in the worst q% of cases. Another important advantage of ES is that it satisfies sub-additivity. This method studies the mean of the tail losses, weighting every loss in the tail equally. The calculation result is closer to the actual situation, but ES is more complicated to calculate. Here is the mathematical definition.
If X is the payoff of a portfolio at some future time and 0 < α < 1 (note that in this definition α denotes the tail probability, i.e. one minus the confidence level), then we define the expected shortfall as

$$\mathrm{ES}_\alpha(X) = -\frac{1}{\alpha}\int_0^\alpha \mathrm{VaR}_\gamma(X)\,d\gamma,$$

where $\mathrm{VaR}_\gamma$ is the Value at Risk. This can be equivalently written as

$$\mathrm{ES}_\alpha(X) = -\frac{1}{\alpha}\Big( E\big[X\,1_{\{X \le x_\alpha\}}\big] + x_\alpha\big(\alpha - P(X \le x_\alpha)\big) \Big),$$

where $x_\alpha = \inf\{x \in \mathbb{R} : P(X \le x) \ge \alpha\}$ is the lower α-quantile and $1_A$ is the indicator function. The dual representation is

$$\mathrm{ES}_\alpha(X) = \inf_{Q \in \mathcal{Q}_\alpha} E^Q[X],$$

where $\mathcal{Q}_\alpha$ is the set of probability measures absolutely continuous with respect to the physical measure P whose density dQ/dP is bounded by $1/\alpha$.
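Empirically, ES is just the mean of the tail beyond the VaR. A minimal sketch (written in the confidence-level convention, so the tail holds the worst 1 − α of outcomes):

```python
import numpy as np

def expected_shortfall(returns, alpha=0.99, value=1.0):
    """Empirical ES: average of the losses at or beyond the VaR level."""
    losses = -np.asarray(returns) * value
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()  # always >= VaR for the same sample

rng = np.random.default_rng(0)
past = rng.standard_t(df=4, size=1250) * 0.01
print(expected_shortfall(past, alpha=0.99, value=1_000_000))
```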
(2) Extreme Value Theory (EVT)
Extreme Value Theory (EVT) takes a different approach to calculating VaR. EVT concentrates on estimating the shape of only the tail of a probability distribution. Given this shape, we can estimate the losses associated with very small probabilities, such as the 99.9% VaR. A typical shape used is the Generalized Pareto Distribution, whose tail has the form

$$G(x) = 1 - \Big(1 + \frac{\xi\,(x - u)}{\beta}\Big)^{-1/\xi},$$

where the threshold u, the shape ξ and the scale β are parameters chosen so the function fits the data in the tail. The main problem with the approach is that it is only easily applicable to single risk factors. It is also, by definition, difficult to parameterize, because there are few observations of extreme events.
Let $X_1, X_2, \ldots, X_n$ be a sequence of independent and identically distributed random variables with cumulative distribution function F, and let $M_n = \max(X_1, \ldots, X_n)$ denote the maximum. In theory, the exact distribution of the maximum can be derived:

$$P(M_n \le z) = P(X_1 \le z)\cdots P(X_n \le z) = F(z)^n.$$
The associated indicator function $I_t = 1_{\{X_t > z\}}$ is a Bernoulli process with success probability $P(z) = 1 - F(z)$ that depends on the magnitude z of the extreme event. The number of extreme events within n trials thus follows a binomial distribution, and the number of trials until an event occurs follows a geometric distribution with expected value and standard deviation of the same order $O(1/P(z))$.
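As an illustration of the peaks-over-threshold idea, here is a hedged sketch that fits a GPD to losses above a high threshold and inverts the fitted tail for an extreme quantile. It assumes ξ ≠ 0 and leans on scipy's generic maximum-likelihood fitter, which is only one of several ways to estimate the parameters:

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(returns, alpha=0.999, threshold_q=0.95):
    """Peaks-over-threshold EVT: fit a GPD to losses above a threshold u,
    then invert P(loss > x) = p_u * (1 + xi*(x - u)/beta)^(-1/xi)."""
    losses = -np.asarray(returns)
    u = np.quantile(losses, threshold_q)           # high threshold
    excesses = losses[losses > u] - u
    xi, _, beta = genpareto.fit(excesses, floc=0)  # shape xi, scale beta
    p_u = len(excesses) / len(losses)              # empirical P(loss > u)
    return u + (beta / xi) * ((p_u / (1 - alpha)) ** xi - 1)

rng = np.random.default_rng(0)
past = rng.standard_t(df=4, size=5000) * 0.01
print(evt_var(past, alpha=0.999))   # 99.9% VaR from the fitted tail
```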