Value at Risk (VaR) measures the prospective loss in the value of a risky portfolio over a given time span at a specified confidence level. It is the value such that the probability that the loss on the portfolio over the given period exceeds this value equals the stated probability. For example, if the VaR of a portfolio is $950 over a one-month period at a 70% confidence level, there is only a 30% probability that the portfolio will lose more than $950 over that month (Abken 2000, p. 21).
VaR is widely used to measure risk exposure. It has been adopted by many investment and commercial banks to capture the possible loss from unfavorable market changes over a given time frame. The key elements of VaR are a specified loss in value, a specified time frame, and a confidence level.
There are three methods for computing Value at Risk: the variance-covariance approach, historical simulation, and Monte Carlo simulation. VaR can be computed analytically by making assumptions about the distribution of returns on market risks and using the variances and covariances across those risks; alternatively, it can be estimated by running hypothetical portfolios through historical data or through Monte Carlo simulations. Below is a brief analysis of each of these methods.
The Variance-Covariance Approach

This is a straightforward method except for the challenge of deriving the probability distributions. Consider a basic example: computing the VaR for a single asset, at a 95% confidence level, with the following characteristics:
Mean = $120 million
Annual standard deviation = $10 million
At this 95% confidence level, the asset's value can be assessed not to fall below $100 million (two standard deviations below the mean) or rise above $140 million (two standard deviations above the mean). When working with several assets in a portfolio, estimating the parameters is more complicated because the assets in the portfolio move together; in estimating the variance of a portfolio, the covariances of the assets must be considered. In large portfolios with shifting asset compositions, calculating VaR directly becomes unwieldy. This problem is addressed by mapping the risk of the individual investments in a portfolio onto more general market risks, and the measure is then estimated from those market risk exposures.
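The single-asset case can be sketched in a few lines of Python, using the mean and standard deviation from the example and a two-standard-deviation band as the (approximately 95%) confidence interval:

```python
# Single-asset VaR under a normal-returns assumption,
# using the figures from the example above.
mean_value = 120.0   # expected asset value, $ millions
std_dev = 10.0       # annual standard deviation, $ millions
z = 2.0              # two standard deviations, roughly 95% confidence

lower_bound = mean_value - z * std_dev   # $100 million
upper_bound = mean_value + z * std_dev   # $140 million
var_95 = mean_value - lower_bound        # loss relative to the mean: $20 million

print(f"95% band: ${lower_bound:.0f}M to ${upper_bound:.0f}M, VaR = ${var_95:.0f}M")
```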
The variance–covariance method involves four steps:
The first step is to take each asset in the portfolio and map it to a set of uniform, simpler instruments that represent the market risks. Stocks and options are among the more complicated assets to map. Once the mapping is done, the variances and covariances are analyzed for the common market risk instruments rather than estimated for each individual asset.
The second step involves stating each financial asset as a set of positions in the standardized market instruments. As with the mapping itself, this process is considerably more complicated for stocks, derivatives, and convertible bonds.
Having identified the standardized instruments, the next step is to estimate the variance of each instrument and the covariances across the instruments, typically from historical data, which is vital in measuring VaR. Allen et al. (2004, p. 34) note that the Value at Risk for the portfolio is computed in the final stage, using the weights in the standardized instruments determined in step two and the variances and covariances estimated in step three. This final step assumes that the returns on the standardized risk measures are normally distributed.
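The final step can be illustrated with a minimal sketch. The two-instrument weights, covariance matrix, and portfolio value below are made-up illustration numbers, not figures from the text; in practice they would come from steps two and three:

```python
import math

# Illustrative portfolio VaR via the variance-covariance approach.
# Weights and covariance matrix are hypothetical demonstration values.
weights = [0.6, 0.4]              # weights in two standardized instruments
cov = [[0.0004, 0.0001],          # daily return variance-covariance matrix
       [0.0001, 0.0009]]
portfolio_value = 100.0           # $ millions
z_99 = 2.326                      # one-sided 99% normal quantile

# Portfolio variance: w' * Cov * w
port_var = sum(weights[i] * cov[i][j] * weights[j]
               for i in range(2) for j in range(2))
daily_var_99 = z_99 * math.sqrt(port_var) * portfolio_value
print(f"One-day 99% VaR: ${daily_var_99:.3f} million")
```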
The following illustrates the VaR computation for a six-month dollar/euro forward contract. The standardized instruments are six-month risk-free securities in euros and dollars and the spot dollar/euro exchange rate; the dollar values of the instruments are computed, and the Value at Risk is based on the covariances across the three assets. The computation goes through the four steps as follows:
Step 1: Each asset in the portfolio is mapped to standardized, simpler instruments. The market factors affecting the instrument in this illustration are the spot exchange rate and the six-month risk-free rates in each currency; the financial instruments corresponding to these risk factors are the spot dollar/euro position, the six-month zero-coupon dollar bond, and the six-month zero-coupon euro bond.
Step 2: This step identifies the position of each financial asset in the standardized instruments. It is assumed that the forward contract requires $12.7 million to be delivered in 180 days in exchange for 10 million euros. It is further assumed that the annual interest rate on a six-month zero-coupon euro bond is 3%, on the dollar bond 4%, and that the current spot rate is $1.26/euro. The positions in the three instruments can then be computed as below:
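Under those assumptions, the three positions follow from discounting each leg of the contract at its own six-month rate and converting the euro leg at spot; a minimal sketch:

```python
# Positions in the three standardized instruments for the forward contract,
# using the rates and spot quoted in the example above.
dollars_due = 12.7        # $ millions to be delivered in 180 days
euros_due = 10.0          # million euros to be received
r_dollar = 0.04           # annual rate, six-month zero-coupon dollar bond
r_euro = 0.03             # annual rate, six-month zero-coupon euro bond
spot = 1.26               # dollars per euro

# Short position: present value of the dollars owed (six months = half a year)
dollar_bond = -dollars_due / (1 + r_dollar) ** 0.5

# Long position: present value of the euros receivable, converted at spot
euro_bond_in_euros = euros_due / (1 + r_euro) ** 0.5
euro_bond = euro_bond_in_euros * spot

# The spot euro exposure equals the dollar value of the euro bond position
spot_position = euro_bond

print(f"Dollar bond: ${dollar_bond:.4f}M")   # about -$12.45 million
print(f"Euro bond:   ${euro_bond:.4f}M")     # about  $12.42 million
```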
The value of the long position in the zero-coupon euro bond and the value of the spot euro position are the same. This is because the asset exposes one to euro risk in two places: both the spot exchange rate and the riskless euro rate can change over time.
Step 3: Having identified the standardized instruments affecting the assets in the portfolio, the variances of each instrument and the covariances across the instruments are estimated. Considering the six-month dollar/euro forward contract and the three standardized instruments the investment was mapped onto, assume that the variance-covariance matrix (in daily returns) across those instruments is as follows:
In practice, the covariance and variance estimates are usually obtained from historical data. In the matrix above, the covariance of an asset with itself is its variance, so the values on the diagonal are the variances of the assets; the daily return variance of the dollar bond, for instance, is 0.0000314.
Research has been conducted aimed at improving the estimation techniques in order to provide more dependable variance and covariance values for VaR calculations. Some suggest refining the sampling methods and data inputs to allow for better estimates of the variances and covariances; others believe that statistical innovations can yield enhanced estimates from existing data. For example, VaR estimates are conventionally based on a constant standard deviation of returns, and researchers argue that much better estimates could be obtained using methods that allow the standard deviation to change over time.
The variance-covariance VaR estimate has been criticized as designed for portfolios with a linear relation between portfolio value and risk positions; as a result, it can break down when the portfolio includes options and other non-linear payoffs. In an attempt to deal with non-linear instruments, researchers developed quadratic Value at Risk measures, sometimes categorized as delta-gamma models, which are linked to the use of ARCH and GARCH models. Work on non-linear Value at Risk, such as Schaefer's, has allowed researchers to estimate VaR for complex portfolios comprising options and option-like securities such as convertible bonds. The price, though, is that the mathematics involved in deriving the VaR becomes exceedingly complicated.
Historical Simulation

This is the second approach to estimating VaR, and it is perceived as the simplest way of estimating Value at Risk for many portfolios. In this approach, the VaR is estimated by creating a hypothetical time series of returns on the portfolio, obtained by running the portfolio through actual historical data and computing the changes in value that would have occurred in each period.
Running a historical simulation begins with time series data on each market risk factor, as in the variance-covariance approach (Colella & Sullivan 1974). The data is not used to estimate variances and covariances, however, since the changes in portfolio value over time yield all the information needed to compute the Value at Risk.
As an illustration, the historical simulation approach has been used to measure the VaR in oil prices. Historical data from 1991 to 1999 was used to obtain the daily prices of crude oil, which are graphed in the figure below:
The daily price changes were separated into positive and negative numbers, and each group was analyzed separately. At a 99% confidence level, the positive Value at Risk was identified as the price change at the 99th percentile of the positive price changes, and the negative VaR as the price change at the 99th percentile of the negative price changes. Separating the price changes into positive and negative groups allows for asymmetry in the return process, that is, for large positive changes being more frequent than large negative changes or vice versa. For the period studied, the Value at Risk at the 99% confidence level was about 1% in both directions.
The implicit assumptions of this approach are evident in this example. First, the approach avoids distributional assumptions, determining the VaR from actual price movements (Madura 2003). Second, each day in the time series is given equal weight when measuring the VaR, which is a potential issue if there is a trend in variability, lower in the earlier periods and higher in the later ones, or vice versa. The third assumption is that history repeats itself, with the period used giving a representative picture of the risks the oil market is exposed to in other periods.
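The core percentile calculation behind such a historical simulation can be sketched as follows; the data here is synthetic illustration data, not the actual oil series:

```python
import random

def historical_var(changes, confidence=0.99):
    """VaR from a historical series of daily changes: the loss at the
    (1 - confidence) quantile of the empirical distribution."""
    ordered = sorted(changes)                       # worst change first
    index = int((1 - confidence) * len(ordered))    # e.g. the 1% tail
    return -ordered[index]                          # report VaR as a positive loss

# Synthetic illustration data: 1,000 hypothetical daily percentage changes.
random.seed(42)
changes = [random.gauss(0.0, 0.5) for _ in range(1000)]
print(f"99% historical VaR: {historical_var(changes):.2f}%")
```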
Modifications to this approach include:
Weighting the recent past more heavily. The argument is that returns in the recent past predict the immediate future better than returns from the distant past. One approach therefore weights more recent data more heavily, using a decay factor as the time-weighting mechanism: each return is weighted by how recent it is rather than equally. If the decay factor is 0.90 and the most recent observation has probability weight p, the observation before it is weighted 0.9p, the one before that 0.81p, and so on. With decay factors, the Value at Risk adjusts remarkably quickly to reflect market changes.
Combining historical simulation with time series models. It has been suggested that better estimates of VaR can be obtained by fitting a time series model to the historical data and using its parameters to forecast the Value at Risk. This modification makes the Value at Risk much more sensitive to changes in the market.
Updating volatility. Some researchers have suggested adjusting past data for changes in volatility. For instruments whose recent volatility is higher than their historical volatility, they propose that the historical data be adjusted to reflect the change (Loistl & Petrag 2003, p. 31). For example, if the updated standard deviation of prices is 0.9% while it was only 0.7% twenty days ago, the change observed twenty days ago should be scaled up accordingly. This approach requires estimating the variances, which are typically obtained using GARCH(1,1) models. All of these variations are intended to capture shifts in the recent past that the traditional approach underweights; the conventional approach also ignores relevant risks outside the sampled historical period and fails to capture structural shifts in the economy.
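The volatility-updating adjustment amounts to rescaling each past return by the ratio of current to past volatility; a minimal sketch reusing the 0.9% / 0.7% figures from the example (the -1.4% observed change is a hypothetical illustration value):

```python
def scale_return(past_return, sigma_then, sigma_now):
    """Adjust a historical return for the change in volatility:
    multiply by the ratio of current to past standard deviation."""
    return past_return * (sigma_now / sigma_then)

observed = -1.4   # hypothetical % change seen twenty days ago
scaled = scale_return(observed, sigma_then=0.7, sigma_now=0.9)
print(f"Scaled change: {scaled:.2f}%")   # -1.4 * (0.9 / 0.7) = -1.80%
```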
Monte Carlo Simulation
This is the last approach to computing the Value at Risk. It assesses the probability of losses beyond a given value rather than the entire distribution.
The first and second steps in a Monte Carlo simulation mirror the first two steps of the variance-covariance approach: the market risks affecting the assets in the portfolio are identified, and the individual assets are converted into positions in standardized instruments. The third step, however, is different. Instead of computing the variances and covariances across the market risk factors, the simulation route specifies the probability distribution for each market risk factor and how the factors move together (Brealey & Myers 2003, p. 42).
In the earlier example of the six-month dollar/euro forward contract, the probability distributions for the zero-coupon dollar bond, the dollar/euro spot rate, and the zero-coupon euro bond must be specified, as well as the correlations across these instruments. Estimating the parameters is easier when normal distributions are assumed for all variables, but the power of Monte Carlo simulation lies in the freedom to choose other distributions.
Once the probability distributions are specified, the simulation begins. In each run the market risk variables take on different outcomes, and the portfolio's value reflects those outcomes. After a series of repeated runs, one obtains a distribution of portfolio values that can be used to calculate the Value at Risk (Rubinstein 1981). For example, if 10,000 simulations are run and the corresponding portfolio values derived, these values can be ranked from lowest to highest; the 99% Value at Risk corresponds to the 100th-lowest value and the 95% to the 500th-lowest value.
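The run-and-rank procedure can be sketched for a single risk factor. The normal distribution, drift, and volatility below are illustrative assumptions, not the forward-contract parameters from the text:

```python
import random

# Minimal Monte Carlo VaR sketch for a single risk factor.
random.seed(7)
initial_value = 100.0       # $ millions
mu, sigma = 0.0002, 0.012   # assumed daily drift and volatility
n_runs = 10_000

# One simulated one-day portfolio value per run.
values = [initial_value * (1 + random.gauss(mu, sigma)) for _ in range(n_runs)]
values.sort()               # lowest to highest

# 99% VaR: loss at the 100th-lowest value; 95% VaR: loss at the 500th-lowest.
var_99 = initial_value - values[int(0.01 * n_runs) - 1]
var_95 = initial_value - values[int(0.05 * n_runs) - 1]
print(f"99% VaR: ${var_99:.2f}M   95% VaR: ${var_95:.2f}M")
```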
The Monte Carlo simulation approach is often perceived as more refined than historical simulation, although in practice users often rely on historical data to construct their probability distributions.
The method becomes more complex as the number of market risk factors increases, since probability distributions must be estimated for more market risk variables and the number of simulations required to obtain a reasonable estimate of Value at Risk grows significantly.
The advantages of the Monte Carlo simulation approach can be seen by comparison with the other approaches to computing Value at Risk. Unlike the variance-covariance method, it makes no idealized assumption of normality in returns. Unlike the historical simulation approach, it begins with historical data but is free to bring in other information or subjective judgments to improve the estimated probability distributions. It can also be applied to any type of portfolio and is flexible enough to cover options and option-like securities.
Modifications have been suggested to address the weaknesses of this method, especially to reduce its computational burden. The adapted versions narrow the simulation focus using various techniques and reduce the required number of simulations. The modifications are as outlined below.
Running the analysis over a number of discrete scenarios, to reduce the computational burden of Monte Carlo simulation. One approach applies a small set of pre-determined shocks to the system. Another suggestion is scenario simulation, in which principal component analysis is first used to narrow the number of factors; likely combinations of the risk variables are then analyzed to arrive at scenarios, and the simulation results are obtained by computing portfolio values in those scenarios.
Combining Monte Carlo simulation with the variance-covariance method, to speed up the computation of Value at Risk. Under the distributional assumption of normality in returns, the variance-covariance matrix makes the computation of Value at Risk for any portfolio exceptionally easy and fast (Ross et al. 2002), whereas the flexible distributional assumptions of the Monte Carlo approach can make it terribly slow to run. Using estimates from the variance-covariance approach to guide the sampling process in the Monte Carlo approach can therefore save significant time and resources with little loss of precision.
All of the outlined approaches to estimating Value at Risk have both advantages and disadvantages. The variance-covariance approach requires making strong assumptions about the return distributions of the assets, but is exceptionally easy to compute once those assumptions have been made. Historical simulation implicitly assumes that the data used is representative. The Monte Carlo simulation approach is the most flexible in choosing distributions for returns and allows for subjective judgments and external data, but is the most demanding computationally.
Which VaR approach to use depends on the task at hand. When assessing portfolios without options over short time spans, the variance-covariance approach works best. If the risk sources are stable and there is enough historical data, historical simulation is preferable. When computing VaR for non-linear portfolios over long periods, where the historical data is unstable and the normality assumption is doubtful, Monte Carlo simulation is the best choice.
Any Value at Risk figure can be wrong, as there is no single definitive measure and the existing methods all have limitations. Errors in the computation can be large enough to make the measure of risk exposure misleading. It is therefore crucial to carry out a clear and careful analysis, placing more emphasis on using all of the information in the probability distribution than on a small portion of it.