GARCHNet: Value-at-Risk Forecasting with GARCH Models Based on Neural Networks (Computational Economics)

Moreover, the increased volatility may be predictive of volatility going forward. Volatility may then return to pre-crisis levels or settle at a more uniform level. A simple regression model does not account for this variation in volatility exhibited in financial markets, nor is it representative of the “black swan” events that occur more often than predicted. Since its original introduction, many variations of GARCH have emerged. These include Nonlinear GARCH (NGARCH), which addresses correlation and the observed “volatility clustering” of returns, and Integrated GARCH (IGARCH), which restricts the persistence parameters to sum to one.

We assume that this is due to the weaker predictive power of the GARCH model in turbulent periods. In addition, the specific form of the conditional variance does not affect the quasi-maximum likelihood (QML) estimator in the above form. This opens up the possibility of using much more complicated nonlinear forms, such as neural networks (NNs) (Goodfellow et al., 2016).

  1. This is rather undesirable behavior due to the use of a non-linear approach.
  2. We used a rolling-window estimation approach (Zanin & Marra, 2012).
  3. We tested a framework in which the model was fully reset (its weights re-initialized at random) less often than at every forecast timestep.
  4. We look at volatility clustering, and some aspects of modeling it with a univariate GARCH(1,1) model.
  5. However, there exist investments which have considerably more risk and create opportunities for profit through that risk.

GARCH models are viewed as providing better gauges of risk than can be obtained by tracking standard deviation alone. These properties of the ARCH(1) model match many of the stylized facts of daily asset returns. The half-life of a volatility shock is log(0.5)/log(alpha1 + beta1), where the units are those of the return frequency (e.g., days for daily returns).
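As a quick illustration, here is a minimal sketch of the half-life computation for a GARCH(1,1); the parameter values are assumptions for the example, not estimates from any data in this post.

```python
import numpy as np

# Half-life of a volatility shock in a GARCH(1,1) model.
# alpha1 and beta1 are illustrative values, not fitted estimates.
alpha1, beta1 = 0.08, 0.90
persistence = alpha1 + beta1
half_life = np.log(0.5) / np.log(persistence)  # in units of the return frequency
print(f"persistence = {persistence:.2f}, half-life = {half_life:.1f} periods")
```

With these assumed values, the persistence is 0.98 and a shock takes roughly 34 periods to decay halfway back to the unconditional level.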

Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) is a statistical model used in analyzing time-series data where the variance of the error term is believed to be serially autocorrelated. In other words, the series has conditional heteroskedasticity, and the reason for the heteroskedasticity is that the error term follows an autoregressive moving average pattern; its variance is a function of an average of its own past values.
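To make this concrete, the following sketch simulates a GARCH(1,1) process; the parameters are illustrative assumptions. Today's variance depends on yesterday's squared shock and yesterday's variance, which produces the volatility clustering described above.

```python
import numpy as np

# Simulate a GARCH(1,1) process with assumed, illustrative parameters.
rng = np.random.default_rng(0)
omega, alpha1, beta1 = 0.02, 0.08, 0.90
n = 1000
r = np.zeros(n)
sigma2 = np.full(n, omega / (1 - alpha1 - beta1))  # start at unconditional variance
for t in range(1, n):
    # conditional variance: function of past squared return and past variance
    sigma2[t] = omega + alpha1 * r[t - 1] ** 2 + beta1 * sigma2[t - 1]
    r[t] = rng.standard_normal() * np.sqrt(sigma2[t])
```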

Model Diagnostics

It gave us tools in the EDA phase to recognize when to use ARCH and GARCH models. We fit models and built forecasts that could be produced on a rolling basis to predict volatility in the most practical manner. Lastly, we ended with an exploration of what we can do to improve our model by subtly changing our assumptions. We then generate autocorrelation (ACF) and partial autocorrelation (PACF) plots to visualize how previous returns at different lags are correlated with one another. Recall that, because of stationarity, the correlation and covariance at different points in time should be the same for the same gap (also known as the lag h) regardless of the exact point in time t. That is, the ACF and PACF should be constant across the same lag and should depend only on the lag h, not directly on t.
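A minimal sketch of these diagnostic plots, assuming `returns` is a pandas Series of stationary daily returns:

```python
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# ACF and PACF of returns; `returns` is assumed to be a stationary series.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(returns, lags=20, ax=axes[0])
plot_pacf(returns, lags=20, ax=axes[1])
plt.show()
```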

Autoregressive conditional heteroskedasticity

However, in this case, we specify that there is actually heteroskedasticity (non-constant variance). Since volatility is conceptually linked with variance, we are interested in modeling how the variance changes over time. In the stock market, volatility refers to the amount and frequency of price fluctuations.

2.1 Statistical Properties of the GARCH(1,1) Model

The efficient market hypothesis states that the price of a stock is reflective of an efficient market; that is, all information about the company is reflected in the current value of the stock. While controversial, this view is at the basis of a lot of economic theory. It implies that consistently beating the market based on information about the price of a security up to time t is impossible, since all information through time t is publicly available. In the previous post, we concluded that the closing price of the S&P 500 could be modeled adequately by a random walk. The model we will use here is the autoregressive conditional heteroskedastic model of order P, ARCH(P). Below, we see how well this rolling volatility prediction captures the volatility in our unseen test data.
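A sketch of such a rolling one-step-ahead forecast, assuming `returns` is a pandas Series and `split` marks the first test observation; the ARCH order here is an illustrative choice:

```python
from arch import arch_model

# Refit on an expanding window and forecast one step ahead each day.
# `returns` and `split` are assumed to exist; p=1 is illustrative.
vol_forecasts = []
for t in range(split, len(returns)):
    am = arch_model(returns[:t], mean="Zero", vol="ARCH", p=1, rescale=False)
    res = am.fit(disp="off")
    fc = res.forecast(horizon=1)
    vol_forecasts.append(fc.variance.iloc[-1, 0] ** 0.5)  # predicted std. dev. for day t
```

Refitting at every step is slow but avoids look-ahead bias; in practice one might refit less often, as noted in the list above.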

This is rather undesirable behavior due to the use of a non-linear approach. The GARCHNet model with a normal distribution appears to have the lowest ABLLF cost function value among the GARCHNet models and can usually compete with the same cost function calculated for its GARCH-family counterpart. In summary, based on the cost function results, we assume that GARCHNet at this stage is a relatively conservative model. The results converge across the indices tested, with noticeable differences, but these are due to the distribution of the data rather than the model specification. One of the most influential drivers of risk is variance, particularly its changing temporal structure and tendency to cluster (Cont, 2002).

The test statistic here is the Jarque–Bera statistic, JB = (n/6) * (S^2 + (K - 3)^2 / 4), where n is the size of the data, S is the sample skewness (which measures the symmetry of the data relative to the mean), and K is the sample kurtosis (which measures the shape of the distribution, especially the tails). We implement the process using a training/testing split of 80%/20%, with the more recent historical data serving as the test set. The above ACF and PACF plots do not indicate that ARCH and GARCH effects are present.
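A sketch of the normality check and the train/test split described above, assuming `returns` is a 1-D array or pandas Series of daily returns:

```python
from scipy import stats

# Jarque-Bera test for normality of returns; `returns` is assumed to exist.
jb_stat, jb_pvalue = stats.jarque_bera(returns)
print(f"JB = {jb_stat:.2f}, p-value = {jb_pvalue:.4f}")

# 80% train / 20% test, with the most recent data held out as the test set.
split = int(len(returns) * 0.8)
train, test = returns[:split], returns[split:]
```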

The standardized residuals from a model assuming a normal distribution will be closer to normally distributed than the residuals from a model on the same data assuming a t distribution. The in-sample estimates of volatility will look quite similar no matter what the parameter estimates are. Figures 1 and 3 would not change much if we changed the parameter estimates for the respective models.
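One way to inspect this, assuming `res` is a fitted result from the `arch` package (e.g. from `arch_model(...).fit()`), is a QQ plot of the standardized residuals:

```python
import matplotlib.pyplot as plt
from scipy import stats

# QQ plot of standardized residuals against the normal distribution;
# `res` is assumed to be a fitted arch model result.
z = res.std_resid.dropna()
stats.probplot(z, dist="norm", plot=plt)
plt.title("QQ plot of standardized residuals")
plt.show()
```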

We use arch_model() from the arch package and specify that the data has zero mean and is modeled with a GARCH process. We set the arguments p and q to 1 and choose not to standardize (“rescale”) the data. After the above transformation to absolute returns, there is now a pattern indicative of ARCH and GARCH effects. In the PACF, there is not a big drop until about lag 4, which might indicate that we should consider orders of p and q up to 4. For simplicity and illustration, we will use a GARCH(1,1) model before assessing and testing different specifications.
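The corresponding fit, assuming `returns` is a pandas Series of daily returns:

```python
from arch import arch_model

# Zero-mean GARCH(1,1) without rescaling, as described above.
am = arch_model(returns, mean="Zero", vol="GARCH", p=1, q=1, rescale=False)
res = am.fit(disp="off")
print(res.summary())
```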

3.1 The Likelihood Function

The assumption of the distribution when fitting the model does have an influence even when using the empirical distribution. The persistence of the model is a key driver of the predictions: it determines how fast the predictions revert to the unconditional volatility. If there really is a lot of persistence in the volatility and your model accurately captures it, then you will get good predictions far ahead. The persistence is estimated by seeing how fast the decay looks during the in-sample period. If there is a trend in the volatility during the in-sample period, then the estimator “thinks” it never sees a full decay. The shorter the sample period, the more likely there is a trend that will fool the estimation.
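The decay toward the unconditional variance can be written in closed form for a GARCH(1,1): the h-step-ahead variance forecast is the unconditional variance plus (alpha1 + beta1)^h times the current deviation from it. A sketch with assumed, illustrative parameters:

```python
# GARCH(1,1) variance forecasts decaying toward the unconditional variance.
# omega, alpha1, beta1 are illustrative values, not fitted estimates.
omega, alpha1, beta1 = 0.02, 0.08, 0.90
persistence = alpha1 + beta1
uncond_var = omega / (1 - persistence)

sigma2_t = 2.0 * uncond_var  # start from an elevated variance level
for h in range(1, 11):
    sigma2_h = uncond_var + persistence ** h * (sigma2_t - uncond_var)
    print(f"h={h:2d}: forecast variance = {sigma2_h:.4f}")
```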

Uncertainty in financial markets has been a main focus of risk-related research for decades (Segal et al., 2015; Vorbrink, 2014). The market standard, established more than 30 years ago as a measure of risk, is Value-at-Risk (VaR) (Duffie & Pan, 1997). It is the simplest way to express potential losses over a target time horizon with a specified statistical certainty.
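For a zero-mean return with a volatility forecast, a one-day parametric VaR is just a quantile of the assumed distribution scaled by that forecast. A sketch under a normality assumption; `sigma_next` is an illustrative forecast value, e.g. from a fitted GARCH model:

```python
from scipy.stats import norm

# One-day parametric (normal) VaR at the 2.5% level, zero-mean returns assumed.
alpha = 0.025
sigma_next = 0.012                      # illustrative volatility forecast
var_25 = -norm.ppf(alpha) * sigma_next  # loss exceeded with probability alpha
print(f"1-day VaR(2.5%) = {var_25:.2%} of portfolio value")
```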

The aforementioned studies, however, do not specifically focus on the implementation of NNs for the conditional variance alone. For example, Kristjanpoller and Minutolo (2015, 2016) use GARCH estimates of variability as inputs to the NN model, while Kim and Won (2018) build NNs with covariates that are parameters of artificially generated GARCH models. Our approach leans toward estimating conditional moments of an assumed distribution using NNs, as in Rothfuss et al. (2019). The first to propose such an approach were Nikolaev et al. (2011), who investigated recursive NNs (RNNs) for representing the conditional variance and found that incorporating nonlinear methods (RNN-GARCH) reduces model uncertainty. Later, Liu and So (2020) considered using an LSTM NN to model the conditional variance directly through maximum likelihood estimation of the density function of the assumed distribution. They showed that this method can successfully determine both the standard deviation and variance of financial returns.
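To illustrate the idea, here is a minimal sketch of an LSTM that maps past returns to a conditional standard deviation and is trained by minimizing the Gaussian negative log-likelihood. The architecture, layer sizes, and class names are assumptions for illustration, not the specification from the paper.

```python
import torch
import torch.nn as nn

class GARCHNetSketch(nn.Module):
    # Hypothetical sketch: an LSTM outputs the conditional std. dev.
    # Hidden size is an illustrative assumption.
    def __init__(self, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, r):  # r: (batch, seq_len, 1) past returns
        h, _ = self.lstm(r)
        # softplus keeps the conditional std. dev. strictly positive
        return nn.functional.softplus(self.head(h[:, -1, :]))

def gaussian_nll(sigma, r_next):
    # negative log-likelihood of next-period returns under N(0, sigma^2)
    return (torch.log(sigma) + 0.5 * (r_next / sigma) ** 2).mean()
```

Under a Student-t assumption, the same structure applies with the t density in the loss; the paper's actual loss and architecture may differ.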

Specifically, this test is known as the Traffic Light Test (Costanzino & Curran, 2018). For VaR at the 2.5% significance level and 250 testing instances, the ‘safe’ (green) zone ends at 10 exceptions (95% cumulative probability) and the yellow (warning) zone ends at 16 exceptions (99.99% cumulative probability). Financial professionals often prefer the GARCH process because it provides a more realistic context than other models when trying to predict the prices and rates of financial instruments.
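These zone boundaries can be reproduced approximately from the binomial distribution of exceptions; a sketch, assuming exceptions are independent with probability 2.5% over 250 days:

```python
from scipy.stats import binom

# If the VaR model is correct, the number of exceptions in 250 days
# is Binomial(250, 0.025); the zone boundaries follow from its CDF.
n, p = 250, 0.025
for k in range(18):
    print(f"{k:2d} exceptions: cumulative probability = {binom.cdf(k, n, p):.6f}")
# Per the text, the green zone ends near k = 10 (~95% cumulative probability)
# and the yellow zone near k = 16 (~99.99%).
```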

