Limits of Granger Causality

In this article: We discuss how Granger Causality works, some of its common issues and drawbacks, and potential ways to improve the method(s), building primarily off our previous article Pitfalls of Backtesting and insights gained from building ProjectPiglet.com.


One of the most common forms of analysis on the stock market is Granger Causality, a method for indicating whether one signal possibly causes another. This type of causality is often called “predictive causality”, as it does not determine causality with certainty; it simply identifies correlations at various time intervals.

Why Granger Causality? If you search “causality in the stock market”, you’ll be greeted with a list of links all mentioning “granger causality”:

Search on DuckDuckGo

In other words, it’s popular, and Clive Granger won a Nobel Prize on the matter[1]. That being said, there are quite a few limitations. In this article, we’ll cover a brief example of Granger Causality, as well as some of the common pitfalls and how brittle it can be.

What is Granger Causality?

Granger Causality (from Wikipedia) is defined as:

A time series X is said to Granger-cause Y if it can be shown, usually through a series of t-tests and F-tests on lagged values of X (and with lagged values of Y also included), that those X values provide statistically significant information about future values of Y.

In other words, Granger Causality is an analysis for determining whether one signal has a statistically significant impact on another. Pretty straightforward, and it’s even clearer with an image:

From Wikipedia

In a sense, it’s just one spike in a graph causing another spike at a later time. The real challenge is that this relationship needs to be consistent: it has to repeat over the course of the entire dataset.
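For concreteness, here is a minimal sketch of running the test in Python with statsmodels, on synthetic data where one series is a noisy, delayed copy of the other (the data and parameters here are purely illustrative):

```python
# Minimal sketch of a Granger causality test with statsmodels, on
# synthetic data where y "echoes" x two steps later.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
n = 500
x = rng.normal(size=n)
y = rng.normal(scale=0.1, size=n)
y[2:] += x[:-2]                    # y follows x with a 2-step lag

# grangercausalitytests expects a 2-column array and tests whether the
# SECOND column Granger-causes the FIRST.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=4, verbose=False)

for lag, (tests, _) in results.items():
    print(f"lag={lag}: F-test p-value = {tests['ssr_ftest'][1]:.4g}")
```

A consistently small p-value across lags suggests probable causality. This brings us to the next part: one of the fragile aspects of this method is that it often doesn’t account for seasonality.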

Granger Causality and Seasonality

One common aspect of markets is that they are seasonal. Food-related commodities (as traded on the futures market) are extremely impacted by seasonality[2]. For instance, if there is a drought across Illinois and Indiana during the summer (killing the corn crop), then corn prices from Iowa would likely rise (i.e. the corn from Iowa would be worth more).

From Wikipedia

Continuing the example, there may be decades where some pattern in the market holds and Granger Causality is relevant: during summer heat waves in Illinois, corn prices in Iowa increase. On the other hand, with the advent of irrigation methods that deliver water underground, heat waves may no longer damage crops[3]. Thus, heat waves in Illinois may no longer impact corn prices in Iowa.

If we then search for Granger Causality over the entire time range, spanning both (a) the pre-irrigation and (b) the post-irrigation periods, we will find there is no causality!

However, within the pre-irrigation time range alone we will find probable causality, while within the post-irrigation time range we likely won’t. Any time you combine two such timeframes, the default result is no Granger Causality (unless one is a very small portion of the dataset). This brings us to the conclusion that:

Granger Causality is very sensitive to timeframe(s)

Just a few data points in either direction can break the analysis. This makes sense, as the method evaluates whether two time series are related across the entire sample; however, it does show how brittle the method can be.
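To make this concrete, here is a hypothetical sketch (synthetic data, illustrative parameters) in which each half of the series shows a strong relationship on its own, yet the combined range shows nothing. To make the cancellation stark, the second regime inverts the relationship rather than dropping it:

```python
# Hypothetical sketch of a regime change halfway through the series:
# in regime (a) y follows x, in regime (b) the relationship inverts.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 400
half = n // 2
x = rng.normal(size=n)
y = rng.normal(scale=0.5, size=n)
y[2:half] += x[0:half - 2]       # regime (a): y lags x by 2 steps
y[half:] -= x[half - 2:n - 2]    # regime (b): inverted relationship

def ftest_p(a, b, lag=2):
    """p-value for 'b Granger-causes a' at the given lag (F-test)."""
    res = grangercausalitytests(np.column_stack([a, b]), maxlag=lag, verbose=False)
    return res[lag][0]["ssr_ftest"][1]

print("regime (a):", ftest_p(y[:half], x[:half]))  # typically p < 0.05
print("regime (b):", ftest_p(y[half:], x[half:]))  # typically p < 0.05
print("combined  :", ftest_p(y, x))                # typically p >> 0.05
```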

Granger Causality and Sparse Datasets

Yet another potential issue with Granger Causality is sparse datasets. Let’s say we have dataset X and dataset Y: if dataset X has 200 data points and dataset Y has 150 data points, how do you merge them? Assuming they are in (datetime, value) format, if we do an inner join on “datetime”, we get something that looks like the following:

From W3Schools

Then we will have 150 data points in a combined X and Y dataset, i.e.: (datetime, x, y). Unfortunately, if the data is continuous (as most time series data is), this completely breaks our Granger Causality analysis: we are simply skipping over days, which would break any causality analysis.
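In pandas terms, a sketch with made-up data (the column names and dates are illustrative):

```python
# Sketch: dataset X has 200 daily points; dataset Y is missing 50
# scattered days, as sparse real-world data often is.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
dates = pd.date_range("2018-01-01", periods=200, freq="D")
x = pd.DataFrame({"datetime": dates, "x": rng.normal(size=200)})
y_dates = np.sort(rng.choice(dates, size=150, replace=False))
y = pd.DataFrame({"datetime": y_dates, "y": rng.normal(size=150)})

inner = x.merge(y, on="datetime", how="inner")
print(len(inner))  # 150 rows; the 50 missing days are silently skipped
```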

In contrast, we could do an outer join:

From W3Schools

We will have 200 data points in the combined X and Y dataset. Again, there’s an issue: we will have empty values (Null, NULL, None, NaN, etc.) wherever the Y dataset didn’t have data (recall Y only had 150 data points). The dataset would then have entries that look like: (datetime, x, NULL).

To fix the empty values, we can attempt to use a forward or back fill technique, where you fill each empty value with the previous or following location’s real value.

This code could look like the following:
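A sketch, reusing the hypothetical x and y frames from the inner-join example above:

```python
# Outer join keeps all 200 dates, leaving NaNs where y has no data;
# forward fill then copies the last real value forward, and back fill
# covers any leading NaNs that have no previous value to copy.
outer = x.merge(y, on="datetime", how="outer").sort_values("datetime")
print(outer["y"].isna().sum())           # 50 missing y values

outer["y"] = outer["y"].ffill().bfill()
```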

On paper, this method sounds promising: you’ll end up with a series that’s continuous, with all real values. In practice, you’ll get a graph like this:

Change in BCH price vs Random Walk (with NaNs)

As you can see, there are large sections of time where the data is flat. Recall the seasonality issue with Granger Causality? This method of outer joins plus forward/back filling will definitely cause similar issues, leading to few (if any) meaningful correlations.

Sparse datasets make it very difficult (or impossible) to identify probable causality.

Granger Causality and Resampling

There is another option for us: “resampling”. Instead of just filling the empty values (Nulls/NaNs) with the previous or following real values, we resample the whole series. Resampling is a technique where we fill the holes in the data with what amounts to an educated guess of what the data could be.

Although there are quite a few techniques, in this example we’ll use the Python package SciPy, specifically its signal module.
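For instance, a sketch using FFT-based resampling via scipy.signal.resample (the series length and target sample count here are arbitrary):

```python
# Stretch a 150-point series to 200 evenly spaced points with FFT-based
# resampling. Note that resample assumes a complete, evenly spaced
# signal; NaNs must be removed or filled beforehand, or they poison
# the FFT.
import numpy as np
from scipy import signal

rng = np.random.default_rng(7)
series = rng.normal(size=150).cumsum()   # a random-walk-like series
resampled = signal.resample(series, 200)
print(resampled.shape)                   # (200,)
```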

At first glance, this appears to have solved some of the issues:

Change in Bitcoin Price vs Random Walk

However, in reality it does not work well, especially if the dataset starts or ends with NaNs (at least when using the SciPy package):

Change in BCH price vs Random Walk (with NaNs)

If you notice, prior to roughly the 110th data point, the values just oscillate up and down. The resampling method SciPy uses does not appear functional or practical with so few real data points. This is because I selected a dataset for Bitcoin Cash (BCH) whose date range begins before Bitcoin Cash (BCH) became a currency (i.e. there is no price information).

In a sense, this indicates it’s not possible (at least given the data provided) to attempt Granger Causality on the given date ranges. Small gaps in time can have dramatic impacts on whether or not “probable causality” exists.

When determining Granger Causality, it is extremely important to have two complete, overlapping datasets.

Without two complete datasets, it’s impossible to identify whether or not there are correlations over various time ranges.

Resampling can cause artifacts that impact the Granger Causality method(s).

In fact, the most recent example actually tested positive for Granger Causality (p-value < 0.05). That is the worst scenario, as it is a false positive. The false positive occurs because, when both datasets were resampled, they ended up with matching oscillations; it wouldn’t even have been noticed if the raw datasets weren’t being reviewed.

This is probably the largest issue with Granger Causality: every dataset needs to be reviewed to see if it makes sense. Sometimes what at first appears to make sense is, in reality, an artifact of the underlying data having been altered in some way (such as by resampling).

Granger Causality and Non-Linear Regression

Changing gears a bit (before we get to a real-world ProjectPiglet.com example), it’s important to note that most Granger Causality implementations use linear regression. In other words, the method searches for linear correlations between datasets:

From austingwalters.com

However, in many cases, especially in the case of markets, correlations are highly likely to be non-linear. This is because markets are anti-inductive[5]: every pattern discovered in a market creates a new pattern as people exploit that inefficiency. This idea is closely related to the Efficient Market Hypothesis.

Ultimately, this means most implementations of Granger Causality are overly simplistic, as many of the remaining correlations are likely non-linear in nature. There are a large number of non-linear regression models; below is an example of Gaussian Process Regression:

From Wikipedia
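As a rough sketch of what such a model looks like in code (using scikit-learn’s GaussianProcessRegressor; the kernel and data here are illustrative, not a recommendation):

```python
# Gaussian Process Regression on a noisy non-linear signal.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 10, size=40)).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=40)

# RBF captures the smooth non-linear trend; WhiteKernel models noise.
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X, y)

X_new = np.linspace(0, 10, 100).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)  # prediction and uncertainty
```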

Similar non-linear regression techniques do appear to improve Granger Causality[6]. This is probably because most linear correlations are already priced into the market, leaving non-linear correlations as the place where potential profits remain. It remains to be seen how effective this can be, as most research in the area is kept private (protecting the profits of trading firms). What we can say is that non-linear methods do improve predictions on ProjectPiglet.com, although they also require larger datasets than their linear regression counterparts.

Conclusion

Overall, Granger Causality has quite a few potential pitfalls. It is useful for indicating a potential correlation, but that correlation is only ever probable, never certain. It can help identify market inefficiencies and open up opportunities to make money, but it will likely require more finesse than simple linear regression.

All that being said, we hope you’ve found some of these insights useful!
