
In the high frequency space, our focus is on strategies with very high Sharpe Ratios and low drawdowns. We trade a range of futures products, including equity, fixed income, metals and energy markets. Despite the current low levels of market volatility, these strategies have performed well in 2017:

Building high frequency strategies with double-digit Sharpe Ratios requires a synergy of computational capability and modeling know-how. The microstructure of futures markets is, of course, substantially different to that of equity or forex markets and the components of the model that include microstructure effects vary widely from one product to another. There can be substantial variations too in the way that time is handled in the model – whether as discrete or continuous “wall time”, in trade time, or some other measure. But some of the simple technical indicators we use – moving averages, for example – are common to many models across different products and markets. Machine learning plays a role in most of our trading strategies, including high frequency.

Here are some relevant blog posts that you may find interesting:

The post Systematic Futures Trading appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post Protected: Systematic Strategies Fund – Sept 2017 appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post Analyzing the FDIC Dataset appeared first on QUANTITATIVE RESEARCH AND TRADING.


But, in fact, I really did have in mind something more like this:

We are following an example from the recently published *Mathematica Beyond Mathematics* by Jose Sanchez Leon, an up-to-date text that describes many of the latest features in Mathematica, illustrated with interesting applications. Sanchez Leon shows how Mathematica’s machine learning capabilities can be applied to the craft of wine-making.

We begin by loading a curated Wolfram dataset comprising measurements of the physical properties and quality of wines:

We’re going to apply Mathematica’s built-in machine learning algorithms to train a predictor of wine quality, using the training dataset. Mathematica determines that the most effective machine learning technique in this case is Random Forest and after a few seconds produces the predictor function:

Mathematica automatically selects what it considers to be the best performing model from several available machine learning algorithms:

Let’s take a look at how well the predictor performs on the test dataset of 1,298 wines:
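The Mathematica output is not reproduced here, but the fit statistics involved are easy to sketch. A minimal Python illustration with a hypothetical toy test set (the real test set holds 1,298 wines; the `evaluate` helper and the data points are purely illustrative, not from the post):

```python
import math

def evaluate(predict, test_set):
    """Score a predictor on held-out (features, quality) pairs."""
    errors = [predict(x) - y for x, y in test_set]
    n = len(errors)
    mae = sum(abs(e) for e in errors) / n             # mean absolute error
    rmse = math.sqrt(sum(e * e for e in errors) / n)  # root mean squared error
    return mae, rmse

# Hypothetical test points: ((pH, alcohol), quality score)
test_set = [((3.2, 10.5), 6.0), ((3.4, 11.0), 7.0), ((3.0, 9.8), 5.0)]
baseline = sum(q for _, q in test_set) / len(test_set)  # predict the mean quality
mae, rmse = evaluate(lambda x: baseline, test_set)
```

A trained model such as the Random Forest predictor should beat this mean-quality baseline on both measures.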

We can use the predictor function to predict the quality of an unknown wine, based on its physical properties:

Next we create a function to predict the quality of an unknown wine as a function of just two of its characteristics, its pH and alcohol level. The analysis suggests that the quality of our unknown wine could be improved by increasing both its pH and alcohol content:

This simple toy example illustrates how straightforward it is to deploy machine learning techniques in Mathematica. Machine Learning and Neural Networks became a major focus for Wolfram Research in version 10, and the software’s capabilities have been significantly enhanced in version 11, with several applications such as text and sentiment analysis that have direct relevance to trading system development:

For other detailed examples see:

The post A Winer Process appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post The Story of a HFT Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.


In case you missed it, the post can be found here:

We saw previously that the levels of the three indices are all highly correlated, and we were able to successfully account for approximately half the variation in the VIX index using either linear regression models or non-linear machine-learning models that incorporated the two correlation indices. It turns out that the log-returns processes are also highly correlated:
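For concreteness, converting index levels to log-returns is a one-liner (a generic helper, not code from the post):

```python
import math

def log_returns(levels):
    """Convert a series of index levels into daily log-returns."""
    return [math.log(b / a) for a, b in zip(levels, levels[1:])]
```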

We can create a simple linear regression model that relates log-returns in the VIX index to contemporaneous log-returns in the two correlation indices, as follows. The derived model accounts for just under 40% of the variation in VIX index returns, with each correlation index contributing approximately one half of the total VIX return.
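The regression itself is easy to reproduce. Here is a self-contained sketch of a two-regressor OLS fit in Python, solving the normal equations directly (the function and variable names are illustrative, not from the post):

```python
def ols(y, x1, x2):
    """Fit y = b0 + b1*x1 + b2*x2 by solving the normal equations X'X b = X'y,
    and return the coefficients together with the R-squared of the fit."""
    X = [[1.0, a, b] for a, b in zip(x1, x2)]
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * t for row, t in zip(X, y)) for i in range(3)]
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix
    M = [r[:] + [t] for r, t in zip(XtX, Xty)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    beta = [M[i][3] / M[i][i] for i in range(3)]
    fitted = [beta[0] + beta[1] * a + beta[2] * b for a, b in zip(x1, x2)]
    ybar = sum(y) / len(y)
    ss_res = sum((v - f) ** 2 for v, f in zip(y, fitted))
    ss_tot = sum((v - ybar) ** 2 for v in y)
    return beta, 1.0 - ss_res / ss_tot
```

Applied to the VIX and correlation index log-returns, the R-squared of such a fit corresponds to the "just under 40%" figure quoted above.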

Although the linear model is highly statistically significant, we see clear evidence of lack of fit in the model residuals, which indicates non-linearities in the relationship. So, next, we use a nearest-neighbor algorithm, a machine learning technique that allows us to model the non-linear components of the relationship. The residual plot from the nearest neighbor model clearly shows that it does a better job of capturing these nonlinearities, with a lower standard deviation in the model residuals compared to the linear regression model:
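A nearest-neighbor regressor is also straightforward to sketch from scratch. This is a generic k-NN illustration (not the actual model from the post), predicting a target as the average over the k closest training points:

```python
def knn_predict(train_x, train_y, query, k=3):
    """k-nearest-neighbour regression: average the targets of the k training
    points closest to the query in (squared) Euclidean distance."""
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_x[i], query)))
    return sum(train_y[i] for i in order[:k]) / k
```

Because the prediction is a local average, the fitted surface can bend with the data, which is why the residuals show less structure than those of the linear model.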

Another approach entails the use of copulas to model the inter-dependency between the volatility and correlation indices. For a fairly detailed exposition on copulas, see the following blog posts:

We begin by taking a smaller sample comprising around three years of daily returns in the indices. This minimizes the impact of any long-term nonstationarity in the processes and enables us to fit marginal distributions relatively easily. First, let’s look at the correlations in our sample data:

We next proceed to fit marginal distributions to the VIX and Correlation Index processes. It turns out that the VIX process is well represented by a Logistic distribution, while the two Correlation Index returns processes are better represented by a Student-t density. In all three cases there is little evidence of lack of fit, either in the body or the tails of the estimated probability density functions:

The final step is to fit a copula to model the joint density between the indices. To keep it simple I have chosen to carry out the analysis for the combination of the VIX index with only the first of the correlation indices, although in principle there is no reason why a copula could not be estimated for all three indices. The fitted model is a multinormal Gaussian copula with a correlation coefficient of 0.69. Of course, other copulas are feasible (Clayton, Gumbel, etc.), but the Gaussian model appears to provide an adequate fit to the empirical copula, with approximate symmetry in the left and right tails.
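A rank-based sketch of the Gaussian copula fit in Python: map each series to uniforms via its empirical CDF, convert the uniforms to normal scores, and estimate the copula's correlation parameter from the scores. The helper names are illustrative, and the rank step assumes no tied observations:

```python
from statistics import NormalDist

def normal_scores(xs):
    """Empirical probability-integral transform followed by the inverse
    normal CDF. Assumes no ties in xs."""
    n = len(xs)
    rank = {v: r for r, v in enumerate(sorted(xs), start=1)}
    return [NormalDist().inv_cdf(rank[v] / (n + 1)) for v in xs]

def gaussian_copula_rho(xs, ys):
    """Correlation parameter of a bivariate Gaussian copula fitted by rank."""
    zx, zy = normal_scores(xs), normal_scores(ys)
    mx, my = sum(zx) / len(zx), sum(zy) / len(zy)
    cov = sum((a - mx) * (b - my) for a, b in zip(zx, zy))
    vx = sum((a - mx) ** 2 for a in zx)
    vy = sum((b - my) ** 2 for b in zy)
    return cov / (vx * vy) ** 0.5
```

Fitting by rank in this way uses only the ordering of the observations, so the estimate is unaffected by the choice of marginal distributions.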

The post Correlation Copulas appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post Protected: Systematic Strategies Fund Aug 2017 appeared first on QUANTITATIVE RESEARCH AND TRADING.


Systematic Strategies is a hedge fund rather than an RIA, so we have no plans to offer the product to the public. However, we are currently holding exploratory discussions with Registered Investment Advisors about how the strategy might be made available to their clients.

For more background, see this post on Seeking Alpha: http://tiny.cc/ba3kny

The post A Tactical Equity Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.


The question was put to me whether the VIX and correlation indices might be cointegrated.

Let’s begin by looking at the pattern of correlation between the three indices:

If you recall from my previous post, we were able to fit a linear regression model with the Year 1 and Year 2 Correlation Indices that accounts for around 50% of the variation in the VIX index. While the model certainly has its shortcomings, as explained in that post, it will serve the purpose of demonstrating that the three series are cointegrated. The standard Dickey-Fuller test rejects the null hypothesis of a unit root in the residuals of the linear model, confirming that the three series are cointegrated of order 1.
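The residual-based test can be sketched as follows: regress the first difference of the cointegrating residuals on their lagged level and inspect the t-statistic on the slope. A strongly negative value rejects the unit root (in practice the statistic is compared with Engle-Granger critical values, which are more negative than the standard Dickey-Fuller ones). A minimal Python version, with simulated stationary residuals standing in for the real ones:

```python
import random

def df_tstat(resid):
    """Dickey-Fuller regression without drift: de_t = phi * e_{t-1} + u_t.
    Returns the t-statistic on phi; large negative values reject a unit root."""
    de = [resid[t] - resid[t - 1] for t in range(1, len(resid))]
    lag = resid[:-1]
    sxx = sum(a * a for a in lag)
    phi = sum(a * d for a, d in zip(lag, de)) / sxx
    u = [d - phi * a for d, a in zip(de, lag)]
    s2 = sum(v * v for v in u) / (len(u) - 1)
    return phi / (s2 / sxx) ** 0.5

# Simulated stationary "residuals": white noise has no unit root,
# so the statistic should be strongly negative.
random.seed(1)
e = [random.gauss(0.0, 1.0) for _ in range(200)]
stat = df_tstat(e)
```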

We can attempt to take the modeling a little further by fitting a VAR model. We begin by splitting the data into an in-sample period from Jan 2007 to Dec 2015 and an out-of-sample test period from Jan 2016 to Aug 2017. We then fit a vector autoregression model to the in-sample data:

When we examine how the model performs on the out-of-sample data, we find that it fails to pick up on much of the variation in the series – the forecasts are fairly flat and provide quite poor predictions of the trends in the three series over the period from 2016-2017:

The VIX and Correlation Indices are not only highly correlated, but also cointegrated, in the sense that a linear combination of the series is stationary.

One can fit a weakly stationary VAR process model to the three series, but the fit is quite poor and forecasts from the model don’t appear to add much value. It is conceivable that a more comprehensive model involving longer lags would improve forecasting performance.

The post Correlation Cointegration appeared first on QUANTITATIVE RESEARCH AND TRADING.


A follow-up article in ZeroHedge shortly afterwards pointed out that the VVIX/VIX ratio had reached record highs, prompting Goldman Sachs analyst Ian Wright to comment that this could signal the ending of the current low-volatility regime:

A LinkedIn reader pointed out that individual stock volatility was currently quite high and that when selling index volatility one is effectively selling stock correlations, which had now reached historically low levels. I concurred:

What’s driving the low vol regime is the exceptionally low level of cross-sectional correlations. And, as correlations tighten, index vol will rise. Worse, we are likely to see a feedback loop – higher vol leading to higher correlations, further accelerating the rise in index vol. So there is a second order, Gamma effect going on. We see this in the very high levels of the VVIX index, which shot up to 130 last week. The all-time high in the VVIX prior to Aug 2015 was around 120. The intra-day high in Aug 2015 reached 225. I’m guessing it will get back up there at some point, possibly this year.

As there appears to be some interest in the subject I decided to add a further blog post looking a little further into the relationship between volatility and correlation. To gain some additional insight we are going to make use of the CBOE implied correlation indices. The CBOE web site explains:

Using SPX options prices, together with the prices of options on the 50 largest stocks in the S&P 500 Index, the CBOE S&P 500 Implied Correlation Indexes offers insight into the relative cost of SPX options compared to the price of options on individual stocks that comprise the S&P 500.

- CBOE calculates and disseminates two indexes tied to two different maturities, usually one year and two years out. The index values are published every 15 seconds throughout the trading day.
- Both are measures of the expected average correlation of price returns of S&P 500 Index components, implied through SPX option prices and prices of single-stock options on the 50 largest components of the SPX.

One application is dispersion trading, which the CBOE site does a good job of summarizing:

The CBOE S&P 500 Implied Correlation Indexes may be used to provide trading signals for a strategy known as volatility dispersion (or correlation) trading. For example, a long volatility dispersion trade is characterized by selling at-the-money index option straddles and purchasing at-the-money straddles in options on index components. One interpretation of this strategy is that when implied correlation is high, index option premiums are rich relative to single-stock options. Therefore, it may be profitable to sell the rich index options and buy the relatively inexpensive equity options.

Again, the CBOE web site is worth quoting:

The CBOE S&P 500 Implied Correlation Indexes measure changes in the relative premium between index options and single-stock options. A single stock’s volatility level is driven by factors that are different from what drives the volatility of an Index (which is a basket of stocks). The implied volatility of a single-stock option simply reflects the market’s expectation of the future volatility of that stock’s price returns. Similarly, the implied volatility of an index option reflects the market’s expectation of the future volatility of that index’s price returns.

However, index volatility is driven by a combination of two factors: the individual volatilities of index components and the correlation of index component price returns.
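That decomposition can be made concrete. Under the stylized assumption of a single average pairwise correlation rho (a simplified version of the CBOE methodology), index variance is the sum of the weighted component variances plus a cross term scaled by rho, and rho can be backed out from the index and component volatilities. A Python sketch with hypothetical inputs:

```python
def index_vol(weights, vols, rho):
    """Index volatility from component vols, assuming a single average
    pairwise correlation rho across all pairs of components."""
    n = len(weights)
    var = sum(w * w * s * s for w, s in zip(weights, vols))
    var += 2.0 * rho * sum(weights[i] * vols[i] * weights[j] * vols[j]
                           for i in range(n) for j in range(i + 1, n))
    return var ** 0.5

def implied_correlation(index_sigma, weights, vols):
    """Back out the average correlation implied by index and component vols."""
    n = len(weights)
    diag = sum(w * w * s * s for w, s in zip(weights, vols))
    cross = 2.0 * sum(weights[i] * vols[i] * weights[j] * vols[j]
                      for i in range(n) for j in range(i + 1, n))
    return (index_sigma ** 2 - diag) / cross
```

With perfectly correlated, equally weighted components the index vol equals the component vol; as rho falls toward zero, diversification pulls the index vol below the average component vol, which is exactly the dispersion effect discussed above.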

Let’s dig into this analytically. We first download and plot the daily data for the VIX and Correlation Indices from the CBOE web site, from which it is evident that all three series are highly correlated:

An inspection reveals significant correlations between the VIX index and the two implied correlation indices, which are themselves highly correlated. The S&P 500 Index is, of course, negatively correlated with all three indices:

The response surface that describes the relationship between the VIX index and the two implied correlation indices is locally very irregular, but the slope of the surface is generally positive, as we would expect, since the level of VIX correlates positively with that of the two correlation indices.

The most straightforward approach is to use a simple linear regression specification to model the VIX level as a function of the two correlation indices. We create a VIX Model Surface object using this specification with the Mathematica Predict function:

The linear model does quite a good job of capturing the positive gradient of the response surface, and in fact has a considerable amount of explanatory power, accounting for a little under half the variance in the level of the VIX index:

However, there are limitations. To begin with, the assumption of independence between the explanatory variables, the correlation indices, clearly does not hold. In cases such as this, where the explanatory variables are multicollinear, we are unable to draw inferences about the explanatory power of individual regressors, even though the model as a whole may be highly statistically significant, as here.

Secondly, a linear regression model is not going to capture non-linearities in the volatility-correlation relationship that are evident in the surface plot. This is confirmed by a comparison plot, which shows that the regression model underestimates the VIX level for both low and high values of the index:

We can achieve a better outcome using a machine learning algorithm such as nearest neighbor, which is able to account for non-linearities in the response surface:

The comparison plot shows a much closer correspondence between actual and predicted values of the VIX index, even though there is evidence of some remaining heteroscedasticity in the model residuals:

A useful way to think about index volatility is as a two dimensional process, with time-series volatility measured on one dimension and dispersion (cross-sectional volatility, the inverse of correlation) measured on the second. The two factors are correlated and, as we have shown here, interact in a complicated, non-linear way.

The low levels of index volatility we have seen in recent months result, not from low levels of volatility in component stocks, but from the historically low levels of correlation (high levels of dispersion) in the underlying stock returns processes. As correlations begin to revert to historical averages, the impact will be felt in an upsurge in index volatility, compounded by the non-linear interaction between the two factors.

The post Modeling Volatility and Correlation appeared first on QUANTITATIVE RESEARCH AND TRADING.
