Finding Alpha in 2018

Given the current macroeconomic environment, where should investors focus their search for sources of alpha in the year ahead?  Ask enough economists or investment managers and you will find as many different opinions on the subject as you care to hear, no doubt many of them conflicting.  These are some thoughts on the subject from my perspective, as a quantitative hedge fund manager.

Global Market Performance in 2017

Let’s begin by reviewing some of the best and worst performing assets of 2017 (I am going to exclude cryptocurrencies from the ensuing discussion).  Broadly speaking, the story across the piste has been one of strong appreciation in emerging markets, both in equities and currencies, especially in several of the Eastern European economies.  In government bond markets Greece has been the star of the show, having stepped back from the brink of the economic abyss.  Overall, international diversification has been a key to investment success in 2017 and I believe that pattern will hold in 2018.

[Figure: Best and worst performing equity markets, 2017]

[Figure: Best and worst performing currencies, 2017]

[Figure: Best and worst performing government bond markets, 2017]

US Yield Curve and Its Implications

Another key development that investors need to take account of is the extraordinary degree of flattening of the yield curve in US fixed income over the course of 2017:

[Figure: US Treasury yield curve, 2017]

This process has now likely reached its end point and will begin to reverse as the Fed and other central banks in developed economies start raising rates.  In 2018 investors should seek to protect their fixed income portfolios by shortening duration, moving towards the front end of the curve.
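To see why shortening duration helps, compare the rate sensitivity of short- and long-dated bonds. The sketch below prices hypothetical par bonds and estimates modified duration numerically (illustrative figures only, not a recommendation on any specific instrument):

```python
def price(ytm, coupon, years, freq=2, face=100.0):
    """Present value of a plain-vanilla bond with periodic coupons."""
    n = int(years * freq)
    c = coupon * face / freq
    y = ytm / freq
    return sum(c / (1 + y) ** t for t in range(1, n + 1)) + face / (1 + y) ** n

def modified_duration(ytm, coupon, years, freq=2, dy=1e-4):
    """Numerical estimate of -(1/P) * dP/dy via central differences."""
    p0 = price(ytm, coupon, years, freq)
    return (price(ytm - dy, coupon, years, freq)
            - price(ytm + dy, coupon, years, freq)) / (2 * p0 * dy)

# A 2-year note is far less rate-sensitive than a 10-year note
for years in (2, 10):
    d = modified_duration(ytm=0.025, coupon=0.025, years=years)
    print(f"{years}-year par bond: modified duration {d:.2f}")
```

Roughly speaking, modified duration is the percentage price loss per 1% rise in yields, which is why moving to the front end of the curve cushions a portfolio against rate hikes.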

US Volatility and Equity Markets

A prominent feature of US markets during 2017 has been the continuing collapse of equity index volatility, specifically the VIX Index, which reached an all-time low of 9.14 in November and continues to languish at less than half the average level of the last decade:

[Figure: VIX Index]

Source: Wolfram Alpha

One consequence of the long-term decline in volatility has been to drastically reduce the profitability of derivatives markets, for both traders and market makers. Firms have struggled to keep up with the high cost of technology and the expense of being connected to the fragmented U.S. options market, which is spread across 15 exchanges. Earlier in 2017, Interactive Brokers Group Inc. sold its Timber Hill options market-making unit — a pioneer of electronic trading — to Two Sigma Securities.  Then, in November, Goldman Sachs announced it was shuttering its options market-making business on US exchanges, citing high costs, sluggish volume and low volatility.

The impact has likewise been felt by volatility strategies, which performed well in 2015 and 2016, only to see returns decline substantially in 2017.  Our own Systematic Volatility strategy, for example, finished the year up only 8.08%, having produced over 28% in the prior year.

One side-effect of low levels of index volatility has been a fall in stock return correlations and, conversely, a rise in the dispersion of stock returns.  It turns out that index volatility and stock correlation are themselves correlated and indeed cointegrated:

http://jonathankinlay.com/2017/08/correlation-cointegration/
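The mechanics are easy to see in a toy one-factor model of stock returns (an illustration only, not the cointegration analysis in the linked post): the lower the volatility of the common market factor, the smaller its share of each stock’s variance, and hence the lower the average pairwise correlation.

```python
import numpy as np

rng = np.random.default_rng(42)
N_STOCKS, N_DAYS, IDIO_VOL = 50, 2000, 0.01

def simulate(market_vol):
    """One-factor model: r_i = r_m + eps_i (unit betas for simplicity)."""
    r_m = rng.normal(0.0, market_vol, N_DAYS)
    eps = rng.normal(0.0, IDIO_VOL, (N_DAYS, N_STOCKS))
    return r_m[:, None] + eps

def avg_pairwise_corr(returns):
    """Mean of the off-diagonal entries of the correlation matrix."""
    c = np.corrcoef(returns.T)
    iu = np.triu_indices_from(c, k=1)
    return c[iu].mean()

for mv in (0.002, 0.02):  # quiet vs. volatile index regime
    r = simulate(mv)
    print(f"market vol {mv:.1%}: avg pairwise corr {avg_pairwise_corr(r):.2f}")
```

In the quiet regime, idiosyncratic moves dominate and correlations collapse; in the volatile regime the common factor dominates and correlations rise.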

 

In simple terms, stocks have a tendency to disperse more widely around an increasingly sluggish index.  The “kinetic energy” of markets has to disperse somewhere and if movements in the index are muted then relative movement in individual equity returns will become more accentuated.  This is an environment that ought to favor stock picking, and both equity long/short and market neutral strategies should outperform.  This certainly proved to be the case for our Quantitative Equity long/short strategy, which produced a net return of 17.79% in 2017, but with an annual volatility of under 5%:

[Figure: Quantitative Equity long/short strategy performance]

Looking ahead to 2018, I expect index volatility and equity correlations to rise as the yield curve begins to steepen, producing better opportunities for volatility strategies.  Returns from equity long/short and market neutral strategies may moderate a little as dispersion diminishes.

Futures Markets

Big increases in commodity prices and dispersion levels also led to improvements in the performance of many CTA strategies in 2017. In the low frequency space our Futures WealthBuilder strategy produced a net return of 13.02% in 2017, with a Sharpe Ratio above 3 (CAGR from inception in 2013 is now at 20.53%, with an average annual standard deviation of 6.36%).  The star performer, however, was our High Frequency Futures strategy.  Since launch in March 2017 this has produced a net return of 32.72%, with an annual standard deviation of 5.02%, on track to generate an annual Sharpe Ratio above 8:

[Figure: High Frequency Futures strategy performance]

Looking ahead, the World Bank has forecast an increase of around 4% in energy prices during 2018, with smaller increases in the price of agricultural products.  This should prove helpful to many CTA strategies, which are likely to see further improvements in performance over the course of the year.  Higher frequency strategies are more dependent on commodity market volatility, which seems more likely to rise than fall in the year ahead.

Conclusion

US fixed income investors are likely to want to shorten duration as the yield curve begins to steepen in 2018, bringing with it higher levels of index volatility that will favor equity high frequency and volatility strategies.  As in 2017, there is likely to be much benefit in diversifying across international equity and currency markets.  Strengthening energy prices are likely to sustain higher rates of return in futures strategies during the coming year.

Alpha Extraction and Trading Under Different Market Regimes

Market Noise and Alpha Signals

One of the perennial problems in designing trading systems is noise in the data, which can often drown out an alpha signal.  This in turn creates difficulties for a trading system that relies on reading the signal, resulting in greater uncertainty about the trading outcome (i.e. greater volatility in system performance).  According to academic research, a great deal of market noise is caused by trading itself.  There is apparently not much that can be done about that problem:  sure, you can trade after hours or overnight, but the benefit of lower signal contamination from noise traders is offset by the disadvantage of poor liquidity.  Hence the thrust of most of the analysis in this area lies in the direction of trying to amplify the signal, often using techniques borrowed from signal processing and related engineering disciplines.

There is, however, one trick that I wanted to share with readers that is worth considering.  It allows you to trade during normal market hours, when liquidity is greatest, but at the same time limits the impact of market noise.

Quantifying Market Noise

How do you measure market noise?  One simple approach is to start by measuring market volatility, making the not-unreasonable assumption that higher levels of volatility are associated with greater amounts of random movement (i.e. noise). Conversely, when markets are relatively calm, a greater proportion of the variation is caused by alpha factors.  During the latter periods, there is a greater information content in market data – the signal:noise ratio is larger and hence the alpha signal can be quantified and captured more accurately.

For a market like the E-Mini futures, the variation in daily volatility is considerable, as illustrated in the chart below.  The median daily volatility is 1.2%, while the maximum value (in 2008) was 14.7%!

[Figure 1: Daily volatility of E-Mini futures]

The extremely long tail of the distribution stands out clearly in the following histogram plot.

[Figure 2: Histogram of E-Mini daily volatility]
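For reference, a daily realized volatility series of this kind can be computed from intraday returns along the following lines (simulated bars stand in for the actual E-Mini data here, so the numbers are illustrative only):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated 3-minute closes as a stand-in for real E-Mini bar data
idx = pd.date_range("2014-01-02 09:30", periods=60_000, freq="3min")
close = pd.Series(2000 * np.exp(np.cumsum(rng.normal(0, 3e-4, len(idx)))),
                  index=idx)

log_ret = np.log(close).diff().dropna()

# Daily realized volatility: root of the sum of squared intraday returns
daily_vol = log_ret.groupby(log_ret.index.normalize()).apply(
    lambda r: np.sqrt((r ** 2).sum())
)
print(f"median {daily_vol.median():.2%}, max {daily_vol.max():.2%}")
```

With real market data, the resulting distribution displays the long right tail seen in the histogram above; the simulated series here is merely Gaussian and will not reproduce that feature.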

Obviously there are times when the noise in the process is going to drown out almost any alpha signal. What if we could avoid such periods?

Noise Reduction and Model Fitting

Let’s divide our data into two subsets of equal size, comprising days on which volatility was lower, or higher, than the median value.  Then let’s go ahead and use our alpha signal(s) to fit a trading model, using only data drawn from the lower volatility segment.
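A sketch of the median split, again using simulated 3-minute bars as a stand-in for the real data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Simulated 3-minute bars (stand-in for the real E-Mini series)
idx = pd.date_range("2014-01-02 09:30", periods=60_000, freq="3min")
bars = pd.DataFrame(
    {"close": 2000 * np.exp(np.cumsum(rng.normal(0, 3e-4, len(idx))))},
    index=idx,
)

ret = np.log(bars["close"]).diff()
daily_vol = ret.groupby(ret.index.normalize()).std()

# Days at or below the median daily volatility form the "quiet" subset
quiet_days = daily_vol.index[daily_vol <= daily_vol.median()]
quiet_bars = bars[bars.index.normalize().isin(quiet_days)]

# Fit the trading model on quiet_bars only; hold the rest out
print(len(quiet_bars), "of", len(bars), "bars fall on low-volatility days")
```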

This is actually a little tricky to achieve in practice:  most software packages for time series analysis or charting are geared towards data occurring at equally spaced points in time.  One useful trick here is to replace the actual date and time values of the observations with sequential date and time values, in order to fool the software into accepting the data, since there are no longer any gaps in the timestamps.  Of course, the dates on our time series plot or chart will then be incorrect, but that doesn’t matter as long as we keep a record of the correct timestamps.
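In pandas, for instance, the trick amounts to swapping the true timestamps for an artificial gap-free index while retaining a lookup back to the real times. A minimal sketch:

```python
import numpy as np
import pandas as pd

# Bars from two non-adjacent quiet days -- note the gap in between
t1 = pd.date_range("2015-03-02 09:30", periods=3, freq="3min")
t2 = pd.date_range("2015-03-05 09:30", periods=3, freq="3min")
bars = pd.DataFrame({"close": np.arange(6.0)}, index=t1.append(t2))

# Artificial, evenly spaced timestamps: the software now sees no gaps
fake_index = pd.date_range("2015-03-02 09:30", periods=len(bars), freq="3min")
true_times = pd.Series(bars.index, index=fake_index)  # fake -> real lookup
bars_seq = bars.set_axis(fake_index)

# Every consecutive pair of artificial timestamps is exactly 3 minutes apart
assert (bars_seq.index.to_series().diff().dropna()
        == pd.Timedelta("3min")).all()
```

Any dates reported by the charting or fitting software then need to be translated back through `true_times` before being used.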

An example of such a system is illustrated below.  The model was fitted to 3-minute bar data in E-Mini futures, but only on days with market volatility below the median value, in the period from 2004 to 2015.  The strategy equity curve is exceptionally smooth, as might be expected, and the performance characteristics of the strategy are highly attractive, with a 27% annual rate of return, a profit factor of 1.58 and a Sharpe Ratio approaching double digits.

[Figure 3: Equity curve of the strategy fitted on low-volatility days]

[Figure 4: Performance characteristics of the strategy on low-volatility days]

Dealing with the Noisy Trading Days

Let’s say you have developed a trading system that works well on quiet days.  What next?  There are a couple of ways to go:

(i) Deploy the model only on quiet trading days; stay out of the market on volatile days; or

(ii) Develop a separate trading system to handle volatile market conditions.

Which approach is better?  It is likely that the system you develop for trading quiet days will outperform any system you manage to develop for volatile market conditions.  So, arguably, you should simply trade your best model when volatility is muted and avoid trading at other times.  Any other solution may reduce the overall risk-adjusted return.  But that isn’t guaranteed to be the case – and, in fact, I will give an example of systems that, when combined, will in practice yield a higher information ratio than any of the component systems.

Deploying the Trading Systems

The astute reader is likely to have noticed that I have “cheated” by using forward information in the model development process.  In building a trading system based only on data drawn from low-volatility days, I have assumed that I can somehow know in advance whether the market is going to be volatile or not, on any given day.  Of course, I don’t know for sure whether the upcoming session is going to be volatile and hence whether to deploy my trading system, or stand aside.  So is this just a purely theoretical exercise?  No, it’s not, for the following reasons.

The first reason is that, unlike the underlying asset market, the market volatility process is, by comparison, highly predictable.  This is due to a phenomenon known as “long memory”, i.e. very slow decay in the serial autocorrelations of the volatility process.  What that means is that the history of the volatility process contains useful information about its likely future behavior.  [There are several posts on this topic in this blog – just search for “long memory”].  So, in principle, one can develop an effective system to forecast market volatility in advance and hence make an informed decision about whether or not to deploy a specific model.
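The persistence is easy to demonstrate. The sketch below simulates a GARCH(1,1)-type return series (a standard proxy for volatility clustering, not the long-memory models discussed in those posts) and compares the autocorrelation of raw versus absolute returns:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
omega, alpha, beta = 1e-6, 0.08, 0.90  # persistence alpha + beta = 0.98

# GARCH(1,1)-style recursion: variance clusters, raw returns do not
var = np.empty(n)
ret = np.empty(n)
var[0] = omega / (1 - alpha - beta)  # unconditional variance
for t in range(n):
    ret[t] = rng.normal(0.0, np.sqrt(var[t]))
    if t + 1 < n:
        var[t + 1] = omega + alpha * ret[t] ** 2 + beta * var[t]

def acf(x, lag):
    """Sample autocorrelation at the given lag."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 5, 20):
    print(f"lag {lag:2d}: returns {acf(ret, lag):+.3f}, "
          f"|returns| {acf(np.abs(ret), lag):+.3f}")
```

The raw returns are essentially uncorrelated at every lag, while the absolute returns (a volatility proxy) remain positively autocorrelated far out, and genuinely long-memory processes decay more slowly still. That predictability is what a volatility-forecasting meta-model exploits.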

But let’s say you are unpersuaded by this argument and take the view that market volatility is intrinsically unpredictable.  Does that make this approach impractical?  Not at all.  You have a couple of options:

You can test the model built for quiet days on all the market data, including volatile days.  It may perform acceptably well across both market regimes.

For example, here are the results of a backtest of the model described above on all the market data, including volatile and quiet periods, from 2004-2015.  While the performance characteristics are not quite as good, overall the strategy remains very attractive.

[Figure 5: Equity curve of the strategy backtested on all market data, 2004-2015]

[Figure 6: Performance characteristics of the strategy on all market data]

Another approach is to develop a second model for volatile days and deploy both the low- and high-volatility regime models simultaneously.  The trading systems will interact (if you allow them to) in a highly nonlinear and unpredictable way.  It might turn out badly – but on the other hand, it might not!  Here, for instance, is the result of running the low- and high-volatility models for the E-Mini futures in parallel.  The result is an improvement (relative to the low volatility model alone), not only in the annual rate of return (21% vs 17.8%), but also in the risk-adjusted performance, profit factor and average trade.

[Figure 7: Equity curve of the combined low- and high-volatility models]

[Figure 8: Performance characteristics of the combined models]
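The diversification effect of running the two books side by side can be sketched with hypothetical daily P&L streams (illustrative numbers only, not the actual strategy returns; the two streams are simulated as independent with equal volatility, which is the most favorable case):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
days = pd.date_range("2014-01-02", periods=2000, freq="B")

# Hypothetical daily P&L of the two regime models (stand-ins only)
pnl_quiet = pd.Series(rng.normal(0.0005, 0.005, len(days)), index=days)
pnl_volatile = pd.Series(rng.normal(0.0005, 0.005, len(days)), index=days)

def sharpe(p):
    """Annualized Sharpe ratio of a daily P&L series (zero risk-free rate)."""
    return p.mean() / p.std() * np.sqrt(252)

combined = 0.5 * (pnl_quiet + pnl_volatile)  # equal-weight the two books
for name, p in [("quiet-day model", pnl_quiet),
                ("volatile-day model", pnl_volatile),
                ("combined", combined)]:
    print(f"{name:20s} Sharpe {sharpe(p):.2f}")
```

In this toy setup the uplift is mechanical: imperfectly correlated streams diversify each other. In live trading the systems genuinely interact, which is precisely why the combined result can be better, or worse, than either component.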

 

Conclusion

Separating the data into multiple subsets representing different market regimes allows the system developer to amplify the signal:noise ratio, increasing the effectiveness of his alpha factors. Potentially, this allows important features of the underlying market dynamics to be captured in the model more easily, which can lead to improved trading performance.

Models developed for different market regimes can be tested across all market conditions and deployed on an everyday basis if shown to be sufficiently robust.  Alternatively, a meta-strategy can be developed to forecast the market regime and select the appropriate trading system accordingly.

Finally, it is possible to achieve acceptable, or even very good results, by deploying several different models simultaneously and allowing them to interact, as the market moves from regime to regime.