A High Frequency Market Making Algorithm

 

This algorithm builds on the research of Avellaneda and Stoikov in their 2008 paper “High-frequency Trading in a Limit Order Book” and extends the basic algorithm in several ways:

  1. The algorithm makes two sided markets in a specified list of equities, with model parameters set at levels appropriate for each product.
  2. The algorithm introduces an automatic mechanism for managing inventory, reducing the risk of adverse selection by changing the rate of inventory accumulation dynamically.
  3. The algorithm dynamically adjusts the range of the bid-ask spread as the trading session progresses, with the aim of minimizing inventory levels on market close.
  4. The extended algorithm makes use of estimates of recent market trends and adjusts the bid-offer spread to lean in the direction of the trend.
  5. A manual adjustment factor allows the market-maker to nudge the algorithm in the direction of reducing inventory.
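The quote placement at the heart of the underlying Avellaneda-Stoikov model can be sketched in a few lines. The sketch below is in Python rather than the Mathematica of the production implementation, and the parameter values shown are purely illustrative:

```python
import math

def as_quotes(mid, inventory, gamma, sigma, kappa, t_remaining):
    """Avellaneda-Stoikov reservation price and optimal bid/ask quotes.

    mid         : current mid price
    inventory   : signed inventory (positive = long)
    gamma       : risk-aversion coefficient
    sigma       : volatility of the mid price
    kappa       : order-book liquidity parameter
    t_remaining : time remaining in the trading session
    """
    # Reservation price: skewed away from the mid as inventory accumulates
    r = mid - inventory * gamma * sigma**2 * t_remaining
    # Optimal total spread, centered on the reservation price
    spread = gamma * sigma**2 * t_remaining + (2.0 / gamma) * math.log(1.0 + gamma / kappa)
    return r - spread / 2.0, r + spread / 2.0

# Example: a long inventory of 5 units pushes both quotes below the mid,
# encouraging a sell and discouraging a further buy
bid, ask = as_quotes(mid=100.0, inventory=5, gamma=0.1, sigma=2.0, kappa=1.5, t_remaining=0.5)
```

Note how the spread collapses as `t_remaining` goes to zero, which is the mechanism behind extension (3) above: the quoted range tightens into the close, driving inventory towards zero.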

The algorithm is implemented in Mathematica and can be compiled to create DLLs callable from a C++ or Python application.

The application makes use of the MATH-TWS library to connect to the Interactive Brokers TWS or Gateway platform via the C++ API. MATH-TWS is used to create orders, manage positions and track account balances & P&L.

 

Reflections on Careers in Quantitative Finance

CMU’s MSCF Program

Carnegie Mellon’s Steve Shreve is out with an interesting post on careers in quantitative finance, with his commentary on the changing landscape in quantitative research and the implications for financial education.

I taught at Carnegie Mellon in the late 1990’s, including in its excellent Master’s program in quantitative finance, which Steve co-founded with Sanjay Srivastava.  The program was revolutionary in many ways, was immediately successful, and was rapidly copied by rival graduate schools (I helped to spread the word a little, at Cambridge).

The core of the program has remained largely unchanged over the last 20 years, featuring Steve’s excellent foundation course in stochastic calculus;  but I am happy to see that the school has added many new and highly relevant topics to the second-year syllabus, including market microstructure, machine learning, algorithmic trading and statistical arbitrage.  This has broadened the program’s primary focus, originally financial engineering, to include subjects that are highly relevant to quantitative investment research and trading.

It was this combination of sound theoretical grounding with practitioner-oriented training that made the program so successful.  As I recall, every single graduate was successful in finding a job on Wall Street, often at salaries in excess of $200,000, a considerable sum in those days.  One of the key features of the program was that it combined theoretical concepts with practical training, using a simulated trading floor gifted by Thomson Reuters (a model later adopted by the ICMA Centre at the University of Reading in the UK).  This enabled us to test students’ understanding of what they had been taught, using market simulation models that relied upon key theoretical ideas covered in the program.  The constant reinforcement of the theoretical with the practical made for a much deeper learning experience for most students and greatly facilitated their transition to Wall Street.

Masters in High Frequency Finance

While CMU’s program has certainly evolved and remains highly relevant to the recruitment needs of Wall Street firms, I still believe there is an opportunity for a program focused exclusively on high frequency finance, as previously described in this post.  The MHFF program would be more computer science oriented, with less emphasis placed on financial engineering topics.  So, for instance, students would learn about trading hardware and infrastructure, the principles of efficient algorithm design, as well as HFT trading techniques such as order layering and priority management.  The program would also cover HFT strategies such as latency arbitrage, market making, and statistical arbitrage.  Students would learn both lower level (C++, Java) and higher level (Matlab, R) programming languages and there is  a good case for a mandatory machine code programming course also.  Other core courses might include stochastic calculus and market microstructure.

Who would run such a program?  The ideal school would have a reputation for excellence in both finance and computer science. CMU is an obvious candidate, as is MIT, but there are many other excellent possibilities.

Careers

I’ve been involved in quantitative finance since the beginning:  I recall programming one of the first 68000-based microcomputers in Assembler in the 1980s, which was ultimately used for an F/X system at a major UK bank. The ensuing rapid proliferation of quantitative techniques in finance has been fueled by the ubiquity of cheap computing power, facilitating the deployment of quantitative techniques that would previously have been impractical to implement due to their complexity.  A good example is the machine learning techniques that now pervade large swathes of the finance arena, from credit scoring to HFT trading.  When I first began working in that field in the early 2000’s it was necessary to assemble a fairly sizable cluster of CPUs to handle the computational load. These days you can access comparable levels of computational power on a single server and, if you need more, you can easily scale up via Azure or EC2.

It is this explosive growth in computing power that has driven the development of quantitative finance in both the financial engineering and quantitative investment disciplines. At the same time, the huge reduction in the cost of computing power has leveled the playing field and lowered barriers to entry.  What was once the exclusive preserve of the sell-side has now become readily available to many buy-side firms.  As a consequence, much of the growth in employment opportunities in quantitative finance over the last 20 years has been on the buy-side, with the arrival of quantitative hedge funds and proprietary trading firms, including my own, Systematic Strategies.  This trend has a long way to play out so that, when also taking into consideration the increasing restrictions that sell-side firms face in terms of their proprietary trading activity, I am inclined to believe that the buy-side will offer the best employment opportunities for quantitative financiers over the next decade.

It was often said that hedge fund managers are typically in their 30’s or 40’s when they make the move to the buy-side. That has changed in the last 15 years, again driven by the developments in technology.  These days you are more likely to find the critically important technical skills in younger candidates, in their late 20’s or early 30’s.  My advice to those looking for a career in quantitative finance, who are unable to find the right job opportunity, would be: do what every other young person in Silicon Valley is doing:  join a startup, or start one yourself.

 

Trading the Presidential Election

There is a great deal of market lore related to the US presidential elections.  It is generally held that elections are good for the market,  regardless of whether the incoming president is Democrat or Republican.   To examine this thesis, I gathered data on presidential elections since 1950, considering only the first term of each newly elected president.  My reason for considering first terms only was twofold:  firstly, it might be expected that a new president is likely to exert a greater influence during his initial term in office and secondly, the 2016 contest will likewise see the appointment of a new president (rather than the re-election of a current one).

Market Performance Post Presidential Elections

The table below shows the 11 presidential races considered, with sparklines summarizing the cumulative return in the S&P 500 Index in the 12 month period following the start of the presidential term of office.  The majority are indeed upward sloping, as is the overall average.

[Figure 1]

A more detailed picture emerges from the following chart.  It transpires that the generally positive “presidential effect” is due overwhelmingly to the stellar performance of the market during the first year of the Gerald Ford and Barack Obama presidencies.  In both cases presidential elections coincided with the market nadir following, respectively, the 1973 oil crisis and 2008 financial crisis, after which  the economy staged a strong recovery.

[Figure 2]

Democrat vs. Republican Presidencies

There is a marked difference in the average market performance during the first year of a Democratic presidency vs. a Republican presidency.  Doubtless, plausible explanations for this disparity are forthcoming from both political factions.  On the Republican side, it could be argued that Democratic presidents have benefitted from the benign policies of their (often) Republican  predecessors, while incoming Republican presidents have had to clean up the mess left to them by their Democratic predecessors.  Democrats would no doubt argue that the market, taking its customary forward view, tends to react favorably to the prospect of a more enlightened, liberal approach to the presidency (aka more government spending).


Market Performance Around the Start of Presidential Terms

I shall leave such political speculations to those interested in pursuing them and instead focus on matters  of a more apolitical nature.  Specifically, we will look at the average market returns during the twelve months leading up to the start of a new presidential term, compared to the average returns in the twelve months after the start of the term.  The results are as follows:

[Figure 3]

The twelve months leading up to the start of the presidential term are labelled -12, -11, …, -1, while the following twelve months are labelled 1, 2, … , 12.  The start of the term is designated as month zero, while months that fall outside the 24 month period around the start of a presidential term are labelled as month 13.

The key finding stands out clearly from the chart: namely, that market returns during the start month of a new presidential term are distinctly negative, averaging -3.3%, while returns in the first month after the start of the term are distinctly positive, averaging 2.81%.

Assuming that market returns are approximately Normally distributed, a standard t-test rejects the null hypothesis of no difference in the means of the month 0 and month 1 returns at the 2% significance level.  In other words, the “presidential effect” is both large and statistically significant.
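The calculation is easy to reproduce.  A sketch of the (Welch) two-sample t-statistic, using hypothetical month-0 and month-1 return samples in place of the actual data from the study:

```python
import math

def two_sample_t(x, y):
    """Welch two-sample t-statistic for a difference in means."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variance of x
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)   # sample variance of y
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Hypothetical returns for the 11 elections (illustration only, not the study data)
month0 = [-0.052, -0.031, -0.080, 0.011, -0.047, -0.025, -0.066, 0.004, -0.038, -0.019, -0.020]
month1 = [0.034, 0.051, 0.012, -0.008, 0.046, 0.028, 0.019, 0.055, 0.031, 0.007, 0.034]

# A large negative statistic rejects equality of means (compare to Student-t critical values)
t_stat = two_sample_t(month0, month1)
```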

Conclusion: Trading the Election

Given the array of candidates before the electorate this election season, I am strongly inclined to take the trade.  The market will certainly “feel the Bern” in the unlikely event that Bernie Sanders is elected president.  I can even make an argument for a month 1 recovery, when the market realizes that there are limits to how much economic damage even a Socialist president can do, given constitutional checks and balances, “pen and phone” notwithstanding.

Again, an incoming president Trump is likely to be greeted by a sharp market sell-off, based on jittery speculation about the Donald’s proclivity to start a trade war with China, or Mexico, or a ground war with Russia, Iran, or anyone else. Likewise, however, the market will fairly quickly come around to the realization that electioneering rhetoric is unlikely to provide much guidance as to what a president Trump is likely to do in practice.

A Hillary Clinton presidency is likely to be seen, ex-ante, as the most benign for the market, especially given the level of (financial) support she has received from Wall Street.  However, there’s a glitch:  Bernie is proving much tougher to shake off than she could ever have anticipated. In order to win over his supporters, she is going to have to move out of the center ground, towards the left.  Who knows what hostages to fortune a desperate Clinton is likely to have to offer the election gods in her bid to secure the White House?

In terms of the mechanics, while you could take the trade in ETFs or futures, this is one of those situations ideally suited to options, and I am inclined to suggest combining a front-month put spread with a back-month call spread.

 

Yes, You Can Time the Market. How it Works, And Why

One of the most commonly cited maxims is that market timing is impossible.  In fact, empirical evidence makes a compelling case that market timing is feasible and can yield substantial economic benefits.  What’s more, we even understand why it works.  For the typical portfolio investor, applying simple techniques to adjust their market exposure can prevent substantial losses during market downturns.

The Background From Empirical and Theoretical Research

For the last fifty years, since the work of Paul Samuelson, the prevailing view amongst economists has been that markets are (mostly) efficient and follow a random walk. Empirical evidence to the contrary was mostly regarded as anomalous and/or unimportant economically.  Over time, however, evidence has accumulated of persistent, exploitable market effects. The famous 1992 paper published by Fama and French, for example, identified important economic effects in stock returns due to size and value factors, while Carhart (1997) demonstrated the important incremental effect of momentum.  The combined four-factor Carhart model explains around 50% of the variation in stock returns, but leaves a large proportion that cannot be accounted for.

Other empirical studies have provided evidence that stock returns are predictable at various frequencies.  Important examples include work by Brock, Lakonishok and LeBaron (1992), Pesaran and Timmermann (1995) and Lo, Mamaysky and Wang (2000), the last of whom show, using a range of technical indicators popular among traders, that such indicators add value even at the individual stock level, over and above the performance of a stock index.  The research in these and other papers tends to be exceptional in terms of both quality and comprehensiveness, as one might expect from academics risking their reputations by taking on established theory.  The appendix of test results to the Pesaran and Timmermann study, for example, is so lengthy that it is available only in CD-ROM format.

A more recent example is the work of Paskalis Glabadanidis, in a 2012 paper entitled Market Timing with Moving Averages.  Glabadanidis examines a simple moving average strategy that, he finds, produces economically and statistically significant alphas of 10% to 15% per year, after transaction costs, which are largely insensitive to the four Carhart factors.

Glabadanidis reports evidence regarding the profitability of the MA strategy in seven international stock markets. The performance of the MA strategies also holds for more than 18,000 individual stocks. He finds that:

“The substantial market timing ability of the MA strategy appears to be the main driver of the abnormal returns.”

An Illustration of a Simple Market Timing Strategy in SPY

It is impossible to do justice to Glabadanidis’s research in a brief article and the interested reader is recommended to review the paper in full.  However, we can illustrate the essence of the idea using the SPY ETF as an example.   

A 24-period moving average of the monthly price series over the period from 1993 to 2016 is plotted in red in the chart below.

[Figure 1]

The moving average indicator is used to time the market using the following simple rule:

If Pt >= MAt, invest in SPY in month t+1

If Pt < MAt, invest in T-Bills in month t+1

In other words, we invest or remain invested in SPY when the monthly closing price of the ETF lies at or above the 24-month moving average, otherwise we switch our investment to T-Bills.

The process of switching our investment will naturally incur transaction costs and these are included in the net monthly returns.
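For concreteness, here is one way the rule might be implemented, as a sketch in Python/pandas.  The 10bp one-way switching cost and all names here are assumptions for illustration, not taken from the paper:

```python
import pandas as pd

def ma_timing(prices: pd.Series, tbill_monthly: pd.Series,
              window: int = 24, cost: float = 0.001) -> pd.Series:
    """Net monthly returns of the moving-average timing rule.

    prices        : monthly closing prices of the ETF
    tbill_monthly : monthly T-Bill returns
    window        : moving-average lookback in months
    cost          : assumed one-way transaction cost per switch (10bp here)
    """
    ma = prices.rolling(window).mean()
    # The signal observed at month t determines the position held in month t+1
    in_market = (prices >= ma).shift(1, fill_value=False)
    etf_ret = prices.pct_change()
    ret = etf_ret.where(in_market, tbill_monthly)
    # Deduct costs in the months where the position switches
    switches = in_market.astype(int).diff().abs().fillna(0)
    return ret - switches * cost
```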

The outcome of the strategy in terms of compound growth is compared to the original long-only SPY investment in the following chart.

[Figure 2]

The market timing strategy outperforms the long-only ETF, with a CAGR of 16.16% vs. 14.75% (net of transaction costs), largely due to its avoidance of the major market sell-offs in 2000-2003 and 2008-2009.

But the improvement isn’t limited to a 141bp improvement in annual compound returns.  The chart below compares the distributions of monthly returns in the SPY ETF and market timing strategy.

[Figure 3]

It is clear that, in addition to a higher average monthly return, the market timing strategy has lower dispersion in the distribution in returns.  This leads to a significantly higher information ratio for the strategy compared to the long-only ETF.  Nor is that all:  the market timing strategy has both higher skewness and kurtosis, both desirable features.

[Figure 4]

These results are entirely consistent with Glabadanidis’s research.  He finds that the performance of the market timing strategy is robust to different lags of the moving average and in subperiods, while investor sentiment, liquidity risks, business cycles, up and down markets, and the default spread cannot fully account for its performance. The strategy works just as well with randomly generated returns and bootstrapped returns as it does for the more than 18,000 stocks in the study.

A follow-up study by the author applying the same methodology to a universe of 20 REIT indices and 274 individual REITs reaches largely similar conclusions.

Why Market Timing Works

For many investors, empirical evidence – compelling though it may be – is not enough to make market timing a credible strategy, absent some kind of “fundamental” explanation of why it works.  Unusually, in the case of the simple moving average strategy, such an explanation is possible.

It was Cox, Ross and Rubinstein who in 1979 developed the binomial model as a numerical method for pricing options.  The methodology relies on the concept of option replication, in which one constructs a portfolio comprising holdings of the underlying stock and bonds that produces the same cash flows as the option at every point in time (the proportion of stock to hold is given by the option delta).  Since the replicating portfolio produces the same cash flows as the option, it must have the same value; and since one knows the price of the stock and bond at each point in time, one can therefore price the option.  For those interested in the detail, Wikipedia gives a detailed explanation of the technique.
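The mechanics are easy to demonstrate in a one-step binomial world.  A toy sketch, with made-up numbers:

```python
# One-step binomial replication of a European call (illustrative numbers)
s0, u, d = 100.0, 1.1, 0.9    # stock today, up and down multipliers
r = 0.01                      # one-period risk-free rate
k = 100.0                     # strike

cu = max(s0 * u - k, 0.0)     # option payoff in the up state
cd = max(s0 * d - k, 0.0)     # option payoff in the down state

# Solve for the replicating portfolio: delta shares of stock plus b in bonds,
# chosen so the portfolio pays cu in the up state and cd in the down state
delta = (cu - cd) / (s0 * (u - d))      # the option delta
b = (cd - delta * s0 * d) / (1.0 + r)   # bond holding (negative = borrowing)

# Absent arbitrage, the option price equals the cost of the replicating portfolio
c0 = delta * s0 + b
```

In a multi-step tree the same calculation is repeated at every node, with delta rebalanced as the stock moves, which is exactly the dynamic replication idea invoked below.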

We can apply the concept of option replication to construct something very close to the MA market timing strategy, as follows.  Consider what happens when the ETF falls below the moving average level.  In that case we convert the ETF portfolio to cash and use the proceeds to acquire T-Bills.  An equivalent outcome would be achieved by continuing to hold our long ETF position and acquiring a put option to hedge it.  The combination of a long ETF position and a 1-month put option with delta of -1 would provide the same riskless payoff as the market timing strategy, i.e. the return on 30-day T-Bills.  An option whose strike price is based on the average price of the underlying is known as an Arithmetic Asian option.  Hence when we apply the MA timing strategy we are effectively constructing a dynamic portfolio that replicates the payoff of an Arithmetic Asian protective put option struck at (just above) the moving average level.

Market Timing Alpha and The Cost of Hedging

None of this explanation is particularly contentious – the theory behind option replication through dynamic hedging is well understood – and it provides a largely complete understanding of the way the MA market timing strategy works, one that should satisfy those who are otherwise unpersuaded by arguments purely from empirical research.

There is one aspect of the foregoing description that remains a puzzle, however.  An option is a valuable financial instrument and the owner of a protective put of the kind described can expect to pay a price amounting to tens or perhaps hundreds of basis points.  Of course, in the market timing strategy we are not purchasing a put option per se, but creating one synthetically through dynamic replication.  The cost of creating this synthetic equivalent comprises the transaction costs incurred as we liquidate and re-assemble our portfolio from month to month, in the form of bid/ask spread and commissions.  According to efficient market theory, one should be indifferent as to whether one purchases the option at a fair market price or constructs it synthetically through replication – the cost should be equivalent in either case.  And yet in empirical tests the cost of the synthetic protective put falls far short of what one would expect to pay for an equivalent option instrument.  This is, in fact, the source of the alpha in the market timing strategy.

According to efficient market theory one might expect to pay something of the order of 140 basis points a year in transaction costs – the difference between the CAGR of the market timing strategy and the SPY ETF – in order to construct the protective put.  Yet, we find that no such costs are incurred.

Now, it might be argued that there is a hidden cost not revealed in our simple study of a market timing strategy applied to a single underlying ETF, which is the potential costs that could be incurred if the ETF should repeatedly cross and re-cross the level of the moving average, month after month.  In those circumstances the transaction costs would be much higher than indicated here.  The fact that, in a single example, such costs do not arise does not detract in any way from the potential for such a scenario to play out. Therefore, the argument goes, the actual costs from the strategy are likely to prove much higher over time, or when implemented for a large number of stocks.

All well and good, but this is precisely the scenario that Glabadanidis’s research addresses, by examining the outcomes, not only for tens of thousands of stocks, but also using a large number of scenarios generated from random and/or bootstrapped returns.  If the explanation offered did indeed account for the hidden costs of hedging, it would have been evident in the research findings.

Instead, Glabadanidis concludes:

“This switching strategy does not involve any heavy trading when implemented with break-even transaction costs, suggesting that it will be actionable even for small investors.”

Implications For Current Market Conditions

As at the time of writing, in mid-February 2016, the price of the SPY ETF remains just above the 24-month moving average level.  Consequently the market timing strategy implies one should continue to hold the market portfolio for the time being, although that could change very shortly, given recent market action.

Conclusion

The empirical evidence that market timing strategies produce significant alphas is difficult to challenge.  Furthermore, we have reached an understanding of why they work, from an application of widely accepted option replication theory. It appears that using a simple moving average to time market entries and exits is approximately equivalent to hedging a portfolio with a protective Arithmetic Asian put option.

What remains to be answered is why the cost of constructing put protection synthetically is so low.  At the current time, research indicates that, as a consequence, market timing strategies are able to generate alphas of 10% to 15% per annum.

References

  1. Brock, W., Lakonishok, J., LeBaron, B., 1992, “Simple Technical Trading Rules and the Stochastic Properties of Stock Returns,” Journal of Finance 47, pp. 1731-1764.
  2. Carhart, M. M., 1997, “On Persistence in Mutual Fund Performance,” Journal of Finance 52, pp. 57-82.
  3. Fama, E. F., French, K. R., 1992, “The Cross-Section of Expected Stock Returns,” Journal of Finance 47(2), pp. 427-465.
  4. Glabadanidis, P., 2012, “Market Timing with Moving Averages,” 25th Australasian Finance and Banking Conference.
  5. Glabadanidis, P., 2012, “The Market Timing Power of Moving Averages: Evidence from US REITs and REIT Indexes,” University of Adelaide Business School.
  6. Lo, A., Mamaysky, H., Wang, J., 2000, “Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation,” Journal of Finance 55, pp. 1705-1765.
  7. Pesaran, M.H., Timmermann, A.G., 1995, “Predictability of Stock Returns: Robustness and Economic Significance,” Journal of Finance 50(4).

Profit Margins – Are they Predicting a Crash?

Jeremy Grantham: A Bullish Bear

Is Jeremy Grantham, co-founder and CIO of GMO, bullish or bearish these days?  According to Myles Udland at Business Insider, he’s both.  He quotes Grantham:

“I think the global economy and the U.S. in particular will do better than the bears believe it will because they appear to underestimate the slow-burning but huge positive of much-reduced resource prices in the U.S. and the availability of capacity both in labor and machinery.”


Udland continues:

“On top of all this is the decline in profit margins, which Grantham has called the “most mean-reverting series in finance,” implying that the long period of elevated margins we’ve seen from American corporations is most certainly going to come to an end. And soon.”

[Figure: FRED graph]

Corporate Profit Margins as a Leading Indicator

The claim is an interesting one.  It certainly looks as if corporate profit margins are mean-reverting and, possibly, predictive of recessionary periods. And there is an economic argument why this should be so, articulated by Grantham as quoted in an earlier Business Insider article by Sam Ro:

“Profit margins are probably the most mean-reverting series in finance, and if profit margins do not mean-revert, then something has gone badly wrong with capitalism.

If high profits do not attract competition, there is something wrong with the system and it is not functioning properly.”

Thomson Research / Barclays Research’s take on the same theme echoes Grantham:

“The link between profit margins and recessions is strong,” Barclays’ Jonathan Glionna writes in a new note to clients. “We analyze the link between profit margins and recessions for the last seven business cycles, dating back to 1973. The results are not encouraging for the economy or the market. In every period except one, a 0.6% decline in margins in 12 months coincided with a recession.”

[Figure: Barclays profit-margin chart]

Buffett Weighs in

Even Warren Buffett gets in on the act (from 1999):

“In my opinion, you have to be wildly optimistic to believe that corporate profits as a percent of GDP can, for any sustained period, hold much above 6%.”


With the Illuminati chorusing as one on the perils of elevated rates of corporate profits, one would be foolish to take a contrarian view, perhaps.  And yet, that claim of Grantham’s (“probably the most mean-reverting series in finance”) poses a challenge worthy of some analysis.  Let’s take a look.

The Predictive Value of Corporate Profit Margins

First, let’s reproduce the St Louis Fed chart:

[Figure: Corporate Profit Margins]

A plot of the series autocorrelations strongly suggests that the series is not at all mean-reverting, but non-stationary, integrated order 1:

[Figure: Autocorrelations]

 

Next, we conduct an exhaustive evaluation of a wide range of time series models, including seasonal and non-seasonal ARIMA and GARCH:

[Figure: Model fit results]

The best fitting model (using the AIC criterion) is a simple ARIMA(0,1,0) model – a random walk, integrated order 1 – as anticipated.  The series is apparently difference-stationary, with no mean-reversion characteristics at all.  Diagnostic tests indicate no significant patterning in the model residuals:

[Figure: Residual Autocorrelations]

[Figure: Ljung-Box Test Probabilities]

Using the model to forecast a range of possible values of the Corporate Profit to GDP ratio over the next 8 quarters suggests a very wide range, from as low as 6% to as high as 13%!

[Figure: Forecast]

 

Conclusion

The opinion of investment celebrities like Grantham and Buffett to the contrary, there really isn’t any evidence in the data to support the suggestion that corporate profit margins are mean reverting, even though common-sense economics suggests they should be.

The best-available econometric model produces a very wide range of forecasts of corporate profit rates over the next two years, some even higher than they are today.

If a recession is just around the corner,  corporate profit margins aren’t going to call it for us.

Alpha Extraction and Trading Under Different Market Regimes

Market Noise and Alpha Signals

One of the perennial problems in designing trading systems is noise in the data, which can often drown out an alpha signal.  This in turn creates difficulties for a trading system that relies on reading the signal, resulting in greater uncertainty about the trading outcome (i.e. greater volatility in system performance).  According to academic research, a great deal of market noise is caused by trading itself.  There is apparently not much that can be done about that problem:  sure, you can trade after hours or overnight, but the benefit of lower signal contamination from noise traders is offset by the disadvantage of poor liquidity.  Hence the thrust of most of the analysis in this area lies in the direction of trying to amplify the signal, often using techniques borrowed from signal processing and related engineering disciplines.

There is, however, one trick that I wanted to share with readers that is worth considering.  It allows you to trade during normal market hours, when liquidity is greatest, but at the same time limits the impact of market noise.


Quantifying Market Noise

How do you measure market noise?  One simple approach is to start by measuring market volatility, making the not-unreasonable assumption that higher levels of volatility are associated with greater amounts of random movement (i.e. noise). Conversely, when markets are relatively calm, a greater proportion of the variation is caused by alpha factors.  During the latter periods, there is a greater information content in market data – the signal:noise ratio is larger and hence the alpha signal can be quantified and captured more accurately.

For a market like the E-Mini futures, the variation in daily volatility is considerable, as illustrated in the chart below.  The median daily volatility is 1.2%, while the maximum value (in 2008) was 14.7%!

[Figure 1]

The extremely long tail of the distribution stands out clearly in the following histogram plot.

[Figure 2]

Obviously there are times when the noise in the process is going to drown out almost any alpha signal. What if we could avoid such periods?

Noise Reduction and Model Fitting

Let’s divide our data into two subsets of equal size, comprising days on which volatility was lower, or higher, than the median value.  Then let’s go ahead and use our alpha signal(s) to fit a trading model, using only data drawn from the lower volatility segment.

This is actually a little tricky to achieve in practice:  most software packages for time series analysis or charting are geared towards data occurring at equally spaced points in time.  One useful trick here is to replace the actual date and time values of the observations with sequential date and time values, in order to fool the software into accepting the data, since there are no longer any gaps in the timestamps.  Of course, the dates on our time series plot or chart will be incorrect, but that doesn’t matter, as long as we keep a record of the correct timestamps.
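A sketch of the trick in pandas, using simulated 3-minute bars (all names and parameter values here are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated 3-minute bars spanning several days (stand-in data)
true_idx = pd.date_range("2015-01-05 09:30", periods=2000, freq="3min")
bars = pd.DataFrame({"close": 2000 + rng.normal(size=2000).cumsum()}, index=true_idx)

# Split days at the median daily volatility and keep only the quiet ones
daily_vol = bars["close"].pct_change().groupby(bars.index.date).std()
quiet_days = set(daily_vol[daily_vol <= daily_vol.median()].index)
quiet = bars[[d in quiet_days for d in bars.index.date]]

# Replace the true timestamps with an artificial gap-free sequence, so that
# charting/analysis software sees contiguous data...
fake_idx = pd.date_range("2015-01-05 09:30", periods=len(quiet), freq="3min")
resequenced = quiet.set_index(fake_idx)
# ...while keeping a mapping from each artificial timestamp back to the true one
timestamp_map = pd.Series(quiet.index, index=fake_idx)
```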

An example of such a system is illustrated below.  The model was fitted to 3-min bar data in E-Mini futures, but only on days with market volatility below the median value, over the period from 2004 to 2015.  The strategy equity curve is exceptionally smooth, as might be expected, and the performance characteristics of the strategy are highly attractive, with a 27% annual rate of return, a profit factor of 1.58 and a Sharpe Ratio approaching double digits.

Fig 3

Fig 4

Dealing with the Noisy Trading Days

Let’s say you have developed a trading system that works well on quiet days.  What next?  There are a couple of ways to go:

(i) Deploy the model only on quiet trading days; stay out of the market on volatile days; or

(ii) Develop a separate trading system to handle volatile market conditions.

Which approach is better?  It is likely that the system you develop for trading quiet days will outperform any system you manage to develop for volatile market conditions.  So, arguably, you should simply trade your best model when volatility is muted and avoid trading at other times.  Any other solution may reduce the overall risk-adjusted return.  But that isn’t guaranteed to be the case – and, in fact, I will give an example of systems that, when combined, will in practice yield a higher information ratio than any of the component systems.

Deploying the Trading Systems

The astute reader is likely to have noticed that I have “cheated” by using forward information in the model development process.  In building a trading system based only on data drawn from low-volatility days, I have assumed that I can somehow know in advance whether the market is going to be volatile or not, on any given day.  Of course, I don’t know for sure whether the upcoming session is going to be volatile and hence whether to deploy my trading system, or stand aside.  So is this just a purely theoretical exercise?  No, it’s not, for the following reasons.

The first reason is that, unlike the underlying asset market, the market volatility process is, by comparison, highly predictable.  This is due to a phenomenon known as “long memory”, i.e. very slow decay in the serial autocorrelations of the volatility process.  What that means is that the history of the volatility process contains useful information about its likely future behavior.  [There are several posts on this topic in this blog – just search for “long memory”].  So, in principle, one can develop an effective system to forecast market volatility in advance and hence make an informed decision about whether or not to deploy a specific model.
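The long-memory effect is easy to see even in simulated data.  In the GARCH(1,1)-type process sketched below (the parameters are illustrative, not estimated from E-Mini data), raw returns are serially uncorrelated, while the autocorrelation of absolute returns – a volatility proxy – decays only slowly with the lag:

```python
import numpy as np
import pandas as pd

# Simulate a GARCH(1,1)-type process with persistent volatility
rng = np.random.default_rng(42)
n = 5000
omega, alpha, beta = 5e-6, 0.10, 0.85    # illustrative parameters
h = np.empty(n)                          # conditional variance
r = np.empty(n)                          # returns
h[0] = omega / (1 - alpha - beta)        # start at the unconditional variance
r[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, n):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.standard_normal()

ret = pd.Series(r)
# Raw returns: autocorrelation near zero at every lag.
# Absolute returns: positive autocorrelation that fades slowly.
for lag in (1, 5, 20, 60):
    print(lag, round(ret.autocorr(lag), 3), round(ret.abs().autocorr(lag), 3))
```

It is exactly this persistence that makes a volatility forecast – and hence a deploy/stand-aside decision – feasible in a way that a directional forecast is not.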

But let’s say you are unpersuaded by this argument and take the view that market volatility is intrinsically unpredictable.  Does that make this approach impractical?  Not at all.  You have a couple of options:

You can test the model built for quiet days on all the market data, including volatile days.  It may perform acceptably well across both market regimes.

For example, here are the results of a backtest of the model described above on all the market data, including volatile and quiet periods, from 2004-2015.  While the performance characteristics are not quite as good, overall the strategy remains very attractive.

Fig 5

Fig 6


Another approach is to develop a second model for volatile days and deploy the low- and high-volatility regime models simultaneously.  The trading systems will interact (if you allow them to) in a highly nonlinear and unpredictable way.  It might turn out badly – but on the other hand, it might not!  Here, for instance, is the result of combining the low- and high-volatility models for the E-Mini futures and running them in parallel.  The result is an improvement (relative to the low-volatility model alone), not only in the annual rate of return (21% vs 17.8%), but also in the risk-adjusted performance, profit factor and average trade.
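As a sketch of the parallel-deployment idea, with simulated P&L streams standing in for the actual regime models (so the numbers produced are purely illustrative):

```python
import numpy as np
import pandas as pd

# Hypothetical daily returns for two regime-specific models
rng = np.random.default_rng(7)
days = pd.bdate_range("2004-01-01", periods=1000)
quiet_model = pd.Series(rng.normal(0.0008, 0.004, 1000), index=days)
volatile_model = pd.Series(rng.normal(0.0005, 0.010, 1000), index=days)

# Running both models in parallel: the combined daily return
# is simply the sum of the two streams
combined = quiet_model + volatile_model

def sharpe(x):
    # Annualized Sharpe ratio, zero risk-free rate assumed
    return np.sqrt(252) * x.mean() / x.std()

for name, s in [("quiet", quiet_model),
                ("volatile", volatile_model),
                ("combined", combined)]:
    print(name, round(sharpe(s), 2))
```

Because the two P&L streams are imperfectly correlated, the combined Sharpe ratio can exceed that of either component on its own – the diversification effect at work within a single market.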

Fig 7

Fig 8


Conclusion

Separating the data into multiple subsets representing different market regimes allows the system developer to improve the signal:noise ratio, increasing the effectiveness of their alpha factors. Potentially, this allows important features of the underlying market dynamics to be captured in the model more easily, which can lead to improved trading performance.

Models developed for different market regimes can be tested across all market conditions and deployed on an everyday basis if shown to be sufficiently robust.  Alternatively, a meta-strategy can be developed to forecast the market regime and select the appropriate trading system accordingly.

Finally, it is possible to achieve acceptable, or even very good, results by deploying several different models simultaneously and allowing them to interact as the market moves from regime to regime.