Trading With Indices

In this post I want to discuss ways to make use of signals from relevant market indices in your trading.  These signals can add value regardless of whether you trade algorithmically or manually.  The techniques described here are among the most widely applicable in the quantitative analyst’s arsenal.

Let’s motivate the discussion by looking at an example of a simple trading system trading the VIX on weekly bars.  Performance results for the system are summarized in the chart and table below.  The system outperforms the buy-and-hold return by a substantial margin, with a profit factor of over 3 and a win rate exceeding 82%.  What’s not to like?

VIX EC

VIX Performance

Well, for one thing, this isn’t really a trading system – because the VIX Index itself isn’t tradable. So the performance results are purely notional (and, if you didn’t already notice, no slippage or commission is included).

It is very easy to build high-performing trading systems in indices – because they are not traded products, index prices are often stale and tend to “follow” the price action in the equivalent traded market.

This particular system for the VIX Index took me less than ten minutes to develop and comprises only a few lines of code.  The system makes use of a simple RSI indicator to decide when to buy or sell the index.  I optimized the indicator parameters (separately for long and short) over the period to 2012, and tested it out-of-sample on the data from 2013-2016.

inputs:
	Price( Close ),
	Length( 14 ),
	OverSold( 30 ) ;

variables:
	RSIValue( 0 ) ;

RSIValue = RSI( Price, Length ) ;

if CurrentBar > 1 and RSIValue crosses over OverSold then
	Buy ( !( "RsiLE" ) ) next bar at market ;
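For readers who prefer to experiment outside TradeStation, the entry logic above can be sketched in Python.  This is a minimal re-implementation of Wilder’s RSI and the crossover rule – an approximation, not the exact EasyLanguage built-in:

```python
import numpy as np

def wilder_rsi(closes, length=14):
    """Wilder's RSI, using his 1/length exponential smoothing scheme."""
    closes = np.asarray(closes, dtype=float)
    deltas = np.diff(closes)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    avg_gain = gains[:length].mean()
    avg_loss = losses[:length].mean()
    rsi = np.full(len(closes), np.nan)
    for i in range(length, len(deltas)):
        avg_gain = (avg_gain * (length - 1) + gains[i]) / length
        avg_loss = (avg_loss * (length - 1) + losses[i]) / length
        rs = avg_gain / avg_loss if avg_loss > 0 else np.inf
        rsi[i + 1] = 100.0 - 100.0 / (1.0 + rs)
    return rsi

def rsi_long_entries(closes, length=14, oversold=30):
    """True on bars where RSI crosses up through the oversold level."""
    rsi = wilder_rsi(closes, length)
    crossed = (rsi[1:] > oversold) & (rsi[:-1] <= oversold)
    return np.concatenate([[False], crossed])
```

An entry flag on bar t corresponds to the EasyLanguage “Buy next bar at market” on bar t+1.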

The daily system I built for the S&P 500 Index is a little more sophisticated than the VIX model, and produces the following results.

SP500 EC

SP500 Perf

 

Using Index Trading Systems

We have seen that it’s trivially easy to build profitable trading systems for index products.  But since they can’t be traded, what’s the point?

The analyst might be tempted by the idea of using the signals generated by an index trading system to trade a corresponding market, such as VIX or eMini futures.  However, this approach is certain to fail.  Index prices lag the prices of equivalent futures products, where traders first monetize their view on the market.  So using an index strategy directly to trade a cash or futures market would be like trying to trade using prices delayed by a few seconds, or minutes – a recipe for losing money.


Nor is it likely that a trading system developed for an index product will generalize to a traded market.  What I mean by this is that if you were to take an index strategy, such as the VIX RSI strategy, transfer it to VIX futures and tweak the parameters in the hope of producing a profitable system, you are likely to be disappointed.  As I have shown, you can produce a profitable index trading system using the simplest and most antiquated trading concepts (such as the RSI indicator) that long ago ceased to offer any predictive value in actual traded markets.  Index markets are actually inefficient – the prices of index products often fail to fully reflect all relevant, available information in a timely way.  Such simple inefficiencies are easily revealed by indicators like moving averages.  Traded markets, by contrast, are highly efficient and, with the exception of HFT, it is going to take a great deal more than a simple moving average to provide insight into the few inefficiencies that do arise.

bullbear

Strategies in index products are best thought of, not as trading strategies, but rather as a means of providing broad guidance as to the general condition of the market and its likely direction over the longer term.  To take the VIX index strategy as an example, you can see that each “trade” spans several weeks.  So one might regard a “buy” signal from the VIX index system as an indication that volatility is expected to rise over the next month or two.  A trader might use that information to lean towards being long volatility, perhaps even avoiding any short volatility positions altogether for the next several weeks.  Following the model’s guidance in that way would certainly have helped many equity and volatility traders during the market sell-off in August 2015, for example:

 

Vix Example

The S&P 500 Index model is one I use to provide guidance as to market conditions for the current trading day.  It is a useful input to my thinking as to how aggressive I want my trading models to be during the upcoming session. If the index model suggests a positive tone to the market, with muted volatility, I might be inclined to take a more aggressive stance.  If the model starts trading to the short side, however, I am likely to want to be much more cautious.    Yesterday (May 16, 2016), for example, the index model took an early long trade, providing confirmation of the positive tenor to the market and encouraging me to trade volatility to the short side more aggressively.

 

SP500 Example

 

 

In general, I would tend to classify index trading systems as “decision support” tools that provide a means of shading opinion on the market, or perhaps providing a means of calibrating trading models to the anticipated market conditions. However, they can be used in a more direct way, short of actual trading.  For example, one of our volatility trading systems uses the trading signals from a trading system designed for the VVIX volatility-of-volatility index.  Another approach is to use the signals from an index trading system as an indicator of the market regime in a regime switching model.
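To make the regime idea concrete, the sketch below gates a live strategy’s bar-by-bar signals on the state of an index model: trades that conflict with the index regime are simply vetoed.  The +1/-1 signal encoding is my own assumption for illustration, not a description of our production systems:

```python
def gate_signals(live_signals, index_state):
    """Suppress live trades that conflict with the index model's regime.

    live_signals: list of +1 (long) / -1 (short) / 0 (flat) per bar
    index_state:  list of +1 / -1 per bar from the index trading system
    Returns the gated signal list: a live short is vetoed while the
    index model is long, and vice versa.
    """
    gated = []
    for sig, regime in zip(live_signals, index_state):
        if sig != 0 and sig != regime:
            gated.append(0)  # veto trades against the regime
        else:
            gated.append(sig)
    return gated
```

A softer variant would scale position size down, rather than vetoing outright, when signal and regime disagree.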

Designing Index Trading Models

Whereas profitability is typically the primary design criterion for an actual trading system, the purpose of an index trading system makes other criteria at least as important.

It should be obvious from these few illustrations that you want to design your index model to trade less frequently than the system you are intending to trade live: if you are swing-trading the eminis on daily bars, it doesn’t help to see 50 trades a day from your index system.  What you want is an indication as to whether the market action over the next several days is likely to be positive or negative.  This means that, typically, you will design your index system using bar frequencies at least as long as for your live system.

Another way to slow down the signals coming from your index trading system is to design it for very high accuracy – a win rate of 70%, or higher.  It is actually quite easy to do this:  I have systems that trade the eminis on daily bars that have win rates of over 90%.  The trick is simply that you have to be prepared to wait a long time for the trade to come good.  For a live system that can often be a problem – no-one likes to nurse an underwater position for days or weeks on end.  But for an index trading system it matters far less and, in fact, it helps: you want trading signals over longer horizons than the time intervals you are using in your live trading system.

Since the index system doesn’t have to trade live, it means of course that the usual trading costs and frictions do not apply.  The advantage here is that you can come up with concepts for trading systems that would be uneconomic in the real world, but which work perfectly well in the frictionless world of index trading.  The downside, however, is that this might lead you to develop index systems that trade far too frequently.  So, even though they should not apply, you might seek to introduce trading costs in order to penalize higher frequency trading systems and benefit systems that trade less frequently.
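One way to implement that penalty is to subtract a purely notional cost per trade when ranking candidate index systems.  The cost level below is an arbitrary assumption; the point is only to bias selection toward lower-frequency systems:

```python
def penalized_fitness(trade_pnls, notional_cost_per_trade=10.0):
    """Rank candidate index systems by net P&L after a synthetic cost.

    The cost is fictitious - index 'trades' cost nothing in reality -
    but it penalizes systems that trade more frequently.
    """
    gross = sum(trade_pnls)
    return gross - notional_cost_per_trade * len(trade_pnls)
```

Two systems with equal gross P&L then rank differently: the one with fewer trades scores higher.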

Designing index trading systems is an area in which genetic programming algorithms excel.  There are two main reasons for this.  Firstly, as I have previously discussed, simple technical indicators of the kind employed by GP modeling systems work well in index markets.  Secondly, and more importantly, you can use the GP system to tailor an index trading system to meet the precise criteria you have in mind, such as the % win rate, trading frequency, etc.

An outstanding product that I can highly recommend in this context is Mike Bryant’s Adaptrade Builder.  Builder is a superb piece of software whose power and ease of use reflects Mike’s engineering background and systems development expertise.


Adaptrade

 

 

Some Further Notes on Market Timing

Almost at the very moment I published a post featuring some interesting research by Glabadanidis (“Market Timing With Moving Averages” (2015), International Review of Finance, Volume 15, Number 3, Pages 387-425 – see Yes, You Can Time the Market. How it Works, And Why), several readers wrote to point out a recently published paper by Valeriy Zakamulin (dubbed the “Moving Average Research King” by Alpha Architect, the source for our fetching cover shot) debunking Glabadanidis’s findings in no uncertain terms:

We demonstrate that “too good to be true” reported performance of the moving average strategy is due to simulating the trading with look-ahead bias. We perform the simulations without look-ahead bias and report the true performance of the moving average strategy. We find that at best the performance of the moving average strategy is only marginally better than that of the corresponding buy-and-hold strategy.

So far, no response from Glabadanidis – from which one is tempted to conclude that Zakamulin is correct.

I can’t recall the last time a paper published in a leading academic journal turned out to be so fundamentally flawed.  That’s why papers are supposed to be peer reviewed.   But, I guess, it can happen. Still, it’s rather alarming to think that a respected journal could accept a piece of research as shoddy as Zakamulin claims it to be.

What Glabadanidis had done, according to Zakamulin, was to use the current month closing price to compute the moving average that was used to decide whether to exit the market (or remain invested) at the start of the same month.  An elementary error that introduces look-ahead bias that profoundly impacts the results.
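The difference is easy to see in code.  In the sketch below (synthetic monthly prices, my own variable names), the correct rule decides month t’s exposure using the moving average computed through the end of month t-1, while the biased variant includes month t’s own close; tellingly, the biased signal steps aside in the very month of a crash it could not have foreseen:

```python
import numpy as np

def ma_timing_signal(monthly_closes, window=24, lookahead=False):
    """1 = invested, 0 = in cash, decided at the start of each month.

    Correct version: compare last month's close to the MA of prices up
    to and including last month.  Biased version (lookahead=True): use
    the MA that already contains the current month's close.
    """
    p = np.asarray(monthly_closes, dtype=float)
    signal = np.zeros(len(p), dtype=int)
    for t in range(window, len(p)):
        end = t + 1 if lookahead else t  # the bias: include month t
        ma = p[end - window:end].mean()
        signal[t] = 1 if p[end - 1] >= ma else 0
    return signal
```

Run on a price series with a sudden one-month crash, the biased signal exits in the crash month itself – performance that no real-time trader could have achieved.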

Following this revelation I hastily checked my calculations for the SPY market timing strategy illustrated in my blog post and, to my relief, confirmed that I had avoided the look-ahead trap that Glabadanidis fell into.  As the reader can see from the following extract from the Excel spreadsheet I used for the calculations, the decision to assume the returns of the SPY ETF or T-Bills for the current month rests on the value of the 24-month MA computed using prices up to the end of the prior month.  In other words, my own findings are sound, even if Glabadanidis’s are not, as the reader can easily check for himself.


Excel Workbook

 

Nonetheless, despite my relief at having avoided Glabadanidis’s blunder, the apparent refutation of his findings comes as a disappointment.  And my own research on the SPY market timing strategy, while sound as far as it goes, cannot by itself rehabilitate the concept of market timing using moving averages.  The reason is given in the earlier post.  There is a hidden penalty involved in using the market timing strategy to synthetically replicate an Asian put option, namely the costs incurred in exiting and rebuilding the portfolio as the market declines below the moving average, or later overtakes it.  In a single instance, such as the case of SPY, it might easily transpire simply by random chance that the costs of replication are far lower than the fair value of the put.  But the whole point of Glabadanidis’s research was that the same was true, not only for a single ETF or stock, but for many thousands of them.  Absent that critical finding, the SPY case is no more than an interesting anomaly.

Finally, one reader pointed out that the effect of combining a put option with a stock (or ETF) long position was to create synthetically a call option in the stock (ETF).  He is quite correct.  The key point, however, is that when the stock trades down below its moving average, the value of the long synthetic call position and the market timing portfolio are equivalent.

 

 

The Information Content of the Pre- and Post-Market Trading Sessions

I apologize in advance for this rather “wonkish” post, which is aimed chiefly at the high frequency fraternity, or those at least who trade intra-day, in the equity markets.  Such minutiae are the lot of those engaged in high frequency trading.  I promise that my next post will be of more general interest.

Pre- and Post Market Sessions

The pre-market session in US equities runs from 8:00 AM ET, while the post-market session runs until 8:00 PM ET.  The question arises whether these sessions are worth trading, or at the very least, offer a source of data (quotes, trades) that might be relevant to trading the regular session, which of course runs from 9:30 AM to 4:00 PM ET.  Even if liquidity is thin and trades infrequent, and opportunities in the pre- and post-market very limited, it might be that we can improve our trading models by taking into account such information as these sessions do provide, even if we only ever plan to trade during regular trading hours.

It is somewhat challenging to discuss this in great detail, because HFT equity trading is very much a core competency of my firm, Systematic Strategies.  However, I hope to offer some ideas, at least, that some readers may find useful.


 

A Tale of Two Pharmaceutical Stocks

In what follows I am going to make use of two examples from the pharmaceutical industry: Alexion Pharmaceuticals, Inc. (ALXN), which has a market cap of $35Bn and trades around 800,000 shares daily, and Pfizer Inc. (PFE), which has a market cap of over $200Bn and trades close to 50M shares a day.

Let’s start by looking at a system trading ALXN during regular market hours.  The system isn’t high frequency, but trades around 1-2 times a day, on average.  The strategy equity curve from 2015 to April 2016 is not at all impressive.

 

ALXN Regular

ALXN – Regular Session Only

 

But look at the equity curve for the same strategy when we allow it to run on the pre- and post-market sessions, in addition to regular trading hours.  Clearly the change in the trading hours utilized by the strategy has made a huge improvement in the total gain and risk-adjusted returns.

 

ALEXN with pre-market

ALXN – with Pre- and Post-Market Sessions

 

The PFE system trades much more frequently, around 4 times a day, but the story is somewhat similar in terms of how including the pre- and post-market sessions appears to improve its performance.

PFE Regular

PFE – Regular Session Only

PFE with premarket

PFE – with Pre- and Post-Market Sessions

 

Improving Trading Performance

In both cases, clearly, the trading performance of the strategies has improved significantly with the inclusion of the out-of-hours sessions.  In the case of ALXN, we see a modest increase of around 10% in the total number of trades, but in the case of PFE the increase in trading activity is much more marked – around 30%, or more.

The first important question to ask is when these additional trades are occurring.  Assuming that most of them take place during the pre- or post-market, our concern might be whether there is likely to be sufficient liquidity to facilitate trades of the frequency and size we wish to execute.  Of various possible hypotheses, some negative, others positive, we might consider the following:

(a) Bad ticks in the market data feed during out-of-hours sessions give rise to apparently highly profitable “phantom” trades

(b) The market data is valid, but the trades are done in such low volume as to be insignificant for practical purposes (i.e. trades were done for a few hundred lots and additional liquidity is unlikely to be available)

(c) Out-of-hours sessions enable the system to improve profitability by entering or exiting positions in a more timely manner than by trading the regular session alone

(d) Out-of-hours market data improves the accuracy of model forecasts, facilitating a larger number of trades, and/or more profitable trades, during regular market hours

An analysis of the trading activity for the two systems provides important insight as to which of the possible explanations might be correct.
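A natural first step in that analysis is simply to bucket each trade, and its P&L, by the session in which it executed.  A minimal sketch in Python, using the session boundaries quoted earlier (where exactly one draws the cutoffs is my assumption):

```python
from datetime import time

def session_of(trade_time):
    """Classify a trade timestamp (ET) into pre/regular/post market."""
    if time(8, 0) <= trade_time < time(9, 30):
        return "pre"
    if time(9, 30) <= trade_time < time(16, 0):
        return "regular"
    if time(16, 0) <= trade_time < time(20, 0):
        return "post"
    return "closed"

def profit_share_by_session(trades):
    """trades: list of (trade_time, pnl).  Returns P&L share per session."""
    totals = {}
    for t, pnl in trades:
        totals[session_of(t)] = totals.get(session_of(t), 0.0) + pnl
    grand = sum(totals.values())
    return {k: v / grand for k, v in totals.items()} if grand else totals
```

Comparing the trade-count share with the profit share per session immediately highlights the kind of imbalance discussed below.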


ALXN Analysis

(click to enlarge)

Dealing first with ALXN, we see that, indeed, an additional 11% of trades are entered or exited out-of-hours.  However, these additional trades account for somewhere between 17% (on exit) and 20% (on entry) of the total profits.  Furthermore, the size of the average entry trade during the post-market session and of the average exit trade in the pre-market session is more than double that of the average trade entered or exited during regular market hours.  That raises concerns that some of the apparent increase in profits may be due to bad ticks at prices away from the market, allowing the system to enter or exit trades at unrealistically low or high prices.  Even if many of the trades are good, we will have concerns about the scalability of the strategy in out-of-hours trading, given the relatively poor liquidity in the stock.  On the other hand, at least some of the uplift in profits arises from new trades occurring during the regular session.  This suggests that, even if we are unable to execute many of the trading opportunities seen during the pre- or post-market, the trades from those sessions provide useful additional data points for our model, enabling it to increase the number and/or profitability of trades in the regular session.

Next we turn to PFE.  We can see straight away that, while the proportion of trades occurring during out-of-hours sessions is around 23%, those trades now account for over 50% of the total profits.  Furthermore, the average PL for trades executed on entry post-market, and on exit pre-market, is more than 4x the average for trades entered or exited during normal market hours.  Despite the much better liquidity in PFE compared to ALXN, this is a huge concern – we might expect to see significant discrepancies occurring between theoretical and actual performance of the strategy, due to the very high dependency on out-of-hours trading.

PFE Analysis

(click to enlarge)

As we dig further into the analysis, we do indeed find evidence that bad data ticks play a disproportionate role.  For example, this trade in PFE, which apparently occurred at around 16:10 on 4/6, was almost certainly a phantom trade resulting from a bad data point.  It turns out that, for whatever reason, such bad ticks are a common occurrence in the stock and account for a large proportion of the apparent profitability of out-of-hours trading in PFE.

 

PFE trade

 

CONCLUSION

We are, of course, only skimming the surface of the analysis that is typically carried out.  One would want to dig more deeply into ways in which the market data feed could be cleaned up and bad data ticks filtered out so as to generate fewer phantom trades.  One would also want to look at liquidity across the various venues where the stocks trade, including dark pools, in order to appraise the scalability of the strategies.

For now, the main message that I am seeking to communicate is that it is often well worthwhile considering trading in the pre- and post-market sessions, not only with a view to generating additional, profitable trading opportunities, but also to gather additional data points that can enhance trading profitability during regular market hours.

A High Frequency Scalping Strategy on Collective2

Scalping vs. Market Making

A market-making strategy is one in which the system continually quotes on the bid and offer and looks to make money from the bid-offer spread (and also, in the case of equities, rebates).  During a typical trading day, inventories will build up on the long or short side of the book as the market trades up and down.  There is no intent to take a market view as such, but most sophisticated market making strategies will use microstructure models to help decide whether to “lean” on the bid or offer at any given moment. Market makers may also shade their quotes to reduce the buildup of inventory, or even pull quotes altogether if they suspect that informed traders are trading against them (a situation referred to as “toxic flow”).  They can cover short positions through the repo desk and use derivatives to hedge out the risk of an accumulated inventory position.

marketmaking

A scalping strategy shares some of the characteristics of a market making strategy:  it will typically be mean reverting, seeking to enter passively on the bid or offer, and the average PL per trade is often in the region of a single tick.  But where a scalping strategy differs from market making is that it does take a view as to when to get long or short the market, although that view may change many times over the course of a trading session.  Consequently, a scalping strategy will only ever operate on one side of the market at a time, working the bid or offer; and it will typically never build inventory, since it will usually reverse the position, later selling at a profit (one hopes) the inventory it previously purchased at a lower price.

In terms of performance characteristics, a market making strategy will often have a double-digit Sharpe Ratio, which means that it may go for many days, weeks, or months, without taking a loss.  Scalping is inherently riskier, since it is taking directional bets, albeit over short time horizons.  With a Sharpe Ratio in the region of 3 to 5, a scalping strategy will often experience losing days and even losing months.

So why prefer scalping to market making?  It’s really a question of capability.  Competitive advantage in scalping derives from the successful exploitation of identified sources of alpha, whereas  market making depends primarily on speed and execution capability. Market making requires HFT infrastructure with latency measured in microseconds, the ability to layer orders up and down the book and manage order priority.  Scalping algos are generally much less demanding in terms of trading platform requirements: depending on the specifics of the system, they can be implemented successfully on many third party networks.

Developing HFT Futures Strategies

Some time ago my firm Systematic Strategies began research and development on a number of HFT strategies in futures markets.  Our primary focus has always been HFT equity strategies, so this was something of a departure for us, one that entailed significant technological obstacles (more on this in due course).  Amongst the strategies we developed were several very profitable scalping algorithms in fixed income futures.  The majority trade at high frequency, with short holding periods measured in seconds or minutes, trading tens or even hundreds of times a day.

xtrader

The next challenge we faced was what to do with our research product.  As a proprietary trading firm our first instinct was to trade the strategies ourselves; but the original intent had been to develop strategies that could provide the basis of a hedge fund or CTA offering.  Many HFT strategies are unsuitable for that purpose, since the technical requirements exceed the capabilities of the great majority of standard trading platforms typically used by managed account investors.  Besides, HFT strategies typically offer too limited capacity to be interesting to larger, institutional investors.

In the end we arrived at a compromise solution, keeping the highest frequency strategies in-house, while offering the lower frequency strategies to outside investors. This enabled us to keep the limited capacity of the highest frequency strategies for our own trading, while offering investors significant capacity in strategies that trade at lower frequencies, but still with very high performance characteristics.

HFT Bond Scalping

A typical example is the following scalping strategy in US Bond Futures.  The strategy combines two of the lower frequency algorithms we developed for bond futures that scalp around 10 times per session.  The strategy attempts to take around 8 ticks out of the market on each trade and averages around 1 tick per trade.   With a Sharpe Ratio of over 3, the strategy has produced net profits of approximately $50,000 per contract per year, since 2008.    A pleasing characteristic of this and other scalping strategies is their consistency:  There have been only 10 losing months since January 2008, the last being a loss of $7,100 in Dec 2015 (the prior loss being $472 in July 2013!)
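For readers who want to check a headline number like this for themselves, an annualized Sharpe Ratio can be computed from a monthly P&L series in the usual way, scaling by the square root of 12.  The monthly figures below are made up for illustration, not the strategy’s actual results:

```python
import math
import statistics

def annualized_sharpe(monthly_pnl, risk_free_monthly=0.0):
    """Annualized Sharpe Ratio from a monthly P&L (or return) series."""
    excess = [x - risk_free_monthly for x in monthly_pnl]
    mu = statistics.mean(excess)
    sigma = statistics.stdev(excess)
    return (mu / sigma) * math.sqrt(12)
```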

Annual P&L

Fig2

Strategy Performance

fig4

Fig3

 

Offering The Strategy to Investors on Collective2

The next challenge for us to solve was how best to introduce the program to potential investors.  Systematic Strategies is not a CTA and our investors are typically interested in equity strategies.  It takes a great deal of hard work to persuade investors that we are able to transfer our expertise in equity markets to the very different world of futures trading.  While those efforts are continuing with my colleagues in Chicago, I decided to conduct an experiment:  what if we were to offer a scalping strategy through an online service like Collective2?  For those who are unfamiliar, Collective2 is an automated trading-system platform that allows the tracking, verification, and auto-trading of multiple systems.  The platform keeps track of each system’s profit and loss, margin requirements, and performance statistics.  It then allows investors to follow a system in live trading, entering the system’s trading signals either manually or automatically.

Offering a scalping strategy on a platform like this certainly creates visibility (and a credible track record) with investors; but it also poses new challenges.  For example, the platform assumes trading costs of around $14 per round turn, which is at least 2x more expensive than most retail platforms and perhaps 3x-5x more expensive than the costs an HFT firm might pay.  For most scalping strategies that are designed to take a tick out of the market, such high fees would eviscerate the returns.  This motivated our choice of US Bond Futures, since the tick size and average trade are sufficiently large to overcome even this level of trading friction.  After a couple of false starts, during which we played around with the algorithms and boosted strategy profitability with a couple of low frequency trades, the system is now happily humming along and demonstrating the kind of performance it should (see below).
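The arithmetic behind the choice of US Bond futures is easy to verify: the ZB tick is 1/32 of a point, worth $31.25 per contract, so even an average profit of one tick per round turn survives a $14 fee, whereas the same fee would sink a one-tick E-mini S&P scalper (tick value $12.50).  A quick sketch:

```python
def net_per_round_turn(avg_ticks, tick_value, cost_per_round_turn=14.0):
    """Expected net P&L per round turn after trading costs."""
    return avg_ticks * tick_value - cost_per_round_turn

# US Bond futures: tick = 1/32 point = $31.25; E-mini S&P 500: tick = $12.50
zb_net = net_per_round_turn(1.0, 31.25)   # bond scalper clears the fee
es_net = net_per_round_turn(1.0, 12.50)   # one-tick E-mini scalper does not
```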

For those who are interested in following the strategy’s performance, the link on collective2 is here.

 

Collective2Perf

trades

Disclaimer

About the results you see on this Web site

Past results are not necessarily indicative of future results.

These results are based on simulated or hypothetical performance results that have certain inherent limitations. Unlike the results shown in an actual performance record, these results do not represent actual trading. Also, because these trades have not actually been executed, these results may have under- or over-compensated for the impact, if any, of certain market factors, such as lack of liquidity. Simulated or hypothetical trading programs in general are also subject to the fact that they are designed with the benefit of hindsight. No representation is being made that any account will or is likely to achieve profits or losses similar to these being shown.

In addition, hypothetical trading does not involve financial risk, and no hypothetical trading record can completely account for the impact of financial risk in actual trading. For example, the ability to withstand losses or to adhere to a particular trading program in spite of trading losses are material points which can also adversely affect actual trading results. There are numerous other factors related to the markets in general or to the implementation of any specific trading program, which cannot be fully accounted for in the preparation of hypothetical performance results and all of which can adversely affect actual trading results.

Material assumptions and methods used when calculating results

The following are material assumptions used when calculating any hypothetical monthly results that appear on our web site.

  • Profits are reinvested. We assume profits (when there are profits) are reinvested in the trading strategy.
  • Starting investment size. For any trading strategy on our site, hypothetical results are based on the assumption that you invested the starting amount shown on the strategy’s performance chart. In some cases, nominal dollar amounts on the equity chart have been re-scaled downward to make current go-forward trading sizes more manageable. In these cases, it may not have been possible to trade the strategy historically at the equity levels shown on the chart, and a higher minimum capital was required in the past.
  • All fees are included. When calculating cumulative returns, we try to estimate and include all the fees a typical trader incurs when AutoTrading using AutoTrade technology. This includes the subscription cost of the strategy, plus any per-trade AutoTrade fees, plus estimated broker commissions if any.
  • “Max Drawdown” Calculation Method. We calculate the Max Drawdown statistic as follows. Our computer software looks at the equity chart of the system in question and finds the largest percentage amount that the equity chart ever declines from a local “peak” to a subsequent point in time (thus this is formally called “Maximum Peak to Valley Drawdown.”) While this is useful information when evaluating trading systems, you should keep in mind that past performance does not guarantee future results. Therefore, future drawdowns may be larger than the historical maximum drawdowns you see here.

Trading is risky

There is a substantial risk of loss in futures and forex trading. Online trading of stocks and options is extremely risky. Assume you will lose money. Don’t trade with money you cannot afford to lose.

High Frequency Trading: Equities vs. Futures

A talented young system developer I know recently reached out to me with an interesting-looking equity curve for a high frequency strategy he had designed in E-mini futures:

Fig1

Pretty obviously, he had been making creative use of the “money management” techniques so beloved by futures systems designers.  I invited him to consider how it would feel to be trading a 1,000-lot E-mini position when the market took a 20 point dive.  A $100,000 intra-day drawdown might make the strategy look a little less appealing.  On the other hand, if you had already made millions of dollars in the strategy, you might no longer care so much.


A more important criticism of money management techniques is that they are typically highly path-dependent:  if you had started the strategy slightly closer to one of the drawdown periods that are almost unnoticeable on the chart, it could have had catastrophic consequences for your trading account.  The only way to properly evaluate this, I advised, was to backtest the strategy over many hundreds of thousands of test-runs using Monte Carlo simulation.  That would reveal all too clearly that the risk of ruin was far larger than might appear from a single backtest.
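The Monte Carlo test I had in mind is simple to sketch: resample the strategy’s trade-by-trade P&L in random order many times and measure how often equity breaches a ruin threshold.  The trade distribution and thresholds below are illustrative assumptions, not the system’s actual numbers:

```python
import random

def max_drawdown(equity):
    """Largest peak-to-trough decline along an equity path."""
    peak, mdd = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        mdd = max(mdd, peak - x)
    return mdd

def monte_carlo_ruin(trade_pnls, start_equity=100_000, ruin_level=50_000,
                     n_runs=10_000, seed=42):
    """Estimate P(ruin) by resampling the order of historical trades."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_runs):
        path = trade_pnls[:]
        rng.shuffle(path)          # one alternative ordering of history
        equity = start_equity
        for pnl in path:
            equity += pnl
            if equity <= ruin_level:
                ruined += 1
                break
    return ruined / n_runs
```

A strategy that looks benign on a single backtest can show an uncomfortably high ruin probability once the trade order is randomized, which is precisely the point about path-dependence.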

Next, I asked him whether the strategy was entering and exiting passively, by posting bids and offers, or aggressively, by crossing the spread to sell at the bid and buy at the offer.  I had a pretty good idea what his answer would be, given the volume of trades in the strategy and, sure enough, he confirmed that the strategy was using passive entries and exits.  Leaving to one side the challenge of executing a trade for 1,000 contracts in this way, I instead asked him to show me the equity curve for a single contract in the underlying strategy, without the money-management enhancement.  It was still very impressive.

Fig2

 

The Critical Fill Assumptions For Passive Strategies

But there is an underlying assumption built into these results, one that I have written about in previous posts: the fill rate.  Typically in a retail trading platform like Tradestation the assumption is made that your orders will be filled if a trade occurs at the limit price at which the system is attempting to execute.  This default assumption of a 100% fill rate is highly unrealistic.  The system’s orders have to compete for priority in the limit order book with the orders of many thousands of other traders, including HFT firms who are likely to beat you to the punch every time.  As a consequence, the actual fill rate is likely to be much lower: 10% to 20%, if you are lucky.  And many of those fills will be “toxic”:  buy orders will be the last to be filled just before the market  moves lower and sell orders will be the last to get filled just as the market moves higher. As a result, the actual performance of the strategy will be a very long way from the pretty picture shown in the chart of the hypothetical equity curve.

One way to get a handle on the problem is to make a much more conservative assumption, that your limit orders will only get filled when the market moves through them.  This can easily be achieved in a product like Tradestation by selecting the appropriate backtest option:

fig3

 

The strategy performance results often look very different when this much more conservative fill assumption is applied.  The outcome for this system was not at all unusual:

Fig4

 

Of course, the more conservative assumption applied here is also unrealistic:  many of the trading system’s sell orders would be filled at the limit price, even if the market failed to move higher (or lower in the case of a buy order).  Furthermore, even if they were not filled during the bar-interval in which they were issued, many limit orders posted by the system would be filled in subsequent bars.  But the reality is likely to be much closer to the outcome assuming a conservative fill-assumption than an optimistic one.    Put another way:  if the strategy demonstrates good performance under both pessimistic and optimistic fill assumptions there is a reasonable chance that it will perform well in practice, other considerations aside.
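For concreteness, the two fill rules can be compared bar by bar. The Python sketch below, using made-up bar data, counts how often a resting limit buy order would be marked as filled under each assumption:

```python
import numpy as np

def limit_buy_fill_counts(bar_lows, limit_price):
    """Count bars on which a resting limit buy is deemed filled under
    (a) the optimistic rule: a trade occurs at the limit price, and
    (b) the conservative rule: the market trades through the limit price."""
    lows = np.asarray(bar_lows, dtype=float)
    optimistic = int((lows <= limit_price).sum())
    conservative = int((lows < limit_price).sum())
    return optimistic, conservative

# illustrative bar lows: the market touches 100.00 often but rarely trades through it
lows = [100.00, 99.75, 100.00, 98.50, 100.00, 100.00]
print(limit_buy_fill_counts(lows, 100.00))   # (6, 2)
```

The gap between the two counts is exactly the fill-rate risk the backtest options are trying to bracket.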

An Example of a HFT Equity Strategy

Let’s contrast the futures strategy with an example of a similar HFT strategy in equities.  Under the optimistic fill assumption the equity curve looks as follows:

Fig5

Under the more conservative fill assumption, the equity curve is obviously worse, but the strategy continues to produce excellent returns.  In other words, even if the market moves against the system on every single order, trading higher after a sell order is filled, or lower after a buy order is filled, the strategy continues to make money.

Fig6

Market Microstructure

There is a fundamental reason for the discrepancy in the behavior of the two strategies under different fill scenarios, which relates to the very different microstructure of futures vs. equity markets.   In the case of the E-mini strategy the average trade might be, say, $50, which is equivalent to only 4 ticks (each tick is worth $12.50).  So the ratio of average trade to tick size is around 4:1, at best.  In an equity strategy with a similar average trade, the tick size might be as little as 1 cent.  For a futures strategy, crossing the spread to enter or exit a trade more than a handful of times (or missing several limit order entries or exits) will quickly eviscerate the profitability of the system.  A HFT system in equities, by contrast, will typically prove more robust, because of the much smaller tick size.
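A back-of-the-envelope calculation makes the point, using the illustrative figures above:

```python
def net_avg_trade(avg_trade, tick_value, ticks_paid):
    """Average trade remaining after paying `ticks_paid` ticks of
    spread-crossing or missed-fill cost per round trip."""
    return avg_trade - ticks_paid * tick_value

# E-mini S&P 500: $12.50 per tick, $50 average trade
print(net_avg_trade(50.0, 12.50, 4))   # 0.0 - the entire edge is gone after 4 ticks
# Equity traded in 100-share lots with a 1-cent tick: $1 per tick, same average trade
print(net_avg_trade(50.0, 1.00, 4))    # 46.0 - the edge is barely dented
```

The same four ticks of friction that wipe out the futures strategy cost the equity strategy only 8% of its edge.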

Of course, there are many other challenges to high frequency equity trading that futures do not suffer from, such as the multiplicity of trading destinations.  This means that, for instance, in a consolidated market data feed your system is likely to see trading opportunities that simply won’t arise in practice due to latency effects in the feed.  So the profitability of HFT equity strategies is often overstated, when measured using a consolidated feed.  Futures, which are traded on a single exchange, don’t suffer from such difficulties.  And there are a host of other differences in the microstructure of futures vs equity markets that the analyst must take account of.  But, all that understood, in general I would counsel that equities make an easier starting point for HFT system development, compared to futures.

ETFs vs. Hedge Funds – Why Not Combine Both?

Grace Kim, Brand Director at DarcMatter, does a good job of setting out the pros and cons of ETFs vs hedge funds for the family office investor in her LinkedIn post.

She points out that ETFs now offer as much liquidity as hedge funds, with both categories holding around $2.96 trillion in assets.  Her points about the low cost, diversification and ease of investing in ETFs compared to hedge funds are equally well made.

But, of course, the point of ETF investing is to mimic the return in some underlying market – to gain beta exposure, in the jargon – whereas hedge fund investing is all about alpha – the incremental return that is achieved over and above the return attributable to market risk factors.


But should an investor be forced to choose between the advantages of diversification and liquidity of ETFs on the one hand and the (supposedly) higher risk-adjusted returns of hedge funds, on the other?  Why not both?

Diversified Long/Short ETF Strategies

In fact, there is nothing whatever to prevent an investment strategist from constructing a hedge fund strategy using ETFs.  Just as one can enjoy the hedging advantages of a long/short equity hedge fund portfolio, so, too, can one employ the same techniques to construct long/short ETF portfolios.  Compared to a standard equity L/S portfolio, an ETF L/S strategy can offer the added benefit of exposure to (or a hedge against) additional risk factors, including currencies, commodities and interest rates.

For an example of this approach to ETF long/short portfolio construction, see my post on Developing Long/Short ETF Strategies.  As I wrote in that article:

My preference for ETFs is due primarily to the fact that  it is easier to achieve a wide diversification in the portfolio with a more limited number of securities: trading just a handful of ETFs one can easily gain exposure, not only to the US equity market, but also international equity markets, currencies, real estate, metals and commodities.

More Exotic Hedge Fund Strategies with ETFs

But why stop at vanilla long/short strategies?  ETFs are so varied in terms of the underlying index, leverage and directional bias that one can easily construct much more sophisticated strategies capable of tapping the most obscure sources of alpha.

Take our very own Volatility ETF strategy for example.  The strategy constructs hedged positions, not by being long/short, but by being short/short or long/long volatility and inverse volatility products, like SVXY and UVXY, or VXX and XIV.  The strategy combines not only strategic sources of alpha that arise from factors such as convexity in the levered ETF products, but also short term alpha signals arising from temporary misalignments in the relative value of comparable ETF products.  These can be exploited by tactical, daytrading algorithms of a kind more commonly applied in the context of high frequency trading.
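To be clear, what follows is not our production model, just a minimal sketch of the kind of short-term relative-value signal described, using a rolling z-score of the log price ratio of two comparable products. All data and parameter choices here are hypothetical:

```python
import numpy as np

def relative_value_zscore(prices_a, prices_b, lookback=20):
    """Z-score of the log price ratio of two comparable volatility ETFs.
    A large absolute value flags a temporary misalignment in relative value."""
    ratio = np.log(np.asarray(prices_a, dtype=float) /
                   np.asarray(prices_b, dtype=float))
    window = ratio[-lookback:]
    return (ratio[-1] - window.mean()) / window.std(ddof=1)

# illustrative data: the price ratio is stable until the final observation
a = np.exp([0.0] * 19 + [0.5])   # hypothetical price series A
b = np.ones(20)                  # hypothetical price series B
z = relative_value_zscore(a, b)  # large positive z: A rich relative to B
```

A tactical algorithm would trade against such dislocations, subject to the structural relationships (leverage, convexity) between the two products.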

For more on this see for example Investing in Levered ETFs – Theory and Practice.

Does the approach work?  On the basis that a picture is worth a thousand words, let me answer that question as follows:

Systematic Strategies Volatility ETF Strategy

Perf Summary Dec 2015

Conclusion

There is no reason why, in considering the menu of ETF and hedge fund strategies, it should be a case of either-or.  Investors can combine the liquidity, cost and diversification advantages of ETFs with the alpha generation capabilities of well-constructed hedge fund strategies.

A New Approach to Equity Valuation

How Analysts Traditionally Value Equity

I learned the traditional method for producing equity valuations in the 1980’s, from Chase Bank’s excellent credit training program.  The standard technique was to develop several years of projected financial statements, and then discount the cash flows and terminal value to arrive at an NPV. I’m guessing the basic approach hasn’t changed all that much over the last 30-40 years, and probably continues to serve as the fundamental building block for M&A transactions and PE deals.

Damodaran

Amongst several excellent texts on the topic I can recommend, for example, Aswath Damodaran’s book on valuation.

Arguably the weakest points in the methodology are the assumptions made about the long-term growth rate of the business and the rate used to discount the cash flows to produce the PV.  Since we are dealing with long-term projections, small variations in these rates can make a considerable difference to the outcome.

The Monte Carlo Approach

Around 20 years ago I wrote a paper titled “A New Approach to Equity Valuation”, in which I attempted to define a new methodology for equity valuation.  The idea was simple enough:  instead of guessing an appropriate rate to discount the projected cash flows generated by the company, you embed the riskiness into the cash flows themselves, using probability distributions.  That allows you to model the cash flows using Monte Carlo simulation and discount them using the risk-free rate, which is much easier to determine.  In a similar vein,  the model can allow for stochastic growth rates, perhaps also taking into account the arrival of potential new entrants, or disruptive technologies.

I recall taking the idea to an acquaintance of mine who at the time was head of M&A at a prestigious boutique bank in London.  About five minutes into the conversation I realized I had lost him at “Monte Carlo”.  It was yet another instance of the gulf between the fundamental and quantitative approach to investment finance, something I have always regarded as rather artificial.  The line has blurred in several places over the last few decades – option theory of the firm and factor models, to name but two examples – but remains largely intact.  I have met very few equity analysts who have the slightest clue about quantitative research and vice-versa, for that matter.  This is a pity in my view, as there is much to be gained by blending knowledge of the two disciplines.

SSALGOTRADING AD

The basic idea of the Monte Carlo approach is to formulate probability distributions for key variables that drive the business, such as sales, gross margin, cost of goods, etc., as well as related growth rates. You then determine the outcome in terms of P&L and cash flows over a large number of simulations, from which you can derive a probability distribution for the firm/equity value.
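A minimal sketch of the approach might look as follows. Every distributional parameter here is an illustrative assumption, and a real model would also capture the interactions between the variables:

```python
import numpy as np

def mc_equity_value(n_sims=50_000, years=5, rf=0.04, seed=1):
    """Monte Carlo valuation sketch: the riskiness is embedded in the
    simulated cash flows, which are then discounted at the risk-free rate.
    All distributional parameters are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    sales = np.full(n_sims, 100.0)                # year-0 sales, $m (assumed)
    value = np.zeros(n_sims)
    for t in range(1, years + 1):
        growth = rng.normal(0.06, 0.04, n_sims)   # stochastic sales growth
        margin = rng.normal(0.15, 0.03, n_sims)   # stochastic cash margin
        sales = sales * (1.0 + growth)
        cash_flow = sales * margin
        value += cash_flow / (1.0 + rf) ** t
    # terminal value: growing perpetuity on the final year's cash flow
    g = 0.02                                      # assumed long-run growth rate
    value += cash_flow * (1.0 + g) / (rf - g) / (1.0 + rf) ** years
    return value

values = mc_equity_value()
print(np.percentile(values, [5, 50, 95]))   # a distribution, not a point estimate
```

Note that the output is the whole distribution of firm value, not a single NPV figure.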

npv

There are two potential sources of data one can use to build a Monte Carlo model: the historical distributions of the variables and information from line management. It is the latter that is likely to be especially useful, because you can embed management’s expertise and understanding of the business and its competitive environment directly into the model variables, rather than relying upon a single discount rate to account for all the possible sources of variation in the cash flows.

It can get a little complicated, of course: one cannot simply assume that all the variables evolve independently – COGS is likely to fall as a percentage of sales as sales increase, for example, due to economies of scale. Such interactive effects are critically important, and it is necessary to dig deep into the inner workings of the business to model them successfully.  But to those who may view such a task as overwhelmingly complicated I can offer several counterexamples.  For instance, in the 1970’s I worked on large-scale simulation models of the North Sea oil fields that incorporated volumes of information from geology to engineering to financial markets.  Another large-scale simulation was built to assess how best to manage tanker traffic at one of the world’s busiest sea ports.

Creating a simulation model of the financials of a single firm is a simple task, by comparison. And, after you have built the model, it will typically remain fundamentally unchanged in basic form for many years, making the task of producing valuation estimates much easier in future.

Applications of Monte Carlo Methods in Equity Valuation

Ok, so what’s the point?  At the end of the day, don’t you just end up with the same result as from traditional methods, i.e. an estimate of the equity or firm value? Actually no – what you have instead is an estimate of the probability distribution of the value, something decidedly more useful.

For example:

Contract Negotiation

Monte Carlo methods have been applied successfully to model contract negotiation scenarios, for instance for management consulting projects, where several rounds of negotiation are often involved in reaching an agreed pricing structure.

Negotiation

Stock Selection

You might build a portfolio of value stocks whose share prices are below the median simulated value, in the expectation that the majority of the universe will prove to be undervalued over the long term.  Or you might embed information about the expected value of the equities in your universe (and their cash flow volatilities) into your portfolio construction model.

Private Equity / Mergers & Acquisitions

In a PE or M&A negotiation your model provides a range of values to select from, each of which is associated with an estimated “probability of overpayment”.  For example, your opening bid might be a little below the median value, where it is likely that you are under-bidding for the projected cash flows.  That allows some headroom to increase the bid, if necessary, without incurring too great a risk of over-paying.
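Given the simulated value distribution, the overpayment probability for any bid is just an empirical tail frequency. A toy example:

```python
import numpy as np

def prob_overpayment(simulated_values, bid):
    """Fraction of simulated firm values falling below the bid - an
    estimate of the probability that the bid overpays for the cash flows."""
    values = np.asarray(simulated_values, dtype=float)
    return float((values < bid).mean())

# toy distribution of simulated firm values, $m
values = [80.0, 95.0, 100.0, 110.0, 140.0]
print(prob_overpayment(values, 90.0))   # 0.2 - one scenario in five is below the bid
```

In practice the distribution would come from many thousands of simulation runs rather than five.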

Recent Research

A survey of recent research in the field yields some interesting results, amongst them a paper by Magnus Pedersen entitled Monte Carlo Simulation in Financial Valuation (2014).  Pedersen takes a rather different approach to applying Monte Carlo methods to equity valuation.   Specifically, he uses the historical distribution of the price/book ratio to derive the empirical distribution of the equity value rather than modeling the individual cash flows.  This is a sensible compromise for someone who, unlike an analyst at a major sell-side firm, may not have access to management information necessary to build a more sophisticated model.  Nevertheless, Pedersen is able to demonstrate quite interesting results using MC methods to construct equity portfolios (weighted according to the Kelly criterion), in an accompanying paper Portfolio Optimization & Monte Carlo Simulation (2014).

For those who find the subject interesting, Pedersen offers several free books on his web site, which are worth reviewing.


Transcendental Spheres

One of the most beautiful equations in the whole of mathematics is Euler’s identity:

$$e^{i\pi} + 1 = 0$$

I recently came across another beautiful mathematical concept that likewise relates the two transcendental numbers e and π.

We begin by reviewing the concept of a unit sphere, which in 3-dimensional space is the set of points described by the equation:

$$x^2 + y^2 + z^2 = 1$$

We can generate some random coordinates that satisfy the equation, to produce the expected result:

The equation above represents a 3-D unit sphere using the standard Euclidean norm.  It can be generalized to produce a similar formula for an n-dimensional hypersphere:

$$x_1^2 + x_2^2 + \cdots + x_n^2 = 1$$

Another way to generalize the concept is by extending the Euclidean distance measure to the family of p-norms, which define the L-p spaces:

$$\lVert x \rVert_p = \left( \sum_{i=1}^{n} \lvert x_i \rvert^p \right)^{1/p}$$

The shape of a unit sphere in L-p space can take many different forms, including some that have “corners”.  Here are some examples of 2-dimensional spheres for values of p ranging from 0.25 to 4:

 

which can also be explored in the complex plane:

Reverting to the regular Euclidean metric, let’s focus on the n-dimensional unit hypersphere, whose volume is given by:

$$V(n) = \frac{\pi^{n/2}}{\Gamma\left(\tfrac{n}{2} + 1\right)}$$

To see this, note that the volume of the unit sphere in 2-D space is just the area of the unit circle, V(2) = π.  Furthermore, successive volumes are linked by the recurrence relationship:

$$V(n) = \frac{2\pi}{n}\, V(n-2)$$

This recursion allows us to prove the equation for the volume of the unit hypersphere, by induction.

The function V(n) takes a maximal value of approximately 5.26 in n = 5 dimensions, thereafter declining rapidly towards zero:

 

In the limit, the volume of the n-dimensional unit hypersphere tends to zero:

$$\lim_{n \to \infty} V(n) = 0$$

 

Now, consider the sum of the volumes of unit hyperspheres in even dimensions, i.e. for n = 0, 2, 4, 6, ….  For example, the first few terms of the sum are:

$$V(0) + V(2) + V(4) + V(6) + \cdots = 1 + \pi + \frac{\pi^2}{2!} + \frac{\pi^3}{3!} + \cdots$$

These are the initial terms of a well-known Maclaurin expansion, which in the limit produces the following remarkable result:

$$\sum_{k=0}^{\infty} V(2k) = \sum_{k=0}^{\infty} \frac{\pi^k}{k!} = e^{\pi}$$

In other words, the infinite sum of the volumes of the even-dimensional unit hyperspheres evaluates to a power relationship between the two most famous transcendental numbers.  The result, known as Gelfond’s constant, is itself a transcendental number:

$$e^{\pi} \approx 23.14069\ldots$$
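The claims above are easy to verify numerically; a short Python check using the volume formula:

```python
import math

def unit_sphere_volume(n):
    """V(n) = pi^(n/2) / Gamma(n/2 + 1), the volume of the unit n-sphere."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

# V(n) peaks at n = 5 and then declines towards zero...
assert max(range(1, 30), key=unit_sphere_volume) == 5   # V(5) ~ 5.2638

# ...while the sum over even dimensions converges to Gelfond's constant e^pi
total = sum(unit_sphere_volume(2 * k) for k in range(60))
print(total, math.exp(math.pi))   # both ~ 23.140692632779...
```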

A High Frequency Market Making Algorithm

 

This algorithm builds on the research of Avellaneda and Stoikov in their 2008 paper “High-Frequency Trading in a Limit Order Book” and extends the basic algorithm in several ways:

  1. The algorithm makes two sided markets in a specified list of equities, with model parameters set at levels appropriate for each product.
  2. The algorithm introduces an automatic mechanism for managing inventory, reducing the risk of adverse selection by changing the rate of inventory accumulation dynamically.
  3. The algorithm dynamically adjusts the range of the bid-ask spread as the trading session progresses, with the aim of minimizing inventory levels on market close.
  4. The extended algorithm makes use of estimates of recent market trends and adjusts the bid-offer spread to lean in the direction of the trend.
  5. A manual adjustment factor allows the market-maker to nudge the algorithm in the direction of reducing inventory.
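For reference, the reservation price and optimal spread of the basic (unextended) Avellaneda-Stoikov model can be sketched in a few lines of Python; the parameter values shown are purely illustrative:

```python
import math

def as_quotes(mid, inventory, gamma, sigma, k, t_remaining):
    """Bid and ask quotes from the basic Avellaneda-Stoikov model:
    the reservation price skews away from the mid as inventory builds,
    and the optimal spread is set symmetrically around it."""
    reservation = mid - inventory * gamma * sigma ** 2 * t_remaining
    spread = (gamma * sigma ** 2 * t_remaining
              + (2.0 / gamma) * math.log(1.0 + gamma / k))
    return reservation - spread / 2.0, reservation + spread / 2.0

bid, ask = as_quotes(mid=100.0, inventory=5, gamma=0.1, sigma=2.0,
                     k=1.5, t_remaining=0.5)
# long 5 units: both quotes sit below the mid, encouraging the market
# to lift the offer and so reduce the maker's inventory
```

The extensions listed above modify these quotes dynamically, in particular the inventory management (item 2) and the end-of-session spread adjustment (item 3).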

The algorithm is implemented in Mathematica, and can be compiled to create dlls callable from within a C++ or Python application.

The application makes use of the MATH-TWS library to connect to the Interactive Brokers TWS or Gateway platform via the C++ api. MATH-TWS is used to create orders, manage positions and track account balances & P&L.

 

Reflections on Careers in Quantitative Finance

CMU’s MSCF Program

Carnegie Mellon’s Steve Shreve is out with an interesting post on careers in quantitative finance, with his commentary on the changing landscape in quantitative research and the implications for financial education.

I taught at Carnegie Mellon in the late 1990’s, including in its excellent Master’s program in quantitative finance, which Steve co-founded with Sanjay Srivastava.  The program was revolutionary in many ways; it was immediately successful, and rapidly copied by rival graduate schools (I helped to spread the word a little, at Cambridge).

The core of the program remains largely unchanged over the last 20 years, featuring Steve’s excellent foundation course in stochastic calculus; but I am happy to see that the school has added many new and highly relevant topics to the second-year syllabus, including market microstructure, machine learning, algorithmic trading and statistical arbitrage.  This has broadened the program’s original focus on financial engineering to include subjects that are highly relevant to quantitative investment research and trading.

It was this combination of sound theoretical grounding with practitioner-oriented training that made the program so successful.  As I recall, every single graduate was successful in finding a job on Wall Street, often at salaries in excess of $200,000, a considerable sum in those days.  One of the key features of the program was that it combined theoretical concepts with practical training, using a simulated trading floor gifted by Thomson Reuters (a model later adopted by the ICMA Centre at the University of Reading in the UK).  This enabled us to test students’ understanding of what they had been taught, using market simulation models that relied upon key theoretical ideas covered in the program.  The constant reinforcement of the theoretical with the practical made for a much deeper learning experience for most students and greatly facilitated their transition to Wall Street.

Masters in High Frequency Finance

While CMU’s program has certainly evolved and remains highly relevant to the recruitment needs of Wall Street firms, I still believe there is an opportunity for a program focused exclusively on high frequency finance, as previously described in this post.  The MHFF program would be more computer science oriented, with less emphasis placed on financial engineering topics.  So, for instance, students would learn about trading hardware and infrastructure, the principles of efficient algorithm design, as well as HFT trading techniques such as order layering and priority management.  The program would also cover HFT strategies such as latency arbitrage, market making, and statistical arbitrage.  Students would learn both lower-level (C++, Java) and higher-level (Matlab, R) programming languages, and there is also a good case for a mandatory machine-level programming course.  Other core courses might include stochastic calculus and market microstructure.

Who would run such a program?  The ideal school would have a reputation for excellence in both finance and computer science. CMU is an obvious candidate, as is MIT, but there are many other excellent possibilities.

Careers

I’ve been involved in quantitative finance since the beginning:  I recall programming one of the first 68000-based microcomputers in Assembler in the 1980s, for what was ultimately used as an F/X system at a major UK bank. The ensuing rapid proliferation of quantitative techniques in finance has been fueled by the ubiquity of cheap computing power, facilitating the deployment of quantitative techniques that would previously have been impractical to implement due to their complexity.  A good example is the machine learning techniques that now pervade large swathes of the finance arena, from credit scoring to HFT trading.  When I first began working in that field in the early 2000’s, it was necessary to assemble a fairly sizable cluster of CPUs to handle the computational load. These days you can access comparable levels of computational power on a single server and, if you need more, you can easily scale up via Azure or EC2.

It is this explosive growth in computing power that has driven the development of quantitative finance in both the financial engineering and quantitative investment disciplines. At the same time, the huge reduction in the cost of computing power has leveled the playing field and lowered barriers to entry.  What was once the exclusive preserve of the sell-side has now become readily available to many buy-side firms.  As a consequence, much of the growth in employment opportunities in quantitative finance over the last 20 years has been on the buy-side, with the arrival of quantitative hedge funds and proprietary trading firms, including my own, Systematic Strategies.  This trend has a long way to play out so that, when also taking into consideration the increasing restrictions that sell-side firms face in terms of their proprietary trading activity, I am inclined to believe that the buy-side will offer the best employment opportunities for quantitative financiers over the next decade.

It was often said that hedge fund managers are typically in their 30’s or 40’s when they make the move to the buy-side. That has changed in the last 15 years, again driven by the developments in technology.  These days you are more likely to find the critically important technical skills in younger candidates, in their late 20’s or early 30’s.  My advice to those looking for a career in quantitative finance, who are unable to find the right job opportunity, would be: do what every other young person in Silicon Valley is doing:  join a startup, or start one yourself.