A Day Trading System in VIX Futures

This is a follow-up to my earlier post on a Calendar Spread Strategy in VIX Futures (more information on calendar spreads here).

The strategy trades the front two months in the CFE VIX futures contract, generating an annual profit of around $25,000 per spread.

I built an equivalent day trading system in VIX futures in Trading Technologies' visual ADL language, using 1-min bar data for 2010, and tested the system out-of-sample over 2011-2014 (for more information on X-Trader/ADL go here).

The annual net P&L is around $20,000 per spread, with a win rate of 67%. On the downside, the profit factor is rather low and the average trade is barely 1/10 of a tick. Note that this is net of a bid-ask spread of 0.05 ($50) and commission/transaction costs of $20 per round turn. These cost assumptions are reasonable for online trading at many brokerage firms.
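To make the cost assumptions concrete, here is a minimal sketch of the net-of-costs arithmetic (the gross P&L figure is hypothetical; the spread width and commission values are those quoted above):

```python
# Sketch of the per-trade cost arithmetic assumed above (gross P&L is hypothetical).
POINT_VALUE = 1000.0      # $ per index point for VIX futures
BID_ASK_SPREAD = 0.05     # one tick wide, i.e. $50 per spread crossed
COMMISSION_RT = 20.0      # commission/transaction cost per round turn

def net_pl(gross_pl: float) -> float:
    """Net P&L per spread trade, after crossing one bid-ask spread plus commission."""
    return gross_pl - BID_ASK_SPREAD * POINT_VALUE - COMMISSION_RT

# A trade grossing $150 nets $80; one grossing only $70 just covers its costs.
print(net_pl(150.0))  # -> 80.0
```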

However, the strategy requires you to work the spread in order to enter passively (thereby reducing the cost of entry). This is usually only feasible on a platform suitable for high frequency trading, where you can assume that your orders have acceptable priority in the limit order queue, so that a reasonable proportion of your passive bids and offers will be executed. The spread trade is typically held throughout the session and exited on the close (since this is a day trading system).

Overall, while the trading system characteristics are reasonable, the spread strategy is better suited to longer (i.e. overnight) holding periods, since the VIX futures market is not the most liquid and the tick value is large. We'll take a look at day trading strategies in more liquid products, such as the S&P 500 e-mini futures, in another post.

High Frequency Strategy Equity Curve

High Frequency Performance Results


A Calendar Spread Strategy in VIX Futures

I have been working on developing some high frequency spread strategies using Trading Technologies' Algo Strategy Engine, which is extremely impressive (more on this in a later post). I decided to take time out to experiment with a slower version of one of the trades: a calendar spread in VIX futures that trades the spread on the front two contracts. The strategy applies a variety of trend-following and mean-reversion indicators to trade the spread on a daily basis.

Modeling a spread strategy on a retail platform like Interactive Brokers or TradeStation is extremely challenging, due to the limitations of the platform and of the EasyLanguage programming language compared to purpose-built professional platforms like TT's X-Trader and development tools like ADL. If you backtest strategies based on signals generated from a spread calculated using the last traded prices of the two securities, you will almost certainly see "phantom trades" – trades that could not actually have been executed at the indicated spread price (for example, because both contracts last traded on the same side of the bid/ask spread). You also can't easily simulate passive entry or exit strategies, which typically constrains you to using market orders for both legs, in and out of the spread. While market orders would almost certainly be prohibitively expensive in a high frequency or day trading context, in a low-frequency scenario the higher transaction costs entailed in aggressive entries and exits are amortized over far longer time frames.
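The phantom-trade issue can be illustrated with a small sketch (hypothetical quotes, not any platform's API): compare the spread implied by last-traded prices with the spread that is actually executable against the bid/ask.

```python
# A "phantom" signal occurs when the last-trade spread looks attractive but
# the executable spread (buy leg 1 at the ask, sell leg 2 at the bid) is not.

def last_trade_spread(last1: float, last2: float) -> float:
    return last1 - last2

def executable_buy_spread(ask1: float, bid2: float) -> float:
    # Cost of aggressively buying the spread: pay the ask on leg 1, hit the bid on leg 2
    return ask1 - bid2

# Hypothetical top-of-book quotes for the two legs of a calendar spread:
bid1, ask1, last1 = 14.90, 14.95, 14.90   # leg 1 last traded on the bid
bid2, ask2, last2 = 15.40, 15.45, 15.45   # leg 2 last traded on the offer

print(round(last_trade_spread(last1, last2), 2))    # -> -0.55 (looks cheap)
print(round(executable_buy_spread(ask1, bid2), 2))  # -> -0.45 (0.10 worse in reality)
```

A backtest built on last-trade prices would record a fill at -0.55 that no aggressive order could actually have achieved.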

In the following example I have allowed transaction costs of $100 per round turn and slippage of $0.10 (equivalent to $100) per spread. Daily settlement prices from Mar 2004 to June 2010 were used to fit the model, which was then tested out of sample over the period July 2010 to June 2014. Results are summarized in the chart and table below.

Even burdened with significant transaction cost assumptions, the strategy performance looks impressive on several counts, notably a profit factor in excess of 300, a win rate of over 90% and a Sortino ratio of over 6. These features of the strategy prove robust (and even improve) during the four-year out-of-sample period, although the annual net profit per spread declines to around $8,500, from $36,600 for the in-sample period. Even so, this being a straightforward calendar spread, it should be possible to trade the strategy in size at relatively modest margin cost, making the strategy's returns highly attractive.

Equity Curve


Performance Results




Volatility ETF Strategy – Sept 2014 Update


  • CAGR over 40% annually
  • Sharpe ratio in excess of 3
  • Max drawdown -4.3%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1,000 2012-2014
The Systematic Strategies Volatility ETF  strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions, by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”), or statistical correlations, but generates a return derived from the ETF pricing methodology.  The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.


Annual Returns
Monthly Returns

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

We run fully redundant dual servers operating an algorithmic execution platform designed to minimize market impact and slippage. The strategy is not latency sensitive.





Not the Market Top

Our most reliable market timing indicator is a system that "trades" the CBOE VIX Index, a measure of option volatility in the S&P 500 Index. While the VIX index itself is not tradable, the system provides a signal that can be used to trade products such as VIX futures, or ETF products like VXX and XIV. Since the VIX index is negatively correlated with the market, the system can also provide a very useful signal for timing market entries and exits (or additions to positions) in equity portfolios.

Since 1992 the system has "traded" 238 times, with 81% accuracy (i.e. roughly 8 out of 10 trades were profitable). The profit percentage is even higher on the long side – around 89% – although short signals are more frequent than long signals by a factor of more than 2:1.

VIX Signals

Since the start of 2014 the system has issued 9 signals, 7 of which were profitable. The latest signal was generated on July 11, when the system went short the VIX at 12.08. At the time of writing, the trade is underwater, with the VIX at around the 14 level. It is not at all uncommon for a trade to lose money initially, and this one may still work out. The more important point, however, is this: the system is not behaving as it did during the previous market crashes of 2000-01 and 2008-09, periods in which it made very large gains of 42% and 28%, respectively. The more modest return of +1.59% in 2014 suggests that the market has not yet entered the long-awaited correction anticipated by so many. Indeed, I would hazard a prediction that we will see a return to the 2,000 level in the S&P 500 before any such correction occurs. The merchants of doom may have to wait a little while longer for their worst-case scenario to play out.

VIX Strategy Report






What Wealth Managers and Family Offices Need to Understand About Alternative Investing


The most recent Morningstar survey provides an interesting snapshot of the state of the alternatives market.  In 2013, for the third successive year, liquid alternatives was the fastest growing category of mutual funds, drawing in flows totaling $95.6 billion.  The fastest growing subcategories have been long-short stock funds (growing more than 80% in 2013), nontraditional bond funds (79%) and “multi-alternative” fund-of-alts-funds products (57%).

Benchmarking Alternatives
The survey also provides some interesting insights into the misconceptions about alternative investments that remain prevalent amongst advisors, despite contrary indications provided by long-standing academic research.  According to Morningstar, a significant proportion of advisors continue to use inappropriate benchmarks, such as the S&P 500 or Russell 2000, to evaluate alternatives funds (see Some advisers using ill-suited benchmarks to measure alts performance by Trevor Hunnicutt, Investment News July 2014).  As Investment News points out, the problem with applying standards developed to measure the performance of funds that are designed to beat market benchmarks is that many alternative funds are intended to achieve other investment goals, such as reducing volatility or correlation.  These funds will typically have under-performed standard equity indices during the bull market, causing investors to jettison them from their portfolios at a time when the additional protection they offer may be most needed.

This is but one example in a broader spectrum of issues about alternative investing that are poorly understood.  Even where advisors recognize the need for a more appropriate hedge fund index to benchmark fund performance, several traps remain for the unwary.  As shown in Brooks and Kat (The Statistical Properties of Hedge Fund Index Returns and Their Implications for Investors, Journal of Financial and Quantitative Analysis, 2001), there can be considerable heterogeneity between indices that aim to benchmark the same type of strategy, since indices tend to cover different parts of the alternatives universe.  There are also significant differences between indices in terms of their survivorship bias – the tendency to overstate returns by ignoring poorly performing funds that have closed down (see Welcome to the Dark Side – Hedge Fund Attribution and Survivorship Bias, Amin and Kat, Working Paper, 2002).  Hence, even amongst more savvy advisors, the perception of performance tends to be biased by the choice of index.

Risks and Benefits of Diversifying with Alternatives
An important and surprising discovery in relation to diversification with alternatives was revealed in Amin and Kat’s Diversification and Yield Enhancement with Hedge Funds (Working Paper, 2002).  Their study showed that the median standard deviation of a portfolio of stocks, bonds and hedge funds reached its lowest point where the allocation to alternatives was 50%, far higher than the 1%-5% typically recommended by advisors.

Standard Deviation of Portfolios of Stocks, Bonds and 20 Hedge Funds

Hedge Fund Pct Mix and Volatility

Source: Diversification and Yield Enhancement with Hedge Funds, Amin and Kat, Working Paper, 2002

Another potential problem is that investors will not actually invest in the fund index that is used for benchmarking, but in a basket containing a much smaller number of funds, often through a fund of funds vehicle.  The discrepancy in performance between benchmark and basket can often be substantial in the alternatives space.

Amin and Kat studied this problem in 2002 (Portfolios of Hedge Funds, Working Paper, 2002) by constructing hedge fund portfolios ranging in size from 1 to 20 funds and measuring their performance on a number of criteria that included not just the average return and standard deviation, but also the skewness (a measure of the asymmetry of returns), the kurtosis (a measure of the probability of extreme returns), and the correlation with the S&P 500 Index and the Salomon (now Citigroup) Government Bond Index. Their startling conclusion was that, in the alternatives space, diversification is not necessarily a good thing. As expected, as the number of funds in the basket is increased, the overall volatility drops substantially; but at the same time skewness drops, while kurtosis and market correlation increase significantly. In other words, when adding more funds, the likelihood of a large loss increases and the diversification benefit declines. The researchers found that, in most cases, a good approximation to a typical hedge fund index could be constructed with a basket of just 15 well-chosen funds.

Concerns about return distribution characteristics such as skewness and kurtosis may appear arcane, but these factors often become crucially important at just the wrong time, from the investor's perspective. When things go wrong in the stock market they also tend to go wrong for hedge funds, as a fall in stock prices is typically accompanied by a drop in market liquidity, a widening of spreads and, often, an increase in stock loan costs. Equity market neutral and long/short funds that are typically long smaller-cap stocks and short larger-cap stocks will pay a higher price for the liquidity they need to maintain neutrality. Likewise, a market sell-off is likely to lead to the postponement of M&A transactions, which has a negative impact on the performance of risk arbitrage funds. Nor are equity-related funds the only alternatives likely to suffer during a market sell-off. A market fall will typically be accompanied by widening credit spreads, which in turn will damage the performance of fixed income and convertible arbitrage funds. The key point is that, because they all share this risk, diversification among different funds will not do much to mitigate it.

Many advisors remain wedded to using traditional equity indices that are inappropriate benchmarks for alternative strategies.  Even where more relevant indices are selected, they may suffer from survivorship and fund-selection bias.

In order to reap the diversification benefit from alternatives, research shows that investors should concentrate a significant proportion of their wealth in a limited number of alternatives funds – a portfolio strategy that is diametrically opposed to the "common sense" approach of many advisors.

Finally, advisors often overlook the latent correlation and liquidity risks inherent in alternatives that come into play during market down-turns, at precisely the time when investors are most dependent on diversification to mitigate market risk.  Such risks can be managed, but only by paying attention to portfolio characteristics such as skewness and kurtosis, which alternative funds significantly impact.



Volatility Strategy +15.19% in August: Here’s How


Mark Gilbert has written extensively in BloombergView about the demise of volatility across asset classes and what this may portend for markets (see Volatility Dies, Hedge Funds Lose).  As Mark and other commentators have pointed out, the effect has been to narrow the dispersion of asset returns and hence reduce the opportunity set.  This can be seen quite clearly in the following chart, which tracks the trend in the monthly cross-sectional dispersion in the DOW 30 index member stocks, together with the CBOE S&P 500 Volatility Index ($VIX). Monthly dispersion reached a low of 3.3% in August, only marginally higher than the all-time low of 2.8% in February 2007 that preceded the crash of 2008/09.


Fig 1


The concern that macroeconomic or geopolitical risk factors could cause the Fed to lose control of the process is reflected in the persistently high levels of the VVIX, the volatility of the VIX, i.e. in the volatility of volatility.  The latest reading in August of 8.8% for the VVIX is well above the long-term average level, despite the persistent downtrend in the series since 2008.


Fig 2

To give some perspective, this is equivalent to an annual volatility of around 140% – more than enough to give rise to profitable trading opportunities, which in part accounts for the continuing popularity of volatility ETF and ETN products, such as the iPath S&P 500 VIX ST Futures ETN (VXX) and the VelocityShares Daily Inverse VIX ST ETN (XIV), as well as their counterparts in VIX futures and options. As stocks continue to move in a highly correlated way, the pickings will be slim for traditional strategies that depend on normal levels of dispersion, such as equity long/short and pairs trading. In the meantime, investors might do better to focus on the volatility asset class and other niche sectors that continue to offer opportunity.
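The 140% figure can be reproduced with a quick annualization sketch (assuming, as the text implies, that the 8.8% VVIX reading is a daily volatility figure):

```python
import math

# Annualize a daily volatility reading using the square-root-of-time rule,
# with ~252 trading days per year (the 8.8% daily figure is quoted in the text).
daily_vol = 0.088
annual_vol = daily_vol * math.sqrt(252)
print(round(annual_vol * 100, 1))  # -> 139.7, i.e. around 140%
```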



Creating Robust, High-Performance Stock Portfolios

In this article I am going to look at how to construct stock portfolios that best meet investment objectives.

  • The theoretical and practical difficulties of the widely adopted Modern Portfolio Theory approach limit its usefulness as a tool for portfolio construction.
  • MPT portfolios typically produce disappointing out-of-sample results and will often under-perform a naïve, equally-weighted stock portfolio.
  • The article introduces the concept of robust portfolio construction, which leads to portfolios that have more stable performance characteristics, including during periods of high volatility or market corrections.
  • The benefits from this approach include risk-adjusted returns that substantially exceed those of traditional portfolios, together with much lower drawdowns and correlations.

Read more here.


Pattern Trading


  • Pattern trading rules try to identify profit opportunities, based on short term price patterns.
  • An exhaustive test of simple pattern trading rules was conducted for several stocks, incorporating forecasts of the Open, High, Low and Close prices.
  • There is clear evidence that pattern trading rules continue to work consistently for many stocks.
  • Almost all of the optimal pattern trading rules suggest buying the stock if the close is below the mid-range of the day.
  • This “buy the dips” approach can sometimes be improved by overlaying additional conditions, or signals from forecasting models.


Trading Pattern Rules

From time to time one comes across examples of trading pattern rules that appear to work. By “pattern rule”, I mean something along the lines of: “if the stock closes below the open and today’s high is greater than yesterday’s high, then buy tomorrow’s open”.

Trading rules of this kind are typically one-of-a-kind oddities that only work for limited periods, or specific securities. But I was curious enough to want to investigate the concept of pattern trading, to see if there might be some patterns that are generally applicable and potentially worth trading.

To my surprise, I was able to find such a rule, which I will elaborate on in this article. The rule appears to work consistently for a wide range of stocks, across long time frames. While perhaps not interesting enough to trade by itself, the rule might provide some useful insight and, possibly, be combined with other indicators in a more elaborate trading strategy.

The original basis for this piece of research was the idea of using vector autoregression models to forecast the daily O/H/L/C prices of a stock. The underlying thesis is that there might be information in the historical values of these variables that, combined together, could produce more useful forecasts than, say, using close prices alone. In technical terms, we say that the O/H/L/C price series are cointegrated, which one might think of as a more robust kind of correlation: cointegrated series tend to continue to move together for some underlying economic reason, whereas series that are merely correlated will often see that purely statistical relationship break down. In this case the economic relationship between the O/H/L/C series is clear: the high price will always be greater than the low price, and the open and close prices will always lie between the two. Furthermore, the prices cannot drift arbitrarily far apart indefinitely, since volatility is finite and mean-reverting. So there is some kind of rationale for using a vector autoregression model in this context. But I don’t want to dwell on this idea too much, as it turns out to be useful only at the margin.

To keep it simple I decided to focus attention on simple pattern trades of the following kind:

If Rule1 and/or Rule2 then Trade

Rule1 and Rule2 are simple logical statements of the kind: “Today’s Open greater than yesterday’s Close”, or “today’s High below yesterday’s Low”. The trade can be expressed in combinations of the form “Buy today’s Open, Sell today’s Close”, or “Buy today’s Close, Sell tomorrow’s Close”.

In my model I had to consider rules combining not only the O/H/L/C prices from yesterday, today and tomorrow, but also the forecast O/H/L/C prices from the vector autoregression model. This gave rise to hundreds of thousands of possibilities. A brute-force test of every one of them would certainly be feasible, but rather tedious to execute. And many of the possible rules would be redundant – for example, a rule such as: "if today's open is lower than today's close, buy today's open". Rules of that kind would certainly make a great deal of money, but they aren't practical, unfortunately!

To keep the number of possibilities to a workable number, I restricted the trading rule to the following: “Buy today’s close, sell tomorrow’s close”. Consequently, we are considering long-only trading strategies and we ignore any rules that might require us to short a stock.

I chose stocks with long histories, dating back to at least the beginning of the 1970s, in order to provide sufficient data to construct the VAR model. Data from Jan 1970 to Dec 2012 were used to estimate the model, and the performance of the various possible trading rules was evaluated using out-of-sample data from Jan 2013 to Jun 2014.

For ease of illustration the algorithms were coded up in MS Excel (a copy of the Excel workbook is available on request). In evaluating trading rule performance, an allowance was made of 1c per share in commission and 2c per share in slippage. Position size was fixed at 1,000 shares. Considering that the trading rule requires entry and exit at the market close, a greater allowance for slippage may be required for some stocks. In addition, we should note the practical difficulties of trading a sizeable position at the close, especially in situations where the stock price may be very near to key levels, such as the intra-day high or low, that our trading rule might want to take account of.
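As a rough sketch of the test protocol (using made-up bars, not the Excel workbook's actual code), the "close below midrange" variant of the rule, with the cost assumptions above, might look like this:

```python
# Simplified sketch of the pattern-rule test: buy today's close, sell
# tomorrow's close, whenever today's close is below the day's midrange.
# Costs: 1c/share commission + 2c/share slippage, each way; 1,000 shares.

SHARES = 1000
COST_PER_SHARE = 0.01 + 0.02               # commission + slippage, per side

def backtest(bars):
    """bars: list of (open, high, low, close). Returns net P&L in dollars."""
    pl = 0.0
    for today, tomorrow in zip(bars, bars[1:]):
        o, h, l, c = today
        midrange = (h + l) / 2.0
        if c < midrange:                       # "buy the dip" signal
            gross = (tomorrow[3] - c) * SHARES # close-to-close move
            pl += gross - 2 * COST_PER_SHARE * SHARES  # entry + exit costs
    return pl

# Two made-up bars: the signal fires on day 1 (close 99 < midrange 99.5);
# the next close is 100 -> gross $1,000, costs $60, net $940.
bars = [(100.0, 101.0, 98.0, 99.0), (99.5, 100.5, 99.0, 100.0)]
print(round(backtest(bars), 2))  # -> 940.0
```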

As a further caveat, we should note that there is an element of survivorship bias here: in order to fit this test protocol, stocks would have had to survive from the 1970s to the present day. Many stocks that were current at the start of that period are no longer in existence, due to mergers, bankruptcies, etc. Excluding such stocks from the evaluation will tend to inflate the test results. It should be said that I did conduct similar tests on several now-defunct stocks, for which the outcomes were similar to those presented here, but a fully survivorship-bias-corrected study is beyond the scope of this article. With that caveat behind us, let's take a look at some of the results.

Trading Pattern Analysis

Fig. 1 below shows the summary output from the test for the 3M Company (NYSE:MMM). At the top you can see the best trading rule that the system was able to find for this particular stock. In simple English, the rule tells you to buy today’s close in MMM and sell tomorrow’s close, if the stock opened below the forecast of yesterday’s high price and, in addition, the stock closed below the midrange of the day (the average of today’s high and low prices).

Fig. 1 Summary Analysis for MMM 

Fig 1

Source: Yahoo Finance.

The in-sample results from Jan 2000, summarized in the left-hand table in Fig. 2 below, are hardly stellar, but do show evidence of a small but significant edge, with total net returns of 165%, a profit factor of 1.38 and a win rate of 54%. And while the trading rule is ultimately outperformed by a simple buy-and-hold strategy after taking transaction costs into account, for extended periods (e.g. 2009-2012) investors would have been better off using the trading rule, because it successfully avoided the worst of the effects of the 2008/09 market crash.

Out-of-sample results, shown in the right-hand table, are less encouraging, but net returns are nonetheless positive and the % win rate actually increases to 55%.

Fig 2. Trade Rule Performance


Source: Yahoo Finance.

I noted earlier that the first part of our trading rule for MMM involved comparing the opening price to the forecast of yesterday's high produced by the vector autoregression model, while the second part of the trading rule references only the midrange and closing prices. How much added value does the VAR model provide? We can test this by eliminating the first part of the rule and considering all days on which the stock closed below the midrange. The results turn out as shown in Fig. 3.

Fig. 3 Performance of Simplified Trading Rule 


Source: Yahoo Finance.

As expected, the in-sample results from our shortened trading rule are certainly inferior to the original rule, in which the VAR model forecasts played a role. But the out-of-sample performance of the simplified rule is actually improved – not only is the net return higher than before, so too is the % win rate, by a couple of percentage points.

A similar pattern emerges for many other stocks: in almost every case, our test algorithm finds that the best trading rule buys the close, based on a comparison of the closing price to the mid-range price. In some cases, the in-sample test results are improved by adding further conditions, such as we saw in the case of MMM. But, as with MMM, we often find that the additional benefit derived from use of the autoregression model forecasts fails to improve trading rule results in the out-of-sample period, and indeed often makes them worse.


In general, we find evidence that a simple trading rule based on a comparison of the closing price to the mid-range price appears to work for many stocks, across long time spans.

In a sense, this simple trading rule is already well known: it is just a variant of the “buy the dips” idea, where, in this case, we define a dip as being when the stock closes below the mid-range of the day, rather than, say, below a moving average level. The economic basis for this finding is also well known: stocks have positive drift. But it is interesting to find yet another confirmation of this well-known idea. And it leaves open the possibility that the trading concept could be further improved by introducing additional rules, trading indicators, and model forecasts to the mix.


More on Strategy Robustness

Commentators have made the point that a high % win rate is not enough.

Yes, you obviously want to pay attention to other performance metrics also, such as profit factor. In fact, there is no reason why you shouldn’t consider an objective function that explicitly combines various desirable performance measures, for example:

net profit * % win rate * profit factor
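As a sketch (with hypothetical per-trade P&Ls, not the author's actual metric), such a composite objective might be computed like this:

```python
# Composite objective: net profit * win rate * profit factor,
# computed from a list of per-trade P&Ls (the trade values are hypothetical).

def objective(trade_pls):
    wins = [p for p in trade_pls if p > 0]
    losses = [p for p in trade_pls if p < 0]
    if not trade_pls or not losses:
        return float("nan")            # profit factor undefined without losses
    net_profit = sum(trade_pls)
    win_rate = len(wins) / len(trade_pls)
    profit_factor = sum(wins) / abs(sum(losses))
    return net_profit * win_rate * profit_factor

trades = [100.0, -50.0, 200.0, -25.0, 75.0]
# net profit = 300, win rate = 0.6, profit factor = 375/75 = 5.0
print(round(objective(trades), 1))  # -> 900.0
```

Multiplying the terms means a strategy must score reasonably on all three measures at once; a weighted sum, by contrast, would let a very high net profit mask a poor win rate.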

Another approach is to build the model using a data set spanning a different period. I did this with WFC using data from 1990, rather than 1970. Not only was the performance from 1990-2014 better, so too was the performance during the OOS period 1970-1989. The profit factor was 2.49 and the win rate 70% across the 44-year period from 1970. For the period from 1990, these performance metrics increase to 3.04 and 73%, respectively.

So in this case, it appears, a more robust strategy resulted from using less data, rather than more. At first this seems counterintuitive, but it is quite possible for a strategy to be over-conditioned on behavior that is no longer relevant to today's market. Eliminating such conditioning can sometimes enable strategies to emerge that have greater longevity.

WFC from 1970-2014 (1990 data)



Optimizing Strategy Robustness

Below is the equity curve for an equity strategy I developed recently, implemented in WFC.  The results appear outstanding:  no losing years in over 20 years, profit factor of 2.76 and average win rate of 75%.  Out-of-sample results (double blind) for 2013 and 2014:  net returns of 27% and 16% YTD.

WFC from 1993-2014


So far so good. However, if we take a step back through the earlier out of sample period, from 1970, the picture is rather less rosy:


WFC from 1970-2014


Now, at this point, some of you will be saying:  nothing to see here – it’s obviously just curve fitting.  To which I would respond that I have seen successful strategies, including several hedge fund products, with far shorter and less impressive back-tests than the initial 20-year history I showed above.

That said, would you be willing to take the risk of trading a strategy such as this one? I would not: at the back of my mind would always be the concern that the market might easily revert to the conditions that applied during the 1970s and 1980s. I expect many investors would share that concern.

But to the point of this post:  most strategies are designed around the criterion of maximizing net profit.  Occasionally you might come across someone who has considered risk, perhaps in the form of drawdown, or Sharpe ratio.  But, in general, it’s all about optimizing performance.

Suppose that, instead of maximizing performance, your objective was to maximize the robustness of the strategy.  What criteria would you use?

In my own research I have used a great many different objective functions, often multi-dimensional. Correlation to the perfect equity curve, net profit / max drawdown and Sortino ratio are just a few examples. But if I had to guess, I would say that the criterion that tends to produce the most robust strategies and the most reliable out-of-sample performance is maximization of the win rate, subject to a minimum number of trades.
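A sketch of that selection criterion (the candidate rules and their statistics below are hypothetical): discard any rule with too few trades, then take the highest win rate among the survivors.

```python
# Robustness-oriented selection: maximize win rate subject to a
# minimum trade count (candidate results here are made up for illustration).

MIN_TRADES = 100

candidates = [
    {"name": "rule_a", "trades": 40,  "win_rate": 0.90},  # too few trades
    {"name": "rule_b", "trades": 250, "win_rate": 0.72},
    {"name": "rule_c", "trades": 180, "win_rate": 0.75},
]

def select(cands, min_trades=MIN_TRADES):
    """Return the candidate with the highest win rate among those with
    at least min_trades trades, or None if no candidate qualifies."""
    eligible = [c for c in cands if c["trades"] >= min_trades]
    if not eligible:
        return None
    return max(eligible, key=lambda c: c["win_rate"])

print(select(candidates)["name"])  # -> rule_c
```

Note that rule_a's 90% win rate is ignored: with only 40 trades it fails the minimum-sample constraint, which is precisely the point of the criterion.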

I am not aware of a great deal of theory on this topic. I would be interested to learn of other readers’ experience.

