Not the Market Top

Our most reliable market timing indicator is a system that “trades” the CBOE VIX Index, a measure of option volatility in the S&P 500 Index.  While the VIX Index itself is not tradable, the system provides a signal that can be used to trade products such as VIX futures, or ETN products like VXX and XIV.  Since the VIX Index is negatively correlated with the market, the system can also provide a very useful signal for timing market entries and exits (or for adding to positions) in equity portfolios.

Since 1992 the system has “traded” 238 times, with 81% accuracy (i.e. around eight out of ten trades were profitable).  The profit percentage is even higher on the long side – around 89% – although short signals tend to be more frequent than long signals by a factor of more than 2:1.

VIX Signals

Since the start of 2014 the system has issued 9 signals, 7 of which were profitable.  The latest signal was generated on July 11, when the system went short the VIX at 12.08.  At the time of writing, the trade is underwater, with the VIX at around the 14 level.  It is not at all uncommon for a trade to lose money initially, and this one may still work out.  The more important point, however, is this: the system is not behaving as it did during previous market crashes in 2000-01 and 2008-09, periods in which it made very large gains of 42% and 28%, respectively.  The more modest return of +1.59% in 2014 suggests that the market has not yet entered the long-awaited correction anticipated by so many.  Indeed, I would hazard a prediction that we will see a return to the 2,000 level in the S&P 500 before any such correction occurs.  The merchants of doom may have to wait a little while longer for their worst-case scenario to play out.

VIX Strategy Report

 


 

 


What Wealth Managers and Family Offices Need to Understand About Alternative Investing


The most recent Morningstar survey provides an interesting snapshot of the state of the alternatives market.  In 2013, for the third successive year, liquid alternatives were the fastest-growing category of mutual funds, drawing in flows totaling $95.6 billion.  The fastest-growing subcategories have been long-short stock funds (growing more than 80% in 2013), nontraditional bond funds (79%) and “multi-alternative” fund-of-alts-funds products (57%).

Benchmarking Alternatives
The survey also provides some interesting insights into the misconceptions about alternative investments that remain prevalent amongst advisors, despite contrary indications provided by long-standing academic research.  According to Morningstar, a significant proportion of advisors continue to use inappropriate benchmarks, such as the S&P 500 or Russell 2000, to evaluate alternatives funds (see Some advisers using ill-suited benchmarks to measure alts performance by Trevor Hunnicutt, Investment News July 2014).  As Investment News points out, the problem with applying standards developed to measure the performance of funds that are designed to beat market benchmarks is that many alternative funds are intended to achieve other investment goals, such as reducing volatility or correlation.  These funds will typically have under-performed standard equity indices during the bull market, causing investors to jettison them from their portfolios at a time when the additional protection they offer may be most needed.

This is but one example in a broader spectrum of issues about alternative investing that are poorly understood.  Even where advisors recognize the need for a more appropriate hedge fund index to benchmark fund performance, several traps remain for the unwary.  As shown in Brooks and Kat (The Statistical Properties of Hedge Fund Index Returns and Their Implications for Investors, Journal of Financial and Quantitative Analysis, 2001), there can be considerable heterogeneity between indices that aim to benchmark the same type of strategy, since indices tend to cover different parts of the alternatives universe.  There are also significant differences between indices in terms of their survivorship bias – the tendency to overstate returns by ignoring poorly performing funds that have closed down (see Welcome to the Dark Side – Hedge Fund Attribution and Survivorship Bias, Amin and Kat, Working Paper, 2002).  Hence, even amongst more savvy advisors, the perception of performance tends to be biased by the choice of index.

Risks and Benefits of Diversifying with Alternatives
An important and surprising discovery in relation to diversification with alternatives was revealed in Amin and Kat’s Diversification and Yield Enhancement with Hedge Funds (Working Paper, 2002).  Their study showed that the median standard deviation of a portfolio of stocks, bonds and hedge funds reached its lowest point where the allocation to alternatives was 50%, far higher than the 1%-5% typically recommended by advisors.

Standard Deviation of Portfolios of Stocks, Bonds and 20 Hedge Funds

Hedge Fund Pct Mix and Volatility

Source: Diversification and Yield Enhancement with Hedge Funds, Amin and Kat, Working Paper, 2002
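To see why the volatility-minimizing allocation can sit so far above conventional recommendations, consider a stylized two-asset calculation.  The volatilities and correlation below are illustrative assumptions of my own, not the inputs from the Amin and Kat study; the point is simply that when the hedge fund component has comparable or lower volatility and only moderate correlation with the traditional portfolio, the minimum falls far above the 1%-5% typically recommended.

    import numpy as np

    # Illustrative inputs only (NOT the Amin & Kat estimates): annualized volatility
    # of a traditional stock/bond portfolio, of a hedge fund basket, and their correlation.
    vol_trad, vol_hf, rho = 0.10, 0.09, 0.40

    w = np.linspace(0.0, 1.0, 101)                      # weight allocated to hedge funds
    port_vol = np.sqrt((1 - w)**2 * vol_trad**2
                       + w**2 * vol_hf**2
                       + 2 * w * (1 - w) * rho * vol_trad * vol_hf)

    print(f"Volatility-minimizing hedge fund weight: {w[port_vol.argmin()]:.0%}")

With these assumed inputs the minimum-variance weight comes out at roughly 59%, a long way from the single-digit allocations most advisors suggest.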

Another potential problem is that investors will not actually invest in the fund index that is used for benchmarking, but in a basket containing a much smaller number of funds, often through a fund of funds vehicle.  The discrepancy in performance between benchmark and basket can often be substantial in the alternatives space.

Amin and Kat studied this problem in 2002 (Portfolios of Hedge Funds, Working Paper, 2002), by constructing hedge fund portfolios ranging in size from 1 to 20 funds and measuring their performance on a number of criteria that included not just the average return and standard deviation, but also the skewness (a measure of the asymmetry of returns), the kurtosis (a measure of the probability of extreme returns) and the correlation with the S&P 500 Index and the Salomon (now Citigroup) Government Bond Index.  Their startling conclusion was that, in the alternatives space, diversification is not necessarily a good thing.  As expected, as the number of funds in the basket is increased, the overall volatility drops substantially; but at the same time skewness drops while kurtosis and market correlation increase significantly.  In other words, as more funds are added, the likelihood of a large loss increases and the diversification benefit declines.  The researchers found that, in most cases, a good approximation to a typical hedge fund index could be constructed with a basket of just 15 well-chosen funds.
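A rough sketch of that measurement procedure is easy to set up, assuming monthly fund returns are arranged with one column per fund (the data layout, function name and benchmark series here are my own assumptions, not the authors' code): form random, equally weighted baskets of increasing size and record how the distribution statistics change.

    import numpy as np
    import pandas as pd
    from scipy.stats import skew, kurtosis

    def basket_stats(fund_returns: pd.DataFrame, index_returns: pd.Series,
                     sizes=range(1, 21), n_draws=500, seed=42) -> pd.DataFrame:
        """Average distribution statistics of equally weighted baskets of k funds."""
        rng = np.random.default_rng(seed)
        rows = []
        for k in sizes:
            stats = []
            for _ in range(n_draws):
                cols = rng.choice(fund_returns.columns, size=k, replace=False)
                basket = fund_returns[cols].mean(axis=1)      # equally weighted basket
                stats.append([basket.std(), skew(basket), kurtosis(basket),
                              basket.corr(index_returns)])
            rows.append([k] + list(np.mean(stats, axis=0)))
        return pd.DataFrame(rows, columns=["n_funds", "std", "skew", "kurtosis", "corr_index"])

The pattern the authors describe would show up as the "std" column falling with basket size while "skew" falls and "kurtosis" and "corr_index" rise.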

Concerns about return distribution characteristics such as skewness and kurtosis may appear arcane, but these factors often become crucially important at just the wrong time, from the investor’s perspective.  When things go wrong in the stock market they also tend to go wrong for hedge funds, as a fall in stock prices is typically accompanied by a drop in market liquidity, a widening of spreads and, often, an increase in stock loan costs.  Equity market neutral and long/short funds that are typically long smaller-cap stocks and short larger-cap stocks will pay a higher price for the liquidity they need to maintain neutrality.  Likewise, a market sell-off is likely to lead to the postponement of M&A transactions, which will have a negative impact on the performance of risk arbitrage funds.  Nor are equity-related funds the only alternatives likely to suffer during a market sell-off.  A market fall will typically be accompanied by widening credit spreads, which in turn will damage the performance of fixed income and convertible arbitrage funds.  The key point is that, because they all share this risk, diversification among different funds will not do much to mitigate it.

Conclusions
Many advisors remain wedded to using traditional equity indices that are inappropriate benchmarks for alternative strategies.  Even where more relevant indices are selected, they may suffer from survivorship and fund-selection bias.

In order to reap the diversification benefit from alternatives, research shows that investors should concentrate a significant proportion of their wealth in a limited number of alternative funds, a portfolio strategy that is diametrically opposed to the “common sense” approach of many advisors.

Finally, advisors often overlook the latent correlation and liquidity risks inherent in alternatives that come into play during market downturns, at precisely the time when investors are most dependent on diversification to mitigate market risk.  Such risks can be managed, but only by paying attention to portfolio characteristics, such as skewness and kurtosis, that alternative funds can significantly affect.

 


Volatility Strategy +15.19% in August: Here’s How

WHERE VOLATILITY THRIVES

Mark Gilbert has written extensively in BloombergView about the demise of volatility across asset classes and what this may portend for markets (see Volatility Dies, Hedge Funds Lose).  As Mark and other commentators have pointed out, the effect has been to narrow the dispersion of asset returns and hence reduce the opportunity set.  This can be seen quite clearly in the following chart, which tracks the trend in the monthly cross-sectional dispersion in the DOW 30 index member stocks, together with the CBOE S&P 500 Volatility Index ($VIX). Monthly dispersion reached a low of 3.3% in August, only marginally higher than the all-time low of 2.8% in February 2007 that preceded the crash of 2008/09.

CBOE VIX INDEX AND DISPERSION IN THE DOW 30 STOCKS

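For readers who want to reproduce the dispersion series shown in the chart above, here is a minimal sketch, assuming you already have daily closing prices with one column per Dow 30 constituent (how you source the prices is up to you).  Cross-sectional dispersion is taken here to be the standard deviation of the member stocks' returns in each month.

    import pandas as pd

    def monthly_dispersion(prices: pd.DataFrame) -> pd.Series:
        """Cross-sectional dispersion: stdev across the Dow 30 members' returns each month.

        `prices` is assumed to be daily closes, one column per constituent,
        indexed by date.
        """
        monthly_returns = prices.resample("M").last().pct_change()
        return monthly_returns.std(axis=1)   # stdev across stocks, per month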

 

The concern that macroeconomic or geopolitical risk factors could cause the Fed to lose control of the process is reflected in the persistently high level of the VVIX – the volatility of the VIX, i.e. the volatility of volatility.  The latest reading of 8.8% for the VVIX in August is well above the long-term average, despite the persistent downtrend in the series since 2008.

CBOE VIX INDEX ($VIX) AND AVERAGE DAILY VIX VOLATILITY (VVIX)


To give some perspective, this is equivalent to an annual volatility of around 140% – more than enough to give rise to profitable trading opportunities.  This in part accounts for the continuing popularity of volatility ETF and ETN products, such as the iPath S&P 500 VIX ST Futures ETN (VXX) and the VelocityShares Daily Inverse VIX ST ETN (XIV), as well as their counterparts in VIX futures and options.  As stocks continue to move in a highly correlated way the pickings will be slim for traditional strategies that depend on normal levels of dispersion, such as equity long/short and pairs trading.  In the meantime, investors might do better to focus on the volatility asset class and other niche sectors that continue to offer opportunity.
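For reference, the 140% figure is just the usual square-root-of-time scaling of the 8.8% average daily reading, assuming roughly 252 trading days per year:

    0.088 × √252 ≈ 0.088 × 15.9 ≈ 1.40, i.e. approximately 140% annualized.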

 


Creating Robust, High-Performance Stock Portfolios

Summary
In this article I am going to look at how stock portfolios can best be constructed to meet investment objectives.

  • The theoretical and practical difficulties of the widely adopted Modern Portfolio Theory approach limit its usefulness as a tool for portfolio construction.
  • MPT portfolios typically produce disappointing out-of-sample results and will often under-perform a naïve, equally-weighted stock portfolio.
  • The article introduces the concept of robust portfolio construction, which leads to portfolios that have more stable performance characteristics, including during periods of high volatility or market corrections.
  • The benefits from this approach include risk-adjusted returns that substantially exceed those of traditional portfolios, together with much lower drawdowns and correlations.

Read more here.


Pattern Trading

Summary

  • Pattern trading rules try to identify profit opportunities, based on short term price patterns.
  • An exhaustive test of simple pattern trading rules was conducted for several stocks, incorporating forecasts of the Open, High, Low and Close prices.
  • There is clear evidence that pattern trading rules continue to work consistently for many stocks.
  • Almost all of the optimal pattern trading rules suggest buying the stock if the close is below the mid-range of the day.
  • This “buy the dips” approach can sometimes be improved by overlaying additional conditions, or signals from forecasting models.


Trading Pattern Rules

From time to time one comes across examples of trading pattern rules that appear to work. By “pattern rule”, I mean something along the lines of: “if the stock closes below the open and today’s high is greater than yesterday’s high, then buy tomorrow’s open”.

Trading rules of this kind are typically one-of-a-kind oddities that only work for limited periods, or specific securities. But I was curious enough to want to investigate the concept of pattern trading, to see if there might be some patterns that are generally applicable and potentially worth trading.

To my surprise, I was able to find such a rule, which I will elaborate on in this article. The rule appears to work consistently for a wide range of stocks, across long time frames. While perhaps not interesting enough to trade by itself, the rule might provide some useful insight and, possibly, be combined with other indicators in a more elaborate trading strategy.

The original basis for this piece of research was the idea of using vector autoregression models to forecast the daily O/H/L/C prices of a stock. The underlying thesis is that there might be information in the historical values of these variables that, combined together, could produce more useful forecasts than, say, using close prices alone. In technical terms, we say that the O/H/L/C price series are cointegrated, which one might think of as a more robust kind of correlation: cointegrated series tend to continue to move together for some underlying economic reason, whereas series that are merely correlated will often see that purely statistical relationship break down. In this case the economic relationship between the O/H/L/C series is clear: the high price will always be greater than the low price, and the open and close prices will always lie between the two. Furthermore, the prices cannot drift arbitrarily far apart indefinitely, since volatility is finite and mean-reverting. So there is some kind of rationale for using a vector autoregression model in this context. But I don’t want to dwell on this idea too much, as it turns out to be useful only at the margin.
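As a concrete illustration of the modeling step, here is a minimal sketch of fitting a vector autoregression to the daily O/H/L/C series using statsmodels.  The use of log prices and AIC-based lag selection are my own assumptions, not necessarily the specification used in the study itself.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.api import VAR

    def fit_ohlc_var(ohlc: pd.DataFrame, max_lags: int = 5) -> pd.Series:
        """Fit a VAR to log O/H/L/C prices and return one-step-ahead price forecasts.

        `ohlc` is assumed to have columns ['Open', 'High', 'Low', 'Close'].
        """
        log_prices = np.log(ohlc[["Open", "High", "Low", "Close"]])
        results = VAR(log_prices).fit(maxlags=max_lags, ic="aic")
        forecast = results.forecast(log_prices.values[-results.k_ar:], steps=1)
        return np.exp(pd.Series(forecast[0], index=["Open", "High", "Low", "Close"]))

The forecast O/H/L/C values produced this way are what the candidate trading rules below can reference alongside the actual prices.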

To keep it simple I decided to focus attention on simple pattern trades of the following kind:

If Rule1 and/or Rule2 then Trade

Rule1 and Rule2 are simple logical statements of the kind: “Today’s Open greater than yesterday’s Close”, or “today’s High below yesterday’s Low”. The trade can be expressed in combinations of the form “Buy today’s Open, Sell today’s Close”, or “Buy today’s Close, Sell tomorrow’s Close”.

In my model I had to consider rules combining not only the O/H/L/C prices from yesterday, today and tomorrow, but also forecast O/H/L/C prices from the vector autoregression model. This gave rise to hundreds of thousands of possibilities. A brute-force test of every one of them would certainly be feasible, but rather tedious to execute. And many of the possible rules would be unusable in practice – for example, a rule such as: “if today’s open is lower than today’s close, buy today’s open”. Rules of that kind will certainly make a great deal of money, but they aren’t practical, unfortunately!

To keep the number of possibilities to a workable number, I restricted the trading rule to the following: “Buy today’s close, sell tomorrow’s close”. Consequently, we are considering long-only trading strategies and we ignore any rules that might require us to short a stock.

I chose stocks with long histories, dating back to at least the beginning of the 1970s, in order to provide sufficient data to construct the VAR model. Data from the period from Jan 1970 to Dec 2012 were used to estimate the model, and the performance of the various possible trading rules was evaluated using out-of-sample data from Jan 2013 to Jun 2014.

For ease of illustration the algorithms were coded up in MS-Excel (a copy of the Excel workbook is available on request). In evaluating trading rule performance, an allowance was made of $0.01 per share in commission and $0.02 per share in slippage. Position size was fixed at 1,000 shares. Considering that the trading rule requires entry and exit at the market close, a greater allowance for slippage may be required for some stocks. In addition, we should note the practical difficulties of trading a sizeable position at the close, especially in situations where the stock price may be very near to key levels, such as the intra-day high or low, that our trading rule might want to take account of.
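The evaluation logic itself is simple enough to sketch in a few lines of Python (the Excel workbook is not reproduced here, so this is my own rough equivalent).  The cost figures match the allowances above, read as applying per side; the example signal shown in the comment is the "close below mid-range" condition discussed later in the article.

    import pandas as pd

    COMMISSION = 0.01   # $ per share, per side (my reading of the allowance above)
    SLIPPAGE = 0.02     # $ per share, per side
    SHARES = 1000

    def evaluate_rule(ohlc: pd.DataFrame, signal: pd.Series) -> pd.Series:
        """Evaluate 'buy today's close, sell tomorrow's close' whenever `signal` is True.

        `ohlc` is assumed to have columns ['Open', 'High', 'Low', 'Close'];
        `signal` is a boolean Series aligned on the same dates.
        """
        entry = ohlc["Close"]
        exit_ = ohlc["Close"].shift(-1)
        pnl = (exit_ - entry) * SHARES - 2 * (COMMISSION + SLIPPAGE) * SHARES
        trades = pnl[signal].dropna()
        gross_loss = abs(trades[trades < 0].sum())
        return pd.Series({
            "net_profit": trades.sum(),
            "win_rate": (trades > 0).mean(),
            "profit_factor": trades[trades > 0].sum() / gross_loss if gross_loss else float("inf"),
        })

    # Example condition: buy the close when it falls below the day's mid-range
    # signal = ohlc["Close"] < (ohlc["High"] + ohlc["Low"]) / 2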

As a further caveat, we should note that there is an element of survivor bias here: in order to fit this test protocol, stocks would have had to survive from the 1970s to the present day. Many stocks that were current at the start of that period are no longer in existence, due to mergers, bankruptcies, etc. Excluding such stocks from the evaluation will tend to inflate the test results. It should be said that I did conduct similar tests on several now-defunct stocks, for which the outcomes were similar to those presented here, but a fully survivor-bias-corrected study is beyond the scope of this article. With that caveat behind us, let’s take a look at some of the results.

Trading Pattern Analysis

Fig. 1 below shows the summary output from the test for the 3M Company (NYSE:MMM). At the top you can see the best trading rule that the system was able to find for this particular stock. In simple English, the rule tells you to buy today’s close in MMM and sell tomorrow’s close, if the stock opened below the forecast of yesterday’s high price and, in addition, the stock closed below the midrange of the day (the average of today’s high and low prices).

Fig. 1 Summary Analysis for MMM 


Source: Yahoo Finance.

The in-sample results from Jan 2000, summarized in the left-hand table in Fig. 2 below, are hardly stellar, but they do show evidence of a small but significant edge, with total net returns of 165%, a profit factor of 1.38 and a % win rate of 54%. And while the trading rule is ultimately outperformed by a simple buy-and-hold strategy after taking transaction costs into account, for extended periods (e.g. 2009-2012) investors would have been better off using the trading rule, because it successfully avoided the worst effects of the 2008/09 market crash.

Out-of-sample results, shown in the right-hand table, are less encouraging, but net returns are nonetheless positive and the % win rate actually increases to 55%.

Fig. 2 Trade Rule Performance


Source: Yahoo Finance.

I noted earlier that the first part of our trading rule for MMM involved comparing the opening price to the forecast of yesterday’s high, produced by the vector autoregression model, while the second part of the trading rule references only the midrange and closing prices.  How much added value does the VAR model provide?  We can test this by eliminating the first part of the rule and considering all days in which the stock closed below the midrange.  The results turn out to be as shown in Fig. 3.

Fig. 3 Performance of Simplified Trading Rule 


Source: Yahoo Finance.

As expected, the in-sample results from our shortened trading rule are certainly inferior to those of the original rule, in which the VAR model forecasts played a role. But the out-of-sample performance of the simplified rule is actually improved – not only is the net return higher than before, so too is the % win rate, by a couple of percentage points.

A similar pattern emerges for many other stocks: in almost every case, our test algorithm finds that the best trading rule buys the close, based on a comparison of the closing price to the mid-range price. In some cases, the in-sample test results are improved by adding further conditions, as we saw in the case of MMM. But, as with MMM, the additional conditions derived from the autoregression model forecasts frequently fail to improve trading rule results in the out-of-sample period, and indeed often make them worse.

Conclusion

In general, we find evidence that a simple trading rule based on a comparison of the closing price to the mid-range price appears to work for many stocks, across long time spans.

In a sense, this simple trading rule is already well known: it is just a variant of the “buy the dips” idea, where, in this case, we define a dip as being when the stock closes below the mid-range of the day, rather than, say, below a moving average level. The economic basis for this finding is also well known: stocks have positive drift. But it is interesting to find yet another confirmation of this well-known idea. And it leaves open the possibility that the trading concept could be further improved by introducing additional rules, trading indicators, and model forecasts to the mix.


More on Strategy Robustness

Commentators have made the point that a high % win rate is not enough.

Yes, you obviously want to pay attention to other performance metrics also, such as profit factor. In fact, there is no reason why you shouldn’t consider an objective function that explicitly combines various desirable performance measures, for example:

net profit * % win rate * profit factor
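In code, given a list of per-trade profits, such a composite objective might look like the sketch below (the equal weighting of the three terms simply mirrors the formula above; it is not a recommendation).

    def composite_objective(trade_pnl) -> float:
        """net profit * % win rate * profit factor, computed from per-trade P&L."""
        if not trade_pnl:
            return 0.0
        net_profit = sum(trade_pnl)
        win_rate = sum(p > 0 for p in trade_pnl) / len(trade_pnl)
        gross_win = sum(p for p in trade_pnl if p > 0)
        gross_loss = abs(sum(p for p in trade_pnl if p < 0))
        profit_factor = gross_win / gross_loss if gross_loss else float("inf")
        return net_profit * win_rate * profit_factor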

Another approach is to build the model using a data set spanning a different period. I did this with WFC, using data from 1990 rather than 1970. Not only was the performance from 1990-2014 better, so too was the performance during the OOS period 1970-1989.  The profit factor was 2.49 and the % win rate 70% across the 44-year period from 1970.  For the period from 1990, these metrics increase to 3.04 and 73%, respectively.

So in this case, it appears, a more robust strategy resulted from using less data, rather than more.  At first sight this is counterintuitive. But it’s quite possible for a strategy to become over-conditioned on behavior that is no longer relevant to the market today. Eliminating such conditioning can sometimes allow strategies to emerge that have greater longevity.

WFC from 1970-2014 (1990 data)



Optimizing Strategy Robustness

Below is the equity curve for an equity strategy I developed recently, implemented in WFC.  The results appear outstanding: no losing years in over 20 years, a profit factor of 2.76 and an average win rate of 75%.  Out-of-sample results (double blind) for 2013 and 2014: net returns of 27% and 16% YTD.

WFC from 1993-2014

 

So far so good. However, if we take a step back through the earlier out of sample period, from 1970, the picture is rather less rosy:

 

WFC from 1970-2014

 

Now, at this point, some of you will be saying: nothing to see here – it’s obviously just curve fitting.  To which I would respond that I have seen successful strategies, including several hedge fund products, with far shorter and less impressive back-tests than the initial 20-year history I showed above.

That said, would you be willing to take the risk of trading a strategy such as this one?  I would not: at the back of my mind would always be the concern that the market might easily revert to the conditions that applied during the 1970s and 1980s.  I expect many investors would share that concern.

But to the point of this post: most strategies are designed around the criterion of maximizing net profit.  Occasionally you might come across someone who has considered risk, perhaps in the form of drawdown, or Sharpe ratio.  But, in general, it’s all about optimizing performance.

Suppose that, instead of maximizing performance, your objective was to maximize the robustness of the strategy.  What criteria would you use?

In my own research, I have used a great many different objective functions, often multi-dimensional.  Correlation to the perfect equity curve, net profit / max drawdown and the Sortino ratio are just a few examples.  But if I had to guess, I would say that the criterion that tends to produce the most robust strategies and the most reliable out-of-sample performance is maximization of the win rate, subject to a minimum number of trades.
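Expressed as an objective function for an optimizer, the win-rate criterion with a minimum-trade constraint is straightforward; in this sketch the threshold of 100 trades is an arbitrary illustration, not a recommendation.

    def robust_objective(trade_pnl, min_trades: int = 100) -> float:
        """Maximize win rate, but reject any strategy with too few trades."""
        if len(trade_pnl) < min_trades:
            return float("-inf")          # constraint violated: discard this candidate
        return sum(p > 0 for p in trade_pnl) / len(trade_pnl)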

I am not aware of a great deal of theory on this topic. I would be interested to learn of other readers’ experience.

 


Enhancing Mutual Fund Returns With Market Timing

Summary

  • In this article, I will apply market timing techniques to several popular mutual funds.
  • The market timing approach produces annual rates of return that are 3% to 7% higher, with lower risk, than an equivalent buy and hold mutual fund investment.
  • Investors could in some cases have earned more than double the return achieved by holding a mutual fund investment, over a 10-year period.
  • Hedging strategies that use market timing signals are able to sidestep market corrections, volatile conditions and the ensuing equity drawdowns.
  • Hedged portfolios typically employ around 12% less capital than the equivalent buy and hold strategy.

Read the full article here.


How to Bulletproof Your Portfolio

Summary

  • How to stay in the market and navigate the rocky terrain ahead, without risking hard won gains.
  • A hedging program to get you out of trouble at the right time and step back in when skies are clear.
  • Even a modest ability to time the market can produce enormous dividends over the long haul.
  • Investors can benefit by using quantitative market timing techniques to strategically adjust their market exposure.
  • Market timing can be a useful tool to avoid major corrections, increasing investment returns, while reducing volatility and drawdowns.

Read the full article here.


How Not to Develop Trading Strategies – A Cautionary Tale

In his post on Multi-Market Techniques for Robust Trading Strategies (http://www.adaptrade.com/Newsletter/NL-MultiMarket.htm), Michael Bryant of Adaptrade discusses some interesting approaches to improving model robustness. One is to use data from several correlated assets to build the model, on the basis that if the algorithm works for several assets with differing price levels, that would tend to corroborate the system’s robustness. The second approach he advocates is to use data from the same asset series at different bar lengths. The example he uses is @ES.D at 5-, 7- and 9-minute bars. The argument in favor of this approach is the same as for the first, albeit in this case the underlying asset is the same.

I like Michael’s idea in principle, but I wanted to give you a sense of what can all too easily go wrong with GP (genetic programming) modeling, even using techniques such as multi-time frame fitting and Monte Carlo simulation to improve robustness testing.

In the chart below I have extended the analysis back in time, beyond the 2011-2012 period that Michael used to build his original model. As you can see, most of the returns are generated in-sample, in the 2011-2012 period. As we look back over the period from 2007-2010, the results are distinctly unimpressive – the strategy basically trades sideways for four years.

Adaptrade ES Strategy in Multiple Time Frames

 

How to Do It Right

In my view, there is only one safe way to use GP to develop strategies. Firstly, you need to use a very long span of data – as much as possible – to fit your model. Only in this way can you ensure that the model has encountered enough variation in market conditions to stand a reasonable chance of being able to adapt to changing market conditions in future.

Secondly, you need to use two OOS periods. The first OOS span of data, drawn from the start of the data series, is used in the normal way, to visually inspect the performance of the model. But the second span of OOS data, from more recent history, is NOT examined before the model is finalized. This is really important. Products like Adaptrade make it too easy for the system designer to “cheat”, by looking at the recent performance of his trading system “out of sample” and selecting models that do well in that period. But the very process of examining OOS performance introduces bias into the system. It would be like adding a line of code saying something like:

IF (model performance in OOS period > x) do the following….

I am quite sure if I posted a strategy with a line of code like that in it, it would immediately be shot down as being blatantly biased, and quite rightly so. But, if I look at the recent “OOS” performance and use it to select the model, I am effectively doing exactly the same thing.

That is why it is so important to have a second span of OOS data that is not only not used to build the model, but also not used to assess performance until after the final model selection is made. For that reason, the second OOS period is referred to as a “double blind” test.

That’s the procedure I followed to build my futures daytrading strategy: I used as much data as possible, dating from 2002. The first 20% of each data set was used for normal OOS testing. But the second set of data, from Jan 2012 onwards, was my double-blind data set. Only when I saw that the system maintained performance in BOTH OOS periods was I reasonably confident of the system’s robustness.
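The data-splitting scheme can be made explicit in a few lines.  In this sketch the proportions and cutoff date mirror those mentioned above (an initial 20% OOS segment, a double-blind segment from Jan 2012 onward); the function and variable names are my own.

    import pandas as pd

    def split_for_double_blind(bars: pd.DataFrame, oos1_frac: float = 0.20,
                               double_blind_start: str = "2012-01-01"):
        """Split a bar series into an in-sample segment, an initial OOS segment,
        and a final double-blind segment that is not examined until the model is final."""
        cutoff = pd.Timestamp(double_blind_start)
        double_blind = bars[bars.index >= cutoff]     # never inspected during development
        remainder = bars[bars.index < cutoff]
        n_oos1 = int(len(remainder) * oos1_frac)
        oos1 = remainder.iloc[:n_oos1]                # earliest 20%: normal OOS inspection
        in_sample = remainder.iloc[n_oos1:]           # used to fit the model
        return in_sample, oos1, double_blind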


This further explains why it is so challenging to develop higher frequency strategies using GP. Running even a very fast GP modeling system on a large span of high frequency data can take inordinate amounts of time.

The longest span of 5-min bar data that a GP system can handle would typically be around 5-7 years. This is probably not quite enough to build a truly robust system, although if you pick your time span carefully it might be (I generally like to use the 2006-2011 period, which has lots of market variation).

For 15 minute bar data, a well-designed GP system can usually handle all the available data you can throw at it – from 1999 in the case of the Emini, for instance.

Why I don’t Like Fitting Models over Short Time Spans

The risks of fitting models to data in short time spans are intuitively obvious. If you happen to pick a data set in which the market is in a strong uptrend, then your model is going to focus on that kind of market behavior. Subsequently, when the trend changes, the strategy will typically break down.
Monte Carlo simulation isn’t going to change much in this situation: sure, it will help a bit, perhaps, but since the resampled data is all drawn from the same original data set, in most cases the simulated paths will also show a strong uptrend – all that will be shown is that there is some doubt about the strength of the trend. But a completely different scenario, in which, say, the market drops by 10%, is unlikely to appear.
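To make the point concrete, here is a minimal sketch of the kind of resampling involved (my own illustration, not the procedure used by any particular product): simulated equity curves are built by drawing daily returns, with replacement, from the original test period, so if that period was dominated by an uptrend, nearly every simulated path trends up as well.

    import numpy as np

    def resampled_equity_curves(daily_returns, n_paths: int = 1000, seed: int = 0):
        """Monte Carlo paths built by resampling the original daily returns with replacement."""
        rng = np.random.default_rng(seed)
        daily_returns = np.asarray(daily_returns)
        paths = rng.choice(daily_returns, size=(n_paths, len(daily_returns)), replace=True)
        return np.cumprod(1.0 + paths, axis=1)   # each row is one simulated equity curve

    # Every path has the same expected drift as the original sample, which is why
    # resampling cannot conjure up a market regime that the sample never contained.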

One possible answer to that problem, recommended by some system developers, is simply to rebuild the model when a breakdown is detected. While it’s true that a product like MSA (Adaptrade’s Market System Analyzer) can make detection easier, rebuilding the model is another question altogether. There is no guarantee that the kind of model that has worked hitherto can be re-tooled to work once again. In fact, there may be no viable trading system that can handle the new market dynamics.

Here is a case in point. We have a system that works well on 10 min bars in TF.D up until around May 2012, when MSA indicates a breakdown in strategy performance.

TF.F Monte Carlo

So now we try to fit a new model, along the pattern of the original model, taking account of some of the new data.  But it turns out to be just a Band-Aid – after a few more data points the strategy breaks down again, irretrievably.

TF EC 1

This is typical of what often happens when you use GP to build a model using a short span of data. That’s why I prefer to use a long time span, even at lower frequency. The chances of being able to build a robust system that will adapt well to changing market conditions are much higher.

A Robust Emini Trading System

Here, for example, is a GP system built on daily data in @ES.D from 1999 to 2011 (i.e. 2012 to 2014 is OOS).

ES.D EC
