Pattern Trading

Summary

  • Pattern trading rules try to identify profit opportunities based on short-term price patterns.
  • An exhaustive test of simple pattern trading rules was conducted for several stocks, incorporating forecasts of the Open, High, Low and Close prices.
  • There is clear evidence that pattern trading rules continue to work consistently for many stocks.
  • Almost all of the optimal pattern trading rules suggest buying the stock if the close is below the mid-range of the day.
  • This “buy the dips” approach can sometimes be improved by overlaying additional conditions, or signals from forecasting models.

MMM

Trading Pattern Rules

From time to time one comes across examples of trading pattern rules that appear to work. By “pattern rule”, I mean something along the lines of: “if the stock closes below the open and today’s high is greater than yesterday’s high, then buy tomorrow’s open”.

Trading rules of this kind are typically one-of-a-kind oddities that only work for limited periods, or specific securities. But I was curious enough to want to investigate the concept of pattern trading, to see if there might be some patterns that are generally applicable and potentially worth trading.

To my surprise, I was able to find such a rule, which I will elaborate on in this article. The rule appears to work consistently for a wide range of stocks, across long time frames. While perhaps not interesting enough to trade by itself, the rule might provide some useful insight and, possibly, be combined with other indicators in a more elaborate trading strategy.

The original basis for this piece of research was the idea of using vector autoregression models to forecast the daily O/H/L/C prices of a stock. The underlying thesis is that there might be information in the historical values of these variables that, combined together, could produce more useful forecasts than, say, using close prices alone. In technical terms, we say that the O/H/L/C price series are cointegrated, which one might think of as a more robust kind of correlation: cointegrated series tend to continue to move together for some underlying economic reason, whereas series that are merely correlated will often see that purely statistical relationship break down. In this case the economic relationship between the O/H/L/C series is clear: the high price will always be greater than the low price, and the open and close prices will always lie between the two. Furthermore, the prices cannot drift arbitrarily far apart indefinitely, since volatility is finite and mean-reverting. So there is some kind of rationale for using a vector autoregression model in this context. But I don’t want to dwell on this idea too much, as it turns out to be useful only at the margin.

To keep it simple I decided to focus attention on simple pattern trades of the following kind:

If Rule1 and/or Rule2 then Trade

Rule1 and Rule2 are simple logical statements of the kind: “Today’s Open greater than yesterday’s Close”, or “today’s High below yesterday’s Low”. The trade can be expressed in combinations of the form “Buy today’s Open, Sell today’s Close”, or “Buy today’s Close, Sell tomorrow’s Close”.

In my model I had to consider rules combining not only the O/H/L/C prices from yesterday, today and tomorrow, but also forecast O/H/L/C prices from the vector autoregression model. This gave rise to hundreds of thousands of possibilities. A brute-force test of every one of them would certainly be feasible, but rather tedious to execute. And many of the possible rules would be redundant – for example, a rule such as: “if today’s open is lower than today’s close, buy today’s open”. Rules of that kind would certainly make a great deal of money, but they aren’t practical, unfortunately!

To keep the number of possibilities to a workable number, I restricted the trading rule to the following: “Buy today’s close, sell tomorrow’s close”. Consequently, we are considering long-only trading strategies and we ignore any rules that might require us to short a stock.

I chose stocks with long histories, dating back to at least the beginning of the 1970s, in order to provide sufficient data to construct the VAR model. Data from Jan 1970 to Dec 2012 were used to estimate the model, and the performance of the various possible trading rules was evaluated using out-of-sample data from Jan 2013 to Jun 2014.

For ease of illustration the algorithms were coded up in MS-Excel (a copy of the Excel workbook is available on request). In evaluating trading rule performance, an allowance was made of 1 cent per share in commission and 2 cents per share in slippage. Position size was fixed at 1,000 shares. Considering that the trading rule requires entry and exit at the market close, a greater allowance for slippage may be required for some stocks. In addition, we should note the practical difficulties of trading a sizeable position at the close, especially in situations where the stock price may be very near to key levels, such as the intra-day high or low, that our trading rule might want to take account of.
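For readers who prefer code to a spreadsheet, the test procedure can be sketched in a few lines of Python/pandas. This is a minimal illustration, not the Excel workbook’s logic: the DataFrame columns, the two example conditions and the treatment of costs (applied here once per round turn) are all my own assumptions.

import pandas as pd

COMMISSION = 0.01   # $ per share, per the allowance above
SLIPPAGE = 0.02     # $ per share
SHARES = 1000

def backtest_pattern_rule(px: pd.DataFrame) -> pd.Series:
    # px is assumed to have columns Open, High, Low, Close, indexed by date
    midrange = (px["High"] + px["Low"]) / 2
    rule1 = px["Open"] > px["Close"].shift(1)   # example Rule1
    rule2 = px["Close"] < midrange              # example Rule2
    signal = rule1 & rule2
    # The trade is always "buy today's close, sell tomorrow's close"
    gross = (px["Close"].shift(-1) - px["Close"]) * SHARES
    costs = (COMMISSION + SLIPPAGE) * SHARES    # assumed per round turn
    return (gross - costs).where(signal, 0.0).dropna()

# Usage: pnl = backtest_pattern_rule(px); print(pnl.sum(), (pnl[pnl != 0] > 0).mean())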

As a further caveat, we should note that there is an element of survivor bias here: in order to fit this test protocol, stocks would have had to survive from the 1970s to the present day. Many stocks that were current at the start of that period are no longer in existence, due to mergers, bankruptcies, etc. Excluding such stocks from the evaluation will tend to inflate the test results. It should be said that I did conduct similar tests on several now-defunct stocks, for which the outcomes were similar to those presented here, but a fully survivor-bias-corrected study is beyond the scope of this article. With that caveat behind us, let’s take a look at some of the results.

Trading Pattern Analysis

Fig. 1 below shows the summary output from the test for the 3M Company (NYSE:MMM). At the top you can see the best trading rule that the system was able to find for this particular stock. In simple English, the rule tells you to buy today’s close in MMM and sell tomorrow’s close, if the stock opened below the forecast of yesterday’s high price and, in addition, the stock closed below the midrange of the day (the average of today’s high and low prices).

Fig. 1 Summary Analysis for MMM 

Fig 1

Source: Yahoo Finance.

The in-sample results from Jan 2000, summarized in the left-hand table in Fig. 2 below, are hardly stellar, but they do show evidence of a small but significant edge, with total net returns of 165%, a profit factor of 1.38 and a win rate of 54%. And while the trading rule is ultimately outperformed by a simple buy-and-hold strategy after taking into account transaction costs, for extended periods (e.g. 2009-2012) investors would have been better off using the trading rule, because it successfully avoided the worst effects of the 2008/09 market crash.

Out-of-sample results, shown in the right-hand table, are less encouraging, but net returns are nonetheless positive and the % win rate actually increases to 55%.

Fig 2. Trade Rule Performance

Results1

Source: Yahoo Finance.

I noted earlier that the first part of our trading rule for MMM involved comparing the opening price to the forecast of yesterday’s high, produced by the vector autoregression model, while the second part of the trading rule references only the midrange and closing prices. How much added value does the VAR model provide? We can test this by eliminating the first part of the rule and considering all days on which the stock closed below the midrange. The results turn out to be as shown in Fig. 3.

Fig. 3 Performance of Simplified Trading Rule 

Results2

Source: Yahoo Finance.

As expected, the in-sample results from our shortened trading rule are certainly inferior to the original rule, in which the VAR model forecasts played a role. But the out-of-sample performance of the simplified rule is actually improved – not only is the net return higher than before, so too is the % win rate, by a couple of percentage points.

A similar pattern emerges for many other stocks: in almost every case, our test algorithm finds that the best trading rule buys the close, based on a comparison of the closing price to the mid-range price. In some cases, the in-sample test results are improved by adding further conditions, as we saw in the case of MMM. But, as with MMM, we often find that the autoregression model forecasts fail to improve trading rule results in the out-of-sample period, and indeed often make them worse.

Conclusion

In general, we find evidence that a simple trading rule based on a comparison of the closing price to the mid-range price appears to work for many stocks, across long time spans.

In a sense, this simple trading rule is already well known: it is just a variant of the “buy the dips” idea, where, in this case, we define a dip as being when the stock closes below the mid-range of the day, rather than, say, below a moving average level. The economic basis for this finding is also well known: stocks have positive drift. But it is interesting to find yet another confirmation of this well-known idea. And it leaves open the possibility that the trading concept could be further improved by introducing additional rules, trading indicators, and model forecasts to the mix.


More on Strategy Robustness

Commentators have made the point that a high % win rate is not enough.

Yes, you obviously want to pay attention to other performance metrics also, such as profit factor. In fact, there is no reason why you shouldn’t consider an objective function that explicitly combines various desirable performance measures, for example:

net profit * % win rate * profit factor
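As a sketch of how that composite objective might be computed from a list of trade P&Ls (the function and its inputs are illustrative, not taken from the original workbook):

import numpy as np

def composite_objective(trade_pnl) -> float:
    # Composite objective: net profit * win rate * profit factor
    trade_pnl = np.asarray(trade_pnl, dtype=float)
    net_profit = trade_pnl.sum()
    win_rate = (trade_pnl > 0).mean()
    gross_win = trade_pnl[trade_pnl > 0].sum()
    gross_loss = -trade_pnl[trade_pnl < 0].sum()
    profit_factor = gross_win / gross_loss if gross_loss > 0 else np.inf
    return net_profit * win_rate * profit_factor

# e.g. composite_objective([120, -45, 80, -30, 60]) can be used to rank candidate rules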

Another approach is to build the model using a data set spanning a different period. I did this with WFC, using data from 1990 rather than 1970. Not only was the performance from 1990-2014 better, but so too was the performance during the OOS period 1970-1989. The profit factor was 2.49 and the win rate 70% across the 44-year period from 1970; for the period from 1990, these metrics increase to 3.04 and 73%, respectively.

So in this case, it appears, a more robust strategy resulted from using less data, rather than more. At first this seems counterintuitive, but it’s quite possible for a strategy to be over-conditioned on behavior that is no longer relevant to the market today. Eliminating such conditioning can sometimes enable strategies to emerge that have greater longevity.

WFC from 1970-2014 (1990 data)

Performance


Optimizing Strategy Robustness

Below is the equity curve for an equity strategy I developed recently, implemented in WFC. The results appear outstanding: no losing years in over 20 years, a profit factor of 2.76 and an average win rate of 75%. Out-of-sample results (double-blind) for 2013 and 2014: net returns of 27% and 16% YTD.

WFC from 1993-2014

 

So far so good. However, if we take a step back through the earlier out-of-sample period, from 1970, the picture is rather less rosy:

 

WFC from 1970-2014

 

Now, at this point, some of you will be saying:  nothing to see here – it’s obviously just curve fitting.  To which I would respond that I have seen successful strategies, including several hedge fund products, with far shorter and less impressive back-tests than the initial 20-year history I showed above.

That said, would you be willing to take the risk of trading a strategy such as this one?  I would not: at the back of my mind would always be the concern that the market might easily revert to the conditions that applied during the 1970s and 1980s. I expect many investors would share that concern.

But to the point of this post:  most strategies are designed around the criterion of maximizing net profit.  Occasionally you might come across someone who has considered risk, perhaps in the form of drawdown, or Sharpe ratio.  But, in general, it’s all about optimizing performance.

Suppose that, instead of maximizing performance, your objective was to maximize the robustness of the strategy.  What criteria would you use?

In my own research, I have used a great many different objective functions, often multi-dimensional. Correlation to the perfect equity curve, net profit / max drawdown and the Sortino ratio are just a few examples. But if I had to guess, I would say that the criterion that tends to produce the most robust strategies and the most reliable out-of-sample performance is maximization of the win rate, subject to a minimum number of trades.
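A hedged sketch of how that criterion might be encoded as an objective function for an optimizer (the trade-count floor and the penalty scheme are my own illustrative choices):

import numpy as np

def robustness_objective(trade_pnl, min_trades: int = 100) -> float:
    # Maximize win rate, subject to a minimum number of trades
    trade_pnl = np.asarray(trade_pnl, dtype=float)
    if len(trade_pnl) == 0:
        return -np.inf
    win_rate = (trade_pnl > 0).mean()
    if len(trade_pnl) < min_trades:
        return win_rate - 1.0   # push constraint violators below all feasible candidates
    return win_rate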

I am not aware of a great deal of theory on this topic. I would be interested to learn of other readers’ experience.

 


Enhancing Mutual Fund Returns With Market Timing

Summary

  • In this article, I will apply market timing techniques to several popular mutual funds.
  • The market timing approach produces annual rates of return that are 3% to 7% higher, with lower risk, than an equivalent buy and hold mutual fund investment.
  • Investors could in some cases have earned more than double the return achieved by holding a mutual fund investment, over a 10-year period.
  • Hedging strategies that use market timing signals are able to sidestep market corrections, volatile conditions and the ensuing equity drawdowns.
  • Hedged portfolios typically employ around 12% less capital than the equivalent buy and hold strategy.

Read the full article here.

Posted in Forecasting, Market Timing, Time Series Modeling, Trading, Volatility Modeling

How to Bulletproof Your Portfolio

Summary

  • How to stay in the market and navigate the rocky terrain ahead, without risking hard won gains.
  • A hedging program to get you out of trouble at the right time and step back in when skies are clear.
  • Even a modest ability to time the market can produce enormous dividends over the long haul.
  • Investors can benefit by using quantitative market timing techniques to strategically adjust their market exposure.
  • Market timing can be a useful tool to avoid major corrections, increasing investment returns, while reducing volatility and drawdowns.

Read the full article here.

Posted in ETFs, Modeling, S&P500 Index, Volatility Modeling

How Not to Develop Trading Strategies – A Cautionary Tale

In his post on Multi-Market Techniques for Robust Trading Strategies (http://www.adaptrade.com/Newsletter/NL-MultiMarket.htm) Michael Bryant of Adaptrade discusses some interesting approaches to improving model robustness. One is to use data from several correlated assets to build the model, on the basis that if the algorithm works for several assets with differing price levels, that would tend to corroborate the system’s robustness. The second approach he advocates is to use data from the same asset series at different bar lengths. The example he uses is @ES.D at 5-, 7- and 9-minute bars. The argument in favor of this approach is the same as for the first, albeit in this case the underlying asset is the same.

I like Michael’s idea in principle, but I wanted to give you a sense of what can all too easily go wrong with GP modeling, even using techniques such as multi-time frame fitting and Monte Carlo simulation to improve robustness testing.

In the chart below I have extended the analysis back in time, beyond the 2011-2012 period that Michael used to build his original model. As you can see, most of the returns are generated in-sample, in the 2011-2012 period. As we look back over the period from 2007-2010, the results are distinctly unimpressive – the strategy basically trades sideways for four years.

Adaptrade ES Strategy in Multiple Time Frames

 

How to Do It Right

In my view, there is only one, safe way to use GP to develop strategies. Firstly, you need to use a very long span of data – as much as possible, to fit your model. Only in this way can you ensure that the model has encountered enough variation in market conditions to stand a reasonable chance of being able to adapt to changing market conditions in future.

Secondly, you need to use two OOS periods. The first OOS span of data, drawn from the start of the data series, is used in the normal way, to visually inspect the performance of the model. But the second span of OOS data, from more recent history, is NOT examined before the model is finalized. This is really important. Products like Adaptrade make it too easy for the system designer to “cheat”, by looking at the recent performance of his trading system “out of sample” and selecting models that do well in that period. But the very process of examining OOS performance introduces bias into the system. It would be like adding a line of code saying something like:

IF (model performance in OOS period > x) do the following….

I am quite sure if I posted a strategy with a line of code like that in it, it would immediately be shot down as being blatantly biased, and quite rightly so. But, if I look at the recent “OOS” performance and use it to select the model, I am effectively doing exactly the same thing.

That is why it is so important to have a second span of OOS data that is not only not used to build the model, but is also not used to assess performance until after the final model selection is made. For that reason, the second OOS period is referred to as a “double-blind” test.

That’s the procedure I followed to build my futures daytrading strategy: I used as much data as possible, dating from 2002. The first 20% of each data set was used for normal OOS testing. But the second set of data, from Jan 2012 onwards, was my double-blind data set. Only when I saw that the system maintained performance in BOTH OOS periods was I reasonably confident of the system’s robustness.
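A minimal sketch of that partitioning scheme, assuming a time-indexed dataset; the 20% fraction and the Jan 2012 cut-off follow the text, the rest is illustrative:

import pandas as pd

def double_blind_split(data: pd.DataFrame, oos1_frac: float = 0.20,
                       blind_start: str = "2012-01-01"):
    # Returns OOS-1 (earliest 20%, inspected in the normal way), the in-sample
    # build period, and OOS-2 (double-blind: not examined until the final
    # model has been selected)
    cutoff = pd.Timestamp(blind_start)
    blind = data[data.index >= cutoff]
    rest = data[data.index < cutoff]
    n_oos1 = int(len(rest) * oos1_frac)
    return rest.iloc[:n_oos1], rest.iloc[n_oos1:], blind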

DoubleBlind

This further explains why it is so challenging to develop higher frequency strategies using GP. Running even a very fast GP modeling system on a large span of high frequency data can take inordinate amounts of time.

The longest span of 5-min bar data that a GP system can handle would typically be around 5-7 years. This is probably not quite enough to build a truly robust system, although if you pick your time span carefully it might be (I generally like to use the 2006-2011 period, which has lots of market variation).

For 15 minute bar data, a well-designed GP system can usually handle all the available data you can throw at it – from 1999 in the case of the Emini, for instance.

Why I don’t Like Fitting Models over Short Time Spans

The risks of fitting models to data in short time spans are intuitively obvious. If you happen to pick a data set in which the market is in a strong uptrend, then your model is going to focus on that kind of market behavior. Subsequently, when the trend changes, the strategy will typically break down.
Monte Carlo simulation isn’t going to change much in this situation: sure, it will help a bit, perhaps, but since the resampled data is all drawn from the same original data set, in most cases the simulated paths will also show a strong uptrend – all that will be shown is that there is some doubt about the strength of the trend. But a completely different scenario, in which, say, the market drops by 10%, is unlikely to appear.
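To make the point concrete, here is a small illustration of the kind of resampling involved: a block bootstrap of returns drawn from a single trending sample. Because every resampled path is built from the same pool of (mostly positive) returns, the simulated paths inherit the original drift; the series, block length and path count below are arbitrary.

import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, 750)   # a trending "in-sample" return series

def block_bootstrap(rets, block: int = 20, n_paths: int = 1000):
    # Resample contiguous blocks of returns, with replacement
    n = len(rets)
    paths = np.empty((n_paths, n))
    for i in range(n_paths):
        starts = rng.integers(0, n - block, size=n // block + 1)
        paths[i] = np.concatenate([rets[s:s + block] for s in starts])[:n]
    return paths

paths = block_bootstrap(returns)
print(returns.mean(), paths.mean())   # the average resampled drift matches the in-sample drift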

One possible answer to that problem, recommended by some system developers, is simply to rebuild the model when a breakdown is detected. While it’s true that a product like MSA can make detection easier, rebuilding the model is another question altogether. There is no guarantee that the kind of model that has worked hitherto can be re-tooled to work once again. In fact, there may be no viable trading system that can handle the new market dynamics.

Here is a case in point. We have a system that works well on 10 min bars in TF.D up until around May 2012, when MSA indicates a breakdown in strategy performance.

TF.F Monte Carlo

So now we try to fit a new model, along the pattern of the original model, taking into account some of the new data.  But it turns out to be just a Band-Aid – after a few more data points the strategy breaks down again, irretrievably.

TF EC 1

This is typical of what often happens when you use GP to build a model using a short span of data. That’s why I prefer to use a long time span, even at lower frequency. The chances of being able to build a robust system that will adapt well to changing market conditions are much higher.

A Robust Emini Trading System

Here, for example, is a GP system built on daily data in @ES.D from 1999 to 2011 (i.e. 2012 to 2014 is OOS).

ES.D EC

Posted in Algorithmic Trading, Futures, Machine Learning, S&P500 Index, Trading

A Primer on Genetic Programming

Posted by androidMarvin:

Genetic programming is an approach to letting the computer generate its own program code, rather than have a person write the program. It doesn’t specifically “find patterns” or rules within data structures. It starts with a number of randomly-constructed (as long as they are mathematically valid) sample programs, evaluates how close each one is to achieving what the desired result program should achieve, then steadily modifies the best matches to the desired target program in order to improve their match to the desired target; the original random attempts “evolve” towards a better match by natural selection, the best ones being selected to act as the basis for the next generation of attempts.

A tree representing a candidate formula could be represented as follows:

It basically shows the mathematical operations that will be used in the formula, the order in which they are applied, and what values they act on. When the EL Verifier is analysing a statement like

value1 = sin( X ) / a + b * cos( X )

it has to work out what order the parts of the statement should be evaluated in, which a person sees immediately; effectively, the Verifier constructs the tree diagram above, so that it knows that it has to generate code to make the computer:

  1. take the value of variable X and pass it through a call to the sin() function
  2. take that result, and divide it by the value of a
  3. take the value of variable X and pass it through a call to the cos() function
  4. take that result and multiply it by the value of variable b
  5. take the result of step 2 and the result of step 4 and add them
  6. that result is the value of Y for the input value of X

The Tradestation optimiser would take a single such tree, defining a fixed formula, and attempt to fit it to the data by varying the values of variables a and b. A Genetic Programming optimiser could do the same, but it also has the freedom to change the mathematical operators and the merge points in the tree, and to change the shape of the tree to make the formula more or less complex as well; it can adjust both the parameters of the equation and the equation itself in order to evolve it to a better result.

For a mathematical curve fit, a GP optimiser would evaluate each individual tree by applying all the measured X values to the tree’s inputs, compare each output to the measured Y values, and sum a measure of the error over all the data; that sum would be the measure of how well the current tree matches the measured data. The “genetic” part of the name derives from the way it tries to evolve the population of trees it is using to find the best.
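A hedged sketch of that evaluation step in Python, with a candidate tree represented as nested tuples (purely illustrative; this is not Adaptrade’s internal representation):

import math

# Candidate tree for  value1 = sin(X)/a + b*cos(X),  with a = 2.0 and b = 0.5
tree = ("+",
        ("/", ("sin", ("var", "X")), ("const", 2.0)),
        ("*", ("const", 0.5), ("cos", ("var", "X"))))

def evaluate(node, env):
    kind = node[0]
    if kind == "var":
        return env[node[1]]
    if kind == "const":
        return node[1]
    if kind == "sin":
        return math.sin(evaluate(node[1], env))
    if kind == "cos":
        return math.cos(evaluate(node[1], env))
    left, right = evaluate(node[1], env), evaluate(node[2], env)
    return {"+": left + right, "-": left - right,
            "*": left * right, "/": left / right if right != 0 else 1e9}[kind]

def fitness(node, xs, ys):
    # Sum of squared errors over the measured data: lower is better
    return sum((evaluate(node, {"X": x}) - y) ** 2 for x, y in zip(xs, ys))

xs = [i / 10 for i in range(100)]
ys = [math.sin(x) / 2.0 + 0.5 * math.cos(x) for x in xs]
print(fitness(tree, xs, ys))   # ~0 for a perfect match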

The main evolution technique is “crossover”. When two parent animals create offspring, each offspring will get part of its DNA from one parent and part from the other; improvement of the species happens if some of the offspring get DNA component combinations that suit the environment better than their parents are suited. The GP optimiser emulates this process by selecting two parent trees, and swapping a section of one of those trees with a section of tree from the other parent, to create two offspring. Eg given parent trees

representing equations

value1 = sin( X )/a + b * cos( X )

and

value1 = cos( X ) / a + b * sin( X )

the offspring might be

representing equations

value1 = sin( X )/cos( X ) + b * cos( X )

and

value1 = a / a + b * sin( X )

Those specific changes are unlikely to both be an improvement, but that’s the way with random processes; the changes made aren’t guided by any sort of principle, it’s just a case of “change something, anything, and see if it’s any better”.

A secondary change process that can be used is “mutation”, in which something about a single tree is simply changed, not swapped. This is intended to introduce diversity, so that if none of the current trees is a particularly good performer, there’s a chance that something radically better might be brought into the pool.
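Continuing the tuple-tree sketch above, crossover and mutation might look something like this (again illustrative only; real GP engines add refinements such as depth limits and typed nodes):

import random

def subtree_paths(node, path=()):
    # All paths to nodes in the tree; a path is a tuple of child indices
    paths = [path]
    if node[0] in ("+", "-", "*", "/"):
        paths += subtree_paths(node[1], path + (1,)) + subtree_paths(node[2], path + (2,))
    elif node[0] in ("sin", "cos"):
        paths += subtree_paths(node[1], path + (1,))
    return paths

def get(node, path):
    for i in path:
        node = node[i]
    return node

def replace(node, path, new):
    if not path:
        return new
    parts = list(node)
    parts[path[0]] = replace(node[path[0]], path[1:], new)
    return tuple(parts)

def crossover(parent1, parent2):
    # Swap a randomly chosen subtree of one parent with one from the other
    p1, p2 = random.choice(subtree_paths(parent1)), random.choice(subtree_paths(parent2))
    return replace(parent1, p1, get(parent2, p2)), replace(parent2, p2, get(parent1, p1))

def mutate(parent):
    # Replace a random subtree with a random constant leaf
    return replace(parent, random.choice(subtree_paths(parent)), ("const", random.uniform(-2, 2)))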

The push trying to steer the evolution towards a better result comes from deciding which parents are allowed to create offspring. The original idea was that all the current trees were ranked in sorted order of their fitness, the worst ones were removed from the population to be replaced by new offspring, and the trees that performed best were selected to be parents – so the weak die and the strongest breed, hoping their offspring will be at least as good as the parents.

One reservation I have about a product like Adaptrade Builder is that it doesn’t follow this original pattern. It chooses “a few” (two by default) trees to enter a “tournament”, and the best tree in the tournament is selected as a parent. This seems to me to reduce the bias towards breeding strength with strength, but I’m no expert.

Rather than being simply mathematical, Builder seems to generate tests for entry and exit orders. It takes arithmetic and comparison operators for granted, and allows trees to be built from technical indicators rather than mathematical functions like sin() and cos(). So where an EL programmer might write

if average( Close, fast ) crosses above Average( ( High + Low )/2, slow ) and CCI( length ) > overbought then buy

Builder would have a tree

from which an offspring might be generated as :

to use a Buy test

if average( ( High + Low )/2, fast ) crosses above Average( Close, slow ) and CCI( length ) > overbought then buy

The structure of the test to go long has changed, but in a random way, rather than the guided way a human might use when trying to develop a strategy.

Posted in Machine Learning

Developing High Performing Trading Strategies with Genetic Programming

One of the frustrating aspects of research and development of trading systems is that there is never enough time to investigate all of the interesting trading ideas one would like to explore. In the early 1970s, when a moving average crossover system was considered state of the art, it was relatively easy to develop profitable strategies using simple technical indicators. Indeed, research has shown that the profitability of simple trading rules persisted in foreign exchange and other markets for a period of decades. But, coincident with the advent of the PC in the late 1980s, such simple strategies began to fail. The widespread availability of data, analytical tools and computing power has, arguably, contributed to the increased efficiency of financial markets and complicated the search for profitable trading ideas. We are now at a stage where it can take a team of 5-6 researchers/developers, using advanced research techniques and computing technologies, as long as 12-18 months, and hundreds of thousands of dollars, to develop a prototype strategy. And there is no guarantee that the end result will produce the required investment returns.

The lengthening lead times and rising cost and risk of strategy research has obliged trading firms to explore possibilities for accelerating the R&D process. One such approach is Genetic Programming.

Early Experiences with Genetic Programming
I first came across the GP approach to investment strategy in the late 1990s, when I began to work with Haftan Eckholdt, then head of neuroscience at Yeshiva University in New York. Haftan had proposed creating trading strategies by applying the kind of techniques widely used to analyze voluminous and highly complex data sets in genetic research. I was extremely skeptical of the idea and spent the next 18 months kicking the tires very hard indeed, on behalf of an interested investor. Although Haftan’s results seemed promising, I was fairly sure that they were the product of random chance and set about devising tests that would demonstrate that.

One of the challenges I devised was to create data sets in which real and synthetic stock series were mixed together and given to the system to evaluate. To the human eye (or analyst’s spreadsheet), the synthetic series were indistinguishable from the real thing. But, in fact, I had “planted” some patterns within the processes of the synthetic stocks that made them perform differently from their real-life counterparts. Some of the patterns I created were quite simple, such as introducing a drift component. But other patterns were more nuanced, for example, using a fractal Brownian motion generator to induce long memory in the stock volatility process.

It was when I saw the system detect and exploit the patterns buried deep within the synthetic series to create sensible, profitable strategies that I began to pay attention. A short time thereafter Haftan and I joined forces to create what became the Proteom Fund.

That Proteom succeeded at all was a testament not only to Haftan’s ingenuity as a researcher, but also to his abilities as a programmer and technician. Processing such large volumes of data was a tremendous challenge at that time and required a cluster of 50 CPUs networked together and maintained with a fair amount of patch cable and glue. We housed the cluster in a rat-infested warehouse in Brooklyn that had a very pleasant view of Manhattan, but no a/c. The heat thrown off from the cluster was immense, and when combined with very loud rap music blasted through the walls by the neighboring music studios, the effect was debilitating. As you might imagine, meetings with investors were a highly unpredictable experience. Fortunately, Haftan’s intellect was matched by his immense reserves of fortitude and patience and we were able to attract investments from several leading institutional investors.

The Genetic Programming Approach to Building Trading Models

Genetic programming is an evolutionary-based algorithmic methodology which can be used in a very general way to identify patterns or rules within data structures. The GP system is given a set of instructions (typically simple operators like addition and subtraction), some data observations and a fitness function to assess how well the system is able to combine the functions and data to achieve a specified goal.

In the trading strategy context the data observations might include not only price data, but also price volatility, moving averages and a variety of other technical indicators. The fitness function could be something as simple as net profit, but might represent alternative measures of profitability or risk, with factors such as PL per trade, win rate, or maximum drawdown. In order to reduce the danger of over-fitting, it is customary to limit the types of functions that the system can use to simple operators (+,-,/,*), exponents, and trig functions. The length of the program might also be constrained in terms of the maximum permitted lines of code.

We can represent what is going on using a tree graph:

Tree

In this example the GP system is combining several simple operators with the Sin and Cos trig functions to create a signal comprising an expression in two variables, X and Y, which may be, for example, stock prices, moving averages, or technical indicators of momentum or mean reversion.
The “evolutionary” aspect of the GP process derives from the idea that an existing signal or model can be mutated by replacing nodes in a branch of a tree, or even an entire branch by another. System performance is re-evaluated using the fitness function and the most profitable mutations are retained for further generations.
The resulting models are often highly non-linear and can be very general in form.

A GP Daytrading Strategy
The last fifteen years have seen tremendous advances in the field of genetic programming, in terms of theory as well as practice. Using a single hyper-threaded CPU, it is now possible for a GP system to generate signals at a far faster rate than was possible on Proteom’s cluster of 50 networked CPUs. A researcher can develop and evaluate tens of millions of possible trading algorithms within the space of a few hours. Implementing a thoroughly researched and tested strategy is now feasible in a matter of weeks. There can be no doubt of GP’s potential to produce dramatic reductions in R&D lead times and costs. But does it work?

To address that question I have summarized below the performance results from a GP-developed daytrading system that trades nine different futures markets: Crude Oil (CL), Euro (EC), E-Mini (ES), Gold (GC), Heating Oil (HO), Coffee (KC), Natural Gas (NG), Ten Year Notes (TY) and Bonds (US). The system trades a single contract in each market individually, going long and short several times a day. Only the most liquid period in each market is traded, which typically coincides with the open-outcry session, with any open positions being exited at the end of the session using market orders. With the exception of the NG and HO markets, which are entered using stop orders, all of the markets are entered and exited using standard limit orders, at prices determined by the system.

The system was constructed using 15-minute bar data from Jan 2006 to Dec 2011 and tested out-of-sample on data from Jan 2012 to May 2014. The in-sample span of data was chosen to cover periods of extreme market stress, as well as less volatile market conditions. A lengthy out-of-sample period, almost half the span of the in-sample period, was chosen in order to evaluate the robustness of the system.
Out-of-sample testing was “double-blind”, meaning that the data was not used in the construction of the models, nor was out-of-sample performance evaluated by the system before any model was selected.

Performance results are net of trading commissions of $6 per round turn and, in the case of HO and NG, additional slippage of 2 ticks per round turn.

Ann Returns Risk

Value 1000 Sharpe

Performance


The most striking feature of the strategy is the high rate of risk-adjusted returns, as measured by the Sharpe ratio, which exceeds 5 in both in-sample and out-of-sample periods. This consistency is a reflection of the fact that, while net returns fall from an annual average of over 29% in sample to around 20% in the period from 2012, so, too, does the strategy volatility decline from 5.35% to 3.86% in the respective periods. The reduction in risk in the out-of-sample period is also reflected in lower Value-at-Risk and Drawdown levels.

A decline in the average PL per trade from $25 to $16 is offset to some degree by a slight increase in the rate of trading, from 42 to 44 trades per day, on average, while the daily win rate and the percentage of profitable trades remain consistent at around 65% and 56%, respectively.

Overall, the system appears to be not only highly profitable, but also extremely robust. This is impressive, given that the models were not updated with data after 2011, remaining static over a period almost half as long as the span of data used in their construction. It is reasonable to expect that out-of-sample performance might be improved by allowing the models to be updated with more recent data.

Benefits and Risks of the GP Approach to Trading System Development
The potential benefits of the GP approach to trading system development include speed of development, flexibility of design, generality of application across markets and rapid testing and deployment.

What about the downside? The most obvious concern is the risk of over-fitting. By allowing the system to develop and test millions of models, there is a distinct risk that the resulting systems may be too closely conditioned on the in-sample data, and will fail to maintain performance when faced with new market conditions. That is why, of course, we retain a substantial span of out-of-sample data, in order to evaluate the robustness of the trading system. Even so, given the enormous number of models evaluated, there remains a significant risk of over-fitting.

Another drawback is that, due to the nature of the modelling process, it can be very difficult to understand, or explain to potential investors, the “market hypothesis” underpinning any specific model. “We tested it and it works” is not a particularly enlightening explanation for investors, who are accustomed to being presented with a more articulate theoretical framework, or investment thesis. Not being able to explain precisely how a system makes money is troubling enough in good times; but in bad times, during an extended drawdown, investors are likely to become agitated very quickly indeed if no explanation is forthcoming. Unfortunately, evaluating the question of whether a period of poor performance is temporary, or the result of a breakdown in the model, can be a complicated process.

Finally, in comparison with other modeling techniques, GP models suffer from an inability to easily update the model parameters as new data becomes available. Typically, a GP model will have to be rebuilt from scratch, often producing very different results each time.

Conclusion
Despite the many limitations of the GP approach, the advantages in terms of the speed and cost of researching and developing original trading signals and strategies have become increasingly compelling.

Given the several well-documented successes of the GP approach in fields as diverse as genetics and physics, I think an appropriate position to take with respect to applications within financial market research would be one of cautious optimism.

Posted in Algorithmic Trading, High Frequency Trading, Machine Learning, Market Efficiency, Nonlinear Classification

A Scalping Strategy in E-Mini Futures

This is a follow-up post to my post on the Mathematics of Scalping. To illustrate the scalping methodology, I coded up a simple strategy based on the techniques described in that post.

The strategy trades a single @ES contract on 1-minute bars. The attached ELD file contains the Easylanguage code for ES scalping strategy, which can be run in Tradestation or Multicharts.

This strategy makes no attempt to forecast market direction and doesn’t consider market trends at all. It simply looks at the current levels of volatility and takes a long volatility position or a short volatility position depending on whether volatility is above or below some threshold parameters.

By long volatility I mean a position where we buy or sell the market and set a loose Profit Target and a tight Stop Loss. By short volatility I mean a position where we buy or sell the market and set a tight Profit Target and loose Stop Loss. This is exactly the methodology I described earlier in the post. The parameters I ended up using are as follows:

Long Volatility: Profit Target = 8 ticks, Stop Loss = 2 ticks
Short Volatility: Profit Target = 2 ticks, Stop Loss = 30 ticks

I have made no attempt to optimize these parameters settings, which can easily be done in Tradestation or Multicharts.

What do we mean by volatility being above our threshold level? I use a very simple metric: I take the TrueRange for the current bar and add 50% of the increase or decrease in TrueRange over the last two bars. That’s my crude volatility “forecast”.
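In code, that crude forecast might be computed along the following lines (a sketch against a hypothetical OHLC DataFrame; only the 50% weighting and the two-bar lookback come from the description above):

import pandas as pd

def true_range(px: pd.DataFrame) -> pd.Series:
    prev_close = px["Close"].shift(1)
    return pd.concat([px["High"] - px["Low"],
                      (px["High"] - prev_close).abs(),
                      (px["Low"] - prev_close).abs()], axis=1).max(axis=1)

def crude_vol_forecast(px: pd.DataFrame) -> pd.Series:
    # Current-bar TrueRange plus 50% of its change over the last two bars
    tr = true_range(px)
    return tr + 0.5 * (tr - tr.shift(2))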

The final point to explain is this: let’s suppose our volatility forecast is above our threshold level, so we know we want to be long volatility. OK, but do we buy or sell the ES? One approach is to try to gauge the direction of the market by estimating the trend. Not a bad idea, by any means, although I have argued that volatility drowns out any trend signal at short time frames (like 1 minute, for example). So I prefer an approach that makes no assumptions about market direction.

In this approach what we do is divide volatility into upsideVolatility and downsideVolatility. upsideVolatility uses the TrueRange for bars where Close > Close[1]. downsideVolatility is calculated only for bars where Close < Close[1]. This kind of methodology, where you calculate volatility based on the sign of the returns, is well known and is used in performance measures like the Sortino ratio. This is like the Sharpe ratio, except that you calculate the standard deviation of returns using only days in which the market was down. When it’s calculated this way, standard deviation is known as the (square root of the) semi-variance.

Anyway, back to our strategy. So we calculate the upside and downside volatilities and test them against our upper and lower volatility thresholds.

The decision tree looks like this:

LONG VOLATILITY
If upsideVolatilityForecast > upperVolThreshold, buy at the market with wide PT and tight ST (long market, long volatility)
If downsideVolatilityForecast > upperVolThreshold, sell at the market with wide PT and tight ST (short market, long volatility)

SHORT VOLATILITY
If upsideVolatilityForecast < lowerVolThreshold, sell at the Ask on a limit with tight PT and wide ST (short market, short volatility)
If downsideVolatilityForecast < lowerVolThreshold, buy at the Bid on a limit with tight PT and wide ST (long market, short volatility)
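Here is a minimal Python sketch of that decision tree (the function name and order-type labels are illustrative, not the attached EasyLanguage code; the profit target and stop loss tick values follow the parameters given above):

def scalping_signal(upside_vol_forecast, downside_vol_forecast,
                    upper_vol_threshold, lower_vol_threshold):
    # Returns (order, profit_target_ticks, stop_loss_ticks), or None for no trade
    if upside_vol_forecast > upper_vol_threshold:
        return ("buy at market", 8, 2)          # long market, long volatility
    if downside_vol_forecast > upper_vol_threshold:
        return ("sell at market", 8, 2)         # short market, long volatility
    if upside_vol_forecast < lower_vol_threshold:
        return ("sell at ask on limit", 2, 30)  # short market, short volatility
    if downside_vol_forecast < lower_vol_threshold:
        return ("buy at bid on limit", 2, 30)   # long market, short volatility
    return None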

NOTE THE FOLLOWING CAVEATS. DO NOT TRY TO TRADE THIS STRATEGY LIVE (but use it as a basis for a tradable strategy)

1. The strategy makes the usual TS assumption about fill rates, which is unrealistic, especially at short intervals like 1-minute.
2. The strategy allows fees and commissions of $3 per contract, or $6 per round turn. Your trading costs may be higher than this.
3. Tradestation is unable to perform analysis at the tick level for a period as long at the one used here (2000 to 2014). A tick by tick analysis would likely show very different results (better or worse).
4. The strategy is extremely lop-sided: the great majority of the profits are made on the long side and the Win Rates and Profit Factors are very different for long trades vs short trades. I suspect this would change with a tick by tick analysis. But it also may be necessary to add more parameters so that long trades are treated differently from short trades.
5. No attempt has been made to optimize the parameters.
6. This is a daytrading strategy that will exit the market on the close.

So with all that said here are the results.

As you can see, the strategy produces a smooth, upward sloping equity curve, the slope of which increases markedly during the period of high market volatility in 2008.
Net profits after commissions for a single ES contract amount to $243,000 ($3.42 per contract) with a win rate of 76% and Profit Factor of 1.24.

This basic implementation would obviously require improvement in several areas, not least of which would be to address the imbalance in strategy profitability on the short vs long side, where most of the profits are generated.

Scalping Strategy EC

 

Scalping Strategy Perf Report

 

 

Posted in Algorithmic Trading, Trading, Volatility Modeling

Stationarity and Fat Tails

In this post I am going to explore how, starting from the assumption of a stable, Gaussian distribution in a returns process, we evolve to a system that displays all the characteristics of empirical market data, notably time-dependent moments, high levels of kurtosis and fat tails.  As it turns out, the only additional assumption one needs to make is that the market is periodically disturbed by the random arrival of news.

NOTE:  if you are unable to see the Mathematica models below, you can download the free Wolfram CDF player and you may also need this plug-in.

You can also download the complete Mathematica CDF file here.

Stationarity

A stationary process is one that evolves over time, but whose probability distribution does not vary with time. As the word implies, such a process is stable. More formally, the moments of the distribution are independent of time.

Let’s assume we are dealing with such a process that has constant mean μ and constant volatility (standard deviation) σ.

 Φ=NormalDistribution[μ,σ]

Here are some examples of Normal probability distributions, with constant mean μ = 0 and standard deviation σ ranging from 0.75 to 2

 Plot[Evaluate@Table[PDF[Φ,x],{σ,{.75,1,2}}]/.μ→0,{x,-6,6},Filling→Axis]

 

Chart 1

The moments of Φ are given by:

 Through[{Mean, StandardDeviation, Skewness, Kurtosis}[Φ]]

{μ,  σ,  0,   3}

They, too, are time-independent.

We can simulate some observations from such a process, with, say, mean μ = 0 and standard deviation σ = 1:

ListPlot[sampleData=RandomVariate[Φ /.{μ→0, σ→1},10^4]]

 

Chart 2

Histogram[sampleData]

Chart 3

If we assume for the moment that such a process is an adequate description of an asset returns process, we can simulate the evolution of a price process as follows :

ListPlot[prices=Accumulate[sampleData]]

Chart 4

 

An Empirical Distribution

Let’s take a look at a real price series, comprising 1-minute bar data in the June ’14 E-Mini futures contract.

Chart 5

As with our simulated price process, it is clear that the real price process for E-Mini futures is also non-stationary.

What about the returns process?

ListPlot[returnsES]

Chart 6

Notice the banding effect in returns, which results from having a fixed minimum price move of $12.50, rather than a continuous scale.

Histogram[returnsES]

 

Chart 7

Through[{Min,Max,Mean,Median,StandardDeviation,Skewness,Kurtosis}[returnsES]]

{-0.00867214,  0.0112353,  2.75501×10^-6,   0.,   0.000780895,   0.35467,   26.2376}

The empirical returns distribution doesn’t appear to be Gaussian – the distribution is much more peaked than a standard Normal distribution with the same mean and standard deviation. And the higher moments don’t fit the Normal model either – the empirical distribution has positive skew and a kurtosis that is almost 9x greater than that of a Gaussian distribution. The latter signifies what is often referred to as “fat tails”: the distribution has much greater weight in the tails than a standard Normal distribution, indicating a much greater likelihood of an extreme value than a Normal distribution would predict.

Non-Stationarity: Two States

Non-stationarity arises when one or more of the moments of a distribution vary over time. Let’s take a look at how that can arise, and its effects. Suppose we have a Gaussian returns process for which the mean, or drift, or trend, fluctuates over time.

Let’s consider a simple example where the process has drift μ1 and volatility σ1 for most of the time, and then, for some proportion of time k, we get additional drift μ2 and volatility σ2. In other words we have:

 Φ1=NormalDistribution[μ1,σ1]

 Through[{Mean,StandardDeviation,Skewness,Kurtosis}[Φ1]]

{μ1,   σ1,   0,   3}

 Φ2=NormalDistribution[μ2,σ2]

 Through[{Mean,StandardDeviation,Skewness,Kurtosis}[Φ2]]

{μ2,   σ2,   0,   3}

This simple model fits a scenario in which we suppose that the returns process spends most of its time in State 1, in which it is Normally distributed with drift μ1 and volatility σ1, and suffers from the occasional “shock” which propels the system into a second state, State 2, in which its distribution is a combination of its original distribution and a new Gaussian distribution with a different mean and volatility.

Let’s suppose that we sample the combined process y = Φ1 + k Φ2. What distribution would it have? We can represent this as follows:

 y = TransformedDistribution[x1 + k x2, {x1 \[Distributed] Φ1, x2 \[Distributed] Φ2}]


NormalDistribution[μ1 + k μ2, Sqrt[σ1^2 + k^2 σ2^2]]
 

 Through[{Mean,StandardDeviation,Skewness,Kurtosis}[y]]

{μ1 + k μ2,   Sqrt[σ1^2 + k^2 σ2^2],   0,   3}

 Plot[PDF[y,x]/.{μ1→0, μ2→0, σ1→1, σ2→2, k→0.5},{x,-6,6},Filling→Axis]

Chart 8

The result is just another Normal distribution. Depending on the incidence k, y will follow a Gaussian distribution whose mean and variance depend on the mean and variance of the two Normal distributions being mixed. The resulting distribution in State 2 may have higher or lower drift and volatility, but it is still Gaussian, with constant kurtosis of 3.

In other words, the system y will be non-stationary, because the first and second moments change over time, depending on what state it is in. But the form of the distribution is unchanged – it is still Gaussian. There are no fat-tails.

Non-Stationarity: Random States

In the above example the system moved between states in a known, predictable way. The “shocks” to the system were not really shocks, but transitions. But that’s not how financial markets behave: markets move from one state to another in an unpredictable way, with the arrival of news.

We can simulate this situation as follows. Using the former model as a starting point, let’s now relax the assumption that the incidence of the second state, k, is a constant. Instead, let’s assume that k is itself a random variable. In other words, we are going to assume that our system changes state in a random way. How does this alter the distribution?

An appropriate model for k might be a Poisson distribution with mean λ, which is often used as a model for unpredictable, discrete events, ranging from bus arrivals to earthquakes.  PDFs of Poisson distributions with means λ = 5, 10 and 20 are shown in the chart below.  These represent probability distributions for processes that have mean arrivals of 5, 10 or 20 events.

 DiscretePlot[Evaluate@Table[PDF[PoissonDistribution[λ],k],{λ,{5,10,20}}],{k,0,30},PlotRange→All,PlotMarkers→Automatic]

Chart 9

Our new model now looks like this:

 y = TransformedDistribution[x1 + k x2, {x1 \[Distributed] Φ1, x2 \[Distributed] Φ2, k \[Distributed] PoissonDistribution[λ]}]

The first two moments of the distribution are as follows:

Through[{Mean,StandardDeviation}[y]]

Stationarity_60

As before, the mean and standard deviation of the distribution are going to vary, depending on the state of the system and the mean arrival rate of shocks, λ. But what about kurtosis? Is it still constant?

Kurtosis[y]

Eqn1

Emphatically not!  The fourth moment of the distribution is now dependent on the drift in the second state, the volatilities of both states and the mean arrival rate of shocks, λ.

Let’s look at a specific example.  Assume that in State 1 the process has volatility of 7.5%, with zero drift, and that the shock distribution also has zero drift, with volatility of 65%. If the mean incidence rate of shocks is λ = 10%, the distribution kurtosis is close to that seen in the empirical distribution for the E-Mini.

 Kurtosis[y] /.{σ1→0.075, μ2→0, σ2→0.65, λ→0.1}

{35.3551}

More generally:

 ListLinePlot[Flatten[Kurtosis[y]/.Table[{σ1→0.075, μ2→0, σ2→0.65, λ→i/20},{i,1,20}]], PlotLabel→Style["Kurtosis vs Mean Shock Arrival Rate", FontSize→18], AxesLabel→{"Incidence Rate (%)", "Kurtosis"}, Filling→Axis, ImageSize→Large]

 

Chart 10

Thus we can see how, even if the underlying returns distribution is Gaussian in form, the random arrival of news “shocks” to the system can induce non-stationarity in overall drift and volatility. It can also result in fat tails. More specifically, if the arrival of news is stochastic in nature, rather than deterministic, the process may exhibit far higher levels of kurtosis than in its original Gaussian state, in which the fourth moment was a constant level of 3.

Jump Diffusion

Nobel Prize-winning economist Robert Merton extended this basic concept to the realm of stochastic calculus.

In Merton’s jump diffusion model, the stock price follows the random process

dSt / St = μ dt + σ dWt + (J-1) dNt

The first two terms are familiar from the Black–Scholes model: drift rate μ, volatility σ, and random walk Wt (a Wiener process). The last term represents the jumps: J is the jump size as a multiple of the stock price, while Nt is the number of jump events that have occurred up to time t. Nt is assumed to follow a Poisson process

 PDF[PoissonDistribution[λt]]

where λ is the average frequency with which jumps occur.

The jump size J follows a log-normal distribution

 PDF[LogNormalDistribution[m, ν], s]

where m is the average jump size and ν is the volatility of the jump size.

In the jump diffusion model, the stock price St follows the random process dSt/St=μ dt+σ dWt+(J-1) dN(t), which comprises, in order, drift, diffusive, and jump components. The jumps occur according to a Poisson distribution and their size follows a log-normal distribution. The model is characterized by the diffusive volatility σ, the average jump size J (expressed as a fraction of St), the frequency of jumps λ, and the volatility of jump size ν.
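For readers who prefer Python to Mathematica, here is a hedged sketch of the same process: simulate jump-diffusion log-returns and confirm that the jump component fattens the tails. The parameter values are arbitrary and the discretization is the simplest possible one.

import numpy as np

rng = np.random.default_rng(0)

def merton_jump_returns(mu=0.05, sigma=0.2, lam=0.5, m=0.0, nu=0.3,
                        T=10.0, steps=2520):
    # Log-returns from dS/S = mu dt + sigma dW + (J-1) dN,
    # with jump sizes J log-normal: log J ~ N(m, nu^2)
    dt = T / steps
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(steps)
    n_jumps = rng.poisson(lam * dt, steps)                    # Poisson arrival of jumps
    jumps = m * n_jumps + nu * np.sqrt(n_jumps) * rng.standard_normal(steps)
    return diffusion + jumps

r = merton_jump_returns()
kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2
print(kurtosis)   # well above the Gaussian value of 3 whenever jumps are present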

The Volatility Smile

The “implied volatility” corresponding to an option price is the value of the volatility parameter for which the Black-Scholes model gives the same price. A well-known phenomenon in market option prices is the “volatility smile”, in which the implied volatility increases for strike values away from the spot price. The jump diffusion model is a generalization of Black–Scholes in which the stock price has randomly occurring jumps in addition to the random walk behavior. One of the interesting properties of this model is that it displays the volatility smile effect. In this Demonstration, we explore the Black–Scholes implied volatility of option prices (equal for both put and call options) in the jump diffusion model. The implied volatility is modeled as a function of the ratio of option strike price to spot price.
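As a small illustration of that definition (not the Demonstration’s code), a Black-Scholes call price can be inverted numerically to recover the implied volatility; doing this strike-by-strike for prices generated by a jump-diffusion model traces out the smile:

from math import log, sqrt, exp
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r):
    # The sigma for which the Black-Scholes price matches the observed price
    return brentq(lambda sigma: bs_call(S, K, T, r, sigma) - price, 1e-6, 5.0)

print(implied_vol(price=10.45, S=100, K=100, T=1.0, r=0.05))   # ~0.20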

 

Posted in Fat Tails, Mathematica, Regime Shifts, Regime Switching, Uncategorized, Volatility Modeling