A Meta-Strategy in S&P 500 E-Mini Futures

In earlier posts I have described the idea of a meta-strategy as a strategy that trades strategies.  It is an algorithm, or set of rules, used to decide when to trade an underlying strategy.  In some cases a meta-strategy may influence the size in which the underlying strategy is traded, or may even amend the base code.  In other words, a meta-strategy actively “trades” an underlying strategy, or group of strategies, much as a regular strategy may actively trade stocks, going long or short from time to time.  One distinction is that a meta-strategy will rarely, if ever, actually “short” an underlying strategy – at most it will simply turn the strategy off (reduce the position size to zero) for a period.

For a more detailed description, see this post:

Improving Trading System Performance Using a Meta-Strategy

In this post I look at a meta-strategy that I developed for a client’s strategy in S&P 500 E-Mini futures.  What is extraordinary is that the underlying strategy was so badly designed (not by me!) and performs so poorly that no rational systematic trader would likely give it a second look – instead he would toss it onto the large heap of failed ideas that all quantitative researchers accumulate over the course of their careers.  So this is a textbook example that illustrates the power of meta-strategies to improve, or in this case transform, the performance of an underlying strategy.

1. The Strategy

The Target Trader Strategy (“TTS”) is a strategy applied to S&P 500 E-Mini futures that produces a very high win rate, but which occasionally experiences very large losses. The purpose of the analysis is to find methods that will:

1) Decrease the max loss / drawdown
2) Increase the win rate / profitability

For longs the standard setting is entry 40 ticks below the target, stop loss 1000 ticks below the target, and then 2 re-entries: 100 ticks below entry 1 and 100 ticks below entry 2.

For shorts the standard setting is entry 80 ticks above the target, stop loss 1000 ticks above the target, and then 2 re-entries: 100 ticks above entry 1 and 100 ticks above entry 2.

For both directions it is 80 ticks above/below the target for entry 1, a 1000-tick stop, then re-entry 1 100 ticks above/below entry 1, and re-entry 2 100 ticks above/below entry 2.
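As a sketch, the price ladder implied by the standard settings might be computed as follows. This is my reconstruction, not the client's code; it assumes the E-Mini's 0.25-point tick and uses the 80-tick entry offset quoted for both directions (the function names and defaults are illustrative):

```python
# Hypothetical reconstruction of the TTS entry/stop ladder described above.
TICK = 0.25  # E-Mini S&P 500 tick size in index points

def tts_levels(target: float, side: str, entry_offset_ticks: int = 80,
               stop_ticks: int = 1000, reentry_ticks: int = 100):
    """Return the three entry prices and the stop for one direction."""
    # Longs enter below the target; shorts enter above it.
    sign = -1 if side == "long" else 1
    entry1 = target + sign * entry_offset_ticks * TICK
    entry2 = entry1 + sign * reentry_ticks * TICK   # re-entry 1, below/above entry 1
    entry3 = entry2 + sign * reentry_ticks * TICK   # re-entry 2, below/above entry 2
    stop = target + sign * stop_ticks * TICK        # 1000-tick stop from the target
    return {"entries": [entry1, entry2, entry3], "stop": stop}

# Example: a long setup around a hypothetical target of 4000.00
print(tts_levels(4000.0, "long"))
# -> entries at 3980.0, 3955.0, 3930.0 and a stop at 3750.0
```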

 

2. Strategy Performance

2.1 Overall Performance

The overall performance of the strategy over the period from 2018 to 2020 is summarized in the chart of the strategy equity curve and table of performance statistics below.
These confirm that, while the win rate is very high (over 84%), the strategy experiences many significant drawdowns, including a drawdown of -$61,412.50 (-43.58%). The total return is of the order of 5% per year, the strategy profit factor is fractionally above 1 and the Sharpe Ratio is negligibly small. Many traders would consider the performance to be highly unattractive.

 

 

 

2.2 Long Trades

We break the strategy performance down into long and short trades, and consider them separately. On the long side, the strategy has been profitable, producing a gain of over 36% during the period 2018-2020. It also suffered a catastrophic drawdown of over -$97,000 during that period:

 


 

 

2.3 Short Trades

On the short side, the story is even worse, producing an overall loss of nearly -$59,000:

 

 

 

3. Improving Strategy Performance with a Meta-Strategy

We considered two possible methods to improve strategy performance. The first method attempts to apply technical indicators and other data series to improve trading performance. Here we evaluated price series such as the VIX index and a wide selection of technical indicators, including RSI, ADX, Moving Averages, MACD, ATR and others. However, any improvement in strategy performance proved to be temporary in nature and highly variable, in many cases amplifying the problems with the strategy performance rather than improving them.

The second approach proved much more effective, however. In this method we create a meta-strategy which effectively “trades the strategy”, turning it on and off depending on its recent performance. The meta-strategy consists of a set of rules that determines whether or not to continue trading the strategy after a series of wins or losses. In some cases the meta-strategy may increase the trade size for a sequence of trades, at times when it considers the conditions for the underlying strategy to be favorable.

The results of applying the meta-strategy are described in the following sections.

3.1 Long & Short Strategies with Meta-Strategy Overlay

The performance of the long/short strategies combined with the meta-strategy overlay is set out in the chart and table below.
The overall improvements can be summarized as follows:

  • Net profit increases from $15,387 to $176,287
  • Account return rises from 15% to 176%
  • Percentage win rate rises from 84% to 95%
  • Profit factor increases from 1.0 to 6.7
  • Average trade rises from $51 to $2,631
  • Max $ Drawdown falls from -$61,412 to -$30,750
  • Return/Max Drawdown ratio rises from 0.35 to 5.85
  • The modified Sharpe ratio increases from 0.07 to 0.5

Taken together, these are dramatic improvements to every important aspect of strategy performance.

There are two key rules in the meta-strategy, applicable to winning and losing trades:

Rule for winning trades:
After 3 wins in a row, skip the next trade.

Rule for losing trades:
After 3 losses in a row, add 1 contract until the first win. Thereafter, subtract 1 contract after each win, until the next loss occurs or the size is back to 1 contract.
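The two rules can be sketched in a few lines. This is a hypothetical implementation: the post does not show the rule engine, so the streak accounting (for instance, whether a skipped trade's outcome still counts toward streaks) is my interpretation; here streaks are tracked on the underlying strategy's trade outcomes:

```python
# Sketch of the meta-strategy overlay (one reading of the two rules above).
def meta_overlay(trade_pnls):
    """Given the underlying strategy's per-trade P&L sequence, return the
    number of contracts the meta-strategy would trade on each trade
    (0 = trade skipped)."""
    sizes = []
    wins_in_row = losses_in_row = 0
    size = 1
    skip_next = False
    for pnl in trade_pnls:
        if skip_next:
            sizes.append(0)          # rule 1: skip this trade entirely
            skip_next = False
        else:
            sizes.append(size)
        if pnl > 0:
            wins_in_row += 1
            losses_in_row = 0
            if size > 1:
                size -= 1            # rule 2: scale back down after each win
            if wins_in_row == 3:
                skip_next = True     # rule 1: after 3 wins in a row, skip one
                wins_in_row = 0
        else:
            losses_in_row += 1
            wins_in_row = 0
            if losses_in_row >= 3:
                size += 1            # rule 2: add 1 contract until the first win
    return sizes
```

For example, `meta_overlay([1, 1, 1, 1])` skips the fourth trade, and a run of losses scales the size up until the first win scales it back down.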

 

 

 

 

3.2 Long Trades with Meta-Strategy

The meta-strategy rules produce significant improvements in the performance of both the long and short components of the strategy. On the long side the percentage win rate is increased to 100% and the max % drawdown is reduced to 0%:

 

 

3.3 Short Trades with Meta-Strategy

Improvements to the strategy on the short side are even more significant, transforming a loss of -$59,000 into a profit of $91,600:

 

 

 

 

4. Conclusion

A meta-strategy is a simple, yet powerful technique that can transform the performance of an underlying strategy.  The rules are often simple, although they can be challenging to implement.  Meta-strategies can be applied to almost any underlying strategy, whether in futures, equities, or forex. Worthwhile improvements in strategy performance are often achievable, although not often as spectacular as in this case.

If any reader is interested in designing a meta-strategy for their own use, please get in contact.

High Frequency Scalping Strategies

HFT scalping strategies enjoy several highly desirable characteristics, compared to low frequency strategies.  A case in point is our scalping strategy in VIX futures, currently running on the Collective2 web site:

  • The strategy is highly profitable, with a Sharpe Ratio in excess of 9 (net of transaction costs of $14 prt)
  • Performance is consistent and reliable, being based on a large number of trades (10-20 per day)
  • The strategy has low, or negative correlation to the underlying equity and volatility indices
  • There is no overnight risk

 

VIX HFT Scalper

 

Background on HFT Scalping Strategies

The attractiveness of such strategies is undeniable.  So how does one go about developing them?

It is important for the reader to familiarize himself with some of the background to  high frequency trading in general and scalping strategies in particular.  Specifically, I would recommend reading the following blog posts:

http://jonathankinlay.com/2015/05/high-frequency-trading-strategies/

http://jonathankinlay.com/2014/05/the-mathematics-of-scalping/

 

Execution vs Alpha Generation in HFT Strategies

The key to understanding HFT strategies is that execution is everything.  With low frequency strategies a great deal of work goes into researching sources of alpha, often using highly sophisticated mathematical and statistical techniques to identify and separate the alpha signal from the background noise.  Strategy alpha accounts for perhaps as much as 80% of the total return in a low frequency strategy, with execution making up the remaining 20%.  It is not that execution is unimportant, but there are only so many basis points one can earn (or save) in a strategy with monthly turnover.  By contrast, a high frequency strategy is highly dependent on trade execution, which may account for 80% or more of the total return.  The algorithms that generate the strategy alpha are often very simple and may provide only the smallest of edges.  However, that very small edge, scaled up over thousands of trades, is sufficient to produce a significant return. And since the risk is spread over a large number of very small time increments, the rate of return can become eye-wateringly high on a risk-adjusted basis:  Sharpe Ratios of 10, or more, are commonly achieved with HFT strategies.

In many cases an HFT algorithm seeks to estimate the conditional probability of an uptick or downtick in the underlying, leaning on the bid or offer price accordingly.  Provided orders can be positioned towards the front of the queue to ensure an adequate fill rate, the laws of probability will do the rest.  So, in the HFT context, much effort is expended on mitigating latency and on developing techniques for establishing and maintaining priority in the limit order book.  Another major concern is to monitor order book dynamics for signs that book pressure may be moving against any open orders, so that they can be cancelled in good time, avoiding adverse selection by informed traders, or a buildup of unwanted inventory.

In a high frequency scalping strategy one is typically looking to capture an average of between 1/2 to 1 tick per trade.  For example, the VIX scalping strategy illustrated here averages around $23 per contract per trade, i.e. just under 1/2 a tick in the futures contract.  Trade entry and exit is effected using limit orders, since there is no room to accommodate slippage in a trading system that generates less than a single tick per trade, on average. As with most HFT strategies the alpha algorithms are only moderately sophisticated, and the strategy is highly dependent on achieving an acceptable fill rate (the proportion of limit orders that are executed).  The importance of achieving a high enough fill rate is clearly illustrated in the first of the two posts referenced above.  So what is an acceptable fill rate for a HFT strategy?

Fill Rates

I’m going to address the issue of fill rates by focusing on a critical subset of the problem:  fills that occur at the extreme of the bar, also known as “extreme hits”. These are limit orders whose prices coincide with the highest (in the case of a sell order) or lowest (in the case of a buy order) trade price in any bar of the price series. Limit orders at prices within the interior of the bar are necessarily filled and are therefore uncontroversial.  But limit orders at the extremities of  the bar may or may not be filled and it is therefore these orders that are the focus of attention.

By default, most retail platform backtest simulators assume that all limit orders, including extreme hits, are filled if the underlying trades there.  In other words, these systems typically assume a 100% fill rate on extreme hits.  This is highly unrealistic:  in many cases the high or low of a bar forms a turning point that the price series visits only fleetingly before reversing its recent trend, and does not revisit for a considerable time.  The first few orders at the front of the queue will be filled, but many, perhaps the majority of, orders further down the priority order will be disappointed.  If the trader is using a retail trading system rather than a HFT platform to execute his trades, his limit orders are almost always guaranteed to rest towards the back of the queue, due to the relatively high latency  of his system.  As a result, a great many of his limit orders – in particular, the extreme hits – will not be filled.

The consequences of missing a large number of trades due to unfilled limit orders are likely to be catastrophic for any HFT strategy. A simple test that is readily available in most backtest systems is to change the underlying assumption with regard to the fill rate on extreme hits: instead of assuming that 100% of such orders are filled, the system tests the outcome if limit orders are filled only when the price series subsequently trades through the limit price.  The outcome produced under this alternative scenario is typically extremely adverse, as illustrated in the first blog post referenced previously.
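The two fill assumptions can be expressed in a couple of lines. A toy sketch for a resting buy limit order, assuming OHLC bars (a sell order would use the bar high symmetrically); the function names are illustrative:

```python
# Two fill assumptions for a resting buy limit order, per bar.
def filled_optimistic(limit_price, bar_low):
    # Retail-simulator default: filled whenever the bar trades AT the limit.
    return bar_low <= limit_price

def filled_pessimistic(limit_price, bar_low):
    # Conservative test: filled only if price trades THROUGH the limit.
    return bar_low < limit_price

# Illustration: three bars, a buy limit at 100.0
bars_low = [99.0, 100.0, 100.25]
limit = 100.0
opt = sum(filled_optimistic(limit, lo) for lo in bars_low)    # counts 2 fills
pess = sum(filled_pessimistic(limit, lo) for lo in bars_low)  # counts 1 fill
```

The bar whose low exactly touches the limit (an "extreme hit") is the one on which the two assumptions disagree.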

 

Fig4

In reality, of course, neither assumption is reasonable:  it is unlikely that either 100% or 0% of a strategy’s extreme hits will be filled – the actual fill rate will likely lie somewhere between these two outcomes.   And this is the critical issue:  at some level of fill rate the strategy will move from profitability into unprofitability.  The key to implementing a HFT scalping strategy successfully is to ensure that the execution falls on the right side of that dividing line.

Implementing HFT Scalping Strategies in Practice

One solution to the fill rate problem is to spend millions of dollars building HFT infrastructure.  But for the purposes of this post let’s assume that the trader is confined to using a retail trading platform like Tradestation or Interactive Brokers.  Are HFT scalping systems still feasible in such an environment?  The answer, surprisingly, is a qualified yes – by using a technique that took me many years to discover.

To illustrate the method I will use the following HFT scalping system in the E-Mini S&P 500 futures contract.  The system trades the E-Mini futures on 3 minute bars, with an average hold time of 15 minutes.  The average trade is very low – around $6, net of commissions of $8 prt.  But the strategy appears to be highly profitable, due to the large number of trades – around 50 to 60 per day, on average.


fig-4

fig-3

So far so good.  But the critical issue is the very large number of extreme hits produced by the strategy.  Take the trading activity on 10/18 as an example (see below).  Of 53 trades that day, 25 (47%) were extreme hits, occurring at the high or low price of the 3-minute bar in which the trade took place.
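This count can be reproduced mechanically from a trade log. A minimal sketch, with field names that are illustrative rather than those of any particular platform:

```python
# Count "extreme hits": fills that coincide with the bar's low (for buys)
# or high (for sells) in the bar in which the trade took place.
def extreme_hit_rate(trades):
    hits = sum(
        1 for t in trades
        if (t["side"] == "buy" and t["price"] == t["bar_low"])
        or (t["side"] == "sell" and t["price"] == t["bar_high"])
    )
    return hits / len(trades)

# Illustrative trade log: two of these four fills sit at a bar extreme.
trades = [
    {"side": "buy",  "price": 100.0, "bar_low": 100.0, "bar_high": 101.0},
    {"side": "sell", "price": 101.0, "bar_low": 100.0, "bar_high": 101.0},
    {"side": "buy",  "price": 100.5, "bar_low": 100.0, "bar_high": 101.0},
    {"side": "sell", "price": 100.5, "bar_low": 100.0, "bar_high": 101.0},
]
print(extreme_hit_rate(trades))  # -> 0.5
```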

 

fig5

 

Overall, the strategy extreme hit rate runs at 34%, which is extremely high.  In reality, perhaps only 1/4 or 1/3 of these orders will actually execute – meaning that the remainder, amounting to around 20% of the total number of orders, will fail.  A HFT scalping strategy cannot hope to survive such an outcome.  Strategy profitability will be decimated by a combination of missed, profitable trades and losses on trades that escalate after an exit order fails to execute.

So what can be done in such a situation?

Manual Override, MIT and Other Interventions

One approach that will not work is to assume naively that some kind of manual oversight will be sufficient to correct the problem.  Let’s say the trader runs two versions of the system side by side, one in simulation and the other in production.  When a limit order executes on the simulation system, but fails to execute in production, the trader might step in, manually override the system and execute the trade by crossing the spread.  In so doing the trader might prevent losses that would have occurred had the trade not been executed, or force the entry into a trade that later turns out to be profitable.  Equally, however, the trader might force the exit of a trade that later turns around and moves from loss into profit, or enter a trade that turns out to be a loser.  There is no way for the trader to know, ex-ante, which of those scenarios might play out.  And the trader will have to face the same decision perhaps as many as twenty times a day.  If the trader is really that good at picking winners and cutting losers he should scrap his trading system and trade manually!

An alternative approach would be to have the trading system handle the problem.  For example, one could program the system to convert limit orders to market orders if a trade occurs at the limit price (MIT), or x seconds after the limit price is touched.  Again, however, there is no way to know in advance whether such action will produce a positive outcome, or an even worse outcome compared to leaving the limit order in place.

In reality, intervention, whether manual or automated, is unlikely to improve the trading performance of the system.  What is certain, however,  is that by forcing the entry and exit of trades that occur around the extreme of a price bar, the trader will incur additional costs by crossing the spread.  Incurring that cost for perhaps as many as 1/3 of all trades, in a system that is producing, on average less than half a tick per trade, is certain to destroy its profitability.

Successfully Implementing HFT Strategies on a Retail Platform

For many years I assumed that the only solution to the fill rate problem was to implement scalping strategies on HFT infrastructure.  One day, I found myself asking the question:  what would happen if we slowed the strategy down?  Specifically, suppose we took the 3-minute E-Mini strategy and ran it on 5-minute bars?

My first realization was that the relative simplicity of alpha-generation algorithms in HFT strategies is an advantage here.  In a low frequency context, the complexity of the alpha extraction process mitigates its ability to generalize to other assets or time-frames.  But HFT algorithms are, by and large, simple and generic: what works on 3-minute bars for the E-Mini futures might work on 5-minute bars in E-Minis, or even in SPY.  For instance, if the essence of the algorithm is something as simple as: “buy when the price falls by more than x% below its y-bar moving average”, that approach might work on 3-minute, 5-minute, 60-minute, or even daily bars.

So what happens if we run the E-mini scalping system on 5-minute bars instead of 3-minute bars?

Obviously the overall profitability of the strategy is reduced, in line with the lower number of trades on this slower time-scale.   But note that average trade has increased and the strategy remains very profitable overall.

fig8 fig9

More importantly, the average extreme hit rate has fallen from 34% to 22%.

fig6

Hence, not only do we get fewer, slightly more profitable trades, but a much lower proportion of them occur at the extreme of the 5-minute bars.  Consequently the fill-rate issue is less critical on this time frame.

Of course, one can continue this process.  What about 10-minute bars, or 30-minute bars?  What one tends to find from such experiments is that there is a time frame that optimizes the trade-off between strategy profitability and fill rate dependency.

However, there is another important factor we need to elucidate.  If you examine the trading record from the system you will see substantial variation in the extreme hit rate from day to day (for example, it is as high as 46% on 10/18, compared to the overall average of 22%).  In fact, there are significant variations in the extreme hit rate during the course of each trading day, with rates rising during slower market intervals such as from 12 to 2pm.  The important realization that eventually occurred to me is that, of course, what matters is not clock time (or “wall time” in HFT parlance) but trade time:  i.e. the rate at which trades occur.

Wall Time vs Trade Time

What we need to do is reconfigure our chart to show bars comprising a specified number of trades, rather than a specific number of minutes.  In this scheme, we do not care whether the elapsed time in a given bar is 3-minutes, 5-minutes or any other time interval: all we require is that the bar comprises the same amount of trading activity as any other bar.  During high volume periods, such as around market open or close, trade time bars will be shorter, comprising perhaps just a few seconds.  During slower periods in the middle of the day, it will take much longer for the same number of trades to execute.  But each bar represents the same level of trading activity, regardless of how long a period it may encompass.
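The construction of trade-time bars can be sketched as follows. This is a minimal illustration using per-tick `(price, size)` tuples; a real implementation would also carry timestamps and handle partial prints across bar boundaries:

```python
# Build trade-time (volume) bars from a stream of (price, size) ticks.
# A bar closes once its cumulative traded size reaches the threshold.
def volume_bars(ticks, contracts_per_bar):
    bars = []
    o = h = l = c = None
    vol = 0
    for price, size in ticks:
        if o is None:                       # first tick of a new bar
            o = h = l = price
        h = max(h, price)
        l = min(l, price)
        c = price
        vol += size
        if vol >= contracts_per_bar:        # bar complete: record and reset
            bars.append({"open": o, "high": h, "low": l, "close": c, "volume": vol})
            o = None
            vol = 0
    return bars

# During busy periods a bar forms in seconds; during lunch it may take much longer.
print(volume_bars([(100.0, 2), (101.0, 3), (99.0, 1), (102.0, 5)], 5))
```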

How do you decide how many trades per bar you want in the chart?

As a rule of thumb, a strategy will tolerate an extreme hit rate of between 15% and 25%, depending on the daily trade rate.  Suppose that in its original implementation the strategy has an unacceptably high hit rate of 50%.  And let’s say for illustrative purposes that each time-bar produces an average of 1,000 contracts.  Since volatility scales approximately with the square root of time, if we want to reduce the extreme hit rate by a factor of 2, i.e. from 50% to 25%, we need to increase the average number of trades per bar by a factor of 2^2, i.e. 4.  So in this illustration we would need volume bars comprising 4,000 contracts per bar.  Of course, this is just a rule of thumb – in practice one would want to implement the strategy on a variety of volume bar sizes in a range from perhaps 3,000 to 6,000 contracts per bar, and evaluate the trade-off between performance and fill rate in each case.
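The square-root-of-time rule of thumb above can be written directly (remembering that it is only a heuristic starting point for the bar-size search):

```python
# Rule of thumb: to cut the extreme hit rate by a factor k,
# scale the per-bar volume by k**2 (volatility ~ sqrt(time)).
def required_bar_volume(current_volume, current_hit_rate, target_hit_rate):
    k = current_hit_rate / target_hit_rate
    return current_volume * k ** 2

# The worked example from the text: 50% -> 25% hit rate, 1,000-contract bars.
print(required_bar_volume(1_000, 0.50, 0.25))  # -> 4000.0
```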

Using this approach, we arrive at a volume bar configuration for the E-Mini scalping strategy of 20,000 contracts per bar.  On this “time”-frame, trading activity is reduced to around 20-25 trades per day, but with higher win rate and average trade size.  More importantly, the extreme hit rate runs at a much lower average of 22%, which means that the trader has to worry about maybe only 4 or 5 trades per day that occur at the extreme of the volume bar.  In this scenario manual intervention is likely to have a much less deleterious effect on trading performance and the strategy is probably viable, even on a retail trading platform.

(Note: the results below summarize the strategy performance only over the last six months, the time period for which volume bars are available).

 

fig7

 

fig10 fig11

Concluding Remarks

We have seen that it is feasible in principle to implement a HFT scalping strategy on a retail platform by slowing it down, i.e. by implementing the strategy on bars of lower frequency.  The simplicity of many HFT alpha generation algorithms often makes them robust to generalization across time frames (and sometimes even across assets).  An even better approach is to use volume bars, or trade time, to implement the strategy.  You can estimate the appropriate bar size using the square root of time rule, adjusting the bar volume to produce the requisite fill rate.  An extreme hit rate of up to 25% may be acceptable, depending on the daily trade rate, although a hit rate in the range of 10% to 15% would typically be ideal.

Finally, a word about data.  While necessary compromises can be made with regard to the trading platform and connectivity, the same is not true for market data, which must be of the highest quality, both in terms of timeliness and completeness. The reason is self-evident, especially if one is attempting to implement a strategy in trade time, where the integrity and latency of market data are crucial. In this context, using the data feed from, say, Interactive Brokers simply will not do – data delivered in 500ms packets is entirely unsuited to the task.  The trader must seek to use the highest-quality market data feed that he can reasonably afford.

That caveat aside, one can conclude that it is certainly feasible to implement high volume scalping strategies, even on a retail trading platform, providing sufficient care is taken with the modeling and implementation of the system.

Overnight Trading in the E-Mini S&P 500 Futures

Jeff Swanson’s Trading System Success web site is often worth a visit for those looking for new trading ideas.

A recent post Seasonality S&P Market Session caught my eye, having investigated several ideas for overnight trading in the E-minis.  Seasonal effects are of course widely recognized and traded in commodities markets, but they can also apply to financial products such as the E-mini.  Jeff’s point about session times is well-made:  it is often worthwhile to look at the behavior of an asset, not only in different time frames, but also during different periods of the trading day, day of the week, or month of the year.

Jeff breaks the E-mini trading session into several basic sub-sessions:

  1. “Pre-Market” Between 5:30 and 8:30
  2. “Open” Between 8:30 and 9:00
  3. “Morning” Between 9:00 and 11:30
  4. “Lunch” Between 11:30 and 13:15
  5. “Afternoon” Between 13:15 and 14:00
  6. “Close” Between 14:00 and 15:15
  7. “Post-Market” Between 15:15 and 18:00
  8. “Night” Between 18:00 and 5:30

In his analysis Jeff’s strategy is simply to buy at the open of the session and close that trade at the conclusion of the session. This mirrors the traditional seasonality study where a trade is opened at the beginning of the season and closed several months later when the season comes to an end.
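The test as described can be sketched in a few lines: enter at the sub-session open, exit at its close, and aggregate P&L by season. The field names and sample data are illustrative; the bullish-season definition (Nov-May) follows the text:

```python
from datetime import date

def bullish_season(d: date) -> bool:
    # "Bullish season" per the text: November through May.
    return d.month >= 11 or d.month <= 5

def session_pnl(sessions):
    """sessions: iterable of (date, session_open, session_close) tuples
    for one sub-session, e.g. the overnight session."""
    totals = {"bullish": 0.0, "bearish": 0.0}
    for d, open_px, close_px in sessions:
        key = "bullish" if bullish_season(d) else "bearish"
        totals[key] += close_px - open_px   # buy the open, sell the close
    return totals

# Illustrative two-day sample
print(session_pnl([(date(2020, 12, 1), 100.0, 101.0),
                   (date(2020, 7, 1), 100.0, 99.0)]))
```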

Evaluating Overnight Session and Seasonal Effects

The analysis evaluates the performance of this basic strategy during the “bullish season”, from Nov-May, when the equity markets traditionally make the majority of their annual gains, compared to the outcome during the “bearish season” from Jun-Oct.

None of the outcomes of these tests is especially noteworthy, save one:  the performance during the overnight session in the bullish season:

Fig 1

The tendency of the overnight session in the E-mini to produce clearer trends and trading signals has been well documented.  Plausible explanations for this phenomenon are that:

(a) The returns process in the overnight session is less contaminated with noise, which primarily results from trading activity; and/or

(b) The relatively poor liquidity of the overnight session allows participants to push the market in one direction more easily.

Either way, there is no denying that this study and several other, similar studies appear to demonstrate interesting trading opportunities in the overnight market.

That is, until trading costs are considered.  Results for the trading strategy from Nov 1997-Nov 2015 show a gain of $54,575, but an average trade of only just over $20:

Gross PL     # Trades     Av Trade
$54,575      2701         $20.21

Assuming that we enter and exit aggressively, buying at the market at the start of the session and selling MOC at the close, we will pay the bid-offer spread and commissions amounting to around $30, producing a net loss of $10 per trade.

The situation can be improved by omitting January from the “bullish season”, but the slightly higher average trade is still insufficient to overcome trading costs:

Gross PL     # Trades     Av Trade
$54,550      2327         $23.44


Designing a Seasonal Trading Strategy for the Overnight Session

At this point an academic research paper might conclude that the apparently anomalous trading profits are subsumed within the bid-offer spread.  But for a trading system designer this is not the end of the story.

If the profits are insufficient to overcome trading frictions when we cross the spread on entry and exit, what about a trading strategy that permits market orders on only the exit leg of the trade, while using limit orders to enter?  Total trading costs will be reduced to something closer to $17.50 per round turn, leaving a net profit of almost $6 per trade.

Of course, there is no guarantee that we will successfully enter every trade – our limit orders may not be filled at the bid price and, indeed, we are likely to suffer adverse selection – i.e. getting filled on every losing trade, while missing a proportion of the winning trades.

On the other hand, we are hardly obliged to hold a position for the entire overnight session.  Nor are we obliged to exit every trade MOC – we might find opportunities to exit prior to the end of the session, using limit orders to achieve a profit target or cap a trading loss.  In such a system, some proportion of the trades will use limit orders on both entry and exit, reducing trading costs for those trades to around $5 per round turn.
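The cost arithmetic in the last few paragraphs can be tabulated. The gross average trades and per-round-turn costs are the figures quoted above; the scenario labels are mine:

```python
# Net expectancy per trade under the three execution scenarios discussed.
costs = {
    "market entry, MOC exit": 30.00,   # cross the spread on both legs
    "limit entry, MOC exit": 17.50,
    "limit entry and exit": 5.00,
}

def net_expectancy(gross_avg_trade):
    return {name: round(gross_avg_trade - c, 2) for name, c in costs.items()}

full_season = net_expectancy(20.21)   # Nov-May, January included
ex_january = net_expectancy(23.44)    # January omitted
print(full_season)
print(ex_january)
```

With market orders on both legs the edge is negative, as noted; with a limit entry the January-excluded version nets close to $6 per trade.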

The key point is that we can use the seasonal effects detected in the overnight session as a starting point for the development for a more sophisticated trading system that uses a variety of entry and exit criteria, and order types.

The following shows the performance results for a trading system designed to trade 30-minute bars in the E-mini futures overnight session during the months of Nov to May. The strategy enters trades using limit prices and exits using a combination of profit targets, stop loss targets, and MOC orders.

Data from 1997 to 2010 were used to design the system, which was tested on out-of-sample data from 2011 to 2013.  Unseen data from Jan 2014 to Nov 2015 were used to provide a further (double blind) evaluation period for the strategy.

Fig 2

 

 

  

Metric                              ALL TRADES     LONG           SHORT
Closed Trade Net Profit             $83,080        $61,493        $21,588
  Gross Profit                      $158,193       $132,573       $25,620
  Gross Loss                        -$75,113       -$71,080       -$4,033
Profit Factor                       2.11           1.87           6.35
Ratio L/S Net Profit                2.85
Total Net Profit                    $83,080        $61,493        $21,588
Trading Period                      11/13/97 2:30:00 AM to 12/31/13 6:30:00 AM (16 years 48 days)
Number of Trading Days              2767
Starting Account Equity             $100,000
Highest Equity                      $183,080
Lowest Equity                       $97,550
Final Closed Trade Equity           $183,080
Return on Starting Equity           83.08%
Number of Closed Trades             849            789            60
  Number of Winning Trades          564            528            36
  Number of Losing Trades           285            261            24
  Trades Not Taken                  0              0              0
Percent Profitable                  66.43%         66.92%         60.00%
Trades Per Year                     52.63          48.91          3.72
Trades Per Month                    4.39           4.08           0.31
Max Position Size                   1              1              1
Average Trade (Expectation)         $97.86         $77.94         $359.79
Average Trade (%)                   0.07%          0.06%          0.33%
Trade Standard Deviation            $641.97        $552.56        $1,330.60
Trade Standard Deviation (%)        0.48%          0.44%          1.20%
Average Bars in Trades              15.2           14.53          24.1
Average MAE                         $190.34        $181.83        $302.29
Average MAE (%)                     0.14%          0.15%          0.27%
Maximum MAE                         $3,237         $2,850         $3,237
Maximum MAE (%)                     2.77%          2.52%          3.10%
Win/Loss Ratio                      1.06           0.92           4.24
Win/Loss Ratio (%)                  2.10           1.83           7.04
Return/Drawdown Ratio               15.36          14.82          5.86
Sharpe Ratio                        0.43           0.46           0.52
Sortino Ratio                       1.61           1.69           6.40
MAR Ratio                           0.71           0.73           0.33
Correlation Coefficient             0.95           0.96           0.719
Statistical Significance            100%           100%           97.78%
Average Risk                        $1,099         $1,182         $0.00
Average Risk (%)                    0.78%          0.95%          0.00%
Average R-Multiple (Expectancy)     0.0615         0.0662         0
R-Multiple Standard Deviation       0.4357         0.4357         0
Average Leverage                    0.399          0.451          0.463
Maximum Leverage                    0.685          0.694          0.714
Risk of Ruin                        0.00%          0.00%          0.00%
Kelly f                             34.89%         31.04%         50.56%
Average Annual Profit/Loss          $5,150         $3,811         $1,338
Ave Annual Compounded Return        3.82%          3.02%          1.22%
Average Monthly Profit/Loss         $429.17        $317.66        $111.52
Ave Monthly Compounded Return       0.31%          0.25%          0.10%
Average Weekly Profit/Loss          $98.70         $73.05         $25.65
Ave Weekly Compounded Return        0.07%          0.06%          0.02%
Average Daily Profit/Loss           $30.03         $22.22         $7.80
Ave Daily Compounded Return         0.02%          0.02%          0.01%

INTRA-BAR EQUITY DRAWDOWNS          ALL TRADES     LONG           SHORT
Number of Drawdowns                 445            422            79
Average Drawdown                    $282.88        $269.15        $441.23
Average Drawdown (%)                0.21%          0.20%          0.33%
Average Length of Drawdowns         10 days 19 hours   10 days 20 hours   66 days 1 hour
Average Trades in Drawdowns         3              3              1
Worst Case Drawdown                 $6,502         $4,987         $4,350
Date at Trough                      12/13/00 1:30  5/24/00 4:30   12/13/00 1:30

Developing High Performing Trading Strategies with Genetic Programming

One of the frustrating aspects of research and development of trading systems is that there is never enough time to investigate all of the interesting trading ideas one would like to explore. In the early 1970’s, when a moving average crossover system was considered state of the art, it was relatively easy to develop profitable strategies using simple technical indicators. Indeed, research has shown that the profitability of simple trading rules persisted in foreign exchange and other markets for a period of decades. But, coincident with the advent of the PC in the late 1980’s, such simple strategies began to fail. The widespread availability of data, analytical tools and computing power has, arguably, contributed to the increased efficiency of financial markets and complicated the search for profitable trading ideas. We are now at a stage where it can take a team of 5-6 researchers/developers, using advanced research techniques and computing technologies, as long as 12-18 months, and hundreds of thousands of dollars, to develop a prototype strategy. And there is no guarantee that the end result will produce the required investment returns.

The lengthening lead times and rising cost and risk of strategy research has obliged trading firms to explore possibilities for accelerating the R&D process. One such approach is Genetic Programming.

Early Experiences with Genetic Programming
I first came across the GP approach to investment strategy in the late 1990s, when I began to work with Haftan Eckholdt, then head of neuroscience at Yeshiva University in New York. Haftan had proposed creating trading strategies by applying the kind of techniques widely used to analyze voluminous and highly complex data sets in genetic research. I was extremely skeptical of the idea and spent the next 18 months kicking the tires very hard indeed, on behalf of an interested investor. Although Haftan’s results seemed promising, I was fairly sure that they were the product of random chance and set about devising tests that would demonstrate that.

One of the challenges I devised was to create data sets in which real and synthetic stock series were mixed together and given to the system to evaluate. To the human eye (or analyst’s spreadsheet), the synthetic series were indistinguishable from the real thing. But, in fact, I had “planted” some patterns within the processes of the synthetic stocks that made them perform differently from their real-life counterparts. Some of the patterns I created were quite simple, such as introducing a drift component. But other patterns were more nuanced, for example, using a fractal Brownian motion generator to induce long memory in the stock volatility process.

It was when I saw the system detect and exploit the patterns buried deep within the synthetic series to create sensible, profitable strategies that I began to pay attention. A short time thereafter Haftan and I joined forces to create what became the Proteom Fund.

That Proteom succeeded at all was a testament not only to Haftan’s ingenuity as a researcher, but also to his abilities as a programmer and technician. Processing such large volumes of data was a tremendous challenge at that time and required a cluster of 50 cpu’s networked together and maintained with a fair amount of patch cable and glue. We housed the cluster in a rat-infested warehouse in Brooklyn that had a very pleasant view of Manhattan, but no a/c. The heat thrown off from the cluster was immense, and when combined with very loud rap music blasted through the walls by the neighboring music studios, the effect was debilitating. As you might imagine, meetings with investors were a highly unpredictable experience. Fortunately, Haftan’s intellect was matched by his immense reserves of fortitude and patience and we were able to attract investments from several leading institutional investors.

The Genetic Programming Approach to Building Trading Models

Genetic programming is an evolutionary-based algorithmic methodology which can be used in a very general way to identify patterns or rules within data structures. The GP system is given a set of instructions (typically simple operators like addition and subtraction), some data observations and a fitness function to assess how well the system is able to combine the functions and data to achieve a specified goal.

In the trading strategy context the data observations might include not only price data, but also price volatility, moving averages and a variety of other technical indicators. The fitness function could be something as simple as net profit, but might represent alternative measures of profitability or risk, with factors such as PL per trade, win rate, or maximum drawdown. In order to reduce the danger of over-fitting, it is customary to limit the types of functions that the system can use to simple operators (+,-,/,*), exponents, and trig functions. The length of the program might also be constrained in terms of the maximum permitted lines of code.

We can represent what is going on using a tree graph:

Tree

In this example the GP system is combining several simple operators with the Sin and Cos trig functions to create a signal comprising an expression in two variables, X and Y, which may be, for example, stock prices, moving averages, or technical indicators of momentum or mean reversion.
The “evolutionary” aspect of the GP process derives from the idea that an existing signal or model can be mutated by replacing nodes in a branch of a tree, or even an entire branch by another. System performance is re-evaluated using the fitness function and the most profitable mutations are carried forward into the next generation.
The resulting models are often highly non-linear and can be very general in form.
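To make the tree representation concrete, here is a minimal sketch in Python (not the GP system described here, which is far more elaborate) of how such an expression tree can be evaluated and mutated. The operator set, the `evaluate`/`mutate` helpers and the example signal are all illustrative assumptions; division is deliberately omitted to sidestep divide-by-zero handling.

```python
import math
import random

# Node types: a str leaf is a variable, a numeric leaf is a constant,
# and a tuple is (operator, operand, ...).
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "sin": math.sin,
    "cos": math.cos,
}

def evaluate(tree, env):
    """Recursively evaluate an expression tree against variable bindings."""
    if isinstance(tree, str):            # variable leaf, e.g. "X" or "Y"
        return env[tree]
    if isinstance(tree, (int, float)):   # constant leaf
        return float(tree)
    op, *args = tree
    return OPS[op](*(evaluate(a, env) for a in args))

def mutate(tree, rng):
    """Walk down one randomly chosen branch and replace the leaf it
    reaches with a fresh random leaf -- a simple point mutation."""
    if not isinstance(tree, tuple):
        return rng.choice(["X", "Y", rng.uniform(-1.0, 1.0)])
    op, *args = tree
    i = rng.randrange(len(args))
    args[i] = mutate(args[i], rng)
    return (op, *args)

# Signal in two variables: Sin(X) + Cos(Y) * X
signal = ("+", ("sin", "X"), ("*", ("cos", "Y"), "X"))
print(evaluate(signal, {"X": 0.0, "Y": 0.0}))  # → 0.0
```

In a full GP system, each mutated candidate would be re-scored with the fitness function and the best survivors retained for the next generation.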

A GP Daytrading Strategy
The last fifteen years have seen tremendous advances in the field of genetic programming, in terms of the theory as well as practice. Using a single hyper-threaded CPU, it is now possible for a GP system to generate signals at a far faster rate than was possible on Proteom’s cluster of 50 networked CPUs. A researcher can develop and evaluate tens of millions of possible trading algorithms within the space of a few hours. Implementing a thoroughly researched and tested strategy is now feasible in a matter of weeks. There can be no doubt of GP’s potential to produce dramatic reductions in R&D lead times and costs. But does it work?

To address that question I have summarized below the performance results from a GP-developed daytrading system that trades nine different futures markets: Crude Oil (CL), Euro (EC), E-Mini (ES), Gold (GC), Heating Oil (HO), Coffee (KC), Natural Gas (NG), Ten Year Notes (TY) and Bonds (US). The system trades a single contract in each market individually, going long and short several times a day. Only the most liquid period in each market is traded, which typically coincides with the open-outcry session, with any open positions being exited at the end of the session using market orders. With the exception of the NG and HO markets, which are entered using stop orders, all of the markets are entered and exited using standard limit orders, at prices determined by the system.

The system was constructed using 15-minute bar data from Jan 2006 to Dec 2011 and tested on out-of-sample data from Jan 2012 to May 2014. The in-sample span of data was chosen to cover periods of extreme market stress, as well as less volatile market conditions. A lengthy out-of-sample period, almost half the span of the in-sample period, was chosen in order to evaluate the robustness of the system.
Out-of-sample testing was “double-blind”, meaning that the data was not used in the construction of the models, nor was out-of-sample performance evaluated by the system before any model was selected.

Performance results are net of trading commissions of $6 per round turn and, in the case of HO and NG, additional slippage of 2 ticks per round turn.

[Table: strategy performance summary — annual returns, risk, value of $1,000 invested, and Sharpe ratio, in-sample and out-of-sample]

The most striking feature of the strategy is its high rate of risk-adjusted returns, as measured by a Sharpe ratio that exceeds 5 in both the in-sample and out-of-sample periods. This consistency reflects the fact that, while net returns fall from an annual average of over 29% in-sample to around 20% in the period from 2012, strategy volatility declines in step, from 5.35% to 3.86% in the respective periods. The reduction in risk in the out-of-sample period is also reflected in lower Value-at-Risk and drawdown levels.

A decline in the average PL per trade from $25 to $16 is offset to some degree by a slight increase in the rate of trading, from 42 to 44 trades per day on average, while the daily win rate and percentage of profitable trades remain consistent at around 65% and 56%, respectively.

Overall, the system appears to be not only highly profitable, but also extremely robust. This is impressive, given that the models were not updated with data after 2011, remaining static over a period almost half as long as the span of data used in their construction. It is reasonable to expect that out-of-sample performance might be improved by allowing the models to be updated with more recent data.

Benefits and Risks of the GP Approach to Trading System Development
The potential benefits of the GP approach to trading system development include speed of development, flexibility of design, generality of application across markets and rapid testing and deployment.

What about the downside? The most obvious concern is the risk of over-fitting. By allowing the system to develop and test millions of models, there is a distinct risk that the resulting systems may be too closely conditioned on the in-sample data, and will fail to maintain performance when faced with new market conditions. That is why, of course, we retain a substantial span of out-of-sample data, in order to evaluate the robustness of the trading system. Even so, given the enormous number of models evaluated, there remains a significant risk of over-fitting.

Another drawback is that, due to the nature of the modelling process, it can be very difficult to understand, or explain to potential investors, the “market hypothesis” underpinning any specific model. “We tested it and it works” is not a particularly enlightening explanation for investors, who are accustomed to being presented with a more articulate theoretical framework, or investment thesis. Not being able to explain precisely how a system makes money is troubling enough in good times; but in bad times, during an extended drawdown, investors are likely to become agitated very quickly indeed if no explanation is forthcoming. Unfortunately, evaluating the question of whether a period of poor performance is temporary, or the result of a breakdown in the model, can be a complicated process.

Finally, in comparison with other modeling techniques, GP models cannot easily have their parameters updated as new data becomes available. Typically, a GP model has to be rebuilt from scratch, often producing very different results each time.

Conclusion
Despite the many limitations of the GP approach, the advantages in terms of the speed and cost of researching and developing original trading signals and strategies have become increasingly compelling.

Given the several well-documented successes of the GP approach in fields as diverse as genetics and physics, I think an appropriate position to take with respect to applications within financial market research would be one of cautious optimism.

A Scalping Strategy in E-Mini Futures

This is a follow-up to my post on the Mathematics of Scalping. To illustrate the scalping methodology, I coded up a simple strategy based on the techniques described in that post.

The strategy trades a single @ES contract on 1-minute bars. The attached ELD file contains the Easylanguage code for the ES scalping strategy, which can be run in Tradestation or Multicharts.

This strategy makes no attempt to forecast market direction and doesn’t consider market trends at all. It simply looks at the current levels of volatility and takes a long volatility position or a short volatility position depending on whether volatility is above or below some threshold parameters.

By long volatility I mean a position where we buy or sell the market and set a loose Profit Target and a tight Stop Loss. By short volatility I mean a position where we buy or sell the market and set a tight Profit Target and a loose Stop Loss. This is exactly the methodology I described in the earlier post. The parameters I ended up using are as follows:

Long Volatility: Profit Target = 8 ticks, Stop Loss = 2 ticks
Short Volatility: Profit Target = 2 ticks, Stop Loss = 30 ticks
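For concreteness, the bracket arithmetic can be sketched in Python (the strategy itself is in EasyLanguage), assuming the standard 0.25-point tick size for the ES contract; the `bracket` helper is a hypothetical name:

```python
ES_TICK = 0.25  # assumed E-Mini S&P 500 tick size, in index points

def bracket(entry_price, side, pt_ticks, sl_ticks, tick=ES_TICK):
    """Profit-target and stop-loss prices for a filled entry.
    side is +1 for a long entry, -1 for a short entry."""
    profit_target = entry_price + side * pt_ticks * tick
    stop_loss = entry_price - side * sl_ticks * tick
    return profit_target, stop_loss

# Long-volatility long entry at 2000.00: PT 8 ticks away, SL 2 ticks away
print(bracket(2000.00, +1, 8, 2))   # → (2002.0, 1999.5)
```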

I have made no attempt to optimize these parameter settings, which can easily be done in Tradestation or Multicharts.

What do we mean by volatility being above our threshold level? I use a very simple metric: I take the TrueRange for the current bar and add 50% of the increase or decrease in TrueRange over the last two bars. That’s my crude volatility “forecast”.
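A sketch of that metric in Python (the `true_range` and `vol_forecast` helpers are illustrative names, and I read "over the last two bars" as the change from two bars ago to the current bar):

```python
def true_range(high, low, prev_close):
    """Standard true range of a bar."""
    return max(high - prev_close, prev_close - low, high - low)

def vol_forecast(tr_values):
    """Crude volatility 'forecast': the current bar's TrueRange plus
    50% of the change in TrueRange over the last two bars.
    tr_values holds TrueRange readings, most recent last."""
    return tr_values[-1] + 0.5 * (tr_values[-1] - tr_values[-3])

# Example: TrueRange rising from 1.0 to 2.0 over two bars
print(vol_forecast([1.0, 1.5, 2.0]))  # → 2.5
```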

The final point to explain is this: let’s suppose our volatility forecast is above our threshold level, so we know we want to be long volatility. Ok, but do we buy or sell the ES? One approach is to try to gauge the direction of the market by estimating the trend. Not a bad idea, by any means, although I have argued that volatility drowns out any trend signal at short time frames (like 1 minute, for example). So I prefer an approach that makes no assumptions about market direction.

In this approach what we do is divide volatility into upsideVolatility and downsideVolatility. upsideVolatility uses the TrueRange for bars where Close > Close[1]. downsideVolatility is calculated only for bars where Close < Close[1]. This kind of methodology, where you calculate volatility based on the sign of the returns, is well known and is used in performance measures like the Sortino ratio. This is like the Sharpe ratio, except that you calculate the standard deviation of returns using only days in which the market was down. When it’s calculated this way, standard deviation is known as the (square root of the) semi-variance.
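A minimal Python sketch of that partitioning, assuming bars are given as (high, low, close) tuples and using a simple average as the volatility summary; the `split_volatility` helper is illustrative:

```python
def split_volatility(bars):
    """Partition each bar's TrueRange by the sign of its return:
    bars closing above the prior close feed upsideVolatility,
    bars closing at or below it feed downsideVolatility.
    bars is a list of (high, low, close) tuples; returns the average
    TrueRange of the up bars and of the down bars."""
    up, down = [], []
    for (ph, pl, pc), (h, l, c) in zip(bars, bars[1:]):
        tr = max(h - pc, pc - l, h - l)      # true range vs prior close
        (up if c > pc else down).append(tr)  # ties counted as down bars
    avg = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return avg(up), avg(down)
```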

Anyway, back to our strategy. So we calculate the upside and downside volatilities and test them against our upper and lower volatility thresholds.

The decision tree looks like this:

LONG VOLATILITY
If upsideVolatilityForecast > upperVolThreshold, buy at the market with a wide PT and tight SL (long market, long volatility)
If downsideVolatilityForecast > upperVolThreshold, sell at the market with a wide PT and tight SL (short market, long volatility)

SHORT VOLATILITY
If upsideVolatilityForecast < lowerVolThreshold, sell at the Ask on a limit with a tight PT and wide SL (short market, short volatility)
If downsideVolatilityForecast < lowerVolThreshold, buy at the Bid on a limit with a tight PT and wide SL (long market, short volatility)
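The decision tree above can be sketched in Python as follows (illustrative only; the function name, return values and threshold handling are assumptions, and ties between branches are resolved in the order shown):

```python
def scalper_decision(up_vol, down_vol, upper_thresh, lower_thresh):
    """Map the upside/downside volatility forecasts onto the four
    branches of the decision tree. Returns (direction, order_type,
    bracket) or None when no threshold is crossed."""
    if up_vol > upper_thresh:       # long volatility, long market
        return ("buy", "market", "wide PT / tight SL")
    if down_vol > upper_thresh:     # long volatility, short market
        return ("sell", "market", "wide PT / tight SL")
    if up_vol < lower_thresh:       # short volatility, short market
        return ("sell", "limit at ask", "tight PT / wide SL")
    if down_vol < lower_thresh:     # short volatility, long market
        return ("buy", "limit at bid", "tight PT / wide SL")
    return None                     # volatility in the neutral zone
```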

NOTE THE FOLLOWING CAVEATS. DO NOT TRY TO TRADE THIS STRATEGY LIVE (but use it as a basis for a tradable strategy)

1. The strategy makes the usual TS assumption about fill rates, which is unrealistic, especially at short intervals like 1-minute.
2. The strategy allows fees and commissions of $3 per contract, or $6 per round turn. Your trading costs may be higher than this.
3. Tradestation is unable to perform analysis at the tick level for a period as long as the one used here (2000 to 2014). A tick-by-tick analysis would likely show very different results (better or worse).
4. The strategy is extremely lop-sided: the great majority of the profits are made on the long side and the Win Rates and Profit Factors are very different for long trades vs short trades. I suspect this would change with a tick by tick analysis. But it also may be necessary to add more parameters so that long trades are treated differently from short trades.
5. No attempt has been made to optimize the parameters.
6. This is a daytrading strategy that will exit the market on the close.

So, with all that said, here are the results.

As you can see, the strategy produces a smooth, upward-sloping equity curve, the slope of which increases markedly during the period of high market volatility in 2008.
Net profits after commissions for a single ES contract amount to $243,000 ($3.42 per contract), with a win rate of 76% and a Profit Factor of 1.24.

This basic implementation would obviously require improvement in several areas, not least of which is the imbalance in strategy profitability between the long side, where most of the profits are generated, and the short side.

[Figure: Scalping Strategy equity curve]

[Figure: Scalping Strategy performance report]