Money Management – the Good, the Bad and the Ugly

The infatuation of futures traders with the subject of money management (more aptly described as position sizing) is something of a puzzle to someone coming from a background in equities or forex.  The idea, simply, is that one can improve one's trading performance through the judicious use of leverage, increasing the size of a position at times and reducing it at others.


Perhaps the most widely known money management technique is the Martingale, where the size of the trade is doubled after every loss.  It is easy to show mathematically that such a system must win eventually, provided that the bet size is unlimited.  It is also easy to show that, small as it may be, there is a non-zero probability of a long string of losing trades that would bankrupt the trader before he was able to recoup all his losses.  Still, the prospect offered by the Martingale strategy is an alluring one: the idea that, no matter what the underlying trading strategy, one can eventually be certain of winning.  And so a virtual cottage industry of money management techniques has evolved.
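
Both the seductive math and the lurking ruin are easy to demonstrate by simulation.  Below is a minimal sketch in Python; the $10,000 bankroll, $100 base bet and 50/50 win probability are assumed purely for illustration:

import random

def martingale_ruin(bankroll=10_000, base_bet=100, n_bets=1_000, p_win=0.5):
    """Simulate one Martingale sequence; return True if the bettor goes bust."""
    equity, bet = bankroll, base_bet
    for _ in range(n_bets):
        if bet > equity:      # cannot fund the next doubled bet: ruin
            return True
        if random.random() < p_win:
            equity += bet
            bet = base_bet    # reset to the base bet after a win
        else:
            equity -= bet
            bet *= 2          # double down after a loss
    return False

trials = 10_000
busts = sum(martingale_ruin() for _ in range(trials))
print(f"Probability of ruin within 1,000 bets: {busts / trials:.1%}")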

One of the reasons why the money management concept is prevalent in the futures industry, compared to, say, equities or forex, is simply the trading mechanics.  Doubling the size of a position in futures might mean trading an extra contract, or perhaps a ten-lot; doing the same in equities might mean scaling into and out of multiple positions comprising many thousands of shares.  The execution risk and cost of trying to implement a money management program in equities has historically made the idea infeasible, although that is less true today, given the decline in commission rates and the arrival of smart execution algorithms.  Still, money management is a concept that originated in the futures industry and will forever be associated with it.

Van Tharp on Position Sizing
I was recently recommended to read Van Tharp's Definitive Guide to Position Sizing, which devotes several hundred pages to the subject.  Leaving aside the great number of pages of simulation results, there is much to commend it.  Van Tharp does a pretty good job of demolishing highly speculative and very dangerous "money management" techniques such as the Kelly Criterion and Ralph Vince's Optimal f, which make unrealistic assumptions of one kind or another: for example, that there are only two outcomes, rather than the multiple possibilities from a trading strategy, or that only the outcome of a single trade matters, rather than a succession of trades (whose outcomes may not be independent).  Just as with the Martingale, these techniques will often produce unacceptably large drawdowns.  In fact, as I have pointed out elsewhere, the leverage that many so-called money management techniques call for actually increases the risk of the original strategy, often reducing its risk-adjusted return.
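
To see the drawdown problem concretely, consider the simplest binary-outcome case, the only case to which the Kelly formula strictly applies.  The sketch below (Python, with an assumed 55% win rate and even-money payoff) computes the Kelly fraction and the resulting peak-to-trough drawdown; even with the formula's assumptions satisfied, betting full Kelly typically produces very deep drawdowns:

import numpy as np

rng = np.random.default_rng(42)
p, b = 0.55, 1.0                     # assumed win probability and payoff ratio
f_kelly = p - (1 - p) / b            # Kelly fraction for a binary bet: f* = p - q/b

# Simulate 1,000 trades at full Kelly and track the worst peak-to-trough drawdown
outcomes = rng.choice([b, -1.0], size=1_000, p=[p, 1 - p])
equity = np.cumprod(1 + f_kelly * outcomes)
drawdown = 1 - equity / np.maximum.accumulate(equity)
print(f"Kelly fraction: {f_kelly:.0%}, max drawdown: {drawdown.max():.0%}")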

As Van Tharp points out, mathematical literacy is not one of the strongest suits of futures traders in general and the money management strategy industry reflects that.

But Van Tharp himself is not immune to misunderstanding mathematical concepts.  His central idea is that trading systems should be rated according to their System Quality Number (SQN), which he defines as:

SQN  = (Expectancy / standard deviation of R) * square root of Number of Trades

R is a central concept of Van Tharp’s methodology, which he defines as how much you will lose per unit of your investment.  So, for example, if you buy a stock today for $50 and plan to sell it if it reaches $40,  your R is $10.  In cases like this you have a clear definition of your R.  But what if you don’t?  Van Tharp sensibly recommends you use your average loss as an estimate of R.

Expectancy, as Van Tharp defines it, is just the expected profit per trade of the system expressed as a multiple of R.  So

SQN = ( (Average Profit per Trade / R) / Standard Deviation (Average Profit per Trade / R) ) * square root of Number of Trades

Squaring both sides of the equation, we get:

SQN^2 = ( (Average Profit per Trade)^2 / R^2 ) / Variance (Average Profit per Trade / R) * Number of Trades

Since Variance (Average Profit per Trade / R) = Variance (Average Profit per Trade) / R^2, the R-squared terms cancel out, leaving the following:

SQN^2 = ( (Average Profit per Trade)^2 / Variance (Average Profit per Trade) ) * Number of Trades

Hence,

SQN = (Average Profit per Trade / Standard Deviation (Average Profit per Trade)) * square root of Number of Trades

There is another name by which this measure is more widely known in the investment community:  the Sharpe Ratio.
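
The equivalence is easy to verify numerically.  In the Python sketch below (using simulated per-trade P&L), the choice of R has no effect on the SQN, which is simply the per-trade Sharpe Ratio scaled by the square root of the number of trades:

import numpy as np

rng = np.random.default_rng(1)
pnl = rng.normal(50, 400, size=250)    # simulated per-trade P&L

def sqn(pnl, R):
    """Van Tharp's SQN: mean R-multiple over its standard deviation, times sqrt(N)."""
    r = pnl / R
    return r.mean() / r.std(ddof=1) * np.sqrt(len(r))

sharpe_sqrt_n = pnl.mean() / pnl.std(ddof=1) * np.sqrt(len(pnl))
print(sqn(pnl, R=10), sqn(pnl, R=250), sharpe_sqrt_n)   # all identical: R cancels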

On the “Optimal” Position Sizing Strategy
In my view,  Van Tharp’s singular achievement has been to spawn a cottage industry out of restating a fact already widely known amongst investment professionals, i.e. that one should seek out strategies that maximize the Sharpe Ratio.

Not that seeking to maximize the Sharpe Ratio is a bad idea – far from it.  But then Van Tharp goes on to suggest that one should consider only strategies with an SQN greater than 2, ideally much higher (he mentions SQNs of the order of 3-6).

But 95% or more of investable strategies have a Sharpe Ratio less than 2.  In fact, in the world of investment management a Sharpe Ratio of 1.5 is considered very good.  Barely a handful of funds have demonstrated an ability to maintain a Sharpe Ratio of greater than 2 over a sustained period (Jim Simons' Renaissance Technologies being one of them).  Only in the world of high frequency trading do strategies typically attain the kind of Sharpe Ratio (or SQN) that Van Tharp advocates.  So while Van Tharp's intentions are well-meaning, his prescription is unrealistic for the majority of investors.

One recommendation of Van Tharp’s that should be taken seriously is that there is no single “best” money management strategy that suits every investor.  Instead, position sizing should be evolved through simulation, taking into account each trader or investor’s preferences in terms of risk and return.  This makes complete sense: a trader looking to make 100% a year and willing to risk 50% of his capital is going to adopt a very different approach to money management, compared to an investor who will be satisfied with a 10% return, provided his risk of losing money is very low.  Again, however, there is nothing new here:  the problem of optimal allocation based on an investor’s aversion to risk has been thoroughly addressed in the literature for at least the last 50 years.

What about the Equity Curve Money Management strategy I discussed in a previous post?  Isn’t that a kind of Martingale?  Yes and no.  Indeed, the strategy does require us to increase the original investment after a period of loss. But it does so, not after a single losing trade, but after a series of losses from which the strategy is showing evidence of recovering.  Furthermore, the ECMM system caps the add-on investment at some specified level, rather than continuing to double the trade size after every loss, as in a Martingale.

But the critical difference between the ECMM and the standard Martingale lies in the assumptions about dependency in the returns of the underlying strategy. In the traditional Martingale, profits and losses are independent from one trade to the next.  By contrast, scenarios where ECMM is likely to prove effective are ones where there is dependency in the underlying strategy, more specifically, negative autocorrelation in returns over some horizon.  What that means is that periods of losses or lower returns tend to be followed by periods of gains, or higher returns.  In other words, ECMM works when the underlying strategy has a tendency towards mean reversion.
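
A practical corollary is that, before applying ECMM, it is worth checking the strategy's returns for the negative autocorrelation the technique depends on.  A minimal sketch (the return series below is a random placeholder; substitute your own strategy's period returns):

import numpy as np

def autocorr(returns, lag=1):
    """Sample autocorrelation of a return series at the given lag."""
    r = np.asarray(returns) - np.mean(returns)
    return float(np.dot(r[:-lag], r[lag:]) / np.dot(r, r))

rng = np.random.default_rng(7)
returns = rng.normal(0.001, 0.01, size=500)   # placeholder strategy returns
print(f"Lag-1 autocorrelation: {autocorr(returns):+.3f}")   # ECMM wants this negative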

CONCLUSION
The futures industry has spawned a myriad of position sizing strategies.  Many are impractical, or positively dangerous, leading as they do to significant risk of catastrophic loss.  Generally, investors should seek out strategies with higher Sharpe Ratios, and use money management techniques only to improve the risk-adjusted return.  But there is no universal money management methodology that will suit every investor.  Instead, money management should be conditioned on each individual investor's risk preferences.


High Frequency Trading with ADL

Trading Technologies' ADL is a visual programming language designed specifically for trading strategy development that is integrated into the company's flagship XTrader product.  Despite the radically different programming philosophy, my experience of working with ADL has been delightfully easy, and strategies that would typically take many months of coding in C++ have been up and running in a matter of days or weeks.  An extract of one such strategy, a high frequency scalping trade in the E-Mini S&P 500 futures, is shown in the graphic above.  The interface and visual language are so intuitive to a trading system developer that even someone who has never seen ADL before can quickly grasp at least some of what is happening in the code.

Strategy Development in Low vs. High-Level Languages
What are the benefits of using a high level language like ADL compared to languages like C++/C# or Java that are traditionally used for trading system development?  The chief advantage is speed of development: I would say that ADL offers the potential to speed up the development process by at least one order of magnitude.  A complex trading system that would otherwise take months or even years to code and test in C++ or Java can be implemented successfully and put into production in a matter of weeks in ADL.  In this regard, the advantage in speed of development is one shared by many high level languages, including, for example, Matlab, R and Mathematica.  But in ADL's case the advantage in terms of time to implementation is aided by the fact that, unlike generalist tools such as Matlab, ADL is designed specifically for trading system development.  The ADL development environment comes equipped with compiled pre-built blocks designed to accomplish many of the common tasks associated with any trading system, such as acquiring market data and handling orders.  Even complex spread trades can be developed extremely quickly due to the very comprehensive library of pre-built blocks.

Integrating Research and Development
One of the drawbacks of using a higher level language for building trading systems is that, being interpreted rather than compiled, they are simply too slow – one or more orders of magnitude, typically – to be suitable for high frequency trading.  I will come on to discuss the execution speed issue a little later.  For now, let me bring up a second major advantage of ADL relative to other high level languages, as I see it.

One of the issues that plagues trading system development is the difficulty of communication between researchers, who understand financial markets well, but systems architecture and design rather less so, and developers, whose skill set lies in design and programming, but whose knowledge of markets can often be sketchy.  These difficulties are heightened where researchers might be using a high level language and relying on developers to re-code their prototype system to get it into production.  Developers typically (and understandably) demand a high degree of specificity about the requirement, and if it's not included in the spec it won't be in the final deliverable.  Unfortunately, developing a successful trading system is a highly non-linear process and a researcher will typically have to iterate around the core idea repeatedly until they find a combination of alpha signal and entry/exit logic that works.  In other words, researchers need flexibility, whereas developers require specificity.  ADL helps address this issue by providing a development environment that is at once highly flexible and at the same time powerful enough to meet the demands of high frequency trading in a production environment.  It means that, in theory, researchers and developers can speak a common language and use a common tool throughout the R&D cycle.  This is likely to reduce the kind of misunderstandings between researchers and developers that commonly arise (often setting back the implementation schedule significantly when they do).

Latency
Of course, at least some of the theoretical benefit of using ADL depends on execution speed.  The way the problem is typically addressed with systems developed in high level languages like Matlab or R is to recode the entire system in something like C++, or to recode some of the most critical elements and plug those back into the main Matlab program as DLLs.  The latter approach works, and preserves the most important benefits of working in both high and low level languages, but the resulting system is likely to be sub-optimal and can be difficult to maintain.  The approach taken by Trading Technologies with ADL is very different.  Firstly, the component blocks are written in C# and in compiled form should run about as fast as native code.  Secondly, systems written in ADL can be deployed immediately on a co-located algo server that is plugged directly into the exchange, thereby reducing latency to an acceptable level.  While this is unlikely to be sufficient for an ultra-high frequency system operating on the sub-millisecond level, it will probably suffice for high frequency systems that operate at speeds above a few milliseconds, trading up to, say, around 100 times a day.

Fill Rate and Toxic Flow
For those not familiar with the HFT territory, let me provide an example of why the issues of execution speed and latency are so important.  Below is a simulated performance record for an HFT system in ES futures.  The system is designed to enter and exit using limit orders and trades around 120 times a day, with over 98% profitability, if we assume a 100% fill rate.

Monthly PNL 1
Perf Summary 1

So far so good.  But a 100% fill rate is clearly unrealistic.  Let's look at a pessimistic scenario: what if we got filled on orders only when the limit price was exceeded?  (For those familiar with the jargon, we are assuming a high level of flow toxicity.)  The outcome is rather different:

Perf Summary 2

Neither scenario is particularly realistic, but the outcome is much more likely to be closer to the second scenario than the first if our execution speed is slow, or if we are using a retail platform such as Interactive Brokers or Tradestation, with long latency wait times.  The reason is simple: our orders will always arrive late and join the limit order book at the back of the queue.  In most cases the orders ahead of ours will exhaust demand at the specified limit price and the market will trade away without filling our order.  At other times the market will fill our order whenever there is a large flow against us (a surge of sell orders into our limit buy), i.e. when there is significant toxic flow.  The proposition is that, using ADL and its high-speed trading infrastructure, we can hope to avoid the latter outcome.  While we will never come close to achieving a 100% fill rate, we may come close enough to offset the inevitable losses from toxic flow and produce a decent return.  Whether ADL is capable of fulfilling that potential remains to be seen.
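
The difference between the two fill assumptions is simple to express in a back-test.  A minimal sketch for a limit buy order (Python; the prices are hypothetical):

def limit_buy_fill(limit_price, bar_low, pessimistic=False):
    """Return the fill price for a limit buy order, or None if unfilled.

    Optimistic: filled whenever the market touches our limit price.
    Pessimistic: filled only when price trades THROUGH the limit,
    i.e. mostly when the order is hit by toxic (adverse) flow."""
    filled = bar_low < limit_price if pessimistic else bar_low <= limit_price
    return limit_price if filled else None

# A bar whose low exactly touches our limit of 1850.25:
print(limit_buy_fill(1850.25, 1850.25))                     # 1850.25 - optimistic fill
print(limit_buy_fill(1850.25, 1850.25, pessimistic=True))   # None - no fill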

More on ADL
For more information on ADL go here.


Equity Curve Money Management

Amongst a wide variety of money management methods that have evolved over the years, a perennial favorite is the use of the equity curve to guide position sizing.  The most common version of this technique is to add to the existing position (whether long or short) depending on the relationship between the current value of the account equity (realized + unrealized P&L) and its moving average.  According to whether you believe that the equity curve is momentum driven, or mean reverting, you will add to your existing position when the equity curve moves above (or, in the case of mean reversion, below) its long-term moving average.

In this article I want to discuss a slightly different version of equity curve money management, which is mean-reversion oriented.  The underlying thesis is that your trading strategy has good profit characteristics, and while it suffers from the occasional, significant drawdown, it can be expected to recover from the downswings.  You should therefore be looking to add to your positions when the equity curve moves down sufficiently, in the expectation that the trading strategy will recover.  The extra contracts you add to your position during such downturns will increase the overall P&L.  To illustrate the approach I am going to use a low frequency strategy on the S&P500 E-mini futures contract (ES).  The performance of the strategy is summarized in the chart and table below.

EC PNL


The overall results of the strategy are not bad: at over 87%, the win rate is high, as too is the profit factor of 2.72.  And the strategy's performance, although hardly stellar, has been quite consistent over the period from 1997.  That said, most of the profits derive from the long side, and the strategy suffers from the occasional large loss, including a significant drawdown of over 18% in 2000.

I am going to use this underlying strategy to illustrate how its performance can be improved with equity curve money management (ECMM).  To start, we calculate a simple moving average of the equity curve, as before.  However, in this variation of ECMM we then calculate offsets that are a number of standard deviations above or below the moving average.  Typical default values might be a moving average length of 50 bars for a daily series, with trigger levels set at, say, +/- 2 standard deviations above and below the moving average.  The idea is that we add to our position when the equity curve falls below the lower threshold level (moving average - 2x S.D.) and then crosses back above it again.  This is similar to how a trader might use Bollinger bands, or an oscillator like Stochastics.  The chart below illustrates the procedure.

Chart with ECMM

The lower and upper trigger levels are shown as green and yellow lines in the chart indicator (note that in this variant of ECMM we only use the lower level to add to positions).

After a significant drawdown early in October the equity curve begins to revert and crosses back over the lower threshold level on Oct 21.  Applying our ECMM rule, we add to our existing long position the next day, Oct 22 (the same procedure would apply to adding to short positions).  As you can see, our money management trade worked out very well, since the EC did continue to mean-revert as expected. We closed the trade on Nov 11, for a substantial, additional profit.

Now that we have illustrated the procedure, let's begin to explore the potential of the ECMM idea in more detail.  The first important point is to understand what ECMM will NOT do: reduce risk.  Like all money management techniques that are designed to pyramid into positions, ECMM will INCREASE risk, leading to higher drawdowns.  But ECMM should also increase profits: so the question is whether the potential for greater profits is sufficient to offset the risk of greater losses.  If not, then there is a simpler alternative method of increasing profits: simply increase position size!  It follows that one of the key metrics of performance to focus on in evaluating this technique is the ratio of PL to drawdown.  Let's look at some examples for our baseline strategy.

Single Entry, 2SD

The chart shows the effect of adding a specified number of contracts to our existing long or short position whenever the equity curve crosses back above the lower trigger level, which in this case is set at 2xS.D below the 50-day moving average of the equity curve.  As expected, the overall strategy P&L increases linearly in line with the number of additional contracts traded, from a base level of around $170,000, to over $500,000 when we trade an additional five contracts.  So, too, does the profit factor rise from around 2.7 to around 5.0. That’s where the good news ends. Because, just as the strategy PL increases, so too does the size of the maximum drawdown, from $(18,500) in the baseline case to over $(83,000) when we trade an additional five contracts.  In fact, the PL/Drawdown ratio declines from over 9.0 in the baseline case, to only 6.0 when we trade the ECMM strategy with five additional contracts.  In terms of risk and reward, as measured by the PL/Drawdown ratio, we would be better off simply trading the baseline strategy:  if we traded 3 contracts instead of 1 contract, then without any money management at all we would have made total profits of around $500,000, but with a drawdown of just over $(56,000).  This is the same profit as produced with the 5-contract ECMM strategy, but with a drawdown that is $23,000 smaller.

How does this arise?  Quite simply, our ECMM money management trades are not all automatic winners from the get-go (even if they eventually produce profits).  In some cases, having crossed above the lower threshold level, the equity curve will subsequently cross back down below it again.  As it does so, the additional contracts we have traded add to the strategy drawdown.

This suggests that there might be a better alternative.  How about if, instead of doing a single ECMM trade for, say, 5 additional contracts, we add one additional contract each time the equity curve crosses above the lower threshold level?  Sure, we might give up some extra profits, but our drawdown should be lower, right?  That turns out to be true.  Unfortunately, however, profits are impacted more than the drawdown, with the result that the PL/Drawdown ratio shows the same precipitous decline:

Multiple Entry, 2SD

Once again, we would be better off trading the baseline strategy in larger size, rather than using ECMM, even when we scale into the additional contracts.

What else can we try?  An obvious trick is to tweak the threshold levels, by adjusting the number of standard deviations at which the trigger levels are set.  Intuitively, it might seem that the obvious thing to do is set the threshold levels further apart, so that ECMM trades are triggered less frequently.  But, as it turns out, this does not produce the desired effect.  Instead, counter-intuitively, we have to set the threshold levels CLOSER to the moving average, at only +/- 1x S.D.  The results are shown in the chart below.

Single Entry, 1SD

With these settings, the strategy PL and profit factor increase linearly, as before.  So too does the strategy drawdown, but at a slower rate.  As a consequence, the PL/Drawdown ratio actually RISES, before declining at a moderate pace.  Looking at the chart, it is apparent that the optimal setting is trading two additional contracts, with the threshold set one standard deviation below the 50-day moving average of the equity curve.

Below are the overall results.  With these settings the baseline strategy plus ECMM produces total profits of $334,000, a profit factor of 4.27 and a drawdown of $(35,212), making the PL/Drawdown ratio 9.50.  Producing the same rate of profits using the baseline strategy alone would require us to trade two contracts, producing a slightly higher drawdown of almost $(37,000).  So our ECMM strategy has increased overall profitability on a risk-adjusted basis.

EC with ECMM PNL

CONCLUSION

It is certainly feasible to improve not only the overall profitability of a strategy using equity curve money management, but also its risk-adjusted performance.  Whether ECMM will have much effect depends on the specifics of the underlying strategy and the levels at which the ECMM parameters are set.  These can be optimized on a walk-forward basis.

EASYLANGUAGE CODE

Inputs:
	MALen(50),                 { lookback for the equity curve moving average }
	SDMultiple(2),             { number of standard deviations for the trigger levels }
	PositionMult(1),           { multiple of the current position to add }
	ExitAtBreakeven(False);    { optionally scale back once the equity curve recovers }

Vars:
	OpenEquity(0),
	EquitySD(0),
	EquityMA(0),
	UpperEquityLevel(0),
	LowerEquityLevel(0),
	NShares(0);

OpenEquity = OpenPositionProfit + NetProfit;
EquitySD = StdDev(OpenEquity, MALen);
EquityMA = Average(OpenEquity, MALen);
UpperEquityLevel = EquityMA + SDMultiple * EquitySD;
LowerEquityLevel = EquityMA - SDMultiple * EquitySD;
NShares = CurrentContracts * PositionMult;

{ Add to the existing position when the equity curve crosses back above the lower trigger }
If OpenEquity crosses above LowerEquityLevel then begin
	If MarketPosition > 0 then
		Buy ("EnMark-LMM") NShares shares next bar at market;
	If MarketPosition < 0 then
		Sell Short ("EnMark-SMM") NShares shares next bar at market;
end;

{ Optionally scale back to a single contract when the equity curve recovers to its moving average }
If ExitAtBreakeven then begin
	If OpenEquity crosses above EquityMA then begin
		If MarketPosition = 1 and CurrentContracts > 1 then
			Sell ("ExBE-LMM") (CurrentContracts - 1) shares next bar at market;
		If MarketPosition = -1 and CurrentContracts > 1 then
			Buy to Cover ("ExBE-SMM") (CurrentContracts - 1) shares next bar at market;
	end;
end;


Building Systematic Strategies – A New Approach

Anyone active in the quantitative space will tell you that it has become a great deal more competitive in recent years.  Many quantitative trades and strategies are a lot more crowded than they used to be and returns from existing  strategies are on the decline.

THE CHALLENGE


Meanwhile, costs have been steadily rising, as the technology arms race has accelerated, with more money being spent on hardware, communications and software than ever before.  As lead times to develop new strategies have risen, the cost of acquiring and maintaining expensive development resources has spiraled upwards.  It is getting harder to find new, profitable strategies, due in part to the over-grazing of existing methodologies and data sets (the E-Mini futures, for example).

There has, too, been a change in the direction of quantitative research in recent years.  Where once it was simply a matter of acquiring the fastest pipe to as many relevant locations as possible, the marginal benefit of each extra $ spent on infrastructure has since fallen rapidly.  New strategy research and development is now more model-driven than technology-driven.

THE OPPORTUNITY


What is needed at this point is a new approach:  one that accelerates the process of identifying new alpha signals, prototyping and testing new strategies and bringing them into production, leveraging existing battle-tested technologies and trading platforms.

 

GENETIC PROGRAMMING

Genetic programming, which has been around since the 1990s, when its use was pioneered in proteomics, enjoys significant advantages over traditional research and development methodologies.


GP is an evolutionary-based algorithmic methodology in which a system is given a set of simple rules, some data, and a fitness function that produces desired outcomes from combining the rules and applying them to the data.   The idea is that, by testing large numbers of possible combinations of rules, typically in the  millions, and allowing the most successful rules to propagate, eventually we will arrive at a strategy solution that offers the required characteristics.
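
The mechanics can be sketched in a few lines of Python.  The toy example below evolves a population of parameter vectors against an arbitrary fitness function; a production GP system would instead evolve trees of trading rules, with a fitness function encoding the desired strategy characteristics:

import random

def mutate(rule, rate):
    """Perturb each numeric parameter of a rule with probability `rate`."""
    return [p + random.gauss(0, 1) if random.random() < rate else p for p in rule]

def evolve(population, fitness, n_generations=100, rate=0.1):
    """Score the rules, keep the fittest half, breed mutated copies of the survivors."""
    for _ in range(n_generations):
        survivors = sorted(population, key=fitness, reverse=True)[: len(population) // 2]
        children = [mutate(random.choice(survivors), rate) for _ in survivors]
        population = survivors + children
    return max(population, key=fitness)

# Toy usage: recover the maximum of a made-up fitness surface
pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(50)]
best = evolve(pop, fitness=lambda r: -(r[0] - 1) ** 2 - (r[1] + 2) ** 2)
print(best)   # converges towards [1, -2]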

ADVANTAGES OF GENETIC PROGRAMMING

The potential benefits of the GP approach are considerable: not only are strategies developed much more quickly and cost effectively (the price of some software and a single CPU vs. a small army of developers), the process is much more flexible.  The inflexibility of the traditional approach to R&D is one of its principal shortcomings.  The researcher produces a piece of research that is subsequently passed on to the development team.  Developers are usually extremely rigid in their approach: when asked to deliver X, they will deliver X, not some variation on X.  Unfortunately research is not an exact science: what looks good in a back-test environment may not pass muster when implemented in live trading.  So researchers need to "iterate around" the idea, trying different combinations of entry and exit logic, for example, until they find a variant that works.  Developers are lousy at this; GP systems excel at it.

CHALLENGES FOR THE GENETIC PROGRAMMING APPROACH

So enticing are the potential benefits of GP that it prompts the question of why the approach hasn't been adopted more widely.  One reason is the strong preference amongst researchers for an understandable – and testable – investment thesis.  Researchers – and, more importantly, investors – are much more comfortable if they can articulate the premise behind a strategy.  Even if a trade turns out to be a loser, we are generally more comfortable buying a stock on the supposition of, say, a positive outcome of a pending drug trial, than we are if required to trust the judgment of a black box, whose criteria are inherently unobservable.


Added to this, the GP approach suffers from three key drawbacks:  data sufficiency, data mining and over-fitting.  These are so well known that they hardly require further rehearsal.  There have been many adverse outcomes resulting from poorly designed mechanical systems curve fitted to the data. Anyone who was active in the space in the 1990s will recall the hype over neural networks and the over-exaggerated claims made for their efficacy in trading system design.  Genetic Programming, a far more general and powerful concept,  suffered unfairly from the ensuing adverse publicity, although it does face many of the same challenges.

A NEW APPROACH

I began working in the field of genetic programming in the 1990s, with my former colleague Haftan Eckholdt, at that time head of neuroscience at Yeshiva University, and we founded a hedge fund, Proteom Capital, based on that approach (largely due to Haftan's research).  My colleagues at Systematic Strategies and I have continued to work on GP-related ideas over the last twenty years, and during that period we have developed a methodology that addresses the weaknesses that have held back genetic programming from widespread adoption.


Firstly, we have evolved methods for transforming original data series that enable us to avoid over-using the same old data sets and, more importantly, allow new patterns to be revealed in the underlying market structure.  This effectively eliminates the data mining bias that has plagued the GP approach.  At the same time, because our process produces a stronger signal relative to the background noise, we consume far less data – typically no more than a couple of years' worth.

Secondly, we have found we can enhance the robustness of prototype strategies by using double-blind testing: i.e. data sets on which the performance of the model remains unknown to the machine, or the researcher, prior to the final model selection.
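
In outline, this amounts to a disciplined three-way chronological split of the data.  The Python sketch below illustrates the principle only; it is not our production process:

def chronological_split(series, train_frac=0.5, validate_frac=0.25):
    """Split a time series into train / validation / blind holdout sets.

    Models are fitted on the training set and candidates compared on the
    validation set; the holdout is scored exactly once, after final model
    selection, so its performance cannot leak into the selection process."""
    n = len(series)
    i = int(n * train_frac)
    j = int(n * (train_frac + validate_frac))
    return series[:i], series[i:j], series[j:]

train, validate, holdout = chronological_split(list(range(1_000)))
print(len(train), len(validate), len(holdout))   # 500 250 250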

Finally, we are able to test not only the alpha signal, but also multiple variations of the trade expression, including different types of entry and exit logic, as well as profit targets and stop loss constraints.

OUTCOMES:  ROBUST, PROFITABLE STRATEGIES


Taken together, these measures enable our GP system to produce strategies that not only have very high performance characteristics, but are also extremely robust.  So, for example, having constructed a model using data only from the continuing bull market in equities in 2012 and 2013, the system is nonetheless capable of producing strategies that perform extremely well when tested out of sample over the highly volatile bear market conditions of 2008/09.

So stable are the results produced by many of the strategies, and so well risk-controlled, that it is possible to deploy leveraged money management techniques, such as Vince's fixed fractional approach.  Money management schemes take advantage of the high level of consistency in performance to increase the capital allocation to the strategy in a way that boosts returns without incurring a high risk of catastrophic loss.  You can judge the benefits of applying these kinds of techniques in some of the strategies we have developed in equity, fixed income, commodity and energy futures, which are described below.

CONCLUSION

After 20-30 years of incubation, the Genetic Programming approach to strategy research and development has come of age. It is now entirely feasible to develop trading systems that far outperform the overwhelming majority of strategies produced by human researchers, in a fraction of the time and for a fraction of the cost.

SAMPLE GP SYSTEMS


(Sample strategy charts: emini, NG, SI and US futures, each shown with and without money management)


Volatility ETF Strategy – Oct 2014 Update: +4.66%

HIGHLIGHTS

  • CAGR over 40% annually
  • Sharpe ratio in excess  of 3
  • Max drawdown -13.40%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1,000 2012-2014

VolETFStrat

STRATEGY DESCRIPTION

The Systematic Strategies Volatility ETF  strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions, by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”), or statistical correlations, but generates a return derived from the ETF pricing methodology.  The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.

ANNUAL RETURNS

Ann Returns

MONTHLY RETURNS

Monthly Returns


RISK CONTROL

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

OPERATIONS
We operate fully redundant dual servers operating an algorithmic execution platform designed to minimize market impact and slippage.  The strategy is not latency sensitive.

 PERFORMANCE STATISTICS

PERFORMANCE STATS



A High Frequency Spread Strategy in VIX Futures

For anyone interested, this is what the equity curve looks like for the high frequency version of the VIX futures spread strategy.  The strategy was built on 1-min bars in 2010, and tested out-of-sample in 2011-2014.

The annual net PL is around $20,000 per spread.  Note that this is net of a bid-ask spread of 0.05 ($50) and commission/transaction costs of $20 per round turn.

These assumptions are only realistic on an HFT platform where you can work the spread to enter passively.  I have begun coding this in TT's ADL to see how it will perform.

High Freq Strategy Equity Curve

 

High Frequency Perf Results



A Calendar Spread Strategy in VIX Futures

I have been working on developing some high frequency spread strategies using Trading Technologies’ Algo Strategy Engine, which is extremely impressive (more on this in a later post).  I decided to take a time out to experiment with a slower version of one of the trades, a calendar spread in VIX futures that trades  the spread on the front two contracts.  The strategy applies a variety of trend-following and mean-reversion indicators to trade the spread on a daily basis.

Modeling a spread strategy on a retail platform like Interactive Brokers or TradeStation is extremely challenging, due to the limitations of the platform and the EasyLanguage programming language compared to professional platforms that are built for purpose, like TT's XTrader and development tools like ADL.  If you backtest strategies based on signals generated from the spread calculated using the last traded prices of the two securities, you will almost certainly see "phantom trades" – trades that could not have been executed at the indicated spread price (for example, because both contracts last traded on the same side of the bid/ask spread).  You also can't easily simulate passive entry or exit strategies, which typically constrains you to using market orders for both legs, in and out of the spread.  On the other hand, while using market orders would almost certainly be prohibitively expensive in a high frequency or daytrading context, in a low-frequency scenario the higher transaction costs entailed in aggressive entries and exits are typically amortized over far longer time frames.
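
The phantom trade problem boils down to using last-traded prices where executable quotes are required.  A minimal sketch of the correct calculation (Python; the quotes are hypothetical):

def executable_spread_quotes(leg1_bid, leg1_ask, leg2_bid, leg2_ask):
    """Spread levels at which a trade could genuinely have been executed.

    To buy the spread you pay the offer on leg 1 and hit the bid on leg 2;
    to sell it you hit the bid on leg 1 and pay the offer on leg 2.
    Back-testing against last-traded prices instead is what creates
    phantom trades."""
    spread_ask = leg1_ask - leg2_bid   # cost of buying the spread aggressively
    spread_bid = leg1_bid - leg2_ask   # proceeds of selling it aggressively
    return spread_bid, spread_ask

# Both legs last traded on the bid, so the last-price spread of -0.75 looks
# 0.05 cheaper to buy than the -0.70 actually available:
print(executable_spread_quotes(17.10, 17.15, 17.85, 17.90))   # approx (-0.80, -0.70)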

In the following example I have allowed transaction costs of $100 per round turn and slippage of $0.1 (equivalent to $100) per spread.  Daily settlement prices from Mar 2004 to June 2010 were used to fit the model, which was tested out of sample in the period July 2010 to June 2014. Results are summarized in the chart and table below.

Even burdened with significant transaction cost assumptions, the strategy performance looks impressive on several counts, notably a profit factor in excess of 300, a win rate of over 90% and a Sortino Ratio of over 6.  These features of the strategy prove robust (and even improve) during the four-year out-of-sample period, although the annual net profit per spread declines to around $8,500, from $36,600 for the in-sample period.  Even so, this being a straightforward calendar spread, it should be possible to trade the strategy in size at relatively modest margin cost, making the strategy return highly attractive.

Equity Curve

Performance Results


Volatility ETF Strategy – Sept 2014 Update

HIGHLIGHTS

  • CAGR over 40% annually
  • Sharpe ratio in excess  of 3
  • Max drawdown -4.3%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1,000 2012-2014

VALUE OF $1000

STRATEGY DESCRIPTION

The Systematic Strategies Volatility ETF  strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions, by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”), or statistical correlations, but generates a return derived from the ETF pricing methodology.  The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.

ANNUAL RETURNS

Ann Returns

MONTHLY RETURNS

Monthly Returns

RISK CONTROL

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

OPERATIONS
We operate fully redundant dual servers operating an algorithmic execution platform designed to minimize market impact and slippage.  The strategy is not latency sensitive.

 PERFORMANCE STATISTICS

PERFORMANCE STATS



Not the Market Top

Our most reliable market timing indicator is a  system that “trades” the CBOE VIX Index, a measure of option volatility in the S&P500 Index.  While the VIX index itself is not tradable, the system provides a signal that can be used to trade products such as VIX futures, or ETF products like the VXX and XIV.  Since the VIX index is correlated negatively with the market, the system can also provide a very useful signal to time market entries and exits (or when to add to positions) in equity portfolios.

Since 1992 the system has "traded" 238 times, with 81% accuracy (i.e. roughly 8 out of 10 trades were profitable).  The profit percentage is even higher on the long side – around 89%, although short signals tend to be more frequent than long signals by a factor of more than 2:1.

VIX Signals

Since the start of 2014 the system has issued 9 signals, 7 of which were profitable.  The latest signal was generated on July 11, when the system went short the VIX at 12.08.  At the time of writing, the trade is underwater, with the VIX at around the 14 level.  It is not at all uncommon for a trade to lose money initially, and this one may still work out.  The more important point, however, is this: the system is not behaving as it did during previous market crashes in 2000-01 and 2008-09, periods in which it made very large gains of 42% and 28%, respectively.  The more modest return of +1.59% in 2014 suggests that the market has not yet entered the long-awaited correction anticipated by so many.  Indeed, I would hazard a prediction that we will see a return to the 2,000 level in the S&P500 before any such correction occurs.  The merchants of doom may  have to wait a little while longer for their worst case scenario to play out.

VIX Strategy Report

 


 

 


What Wealth Managers and Family Offices Need to Understand About Alternative Investing


The most recent Morningstar survey provides an interesting snapshot of the state of the alternatives market.  In 2013, for the third successive year, liquid alternatives was the fastest growing category of mutual funds, drawing in flows totaling $95.6 billion.  The fastest growing subcategories have been long-short stock funds (growing more than 80% in 2013), nontraditional bond funds (79%) and “multi-alternative” fund-of-alts-funds products (57%).

Benchmarking Alternatives
The survey also provides some interesting insights into the misconceptions about alternative investments that remain prevalent amongst advisors, despite contrary indications provided by long-standing academic research.  According to Morningstar, a significant proportion of advisors continue to use inappropriate benchmarks, such as the S&P 500 or Russell 2000, to evaluate alternatives funds (see Some advisers using ill-suited benchmarks to measure alts performance by Trevor Hunnicutt, Investment News July 2014).  As Investment News points out, the problem with applying standards developed to measure the performance of funds that are designed to beat market benchmarks is that many alternative funds are intended to achieve other investment goals, such as reducing volatility or correlation.  These funds will typically have under-performed standard equity indices during the bull market, causing investors to jettison them from their portfolios at a time when the additional protection they offer may be most needed.

This is but one example in a broader spectrum of issues about alternative investing that are poorly understood.  Even where advisors recognize the need for a more appropriate hedge fund index to benchmark fund performance, several traps remain for the unwary.  As shown in Brooks and Kat (The Statistical Properties of Hedge Fund Index Returns and Their Implications for Investors, Journal of Financial and Quantitative Analysis, 2001), there can be considerable heterogeneity between indices that aim to benchmark the same type of strategy, since indices tend to cover different parts of the alternatives universe.  There are also significant differences between indices in terms of their survivorship bias – the tendency to overstate returns by ignoring poorly performing funds that have closed down (see Welcome to the Dark Side – Hedge Fund Attribution and Survivorship Bias, Amin and Kat, Working Paper, 2002).  Hence, even amongst more savvy advisors, the perception of performance tends to be biased by the choice of index.

Risks and Benefits of Diversifying with Alternatives
An important and surprising discovery in relation to diversification with alternatives was revealed in Amin and Kat’s Diversification and Yield Enhancement with Hedge Funds (Working Paper, 2002).  Their study showed that the median standard deviation of a portfolio of stocks, bonds and hedge funds reached its lowest point where the allocation to alternatives was 50%, far higher than the 1%-5% typically recommended by advisors.

Standard Deviation of Portfolios of Stocks, Bonds and 20 Hedge Funds


Source: Diversification and Yield Enhancement with Hedge Funds, Amin and Kat, Working Paper, 2002

Another potential problem is that investors will not actually invest in the fund index that is used for benchmarking, but in a basket containing a much smaller number of funds, often through a fund of funds vehicle.  The discrepancy in performance between benchmark and basket can often be substantial in the alternatives space.

Amin and Kat studied this problem in 2002 (Portfolios of Hedge Funds, Working Paper, 2002), by constructing hedge fund portfolios ranging in size from 1 to 20 funds and measuring their performance on a number of criteria that included not just the average return and standard deviation, but also the skewness (a measure of the asymmetry of returns), the kurtosis (a measure of the probability of extreme returns) and the correlation with the S&P 500 Index and the Salomon (now Citigroup) Government Bond Index.  Their startling conclusion was that, in the alternatives space, diversification is not necessarily a good thing.  As expected, as the number of funds in the basket is increased, the overall volatility drops substantially; but at the same time skewness drops and kurtosis and market correlation increase significantly.  In other words, when adding more funds, the likelihood of a large loss increases and the diversification benefit declines.  The researchers found that, in most cases, a good approximation to a typical hedge fund index could be constructed with a basket of just 15 well-chosen funds.
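
The experiment is straightforward to replicate in outline.  The Python sketch below uses fat-tailed placeholder returns purely to illustrate the method; reproducing Amin and Kat's actual numbers would of course require their hedge fund return data:

import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
fund_returns = 0.02 * rng.standard_t(df=4, size=(120, 100))   # 120 months x 100 "funds"

for n in (1, 5, 10, 15, 20):
    stats = []
    for _ in range(2_000):   # random equally-weighted baskets of n funds
        cols = rng.choice(fund_returns.shape[1], size=n, replace=False)
        basket = fund_returns[:, cols].mean(axis=1)
        stats.append((basket.std(ddof=1), skew(basket), kurtosis(basket)))
    sd, sk, ku = np.median(np.array(stats), axis=0)
    print(f"{n:2d} funds: std {sd:.4f}  skew {sk:+.2f}  excess kurtosis {ku:+.2f}")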

Concerns about return distribution characteristics such as skewness and kurtosis may appear arcane, but these factors often become crucially important at just the wrong time, from the investor’s perspective.  When things go wrong in the stock market they also tend to go wrong for hedge funds, as a fall in stock prices is typically accompanied by a drop in market liquidity, a widening of spreads and, often, an increase in stock loan costs.  Equity market neutral and long/short funds that are typically long smaller cap stocks and short larger cap stocks will pay a higher price for the liquidity they need to maintain neutrality.  Likewise, a market sell-off is likely to lead to postponing of M&A transactions that will have a negative impact on the performance of risk arbitrage funds.  Nor are equity-related funds the only alternatives likely to suffer during a market sell-off.  A market fall will typically be accompanied by widening credit spreads, which in turn will damage the performance of fixed income and convertible arbitrage funds.   The key point is that, because they all share this risk, diversification among different funds will not do much to mitigate it.

Conclusions
Many advisors remain wedded to using traditional equity indices that are inappropriate benchmarks for alternative strategies.  Even where more relevant indices are selected, they may suffer from survivorship and fund-selection bias.

In order to reap the diversification benefit from alternatives, research shows that investors should concentrate a significant proportion of their wealth in a limited number of alternatives funds, a portfolio strategy that is diametrically opposed to the "common sense" approach of many advisors.

Finally, advisors often overlook the latent correlation and liquidity risks inherent in alternatives that come into play during market down-turns, at precisely the time when investors are most dependent on diversification to mitigate market risk.  Such risks can be managed, but only by paying attention to portfolio characteristics such as skewness and kurtosis, which alternative funds significantly impact.

 
