Volatility ETF Strategy June 2015: -0.13% | YTD: +13.99% | YTD Sharpe: 2.68

HIGHLIGHTS

  • 2015 YTD: +13.99%
  • CAGR over 40%
  • Sharpe ratio in excess of 3
  • Max drawdown -13.40%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1000

[Chart: Value of $1000 invested in the strategy]

NOTES FOR JUNE 2015

We went to cash in the latter half of June in view of the uncertainties over the situation in Greece.

STRATEGY DESCRIPTION

The Systematic Strategies Volatility ETF strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P 500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”) or statistical correlations, but generates a return derived from the ETF pricing methodology.

The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.

 

PERFORMANCE

The strategy is designed to produce consistent returns in the range of 25% to 40% annually, with annual volatility of around 10% and a Sharpe ratio in the region of 2.5 to 3.5.

[Chart: Annual Returns]

RISK CONTROL

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

[Chart: Sharpe Ratio]

OPERATIONS
We run fully redundant dual servers hosting an algorithmic execution platform designed to minimize market impact and slippage.  The strategy is not latency sensitive.

MONTHLY RETURNS

[Chart: Monthly Returns]

PERFORMANCE STATISTICS

[Table: Performance Statistics]

Disclaimer

Past performance does not guarantee future results. You should not rely on any past performance as a guarantee of future investment performance. Investment returns will fluctuate. Investment monies are at risk and you may suffer losses on any investment.

Posted in Uncategorized

The Case for Volatility as an Asset Class

Volatility as an asset class has grown up over the fifteen years since I started my first volatility arbitrage fund in 2000.  Caissa Capital grew to about $400m in assets before I moved on, while several of its rivals have gone on to manage assets in the multiple billions of dollars.  Back then volatility was seen as a niche, esoteric asset class, and quite rightly so.  Nonetheless, investors who braved the unknown and stayed the course have been well rewarded: in recent years volatility strategies as an asset class have handily outperformed the indices for global macro, equity market neutral and diversified funds of funds, for example.

Fig 1

The Fundamentals of Volatility

It’s worth rehearsing a few of the fundamental features of volatility for those unfamiliar with the territory.

Volatility is Unobservable

Volatility is the ultimate derivative, one whose fair price can never be known, even after the event, since it is intrinsically unobservable.  You can estimate what the volatility of an asset has been over some historical period using, for example, the standard deviation of returns.  But this is only an estimate, one of several possibilities, all of which have shortcomings.  We now know that volatility can be measured with almost arbitrary precision using an integrated volatility estimator (essentially a metric based on high frequency data), but that does not change the essential fact:  our knowledge of volatility is always subject to uncertainty, unlike a stock price, for example.
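By way of illustration, here is a minimal sketch of the two kinds of estimate discussed above; the inputs are assumed to be pandas price Series with a DatetimeIndex (daily closes in the first case, intraday prices, e.g. 5-minute, in the second).

```python
# Sketch: a simple close-to-close volatility estimate vs. a realized-volatility
# estimate built from the sum of squared higher-frequency log returns.
import numpy as np
import pandas as pd

def close_to_close_vol(daily_prices: pd.Series, periods_per_year: int = 252) -> float:
    """Annualized standard deviation of daily log returns."""
    rets = np.log(daily_prices).diff().dropna()
    return float(rets.std() * np.sqrt(periods_per_year))

def realized_vol(intraday_prices: pd.Series, periods_per_year: int = 252) -> float:
    """Annualized realized volatility from daily sums of squared intraday log returns."""
    rets = np.log(intraday_prices).diff().dropna()
    daily_rv = rets.pow(2).groupby(rets.index.date).sum()   # daily realized variance
    return float(np.sqrt(daily_rv.mean() * periods_per_year))
```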

Volatility Trends

Huge effort is expended in identifying trends in commodity markets and many billions of dollars are invested in trend-following CTA strategies (and, equivalently, momentum strategies in equities).  Trend following undoubtedly works, according to academic research, but is also subject to prolonged drawdowns during periods when a trend moderates or reverses. By contrast, volatility always trends.  You can see this from the charts below, which express the relationship between volatility in the S&P 500 index in consecutive months.  The r-square of the regression relationship is one of the largest to be found in economics.

Fig 2

And this is a feature of volatility not just in one asset class, such as equities, nor even just in financial assets, but in every time series process for which data exists, including weather and other natural phenomena.  So an investment strategy that seeks to exploit volatility trends is relying upon one of the most consistent features of any asset process we know of (more on this topic in Long Memory and Regime Shifts in Asset Volatility).
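A minimal sketch of that month-on-month regression, assuming a pandas Series of daily index returns with a DatetimeIndex:

```python
# Sketch: R-squared of the regression of this month's volatility on last month's.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def monthly_vol_persistence(daily_returns: pd.Series) -> float:
    monthly_vol = daily_returns.resample("M").std() * np.sqrt(252)   # annualized
    df = pd.DataFrame({"vol": monthly_vol, "lag": monthly_vol.shift(1)}).dropna()
    return sm.OLS(df["vol"], sm.add_constant(df["lag"])).fit().rsquared
```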

Volatility Mean-Reversion and Correlation

One of the central assumptions behind the ever-popular stat-arb strategies is that the basis between two or more correlated processes is stationary. Consequently, any departure from the long-term relationship between such assets will eventually revert to the mean. Mean reversion is also an observed phenomenon in volatility processes.  In fact, the speed of mean reversion (as estimated in, say, an Ornstein-Uhlenbeck framework) is typically an order of magnitude greater than for a typical stock-pairs process.  Furthermore, the correlation between one volatility process and another volatility process, or indeed between a volatility process and an asset returns process, tends to rise when markets are stressed (i.e. when volatility increases).

Fig 3
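The mean-reversion speed can be estimated from a discretized Ornstein-Uhlenbeck fit; a minimal sketch via an AR(1) regression, assuming daily sampling of a (log-)volatility series:

```python
# Sketch: OU mean-reversion speed from an AR(1) fit, kappa = -ln(slope) / dt.
import numpy as np
import statsmodels.api as sm

def ou_mean_reversion_speed(series, dt=1.0 / 252):
    x = np.asarray(series, dtype=float)
    slope = sm.OLS(x[1:], sm.add_constant(x[:-1])).fit().params[1]
    return -np.log(slope) / dt    # larger kappa => faster mean reversion
```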

Another interesting feature of volatility correlations is that they are often lower than for the corresponding asset returns processes.  One can therefore build a diversified volatility portfolio with far fewer assets than are required for, say, a basket of equities (see Modeling Asset Volatility for more on this topic).

Fig 4

Finally, more sophisticated stat-arb strategies tend to rely on cointegration rather than correlation, because cointegrated series are often driven by common fundamental factors rather than purely statistical relationships, which may prove temporary (see Developing Statistical Arbitrage Strategies Using Cointegration for more details).  Again, cointegrated relationships tend to be commonplace in the universe of volatility processes and are typically more reliable over the long term than those found in asset return processes.
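A minimal sketch of such a test, using the Engle-Granger procedure from statsmodels; the two volatility series are assumed inputs:

```python
# Sketch: Engle-Granger cointegration test for a pair of volatility series.
from statsmodels.tsa.stattools import coint

def is_cointegrated(vol_a, vol_b, alpha=0.05):
    """True if the null of no cointegration is rejected at the given level."""
    t_stat, p_value, _ = coint(vol_a, vol_b)
    return p_value < alpha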

Volatility Term Structure

One of the most marked characteristics of the typical asset volatility process is its upward-sloping term structure.  An example of the typical term structure of futures on the VIX, the S&P 500 volatility index (as at the end of May 2015), is shown in the chart below. A steeply upward-sloping curve characterizes the term structure of equity volatility around 75% of the time.

Fig 5

Fixed income investors can only dream of such yield in the current ZIRP environment, while f/x traders would have to plunge into the riskiest of currencies to achieve anything comparable in terms of yield differential, and hope to mitigate some of the devaluation risk by diversification.
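For a sense of the scale of that "yield", a minimal sketch of the monthly roll implied by an upward-sloping VIX futures curve; the prices are purely illustrative:

```python
# Sketch: carry implied by the premium of the second-month VIX future
# over the front month, as a fraction of the front price.
def monthly_roll_yield(front_future, second_future):
    return (second_future - front_future) / front_future

print(f"{monthly_roll_yield(14.0, 15.5):.1%}")   # illustrative prices -> ~10.7% per month
```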

The Volatility of Volatility

One feature of volatility processes that has been somewhat overlooked is the consistency of the volatility of volatility.  Only on one occasion since 2007 has the VVIX index, which measures the annual volatility of the VIX index, ever fallen below 60.

Fig 6

What this means is that, in trading volatility, you are trading an asset whose annual volatility has hardly ever fallen below 60% and which has often exceeded 100% per year.  Trading opportunities tend to abound when volatility is consistently elevated, as here (and, conversely, the performance of many hedge fund strategies tends to suffer during periods of sustained, low volatility).

Anything You Can Do, I Can Do Better

The take-away from all this should be fairly obvious:  almost any strategy you care to name has an equivalent in the volatility space, whether it be volatility long/short, relative value, stat-arb, trend following or carry trading. What is more, because of the inherent characteristics of volatility, all of these strategies tend to produce higher levels of performance than their more traditional counterparts. Take as an example our own Volatility ETF strategy, which has produced consistent annual returns of between 30% and 40%, with a Sharpe ratio in excess of 3, since 2012.

[Charts: Value of $1000, Sharpe Ratio, Monthly Returns]

Where does the Alpha Come From?

It is traditional at this stage for managers to point the finger at hedgers as the source of abnormal returns and indeed I will do the same now.   Equity portfolio managers are hardly ignorant of the cost of using options and volatility derivatives to hedge their portfolios; but neither are they likely to be leading experts in the pricing of such derivatives.  And, after all, in a year in which they might be showing a 20% to 30% return, saving a few basis points on the hedge is neither here nor there, compared to the benefits of locking in the performance gains (and fees!). The same applies even when the purpose of using such derivatives is primarily to produce trading returns. Maple Leaf’s George Castrounis puts it this way:

Significant supply/demand imbalances continuously appear in derivative markets. The principal users of options (i.e. pension funds, corporates, mutual funds, insurance companies, retail and hedge funds) trade these instruments to express a view on the direction of the underlying asset rather than to express a view on the volatility of that asset, thus making non-economic volatility decisions. Their decision process may be driven by factors that have nothing to do with volatility levels, such as tax treatment, lockup, voting rights, or cross ownership. This creates opportunities for strategies that trade volatility.

We might also point to another source of potential alpha:  the uncertainty as to what the current level of volatility is, and how it should be priced.  As I have already pointed out, volatility is intrinsically uncertain, being unobservable.  This allows for a disparity of views about its true level, both currently and in future.  Secondly, there is no universal agreement on how volatility should be priced.  This permits at times a wide divergence of views on fair value (to give you some idea of the complexities involved, I would refer you to, for example, Range based EGARCH Option pricing Models). What this means, of course, is that there is a basis for a genuine source of competitive advantage, such as the Caissa Capital fund enjoyed in the early 2000s with its advanced option pricing models. The plethora of volatility products that have emerged over the last decade has only added to the opportunity set.

Why Hasn’t It Been Done Before?

This was an entirely legitimate question back in the early days of volatility arbitrage. The cost of trading an option book, to say nothing of the complexities of managing the associated risks, was a significant disincentive for both managers and investors.  Bid/ask spreads were wide enough to cause significant headwinds for strategies that required aggressive price-taking.  Managers often had to juggle two sets of risk books, one reflecting the market’s view of the portfolio Greeks, the other the model view.  The task of explaining all this to investors, many of whom had never evaluated volatility strategies previously, was a daunting one.  And then there were the capacity issues:  back in the early 2000s a $400m long/short option portfolio would typically have to run to several hundred names in order to meet liquidity and market impact risk tolerances.

Much has changed over the last fifteen years, especially with the advent of the highly popular VIX futures contract and the newer ETF products such as VXX and XIV, whose trading volumes and AUM are growing rapidly.  These developments have exerted strong downward pressure on trading costs, while providing sufficient capacity for at least a dozen volatility funds managing over $1Bn in assets.

Why Hasn’t It Been Done Right Yet?

Again, this question is less apposite than it was ten years ago, and since that time there have been a number of success stories in the volatility space. One of the learning experiences came in 2004-2007, when volatility languished at its lows for a 20-month period, causing performance to crater in long volatility funds, as well as in funds with a volatility-neutral mandate. I recall meeting with Nassim Taleb at the start of the 2000s, prior to that period, to discuss his Empirica volatility fund.  My advice to him was that, while he had some great ideas, they were better suited to an insurance product than to a hedge fund.  A long volatility fund might lose money month after month for an entire year, and with it investors and AUM, before seeing the kind of payoff that made such investment torture worthwhile.  And so it proved.

Conversely, stories about managers of short volatility funds showing superb performance, only to blow up spectacularly when volatility eventually explodes, are legion in this field.  One example comes to mind of a fund in Long Beach, CA, whose prime broker I visited with sometime in 2002.  He told me the fund had been producing a rock-steady 30% annual return for several years, and the enthusiasm from investors was off the charts – the fund was managing north of $1Bn by then.  Somewhat crestfallen I asked him how they were producing such spectacular returns.  “They just sell puts in the S&P, 100 points out of the money”, he told me.  I waited, expecting him to continue with details of how the fund managers handled the enormous tail risk.  I waited in vain. They were selling naked put options.  I can only imagine how those guys did when the VIX blew up in 2003 and, if they made it through that, what on earth happened to them in 2008!

Conclusion

The moral is simple:  one cannot afford to be either all-long, or all-short volatility.  The fund must run a long/short book, buying cheap Gamma and selling expensive Theta wherever possible, and changing the net volatility exposure of the portfolio dynamically, to suit current market conditions. It can certainly be done; and with the new volatility products that have emerged in recent years, the opportunities in the volatility space have never looked more promising.

Posted in Hedge Funds, VIX Index, Volatility ETF Strategy, Volatility Modeling

High Frequency Trading Strategies

Most investors have probably never seen the P&L of a high frequency trading strategy.  There is a reason for that, of course:  given the typical performance characteristics of a HFT strategy, a trading firm has little need for outside capital.  Besides, HFT strategies can be capacity constrained, a major consideration for institutional investors.  So it is amusing to see the reaction of an investor on encountering the track record of a HFT strategy for the first time.  Accustomed as they are to seeing Sharpe ratios in the range of 0.5-1.5, or perhaps as high as 1.8 if they are lucky, investors find the staggering risk-adjusted returns of HFT strategies, which often run to double-digit Sharpe ratios, truly mind-boggling.

By way of illustration I have attached below the performance record of one such HFT strategy, which trades around 100 times a day in the eMini S&P 500 contract (including the overnight session).  Note that the edge is not that great – averaging 55% profitable trades and a profit per contract of around half a tick – these being some of the defining characteristics of HFT strategies.  But due to the large number of trades the strategy produces very substantial profits.  At this frequency, trading commissions are very low, typically under $0.10 per contract, compared to $1 – $2 per contract for a retail trader (in fact an HFT firm would typically own or lease exchange seats to minimize such costs).
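As a rough back-of-envelope check on those economics (the eMini tick value of $12.50 is standard; the commission figure is the one quoted above, and the numbers are illustrative rather than the attached record):

```python
# Back-of-envelope daily P&L per contract for a strategy with these characteristics.
trades_per_day   = 100
avg_profit_ticks = 0.5      # average profit per contract per trade, in ticks
tick_value       = 12.50    # $ per tick for the eMini S&P 500
commission       = 0.10     # $ per contract per trade (HFT rate quoted above)

gross = trades_per_day * avg_profit_ticks * tick_value
net   = gross - trades_per_day * commission
print(f"Gross ${gross:,.0f}/day, net ${net:,.0f}/day per contract")
```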

Fig 2 Fig 3 Fig 4

 

Hidden from view in the above analysis are the overhead costs associated with implementing such a strategy: the market data feed, execution platform and connectivity capable of handling huge volumes of messages, as well as algo logic to monitor microstructure signals and manage order-book priority.  Without these, the strategy would be impossible to implement profitably.

Scaling things back a little, let’s take a look at a day-trading strategy that trades only around 10 times a day, on 15-minute bars.  Although not ultra-high frequency, the strategy is nonetheless sufficiently high frequency to be very latency sensitive. In other words, you would not want to try to implement such a strategy without a high quality market data feed and a low-latency trading platform capable of executing at the 1-millisecond level.  It might just be possible to implement a strategy of this kind using TT’s ADL platform, for example.

While the win rate and profit factor are similar to those of the first strategy, the lower trade frequency allows for a higher average trade P&L of just over 1 tick.  The equity curve is a lot less smooth, however, reflecting a Sharpe ratio that is “only” around 2.7.

Fig 5 Fig 6 Fig 7

 

The critical assumption in any HFT strategy is the fill rate.  HFT strategies execute using limit or IOC orders and only a certain percentage of these will ever be filled.  Assuming there is alpha in the signal, the P&L grows in direct proportion to the number of trades, which in turn depends on the fill rate.  A fill rate of 10% to 20% is usually sufficient for profitability (depending on the quality of the signal). A low fill rate, such as would typically be seen if one attempted to trade on a retail trading platform, would destroy the profitability of any HFT strategy.
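A simple way to see the sensitivity is to treat expected P&L as linear in the number of filled orders; the figures below are illustrative assumptions, not the parameters of the strategy above.

```python
# Sketch: expected daily P&L of a limit-order strategy as a function of fill rate.
def expected_daily_pnl(fill_rate, signals_per_day=200, win_rate=0.55,
                       avg_win_ticks=1.0, avg_loss_ticks=1.0, tick_value=12.50):
    """Expected daily P&L in dollars per contract, before commissions."""
    fills = signals_per_day * fill_rate
    edge_ticks = win_rate * avg_win_ticks - (1 - win_rate) * avg_loss_ticks
    return fills * edge_ticks * tick_value

for fr in (0.05, 0.10, 0.20, 0.50):
    print(f"fill rate {fr:.0%}: expected P&L ${expected_daily_pnl(fr):,.0f}/day")
```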

To illustrate this point, we can take a look at the outcome if the above strategy was implemented on a trading platform which resulted in orders being filled only when the market trades through the limit price.  It isn’t a pretty sight.

 

Fig 8

The moral of the story is:  developing an HFT algorithm that contains a viable alpha signal is only half the picture.  The trading infrastructure used to implement such a strategy is no less critical.  Which is why HFT firms spend tens, or even hundreds, of millions of dollars developing the best infrastructure they can afford.

Posted in Algo Design Language, Algorithmic Trading, eMini Futures, High Frequency Trading

Designing a Scalable Futures Strategy

I have been working on a higher frequency version of the eMini S&P 500 futures strategy, based on 3-minute bar intervals, which is designed to trade a couple of times a week, with hold periods of 2-3 days.  Even higher frequency strategies are possible, of course, but my estimation is that a hold period of under a week provides the best combination of liquidity and capacity.  Furthermore, the strategy is of low enough frequency that it is not at all latency sensitive – indeed, in the performance analysis below I have assumed that the market must trade through the limit price before the system enters a trade (relaxing the assumption and allowing the system to trade when the market touches the limit price improves the performance).

The other important design criteria are the high percentage of profitable trades and the Kelly f (both over 95%).  These enable the investor to employ money management techniques, such as fixed-fractional allocation, in order to scale the trade size up from 1 to 10 contracts without too great a risk of a major drawdown in realized P&L.
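By way of illustration, a minimal fixed-fractional sizing sketch; the risk-per-contract and risk-fraction figures are purely illustrative, not the strategy's parameters:

```python
# Sketch: fixed-fractional position sizing, capped at 10 contracts.
def fixed_fractional_size(equity, risk_fraction, risk_per_contract, max_contracts=10):
    """Contracts such that the dollars risked are ~risk_fraction of current equity."""
    contracts = int(equity * risk_fraction / risk_per_contract)
    return max(1, min(contracts, max_contracts))

# e.g. risking 2% of a $500,000 account with an estimated $2,000 risk per contract
print(fixed_fractional_size(500_000, 0.02, 2_000))   # -> 5 contracts
```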

The end result is a strategy that produces profits of $80,000 to $100,000 a year on a 10 contract position, with an annual rate of return of 30% and a Sharpe ratio in excess of 2.0.

Furthermore, of the 682 trades since Jan 2010, only 29 have been losers.

Annual P&L (out of sample)

[Chart: Annual P&L]

Equity Curve

[Chart: Equity Curve]

Strategy Performance

[Table: Strategy Performance]

What’s the Downside?

Everything comes at a price, of course.  Firstly, the strategy is long-only and, by definition, will perform poorly in falling markets, such as we saw in 2008.  That’s a defensible investment thesis, of course – how many $billions are invested in buy and hold strategies? – and, besides, as one commentator remarked, the trick is to develop multiple strategies for different market regimes (although, sensible as that sounds, one is left with the difficulty of correctly identifying the market regime).

The second drawback is revealed by the trade chart below, which plots the drawdown experienced during each trade.  The great majority of these drawdowns are unrealized, and in most cases the trade recovers to make a profit.  However, there are some very severe cases, such as Sept 2014, when the strategy experienced a drawdown of $85,000 before recovering to make a profit on the trade.

For most investors, the agony of risking an entire year’s P&L just to make a few hundred dollars would be too great.

It should be pointed out that by the time the drawdown event took place, the strategy had already produced many hundreds of thousands of dollars of profit.  So, one could take the view that by that stage the strategy was playing with “house money” and could well afford to take such a risk.

One obvious “solution” to the drawdown problem is to use some kind of stop loss. Unfortunately, the effect is simply to convert an unrealized drawdown into a realized loss.  For some, however, it might be preferable to take a hit of $40,000 or $50,000 once every few years, rather than suffer the uncertainty of an even larger potential loss.  Either way, despite its many pleasant characteristics, this is not a strategy for investors with weak stomachs!

[Chart: Drawdown per trade]

Posted in eMini Futures, Futures, Kelly Criterion, Money Management, Optimal f

Investing in Leveraged ETFs – Theory and Practice

May 5, 2015  |  Includes: DUST, ERX, ERY, FAS, FAZ, GDX, NUGT, SPXL, SPXS, TNA, TZA

Summary

  • Leveraged ETFs suffer from decay, or “beta slippage.” Researchers have attempted to exploit this effect by shorting pairs of long and inverse leveraged ETFs.
  • The results of these strategies look good if you assume continuous compounding, but are often poor when less frequent compounding is assumed.
  • In reality, the trading losses incurred in rebalancing the portfolio, which requires you to sell low and buy high, overwhelm any benefit from decay, making the strategies unprofitable in practice.
  • A short levered ETF strategy has similar characteristics to a short straddle option position, with positive Theta and negative Gamma, and will experience periodic, large drawdowns.
  • It is possible to develop leveraged ETF strategies producing high returns and Sharpe ratios with relative value techniques commonly used in option trading strategies.

Decay in Leveraged ETFs

Leveraged ETFs continue to be much discussed on Seeking Alpha.

One aspect in particular that has caught analysts’ attention is the decay, or “beta slippage” that leveraged ETFs tend to suffer from.

Seeking Alpha contributor Fred Picard in a 2013 article (“What You Need To Know About The Decay Of Leveraged ETFs“) described the effect using the following hypothetical example:

To understand what is beta-slippage, imagine a very volatile asset that goes up 25% one day and down 20% the day after. A perfect double leveraged ETF goes up 50% the first day and down 40% the second day. On the close of the second day, the underlying asset is back to its initial price:

(1 + 0.25) x (1 – 0.2) = 1

And the perfect leveraged ETF?

(1 + 0.5) x (1 – 0.4) = 0.9

Nothing has changed for the underlying asset, and 10% of your money has disappeared. Beta-slippage is not a scam. It is the normal mathematical behavior of a leveraged and rebalanced portfolio. In case you manage a leveraged portfolio and rebalance it on a regular basis, you create your own beta-slippage. The previous example is simple, but beta-slippage is not simple. It cannot be calculated from statistical parameters. It depends on a specific sequence of gains and losses.
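The arithmetic is easy to reproduce for any return sequence and leverage factor. A minimal sketch of Fred's hypothetical "perfect" daily-rebalanced leveraged product (ignoring fees and financing costs):

```python
# Sketch: terminal value of $1 in the underlying vs. a perfectly rebalanced
# daily-leveraged product, for the same sequence of daily returns.
import numpy as np

def leveraged_decay(daily_returns, leverage=2.0):
    r = np.asarray(daily_returns)
    underlying = np.prod(1 + r)            # buy-and-hold in the underlying
    levered = np.prod(1 + leverage * r)    # daily-rebalanced leveraged product
    return underlying, levered

print(leveraged_decay([0.25, -0.20]))      # (1.0, 0.9): the example above
```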

Fred goes on to make the point that is the crux of this article, as follows:

At this point, I’m sure that some smart readers have seen an opportunity: if we lose money on the long side, we make a profit on the short side, right?

Shorting Leveraged ETFs

Taking his cue from Fred’s article, Seeking Alpha contributor Stanford Chemist (“Shorting Leveraged ETF Pairs: Easier Said Than Done“) considers the outcome of shorting pairs of leveraged ETFs, including the Market Vectors Gold Miners ETF (NYSEARCA:GDX), the Direxion Daily Gold Miners Bull 3X Shares ETF (NYSEARCA:NUGT) and the Direxion Daily Gold Miners Bear 3X Shares ETF (NYSEARCA:DUST).

His initial finding appears promising:

Therefore, investing $10,000 each into short positions of NUGT and DUST would have generated a profit of $9,830 for NUGT, and $3,900 for DUST, good for an average profit of 68.7% over 3 years, or 22.9% annualized.

At first sight, this appears to be a nearly risk-free strategy; after all, you are shorting both the 3X leveraged bull and 3X leveraged bear funds, which should result in a market-neutral position. Is there easy money to be made?

Fig 3

Continue reading

Posted in ETFs, Options

Volatility ETF Strategy Apr 2015: +4.41% | YTD: +12.02% | YTD Sharpe: 3.02

HIGHLIGHTS

  • 2015 YTD: +12.02%
  • CAGR over 40%
  • Sharpe ratio in excess of 3
  • Max drawdown -13.40%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1000

[Chart: Value of $1000 invested in the strategy]

STRATEGY DESCRIPTION

The Systematic Strategies Volatility ETF strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P 500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”) or statistical correlations, but generates a return derived from the ETF pricing methodology.

[Chart: Annual Returns]

 

The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.

RISK CONTROL

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

[Chart: Sharpe Ratio]

OPERATIONS
We run fully redundant dual servers hosting an algorithmic execution platform designed to minimize market impact and slippage.  The strategy is not latency sensitive.

 

MONTHLY RETURNS

[Chart: Monthly Returns]

 

 

PERFORMANCE STATISTICS

[Table: Performance Statistics]

Disclaimer

Past performance does not guarantee future results. You should not rely on any past performance as a guarantee of future investment performance. Investment returns will fluctuate. Investment monies are at risk and you may suffer losses on any investment.

Posted in Algorithmic Trading, ETFs, VIX Index, Volatility ETF Strategy, Volatility Modeling

Is Your Trading Strategy Still Working?

The Challenge of Validating Strategy Performance

One of the challenges faced by investment strategists is to assess whether a strategy is continuing to perform as it should.  This applies whether it is a new strategy that has been backtested and is now being traded in production, or a strategy that has been live for a while.

Fig 6

All strategies have a limited lifespan.  Markets change, and a trading strategy that can’t accommodate that change will get out of sync with the market and start to lose money. Unless you have a way to identify when a strategy is no longer in sync with the market, months of profitable trading can be undone very quickly.

The issue is particularly important for quantitative strategies.  Firstly, quantitative strategies are susceptible to the risk of over-fitting.  Secondly, unlike a strategy based on fundamental factors, it may be difficult for the analyst to verify that the drivers of strategy profitability remain intact.

Savvy investors are well aware of the risk of quantitative strategies breaking down and are likely to require reassurance that a period of underperformance is a purely temporary phenomenon.

It might be tempting to believe that you will simply stop trading when the strategy stops working.  But given the stochastic nature of investment returns, how do you distinguish a losing streak from a system breakdown?

Stochastic Process Control

One approach to the problem derives from the field of Monte Carlo simulation and stochastic process control.  Here we draw random samples from the distribution of strategy returns and use these to construct a prediction envelope to forecast the range of future returns.  If the equity curve of the strategy over the forecast period falls outside the envelope, it would raise serious concerns that the strategy may have broken down.  In those circumstances you would almost certainly want to trade the strategy in smaller size for a while to see if it recovers, or even exit the strategy altogether if it does not.
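A minimal sketch of the idea, by way of illustration (the MSA software used below does something similar, but this is not its implementation): bootstrap the historical strategy returns, simulate many future equity paths, and compare the live equity curve to the resulting percentile envelope.

```python
# Sketch: bootstrap a prediction envelope from historical strategy returns and
# flag a potential breakdown if the live equity curve falls below the lower band.
import numpy as np

def prediction_envelope(hist_returns, n_periods, n_sims=10_000,
                        lower_pct=2.5, upper_pct=97.5, seed=0):
    """Return (lower, median, upper) simulated equity paths of length n_periods."""
    rng = np.random.default_rng(seed)
    sims = rng.choice(np.asarray(hist_returns), size=(n_sims, n_periods), replace=True)
    equity = np.cumprod(1 + sims, axis=1)
    return (np.percentile(equity, lower_pct, axis=0),
            np.percentile(equity, 50, axis=0),
            np.percentile(equity, upper_pct, axis=0))

def below_envelope(live_returns, lower_band):
    """True if the live equity curve has fallen below the lower boundary."""
    live_equity = np.cumprod(1 + np.asarray(live_returns))
    return bool(live_equity[-1] < lower_band[len(live_equity) - 1])
```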

I will illustrate the procedure for the long/short ETF strategy that I described in an earlier post, making use of Michael Bryant’s excellent Market System Analyzer software.

To briefly refresh, the strategy is built using cointegration theory to construct long/short portfolios from a selection of ETFs that provide exposure to US and international equity, currency, real estate and fixed income markets.  The out-of-sample back-test performance of the strategy is very encouraging:

Fig 2

 

Fig 1

There was evidently a significant slowdown during 2014, with a reduction in the risk-adjusted returns and win rate for the strategy:

Fig 1

This period might itself have raised questions about the continuing effectiveness of the strategy.  However, we have the benefit of hindsight in seeing that, during the first two months of 2015, performance appeared to be recovering.

Consequently we put the strategy into production testing at the beginning of March 2015 and we now wish to evaluate whether the strategy is continuing on track.   The results indicate that strategy performance has been somewhat weaker than we might have hoped, although this is compensated for by a significant reduction in strategy volatility, so that the net risk-adjusted returns remain somewhat in line with recent back-test history.

Fig 3

Using the MSA software we sample the most recent back-test returns for the period to the end of Feb 2015, and create a 95% prediction envelope for the returns since the beginning of March, as follows:

Fig 2

As we surmised, during the production period the strategy has slightly underperformed the projected median of the forecast range, but overall the equity curve still falls within the prediction envelope.  At this stage we would tentatively conclude that the strategy is continuing to perform within expected tolerances.

Had we seen a pattern like the one shown in the chart below, our conclusion would have been very different.

Fig 4

As shown in the illustration, the equity curve lies below the lower boundary of the prediction envelope, suggesting that the strategy has failed. In statistical terms, the trades in the validation segment appear not to belong to the same statistical distribution of trades that preceded the validation segment.

This strategy failure can also be explained as follows: The equity curve prior to the validation segment displays relatively little volatility. The drawdowns are modest, and the equity curve follows a fairly straight trajectory. As a result, the prediction envelope is fairly narrow, and the drawdown at the start of the validation segment is so large that the equity curve is unable to rise back above the lower boundary of the envelope. If the history prior to the validation period had been more volatile, it’s possible that the envelope would have been large enough to encompass the equity curve in the validation period.

CONCLUSION

Systematic trading has the advantage of reducing emotion from trading because the trading system tells you when to buy or sell, eliminating the difficult decision of when to “pull the trigger.” However, when a trading system starts to fail a conflict arises between the need to follow the system without question and the need to stop following the system when it’s no longer working.

Stochastic process control provides a technical, objective method to determine when a trading strategy is no longer working and should be modified or taken offline. The prediction envelope method extrapolates the past trade history using Monte Carlo analysis and compares the actual equity curve to the range of probable equity curves based on the extrapolation.

Next we will look at nonparametric distribution tests as an alternative method for assessing strategy performance.

Posted in Monte Carlo, Performance Testing, Portfolio Management, Stochastic Process Control, Strategy Development, Systematic Strategies

Volatility ETF Strategy March 2015: +2.04%

HIGHLIGHTS

  • 2015 YTD: +7.29%
  • CAGR over 40%
  • Sharpe ratio in excess of 3
  • Max drawdown -13.40%
  • Liquid, exchange-traded ETF assets
  • Fully automated, algorithmic execution
  • Monthly portfolio turnover
  • Managed accounts with daily MTM
  • Minimum investment $250,000
  • Fee structure 2%/20%

VALUE OF $1000

[Chart: Value of $1000 invested in the strategy]

STRATEGY DESCRIPTION
The Systematic Strategies Volatility ETF strategy uses mathematical models to quantify the relative value of ETF products based on the CBOE S&P 500 Volatility Index (VIX) and create a positive-alpha long/short volatility portfolio. The strategy is designed to perform robustly during extreme market conditions by utilizing the positive convexity of the underlying ETF assets. It does not rely on volatility term structure (“carry”) or statistical correlations, but generates a return derived from the ETF pricing methodology.

The net volatility exposure of the portfolio may be long, short or neutral, according to market conditions, but at all times includes an underlying volatility hedge. Portfolio holdings are adjusted daily using execution algorithms that minimize market impact to achieve the best available market prices.

[Chart: Annual Returns]

RISK CONTROL

Our portfolio is not dependent on statistical correlations and is always hedged. We never invest in illiquid securities. We operate hard exposure limits and caps on volume participation.

[Chart: Sharpe Ratio]

OPERATIONS

We run fully redundant dual servers hosting an algorithmic execution platform designed to minimize market impact and slippage.  The strategy is not latency sensitive.

MONTHLY RETURNS

[Chart: Monthly Returns]

PERFORMANCE STATISTICS

[Table: Performance Statistics]

 

 

Posted in VIX Index, Volatility ETF Strategy, Volatility Modeling

The Lazarus Effect

Perennial favorites with investors, presumably because they are easy to understand and implement, are trades based on a regularly occurring pattern, preferably one that is seasonal in nature.  A well-known example is the Christmas effect, wherein equities generally make their highest risk-adjusted returns during the month of December (and equity indices make the greater proportion of their annual gains in the period from November to January).

As we approach the Easter holiday I thought I might join in the fun with a trade of my own.  There being not much new under the sun, I can assume that there is some ancient trader’s almanac that documents the effect I am about to describe.  If so, I apologize in advance if this is duplicative.

The Pattern of Returns in the S&P 500 Index Around Easter

I want to look at the pattern of pre- and post- Easter returns in the S&P 500 index using weekly data from 1950  (readers can of course substitute the index, ETF or other tradable security in a similar analysis).

The first question is whether there are significant differences (economic and statistical) in index returns in the weeks before and after Easter, compared to a regular week.

Fig 1

It is perhaps not immediately apparent from the smooth histogram plot above, but a whisker plot gives a clearer indication of the disparity in the distributions of returns in the post-Easter week vs. regular weeks.

Fig 2

It is evident that the chief distinction lies not in the means of the distributions, but in their variances.

A t-test (with unequal variances) confirms that the difference in average returns in the index in the post-Easter week vs. normal weeks is not statistically significant.

Fig 3

It appears that there is nothing special about index returns in the post-Easter period.

The Lazarus Effect

Hold on – not so fast.  Suppose we look at conditional returns: that is to say, we consider returns in the post-Easter week only for those holiday periods in which the index sold off in the week prior to Easter.

There are 26 such periods in the 65 years since 1950 and when we compare the conditional distribution of index returns for these periods against the unconditional distribution of weekly returns we appear to find significant differences in the distributions.  Not only is the variance of the conditional returns much tighter, the mean is clearly higher than the unconditional weekly returns.

Fig 6


Fig 5

 

The comparison is perhaps best summarized in the following table.  Here we can see that the average conditional return is more than twice that of the unconditional return in the post-Easter week, and almost 4x as large as the average weekly return in the index.  The standard deviation of conditional returns for the post-Easter week is less than half that of the unconditional weekly return, producing an information ratio that is almost 10x larger.  Furthermore, of the 26 periods in which the index return in the week prior to Easter was negative, 22 (85%) produced a positive return in the week after Easter (compared to a win rate of only 57% for unconditional weekly returns).

Fig 4

A t-test of conditional vs. unconditional weekly returns confirms that the 58bp difference in conditional vs unconditional (all weeks) average returns is statistically significant at the 0.2% level.

Fig 7
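For readers who want to replicate the test, a minimal sketch using scipy; the weekly return series and the boolean mask marking post-Easter weeks are assumed inputs:

```python
# Sketch: Welch t-test of conditional (post-Easter, prior week down) returns
# against unconditional weekly returns.
import numpy as np
from scipy import stats

def lazarus_test(weekly_returns, post_easter_week):
    """Compare post-Easter weeks preceded by a down week with all weeks."""
    r = np.asarray(weekly_returns)
    easter = np.asarray(post_easter_week, dtype=bool)
    prior_down = np.roll(r < 0, 1)
    prior_down[0] = False                      # no prior week for the first observation
    conditional = r[easter & prior_down]
    t_stat, p_value = stats.ttest_ind(conditional, r, equal_var=False)
    return conditional.mean(), r.mean(), p_value
```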

Our initial conclusion, therefore, is that there appears to be a statistically significant pattern in the conditional returns of the S&P 500 index around the post-Easter week. Specifically, the returns in the post-Easter week tend to be much higher than average for periods in which the pre-Easter weekly returns were negative.

More simply, the S&P 500 index tends to rebound strongly in the week after Easter – a kind of “Lazarus” effect.

Lazarus – Or Not?

Hold on – not so fast.   What’s so special about Easter?  Yes, I realize it’s topical.  But isn’t this so-called Lazarus effect just a manifestation of the usual mean-reversion in equity index returns?  There is a tendency for weekly returns in the S&P 500 index to “correct” in the week after a downturn.  Maybe the Lazarus effect isn’t specific to Easter.

To examine this hypothesis we need to compare two sets of conditional weekly returns in the S&P 500 index:

A:  Weeks in which the prior week’s return was negative

B:  the subset of A which contains only post-Easter weeks

If the difference in average returns for sets A and B is not statistically significant, we would conclude that the so-called Lazarus effect is just a manifestation of the commonplace mean reversion in weekly returns.  Only if the average return for the B data set is significantly higher than that for set A would we be able to conclude that, in addition to normal mean reversion at weekly frequency, there is an incremental effect specific to the Easter period – the Lazarus effect.

Let’s begin by establishing that there is a statistically significant mean reversion effect in weekly returns in the S&P 500 Index.  Generally, we expect a fall in the index to be followed by a rise (and perhaps vice versa). So we need to compare the returns in the index for weeks in which the preceding week’s return was positive vs. weeks in which the preceding week’s return was negative.  The t-test below shows the outcome.

Fig 9

The average return in weeks following a downturn is approximately double that during weeks following a rally and the effect is statistically significant at the 3% level.

Given that result, is there any incremental “Lazarus” effect around Easter?  We test that hypothesis by comparing the average returns during the 26 post-Easter weeks which were preceded by a downturn in the index against the average return for all 1,444 weeks which followed a decline in the index.

The t-test shown in the table below confirms that conditional returns in post-Easter weeks are approximately 3x larger on average than returns for all weeks that followed a decline in the index.

Fig 8

Lazarus, it appears, is alive and well.

Happy holidays, all.

Posted in Mean Reversion, Pattern Trading, S&P500 Index, Seasonal Effects

Combining Momentum and Mean Reversion Strategies

The Fama-French World

For many years now the “gold standard” in factor models has been the 1996 Fama-French 3-factor model:

r = Rf + β3 × (Km – Rf) + bs × SMB + bv × HML + α

Here r is the portfolio’s expected rate of return, Rf is the risk-free rate and Km is the return of the market portfolio. The “three factor” β3 is analogous to the classical β but not equal to it, since there are now two additional factors to do some of the work. SMB stands for “Small [market capitalization] Minus Big” and HML for “High [book-to-market ratio] Minus Low”; they measure the historic excess returns of small caps over big caps and of value stocks over growth stocks, and bs and bv are the portfolio’s loadings on those factors. The factors are calculated from portfolios of stocks ranked on book-to-market and market cap, using available historical market data. The Fama–French three-factor model explains over 90% of diversified portfolios’ in-sample returns, compared with an average of around 70% for the standard CAPM.
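Estimating the loadings is a straightforward OLS regression. A minimal sketch, assuming factor data laid out in the Ken French data library convention (columns 'Mkt-RF', 'SMB', 'HML', 'RF') and a pandas Series of portfolio returns on the same dates:

```python
# Sketch: estimating Fama-French 3-factor loadings by OLS.
import statsmodels.api as sm

def three_factor_loadings(portfolio_returns, factors):
    """Regress excess portfolio returns on Mkt-RF, SMB and HML."""
    excess = portfolio_returns - factors["RF"]
    X = sm.add_constant(factors[["Mkt-RF", "SMB", "HML"]])
    fit = sm.OLS(excess, X, missing="drop").fit()
    return fit.params, fit.rsquared   # alpha and the three betas, plus R-squared
```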

The 3-factor model can also capture the reversal of long-term returns documented by DeBondt and Thaler (1985), who noted that extreme price movements over long formation periods were followed by movements in the opposite direction. (Alpha Architect has several interesting posts on the subject, including this one).

Fama and French say the 3-factor model can account for this. Long-term losers tend to have positive HML slopes and higher future average returns. Conversely, long-term winners tend to be strong stocks that have negative slopes on HML and low future returns. Fama and French argue that DeBondt and Thaler are just loading on the HML factor.

Enter Momentum

While many anomalies disappear under closer testing, shorter-term momentum effects (formation periods of around one year) appear robust. Carhart (1997) constructs his 4-factor model by using the FF 3-factor model plus an additional momentum factor. He shows that his 4-factor model with MOM substantially improves on the average pricing errors of the CAPM and the 3-factor model. Since his work, the standard factors of the asset pricing model have come to be commonly recognized as Value, Size and Momentum.

Combining Momentum and Mean Reversion

In a recent post, Alpha Architect looks at some possibilities for combining momentum and mean reversion strategies.  They examine all firms above the NYSE 40th percentile for market cap (currently around $1.8 billion) to avoid weird empirical effects associated with micro/small cap stocks. The portfolios are formed at a monthly frequency using the following two variables:

  1. Momentum = Total return over the past twelve months (ignoring the last month)
  2. Value = EBIT/(Total Enterprise Value)

They form the simple Value and Momentum portfolios as follows:

  1. EBIT VW = Highest decile of firms ranked on Value (EBIT/TEV). Portfolio is value-weighted.
  2. MOM VW = Highest decile of firms ranked on Momentum. Portfolio is value-weighted.
  3. Universe VW = Value-weight returns to the universe of firms.
  4. SP500 = S&P 500 Total return

The results show that the top deciles of Value and Momentum both outperformed the index over the past 50 years.  The Momentum strategy has stronger returns than Value, on average, but much higher volatility and drawdowns. On a risk-adjusted basis they perform similarly.

Fig 2

The researchers then form the following four portfolios:

  1. EBIT VW = Highest decile of firms ranked on Value (EBIT/TEV). Portfolio is value-weighted.
  2. MOM VW = Highest decile of firms ranked on Momentum. Portfolio is value-weighted.
  3. COMBO VW = Rank firms independently on both Value and Momentum.  Add the two rankings together. Select the highest decile of firms ranked on the combined rankings. Portfolio is value-weighted.
  4. 50% EBIT/ 50% MOM VW = Each month, invest 50% in the EBIT VW portfolio, and 50% in the MOM VW portfolio. Portfolio is value-weighted.

With the following results:

Fig 3

The main takeaways are:

  • The combined ranked portfolio outperforms the index over the same time period.
  • However, the combination portfolio performs worse than a 50% allocation to Value and a 50% allocation to Momentum.
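A minimal sketch of the combined ranking used in portfolio 3 above; the column names are hypothetical:

```python
# Sketch: top decile of the combined Value + Momentum rank.
import pandas as pd

def combo_decile(df: pd.DataFrame) -> pd.Index:
    value_rank = df["ebit_tev"].rank(ascending=False)   # cheapest (highest EBIT/TEV) ranks best
    mom_rank = df["momentum"].rank(ascending=False)      # strongest momentum ranks best
    combined = value_rank + mom_rank
    return df.index[combined <= combined.quantile(0.10)]   # lowest combined rank = top decile
```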

A More Sophisticated Model

Yangru Wu of Rutgers has been doing interesting work in this area over the last 15 years or more. His 2005 paper (with Ronald Balvers), Momentum and Mean Reversion Across National Equity Markets, considers joint momentum and mean-reversion effects and allows for complex interactions between them. Their model is of the form:

Fig 4

where the excess return for country i (relative to the global equity portfolio) is represented by a combination of mean-reversion and autoregressive (momentum) terms.

Balvers and Wu find that combination momentum-contrarian strategies, used to select from among 18 developed equity markets at a monthly frequency, outperform both pure momentum and pure mean-reversion strategies. The results continue to hold after corrections for factor sensitivities and transaction costs. The researchers confirm that momentum and mean reversion occur in the same assets, so in establishing the strength and duration of the momentum and mean reversion effects it becomes important to control for each factor’s effect on the other. The momentum and mean reversion effects exhibit a strong negative correlation of 35%. Accordingly, controlling for momentum accelerates the mean reversion process, and controlling for mean reversion may extend the momentum effect.

Momentum, Mean Reversion and Volatility

The presence of strong momentum and mean reversion in volatility processes provides a rationale for the kind of volatility strategy that we trade at Systematic Strategies.  One sophisticated model is the Range-Based EGARCH model of Alizadeh, Brandt, and Diebold (2002).  The model posits a two-factor volatility process in which a short-term, transient volatility process mean-reverts to a stochastic long-term mean process, which may exhibit momentum, or long memory, effects (details here).

In our volatility strategy we model mean reversion and momentum effects derived from the level of short and long term volatility-of-volatility, as well as the forward volatility curve. These are applied to volatility ETFs, including levered ETF products, where convexity effects are also important.  Mean reversion is a well understood phenomenon in volatility, as, too, is the yield roll in volatility futures (which also impacts ETF products like VXX and XIV).

Momentum effects are perhaps less well researched in this context, but our research shows them to be extremely important.  By way of illustration, in the chart below I have isolated the (gross) returns generated by one of the momentum factors in our model.

Fig 6

 

Posted in Factor Models, Mean Reversion, Momentum, VIX Index, Volatility Modeling