Robustness in Quantitative Research and Trading

What is Strategy Robustness?  What is its relevance to Quantitative Research and Trading?

One of the most highly desired properties of any financial model or investment strategy, by investors and managers alike, is robustness.  I would define robustness as the ability of a strategy to deliver consistent results across a wide range of market conditions.  It is, of course, by no means the only desirable property – investing in Treasury bills is also a pretty robust strategy, although the returns are unlikely to set an investor’s pulse racing – but it does ensure that the investor, or manager, is unlikely to be on the receiving end of an ugly surprise when market conditions change.

Robustness is not the same thing as low volatility, which also tends to be a characteristic highly prized by many investors.  A strategy may operate consistently, with low volatility, in certain market conditions, but behave very differently in others – a delta-hedged short-volatility book containing exotic derivative positions, for instance.   The point is that empirical researchers do not know the true data-generating process for the markets they are modeling. When specifying an empirical model they need to make arbitrary assumptions. An example is the common assumption that asset returns follow a Gaussian distribution.  In fact, the empirical distributions of the great majority of asset processes exhibit the characteristic of “fat tails”, which can result from the interplay between multiple market states with random transitions.  See this post for details:

http://jonathankinlay.com/2014/05/a-quantitative-analysis-of-stationarity-and-fat-tails/
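
To illustrate the point, here is a minimal simulation sketch (the parameters are purely illustrative) showing how random transitions between two Gaussian volatility states are enough to generate the fat tails described above:

```python
# Illustrative only: returns are Gaussian within each regime, but random
# switching between a calm and a stressed volatility state produces a
# fat-tailed unconditional distribution.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

low_vol, high_vol = 0.008, 0.030   # hypothetical daily vols: calm vs. stressed
p_stay = 0.98                      # probability of remaining in the current state

state = np.zeros(n, dtype=int)
for t in range(1, n):
    state[t] = state[t - 1] if rng.random() < p_stay else 1 - state[t - 1]

sigma = np.where(state == 0, low_vol, high_vol)
returns = rng.normal(0.0, sigma)

# Excess kurtosis of a Gaussian is 0; the regime mixture is clearly leptokurtic
kurt = ((returns - returns.mean()) ** 4).mean() / returns.var() ** 2 - 3
print(f"excess kurtosis: {kurt:.2f}")
```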


In statistical arbitrage, for example, quantitative researchers often make use of cointegration models to build pairs trading strategies.  However, the testing procedures used in current practice are not sufficiently powerful to distinguish between cointegrated processes and those whose evolution just happens to correlate temporarily, resulting in frequent breakdowns in cointegrating relationships.  For instance, see this post:

http://jonathankinlay.com/2017/06/statistical-arbitrage-breaks/
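
As an illustration of the kind of test involved, here is a minimal sketch of an Engle-Granger cointegration check using statsmodels. The series, function name and significance threshold are hypothetical, and – as the post above discusses – passing such a test is no guarantee that the relationship will persist:

```python
# A minimal Engle-Granger cointegration check for a candidate pair.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

def pair_diagnostics(px_a, px_b, alpha=0.05):
    """Engle-Granger test plus the hedge ratio from the cointegrating regression."""
    t_stat, p_value, _ = coint(px_a, px_b)
    hedge = sm.OLS(px_a, sm.add_constant(px_b)).fit().params[1]
    return {"p_value": p_value, "hedge_ratio": hedge, "cointegrated": p_value < alpha}

# Example with simulated data: b is a random walk, a = 0.8*b + stationary noise
rng = np.random.default_rng(0)
px_b = np.cumsum(rng.normal(0, 1, 1000)) + 100
px_a = 0.8 * px_b + rng.normal(0, 1, 1000)
print(pair_diagnostics(px_a, px_b))
```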

Modeling Assumptions are Often Wrong – and We Know It

We are, of course, not the first to suggest that empirical models are misspecified:

“All models are wrong, but some are useful” (Box 1976, Box and Draper 1987).


Martin Feldstein (1982: 829): “In practice all econometric specifications are necessarily false models.”


Luke Keele (2008: 1): “Statistical models are always simplifications, and even the most complicated model will be a pale imitation of reality.”


Peter Kennedy (2008: 71): “It is now generally acknowledged that econometric models are false and there is no hope, or pretense, that through them truth will be found.”

During the crash of 2008, quantitative analysts and risk managers found out the hard way that the assumptions underpinning the copula models used to price and hedge credit derivative products were highly sensitive to market conditions.  In other words, they were not robust.  See this post for more on the application of copula theory in risk management:

http://jonathankinlay.com/2017/01/copulas-risk-management/


Robustness Testing in Quantitative Research and Trading

We interpret model misspecification as model uncertainty. Robustness tests analyze model uncertainty by comparing a baseline model to plausible alternative model specifications.  Rather than trying to specify models correctly (an impossible task given causal complexity), researchers should test whether the results obtained by their baseline model – their best attempt at optimizing the specification of their empirical model – hold when the baseline specification is systematically replaced with plausible alternatives. This is the practice of robustness testing.


Robustness testing analyzes the uncertainty of models and tests whether estimated effects of interest are sensitive to changes in model specifications. The uncertainty about the baseline model’s estimated effect size shrinks if the robustness test model finds the same or a similar point estimate with smaller standard errors, though with multiple robustness tests the uncertainty is likely to increase. The uncertainty about the baseline model’s estimated effect size increases if the robustness test model obtains different point estimates and/or larger standard errors. Either way, robustness tests can increase the validity of inferences.

Robustness testing replaces the scientific crowd with a systematic evaluation of model alternatives.

Robustness in Quantitative Research

In the literature, robustness has been defined in different ways:

  • as same sign and significance (Leamer)
  • as a weighted average effect (Bayesian and Frequentist Model Averaging)
  • as effect stability

We define robustness as effect stability.

Parameter Stability and Properties of Robustness

Robustness is the share of the probability density distribution of the robustness test model that falls within the 95-percent confidence interval of the baseline model.  In formulaic terms:

ρ = Φ((βb + 1.96·σb − βr)/σr) − Φ((βb − 1.96·σb − βr)/σr)

where βb and σb denote the point estimate and standard error of the baseline model, βr and σr those of the robustness test model, and Φ the standard Normal CDF.

  • Robustness is left–right symmetric: identical positive and negative deviations of the robustness test compared to the baseline model give the same degree of robustness.
  • If the standard error of the robustness test is smaller than the one from the baseline model, ρ converges to 1 as long as the difference in point estimates is negligible.
  • For any given standard error of the robustness test, ρ is always and unambiguously smaller the larger the difference in point estimates.
  • Differences in point estimates have a strong influence on ρ if the standard error of the robustness test is small but a small influence if the standard errors are large.
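
Under these normality assumptions, ρ is straightforward to compute. Here is a minimal Python sketch, using hypothetical point estimates and standard errors, that illustrates the properties listed above:

```python
# rho = share of the robustness test model's density falling inside the
# baseline model's 95% confidence interval, assuming normal estimates.
from scipy.stats import norm

def rho(b_base, se_base, b_rob, se_rob, z=1.96):
    lower, upper = b_base - z * se_base, b_base + z * se_base
    return norm.cdf(upper, loc=b_rob, scale=se_rob) - norm.cdf(lower, loc=b_rob, scale=se_rob)

# Identical point estimates, tighter robustness-test standard error: rho -> 1
print(round(rho(0.50, 0.10, 0.50, 0.02), 3))   # ~1.0
# Same standard error but a shifted point estimate: rho falls
print(round(rho(0.50, 0.10, 0.75, 0.10), 3))   # ~0.29
```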

Robustness Testing in Four Steps

  1. Define the subjectively optimal specification for the data-generating process at hand. Call this model the baseline model.
  2. Identify assumptions made in the specification of the baseline model which are potentially arbitrary and that could be replaced with alternative plausible assumptions.
  3. Develop models that change one of the baseline model’s assumptions at a time. These alternatives are called robustness test models.
  4. Compare the estimated effects of each robustness test model to the baseline model and compute the estimated degree of robustness.

Model Variation Tests

Model variation tests change one (or sometimes more than one) model specification assumption and replace it with an alternative assumption, such as:

  • change in set of regressors
  • change in functional form
  • change in operationalization
  • change in sample (adding or subtracting cases)

Example: Functional Form Test

The functional form test examines the baseline model’s functional form assumption against a higher-order polynomial model. The two models should be nested to allow for identical functional forms. As an example, we analyze the ‘environmental Kuznets curve’ prediction, which suggests the existence of an inverted U-shaped relation between per capita income and emissions.

[Figure: emissions vs. per capita income]

Note: grey-shaded area represents confidence interval of baseline model

Another example of functional form testing is given in this review of Yield Curve Models:

http://jonathankinlay.com/2018/08/modeling-the-yield-curve/

Random Permutation Tests

Random permutation tests change specification assumptions repeatedly. Usually, researchers specify a model space and randomly and repeatedly select models from this model space. Examples:

  • sensitivity tests (Leamer 1978)
  • artificial measurement error (Plümper and Neumayer 2009)
  • sample split – attribute aggregation (Traunmüller and Plümper 2017)
  • multiple imputation (King et al. 2001)

We use Monte Carlo simulation to test the sensitivity of the performance of our Quantitative Equity strategy to changes in the price generation process and also in model parameters:

http://jonathankinlay.com/2017/04/new-longshort-equity/
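
As a rough illustration of the approach (not the actual strategy code), the sketch below perturbs the drift and volatility of a simulated GBM price process and records how the Sharpe ratio of a toy moving-average rule responds across the model space:

```python
# Monte Carlo sensitivity sketch: perturb the price-generation process and
# observe the distribution of a toy strategy's performance metric.
import numpy as np

rng = np.random.default_rng(7)

def gbm_path(mu, sigma, n=2520, dt=1/252, s0=100.0):
    z = rng.normal(size=n)
    return s0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z))

def ma_rule_sharpe(prices, window=50):
    rets = np.diff(np.log(prices))
    ma = np.convolve(prices, np.ones(window) / window, mode="valid")
    signal = (prices[window - 1:-1] > ma[:-1]).astype(float)   # long above the MA
    pnl = signal * rets[window - 1:]                           # next-period returns
    return np.sqrt(252) * pnl.mean() / pnl.std()

for mu in (0.02, 0.06, 0.10):
    for sigma in (0.10, 0.20, 0.40):
        sharpes = [ma_rule_sharpe(gbm_path(mu, sigma)) for _ in range(200)]
        print(f"mu={mu:.2f} sigma={sigma:.2f}  median Sharpe={np.median(sharpes):+.2f}")
```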

Structured Permutation Tests

Structured permutation tests change a model assumption within a model space in a systematic way. Changes in the assumption are based on a rule, rather than random selection.  Possibilities here include:

  • sensitivity tests (Levine and Renelt)
  • jackknife test
  • partial demeaning test

Example: Jackknife Robustness Test

The jackknife robustness test is a structured permutation test that systematically excludes one or more observations from the estimation at a time until all observations have been excluded once. With a ‘group-wise jackknife’ robustness test, researchers systematically drop a set of cases that group together by satisfying a certain criterion – for example, countries within a certain per capita income range or all countries on a certain continent. In the example, we analyze the effect of earthquake propensity on quake mortality for countries with democratic governments, excluding one country at a time. We display the results using per capita income as information on the x-axis.

[Figure: jackknife robustness test results]

Upper and lower bounds mark the confidence interval of the baseline model.
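
A minimal sketch of the procedure, using simulated data and hypothetical variable names, might look like this:

```python
# Jackknife robustness test: re-estimate leaving out one observation (here,
# one country) at a time and compare estimates to the baseline 95% CI.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
quake_propensity = rng.uniform(0, 10, n)
log_income = rng.normal(9, 1, n)
mortality = 2.0 * quake_propensity - 1.5 * log_income + rng.normal(0, 3, n)

X = sm.add_constant(np.column_stack([quake_propensity, log_income]))
baseline = sm.OLS(mortality, X).fit()
lo, hi = baseline.conf_int()[1]            # 95% CI for the quake coefficient

estimates = []
for i in range(n):
    keep = np.arange(n) != i               # drop country i
    estimates.append(sm.OLS(mortality[keep], X[keep]).fit().params[1])

inside = np.mean([(lo <= b <= hi) for b in estimates])
print(f"baseline beta={baseline.params[1]:.2f}, CI=({lo:.2f}, {hi:.2f}), "
      f"share of jackknife estimates inside CI: {inside:.0%}")
```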

Robustness Limit Tests

Robustness limit tests provide a way of analyzing structured permutation tests. These tests ask how much a model specification has to change to render the effect of interest non-robust. Some examples of robustness limit testing approaches:

  • unobserved omitted variables (Rosenbaum 1991)
  • measurement error
  • under- and overrepresentation
  • omitted variable correlation

For an example of limit testing, see this post on a review of the Lognormal Mixture Model:

http://jonathankinlay.com/2018/08/the-lognormal-mixture-variance-model/

Summary on Robustness Testing

Robustness tests have become an integral part of research methodology. They allow researchers to study the influence of arbitrary specification assumptions on estimates, and can identify uncertainties that would otherwise escape the attention of empirical researchers. Robustness tests offer the currently most promising answer to model uncertainty.

Forecasting Volatility in the S&P500 Index

Several people have asked me for copies of this research article, which develops a new theoretical framework, the ARFIMA-GARCH model, as a basis for forecasting volatility in the S&P 500 Index.  I am in the process of updating the research, but in the meantime a copy of the original paper is available here.

In this analysis we are concerned with the issue of whether market forecasts of volatility, as expressed in the Black-Scholes implied volatilities of at-the-money European options on the S&P500 Index, are superior to those produced by a new forecasting model in the GARCH framework which incorporates long-memory effects.  The ARFIMA-GARCH model, which uses high frequency data comprising 5-minute returns, makes volatility the subject process of interest, to which innovations are introduced via a volatility-of-volatility (kurtosis) process.  Despite performing robustly in- and out-of-sample, an encompassing regression indicates that the model is unable to add to the information already contained in market forecasts.  However, unlike model forecasts, implied volatility forecasts show evidence of a consistent and substantial bias.  Furthermore, the model is able to correctly predict the direction of volatility approximately 62% of the time whereas market forecasts have very poor direction prediction ability.  This suggests that either option markets may be inefficient, or that the option pricing model is mis-specified.  To examine this hypothesis, an empirical test is carried out in which at-the-money straddles are bought or sold (and delta-hedged) depending on whether the model forecasts exceed or fall below implied volatility forecasts.  This simple strategy generates an annual compound return of 18.64% over a four year out-of-sample period, during which the annual return on the S&P index itself was -7.24%.  Our findings suggest that, over the period of analysis, investors required an additional risk premium of 88 basis points of incremental return for each unit of volatility risk.
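
For readers interested in the flavor of the encompassing test, here is a minimal sketch: regress realized volatility on the two competing forecasts and inspect whether the model forecast carries information beyond implied volatility. The data below are simulated placeholders, not the 5-minute S&P 500 series used in the paper:

```python
# Encompassing regression sketch: realized vol on implied vol and model forecast.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1000
true_vol = 0.15 + 0.05 * np.abs(rng.standard_normal(n))     # latent volatility
implied = true_vol + 0.02 + 0.01 * rng.standard_normal(n)   # biased but precise forecast
model_fc = true_vol + 0.04 * rng.standard_normal(n)         # unbiased but noisier forecast
realized = true_vol + 0.02 * rng.standard_normal(n)         # ex-post realized volatility

X = sm.add_constant(np.column_stack([implied, model_fc]))
fit = sm.OLS(realized, X).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(fit.params, fit.tvalues)  # does the model forecast add information beyond implied?
```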

Yield Curve Construction Models – Tools & Techniques

[Figure: yield curve]

Yield curve models are used to price a wide variety of interest rate-contingent claims.  Several different competing methods of curve construction are available; there is no single standard method, and alternative procedures are adopted in different business areas to suit local requirements and market conditions.  This fragmentation has often led to confusion amongst some users of the models as to their precise functionality, and uncertainty as to which is the most appropriate modeling technique. In addition, recent market conditions, which inter alia have seen elevated levels of LIBOR basis volatility, have served to heighten concerns amongst some risk managers and other model users about the output of the models and the validity of the underlying modeling methods.


The purpose of this review, which was carried out in conjunction with research analyst Xu Bai, now at Morgan Stanley, was to gain a thorough understanding of current methodologies, to validate their theoretical frameworks and implementation, to identify any weaknesses in the current modeling methodologies, and to suggest improvements or alternative approaches that may enhance the accuracy, generality and robustness of modeling procedures.

Yield Curve Construction Models

The Lognormal Mixture Variance Model

The LNVM model is a mixture of lognormal models: the model density is a linear combination of the underlying lognormal densities. The resulting mixture density is no longer lognormal, and the model can thereby better fit the skew and smile observed in the market.  The model is becoming increasingly widely used for interest rate/commodity hybrids.
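
The key computational fact is that a European option under the mixture is priced as the probability-weighted sum of Black-Scholes prices, one per component volatility. The sketch below, with illustrative (uncalibrated) parameters, backs out the implied volatility smile that results:

```python
# Lognormal mixture: option price = weighted sum of Black-Scholes prices.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mixture_call(S, K, T, r, weights, sigmas):
    return sum(w * bs_call(S, K, T, r, s) for w, s in zip(weights, sigmas))

S, T, r = 100.0, 0.5, 0.02
weights, sigmas = (0.7, 0.3), (0.12, 0.35)     # calm and stressed components

for K in (80, 90, 100, 110, 120):
    price = mixture_call(S, K, T, r, weights, sigmas)
    iv = brentq(lambda v: bs_call(S, K, T, r, v) - price, 1e-4, 2.0)
    print(f"K={K}: implied vol = {iv:.1%}")    # higher away from the money
```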


In this review of the model, I examine the mathematical framework of the model in order to gain an understanding of its key features and characteristics.

The LogNormal Mixture Variance Model

Learning the Kalman Filter

Michael Kleder’s “Learning the Kalman Filter” mini tutorial, along with the great feedback it has garnered (73 comments and 67 ratings, averaging 4.5 out of 5 stars), is one of the most popular downloads from Matlab Central, and for good reason.

In his in-file example, Michael steps through a Kalman filter example in which a voltmeter is used to measure the output of a 12-volt automobile battery. The model simulates both randomness in the output of the battery, and error in the voltmeter readings. Then, even without defining an initial state for the true battery voltage, Michael demonstrates that, with only 5 lines of code, the Kalman filter can be implemented to predict the true output based on (not-necessarily-accurate) uniformly spaced measurements:

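The original code is MATLAB; a minimal Python sketch of the same voltmeter example, with illustrative noise parameters rather than Kleder’s exact settings, looks like this:

```python
# Scalar Kalman filter tracking a 12-volt battery from noisy voltmeter readings.
import numpy as np

rng = np.random.default_rng(5)
true_voltage = 12.0
n, q, r = 100, 1e-5, 0.1**2        # steps, process noise var, measurement noise var

x, p = 0.0, 1.0                    # deliberately poor initial state and variance
estimates = []
for _ in range(n):
    z = true_voltage + rng.normal(0, np.sqrt(r))   # noisy voltmeter reading
    p = p + q                      # predict: variance grows by process noise
    k = p / (p + r)                # Kalman gain
    x = x + k * (z - x)            # update estimate with the innovation
    p = (1 - k) * p                # update variance
    estimates.append(x)

print(f"final estimate: {estimates[-1]:.3f} V (true value {true_voltage} V)")
```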

This is a simple but powerful example that shows the utility and potential of Kalman filters. It’s sure to help those who are hesitant about delving into the world of Kalman filtering.

Using Volatility to Predict Market Direction

Decomposing Asset Returns


We can decompose the returns process Rt as follows:

Rt = sign(Rt) × |Rt|

While the left hand side of the equation is essentially unforecastable, both of the right-hand-side components of returns display persistent dynamics and hence are forecastable. Both the signs of returns and magnitude of returns are conditional mean dependent and hence forecastable, but their product is conditional mean independent and hence unforecastable. This is an example of a nonlinear “common feature” in the sense of Engle and Kozicki (1993).

Although asset returns are essentially unforecastable, the same is not true for asset return signs (i.e. the direction-of-change). As long as expected returns are nonzero, one should expect sign dependence, given the overwhelming evidence of volatility dependence. Even in assets where expected returns are zero, sign dependence may be induced by skewness in the asset returns process.  Hence market timing ability is a very real possibility, depending on the relationship between the mean of the asset returns process and its higher moments. The highly nonlinear nature of the relationship means that conditional sign dependence is not likely to be found by traditional measures such as sign autocorrelations, runs tests or traditional market timing tests. Sign dependence is likely to be strongest at intermediate horizons of 1-3 months, and unlikely to be important at very low or high frequencies. Empirical tests demonstrate that sign dependence is very much present in actual US equity returns, with probabilities of positive returns rising to 65% or higher at various points over the last 20 years. A simple logit regression model captures the essentials of the relationship very successfully.
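
A minimal sketch of such a logit model, fitted here to simulated data that embody the sign-volatility link (not the empirical US equity series referenced above), is shown below:

```python
# Logit of the sign of returns on conditional volatility: with a positive
# expected return, higher volatility pushes P(positive) toward 0.5, so the
# volatility coefficient should come out negative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n = 5000
vol = 0.02 + 0.06 * rng.uniform(size=n)          # conditional monthly vol
mu = 0.008                                        # positive expected return
rets = rng.normal(mu, vol)
up = (rets > 0).astype(int)

fit = sm.Logit(up, sm.add_constant(vol)).fit(disp=0)
print(fit.params)        # expect a negative coefficient on volatility
```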

Now consider the implications of dependence and hence forecastability in the sign of asset returns, or, equivalently, the direction-of-change. It may be possible to develop profitable trading strategies if one can successfully time the market, regardless of whether or not one is able to forecast the returns themselves.  

There is substantial evidence that sign forecasting can often be done successfully. Relevant research on this topic includes Breen, Glosten and Jagannathan (1989), Leitch and Tanner (1991), Wagner, Shellans and Paul (1992), Pesaran and Timmerman (1995), Kuan and Liu (1995), Larsen and Wozniak (1995), Womack (1996), Gencay (1998), Leung, Daouk and Chen (1999), Elliott and Ito (1999), White (2000), Pesaran and Timmerman (2000), and Cheung, Chinn and Pascual (2003).

There is also a huge body of empirical research pointing to the conditional dependence and forecastability of asset volatility. Bollerslev, Chou and Kroner (1992) review evidence in the GARCH framework, Ghysels, Harvey and Renault (1996) survey results from stochastic volatility modeling, while Andersen, Bollerslev and Diebold (2003) survey results from realized volatility modeling.

Sign Dynamics Driven By Volatility Dynamics

Let the returns process Rt be Normally distributed with mean m and conditional volatility st.

The probability of a positive return, Pr[Rt+1 > 0], is given by the Normal CDF:

Pr[Rt+1 > 0] = 1 − Φ(−m/st) = Φ(m/st)

where Φ denotes the standard Normal CDF.
For a given mean return, m, the probability of a positive return is a function of conditional volatility st. As the conditional volatility increases, the probability of a positive return falls, as illustrated in Figure 1 below with m = 10% and st = 5% and 15%.

In the former case, the probability of a positive return is greater because more of the probability mass lies to the right of the origin. Despite having the same, constant expected return of 10%, the process has a greater chance of generating a positive return in the first case than in the second. Thus volatility dynamics drive sign dynamics.  
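
These probabilities follow directly from the formula above:

```python
# Pr[R > 0] = Phi(m / st) for m = 10% and st = 5% vs. 15%.
from scipy.stats import norm

m = 0.10
for st in (0.05, 0.15):
    print(f"st={st:.0%}: Pr[R > 0] = {norm.cdf(m / st):.1%}")
# st=5%:  Pr ~ 97.7%;  st=15%: Pr ~ 74.8%
```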

Figure 1: Probability of a positive return, for m = 10% with st = 5% and st = 15%

Email me at jkinlay@investment-analytics.com for a copy of the complete article.

Volatility Metrics

Volatility Estimation

For a very long time analysts were content to accept the standard deviation of returns as the norm for estimating volatility, even though theoretical research and empirical evidence dating from as long ago as 1980 suggested that superior estimators existed.
Part of the reason was that the claimed efficiency improvements of the Parkinson, Garman-Klass and other estimators failed to translate into practice when applied to real data. Or, at least, no one could quite be sure whether such estimators really were superior when applied to empirical data, since volatility, the second moment of the returns distribution, is inherently unknowable. You can say for sure what the return on a particular stock in a particular month was, simply by taking the log of the ratio of the stock prices at the month end and beginning. But the same cannot be said of volatility: the standard deviation of daily returns during the month, often naively assumed to represent the asset’s volatility, is in fact only an estimate of it.
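
For reference, here is a minimal sketch of two of these range-based estimators, using their standard formulas on daily OHLC data (annualization factor assumed to be 252):

```python
# Parkinson (high-low) and Garman-Klass (OHLC) volatility estimators, annualized.
import numpy as np

def parkinson(high, low, periods=252):
    hl = np.log(high / low) ** 2
    return np.sqrt(periods * hl.mean() / (4.0 * np.log(2.0)))

def garman_klass(open_, high, low, close, periods=252):
    hl = 0.5 * np.log(high / low) ** 2
    co = (2.0 * np.log(2.0) - 1.0) * np.log(close / open_) ** 2
    return np.sqrt(periods * (hl - co).mean())
```

Both use intraday range information that the close-to-close standard deviation throws away, which is the source of their claimed efficiency gains.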

Realized Volatility

All that began to change around 2000 with the advent of high frequency data and the concept of Realized Volatility developed by Andersen and others (see Andersen, T.G., T. Bollerslev, F.X. Diebold and P. Labys (2000), “The Distribution of Exchange Rate Volatility,” Revised version of NBER Working Paper No. 6961). The researchers showed that, in principle, one could arrive at an estimate of volatility arbitrarily close to its true value by summing the squares of asset returns at sufficiently high frequency. From this point onwards, Realized Volatility became the “gold standard” of volatility estimation, leaving other estimators in the dust.
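
A minimal sketch of the computation, for a single day of intraday prices, makes clear how simple the idea is:

```python
# Realized volatility: annualized square root of the sum of squared
# intraday log returns for one trading day.
import numpy as np

def realized_vol(intraday_prices, periods_per_year=252):
    r = np.diff(np.log(intraday_prices))
    return np.sqrt(periods_per_year * np.sum(r ** 2))
```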

Except that, in practice, there are often reasons why Realized Volatility may not be the way to go: for example, high frequency data may not be available for the series, or only for a portion of it; and bid-ask bounce can have a substantial impact on the robustness of Realized Volatility estimates. So even where high frequency data is available, it may still make sense to compute alternative volatility estimators. Indeed, now that a “gold standard” estimator of true volatility exists, it is possible to get one’s arms around the question of the relative performance of other estimators. That was my intent in my research paper on Estimating Historical Volatility, in which I compare the performance characteristics of the Parkinson, Garman-Klass and other estimators relative to the realized volatility estimator. The comparison was made on a number of synthetic GBM processes in which the simulated series incorporated non-zero drift, jumps, and stochastic volatility. A further evaluation was made using an actual data series, comprising 5-minute returns on the S&P 500 in the period from Jan 1988 to Dec 2003.

The findings were generally supportive of the claimed efficiency improvements for all of the estimators, which were superior to the classical standard deviation of returns on every criterion in almost every case. However, the evident superiority of all of the estimators, including the Realized Volatility estimator, began to decline for processes with non-zero drift, jumps and stochastic volatility. There was even evidence of significant bias in some of the estimates produced for some of the series, notably by the standard deviation of returns estimator.

The Log-Range Volatility Estimator

Finally, analysis of the results from the study of the empirical data series suggested that there were additional effects in the empirical data, not seen in the simulated processes, that caused estimator efficiency to fall well below theoretical levels. One conjecture is that long memory effects, a hallmark of most empirical volatility processes, played a significant role in that finding.
The bottom line is that, overall, the log-range volatility estimator performs robustly and with superior efficiency to the standard deviation of returns estimator, regardless of the precise characteristics of the underlying process.

Estimating Historical Volatility

Career Opportunity for Quant Traders

Career Opportunity for Quant Traders as Strategy Managers

We are looking for 3-4 traders (or trading teams) to showcase as Strategy Managers on our Algorithmic Trading Platform.  Ideally these would be systematic quant traders, since that is the focus of our fund (although they don’t have to be).  So far the platform offers a total of 10 strategies in equities, options, futures and f/x.  Five of these are run by external Strategy Managers and five are run internally.

The goal is to help Strategy Managers build a track record and gain traction with a potential audience of over 100,000 members.  After a period of 6-12 months we will offer successful managers a position as a PM at Systematic Strategies and offer their strategies in our quantitative hedge fund.  Alternatively, we will assist the manager in raising external capital in order to establish their own fund.

If you are interested in the possibility (or know a talented rising star who might be), details are given below.

Manager Platform

Daytrading Index Futures Arbitrage

Trading with Indices

I have always been an advocate of incorporating index data into one’s trading strategies.  Since they are not tradable, the “market” in index products is often highly inefficient and displays easily identifiable patterns that can be exploited by a trader, or a trading system.  In fact, it is almost trivially easy to design “profitable” index trading systems and I gave a couple of examples in the post below, including a system producing stellar results in the S&P 500 Index.


http://jonathankinlay.com/2016/05/trading-with-indices/

Of course such systems are not directly useful.  But traders often use signals from such a system as a filter for an actual trading system.  So, for example, one might look for a correlated signal in the S&P 500 index as a means of filtering trades in the E-Mini futures market or the SPDR S&P 500 ETF (SPY).

Multi-Strategy Trading Systems

This is often as far as traders will take the idea, since it quickly gets a lot more complicated and challenging to build signals generated from an index series into the logic of a strategy designed for a related, tradable market. And for that reason, there is a great deal of unexplored potential in using index data in this way.  So, for instance, in the post below I discuss a swing trading system in the S&P 500 E-mini futures (ticker: ES) that comprises several sub-systems built on prime-valued time intervals.  This has the benefit of minimizing the overlap between signals from multiple sub-systems, thereby increasing temporal diversification.

http://jonathankinlay.com/2018/07/trading-prime-market-cycles/

A critical point about this system is that each of the sub-systems trades the futures market based on data from both the E-mini contract and the S&P 500 cash index.  A signal is generated when the system finds particular types of discrepancy between the cash index and the corresponding futures, in a quasi risk-arbitrage.
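
A heavily simplified sketch of this type of signal – z-scoring the futures-cash basis and flagging entries when it stretches beyond a threshold – is shown below. This is not the strategy’s actual logic: a production version would, among other things, adjust the basis for fair-value carry, and the window and threshold here are hypothetical:

```python
# Cash-futures discrepancy signal: z-score the log basis over a rolling
# window and flag mean-reversion entries beyond a threshold.
import numpy as np

def basis_signal(futures, cash, window=390, z_entry=2.0):
    basis = np.log(futures) - np.log(cash)
    sig = np.zeros(len(basis))
    for t in range(window, len(basis)):
        hist = basis[t - window:t]
        z = (basis[t] - hist.mean()) / hist.std()
        if z > z_entry:
            sig[t] = -1          # futures rich vs. cash: short futures
        elif z < -z_entry:
            sig[t] = 1           # futures cheap vs. cash: long futures
    return sig
```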


Arbing the NASDAQ 100 Index Futures

Developing trading systems for the S&P500 E-mini futures market is not that hard.  A much tougher challenge, at least in my experience, is presented by the E-mini NASDAQ-100 futures (ticker: NQ).  This is partly to do with the much smaller tick size and different market microstructure of the NASDAQ futures market. Additionally, the upward drift in equity related products typically favors strategies that are long-only.  Where a system trades both long and short sides of the market, the performance on the latter is usually much inferior.  This can mean that the strategy performs poorly in bear markets such as 2008/09 and, for the tech sector especially, the crash of 2000/2001.  Our goal was to develop a daytrading system that might trade 1-2 times a week, and which would perform as well or better on short trades as on the long side.  This is where NASDAQ 100 index data proved to be especially helpful.  We found that discrepancies between the cash index and futures market gave particularly powerful signals when markets seemed likely to decline.  Using this we were able to create a system that performed exceptionally well during the most challenging market conditions. It is notable that, in the performance results below (for a single futures contract, net of commissions and slippage), short trades contributed the greater proportion of total profits, with a higher overall profit factor and average trade size.

[Figures: equity curve and annual P&L for the NQ daytrading system, single contract, net of commissions and slippage]

Conclusion: Using Index Data, Or Other Correlated Signals, Often Improves Performance

It is well worthwhile investigating how non-tradable index data can be used in a trading strategy, either as a qualifying signal or, more directly, within the logic of the algorithm itself.  The greater challenge of building such systems means that there are opportunities to be found, even in well-mined areas like index futures markets.  A parallel idea that likewise offers plentiful opportunity is in designing systems that make use of data on multiple time frames, and in correlated markets, for instance in the energy sector. Here one can identify situations in which, under certain conditions, one market has a tendency to lead another, a phenomenon referred to as Granger causality.
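
A minimal sketch of such a test, using statsmodels on simulated series in which one market genuinely leads the other by one period:

```python
# Granger causality check between two related return series.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 1000
leader = rng.normal(0, 1, n)
follower = 0.4 * np.roll(leader, 1) + rng.normal(0, 1, n)   # lags the leader by one step

# Column order matters: the test asks whether the SECOND column helps
# predict the first; it prints F-tests for each lag up to maxlag.
res = grangercausalitytests(np.column_stack([follower, leader])[1:], maxlag=3)
```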


Volatility Trading Styles

The VIX Surge of Feb 2018

Volatility trading has become a popular niche in investing circles over the last several years.  It is easy to understand why:  with yields at record lows it has been challenging to find an alternative to equities that offers a respectable return.  Volatility, however, continues to be volatile (which is a good thing in this context) and the steepness of the volatility curve has offered investors attractive returns by means of the volatility carry trade.  In this type of volatility trading the long end of the vol curve is sold, often using longer dated futures on the CBOE VIX Index, for example.  The idea is that profits are generated as the contract moves towards expiration, “riding down” the volatility curve as it does so.  This is a variant of the ever-popular “riding down the yield curve” strategy, a staple of fixed income traders for many decades.  The only question here is what to use to hedge the short volatility exposure – highly correlated S&P 500 futures are a popular choice, but the resulting portfolio is exposed to significant basis risk.  Besides, when the volatility curve flattens and inverts, as it did in spectacular fashion in February, the transition tends to happen very quickly, producing substantial losses on the portfolio.  These may be temporary, if the volatility spike is small or short-lived, but as traders and investors discovered in the February drama, neither of these two desirable outcomes is guaranteed.  Indeed, as I pointed out in an earlier post, this turned out to be the largest two-day volatility surge in history.  The results for many hedge funds, especially in the quant sector, were devastating, with several showing high single-digit or double-digit losses for the month.

[Figure: the VIX surge of February 2018]


Over time, investors have become more familiar with the volatility space and have learned to be wary of strategies like volatility carry or option selling, where the returns look superficially attractive, until a market event occurs.  So what alternative approaches are available?

An Aggressive Approach to Volatility Trading

In my blog post Riders on the Storm  I described one such approach:  the Option Trader strategy on our Algo Trading Platform made a massive gain of 27% for the month of February and as a result strategy performance is now running at over 55% for 2018 YTD, while maintaining a Sharpe Ratio of 2.23.

[Figure: Option Trader strategy performance]


The challenge with this style of volatility trading is that it requires a trader (or trading system) with a very strong stomach and an investor astute enough to realize that sizable drawdowns are in a sense “baked in” for this trading strategy and should be expected from time to time.  But traders are often temperamentally unsuited to this style of trading – many react by heading for the hills and liquidating positions at the first sign of trouble; and the great majority of investors are likewise unable to withstand substantial drawdowns, even if the eventual outcome is beneficial.


The Market Timing Approach

So what alternatives are there?  One way of dealing with the problem of volatility spikes is simply to try to avoid them.  That means developing a strategy logic that steps aside altogether when there is a serious risk of an impending volatility surge.  Market timing is easy to describe, but very hard to implement successfully in practice.  The VIX Swing Trader strategy on the Systematic Algotrading platform attempts to do just that, trading only when it judges it safe to do so.  For example, it completely side-stepped the volatility debacle in August 2015, ending the month up +0.74%.  The strategy managed to do the same in February this year, finishing ahead +1.90%, a pretty creditable performance given how volatility funds performed in general.  One helpful characteristic of the strategy is that it trades the less-volatile mid-section of the volatility curve, in the form of the VelocityShares Daily Inverse VIX MT ETN (ZIV).  This ensures that the P&L swings are much less dramatic than for strategies exposed to the front end of the curve, as most volatility strategies are.

[Figures: VIX Swing Trader strategy performance]

A potential weakness of the strategy is that it will often miss great profit opportunities altogether, since its primary focus is to keep investors out of trouble. Allied to this, the system may trade only a handful of times each month.  Indeed, if you look at the track record above you will find months in which the strategy made no trades at all. From experience, investors are almost as bad at sitting on their hands as they are at taking losses:  patience is not a highly regarded virtue in the investing community these days.  But if you are a cautious, patient investor looking for a source of uncorrelated alpha, this strategy may be a good choice. On the other hand, if you are looking for high returns and are willing to take the associated risks, there are choices better suited to your goals.

The Hedging Approach to Volatility Trading

A “middle ground” is taken in our Hedged Volatility strategy. Like the VIX Swing Trader, this strategy trades VIX ETFs/ETNs, but it does so across the maturity table. What distinguishes this strategy from the others is its use of long call options in volatility products like the iPath S&P 500 VIX ST Futures ETN (VXX) to hedge the short volatility exposure in other ETFs in the portfolio.  This enables the strategy to trade much more frequently, across a wider range of ETF products and maturities, with the security of knowing that the tail risk in the portfolio is protected.  Consequently, since live trading began in 2016, the strategy has chalked up returns of over 53% per year, with a Sharpe Ratio of 2 and a Sortino Ratio above 3.  Don’t be confused by the low percentage of trades that are profitable:  the great majority of these loss-making “trades” are in fact hedges, which one would expect to be losers, as most long options trades are.  What matters is the overall performance of the strategy.

[Figure: Hedged Volatility strategy performance]

All of these strategies are available on our Systematic Algotrading Platform, which offers investors the opportunity to trade the strategies in their own brokerage account for a monthly subscription fee.

The Multi-Strategy Approach

The approach taken by the Systematic Volatility Strategy in our Systematic Strategies hedge fund again seeks to steer a middle course between risk and return.  It does so by using a meta-strategy approach that dynamically adjusts the style of strategy deployed as market conditions change.  Rather than using options (the strategy’s mandate includes only ETFs) the strategy uses leveraged ETFs to provide tail risk protection in the portfolio. The strategy has produced an average annual compound return of 38.54% since live trading began in 2015, with a Sharpe Ratio of 3.15:

[Figure: Systematic Volatility Strategy one-page tear sheet, June 2018]


A more detailed explanation of how leveraged ETFs can be used in volatility trading strategies is given in an earlier post:

http://jonathankinlay.com/2015/05/investing-leveraged-etfs-theory-practice/


Conclusion:  Choosing the Investment Style that’s Right for You

There are different styles of volatility trading and the investor should consider carefully which best suits his own investment temperament.  For the “high risk” investor seeking the greatest profit, the Option Trader strategy is an excellent choice, producing returns of +176% per year since live trading began in 2016.   At the other end of the spectrum, the VIX Swing Trader is suitable for an investor with a cautious trading style, who is willing to wait for the right opportunities, i.e. ones that are most likely to be profitable.  For investors seeking to capitalize on opportunities in the volatility space, but who are concerned about the tail risk arising from major market corrections, the Hedged Volatility strategy offers a better choice.  Finally, for investors able to invest $250,000 or more, a hedge fund investment in our Systematic Volatility strategy offers the highest risk-adjusted rate of return.