Improving A Hedge Fund Investment – Cantab Capital’s Quantitative Aristarchus Fund


In this post I am going to take a look at what an investor can do to improve a hedge fund investment through the use of dynamic capital allocation. For the purposes of illustration I am going to use Cantab Capital’s Aristarchus program – a quantitative fund which has grown to over $3.5Bn in assets under management since its opening with $30M in 2007 by co-founders Dr. Ewan Kirk and Erich Schlaikjer.

I chose this product because, firstly, it is one of the most successful quantitative funds in existence and, secondly, because as a CTA its performance record is publicly available.

Cantab’s Aristarchus Fund

Cantab’s stated investment philosophy is that algorithmic trading can help to overcome cognitive biases inherent in human-based trading decisions, by exploiting persistent statistical relationships between markets. Taking a multi-asset, multi-model approach, the majority of Cantab’s traded instruments are liquid futures and forwards, across currencies, fixed income, equity indices and commodities.

Let’s take a look at how that has worked out in practice:

Fig 1 Fig 2

Whatever the fund’s attractions may be, we can at least agree that alpha is not amongst them.  A Sharpe ratio of < 0.5 (which I calculate to be nearer 0.41) is hardly in Renaissance territory, so one imagines that the chief benefit of the product must lie in its liquidity and low market correlation.  Uncorrelated it may be, but an investor in the fund must have extremely deep pockets – and a very strong stomach – to handle the 34% drawdown that the fund suffered in 2013.

Improving the Aristarchus Fund Performance

If we make the assumption that an investment in this product is warranted in the first place, what can be done to improve its performance characteristics?  We’ll look at that question from two different perspectives – the investor’s and the manager’s.

Firstly, from the investor’s perspective, there are relatively few options available to enhance the fund’s contribution, other than through diversification.  One other possibility available to the investor, however, is to develop a program for dynamic capital allocation.  This requires the manager to be open to allowing significant changes in the amount of capital to be allocated from month to month, or quarter to quarter, but in a liquid product like Aristarchus some measure of flexibility ought to be feasible.


An analysis of the fund’s performance indicates the presence of a strong dependency in the returns process.  This is not at all unusual.  Often investment strategies have a tendency to mean-revert: a negative dependency in which periods of poor performance tend to be followed by positive performance, and vice versa.  CTA strategies such as Aristarchus tend to be trend-following, and this can induce positive dependency in the strategy returns process, in which positive months tend to follow earlier positive months, while losing months tend to be followed by further losses.  This is the pattern we find here.

Consequently, rather than maintaining a constant capital allocation, an investor would do better to allocate capital dynamically, increasing the amount of capital after a positive period, while decreasing the allocation after a period of losses.  Let’s consider a variation of this allocation plan, in which the amount of allocated capital is increased by 70% when the last monthly equity value exceeds the quarterly moving average, while the allocation is reduced to zero when the last month’s equity falls below the average.  A dynamic capital allocation plan as simple as this appears to produce a significant improvement in the overall performance of the investment:

Fig 4

The slight increase in annual volatility in the returns produced by the dynamic capital allocation model is more than offset by the 412bp improvement in the CAGR. Consequently, the Sharpe Ratio improves from 0.41 to 0.60.

Nor is this by any means the entire story: the dynamic model produces lower average drawdowns (7.93% vs. 8.52%) and, more importantly, reduces the maximum drawdown over the life of the fund from a painful 34.87% to a more palatable 23.92%.

The much-improved risk profile of the dynamic allocation scheme is reflected in the Return/Drawdown Ratio, which rises from 2.44 to 6.52.

Note, too, that the average level of capital allocated in the dynamic scheme is very slightly less than the original static allocation.  In other words, the dynamic allocation technique results in a more efficient use of capital, while at the same time producing a higher rate of risk-adjusted return and enhancing the overall risk characteristics of the strategy.
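For readers who want to experiment with the idea, here is a minimal sketch in Python of the allocation rule described above, assuming the fund’s monthly returns are available as a pandas Series.  The 70% boost and the three-month (quarterly) moving average come from the description above; the function name, the warm-up handling and everything else are illustrative assumptions, not the code used to produce these results.

```python
import pandas as pd

def dynamic_allocation(monthly_returns: pd.Series,
                       base_alloc: float = 1.0,
                       boost: float = 1.7,
                       ma_months: int = 3) -> pd.DataFrame:
    """Scale next month's allocation up by 70% when the latest equity value
    sits above its quarterly moving average, and cut it to zero when it falls
    below; the static allocation is used during the initial warm-up period."""
    equity = (1.0 + monthly_returns).cumprod()            # monthly equity curve
    ma = equity.rolling(ma_months).mean()
    above = (equity > ma).shift(1, fill_value=False)      # decide with last month's data only
    warmed_up = ma.shift(1).notna()
    allocation = pd.Series(base_alloc, index=monthly_returns.index, dtype=float)
    allocation[warmed_up & above] = base_alloc * boost    # scale up after a good month
    allocation[warmed_up & ~above] = 0.0                  # stand aside after a weak month
    return pd.DataFrame({"allocation": allocation,
                         "static_return": monthly_returns,
                         "dynamic_return": allocation * monthly_returns})
```

Comparing the CAGR, volatility and drawdowns of the static and dynamic return columns then gives the kind of side-by-side analysis summarized above.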

Improving Fund Performance Using a Meta-Strategy

So much for the investor.  What could the manager do to improve the strategy performance?  Of course, there is nothing in principle to prevent the manager from also adopting a dynamic approach to capital allocation, although his investment mandate may require him to be fully invested at all times.

Assuming for the moment that this approach is not available to the manager, he can instead look into the possibilities for developing a meta-strategy.    As I explained in my earlier post on the topic:

A meta-strategy is a trading system that trades trading systems.  The idea is to develop a strategy that will make sensible decisions about when to trade a specific system, in a way that yields superior performance compared to simply following the underlying trading system.

It turns out to be quite straightforward to develop such a meta-strategy, using a combination of stop-loss limits and profit targets to decide when to turn the strategy on or off.  In so doing, the manager is able to avoid some periods of negative performance, producing a significant uplift in the overall risk-adjusted return:

Fig 5
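The exact on/off rules used in the analysis above are not disclosed, but the mechanism is easy to sketch.  The fragment below switches an underlying return stream off when the cumulative return since the strategy was last switched on breaches a stop-loss or a profit target, and re-enters after a positive period; the threshold values and the re-entry rule are illustrative assumptions, not the parameters behind Fig 5.

```python
import pandas as pd

def stop_and_target_filter(returns: pd.Series,
                           stop_loss: float = -0.10,
                           profit_target: float = 0.15) -> pd.Series:
    """Toggle the underlying strategy off when the cumulative return since it
    was last switched on breaches a stop-loss or a profit target, and switch
    it back on after the underlying posts a positive period (illustrative rule)."""
    on, cum = True, 0.0
    out = []
    for r in returns:
        if on:
            out.append(r)
            cum += r                        # simple-sum proxy for PnL since switch-on
            if cum <= stop_loss or cum >= profit_target:
                on, cum = False, 0.0        # stand aside and reset the counter
        else:
            out.append(0.0)                 # strategy is switched off
            if r > 0:
                on = True                   # re-enter after a positive period
    return pd.Series(out, index=returns.index)
```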

Conclusion

Meta-strategies and dynamic capital allocation schemes can enable the investor and the investment manager to improve the performance characteristics of their investment and investment strategy, by increasing returns, reducing volatility and the propensity of the strategy to produce substantial drawdowns.

We have demonstrated how these approaches can be applied successfully to Cantab’s Aristarchus quantitative fund, producing substantial gains in risk adjusted performance and reductions in the average and maximum drawdowns produced over the life of the fund.

Pairs Trading – Part 2: Practical Considerations

Pairs Trading = Numbers Game

One of the first things you quickly come to understand in equity pairs trading is how important it is to spread your risk.  The reason is obvious: stocks are subject to a multitude of risk factors – amongst them earnings shocks and corporate actions – that can blow up an otherwise profitable pairs trade.  Instead of the pair re-converging, they continue to diverge until you are stopped out of the position.  There is not much you can do about this, because equities are inherently risky.  Some arbitrageurs prefer trading ETF pairs for precisely this reason.  But risk and reward are two sides of the same coin:  risks tend to be lower in ETF pairs trades, but so, too, are the rewards.  Another factor to consider is that there are many more opportunities to be found amongst the vast number of stock combinations than in the much smaller universe of ETFs.  So equities remain the asset class of choice for the great majority of arbitrageurs.

So, because of the risk in trading equities, it is vitally important to spread the risk amongst a large number of pairs.  That way, when one of your pairs trades inevitably blows up for one reason or another, the capital allocation is low enough not to cause irreparable damage to the overall portfolio.  Nor are you over-reliant on one or two star performers that may cease to contribute if, for example, one of the stock pairs is subject to a merger or takeover.

Does that mean that pairs trading is accessible only to managers with deep enough pockets to allocate broadly in the investment universe?  Yes and no.  On the one hand, of course, you need sufficient capital to allocate a meaningful sum to each of your pairs.  But pairs trading is highly efficient in its use of capital:  margin requirements are greatly reduced by the much lower risk of a dollar-neutral portfolio.  So your capital goes further than it would in a long-only strategy, for example.

How many pair combinations would you need to research to build an investment portfolio of the required size?  The answer might shock you:  millions.  Or  even tens of millions.  In the case of the Gemini Pairs strategy, for example, the universe comprises around 10m stock pairs and 200,000 ETF combinations.

It turns out to be much more challenging to find reliable stock pairs to trade than one might imagine, for reasons I am about to discuss.  So what tends to discourage investors from exploring pairs trading as an investment strategy is not because the strategy is inherently hard to understand; nor because the methods are unknown; nor because it requires vast amounts of investment capital to be viable.  It is that the research effort required to build a successful statistical arbitrage strategy is beyond the capability of the great majority of investors.

Before you become too discouraged, I will just say that there are at least two solutions to this challenge I can offer, which I will discuss later.

Methodology Isn’t a Decider

I have traded pairs successfully using all of the techniques described in the first part of the post (i.e. Ratio, Regression, Kalman and Copula methods).  Equally, I have seen a great many failed pairs strategies produced by using every available technique.  There is no silver bullet.  One often finds that a pair that performs poorly using the ratio method produces decent returns when a regression or Kalman Filter model is applied.  From experience, there is no pattern that allows you to discern which technique, if any, is going to work.  You have to be prepared to try all of them, at least in back-test.

Correlation is Not the Answer

In a typical description of pairs trading the first order of business is often to look for highly correlated pairs to trade.  While this makes sense as a starting point, it can never provide a complete answer.  The reason is well known:  correlations are unstable, and can often arise from random chance rather than as a result of a real connection between two stock processes.  The concept of spurious correlation is most easily grasped with an example, such as the oft-cited chart relating per-capita cheese consumption to deaths by bedsheet entanglement.

Of course, no rational person believes that there is a causal connection between cheese consumption and death by bedsheet entanglement – it is a spurious correlation that has arisen due to the random fluctuations in the two time series.  And because the correlation is spurious, the apparent relationship is likely to break down in future.

We can provide a slightly more realistic illustration as follows.  Let us suppose we have two correlated stocks, one with an annual drift (i.e. trend) of 5% and annual volatility of 25%, the other with an annual drift of 20% and annual volatility of 50%.  We assume that returns from the two processes follow a Normal distribution, with a true correlation of 0.3.  Let’s assume that we sample the returns for the two stocks over 90 days to estimate the correlation, simulating the real-world situation in which the true correlation is unknown.  Unlike in the real-world scenario, we can sample the 90-day returns many times (100,000 in this experiment) and look at the range of correlation estimates we observe.

We find that, over the 100,000 repeated experiments the average correlation estimate is very close indeed to the true correlation.  However, in the real-world situation we only have a single observation, based on the returns from the two stock processes over the prior 90 days.  If we are very lucky, we might happen to pick a period in which the processes correlate at a level close to the true value of 0.3.  But as the experiment shows, we might be unlucky enough to see an estimate as high as 0.64, or as low as zero!
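This sampling experiment is easy to replicate.  The sketch below follows the setup described above (annual drifts of 5% and 20%, annual volatilities of 25% and 50%, true correlation 0.3, 90 daily observations, 100,000 repetitions); the exact minimum and maximum estimates you see will depend on the random seed.

```python
import numpy as np

rng = np.random.default_rng(1)

true_rho, n_days, n_trials = 0.3, 90, 100_000
mu = np.array([0.05, 0.20]) / 252                    # daily drift
sigma = np.array([0.25, 0.50]) / np.sqrt(252)        # daily volatility
cov = np.array([[sigma[0]**2,                   true_rho * sigma[0] * sigma[1]],
                [true_rho * sigma[0] * sigma[1], sigma[1]**2]])

# Draw all trials at once: shape (n_trials, n_days, 2)
rets = rng.multivariate_normal(mu, cov, size=(n_trials, n_days))

# Sample Pearson correlation for each 90-day window
x = rets[:, :, 0] - rets[:, :, 0].mean(axis=1, keepdims=True)
y = rets[:, :, 1] - rets[:, :, 1].mean(axis=1, keepdims=True)
est = (x * y).sum(axis=1) / np.sqrt((x**2).sum(axis=1) * (y**2).sum(axis=1))

print(f"mean estimate: {est.mean():.3f}")
print(f"min / max estimate: {est.min():.2f} / {est.max():.2f}")
```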

So when we look at historical data and use estimates of the correlation coefficient to gauge the strength of the relationship between two stocks, we are at the mercy of random variation in the sampling process, one that could suggest a much stronger (or weaker) connection than is actually the case.

One is on firmer ground in selecting pairs of stocks in the same sector, for example oil or gold-mining stocks, because we are able to identify causal factors that should provide a basis for a reliable correlation, such as the price of oil or gold.  This is indeed one of the “screens” that statistical arbitrageurs often use to select pairs for analysis.  But there are many examples of stocks that “ought” to be correlated but which nonetheless break down and drift apart.  This can happen for many reasons:  changes in the capital structure of one of the companies; a major product launch;  regulatory action; or corporate actions such as mergers and takeovers.

The bottom line is that correlation, while important, is not by itself a sufficiently reliable measure to provide a basis for pair selection.

Cointegration: the Drunk and His Dog

Suppose you see two drunks (i.e., two random walks) wandering around. The drunks don’t know each other (they’re independent), so there’s no meaningful relationship between their paths.

But suppose instead you have a drunk walking with his dog. This time there is a connection. What’s the nature of this connection? Notice that although each path individually is still an unpredictable random walk, given the location of either the drunk or the dog, we have a pretty good idea of where the other is; that is, the distance between the two is fairly predictable. (For example, if the dog wanders too far away from his owner, he’ll tend to move in his direction to avoid losing him, so the two stay close together despite a tendency to wander around on their own.) We describe this relationship by saying that the drunk and his dog form a cointegrating pair.

In more technical terms, if we have two non-stationary time series X and Y that become stationary when differenced (these are called integrated of order one series, or I(1) series; random walks are one example) such that some linear combination of X and Y is stationary (aka, I(0)), then we say that X and Y are cointegrated. In other words, while neither X nor Y alone hovers around a constant value, some combination of them does, so we can think of cointegration as describing a particular kind of long-run equilibrium relationship. (The definition of cointegration can be extended to multiple time series, with higher orders of integration.)

Other examples of cointegrated pairs:

  • Income and consumption: as income increases/decreases, so too does consumption.
  • Size of police force and amount of criminal activity
  • A book and its movie adaptation: while the book and the movie may differ in small details, the overall plot will remain the same.
  • Number of patients entering or leaving a hospital

So why do we care about cointegration? Someone else can probably give more econometric applications, but in quantitative finance, cointegration forms the basis of the pairs trading strategy: suppose we have two cointegrated stocks X and Y, with the particular (for concreteness) cointegrating relationship X – 2Y = Z, where Z is a stationary series of zero mean. For example, X could be McDonald’s, Y could be Burger King, and the cointegration relationship would mean that X tends to be priced twice as high as Y, so that when X is more than twice the price of Y, we expect X to move down or Y to move up in the near future (and analogously, if X is less than twice the price of Y, we expect X to move up or Y to move down). This suggests the following trading strategy: if X – 2Y > d, for some positive threshold d, then we should sell X and buy Y (since we expect X to decrease in price and Y to increase), and similarly, if X – 2Y < -d, then we should buy X and sell Y.
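In code, the threshold rule just described looks something like the sketch below.  The hedge ratio of 2 comes from the example; in practice the threshold d would be set relative to the standard deviation of the spread, and position sizing and trading costs are ignored here.

```python
import pandas as pd

def spread_signal(x: pd.Series, y: pd.Series,
                  hedge_ratio: float = 2.0, d: float = 1.0) -> pd.Series:
    """+1 = buy X / sell Y, -1 = sell X / buy Y, 0 = no new position,
    based on the spread Z = X - hedge_ratio * Y and threshold d."""
    z = x - hedge_ratio * y
    signal = pd.Series(0, index=z.index)
    signal[z > d] = -1      # X rich relative to Y: sell X, buy Y
    signal[z < -d] = 1      # X cheap relative to Y: buy X, sell Y
    return signal
```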

So how do you detect cointegration? There are several different methods, but the simplest is probably the Engle-Granger test, which works roughly as follows (a code sketch follows the list):

  • Check that Xt and Yt are both I(1).
  • Estimate the cointegrating relationship Yt = a Xt + et by ordinary least squares.
  • Check that the cointegrating residuals et are stationary (say, by using a so-called unit root test, e.g., the Dickey-Fuller test).
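Here is the rough sketch promised above, using statsmodels and assuming x and y are pandas Series.  Note that the ADF critical values applied to estimated residuals in the final step are only approximate; the built-in statsmodels.tsa.stattools.coint function uses the appropriate MacKinnon critical values and is preferable in practice.

```python
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger(x, y, signif=0.05):
    """Two-step Engle-Granger check for pandas Series x, y.
    Returns (cointegrated?, regression coefficients, residual ADF p-value)."""
    # Step 1: both series should be I(1) - non-stationary in levels,
    # stationary in first differences.
    for s in (x, y):
        if adfuller(s.dropna())[1] < signif:
            raise ValueError("series appears stationary in levels (not I(1))")
        if adfuller(s.diff().dropna())[1] > signif:
            raise ValueError("first differences do not appear stationary")

    # Step 2: estimate the cointegrating relationship Yt = a*Xt + c + et by OLS
    ols = sm.OLS(y, sm.add_constant(x)).fit()

    # Step 3: unit-root (Dickey-Fuller) test on the residuals et
    pvalue = adfuller(ols.resid)[1]
    return pvalue < signif, ols.params, pvalue
```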

Also, something else that should perhaps be mentioned is the relationship between cointegration and error-correction mechanisms: suppose we have two cointegrated series Xt, Yt, with autoregressive representations

Xt = a Xt−1 + b Yt−1 + ut
Yt = c Xt−1 + d Yt−1 + vt

By the Granger representation theorem (which is actually a bit more general than this), we then have

ΔXt = α1 (Yt−1 − β Xt−1) + ut
ΔYt = α2 (Yt−1 − β Xt−1) + vt

where Yt−1 − β Xt−1 ∼ I(0) is the cointegrating relationship. Regarding Yt−1 − β Xt−1 as the extent of disequilibrium from the long-run relationship, and the αi as the speed (and direction) at which the time series correct themselves from this disequilibrium, we can see that this formalizes the way cointegrated variables adjust to match their long-run equilibrium.

So, just to summarize a bit, cointegration is an equilibrium relationship between time series that individually aren’t in equilibrium (you can kind of contrast this with (Pearson) correlation, which describes a linear relationship), and it’s useful because it allows us to incorporate both short-term dynamics (deviations from equilibrium) and long-run expectations, i.e. corrections to equilibrium.  (My thanks to Edwin Chen for this entertaining explanation)

Cointegration is Not the Answer

So a typical workflow for researching a possible pairs trade might be to examine a large number of pairs in a sector of interest, select those that meet some correlation threshold (e.g. 90%), test those pairs for cointegration and select those that appear to be cointegrated.  The problem is:  it doesn’t work!  The pairs thrown up by this process are likely to work for a while, but many (even the majority) will break down at some point, typically soon after you begin live trading.  The reason is that all of the major statistical tests for cointegration have relatively low power, and pairs that are apparently cointegrated can break down suddenly, with consequential losses for the trader.  The following posts delve into the subject in some detail:

 

Other Practical “Gotchas”

Apart from correlations/cointegration breakdowns there is a long list of things that can go wrong with a pairs trade that the practitioner needs to take account of, for instance:

  • A stock may become difficult or expensive to short
  • The overall backtest performance stats for a pair may look great, but the P&L per share is too small to overcome trading costs and other frictions.
  • Corporate actions (mergers, takeovers) and earnings can blow up one side of an otherwise profitable pair.
  • It is possible to trade one leg passively and cross the spread to trade the other leg when the first leg fills, but this trade expression is challenging to test.  If paying the spread on both legs is going to jeopardize the profitability of the strategy, it is probably better to reject the pair.

What Works

From my experience, the testing phase of the process of building a statistical arbitrage strategy is absolutely critical.  By this I mean that, after screening for correlation and cointegration, and back-testing all of the possible types of model, it is essential to conduct an extensive simulation test over a period of several weeks before adding a new pair to the production system.  Testing is important for any algorithmic strategy, of course, but it is an integral part of the selection process where pairs trading is concerned.  You should expect 60% to 80% of your candidates to fail in simulated trading, even after they have been carefully selected and thoroughly back-tested.  The good news is that those pairs that pass the final stage of testing usually are successful in a production setting.

Implementation

Putting all of this information together, it should be apparent that the major challenge in pairs trading lies not so much in understanding and implementing methodologies and techniques, but in implementing the research process on an industrial scale, sufficient to collate and analyze tens of millions of pairs. This is beyond the reach of most retail investors, and indeed, many small trading firms:  I once worked with a trading firm for over a year on a similar research project, but in the end it proved to be beyond the capabilities of even their highly competent development team.

So does this mean that, for the average quantitative strategist or investor, statistical arbitrage must remain an investment concept of purely theoretical interest?  Actually, no.  Firstly, for the investor, there are plenty of investment products available that can be accessed via hedge fund structures (or even our algotrading platform, as I have previously mentioned).

For those interested in building stat arb strategies there is an excellent resource that collates data and analysis on tens of millions of stock pairs, enabling the researcher to identify promising pairs, test their level of cointegration, backtest strategies using different methodologies and even put selected pairs strategies into production (see example below).

Those interested should contact me for more information.

 

A Meta-Strategy in Euro Futures

Several readers responded to my recent invitation to send me details of their trading strategies, to see if I could develop a meta-strategy with superior overall performance characteristics (see original post here).

One reader sent me the following strategy in EUR futures, with a promising-looking equity curve over the period from 2009-2014.

EUR Orig Equity Curve

I have no information about the underlying architecture of the strategy, but a performance analysis shows that it trades approximately once per day, with a win rate of 49%, a PNL per trade of $4.79 and an IR estimated to be 2.6.

Designing the Meta-Strategy

My task was to see if I could design a meta-strategy that would “trade” the underlying strategy, i.e. produce signals to turn the underlying strategy on or off.  Here we are designing a long-only strategy, where a “buy” trade represents the signal to turn the underlying strategy on, while an exit trade from the meta-strategy turns the underlying strategy off.

The meta-strategy is built in trade time rather than calendar time – we don’t want the meta-strategy trying to turn the underlying trading strategy on or off while it is in the middle of a trade.  The data we use in the design exercise is the trade-by-trade equity curve, including the date and timestamp and the open, high, low and close values of the equity curve for each trade.


No allowance for trading costs is necessary since all of the transaction costs are baked into the PNL of the underlying strategy – there are no additional costs entailed in turning the strategy on or off, as long as we do that in a period when there is no open position.

In designing the meta-strategy I chose simply to try to improve the overall net PNL.  This is a good starting point, but one would typically go on to consider a variety of other possible criteria, including, for example, Net Profit / Av. Max Drawdown, Net Profit / Flat Time, MAR Ratio, Sharpe Ratio, Kelly Criterion, or a combination of them.

I used 80% of the trade data to design and test the strategy and reserved 20% of the data to test the performance of the meta-strategy out-of-sample.

Results

The analysis summarized below shows a clear improvement in the overall performance of the meta-strategy, compared to the underlying strategy.  Net PNL and Average Trade are increased by 40%, while the trade standard deviation is noticeably reduced, leading to a higher IR of 5.27 vs 3.10.  The win rate increases from around 2/3 to over 90%.

Although not as marked, the overall improvement in strategy performance metrics during the out-of-sample test period is highly significant, both economically and statistically.

Note that the Meta-strategy is a long-only strategy in which each “trade” is a period in which the system trades the underlying EUR futures strategy.  So in fact, in the Meta-strategy, each trade represents a number of successive underlying, real trades (which of course may be long or short).

Put another way, the Meta-Strategy turns the underlying trading strategy on and off 276 times in total.

Perf1

Perf 2 Perf 3 Perf 4

 

Conclusion

It is feasible to design a meta-strategy that improves the overall performance characteristics of an underlying trading strategy, by identifying the higher-value trades and turning the strategy on or off based on forecasts of its future performance.

No knowledge is required of the mechanics of the underlying trading strategy in order to design a profitable Meta-strategy.

Meta-strategies have been successfully applied to problems of capital allocation, where decisions are made on a regular basis about how much capital to allocate to multiple trading strategies, or traders.

 

 

Pairs Trading in Practice

Part 1 – Methodologies

It is perhaps a little premature for a deep dive into the Gemini Pairs Trading strategy which trades on our Systematic Algotrading platform.  At this stage all one can say for sure is that the strategy has made a pretty decent start – up around 17% from October 2018.  The strategy does trade multiple times intraday, so the record in terms of completed trades – numbering over 580 – is appreciable (the web site gives a complete list of live trades).  And despite the turmoil through the end of last year the Sharpe Ratio has ranged consistently around 2.5.

One of the theoretical advantages of pairs trading is, of course, that the coupling of long and short positions in a relative value trade is supposed to provide a hedge against market downdrafts, such as we saw in Q4 2018.  In that sense pairs trading is the quintessential hedge fund strategy, embodying the central concept on which the entire edifice of hedge fund strategies is premised.
In practice, however, things often don’t work out as they should. In this thread I want to spend a little time reviewing why that is and to offer some thoughts based on my own experience of working with statistical arbitrage strategies over many years.

Methodology

There is no “secret recipe” for pairs trading:  the standard methodologies are as well known as the strategy concept.  But there are some important practical considerations that I would like to delve into in this post.  Before doing that, let me quickly review the tried and tested approaches used by statistical arbitrageurs.

The Ratio Model is one of the standard pairs trading models described in the literature. It is based on the ratio of the instrument prices, a moving average and a standard deviation: in other words, it is based on the Bollinger Bands indicator.

  • we trade a pair of stocks A, B, having price series A(t) and B(t)
  • we need to calculate ratio time series R(t) = A(t) / B(t)
  • we apply a moving average of type T with period Pm on R(t) to get time series M(t)
  • Next we apply the standard deviation with period Ps on R(t) to get time series S(t)
  • now we can create the Z-score series Z(t) as Z(t) = (R(t) – M(t)) / S(t); this time series gives us a z-score to signal trading decisions directly (in reality we have two Z-scores, Z-score_ask and Z-score_bid, as they are calculated using different prices, but for the sake of simplicity let’s pretend we don’t pay the bid-ask spread and have just one Z-score)

Another common way to visualize  this approach is to think in terms of bands around the moving average M(t):

  • upper entry band Un(t) = M(t) + S(t) * En
  • lower entry band Ln(t) = M(t) – S(t) * En
  • upper exit band Ux(t) = M(t) + S(t) * Ex
  • lower exit band Lx(t) = M(t) – S(t) * Ex

These bands are actually the same bands as in the Bollinger Bands indicator, and we can use crossings of R(t) and the bands as trade signals (a code sketch of the full model follows the entry rules below).

  • We open short pair position, if the Z-score Z(t) >= En (equivalent to R(t) >= Un(t))
  • We open long pair position if the Z-score Z(t) <= -En (equivalent to R(t) <= Ln(t))
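Putting the pieces together, a minimal pandas sketch of the Ratio model follows.  A simple moving average stands in for the moving average of type T, a single mid-price Z-score is used, and the stateful exit logic at the Ex bands is left out for brevity.

```python
import pandas as pd

def ratio_model(a: pd.Series, b: pd.Series,
                pm: int = 20, ps: int = 20, en: float = 2.0) -> pd.DataFrame:
    """Z-score of the price ratio R(t) = A(t)/B(t) and the resulting entry
    signals: -1 = open short pair (short A / long B), +1 = open long pair."""
    r = a / b
    m = r.rolling(pm).mean()            # M(t): moving average of the ratio
    s = r.rolling(ps).std()             # S(t): rolling standard deviation
    z = (r - m) / s                     # Z(t)

    entry = pd.Series(0, index=r.index)
    entry[z >= en] = -1                 # R(t) >= Un(t): open short pair position
    entry[z <= -en] = 1                 # R(t) <= Ln(t): open long pair position
    return pd.DataFrame({"R": r, "M": m, "S": s, "Z": z, "entry": entry})
```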

In the Regression, Residual or Cointegration approach we construct a linear regression between A(t) and B(t) using OLS, where A(t) = β * B(t) + α + R(t)

Because we use a moving window of period P (we calculate a new regression each day), we actually get new series β(t), α(t) and R(t), where β(t) and α(t) are the series of regression coefficients and R(t) the residuals (prediction errors); a code sketch follows the list below.

  • We look at the residuals series  R(t) = A(t) – (β(t) * B(t) + α(t))
  • We next calculate the standard deviation of the residuals R(t), which we designate S(t)
  • Now we can create Z-score series Z(t) as Z(t) = R(t) / S(t) – the time series that is used to generate trade signals, just as in the Ratio model.
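And here is the corresponding sketch for the rolling-regression version, using a plain loop with np.polyfit for clarity; a production implementation would use a vectorized or recursive estimator.

```python
import numpy as np
import pandas as pd

def regression_model(a: pd.Series, b: pd.Series, p: int = 60) -> pd.DataFrame:
    """Rolling OLS A(t) = beta(t)*B(t) + alpha(t) + R(t) over a window of P
    observations, with the residual z-score used for signals as in the
    Ratio model."""
    beta = pd.Series(np.nan, index=a.index)
    alpha = pd.Series(np.nan, index=a.index)
    for i in range(p, len(a)):
        slope, intercept = np.polyfit(b.iloc[i - p:i].values,
                                      a.iloc[i - p:i].values, 1)
        beta.iloc[i], alpha.iloc[i] = slope, intercept
    resid = a - (beta * b + alpha)                 # R(t)
    zscore = resid / resid.rolling(p).std()        # Z(t) = R(t) / S(t)
    return pd.DataFrame({"beta": beta, "alpha": alpha,
                         "residual": resid, "zscore": zscore})
```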

The Kalman Filter model provides superior estimates of the current hedge ratio compared to the Regression method.  For a detailed explanation of the techniques, see the following posts (the post on ETF trading contains complete Matlab code).
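The posts referred to above contain the full treatment (including Matlab code); purely as an illustration of the idea, here is a bare-bones Python sketch that treats the hedge ratio and intercept as a slowly varying state.  The process-noise parameter delta and the observation noise ve are tuning assumptions, not values taken from those posts.

```python
import numpy as np

def kalman_hedge(a: np.ndarray, b: np.ndarray,
                 delta: float = 1e-4, ve: float = 1e-3):
    """Recursive estimates of beta(t), alpha(t) in A(t) = beta*B(t) + alpha + noise,
    with the coefficients modelled as a random walk."""
    n = len(a)
    theta = np.zeros(2)                     # state: [beta, alpha]
    P = np.eye(2)                           # state covariance
    Vw = delta / (1.0 - delta) * np.eye(2)  # process noise
    betas, alphas, errors = np.zeros(n), np.zeros(n), np.zeros(n)
    for t in range(n):
        H = np.array([b[t], 1.0])           # observation vector
        P = P + Vw                          # predict
        e = a[t] - H @ theta                # prediction error (the trading signal)
        Q = H @ P @ H + ve                  # prediction error variance
        K = P @ H / Q                       # Kalman gain
        theta = theta + K * e               # update state estimate
        P = P - np.outer(K, H) @ P          # update covariance
        betas[t], alphas[t], errors[t] = theta[0], theta[1], e
    return betas, alphas, errors
```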

 

 

Finally, the rather complex Copula methodology models the joint and marginal distributions of the returns process in each stock, as described in the following post

Portfolio Improvement for the Equity Investor


Equity investors and long-only portfolio managers are constantly on the lookout for ways to improve their portfolios, either by yield enhancement, or risk reduction.  In the case of yield enhancement, the principal focus is on adding alpha to the portfolio through stock selection and active management, while risk reduction tends to be accomplished through diversification.

Another approach is to seek improvement by adding investments outside the chosen universe of stocks, while remaining within the scope of the investment mandate (which, for instance, may include equity-related products, but not futures or options).  The advent of volatility products in the mid-2000’s offered new opportunities for risk reduction; but this benefit was typically achieved at the cost of several hundred basis points in yield.  Over the last decade, however, a significant evolution has taken place in volatility strategies, such that they can now not only provide insurance for the equity portfolio, but, in addition, serve as an orthogonal source of alpha to enhance portfolio yields.

An example of one such product is our volatility strategy, a quantitative approach to trading VIX-related ETF products traded on ARCA. A summary of the performance of the strategy is given below.

Vol Strategy perf Sept 2015

The mechanics of the strategy are unlikely to be of great interest to the typical equity investor and so need not detain us here.  Rather, I want to focus on how an investor can use such products to enhance their equity portfolio.

Performance of the Equity Market and Individual Sectors

The last five years have been extremely benign for the equity market: not only for the broad market, as evidenced by the performance of the SPDR S&P 500 Trust ETF (SPY), but also for almost every individual sector, with the notable exception of energy.

Sector ETF Performance 2012-2015

The risk-adjusted returns have been exceptional over this period, with information ratios reaching 1.4 or higher for several of the sectors, including Financials, Consumer Staples, Healthcare and Consumer Discretionary.  If the equity investor has been in a position to diversify his portfolio as fully as the SPY ETF, it might reasonably be assumed that he has accomplished the maximum possible level of risk reduction; at the same time, no-one is going to argue with a CAGR of 16.35%.  Yet, even here, portfolio improvement is possible.

Yield Enhancement

The key to improving the portfolio yield lies in the superior risk-adjusted performance of the volatility portfolio compared to the equity portfolio, and also in the fact that, while the correlation between the two is significant (at 0.44), it is considerably lower than 1.  Hence there is potential for generating higher rates of return on a risk-adjusted basis by combining the pair of portfolios in some proportion.


To illustrate this we assume, firstly, that the investor is comfortable with the current level of risk in his broadly diversified equity portfolio, as measured by the annual standard deviation of returns, currently 10.65%.   Holding this level of risk constant, we now introduce an overlay strategy, namely the volatility portfolio, to which we seek to allocate some proportion of the available investment capital.  With this constraint it turns out that we can achieve a substantial improvement in the overall yield by reducing our holding in the equity portfolio to just over 2/3 of the current level (67.2%) and allocating 32.8% of the capital to the volatility portfolio.  Over the period from 2012, the combined equity and volatility portfolio produced a CAGR of 26.83%, but with the same annual standard deviation – a yield enhancement of 10.48% annually.  The portfolio Information Ratio improves from 1.53 to 2.52, reflecting the much higher returns produced by the combined portfolio, for the same level of risk as before.

Chart
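The allocation arithmetic behind this sort of result is straightforward.  The sketch below solves for the overlay weight that leaves the combined annual volatility unchanged, given the two portfolio volatilities and their correlation.  The 10.65% equity volatility and 0.44 correlation are quoted above; the 20% overlay volatility in the example call is a placeholder, since the overlay’s volatility is not stated in the text.

```python
def constant_risk_overlay_weight(sigma_eq: float, sigma_ov: float, rho: float) -> float:
    """Weight w in the overlay (1 - w in the equity portfolio) such that the
    combined annual volatility equals sigma_eq.  Obtained by setting
    var(w) = sigma_eq**2 and solving the resulting quadratic; a non-zero
    solution in (0, 1) requires rho * sigma_ov < sigma_eq < sigma_ov."""
    num = 2.0 * (sigma_eq**2 - rho * sigma_eq * sigma_ov)
    den = sigma_eq**2 + sigma_ov**2 - 2.0 * rho * sigma_eq * sigma_ov
    return num / den

# Example with a placeholder overlay volatility of 20% annually
w = constant_risk_overlay_weight(sigma_eq=0.1065, sigma_ov=0.20, rho=0.44)
print(f"overlay weight: {w:.1%}, equity weight: {1 - w:.1%}")
```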

Risk Reduction

The given example may appear impressive, but it isn’t really a practical proposition.  Firstly, no equity investor or portfolio manager is likely to want to allocate 1/3 of their total capital to a strategy operated by a third party, no matter how impressive the returns. Secondly, the capacity in the volatility strategy is, realistically, of the order of $100 million.  A 32.8% allocation of capital from a sizeable equity portfolio would absorb a large proportion of the available capacity in the volatility ETF strategy, or even all of it.

A much more realistic approach would be to cap the allocation to the volatility component at a reasonable level – say, 5%.  Then the allocation from a $100M capital budget would be $5M, well within the capacity constraints of the volatility product.  In fact, operating at this capped allocation percentage, the volatility strategy provides capacity for equity portfolios of up to $2Bn in total capital.

Let’s look at an example of what can be achieved under a 5% allocation constraint.  In this scenario I am going to move along the second axis of portfolio improvement – risk reduction.  Here, we assume that we wish to maintain the current level of performance of the equity portfolio (CAGR 16.35%), while reducing the risk as much as possible.

A legitimate question at this stage would be to ask how it might be possible to reduce risk by introducing a new investment that has a higher annual standard deviation than the existing portfolio?  The answer is simply that we move some of our existing investment into cash (or, rather, Treasury securities).  In fact, by allocating the maximum allowed to the volatility portfolio (5%) and reducing our holding in the equity portfolio to 85.8% of the original level (with the remaining 9.2% in cash), we are able to create a portfolio with the same CAGR but with an annual volatility in single digits: 9.53%, a reduction in risk of  112 basis points annually.  At the same time, the risk adjusted performance of the portfolio improves from 1.53 to 1.71 over the period from 2012.

Of course, the level of portfolio improvement is highly dependent on the performance characteristics of both the equity portfolio and overlay strategy, as well as the correlation between them. To take a further example, if we consider an equity portfolio mirroring the characteristics of the Materials Select Sector SPDR ETF (XLB), we can achieve a reduction of as much as 3.31% in the annual standard deviation, without any loss in expected yield, through an allocation of 5% to the volatility overlay strategy and a much higher allocation of 18% to cash.

Other Considerations

Investors and money managers being what they are, it goes against the grain to consider allocating money to a third party – after all, a professional money manager earns his living from his own investment expertise, rather than relying on others.  Yet no investor can reasonably expect to achieve the same level of success in every field of investment.  If you have built your reputation on your abilities as a fundamental analyst and stock picker, it is unreasonable to expect that you will be able to accomplish as much in the arena of quantitative investment strategies.  Secondly, by capping the allocation to an external manager at the level of 5% to 10%, your primary investment approach remains unaltered –  you are maintaining the fidelity of your principal investment thesis and investment mandate.  Thirdly, there is no reason why overlay strategies such as the one discussed here should not provide easy liquidity terms – after all, the underlying investments are liquid, exchange traded products. Finally, if you allocate capital in the form of a managed account you can maintain control over the allocated capital and make adjustments rapidly, as your investment needs change.

Conclusion

Quantitative strategies have a useful role to play for equity investors and portfolio managers as a means to improve existing portfolios, whether by yield enhancement, risk reduction, or a combination of the two.  While the level of improvement is highly dependent on the performance characteristics of the equity portfolio and the overlay strategy, the indications are that yield enhancement, or risk reduction, of the order of hundreds of basis points may be achievable even through very modest allocations of capital.

Daytrading Volatility ETFs

As we have discussed before, there is no standard definition of high frequency trading.  For some, trading more than once or twice a day constitutes high frequency, while others regard anything less than several hundred times a session as low, or medium frequency trading.  Hence in this post I have referred to “daytrading” since we can at least agree on that description for a strategy that exits all positions by the close of the session.

HFT Trading in ETFs – Challenges and Opportunities

High frequency trading in equities and ETFs offer their own opportunities and challenges compared to futures. Amongst the opportunities we might list:

  • Arbitrage between destinations (exchanges, dark pools) where the stock is traded
  • Earning rebates from the exchanges willing to pay for order flow
  • Arbitraging news flows amongst pairs or baskets of equities

When it comes to ETFs, unfortunately, the set of possibilities is more restricted than for single names and one is often obliged to dig deeply into the basket/replication/cointegration type of approach, which can be very challenging in a high frequency context.  The risk of one leg of a multi-asset trade being left unfilled is such that one has to be willing to cross the spread to get the trade on.  Depending on the trading platform and the quality of the execution algorithms, this can make trading the strategy prohibitively expensive.

In that case you have a number of possibilities to consider.  You can simplify the trade, limit the number of stocks in the basket and hope that there is enough alpha left in the reduced strategy. You can focus on managing the trade execution sufficiently well that aggressive trading becomes necessary on relatively few occasions and you look to minimize the costs of paying the spread when they arise.  You can design strategies with higher profit factors that are able to withstand the performance drag entailed in trading aggressively.  Or you can design slower versions of the strategy where latency, fill rates and execution costs are not such critical factors.


Developing high frequency strategies in the volatility ETFs presents special challenges.  Being fairly new, the products have limited histories, which makes modeling more of a challenge.  One way to address this is to create synthetic series priced from the VIX futures, using the published methodology for constructing the ETFs.  Be warned, though, that these synthetic series are likely to inflate your backtest results since they aren’t traded instruments.

Another practical problem that crops up regularly in products like UVXY and VXX is that the broker has difficulty locating stock for short selling.  So you are limited to taking the strategy offline when that occurs, designing strategies that trade long only, or as we do, switching to other products when the ETF is unavailable to short.

Then there is the capacity issue. Despite their fast-growing popularity, volatility ETF funds are in many cases quite small, totaling perhaps a few hundred millions of dollars in AUM. You are never going to be able to construct a strategy capable of absorbing billions of dollars of investment in the ETF products alone.

Volatility and Alpha

For these reasons, volatility ETFs are not a natural choice for many investment strategists.  But they do have one great advantage compared to other products:  volatility.  Volatility implies uncertainty about the true value of a security, which means that market participants can have very different views about what it is worth at any moment in time.  So the prospects for achieving competitive advantage through superior analytical methods is much greater than for a stock that hardly moves at all and on whose value everyone concurs.  Furthermore, volatility creates regular opportunities for hitting stops, and creating mini crashes or short squeezes, in which the security is temporarily under- or over-valued.  If ever there was a security offering the potential for generating alpha, it is the volatility ETF.

The volatility of the VIX ETFs is enormous, by the standards of regular stocks.  A typical stock might have an annual volatility of 30% to 60%.  The lowest level ever seen in the VVIX index series so far is 70%. To give you an idea of how extreme it can become, during the latest market swoon in August the VVIX, the volatility-of-volatility for the S&P500 index, reached over 200% a year.

A Daytrading Strategy in the VXX

So, despite the challenges and difficulties, there are very good reasons to make the attempt to develop strategies for the volatility ETF products.  My firm, Systematic Strategies, has developed several such algorithms that are combined to create a strategy that trades the volatility ETFs very successfully.  Until recently, however,  all of the sub-strategies we employ were longer term in nature, and entailed holding positions overnight.  We wanted to develop higher frequency algorithms that could react more quickly to changes in the volatility landscape.  We had to dig pretty deep into the arsenal of trading ideas to get there, but eventually we succeeded.  After six months of live trading we were ready to release the new VXX daytrading algorithm into production for our volatility ETF strategy investors.  Here’s how it looks (results are for a $100,000 account).

Fig 1 Fig 2 Fig 3

As you can see, the strategy trades up to around 10 times a day with a reasonable profit factor (1.53) and win rate of just under 60%. By itself, the strategy has a Sharpe Ratio of around 6, so it is well worth trading on its own.  But its real value (for us) emerges when it is combined in appropriate proportion with the other, lower frequency algorithms in the volatility strategy.  The additional alpha from the VXX strategy reduces the size of the loss in August and produces a substantial gain in September, taking the YTD return to just under 50%.  Returns for Oct MTD are already at 16%.

Vol Strategy perf Sept 2015

 

 

Improving Trading System Performance Using a Meta-Strategy

What is a Meta-Strategy?

In my previous post on identifying drivers of strategy performance I mentioned the possibility of developing a meta-strategy.

A meta-strategy is a trading system that trades trading systems.  The idea is to develop a strategy that will make sensible decisions about when to trade a specific system, in a way that yields superior performance compared to simply following the underlying trading system.  Put another way, the simplest kind of meta-strategy is a long-only strategy that takes positions in some underlying trading system.  At times, it will follow the underlying system exactly; at other times it is out of the market and ignores the trading system’s recommendations.

More generally, a meta-strategy can determine the size in which one, or several, systems should be traded at any point in time, including periods where the size can be zero (i.e. the system is not currently traded).  Typically, a meta-strategy is long-only:  in theory there is nothing to stop you developing a meta-strategy that shorts your underlying strategy from time to time, but that is a little counter-intuitive to say the least!

A meta-strategy is something that could be very useful for a fund-of-funds, as a way of deciding how to allocate capital amongst managers.

Caissa Capital operated a meta-strategy in its option arbitrage hedge fund back in the early 2000s.  The meta-strategy (we called it a “model management system”) selected from a half dozen different volatility models to be used for option pricing, depending on their performance, as measured by around 30 different criteria.  The criteria included both statistical metrics, such as the mean absolute percentage error in the forward volatility forecasts, as well as trading performance criteria such as the moving average of the trade PNL.  The model management system probably added 100 – 200 basis points per annum to the performance of the underlying strategy, so it was a valuable add-on.

Illustration of a Meta-Strategy in US Bond Futures

To illustrate the concept we will use an underlying system that trades US Bond futures at 15-minute bar intervals.  The performance of the system is summarized in the chart and table below.

Fig1A

 

FIG2A

 

Strategy performance has been very consistent over the last seven years, in terms of the annual returns, number of trades and % win rate.  Can it be improved further?

To assess this possibility we create a new data series comprising the points of the equity curve illustrated above.  More specifically, we form a series comprising the open, high, low and close values of the strategy equity, for each trade.  We will proceed to treat this as a new data series and apply a range of different modeling techniques to see if we can develop a trading strategy, in exactly the same way as we would if the underlying was a price series for a stock.

It is important to note here that, for the meta-strategy at least, we are working in trade-time, not calendar time. The x-axis will measure the trade number of the underlying strategy, rather than the date of entry (or exit) of the underlying trade.  Thus equally spaced points on the x-axis represent different lengths of calendar time, depending on the duration of each trade.

It is necessary to work in trade time rather than calendar time because, unlike a stock, it isn’t possible to trade the underlying strategy whenever we want to – we can only enter or exit the strategy at points in time when it is about to take a trade, by accepting that trade or passing on it (we ignore the other possibility which is sizing the underlying trade, for now).


Another question is what kinds of trading ideas do we want to consider for the meta-strategy?  In principle one could incorporate almost any trading concept, including the usual range of technical indicators such as RSI, or Bollinger bands.  One can go further and use machine learning techniques, including Neural Networks, Random Forest, or SVM.

In practice, one tends to gravitate towards the simpler kinds of trading algorithm, such as moving averages (or MA crossover techniques), although there is nothing to say that more complex trading rules should not be considered.  The development process follows a familiar path:  you create a hypothesis, for example, that the equity curve of the underlying bond futures strategy tends to be mean-reverting, and then proceed to test it using various signals – perhaps a moving average, in this case.  If the signal results in a potential improvement in the performance of the default meta-strategy (which is to take every trade in the underlying system), one includes it in the library of signals that may ultimately be combined to create the finished meta-strategy.
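As a concrete example of the kind of signal one might test, the sketch below applies a moving-average filter to the trade-by-trade equity curve, working in trade time as discussed above.  The trend_following flag switches between taking trades when the curve is above its average and, to express the mean-reversion hypothesis, when it is below; the window length is purely illustrative.

```python
import pandas as pd

def ma_equity_filter(trade_pnl: pd.Series, window: int = 10,
                     trend_following: bool = True) -> pd.Series:
    """Meta-strategy filter in trade time: take the next underlying trade only
    when the closed-trade equity curve is above (trend_following=True) or
    below (False) its moving average; otherwise record zero for that trade."""
    equity = trade_pnl.cumsum()                       # closed-trade equity curve
    ma = equity.rolling(window).mean()
    above = (equity > ma).shift(1, fill_value=True)   # decide before each trade
    take = above if trend_following else ~above
    return trade_pnl.where(take, 0.0)
```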

As with any strategy development you should follow the usual procedure of separating the trade data to create a set used for in-sample modeling and out-of-sample performance testing.

Following this general procedure I arrived at the following meta-strategy for the bond futures trading system.

FigB1

FigB2

The modeling procedure for the meta-strategy has succeeded in eliminating all of the losing trades in the underlying bond futures system, during both in-sample and out-of-sample periods (comprising the most recent 20% of trades).

In general, it is unlikely that one can hope to improve the performance of the underlying strategy quite as much as this, of course.  But it may well be possible to eliminate a sufficient proportion of losing trades to reduce the equity curve drawdown and/or increase the overall Sharpe ratio by a significant amount.

A Challenge / Opportunity

If you like the meta-strategy concept, but are unsure how to proceed, I may be able to help.

Send me the data for your existing strategy (see details below) and I will attempt to model a meta-strategy and send you the results.  We can together evaluate to what extent I have been successful in improving the performance of the underlying strategy.

Here are the details of what you need to do:

1. You must have an existing, profitable strategy, with sufficient performance history (either real, simulated, or a mixture of the two).  I don’t need to know the details of the underlying strategy, or even what it is trading, although it would be helpful to have that information.

2. You must send  the complete history of the equity curve of the underlying strategy,  in Excel format, with column headings Date, Open, High, Low, Close.  Each row represents consecutive trades of the underlying system and the O/H/L/C refers to the value of the equity curve for each trade.

3.  The history must comprise at least 500 trades as an absolute minimum and preferably 1000 trades, or more.

4. At this stage I can only consider a single underlying strategy (i.e. a single equity curve)

5.  You should not include any software or algorithms of any kind.  Nothing proprietary, in other words.

6.  I will give preference to strategies that have a (partial) live track record.

As my time is very limited these days I will not be able to deal with any submissions that fail to meet these specifications, or to enter into general discussions about the trading strategy with you.

You can reach me at jkinlay@systematic-strategies.com

 

Identifying Drivers of Trading Strategy Performance

Building a winning strategy, like the one in the e-Mini S&P500 futures described here, is only half the challenge:  it remains for the strategy architect to gain an understanding of the sources of strategy alpha, and risk.  This means identifying the factors that drive strategy performance and, ideally, building a model so that their relative importance can be evaluated.  A more advanced step is the construction of a meta-model that will predict strategy performance and provide recommendations as to whether the strategy should be traded over the upcoming period.

Strategy Performance – Case Study

Let’s take a look at how this works in practice.  Our case study makes use of the following daytrading strategy in e-Mini futures.

Fig1

The overall performance of the strategy is quite good.  Average monthly PNL over the period from April to Oct 2015 is almost $8,000 per contract, after fees, with a standard deviation of only $5,500. That equates to an annual Sharpe Ratio in the region of 5.0.  On a decent execution platform the strategy should scale to around 10-15 contracts, with an annual PNL of around $1.0 to $1.5 million.

Looking into the performance more closely we find that the win rate (56%) and profit factor (1.43) are typical for a profitable strategy of medium frequency, trading around 20 times per session (in this case from 9:30AM to 4PM EST).

fig2

Another attractive feature of the strategy risk profile is the Maximum Adverse Excursion (MAE), the drawdown experienced in individual trades (rather than the realized drawdown). In the chart below we see that the MAE increases steadily, without major outliers, to a maximum of only around $1,000 per contract.

Fig3

One concern is that the average trade PL is rather small – $20, just over 1.5 ticks. Strategies that enter and exit with limit orders and have small average trade are generally highly dependent on the fill rate – i.e. the proportion of limit orders that are filled.  If the fill rate is too low, the strategy will be left with too many missed trades on entry or exit, or both.  This is likely to damage strategy performance, perhaps to a significant degree – see, for example my post on High Frequency Trading Strategies.

The fill rate is dependent on the number of limit orders posted at the extreme high or low of the bar, known as the extreme hit rate.  In this case the strategy has been designed specifically to operate at an extreme hit rate of only around 10%, which means that, on average, only around one trade in ten occurs at the high or low of the bar.  Consequently, the strategy is not highly fill-rate dependent and should execute satisfactorily even on a retail platform like Tradestation or Interactive Brokers.

Drivers of Strategy Performance

So far so good.  But before we put the strategy into production, let’s try to understand some of the key factors that determine its performance.  Hopefully that way we will be better placed to judge how profitable the strategy is likely to be as market conditions evolve.

In fact, we have already identified one potential key performance driver: the extreme hit rate (required fill rate) and determined that it is not a major concern in this case. However, in cases where the extreme hit rate rises to perhaps 20%, or more, the fill ratio is likely to become a major factor in determining the success of the strategy.  It would be highly inadvisable to attempt implementation of such a strategy on a retail platform.


What other factors might affect strategy performance?  The correct approach here is to apply the scientific method:  develop some theories about the drivers of performance and see if we can find evidence to support them.

For this case study we might conjecture that, since the strategy enters and exits using limit orders, it should exhibit characteristics of a mean reversion strategy, which will tend to do better when the market moves sideways and rather worse in a strongly trending market.

Another hypothesis is that, in common with most day-trading and high frequency strategies, this strategy will produce better results during periods of higher market volatility.  Empirically, HFT firms have always produced higher profits during volatile market conditions  – 2008 was a banner year for many of them, for example.  In broad terms, times when the market is whipsawing around create additional opportunities for strategies that seek to exploit temporary mis-pricings.  We shall attempt to qualify this general understanding shortly.  For now let’s try to gather some evidence that might support the hypotheses we have formulated.

I am going to take a very simple approach to this, using linear regression analysis.  It’s possible to do much more sophisticated analysis using nonlinear methods, including machine learning techniques. In our regression model the dependent variable will be the daily strategy returns.  In the first iteration, let’s use measures of market returns, trading volume and market volatility as the independent variables.
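The regression itself takes only a few lines with statsmodels.  The column names below are illustrative assumptions about how the daily data is laid out; setting lag=1 produces the second regression, on the prior day’s factor values, discussed further below.

```python
import statsmodels.api as sm

def performance_regression(df, lag: int = 0):
    """OLS of daily strategy returns on market return, volume and volatility.
    Expects columns 'strategy_ret', 'mkt_ret', 'volume', 'volatility';
    set lag=1 to regress on the prior day's factor values."""
    factors = df[["mkt_ret", "volume", "volatility"]].shift(lag)
    data = factors.join(df["strategy_ret"]).dropna()
    X = sm.add_constant(data[["mkt_ret", "volume", "volatility"]])
    model = sm.OLS(data["strategy_ret"], X).fit()
    print(model.summary())      # coefficients, t-stats, adjusted R-squared
    return model
```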

Fig4

The first surprise is the size of the (adjusted) R Square – at 28%, this far exceeds the typical 5% to 10% level achieved in most such regression models, when applied to trading systems.  In other words, this model does a very good job of accounting for a large proportion of the variation in strategy returns.

Note that the returns in the underlying S&P500 index play no part (the coefficient is not statistically significant). We might expect this: ours is a trading strategy that is not specifically designed to be directional and has approximately equivalent performance characteristics on both the long and short side, as you can see from the performance report.

Now for the next surprise: the sign of the volatility coefficient. Our ex-ante hypothesis was that the strategy would benefit from higher levels of market volatility. In fact, the reverse appears to be true (hence the negative coefficient). How can this be? On further reflection, the reason why most HFT strategies tend to benefit from higher market volatility is that they are momentum strategies. A momentum strategy typically enters and exits using market orders and hence requires a major market move to overcome the drag of the bid-offer spread (assuming it calls the market direction correctly!). This strategy, by contrast, is a mean-reversion strategy, since entries and exits are effected using limit orders. The strategy wants the S&P500 index to revert to the mean – a large move that continues in the same direction is going to hurt, not help, this strategy.

Note, by contrast, that the coefficient for the volume factor is positive and statistically significant.  Again this makes sense:  as anyone who has traded the e-mini futures overnight can tell you, the market tends to make major moves when volume is light – simply because it is easier to push around.  Conversely, during a heavy trading day there is likely to be significant opposition to a move in any direction.  In other words, the market is more likely to trade sideways on days when trading volume is high, and this is beneficial for our strategy.

The final surprise, and perhaps the greatest of all, is that the strategy alpha appears to be negative (and statistically significant)! How can this be? What the regression analysis appears to be telling us is that the strategy’s performance is largely determined by two underlying factors: volume and volatility.

Let’s dig into this a little more deeply with another regression, this time relating the current day’s strategy return to the prior day’s volume, volatility and market return.
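Continuing the earlier sketch (same hypothetical DataFrame and column names), the lagged regressors can be constructed with a simple one-day shift:

```python
# Regress today's strategy return on yesterday's market return, volume and volatility
lagged = pd.concat(
    {
        "strat_ret": df["strat_ret"],
        "mkt_ret_lag1": df["mkt_ret"].shift(1),
        "volume_lag1": df["volume"].shift(1),
        "volatility_lag1": df["volatility"].shift(1),
    },
    axis=1,
).dropna()

X_lag = sm.add_constant(lagged[["mkt_ret_lag1", "volume_lag1", "volatility_lag1"]])
print(sm.OLS(lagged["strat_ret"], X_lag).fit().summary())
```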

Fig5

In this regression model the strategy alpha is effectively zero and statistically insignificant, as is the coefficient for lagged volume. The strategy returns relate inversely to the prior day’s market return, which again makes sense for a mean reversion strategy: our model anticipates that, on average, the market will reverse the prior day’s gain or loss. The coefficient for the lagged volatility factor is once again negative and statistically significant. This, too, makes sense: volatility tends to be highly autocorrelated, so if the strategy performance is dependent on market volatility during the current session, it is likely to show dependency on volatility in the prior day’s session also.
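If you have a daily volatility series to hand, the autocorrelation claim is easy to check – again using the hypothetical DataFrame from the sketches above:

```python
# First-order autocorrelation of the daily volatility series;
# a value well above zero indicates strong persistence.
print(df["volatility"].autocorr(lag=1))
```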

So, in summary, we can provisionally conclude that:

This strategy has no market-directional predictive power: rather, it is a pure mean-reversion strategy that looks to make money by betting on a reversal of the prior session’s market direction. It will do better during periods when trading volume is high and market volatility is low.

Conclusion

Now that we have some understanding of where the strategy performance comes from, where do we go from here?  The next steps might include some, or all, of the following:

(i) A more sophisticated econometric model bringing in additional lags of the explanatory variables and allowing for interaction effects between them.

(ii) Introducing additional exogenous variables that may have predictive power. Depending on the nature of the strategy, likely candidates might include related equity indices and futures contracts.

(iii) Constructing a predictive model and meta-strategy that would enable us to assess the likely future performance of the strategy, and which could then be used to determine position size.  Machine learning techniques can often be helpful in this context.

I will give an example of the latter approach in my next post.

Signal Processing and Sample Frequency

The Importance of Sample Frequency

Too often we apply a default time horizon for our trading, whether it be low (daily, weekly) or high (hourly, 5-minute) frequency.  Sometimes the choice is dictated by practical considerations, such as a desire to avoid overnight risk, or the (lack of) availability of a low-latency execution platform.

But there is an alternative approach to the trade frequency decision that often yields superior results in terms of trading performance.  The methodology derives from signal processing: the idea, essentially, is to use Fourier transforms to help identify the cyclical behavior of the strategy alpha and hence determine the best time-frames for sampling and trading.  I wrote about this in a previous blog post, in which I described how to use principal components analysis to investigate the factors driving the returns in various pairs trading strategies.  Here I want to take a simpler approach, in which we use Fourier analysis to select suitable sample frequencies.  The idea is simply to select sample frequencies where the signal strength appears strongest, in the hope that this will lead to superior performance characteristics in whatever strategy we are trying to develop.
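To give a flavor of the approach – using a standard periodogram as a stand-in for the exact spectral analysis performed here, and with hypothetical file and column names – a minimal sketch in Python might be:

```python
import numpy as np
import pandas as pd
from scipy.signal import periodogram

# Hypothetical 1-minute closing prices, regular session only
# (file name and column name are assumptions).
px = pd.read_csv("es_1min.csv", index_col=0, parse_dates=True)["Close"]
rets = np.log(px).diff().dropna().values

# Power spectrum of the 1-minute return series. With fs = 1 sample per minute,
# 1/frequency gives the cycle length in minutes.
freqs, power = periodogram(rets, fs=1.0)
periods_mins = 1.0 / freqs[1:]   # drop the zero-frequency (DC) term

# List the cycle lengths carrying the most power
top = np.argsort(power[1:])[::-1][:10]
for p, pw in zip(periods_mins[top], power[1:][top]):
    print(f"cycle ~ {p:7.1f} min   power = {pw:.3e}")
```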

Signal Decomposition for S&P500 eMini Futures

Let’s take as an example the S&P 500 emini futures contract. The chart below shows the continuous ES futures contract plotted at 1-minute intervals from 1998. At the bottom of the chart I have represented the signal analysis as a bar chart (in blue), with each bar representing the amplitude at each frequency. The white dots on the chart identify frequencies that are spaced 10 minutes apart.  It is immediately evident that local maxima in the spectrum occur around 40 mins, 60 mins and 120 mins.  So a starting point for our strategy research might be to look at emini data sampled at these frequencies.  Incidentally, it is worth pointing out that I have restricted the session times to 7AM – 4PM EST, which is where the bulk of the daily volume and liquidity tend to occur.  You may get different results if you include data from the Globex session.

Emini Signal

This is all very intuitive and unsurprising: the clearest signals occur at the frequencies most traders typically trade – hourly data, for example. Any strategy developer is already quite likely to consider these and other common frequencies as part of their regular research process.  There are many instances of successful trading strategies built on emini data sampled at 60 minute intervals.


Signal Decomposition for US Bond Futures

Let’s look at a rather more interesting example: US (30-year) Bond futures. Unlike the emini contract, the spectral analysis of the US futures contract indicates that the strongest signal by far occurs at a frequency of around 47 minutes.  This is a decidedly unintuitive outcome – I can’t think of any reason why such a strong signal should appear at this cycle length – but, statistically, it does.

US Bond futures
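For anyone who wants to experiment along these lines, a minimal sketch of building 47-minute bars from 1-minute data in pandas (file name, column names and session hours are assumptions, and session-boundary alignment is ignored) might look like this:

```python
import pandas as pd

# Hypothetical 1-minute US (30-year) bond futures data
bars_1m = pd.read_csv("us_bond_1min.csv", index_col=0, parse_dates=True)

# Keep regular session hours only (assumed here to be 7AM - 4PM EST),
# then aggregate into 47-minute bars
session = bars_1m.between_time("07:00", "16:00")
bars_47m = (
    session.resample("47min")
    .agg({"Open": "first", "High": "max", "Low": "min",
          "Close": "last", "Volume": "sum"})
    .dropna()
)
print(bars_47m.head())
```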

Does it work?  Readers can judge for themselves: below is an example of an equity curve for a strategy on US futures sampled at a 47-minute frequency over the period from 2002.  The strategy has performed very consistently, producing around $25,000 per contract per year, after commissions and slippage.

US futures EC

Conclusion

While I have had similar success with products as diverse as Corn and VIX futures, the frequency domain approach is by no means a panacea:  there are plenty of examples where I have been unable to construct profitable strategies for data sampled at the frequencies with very strong signals. Conversely, I have developed successful strategies using data at frequencies that hardly registered at all on the spectrum, but which I selected for other reasons.  Nonetheless, spectral analysis (and signal processing in general) can be recommended as a useful tool in the arsenal of any quantitative analyst.