My Big Fat Greek Vacation

LEARNING TO TRUST A TRADING SYSTEM

One of the most difficult decisions to make when running a systematic trading program is knowing when to override the system.  During the early 2000s, when I was running the Caissa Capital fund, the models would regularly make predictions on volatility that I and our head trader, Ron Henley, a former option trader from the AMEX, disagreed with.  Most times, the system proved to have made the correct decision. My take-away from that experience was that, as human beings, even as traders, we are not very good at pricing risk.

My second take-away was that, by and large, you are better off trusting the system rather than second-guessing its every decision.  Of course, markets can change and systems break down; but the right approach to assessing this possibility is to use statistical control procedures to determine formally whether the system has broken down, rather than abandoning it during a routine period of under-performance (see: Is Your Strategy Still Working?).

GREEK LESSONS

So when the Greek crisis blew up in June, my first instinct was not to start looking immediately for the escape hatch.  However, as time wore on I became increasingly concerned that the risk of a Grexit or default had not abated.  Moreover, I realized that there was nothing in the data used in the development of the trading models that was in any way comparable to the scenario facing Greece, the EU and, by a process of contagion, US markets.  Very reluctantly, therefore, I came to the decision that the smart way to play the crisis was from the sidelines.  So we made the decision to go 100% to cash and waited for the crisis to subside.

A week went by. Then another.  Of course, I had written to our investors explaining what we intended to do, and why, so there were no surprises.  Nonetheless, I felt uncomfortable not making money for them.  I did my best to console myself with the principal rule of money management: first, do not lose money.  Of course we didn’t – but neither did we make much money, and ended June more or less flat.

COMEBACK

After the worst of the crisis was behind us, I was relieved to see that the models appeared almost as anxious as I was to make up for lost time.  One of the features of the system is that it makes aggressive use of leverage. Rather like an expert poker player, when it judges the odds to be in its favor, the system will increase its bet size considerably; at other times it will hunker down, play conservatively, or even exit altogether.  Consequently, the turnover in the portfolio can be large at times.  The cost of trading high volume can be substantial, especially in some of the less liquid ETF products, where the bid/ask spread can amount to several cents.  So we typically aim to execute passively, looking to buy on the bid and sell on the offer, using execution algos to split our orders up and randomize them. That also makes it tougher for HFT algos to pick us off as we move into and out of our positions.

So, in July, our Greek “vacation” at an end, the system came roaring back, all guns blazing. It quickly moved into some aggressive short volatility positions to take advantage of the elevated levels in the VIX, before reversing and going long as the index collapsed to the bottom of the monthly range.

A DOUBLE-DIGIT MONTHLY RETURN: +21.28%

The results were rather spectacular:  a return of +21.28% for the month, bringing the total return to 38.25% for 2015 YTD.

In the current low rate environment, this rate of return is extraordinary, but not entirely unprecedented: the strategy has produced double-digit monthly returns several times in the past, most recently in August last year, which saw a return of +14.1%.  Prior to that, the record had been +8.90% in April 2013.

Such outsized returns come at a price – they have the effect of increasing strategy volatility and hence reducing the Sharpe Ratio.   Of course, investors worry far less about upside volatility than downside volatility (or semi-variance), which is why the Sortino Ratio is in some ways a more appropriate measure of risk-adjusted performance, especially for strategies like ours, which have very large kurtosis.

Since inception the compound annual growth rate (CAGR) of the strategy has been 45.60%, while the Sharpe Ratio has maintained a level of around 3 over the same period.

Most of the drawdowns we have seen in the strategy have been in single digits, both in back-test and in live trading.  The only exception was in 2013, where we experienced a very short-term decline of -13.40%, from which the strategy recovered within a couple of days.

In the great majority of cases, drawdowns in VIX-related strategies result from bad end-of-day “marks” in the VIX index.  These can arise for legitimate reasons, but are often caused by traders manipulating the index, especially around option expiration. Because of the methodology used to compute the VIX, it is very easy to move the index by 5bp to 10bp, or more, by quoting prices for deep OTM put options as expiration nears.  This can be critically important to holders of large VIX option positions and hence the temptation to engage in such manipulation may be irresistible.

For us, such market machinations are simply an annoyance, a cost of doing business in the VIX.  Sure, they inflate drawdowns and strategy volatility, but there is not much we can do about them, other than to wait patiently for bad “marks” to be corrected the following day, which they almost always are.

Looking ahead over the remainder of the year, we are optimistic about the strategy’s opportunities to make money in August, but, like many traders, we are apprehensive about the consequences if the Fed should decide to take action to raise rates in September.  We are likely to want to trade in smaller size through the ensuing volatility, since either a long- or short-vol position carries considerable risk in such a situation.  As and when a rate rise does occur, we anticipate a market correction of perhaps 20% or more, accompanied by a surge in market volatility.  We are likely to see the VIX index reach the 20’s or 30’s, before it subsides.  However, under this scenario, opportunities to make money on the short side will likely prove highly attractive going into the final quarter of the year.  We remain hopeful of achieving a total return in the region of 40% to 50%, or more, in 2015.

STRATEGY PERFORMANCE REPORT Jan 2012 – Jul 2015

Monthly Returns

The Case for Volatility as an Asset Class

Volatility as an asset class has grown up over the fifteen years since I started my first volatility arbitrage fund in 2000.  Caissa Capital grew to about $400m in assets before I moved on, while several of its rivals have gone on to manage assets in the multiple billions of dollars.  Back then volatility was seen as a niche, esoteric asset class and quite rightly so.  Nonetheless, investors who braved the unknown and stayed the course have been well rewarded: in recent years volatility strategies as an asset class have handily outperformed the indices for global macro, equity market neutral and diversified funds of funds, for example. Fig 1

The Fundamentals of Volatility

It’s worth rehearsing a few of the fundamental features of volatility for those unfamiliar with the territory.

Volatility is Unobservable

Volatility is the ultimate derivative, one whose fair price can never be known, even after the event, since it is intrinsically unobservable.  You can estimate what the volatility of an asset has been over some historical period using, for example, the standard deviation of returns.  But this is only an estimate, one of several possibilities, all of which have shortcomings.  We now know that volatility can be measured with almost arbitrary precision using an integrated volatility estimator (essentially a metric based on high frequency data), but that does not change the essential fact:  our knowledge of volatility is always subject to uncertainty, unlike a stock price, for example.
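The distinction can be illustrated with a short simulation. The sketch below (using synthetic one-minute returns rather than real market data; all parameter values are illustrative assumptions) computes a realized-volatility estimate of the kind referred to above and compares it with the volatility that actually generated the data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate one trading day of 1-minute log returns with a known daily volatility
true_daily_vol = 0.01
n = 390  # minutes in a US equity session
returns = rng.normal(0.0, true_daily_vol / np.sqrt(n), size=n)

# Realized variance: the sum of squared high-frequency returns.  As the
# sampling frequency increases, this converges to the integrated variance.
realized_var = np.sum(returns ** 2)
realized_vol = np.sqrt(realized_var)

# Annualize (252 trading days) for comparison with quoted volatilities
annualized_vol = realized_vol * np.sqrt(252)
```

With 390 observations the estimate lands close to the 1% daily figure used to generate the data; with real returns, of course, the “true” value is never observable, which is exactly the point.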

Volatility Trends

Huge effort is expended in identifying trends in commodity markets and many billions of dollars are invested in trend following CTA strategies (and, equivalently, momentum strategies in equities).  Trend following undoubtedly works, according to academic research, but is also subject to prolonged drawdowns during periods when a trend moderates or reverses. By contrast, volatility always trends.  You can see this from the charts below, which express the relationship between volatility in the S&P 500 index in consecutive months.  The r-square of the regression relationship is one of the largest to be found in economics. Fig 2 And this is a feature of volatility not just in one asset class, such as equities, nor even for all classes of financial assets, but in every time series process for which data exists, including weather and other natural phenomena.  So an investment strategy that seeks to exploit volatility trends is relying upon one of the most consistent features of any asset process we know of (more on this topic in Long Memory and Regime Shifts in Asset Volatility).
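The consecutive-month regression described above can be sketched as follows, substituting a simulated persistent volatility process for the actual S&P 500 series (the AR(1) parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a persistent log-volatility process: a stylized stand-in
# for a monthly series of S&P 500 volatility estimates
n_months = 600
phi, sigma_eta = 0.8, 0.25
log_vol = np.zeros(n_months)
for t in range(1, n_months):
    log_vol[t] = phi * log_vol[t - 1] + sigma_eta * rng.normal()
vol = 0.15 * np.exp(log_vol)  # monthly volatility centered near 15%

# Regress this month's volatility on last month's: slope and r-square
x, y = vol[:-1], vol[1:]
slope, intercept = np.polyfit(x, y, 1)
r_squared = np.corrcoef(x, y)[0, 1] ** 2
```

The persistence parameter phi drives both the slope and the r-square; for actual equity index volatility the fit is, as noted above, unusually strong.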

Volatility Mean-Reversion and Correlation

One of the central assumptions behind the ever-popular stat-arb strategies is that the basis between two or more correlated processes is stationary. Consequently, any departure from the long term relationship between such assets will eventually revert to the mean. Mean reversion is also an observed phenomenon in volatility processes.  In fact, the speed of mean reversion (as estimated in, say, an Ornstein-Uhlenbeck framework) is typically an order of magnitude larger than for a typical stock-pairs process.  Furthermore, the correlation between one volatility process and another volatility process, or indeed between a volatility process and an asset returns process, tends to rise when markets are stressed (i.e. when volatility increases). Fig 3
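The speed of mean reversion can be estimated from discrete observations via an AR(1) regression. A minimal sketch, using a simulated Ornstein-Uhlenbeck process in place of a real volatility series (all parameter values here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate an OU process dX = kappa*(theta - X)*dt + sigma*dW at daily steps
kappa_true, theta, sigma, dt = 12.0, 0.20, 0.10, 1.0 / 252.0
n = 5000
x = np.empty(n)
x[0] = theta
for t in range(1, n):
    dx = kappa_true * (theta - x[t - 1]) * dt + sigma * np.sqrt(dt) * rng.normal()
    x[t] = x[t - 1] + dx

# The discretized process satisfies X_t = a + b*X_{t-1} + eps with
# b = exp(-kappa*dt), so kappa = -ln(b)/dt and the half-life is ln(2)/kappa
b, a = np.polyfit(x[:-1], x[1:], 1)
kappa_hat = -np.log(b) / dt
half_life_days = np.log(2) / kappa_hat * 252
```

A kappa of around 12 per year corresponds to a half-life of roughly two to three weeks; a typical stock-pairs spread reverts an order of magnitude more slowly.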

Another interesting feature of volatility correlations is that they are often lower than for the corresponding asset returns processes.  One can therefore build a diversified volatility portfolio with far fewer assets than are required for, say, a basket of equities (see Modeling Asset Volatility for more on this topic).

Fig 4   Finally, more sophisticated stat-arb strategies tend to rely on cointegration rather than correlation, because cointegrated series are often driven by some common fundamental factors, rather than purely statistical ones, which may prove temporary (see Developing Statistical Arbitrage Strategies Using Cointegration for more details).  Again, cointegrated relationships tend to be commonplace in the universe of volatility processes and are typically more reliable over the long term than those found in asset return processes.

Volatility Term Structure

One of the most marked characteristics of the typical asset volatility process is its upward-sloping term structure.  An example of the typical term structure of futures on the VIX, the S&P 500 volatility index (as at the end of May, 2015), is shown in the chart below. A steeply upward-sloping curve characterizes the term structure of equity volatility around 75% of the time.
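As a rough illustration of the yield implicit in such a curve, consider the roll-down on a short position in a deferred VIX future; the prices below are hypothetical, not the actual May 2015 quotes:

```python
# Hypothetical VIX futures curve, in volatility points
f1, f2 = 15.5, 17.0   # front-month and second-month futures prices

# If the curve were static, a short second-month position would "roll down"
# toward the front-month price as expiry approaches: an indicative monthly carry
roll_down = (f2 - f1) / f1
annualized_carry = (1 + roll_down) ** 12 - 1  # naive compounding, gross of risk
```

The carry is anything but a free lunch, of course: it is compensation for bearing the risk of a volatility spike.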

Fig 5   Fixed income investors can only dream of such yield in the current ZIRP environment, while f/x traders would have to plunge into the riskiest of currencies to achieve anything comparable in terms of yield differential and hope to be able to mitigate some of the devaluation risk by diversification.

The Volatility of Volatility

One feature of volatility processes that has been somewhat overlooked is the consistency of the volatility of volatility.  Only on one occasion since 2007 has the VVIX index, which measures the annual volatility of the VIX index, ever fallen below 60.

Fig 6   What this means is that, in trading volatility, you are trading an asset whose annual volatility has hardly ever fallen below 60% and which has often exceeded 100% per year.  Trading opportunities tend to abound when volatility is consistently elevated, as here (and, conversely, the performance of many hedge fund strategies tends to suffer during periods of sustained, low volatility).


Anything You Can Do, I Can Do Better

The take-away from all this should be fairly obvious:  almost any strategy you care to name has an equivalent in the volatility space, whether it be volatility long/short, relative value, stat-arb, trend following or carry trading. What is more, because of the inherent characteristics of volatility, all these strategies tend to produce higher levels of performance than their more traditional counterparts. Take as an example our own Volatility ETF strategy, which has produced consistent annual returns of between 30% and 40%, with a Sharpe ratio in excess of 3, since 2012.

Monthly Returns

Where does the Alpha Come From?

It is traditional at this stage for managers to point the finger at hedgers as the source of abnormal returns and indeed I will do the same now.   Equity portfolio managers are hardly ignorant of the cost of using options and volatility derivatives to hedge their portfolios; but neither are they likely to be leading experts in the pricing of such derivatives.  And, after all, in a year in which they might be showing a 20% to 30% return, saving a few basis points on the hedge is neither here nor there, compared to the benefits of locking in the performance gains (and fees!). The same applies even when the purpose of using such derivatives is primarily to produce trading returns. Maple Leaf’s George Castrounis puts it this way:

Significant supply/demand imbalances continuously appear in derivative markets. The principal users of options (i.e. pension funds, corporates, mutual funds, insurance companies, retail and hedge funds) trade these instruments to express a view on the direction of the underlying asset rather than to express a view on the volatility of that asset, thus making non-economic volatility decisions. Their decision process may be driven by factors that have nothing to do with volatility levels, such as tax treatment, lockup, voting rights, or cross ownership. This creates opportunities for strategies that trade volatility.

We might also point to another source of potential alpha:  the uncertainty as to what the current level of volatility is, and how it should be priced.  As I have already pointed out, volatility is intrinsically uncertain, being unobservable.  First, this allows for a disparity of views about its true level, both currently and in future.  Secondly, there is no universal agreement on how volatility should be priced.  This permits at times a wide divergence of views on fair value (to give you some idea of the complexities involved, I would refer you to, for example, Range-Based EGARCH Option Pricing Models). What this means, of course, is that there is a basis for a genuine source of competitive advantage, such as the Caissa Capital fund enjoyed in the early 2000s with its advanced option pricing models. The plethora of volatility products that have emerged over the last decade has only added to the opportunity set.

 Why Hasn’t It Been Done Before?

This was an entirely legitimate question back in the early days of volatility arbitrage. The cost of trading an option book, to say nothing of the complexities of managing the associated risks, was a significant disincentive for both managers and investors.  Bid/ask spreads were wide enough to cause significant headwinds for strategies that required aggressive price-taking.  Managers often had to juggle two sets of risk books, one reflecting the market’s view of the portfolio Greeks, the other the model view.  The task of explaining all this to investors, many of whom had never evaluated volatility strategies previously, was a daunting one.  And then there were the capacity issues:  back in the early 2000s a $400m long/short option portfolio would typically have to run to several hundred names in order to meet liquidity and market impact risk tolerances.

Much has changed over the last fifteen years, especially with the advent of the highly popular VIX futures contract and the newer ETF products such as VXX and XIV, whose trading volumes and AUM are growing rapidly.  These developments have exerted strong downward pressure on trading costs, while providing sufficient capacity for at least a dozen volatility funds managing over $1Bn in assets.

Why Hasn’t It Been Done Right Yet?

Again, this question is less apposite than it was ten years ago; since then there have been a number of success stories in the volatility space. One of the learning points occurred in 2004-2007, when volatility hit the lows for a 20-month period, causing performance to crater in long volatility funds, as well as funds with a volatility neutral mandate. I recall meeting with Nassim Taleb to discuss his Empirica volatility fund prior to that period, at the start of the 2000s.  My advice to him was that, while he had some great ideas, they were better suited to an insurance product rather than a hedge fund.  A long volatility fund might lose money month after month for an entire year, and with it investors and AUM, before seeing the kind of payoff that made such investment torture worthwhile.  And so it proved.

Conversely, stories about managers of short volatility funds showing superb performance, only to blow up spectacularly when volatility eventually explodes, are legion in this field.  One example comes to mind of a fund in Long Beach, CA, whose prime broker I visited with sometime in 2002.  He told me the fund had been producing a rock-steady 30% annual return for several years, and the enthusiasm from investors was off the charts – the fund was managing north of $1Bn by then.  Somewhat crestfallen I asked him how they were producing such spectacular returns.  “They just sell puts in the S&P, 100 points out of the money”, he told me.  I waited, expecting him to continue with details of how the fund managers handled the enormous tail risk.  I waited in vain. They were selling naked put options.  I can only imagine how those guys did when the VIX blew up in 2003 and, if they made it through that, what on earth happened to them in 2008!

Conclusion

The moral is simple:  one cannot afford to be either all-long, or all-short volatility.  The fund must run a long/short book, buying cheap Gamma and selling expensive Theta wherever possible, and changing the net volatility exposure of the portfolio dynamically, to suit current market conditions. It can certainly be done; and with the new volatility products that have emerged in recent years, the opportunities in the volatility space have never looked more promising.

Combining Momentum and Mean Reversion Strategies

The Fama-French World

For many years now the “gold standard” in factor models has been the 1996 Fama-French 3-factor model: Fig 1
Here r is the portfolio’s expected rate of return, Rf is the risk-free return rate, and Km is the return of the market portfolio. The “three factor” β is analogous to the classical β but not equal to it, since there are now two additional factors to do some of the work. SMB stands for “Small [market capitalization] Minus Big” and HML for “High [book-to-market ratio] Minus Low”; they measure the historic excess returns of small caps over big caps and of value stocks over growth stocks. These factors are calculated with combinations of portfolios composed of ranked stocks (BtM ranking, Cap ranking) and available historical market data. The Fama–French three-factor model explains over 90% of diversified portfolios’ in-sample returns, compared with the average 70% given by the standard CAPM model.
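Written out (the equation appears as an image, labeled Fig 1, in the original post), the model has the standard form:

```latex
r = R_f + \beta_3\,(K_m - R_f) + b_s \cdot \mathrm{SMB} + b_v \cdot \mathrm{HML} + \alpha
```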

The 3-factor model can also capture the reversal of long-term returns documented by DeBondt and Thaler (1985), who noted that extreme price movements over long formation periods were followed by movements in the opposite direction. (Alpha Architect has several interesting posts on the subject, including this one).

Fama and French say the 3-factor model can account for this. Long-term losers tend to have positive HML slopes and higher future average returns. Conversely, long-term winners tend to be strong stocks that have negative slopes on HML and low future returns. Fama and French argue that DeBondt and Thaler are just loading on the HML factor.


Enter Momentum

While many anomalies disappear under rigorous testing, shorter-term momentum effects (formation periods of ~1 year) appear robust. Carhart (1997) constructs his 4-factor model by using the FF 3-factor model plus an additional momentum factor. He shows that his 4-factor model with MOM substantially improves the average pricing errors of the CAPM and the 3-factor model. Since his work, the standard factors of the asset pricing model have been commonly recognized as Value, Size and Momentum.

 Combining Momentum and Mean Reversion

In a recent post, Alpha Architect looks at some possibilities for combining momentum and mean reversion strategies.  They examine all firms above the NYSE 40th percentile for market-cap (currently around $1.8 billion) to avoid weird empirical effects associated with micro/small cap stocks. The portfolios are formed at a monthly frequency with the following 2 variables:

  1. Momentum = Total return over the past twelve months (ignoring the last month)
  2. Value = EBIT/(Total Enterprise Value)

They form the simple Value and Momentum portfolios as follows:

  1. EBIT VW = Highest decile of firms ranked on Value (EBIT/TEV). Portfolio is value-weighted.
  2. MOM VW = Highest decile of firms ranked on Momentum. Portfolio is value-weighted.
  3. Universe VW = Value-weight returns to the universe of firms.
  4. SP500 = S&P 500 Total return

The results show that the top decile of Value and Momentum outperformed the index over the past 50 years.  The Momentum strategy has stronger returns than value, on average, but much higher volatility and drawdowns. On a risk-adjusted basis they perform similarly. Fig 2   The researchers then form the following four portfolios:

  1. EBIT VW = Highest decile of firms ranked on Value (EBIT/TEV). Portfolio is value-weighted.
  2. MOM VW = Highest decile of firms ranked on Momentum. Portfolio is value-weighted.
  3. COMBO VW = Rank firms independently on both Value and Momentum.  Add the two rankings together. Select the highest decile of firms ranked on the combined rankings. Portfolio is value-weighted.
  4. 50% EBIT/ 50% MOM VW = Each month, invest 50% in the EBIT VW portfolio, and 50% in the MOM VW portfolio. Portfolio is value-weighted.
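The COMBO VW construction (step 3 above) can be sketched as follows; the cross-section here is synthetic, and the column names and distributional parameters are illustrative assumptions rather than Alpha Architect’s actual data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical monthly cross-section of large-cap firms
n_firms = 1000
firms = pd.DataFrame({
    "value": rng.normal(0.08, 0.04, n_firms),      # EBIT/TEV
    "momentum": rng.normal(0.10, 0.30, n_firms),   # 12-month return, skipping last month
    "mkt_cap": rng.lognormal(8.0, 1.0, n_firms),
})

# Rank independently on each signal, sum the ranks, take the top decile
firms["rank_combo"] = firms["value"].rank() + firms["momentum"].rank()
top_decile = firms.nlargest(n_firms // 10, "rank_combo")

# Value-weight the selected names by market cap
weights = top_decile["mkt_cap"] / top_decile["mkt_cap"].sum()
```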

With the following results:

Fig 3 The main takeaways are:

  • The combined ranked portfolio outperforms the index over the same time period.
  • However, the combination portfolio performs worse than a 50% allocation to Value and a 50% allocation to Momentum.

A More Sophisticated Model

Yangru Wu of Rutgers has been doing interesting work in this area over the last 15 years, or more. His 2005 paper (with Ronald Balvers), Momentum and mean reversion across national equity markets, considers joint momentum and mean-reversion effects and allows for complex interactions between them. Their model is of the form Fig 4 where the excess return for country i (relative to the global equity portfolio) is represented by a combination of mean-reversion and autoregressive (momentum) terms. Balvers and Wu  find that combination momentum-contrarian strategies, used to select from among 18 developed equity markets at a monthly frequency, outperform both pure momentum and pure mean-reversion strategies. The results continue to hold after corrections for factor sensitivities and transaction costs. The researchers confirm that momentum and mean reversion occur in the same assets. So in establishing the strength and duration of the momentum and mean reversion effects it becomes important to control for each factor’s effect on the other. The momentum and mean reversion effects exhibit a strong negative correlation of 35%. Accordingly, controlling for momentum accelerates the mean reversion process, and controlling for mean reversion may extend the momentum effect.
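A stylized version of the Balvers-Wu specification (their exact formulation appears in the image referenced above, and in the paper itself): the excess return of country i combines a mean-reverting pull on the lagged price deviation with autoregressive momentum terms,

```latex
r^{i}_{t} = \alpha^{i} - \lambda^{i}\, x^{i}_{t-1} + \sum_{j=1}^{J} \rho^{i}_{j}\, r^{i}_{t-j} + \varepsilon^{i}_{t}
```

where x is the lagged deviation of the country’s log price index from the global benchmark: the lambda term captures mean reversion and the rho terms capture momentum.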

 Momentum, Mean Reversion and Volatility

The presence of strong momentum and mean reversion in volatility processes provides a rationale for the kind of volatility strategy that we trade at Systematic Strategies.  One sophisticated model is the Range-Based EGARCH model of Alizadeh, Brandt, and Diebold (2002).  The model posits a two-factor volatility process in which a short-term, transient volatility process mean-reverts to a stochastic long-term mean process, which may exhibit momentum, or long memory, effects (details here).

In our volatility strategy we model mean reversion and momentum effects derived from the level of short and long term volatility-of-volatility, as well as the forward volatility curve. These are applied to volatility ETFs, including levered ETF products, where convexity effects are also important.  Mean reversion is a well understood phenomenon in volatility, as, too, is the yield roll in volatility futures (which also impacts ETF products like VXX and XIV).

Momentum effects are perhaps less well researched in this context, but our research shows them to be extremely important.  By way of illustration, in the chart below I have isolated the (gross) returns generated by one of the momentum factors in our model.

Fig 6

 

The Mathematics of Scalping

NOTE:  if you are unable to see the Mathematica models below, you can download the free Wolfram CDF player and you may also need this plug-in.

You can also download the complete Mathematica CDF file here.

In this post I want to explore aspects of scalping, a type of strategy widely utilized by high frequency trading firms.

I will define a scalping strategy as one in which we seek to take small profits by posting limit orders on alternate sides of the book. Scalping, as I define it, is a strategy rather like market making, except that we “lean” on one side of the book. So, at any given time, we may have a long bias and so look to enter with a limit buy order. If this is filled, we will then look to exit with a subsequent limit sell order, taking a profit of a few ticks. Conversely, we may enter with a limit sell order and look to exit with a limit buy order.
The strategy relies on two critical factors:

(i) the alpha signal which tells us from moment to moment whether we should prefer to be long or short
(ii) the execution strategy, or “trade expression”

In this article I want to focus on the latter, making the assumption that we have some kind of alpha generation model already in place (more about this in later posts).

There are several means that a trader can use to enter a position. The simplest approach, the one we will be considering here, is simply to place a single limit order at or just outside the inside bid/ask prices – so in other words we will be looking to buy on the bid and sell on the ask (and hoping to earn the bid-ask spread, at least).


One of the problems with this approach is that it is highly latency sensitive. Limit orders join the limit order book at the back of the queue and slowly work their way towards the front, as earlier orders get filled. By the time the market gets around to your limit buy order, there may be no more sellers at that price. In that case the market trades away, a higher bid comes in and supersedes your order, and you don’t get filled. Conversely, yours may be one of the last orders to get filled, after which the market trades down to a lower bid and your position is immediately under water.

This simplistic model explains why latency is such a concern – you want to get as near to the front of the queue as you can, as quickly as possible. You do this by minimizing the time it takes to issue an order and get it into the limit order book. That entails both hardware (co-located servers, fiber-optic connections) and software optimization and typically also involves the use of Immediate or Cancel (IOC) orders. The use of IOC orders by HFT firms to gain order priority is highly controversial and is seen as gaming the system by traditional investors, who may end up paying higher prices as a result.

Another approach is to layer limit orders at price points up and down the order book, establishing priority long before the market trades there. Order layering is a highly complex execution strategy that brings additional complications.

Let’s confine ourselves to considering the single limit order, the type of order available to any trader using a standard retail platform.

As I have explained, we are assuming here that, at any point in time, you know whether you prefer to be long or short, and therefore whether you want to place a bid or an offer. The issue is, at what price do you place your order, and what do you do about limiting your risk? In other words, we are discussing profit targets and stop losses, which, of course, are all about risk and return.

Risk and Return in Scalping

Let’s start by considering risk. The biggest risk to a scalper is that, once filled, the market goes against his position until he is obliged to trigger his stop loss. If he sets his stop loss too tight, he may be forced to exit positions that are initially unprofitable, but which would have recovered and shown a profit if he had not exited. Conversely, if he sets the stop loss too loose, the risk-reward ratio is very low – a single loss-making trade could eradicate the profit from a large number of smaller, profitable trades.

Now let’s think about reward. If the trader is too ambitious in setting his profit target he may never get to realize the gains his position is showing – the market could reverse, leaving him with a loss on a position that was, initially, profitable. Conversely, if he sets the target too tight, the trader may give up too much potential in a winning trade to overcome the effects of the occasional, large loss.

It’s clear that these are critical concerns for a scalper: indeed the trade exit rules are just as important, or even more important, than the entry rules. So how should he proceed?

Theoretical Framework for Scalping

Let’s make the rather heroic assumption that market returns are Normally distributed (in fact, we know from empirical research that they are not – but this is a starting point, at least). And let’s assume for the moment that our trader has been filled on a limit buy order and is looking to decide where to place his profit target and stop loss limit orders. Given a current price of the underlying security of X, the scalper is seeking to determine the profit target of p ticks and the stop loss level of q ticks that will determine the prices at which he should post his limit orders to exit the trade. We can translate these into returns, as follows:

to the upside: Ru = Ln[X+p] – Ln[X]

and to the downside: Rd = Ln[X-q] – Ln[X]

This situation is illustrated in the chart below.

Normal Distn Shaded

The profitable area is the shaded region on the RHS of the distribution. If the market trades at this price or higher, we will make money: p ticks, less trading fees and commissions, to be precise. Conversely we lose q ticks (plus commissions) if the market trades in the region shaded on the LHS of the distribution.

Under our assumptions, the probability of ending up in the RHS shaded region is:

probWin = 1 – NormalCDF(Ru, mu, sigma),

where mu and sigma are the mean and standard deviation of the distribution.

The probability of losing money, i.e. the shaded area in the LHS of the distribution, is given by:

probLoss = NormalCDF(Rd, mu, sigma),

where NormalCDF is the cumulative distribution function of the Gaussian distribution.

The expected profit from the trade is therefore:

Expected profit = p * probWin – q * probLoss

And the expected win rate, the proportion of profitable trades, is given by:

WinRate = probWin / (probWin + probLoss)

If we set a stretch profit target, then p will be large, and probWin, the shaded region on the RHS of the distribution, will be small, so our winRate will be low. Under this scenario we would have a low probability of a large gain. Conversely, if we set p to, say, 1 tick, and our stop loss q to, say, 20 ticks, the shaded region on the RHS will represent close to half of the probability density, while the shaded LHS will encompass only around 5%. Our win rate in that case would be of the order of 91%:

WinRate = 50% / (50% + 5%) = 91%

Under this scenario, we make frequent, small profits and suffer the occasional large loss.
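As a quick numerical check on this arithmetic, here is a short Python sketch of the same calculation. The per-bar volatility figure is an assumption of mine, chosen so that a 20-tick stop sits at roughly the 5% quantile, as in the example above; the price and tick size are ES-style values.

```python
from scipy.stats import norm

# Illustrative, assumed parameters (ES-style price and tick size)
price, tick = 1900.0, 0.25
sigma = 0.0016                      # assumed per-bar return volatility

p_ticks, q_ticks = 1, 20            # tight profit target, wide stop loss
Ru = p_ticks * tick / price         # upside return threshold
Rd = -q_ticks * tick / price        # downside return threshold

prob_win = 1 - norm.cdf(Ru, 0, sigma)    # RHS shaded area: close to 50%
prob_loss = norm.cdf(Rd, 0, sigma)       # LHS shaded area: around 5%
win_rate = prob_win / (prob_win + prob_loss)
print(prob_win, prob_loss, win_rate)
```

With these assumptions the win rate comes out at around 90%, in line with the back-of-the-envelope figure in the text.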

So the critical question is: how do we pick p and q, our profit target and stop loss?  Does it matter?  What should the decision depend on?

Modeling Scalping Strategies

We can begin to address these questions by noticing, as we have already seen, that there is a trade-off between the size of the gain we hope to make, the size of the loss we are willing to tolerate, and the probability of that gain or loss arising. Those probabilities, in turn, depend on the underlying probability distribution, assumed here to be Gaussian.

Now, the Normal or Gaussian distribution which determines the probabilities of winning or losing at different price levels has two parameters – the mean, mu, or drift of the returns process and sigma, its volatility.

Over short time intervals the effect of volatility outweighs any impact from drift by orders of magnitude. The reason for this is simple: volatility scales with the square root of time, while drift scales linearly. Over small time intervals the drift becomes negligibly small compared to the process volatility. Hence we can safely assume that mu, the process mean, is zero, and focus exclusively on sigma, the volatility.
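To put numbers on this, the following sketch compares the two effects over a single 15-minute bar. The 5% drift and 15% volatility figures are assumed, illustrative values, not taken from the text.

```python
import math

annual_drift, annual_vol = 0.05, 0.15   # assumed, illustrative annual figures
mins_per_year = 250 * 6.5 * 60          # trading minutes per year
t = 15 / mins_per_year                  # a 15-minute bar as a fraction of a year

bar_drift = annual_drift * t            # drift scales linearly with time
bar_vol = annual_vol * math.sqrt(t)     # volatility scales with sqrt(time)
print(bar_drift, bar_vol, bar_vol / bar_drift)
```

At this horizon the per-bar volatility is larger than the per-bar drift by a factor of a couple of hundred, which is why setting mu to zero is harmless.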

What other factors do we need to consider? Well, there is a minimum price move, which might be 1 tick, and the dollar value of that tick, from which we can derive our upside and downside returns, Ru and Rd. And, finally, we need to factor commissions and exchange fees into our net trade P&L.

Here’s a simple formulation of the model, in which I am using the E-mini futures contract as an exemplar.

WinRate[currentPrice_, annualVolatility_, BarSizeMins_, nTicksPT_, nTicksSL_, minMove_, tickValue_, costContract_] :=
 Module[{nMinsPerYear, periodVolatility, tgtReturn, slReturn, tgtDollar, slDollar, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
  (* nTicksSL is passed as a negative number of ticks *)
  nMinsPerYear = 250*6.5*60; (* trading minutes per year *)
  periodVolatility = annualVolatility/Sqrt[nMinsPerYear/BarSizeMins];
  tgtReturn = nTicksPT*minMove/currentPrice;
  tgtDollar = nTicksPT*tickValue;
  slReturn = nTicksSL*minMove/currentPrice;
  slDollar = nTicksSL*tickValue;
  probWin = 1 - CDF[NormalDistribution[0, periodVolatility], tgtReturn];
  probLoss = CDF[NormalDistribution[0, periodVolatility], slReturn];
  winRate = probWin/(probWin + probLoss);
  expWinDollar = tgtDollar*probWin;
  expLossDollar = slDollar*probLoss; (* negative, since slDollar < 0 *)
  expProfit = expWinDollar + expLossDollar - costContract;
  {expProfit, winRate}]

For the ES contract we have a minimum price move of 0.25 and a tick value of $12.50. Notice that we scale the annual volatility to the length of the bar period we are trading (15-minute bars, in the following example).
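For readers who prefer Python to Mathematica, here is a sketch of the same Gaussian model using scipy. The function and variable names are my own; the negative-tick convention for the stop loss follows the Mathematica code.

```python
import math
from scipy.stats import norm

def win_rate(current_price, annual_vol, bar_size_mins, n_ticks_pt,
             n_ticks_sl, min_move, tick_value, cost_contract):
    """Expected profit and win rate for a scalp trade under Gaussian returns.
    n_ticks_sl is passed as a negative number of ticks."""
    mins_per_year = 250 * 6.5 * 60
    period_vol = annual_vol / math.sqrt(mins_per_year / bar_size_mins)
    tgt_return = n_ticks_pt * min_move / current_price
    sl_return = n_ticks_sl * min_move / current_price      # negative
    tgt_dollar = n_ticks_pt * tick_value
    sl_dollar = n_ticks_sl * tick_value                    # negative
    prob_win = 1 - norm.cdf(tgt_return, 0, period_vol)
    prob_loss = norm.cdf(sl_return, 0, period_vol)
    wr = prob_win / (prob_win + prob_loss)
    exp_profit = tgt_dollar * prob_win + sl_dollar * prob_loss - cost_contract
    return exp_profit, wr

# ES-style example: 15-min bars, 2-tick target, 20-tick stop, 10% annual vol
profit, wr = win_rate(1900, 0.10, 15, 2, -20, 0.25, 12.50, 3)
```

With these example parameters the model gives a win rate in the mid-90% range and a small positive expected profit per trade.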

Scenario Analysis

Let’s take a look at how the expected profit and win rate vary with the profit target and stop loss limits we set.  In the following interactive graphics, we can assess the impact of different levels of volatility on the outcome.

Expected Profit by Bar Size and Volatility

Expected Win Rate by Volatility

Notice, to begin with, that the win rate (and expected profit) is very far from being Normally distributed – not least because it changes radically with volatility, which is itself time-varying.

For very low levels of volatility, around 5%, we appear to do best in terms of maximizing our expected P&L by setting a tight profit target of a couple of ticks, and a stop loss of around 10 ticks.  Our win rate is very high at these levels – around 90% or more.  In other words, at low levels of volatility, our aim should be to try to make a large number of small gains.

But as volatility increases to around 15%, it becomes evident that we need to increase our profit target, to around 10 or 11 ticks. The distribution of the expected P&L suggests we have a couple of different strategy options: either we can set a larger stop loss, of around 30 ticks, or we can head in the other direction and set a very tight stop loss of perhaps just 1-2 ticks. This latter strategy is, in fact, the mirror image of our low-volatility strategy: at higher levels of volatility, we are aiming to make occasional, large gains and are willing to pay the price of sustaining repeated small stop-losses. Our win rate, although still well above 50%, naturally declines.

As volatility rises still further, to 20% or 30%, or more, it becomes apparent that we really have no alternative but to aim for occasional large gains, by increasing our profit target and tightening stop loss limits.   Our win rate under this strategy scenario will be much lower – around 30% or less.
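The qualitative pattern described above can be reproduced with a simple grid search over profit targets and stop losses, using the Gaussian model. This is a sketch under the same ES-style assumptions as before; the grid ranges are arbitrary choices of mine.

```python
import math
from scipy.stats import norm

def exp_profit(annual_vol, p, q, price=1900, bar_mins=15,
               min_move=0.25, tick_value=12.50, cost=3.0):
    """Expected $ profit for a p-tick target and q-tick stop (both positive here)."""
    period_vol = annual_vol / math.sqrt(250 * 6.5 * 60 / bar_mins)
    prob_win = 1 - norm.cdf(p * min_move / price, 0, period_vol)
    prob_loss = norm.cdf(-q * min_move / price, 0, period_vol)
    return p * tick_value * prob_win - q * tick_value * prob_loss - cost

def best_pq(annual_vol, p_grid=range(1, 21), q_grid=range(1, 31)):
    # Brute-force grid search for the (p, q) pair maximizing expected profit
    grid = [(exp_profit(annual_vol, p, q), p, q) for p in p_grid for q in q_grid]
    return max(grid)[1:]

p_lo, q_lo = best_pq(0.05)   # low volatility: small target, wide stop
p_hi, q_hi = best_pq(0.30)   # high volatility: large target, tight stop
```

At 5% volatility the search picks a profit target of a few ticks and the widest stop on the grid; at 30% it picks the largest target on the grid and a 1-tick stop, mirroring the strategy reversal described above.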

Non-Gaussian Model

Now let’s address the concern that asset returns are not typically distributed Normally. In particular, the empirical distribution of returns tends to have “fat tails”, i.e. the probability of an extreme event is much higher than in an equivalent Normal distribution.

A widely used model for fat-tailed distributions is the Extreme Value Distribution. This has pdf:

PDF[ExtremeValueDistribution[α, β], x]

E^((α - x)/β - E^((α - x)/β))/β

Plot[Evaluate@Table[PDF[ExtremeValueDistribution[α, 2], x], {α, {-3, 0, 4}}], {x, -8, 12}, Filling -> Axis]

[Figure: EVD pdf for α = -3, 0, 4 with β = 2]

Mean[ExtremeValueDistribution[α, β]]

α + β EulerGamma

Variance[ExtremeValueDistribution[α, β]]

(β^2 π^2)/6

In order to set the parameters of the EVD, we need to arrange them so that its mean and variance match those of the equivalent Gaussian distribution with mean 0 and standard deviation σ. Hence:

β = Sqrt[6] σ/π,  α = -EulerGamma β
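We can verify this parameter matching numerically. In scipy, Mathematica's ExtremeValueDistribution[α, β] corresponds to gumbel_r with loc=α and scale=β; the σ value below is just an example.

```python
import math
from scipy.stats import gumbel_r

euler_gamma = 0.5772156649015329        # EulerGamma
sigma = 0.01                            # target standard deviation (example value)

beta = sigma * math.sqrt(6) / math.pi   # matches the variance: (pi*beta)**2/6 = sigma**2
alpha = -euler_gamma * beta             # shifts the mean to zero

dist = gumbel_r(loc=alpha, scale=beta)
print(dist.mean(), dist.std())          # approximately 0 and sigma
```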

The code for a version of the model using the EVD is given as follows:

WinRateExtreme[currentPrice_, annualVolatility_, BarSizeMins_, nTicksPT_, nTicksSL_, minMove_, tickValue_, costContract_] :=
 Module[{nMinsPerYear, periodVolatility, alpha, beta, tgtReturn, slReturn, tgtDollar, slDollar, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
  (* nTicksSL is passed as a negative number of ticks *)
  nMinsPerYear = 250*6.5*60; (* trading minutes per year *)
  periodVolatility = annualVolatility/Sqrt[nMinsPerYear/BarSizeMins];
  beta = Sqrt[6]*periodVolatility/Pi; (* matches the Gaussian variance *)
  alpha = -EulerGamma*beta;           (* zero mean *)
  tgtReturn = nTicksPT*minMove/currentPrice;
  tgtDollar = nTicksPT*tickValue;
  slReturn = nTicksSL*minMove/currentPrice;
  slDollar = nTicksSL*tickValue;
  probWin = 1 - CDF[ExtremeValueDistribution[alpha, beta], tgtReturn];
  probLoss = CDF[ExtremeValueDistribution[alpha, beta], slReturn];
  winRate = probWin/(probWin + probLoss);
  expWinDollar = tgtDollar*probWin;
  expLossDollar = slDollar*probLoss; (* negative, since slDollar < 0 *)
  expProfit = expWinDollar + expLossDollar - costContract;
  {expProfit, winRate}]

WinRateExtreme[1900, 0.05, 15, 2, -30, 0.25, 12.50, 3][[2]]
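A Python sketch of the EVD version follows, again using gumbel_r for Mathematica's ExtremeValueDistribution. The names are my own, and the stop loss is again passed as a negative tick count.

```python
import math
from scipy.stats import gumbel_r

def win_rate_extreme(current_price, annual_vol, bar_size_mins, n_ticks_pt,
                     n_ticks_sl, min_move, tick_value, cost_contract):
    """Expected profit and win rate under Extreme Value (Gumbel) returns.
    n_ticks_sl is passed as a negative number of ticks."""
    mins_per_year = 250 * 6.5 * 60
    period_vol = annual_vol / math.sqrt(mins_per_year / bar_size_mins)
    beta = math.sqrt(6) * period_vol / math.pi     # match the Gaussian variance
    alpha = -0.5772156649015329 * beta             # EulerGamma; zero mean
    tgt_return = n_ticks_pt * min_move / current_price
    sl_return = n_ticks_sl * min_move / current_price
    prob_win = 1 - gumbel_r.cdf(tgt_return, loc=alpha, scale=beta)
    prob_loss = gumbel_r.cdf(sl_return, loc=alpha, scale=beta)
    wr = prob_win / (prob_win + prob_loss)
    exp_profit = (n_ticks_pt * tick_value * prob_win
                  + n_ticks_sl * tick_value * prob_loss - cost_contract)
    return exp_profit, wr

profit, wr = win_rate_extreme(1900, 0.05, 15, 2, -30, 0.25, 12.50, 3)
```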

We can now produce the same plots for the EVD version of the model that we plotted for the Gaussian version:

Expected Profit by Bar Size and Volatility – Extreme Value Distribution

Expected Win Rate by Volatility – Extreme Value Distribution

Next we compare the Gaussian and EVD versions of the model, to gain an understanding of how the differing assumptions impact the expected Win Rate.

Expected Win Rate by Stop Loss and Profit Target

As you can see, for moderate levels of volatility, up to around 18% annually, the expected Win Rate is actually higher if we assume an Extreme Value distribution of returns rather than a Normal distribution. If we use a Normal distribution we will actually underestimate the Win Rate, if the actual return distribution is closer to Extreme Value. In other words, the assumption of a Gaussian distribution for returns is actually conservative.

Now, on the other hand, it is also the case that at higher levels of volatility the assumption of Normality will tend to over-estimate the expected Win Rate, if returns actually follow an extreme value distribution. But, as indicated before, for high levels of volatility we need to consider amending the scalping strategy very substantially. Either we need to reverse it, setting larger Profit Targets and tighter Stops, or we need to stop trading altogether until volatility declines to normal levels. Many scalpers would prefer the second option, as the first alternative doesn't strike them as being close enough to scalping to justify the name. If you take that approach, i.e. stop trying to scalp in periods when volatility is elevated, then the differences in estimated Win Rate resulting from alternative assumptions about the return distribution are irrelevant.

If you only try to scalp when volatility is under, say, 20%, and you use a Gaussian distribution in your scalping model, you will typically only ever under-estimate your actual expected Win Rate. In other words, the assumption of Normality helps, not hurts, your strategy, by being conservative in its estimate of the expected Win Rate.

If, in the alternative, you want to trade the strategy regardless of the level of volatility, then by all means use something like an Extreme Value distribution in your model, as I have done here. That changes the estimates of expected Win Rate that the model produces, but it in no way changes the structure of the model, or invalidates it. It's just a different, arguably more realistic, set of assumptions pertaining to situations of elevated volatility.

Monte-Carlo Simulation Analysis

Let's move on to do some simulation analysis so we can get an understanding of the distribution of the expected Win Rate and Avg Trade PL for our two alternative models. We begin by coding a generator that produces a sample of 1,000 trades and calculates the Avg Trade PL and Win Rate.

Gaussian Model

GenWinRate[currentPrice_, annualVolatility_, BarSizeMins_, nTicksPT_, nTicksSL_, minMove_, tickValue_, costContract_] :=
 Module[{nMinsPerYear, periodVolatility, randObs, tgtReturn, slReturn, tgtDollar, slDollar, nWins, nLosses, winRate, perTradePL},
  (* nTicksSL is passed as a negative number of ticks *)
  nMinsPerYear = 250*6.5*60; (* trading minutes per year *)
  periodVolatility = annualVolatility/Sqrt[nMinsPerYear/BarSizeMins];
  tgtReturn = nTicksPT*minMove/currentPrice;
  tgtDollar = nTicksPT*tickValue;
  slReturn = nTicksSL*minMove/currentPrice;
  slDollar = nTicksSL*tickValue;
  randObs = RandomVariate[NormalDistribution[0, periodVolatility], 10^3];
  nWins = Count[randObs, x_ /; x >= tgtReturn];
  nLosses = Count[randObs, x_ /; x <= slReturn];
  winRate = nWins/(nWins + nLosses) // N;
  perTradePL = (nWins*tgtDollar + nLosses*slDollar)/(nWins + nLosses) - costContract;
  {perTradePL, winRate}]

GenWinRate[1900, 0.1, 15, 1, -24, 0.25, 12.50, 3]

{4.69231, 0.984615}
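The same Monte-Carlo exercise is easy to reproduce in Python with numpy. This is a sketch under the same assumptions; the function names and the seed are my own choices, and each run produces a different realization of the sample statistics.

```python
import math
import numpy as np

def gen_win_rate(current_price, annual_vol, bar_size_mins, n_ticks_pt,
                 n_ticks_sl, min_move, tick_value, cost_contract,
                 n_trades=1000, seed=0):
    """Simulate n_trades bar returns and tally wins and losses.
    n_ticks_sl is passed as a negative number of ticks."""
    rng = np.random.default_rng(seed)
    period_vol = annual_vol / math.sqrt(250 * 6.5 * 60 / bar_size_mins)
    tgt_return = n_ticks_pt * min_move / current_price
    sl_return = n_ticks_sl * min_move / current_price
    obs = rng.normal(0, period_vol, n_trades)
    n_wins = int(np.sum(obs >= tgt_return))        # target hit
    n_losses = int(np.sum(obs <= sl_return))       # stop hit
    win_rate = n_wins / (n_wins + n_losses)
    per_trade_pl = ((n_wins * n_ticks_pt + n_losses * n_ticks_sl) * tick_value
                    / (n_wins + n_losses) - cost_contract)
    return per_trade_pl, win_rate

pl, wr = gen_win_rate(1900, 0.1, 15, 1, -24, 0.25, 12.50, 3)
```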

Now we can generate a random sample of 10,000 simulation runs and plot a histogram of the Win Rates, using, for example, ES on 5-min bars, with a PT of 2 ticks and SL of -20 ticks, assuming annual volatility of 15%.

Histogram[Table[GenWinRate[1900, 0.15, 5, 2, -20, 0.25, 12.50, 3][[2]], {i, 10000}], 10, AxesLabel -> {"Exp. Win Rate (%)"}]

WinRateHist

Histogram[Table[GenWinRate[1900, 0.15, 5, 2, -20, 0.25, 12.50, 3][[1]], {i, 10000}], 10, AxesLabel -> {"Exp. PL/Trade ($)"}]

PLHist

Extreme Value Distribution Model

Next we can do the same for the Extreme Value Distribution version of the model.

GenWinRateExtreme[currentPrice_, annualVolatility_, BarSizeMins_, nTicksPT_, nTicksSL_, minMove_, tickValue_, costContract_] :=
 Module[{nMinsPerYear, periodVolatility, alpha, beta, randObs, tgtReturn, slReturn, tgtDollar, slDollar, nWins, nLosses, winRate, perTradePL},
  (* nTicksSL is passed as a negative number of ticks *)
  nMinsPerYear = 250*6.5*60; (* trading minutes per year *)
  periodVolatility = annualVolatility/Sqrt[nMinsPerYear/BarSizeMins];
  beta = Sqrt[6]*periodVolatility/Pi; (* matches the Gaussian variance *)
  alpha = -EulerGamma*beta;           (* zero mean *)
  tgtReturn = nTicksPT*minMove/currentPrice;
  tgtDollar = nTicksPT*tickValue;
  slReturn = nTicksSL*minMove/currentPrice;
  slDollar = nTicksSL*tickValue;
  randObs = RandomVariate[ExtremeValueDistribution[alpha, beta], 10^3];
  nWins = Count[randObs, x_ /; x >= tgtReturn];
  nLosses = Count[randObs, x_ /; x <= slReturn];
  winRate = nWins/(nWins + nLosses) // N;
  perTradePL = (nWins*tgtDollar + nLosses*slDollar)/(nWins + nLosses) - costContract;
  {perTradePL, winRate}]

Histogram[Table[GenWinRateExtreme[1900, 0.15, 5, 2, -10, 0.25, 12.50, 3][[2]], {i, 10000}], 10, AxesLabel -> {"Exp. Win Rate (%)"}]

WinRateEVDHist

Histogram[Table[GenWinRateExtreme[1900, 0.15, 5, 2, -10, 0.25, 12.50, 3][[1]], {i, 10000}], 10, AxesLabel -> {"Exp. PL/Trade ($)"}]

PLEVDHist


Conclusions

The key conclusions from this analysis are:

  1. Scalping is essentially a volatility trade
  2. The setting of optimal profit targets and stop loss limits depends critically on the volatility of the underlying, and needs to be handled dynamically, according to current levels of market volatility
  3. At low levels of volatility we should set tight profit targets and wide stop loss limits, looking to make a high percentage of small gains, of perhaps 2-3 ticks.
  4. As volatility rises, we need to reverse that position, setting more ambitious profit targets and tight stops, aiming for the occasional big win.