High Frequency Trading with ADL – JonathanKinlay.com

Trading Technologies’ ADL is a visual programming language designed specifically for trading strategy development that is integrated into the company’s flagship X-Trader product. Despite the radically different programming philosophy, my experience of working with ADL has been delightfully easy, and strategies that would typically take many months of coding in C++ have been up and running in a matter of days or weeks.  An extract of one such strategy, a high frequency scalping trade in the E-Mini S&P 500 futures, is shown in the graphic above.  The interface and visual language are so intuitive to a trading system developer that even someone who has never seen ADL before can quickly grasp at least some of what is happening in the code.

Strategy Development in Low vs. High-Level Languages
What are the benefits of using a high-level language like ADL compared to languages like C++/C# or Java that are traditionally used for trading system development?  The chief advantage is speed of development: I would say that ADL offers the potential to speed up the development process by at least one order of magnitude.  A complex trading system that would otherwise take months or even years to code and test in C++ or Java can be implemented successfully and put into production in a matter of weeks in ADL. In this regard, the advantage in speed of development is one shared by many high-level languages, including, for example, Matlab, R and Mathematica.  But in ADL’s case the advantage in terms of time to implementation is aided by the fact that, unlike generalist tools such as Matlab, ADL is designed specifically for trading system development.  The ADL development environment comes equipped with compiled, pre-built blocks designed to accomplish many of the tasks common to any trading system, such as acquiring market data and handling orders.  Even complex spread trades can be developed extremely quickly thanks to the very comprehensive library of pre-built blocks.

Integrating Research and Development
One of the drawbacks of using a higher-level language for building trading systems is that, being interpreted rather than compiled, such languages are simply too slow – by one or more orders of magnitude, typically – to be suitable for high frequency trading.  I will come on to discuss the execution speed issue a little later.  For now, let me bring up a second major advantage of ADL relative to other high-level languages, as I see it.  One of the issues that plagues trading system development is the difficulty of communication between researchers, who understand financial markets well but systems architecture and design rather less so, and developers, whose skill set lies in design and programming but whose knowledge of markets can often be sketchy.  These difficulties are heightened where researchers are using a high-level language and relying on developers to re-code their prototype system to get it into production.  Developers typically (and understandably) demand a high degree of specificity about the requirement, and if it isn’t included in the spec it won’t be in the final deliverable.  Unfortunately, developing a successful trading system is a highly non-linear process and a researcher will typically have to iterate around the core idea repeatedly until they find a combination of alpha signal and entry/exit logic that works.  In other words, researchers need flexibility, whereas developers require specificity. ADL helps address this issue by providing a development environment that is at once highly flexible and at the same time powerful enough to meet the demands of high frequency trading in a production environment.  It means that, in theory, researchers and developers can speak a common language and use a common tool throughout the R&D cycle.  This is likely to reduce the kind of misunderstandings between researchers and developers that commonly arise (often setting back the implementation schedule significantly when they do).

Latency
Of course, at least some of the theoretical benefit of using ADL depends on execution speed.  The way the problem is typically addressed with systems developed in high-level languages like Matlab or R is to recode the entire system in something like C++, or to recode some of the most critical elements and plug those back into the main Matlab program as DLLs.  The latter approach works, and preserves the most important benefits of working in both high- and low-level languages, but the resulting system is likely to be sub-optimal and can be difficult to maintain. The approach taken by Trading Technologies with ADL is very different.  Firstly, the component blocks are written in C# and in compiled form should run about as fast as native code.  Secondly, systems written in ADL can be deployed immediately on a co-located algo server that is plugged directly into the exchange, thereby reducing latency to an acceptable level.  While this is unlikely to be sufficient for an ultra-high frequency system operating at the sub-millisecond level, it will probably suffice for high frequency systems that operate at speeds above a few milliseconds, trading up to, say, around 100 times a day.

Fill Rate and Toxic Flow
For those not familiar with the HFT territory, let me provide an example of why the issues of execution speed and latency are so important.  Below is a simulated performance record for a HFT system in ES futures.  The system is designed to enter and exit using limit orders and trades around 120 times a day, with over 98% profitability, if we assume a 100% fill rate.

Monthly PNL

Performance Summary (100% fill rate)

So far so good.  But a 100% fill rate is clearly unrealistic.  Let’s look at a pessimistic scenario: what if we got filled on orders only when the limit price was exceeded?  (For those familiar with the jargon, we are assuming a high level of flow toxicity.)  The outcome is rather different:

Performance Summary (fills only when the limit price is exceeded)

Neither scenario is particularly realistic, but the outcome is much more likely to be closer to the second scenario than to the first if our execution speed is slow, or if we are using a retail platform such as Interactive Brokers or Tradestation, with long latency wait times.  The reason is simple: our orders will always arrive late and join the limit order book at the back of the queue.  In most cases the orders ahead of ours will exhaust demand at the specified limit price and the market will trade away without filling our order.  At other times our order will be filled only when there is a large flow against us (i.e. a surge of sell orders into our limit buy), i.e. when there is significant toxic flow. The proposition is that, using ADL and its high-speed trading infrastructure, we can hope to avoid the latter outcome.  While we will never come close to achieving a 100% fill rate, we may come close enough to offset the inevitable losses from toxic flow and produce a decent return.  Whether ADL is capable of fulfilling that potential remains to be seen.
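
To illustrate why the fill assumption matters so much, here is a minimal Mathematica sketch (a toy illustration of my own, not part of the strategy above) that compares the two assumptions on simulated random-walk price paths: an optimistic case in which a resting limit buy is filled whenever the market touches the limit price, and a pessimistic case in which it is filled only when the market trades through it.  All parameters are arbitrary assumptions.

(* Illustrative sketch only: compare optimistic vs pessimistic limit-order fill assumptions *)
(* on simulated random-walk price paths; none of this uses real market data. *)
fillRateComparison[nPaths_, nBars_, barVol_, limitOffsetTicks_, tickSize_] :=
 Module[{paths, limit, touched, tradedThrough},
  paths = Accumulate /@ RandomVariate[NormalDistribution[0, barVol], {nPaths, nBars}];
  limit = -limitOffsetTicks*tickSize;                              (* limit buy resting below the current price *)
  touched = Count[paths, p_ /; Min[p] <= limit];                   (* optimistic: filled if the limit is touched *)
  tradedThrough = Count[paths, p_ /; Min[p] <= limit - tickSize];  (* pessimistic: filled only if price trades through *)
  N[{touched, tradedThrough}/nPaths]]

fillRateComparison[10000, 60, 0.25, 2, 0.25]   (* returns {optimistic fill rate, pessimistic fill rate} *)

By construction the pessimistic fill rate can never exceed the optimistic one; the gap between the two is precisely what low-latency, high-priority execution is trying to close.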

More on ADL
For more information on ADL go here.

Building Systematic Strategies – A New Approach

Anyone active in the quantitative space will tell you that it has become a great deal more competitive in recent years.  Many quantitative trades and strategies are a lot more crowded than they used to be and returns from existing strategies are on the decline.

THE CHALLENGE

Meanwhile, costs have been steadily rising, as the technology arms race has accelerated, with more money being spent on hardware, communications and software than ever before.  As lead times to develop new strategies have risen, the cost of acquiring and maintaining expensive development resources has spiraled upwards.  It is getting harder to find new, profitable strategies, due in part to the over-grazing of existing methodologies and data sets (like the E-Mini futures, for example). There has, too, been a change in the direction of quantitative research in recent years.  Where once it was simply a matter of acquiring the fastest pipe to as many relevant locations as possible, the marginal benefit of each extra $ spent on infrastructure has since fallen rapidly.  New strategy research and development is now more model-driven than technology-driven.

THE OPPORTUNITY

What is needed at this point is a new approach:  one that accelerates the process of identifying new alpha signals, prototyping and testing new strategies and bringing them into production, leveraging existing battle-tested technologies and trading platforms.

GENETIC PROGRAMMING

Genetic programming, which has been around since the 1990s, when its use was pioneered in proteomics, enjoys significant advantages over traditional research and development methodologies.

GP is an evolutionary-based algorithmic methodology in which a system is given a set of simple rules, some data, and a fitness function that produces desired outcomes from combining the rules and applying them to the data.   The idea is that, by testing large numbers of possible combinations of rules – typically millions – and allowing the most successful rules to propagate, we will eventually arrive at a strategy that offers the required characteristics.
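
To make the mechanics concrete, here is a deliberately tiny Mathematica sketch of the evolutionary idea: a population of simple moving-average crossover rules is scored by an in-sample P&L fitness function, the fittest half survives each generation, and mutated copies refill the population.  This is a toy genetic search over two parameters, far simpler than genuine genetic programming (which evolves whole rule structures), and it is not the methodology described later in this post; the rule space, fitness function and parameter ranges are all arbitrary assumptions.

(* Toy evolutionary search: evolve {fast, slow} moving-average crossover rules. *)
(* Fitness is the in-sample P&L of trading next-bar price changes on the crossover signal. *)
fitness[{fast_Integer, slow_Integer}, prices_] := Module[{n, maFast, maSlow, sig},
  n = Length[prices] - slow + 1;
  maFast = MovingAverage[prices, fast][[-n ;;]];   (* align both averages on the last n bars *)
  maSlow = MovingAverage[prices, slow];
  sig = Sign[maFast - maSlow];                     (* +1 long, -1 short, 0 flat *)
  Total[Most[sig]*Differences[prices[[-n ;;]]]]]

evolveRules[prices_, popSize_: 40, nGen_: 25] := Module[{pop, ranked},
  pop = Table[Sort[RandomSample[Range[2, 50], 2]], {popSize}];    (* random initial population *)
  Do[
   ranked = SortBy[pop, -fitness[#, prices] &];
   pop = Join[ranked[[;; popSize/2]],                             (* keep the fittest half *)
     Sort[Clip[# + RandomInteger[{-3, 3}, 2], {2, 60}]] & /@ ranked[[;; popSize/2]]],  (* mutate to refill *)
   {nGen}];
  First[SortBy[pop, -fitness[#, prices] &]]]                      (* best {fast, slow} pair found *)

evolveRules[Accumulate[RandomVariate[NormalDistribution[0.02, 1], 1000]]]   (* e.g. on a synthetic price series *)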

ADVANTAGES OF GENETIC PROGRAMMING

The potential benefits of the GP approach are considerable: not only are strategies developed much more quickly and cost-effectively (the price of some software and a single CPU vs. a small army of developers), but the process is also much more flexible. The inflexibility of the traditional approach to R&D is one of its principal shortcomings.  The researcher produces a piece of research that is subsequently passed on to the development team.  Developers are usually extremely rigid in their approach: when asked to deliver X, they will deliver X, not some variation on X.  Unfortunately research is not an exact science: what looks good in a back-test environment may not pass muster when implemented in live trading.  So researchers need to “iterate around” the idea, trying different combinations of entry and exit logic, for example, until they find a variant that works.  Developers are lousy at this; GP systems excel at it.

CHALLENGES FOR THE GENETIC PROGRAMMING APPROACH

So enticing are the potential benefits of GP that it raises the question of why the approach hasn’t been adopted more widely.  One reason is the strong preference amongst researchers for an understandable – and testable – investment thesis.  Researchers – and, more importantly, investors – are much more comfortable if they can articulate the premise behind a strategy.  Even if a trade turns out to be a loser, we are generally more comfortable buying a stock on the supposition of, say, a positive outcome of a pending drug trial, than we are if required to trust the judgment of a black box, whose criteria are inherently unobservable.

Added to this, the GP approach suffers from three key drawbacks: data sufficiency, data mining and over-fitting.  These are so well known that they hardly require further rehearsal.  There have been many adverse outcomes resulting from poorly designed mechanical systems curve-fitted to the data. Anyone who was active in the space in the 1990s will recall the hype over neural networks and the over-exaggerated claims made for their efficacy in trading system design.  Genetic Programming, a far more general and powerful concept, suffered unfairly from the ensuing adverse publicity, although it does face many of the same challenges.

A NEW APPROACH

I began working in the field of genetic programming in the 1990s with my former colleague Haftan Eckholdt, at that time head of neuroscience at Yeshiva University, and we founded a hedge fund, Proteom Capital, based on that approach (largely due to Haftan’s research).  My colleagues at Systematic Strategies and I have continued to work on GP-related ideas over the last twenty years, and during that period we have developed a methodology that addresses the weaknesses that have held back genetic programming from widespread adoption.

Firstly, we have evolved methods for transforming the original data series that enable us to avoid over-using the same old data sets and, more importantly, allow new patterns to be revealed in the underlying market structure.   This effectively eliminates the data mining bias that has plagued the GP approach. At the same time, because our process produces a stronger signal relative to the background noise, we consume far less data – typically no more than a couple of years’ worth.

Secondly, we have found we can enhance the robustness of prototype strategies by using double-blind testing: i.e. data sets on which the performance of the model remains unknown to the machine, or the researcher, prior to the final model selection.

Finally, we are able to test not only the alpha signal, but also multiple variations of the trade expression, including different types of entry and exit logic, as well as profit targets and stop loss constraints.

OUTCOMES:  ROBUST, PROFITABLE STRATEGIES

Taken together, these measures enable our GP system to produce strategies that not only have very high performance characteristics, but are also extremely robust.  So, for example, having constructed a model using data only from the continuing bull market in equities in 2012 and 2013, the system is nonetheless capable of producing strategies that perform extremely well when tested out of sample over the highly volatile bear market conditions of 2008/09.

So stable are the results produced by many of the strategies, and so well risk-controlled, that it is possible to deploy leveraged money-management techniques, such as Ralph Vince’s fixed-fractional approach.  Money management schemes take advantage of the high level of consistency in performance to increase the capital allocation to the strategy in a way that boosts returns without incurring a high risk of catastrophic loss.  You can judge the benefits of applying these kinds of techniques in some of the strategies we have developed in equity, fixed income, commodity and energy futures, which are described below.
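
For illustration only, here is a generic Mathematica sketch of fixed-fractional sizing in the spirit of Vince’s approach; it is not our production money-management scheme and the inputs are assumptions.  The position size is chosen so that a repeat of the largest historical loss per contract would cost no more than a fixed fraction f of current equity.

(* Sketch of fixed-fractional position sizing: risk at most a fraction f of equity per trade, *)
(* based on the largest historical per-contract loss. All inputs are illustrative assumptions. *)
fixedFractionalContracts[equity_, f_, worstLossPerContract_] :=
  Floor[f*equity/Abs[worstLossPerContract]]

fixedFractionalContracts[250000, 0.1, -1200]   (* e.g. $250,000 equity, f = 10%, worst loss $1,200: 20 contracts *)

The fraction f is the key design choice: raising it boosts returns when performance is consistent, at the price of a larger drawdown if the worst-case loss is ever exceeded.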

CONCLUSION

After 20-30 years of incubation, the Genetic Programming approach to strategy research and development has come of age. It is now entirely feasible to develop trading systems that far outperform the overwhelming majority of strategies produced by human researchers, in a fraction of the time and for a fraction of the cost.

SAMPLE GP SYSTEMS

Sample strategy performance charts: E-Mini S&P 500 (ES), Natural Gas (NG), Silver (SI) and US Bond (US) futures, with and without money management (MM).

Day Trading System in VIX Futures – JonathanKinlay.com

This is a follow-up to my earlier post on a Calendar Spread Strategy in VIX Futures (more information on calendar spreads).

The strategy trades the front two months in the CFE VIX futures contract, generating an annual profit of around $25,000 per spread.

DAY TRADING SYSTEM
I built an equivalent day trading system in VIX futures in Trading Technologies’ visual ADL language, using 1-min bar data for 2010, and tested the system out-of-sample in 2011-2014 (for more information on X-Trader/ADL go here).

The annual net P&L is around $20,000 per spread, with a win rate of 67%.   On the downside, the profit factor is rather low and the average trade is barely 1/10 of a tick. Note that this is net of a bid-ask spread of 0.05 ($50) and commission/transaction costs of $20 per round turn.  These cost assumptions are reasonable for online trading at many brokerage firms.
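
As a back-of-envelope check, using only the cost figures quoted above, the all-in cost of one round turn in the spread works out as follows (a trivial Mathematica calculation, included purely to make the arithmetic explicit):

(* Round-turn cost per spread implied by the assumptions above *)
tickValue = 50;                        (* one 0.05 tick in the VIX spread = $50 *)
spreadCost = 1*tickValue;              (* crossing a one-tick-wide bid-ask spread once per round turn *)
commission = 20;                       (* commission/transaction costs per round turn *)
costPerRoundTurn = spreadCost + commission   (* = $70 *)

Set against an average net trade of barely 1/10 of a tick, that $70 hurdle is exactly why the passive entry discussed below matters so much.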

However, the strategy requires you to work the spread to enter passively (thereby reducing the cost of entry).  This is usually only feasible on a platform suitable for high frequency trading, where you can assume that your orders have acceptable priority in the limit order queue, so that a reasonable proportion of your passive bids and offers will be executed.  Typically the spread trade is held throughout the session, exiting on the close (since this is a day trading system).

Overall, while the trading system characteristics are reasonable, the spread strategy is better suited to longer (i.e. overnight) holding periods, since the VIX futures market is not the most liquid and the tick value is large.  We’ll take a look at other day trading strategies in more liquid products, like the S&P 500 e-mini futures, for example, in another post.

High Frequency Strategy Equity Curve

High Frequency Performance Results

A Calendar Spread Strategy in VIX Futures

I have been working on developing some high frequency spread strategies using Trading Technologies’ Algo Strategy Engine, which is extremely impressive (more on this in a later post).  I decided to take a time out to experiment with a slower version of one of the trades, a calendar spread in VIX futures that trades the spread on the front two contracts.  The strategy applies a variety of trend-following and mean-reversion indicators to trade the spread on a daily basis.

Modeling a spread strategy on a retail platform like Interactive Brokers or TradeStation is extremely challenging, due to the limitations of the platform and the EasyLanguage programming language compared to purpose-built professional platforms like TT’s X-Trader and development tools like ADL.  If you backtest strategies based on signals generated from the spread calculated using the last traded prices in the two securities, you will almost certainly see “phantom trades” – trades that could not be executed at the indicated spread price (for example, because both contracts last traded on the same side of the bid/ask spread).   You also can’t easily simulate passive entry or exit strategies, which typically constrains you to using market orders for both legs, in and out of the spread.  On the other hand, while using market orders would almost certainly be prohibitively expensive in a high frequency or daytrading context, in a low-frequency scenario the higher transaction costs entailed in aggressive entries and exits are typically amortized over far longer time frames.
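
One simple way to avoid phantom trades in a spread backtest is to generate signals and fills from an “executable” spread quote built from each leg’s bid and ask, rather than from last-traded prices.  The following Mathematica fragment is a minimal sketch of that idea; the function and the example quotes are my own illustrative assumptions, not part of any particular platform.

(* Illustrative sketch: an executable spread quote built from each leg's bid/ask, *)
(* rather than from last-traded prices, to avoid phantom trades in a spread backtest. *)
executableSpreadQuote[bid1_, ask1_, bid2_, ask2_] :=
 <|"BuySpreadAt" -> ask1 - bid2,     (* buy leg 1 at the ask, sell leg 2 at the bid *)
   "SellSpreadAt" -> bid1 - ask2|>   (* sell leg 1 at the bid, buy leg 2 at the ask *)

executableSpreadQuote[15.30, 15.35, 16.10, 16.15]   (* e.g. hypothetical front- and second-month quotes *)

A backtest that only ever buys the spread at BuySpreadAt and sells it at SellSpreadAt cannot record a trade that was not actually available in the market.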

In the following example I have allowed transaction costs of $100 per round turn and slippage of $0.1 (equivalent to $100) per spread.  Daily settlement prices from Mar 2004 to June 2010 were used to fit the model, which was tested out of sample in the period July 2010 to June 2014. Results are summarized in the chart and table below.

Even burdened with significant transaction cost assumptions, the strategy performance looks impressive on several counts, notably a profit factor in excess of 300, a win rate of over 90% and a Sortino Ratio of over 6.  These features of the strategy prove robust (and even improve) during the four-year out-of-sample period, although the annual net profit per spread declines to around $8,500, from $36,600 for the in-sample period.  Even so, this being a straightforward calendar spread, it should be possible to trade the strategy in size at relatively modest margin cost, making the strategy returns highly attractive.

Equity Curve

Performance Results

How Not to Develop Trading Strategies – A Cautionary Tale

In his post on Multi-Market Techniques for Robust Trading Strategies (http://www.adaptrade.com/Newsletter/NL-MultiMarket.htm) Michael Bryant of Adaptrade discusses some interesting approaches to improving model robustness. One is to use data from several correlated assets to build the model, on the basis that if the algorithm works for several assets with differing price levels, that would tend to corroborate the system’s robustness. The second approach he advocates is to use data from the same asset series at different bar lengths. The example he gives uses @ES.D at 5, 7 and 9 minute bars. The argument in favor of this approach is the same as for the first, albeit in this case the underlying asset is the same.

I like Michael’s idea in principle, but I wanted to give you a sense of what can all too easily go wrong with GP modeling, even using techniques such as multi-time frame fitting and Monte Carlo simulation to improve robustness testing.

In the chart below I have extended the analysis back in time, beyond the 2011-2012 period that Michael used to build his original model. As you can see, most of the returns are generated in-sample, in the 2011-2012 period. As we look back over the period from 2007-2010, the results are distinctly unimpressive – the strategy basically trades sideways for four years.

Adaptrade ES Strategy in Multiple Time Frames

How to Do It Right

In my view, there is only one safe way to use GP to develop strategies. Firstly, you need to use a very long span of data – as much as possible – to fit your model. Only in this way can you ensure that the model has encountered enough variation in market conditions to stand a reasonable chance of being able to adapt to changing market conditions in future.

Secondly, you need to use two OOS periods. The first OOS span of data, drawn from the start of the data series, is used in the normal way, to visually inspect the performance of the model. But the second span of OOS data, from more recent history, is NOT examined before the model is finalized. This is really important. Products like Adaptrade make it too easy for the system designer to “cheat”, by looking at the recent performance of his trading system “out of sample” and selecting models that do well in that period. But the very process of examining OOS performance introduces bias into the system. It would be like adding a line of code saying something like:

IF (model performance in OOS period > x) do the following….

I am quite sure if I posted a strategy with a line of code like that in it, it would immediately be shot down as being blatantly biased, and quite rightly so. But, if I look at the recent “OOS” performance and use it to select the model, I am effectively doing exactly the same thing.

That is why it is so important to have a second span of OOS data that is not only not used to build the model, but is also not used to assess performance until after the final model selection is made. For that reason, the second OOS period is referred to as a “double blind” test.

That’s the procedure I followed to build my futures daytrading strategy: I used as much data as possible, dating from 2002. The first 20% of each data set was used for normal OOS testing. But the second set of data, from Jan 2012 onwards, was my double-blind data set. Only when I saw that the system maintained performance in BOTH OOS periods was I reasonably confident of the system’s robustness.

DoubleBlind
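
A minimal Mathematica sketch of that three-way split is given below. The function name and the 20% proportions are my own illustrative assumptions; in practice the segment boundaries would be set by date, as described above.

(* Illustrative three-way split: an initial OOS segment, an in-sample segment for model *)
(* building, and a final "double blind" segment not examined until the model is finalized. *)
splitForDoubleBlind[data_, oosFrac_: 0.2, blindFrac_: 0.2] := Module[{n, a, b},
  n = Length[data];
  a = Floor[oosFrac*n];            (* end of the first OOS segment *)
  b = Floor[(1 - blindFrac)*n];    (* start of the double-blind segment *)
  <|"OOS1" -> data[[1 ;; a]],
    "InSample" -> data[[a + 1 ;; b]],
    "DoubleBlind" -> data[[b + 1 ;;]]|>]

Keys[splitForDoubleBlind[Range[1000]]]   (* {"OOS1", "InSample", "DoubleBlind"} *)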

This further explains why it is so challenging to develop higher frequency strategies using GP. Running even a very fast GP modeling system on a large span of high frequency data can take inordinate amounts of time.

The longest span of 5-min bar data that a GP system can handle would typically be around 5-7 years. This is probably not quite enough to build a truly robust system, although if you pick your time span carefully it might be (I generally like to use the 2006-2011 period, which has lots of market variation).

For 15 minute bar data, a well-designed GP system can usually handle all the available data you can throw at it – from 1999 in the case of the Emini, for instance.

Why I don’t Like Fitting Models over Short Time Spans

The risks of fitting models to data in short time spans are intuitively obvious. If you happen to pick a data set in which the market is in a strong uptrend, then your model is going to focus on that kind of market behavior. Subsequently, when the trend changes, the strategy will typically break down.
Monte Carlo simulation isn’t going to change much in this situation: sure, it will help a bit, perhaps, but since the resampled data is all drawn from the same original data set, in most cases the simulated paths will also show a strong uptrend – all that will be shown is that there is some doubt about the strength of the trend. But a completely different scenario, in which, say, the market drops by 10%, is unlikely to appear.
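
The point is easy to demonstrate with a toy bootstrap in Mathematica (entirely synthetic data, for illustration only): returns resampled from a strongly trending sample inherit that trend, so the simulated paths rarely depict a materially different market regime.

(* Toy illustration: bootstrap resampling from a trending return series preserves the trend. *)
SeedRandom[42];
trendingReturns = RandomVariate[NormalDistribution[0.001, 0.01], 500];          (* synthetic uptrending sample *)
resampledEndPoints = Table[Total[RandomChoice[trendingReturns, 500]], {1000}];  (* 1,000 resampled paths *)
{Total[trendingReturns], Mean[resampledEndPoints]}
(* the two values are nearly identical: the bootstrap inherits the original sample's drift *)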

One possible answer to that problem, recommended by some system developers, is simply to rebuild the model when a breakdown is detected. While it’s true that a product like MSA can make detection easier, rebuilding the model is another question altogether. There is no guarantee that the kind of model that has worked hitherto can be re-tooled to work once again. In fact, there may be no viable trading system that can handle the new market dynamics.

Here is a case in point. We have a system that works well on 10 min bars in TF.D up until around May 2012, when MSA indicates a breakdown in strategy performance.

TF.F Monte Carlo

So now we try to fit a new model, along the pattern of the original model, taking into account some of the new data.  But it turns out to be just a Band-Aid – after a few more data points the strategy breaks down again, irretrievably.

TF EC 1

This is typical of what often happens when you use GP to build a model using a short span of data. That’s why I prefer to use a long time span, even at lower frequency. The chances of being able to build a robust system that will adapt well to changing market conditions are much higher.

A Robust Emini Trading System

Here, for example, is a GP system built on daily data in @ES.D from 1999 to 2011 (i.e. 2012 to 2014 is OOS).

ES.D EC

The Mathematics of Scalping

NOTE:  if you are unable to see the Mathematica models below, you can download the free Wolfram CDF player and you may also need this plug-in.

You can also download the complete Mathematica CDF file here.

In this post I want to explore aspects of scalping, a type of strategy widely utilized by high frequency trading firms.

I will define a scalping strategy as one in which we seek to take small profits by posting limit orders on alternate sides of the book. Scalping, as I define it, is a strategy rather like market making, except that we “lean” on one side of the book. So, at any given time, we may have a long bias and so look to enter with a limit buy order. If this is filled, we will then look to exit with a subsequent limit sell order, taking a profit of a few ticks. Conversely, we may enter with a limit sell order and look to exit with a limit buy order.
The strategy relies on two critical factors:

(i) the alpha signal which tells us from moment to moment whether we should prefer to be long or short
(ii) the execution strategy, or “trade expression”

In this article I want to focus on the latter, making the assumption that we have some kind of alpha generation model already in place (more about this in later posts).

There are several means that a trader can use to enter a position. The simplest approach, the one we will be considering here, is simply to place a single limit order at or just outside the inside bid/ask prices – so in other words we will be looking to buy on the bid and sell on the ask (and hoping to earn the bid-ask spread, at least).

One of the problems with this approach is that it is highly latency sensitive. Limit orders join the limit order book at the back of the queue and slowly work their way towards the front, as earlier orders get filled. By the time the market gets around to your limit buy order, there may be no more sellers at that price. In that case the market trades away, a higher bid comes in and supersedes your order, and you don’t get filled. Conversely, yours may be one of the last orders to get filled, after which the market trades down to a lower bid and your position is immediately under water.

This simplistic model explains why latency is such a concern – you want to get as near to the front of the queue as you can, as quickly as possible. You do this by minimizing the time it takes to issue an order and get it into the limit order book. That entails both hardware (co-located servers, fiber-optic connections) and software optimization, and typically also involves the use of Immediate or Cancel (IOC) orders. The use of IOC orders by HFT firms to gain order priority is highly controversial and is seen as gaming the system by traditional investors, who may end up paying higher prices as a result.

Another approach is to layer limit orders at price points up and down the order book, establishing priority long before the market trades there. Order layering is a highly complex execution strategy that brings additional complications.

Let’s confine ourselves to considering the single limit order, the type of order available to any trader using a standard retail platform.

As I have explained, we are assuming here that, at any point in time, you know whether you prefer to be long or short, and therefore whether you want to place a bid or an offer. The issue is, at what price do you place your order, and what do you do about limiting your risk? In other words, we are discussing profit targets and stop losses, which, of course, are all about risk and return.

Risk and Return in Scalping

Let’s start by considering risk. The biggest risk to a scalper is that, once filled, the market goes against his position until he is obliged to trigger his stop loss. If he sets his stop loss too tight, he may be forced to exit positions that are initially unprofitable, but which would have recovered and shown a profit if he had not exited. Conversely, if he sets the stop loss too loose, the risk-reward ratio is very low – a single loss-making trade could eradicate the profit from a large number of smaller, profitable trades.

Now let’s think about reward. If the trader is too ambitious in setting his profit target he may never get to realize the gains his position is showing – the market could reverse, leaving him with a loss on a position that was, initially, profitable. Conversely, if he sets the target too tight, the trader may give up too much potential in a winning trade to overcome the effects of the occasional, large loss.

It’s clear that these are critical concerns for a scalper: indeed the trade exit rules are just as important, or even more important, than the entry rules. So how should he proceed?

Theoretical Framework for Scalping

Let’s make the rather heroic assumption that market returns are Normally distributed (in fact, we know from empirical research that they are not – but this is a starting point, at least). And let’s assume for the moment that our trader has been filled on a limit buy order and is looking to decide where to place his profit target and stop loss limit orders. Given a current price of the underlying security of X, the scalper is seeking to determine the profit target of p ticks and the stop loss level of q ticks that will determine the prices at which he should post his limit orders to exit the trade. We can translate these into returns, as follows:

to the upside: Ru = Ln[X+p] – Ln[X]

and to the downside: Rd = Ln[X-q] – Ln[X]

This situation is illustrated in the chart below.

Normal Distn Shaded

The profitable area is the shaded region on the RHS of the distribution. If the market trades at this price or higher, we will make money: p ticks, less trading fees and commissions, to be precise. Conversely we lose q ticks (plus commissions) if the market trades in the region shaded on the LHS of the distribution.

Under our assumptions, the probability of ending up in the RHS shaded region is:

probWin = 1 – NormalCDF(Ru, mu, sigma),

where mu and sigma are the mean and standard deviation of the distribution.

The probability of losing money, i.e. the shaded area in the LHS of the distribution, is given by:

probLoss = NormalCDF(Rd, mu, sigma),

where NormalCDF is the cumulative distribution function of the Gaussian distribution.

The expected profit from the trade is therefore:

Expected profit = p * probWin – q * probLoss

And the expected win rate, the proportion of profitable trades, is given by:

WinRate = probWin / (probWin + probLoss)

If we set a stretch profit target, then p will be large, and probWin, the shaded region on the RHS of the distribution, will be small, so our winRate will be low. Under this scenario we would have a low probability of a large gain. Conversely, if we set p to, say, 1 tick, and our stop loss q to, say, 20 ticks, the shaded region on the RHS will represent close to half of the probability density, while the shaded LHS will encompass only around 5%. Our win rate in that case would be of the order of 91%:

WinRate = 50% / (50% + 5%) = 91%

Under this scenario, we make frequent, small profits  and suffer the occasional large loss.

So the critical question is: how do we pick p and q, our profit target and stop loss?  Does it matter?  What should the decision depend on?

Modeling Scalping Strategies

We can begin to address these questions by noticing, as we have already seen, that there is a trade-off between the size of profit we are hoping to make, and the size of loss we are willing to tolerate, and the probability of that gain or loss arising.  Those probabilities in turn depend on the underlying probability distribution, assumed here to be Gaussian.

Now, the Normal or Gaussian distribution which determines the probabilities of winning or losing at different price levels has two parameters – the mean, mu, or drift of the returns process and sigma, its volatility.

Over short time intervals the effect of volatility outweighs any impact from drift by orders of magnitude.  The reason for this is simple:  volatility scales with the square root of time, while the drift scales linearly.  Over small time intervals, the drift becomes unnoticeably small compared to the process volatility.  Hence we can assume that mu, the process mean, is zero without concern, and focus exclusively on sigma, the volatility.
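
As a worked example, using the same trading-calendar assumptions as the model code below, 15% annual volatility corresponds to only a tiny per-bar volatility on 15-minute bars:

(* Worked example of the square-root-of-time scaling used in the model below *)
annualVolatility = 0.15; barSizeMins = 15;
nMinsPerYear = 250*6.5*60;                                    (* 250 trading days of 6.5 hours *)
periodVolatility = annualVolatility/Sqrt[nMinsPerYear/barSizeMins]
(* ≈ 0.0019, i.e. roughly 0.19% per 15-minute bar *)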

What other factors do we need to consider?  Well there is a minimum price move, which might be 1 tick, and the dollar value of that tick, from which we can derive our upside and downside returns, Ru and Rd.  And, finally, we need to factor in commissions and exchange fees into our net trade P&L.

Here’s a simple formulation of the model, in which I am using the E-mini futures contract as an exemplar.

 WinRate[currentPrice_,annualVolatility_,BarSizeMins_, nTicksPT_, nTicksSL_,minMove_, tickValue_, costContract_]:=Module[{ nMinsPerDay, periodVolatility, tgtReturn, slReturn,tgtDollar, slDollar, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
nMinsPerDay = 250*6.5*60;
periodVolatility = annualVolatility / Sqrt[nMinsPerDay/BarSizeMins];
tgtReturn=nTicksPT*minMove/currentPrice;tgtDollar = nTicksPT * tickValue;
slReturn = nTicksSL*minMove/currentPrice;
slDollar=nTicksSL*tickValue;
probWin=1-CDF[NormalDistribution[0, periodVolatility],tgtReturn];
probLoss=CDF[NormalDistribution[0, periodVolatility],slReturn];
winRate=probWin/(probWin+probLoss);
expWinDollar=tgtDollar*probWin;
expLossDollar=slDollar*probLoss;
expProfit=expWinDollar+expLossDollar-costContract;
{expProfit, winRate}]

For the ES contract we have a min price move of 0.25 and the tick value is $12.50.  Notice that we scale annual volatility to the size of the period we are trading (15 minute bars, in the following example).
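
As a usage illustration (a hypothetical example call): with a 3-tick profit target, a 30-tick stop entered as a negative tick count to match the sign convention of the simulation calls later in the post, 15% annual volatility on 15-minute bars, and the same $3 per-contract cost used elsewhere, the model gives approximately:

WinRate[1900, 0.15, 15, 3, -30, 0.25, 12.50, 3]

(* ≈ {6.2, 0.96}: an expected profit of roughly $6 per contract per trade and a win rate of about 96% *)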

Scenario Analysis

Let’s take a look at how the expected profit and win rate vary with the profit target and stop loss limits we set.  In the following interactive graphics, we can assess the impact of different levels of volatility on the outcome.

Expected Profit by Bar Size and Volatility

Expected Win Rate by Volatility

Notice to begin with that the win rate (and expected profit) are very far from being Normally distributed – not least because they change radically with volatility, which is itself time-varying.

For very low levels of volatility, around 5%, we appear to do best in terms of maximizing our expected P&L by setting a tight profit target of a couple of ticks, and a stop loss of around 10 ticks.  Our win rate is very high at these levels – around 90% or more.  In other words, at low levels of volatility, our aim should be to try to make a large number of small gains.

But as volatility increases to around 15%, it becomes evident that we need to increase our profit target, to around 10 or 11 ticks.  The distribution of the expected P&L suggests we have a couple of different strategy options: either we can set a larger stop loss, of around 30 ticks, or we can head in the other direction, and set a very low stop loss of perhaps just 1-2 ticks.  This later strategy is, in fact, the mirror image of our low-volatility strategy:  at higher levels of volatility, we are aiming to make occasional, large gains and we are willing to pay the price of sustaining repeated small stop-losses.  Our win rate, although still well above 50%, naturally declines.

As volatility rises still further, to 20% or 30%, or more, it becomes apparent that we really have no alternative but to aim for occasional large gains, by increasing our profit target and tightening stop loss limits.   Our win rate under this strategy scenario will be much lower – around 30% or less.

Non-Gaussian Model

Now let’s address the concern that asset returns are not typically distributed Normally. In particular, the empirical distribution of returns tends to have “fat tails”, i.e. the probability of an extreme event is much higher than in an equivalent Normal distribution.

A widely used model for fat-tailed distributions is the Extreme Value Distribution. This has pdf:

PDF[ExtremeValueDistribution[α, β], x]

E^(-E^((α - x)/β) + (α - x)/β)/β

Plot[Evaluate@Table[PDF[ExtremeValueDistribution[α, 2], x], {α, {-3, 0, 4}}], {x, -8, 12}, Filling -> Axis]

EVD pdf

Mean[ExtremeValueDistribution[α, β]]

α + β EulerGamma

Variance[ExtremeValueDistribution[α, β]]

(π^2 β^2)/6

In order to set the parameters of the EVD, we need to arrange them so that its mean and variance match those of the equivalent Gaussian distribution with mean 0 and standard deviation σ. Hence:

β = σ Sqrt[6]/π  and  α = -β EulerGamma

The code for a version of the model using the EVD is given as follows:

WinRateExtreme[currentPrice_,annualVolatility_,BarSizeMins_, nTicksPT_, nTicksSL_,minMove_, tickValue_, costContract_]:=Module[{ nMinsPerDay, periodVolatility, alpha, beta,tgtReturn, slReturn,tgtDollar, slDollar, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
nMinsPerDay = 250*6.5*60;
periodVolatility = annualVolatility / Sqrt[nMinsPerDay/BarSizeMins];
beta = Sqrt[6]*periodVolatility / Pi;
alpha=-EulerGamma*beta;
tgtReturn=nTicksPT*minMove/currentPrice;tgtDollar = nTicksPT * tickValue;
slReturn = nTicksSL*minMove/currentPrice;
slDollar=nTicksSL*tickValue;
probWin=1-CDF[ExtremeValueDistribution[alpha, beta],tgtReturn];
probLoss=CDF[ExtremeValueDistribution[alpha, beta],slReturn];
winRate=probWin/(probWin+probLoss);
expWinDollar=tgtDollar*probWin;
expLossDollar=slDollar*probLoss;
expProfit=expWinDollar+expLossDollar-costContract;
{expProfit, winRate}]

WinRateExtreme[1900,0.05,15,2,30,0.25,12.50,3][[2]]

0.21759

We can now produce the same plots for the EVD version of the model that we plotted for the Gaussian version:

Expected Profit by Bar Size and Volatility – Extreme Value Distribution

Expected Win Rate by Volatility – Extreme Value Distribution

Next we compare the Gaussian and EVD versions of the model, to gain an understanding of how the differing assumptions impact the expected Win Rate.

Expected Win Rate by Stop Loss and Profit Target

As you can see, for moderate levels of volatility, up to around 18% annually, the expected Win Rate is actually higher if we assume an Extreme Value distribution of returns, rather than a Normal distribution. If we use a Normal distribution we will actually underestimate the Win Rate, if the actual return distribution is closer to Extreme Value. In other words, the assumption of a Gaussian distribution for returns is actually conservative.

Now, on the other hand, it is also the case that at higher levels of volatility the assumption of Normality will tend to over-estimate the expected Win Rate, if returns actually follow an extreme value distribution. But, as indicated before, for high levels of volatility we need to consider amending the scalping strategy very substantially. Either we need to reverse it, setting larger Profit Targets and tighter Stops, or we need to stop trading altogether until volatility declines to normal levels. Many scalpers would prefer the second option, as the first alternative doesn’t strike them as being close enough to scalping to justify the name. If you take that approach, i.e. stop trying to scalp in periods when volatility is elevated, then the differences in estimated Win Rate resulting from alternative assumptions of return distribution are irrelevant.

If you only try to scalp when volatility is under, say, 20%, and you use a Gaussian distribution in your scalping model, you will typically only ever under-estimate your actual expected Win Rate. In other words, the assumption of Normality helps, not hurts, your strategy, by being conservative in its estimate of the expected Win Rate.

If, in the alternative, you want to trade the strategy regardless of the level of volatility, then by all means use something like an Extreme Value distribution in your model, as I have done here. That changes the estimates of expected Win Rate that the model produces, but it in no way changes the structure of the model, or invalidates it. It’s just a different, arguably more realistic, set of assumptions pertaining to situations of elevated volatility.

Monte-Carlo Simulation Analysis

Let’s move on to do some simulation analysis so we can get an understanding of the distribution of the expected Win Rate and Avg Trade PL for our two alternative models. We begin by coding a generator that produces a sample of 1,000 trades and calculates the Avg Trade PL and Win Rate.

Gaussian Model

GenWinRate[currentPrice_,annualVolatility_,BarSizeMins_, nTicksPT_, nTicksSL_,minMove_, tickValue_, costContract_]:=Module[{ nMinsPerDay, periodVolatility, randObs, tgtReturn, slReturn,tgtDollar, slDollar, nWins,nLosses, perTradePL, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
nMinsPerDay = 250*6.5*60;
periodVolatility = annualVolatility / Sqrt[nMinsPerDay/BarSizeMins];
tgtReturn=nTicksPT*minMove/currentPrice;tgtDollar = nTicksPT * tickValue;
slReturn = nTicksSL*minMove/currentPrice;
slDollar=nTicksSL*tickValue;
randObs=RandomVariate[NormalDistribution[0,periodVolatility],10^3];
nWins=Count[randObs,x_/;x>=tgtReturn];
nLosses=Count[randObs,x_/;x<=slReturn];
winRate=nWins/(nWins+nLosses)//N;
perTradePL=(nWins*tgtDollar+nLosses*slDollar)/(nWins+nLosses);{perTradePL,winRate}]

GenWinRate[1900,0.1,15,1,-24,0.25,12.50,3]

{7.69231,0.984615}

Now we can generate a random sample of 10,000 simulation runs and plot a histogram of the Win Rates, using, for example, ES on 5-min bars, with a PT of 2 ticks and SL of -20 ticks, assuming annual volatility of 15%.

Histogram[Table[GenWinRate[1900,0.15,5,2,-20,0.25,12.50,3][[2]],{i,10000}],10,AxesLabel->{"Exp. Win Rate (%)"}]

WinRateHist

Histogram[Table[GenWinRate[1900,0.15,5,2,-20,0.25,12.50,3][[1]],{i,10000}],10,AxesLabel->{"Exp. PL/Trade ($)"}]

PLHist

Extreme Value Distribution Model

Next we can do the same for the Extreme Value Distribution version of the model.

GenWinRateExtreme[currentPrice_,annualVolatility_,BarSizeMins_, nTicksPT_, nTicksSL_,minMove_, tickValue_, costContract_]:=Module[{ nMinsPerDay, periodVolatility, randObs, tgtReturn, slReturn,tgtDollar, slDollar, alpha, beta,nWins,nLosses, perTradePL, probWin, probLoss, winRate, expWinDollar, expLossDollar, expProfit},
nMinsPerDay = 250*6.5*60;
periodVolatility = annualVolatility / Sqrt[nMinsPerDay/BarSizeMins];
beta = Sqrt[6]*periodVolatility / Pi;
alpha=-EulerGamma*beta;
tgtReturn=nTicksPT*minMove/currentPrice;tgtDollar = nTicksPT * tickValue;
slReturn = nTicksSL*minMove/currentPrice;
slDollar=nTicksSL*tickValue;
randObs=RandomVariate[ExtremeValueDistribution[alpha, beta],10^3];
nWins=Count[randObs,x_/;x>=tgtReturn];
nLosses=Count[randObs,x_/;x<=slReturn];
winRate=nWins/(nWins+nLosses)//N;
perTradePL=(nWins*tgtDollar+nLosses*slDollar)/(nWins+nLosses);{perTradePL,winRate}]

Histogram[Table[GenWinRateExtreme[1900,0.15,5,2,-10,0.25,12.50,3][[2]],{i,10000}],10,AxesLabel->{"Exp. Win Rate (%)"}]

WinRateEVDHist

Histogram[Table[GenWinRateExtreme[1900,0.15,5,2,-10,0.25,12.50,3][[1]],{i,10000}],10,AxesLabel->{"Exp. PL/Trade ($)"}]

PLEVDHist

Conclusions

The key conclusions from this analysis are:

  1. Scalping is essentially a volatility trade
  2. The setting of optimal profit targets and stop loss limits depends critically on the volatility of the underlying, and needs to be handled dynamically, depending on current levels of market volatility
  3. At low levels of volatility we should set tight profit targets and wide stop loss limits, looking to make a high percentage of small gains, of perhaps 2-3 ticks.
  4. As volatility rises, we need to reverse that position, setting more ambitious profit targets and tight stops, aiming for the occasional big win.