Transcendental Spheres

One of the most beautiful equations in the whole of mathematics is Euler's identity:

$$e^{i\pi} + 1 = 0$$

which follows from Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$, evaluated at $\theta = \pi$.

I recently came across another beautiful mathematical concept that likewise relates the two transcendental numbers e and π.

We begin by reviewing the concept of a unit sphere, which in 3-dimensional space is the set of points described by the equation:

$$x^2 + y^2 + z^2 = 1$$

We can generate some random coordinates that satisfy the equation, to produce the expected result:
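
As a quick illustration, here is a minimal sketch in Python (not from the original post) that generates such points by normalizing standard Gaussian vectors:

```python
import numpy as np

# Sample points uniformly on the unit sphere by normalizing
# standard Gaussian vectors (a standard sampling trick).
rng = np.random.default_rng(42)
points = rng.standard_normal((1000, 3))
points /= np.linalg.norm(points, axis=1, keepdims=True)

# Every point satisfies x^2 + y^2 + z^2 = 1, up to rounding error.
assert np.allclose(np.sum(points**2, axis=1), 1.0)
```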

The equation above represents a 3-D unit sphere using the standard Euclidean norm. It can be generalized to produce a similar formula for an n-dimensional hypersphere:

$$\sum_{i=1}^{n} x_i^2 = 1$$

Another way to generalize the concept is by extending the Euclidean distance measure with what are referred to as p-norms, which define the L-p spaces:

$$\left(\sum_{i=1}^{n} |x_i|^p\right)^{1/p} = 1$$

The shape of a unit sphere in L-p space can take many different forms, including some that have “corners”. Here are some examples of 2-dimensional unit spheres for values of p in the range [0.25, 4]:
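
As a rough illustration of these shapes, the following Python sketch traces the 2-D unit sphere $|x|^p + |y|^p = 1$ for a few values of p:

```python
import numpy as np
import matplotlib.pyplot as plt

# Trace the 2-D unit "sphere" |x|^p + |y|^p = 1 for several p-norms.
theta = np.linspace(0, 2 * np.pi, 400)
for p in [0.25, 0.5, 1, 2, 4]:
    # Parametrize via signed powers of cos/sin so that the curve
    # satisfies |x|^p + |y|^p = 1 exactly.
    x = np.sign(np.cos(theta)) * np.abs(np.cos(theta)) ** (2 / p)
    y = np.sign(np.sin(theta)) * np.abs(np.sin(theta)) ** (2 / p)
    plt.plot(x, y, label=f"p = {p}")
plt.gca().set_aspect("equal")
plt.legend()
plt.title("Unit spheres under different p-norms")
plt.show()
```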

 

which can also be explored in the complex plane:

Reverting to the regular Euclidean metric, let’s focus on the n-dimensional unit hypersphere, whose volume is given by:

$$V(n) = \frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2}+1\right)}$$

To see this, note that the volume of the unit sphere in 2-D space is just the area enclosed by a unit circle, so that V(2) = π. Furthermore, slicing the n-dimensional unit sphere along its last two coordinates gives:

$$V(n) = V(n-2)\int_{x^2+y^2 \le 1} \left(1 - x^2 - y^2\right)^{\frac{n-2}{2}} dx\, dy$$

Hence we have the following recurrence relationship:

$$V(n) = \frac{2\pi}{n}\, V(n-2)$$

This recursion, together with the initial values V(1) = 2 and V(2) = π, allows us to prove the equation for the volume of the unit hypersphere, by induction.

The function V(n) takes a maximal value of 5.26 at n = 5 dimensions, thereafter declining rapidly towards zero:

 

In the limit, the volume of the n-dimensional unit hypersphere tends to zero:

$$\lim_{n \to \infty} V(n) = 0$$

Now, consider the sum of the volumes of the unit hypersphere in even dimensions, i.e. for n = 0, 2, 4, 6, …. Noting that $V(2k) = \pi^k / k!$, the first few terms of the sum are:

$$V(0) + V(2) + V(4) + V(6) + \cdots = 1 + \pi + \frac{\pi^2}{2!} + \frac{\pi^3}{3!} + \cdots$$

These are the initial terms of a well-known Maclaurin expansion, which in the limit produces the following remarkable result:

$$\sum_{k=0}^{\infty} V(2k) = \sum_{k=0}^{\infty} \frac{\pi^k}{k!} = e^{\pi}$$

In other words, the infinite sum of the volumes of the even-dimensional unit hyperspheres evaluates to a power relationship between the two most famous transcendental numbers. The result, known as Gelfond’s constant, is itself a transcendental number:

$$e^{\pi} \approx 23.14069\ldots$$
A High Frequency Market Making Algorithm

 

This algorithm builds on the research of Avellaneda and Stoikov in their 2008 paper “High-Frequency Trading in a Limit Order Book” and extends the basic algorithm (sketched below) in several ways:

  1. The algorithm makes two sided markets in a specified list of equities, with model parameters set at levels appropriate for each product.
  2. The algorithm introduces an automatic mechanism for managing inventory, reducing the risk of adverse selection by changing the rate of inventory accumulation dynamically.
  3. The algorithm dynamically adjusts the range of the bid-ask spread as the trading session progresses, with the aim of minimizing inventory levels on market close.
  4. The extended algorithm makes use of estimates of recent market trends and adjusts the bid-offer spread to lean in the direction of the trend.
  5. A manual adjustment factor allows the market-maker to nudge the algorithm in the direction of reducing inventory.
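
For context, the core of the Avellaneda–Stoikov model quotes around a reservation price that is skewed against accumulated inventory, with an optimal spread determined by risk aversion and order-book liquidity. A minimal Python sketch of those two formulas (all parameter values are illustrative, not taken from the original implementation):

```python
import math

def as_quotes(s, q, gamma, sigma, T_minus_t, k):
    """Avellaneda-Stoikov reservation price and bid/ask quotes.

    s: mid price, q: signed inventory, gamma: risk aversion,
    sigma: volatility, T_minus_t: time left in session,
    k: order-book liquidity parameter.
    """
    # Reservation price skews quotes against accumulated inventory.
    r = s - q * gamma * sigma ** 2 * T_minus_t
    # Optimal total spread around the reservation price.
    spread = gamma * sigma ** 2 * T_minus_t + (2 / gamma) * math.log(1 + gamma / k)
    return r - spread / 2, r + spread / 2

bid, ask = as_quotes(s=100.0, q=5, gamma=0.1, sigma=2.0, T_minus_t=0.5, k=1.5)
print(round(bid, 2), round(ask, 2))
```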

The algorithm is implemented in Mathematica, and can be compiled to create DLLs callable from within a C++ or Python application.

The application makes use of the MATH-TWS library to connect to the Interactive Brokers TWS or Gateway platform via the C++ API. MATH-TWS is used to create orders, manage positions and track account balances & P&L.

 

Reflections on Careers in Quantitative Finance

CMU’s MSCF Program

Carnegie Mellon’s Steve Shreve is out with an interesting post on careers in quantitative finance, with his commentary on the changing landscape in quantitative research and the implications for financial education.

I taught at Carnegie Mellon in the late 1990’s, including on its excellent Master’s program in quantitative finance that Steve co-founded with Sanjay Srivastava. The program was revolutionary in many ways, and was immediately successful and rapidly copied by rival graduate schools (I helped to spread the word a little, at Cambridge).

The core of the program remains largely unchanged over the last 20 years, featuring Steve’s excellent foundation course in stochastic calculus; but I am happy to see that the school has added many new and highly relevant topics to the second year syllabus, including market microstructure, machine learning, algorithmic trading and statistical arbitrage. This has broadened the program’s primary focus, which was originally financial engineering, to include coverage of subjects that are highly relevant to quantitative investment research and trading.

It was this combination of sound theoretical grounding with practitioner-oriented training that made the program so successful. As I recall, every single graduate was successful in finding a job on Wall Street, often at salaries in excess of $200,000, a considerable sum in those days. One of the key features of the program was that it combined theoretical concepts with practical training, using a simulated trading floor gifted by Thomson Reuters (a model later adopted by the ICMA Centre at the University of Reading in the UK). This enabled us to test students’ understanding of what they had been taught, using market simulation models that relied upon key theoretical ideas covered in the program. The constant reinforcement of the theoretical with the practical made for a much deeper learning experience for most students and greatly facilitated their transition to Wall Street.

Masters in High Frequency Finance

While CMU’s program has certainly evolved and remains highly relevant to the recruitment needs of Wall Street firms, I still believe there is an opportunity for a program focused exclusively on high frequency finance, as previously described in this post. The MHFF program would be more computer science oriented, with less emphasis placed on financial engineering topics. So, for instance, students would learn about trading hardware and infrastructure, the principles of efficient algorithm design, as well as HFT trading techniques such as order layering and priority management. The program would also cover HFT strategies such as latency arbitrage, market making, and statistical arbitrage. Students would learn both lower-level (C++, Java) and higher-level (Matlab, R) programming languages, and there is a good case for a mandatory machine-code programming course as well. Other core courses might include stochastic calculus and market microstructure.

Who would run such a program? The ideal school would have a reputation for excellence in both finance and computer science. CMU is an obvious candidate, as is MIT, but there are many other excellent possibilities.

Careers

I’ve been involved in quantitative finance since the beginning: I recall programming one of the first 68000-based microcomputers in Assembler in the 1980s, which was ultimately used for an F/X system at a major UK bank. The ensuing rapid proliferation of quantitative techniques in finance has been fueled by the ubiquity of cheap computing power, facilitating the deployment of quantitative techniques that would previously have been impractical to implement due to their complexity. A good example is the machine learning techniques that now pervade large swathes of the finance arena, from credit scoring to HFT. When I first began working in that field in the early 2000’s, it was necessary to assemble a fairly sizable cluster of CPUs to handle the computational load. These days you can access comparable levels of computational power on a single server and, if you need more, you can easily scale up via Azure or EC2.

It is this explosive growth in computing power that has driven the development of quantitative finance in both the financial engineering and quantitative investment disciplines. At the same time, the huge reduction in the cost of computing power has leveled the playing field and lowered barriers to entry. What was once the exclusive preserve of the sell-side has now become readily available to many buy-side firms. As a consequence, much of the growth in employment opportunities in quantitative finance over the last 20 years has been on the buy-side, with the arrival of quantitative hedge funds and proprietary trading firms, including my own, Systematic Strategies. This trend has a long way to play out so that, when also taking into consideration the increasing restrictions that sell-side firms face in terms of their proprietary trading activity, I am inclined to believe that the buy-side will offer the best employment opportunities for quantitative financiers over the next decade.

It was often said that hedge fund managers are typically in their 30’s or 40’s when they make the move to the buy-side. That has changed in the last 15 years, again driven by the developments in technology.  These days you are more likely to find the critically important technical skills in younger candidates, in their late 20’s or early 30’s.  My advice to those looking for a career in quantitative finance, who are unable to find the right job opportunity, would be: do what every other young person in Silicon Valley is doing:  join a startup, or start one yourself.

 

Trading the Presidential Election

There is a great deal of market lore related to the US presidential elections.  It is generally held that elections are good for the market,  regardless of whether the incoming president is Democrat or Republican.   To examine this thesis, I gathered data on presidential elections since 1950, considering only the first term of each newly elected president.  My reason for considering first terms only was twofold:  firstly, it might be expected that a new president is likely to exert a greater influence during his initial term in office and secondly, the 2016 contest will likewise see the appointment of a new president (rather than the re-election of a current one).

Market Performance Post Presidential Elections

The table below shows the 11 presidential races considered, with sparklines summarizing the cumulative return in the S&P 500 Index in the 12 month period following the start of the presidential term of office.  The majority are indeed upward sloping, as is the overall average.

[Figure 1]

A more detailed picture emerges from the following chart.  It transpires that the generally positive “presidential effect” is due overwhelmingly to the stellar performance of the market during the first year of the Gerald Ford and Barack Obama presidencies.  In both cases presidential elections coincided with the market nadir following, respectively, the 1973 oil crisis and 2008 financial crisis, after which  the economy staged a strong recovery.

[Figure 2]

Democrat vs. Republican Presidencies

There is a marked difference in the average market performance during the first year of a Democratic presidency vs. a Republican presidency.  Doubtless, plausible explanations for this disparity are forthcoming from both political factions.  On the Republican side, it could be argued that Democratic presidents have benefitted from the benign policies of their (often) Republican  predecessors, while incoming Republican presidents have had to clean up the mess left to them by their Democratic predecessors.  Democrats would no doubt argue that the market, taking its customary forward view, tends to react favorably to the prospect of a more enlightened, liberal approach to the presidency (aka more government spending).


Market Performance Around the Start of Presidential Terms

I shall leave such political speculations to those interested in pursuing them and instead focus on matters  of a more apolitical nature.  Specifically, we will look at the average market returns during the twelve months leading up to the start of a new presidential term, compared to the average returns in the twelve months after the start of the term.  The results are as follows:

[Figure 3]

The twelve months leading up to the start of the presidential term are labelled -12, -11, …, -1, while the following twelve months are labelled 1, 2, … , 12.  The start of the term is designated as month zero, while months that fall outside the 24 month period around the start of a presidential term are labelled as month 13.

The key finding stands out clearly from the chart: namely, that market returns during the start month of a new presidential term are distinctly negative, averaging -3.3%, while returns in the first month after the start of the term are distinctly positive, averaging 2.81%.

Assuming that market returns are approximately Normally distributed, a standard t-test rejects the null hypothesis of no difference in the means of the month 0 and month 1 returns, at the 2% significance level. In other words, the “presidential effect” is both large and statistically significant.
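
The test itself is easy to reproduce. A Python sketch, where the two return vectors are hypothetical placeholders for the actual month 0 and month 1 samples:

```python
import numpy as np
from scipy import stats

# Hypothetical monthly S&P 500 returns for the 11 first-term elections:
# month 0 = the month a new presidential term begins, month 1 = the next.
# (Illustrative numbers only -- substitute the actual return series.)
month0 = np.array([-0.052, -0.031, -0.048, -0.012, -0.067, -0.025,
                   -0.041, -0.003, -0.055, -0.019, -0.010])
month1 = np.array([0.034, 0.021, 0.045, 0.012, 0.038, 0.019,
                   0.052, 0.008, 0.041, 0.015, 0.024])

# Two-sample t-test of the difference in mean returns.
t_stat, p_value = stats.ttest_ind(month0, month1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```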

Conclusion: Trading the Election

Given the array of candidates before the electorate this election season, I am strongly inclined to take the trade. The market will certainly “feel the Bern” in the unlikely event that Bernie Sanders is elected president. I can even make an argument for a month 1 recovery, when the market realizes that there are limits to how much economic damage even a Socialist president can do, given constitutional checks and balances, “pen and phone” notwithstanding.

Again, an incoming president Trump is likely to be greeted by a sharp market sell-off, based on jittery speculation about the Donald’s proclivity to start a trade war with China, or Mexico, or a ground war with Russia, Iran, or anyone else. Likewise, however, the market will fairly quickly come around to the realization that electioneering rhetoric is unlikely to provide much guidance as to what a president Trump is likely to do in practice.

A Hillary Clinton presidency is likely to be seen, ex-ante, as the most benign for the market, especially given the level of (financial) support she has received from Wall Street.  However, there’s a glitch:  Bernie is proving much tougher to shake off than she could ever have anticipated. In order to win over his supporters, she is going to have to move out of the center ground, towards the left.  Who knows what hostages to fortune a desperate Clinton is likely to have to offer the election gods in her bid to secure the White House?

In terms of the mechanics, while you could take the trade in ETFs or futures, this is one of those situations ideally suited to options, and I am inclined to suggest combining a front-month put spread with a back-month call spread.

 

Yes, You Can Time the Market. How it Works, And Why

One of the most commonly cited maxims is that market timing is impossible.  In fact, empirical evidence makes a compelling case that market timing is feasible and can yield substantial economic benefits.  What’s more, we even understand why it works.  For the typical portfolio investor, applying simple techniques to adjust their market exposure can prevent substantial losses during market downturns.

The Background From Empirical and Theoretical Research

For the last fifty years, since the work of Paul Samuelson, the prevailing view amongst economists has been that markets are (mostly) efficient and follow a random walk. Empirical evidence to the contrary was mostly regarded as anomalous and/or economically unimportant. Over time, however, evidence has accumulated that exploitable market effects may persist. The famous 1992 paper published by Fama and French, for example, identified important economic effects in stock returns due to size and value factors, while Carhart (1997) demonstrated the important incremental effect of momentum. The combined four-factor Carhart model explains around 50% of the variation in stock returns, but leaves a large proportion that cannot be accounted for.
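
For reference, the Carhart model regresses a stock's excess return on the market, size, value and momentum factors:

$$r_{i,t} - r_{f,t} = \alpha_i + \beta_i^{MKT}\,MKT_t + \beta_i^{SMB}\,SMB_t + \beta_i^{HML}\,HML_t + \beta_i^{MOM}\,MOM_t + \varepsilon_{i,t}$$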

Other empirical studies have provided evidence that stock returns are predictable at various frequencies. Important examples include work by Brock, Lakonishok and LeBaron (1992), Pesaran and Timmermann (1995) and Lo, Mamaysky and Wang (2000), who provide further evidence, using a range of technical indicators popular among traders, that technical analysis adds value even at the individual stock level, over and above the performance of a stock index. The research in these and other papers tends to be exceptional in terms of both quality and comprehensiveness, as one might expect from academics risking their reputations in taking on established theory. The appendix of test results to the Pesaran and Timmermann study, for example, is so lengthy that it is available only in CD-ROM format.

A more recent example is the work of Paskalis Glabadanidis, in a 2012 paper entitled Market Timing with Moving Averages. Glabadanidis examines a simple moving average strategy that, he finds, produces economically and statistically significant alphas of 10% to 15% per year, after transaction costs, which are largely insensitive to the four Carhart factors.

Glabadanidis reports evidence regarding the profitability of the MA strategy in seven international stock markets. The performance of the MA strategies also holds for more than 18,000 individual stocks. He finds that:

“The substantial market timing ability of the MA strategy appears to be the main driver of the abnormal returns.”

An Illustration of a Simple Market Timing Strategy in SPY

It is impossible to do justice to Glabadanidis’s research in a brief article and the interested reader is recommended to review the paper in full.  However, we can illustrate the essence of the idea using the SPY ETF as an example.   

A 24-period moving average of the monthly price series over the period from 1993 to 2016 is plotted in red in the chart below.

[Figure 1]

The moving average indicator is used to time the market using the following simple rule:

if $P_t \ge MA_t$, invest in SPY in month t+1

if $P_t < MA_t$, invest in T-Bills in month t+1

In other words, we invest or remain invested in SPY when the monthly closing price of the ETF lies at or above the 24-month moving average, otherwise we switch our investment to T-Bills.

The process of switching our investment will naturally incur transaction costs and these are included in the net monthly returns.
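
A minimal Python sketch of the rule, using simulated monthly data as a stand-in for SPY (the flat T-bill yield and per-switch cost are simplifying assumptions):

```python
import numpy as np
import pandas as pd

# Simulated monthly returns stand in for SPY; substitute real data.
rng = np.random.default_rng(0)
rets = pd.Series(rng.normal(0.008, 0.04, 288))         # 24 years, monthly
prices = 100 * (1 + rets).cumprod()

ma = prices.rolling(24).mean()
tbill = 0.002                                          # assumed flat monthly T-bill yield
cost = 0.001                                           # assumed cost per switch (10 bp)

# The month t signal determines the month t+1 position.
in_market = (prices >= ma).shift(1, fill_value=False)
switched = in_market != in_market.shift(1, fill_value=False)
strat = np.where(in_market, rets, tbill) - switched * cost

cagr = (1 + pd.Series(strat)).prod() ** (12 / len(strat)) - 1
print(f"strategy CAGR: {cagr:.2%}")
```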

The outcome of the strategy in terms of compound growth is compared to the original long-only SPY investment in the following chart.

[Figure 2]

The market timing strategy outperforms the long-only ETF, with a CAGR of 16.16% vs. 14.75% (net of transaction costs), largely due to its avoidance of the major market sell-offs in 2000-2003 and 2008-2009.

But the gain isn’t limited to a 141bp improvement in annual compound returns. The chart below compares the distributions of monthly returns in the SPY ETF and the market timing strategy.

[Figure 3]

It is clear that, in addition to a higher average monthly return, the market timing strategy has lower dispersion in the distribution of returns. This leads to a significantly higher information ratio for the strategy compared to the long-only ETF. Nor is that all: the market timing strategy also has higher skewness and kurtosis, both desirable features.

[Figure 4]

These results are entirely consistent with Glabadanidis’s research.  He finds that the performance of the market timing strategy is robust to different lags of the moving average and in subperiods, while investor sentiment, liquidity risks, business cycles, up and down markets, and the default spread cannot fully account for its performance. The strategy works just as well with randomly generated returns and bootstrapped returns as it does for the more than 18,000 stocks in the study.

A follow-up study by the author applying the same methodology to a universe of 20 REIT indices and 274 individual REITs reaches largely similar conclusions.

Why Market Timing Works

For many investors, empirical evidence – compelling though it may be – is not enough to make market timing a credible strategy, absent some kind of “fundamental” explanation of why it works. Unusually, in the case of the simple moving average strategy, such an explanation is possible.

It was Cox, Ross and Rubinstein who in 1979 developed the binomial model as a numerical method for pricing options. The methodology relies on the concept of option replication, in which one constructs a portfolio comprising holdings of the underlying stock and bonds to produce the same cash flows as the option at every point in time (the proportion of stock to hold is given by the option delta). Since the replicating portfolio produces the same cash flows as the option, it must have the same value; and since one knows the price of the stock and the bond at each point in time, one can therefore price the option. For those interested in the detail, Wikipedia gives a detailed explanation of the technique.

We can apply the concept of option replication to construct something very close to the MA market timing strategy, as follows. Consider what happens when the ETF falls below the moving average level. In that case we convert the ETF portfolio to cash and use the proceeds to acquire T-Bills. An equivalent outcome would be achieved by continuing to hold our long ETF position and acquiring a put option to hedge it. The combination of a long ETF position and a 1-month put option with delta of -1 would provide the same riskless payoff as the market timing strategy, i.e. the return on 30-day T-Bills. An option in which the strike price is based on the average price of the underlying is known as an Arithmetic Asian option. Hence, when we apply the MA timing strategy, we are effectively constructing a dynamic portfolio that replicates the payoff of an Arithmetic Asian protective put struck at (just above) the moving average level.

Market Timing Alpha and The Cost of Hedging

None of this explanation is particularly contentious – the theory behind option replication through dynamic hedging is well understood – and it provides a largely complete understanding of the way the MA market timing strategy works, one that should satisfy those who are otherwise unpersuaded by arguments purely from empirical research.

There is one aspect of the foregoing description that remains a puzzle, however.  An option is a valuable financial instrument and the owner of a protective put of the kind described can expect to pay a price amounting to tens or perhaps hundreds of basis points.  Of course, in the market timing strategy we are not purchasing a put option per se, but creating one synthetically through dynamic replication.  The cost of creating this synthetic equivalent comprises the transaction costs incurred as we liquidate and re-assemble our portfolio from month to month, in the form of bid/ask spread and commissions.  According to efficient market theory, one should be indifferent as to whether one purchases the option at a fair market price or constructs it synthetically through replication – the cost should be equivalent in either case.  And yet in empirical tests the cost of the synthetic protective put falls far short of what one would expect to pay for an equivalent option instrument.  This is, in fact, the source of the alpha in the market timing strategy.

According to efficient market theory one might expect to pay something of the order of 140 basis points a year in transaction costs – the difference between the CAGR of the market timing strategy and the SPY ETF – in order to construct the protective put.  Yet, we find that no such costs are incurred.

Now, it might be argued that there is a hidden cost not revealed in our simple study of a market timing strategy applied to a single underlying ETF, which is the potential costs that could be incurred if the ETF should repeatedly cross and re-cross the level of the moving average, month after month.  In those circumstances the transaction costs would be much higher than indicated here.  The fact that, in a single example, such costs do not arise does not detract in any way from the potential for such a scenario to play out. Therefore, the argument goes, the actual costs from the strategy are likely to prove much higher over time, or when implemented for a large number of stocks.

All well and good, but this is precisely the scenario that Glabadanidis’s research addresses, by examining the outcomes, not only for tens of thousands of stocks, but also using a large number of scenarios generated from random and/or bootstrapped returns.  If the explanation offered did indeed account for the hidden costs of hedging, it would have been evident in the research findings.

Instead, Glabadanidis concludes:

“This switching strategy does not involve any heavy trading when implemented with break-even transaction costs, suggesting that it will be actionable even for small investors.”

Implications For Current Market Conditions

As at the time of writing, in mid-February 2016, the price of the SPY ETF remains just above the 24-month moving average level.  Consequently the market timing strategy implies one should continue to hold the market portfolio for the time being, although that could change very shortly, given recent market action.

Conclusion

The empirical evidence that market timing strategies produce significant alphas is difficult to challenge.  Furthermore, we have reached an understanding of why they work, from an application of widely accepted option replication theory. It appears that using a simple moving average to time market entries and exits is approximately equivalent to hedging a portfolio with a protective Arithmetic Asian put option.

What remains to be answered is why the cost of constructing put protection synthetically is so low. In the meantime, the research indicates that market timing strategies are consequently able to generate alphas of 10% to 15% per annum.

References

  1. Brock, W., Lakonishok, J., LeBaron, B., 1992, “Simple Technical Trading Rules and the Stochastic Properties of Stock Returns,” Journal of Finance 47, pp. 1731-1764.
  2. Carhart, M. M., 1997, “On Persistence in Mutual Fund Performance,” Journal of Finance 52, pp. 57-82.
  3. Fama, E. F., French, K. R., 1992, “The Cross-Section of Expected Stock Returns,” Journal of Finance 47(2), pp. 427-465.
  4. Glabadanidis, P., 2012, “Market Timing with Moving Averages,” 25th Australasian Finance and Banking Conference.
  5. Glabadanidis, P., 2012, “The Market Timing Power of Moving Averages: Evidence from US REITs and REIT Indexes,” University of Adelaide Business School.
  6. Lo, A., Mamaysky, H., Wang, J., 2000, “Foundations of Technical Analysis: Computational Algorithms, Statistical Inference, and Empirical Implementation,” Journal of Finance 55, pp. 1705-1765.
  7. Pesaran, M. H., Timmermann, A. G., 1995, “Predictability of Stock Returns: Robustness and Economic Significance,” Journal of Finance 50(4).

Profit Margins – Are they Predicting a Crash?

Jeremy Grantham: A Bullish Bear

Is Jeremy Grantham, co-founder and CIO of GMO, bullish or bearish these days?  According to Myles Udland at Business Insider, he’s both.  He quotes Grantham:

“I think the global economy and the U.S. in particular will do better than the bears believe it will because they appear to underestimate the slow-burning but huge positive of much-reduced resource prices in the U.S. and the availability of capacity both in labor and machinery.”


Udland continues:

“On top of all this is the decline in profit margins, which Grantham has called the “most mean-reverting series in finance,” implying that the long period of elevated margins we’ve seen from American corporations is most certainly going to come an end. And soon. “

[Figure: St Louis Fed chart of corporate profit margins]

Corporate Profit Margins as a Leading Indicator

The claim is an interesting one.  It certainly looks as if corporate profit margins are mean-reverting and, possibly, predictive of recessionary periods. And there is an economic argument why this should be so, articulated by Grantham as quoted in an earlier Business Insider article by Sam Ro:

“Profit margins are probably the most mean-reverting series in finance, and if profit margins do not mean-revert, then something has gone badly wrong with capitalism.

If high profits do not attract competition, there is something wrong with the system and it is not functioning properly.”

Thomson Research / Barclays Research’s take on the same theme echoes Grantham:

“The link between profit margins and recessions is strong,” Barclays’ Jonathan Glionna writes in a new note to clients. “We analyze the link between profit margins and recessions for the last seven business cycles, dating back to 1973. The results are not encouraging for the economy or the market. In every period except one, a 0.6% decline in margins in 12 months coincided with a recession.”

[Figure: Barclays analysis of profit margins and recessions]

Buffett Weighs in

Even Warren Buffett gets in on the act (from 1999):

“In my opinion, you have to be wildly optimistic to believe that corporate profits as a percent of GDP can, for any sustained period, hold much above 6%.”


With the Illuminati chorusing as one on the perils of elevated rates of corporate profits, one would be foolish to take a contrarian view, perhaps.  And yet, that claim of Grantham’s (“probably the most mean-reverting series in finance”) poses a challenge worthy of some analysis.  Let’s take a look.

The Predictive Value of Corporate Profit Margins

First, let’s reproduce the St Louis Fed chart:

[Figure: Corporate Profit Margins]

A plot of the series autocorrelations strongly suggests that the series is not mean-reverting at all, but non-stationary, integrated of order 1:

[Figure: Autocorrelations]

 

Next, we conduct an exhaustive evaluation of a wide range of time series models, including seasonal and non-seasonal ARIMA and GARCH:

[Figure: Model fit results]

The best fitting model (using the AIC criterion) is a simple ARIMA(0,1,0) model – integrated of order 1, as anticipated. The series is apparently difference-stationary, with no mean-reversion characteristics at all. Diagnostic tests indicate no significant patterning in the model residuals:

[Figure: Residual Autocorrelations]

[Figure: Ljung-Box Test Probabilities]
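
A sketch of this evaluation in Python, using statsmodels, with a simulated random walk standing in for the actual profit-margin series:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

# `margins` would hold the quarterly corporate-profits-to-GDP ratio
# (e.g. from FRED); simulated random-walk data stands in here.
rng = np.random.default_rng(1)
margins = pd.Series(0.08 + np.cumsum(rng.normal(0, 0.002, 200)))

# ARIMA(0,1,0): a random walk, i.e. difference-stationary with no
# mean-reversion. Compare AIC against richer specifications.
model = ARIMA(margins, order=(0, 1, 0)).fit()
print(model.aic)

# Ljung-Box test for residual autocorrelation.
print(acorr_ljungbox(model.resid, lags=[8]))
```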

Using the model to forecast a range of possible values of the Corporate Profit to GDP ratio over the next 8 quarters suggests a very wide range, from as low as 6% to as high as 13%!

[Figure: Forecast]

 

CONCLUSION

The opinions of investment celebrities like Grantham and Buffett notwithstanding, there really isn’t any evidence in the data to support the suggestion that corporate profit margins are mean reverting, even though common-sense economics suggests they should be.

The best-available econometric model produces a very wide range of forecasts of corporate profit rates over the next two years, some even higher than they are today.

If a recession is just around the corner,  corporate profit margins aren’t going to call it for us.

Alpha Extraction and Trading Under Different Market Regimes

Market Noise and Alpha Signals

One of the perennial problems in designing trading systems is noise in the data, which can often drown out an alpha signal. This in turn creates difficulties for a trading system that relies on reading the signal, resulting in greater uncertainty about the trading outcome (i.e. greater volatility in system performance). According to academic research, a great deal of market noise is caused by trading itself. There is apparently not much that can be done about that problem: sure, you can trade after hours or overnight, but the benefit of lower signal contamination from noise traders is offset by the disadvantage of poor liquidity. Hence the thrust of most of the analysis in this area lies in the direction of trying to amplify the signal, often using techniques borrowed from signal processing and related engineering disciplines.

There is, however, one trick that I wanted to share with readers that is worth considering.  It allows you to trade during normal market hours, when liquidity is greatest, but at the same time limits the impact of market noise.


Quantifying Market Noise

How do you measure market noise? One simple approach is to start by measuring market volatility, making the not-unreasonable assumption that higher levels of volatility are associated with greater amounts of random movement (i.e. noise). Conversely, when markets are relatively calm, a greater proportion of the variation is caused by alpha factors. During the latter periods, there is a greater information content in market data – the signal:noise ratio is larger and hence the alpha signal can be quantified and captured more accurately.

For a market like the E-Mini futures, the variation in daily volatility is considerable, as illustrated in the chart below.  The median daily volatility is 1.2%, while the maximum value (in 2008) was 14.7%!

[Figure 1]

The extremely long tail of the distribution stands out clearly in the following histogram plot.

[Figure 2]

Obviously there are times when the noise in the process is going to drown out almost any alpha signal. What if we could avoid such periods?

Noise Reduction and Model Fitting

Let’s divide our data into two subsets of equal size, comprising days on which volatility was lower, or higher, than the median value.  Then let’s go ahead and use our alpha signal(s) to fit a trading model, using only data drawn from the lower volatility segment.

This is actually a little tricky to achieve in practice: most software packages for time series analysis or charting are geared towards data occurring at equally spaced points in time. One useful trick here is to replace the actual date and time values of the observations with sequential date and time values, in order to fool the software into accepting the data, since there are no longer any gaps in the timestamps. Of course, the dates on our time series plot or chart will be incorrect, but that doesn’t matter, as long as we know what the correct timestamps are.
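
A rough Python sketch of the trick, with synthetic data standing in for real E-mini 3-minute bars:

```python
import numpy as np
import pandas as pd

# Synthetic 3-minute bars as a placeholder for real E-mini data.
rng = np.random.default_rng(7)
idx = pd.date_range("2015-01-05 09:30", periods=500, freq="3min")
bars = pd.DataFrame({"close": 2000 + rng.normal(0, 1, 500).cumsum()}, index=idx)

# Daily realized volatility from bar-to-bar returns.
daily_vol = bars["close"].pct_change().groupby(bars.index.date).std()
quiet_days = daily_vol[daily_vol <= daily_vol.median()].index

# Keep only quiet-day bars; remember the true timestamps in a column,
# then re-index onto an artificial, evenly spaced timeline so the
# software sees no gaps.
quiet = bars[np.isin(bars.index.date, list(quiet_days))].copy()
quiet["true_time"] = quiet.index
quiet.index = pd.date_range("2000-01-01", periods=len(quiet), freq="3min")
```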

An example of such a system is illustrated below. The model was fitted to 3-minute bar data in E-mini futures, but only on days with market volatility below the median value, in the period from 2004 to 2015. The strategy equity curve is exceptionally smooth, as might be expected, and the performance characteristics of the strategy are highly attractive, with a 27% annual rate of return, a profit factor of 1.58 and a Sharpe Ratio approaching double digits.

[Figure 3]

[Figure 4]

Dealing with the Noisy Trading Days

Let’s say you have developed a trading system that works well on quiet days.  What next?  There are a couple of ways to go:

(i) Deploy the model only on quiet trading days; stay out of the market on volatile days; or

(ii) Develop a separate trading system to handle volatile market conditions.

Which approach is better?  It is likely that the system you develop for trading quiet days will outperform any system you manage to develop for volatile market conditions.  So, arguably, you should simply trade your best model when volatility is muted and avoid trading at other times.  Any other solution may reduce the overall risk-adjusted return.  But that isn’t guaranteed to be the case – and, in fact, I will give an example of systems that, when combined, will in practice yield a higher information ratio than any of the component systems.

Deploying the Trading Systems

The astute reader is likely to have noticed that I have “cheated” by using forward information in the model development process.  In building a trading system based only on data drawn from low-volatility days, I have assumed that I can somehow know in advance whether the market is going to be volatile or not, on any given day.  Of course, I don’t know for sure whether the upcoming session is going to be volatile and hence whether to deploy my trading system, or stand aside.  So is this just a purely theoretical exercise?  No, it’s not, for the following reasons.

The first reason is that, unlike the underlying asset market, the market volatility process is, by comparison, highly predictable.  This is due to a phenomenon known as “long memory”, i.e. very slow decay in the serial autocorrelations of the volatility process.  What that means is that the history of the volatility process contains useful information about its likely future behavior.  [There are several posts on this topic in this blog – just search for “long memory”].  So, in principle, one can develop an effective system to forecast market volatility in advance and hence make an informed decision about whether or not to deploy a specific model.
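
To illustrate, a small Python simulation: absolute returns from a GARCH-style process (a crude stand-in for realized volatility) remain autocorrelated at long lags, while the raw returns do not:

```python
import numpy as np
import pandas as pd

# Simulate a GARCH(1,1)-style return series (illustrative parameters).
rng = np.random.default_rng(3)
n = 2500
vol = np.empty(n); vol[0] = 0.01
rets = np.empty(n); rets[0] = 0.0
for t in range(1, n):
    vol[t] = np.sqrt(1e-6 + 0.09 * rets[t - 1] ** 2 + 0.90 * vol[t - 1] ** 2)
    rets[t] = vol[t] * rng.standard_normal()

# Autocorrelations of the volatility proxy decay only slowly with lag.
absret = pd.Series(np.abs(rets))
print([round(absret.autocorr(lag), 3) for lag in (1, 5, 20, 60)])
print([round(pd.Series(rets).autocorr(lag), 3) for lag in (1, 5, 20, 60)])
```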

But let’s say you are unpersuaded by this argument and take the view that market volatility is intrinsically unpredictable.  Does that make this approach impractical?  Not at all.  You have a couple of options:

You can test the model built for quiet days on all the market data, including volatile days.  It may perform acceptably well across both market regimes.

For example, here are the results of a backtest of the model described above on all the market data, including volatile and quiet periods, from 2004-2015.  While the performance characteristics are not quite as good, overall the strategy remains very attractive.

[Figure 5]

[Figure 6]

 

Another approach is to develop a second model for volatile days and deploy both low- and high-volatility regime models simultaneously. The trading systems will interact (if you allow them to) in a highly nonlinear and unpredictable way. It might turn out badly – but on the other hand, it might not! Here, for instance, is the result of combining low- and high-volatility models for the E-mini futures and running them in parallel. The result is an improvement (relative to the low-volatility model alone), not only in the annual rate of return (21% vs 17.8%), but also in the risk-adjusted performance, profit factor and average trade.

[Figure 7]

[Figure 8]

 

CONCLUSION

Separating the data into multiple subsets representing different market regimes allows the system developer to amplify the signal:noise ratio, increasing the effectiveness of his alpha factors. Potentially, this allows important features of the underlying market dynamics to be captured in the model more easily, which can lead to improved trading performance.

Models developed for different market regimes can be tested across all market conditions and deployed on an everyday basis if shown to be sufficiently robust.  Alternatively, a meta-strategy can be developed to forecast the market regime and select the appropriate trading system accordingly.

Finally, it is possible to achieve acceptable, or even very good results, by deploying several different models simultaneously and allowing them to interact, as the market moves from regime to regime.

 

How to Make Money in a Down Market

The popular VIX blog Vix and More evaluates the performance of the VIX ETFs (actually ETNs) and concludes that all of them lost money in 2015.  Yes, both long volatility and short volatility products lost money!

VIX ETP performance in 2015

Source:  Vix and More

By contrast, our Volatility ETF strategy had an exceptional year in 2015, making money in every month but one:

Monthly Pct Returns

How to Profit in a Down Market

How do you make money when every product you are trading loses money?  Obviously you have to short one or more of them.  But that can be a very dangerous thing to do, especially in a product like the VIX ETNs.  Volatility itself is very volatile – it has an annual volatility (the volatility of volatility, or VVIX) that averages around 100% and which reached a record high of 212% in August 2015.

[Figure: The CBOE VVIX Index]

Selling products based on such a volatile instrument can be extremely hazardous – even in a downtrend: the counter-trends are often extremely violent, making a short position challenging to maintain.

Relative value trading is a more conservative approach to the problem. Here, rather than trading a single product, you trade a pair, or a basket of them. Your bet is that the ETFs (or stocks) you are long will outperform the ETFs you are short. Even if the ETFs you favor decline, you can still make money, provided the ETFs you are short decline even more.
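
A toy numerical example of the idea (the numbers are made up for illustration):

```python
# Both legs lose money outright, but the short leg loses more,
# so the relative-value position profits.
long_ret = -0.10          # long ETF falls 10%
short_ret = -0.25         # short ETF falls 25%

# Equal-dollar long/short: earn the long return, pay the short return.
portfolio_ret = long_ret - short_ret
print(f"{portfolio_ret:+.1%}")   # +15.0%
```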

This is the basis for the original concept of hedge funds, as envisaged by Alfred Jones in the 1940’s, and underpins the most popular hedge fund strategy, equity long-short.  But what works successfully in equities can equally be applied to other markets, including volatility.  In fact, I have argued elsewhere that the relative value (long/short) concept works even better in volatility markets, chiefly because the correlations between volatility processes tend to be higher than the correlations between the underlying asset processes (see The Case for Volatility as an Asset Class).

 

Overnight Trading in the E-Mini S&P 500 Futures

Jeff Swanson’s Trading System Success web site is often worth a visit for those looking for new trading ideas.

A recent post, Seasonality S&P Market Session, caught my eye, as I have investigated several ideas for overnight trading in the E-minis. Seasonal effects are of course widely recognized and traded in commodities markets, but they can also apply to financial products such as the E-mini. Jeff’s point about session times is well-made: it is often worthwhile to look at the behavior of an asset, not only in different time frames, but also during different periods of the trading day, day of the week, or month of the year.

Jeff breaks the E-mini trading session into several basic sub-sessions:

  1. “Pre-Market”: between 5:30 and 8:30
  2. “Open”: between 8:30 and 9:00
  3. “Morning”: between 9:00 and 11:30
  4. “Lunch”: between 11:30 and 13:15
  5. “Afternoon”: between 13:15 and 14:00
  6. “Close”: between 14:00 and 15:15
  7. “Post-Market”: between 15:15 and 18:00
  8. “Night”: between 18:00 and 5:30

In his analysis Jeff’s strategy is simply to buy at the open of the session and close that trade at the conclusion of the session. This mirrors the traditional seasonality study where a trade is opened at the beginning of the season and closed several months later when the season comes to an end.
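
A rough Python sketch of this kind of session study, with synthetic intraday bars standing in for real E-mini data (session boundaries as listed above; the “Night” session spans midnight and would need date-shifted grouping, so it is omitted here):

```python
import numpy as np
import pandas as pd

# Session boundaries from the list above (the "Night" session spans
# midnight and needs special handling, so it is omitted in this sketch).
sessions = {
    "Pre-Market": ("05:30", "08:30"), "Open": ("08:30", "09:00"),
    "Morning": ("09:00", "11:30"),    "Lunch": ("11:30", "13:15"),
    "Afternoon": ("13:15", "14:00"),  "Close": ("14:00", "15:15"),
    "Post-Market": ("15:15", "18:00"),
}

# Synthetic 15-minute bars as a placeholder for real E-mini data.
rng = np.random.default_rng(5)
idx = pd.date_range("2015-11-02", periods=4 * 24 * 20, freq="15min")
bars = pd.DataFrame({"close": 2000 + rng.normal(0, 0.5, len(idx)).cumsum()},
                    index=idx)

# Buy each session's open, sell its close, and average the session return.
for name, (start, end) in sessions.items():
    sess = bars.between_time(start, end)
    daily = sess.groupby(sess.index.date)["close"].agg(["first", "last"])
    rets = daily["last"] / daily["first"] - 1
    print(f"{name:12s} mean session return: {rets.mean():+.4%}")
```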

Evaluating Overnight Session and Seasonal Effects

The analysis evaluates the performance of this basic strategy during the “bullish season”, from Nov-May, when the equity markets traditionally make the majority of their annual gains, compared to the outcome during the “bearish season” from Jun-Oct.

None of the outcomes of these tests is especially noteworthy, save one:  the performance during the overnight session in the bullish season:

[Figure 1]

The tendency of the overnight session in the E-mini to produce clearer trends and trading signals has been well documented.  Plausible explanations for this phenomenon are that:

(a) The returns process in the overnight session is less contaminated with noise, which primarily results from trading activity; and/or

(b) The relatively poor liquidity of the overnight session allows participants to push the market in one direction more easily.

Either way, there is no denying that this study and several other, similar studies appear to demonstrate interesting trading opportunities in the overnight market.

That is, until trading costs are considered.  Results for the trading strategy from Nov 1997-Nov 2015 show a gain of $54,575, but an average trade of only just over $20:

| Gross PL | # Trades | Av Trade |
| --- | --- | --- |
| $54,575 | 2701 | $20.21 |

Assuming that we enter and exit aggressively, buying at the market at the start of the session and selling MOC at the close, we will pay the bid-offer spread and commissions amounting to around $30, producing a net loss of $10 per trade.

The situation can be improved by omitting January from the “bullish season”, but the slightly higher average trade is still insufficient to overcome trading costs:

| Gross PL | # Trades | Av Trade |
| --- | --- | --- |
| $54,550 | 2327 | $23.44 |


Designing a Seasonal Trading Strategy for the Overnight Session

At this point an academic research paper might conclude that the apparently anomalous trading profits are subsumed within the bid-offer spread.  But for a trading system designer this is not the end of the story.

If the profits are insufficient to overcome trading frictions when we cross the spread on entry and exit, what about a trading strategy that permits market orders on only the exit leg of the trade, while using limit orders to enter?  Total trading costs will be reduced to something closer to $17.50 per round turn, leaving a net profit of almost $6 per trade.

Of course, there is no guarantee that we will successfully enter every trade – our limit orders may not be filled at the bid price and, indeed, we are likely to suffer adverse selection, i.e. getting filled on every losing trade, while missing a proportion of the winning trades.

On the other hand, we are hardly obliged to hold a position for the entire overnight session.  Nor are we obliged to exit every trade MOC – we might find opportunities to exit prior to the end of the session, using limit orders to achieve a profit target or cap a trading loss.  In such a system, some proportion of the trades will use limit orders on both entry and exit, reducing trading costs for those trades to around $5 per round turn.

The key point is that we can use the seasonal effects detected in the overnight session as a starting point for the development for a more sophisticated trading system that uses a variety of entry and exit criteria, and order types.

The following shows the performance results for a trading system designed to trade 30-minute bars in the E-mini futures overnight session during the months of Nov to May. The strategy enters trades using limit prices and exits using a combination of profit targets, stop loss targets, and MOC orders.

Data from 1997 to 2010 were used to design the system, which was tested on out-of-sample data from 2011 to 2013.  Unseen data from Jan 2014 to Nov 2015 were used to provide a further (double blind) evaluation period for the strategy.

[Figure 2]

 

 

  

| | ALL TRADES | LONG | SHORT |
| --- | --- | --- | --- |
| Closed Trade Net Profit | $83,080 | $61,493 | $21,588 |
| Gross Profit | $158,193 | $132,573 | $25,620 |
| Gross Loss | -$75,113 | -$71,080 | -$4,033 |
| Profit Factor | 2.11 | 1.87 | 6.35 |
| Ratio L/S Net Profit | 2.85 | | |
| Total Net Profit | $83,080 | $61,493 | $21,588 |
| Trading Period | 11/13/97 2:30:00 AM to 12/31/13 6:30:00 AM (16 years 48 days) | | |
| Number of Trading Days | 2767 | | |
| Starting Account Equity | $100,000 | | |
| Highest Equity | $183,080 | | |
| Lowest Equity | $97,550 | | |
| Final Closed Trade Equity | $183,080 | | |
| Return on Starting Equity | 83.08% | | |
| Number of Closed Trades | 849 | 789 | 60 |
| Number of Winning Trades | 564 | 528 | 36 |
| Number of Losing Trades | 285 | 261 | 24 |
| Trades Not Taken | 0 | 0 | 0 |
| Percent Profitable | 66.43% | 66.92% | 60.00% |
| Trades Per Year | 52.63 | 48.91 | 3.72 |
| Trades Per Month | 4.39 | 4.08 | 0.31 |
| Max Position Size | 1 | 1 | 1 |
| Average Trade (Expectation) | $97.86 | $77.94 | $359.79 |
| Average Trade (%) | 0.07% | 0.06% | 0.33% |
| Trade Standard Deviation | $641.97 | $552.56 | $1,330.60 |
| Trade Standard Deviation (%) | 0.48% | 0.44% | 1.20% |
| Average Bars in Trades | 15.2 | 14.53 | 24.1 |
| Average MAE | $190.34 | $181.83 | $302.29 |
| Average MAE (%) | 0.14% | 0.15% | 0.27% |
| Maximum MAE | $3,237 | $2,850 | $3,237 |
| Maximum MAE (%) | 2.77% | 2.52% | 3.10% |
| Win/Loss Ratio | 1.06 | 0.92 | 4.24 |
| Win/Loss Ratio (%) | 2.10 | 1.83 | 7.04 |
| Return/Drawdown Ratio | 15.36 | 14.82 | 5.86 |
| Sharpe Ratio | 0.43 | 0.46 | 0.52 |
| Sortino Ratio | 1.61 | 1.69 | 6.40 |
| MAR Ratio | 0.71 | 0.73 | 0.33 |
| Correlation Coefficient | 0.95 | 0.96 | 0.719 |
| Statistical Significance | 100% | 100% | 97.78% |
| Average Risk | $1,099 | $1,182 | $0.00 |
| Average Risk (%) | 0.78% | 0.95% | 0.00% |
| Average R-Multiple (Expectancy) | 0.0615 | 0.0662 | 0 |
| R-Multiple Standard Deviation | 0.4357 | 0.4357 | 0 |
| Average Leverage | 0.399 | 0.451 | 0.463 |
| Maximum Leverage | 0.685 | 0.694 | 0.714 |
| Risk of Ruin | 0.00% | 0.00% | 0.00% |
| Kelly f | 34.89% | 31.04% | 50.56% |
| Average Annual Profit/Loss | $5,150 | $3,811 | $1,338 |
| Ave Annual Compounded Return | 3.82% | 3.02% | 1.22% |
| Average Monthly Profit/Loss | $429.17 | $317.66 | $111.52 |
| Ave Monthly Compounded Return | 0.31% | 0.25% | 0.10% |
| Average Weekly Profit/Loss | $98.70 | $73.05 | $25.65 |
| Ave Weekly Compounded Return | 0.07% | 0.06% | 0.02% |
| Average Daily Profit/Loss | $30.03 | $22.22 | $7.80 |
| Ave Daily Compounded Return | 0.02% | 0.02% | 0.01% |

INTRA-BAR EQUITY DRAWDOWNS

| | ALL TRADES | LONG | SHORT |
| --- | --- | --- | --- |
| Number of Drawdowns | 445 | 422 | 79 |
| Average Drawdown | $282.88 | $269.15 | $441.23 |
| Average Drawdown (%) | 0.21% | 0.20% | 0.33% |
| Average Length of Drawdowns | 10 days 19 hours | 10 days 20 hours | 66 days 1 hour |
| Average Trades in Drawdowns | 3 | 3 | 1 |
| Worst Case Drawdown | $6,502 | $4,987 | $4,350 |
| Date at Trough | 12/13/00 1:30 | 5/24/00 4:30 | 12/13/00 1:30 |