The post Protected: Systematic Strategies Fund – Jan 2018 appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post Finding Alpha in 2018 appeared first on QUANTITATIVE RESEARCH AND TRADING.

Let’s begin by reviewing some of the best and worst performing assets of 2017 (I am going to exclude cryptocurrencies from the ensuing discussion). Broadly speaking, the story across the piste has been one of strong appreciation in emerging markets, both in equities and currencies, especially in several of the Eastern European economies. In Government bond markets Greece has been the star of the show, having stepped back from the brink of the economic abyss. Overall, international diversification has been a key to investment success in 2017 and I believe that pattern will hold in 2018.

Another key development that investors need to take account of is the extraordinary degree of flattening of the yield curve in US fixed income over the course of 2017:

This process has now likely reached the end point and will begin to reverse as the Fed and other central banks in developed economies start raising rates. In 2018 investors should seek to protect their fixed income portfolios by shortening duration, moving towards the front end of the curve.
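The rationale can be illustrated with the standard first-order duration approximation, ΔP/P ≈ −D·Δy. A minimal sketch, with made-up duration figures:

```python
# Approximate price impact of a rate rise on bonds of different durations,
# using the first-order relation dP/P ~ -D_mod * dy.
def price_change_pct(modified_duration, rate_change):
    """First-order estimate of percentage price change for a yield shift."""
    return -modified_duration * rate_change

rate_rise = 0.01  # a 100bp rise in yields

long_bond = price_change_pct(modified_duration=15.0, rate_change=rate_rise)   # e.g. a long bond
short_bond = price_change_pct(modified_duration=2.0, rate_change=rate_rise)   # e.g. a 2y note

print(f"Long-duration bond:  {long_bond:+.1%}")   # roughly -15%
print(f"Short-duration bond: {short_bond:+.1%}")  # roughly -2%
```

The asymmetry is the whole argument for moving to the front end of the curve: the same rate rise costs the short-duration holder a fraction of what it costs the long-duration holder.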

A prominent feature of US markets during 2017 has been the continuing collapse of equity index volatility, specifically the VIX Index, which reached an all-time low of 9.14 in November and continues to languish at less than half the average level of the last decade:

Source: Wolfram Alpha

One consequence of the long term decline in volatility has been to drastically reduce the profitability of derivatives markets, for both traders and market makers. Firms have struggled to keep up with the high cost of technology and the expense of being connected to the fragmented U.S. options market, which is spread across 15 exchanges. Earlier in 2017, Interactive Brokers Group Inc. sold its Timber Hill options market-making unit — a pioneer of electronic trading — to Two Sigma Securities. Then, in November, Goldman Sachs announced it was shuttering its option market making business in US exchanges, citing high costs, sluggish volume and low volatility.

The impact has likewise been felt by volatility strategies, which performed well in 2015 and 2016, only to see returns decline substantially in 2017. Our own Systematic Volatility strategy, for example, finished the year up only 8.08%, having produced over 28% in the prior year.

One side-effect of low levels of index volatility has been a fall in stock return correlations and, correspondingly, a rise in the dispersion of stock returns. It turns out that index volatility and stock correlation are themselves correlated and, indeed, cointegrated:

In simple terms, stocks have a tendency to disperse more widely around an increasingly sluggish index. The “kinetic energy” of markets has to disperse somewhere and if movements in the index are muted then relative movement in individual equity returns will become more accentuated. This is an environment that ought to favor stock picking and both equity long/short and market neutral strategies should outperform. This certainly proved to be the case for our **Quantitative Equity** long/short strategy, which produced a net return of 17.79% in 2017, but with an annual volatility of under 5%:

Looking ahead to 2018, I expect index volatility and equity correlations to rise as the yield curve begins to steepen, producing better opportunities for volatility strategies. Returns from equity long/short and market neutral strategies may moderate a little as dispersion diminishes.

Big increases in commodity prices and dispersion levels also led to improvements in the performance of many CTA strategies in 2017. In the low frequency space our **Futures WealthBuilder** strategy produced a net return of 13.02% in 2017, with a Sharpe Ratio above 3 (CAGR from inception in 2013 is now at 20.53%, with an average annual standard deviation of 6.36%). The star performer, however, was our **High Frequency Futures** strategy. Since launch in March 2017 this has produced a net return of 32.72%, with an annual standard deviation of 5.02%, on track to generate an annual Sharpe Ratio above 8:
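For readers unfamiliar with how figures like these are computed, a Sharpe Ratio is conventionally annualized from daily returns as √252 × mean/σ. A minimal sketch using simulated returns (not the actual track record of any strategy mentioned here):

```python
import numpy as np

def annualized_sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio from daily returns (risk-free rate omitted for simplicity)."""
    r = np.asarray(daily_returns)
    return np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)

# Illustrative only: one year of simulated daily strategy returns.
rng = np.random.default_rng(42)
sim_returns = rng.normal(loc=0.0012, scale=0.003, size=252)
sr = annualized_sharpe(sim_returns)
print(f"Annualized Sharpe: {sr:.2f}")
```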

Looking ahead, the World Bank has forecast an increase of around 4% in energy prices during 2018, with smaller increases in the price of agricultural products. This is likely to be helpful to many CTA strategies, which should see further enhancements in performance over the course of the year. Higher frequency strategies are more dependent on commodity market volatility, which seems more likely to rise than fall in the year ahead.

US fixed income investors are likely to want to shorten duration as the yield curve begins to steepen in 2018, bringing with it higher levels of index volatility that will favor equity high frequency and volatility strategies. As in 2017, there is likely much benefit to be gained in diversifying across international equity and currency markets. Strengthening energy prices are likely to sustain higher rates of return in futures strategies during the coming year.

The post Trading Bitcoin appeared first on QUANTITATIVE RESEARCH AND TRADING.

It’s such a successful strategy that we are tempted to clone it. Buying Litecoin or Ethereum springs to mind.

As freshly-minted, bona-fide cryptocurrency entrepreneurs, it is perhaps timely to ponder the roots of this success story and share our discoveries with other, perhaps more rational investors, who may be inclined to treat the whole cryptocurrency malarky as a tulip-Ponzi scheme. First, there is a back-story to this. During the mid 1990’s I was teaching computational finance at Carnegie Mellon to some very bright students who were, naturally enough, playing the market on the side. This was the time of the internet boom with tech stocks like Amazon, Ebay, Sun Microsystems, et al, leading the charge to ever higher levels in the market. The multiples that some of these stocks were trading at were truly astonishing. I had seen something similar just before the crash in the Japanese market towards the end of the 1980’s, when stocks were trading at three-figure multiples. So by around 1997/98 I was becoming increasingly nervous that the tech boom might be at the point of imminent collapse. I conveyed these sentiments to my students, expressing concern that they should not over-commit themselves to what might turn out to be a bubble. My advice was roundly ignored and for the next couple of years I suffered the almost daily humiliation of watching the market indices reach even higher levels, to the joy of dot com investors. When the crash came many lost most, if not all, of their investment. It was like a funeral in the Hamptons that summer. Teary-eyed students asked me for advice as to what they should do to salvage what was left of their investment nest egg. I didn’t have the stomach for gloating. The only piece of advice I could think to offer them was: “learn”. I hope they did. Because here we are again.

What concerned me back in ’98 was not that I didn’t understand the importance of the new internet paradigm: on the contrary, I was an early adopter of the new technologies. I fully understood the potential benefits that a digital business like Amazon enjoyed versus bricks and mortar rivals. But it’s also the case that I under-estimated the potential of a company like Amazon, in several important ways. For instance, I did not foresee how useful and important customer reviews would become (facilitated by the digital medium); nor did I anticipate Amazon being as successful as it has been in broadening the scope of its services from books (then) to just about everything (now); and I also under-estimated the challenge that the new entrants would pose to traditional rivals, who struggled (and often failed) to adapt their hitherto successful business strategies. In other words, my concern didn’t stem from a lack of appreciation of the potential of the new tech companies, although I certainly under-estimated that potential in some cases. Rather, my thinking was that it had gone too far, too fast and that the blistering pace of the market melt-up would inevitably slow. I was right, but way too early. It is notoriously difficult to get the timing of the bubble-popping right, even if the call is correct.

In one wild 20 minute period, the price of bitcoin soared $2,000 per coin to more than $19,000 only to drop to $15,000 on the Coinbase trading venue. – Financial Times, Dec 8th, 2017

So to Bitcoin, which is undergoing a similar melt-up. Again, the rationale for the popularity of cryptocurrencies is not hard to fathom, given all the central bank shenanigans of the last decade and the poor reputation that several major banks have earned as serial manipulators of markets in fiat currency substitutes, like gold, or silver. Once again, it appears to me, the entities whose well established business models are most threatened by the arrival of cryptocurrencies have been slow on the uptake and most, like JP Morgan, for instance, are still in denial. The chief threat from cryptocurrencies lies in their potential to dis-intermediate the banks, by allowing users to transact directly with one another, and also Governments, who stand to lose considerable sums in tax revenue. No doubt they will eventually wake up to the scale of threat that Bitcoin poses and respond accordingly in due course – i.e. expect an avalanche of new regulation and government propaganda seeking to equate ownership of Bitcoin with “money laundering”, whatever that preposterous phrase might actually mean. I am not convinced that the genie can be stuffed back into the bottle so easily.

Given the value and scale of the market that cryptocurrencies are in the process of disrupting, i.e. global banking and taxation, the upside potential is indeed enormous. I fully expect the surge to continue for some time. But we can expect a great deal of volatility and several corrections of 20%, or more, along the way. The first of these might arrive next week, as Bitcoin futures start trading, enabling speculators to initiate short positions against the cryptocurrency. Other adverse events are likely to include increased scrutiny by government agencies like the IRS and market regulators like the SEC, although it remains to be seen how effectively they are able to operate in this sphere.

So, with all that said, here are some thoughts on how to play the market, if you must:

- Invest no more than 10% to 20% of your net worth. Losing this will hurt, but not kill you. Yes, you probably won’t become a Bitcoin billionaire, but neither will you end up in the poor house. Do not, under any circumstances, sell all your assets and plunge in. If you lose your home and the college fund, your wife and kids will never forgive you.
- Wait to see if we get a decent pullback after futures trading starts before you buy (more). If that doesn’t happen, it’s up to you to decide whether you want to wait it out or get aboard the train immediately. There is no right answer. At some point you are going to lose at least 20% of the value of your investment. It could be on day one, or six months from now. There is no way to know.
- I have read a few articles by traders threatening to short the heck out of the futures market as soon as it opens. For some of them this appears to be revenge for having missed a golden opportunity to buy Bitcoin when it was worth a fraction of the price it trades at today. Please don’t do this. It will be like trying to stop a freight train with your hand. You might get lucky, once or twice, but sooner or later you are going to get run over. And it will hurt a lot.
- If you want to trade the short side, or trade the long side more conservatively, consider a pairs trade. By this I mean, for instance, if you do decide to sell Bitcoin futures, consider hedging the position by buying another cryptocurrency like Ethereum or Litecoin. For a detailed description of how to approach this in a more sophisticated way, see these posts:

Developing Statistical Arbitrage Strategies Using Cointegration
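As a minimal sketch of the pairs idea, the hedge ratio can be estimated by regressing one coin’s returns on the other’s; the hedged spread should then be much less volatile than the outright position. The data here are simulated stand-ins sharing a common market factor, not actual BTC/ETH returns:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily log-returns for two hypothetical cryptocurrencies driven by
# a common market factor -- illustrative stand-ins, not real BTC/ETH data.
market = rng.normal(0.0, 0.04, 500)
btc_ret = market + rng.normal(0.0, 0.01, 500)
eth_ret = 1.2 * market + rng.normal(0.0, 0.02, 500)

# OLS hedge ratio: units of BTC exposure per unit of ETH exposure.
hedge_ratio = np.cov(eth_ret, btc_ret)[0, 1] / np.var(btc_ret, ddof=1)

# The hedged spread is far less volatile than the outright position.
spread = eth_ret - hedge_ratio * btc_ret
print(f"Hedge ratio: {hedge_ratio:.2f}")
print(f"Outright vol: {eth_ret.std():.4f}, hedged vol: {spread.std():.4f}")
```

In practice one would estimate the ratio on a rolling window and verify the spread is actually stationary, as the cointegration post linked above explains.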

As always, readers are entirely responsible for making their own investment decisions and for any and all consequences arising from them. The author bears no responsibility for any action or decision taken, or not taken, by any investor pursuant to this or other articles and disclaims any responsibility for investment decisions taken by readers of this blog.

The post Systematic Futures Trading appeared first on QUANTITATIVE RESEARCH AND TRADING.

In the high frequency space, our focus is on strategies with very high Sharpe Ratios and low drawdowns. We trade a range of futures products, including equity, fixed income, metals and energy markets. Despite the current low levels of market volatility, these strategies have performed well in 2017:

Building high frequency strategies with double-digit Sharpe Ratios requires a synergy of computational capability and modeling know-how. The microstructure of futures markets is, of course, substantially different to that of equity or forex markets and the components of the model that include microstructure effects vary widely from one product to another. There can be substantial variations too in the way that time is handled in the model – whether as discrete or continuous “wall time”, in trade time, or some other measure. But some of the simple technical indicators we use – moving averages, for example – are common to many models across different products and markets. Machine learning plays a role in most of our trading strategies, including high frequency.
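The distinction between wall time and trade time can be illustrated with pandas: the same moving-average indicator gives different values depending on whether the window is a fixed number of trades or a fixed clock interval. A sketch on synthetic tick data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic tick data: irregularly spaced trades over one hour (illustrative only).
n_ticks = 1000
times = pd.to_datetime("2017-06-01 09:30") + pd.to_timedelta(
    np.sort(rng.uniform(0, 3600, n_ticks)), unit="s")
prices = pd.Series(100 + np.cumsum(rng.normal(0, 0.01, n_ticks)), index=times)

# Trade-time moving average: a fixed number of trades per window,
# however much wall-clock time they span.
ma_trade_time = prices.rolling(window=50).mean()

# Wall-time moving average: a fixed clock interval per window,
# however many trades fall inside it.
ma_wall_time = prices.rolling(window="5min").mean()

print(ma_trade_time.iloc[-1], ma_wall_time.iloc[-1])
```

In quiet markets the two coincide closely; in bursts of activity the trade-time window shrinks in clock terms, which is often the behavior a high frequency model wants.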

Here are some relevant blog posts that you may find interesting:

The post Analyzing the FDIC Dataset appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post A Winer Process appeared first on QUANTITATIVE RESEARCH AND TRADING.

But, in fact, I really did have in mind something more like this:

We are following an example from the recently published *Mathematica Beyond Mathematics* by Jose Sanchez Leon, an up-to-date text that describes many of the latest features in Mathematica, illustrated with interesting applications. Sanchez Leon shows how Mathematica’s machine learning capabilities can be applied to the craft of wine-making.

We begin by loading a curated Wolfram dataset comprising measurements of the physical properties and quality of wines:

We’re going to apply Mathematica’s built-in machine learning algorithms to train a predictor of wine quality, using the training dataset. Mathematica determines that the most effective machine learning technique in this case is Random Forest and after a few seconds produces the predictor function:

Mathematica automatically selects what it considers to be the best performing model from several available machine learning algorithms:

Let’s take a look at how well the predictor performs on the test dataset of 1,298 wines:

We can use the predictor function to predict the quality of an unknown wine, based on its physical properties:

Next we create a function to predict the quality of an unknown wine as a function of just two of its characteristics, its pH and alcohol level. The analysis suggests that the quality of our unknown wine could be improved by increasing both its pH and alcohol content:
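For readers without Mathematica, the same workflow can be sketched in Python with scikit-learn. Note this uses sklearn’s bundled wine dataset (cultivar labels rather than quality scores), so it is an analogy to the Wolfram example above, not a replica of it:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a bundled wine dataset and split off a held-out test set.
X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train a random forest -- the same family of model Mathematica selected.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Evaluate on the held-out set and classify an "unknown" wine from its
# physical properties.
accuracy = model.score(X_test, y_test)
prediction = model.predict(X_test[:1])
print(f"Test accuracy: {accuracy:.2f}")
```

The point carries over directly: in both environments the train/predict/evaluate loop is a few lines, with model selection largely automated.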

This simple toy example illustrates how straightforward it is to deploy machine learning techniques in Mathematica. Machine Learning and Neural Networks became a major focus for Wolfram Research in version 10, and the software’s capabilities have been significantly enhanced in version 11, with several applications such as text and sentiment analysis that have direct relevance to trading system development:

For other detailed examples see:

The post The Story of a HFT Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post Correlation Copulas appeared first on QUANTITATIVE RESEARCH AND TRADING.

In case you missed it, the post can be found here:

We saw previously that the levels of the three indices are all highly correlated, and we were able to successfully account for approximately half the variation in the VIX index using either linear regression models or non-linear machine-learning models that incorporated the two correlation indices. It turns out that the log-returns processes are also highly correlated:

We can create a simple linear regression model that relates log-returns in the VIX index to contemporaneous log-returns in the two correlation indices, as follows. The derived model accounts for just under 40% of the variation in VIX index returns, with each correlation index contributing approximately one half of the total VIX return.

Although the linear model is highly statistically significant, we see clear evidence of lack of fit in the model residuals, which indicates non-linearities in the relationship. So, next, we use a nearest-neighbor algorithm, a machine learning technique that allows us to model non-linear components of the relationship. The residual plot from the nearest neighbor model clearly shows that it does a better job of capturing these nonlinearities, with a lower standard deviation in the model residuals compared to the linear regression model:
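The effect described above is easy to reproduce on synthetic data: when the true relationship has a nonlinear component, a nearest-neighbor regression leaves smaller residuals than OLS. A sketch (simulated data, not the actual index returns):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)

# Synthetic stand-ins for correlation-index returns and VIX returns, with a
# deliberate quadratic (nonlinear) component that OLS cannot capture.
x = rng.normal(0, 1, (1000, 1))
y = 0.5 * x[:, 0] + 0.3 * x[:, 0] ** 2 + rng.normal(0, 0.2, 1000)

ols = LinearRegression().fit(x, y)
knn = KNeighborsRegressor(n_neighbors=10).fit(x, y)

# The nearest-neighbor model absorbs the curvature, so its residuals
# have a noticeably lower standard deviation.
ols_resid_sd = (y - ols.predict(x)).std()
knn_resid_sd = (y - knn.predict(x)).std()
print(f"OLS residual sd: {ols_resid_sd:.3f}, KNN residual sd: {knn_resid_sd:.3f}")
```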

Another approach entails the use of copulas to model the inter-dependency between the volatility and correlation indices. For a fairly detailed exposition on copulas, see the following blog posts:

We begin by taking a smaller sample comprising around three years of daily returns in the indices. This minimizes the impact of any long-term nonstationarity in the processes and enables us to fit marginal distributions relatively easily. First, let’s look at the correlations in our sample data:

We next proceed to fit marginal distributions to the VIX and Correlation Index processes. It turns out that the VIX process is well represented by a Logistic distribution, while the two Correlation Index returns processes are better represented by a Student-T density. In all three cases there is little evidence of lack of fit, either in the body or tails of the estimated probability density functions:

The final step is to fit a copula to model the joint density between the indices. To keep it simple I have chosen to carry out the analysis for the combination of the VIX index with only the first of the correlation indices, although in principle there is no reason why a copula could not be estimated for all three indices. The fitted model is a multinormal Gaussian copula with a correlation coefficient of 0.69. Of course, other copulas are feasible (Clayton, Gumbel, etc.), but the Gaussian model appears to provide an adequate fit to the empirical copula, with approximate symmetry in the left and right tails.
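The copula-fitting procedure can be sketched as follows: fit the marginals, transform each series to uniforms via its fitted CDF (the probability integral transform), map the uniforms to normal scores, and take their correlation as the Gaussian copula parameter. The data below are simulated with known Logistic and Student-T marginals, not the actual index returns:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated stand-ins for VIX and correlation-index returns: a Gaussian
# copula with rho = 0.7, with logistic and Student-t marginals respectively.
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], 1000)
vix_ret = stats.logistic.ppf(stats.norm.cdf(z[:, 0]))
corr_ret = stats.t.ppf(stats.norm.cdf(z[:, 1]), df=5)

# Step 1: fit the marginal distributions.
vix_params = stats.logistic.fit(vix_ret)
corr_params = stats.t.fit(corr_ret)

# Step 2: probability integral transform to uniforms, then to normal scores.
u = stats.logistic.cdf(vix_ret, *vix_params)
v = stats.t.cdf(corr_ret, *corr_params)
n1, n2 = stats.norm.ppf(u), stats.norm.ppf(v)

# Step 3: the Gaussian copula parameter is the correlation of the normal scores.
rho = np.corrcoef(n1, n2)[0, 1]
print(f"Fitted Gaussian copula correlation: {rho:.2f}")
```

The recovered rho is close to the 0.7 used to generate the data, which is the sanity check one would also apply to the real series.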

The post A Tactical Equity Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.

]]>Systematic Strategies is a hedge fund rather than an RIA, so we have no plans to offer the product to the public. However, we are currently holding exploratory discussions with Registered Investment Advisors about how the strategy might be made available to their clients.

For more background, see this post on Seeking Alpha: http://tiny.cc/ba3kny

The post Correlation Cointegration appeared first on QUANTITATIVE RESEARCH AND TRADING.

The question was put to me whether the VIX and correlation indices might be cointegrated.

Let’s begin by looking at the pattern of correlation between the three indices:

If you recall from my previous post, we were able to fit a linear regression model with the Year 1 and Year 2 Correlation Indices that accounts for around 50% of the variation in the VIX index. While the model certainly has its shortcomings, as explained in the post, it will serve the purpose of demonstrating that the three series are cointegrated. The standard Dickey-Fuller test rejects the null hypothesis of a unit root in the residuals of the linear model, confirming that the three series are cointegrated of order 1.

We can attempt to take the modeling a little further by fitting a VAR model. We begin by splitting the data into an in-sample period from Jan 2007 to Dec 2015 and an out-of-sample test period from Jan 2016 to Aug 2017. We then fit a vector autoregression model to the in-sample data:

When we examine how the model performs on the out-of-sample data, we find that it fails to pick up on much of the variation in the series – the forecasts are fairly flat and provide quite poor predictions of the trends in the three series over the period from 2016-2017:

The VIX and Correlation Indices are not only highly correlated, but also cointegrated, in the sense that a linear combination of the series is stationary.

One can fit a weakly stationary VAR process model to the three series, but the fit is quite poor and forecasts from the model don’t appear to add much value. It is conceivable that a more comprehensive model involving longer lags would improve forecasting performance.
