The post Modeling Asset Processes appeared first on QUANTITATIVE RESEARCH AND TRADING.

Over the last twenty-five years significant advances have been made in the theory of asset processes, and there now exists a variety of mathematical models, many of them computationally tractable, that provide a reasonable representation of their defining characteristics.

While the Geometric Brownian Motion model remains a staple of stochastic calculus theory, it is no longer the only game in town. Other models, many more sophisticated, have been developed to address the shortcomings in the original. There now exist models that provide a good explanation of some of the key characteristics of asset processes that lie beyond the scope of models couched in a simple Gaussian framework. Features such as mean reversion, long memory, stochastic volatility, jumps and heavy tails are now readily handled by these more advanced tools.
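As a baseline for comparison, the GBM model itself is straightforward to simulate exactly, since log prices follow a Gaussian random walk. Below is a minimal sketch in Python (the drift and volatility values are purely illustrative):

```python
import math
import random

def simulate_gbm(s0, mu, sigma, t, n_steps, seed=42):
    """Simulate one GBM path using the exact log-normal solution:
    S_{t+dt} = S_t * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)."""
    rng = random.Random(seed)
    dt = t / n_steps
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# One year of daily steps, 8% drift, 20% volatility (illustrative values)
path = simulate_gbm(s0=100.0, mu=0.08, sigma=0.20, t=1.0, n_steps=252)
print(len(path), min(path) > 0)  # 253 True
```

Because each increment is an exponential, prices stay strictly positive; but the Gaussian log returns are precisely what rule out the heavy tails, jumps and volatility clustering that the more advanced models are designed to capture.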

In this post I review a critical selection of asset process models that belong in every financial engineer’s toolbox, point out their key features and limitations and give examples of some of their applications.


The post Systematic Strategies Fund Jan 2017 appeared first on QUANTITATIVE RESEARCH AND TRADING.


The quote from Bloomberg says it all:

Last month featured more than its fair share of political excitements, as Donald Trump arrived in the White House. Yet it was resolutely boring for U.S. stocks, with one-month realized volatility on the S&P 500 coming in at 6.51 as the index moved steadily higher. In records going back to 1928, there have only been four other times when the year has begun so calmly — and none of those witnessed the inauguration of a first-time president.

January was indeed very quiet. What was particularly extraordinary about this was that the first month of an incoming presidency often sees a significant sell-off, as described in this post. However, it was not to be: the market rallied as if Santa had decided to stick around for another month. This outcome proved benign for both our Systematic Volatility and Quantitative Equity strategies, which saw gains of 0.89% and 0.38% for the month, respectively. This was pleasant enough, although as an investor one is always hoping for more. But the lack of market volatility makes life difficult for quantitative strategies in general, which tend to be positively correlated with volatility – and our own strategies are no exception to the rule. We don’t regard this as a situation requiring remedial action of any kind – it is simply the product of the month-to-month ebb and flow of the markets.

Looking ahead, we do not envisage the current quietude continuing indefinitely, given the economic and political risks – both domestic and international – looming large on the horizon. As and when we finally see a market sell-off, the ensuing higher levels of volatility should open up more opportunities for our algorithms and, with them, the potential for greater levels of profitability in the strategies.


The post Conditional Value at Risk Models appeared first on QUANTITATIVE RESEARCH AND TRADING.

One of the most widely used risk measures is Value-at-Risk (VaR), defined as the maximum loss on a portfolio, over a given horizon, at a specified confidence level. In other words, VaR is a percentile of a loss distribution.

But despite its popularity, VaR suffers from well-known limitations: its tendency to underestimate the risk in the (left) tail of the loss distribution, and its failure to capture the dynamics of correlation between portfolio components or nonlinearities in the risk characteristics of the underlying assets.

One method of addressing these shortcomings is discussed in a previous post, Copulas in Risk Management. Another approach, known as Conditional Value at Risk (CVaR), which focuses on tail risk, is the subject of this post. We look at how to estimate Conditional Value at Risk in both Gaussian and non-Gaussian frameworks, incorporating loss distributions with heavy tails, and show how to apply the concept in the context of nonlinear time series models such as GARCH.
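To make the distinction between the two measures concrete, both can be estimated directly from a sample of losses: VaR is the α-quantile of the loss distribution, while CVaR is the average of losses at or beyond that quantile. A minimal historical-simulation sketch in Python (the Gaussian sample below stands in for real P&L data):

```python
import math
import random

def var_cvar(losses, alpha=0.95):
    """Historical VaR: the alpha-quantile of the loss sample.
    CVaR (expected shortfall): the average loss at or beyond VaR."""
    srt = sorted(losses)
    idx = int(math.ceil(alpha * len(srt))) - 1
    var = srt[idx]
    tail = srt[idx:]
    cvar = sum(tail) / len(tail)
    return var, cvar

rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(10000)]  # simulated loss sample
var, cvar = var_cvar(losses, alpha=0.95)
print(round(var, 2), round(cvar, 2))
```

With a Gaussian sample the estimates converge to the familiar closed forms (VaR95 ≈ 1.645σ, CVaR95 ≈ 2.06σ); with heavy-tailed data the gap between the two measures widens, which is precisely why CVaR is the more informative tail-risk measure.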


The post Copulas in Risk Management appeared first on QUANTITATIVE RESEARCH AND TRADING.

The post The Systematic Volatility Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.

Systematic Volatility Strategy Presentation Jan 2017


The post The Systematic Strategies Quantitative Equity Strategy appeared first on QUANTITATIVE RESEARCH AND TRADING.

In 2012 we created a research program into quantitative equity strategies using machine learning techniques, earlier versions of which had been successfully employed at Proteom Capital, a hedge fund that pioneered the approach in the early 2000s. Proteom managed capital for Paul Tudor Jones’s Tudor Capital and several other major institutional investors until 2007.

Systematic Strategies began trading its Quantitative Equity Strategy, the result of that research program, in 2013. Having built a four-year live track record, the firm is opening the strategy to external investors in 2017. The investment program will be offered in managed account format and, for the first time, as a hedge fund product. For more details about the hedge fund offering, please see this post about the Systematic Strategies Fund.

Designed to achieve returns comparable to the benchmark S&P 500 Index, but with much lower risk, the Quantitative Equity Strategy has produced annual returns at a compound rate of 14.74% (net of fees) since 2013, with an annual volatility of 4.46% and realized Sharpe Ratio of 3.31. By contrast, the benchmark S&P 500 Index yielded a compound annual rate of return of 11.93%, with annual volatility of 10.40% and Sharpe Ratio of 1.15. In other words, the strategy has outperformed the benchmark S&P 500 Index by a significant margin, but with much reduced volatility and downside risk.

If you are an Accredited Investor and wish to receive a copy of the fund offering documents, please write to us at info@systematic-strategies.com.

Presentation deck (please allow time to load):

Quantitative Equity Strategy Presentation February 2017 (Updated Jan 2017)


The post Strategy Portfolio Construction appeared first on QUANTITATIVE RESEARCH AND TRADING.

The essence of the strategy portfolio approach lies in the understanding that it is much easier to create a diversified portfolio of equity strategies than a diversified portfolio of the underlying assets. In principle, it is quite feasible to build, on highly correlated stocks, a basket of strategies that are uncorrelated, or even negatively correlated, with one another. For example, one could combine a mean-reversion strategy on a stock like Merck & Co. (MRK) with a momentum strategy on a correlated stock such as Pfizer (PFE); the two strategies will typically counteract one another, rather than moving in lockstep as the underlying equities tend to do.
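This intuition is easy to verify with a toy simulation: two highly correlated return series, a contrarian rule on one and a trend-following rule on the other. The rules and parameters below are hypothetical, chosen purely for illustration:

```python
import math
import random

def correlation(x, y):
    """Pearson correlation of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

rng = random.Random(7)
rho = 0.9  # hypothetical loading of each stock on a common factor
common = [rng.gauss(0, 1) for _ in range(20000)]
r1 = [rho * c + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1) for c in common]
r2 = [rho * c + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1) for c in common]

# Contrarian rule on stock 1: fade yesterday's move.
# Trend rule on stock 2: follow yesterday's move.
strat1 = [-math.copysign(1, r1[t - 1]) * r1[t] for t in range(1, len(r1))]
strat2 = [math.copysign(1, r2[t - 1]) * r2[t] for t in range(1, len(r2))]

print(round(correlation(r1, r2), 2))          # assets: strongly positive
print(round(correlation(strat1, strat2), 2))  # strategies: negative
```

The asset returns are strongly positively correlated by construction, yet the two strategy return streams come out negatively correlated, because the opposing rules put the two positions on opposite sides of the common move.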

In fact this approach is widely employed by hedge fund investors as well as proprietary trading firms and family offices, who seek to diversify their investments across a broad range of underlying strategies in many different asset classes, rather than by investing in the underlying assets directly.

What is to be gained from such an approach to portfolio construction, compared to the typical methodology? The answer is straightforward: lower risk, which results from the lower correlation between strategies, compared to the correlation between the assets themselves. That in turn produces a more diversified portfolio, reducing strategy beta, volatility and tail risk. We see all of these characteristics in the Systematic Strategies Quantitative Equity Strategy, described below, which has an average beta of only 0.2 and a maximum realized drawdown of -2.08%.

Risk reduction is only part of the story, however. The Quantitative Equity Strategy has also yielded a combined alpha in excess of 4% per annum, outperforming the benchmark by a total of 16.35% (net of fees) since 2013. In other words, the individual equity strategies produce alphas that are conserved when combined together to create the overall portfolio. But even if the strategies produced little or no alpha, the risk-reduction benefits of the approach would likely still apply.

What are the drawbacks to this approach to portfolio construction? One major challenge is that it is much harder to create strategy portfolios than asset portfolios. The analyst is obliged to create (at least) one individual strategy for each asset in the universe, rather than a single strategy for the portfolio as a whole. This constrains the rate at which the investment universe can grow (it takes time to research and develop good strategies!), limiting the rate of growth of assets under management. So it is not an approach that I would necessarily recommend if your goal is to deploy multiple billions of dollars; but AUM up to around a billion dollars is certainly a feasible target.

From a risk perspective, the chief limitation is that we lack the ability to control the makeup of the resulting portfolio as closely as we can with traditional approaches to portfolio construction. In a typical equity long/short strategy the portfolio is constrained to have a specified maximum beta and overall net exposure. With the strategy portfolio approach we are unable to guarantee a specific limit on the net exposure, or the beta, of the portfolio. We may be able to quantify that, historically, the portfolio has had an average net long exposure of, say, 25% and an average beta of 0.2; but there is nothing to guarantee that a situation will not arise in future in which all of the strategies align to produce a 100% net long or net short exposure and a beta of +1/-1, or greater. This is extremely unlikely, of course, and may never happen even on geological timescales, as can be demonstrated by Monte Carlo simulation. What is likely, however, is that there will be periods in which the beta and net exposure of the portfolio fluctuate widely.
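A back-of-envelope calculation illustrates just how remote full alignment is. If, hypothetically, the portfolio held 50 independent strategies, each equally likely to be long or short on any given day, then:

```python
# Hypothetical: 50 independent strategies, each long or short with probability 1/2.
# P(all long) + P(all short) = 2 * (1/2)^50
n_strategies = 50
p_align = 2 * 0.5 ** n_strategies
print(p_align)  # ~1.8e-15
```

At one independent draw per trading day, the expected waiting time for a fully aligned portfolio runs to on the order of a trillion years. Real strategies are not independent, of course, which is exactly why Monte Carlo simulation over the estimated dependence structure is the appropriate tool for quantifying the true probability.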

Is this a problem? Yes and no. The point about constraining the portfolio beta and net exposure in a typical long/short strategy is to manage portfolio risk. Such constraints decrease the likelihood of substantial losses – but they cannot guarantee that outcome, as has been demonstrated during prior periods of severe market stress such as 2000/01 and 2008/09. Asset correlations tend to increase significantly when markets suffer major declines, often undermining the assumptions on which the portfolio and its associated risk limits were originally based.

Similarly, the way in which we construct strategy portfolios takes a statistical approach to risk control, using stochastic process control. Just as with the traditional approach to portfolio construction, statistical analysis cannot guarantee that market conditions may not arise that give rise to substantial losses, however unlikely such circumstances may be.


Is the risk from as-yet-unknown future market conditions greater for strategy portfolios than for traditional equity long/short portfolios? I would say not. In fact, during periods of market stress I would prefer to hold a portfolio of strategies, some of which at least are likely to perform well, rather than a portfolio of equities, all of which are by then likely to be highly correlated with the benchmark index.

The benefit of being able to check the boxes for risk controls such as limits on portfolio beta and net exposure is, I would argue, somewhat illusory. Such controls might be characterized as just a placebo for those who are, literally, unable (or, more likely, not paid) to think outside the box.

But if this argument is not sufficiently persuasive, it is perfectly feasible to overlay risk controls on a portfolio comprising several underlying strategies. For example, one can:

- Hedge out portfolio beta and net exposure above a specified level, using index ETFs, futures or options
- Reduce the capital allocations during periods of market stress, or when portfolio beta or market exposure exceeds a threshold level
- Turn off strategies that are underperforming in current market conditions
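The first of these overlays amounts to simple arithmetic: to bring portfolio beta back under a cap, one shorts index exposure (futures, ETFs or options) in the notional amount that offsets the excess. A sketch with hypothetical numbers:

```python
def beta_hedge_notional(portfolio_value, portfolio_beta, beta_cap):
    """Notional of index exposure to short so the hedged beta equals the cap.
    Hedged beta = (beta * V - H) / V  =>  H = (beta - cap) * V."""
    excess = portfolio_beta - beta_cap
    return max(0.0, excess * portfolio_value)

# Hypothetical: a $10m portfolio whose beta has drifted to 0.6 against a 0.3 cap
hedge = beta_hedge_notional(10_000_000, portfolio_beta=0.6, beta_cap=0.3)
print(round(hedge))  # short $3m of index exposure
```

The same arithmetic runs in reverse when the portfolio's beta drops back below the cap, at which point the hedge is lifted; the cost of the overlay is the drag from hedging away part of the portfolio's market exposure.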

All of these measures will impact strategy performance. By and large, our preference is to let the strategy play out as it may, trusting in the robustness of the strategy portfolio to produce an acceptable outcome. We leave it to the investor to decide what additional risk controls he wishes to implement, either independently, or within a separately managed account.

Systematic Strategies started out in 2009 as a proprietary trading firm engaged in high frequency trading. In 2012 the firm expanded into low frequency systematic trading strategies with the launch of our VIX ETF strategy, which was superseded in 2015 by the Systematic Volatility Strategy. The firm began managing external capital in its managed account platform in 2015.



The post HFT VIX Scalper Leads on Collective2 appeared first on QUANTITATIVE RESEARCH AND TRADING.

For more background on HFT scalping strategies see the following post:


The post Systematic Strategies Fund appeared first on QUANTITATIVE RESEARCH AND TRADING.

In 2012 the firm began R&D on a new Quantitative Equity strategy, which was opened to investors at the beginning of 2017. Over the four years since inception in 2013, the Quantitative Equity strategy has achieved a compound annual rate of return of 14.74%, with a realized Sharpe Ratio of 3.31. More information about the strategy can be found in this post:

Due to increasing demand for the firm’s investment products, at the beginning of 2017 we launched a new hedge fund vehicle, the Systematic Strategies Fund LLC, which is open to accredited investors wishing to invest in any of our existing or future investment programs. These currently include the Systematic Volatility Strategy (Series A) and the Quantitative Equity Strategy (Series B). Additional investment programs will be added to the fund offering in due course. The new fund will give smaller investors access to investment programs that are otherwise available, in SMA form, only to larger managed accounts.

If you are an accredited investor and wish to receive copies of the fund offering documents, please contact us at info@systematic-strategies.com.


The post The Algorithm appeared first on QUANTITATIVE RESEARCH AND TRADING.

`teststring = "ItellyoumadamthecatisnotacivicanimalalthoughtisdeifiedinEgypt";`

`nlargest = 5;`

`TakeLargestBy[Cases[StringCases[teststring, __, Overlaps -> All], _?PalindromeQ], StringLength, nlargest] // Flatten`

which produces the n largest palindromes, in this case:

`{"deified", "acivica", "madam", "civic", "eifie"}`

My offering was fairly quickly superseded by a faster and more elegant solution from another Mathematica aficionado:

```
MaximalBy[StringCases[teststring, x : Repeated[_, {2, \[Infinity]}]
/; PalindromeQ[x], Overlaps -> True], StringLength, nlargest]
```

I have written previously about some of the new developments in programming languages, for example:

The joy of high-level, functional programming languages like Mathematica is that they often allow complex tasks to be accomplished easily, or at least in very few lines of code, compared to procedural languages like C, C++, Java or Python. The latter would probably require at least a dozen lines of code to achieve what Mathematica is able to accomplish in just one: see, for example, the table below, which compares the relative verbosity of various programming languages, including Mathematica.

Source: Wolfram Research

We can compare “tokens,” or any string of letters and numbers that is not interrupted by a space or punctuation. This lets us classify length in terms of “units of syntax,” which, while not perfect, gives us a clearer picture of the number of distinct elements required to build a function or program. Below, the Wolfram Language appears, on average, to increase in token count at a slower rate than Python. Using `FindFit`, we can estimate that a typical Python program requiring *x* tokens can be written in the Wolfram Language with about 3.48√x tokens, meaning a Python program that requires 1,000 tokens would require just 110 tokens in the Wolfram Language.

Source: Wolfram Research

So what is the message here?

For now, let me skate over the question of execution speed, typically the foremost objection that arises at this stage of the endless argument about interpreted vs. compiled programming languages. I promise to return to that topic very shortly. Instead let me first point out an often underestimated obstacle one faces in programming in Mathematica: the idiom. It can sometimes take as long to figure out the correct programming method in a single line of Mathematica code as it might take to code an algorithm more prosaically in a relatively verbose language like Java. And even after you have successfully achieved the goal, you are left with the challenge of understanding your own code when you return to it, perhaps months (or years) later. Even worse, you may have to disentangle the complex prose of another Mathematica programmer.

On the plus side, one of the great advantages of languages like Mathematica is that programs are largely a compilation of functions. So one assembles programs piece by piece, testing the functionality of each component function before adding another layer of complexity. Reverse engineering a Mathematica program – no matter how complex – follows roughly the same process, in reverse: in the above code you would first try to understand what the function MaximalBy does, then StringCases, then PalindromeQ (that one is fairly obvious!). The nature of functional programming languages makes it straightforward to assemble complex pieces of code from relatively simple sub-components, and also to reverse the process to dis-assemble them. The interpretative nature of the language is helpful here, too, since one can (unit) test each component on the fly, something that is much more tedious to do in a compiled language.

In everyday usage we often conflate the terms “algorithm” and “code”. The code given above is an implementation of an algorithm, but it is not the algorithm itself. An algorithm is a recipe or sequence of steps that will produce a specified result. It can be written down as a series of instructions, in English or any other language, just as easily as in a computer language, and can be executed manually, if necessary. During WWII, before the advent of modern computers, large teams of women were employed to do just that.

In the case of the present example the algorithm runs something like this:

- Examine every substring of the string
- Test each substring to see if it is a palindrome
- Add any that are palindromes to the list
- Sort the list of substrings by string length
- Pick the n longest substrings
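The steps above translate almost line-for-line into any language. Here is a brute-force sketch in Python, which yields the same five palindromes as the Mathematica one-liner at the start of the post:

```python
def n_largest_palindromes(s, n):
    """Brute force: enumerate every substring, keep the palindromes,
    and return the n longest (ties in no particular order)."""
    palindromes = set()
    for i in range(len(s)):
        for j in range(i + 1, len(s) + 1):
            sub = s[i:j]
            if sub == sub[::-1]:  # a palindrome reads the same reversed
                palindromes.add(sub)
    return sorted(palindromes, key=len, reverse=True)[:n]

teststring = "ItellyoumadamthecatisnotacivicanimalalthoughtisdeifiedinEgypt"
largest = n_largest_palindromes(teststring, 5)
print(largest)
```

The nested loops make the enumeration step explicit, which is useful in what follows: the cost of the algorithm is visible directly in the code.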

In principle, there is nothing to prevent us executing the algorithm by hand, except possibly time and boredom. Computer code is just a very much more convenient and efficient means of execution. But this usefulness can give rise to a kind of mental laziness, in which we focus on the problem of producing the most elegant computer code, rather than the most efficient algorithm. We have, in essence, got the problem turned around: we should first think about the algorithm and only then begin to think about how we might implement it. Not proceeding in that logical fashion risks producing solutions that are distinctly sub-optimal, whatever the efficiency and elegance of the implemented code.

In this case, we have used a “brute force” algorithm that examines every substring for possible palindromes. This is highly inefficient, scaling with the square of the number of characters n in the string, i.e. O(n^2). To see this, consider the 4-character string “abcd”. There are a total of 10 substrings:

- 4 single-character substrings: {a, b, c, d}
- 3 two-character substrings: {ab, bc, cd}
- 2 three-character substrings: {abc, bcd}
- 1 four-character substring: {abcd}

In general, for a string of length n, there are:

- n single-character substrings,
- (n-1) two-character substrings,
- …
- 2 substrings of length (n-1),
- 1 substring of length n.

So the total number of substrings is n + (n-1) + … + 2 + 1 = n(n+1)/2, which grows as O(n^2).
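A quick empirical check of the count, enumerating the substrings directly (a sketch in Python):

```python
def count_substrings(s):
    """Count all contiguous substrings of s by direct enumeration."""
    return sum(1 for i in range(len(s)) for j in range(i + 1, len(s) + 1))

# "abcd": 4 + 3 + 2 + 1 = 10, matching n(n+1)/2 with n = 4
n = 4
print(count_substrings("abcd"), n * (n + 1) // 2)
```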

Let’s demonstrate the inefficiency of the algorithm using Virgil’s Aeneid as our source of palindromes – perhaps not the smartest choice, given the nature of poetry!

```
Aeneid = ExampleData[{"Text", "AeneidEnglish"}];
StringLength[Aeneid]
(* 606071 *)

teststring = StringTake[Aeneid, 1000];
nlargest = 20;

StringTake[teststring, 100]
(* "I Arms, and the man I sing, who, forc'd by fate, And haughty Juno's unrelenting hate, Expell'd" *)
```

`MaximalBy[StringCases[teststring, x : Repeated[_, {2, \[Infinity]}] /; PalindromeQ[x], Overlaps -> True], StringLength, nlargest] // Timing`

`{12.1068, {" I ", " I ", "ele", "t t", "a a", "t t", "ivi", "g g", " O ", "ses", "ovo", " a ", "s s", "h h", "ese", "exe", "t t", "awa", "t t", "s s"}}`

It takes a full 12 seconds to find the 20 largest palindromes in only the first 1,000 characters of the 600,000-character text.

If we double the length of the test string, the execution time increases by a factor of over 4x:

`teststring = StringTake[Aeneid, 2000];`

`MaximalBy[StringCases[teststring, x : Repeated[_, {2, \[Infinity]}] /; PalindromeQ[x], Overlaps -> True], StringLength, nlargest] // Timing`

`{56.4254, {" I ", " I ", "ele", "t t", "a a", "t t", "ivi", "g g", " O ", "ses", "ovo", " a ", "s s", "h h", "ese", "exe", "t t", "awa", "t t", "s s"}}`

It is clear that even if we succeed in speeding up the code, the inefficiency of the algorithm will render it useless for all but the shortest strings – scanning the entire Aeneid for palindromes would take the algorithm at least 100 hours!
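The 100-hour figure is just quadratic extrapolation from the timings above; if anything it is conservative, since the measured scaling was slightly worse than quadratic:

```python
# Extrapolate the 12-second run on 1,000 characters to the full text,
# assuming O(n^2) scaling: time ~ (n2/n1)^2
baseline_chars, baseline_secs = 1_000, 12.1
full_text_chars = 606_071
est_secs = baseline_secs * (full_text_chars / baseline_chars) ** 2
print(round(est_secs / 3600))  # on the order of a thousand hours
```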

Fortunately, in 1975 Glenn Manacher came up with a much smarter algorithm that scales linearly with the length of the string, which you can read about here. In essence his approach exploits the symmetry in palindromes to reduce the search time.

An implementation of Manacher’s algorithm in Mathematica is given below. The code executes extremely quickly, taking only around a quarter of a second to scan the entire 600,000 character text of the Aeneid to find the longest palindrome (“man nam”).

`AbsoluteTiming[findLongestPalindrome[Aeneid]] (* {0.236135, "man nam"} *)`

A large part of the speed enhancement is due to the much greater efficiency of Manacher’s algorithm. Notice, too, however, that further significant speed gains are made by compiling the Mathematica code via C, which Mathematica is able to do as standard.

Doc, note: I dissent. A fast never prevents a fatness. I diet on cod

– Mathematician Peter Hilton

The lessons here are, firstly, to focus on the problem of the algorithm rather than the code, clearly distinguishing between the two.

Secondly, the complex idiom of functional programming languages like Mathematica can complicate the task of programming an algorithm, or understanding code written by others. On the other hand, the brevity of the language makes it easier to get an overview of the functionality of a program, while the functional structure makes the process of assembling – or disassembling – complex programs at least logically coherent.

Finally, it is important to understand that modern interpreted functional languages like Mathematica are a great deal more viable as production systems than they ever used to be, thanks to the introduction of capabilities to produce compiled code.

```
(* Manacher's algorithm: longest palindromic substring in linear time *)
findLongestPalindrome[""] = "";
findLongestPalindrome[s_String] :=
FromCharacterCode @ findLongestPalindromeList[ToCharacterCode @ s];
findLongestPalindromeList = Compile[{{s, _Integer, 1}},
Module[{s2, p, c, r, n, m, i2, len, cc},
(* interleave -1 sentinels so odd- and even-length palindromes are handled uniformly *)
s2 = Riffle[s, -1, {1, -1, 2}];
(* p[[i]]: palindrome span at center i; c, r: center and right edge of rightmost palindrome *)
p = ConstantArray[0, Length[s2]];
c = 1; r = 1; m = 1; n = 1; len = 0; cc = 1;
Do[
If[i > r,
(* outside the known palindrome: expand from scratch *)
p[[i]] = 0; m = i - 1; n = i + 1,
(* inside: reuse the mirror position i2 to skip redundant comparisons *)
i2 = 2 c - i;
If[p[[i2]] < (r - i),
p[[i]] = p[[i2]]; m = 0,
p[[i]] = r - i; n = r + 1; m = 2 i - n
]
];
If[OddQ[m], p[[i]]++; m--; n++];
(* expand symmetrically while the characters match *)
While[m > 0 && n <= Length[s2] && s2[[m]] == s2[[n]],
p[[i]] += 2; m -= 2; n += 2;
];
(* advance the rightmost-palindrome boundary if this one reaches further *)
If[(i + p[[i]]) > r,
c = i; r = i + p[[i]];
];
(* track the longest palindrome seen so far *)
If[len < p[[i]],
len = p[[i]]; cc = i;
],
{i, 2, Length[s2]}
];
(* map the center and length in s2 back to indices in the original string *)
s[[Quotient[cc - len + 1, 2] ;; Quotient[cc + len - 1, 2]]]
]
]
```

