Experiment: What the random walk hypothesis says about forecasting financial markets

Published on April 05, 2016

    On our Habré blog and in the analytical section of our site we write a lot about algorithms and tools for forecasting movements in financial markets. Many observers, however, believe that such activity is akin to gambling in a casino: everything on the exchange is random, so nothing can be predicted. Stuart Reid, a quantitative analyst at the NMRQL hedge fund, published on the Turing Finance website the results of a study in which he used the random walk hypothesis to try to confirm or refute the thesis that financial markets are random. Below are the main ideas of that material.

    According to Reid, hackers and traders are essentially doing the same thing: they find inefficiencies in a system and exploit them. The only difference is that the former, pursuing a variety of goals, hack computers, networks or even people, while the latter hack financial markets, and their goal is profit.

    In this context, random number generators are a very interesting topic: they are used to encrypt data and communications, but if a vulnerability is found in a generator, it is no longer secure, and hackers can exploit the flaw to decrypt information. There are various test suites that such generators must pass so that their cryptographic strength can be assessed. One of them is the NIST test suite. In this article we apply these tests to financial data to understand whether the market can be “hacked”.

    Random walk hypothesis


    In the real world many systems exhibit random properties: the spread of epidemics such as Ebola, the behavior of cosmic radiation, the movement of particles in water, luck at the roulette table and, according to the random walk hypothesis, even the movement of financial markets.

    Consider an interesting experiment conducted by Burton G. Malkiel, a professor of economics at Princeton University. His students were “given” a hypothetical stock with an initial price of $50. The closing price of the stock was determined each day by a coin toss: on heads the price ended half a point higher, on tails half a point lower. Thus, each time, the chance that the price would rise or fall relative to the previous “trading day” was 50%. Out of this purely random process, price cycles and trends emerged.
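    For intuition, here is a minimal Python sketch of this setup (the function name and parameters are illustrative, not taken from Malkiel's experiment):

import random

def coin_flip_stock(days=252, start=50.0, step=0.5, seed=None):
    # Simulate Malkiel's hypothetical stock: each day the closing price
    # moves half a point up on heads and half a point down on tails.
    rng = random.Random(seed)
    prices = [start]
    for _ in range(days):
        prices.append(prices[-1] + (step if rng.random() < 0.5 else -step))
    return prices

series = coin_flip_stock(seed=2016)  # plot it: apparent trends emerge from pure chance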

    Malkiel then plotted the results as charts and showed them to “chartists”, specialists who predict future price movements from patterns in past fluctuations. The chartists advised him to buy the stock immediately. But since the stock did not exist and its price was determined by a coin toss, there were no real patterns and hence there could be no trend. The outcome of the experiment allowed Malkiel to argue that the stock market is as random as a coin toss.

    This is similar to the “financial Turing test”, in which people familiar with financial markets are shown time series charts and asked to determine which of them depict real market data and which are simulations produced by random processes:



    Is this a real market?



    Is it random?



    Or is there no difference at all?

    This is quite hard to determine. It was observations like these that led many researchers in the field of financial markets to ask how truly random the behavior of exchange-traded stocks is. The theory that prices move randomly is called the random walk hypothesis.

    Many researchers have run tests similar to Malkiel's experiment, but in reality such tests do not prove that the stock market evolves randomly. They only prove that, to the human eye, in the absence of additional information, real price movements cannot be distinguished from random ones.

    The hypothesis itself also has drawbacks:

    1. It treats different markets as a homogeneous environment, ignoring the differences between them.
    2. It does not explain the many empirical cases in which people have consistently beaten the market.
    3. It is based on a statistical rather than an algorithmic definition of randomness, which means the hypothesis does not distinguish between local and global randomness and does not take into account the relativity of randomness.

    Nevertheless, like it or not, it cannot be denied that the wide acceptance of the random walk hypothesis among quantitative analysts has had a serious influence on how various financial instruments, such as derivatives and structured products, are priced.

    Algorithmic vs statistical randomness


    Any function whose output cannot be predicted is stochastic (random), and conversely, any function whose output can be predicted is deterministic (non-random). Matters are complicated by the fact that many deterministic functions look stochastic: for example, most random number generators are actually deterministic functions whose output appears stochastic. Because most of them are not truly random, they carry the prefix pseudo- or quasi-.
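    A quick illustration of the point (Python's built-in generator, a Mersenne Twister, is fully deterministic once seeded):

import random

# Two generators with the same seed produce identical "random" streams:
# the output looks stochastic but is a deterministic function of the seed.
a = random.Random(42)
b = random.Random(42)
assert [a.randint(0, 9) for _ in range(10)] == [b.randint(0, 9) for _ in range(10)]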

    To test the validity of the random walk hypothesis, we need to determine whether the returns of a particular stock (our function) are stochastic or deterministic. In theory there are two approaches to the problem, an algorithmic one and a statistical one, but in practice only the latter is used (for reasons explained below).

    Algorithmic approach

    Computability theory, also known as recursion theory or Turing computability, is the branch of theoretical computer science that deals with computable and non-computable functions. A function is computable if there exists an algorithm that, given some input, can always compute it.

    If randomness is the property of unpredictability, then the output of a random function can never be accurately predicted. It follows that random processes are non-computable functions, since no algorithm can be constructed to compute them. The famous Church-Turing thesis postulates that a function is computable if and only if it can be computed by a Turing machine:



    It would seem simple: just use a Turing machine to determine whether there is an algorithm that predicts the behavior of stock prices (our function). But here we run into the halting problem, the task of determining whether an algorithm will run forever or eventually stop.

    This problem is provably undecidable, which means it is impossible to know in advance whether a program will halt or keep running. It is therefore impossible to solve the problem of finding an algorithm that “computes” our function (predicts the stock price): before halting, the Turing machine would have to try every possible algorithm, which would take infinitely long. Hence it is impossible to prove that the financial market is completely random.

    This fact notwithstanding, such research led to the emergence of an interesting field called algorithmic information theory, which deals with the relationship between computability theory and information theory. It defines several kinds of randomness; one of the most popular is Martin-Löf randomness, according to which a string, in order to be recognized as random, must:

    • Be incompressible. Compression means finding a representation of the information that uses less space. For example, the infinitely long binary string 0101010101... can be expressed more concisely as “01 repeated infinitely”, while the infinitely long string 0110000101110110101... has no discernible pattern and therefore cannot be compressed into anything shorter than itself. If the Kolmogorov complexity of a string is greater than or equal to its length, the sequence is algorithmically random (see the compression sketch after this list).
    • Pass statistical tests of randomness. There are many such tests, which compare the distribution of a sequence with the distribution expected of any sequence considered random.
    • Be impossible to profit from. This is an interesting concept: if a betting strategy can be constructed that consistently wins against the sequence, the sequence is not random.
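    Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a crude upper bound on it. A minimal sketch of such a check (zlib is our stand-in here, not a tool from the original study):

import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size divided by original size: a ratio near or above 1.0
    # means the compressor found no pattern to exploit.
    return len(zlib.compress(data, level=9)) / len(data)

print(compression_ratio(b"01" * 5000))       # repetitive pattern -> ratio far below 1
print(compression_ratio(os.urandom(10000)))  # incompressible noise -> ratio around 1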

    In general, one should distinguish between global and local randomness. Global randomness refers to markets in the long run, while a local random walk hypothesis may state that a market is random only over some minimum period of time.

    In the absence of additional information, many systems may seem random without being so; random number generators are again an example. Or, to take a more complex case, the price movement of a stock may seem random, but once you examine the financial statements and other fundamental indicators, it may turn out not to be random at all.

    Statistical approach

    A sequence is statistically random when it contains no detectable patterns. This is not the same as true randomness, i.e. unpredictability: most pseudorandom number generators are not unpredictable, yet they are statistically random. The key requirement here is passing the NIST test suite. Most of these tests check whether the output distribution of a presumably random system matches the output of a truly random system; the original article provides Python code for them.
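    To give a flavor of the suite, below is a sketch of its simplest member, the frequency (monobit) test, which checks whether ones and zeros occur in roughly equal proportion (our own simplified rendering of the test described in NIST SP 800-22, not the article's code):

import math

def monobit_test(bits):
    # Map bits to +/-1 and sum; for a truly random sequence the sum,
    # normalized by sqrt(n), is approximately normally distributed.
    # Returns a p-value; below the significance level, randomness is rejected.
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

print(monobit_test([1, 0] * 500))  # balanced -> p-value 1.0, passes
print(monobit_test([1] * 1000))    # all ones -> p-value ~ 0, fails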

    Hacking the market


    Having reviewed the theoretical foundations of randomness and the tests that can detect it, we arrive at another important question: is it possible to build a system based on such tests that determines the randomness or non-randomness of market sequences better than a human can?

    The researcher decided to conduct his own experiment, for which he used the following data:


    Assets of various types were also analyzed:


    The NIST test suite was run on real data sets, which were discretized and divided into periods of 3, 5, 7 and 10 years. There are also two ways to generate test windows: overlapping and non-overlapping. The first option is better because it lets you observe how the randomness of the market evolves over time, but it degrades the quality of the aggregated P-values, since the windows are not independent.
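    A sketch of how such discretization and windowing might look (the function names below are ours, assumed for illustration):

import numpy as np

def binarize(prices):
    # Discretize a price series into bits: 1 for an up move, 0 otherwise.
    return (np.diff(prices) > 0).astype(int)

def make_windows(bits, size, overlapping=True):
    # Overlapping windows slide one step at a time, tracking how randomness
    # evolves but making the windows (and their P-values) dependent;
    # non-overlapping windows are disjoint and independent.
    step = 1 if overlapping else size
    return [bits[i:i + size] for i in range(0, len(bits) - size + 1, step)]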

    In addition, two simulated data sets were used for comparison. The first is a binary data set generated by applying the same discretization strategy to the Mersenne Twister algorithm (one of the best pseudorandom generators).



    The second is binary data generated by discretizing a sine function.
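    Both benchmarks are easy to reproduce in spirit (the sizes and the sine frequency below are arbitrary choices of ours):

import numpy as np

# Benchmark 1: bits from a Mersenne Twister (NumPy's RandomState wraps MT19937).
mt = np.random.RandomState(seed=42)
prng_bits = mt.randint(0, 2, size=10000)

# Benchmark 2: bits from a discretized sine wave - deterministic and periodic,
# so it should fail most randomness tests.
t = np.arange(10000)
sine_bits = (np.sin(0.1 * t) > 0).astype(int)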



    Problems


    Every experiment has its weaknesses, and this one is no exception:

    1. Some tests require more data than the market has generated (short of using minute or tick data, which is not always possible), so their statistical significance is somewhat less than ideal.
    2. The NIST tests check only for the default (uniform) kind of randomness; markets might instead be random with a normal or some other distribution, and would still be random.
    3. The time periods (each starting on January 1) and the significance level (0.005) were chosen rather arbitrarily. The tests should be run on a much larger set of samples starting at every month or quarter. That said, the significance level did not materially affect the final conclusions: at various values (0.001, 0.005, 0.05) some tests still failed in certain periods (for example, 1954-1959).

    Results


    Here are the results obtained with the two testing methods, overlapping and non-overlapping windows:





    The following conclusions can be drawn:

    1. The values lie between those of the two benchmarks, meaning that markets are less random than the Mersenne Twister and more random than the sine function, but in the end they are not random.
    2. The values vary considerably with the measurement setup: window size seriously affects the result, and markets are not equally random, with some more random than others.
    3. The benchmark values are consistently good for the Mersenne Twister (more than 90% of the tests passed on average) and bad for the sine function (10-30% of the tests passed on average).

    At the beginning of this article we looked at the experiment of Professor Burton Malkiel, author of the famous book A Random Walk Down Wall Street: he generated random walks by tossing a coin and showed the resulting charts to a chartist. When the chartist said that the “stock” should be bought, Professor Malkiel concluded that the financial market is no different from a coin toss and used this thesis to justify a passive buy-and-hold strategy.

    However, the author of the present study believes this conclusion is mistaken, since the professor's experiment only shows that, from the chartist's point of view, there is no difference between a coin toss and the market. From the point of view of quantitative analysts and traders, or of their algorithms, this is far from obvious. And the experiment with the NIST test suite showed that although it can be hard for a human to distinguish randomly generated data from real financial information, markets are in fact far from random.