
About fractals, martingales and random integrals. Part one
- Tutorial

In my opinion, stochastic calculus is one of those magnificent branches of higher mathematics (along with topology and complex analysis) where formulas meet poetry; it is the place where beauty lives, the place where the scope for artistic creation begins. Many of those who read the article Wiener Chaos or Another way to flip a coin, even if they understood only a little of it, still managed to appreciate the magnificence of this theory. Today we continue our mathematical journey: we will plunge into the world of random processes, non-trivial integration and financial mathematics, and even touch a little on functional programming. Fair warning: keep your brain cells ready, because we have a serious conversation ahead.

The hidden threat of fractals
Imagine some simple function: continuous, smooth, with a well-defined derivative at every point. Intuition suggests that any continuous function should behave this way, at least piecewise.
By the time Benoit Mandelbrot coined the very term "fractal", such objects had already been robbing world-class scientists of sleep for a hundred years. One of the first whose reputation fractals tarnished was André-Marie Ampère. In 1806 the scientist put forward a rather convincing "proof" that any function can be divided into intervals of continuous functions, and those, in turn, owing to their continuity, must have a derivative. Many of his contemporaries among mathematicians agreed with him. In fairness, Ampère cannot really be blamed for this seemingly crude mistake: mathematicians of the early 19th century had only a vague grasp of such now-basic concepts of mathematical analysis as continuity and differentiability, and as a result could err badly in their proofs. Even such eminent scientists as Gauss, Fourier, Cauchy, Legendre and Galois were no exception.
Nevertheless, in 1872 Karl Weierstrass refuted the hypothesis that continuous functions must have derivatives by presenting the world's first fractal monster: a function that is continuous everywhere and differentiable nowhere.

Here, for example, is how one such function is defined (the classical Weierstrass function):

$$W(t) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi t), \qquad 0 < a < 1, \quad ab > 1 + \tfrac{3\pi}{2}.$$
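To get a feel for this monster, here is a minimal Haskell sketch that samples a truncated version of the series (the function name and the truncation depth are our own choices; the classical conditions on a and b are noted in the comments):

-- | 'weierstrass' partial sum of the Weierstrass series at point t;
-- continuity holds for 0 < a < 1, and nowhere-differentiability (in the
-- limit of infinitely many terms) requires ab > 1 + 3π/2
weierstrass :: Double -- a, 0 < a < 1
            -> Double -- b, with ab > 1 + 3π/2
            -> Int    -- depth, number of terms in the partial sum
            -> Double -- t, point of evaluation
            -> Double
weierstrass a b depth t = sum [a ^ n * cos (b ^ n * pi * t) | n <- [0 .. depth]]

For instance, weierstrass 0.5 13 30 can be sampled over [0, 1] and plotted with the same tools we use for Brownian motion below.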
Brownian motion possesses the same property (incidentally, in Russian it is customarily spelled "броуновское" rather than "брауновское", even though Brown's surname is British). Brownian motion is a continuous random process \(B(t)\) with \(B(0) = 0\), independent increments, and \(B(t) - B(s) \sim \mathcal{N}(0, t - s)\) for \(s < t\); its trajectories are continuous everywhere and differentiable nowhere.
Infinite variation of Brownian motion
Let us consider a function once more, this time together with its total variation on \([0, T]\):

$$V_T(f) = \sup_{\Pi} \sum_{i=0}^{n-1} |f(t_{i+1}) - f(t_i)|,$$

where the supremum is taken over all partitions \(\Pi = \{0 = t_0 < t_1 < \dots < t_n = T\}\). For any reasonably smooth function, say a continuously differentiable one, this quantity

will be finite. In which cases can we not be so sure? Well, for example, a function that oscillates ever faster, such as \(t \sin(1/t)\) near zero, will not have finite variation.
Here is what a fairly simple implementation of total variation looks like in Haskell:
-- | 'totalVariation' calculates the total variation of the sampled process X(t)
totalVariation :: [Double] -- process trajectory X(t)
               -> Double   -- V(X(t))
totalVariation []        = 0
totalVariation xs@(_:tx) = sum $ zipWith (\cur prev -> abs (cur - prev)) tx xs
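A quick sanity check (a hypothetical GHCi session; the sample values are chosen to be exactly representable in binary floating point, so the results are exact): a monotone trajectory's variation is simply its total rise, while a zigzag accumulates every swing.

Prelude> totalVariation [0, 0.25, 0.5, 0.75, 1.0]
1.0
Prelude> totalVariation [0, 0.5, 0.25, 0.75]
1.25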
Where do random processes occur in the real world? One of the best-known examples today is the financial markets, namely the movement of prices of currencies, stocks, options and other securities. Let \(S(t)\) denote the price of some asset at time \(t\).
Now we could easily integrate something. Take the trajectory of some continuous random process, call it \(X(t)\), and try to make sense of an integral like \(\int_0^T f(t)\,dX(t)\) in the Lebesgue–Stieltjes sense, which works as long as \(X\) has finite variation.

So the question is: can we take an integral with respect to Brownian motion in the Lebesgue–Stieltjes sense? Let us compute its variation: if it is finite, we should have no problems. We will approach it through the so-called quadratic variation:

$$[X]_T = \lim_{\|\Pi\| \to 0} \sum_{i=0}^{n-1} \left(X(t_{i+1}) - X(t_i)\right)^2.$$
Note that

$$\sum_{i} \left(X(t_{i+1}) - X(t_i)\right)^2 \le \max_{i} |X(t_{i+1}) - X(t_i)| \cdot \sum_{i} |X(t_{i+1}) - X(t_i)|,$$

so for a continuous process of finite total variation the first factor tends to zero as the partition refines, and the quadratic variation must vanish.
Let us write code that computes the quadratic variation of a sampled trajectory of a random process:
-- | 'quadraticVariation' calculates the quadratic variation of the sampled process X(t)
quadraticVariation :: [Double] -- process trajectory X(t)
                   -> Double   -- [X(t)]
quadraticVariation []        = 0
quadraticVariation xs@(_:tx) = sum $ zipWith (\cur prev -> (cur - prev) ^ 2) tx xs
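And for contrast (again a hypothetical session with exactly representable inputs): for a smooth trajectory the quadratic variation shrinks as the grid is refined; halving the step halves the result.

Prelude> quadraticVariation [0, 0.25, 0.5, 0.75, 1.0]
0.25
Prelude> quadraticVariation [0, 0.125 .. 1.0]
0.125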
So, now we are ready to return to the variation that interests us, \(V_T(B)\). One can check that the quadratic variation of Brownian motion on \([0, T]\) equals \(T\): each squared increment has expectation \(t_{i+1} - t_i\), and these sum to exactly \(T\). We know that Brownian motion is a continuous process, which by the inequality above means that if its total variation were finite, its quadratic variation would have to be zero. It is not. Therefore

$$V_T(B) = \infty.$$

Oh yes...

The Itô integral, or A New Hope
In 1827, the Scottish botanist Robert Brown, observing pollen grains in water through a microscope, noticed that tiny particles ejected from the grains jittered about in a peculiar way; he was, however, unable to identify the mechanism causing this movement. In 1905, Albert Einstein published a paper explaining in detail how the motion Brown observed was the result of the pollen particles being buffeted by individual water molecules. In 1923, the child prodigy and "father of cybernetics" Norbert Wiener wrote his paper on differential space, in which he approached this motion from the mathematical side: he constructed a probability measure on the space of trajectories and, using the concept of the Lebesgue integral, laid the foundation of stochastic analysis. Since then the mathematical community has also called Brownian motion the Wiener process.
In 1938, Kiyoshi Itô, a graduate of the University of Tokyo, began working at the national Bureau of Statistics, where in his free time he studied the work of Andrei Kolmogorov, the founder of modern probability theory, and of the French scientist Paul Lévy, who at the time was investigating various properties of Brownian motion. Itô tried to combine Lévy's intuitive vision with Kolmogorov's exact logic, and in 1942 he wrote "On stochastic processes (infinitely divisible laws of probability)", in which he built from scratch the concept of the stochastic integral and the accompanying theory of stochastic differential equations describing motion driven by random events.
Itô integral for simple processes
Itô developed a fairly simple way to build the integral. First, we define the Itô integral for a simple, piecewise-constant process \(\Delta(t)\) that holds a constant value on each interval \([t_i, t_{i+1})\) of a partition \(0 = t_0 < t_1 < \dots < t_n = T\):

$$I(t) = \sum_{i=0}^{k-1} \Delta(t_i)\left(B(t_{i+1}) - B(t_i)\right) + \Delta(t_k)\left(B(t) - B(t_k)\right), \qquad t \in [t_k, t_{k+1}).$$

Imagine that \(B(t)\) is the price of a stock and \(\Delta(t)\) is the number of shares we hold, chosen at the left endpoint of each interval. Then \(I(t)\) is simply our accumulated trading gain up to time \(t\).
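Here is a minimal Haskell sketch of this definition for sampled data (the name itoIntegralSimple is our own, and we assume both lists are sampled on the same uniform grid):

-- | 'itoIntegralSimple' sums Δ(t_i) * (B(t_{i+1}) - B(t_i)) over the grid,
-- evaluating the integrand at the left endpoint of each interval, as Itô prescribes
itoIntegralSimple :: [Double] -- Δ(t_i), simple process at the left endpoints
                  -> [Double] -- B(t_i), sampled trajectory of Brownian motion
                  -> Double   -- I(T), the integral over the whole grid
itoIntegralSimple deltas b@(_:tb) = sum $ zipWith3 (\d cur prev -> d * (cur - prev)) deltas tb b
itoIntegralSimple _      _        = 0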
This integral has several interesting properties. The first and rather obvious one: its expectation at any moment of time is zero:

$$\mathbb{E}[I(t)] = 0.$$
Computing the second moment, i.e. the variance, already requires some extra work. The result is known as the Itô isometry:

$$\mathbb{E}[I^2(t)] = \mathbb{E}\int_0^t \Delta^2(s)\,ds.$$
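A sketch of why this holds: expand the square of the sum and write \(\Delta B_i = B(t_{i+1}) - B(t_i)\). Each cross term with \(i < j\) vanishes, because \(\Delta B_j\) has zero mean and is independent of everything known at time \(t_j\); each diagonal term factorizes, since \(\Delta B_i^2\) is independent of the \(\mathcal{F}(t_i)\)-measurable \(\Delta^2(t_i)\) and has expectation \(t_{i+1} - t_i\):

$$\mathbb{E}[I^2(t)] = \sum_{i,j} \mathbb{E}\left[\Delta(t_i)\Delta(t_j)\,\Delta B_i\,\Delta B_j\right] = \sum_{i} \mathbb{E}\left[\Delta^2(t_i)\right](t_{i+1} - t_i) = \mathbb{E}\int_0^t \Delta^2(s)\,ds.$$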
And finally, the quadratic variation. Let us first compute the variation on each interval \([t_i, t_{i+1}]\), where \(\Delta\) is constant: it equals \(\Delta^2(t_i)\,(t_{i+1} - t_i)\), since the quadratic variation accumulated by \(B\) there is \(t_{i+1} - t_i\). Summing over the intervals gives

$$[I](t) = \int_0^t \Delta^2(s)\,ds.$$
Now we can clearly see the difference between the variance and the quadratic variation. The variance \(\mathbb{E}\int_0^t \Delta^2(s)\,ds\) is a single number, averaged over all possible trajectories, whereas the quadratic variation \(\int_0^t \Delta^2(s)\,ds\) is computed along one trajectory. If the process \(\Delta\) is itself random, the quadratic variation will differ from one trajectory to another, while the variance will not.
Itô integral for general processes
Now we are ready to extend the notion of the Itô integral to more general processes. Suppose we have a process \(\Delta(t)\) that is adapted and square-integrable, \(\mathbb{E}\int_0^T \Delta^2(t)\,dt < \infty\). In order to define

$$I(t) = \int_0^t \Delta(s)\,dB(s),$$

we approximate \(\Delta\) by simple processes. The approximation goes as follows: we choose a partition \(0 = t_0 < t_1 < \dots < t_n = T\) and freeze the value of \(\Delta\) at the left endpoint of each interval. In general, one can choose a sequence of simple processes \(\Delta_n\) such that \(\mathbb{E}\int_0^T (\Delta_n(t) - \Delta(t))^2\,dt \to 0\) and define the Itô integral as the limit of the integrals of \(\Delta_n\). The resulting integral keeps the properties we saw above:
- Its trajectories are continuous
- It is adapted. This is only logical: at every moment of time we must know the current value of our gain.
- It is linear: \(\int_0^t (\alpha\,\Delta(s) + \beta\,\Gamma(s))\,dB(s) = \alpha \int_0^t \Delta(s)\,dB(s) + \beta \int_0^t \Gamma(s)\,dB(s)\)
- It is a martingale, but more on that closer to the end of the article.
- The Itô isometry is preserved: \(\mathbb{E}[I^2(t)] = \mathbb{E}\int_0^t \Delta^2(s)\,ds\).
- Its quadratic variation is \([I](t) = \int_0^t \Delta^2(s)\,ds\).
A small remark: for those who want to study the construction of the Itô integral in more detail, or are interested in financial mathematics in general, I highly recommend Steven Shreve's book "Stochastic Calculus for Finance II: Continuous-Time Models".
Let's go!
For the reasoning that follows, we write the Itô integral in a slightly different form: for a deterministic function \(g \in L^2[0, T]\), let

$$I_g(t) = \int_0^t g(s)\,dB(s).$$

Such an integral is a zero-mean Gaussian process with covariance \(\mathrm{Cov}(I_g(s), I_g(t)) = \int_0^{\min(s,t)} g^2(u)\,du\), i.e. the squared \(L^2\)-norm of \(g\) up to time \(\min(s, t)\).
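To see why, one application of the Itô isometry suffices (a sketch, for \(s \le t\)): the increment \(\int_s^t g\,dB\) is independent of \(\mathcal{F}(s)\) and has zero mean, so

$$\mathrm{Cov}(I_g(s), I_g(t)) = \mathbb{E}\left[\int_0^s g\,dB \left(\int_0^s g\,dB + \int_s^t g\,dB\right)\right] = \mathbb{E}\left[\left(\int_0^s g\,dB\right)^2\right] = \int_0^s g^2(u)\,du.$$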
In the previous article, we implemented the Gaussian process:
data L2Map = L2Map {norm_l2 :: Double -> Double}

type ItoIntegral = Seed      -- ω, random state
                -> Int       -- n, sample size
                -> Double    -- T, end of the time interval
                -> L2Map     -- h, L2-function
                -> [Double]  -- list of values sampled from the Ito integral

-- | 'itoIntegral'' trajectory of the Ito integral on the time interval [0, T]
itoIntegral' :: ItoIntegral
itoIntegral' seed n endT h = 0 : (toList $ gp !! 0)
  where gp = gaussianProcess seed 1 (n-1) (\(i, j) -> norm_l2 h $ fromIntegral (min i j + 1) * t)
        t = endT / fromIntegral n
The itoIntegral' function takes as input: seed, a parameter for the random generator; n, the dimension of the output vector; endT, the right end of the time interval \([0, T]\); and h, an \(L^2\)-function wrapped in L2Map, whose running squared norm defines the covariance of the process.
-- | 'mapOverInterval' map function f over the interval [0, T]
mapOverInterval :: (Fractional a) => Int -- n, size of the output list
-> a -- T, end of the time interval
-> (a -> b) -- f, function that maps from fractional numbers to some abstract space
-> [b] -- list of values f(t), t \in [0, T]
mapOverInterval n endT fn = [fn $ (endT * fromIntegral i) / fromIntegral (n - 1) | i <- [0..(n-1)]]
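A quick check of this helper (a hypothetical GHCi session; the grid values are exactly representable, so the output is exact):

Prelude> mapOverInterval 5 1 id
[0.0,0.25,0.5,0.75,1.0]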
-- | 'itoIntegral' faster implementation of the itoIntegral' function
itoIntegral :: ItoIntegral
itoIntegral _    0 _    _ = []
itoIntegral seed n endT h = scanl (+) 0 increments
  where increments = toList $ sigmas hnorms * gaussianVector
        gaussianVector = flatten $ gaussianSample seed (n-1) (vector [0]) (H.sym $ matrix 1 [1])
        sigmas s@(_:ts) = fromList $ zipWith (\x y -> sqrt (x - y)) ts s
        hnorms = mapOverInterval n endT $ norm_l2 h
Now, using this function, we can implement, for example, ordinary Brownian motion:
l2_1 = L2Map {norm_l2 = id}
-- | 'brownianMotion' trajectory of Brownian motion a.k.a. Wiener process on the time interval [0, T]
brownianMotion :: Seed -- ω, random state
-> Int -- n, sample size
-> Double -- T, end of the time interval
-> [Double] -- list of values sampled from Brownian motion
brownianMotion seed n endT = itoIntegral seed n endT l2_1
Let us draw several different trajectories of Brownian motion:
import Graphics.Plot
let endT = 1
let n = 500
let b1 = brownianMotion 1 n endT
let b2 = brownianMotion 2 n endT
let b3 = brownianMotion 3 n endT
mplot $ linspace n (0, endT) : map fromList [b1, b2, b3]

It's time to double-check our earlier calculations. Let us compute the variation of a Brownian motion trajectory on the interval \([0, 1]\), refining the sampling grid as we go:
Prelude> totalVariation (brownianMotion 1 100 1)
8.167687948236862
Prelude> totalVariation (brownianMotion 1 10000 1)
80.5450335433388
Prelude> totalVariation (brownianMotion 1 1000000 1)
798.2689137110999
We can observe that as the accuracy increases, the total variation takes on larger and larger values, while the quadratic variation tends to \(T = 1\):
Prelude> quadraticVariation (brownianMotion 1 100 1)
0.9984487348804609
Prelude> quadraticVariation (brownianMotion 1 10000 1)
1.0136583395467458
Prelude> quadraticVariation (brownianMotion 1 1000000 1)
1.0010717246843375
Try running these variation calculations for arbitrary Itô integrals to convince yourself that the total variation diverges while the quadratic variation converges to \(\int_0^T g^2(s)\,ds\), i.e. to norm_l2 h evaluated at \(T\).
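For instance, here is a minimal sketch (the integrand \(g(s) = s\), and hence the cumulative squared norm \(t^3/3\), is our own choice here):

-- g(s) = s gives ∫₀ᵗ g²(s) ds = t³/3, so on [0, 1] the quadratic variation
-- of the corresponding Ito integral should approach 1/3
l2_t :: L2Map
l2_t = L2Map {norm_l2 = \t -> t ^ 3 / 3}
-- in GHCi: quadraticVariation (itoIntegral 1 100000 1 l2_t)
-- a value close to 1/3 is expected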
Martingales
There is a class of random processes called martingales. For a process \(X(t)\) to be a martingale, three conditions must hold:
- It is adapted to the filtration \(\mathcal{F}(t)\)
- Its expectation is finite: \(\mathbb{E}|X(t)| < \infty\)
- The conditional expectation of a future value, given everything known so far, equals the last known value: \(\mathbb{E}[X(t) \mid \mathcal{F}(s)] = X(s)\) for all \(s \le t\)
Brownian motion is a martingale. The first condition obviously holds, and the second follows from the properties of the normal distribution. The third is also easily verified: since the increment \(B(t) - B(s)\) is independent of \(\mathcal{F}(s)\) and has zero mean,

$$\mathbb{E}[B(t) \mid \mathcal{F}(s)] = \mathbb{E}[B(t) - B(s) \mid \mathcal{F}(s)] + B(s) = B(s).$$
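As a quick empirical illustration, here is a minimal Monte Carlo sketch (the function martingaleCheck and its parameters are our own, assuming brownianMotion from above): the increment B(1) - B(1/2) should average out to roughly zero across trajectories, no matter what happened up to time 1/2.

-- | 'martingaleCheck' sample mean of the increment B(1) - B(1/2) over m trajectories;
-- the martingale property predicts a value near zero
martingaleCheck :: Int    -- m, number of simulated trajectories
                -> Double -- sample mean of B(1) - B(1/2)
martingaleCheck m = sum increments / fromIntegral m
  where n = 500
        increments = [ let b = brownianMotion seed n 1
                       in last b - b !! (n `div` 2)
                     | seed <- [1 .. m] ]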
There is another interesting theorem, the so-called martingale representation theorem: if \(M(t)\) is a martingale with respect to the filtration generated by a Brownian motion, then there exists an adapted process \(\Gamma(s)\) such that \(M(t) = M(0) + \int_0^t \Gamma(s)\,dB(s)\). In other words, every such martingale is, up to a constant, an Itô integral.
And that's all for today.

Poll: Is it worth covering the Itô–Doeblin lemma?
- 95% Yes, definitely worth it! (95 votes)
- 5% No, this is not the right format for Habr (5 votes)