# Divisible factorials


Recently, I was completely baffled by this Fermat's Library tweet:


*“That's what happens if you do not multiply in a factorial, but divide.”*

When I saw it, I had to drop what I was doing, grab a notebook, and check the formula. The result sketched in the tweet seemed logical. Since the multiplicative version $n!$ tends to infinity as $n$ increases, the “divisive” version ought to tend to zero. And $n^2/n!$ behaves exactly that way; the polynomial function $n^2$ grows more slowly than the factorial function $n!$ for large enough $n$:

$$\lim_{n \to \infty} \frac{n^2}{n!} = 0$$

But why does the result of the division take exactly the form $n^2/n!$? Where does the $n^2$ come from?

To answer this question, I had to reopen an old wound from my school days, the study of dividing fractions, but I coped with the pain. Working through the tweet's formula from left to right, we first get $\frac{n}{n-1}$. Then, dividing this value by $n-2$, we get

$$\frac{\;\frac{n}{n-1}\;}{n-2} = \frac{n}{(n-1)(n-2)}$$

Continuing in this way, we end up with:

$$n \div (n-1) \div (n-2) \div \cdots \div 1 = \frac{n}{(n-1)!}$$

To arrive at the result shown in the tweet, $n^2/n!$, we simply multiply the numerator and the denominator by $n$. (Though to my taste the expression $\frac{n}{(n-1)!}$ is clearer.)

I am an officially recognized factorial fan. You can keep your Fibonacci sequences;

*here is* my favorite function. Every time I learn a new programming language, my first exercise is to write several procedures for computing factorials. Over the years I have come up with several variations on this theme, for example, replacing the $\times$ in the definition with $+$ (which gives us the triangular numbers). But it seems I had never thought about replacing $\times$ with $\div$. The result is strange. Since multiplication is commutative and associative, we can define $n!$ simply as the product of all integers from $1$ to $n$, without worrying about the order of operations. But with division, order cannot be ignored. In general, $a \div b \ne b \div a$ and $(a \div b) \div c \ne a \div (b \div c)$.

In the Fermat's Library tweet, the divisors are arranged in descending order: $n, n-1, n-2, \ldots, 1$. The most obvious alternative is ascending order: $1, 2, 3, \ldots, n$. What happens if we define the divisive factorial as $1 \div 2 \div 3 \div \cdots \div n$? Another return to the schoolbook fraction-division algorithm gives us a simple answer:

$$1 \div 2 \div 3 \div \cdots \div n = \frac{1}{n!}$$

In other words, when we divide repeatedly, counting up from $1$ to $n$, the final result is the reciprocal of $n!$. (I would like to put an exclamation mark at the end of that sentence, but alas!) If you are looking for a canonical answer to the question “What do we get by dividing instead of multiplying in $n!$?”, then I would argue that $\frac{1}{n!}$ is a better candidate than $\frac{n^2}{n!}$. Why not embrace the symmetry between $n!$ and its inverse $\frac{1}{n!}$?
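Both left-to-right traversals are easy to check numerically. Here is a minimal sketch in Python (the article's code is Julia; this transcription and its helper name are mine), using exact rational arithmetic:

```python
from fractions import Fraction
from math import factorial

def divide_left_to_right(terms):
    """Fold a list of integers with division, strictly left to right."""
    result = Fraction(terms[0])
    for t in terms[1:]:
        result /= t
    return result

n = 6
descending = divide_left_to_right(list(range(n, 0, -1)))  # n ÷ (n-1) ÷ ... ÷ 1
ascending = divide_left_to_right(list(range(1, n + 1)))   # 1 ÷ 2 ÷ ... ÷ n

assert descending == Fraction(n * n, factorial(n))  # the tweet's n²/n!
assert ascending == Fraction(1, factorial(n))       # the reciprocal 1/n!
```

The same two assertions hold for every positive `n`, which is exactly the claim made above.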

Of course, there are many other ways of placing

*n* integers in the dividing sequence. But how many? Exactly $n!$ of them! So it might seem that there are $n!$ distinct ways to define a divisive factorial. However, studying the answers for the two permutations shown above reveals a simpler pattern. Whichever element of the sequence appears first ends up in the numerator of a big fraction, and the denominator is the product of all the other elements. Therefore, in the end there are only $n$ distinct results (assuming we always perform the divisions strictly from left to right). For any integer $k$ in the range $1$ to $n$, putting $k$ at the head of the line creates a divisive factorial equal to $k$ divided by all the other factors. It can be written like this:

$$\frac{k}{\,n!/k\,} = \frac{k^2}{n!}$$
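For small $n$ this claim can be verified by brute force over all permutations. A Python sketch (the helper name is mine), reusing exact rationals:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def divide_left_to_right(terms):
    """Fold a sequence of integers with division, strictly left to right."""
    result = Fraction(terms[0])
    for t in terms[1:]:
        result /= t
    return result

n = 5
outcomes = {divide_left_to_right(p) for p in permutations(range(1, n + 1))}

# Among n! = 120 permutations there are only n distinct values,
# namely k²/n! for k = 1..n.
assert outcomes == {Fraction(k * k, factorial(n)) for k in range(1, n + 1)}
```

Whatever order the remaining factors appear in, they all land in the denominator, so only the leading element matters.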

And so we have solved the little riddle of how, in this tweet, $n \div (n-1) \div \cdots \div 1$ turned into $\frac{n^2}{n!}$: it is simply the case $k = n$.

It is worth noting that all of these functions converge to zero as $n$ tends to infinity. Since each $k^2$ is a constant, the $n$ variants $\frac{k^2}{n!}$ are asymptotically identical.

TA-dah! Mission Complete. Problem solved. It is done. Now we know everything we need about dividing factorials, right?

Well, maybe there is one more question. What will the computer say? If we take our favorite factorial algorithm and do what the tweet suggests, replacing every occurrence of the operator `*` with `/`, what will happen? Which of the $n$ variants of the divisive $n!$ will the program give us? Here is *my* favorite algorithm for computing factorials, as a program in Julia:

```
function mul!(n)
    if n == 1
        return 1
    else
        return n * mul!(n - 1)
    end
end
```

This algorithm has introduced whole generations of nerds to the concept of recursion. In words, it reads: if `n` equals `1`, then `mul!(n)` equals `1`. Otherwise, compute the function `mul!(n - 1)` and multiply the result by `n`.

_{You may ask what happens if $n$ is zero or negative. You may ask, but better not. For present purposes, $n$ is a positive integer. }Starting from any positive $n$, the sequence of recursive calls will sooner or later bottom out at $n = 1$.

The function can be written more succinctly using the Julia single-line definition style:

`mul!(n) = n == 1 ? 1 : n * mul!(n - 1)`

The right-hand side of the assignment is a conditional expression, or ternary operator, of the form `a ? b : c`. Here `a` is a boolean test that must return either `true` or `false`. If `a` is `true`, the expression `b` is evaluated, and its result becomes the value of the whole expression. Otherwise `c` is evaluated. Just to make sure I did everything right, here are the first 10 factorials computed by this program:

```
[mul!(n) for n in 1:10]
10-element Array{Int64,1}:
       1
       2
       6
      24
     120
     720
    5040
   40320
  362880
 3628800
```

Now let's change this definition, turning the single occurrence of `*` into `/` and leaving everything else unchanged (except the function name):

`div!(n) = n == 1 ? 1 : n / div!(n - 1)`

And here is what the program returns if we run it for values of `n` from `1` to `20`:

```
[div!(n) for n in 1:20]
20-element Array{Real,1}:
 1
 2.0
 1.5
 2.6666666666666665
 1.875
 3.2
 2.1875
 3.657142857142857
 2.4609375
 4.063492063492063
 2.70703125
 4.432900432900433
 2.9326171875
 4.773892773892774
 3.14208984375
 5.092152292152292
 3.338470458984375
 5.391690662278897
 3.523941040039063
 5.675463855030418
```

What? This is definitely not heading toward zero, like $\frac{n^2}{n!}$ or $\frac{1}{n!}$. In fact, the values don't look like they converge at all. Judging from the graph shown below, the sequence consists of two alternating components, each of which seems to grow slowly toward infinity while diverging from the other.

To understand what we are seeing here, it helps to change the output type of the function `div!`. Instead of the division operator `/`, which returns a floating-point value, we can use the operator `//`, which returns an exact rational number, reduced to lowest terms:

`div!(n) = n == 1 ? 1 : n // div!(n - 1)`

Here is the sequence of values for `n` in the range `1:20`:

```
20-element Array{Real,1}:
      1
      2//1
      3//2
      8//3
     15//8
     16//5
     35//16
    128//35
    315//128
    256//63
    693//256
   1024//231
   3003//1024
   2048//429
   6435//2048
  32768//6435
 109395//32768
  65536//12155
 230945//65536
 262144//46189
```

The list is full of curious patterns. It is a double helix, with the even and odd numbers zigzagging in complementary strands. The even numbers are not just even; they are all powers of $2$. Moreover, they appear in pairs, first in a numerator, then in a denominator, and their sequence is non-decreasing. But there are gaps; not all powers of $2$ are present. The odd strand looks even more complicated, with various small prime factors flickering in and out of the numbers. (The prime factors *must* be small, at the very least smaller than $n$.)

This result surprised me. I had expected a much tamer sequence, like the ones I worked out on paper. All this jumping up and down made no sense. Neither did the general trend toward unbounded growth of the ratio. How can we keep dividing, over and over, and get bigger and bigger numbers?

At this stage, you may want to pause your reading and try to come up with your own theory of where these zigzag numbers come from. If you need a hint, here is one, and a very strong one, almost a spoiler: look up the sequence of numerators or the sequence of denominators in the Online Encyclopedia of Integer Sequences.

Here is another hint. A small change to the program `div!` completely transforms the output. Just modify the last expression, replacing `n // div!(n - 1)` with `div!(n - 1) // n`:

`div!(n) = n == 1 ? 1 : div!(n - 1) // n`

Now the results look like this:

```
10-element Array{Real,1}:
 1
 1//2
 1//6
 1//24
 1//120
 1//720
 1//5040
 1//40320
 1//362880
 1//3628800
```

This is the reciprocal-factorial function we have already seen, the series of values generated by a left-to-right traversal of the divisors in increasing order, $1 \div 2 \div 3 \div \cdots \div n$.

Not surprisingly, changing the last expression in the procedure changes the result. After all, we know that division is neither commutative nor associative. But it is hard to see why the sequence of values generated by the original program takes such a strange zigzag shape. What mechanism generates those paired powers of two and the alternation of odd and even values?

I found that what happens in the zigzag sequence is easier to explain with an iterative version of the procedure rather than the recursive one. (This statement may annoy those who consider recursive definitions simpler, but so it goes.) Here is the program:

```
function div!_iter(n)
    q = 1
    for i in 1:n
        q = i // q
    end
    return q
end
```

I claim that this procedure with a `for` loop is equivalent to the recursive function, in the sense that whenever `div!(n)` and `div!_iter(n)` both return a result for some positive integer `n`, the result is the same. Here is my evidence:

```
[div!(n) for n in 1:20]      [div!_iter(n) for n in 1:20]
      1                            1
      2//1                         2//1
      3//2                         3//2
      8//3                         8//3
     15//8                        15//8
     16//5                        16//5
     35//16                       35//16
    128//35                      128//35
    315//128                     315//128
    256//63                      256//63
    693//256                     693//256
   1024//231                    1024//231
   3003//1024                   3003//1024
   2048//429                    2048//429
   6435//2048                   6435//2048
  32768//6435                  32768//6435
 109395//32768                109395//32768
  65536//12155                 65536//12155
 230945//65536                230945//65536
 262144//46189                262144//46189
```

To understand the process that generates these numbers, consider the successive values of the variables `i` and `q` on each pass through the loop. Initially, `i` and `q` are both equal to $1$; so after the first pass, the expression `q = i // q` leaves `q` with the value $1$. Then `i = 2` and `q = 1`, so the new value of `q` is $2$. On the third iteration, `i = 3` and `q = 2`, which gives us `q = 3//2`. If this still confuses you, think of $\frac{3}{2}$ as $3 \div 2$. The key observation here is that on every trip through the loop, the old value of `q` is turned upside down, ending up in the denominator of the new value $\frac{i}{q}$. If you unroll these operations and look at the multiplications and divisions that make up each element of the series, a pattern emerges:

$$1,\quad \frac{2}{1},\quad \frac{3 \cdot 1}{2},\quad \frac{4 \cdot 2}{3 \cdot 1},\quad \frac{5 \cdot 3 \cdot 1}{4 \cdot 2},\quad \frac{6 \cdot 4 \cdot 2}{5 \cdot 3 \cdot 1},\ \ldots$$

In general:

$$\mathtt{div!}(n) = \frac{n \cdot (n-2) \cdot (n-4) \cdots}{(n-1) \cdot (n-3) \cdot (n-5) \cdots}$$

The functions $n \cdot (n-2) \cdot (n-4) \cdots 1$ for odd $n$ and $n \cdot (n-2) \cdot (n-4) \cdots 2$ for even $n$ have a name of their own! They are called double factorials, written $n!!$.

_{Terrible terminology, right? “Semifactorials” would have been a better name. And if I didn't know better, I would read $n!!$ as the factorial of the factorial, $(n!)!$. }The double factorial of *n* is defined as the product of *n* and all smaller positive integers of the same parity. So our curious sequence of zigzag values is simply $\frac{n!!}{(n-1)!!}$.

A 2012 article by Henry W. Gould and Jocelyn Quaintance (alas, behind a paywall) explores the uses of double factorials. They turn up much more often than you might expect. In the middle of the 17th century, John Wallis derived the following identity:

$$\frac{\pi}{2} = \frac{2 \cdot 2 \cdot 4 \cdot 4 \cdot 6 \cdot 6 \cdots}{1 \cdot 3 \cdot 3 \cdot 5 \cdot 5 \cdot 7 \cdots}$$

An even more bizarre series, involving the cubes of double-factorial ratios, sums to $\frac{2}{\pi}$. It was discovered by none other than Srinivasa Ramanujan.
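Back to our zigzag sequence: the identity with ratios of double factorials is easy to check numerically. A self-contained Python sketch (the function names are mine, not from the original post):

```python
from fractions import Fraction

def double_factorial(n):
    """Product of n and all smaller positive integers of the same parity."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def zigzag(n):
    """Iterative divisive factorial: q = i / q for i = 1..n."""
    q = Fraction(1)
    for i in range(1, n + 1):
        q = i / q
    return q

# The zigzag numbers are exactly n!! / (n-1)!!.
for n in range(1, 21):
    assert zigzag(n) == Fraction(double_factorial(n), double_factorial(n - 1))
```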

Gould and Quaintance also considered the double-factorial equivalent of the binomial coefficients. The standard binomial coefficient is defined as:

$$\binom{n}{k} = \frac{n!}{k! \, (n-k)!}$$

The double version looks like this:

$$\binom{n}{k}_{\!!} = \frac{n!!}{k!! \, (n-k)!!}$$

Note that our zigzag numbers fit this description, and so they can be regarded as binomial coefficients of double factorials. Specifically, they are the numbers:

$$\binom{n}{1}_{\!!} = \frac{n!!}{1!! \, (n-1)!!} = \frac{n!!}{(n-1)!!}$$

The ordinary binomial coefficient $\binom{n}{1}$ is not very interesting; it is simply equal to $n$. But the double version $\binom{n}{1}_{\!!}$, as we have seen, dances a livelier dance. And unlike the ordinary binomial coefficient, it is not always an integer. (The only integer values are $1$ and $2$.)

Viewing the zigzag numbers as quotients of double factorials explains quite a few of their properties, starting with the alternation of even and odd values. We can also see why all the even numbers in the sequence are powers of $2$. Consider the example $n = 6$. The numerator of the fraction is $6!! = 48 = 2^4 \cdot 3$, which acquires the factor $3$ from $6 = 2 \cdot 3$. But the denominator is $5!! = 15 = 3 \cdot 5$. The three on top and the three on the bottom cancel, leaving us with $\frac{16}{5}$. Such cancellations occur in every case. Whenever an odd factor $m$ enters the even sequence $n!!$, it necessarily arrives as part of a number of the form $2^k \cdot m$, but by that time $m$ itself must already have appeared in the odd sequence $(n-1)!!$.
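The cancellation argument can be checked empirically. A self-contained Python sketch (names are mine): in every reduced fraction, the even member is a pure power of $2$.

```python
from fractions import Fraction

def zigzag(n):
    """Iterative divisive factorial: q = i / q for i = 1..n."""
    q = Fraction(1)
    for i in range(1, n + 1):
        q = i / q
    return q

def is_power_of_two(m):
    return m > 0 and m & (m - 1) == 0

# In every entry, the even member of the reduced fraction is a power of 2,
# while all odd prime factors are carried by the other member.
for n in range(2, 41):
    q = zigzag(n)
    even_part = q.numerator if q.numerator % 2 == 0 else q.denominator
    assert is_power_of_two(even_part)
```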

Is the sequence of zigzag numbers a reasonable answer to the question “What happens if we divide rather than multiply in $n!$?” Or is the computer program that generated them simply a buggy algorithm? In my personal opinion, $\frac{1}{n!}$ is the more intuitive answer, but $\frac{n!!}{(n-1)!!}$ is the more interesting one.

Moreover, the very existence of the zigzag sequence broadens our horizons. As noted above, if you insist that the division algorithm must always work through the list of factors in order, at each step dividing the number on the left by the number on the right, then there are only $n$ possible outcomes, and they all look much alike. But the zigzag solution suggests far more possibilities. We can reformulate the problem as follows: take the set of factors $\{1, 2, 3, \ldots, n\}$, choose a subset, and invert every element of that subset; now multiply all the factors, both the inverted and the upright ones. If the inverted subset is empty, the result is the ordinary factorial $n!$. If *all* the factors are inverted, we get the reciprocal $\frac{1}{n!}$. And if every second factor is inverted, starting with $n-1$, the result is an element of the zigzag sequence.

These are just a few of the many possibilities; in total there are $2^n$ subsets of $n$ items. For example, you could invert every factor that is a prime or a prime power $(2, 3, 4, 5, 7, 8, 9, 11, \ldots)$. For small $n$ the results jump around, but consistently remain below $1$:
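This reformulation is easy to play with. A Python sketch (the helper name is mine; exact rationals throughout):

```python
from fractions import Fraction
from math import factorial

def divisive_product(n, inverted):
    """Multiply the factors 1..n, inverting every factor in `inverted`."""
    q = Fraction(1)
    for i in range(1, n + 1):
        q *= Fraction(1, i) if i in inverted else i
    return q

n = 8
assert divisive_product(n, set()) == factorial(n)                # ordinary n!
assert divisive_product(n, set(range(1, n + 1))) == Fraction(1, factorial(n))  # 1/n!

# Inverting every second factor, starting with n-1, yields a zigzag number.
zig = divisive_product(n, set(range(n - 1, 0, -2)))
assert zig == Fraction(8 * 6 * 4 * 2, 7 * 5 * 3 * 1)  # n!! / (n-1)!! = 128/35
```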

However, if I extended this graph to larger $n$, it would shoot off into the stratosphere. Powers of primes become very sparse along the number line.

Now let me ask a question of my own. We have seen factorial variations that approach zero as $n$ goes to infinity, such as $\frac{1}{n!}$. We have seen others that grow without bound, including $n!$ itself and the zigzag numbers. Are there varieties of the factorial process that converge to some finite, nonzero limit?

The first thing I came up with was this algorithm:

```
function greedy_balance(n)
    q = 1
    while n > 0
        q = q > 1 ? q / n : q * n
        n -= 1
    end
    return q
end
```

We loop through the integers from $n$ down to $1$, maintaining a running product-and-quotient $q$ along the way. At each step, if the current value of $q$ is greater than $1$, we divide it by the next factor; otherwise, we multiply. This scheme implements a kind of feedback control, or target-seeking behavior. If $q$ gets too big, we shrink it; if it gets too small, we boost it. I conjectured that as $n$ goes to infinity, $q$ would converge to an ever-narrower range of values around $1$.
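The same greedy scheme transcribed into Python (a sketch; the original program is Julia), with exact rationals so that small cases can be inspected precisely:

```python
from fractions import Fraction

def greedy_balance(n):
    """Walk from n down to 1; divide when q > 1, multiply otherwise."""
    q = Fraction(1)
    while n > 0:
        q = q / n if q > 1 else q * n
        n -= 1
    return q

# A few exact values of the final quotient:
assert greedy_balance(2) == 2
assert greedy_balance(4) == Fraction(2, 3)
assert greedy_balance(6) == Fraction(9, 5)
```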

But the experiment gave me another surprise:

Such a sawtooth wave is not at all what I expected. Curiously, the curve is not symmetric around $1$; the excursions above have a larger amplitude than those below. But this distortion is more visual than mathematical. Since $q$ is a quotient, the distance from $1$ to $2$ is the same as the distance from $1$ to $\frac{1}{2}$, but on a linear scale it doesn't look that way. The remedy is to plot the logarithm of the quotient:

Now the graph is symmetric, or at least approximately so, centered on the value $0$, which is the logarithm of $1$. But a more serious mystery remains. The sawtooth is very regular, with period $4$, and shows no sign of compressing toward the expected limiting value $\log q = 0$. Numerical values suggest that as $n$ goes to infinity, the peaks of the curve converge to a value near $1.7$ and the minima to a value near $0.6$; the corresponding logarithms keep their distance from $0$. I could not figure out why this happens. Perhaps someone can explain.

The failure of this greedy algorithm does not mean we cannot build a divisive factorial that converges to $1$.

_{If we work with the logarithms of the factors, this procedure becomes an instance of a well-known computational problem called the number partitioning problem. We are given a set of real numbers and must divide it into two subsets whose sums are equal, or as close to equal as possible. Number partitioning is a provably hard problem, but it has also been called ( PDF ) “the easiest hard problem.” }For any $n$, we may find that inverting some other subset of factors gives a better approximation to $1$. For small $n$ we can solve the problem by brute force: just examine all $2^n$ subsets and choose the best one.
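A brute-force sketch in Python (exponential in $n$, so only for small arguments; the function name is mine): invert the subset whose product brings the overall quotient closest to $1$ on a log scale.

```python
from fractions import Fraction
from itertools import combinations
from math import log

def best_balanced_product(n):
    """Search all subsets of {1..n}; return the quotient closest to 1
    (measured as |log q|) obtainable by inverting one subset."""
    items = range(1, n + 1)
    best = None
    for r in range(n + 1):
        for subset in combinations(items, r):
            q = Fraction(1)
            for i in items:
                q *= Fraction(1, i) if i in subset else Fraction(i)
            if best is None or abs(log(q)) < abs(log(best)):
                best = q
    return best

# For n = 5 the best achievable quotients are 6/5 and its reciprocal 5/6
# (inverting {2, 5} or {3, 4}, respectively).
q = best_balanced_product(5)
assert q in (Fraction(6, 5), Fraction(5, 6))
```

Note that the optima come in reciprocal pairs: inverting a subset and inverting its complement give quotients $q$ and $1/q$, equally distant from $1$ on a log scale.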

I computed optimal partitions up to $n = 30$, where one must choose among a billion subsets.

Clearly, the graph is getting flatter. The same method can be used to force convergence to any other value in the range from $\frac{1}{n!}$ to $n!$.

And so we get yet another answer to the question posed by the tweet that began our journey. What happens if we divide rather than multiply in $n!$? Whatever we like.