Types and Functions

Original author: Bartosz Milewski
This is the third article in the series "Category Theory for Programmers."

The category of types and functions plays an important role in programming, so let's talk about what types are and why we need them.

Who needs types?


There is some disagreement in the community about the benefits of static versus dynamic typing, and of strong versus weak typing. Let me illustrate the typing choices with a thought experiment. Imagine millions of monkeys at keyboards, happily hitting random keys, producing programs, compiling, and running them.





With machine language, any combination of bytes produced by the monkeys would be accepted and run. But with higher-level languages, we appreciate the fact that a compiler is able to detect lexical and grammatical errors. Many programs will simply be rejected, and the monkeys will go without bananas, but the rest will be more likely to be meaningful. Type checking provides yet another barrier against nonsensical programs. Moreover, whereas in dynamically typed languages type mismatches are discovered only at run time, in strongly typed, statically checked languages they are discovered at compile time, which eliminates many incorrect programs before they ever get a chance to run.

So the question is: do we want to make the monkeys happy, or do we want to produce correct programs?
(Translator's note: don't be offended; the author simply prefers less boring metaphors than RNGs and "random byte sequences", and is not calling programmers monkeys.)

The usual goal of the thought experiment with typing monkeys is the production of the complete works of Shakespeare (translator's note: or Tolstoy's War and Peace). Having a spell checker and a grammar checker in the loop would dramatically increase the odds. The analog of a type checker would go even further: once Romeo is declared a human being, the type checker would make sure that he does not sprout leaves and does not trap photons with his powerful gravitational field.

Types are needed for composability


Category theory is about composing arrows. But not every two arrows can be composed: the target object of one arrow must match the source object of the next. In programming, we pass the results of one function to another. The program will not work if the second function cannot correctly interpret the data produced by the first. Both ends must fit for the composition to work. The stronger the type system of the language, the better this match can be described and automatically verified.
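
This can be sketched in C++ (a minimal illustration with made-up function names; the type signatures are what matters): composition is possible only when the output type of one function matches the input type of the next, and the compiler checks exactly that.

```cpp
#include <string>

// Two functions whose types line up: the target of toLength
// (size_t) matches the source of isEven.
size_t toLength(const std::string& s) { return s.size(); }
bool isEven(size_t n) { return n % 2 == 0; }

// Their composition is well-typed: string -> bool.
// Swapping the order would be rejected at compile time.
bool hasEvenLength(const std::string& s) {
    return isEven(toLength(s));
}
```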

The only serious argument I hear against strong static typing is that it may reject some programs that are semantically correct. In practice, this happens extremely rarely. (Translator's note: to avoid a flame war, I'll note that the author either did not take into account, or disagrees, that there are many styles, and a duck-typing programmer who is at home in a scripting language also has a right to exist. On the other hand, duck typing is also possible within a strict type system through templates, traits, type classes, and interfaces; there are many techniques, so the author's opinion cannot be considered strictly wrong.) In any case, every language provides some kind of backdoor for bypassing the type system when it is really necessary. Even Haskell has unsafeCoerce. But such constructs should be used wisely. Franz Kafka's character Gregor Samsa breaks the type system when he metamorphoses into a giant bug, and we all know how that ended (translator's note: badly :) ).

Another argument I hear a lot is that dealing with strong typing puts too much burden on the programmer. I can sympathize with this problem, having written a few iterator declarations in C++ myself, except that there is a technology, type inference, that lets the compiler deduce most types from the context in which they are used. In C++ you can declare a variable auto, and the compiler will figure out its type for you.
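
A small sketch of what type inference buys you (the function name and data here are invented for illustration):

```cpp
#include <map>
#include <string>

// Without auto, the loop variable's type would have to be spelled
// out as const std::pair<const std::string, int>&.
int totalPopulation(const std::map<std::string, int>& population) {
    int total = 0;
    for (const auto& entry : population) {  // element type deduced by the compiler
        total += entry.second;
    }
    return total;
}
```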

In Haskell, except in rare cases, type annotations are optional. Programmers tend to use them anyway, because types can tell a lot about the semantics of code, and type declarations help in understanding compiler errors. It is common practice in Haskell to start a project by designing the types. Later, type annotations drive the implementation and become compiler-enforced comments.

Strong static typing is often used as an excuse for not testing the code. You can sometimes hear Haskell programmers say: "If it compiles, it's correct." Of course, there is no guarantee that a program that is correct with respect to types is correct in the sense of producing the right output. As a result of this cavalier attitude, in a number of studies Haskell did not come out dramatically ahead of other languages in code quality, as one might have expected. It seems that in a commercial setting the pressure to fix bugs is applied only up to a certain quality level, which has mostly to do with the economics of software development and the tolerance of end users, and very little to do with the programming language or methodology. A better criterion would be to measure how many projects fall behind schedule or are delivered with drastically reduced functionality.

Now, about the claim that unit testing can replace strong typing. Consider a common refactoring practice in strongly typed languages: changing the type of an argument of a function. In a strongly typed language, it is enough to modify the declaration of that function and then fix all the build breaks. In a weakly typed language, the fact that the function now expects different data cannot be propagated to its callers.

Unit testing can catch some of the mismatches, but testing is almost always a probabilistic rather than a deterministic process (translator's note: probably because of how test suites are built: you cover not all possible inputs, but some representative sample). Testing is a poor substitute for a proof of correctness.

What are types?


The simplest description of types is: they are sets of values. The type Bool (remember, concrete types start with a capital letter in Haskell) corresponds to a set of two elements: True and False. The type Char is the set of all Unicode characters, such as 'a' or 'ą'.

Sets can be finite or infinite. The type String, which is essentially a synonym for a list of Char, is an example of an infinite set.

When we declare x as Integer:
x :: Integer

we are saying that it is an element of the set of integers. Integer in Haskell is an infinite set, and it can be used for arbitrary-precision arithmetic. There is also a finite set Int, which corresponds to a machine type, like int in C++.

There are some subtleties that make the identification of types with sets tricky. There are problems with polymorphic functions that involve circular definitions, and with the fact that you cannot have a set of all sets; but, as I promised, I will not be a stickler for mathematical rigor. The important thing is that there is a category of sets called Set, and we will be working with it.
In Set, objects are sets, and morphisms (arrows) are functions.

Set is a special category, because we can peek inside its objects, and that helps a lot with intuition. For example, we know that an empty set has no elements. We know that there are special one-element sets. We know that functions map elements of one set to elements of another. They can map two elements to one, but not one element to two. We know that the identity function maps each element of a set to itself, and so on. My plan is to gradually forget all this information and instead express all these notions in purely categorical terms, that is, in terms of objects and arrows.

In an ideal world we could just say that types in Haskell are sets and functions in Haskell are mathematical functions between them. There is just one little problem: a mathematical function does not execute any code; it just knows the answer. A function in Haskell has to compute the answer. That's not a problem if the answer can be obtained in a finite number of steps, however large. But there are some computations that involve recursion, and those might never terminate. We cannot simply ban non-terminating functions from Haskell, because distinguishing between terminating and non-terminating functions, the famous halting problem, is undecidable. That's why computer scientists came up with a brilliant idea, or a dirty hack, depending on your point of view: to extend every type by one special value called bottom (translator's note: this term sounds somewhat awkward in Russian; if anyone knows a good rendering, please suggest one), denoted _|_ or, in Unicode, ⊥. This "value" corresponds to a non-terminating computation. So a function declared as:
f :: Bool -> Bool

can return True, False, or _|_; the latter means that the function would never terminate.

Interestingly, once you accept bottom into the type system, it becomes convenient to treat every runtime error as bottom, and even to allow a function to return bottom explicitly. The latter is usually done with the expression undefined:
f :: Bool -> Bool
f x = undefined

This definition type-checks because undefined evaluates to bottom, which is a member of every type, including Bool. You can even write:
f :: Bool -> Bool
f = undefined

(without the x) because bottom is also a member of the type Bool -> Bool.

Functions that may return bottom are called partial, as opposed to total functions, which return valid results for every possible argument.
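
A loose C++ analogy (my own illustrative sketch, not from the original text): integer division is partial, since a zero divisor has no valid result; widening the return type makes the function total by making the missing case explicit.

```cpp
#include <optional>

// Partial: has no valid result when d == 0 (in C++ this is
// undefined behavior rather than Haskell's bottom, but the idea
// is similar: not every argument yields an answer).
int dividePartial(int n, int d) { return n / d; }

// Total: the enlarged return type std::optional<int> makes the
// missing case explicit, so every input has a well-defined result.
std::optional<int> divideTotal(int n, int d) {
    if (d == 0) return std::nullopt;
    return n / d;
}
```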

Because of bottom, the category of Haskell types and functions is called Hask rather than Set. From the theoretical point of view, this is a source of endless complications, so at this point I will take out my butcher's knife and cut this analysis short. From the pragmatic point of view, it is okay to ignore non-terminating functions and bottom, and to treat Hask as a bona fide Set.

Why do we need a mathematical model?


As a programmer, you are intimately familiar with the syntax and grammar of your programming language. These aspects of the language are usually formally described at the very beginning of the language specification. But the meaning, the semantics, of a language is much harder to describe; the description takes many more pages, is rarely formal enough, and is almost never complete. Hence the never-ending discussions among language lawyers, and the whole cottage industry of books devoted to interpreting the finer points of language standards.

There are formal tools for describing the semantics of a language, but because of their complexity they are mostly used for simplified, academic languages rather than for the real giants of industrial programming. One such tool, called operational semantics, describes the mechanics of program execution. It defines a formalized, idealized interpreter. The semantics of industrial languages, such as C++, is usually described using informal reasoning, often in terms of an "abstract machine".

The problem is that it is very hard to prove anything about programs using operational semantics. To show a property of a program, you essentially have to "run it" through the idealized interpreter.

It might seem that it doesn't matter, since programmers never formally prove correctness anyway. But we always "think" that we write correct programs. Nobody sits down at the keyboard saying, "Oh, I'll just throw in a few lines of code and see what happens." (Translator's note: ah, if only...) We believe that the code we write will perform certain actions that will produce the desired results. We are usually quite surprised when it doesn't. That means we do reason about the programs we write, and we usually do it by running an interpreter in our heads. It is just really hard to keep track of all the variables. Computers are good at running programs; humans are not! If we were, we would not need computers.

But there is an alternative. It is called denotational semantics, and it is based on mathematics. In denotational semantics, every language construct is given its mathematical interpretation. With that, if you want to prove a property of a program, you just prove a mathematical theorem. You might think that theorem proving is hard, but the fact is that we humans have been building up mathematical methods for thousands of years, so there is a wealth of accumulated knowledge to tap into. Also, compared to the kinds of theorems proved by professional mathematicians, the problems we encounter in programming are usually quite simple, if not trivial. (Translator's note: really, the author is not trying to offend programmers.)

Consider the definition of a factorial function in Haskell, a language that lends itself readily to denotational semantics:
fact n = product [1..n]

The expression [1..n] is the list of integers from 1 to n. The function product multiplies all the elements of the list. This is just like the definition of factorial taken from a textbook. Compare this with C:
int fact(int n) {
    int i;
    int result = 1;
    for (i = 2; i <= n; ++i)
        result *= i;
    return result;
}

Need I say more? (Translator's note: the author cheated a little by using a library function in Haskell. Actually, there was no need to cheat; an honest definition-style version is no harder):
fact 0 = 1
fact n = n * fact (n - 1)

Okay, I will immediately admit that it was a cheap shot! A factorial function has an obvious mathematical denotation. An astute reader might ask: what is the mathematical model for reading a character from the keyboard, or for sending a packet across the network? For the longest time that would have been an awkward question leading to rather convoluted explanations. It seemed like denotational semantics was not suitable for a considerable number of important tasks that are essential for writing useful programs, and which can be handled easily by operational semantics. The breakthrough came from category theory. Eugenio Moggi discovered that computational effects can be mapped to monads. This turned out to be an important observation that not only gave denotational semantics a new lease on life and made purely functional programs more usable, but also shed new light on traditional programming.

One of the important advantages of having a mathematical model of programming is the ability to perform formal proofs of software correctness. This may not seem so important when you are writing consumer software, but there are areas of programming where the price of failure may be exorbitant, or where human life is at stake. But even when writing web applications for a healthcare system, you may appreciate the thought that functions and algorithms from the Haskell standard library come with proofs of correctness.

Pure and Impure Functions


The things we call functions in C++, or in any other imperative language, are not the same as the functions mathematicians talk about. A mathematical function is simply a mapping of values to values.

We can implement a mathematical function in a programming language: such a function, given an input value, will compute the output value. A function that produces the square of a number will probably multiply the input value by itself. It will do so on every call, and it is guaranteed to produce the same output every time it is called with the same argument. The square of a number does not change with the phases of the moon.

Also, computing the square of a number should not have the side effect of dispensing a tasty treat for your dog. A "function" that does that cannot be easily modeled as a mathematical function.

In programming languages, functions that always produce the same result given the same arguments and have no side effects are called pure functions. In a pure functional language like Haskell, all functions are pure. Because of that, it is easier to give these languages denotational semantics and model them using category theory. As for other languages, it is always possible to restrict yourself to a pure subset, or reason about side effects separately. Later we will see how monads let us model all kinds of effects using only pure functions. So we lose nothing by restricting ourselves to mathematical functions.
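
To make the contrast concrete, here is an illustrative sketch (function names invented): a pure function next to an impure one whose result depends on hidden mutable state.

```cpp
// Pure: the result depends only on the argument; calling it twice
// with the same input always gives the same output, and nothing
// else in the program changes.
int square(int x) { return x * x; }

// Impure: the result depends on (and modifies) hidden state, so
// two calls with the same argument give different answers.
int counter = 0;
int impureSquare(int x) {
    ++counter;              // side effect
    return x * x + counter; // result depends on call history
}
```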

Type Examples


Once you have decided that types are sets, you can come up with some pretty exotic examples. For instance, what is the type corresponding to the empty set? No, it is not void in C++, although this type is called Void in Haskell. It is a type that is not inhabited by any values. You can define a function that takes Void, but you can never call it. To call it, you would have to provide a value of type Void, and there just is not any. As for what this function can return, there are no restrictions whatsoever. It can return any type (although it never will, because it cannot be called). In other words, it is a function that is polymorphic in its return type. Haskellers have named it:
absurd :: Void -> a

(Translator's note: in C++ such a function cannot be defined: in C++, every type has at least one value.)

(Remember that a is a type variable that can stand for any type.) The name is not accidental. There is a deeper interpretation of types and functions in terms of logic called the Curry-Howard isomorphism. The type Void represents falsity, and the function absurd embodies the statement that from falsehood anything follows, as in the Latin adage "ex falso sequitur quodlibet". (Translator's note: from falsehood, anything follows.)

Next is the type that corresponds to a singleton set. It is a type that has only one possible value. This value just "is". You might not immediately recognize it as such, but it is void in C++. Think of functions from and to this type. A function from void can always be called. If it is a pure function, it will always return the same result. Here is an example of such a function:
int f44() { return 44; }

You might think of this function as taking "nothing", but, as we have just seen, a function that takes "nothing" can never be called, because there is no value representing "nothing". So what does this function take? Conceptually, it takes a dummy value of which there is only one instance ever, so we do not have to mention it explicitly in the code. Haskell, however, does have a symbol for this value: an empty pair of parentheses, (). So, by a funny coincidence (or is it a coincidence?), the call to a function of void, or of unit, looks the same in C++ and in Haskell. Also, because of Haskell's love of terseness, the same symbol () is used for the type, the constructor, and the only value corresponding to the singleton set. Here is this function in Haskell:
f44 :: () -> Integer
f44 () = 44

The first line declares that f44 takes the type (), pronounced "unit", into the type Integer. The second line defines f44 by pattern-matching the only constructor for unit, namely (), and producing the number 44. You call this function by providing the unit value ():
f44 ()

Notice that every function of unit is equivalent to picking a single element from its target type (here, the Integer 44 is picked). In fact, you could think of f44 as a different representation of the number 44. This is an example of how we can replace direct mention of the elements of a set with functions (arrows). Functions from unit to any type A are in one-to-one correspondence with the elements of the set A.
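
The same correspondence can be sketched in C++ (an illustrative analogy of mine, not part of the original text): a nullary function, playing the role of a function from unit, is interchangeable with the element it picks.

```cpp
#include <functional>

// An element of int...
const int fortyFour = 44;

// ...and the corresponding "function from unit": in C++, a function
// taking no arguments stands in for a function from ().
int f44() { return 44; }

// Going back: any nullary function determines an element of its
// return type, obtained simply by calling it.
int elementOf(const std::function<int()>& f) { return f(); }
```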

What about functions with the void return type or, in Haskell, the unit return type? In C++ such functions are used for side effects, but we know that such functions are not real functions in the mathematical sense of the word. A pure function that returns unit does nothing: it discards its argument.

Mathematically, a function from a set A to a singleton set maps every element of A to the single element of that singleton set. For every A there is exactly one such function. Here is this function for Integer:
fInt :: Integer -> ()
fInt x = ()

You give it any integer, and it gives you back one unit. In the spirit of terseness, Haskell lets you use the wildcard pattern, the underscore, for an argument that is discarded. This way you do not have to invent a name for it. The code above can be rewritten as:
fInt :: Integer -> ()
fInt _ = ()

Notice that the implementation of this function not only does not depend on the value passed to it, it does not depend on the type of the argument either.

Functions that can be implemented with the same formula for any type are called parametrically polymorphic. You can implement a whole family of such functions with one equation using a type parameter instead of a concrete type. What should we call a polymorphic function from any type to unit? Of course, we will call it unit:
unit :: a -> ()
unit _ = ()

In C ++, you would implement it like this:
template<class T>
void unit(T) {}

(Translator's note: to help the compiler optimize it into a no-op, it is better written this way):
template<class T>
void unit(T&&) {}

Next in the typology of types is a two-element set. In C++ it is called bool, and in Haskell, predictably, Bool. The difference is that in C++ bool is a built-in type, whereas in Haskell it can be defined as follows:
data Bool = True | False

(Read this definition as: Bool is either True or False.) In principle, one could also describe this type in C++:
enum bool {
    true,
    false
};

But a C++ enumeration is really an integer. One could use the C++11 enum class instead, but then one would have to qualify its values with the class name: bool::true and bool::false, not to mention having to include the appropriate header in every file that uses it.

Pure functions from Bool just pick two values from the target type, one corresponding to True and the other to False.

Functions to Bool are called predicates. For example, the Haskell library Data.Char is full of predicates such as isAlpha or isDigit. C++ has a similar library, which declares, among others, the functions isalpha and isdigit, but these return an int rather than a Boolean. The actual predicates are defined in std::ctype and have the form ctype::is(alpha, c) and ctype::is(digit, c).
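
For instance, here is a small sketch (the function name is mine) of a predicate built on the standard classifiers from the C header <cctype>, showing the int-to-bool conversion in action:

```cpp
#include <cctype>
#include <string>

// A predicate (a function to bool) built from std::isdigit, which
// itself returns an int rather than a Boolean.
bool isNumeric(const std::string& s) {
    if (s.empty()) return false;
    for (unsigned char c : s) {  // unsigned char avoids UB for negative char values
        if (!std::isdigit(c)) return false;
    }
    return true;
}
```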
