# When not to use STL algorithms: an example with sets

*Translation*

Good evening, everyone! You gave such a warm reception to the first edition of the book "C++17 STL: The Standard Template Library", and are now working through the second, that we finally decided to present an alternative point of view here. The author of today's article is Ivan Čukić, who also wrote the book Functional Programming in C++, currently being prepared for release by Manning. We invite you to evaluate his skeptical thoughts, code, and calculations.


**Preamble**

I wanted to call this post "On the harmfulness of STL algorithms" to test my own clickbait skills. But then I decided it would be better to write an article for the target audience, rather than a post that mostly attracts people who want to argue with my provocative theses.

Thus, I can assume that you are interested in algorithms and their complexity, and that you want to write the best code possible.

**Algorithms**

In today's professional C++ community, people often give this advice: to make your program safer, faster, and more expressive, use the algorithms from the standard library. I also try to popularize this advice in my books, talks, and seminars... wherever there is a suitable audience.

Of course, it is absolutely true that before writing a `for` loop to solve the task at hand, we should first check whether an existing algorithm from the standard library (or Boost) fits, rather than acting blindly. We still need to know how these algorithms are implemented, what requirements and guarantees come with them, and what their space and time complexity is.

Usually, if we face a task that exactly matches the requirements of an STL algorithm, and that algorithm can be applied directly, it will be the most efficient solution.

A problem may arise if we need to prepare the data somehow before applying the algorithm.

**Set intersection**

Suppose we are writing a tool for C++ developers that suggests replacing the default capture specifiers (`[=]` and `[&]`) in lambda expressions with an explicit list of the captured variables.

```
std::partition(begin(elements), end(elements),
        [=] (auto element) {
        ^~~ implicit capture is no fun, replace with [threshold]
            return element > threshold;
        });
```

When parsing a file, we need a collection that stores the variables from the current and enclosing scopes. Whenever we encounter a lambda expression with a default capture, we need to see which variables are used inside it.

As a result, we have two sets: one holds the variables from the enclosing scopes, and the other holds the variables used in the body of the lambda expression.

The list of captures we are going to suggest as a replacement should be the intersection of these two sets (lambda expressions can use global variables, which do not need to be captured, and not every variable from the enclosing scopes will be used in the lambda).

And if we need an intersection, we can use the `std::set_intersection` algorithm.

This algorithm is quite beautiful in its simplicity. It takes two sorted collections and walks through both of them in parallel, from beginning to end:

- If the current element of the first collection equals the current element of the second, it is added to the result, and the algorithm moves on to the next element in both collections;
- If the current element of the first collection is less than the current element of the second, the algorithm simply skips the current element of the first collection;
- If the current element of the first collection is greater than the current element of the second, the algorithm simply skips the current element of the second collection.

At each iteration, at least one element (from the first or the second input collection) is skipped, so the complexity of the algorithm is linear: `O(m + n)`, where `m` is the number of elements in the first collection and `n` is the number of elements in the second.

Simple and efficient. As long as the input collections are sorted.

**Sorting**

Here is the problem: what do we do if the collections are not sorted in advance?

In the previous example, it would be reasonable to store the variables from the enclosing scopes in a stack-like structure, where the parser can simply add new elements when it enters a new scope and remove the variables of the current scope as soon as it leaves it.

This means the variables will not be sorted by name, and we cannot apply `std::set_intersection` to them directly. Similarly, if we track the variables in the body of a lambda expression, we most likely cannot keep them sorted either. Since `std::set_intersection` works only with sorted collections, many projects follow this principle: first sort the collections, then call `std::set_intersection`.

If we ignore the fact that sorting the variable stack in our example completely defeats the purpose of the stack structure we chose, the intersection algorithm for unsorted collections would look like this:

```
template <typename InputIt1, typename InputIt2, typename OutputIt>
auto unordered_intersection_1(InputIt1 first1, InputIt1 last1,
                              InputIt2 first2, InputIt2 last2,
                              OutputIt dest)
{
    // Sort both input ranges in place, then intersect them.
    std::sort(first1, last1);
    std::sort(first2, last2);
    return std::set_intersection(first1, last1, first2, last2, dest);
}
```

What is the complexity of this algorithm as a whole? Sorting takes quasilinear time, so the overall complexity of this approach is `O(n log n + m log m + n + m)`.
**Sort less**

Is it possible to do without sorting?

If neither collection is sorted, we would have to scan the entire second collection for every element of the first one, to decide whether to include it in the result. Although this approach is quite common in real projects, it is even worse than the previous one: its complexity is `O(n * m)`.

Instead of sorting everything, or sorting nothing, let us recall Zen and choose the third way: sort only one collection.

If only one collection is sorted, we can enumerate all the values of the unsorted collection one by one and check, for each value, whether it is present in the sorted collection. For that we use binary search.

In this case, the time complexity will be `O(n log n)` for sorting the first collection and `O(m log n)` for the lookups. The total complexity will be `O((n + m) log n)`.

If we sorted the second collection instead of the first, the complexity would be `O((n + m) log m)`.

To achieve maximum efficiency, we always sort the collection with fewer elements, which makes the final complexity of our algorithm `O((m + n) log(min(m, n)))`.

The implementation looks like this:

```
template <typename InputIt1, typename InputIt2, typename OutputIt>
auto unordered_intersection_2(InputIt1 first1, InputIt1 last1,
                              InputIt2 first2, InputIt2 last2,
                              OutputIt dest)
{
    const auto size1 = std::distance(first1, last1);
    const auto size2 = std::distance(first2, last2);

    // Always sort the smaller of the two ranges.
    if (size1 > size2) {
        return unordered_intersection_2(first2, last2, first1, last1, dest);
    }

    std::sort(first1, last1);
    return std::copy_if(first2, last2, dest,
        [&] (auto&& value) {
            return std::binary_search(first1, last1,
                                      std::forward<decltype(value)>(value));
        });
}
```

In our example with lambda capture lists, we would usually end up sorting the collection of variables used in the lambda expression, since it is likely to be smaller than the collection of all variables from all enclosing scopes.

**Hashing**

The last option is to build a `std::unordered_set` (a hash-based implementation of an unordered set) from the smaller collection instead of sorting it. In this case, lookups take `O(1)` on average, but constructing the `std::unordered_set` takes time. The construction complexity can range from `O(n)` to `O(n*n)`, and this is a potential problem.

```
template <typename InputIt1, typename InputIt2, typename OutputIt>
auto unordered_intersection_3(InputIt1 first1, InputIt1 last1,
                              InputIt2 first2, InputIt2 last2,
                              OutputIt dest)
{
    const auto size1 = std::distance(first1, last1);
    const auto size2 = std::distance(first2, last2);

    // Always hash the smaller of the two ranges.
    if (size1 > size2) {
        return unordered_intersection_3(first2, last2, first1, last1, dest);
    }

    // Build a hash set from the smaller range.
    using value_type = typename std::iterator_traits<InputIt1>::value_type;
    std::unordered_set<value_type> test_set(first1, last1);

    return std::copy_if(first2, last2, dest,
        [&] (auto&& value) {
            return test_set.count(value) > 0;
        });
}
```

The hashing approach pays off fully when we need to compute the intersections of several sets with a single, predetermined small set. That is, we have a set `A` and sets `B₁`, `B₂`, ..., and we want to compute `A ∩ B₁`, `A ∩ B₂`, ...

In this case, we can ignore the cost of constructing the `std::unordered_set`, and the complexity of computing each intersection will be linear: `O(m)`, where `m` is the number of elements in the second collection.

**Benchmarking**

Of course, it is always useful to check the complexity of an algorithm, but in cases like this it is also wise to benchmark the different approaches. Especially when choosing between the last two options, where we compare binary search against hash-based sets.

My simple benchmark showed that the first option, where both collections are sorted, is always the slowest.

Sorting only the smaller collection wins slightly over `std::unordered_set`, but not by much. Both the second and the third approach are somewhat faster than the first when both collections have the same number of elements, and much faster (up to six times) when one collection has about a thousand times more elements than the other.
