"Modern" C ++: crying session with lamentations

http://aras-p.info/blog/2018/12/28/Modern-C-Lamentations/
  • (Translation)

What follows is a very long wall of text with assorted thoughts. Key points:


  1. In C++, compile times are very important,
  2. Build performance without optimizations is also important,
  3. Cognitive load is most important. I will not specifically discuss this point here, but if a programming language makes me feel stupid, it is unlikely I will use it, let alone love it. C++ does this to me all the time.

Eric Niebler's “Standard Ranges” blog post, dedicated to C++20 ranges, recently swept across the entire Twitter universe, accompanied by a bunch of not-very-flattering comments (and that's putting it mildly!) about the state of modern C++.



Even I made my contribution (link):


This example of Pythagorean triples on C++20 ranges looks monstrous to me. And yes, I understand that ranges can be useful, projections can be useful, and so on. Still, the example is creepy. Why would anyone need this?

Let's take a closer look at all of this.



All of this got a little out of control (even a week later, comments kept pouring into that thread!).


Now, I have to apologize to Eric for starting with his article; my lament will mostly be about the “general state of C++”. A year ago, “a handful of angry gamedev folks” showed up in much the same way to weigh in on Boost.Geometry, and the same has happened around dozens of other aspects of the C++ ecosystem.


But you know, Twitter is not the best place for nuanced conversations, etc., etc. So let's unpack the thought right here and now!


Pythagorean triples, C++20 ranges style


Here is the full example from Eric's post:


// A sample program in standard C++20.
// It prints the first N Pythagorean triples.
#include <iostream>
#include <optional>
#include <ranges>   // New header file!

using namespace std;

// maybe_view defines a view over zero or one object.
template<Semiregular T>
struct maybe_view : view_interface<maybe_view<T>> {
  maybe_view() = default;
  maybe_view(T t) : data_(std::move(t)) {
  }
  T const *begin() const noexcept {
    return data_ ? &*data_ : nullptr;
  }
  T const *end() const noexcept {
    return data_ ? &*data_ + 1 : nullptr;
  }
private:
  optional<T> data_{};
};

// "for_each" creates a new view by applying a transformation to each
// element of the input range, and flattening the resulting range of
// ranges at the end.
// (This uses the C++20 constrained-lambda syntax.)
inline constexpr auto for_each =
  []<Range R,
     Iterator I = iterator_t<R>,
     IndirectUnaryInvocable<I> Fun>(R&& r, Fun fun)
        requires Range<indirect_result_t<Fun, I>> {
      return std::forward<R>(r)
        | view::transform(std::move(fun))
        | view::join;
  };

// "yield_if" takes a bool and a value,
// returning a view of zero or one elements.
inline constexpr auto yield_if =
  []<Semiregular T>(bool b, T x) {
    return b ? maybe_view{std::move(x)}
             : maybe_view<T>{};
  };

int main() {
  // Define an infinite range of Pythagorean triples:
  using view::iota;
  auto triples =
    for_each(iota(1), [](int z) {
      return for_each(iota(1, z+1), [=](int x) {
        return for_each(iota(x, z+1), [=](int y) {
          return yield_if(x*x + y*y == z*z,
            make_tuple(x, y, z));
        });
      });
    });

  // Display the first 10 triples
  for (auto triple : triples | view::take(10)) {
    cout << '('
         << get<0>(triple) << ','
         << get<1>(triple) << ','
         << get<2>(triple) << ')' << '\n';
  }
}

Eric's post grew out of his earlier post written a couple of years ago, which in turn was a response to Bartosz Milewski's article “Getting Lazy with C++”, where a simple C-style function for printing the first N Pythagorean triples looked like this:


#include <stdio.h>

void printNTriples(int n) {
    int i = 0;
    for (int z = 1; ; ++z)
        for (int x = 1; x <= z; ++x)
            for (int y = x; y <= z; ++y)
                if (x*x + y*y == z*z) {
                    printf("%d, %d, %d\n", x, y, z);
                    if (++i == n)
                        return;
                }
}

But this code, it was said, has problems too:


Everything is fine until you want to change or reuse this code. What if, for example, instead of printing to the screen, you want to draw the triples as triangles? Or what if you want to stop as soon as one of the numbers reaches a hundred?

After that, lazy evaluation with list comprehensions is presented as the way to solve these problems. Of course, it is indeed one way to solve them, since C++ lacks the built-in functionality that exists in, say, Haskell and other languages. C++20 will get more of these built-in goodies, which is what Eric's post hints at. But first things first.
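
To make the promise concrete, here is a sketch of my own (not from Eric's post), assuming the lazy triples view built later in this article and range-v3's view::take_while adaptor: once the sequence is a lazy range, the quoted “stop as soon as a number reaches a hundred” requirement becomes a one-line change at the call site.

// Sketch only: assumes a lazy `triples` view like the ones defined
// below, plus range-v3's view::take_while adaptor.
auto small = triples | view::take_while([](const auto& t) {
    return std::get<2>(t) <= 100;  // stop once z grows past 100
});
for (auto t : small) { /* draw a triangle, print, whatever */ }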


Pythagorean triples, simple C++ style


So, let's go back to solving the problem in simple C/C++ style (“simple” in the sense of “fine as long as you don't need to modify or reuse it,” according to Bartosz). Here is a complete program that prints the first hundred triples:


// simplest.cpp
#include <time.h>
#include <stdio.h>

int main() {
    clock_t t0 = clock();
    int i = 0;
    for (int z = 1; ; ++z)
        for (int x = 1; x <= z; ++x)
            for (int y = x; y <= z; ++y)
                if (x*x + y*y == z*z) {
                    printf("(%i,%i,%i)\n", x, y, z);
                    if (++i == 100)
                        goto done;
                }
    done:
    clock_t t1 = clock();
    printf("%ims\n", (int)(t1-t0)*1000/CLOCKS_PER_SEC);
    return 0;
}

Here is how it can be built: clang simplest.cpp -o outsimplest. The build takes 0.064 seconds and produces an 8480-byte executable, which runs in 2 milliseconds and prints the numbers (all this on my hardware: 2018 MacBook Pro; Core i9 2.9GHz; compiler: Xcode 10 clang).


(3,4,5)
(6,8,10)
(5,12,13)
(9,12,15)
(8,15,17)
(12,16,20)
(7,24,25)
(15,20,25)
(10,24,26)
...
(65,156,169)
(119,120,169)
(26,168,170)

Wait! That was a default, non-optimized (“Debug”) build; let's now build with optimizations (“Release”): clang simplest.cpp -o outsimplest -O2. That takes 0.071 seconds to compile and produces an executable of the same size (8480 bytes), which runs in 0 milliseconds (that is, below the resolution of the clock() timer).


As Bartosz correctly noted, the algorithm here cannot be reused, because it is entangled with what is done with the results of the computation. Whether this is actually a problem is beyond the scope of this article (I personally think that “reusability” and the goal of “avoiding duplication at all costs” are heavily overrated). Let's assume it is a problem, and we really do need something that returns the first N triples without performing any manipulations on them.


What I would do is the simplest possible thing: create something callable that returns the next triple. It could look like this:


// simple-reusable.cpp
#include <time.h>
#include <stdio.h>

struct pytriples
{
    pytriples() : x(1), y(1), z(1) {}
    void next() {
        do
        {
            if (y <= z)
                ++y;
            else
            {
                if (x <= z)
                    ++x;
                else
                {
                    x = 1;
                    ++z;
                }
                y = x;
            }
        } while (x*x + y*y != z*z);
    }
    int x, y, z;
};

int main() {
    clock_t t0 = clock();
    pytriples py;
    for (int c = 0; c < 100; ++c)
    {
        py.next();
        printf("(%i,%i,%i)\n", py.x, py.y, py.z);
    }
    clock_t t1 = clock();
    printf("%ims\n", (int)(t1-t0)*1000/CLOCKS_PER_SEC);
    return 0;
}

It builds and runs in about the same time. The debug executable grows by 168 bytes; the release version stays the same size.


I made a pytriples struct where each call to next() advances to the next valid triple; the calling code can do whatever it wants with the result. Here I just call it a hundred times, printing the triple each time.


Even though the implementation is functionally equivalent to the triple-nested for loop of the original example, it has become much less obvious, at least to me. It is quite clear how it does what it does (a few branches and simple operations on integers), but it is not immediately clear what it does at a high level.


If C++ had something like the concept of coroutines, it would be possible to implement a triples generator as concise as the nested loops in the original example, yet without any of the listed “problems” (Jason Meisel makes this point in the article “Ranges, Code Quality, and the Future of C++”); it could look something like this (tentative syntax, since coroutines are not in the C++ standard yet):


generator<std::tuple<int,int,int>> pytriples()
{
    for (int z = 1; ; ++z)
        for (int x = 1; x <= z; ++x)
            for (int y = x; y <= z; ++y)
                if (x*x + y*y == z*z)
                    co_yield std::make_tuple(x, y, z);
}
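
Consuming such a generator would be just as straightforward. A hypothetical usage sketch (assuming the generator<T> type above models a range, much like later coroutine library types did):

// Hypothetical usage of the pytriples() generator sketched above.
int main() {
    int i = 0;
    for (auto [x, y, z] : pytriples()) {  // lazily pulls one triple at a time
        printf("(%i,%i,%i)\n", x, y, z);
        if (++i == 100)
            break;  // the caller decides when to stop
    }
}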

Let's get back to C++.


Can the C++20 ranges style be any clearer for this task? Here is the core part of the code from Eric's post:


auto triples =
    for_each(iota(1), [](int z) {
        return for_each(iota(1, z+1), [=](int x) {
            return for_each(iota(x, z+1), [=](int y) {
                return yield_if(x*x + y*y == z*z,
                    make_tuple(x, y, z));
            });
        });
    });

Everyone can judge for themselves. To me, the coroutine approach above is much more readable. The way C++ spells out lambdas, and the way the C++ standard picks particularly clever names (“iota, that's a Greek letter, look how educated I am!”), both look cumbersome and inconsistent. All the returns feel unusual if the reader is used to an imperative programming style, but perhaps one could get used to that.


Perhaps, if you squint just right, you can convince yourself that this is an acceptable and even pleasant syntax.


Nevertheless, I refuse to believe that we, mere mortals without a PhD in C++, could write the utilities this code needs in order to work:


template<Semiregular T>
struct maybe_view : view_interface<maybe_view<T>> {
  maybe_view() = default;
  maybe_view(T t) : data_(std::move(t)) {
  }
  T const *begin() const noexcept {
    return data_ ? &*data_ : nullptr;
  }
  T const *end() const noexcept {
    return data_ ? &*data_ + 1 : nullptr;
  }
private:
  optional<T> data_{};
};

inline constexpr auto for_each =
  []<Range R,
     Iterator I = iterator_t<R>,
     IndirectUnaryInvocable<I> Fun>(R&& r, Fun fun)
        requires Range<indirect_result_t<Fun, I>> {
      return std::forward<R>(r)
        | view::transform(std::move(fun))
        | view::join;
  };

inline constexpr auto yield_if =
  []<Semiregular T>(bool b, T x) {
    return b ? maybe_view{std::move(x)}
             : maybe_view<T>{};
  };

Maybe for someone this reads like a native language, but to me it all feels as if someone decided that Perl is too readable and Brainfuck too unreadable, and aimed somewhere in between. I have programmed mostly in C++ for the last 20 years. Maybe I am too stupid to grok all this, fine.


And yes, of course, maybe_view, for_each and yield_if are all “reusable components” that could be moved into a library; a topic I will get to... right now, actually.


Problems with the “Everything Is A Library” approach


The “everything is a library” approach has at least two performance problems:


  1. Compile times,
  2. Runtime performance of non-optimized builds.

Let's keep illustrating this with the Pythagorean triples example, but in fact these problems apply to many other C++ features that are implemented as libraries rather than as language syntax.


The final version of C++20 is not out yet, so for a quick check I took the current best approximation of ranges, the range-v3 library (written by Eric Niebler himself), and built the canonical Pythagorean triples example against it.


// ranges.cpp
#include <time.h>
#include <stdio.h>
#include <range/v3/all.hpp>

using namespace ranges;

int main() {
    clock_t t0 = clock();
    auto triples = view::for_each(view::ints(1), [](int z) {
        return view::for_each(view::ints(1, z + 1), [=](int x) {
            return view::for_each(view::ints(x, z + 1), [=](int y) {
                return yield_if(x * x + y * y == z * z,
                    std::make_tuple(x, y, z));
            });
        });
    });
    RANGES_FOR(auto triple, triples | view::take(100))
    {
        printf("(%i,%i,%i)\n", std::get<0>(triple), std::get<1>(triple), std::get<2>(triple));
    }
    clock_t t1 = clock();
    printf("%ims\n", (int)(t1-t0)*1000/CLOCKS_PER_SEC);
    return 0;
}

I used a version just after 0.4.0 (9232b449e44, December 22, 2018) and compiled with clang ranges.cpp -I. -std=c++17 -lc++ -o outranges. It built in 2.92 seconds, the executable came out at 219 kilobytes, and the run time grew to 300 milliseconds.


And yes, that is a build without optimizations. The optimized build (clang ranges.cpp -I. -std=c++17 -lc++ -o outranges -O2) compiles in 3.02 seconds, the file is 13976 bytes, and it runs in 1 millisecond. So runtime speed is fine and the executable size grew only somewhat, but the compile time is still a problem.


Let's dig into the details.


Compile times are a huge problem for C++


Compiling this genuinely trivial example took 2.85 seconds longer than the “simple C++” version.


If you are thinking that “less than 3 seconds” is nothing, think again. In three seconds a modern CPU can perform a gargantuan number of operations. For example, how long does it take clang to compile a real, full-fledged database engine (SQLite) in debug mode, all 220 thousand lines of code? 0.9 seconds on my laptop. In what universe is it normal for a trivial 5-line example to compile three times longer than an entire database engine?


C++ compile times have been a source of pain in every non-trivial codebase I have worked on. Don't believe me? Try building any of the well-known codebases (Chromium, Clang/LLVM, UE4 and so on are all perfect examples). Among the many things one really wants from C++, compile time is probably at the very top of the list, and it has always been there. Yet the C++ community mostly seems to pretend this is not even a problem, and each next version of the language stuffs even more things into header files, and even more things into the templated code that has to live in header files.


To a large extent this stems from the prehistoric “just paste in the entire contents of the file” #include model inherited from C. But while in C the tendency is to keep only struct declarations and function prototypes in headers, in C++ you usually have to dump all your template classes and functions in there as well.
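
A tiny illustration of the difference (my own example, with hypothetical file names): a C-style header exposes a one-line prototype while the body lives in a single .c file, whereas the templated C++ equivalent must carry its entire body into every translation unit that includes it.

// mathutil.h, C style: only a declaration; the body is compiled once.
int clampi(int x, int lo, int hi);

// mathutil.hpp, C++ template style: the full definition has to sit in
// the header, so every including translation unit re-parses it (and
// possibly re-instantiates it).
template <typename T>
T clamp(T x, T lo, T hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}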


range-v3 is 1.8 megabytes of source code, all in header files! So even though the hundred-Pythagorean-triples example is 30 lines long, after the headers are processed the compiler has to compile 102 thousand lines of code. The “simple C++” version, after all preprocessing, comes to 720 lines.


“But that is exactly what precompiled headers and/or modules are for!”, I hear you say. Fair enough. Let's put the library headers into a precompiled header (make a pch.h containing #include <range/v3/all.hpp>, create the PCH: clang -x c++-header pch.h -I. -std=c++17 -o pch.h.pch, then compile with it: clang ranges.cpp -I. -std=c++17 -lc++ -o outranges -include-pch pch.h.pch). Compile time becomes 2.24 seconds. So a PCH saves us about 0.7 seconds of compile time. It does nothing about the remaining 2.1 seconds, which is still far longer than the simple C++ approach :(


Build performance without optimizations is important


At runtime, the ranges example was 150 times slower. A slowdown of 2-3x might still be considered acceptable. Anything 10 times slower falls into the “unusable” category. More than a hundred times slower? Seriously?


On real codebases that solve real problems, a difference of two orders of magnitude means the program simply cannot process realistic amounts of data. I work in the video game industry; in purely practical terms this means that debug builds of the game engine or the tooling would not be able to handle real game levels (performance would not come anywhere near the required level of interactivity). Maybe there are industries where you can launch a program over a data set, wait for the result, and if it takes 10 to 100 times longer in debug mode it is merely “annoying”. Unpleasant, irritatingly slow. But when you are building something that must be interactive, “annoying” turns into “unusable”: you literally cannot play a game that renders at 2 frames per second.


Yes, the optimized build (-O2 in clang) runs at the speed of “simple C++”... right, right, “zero-cost abstractions”, we have heard that somewhere. Abstractions are free as long as you do not care about compile time and can afford to run an optimizing compiler.
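
A toy example of my own (not from the original post) of why the “zero cost” only materializes after optimization: every layer of an abstraction pipeline is an honest function call at -O0, and only inlining makes those calls disappear.

// Each wrapper layer below is a real call in a -O0 build;
// at -O2 the whole chain collapses to `return *p + 1;`.
struct It {
    const int* p;
    const int& deref() const { return *p; }           // layer 1
    const int& operator*() const { return deref(); }  // layer 2
};
struct TransformIt {
    It base;
    int operator*() const { return *base + 1; }       // layer 3
};
int head_plus_one(const int* p) {
    TransformIt it{It{p}};
    return *it;  // three nested calls at -O0, zero at -O2
}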


But debugging optimized code is hard! Sure, it is possible, and it is even a very useful skill, much like riding a unicycle is possible and teaches the all-important skill of balance. Some people even enjoy it and get quite good at it. But most people would never pick a unicycle as their primary means of transportation, just as most people will not debug optimized code if there is the slightest chance to avoid it.


Arseny Kapoulkine did a great “Optimizing OBJ loader” stream on YouTube, where he ran into exactly this debug-build problem and made the loader 10 times faster by throwing out some pieces of STL (commit). Side effects were faster compilation (source) and easier debugging, since Microsoft's STL implementation is hellishly fond of deeply nested function calls.


This is not to say that “STL is bad”; it is possible to write an STL implementation that does not slow down tenfold in a non-optimized build (EASTL and libc++ manage it), but for some reason Microsoft's STL is extremely slow, since it relies too heavily on everything getting inlined.


As a user of the language, I do not care whose problem it is! All I know is that “STL is slow in debug mode”, and I would like someone to finally fix that. Otherwise I will have to look for alternatives (for example, not use STL, write the pieces I personally need myself, or abandon C++ altogether, how about that).


Comparing with other languages


Let's take a quick look at a very similar implementation of “lazily evaluated Pythagorean triples” in C#:


using System;
using System.Diagnostics;
using System.Linq;
class Program
{
    public static void Main()
    {
        var timer = Stopwatch.StartNew();
        var triples =
            from z in Enumerable.Range(1, int.MaxValue)
            from x in Enumerable.Range(1, z)
            from y in Enumerable.Range(x, z)
            where x*x+y*y==z*z
            select (x:x, y:y, z:z);
        foreach (var t in triples.Take(100))
        {
            Console.WriteLine($"({t.x},{t.y},{t.z})");
        }
        timer.Stop();
        Console.WriteLine($"{timer.ElapsedMilliseconds}ms");
    }
}

To me this reads very, very nicely. Compare this fragment in C#:


var triples =
    from z in Enumerable.Range(1, int.MaxValue)
    from x in Enumerable.Range(1, z)
    from y in Enumerable.Range(x, z)
    where x*x+y*y==z*z
    select (x:x, y:y, z:z);

with the C++ version:


auto triples = view::for_each(view::ints(1), [](int z) {
    return view::for_each(view::ints(1, z + 1), [=](int x) {
        return view::for_each(view::ints(x, z + 1), [=](int y) {
            return yield_if(x * x + y * y == z * z,
                std::make_tuple(x, y, z));
        });
    });
});

It is quite clear to me which one reads cleaner. What about you? To be fair, the alternative C# LINQ method syntax also looks overloaded:


var triples = Enumerable.Range(1, int.MaxValue)
    .SelectMany(z => Enumerable.Range(1, z), (z, x) => new {z, x})
    .SelectMany(t => Enumerable.Range(t.x, t.z), (t, y) => new {t, y})
    .Where(t => t.t.x * t.t.x + t.y * t.y == t.t.z * t.t.z)
    .Select(t => (x: t.t.x, y: t.y, z: t.t.z));

How fast does this C# code build? I am on a Mac, so using the Mono compiler (itself written in C#), version 5.16, the command mcs Linq.cs compiles the example above in 0.20 seconds. The equivalent “simple C#” example compiles in 0.17 seconds.


That is, lazy LINQ-style evaluation adds 0.03 seconds of compile time. Compare that with the extra 3 seconds in C++: 100 times longer!


But can't you just ignore the parts you don't like?


Yes, to some extent.


For example, here at Unity we like to joke that “adding Boost to the project is a fireable offense.” Apparently nobody actually gets fired, because last year I discovered that someone had added Boost.Asio, everything started building horrendously slowly, and I had to untangle the fact that simply including asio.h pulled in <windows.h> behind my back, with all its nightmarish macros.


For the most part, we try not to use most of the STL. We have our own containers, created for the same reasons described in the introduction to EASTL: more uniform behavior across platforms and compilers, better performance in non-optimized builds, better integration with our own memory allocators and allocation tracking. Some containers exist purely for performance reasons (STL's unordered_map cannot be fast even in theory, since the standard effectively mandates separate chaining; our hash table uses open addressing instead). Most of the standard library we simply do not need.
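
For the curious, here is the gist of open addressing as a minimal sketch (this is emphatically not Unity's actual container, just the general technique): entries live in one flat array and collisions are resolved by probing the next slot, with no per-node allocations or pointer chasing as in chained buckets.

#include <cstddef>
#include <functional>
#include <vector>

// Minimal open-addressing (linear probing) lookup sketch.
template <typename K, typename V>
struct OAHashMap {
    struct Slot { K key{}; V value{}; bool used = false; };
    std::vector<Slot> slots{64};  // capacity kept a power of two

    V* find(const K& key) {
        std::size_t mask = slots.size() - 1;
        std::size_t i = std::hash<K>{}(key) & mask;
        // Probe consecutive slots until we hit the key or an empty slot;
        // everything stays in one contiguous array, which is cache-friendly.
        while (slots[i].used) {
            if (slots[i].key == key)
                return &slots[i].value;
            i = (i + 1) & mask;
        }
        return nullptr;
    }
};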


However.


It takes time to convince each new employee (especially juniors fresh out of university) that no, “modern” C++ is not automatically better than the old kind (it might be better! but it might not be). Or that “C-style code” does not automatically mean it is hard to understand and riddled with bugs (it might be! but it might not be).


Just a couple of weeks ago I was complaining to anyone who would listen that I was trying to understand one particular piece of (our own) code and could not, because the code was “too complex” for me. Another (junior) engineer sat down next to me and asked why I looked ready to (ノ ಥ 益 ಥ)ノ ┻━┻, and I said, “well, because I am trying to understand this code, and it is too complex for me.” His instant reaction was something like: “oh, is it some old C-style code?”. And I was like: “no, exactly the opposite!” (The code in question was some sort of template metaprogramming.) He had never worked on large codebases, nor in C or C++, yet something had already convinced him that C code must be unreadable. I blame the universities: students get drilled right away that “C is bad” without ever being told why, and this leaves an indelible imprint on the fragile psyche of future programmers.


So yes, I definitely can and do ignore the parts of C++ I do not like. But it is very tiring to keep re-educating the colleagues around me, because too many have absorbed ideas like “modern means better” or “the standard library must be better than anything we could write ourselves”.


Why does this happen to C++?


I have no idea. The committee faces a genuinely hard task: how to keep evolving the language while maintaining nearly one hundred percent backward compatibility with decisions made over many decades. Add to that the fact that C++ tries to serve many masters at once, across many use cases and experience levels, and you have a huge problem on your hands!


But to some extent, it feels like much of the C++ committee and ecosystem treats “complexity” as a way of proving its own worth.


There is a joke on the Internet about the stages of a C/C++ programmer's development. I remember being at the middle stage about 16 years ago, deeply impressed by Boost, in the sense of: “wow, you can pull off tricks like this, so cool!” Without ever asking why one would do it in the first place.


The same goes for, say, Formula 1 cars or triple-neck guitars. Impressive? Sure. Marvels of engineering? Absolutely. Do they require enormous skill to handle? Yes! The wrong tool for 99% of the situations you will ever find yourself in? Exactly.


Christer Ericson put it beautifully here:


A programmer's goal is to ship, on time and on budget. It is not “to write code.” IMHO, most modern C++ proponents (1) overvalue the source code itself over (2) compile times, debuggability, the cognitive load of new concepts and added complexity, project needs, and so on. It is point (2) that matters.

And yes, people concerned about the state of C++ and its standard libraries can of course join forces and try to improve them. Some do. Some are too busy (or believe they are) to spend time on committees. Some ignore parts of the standard and build their own parallel libraries (like EASTL). Some have concluded that C++ can no longer be saved and are making their own languages (Jai) or jumping to another boat (Rust, C# subsets).


On taking and giving feedback


I know how unpleasant it is when “a bunch of angry people on the Internet” declares that your work is not worth a pile of horse dung. I work on what is perhaps the world's most popular game engine, used by millions, and some of them love to say, directly or indirectly, how awful it is. It hurts; my colleagues and I have put so much thought and effort into it, and then someone passes by and says we are all idiots and our work is garbage. Sad!


Most likely, everyone who works on C++, the STL, or any other widely used technology experiences something similar. You work on something important for years, and then a crowd of Furious Residents of the Lower Internet shows up and trashes your beloved work.


It is all too easy to get defensive; that is the most natural reaction. It is usually not the most constructive one.


Setting aside the literal trolls who whine on the Internet purely for fun, most complainers do have real problems or pains behind their complaints. They may articulate them poorly, or exaggerate, or fail to consider points of view other than their own, but nevertheless there is usually a very concrete problem behind it all.


What do I do when someone complains about a thing I worked on? Forget about “myself” and “my work”, and take their point of view. What are they trying to do, what problems are they trying to solve? The job of any software, library or language is to help users solve their problems. It might be the perfect tool for those problems, or an “ok, it will do”, or a terribly poor fit.


  • “I worked very hard on this, but yes, it seems my tool is not very good at solving your problems” - a perfectly valid outcome!
  • “I worked very hard on this, but did not know about or did not consider your needs; let me see what can be done” - also a great outcome!
  • “Sorry, I do not understand your problem” - a fine outcome too!
  • “I worked very hard on this, but it seems nobody has the problems my work solves” - a very sad outcome, but it can and does happen in practice.

Some of the responses I have seen lately, of the form “all feedback will be ignored unless it arrives as a paper presented at a C++ committee meeting”, do not strike me as a productive approach. Likewise, defending a library's design with “it was a popular library in Boost!” ignores the part of the C++ world that does not consider Boost a good thing at all.


Globally, the video game industry is also to blame. Game technology is traditionally built in C or C++ simply because, until very recently, other systems programming languages effectively did not exist (now there is at least Rust as a worthy contender). Given how dependent on C++ the industry has become, it has definitely not done enough to make itself heard, and is not doing enough to improve the language, the libraries, and the ecosystem.


Yes, that is hard work, and yes, complaining on the Internet is much easier. And whoever starts working on the future of C++, that future is not about solving “immediate problems” (like shipping a game); it is work on something longer-lasting. There are companies that can afford it: any company producing a major game engine, or any big publisher with a centralized technology group, certainly can. Whether it would pay off is a question, but you know, it is somewhat hypocritical to say “C++ is complete garbage, we do not need it” while never telling the language's developers what you actually need.


My impression is that most game technology is fairly happy with the recent (C++11/14/17) additions to the language itself: lambdas turned out to be useful, constexpr if is very nice, and so on. But there is a tendency to ignore what gets added to the standard library, both because of the STL architecture and implementation problems described above (long compile times, poor debug performance), and simply because those additions are not appealing enough, or because companies already wrote their own containers/strings/algorithms many years ago and see no reason to replace what already works.


