Interview with the creator of the C++ STL, 1995. Part 1
In recent years, as demand for C++ has been growing again, it is interesting to look back and recall how this classic development platform came into being. Stroustrup's books, such as The Design and Evolution of C++, are certainly informative on this subject. However, it is just as interesting to hear about the language from its earliest adopters, some of whom became full-fledged co-authors. Perhaps the most famous of them is our (more or less :) compatriot Alex Stepanov, author of the Standard Template Library. The following interview with Alex was conducted in 1995 by Al Stevens, a columnist for Dr. Dobb's Journal. The material will be of interest both to those just beginning to learn C++ and to experienced users of the language.
Alex, tell us something about your long-term interest in generic programming.
I started thinking about generic programming in the late 1970s, when I noticed that some algorithms depend not on a particular implementation of a data structure, but only on a small number of essential semantic properties of that structure. So I began examining a variety of algorithms and found that most of them can be abstracted away from a particular implementation without losing efficiency. Efficiency is one of my main concerns. It is foolish to abstract an algorithm in such a way that, when you use the resulting implementation, it becomes inefficient.
At that time, I thought that the right way to pursue this kind of research was to develop a programming language, which I started doing with my two friends: Deepak Kapur, currently a professor at the State University of New York at Albany, and David Musser, a professor at Rensselaer Polytechnic Institute. The three of us then worked at the General Electric Research Center in Schenectady, New York. We started working on a language called Tecton, which would allow people to describe algorithms in terms of what we called generic structures, which are simply collections of formal types and their properties - something like mathematical objects. We realized that you can define an algebra of operations on these structures: you can refine them, enrich them, and combine them in various ways.
There were some interesting ideas, but the research did not lead to practical results, since Tecton was a functional language. We believed in John Backus's idea that programming should be liberated from the von Neumann style, and we did not want side effects from procedures or functions in the language. As a result, we could not handle many algorithms that essentially require the notions of state and side effect.
An interesting thing about Tecton, which I realized sometime in the late 1970s, was that this language had a fundamental limitation in its accepted notion of an abstract data type (ADT). Typically, an ADT is presented as something that tells you only about the behavior of an object, while the implementation is completely hidden. It was generally accepted that the complexity of an operation is part of the implementation, and that abstraction ignores complexity. One of the things that I now see as central to generic programming is that complexity, or at least some general notion of complexity, must be associated with an operation.
Consider an example. Imagine the ADT "stack". It is not enough to have "push" and "pop" operations and to postulate that something can be pushed onto the stack and that, after you pop the top off, you get the same stack back. A statement of paramount importance is that pushing an element onto the stack is a constant-time operation, regardless of the size of the stack. If I implement the stack in such a way that every insertion becomes slower and slower, no one will want to use that stack.
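To make the point concrete, here is a minimal sketch (my illustration, not from the interview) of a stack whose interface documents complexity guarantees alongside behavior; the class and its std::vector backing are assumptions made for the example.

```cpp
#include <vector>

// A sketch of a stack whose interface states complexity as well as behavior.
// Backing the stack with std::vector is an implementation choice; the comments
// below are the part of the contract the user is allowed to rely on.
template <class T>
class Stack {
public:
    // Amortized constant time: an occasional reallocation makes a single push
    // O(n), but pushing n elements costs O(n) in total.
    void push(const T& x) { data_.push_back(x); }

    // Constant time. Precondition: !empty().
    void pop() { data_.pop_back(); }

    // Constant time. Precondition: !empty().
    const T& top() const { return data_.back(); }

    bool empty() const { return data_.empty(); }

private:
    std::vector<T> data_;
};
```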
We need to separate the implementation from the interface, but not at the cost of ignoring complexity entirely. Complexity must be, and is, part of the unwritten contract between a module and its user. The reason the notion of an ADT was introduced was to allow interchangeable software modules. You cannot have interchangeable modules unless those modules share similar complexity behavior. If I replace one module with another that has the same functional behavior but different complexity trade-offs, the user of the code will be unpleasantly surprised. I could tell him everything I like about data abstraction, and he still would not want to use the code. Complexity assertions must be part of the interface.
Around 1983, I moved from GE Research to the faculty of Polytechnic University, formerly known as Brooklyn Polytechnic, in New York. I started working on graph algorithms. My main collaborator was Aaron Kershenbaum, now at IBM Yorktown Heights. He was an expert in graph and network algorithms, and I convinced him that some of the ideas of high-level and generic programming were applicable to graph algorithms. He had access to several research grants and provided me with support so that I could start working with him on applying these ideas to real network algorithms. He was interested in building a toolkit of high-level generic components so that some of these algorithms could actually be implemented. The point is that some network algorithms are so complex that, although they had been thoroughly analyzed in theory, they had never been implemented at that time. I decided to use a dialect of Lisp called Scheme to build such a toolkit. Aaron and I developed in Scheme a large library of components demonstrating all kinds of programming techniques. Network algorithms were the main goal. Later, Dave Musser, who was still at GE Research, joined us, and we developed even more components, a fairly large library. It was used at the university by graduate students, but it was never used commercially. During this work I realized that side effects are important, because you cannot really do graph operations without side effects: you cannot copy a graph every time you need to modify a vertex. So the insight at the time was that, when building generic algorithms, you can combine higher-order techniques with a moderate use of side effects. Side effects are not necessarily bad; they are bad only when they are misused.
In the summer of 1985, I was invited back to GE Research to teach a course on high-level programming. There I showed how complex algorithms can be constructed using this technique. One of the people attending the course was Art Chen, who later became the manager of the Information Systems Laboratory. He was impressed enough to ask me whether I could build an industrial-strength library in Ada using this technique, provided that he could fund the work. Being a poor assistant professor, I agreed, although at the time I did not know any Ada. Together with Dave Musser, we started work on an Ada library. This was an important circumstance, because switching from a dynamically typed language like Scheme to a strongly typed language like Ada made me appreciate the importance of strong typing. Everyone understands that strong typing helps in finding errors. I found that strong typing, in the context of Ada generics (generic procedures, that is, procedures applicable to any type), was also a tool for capturing designs. It was not only a tool for catching errors; it was also a tool for thinking. This work led me to the idea of the orthogonal decomposition of the component space. I realized that software components belong to different categories. Adherents of OOP think that everything is an object. When I was working on the generic library for Ada, I realized that this is not so. There are things that are objects: things that have state and change their state are objects. And at the same time, there are things that are not objects. Binary search is not an object; it is an algorithm. Moreover, I realized that by decomposing the component space along several orthogonal dimensions, we can reduce the number of components and, more importantly, provide a conceptual framework for designing them.
Then I was offered a job at Bell Labs, in the C++ group, working on C++ libraries. They asked me whether I could do the same thing in C++. Of course, I did not know C++, and of course I agreed. But I could not do it in C++, because in 1987 C++ did not have the templates that are needed for this style of programming. Inheritance was the only mechanism for obtaining genericity, and it was not sufficient.
Even now, inheritance in C++ is not of much use for generic programming. Let's discuss why. Many people have tried to use inheritance to implement data structures and container classes. As we now know, there have been very few successful attempts, if any. C++ inheritance, and the programming style associated with it, are dramatically limited. For example, it is impossible to implement a design that includes something as trivial as an equality comparison operator. If you start with a base class X at the root of your hierarchy and define a virtual equality operator on it that takes an argument of type X, then you inherit class Y from X. What is the interface of its equality operator? It has an equality that compares Y with X. Using animals as an example (object-oriented people love animals), define "mammal" and derive "giraffe" from "mammal". Then define a member function "mate", in which one animal mates with another animal and returns an animal. Then you derive "giraffe" from "animal" and, of course, it has a "mate" function in which a giraffe mates with an animal and returns an animal. This is definitely not what you want. While mating may not be very important for C++ programmers, the equality operator certainly is. I do not know a single algorithm that does not use some kind of equality comparison.
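A sketch of the problem being described (the class names and members are my own, purely illustrative): the virtual operator's signature is frozen at the base class, so a derived class cannot express comparison of its own type without a run-time downcast.

```cpp
// Sketch of the limitation: the virtual equality operator in the base class
// fixes its parameter type to the base, so every derived class can only be
// compared "as" an Animal and must downcast by hand to reach its own data.
struct Animal {
    virtual ~Animal() = default;
    virtual bool operator==(const Animal&) const { return true; }  // nothing to compare here
};

struct Giraffe : Animal {
    int neck_length = 0;
    // Still takes const Animal&, not const Giraffe&.
    bool operator==(const Animal& other) const override {
        const Giraffe* g = dynamic_cast<const Giraffe*>(&other);
        return g != nullptr && neck_length == g->neck_length;
    }
};
```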
You need templates to deal with such problems. You can have a class template "Animal" that has a member function "mate" which takes an animal and returns an animal. When you instantiate it with "Giraffe", "mate" will do the right thing. In this respect, the template is a more powerful mechanism.
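In contrast, a hedged sketch of what the template version might look like (a separate, self-contained example with names of my own choosing): the concrete type is a parameter, so both "mate" and the equality operator are expressed in terms of the exact type.

```cpp
#include <algorithm>

// Sketch of the template alternative: the concrete type is a parameter,
// so each operation is stated in terms of the exact type being used.
template <class Species>
struct Animal {
    Species self;
    // Takes a Species and returns a Species; instantiated with Giraffe,
    // this mates a Giraffe with a Giraffe and yields a Giraffe.
    Species mate(const Species& partner) const { return Species(self, partner); }
};

struct Giraffe {
    int neck_length = 0;
    Giraffe() = default;
    Giraffe(const Giraffe& a, const Giraffe& b)
        : neck_length(std::max(a.neck_length, b.neck_length)) {}
    // Equality takes the exact type, no downcast needed.
    bool operator==(const Giraffe& other) const { return neck_length == other.neck_length; }
};

// Usage: Animal<Giraffe> g{Giraffe{}}; Giraffe offspring = g.mate(Giraffe{});
```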
Nevertheless, I was able to create a very large library of algorithms that later became part of the Unix System Laboratories Standard Components Library. I learned a lot at Bell Labs by talking about programming with people like Andy Koenig and Bjarne Stroustrup. I realized that C/C++ is an important programming language with some fundamental paradigms that can no longer be ignored. In particular, I found out that pointers are very good. I do not mean dangling pointers. I do not mean pointers to the stack. I mean that the general notion of a pointer is a powerful tool. The concept of an address is used universally. It is commonly, but incorrectly, believed that pointers force our thinking to be sequential. That is not true. Without some notion of an address, we cannot describe any parallel algorithm. If you try to describe the parallel summation of n numbers, you cannot do it until you can talk about the first number being added to the second while the third number is added to the fourth. You need some kind of indexing. You need some kind of addressing to describe an algorithm of any kind, sequential or parallel. The notion of an address or position is fundamental to our understanding and conceptualization of computational processes - algorithms.
Now let's think about why C is a great language. It is commonly believed that C is a big hack that succeeded because Unix was written in it. I disagree. Over a long period of time, computer architectures evolved not because some smart people figured out how to design architectures, but because of the needs of many programmers solving real problems. In fact, the "smart" people at the time were promoting tagged (Lisp-oriented) architectures, which, of course, were not very relevant to the basic needs of developers. Computers that could deal only with numbers evolved into computers with byte-addressable memory, flat address spaces, and pointers. This was a natural evolution, reflecting the growing set of problems that people were solving. C, reflecting the genius of Dennis Ritchie, provided a minimal model of the computer that had evolved over the previous 30 years. C was not a quick hack. As computers evolved to handle all kinds of problems, C, being the minimal model of such a computer, became a very popular language for solving all kinds of problems in various fields with a high degree of efficiency. This is the secret of C's portability: it is the best abstract representation of a computer that we have. Of course, the abstraction is over many real computers, not over some imaginary computing devices. Moreover, people understood the machine model underlying C. It is much easier for the average engineer to understand the machine model underlying C than the machine model of Ada or even Scheme. C succeeded because it did the right thing, not because AT&T advertised it or because Unix was written in it.
C++ is successful because, instead of trying to invent a machine model by contemplating his navel, Bjarne started with C and tried to develop C further, providing more general programming techniques within the framework of that machine model. The machine model of C is very simple: you have memory where entities live, and you have pointers to consecutive memory elements. It is very easy to understand. C++ preserves this model, but makes the entities that live in memory richer than in the C machine, because C has a limited set of data types. Namely, C has structures, which provide a kind of extensible type system, but it does not allow you to define operations on structures. This limits the extensibility of the type system.
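As a small illustration of this last point (my own sketch, not from the interview): in C a struct like the one below would be pure data, while C++ allows operations to be defined on it, so a user-defined type can behave like a built-in one.

```cpp
// In C, 'struct Complex { double re, im; };' is data only. C++ lets the same
// struct carry operations, so the type participates in the language much like
// a built-in numeric type.
struct Complex {
    double re, im;
};

inline Complex operator+(Complex a, Complex b) { return Complex{a.re + b.re, a.im + b.im}; }
inline bool operator==(Complex a, Complex b) { return a.re == b.re && a.im == b.im; }

// Usage: Complex c = Complex{1.0, 2.0} + Complex{3.0, 4.0};   // c is {4.0, 6.0}
```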
In 1988, I moved to HP Labs, where I was hired to work on generic libraries. For several years, however, I worked on disk drives instead, which was exciting but completely orthogonal to this area of research. I returned to the development of a generic library in 1992, when Bill Worley, the former head of my laboratory, started an algorithms project with me as its manager. C++ had templates by that time. I found that Bjarne had done an excellent job of designing templates. Long ago I had taken part in several discussions at Bell Labs about the design of templates, and I had argued quite strongly with Bjarne that he should make C++ templates as close to Ada generics as possible. I think I argued so strongly that he made the opposite decision. I realized the importance of having function templates in C++, and not just class templates, as many people think. I thought, however, that function templates should work like Ada generics, that is, that they should be instantiated explicitly. Bjarne did not listen to me and designed a function template mechanism in which templates are instantiated implicitly, using the overloading mechanism. This particular technique became crucial for my work, because I found that it allowed me to do many things that were impossible in Ada. I see this particular design by Bjarne as an excellent result, and I am glad that he did not follow my advice then.
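A minimal sketch of the mechanism he is referring to (my own example, with invented names): a function template is instantiated implicitly, with argument types deduced through the same overloading machinery that resolves ordinary calls, instead of being instantiated explicitly the way an Ada generic would be.

```cpp
#include <cstring>
#include <string>

// A function template: nowhere is it instantiated explicitly.
template <class T>
const T& max_of(const T& a, const T& b) {
    return a < b ? b : a;
}

// An ordinary function can overload the template; the compiler chooses
// between them by the usual overload-resolution rules.
const char* max_of(const char* a, const char* b) {
    return std::strcmp(a, b) < 0 ? b : a;
}

void demo() {
    int i = max_of(3, 7);                         // implicitly instantiates max_of<int>
    std::string s = max_of(std::string("a"),
                           std::string("b"));     // implicitly instantiates max_of<std::string>
    const char* p = max_of("abc", "abd");         // the non-template overload is chosen
    (void)i; (void)s; (void)p;
}
```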
When did you start work on the STL and what was its original purpose?
In 1992, when the project was formed, it included eight people. Gradually the group shrank, and in the end it consisted of two: me and Meng Lee. Although Meng was new to the area - she had developed compilers for most of her professional life - she embraced the vision of generic programming research from start to finish, and believed that it could change software development at a time when very few people shared this belief. I do not think I could have built the STL without her help (after all, STL stands for Stepanov and Lee). We wrote a gigantic library: a lot of code, with many data structures and algorithms, function objects (functors), adaptors, and so on. There was a lot of code, but no documentation. Our work was seen as a research project whose goal was to demonstrate that it is possible to have algorithms defined as generically as possible that are nevertheless as efficient as possible. We spent a lot of time taking measurements and found that we could make these algorithms as generic as they can be and still as efficient as hand-written code. There is no speed overhead in this style of programming! The library kept growing, but it was not clear where it was heading as a project. It took several fortunate events to lead it to the STL.
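To illustrate the kind of comparison being described (a sketch under my own assumptions, not the original benchmark code): a generic algorithm written over iterators does the same work as the hand-written loop it generalizes.

```cpp
// Generic version: works for any iterator and any value type with operator==.
template <class Iterator, class T>
Iterator generic_find(Iterator first, Iterator last, const T& value) {
    while (first != last && !(*first == value)) ++first;
    return first;
}

// Hand-written version for one concrete case.
const int* find_int(const int* first, const int* last, int value) {
    while (first != last && *first != value) ++first;
    return first;
}

// With pointers as iterators, generic_find(a, a + n, x) performs exactly the same
// comparisons and increments as find_int(a, a + n, x): the template is resolved at
// compile time, so the abstraction costs nothing at run time.
```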
(Continued)
(Original article)