A plugin system as an exercise in C++11
It has somehow turned out that in many of the systems I worked on I either dealt with component models of my own, or the project reached the point where one had to appear, because it was already clear that the decomposition and the system code found it harder and harder to coexist in a single module.
Does it make sense to write something like this yourself, or is it better to take a ready-made solution? This post does not answer that question, and there will be no philosophizing here on the topic of "why is this needed at all".
An attempt at something similar was already made in C++03: a component/plug-in model living within a single process. The problem itself is simply interesting to me. By the time I started this article, at the beginning of this (2013) year, gcc 4.7.2 already had everything I wanted. And then C++11 got hold of me: at work in one direction, at home in another. To play with C++11, I decided to rewrite the material of the old article using the new language features, to do, in a sense, an exercise in C++. For more than six months, though, I could not bring the article to completion, and it settled into the drafts. Now I have pulled it out and shaken off the mothballs; what came of it, read on.
About the decision to use C++11
We waited and waited, and finally a C++ update arrived: a new language standard, C++11. This edition of the language brought many interesting and useful features, but whether to use it is still a debatable question, since not all compilers support it, or support it only partially.
Introduction and a bit of philosophy
This section says a little about the principles this implementation of a plug-in system (component model) is built on and why they were chosen rather than others. A certain amount of water will be poured: these are my own musings. If such philosophizing does not interest you, feel free to proceed straight to the implementation.
Interfaces and Identifiers
The development described here is built on the interaction of system components through interfaces. An interface in this context is a C++ structure containing only pure virtual methods. The interface is the logical unit around which everything is built.
One of the important questions is what to use as an identifier for an interface, an implementation, a module, and other entities. In the previous article a C string was used as the identifier, since that way stronger uniqueness can be ensured: for example, a UUID generated by some tool and written out as a string can serve as the identifier. A numeric identifier can be used instead. Uniqueness is then weaker, but there are advantages, at the very least better performance, since comparing strings is obviously more expensive than comparing numbers. The value of a numeric identifier can be, for example, the CRC32 of a string. Suppose there is an IBase interface in the Common namespace; the CRC32 of the string "Common.IBase" can become its identifier. Yes, if interface identifiers happen to collide somewhere (this is not a UUID, after all), you are in for long hours of "happy" debugging and a good workout in the strong side of the Russian language. But unless you have the ambition that your model will be used worldwide in global systems, the probability of such an outcome is minimal. At a couple of companies I dealt with home-grown "crafts" in the style of MS COM that used numeric identifiers, and I never ran into the problem described above, nor did I hear of anyone else who did. So a numeric identifier is used in this implementation. Besides performance, this decision has another positive side: a lot of interesting things can be done with a numeric identifier at compile time, since strings cannot be manipulated as template parameters, while numbers easily can. And this is where the first advantage of C++11 comes into play: constexpr, which makes it possible to compute hash values at compile time.
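As a small sketch of the idea (InterfaceTag and FakeHash are illustrative placeholders; the real constexpr CRC32 implementation appears later in the article): a string cannot parameterize a template, but the number computed from it at compile time can.

#include <cstdint>

typedef std::uint32_t InterfaceId;                           // hypothetical identifier type

constexpr InterfaceId FakeHash(char const *) { return 42; }  // stand-in for a real constexpr CRC32

template <InterfaceId Id>
struct InterfaceTag                                          // a template parameterized by the identifier
{
  static constexpr InterfaceId Value = Id;
};

typedef InterfaceTag<FakeHash("Common.IBase")> IBaseTag;     // resolved entirely at compile time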
Cross-platform and language support
The model described here will be cross-platform. Cross-anything is one of the interesting parts of development. For a C++ developer, supporting multiple platforms is a familiar task, but supporting multiple compilers comes up less often, and what one compiler handles easily may not be supported by another at all. One example is the attempts, before decltype appeared, to obtain the type of an expression at compile time. A good illustration is BOOST_TYPEOF: if you look under its hood you will find a fair pile of sticks and crutches, because such things could not be implemented with pure C++03 and were mostly solved through advanced features of particular compilers. C++11 also expanded the standard library, which made it possible to stop writing one's own wrappers around threads, synchronization objects, and so on. A special thank-you goes to the standard type traits library, which removed the need to write a lot of custom code and, most importantly, provided facilities such as std::is_pod that simply could not be implemented with standard C++03 means without compiler extensions.
Use of third-party libraries
I wanted to minimize the use of third-party libraries and, if possible, reduce it to zero. This is pure C++ development: when implementing the final components you may use anything the task requires, any libraries at all, but the model itself, presented here, stays clean with respect to third-party dependencies.
Over time I have developed a certain attitude towards third-party libraries: do not pull a library into a project if its functionality is not going to be used by the client code to any meaningful extent. You should not drag Qt into a project just because somebody likes using QString and QList. Yes, I have seen projects where libraries and frameworks were brought in just to use some small, unimportant part of them, purely out of the habit of certain developers. In general, there is nothing wrong with libraries such as Boost, Qt, or POCO, but they should be used where appropriate and included in a project only when there is a real need for them. There is no point in breeding a whole zoo; a couple of exotic animals in the project is quite enough :) Otherwise you end up with a project containing five to seven (or more) kinds of strings, two or three of which are home-grown bicycles while the rest come from different libraries, plus a pile of converters from one implementation to another. As a result, instead of doing useful work the program spends a good share of its time converting between different implementations of the same entities.
Boss ...
I have long been used to organizing code into namespaces. Boss (Base Objects for Service Solutions) was chosen as the name of the namespace and of the whole model. The origin of the name can be found in the previous article on this topic. In the comments to that article it was noted that "Boss" can be confusing in code because of associations with bosses and the stereotypes attached to them. There was never any intention of alluding to a certain "captain" (© Nasha Russia). But if the name triggers negative associations for someone, why not look at it from a different angle? There is a wonderful book by Ken Blanchard, "Leading at a Higher Level", which describes high-performing organizations and servant leaders whose goal is to give employees everything they need to work at maximum productivity, rather than just standing behind them with a stick. That is, a leader is an assistant in organizing effective work. Boss should be seen as that kind of leader in a high-performing organization. Within the component model it is precisely a thin layer that helps the entities of the system interact more easily, not a monstrous framework that has to be fought, where most of the effort goes into serving the framework instead of the business logic.
Minimalism in the interface
One criterion that matters a lot to me when evaluating yet another library is how quickly you can start working with it, combined with how much room it leaves for more advanced configuration later. A library should not force its user to perform a lengthy ritual before anything starts working, but as the need grows it should offer more and more options for tuning and adapting it to harder tasks. In other words: at first, here is one big button that runs a sensible default sequence of actions, and, if needed, here is the control panel with all the knobs and switches. This idea was one of those built into the proposed model: hide as much as possible from the user in the early stages. Inside the library the code can be arbitrarily complex, but all that complexity must buy maximum ease of use of the library itself.
Multiple Inheritance of Implementations
Many holy wars have been and are still being waged on the Internet about multiple inheritance of implementations. I believe multiple inheritance is one of the strengths of C++. Yes, it brings problems in places, but without it you cannot always get by easily either. No C++ feature is meant to be used just because it exists; but when the need arises, the tool is there.
When people start extolling to me the advantages of languages that allow multiple inheritance of interfaces only, I like to ask how they would solve the following problem. Suppose there are two interfaces and an implementation of each, and these interfaces and implementations have been used in the project for quite some time. Yes, fat interfaces are a design problem, but let's say each of these interfaces has more than a dozen methods, and accordingly the implementations implement all of them. Now a component is needed that combines the functionality of those two entities and additionally implements a third interface. With multiple inheritance of implementations everything is solved simply: a class is derived from the new interface and from the two existing implementations, and only the methods of the new, third interface have to be implemented.
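A minimal sketch of that situation in plain C++ (IFirst, ISecond, INew and the implementations are invented for illustration):

// Two existing interfaces, each with a long-lived implementation.
struct IFirst  { virtual ~IFirst() {}  virtual void DoFirst() = 0;  /* ...a dozen more methods... */ };
struct ISecond { virtual ~ISecond() {} virtual void DoSecond() = 0; /* ...a dozen more methods... */ };

class FirstImpl : public IFirst
{
public:
  virtual void DoFirst() { /* battle-tested code */ }
};

class SecondImpl : public ISecond
{
public:
  virtual void DoSecond() { /* battle-tested code */ }
};

// The new, third interface.
struct INew { virtual ~INew() {} virtual void DoNew() = 0; };

// With multiple inheritance of implementations the new component reuses both
// existing implementations wholesale and only writes the methods of INew.
class Combined
  : public FirstImpl
  , public SecondImpl
  , public INew
{
public:
  virtual void DoNew() { /* only the new functionality is written here */ }
};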
Here, of course, one could start a lengthy discussion about system design, but real practice is rarely as idealistic as theoretical code design.
Once, at an interview, I asked a candidate (far from a junior) what he knew about multiple inheritance. The answer was roughly: "Yes, I know that multiple inheritance exists, and it seems there is also virtual multiple inheritance, but that is bad. I never use it. And I can't say anything more about it."
If you want to build new entities by assembling them from cubes of ready-made ones, multiple inheritance is one of the most useful mechanisms. And component models are exactly the playground for building something new out of pieces of something that already exists.
Implementation
Core
As already noted, everything is built around interfaces: C++ structures with pure virtual methods and one small additive (the interface identifier).
The base interface, from which everything else in this implementation must inherit:
namespace Boss
{
  struct IBase
  {
    BOSS_DECLARE_IFACEID("Boss.IBase")

    virtual ~IBase() {}

    BOSS_DECLARE_IBASE_METHODS()
  };
}
Hmm, a virtual destructor and a couple of macros... Many will exclaim: "Macros are evil!" Yes, they are evil when used in abundance and anywhere at all. In small amounts and only where necessary they can be useful, like a poison in pharmacology: depending on the dosage it either kills or cures.
BOSS_DECLARE_IFACEID
adds a small static method through which the interface identifier can be obtained. Since a static method does not affect the data layout of the structure in any way, the interface can safely be passed between modules built even with different compilers, and constexpr allows the resulting value to be used for parameterizing templates.

#define BOSS_DECLARE_IFACEID(ifaceid_) \
  static constexpr Boss::InterfaceId const GetInterfaceId() \
  { \
    return Boss::Crc32(ifaceid_); \
  }
The macro parameter is a string that serves as the interface identifier. Strings somehow look nicer in code than dry numbers, and in any case some source data is needed from which to generate the numeric identifier. The identifier is the crc32 of that string. And here is the strength of the new standard: crc32 and other things can be computed from strings at compile time! Such a trick will not work, of course, for strings created dynamically at run time, but that is not needed for this task.
Implementing the crc32 calculation requires a lookup table, which is easy to find on the Internet. With it, crc32 can be computed roughly like this:
namespace Boss
{
  namespace Private
  {
    template <typename T = void>
    struct Crc32TableWrap
    {
      static constexpr uint32_t const Table[256] =
      {
        0x00000000L, 0x77073096L, 0xee0e612cL, 0x990951baL, 0x076dc419L,
        0x706af48fL, 0xe963a535L, 0x9e6495a3L, 0x0edb8832L, 0x79dcb8a4L,
        // ... etc.
      };
    };

    typedef Crc32TableWrap<> Crc32Table;

    template <int I>
    inline constexpr std::uint32_t Crc32Impl(char const *str)
    {
      return (Crc32Impl<I - 1>(str) >> 8) ^
        Crc32Table::Table[(Crc32Impl<I - 1>(str) ^ str[I]) & 0x000000FF];
    }

    template <>
    inline constexpr std::uint32_t Crc32Impl<-1>(char const *)
    {
      return 0xFFFFFFFF;
    }
  }

  template <unsigned N>
  inline constexpr unsigned Crc32(char const (&str)[N])
  {
    return (Private::Crc32Impl<N - 2>(str) ^ 0xFFFFFFFF);
  }
}
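A quick usage sketch of what this gives: the hash of a literal is a genuine compile-time constant, so it can be checked with static_assert or used as a template argument.

// Both checks are evaluated entirely by the compiler; no CRC code runs at run time.
static_assert(Boss::Crc32("Common.IBase") == Boss::Crc32("Common.IBase"),
              "the identifier of a name is a stable compile-time constant");
static_assert(Boss::Crc32("Common.IBase") != Boss::Crc32("Common.IFace"),
              "different names give different identifiers here");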
Why is the table wrapped in a structure, and a template structure at that? To get rid of a .cpp file with the data definition: everything lives only in a header, without the usual joys of static data in include files.
The crc32 is computed and the identifier generated. Now let's look at what hides behind the second macro:
BOSS_DECLARE_IBASE_METHODS
Really, was it impossible just to take three methods and put them into the structure? Why a macro? (And then finish me off with a question about relatives in India...) But since multiple inheritance is not only not rejected here but very much welcomed in this model, this macro will be used in a few more places to calm the compiler's worries about which inheritance branch the methods described under the macro should be taken from.

#define BOSS_DECLARE_IBASE_METHODS() \
  virtual Boss::UInt BOSS_CALL AddRef() = 0; \
  virtual Boss::UInt BOSS_CALL Release() = 0; \
  virtual Boss::RetCode BOSS_CALL QueryInterface(Boss::InterfaceId ifaceId, Boss::Ptr<Boss::IBase> *iface) = 0;
Object lifetime is managed through reference counting. The IBase methods are exactly those for working with the reference counter plus a method for requesting other interfaces from an object.
An example of defining a user interface:
struct IFace
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace")

  virtual void BOSS_CALL Mtd() = 0;
};
Almost everything here is clear: an interface, declarations of its methods, definition of an identifier. But why not simply inherit from IBase? Here is a second example of user interfaces, to make the further explanation clearer:
struct IFace1
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")

  virtual void BOSS_CALL Mtd1() = 0;
};

struct IFace2
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace2")

  virtual void BOSS_CALL Mtd2() = 0;
};

struct IFace3
  : Boss::Inherit<IFace1, IFace2>
{
  BOSS_DECLARE_IFACEID("IFace3")

  virtual void BOSS_CALL Mtd3() = 0;
};
Has everything become clear now? No? It is simple: with multiple inheritance, even of interfaces alone, there has to be a way to "walk over" them in search of the right one when implementing QueryInterface. The case is a bit esoteric, but I have come across it. Suppose you have a pointer to IFace3; obviously you can call all the methods of its base classes right on the spot. But if you pass it to another, more generic function, one that always requests IFace1 or IFace2 from whatever interface it is given (not necessarily one with this inheritance structure), then that function relies not on C++ mechanisms but on the implemented QueryInterface, whose implementation has to traverse this hierarchy somehow (a sketch of such a generic function is given right after the definition below). This is where a small additive comes in handy: Boss::Inherit, which has the following implementation:

namespace Boss
{
  template <typename ... T>
  struct Inherit
    : public T ...
  {
    virtual ~Inherit() {}

    typedef std::tuple<T ... > BaseInterfaces;

    BOSS_DECLARE_IBASE_METHODS()
  };
}
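And a sketch of the "more generic function" mentioned above, assuming the QueryInterface signature declared by BOSS_DECLARE_IBASE_METHODS and the Status codes used later in the article (the function itself and its names are illustrative, not taken from the library):

// Works with any object of the model: it knows nothing about the concrete
// inheritance structure and relies only on QueryInterface.
void UseAsIFace1(Boss::IBase *obj)
{
  Boss::Ptr<Boss::IBase> found;
  if (obj->QueryInterface(IFace1::GetInterfaceId(), &found) != Boss::Status::Ok)
    return;                     // the object does not provide IFace1
  // ... work with the object through IFace1 ...
}

// Both an IFace3 object and any other object implementing IFace1 can be passed in:
// QueryInterface finds IFace1 by walking the hierarchy recorded in BaseInterfaces.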
This additive simply inherits from the list of base interfaces passed to it, calms the compiler's worries about ambiguity of the required methods (via BOSS_DECLARE_IBASE_METHODS), and "stashes away" the list of inherited interfaces. Here the new standard brings another advantage: variadic templates. Hooray, we waited! Previously this was done with bulky Alexandrescu-style type lists. The new "pluses" also throw in a bonus in the form of std::tuple, removing the need to write yet another bicycle of that kind. So, how, from what and why user interfaces are defined has been covered, but they still need to be implemented somewhere and somehow. First, a small example of implementing an interface:
struct IFace1
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")

  virtual void Mtd1() = 0;
};

class Face_1
  : public Boss::CoClass<IFace1>
{
public:
  virtual void Mtd1()
  {
    // TODO:
  }
};
And a bigger example
with all the "nastiness" the implementation will have to sort out. Obviously, even more chaos could be arranged; this example simply shows the possibilities for building implementations out of "cubes".

struct IFace1
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace1")

  virtual void Mtd1() = 0;
};

struct IFace2
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace2")

  virtual void Mtd2() = 0;
};

struct IFace3
  : Boss::Inherit<IFace1, IFace2>
{
  BOSS_DECLARE_IFACEID("IFace3")

  virtual void Mtd3() = 0;
};

class Face1
  : public Boss::CoClass<IFace1>
{
public:
  virtual void Mtd1()
  {
    // TODO:
  }
};

class Face2
  : public Boss::CoClass<IFace2>
{
public:
  virtual void Mtd2()
  {
    // TODO:
  }
};

class Face123
  : public Boss::CoClass<Face1, Face2, IFace3>
{
public:
  virtual void Mtd3()
  {
    // TODO:
  }
};

struct IFace4
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace4")

  virtual void Mtd4() = 0;
};

struct IFace5
  : Boss::Inherit<Boss::IBase>
{
  BOSS_DECLARE_IFACEID("IFace5")

  virtual void Mtd5() = 0;
};

struct IFace6
  : Boss::Inherit<IFace4, IFace5>
{
  BOSS_DECLARE_IFACEID("IFace6")

  virtual void Mtd6() = 0;
};

class Face123456
  : public Boss::CoClass<Face123, IFace6>
{
public:
  virtual void Mtd4()
  {
    // TODO:
  }
  virtual void Mtd5()
  {
    // TODO:
  }
  virtual void Mtd6()
  {
    // TODO:
  }
};
It is hard not to notice that every implementation inherits from CoClass. CoClass itself has a very simple implementation:
namespace Boss
{
  template <typename ... T>
  class CoClass
    : public virtual Private::CoClassAdditive
    , public T ...
  {
  public:
    typedef std::tuple<T ... > BaseEntities;

    CoClass()
      : Constructed(false)
    {
    }

    // IBase
    BOSS_DECLARE_IBASE_METHODS()

  private:
    template <typename Y>
    friend void Private::SetConstructedFlag(Y *, bool);
    template <typename Y>
    friend bool Private::GetConstructedFlag(Y *);

    bool Constructed;
  };
}
This class, like the Inherit structure, inherits from the list of entities passed to it and "stashes away" that list, additionally inherits from a small marker additive (Private::CoClassAdditive), which will later be used to classify entities as interfaces or implementations, again calms the compiler's ambiguity worries (by pulling the methods in via BOSS_DECLARE_IBASE_METHODS), and carries a flag indicating whether the object has been fully constructed (Constructed).

namespace Boss
{
  namespace Private
  {
    struct CoClassAdditive
    {
      virtual ~CoClassAdditive() {}
    };
  }
}
There are interfaces and there are implementations, but there is still no implementation of IBase. Implementing this interface is probably the most difficult part of all.
Creating an object from the big example above looks roughly like this:
auto Obj = Boss::Base<Face123456>::Create();
Boss::Base is the class implementing Boss::IBase. To perform certain operations the implementation has to walk over the class hierarchy; for the example above the simplified hierarchy looks like this:
I will put off the traversal of the class hierarchy in search of the right class for a little while and first run through the simpler methods.
Reference counting is done by calling AddRef (increments the counter) and Release (decrements it and, on reaching zero, destroys the object with delete this). Since objects are expected to be used in a multi-threaded environment, the counter is handled through std::atomic, which allows it to be incremented and decremented atomically. Yes, C++ has finally acknowledged the existence of threads, and support for threads and synchronization primitives has appeared.
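The counter methods themselves are not listed in the article, so here is a minimal sketch of what such an implementation typically looks like (the class, member and type names are assumptions, not taken from the library):

#include <atomic>
#include <cstdint>

// A minimal reference-counted object in the spirit described above.
class RefCounted
{
public:
  RefCounted() : Counter(1) {}          // the creator holds the first reference
  virtual ~RefCounted() {}

  std::uint32_t AddRef()
  {
    return Counter.fetch_add(1, std::memory_order_relaxed) + 1;
  }

  std::uint32_t Release()
  {
    std::uint32_t NewCount = Counter.fetch_sub(1, std::memory_order_acq_rel) - 1;
    if (!NewCount)
      delete this;                      // last reference gone: the object removes itself
    return NewCount;
  }

private:
  std::atomic<std::uint32_t> Counter;
};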
The Create method has the following implementation:

template <typename ... Args>
static RefObjPtr<Base> Create(Args const & ... args)
{
  Private::ModuleCounter::ScopedLock Lock;
  RefObjPtr<Base> NewInst(new Base(args ...));
  Private::FinalizeConstruct<T>::Construct(NewInst.Get());
  return std::move(NewInst);
}
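For illustration, creating an object whose constructor takes parameters might look like this (MyClass and its constructor arguments are hypothetical):

// Both arguments are forwarded into "new MyClass(...)" inside Create.
auto Obj = Boss::Base<MyClass>::Create("SomeName", 1024);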
Variadic templates make it possible to write a single method for constructing objects that forwards the necessary parameters to the constructor. Previously this could not be done, and if an object needed some initial setup you had to give it a specific method of the Init kind and pass everything through that.

ModuleCounter
controls the module reference count. There are two counters: the reference counter on the object itself and the counter of all references in the module. The module counter is needed to tell when there are still "live" objects in a module and when there are none left and the module can be unloaded.

namespace Boss
{
  namespace Private
  {
    struct ModuleRefCounterTypeStub
    {
    };

    template <typename T>
    class ModuleRefCounter
    {
    public:
      static void AddRef()
      {
        Counter.fetch_add(1, std::memory_order_relaxed);
      }
      static void Release()
      {
        Counter.fetch_sub(1, std::memory_order_relaxed);
      }
      static UInt GetCounter()
      {
        return Counter;
      }

    private:
      static std::atomic<UInt> Counter;

    public:
      class ScopedLock
      {
      public:
        ScopedLock(ScopedLock const &) = delete;
        ScopedLock(ScopedLock &&) = delete;
        ScopedLock operator = (ScopedLock const &) = delete;
        ScopedLock operator = (ScopedLock &&) = delete;

        ScopedLock()
        {
          ModuleRefCounter::AddRef();
        }
        ~ScopedLock()
        {
          ModuleRefCounter::Release();
        }
      };
    };

    template <typename T>
    std::atomic<UInt> ModuleRefCounter<T>::Counter(0);

    typedef ModuleRefCounter<ModuleRefCounterTypeStub> ModuleCounter;
  }
}
To drop static libraries and implement the singleton pattern (one per module) for the ModuleRefCounter entity entirely in a header, the trick with templates and static members comes in handy. It is described in more detail in the previous article; briefly: if you create a class template with a static field and instantiate it with some type, the instance of that field will be the only one in the whole module. A small trick for writing singletons in header files without a definition in some .cpp file (singletons in includes).
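A minimal sketch of the trick itself, outside the model (all names are illustrative): a static member of a class template may be defined right in the header, and within one module all its uses refer to a single object.

// singleton.h -- header-only, no .cpp definition required
template <typename Tag>
struct StaticHolder
{
  static int Value;                            // one object per instantiation per module
};

template <typename Tag>
int StaticHolder<Tag>::Value = 0;              // legal in a header: template static member definitions may repeat

struct MyCounterTag {};
typedef StaticHolder<MyCounterTag> MyCounter;  // MyCounter::Value is the same object everywhere in the module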
But this beautiful solution has a rake hidden in it, a children's rake: the handle is half as long, so it hits more precisely and more painfully. The solution works fine with .dll, but with .so I ran into a problem: a template with static fields instantiated with the same type became one for all the .so files carrying components of this model within the process! I understood why a little later, but I had to abandon the beautiful solution in favor of a simpler one based on anonymous namespaces and a header file that is included into each module no more than once (for the curious: boss/include/plugin/module.h).
C++ is considered by many to be a language that makes it easy to "shoot yourself in the foot", and as a rule it is attacked precisely over the paired operations of allocating and releasing resources, memory in particular. But if you use smart pointers, that is one headache less. RefObjPtr is exactly such a smart pointer: it calls AddRef and Release to control the object's lifetime, so that AddRef and Release never have to appear in user code.
Such a goodie of the new standard as r-value references allows writing more optimal entities; for example, the same RefObjPtr can return an object without an extra AddRef/Release pair in copy constructors (return std::move(NewInst)).
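Since RefObjPtr itself is not listed in the article, here is a minimal sketch of such an intrusive pointer (a simplification, not the library's actual class) showing where the move constructor saves the extra AddRef/Release pair:

template <typename T>
class RefPtr
{
public:
  RefPtr() : Obj(nullptr) {}
  explicit RefPtr(T *obj) : Obj(obj) {}            // adopts the reference it is given

  RefPtr(RefPtr const &other) : Obj(other.Obj)     // copy shares ownership: one extra reference
  {
    if (Obj)
      Obj->AddRef();
  }

  RefPtr(RefPtr &&other) : Obj(other.Obj)          // move steals the reference: no AddRef at all
  {
    other.Obj = nullptr;
  }

  ~RefPtr()
  {
    if (Obj)
      Obj->Release();
  }

  RefPtr& operator = (RefPtr const &) = delete;    // assignment is omitted to keep the sketch short

  T* Get() const { return Obj; }
  T* operator -> () const { return Obj; }

private:
  T *Obj;
};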
In Create there is also a call to some FinalizeConstruct. What is it and what is it for? Suppose you have a hierarchy roughly no simpler than the one shown in the figure above, and in one of the interface implementations you need to call something that is defined in a class one level lower. Virtual functions could be used, but, to put it simply, in a constructor the virtual function table does not fully exist yet, and in a destructor it no longer does: all calls to virtual functions behave like calls to ordinary class methods, so calling a function overridden at a lower level will not work from a constructor. For that case FinalizeConstruct exists; it is called after the object has been fully created. This means implementing, by hand, logic similar to the logic of constructor calls: walk over the whole hierarchy and call FinalizeConstruct on each class in the same order in which the constructors are called.
A class author is not required to define FinalizeConstruct in their class. While traversing the hierarchy, the FinalizeConstruct logic implemented in the model detects the presence of FinalizeConstruct in a class using good old SFINAE and calls it only if it is there. The basic rule: a user's FinalizeConstruct must not be virtual! Otherwise you get confusion when assembling entities from ready-made cubes.
The presence of FinalizeConstruct in a class is detected by this code:
template <typename T>
class HasFinalizeConstruct
{
private:
  typedef char (&No)[1];
  typedef char (&Yes)[10];

  template <typename U, void (U::*)()>
  struct CheckMtd
  {
    typedef Yes Type;
  };

  template <typename U>
  static typename CheckMtd<U, &U::FinalizeConstruct>::Type Check(U const *);
  static No Check(...);

public:
  enum { Has = sizeof(Check(static_cast<T const *>(0))) == sizeof(Yes) };
};
All the logic for calling FinalizeConstruct
is built on partial template specializations and on walking the hierarchy through the "stashed" tuples of base-class types. The standard library now ships tools for working with types, so whether a class is an implementation class can be determined with std::is_base_of instead of a hand-written check, and std::tuple can be used instead of Alexandrescu-style type lists.

namespace Boss
{
  namespace Private
  {
    template <bool HasFinalize>
    struct CallFinalizeConstruct
    {
      template <typename ObjType>
      static void Call(ObjType *obj)
      {
        obj->FinalizeConstruct();
        SetConstructedFlag(obj, true);
      }
    };
    template <>
    struct CallFinalizeConstruct<false>
    {
      template <typename ObjType>
      static void Call(ObjType *obj)
      {
        SetConstructedFlag(obj, true);
      }
    };

    template
    <
      typename T,
      bool IsCoClass = std::is_base_of<CoClassAdditive, T>::value
    >
    struct FinalizeConstruct
    {
      template <typename ObjType>
      static void Construct(ObjType *)
      {
      }
    };

    template <typename BaseEntities, int Index>
    struct FinalizeConstructIter
    {
      template <typename ObjType>
      static void Construct(ObjType *obj)
      {
        typedef typename std::tuple_element<Index, BaseEntities>::type CurType;
        FinalizeConstructIter<BaseEntities, Index - 1>::Construct(obj);
        FinalizeConstruct<CurType>::Construct(static_cast<CurType *>(obj));
      }
    };
    template <typename BaseEntities>
    struct FinalizeConstructIter<BaseEntities, -1>
    {
      template <typename ObjType>
      static void Construct(ObjType *)
      {
      }
    };

    template <typename T>
    struct FinalizeConstruct<T, true>
    {
      template <typename ObjType>
      static void Construct(ObjType *obj)
      {
        typedef typename T::BaseEntities BaseEntities;
        enum { BaseEntityCount = std::tuple_size<BaseEntities>::value - 1 };
        FinalizeConstructIter<BaseEntities, BaseEntityCount>::Construct(obj);
        CallFinalizeConstruct<HasFinalizeConstruct<T>::Has>::Call(obj);
      }
    };
  }
}
The analogue of constructors is ready, but what about an analogue of destructors? We cannot do without one. The model implements traversal of the class hierarchy in destructor order, looking in each implementation class, again through SFINAE, for a BeforeRelease method and calling it if it exists. The logic for BeforeRelease mirrors that of FinalizeConstruct, just in reverse order.
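Roughly what this looks like from the class author's side (a sketch: ISomeIface and the resource functions are invented, and, per the rule above, neither hook is virtual):

int  OpenSomeResource();                 // assumed to exist elsewhere, for illustration only
void CloseSomeResource(int handle);      // assumed to exist elsewhere, for illustration only

class Connection
  : public Boss::CoClass<ISomeIface>     // ISomeIface stands in for whatever interface is implemented
{
public:
  // ... ISomeIface's own methods implemented here as usual ...

  // Called by the model after the object is fully built; may throw to abort construction.
  void FinalizeConstruct()
  {
    Handle = OpenSomeResource();
  }

  // Called by the model before destruction, in reverse (destructor-like) order,
  // and only if this level's FinalizeConstruct has succeeded.
  void BeforeRelease()
  {
    CloseSomeResource(Handle);
  }

private:
  int Handle = 0;                        // placeholder resource handle
};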
Now there is a way to finish setting up the object after it has been fully created and to release something before it is destroyed. But a constructor can also report a problem by throwing an exception. The same behavior is implemented in this model: any FinalizeConstruct in the hierarchy may throw, the rest of the FinalizeConstruct chain will then no longer be called, and for the classes in the hierarchy whose FinalizeConstruct has already completed, BeforeRelease will still be called. A complete analogue of C++ constructors and destructors is obtained. BeforeRelease is called from the implementation of the Release method, and when the hierarchy is traversed it is called only for those classes whose FinalizeConstruct completed successfully; success is tracked by the Constructed flag stored in CoClass (remember it?).
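Since the BeforeRelease machinery mirrors FinalizeConstruct, here is a compressed sketch of how its call side could look. This is an illustration in the spirit of the code above, not a verbatim copy of the model's source; GetConstructedFlag is an assumed counterpart of SetConstructedFlag.
namespace Boss
{
  namespace Private
  {
    // Detection of BeforeRelease is done exactly like HasFinalizeConstruct,
    // only for a method named BeforeRelease (sketch).
    template <bool HasMtd>
    struct CallBeforeRelease
    {
      template <typename ObjType>
      static void Call(ObjType *obj)
      {
        // Call BeforeRelease only if FinalizeConstruct succeeded for this class.
        if (GetConstructedFlag(obj))  // assumed counterpart of SetConstructedFlag
          obj->BeforeRelease();
      }
    };
    template <>
    struct CallBeforeRelease<false>
    {
      template <typename ObjType>
      static void Call(ObjType *)
      {
      }
    };
    // BeforeRelease<T>::Release then walks the BaseEntities tuple in the order
    // opposite to FinalizeConstruct<T>::Construct: first the class itself,
    // then its base entities.
  }
}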
It remains to implement the QueryInterface logic, which by and large does not differ much from the hierarchy traversal described above. While walking the hierarchy, whenever an implementation class is encountered, its stashed list of base entities is taken and traversed recursively in search of the desired interface. There is one addition: since interfaces may themselves be multiply inherited from other interfaces, when an interface is encountered during the search, the desired interface is looked for in the same way as for an implementation class, only with the stashed list of that one interface's base interfaces.
namespace Boss
{
  namespace Private
  {
    template <typename T, bool IsCoClass>
    struct QueryInterface;
    template <typename BaseInterfaces, int I>
    struct QueryInterfacesListIter
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        typedef typename std::tuple_element<I, BaseInterfaces>::type CurType;
        if (ifaceId == InterfaceTraits<CurType>::Id)
        {
          *iface = static_cast<CurType *>(obj);
          return Status::Ok;
        }
        return QueryInterfacesListIter<BaseInterfaces, I - 1>::Query(obj, ifaceId, iface) == Status::Ok ?
          Status::Ok : QueryInterface<CurType, false>::Query(obj, ifaceId, iface);
      }
    };
    template <typename BaseInterfaces>
    struct QueryInterfacesListIter<BaseInterfaces, -1>
    {
      template <typename ObjType>
      static RetCode Query(ObjType *, InterfaceId, Ptr *)
      {
        return Status::InterfaceNotFound;
      }
    };
    template <typename T>
    struct QueryFromInterfacesList
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        typedef typename T::BaseInterfaces BaseInterfaces;
        enum { BaseInterfaceCount = std::tuple_size<BaseInterfaces>::value - 1 };
        return QueryInterfacesListIter<BaseInterfaces, BaseInterfaceCount>::Query(obj, ifaceId, iface);
      }
    };
    template <>
    struct QueryFromInterfacesList<IBase>
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        if (ifaceId == InterfaceTraits<IBase>::Id)
        {
          *iface = static_cast<IBase *>(obj);
          return Status::Ok;
        }
        return Status::InterfaceNotFound;
      }
    };
    template
    <
      typename T,
      bool IsCoClass = std::is_base_of<CoClassBase, T>::value
    >
    struct QueryInterface
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        if (ifaceId == InterfaceTraits<T>::Id)
        {
          *iface = static_cast<T *>(obj);
          return Status::Ok;
        }
        return QueryFromInterfacesList<T>::Query(static_cast<T *>(obj), ifaceId, iface);
      }
    };
    template <typename BaseEntities, int I>
    struct QueryInterfaceIter
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        typedef typename std::tuple_element<I, BaseEntities>::type CurType;
        return QueryInterface<CurType>::Query(static_cast<CurType *>(obj), ifaceId, iface) == Status::Ok ?
          Status::Ok : QueryInterfaceIter<BaseEntities, I - 1>::Query(obj, ifaceId, iface);
      }
    };
    template <typename BaseEntities>
    struct QueryInterfaceIter<BaseEntities, -1>
    {
      template <typename ObjType>
      static RetCode Query(ObjType *, InterfaceId, Ptr *)
      {
        return Status::InterfaceNotFound;
      }
    };
    template <typename T>
    struct QueryInterface<T, true>
    {
      template <typename ObjType>
      static RetCode Query(ObjType *obj, InterfaceId ifaceId, Ptr *iface)
      {
        typedef typename T::BaseEntities BaseEntities;
        enum { BaseEntityCount = std::tuple_size<BaseEntities>::value - 1 };
        return QueryInterfaceIter<BaseEntities, BaseEntityCount>::Query(static_cast<T *>(obj), ifaceId, iface);
      }
    };
  }
}
The implementation of Boss::IBase is a leaf in the inheritance hierarchy of user implementations and other auxiliary classes. To rule out inheriting from this implementation, the final keyword is used, and to explicitly rule out copying and moving objects of this type, everything that is not needed is marked as deleted right in the class interface.
namespace Boss
{
  template <typename T>
  class Base final
    : public T
  {
  public:
    Base(Base const &) = delete;
    Base const & operator = (Base const &) = delete;
    Base(Base &&) = delete;
    Base const & operator = (Base &&) = delete;
    template <typename ... Args>
    static RefObjPtr<Base> Create(Args const & ... args)
    {
      Private::ModuleCounter::ScopedLock Lock;
      RefObjPtr<Base> NewInst(new Base(args ...));
      Private::FinalizeConstruct<T>::Construct(NewInst.Get());
      return std::move(NewInst);
    }
  private:
    std::atomic<UInt> Counter;
    template <typename ... Args>
    Base(Args const & ... args)
      : T(args ...)
      , Counter(0)
    {
      Private::ModuleCounter::AddRef();
    }
    virtual ~Base()
    {
      Private::ModuleCounter::Release();
    }
    // IBase
    virtual UInt BOSS_CALL AddRef()
    {
      return Counter.fetch_add(1, std::memory_order_relaxed) + 1;
    }
    virtual UInt BOSS_CALL Release()
    {
      // The decrement needs release semantics so that the acquire fence below
      // synchronizes with it before the object is deleted.
      UInt CurValue = Counter.fetch_sub(1, std::memory_order_release);
      if (CurValue == 1)
      {
        Private::BeforeRelease<T>::Release(static_cast<T *>(this));
        std::atomic_thread_fence(std::memory_order_acquire);
        delete this;
      }
      return CurValue - 1;
    }
    virtual RetCode BOSS_CALL QueryInterface(InterfaceId ifaceId, Ptr *iface)
    {
      RetCode Ret = Private::QueryInterface<T>::Query(static_cast<T *>(this), ifaceId, iface);
      if (Ret == Status::Ok)
        AddRef();
      return Ret;
    }
  };
}
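A quick usage sketch; MyCoClass here is an illustrative implementation class built on CoClass, not something from the model:
// MyCoClass is an illustrative user implementation class built on CoClass.
auto Obj = Boss::Base<MyCoClass>::Create();
// By this point the whole FinalizeConstruct chain has already run; the object's
// lifetime is managed through AddRef/Release behind the returned smart pointer.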
The core is ready! Everything that is most complex and interesting has been described. From here on everything will be much simpler and smoother, without puzzles.
Plugins
In this part, we will focus on the organization of plugins. In the current context, plug-ins should be understood as dynamic libraries (.so / .dll), which host implementation classes of interfaces (components) and a small set of functions for accessing objects of these implementation classes.
This part of the article is, in my opinion, the simplest, since there is no "template programming" or other abuse of the compiler here; it is just the creation of a set of interfaces and implementations for organizing the plug-in system.
For components to "live" in their homes (plug-ins) within a single state called the process, not much is needed:
- Registry of plugins / components / services
- Class factory
- Loader
Service registry - a place to store all information about the service:
- Service ID
- List of implementation classes contained in the plugin
- The path to the module (so / dll) in the case of plugins that live in the same process
- Some information for loading Proxy/Stubs and setting up the communication channel between client and server; this is a small step toward crossing process boundaries
Based on this information, the class factory will be able to load the necessary plug-ins and create interface implementation objects.
The role of the loader is to load the component registry, load the class factory and configure it to work together with the service registry. After that, all object-creation calls go only to the factory, and the user gets an abstraction: he does not have to care in which module his object lives or how it is created. When requesting the creation of a new object, the user operates only with the identifiers of implementation classes.
The service registry supplies an interface with just one method, which is enough to get the necessary information for the class factory.
namespace Boss
{
struct IServiceRegistry
: public Inherit<IBase>
{
BOSS_DECLARE_IFACEID("Boss.IServiceRegistry")
virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const = 0;
};
}
But the service registry implementation class itself can supply several interfaces. What was all this started for? To make composite components.
Service Registry Implementation Class
That is, in addition the implementation provides an interface for manipulating the registry (IServiceRegistryCtrl) and for loading and saving it (ISerializable).
namespace Boss
{
class ServiceRegistry
: public CoClass
<
Service::Id::ServiceRegistry,
IServiceRegistry,
IServiceRegistryCtrl,
ISerializable
>
{
public:
ServiceRegistry();
virtual ~ServiceRegistry();
private:
// IServiceRegistry
virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const;
// IServiceRegistryCtrl
virtual RetCode BOSS_CALL AddService(IServiceInfo *service);
virtual RetCode BOSS_CALL DelService(ServiceId serviceId);
// ISerializable
virtual RetCode BOSS_CALL Load(IIStream *stream);
virtual RetCode BOSS_CALL Save(IOStream *stream);
// ...
};
}
Class Factory Implementation
It also supplies several interfaces: a primary one (IClassFactory), which all clients will use to create objects, and a secondary one (IClassFactoryCtrl), which the loader uses to point the factory at the registry.
namespace Boss
{
class ClassFactory
: public CoClass
<
Service::Id::ClassFactory,
IClassFactory,
IClassFactoryCtrl
>
{
public:
// IClassFactory
virtual RetCode BOSS_CALL CreateObject(ClassId clsId, IBase **inst);
// IClassFactoryCtrl
virtual RetCode BOSS_CALL SetRegistry(IServiceRegistry *registry);
// ...
};
}
The loader code is quite simple, but unfortunately C++11 still barely acknowledges the platform (OS): multithreading made it into the standard, but there is still nothing like dynamic libraries in it. So module loading will use code that depends on the operating system, hidden deep inside, of course. pImpl would come to mind here, but since the course has been set on abandoning static libraries, it is done a bit differently: the implementation for each OS lives in its own header file, plus an interface file that decides what to include based on the __linux__ and _WIN32 macros.
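For illustration, here is a minimal sketch of such an OS-dependent dynamic-library holder. This is not the actual DllHolder from the model, just a demonstration of selecting the implementation with preprocessor macros; error handling is reduced to exceptions.
#include <stdexcept>
#include <string>
#ifdef _WIN32
  #include <windows.h>
#else
  #include <dlfcn.h>
#endif
// A simplified dynamic-library holder: loads a module in the constructor,
// resolves symbols by name, unloads the module in the destructor.
class SimpleDllHolder
{
public:
  SimpleDllHolder(SimpleDllHolder const &) = delete;
  SimpleDllHolder& operator = (SimpleDllHolder const &) = delete;
  explicit SimpleDllHolder(std::string const &path)
  {
#ifdef _WIN32
    Handle = ::LoadLibraryA(path.c_str());
#else
    Handle = ::dlopen(path.c_str(), RTLD_NOW | RTLD_GLOBAL);
#endif
    if (!Handle)
      throw std::runtime_error("Failed to load module: " + path);
  }
  ~SimpleDllHolder()
  {
#ifdef _WIN32
    ::FreeLibrary(static_cast<HMODULE>(Handle));
#else
    ::dlclose(Handle);
#endif
  }
  void* GetSymbol(std::string const &name) const
  {
#ifdef _WIN32
    return reinterpret_cast<void *>(
        ::GetProcAddress(static_cast<HMODULE>(Handle), name.c_str()));
#else
    return ::dlsym(Handle, name.c_str());
#endif
  }
private:
  void *Handle;
};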
A small example of the use of services within the framework of the plugin model living in one process:
#include <iostream>
#include "plugin/loader.h"
#include "plugin/module.h"
int main()
{
try
{
Boss::Loader Ldr("Registry.xml", "./libservice_registry.so", "./libclass_factory.so");
Boss::RefObjQIPtr<Boss::IBase> Inst;
Inst = Ldr.CreateObject<Boss::IBase>(Boss::Crc32("MyClass"));
}
catch (std::exception const &e)
{
std::cerr << e.what() << std::endl;
}
return 0;
}
As noted at the beginning of the section, everything is very simple; it just took writing a certain amount of auxiliary code.
Examples
The best example is a real task, not an artificially invented construction that demonstrates this or that capability to the maximum.
Above, when describing the core, a rather large example was given that tried to show as much as possible of the available flexibility in assembling entities from ready-made implementations and adding a new interface. But that example, even though it reflects the capabilities of the model, is contrived and does not look very friendly. Therefore, as examples, we can consider the implementations of the components the plug-in part itself needs, namely the service registry and the class factory. Although they are part of the plug-in model, they are the same kind of plug-ins as those the user can develop for his own needs.
Once again, here is the implementation class of the service registry.
Service Registry Implementation
namespace Boss
{
class ServiceRegistry
: public CoClass
<
Service::Id::ServiceRegistry,
IServiceRegistry,
IServiceRegistryCtrl,
ISerializable
>
{
public:
ServiceRegistry();
virtual ~ServiceRegistry();
private:
// IServiceRegistry
virtual RetCode BOSS_CALL GetServiceInfo(ClassId clsId, IServiceInfo **info) const;
// IServiceRegistryCtrl
virtual RetCode BOSS_CALL AddService(IServiceInfo *service);
virtual RetCode BOSS_CALL DelService(ServiceId serviceId);
// ISerializable
virtual RetCode BOSS_CALL Load(IIStream *stream);
virtual RetCode BOSS_CALL Save(IOStream *stream);
// ...
};
}
Now I’ll try to describe what is happening here ...
To create a class implementing one or more interfaces, you need to derive it from the CoClass template class. This class takes as parameters the identifier of the implementation class (which can then be used when creating the object through the class factory) and a list of inherited interfaces or ready-made interface implementations. If you look at the service registry implementation class given above, you can see its identifier (Service::Id::ServiceRegistry) followed by the interfaces implemented in this class (IServiceRegistry, the service registry interface the class factory will use; IServiceRegistryCtrl, the registry management interface; ISerializable, since the registry has to be saved somewhere and loaded from somewhere, and this interface makes exactly that possible).
The component is ready. It remains to somehow publish it, i.e. give access to it from outside the module in which it is located.
To do this, use the macro BOSS_DECLARE_MODULE_ENTRY_POINT
The macro is passed a string, from which a CRC32 is calculated and used as the module identifier, and a list of implementation classes exported by this module. After that the component and its module are ready (a module may contain several components), and it can be used once registered in the registry (exception: for normal use of the model, the service registry and the class factory themselves do not need to be registered).
#include "service_registry.h"
#include "plugin/module.h"
namespace
{
typedef std::tuple
<
Boss::ServiceRegistry
>
ExportedCoClasses;
}
BOSS_DECLARE_MODULE_ENTRY_POINT("ServiceRegistry", ExportedCoClasses)
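If a module exported more than one component, the tuple would simply list all of them; for example (the second class name is made up for illustration):
typedef std::tuple
<
  Boss::ServiceRegistry,
  Boss::SomeOtherCoClass   // illustrative: any other CoClass-based implementation
>
ExportedCoClasses;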
Another similar example: the implementation of a class factory, which has also been given above.
Class factory
namespace Boss
{
class ClassFactory
: public CoClass
<
Service::Id::ClassFactory,
IClassFactory,
IClassFactoryCtrl
>
{
public:
// IClassFactory
virtual RetCode BOSS_CALL CreateObject(ClassId clsId, IBase **inst);
// IClassFactoryCtrl
virtual RetCode BOSS_CALL SetRegistry(IServiceRegistry *registry);
// ...
};
}
A completely similar example: again inheritance from CoClass, an identifier and a list of implemented interfaces. The class factory lives in a separate module, so it has its own entry point, just like the one for the service registry.
#include "class_factory.h"
#include "plugin/module.h"
namespace
{
typedef std::tuple
<
Boss::ClassFactory
>
ExportedCoClasses;
}
BOSS_DECLARE_MODULE_ENTRY_POINT("ClassFactory", ExportedCoClasses)
These were simple component implementations in which each component inherited only a list of interfaces and implemented their methods, and that was all; there was no inheritance of ready-made implementations. If you look at the service registry interface again, you will see that it works with IServiceInfo, through which all the information is passed. IServiceInfo can carry only general information about a service, but there is also specific information. Initially I wanted plugins that live not only in dynamic libraries but also scattered across processes in their own executable modules, hence the different kinds of information: for plugins in dynamic libraries there is only the addition of the path to the library, while for plugins in separate executable modules there is a lot of extra information about Proxy/Stubs, transport, and so on (unfortunately, I did not finish that part and cut out its rudiments so as not to litter the code with unfinished things). Now here is an example in which components inherit not only from interfaces but also from implementations.
Implementing Service Information
The ServiceInfo implementation may seem a bit complicated. Why is there a template here? This is a subtlety of the data-structure implementation that occurred to me, not a tribute to the component model / plug-in system. To make the reason for such an implementation a little clearer, the corresponding interfaces are given a bit further below as well.
#ifndef __BOSS_PLUGIN_SERVICE_INFO_H__
#define __BOSS_PLUGIN_SERVICE_INFO_H__
#include "../core/base.h"
#include "../core/error_codes.h"
#include "../core/ref_obj_ptr.h"
#include "../common/enum.h"
#include "../common/entity_id.h"
#include "../common/string.h"
#include "iservice_info.h"
#include <string>
#include <type_traits>
namespace Boss
{
namespace Private
{
template <typename TIface, bool = std::is_base_of<IServiceInfo, TIface>::value>
class ServiceInfo;
template <typename TIface>
class ServiceInfo<TIface, true>
: public CoClass<Service::Id::ServiceInfo, TIface>
{
public:
// …
void SetServiceId(ServiceId srvId)
{
// ...
}
void AddCoClassId(ClassId clsId)
{
// ...
}
void AddCoClassIds(RefObjPtr<IEnum> coClassIds)
{
// ...
}
private:
// …
// IServiceInfo
virtual RetCode BOSS_CALL GetServiceId(ServiceId *serviceId) const
{
// ...
}
virtual RetCode BOSS_CALL GetClassIds(IEnum **ids) const
{
// ...
}
};
}
class LocalServiceInfo
: public CoClass<Service::Id::LocalServiceInfo, Private::ServiceInfo<ILocalServiceInfo>>
{
public:
void SetModulePath(std::string const &path)
{
// ...
}
void SetModulePath(RefObjPtr<IString> path)
{
// ...
}
private:
// ...
// ILocalServiceInfo
virtual RetCode BOSS_CALL GetModulePath(IString **path) const
{
// ...
}
};
class RemoteServiceInfo
: public CoClass<Service::Id::RemoteServiceInfo, Private::ServiceInfo<IRemoteServiceInfo>>
{
public:
void SetProps(RefObjPtr<IPropertyBag> props)
{
// ...
}
private:
// ...
// IRemoteServiceInfo
virtual RetCode BOSS_CALL GetProperties(IPropertyBag **props) const
{
// ...
}
};
}
#endif // !__BOSS_PLUGIN_SERVICE_INFO_H__
Service Information Interface
A somewhat more digestible example of inheriting both interfaces and implementations was given in the core description with the fancy Face123456 class, without any templates :)
#ifndef __BOSS_PLUGIN_ISERVICE_INFO_H__
#define __BOSS_PLUGIN_ISERVICE_INFO_H__
#include "../core/ibase.h"
#include "../common/ienum.h"
#include "../common/istring.h"
#include "../common/iproperty_bag.h"
namespace Boss
{
struct IServiceInfo
: public Inherit<IBase>
{
BOSS_DECLARE_IFACEID("Boss.IServiceInfo")
virtual RetCode BOSS_CALL GetServiceId(ServiceId *serviceId) const = 0;
virtual RetCode BOSS_CALL GetClassIds(IEnum **ids) const = 0;
};
struct ILocalServiceInfo
: public Inherit<IServiceInfo>
{
BOSS_DECLARE_IFACEID("Boss.ILocalServiceInfo")
virtual RetCode BOSS_CALL GetModulePath(IString **path) const = 0;
};
struct IRemoteServiceInfo
: public Inherit<IServiceInfo>
{
BOSS_DECLARE_IFACEID("Boss.IRemoteServiceInfo")
virtual RetCode BOSS_CALL GetProperties(IPropertyBag **props) const = 0;
};
}
#endif // !__BOSS_PLUGIN_ISERVICE_INFO_H__
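As a small illustration of querying one interface from another (using the RefObjQIPtr helper that also appears in the loader below; the helper function itself and its name are made up for the example):
// Illustrative helper (not part of the model): given service information
// obtained from the registry, try to extract the plugin module path.
Boss::RefObjPtr<Boss::IString> GetPluginPath(Boss::RefObjPtr<Boss::IServiceInfo> info)
{
  Boss::RefObjPtr<Boss::IString> Path;
  // RefObjQIPtr performs QueryInterface under the hood.
  Boss::RefObjQIPtr<Boss::ILocalServiceInfo> LocalInfo(info);
  if (LocalInfo.Get())
    LocalInfo->GetModulePath(Path.GetPPtr());
  return Path;
}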
How to implement components should be clearer now; everything is simple. How to request and work with interfaces, and how to query one from another, can be seen in the example of the loader, which loads the service registry, obtains the necessary interfaces from it, configures the registry, loads the class factory and sets it up to work with the registry. After that, of course, all the client's work goes through the class factory, and the client no longer has to deal with modules directly; otherwise, what would all this abstraction have been started for?
Loader
#ifndef __BOSS_PLUGIN_LOADER_H__
#define __BOSS_PLUGIN_LOADER_H__
#include "iservice_registry.h"
#include "iclass_factory.h"
#include "iclass_factory_ctrl.h"
#include "module_holder.h"
#include "service_ids.h"
#include "core/exceptions.h"
#include "common/file_stream.h"
#include "common/iserializable.h"
#include <string>
#include <utility>
namespace Boss
{
BOSS_DECLARE_RUNTIME_EXCEPTION(Loader)
class Loader final
{
public:
Loader(Loader const &) = delete;
Loader& operator = (Loader const &) = delete;
Loader(std::string const &registryFilePath,
std::string const &srvRegModulePath,
std::string const &clsFactoryModulePath)
: SrvRegistry([&] ()
{
auto SrvRegModule(ModuleHolder(std::move(DllHolder(srvRegModulePath))));
auto SrvReg = SrvRegModule.CreateObject<IServiceRegistry>(Service::Id::ServiceRegistry);
RefObjQIPtr<ISerializable> Serializable(SrvReg);
if (!Serializable.Get())
throw LoaderException("Failed to get ISerializable interface from Registry object.");
if (Serializable->Load(Base<FileIStream>::Create(registryFilePath).Get()) != Status::Ok)
throw LoaderException("Failed to load Registry.");
return std::move(std::make_pair(std::move(SrvRegModule), std::move(SrvReg)));
} ())
, ClsFactory([&] ()
{
auto ClassFactoryModule(ModuleHolder(std::move(DllHolder(clsFactoryModulePath))));
auto NewClsFactory = ClassFactoryModule.CreateObject<IClassFactory>(Service::Id::ClassFactory);
RefObjQIPtr<IClassFactoryCtrl> Ctrl(NewClsFactory);
if (!Ctrl.Get())
throw LoaderException("Failed to get IClassFactoryCtrl interface from ClassFactory object.");
if (Ctrl->SetRegistry(SrvRegistry.second.Get()) != Status::Ok)
throw LoaderException("Failed to set Registry into ClassFactory.");
return std::move(std::make_pair(std::move(ClassFactoryModule), std::move(NewClsFactory)));
} ())
{
}
template <typename T>
RefObjPtr<T> CreateObject(ClassId clsId)
{
RefObjPtr<IBase> NewInst;
if (ClsFactory.second->CreateObject(clsId, NewInst.GetPPtr()) != Status::Ok)
throw LoaderException("Failed to create object.");
RefObjQIPtr<T> Ret(NewInst);
if (!Ret.Get())
throw LoaderException("Interface not found.");
return Ret;
}
~Loader()
{
ClsFactory.second.Release();
SrvRegistry.second.Release();
}
private:
std::pair<ModuleHolder, RefObjPtr<IServiceRegistry>> SrvRegistry;
std::pair<ModuleHolder, RefObjPtr<IClassFactory>> ClsFactory;
};
}
#endif // !__BOSS_PLUGIN_LOADER_H__
In addition to the examples above, the examples from the article describing the previous C++03 implementation are still relevant. The only difference is how identifiers are handled: in the new model you do not need to add a separate macro to the implementation class, which could simply be forgotten. If you forget about the identifier in the new model, the compiler will remind you, since it is now a template parameter.
Conclusion
There was a big idea, but only about two thirds of it was realized:
- Fully implemented core
- Implemented basic services for the existence of a plug-in system
- Plugins that were supposed to live in other processes were never finished, and their rudiments were cut out so as not to clutter the code
Somehow it turned out that the most interesting part for me is building the skeleton, the backbone of a system, while building up muscle and fat (developing all kinds of useful and pseudo-useful extras) is work that can sometimes be done very quickly thanks to good knowledge of the system. As a result, a very complete (sometimes excessively complete) core came out (spherical horses in a vacuum have always attracted me). There is a small amount of muscle (the main components of the plug-in system: the service registry and the class factory) so that the model can somehow exist. But this implementation turned out to be completely fat-free: there is nothing auxiliary in it. The skeleton of the system was assembled, a little muscle was built up, and it was given a kick so that it would somehow get moving, and it became material for Habr and this article.
A project must either be released or stopped as early as possible, before it eats up all the resources and quietly disappears from the spotlight. Because of this reasoning, because the article's material turned out to be rather large and possibly complicated in places, and because I could not find time for it for more than six months, the part about plugins living in other processes is still missing. Soon, for example, C++14 may appear, and then the material of this article devoted to C++11 may already become irrelevant. It may well be that the unrealized part will come out as a separate post... That material would be based on the article "Proxy/Stubs with my own hands", which I wanted to rework with the C++11 standard, add interface marshaling and put transport underneath it (implement one of the IPC mechanisms).
Both unfortunately and fortunately, the reader does not always take away from a work the entire plan the author put into it. Scattered through the source code there are a few stubs for the future, such as RemoteServiceInfo and others, which can safely be skipped when studying the material.
Source code is available on github. It has a minimal build script. It can serve as a source of examples and ideas for your projects.
Thank you all for your attention!