WildData: Lightweight Data Access Framework

    Good day, %username%! I decided to write an article for this resource about data access in applications written in .NET, in particular in C#. I will try to lay out my thoughts, and what they eventually turned into, under the cut. Welcome!

    In this article, by DBMS I mean a relational DBMS. Looking ahead, I will say right away that the library (framework) presented here is not a replacement for Entity Framework: it does not depend on it and has nothing to do with it.

    Nonetheless, let's dwell on the aforementioned framework for a moment. One of its ideas is an attempt (a very successful one) to introduce an abstraction over data access, in other words, to avoid being tied to a particular DBMS.

    Each DBMS has its pros and cons: some offer certain features, others offer different ones; some do one thing well, others another, and so on. Let's imagine that, having weighed all the pros and cons, we chose a DBMS to implement some grandiose (or not so grandiose) project and decided to write it all using... ADO.NET.

    Pros of ADO.NET:

    1. full control over queries;
    2. relative simplicity: create a connection, create a transaction (optional), create a command, add the query text and parameters, execute it, read the data (optional);
    3. support for almost any DBMS;
    4. close interaction with the DBMS (for example, support for "non-standard" data types such as coordinates, JSON, etc.).

    Cons of ADO.NET (and the prerequisites for this project):

    1. for each model, you have to write the same kind of code over and over for reading, adding, and updating a record in the database; in other words, mapping an object to a record in a table, view, etc. and, conversely, mapping a record back to an object (a typical snippet is shown right after this list);
    2. there is no abstraction layer like in Java (although DbConnection / DbCommand and other base classes exist, concrete types such as SqlConnection / SqlCommand are often used in practice);
    3. there is no universal support for working with batches of records (add, update, add or update, delete).
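
    To illustrate the first point, here is roughly what such per-model boilerplate looks like in plain ADO.NET. This is only an illustration: the Person model, table, and column names are made up for the example.

        using System.Collections.Generic;
        using System.Data.SqlClient;

        // A made-up model; every new model needs its own copy of this kind of code.
        public class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static class PersonDataAccess
        {
            // Manual mapping: open a connection, build a command, read,
            // and copy each column into the object by hand.
            public static List<Person> ReadAll(string connectionString)
            {
                var people = new List<Person>();
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT Id, Name FROM Person", connection))
                {
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            people.Add(new Person
                            {
                                Id = reader.GetInt32(0),
                                Name = reader.IsDBNull(1) ? null : reader.GetString(1)
                            });
                        }
                    }
                }
                return people;
            }
        }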

    The reader has most likely already guessed that we are going to think about how to get rid of the shortcomings listed above. Let's talk about the key points of implementing the project.

    Let's start in order.

    1. What can we do to write this code only once? First of all, note that the skeleton for reading one or more objects stays the same regardless of what kind of object it is or what fields it contains. This means we need a universal function for reading a single object. The same is true for adding and updating records.

      How should such a function be written? The first thing that comes to mind is Reflection. Fair enough, but Reflection has one significant drawback: it is slow. When reading / adding / updating / deleting a single object the difference is negligible, but with a large number of objects the overhead becomes noticeable.

      Expression trees and the ability to compile them on the fly come to our aid. The idea is that the body of the function is generated, compiled, and a reference to it is stored as a delegate. This needs to be done only once, during initialization (a sketch of this technique is given after this list).

      What should these functions work with? With three entities:

      • the object itself (the model);
      • the data-reading object (for example, SqlDataReader);
      • the collection of parameters (e.g. SqlParameterCollection).

      To keep a single point of generation for these functions, the following wrapper interfaces were introduced: IDbParameterCollectionWrapper and IReaderWrapper (see the link to the project repository below). These interfaces have to be implemented separately for each DBMS. Looking ahead: the framework is aimed primarily at speed, so in some places deferred ("lazy") initialization is used. The framework also provides several auxiliary attributes for greater flexibility (for example, computed fields, required fields, etc.).

    2. The entire shared part of the framework lives in a separate common project. What the user sees is mostly interfaces, and it is strongly recommended to work only through those interfaces (a generic illustration of this idea is also given after this list).

    3. Batch operations on records have not been implemented yet, but that is merely a "technical matter."
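
    To make point 1 more concrete, below is a minimal sketch of the expression-tree technique. This is not the framework's actual code: the names (Materializer, Read) are made up, and DBNull / Nullable handling and column-existence checks are omitted for brevity. The body of the "read one object" function is built from expression trees, compiled into a delegate exactly once, and then reused for every row.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Linq.Expressions;
        using System.Reflection;

        public static class Materializer<T> where T : new()
        {
            // Compiled lazily, exactly once per model type, and then reused.
            private static readonly Lazy<Func<IDataReader, T>> ReadOne =
                new Lazy<Func<IDataReader, T>>(Compile);

            public static T Read(IDataReader reader) => ReadOne.Value(reader);

            private static Func<IDataReader, T> Compile()
            {
                ParameterExpression reader = Expression.Parameter(typeof(IDataReader), "reader");
                ParameterExpression item = Expression.Variable(typeof(T), "item");
                var body = new List<Expression> { Expression.Assign(item, Expression.New(typeof(T))) };

                // reader["ColumnName"] via the IDataRecord string indexer.
                MethodInfo indexer = typeof(IDataRecord).GetMethod("get_Item", new[] { typeof(string) });
                MethodInfo changeType = typeof(Convert).GetMethod("ChangeType", new[] { typeof(object), typeof(Type) });

                foreach (PropertyInfo property in typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance))
                {
                    if (!property.CanWrite) continue;

                    // item.Property = (PropertyType)Convert.ChangeType(reader["Property"], typeof(PropertyType));
                    Expression raw = Expression.Call(reader, indexer, Expression.Constant(property.Name));
                    Expression converted = Expression.Convert(
                        Expression.Call(changeType, raw, Expression.Constant(property.PropertyType, typeof(Type))),
                        property.PropertyType);
                    body.Add(Expression.Assign(Expression.Property(item, property), converted));
                }

                body.Add(item); // the value of the whole block
                BlockExpression block = Expression.Block(new[] { item }, body);
                return Expression.Lambda<Func<IDataReader, T>>(block, reader).Compile();
            }
        }

    In the framework itself the concrete reader and parameter collection are hidden behind IReaderWrapper and IDbParameterCollectionWrapper, so the generation code stays the same regardless of the DBMS.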
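
    And a generic illustration of point 2 (all names here are hypothetical and are not taken from WildData): application code depends only on an interface from the common project, while a DBMS-specific project supplies the implementation, so the storage can be swapped without touching the calling code.

        using System;

        // Common project: only interfaces and models are visible to the user.
        public class Account
        {
            public int Id { get; set; }
            public decimal Balance { get; set; }
        }

        public interface IAccountRepository
        {
            Account GetById(int id);
            void Save(Account account);
        }

        // DBMS-specific project (e.g. a PostgreSQL implementation) provides the concrete class.
        public sealed class PostgreSqlAccountRepository : IAccountRepository
        {
            public Account GetById(int id) { throw new NotImplementedException(); }
            public void Save(Account account) { throw new NotImplementedException(); }
        }

        // Application code works only through the interface.
        public class AccountService
        {
            private readonly IAccountRepository _repository;

            public AccountService(IAccountRepository repository)
            {
                _repository = repository;
            }

            public Account Load(int id) => _repository.GetById(id);
        }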

    The project can already be tried out (see the links below). There is LINQ support! The project is in alpha, so the code is not perfect yet.

    What is planned next:

    • more tests;
    • support for other databases: primarily SQL Server, MySQL;
    • Microsoft.AspNet.Identity support.

    Links:

    » WildData Project on GitHub.
    » NuGet package WildData.
    » NuGet package WildData (PostgreSQL implementation).
    » A very simple example of using the framework.

    Please don't judge too harshly: this is my first article on Habrahabr. Thanks for your attention!

    P.S. For any questions, please use private messages or the comments.
