Impressions of using RavenDB

    A little over a year ago, I faced the task of choosing a NoSQL solution for a project. There were a number of specific requirements:
    1. Extensibility (triggers, stored procedures);
    2. Full-text search;
    3. A client library for .NET;
    4. POCO support;
    5. The ability to deploy on the Windows platform;
    6. Lucene.net support is desirable;
    7. Transaction support is desirable;
    8. Support for asynchronous requests is desirable;
    9. Map/reduce is desirable;
    10. Documentation and a developer community are a welcome bonus.


    After a long search and comparison, the choice fell on RavenDB. I was a little surprised at how few publications are devoted to this product. After a year of working with this NoSQL solution, I decided to share my impressions. I see no point in retelling the documentation: everything is described very well on the project website, in the official groups, and on the developer's blog (Ayende's blog).


    What I like


    Easy to deploy. You can run Raven embedded in your application, as a Windows service, or as a console application.
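
    To illustrate, a minimal client setup for both modes; the URL, database name and data directory below are placeholders, and the embedded mode assumes the Raven.Client.Embedded package is referenced:

        using Raven.Client;
        using Raven.Client.Document;
        using Raven.Client.Embedded;

        public static class Stores
        {
            // Talks to a Raven server running as a Windows service or console app.
            public static IDocumentStore CreateRemote()
            {
                return new DocumentStore
                {
                    Url = "http://localhost:8080",   // placeholder address
                    DefaultDatabase = "Demo"         // placeholder database name
                }.Initialize();
            }

            // Runs the database in-process, inside the application itself.
            public static IDocumentStore CreateEmbedded()
            {
                return new EmbeddableDocumentStore
                {
                    DataDirectory = "Data"           // placeholder folder
                }.Initialize();
            }
        }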

    I have never lost data. Given how much abuse and experimentation the database has endured, I consider that a real merit.

    Upgrading to a new version is easy. You just stop Raven and drop the new build into the server’s working folder; on the application side everything comes down to updating the NuGet package.

    Extensibility. There are many ways to plug your own functionality into the server side, and the code can be written in .NET: everything from triggers that react to document and index changes to custom extensions that enrich the server API.
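
    As a rough illustration of a server-side extension, here is a sketch of a put trigger that stamps every document's metadata with a modification date. It is written against the 2.x plugin model (the exact base classes and signatures may differ between versions), and the compiled assembly is dropped into the server's Plugins folder:

        using System;
        using Raven.Abstractions.Data;
        using Raven.Database.Plugins;
        using Raven.Json.Linq;

        // Sketch of a put trigger: stamps every stored document's metadata.
        public class LastModifiedTrigger : AbstractPutTrigger
        {
            public override void OnPut(string key, RavenJObject document,
                RavenJObject metadata, TransactionInformation transactionInformation)
            {
                // The metadata key is arbitrary; ISO 8601 keeps the value readable.
                metadata["Last-Modified-By-Trigger"] =
                    new RavenJValue(DateTime.UtcNow.ToString("o"));
            }
        }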

    Lucene.net indexes. A good, well-known engine. You can plug in your own analyzer classes, and working with full-text search is simple and straightforward.
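
    A sketch of an index that marks a field for full-text search; the document type, the field names and the analyzer type name are examples only:

        using System.Linq;
        using Raven.Abstractions.Indexing;
        using Raven.Client.Indexes;

        public class Article
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public string Body { get; set; }
        }

        // Index with an analyzed (full-text) field and an explicitly chosen analyzer.
        public class Articles_ByText : AbstractIndexCreationTask<Article>
        {
            public Articles_ByText()
            {
                Map = articles => from a in articles
                                  select new { a.Title, a.Body };

                Indexes.Add(x => x.Body, FieldIndexing.Analyzed);

                // The analyzer is referenced by type name; a custom Lucene.net
                // analyzer class can be specified in the same way.
                Analyzers.Add(x => x.Body, "Lucene.Net.Analysis.Standard.StandardAnalyzer");
            }
        }

        // Usage (full-text query through the LINQ Search extension):
        //   var hits = session.Query<Article, Articles_ByText>()
        //                     .Search(x => x.Body, "raven lucene")
        //                     .ToList();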

    Debug mode. When the server runs in this mode, you can see every processed request and its execution time in the console, which is quite convenient.

    Built-in caching. Quite a convenient feature: you can specify in code which requests may be answered from the cache.
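
    One form of this is aggressive caching, scoped to a block of code; inside the block, matching requests are served from the client cache for the given duration. The five minutes, the User class and the document id below are arbitrary examples:

        using System;
        using Raven.Client;

        public class User
        {
            public string Id { get; set; }
            public string Name { get; set; }
        }

        public static class CachingExample
        {
            public static User LoadWithAggressiveCaching(IDocumentStore store)
            {
                // Inside this block, repeated identical requests are answered
                // from the cache instead of hitting the server every time.
                using (store.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
                using (var session = store.OpenSession())
                {
                    return session.Load<User>("users/1");
                }
            }
        }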

    Quick community response to questions. I ran into bugs a couple of times, and fixes appeared almost immediately.

    The ability to query using both LINQ and the Lucene syntax.
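
    Both forms side by side, reusing the hypothetical Article class and Articles_ByText index from the sketch above:

        using System.Linq;
        using Raven.Client;

        public static class QueryExamples
        {
            public static void Run(IDocumentStore store)
            {
                using (var session = store.OpenSession())
                {
                    // Strongly typed LINQ query against the index.
                    var byLinq = session.Query<Article, Articles_ByText>()
                                        .Where(a => a.Title == "RavenDB")
                                        .ToList();

                    // The same condition expressed in raw Lucene syntax.
                    var byLucene = session.Advanced
                                          .LuceneQuery<Article>("Articles/ByText")
                                          .Where("Title:RavenDB")
                                          .ToList();
                }
            }
        }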

    Scripted Patch API. A fairly powerful tool that lets you send document-modification requests as JavaScript; you can reshape a document any way you like on the server.
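
    A sketch of a scripted patch; the document id, the fields and the script parameter are placeholders:

        using System.Collections.Generic;
        using Raven.Abstractions.Data;
        using Raven.Client;

        public static class PatchExample
        {
            public static void IncrementViews(IDocumentStore store)
            {
                // The server runs this JavaScript against the stored document,
                // so the client never has to load it.
                store.DatabaseCommands.Patch("articles/1", new ScriptedPatchRequest
                {
                    Script = "this.Views = (this.Views || 0) + 1; this.Tags.push(newTag);",
                    Values = new Dictionary<string, object> { { "newTag", "nosql" } }
                });
            }
        }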

    Index definitions in code. It is very convenient to keep the index definitions in a separate .NET assembly. At application startup, the indexes are checked for existence and for whether their definitions are current; on a mismatch, the indexes are recreated and rebuilt.
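
    The startup registration itself is one call, shown here against the hypothetical index class from the earlier sketch:

        using Raven.Client;
        using Raven.Client.Indexes;

        public static class Startup
        {
            public static void RegisterIndexes(IDocumentStore store)
            {
                // Creates missing indexes and updates changed definitions for every
                // AbstractIndexCreationTask found in the given assembly.
                IndexCreation.CreateIndexes(typeof(Articles_ByText).Assembly, store);
            }
        }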

    Transaction support. Everything is quite simple and clear: the standard TransactionScope is used.
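
    A minimal sketch, reusing the User class from the caching example; the changes become durable only when the scope completes:

        using System.Transactions;
        using Raven.Client;

        public static class TransactionExample
        {
            public static void RenameUser(IDocumentStore store)
            {
                using (var tx = new TransactionScope())
                using (var session = store.OpenSession())
                {
                    var user = session.Load<User>("users/1");   // placeholder id
                    user.Name = "Renamed inside a transaction";
                    session.SaveChanges();

                    tx.Complete();   // without this, the changes are rolled back
                }
            }
        }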

    Working with POCOs is very convenient. If necessary, you can simply add new fields to the document class; when old documents are loaded, the missing fields are filled with default values.
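
    For example (the Product class and its Rating field are made up for illustration):

        // A plain document class that later grew a new property. Documents stored
        // before the change still load fine; Rating simply comes back as 0.
        public class Product
        {
            public string Id { get; set; }
            public string Name { get; set; }

            // Added later; old documents do not contain this field.
            public int Rating { get; set; }
        }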

    The ability to set index processing priorities. If some indexes in a set must be updated online while others can wait, you can set their modes: Normal, where changes are indexed immediately; Idle, where they are indexed when there is no other work; Disabled, where the index is not updated at all; and Abandoned, where indexing happens only after a long period without other work.
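
    A sketch of demoting a heavy index from the client, assuming the IndexingPriority enum and the SetIndexPriority command of the 2.x-era client (the exact call may differ between versions, and the index name is a placeholder):

        using Raven.Abstractions.Data;
        using Raven.Client;

        public static class IndexPriorityExample
        {
            public static void DemoteHeavyIndex(IDocumentStore store)
            {
                // Let this index catch up only when the server has nothing better to do.
                store.DatabaseCommands.SetIndexPriority("Articles/ByText", IndexingPriority.Idle);
            }
        }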

    What I don't like


    The admin interface. It is written in Silverlight, which I would not count as a plus. On the whole it works stably, but small glitches keep appearing, which is a bit annoying. For example, an index may be shown as stale, yet if you reopen the studio, or open it in another browser in parallel, it shows the index as up to date.

    Automatic updating of index definitions. With little data and few indexes everything works fine, but as the amount of stored information grows, the process becomes more and more mysterious. Sometimes unchanged indexes get rebuilt, and usually the most complex ones, which forces you to sit and wait, and that is rather infuriating.

    There is support for asynchronous access, but only one asynchronous request can be executed at a time within a session; that is, you cannot run several requests in parallel over one connection.
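
    For reference, a sketch of the asynchronous session; the calls have to be awaited one after another (the User class and the ids are reused from the examples above):

        using System.Threading.Tasks;
        using Raven.Client;

        public static class AsyncExample
        {
            public static async Task<string> LoadTwoNamesAsync(IDocumentStore store)
            {
                using (var session = store.OpenAsyncSession())
                {
                    // Issued sequentially: the session does not allow several
                    // asynchronous requests to be in flight at the same time.
                    var first = await session.LoadAsync<User>("users/1");
                    var second = await session.LoadAsync<User>("users/2");
                    return first.Name + ", " + second.Name;
                }
            }
        }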

    Poor statistics. When you need to analyze how an index behaves, you have practically nothing to go on. There is the map/reduce time for a single batch, but that is not enough.

    HTTP REST API. The only way to connect is over HTTP, with data transferred as JSON, which affects performance. I would like the option of a more efficient transport.

    The Patch API does not support transactions. That is a pity, and sometimes it is sorely missed.

    Weird


    Indexing speed. How the indexing threads operate is not very clear. Sometimes they eat up nearly all the resources, and at other moments, even when there is work to do, everything seems to stall. I would like more insight into what they are doing.

    Query execution speed. Also not entirely clear. I will not compare it with other NoSQL solutions, but sometimes it feels like it could be faster.

    Conclusion


    Overall, the product is quite interesting, and it is a pity it gets so little attention. Raven is constantly evolving, and new functionality appears regularly. There are some drawbacks, but at the moment there are few NoSQL solutions for .NET, so if someone faces the choice of NoSQL for a .NET project, I would venture to recommend taking a closer look at RavenDB.
