We found a formula for a painless transition to .NET Core

    Fifty cups of coffee will be enough for the whole thing.


    In addition to the rule of thumb outlined above, we are publishing a short note on the points that need close attention so that nothing breaks in production or along the way. The note was written hot on the heels of releasing a mobile service that had fully migrated to .NET Core (the story began here). We managed to pull this off without the customer noticing and almost without stopping the main development process.


    Below you will find a ready-made action plan, a fairly comprehensive test checklist, and this picture to set the mood:



    So, step by step:


    1. Plan a long sprint with big features and/or regression testing


    Since the code is being fundamentally rewritten, the service needs as much time as possible to settle in, so that you have time to fix all the flaws in the test environment.


    2. Rewrite the code to .NET Core at the start of the sprint


    Why is it important not to start this work ahead of time? Because you would have to maintain two code branches, one on the new .NET and one on the old, and at any moment an urgent bug can land or you may need to demo new features, which means making changes to the old stable branch. To worry about this as little as possible, it is better to keep the transition period short.


    By the way, while working with the code we quickly concluded that it is more convenient to keep two local copies of the repository. It is simpler than constantly switching between two heavyweight branches.


    • If possible, rewrite WCF interfaces in the services you use to WebAPI

    The .NET Core implementation of the WCF client is still far from ideal. Even though the old pain points have more or less been fixed, newer versions still require workarounds (1, 2).
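
    To make the idea concrete, here is a rough sketch (the contract, types and route are made up, not taken from our code) of how a single WCF operation can be re-exposed as an ASP.NET Core WebAPI endpoint:

```csharp
using Microsoft.AspNetCore.Mvc;

// The WCF contract looked roughly like this:
//   [ServiceContract]
//   public interface IOrderService
//   {
//       [OperationContract]
//       OrderDto GetOrder(int id);
//   }
// The same operation exposed over plain HTTP as a WebAPI endpoint:

public class OrderDto
{
    public int Id { get; set; }
    public string Status { get; set; }
}

[Route("api/[controller]")]
public class OrdersController : Controller
{
    [HttpGet("{id}")]
    public IActionResult Get(int id)
    {
        // Look the order up however the real service does; faked here for brevity.
        var order = id > 0 ? new OrderDto { Id = id, Status = "processed" } : null;

        // Keep the status-code semantics the mobile client already expects.
        return order == null ? (IActionResult)NotFound() : Ok(order);
    }
}
```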


    For the record: on .NET Core 2.0, the stable working WCF version is 4.4.2 from the MyGet repository. It, for example, has no problems with premature timeouts.
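
    If you do stay on the WCF client for a while, it is worth pinning the client timeouts explicitly rather than relying on whatever defaults a given package version ships with. A minimal sketch, assuming the System.ServiceModel.Http package and a hypothetical IOrderService contract:

```csharp
using System;
using System.ServiceModel;
using System.Threading.Tasks;

// A contract shared with the server side (hypothetical).
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    Task<string> GetOrderStatusAsync(int id);
}

public static class OrderServiceClient
{
    public static IOrderService Create(string url)
    {
        var binding = new BasicHttpBinding
        {
            // Pin the timeouts explicitly so behavior does not depend on the
            // defaults of whichever System.ServiceModel version is referenced.
            OpenTimeout = TimeSpan.FromSeconds(10),
            SendTimeout = TimeSpan.FromSeconds(30),
            ReceiveTimeout = TimeSpan.FromSeconds(30),
            MaxReceivedMessageSize = 10 * 1024 * 1024
        };

        var factory = new ChannelFactory<IOrderService>(binding, new EndpointAddress(url));
        return factory.CreateChannel();
    }
}
```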

    When we started the migration, we were on .NET Core 2.0. Meanwhile, Microsoft had moved on to .NET Core 2.1. If you want to admire how the folks in Redmond have optimized the platform, read about the progress the Bing search engine made after upgrading to the new version (spoiler: latency dropped by 34%!).


    We upgraded to .NET Core 2.1 and WCF 4.5.3 as well, and did not forget to point the Dockerfile at a fresh microsoft/dotnet base image: 2.1-aspnetcore-runtime. What a surprise to see the image size drop from 1.4 GB to 0.5 GB (we are talking about a Windows image, in case you were wondering).


    3. Deploy to test and demo


    We have two environments at our disposal. We left the old version on demo as a reference, and rolled the new service out to the test environment to be broken in by developers and testers.


    There was some confusion because developers usually work with the test environment while testers mostly work with demo. When the old service needed a fix, the situation was exactly the reverse of normal. So a quick discussion and a cheat sheet on where to look for what proved useful.


    • Configure IIS

    To run a .NET Core service in IIS, you need to install the ASP.NET Core module that ships with the runtime.


    Switch the application pool to .NET CLR version = No Managed Code.


    In the standard web.config in the solution, it is important not to forget to set the desired requestTimeout and to disable the WebDAV module if you have DELETE methods.


    Next, there are two options for publishing the service to IIS:


    • if you do an MSDeploy sync, you additionally need the -enableRule:AppOffline switch
    • if you do a file publish, then right before publishing you put an app_offline.htm file into the service directory and delete it after the publish

    Either way, the worker process is stopped and the executable files are unlocked. Otherwise you will get an error saying the files cannot be overwritten.


    We dropped NLog in favor of Serilog and lost automatic log compression along the way: Serilog simply does not have that feature. You can get by with standard Windows tools and enable NTFS compression in the properties of the log directory.
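
    For reference, a minimal sketch of a Serilog rolling-file setup (the path and retention values are illustrative, not our production configuration); note that there is no compression option here, which is why NTFS compression saves the day:

```csharp
using Serilog;

public static class LoggingSetup
{
    public static void Configure()
    {
        Log.Logger = new LoggerConfiguration()
            .MinimumLevel.Information()
            .WriteTo.File(
                path: @"C:\logs\service\log-.txt",     // illustrative path
                rollingInterval: RollingInterval.Day,  // a new file per day
                retainedFileCountLimit: 31)            // keep roughly a month of logs
            .CreateLogger();

        // There is no "compress rolled files" option here, so the log directory
        // itself gets marked for NTFS compression instead.
    }
}
```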

    4. Test


    Here is the most condensed checklist of the most fragile spots:


    • check returned statuses: Bad Request, Unauthorized, Not Modified, Not Found, everything the API can produce
    • check request logging for all status codes
    • map out the external dependencies; as a rule, everything you need is listed in appsettings
      • exercise the methods that rely on them
      • check logging of outgoing requests
    • check that the appsettings parameters actually take effect; try changing them on the fly
    • check HTTP caching for positive and negative status codes (see the first sketch after this list)
      • ETag header
      • Cache-Control header
    • check long requests and timeouts
    • check requests with an empty response
    • check DELETE methods (whether WebDAV is disabled or not)
    • check raw content
      • upload and download single/multiple files
      • upload files larger than the size limit
      • Content-Disposition header
    • check all the other headers; it is pretty easy to collect them all from the code
    • check conditional code paths that depend on the environment, like if (env.IsDevelopment()) (see the second sketch after this list)
    • check disconnects on both the client and the server side
    • compare against the reference swagger.json; this helps spot differences in the transferred fields
      Our mobile application uses a code generator that works off the swagger.json API description, so it was important to keep the difference from the original description minimal. The latest version of Swashbuckle.AspNetCore changed the interface and the generated swagger.json quite a bit, so we had to roll back to the older Swashbuckle.AspNetCore 1.2.0 and add a couple of filters.
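
    The HTTP caching items are the ones that bite most often, so here is a rough sketch (a hypothetical controller with a deliberately naive ETag) of the positive and negative caching branches that part of the checklist is about:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

[Route("api/[controller]")]
public class CatalogController : Controller
{
    [HttpGet]
    public IActionResult Get()
    {
        var payload = "{\"items\":[]}";                  // stand-in for the real content
        var etag = "\"" + payload.GetHashCode() + "\"";  // deliberately naive ETag

        // Negative branch: the client already holds the current version.
        if (Request.Headers["If-None-Match"] == etag)
            return StatusCode(StatusCodes.Status304NotModified);

        // Positive branch: return content together with the caching headers.
        Response.Headers["ETag"] = etag;
        Response.Headers["Cache-Control"] = "private, max-age=60";
        return Content(payload, "application/json");
    }
}
```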
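
    And for the environment-dependent code item, a minimal Startup sketch using the standard ASP.NET Core 2.x API (the error-handler route is illustrative):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services) => services.AddMvc();

    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        if (env.IsDevelopment())
        {
            // Development-only behavior: detailed error pages, verbose diagnostics.
            app.UseDeveloperExceptionPage();
        }
        else
        {
            // Production behavior: a terse error handler the mobile client can parse.
            app.UseExceptionHandler("/error");
        }

        app.UseMvc();
    }
}
```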

    5. Take over production while sipping coffee


    In our case, the production environment consists of two nodes: an active one and a passive one.
    To switch to the new service unnoticed, we duplicated the application pool and the site on each node, and wrote a script that switches the binding between the old site and the new one.


    That way, in case of an emergency, we could quickly switch back to the old version.
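
    The switching script itself is not shown here; purely to illustrate the idea, here is a rough C# sketch on top of Microsoft.Web.Administration (the site names and binding are made up, and a few lines of PowerShell would do the job just as well):

```csharp
using System.Linq;
using Microsoft.Web.Administration;

public static class BindingSwitcher
{
    // Moves a binding from one IIS site to another on the local node,
    // e.g. Switch("MobileApi", "MobileApiCore", "*:80:api.example.com").
    public static void Switch(string fromSite, string toSite, string bindingInfo)
    {
        using (var iis = new ServerManager())
        {
            var oldSite = iis.Sites[fromSite];
            var newSite = iis.Sites[toSite];

            var binding = oldSite.Bindings.First(b => b.BindingInformation == bindingInfo);
            oldSite.Bindings.Remove(binding);
            newSite.Bindings.Add(bindingInfo, "http");

            iis.CommitChanges();
        }
    }
}
```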


    Then, after deploying to production, we spent a week making sure the service was healthy and gave the green light for the mobile application release. Life on the project safely returned to its usual course.


    Interim results


    Our service is now fully ready to be packed into a Docker container for delivery to a cluster. We are ready to deploy both to Kubernetes and to Service Fabric.


    Preparations for presenting the new infrastructure to the customer are now in full swing. We will tell you about our progress in the next installment, stay tuned ;)

