“Roslyn is still a very raw technology” - an interview with Sergey Shkredov, head of the .NET department at JetBrains

    Hi, this is "Without Slides" again. I am Alexei Fedorov, and this time my guest was Sergey Shkredov, head of the entire .NET department at JetBrains. Sergey and I talked about:




    • the latest ReSharper releases;
    • the new subscription and licensing scheme;
    • the difficult relationship with Microsoft;
    • runtime and language development;
    • how the release of Roslyn changed the situation;
    • working with user feedback to improve the product;
    • development plans for other .NET-stack products;
    • the importance of intra-industry communication and exchange of experience;
    • developing products for C++;
    • a bit about ReSharper C++, which even Microsoft developers should get hooked on;
    • how users will feel the changes;
    • how ReSharper will develop further.


    Here's a video



    Below is a text version of the interview.



    About the new ReSharper releases and the update rate


    - Sergey, you recently had a big release. Two, in fact.

    - Yes.

    - Can you tell us a little about what's new in ReSharper? Very briefly. And why were there two releases at once?

    - Over the previous year we made three major releases: ReSharper 9.1, 9.2 and 10.0. This year we set ourselves a pace of a release every four months, and in every release we hold nothing back: we ship everything as soon as it is ready. This fits better with how Visual Studio evolves, where something new is happening all the time. From that point of view, release 10.0 did not differ much from release 9.2 in the number of features. The only thing was that this release had a hard constraint: together with the release of all our products we changed the licensing system of all our IDEs, so the release date was fixed long in advance. Because of that fixed date, there ended up being two releases.

    - That is, it was a bugfix?

    - Yes, a bugfix release that came out two weeks after 10.0.

    - Have you fixed many bugs?

    - A lot, around a hundred. Mostly users complained about Unit Testing and poor code resolution in UWP applications.

    - And now users have calmed down? Everything is fine?

    - Now, yes. In fact, you should look not at what people write on Twitter but at the update rate, the percentage of users who switched to the new version, and there it is twice as good as it was for ReSharper 9. That is, in its first three weeks ReSharper 10 attracted twice as many users as the previous version did a year earlier.

    - And a lot of people, according to your data, use the old versions?

    - I don't think so. By the time the next version is released, the previous one is used by about 30% of people, and 70% are already on the new one and ready to move to the next, newest version.

    - That is, there are now 30% who use ReSharper 8?

    - Yes.

    - I see. Have you tried talking to them to understand why they don't want to upgrade?

    - I think this is simply the normal distribution of developer activity, and it has little to do with any internal product reasons.

    About the tricky scheme of subscriptions and licenses


    - The old license: was it per period or per version? Could I, having that license, upgrade for free?

    - About two or three years ago we began selling subscriptions that allowed updating to any version during the year. You buy, and you get every version we release throughout that year, whatever we call them. That freed us from versioning, which is what allowed us to put out more frequent releases without fearing that we would not get a sufficient upgrade rate on a major version. For commercial users we still had perpetual licenses without an update subscription; now such licenses are gone. I can, of course, comment on the new subscription scheme.

    - Yes, please tell us. Everything changed again; you wrote and talked about this a lot. Could you briefly summarize what it came to in the end? I think our viewers will find that useful.

    - An IDE is a product that develops along with the market, along with technologies, together with our users; it all moves as one big avalanche. As a company, we would of course like all users to be able to use the latest version of our product, and licenses should not be an obstacle on that path.

    Accordingly, once we understood this, we introduced subscriptions. Subscriptions suit everyone, but a subscription used to be something you bought for yourself on top: you bought a perpetual license and a subscription to it. We did not arrive at the current model immediately. In effect, we split the price into a fee for the year of updates ahead and a fee for the perpetual license.

    Our first version of the subscription scheme was this: you buy a license for a year, and after a year everything is taken away from you. Naturally, we met a lot of negative feedback about this. I think it is a great credit to our marketers and our CEOs that amid all this feedback we were able to understand what exactly worried users: the lack of a guarantee that they can continue to use the product.

    For example, there were companies that wrote to us explicitly that their budget is approved, say, by the United States Congress. And if Congress does not approve the budget for updating to a new version, they would simply be left without the product for the next year. That is, I think, the most telling story, and it helped us understand that a perpetual license had to stay.

    Now you buy a year of subscription from us. During that year you can decide what you want to update to; you update and use it, and then pay after a year by renewing the subscription. Thus, when you buy a version, you buy it as of a given moment in time, without a built-in right to the next one; but you can, deferred, buy yourself the updates that come out during that year.

    - If you don’t buy, then what happens?

    - Then, after the year, you keep using the version you bought.

    - That is, there is a rollback to the version that was current at the time of purchase?

    - Yes. You can see it as a rollback, or you can see it as deciding during the year, "I want the new one", in which case you pay after a year when you renew your subscription.

    - Listen, this scheme is not exactly simple. Did you borrow it from someone or come up with it yourselves? Does anyone else on the market work on this principle?

    - Adobe. But their story is simpler, of course. We listen to users a lot, and because of that our solutions sometimes end up more complicated.

    - Okay, so it is a rather unique story.

    - I can give even more examples. Two days ago Microsoft switched to the same scheme.

    - Interesting.

    - Absolutely the same. And there was not a single big post about it on the Internet.

    - How did that happen? Why was there so much discussion in your case?

    - We are a company that works more with the community. And even when we introduced subscriptions optionally for commercial users and optionally for our staffers, we received a lot of feedback.

    - So you can say that your product management lives on feedback?

    - Yes, to a large extent.

    About a difficult relationship with Microsoft


    - And what is your relationship with Microsoft in general? I myself come from the Java world, where the platform has been more or less open for a long time, for ten years already. Talking with the JetBrains folks, I realized that they are, in principle, well aware of what happens inside the Java platform itself and can look into any complex code at any moment, figure it out, and even patch something here and there.

    - There is no need to communicate with the vendor.

    - Exactly. And in your .NET world, how does this happen?

    - The way we work and the way we plan our releases have changed a lot over the last few years, because Microsoft changed its attitude toward the ecosystem, toward working with partners, and toward the company's openness. Three years ago the situation was this: there were private Visual Studio builds issued only to partners; there was a partner program through which we communicated closely with Microsoft, and we had the right to submit higher-priority bugs against Visual Studio.

    That is, Microsoft purposefully supported selected makers of Visual Studio extensions, a hundred or two hundred companies, and there were all sorts of private programs for this. Now most of the feedback, most of the changes and builds can be obtained completely legitimately through open channels, and most of the teams we communicate with develop in open source. Visual Studio is less and less a closed product that requires knocking on Microsoft's door to deliver feedback, change something, or learn something new.

    - So it turns out you started doing many things yourselves?

    - The Visual Studio Industry Partner (VSIP) program, one could say, has ceased to be a big benefit for us. We go to events where you can talk with the team, and that is almost the only thing we still want from the VSIP program.

    - Why did they change their approach so much, do you think?

    - They began to lose developers. Microsoft had long been a vendor whose developer tools covered all needs in all scenarios. With the arrival of mobile platforms everything changed: Microsoft's toolkit was no longer enough for Microsoft's customers to write their programs for Android and for iOS. That is the first reason that triggered this shift.

    The second factor that influenced the openness is probably Azure. To pull as many developers as possible into your cloud, not only .NET but Java, Python, PHP, Ruby, all developers, you need to give them tools. There is a clear policy: Microsoft now provides developer tools essentially free of charge. There are cases when they are not free (for example, Visual Studio), and cases when they are far from free (for example, Visual Studio Enterprise), although $6,000 is not the $14,000 it cost a year ago.

    - A year ago, and the ruble was different too. But did the arrival of Microsoft's new CEO a year ago affect the situation, say?

    - I think he influenced it greatly, and in that sense he continued the course: he opened further opportunities to make developer tools more open, and to move toward Microsoft being a cloud provider and strengthening that position, developing an ecosystem built on Microsoft tools, an ecosystem of development for all platforms.

    - There was another very interesting story, related to cross-platform. There is Mono, which runs on Linux, and there is something from Microsoft in this space that will somehow compete with it. Do you know anything about this?

    - With Mono, the story is that this is exactly the same kind of hole in the Microsoft technology stack that I talked about. The situation is the same: a client comes to Microsoft with three thousand Visual Studio licenses and says, "We need to write an iOS program. I have been on your tools for 10 years now, what should I do?" He turns to a Microsoft consultant, and the consultant needs an answer. So Microsoft has a business task: to provide development for iOS.

    Accordingly, what are the options? There is Xamarin, a perfectly working thing; there is Apache Cordova; there is native C++ compilation for different devices. Those are the three tools they will develop to cover this scenario. That is, cross-platform development is this kind of plugging of holes with the help of an external partner.

    Usually at Microsoft it happens like this: first they plug holes with the help of an external partner, then they release the product themselves and take that place over. But for now I do not see such a trend; I do not see Microsoft trying to pull cross-platform development in-house. For now they are at the cooperation stage: the libraries written in Mono are being pulled in, the Core CLR, virtual machines, some elements of the framework...

    - So it is a mutual process?

    - Yes, right now it is very much a mutual process. I hope, of course, that at some point it gets unified.

    - They now have their own implementation of the .NET virtual machine on Linux. Pretty raw, right?

    - It has a chance to become a proper product.

    - But Microsoft talks about it as a ready ecosystem, as if it were a finished product: take it and use it. Apparently that is not entirely true?

    - To pull users onto it, it has to be positioned that way. I do not see users migrating from classic ASP.NET to ASP.NET on .NET Core; it is simply not ready yet. I think it will happen, but not now. Right now there are problems with porting your code over.

    The fact is that today there are significant problems on the way to porting your code to DNX, to .NET Core. They stem from the lack of a released version of the framework that you can target so that your libraries work both in classic ASP under the full .NET Framework and on the Core CLR. Microsoft has many versions of .NET: there is Silverlight in browsers, there is Windows Phone, there are desktop applications, there is the "full" .NET Framework.

    Now there is another stack, the Core CLR, and in principle, as a separate implementation, it is no different from the others. Microsoft has a solution for writing code for different stacks: a kind of hack that, for each combination of these platforms, can produce code that works on just those two or three of them.

    This is not a very workable scenario, because the number of such combinations grows, roughly speaking, exponentially. Microsoft is now actively working to get rid of this exponential growth and to make a single platform that you can target, so that code targeting this common platform works everywhere. Then there will be updated NuGet packages that you can use everywhere and compile for everything with the same libraries.
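    The combinatorics described here can be illustrated with a back-of-the-envelope sketch (illustrative Python; the platform names are examples for the sake of counting, not Microsoft's exact list):

```python
from itertools import combinations

# Hypothetical target platforms a library author might want to support.
platforms = ["net46", "win8", "wp81", "sl5", "netcore"]

# Without a common contract, each distinct subset of platforms is
# potentially its own build target: 2^n - 1 non-empty combinations.
profiles = sum(
    1
    for r in range(1, len(platforms) + 1)
    for _ in combinations(platforms, r)
)
print(profiles)  # 2**5 - 1 = 31 profiles for just five platforms

# With a single common platform that every stack implements, a library
# targets one contract, and the count collapses from 2^n - 1 to 1.
```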

    - They are doing it, but apparently not very fast, yes?

    - A lot of design decisions are changing right before our eyes. The current version is not final; Microsoft is already working on the next one.



    About runtime and language development


    - How much is .NET a legacy technology, and how much a living one? Let me explain my question. Up until a certain point, maybe until 2008, the language was developing very powerfully. I cannot speak about the runtime, I do not have the expertise there, but it seems to me the runtime is not moving forward much, while the language in that same period developed a great deal. With Java it is exactly the opposite: the language stagnated for a very long time while the runtime developed at a wild pace. Interestingly, in my view, nothing global has been happening with C# recently either; the changes were more noticeable before. What do you think?

    - That's right. I think the runtime lags far behind the JVM. The .NET virtual machine has a very poor garbage collector and a very weak JIT compiler. The result is slowly executing code in which you have to pile workaround upon workaround to avoid unnecessary allocations and to cope with the functions that are not inlined automatically. The code is not properly optimized automatically. Java does not have this problem.

    - And the language?

    - There was a period when the language did not develop much, before the release of C# 6. It was connected with the transition to the new Roslyn compiler, which was delayed for several years. By about two years, by the feel of it.

    - That is, the language stopped developing somewhere around version three or four?

    - Around the fifth. The fifth version came out, and then they started writing the new compiler. They wrote it long and painfully. Mostly they tried to achieve the same performance the native compiler had, while keeping all the architectural improvements.

    As a result, the goal was achieved for compilation: when you use C# 6 and C# 5 now, the difference in speed is not noticeable; against the background of the other build steps, compilation did not become longer.

    But in terms of support in the IDE, there is a massive failure. C# 6 support in Visual Studio 2015 is a complete fail. We cannot even edit ReSharper's own Visual Studio project in the 2015 release without ReSharper: it eats up all the memory, hangs, and that's it. That is the situation.

    - Yes, that is quite interesting. Especially in the early stages, when you saw the first builds, there must have been stormy emotions. How did you live with that?

    - Our hair stood on end.

    - Yes. Clearly, at some point you fixed all the major problems there and were somehow able to live with it at least tolerably?

    - We now support Visual Studio 2015 with our eyes closed, so to speak. We do not use it ourselves. We plan to move to it slowly; it got better in the update. We split ReSharper up so that pieces of it can be run, and we create smaller solutions so that all of this can be worked on.

    - IntelliJ IDEA, in terms of how it was perceived as a tool, made tremendous progress in 2007, right after the introduction of Core 2 Duo processors. There was a very strong performance jump compared to the Pentium D. Accordingly, IntelliJ IDEA moved from "IDEA is slow" to "Oh, IDEA is a great tool, now you can work!" IDEA started working faster simply because the hardware got sharply better, and that was enough; since then people have not particularly complained about IDEA's performance. Do I understand correctly that in the .NET stack IDE performance is still a headache?

    - Unfortunately, yes. We are in a rather difficult situation: about half of the performance bugs are caused by Visual Studio and the other half are caused by us, and very often we cannot tell one from the other. Maybe typing in ReSharper is slow because Roslyn is analyzing files in the background and allocating hundreds of megabytes per second, from which the GC simply dies?

    Microsoft made special optimizations that take into account how typing with IntelliSense interacts with their background code analyzer, as if you were inside their own code path. Accordingly, when we launch our own autocompletion and typing-assist activities, we have no influence on how the Roslyn background thread works.

    - Did they do that intentionally?

    - I think they were solving their own problem. Of course they were not thinking about us. That is natural.

    About how the release of Roslyn changed the situation


    - And how did the release of Roslyn change the situation? Is it a headache for you?

    - It became easier for us. The thing is, we now better understand the different types of projects in which Roslyn is used as the language service: we pull the compilation model, the files, and the references that go into compilation out of it. In previous versions of Visual Studio, when there was no Roslyn, this was a rather laborious part, tied to invoking the build and pulling the references directly out of Visual Studio; it was painful and full of bugs. Now it is a much more direct process: we use Roslyn to build the model of the modules and of how they interact with the compiler.

    - And how does Visual Studio interact with the compiler? Does the studio itself use the compiler to build its own code model?

    - Visual Studio, yes. It loads Roslyn in-process.

    - And in the old case?

    - In the old case it was the same, only in native C++ code. One could only guess what it was doing and where. But it did not affect the garbage collector in any way; we never saw it in profilers. It may have been there, but it was very fast.

    - Is this because Roslyn is still a raw technology? Or because it is somehow fundamentally misdesigned?

    - Oh, sure. Very raw. In terms of optimizing the data structures used in Roslyn for even the simplest Rename functionality, or for finding errors across the entire solution, these are straightforward algorithms that work naively, and of course they end up affecting performance.

    - So, in principle, is there a chance that Microsoft's developers will speed Roslyn up?

    - There is a chance, of course. But the problem at Microsoft may be that this need is not recognized at the proper level, and neither time nor money gets allocated for it.

    - I see. And the open source that is happening now, is it open source only in the sense of "I can look", or also "I can contribute"?

    - I have seen very few outside contributions. Right now it is basically the Core CLR: all the source code is published, and you can compile it or just look at it and read it. I have not heard of contributions being accepted en masse.

    - So, in principle, the chances that you will come and fix all their performance problems for them are also not great, apparently?

    - These performance problems often lie in the area of how the studio compiles with Roslyn. There is, of course, a chance that we will come and change something in Roslyn. We watch and analyze what is happening and how; we have an idea of the architecture and of the architectural problems.

    How ReSharper will develop further


    - So Roslyn builds its own model, its own tree, right? That, apparently, is something like a concrete syntax tree.

    - Yes. Very concrete: with the whitespace, with the comments... The concrete syntax, exactly as written in the editor. What the IDE uses is a very concrete syntax tree, by no means an abstract one.
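    The distinction can be sketched in a few lines (illustrative Python, not Roslyn's actual object model): in a concrete syntax tree every token keeps its surrounding trivia, so the tree prints back to the exact source text:

```python
from dataclasses import dataclass, field

@dataclass
class Token:
    leading: str   # trivia: whitespace and comments preceding the token
    text: str      # the token itself

@dataclass
class Node:
    kind: str
    children: list = field(default_factory=list)  # Tokens and/or Nodes

def to_source(item) -> str:
    """Round-trip the tree back to source text, trivia included."""
    if isinstance(item, Token):
        return item.leading + item.text
    return "".join(to_source(child) for child in item.children)

# "a +  /* note */ b" as a concrete tree: nothing is thrown away,
# unlike an AST, which would drop the spaces and the comment.
expr = Node("binary_expression", [
    Token("", "a"),
    Token(" ", "+"),
    Token("  /* note */ ", "b"),
])
assert to_source(expr) == "a +  /* note */ b"
```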

    - You build a code model too. Do you have a tree like that of your own?

    - Yes.

    - And Roslyn has its own, which the studio apparently uses; that is logical. So how do you live with this, and how will you go on living with it?

    - We now have several directions. The first is not to use Visual Studio at all. The second is to use Visual Studio but run ReSharper as a separate process. We have a vision, a design, a solution in which all of ReSharper's code models, all the syntax trees, indexes, caches, and everything related to the semantic model are kept in a separate process.

    - Kirill Skrygan and I discussed this; he said that ReSharper keeps running up against the memory limit of 32-bit Visual Studio. I told him that the obvious move then is to take ReSharper out of process, to which he replied that yes, it has to be done, but it is fraught with memory traffic.

    - Actually, the design solution is precisely to minimize that memory traffic. It works like this: we can treat the studio as a UI application, that is, do MVVM. You can consider the ReSharper backend a ViewModel for the studio, which is the View. If you look at the traffic between them, it is only the data sufficient to display the changes in the UI. You will never find massive data transfers at the UI level; it is always a couple of symbols, a couple of highlights...

    It all lives on the studio side, on the UI side. The data that needs to be sent to display the UI is scarce: sending thousands of objects is instant, and forwarding document changes while typing is instant too. On this idea you can build the code so that only the data sufficient to display the UI is synchronized.
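    The "only enough data to draw the UI" idea can be sketched roughly like this (illustrative Python; the field names and message format are invented for the example, this is not ReSharper's actual protocol):

```python
import json

# Backend-side ViewModel of one highlight: only what the View needs.
highlight = {"line": 42, "column": 7, "severity": "warning",
             "tooltip": "Type can be replaced with var"}

def diff(old: dict, new: dict) -> dict:
    """Ship only the properties that changed across the process boundary."""
    return {k: v for k, v in new.items() if old.get(k) != v}

# Background analysis recomputed the highlight; only the severity changed.
updated = dict(highlight, severity="suggestion")
message = diff(highlight, updated)

print(json.dumps(message))  # a few bytes on the wire, not a syntax tree
```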

    - How well is Visual Studio designed? How much will it allow you to do this?

    - This is exclusively a matter of our own code; our integration with the studio will not change in any way.

    - You described it all very beautifully. But will it work in reality? Will Visual Studio let you pull all that work out somewhere?

    - It is a question of how we write our UI elements. Let's take an example: showing the "light bulb". To show it today, PSI, syntax trees, documents, and the entire project model are used. If we left things as they are and sent those syntax trees over in their entirety, of course it would never fly. But in general, to draw a bulb we need an icon, text, and that's it.

    When we press Alt+Enter, we pass the item over as text and an icon; when we press "apply this bulb", we send a single command to the backend working out of process. All the data changes in syntax trees and documents happen out of process. As a result, all that needs to be returned to the frontend is the new cursor position and the changes to the documents that are open in editors. And that's it.
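    Written out as messages, that round trip might look like this (an illustrative Python sketch; the command and field names are made up for the example):

```python
# Frontend -> backend: the user chose an item from the Alt+Enter menu.
# Only an identifier and a position cross the boundary, never a tree.
request = {"command": "apply_quick_fix",
           "item_id": 3,                       # which menu entry was chosen
           "document": "Program.cs",
           "caret": {"line": 10, "column": 5}}

def apply_quick_fix(req: dict) -> dict:
    """Backend stub: all syntax-tree and document mutation happens here,
    out of process. The reply carries only what the editor must redraw."""
    return {"caret": {"line": 10, "column": 9},           # new cursor position
            "edits": [{"document": req["document"],
                       "range": [[10, 0], [10, 20]],
                       "new_text": "var answer = 42;"}]}  # changed text only

response = apply_quick_fix(request)
assert set(response) == {"caret", "edits"}  # UI-sized payload, nothing more
```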

    - So the task is to develop a protocol for exchanging data between ReSharper and what the user sees in the UI, with minimal traffic.

    - We already have groundwork for the protocol. The protocol is very interesting, reactive: we synchronize the same data structure working on both sides. This is a big change; you have to change the entire ReSharper source code.

    The change is that the ViewModels need to be rewritten so that they do not hold references to the semantic code models. It is a huge change, so it has to be done gradually. We will slowly make the product able to work both the old way and the new way, and lead the architecture to the point where the UI does not depend on the semantic model. This is, once again, dependency inversion.

    About how users will feel the change


    - How transparent will it be for users?

    - In the end, it should be the same User Experience.

    - The user should not feel that there is a different backend behind the whole thing?

    - Of course. At some point, we will replace the old with the new. And that’s it.

    - Usually when people make some kind of tool that works with code, they rewrite the compiler frontend and stop there. It turns out you are building a whole building at once, not just a frontend?

    - Of course. In ReSharper, the code that directly parses and resolves things is about 10% of the whole.

    - How much sense does all this fuss with Visual Studio make, given that your company has awesome experience and very successful demand building its own IDEs?

    - Visual Studio definitely makes sense; we are not going to get off it. It is a tool that provides development for all the platforms Microsoft needs. It changes every three months and supports Microsoft's new platforms. Repeating that work is not our priority at all.

    First of all, you have to understand that Visual Studio solves Microsoft's internal tasks. For example, the Universal Windows Platform came out, and for it you need to debug, run, profile, and configure everything for projects that will work on different devices... We are not going to replicate that.



    A bit about ReSharper C++, which even Microsoft developers should get hooked on


    - Do the Microsoft developers who make all this use ReSharper?

    - We do not disclose that information. It is clear that someone uses it, but we will not say how many.

    - So Microsoft developers have an interest in ReSharper working and developing. That is probably a very big help, if you have hooked them on your tool.

    - And now we want to hook them on ReSharper C++; that is our big goal.

    - Please tell us about this project.

    - We started writing ReSharper C++ three or four years ago and released it in the spring. We sell it as a separate product: the C++ language carved out on its own, without everything else. Of those who use ReSharper C++, approximately two thirds use it without ReSharper, and a third install ReSharper C++ as part of ReSharper Ultimate.

    - To what extent do the people who write C++ under Windows use Visual Studio specifically?

    - I think a great many people use Visual Studio to develop in C++, in completely different scenarios.

    - Is this specifically Managed C++, or any C++?

    - Managed C++ is an absolutely dead-end branch of Microsoft's technology development, which was designed to simplify the integration of managed code with unmanaged code.

    - Well, C++ had to be tied in somehow, so that there would be some...

    - Just to do the marshalling, to make the interaction easier. When you have a header described for C++, you can use it directly in a Managed C++ project, which is convenient. I see that most people who need interop now use either COM or implicit P/Invoke. Our own experience with Managed C++ is rather negative: there are bugs in the compiler, and so on.
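    For comparison, this flat-function style of interop looks much the same in any managed runtime: declare the native signature and let the runtime marshal the arguments. A sketch with Python's ctypes standing in for P/Invoke (assumes a Linux-style process where libc symbols are already loaded):

```python
import ctypes

# Load the symbols already linked into the current process (libc on
# Linux); roughly the analogue of [DllImport] resolving an entry point.
libc = ctypes.CDLL(None)

# Declare the native signature so arguments are marshalled correctly.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

# The call crosses from managed to native code and back.
assert libc.abs(-5) == 5
```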

    Returning to your question: people use Visual Studio not for Managed C++ but for native C++. That means development for various devices, cartographic applications, games, and so on; in general, everything that is written in C++.

    - Could you say that for C++ programming under Windows, Visual Studio is the main development tool?

    - Of course. And not only under Windows. People who program under Linux also very often sit in Visual Studio, and that's fine. It is simply a good editor.

    About developing other products for C++


    - I have a question about the development of your other products for C++, in particular CLion, which also supports C and C++; and there is also AppCode for Objective-C. How does ReSharper live in parallel with these IDEs? Is there anything in common between these products? Do you exchange experience with their developers?

    - We focus on two things: first, the C++ language standard, and second, the Microsoft compiler. CLion and AppCode have slightly different priorities. We do exchange experience with them; quite a few design solutions flow seamlessly from ReSharper into CLion. By the time CLion was started, the experience of writing ReSharper C++ already existed.

    In general, C++ support in AppCode began in an absolutely enchanting way. There was Objective-C, and at some point we realized that there are header files used for both Objective-C and C++. That is, somewhere under the defines, large constructs are written in C++, and the parser broke on them simply because it did not understand them. So C++ had to be supported somehow in order to support Objective-C. That was the beginning of C++ support in AppCode.

    - Do the CLion developers and the AppCode developers communicate with each other and share experience?

    - They, of course, communicate with each other.

    - Do your three tools have much in common?

    - Nothing at all. ReSharper C++ was written much later, and it was written by the person who had previously done the C++ support in AppCode. Therefore ReSharper C++ was designed better from the start; it is more architecturally correct.

    - And what does "more correct" mean in practice? That fewer crutches have to be inserted for everything to work?

    - Yes. In the end there are simply fewer coding errors and fewer bugs in the IDE support.

    - And how well does the Microsoft compiler actually support the C++ language standard?

    - It is already better, and they are constantly improving it.

    - The reality is that the implementation does not always comply with the standard.

    - From this point of view, C# is no different: it is also a language whose implementation often does not match the standard. Now, with the source code opened up, it has of course become much easier: when something does not behave as written in the specification, we look at how it is written in the compiler code, and that's it.

    - And for you, which is more important: conformance to the specification, or to the implementation?

    - We look at the use case every time. In the places where the specification does not match the implementation, most likely, if we show an error where there is none, the user will simply rewrite the code a different way, and that will be perfectly fine. That is, it will really be some kind of rare, difficult case.

    - And how did ReSharper appear at JetBrains?

    - It appeared before my time: I joined ReSharper only in 2007, already around version 3 or 4. ReSharper appeared when Eclipse appeared. The company needed to diversify its activities, since there was a serious threat to the main product: free platforms and free IDEs for Java are a serious competitor.

    - That is, it was a business decision?

    - I think so.

    - And now, do you and your JetBrains colleagues feel that you have won the war against Eclipse?

    - Yes, we do.

    On the importance of intra-industry communication and exchange of experience


    - What about ReSharper? There are various people, including in Russia, who make development tools for C#: for example, the guys from DevExpress make CodeRush, which works on top of Roslyn. Do you communicate with them and exchange experience, or not?

    - I think DevExpress has become somewhat disillusioned with developing tools. CodeRush is more of a side project: by the late 2000s it was already clear that ReSharper dominated.

    - The question is how they solve the problems you mentioned earlier. Do you talk to them or not?

    - They are moving to Roslyn and writing features that are not in Roslyn. Architecturally, it seems to me that this is impossible: you cannot write ReSharper's functionality on top of Roslyn without changing Roslyn. We use too many internal compiler data structures to implement our analysis features. A feature is not written on top of some fixed code model: while the programmer works in Visual Studio, the code model constantly changes a little, the indexes change, what we store and how we store it changes. To refactor correctly, we use the compiler infrastructure that we wrote ourselves. The same checks that we use to detect compilation errors are then used to produce a suggestion that, for example, “this type can be replaced with var”. And it is like that everywhere.
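    For illustration only, here is roughly what the Roslyn-based approach to such a “this type can be replaced with var” suggestion looks like when written against Roslyn's public syntax API (a sketch assuming the Microsoft.CodeAnalysis.CSharp NuGet package; ReSharper itself, as Sergey says, does this on its own compiler infrastructure instead):

```csharp
using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class VarSuggestionSketch
{
    static void Main()
    {
        // Parse a snippet into a Roslyn syntax tree, the way an
        // analyzer built on Roslyn would receive it.
        var tree = CSharpSyntaxTree.ParseText(
            "class C { void M() { System.Int32 x = 1; } }");

        // Local declarations with an explicit (non-var) type are
        // candidates for a "use 'var'" suggestion.
        var candidates = tree.GetRoot()
            .DescendantNodes()
            .OfType<LocalDeclarationStatementSyntax>()
            .Where(d => !d.Declaration.Type.IsVar);

        foreach (var d in candidates)
            Console.WriteLine($"Can use 'var' here: {d.ToString().Trim()}");
    }
}
```

A real suggestion would also need the semantic model (to check that the inferred type matches), which is exactly where the incremental, constantly changing data Sergey describes comes into play.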

    - And now there is Roslyn, and it seems you have to live in parallel with it.

    - Yes, we do.

    - Do you plan to use it at all?

    - No.

    - At this stage you have won the war, and you dominate. But now, with the release of Roslyn, the others have a chance.

    - We have no performance problems. We have no doubt that we will be able to keep writing new analyses, even smarter completion and even smarter refactorings at the same pace.

    - You already have complex products, and now they will become even more complex.

    - The complexity of language support is not a serious problem now. The biggest difficulty now is supporting Microsoft's platforms: all these stacks, universal applications, resolving references for all the platforms. The difficulty lies more in how the build works than in how the compiler works.

    The compiler part is simple, and ours is well written and well supported. The problems now are the platforms, and performance. We are going to solve the performance issues in a rather drastic way: by moving out of the Visual Studio process.



    About development plans for other .NET stack products


    - Besides ReSharper, which we have talked about, you have other products in the .NET stack. What is happening with them now, and what do you plan to do with them?

    - The story began long ago, when we wrote the dotTrace profiler. The first version of dotTrace was written by Misha Senkov, who is currently writing ReSharper C++. At some point we decided to fork our code base. ReSharper contained a lot of code for the UI, collections, primitives, the application model and so on. For the release of dotTrace we forked this platform, and from that moment we had unsynchronized releases of different products, dotTrace and ReSharper, based on the same platform.

    - That is, inside your code base there was an entity called the “platform”, and both of these products were built on it?

    - Yes, a repository with many assemblies. Each product used the platform, and each product had its own release cycle. The overhead of organizing and stabilizing all of this was enormous. Accordingly, when I took over the leadership of the .NET department, it started with the idea that the platform should be shared, the products should have a common release cycle, and preferably a single solution in which all these products are developed. So we began unifying our code base so that new products could be built from one solution.

    In previous versions, dotTrace integration consisted of two parts: a ReSharper plug-in and a separate extension for Visual Studio, because integrating with Visual Studio means integrating into the menus, making a manifest, a package: all the attributes of a Visual Studio extension. In the new scheme we made one product that integrates with Visual Studio as a whole but is assembled from parts. ReSharper is probably the only extension in Visual Studio that does this. Our installer lets you select several products and builds the extension on the fly: all the attributes of the extension are generated and compiled by us, and the code that integrates several products side by side into the Studio is produced right in the installer, at runtime.

    We made buttons to turn each product on and off. If you look at the architecture of this solution, the different programs (ReSharper; the command-line InspectCode tool, a utility for running code analysis on a build server; dotPeek; dotTrace; dotCover; dotMemory) all run the same code under absolutely identical conditions, just with different parameters. That is, it is really one program launched with different parameters, built from the same set of our assemblies.

    I once gave a talk at DotNext about how this zone-based application model is arranged, which makes all of this possible. And everything really has become simpler. Releases have become simpler. All the programs are now integrated. Users no longer ask: “Which dotCover is compatible with this version of ReSharper?” All this also allowed us to release ReSharper C++ as a separate product: it can work together with ReSharper, ReSharper can work without it, and together they work as well.

    - How well has this new model settled into users' heads?

    - Judging by the users, I can see that a lot of people are installing it: dotTrace installations inside the Studio have grown significantly.

    - There is good potential here for all kinds of cross-sales.

    - Cross-sales led us to stop selling the profilers separately altogether; they simply cannot be bought as standalone products anymore, because that caused more problems than good. The problem is: I have a license for ReSharper and a separate one for dotTrace, I want to install a new version of the product, and the versions are synchronized. These per-version licenses did not work at all: they could be incompatible, they could differ, one could expire later and the other earlier... To get rid of all these problems, we simply made a single, more expensive edition.

    About working with user feedback to improve the product


    - You and I keep coming back to performance. The specificity of ReSharper is that the application has a lot of different functionality, and each user uses their own subset of it. Most people who write for .NET write programs with a clearly visible code base, with a fixed workload, where the same operations are performed in a loop and you can actually record a profile and see the hot methods. With you it is nothing like that. How do you live with this? How do you keep it all in your head?

    - There are a few things. We have a button: Profile Visual Studio. You, as a user, can record the interval when ReSharper was slow and send it to us, and we will most likely do something about it. This is a mechanism that works; with it, version by version, we fix problems. Beyond that... beyond that it gets difficult.

    How does a programmer approach optimization? Preferably, a measurable one: Find Usages, file analysis, navigation, maybe some known difficult places. He looks at how much time the operation takes: Find Usages now takes 10 seconds and should take seven. But in the process of changing things, we build caches and change data structures that other features also need. And you can easily lose performance in some other scenario, for example in code completion, and even worse, in some synchronous part of it that runs in the foreground and directly affects response time while you type.

    Accordingly, having done some optimization, or having started some background activity that allocates many objects or takes up static memory, you, as a ReSharper programmer, can touch the part users are most sensitive about: typing, switching between editors, caret movement, the things that must work without any delay at all.

    - How can this be tested at all?

    - It hardly lends itself to regression testing at all. One of the most important criteria for the correctness of any performance test is repeatability of the result. That works if you have a stable virtual machine on a server with no other load. But when your typing stutters on every 10th character and nine go through fine, it is pretty hard to catch.

    - It is not even clear how to formulate a metric in which this would be visible.

    - And it is not always easy to pin down when one piece of code affects another. Say you are working on code completion, and it slows down because the unit-testing window, for example, is running a unit test that is printing its output in the background. That is a very real scenario.

    - Is it true that this is addressed by a different approach to development itself, roughly as Dima Ivanov described? And second: is it true that the only thing that works here is user feedback?

    - Yes, it works. It works like this: whenever something is slow, each of our developers takes a profiler, finds the problem, fixes it and shares the findings. This is dogfooding in all its manifestations, plus careful study of the snapshots that come to us from users: other languages they work with, other kinds of solutions.

    - Do you get many of these snapshots?

    - A few a day. It often turns out that it is not ReSharper that is slow, but some extension to ReSharper, an extension to Visual Studio, or Visual Studio itself. Or something that does not lend itself to any analysis at all.

    - In that case, do you go to Microsoft, or to whoever made that extension for ReSharper?

    - Microsoft is very hard to get through to, and we do not waste time on it. If you want Microsoft to fix something, you have to be very persistent; roughly speaking, take your computer and ship it to Microsoft so they can analyze it. And when feedback comes to us with a scenario that even we cannot reproduce, nobody at Microsoft is going to look at it, because they have enough feedback of their own.




    You can find other episodes of “Without Slides” on our YouTube channel, and transcripts on Habr, simply by searching our blog or the corresponding tag.
