“Don't be shy. Try it!” An interview about life, compilers and compiler life with Alexandre Mutel from Unity

    How do you succeed in systems programming, and what do you need to know and understand, especially when your career is in its third decade? C# and performance: is it worth rewriting everything you see in C#? What future awaits us in terms of low-level compiler technologies?

    Today, Alexandre Mutel answers our questions in the virtual studio.

    Alexandre Mutel works as Lead Software Architect at Unity Technologies. He is also a well-known open-source developer, contributing to SharpDX, Markdig, Zio and other projects, and since 2014 he has been an MVP in the “Visual Studio and Development Technologies” category.

    Alexandre works on various low-level and high-level problems in the areas of real-time graphics rendering, GPGPU, sound synthesis, efficient use and architecture of managed languages, code generation, and documentation.

    As always, the interview is conducted by Evgeny Trifonov (phillennium) and Oleg Chirukhin (olegchir) of the JUG.ru Group.

    At the end of the post there is a surprise from Dylan Beattie (another well-known .NET person) - we didn’t expect it ourselves.

    E.: You have a long career, three decades - for a start, could you briefly tell us about it?

    It all started in childhood - I got an Amstrad CPC 464. When I started programming on that computer I was 11 or 12, I don’t remember exactly. I quickly learned to program in BASIC and bought books on game development: it seemed very interesting back then. I played games very little - developing and writing code was more interesting to me. Then I moved on to writing assembly code for the Amstrad.

    At 16 I got an Amiga 500. I met guys who were writing demos - the demoscene then was not at all what it is now; now it is WebGL, a completely different scene. I started writing a lot of demos that I didn’t always show anyone - I just liked writing in assembly. It was that simple.

    Then I went to a technology college where I studied computer engineering. It was something completely different from games and assembly. I loved learning things I hadn’t even known about before: operating systems, UNIX, working with the C language (before that I had used only BASIC and assembly, because I didn’t have the money to buy a C/C++ compiler).

    When I graduated from college, I started working in the foreign exchange industry - a job for a French company in New York. Two years later I came back and went to work at a bank. In general I didn’t want to work at a bank, I wanted to work in game dev. But I got stuck in that new area - there was a lot to learn. So I spent 8-9 years there, mostly dealing with Java and a bit of C++. Lots of distributed servers and SQL databases, database replication... Not at all what I’m doing now.

    Then I took a sabbatical and went traveling around the world: I was in Africa, in South America, and a whole year in Asia. The journey changed and shook me. When I returned, I couldn’t work with computers or in IT, and I couldn’t work for a bank. I quit and spent four years on social work, training to work with children, the homeless, the disabled, and the elderly. I studied it for three years, and it was very interesting, because for most of my life I had worked in the exact sciences: mathematics, projects, abstractions. And then I suddenly moved into a very humanitarian field. I even tried to work in this area after the training, but just at that period a friend with whom I had made demos in my childhood hinted that I could do it again.

    I started working on demos in all my free time, and very quickly it began to take up more time than working with children on the street. That was bad. People said: “You need to try to find a job in game dev - why not? You can do it.” But I thought it was impossible, because I hadn’t worked with computers for a long time, and with my resume it was difficult to find a job in IT.

    I started working on open-source projects and released a couple that companies began to use. Once, one of these companies contacted me - they were using one of my latest projects, SharpDX. I went to Japan with my family - I already had two children by then - and we lived there for 5 years. During that time I was working on building a game engine from scratch in C#.

    About two years ago I returned to France and started working at Unity. It was a break from what I had done before, but they offered me work on a very difficult and interesting task, a real challenge: making a compiler that generates native code from .NET IL. It was exactly what I had always wanted to do but couldn’t, because nobody would pay me for it. And then there was this chance, a great opportunity. I have been working on this project for 2 years.

    Well, it seems the story wasn’t so short after all.

    E.: That’s fine, such a career deserves a long story. Given your experience, I want to ask this. Some people now say: “Moore’s law no longer works, computers are not getting faster, we are all doomed.” Others answer: “Well, although they are not accelerating at the same pace, there is still growth, so there is no reason to panic.” Since you are close to the topic of performance and have followed the industry for a long time, where do you stand?

    On this issue I stick to the golden mean. (laughs) I believe that many, if not most, of the applications we develop should take performance requirements into account from the very beginning - that results in the best quality.

    Look at what has happened in the IT industry before our eyes. For example, Windows became a little slower for several years - Windows Vista and so on. Then, in essence, came the natural work of improving performance, because for years nobody had worried about it. When Windows 8 came out, it was already a bit better. Then Windows 10 came out and it got even better. As a result, we have a system that works quite well compared to what came before. It was really important for them to make these optimizations, because one day people would definitely “live beyond their means” and start saying: “Oh! This software no longer works for us, we are switching to Linux, because it is faster and lags less.”

    The same can be said about all the software we develop. And what is surprising: there was always a pull toward native code - at some point even in the Windows world they decided to return to C++, saying: “C++ is the solution, .NET is very slow, there’s the garbage collector and blah blah blah...”. And so native languages became relevant again.

    At the same time, V8 in Chrome reinvigorated JavaScript with a JIT. JS is a scripting language and not super fast, but sometimes it is only about twice as slow as C++. That was enough for it to survive and for us to use it right now for things like writing code in Visual Studio Code. But if you look closely, this is all because performance requirements were built in from the very beginning. Even in VS Code, although there is a lot of JavaScript and script code in general, everything underneath - V8, the rendering stack, the JIT - is written in a language designed for maximum performance, that is, in C++. It could all have been written in another language, not necessarily C++, but the point is that all this software was designed with performance in mind from the very beginning.

    So yes, we can use less efficient languages, but only because all the underlying technology is designed to deliver a fantastic user experience. For example, Visual Studio Code is amazing software that works very well for developers and solves their problems. Many people say: “Although we like more native code editors, we are switching to Visual Studio Code right now” - because they find it fast enough. Performance is everywhere, but sometimes we don’t see it, because it is already embedded in everything we use.

    We think: it is written in JavaScript because JavaScript is fast enough. But JavaScript is this fast only because hundreds of engineers have been working for years to optimize the JIT. Now we can use scripting languages even for writing very complex applications - scripting languages that, without all this groundwork, would be much slower. We live in a strange time. We have a choice, but there is still a performance story that repeats itself for every language, over and over again.

    .NET is a typical example. Over the past three or four years, a great deal of performance work has been done there. Just look at ASP.NET Core, look at all the work done on CoreCLR... Performance sells well: it costs money and lets you achieve more. By trying to meet strict requirements, you can become more productive, you can save power, you can save some money at the end of the month - performance affects everything. When I hear people say, “It’s all right, I’m developing my application, it has average performance, it’ll do...”, what are they thinking about? It takes only a little time to check whether you can make your application a bit faster. If you can save resources, or even a tenth of the application’s running time, that is already good.

    E.: Here is a partly philosophical question. You think Slack is not the best place for technical decisions, and on your website you offer an old-school RSS subscription. Do you think the new era of instant messaging makes developers less productive?

    No, I don’t think so. I work remotely now. At Unity we can work remotely, so I constantly use Slack to communicate with colleagues. For me it is the best way to stay in touch and stay productive. It does eat into the workday, because you have to check channels and so on, but I can temporarily turn Slack off and focus on work. While I was working in an open-plan office, I had no choice: if someone wants to ask a question, you have to answer right away, and that is much harder.

    As for Twitter and email, I don’t check them that often. I read Twitter once or twice a day; it depends on various factors - whether I am taking part in some discussion and what it is about. If you use something like Slack, you can join different channels in the company and follow many topics you could not follow if you worked alone. You need to find a middle ground: we all care about many things happening in the company, but you have to be selective, because you cannot take part in all the discussions at once. Some people read so many channels that I am simply amazed at their abilities - I am not like that myself. Today I read about 30 channels, which is not that many.

    E.: Thanks, time for Oleg's questions!

    O.: My career is somewhat similar to yours: I worked in a bank, and now I am in a completely different area - organizing conferences - while trying to figure out how to build compilers. What can you advise ordinary enterprise web developers trying to move into systems programming? Are there any tips for such a transition? I am sure there are quite a few of us here, if not a lot.

    I am not sure there is a ready-made path for such a transition. If you are interested in these technologies, do your homework: write parsers and compiler-related stuff at home. You don’t have to write a complete compiler from start to finish, all the way to machine-code generation. You start by taking an interest in compiler infrastructure. That is what I have been doing in recent years at Unity. If you are passionate about low-level things, this is one of the places where you can understand how it all works. How can the work be improved, where should performance be improved, and where has nobody done it yet? If you care about performance, it is very important to be aware of what your application will ultimately run on.

    Performance is my theme, and all this was a great opportunity for me. I wanted to attack the problem at its root, that is, at the compiler level. That is where we can increase performance by dozens of times in the places where our users need it. If we are running games, apps, movies, or something like that, sometimes it is relatively easy to achieve such results.

    A passion for low-level pieces and compiler components led me to my current job. But it was not something I had specifically aimed for. Sometimes, when you accumulate a lot of experience with different languages and write applications, there is a desire to invent your own language. I started doing that, but stopped, because it is too much work and I have very little free time. Still, you have a subconscious desire to return “to the roots”, to try to do something yourself in order to understand everything. Of course, I understood how compilers and all that work, but I did not understand the complexity of the requirements - the difficult tradeoffs you have to deal with, for example, in memory management. It is very hard to choose something that simultaneously makes the application developer more productive and is efficient. This problem is still not completely solved; neither Rust nor .NET will ever settle it once and for all. Rust is beautiful, amazing, but it is hard to work with, especially if you come to it from something like JavaScript. Nevertheless, there are examples of Python or JavaScript developers switching to Rust, which is somewhat surprising.

    O.: You mentioned that you have programmed in C# for the last 10 years. So what is so good about C#? Why not C++, for example? C++ seems to be more of a systems language.

    To be honest, I hate C++ and I hate C, but I work with them. I sincerely believe that they lead to piles of bugs and huge development inefficiency. Many people think that if you program in C, you are de facto writing fast, performance-oriented code. That is not true. If you do heaps of mallocs and all that, it will be slow even compared to code written in .NET. Good C/C++ developers have to use tricks like region-based memory allocators. You have to dig into a bunch of weird things that almost nobody has heard of - although game developers usually do know about such things, at least AAA developers or people who build game frameworks in C/C++. Part of the problem comes from the complexity of the language itself. I used not to read books on C++ at all; only three or four years ago did I start reading C++ books, just to get a feel for the language. I had programmed in it, but without a systematic, formal approach, and I was struck by its complexity and by the number of things you can break if you don’t write everything correctly.
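
    The region allocator trick mentioned above can be shown in a minimal sketch. This is a hypothetical, single-threaded bump-pointer arena, not any particular engine's allocator: objects are carved out of one preallocated block, and the whole region is released at once instead of paying per-object malloc/free costs.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal region ("arena") allocator sketch: allocation is a pointer bump
// inside a preallocated buffer, and reset() frees everything in O(1).
class Arena {
public:
    explicit Arena(std::size_t capacity) : buffer_(capacity), offset_(0) {}

    void* allocate(std::size_t size,
                   std::size_t align = alignof(std::max_align_t)) {
        // Round the current offset up to the requested alignment.
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // out of space
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }  // release the whole region at once

    std::size_t used() const { return offset_; }

private:
    std::vector<std::uint8_t> buffer_;
    std::size_t offset_;
};
```

    The point of the trick is that thousands of short-lived allocations with the same lifetime collapse into one buffer and one cheap reset, which is exactly the pattern per-frame game code tends to have.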

    Just a couple of months ago at Unity there was a bug: someone made a mistake in a piece of C++ code. It was in a constructor - something was passed by value, and as a result we took the address of that value and looked it up in a cache. In effect, we were referring to a value that was no longer in memory. And all because pointers got mixed up with non-pointers, and the person who did the refactoring did not check all the usage sites. Completely different code that had worked perfectly suddenly stopped working. It seems like a small mistake, but it broke everything. It is, in essence, a memory-management error. So yes, when I see things like this, I become convinced that we must restrict work in C and C++ and minimize their use as much as possible. In the .NET part I really did limit their use to platform-specific things. But writing everything in C# is rather a chore: to access an API you need to do a bunch of dlopen calls. Although you can, for example, encapsulate all of that in a C wrapper and organize access through just one function. I would prefer to isolate such things and keep developing them in C and C++. But that is a narrow interop topic; the rest of the time you stay in a normal managed language, use it most of the time, and enjoy faster compilation.
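
    The bug described above can be reduced to a small hypothetical sketch (the names here are invented, not Unity's actual code): a cache keyed by object address silently breaks when a refactoring changes a parameter from a reference to a value, because the callee now takes the address of a local copy rather than the caller's object.

```cpp
#include <cassert>

struct Resource { int id; };

// Correct version: the parameter is a reference, so &r is the caller's
// object and an address-keyed cache lookup can match it.
bool key_matches_by_ref(const Resource& r, const Resource* caller_addr) {
    return &r == caller_addr;
}

// The accidental refactoring: pass by value. &r is now the address of a
// fresh local copy, so the lookup key never matches the cached entry -
// and storing &r beyond the call would leave a dangling pointer.
bool key_matches_by_val(Resource r, const Resource* caller_addr) {
    return &r == caller_addr;
}
```

    The type system accepts both versions without a warning, which is why the mistake survived review until completely unrelated code started failing.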

    I hate C++ compiler and linker errors, and I hate having to deal with different platforms - all of this is very, very hard. You start by compiling with MSVC, then you have to switch to Clang, then to GCC; on Linux, on Mac, on Windows, on Android, on iOS, and so on and so forth. Working with C++ is a nightmare!

    I hate the separation between files in the editor, the .h files and the .cpp files. People eventually get lost in the language and start programming in macros. I love metaprogramming, but in modern C++ we can commit sheer madness. In themselves these things are amazing, but really it is too much.

    To summarize: yes, I think we can develop efficient software in C#. Maybe not as fast as in C++, but we can. This is exactly what we are trying to do at Unity - for example, we are building the Burst compiler to compile a particular subset of C#, achieving maximum performance, in some places even more than C++ would, while staying in C#. It is not completely safe: for pointers you have to declare unsafe, exceptions are not generated, you do everything explicitly. And that is a hard-won experience. But still, you can write code that is as fast as C++. I think this is exactly the direction .NET should take and where we should go.

    O.: Speaking of open-source code: for example, the garbage collector in .NET Core is one very big and scary C++ file. Two megabytes of garbage, most likely generated from some Lisp (it is hard to believe that many characters were written by hand). Perhaps it makes sense to rewrite it all in C#?

    Yes! I talk with people who work on the JIT both at Microsoft and in the community. There is something I truly believe in. I believe there comes a moment when your language becomes mature and fundamental, and then you have to challenge it, test its strength. You need to be able to use it as a foundation - prove that you can apply it even to build something very performance-demanding. And that is the story of the garbage collector and the JIT. A very, very large percentage of .NET runtime subsystems, including the JIT and the GC, could be done in C#. If we follow the rule that C++ may only describe abstractions over the base platform, that would make most of the runtime platform-independent. I would be very happy if that happened. But it is a huge job.

    There is one reason why I especially like this idea. As I have already said, refactoring and improving a C/C++ code base is so complex that at some point you stop doing that refactoring. It hurts so much that you just don’t touch it anymore. You have to move and change some files by hand, because refactoring in the IDE works poorly - for example, because there are too many templates - and so on and so forth. Developing in C#, you could be more ambitious about what you want. Adding new optimizations would be much faster and easier, simply because compile times are shorter; iteration time for testing would decrease, and so on. It is nice that in CoreRT, for example, they try to use C# as much as possible instead of C++. The situation is changing.

    But we are still only halfway to rewriting the .NET GC in C#. Yet we could do it. For example, I know that I could rewrite the .NET GC, and rewrite it differently. A few years ago I became very interested in GCs, read books about them, and wrote something like a prototype GC implementation. I was amazed by the Java community’s work on Jikes RVM - they did that work in Java. Later I discovered that the Go compiler was first written in C and then in Go. When I started reading the source code of the Go compiler, I was surprised by the organization of the code base and by how the optimizations are structured. For example, there is a huge file describing the allowed optimizations that can be applied to certain instructions: instructions can be mapped to faster native instructions, and all of this is described in a huge text file. I have not seen anything like it in LLVM or in the .NET JIT.
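
    The idea behind the Go compiler's rule files - each line pairs an instruction pattern with a cheaper replacement - can be illustrated with a toy analogue. This is not the actual Go `.rules` format (the real compiler generates Go code from those files); it is only a sketch of table-driven lowering over an invented two-operand IR.

```cpp
#include <cassert>
#include <string>

// A toy IR instruction: an opcode and two integer operands.
struct Inst { std::string op; long a, b; };

// Apply simple pattern-based rewrite rules, in the spirit of
// "(Mul x 2) -> (Shl x 1)" lines in a rules file.
Inst lower(const Inst& in) {
    // Multiply by two becomes a shift (strength reduction).
    if (in.op == "Mul" && in.b == 2) return {"Shl", in.a, 1};
    // Adding zero is a no-op: collapse to a copy.
    if (in.op == "Add" && in.b == 0) return {"Copy", in.a, 0};
    return in;  // no rule matched: leave the instruction unchanged
}
```

    Keeping the rules in a declarative table, separate from the matching machinery, is what makes such files readable at scale: adding an optimization is one new line, not a new pass.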

    To sum up: yes, using a managed language should give us more opportunities to write a better runtime.

    O.: You talked about code complexity, the danger of errors, and so on. For me, the hardest part is reading compiler source code and understanding it. For example, a very large file with tens of thousands of lines of AST transformations and intermediate representations - say, that lowering file from the Go compiler. What do you think about writing good, understandable code for system components? How do you write it?

    Ha, you say that as if there were ready-made rules for everything! Compilers are more demanding of their code than regular applications. First: you need to be extremely careful when designing the basic architecture. You have to watch reuse and isolation carefully: if you change something, nothing should fall over elsewhere and ruin the rest of the code.

    Second: I am convinced that we are obliged to accompany code with more comments explaining the internal dependencies that are not obvious when you just look at the code. I really do not like it when people say: “The code must describe itself! It should be obvious!” That is completely wrong and misleading. Engineers love telling such tales to young developers, and you should not do that.

    When I look at a code base like LLVM, for example - it may not be perfect, but it is full of comments. When there is a structure in a header file, comments may explain where this structure is declared, why it is declared that way, why these particular fields are here. There is a reason for all these things. If you do not explain that reason, nobody will understand why you did it that way; just reading the code will make nothing clear. Say you align data for efficient caching: if you don’t explain it in a comment, how will others understand that it was all done for efficiency? Perhaps later even you yourself will have to re-read the entire code and reconstruct the whole structure of the application and the layout of the data in memory before the insight comes. But if you do not add comments to the code and explain this, that knowledge is simply lost.

    And I am saying this not only about good work, but also about bad work. More than once I have had to put a comment in the code saying that I did not do this part very well, I am not very proud of it, this place needs fixing; for now we will leave it as it is, but someday we will need to come back and fix it. I explained the reason why it was done this way: for example, there was not enough time to do it well. You document that you did only a small part of what was needed, and that is normal. You used a crutch, and if you had tried to do it more correctly, you would have broken everything. You have to explain this even for the parts of the code that do not work very well. So, comments are the second thing.

    And the third thing for me is testing. We need to write more unit tests. Not big integration tests, but precisely small unit tests that exercise parts of your API, starting from the lowest levels. That way your application improves over time. For example, I wrote SharpDX without any tests at all, and at the time of writing it was not bad. But the thing is, I created it not for other people but for myself, in my free time: I wanted access to the DirectX API that was already available in C++. Over the years I verified that everything worked, but every time I made any change I had to check the functionality again. In recent years I have switched to other projects: there is no time, and I no longer use SharpDX myself. Then a developer from the open-source community came along, extracted the compiler into a separate package and tried to build something separate from SharpDX on top of it. The thing is, I did not fully check that PR, because we did not have a single test - I simply merged his pull request (it seemed perfect). He started doing mini-tests in his own repository, against his own separate application. But we overlooked something, and SharpDX itself turned out to be completely broken in one of the releases. People immediately showed up with problems like: “Why does this method throw an exception?” On subsequent projects (both open source and at work) I have tried to be very conscientious about testing, occasionally even trying to increase coverage. But trying to reach 90% coverage is very hard. Sometimes real tests get too weird: you wrote them just for debugging, and you cannot really count them toward coverage.

    In general, yes, I believe these are the three things that need to be approached very carefully: architecture, comments, testing.

    O.: Compilers are quite an old field, right? The story begins in the middle of the last century. What do you think are the most important challenges now for modern developers and companies creating compilers?

    I think that today the main challenge is to create a compiler that handles SIMD well. And this really is a challenge that compilers find hard to answer, because SIMD and similar optimizations are usually added later, not necessarily from the very beginning of a compiler (in the case of LLVM they probably were there from the start). The point is that SIMD brings a lot of new problems to the optimizer, which has to perform these transformations correctly. As you can see, compilers still struggle, for example, to vectorize code automatically. In some form they can do it, but sometimes it is easier to do by hand. Compilers cannot identify all the places for optimization, and as a result inefficient code is generated. It is a complex topic. Some people at Intel are working on adding vectorization technology to LLVM. The project has been under development for several years, and so far it is mostly preparation for landing these changes in upstream LLVM. They are not there yet; it is very hard, and it will take years. LLVM is a very good system, if only because it has things that are missing in .NET. Exploiting SIMD, the CPU, and multiple cores - that will be the greatest modern challenge. That is why, for example, vector intrinsics are being added to the .NET languages, so you can use the vectorizer and SIMD instructions explicitly. But in many cases, such as loops, one would expect auto-vectorization.
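
    A small sketch of why auto-vectorization is fragile, assuming a modern optimizing compiler such as Clang or GCC: the first loop has independent iterations and typically vectorizes once the compiler can rule out aliasing (hence `__restrict`, a common compiler extension); the second carries a dependency from one iteration to the next, so its iterations cannot simply run in parallel SIMD lanes.

```cpp
#include <cstddef>

// Independent iterations: a compiler can usually auto-vectorize this,
// provided it can prove dst and src do not alias (__restrict helps).
void scale(float* __restrict dst, const float* __restrict src,
           std::size_t n, float k) {
    for (std::size_t i = 0; i < n; ++i)
        dst[i] = src[i] * k;
}

// Loop-carried dependency: dst[i] needs the running value of acc from
// the previous iteration, so naive auto-vectorization is impossible and
// compilers usually leave this loop scalar.
void prefix_sum(float* dst, const float* src, std::size_t n) {
    float acc = 0.0f;
    for (std::size_t i = 0; i < n; ++i) {
        acc += src[i];
        dst[i] = acc;
    }
}
```

    This is the gap the intrinsics fill: when the optimizer cannot prove a loop safe to vectorize, explicit vector code states the programmer's intent directly.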

    Before, I would have said: “We need compilers as services and compilers as libraries.” Today that is no longer a problem; people are convinced of the usefulness of these ideas. That is why LLVM became so famous: from the very beginning it was developed as a compiler library on top of which you can build any language you want. The same goes for Roslyn in C#, which is also a compiler - of course not the same kind, but you can nevertheless use it in your own code. I think everyone is already aware of these ideas; people realize the benefits of the compiler as a library. So for me, SIMD-friendly code is more important. Programming with an eye on the GPU, the CPU, and the number of cores is just the beginning. We have 6, 8, 16, 42 cores. At some point that growth will continue, and we must be ready for it.

    O.: You have your own Markdown implementation called Markdig, right? Could you explain briefly what the status of Markdown is? Does it have a formal grammar? How is it doing?

    Yes. For me, Markdown is a way of writing documentation that developers like. It is probably not for ordinary people, who still find it easier to use Word or something like that. But for developers it is good. What did developers use for many years? Text files, with headers marked up in various ways, and so on. And it was OK. You know, it is like reading RFCs on the Internet: there are lots of text-file formats - very limited, very well made, but not formalized. They were not always easy to read, there were no pictures, and it was crazy; but we had no choice, and we even had to use ASCII art. Then came Word, PDF, and the like. In the 2000s a lot of documentation was in Word. It was not easy to persuade developers to work with such documents - it was not fashionable, and it was inconvenient to track changes in the documentation when you changed the associated code, and vice versa. When Markdown appeared, it was amazing: it became possible to produce something very pleasant from it - for example, HTML. And it is still a text format that is easy to read and easy to add to your code base. Even if you have not generated HTML, Markdown reads comfortably in a text editor. Now, when you open a project on GitHub, it identifies Markdown files and renders them nicely on its own - and all of this sits right next to your code. Documentation next to the code is the best form of technical documentation. For example, Microsoft moved all of its technical documentation to Markdown, and now the documentation can be read directly on GitHub.
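
    The appeal described above - plain text that maps cleanly to HTML - shows up even in a toy converter for ATX headings. This is a tiny illustration only, nothing like a full CommonMark parser such as Markdig (it ignores escaping, inline syntax, indentation rules, and much more).

```cpp
#include <string>

// Convert one line of Markdown to HTML: "# Title" ... "###### Title"
// becomes <h1>..<h6>; anything else is wrapped in a paragraph.
// Simplification: a heading requires a space right after the # run.
std::string line_to_html(const std::string& line) {
    std::size_t level = 0;
    while (level < line.size() && level < 6 && line[level] == '#') ++level;
    if (level > 0 && level < line.size() && line[level] == ' ') {
        std::string text = line.substr(level + 1);
        std::string tag = "h" + std::to_string(level);
        return "<" + tag + ">" + text + "</" + tag + ">";
    }
    return "<p>" + line + "</p>";
}
```

    The source stays readable as plain text whether or not this conversion ever runs, which is the property that made Markdown win among developers.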

    As for normalizing Markdown, there is a project called CommonMark, launched several years ago, whose purpose is the standardization of Markdown. I think that today CommonMark is the standard. People are beginning to respect it and follow it, but there are many other implementations. For example, GitHub switched to CommonMark last year. No one noticed - and that is good - but they were able to make the move. Microsoft also moved its documentation to CommonMark, because they used my Markdig. They started working with Markdig years ago; before that they had a home-grown Markdown engine based on regular expressions. That is not bad in itself, but as far as I know that engine did not really follow the CommonMark specification, so in the end they switched to Markdig. They contacted me, and it was awesome. I wrote Markdig in order to use it in my own project, but then I put that project aside for a long time. I am very glad that a company like Microsoft decided to pick it up.

    O.: What do you think it takes to be a good programmer today? What does a good modern programmer need to know, unlike the programmers of the last century?

    This applies not only to programming but to any field. The main thing is to do your homework and learn things yourself. It is important to ask questions. You should not accept a programming language just as it is. Not that you need to change it, but it is important to understand why different things work the way they do. You should be curious and open to studying the reasons, and you should ask good questions: “I do not understand how this works, but I want to understand it. What would I do to improve the language?” The more questions you ask of the language you use, the more opportunities you have to help improve it, to learn new things, and to discover why something in software is done a certain way. That way you can learn much more than by just typing “I want to do X” into Google and finding ready-made single-function “do it well” projects on GitHub. When we write a real application, it would be too hard to go into the details of every dependency we use - that would be an enormous amount of tinkering. But in the part that matters to you, that interests you personally, it is worth going deeper and asking questions. How does it work, can it be done faster, why does something not work very well - you have to understand why. “Maybe I can write something better?”

    Many of the projects I worked with were, for the most part, finished. Sometimes the project owners did not want to change them. That is sad, because you expect that in open source everyone will cooperate with each other. The reality of open source is that sometimes you need to do something else. Sometimes what you propose is good, but it leads to too much change. It can be hard for people to accept such proposals - perhaps it will only bring more bugs and more maintenance, and who will maintain all of it? The person who made the proposal? Probably not.

    Sometimes I too forget that I need to ask the simple questions. Can we do it faster - yes or no? You have to act instead of simply using slow things, swearing at them, and doing nothing about it. You need to dig, to fix... Do something at home, work through the pain. Each time, you become more and more ready to understand the challenges and to develop more complex programs. And maybe in the end you will be a little less disappointed by other people’s reactions, because you will start to understand their constraints better.

    And one more important thing. Don’t be shy. Try it! Try to do what you want. If you have an idea, try to realize it. Even if the idea is crazy - sometimes you get crazy results, and there is something in that too. While doing such things you can meet a lot of weird and crazy people from whom you can pick up cool insights about how to develop. You need to pay close attention to what they say, how they work, what they think. Do not copy them, because you have your own path and your own future. It is cool when you meet such people along your career path. I have had several of them. I was not always aware of it at the time, but after a while I understood: “Wow, that guy was cool!” I learned something from them, but maybe I could have learned more. It is the golden mean between trying to think for yourself and taking the best from the best minds around you. They can influence you in a good way and help you build cooler things.

    It is a story about developing on your own, asking as many questions as possible, being open to the people you meet and the opportunities that arise, and taking part in something bigger than yourself.

    Next Friday, Alexandre will give a talk, “Behind the burst compiler: converting .NET IL to highly optimized native code by using LLVM”, at DotNext 2018 Moscow. The conference will be held on November 22-23 in Moscow, and it looks like it will be the biggest DotNext ever. We would tell you what else to expect from the conference, but another speaker, Dylan Beattie, did it better than we could - he recorded a whole song:
