The Severe Siberian JVM: a big interview about Excelsior JET

    Recently we wrote about the lengths Alibaba goes to in order to make life with OpenJDK more bearable. The comments included things like: "so while we suffer here with ordinary Java, the Chinese have already built their own special one." Alibaba is certainly impressive, but Russia has fundamental projects of its own where people also build a "special Java".


    In Novosibirsk, for 18 years now, a team has been building its own JVM, written entirely from scratch and in demand far beyond Russia. It is not just another HotSpot fork that does the same thing slightly better: Excelsior JET is a set of solutions built around AOT compilation that let you do genuinely different things. "Pff, GraalVM has AOT too," you might say. But GraalVM is still very much a research project, while JET has been a production-proven solution for years.


    This is an interview with two of the Excelsior JET developers. I hope it will be especially interesting to anyone who wants to discover new things that can be done with Java, or who is curious about the life of JVM engineers and would like to join it themselves.



    Last autumn I flew to one of the Novosibirsk Java conferences, where I sat down with Ivan Uglyansky (dbg_nsk) and Nikita Lipsky (pjBooms), two of the JET developers, and recorded this interview. Ivan works on the JET runtime: GC, class loading, multithreading support, profiling, a plugin for GDB. Nikita is one of the initiators of the JET project and has taken part in research and development of virtually every component of the product, from the core to product-level features: OSGi support at the JVM level, Java Runtime Slim Down (JET had Java modules back in 2007), two bytecode verifiers, Spring Boot support, and so on.




    Oleg Chirukhin: For those who have never heard of it, can you tell us about your product?


    Nikita Lipsky: It is surprising, of course, that we have been on the market for 18 years and are still not that widely known. We make an unusual JVM. What makes it unusual is its bet on AOT compilation: we precompile Java bytecode into machine code ahead of time.


    Initially the idea of the project was to make Java fast. Performance is what we went to market with. When we started, Java was still interpreted, and static compilation to machine code promised to improve Java performance not by percentages or even by factors, but by orders of magnitude. Even with a JIT present, if you compile everything in advance you spend no resources on compilation at run time, so the compiler can afford more time and memory and produce better-optimized code.


    Besides performance, a side effect of static bytecode compilation is protection of Java applications from decompilation. After compilation no bytecode remains, only machine code, and machine code is much harder to decompile back to source than Java bytecode. In practice it is impossible: you can disassemble it, but you will not recover the source code. Bytecode, on the other hand, decompiles easily into Java source, down to variable names, and there are plenty of tools for that.


    Also, once upon a time it was assumed that Java was installed on every computer: you distribute your application as bytecode and it runs the same everywhere. In reality things were not so rosy, because one user has one Java and another has a different one. If you shipped your program as bytecode, all kinds of surprises could happen, from the user simply being unable to start it to strange behavior you could never reproduce on your own machine. Our product has always had the advantage that you distribute your Java application as a plain native application: you do not depend on whatever runtime the user may or may not have installed.


    Ivan Uglyansky: You don’t need to require Java to be installed at all.


    Oleg: A dependency on the operating system remains, right?


    Nikita: That's right. Many people object that if you compile to native code, the slogan "Write once, run anywhere" stops working. But it doesn't. I occasionally point out in my talks that "write once" means exactly that, "write once", not "build once". You can build your Java application separately for each platform, and it will still run everywhere.


    Oleg: Literally everywhere?


    Nikita: Wherever it's supported. Ours is a Java-compatible solution: if you write in Java, your code runs wherever Java runs. If you use our compiled binaries, they run wherever we have support, which is Windows, Linux and macOS on x86, amd64 and ARM32. Where we don't have support you can still use regular Java for your application, so in that sense the portability of your Java applications does not suffer.


    Oleg: Are there constructs that behave differently on different platforms? Parts of the platform that are not fully implemented, some standard libraries?


    Ivan: It happens, but it is not JET-specific. Look, for example, at the AsynchronousFileChannel implementation in the JDK itself: it is completely different on Windows and on POSIX, which is logical. Some things are implemented only on certain platforms, such as SCTP support (see sun.nio.ch.sctp.SctpChannelImpl on Windows) and SDP (see sun.net.sdp.SdpSupport). There is no real contradiction in that either, but it does turn out that "Write once, run anywhere" is not entirely true.


    As for the JVM implementation itself, the differences between OSes can of course be huge. Just consider that on OS X the main thread has to run the Cocoa event loop, so startup there works differently from everywhere else.


    Nikita: From the outside, though, it all looks and works almost the same for the user.


    Oleg: What about performance? Is it the same on all platforms?


    Nikita: There is a difference. For example, the Linux file system works better and faster than the Windows one.


    Oleg: And porting between processors?


    Nikita: That's a fun activity! The whole team suddenly switches to porting. The entertainment usually lasts six months to a year.


    Oleg: Does it happen that some piece of code becomes noticeably slower on another platform?


    Ivan: That can happen simply because we haven't had time to implement or adapt some optimization. It worked well on x86, then we moved to AMD64 and just hadn't adapted it yet, so things may be slower because of that.


    Another performance example: ARM has a weak memory model, so you need to insert many more barriers for everything to work correctly. On AMD64 some of those places were essentially free because the memory model is different; on ARM you have to put in more barriers, and that is not free.


    Oleg: Let's talk about a hot topic now: Java on embedded devices. Suppose I build an airplane controlled by a Raspberry Pi. What typical problems does someone run into doing that, and how can JET, and AOT compilation in general, help?


    Nikita: An airplane on a Raspberry Pi is certainly an interesting topic. We did the ARM32 port, so JET now runs on the Raspberry Pi. We have some embedded customers, but not enough of them to talk about their "typical" problems. Although it's not hard to guess what problems they have with Java.


    Ivan: What are the problems with Java on the Raspberry Pi? Memory consumption, for one. If the application plus the JVM need too much memory, they have a hard time on a poor Raspberry Pi. Also, fast startup matters on embedded devices, so the application doesn't spend ages warming up. AOT solves both of these problems well, so we are working on improving our embedded support. Speaking specifically of the Raspberry Pi, it's worth mentioning BellSoft, who are now actively working on it with HotSpot, so regular Java is fully available there.


    Nikita: Besides, resources on embedded systems are scarce; there is simply no room for a JIT compiler. So AOT compilation by itself improves performance.


    And again, embedded devices often run unplugged, on battery. Why spend battery on a JIT compiler if you can compile everything in advance?


    Oleg: What features do you have? I understand that JET is a large, complex system with a lot of everything in it. There is AOT compilation, so you can compile to an executable. What else? What interesting components are worth talking about?


    Ivan: We have a number of features related to performance.


    I recently gave a talk about PGO, our relatively new feature. We have a profiler built right into the JVM, plus a set of optimizations driven by the profile it collects. After recompiling with the profile taken into account we often get a serious performance boost, because the profiling information is layered on top of our already powerful static analyses and optimizations. It's a somewhat hybrid approach: take the best of both JIT and AOT compilation.


    We also have two nice features for speeding up startup further. The first: you observe the order in which memory pages are touched during startup, and then link the executable so that the code is laid out in that order.


    Nikita: The second: when the executable is launched, we know which pieces of memory get pulled in, and instead of pulling them in at random we prefetch the right pieces straight away. That also speeds up startup a lot.


    Ivan: These are separate product features.


    Nikita: The first is called the Startup Optimizer and the second the Startup Accelerator. They work differently. For the first one you run the application before compiling it, and it remembers the order in which the code executed; the code is then linked in that order. The second one involves a run of your application after compilation: by then we already know what landed where, and on subsequent launches we load everything in the right order.


    In addition to performance-related features, there are a number of product features that make JET more convenient to use.


    For example, we can package Windows distributions: one step, and you get a Windows installer, so you can distribute Java applications like real native applications. There is much more. For instance, there is a standard problem with AOT compilers and Java when an application uses its own class loaders. If you have a custom class loader, it's unclear which classes we should AOT-compile, because the resolution logic between classes can be absolutely anything. As a result, none of the Java AOT compilers except ours work with non-standard class loaders. We have special AOT support for certain classes of applications where we know how their custom loaders work and how references between classes are resolved. For example, we support Eclipse RCP, and we have clients who write desktop applications on Eclipse RCP and compile them with us. There is Tomcat support, which also involves custom loaders, so you can compile Tomcat applications with us. We also recently released a JET version with Spring Boot support out of the box.


    Oleg: And which server sits underneath?


    Nikita: Whatever you want. Whatever Spring Boot supports will work: Tomcat, Undertow, Jetty, WebFlux.


    Ivan: It should be mentioned that for Jetty we do not support its custom class loaders.


    Nikita: Jetty as a standalone web server uses a custom class loader. But there is also embedded Jetty, which can work without custom loaders, and embedded Jetty runs on Excelsior JET just fine. Jetty inside Spring Boot will work in the next version, like the other servers Spring Boot supports.



    Oleg: Essentially, is the user's interface to JET the same javac and java, or something else?


    Ivan: No. There are several ways to use JET. First, there is a GUI in which the user clicks through all the features, presses a button, and the application gets compiled. When they want an installer so end users can install the application, they click through a few more dialogs. But that approach is a bit dated (the GUI was developed back in 2003), so Nikita is now developing plugins for Maven and Gradle, which are much more convenient and familiar to modern users.


    Nikita: You drop seven lines into your pom.xml or build.gradle, run mvn jet:build, and out comes a ready-to-ship binary.
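
    For illustration, those "seven lines" in pom.xml might look roughly like the sketch below. This is an assumption for the sake of the example: the plugin coordinates and version may differ for your JET release, and com.example.Main is just a placeholder main class.

```xml
<!-- Hypothetical sketch of the Excelsior JET Maven plugin configuration -->
<plugin>
    <groupId>com.excelsiorjet</groupId>
    <artifactId>excelsior-jet-maven-plugin</artifactId>
    <version>1.3.0</version> <!-- version is a guess; use the one matching your JET -->
    <configuration>
        <mainClass>com.example.Main</mainClass> <!-- placeholder main class -->
    </configuration>
</plugin>
```

    After that, running mvn jet:build produces the native executable described above.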


    Oleg: Everyone loves Docker and Kubernetes these days. Can you build for them?


    Nikita: Docker is next on our list. Our Maven and Gradle plugins have a packaging parameter, and I can add a packaging type for Docker.


    Ivan: That is still a work in progress. But in general we have tried running JET-compiled applications in Docker.


    Nikita: It works. Without any Java. You take a bare Linux image, drop a JET-compiled application into it, and it starts.


    Oleg: And what does the Docker packaging produce? Do you ship a container, or put an executable into a Dockerfile?


    Nikita: Right now you just write a JET-specific Dockerfile, about three lines, and from there everything works through the regular Docker tools.


    I'm playing with microservices right now. I compile them with JET, run them, they discover each other and communicate. Nothing had to be done in the JVM for that.
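
    As a rough sketch, a "three-line" Dockerfile of the kind described above could look like this. The base image and paths are assumptions for the sake of illustration, not taken from the JET documentation; the point is simply that no JDK goes into the image, only the compiled application directory.

```dockerfile
# Minimal sketch: a bare Linux base plus the JET-compiled application directory
FROM debian:stretch-slim
COPY target/jet/app /app
ENTRYPOINT ["/app/myservice"]
```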


    Oleg: All kinds of cloud providers now offer things like AWS Lambda and Google Cloud Functions. Can JET be used there?


    Nikita: I think we should go to the providers of all these things and tell them that if they use us, all their lambdas will run faster. But for now that is just an idea.


    Oleg: So they really will work faster!


    Nikita: Yes, most likely. There is still work to be done in that direction.


    Oleg: I see one problem here: the time it takes to compile the lambda. How long does your compilation take?


    Ivan: It is a problem, and one that users of an ordinary JVM with a JIT never think about: normally you just launch the application and it runs (slowly at first because of compilation, but it runs). Here there is a separate step of AOT-compiling the whole application. That can be noticeable, so we are working on speeding this stage up. For example, we have incremental recompilation, where only the changed parts of the application are recompiled. We call it smart recompilation. Nikita and I were pair-programming on exactly that in the last development period.


    Nikita: Java poses certain problems for smart recompilation, for example circular dependencies inside Java applications; they are everywhere.


    Ivan: There are many problems that aren't obvious until you actually think about the task. When you have a static AOT compiler that performs all sorts of global optimizations, it is not so easy to figure out what exactly needs recompiling. You have to remember all of those optimizations, and they can be non-trivial: you may have done tricky devirtualization, inlined something who-knows-where. If you changed one class or one JAR, it does not follow that only it needs to be recompiled and you're done. No, it's all much more complicated. You have to track and remember every optimization the compiler has performed.
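
    To make the inlining point concrete, here is a tiny made-up example; the classes are purely illustrative and do not come from any real codebase.

```java
// Greeting.java
final class Greeting {
    // A small static method: a typical inlining candidate for an optimizing AOT compiler.
    static String text() { return "Hello"; }
}

// Main.java
class Main {
    public static void main(String[] args) {
        // An AOT optimizer may inline Greeting.text() here, baking the constant
        // "Hello" straight into Main's machine code.
        System.out.println(Greeting.text());
    }
}
// If Greeting.text() is later changed to return "Hi", recompiling only Greeting
// is not enough: the old body is still inlined inside Main's compiled code,
// so smart recompilation has to detect that and recompile Main as well.
```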


    Nikita: Essentially we do the same thing a JIT does when it decides to deoptimize, only in the AOT compiler, and the decision is not about deoptimization but about recompilation.


    Oleg: About the speed of smart recompilation. Say I take Hello, World, compile it, then change two letters in the word Hello...


    Nikita: It compiles quickly.


    Oleg: Meaning, less than a minute?


    Nikita: Seconds.


    Ivan: But it still depends on whether we include platform classes in the executable.


    Oleg: And can it be done without them?


    Nikita: Yes, by default our platform is split into several DLLs. We implemented Jigsaw at the very beginning :-) That is, we split the Java SE classes into components long ago, back in the '90s.


    Ivan: The point is that our runtime plus the platform classes are all precompiled by us and, yes, split into DLLs. When you run a JET-compiled application, the runtime and the whole platform come as those DLLs. So in the end it looks like this: you take "Hello, world", compile it, and you are really compiling just one class. That takes a few seconds.


    Nikita: Four seconds if you link "global", a couple of seconds if you don't. "Global" is when, at link time, all the platform classes compiled to native code go into one big file.


    Oleg: Can I do some hot reload?


    Nikita: No.


    Oleg: No? That's sad. But surely you could generate a single DLL, relink it, and then...


    Nikita: Since we have a JIT (yes, by the way, we have a JIT too!), you can of course load pieces of code, unload them, and load them back. For example, all the code that goes through our JIT, say in OSGi, can be reloaded if you want. But the hot reload that HotSpot has, where you sit in the debugger and change code on the fly, we don't have. It can't be done without losing performance.


    Oleg: At the development stage, performance is not so important.


    Nikita: At the development stage you use HotSpot, and you don't need anything else. We are a Java-compliant solution: if you develop on HotSpot and use hot reload while debugging, that's fine. You debug, then compile with JET, and everything works exactly as it did on HotSpot. That's how it must be, and usually is. If not, you write to support and we figure it out.


    Oleg: And what about debugging in JET? JVM TI? How much of that is supported?


    Ivan: One of the core values of using JET is protection: the customer's code is not exposed to anyone, because everything is compiled to native code. JVM TI is somewhat at odds with that. If we supported it, any sufficiently skilled developer who knows how JVM TI works could quickly get access to whatever they wanted. So we do not support JVM TI at the moment.


    Nikita: It is an optional part of the specification. Platform implementers may support it or not.


    Ivan: There are many reasons. It is optional and it undermines protection, which means users would not thank us for it. And internally it is very HotSpot-specific. Not long ago our people implemented JVM TI as a pilot project; they got to a certain stage, but kept running into the fact that it is heavily tailored to HotSpot. In principle it is possible, but what business problem would it solve?


    Nikita: If something works on HotSpot but does not work on JET, that is not your problem. That is our problem.


    Oleg: Got it. And do you have any extra features that HotSpot doesn't have but you do, ones that need hands-on control? Something I would want to poke at and sort out directly.


    Nikita: Exactly one. We have an official platform extension called Windows Services: you can compile a Java application into a real Windows service that is managed through the standard Windows service tooling and so on. To build such an application you have to pull in a JAR of ours.


    Oleg: This is not the greatest problem.


    Nikita: The interface for these services is very simple. And for debugging you run your application not as a Windows service but through main. Whether anyone needs some kind of service-specific debugging, I don't know.
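
    To give a feel for what such a service looks like, here is a minimal sketch. The structure and method names below are assumptions made for illustration; the actual base class and callbacks come from the JAR shipped with Excelsior JET.

```java
// Hypothetical shape of a JET-compiled Windows service (names are illustrative).
public class MyService /* would extend the service base class from the JET JAR */ {

    // Called when the service is started by the Windows service control manager.
    public boolean init() {
        // open sockets, read configuration, etc.
        return true;
    }

    public void run() {
        // the service's main work loop
    }

    public void shutdown() {
        // release resources when the service is stopped
    }

    // Plain main() entry point: lets you run and debug the same logic as an
    // ordinary console application, as Nikita suggests above.
    public static void main(String[] args) {
        MyService service = new MyService();
        if (service.init()) {
            service.run();
        }
        service.shutdown();
    }
}
```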



    Oleg: Suppose a new developer who previously worked with HotSpot wants to build something with JET. Do they need to learn anything, understand anything special about life or about JET?


    Ivan: They need to copy seven lines into pom.xml if they use Maven, then run jet:build, and if JET is installed on the machine, the Java application gets compiled into an executable. In theory you don't do anything special: you just take it, build it, and that's it.


    Nikita: Or, if you know the command line your application is launched with, you paste that command line into our GUI and it figures things out. You give the build command, you get the executable, done.


    Ivan: It's very simple, you don't have to invent anything new. Compare that with how HotSpot's AOT works; you yourself showed in your talk that you need to produce a list of all the methods to put into the file, bind it, transform it. We need none of that. You just take your HotSpot launch string, paste it into the GUI or add a small piece to pom.xml, and, hooray, after a while (it is AOT compilation, after all) you get an exe file you can run.


    Oleg: Do I need to learn how to work with your GC?


    Nikita: Yes, we have our own GC, and it is tuned differently from HotSpot's. We expose very few public knobs.


    Oleg: Is there a "make it good" knob, or hasn't one been made?


    Ivan: There are such knobs. There is a "set Xmx" knob, a "set the number of worker threads" knob... There are many knobs, but why would you need to know about them? If something unexpected happens to you, write to us.


    Of course, there is a lot you can configure in our GC. You can tune the young generation, you can tune how often the GC kicks in. All of that is tunable, but these are not the usual options. We understand that people know -Xmx and will specify it, so we parse it. A few more of the common options work with JET, but in general everything is different.


    Nikita: We also have a public option that lets you set how much of your application's time you allow the GC to consume.


    Oleg: As a percentage?


    Nikita: In tenths of a percent. We realized that whole percentage points are too coarse a unit.


    Ivan: If you are spending whole percents of your time on GC, something is wrong.


    Oleg: But all those enterprise people who do everything during office hours like to open a GC log printout with a chart and meditate over it. Can one meditate over yours?


    Nikita: We have special people inside the company who meditate.


    Ivan: We have our own log format, so people are unlikely to make much sense of it. Although, who knows? If they stare at it long enough they might figure it out; everything is written in there. But most likely it is better to send it to us, and we will do the meditating.


    Oleg: Naturally, you will do that for money, whereas you can stare at it yourself for free.


    Nikita: If you are our customer, you have support, and of course we do this as part of that support.


    Ivan: Though if you have some obvious problem, we may tell you even without a support contract.


    Oleg: And if it is some kind of bug?


    Nikita: Bug reports we accept from everyone, always. It is never "we won't fix the bug until you buy". If it is a bug, we fix it. In general, users love our support. They usually say it is of very high quality and that they have never seen anything like it anywhere else. Perhaps that is because we handle support ourselves, taking turns in rotation.


    Oleg: Who exactly?


    Nikita: Developers, JVM-engineers.


    Oleg: How often?


    Nikita: The rotation varies. Usually each of us does a two-week shift. But if you are committed to delivering some big feature within a certain number of days, you get immunity from support duty for that time so you can focus on the task.


    Ivan: In theory everyone takes their turn. But sometimes someone heroically takes a double dose and does support for a month or more instead of two weeks. Personally I like doing support, but if you do it too long you start to forget what you actually do in life, because all you do is answer emails, and you still want to hack on the JVM. So after a while you have to go back.


    Oleg: Do you have a hierarchy? How many levels of management are there, 20 or more?


    Nikita: Come on, there are only 10 of us on the team.


    Ivan: 15, counting the students.


    Oleg: What I mean is, are the bosses hands-on involved, or do they just watch from above?


    Nikita: As for bosses: of course there is one person in charge, and then there are a number of local leads.


    Oleg: Each person has his own area?


    Nikita: A lead is whoever takes on a big task and drives it. That also rotates: once you may take a big task and lead it, and next time you are the one being led.


    Ivan: In general we don't have much of a hierarchy, just one level of management. And as for watching from above, absolutely not. Sometimes our boss heroically takes over support duty when a release is near.


    Nikita: The boss is one person; his name is Vitaly Mikheev.


    Oleg: Can one see him somewhere? At conferences, perhaps?


    Nikita: Actually, my conference speaking began when the St. Petersburg Java Day came to Novosibirsk, organized by Belokrylov from Oracle, who is now at BellSoft. He asked whether we would like to speak, and back then Vitaly and I presented together. Later I suggested we keep speaking together, but he decided he no longer wanted to.


    Oleg: Which talk was that?


    Nikita: "The Story of a JVM in Pictures".


    Ivan: I remember that talk; I was either still an intern or had just stopped being one. And I still think it is one of the best talks I have ever seen.


    Nikita: Maybe it was the "premiere effect": when you are on stage for the first time in your life, you radiate a lot of energy.


    Oleg: What did you talk about?


    Nikita: The two of us told the whole story of JET, from beginning to end, in 20 minutes.


    Oleg: Two speakers and only 20 minutes? Usually with several speakers a talk only gets longer.


    Nikita: We covered all the key topics very cheerfully and briskly.


    Oleg: Vanya, did it influence your decision about what to do next, whether to work at the company?


    Ivan: It may well have!


    Oleg: People often ask why go to conferences and talks, why listen to anything at all, if you can just google it.


    Nikita: In my opinion, attending conferences is of course very useful. Live contact, direct participation, is not at all the same as watching on YouTube. It matters that you participate directly rather than virtually; you get in touch with the original source. The difference is about the same as between attending a live concert and listening to a recording. Probably even bigger, because how often can you talk to your favorite performer after a concert? At a conference you can find the speaker you need and ask them everything you want, no problem.


    Ivan: By the way, the decision to "stay at the company" is a separate story. We have a rather unusual system of recruiting staff and interns. We take interns in their second or third year, usually from the mechanics-and-mathematics or physics departments. The interns get immersed very deeply in the subject; mentors help them understand the various VM mechanisms, implementation details and so on, and that is worth a lot.


    After a while they start getting real assignments, writing real production code. You make changes to the JVM, go through reviews, tests, benchmarks, check that nothing has regressed, and commit for the first time. After that you focus on your thesis. A thesis is usually also a cool piece of the JVM, something experimental, research-like; that is typically done by the students. Then you may productize it, or you may not. I have never seen anywhere else where so much time is invested in interns, and I personally appreciate it a lot, because I remember how much time was invested in me. The output is a JVM engineer. Where else is there such a factory for producing JVM engineers?


    Oleg: And aren't you afraid of information leaking through the interns, since they then describe it openly in their theses?


    Nikita: That is not a problem: what we fear is a leak abroad, and hardly anyone there will read a thesis in Russian. It's a kind of protection by obfuscation :-)


    Ivan: I supervised a student's thesis defense this year, and there was the problem that we didn't want to put everything into the thesis. We did disclose something from a completely closed topic, and our reviewer, for example, asked why we weren't describing certain things. I told him we couldn't talk about that, it is very much a secret; I can whisper it in your ear, but it won't go any further. So we are still a little wary of it, but in general you can find a lot of interesting things in those theses.


    Oleg: And how does one find the theses done at Excelsior?


    Nikita: You go to the dean's office and ask to read such-and-such a thesis.


    Ivan: We have a list of successfully defended theses on our site, but only the titles, without links. And we have no unsuccessfully defended ones.


    Oleg: So they either die or get defended.


    Ivan: Exactly! The average grade of our graduating students is 5.0; there are some who simply never make it to the thesis defense.


    Oleg: In this training factory for JVM engineers, what are the stages of becoming a Jedi? When do they hand you a lightsaber, and when are you allowed to start swinging it?


    Nikita: Pretty fast. Young people become Jedi quickly these days; I am proud of them.


    Ivan: By the way, Nikita was my first mentor when I was an intern. About the internship: first you pass selection, you solve problems and come to one or several interviews. If all goes well you become an intern. At first you read scientific papers, either on topics most likely related to your future thesis or just on JVM-adjacent subjects, in English, to absorb the context. You read them, write a survey, explain what is going on there. These are reviewed very strictly; some academics would envy that level of proofreading and survey preparation. The result is a full-blown article in Russian. After that it is time to write code, and you are gradually introduced to how it all works.


    Nikita: And that’s where science ends.


    Ivan: Not necessarily!


    Nikita: It is a little disappointing; in the early 2000s we published papers that were accepted into ACM journals.


    Ivan: Well, we still do that kind of thing, what's the problem?


    Nikita: When was our last paper in an ACM journal?


    Ivan: With ACM we simply haven't tried lately! Now we blog instead; it's the same thing, except people actually read it. It is a similar kind of activity.


    Returning to the Jedi topic: after that you do something in production for the first time, under close supervision. It should be a small task, unrelated to your future thesis.


    Oleg: For example, writing a comment.


    Ivan: No, real functionality. The student must make their first real contribution, one that stays in the product. Then they start on the thesis, some research project that later turns into the thesis itself. Then, ideally, they should productize that research; that is the fourth stage. It does not always happen, because not every piece of research needs to be, or can be, productized right away. In any case, after these stages you get a new JVM engineer.


    Oleg: And which stages are the hardest? What do people spend the most time on? Is it math, or understanding the spec, or some other deep thing? What does the body of knowledge look like?


    Ivan: I would say the hard part is absorbing that much context. What distinguishes our internship from most others is that you can't just charge in and knock out a production task; first you need to understand how things work, see at least part of the big picture, take a lot of factors into account, and generally understand a great deal about the JVM world. I remember studying, and of course I did not have that understanding at first. I remember the flashes of insight: "So that's how it works!" And little by little everything came together into a whole picture.


    Oleg: Is that picture JET-specific?


    Nikita: No, it is JVM-specific.


    Ivan: Some parts do explain why JET is JET. I remember once asking one of the mentors why we have two compilers, the optimizing one and the baseline one. I did not really understand why, or what for. And that was one of those moments when I got a flash of understanding of how JET actually works.


    Oleg: Why two compilers?


    Ivan: One is the optimizing compiler, the powerful one. The second optimizes less but works faster and more reliably.


    Nikita: The optimizer may break someday, but the baseline must never break.


    Ivan: Besides, we also use the baseline compiler as the JIT. Yes, we have a JIT too; it is needed for correctness. But we don't run the optimizing compiler there, we use the baseline instead. And there is a side benefit: if something suddenly goes wrong in the optimizer, we can fall back to the baseline at compile time, and it will definitely work.


    Oleg: Do people compile UI applications with you? Startup speed matters there.


    Nikita: Of course. We have long positioned ourselves as a desktop solution. Most of our users are still on Windows.


    Ivan: Though from what I hear, the balance is shifting towards other platforms. For example, those same desktop users are moving to the Mac.


    Oleg: Can I compile for Android?


    Nikita: We have studied the question. You can compile: there are Linux environments for Android, and we support Linux, so you can compile for that Linux on Android and launch, say, a Swing application on a phone or tablet. There were successful experiments.


    Oleg: And without Linux?


    Nikita: Compile dex files? No.


    Oleg: Android has the native SDK. You could compile a shared library and pull it in.


    Nikita: There were unsuccessful experiments; something didn't take off, unfortunately. On Android the .so files behave somewhat differently than on Linux, and there was no time to figure out exactly what the differences were. But there is an idea to make it possible to use real Java on Android, not Android's flavor: compile it into an .so, and then that piece of Java could interface with Dalvik like a normal native dynamic library. You could then push 90 percent of your application, all the business logic for example, into that library.


    Oleg: And then you could also compile for iOS and get a universal platform that runs everything?


    Nikita: Yes. Start with Android, then go for iOS. But that requires a big investment, and so far we are not going there, unfortunately.


    Oleg: Given that you have 15 people.


    Nikita: We have been circling around iOS for about ten years. It needs a large investment, and we cannot bring ourselves to commit to it.


    Ivan: The problem of limited resources, unfortunately.


    Oleg: Tell us how the team is organized. Anything distinctive?


    Ivan: It is worth saying that there are two strong camps: compiler developers and runtime developers.


    Oleg: But your code is compiled to native. Aren't the compiler and the runtime the same thing then?


    Nikita: The runtime is a full-fledged JVM, and there is a lot in a JVM: threads, reflection, synchronization, garbage collection, memory management. These are hefty pieces and they are very complex, in places far less trivial than the compiler. Not every compiler engineer on our team could implement, say, synchronization well. And if you get something wrong somewhere in the GC, everything is bad, because ordinary debugging doesn't work there. No debugger helps; you sit and meditate at night, debugging in your dreams what might have happened.


    Oleg: I saw that Shipilev, for example, wrote all kinds of visualizations of how Shenandoah works. He had that kind of debugging, where colors show how the blocks move. And with native code you can surely attach GDB.


    Ivan: Of course, and we do. Runtime people and compiler people have slightly different approaches to development, debugging and so on. For example, since the compilers are written in managed languages, Java and Scala, we can develop and (most of the time) debug them right from IDEA, which is very convenient and pleasant. In the runtime you have only two allies: GDB and debug printing.


    Nikita: Though, by the way, you can write unit tests for the runtime, and those you can also debug in IDEA!


    Ivan: Yeah. But in general the mindset is a bit different for compiler work and runtime work. I would say that in the runtime the main difficulty is how much happens at the same time; you have to understand and even feel it. It is a strange sense of the timing and life of the whole JVM. The compiler is different, but also very interesting. We discussed this with Pasha Pavlov recently, and he put it very well: in the compiler the difficulties are of another kind, arising from the "combinatorial explosion of possible states and situations due to the (often completely unobvious) hidden mathematical models."


    In general these are two very different worlds, but together they make up a single whole: the JVM itself.


    Oleg: Which side of the barricades are you on?


    Ivan: I'm a runtime person.


    Nikita: I'm a compiler person. But lately I've been doing product features. The Spring Boot support I mentioned, for example, means you have to touch almost every JET component a little to make it work.


    Ivan: When necessary we dive into other components and do something there. Besides, there are people who are genuine hybrids, half compiler people and half runtime people. For example, we are currently building a new JIT, and that is full-scale work on both sides of the barricade: it is compiler work, and it also has to be launched from the runtime.


    Oleg: So the JIT is both compiler and runtime work at the same time?


    Ivan: Yes, you can say that.


    Nikita: So there is some specialization, but you often have to work at the boundary and get your head around everything on both sides.


    Oleg: Tell us what an employee's day looks like at your company. I could describe a web developer's daily routine, for example, but it is terrifying, so let's skip that and go straight to you.


    Ivan: Honestly, there is nothing super special. We have goals and plans for what we can do. Sometimes the plan is spelled out precisely; sometimes you just have some task to do. There is an issue tracker with bugs that need fixing. You come in, see which of your tasks is the hottest, and start on it.


    As for what is unusual and distinguishes us from others: suppose you have written some code, in the runtime, in the compiler, or elsewhere. Normally a programmer then runs it and checks whether it works. But after that we also have to run the check-in test, which by itself takes 1.5 to 2 hours.


    Nikita: Our sources used to live in Visual SourceSafe, where a commit is called a check-in. Before you could check anything into VSS you had to pass the check-in test. We left VSS long ago, but it is still called the check-in test.


    Ivan: That is a run where the whole of JET, the entire JVM with your changes, gets built and the base tests are run. It takes 1.5 to 2 hours. Only after that do you have a JET build you can try out to see whether your change works. The bug showed up in some specific application, so you probably need to compile that application, go through the reproduction scenario, and see whether it's fixed. How many attempts like that do you get per day? Not many. So we try to write high-quality code on the first attempt.


    Oleg: Is JET written in Scala?


    Ivan: One of the compilers is written in Scala.


    The JET optimizing compiler itself is written in Scala and is compiled by JET itself, that is, it is built with the previous version of JET. The result is an executable that is then used. So imagine: first you take the Scala sources, run scalac over them, you get bytecode...


    Nikita: Most of the check-in test time goes into compiling the Java platform itself into machine code, because all of it has to be compiled, it is huge, and it takes about an hour and a half.


    Oleg: Couldn't you somehow break it into units and spread it across a build cluster, the way people do with C++?


    Nikita: Good idea, and we think about it from time to time. I have ideas on how to do it, but we never get around to parallelizing our favorite compilation.


    Ivan: As for what is unusual in a JVM engineer's day: it happens that you debug one bug for a very long time because it is sporadic. My record is a bug that showed up once a year. It struck, I more or less understood what had happened, checked, it didn't reproduce, I committed a fix. A year passed and it happened again. I called it the one-year bug. That happens. I hope I have fixed it by now, but a year hasn't passed yet, so I don't know :-) Such bugs are very unpleasant. I had a fairly large GC-related project where some unaccounted-for cases or mistakes could cause a huge number of sporadic bugs. You learn that something went wrong only after the fact, when you are looking at the ruins. You have no reproduction scenario, and no amount of stress testing helps. Nothing helps at all.


    Oleg: So what do you do, call Kashpirovsky?


    Ivan: In such cases I leave a trap. I work out from the current crash log exactly what happened, and arrange that if it ever happens again, a huge amount of information about the problem is printed into a separate file. Then I tell the QA engineer: if it suddenly happens again, look for this file, it is very important. It will hold new clues about what exactly went wrong.


    Oleg: Do these traps have any effect on performance?


    Ivan: I build them so that they don't. The impact is minimal; they only start costing performance once the problem has already happened. At least, that's how it seems to work out. GC problems come in very different flavors. It's good if you can see that you have some broken object. But the problem can be far subtler: the GC did something wrong, collected an object it shouldn't have or failed to collect one, and as a result the application crashes after two hours of running. What was it? In which method?


    Nikita: You got one little bit wrong in one place, and when it will strike next is anyone's guess.


    Ivan: By then you have no logs, no heap dump, nothing will help. No information at all. The only information you have is that it blew up once. After that all you can do is leave traps. That's about it.


    Nikita: Or convince yourself that a particle probably flew in from space and flipped one bit of memory.


    Oleg: And if it shows up consistently, can something be done?


    Ivan: Then it is some very simple bug that we will fix quickly.


    Oleg: What about bugs in JIT?


    Nikita: Bugs in the JIT tend to be simple. You look at the assert in the stack trace and it becomes clear what happened. Fixed right away.


    Ivan: It is worth saying that we have very good testing. We run the JCK; if something passes the JCK, that alone means it is written well. On top of that we have a large number of real applications. In testing we use UI tests that click through JET-compiled GUI applications, plus some quite tricky scenarios. Recently we set up a test that takes a specific project from GitHub and walks its history: check out a commit, build it with JET, check out the next commit, rebuild with JET, and so on. Our QA department works very well; everything is well covered by testing.


    Oleg: Does the tester have to understand what they are testing? Could it even be that the tester's qualifications need to be higher than the developer's?


    Ivan: Our QA lead is a runtime engineer. He does QA now, but he also writes runtime code. I think that says a lot. Yes, he knows what is going on and how things should be, how to test, and he understands the specifics.


    Oleg: How many years does it take to become a QA engineer on your project? Roughly speaking, in web development QA is unfortunately often the most monotonous job, where people click buttons and check that they turn blue. You presumably have something different.


    Ivan: The situation is the same as with support. We can't hire someone off the street to sit on support, because they wouldn't be qualified enough, so we each do support duty from time to time. Likewise, the QA lead doesn't write tests all the time, he directs the work. And although he doesn't manage JVM engineers, he is one himself, which says a lot.


    Oleg: What does testing look like? Is there some prepared reference output that QA engineers simply compare against, and if it differs, that's a problem?


    Nikita: We have the JCK, the test suite from Oracle. Oracle supplies it to us, which is good, because there is a test written for almost every letter of the specification. Beyond the JCK we usually take real applications, compile them, poke around in them, and then automate that poking.


    Oleg: What do you need to do to get the JCK?


    Nikita: Pay Oracle money. And besides that, you have to be making a JVM that differs in some substantial way from Oracle's.


    Ivan: There are different paths. For example, if you contribute to OpenJDK, they will give you the JCK.


    Nikita: If you have earned some kind of special badge, showing you are not just anyone but have already done something worthwhile, then yes, they will give it to you. To become a licensee you have to prove that your product differs from Oracle's in some non-trivial way, for example that it runs on a platform Oracle does not support.


    Oleg: And if it differs only trivially?


    Nikita: Then they have the right not to give it to you. We managed to convince them that we offer something beyond what already exists; it is called "value add". So they gave us the JCK under a commercial Java license, the paid kind. They handed us the JCK and said: if you don't pass it within three months, close up shop.


    Oleg: That's harsh! Why don't all the people who fork OpenJDK get together and write their own alternative then?


    Ivan: That is a rather expensive undertaking. Imagine you need something like 70 thousand tests. We have a secret test suite of our own with tests that are not in the JCK; maybe it would be nice if they were, but we don't publish them. Because maybe JET wouldn't pass them, and HotSpot wouldn't pass them either. There are such thin spots in the specification.


    Oleg: Will you give a talk about those thin spots in the specification then? That would be interesting.


    Nikita: Misha Bykov from Oracle and I gave a joint talk on supporting a JVM, where I covered some of those thin spots in the specification. It was "life stories of JVM support" at the Joker conference.


    Oleg: Is the recording still available?


    Nikita: Yes, of course. Speaking of the JVM specification, here is a recent real case: one of our JVM engineers noticed that some code was not written according to the specification and decided to rewrite it, then sent it to me for review. I read it and realized that the bug is in the specification.


    Oleg: Maybe the bug was in the JCK?


    Nikita: No, in the specification. I sent a description of it to the appropriate mailing list, and Alex Buckley, the person currently leading the Java and JVM specifications, fixed this bug in the JVM Specification for Java 12.


    Ivan: There are bugs in the JCK as well.


    Nikita: When we first started running the JCK, we began sending incorrect tests to Sun in batches. We proved with enormous write-ups that a test was incorrect, and they had to exclude those incorrect tests.


    Oleg: Proving a test incorrect is almost harder than writing it?


    Nikita: Of course it is harder. Very hard work. There were incorrect CORBA tests, for example. You sit over a complicated multithreaded test that breaks somewhere, and you construct and explain different scenarios. I wrote nearly six pages about why one CORBA test was incorrect. The thing is, the multithreading picture may not be the same as on HotSpot, and you have to prove that a situation the test does not expect is actually possible.


    Oleg: Could that be converted into a doctorate? Six write-ups like that, one doctoral thesis?


    Nikita: I sat for two weeks and wrote it all up. CORBA was a mandatory part back then, though now it has finally been cut out. Swing is still required. The JCK has automated Swing tests, but there are also several hundred tests where you have to click through everything by hand. After each release one designated person sits down and pokes through all those forms on every platform.


    Ivan: Those are called the JCK interactive tests.


    Nikita: To release the product, we must be able to show at any moment proof that we pass the JCK, and for that you have to pass those interactive tests too.


    Oleg: Do they film that person on camera, or what?


    Nikita: No, he checks that everything works, and then the result is cryptographically sealed, so that afterwards it serves as proof that we really passed these tests.


    Oleg: One could just write a script that does it.


    Ivan: That's just it, you can't. There are tests that robots do click through, and those are graphical too.


    Oleg: So there are tests that are not automated at all?


    Nikita: It is written right there that they must not be run by a robot.


    Oleg: So it all relies on honesty. I wonder how many people besides you are that honest. I picture this: a hundred years pass, everything is done by artificial intelligence, and the artificial intelligence keeps one human around to run the JCK, because everything is written in Java.


    I have a feeling it's time to wrap up. It would be nice to come up with some closing words and a wish for the readers.


    Nikita: Do interesting things, it's a thrill. It doesn't matter what you do; if you love it, that's great. My motivation for working on this project is that over all these years the interesting problems somehow never run out. It's a constant challenge, challenge upon challenge. I would advise everyone to look for work with challenges in it, because life is simply more fun that way.


    Oleg: Is that somehow tied to the work being about virtual machines, or about systems programming in general? Why the challenge?


    Nikita: Virtual machines are constant research. You probably wouldn't work on some C compiler for twenty years, because the new knowledge there would eventually run out. With Java, for some reason, it doesn't! It is still not clear how to do GC right. How to do the compiler right is not clear to anyone either, neither the HotSpot folks nor us.


    Ivan: It's not just research, it's the cutting edge of compiler and runtime science. Right now we have people reworking the compiler backend, and for that they are going through the latest research papers (which, obviously, need adapting and extending for a real JVM). Usually you read a paper and, so what? Nothing. The paper describes a prototype, someone measured something on it, and that's it. Here you get the chance to implement it in a real JVM and see how it behaves in practice. That is very cool; there are few places in the world where you get that opportunity.


    There is a lot of cool stuff happening in Java right now: the faster release cadence, the megaprojects now taking shape. Loom and Metropolis, for instance, are very solid, very cool projects in the Java ecosystem. Even leaving aside any particular JVM, the point is simply that progress keeps moving forward, that's great, and you should watch it, understand it and admire it. Watch a talk about Loom and see how it behaves in practice, see that such prototypes actually work. It gives hope for the future. So, in closing, I urge everyone not only to follow and understand the new technologies, but to take part in developing them. All of that is really very cool.


    A minute of advertising. If you have read this far, Java is clearly important to you, so you should know that the JPoint Java conference takes place on April 5-6 in Moscow. Nikita Lipsky will be speaking there, along with other well-known Java speakers (Simon Ritter, Rafael Winterhalter). You can also submit a talk and try yourself as a speaker; the Call for Papers is open until January 31. Up-to-date information about the program and tickets is available on the official conference website. To judge the quality of the talks, you can watch the video archive from the previous conference on YouTube. See you at JPoint!
