Performance and runtimes at JPoint 2018

    We all come to conferences with certain expectations. Usually we go to a fairly specific set of talks on fairly specific topics, and that set differs from platform to platform. Here's what Java folks care about right now:


    • Performance
    • Virtual machines and runtime features
    • JDK 9/10/...
    • Frameworks
    • Architecture
    • Enterprise
    • Big Data and Machine Learning
    • Databases
    • JVM languages (including Kotlin)
    • DevOps
    • Various small topics

    The conference program is put together so that each of these topics gets at least one good talk. JPoint runs for two days and features about forty talks, so all the main areas will be covered one way or another.


    In this short post I'll go over the talks I liked, as someone who mostly attends talks on performance and runtimes.


    We won't touch scaling, clusters and all that; suffice it to say that it's there (Christopher Batey from Lightbend will talk about Akka, Victor Gamov from Confluent will talk about Kafka, and so on).



    Disclaimer: this article is based on my impressions of the program published on the official website. Everything below is my own thinking, not quotes from the talks. The text may (and almost certainly does) contain wrong assumptions and inaccuracies.

    Performance


    Remember the joke article "Java with inline assembly"? In the comments, apangin promised to give a talk about VMStructs. No sooner said than done, here it is: "VMStructs: why an application needs to know about JVM internals." The talk is devoted to VMStructs, a special API of the HotSpot virtual machine that lets you learn about the JVM's internal structures, including TLABs, the code cache, the constant pool, Method, Symbol and so on. Despite its "hacky" nature, this API can be useful in a regular program too. Andrey will show examples of how VMStructs helps in building real tools (the ones they use at Odnoklassniki).
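
    The slides and recording aren't out yet, but the general mechanism behind the talk's subject is public: libjvm exports a self-describing table of its internal structures (defined in HotSpot's vmStructs.cpp), and you can read it from Java with a bit of native-access plumbing. Below is a minimal sketch of that idea using JNA; the exported symbol names are real, but the class, the way libjvm is located, and the overall approach are my own assumptions, not code from the talk.

    import com.sun.jna.NativeLibrary;
    import com.sun.jna.Pointer;

    // Hypothetical sketch: dump the VMStructs table exported by libjvm.
    // Run inside a HotSpot-based JVM; JNA may need the full path to libjvm
    // (e.g. <java.home>/lib/server/libjvm.so) instead of just "jvm".
    public class VMStructsDump {
        public static void main(String[] args) {
            NativeLibrary jvm = NativeLibrary.getInstance("jvm");

            // gHotSpotVMStructs points to an array of entries; the *Offset symbols
            // describe the layout of a single entry, so the table is self-describing.
            Pointer entries   = jvm.getGlobalVariableAddress("gHotSpotVMStructs").getPointer(0);
            long typeNameOff  = jvm.getGlobalVariableAddress("gHotSpotVMStructEntryTypeNameOffset").getLong(0);
            long fieldNameOff = jvm.getGlobalVariableAddress("gHotSpotVMStructEntryFieldNameOffset").getLong(0);
            long offsetOff    = jvm.getGlobalVariableAddress("gHotSpotVMStructEntryOffsetOffset").getLong(0);
            long stride       = jvm.getGlobalVariableAddress("gHotSpotVMStructEntryArrayStride").getLong(0);

            // The array is terminated by an entry whose typeName is NULL.
            for (long i = 0; ; i++) {
                Pointer entry = entries.share(i * stride);
                Pointer typeName = entry.getPointer(typeNameOff);
                if (typeName == null) break;
                Pointer fieldName = entry.getPointer(fieldNameOff);
                System.out.printf("%s::%s at offset %d%n",
                        typeName.getString(0),
                        fieldName == null ? "?" : fieldName.getString(0),
                        entry.getLong(offsetOff));
            }
        }
    }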


    The second talk, "Hardware Transactional Memory in Java," is by Nikita Koval, a research engineer at the dxLab research group at Devexperts. If you were at JBreak earlier this month, you may notice that he talked about completely different things there (writing a fast multi-threaded hash table using the power of modern multi-core architectures and special algorithms). In the same report, we will talk about transactional memory, which gradually appears in modern processors, but which is still unclear how to use it for an ordinary person. Nikita should talk about the ways to use it, what optimizations are already in OpenJDK, and how to perform transactions directly from Java-code.


    And finally, "Enterprise Without Brakes". Where would we be without the bloody enterprise! Sergey Tsypanov deals with performance issues at Luxoft, in the Deutsche Bank domain. The talk will examine patterns that kill the performance of your applications: simple enough to be caught in code review, yet subtle enough that the IDE doesn't underline them in red. All the examples are based on code from applications running in production.
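
    The talk's own examples aren't published, but to give a flavor of the genre, here is a hypothetical pattern of exactly that kind (my example, not Sergey's): perfectly legal code that no IDE will underline, yet a reviewer armed with a profiler will frown at it on a hot path.

    import java.util.Map;

    // Hypothetical illustration: two hash lookups where one would do.
    public class UserDirectory {
        public static class User {
            private final String name;
            public User(String name) { this.name = name; }
            public String getName() { return name; }
        }

        // Reads fine, compiles cleanly, but does containsKey() followed by get():
        // the key is hashed and the bucket walked twice on every call.
        public static String displayName(Map<String, User> users, String id) {
            if (users.containsKey(id)) {
                return users.get(id).getName();
            }
            return "unknown";
        }

        // Same behavior (assuming the map holds no null values), a single lookup.
        public static String displayNameFast(Map<String, User> users, String id) {
            User u = users.get(id);
            return u != null ? u.getName() : "unknown";
        }
    }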


    Profiling


    Three talks on profiling caught my eye. The first is Sasha Goldstein's "Linux container performance tools for JVM applications." Sasha is a serial producer of hardcore performance content. Last year at JPoint he gave an excellent talk about using the Berkeley Packet Filter with the JVM (I strongly recommend watching the recording on YouTube), and it was only a matter of time before he got around to a detailed look at containerization. The world is moving to clouds and Docker, which in turn brings us plenty of new problems. As you may have noticed, most low-level debugging and profiling tools, when applied to containers, acquire all sorts of quirks and rough edges. Sasha will walk through the main scenarios (CPU utilization, I/O responsiveness, access to shared databases, etc.) through the prism of modern tools on the GNU/Linux platform, including BCC and perf.


    "Profiling down to microseconds and processor instructions" is the second profiling talk, given by Sergey Melnikov from Raiffeisenbank. Interestingly, before working on low-latency Java code he worked at Intel as a compiler performance engineer for C/C++/FORTRAN. This talk will have perf in it too! :-) There will also be material on the hardware features of processors and on Intel Processor Trace, a technology that lets you take the next step in profiling accuracy and reconstruct the execution of a section of a program. There are few talks like this (you can find Andi Kleen's talk at Tracing Summit 2015, for example); they usually leave a sea of questions and aren't particularly practical where Java is concerned. Here, though, we have someone who has lived in both worlds (both Intel and Java at a bank), and you can also find him in the discussion area and ask him uncomfortable questions.


    The third report - "Universal profilers and where they live". It is made by Ivan Uglyansky, one of the developers of Excelsior JET (certified Java SE implementation based on optimizing AOT compilation), which deals with runtime: GC, class loading, multithreading support, profiling, etc. The essence of the report is that recently they needed to collect a profile of applications running on Excelsior JET. You need to do this on all supported systems and architectures, without recompiling the application, and even with acceptable performance. It turned out that the usual methods of profiling at the same time do not fit all these points, so I had to come up with something of my own. Ivan will tell you which profiling methods are suitable for AOT, what you can afford if you profile the code from within the JVM, and what you have to pay for the versatility of the profiler.


    Custom runtimes


    A runtime, in short, is the thing that takes your high-level code in a JVM language, turns it into something low-level (machine code, for example) and manages its execution. Usually it involves some kind of assembler, compiler, interpreter and virtual machine. The features of the runtime define the performance characteristics of your application-level workloads.


    The first thing that catches the eye in the program is Alibaba's talk about their JDK. Who hasn't dreamed of making their own JDK, with blackjack and coroutines? But everyone understands that this means hellish labor, pain and suffering. At Alibaba, though, they pulled it off. Here's what they have:


    • A mechanism for allocating objects in hidden regions without extra load on the GC;
    • Lightweight threads (coroutines) built right into the JVM, which they need for asynchronous programming;
    • The ability to profile applications live;
    • Various other nice little things.

    Yes, we (the general public on OpenJDK) will get Project Loom sooner or later. But there is a nuance: coroutines in Loom are secondary to the main goal, fibers. Fibers require delimited continuations, but it is not guaranteed that those will appear in the public API soon, or ever at all. At Alibaba, it seems, all of this has already been built in-house.


    As I understand it, this is not a talk in the "use our proprietary JDK" category, but rather a guide for people who are going to build similar features themselves, or cope with their absence in OpenJDK. For example, profiling tools depend on the areas being profiled and on the workloads; they will be different for every product. The speaker from Alibaba will talk not only about their tools, but also about the process of classifying workloads that steers the development of such tools in the right direction.


    Speaking of coroutines: they appeared in Kotlin starting with version 1.1 (in experimental status), and there will be a talk about them by Roman Elizarov from JetBrains. Roman will cover the evolution of approaches to asynchronous programming, their differences and similarities. Plus we'll hear the official position on why what Kotlin has now is better than the async/await everyone is used to.


    And we don't have to look far: Alibaba's JDK is not the only representative of unusual ecosystems. There is, of course, a talk about Azul Zing, and no fewer than two about OpenJ9 (one, two).


    All talks about the internals of Azul products carry a certain shade of sadness for me, because I have never managed to join the circle of the chosen who use their cool but very expensive solutions. So for me this fresh talk of theirs is mostly of theoretical value, as a source of information about technologies competing with our native OpenJDK. AOT is actively developing in OpenJDK right now: JDK 9 already shipped a built-in AOT compiler (64-bit Linux only), there is SubstrateVM, and it will only get better from here, up to the realization of the Metropolis project. Unfortunately, AOT in Java is not that simple: parts of the modern infrastructure are very unpleasant to bolt on (remember Nikita Lipsky's epic talk about the awkwardly designed OSGi?). Azul already has some kind of ready-made AOT solution, ReadyNow, built into their Zing, which tries to combine the best qualities of JIT and AOT, and that is what this talk will be about.


    As was rightly pointed out in the comments, the speaker deserves an introduction. In short, Douglas Hawkins is a lead developer at Azul who has been doing Java for 15 years across various fields: bioinformatics, finance, retail. The longer he lived in the Java world, the deeper he dug into the internals of the JVM, until at some point he simply went to Azul to work on Zing and became the lead developer of that very ReadyNow. In other words, this is a person who has been on both sides of the barricade, as an application developer and as a systems developer, and has a rather unique perspective as a result.


    OpenJ9, on the other hand, can be downloaded right now. Ever since IBM open-sourced its virtual machine under the Eclipse Foundation, there has been a lot of hype around it. The general public holds a certain set of notions about it: that it can replace HotSpot while easily reusing the class libraries from OpenJDK, that it should reduce memory usage, that some work can even be offloaded to the GPU... and that's about it. (By the way, the GPU part still looks like black magic; luckily, at the last Joker, Dmitry Alexandrov gave an excellent talk, "Java and the GPU: where are we now?". There is no video yet, but you can look at the slides.)


    First report, "The Eclipse OpenJ9 JVM: a deep dive!" says Tobi Ajila, an IBM J9 developer working on Valhalla and Panama , with a long track record like improvements to the interpreter, JVMTI, and lambdas. Apparently, there will be a description of some technical features of OpenJ9, thanks to which you can overclock your cloud solutions and other performance-critical things. The second report, "Deep dive into the Eclipse OpenJ9 GC technologies," is led by the OpenJ9 garbage collector architect, also from IBM, there will be a very pragmatic story about the four garbage collection policies, where to use them, and how it all works under the hood. I hope that after listening to these reports, the aura of magic around OpenJ9 will slightly decrease.


    Conclusion


    Over the two days you can attend 12 talks. Of these, 3 keynotes are common for everyone, so you have to make a choice 9 times. If you pick talks only from this list, you can fill 7 of those 9 slots. The remaining two are up to your taste (you do need to broaden your horizons with "general" topics too, right?). Some talks clash with each other (the hardest choice, at 13:45 on the first day, is between Sasha Goldstein on profiling containers, Nikita Koval on hardware transactional memory, and Roman Elizarov on Kotlin coroutines). The upshot is that, for someone interested in performance and runtimes, the program is put together well enough to be interesting from start to finish. See you at the conference!


    A reminder: there is less than a month left before JPoint 2018. Tickets can still be purchased on the official website.
