JVM TI: how to make a plugin for a virtual machine



    Would you like to add a useful feature to the JVM? In theory, any developer can contribute to OpenJDK; in practice, though, non-trivial changes to HotSpot from outsiders are not particularly welcome, and even with the current shortened release cycle it may take years before JDK users see your feature.

    Nevertheless, in some cases it is possible to extend the functionality of a virtual machine without touching its code at all. The JVM Tool Interface, a standard API for interacting with the JVM, makes this possible.

    In this article, I will show with concrete examples what can be done with it, describe what changed in Java 9 and 11, and honestly warn about the difficulties (spoiler: you will have to deal with C++).

    I also presented this material at JPoint. If you prefer video, you can watch the recording of the talk.

    Introduction


    The social network Odnoklassniki, where I work as a lead engineer, is written almost entirely in Java. But today I will talk about a different part of it, one that is not quite Java.

    As you know, the most common problem Java developers face is NullPointerException. Once, while on duty on the portal, I too ran into an NPE in production. The error was accompanied by a stack trace like this:



    Of course, a stack trace lets you pinpoint where the exception occurred, down to a specific line in the code. Only in this case that was little comfort, because on this line an NPE could arise in many places:



    It would be great if the JVM suggested where exactly this error was, for example, like this:
    java.lang.NullPointerException: Called 'getUsers()' method on null object

    But, unfortunately, the NPE message currently contains nothing of the kind, even though people have been asking for it for a long time, since at least Java 1.4: the corresponding bug is 16 years old. More bugs were periodically opened on the topic, but they were invariably closed as "Won't Fix":



    It is not like this everywhere, though. Volker Simonis from SAP described how the SAP JVM has had this feature for a long time, and how it has helped them more than once. Another SAP employee once again submitted the bug to OpenJDK and volunteered to implement a mechanism similar to the one in the SAP JVM. And, lo and behold, this time the bug was not closed: there is a chance the feature will make it into JDK 14.

    But when will JDK 14 come out, and when will we switch to it? What if you want to investigate the problem here and now?

    You can, of course, maintain your own fork of OpenJDK. The NPE reporting feature itself is not that complicated; we could well have implemented it. But then you get all the problems of maintaining your own build. It would be much better to implement the feature once and then simply attach it to any version of the JVM as a plugin. And this is actually possible! The JVM has a special API, originally designed for all kinds of debuggers and profilers: the JVM Tool Interface.

    Most importantly, this API is standard. It has a strict specification, and if you implement a feature according to it, you can be sure that it will keep working on new versions of the JVM.

    To use this interface, you need to write a small (or large, depending on your task) native program: it is usually written in C or C++. The standard JDK ships a header file, jdk/include/jvmti.h, which you must include.

    The program is compiled into a dynamic library and attached with the -agentpath parameter at JVM startup. It is important not to confuse it with another, similar parameter: -javaagent. In fact, Java agents are a special case of JVM TI agents. Throughout this text, the word "agent" means a native agent.

    Where to begin


    Let's see in practice how to write the simplest JVM TI agent, a kind of "hello world".

    #include <jvmti.h>
    #include <stdio.h>
    JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM* vm, char* options, void* reserved) {
        jvmtiEnv* jvmti;
        vm->GetEnv((void**) &jvmti, JVMTI_VERSION_1_0);
        char* vm_name = NULL;
        jvmti->GetSystemProperty("java.vm.name", &vm_name);
        printf("Agent loaded. JVM name = %s\n", vm_name);
        fflush(stdout);
        return 0;
    }
    

    On the first line, I include that same header file. Then comes the main function that every agent must implement: Agent_OnLoad(). The virtual machine itself calls it when the agent is loaded, passing a JavaVM* pointer.

    From it, you can obtain a pointer to the JVM TI environment: jvmtiEnv*. Through it, in turn, you call JVM TI functions. For example, GetSystemProperty() reads the value of a system property.

    If I now run this "hello world", passing the compiled dynamic library via -agentpath, the line printed by our agent appears in the console even before the Java program starts running:



    Enriching NPE reports


    Since hello world is not the most interesting example, let's get back to our exceptions. The full code of the agent that enriches NPE reports is on GitHub.

    Here is what Agent_OnLoad() looks like if I want to ask the virtual machine to notify us about all exceptions that occur:

    JNIEXPORT jint JNICALL Agent_OnLoad(JavaVM* vm, char* options, void* reserved) {
        jvmtiEnv* jvmti;
        vm->GetEnv((void**) &jvmti, JVMTI_VERSION_1_0);
        jvmtiCapabilities capabilities = {0};
        capabilities.can_generate_exception_events = 1;
        jvmti->AddCapabilities(&capabilities);
        jvmtiEventCallbacks callbacks = {0};
        callbacks.Exception = ExceptionCallback;
        jvmti->SetEventCallbacks(&callbacks, sizeof(callbacks));
        jvmti->SetEventNotificationMode(JVMTI_ENABLE, JVMTI_EVENT_EXCEPTION, NULL);
        return 0;
    }
    

    First, I request the corresponding JVM TI capability (can_generate_exception_events). We will talk about capabilities separately.

    The next step is subscribing to Exception events. Whenever the JVM throws an exception (no matter whether it is caught or not), our ExceptionCallback() function will be called.

    The final step is a call to SetEventNotificationMode() to enable delivery of the notifications.

    In ExceptionCallback, the JVM passes us everything we need to handle the exception.
    void JNICALL ExceptionCallback(jvmtiEnv* jvmti, JNIEnv* env, jthread thread,
                                   jmethodID method, jlocation location,
                                   jobject exception,
                                   jmethodID catch_method, jlocation catch_location) {
        jclass NullPointerException = env->FindClass("java/lang/NullPointerException");
        if (!env->IsInstanceOf(exception, NullPointerException)) {
            return;
        }
        jclass Throwable = env->FindClass("java/lang/Throwable");
        jfieldID detailMessage = env->GetFieldID(Throwable, "detailMessage", "Ljava/lang/String;");
        if (env->GetObjectField(exception, detailMessage) != NULL) {
            return;
        }
        char buf[32];
        sprintf(buf, "at location %d", (int) location);
        env->SetObjectField(exception, detailMessage, env->NewStringUTF(buf));
    }
    


    Here we have the thread that threw the exception (thread), the place where it happened (method, location), the exception object itself (exception), and even the place in the code that will catch this exception (catch_method, catch_location).

    What is important: in addition to the pointer to the JVM TI environment, the callback also receives a JNI environment (env). This means that we can use all the JNI functions in it. That is, JVM TI and JNI coexist perfectly, complementing each other.

    In my agent I use both. In particular, through JNI I check that the exception is a NullPointerException, and then I set the detailMessage field to an error message.

    Since the JVM itself gives us the location, the bytecode index at which the exception occurred, for now I simply put this location into the message:



    The number 66 is the index in the bytecode where the exception occurred. But analyzing bytecode by hand is tedious: you need to disassemble the class file, find the instruction at index 66, try to understand what it was doing... It would be great if the agent itself could show something more human-readable.

    Fortunately, the JVM TI has everything needed for this as well. True, you have to request additional JVM TI capabilities: getting a method's bytecode and its constant pool.

    jvmtiCapabilities capabilities = {0};
    capabilities.can_generate_exception_events = 1;
    capabilities.can_get_bytecodes = 1;
    capabilities.can_get_constant_pool = 1;
    jvmti->AddCapabilities(&capabilities);
    

    Now I will extend ExceptionCallback: using the JVM TI function GetBytecodes(), I get the method body and check which instruction sits at the location index. Next comes a large switch over bytecode instructions: an array access gets one error message, a field access another, a method call a third, and so on.
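    That switch might look roughly like the sketch below. It is a simplified version: the full agent on GitHub distinguishes more instructions, and the opcode values come from the JVM specification.

```cpp
#include <stddef.h>

typedef unsigned char u1;

// Map the opcode at the faulting bytecode index to a human-readable hint.
// Simplified sketch: the real agent covers more instructions.
const char* get_exception_message(u1 opcode) {
    switch (opcode) {
        case 0x32: return "Loaded an element from a null array";       // aaload
        case 0x53: return "Stored an element into a null array";       // aastore
        case 0xb4: return "Read a field of a null object";             // getfield
        case 0xb5: return "Wrote a field of a null object";            // putfield
        case 0xb6:                                                     // invokevirtual
        case 0xb9: return "Called a method on a null object";          // invokeinterface
        case 0xbe: return "Requested the length of a null array";      // arraylength
        default:   return NULL;  // not an NPE-producing opcode we recognize
    }
}
```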

    ExceptionCallback Code
    jint bytecode_count;
    u1* bytecodes;
    if (jvmti->GetBytecodes(method, &bytecode_count, &bytecodes) != 0) {
        return;
    }
    if (location >= 0 && location < bytecode_count) {
        const char* message = get_exception_message(bytecodes[location]);
        if (message != NULL) {
            ...
            env->SetObjectField(exception, detailMessage, env->NewStringUTF(buf));
        }
    }
    jvmti->Deallocate(bytecodes);
    


    It remains only to substitute the name of the field or method. It can be obtained from the constant pool, which is again available thanks to the JVM TI.

    if (jvmti->GetConstantPool(holder, &cpool_count, &cpool_bytes, &cpool) != 0) {
        return strdup("");
    }
    

    Next comes a bit of magic, though really nothing tricky: following the class file format specification, we parse the constant pool and extract from it the string with the method's name.
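    The parsing relies on small helpers; they are not library calls, so here is one possible sketch, following the class file format (u2 values are 2-byte big-endian; a CONSTANT_Utf8 entry is a 1-byte tag, a 2-byte length, then the bytes). Locating an entry by index, as get_cpool_at() does, additionally requires walking the variable-sized entries, which is omitted here.

```cpp
#include <stdlib.h>
#include <string.h>

typedef unsigned char u1;

// Read a 2-byte big-endian value, as used everywhere in the class file format.
static unsigned short get_u2(const u1* p) {
    return (unsigned short) ((p[0] << 8) | p[1]);
}

// Copy the string out of a CONSTANT_Utf8 entry:
// tag (1 byte) | length (2 bytes, big-endian) | bytes.
static char* utf8_entry_to_cstring(const u1* entry) {
    size_t length = get_u2(entry + 1);
    char* result = (char*) malloc(length + 1);
    memcpy(result, entry + 3, length);
    result[length] = 0;
    return result;
}
```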

    Constant pool analysis
    u1* ref = get_cpool_at(cpool, get_u2(bytecodes + 1));       // CONSTANT_Fieldref
    u1* name_and_type = get_cpool_at(cpool, get_u2(ref + 3));   // CONSTANT_NameAndType
    u1* name = get_cpool_at(cpool, get_u2(name_and_type + 1));  // CONSTANT_Utf8
    size_t name_length = get_u2(name + 1);
    char* result = (char*) malloc(name_length + 1);
    memcpy(result, name + 3, name_length);
    result[name_length] = 0;
    


    Another important point: some JVM TI functions, such as GetConstantPool() or GetBytecodes(), allocate structures in native memory that must be freed once you are done with them.

    jvmti->Deallocate(cpool);
    

    Run the original program with our extended agent, and the exception description is completely different: it reports that we called the longValue() method on a null object.



    Other applications


    Generally speaking, developers often want to handle exceptions in their own way: for example, automatically restart the JVM when a StackOverflowError occurs.

    This desire is understandable, since StackOverflowError is just as fatal an error as OutOfMemoryError: after it occurs, correct operation of the program can no longer be guaranteed. Or, for example, to analyze a problem you may want a thread dump or a heap dump taken at the moment an exception occurs.



    In fairness, the IBM JDK has this capability out of the box. But now we know that with a JVM TI agent the same thing can be implemented in HotSpot: it is enough to subscribe to the exception callback and analyze the exception. But how do we take a thread dump or a heap dump from our agent? The JVM TI has everything needed for this case:



    Implementing the whole mechanism of walking the heap and writing a dump yourself is not very convenient. But I will share a secret of how to do it easier and faster. True, this is no longer part of the standard JVM TI, but a private HotSpot extension.

    You need to include the jmm.h header file from the HotSpot sources and call the JVM_GetManagement() function:

    #include "jmm.h"
    JNIEXPORT void* JNICALL JVM_GetManagement(jint version);
    void JNICALL ExceptionCallback(jvmtiEnv* jvmti, JNIEnv* env, ...) {
        JmmInterface* jmm = (JmmInterface*) JVM_GetManagement(JMM_VERSION_1_0);
        jmm->DumpHeap0(env, env->NewStringUTF("dump.hprof"), JNI_FALSE);
    }
    

    It returns a pointer to the HotSpot Management Interface, which can produce a heap dump or a thread dump in a single call. The complete example code can be found in my answer on Stack Overflow.

    Naturally, you can handle not only exceptions but a bunch of other events related to JVM operation: thread start/stop, class loading, garbage collection, method compilation, method entry/exit, even reads and writes of specific fields of Java objects.

    I have an example of another agent, vmtrace, which subscribes to many standard JVM TI events and logs them. If I run a simple program with this agent, I get a detailed log of everything that happens, with timestamps:



    As you can see, just printing hello world loads hundreds of classes and generates and compiles dozens or hundreds of methods. It becomes clear why Java takes so long to start: all in all, it took more than two hundred milliseconds.

    What JVM TI can do


    In addition to event handling, the JVM TI has a bunch of other features. They can be divided into two groups.

    The first group is mandatory: any JVM that supports the JVM TI must implement them. This includes operations for inspecting methods, fields, and threads, the ability to add classes to the classpath, and so on.

    The second group is optional features, which require a preliminary capabilities request. A JVM is not required to support all of them, though HotSpot implements the entire specification in full. Optional features fall into two subgroups: those that can be enabled only at JVM startup (for example, setting breakpoints or inspecting local variables), and those that can be enabled at any time (in particular, getting bytecode or the constant pool, which I used above).



    You may notice that this list of features looks a lot like what a debugger can do. Indeed, a Java debugger is nothing more than a special case of a JVM TI agent, one that takes advantage of all these features and requests all the capabilities.

    The separation of capabilities into those that can be enabled at any time and those available only at startup is deliberate. Not all features are free; some carry overhead.

    The direct overhead of actually using a feature is obvious enough, but there is also less obvious indirect overhead that appears even if you never use the feature and merely declare, through capabilities, that you may need it sometime in the future. This is because the virtual machine may compile code differently or add extra runtime checks.

    For example, the capability we already used for subscribing to exceptions (can_generate_exception_events) causes all exception throws to take a slow path. In principle, this is not so scary, because exceptions should be rare in a good Java program.

    The situation with local variables is a bit worse. To support can_access_local_variables, which allows reading the values of local variables at any time, the JVM has to disable some important optimizations. In particular, escape analysis stops working entirely, which can produce a noticeable overhead: 5-10%, depending on the application.

    Hence the conclusion: if you run Java with a debug agent enabled, the application will run slower even if you never use the agent. In any case, enabling a debug agent in production is not a good idea.

    Some features, such as setting breakpoints or tracing every method entry and exit, carry a much more serious overhead. In particular, some JVM TI events (FieldAccess, MethodEntry/Exit) work only in the interpreter.

    One agent is good, and two is better


    You can attach multiple agents to a single process simply by specifying the -agentpath parameter several times. Each agent gets its own JVM TI environment, which means each can request its own capabilities and intercept its own events independently.

    But what if two agents subscribe to the Breakpoint event, and one of them sets a breakpoint in some method: when this method executes, will the second agent receive the event too?

    In reality, this situation cannot arise (at least in the HotSpot JVM), because some capabilities can be held by only one agent at a time, and can_generate_breakpoint_events is one of them. So if a second agent requests the same capability, it will get an error in response.

    This is an important takeaway: an agent should always check the result of a capabilities request, even if you are running on HotSpot and know that all of them are implemented. The JVM TI specification says nothing about exclusive capabilities; this is a HotSpot implementation detail.

    True, agent isolation does not always work perfectly. While developing async-profiler, I ran into this problem: when there are two agents and one of them requests the generation of method compilation events, all agents receive these events. Of course, I filed a bug, but keep in mind that your agent may receive events it does not expect.

    Usage in a regular program


    The JVM TI may seem like something very specific to debuggers and profilers, but it can also be used in an ordinary Java program. Consider an example.

    The reactive programming paradigm, where everything is asynchronous, is now widespread, but it comes with a problem.

    public class TaskRunner {
        private static void good() {
            CompletableFuture.runAsync(new AsyncTask(GOOD));
        }
        private static void bad() {
            CompletableFuture.runAsync(new AsyncTask(BAD));
        }
        public static void main(String[] args) throws Exception {
            good();
            bad();
            Thread.sleep(200);
        }
    }
    

    I run two asynchronous tasks that differ only in their parameters. And when something goes wrong, an exception occurs:



    From the stack trace it is completely unclear which of the tasks caused the problem, because the exception occurs in an entirely different thread, where we have no context. How do we figure out which task it was?

    One solution is to record, in the constructor of our asynchronous task, information about where it was created:

    public AsyncTask(String arg) {
        this.arg = arg;
        this.location = getLocation();
    }
    

    That is, we remember the location: a specific place in the code, down to the line from which the constructor was called. And in case of an exception, we log it:

    try {
        int n = Integer.parseInt(arg);
    } catch (Throwable e) {
        System.err.println("ParseTask failed at " + location);
        e.printStackTrace();
    }
    

    Now, when the exception occurs, we see that it happened on line 14 of TaskRunner (where the task with the BAD parameter is created):



    But how do we get the place in the code the constructor was called from? Prior to Java 9, there was only one legal way: take a full stack trace, skip a few irrelevant frames, and a little lower in the stack will be the caller we are interested in.

    String getLocation() {
        StackTraceElement caller = Thread.currentThread().getStackTrace()[3]; 
        return caller.getFileName() + ':' + caller.getLineNumber();
    }
    

    But there is a problem: getting a full stack trace is quite slow. I have a whole talk devoted to just that.

    This would not be a big problem if it happened rarely. But take, for example, our web service, a frontend that accepts HTTP requests. It is a huge application: millions of lines of code. To catch rendering errors, we use a similar mechanism: each rendering component remembers the place where it was created. We have millions of such components, so collecting all the stack traces added tangible time, whole minutes, to application startup. So this feature used to be disabled in production, even though it is in production that it is needed to analyze problems.

    Java 9 introduced a new way to walk thread stacks: StackWalker, which, through the Stream API, can do all of this lazily, on demand. That is, we can skip the required number of frames and fetch only the one we are interested in.

    String getLocation() {
        return StackWalker.getInstance().walk(s -> {
            StackWalker.StackFrame frame = s.skip(3).findFirst().get();
            return frame.getFileName() + ':' + frame.getLineNumber();
        });
    }
    

    It works somewhat better than getting a full stack trace, but not by an order of magnitude, and not even several times over: in our case it turned out to be about one and a half times faster:



    There is a known problem with the suboptimal implementation of StackWalker, and it will most likely even be fixed in JDK 13. But again, what do we do right now on Java 8, where there is no StackWalker at all, not even a slow one?

    The JVM TI comes to the rescue again. It has a GetStackTrace() function that does exactly what we need: fetches a fragment of a stack trace of a given length, starting from a given frame, and nothing more.

    GetStackTrace(jthread thread,
                          jint start_depth,
                          jint max_frame_count, 
                          jvmtiFrameInfo* frame_buffer, 
                          jint* count_ptr)
    

    Only one question remains: how do we call a JVM TI function from our Java program? The same way as any other native method: with System.loadLibrary(), load the native library containing the JNI implementation of our method.

    public class StackFrame {
        public static native String getLocation(int depth);
        static { 
            System.loadLibrary("stackframe");
        } 
    }
    

    A pointer to the JVM TI environment can be obtained not only in Agent_OnLoad() but also while the program is running, and can then be used from ordinary native JNI methods:

    JNIEXPORT jstring JNICALL
    Java_StackFrame_getLocation(JNIEnv* env, jclass unused, jint depth) {
        jvmtiFrameInfo frame;
        jint count;
        jvmti->GetStackTrace(NULL, depth, 1, &frame, &count);
        ...
    }
    

    This approach is several times faster and saved us several minutes of application startup time:



    However, with one of the next JDK updates we were in for a surprise: the application suddenly started up very, very slowly. The investigation led to that very native library for fetching stack traces. Digging deeper, we concluded that the bug was not on our side but in the JDK: starting with JDK 8u112, all JVM TI functions that work with methods (GetMethodName, GetMethodDeclaringClass, and so on) had become terribly slow.

    I filed a bug, did a little research, and discovered a funny story: debug checks had been added to some JVM TI functions without anyone noticing that these functions are also called on production code paths. The call site was missed because it was not in the C++ sources but in the file jvmtiEnter.xsl.

    Imagine: during the HotSpot build, part of the source code is generated on the fly through an XSLT transform. That is how enterprise struck back at HotSpot.

    What is the solution? Simply do not call these functions too often, and try to cache the results. That is, once you have obtained the information for some jmethodID, remember it locally in your agent. By applying such caching at the agent level, we brought the performance back to its previous level.
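    The caching itself is straightforward. Below is a sketch: MethodId and the resolver are stand-ins for jmethodID and the GetMethodName() call, not JVM TI names, and a real agent would also need to guard the map against concurrent access.

```cpp
#include <map>
#include <string>

// Cache method names so the slow JVM TI lookup runs at most once per method.
// 'MethodId' stands in for jmethodID; 'resolve' stands in for GetMethodName().
typedef const void* MethodId;

class MethodNameCache {
    std::map<MethodId, std::string> cache_;
  public:
    template <typename Resolver>
    const std::string& name(MethodId id, Resolver resolve) {
        auto it = cache_.find(id);
        if (it == cache_.end()) {
            // Slow path: ask the JVM TI once and remember the answer
            it = cache_.insert(std::make_pair(id, resolve(id))).first;
        }
        return it->second;
    }
};
```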

    Dynamic attach


    In the previous example, I showed that the JVM TI can be used directly from Java code through ordinary native methods and System.loadLibrary().

    Besides that, we have already seen how to attach JVM TI agents with the -agentpath parameter at JVM startup.

    And there is a third way: dynamic attach.

    The idea is this: if you started the application without anticipating that you would need some feature, or you suddenly have to investigate a bug in production, you can load a JVM TI agent right at runtime.

    Starting with JDK 9, this is made possible directly from the command line using the jcmd utility:

    jcmd <pid> JVMTI.agent_load /path/to/agent.so [arguments]
    

    For older JDK versions, you can use my jattach utility. For example, it is partly thanks to jattach that async-profiler can attach on the fly to applications running without any extra JVM arguments.

    To support dynamic attach in your JVM TI agent, you need to implement, in addition to Agent_OnLoad(), a similar function: Agent_OnAttach(). The only difference is that in Agent_OnAttach() you cannot use the capabilities that are available only at agent load time.

    It is also important to remember that the same library can be dynamically attached several times, so Agent_OnAttach() may be called repeatedly.
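    A simple way to make repeated attaches safe is an atomic once-flag. This is a sketch outside any JVM context: init_agent() and on_attach() are placeholders for the real setup (requesting capabilities, setting callbacks) and for Agent_OnAttach() itself.

```cpp
#include <atomic>

// Agent_OnAttach() may be called repeatedly if the library is attached again,
// so one-time initialization must be guarded.
static std::atomic<bool> agent_initialized(false);
static int init_count = 0;   // for illustration: how many times setup really ran

static void init_agent() {
    init_count++;            // placeholder for capabilities/callbacks setup
}

// Sketch of what Agent_OnAttach() would do around the real JVM TI calls
static int on_attach() {
    bool expected = false;
    if (agent_initialized.compare_exchange_strong(expected, true)) {
        init_agent();        // first attach: do the actual setup
    }
    return 0;                // repeated attaches succeed without re-initializing
}
```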

    Let me demonstrate with an example. IntelliJ IDEA will play the role of production: it is also a Java application, which means we can attach to it on the fly and do something to it.

    We find the process ID of our IDEA, then use the jattach utility to attach the patcher.dll JVM TI library to this process:
    jattach 8648 load patcher.dll true

    And right on the fly, the menu color changes to red:



    What does this agent do? It finds all Java objects of the given class (javax.swing.AbstractButton) and calls the setBackground() method on them through JNI. The full code can be seen here.

    What's New in Java 9


    The JVM TI has existed for a long time and, despite the occasional bug, it is a well-established, thoroughly debugged API that had not changed for years. The first significant additions appeared in Java 9.

    As you know, Java 9 brought developers the pain and suffering associated with modules. Above all, it became difficult to use JDK "secrets", which sometimes you simply cannot do without.

    For example, the JDK has no legal way to free a direct ByteBuffer, only a private API:



    Cassandra, say, cannot do without this feature, because the DBMS builds all its work on MappedByteBuffer, and if they are not freed manually, the JVM will quickly fall over.

    And if you try to run the same code on JDK 9, you get an IllegalAccessError:



    The situation with reflection is much the same: it has become hard to reach private fields.

    For example, not all Linux file operations are available in Java. So, for Linux-specific features, programmers would extract the system file descriptor from a java.io.FileDescriptor object via reflection and call system functions on it through JNI. Now, if you run this on JDK 9, you will see complaints in the logs:



    Of course, there are JVM flags that open the necessary private modules and allow you to use private classes and reflection. But you have to list, by hand, every package you intend to use. For example, just to run Cassandra on Java 11, you need a wall of flags like this:

    --add-exports java.base/jdk.internal.misc=ALL-UNNAMED
    --add-exports java.base/jdk.internal.ref=ALL-UNNAMED
    --add-exports java.base/sun.nio.ch=ALL-UNNAMED
    --add-exports java.management.rmi/com.sun.jmx.remote.internal.rmi=ALL-UNNAMED
    --add-exports java.rmi/sun.rmi.registry=ALL-UNNAMED
    --add-exports java.rmi/sun.rmi.server=ALL-UNNAMED
    --add-exports java.sql/java.sql=ALL-UNNAMED
    --add-opens java.base/java.lang.module=ALL-UNNAMED
    --add-opens java.base/jdk.internal.loader=ALL-UNNAMED
    --add-opens java.base/jdk.internal.ref=ALL-UNNAMED
    --add-opens java.base/jdk.internal.reflect=ALL-UNNAMED
    --add-opens java.base/jdk.internal.math=ALL-UNNAMED
    --add-opens java.base/jdk.internal.module=ALL-UNNAMED
    --add-opens java.base/jdk.internal.util.jar=ALL-UNNAMED
    --add-opens jdk.management/com.sun.management.internal=ALL-UNNAMED
    

    However, along with the modules, JVM TI functions for working with them appeared:

    • GetAllModules
    • AddModuleExports
    • AddModuleOpens
    • etc.

    Looking at this list, the solution suggests itself: wait for the JVM to boot, get the list of all modules, walk over all their packages, open everything to everyone, and enjoy life.

    Here is the same example with Direct ByteBuffer:

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocateDirect(1024);
        ((sun.nio.ch.DirectBuffer) buf).cleaner().clean();
        System.out.println("Buffer cleaned");
    }
    

    If we run it without agents, we get the expected IllegalAccessError. But add my antimodule agent to the agentpath, and the example runs without errors. The same goes for reflection.

    What's New in Java 11


    Java 11 brought one more addition. Just one, but what an addition! It enables lightweight allocation profiling: a new event, SampledObjectAlloc, which you can subscribe to in order to receive sampled notifications about allocations.

    Everything needed for further analysis is passed to the callback: the allocating thread, the sampled object itself, its class, and its size. There is also a SetHeapSamplingInterval() method to adjust how often these notifications arrive.



    Why is this needed? Allocation profiling existed in all popular profilers before, but it worked through bytecode instrumentation, which is fraught with high overhead. The only low-overhead allocation profiling tool was Java Flight Recorder.

    The idea of the new approach is to instrument not every allocation, but only some of them; in other words, to sample.

    In the fastest and most common case, allocation happens inside a Thread Local Allocation Buffer (TLAB) by simply bumping a pointer. When sampling is enabled, a virtual boundary corresponding to the sampling interval is added to the TLAB. As soon as an allocation crosses this boundary, an event about the allocated object is sent.
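    The boundary logic can be modeled in a few lines. This is a toy simulation of the idea, not HotSpot code:

```cpp
#include <stddef.h>

// Toy model of TLAB allocation sampling: allocation bumps a pointer, and an
// event "fires" each time the pointer crosses the next sampling boundary.
struct SamplingTlab {
    size_t top;        // bump pointer: total bytes allocated so far
    size_t boundary;   // next virtual border
    size_t interval;   // average bytes between samples
    int samples;       // allocation events sent

    explicit SamplingTlab(size_t interval)
        : top(0), boundary(interval), interval(interval), samples(0) {}

    void allocate(size_t size) {
        top += size;                 // the fast path: just bump the pointer
        while (top >= boundary) {    // crossed the border -> sample this object
            samples++;
            boundary += interval;
        }
    }
};
```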



    In some cases, large objects that do not fit into a TLAB are allocated directly in the heap. Such objects take the slow allocation path through the JVM runtime and are sampled as well.

    Since only a fraction of objects are now sampled, the overhead becomes acceptable for production: in most cases, less than 5%.

    Interestingly, this mechanism has existed in HotSpot for a long time, since JDK 7, built specifically for Flight Recorder; async-profiler also used it through the private HotSpot API. Now, starting with JDK 11, this API has become public and part of the JVM TI, so other profilers can use it too: YourKit, in particular, already does. An example of how to use this API is posted in our repository.

    With such a profiler you can build beautiful allocation graphs: watch which objects are allocated, how many of them, and, most importantly, where.



    Conclusion


    JVM TI is a great way to interact with a virtual machine.

    Plugins written in C or C++ can be loaded at JVM startup or attached dynamically while the application is running. Moreover, the application itself can use JVM TI functions through native methods.

    All the demonstrated examples are posted in our repository on GitHub. Use them, study them, and ask questions.
