JVM optimization example

    I came across an interesting example of an optimization performed during dynamic compilation:
    public class StupidMathTest {
      public interface Operator {
        public double operate(double d);
      }

      public static class SimpleAdder implements Operator {
        public double operate(double d) {
          return d + 1.0;
        }
      }

      public static class DoubleAdder implements Operator {
        public double operate(double d) {
          return d + 0.5 + 0.5;
        }
      }

      public static class RoundaboutAdder implements Operator {
        public double operate(double d) {
          return d + 2.0 - 1.0;
        }
      }

      public static void runABunch(Operator op) {
        long start = System.currentTimeMillis();
        double d = 0.0;
        for (int i = 0; i < 5000000; i++)
          d = op.operate(d);
        long end = System.currentTimeMillis();
        System.out.println("Time: " + (end-start) + "  ignore:" + d);
      }

      public static void main(String[] args) {
        Operator ra = new RoundaboutAdder();
        runABunch(ra);
        Operator sa = new SimpleAdder();
        Operator da = new DoubleAdder();
        runABunch(sa);
        runABunch(da);
      }
    }

    Method        Runtime
    SimpleAdder     88ms
    DoubleAdder     76ms
    RoundaboutAdder 14ms




    So it appears to be much faster to add 1 to a double by adding 2 and then subtracting 1 than by adding 1 directly, and slightly faster to add 0.5 twice than to add 1 once. Can that be right? (Answer: no.)
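As a sanity check (my own sketch, not code from the original article), we can confirm that all three "adders" produce the same running sum for this benchmark's inputs, so any timing difference cannot come from the arithmetic itself. The class and method names below are illustrative:

```java
public class AdderEquivalenceCheck {
    interface Operator { double operate(double d); }

    // Runs each adder n times starting from 0.0 and reports whether
    // the three final results agree exactly.
    static boolean allEqualAfter(int n) {
        Operator simple     = d -> d + 1.0;        // SimpleAdder
        Operator dbl        = d -> d + 0.5 + 0.5;  // DoubleAdder
        Operator roundabout = d -> d + 2.0 - 1.0;  // RoundaboutAdder
        double a = 0.0, b = 0.0, c = 0.0;
        for (int i = 0; i < n; i++) {
            a = simple.operate(a);
            b = dbl.operate(b);
            c = roundabout.operate(c);
        }
        return a == b && b == c;
    }

    public static void main(String[] args) {
        // These particular constants are exactly representable, so the
        // sums stay bit-identical for any count well below 2^53.
        System.out.println(allEqualAfter(1_000_000)); // prints true
    }
}
```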

    What happened here? After a warm-up cycle, RoundaboutAdder and runABunch() are compiled, and the JIT performs a monomorphic call transformation (converting a virtual method call into a direct call, which is possible while only one implementation of Operator has been seen) for RoundaboutAdder, so the first pass is very fast. On the second pass (SimpleAdder), the compiler must deoptimize and fall back to virtual method dispatch, so that pass is slower: the virtual call can no longer be optimized away, and time is also spent on recompilation. On the third pass (DoubleAdder), less recompilation is needed, so it runs somewhat faster than the second.
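One common mitigation (a sketch of the general idea, not the article's code) is to warm up every implementation before timing anything, so the call site in runABunch() is already polymorphic when measurement begins and no mid-benchmark deoptimization distorts one contender's numbers. The names and iteration counts here are illustrative:

```java
public class FairerMathTest {
    interface Operator { double operate(double d); }

    // Returns the result so the loop cannot be dead-code-eliminated.
    static double runABunch(Operator op, int iters) {
        double d = 0.0;
        for (int i = 0; i < iters; i++) d = op.operate(d);
        return d;
    }

    public static void main(String[] args) {
        Operator[] ops = {
            d -> d + 1.0,            // SimpleAdder
            d -> d + 0.5 + 0.5,      // DoubleAdder
            d -> d + 2.0 - 1.0       // RoundaboutAdder
        };
        // Warm-up: exercise every implementation before any measurement,
        // so the JIT never sees the call site as monomorphic.
        for (Operator op : ops) runABunch(op, 1_000_000);
        // Timed rounds: repeat and interleave; still treat early rounds
        // with suspicion, since compilation may continue in the background.
        for (int round = 0; round < 3; round++) {
            for (Operator op : ops) {
                long start = System.nanoTime();
                double d = runABunch(op, 5_000_000);
                long ms = (System.nanoTime() - start) / 1_000_000;
                System.out.println("round " + round + ": " + ms
                        + " ms  (ignore: " + d + ")");
            }
        }
    }
}
```

For serious measurements a harness such as JMH, which handles warm-up, forking, and dead-code elimination, is the safer choice.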

    So, what can we conclude from this "performance test"? Practically nothing, except that benchmarking dynamically compiled languages is a far trickier business than you might imagine.

    Original article
    www.ibm.com/developerworks/ru/library/j-jtp12214/index.html
