“The Java world will never be the same again” - an interview with Alexander Belokrylov and Alexey Voitylov from BellSoft
Alexander (alexbel) Belokrylov and Alexey Voitylov, together with Grigory Lubzovsky, who led the Oracle development center in St. Petersburg, founded BellSoft a little over a year ago. The company is now thriving and has already made a name for itself in the Java world.
By volume of commits to OpenJDK over the past year they took fifth place; only Oracle, Red Hat, SAP, and Google are ahead:
Keep in mind that BellSoft is not only about Arm:
- Liberica JDK 11 has been released; Linux x86_64, Windows, Linux ARMv8, and Linux ARMv7 (including the Raspberry Pi) are supported. Builds for Mac and Solaris SPARC are on the way.
- Images for all architectures are published on Docker Hub for Debian, CentOS, and Alpine. The Alpine image is built from the lite version with --compress=2, so it is significantly smaller than a regular JDK.
In this interview we will cover only Arm and leave the rest for next time.
So, today in our virtual studio:
Oleg Chirukhin - editor, JUG.ru Group
Tell us more about the company.
BellSoft works in several areas. Everyone probably knows that Oracle's St. Petersburg office had very serious low-level expertise in Java runtime development, compiler development, and Oracle Cloud services. That expertise migrated from Oracle to BellSoft. Today our company develops the Java runtime, we are an active OpenJDK contributor, we work on the gcc and llvm compilers, and we contribute to the Apache stack and Graal. We build systems for big-data analysis and recommender systems, and we have done a small IoT project for collecting data from real-world devices. At some point we saw that Oracle had stopped releasing a Java distribution for Arm platforms, so we released our own distribution, which we called Liberica JDK, for the Raspberry Pi.
Let's take a closer look. What, for example, is the Apache stack?
We started contributing to the Apache Foundation with Hadoop; a lot is tied to certain parts of that project. OpenJDK and the large Apache projects are strongly interrelated, even if not directly.
Why might all this be needed? For example, to speed up classes that are slowing those projects down?
Yes, improving performance is one of the areas we work on. For example, accelerating platform-specific parts of OpenJDK can help speed up Hadoop. If you're interested, we can talk about that.
When you solve a performance problem, it makes sense to look at nearby code: maybe the same problem exists somewhere else. Very often you see that after fixing one place, you need to fix a couple more places for things to improve overall. Sometimes (in fact, very often) a performance optimization decomposes into contributions to several projects. If you want to improve, say, checksum performance, you look at the very bottom of the stack; let's say that is Java. A little higher up it will be Hadoop, Spark, or something else. Usually, once you understand how to improve one place, you understand how to do it in another, and of course it makes sense to go and improve things there too.
Everyone knows you as Liberica :-) Let's talk about that.
Yes, we are Liberica JDK. Liberica started when we saw that there was no ARM32 port and that one was urgently needed, because the Raspberry Pi had been left without Java 9 and Java 10. That was in 2017, when Java 9 came out. Now Liberica JDK supports many architectures and operating systems.
It became clear that Oracle was not going to develop the Arm code any further, so we began contributing actively and releasing our own distribution to close that gap. It was clear that people needed it.
So there are now several Java distributions for Arm?
Yes, there are several Java distributions for Arm, and they differ. In our case you get essentially what used to be part of the Oracle port distribution. Our distribution includes JavaFX, Device I/O, and an embedded API. It is a kind of package, and starting with JDK 9 it all works with modules. Using the module system, you can build a runtime however you like. If you want, you can make a small runtime of 16 megabytes. If you want more features, a web server for example, you will need about 32 megabytes of static space. Either way, you get a working runtime tailored to your needs.
As far as I understand, this was about Arm servers. They haven't exactly seen mass adoption here. Tell me about the servers: do they actually exist in real life?
This story goes back many years. The very first Arm server was based on the 32-bit ARMv7 architecture. It was a terribly noisy box that barely worked: the BIOS and Linux were flaky, and everything else would die within a few hours. The company that started it, Calxeda, eventually shut down. But the idea of developing an alternative server architecture had been planted. Arm eventually released the new ARMv8 architecture specification, which supports both 32 and 64 bits. Based on the 64-bit version of that specification, several manufacturers are now building their own server processors: for example, Ampere Computing, Cavium (now acquired by Marvell), and Qualcomm. And there is one more company: a few years ago AMD released a server based on the Arm architecture.
If you remove one letter L from Marvell, you get Marvel superheroes. A good way to remember the names of all these companies.
The real superheroes there are Cavium / Marvell, because of all of them they managed to build the most powerful chip, up to 128 threads on a single CPU, comparable to or better than Xeon Gold and Platinum in performance. You can put several such CPUs in one server and get a monster with fast memory that can be used for serious workloads.
How well does it scale in normal use? How many CPUs does it make sense to put into one server?
It all depends on the task you are building the server for. Different manufacturers target different niches, but Cavium / Marvell clearly target the computing niche, where you need to chew through a large amount of data in parallel, fast. They don't chase the highest single-thread performance (though it is quite good); the point is that, overall, this CPU delivers great performance at low cost.
Why Arm and not Intel? We have wonderful Intel servers; why invent something else?
This question is both hard and easy to answer. First, a holy place is never empty. We see AMD trying to build an alternative to Intel for server applications. And there will always be some alternative slice of the market held by alternative manufacturers.
No one wants to live with a single monopolist.
A very fair remark. All processor consumers, and these are mainly cloud providers, want to have an alternative: to be able to choose, to compare costs, and to pick the more profitable architecture for specific applications.
What about cost? How does it compare with Intel solutions?
A complex question. First, as Alexander said, there are plenty of manufacturers. Right now Arm processor manufacturers aren't really competing with each other; they compete with someone else, each taking a slightly different niche. If Cavium is high-performance computing, then Qualcomm is mid-range servers, and Ampere is workstations or low-end servers.
If I remember correctly, an Ampere Computing CPU itself costs $600-900, and it competes with an Intel CPU costing about $1,500. Cavium is a bit more expensive, but again it competes with Intel parts that are significantly more expensive. You have to understand that the price of a server is not just the price of the CPU; it is also memory, disks, support, and power consumption. If you win on one parameter, say CPU cost, that's fine, but you will only be slightly cheaper. If you win on two, say being cheaper and offering better performance, people will look at you more closely. And if on three, doing all that with lower power consumption, that is already a bid for victory.
Besides the hardware and its support, software support also matters. You can't run everything that currently runs on Intel on Arm.
Of course. It must be said that the Arm software ecosystem has come a long way. Five years ago there were problems just getting a piece of hardware up; now there are none. You just show up and everything works out of the box. Everything you are used to works: Linux, Docker, Kubernetes, Xen, Java, Hadoop, Spark, Kafka, whatever.
What about Java? Tell us how it works there and how it differs from the "usual" one.
It doesn't differ, and that is its main advantage. It is productive enough to handle the tasks Java is given on servers. You move your application to an Arm server (I hope it has no native parts, otherwise you will have to recompile them), check the performance, and in most cases rejoice. Recently we published an article comparing the performance of an Arm server with an Intel server. The article appeared in Java Magazine.
Did Oracle let you advertise in its own magazine? Seriously?
Apparently there is demand. It turns out that Arm servers do quite well on Java workloads. They are on par with, or even better than, their Intel counterparts.
Who should read your article?
Anyone who wants to look at and test a new architecture, to see whether it suits their workloads. Try Java on those Arm servers. Type "Arm server cloud" into Google, several cloud providers come up, and you can swipe a card and try whatever you need.
Is Java already preinstalled?
Yes. Plain OpenJDK.
Are regular OpenJDK and your Liberica distribution the same thing? I saw your commits there; is that it, or something else?
In general, the history of the Arm ports in OpenJDK is quite interesting and convoluted. Initially Oracle developed the Arm port, and when Arm released the ARMv8 architecture, support for ARMv8 was added to that port. In parallel, Red Hat worked in the same direction and contributed its own port for this architecture to OpenJDK. It so happened that the community settled on Red Hat's port. So the 64-bit side of the original Arm port in OpenJDK, which duplicated the functionality of the aarch64 port, became redundant, and at our initiative it is being removed in JDK 12.
It must be said that before discontinuing support, Oracle contributed all of its embedded work, all the Arm ports, to OpenJDK. Everything Arm-related that existed at Oracle has now been upstreamed.
That's logical, because hardware manufacturers and specification authors should be the ones most interested in a software ecosystem that runs on their hardware and is compatible with their specifications. For that, the code must be open.
I saw an infographic with amazing numbers showing that a certain company called BellSoft, located in St. Petersburg, had contributed a huge number of commits.
Yes, we are in the top 5 OpenJDK committers. Naturally, Oracle is beyond competition, with about 4 thousand commits a year.
Then come Red Hat, SAP, Google, and BellSoft. We fell just a little short of Google. And right behind us is IBM.
What percentage of your employees used to work at Oracle?
100 percent. BellSoft consists of former Oracle employees.
That's unfair competition, because Google is not 100 percent former Oracle employees. What do the commits say? How did you achieve such success? How does one get into the top 5 committers?
We work in several directions. Right now most of our commits go to the ARM64 port, which is the server port. Hardware manufacturers are interested in it: they want Java to run fast on their hardware and cope with the workloads. The second direction is the ARM32 port, which we maintain; that is the embedded port. The third is commits that support, fix, and improve Java's overall functionality.
We just talked about 64 bits on servers. Why is the 32-bit port still alive?
Because it is used in embedded.
Because many companies have implemented CPUs based on the ARMv7 architecture for embedded applications, and they have a large number of chips in stock. If my memory serves, of all the ARM32 chip varieties the most popular is ARMv5. That architecture has been around for many years, but the CPUs are quite cheap, and manufacturers are still considering new devices based on it.
What amounts of money are we talking about here? Can an ordinary person buy something and experiment?
The most popular ARM32 platform is the Raspberry Pi, starting with the second version; the second and third versions are covered by the ARM32 port. One of our distributions is the ARM32 build that is tested on and works on the Raspberry Pi. We see that it is the most widespread platform for a broad audience, which is why we release the Raspberry Pi port. We also have more specific ports for highly specialized hardware, but that's another story.
Maybe you don't even need to buy anything. Look at what's in your home router; quite likely there is something similar inside.
How much skill does a developer need for this?
You need to be a Java developer.
Do we need any special tricks, do we need to know Kirchhoff's laws to write code?
You just need a computer you can connect to via SSH. No flashing skills are required. You take a microSD card with Linux for the Raspberry Pi, insert it, and everything starts. That is the main advantage of the Raspberry Pi over all other single-board computers: the ease of setting up and getting a working system.
And how do you work with sensors? We do all this for the sake of external systems, right?
The Raspberry Pi has a GPIO system with pins to which you can connect anything. That is what enthusiasts usually use to connect all their peripherals to the Raspberry Pi.
What does the API look like? What do I write to say, "get me a reading from the thermostat"?
You need to read the thermostat's datasheet and understand what its registers are, how to initialize it, and how to configure it. If it is I2C, you call the configuration method and pass it all the parameters. Then you call i2c.open and work with the device as a Java object.
Is it possible to write nice object wrappers around thermostats so you can work in a purely object model? Can I avoid reading datasheets by hiding all that behind a facade?
It would be nice if the sensor manufacturer shipped a ready-made configuration so that we, as Java programmers, could simply take it and use it: one library for one sensor, another library for another. Such a library, or something close to it, exists and is called Pi4J. It is not developing as rapidly now as it did when Oracle was pushing embedded Java, but it hasn't died; updates still come out periodically. So there is a choice: either work with the GPIO support in OpenJDK, or work with the Pi4J library.
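As a sketch of what such a facade can look like: a small wrapper hides the register protocol behind a plain Java method, so callers never touch the datasheet. The I2cBus interface, register numbers, and scale factor below are invented for illustration; they are not taken from a real datasheet, from Pi4J, or from the Device I/O API.

```java
// Illustrative facade over a hypothetical I2C thermostat.
interface I2cBus {
    void write(int device, int register, int value);
    int read(int device, int register);
}

class Thermostat {
    private static final int CONFIG_REG = 0x01; // hypothetical configuration register
    private static final int TEMP_REG   = 0x00; // hypothetical temperature register
    private final I2cBus bus;
    private final int address;

    Thermostat(I2cBus bus, int address) {
        this.bus = bus;
        this.address = address;
        bus.write(address, CONFIG_REG, 0x60); // hypothetical "start conversions" value
    }

    // Callers see degrees Celsius, not raw register contents.
    double temperatureCelsius() {
        return bus.read(address, TEMP_REG) / 16.0; // hypothetical fixed-point scale
    }
}

public class FacadeDemo {
    public static void main(String[] args) {
        // A stub bus stands in for real hardware: the "temperature register"
        // always reads back 400 raw units, i.e. 25.0 degrees at scale 1/16.
        I2cBus stub = new I2cBus() {
            public void write(int device, int register, int value) { }
            public int read(int device, int register) { return 400; }
        };
        System.out.println(new Thermostat(stub, 0x48).temperatureCelsius()); // 25.0
    }
}
```

With a real device, an implementation of the bus interface would sit on top of Pi4J or the JDK's Device I/O API; in tests, a stub like the one above is enough.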
If I'm a hardware manufacturer who knows nothing about Java but would like Java programmers to use my hardware, whom do I contact? You? Or are there other specialists who do this?
Yes, we are exactly those specialists.
Before we run too far ahead: I remember you had a JEP, right?
OpenJDK 11 includes 17 JEPs. 14 were done by Oracle, 1 by Google, 1 by Red Hat, and 1 by BellSoft together with Cavium. Our JEP is a collection of performance improvements for Java on the ARM64 platform for specific workloads. Accordingly, the JEP is called Improve Aarch64 Intrinsics. In short, we improved the performance of operations on Strings and arrays, plus a bit of math and trigonometry.
What are intrinsics? Not everyone knows.
When the virtual machine computes a sine, instead of executing the straightforward Java code it can substitute an optimized assembly stub for the particular architecture.
One that directly calls a processor instruction named "sine"?
One that computes it with a sophisticated algorithm. There are also intrinsics that invoke a single assembly instruction, for example the intrinsics for computing checksums; such instructions exist on almost all architectures. And there are more complex intrinsics, where you have to write a lot of assembly to get a good performance boost.
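The checksum case is easy to see from Java: HotSpot can replace the update loop of java.util.zip.CRC32 with a hardware-accelerated stub (ARMv8's optional CRC32 instructions make this especially cheap), yet the call looks identical on every platform:

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

public class ChecksumDemo {
    public static void main(String[] args) {
        // CRC32.update() is one of the methods HotSpot may intrinsify;
        // from Java code the substitution is completely invisible.
        CRC32 crc = new CRC32();
        crc.update("123456789".getBytes(StandardCharsets.US_ASCII));
        // 0xCBF43926 is the well-known CRC-32 check value for "123456789".
        System.out.println(Long.toHexString(crc.getValue())); // cbf43926
    }
}
```

Whether the intrinsic kicks in or the pure-Java fallback runs, the result is the same; only the speed differs.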
And encryption, is that in the hardware?
Yes, usually it is a call to existing instructions of a particular processor. Sometimes it means working with extensions on the chips that have them.
Returning to your JEP: how do you determine which code is hot enough to be worth hand-coding like that?
Great question. When we began optimizing for the ARM64 platform, we didn't have many tools besides perf, and even perf didn't work everywhere. The JFR implementation for the ARM64 port was missing; Oracle hadn't open-sourced it yet at that time. The various performance tools we were used to, async-profiler and honest-profiler for example, didn't work on ARM64 either. The first thing we did was bring all these tools up on this architecture.
Why don't they work out of the box?
Because there is always some CPU-specific part.
Then you run these tools on the workload you're trying to optimize, stare at the screen for a long time, and try to figure out which methods and which places are hot. There are simple cases where the assembly stubs for a specific architecture simply aren't implemented, and the VM falls back to Java code; just implementing those stubs gives you a performance boost. There are harder cases where you need to figure out what new intrinsic should be created in Java for all architectures. That's the job.
Where do you get the workloads? Download all of GitHub and run it under the JIT?
Obviously, you optimize certain benchmarks or workloads. The well-known benchmarks are SPECjbb and SPECjvm. There are also specific workloads that interest specific customers. You simply run these workloads and look at the bottlenecks.
All these SPECjvm suites are very old tests, right? What about new lambdas, streams, big data?
Nothing. None of that is there.
Then where do you get it?
From the Apache stack, for example?
Yes. Hadoop has the standard TeraSort benchmark, which hardware manufacturers like to measure. That is also one of the interesting optimization targets.
What are the top things that still should be optimized? For example, the top items of that same JEP.
We have closed the main problem areas that existed for this architecture. Of course, there is still an untilled field of things we haven't done and will continue to do. We will keep working on trigonometry and look at the new intrinsics that will appear. They don't exist yet, but we understand they will soon. We will also have to look at Project Panama, which Intel is actively contributing to right now.
How does the compiler see what you are computing? Will it magically understand that you are evaluating some known formula, and optimize it?
If you call Math.sin, then instead of the Java implementation of sin it is quite possible to substitute an assembly stub.
Is there a regular pass somewhere that looks for every sin call and replaces it?
Yes. Usually this is done not even in the compiler, but starting from the interpreter.
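The substitution has to preserve Java's accuracy contract, which you can observe from plain Java code: StrictMath.sin is always the portable fdlibm algorithm, while Math.sin may be replaced by an architecture-specific intrinsic that is allowed to differ from the correctly rounded result by at most 1 ulp.

```java
public class SinDemo {
    public static void main(String[] args) {
        double x = 1.0;
        double fast = Math.sin(x);       // eligible for an architecture-specific intrinsic
        double ref  = StrictMath.sin(x); // always the portable fdlibm algorithm
        // Whatever stub the VM substitutes, the two results stay within
        // the tolerance that the Math.sin specification permits.
        System.out.println(Math.abs(fast - ref) <= Math.ulp(ref)); // true
    }
}
```

This is why writing such a stub requires real numerical care: a faster sequence of instructions is only usable if it still lands inside that tolerance.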
Can it catch something more complex, for example operations inside loops?
Usually such tasks are solved within C2, and there is no point in writing and maintaining specialized intrinsics for them.
For that you need to have a specialist. Ivanov or Chuiko, for example?
Chuiko works with us.
He has just left for Canada; he will be speaking at the Linaro Connect conference about our work on improving OpenJDK on the ARM64 architecture. The Linaro foundation develops the ecosystem for Arm platforms.
Where do they get the money for that?
From Arm, first of all. And from hardware manufacturers.
What were the most challenging or interesting problems you faced?
It's hard to say. I had to level up my math a little. Come to Joker, I'll tell you about it.
Programmers need math!
Yes, surprisingly. Floating-point arithmetic is not easy to deal with. I had to improve my ability to understand which instructions were involved, what their weights were, and how much time they took. These assembly stubs are a very complicated and laborious thing to write, and they are hard to maintain afterwards. Imagine your specification changes and new, more optimal instructions appear: you have to rewrite the stubs. But in terms of immediate benefit it turns out to be quite profitable.
You mentioned weights. Am I right that there is a set of optimizations ordered by how much they gain?
Yes. Because Arm provides the specification: they make their own Cortex cores, but their core business is licensing the specification. Then various manufacturers implement it, each in their own way. One instruction will take a certain time on one manufacturer's chip and a different time on another's. That is all the complexity you have to deal with when writing this assembly code: you need to understand very carefully that one instruction sequence is optimal for one type of processor and another sequence for another type. Of course, the ordinary Java programmer doesn't need to worry about any of this; it has already been worried about for him.
Suppose something works a bit differently than expected for some corporate performance engineer. What should he do?
Contribute to OpenJDK, or join BellSoft.
Tell us what it takes to join your company as a developer. What do you need to know? For a former Oracle employee everything is clear, but what knowledge kit does a JVM engineer need otherwise?
Probably you need to be an OpenJDK contributor. (laughs)
Well, say I fixed 250 comments and became a contributor. Will that do?
You won't become a significant contributor by fixing 250 comments. That won't work.
And if a person mostly changes libraries, that's not quite it either? It's about the virtual machine itself.
The main thing is to be a good person. To know algorithms well. To understand how the processor works, since we mostly deal with fairly low-level things. You don't even need to know any arcane internals beyond that.
How much time passes before a person can make their first meaningful commit?
Usually it is several months.
And what does he do during those months?
First he learns to build the project. Then he learns to understand it. Say he is given some simple bug, and he learns to figure out in which area of all this diversity he needs to make his change. Then he tries to make the change, everything breaks, and he runs to shake his neighbor. Then something starts working, and he learns to run tests. Then he learns to understand exactly which tests to run. Then he learns how many architectures the tests need to run on to validate the patch properly. Then he learns to communicate, because within OpenJDK communication is a rather important component.
How do you communicate?
Mailing lists. If something isn't on the mailing list, it doesn't exist.
And the bug tracker?
Jira. OpenJDK has an open Jira instance, to which everyone who has become an Author gets access.
You don't have your own Jira for the Jira, so you can Jira while you Jira?
Not for OpenJDK. Naturally, there are other projects where we have our own Jira.
How many architectures do you deal with? I'm following the list you mentioned, and the communication part is clear, but why run tests on several architectures if you only do Arm?
If it's only Arm, then maybe you don't need to run on other architectures. But if you make a change in a shared part, even one that looks absolutely harmless, it can come back to haunt you in a lot of places. First you need to understand where it may backfire, and then you need to test.
What does the matrix of tests and compatibility look like? I suspect it is very big.
There are tests, and there are configurations. We counted the number of combinations of HotSpot flags, architectures, and tests, and got ten to the fiftieth power. Understanding where to run tests in order to verify something is not a trivial task at all.
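To get a feel for why the number explodes, here is a back-of-the-envelope count. The individual figures are invented for illustration, not BellSoft's actual tallies: boolean flags alone multiply combinatorially, and architectures and tests multiply on top of that.

```java
import java.math.BigInteger;

public class ConfigSpace {
    public static void main(String[] args) {
        // Hypothetical inputs: 100 independent boolean HotSpot flags,
        // 10 architectures, 10,000 tests.
        BigInteger flagCombos = BigInteger.TWO.pow(100);
        BigInteger total = flagCombos
                .multiply(BigInteger.valueOf(10))
                .multiply(BigInteger.valueOf(10_000));
        // Already a 36-digit number: exhaustive testing is hopeless,
        // so choosing which configurations to run is the real problem.
        System.out.println(total.toString().length()); // 36
    }
}
```

With the real flag counts the exponent only gets worse, which is the point of the "ten to the fiftieth" estimate above.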
Do you have some standard set of configurations?
Yes. In HotSpot itself the tests are organized into tiers, starting with ordinary smoke tests and ending with quite serious ones. By running the various tiers and assembling the whole picture, you understand how things look. Plus, of course, performance testing and stress testing.
A more general question. Imagine that yesterday another miracle happened, and St. Petersburg released not only its own JDK but also its own processor. You need to port the JDK to it quickly, and you do it. What are the major blocks of work in porting to a new processor, besides testing?
The main JDK component you will need to port is the JVM. The JVM can be divided into four parts: runtime, serviceability, the garbage collector, and the compiler. You will have to make changes in virtually all four. If you have your own new operating system for this processor, you will most likely need to change the runtime. If the processor has unusual memory semantics or out-of-order execution, you will most likely have to dig into the GC. If it is simply a new architecture, then it is mainly the JIT you will need to invest in. The main contribution of the community and the hardware manufacturers to OpenJDK is precisely in the JIT.
If you look not at creation but at maintenance, where do the most changes happen?
Everything changes, everything flows. It isn't visible from the outside, but a lot of changes happen in the libraries; much work goes into optimizing them. But the compiler sees the most changes: new compilers appear, old ones start working differently, and new architectures appear that need new optimizations.
What share of your work on all this is maintenance? Ten percent? Ninety? If everything constantly changes, you have to track it and change things too?
It's hard to put a percentage on it. Of course, it is a very significant share.
Alexander, I can see you've been wanting to tell something interesting for a while now, and here we are talking about compilers. Your move.
I'll tell you about the article. You asked who should read Alexey's article about OpenJDK on Arm. I believe everyone involved in IT should read it. First, in the article Alexey describes the current landscape of the Arm ecosystem: how the Arm specification evolved and how Arm got to the point of being in servers. Then Alexey covers what is happening in OpenJDK as applied to the Arm ecosystem. And he shows how the benchmarks work and what the results are: Arm processors are compared with similar Intel processors on SPEC. So it seems to me this information should be useful to everyone. The world is changing right before our eyes, and practically no one knows about it!
Didn't my ears turn red while Alexander was talking?
No, why would they?
As if I should be ashamed of what I did.
We are doing everything right. Alexey is educating the community. Who today knows that servers on the ARM64 architecture are shipping? A handful of people. Yet we have information that computing centers and supercomputers in the States are being built on Arm processors.
Very large companies are looking at this alternative.
But supercomputers now are, roughly speaking, a kind of data center where a bunch of nodes are interconnected. Is that what you mean? Or some big blocks with billions of Arm cores?
From public sources you can see that these are usually superclusters.
What is the future of this business?
Right now Arm is at its sunrise in the server market. It is still in its infancy and not yet available to the broad market. Who is the main consumer of this whole story of processors, memory, and hardware? Cloud providers. And they are looking at this architecture very seriously.
Sasha, let me disagree with you here. The fact that they are looking seriously is already an indicator that it is in a fairly serious state. If someone had told you 10 years ago that you would compute anything on a GPU, you would have been surprised. And now all cloud providers offer that capability. If you look a bit more globally, there are certain workloads, and manufacturers build hardware for them. There is load that is optimally computed on a GPU, there is load for the CPU, and there is load that just needs to answer something on the Internet, an HTTP 404. These are different loads, and they need different hardware. All these cloud providers are either becoming more specialized or starting to offer products for specific loads.
How do you tell that a technology is mature enough to be worth looking at? Are there any key indicators?
It all starts with the load, with what this particular consumer needs. Does he need an answer to any query within 30 milliseconds, or to crunch a heavy math problem as fast as possible, or to compute that math problem as cheaply as possible? For all these different exercises, different hardware will be optimal. You can't say that something is flatly ready for a specific task or not; it's just that at some point a solution becomes more competitive than what came before. And you start going through the parameters: this one is 20% better, that one is also 20% better, and that one is 10% better, so let me try this technology. It seems to me that for Arm servers this moment has arrived.
So, we have pretty good technology, we have Java, and we have people who support it all anyway. It's a pity there are no hardware manufacturers sitting next to us right now, for the complete set.
We keep talking about Arm, and it may seem that BellSoft is exclusively about Arm, but that is not so. We get involved in any OpenJDK port whenever a port has problems. And Liberica today is not only Liberica on ARM32 and ARM64: Liberica is now available on Linux (64-bit) and on Windows, and in the near future we will release Liberica for Mac. We also have Liberica for Solaris SPARC, for very specific customers.
Why Liberica on a Mac? There are no data centers running on it.
Well, don't you need Java on a Mac? Oracle will soon stop releasing patches; there is literally not much time left, until January. And then what?
What is happening to the patches?
Another time we can talk in detail about what is happening with patches at Oracle. In short, the Java world will never be the same again.
But then the Java world will now have builds from other manufacturers.
Yes, and that's good. The more manufacturers work on one project, the better.
There will be competition.
Yes. First of all, the competition will be for customer support. Some customers need support: banks, payment systems, cloud providers. It is important to them that if a problem arises, it gets solved quickly.
I used to work in a bank, and many support engineers always installed the Oracle JDK, and not because of any extra APIs. In my memory, no one ever wrote -XX:+UnlockCommercialFeatures. I'm not talking about a specific bank, but about selling support in general. People simply believe that only Oracle knows exactly how to cook it.
You're absolutely right, because we always counted on Oracle and on the Oracle update releases that came out regularly. Everyone used them. Even on desktops, Java updated itself; nothing needed to be done.
According to the statistics we see, more and more external contributors are coming to OpenJDK, more and more new features are being developed by external contributors, and more and more support is provided by these vendors. BellSoft is in a very interesting position here, because we also know how to cook it.
We have touched on many different directions, and you seem to understand them all. Let's talk about each direction separately next time. For example, how OpenJDK is built inside is a great big topic of its own.
Good. We are nearing the end of this fascinating conversation, so the last question: can you give some advice to our readers on Habr?
I advise you to read Alexey's article. It will broaden your horizons with the understanding that there is another architecture, an alternative to the one we are used to, and that it has its own software ecosystem.
Take some simple bug and try to fix it in OpenJDK.
Sounds like a huge quest. What else?
Download Liberica and run it. Not necessarily on the Raspberry Pi; a Linux server will do. And tell your friends about the Russian Java that is being made in St. Petersburg.
And you can also come see us at Joker, where we will be giving a talk and BellSoft will have a booth in the demo zone.
Come chat with us. There will be me, Alexey, and Dmitry Chuiko. Dmitry isn't leaving; we have reserved him :-) Joker is a very important event for us.
Thank you, it was very cool. See you next time!
A minute of advertising. Alexander and Alexey will be at the Joker 2018 conference with the talk "Honey, let's try ARM? Theory, Applications and Workloads". Tickets are available on the official conference website.