A Java conference with English roots. The mega-review continues

Published on January 23, 2018


    My colleague from CleverDATA, mpryakhin, and I managed to visit the British capital for a Java conference, Jax London 2017. Last week you already read about Chaos Engineering, lambda expressions, catastrophic bugs, and Continuous Delivery of Java applications in containers.

    Here, in the second part of the review, you will find a story about building a career according to your own plan rather than someone else's, and about using metrics to optimize work on new functionality. You will also learn about the intricacies of building highly loaded event-processing systems and find useful links for working with Ethereum smart contracts via the Java API.




    Day three. Professional growth in the right direction and controlling the development process with metrics


    At many technical conferences, it is customary to open the day with a talk that does not cover the technical side of our professional life, but instead helps us make sense of things that escape our attention in the daily routine.

    Many people consider such talks uninteresting or irrelevant to the conference topics, but, in my opinion, they are among the most valuable, because they let everyone shift their focus and reflect on what has long been on the tip of the tongue but never put into words. New thoughts, ideas, and approaches are exactly what many people travel to foreign conferences for.

    "The Long Road" by Sandro Mancuso


    This talk addressed one of the most important aspects of our professional life: building a career path. How should we approach it, and what actions should we take? As a rule, most advice in this area is rather vague, and it is sometimes hard to imagine how to apply it in your own life.

    I am sure there are many books, manuals, methodologies, talks, and other materials on this subject, many of which we are already familiar with, but this particular talk somehow stuck in my mind.

    The author suggests looking at your professional path as a ladder in which each step is a current or former job. Building a career is directly related to what this ladder looks like and to the reasons why new steps appear on it.


    Source

    A large company can offer an employee many career options and entice him with millions of different KPIs. But it happens that an employee, having climbed several positions up the career ladder, is disappointed, because he built his career according to KPIs that served the company's goals rather than his own plans. And then he has to go back down a couple of steps.

    Hence the conclusion: periodically we need to slow down and evaluate our activity from the perspective of our long-term goals, to understand how our current actions help us achieve those goals and whether the goals need adjusting based on the experience gained along the way.

    The choice of the next job is influenced by many things: the company's success, social benefits, the size of the team and its average age. All of this is very important, but none of it guarantees that we will actually grow at that company.

    It is worth considering the strategic points: the company's position in the market (both local and international), its goals and ambitions, the business domain it operates in, and, most important, the team of specialists you will become part of.

    Unfortunately, many technical specialists choose a particular workplace based only on the short-term improvement in social status the company will bring them, while remaining completely or partially ignorant of its global goals and never getting to know the team.

    Often the choice is made with the help of a team of recruiters, whose main task is to fill the vacancy, not to look after your development. Recruiters' help is undoubtedly needed, but the initiative should come first from us, based on our interest in a particular business domain and the desire to work with the best people in order to develop in line with our long-term goals.

    You can find more of Sandro Mancuso's thoughts in the video of his talk.

    "Measuring the DevOps: The Key Metrics that Matter" by Anders Wallgren


    When developing software, we sometimes get so carried away by an idea that everything else ceases to exist for us. The idea looks like a flaming torch, and we reach out toward it. Success seems very near, but suddenly the ground gives way underfoot, the burning torch turns out to be an unattainable star, and we are left alone with our product and the question “What went wrong?”

    All right, in real life things are not quite so dramatic. But how many times has it happened that, tackling a new, long-awaited, complex, and interesting task, we dive deep into it, and only after a couple of hours (or days, or, believe it or not, I have even seen weeks, depending on the size of your organization and the extent of your guilt) does the Scrum master remind us that it is time to explain what exactly we are doing and when to expect the result.

    At this stage, we realize there is no time left for reflection; grand goals and thoughts are postponed until next time, and we get down to hard work.

    Such cases are not uncommon in teams and companies of any level and profile, and they do not always depend on the experience or professionalism of the technical specialists working there. Often the culprit is a lack of understanding of where you are now, the path you still have to travel, and the final goal, which may not be so obvious on closer inspection.

    So, part of the time we devote to completing a task is spent not quite as originally planned, and it is not always clear where exactly it goes. And when we do not understand something, the easiest way forward is to start collecting information about our activities. This is where metrics help us. The specialists at Electric Cloud told us more about all of this.



    This figure depicts the scientific approach to working with an observation. I am sure it is familiar to many and in constant use, because there is really nothing innovative in it. It is just a sequence of logical steps that is guaranteed to lead us to an understanding of what we are observing.

    The key here is to follow the sequence of steps and not try to skip any of them, because each one helps us understand the problem more thoroughly and move toward the final goal more deliberately.

    But back to metrics, or rather their types:

    • the effectiveness of the development and implementation of features;
    • the impact of features on customer satisfaction;
    • the impact of features on the product and our business;
    • employee satisfaction.

    Most of us face the task of optimizing the development and testing process and the rollout of new functionality, and here the metrics from the first group come to our aid.

    As a rule, the process of developing any new feature can be reduced to the following set of stages (a pipeline).

    So, we have five stages of developing a typical project:





    • development;
    • testing;
    • deployment;
    • implementation;
    • maintenance.

    If your process has not yet been divided into parts like these, this may be the first step toward more conscious control over the time it takes to develop and ship a feature.

    Once the process is divided into separate pieces, we can measure the time each of them takes and find out which stages consume the most time. Then, with a set of observations and the diagram above in hand, we can carry out a number of improvements and use the pipeline to assess how our changes affect the entire development process.

    Thus, we get the opportunity to iteratively tune the process of developing and delivering new features, like a sports-car engine, to optimal working conditions. In the future, this will help us win the race for the best product.
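    To make the idea concrete, here is a minimal sketch (the class name and stage durations are illustrative, not from the talk) of how per-stage timings could be collected and the bottleneck stage identified:

```java
import java.time.Duration;
import java.util.LinkedHashMap;
import java.util.Map;

// Collects how long each pipeline stage takes for a feature and
// reports the slowest stage as the current bottleneck.
public class PipelineTimings {
    private final Map<String, Duration> stageDurations = new LinkedHashMap<>();

    public void record(String stage, Duration duration) {
        // Accumulate durations in case a stage runs more than once.
        stageDurations.merge(stage, duration, Duration::plus);
    }

    public String bottleneck() {
        return stageDurations.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow(() -> new IllegalStateException("no measurements yet"));
    }

    public static void main(String[] args) {
        PipelineTimings timings = new PipelineTimings();
        timings.record("development", Duration.ofDays(5));
        timings.record("testing", Duration.ofDays(8));
        timings.record("deployment", Duration.ofHours(2));
        timings.record("implementation", Duration.ofDays(1));
        timings.record("maintenance", Duration.ofDays(3));
        System.out.println("bottleneck: " + timings.bottleneck());
    }
}
```

    With measurements like these in hand, each change to the process can be judged by whether the bottleneck stage actually shrinks from one iteration to the next.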

    Do not forget that each feature you ship serves certain goals of your company. The following can serve as metrics here.

    • The cost of attracting a new customer. If new features are relevant to the market and environment, shipped in a timely manner, and the number of bugs is reasonably small, then customer loyalty rises, your popularity in the market grows, and, as a result, the cost of attracting a new client goes down.
    • Profit. Everything is quite simple here: more customers, lower maintenance costs.
    • Market share. It is always useful to assess the environment soberly.

    The product must meet the goals of your customers/users. Here the metrics are as follows.

    • Satisfaction with the product. The continuous collection of feedback allows you to identify many nuances that can affect both the development process and the alignment of the process of communication with customers.
    • Benefit from features. Our ideas, unfortunately, do not always match the needs of the business domain we are designing our solution for, and the value of a product lies not only in its quality but also in its ability to adapt dynamically and respond to the needs of the market. So if some feature does not bring the expected benefit, that is a good marker for revising it.
    • Feature usability. Often a user cannot appreciate our idea because it is something new, something nobody has done before us and the business has not seen anywhere. People are often rather conservative and skeptical of everything new, but if we collect usability information, we can hold timely workshops on such functionality and thereby help our clients' businesses achieve better results with it.

    Google has a rule: if the impact of a change cannot be assessed in terms of business value, the change is inappropriate. That is why everything there is covered in metrics.

    We ourselves use some of the described approaches when developing our product (1DMP). The best part is that, with some of these metrics in hand, we can quantitatively and qualitatively evaluate the changes we make to the development, rollout, and maintenance of our product. This allows us to fine-tune the process with each iteration and move toward the goal faster.

    More details on this topic can be found in the speakers' video.

    “Scaling Event Sourcing for the IoT and mobile” by Lorenzo Nicora


    The last talk of the day promised to tell us about the subtleties, nuances, and approaches used in building highly loaded event-processing systems. The talk's description was full of terms, both relatively recent (reactive design patterns, the actor model) and more fundamental ones that are widely used but no less interesting (DDD, CQRS, Event Sourcing).

    In many articles, the application of one or another approach in the context of various tasks causes fierce debate, which further fuels interest in these topics.

    The talk was built on the principle of “from simple to complex,” and its main goal was to make sense of all these terms and figure out what is worth using in which situations, and what is not.

    Everyone knows the problems of scaling a typical information system as the load grows. As a rule, the bottleneck is the database server, which does not scale well under write-intensive loads.



    One common solution to the problem of slow writes to storage is to add an intermediate buffer between the service and the destination storage. This way we can continue to scale our application horizontally and depend less on the speed of adding data to the storage.

    This solution has one big drawback. As soon as we add the intermediate buffer, we lose the guarantee that two operations performed sequentially from the client's point of view will reach our database in the same order; that is, system consistency may be compromised.

    So, we have a source of events whose order matters. We want to keep scaling horizontally, but data consistency and transactional changes are important to us. DDD (Domain-Driven Design) comes to our aid.

    The main idea of this principle is to define a business domain within which data consistency can be guaranteed. In its simplest form, this could be information about a user and the status of their account. Each incoming request then always passes through exactly one process (an aggregate) that serves only this user's data.

    If the action requested by the user is feasible from the point of view of the business logic of our domain model, the process changes its state and generates an event describing the change, which is recorded in the transaction log.

    Thus, if something happens to the process, its state can always be restored from this log. Incidentally, the actor model is particularly well suited to designing systems of this type.
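    As a rough sketch of how such an aggregate might look (the class and event names are hypothetical, not from the talk): each command is validated against the domain rules, the resulting event is appended to the transaction log, and the state can be restored at any time by replaying that log:

```java
import java.util.ArrayList;
import java.util.List;

// An event-sourced account aggregate: commands are validated, events
// describing the change are appended to a log, and state is derived
// from the log rather than stored as the source of truth.
public class AccountAggregate {
    // An event records the change that happened, not the resulting state.
    record MoneyEvent(String type, long amount) {}

    private long balance = 0;
    private final List<MoneyEvent> log = new ArrayList<>();

    public void deposit(long amount) {
        apply(new MoneyEvent("DEPOSITED", amount));
    }

    public void withdraw(long amount) {
        if (amount > balance) {
            throw new IllegalStateException("insufficient funds"); // domain rule
        }
        apply(new MoneyEvent("WITHDRAWN", amount));
    }

    private void apply(MoneyEvent event) {
        balance += event.type().equals("DEPOSITED") ? event.amount() : -event.amount();
        log.add(event); // append to the transaction log
    }

    // If the process dies, its state is restored from the log alone.
    public static AccountAggregate replay(List<MoneyEvent> log) {
        AccountAggregate restored = new AccountAggregate();
        log.forEach(restored::apply);
        return restored;
    }

    public long balance() { return balance; }
    public List<MoneyEvent> log() { return List.copyOf(log); }
}
```

    Note that replay applies events without re-running the validation: events in the log already happened, so they are facts, not requests.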



    So, an incoming request is processed, an event is generated from it and recorded in the transaction log. But what if the results of incoming events need to be presented in some new form, different from the state of the process? (For example, to build reporting.)

    The CQRS approach (command-query responsibility segregation) comes to the rescue; as the name suggests, the write and read paths are separated. Since the system has a full log of the events generated as users acted on it, it is not hard to re-read this log into any form optimized for the reading task.
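    A minimal sketch of the read side (the names are illustrative): the same event log is folded into a view optimized for queries, here a per-event-type report:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A CQRS read-model projection: the write side only appends events;
// this separate view is built by replaying the log into a shape that
// is cheap to query, independently of the aggregate's own state.
public class ReadModelProjection {
    record Event(String type, long amount) {}

    // Replaying the full log lets us build any read-optimized view after the fact.
    public static Map<String, Long> totalsByType(List<Event> log) {
        Map<String, Long> report = new LinkedHashMap<>();
        for (Event e : log) {
            report.merge(e.type(), e.amount(), Long::sum);
        }
        return report;
    }
}
```

    Because the projection is derived entirely from the log, it can be rebuilt from scratch whenever reporting requirements change, without touching the write path.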



    But do not forget that the system is distributed across different application instances, which may live on different servers. This adds the overhead of routing each request to the particular instance hosting the process that handles this domain object. In turn, this makes the system more sensitive to repartitioning.

    It should be noted that this approach is justified only if the order of arrival matters for processing requests and the source of the requests guarantees their consistency. A good example of order-sensitive requests is withdrawals from and deposits to a bank account.

    In some cases, the request source cannot guarantee the sequence and order of requests, for example, IoT device sensors and the signals they send. Part of the time these devices may be offline or in power-saving mode (they buffer information and periodically send it to the server).

    Another good example is mobile games that provide their main functionality even when the device is offline. When a connection appears, they send player statistics to the server to be accounted for later in the game mechanics. In this case, processing an incoming request is reduced to basic validation and writing it into a large analytical store for further analysis. This removes the need to keep state in the service and lets us scale the request-processing service horizontally in the standard way. This approach is known as Weak Write Consistency.



    The received events are then analyzed by various tasks that arrange them in the desired order and make certain decisions on that basis.
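    A small sketch of this scheme (the class and field names are hypothetical): ingestion only validates and appends readings as they arrive, possibly out of order, and a downstream task restores the true order using the timestamp recorded by the device itself:

```java
import java.util.Comparator;
import java.util.List;

// Weak write consistency, sketched: the stateless ingestion path accepts
// readings in whatever order they arrive; a downstream analytics task
// later rebuilds the real sequence from the device's own timestamps.
public class SensorIngestion {
    record Reading(String deviceId, long deviceTimestamp, double value) {}

    // Stateless validation at ingestion time: no ordering guarantees needed.
    public static boolean isValid(Reading r) {
        return r.deviceId() != null && !r.deviceId().isEmpty() && r.deviceTimestamp() > 0;
    }

    // A downstream task restores the order the device actually observed.
    public static List<Reading> inDeviceOrder(List<Reading> arrivedOutOfOrder) {
        return arrivedOutOfOrder.stream()
                .filter(SensorIngestion::isValid)
                .sorted(Comparator.comparingLong(Reading::deviceTimestamp))
                .toList();
    }
}
```

    Because the ingestion path keeps no per-device state, it can be scaled horizontally like any ordinary stateless service.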

    More details on this topic can be found in the video of the talk.

    And so the third day of the conference flew by almost unnoticed. Most of the talks were behind us, our heads were full of thoughts, and the notebooks we had been given were covered in notes. We spent the rest of the evening in one of London's pubs with a mug of good ale and a piece of delicious steak.

    Advice for those who flew to London not only for the conference
    The conference program turned out to be packed, and we did not want to miss anything interesting, so only the evenings were left for exploring the city. A walk through London in the evening or at night leaves the most vivid impressions.

    After ten in the evening the streets slowly empty, the street lamps come on, filling everything around with warm yellow light, and through the windows you can glimpse living rooms full of books. All this creates the feeling that you are walking in warm slippers through the pages of Arthur Conan Doyle or Charles Dickens, and puts you in a special mood.



    Just remember, if you are out walking in October, to put on a windbreaker and take an umbrella. One evening, as we left the hotel, the sky was full of stars and the moon shone brightly, but toward the middle of our walk a light drizzle began. It did not spoil our excursion, but it would have been more pleasant under an umbrella.

    We walked past Madame Tussauds and along the famous Notting Hill, saw real English parks, and concluded that we should definitely come back in the future to have more time to explore England's cultural heritage in daylight.


    Day four. Blockchain using Java




    “Developing Java Applications on the Blockchain with web3j” by Conor Svensson


    During the seminar, Conor spoke about the main milestones in the development of blockchain technologies. The talk turned out to be quite detailed, and it was interesting to learn that one of the trends in the blockchain world, which many corporations are already working on, is building distributed private blockchains. In addition, some of the technological solutions have already taken shape as libraries and frameworks gathered under the umbrella of the Hyperledger project, formed with the participation of the Linux Foundation in 2015.

    During the seminar we learned the basic principles of blockchain operation using the Ethereum network as an example. The seminar's code base was built on the open-source web3j library, which serves as a bridge between smart contracts written in Solidity and the Java virtual machine.

    The seminar participants completed the following tasks:

    • created a test wallet;
    • connected to the test network;
    • wrote the simplest smart contract and published it on the Ethereum Blockchain;
    • wrote a small service that interacts with the published smart contract.

    Since all the seminar materials are publicly available on GitHub, anyone can, like us, work through the seminar and understand the nuances of Ethereum smart contracts using the Java API.

    A video with content similar to the seminar has appeared in the public domain.

    Finally


    mpryakhin and I still remember this trip with great pleasure. While preparing the review, we went over the materials and talks more than once and never ceased to be surprised: each time we noticed something new that we had missed while sitting in the audience. A trip to a conference, especially one this long, is always a big event in a developer's life. We hope our review will be useful to you. The conference slides are available here.