@Kubernetes Meetup # 3 at Mail.ru Group: June 21

    Since the February Love Kubernetes meetup, what feels like an eternity has passed. In that time we managed to join the Cloud Native Computing Foundation, certify our Kubernetes distribution under the Certified Kubernetes Conformance Program, and launch our Kubernetes Cluster Autoscaler implementation in Mail.ru Cloud Containers.

    It's time for the third @Kubernetes Meetup! In short:

    • Gazprombank will tell how they use Kubernetes in their R&D to manage OpenStack;
    • Mail.ru Cloud Solutions will show how to scale applications in K8S using scalers and how they prepared their implementation of Kubernetes Cluster Autoscaler;
    • and Wunderman Thompson will explain how Kubernetes helps them optimize their development approach and why their DevOps has more Dev than Ops.

    The meetup will take place on June 21 (Friday) at 18:30 in the Moscow office of Mail.ru Group (Leningradsky Prospekt, 39, bldg. 79). Registration is required and closes on June 20 at 11:59 a.m. (or earlier if seats run out).

    “Kubernetes for developers: how many Dev are there in DevOps?”

    Grigory Nikonov, Wunderman Thompson, Managing Director

    We do not have clusters of 500 nodes. We do not have hardcore DevOps. We do not have dedicated product teams. But we do have many interesting projects and the answers we found to questions that arose while developing and supporting them. First of all, we are developers, used to building the tools we will later use ourselves. Perhaps they will help you in your work.

    The Wunderman Thompson agency is one of the pioneers of Internet solution development in Russia, and today it builds everything from simple landing pages to complex distributed systems. Kubernetes helps it optimize its approach to development and, for the agency's customers, the hosting and operation of the solutions it creates.

    In distributed systems with many integrations and internal components, a microservice architecture is a natural response to the requirements for updating and maintaining the solution. However, the transition to such an architecture raises a whole series of problems related to versioning and publication. Because we are an agency rather than a dedicated product team, our developers do not constantly keep the detailed context of a specific solution on their machines. This imposes its own requirements: the development environment must be reproducible, several teams must be able to make changes at the same time, and it must be easy to return to a project after a while. Our answer to these challenges is the set of processes and tools we have built, which make it easier for our developers and DevOps engineers to develop and maintain solutions.

    You will find out why our DevOps has more Dev than Ops, and how laziness lets you reduce the time and cost of development and support, as well as:

    • how Kubernetes has changed our approach to project development;
    • what the life cycle of our code looks like;
    • what tools we use for the controlled publication of microservices;
    • how we solve the problem of assembling obsolete artifacts;
    • how we deploy to the cluster with pleasure.

    “Scaling applications with Kubernetes Cluster Autoscaler: the nuances of Autoscaler and the Mail.ru Cloud Solutions implementation”

    Alexander Chadin, Mail.ru Cloud Solutions, PaaS Services Developer

    In today's world, users take it for granted that your application is always online and always available, which means it can withstand any traffic stream, no matter how large. Kubernetes offers a rather elegant solution that lets the cluster scale itself according to the load: Kubernetes Cluster Autoscaler.

    In general, Kubernetes offers two levels of scaling, depending on what is being scaled: the application or the cluster. Horizontal scaling of the application, when we increase the number of replicas within the existing nodes, and the more complex scaling of the cluster itself, when we increase the number of nodes.

    In the second case we can run even more copies of the application, which ensures its high availability. We will talk about node-level scaling with Cluster Autoscaler. It can not only increase but also reduce the number of nodes depending on the load: for example, once a load peak has passed, Autoscaler will itself shrink the cluster to the required number of nodes, and with it the bill for the provider's resources.
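    To make the two levels concrete: replica-level scaling is usually driven by a HorizontalPodAutoscaler, and when the resulting pods no longer fit on the existing nodes, Cluster Autoscaler adds nodes. A minimal HPA manifest might look like this (the Deployment name `my-app` and the thresholds are invented for the example):

    ```yaml
    # Hypothetical example: keep between 2 and 10 replicas of the
    # Deployment "my-app", targeting 70% average CPU utilization.
    # If new replicas cannot be scheduled on existing nodes,
    # Cluster Autoscaler provisions additional nodes for them.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70
    ```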

    At the meeting we will tell you more about the nuances of Kubernetes Cluster Autoscaler, as well as what difficulties we encountered when launching our implementation of Cluster Autoscaler as part of the Mail.ru Cloud Containers service. You will learn:

    • what scalers exist in Kubernetes and the peculiarities of their use;
    • what you should pay attention to when using scalers;
    • how we segmented nodes by availability zones using Node Groups;
    • how we implemented support for Kubernetes Cluster Autoscaler in MCS.
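    For context on the node-group segmentation mentioned above, the upstream Cluster Autoscaler is told about its node groups via command-line flags, one `--nodes=min:max:name` flag per group. A rough sketch of the relevant Deployment fragment (the group names, sizes, and image version here are invented; the MCS implementation may differ) could look like:

    ```yaml
    # Fragment of a cluster-autoscaler Deployment spec. The two node
    # groups "group-az1" and "group-az2" are hypothetical, one per
    # availability zone, so replacement capacity lands in the right zone.
    containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
        command:
          - ./cluster-autoscaler
          - --nodes=1:10:group-az1
          - --nodes=1:10:group-az2
          - --balance-similar-node-groups   # spread scale-up across similar groups
          - --scale-down-enabled=true       # shrink the cluster after load peaks
    ```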

    “Gazprombank R&D: how K8S helps manage OpenStack”

    Maxim Kletskin, Gazprombank, product manager

    In a world where the trend is everything as a service, time-to-market is above all. Applications need to be developed quickly in order to test hypotheses and enter new markets at the moment they first form. Speed is especially important for banks, and new technologies help here, in particular containerization and Kubernetes.

    Maxim Kletskin is a product manager at Gazprombank, where he is building a sandbox for launching pilot products. Gazprombank R&D conducts its research in its own cloud, which runs OpenStack. Kubernetes is used in two ways: 1) Kubernetes on bare metal as the management layer of the OpenStack cloud, and 2) OpenShift, a K8S distribution, for development.

    In the talk we will cover the first case and find out how Gazprombank uses Kubernetes to manage OpenStack. If you look at the OpenStack architecture, you can see that it is quite modular, so using Kubernetes as the OpenStack control layer looks both interesting and logical. In addition, it makes it easier to add nodes to the OpenStack cluster and increases the reliability of the control plane. And, as a cherry on top, it simplifies collecting telemetry from the cluster.
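    In practice, running OpenStack's control plane on Kubernetes means packaging each OpenStack service as its own workload (projects such as openstack-helm take this approach). A heavily stripped-down sketch for the Keystone identity service, with the image and names invented for illustration, might look like:

    ```yaml
    # Hypothetical sketch: the Keystone API server as a Kubernetes
    # Deployment. Real setups (e.g. openstack-helm) template this with
    # Helm and add ConfigMaps, Secrets, and database-migration Jobs.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: keystone-api
      namespace: openstack
    spec:
      replicas: 3                    # redundancy for the control plane
      selector:
        matchLabels:
          app: keystone-api
      template:
        metadata:
          labels:
            app: keystone-api
        spec:
          containers:
            - name: keystone
              image: example-registry/keystone:stein   # placeholder image
              ports:
                - containerPort: 5000                  # Keystone public API
    ```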

    You will learn:

    • why a bank needs R&D: to test and experiment;
    • how we containerize OpenStack;
    • how and why to deploy OpenStack in K8S.

    After the talks we will smoothly switch to the @Ku-beer-netes After-Party format, and we have also prepared some cool announcements for you. Be sure to register here; we review all applications within a couple of days. We announce new events of the @Kubernetes Meetup series and other Mail.ru Cloud Solutions events in our Telegram channel: t.me/k8s_mail

    Want to speak at the next @Kubernetes Meetup? You can leave an application here: mcs.mail.ru/speak
