Conducting containers: how the Kubernetes and Istio bundle works

    Our conference on DevOps tools and approaches is tomorrow, which means it is time for the final interview! This time we put several questions to one of the leaders of Google's development teams about how the Kubernetes and Istio bundle works; Istio's 1.0 release is scheduled for early next year.

    Craig explains why it is worth deploying in containers even on a single machine, when to bring in an orchestration system, what alternatives to Kubernetes exist, and what lies ahead. Details are under the cut.




    Craig Box is an expert and the head of one of the divisions in Google Cloud. His responsibilities include working with platforms, collecting user feedback, and liaising with engineers. He started as a system administrator, then moved into development, implementation, DevOps, consulting, and management.

    - Please tell us a little about your talk. Is the bundle you are talking about already used in production, or is it still a concept? How mature is Istio?

    Craig Box: Istio is an open-source service mesh that decouples operations from development. You can think of it as a network of services rather than of bytes and packets.

    When people move their applications from a monolithic process to microservices, they introduce a network between many distributed endpoints, and that network does not always work reliably. Some companies, notably Netflix, solved the networking problems of microservices with a library that has to be included in every microservice. But since microservices allow each component to be written in any language, the library must be maintained in several languages, and that soon became a problem of its own.

    Istio lets you manage existing and new services, written in any language, in a single way. It helps you manage, monitor, and secure microservices in any language, deployed anywhere.

    The administrator defines the rules (for example, "send 10% of the backend traffic to my service" or "make sure that all traffic between A and B uses mutual TLS"). Istio places a proxy in front of each service that is programmed to enforce these rules. And even without defining anything, simply installing Istio on a Kubernetes cluster immediately gives you a rich arsenal of traffic monitoring and distributed tracing between your endpoints.
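    For illustration, such a traffic rule can be written as an Istio routing resource. The sketch below uses the later VirtualService API (the 0.2-era releases expressed this with RouteRule objects), and the service name, subsets, and weights are hypothetical:

```yaml
# Hypothetical rule: send 10% of traffic for the "backend" service to
# subset v2 and the remaining 90% to v1. The subsets themselves would
# be defined in an accompanying DestinationRule.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
        subset: v1
      weight: 90
    - destination:
        host: backend
        subset: v2
      weight: 10
```

    The mutual-TLS rule would be a separate authentication policy; the proxies enforce both without any change to the application code.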

    The proxy used by Istio is called Envoy. It was written by a Lyft team led by Matt Klein. Lyft has been using Envoy in its projects for a long time, and over that time its operation has been proven in systems of various sizes.

    The Istio community initially included Google, IBM, and Lyft; after many other members joined, version 0.2 was released. We see it as a "beta version" and plan limited production use around version 0.3 at the end of this year, while release 1.0, which will support even more environments, is scheduled for 2018.

    Istio was originally designed with Kubernetes in mind, and Kubernetes is, of course, a mature, ready-to-use product. The research firm RedMonk says Kubernetes is used by 54% of Fortune 100 companies, which represents 75% of the Fortune 100 companies that use containers.

    - Unlike most DevOps practices, the need for orchestration systems is not always obvious, especially for relatively small teams and projects. Is everyone better off with them, or do such systems only pay off where there are microservices and a large number of machines?

    Craig Box: Even if you only have one service on one machine, containers still give you advantages: you can deploy your application with all of its dependencies as one atomic unit, be sure that at no point does it consume more resources than it is allowed, and easily roll back any change.
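    As a minimal sketch of that atomic unit (the base image, file names, and commands here are illustrative), a Dockerfile packages the application together with its dependencies:

```dockerfile
# Illustrative Dockerfile: the application and all of its dependencies
# ship as a single, versioned, atomic unit.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

    Resource caps are applied at run time (for example, `docker run --memory=512m --cpus=1 myapp:1.4.0`), and rolling back is simply starting the previous image tag.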

    Even if you only have one container, Kubernetes provides a great API for managing the life cycle of that container on any machine; it takes care of the abstractions below your application, such as memory and networking, and offers an API server that your deployment tools can drive easily.
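    That single-container case can be sketched as a minimal Pod manifest (the names, image, and limits are illustrative):

```yaml
# A minimal Pod: Kubernetes manages the container's lifecycle and the
# abstractions below the application, such as memory and networking.
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.4.0
    resources:
      limits:
        memory: "512Mi"
        cpu: "500m"
    ports:
    - containerPort: 8080
```

    The same manifest, applied with `kubectl apply -f pod.yaml`, works against any conforming cluster, which is what makes the application portable.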

    You now have a portable application that you can move between environments. When your application is configured, you can move on to scaling.

    - By what signs can one tell that an orchestration system is now needed, if the project originally started without one?

    Craig Box: A system like Kubernetes is managed through the API from start to finish. You can automate everything from cluster creation with a service such as Google Container Engine to deploying and updating applications on that cluster.

    One of the most common reasons people do this is to increase their rate of change. It goes something like this: moving to containers, orchestration, and microservices in the cloud reduces the risk of any single deployment, which lets you keep the service running continuously. Once you are confident in your process, you can adopt patterns such as canary deployment (where at first 1% of the traffic goes to the new version, then 5%, then 20%, and so on), and roll back if something goes wrong.
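    One way to approximate such a canary without a service mesh is the plain-Kubernetes pattern below (all names, images, and replica counts are hypothetical): a Service selects pods from two Deployments, so traffic splits roughly in proportion to replica counts, here about 10% to the new version.

```yaml
# Stable track: 9 replicas of the current version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: myapp, track: stable}
  template:
    metadata:
      labels: {app: myapp, track: stable}
    spec:
      containers:
      - name: myapp
        image: myapp:1.3.9
---
# Canary track: 1 replica of the new version.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, track: canary}
  template:
    metadata:
      labels: {app: myapp, track: canary}
    spec:
      containers:
      - name: myapp
        image: myapp:1.4.0
---
# The Service selects on "app" only, so it balances across both tracks.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 8080
```

    Going back if something goes wrong is then just scaling the canary Deployment to zero.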

    Our customers have told us that they can go from one deployment per month to tens per day. This means new code reaches users faster, which ultimately makes the business more agile.

    - Are there any serious alternatives to Kubernetes? Or is its model so good and universal that alternatives are not really needed, and in the end only one will remain?

    Craig Box: Kubernetes is an evolution of the clustering model created by Google. We published papers about our work that influenced other systems in this space, for example Mesos.

    Around the same time that Kubernetes came about, other similar systems appeared. Many of them have since moved onto Kubernetes (for example, OpenShift, Deis Workflow, and, more recently, Rancher).

    One of the reasons Google decided to create Kubernetes and release it under an open license was that everyone needed a standard API that works across all providers.

    Kubernetes has been called the Linux of the cloud. While Linux is in some ways the de facto choice when you need Unix, there is still an ecosystem of other open-source operating systems for various needs - not to mention the closed-source world and Windows.

    Similarly, we created a fantastic set of APIs in Kubernetes that lets you manage all kinds of applications: from stateful web services to batch workloads for ML models with big data and GPUs. We also spent a lot of time building extension points into Kubernetes that let you define object types we never even thought of. We see Kubernetes as the core of the ecosystem and something you can expand.

    - How difficult is it to maintain existing orchestration systems? Can you quickly set them up, launch them, and forget about them? Or is there constant tuning, cluster failures, and other problems?

    Craig Box: Each user will have their own reasons for using clusters. Some will share clusters for maximum resource utilization and may want them to live for a long time. Others will spin them up only when needed and give each business unit its own cluster. People working with a fixed set of machines see their use differently from people working in the cloud, where machines can be added automatically as needed.

    We designed Kubernetes to work well in all situations, and our product, Google Container Engine, allows you to deploy a cluster in less than three minutes.

    - What tasks cannot be solved, or are difficult to solve, with existing orchestration systems? Where is their development heading?

    Craig Box: Container orchestration systems are well suited to general, dynamic workloads in scalable systems. If you run a single commercial database on a single machine and it uses all of that machine's resources, we would recommend not changing anything: if that server fails, the effort and cost of having moved it can exceed the effort and cost of repairing it. With that in mind, our recommendation is simply to connect to it as an external service from inside your cluster.
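    In Kubernetes, such an external database can be given an in-cluster name with an ExternalName Service (the service name and hostname below are hypothetical), so applications in the cluster talk to it like any other service:

```yaml
# Maps the in-cluster name "orders-db" to an external hostname via DNS
# (a CNAME record); the database itself stays where it is.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  externalName: db1.example.internal
```

    Clients in the cluster simply connect to `orders-db` on whatever port the database listens on; no pods or endpoints are involved.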

    Similarly, when you have a managed service, such as Google BigQuery, you do not need to start the data warehouse in your cluster.

    Istio recognizes that not all applications will run on Kubernetes. As of version 0.2, you can add services running on both virtual and physical machines.

    Oracle has published scripts for hosting Oracle Database in a container, while Microsoft went a step further and published SQL Server images on Docker Hub. If you want to move such workloads, it is more than possible!

    - What about stateful services? Should we expect the emergence of a universal mechanism?

    Craig Box: The MVP of Kubernetes hosted stateless web services, but we always thought about what we would need to do to run stateful workloads as well.

    We worked on the "StatefulSet" concept, which gives each member an ordinal (numbered) identity and its own storage. On top of that, you can write "operators" for applications that know what has to happen in order to add or remove a member from the cluster.
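    A minimal StatefulSet sketch (the image, sizes, and names are illustrative, and it assumes a headless Service called "db" exists) shows both ideas: stable ordinal identities db-0, db-1, db-2 and a per-member volume claim:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db   # headless Service giving each member a stable DNS name
  replicas: 3
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: db
        image: cassandra:3.11    # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:
  - metadata:
      name: data    # each member gets its own PersistentVolumeClaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

    Members are created and removed in ordinal order, and each one keeps its own storage across rescheduling.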

    The Kubernetes community has developed a number of templates for common open-source applications such as Cassandra and MongoDB, and, as I said, many companies are migrating traditional enterprise applications to containers. We hope that over time every such community will treat Kubernetes as a first-class, supported deployment platform.



    If microservice management is a live issue for you, we invite you to hear Craig Box's talk, Managing your microservices with Kubernetes and Istio, at DevOops 2017.
