Kubernetes developers answer questions from Reddit users

Original authors: Kubernetes developers (translation)


On April 10, an AMA (Ask Me Anything) session took place on Reddit, in which 9 Kubernetes developers from around the world answered questions from users. A total of 326 comments were collected, and we present a translation of some of them, containing the answers to the questions we found most interesting.

For convenience, the questions and answers are grouped into rough categories. So:

General technical questions


Question: Are there any plans to add network limits alongside the existing restrictions on CPU and RAM? Can we also expect autoscaling based on network resources without the use of custom metrics?

Answer #1: At the moment, no scheduler work on network bandwidth is planned. Given the experience with Borg, I doubt that such restrictions would work the way users want. A preferable approach, in my opinion, would be to add something like QoS tiers in which high-priority traffic is favored, but such an implementation has not yet been designed, and it would have to account for the large number of network plugins that Kubernetes supports.

Clarification: network bandwidth is not just another scalar resource. You cannot simply say "this pod must have XX bandwidth," because available bandwidth is a property of a particular network path, not of a single endpoint. It would have to be expressed as a property of a pair ("this pod must have XX throughput to that pod"), which quickly becomes unmanageable in cases that go beyond a few pods and requires deep network integration to implement. TL;DR: we need to think more creatively about network bandwidth.

Answer #2: In fact, there are bandwidth-limiting annotations (kubernetes.io/ingress-bandwidth and kubernetes.io/egress-bandwidth) that can be applied to pods, but whether you can use them depends on your network plugin (for example, the built-in kubenet plugin and the OpenShift SDN plugin support them, but I'm not sure about the rest).
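For illustration, here is a minimal sketch of a pod carrying these annotations; the pod name and image are placeholders, and the limits only take effect if the network plugin honors them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bandwidth-demo            # placeholder name
  annotations:
    kubernetes.io/ingress-bandwidth: "1M"   # limit incoming traffic to ~1 Mbit/s
    kubernetes.io/egress-bandwidth: "1M"    # limit outgoing traffic to ~1 Mbit/s
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
```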

Autoscaling without custom metrics is unlikely. We try to limit the non-custom metrics in the API to resources that can be expressed as limits and requests, so that the API does not keep growing, and for a host of other reasons. However, we also hope to stabilize the names of some of the metrics currently exported by the kubelet, so that scaling on network usage via the custom metrics machinery becomes easier.
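To make the custom-metrics route concrete, here is a hedged sketch of a HorizontalPodAutoscaler (autoscaling/v2beta1, current at the time of the AMA) scaling on a per-pod network metric; the metric name is hypothetical and would have to be served by a custom-metrics adapter such as the Prometheus adapter:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # placeholder Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: network_transmit_bytes_per_second   # hypothetical metric, exposed via a custom-metrics adapter
      targetAverageValue: 1M
```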


Question: What do you think about using completely separate clusters for dev and production environments instead of separating them with namespaces? What do you usually see in real life?

Answer #1: Kubernetes is inspired by Borg, which has a relatively small number of clusters on a large number of machines. I would like to follow that direction. At the same time, in practice I encounter both of these models (and various intermediate versions), and usually there are good justifications for them. Upstream kernels are not perfect at isolation (help is appreciated). Security teams can be meticulous. Multi-tenancy in Kubernetes is at an early stage of development. And so on... Despite all this, I think that the benefits of large clusters will ultimately outweigh the costs.

Answer #2: This is a wonderful question; we hear it almost every week. Unfortunately, clusters are very rarely identical: too many variables are involved. Each additional cluster you introduce therefore adds new risks. Managing one Kubernetes cluster is not easy in itself, and managing several divergent clusters is even harder.

Speaking of what I see in real installations, many clusters for test/staging/dev is a fairly common pattern. Ultimately, I agree that you should not run code under development in the same place as production. Namespaces are wonderful; you would be surprised how much the large cloud providers use them.
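As a hedged sketch of the namespace-based approach, environments can be separated with namespaces plus per-namespace quotas; the names and quota values below are illustrative:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
---
# Illustrative quota keeping the dev environment from starving the others
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    pods: "50"
```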


Question: What factors influence the maximum supported number of pods per node (currently 100)?

Answer: Docker scalability, kernel settings, testing, user needs.

Addition: the size of the application, its complexity, and how demanding it is on the various subsystems. I have seen production setups with 300-400 pods per node, running small applications on medium-sized machines (16-32 cores each).

For a long time the container runtime was the bottleneck; now it is usually the network and iptables, and storage can also be a problem.
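If you do want to raise the per-node pod limit, a minimal sketch is the kubelet's maxPods setting, assuming the kubelet is started with --config pointing at a file like this:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 250   # illustrative value; the practical ceiling depends on runtime, network, and storage load
```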


Question: Are there any plans to improve the logging infrastructure? Right now it feels like only a stopgap solution...

Answer: Work is underway to improve the situation and to make it easier to plug in third-party logging solutions. I can't immediately find suitable links, but we are keeping an eye on this.
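The usual way to plug in a third-party solution today is the node-level logging-agent pattern: a DaemonSet that tails the container logs on every node. A minimal sketch, with the collector image as a placeholder:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: collector
        image: fluent/fluentd:v1.2   # any node-level log collector works here
        volumeMounts:
        - name: varlog
          mountPath: /var/log        # container logs are symlinked under /var/log on each node
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```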


Question: Which of the K8s features currently in development are you most excited about?

Answer #1: For me personally: local volumes (I find this work as useful as I hate the need for it), identity (all kinds), the Ingress rework, and the generalization of the API machinery. I am also mulling over how to evolve Services; they are a mess.

Answer #2: There are many of them! See the feature backlog.


Related projects


Question: Are there good GUI frontends for container management and orchestration?

Answer #1: The Kubernetes Dashboard web UI works wonderfully with Kubernetes primitives. It has a "Create" page where you can quickly deploy new Deployments just by filling out a form.

Some distributions (such as OpenShift and Tectonic) have their own web interfaces. Kubernetic is a desktop GUI for Kubernetes that looks similar to the Kubernetes Dashboard. There is even a mobile app, Cabin! If you are looking for something higher-level and application-oriented, there is Kubeapps, through which Helm charts are installed and managed.

Answer #2: Have you seen Weave Scope?


Weave Scope interface


Question: Do you expect kubeadm to become the standard way to create/upgrade clusters (outside of hosted installations)?

Answer #1: Kubeadm is a widely recognized method for deploying Kubernetes clusters, with many options available. Although the Kubernetes community supports many cluster deployment solutions in parallel (mainly because no single solution covers all needs), kubeadm is the solution we usually suggest for deploying production-ready Kubernetes clusters. But I repeat: there are many other solutions for deployment and cluster management, and each has its pros and cons. Kubeadm is just one of them.

Answer #2: I am betting on kubeadm. And I really do bet on it every day. We are working to bring it to general availability (GA) as soon as possible, and it will be really great to finally see some stability in what is, in my opinion, the most fragmented part of Kubernetes: installation.
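For reference, a minimal sketch of a kubeadm configuration as it looked around the time of the AMA (the apiVersion and kind have since changed; later releases use kubeadm.k8s.io/v1beta1 with kind ClusterConfiguration), passed as kubeadm init --config kubeadm.yaml:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.0        # illustrative version
networking:
  podSubnet: 10.244.0.0/16        # must match the CNI plugin installed afterwards
```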


Question: Are there any good guides/utilities to look at my existing clusters and see what access controls I need before enabling RBAC?

Answer: Take a look at audit2rbac: it scans the audit log for unauthorized API requests and creates the corresponding RBAC roles for them.

Update: this presentation on audit2rbac is a good starting point.
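For a sense of what comes out of that process, here is a hedged sketch of the kind of namespaced Role such a tool might generate for a service account that was observed reading ConfigMaps; the name and rules are illustrative, not actual audit2rbac output:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: audit2rbac:myapp           # illustrative name for a role derived from audit events
  namespace: default
rules:
- apiGroups: [""]                  # "" means the core API group
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]  # only the verbs actually observed in the audit log
```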


Question: Which project or integration are you most pleased with?

Answer #1: Istio.

Answer #2: Istio.

Answer #3: Cluster API! Wow, it's finally available!


Project community


Question: Recommend some good resources for those who want to become active contributors to the Kubernetes project(s).

Answer #1: I always say that those who want to contribute should pick one binary (kubelet, controller manager, apiserver, etc.) and start reading from main(). If you can read for an hour without finding anything worth fixing (or at least something to rename for better readability), you are not looking carefully enough.

Answer #2: Starting from main() is my preferred way to learn anything. The Kubernetes code base can seem confusing and huge to newcomers, so I always remind people that there is nothing wrong with adding a few fmt.Printf() calls to the code, recompiling, and running it. Also, some parts of Kubernetes are much harder to run than others. Sometimes you just need to be creative; we have all written crazy bits of code while working on different parts of the system. Bash is your friend.

Answer #3: The best way to start contributing to Kubernetes is the newly created Kubernetes Contributor Guide. It describes in detail almost every aspect of making changes to the Kubernetes project and is the number one resource for those who want to become active contributors. In addition, we regularly hold #meet-our-contributors sessions (in Slack; translator's note), where you can ask questions about the contribution process.


Question: What is the main challenge for Kubernetes in 2018?

Answer #1: Enterprise readiness. This is a classic 80/20 problem. The remaining work is difficult, "dirty," and hard in the good sense of the word.

Question: What is that 20% that is so complicated?

Clarifying answer: Security requirements. Network integration. A long list of features that people think they need. Audit integration. Policy management. Compliance with regulatory requirements. Every enterprise customer has a "status quo" environment, accumulated over the years, that must be reckoned with before you can even begin to be taken seriously.

Update: I think part of the enterprise-readiness problem is teaching people how to operate applications in Kubernetes. The way we work with orchestrated containers differs from the traditional methods used in the enterprise today. Teaching people what they need in order to succeed is a gap that urgently needs filling.

Answer #2: People. Project management is crucial: we have reached a tipping point at which we must figure out how to scale the human factor. We have to work on this, on managing all these people.


Question: What is the weirdest bug in K8s that you found or fixed?

Answer #1: Possibly the one where a one-off collection of metrics for a certain storage class could become infinite, because the method used for it hammered the underlying storage layer, which delayed the timestamps, which in turn caused metrics to go missing in Heapster.

Answer #2: A hung kube-proxy caused "no route to host" ICMP errors, which plunged us into utter bewilderment and a hunt for problems throughout the entire network stack, everywhere except the recipient.


Other ecosystem projects


Question: What advice would you give the Docker Swarm team?

Answer: Take a close look at what went well and what turned out badly, and learn from it. But remember that technical issues are not the only ones that matter: do not underestimate the impact of timeliness and luck. The team has great engineers who do really good work. Swarm is of great value to many users.

Update: Docker Swarm is a great technology, but unfortunately not all of it is Open Source. I agree that timeliness and luck matter a great deal. Docker Swarm is great, and I would like to do a joint project with Kubernetes to help users understand how these paradigms work.


Question: Which other CNCF projects do you enjoy most? Are there existing projects you would be happy to see join them soon?

Answer #1: So many options. I think Prometheus and OpenTracing are great. Envoy also makes me happy. I worked with CNI for a long time, so it would be unfair not to mention it.

Answer #2: Envoy, Jaeger, Prometheus.

Answer #3: I am glad to see that Telepresence has applied to join the CNCF. A lot of great development tools for Kubernetes have appeared recently (such as Draft, Skaffold, and freshpod); I'm waiting for this area to grow!

Answer #4: kubicorn.


List of CNCF projects at the incubating stage (as of April 17, 2018)


Question: What are the plans for OpenShift after the acquisition of CoreOS by Red Hat? Will they be merged or maintained separately?

Answer: Expect a lot of news on this soon. The goal is to take the best parts of Tectonic and combine them with the best of OpenShift, as well as to make all of these pieces usable directly with Kubernetes, simplifying both extending Kubernetes with them and building applications on top of it.


And finally, the most memorable answer, to a question about what advice the developers would give medium and large companies migrating to K8s: "Know what problems you are trying to solve. Don't boil the ocean."

