Introducing the kubedog library for tracking Kubernetes resources

    We are glad to announce kubedog, a new open source project by Flant for DevOps engineers (and not only them). It is a library written in Go, plus a CLI built on top of it, for tracking the events of Kubernetes resources and collecting their logs.


    The library currently supports tracking of the following resources: Pod (and Container), Job, Deployment, StatefulSet, and DaemonSet. Events and logs are transmitted via callbacks.

    The kubedog CLI has two modes of operation:

    • rollout track - track the resource until it reaches the Ready state and exit on error; convenient for use in CI/CD;
    • follow - print events and logs to the screen without exiting, similar to tail -f.

    Problem


    Why did we start writing a new library when similar projects already exist (see “Working with logs” in this review)? Kubedog is used in our DevOps utility dapp to track the rollout of Helm charts. Helm itself cannot monitor the state of the resources it adds, and passing logs is not provided for at the level of the gRPC interaction between Helm and Tiller. We have an issue about this, 3481, in which we also implemented tracking of added resources... However, the Helm project is now reluctant to add new features to Helm 2, since all efforts are focused on the new version, Helm 3. For this reason, we decided to split kubedog off into a separate project.

    What do you need from the resource tracking library?

    • Get the logs of Pods that belong to a resource - for example, to a Deployment.
    • React to changes in the set of Pods belonging to the resource: attach the logs of new Pods, stop collecting logs from Pods of the old ReplicaSet.
    • Track Events, which carry the explanations of various errors: for example, a Pod cannot be created because of an unknown image, or a Pod was created but the command specified in the template does not exist in the image.
    • And one more requirement: track the transition of a resource from the rollout state to the ready state. Each resource has its own conditions for this.

    As you can easily guess, in kubedog we tried to take all of the above into account.

    Ideally, when starting work on something new, you first analyze the existing solutions. But it turned out that although there are many solutions in the form of CLI tools, there are simply no Go libraries. Therefore, we can only offer a small comparison of the main features of the existing CLI utilities for tracking Kubernetes resources.

    Existing Solutions


    kubespy


    GitHub

    • Can follow only Deployment and Service; reacts to new Pods.
    • Has a mode for tracking a resource's description and status, with changes shown as a JSON diff.
    • Has a colored tabular view of changes where you can see the status of ReplicaSets and conditions.
    • Does not show Pod logs.
    • Written in Go, but cannot be used as a library.

    kubetail


    GitHub

    • A Bash script that calls kubectl.
    • Can show the logs of existing Pods.
    • Does not detect new Pods: if a rollout happens, kubetail needs to be restarted.

    stern


    GitHub

    • Shows Pod logs filtered by a pod-query.
    • Discovers new Pods.
    • Log lines are colored for better readability.
    • Shows events of Pods being added and removed, with the names of their containers.
    • Does not follow Events, so it does not show the cause of Pod errors.
    • Written in Go, but cannot be used as a library.

    kail


    GitHub

    • Can show logs simultaneously from different namespaces and from different resources.
    • Does not monitor Events and does not show the cause of errors - for example, for a Deployment.
    • Does not colorize Pod logs.
    • Written in Go, but cannot be used as a library.

    k8stail


    GitHub

    • Selects Pods by namespace and labels.
    • Detects the appearance of new Pods and their removal.
    • Does not follow Events, so it will not show the cause of an error.
    • Written in Go, but not usable as a library.

    kubedog


    GitHub

    • The CLI operates in two modes: endless tracking and tracking until a resource transitions to READY status.
    • Monitors one resource.
    • Reacts to changes in the resource and subscribes to the logs of new Pods.
    • Can follow a Deployment, StatefulSet, DaemonSet, Job, or an individual Pod.
    • Written in Go; it can be used as a library to add resource status monitoring to your program and to get logs from containers.

    If you look closely, you can see that each utility outperforms its rivals in some respect, but there is no clear winner that can do everything the others do.

    So, kubedog!


    The essence of kubedog's operation is as follows: for the specified resource, it starts watchers for Events and for the Pods belonging to the resource, and when a Pod appears, it starts a collector for its logs. Everything that happens to the resource is passed to the client by calling callbacks.

    Consider the example of a DaemonSet as it is seen from code that uses the library. The callback interface for Deployment, StatefulSet, and DaemonSet is the same* - ControllerFeed:

    type ControllerFeed interface {
        Added(ready bool) error
        Ready() error
        Failed(reason string) error
        EventMsg(msg string) error
        AddedReplicaSet(ReplicaSet) error
        AddedPod(ReplicaSetPod) error
        PodLogChunk(*ReplicaSetPodLogChunk) error
        PodError(ReplicaSetPodError) error
    }

    * The exception is AddedReplicaSet, which makes sense only for Deployment (you can simply skip defining this method when monitoring a DaemonSet).

    Explanations for the other interface methods (a brief sketch of implementing them follows this list):

    • Added corresponds to the watch.Added event of the observer for the selected resource;
    • Ready is called when the resource reaches the Ready state (for example, for a DaemonSet this is the moment when the number of updated and available Pods matches the "desired" number of Pods);
    • Failed is called when the resource is deleted or when an Event arrives with the cause and description of an error (for example, FailedCreate);
    • EventMsg is called for each Event received for the resource or its Pods: these are events about the creation of the resource, about pulling an image, etc., including error messages;
    • AddedPod is the method with which you can catch the moments when new Pods are created;
    • PodLogChunk is called when the next chunk of logs arrives from the Kubernetes API;
    • PodError is called in case of a Pod error.
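
    To make the interface more concrete, here is a minimal sketch of a feed that implements every method of ControllerFeed but only reports problems: real work happens in Failed, EventMsg, and PodError, while the rest are no-ops. The tracker.* type names come from the interface above and from the example further down; the internal fields of ReplicaSetPod and ReplicaSetPodError are not shown in this post, so the sketch prints those values as a whole.

    // A hypothetical feed that only reports problems; all other callbacks are no-ops.
    type errorReportingFeed struct{}

    func (f *errorReportingFeed) Added(ready bool) error { return nil }
    func (f *errorReportingFeed) Ready() error           { return nil }

    // Failed: the resource was deleted or a failure Event (e.g. FailedCreate) arrived.
    func (f *errorReportingFeed) Failed(reason string) error {
      fmt.Printf("FAIL: %s\n", reason)
      return nil
    }

    // EventMsg: every Event for the resource or its Pods, including error messages.
    func (f *errorReportingFeed) EventMsg(msg string) error {
      fmt.Printf("event: %s\n", msg)
      return nil
    }

    func (f *errorReportingFeed) AddedReplicaSet(rs tracker.ReplicaSet) error        { return nil }
    func (f *errorReportingFeed) AddedPod(pod tracker.ReplicaSetPod) error           { return nil }
    func (f *errorReportingFeed) PodLogChunk(c *tracker.ReplicaSetPodLogChunk) error { return nil }

    // PodError: an error occurred in one of the resource's Pods.
    func (f *errorReportingFeed) PodError(podErr tracker.ReplicaSetPodError) error {
      fmt.Printf("pod error: %+v\n", podErr)
      return nil
    }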

    Each callback can return an error of the StopTrack type, and tracking will then be stopped. This is how the rollout tracker works, for example: Ready returns StopTrack and the CLI finishes its work.

    To make defining callbacks easier, there is the ControllerFeedProto structure: when creating an instance of it, you define only the callback methods you need.

    So, for example, here is what an endless output of DaemonSet logs looks like, without additional information about events and status:

    // kubedog can connect to Kubernetes both from outside and from inside the cluster;
    // see https://github.com/flant/kubedog/blob/master/pkg/kube/kube.go
    kubeClient, err := kubernetes.NewForConfig(config)
    if err != nil {
      return err
    }

    feed := &tracker.ControllerFeedProto{
      PodLogChunkFunc: func(chunk *tracker.ReplicaSetPodLogChunk) error {
        for _, line := range chunk.LogLines {
          fmt.Printf(">> po/%s %s: %s\n", chunk.PodName, chunk.ContainerName, line)
        }
        return nil
      },
    }

    // Options can set the timeout for API server responses and the time starting
    // from which to show logs. If the time is empty, all logs are shown,
    // starting from the Pod's start.
    opts := tracker.Options{
      Timeout:      time.Second * time.Duration(300),
      LogsFromTime: time.Now(),
    }

    tracker.TrackDaemonSet(dsName, dsNamespace, kubeClient, feed, opts)

    The last call is blocking: it starts an endless loop of receiving events from Kubernetes and calling the corresponding callbacks. The only way to interrupt this loop programmatically is to return StopTrack from a callback.
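
    For comparison with the follow-style example above, here is a rough sketch of a rollout-style tracker that stops as soon as the DaemonSet becomes Ready or fails. It reuses kubeClient, dsName, and dsNamespace from the previous snippet. The ReadyFunc and FailedFunc field names are assumptions made by analogy with the PodLogChunkFunc field shown above (only that one appears in this post), and tracker.StopTrack is used as described in the text - check the library sources for the exact identifiers.

    // A sketch of rollout-style tracking (field names assumed by analogy with PodLogChunkFunc).
    rolloutFeed := &tracker.ControllerFeedProto{
      ReadyFunc: func() error {
        fmt.Println("DaemonSet is ready")
        return tracker.StopTrack // returning StopTrack ends the blocking TrackDaemonSet call
      },
      FailedFunc: func(reason string) error {
        fmt.Printf("DaemonSet failed: %s\n", reason)
        return tracker.StopTrack
      },
    }

    tracker.TrackDaemonSet(dsName, dsNamespace, kubeClient, rolloutFeed, tracker.Options{
      Timeout: time.Second * time.Duration(300),
    })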

    Application examples


    The use of kubedog as a library can be seen in the dapp utility, where ready-made rollout trackers are launched to check the resources that Helm creates or updates.

    The kubedog CLI can help with rollouts in a CI/CD system regardless of what is used for deployment: kubectl, Helm, or something else. You can run kubectl apply and then kubedog rollout track, and in the rollout logs you will see an error if something is wrong with the resource. Using kubedog this way helps reduce the time spent diagnosing rollout problems.

    What's next?


    We plan to develop the library towards supporting more resources - in particular, we really want to follow Service and Ingress. We also intend to work on classifying the reason field in Events in order to determine more accurately the moment when the rollout of a resource can be considered failed. Another direction of development is tracking several resources at once, for example by labelSelector or by namespace. We would also like to support various annotations that can change the nature of tracking, for example for Helm hooks, but for now this is more relevant to dapp.

    In the near future the focus will be on the library, but improvements are also planned for the CLI: more convenient commands and flags, log coloring, and messages about Pod deletion, as in stern. We are also considering an interactive mode with a table of Deployment status and events in one window and logs in another.

    How to try?


    The kubedog CLI builds for Linux and macOS are available on bintray.

    We are looking forward to your feedback and issues on GitHub!
