In the previous study, we used the kubectl scale command to scale Pods, but that is a completely manual operation. For this purpose, Kubernetes provides a resource object: Horizontal Pod Autoscaling, or HPA for short, which monitors and analyzes the load of the Pods controlled by a given controller and decides whether the number of replicas needs to be adjusted.

We can simply create an HPA resource object with the kubectl autoscale command. The HPA controller polls every 30s by default (configurable via the --horizontal-pod-autoscaler-sync-period flag of kube-controller-manager), queries the resource utilization of the Pods in the target resource, and compares it with the targets and metrics set at creation time to achieve auto-scaling.

In the first version of HPA, Heapster provided the CPU and memory metrics; since HPA v2 we instead need to install Metrics Server, which exposes monitoring data through the standard Kubernetes API. For example, when we access that API we get the resource data of a Pod, which is actually collected from the kubelet's Summary API.

However, note that we can fetch resource monitoring data through the standard API here not because Metrics Server is part of the APIServer, but because of the Aggregator (aggregation layer) plugin provided by Kubernetes, which runs independently of the APIServer. The Aggregator allows developers to write their own service and register it with the Kubernetes APIServer, so that their API can be used just like the APIs the native APIServer provides: we run the service inside the Kubernetes cluster, and the Aggregator forwards requests to it by Service name.

This aggregation layer brings a number of benefits:

- Increased API extensibility: developers can write their own API services to expose the APIs they want.
- Unblocking new APIs: by allowing developers to expose their APIs as separate services, the core Kubernetes team no longer blocks new API proposals behind cumbersome community review.
- Phased development of experimental APIs: a new API can be developed in a separate aggregated service and, once stable, easily merged into the APIServer.
- Ensuring that new APIs follow Kubernetes conventions: without the mechanisms proposed here, community members might be forced to roll their own solutions, which would likely result in inconsistencies between community members and community conventions.

So to use HPA, we now need to install the Metrics Server service in the cluster.
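As a concrete sketch of creating an HPA with kubectl autoscale (the Deployment name `nginx` and the thresholds below are illustrative assumptions, not from the original text):

```shell
# Create an HPA for an existing Deployment named "nginx":
# scale between 2 and 10 replicas, targeting 50% average CPU utilization.
kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=50

# Inspect the HPA object that was created.
kubectl get hpa nginx
```

The same object can of course also be declared in YAML as an `autoscaling/v2` HorizontalPodAutoscaler and applied with kubectl apply.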
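To see the standard metrics API in action (this assumes Metrics Server is already installed in the cluster; the `default` namespace and the use of jq are illustrative):

```shell
# Query the aggregated Metrics API directly.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

# Pod metrics for a namespace; this is the same data that backs "kubectl top".
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods" | jq .
kubectl top pods -n default
```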
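For illustration, this is roughly how a service like Metrics Server registers itself with the aggregation layer: an APIService object maps a group/version to a Service in the cluster, and the APIServer proxies matching requests to it. The service name and namespace below follow the common metrics-server deployment and may differ in your setup:

```shell
kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  # Requests to /apis/metrics.k8s.io/v1beta1/... are forwarded to this Service.
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF
```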