I'd like to set up horizontal auto-scaling for a deployment based on the metrics of an ingress controller deployed in another namespace.
I have a deployment (petclinic) deployed in a certain namespace (petclinic).
I have an ingress controller (nginx-ingress) deployed in another namespace (nginx-ingress).
The ingress controller has been deployed with Helm and Tiller, so I have the following ServiceMonitor entity:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"monitoring.coreos.com/v1","kind":"ServiceMonitor","metadata":{"annotations":{},"creationTimestamp":"2019-08-19T10:48:00Z","generation":5,"labels":{"app":"nginx-ingress","chart":"nginx-ingress-1.12.1","component":"controller","heritage":"Tiller","release":"nginx-ingress"},"name":"nginx-ingress-controller","namespace":"nginx-ingress","resourceVersion":"7391237","selfLink":"/apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller","uid":"0217c466-5b78-4e38-885a-9ee65deb2dcd"},"spec":{"endpoints":[{"interval":"30s","port":"metrics"}],"namespaceSelector":{"matchNames":["nginx-ingress"]},"selector":{"matchLabels":{"app":"nginx-ingress","component":"controller","release":"nginx-ingress"}}}}
  creationTimestamp: "2019-08-21T13:12:00Z"
  generation: 1
  labels:
    app: nginx-ingress
    chart: nginx-ingress-1.12.1
    component: controller
    heritage: Tiller
    release: nginx-ingress
  name: nginx-ingress-controller
  namespace: nginx-ingress
  resourceVersion: "7663160"
  selfLink: /apis/monitoring.coreos.com/v1/namespaces/nginx-ingress/servicemonitors/nginx-ingress-controller
  uid: 33421be7-108b-4b81-9673-05db140364ce
spec:
  endpoints:
  - interval: 30s
    port: metrics
  namespaceSelector:
    matchNames:
    - nginx-ingress
  selector:
    matchLabels:
      app: nginx-ingress
      component: controller
      release: nginx-ingress
I also have a Prometheus Operator instance; it has found this entity and updated the Prometheus configuration with this stanza:
- job_name: nginx-ingress/nginx-ingress-controller/0
  honor_labels: false
  kubernetes_sd_configs:
  - role: endpoints
    namespaces:
      names:
      - nginx-ingress
  scrape_interval: 30s
  relabel_configs:
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_app
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_component
    regex: controller
  - action: keep
    source_labels:
    - __meta_kubernetes_service_label_release
    regex: nginx-ingress
  - action: keep
    source_labels:
    - __meta_kubernetes_endpoint_port_name
    regex: metrics
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Node;(.*)
    replacement: ${1}
    target_label: node
  - source_labels:
    - __meta_kubernetes_endpoint_address_target_kind
    - __meta_kubernetes_endpoint_address_target_name
    separator: ;
    regex: Pod;(.*)
    replacement: ${1}
    target_label: pod
  - source_labels:
    - __meta_kubernetes_namespace
    target_label: namespace
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: service
  - source_labels:
    - __meta_kubernetes_pod_name
    target_label: pod
  - source_labels:
    - __meta_kubernetes_service_name
    target_label: job
    replacement: ${1}
  - target_label: endpoint
    replacement: metrics
I also have a Prometheus-Adapter instance, so I have the custom.metrics.k8s.io API in the list of available APIs.
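A quick sanity check that the adapter's API is actually registered (nothing specific to my setup, just the standard API service lookup):

$ kubectl get apiservices | grep custom.metrics.k8s.io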
The metrics are being collected and exposed, so the following command:
$ kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests" | jq .
gives the following result:
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Ingress",
        "namespace": "nginx-ingress",
        "name": "petclinic",
        "apiVersion": "extensions/v1beta1"
      },
      "metricName": "nginx_ingress_controller_requests",
      "timestamp": "2019-08-20T12:56:50Z",
      "value": "11"
    }
  ]
}
So far so good, right?
And what I need is to set up the HPA entity for my deployment, something like this:
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      metricName: nginx_ingress_controller_requests
      target:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: petclinic
      targetValue: 10k
Of course, this is incorrect, as nginx_ingress_controller_requests is related to the nginx-ingress namespace, so it doesn't work (as expected):
annotations:
  autoscaling.alpha.kubernetes.io/conditions: '[{"type":"AbleToScale","status":"True","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"SucceededGetScale","message":"the
    HPA controller was able to get the target''s current scale"},{"type":"ScalingActive","status":"False","lastTransitionTime":"2019-08-19T18:55:26Z","reason":"FailedGetObjectMetric","message":"the
    HPA was unable to compute the replica count: unable to get metric nginx_ingress_controller_requests:
    Ingress on petclinic petclinic/unable to fetch metrics
    from custom metrics API: the server could not find the metric nginx_ingress_controller_requests
    for ingresses.extensions petclinic"},{"type":"ScalingLimited","status":"False","lastTransitionTime":"2019-08-19T18:43:42Z","reason":"DesiredWithinRange","message":"the
    desired count is within the acceptable range"}]'
  autoscaling.alpha.kubernetes.io/current-metrics: '[{"type":""},{"type":"Resource","resource":{"name":"cpu","currentAverageUtilization":1,"currentAverageValue":"10m"}}]'
  autoscaling.alpha.kubernetes.io/metrics: '[{"type":"Object","object":{"target":{"kind":"Ingress","name":"petclinic","apiVersion":"extensions/v1beta1"},"metricName":"nginx_ingress_controller_requests","targetValue":"10k"}}]'
  kubectl.kubernetes.io/last-applied-configuration: |
    {"apiVersion":"autoscaling/v2beta1","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"petclinic","namespace":"petclinic"},"spec":{"maxReplicas":10,"metrics":[{"object":{"metricName":"nginx_ingress_controller_requests","target":{"apiVersion":"extensions/v1beta1","kind":"Ingress","name":"petclinic"},"targetValue":"10k"},"type":"Object"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"petclinic"}}}
And here's what I see in the log file of Prometheus-Adapter:
I0820 15:42:13.467236 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/petclinic/ingresses.extensions/petclinic/nginx_ingress_controller_requests: (6.124398ms) 404 [[kube-controller-manager/v1.15.1 (linux/amd64) kubernetes/4485c6f/system:serviceaccount:kube-system:horizontal-pod-autoscaler] 10.103.98.0:37940]
The HPA looks for this metric in the deployment's namespace, but I need it to be fetched from the nginx-ingress namespace, like this:
I0820 15:44:40.044797 1 wrap.go:42] GET /apis/custom.metrics.k8s.io/v1beta1/namespaces/nginx-ingress/ingresses/petclinic/nginx_ingress_controller_requests: (2.210282ms) 200 [[kubectl/v1.15.2 (linux/amd64) kubernetes/f627830] 10.103.97.0:35142]
Alas, the autoscaling/v2beta1 API doesn't have a spec.metrics.object.target.namespace field, so I can't "ask" it to fetch the value from another namespace. :-(
Would anyone be so kind as to help me solve this puzzle? Is there a way to set up auto-scaling based on custom metrics that belong to another namespace?
Maybe there's a way to make this metric available in the namespace to which this ingress.extensions object belongs?
Thanks in advance for any clues and tips.
Ah, I've figured it out. Here's the part of the prometheus-adapter configuration I needed:
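In short, it is a rule that maps the metric's exported_namespace label (the namespace of the Ingress itself, which ends up under that name because the scrape uses honor_labels: false) onto the namespace resource, so the metric becomes queryable under petclinic as well. A minimal sketch of such a rule, assuming the default label set of the nginx-ingress controller metrics:

rules:
- seriesQuery: '{__name__=~"^nginx_ingress_controller_requests$",namespace!=""}'
  seriesFilters: []
  resources:
    # map the series labels onto Kubernetes resources;
    # exported_namespace holds the namespace of the Ingress object itself
    overrides:
      exported_namespace:
        resource: namespace
      ingress:
        resource: ingress
  name:
    matches: "^(.*)$"
    as: "${1}"
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (<<.GroupBy>>)

The exported_namespace override is the key part: with it in place, the adapter should be able to answer the request that returned 404 above, i.e. serve the metric under the petclinic namespace as well.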
Ta-da! :-)
My choice would be to export an external metric from Prometheus, as those are not namespace-dependent.
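Roughly, that means exposing nginx_ingress_controller_requests through the adapter's external metrics API and then referencing it from the HPA by name and label selector; a sketch of the HPA side, assuming the metric is exported with its ingress label intact:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: petclinic
  namespace: petclinic
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: petclinic
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      # external metrics are not bound to a namespaced object,
      # so it doesn't matter which namespace the ingress controller lives in
      metricName: nginx_ingress_controller_requests
      metricSelector:
        matchLabels:
          ingress: petclinic
      targetValue: 10k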
@Volodymyr Melnyk You need the Prometheus adapter to export the custom metric into the petclinic namespace, and I don't see that being solved in your config; maybe you also made other configuration changes you forgot to mention?