With a single replica, the application works fine. But after I scaled the ReplicaSet to 3, the two new pods fail to start: kubectl get pods
customservice-c8645cd6-7gghm 0/3 Init:CrashLoopBackOff 8 17m
customservice-c8645cd6-f5nbn 3/3 Running 0 6h50m
customservice-c8645cd6-fh57n 0/3 Init:CrashLoopBackOff 8 17m
kubectl describe pod customservice-c8645cd6-7gghm
Name: customservice-c8645cd6-7gghm
Namespace: default
Priority: 0
Node: ip-192-168-93-234.us-west-2.compute.internal/192.168.93.234
Start Time: Tue, 20 Jul 2021 19:34:48 +0530
Labels: app=customservice
consul.hashicorp.com/connect-inject-status=injected
pod-template-hash=c8645cd6
service=customservice
Annotations: consul.hashicorp.com/connect-inject: true
consul.hashicorp.com/connect-inject-status: injected
consul.hashicorp.com/connect-service: customservice
consul.hashicorp.com/connect-service-port: 18170
consul.hashicorp.com/connect-service-upstreams: dashboard:9002
kubernetes.io/psp: eks.privileged
prometheus.io/path: /metrics
prometheus.io/port: 20200
prometheus.io/scrape: true
Status: Pending
IP: 192.168.93.88
IPs:
IP: 192.168.93.88
Controlled By: ReplicaSet/customservice-c8645cd6
Init Containers:
consul-connect-inject-init:
Container ID: docker://a9bf6bb490f5c21637c18aff681d49d53692f09a3333bf34adb2080816953e26
Image: hashicorp/consul:1.9.7
Image ID: docker-pullable://hashicorp/consul@sha256:37c7a001af46a68f8e3513bd8180e7f84133d428b0e4ce5cf385d3e54f894760
Port: <none>
Host Port: <none>
Command:
/bin/sh
-ec
export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
export CONSUL_GRPC_ADDR="${HOST_IP}:8502"
# Register the service. The HCL is stored in the volume so that
# the preStop hook can access it to deregister the service.
cat <<EOF >/consul/connect-inject/service.hcl
services {
id = "${SERVICE_ID}"
name = "customservice"
address = "${POD_IP}"
port = 18170
meta = {
pod-name = "${POD_NAME}"
k8s-namespace = "${POD_NAMESPACE}"
}
}
services {
id = "${PROXY_SERVICE_ID}"
name = "customservice-sidecar-proxy"
kind = "connect-proxy"
address = "${POD_IP}"
port = 20000
meta = {
pod-name = "${POD_NAME}"
k8s-namespace = "${POD_NAMESPACE}"
}
proxy {
config {
envoy_prometheus_bind_addr = "0.0.0.0:20200"
}
destination_service_name = "customservice"
destination_service_id = "${SERVICE_ID}"
local_service_address = "127.0.0.1"
local_service_port = 18170
upstreams {
destination_type = "service"
destination_name = "dashboard"
local_bind_port = 9002
}
}
checks {
name = "Proxy Public Listener"
tcp = "${POD_IP}:20000"
interval = "10s"
deregister_critical_service_after = "10m"
}
checks {
name = "Destination Alias"
alias_service = "${SERVICE_ID}"
}
}
EOF
/bin/consul services register \
/consul/connect-inject/service.hcl
# Generate the envoy bootstrap code
/bin/consul connect envoy \
-proxy-id="${PROXY_SERVICE_ID}" \
-prometheus-scrape-path="/metrics" \
-prometheus-backend-port="20100" \
-bootstrap > /consul/connect-inject/envoy-bootstrap.yaml
# Copy the Consul binary
cp /bin/consul /consul/connect-inject/consul
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 20 Jul 2021 19:51:19 +0530
Finished: Tue, 20 Jul 2021 19:51:23 +0530
Ready: False
Restart Count: 8
Limits:
cpu: 50m
memory: 150Mi
Requests:
cpu: 50m
memory: 25Mi
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
POD_NAME: customservice-c8645cd6-7gghm (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
SERVICE_ID: $(POD_NAME)-customservice
PROXY_SERVICE_ID: $(POD_NAME)-customservice-sidecar-proxy
Mounts:
/consul/connect-inject from consul-connect-inject-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from customservice-token-4xf6t (ro)
Containers:
customservice:
Container ID:
Image: customserverlinux.azurecr.io/custom:latest
Image ID:
Port: 18170/TCP
Host Port: 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
DASHBOARD_CONNECT_SERVICE_HOST: 127.0.0.1
DASHBOARD_CONNECT_SERVICE_PORT: 9002
Mounts:
/home/spring/AppData/Local/erwin/custom Server/ from custom-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from customservice-token-4xf6t (ro)
envoy-sidecar:
Container ID:
Image: envoyproxy/envoy-alpine:v1.16.0
Image ID:
Port: <none>
Host Port: <none>
Command:
envoy
--config-path
/consul/connect-inject/envoy-bootstrap.yaml
-l
debug
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment:
HOST_IP: (v1:status.hostIP)
CONSUL_HTTP_ADDR: $(HOST_IP):8500
Mounts:
/consul/connect-inject from consul-connect-inject-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from customservice-token-4xf6t (ro)
consul-sidecar:
Container ID:
Image: hashicorp/consul-k8s:0.25.0
Image ID:
Port: <none>
Host Port: <none>
Command:
consul-k8s
consul-sidecar
-service-config
/consul/connect-inject/service.hcl
-consul-binary
/consul/connect-inject/consul
-enable-metrics-merging=true
-merged-metrics-port=20100
-service-metrics-port=18170
-service-metrics-path=/metrics
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 20m
memory: 50Mi
Requests:
cpu: 20m
memory: 25Mi
Environment:
HOST_IP: (v1:status.hostIP)
CONSUL_HTTP_ADDR: $(HOST_IP):8500
Mounts:
/consul/connect-inject from consul-connect-inject-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from customservice-token-4xf6t (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
custom-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
customservice-token-4xf6t:
Type: Secret (a volume populated by a Secret)
SecretName: customservice-token-4xf6t
Optional: false
consul-connect-inject-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/customservice-c8645cd6-7gghm to ip-192-168-93-234.us-west-2.compute.internal
Normal Pulled 17m (x5 over 19m) kubelet Container image "hashicorp/consul:1.9.7" already present on machine
Normal Created 17m (x5 over 19m) kubelet Created container consul-connect-inject-init
Normal Started 17m (x5 over 19m) kubelet Started container consul-connect-inject-init
Warning BackOff 3m50s (x68 over 18m) kubelet Back-off restarting failed container
Since the init container is failing, here are its logs:
kubectl logs customservice-c8645cd6-64j2j -c consul-connect-inject-init
Registered service: customservice
Registered service: customservice-sidecar-proxy
flag provided but not defined: -prometheus-scrape-path
Usage:
-address value
LAN address to advertise in the gateway service registration
-admin-access-log-path string
The path to write the access log for the administration server. If no access log is desired specify "/dev/null". By default it will use "/dev/null". (default "/dev/null")
-admin-bind string
The address:port to start envoy's admin server on. Envoy requires this but care must be taken to ensure it's not exposed to an untrusted network as it has full control over the secrets and config of the proxy. (default "localhost:19000")
-bind-address <name>=<ip>:<port>
Bind address to use instead of the default binding rules given as <name>=<ip>:<port> pairs. This flag may be specified multiple times to add multiple bind addresses.
-bootstrap
Generate the bootstrap.json but don't exec envoy
-ca-file value
Path to a CA file to use for TLS when communicating with Consul. This can also be specified via the CONSUL_CACERT environment variable.
-ca-path value
Path to a directory of CA certificates to use for TLS when communicating with Consul. This can also be specified via the CONSUL_CAPATH environment variable.
-client-cert value
Path to a client cert file to use for TLS when 'verify_incoming' is enabled. This can also be specified via the CONSUL_CLIENT_CERT environment variable.
-client-key value
Path to a client key file to use for TLS when 'verify_incoming' is enabled. This can also be specified via the CONSUL_CLIENT_KEY environment variable.
-deregister-after-critical string
The amount of time the gateway services health check can be failing before being deregistered (default "6h")
-envoy-binary string
The full path to the envoy binary to run. By default will just search $PATH. Ignored if -bootstrap is used.
-envoy-version string
Sets the envoy-version that the envoy binary has. (default "1.16.4")
-expose-servers
Expose the servers for WAN federation via this mesh gateway
-gateway string
The type of gateway to register. One of: terminating, ingress, or mesh
-grpc-addr string
Set the agent's gRPC address and port (in http(s)://host:port format). Alternatively, you can specify CONSUL_GRPC_ADDR in ENV. (default "192.168.93.234:8502")
-http-addr address
The address and port of the Consul HTTP agent. The value can be an IP address or DNS address, but it must also include the port. This can also be specified via the CONSUL_HTTP_ADDR environment variable. The default value is http://127.0.0.1:8500. The scheme can also be set to HTTPS by setting the environment variable CONSUL_HTTP_SSL=true.
-mesh-gateway
Configure Envoy as a Mesh Gateway.
-namespace default
Specifies the namespace to query. If not provided, the namespace will be inferred from the request's ACL token, or will default to the default namespace. Namespaces are a Consul Enterprise feature.
-no-central-config
By default the proxy's bootstrap configuration can be customized centrally. This requires that the command run on the same agent as the proxy will and that the agent is reachable when the command is run. In cases where either assumption is violated this flag will prevent the command attempting to resolve config from the local agent.
-omit-deprecated-tags
In Consul 1.9.0 the format of metric tags for Envoy clusters was updated from consul.[service|dc|...] to consul.destination.[service|dc|...]. The old tags were preserved for backward compatibility, but can be disabled with this flag.
-proxy-id string
The proxy's ID on the local agent.
-register
Register a new gateway service before configuring and starting Envoy
-service string
Service name to use for the registration
-sidecar-for string
The ID of a service instance on the local agent that this proxy should become a sidecar for. It requires that the proxy service is registered with the agent as a connect-proxy with Proxy.DestinationServiceID set to this value. If more than one such proxy is registered it will fail.
-tls-server-name value
The server name to use as the SNI host when connecting via TLS. This can also be specified via the CONSUL_TLS_SERVER_NAME environment variable.
-token value
ACL token to use in the request. This can also be specified via the CONSUL_HTTP_TOKEN environment variable. If unspecified, the query will default to the token of the Consul agent at the HTTP address.
-token-file value
File containing the ACL token to use in the request instead of one specified via the -token argument or CONSUL_HTTP_TOKEN environment variable. This can also be specified via the CONSUL_HTTP_TOKEN_FILE environment variable.
-wan-address value
WAN address to advertise in the gateway service registration. For ingress gateways, only an IP address (without a port) is required.
Any suggestions on how to fix this?
I'm not a Consul expert, but check the consul-k8s issue "consul-connect-inject-init gets pods stuck in PodInitializing loop because of the -prometheus-scrape-path flag" #919. It is the same issue you have: same behavior and same error. Maybe that information will help you.
Which versions are you running?
More details: see the Consul Helm chart setting connectInject.metrics.defaultEnableMerging.
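Since the error is the Consul 1.9.7 binary rejecting the -prometheus-scrape-path flag that the consul-k8s 0.25.0 injector passes to `consul connect envoy`, one workaround is to turn off metrics merging so the injector no longer emits that flag. Below is a sketch of the two places this can be configured; treat the exact keys as assumptions to verify against your chart and consul-k8s versions:

```yaml
# Option 1 (sketch, assuming the Consul Helm chart exposes this key):
# disable metrics merging chart-wide in your Helm values.
connectInject:
  metrics:
    defaultEnableMerging: false

# Option 2 (sketch, assuming the consul-k8s injector honors this
# per-pod annotation): disable merging for just this Deployment by
# adding it to the pod template metadata.
metadata:
  annotations:
    consul.hashicorp.com/enable-metrics-merging: "false"
```

Alternatively, upgrading the Consul image to a release whose `consul connect envoy` subcommand understands -prometheus-scrape-path should also resolve the mismatch; check the consul-k8s compatibility matrix for which Consul versions 0.25.0 supports.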