We have created an ECS cluster with Fargate and a few tasks in a service to be used in a CI/CD pipeline. The service has a desired count of 1, so if we terminate any task, a new task comes up automatically. Is there a way to make sure a task runs to completion and then terminates? And when it does, the service should also get terminated. We basically want to spin up one task for every stage of our pipeline and delete that task and its service once the pipeline stage is completed. Are there any CLI commands that we can use for this case?
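For context, the per-stage flow we have in mind looks roughly like the untested sketch below (cluster, task-definition and service names, subnet/security-group IDs and the task ARN are placeholders):
# Run the stage as a standalone task (not under a service), so it simply
# stops when its container exits instead of being replaced by the scheduler.
aws ecs run-task \
  --cluster pipeline-cluster \
  --launch-type FARGATE \
  --task-definition stage-task:1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=ENABLED}"

# Wait until the task stops, then check how it exited.
aws ecs wait tasks-stopped --cluster pipeline-cluster --tasks TASK_ARN
aws ecs describe-tasks --cluster pipeline-cluster --tasks TASK_ARN

# If the task has to live in a service instead, scale the service to 0 and
# delete it once the stage is done.
aws ecs update-service --cluster pipeline-cluster --service stage-service --desired-count 0
aws ecs delete-service --cluster pipeline-cluster --service stage-service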
We have deployed an application behind the Istio ingress gateway, accessible at test.domain.com/jenkinscore. We are using Istio 1.4.5. The domain name points to the Istio ingress gateway service IP. As per the logs below, when we hit this URL, istio-proxy throws a 403 error: upstream connect error or disconnect/reset before headers. reset reason: connection failure.
Below are the logs. This happens only intermittently, and restarting the ingress gateway pod resolves the issue. Can anyone let us know what could be the reason for this error?
:42:20.798][46][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:259] [C2469] new stream
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:483] [C2469] recv frame type=1
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:708] [C2469][S10386582713969444678] request headers complete (end_stream=true):
':method', 'GET'
':authority', 'test.domain.com'
':scheme', 'https'
':path', '/jenkinscore'
'cache-control', 'max-age=0'
'upgrade-insecure-requests', '1'
'user-agent', 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36'
'accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'
'sec-fetch-site', 'cross-site'
'sec-fetch-mode', 'navigate'
'sec-fetch-user', '?1'
'sec-fetch-dest', 'document'
'accept-encoding', 'gzip, deflate, br'
'accept-language', 'en-US,en;q=0.9'
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1257] [C2469][S10386582713969444678] request end stream
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][debug][jwt] [external/envoy/source/extensions/filters/http/jwt_authn/filter.cc:101] Called Filter : setDecoderFilterCallbacks
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][debug][filter] [src/envoy/http/mixer/filter.cc:47] Called Mixer::Filter : Filter
[Envoy (Epoch 1)] [2020-06-09 11:42:20.798][46][debug][filter] [src/envoy/http/mixer/filter.cc:148] Called Mixer::Filter : setDecoderFilterCallbacks
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][filter] [external/envoy/source/extensions/filters/http/ext_authz/ext_authz.cc:80] [C2469][S10386582713969444678] ext_authz filter calling authorization server
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][router] [external/envoy/source/common/router/router.cc:434] [C0][S9059969016458298666] cluster 'ext_authz' match for URL '/envoy.service.auth.v2.Authorization/Check'
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][router] [external/envoy/source/common/router/router.cc:549] [C0][S9059969016458298666] router decoding headers:
':method', 'POST'
':path', '/envoy.service.auth.v2.Authorization/Check'
':authority', 'ext_authz'
':scheme', 'http'
'te', 'trailers'
'grpc-timeout', '10000m'
'content-type', 'application/grpc'
'x-b3-traceid', 'a4xxxx3471f0f7496063d056b2d9'
'x-b3-spanid', '7a236se1c6c190'
'x-b3-parentspanid', 'f7496063d056b2d9'
'x-b3-sampled', '0'
'x-envoy-internal', 'true'
'x-forwarded-for', '10.48.3.5'
'x-envoy-expected-rq-timeout-ms', '10000'
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][client] [external/envoy/source/common/http/codec_client.cc:31] [C2470] connecting
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][connection] [external/envoy/source/common/network/connection_impl.cc:711] [C2470] connecting to 127.0.0.1:10003
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][connection] [external/envoy/source/common/network/connection_impl.cc:720] [C2470] connection in progress
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][http2] [external/envoy/source/common/http/http2/codec_impl.cc:912] [C2470] setting stream-level initial window size to 268435456
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][http2] [external/envoy/source/common/http/http2/codec_impl.cc:934] [C2470] updating connection-level initial window size to 268435456
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][router] [external/envoy/source/common/router/router.cc:1475] [C0][S9059969016458298666] buffering 1023 bytes
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:966] [C2469][S10386582713969444678] decode headers called: filter=0x559dc3768780 status=4
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:424] [C2469] dispatched 441 bytes
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:666] [C2469] about to send frame type=4, flags=0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:720] [C2469] send data: bytes=15
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:398] [C2469] writing 15 bytes, end_stream false
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:608] [C2469] sent frame type=4
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:666] [C2469] about to send frame type=4, flags=1
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:720] [C2469] send data: bytes=9
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:398] [C2469] writing 9 bytes, end_stream false
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:608] [C2469] sent frame type=4
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:666] [C2469] about to send frame type=8, flags=0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:720] [C2469] send data: bytes=13
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:398] [C2469] writing 13 bytes, end_stream false
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http2] [external/envoy/source/common/http/http2/codec_impl.cc:608] [C2469] sent frame type=8
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:462] [C2469] socket event: 2
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:550] [C2469] write ready
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:259] [C2469] ssl write returns: 37
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:462] [C2470] socket event: 3
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][connection] [external/envoy/source/common/network/connection_impl.cc:550] [C2470] write ready
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][connection] [external/envoy/source/common/network/connection_impl.cc:568] [C2470] delayed connection error: 111
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][connection] [external/envoy/source/common/network/connection_impl.cc:193] [C2470] closing socket: 0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][client] [external/envoy/source/common/http/codec_client.cc:88] [C2470] disconnect. resetting 0 pending requests
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][pool] [external/envoy/source/common/http/http2/conn_pool.cc:152] [C2470] client disconnected
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][router] [external/envoy/source/common/router/router.cc:911] [C0][S9059969016458298666] upstream reset: reset reason connection failure
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][http] [external/envoy/source/common/http/async_client_impl.cc:93] async http request response headers (end_stream=true):
':status', '200'
'content-type', 'application/grpc'
'grpc-status', '14'
'grpc-message', 'upstream connect error or disconnect/reset before headers. reset reason: connection failure'
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][filter] [external/envoy/source/extensions/filters/http/ext_authz/ext_authz.cc:244] [C2469][S10386582713969444678] ext_authz filter rejected the request with an error. Response status code: 403
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1354] [C2469][S10386582713969444678] Sending local reply with details ext_authz_error
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1441] [C2469][S10386582713969444678] encode headers called: filter=0x559dc3646d20 status=0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1441] [C2469][S10386582713969444678] encode headers called: filter=0x559dc3554730 status=0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][filter] [src/envoy/http/mixer/filter.cc:135] Called Mixer::Filter : encodeHeaders 0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1441] [C2469][S10386582713969444678] encode headers called: filter=0x559dc35ce1e0 status=0
[Envoy (Epoch 1)] [2020-06-09 11:42:20.799][46][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1552] [C2469][S10386582713969444678] encoding headers via codec (end_stream=true):
':status', '403'
'date', 'Tue, 09 Jun 2020 11:42:20 GMT'
'server', 'istio-envoy'
We have a GKE cluster with autoscaling enabled. We use Google Cloud schedulers to shut down instances daily at a specific time, which also shuts down the GKE nodes. Since the cluster has autoscaling enabled with a minimum node count, the nodes get recreated to match that minimum even after the script shuts them down. Hence, we are temporarily setting the node count to 0 in the cluster and manually changing it back to the required number when the nodes need to be started again. When the node count is set to 0, the original nodes are deleted from the cluster and new nodes are created when they are started again. Is there any way in which the nodes can simply be shut down by the scripts (like normal GCP instances) instead of being deleted and re-created? There is functionality in AWS Auto Scaling groups where the suspend settings can be changed; is there anything similar in GCP as well?
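For reference, the manual workaround we currently run is essentially the following (cluster, node-pool, zone and node counts are placeholders):
# Evening: scale the node pool to 0 so the autoscaler does not recreate the nodes.
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 0 --zone us-central1-a --quiet

# Morning: scale it back up to the required count.
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 3 --zone us-central1-a --quiet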
Can LDAP be integrated with Istio to provide user authentication? We basically want to use Istio on top of our existing services. Our goal is for Istio to authenticate against LDAP for the list of users and their passwords, and based on that data, route the request to the appropriate service. Is there any utility through which this can be done? If LDAP can't be integrated with Istio, are there any other ways to handle user authentication in Istio?
I'm trying to install Istio on a private GKE cluster. I have downloaded Istio 1.4.3 and applied the default profile, but not all of the components get installed from the manifest. Below are the error logs.
$ istioctl manifest apply
This will install the default Istio profile into the cluster. Proceed? (y/N) y
- Applying manifest for component Base...
✔ Finished applying manifest for component Base.
- Applying manifest for component Pilot...
- Applying manifest for component Policy...
- Applying manifest for component Galley...
- Applying manifest for component Citadel...
- Applying manifest for component IngressGateway...
- Applying manifest for component Prometheus...
- Applying manifest for component Telemetry...
- Applying manifest for component Injector...
✘ Finished applying manifest for component Pilot.
✘ Finished applying manifest for component Telemetry.
✔ Finished applying manifest for component Prometheus.
✔ Finished applying manifest for component Citadel.
✔ Finished applying manifest for component Galley.
✔ Finished applying manifest for component Policy.
✔ Finished applying manifest for component Injector.
✔ Finished applying manifest for component IngressGateway.
Component Pilot - manifest apply returned the following errors:
Error: error running kubectl: signal: killed
Component Kiali - manifest apply returned the following errors:
Error: error running kubectl: exit status 1
Error detail:
Unable to connect to the server: dial tcp 192.168.0.2:443: i/o timeout (repeated 1 times)
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Component Grafana - manifest apply returned the following errors:
Error: error running kubectl: exit status 1
Error detail:
Unable to connect to the server: dial tcp 192.168.0.2:443: i/o timeout (repeated 1 times)
apiVersion: v1
items: []
kind: List
metadata:
resourceVersion: ""
selfLink: ""
Component Telemetry - manifest apply returned the following errors:
Error: error running kubectl: exit status 1
Error detail:
Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers) (repeated 1 times)
✘ Errors were logged during apply operation. Please check component installation logs above.
Failed to generate and apply manifests, error: errors were logged during apply operation
Also, the gateway is not getting created for any of the sample applications (helloworld, bookinfo). Below is the error:
$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
Error from server (Timeout): error when creating "samples/bookinfo/networking/bookinfo-gateway.yaml": Timeout: request did not complete within requested timeout 30s
Error from server (Timeout): error when creating "samples/bookinfo/networking/bookinfo-gateway.yaml": Timeout: request did not complete within requested timeout 30s
However, I tried to use Istio along with GKE on the same private cluster by following the guide here. That worked, and all the components were installed successfully, along with the ingress gateway. I have also enabled the ports 80, 8080, 1000-2000, 22, 443 and 9443 on the network. Can someone please tell us what could be causing this error?
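For reference, the firewall rule we added to open those ports looks roughly like this (the network name and source range are placeholders):
gcloud compute firewall-rules create allow-istio-ports \
  --network my-vpc \
  --direction INGRESS \
  --source-ranges 192.168.0.0/28 \
  --allow tcp:22,tcp:80,tcp:443,tcp:8080,tcp:9443,tcp:1000-2000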
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-gke.25", GitCommit:"654de8cac69f1fc5db6f2de0b88d6d027bc15828", GitTreeState:"clean", BuildDate:"2020-01-14T06:01:20Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"}
Istio version:
client version: 1.4.3
control plane version: 1.4.3
data plane version: 1.4.3 (1 proxies)
Platform: GKE
OS: Ubuntu
We have configured Vault to run as a pod in the cluster. In the deployment YAML file below, we have included the Vault initialisation and unsealing so that they happen when the pod first comes up. But when the pod gets restarted, it goes into CrashLoopBackOff because Vault is being reinitialised: we have put both the initialisation and the unseal commands in the postStart lifecycle hook of the deployment. Is there any way we could initialise Vault only once, and on later pod restarts only unseal it using the existing keys?
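One direction we are considering is guarding the init step in the postStart hook so it only runs when Vault has not been initialised yet, roughly as in this untested sketch (the paths are the ones the current hook already uses):
#!/bin/sh
# Initialise only on the very first start; the keys file lives on the
# volume mounted at /vault/file, so it survives pod restarts.
if [ ! -s /vault/file/keys.txt ]; then
  vault operator init > /vault/file/keys.txt
fi
# Always unseal with the saved keys.
sh /vault/file/unseal.sh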
Deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: vault
  name: vault
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vault
    spec:
      containers:
      - image: vault
        name: vault
        imagePullPolicy: Always
        ports:
        - containerPort: 8200
          name: vaultport
          protocol: TCP
        securityContext:
          capabilities:
            add:
            - IPC_LOCK
        env:
        - name: VAULT_ADDR
          value: "http://0.0.0.0:8200"
        command: ["vault", "server"]
        args:
        - "-config=/vault/config/config.hcl"
        volumeMounts:
        - name: vault-unseal
          mountPath: /vault/file/unseal.sh
          subPath: unseal.sh
        - name: vault-config
          mountPath: /vault/config/config.hcl
          subPath: config.hcl
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "vault operator init > /vault/file/keys.txt; sh /vault/file/unseal.sh"]
      volumes:
      - name: vault-unseal
        configMap:
          name: vault-unseal
      - name: vault-config
        configMap:
          name: vault-config
      imagePullSecrets:
      - name: regcred
Output of kubectl describe pod:
Name: vault-677bfd9c9c-dwsgv
Namespace: xxx
Priority: 0
Node: xxxxxxx-5b587f98-ljf4/10.0.0.11
Start Time: Thu, 30 Jan 2020 06:26:21 +0000
Labels: app=vault
pod-template-hash=677bfd9c9c
Annotations: <none>
Status: Running
IP: 10.4.2.10
IPs: <none>
Controlled By: ReplicaSet/vault-677bfd9c9c
Containers:
vault:
Container ID: xxxxxxxxxxx
Image: xxxxxxxxxxxxxxxx
Image ID: xxxxxxxxxxxxxxxxxxxxxxxxx
Port: 8200/TCP
Host Port: 0/TCP
Command:
vault
server
Args:
-config=/vault/config/config.hcl
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 30 Jan 2020 06:26:26 +0000
Finished: Thu, 30 Jan 2020 06:26:27 +0000
Ready: False
Restart Count: 1
Environment:
VAULT_ADDR: http://0.0.0.0:8200
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-kxfdb (ro)
/vault/config/config.hcl from vault-config (rw,path="config.hcl")
/vault/file from vault-data (rw)
/vault/file/unseal.sh from vault-unseal (rw,path="unseal.sh")
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
vault-unseal:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-unseal
Optional: false
vault-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-config
Optional: false
vault-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: vault-data
ReadOnly: false
default-token-kxfdb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-kxfdb
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18s default-scheduler Successfully assigned xxx/xxxxxxxxxx
Normal Pulling 13s (x2 over 15s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 pulling image "xxxxxxxxx"
Normal Pulled 13s (x2 over 15s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Successfully pulled image "xxxxxxx"
Normal Created 13s (x2 over 15s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Created container
Normal Started 13s (x2 over 14s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Started container
Warning FailedPostStartHook 12s (x2 over 14s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Exec lifecycle hook ([/bin/sh -c vault operator init > /vault/file/keys.txt; sh /vault/file/unseal.sh]) for Container "vault" in Pod "vault-677bfd9c9c-dwsgv_xxx(6ebdc17a-4329-11ea-9fc1-4201c0a80004)" failed - error: command '/bin/sh -c vault operator init > /vault/file/keys.txt; sh /vault/file/unseal.sh' exited with 2: Error initializing: Error making API request.
URL: PUT http://0.0.0.0:8200/v1/sys/init
Code: 400. Errors:
* Vault is already initialized
An error occurred attempting to ask for an unseal key. The raw error message
is shown below, but usually this is because you attempted to pipe a value
into the unseal command or you are executing outside of a terminal (tty). You
should run the unseal command from a terminal for maximum security. If this
is not an option, the unseal key can be provided as the first argument to the
unseal command. The raw error was: file descriptor 0 is not a terminal
An error occurred attempting to ask for an unseal key. The raw error message
is shown below, but usually this is because you attempted to pipe a value
into the unseal command or you are executing outside of a terminal (tty). You
should run the unseal command from a terminal for maximum security. If this
is not an option, the unseal key can be provided as the first argument to the
unseal command. The raw error was: file descriptor 0 is not a terminal
An error occurred attempting to ask for an unseal key. The raw error message
is shown below, but usually this is because you attempted to pipe a value
into the unseal command or you are executing outside of a terminal (tty). You
should run the unseal command from a terminal for maximum security. If this
is not an option, the unseal key can be provided as the first argument to the
unseal command. The raw error was: file descriptor 0 is not a terminal
Token (will be hidden):
Error authenticating: An error occurred attempting to ask for a token. The raw error message is shown below, but usually this is because you attempted to pipe a value into the command or you are executing outside of a terminal (tty). If you want to pipe the value, pass "-" as the argument to read from stdin. The raw error was: file descriptor 0 is not a terminal
, message: "Unseal Key (will be hidden): \nUnseal Key (will be hidden): \nUnseal Key (will be hidden): \nKey Value\n--- -----\nSeal Type shamir\nInitialized true\nSealed true\nTotal Shares 5\nThreshold 3\nUnseal Progress 0/3\nUnseal Nonce n/a\nVersion 1.3.2\nHA Enabled false\n++++++++++++ Vault Status +++++++++\nKey Value\n--- -----\nSeal Type shamir\nInitialized true\nSealed true\nTotal Shares 5\nThreshold 3\nUnseal Progress 0/3\nUnseal Nonce n/a\nVersion 1.3.2\nHA Enabled false\nError initializing: Error making API request.\n\nURL: PUT http://0.0.0.0:8200/v1/sys/init\nCode: 400. Errors:\n\n* Vault is already initialized\nAn error occurred attempting to ask for an unseal key. The raw error message\nis shown below, but usually this is because you attempted to pipe a value\ninto the unseal command or you are executing outside of a terminal (tty). You\nshould run the unseal command from a terminal for maximum security. If this\nis not an option, the unseal key can be provided as the first argument to the\nunseal command. The raw error was: file descriptor 0 is not a terminal\nAn error occurred attempting to ask for an unseal key. The raw error message\nis shown below, but usually this is because you attempted to pipe a value\ninto the unseal command or you are executing outside of a terminal (tty). You\nshould run the unseal command from a terminal for maximum security. If this\nis not an option, the unseal key can be provided as the first argument to the\nunseal command. The raw error was: file descriptor 0 is not a terminal\nAn error occurred attempting to ask for an unseal key. The raw error message\nis shown below, but usually this is because you attempted to pipe a value\ninto the unseal command or you are executing outside of a terminal (tty). You\nshould run the unseal command from a terminal for maximum security. If this\nis not an option, the unseal key can be provided as the first argument to the\nunseal command. The raw error was: file descriptor 0 is not a terminal\nToken (will be hidden): \nError authenticating: An error occurred attempting to ask for a token. The raw error message is shown below, but usually this is because you attempted to pipe a value into the command or you are executing outside of a terminal (tty). If you want to pipe the value, pass \"-\" as the argument to read from stdin. The raw error was: file descriptor 0 is not a terminal\n"
Normal Killing 12s (x2 over 14s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Killing container with id docker://vault:FailedPostStartHook
Warning BackOff 10s (x2 over 11s) kubelet, gke-cluster-testing--np-testing-featu-5b587f98-ljf4 Back-off restarting failed container
I have created a private cluster on GKE, and a NAT is configured along with the cluster. I also have a bastion set up to access the private cluster. I'm trying to SSH into one of the nodes but am unable to do so, since private nodes do not have an external IP. Is there any way in which I can do this?
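For context, one direction we are exploring from the bastion (which is in the same VPC) is connecting over the node's internal IP, roughly like this (the node name and zone are placeholders):
gcloud compute ssh gke-my-cluster-default-pool-xxxx --zone us-central1-a --internal-ip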
I have a private cluster created in GKE, and GitLab is running as a pod in this cluster. Here, the NodePort is not forwarding traffic to the service port, and hence we are unable to push images to GitLab.
Error response from daemon: Get http://localhost:32121/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting
Can we connect to a private Docker repo using localhost from within a private cluster?
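For context, the push that fails looks roughly like this (the registry port is the NodePort from the error above; the image name is a placeholder):
docker login localhost:32121
docker push localhost:32121/my-group/my-image:latest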
I have an nginx service created on a GKE cluster with type LoadBalancer. I'm looking to enable SSL certificates on this service. Is there any way in which I can achieve this in GKE? Below is the nginx service YAML:
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - name: "gitlab-docker-registry"
    port: 8123
    targetPort: 8123
  - name: "443"
    port: 443
    targetPort: 443
  selector:
    app: nginx
  type: LoadBalancer
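If it helps, one direction we are exploring is storing our certificate and key in the cluster as a TLS secret that nginx could then terminate SSL with, created roughly like this (secret and file names are placeholders):
kubectl create secret tls nginx-tls --cert=tls.crt --key=tls.key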
I have created a private cluster in GCP using terraform modules. As per the documentation here: https://www.terraform.io/docs/providers/google/r/container_cluster.html#master_ipv4_cidr_block, I have configured the private cluster as below:
private_cluster_config {
  enable_private_endpoint = true
  enable_private_nodes    = true
  master_ipv4_cidr_block  = "${cidrsubnet(var.cidr, 28, 1)}"
}
This cluster is provisioned in a subnet whose CIDR range is 10.15.0.0/16 (var.cidr is set to 10.15.0.0/16).
When I run terraform apply, I get the below error:
Error waiting for creating GKE cluster: The given master_ipv4_cidr 10.15.0.16/28 overlaps with an existing network 10.15.0.0/16.
"${cidrsubnet(var.cidr, 12, 1)}"
How do I provide the master_ipv4_cidr_block IPV4 address range and subnet range using value provided in var.cidr
so that the ranges dont overlap?
How should the cidrsubent() be modified to suit this requirement?
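For reference, we have been checking what different cidrsubnet() arguments evaluate to with terraform console, for example:
# Run inside the module directory; evaluates a candidate master range.
echo 'cidrsubnet("10.15.0.0/16", 12, 1)' | terraform console
# => "10.15.0.16/28"
# Note: any range derived from 10.15.0.0/16 this way necessarily falls inside 10.15.0.0/16.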
I have set up a few resources in my project on Google Cloud Platform, and I was looking for the resource quotas applicable to all the resources under a project. As per the documentation here: https://cloud.google.com/compute/quotas, running the following commands will provide the region-wise and project-wise quotas:
gcloud compute project-info describe --project myproject
gcloud compute regions describe [REGION]
But the resources listed by these commands are limited and do not include specific ones. For instance, I want to know the quotas for load balancers, storage buckets, VMs, NAT, firewalls, etc. Is there any other way to get the quotas for every specific resource?
I'm trying to create a private cluster in GCP as per the steps mentioned here: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
As per this, the IPv4 address range for the master is set to 172.16.0.32/28. I have seen the same CIDR block used in many other examples as well. Is there a restriction that only this particular CIDR block should be used for the master when configuring a GKE private cluster? If yes, can my VPC/subnets still have a different CIDR range, for example 10.1.0.0/16? As in, can the master reside in one range and the nodes in a different subnet?
If there is no restriction on the master IPv4 address range, can I use any RFC 1918 range for it?
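For reference, the kind of create command the guide walks through, and which we are roughly following, looks like this (the cluster and subnet names are placeholders; the master range is the one from the doc):
gcloud container clusters create private-cluster \
  --enable-private-nodes \
  --enable-ip-alias \
  --master-ipv4-cidr 172.16.0.32/28 \
  --create-subnetwork name=my-subnet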
I'm trying to set up CMEK in my cluster as per the details mentioned here: https://cloud.google.com/kubernetes-engine/docs/how-to/dynamic-provisioning-cmek#dynamically_provision_an_encrypted
I have deployed the Compute Engine Persistent Disk CSI Driver to my cluster as per the steps mentioned in: https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/docs/kubernetes/development.md
I have then created the key/key ring and have created the below storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-gce-pd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-standard
  disk-encryption-kms-key: "projects/xx/locations/us-central1/keyRings/xx/cryptoKeys/xx"
Below is the YAML for the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypt-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: csi-gce-pd
  resources:
    requests:
      storage: 5Gi
However, when I apply the PVC YAML, provisioning fails with the below error and the PVC status stays Pending:
Name: encrypted-pvc
Namespace: gce-pd-csi-driver
StorageClass: csi-gce-pd
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"csi-gce-pd"},"nam...
volume.beta.kubernetes.io/storage-class: csi-gce-pd
volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 4s (x3 over 15s) pd.csi.storage.gke.io_csi-gce-pd-controller-0_5c51fedd-8092-4c71-aca9-5a13b566bb8a External provisioner is provisioning volume for claim "gce-pd-csi-driver/encrypted-pvc"
Normal ExternalProvisioning 2s (x2 over 15s) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
Warning ProvisioningFailed 0s (x3 over 11s) pd.csi.storage.gke.io_csi-gce-pd-controller-0_5c51fedd-8092-4c71-aca9-5a13b566bb8a failed to provision volume with StorageClass "csi-gce-pd": rpc error: code = Internal desc = CreateVolume failed to create single zonal disk "pvc-1524bf19-f6f1-11e9-a706-4201ac100007": failed to insert zonal disk: unkown Insert disk error: googleapi: Error 400: Invalid resource usage: 'Cloud KMS error when using key projects/acn-devopsgcp/locations/us-central1/keyRings/testkeyring1/cryptoKeys/testkey1: Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied on resource 'projects/acn-devopsgcp/locations/us-central1/keyRings/testkeyring1/cryptoKeys/testkey1' (or it may not exist).'., invalidResourceUsage
I have given the following roles to the service account, and the KMS key resource identifier is also correct: Cloud KMS CryptoKey Encrypter/Decrypter, Cloud KMS CryptoKey Encrypter, and Cloud KMS CryptoKey Decrypter.
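For reference, the grant we applied looks roughly like this (the service-account e-mail is a placeholder; the key and keyring names are the ones from the error above):
gcloud kms keys add-iam-policy-binding testkey1 \
  --keyring testkeyring1 \
  --location us-central1 \
  --member serviceAccount:SERVICE_ACCOUNT_EMAIL \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter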
Kubectl version :
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2", GitCommit:"c97fe5036ef3df2967d086711e6c0c405941e14b", GitTreeState:"clean", BuildDate:"2019-10-15T19:18:23Z", GoVersion:"go1.12.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.7-gke.10", GitCommit:"8cea5f8ae165065f0d35e5de5dfa2f73617f02d1", GitTreeState:"clean", BuildDate:"2019-10-05T00:08:10Z", GoVersion:"go1.12.9b4", Compiler:"gc", Platform:"linux/amd64"}
I'm trying to create PVCs whose storage class is encrypted; these PVCs are created dynamically. As per this link - https://kubernetes.io/docs/concepts/storage/storage-classes/#gce-pd - for AWS EBS there is a parameter 'encrypted' which can be set to true or false to enable encryption for the disk/volume. Example below for AWS:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  zone: "###ZONE###"
  encrypted: "true"
However, there is no such parameter for GCE PD in GCP. Is there any way in which I can provide the encryption parameter for the GCE PD so that the resulting disk is encrypted?
I have the GCP service account key file in JSON format, which I need to export as GOOGLE_APPLICATION_CREDENTIALS. Is there a way in Terraform to provide the contents of this JSON file directly (instead of specifying the path to the file) in a Terraform variable block, and then have Terraform interpret it as JSON? I have seen that Terraform has jsonencode and jsondecode functions, but I'm not able to find many examples of them. Is there any other way to do this? Below is the approach I'm looking at:
variable "credentials"{
type = "string"
default="<contents of service account key file in JSON format>"
}
In Bastion start up script:
#!/bin/bash
export GOOGLE_APPLICATION_CREDENTIALS= jsonencode("${file(var.credentials)}")
So ultimately, GOOGLE_APPLICATION_CREDENTIALS should have the contents of the key file in JSON format. Can this be done in any way?
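For what it's worth, the closest we got to examples were small terraform console experiments run from the shell, along these lines (the file name is a placeholder):
# Decode the key file and pull one field out of the resulting object.
echo 'jsondecode(file("service-account-key.json")).project_id' | terraform console
# Encode an object back into a JSON string.
echo 'jsonencode({ type = "service_account" })' | terraform console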
I have downloaded the GCP service account key to my local system. In Terraform, I have set the GOOGLE_APPLICATION_CREDENTIALS as a path to this file in the startup-script part of my bastion instance. Below is a snippet:
variable "credentials"{
default="C:/GCP/service-account-key.json"
}
. . . . . .
metadata = {
  startup-script = <<SCRIPT
export GOOGLE_APPLICATION_CREDENTIALS="${file("${var.credentials}")}"
SCRIPT
}
Later, I wrote a bash script to store these credentials in another file, as below:
#!/bin/bash
printf "$GOOGLE_APPLICATION_CREDENTIALS" > /home/ubuntu/credentials
But when I open the above credentials file, the file is truncated as below and the entire key is missing:
{
type: service_account,
project_id: acn-devopsgcp,
private_key_id: xxxxx,
private_key: -----BEGIN
Can someone please let me know why the service account key is not being exported to the file correctly, or if there is anything that needs to be corrected?