I run Jenkins from the official Helm chart, spawning Kubernetes pods on GKE, and my Jenkinsfile contains the following:
...
withCredentials([file(credentialsId: "${project}", variable: 'key')]) {
    withEnv(["GOOGLE_APPLICATION_CREDENTIALS=${key}"]) {
        sh("gcloud --verbosity=debug auth activate-service-account --key-file ${key} --project=${project_id}")
        sh("gcloud --verbosity=debug container clusters get-credentials ${project} --zone europe-west1-b")
...
This fails randomly; here is the output, which is not particularly helpful:
+ gcloud --verbosity=debug container clusters get-credentials tastetastic --zone europe-west1-b
DEBUG: Running gcloud.container.clusters.get-credentials with Namespace(_deepest_parser=ArgumentParser(prog='gcloud.container.clusters.get-credentials', usage=None, description='Updates a kubeconfig file with appropriate credentials to point\nkubectl at a Container Engine Cluster. By default, credentials\nare written to HOME/.kube/config. You can provide an alternate\npath by setting the KUBECONFIG environment variable.\n\nSee [](https://cloud.google.com/container-engine/docs/kubectl) for\nkubectl documentation.', version=None, formatter_class=<class 'argparse.HelpFormatter'>, conflict_handler='error', add_help=False), _specified_args={'verbosity': '--verbosity', 'name': 'NAME', 'zone': '--zone'}, account=None, api_version=None, authority_selector=None, authorization_token_file=None, calliope_command=<googlecloudsdk.calliope.backend.Command object at 0x7f48d48e6e10>, command_path=['gcloud', 'container', 'clusters', 'get-credentials'], configuration=None, credential_file_override=None, document=None, flatten=None, format=None, h=None, help=None, http_timeout=None, log_http=None, name='$project', project=None, quiet=None, trace_email=None, trace_log=None, trace_token=None, user_output_enabled=None, verbosity='debug', version=None, zone='europe-west1-b').
Fetching cluster endpoint and auth data.
DEBUG: unable to load default kubeconfig: [Errno 2] No such file or directory: '/home/jenkins/.kube/config'; recreating /home/jenkins/.kube/config
DEBUG: Saved kubeconfig to /home/jenkins/.kube/config
kubeconfig entry generated for $project.
INFO: Display format "default".
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code -1
Finished: FAILURE
Do you have an idea where this could come from?
How can I further debug this random failure?
And finally, what about retrying the call, say 5 times, to rule out any network hiccup?
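Something along these lines, perhaps, using Jenkins' built-in retry step (just a rough sketch; the count of 5 is arbitrary):

// Rough sketch: retry the flaky gcloud call up to 5 times before failing the build
retry(5) {
    sh("gcloud --verbosity=debug container clusters get-credentials ${project} --zone europe-west1-b")
}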
As far as I can tell, what I'm doing matches what other tools do, e.g. https://github.com/NYTimes/drone-gke/blob/f23a63fd8269182c4ce1d86302e1affc505b6441/main.go#L145