I have a cluster set up with the following permissions.
I tried creating a node pool with new permissions, which seems to enable some of them. I didn't find the scope required for the Stackdriver Trace permission in the documentation located here.
Further, trying to enable monitoring by creating a new node pool and deleting the old one didn't seem to flip the switch:
gcloud container node-pools create pool-2 \
--cluster=cluster-1 \
--scopes=compute-rw,storage-rw,taskqueue,logging-write,monitoring-write,datastore,service-control,service-management
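For reference, the OAuth scope that Stackdriver Trace writes use is https://www.googleapis.com/auth/trace.append. A minimal sketch of the same node-pool command with that scope passed as a full URI rather than an alias (pool-3 is a hypothetical name; whether this actually flips the Trace permission for the cluster is what the answers below address):

# pool-3 is a hypothetical pool name; the scope list is the one above plus trace.append
gcloud container node-pools create pool-3 \
--cluster=cluster-1 \
--scopes=compute-rw,storage-rw,taskqueue,logging-write,monitoring-write,datastore,service-control,service-management,https://www.googleapis.com/auth/trace.append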
To add to the previous response, it is possible to enable Stackdriver Monitoring on an existing cluster by running the previously referenced command, as instructed in the Container Engine metrics troubleshooting steps:

gcloud alpha container clusters update cluster-1 --monitoring-service=monitoring.googleapis.com

More information on this command can be found on its Cloud SDK reference page.

However, it is currently not possible to modify the Stackdriver Trace permission for an existing Container Engine cluster, because the access scope (a URI) is configured at the moment of the cluster's creation. See the Google Container Engine section of the Node.js Stackdriver Trace module documentation for more details.
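To confirm the update took effect, you can read the cluster's monitoringService field back. A small sketch, using the cluster name from the question (add --zone if it isn't set in your gcloud config):

gcloud container clusters describe cluster-1 --format="value(monitoringService)"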
Alternatively, if you can port your application to a new Container Engine cluster, you can always recreate the cluster and enable the desired Stackdriver services/permissions on its configuration page.
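A hedged sketch of that recreate path from the command line instead of the configuration page, assuming a hypothetical cluster name cluster-2 and reusing the scope list from the question plus the trace.append URI; flag availability may vary across gcloud versions:

# cluster-2 is a hypothetical name for the replacement cluster
gcloud container clusters create cluster-2 \
--monitoring-service=monitoring.googleapis.com \
--scopes=compute-rw,storage-rw,taskqueue,logging-write,monitoring-write,datastore,service-control,service-management,https://www.googleapis.com/auth/trace.append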
Try to enable it this way: at least for Google Compute, you can use gcloud alpha compute, which offers the trace-append scope. I suppose using the alpha track for Container also allows this. Try

gcloud alpha container clusters create --help

to see the allowed scopes.
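If you want to confirm which scopes the existing nodes actually carry (for example, whether trace.append is already there), one hedged way is to describe one of the cluster's node VMs; NODE_NAME and ZONE below are placeholders:

# NODE_NAME / ZONE are placeholders for one of the cluster's node instances
gcloud compute instances describe NODE_NAME --zone=ZONE --format="yaml(serviceAccounts)"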
A bit late, but the solution is to just stop the VMs, go to each one of them, click Edit, scroll to the bottom, and change the permissions (the access scopes) there :)
PS: My solution is for a Dataproc cluster, but I think it will be similar for Kubernetes.
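For completeness, the same stop, edit, start flow can be scripted; a hedged sketch using gcloud, where INSTANCE_NAME and ZONE are placeholders and the scope list simply appends trace.append to the ones from the question:

# INSTANCE_NAME / ZONE are placeholders; scopes can only be changed while the VM is stopped
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud compute instances set-service-account INSTANCE_NAME --zone=ZONE \
--scopes=compute-rw,storage-rw,taskqueue,logging-write,monitoring-write,datastore,service-control,service-management,https://www.googleapis.com/auth/trace.append
gcloud compute instances start INSTANCE_NAME --zone=ZONE

Note that edits made directly to Container Engine node VMs may not survive node upgrades or recreation.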