I cannot create a Dataproc Spark cluster for some reason. I run:
gcloud dataproc clusters create my-test1 \
    --project some_project \
    --region us-west1 \
    --zone us-west1-b \
    --single-node \
    --master-machine-type n1-highmem-16 \
    --master-boot-disk-size 500 \
    --image-version 1.3 \
    --bucket my-bucket \
    --scopes 'https://www.googleapis.com/auth/cloud-platform' \
    --initialization-actions 'gs://dataproc-initialization-actions/connectors/connectors.sh','gs://dataproc-initialization-actions/zeppelin/zeppelin.sh' \
    --metadata 'gcs-connector-version=1.9.4'
but the connectors.sh initialization action fails with the following output:
+ VM_CONNECTORS_DIR=/usr/lib/hadoop/lib
+ declare -A MIN_CONNECTOR_VERSIONS
+ MIN_CONNECTOR_VERSIONS=(["bigquery"]="0.11.0" ["gcs"]="1.7.0")
++ /usr/share/google/get_metadata_value attributes/bigquery-connector-version
++ true
+ BIGQUERY_CONNECTOR_VERSION=
++ /usr/share/google/get_metadata_value attributes/gcs-connector-version
+ GCS_CONNECTOR_VERSION=1.9.4
+ [[ -z '' ]]
+ [[ -z 1.9.4 ]]
+ [[ -z '' ]]
+ [[ 1.9.4 = \1\.\7\.\0 ]]
+ [[ '' = \0\.\1\1\.\0 ]]
+ update_connector bigquery ''
+ local name=bigquery
+ local version=
+ [[ -n '' ]]
+ update_connector gcs 1.9.4
+ local name=gcs
+ local version=1.9.4
+ [[ -n 1.9.4 ]]
+ validate_version gcs 1.9.4
+ local name=gcs
+ local version=1.9.4
+ local min_valid_version=1.7.0
++ min_version 1.7.0 1.9.4
++ echo -e '1.7.0\n1.9.4'
++ sort -r -t. -n -k1,1 -k2,2 -k3,3
++ tail -n1
+ [[ 1.7.0 != \1\.\7\.\0 ]]
+ rm -f /usr/lib/hadoop/lib/gcs-connector-1.9.0-hadoop2.jar
++ gsutil ls 'gs://hadoop-lib/gcs/gcs-connector-*1.9.4*.jar'
++ grep hadoop2
AccessDeniedException: 403 7**********[email protected] does not have storage.objects.list access to hadoop-lib.
+ local path=
++ echo ''
++ wc -w
+ local path_count=0
+ [[ 0 != 1 ]]
+ echo -e 'ERROR: Only one gcs connector path should be listed for 1.9.4 version, but listed 0 paths:\n'
ERROR: Only one gcs connector path should be listed for 1.9.4 version, but listed 0 paths:
I checked that the service account has the "Storage Object Viewer" role assigned, along with "Editor".
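(One way to double-check the bindings, where SERVICE_ACCOUNT_EMAIL stands in for the account shown in the error message:)

gcloud projects get-iam-policy some_project \
    --flatten='bindings[].members' \
    --filter='bindings.members:SERVICE_ACCOUNT_EMAIL' \
    --format='value(bindings.role)'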
Do you have any idea how to create a Spark cluster with the GCS connector on Google Cloud Dataproc?
The problem here is that gs://hadoop-lib is not a public bucket, which is why you don't have access to it. It is unfortunate that the script is not usable as-is, or at least that this limitation is not mentioned in the documentation. I solved the issue by copying the connector JARs into a bucket I do have access to, and updating the initialization script to retrieve them from there.
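As a rough sketch of that workaround, assuming the connector JAR can be fetched from Maven Central (the com.google.cloud.bigdataoss coordinates below are my best guess for a 1.9.4 release, and gs://my-bucket is a placeholder for your own bucket):

# 1. One-time setup: mirror the connector JAR into a bucket you control.
#    The download URL is an assumption; use whatever source you trust.
curl -fLO https://repo1.maven.org/maven2/com/google/cloud/bigdataoss/gcs-connector/1.9.4-hadoop2/gcs-connector-1.9.4-hadoop2-shaded.jar
gsutil cp gcs-connector-1.9.4-hadoop2-shaded.jar gs://my-bucket/connectors/

# 2. Take a copy of connectors.sh and change its lookup from gs://hadoop-lib
#    to your bucket, e.g. the gsutil ls call becomes something like:
#      gsutil ls "gs://my-bucket/connectors/gcs-connector-*${GCS_CONNECTOR_VERSION}*.jar"
gsutil cp gs://dataproc-initialization-actions/connectors/connectors.sh .
# ... edit connectors.sh ...
gsutil cp connectors.sh gs://my-bucket/connectors.sh

Then point the cluster at the modified script, e.g. --initialization-actions 'gs://my-bucket/connectors.sh','gs://dataproc-initialization-actions/zeppelin/zeppelin.sh'. Note that the mirrored JAR name still matches the script's gcs-connector-*1.9.4*.jar pattern and contains "hadoop2", so the rest of the script works unchanged.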