I'm trying to assign an external static IP address to our Cloud Run service so that I can use it with WebSockets (which I've read require a static IP to work, as opposed to the auto-assigned, load-balanced *.run.app domain name). The regular URL is accessible, but requests to the static IP just hang.
I mostly followed this guide: https://cloud.google.com/run/docs/configuring/static-outbound-ip
I'm not entirely sure how the external IP gets routed to the Cloud Run instance, since it's a fully managed service (i.e. there is no private IP address on it), but I assume the other parts figure that out...
What I've done (sanity-check commands for each resource are collected right after this list):
- Created a cloud router:
gcloud compute routers create cloud-run-router --network=default --region=us-central1
- Created external static IP:
gcloud compute addresses create cloud-run --region=us-central1
- I also created a subnet on the network:
gcloud compute networks subnets create cloud-run-subnet --range=10.20.0.0/28 --network=default --region=us-central1
- Then I created a Serverless VPC Access connector with this subnet:
gcloud beta compute networks vpc-access connectors create cloud-run-sub-conn --subnet-project=project-name --subnet=cloud-run-subnet --region=us-central1
- Then I created a new Cloud NAT Gateway with this router and subnet:
gcloud compute routers nats create cloud-run-nat --router=cloud-run-router --region=us-central1 --nat-custom-subnet-ip-ranges=cloud-run-subnet --nat-external-ip-pool=cloud-run
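To sanity-check the resources created above, I ran a read-only describe on each one:
gcloud compute routers describe cloud-run-router --region=us-central1
gcloud compute addresses describe cloud-run --region=us-central1 --format='get(address)'
gcloud compute networks subnets describe cloud-run-subnet --region=us-central1
gcloud beta compute networks vpc-access connectors describe cloud-run-sub-conn --region=us-central1
gcloud compute routers nats describe cloud-run-nat --router=cloud-run-router --region=us-central1
The addresses describe is what gives me the reserved external IP (104.197.97.194 below), and the connector describe should report state: READY once provisioning has finished.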
I also set up some firewall rules to try to allow everything.
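For reference, the rules were blanket allows along these lines (hypothetical rule name, and far too permissive for anything but debugging):
gcloud compute firewall-rules create allow-all-ingress --network=default --direction=INGRESS --action=ALLOW --rules=all --source-ranges=0.0.0.0/0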
I then deployed the Cloud Run service, pointing it at the VPC connector and sending all egress through it:
gcloud beta run deploy api-node --image gcr.io/project-name/api-node:latest --platform managed --allow-unauthenticated --set-env-vars REDISHOST='10.0.0.4',REDISPORT=6379,GOOGLE_APPLICATION_CREDENTIALS=credentials.json --set-cloudsql-instances=project-name:us-central1:mysql-db --vpc-egress=all --vpc-connector=cloud-run-sub-conn
Service [api-node] revision [api-node-00035-waw] has been deployed and is serving 100 percent of traffic.
Service URL: https://api-node-ojzumfbnoq-uc.a.run.app
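To confirm the new revision actually picked up the connector/egress flags, its annotations can be inspected (read-only describe, using the revision name from the deploy output):
gcloud run revisions describe api-node-00035-waw --platform=managed --region=us-central1 --format='yaml(metadata.annotations)'
The run.googleapis.com/vpc-access-connector and run.googleapis.com/vpc-access-egress annotations are there (they also show up in the exported service YAML further down).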
For some reason the Cloud Run service is still accessible via the auto-assigned URL: https://api-node-ojzumfbnoq-uc.a.run.app/
...but the external IP doesn't respond on any of these:
http://104.197.97.194/
https://104.197.97.194/
http://104.197.97.194:8080/
https://104.197.97.194:80/
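To be concrete: a plain curl against any of those never receives a response and eventually times out, e.g.:
curl -v --max-time 10 http://104.197.97.194/
One thing I noticed while re-reading the guide: it's titled "static outbound IP", so the check it actually supports is on the egress side, e.g. having the container call an IP-echo service (something like curl -s https://api.ipify.org from a hypothetical debug endpoint) and seeing the reserved IP come back. Whether the reserved IP is supposed to answer inbound requests at all is part of what I'm unsure about.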
Exporting the Cloud Run service YAML with the command below gives:
gcloud run services describe api-node --format export > service.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  annotations:
    client.knative.dev/user-image: gcr.io/project-name/api-node
    run.googleapis.com/ingress: all
    run.googleapis.com/ingress-status: all
    run.googleapis.com/launch-stage: BETA
  labels:
    cloud.googleapis.com/location: us-central1
  name: api-node
  namespace: '938045200399'
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '1000'
        autoscaling.knative.dev/minScale: '4'
        client.knative.dev/user-image: gcr.io/project-name/api-node
        run.googleapis.com/client-name: gcloud
        run.googleapis.com/client-version: 329.0.0
        run.googleapis.com/cloudsql-instances: project-name:us-central1:mysql-db
        run.googleapis.com/sandbox: gvisor
        run.googleapis.com/vpc-access-connector: cloud-run-sub-conn
        run.googleapis.com/vpc-access-egress: all
      name: api-node-00034-xah
    spec:
      containerConcurrency: 250
      containers:
      - env:
        - name: REDISHOST
          value: 10.0.0.4
        - name: REDISPORT
          value: '6379'
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: credentials.json
        image: gcr.io/project-name/api-node
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: '4'
            memory: 2Gi
      serviceAccountName: [email protected]
      timeoutSeconds: 20
  traffic:
  - latestRevision: true
    percent: 100
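In case it helps with reproducing: after editing that exported file, it can be pushed back to the service with the matching replace command (beta surface, to match the gcloud 329.x client version shown in the annotations):
gcloud beta run services replace service.yaml --platform=managed --region=us-central1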
Here is my Dockerfile, if it helps:
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
ENV REDISHOST='10.0.0.4'
ENV REDISPORT=6379
ENV GOOGLE_APPLICATION_CREDENTIALS=credentials.json
ENV PORT=80
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure copying both package.json AND package-lock.json (when available).
# Copying this first prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
# If you add a package-lock.json, speed your build by switching to 'npm ci'.
# RUN npm ci --only=production
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
EXPOSE 80/tcp
EXPOSE 8080/tcp
EXPOSE 9001/tcp
# Run the web service on container startup.
CMD [ "node", "build/index.js" ]
Does anyone see anything wrong with the above steps? I'd appreciate any help! I've been at it for a few days.