I am using Node.js on App Engine. We had a perfectly stable app scaffold on Tuesday, and by Friday it was completely broken after attempting to deploy. We made some minor changes to frontend code, but nothing that I think would prevent the instance from spinning up.
Here are steps to reproduce.
gcloud --project "{appname}" preview app deploy
logs show npm install, container build, etc.
It hangs on
Updating service [default]...
for about 5 minutes, then fails with this error:
ERROR: (gcloud.preview.app.deploy) Error Response: [13] Timed out when starting VMs. It's possible that the application code is unhealthy. (0/1 ready, 1 still deploying).
I have tried reverting the repository to a commit from when we still had stable deployments, and it didn't help. This makes me think something on GCP is broken.
I have tried deleting all current versions and then deploying, but to no avail.
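For reference, deleting the old versions looks roughly like this (a sketch; the version IDs here are placeholders, and on older SDKs that still use "gcloud preview app" the equivalent commands may be named differently):

    # List every deployed version for the project
    gcloud app versions list --project "{appname}"

    # Delete stale versions by ID (IDs here are placeholders)
    gcloud app versions delete 20160101t120000 20160102t093000 --project "{appname}"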
When you get this error, you can take a look at crash.log in the Cloud Console Logs Viewer (Logging -> Logs) for your specific service / version, which will usually tell you exactly what happened. In my case, when I reproduced the same error, crash.log showed me the output of npm, which told me I had a 'SyntaxError: Unexpected identifier'.
YMMV of course, but this can tell you if the issue is related to your application code or if there is something more sinister going on.
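If you prefer the CLI, something like this should pull the same crash.log entries (a sketch; I believe these flags exist on current gcloud SDKs, but double-check against your version, and the service / version names are placeholders):

    # Read recent crash.log entries for one service / version
    gcloud app logs read --logs=crash.log --service=default --version=20160101t120000 --limit=50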
The answer for me was different... The version started to get created but never finished the creation process. This scenario is visible if you go into the logs for your App Engine service (check all options to see ALL logs), where you may notice "Container called exit(1)."
In my case I found that running "gcloud init" to reset my credentials helped. Hope this helps someone else.

Scratch that... I found that the Cloud Build API was just WAY behind / slow. This has been happening for the last couple of hours. I just noticed 4 versions pop into my dashboard all at once from prior hours in the evening. Apparently there is no way to cancel prior deployments? So ironically, once Google's build service gets behind, everyone probably starts spamming / retrying their deployments and the issue gets worse... so bad.
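For what it's worth, you can at least inspect (and, on newer SDKs, cancel) the underlying Cloud Build jobs directly; a sketch, assuming a recent SDK (older ones used "gcloud container builds", and the build ID below is a placeholder):

    # See the most recent builds and their status (QUEUED / WORKING / SUCCESS / FAILURE)
    gcloud builds list --limit=5

    # Cancel a queued or in-progress build by its ID
    gcloud builds cancel 1a2b3c4d-0000-0000-0000-000000000000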
Check to make sure your package.json has this section
with the "msg" section containing some string that the health check can look for. Haven't been able to find documentation for this, so if anyone else does I'd love to see it.