I was tracking down an unknown timeout problem in our cluster and found something interesting.
We currently have 30 pods for a service. When I run `kubectl get events`, I see that all 30 pods have been failing their readiness probes. Every one of them has a LASTSEEN of less than 10 minutes, and they keep failing.
However, I can still access the service without a problem.
I thought Kubernetes removes pods that fail their readiness probes from the service so they stop receiving traffic.

Why can I still access the service? I've double-checked, and every single one of them is still failing every 10 minutes or so.
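For reference, this is roughly how I'm checking the events and the service's backing endpoints (`my-service` stands in for our actual service name):

```bash
# List recent warning events, which include the readiness probe failures
kubectl get events --field-selector type=Warning --sort-by=.lastTimestamp

# Show which pod IPs currently back the service; pods failing readiness
# should appear under NotReadyAddresses rather than Addresses
kubectl describe endpoints my-service
```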
First, to answer the main question in your title:
Quoting the official documentation on the readiness probe concept:

> The kubelet uses readiness probes to know when a container is ready to start accepting traffic. A Pod is considered ready when all of its containers are ready. One use of this signal is to control which Pods are used as backends for Services. When a Pod is not ready, it is removed from Service load balancers.

See also this blog post, which explains the main differences between liveness and readiness probes very well.
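For completeness, a minimal readiness probe in a pod spec looks like the sketch below; the image name, health path, and port are illustrative assumptions, not taken from your setup:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    readinessProbe:
      httpGet:
        path: /healthz          # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5    # wait before the first probe
      periodSeconds: 10         # probe every 10 seconds
      failureThreshold: 3       # mark NotReady after 3 consecutive failures
```

While the probe fails, the pod stays Running but is marked NotReady, and the endpoints controller removes it from the Service's endpoints.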
Now, on the reason why you can still access your service: I assume you have a Deployment object that controls your application instances' lifecycle. Note that when you update a Deployment, it leaves the old replica(s) running until the probes succeed on the new replica(s). That means that if your new pods are broken in some way, they will never receive traffic; your old pods continue to serve all traffic for the Deployment.
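You can verify this yourself; a sketch, assuming your Deployment is named `my-deployment`, your Service `my-service`, and the pods carry an `app=my-app` label:

```bash
# Old and new ReplicaSets coexist during a stalled rollout;
# the old one keeps its ready pods
kubectl get replicasets -l app=my-app

# Reports that the rollout is waiting for new pods to become ready
kubectl rollout status deployment/my-deployment

# Only the ready (old) pod IPs appear as service endpoints
kubectl get endpoints my-service -o wide
```

If the old ReplicaSet still shows ready pods while the new one shows 0 ready, the old pods are what is serving your traffic, which would explain what you're seeing.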