A Kubernetes application that uses local node storage to hold mutable state (as in the Kubernetes 101 example) loses that storage when the app is updated. This is a side effect of the typical Deployment update approach of turning up new pods and turning down the old ones. This is unfortunate, as it means recopying data (possibly hundreds of gigabytes) onto each node even though the data are often already there, sitting in a now-unreachable volume. This greatly slows down updates.
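As a minimal sketch of the pattern in question, the Deployment below keeps its mutable state in an emptyDir volume; the name, image, and mount path are placeholders, not taken from the Kubernetes 101 example itself. Because an emptyDir lives and dies with its pod, a rolling update that replaces pods also discards the node-local data.

```yaml
# Hypothetical Deployment illustrating the problem: the emptyDir volume
# is deleted along with each old pod during a rolling update, so the new
# pods start with empty directories and must recopy the data.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cache-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cache-server
  template:
    metadata:
      labels:
        app: cache-server
    spec:
      containers:
      - name: server
        image: example.com/cache-server:v1   # bumping this tag triggers a rolling update
        volumeMounts:
        - name: local-data
          mountPath: /var/cache/data
      volumes:
      - name: local-data
        emptyDir: {}   # node-local scratch space, removed when the pod is removed
```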
What can an application programmer do to optimize this? Some pod attributes can be updated in place, but that covers only a small subset of updates. Persistent volumes are intrinsically remote, not local, so they can't be mmapped and won't match the performance of local storage; and their lifetime is, inappropriately, independent of the deployment that should own them. Issue #9043 discusses the problem, but it doesn't seem to be reaching any consensus; and, in any case, sometimes the pod can be replaced on the same node even when it can't be updated in place. Issue #7562 started to discuss it, but it turned into a discussion of persistent volumes. Issue #598 is related, but it is really for cases where you'd rather the pod remain unassigned to any node than have it start with an empty directory.
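To make the lifetime mismatch concrete, here is a hedged sketch of the persistent-volume alternative; the claim name and size are invented for illustration. The claim is a separate API object, so deleting or replacing the Deployment that mounts it leaves the claim and its remote volume behind.

```yaml
# Hypothetical PersistentVolumeClaim for comparison: its lifetime is tied
# to this object (and its namespace), not to any Deployment that mounts it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cache-data
spec:
  accessModes:
  - ReadWriteOnce          # attachable to one node at a time
  resources:
    requests:
      storage: 100Gi       # backed by remote, network-attached storage
```

Swapping the emptyDir in the earlier sketch for a `persistentVolumeClaim` reference to `cache-data` would preserve the data across updates, but at the cost of remote-storage performance and of a volume whose lifetime the Deployment does not control.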