I'm new to Kubernetes and I'm trying to figure out what resource limits I should set for my PHP web app.
My first guess is that if I have configured PHP-FPM to use a maximum of 256M of memory, then I should configure Kubernetes to limit the container to 256M as well.
The setup would be like this:
In the Dockerfile that builds the PHP container, I set the PHP `memory_limit` to 256M:

```sh
sed -i '/^;php_admin_value\[memory_limit\]/cphp_value[memory_limit] = 256M' /usr/local/etc/php-fpm.d/www.conf
```
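To double-check that the setting actually takes effect inside the running container, something like this should work (the Deployment and container names here are placeholders for illustration):

```sh
# Print the effective memory_limit as PHP itself sees it.
kubectl exec deploy/my-php-app -c fpm -- php -r 'echo ini_get("memory_limit"), PHP_EOL;'
```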
In my Kubernetes `Deployment` I set resource limits like this on the FPM container:

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "128Mi"
  limits:
    cpu: "1"
    memory: "256Mi"
```
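For reference, this is roughly where that block sits in the full manifest. A minimal sketch; the names and image are my assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-php-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-php-app
  template:
    metadata:
      labels:
        app: my-php-app
    spec:
      containers:
      - name: fpm
        image: registry.example.com/my-php-app:latest  # placeholder image
        resources:
          requests:
            cpu: "500m"
            memory: "128Mi"
          limits:
            cpu: "1"
            memory: "256Mi"
```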
The thing that makes me ask the question is that PHP.net's documentation for `memory_limit` says:

> This sets the maximum amount of memory in bytes that a script is allowed to allocate. This helps prevent poorly written scripts from eating up all available memory on a server.

PHP-FPM can serve many requests at the same time, and I gather each request runs a script completely independently of the others; the workers do not share the `memory_limit` budget. If that's the case, maybe my Kubernetes container limit should be higher than the PHP `memory_limit` setting?
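One way to sanity-check the per-worker numbers is to look at the resident set size of each FPM worker in a running Pod. A rough sketch, assuming the image ships `ps` from procps (the names are placeholders again):

```sh
# RSS (in KB) of the php-fpm master and each worker process.
kubectl exec deploy/my-php-app -c fpm -- ps -o pid,rss,comm -C php-fpm
```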
OK, but how much higher!? If I limit `cpu: 1`, I guess there can't be any real parallel processing, but there can still be many processes sharing the CPU, with the OS switching between them when one blocks on IO. I think the FPM `pm.max_children` setting tells me how many scripts can run at once, and therefore how much memory can be taken at the maximum. But this is where it gets confusing, because in our non-Kubernetes setup we choose `max_children` based on how much memory the instance has. The formula we use is:
```
max_children = (RAM - 2GB) / 80MB
# Where:
# * We reserve 2GB for the system.
# * We saw that each request needs about 40MB of memory on average, so we chose 80MB to be safe.
```
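To make that concrete: on a hypothetical 16GB instance (my example, not one of our actual sizes), the formula would give (16384MB - 2048MB) / 80MB ≈ 179 children.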
This equation goes out the window in the Kubernetes world, as the available RAM is whatever the container's limit prescribes, and you don't need to reserve any for the system because the container ONLY runs PHP.
I'm thinking of using:

```ini
memory_limit = 256M
max_children = 10
```

```yaml
resources:
  requests:
    cpu: "500m"
    memory: "512Mi"
  limits:
    cpu: "1"
    memory: "2Gi"
```
- Normal case: 10 children using 40M each. Memory usage: 400M.
- High case: 10 children using 80M each. Memory usage: 800M.
- Max case: 10 children using 256M each. Memory usage: 2560M. Very unlikely to happen, but if it did, I think the container would be OOMKilled, because 2560M is more than the 2Gi limit. (I sketch how I'd check for that below.)
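If that ever happened, the termination reason should be visible on the container status. For example (the Pod name is a placeholder):

```sh
# Prints "OOMKilled" if the previous container instance was killed for exceeding its memory limit.
kubectl get pod my-php-app-xxxxx -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'
```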
Does this thinking seem reasonable? I don't think it will allow for good packing, because 10 children will hardly ever need the 2Gi limit. We will use the HPA to add more Pods if resources are stretched, along the lines of the sketch below.
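For the HPA I'm picturing something like this; a minimal sketch, with the target name, replica counts, and threshold all being my assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-php-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-php-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

One thing I've read is that the utilization target is measured against the requests, not the limits, which seems like another reason to keep requests close to typical usage.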
What did you do in your Kubernetes setup?