I read the Apache Kafka documentation, but I couldn't find any guidance on how many partitions to use in a given scenario.
For example, let's say I have 5000 messages/entries per minute. For that situation, how many partitions should I have (or would you recommend)?
Or is there a way to calculate this? Maybe there's a table of values I can refer to?
There is no good default number of partitions, and you would need to provide more information to pick one.
It depends on the size of your messages, your platform, and your usage pattern. Can a single broker store all messages within the configured retention? If not, you should spread the data across several partitions. The same applies if you need higher throughput. It also matters whether you need to process messages sequentially or can consume them with no particular ordering constraint, and what latency you expect between a message being produced and consumed. Finally, if your messages matter, you will want replicas for each partition and to have every message acknowledged by all replicas, which lowers throughput.
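To make that durability/throughput trade-off concrete, here is a minimal producer sketch using the standard Java client; the broker address and the topic name `events` are placeholders. Setting `acks=all` makes the leader wait for all in-sync replicas before acknowledging a write, which is the setting that costs some produce throughput in exchange for durability.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // "localhost:9092" and the topic "events" are placeholders for your setup.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the leader waits for all in-sync replicas to acknowledge,
        // improving durability at the cost of some produce latency/throughput.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key always go to the same partition,
            // which is how per-key ordering is preserved when you have many partitions.
            producer.send(new ProducerRecord<>("events", "order-42", "created"));
        }
    }
}
```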
You also need to specify whether the number you gave refers to messages produced or consumed.
5000 messages per minute is very low, considering Kafka is built to process messages fast. I have easily reached 10,000 messages per second injected per server with 1 KB messages.
5000 messages per minute is about 84 messages per second, so if one instance of your consumer application can handle that rate you're fine; otherwise, consider adding partitions and running several consumer instances in parallel, each of which will be responsible for a partition (see the sketch below).
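As a sketch of that parallel setup with the standard Java client (the broker address, the topic `events`, and the group id are made-up placeholders): every instance started with the same `group.id` joins one consumer group, and Kafka splits the topic's partitions among the instances, so the partition count is the upper bound on useful consumer parallelism.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerGroupSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // All instances sharing this group.id split the topic's partitions between them.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-processing-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Replace with your actual processing logic.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```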
Confluent has published a blog post about how to choose the number of partitions (and the number of replicas).
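If I remember that post correctly, its rule of thumb is to have at least max(t/p, t/c) partitions, where t is the target throughput and p and c are the measured per-partition producer and consumer throughput. A tiny back-of-envelope sketch with made-up numbers:

```java
public class PartitionCountEstimate {
    public static void main(String[] args) {
        // All numbers below are made-up placeholders; measure your own per-partition throughput.
        double targetThroughputMbPerSec = 10.0;    // t: what the topic should sustain overall
        double producerPerPartitionMbPerSec = 5.0; // p: measured single-partition produce rate
        double consumerPerPartitionMbPerSec = 2.0; // c: measured single-partition consume rate

        // Enough partitions so that neither the produce nor the consume side
        // becomes the bottleneck: partitions >= max(t/p, t/c).
        int partitions = (int) Math.ceil(Math.max(
                targetThroughputMbPerSec / producerPerPartitionMbPerSec,
                targetThroughputMbPerSec / consumerPerPartitionMbPerSec));

        System.out.println("Suggested minimum partitions: " + partitions); // prints 5 here
    }
}
```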