We are on a Debian system and are trying to tune the TCP/IP stack to our needs. We all know that you can set the maximum TCP buffer sizes with kernel parameters like these:
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.udp_wmem_min = 4096
net.core.wmem_max = 261071
To size the buffers for your needs you "just" have to calculate the bandwidth-delay product: the available bandwidth multiplied by the round-trip time (see http://fasterdata.es.net/TCP-tuning/ ).
But as we do not know the round-trip time of our users, this is quite difficult. It might be OK to assume something between 20 and 60 ms, but on a mobile network it is more like 100-300 ms (tested with my phone). So it is hard to know how much data might be "on the wire".
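To make the numbers concrete, here is a minimal sketch of the bandwidth-delay-product arithmetic (the 100 Mbit/s link speed and the RTT values are illustrative assumptions, not measurements):

def bdp_bytes(bandwidth_mbit_s: float, rtt_ms: float) -> int:
    """Bandwidth-delay product: bytes that must be in flight to keep the pipe full."""
    return int(bandwidth_mbit_s * 1e6 / 8 * rtt_ms / 1e3)

# Illustrative only: a 100 Mbit/s link at the RTTs mentioned above.
for rtt_ms in (20, 60, 100, 300):
    print(f"RTT {rtt_ms:3d} ms -> buffer >= {bdp_bytes(100, rtt_ms):>9,} bytes")

At 100 Mbit/s and 300 ms the product is already about 3.75 MB, which is roughly the 4194304-byte tcp_wmem maximum shown above.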
We would like to see the actual buffer size and its utilization. Does anybody know how to peek into the actual TCP write and receive buffers?
If you cannot know the round-trip times of your users, then what's the point in trying to measure another part of the equation?
(Read the man pages for lsof, in particular the -T flag, and for netstat.)
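If you control the application, you can also ask the kernel directly from inside it. A minimal Python sketch, assuming Linux: it uses getsockopt for the configured buffer sizes and the Linux-specific SIOCOUTQ/SIOCINQ ioctls (exposed in Python as termios.TIOCOUTQ and termios.FIONREAD) for the current occupancy; the endpoint example.com is just a placeholder:

import fcntl
import socket
import struct
import termios

def buffer_stats(sock: socket.socket) -> dict:
    """Report configured buffer sizes and how many bytes currently sit in them."""
    # Kernel-configured buffer sizes (the kernel roughly doubles any requested value).
    sndbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
    rcvbuf = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
    # SIOCOUTQ: bytes queued in the send buffer (unacknowledged or not yet sent).
    outq = struct.unpack("i", fcntl.ioctl(sock, termios.TIOCOUTQ, b"\0" * 4))[0]
    # SIOCINQ: bytes in the receive buffer not yet read by the application.
    inq = struct.unpack("i", fcntl.ioctl(sock, termios.FIONREAD, b"\0" * 4))[0]
    return {"sndbuf": sndbuf, "rcvbuf": rcvbuf, "outq": outq, "inq": inq}

# Placeholder endpoint, for illustration only.
s = socket.create_connection(("example.com", 80))
s.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(buffer_stats(s))
s.close()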
But trust me: the TCP stack is quite smart, the people who wrote it are smarter than both you and me, and they know a lot more about this stuff. The only thing to think about is window scaling, which these days is enabled by default. Unless you are transferring very large files across a very high-speed WAN you probably don't need it; if the files are not huge, or the bandwidth is low, there is no benefit over the default window size.
As they said, it is useless to do something like that. It might be useful if you had a wired network with constant traffic, but even then it is not recommended. What you probably could do is select a different TCP congestion-control algorithm (Vegas, Tahoe, Reno, etc. are some of the implementations).
Each one of them focuses on improving some aspect of TCP: for example, one tries to improve delay, another tries to reduce jitter, and so on.
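On Linux, the algorithm can also be chosen per socket via the TCP_CONGESTION socket option; the system-wide default lives in net.ipv4.tcp_congestion_control. A minimal Python sketch, assuming a kernel with the tcp_vegas module loaded and "vegas" listed in net.ipv4.tcp_allowed_congestion_control (otherwise the setsockopt call needs CAP_NET_ADMIN):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# TCP_CONGESTION (Linux-only; socket.TCP_CONGESTION since Python 3.6) picks the
# congestion-control algorithm for this one socket.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
# Read it back to confirm which algorithm the kernel actually applied.
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16).split(b"\0")[0])
s.close()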