During typical capacity-planning processes I see a perhaps simplistic approach that estimates only the page or data sizes expected to pass through the network pipes. For example, in regular application development the average HTTP page response is 30 KB, or the XML web service payload averages 100 KB; such figures are taken raw and multiplied by the forecasted volume to get the expected bandwidth requirements.
It seems application developers and architects tend to forget, or are entirely unaware, that application data is encapsulated in lower-layer protocols for delivery, so they seldom consider the bandwidth consumed by header overhead in TCP/IP packets and Ethernet frames.
I feel this overhead should be considered during bandwidth planning, but I do not know how to calculate it accurately. Is there a basic formula or process that factors in these overhead percentages? So far I have not seen any article that discusses this matter.
There isn't a single formula, because the amount of encapsulation overhead is dependent on the size of the data payload in the packets -- there's a lot more overhead if you're serving 10Mbps of DNS traffic than there is in serving 10Mbps of ISOs. The overheads are usually negligible, however, and swamped by the inaccuracies inherent in estimating request volume and response sizes. The safety margins built into your capacity planning should more than cover the overheads.
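To make that payload-dependence concrete, here is a rough sketch that bounds the per-packet overhead for TCP over IPv4 over Ethernet, assuming minimum header sizes (no IP or TCP options) and optionally counting the preamble and inter-frame gap that occupy wire time. The function name and the example payload sizes are illustrative, not from any standard tool:

```python
# Per-packet encapsulation overhead for TCP/IPv4 over Ethernet.
# Sizes below are the standard minimums; IP/TCP options (e.g. TCP
# timestamps) and VLAN tags would add a few more bytes per packet.
ETH_HEADER = 14        # dest MAC + src MAC + EtherType
ETH_FCS = 4            # frame check sequence (trailer)
ETH_PREAMBLE_IFG = 20  # 8-byte preamble/SFD + 12-byte inter-frame gap
IP_HEADER = 20         # IPv4 header, no options
TCP_HEADER = 20        # TCP header, no options

def overhead_fraction(payload_bytes: int, include_wire_gap: bool = True) -> float:
    """Fraction of on-the-wire bytes that are not application payload."""
    headers = ETH_HEADER + ETH_FCS + IP_HEADER + TCP_HEADER
    if include_wire_gap:
        headers += ETH_PREAMBLE_IFG
    return headers / (headers + payload_bytes)

# Full-size segments (a typical 1460-byte MSS): about 5% overhead.
print(f"{overhead_fraction(1460):.1%}")
# Small payloads (e.g. 64 bytes, DNS-sized): overhead dominates.
print(f"{overhead_fraction(64):.1%}")
```

With full-size segments the result is roughly 5%, which is why the overhead is usually lost in the noise of volume estimates; with tiny packets it can exceed 50%, which is why traffic mix matters more than any single multiplier.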