I want to measure the login via htaccess to some servers (how fast it is processed by the server) and I want to ignore the time lost on the network. So should I deduct the ping latency once or twice? Or are my calculations wrong?
The best way to remove network latency from your results is to test from a host local to the server. This can be the server itself, unless you are performing a high-load test (in which case the client-side processing will interfere with your benchmark).
Judging network latency of an authenticated HTTP request via ping readings is not going to be particularly accurate unless you take a lot of readings.
In any case it will be more than 2x ping time. Creating a TCP connection (I'm ignoring persistent HTTP connections here) takes at least three IP packets travelling over the network: a SYN packet from client to server, a SYN+ACK back to the client, and a final ACK from client to server. Assuming the travel time from you to the server is the same as the travel time back, that is 1.5 pings. IIRC, if the HTTP request is small enough it can be included in the final ACK packet. After that there will be the HTTP response, which will be at least one more packet. If either the request or the response is larger than the MTU between you and the server, there will be more packets back and forth, of course.
If you are sending authentication details directly, that is it: at least 4 IP packets (there + back + there + back), so 2x ping (a ping being two packets, one there and one back). But it is not uncommon to make an unauthenticated request first, get a 401 response back, and then make the authenticated request in response to that, at least doubling this to 8 packets (4x ping).
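To make the arithmetic concrete, here is a minimal Python sketch of the deduction; the URL, credentials and ping value are assumptions, and the ping should be an average over many readings. The requests library sends Basic auth credentials preemptively when given an auth tuple, so the 2x-ping case applies:

```python
# A minimal sketch of the deduction, assuming a hypothetical URL/credentials
# and a ping time you have already measured separately (averaged over many
# readings).
import time

import requests  # third-party: pip install requests

URL = "https://example.com/protected/"  # hypothetical htaccess-protected page
PING = 0.020                            # measured round-trip time in seconds (assumed)

start = time.perf_counter()
# requests sends the Basic auth header preemptively when given an auth tuple,
# so this is the "4 packets, 2x ping" case from above.
resp = requests.get(URL, auth=("user", "secret"))
elapsed = time.perf_counter() - start

server_time = elapsed - 2 * PING
# If your client instead waits for a 401 and then retries with credentials,
# deduct roughly twice as much:
# server_time = elapsed - 4 * PING

print(f"total: {elapsed * 1000:.1f} ms")
print(f"estimated server time: {server_time * 1000:.1f} ms")
```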
Also, be aware that the ICMP packets used by ping may be given a different priority at points along the route than the TCP packets used for HTTP, and the packets carrying the HTTP request and response will be larger than default ping packets (ping usually sends only a small payload).
So if you can, test from a machine that is on the same LAN - remote testing, where you try to guess what part of the time taken is network delay and what is processing delay, is not going to give you very accurate answers.
It depends on exactly how you measure. For example, do you measure from when the TCP connection is established until the data is received? Or do you measure from when the request is sent over the TCP connection until the TCP connection is closed?
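If you want those boundaries to be explicit, a plain socket makes the phases easy to time separately. This is just a sketch; the host, port, path and credentials (user:secret, base64-encoded) are made up. Note that connect() returning marks the end of the handshake, which costs roughly one ping since the final ACK needs no reply:

```python
# A sketch that times the phases separately with a plain socket, so the
# measurement boundaries are explicit. Host, port, path and the credentials
# are assumptions.
import socket
import time

HOST, PORT = "example.com", 80
REQUEST = (
    "GET /protected/ HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Authorization: Basic dXNlcjpzZWNyZXQ=\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode()

t0 = time.perf_counter()
sock = socket.create_connection((HOST, PORT))  # returns once SYN+ACK arrives
t1 = time.perf_counter()                       # handshake: roughly 1x ping
sock.sendall(REQUEST)
sock.recv(1)                                   # block until the response starts
t2 = time.perf_counter()
sock.close()

print(f"connect:               {(t1 - t0) * 1000:.1f} ms")  # ~ pure network
print(f"request to first byte: {(t2 - t1) * 1000:.1f} ms")  # ~ 1x ping + server time
```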
If you want to do this correctly, you'll have to look at exactly what's going on in the time interval you're measuring, see how many "handoffs" are required, and subtract the ping time that many times. Perhaps the easiest way to do this is to measure from two different machines with different ping times. Then plot them on a graph and extrapolate to a ping time of zero.
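As a sketch of that extrapolation, assuming you have averaged ping and total request times from two machines (the numbers below are invented), fit a straight line and read off the intercept at zero ping:

```python
# A sketch of the zero-ping extrapolation, assuming averaged measurements
# from two machines; the numbers here are invented.
ping_a, total_a = 0.005, 0.090  # machine A: 5 ms ping, 90 ms total request time
ping_b, total_b = 0.040, 0.230  # machine B: 40 ms ping, 230 ms total request time

# Model total = server_time + k * ping; the intercept at ping = 0 is the
# server-side processing time, and k is how many ping-times the exchange costs.
k = (total_b - total_a) / (ping_b - ping_a)
server_time = total_a - k * ping_a

print(f"ping multiplier k:     {k:.1f}")                      # 4.0 here
print(f"estimated server time: {server_time * 1000:.1f} ms")  # 70.0 ms here
```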
The way this is typically done is much simpler -- just test from a computer on the same LAN so the ping time is less than a millisecond.
Note that ping time includes more than just network latency. It also includes the time it takes the other computer to form and send the response.