From the Wikipedia page:
The checksum field is the 16 bit one's complement of the one's complement sum of all 16-bit words in the header and text.
Why, after summing all the 16-bit words in the header and text, is the one's complement of that sum taken to compute the TCP checksum?
The complement is taken to make checksum validation simpler. Instead of recalculating the checksum and then comparing it with the checksum field in the header (which sits in the middle of the summed data), the receiver can simply sum, using one's complement arithmetic, all 16-bit words of the segment, checksum word included, and check that the result is all ones (which is zero in one's complement).
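To make the arithmetic concrete, here is a minimal C sketch of that scheme (not taken from any particular stack; it assumes an even-length buffer already in network byte order and leaves out the TCP pseudo-header for brevity). The sender stores the complement of the folded sum; the receiver sums everything, checksum word included, and checks for 0xFFFF.

    #include <stdint.h>
    #include <stddef.h>

    /* One's complement sum of 16-bit words, with end-around carry. */
    uint16_t ones_complement_sum(const uint8_t *data, size_t len)
    {
        uint32_t sum = 0;

        for (size_t i = 0; i + 1 < len; i += 2)
            sum += ((uint32_t)data[i] << 8) | data[i + 1];   /* next 16-bit word */

        /* Fold any carries back into the low 16 bits. */
        while (sum >> 16)
            sum = (sum & 0xFFFF) + (sum >> 16);

        return (uint16_t)sum;
    }

    /* Sender: checksum field = one's complement of the sum
     * (computed with the checksum field itself set to zero). */
    uint16_t compute_checksum(const uint8_t *data, size_t len)
    {
        return (uint16_t)~ones_complement_sum(data, len);
    }

    /* Receiver: sum everything, checksum field included; a valid
     * segment yields all ones (0xFFFF, one's complement zero). */
    int checksum_ok(const uint8_t *data, size_t len)
    {
        return ones_complement_sum(data, len) == 0xFFFF;
    }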
It might be because a value of all zeros is meaningful (from RFC 793):
so taking the complement preserves that meaning against the unlikely event that the checksum really comes out as zero.
Checksumming so that the expected result is all zeros is problematic, because all zeros is also what a powered-off or unpowered device produces. With the expected result being all ones, there is more assurance that the hardware is actually functioning.
One's complement is more complicated term for analyzing the bits one-by-one and not as an entire unit (i.e. AND/OR/XOR/NOT).
One's complement arithmetic was used because TCP was designed in the 1970s for 1970s computers, most of which used one's complement arithmetic. The rise of two's complement arithmetic, which modern computers use, didn't really begin in earnest until the personal computers of the late 1970s and 1980s.