A reminder that TCP_NODELAY should be set by default, and you should remember to set it if it’s not.
Many protocols end up being ping-pong ones. This includes HTTP/2, for example.
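To make that concrete, here is a minimal sketch of what setting it looks like in Python; the host and port are placeholders, and passing 0 instead of 1 would re-enable Nagle's algorithm:

```python
import socket

# Minimal sketch: open a TCP connection and disable Nagle's algorithm.
# "example.com" and port 80 are placeholders for illustration.
sock = socket.create_connection(("example.com", 80))

# TCP_NODELAY=1 turns Nagle's algorithm off, so small writes go out
# immediately instead of being held while earlier data is unacknowledged.
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Confirm the option actually took effect on this socket.
assert sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
```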
Why do you believe that?

I already mentioned why! It's a common pitfall. For example, try a large HTTP/2 transfer over a socket where TCP_NODELAY is not set (or rather, explicitly unset), and see how the transfer rate is limited because of it.
The only thing that TCP_NODELAY does is disable packet batching/merging through Nagle's algorithm. Supposedly, that batching increases throughput by reducing the volume of redundant per-packet overhead required to send small data payloads in individual packets, at the cost of higher latency; it's a tradeoff between latency and throughput. I don't see any reason why leaving Nagle enabled would lower transfer rates; quite the opposite. In fact, the few benchmarks I have seen showed exactly that: TCP_NODELAY causing a drop in the transfer rate. There are also articles on the cargo cult behind TCP_NODELAY. But feel free to show your data.
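For reference, the rule Nagle's algorithm applies is roughly the following. This is a simplified sketch of the RFC 896 behavior; the function and parameter names are illustrative, not any real kernel API:

```python
def nagle_allows_send(buffered_bytes: int, mss: int, unacked_bytes: int) -> bool:
    """Simplified decision rule of Nagle's algorithm (RFC 896).

    A write is sent immediately only if it fills a whole segment, or if
    nothing previously sent is still waiting for an ACK. Otherwise the
    small write is buffered and coalesced with later writes.
    """
    if buffered_bytes >= mss:
        return True   # a full-sized segment is always worth sending
    if unacked_bytes == 0:
        return True   # connection is idle: send the small segment now
    return False      # small segment while data is in flight: hold it
```

TCP_NODELAY removes the last branch: small segments are sent even while earlier data is unacknowledged. The pathological case is when the peer is also holding back its ACK (delayed ACK), so the two mechanisms end up waiting on each other.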
You clearly have no idea what a ping-pong protocol is.
Okay, so can you explain?
I specifically mentioned HTTP/2 because it should have been easy for everyone to both test and find the relevant info. But anyway, here is a short explanation, and the curl-library thread where the issue was first encountered. The short version: even a bulk HTTP/2 transfer is not one-way at the TCP level, because the peer has to keep sending small frames back (flow-control WINDOW_UPDATEs, for instance), so each endpoint alternates small writes with reads, and Nagle holding those small writes back while waiting for ACKs is what throttles the stream.
You should also find plenty of blog posts where "unexplainable delay", "unexplainable slowness", or "something is stuck" is the premise, and then, after a lot of story development and suspense, the big reveal comes: it was Nagle's fault. As with many things in TCP, a technique that may have been useful once ends up proving counterproductive when used with modern protocols, workflows, and networks.
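For anyone who wants to reproduce that "something is stuck" effect locally, here is a self-contained sketch of the classic write-write-read pattern over loopback. Whether the stall shows up, and how large it is, depends on the OS's delayed-ACK behavior; on many systems the Nagle-on case costs tens of milliseconds per round while the Nagle-off case takes microseconds. The port number is arbitrary.

```python
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007   # arbitrary local port for the demo
N = 32                            # each write is tiny, far below one MSS

def handle(conn: socket.socket) -> None:
    """Echo back exactly 2 * N bytes per round until the client hangs up."""
    while True:
        data = b""
        while len(data) < 2 * N:
            chunk = conn.recv(2 * N - len(data))
            if not chunk:
                return
            data += chunk
        conn.sendall(data)

def echo_server() -> None:
    with socket.socket() as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                handle(conn)

def avg_round_ms(nodelay: bool, rounds: int = 20) -> float:
    """Average one write-write-read round trip, in milliseconds."""
    with socket.create_connection((HOST, PORT)) as sock:
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, int(nodelay))
        start = time.perf_counter()
        for _ in range(rounds):
            sock.sendall(b"x" * N)  # first small write: sent immediately
            sock.sendall(b"y" * N)  # second small write: with Nagle on, this
                                    # waits on the peer's (possibly delayed) ACK
            reply = b""
            while len(reply) < 2 * N:
                reply += sock.recv(2 * N)
        return (time.perf_counter() - start) * 1000 / rounds

threading.Thread(target=echo_server, daemon=True).start()
time.sleep(0.2)  # crude wait so the server is listening before we connect
print(f"Nagle on  (no TCP_NODELAY): {avg_round_ms(nodelay=False):7.2f} ms/round")
print(f"Nagle off (TCP_NODELAY=1) : {avg_round_ms(nodelay=True):7.2f} ms/round")
```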