I recently learned about tcp_sack. While I certainly don’t understand every detail of this feature, it’s clear it should be a huge help in cases where TCP packets are dropped and latency is relatively high. From my basic understanding: every TCP segment carries a sequence number, and when tcp_sack is enabled on both client and server, the receiver can tell the sender exactly which ranges were dropped. When tcp_sack is not enabled, the receiver can only acknowledge the “last” sequentially received packet, and everything after that point has to be resent.
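To see whether the feature is active on a given box, a small sketch like this works, assuming a Linux host where the runtime value is exposed under /proc (the same value `sysctl net.ipv4.tcp_sack` reports):

```python
# Check whether TCP selective acknowledgements are enabled on this host.
# Assumes Linux: the runtime value lives in /proc/sys/net/ipv4/tcp_sack.
from pathlib import Path

def sack_enabled(proc_path="/proc/sys/net/ipv4/tcp_sack"):
    """Return True/False for the current tcp_sack setting, or None if unknown."""
    p = Path(proc_path)
    if not p.exists():          # non-Linux host, or /proc not mounted
        return None
    return p.read_text().strip() == "1"

if __name__ == "__main__":
    print("tcp_sack enabled:", sack_enabled())
```

On a non-Linux machine (or inside some containers) the file simply won’t exist, hence the `None` case.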
E.g.: packets 1-10 are received, packets 11-14 are lost, packets 15-35 are received. With tcp_sack, the client reports that it received everything up to packet 10 plus the range 15-35, so the server only has to resend packets 11-14. Without tcp_sack, the client can only report that it received everything up to packet 10, so the server has to resend packets 11-35.
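The example above can be modelled in a few lines. This is a toy illustration of the retransmission logic, not kernel code; the function name and structure are my own:

```python
# Toy model of the example: segments 1-35 sent, 11-14 lost in transit.

def must_retransmit(sent, received, sack=True):
    """Return the segments the sender has to send again after the loss."""
    received = set(received)
    # Highest segment acknowledged cumulatively (no gap before it).
    cum_ack = 0
    for seq in sent:
        if seq in received:
            cum_ack = seq
        else:
            break
    if sack:
        # The receiver reports the exact gaps, so only the holes go out again.
        return [s for s in sent if s not in received]
    # Without SACK only the cumulative ACK is known: resend everything after it.
    return [s for s in sent if s > cum_ack]

sent = list(range(1, 36))                           # segments 1..35
received = [s for s in sent if not 11 <= s <= 14]   # 11-14 dropped

print(must_retransmit(sent, received, sack=True))        # [11, 12, 13, 14]
print(len(must_retransmit(sent, received, sack=False)))  # 25 (segments 11..35)
```

Four segments resent versus twenty-five: that is the whole argument for the feature in one number.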
In all the distros I could get my hands on (CentOS, Debian, Ubuntu, …), it was on by default! The question, however, is how many packets are commonly dropped, and does the communication even have “high” latency? At what cost does tcp_sack come? I found little data about resource consumption by this feature, but since it’s on by default I assume it’s trivial. I did, however, find an article claiming that for a ~4MB file over an emulated connection, tcp_sack actually made the transfer slower (above 2 min without vs below 2 min with tcp_sack, at 1% loss). That seems to defeat the purpose of tcp_sack altogether. I am not that interested in those situations, though: my environment is local servers talking to each other, and I don’t really care much whether they go faster or slower under packet loss, because it’s a red flag if latency or packet loss happens at all.
I copied over a random payload to check whether the parameter has any influence on the time spent on the transfer.
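A minimal sketch of that experiment, timing a random payload across a local TCP connection. Toggling the setting between runs (as root: `sysctl -w net.ipv4.tcp_sack=0` or `=1`) is assumed to happen outside the script; on a loopback connection with no loss you should expect essentially no difference:

```python
# Time how long a random payload takes to cross a local TCP connection.
import os, socket, threading, time

PAYLOAD = os.urandom(4 * 1024 * 1024)   # ~4 MB random payload

def serve(srv):
    conn, _ = srv.accept()
    with conn:
        conn.sendall(PAYLOAD)

def timed_transfer():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
    srv.listen(1)
    threading.Thread(target=serve, args=(srv,), daemon=True).start()
    start = time.monotonic()
    with socket.create_connection(srv.getsockname()) as cli:
        got = 0
        while got < len(PAYLOAD):
            chunk = cli.recv(65536)
            if not chunk:
                break
            got += len(chunk)
    srv.close()
    return time.monotonic() - start, got

elapsed, received = timed_transfer()
print(f"received {received} bytes in {elapsed:.3f}s")
```

To reproduce the loss scenarios from the article, something like `tc qdisc add dev lo root netem loss 1%` would be needed alongside this; loopback alone won’t drop anything.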