...
```
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 300
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 300
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 10
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 10
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 10
CONFIG proxy.config.net.default_inactivity_timeout INT 10
```
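These go in records.config; on a reasonably recent ATS release you can also inspect or change them at runtime with `traffic_ctl` (older releases shipped `traffic_line` instead). A sketch, assuming a running Traffic Server instance:

```shell
# Read the current value of one of the timeouts (requires a running ATS).
traffic_ctl config get proxy.config.http.keep_alive_no_activity_timeout_in

# Override it at runtime without a restart.
traffic_ctl config set proxy.config.http.keep_alive_no_activity_timeout_in 300

# Or, after editing records.config directly, pick up the changes.
traffic_ctl config reload
```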
Network Settings
The following settings control various aspects of networking within ATS.
The first setting controls how often Traffic Server internally polls to process network events. Even though I'm now on a machine that can handle the 2-3% CPU load, I decided to reduce the polling frequency, and I haven't noticed any significant performance difference as a result.
The second, third, and fourth settings relate closely to the OS tuning documented on the next wiki page.
The second setting removes the TCP_NODELAY option from origin server connections. Once Linux has been told to optimize for latency, this option appears to be unnecessary.
The third and fourth settings specify the socket buffer sizes for origin server connections. I've found that setting these to roughly my "average object size" as reported by traffic_top seems to be optimal.
```
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 49152
CONFIG proxy.config.net.sock_send_buffer_size_out INT 49152
```
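The buffer-size heuristic can be sketched in shell: take the "average object size" reported by traffic_top and round it up to a whole number of 4 KiB pages. The rounding granularity (and the sample size) are my assumptions for illustration, not anything ATS requires:

```shell
# Round a hypothetical average object size up to the next multiple of 4 KiB
# to pick a socket buffer size.
avg=48000   # hypothetical average object size in bytes, from traffic_top
gran=4096   # assumed rounding granularity (one page)
buf=$(( ( (avg + gran - 1) / gran ) * gran ))
echo "$buf"   # -> 49152, the value used above
```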
Cache Control
The following configurations tell Traffic Server to be more aggressive about caching than it would otherwise be, and enable some speed-ups.
...