...
- Server Host: Vultr (www.vultr.com)
- CPU: 3.6 GHz Intel CPU (single core)
- Memory: 1GB
- Disk: 20GB SSD
- OS: CentOS Linux 7.0
- Cache Size: 1GB
- Browser: Google Chrome v43
Testing Regimen
The settings below have been tested against the following:
...
Code Block
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 105
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 105
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 105
CONFIG proxy.config.net.default_inactivity_timeout INT 10
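As a quick sanity check on the units, here's a throwaway Python helper (my own, not part of ATS) that decodes the second counts above into readable durations:

```python
# Decode the records.config timeout values (all in seconds) above.
def human(seconds: int) -> str:
    """Render a second count as 'Hh Mm Ss', e.g. 14400 -> '4h 0m 0s'."""
    h, rem = divmod(seconds, 3600)
    m, s = divmod(rem, 60)
    return f"{h}h {m}m {s}s"

timeouts = {
    "keep_alive_no_activity_timeout_in": 900,
    "transaction_no_activity_timeout_in": 105,
    "transaction_active_timeout_in": 14400,
    "accept_no_activity_timeout": 105,
    "default_inactivity_timeout": 10,
}
for name, secs in timeouts.items():
    print(f"{name} = {human(secs)}")
```

So idle keep-alive sessions are held for 15 minutes, while an active transaction is capped at 4 hours.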
Origin Server Connect Attempts
I had a similar experience tuning Squid in this regard. The first setting controls how many connections ATS can make outbound to various Internet servers,
on a per-server basis. The default allows for unlimited connections, and while that may be useful on a heavily loaded server, I find that it actually slows things down a bit.
I decided to allow 32 simultaneous connections per origin server. I also found that keeping a connection open for additional requests speeds things up.
Code Block
CONFIG proxy.config.http.origin_max_connections INT 32
CONFIG proxy.config.http.origin_min_keep_alive_connections INT 5
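The per-origin cap described above can be sketched as one counting semaphore per origin host, sized at 32. This is an illustration of the policy only, not how ATS implements it internally:

```python
import threading
from collections import defaultdict

ORIGIN_MAX_CONNECTIONS = 32  # mirrors the per-origin limit chosen above

# One counting semaphore per origin host caps simultaneous connections.
_slots = defaultdict(lambda: threading.Semaphore(ORIGIN_MAX_CONNECTIONS))

def acquire_origin_slot(host: str) -> bool:
    """Claim a connection slot for host; False once 32 are in flight."""
    return _slots[host].acquire(blocking=False)

def release_origin_slot(host: str) -> None:
    """Return a slot when the origin connection closes."""
    _slots[host].release()
```

The 33rd simultaneous request to the same origin is refused a slot, while requests to other origins are unaffected.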
Network Settings
The following settings control various network-related behaviors within ATS.
The first setting controls how often Traffic Server will internally poll to process network events. Even though I'm now on a machine that sits at only 2-3% CPU load, I decided to reduce this. I haven't noticed any significant performance difference as a result.
The second setting removes the TCP_NODELAY option from origin server connections. Once one has told Linux to optimize for latency, this appears to be no longer necessary.
The settings that specify the socket buffer sizes for origin server connections relate more closely to the OS tuning that's documented in the next wiki page. I've found setting them to roughly my "average object size" as reported by "traffic_top" appears to be optimal.
Code Block
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
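For reference, what sock_option_flag_out toggles corresponds to the TCP_NODELAY socket option, which disables Nagle's algorithm; leaving the ATS flag at 0, as above, keeps Nagle in effect. A minimal Python sketch of the raw socket call:

```python
import socket

# TCP_NODELAY trades Nagle's batching for lower latency on a socket.
# ATS's sock_option_flag_out controls the same option on its
# outbound (origin) connections.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # latency over batching
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY:", nodelay)
s.close()
```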
Cache Control
The following configurations tell Traffic Server to be more aggressive about caching than it would otherwise be, and enable some speed-ups.
...
Code Block
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
CONFIG proxy.config.http.chunking.size INT 64K
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
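The 64K chunking.size bounds how large each chunk of a chunked-transfer response can be. A small sketch (hypothetical numbers, just to show the framing arithmetic):

```python
CHUNK = 64 * 1024  # the 64K chunking.size configured above

def chunk_sizes(body_len: int, chunk: int = CHUNK) -> list:
    """Sizes of the chunks a body of body_len bytes would be cut into."""
    full, rest = divmod(body_len, chunk)
    return [chunk] * full + ([rest] if rest else [])

# A hypothetical 200KB response body:
print(chunk_sizes(200 * 1024))
```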
...
I'd prefer that they stick around for between one and four weeks. This setting is contentious, in that what it should be is debatable.
The goal here is to enforce a window of between one and four weeks for keeping objects in the cache, using Traffic Server's built-in heuristics.
Code Block
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000
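Decoding those second counts confirms the one-to-four-week window:

```python
# The heuristic lifetimes above, expressed in days and weeks.
DAY = 24 * 3600
WEEK = 7 * DAY

heuristic_min_lifetime = 604800    # exactly one week
heuristic_max_lifetime = 2592000   # 30 days, i.e. just over four weeks

print(heuristic_min_lifetime // WEEK, "week minimum")
print(heuristic_max_lifetime // DAY, "day maximum")
```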
Network Configuration
The default config for Traffic Server allows for up to 30,000 simultaneous connections.
...
Code Block
CONFIG proxy.config.net.connections_throttle INT 1K
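For a sense of scale, here's my own back-of-envelope arithmetic (an assumption, not from the ATS docs): each proxied client connection may pin a matching origin-side connection, so the file-descriptor budget is on the order of twice the throttle.

```python
# Rough descriptor budget implied by connections_throttle.
throttle = 1 << 10            # the 1K configured above
default_throttle = 30000      # the default mentioned in the text

print(2 * throttle)                  # approx. fd budget at 1K
print(default_throttle // throttle)  # roughly how much smaller than default
```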
RAM And Disk Cache Configuration
...
NOTE: One should always halve the setting for that configuration, as it allows "headroom" within Traffic Server such that one will never run out of slots in which to store objects.
Third, I discovered that backing off the cache directory's sync frequency helps populate the RAM cache under low load. I definitely recommend trying this.
Code Block
CONFIG proxy.config.cache.ram_cache.size INT 8M
CONFIG proxy.config.cache.ram_cache_cutoff INT 1M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.min_average_object_size INT 24K
CONFIG proxy.config.cache.target_fragment_size INT 4M
CONFIG proxy.config.cache.mutex_retry_delay INT 50
CONFIG proxy.config.cache.enable_read_while_writer INT 2
CONFIG proxy.config.cache.dir.sync_frequency INT 900
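The NOTE above in numbers: the number of directory slots is roughly cache_size / min_average_object_size, so halving a measured average object size (a hypothetical 48K reading here) doubles the slots:

```python
# Directory-slot headroom from halving min_average_object_size.
KB, GB = 1 << 10, 1 << 30

cache_size = 1 * GB              # the 1GB cache from the test setup
measured_average = 48 * KB       # hypothetical traffic_top reading
setting = measured_average // 2  # -> 24K, as configured above

print(cache_size // measured_average)  # slots without headroom
print(cache_size // setting)           # slots with the halved setting
```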
Logging Configuration
The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature. I prefer to have the file readable, so I'm overriding this.
...
Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.
...
Code Block
##############################################################################
# HostDB
##############################################################################
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900
CONFIG proxy.config.hostdb.timeout INT 10080
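A quick check that the two HostDB sizes are in proportion: spreading storage_size across the maximum number of entries shows how much room each DNS record gets on average.

```python
# Average bytes available per HostDB entry with the sizing above.
K, MB = 1 << 10, 1 << 20

hostdb_size = 48 * K      # maximum number of DNS entries
storage_size = 12 * MB    # bytes reserved for HostDB

print(storage_size // hostdb_size, "bytes per entry")
```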
Restart Traffic Server
Once you've updated the relevant records.config settings, refresh your disk cache if necessary and then restart Traffic Server.
...
Previous Page: WebProxyCacheSetup
Next Page: WebProxyCacheOS