...
NOTE: Please use the following with Apache Traffic Server v5.0.0 and higher.
Virtual Machine
- Server Host: Vultr (www.vultr.com)
- CPU: 3.6 Ghz Intel Core i7 CPU (single core w/Hyperthreading)
- Memory: 1GB
- Disk: 20GB SSD
- OS: CentOS Linux 7.0
- Cache Size: 1GB
- Browser: Google Chrome v43
Testing Regimen
The following settings have been tested against the following:
...
I'm using Traffic Server on a datacenter-grade connection. As such, I've configured it to be pretty impatient in terms of timeouts.
Code Block
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 5
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 5
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 5
CONFIG proxy.config.net.default_inactivity_timeout INT 30
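If you'd rather not restart for every experiment, traffic_line can read and apply individual records.config values on the fly. A minimal sketch, using one of the settings above:
Code Block
# Read the current value of a setting
traffic_line -r proxy.config.http.transaction_no_activity_timeout_in
# Change it (persisted to records.config)
traffic_line -s proxy.config.http.transaction_no_activity_timeout_in -v 5
# Tell the running server to re-read records.config
traffic_line -x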
Origin Server Connect Attempts
I had a similar experience tuning Squid in this regard. This first setting controls how many connections ATS can make outbound to various Internet servers,
on a per-server basis. The default allows for unlimited connections, and while that may be useful on a heavily loaded server I find that it actually slows things down a bit.
I decided to go with 32 simultaneous connections per origin server. I also found that keeping a few extra connections open for additional requests speeds things up, so the second setting keeps five keep-alive connections open per origin.
Code Block
CONFIG proxy.config.http.origin_max_connections INT 32
CONFIG proxy.config.http.origin_min_keep_alive_connections INT 5
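To sanity-check a limit like this, it helps to watch how many origin connections the server actually holds. A quick check, assuming the standard server-connection metric:
Code Block
# Current number of connections ATS holds open to origin servers
traffic_line -r proxy.process.http.current_server_connections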
Network Settings
The following settings control various network-related settings within ATS.
The first setting controls how often Traffic Server will internally poll to process network events. Even though I'm now on a machine that can't easily handle 2-3% CPU load constantly, I decided to reduce the polling frequency. I haven't noticed any significant performance difference as a result.
The second and third/fourth settings relate more closely to the OS tuning that's documented in the next wiki page.
The second setting removes the TCP_NODELAY option from origin server connections. Once one has told Linux to optimize for latency, this appears to be no longer necessary.
The third/fourth settings specify the socket buffer sizes for origin server connections. I've found that setting these to roughly my "average object size" as reported by "traffic_top" appears to be optimal.
Code Block
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
# Socket buffer sizes: set these to roughly your average object size.
# (32K below is only an example; check traffic_top for your own number.)
CONFIG proxy.config.net.sock_send_buffer_size_out INT 32768
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 32768
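If you'd rather script the average-object-size figure than read it off traffic_top, something like the following should land in the same ballpark (the metric names are the response-size counters I believe traffic_top itself reads; run it against a warmed-up cache so the request count is non-zero):
Code Block
# Rough average object size = total response body bytes / number of requests
BYTES=$(traffic_line -r proxy.process.http.user_agent_response_document_total_size)
REQS=$(traffic_line -r proxy.process.http.incoming_requests)
echo "average object size: $((BYTES / REQS)) bytes"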
Cache Control
The following settings tell Traffic Server to cache more aggressively than it otherwise would, and include a few speed-ups as well.
...
Code Block
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
CONFIG proxy.config.http.chunking.size INT 64K
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
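A quick way to confirm these settings are actually producing cache hits is to send a request through the proxy and inspect the Via header that Traffic Server appends, which encodes the cache hit/miss state. A sketch, assuming ATS is listening on its default port 8080 on localhost:
Code Block
# -D - dumps the response headers; the Via header shows the cache result
curl -s -o /dev/null -D - -x localhost:8080 http://www.example.com/ | grep -i via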
...
I'd prefer that they stick around for up to 4 weeks. This setting is contentious, in that what it should be is debatable.
The goal here is to enforce a window of between 1 and 4 weeks for keeping objects in the cache, using Traffic Server's built-in heuristics.
Code Block
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000
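For objects that carry a Last-Modified header but no explicit expiry, Traffic Server derives a freshness lifetime from the document's age and clamps it into the window above. A rough sketch of that clamping, assuming the default heuristic_lm_factor of 0.10 (not overridden here):
Code Block
# Heuristic freshness, clamped to the 1-to-4-week window configured above
MIN=604800     # heuristic_min_lifetime (1 week)
MAX=2592000    # heuristic_max_lifetime (~4 weeks)
AGE=5000000    # example: seconds since the object's Last-Modified date
TTL=$((AGE * 10 / 100))          # lm_factor of 0.10, as integer math
[ "$TTL" -lt "$MIN" ] && TTL=$MIN
[ "$TTL" -gt "$MAX" ] && TTL=$MAX
echo "cache lifetime: $TTL seconds"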
Network Configuration
The default config for Traffic Server allows for up to 30,000 simultaneous connections.
...
Code Block
CONFIG proxy.config.net.connections_throttle INT 1K
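Before lowering the throttle, it's worth confirming how many connections your instance actually holds at peak. One way to watch it:
Code Block
# Total sockets ATS currently has open (client plus origin sides)
traffic_line -r proxy.process.net.connections_currently_open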
RAM And Disk Cache Configuration
...
NOTE: One should always halve one's measured average object size when setting this, as that leaves "headroom" within Traffic Server such that one will never run out of slots in which to store objects.
Third, I've explicitly tuned the average disk fragment setting as well as the read-while-writer feature, which at its default was slowing down the cache a bit for me.
NOTE: These settings require one to refresh the disk cache to take effect. Simply remove /usr/local/var/trafficserver/cache.db and restart Traffic Server to refresh the disk cache.
Code Block
CONFIG proxy.config.cache.ram_cache.size INT 8M
CONFIG proxy.config.cache.ram_cache_cutoff INT 1M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.min_average_object_size INT 24K
CONFIG proxy.config.cache.target_fragment_size INT 4M
CONFIG proxy.config.cache.mutex_retry_delay INT 50
CONFIG proxy.config.cache.enable_read_while_writer INT 2
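The halving rule in the note above is easy to check against a running cache: divide cache bytes used by directory entries used to get your measured average object size, then halve it. A sketch using what I believe are the standard cache metrics:
Code Block
# Measured average object size = cache bytes used / directory entries used
BYTES=$(traffic_line -r proxy.process.cache.bytes_used)
DIRENTS=$(traffic_line -r proxy.process.cache.direntries.used)
AVG=$((BYTES / DIRENTS))
# Halve it for headroom, per the note above
echo "suggested min_average_object_size: $((AVG / 2))"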
Logging Configuration
The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature. I prefer to have the file readable so I'm overriding this.
...
Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.
...
Code Block
##############################################################################
# HostDB
##############################################################################
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900
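To gauge whether this HostDB sizing is adequate, compare total DNS lookups against HostDB hits; a low hit ratio suggests the size/storage settings are too small. A sketch assuming the usual hostdb counters:
Code Block
# A high hit ratio means HostDB is large enough for your traffic
LOOKUPS=$(traffic_line -r proxy.process.hostdb.total_lookups)
HITS=$(traffic_line -r proxy.process.hostdb.total_hits)
echo "HostDB hit ratio: $((HITS * 100 / LOOKUPS))%"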
Restart Traffic Server
Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.
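On my setup that boils down to something like the following, assuming the bundled trafficserver rc script and the cache.db path mentioned earlier:
Code Block
trafficserver stop
# Only needed when the cache geometry settings above have changed
rm /usr/local/var/trafficserver/cache.db
trafficserver start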
...
Previous Page: WebProxyCacheSetup
Next Page: WebProxyCacheOS