While the default configuration values for ATS will get you up and running,
at the moment they're somewhat designed for regression testing and not real-world applications.
This page documents what I've discovered through a fair amount of experimentation
and real-world experience.
The following lists the steps involved in taking a generic configuration
and modifying it for my own needs. Yours may vary, however, and I'll do my best
to indicate which settings should be sized based on your install.
All three Wiki pages use configuration examples from my running home Traffic Server setup.
Please keep in mind the following only applies to creating a forward-only web proxy caching setup.
NOTE: Please use the following with Apache Traffic Server v5.0.0 and higher.
Server Virtual Machine
- Server Host: Vultr (www.vultr.com)
- CPU: 3.6 Ghz Intel CPU (single core)
- Memory: 1GB
- Disk: 20GB SSD
- OS: CentOS Linux 7.0
- Cache Size: 1GB
- Browser: Google Chrome v43
Testing Regimen
These settings have been tested against the following:
- IPv4 websites
- IPv6 websites
- Explicitly difficult web pages (e.g. Bing Image Search)
- Explicitly SSL web sites (e.g. Facebook)
- Internet Radio (various types of HTTP streaming, as well as iTunes Radio & Pandora)
The following settings are all located in /usr/local/etc/trafficserver/records.config.
Since Traffic Server v5.0.0 has reorganized this file, I'll go through the relevant sections here.
When adding configurations, simply add the settings below the existing ones.
Reverse Proxy Settings
I believe the default settings enable ATS as both a forward and reverse cache.
As I'm using ATS as purely a forward-only web proxy cache, I decided to turn these off.
Code Block |
---|
CONFIG proxy.config.reverse_proxy.enabled INT 0
CONFIG proxy.config.url_remap.remap_required INT 0 |
Thread Configuration
As I'm using Traffic Server on a personal basis, I decided to
explicitly configure it to not consume as many CPU cores as it might do otherwise.
If your situation is different, simply change proxy.config.exec_thread.limit to set how many CPU cores you'd like to use.
Code Block |
---|
CONFIG proxy.config.exec_thread.autoconfig INT 0
CONFIG proxy.config.exec_thread.limit INT 1 |
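If you're unsure how many cores your machine actually has, a quick check (assuming a Linux guest with coreutils installed) looks like this:

```shell
# Print the number of CPU cores available; this is the value
# proxy.config.exec_thread.limit should generally match.
cores=$(nproc)
echo "exec_thread.limit candidate: ${cores}"
```

On my single-core Vultr VM this reports 1, which is why the limit above is set to 1.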
HTTP Connection Timeouts
I'm using Traffic Server on a speedy datacenter-grade connection.
As such, I've configured it to be pretty impatient in terms of timeouts.
Code Block |
---|
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 5
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 5
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 5
CONFIG proxy.config.net.default_inactivity_timeout INT 5 |
HTTP Connection Sharing
The default config for ATS specifies that outbound connections be shared within connection pools
on a per-thread basis. Fair enough. For some reason, though, this caused me no end of performance
problems. Thus, the following specifies one "global" connection pool, which is much faster.
Code Block |
---|
CONFIG proxy.config.http.share_server_sessions INT 1 |
Inbound And Outbound HTTP Connections
The default config for ATS sets these artificially low. I found that remote webservers themselves actually
slow down if more than 16 simultaneous connections are attempted. Also, most popular browsers support
up to 256 simultaneous connections from browser to proxy server, so our ATS config should reflect that.
Code Block |
---|
CONFIG proxy.config.http.origin_server_pipeline INT 16
CONFIG proxy.config.http.user_agent_pipeline INT 256 |
HTTP Background Fill Completion
There's an algorithm here that I don't fully understand, but this setting should guarantee that objects
loaded in the background are cached regardless of their size.
Code Block |
---|
CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 1.000000 |
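For reference, in the doc's own "do the math" style: the keep-alive timeouts above are 15 minutes and the active transaction timeouts are 4 hours, expressed in seconds:

```shell
# 15 minutes and 4 hours in seconds, matching the keep_alive (900)
# and transaction_active (14400) timeout values used above.
keep_alive=$((15 * 60))
active=$((4 * 60 * 60))
echo "${keep_alive} ${active}"
```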
Network Settings
The following settings control various network-related settings within ATS. They're pretty important,
so I'll go through them one at a time.
The first setting controls how often Traffic Server will internally poll to process network events.
This only amounts to 2-3% CPU load on my current machine, but I decided to reduce it anyway.
I haven't noticed any significant performance difference as a result.
The second setting removes the TCP_NODELAY option from origin server connections. Once one has told
Linux to optimize for latency (OS tuning is documented on the next wiki page), this appears to be no longer necessary.
The third and fourth settings specify the socket buffer sizes for origin server connections. I've found
setting these to roughly my "average object size" as reported by "traffic_top" appears to be optimal.
Code Block |
---|
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
CONFIG proxy.config.net.sock_send_buffer_size_out INT 131072
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 131072 |
The next setting defines a global variable whose function is to indicate whether specific HTTP headers
are necessary to properly cache an object. As it turns out, much of this functionality is
included in HTTP 1.1, and thus additional headers aren't really necessary.
Code Block |
---|
CONFIG proxy.config.http.cache.required_headers INT 0 |
The next specifies how "stale" an object may become before it gets fetched again from the Internet.
The default config for ATS specifies that after 1 week (604,800 seconds), any object that is "stale"
should be flushed from the cache. I'd prefer that it stick around for about 3 months.
Code Block |
---|
CONFIG proxy.config.http.cache.max_stale_age INT 7776000 |
Cache Control
The following configurations tell Traffic Server to be more aggressive than it would otherwise be,
with regard to caching overall as well as some speed-ups.
Also, I found that from a correctness point of view, my cache behaves better when not caching "dynamic" URLs.
Code Block |
---|
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
CONFIG proxy.config.http.chunking.size INT 64K
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1 |
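As a quick sanity check on the 3-month stale-age figure, 90 days in seconds works out to exactly the configured value:

```shell
# 90 days expressed in seconds: the max_stale_age value of 7776000.
echo $((90 * 24 * 60 * 60))
```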
This one specifies whether or not to use HTTP "range" request lookups in the cache. While having this option enabled
will save bandwidth, it also slows page loading.
Code Block |
---|
CONFIG proxy.config.http.cache.range.lookup INT 0 |
Heuristic Cache Expiration
The default config for Traffic Server specifies that after 1 day (86,400 seconds), any object without a specific
expiration cannot be cached.
I'd prefer that they stick around for between 1 and 4 weeks. This setting is contentious
in that what it should be is debatable.
The goal here is to enforce a window of between 1 and 4 weeks to keep objects in the cache, using Traffic Server's built-in heuristics.
Code Block |
---|
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000 |
HTTP "Last Modified" Support
There's an algorithm here that I don't fully understand, but this setting should guarantee that the relevant
information regarding an object's "last modified" date is fully supported by the cache.
Code Block |
---|
CONFIG proxy.config.http.cache.heuristic_lm_factor FLOAT 1.000000 |
HTTP Object Expiration Fuzzy Logic
I had to dig into the codebase for this one. Apparently this is a feature designed for reverse proxies
to "sweep" the cache at defined intervals. My memory's a bit foggy, so apologies if I got the definition wrong.
In any case, this functionality doesn't help a forward proxy cache whose goal is to keep objects in the cache.
The following setting disables this feature.
Code Block |
---|
CONFIG proxy.config.http.cache.fuzz.min_time INT -1 |
Network Configuration
The default config for Traffic Server allows for up to 30,000 simultaneous connections.
I decided for my purposes that's pretty excessive.
Code Block |
---|
CONFIG proxy.config.net.connections_throttle INT 1K |
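The 1-to-4-week heuristic window mentioned above translates to seconds as follows (using 30 days for the upper bound):

```shell
# 1 week and 30 days in seconds: the heuristic_min_lifetime (604800)
# and heuristic_max_lifetime (2592000) window.
echo "$((7 * 24 * 60 * 60)) $((30 * 24 * 60 * 60))"
```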
RAM And Disk Cache Configuration
The default config for Traffic Server specifies a few things here that can be tuned.
First, I decided to explicitly set my RAM cache settings. If your situation is different, simply change proxy.config.cache.ram_cache.size to set how much RAM you'd like to use.
Second, I observed my cache running via the "traffic_top" utility and have set the average object size accordingly.
NOTE: One should always halve the setting for proxy.config.cache.min_average_object_size, as it allows "headroom" within Traffic Server such that one will never run out of slots in which to store objects. Also, changing it requires clearing the disk cache and restarting ATS to properly take effect.
Code Block |
---|
CONFIG proxy.config.cache.ram_cache.size INT 8M
CONFIG proxy.config.cache.ram_cache_cutoff INT 1M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.min_average_object_size INT 24K
CONFIG proxy.config.cache.target_fragment_size INT 4M
CONFIG proxy.config.cache.mutex_retry_delay INT 50
CONFIG proxy.config.cache.enable_read_while_writer INT 2 |
Maximum Concurrent DNS Queries
The default settings for ATS regarding DNS are set pretty high. I decided for my purposes to lower them;
Your Mileage May Vary on these.
Code Block |
---|
CONFIG proxy.config.dns.max_dns_in_flight INT 512 |
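To illustrate the sizing logic, here's the math in shell form. The 48KB figure is an assumption standing in for whatever average object size "traffic_top" reports on your cache; it's halved for headroom, and then used to estimate how many object slots a 1GB disk cache yields:

```shell
# Assumed observed average object size from traffic_top: ~48KB.
avg_object=$((48 * 1024))
# Halve it for headroom, giving the min_average_object_size value (24K).
min_avg=$((avg_object / 2))
# Estimate object slots for the 1GB disk cache used in this setup.
cache_bytes=$((1024 * 1024 * 1024))
echo "${min_avg} $((cache_bytes / min_avg))"
```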
Logging Configuration
The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature.
I prefer to have the file readable, so I'm overriding this.
Code Block |
---|
CONFIG proxy.config.log.squid_log_is_ascii INT 1 |
HostDB Configuration
The defaults for Traffic Server configure the disk-based DNS cache to be rather large. First, I found I got a decent speed improvement by sizing this down.
Second, I specifically prefer IPv6 over IPv4. This simply tells the cache to prefer the newer IPv6 when possible.
Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.
If your situation is different, simply get to know the following settings. It takes a bit of practice to get used to them, but they're all tunable.
Code Block |
---|
##############################################################################
# HostDB
##############################################################################
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900 |
Restart Traffic Server
Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.
After that's been done, enjoy your newly-tuned proxy server.
Code Block |
---|
sudo /usr/local/bin/trafficserver stop
sudo /usr/local/bin/trafficserver start |
Previous Page: WebProxyCacheSetup
Next Page: WebProxyCacheOS