While the default configuration values for ATS will get you up and running,
they're somewhat designed for regression testing and not real-world applications.
This page documents what I've discovered through a fair amount of experimentation
and real-world experience.
The following lists the steps involved in taking a generic Traffic Server install
and customizing it for my own needs. Yours may vary, however, and I'll do my best
to indicate which settings should be sized based on your install.
NOTE: My goal here is to give myself pretty aggressive caching at the highest throughput possible.
All three Wiki pages use configuration examples from my running Traffic Server setup.
Please keep in mind the following only applies to creating a forward-only web proxy caching setup.
NOTE: Please use the following with Apache Traffic Server v5.0.0 and higher.
Server Virtual Machine
- Server Host: Vultr (www.vultr.com)
- CPU: 3.6 Ghz Intel Core i7 CPU (single core)
- Memory: 1GB
- Disk: 20GB SSD
- OS: CentOS Linux 7.0
- Cache Size: 1GB
- Browser: Google Chrome v43
Testing Regimen
The following settings have been tested against the following:
- IPv4 websites
- IPv6 websites
- Explicitly difficult web pages (e.g. Bing Image Search)
- Explicitly SSL web sites (e.g. Facebook)
- Internet Radio (HTTP streaming, as well as iTunes Radio & Pandora)
The following settings are all located in /usr/local/etc/trafficserver/records.config. Since Traffic Server v5.0.0 has reorganized this file, I'll go through the relevant sections here. When adding configurations, simply add the settings below the existing ones.
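Many of these settings can be applied without a full restart. A quick sketch, assuming the stock ATS command-line tools live under the same /usr/local prefix:
Code Block |
---|
# Read back the current value of any setting
/usr/local/bin/traffic_line -r proxy.config.http.chunking.size
# Apply records.config changes without a restart
sudo /usr/local/bin/traffic_line -x
|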
Step 1 – Configure
You may or may not wish to enable these configuration options.
They essentially make ATS more aggressive in caching than its default configuration would allow.
HTTP Background Fill Completion
There's an algorithm here that I don't fully understand, but this setting should guarantee that objects
loaded in the background are cached regardless of their size.
From Leif H: "This recommendation is wrong, it should be set to 0.0 for it to always kick in. It allows the server to continue fetching / caching a large object even if the client disconnects. This setting (with a value of 0.0) is a prerequisite for getting read-while-writer to kick in."
Code Block |
---|
CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.000000
|
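Given Leif's note, read-while-writer itself also needs to be switched on; a minimal sketch, using the documented "enabled" value:
Code Block |
---|
CONFIG proxy.config.cache.enable_read_while_writer INT 1
|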
Thread Configuration
As I'm using Traffic Server on a personal basis, I decided to explicitly configure it to not consume as many CPU cores as it might otherwise. If your situation is different, simply change proxy.config.exec_thread.limit to set how many CPU cores you'd like to use.
Code Block |
---|
CONFIG proxy.config.exec_thread.autoconfig INT 0
CONFIG proxy.config.exec_thread.limit INT 1
|
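Before sizing proxy.config.exec_thread.limit on your own install, it's worth checking how many CPU cores the machine actually presents; on a Linux host:
Code Block |
---|
# Number of CPU cores visible to the OS
nproc
|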
HTTP Cache Options
The default config for ATS specifies that URLs that look dynamic be cached anyway.
Unfortunately, this can break some web applications, so I decided to turn this off.
Code Block |
---|
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
|
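One way to see this setting in action is to request a dynamic-looking URL twice through the proxy and watch the response headers; a sketch, assuming the proxy listens on the default port 8080 and using example.com as a placeholder:
Code Block |
---|
# With cache_urls_that_look_dynamic disabled, the second response
# should not come back with a growing Age header
curl -s -x localhost:8080 -D - -o /dev/null 'http://example.com/search?q=test'
curl -s -x localhost:8080 -D - -o /dev/null 'http://example.com/search?q=test'
|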
HTTP Cache Options
The default config for ATS specifies that "No-Cache" headers from the client be ignored, and that those from a server be honored. I actually prefer the reverse to be true, hoping that the browser knows when bypassing the cache would be useful.
Code Block |
---|
CONFIG proxy.config.http.cache.ignore_client_no_cache INT 0
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
|
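To confirm the client side is honored, force a revalidation the way a browser's reload button would; again a sketch, with example.com as a placeholder:
Code Block |
---|
# With ignore_client_no_cache left at 0, this request should go
# to the origin rather than be answered from cache
curl -s -x localhost:8080 -H 'Cache-Control: no-cache' -D - -o /dev/null 'http://example.com/'
|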
Connection Timeouts
I'm using Traffic Server on a speedy datacenter-grade connection. As such, I've configured it to be pretty impatient in terms of timeouts (all values below are in seconds). One caution: too low a transaction_active_timeout_in was essentially shutting down my streaming Internet Radio connections, so I've kept the active timeouts generous.
Code Block |
---|
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 5
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 5
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 5
CONFIG proxy.config.net.default_inactivity_timeout INT 5
|
Fuzzy Object Prefetch Logic
From Leif H: "As described here, is not what it does, at all. Fuzzy logic is there to allow (random chance) a client to go to origin before the object is stale in cache. The idea is that you would (for some reasonably active objects) prefetch the object such that you always have it fresh in cache."
An interesting notion, but not one I desire. The following setting disables this feature.
Code Block |
---|
CONFIG proxy.config.http.cache.fuzz.min_time INT 0
|
Step 2 – Optimize
HTTP Chunking
The default config for ATS specifies that the proxy itself use data "chunks" of 4KB each.
Being that the cache sits on a speedy datacenter-grade connection, I decided to increase this.
If you find yourself annoyed with how long streaming Internet Radio takes when rebuffering, try setting this lower, to 16KB.
From what I understand, it's a balancing act between what Internet Radio will accept and the throughput of the cache overall.
Code Block |
---|
CONFIG proxy.config.http.chunking.size INT 64K
|
HTTP Server Sessions
The default config for ATS specifies that connections to origin servers be shared on a per-thread basis. I found that changing this behavior to a global connection pool made the cache significantly faster.
Code Block |
---|
CONFIG proxy.config.http.share_server_sessions INT 1
|
Network Settings
The following settings control various network-related behavior within ATS.
The first setting controls how often Traffic Server will internally poll to process network events. Even though I'm now on a machine that can handle an extra 2-3% of CPU load, I decided to reduce the polling rate, and I haven't noticed any significant performance difference as a result.
The second setting relates to the OS-level tuning that's documented on the next wiki page: it removes the TCP_NODELAY option from origin server connections. Once one has told Linux to optimize for latency, this appears to be no longer necessary.
The socket buffer sizes for origin server connections are covered under HTTP Socket I/O Buffers below.
Code Block |
---|
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
|
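The TCP_NODELAY change assumes Linux has already been told to optimize for latency. The details belong to the next wiki page, but one plausible knob of that sort, for reference:
Code Block |
---|
# Ask the TCP stack to favor low latency over throughput
sudo sysctl -w net.ipv4.tcp_low_latency=1
|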
Heuristic Cache Expiration
The default config for Traffic Server specifies that after 1 day (86,400 seconds), any object without a specific expiration cannot be cached.
I'd prefer that they stick around for between 1 and 4 weeks. This setting is contentious, in that what it should be is debatable.
The goal here is to enforce a window of between 1 and 4 weeks to keep objects in the cache, using Traffic Server's built-in heuristics.
Code Block |
---|
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000
CONFIG proxy.config.http.cache.heuristic_lm_factor FLOAT 0.500000
|
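To make the heuristic concrete: for an object with a Last-Modified header but no explicit expiration, freshness is estimated as heuristic_lm_factor times the object's age when fetched, clamped to the min/max lifetimes above. Two worked examples:
Code Block |
---|
# Last modified 10 days before fetch:
#   0.5 * 10 days = 5 days, below the 7-day minimum -> cached for 7 days
# Last modified a year before fetch:
#   0.5 * 365 days = 182.5 days, above the 30-day maximum -> cached for 30 days
|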
HTTP Origin Server Connections
This setting controls how many connections ATS can make outbound to various Internet servers,
on a per-server basis. The default allows for unlimited connections, and while that may be
useful on a heavily loaded server, I find that it actually slows things down a bit.
I had a similar experience tuning Squid in this regard.
I decided to go with 48 connections per origin server simultaneously.
Code Block |
---|
CONFIG proxy.config.http.origin_max_connections INT 48
|
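To sanity-check the limit against reality, count the proxy's established connections to a single origin; a sketch using ss on Linux, with the grep pattern standing in for a real origin address:
Code Block |
---|
# Count established outbound connections to port 80, per origin
ss -tn state established '( dport = :80 )' | grep -c 'ORIGIN_IP_HERE'
|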
RAM And Disk Cache Configuration
The default config for Traffic Server specifies a few things here that can be tuned.
First, I decided to explicitly set my RAM cache settings. If your situation is different, simply change proxy.config.cache.ram_cache.size to set how much RAM you'd like to use.
Second, I observed my cache running via the "traffic_top" utility and have set the average object size accordingly (see Cache Minimum Average Object Size below).
The following specifies 128MB of RAM cache, with objects of up to 8MB in size,
to be managed using LRU. While the default algorithm here may be optimal under heavy load,
I found the simpler LRU algorithm much faster and more useful. During normal use,
RAM utilization should rise and rise until all 128MB is used, then the LRU algorithm
should kick in. Also, figure on at least 100MB of general RAM overhead for ATS in addition to this.
Code Block |
---|
CONFIG proxy.config.cache.ram_cache.size INT 128M
CONFIG proxy.config.cache.ram_cache_cutoff INT 8M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
|
NOTE: This setting should be sized relative to the amount of memory you want to use.
Also, it requires restarting ATS to properly take effect.
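As a quick sanity check of the memory budget on this particular 1GB virtual machine:
Code Block |
---|
# 128MB RAM cache + ~100MB general ATS overhead = ~228MB,
# leaving the rest of the 1GB VM for the OS and its page cache
|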
Disk Target Fragment Size
This setting defines the optimal fragment size when storing objects to disk. I noticed a considerable performance improvement when changing this from its default. Please be aware I'm not sure if this setting will scale properly, but it works well for me.
Code Block |
---|
CONFIG proxy.config.cache.target_fragment_size INT 262144
|
NOTE: This setting requires clearing the disk cache and restarting ATS to properly take effect.
Cache Minimum Average Object Size
This setting is pretty important. It defines a global variable that both structures the cache
for future objects and optimizes other areas.
From Leif H: "Your setting for proxy.config.cache.min_average_object_size seems wrong. If your average object size is 32KB, you should set this to, hem, 32KB . However, to give some headroom, my personal recommendation is to 2x the number of directory entries, so set the configuration to 16KB.
The math is mostly correct, except the calculation for "Disk Cache Object Capacity” is in fact the max number of directory entries the cache can hold. Each object on disk consumes at least one directory entry, but can consume more."
As it turns out, my original thoughts on this were a bit misguided. For my purposes, I first monitored ATS with the command-line tool "traffic_top".
After letting the cache run for quite some time, I discovered the Average Internet Object Size was much larger than I'd guessed. As it turns out, my "average internet object" is roughly 80KB in size, and so we can do the following math:
Average Internet Object Size: 80KB
Directory Entries Required Per Object(headroom): 2
Cache Minimum Average Object Size: 40960 (81920 / 2)
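Putting those numbers together for this install's 1GB disk cache:
Code Block |
---|
# Cache size:                  1GB = 1,073,741,824 bytes
# min_average_object_size:     40,960 bytes
# Directory entries available: 1,073,741,824 / 40,960 = 26,214
# At ~80KB per object that's roughly 13,107 objects, i.e. the
# desired two directory entries of headroom per object
|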
Code Block |
---|
CONFIG proxy.config.cache.min_average_object_size INT 40K
|
NOTE: This setting requires clearing the disk cache and restarting ATS to properly take effect.
Cache Threads Per Disk Spindle
My setting here is somewhat of a rough guess. I've had issues in the past with Squid as a web cache
and increasing the threads dedicated to disk access definitely helped. However, with ATS I've actually
noticed a speed boost by decreasing this setting. My current theory is that this setting should allow
for one thread per CPU core.
Code Block |
---|
CONFIG proxy.config.cache.threads_per_disk INT 4
|
Cache Mutex Retry Delay
The default setting for ATS is unfortunately quite low. For some reason this caused a repeatable,
large delay when loading Bing Image Search results. The following setting removes most of the delays
and seems to speed up the cache overall a bit.
From Leif H: "The text around proxy.config.cache.mutex_retry_delay is confusing. Setting this higher would increase latency, not reduce it, at the expense of possibly consuming more CPU. If you experience something different, I think it’d be worthwhile to file a Jira."
I'm sure I have experienced higher latency, but also somehow avoided a disk contention problem.
P.S. Since testing this explicitly, Microsoft Bing has done some backend work that has improved HTTP responses.
I currently believe this setting only really matters when TCP retries come into play.
Code Block |
---|
CONFIG proxy.config.cache.mutex_retry_delay INT 25
|
Logging Configuration
The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature. I prefer to have the file readable, so I'm overriding this.
Code Block |
---|
CONFIG proxy.config.log.squid_log_is_ascii INT 1
|
HostDB Configuration
The default settings for ATS regarding DNS are set pretty high. I decided for my purposes to lower them;
Your Mileage May Vary on these.
Code Block |
---|
CONFIG proxy.config.dns.max_dns_in_flight INT 512
|
DNS Internet Protocol Preference
I've no idea if this setting really helps or not, but I like to specify my preference for IPv6 over IPv4
as much as possible.
Code Block |
---|
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
|
DNS Host Cache Database
The defaults for Traffic Server configure the disk-based DNS cache to be rather large. First, I found I got a decent speed improvement by sizing this down.
Second, I've set the cache to retain DNS records for up to a month (hostdb.timeout is expressed in minutes).
Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.
If your situation is different, simply get to know the following settings. It takes a bit of practice to get used to, but they're all tunable.
Code Block |
---|
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.timeout INT 43200
CONFIG proxy.config.hostdb.verify_after INT 43200
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900
|
HTTP Socket I/O Buffers
The default config for ATS leaves the socket I/O buffers between ATS, its clients, and origin servers at the OS defaults. While changing this isn't strictly necessary, I eventually decided on the following settings; I've found that sizing these to roughly my "average object size" as reported by "traffic_top" appears to be optimal. Combined with other performance tweaks, these work well overall.
Code Block |
---|
CONFIG proxy.config.net.sock_send_buffer_size_in INT 64K
CONFIG proxy.config.net.sock_recv_buffer_size_in INT 8K
CONFIG proxy.config.net.sock_send_buffer_size_out INT 8K
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 64K
|
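The OS also has to be willing to grant buffers of these sizes; on Linux, the ceilings can be checked as follows (raising them, if needed, falls under the OS tuning on the next wiki page):
Code Block |
---|
# Maximum socket buffer sizes the kernel will allow
sysctl net.core.rmem_max net.core.wmem_max
|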
Step 3 – Secure
Maximum Inbound Concurrent Connections
The default config for ATS specifies that this server can handle up to 30,000 connections.
For my purposes, that's a bit excessive. I figure with 2,048 connections there's plenty
of "elbow room". Keep in mind this is a global connection limit, so don't forget about
outbound connections!
Code Block |
---|
CONFIG proxy.config.net.connections_throttle INT 2K
|
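Each connection consumes a file descriptor, so the process and system limits should comfortably exceed this throttle; a quick check on Linux:
Code Block |
---|
# Per-process and system-wide file descriptor limits
ulimit -n
sysctl fs.file-max
|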
Restart Traffic Server
Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.
After that's been done, enjoy your newly-tuned proxy server.
Code Block |
---|
sudo /usr/local/bin/trafficserver stop
sudo /usr/local/bin/trafficserver start |
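For the settings above that require clearing the disk cache, one way to do it while the server is stopped; a sketch, using the same install prefix as above (note that traffic_server -Cclear wipes the entire cache):
Code Block |
---|
sudo /usr/local/bin/trafficserver stop
# Clear the disk cache while Traffic Server is down
sudo /usr/local/bin/traffic_server -Cclear
sudo /usr/local/bin/trafficserver start
|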
Previous Page: WebProxyCacheSetup
Next Page: WebProxyCacheOS