
While the default configuration values for ATS will get you up and running,
they're somewhat designed for regression testing and not real-world applications.

This page documents what I've discovered myself through a fair amount of experimentation
and real-world experience.

The following lists the steps involved in taking a generic Traffic Server install
and modifying it for my own needs. Yours may vary, however, and I'll do my best
to indicate which settings should be sized based on your install.

NOTE: My goal here is to give myself pretty aggressive caching at the highest throughput possible.

All three Wiki pages use configuration examples from my running Traffic Server setup.


Please keep in mind the following only applies to creating a forward-only web proxy caching setup.


NOTE: Please use the following with Apache Traffic Server v5.0.0 and higher.

Server Virtual Machine

  • Server Host: Vultr (www.vultr.com)
  • CPU: 3.6 GHz Intel CPU (single core)
  • Memory: 1GB
  • Disk: 20GB SSD
  • OS: CentOS Linux 7.0

...

  • Cache Size: 1GB
  • Browser: Google Chrome v43

Testing Regimen

These settings have been tested against the following:

  • IPv4 websites
  • IPv6 websites
  • Explicitly difficult web pages (e.g. Bing Image Search)
  • Explicitly SSL web sites (e.g. Facebook)
  • Internet Radio (HTTP streaming, as well as iTunes Radio & Pandora)

The following settings are all located in /usr/local/etc/trafficserver/records.config. Since Traffic Server v5.0.0 has reorganized this file, I'll go through the relevant sections here. When adding configurations, simply add the settings below the existing ones.
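If you'd rather not edit the file by hand, most of these values can also be read and set at runtime with the stock traffic_line utility. A minimal sketch, assuming the same /usr/local install prefix used throughout this page:

Code Block
# Read the current value of a setting
sudo /usr/local/bin/traffic_line -r proxy.config.exec_thread.limit
# Change it, then tell Traffic Server to re-read its configuration
sudo /usr/local/bin/traffic_line -s proxy.config.exec_thread.limit -v 1
sudo /usr/local/bin/traffic_line -x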

Reverse Proxy Settings

As I'm using ATS as purely a forward-only web proxy cache, I decided to turn reverse proxying off. I believe the default settings enable ATS as both a forward and reverse cache.

Code Block
CONFIG proxy.config.reverse_proxy.enabled INT 0
CONFIG proxy.config.url_remap.remap_required INT 0

Thread Configuration

The default config for ATS supports as many CPU cores as you have in your machine; typically, it will configure 1.5 threads per CPU and automatically scale upwards. As I'm using Traffic Server on a personal basis, I decided to explicitly configure it to not consume as many CPU cores as it might do otherwise.

If your situation is different, simply change proxy.config.exec_thread.limit to set how many CPU cores you'd like to use.

Code Block
CONFIG proxy.config.exec_thread.autoconfig INT 0
CONFIG proxy.config.exec_thread.limit INT 1
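If you'd rather keep the automatic sizing and merely rein it in, the autoconfig scale factor is also tunable. A sketch (the 1.0 value here is illustrative, not my setting):

Code Block
# Let ATS size threads automatically, but at 1.0 thread per core
# instead of the default 1.5
CONFIG proxy.config.exec_thread.autoconfig INT 1
CONFIG proxy.config.exec_thread.autoconfig.scale FLOAT 1.0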


You may or may not wish to enable these configuration options.
They essentially make ATS more aggressive in caching than its default configuration would allow.

HTTP Background Fill Completion

There's an algorithm here that I don't fully understand, but this setting should guarantee that objects
loaded in the background are cached regardless of their size.

From Leif H: "This recommendation is wrong, it should be set to 0.0 for it to always kick in. It allows the server to continue fetching / caching a large object even if the client disconnects. This setting (with a value of 0.0) is a prerequisite for getting read-while-writer to kick in."

I've since updated this setting.

Code Block
CONFIG proxy.config.http.background_fill_completed_threshold FLOAT 0.000000
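Related to this, there's a companion setting that bounds how long a background fill may keep running after the client goes away. I haven't needed to change it, but it's worth knowing about; the 0 shown here, which disables the cap, is illustrative rather than my setting:

Code Block
# Seconds a background fill may continue after the client disconnects;
# 0 means no time limit
CONFIG proxy.config.http.background_fill_active_timeout INT 0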

HTTP Connection Timeouts

I'm using Traffic Server on a speedy datacenter-grade connection. As such, I've configured it to be pretty impatient in terms of timeouts. The active transaction timeouts are the one place I stay comparatively generous, as a too-low value was essentially shutting down my streaming Internet Radio connections.

Code Block
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 5
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 5
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 5
CONFIG proxy.config.net.default_inactivity_timeout INT 0
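For quick reference, the arithmetic behind the values above:

Code Block
#   900 seconds = 15 minutes (idle keep-alive connections)
#     5 seconds              (idle mid-transaction connections)
# 14400 seconds = 4 hours    (hard cap on any single transaction)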


Fuzzy Object Prefetch Logic

From Leif H: "As described here, is not what it does, at all. Fuzzy logic is there to allow (random chance) a client to go to origin before the object is stale in cache. The idea is that you would (for some reasonably active objects) prefetch the object such that you always have it fresh in cache."

An interesting notion, but not one I desire. The following setting disables this feature.

Code Block
CONFIG proxy.config.http.cache.fuzz.min_time INT 0
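If you'd rather keep fuzzy revalidation and merely tame it, the related knobs are the fuzz window and the per-request probability. The values below are the shipped defaults as I understand them, shown purely for illustration:

Code Block
# Allow revalidation up to 240 seconds before an object goes stale,
# with a 0.5% chance per request
CONFIG proxy.config.http.cache.fuzz.time INT 240
CONFIG proxy.config.http.cache.fuzz.probability FLOAT 0.005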


HTTP Chunking

The default config for ATS specifies that the proxy itself use data "chunks" of 4KB each.
Being that I'm on a high-speed Internet link, I decided to increase this.
I'm currently using a setting of 64KB. If you find yourself annoyed with how long
streaming Internet Radio takes when rebuffering, try setting this to 16KB instead.

Code Block
CONFIG proxy.config.http.chunking.size INT 64K
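A quick side note on units, since I use them throughout: records.config accepts K, M, and G suffixes as 1024-based multipliers, so the line above could equally be written out in full:

Code Block
# 64K is shorthand for 65536 (64 * 1024)
CONFIG proxy.config.http.chunking.size INT 65536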

Network Settings

The following control various network-related settings within ATS.

The first setting controls how often Traffic Server will internally poll to process network events. Even though I'm now on a machine that can handle the 2-3% CPU load this creates, I decided to reduce it anyway. I haven't noticed any significant performance difference as a result.

The second and third/fourth settings relate more closely to the OS tuning that's documented in the next wiki page.

The second setting removes the TCP_NODELAY option from origin server connections. Once one has told Linux to optimize for latency, this appears to be no longer necessary.

The third/fourth settings specify the socket buffer sizes for origin server connections. I've found setting these to roughly my "average object size" as reported by "traffic_top" appears to be optimal. From what I understand, it's a balancing act between what Internet Radio will accept and the throughput of the cache overall.

Code Block
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
...
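The OS half of this tuning lives on the next wiki page, but as a taste, these are the kinds of Linux knobs the socket buffer settings above interact with (values purely illustrative):

Code Block
# Raise the kernel's socket buffer ceilings (example values only)
sysctl -w net.core.rmem_max=1048576
sysctl -w net.core.wmem_max=1048576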

Cache Control

The following configurations tell Traffic Server to be more aggressive than it would otherwise with regard to caching overall, as well as some speed-ups.

I found that from a correctness point of view, my cache behaves better when not caching "dynamic" URLs. I also prefer that "No-Cache" headers from a server be ignored while those from the client are honored, hoping that the browser knows when bypassing the cache would be useful.

Code Block
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
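For context, "looks dynamic" refers to ATS's built-in heuristic; as I understand it, URLs containing a query string, a semicolon, or "cgi" (or ending in ".asp") are the ones it treats as dynamic:

Code Block
# With cache_urls_that_look_dynamic set to 0, URLs like these are not cached:
#   http://example.com/search?q=trafficserver   (query string)
#   http://example.com/cgi-bin/lookup           (cgi in the path)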

Heuristic Cache Expiration

The default config for Traffic Server specifies that after 1 day (86,400 seconds), any object without a specific expiration cannot be cached.

I'd prefer that they stick around for between 1 and 4 weeks. This setting is contentious in that what it should be is debatable.

The goal here is to enforce a window of between 1 and 4 weeks to keep objects in the cache, using Traffic Server's built-in heuristics.

Code Block
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000
CONFIG proxy.config.http.cache.heuristic_lm_factor FLOAT 0.500000
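To spell out what those numbers do: for an object with a Last-Modified header but no explicit expiration, ATS estimates freshness from the document's age and then clamps it to the window above.

Code Block
# freshness = heuristic_lm_factor * (Date - Last-Modified), clamped to
# [heuristic_min_lifetime, heuristic_max_lifetime]
#   604800 seconds = 7 days  (1 week)
#  2592000 seconds = 30 days (~4 weeks)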

HTTP Origin Server Connections

I had a similar experience tuning Squid in this regard. This setting controls how many connections ATS can make outbound to various Internet servers, on a per-server basis. The default allows for unlimited connections, and while that may be useful on a heavily loaded server, I find that it actually slows things down a bit. I decided to go with 32 connections per origin server simultaneously.

Code Block
CONFIG proxy.config.http.origin_max_connections INT 32
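A related knob, if you tune in this area, keeps a floor of warm keep-alive connections per origin rather than capping them. Shown with an illustrative value, not my setting:

Code Block
# Hold a couple of idle keep-alive connections open to each origin
CONFIG proxy.config.http.origin_min_keep_alive_connections INT 2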

Network Configuration

The default config for Traffic Server allows for up to 30,000 simultaneous connections. I decided for my purposes that's pretty excessive. Keep in mind this is a global connection limit, so don't forget about outbound connections!

Code Block
CONFIG proxy.config.net.connections_throttle INT 1K

RAM And Disk Cache Configuration

The default config for Traffic Server specifies a few things here that can be tuned.

First, I decided to explicitly set my RAM cache settings. While the default RAM cache options may be optimal under heavy load, I found using the simpler LRU algorithm much faster and more useful. If your situation is different, simply change proxy.config.cache.ram_cache.size to set how much RAM you'd like to use.

Second, I observed my cache running via the "traffic_top" utility and have set the average object size accordingly.

NOTE: One should always halve the setting for that configuration, as it allows "headroom" within Traffic Server such that one will never run out of slots in which to store objects.

Code Block
CONFIG proxy.config.cache.ram_cache.size INT 8M
CONFIG proxy.config.cache.ram_cache_cutoff INT 1M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.min_average_object_size INT 24K
CONFIG proxy.config.cache.target_fragment_size INT 4M
CONFIG proxy.config.cache.mutex_retry_delay INT 50
CONFIG proxy.config.cache.enable_read_while_writer INT 1

NOTE: The RAM cache settings should be sized relative to the amount of memory you want to use, and they require restarting ATS to properly take effect. Changing proxy.config.cache.min_average_object_size additionally requires clearing the disk cache.

From Leif H: "The math is mostly correct, except the calculation for 'Disk Cache Object Capacity' is in fact the max number of directory entries the cache can hold. Each object on disk consumes at least one directory entry, but can consume more."

As it turns out, my original thoughts on this were a bit misguided: I had assumed an "average Internet object" of 32KB and divided by two directory entries' worth of headroom to get 16K. These days I size from what traffic_top actually reports instead.
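To make the directory-entry point concrete, here's the sizing arithmetic as I understand it, using the 1GB disk cache listed at the top of this page:

Code Block
# Directory entries available ≈ disk cache size / min_average_object_size
#   1GB / 24K ≈ 43,700 entries
# traffic_top reported my real average object at roughly twice 24K, so
# halving it buys enough headroom that the cache shouldn't run out of
# slots, even though an object can consume more than one entry.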


On a related note, the default for proxy.config.cache.mutex_retry_delay is unfortunately quite low. For some reason the default caused a repeatable, large delay when loading Bing Image Search results, and the higher value above removed most of the delays while seeming to speed up the cache overall a bit.

From Leif H: "The text around proxy.config.cache.mutex_retry_delay is confusing. Setting this higher would increase latency, not reduce it, at the expense of possibly consuming more CPU. If you experience something different, I think it'd be worthwhile to file a Jira."

I'm sure I have experienced higher latency, but also somehow avoided a disk contention problem.

P.S. Since testing this explicitly, Microsoft Bing has done some backend work that has improved HTTP responses. I currently believe this setting only really matters when TCP retries come into play.

Logging Configuration

The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature. I prefer to have the file readable, so I'm overriding this.

Code Block
CONFIG proxy.config.log.squid_log_is_ascii INT 1
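If you'd rather keep the compact binary format, the stock traffic_logcat tool can render it as ASCII after the fact. A minimal sketch, assuming the default log directory and binary squid log name:

Code Block
# Print a binary squid log as ASCII on stdout
/usr/local/bin/traffic_logcat /usr/local/var/log/trafficserver/squid.blog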


HostDB Configuration

The default settings for ATS regarding DNS are set pretty high. I decided for my purposes to lower them; Your Mileage May Vary on these.

The defaults for Traffic Server also configure the disk-based DNS cache to be rather large. First, I found I got a decent speed improvement by sizing this down.

Second, I specifically prefer IPv6 over IPv4. This simply tells the cache to prefer the newer IPv6 when possible.

Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated. This also contributes to cache speed.

If your situation is different, simply get to know the following settings. It takes a bit of practice to get used to, but they're all tunable.

Code Block
##############################################################################
# HostDB
##############################################################################
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900
CONFIG proxy.config.dns.max_dns_in_flight INT 512

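For scale: the shipped defaults are, as I understand them, roughly 120K entries with 32MB of storage, so my numbers keep about the same per-entry ratio while shrinking the whole thing:

Code Block
# 12M / 48K ≈ 256 bytes per cached DNS entry,
# versus the defaults' 32MB / 120K ≈ 280 bytes per entry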



Restart Traffic Server

Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.

After that's been done, enjoy your newly-tuned proxy server.

Code Block
sudo /usr/local/bin/trafficserver stop
sudo /usr/local/bin/trafficserver start
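By "refresh your disk cache" I mean wiping it when a change demands it (such as min_average_object_size above). One way to do that, with Traffic Server stopped, is via traffic_server's clear option:

Code Block
# DESTRUCTIVE: wipes the disk cache (and host database)
sudo /usr/local/bin/traffic_server -Cclear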

 

Previous Page:  WebProxyCacheSetup

Next Page: WebProxyCacheOS