...

  • Server Host: Vultr (www.vultr.com)
  • CPU: 3.6 GHz Intel CPU (single core)
  • Memory: 1GB
  • Disk: 20GB SSD
  • OS: CentOS Linux 7.0
  • Cache Size: 1GB
  • Browser: Google Chrome v43

Testing Regimen

The settings below have been tested against the following:

...

I'm using Traffic Server on a speedy datacenter-grade connection.  As such, I've configured it to be pretty impatient in terms of timeouts.

Code Block
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 900
CONFIG proxy.config.http.keep_alive_no_activity_timeout_out INT 900
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 5
CONFIG proxy.config.http.transaction_no_activity_timeout_out INT 5
CONFIG proxy.config.http.transaction_active_timeout_in INT 14400
CONFIG proxy.config.http.transaction_active_timeout_out INT 14400
CONFIG proxy.config.http.accept_no_activity_timeout INT 5
CONFIG proxy.config.net.default_inactivity_timeout INT 20
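All of these values are in seconds.  Roughly, the way I think about how they fit together (the gloss below is mine, not part of the settings themselves):

Code Block
# How the timeout values above fit together (all units are seconds):
#   keep_alive_no_activity_*     900   idle keep-alive connections may linger 15 minutes
#   transaction_no_activity_*      5   a stalled request or response is cut off quickly
#   transaction_active_*       14400   even an active transfer is capped at 4 hours
#   accept_no_activity_timeout     5   connections that never send a request are dropped
#   default_inactivity_timeout    20   catch-all for anything not covered above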

Origin Server Connect Attempts

I had a similar experience tuning Squid in this regard.  The first setting controls how many connections ATS can make outbound to various Internet servers, on a per-server basis.  The default allows for unlimited connections, and while that may be useful on a heavily loaded server, I find that it actually slows things down a bit.

I decided to go with 32 simultaneous connections per origin server.  I also found that keeping a few connections open for additional requests speeds things up.

Code Block
CONFIG proxy.config.http.origin_max_connections INT 32
CONFIG proxy.config.http.origin_min_keep_alive_connections INT 5

Network Settings

The following settings control various network-related behaviors within ATS.

The first setting controls how often Traffic Server will internally poll to process network events.  Even though I'm now on a machine that can handle the default polling rate at only 2-3% CPU load, I decided to reduce it.  I haven't noticed any significant performance difference as a result.

The second, third, and fourth settings relate more closely to the OS tuning that's documented on the next wiki page.

The second setting removes the TCP_NODELAY option from origin server connections.  Once Linux has been told to optimize for latency, this no longer appears to be necessary.

The third and fourth settings specify the socket buffer sizes for origin server connections; they're sketched separately below the code block since the right values depend on your traffic.  I've found that setting these to roughly my "average object size" as reported by "traffic_top" appears to be optimal.

Code Block
CONFIG proxy.config.net.poll_timeout INT 50
CONFIG proxy.config.net.sock_option_flag_out INT 0
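The buffer-size lines themselves will vary with your traffic, so treat the following as a sketch; 32768 is just an illustrative value, and you should substitute whatever average object size traffic_top reports for your own cache:

Code Block
# Illustrative values only -- set these to roughly your average object size
# as reported by traffic_top.
CONFIG proxy.config.net.sock_send_buffer_size_out INT 32768
CONFIG proxy.config.net.sock_recv_buffer_size_out INT 32768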

Cache Control

The following configurations tell Traffic Server to cache more aggressively than it otherwise would, and they also add a few speed-ups.

...

Code Block
CONFIG proxy.config.http.cache.cache_urls_that_look_dynamic INT 0
CONFIG proxy.config.http.chunking.size INT 64K
CONFIG proxy.config.http.cache.ims_on_client_no_cache INT 0
CONFIG proxy.config.http.cache.ignore_server_no_cache INT 1
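In plain terms, here's my gloss on what each of these flags does (paraphrasing the records.config documentation, not the settings themselves):

Code Block
# cache_urls_that_look_dynamic 0 : don't cache URLs containing query strings
#                                  or other dynamic-looking markers
# chunking.size 64K              : chunk size used when re-chunking responses
# ims_on_client_no_cache 0       : don't turn client "no-cache" requests into
#                                  conditional (If-Modified-Since) revalidations
# ignore_server_no_cache 1       : cache responses even when the origin sends
#                                  Cache-Control: no-cache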

...

I'd prefer that they stick around for between 1 and 4 weeks.  This setting is contentious; what it should be is debatable.

The goal here is to enforce a window of between 1 and 4 weeks for keeping objects in the cache, using Traffic Server's built-in heuristics.

Code Block
CONFIG proxy.config.http.cache.heuristic_min_lifetime INT 604800
CONFIG proxy.config.http.cache.heuristic_max_lifetime INT 2592000
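To make the heuristic concrete: for responses that carry a Last-Modified header but no explicit freshness information, Traffic Server estimates a lifetime as a fraction of the object's age and then clamps that estimate to the window above.  Assuming the default heuristic_lm_factor of 0.10 (shown purely for illustration -- it isn't something I override):

Code Block
# Default value, shown for illustration only.
CONFIG proxy.config.http.cache.heuristic_lm_factor FLOAT 0.10
# With a factor of 0.10, an object last modified 100 days ago is estimated
# fresh for 10 days, which falls inside the window; one modified a week ago
# computes to well under a day and is bumped up to the 1-week minimum, while
# a years-old object is capped at the 4-week maximum.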

Network Configuration

The default config for Traffic Server allows for up to 30,000 simultaneous connections.

...

Code Block
CONFIG proxy.config.net.connections_throttle INT 1K
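One caveat worth spelling out (my note, not from the setting itself):

Code Block
# connections_throttle counts client and origin connections together, so keep
# the traffic_server process's file descriptor limit comfortably above 1K.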

RAM And Disk Cache Configuration

...

Second, I observed my cache running via the "traffic_top" utility and set the average object size accordingly.

NOTE:  One should always halve the setting for that configuration, as it allows "headroom" within Traffic Server such that one will never run out of slots in which to store objects.

Third, I've explicitly tuned the average disk fragment setting as well as disabled a feature that, for me, slows down the cache a bit.

NOTE:  These settings require one to refresh the disk cache to take effect.  Simply remove /usr/local/var/trafficserver/cache.db and restart Traffic Server to refresh the disk cache.

Fourth, I discovered that backing off the cache directory's sync frequency helps populate the RAM cache under low load.  I definitely recommend trying this.

Code Block
CONFIG proxy.config.cache.ram_cache.size INT 8M
CONFIG proxy.config.cache.ram_cache_cutoff INT 1M
CONFIG proxy.config.cache.ram_cache.algorithm INT 1
CONFIG proxy.config.cache.min_average_object_size INT 24K
CONFIG proxy.config.cache.target_fragment_size INT 4M
CONFIG proxy.config.cache.mutex_retry_delay INT 50
CONFIG proxy.config.cache.enable_read_while_writer INT 0
CONFIG proxy.config.cache.dir.sync_frequency INT 1200
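As a back-of-the-envelope check on the "halve the average object size" note (my own arithmetic, using illustrative numbers):

Code Block
# With the 20GB disk cache from the setup above and traffic_top reporting an
# average object size of roughly 48K, halving that to 24K sizes the cache
# directory at about 20GB / 24K ~= 870,000 slots -- roughly twice the number
# of objects the disk will actually hold, which is the "headroom" referred to
# in the note above.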

Logging Configuration

The defaults for Traffic Server specify a squid-compatible logfile that's binary in nature.  I prefer to have the file readable, so I'm overriding this.
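The override itself sits in the elided block below; for reference, on older ATS releases the knob that controls this is most likely the squid log's ASCII flag, along these lines:

Code Block
# A sketch for older ATS releases -- verify the exact knob against your own
# records.config.
CONFIG proxy.config.log.squid_log_enabled INT 1
CONFIG proxy.config.log.squid_log_is_ascii INT 1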

...

Third, I also allow the cache to use stale DNS records for up to 60 seconds while they're being updated.  This also contributes to cache speed.

...

Code Block
##############################################################################
# HostDB
##############################################################################
CONFIG proxy.config.hostdb.ip_resolve STRING ipv6;ipv4
CONFIG proxy.config.hostdb.size INT 48K
CONFIG proxy.config.hostdb.storage_size INT 12M
CONFIG proxy.config.hostdb.serve_stale_for INT 60
CONFIG proxy.config.cache.hostdb.sync_frequency INT 900
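As a quick sanity check on the HostDB sizing (my arithmetic):

Code Block
# 12M of HostDB storage spread across 48K entries works out to roughly
# 12M / 48K = 256 bytes per DNS record, so the two values above are sized
# consistently with each other.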

Restart Traffic Server

Once you've updated the relevant records.config settings, simply refresh your disk cache if necessary and then restart Traffic Server.

...

Previous Page:  WebProxyCacheSetup

Next Page: WebProxyCacheOS