...

Identifying and tackling scale problems in AMS

Understanding scale issues in AMS (Why)

The Metrics Collector is the central daemon that receives metrics from all the service sinks and monitors that send metrics. The collector uses HBase as its store and Phoenix as the data access layer.

At a high level, the Metrics Collector continuously performs two operations that are relevant to scale.

  • Handle raw writes - A raw write is a batch of metric data points received from services and written to HBase through Phoenix. No read or aggregation is involved.
  • Periodically aggregate data - AMS aggregates data across the cluster and across time.
    • Cluster Aggregator - Computing the min, max, avg and sum of a metric (for example, memory) across all hosts is done by a cluster aggregator. This is the 'TimelineClusterAggregatorSecond', which runs every 2 minutes. On every run it reads the entire last 2 minutes of data, computes the aggregates, and writes them back. The read is expensive because it works on non-aggregated data, while the write volume is smaller because the output is aggregated. For example, in a 100 node cluster, mem_free from 100 hosts becomes 1 aggregate metric value in this aggregator.
    • Time Aggregator - Also called 'downsampling', this aggregator rolls up data along the time dimension. It lets AMS TTL out the fine-grained seconds data while holding aggregate data for a longer time. For example, with a data point every 10 seconds, the 5-minute time aggregator takes the 30 data points from each 5-minute window and produces 1 rolled-up value. There are higher-level downsamplers as well (1 hour, 1 day), and each uses its immediate predecessor's data (1 hour => 5 minutes, 1 day => 1 hour). However, the 5-minute aggregator is the compute-heavy one, since it reads the entire last 5 minutes of data and downsamples it. Again, the read is very expensive because it works on non-aggregated data, while the write volume is smaller. (A small sketch of this rollup arithmetic follows this list.)
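
To make the rollup arithmetic concrete, here is a minimal illustrative sketch (not AMS code; the real aggregators run inside the collector and read and write through Phoenix) of how 30 ten-second samples of one metric collapse into a single 5-minute aggregate. The sample values come from seq and are made up.

  # Illustrative only: roll up 30 ten-second samples (one value per line) into one
  # 5-minute record holding min, max, avg and sum, the way the 5-minute downsampler
  # does conceptually.
  seq 100 129 |
  awk '{ sum += $1
         if (NR == 1 || $1 < min) min = $1
         if (NR == 1 || $1 > max) max = $1 }
       END { printf "min=%s max=%s avg=%.2f sum=%s count=%d\n", min, max, sum/NR, sum, NR }'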

Scale problems occur in AMS when one or both of the above operations cannot keep up. The 'load' on AMS is determined by the following factors:

  • How many hosts are in the cluster?
  • How many metrics is each component sending to AMS?

Either of the above can cause performance issues in AMS; a rough sizing example follows.
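
As a back-of-the-envelope way to see how these two factors combine, the hypothetical example below (the host and metric counts are made up, not AMS defaults) estimates the raw write rate the collector has to sustain.

  # Hypothetical sizing example: 300 hosts, each sending ~400 metrics at a 10-second interval.
  # Approximate raw data points per second the collector must write:
  echo $(( 300 * 400 / 10 ))    # => 12000 data points per second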

How do we find out if AMS is experiencing scale problems?

...

Each check below lists what to look at, how to get it, and the red flag that indicates a problem. (The individual commands are also collected into a single script sketch after this list.)
1. Is AMS able to handle raw writes?

  • How to check: Look for log lines like 'AsyncProcess:1597 - #1, waiting for 13948 actions to finish' in the collector log.
  • Red flag: If the number of actions waiting to finish keeps increasing and AMS eventually shuts down, AMS is likely unable to keep up with raw writes.

2. How long does the 2-minute cluster aggregator take to finish?

  • How to check: grep "TimelineClusterAggregatorSecond" /var/log/ambari-metrics-collector/ambari-metrics-collector.log | less
    Look at the time elapsed between 'Start aggregation cycle....' and 'Saving ## metric aggregates'.
  • Red flag: aggregation takes more than 2 minutes.
3. How long does the 5-minute host aggregator take to finish?

  • How to check: grep "TimelineHostAggregatorMinute" /var/log/ambari-metrics-collector/ambari-metrics-collector.log | less
    Look at the time elapsed between 'Start aggregation cycle....' and 'Saving ## metric aggregates'.
  • Red flag: aggregation takes more than 5 minutes.
4. How many metrics are being collected?

  • How to check: curl http://<ams-host>:6188/ws/v1/timeline/metrics/metadata -o /tmp/metrics_metadata.txt
    Then count the metrics with: grep -o "metricname" /tmp/metrics_metadata.txt | wc -l
    Also find out which component is sending a large number of metrics.
  • Red flag: more than 15000 metrics.
5. How many regions and store files does AMS HBase have?

  • How to check: the AMS HBase Master UI at http://<METRICS_COLLECTOR_HOST>:61310
  • Red flags: more than 150 regions, or more than 2000 store files.

6. How often is AMS HBase flushing, and how much data is flushed each time?

  • How to check: look at the HBase Master log in embedded mode and the RegionServer log in distributed mode.
    grep "memstore flush" /var/log/metric_collector/hbase-ams-<>.log | less
    Check how often METRIC_RECORD flushes happen and how much data each flush writes.
  • Red flags: more than 10 flushes in a minute can be a problem. The flush size should be approximately equal to the flush size configured in ams-hbase-site.

7. If AMS is in distributed mode, is there a DataNode local to the collector host?

  • How to check: from the cluster topology.
  • Why it matters: in distributed mode, a local DataNode lets HBase use the read short-circuit feature (http://hbase.apache.org/0.94/book/perf.hdfs.html).
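
For convenience, the individual checks above can be strung together into one script. The sketch below is not an official AMS tool; it simply collects the same greps and the curl call in one place. <ams-host> is a placeholder for the Metrics Collector host, and the log paths follow the ones used above and may need to be adjusted for your environment.

  #!/usr/bin/env bash
  # Rough AMS health-check sketch assembled from the checks above; adjust host and log paths.
  COLLECTOR_HOST="<ams-host>"     # placeholder: your Metrics Collector host
  LOG=/var/log/ambari-metrics-collector/ambari-metrics-collector.log

  # 1. Raw-write backlog: a steadily growing action count is a red flag.
  grep "actions to finish" "$LOG" | tail -20

  # 2 and 3. Aggregator runtimes: compare the timestamps of the start and save lines per cycle.
  grep -E "TimelineClusterAggregatorSecond|TimelineHostAggregatorMinute" "$LOG" \
    | grep -E "Start aggregation cycle|Saving .* metric aggregates" | tail -40

  # 4. Number of metrics being collected (red flag: more than 15000).
  curl -s "http://${COLLECTOR_HOST}:6188/ws/v1/timeline/metrics/metadata" -o /tmp/metrics_metadata.txt
  grep -o "metricname" /tmp/metrics_metadata.txt | wc -l

  # 5. Regions and store files: check the HBase Master UI in a browser.
  echo "Check http://${COLLECTOR_HOST}:61310 for region and store file counts"

  # 6. Memstore flush frequency and size (master log in embedded mode, RS log in distributed mode).
  grep "memstore flush" /var/log/metric_collector/hbase-ams-*.log | tail -20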

...


Fixing / Recovering from the problem.

...