
In this blog post we will walk through what it takes to set up a new telemetry source in Metron.  For this example we will set up a new sensor, capture the sensor logs, pipe the logs to Kafka, pick up the logs with a Metron parsing topology, parse them, and run them through the Metron stream processing pipeline.

Our example sensor will be a Squid Proxy.  Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more.  Squid logs are simple to explain and easy to parse, and the velocity of traffic coming from Squid is representative of a typical network-based sensor.  Hence, we feel it is a good telemetry source to use for this tutorial.

 

Prior to going through this tutorial, make sure you have Metron properly installed.  Please see here for Metron installation and validation instructions.  We will be using a single-node VM setup for this exercise.  To set up the VM, run the following steps:

 

cd deployment/vagrant/singlenode-vagrant
vagrant plugin install vagrant-hostmanager
vagrant up
vagrant ssh

After executing the above commands a Metron VM will be built (called node1) and you will be logged in as the user vagrant.  Now let's install the Squid sensor.

sudo yum install squid

sudo service squid start 
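
If you want to confirm the sensor came up cleanly, the init script's status sub-command should report a running squid process (an optional check):

sudo service squid status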

The yum install walks through the Squid installation, and the service command starts the sensor.  Now let's look at the Squid logs.

sudo su -

cd /var/log/squid

ls 

You will see that there are three types of logs available: access.log, cache.log, and squid.out.  We are interested in access.log, as that is the log that records proxy usage.  Initially the log is empty, so let's generate a few entries.

squidclient http://www.cnn.com

squidclient http://www.nba.com

vi /var/log/squid/access.log

In production environments you would configure your users' web browsers to point to the proxy server, but for the sake of simplicity in this tutorial we will use the client that is packaged with the Squid installation.  After we use the client to simulate proxy requests, the Squid log entries will look as follows:

1461576382.642    161 127.0.0.1 TCP_MISS/200 103701 GET http://www.cnn.com/ - DIRECT/199.27.79.73 text/html

1461576442.228    159 127.0.0.1 TCP_MISS/200 137183 GET http://www.nba.com/ - DIRECT/66.210.41.9 text/html

The format of the log is: timestamp | time elapsed | remotehost | code/status | bytes | method | URL | rfc931 | peerstatus/peerhost | type
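
As a quick sanity check of that field order, you can split an entry on whitespace yourself; the sketch below simply relabels the columns with awk (the labels are ours, nothing here is required by Squid or Metron):

tail -1 /var/log/squid/access.log | awk '{print "timestamp="$1, "elapsed="$2, "remotehost="$3, "code/status="$4, "bytes="$5, "method="$6, "url="$7, "rfc931="$8, "peerstatus/peerhost="$9, "type="$10}'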

Now that we have the sensor set up and generating logs, we need to figure out how to pipe these logs to a Kafka topic.  To do so, the first thing we need to do is set up a new Kafka topic for Squid.

 

cd /usr/hdp/2.3.4.0-3485/kafka/bin/

./kafka-topics.sh --zookeeper localhost:2181 --create --topic squid --partitions 1 --replication-factor 1

./kafka-topics.sh --zookeeper localhost:2181 --list
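
If you want a closer look at the new topic, the same script can also describe its partition and replication settings (optional):

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic squid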

We have now created a new Kafka topic for squid and confirmed it exists.  Now let's test how we can pipe the Squid logs to Kafka.

tail /var/log/squid/access.log | /usr/hdp/2.3.4.0-3485/kafka/bin/kafka-console-producer.sh --broker-list node1:6667 --topic squid

./kafka-console-consumer.sh --zookeeper node1:2181 --topic squid --from-beginning
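
The tail command above pushes only the lines already in access.log.  If you would rather stream new entries into the topic as Squid writes them, the same pipe works with tail -f (same broker and topic assumed):

tail -f /var/log/squid/access.log | /usr/hdp/2.3.4.0-3485/kafka/bin/kafka-console-producer.sh --broker-list node1:6667 --topic squid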

These commands should ingest our Squid logs into Kafka and read them back, confirming the pipe works.  Now we are ready to tackle the Metron parsing topology setup.  The first thing we need to do is decide whether we will be using a Java-based parser or a Grok-based parser for the new telemetry.  In this example we will be using the Grok parser.  The Grok parser is well suited to structured or semi-structured logs that are well understood (check) and to telemetries with lower volumes of traffic (check).  Next we need to define the Grok expression for our log.  Refer to the Grok documentation for additional details.  In our case the pattern is:


WEBURL (?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)/)(?:[^\s()<>{}\[\]]+|\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\))+(?:\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’])|(?:(?<!@)[a-z0-9]+(?:[.\-][a-z0-9]+)*[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)\b/?(?!@)))

 

%{NUMBER:timestamp} %{SPACE:UNWANTED}  %{INT:elapsed} %{IPV4:ip_src_addr} %{WORD:action}/%{NUMBER:code} %{NUMBER:bytes} %{WORD:method} %{WEBURL:url}

Notice that we define a WEBURL pattern (tailored to Squid rather than the generic Grok URL pattern) before defining the Squid log pattern.  This is optional and is done for ease of use.  Also notice that we apply the UNWANTED tag to any part of the message that we don't want included in the resulting JSON structure.  Finally, notice that we apply the Metron naming convention to the IPV4 field by referencing the list of field conventions.  The last thing we need to do is validate the Grok pattern.  For this test we will use a free online validator called Grok Constructor: paste the pattern along with a sample log entry and confirm that every field is extracted.
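
Applied to the first sample entry above, the pattern should pull out roughly the following fields (a sketch of what the validator shows; the message Metron ultimately emits will also carry additional metadata):

timestamp: 1461576382.642
elapsed: 161
ip_src_addr: 127.0.0.1
action: TCP_MISS
code: 200
bytes: 103701
method: GET
url: http://www.cnn.com/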

 

 

Now that the Grok pattern has been defined, we need to save it and move it to HDFS.  Existing Grok parsers that ship with Metron are staged under /apps/metron/patterns/:

[root@node1 bin]# hdfs dfs -ls /apps/metron/patterns/

Found 5 items

-rw-r--r--   3 hdfs hadoop      13427 2016-04-25 07:07 /apps/metron/patterns/asa

-rw-r--r--   3 hdfs hadoop       5203 2016-04-25 07:07 /apps/metron/patterns/common

-rw-r--r--   3 hdfs hadoop        524 2016-04-25 07:07 /apps/metron/patterns/fireeye

-rw-r--r--   3 hdfs hadoop       2552 2016-04-25 07:07 /apps/metron/patterns/sourcefire

-rw-r--r--   3 hdfs hadoop        879 2016-04-25 07:07 /apps/metron/patterns/yaf

We need to move our new Squid pattern into the same directory, as shown below.
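
A minimal sketch of that step, assuming the two pattern lines above were saved into a local file named /tmp/squid (the local path is our choice; if HDFS permissions block root, prefix the put with sudo -u hdfs):

hdfs dfs -put /tmp/squid /apps/metron/patterns/

hdfs dfs -ls /apps/metron/patterns/

The listing should now show a squid entry alongside the patterns that ship with Metron.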