
In this blog post we will walk through what it takes to set up a new telemetry source in Metron.  For this example we will set up a new sensor, capture the sensor logs, pipe the logs to Kafka, pick up the logs with a Metron parsing topology, parse them, and run them through the Metron stream-processing pipeline.

Our example sensor will be a Squid proxy.  Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more.  Squid logs are simple to explain and easy to parse, and the velocity of traffic coming from Squid is representative of a typical network-based sensor.  Hence, we feel it's a good telemetry source to use for this tutorial.

 

Prior to going through this tutorial make sure you have Metron properly installed.  Please see here for Metron installation and validation instructions.  We will be using a single-VM setup for this exercise.  To set up the VM, run the following steps:

 

cd deployment/vagrant/singlenode-vagrant
vagrant plugin install vagrant-hostmanager
vagrant up
vagrant ssh

After executing the above commands a Metron VM (called node1) will be built and you will be logged in as user vagrant.  Now let's install the Squid sensor.

sudo yum install squid
sudo service squid start

This will run through the install, and the Squid sensor will be installed and started.  Now let's look at the Squid logs.
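(If you want to first confirm the service came up, checking the service status and Squid's default listening port will tell you; this sketch assumes the stock CentOS tooling on the VM and the default port 3128, which your squid.conf may override.)

service squid status
netstat -tlnp | grep 3128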

sudo su -
cd /var/log/squid
ls

You will see that there are three types of logs available: access.log, cache.log, and squid.out.  We are interested in access.log, as that is the log that records proxy usage.  Initially the log is empty, so let's generate a few entries for it.

squidclient http://www.cnn.com
squidclient http://www.nba.com
vi /var/log/squid/access.log

In production environments you would configure your users' web browsers to point to the proxy server, but for the sake of simplicity in this tutorial we will use the client that is packaged with the Squid installation (an alternative using curl is sketched after the sample entries below).  After we use the client to simulate proxy requests, the Squid log entries will look as follows:

1461576382.642    161 127.0.0.1 TCP_MISS/200 103701 GET http://www.cnn.com/ - DIRECT/199.27.79.73 text/html
1461576442.228    159 127.0.0.1 TCP_MISS/200 137183 GET http://www.nba.com/ - DIRECT/66.210.41.9 text/html
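As promised above, any HTTP client pointed at the proxy produces the same kind of entries; for example, with curl (again assuming Squid's default port 3128):

curl -x http://localhost:3128 http://www.cnn.com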

The format of the log is: timestamp | elapsed time | remote host | code/status | bytes | method | URL | rfc931 | peerstatus/peerhost | type
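Note that the timestamp field is UNIX epoch time in seconds with millisecond resolution.  If you want to sanity-check it, GNU date can convert it back:

date -d @1461576382

which prints a date on April 25, 2016 (in your local timezone).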

Now that we have the sensor set up and generating logs, we need to figure out how to pipe these logs to a Kafka topic.  The first step is to create a new Kafka topic for Squid.

 

cd /usr/hdp/2.3.4.0-3485/kafka/bin/
./kafka-topics.sh --zookeeper localhost:2181 --create --topic squid --partitions 1 --replication-factor 1
./kafka-topics.sh --zookeeper localhost:2181 --list

These commands create a new Kafka topic named squid and then list all topics so we can confirm it was created.  Now let's test how we can pipe the Squid logs to Kafka.
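If you want to double-check the new topic first, the same script can describe it, showing the partition and replication settings we just requested:

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic squid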

tail /var/log/squid/access.log | /usr/hdp/2.3.4.0-3485/kafka/bin/kafka-console-producer.sh --broker-list node1:6667 --topic squid
./kafka-console-consumer.sh --zookeeper node1:2181 --topic squid --from-beginning
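Note that tail with no arguments emits only the last ten lines and then exits; for a continuously running pipe you would follow the file instead, along these lines:

tail -F /var/log/squid/access.log | /usr/hdp/2.3.4.0-3485/kafka/bin/kafka-console-producer.sh --broker-list node1:6667 --topic squid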

This should ingest our Squid logs into Kafka.  Now we are ready to tackle the Metron parsing topology setup.  The first thing we need to do is decide whether we will be using a Java-based parser or a Grok-based parser for the new telemetry.  In this example we will be using the Grok parser.  The Grok parser is a good fit for structured or semi-structured logs that are well understood (check) and telemetries with lower volumes of traffic (check).  Our first task, then, is to define the Grok expression for our log.  Refer to the Grok documentation for additional details.  In our case the pattern is:


WEBURL (?i)\b((?:https?:(?:/{1,3}|[a-z0-9%])|[a-z0-9.\-]+[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)/)(?:[^\s()<>{}\[\]]+|\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\))+(?:\([^\s()]*?\([^\s()]+\)[^\s()]*?\)|\([^\s]+?\)|[^\s`!()\[\]{};:'".,<>?«»“”‘’])|(?:(?<!@)[a-z0-9]+(?:[.\-][a-z0-9]+)*[.](?:com|net|org|edu|gov|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|name|post|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|gp|gq|gr|gs|gt|gu|gw|gy|hk|hm|hn|hr|ht|hu|id|ie|il|im|in|io|iq|ir|is|it|je|jm|jo|jp|ke|kg|kh|ki|km|kn|kp|kr|kw|ky|kz|la|lb|lc|li|lk|lr|ls|lt|lu|lv|ly|ma|mc|md|me|mg|mh|mk|ml|mm|mn|mo|mp|mq|mr|ms|mt|mu|mv|mw|mx|my|mz|na|nc|ne|nf|ng|ni|nl|no|np|nr|nu|nz|om|pa|pe|pf|pg|ph|pk|pl|pm|pn|pr|ps|pt|pw|py|qa|re|ro|rs|ru|rw|sa|sb|sc|sd|se|sg|sh|si|sj|Ja|sk|sl|sm|sn|so|sr|ss|st|su|sv|sx|sy|sz|tc|td|tf|tg|th|tj|tk|tl|tm|tn|to|tp|tr|tt|tv|tw|tz|ua|ug|uk|us|uy|uz|va|vc|ve|vg|vi|vn|vu|wf|ws|ye|yt|yu|za|zm|zw)\b/?(?!@)))

 

SQUID_DELIMITED %{NUMBER:start_time} %{SPACE:UNWANTED}  %{INT:elapsed} %{IPV4:ip_src_addr} %{WORD:action}/%{NUMBER:code} %{NUMBER:bytes} %{WORD:method} %{WEBURL:url}

Notice that we define a WEBURL pattern (tailored to Squid rather than the generic Grok URL pattern) before defining the Squid log pattern.  This is optional and is done for ease of use.  Also notice that we apply the UNWANTED tag to any part of the message that we don't want included in the resulting JSON structure.  Finally, notice that we apply the naming convention to the IPV4 field by referencing the list of field conventions.  The last thing we need to do is validate the Grok pattern.  For our test we will be using a free Grok validator called Grok Constructor.  A validated Grok expression should look like this:

[Screenshot: the SQUID_DELIMITED pattern validated in Grok Constructor]
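As an additional sanity check, applying SQUID_DELIMITED to the first sample log line above should produce roughly the following fields (a sketch of the expected output; the keys are exactly the names used in the pattern, and the UNWANTED capture is discarded):

start_time: 1461576382.642
elapsed: 161
ip_src_addr: 127.0.0.1
action: TCP_MISS
code: 200
bytes: 103701
method: GET
url: http://www.cnn.com/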
Now that the Grok pattern has been defined, we need to save it and move it to HDFS.  Existing Grok parsers that ship with Metron are staged under /apps/metron/patterns/:

[root@node1 bin]# hdfs dfs -ls /apps/metron/patterns/
Found 5 items
-rw-r--r--   3 hdfs hadoop      13427 2016-04-25 07:07 /apps/metron/patterns/asa
-rw-r--r--   3 hdfs hadoop       5203 2016-04-25 07:07 /apps/metron/patterns/common
-rw-r--r--   3 hdfs hadoop        524 2016-04-25 07:07 /apps/metron/patterns/fireeye
-rw-r--r--   3 hdfs hadoop       2552 2016-04-25 07:07 /apps/metron/patterns/sourcefire
-rw-r--r--   3 hdfs hadoop        879 2016-04-25 07:07 /apps/metron/patterns/yaf

We need to move our new Squid pattern into the same directory.  Create a file containing the WEBURL and SQUID_DELIMITED definitions from above and put it into HDFS:

touch /tmp/squid
vi /tmp/squid
su - hdfs
hdfs dfs -put /tmp/squid /apps/metron/patterns/
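Verify that the pattern landed where we expect it:

hdfs dfs -cat /apps/metron/patterns/squid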

Now that the Grok pattern is staged in HDFS, we need to define a Storm Flux configuration for the Metron parsing topology.  The configs are staged under /usr/metron/0.1BETA/config/topologies/, and each parsing topology has its own set of configs.  Each topology directory has a remote.yaml, which is designed to be run on AWS, and a local test.yaml, designed to run locally on a single-node VM.  At the time of publishing this blog entry the following configs are available:


/usr/metron/0.1BETA/config/topologies/yaf/test.yaml
/usr/metron/0.1BETA/config/topologies/yaf/remote.yaml
/usr/metron/0.1BETA/config/topologies/sourcefire/test.yaml
/usr/metron/0.1BETA/config/topologies/sourcefire/remote.yaml
/usr/metron/0.1BETA/config/topologies/asa/test.yaml
/usr/metron/0.1BETA/config/topologies/asa/remote.yaml
/usr/metron/0.1BETA/config/topologies/fireeye/test.yaml
/usr/metron/0.1BETA/config/topologies/fireeye/remote.yaml
/usr/metron/0.1BETA/config/topologies/bro/test.yaml
/usr/metron/0.1BETA/config/topologies/bro/remote.yaml
/usr/metron/0.1BETA/config/topologies/ise/test.yaml
/usr/metron/0.1BETA/config/topologies/ise/remote.yaml
/usr/metron/0.1BETA/config/topologies/paloalto/test.yaml
/usr/metron/0.1BETA/config/topologies/paloalto/remote.yaml
/usr/metron/0.1BETA/config/topologies/lancope/test.yaml
/usr/metron/0.1BETA/config/topologies/lancope/remote.yaml
/usr/metron/0.1BETA/config/topologies/pcap/test.yaml
/usr/metron/0.1BETA/config/topologies/pcap/remote.yaml
/usr/metron/0.1BETA/config/topologies/enrichment/test.yaml
/usr/metron/0.1BETA/config/topologies/enrichment/remote.yaml
/usr/metron/0.1BETA/config/topologies/snort/test.yaml
/usr/metron/0.1BETA/config/topologies/snort/remote.yaml

Since we are going to be running locally on a VM, we need to define a test.yaml for Squid.  The easiest way to do this is to copy one of the existing Grok-based configs (YAF) and tailor it for Squid.


mkdir /usr/metron/0.1BETA/config/topologies/squid
cp /usr/metron/0.1BETA/config/topologies/yaf/test.yaml /usr/metron/0.1BETA/config/topologies/squid/test.yaml
vi /usr/metron/0.1BETA/config/topologies/squid/test.yaml

Edit your config to look like this:

 

name: "squid-test"

config:

    topology.workers: 1

 

 

components:

    -   id: "parser"

        className: "org.apache.metron.parsing.parsers.GrokParser"

        constructorArgs:

            - "../Metron-MessageParsers/src/main/resources/patterns/squid"

            - "SQUID_DELIMITED"

        configMethods:

            -   name: "withMetronHDFSHome"

                args:

                    - ""

    -   id: "writer"

        className: "org.apache.metron.writer.KafkaWriter"

        constructorArgs:

            - "${kafka.broker}"

    -   id: "zkHosts"

        className: "storm.kafka.ZkHosts"

        constructorArgs:

            - "${kafka.zk}"

    -   id: "kafkaConfig"

        className: "storm.kafka.SpoutConfig"

        constructorArgs:

            # zookeeper hosts

            - ref: "zkHosts"

            # topic name

            - "${spout.kafka.topic.squid}"

            # zk root

            - ""

            # id

            - "${spout.kafka.topic.squid}"

        properties:

            -   name: "ignoreZkOffsets"

                value: false

            -   name: "startOffsetTime"

                value: -2

            -   name: "socketTimeoutMs"

                value: 1000000

 

spouts:

    -   id: "kafkaSpout"

        className: "storm.kafka.KafkaSpout"

        constructorArgs:

            - ref: "kafkaConfig"

 

bolts:

    -   id: "parserBolt"

        className: "org.apache.metron.bolt.ParserBolt"

        constructorArgs:

            - "${kafka.zk}"

            - "${spout.kafka.topic.squid}"

            - ref: "parser"

            - ref: "writer"

 

streams:

    -   name: "spout -> bolt"

        from: "kafkaSpout"

        to: "parserBolt"

        grouping:

            type: SHUFFLE
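With the config in place, the topology can be submitted through Storm Flux.  A minimal sketch of the launch command, assuming a properties file that supplies the ${kafka.broker}, ${kafka.zk}, and ${spout.kafka.topic.squid} placeholders (both bracketed paths below are placeholders for the actual jar and properties file names in your install, not exact names from this release):

storm jar <path-to-metron-topologies-jar> org.apache.storm.flux.Flux --local /usr/metron/0.1BETA/config/topologies/squid/test.yaml --filter <path-to-properties-file>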
