...


...

https://github.com/dlyle65535/incubator-metron/blob/METRON-260/metron-deployment/README.md 

Step 3 : Installing a sample sensor

Log into the sensor node and install the squid sensor.  If you are on the QuickDev platform your VM will be called node1.  If you are in the AWS environment your sensor node will be tagged with the [sensors] tag; you can look through the AWS console to find which node in your cluster has this tag.  

 


 

Once you log into the sensor node you can install the Squid sensor.  
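The exact commands depend on your OS image; on a CentOS-based node (as used by QuickDev) a minimal sketch of the installation might look like this — the package manager, service command, and test URL are assumptions about your platform:

```shell
# Install and start the Squid proxy (assumes yum-based CentOS node)
sudo yum install -y squid
sudo service squid start

# Generate a log entry by proxying a request (URL is illustrative)
squidclient http://www.example.com

# Squid writes its access log here by default
tail /var/log/squid/access.log
```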

...

Now that we have the sensor set up and generating logs, we need to pipe these logs to a Kafka topic.  The first step is to set up a new Kafka topic for Squid.

TODO

...

 

Step 4 : Define Environment Variables 

export ZOOKEEPER=

export BROKERLIST=

export HDP_HOME=

export METRON_VERSION=
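The values depend on your cluster; on a single-node QuickDev install they might look like the following — the hostnames, paths, and version string are illustrative only, substitute the ones for your environment:

```shell
# Illustrative values for a single-node cluster -- adjust for your environment
export ZOOKEEPER=node1            # host running Zookeeper
export BROKERLIST=node1           # host running the Kafka broker
export HDP_HOME=/usr/hdp/current  # HDP component root
export METRON_VERSION=0.1BETA     # installed Metron version
```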

Step 5 : Create Kafka topics and ingest sample data 

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper $ZOOKEEPER:2181 --create --topic squid --partitions 1 --replication-factor 1

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper $ZOOKEEPER:2181 --list

The commands above set up a new Kafka topic for Squid.  Now let's test how we can pipe the Squid logs to Kafka:

cat /var/log/squid/access.log | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list $BROKERLIST:6667 --topic squid

$HDP_HOME/kafka/bin/kafka-console-consumer.sh --zookeeper $ZOOKEEPER:2181 --topic squid --from-beginning

This should ingest our Squid logs into Kafka.  Now we are ready to tackle the Metron parsing topology setup.  The first thing we need to do is decide whether we will be using the Java-based parser or the Grok-based parser for the new telemetry.  In this example we will be using the Grok parser.  The Grok parser is a good fit for structured or semi-structured logs that are well understood (check) and telemetries with lower volumes of traffic (check).  Next we need to define the Grok expression for our log.  Refer to the Grok documentation for additional details.  In our case the pattern is:

...

Notice that I apply the UNWANTED tag to any part of the message that I don't want included in my resulting JSON structure.  Also notice that I applied the naming convention to the IPV4 field by referencing the following list of field conventions.  The last thing I need to do is validate my Grok pattern.  For our test we will be using a free Grok validator called Grok Constructor.  A validated Grok expression should look like this:

TODO: update graphic

 

Now that the Grok pattern has been defined, we need to save it and move it to HDFS.  Existing Grok parsers that ship with Metron are staged under /apps/metron/patterns/.
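Assuming the new pattern has been saved locally to a file named squid (the filename is illustrative), it can be staged alongside the existing patterns with the HDFS client:

```shell
# Copy the local Grok pattern file into the shared patterns directory on HDFS.
# Run as a user with write access to /apps/metron, e.g. via the hdfs account.
su - hdfs -c "hdfs dfs -put squid /apps/metron/patterns/"

# Verify the pattern landed next to the parsers that ship with Metron
su - hdfs -c "hdfs dfs -ls /apps/metron/patterns/"
```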

...

A script is provided to upload configurations to Zookeeper.  Upload the new parser config to Zookeeper:

/usr/metron/$METRON_VERSION/bin/zk_load_configs.sh --mode PUSH -i /usr/metron/$METRON_VERSION/config/zookeeper -z $ZOOKEEPER:2181 

Start the new squid parser topology:

/usr/metron/$METRON_VERSION/bin/start_parser_topology.sh -k $BROKERLIST:6667 -z $ZOOKEEPER:2181 -s squid

Navigate to the squid parser topology in the Storm UI at http://node1:8744/index.html and verify the topology is up with no errors:
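If you prefer the command line, the Storm client can confirm the topology status as well — the client path below is an assumption about a typical HDP layout and may differ on your cluster:

```shell
# List running topologies; the squid topology should show a status of ACTIVE
/usr/hdp/current/storm-client/bin/storm list
```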


TODO: update graphic

TODO: create ES template before deployment

Now that we have a new running squid parser topology, generate some data to parse by running this command several times:

...