...

There you will see two files: hosts and environment_vars/all.  The first file to define is environment_vars/all, which sets the environment variables used by the Ansible scripts.  This file contains granular settings for the installation of each Metron component, including whether to enable or disable the installation of a specific component as well as additional component-specific settings.  The second file to define is the hosts file.  The hosts file describes the Metron cluster and the role each node in the cluster will play (a sample inventory is sketched after the role list below).  The following roles are possible:

  • [ambari_master] - host running Ambari
  • [ambari_slaves] - all Ambari-managed hosts
  • [metron_kafka_topics] - host used to create the Kafka topics required by Metron. Requires a Kafka broker.
  • [metron_hbase_tables] - host used to create the HBase tables required by Metron. Requires an HBase client.
  • [enrichment] - host used to submit the enrichment topology to Storm. Requires a Storm client.
  • [search] - host(s) where Elasticsearch will be installed
  • [web] - host where the Metron UI and underlying services will be installed
  • [sensors] - host where network data will be collected and published to Kafka
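As a rough illustration, an inventory hosts file that assigns these roles might look like the sketch below.  The node names are placeholders, not requirements; map each group to the hosts in your own cluster:

[ambari_master]
node1

[ambari_slaves]
node2
node3

[metron_kafka_topics]
node2

[metron_hbase_tables]
node2

[enrichment]
node2

[search]
node3

[web]
node4

[sensors]
node4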

Once you have configured the hosts and environment variables, run the following commands:

cd incubator-metron/metron-deployment/playbooks

ansible-playbook -i ../inventory/project_name metron_install.yml --skip-tags="solr"
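If you want to sanity-check the inventory before kicking off the full install, Ansible's standard options can help.  For example, using the same inventory path and playbook as above:

# Show which hosts each play would target, without making any changes
ansible-playbook -i ../inventory/project_name metron_install.yml --list-hosts

# Perform a dry run (some tasks may still need connectivity to the hosts)
ansible-playbook -i ../inventory/project_name metron_install.yml --check --skip-tags="solr"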

Step 2d: Set Up Metron on an Existing Ambari-Managed Cluster (bare metal or AWS)

For this part it does not matter whether you are installing the core Metron components on bare metal or on VMs.  It does matter for the Metron sensors, however, because they must be custom-compiled for the specific environment on which they run.  Currently we only support sensor installs on CentOS 6.7 with Ansible 2.0.0.2, Java 8, and the Intel x520 series of network cards.

First, we assume that the Ambari cluster already exists.  If it does not, you can deploy one by following these instructions:

https://ambari.apache.org/1.2.1/installing-hadoop-using-ambari/content/ambari-chap1.html

The sample configuration for a 12-node cluster would be as follows:

node1 - [ambari_master] 

node2 - [ambari_slaves] 

node3 - [ambari_slaves] 

node4 - [ambari_slaves]

node5 - [ambari_slaves]

node6 - [ambari_slaves]

node7 - [ambari_slaves]

node8 - [ambari_slaves]

nodes 9-12 - provision the OS + Java, but leave them alone for now
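Expressed as an Ansible inventory, the Ambari portion of this layout would look roughly like the following sketch (node names as above):

[ambari_master]
node1

[ambari_slaves]
node2
node3
node4
node5
node6
node7
node8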

Now we need to extend the inventory hosts file so that Metron is installed on top of this cluster.  First, we define a host that will provision the Metron HBase tables (if you are using the canned enrichments provided with Metron):

[metron_hbase_tables]
node9

Then we define a host that will provision Metron's Kafka topics (if you are using the canned sensors provided with Metron):

[metron_kafka_topics]
node9
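Once the playbook has run, you can spot-check the provisioning from node9.  For example, the commands below list the Kafka topics and HBase tables that were created; the ZooKeeper host/port and script locations are illustrative and depend on how Ambari laid out your cluster:

# List the Kafka topics created by the playbook
kafka-topics.sh --zookeeper node1:2181 --list

# List the HBase tables created by the playbook
echo "list" | hbase shell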

The next thing to do is to go into our inventory file and match up the 

Leave the enrichment topology running and kill the other parser topologies (bro, snort, or yaf) with either the "storm kill" command or the Storm UI at http://node1:8744/index.html.  Now let's install the Squid sensor.
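For example, assuming the parser topologies were deployed under their default names (adjust the names if your deployment differs):

storm kill bro
storm kill snort
storm kill yaf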

...