...
sudo useradd --home-dir /var/hadoop --create-home --shell /bin/bash --user-group hadoop
sudo tar xzf hadoop-2.5.2.tar.gz -C /usr/local
cd /usr/local
sudo ln -s hadoop-2.5.2 hadoop
sudo chown -R hadoop:hadoop hadoop hadoop-2.5.2
sudo su - hadoop
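Note that the symlink above is created *relative* to /usr/local: the link stores just the name hadoop-2.5.2, so the pair can later be moved to another prefix together. A throwaway sketch (run in a temp directory, so no root is needed) showing the same pattern:

```shell
# Demo of the relative-symlink pattern used above, in a temp prefix.
prefix=$(mktemp -d)
mkdir "$prefix/hadoop-2.5.2"
ln -s hadoop-2.5.2 "$prefix/hadoop"      # relative target, same as above
link_target=$(readlink "$prefix/hadoop") # what the link actually stores
echo "hadoop -> $link_target"
rm -rf "$prefix"
```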
Now let's follow the steps below to install and configure the Ranger HDFS plugin.
- Start by extracting binaries at the appropriate place (/usr/local).
cd /usr/local
sudo tar zxf ~/dev/ranger/target/ranger-0.4.0-hdfs-plugin.tar.gz
sudo ln -s ranger-0.4.0-hdfs-plugin ranger-hdfs-plugin
cd ranger-hdfs-plugin
- Now let's edit the install.properties file. Here are the relevant lines that you should edit:
POLICY_MGR_URL=http://localhost:6080
REPOSITORY_NAME=local_hdfs
XAAUDIT.DB.HOSTNAME=localhost
XAAUDIT.DB.DATABASE_NAME=ranger
XAAUDIT.DB.USER_NAME=rangerlogger
XAAUDIT.DB.PASSWORD=rangerlogger
- Now enable the HDFS plugin by running the enable-hdfs-plugin.sh script (remember to set JAVA_HOME first):
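If you are scripting the install, the edits above can be applied non-interactively with sed. Here is a minimal sketch against a demo copy of the file (the property names come from install.properties above; the temp path and the set_prop helper are just for illustration):

```shell
# Demo copy with only the relevant keys (shipped defaults may differ).
props=$(mktemp)
cat > "$props" <<'EOF'
POLICY_MGR_URL=
REPOSITORY_NAME=
XAAUDIT.DB.HOSTNAME=
XAAUDIT.DB.DATABASE_NAME=
XAAUDIT.DB.USER_NAME=
XAAUDIT.DB.PASSWORD=
EOF

# Overwrite each key=value pair in place.
set_prop() { sed -i "s|^$1=.*|$1=$2|" "$props"; }
set_prop POLICY_MGR_URL           http://localhost:6080
set_prop REPOSITORY_NAME          local_hdfs
set_prop XAAUDIT.DB.HOSTNAME      localhost
set_prop XAAUDIT.DB.DATABASE_NAME ranger
set_prop XAAUDIT.DB.USER_NAME     rangerlogger
set_prop XAAUDIT.DB.PASSWORD      rangerlogger
grep '^REPOSITORY_NAME=' "$props"
```

On a real node you would point the same substitutions at /usr/local/ranger-hdfs-plugin/install.properties instead of the temp file.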
- Create a conf symlink in the Hadoop home directory pointing to Hadoop's configuration directory
- cd /usr/local/hadoop
- ln -s /usr/local/hadoop/etc/hadoop conf
- Export HADOOP_HOME in /etc/bashrc (writing to /etc/bashrc requires root)
- echo "export HADOOP_HOME=/usr/local/hadoop" | sudo tee -a /etc/bashrc
- Enable Ranger HDFS plugin
- cd /usr/local/ranger-hdfs-plugin
- ./enable-hdfs-plugin.sh
- Copy all the jar files from ${HADOOP_HOME}/lib into the HDFS lib directory
- cp /usr/local/hadoop/lib/* /usr/local/hadoop/share/hadoop/hdfs/lib/
- Now edit the xasecure-audit.xml file:
- cd /usr/local/hadoop/conf
- Change xasecure-audit.xml to look like the snippet below. Make sure the JDBC properties are correct for your audit database.
<property>
  <name>xasecure.audit.jpa.javax.persistence.jdbc.url</name>
  <value>jdbc:mysql://localhost/ranger</value>
</property>
<property>
  <name>xasecure.audit.jpa.javax.persistence.jdbc.user</name>
  <value>rangerlogger</value>
</property>
<property>
  <name>xasecure.audit.jpa.javax.persistence.jdbc.password</name>
  <value>rangerlogger</value>
</property>
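After editing, it is worth checking that all three JDBC properties actually made it into the file before restarting anything. A sketch against a demo copy (on a real node you would point the grep at /usr/local/hadoop/conf/xasecure-audit.xml; the temp file below is just for the demo):

```shell
# Demo copy of the edited snippet, wrapped in a configuration root.
audit_xml=$(mktemp)
cat > "$audit_xml" <<'EOF'
<configuration>
  <property>
    <name>xasecure.audit.jpa.javax.persistence.jdbc.url</name>
    <value>jdbc:mysql://localhost/ranger</value>
  </property>
  <property>
    <name>xasecure.audit.jpa.javax.persistence.jdbc.user</name>
    <value>rangerlogger</value>
  </property>
  <property>
    <name>xasecure.audit.jpa.javax.persistence.jdbc.password</name>
    <value>rangerlogger</value>
  </property>
</configuration>
EOF

# Count the jdbc.* property names; all three should be present.
jdbc_props=$(grep -c 'xasecure.audit.jpa.javax.persistence.jdbc' "$audit_xml")
echo "found $jdbc_props jdbc properties"
```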
- Once these changes are done, restart the Hadoop NameNode to activate the ranger-hdfs-plugin, for example:
- cd /usr/local/hadoop
- ./sbin/hadoop-daemon.sh stop namenode
- ./sbin/hadoop-daemon.sh start namenode
- You can verify that the plugin registered by logging into the Ranger Admin web interface and navigating to Audit -> Agents.
- From now on, access to HDFS resources will be authorized via Ranger policies.
...