Summary

To install Ranger in a kerberized environment, first enable Kerberos on the cluster where Ranger is to be installed. Once the cluster is kerberized, create principals for each Ranger service and then follow the steps below to install Ranger.

Creating Keytabs and Principals

Note: The steps below are required only for manual installation of Ranger services and plugins.

Do some initial checks:

  • Log in as the “ranger” user:

        If the ranger user does not exist, create it first, i.e. useradd ranger

        E.g.: su - ranger

  • Check for HTTP Principal

    -> kinit -kt <HTTP keytab path> HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>

    E.g.: kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/test-dummy-X.openstacklocal@EXAMPLE.COM

    (The above command should not produce any error. You can verify with “klist” that it was successful.)

    -> kdestroy (Please don't miss kdestroy after the above step)

 For Ranger Admin

  • Create rangeradmin/<FQDN of Ranger Admin>@<REALM>
    -> kadmin.local
    -> addprinc -randkey rangeradmin/<FQDN of Ranger Admin>@<REALM>

    E.g.: addprinc -randkey rangeradmin/test-dummy-X.openstacklocal@EXAMPLE.COM

    -> xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>

    -> exit

  • Check the created ranger-admin principal

    -> kinit -kt  /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>

    E.g.: kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/test-dummy-X.openstacklocal@EXAMPLE.COM

    (The above command should not produce any error. You can verify with “klist” that it was successful.)

    -> kdestroy (Please don't miss kdestroy after the above step)

 For Ranger Lookup

  • Create rangerlookup/<FQDN of Ranger Admin>@<REALM>

    -> kadmin.local 

    -> addprinc -randkey rangerlookup/<FQDN of Ranger Admin>@<REALM>

    E.g.: addprinc -randkey rangerlookup/test-dummy-X.openstacklocal@EXAMPLE.COM

    -> xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger Admin>@<REALM>           

    -> exit

  • Check the created ranger-lookup principal
    -> kinit -kt /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger Admin>@<REALM>

    E.g.: kinit -kt /etc/security/keytabs/rangerlookup.keytab rangerlookup/test-dummy-X.openstacklocal@EXAMPLE.COM

    (The above command should not produce any error. You can verify with “klist” that it was successful.)

    -> kdestroy (Please don't miss kdestroy after the above step)

 For Ranger Usersync

  • Create rangerusersync/<FQDN>@<REALM>

    -> kadmin.local

    -> addprinc -randkey rangerusersync/<FQDN of Ranger usersync>@<REALM>

    E.g.: addprinc -randkey rangerusersync/test-dummy-X.openstacklocal@EXAMPLE.COM

    -> xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN>@<REALM>

   -> exit

  • Check the created rangerusersync principal
    -> kinit -kt /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN of Ranger usersync>@<REALM>

    E.g.: kinit -kt /etc/security/keytabs/rangerusersync.keytab rangerusersync/test-dummy-X.openstacklocal@EXAMPLE.COM

    (The above command should not produce any error. You can verify with “klist” that it was successful.)

    -> kdestroy (Please don't miss kdestroy after the above step)

 For Ranger Tagsync

  • Create rangertagsync/<FQDN>@<REALM>
    -> kadmin.local

    -> addprinc -randkey rangertagsync/<FQDN of Ranger tagsync>@<REALM>

    E.g.: addprinc -randkey rangertagsync/test-dummy-X.openstacklocal@EXAMPLE.COM

    -> xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/<FQDN of Ranger tagsync>@<REALM>

    -> exit

  • Check the created rangertagsync principal
    -> kinit -kt /etc/security/keytabs/rangertagsync.keytab rangertagsync/<FQDN of Ranger tagsync>@<REALM>

    E.g.: kinit -kt /etc/security/keytabs/rangertagsync.keytab rangertagsync/test-dummy-X.openstacklocal@EXAMPLE.COM

    (The above command should not produce any error. You can verify with “klist” that it was successful.)

    -> kdestroy (Please don't miss kdestroy after the above step)

 Note: Change each keytab's permissions to read-only and assign ownership to the “ranger” user, for example as shown below.
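
For example, assuming the keytabs were created under /etc/security/keytabs as in the steps above:

    -> chown ranger /etc/security/keytabs/rangeradmin.keytab /etc/security/keytabs/rangerlookup.keytab /etc/security/keytabs/rangerusersync.keytab /etc/security/keytabs/rangertagsync.keytab

    -> chmod 400 /etc/security/keytabs/rangeradmin.keytab /etc/security/keytabs/rangerlookup.keytab /etc/security/keytabs/rangerusersync.keytab /etc/security/keytabs/rangertagsync.keytab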


Installation Steps for Ranger-Admin

  1. Untar the ranger-<version>-admin.tar.gz

    -> tar zxf ranger-<version>-admin.tar.gz

  2. Change directory to ranger-<version>-admin

    -> cd ranger-<version>-admin

  3.  Edit install.properties (enter appropriate values for the properties below)

 

db_root_user=<username>
db_root_password=<password of db>
db_host=test-dummy-X.openstacklocal
db_name=
db_user=
db_password=
policymgr_external_url=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
authentication_method=UNIX or LDAP or AD
spnego_principal=HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
spnego_keytab=<HTTP keytab path>
token_valid=30
cookie_domain=<FQDN_OF_Ranger_Admin_Cluster>
cookie_path=/
admin_principal=rangeradmin/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
admin_keytab=<rangeradmin keytab path>
lookup_principal=rangerlookup/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
lookup_keytab=<rangerlookup keytab path>
hadoop_conf=/etc/hadoop/conf
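
As an illustration, a filled-in configuration might look like the following; the MySQL root credentials and the ranger database/user names here are assumptions to adapt to your environment (the hostname and realm reuse the examples above):

db_root_user=root
db_root_password=RootPassw0rd
db_host=test-dummy-X.openstacklocal
db_name=ranger
db_user=rangeradmin
db_password=RangerAdminPassw0rd
policymgr_external_url=http://test-dummy-X.openstacklocal:6080
authentication_method=UNIX
spnego_principal=HTTP/test-dummy-X.openstacklocal@EXAMPLE.COM
spnego_keytab=/etc/security/keytabs/spnego.service.keytab
admin_principal=rangeradmin/test-dummy-X.openstacklocal@EXAMPLE.COM
admin_keytab=/etc/security/keytabs/rangeradmin.keytab
lookup_principal=rangerlookup/test-dummy-X.openstacklocal@EXAMPLE.COM
lookup_keytab=/etc/security/keytabs/rangerlookup.keytab
hadoop_conf=/etc/hadoop/conf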

Note: If the Kerberos server and Ranger Admin are on different hosts, copy the keytab to the Ranger Admin host and assign ownership to the “ranger” user, for example as shown after this list:

  • scp the rangeradmin keytab file to the respective path of another host
  • chown ranger <rangeradmin keytab path>
  • chmod 400 <rangeradmin keytab path>
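
For example, assuming the Ranger Admin host is test-dummy-X.openstacklocal (reusing the example hostname above; adjust for your environment):

  • scp /etc/security/keytabs/rangeradmin.keytab root@test-dummy-X.openstacklocal:/etc/security/keytabs/
  • ssh root@test-dummy-X.openstacklocal "chown ranger /etc/security/keytabs/rangeradmin.keytab && chmod 400 /etc/security/keytabs/rangeradmin.keytab"

The same pattern applies to the usersync, tagsync, and KMS keytabs in the sections below.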

4. Run setup   

    -> ./setup.sh

5. Start Ranger admin server 

    -> ./ranger-admin-services.sh start 
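
Optionally, you can sanity-check the Kerberos setup by authenticating as the rangeradmin principal and requesting the external URL with SPNEGO. This is only a sketch and assumes your curl build supports GSS-Negotiate:

    -> kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>

    -> curl -ik --negotiate -u : http://<FQDN_OF_Ranger_Admin_Cluster>:6080

    -> kdestroy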

Installation Steps for Ranger-Usersync

  1. Untar the ranger-<version>-usersync.tar.gz

    -> tar zxf ranger-<version>-usersync.tar.gz

  2. Change directory to ranger-<version>-usersync

    -> cd ranger-<version>-usersync

  3.  Edit install.properties (enter appropriate values for the properties below)

 

POLICY_MGR_URL=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
usersync_principal=rangerusersync/test-dummy-X.openstacklocal@<REALM>
usersync_keytab=<rangerusersync keytab path>
hadoop_conf=/etc/hadoop/conf

Note: If the Kerberos server and Usersync are on different hosts, copy the keytab to the Usersync host and assign ownership to the “ranger” user:

  • scp the rangerusersync keytab file to the respective path of another host
  • chown ranger <rangerusersync keytab path>
  • chmod 400 <rangerusersync keytab path>

4. Run setup   

   -> ./setup.sh

5. Start Usersync server

   ->  ./ranger-usersync-services.sh start 

Installation Steps for Ranger-Tagsync

  1. Untar the ranger-<version>-tagsync.tar.gz

    -> tar zxf ranger-<version>-tagsync.tar.gz

  2. Change directory to ranger-<version>-tagsync

    -> cd ranger-<version>-tagsync

  3.  Edit install.properties (enter appropriate values for the properties below)

 

TAGADMIN_ENDPOINT=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
tagsync_principal=rangertagsync/test-dummy-X.openstacklocal@<REALM>
tagsync_keytab=<rangertagsync keytab path>
hadoop_conf=/etc/hadoop/conf
TAG_SOURCE= (either 'atlas' or 'file' or 'atlasrest')

Note: If the Kerberos server and Tagsync are on different hosts, copy the keytab to the Tagsync host and assign ownership to the “ranger” user:

  • scp the rangertagsync keytab file to the respective path of another host
  • chown ranger <rangertagsync keytab path>
  • chmod 400 <rangertagsync keytab path>

4. Run setup   

   -> ./setup.sh

5. Start Ranger tagsync server 

   -> ./ranger-tagsync-services.sh start

Installation Steps for Ranger-KMS

  1. Untar the ranger-<version>-SNAPSHOT-kms.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-kms.tar.gz

  2. Change directory to ranger-<version>-SNAPSHOT-kms

    -> cd ranger-<version>-SNAPSHOT-kms

  3.  Edit install.properties (enter appropriate values for the properties below)

    KMS_MASTER_KEY_PASSWD=<Master Key Password>
    kms_principal=rangerkms/<FQDN of ranger kms host>@<REALM>
    kms_keytab=<ranger kms keytab path>
    hadoop_conf=<hadoop core-site.xml path>
    POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

Note: If the Kerberos server and Ranger KMS are on different hosts, copy the keytab to the Ranger KMS host and assign ownership to the “kms” user:

  • scp the rangerkms keytab file to the respective path
  • chown kms <rangerkms keytab path>
  • chmod 400 <rangerkms keytab path>

4. Run setup   

   -> ./setup.sh

5. Follow the additional setup required for a kerberized cluster, such as creating the rangerkms keytab and adding proxy-user configuration (a sketch follows below).
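
A minimal sketch of those extra steps, reusing the kadmin pattern from the principal-creation section above. The proxy-user entries are illustrative (shown here for the hive user, in Ranger KMS's kms-site.xml) and should list whichever users need to act on behalf of others:

    -> kadmin.local

    -> addprinc -randkey rangerkms/<FQDN of ranger kms host>@<REALM>

    -> xst -k /etc/security/keytabs/rangerkms.keytab rangerkms/<FQDN of ranger kms host>@<REALM>

    -> exit

    In kms-site.xml (illustrative values):

    hadoop.kms.proxyuser.hive.users=*
    hadoop.kms.proxyuser.hive.groups=*
    hadoop.kms.proxyuser.hive.hosts=*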

6. Start the Ranger KMS server

   -> ./ranger-kms start

Installing Ranger Plugins Manually

Installing/Enabling Ranger HDFS plugin

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-hdfs-plugin.tar.gz to the NameNode host in the /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-hdfs-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-hdfs-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-hdfs-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=hadoopdev

    -> Audit info (Solr/HDFS options available; see the example below)
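
For instance, to send audits to Solr, the relevant install.properties entries look roughly like this (the Solr URL and audit directory are assumptions; the same properties appear in each plugin's install.properties below):

    -> XAAUDIT.SOLR.ENABLE=true

    -> XAAUDIT.SOLR.URL=http://<solr_host>:6083/solr/ranger_audits

    -> XAAUDIT.HDFS.ENABLE=true

    -> XAAUDIT.HDFS.HDFS_DIR=hdfs://<FQDN of namenode>:8020/ranger/audit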

  4. Enable the HDFS plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

    -> ./enable-hdfs-plugin.sh

  5. After enabling the plugin, follow the below steps to stop/start the NameNode.

    ->su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode"

    ->su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode"


  6.  Create the default repo for HDFS with proper configuration (a curl sketch follows after this list).

    -> In the custom repo config, add the component user (e.g. hdfs) as the value for the below properties

    1.  policy.download.auth.users OR policy.grantrevoke.auth.users

    2. tag.download.auth.users

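The repo can be created from the Ranger Admin UI or via the REST API. A sketch of the REST call, assuming admin:admin credentials and the hadoopdev repo name from step 3 (the config values are illustrative):

    -> curl -u admin:admin -H "Content-Type: application/json" -X POST http://<FQDN of ranger admin host>:6080/service/public/v2/api/service -d '{"name":"hadoopdev","type":"hdfs","configs":{"username":"hdfs","password":"*****","fs.default.name":"hdfs://<FQDN of namenode>:8020","hadoop.security.authentication":"kerberos","policy.download.auth.users":"hdfs","tag.download.auth.users":"hdfs"}}'

The same approach works for the other plugin repos below (change "name" and "type" accordingly).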

  7. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab

Installing/Enabling Ranger HIVE plugin

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-hive-plugin.tar.gz to hiveServer2 host in /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-hive-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-hive-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-hive-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=hivedev

    -> Audit info (Solr/HDFS options available)

  4. Enable the Hive plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

    -> ./enable-hive-plugin.sh

  5. After enabling the plugin, follow the below steps to stop/start HiveServer2.

    -> ps -aux | grep hive | grep -i hiveserver2 | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1

    -> su hive -c "nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris="" -hiveconf hive.log.dir=/var/log/hive -hiveconf hive.log.file=hiveserver2.log >/var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2err.log &"

  6.  Create the default repo for Hive with proper configuration

     -> In the custom repo config, add the component user (e.g. hive) as the value for the below properties

    1. policy.download.auth.users OR policy.grantrevoke.auth.users

    2. tag.download.auth.users


  7. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab

 

Installing/Enabling Ranger HBASE plugin

 

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-hbase-plugin.tar.gz to the active HBase Master host in the /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-hbase-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-hbase-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-hbase-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=hbasedev

    -> Audit info (Solr/HDFS options available)

  4. Enable the Hbase plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

    -> ./enable-hbase-plugin.sh

  5. After enabling the plugin, follow the below steps to stop/start HBase.

    -> su hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh stop regionserver; sleep 25"

    -> su hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh stop master"

    -> su hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"

    -> su hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver"


  6.  Create the default repo for HBase with proper configuration

     -> In the custom repo config, add the component user (e.g. hbase) as the value for the below properties

    1. policy.grantrevoke.auth.users

    2. tag.download.auth.users


  7. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

Installing/Enabling Ranger YARN plugin

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-yarn-plugin.tar.gz to Active ResourceManager host in /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-yarn-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-yarn-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-yarn-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=yarndev

    -> Audit info (Solr/HDFS options available)

  4. Enable the YARN plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64

    -> ./enable-yarn-plugin.sh

  5. Make sure HADOOP_YARN_HOME and HADOOP_LIBEXEC_DIR are set.

    -> export HADOOP_YARN_HOME=/usr/hdp/current/hadoop-yarn-nodemanager/

    -> export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec/ 


  6.  After enabling the plugin, follow the below steps to stop/start the YARN daemons.

    -> Stop/Start the ResourceManager on all your ResourceManager hosts.

    1. su yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh stop resourcemanager"

    2. su yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh start resourcemanager"  

    3. ps -ef | grep -i resourcemanager  

    -> Stop/Start the NodeManager on all your NodeManager hosts. 

    1. su yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh stop nodemanager"
    2. su yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
    3. ps -ef | grep -i nodemanager

  7. Create the default repo for YARN with proper configuration. In the custom repo config, add the component user (e.g. yarn) as the value for the below properties:

    1. policy.download.auth.users OR policy.grantrevoke.auth.users
    2. tag.download.auth.users
  8.  You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

Installing/Enabling Ranger KNOX plugin 

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-knox-plugin.tar.gz to the Knox gateway host in the /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-knox-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-knox-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-knox-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=knoxdev

    -> Audit info (Solr/HDFS options available)

  4. Enable the Knox plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64

    -> ./enable-knox-plugin.sh

  5. After enabling the plugin, follow the below steps to stop/start the Knox gateway.

    -> su knox -c "/usr/hdp/current/knox-server/bin/gateway.sh stop"

    -> su knox -c "/usr/hdp/current/knox-server/bin/gateway.sh start"


  6.  Create the default repo for Knox with proper configuration

     -> In the custom repo config, add the component user (e.g. knox) as the value for the below properties

    1. policy.grantrevoke.auth.users

    2. tag.download.auth.users

  7. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

    Note: 
       -> For Test Connection to be successful, follow the additional step “Trusting Self Signed Knox Certificate” (a sketch follows below)
       -> The Knox plugin must be enabled in all Knox instances (in an HA environment).
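
A sketch of that certificate-trust step, assuming Knox's default identity keystore location in HDP and the default Java cacerts password (both are assumptions; adjust the paths for your environment), followed by a Ranger Admin restart:

       -> keytool -exportcert -alias gateway-identity -keystore /usr/hdp/current/knox-server/data/security/keystores/gateway.jks -file /tmp/knox.crt

       -> keytool -importcert -alias knox -file /tmp/knox.crt -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit

       -> ./ranger-admin-services.sh stop && ./ranger-admin-services.sh start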

Installing/Enabling Ranger STORM plugin

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-storm-plugin.tar.gz to the Storm Nimbus host in the /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-storm-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-storm-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-storm-plugin

  3. Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=stormdev

    -> Audit info (Solr/HDFS options available)

  4. Enable the Storm plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

    -> ./enable-storm-plugin.sh

  5. After enabling the plugin, follow the below steps to stop/start Storm.

    -> su - storm -c 'source /usr/hdp/current/storm-nimbus/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm nimbus > /var/log/storm/nimbus.out'

    -> su - storm -c 'source /usr/hdp/current/storm-client/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm drpc > /var/log/storm/drpc.out'

    -> su - storm -c 'source /usr/hdp/current/storm-client/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm ui > /var/log/storm/ui.out'

    -> su - storm -c 'source /usr/hdp/current/storm-supervisor/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm logviewer > /var/log/storm/logviewer.out'


  6.  Create the default repo for Storm with proper configuration

    -> In the custom repo config, add the component user (e.g. storm) as the value for the below properties

    1. policy.download.auth.users OR policy.grantrevoke.auth.users

    2. tag.download.auth.users


  7. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

Installing/Enabling Ranger KAFKA plugin

  1. We’ll start by extracting our build at the appropriate place

    -> copy ranger-<version>-SNAPSHOT-kafka-plugin.tar.gz to the Kafka broker host(s) in the /usr/hdp/<hdp-version> directory

    -> cd /usr/hdp/<hdp-version>

  2.  Untar the ranger-<version>-SNAPSHOT-kafka-plugin.tar.gz

    -> tar zxf ranger-<version>-SNAPSHOT-kafka-plugin.tar.gz

    -> cd ranger-<version>-SNAPSHOT-kafka-plugin

  3.  Edit the install.properties file.  Here are the relevant lines that you should edit:

    -> COMPONENT_INSTALL_DIR_NAME=/usr/hdp/<hdp-version>/kafka
    -> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080

    -> REPOSITORY_NAME=kafkadev

    -> Audit info (Solr/HDFS options available)

  4. Enable the KAFKA plugin by running the below commands

    -> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64

    -> ./enable-kafka-plugin.sh
     

  5.  Create the default repo for Kafka with proper configuration

    -> In the custom repo config, add the component user (e.g. kafka) as the value for the below properties

    1.  policy.download.auth.users OR policy.grantrevoke.auth.users

    2. tag.download.auth.users

  6. You can verify that the plugin is communicating with Ranger Admin in the Audit -> Plugins tab.

Note: If the plugin is not able to communicate, check the property “authorizer.class.name” in /usr/hdp/<hdp-version>/kafka/config/server.properties; the value of authorizer.class.name should be org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer.
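
That is, after the plugin is enabled, /usr/hdp/<hdp-version>/kafka/config/server.properties should contain the line:

    authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer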
