...
Check for HTTP Principal
-> kinit -kt <HTTP keytab path> HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
E.g: kinit -kt /etc/security/keytabs/spnego.service.keytab HTTP/test-dummy-X.openstacklocal@EXAMPLE.COM
(After the above command there should not be any error. You can check with “klist” whether the command was successful.)
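All of the service principals created in this guide follow the same `<service>/<FQDN>@<REALM>` pattern. As a quick sanity check, a small shell loop can print the exact principal names to create; the FQDN and realm below are the example values used throughout this guide, so substitute your own:

```shell
# Example values from this guide; replace with your cluster's FQDN and realm.
FQDN="test-dummy-X.openstacklocal"
REALM="EXAMPLE.COM"

# Print every principal this guide creates, in <service>/<FQDN>@<REALM> form.
for svc in HTTP rangeradmin rangerlookup rangerusersync rangertagsync rangerkms; do
  echo "${svc}/${FQDN}@${REALM}"
done
```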
...
Create rangeradmin/<FQDN of Ranger Admin>@<REALM>
-> kadmin.local
-> addprinc -randkey rangeradmin/<FQDN of Ranger Admin>
E.g: addprinc -randkey rangeradmin/test-dummy-X.openstacklocal@EXAMPLE.COM
-> xst -k /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>
-> exit
Check ranger-admin created principal
-> kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/<FQDN of Ranger Admin>@<REALM>
E.g: kinit -kt /etc/security/keytabs/rangeradmin.keytab rangeradmin/test-dummy-X.openstacklocal@EXAMPLE.COM
(After the above command there should not be any error. You can check with “klist” whether the command was successful.)
-> kdestroy (Please don’t miss kdestroy after above step)
...
Create rangerlookup/<FQDN of Ranger Admin>@<REALM>
-> kadmin.local
-> addprinc -randkey rangerlookup/<FQDN of Ranger Admin>
E.g: addprinc -randkey rangerlookup/test-dummy-X.openstacklocal@EXAMPLE.COM
-> xst -k /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger Admin>@<REALM>
-> exit
Check ranger-lookup created principal
...
-> kinit -kt /etc/security/keytabs/rangerlookup.keytab rangerlookup/<FQDN of Ranger Admin>@<REALM>
E.g: kinit -kt /etc/security/keytabs/rangerlookup.keytab rangerlookup/test-dummy-X.openstacklocal@EXAMPLE.COM
(After the above command there should not be any error. You can check with “klist” whether the command was successful.)
-> kdestroy (Please don’t miss kdestroy after above step)
...
Create rangerusersync/<FQDN>@<REALM>
-> kadmin.local
-> addprinc -randkey rangerusersync/<FQDN of Ranger usersync>
E.g: addprinc -randkey rangerusersync/test-dummy-X.openstacklocal@EXAMPLE.COM
-> xst -k /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN>@<REALM>
-> exit
Check rangerusersync created principal
...
-> kinit -kt /etc/security/keytabs/rangerusersync.keytab rangerusersync/<FQDN of Ranger usersync>@<REALM>
E.g: kinit -kt /etc/security/keytabs/rangerusersync.keytab rangerusersync/test-dummy-X.openstacklocal@EXAMPLE.COM
(After the above command there should not be any error. You can check with “klist” whether the command was successful.)
-> kdestroy (Please don’t miss kdestroy after above step)
...
-> xst -k /etc/security/keytabs/rangertagsync.keytab rangertagsync/<FQDN>@<REALM>
-> exit
Check rangertagsync created principal
...
E.g: kinit -kt /etc/security/keytabs/rangertagsync.keytab rangertagsync/test-dummy-X.openstacklocal@EXAMPLE.COM
(After the above command there should not be any error. You can check with “klist” whether the command was successful.)
...
Installation Steps for Ranger-Admin
Untar the ranger-<version>-admin.tar.gz
-> tar zxf ranger-<version>-admin.tar.gz
Change directory to ranger-<version>-admin
-> cd ranger-<version>-admin
- Edit install.properties (Enter appropriate values for the below given properties)
db_root_user=
db_root_password=
db_host=
db_name=
db_user=
db_password=
policymgr_external_url=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
authentication_method=UNIX or LDAP or AD
spnego_principal=HTTP/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
spnego_keytab=<HTTP keytab path>
token_valid=30
cookie_domain=<FQDN_OF_Ranger_Admin_Cluster>
cookie_path=/
admin_principal=rangeradmin/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
admin_keytab=<rangeradmin keytab path>
lookup_principal=rangerlookup/<FQDN_OF_Ranger_Admin_Cluster>@<REALM>
lookup_keytab=<rangerlookup keytab path>
hadoop_conf=/etc/hadoop/conf
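Since the Kerberos-related properties all derive from the same FQDN and realm, a small sketch like the one below can generate them consistently. The hostname, realm, and keytab paths are the example values used earlier in this guide, not fixed defaults; adjust them for your cluster before pasting the result into install.properties:

```shell
# Sketch: generate the Kerberos-related install.properties entries from two
# variables. FQDN and REALM are this guide's example values; the keytab paths
# match the ones created in the principal-setup steps above.
FQDN="test-dummy-X.openstacklocal"
REALM="EXAMPLE.COM"

cat > /tmp/ranger-admin-kerberos.properties <<EOF
policymgr_external_url=http://${FQDN}:6080
spnego_principal=HTTP/${FQDN}@${REALM}
spnego_keytab=/etc/security/keytabs/spnego.service.keytab
admin_principal=rangeradmin/${FQDN}@${REALM}
admin_keytab=/etc/security/keytabs/rangeradmin.keytab
lookup_principal=rangerlookup/${FQDN}@${REALM}
lookup_keytab=/etc/security/keytabs/rangerlookup.keytab
EOF

cat /tmp/ranger-admin-kerberos.properties
```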
Note: If the Kerberos server and Ranger Admin are on different hosts, copy the keytab to the admin host and assign ownership to the “ranger” user
- scp the rangeradmin keytab file to the respective path of another host
- chown ranger <rangeradmin keytab path>
- chmod 400 <rangeradmin keytab path>
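The ownership/permission step above can be sketched as follows. The keytab path here is a stand-in for demonstration (the real path is whatever you chose when exporting the keytab), and `chown` requires root plus an existing “ranger” user, so it is shown but allowed to fail in a sandbox:

```shell
# Sketch of the keytab ownership/permission step.
KEYTAB=/tmp/rangeradmin.keytab              # stand-in path for the demo
touch "$KEYTAB"
chown ranger "$KEYTAB" 2>/dev/null || true  # requires root and a "ranger" user
chmod 400 "$KEYTAB"                         # owner read-only, as the guide requires
stat -c '%a' "$KEYTAB"
```

A mode of 400 ensures only the service account can read the keytab; anything more permissive will typically be rejected by Kerberos-aware services.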
4. Run setup
-> ./setup.sh
5. Start Ranger admin server
-> ./ranger-admin-services.sh start
Installation Steps for Ranger-Usersync
Untar the ranger-<version>-usersync.tar.gz
-> tar zxf ranger-<version>-usersync.tar.gz
Change directory to ranger-<version>-usersync
-> cd ranger-<version>-usersync
- Edit install.properties (Enter appropriate values for the below given properties)
POLICY_MGR_URL=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
usersync_principal=rangerusersync/<FQDN>@<REALM>
usersync_keytab=<rangerusersync keytab path>
hadoop_conf=/etc/hadoop/conf
Note: If the Kerberos server and Usersync are on different hosts, copy the keytab to the usersync host and assign ownership to the “ranger” user
- scp the rangerusersync keytab file to the respective path of another host
- chown ranger <rangerusersync keytab path>
- chmod 400 <rangerusersync keytab path>
4. Run setup
-> ./setup.sh
5. Start Usersync server
-> ./ranger-usersync-services.sh start
Installation Steps for Ranger-Tagsync
Untar the ranger-<version>-tagsync.tar.gz
-> tar zxf ranger-<version>-tagsync.tar.gz
Change directory to ranger-<version>-tagsync
-> cd ranger-<version>-tagsync
- Edit install.properties (Enter appropriate values for the below given properties)
TAGADMIN_ENDPOINT=http://<FQDN_OF_Ranger_Admin_Cluster>:6080
tagsync_principal=rangertagsync/<FQDN>@<REALM>
tagsync_keytab=<rangertagsync keytab path>
hadoop_conf=/etc/hadoop/conf
TAG_SOURCE= (either 'atlas' or 'file' or 'atlasrest')
Note: If the Kerberos server and Tagsync are on different hosts, copy the keytab to the tagsync host and assign ownership to the “ranger” user
- scp the rangertagsync keytab file to the respective path of another host
- chown ranger <rangertagsync keytab path>
- chmod 400 <rangertagsync keytab path>
4. Run setup
-> ./setup.sh
5. Start Ranger tagsync server
-> ./ranger-tagsync-services.sh start
Installation Steps for Ranger-KMS
Untar the ranger-<version>-SNAPSHOT-kms.tar.gz
-> tar zxf ranger-<version>-SNAPSHOT-kms.tar.gz
Change directory to ranger-<version>-SNAPSHOT-kms
-> cd ranger-<version>-SNAPSHOT-kms
- Edit install.properties (Enter appropriate values for the below given properties)
KMS_MASTER_KEY_PASSWD=<Master Key Password>
kms_principal=rangerkms/<FQDN of ranger kms host>@<REALM>
kms_keytab=<ranger kms keytab path>
hadoop_conf=<hadoop core-site.xml path>
POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
Note: If the Kerberos server and Ranger KMS are on different hosts, copy the keytab to the Ranger KMS host and assign ownership to the “kms” user
- scp the rangerkms keytab file to the respective path
- chown kms <rangerkms keytab path>
- chmod 400 <rangerkms keytab path>
4. Run setup
-> ./setup.sh
5. Follow the other setup steps required for a Kerberized cluster, such as creating the keytab and adding the proxy user
6. Start Ranger KMS server
-> ./ranger-kms start
Installing Ranger Plugins Manually
Installing/Enabling Ranger HDFS plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-hdfs-plugin.tar.gz to nameNode host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-hdfs-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-hdfs-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=hadoopdev
-> Audit info (Solr/HDFS options available)
Enable the HDFS plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
-> ./enable-hdfs-plugin.sh
After enabling plugin, follow the below steps to stop/start namenode.
->su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh stop namenode"
->su hdfs -c "/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh start namenode"
Create the default repo for HDFS with proper configuration.
-> In Custom repo config add component user (eg. hdfs) as value for below properties
policy.download.auth.users OR policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Installing/Enabling Ranger HIVE plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-hive-plugin.tar.gz to hiveServer2 host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-hive-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-hive-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=hivedev
-> Audit info (Solr/HDFS options available)
Enable the Hive plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
-> ./enable-hive-plugin.sh
After enabling plugin, follow the below steps to stop/start hiveserver2.
-> ps -aux | grep hive | grep -i hiveserver2 | awk '{print $1,$2}' | grep hive | awk '{print $2}' | xargs kill >/dev/null 2>&1
-> su hive -c "nohup /usr/hdp/current/hive-server2/bin/hiveserver2 -hiveconf hive.metastore.uris="" -hiveconf hive.log.dir=/var/log/hive -hiveconf hive.log.file=hiveserver2.log >/var/log/hive/hiveserver2.out 2> /var/log/hive/hiveserver2err.log &"
Create the default repo for Hive with proper configuration
-> In Custom repo config add component user (eg. hive) as value for below properties
policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Installing/Enabling Ranger HBASE plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-hbase-plugin.tar.gz to Active Hbasemaster host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-hbase-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-hbase-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=hbasedev
-> Audit info (Solr/HDFS options available)
Enable the Hbase plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
-> ./enable-hbase-plugin.sh
After enabling plugin, follow the below steps to stop/start hbase.
-> su hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh stop regionserver; sleep 25"
-> su hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh stop master"
-> su hbase -c "/usr/hdp/current/hbase-master/bin/hbase-daemon.sh start master; sleep 25"
-> su hbase -c "/usr/hdp/current/hbase-regionserver/bin/hbase-daemon.sh start regionserver"
Create the default repo for HBase with proper configuration
-> In Custom repo config add component user (eg. hbase) as value for below properties
policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Installing/Enabling Ranger YARN plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-yarn-plugin.tar.gz to Active ResourceManager host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-yarn-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-yarn-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=yarndev
-> Audit info (Solr/HDFS options available)
Enable the YARN plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
-> ./enable-yarn-plugin.sh
Make sure HADOOP_YARN_HOME and HADOOP_LIBEXEC_DIR are set.
-> export HADOOP_YARN_HOME=/usr/hdp/current/hadoop-yarn-nodemanager/
-> export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec/
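A defensive guard can fail fast if either variable is missing before the daemons are restarted; a minimal sketch, using the HDP default paths shown above:

```shell
# Set the YARN environment (HDP default paths from this guide).
export HADOOP_YARN_HOME=/usr/hdp/current/hadoop-yarn-nodemanager/
export HADOOP_LIBEXEC_DIR=/usr/hdp/current/hadoop-client/libexec/

# Abort with a clear message if either variable is unset or empty.
: "${HADOOP_YARN_HOME:?HADOOP_YARN_HOME must be set before restarting YARN daemons}"
: "${HADOOP_LIBEXEC_DIR:?HADOOP_LIBEXEC_DIR must be set before restarting YARN daemons}"
echo "YARN environment OK"
```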
After enabling plugin, follow the below steps to stop/start.
-> Stop/Start the ResourceManager on all your ResourceManager hosts.
-> su yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh stop resourcemanager"
-> su yarn -c "/usr/hdp/current/hadoop-yarn-resourcemanager/sbin/yarn-daemon.sh start resourcemanager"
-> ps -ef | grep -i resourcemanager
-> Stop/Start the NodeManager on all your NodeManager hosts.
-> su yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh stop nodemanager"
-> su yarn -c "/usr/hdp/current/hadoop-yarn-nodemanager/sbin/yarn-daemon.sh start nodemanager"
-> ps -ef | grep -i nodemanager
Create the default repo for YARN with proper configuration
-> In Custom repo config add component user (eg. yarn) as value for below properties
policy.download.auth.users OR policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Installing/Enabling Ranger KNOX plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-knox-plugin.tar.gz to the Knox gateway host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-knox-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-knox-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=knoxdev
-> Audit info (Solr/HDFS options available)
Enable the Knox plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
-> ./enable-knox-plugin.sh
After enabling plugin, follow the below steps to stop/start knox gateway.
-> su knox -c "/usr/hdp/current/knox-server/bin/gateway.sh stop"
-> su knox -c "/usr/hdp/current/knox-server/bin/gateway.sh start"
Create the default repo for Knox with proper configuration
-> In Custom repo config add component user (eg. knox) as value for below properties
policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Note:
-> For Test Connection to be successful, follow the additional step “Trusting Self Signed Knox Certificate”.
-> Knox plugin must be enabled in all Knox instances (in HA env).
Installing/Enabling Ranger STORM plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-storm-plugin.tar.gz to the Storm Nimbus host in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-storm-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-storm-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=stormdev
-> Audit info (Solr/HDFS options available)
Enable the Storm plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
-> ./enable-storm-plugin.sh
After enabling plugin, follow the below steps to stop/start storm.
-> su - storm -c 'source /usr/hdp/current/storm-nimbus/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm nimbus > /var/log/storm/nimbus.out'
-> su - storm -c 'source /usr/hdp/current/storm-client/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm drpc > /var/log/storm/drpc.out'
-> su - storm -c 'source /usr/hdp/current/storm-client/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm ui > /var/log/storm/ui.out'
-> su - storm -c 'source /usr/hdp/current/storm-supervisor/conf/storm-env.sh ; export PATH=/usr/jdk64/jdk1.8.0_60/bin:$PATH ; storm logviewer > /var/log/storm/logviewer.out'
Create the default repo for Storm with proper configuration
-> In Custom repo config add component user (eg. storm) as value for below properties
policy.download.auth.users OR policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Installing/Enabling Ranger KAFKA plugin
We’ll start by extracting our build at the appropriate place
-> copy ranger-<version>-SNAPSHOT-kafka-plugin.tar.gz to the Kafka broker host(s) in /usr/hdp/<hdp-version> directory
-> cd /usr/hdp/<hdp-version>
Untar the ranger-<version>-SNAPSHOT-kafka-plugin.tar.gz
-> cd ranger-<version>-SNAPSHOT-kafka-plugin
Edit the install.properties file. Here are the relevant lines that you should edit:
-> COMPONENT_INSTALL_DIR_NAME=/usr/hdp/<hdp-version>/kafka
-> POLICY_MGR_URL=http://<FQDN of ranger admin host>:6080
-> REPOSITORY_NAME=kafkadev
-> Audit info (Solr/HDFS options available)
Enable the KAFKA plugin by running the below commands
-> export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk.x86_64
-> ./enable-kafka-plugin.sh
Create the default repo for Kafka with proper configuration
-> In Custom repo config add component user (eg. kafka) as value for below properties
policy.download.auth.users OR policy.grantrevoke.auth.users
tag.download.auth.users
- You can verify the plugin is communicating to ranger admin in Audit->plugins tab.
Note:
-> If the plugin is not able to communicate, check the property “authorizer.class.name” in /usr/hdp/<hdp-version>/kafka/config/server.properties; its value should be org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer.
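That check can be scripted; a minimal sketch, which uses a temporary copy of server.properties so it is self-contained (on a real broker, point PROPS at /usr/hdp/<hdp-version>/kafka/config/server.properties instead):

```shell
# Stand-in file for the demo; on a broker, use the real server.properties path.
PROPS=/tmp/server.properties
printf 'authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer\n' > "$PROPS"

# Verify the Ranger Kafka authorizer is configured.
if grep -q '^authorizer.class.name=org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer' "$PROPS"; then
  echo "Ranger Kafka authorizer configured"
else
  echo "authorizer.class.name missing or wrong" >&2
fi
```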