To decommission DataNodes in bulk:
```
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{
  "RequestInfo": {
    "context": "Decommission DataNodes",
    "command": "DECOMMISSION",
    "parameters": {
      "slave_type": "DATANODE",
      "excluded_hosts": "c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
    },
    "operation_level": {
      "level": "HOST_COMPONENT",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HDFS",
      "component_name": "NAMENODE"
    }
  ]
}' http://localhost:8080/api/v1/clusters/c1/requests
```
"excluded_hosts" is a comma-delimited list of hostnames where DataNodes should be decommissioned.
Note that decommissioning DataNodes can take a long time if they hold many blocks: HDFS must re-replicate the blocks belonging to the decommissioning DataNodes onto other live DataNodes until every block again meets the replication factor you have specified via dfs.replication in hdfs-site.xml. If there are not enough live DataNodes left to satisfy the replication factor, the decommission process hangs until more DataNodes become available. For example, with 3 DataNodes in the cluster and dfs.replication set to 3, decommissioning 1 of the 3 DataNodes will hang until another DataNode is added to the cluster.
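To watch the re-replication from the HDFS side, assuming shell access to a host with the HDFS client installed, the standard Hadoop CLI can show each DataNode's decommission status and the remaining under-replicated block count:

```
# Per-DataNode report; decommissioning nodes show
# "Decommission Status : Decommission in progress"
hdfs dfsadmin -report

# Summary line with the number of under-replicated blocks
hdfs fsck / | grep -i 'under-replicated'
```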
To decommission NodeManagers in bulk:
```
curl -u admin:admin -i -H 'X-Requested-By: ambari' -X POST -d '{
  "RequestInfo": {
    "context": "Decommission NodeManagers",
    "command": "DECOMMISSION",
    "parameters": {
      "slave_type": "NODEMANAGER",
      "excluded_hosts": "c6401.ambari.apache.org,c6402.ambari.apache.org,c6403.ambari.apache.org"
    },
    "operation_level": {
      "level": "HOST_COMPONENT",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "YARN",
      "component_name": "RESOURCEMANAGER"
    }
  ]
}' http://localhost:8080/api/v1/clusters/c1/requests
```
"excluded_hosts" is a comma-delimited list of hostnames where NodeManagers should be decommissioned.