...

  • Node Managers that register with the Resource Manager with (0 memory, 0 CPU) are eligible for fine-grained scaling; that is, Myriad expands and shrinks the capacity of these Node Managers with the resources offered by Mesos. Further, Myriad ensures that YARN containers are launched on these Node Managers only if Mesos offers enough resources on the slave nodes running them.
  • A zero profile is defined in the Myriad configuration file, myriad-config-default.yml, to help administrators launch Node Managers with (0 memory, 0 CPU) capacities using the Myriad Cluster API (see the sketch after this list).
  • Node Managers that register with the Resource Manager with more than (0 memory, 0 CPU) are not eligible for fine-grained scaling; that is, Myriad does not expand and shrink their capacity. These Node Managers are typically launched with a low, medium, or high profile.
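
For illustration, the snippet below flexes up one zero profile Node Manager through the Myriad Cluster API. It is only a minimal sketch: the host, port, endpoint path (/api/cluster/flexup), HTTP verb, and payload fields (instances, profile) are assumptions about a typical deployment, so verify them against the Cluster API documentation for your Myriad version.

```python
import requests

# Assumed Myriad REST endpoint; adjust host, port, and path to your deployment
# (these values are illustrative, not authoritative).
MYRIAD_FLEXUP_URL = "http://resourcemanager.example.com:8192/api/cluster/flexup"

# Ask Myriad to launch one Node Manager with the "zero" profile, i.e. with
# (0 memory, 0 CPU); its capacity is later expanded from Mesos offers.
payload = {"instances": 1, "profile": "zero"}

response = requests.put(MYRIAD_FLEXUP_URL, json=payload, timeout=30)
response.raise_for_status()
print("Flex-up request accepted with status", response.status_code)
```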

...

  • Administrators launch Node Managers with zero capacities via the Myriad Cluster API.
  • The Node Managers report zero capacities to the Resource Manager upon registration.
  • A user submits an application to YARN (for example, a MapReduce job).
  • The application is added to the Resource Manager's scheduling pipeline. However, the YARN scheduler (for example, the FairShareScheduler) will not allocate any application containers on the zero profile Node Managers.
  • If there are other Node Managers that were registered with the Resource Manager using non-zero capacities (low/medium/high profiles), some containers might be allocated to those Node Managers depending on their free capacity.
  • Myriad subsequently receives resource offers from Mesos for slave nodes running zero profile Node Managers.
  • The offered resources are projected to YARN's scheduler as "available capacity" of the zero profile Node Manager. For example, if Mesos offers (10G memory, 4 CPU) for a given node, then the capacity of the zero profile Node Manager running on that node increases to (10G memory, 4 CPU).
  • The YARN scheduler then allocates a few containers on the zero profile Node Managers.
  • For each allocated container, Myriad spins up a placeholder Mesos task that holds on to the Mesos resources as long as the corresponding YARN container is alive. (In practice, placeholder tasks are launched in a single batch, corresponding to the batch of containers that YARN allocates; see the sketch after this list.)
  • Node Managers become aware of container allocations via YARN's heartbeat mechanism. Myriad ensures that Node Managers are made aware of container allocations only after the corresponding placeholder Mesos tasks are launched.
  • When Node Managers report to the Resource Manager that some of the containers have finished, Myriad sends out finished status updates to Mesos for the corresponding placeholder tasks.
  • Upon receiving the finished status updates, Mesos takes back from Myriad the resources that were being held by the placeholder tasks.
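
The sketch below models this flow in illustrative Python (Myriad itself is written in Java); every class and method name is hypothetical and the resource arithmetic is deliberately simplified. It shows only the shape of the mechanism: a Mesos offer expands a zero profile node's advertised capacity, each allocated YARN container is backed by a placeholder task of the same size, and finishing a container releases the placeholder so the resources return to Mesos.

```python
from dataclasses import dataclass, field


@dataclass
class Resources:
    mem_mb: int = 0
    cpus: int = 0


@dataclass
class ZeroProfileNode:
    """Illustrative model of one zero profile Node Manager as tracked by Myriad."""
    hostname: str
    capacity: Resources = field(default_factory=Resources)  # capacity advertised to YARN
    placeholders: dict = field(default_factory=dict)        # container_id -> Resources

    def project_offer(self, offer: Resources) -> None:
        # A Mesos offer for this node is projected to YARN as available capacity,
        # e.g. an offer of (10G, 4 CPU) raises the node's capacity by (10G, 4 CPU).
        self.capacity.mem_mb += offer.mem_mb
        self.capacity.cpus += offer.cpus

    def launch_placeholder(self, container_id: str, size: Resources) -> None:
        # Each container YARN allocates on this node is backed by a placeholder
        # Mesos task of the same size that holds the resources while the container runs.
        self.placeholders[container_id] = size

    def finish_container(self, container_id: str) -> Resources:
        # When the container finishes, the placeholder is reported as finished and
        # the held resources go back to Mesos, shrinking the node's capacity again.
        size = self.placeholders.pop(container_id)
        self.capacity.mem_mb -= size.mem_mb
        self.capacity.cpus -= size.cpus
        return size


# Walk through the flow for a single node.
node = ZeroProfileNode("nm-host-1")                      # registered with (0 memory, 0 CPU)
node.project_offer(Resources(mem_mb=10240, cpus=4))      # Mesos offer arrives
node.launch_placeholder("container_01", Resources(2048, 1))
node.launch_placeholder("container_02", Resources(2048, 1))
freed = node.finish_container("container_01")            # container done -> TASK_FINISHED
# (In practice Myriad also returns any unused portion of the offer; omitted here.)
print(node.capacity, freed)
```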

...

  • The zero profile Node Managers advertise zero resources to the Resource Manager (the RM's "Nodes" UI should show this; the sketch after this list performs the same check over the RM REST API).
  • Submit a MapReduce job to the Resource Manager.
  • When Mesos offers resources to Myriad, the Mesos UI should show placeholder Mesos tasks (prefixed with "yarn_") for each YARN container allocated using those offers.
  • The Resource Manager's UI should show these containers allocated to the zero profile Node Manager nodes.
  • The placeholder Mesos tasks should finish as and when the YARN containers finish.
  • The job should finish successfully (although some Node Managers were originally launched with 0 capacities).
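
To script the UI checks above, the sketch below queries the Resource Manager's nodes REST endpoint (/ws/v1/cluster/nodes) and prints the capacity the RM currently sees for each Node Manager. The RM address and the response field names (availMemoryMB, availableVirtualCores, and so on) can vary across Hadoop versions, so treat them as assumptions and compare them with your cluster's actual output.

```python
import requests

# Resource Manager web address; 8088 is the common default, adjust for your cluster.
RM_NODES_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/nodes"


def print_node_capacities():
    """Print the capacity the Resource Manager currently sees for every Node Manager."""
    body = requests.get(RM_NODES_URL, timeout=30).json()
    # Field names below match recent Hadoop releases but may differ per version.
    for node in (body.get("nodes") or {}).get("node") or []:
        print(
            f"{node.get('id')}: "
            f"avail {node.get('availMemoryMB')} MB / {node.get('availableVirtualCores')} vcores, "
            f"used {node.get('usedMemoryMB')} MB / {node.get('usedVirtualCores')} vcores"
        )


if __name__ == "__main__":
    # Run once before submitting the MapReduce job (zero profile nodes should show zero
    # capacity) and again while it runs (their capacity should have expanded).
    print_node_capacities()
```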

Sample Mesos Tasks Screenshot