
...

  • Node Managers that register with the Resource Manager with (0 memory, 0 CPU) are eligible for fine-grained scaling, that is, Myriad expands and shrinks the capacity of the Node Managers with the resources offered by Mesos. Further, Myriad ensures that YARN containers are launched on the Node Managers only if Mesos offers enough resources on the slave nodes running those Node Managers.
  • Zero, small, medium, and large profiles are defined in the Myriad configuration file, myriad-config-default.yml. The zero profile allows administrators to launch Node Managers with (0 memory, 0 CPU) capacities via the REST /api/cluster/flexup command.
  • Node Managers that register with the Resource Manager with more than (0 memory, 0 CPU) are not eligible for fine-grained scaling; that is, Myriad does not expand and shrink the capacity of these Node Managers. Such Node Managers are typically launched with a low, medium, or high profile.
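The eligibility rule above can be sketched in a few lines of Python. The profile names follow the configuration file mentioned earlier, but the capacity values here are illustrative assumptions, not the actual defaults from myriad-config-default.yml:

```python
# Illustrative profile capacities as (memory MB, CPU) -- assumed values,
# not the actual defaults shipped in myriad-config-default.yml.
PROFILES = {
    "zero":   {"mem": 0,    "cpu": 0},
    "small":  {"mem": 1024, "cpu": 1},
    "medium": {"mem": 2048, "cpu": 2},
    "large":  {"mem": 4096, "cpu": 4},
}

def eligible_for_fine_grained_scaling(profile: str) -> bool:
    """A Node Manager is eligible only if it registers with (0 memory, 0 CPU)."""
    caps = PROFILES[profile]
    return caps["mem"] == 0 and caps["cpu"] == 0
```

Only the zero profile satisfies the (0 memory, 0 CPU) registration condition, so only those Node Managers have their capacity expanded and shrunk by Myriad.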

Fine-grained Scaling Behavior

...

When a user submits an application to YARN (for example, a MapReduce job), the following occurs:

  1. The application is added to the Resource Manager's scheduling pipeline.
    • If a Node Manager has a zero profile, the YARN scheduler (for example, FairShareScheduler) does not allocate any application containers to it.
    • If a Node Manager has a non-zero capacity profile (low, medium, or high profiles), containers might be allocated for those Node Managers depending on their free capacity.
  2. Myriad receives resource offers from Mesos for slave nodes running zero profile Node Managers.
  3. The offered resources are projected to the YARN Scheduler as available capacity of the zero profile Node Manager. For example, if Mesos offers (10G memory, 4 CPU) for a given node, then the capacity of the zero profile Node Manager running on that node increases to (10G memory, 4 CPU).
  4. The YARN Scheduler allocates a few containers for the zero profile Node Manager.
    • For each allocated container, Myriad spins up a placeholder Mesos task that holds on to Mesos resources as long as the corresponding YARN container is alive. The placeholder tasks are launched in a single shot, corresponding to the containers that YARN allocates.
    • Node Managers become aware of container allocations via YARN's heart-beat mechanism.
    • Myriad ensures that Node Managers are made aware of container allocations only after the corresponding placeholder Mesos tasks are launched.
  5. When containers finish, the Myriad Executor sends finished status updates to Mesos for the corresponding placeholder tasks.
  6. Mesos takes back the resources from Myriad after receiving a finished status update.
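The capacity projection and placeholder-task lifecycle described in steps 2 through 6 can be modeled with a short sketch. The class and method names below are hypothetical and only mirror the behavior described above, not Myriad's actual internals:

```python
class ZeroProfileNodeManager:
    """Sketch of fine-grained scaling for one zero profile Node Manager."""

    def __init__(self):
        self.capacity = (0, 0)       # (memory MB, CPU) advertised to YARN
        self.placeholder_tasks = {}  # container id -> held (mem, cpu)

    def on_mesos_offer(self, mem, cpu):
        # Step 3: offered resources are projected to the YARN Scheduler
        # as additional available capacity.
        self.capacity = (self.capacity[0] + mem, self.capacity[1] + cpu)

    def on_container_allocated(self, container_id, mem, cpu):
        # Step 4: a placeholder Mesos task holds the resources for as long
        # as the corresponding YARN container is alive.
        self.placeholder_tasks[container_id] = (mem, cpu)

    def on_container_finished(self, container_id):
        # Steps 5-6: a finished status update lets Mesos take back the
        # resources, shrinking the Node Manager's capacity again.
        mem, cpu = self.placeholder_tasks.pop(container_id)
        self.capacity = (self.capacity[0] - mem, self.capacity[1] - cpu)

nm = ZeroProfileNodeManager()
nm.on_mesos_offer(10240, 4)               # Mesos offers (10G memory, 4 CPU)
nm.on_container_allocated("c1", 2048, 1)  # YARN allocates one container
nm.on_container_finished("c1")            # container finishes; Mesos reclaims
```

After this sequence the placeholder task is gone and the reclaimed (2048 MB, 1 CPU) is back with Mesos, leaving (8192 MB, 3 CPU) of the original offer projected as capacity.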

...

  1. Spin up Resource Manager with Myriad Scheduler plugged into it.
  2. Flex up a few Node Managers with the zero profile using the Myriad Cluster API:

    Code Block
    {
      "instances": 3,
      "profile": "zero"
    }
    Info

    The zero profile Node Managers advertise zero resources to the Resource Manager (the Resource Manager's Nodes UI shows this).

  3. Submit a MapReduce job to the Resource Manager.
    • When Mesos offers resources to Myriad, the Mesos UI shows placeholder Mesos tasks (prefixed with "yarn_") for each YARN container allocated using those offers.
    • The Resource Manager's UI shows these containers allocated to the zero profile Node Manager nodes.
    • The placeholder Mesos tasks typically finish when the corresponding YARN containers finish.
    • The job should finish successfully, even though some Node Managers were originally launched with zero (0) capacity.
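The flexup call in step 2 can be issued over HTTP. Below is a minimal sketch using Python's standard library; the host, port, and HTTP verb are assumptions that depend on your Myriad deployment, so check them against your cluster before use:

```python
import json
import urllib.request

def build_flexup_request(instances, profile, base_url="http://localhost:8192"):
    """Build a request for Myriad's /api/cluster/flexup endpoint.

    base_url and the HTTP verb are assumed placeholders; point them at
    your Myriad REST service and verify against your version's API.
    """
    body = json.dumps({"instances": instances, "profile": profile}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/api/cluster/flexup",
        data=body,
        headers={"Content-Type": "application/json"},
        method="PUT",  # assumed verb; some deployments may expect POST
    )

req = build_flexup_request(3, "zero")
# urllib.request.urlopen(req)  # send only against a live cluster
```

The request body matches the Code Block shown in step 2; sending it should flex up three zero profile Node Managers.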

...