...

  • The number of processors is a static configuration in the yarn deployment model, and a job restart is required to change it. In standalone, however, adding or removing a processor from a processors group is common and expected behavior.
  • Existing generators discard the task-to-physical-host assignment when generating the JobModel. For standalone, it is essential to carry this assignment across successive JobModel generations in order to achieve an optimal task-to-processor assignment. For instance, assume stream processors P1 and P2 run on host H1 and processor P3 runs on host H3. If P1 dies, it is optimal to assign some of the tasks previously processed by P1 to P2, so that the local state of those tasks on H1 can be reused. This cannot be achieved if the previous task-to-physical-host assignment is not taken into account when generating the JobModel (see the sketch after this list).
  • In yarn, a processor is assigned a physical host to run on after JobModel generation. In standalone, the physical host on which a processor will run is known before the JobModel generation phase.
  • Ideally, any TaskNameGrouper implementation should be usable interchangeably between the yarn and standalone deployment models. In the existing setup, however, some TaskNameGrouper implementations are supported only in standalone and others only in yarn.
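
To make the locality consideration concrete, below is a minimal sketch (plain Java, not an existing Samza API) of host-aware reassignment: a task stays with a processor on its previous host when one is still alive, and falls back to round-robin otherwise. All names here are illustrative.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only; not part of the Samza API.
public class HostAwareAssignmentSketch {

  /**
   * @param previousTaskToHost task name -> physical host the task last ran on
   * @param processorToHost    live processor id -> physical host it runs on
   * @return task name -> chosen processor id
   */
  static Map<String, String> assign(Map<String, String> previousTaskToHost,
                                    Map<String, String> processorToHost) {
    // Index live processors by host for quick colocation lookup.
    Map<String, List<String>> hostToProcessors = new HashMap<>();
    processorToHost.forEach((processor, host) ->
        hostToProcessors.computeIfAbsent(host, h -> new ArrayList<>()).add(processor));

    List<String> allProcessors = new ArrayList<>(processorToHost.keySet());
    Map<String, String> assignment = new HashMap<>();
    int fallback = 0;
    for (Map.Entry<String, String> entry : previousTaskToHost.entrySet()) {
      List<String> colocated = hostToProcessors.get(entry.getValue());
      if (colocated != null && !colocated.isEmpty()) {
        // A live processor remains on the task's previous host: keep the task
        // there so its local state can be reused. (A real implementation would
        // also balance load across the colocated processors.)
        assignment.put(entry.getKey(), colocated.get(0));
      } else {
        // No live processor on the previous host: fall back to round-robin.
        assignment.put(entry.getKey(), allProcessors.get(fallback++ % allProcessors.size()));
      }
    }
    return assignment;
  }

  public static void main(String[] args) {
    // The example above: P1 and P2 ran on H1, P3 on H3; P1 has died.
    Map<String, String> previousTaskToHost =
        Map.of("task-0", "H1", "task-1", "H1", "task-2", "H3");
    Map<String, String> liveProcessors = Map.of("P2", "H1", "P3", "H3");
    // task-0 and task-1 stay on H1 (now P2); task-2 stays on H3 (P3).
    System.out.println(assign(previousTaskToHost, liveProcessors));
  }
}
```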

Overall high-level changes:

  • The common layer between the yarn and standalone models is the TaskNameGrouper abstraction (part of the JobModel generation phase), in which the host-aware assignment of tasks to processors will be encapsulated.
  • Deprecate the different flavors of the existing TaskNameGrouper implementations (each primarily grouping TaskModels into containers) and provide a single unified contract that is agnostic of the deployment model (standalone/yarn); see the sketch after this list.
  • Introduce a MetaDataStore abstraction that will be used to store and retrieve locality information in the appropriate storage layer for each deployment model (Kafka will be used as the locality storage layer for yarn, and zookeeper will be used as the storage layer in standalone).
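
One possible shape for the unified grouper contract and the MetaDataStore abstraction is sketched below. The method names and signatures are assumptions made for illustration, not the final API; ContainerModel, TaskModel, and TaskName are Samza's existing model classes.

```java
import java.util.Map;
import java.util.Set;
import org.apache.samza.container.TaskName;
import org.apache.samza.job.model.ContainerModel;
import org.apache.samza.job.model.TaskModel;

// Illustrative sketch; names and signatures are assumptions, not the final API.

// Unified, deployment-agnostic grouper: besides the TaskModels, it receives the
// previous placement information so it can produce a host-aware assignment in
// both yarn and standalone.
interface TaskNameGrouper {
  Set<ContainerModel> group(Set<TaskModel> taskModels, GrouperContext grouperContext);
}

// Hypothetical carrier of the placement history needed for host-aware grouping.
interface GrouperContext {
  Map<TaskName, String> getPreviousTaskToHostAssignment();  // task -> last host
  Map<String, String> getProcessorToHostAssignment();       // processor id -> current host
}

// Deployment-agnostic key-value contract for locality data; backed by Kafka in
// yarn and by zookeeper in standalone.
interface MetaDataStore {
  byte[] get(String key);
  void put(String key, byte[] value);
  void delete(String key);
  Map<String, byte[]> all();
  void flush();
  void close();
}
```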

If the leader of a stateful processors group generates an optimal task-to-processor assignment in the JobModel, each follower can simply pick up its assignment from the JobModel after the rebalance phase and start processing (similar to non-stateful jobs). The goal is to guarantee an optimal assignment that minimizes task movement between processors. Each processor will persist the local state of its tasks in a directory (local.store.dir) provided through configuration.
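
A minimal sketch of the follower side under these assumptions: after a rebalance, a follower looks up the ContainerModel keyed by its own processor id in the JobModel the leader generated and starts the tasks it contains. FollowerSketch and startTasks are hypothetical stand-ins, not Samza classes.

```java
import org.apache.samza.job.model.ContainerModel;
import org.apache.samza.job.model.JobModel;

// Hypothetical follower-side handler; illustrates the intended flow only.
class FollowerSketch {

  // Invoked once the leader has published a new JobModel after a rebalance.
  void onNewJobModel(String processorId, JobModel jobModel) {
    // Each follower only needs its own entry from the JobModel.
    ContainerModel myAssignment = jobModel.getContainers().get(processorId);
    if (myAssignment != null) {
      startTasks(myAssignment);
    }
  }

  // Stand-in for the processor's run loop: restore each task's local state
  // from local.store.dir, then start processing.
  void startTasks(ContainerModel containerModel) {
    containerModel.getTasks().forEach((taskName, taskModel) -> {
      // restore state, then process taskModel's input stream partitions...
    });
  }
}
```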

...