Credit to Zhennan for this proposal (smile)

Problem

Although data parallelism is used in MXNet, its performance is not good enough for less computationally intensive operators in the inference stage, especially with a small batch size. This phenomenon exists widely in many popular models, such as GoogLeNet, wide deep and Inception v3. For example, in the wide deep model, 26 embedding ops are executed in sequence and each one consumes very little computing resource, so the model-level performance is sub-optimal due to the long execution path through these low-parallelism operators.

...

Figure 1. Example for parallel embedding

Take the wide deep model as an example: after the split, the data flow is divided into 26 branches, and each of them is handled by a single embedding op. In the ordinary process, these 26 embedding ops are executed one by one when running inference, with data parallelism used inside each kernel function. We now replace the 26 ops with one parallel op that handles inference with op-level parallelism.

...

Figure 2. Flowchart for subgraph replacement.

...

As shown in Fig. 2, we implement the whole workflow based on the subgraph API. SgParallelOpSelector, which inherits from SubgraphSelector, is used to find the parallel structure, and SgParallelOpProperty, which inherits from SubgraphProperty, connects its input/output entries.

The key block in Fig. 2 is Filter, which is used to check whether the found parallel structure meets the required metrics. It must guarantee that the operator is thread safe; otherwise, it may fail during simultaneous execution by multiple threads. From MKL-DNN 1.0 onward, all MKL-DNN operators will be thread safe and can be executed in parallel, but for now we need to maintain a whitelist of thread-safe operators. There are some other conditions used to fine-tune the performance, such as requiring the number of paralleled nodes to be >= a threshold, since parallelizing too few nodes will cause a performance drop. An environment variable may be added in a future release so that users can add/remove whitelist entries.

...

We implement parallel_op based on the subgraph API. The main body of the parallel op forward function is accelerated by OMP multithreading, as shown in Figure 3. This means the original op forward function must be thread safe. As mentioned in step 4, the op whitelist is used to check whether an op is thread safe, and the whitelist can be modified in future releases by setting environment variables. When running inference, several operators run in parallel; for example, in the wide deep model, 26 embedding forward functions are called simultaneously. This op-level parallelism improves performance significantly.


Figure 3. Main body of parallel OP forward.

To get the best performance, we need to support nested OMP and fine-tune its parameters. In the current version, we simplify this by disabling nested OMP. An environment variable may be added in a future release to support fine-tuning the performance.

...