PR: https://github.com/apache/incubator-mxnet/pull/15210
How Custom Operators Work

MXNet allows users to create custom operators when the existing NDArray operators cannot meet their needs. However, profiling custom operators is currently not well supported.

...

When users create a new custom operator, they need to define both the forward() and backward() functions in Python. When the operator is executed, one of these two functions is called. Those functions consist of two kinds of code: 1) Python and NumPy code (called "pure Python code" below), which runs on the CustomOperator's own worker thread; and 2) code that calls NDArray operators (called "sub-operators" below), which is pushed to the engine and runs asynchronously with respect to the CustomOperator's worker threads.

![Screen Shot 2019-06-13 at 5 22 39 PM](https://user-images.githubusercontent.com/16669457/59475365-ec2c1680-8dff-11e9-8500-295e63bc8ed7.png)


![image](https://user-images.githubusercontent.com/16669457/59465512-ede5e200-8ddf-11e9-9416-cc53409263be.png)

Also, in CustomOperator's Push(), a special callback named "CustomOperator" (since renamed to "Dummy_Wait", the name used below) is pushed to the engine. The idea is that "Dummy_Wait" depends on the custom operator, so it executes last, ensuring that the custom operator event spans the execution of both the pure Python code and the sub-operators.
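The pattern can be sketched in plain Python, using a thread pool as a stand-in for the engine (an analogy only; none of these names come from MXNet):

```python
from concurrent.futures import ThreadPoolExecutor, wait

# toy "engine": tasks submitted here run asynchronously, like sub-operators
engine = ThreadPoolExecutor(max_workers=2)

def run_custom_op():
    # pure Python code runs on the custom operator's worker thread
    partial = 1 + 1
    # sub-operators are pushed to the engine and run asynchronously
    futures = [engine.submit(lambda: partial * 2)]
    # a sentinel like "Dummy_Wait" depends on everything above, so it runs
    # last and makes the custom operator event span the whole computation
    sentinel = engine.submit(lambda: wait(futures))
    sentinel.result()
    return [f.result() for f in futures]

print(run_custom_op())  # [4]
```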

Issues and Motivation

With the above said, there are some issues with the current profiler:

* The start timestamp of a custom operator event is correct, but the end timestamp waits for "Dummy_Wait", which itself waits for a batch of variable deletions. As a result, custom operator events appear much longer than the actual computation. Also, because of parallelism, users cannot tell whether, at a given point within a custom operator event, we are executing pure Python code, executing sub-operators, or simply waiting for dependencies to clear.
* Because custom operator events do not end until "Dummy_Wait" starts, the former do not wrap around the latter. This violates the TraceCompass format and messes up the chrome://tracing timeline (shown below). Each event has a begin and an end timestamp; however, Chrome treats one such event as two separate ones, reporting that one has no "BEGIN" and the other has no "END".
* All custom operator events are named "Custom". If users have more than one custom operator, they cannot distinguish them.
* Also, we cannot tell which NDArray operators are sub-operators of custom operators (i.e., NDArray operators called within a custom operator's forward or backward definition).
![image](https://user-images.githubusercontent.com/16669457/59465549-02c27580-8de0-11e9-9e4a-682ac96e5355.png)
![image](https://user-images.githubusercontent.com/16669457/59465556-0655fc80-8de0-11e9-86ae-ea56b4ece0fa.png)

To avoid confusion, these issues need to be fixed.

New Design

Main changes

Regarding custom operators, users care about the performance of both the pure Python code and the sub-operators. So, in our enhanced custom operator profiling, we should dissect custom operator calls into fine-grained events for both categories. Specifically, we will create a new domain called "Custom Operator". Within it, we will have: 1) events that represent the execution of the pure Python code; 2) events that represent the execution of the sub-operators; and 3) a different namespace prefix for each custom operator's events.
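The resulting event naming could look like the sketch below (the exact event-name format is illustrative, not copied from the PR):

```python
def event_name(custom_op, kind, sub_op=None):
    """Illustrative naming scheme for events in the "Custom Operator" domain."""
    if kind == "pure_python":
        # the pure Python portion of forward()/backward()
        return custom_op + "::pure_python"
    # an NDArray sub-operator called inside the custom operator
    return custom_op + "::" + sub_op

print(event_name("MyOp", "pure_python"))             # MyOp::pure_python
print(event_name("MyOp", "sub_op", "_plus_scalar"))  # MyOp::_plus_scalar
```

The shared "MyOp::" prefix is what lets users tell different custom operators apart in the timeline.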

...

For 3), we will also use CustomOpProfiler. We will maintain a mapping from each CustomOperator worker thread's id to the registered name of that custom operator. Then, in PushAsync() in threaded_engine.cc, we will call GenerateDisplayName() in CustomOpProfiler to check whether we are on a CustomOperator worker thread. If so, the operator being pushed to the engine is a sub-operator of a custom operator, and we create a display name by prepending the custom operator's name to the operator's name, e.g. "MyOp::_plus_scalar". Furthermore, in class ProfileOperator in profiler.h, we check the operator's display name: if it contains "::", we profile the event within the domain "Custom Operator". I have created a pull request at: https://github.com/apache/incubator-mxnet/pull/15210.
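A pure-Python sketch of that bookkeeping follows. CustomOpProfiler and GenerateDisplayName are the names used in the PR (the real implementation is C++); this rendering and its other method names are hypothetical:

```python
import threading

class CustomOpProfiler:
    """Sketch: map CustomOperator worker thread ids to custom op names."""

    def __init__(self):
        self._lock = threading.Lock()
        self._thread_to_op = {}  # worker thread id -> custom op's registered name

    def start_for_thread(self, op_name):
        # called when a worker thread begins running forward()/backward()
        with self._lock:
            self._thread_to_op[threading.get_ident()] = op_name

    def stop_for_thread(self):
        with self._lock:
            self._thread_to_op.pop(threading.get_ident(), None)

    def generate_display_name(self, op_name):
        # the early return keeps overhead negligible for regular operator calls
        with self._lock:
            custom = self._thread_to_op.get(threading.get_ident())
        if custom is None:
            return op_name
        # sub-operator: prefix with the custom operator's registered name
        return custom + "::" + op_name

prof = CustomOpProfiler()
prof.start_for_thread("MyOp")
print(prof.generate_display_name("_plus_scalar"))  # MyOp::_plus_scalar
prof.stop_for_thread()
print(prof.generate_display_name("_plus_scalar"))  # _plus_scalar
```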

More discussions

With this enhanced custom operator profiling, we also want to stop profiling "Dummy_Wait" entirely. This is done by adding a check in ProfileOperator in profiler.h.
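That check might be sketched as follows (a hypothetical Python rendering of the C++ logic in profiler.h):

```python
def should_profile(display_name):
    # drop the synthetic "Dummy_Wait" callback from the profile entirely
    return display_name != "Dummy_Wait"

def domain_for(display_name):
    # "::" marks sub-operators of custom operators (see GenerateDisplayName)
    return "Custom Operator" if "::" in display_name else "operator"

print(should_profile("Dummy_Wait"))      # False
print(domain_for("MyOp::_plus_scalar"))  # Custom Operator
```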

Notice that because we are adding a call to GenerateDisplayName() in PushAsync(), we risk adding overhead to every operator call (we need to get the thread id, and the function takes a lock). In practice, however, because the function is short and has early-return checks, the overhead is small enough to forgive. On my machine (2017 MacBook Pro 13" i7), the overhead for regular operator calls averages less than 1 microsecond (it appears as 0). For sub-operator calls, the overhead is always under 10 microseconds and averages under 5 microseconds. Compare this with the ~150 microseconds taken by a scalar addition on a 100x100 matrix. Note that the relatively larger overhead only applies to sub-operators of custom operators.

Visualization

Below is the new visualization after my change:
![image](https://user-images.githubusercontent.com/16669457/59465592-1b329000-8de0-11e9-8e86-8a3cb70cd7b9.png)
This is to be compared with the old visualization:
![image](https://user-images.githubusercontent.com/16669457/59465613-22f23480-8de0-11e9-9ffe-59e007187827.png)
The two screenshots are produced with the same code:
```
import mxnet as mx
from mxnet import nd
from mxnet import profiler
from mxnet import context
import threading

...