...

Most of the implementation is located in header files, with the code paths for different frameworks and platforms separated by preprocessor defines. An example compiled outside of the main library build cannot know which defines the library was compiled with, so shipping the binary together with the headers (including the headers of dependencies) is not enough to run an application successfully. In most cases we observed, the application simply crashes after start: without the right defines it is impossible to tell which implementation was compiled into the library.


Exposing the operators requires a special Python script that runs through the compiled MXNet binary and the library sources to generate C++ API code. This causes significant maintenance effort to keep the classes and their usages synchronized across the layers, does not allow semantic and explicit versioning, and is likely to break whenever implementation details change. Although an attempt was made to hide the implementation details of the library and thus provide a proper public API by duplicating classes, the package is still not standalone and decoupled from the MXNet source, as it requires additional includes that are "private" to the MXNet implementation, such as dmlc and nnvm.

...

  • API / ABI compatibility

  • Portable binary library with standalone headers

  • Build configuration

Work areas


Remove intermediate C API layer for C++

...

 

// MultiplyOperation.h

#include <memory>

class Tensor;

class MultiplyOperation {
public:
  MultiplyOperation();
  ~MultiplyOperation(); // defined out of line, where the private class is complete

  void setA(const Tensor &a);
  void setB(const Tensor &b);

  Tensor getResult() const;

private:
  class MultiplyOperationPrivate;

  std::unique_ptr<MultiplyOperationPrivate> p;
};

// MultiplyOperationPrivate.h (private header, not shipped with the package;
// shared by the backend translation units below)

class MultiplyOperation::MultiplyOperationPrivate {
public:
  Tensor a, b;

  Tensor getResultImpl() const;
};

// MultiplyOperation.cpp

#include <MultiplyOperation.h>
#include <MultiplyOperationPrivate.h>

MultiplyOperation::MultiplyOperation() : p(new MultiplyOperationPrivate) {}
MultiplyOperation::~MultiplyOperation() = default;

void MultiplyOperation::setA(const Tensor &a) {
  p->a = a;
}

void MultiplyOperation::setB(const Tensor &b) {
  p->b = b;
}

Tensor MultiplyOperation::getResult() const {
  return p->getResultImpl();
}

// MultiplyOperation_CUDA.cpp

#include <MultiplyOperationPrivate.h>

Tensor MultiplyOperation::MultiplyOperationPrivate::getResultImpl() const {
  return CUDA_CALL(multiply(a, b));
}

// MultiplyOperation_MKL.cpp

#include <MultiplyOperationPrivate.h>

Tensor MultiplyOperation::MultiplyOperationPrivate::getResultImpl() const {
  return mkl_multiply(a, b);
}

 

Semantic versioned API with no additional dependencies

...