...

Code Block
def convert_hybrid_block(block, target_dtype="float16", target_dtype_ops=None,
                         fp32_ops=None, widest_dtype_ops=None, conditional_fp32_ops=None,
                         excluded_sym_names=None, input_names=['data']):
    """Given a HybridBlock or SymbolBlock representing an FP32 neural network and a
    target_dtype, return a block with mixed precision support.

    Parameters
    ----------
    block : HybridBlock or SymbolBlock object
        FP32 HybridBlock or SymbolBlock object
    target_dtype : str or numpy dtype
        Currently only supports float16. The target dtype indicates where to add cast
        layers, when possible, so that lower precision computation can be leveraged.
    target_dtype_ops : list of strs
        Override the list of operator names casted to target_dtype.
        If None, uses the framework's default list to be casted to target_dtype.
    fp32_ops : list of strs
        Override the list of operator names casted to FP32.
        If None, uses the framework's default list to be casted to FP32.
    widest_dtype_ops : list of strs
        Override the list of operator names which should run in the widest precision
        among their input arguments.
        If None, uses the framework's default list of widest_dtype_ops.
    conditional_fp32_ops : list of (string, string, list of string)
        Override the list of operators casted to FP32 conditionally.
        Each entry has the format
        (name of the operator, name of the parameter,
         list of values of the parameter that cause the operator to be casted to FP32)
    excluded_sym_names : list of strs
        A list of strings that represent the names of symbols that users want to exclude
        from being casted to lower precision.
    input_names : list of strs
        A list of strings representing the names of input variables
    """

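To make the intended usage concrete, below is a minimal, illustrative sketch of calling the proposed convert_hybrid_block API with some of the whitelist overrides. The model, the op name in fp32_ops and the symbol name in excluded_sym_names are example assumptions, not part of the proposal.

Code Block
# Illustrative sketch only: exercises the proposed convert_hybrid_block API above.
# The model and the names passed to the override lists are hypothetical examples.
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model

net = get_model("resnet18_v1", pretrained=True)
net.hybridize()
net(mx.nd.ones((1, 3, 224, 224)))  # run once so the hybridized graph is cached

# Convert using the default whitelists, but force softmax to stay in FP32 and
# exclude one symbol by name (both choices are purely illustrative).
net_fp16 = convert_hybrid_block(net,
                                target_dtype="float16",
                                fp32_ops=["softmax"],
                                excluded_sym_names=["resnetv10_dense0_fwd"],
                                input_names=["data"])

out = net_fp16(mx.nd.ones((1, 3, 224, 224)))
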
...

After the mixed precision pass is done and the amp_cast and amp_multicast layers are added, the symbolic representation needs to be modified to store the right dtype attributes for some of its inputs. This requires running the InferType pass after the NNVM ReducePrecision pass and then using the obtained information to set the data types of the inputs, weights and auxiliary states.

This will ensure that the dtype corresponding to each param or aux input is correct, by casting the arg_params and aux_params accordingly inside convert_model.

Thus the symbol returned by the convert_model API will have amp_cast and amp_multicast symbols, and the "__dtype__" attribute of the weight and aux symbols will be updated. Also, the returned arg_params and aux_params ndarrays will have the same dtype as the "__dtype__" attribute in the returned symbol.
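
As a rough illustration of this step (not the proposed implementation), the sketch below runs type inference on a converted symbol and casts the original FP32 arg_params and aux_params to the inferred dtypes. The helper name is hypothetical.

Code Block
import mxnet as mx

def cast_params_to_inferred_dtypes(result_sym, arg_params, aux_params):
    """Hypothetical helper: cast FP32 params to the dtypes inferred for result_sym."""
    # Run the InferType pass on the converted symbol, seeding the data input as FP32.
    arg_types, _, aux_types = result_sym.infer_type(data='float32')
    typed_args = dict(zip(result_sym.list_arguments(), arg_types))
    typed_aux = dict(zip(result_sym.list_auxiliary_states(), aux_types))
    # Cast each param/aux NDArray to the dtype the converted graph expects.
    new_arg_params = {k: v.astype(typed_args[k]) for k, v in arg_params.items()}
    new_aux_params = {k: v.astype(typed_aux[k]) for k, v in aux_params.items()}
    return new_arg_params, new_aux_params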

Gluon Changes

For Gluon code, we need to add an internal API to retrieve sym, arg_params and aux_params from a hybrid_block. Following this, convert_model can be used to convert the symbol json, model params and auxiliary params. After conversion, the symbolic model (json, arg_params, aux_params) can be imported back into Gluon with SymbolBlock.imports. The returned SymbolBlock is ready to use for inference.
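
For illustration, the sketch below follows that flow by exporting the hybridized block to disk first (since the internal retrieval API is not defined yet); the model name and file prefixes are just placeholders.

Code Block
import mxnet as mx
from mxnet.gluon import SymbolBlock
from mxnet.gluon.model_zoo.vision import get_model

net = get_model("resnet18_v1", pretrained=True)
net.hybridize()
net(mx.nd.ones((1, 3, 224, 224)))            # build the cached graph
net.export("resnet18_fp32", 0)               # writes symbol json + params

# Convert the exported FP32 model to mixed precision.
sym, arg_params, aux_params = mx.model.load_checkpoint("resnet18_fp32", 0)
sym, arg_params, aux_params = mx.contrib.amp.convert_model(
    sym, arg_params, aux_params, target_dtype="float16")

# Save the converted model and import it back into Gluon for inference.
mx.model.save_checkpoint("resnet18_fp16", 0, sym, arg_params, aux_params)
net_fp16 = SymbolBlock.imports("resnet18_fp16-symbol.json", ["data"],
                               "resnet18_fp16-0000.params")
out = net_fp16(mx.nd.ones((1, 3, 224, 224)))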

Frontend Bindings

We need to add AMP convert_model API support for other language bindings such as C++, Scala, etc.

FAQ

Will the arg_params and aux_params be casted to fp16?

It depends on the whitelists provided. The default whitelists have been selected to avoid casting the params for commonly used convnet networks. If the whitelists are such that type inference decides that a certain param needs to be float16, then it will be casted.

How is this different from casting inputs to FP16 and casting params to FP16 in Gluon?

Casting inputs to FP16 and params to FP16 in Gluon ensures that you are able to execute the model entirely in FP16 precision. Generally, though, some ops may need to run in FP16 while others need to run in FP32 for accuracy and performance considerations. This is where the AMP APIs are useful.
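
For comparison, a minimal sketch of the plain full-FP16 approach in Gluon (the model name is illustrative):

Code Block
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model

# Plain full-FP16 casting in Gluon: every op runs in FP16.
net = get_model("resnet18_v1", pretrained=True)
net.cast('float16')                  # cast all params to FP16
net.hybridize()
x = mx.nd.random.uniform(shape=(1, 3, 224, 224)).astype('float16')
out = net(x)

# AMP (convert_model / convert_block) instead inserts amp_cast / amp_multicast so
# that only whitelisted ops run in FP16 while the rest remain in FP32.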

...

What changes need to be made to an existing script to convert a model to mixed precision and run inference?

...

Adding a call to amp.convert_model or amp.convert_block should be sufficient to convert a model to mixed precision and run inference on it. Below are two user experience examples showing how to convert a model to a mixed precision model and run inference:

Module API

Code Block
import mxnet as mx

sym, arg_params, aux_params = mx.model.load_checkpoint("resnet18", 0)

# Additional line below to convert to a mixed precision model. Everything else remains the same
result_sym, arg_params, aux_params = mx.contrib.amp.convert_model(sym, arg_params, aux_params, target_dtype="float16")

mod = mx.mod.Module(result_sym, data_names=['data'], label_names=None, context=mx.cpu())
mod.bind(data_shapes=[('data', (1, 3, 224, 224))], for_training=False)
mod.set_params(arg_params, aux_params)
mod.forward(mx.io.DataBatch(data=[mx.nd.ones((1, 3, 224, 224))], label=None))
result = mod.get_outputs()[0].asnumpy()

Gluon API

Code Block
import mxnet as mx
from mxnet.gluon.model_zoo.vision import get_model  # Gluon model zoo helper

net = get_model(name="resnet50_v1", classes=1000, pretrained=True)
net.hybridize()
x = mx.nd.random.uniform(0, 1, shape=(1, 3, 224, 224))
out = net(x)

# Additional line below to convert to a mixed precision model. Everything else remains the same
net = mx.contrib.amp.convert_block(net, target_dtype="float16")

out = net(x)

References

  1. https://github.com/apache/incubator-mxnet/pull/14173
  2. https://github.com/apache/incubator-mxnet/pull/9552
  3. https://github.com/apache/incubator-mxnet/pull/14702