Advanced usage for CMake users

This part contains the full usage of MACE.

Deployment file

The deployment file supports many advanced options.

  • Example

    Here is an example deployment file with two models.

    models:
      mobilenet_v1:
          platform: tensorflow
          model_file_path: https://cnbj1.fds.api.xiaomi.com/mace/miai-models/mobilenet-v1/mobilenet-v1-1.0.pb
          model_sha256_checksum: 71b10f540ece33c49a7b51f5d4095fc9bd78ce46ebf0300487b2ee23d71294e6
          subgraphs:
            - input_tensors:
                - input
              input_shapes:
                - 1,224,224,3
              output_tensors:
                - MobilenetV1/Predictions/Reshape_1
              output_shapes:
                - 1,1001
              validation_inputs_data:
                - https://cnbj1.fds.api.xiaomi.com/mace/inputs/dog.npy
          runtime: cpu+gpu
          limit_opencl_kernel_time: 0
          obfuscate: 0
          winograd: 0
      squeezenet_v11:
          platform: caffe
          model_file_path: http://cnbj1-inner-fds.api.xiaomi.net/mace/mace-models/squeezenet/SqueezeNet_v1.1/model.prototxt
          weight_file_path: http://cnbj1-inner-fds.api.xiaomi.net/mace/mace-models/squeezenet/SqueezeNet_v1.1/weight.caffemodel
          model_sha256_checksum: 625c952063da1569e22d2f499dc454952244d42cd8feca61f05502566e70ae1c
          weight_sha256_checksum: 72b912ace512e8621f8ff168a7d72af55910d3c7c9445af8dfbff4c2ee960142
          subgraphs:
            - input_tensors:
                - data
              input_shapes:
                - 1,227,227,3
              output_tensors:
                - prob
              output_shapes:
                - 1,1,1,1000
              accuracy_validation_script:
                - path/to/your/script
          runtime: cpu+gpu
          limit_opencl_kernel_time: 0
          obfuscate: 0
          winograd: 0
    
  • Configurations

Options Usage
model_name model name should be unique if there is more than one model. LIMIT: if build_type is code, model_name will be used in C++ code, so model_name must be a valid C++ identifier.
platform The source framework, tensorflow or caffe.
model_file_path The path of your model file, which can be a local path or a remote URL.
model_sha256_checksum The SHA256 checksum of the model file.
weight_file_path [optional] The path of the Caffe model weights file.
weight_sha256_checksum [optional] The SHA256 checksum of the Caffe model weights file.
subgraphs subgraphs key. DO NOT EDIT.
input_tensors The input tensor name(s) (tensorflow) or top name(s) of the inputs' layers (caffe). If there is more than one tensor, use one line per tensor.
output_tensors The output tensor name(s) (tensorflow) or top name(s) of the outputs' layers (caffe). If there is more than one tensor, use one line per tensor.
input_shapes The shapes of the input tensors, default is NHWC order.
output_shapes The shapes of the output tensors, default is NHWC order.
input_ranges The numerical range of the input tensors' data, default [-1, 1]. It is only used for testing.
validation_inputs_data [optional] Specify Numpy validation inputs. When not provided, random values in [-1, 1] will be used.
accuracy_validation_script [optional] Specify an accuracy validation script as a plugin to test accuracy; see the "Validate accuracy of MACE model" section below.
validation_threshold [optional] Specify the similarity threshold for validation. A dict whose keys are in 'CPU', 'GPU' and/or 'HEXAGON' and whose values are <= 1.0.
backend The ONNX backend framework for validation, one of [tensorflow, caffe2, pytorch]; default is tensorflow.
runtime The running device, one of [cpu, gpu, dsp, hta, apu].
data_type [optional] The data type used for the specified runtime. [fp16_fp32, fp32_fp32] for GPU and APU; [fp16_fp32, bf16_fp32, fp32_fp32, fp16_fp16] for CPU; default is fp16_fp32.
input_data_types [optional] The input data type for specific ops (e.g. gather), which can be [int32, float32]; default is float32.
input_data_formats [optional] The format of the input tensors, one of [NONE, NHWC, NCHW]. If an input has no data format, use NONE. If only a single format is specified, all inputs use that format; default is NHWC.
output_data_formats [optional] The format of the output tensors, one of [NONE, NHWC, NCHW]. If an output has no data format, use NONE. If only a single format is specified, all outputs use that format; default is NHWC.
limit_opencl_kernel_time [optional] Whether to split OpenCL kernels so that each runs within 1 ms to keep the UI responsive; default is 0.
opencl_queue_window_size [optional] Limit the maximum number of commands in the OpenCL command queue to keep the UI responsive; default is 0.
obfuscate [optional] Whether to obfuscate the model operator names; default is 0.
winograd [optional] Which Winograd type to use, one of [0, 2, 4]. 0 disables Winograd; 2 and 4 enable it, and 4 may be faster than 2 but may take more memory.

Note

Some useful commands:

# Get device's soc info.
adb shell getprop | grep platform

# command for generating sha256_sum
sha256sum /path/to/your/file

Advanced usage

There are three common advanced use cases:
  • running your model on an embedded device (ARM Linux)
  • converting a model to C++ code
  • tuning GPU kernels for a specific SoC

Run your model on an embedded device (ARM Linux)

Running your model on ARM Linux is nearly the same as on Android, except that you need to specify a device config file.

python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --validate --device_yml=/path/to/devices.yml

There are two steps to complete before running:

  1. configure passwordless login

    MACE uses SSH to connect to the embedded device; copy your public key to the device with the command below.

    cat ~/.ssh/id_rsa.pub | ssh -q {user}@{ip} "cat >> ~/.ssh/authorized_keys"
    
  2. write your own device yaml configuration file.

    • Example

      Here is a device YAML config example.

      # one yaml config file can contain multiple devices
      devices:
        # the name of the device
        nanopi:
          # arm64 or armhf
          target_abis: [arm64, armhf]
          # device SoC; you can get it from the device manual
          target_socs: RK3399
          # device model full name
          models: FriendlyElec Nanopi M4
          # device IP address
          address: 10.0.0.0
          # login username
          username: user
        raspberry:
          target_abis: [armv7l]
          target_socs: BCM2837
          models: Raspberry Pi 3 Model B Plus Rev 1.3
          address: 10.0.0.1
          username: user
      
    • Configuration

      The detailed explanation is listed in the table below.

      Options Usage
      target_abis The ABIs the device supports; you can get them via the dpkg --print-architecture and dpkg --print-foreign-architectures commands. If more than one ABI is supported, separate them with commas.
      target_socs The device SoC; you can get it from the device manual. We haven't found a way to get it from the shell.
      models The device model's full name; you can get it via the lshw command (a third-party package, install it via your package manager) and read its product value.
      address The device IP address; required since we use SSH to connect to the device.
      username The login username; required.

Model Protection

The model can be protected by obfuscation.

python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml

It will overwrite mobilenet_v1.pb and mobilenet_v1.data. If you want to compile the model into a library, use the options --gencode_model --gencode_param to generate model code, i.e.,

python tools/python/encrypt.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param

It will generate model code in mace/codegen/models, and also a helper function CreateMaceEngineFromCode in mace/codegen/engine/mace_engine_factory.h, with which you can create an engine with the models built in.

After that you can rebuild the engine.

RUNTIME=GPU RUNMODE=code QUANTIZE=OFF bash tools/cmake/cmake-build-armeabi-v7a.sh

RUNMODE=code means that the model library is compiled and linked together with the MACE engine.

When you test the model in code format, you should specify it in the script as follows.

python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --gencode_model --gencode_param

Of course, you can also generate the model code only and keep using the parameter file.

When you need to integrate the libraries into your application, link libmace_static.a and libmodel.a into your target. These are under build/cmake-build/armeabi-v7a/install/lib/, and the header files you need are under build/cmake-build/armeabi-v7a/install/include.

Refer to mace/tools/mace_run.cc for full usage. The following lists the key steps.

// Include the headers
#include "mace/public/mace.h"
// If the model_graph_format is code
#include "mace/public/${model_name}.h"
#include "mace/public/mace_engine_factory.h"

// ... Same with the code in basic usage

// 4. Create MaceEngine instance
std::shared_ptr<mace::MaceEngine> engine;
MaceStatus create_engine_status;
// Create Engine from compiled code
create_engine_status =
    CreateMaceEngineFromCode(model_name.c_str(),
                             model_data_ptr, // nullptr if model_data_format is code
                             model_data_size, // 0 if model_data_format is code
                             input_names,
                             output_names,
                             device_type,
                             &engine);
if (create_engine_status != MaceStatus::MACE_SUCCESS) {
  // Report error or fallback
}

// ... Same with the code in basic usage

Transform models after conversion

If model_graph_format or model_data_format is specified as file, the model or weight file will be generated as a .pb or .data file after model conversion. After that, more transformations can be applied to the generated files, such as compression or encryption. To achieve this, model loading is split into two stages: 1) load the file from the file system into a memory buffer; 2) create the MACE engine from the model buffer. Between the two stages, transformations can be inserted to decompress or decrypt the model buffer. The transformations are user defined. The following lists the key steps when both model_graph_format and model_data_format are set to file.

// Load model graph from file system
std::unique_ptr<mace::port::ReadOnlyMemoryRegion> model_graph_data =
    make_unique<mace::port::ReadOnlyBufferMemoryRegion>();
if (FLAGS_model_file != "") {
  auto fs = GetFileSystem();
  status = fs->NewReadOnlyMemoryRegionFromFile(FLAGS_model_file.c_str(),
      &model_graph_data);
  if (status != MaceStatus::MACE_SUCCESS) {
    // Report error or fallback
  }
}
// Load model data from file system
std::unique_ptr<mace::port::ReadOnlyMemoryRegion> model_weights_data =
    make_unique<mace::port::ReadOnlyBufferMemoryRegion>();
if (FLAGS_model_data_file != "") {
  auto fs = GetFileSystem();
  status = fs->NewReadOnlyMemoryRegionFromFile(FLAGS_model_data_file.c_str(),
      &model_weights_data);
  if (status != MaceStatus::MACE_SUCCESS) {
    // Report error or fallback
  }
}
if (model_graph_data == nullptr || model_weights_data == nullptr) {
  // Report error or fallback
}

std::vector<unsigned char> transformed_model_graph_data;
std::vector<unsigned char> transformed_model_weights_data;
// Add transformations here.
...
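// For illustration only (not part of MACE): an identity "transformation"
// that simply copies the raw bytes into the vectors; replace it with your
// own decompression or decryption.
auto *graph_ptr =
    static_cast<const unsigned char *>(model_graph_data->data());
transformed_model_graph_data.assign(
    graph_ptr, graph_ptr + model_graph_data->length());
auto *weights_ptr =
    static_cast<const unsigned char *>(model_weights_data->data());
transformed_model_weights_data.assign(
    weights_ptr, weights_ptr + model_weights_data->length());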
// Release original model data after transformations
model_graph_data.reset();
model_weights_data.reset();

// Create the MACE engine from the model buffer
std::shared_ptr<mace::MaceEngine> engine;
MaceStatus create_engine_status;
create_engine_status =
    CreateMaceEngineFromProto(transformed_model_graph_data.data(),
                              transformed_model_graph_data.size(),
                              transformed_model_weights_data.data(),
                              transformed_model_weights_data.size(),
                              input_names,
                              output_names,
                              config,
                              &engine);
if (create_engine_status != MaceStatus::MACE_SUCCESS) {
  // Report error or fallback
}

Tuning for specific SoC's GPU

If you want to use the GPU of a specific device, you can tune the performance for that particular device, which may bring a 1%~10% performance improvement.

You can specify --tune option when you want to run and tune the performance at the same time.

python tools/python/run_model.py --config ../mace-models/mobilenet-v1/mobilenet-v1.yml --tune

It will generate an OpenCL tuned parameter binary file in the build/mobilenet_v1/opencl directory.

└── mobilenet_v1_tuned_opencl_parameter.MIX2S.sdm845.bin

The file name encodes the model and SoC of your test device. You can use it in production to reduce latency on the GPU.

To deploy it, rename the files generated above to avoid collisions and push them to your own device's directory. Usage follows the previous procedure; the key differences are listed below.

// Include the headers
#include "mace/public/mace.h"
// 0. Declare the device type (must match the runtime in the configuration file)
DeviceType device_type = DeviceType::GPU;

// 1. configuration
MaceStatus status;
MaceEngineConfig config;
std::shared_ptr<OpenclContext> opencl_context;

const std::string storage_path = "path/to/storage";
opencl_context = GPUContextBuilder()
    .SetStoragePath(storage_path)
    .SetOpenCLBinaryPaths({"path/to/opencl_binary_file"})
    .SetOpenCLParameterPath("path/to/opencl_parameter_file")
    .Finalize();
config.SetGPUContext(opencl_context);
config.SetGPUHints(
    static_cast<GPUPerfHint>(GPUPerfHint::PERF_NORMAL),
    static_cast<GPUPriorityHint>(GPUPriorityHint::PRIORITY_LOW));

// ... Same with the code in basic usage.

Multi Model Support (optional)

If multiple models are configured in the config file, testing will generate more than one tuned parameter file, and you need to merge them together.

python tools/python/gen_opencl.py

After that, it will generate one set of files in the build/opencl directory.

├── compiled_opencl_kernel.bin
└── tuned_opencl_parameter.bin

You can also generate code into the engine by specifying --gencode, after which you should rebuild the engine.

Validate accuracy of MACE model

MACE supports a Python validation script as a plugin for testing accuracy. The plugin script can be used for the following two purposes:

  1. Test the accuracy (e.g. Top-1) of a MACE model (especially a quantized model) converted from another framework (e.g. TensorFlow).
  2. Show some real outputs if you want to inspect them.

The script defines interfaces such as preprocess and postprocess to deal with inputs/outputs and calculate the accuracy; you can refer to the sample code for details. The sample code shows how to calculate the Top-1 accuracy on the ImageNet validation dataset.

Reduce Library Size

Remove the registration of the ops and delegators unused by your models in mace/ops/registry/ops_registry.cc and mace/ops/registry/op_delegators_registry.cc; this reduces the library size significantly, since the final binary links only the registered ops' and delegators' code.

#include "mace/ops/registry/registry.h"

namespace mace {
namespace ops {
// Just leave the ops used in your models

...

}  // namespace ops


void RegisterAllOps(OpRegistry *registry) {
// Just leave the ops used in your models

  ...

  ops::RegisterMyCustomOp(registry);

  ...

}

}  // namespace mace
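
For example, a trimmed ops_registry.cc for a model that uses only Conv2D and Softmax might look like the sketch below. The ops::Register* function names are assumptions following the pattern above; check the op implementations under mace/ops/ for the exact names.

#include "mace/ops/registry/registry.h"

namespace mace {
namespace ops {
// Declarations for just the ops this model uses (illustrative names).
extern void RegisterConv2D(OpRegistry *op_registry);
extern void RegisterSoftmax(OpRegistry *op_registry);
}  // namespace ops

void RegisterAllOps(OpRegistry *registry) {
  // Register only the ops used by the model.
  ops::RegisterConv2D(registry);
  ops::RegisterSoftmax(registry);
}

}  // namespace mace

The delegator registry is trimmed in the same way: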
#include "mace/ops/registry/registry.h"

namespace mace {
namespace ops {
// Just leave the delegators used in your ops

...

}  // namespace ops


void RegisterAllOpDelegators(OpDelegatorRegistry *registry) {
// Just leave the delegators used in your ops

  ...

  ops::RegisterMyCustomDelegator(registry);

  ...

}

}  // namespace mace

Reduce Model Size

Model file size can be a bottleneck for the deployment of neural networks on mobile devices, so MACE provides several ways to reduce the model size with no or little performance or accuracy degradation.

1. Save model weights in half-precision floating point format

The data type of a regular model is float (32bit). To reduce the size of the model weights, half (16bit) can be used, cutting the weights size in half with negligible accuracy degradation. Therefore, the default storage type for a regular model in MACE is half. However, if the model is very sensitive to accuracy, the storage type can be changed to float.

In the deployment file, data_type is fp16_fp32 by default and can be changed to fp32_fp32; for CPU it can also be changed to bf16_fp32 and fp16_fp16 (fp16_fp16 can only be used on armv8.2 or higher).

For CPU, fp16_fp32 means that the weights are saved in half and actual inference is in float, while bf16_fp32 means that the weights are saved in bfloat16 and actual inference is in float, and fp16_fp16 means that the weights are saved in half and actual inference is in half.

For GPU, fp16_fp32 means that the GPU ops take half as inputs and outputs, while the kernel computation is performed in float.

2. Save model weights in quantized fixed point format

Weights of convolutional (excluding depthwise) and fully connected layers take up a major part of the model size. These weights can be quantized to 8bit to reduce the size to a quarter, while the accuracy usually decreases only by 1%-3%. For example, the top-1 accuracy of MobileNetV1 with quantized weights is 68.2% on the ImageNet validation set. To save these weights in 8bit while keeping the actual inference in float, set quantize_large_weights to 1 in the deployment file. This can be used for both CPU and GPU.

Reduce Memory Occupation

MACE allocates intermediate memory for inference, which may be large, so MACE provides several ways to reduce the intermediate memory size.

1. Release intermediate memory between two inferences

If the interval between inferences is long, the intermediate memory of MACE can be released temporarily to reduce memory occupation. Before the next inference, MACE will rebuild the intermediate memory, which takes some time, so essentially this is a strategy of trading time for space. The API for temporarily releasing the intermediate memory of MACE is:

MaceEngine::ReleaseIntermediateBuffer();
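
A minimal usage sketch (assuming engine, inputs, and outputs were set up as in the basic usage example):

// Run a burst of inferences.
engine->Run(inputs, &outputs);

// Release the intermediate memory while the app is idle.
engine->ReleaseIntermediateBuffer();

// The next Run rebuilds the intermediate memory automatically,
// trading the rebuild time for the memory saved in between.
engine->Run(inputs, &outputs);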

2. Share intermediate memory among multiple MACE engines

If an app has multiple MACE engines in one process, and these engines will not be called at the same time (e.g., not run concurrently in two threads), then these engines can share their intermediate memory. By doing so, multiple engines use only one copy of the intermediate memory, greatly saving memory. When an engine A is initialized, if an engine B wants to share the memory of engine A, it only needs to use engine A as the tutor of engine B. You can set engine A as engine B's tutor via CreateMaceEngineFromProto or CreateMaceEngineFromCode; the code is as follows:

std::shared_ptr<mace::MaceEngine> A;
MaceStatus create_engine_status;

// Create Engine from model file
create_engine_status =
    CreateMaceEngineFromProto(model_graph_proto,
                              model_graph_proto_size,
                              model_weights_data,
                              model_weights_data_size,
                              input_names,
                              output_names,
                              device_type,
                              &A);
MACE_CHECK(create_engine_status == MaceStatus::MACE_SUCCESS);

std::shared_ptr<mace::MaceEngine> B;
create_engine_status =
    CreateMaceEngineFromProto(model_graph_proto,
                              model_graph_proto_size,
                              model_weights_data,
                              model_weights_data_size,
                              input_names,
                              output_names,
                              device_type,
                              &B,
                              nullptr,
                              A.get());  // pass engine A as B's tutor to share its intermediate memory
MACE_CHECK(create_engine_status == MaceStatus::MACE_SUCCESS);

Warning

Before passing engine A as a tutor of engine B, A must be initialized first. Both CreateMaceEngineFromProto and CreateMaceEngineFromCode initialize the MACE engine after it is created.

You can use any engine as a tutor of other engines. Two engines with the same runtime can share more intermediate memory.