Build Host App With C++ API

In PCIe mode, the SiMa.ai MLSoC is paired with a host system over PCIe, and the host CPU can offload portions of the ML application to the MLSoC. The APIs are integrated into the host C++ application, which then communicates with the MLSoC over PCIe.

Note

In PCIe mode, you can currently use the Machine Learning Accelerator (MLA) to run inference tasks (quant -> NN Model -> dequant). Additionally, with our support, you can manually generate the MPK in the SDK to enable any valid GStreamer PCIe pipeline.

In a future release, this mode will expand to include access to all hardware blocks, such as video codecs, enabling pre- and post-processing operations directly on the MLSoC. This enhancement is part of our ongoing roadmap.

Follow the instructions below to build a sample application that uses the ResNet50 model to classify images; a sketch of the host-side code follows these setup steps.

  • Follow these instructions to set up the development system in PCIe mode.

  • Download the test image dataset. It will be used by the host-side application.

  • Download the optimized ResNet50 model and make it available to the Palette container environment. To learn more about optimizing a standard ResNet50 ONNX model, refer to this link.

    sima-user@sima-user-machine:~$ mkdir -p ~/workspace/resnet50
    sima-user@sima-user-machine:~$ mkdir -p ~/workspace/hostapp
    sima-user@sima-user-machine:~$ cp resnet50_mpk.tar.gz ~/workspace/resnet50/
    sima-user@sima-user-machine:~$ cp test_images.tar.gz ~/workspace/hostapp/
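Once the model archive and test images are staged, the host application links against the PCIe host C++ API, loads the compiled ResNet50 model onto the MLA, and submits preprocessed images for inference. The minimal sketch below illustrates that flow only; the class and function names (PcieDevice, load_model, run_inference) are hypothetical placeholders rather than the actual SDK API, so consult the headers shipped with Palette for the real interface.

    // host_app_sketch.cpp -- illustrative only; PcieDevice, load_model, and
    // run_inference are hypothetical stand-ins for the SiMa.ai host C++ API.
    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical wrapper around the PCIe connection to the MLSoC.
    class PcieDevice {
    public:
        explicit PcieDevice(int device_id) : device_id_(device_id) {
            // In a real application: open and initialize the PCIe endpoint here.
        }

        // Load the compiled model archive (e.g. resnet50_mpk.tar.gz) onto the MLA.
        bool load_model(const std::string& model_path) {
            std::cout << "Loading " << model_path << " on device " << device_id_ << "\n";
            return true;  // placeholder: assume the load succeeds
        }

        // Send one preprocessed input tensor over PCIe and receive the output scores.
        std::vector<float> run_inference(const std::vector<std::uint8_t>& input) {
            (void)input;
            return std::vector<float>(1000, 0.0f);  // placeholder: 1000 class scores
        }

    private:
        int device_id_;
    };

    int main() {
        PcieDevice device(/*device_id=*/0);
        if (!device.load_model("resnet50_mpk.tar.gz")) {
            std::cerr << "Failed to load model over PCIe\n";
            return 1;
        }

        // Preprocessed 224x224x3 RGB image from the test dataset; quantization
        // and dequantization run on the MLA as part of the compiled pipeline.
        std::vector<std::uint8_t> image(224 * 224 * 3, 0);
        std::vector<float> scores = device.run_inference(image);

        // Report the top-scoring class index.
        std::size_t best = 0;
        for (std::size_t i = 1; i < scores.size(); ++i) {
            if (scores[i] > scores[best]) best = i;
        }
        std::cout << "Predicted class index: " << best << "\n";
        return 0;
    }

In a real build, the host application would also parse the class-label file from the test dataset and iterate over all downloaded images; the structure above is only meant to show where model loading and inference submission fit in the host-side flow.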