Mapping the application to the MLSoC

When porting an application to the MLSoC, it is useful to:

  1. Have a reference program to debug against a set of expected outputs (resnet50_reference_classification_app.py serves that purpose).

    • This ensures that as you build the application on the MLSoC, you can check the intermediate outputs of each application stage and confirm they are functionally accurate to what you expect.

  2. Check each hardware-accelerated step or library against the expected outputs as you port functionality to the MLSoC, and confirm it matches the reference program. This makes debugging much easier (resnet50_reference_classification_app.py supports this by dumping its intermediate outputs; a minimal comparison is sketched after this list).

  3. Build the GStreamer pipeline step by step, checking against known outputs to verify that each stage still produces the correct functional output.
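
As each stage comes online on the MLSoC, a quick functional check is to compare its dumped output against the corresponding dump from the reference program. Below is a minimal sketch, assuming both the reference app and the pipeline under construction write a stage's output as raw .bin files; the file names are hypothetical, and a tolerance-based comparison is more appropriate for stages that are not expected to be bit-exact:

# Byte-for-byte comparison of a stage's output dumps (hypothetical file names)
cmp reference_preproc_out.bin mlsoc_preproc_out.bin && echo "preprocess stage matches reference"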

Tip

  • It is always good to start building the pipeline from the simplest configuration; for example, start with batch_size = 1 so that all outputs are easy to debug as you build up the pipeline.

Building the GStreamer Pipeline

What is GStreamer?

“GStreamer is a pipeline-based multimedia framework that links together a wide variety of media processing systems to complete complex workflows. For instance, GStreamer can be used to build a system that reads files in one format, processes them, and exports them in another. The formats and processes can be changed in a plug and play fashion.”

–Wikipedia

We have adapted GStreamer for machine learning applications because it provides a high-performance framework that fits these workloads well. Some compelling reasons include:

  • Modular Design: GStreamer’s plugin architecture allows easy integration of different processing elements.

  • Flexibility: Supports a wide range of input and output formats, making it versatile for various ML tasks.

  • Efficiency: Optimized for performance, making it suitable for real-time applications on resource-constrained devices.

  • Scalability: Can handle complex workflows, enabling the construction of advanced ML pipelines with ease.

SiMa.ai GStreamer Plugins

At SiMa.ai, we have developed a set of GStreamer plugins that specifically target our hardware-accelerated IPs or provide useful functionality to enhance your development experience. These plugins are optimized to leverage the full potential of the MLSoC, ensuring high performance and efficiency across a variety of machine learning applications.

You can find detailed information about these plugins and their usage in our documentation at: GStreamer Plugins. We will use these plugins, along with available open-source GStreamer plugins, to construct our pipelines on the MLSoC.

We can execute GStreamer applications in two main ways:

  1. Using gst-launch-1.0 for quick prototyping and command-line debugging.

  2. Using GstApp for building more complex and programmable applications.

In this section, we will begin by prototyping with gst-launch-1.0 to run our plugins on the SoC.
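
Before building the full pipeline, it is worth confirming that the GStreamer tools can see the SiMa.ai plugins. The commands below are a minimal sketch; the plugin path reuses the placeholder from the pipeline shown further down, and on the 1.4.0 build the plugins are already installed, so the path flag may be unnecessary:

# Confirm a SiMa.ai plugin is visible and list its properties
gst-inspect-1.0 --gst-plugin-path=</path/to_gstreamer/plugins> simaaisrc
# Sanity-check gst-launch-1.0 itself with a trivial open-source pipeline
gst-launch-1.0 videotestsrc num-buffers=10 ! fakesink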

A look ahead

In short, our five-stage ResNet50 application pipeline, as defined in earlier sections (ML application overview), will look something like this:

gst-launch-1.0 -v --gst-plugin-path=</path/to_gstreamer/plugins> \
simaaisrc mem-target=1 node-name="my_image_src" num-buffers=1 ... ! \
simaaiprocesscvu source-node-name="my_image_src" buffers-list="my_image_src" ... ! \
simaaiprocessmla ... ! \
simaaiprocesscvu source-node-name="mla-resnet" ... ! \
argmax_print ... ! \
fakesink

Where:

  • gst-plugin-path: the path where the compiled plugins are stored on the board. In the 1.4.0 build, all plugins are already present on the board.

  • simaaisrc: a multi-file source, similar to GStreamer's standard multifilesrc. Unlike the standard element, simaaisrc has built-in support for SiMa.ai's memory libraries, ensuring correct memory allocation for the EV74 CVU or the MLA (mem-target: A65 = 0, EV74 = 1, MLA = 4).

  • simaaiprocesscvu: plugin that allows the pipeline to execute EV74 CVU graphs. See the simaaiprocesscvu plugin documentation.

  • simaaiprocessmla: plugin that allows the pipeline to perform inference on models compiled using the ModelSDK, as described for the MLA IP. See the simaaiprocessmla plugin documentation.

  • custom_plugin_sink: a custom GStreamer plugin that we will write to perform the post-processing and printing of results (the argmax_print element in the pipeline above).

Note

Why are we not reading JPGs directly in the GStreamer version?

In order to read a JPG file in GStreamer, we would have to use the filesrc element followed by the jpegdec element, and then write the decoded output into memory allocated with the SiMa.ai memory library. The only way to do that would be to write a custom plugin that decodes the image and uses the SiMa.ai memory allocation library to allocate the output correctly before handing it off to the simaaiprocesscvu plugin. To skip that complexity in this example, we have pre-decoded the images to .bin files using our reference application, so the pipeline can read them directly from file and perform the preprocessing.

Normally, pipeline inputs come from video sources, and the simaaidecoder plugin takes care of performing the correct allocation.
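
For reference, the pre-decoding itself can be done off-target. A rough sketch using stock GStreamer on a host machine (the reference application does this in Python; the file names, resolution, and RGB format below are placeholders and must match what the preprocessing CVU graph expects):

# Decode a JPG and dump the raw frame to a .bin file on a host machine
gst-launch-1.0 filesrc location=input.jpg ! jpegdec ! videoconvert ! videoscale ! video/x-raw,format=RGB,width=224,height=224 ! filesink location=input_224x224_rgb.bin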

Conclusion and next steps

In this section, we:

  • Reviewed why GStreamer is used for developing end-to-end pipelines.

  • Learned where to find the SiMa.ai GStreamer plugins and how to use them with gst-launch.

  • Previewed what the GStreamer gst-launch command line will look like for our example ResNet50 classification application.

On the next page, we will work in Palette and on our MLSoC board to begin developing our GStreamer pipeline stage by stage. We will show all the steps necessary to build the example application, along with some tips on how to debug and verify each step of the process.