simaaiprocessmla
This GStreamer plugin, `simaaiprocessmla`, executes Machine Learning Accelerator (MLA) inference within a GStreamer pipeline. It takes input buffers, performs inference using a specified MLA model, and outputs the results in new buffers. The plugin handles memory allocation and management, ensuring efficient integration with the SiMa.ai hardware.
Properties
| Property | Type | Default Value | Description |
| --- | --- | --- | --- |
| `config` | string | `/mnt/host/model.lm` | Path to the configuration JSON file containing model and processing parameters. |
| `transmit` | boolean | FALSE | If TRUE, transmits KPI messages to the GstBus. |
| `multi-pipeline` | boolean | FALSE | If TRUE, utilizes a dispatcher enabling multiple pipelines on the device. |
| `num-buffers` | uint | 5 | Number of buffers allocated for processing. |
| `dump-data` | boolean | FALSE | If TRUE, dumps intermediate data. |
| `silent` | boolean | FALSE | If TRUE, suppresses verbose output. |
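For example, these properties can be set directly in a `gst-launch-1.0` description. The sketch below is illustrative: the file paths, property values, and the `...input-processing-elements...` placeholder must be replaced to suit your pipeline, and the `-m` flag simply prints bus messages, which makes the KPI messages posted when `transmit=true` visible.

```
# Illustrative only: paths, property values, and placeholder elements are hypothetical.
# -m prints bus messages, so the KPI messages posted when transmit=true can be seen.
gst-launch-1.0 -m \
  filesrc location=/path/to/input.file ! ...input-processing-elements... ! \
  simaaiprocessmla name=mla config=/path/to/config.json transmit=true num-buffers=8 ! \
  fakesink
```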
Usage
The `simaaiprocessmla` plugin is integrated into a GStreamer pipeline as a base-transform element. It takes input data (in the format defined in the config file), runs inference through the specified MLA model, and delivers results in the specified output format (also defined in the config file).
Example `gst-launch-1.0` pipeline:

```
gst-launch-1.0 filesrc location=/path/to/input.file ! ...input-processing-elements... ! simaaiprocessmla name=mla config=/path/to/config.json ! ...output-processing-elements... ! fakesink
```
Replace `/path/to/input.file` with the actual path to your input file and `/path/to/config.json` with the path to your configuration file. The `...input-processing-elements...` and `...output-processing-elements...` placeholders should be replaced with the GStreamer elements appropriate to your input and output data formats, respectively. The config file must define the caps of the input and output pads.
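To confirm that GStreamer can see the element and to review its pad templates and properties, the standard `gst-inspect-1.0` tool can be used (this assumes the plugin is on GStreamer's plugin path, for example via `GST_PLUGIN_PATH`):

```
# Lists the element's pad templates, properties, and their default values.
gst-inspect-1.0 simaaiprocessmla
```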
Config File Example
```
{
  "version": 0.1,
  "simaai__params": {
    "next_cpu": 1,
    "outputs": [
      {
        "name": "hm_tensor",
        "size": 115200
      },
      {
        "name": "paf_tensor",
        "size": 172800
      }
    ],
    "batch_size": 1,
    "batch_sz_model": 1,
    "model_path": "/data/openpose.elf"
  },
  "caps": {
    "sink_pads": [
      {
        "media_type": "video/x-raw",
        "params": [
          {
            "name": "format",
            "type": "string",
            "values": "RGB, BGR",
            "json_field": null
          },
          {
            "name": "width",
            "type": "int",
            "values": "1 - 4096",
            "json_field": null
          },
          {
            "name": "height",
            "type": "int",
            "values": "1 - 4096",
            "json_field": null
          }
        ]
      }
    ],
    "src_pads": [
      {
        "media_type": "application/vnd.simaai.tensor",
        "params": [
          {
            "name": "format",
            "type": "string",
            "values": "MLA",
            "json_field": null
          }
        ]
      }
    ]
  }
}
```
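In this example, the `caps` section drives negotiation: the sink pad accepts `video/x-raw` buffers in RGB or BGR at widths and heights from 1 to 4096, and the source pad produces `application/vnd.simaai.tensor` buffers. One way to pin the negotiated input format is a caps filter immediately upstream of the element; the resolution below is an illustrative value within that range, not a requirement of the plugin, and the other placeholders follow the earlier example.

```
# Illustrative caps filter matching the sink_pads entry of the example config.
gst-launch-1.0 filesrc location=/path/to/input.file ! ...input-processing-elements... ! \
  video/x-raw,format=RGB,width=1280,height=720 ! \
  simaaiprocessmla name=mla config=/path/to/config.json ! \
  ...output-processing-elements... ! fakesink
```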
Installation
To install the `simaaiprocessmla` plugin, copy the plugin files from `/usr/local/simaai/plugin_zoo/gst-simaai-plugins-base/gst/` into your project's `plugins/` directory. The plugin must be placed inside a subdirectory with the same name as the plugin, e.g. `plugins/simaaiprocessmla/`.
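A minimal sketch of that copy step, assuming the plugin's files sit in a subdirectory of the same name under the plugin zoo path (adjust the source path to match your SDK layout):

```
# Assumed layout: the source subdirectory name is hypothetical; adjust to your installation.
mkdir -p plugins/simaaiprocessmla
cp -r /usr/local/simaai/plugin_zoo/gst-simaai-plugins-base/gst/simaaiprocessmla/. plugins/simaaiprocessmla/
```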
Integration
The plugin integrates into a GStreamer pipeline as a base-transform element. The pipeline must provide input caps compatible with those expected by the plugin, as specified in the config file. The plugin then processes the buffers, performs inference, and pushes the results downstream. Output caps are negotiated according to the parameters in the config file. The plugin manages its own internal buffer pool to improve performance.
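If the pipeline fails to link because the upstream caps do not match those declared in the config file, one way to see what is being negotiated is to raise the log level of GStreamer's caps debug category; the pipeline fragment is again a placeholder.

```
# GST_CAPS is a standard GStreamer debug category; level 5 logs caps negotiation details.
GST_DEBUG=GST_CAPS:5 gst-launch-1.0 filesrc location=/path/to/input.file ! \
  ...input-processing-elements... ! simaaiprocessmla config=/path/to/config.json ! fakesink
```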