PePPi
PePPi (Performant Python Pipelines) is a high-performance Python module included in the Palette SDK distribution. It enables developers to quickly build and deploy real-time machine learning pipelines in Python on the SiMa.ai MLSoC.
Installation
PePPi is bundled with the Palette SDK. Follow Palette SDK installation instructions to set up your environment.
Typical Workflow
First, compile the model using the ModelSDK to generate the project_mpk.tar.gz file. For convenience, the provided sample pipelines include precompiled, ready-to-use models.
Implement the following logic in the Python script:
1. Load model parameters from a YAML config.
2. Use sima.VideoReader() to ingest a video stream.
3. Initialize sima.MLSoCSession() with the desired model.
4. Run sima.MLSoCSession.run_model() to perform inference.
5. Optionally, render results using sima.SimaBoxRender().
6. Output the processed video using sima.VideoWriter().
Once the script is prepared, use the Palette MPK CLI tool to build and deploy the app to the SiMa.ai MLSoC.
Project Structure
project.yaml
The project.yaml file contains the configuration for the PePPi application.
source:
  name: "rtspsrc"
  value: "<RTSP_URL>"
udp_host: "<HOST_IP>"
port: "<PORT_NUM>"
pipeline: <pipeline_name>

Models:
  - name: "pd"
    targz: "<targz_path>"
    label_file: labels.txt
    normalize: true
    channel_mean: [0.408, 0.447, 0.470]
    channel_stddev: [0.289, 0.274, 0.278]
    padding_type: "BOTTOM_LEFT"
    aspect_ratio: true
    topk: 10
    detection_threshold: 0.7
    decode_type: "centernet"
The parameters in the project.yaml file are defined as follows:
| Parameter | Description | Examples |
|---|---|---|
| source | The input source for the pipeline. Supported sources: RTSP, PCIe, File. | |
| udp_host | The UDP host IP address for communication. | Example: "10.1.9.226" |
| port | The port number for UDP communication. | Example: "7000" |
| targz | The tarball file containing the model. | Example: people.tar.gz |
| label_file | The file containing class labels for the model. | Example: "labels.txt" |
| input_img_type | Input image format. | Options: |
| channel_mean | Mean values for input image normalization, per channel (RGB). | Example: [0.408, 0.447, 0.470] |
| channel_stddev | Standard deviation values for input image normalization, per channel (RGB). | Example: [0.289, 0.274, 0.278] |
| original_width | Frame width (must be specified when using filesrc). | Example: 640 |
| original_height | Frame height (must be specified when using filesrc). | Example: 720 |
| padding_type | The type of padding applied to the image. | Example: "BOTTOM_LEFT" |
| scaling_type | The algorithm used to resize the input image to the specified output dimensions. | Options: NO_SCALING, NEAREST_NEIGHBOUR, BILINEAR, BICUBIC, INTERAREA |
| aspect_ratio | Whether to maintain the aspect ratio while resizing. | Options: true, false |
| topk | The number of top detections to keep. | Example: 10 |
| detection_threshold | The confidence threshold for detections. | Example: 0.7 |
| decode_type | The decoding type used for detections. | Example: "centernet" |
| num_classes | The number of classes the model is configured to detect. Refer to the FP32 pipeline to get num_classes. | Example: 80 |
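When normalize is true in project.yaml, the channel_mean and channel_stddev values are applied per channel during preprocessing. The following is a minimal NumPy sketch of what that arithmetic typically looks like; it assumes pixels are first scaled to [0, 1], which is an illustration, not the kernel's actual implementation:

import numpy as np

def normalize_frame(frame,
                    channel_mean=(0.408, 0.447, 0.470),
                    channel_stddev=(0.289, 0.274, 0.278)):
    # Illustrative only: scale uint8 pixels to [0, 1], then apply
    # per-channel (R, G, B) mean/stddev normalization.
    scaled = frame.astype(np.float32) / 255.0
    mean = np.array(channel_mean, dtype=np.float32)
    std = np.array(channel_stddev, dtype=np.float32)
    return (scaled - mean) / std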
main.py
The main.py script contains the core processing logic of the PePPi application.
import sima
import yaml

# Load pipeline and model configuration from the YAML file.
with open("project.yaml", "r") as file:
    external_params = yaml.safe_load(file)

# Ingest the video stream and set up the output writer.
reader = sima.VideoReader(external_params["source"])
writer = sima.VideoWriter(external_params["source"], external_params["udp_host"],
                          external_params["port"], reader.frame_width, reader.frame_height)

# Create an inference session for the first model in the config.
model_params = external_params["Models"][0]
session = sima.MLSoCSession(model_params["targz"], pipeline=external_params["pipeline"],
                            frame_width=reader.frame_width, frame_height=reader.frame_height)
session.configure(model_params)

# Read frames, run inference, render detections, and stream the output.
while reader.isOpened():
    ret, frame = reader.read()
    if not ret:
        continue
    boxes = session.run_model(frame)
    annotated_frame = sima.SimaBoxRender.render(
        frame, boxes, reader.frame_width, reader.frame_height, model_params["label_file"])
    writer.write(annotated_frame)
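Note that Models in project.yaml is a list, while the script above uses only its first entry. A hypothetical sketch of extending it to several models follows; creating one MLSoCSession per model entry is an assumption here, not confirmed API behavior, and the Multimodel-Demo sample shows the supported pattern:

# Hypothetical sketch: one session per entry in the "Models" list.
sessions = []
for params in external_params["Models"]:
    s = sima.MLSoCSession(params["targz"], pipeline=external_params["pipeline"],
                          frame_width=reader.frame_width, frame_height=reader.frame_height)
    s.configure(params)
    sessions.append(s)

while reader.isOpened():
    ret, frame = reader.read()
    if not ret:
        continue
    # Run every model on the same frame; render/write as in the example above.
    for s in sessions:
        boxes = s.run_model(frame)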
Build and Deploy
The mpk create command accepts the --peppi argument to build the PePPi pipeline package.
Once the output file project.mpk is created, it can be deployed to the SiMa.ai MLSoC using the mpk deploy command.
Note
The mpk create command requires the --peppi argument to build the PePPi pipeline package.
user@palette-container-id: cd /usr/local/simaai/app_zoo/Peppi
# Connect to the devkit
user@palette-container-id: mpk device connect -t <devkit_hostname>
# Modify project.yaml [RTSP URL, UDP host IP, port]
# Copy the precompiled model tarball into the YoloV8 sample folder
user@palette-container-id: cp /path/to/downloaded/targz_file/yolov8_mpk.tar.gz YoloV8/
# Change directory to YoloV8
user@palette-container-id: cd YoloV8/
# Create the pipeline package
user@palette-container-id: mpk create --peppi -s . -d . --package-name=peppi_yoloV8 --main-file=main.py --yaml-file=project.yaml
# Deploy the pipeline package
user@palette-container-id: mpk deploy -f project.mpk
# Options for the mpk create command
--peppi : Build a PePPi pipeline package
-s : Source directory
-d : Destination directory for project.mpk
--main-file : Main script filename
--yaml-file : Project config YAML file
--board-type : Board type (modalix, davinci)
Note
Board Type Options:
- Default (Davinci): if --board-type is not specified, it defaults to Davinci.
- Modalix: use --board-type modalix to create a pipeline for the Modalix platform.
Modalix example:
mpk create --peppi -s YoloV8/ -d . --package-name=peppi_yoloV8_modalix --main-file=main.py --yaml-file=YoloV8/project.yaml --board-type modalix
Examples
Samples are located under the Palette SDK /usr/local/simaai/app_zoo/Peppi folder with the following structure:
| Sample | Description |
|---|---|
| Detr | Demonstrates a basic DETR (DEtection TRansformer) pipeline for object detection. |
| Detr_custom_preproc | DETR pipeline with custom preprocessing to show how inputs can be adapted. |
| EffDet | EfficientDet pipeline for lightweight and accurate object detection. |
| Multimodel-Demo | Example of running multiple models together in a combined pipeline. |
| PeopleDetector | Preconfigured pipeline optimized for detecting people in video streams. |
| YoloV7 | Standard YOLOv7 pipeline for object detection. |
| YoloV7_pcie | YOLOv7 pipeline configured for PCIe data input/output. |
| YoloV8 | YOLOv8 pipeline demonstrating the latest YOLO model for detection tasks. |
You can download the pre-compiled model tar.gz file for the pipelines from the following locations:

| Model | Modalix | MLSoC |
|---|---|---|
| YOLOv8 | | |
| UR CenterNet | | |
| EfficientDet D0 | | |
| DETR (No Sigmoid) | | |
| YOLOv7 Tiny | | |
| YOLOv7 E6 | — | |
| YOLOv5m Segmentation | — | |
| YOLOv8 Pose | — | |
| Human Pose Estimation | — | modified_human-pose-estimation-480x480_stage2only_mpk.tar.gz |
| STAnomalyDet | — | |
Tutorials
Follow the tutorials below to build and run specific use cases.
- ethernet tutorial — works with network video sources (RTSP) and streams results over the network for an object detection use case.
- pcie tutorial — integrates with a PCIe host to read data and produce results for an object detection use case.
- filesrc tutorial — processes static files as the input source, using segmentation and a teacher/student model for anomaly detection.
Logging
For troubleshooting, the /var/log/simaai_peppi.log file serves as the centralized location to capture all system logs related to PePPi.
Models with BoxDecoder Support
The following models are supported with the BoxDecoder kernel, an optimized component for processing bounding boxes. BoxDecoder transforms raw model outputs into interpretable bounding box coordinates, scales them to the original image space, and applies post-processing steps such as thresholding and Non-Maximum Suppression (NMS) to deliver final detection results.
| Model Name | num_classes | model_height | model_width | num_heads | output-shape | submodels |
|---|---|---|---|---|---|---|
| centernet | 1-255 | 1-2048 | 1-2048 | 1 - N/A | model_size/4 | |
| detr | 1-255 | ANY | ANY | 1 | 1/1-1024 | |
| effdet | 1-255 | 1-2048 | 1-2048 | 3-5 | model_size/4,8,16,32,64 | |
| Yolo - uncut ONNX | 1-255 | 1-2048 | 1-2048 | 3-4 | model_size/8,16,32,64 | YOLOv7, YOLOv8, YOLOv9, YOLOv10 |
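For intuition, here is a simplified pure-Python sketch of the thresholding and NMS steps described above. It is illustrative only: BoxDecoder is an optimized kernel, and its exact algorithm is not shown here. The function names (iou, decode) and the nms_iou parameter are hypothetical.

import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def decode(boxes, scores, detection_threshold=0.7, topk=10, nms_iou=0.5):
    # Keep confident boxes, greedily suppress overlaps, return top-k.
    order = np.argsort(scores)[::-1]  # highest confidence first
    keep = []
    for i in order:
        if scores[i] < detection_threshold:
            break  # all remaining scores are lower
        if all(iou(boxes[i], boxes[j]) < nms_iou for j in keep):
            keep.append(i)
        if len(keep) == topk:
            break
    return [(boxes[i], float(scores[i])) for i in keep]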