.. _peppi-index:

PePPi
=====

**PePPi** (Performant Python Pipelines) is a high-performance Python module included in the **Palette SDK distribution**. It enables developers to quickly build and deploy real-time machine learning pipelines in Python on the SiMa.ai MLSoC.

Installation
------------

PePPi is bundled with the **Palette SDK**. Follow the Palette SDK :ref:`installation instructions ` to set up your environment.

Typical Workflow
----------------

First, :ref:`Compile ` the model using the ModelSDK to generate the ``project_mpk.tar.gz`` file. For convenience, the provided sample pipelines include precompiled, ready-to-use models.

Implement the following logic in the Python script:

1. Load model parameters from a YAML config.
2. Use :py:meth:`sima.VideoReader` to ingest a video stream.
3. Initialize :py:meth:`sima.MLSoCSession` with the desired model.
4. Run :py:meth:`sima.MLSoCSession.run_model` to perform inference.
5. Optionally, render results using :py:meth:`sima.SimaBoxRender`.
6. Output the processed video using :py:meth:`sima.VideoWriter`.

Once the script is prepared, use the Palette MPK CLI tool to build and deploy the app to the SiMa.ai MLSoC.

Project Structure
-----------------

project.yaml
^^^^^^^^^^^^

The ``project.yaml`` file contains the configuration for the PePPi application.

.. code-block:: yaml
   :class: code-narrow
   :linenos:

   source:
     name: "rtspsrc"
     value: ""
   udp_host: ""
   port: ""
   pipeline: ""
   Models:
     - name: "pd"
       targz: ""
       label_file: labels.txt
       normalize: true
       channel_mean: [0.408, 0.447, 0.470]
       channel_stddev: [0.289, 0.274, 0.278]
       padding_type: "BOTTOM_LEFT"
       aspect_ratio: true
       topk: 10
       detection_threshold: 0.7
       decode_type: "centernet"

The parameters in the ``project.yaml`` file are defined as follows:

.. list-table::
   :widths: 15 45 40
   :header-rows: 1
   :class: narrow-table

   * - Parameter
     - Description
     - Examples
   * - source
     - The input source for the pipeline. Supported sources: ``rtspsrc``, ``pcie``, ``filesrc``.
     - **RTSP:** ``name: "rtspsrc"``, ``value: "rtsp://10.1.9.226:8554/stream"``

       **PCIe:** ``name: "pcie"``, ``value: "PCIE"``

       **File:** ``name: "filesrc"``, ``value: "./Input_samples"``
   * - udp_host
     - The UDP host IP address for communication.
     - Example: "10.1.9.226"
   * - port
     - The port number for UDP communication.
     - Example: "7000"
   * - targz
     - The tarball file containing the compiled model.
     - Example: people.tar.gz
   * - label_file
     - The file containing class labels for the model.
     - Example: "labels.txt"
   * - input_img_type
     - Input image format.
     - Options: ``'IYUV', 'NV12', 'RGB', 'BGR'``
   * - normalize
     - A boolean (true or false) that controls the hardware pre-processing path.
     - Options: ``true, false``
   * - channel_mean
     - Mean values for input image normalization for each channel (RGB).
     - Example: [0.408, 0.447, 0.470]
   * - channel_stddev
     - Standard deviation values for input image normalization for each channel (RGB).
     - Example: [0.289, 0.274, 0.278]
   * - original_width
     - Frame width (must be specified in case of ``filesrc``).
     - Example: 640
   * - original_height
     - Frame height (must be specified in case of ``filesrc``).
     - Example: 720
   * - padding_type
     - The type of padding applied to the image.
     - Options: ``'TOP_LEFT', 'TOP_RIGHT', 'BOTTOM_LEFT', 'BOTTOM_RIGHT', 'CENTER'``
   * - scaling_type
     - The algorithm used to resize the input image to the specified output dimensions.
     - Options: ``'NO_SCALING', 'NEAREST_NEIGHBOUR', 'BILINEAR', 'BICUBIC', 'INTERAREA'``
   * - aspect_ratio
     - Maintain aspect ratio while resizing.
     - Options: ``true, false``
   * - topk
     - The number of top detections to keep.
     - Example: 10
   * - detection_threshold
     - The confidence threshold for detections.
     - Range: ``0.0 to 1.0``
   * - decode_type
     - The decoding type used for detections.
     - Options: ``centernet, yolo, effdet, detr``
   * - num_classes
     - The number of classes that the model is configured to detect. Refer to the FP32 pipeline to get ``num_classes``.
     - Example: 80
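Before wiring a script to this file, it can help to validate the values against the constraints in the table. The following is a minimal sketch assuming the key layout shown above; ``load_project_config`` is an illustrative helper, not part of the PePPi API, and it assumes ``original_width``/``original_height`` live in the model entry.

.. code-block:: python

   # Minimal sketch of loading and sanity-checking project.yaml. The key
   # names come from the parameter table above; the helper itself is
   # hypothetical and not part of the PePPi API.
   import yaml

   def load_project_config(path="project.yaml"):
       with open(path, "r") as f:
           params = yaml.safe_load(f)

       model = params["Models"][0]

       # filesrc inputs must declare the original frame geometry explicitly.
       if params["source"]["name"] == "filesrc":
           for key in ("original_width", "original_height"):
               if key not in model:
                   raise ValueError(f"{key} is required when the source is filesrc")

       # detection_threshold must fall within [0.0, 1.0].
       threshold = model.get("detection_threshold", 0.7)
       if not 0.0 <= threshold <= 1.0:
           raise ValueError("detection_threshold must be between 0.0 and 1.0")

       return params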
main.py
^^^^^^^

The ``main.py`` file contains the core processing logic of the PePPi application.

.. code-block:: python
   :class: code-narrow
   :linenos:

   import sima
   import yaml

   # Load the pipeline configuration from project.yaml.
   with open("project.yaml", "r") as file:
       external_params = yaml.safe_load(file)

   # Set up the video input and output based on the configured source.
   reader = sima.VideoReader(external_params["source"])
   writer = sima.VideoWriter(external_params["source"],
                             external_params["udp_host"],
                             external_params["port"],
                             reader.frame_width,
                             reader.frame_height)

   # Create an inference session for the first configured model.
   model_params = external_params["Models"][0]
   session = sima.MLSoCSession(model_params["targz"],
                               pipeline=external_params["pipeline"],
                               frame_width=reader.frame_width,
                               frame_height=reader.frame_height)
   session.configure(model_params)

   # Read frames, run inference, render the detections, and stream out.
   while reader.isOpened():
       ret, frame = reader.read()
       if not ret:
           continue
       boxes = session.run_model(frame)
       annotated_frame = sima.SimaBoxRender.render(frame,
                                                   boxes,
                                                   reader.frame_width,
                                                   reader.frame_height,
                                                   model_params["label_file"])
       writer.write(annotated_frame)

Pre-processing and Normalization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The ``normalize`` parameter is a key setting that determines how the input data is prepared for the model:

* **normalize: true (Recommended):** This enables the full, hardware-accelerated pre-processing pipeline on the CVU. This single step handles resizing from the input frame resolution to the model input resolution, padding, and normalization. Crucially, it assumes the input is raw image data (e.g., pixel values from 0-255) and will **automatically scale it to the [0, 1] float range** before applying the ``channel_mean`` and ``channel_stddev`` values. For this reason, standard normalization values that correspond to a ``[0, 1]`` range (e.g., ``mean=[0.485, 0.456, 0.406]``) should be used.

* **normalize: false (Advanced):** This bypasses the generic pre-processing pipeline. Only the necessary quantization and tessellation are performed, so you must handle any required **resizing or normalization manually** in the application logic, as sketched below. This is the recommended approach if the model expects input data in its original integer range (e.g., 0-255) or requires a custom normalization scheme.
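The following is a minimal sketch of such manual pre-processing, run on each frame before ``session.run_model()``. It assumes OpenCV and NumPy are available in the application environment; ``MODEL_W``/``MODEL_H`` and the mean/stddev values are placeholders for your model's actual requirements, and padding is omitted for brevity.

.. code-block:: python

   # Manual pre-processing sketch for ``normalize: false``. OpenCV/NumPy
   # availability and the constants below are assumptions, not PePPi APIs.
   import cv2
   import numpy as np

   MODEL_W, MODEL_H = 512, 512                     # placeholder model input size
   MEAN = np.array([0.408, 0.447, 0.470], dtype=np.float32)
   STDDEV = np.array([0.289, 0.274, 0.278], dtype=np.float32)

   def preprocess(frame):
       # Resize to the model input resolution (no padding in this sketch).
       resized = cv2.resize(frame, (MODEL_W, MODEL_H),
                            interpolation=cv2.INTER_LINEAR)
       # Scale raw 0-255 pixels to [0, 1], then apply per-channel normalization.
       scaled = resized.astype(np.float32) / 255.0
       return (scaled - MEAN) / STDDEV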
Build and Deploy
----------------

The `mpk create <../palette/mpk_tools.html#mpk-create>`_ command accepts the ``--peppi`` argument to build the PePPi pipeline package. Once the output file ``project.mpk`` is created, it can be deployed to the SiMa.ai MLSoC using the `mpk deploy <../palette/mpk_tools.html#mpk-deploy>`_ command.

.. note::

   The ``mpk create`` command requires the ``--peppi`` argument to build the PePPi pipeline package.

.. code-block:: bash

   user@palette-container-id: cd /usr/local/simaai/app_zoo/Peppi

   # Connect to the devkit
   user@palette-container-id: mpk device connect -t

   # Modify project.yaml [RTSP stream, UDP host, IP]
   user@palette-container-id: cp /path/to/downloaded/targz_file/yolov8_mpk.tar.gz YoloV8/

   # Change directory to YoloV8
   user@palette-container-id: cd YoloV8/

   # Create the pipeline package
   user@palette-container-id: mpk create --peppi -s . -d . --package-name=peppi_yoloV8 --main-file=main.py --yaml-file=YoloV8/project.yaml

   # Deploy the pipeline package
   user@palette-container-id: mpk deploy -f project.mpk

   # Options for the mpk create command
   #   --peppi       : Build a PePPi pipeline
   #   -s            : Source directory
   #   -d            : Destination directory for project.mpk
   #   --main-file   : Main script filename
   #   --yaml-file   : Project config YAML file
   #   --board-type  : Board type (modalix, davinci)

.. note::

   Board type options:

   - Default (Davinci): if you do not specify ``--board-type``, it defaults to Davinci.
   - Modalix: use ``--board-type modalix`` to create a pipeline for the Modalix platform.

   Modalix example:

   ``mpk create --peppi -s YoloV8/ -d . --package-name=peppi_yoloV8_modalix --main-file=main.py --yaml-file=YoloV8/project.yaml --board-type modalix``

Examples
--------

Samples are located under the Palette SDK ``/usr/local/simaai/app_zoo/Peppi`` folder with the following structure:

.. list-table::
   :widths: 30 70
   :header-rows: 1
   :class: sample-table

   * - **Sample**
     - **Description**
   * - Detr
     - Demonstrates a basic DETR (DEtection TRansformer) pipeline for object detection.
   * - Detr_custom_preproc
     - DETR pipeline with custom preprocessing to show how inputs can be adapted.
   * - EffDet
     - EfficientDet pipeline for lightweight and accurate object detection.
   * - Multimodel-Demo
     - Example of running multiple models together in a combined pipeline.
   * - PeopleDetector
     - Preconfigured pipeline optimized for detecting people in video streams.
   * - YoloV7
     - Standard YOLOv7 pipeline for object detection.
   * - YoloV7_pcie
     - YOLOv7 pipeline configured for PCIe data input/output.
   * - YoloV8
     - YOLOv8 pipeline demonstrating the latest YOLO model for detection tasks.

You can download the pre-compiled model ``tar.gz`` file for the pipelines from the following locations:

.. list-table::
   :widths: 30 35 35
   :header-rows: 1
   :class: sample-table

   * - **Model**
     - **Modalix**
     - **MLSoC**
   * - YOLOv8
     - `yolov8_mpk.tar.gz `_
     - `yolov8_mpk.tar.gz `_
   * - UR CenterNet
     - `UR_onnx_centernet_fp32_512_512.onnx_mpk.tar.gz `_
     - `UR_onnx_centernet_fp32_512_512.onnx_mpk.tar.gz `_
   * - EfficientDet D0
     - `efficientdet_d0_mpk.tar.gz `_
     - `efficientdet_d0_mpk.tar.gz `_
   * - DETR (No Sigmoid)
     - `detr_no_sigmoid_mpk.tar.gz `_
     - `detr_no_sigmoid_mpk.tar.gz `_
   * - YOLOv7 Tiny
     - `yolov7-tiny-opt_mpk.tar.gz `_
     - `yolov7-tiny-opt_mpk.tar.gz `_
   * - YOLOv7 E6
     - —
     - `yolov7-e6.tar.gz `_
   * - YOLOv5m Segmentation
     - —
     - `yolov5m-seg-opt_mpk.tar.gz `_
   * - YOLOv8 Pose
     - —
     - `Yolov8-Pose_mpk.tar.gz `_
   * - Human Pose Estimation
     - —
     - `modified_human-pose-estimation-480x480_stage2only_mpk.tar.gz `_
   * - STAnomalyDet
     - —
     - `STAnomalyDet.tar.gz `_

Tutorials
---------

Follow the tutorials below to build and run specific use cases.

- :ref:`Ethernet Tutorial ` — works with network video sources (RTSP) and streams results over the network for an object detection use case.
- :ref:`PCIe Tutorial ` — integrates with a PCIe host to read data and produce results for an object detection use case.
- :ref:`Filesrc Tutorial ` — processes static files as the input source, using segmentation and a teacher/student model for anomaly detection.

.. toctree::
   :maxdepth: 2
   :hidden:

   ethernet-tutorial
   pcie-tutorial
   filesrc-tutorial

Logging
-------

For troubleshooting, the ``/var/log/simaai_peppi.log`` file serves as the centralized location for all system logs related to PePPi.
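To watch this log live while debugging a pipeline, something like the following can be run on the device. This is a minimal sketch using only the Python standard library; the log path is the one named above, and everything else is generic.

.. code-block:: python

   # Follow /var/log/simaai_peppi.log, similar to ``tail -f``. Only the
   # log path comes from the documentation; the rest is a generic sketch.
   import time

   LOG_PATH = "/var/log/simaai_peppi.log"

   with open(LOG_PATH, "r") as log:
       log.seek(0, 2)                 # start at the end of the file
       while True:
           line = log.readline()
           if not line:
               time.sleep(0.5)        # wait for new entries
               continue
           print(line, end="")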
Models with BoxDecoder Support
------------------------------

The following models are supported by the **BoxDecoder** kernel, an optimized component for processing bounding boxes. BoxDecoder transforms raw model outputs into interpretable bounding box coordinates, scales them to the original image space, and applies post-processing steps such as thresholding and Non-Maximum Suppression (NMS) to deliver the final detection results.

.. list-table::
   :widths: 20 15 15 15 15 20 20
   :header-rows: 1
   :class: narrow-table

   * - Model Name
     - num_classes
     - model_height
     - model_width
     - num_heads
     - output-shape
     - submodels
   * - centernet
     - 1-255
     - 1-2048
     - 1-2048
     - 1
     - model_size/4
     - N/A
   * - detr
     - 1-255
     - ANY
     - ANY
     - 1
     - 1/1-1024
     -
   * - effdet
     - 1-255
     - 1-2048
     - 1-2048
     - 3-5
     - model_size/4,8,16,32,64
     -
   * - Yolo - uncut ONNX
     - 1-255
     - 1-2048
     - 1-2048
     - 3-4
     - model_size/8,16,32,64
     - YOLOv7, YOLOv8, YOLOv9, yolov10

.. important::
   :class: code-narrow
   :width: 60%

   PePPi does not support single-channel models, so GRAY image input will not be processed. As a workaround, set ``normalize: false`` in ``project.yaml`` and perform any required pre-processing (resize, crop, normalization, etc.) in ``main.py``, as sketched below.
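The following is a minimal sketch of that workaround, assuming OpenCV and NumPy are available in the application environment. ``MODEL_W``/``MODEL_H`` and the scalar ``MEAN``/``STDDEV`` values are placeholders, and replicating the GRAY channel to three channels is one possible way to present the frame to an RGB pipeline, not a documented PePPi requirement.

.. code-block:: python

   # Hypothetical GRAY-input workaround with ``normalize: false`` set in
   # project.yaml. OpenCV/NumPy availability, the 512x512 input size, and
   # the channel replication step are all assumptions, not PePPi APIs.
   import cv2
   import numpy as np

   MODEL_W, MODEL_H = 512, 512   # placeholder model input resolution
   MEAN, STDDEV = 0.449, 0.226   # placeholder grayscale statistics

   def preprocess_gray(gray_frame):
       # Resize the single-channel frame to the model input resolution.
       resized = cv2.resize(gray_frame, (MODEL_W, MODEL_H))
       # Scale raw 0-255 pixels to [0, 1] and normalize manually, since
       # the hardware pre-processing path is bypassed.
       scaled = resized.astype(np.float32) / 255.0
       normalized = (scaled - MEAN) / STDDEV
       # Replicate the GRAY channel to three channels for an RGB-shaped input.
       return np.repeat(normalized[..., np.newaxis], 3, axis=2)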