.. _peppi:

Develop with PePPi
==================

**PePPi** (Performant Python Pipelines) is a high-performance Python module included in the **Palette SDK distribution**. It enables developers to quickly build and deploy real-time machine learning pipelines accelerated on the SiMa.ai MLSoC.

Designed for simplicity and ease of integration, PePPi abstracts the complexity of video ingestion, model inference, result rendering, and output streaming, making it ideal for developers who are unfamiliar with GStreamer or who want a lightweight alternative.

PePPi minimizes the need to write and manage GStreamer pipelines directly, while still delivering high throughput and low latency.

Installation
------------

PePPi is bundled with the **Palette SDK**. Follow the Palette SDK :ref:`installation instructions ` to set up your environment.

Typical Workflow
----------------

First, :ref:`Compile ` the model using the ModelSDK to generate the ``project_mpk.tar.gz`` file. For convenience, the provided sample pipelines include precompiled, ready-to-use models.

Implement the following logic in your Python script:

1. Load model parameters from a YAML config (see the example configuration below).
2. Use :py:meth:`sima.VideoReader` to ingest a video stream.
3. Initialize :py:meth:`sima.MLSoCSession` with the desired model.
4. Run :py:meth:`sima.MLSoCSession.run_model` to perform inference.
5. Optionally, render results using :py:meth:`sima.SimaBoxRender`.
6. Output the processed video using :py:meth:`sima.VideoWriter`.

Once the script is prepared, use the Palette ``mpk`` CLI tool to build and deploy the app to the SiMa.ai MLSoC.

Code Example
------------

.. code-block:: python

    import sima
    import yaml

    # Load pipeline and model parameters from the YAML configuration
    with open("project.yaml", "r") as file:
        external_params = yaml.safe_load(file)

    # Ingest the input video stream
    reader = sima.VideoReader(external_params["source"])

    # Stream the processed output over UDP
    writer = sima.VideoWriter(
        external_params["source"],
        external_params["udp_host"],
        external_params["port"],
        reader.frame_width,
        reader.frame_height
    )

    # Create and configure an inference session for the compiled model
    model_params = external_params["Models"][0]
    session = sima.MLSoCSession(
        model_params["targz"],
        pipeline=external_params["pipeline"],
        frame_width=reader.frame_width,
        frame_height=reader.frame_height
    )
    session.configure(model_params)

    # Read frames, run inference, render detections, and write the output
    while reader.isOpened():
        ret, frame = reader.read()
        if not ret:
            break

        boxes = session.run_model(frame)
        annotated_frame = sima.SimaBoxRender.render(
            frame,
            boxes,
            reader.frame_width,
            reader.frame_height,
            model_params["label_file"]
        )

        writer.write(annotated_frame)

Build and Deploy
----------------

The `mpk create <../palette/mpk_tools.html#mpk-create>`_ command accepts the ``--peppi`` argument to build the PePPi pipeline package. Once the output file ``project.mpk`` is created, it can be deployed to the SiMa.ai MLSoC using the `mpk deploy <../palette/mpk_tools.html#mpk-deploy>`_ command.

.. code-block:: bash

    user@palette-container-id:/home/docker/sima-cli/demo/peppi$ mpk create --peppi -s . -d . --main-file=main.py --yaml-file=project.yaml
    user@palette-container-id:/home/docker/sima-cli/demo/peppi$ mpk deploy -f project.mpk --target

Sample Pipelines
----------------

Sample pipelines are located in the Palette SDK under the ``/usr/local/simaai/app_zoo/Peppi`` folder.

.. toctree::
    :maxdepth: 1

    ../api_reference/peppi-pipelines/YoloV8.rst
    ../api_reference/peppi-pipelines/YoloV7_pcie.rst
    ../api_reference/peppi-pipelines/PeopleDetector.rst
    ../api_reference/peppi-pipelines/YoloV7.rst
    ../api_reference/peppi-pipelines/EffDet.rst
    ../api_reference/peppi-pipelines/Multimodel-Demo.rst
    ../api_reference/peppi-pipelines/Detr.rst

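
Example ``project.yaml``
--------------------------

The code example above reads its parameters from ``project.yaml``. The sketch below is illustrative only: it shows just the keys that the code example accesses (``source``, ``udp_host``, ``port``, ``pipeline``, and the per-model ``targz`` and ``label_file`` entries), with placeholder values, and is not the authoritative schema. Refer to the sample pipelines for complete, working configuration files.

.. code-block:: yaml

    # Illustrative sketch only -- key names follow the code example above;
    # consult a sample pipeline's project.yaml for the authoritative schema.
    source: "rtsp://192.168.1.10:554/stream"   # input video stream (placeholder)
    udp_host: "192.168.1.100"                  # host receiving the output UDP stream (placeholder)
    port: 9000                                 # output UDP port (placeholder)
    pipeline: "detection"                      # pipeline name passed to MLSoCSession (placeholder)

    Models:
      - targz: "project_mpk.tar.gz"            # compiled model archive produced by the ModelSDK
        label_file: "labels.txt"               # class labels used by SimaBoxRender (placeholder)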