.. _peppi-filesrc-tutorial:

Filesrc Tutorial
================

This tutorial shows how to use the **Filesrc Pipeline** for multi-model
processing of local image files. You will learn how to run advanced computer
vision tasks such as segmentation, detection, and anomaly analysis using YOLO,
Teacher-Student distillation, and AutoEncoder networks on the SiMa.ai MLSoC
platform.

.. note::

   This section covers building filesrc pipelines with PePPi (Python).

**Features:**

- Multi-model pipeline combining segmentation, detection, and anomaly analysis
- Local file processing using filesrc input
- Real-time anomaly detection using Teacher-Student and AutoEncoder networks
- UDP streaming output for visualization

Purpose
-------

This application demonstrates how to use the SiMa PePPi API to create a
multi-model pipeline that:

- Loads video frames from a local directory using ``filesrc``
- Uses a YOLO model to perform segmentation and detect objects of interest
- Processes the detected regions using Teacher, Student, and AutoEncoder models
- Calculates anomaly maps via mean squared error between Teacher-Student and
  AutoEncoder outputs
- Combines these maps to generate predictions
- Writes annotated results to a UDP stream for visualization

Configuration Overview
----------------------

All runtime settings are managed through ``project.yaml``. The parameters are
grouped by function below.

Input/Output Configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^

.. list-table::
   :widths: 20 50 30
   :header-rows: 1
   :class: narrow-table

   * - Parameter
     - Description
     - Example
   * - source.name
     - Input type; here, local image files
     - ``"filesrc"``
   * - source.value
     - Folder path containing input samples
     - ``""``
   * - udp_host
     - Host IP to stream annotated output to
     - ``""``
   * - port
     - UDP port for the output stream
     - ``""``
   * - pipeline
     - Inference pipeline used
     - ``"AutoEncoderPipeline"``
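As a quick sanity check, the I/O settings can be validated before launching the pipeline. The sketch below is a hypothetical helper (``validate_io`` is not part of the PePPi API), assuming ``project.yaml`` has already been parsed into a Python dict, for example with ``yaml.safe_load``:

```python
# Hypothetical sanity check for the I/O section of project.yaml.
# The dict below mirrors the parsed YAML; the udp_host/port values are
# placeholders, not values from this tutorial.

config = {
    "source": {"name": "filesrc", "value": "InputSamples"},
    "udp_host": "192.168.1.10",
    "port": "8005",
    "pipeline": "AutoEncoderPipeline",
}

def validate_io(cfg: dict) -> list[str]:
    """Return a list of problems found in the I/O section of the config."""
    problems = []
    if cfg.get("source", {}).get("name") != "filesrc":
        problems.append("source.name must be 'filesrc' for this tutorial")
    if not cfg.get("source", {}).get("value"):
        problems.append("source.value must point at the input sample folder")
    for key in ("udp_host", "port", "pipeline"):
        if not cfg.get(key):
            problems.append(f"{key} must be set")
    return problems

print(validate_io(config))  # prints [] when every field is filled in
```

Catching an empty ``source.value`` or ``port`` here is cheaper than debugging a silently idle pipeline on the device.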
Model Configuration
^^^^^^^^^^^^^^^^^^^

YOLO Model (Index 0)
~~~~~~~~~~~~~~~~~~~~

.. list-table::
   :widths: 25 50 25
   :header-rows: 1
   :class: narrow-table

   * - Parameter
     - Description
     - Value
   * - name
     - Model name
     - ``"YOLO"``
   * - targz
     - Path to model archive
     - ``"yolo_seg_cls.tar.gz"``
   * - normalize
     - Whether to normalize input
     - ``true``
   * - channel_mean
     - Input mean values
     - ``[0.0, 0.0, 0.0]``
   * - channel_stddev
     - Input stddev values
     - ``[1.0, 1.0, 1.0]``
   * - padding_type
     - Padding method
     - ``"CENTER"``
   * - aspect_ratio
     - Maintain aspect ratio
     - ``false``
   * - topk
     - Maximum number of detections
     - ``10``
   * - detection_threshold
     - Detection score threshold
     - ``0.7``
   * - label_file
     - Label map file path
     - ``"labels.txt"``
   * - input_img_type
     - Image color format
     - ``"BGR"``

Teacher / Student / AutoEncoder Models (Index 1–3)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. list-table::
   :widths: 25 50 25
   :header-rows: 1
   :class: narrow-table

   * - Parameter
     - Description
     - Value (same across all)
   * - name
     - Model name
     - ``"Teacher"``, ``"Student"``, ``"AutoEncoder"``
   * - targz
     - Model archive
     - ``teacher_int8_mpk.tar.gz``, etc.
   * - normalize
     - Enable input normalization
     - ``true``
   * - channel_mean
     - Input mean
     - ``[0.407, 0.446, 0.469]``
   * - channel_stddev
     - Input stddev
     - ``[0.289, 0.273, 0.277]``
   * - input_img_type
     - Input format
     - ``"BGR"``
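The fractional ``channel_mean``/``channel_stddev`` values suggest normalization is applied after pixel values are scaled to ``[0, 1]``. A minimal NumPy sketch of that preprocessing, under that assumption (when ``normalize: true`` is set, the PePPi runtime performs the equivalent step internally, and its exact formula may differ):

```python
import numpy as np

# Channel statistics from the Teacher/Student/AutoEncoder entries,
# assumed to be in the same channel order as the input frame.
CHANNEL_MEAN = np.array([0.407, 0.446, 0.469])
CHANNEL_STDDEV = np.array([0.289, 0.273, 0.277])

def normalize_frame(frame_bgr: np.ndarray) -> np.ndarray:
    """Scale an 8-bit frame to [0, 1], then apply per-channel
    mean/stddev normalization (a common convention)."""
    scaled = frame_bgr.astype(np.float32) / 255.0
    return (scaled - CHANNEL_MEAN) / CHANNEL_STDDEV

# Dummy all-white 4x4 BGR frame for illustration.
frame = np.full((4, 4, 3), 255, dtype=np.uint8)
out = normalize_frame(frame)
```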
Project Configuration
---------------------

project.yaml
^^^^^^^^^^^^

.. code-block:: yaml
   :class: code-narrow
   :linenos:

   source:
     name: "filesrc"
     value: ""
   udp_host: ""
   port: ""
   pipeline: "AutoEncoderPipeline"
   Models:
     - name: "YOLO"
       targz: "yolo_seg_cls.tar.gz"
       normalize: true
       channel_mean: [0.0, 0.0, 0.0]
       channel_stddev: [1.0, 1.0, 1.0]
       padding_type: "CENTER"
       aspect_ratio: false
       topk: 10
       detection_threshold: 0.7
       label_file: "labels.txt"
       input_img_type: "BGR"
     - name: "Teacher"
       targz: "teacher_int8_mpk.tar.gz"
       normalize: true
       channel_mean: [0.407, 0.446, 0.469]
       channel_stddev: [0.289, 0.273, 0.277]
       input_img_type: "BGR"
     - name: "Student"
       targz: "student_int8_mpk.tar.gz"
       normalize: true
       channel_mean: [0.407, 0.446, 0.469]
       channel_stddev: [0.289, 0.273, 0.277]
       input_img_type: "BGR"
     - name: "AutoEncoder"
       targz: "autoencoder_int8_mpk.tar.gz"
       normalize: true
       channel_mean: [0.407, 0.446, 0.469]
       channel_stddev: [0.289, 0.273, 0.277]
       input_img_type: "BGR"

Script Behavior
---------------

The main script performs the following steps:

1. Initializes four SiMa MLSoC sessions, one each for the YOLO, Teacher, Student, and AutoEncoder models
2. Loads and loops through input frames using ``filesrc``
3. Runs the YOLO segmentation model to isolate regions of interest
4. Passes each segmented region through the Teacher, Student, and AutoEncoder networks
5. Computes two anomaly maps (Teacher vs. Student, AutoEncoder vs. Student)
6. Normalizes and combines these maps to highlight regions of potential anomaly
7. Post-processes the combined map and generates a final prediction
8. Converts and streams the result via UDP using ``VideoWriter``

.. note::

   This pipeline requires multiple model files and is designed for advanced
   anomaly detection scenarios.
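Steps 5 and 6 of the script behavior (computing and combining the anomaly maps) can be sketched in NumPy. This is an illustrative reconstruction, not the tutorial's exact code: the real script's normalization, weighting, and threshold may differ.

```python
import numpy as np

def anomaly_maps(teacher, student, autoencoder):
    """Per-pixel mean squared error across channels between the
    Teacher/Student and AutoEncoder/Student outputs (H x W x C arrays)."""
    ts_map = np.mean((teacher - student) ** 2, axis=-1)
    ae_map = np.mean((autoencoder - student) ** 2, axis=-1)
    return ts_map, ae_map

def combine(ts_map, ae_map):
    """Min-max normalize each map to [0, 1] and average them
    (an assumed combination rule)."""
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return 0.5 * norm(ts_map) + 0.5 * norm(ae_map)

# Dummy 8x8x16 feature maps standing in for the three model outputs.
gen = np.random.default_rng(0)
t, s, a = (gen.standard_normal((8, 8, 16)) for _ in range(3))
combined = combine(*anomaly_maps(t, s, a))
prediction = combined > 0.5  # assumed anomaly threshold
```

High values in ``combined`` mark pixels where the student disagrees with both the teacher and the autoencoder reconstruction, which is the signal this pipeline treats as anomalous.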
Pipeline Architecture
---------------------

.. list-table::
   :widths: 20 80
   :header-rows: 1
   :class: narrow-table

   * - Stage
     - Description
   * - Input Processing
     - Loads frames from a local directory using filesrc
   * - YOLO Segmentation
     - Performs object detection and segmentation to identify regions of interest
   * - Teacher Network
     - Processes segmented regions using the pre-trained teacher model
   * - Student Network
     - Processes the same regions using the student model for knowledge distillation
   * - AutoEncoder Network
     - Reconstructs input regions to detect anomalies
   * - Anomaly Detection
     - Computes mean squared error between Teacher-Student and AutoEncoder outputs
   * - Output Generation
     - Combines anomaly maps and streams results via UDP

Build and Deploy
----------------

Follow the standard PePPi build and deploy process:

.. note::

   This tutorial is compatible with Modalix only.

#. **Prerequisites:**

   Make sure the device is connected via the SDK so the project can be
   deployed to it; follow the device setup section if it is not.

   .. code-block:: console
      :class: code-narrow

      sima-user@docker-image-id:/home/docker/sima-cli/workspace$ mpk device connect -t sima@192.168.1.20
      ℹ Connecting to sima@192.168.1.20...
      🔗 Connection established to 192.168.1.20 .
      ℹ Fetching Device Plugin Version data file from : 192.168.1.20 ...
      ✔ Successfully fetched and updated Plugin Version Data file from : 192.168.1.20.

#. **Prepare the project directory:**

   .. code-block:: console
      :class: code-narrow

      sima-user@docker-image-id:/home/docker/sima-cli/workspace$ sima-cli install -v 1.7.0 samples/peppi-tutorials/filesrc

#. **Update project.yaml:**
   .. code-block:: yaml
      :class: code-narrow

      source:
        name: "filesrc"
        value: "InputSamples"  # Replace with your own sample test data folder when working with a custom model
      udp_host: ""
      port: ""
      pipeline: "AutoEncoderPipeline"
      Models:
        - name: "YOLO"
          targz: "yolo_seg_cls.tar.gz"
          channel_mean: [0.0, 0.0, 0.0]
          channel_stddev: [1.0, 1.0, 1.0]
          padding_type: "CENTER"
          aspect_ratio: false
          topk: 10
          detection_threshold: 0.7
          normalize: true
          label_file: labels.txt
          input_img_type: "BGR"
        - name: "Teacher"
          targz: "teacher_int8_mpk.tar.gz"
          normalize: true
          channel_mean: [0.407, 0.446, 0.469]
          channel_stddev: [0.289, 0.273, 0.277]
          input_img_type: "BGR"
        - name: "Student"
          targz: "student_int8_mpk.tar.gz"
          normalize: true
          channel_mean: [0.407, 0.446, 0.469]
          channel_stddev: [0.289, 0.273, 0.277]
          input_img_type: "BGR"
        - name: "AutoEncoder"
          targz: "autoencoder_int8_mpk.tar.gz"
          normalize: true
          channel_mean: [0.407, 0.446, 0.469]
          channel_stddev: [0.289, 0.273, 0.277]
          input_img_type: "BGR"

#. **Create the pipeline package:**

   .. code-block:: console
      :class: code-narrow

      sima-user@docker-image-id:/home/docker/sima-cli/workspace/filesrc_tutorial$ mpk create --peppi -s . -d . --main-file=main.py --yaml-file=project.yaml
      ℹ Generating requirements.txt file...
      ✔ Generated requirements.txt.
      ℹ Dowloading required packages...
      ✔ Dowloaded required packages.
      ℹ Building Rpm...
      ✔ Rpm built successfully.
      ℹ Creating mpk file...
      ✔ Mpk file created successfully at /home/docker/sima-cli/workspace/filesrc_tutorial/project.mpk

#. **Deploy the pipeline package:**

   .. code-block:: console
      :class: code-narrow

      sima-user@docker-image-id:/home/docker/sima-cli/workspace/filesrc_tutorial$ mpk deploy -f project.mpk
      ℹ Checking if App AutoEncoderPipeline Plugin Version Index File /home/docker/sima-cli/workspace/filesrc_tutorial/AutoEncoderPipeline_plugin_version_index.json exists...
      ❗ App AutoEncoderPipeline : File doesn't exist at /home/docker/sima-cli/workspace/filesrc_pipeline/Multimodel-Demo/AutoEncoderPipeline_plugin_version_index.json. Please check the path/file.
      ❔ Proceed by Skipping Plugin Version Check? [y/n]: y
      ...
      ...
      ...
      ✔ MPK Deployed! ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
      ✔ MPK Deployment is successful for project.mpk.

#. **Visualize the results:**

   On the host machine, run the following command. Make sure the ``udpsrc``
   port matches the port specified in the ``project.yaml`` configuration.

   .. code-block:: console
      :class: code-narrow

      GST_DEBUG=0 gst-launch-1.0 udpsrc port=8005 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! 'video/x-h264,stream-format=byte-stream,alignment=au' ! avdec_h264 ! fpsdisplaysink

.. toctree::
   :maxdepth: 2
   :caption: Filesrc Pipeline
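As background on the visualization step: the ``udpsrc`` receiver above expects H.264 over RTP on the configured port, so the device side must encode and pay-load frames accordingly. The helper below builds a matching GStreamer *sender* pipeline string; whether PePPi's ``VideoWriter`` accepts such a string directly is an assumption, and it may configure this internally instead.

```python
def udp_sender_pipeline(host: str, port: int) -> str:
    """Build a GStreamer pipeline string that H.264-encodes frames from
    appsrc and streams them over RTP/UDP, mirroring the udpsrc receiver
    (illustrative only; element names are standard GStreamer)."""
    return (
        "appsrc ! videoconvert ! x264enc tune=zerolatency ! "
        "rtph264pay pt=96 ! "
        f"udpsink host={host} port={port}"
    )

# Example use with OpenCV built with GStreamer support (not run here):
# import cv2
# writer = cv2.VideoWriter(udp_sender_pipeline("192.168.1.10", 8005),
#                          cv2.CAP_GSTREAMER, 0, 30.0, (1280, 720))
```

Note that ``pt=96`` on the sender matches ``payload=96`` in the receiver's caps, and the port must match the ``port`` value in ``project.yaml``.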