.. _hello_sima_run_demos:

Run Demos
---------

Before building your first pipeline, try the canned demos. SiMa provides two distinct demo experiences:

- :ref:`Edgematic` is a state-of-the-art web-based development platform for Edge AI applications. Users can quickly experience the demo by simply dragging and dropping a prebuilt pipeline with a few mouse clicks.
- Run a demo pipeline on the DevKit with the help of :ref:`Palette CLI`, using a companion Ubuntu machine to source the video and render the MLA-accelerated YoloV7 processing results.

Follow the instructions below to explore our demos. This also serves as a great way to verify that your environment is set up correctly.

.. tabs::

   .. tab:: Edgematic Demo

      .. include:: ../blocks/edgematic_beta_registration_notice.rst

      Log into Edgematic and create a new project called ``demo``. On the right-hand side ``Catalog`` tab, find the ``yolo_v5_ethernet`` application under ``SiMa`` and drag it onto the Canvas. Then hit the ``play`` button in the top right corner of the page.

   .. tab:: DevKit Standalone Mode Demo

      If you have access to a DevKit, running this demo helps you understand the general workflow. The following diagram illustrates the demo setup. An Ubuntu machine connects to the DevKit over the network and is configured for RTSP (Real-Time Streaming Protocol) video streaming to and from the DevKit, which handles the accelerated ML tasks. Additionally, you need to install :ref:`Palette CLI` on the host machine to build and deploy the app. Palette can be installed on the same machine running the video streaming processes.

      .. image:: ../../media/pipeline-demo-architecture.jpg
         :alt: MIPI Camera Pipeline Architecture
         :align: center
         :width: 50%

      |

      .. note::

         This is a demo for the :ref:`standalone ` mode. For more information on how to set up the network in standalone mode, refer to this :ref:`page `.

      .. dropdown:: Step 1. Prepare the Ubuntu Host Machine
         :animate: fade-in
         :color: secondary

         .. rubric:: Requirements

         To run this demo you will need:

         - An Ubuntu machine (version 22.04 is recommended) connected to the same network as the DevKit.
         - Install the required GStreamer and ffmpeg dependencies:

           .. code-block:: bash

              user@ubuntu-host-machine:~$ sudo apt update && sudo apt -y install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav ffmpeg

         - Install the Docker engine:

           .. code-block:: bash

              user@ubuntu-host-machine:~$ curl -fsSL https://get.docker.com -o get-docker.sh
              user@ubuntu-host-machine:~$ sudo sh ./get-docker.sh
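         After installing these dependencies, you can optionally run a quick sanity check before moving on. This is only a rough verification sketch; the exact version output will vary by system.

         .. code-block:: bash

            # Confirm GStreamer, ffmpeg, and Docker are installed and usable
            user@ubuntu-host-machine:~$ gst-launch-1.0 --version
            user@ubuntu-host-machine:~$ ffmpeg -version | head -n 1
            user@ubuntu-host-machine:~$ sudo docker run --rm hello-world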
      .. dropdown:: Step 2. Install and Set Up Palette
         :animate: fade-in
         :color: secondary

         .. note::

            .. button-link:: https://docs.sima.ai/pkg_downloads/SDK1.5.0/1.5.0_Palette_master_B156.zip
               :color: primary
               :shadow:

               Download Palette

         For more information on system requirements and the installation procedure, refer to :ref:`Palette installation`.

         Make sure your DevKit runs v1.5 firmware or above. Click :ref:`here ` to find out how to check the firmware version and update it if necessary.

      .. dropdown:: Step 3. Set Up the Ubuntu Host-Side Video Processes
         :animate: fade-in
         :color: secondary

         .. note::

            The Ubuntu machine must have an internet connection to download the public RTSP server Docker container.

         We need to set up three processes on the Ubuntu machine: two to source the video and one to render the results coming from the DevKit.

         - On the Ubuntu machine, download a test 1280x720p mp4 video file to serve as the source of the RTSP stream. We recommend a video containing people or cars, as these classes are generally supported by the Yolo model in this demo.

         - On the ``first`` terminal on the Ubuntu machine, start the Docker service that forwards the RTSP stream:

           .. code-block:: bash

              user@ubuntu-host-machine:~$ docker run --name rtsp_server --rm -e MTX_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server

         - On the ``second`` terminal on the Ubuntu machine, stream the mp4 file to the RTSP service started in the previous step:

           .. code-block:: bash

              user@ubuntu-host-machine:~$ ffmpeg -re -nostdin -stream_loop -1 -i <path to mp4 file> -c:v copy -f rtsp rtsp://127.0.0.1:8554/mystream

           Replace ``<path to mp4 file>`` with the path to the downloaded video.

         - On the ``third`` terminal on the Ubuntu machine, launch a GStreamer window to view the pipeline result.

           .. warning::

              Be sure to run this command from the Ubuntu Terminal GUI, not an SSH session, as it requires access to the system's graphical user interface to render video.

           .. code-block:: bash

              user@ubuntu-host-machine:~$ GST_DEBUG=0 gst-launch-1.0 udpsrc port=<port> ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! 'video/x-h264,stream-format=byte-stream,alignment=au' ! avdec_h264 ! autovideoconvert ! fpsdisplaysink sync=0

           Replace ``<port>`` with an open port on which the host will receive the output of the pipeline from the DevKit.
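         With the RTSP server and the ffmpeg streamer running, you can optionally confirm that the stream is being served correctly before involving the DevKit. This is just a rough check using ``ffprobe`` (installed with ffmpeg in Step 1); it is not part of the demo pipeline itself.

         .. code-block:: bash

            # Probe the local RTSP stream; the output should report an h264 video stream at 1280x720
            user@ubuntu-host-machine:~$ ffprobe -rtsp_transport tcp rtsp://127.0.0.1:8554/mystream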
      .. dropdown:: Step 4. Edit the Demo App GStreamer Pipeline
         :animate: fade-in
         :color: secondary

         You need to edit the sample app configuration file ``application.json`` located inside the Palette container. You can use either ``vim`` or ``nano`` to edit the file.

         .. code-block:: bash

            user@palette-container-id:~$ cd /usr/local/simaai/app_zoo/Gstreamer/YoloV7
            user@palette-container-id:/usr/local/simaai/app_zoo/Gstreamer/YoloV7$ vim application.json

         The ``gst`` element in this file defines the GStreamer launch command. Inside this command, three values need to be replaced:

         - ``rtspsrc location=``: Replace with ``rtsp://<Ubuntu host IP>:8554/mystream``, where ``<Ubuntu host IP>`` is the IP address of the Ubuntu host.
         - ``udpsink host=``: Replace with the Ubuntu host IP address.
         - ``udpsink port=``: Replace with the Ubuntu host port on which the video rendering process (the third terminal from the previous step) is listening.

      .. dropdown:: Click to view the explanation of the ``gst`` launch command in the ``application.json`` file
         :animate: fade-in
         :color: info

         The ``gst`` launch command constructs and executes the multimedia processing and machine learning pipeline that runs on the SiMa MLA. It enables real-time streaming, decoding, processing, and rendering of data streams using various GStreamer plugins. Common use cases include playing media files, capturing video from a camera, streaming RTSP sources, and applying machine learning inference on video frames. By chaining different elements, a gst launch command lets developers create flexible and efficient multimedia applications.

         .. code-block:: javascript

            "gst": "rtspsrc location=rtsp://<Ubuntu host IP>:8554/mystream !\
                    rtph264depay wait-for-keyframe=true ! h264parse ! 'video/x-h264, parsed=true, stream-format=(string)byte-stream, alignment=(string)au, width=(int)[1,4096], height=(int)[1,4096]' !\
                    simaaidecoder sima-allocator-type=2 name='decoder' next-element='CVU' !\
                    tee name=source ! 'video/x-raw' !\
                    simaaiprocesscvu_new name=simaai_preprocess num-buffers=5 !\
                    simaaiprocessmla_new name=simaai_process_mla num-buffers=5 !\
                    simaaiprocesscvu_new name=simaai_postprocess num-buffers=5 !\
                    nmsyolov5_new name=simaai_nms_yolov5 orig-img-width=1280 orig-img-height=720 ! \
                    overlay. source. ! 'video/x-raw' ! \
                    simaai-overlay2_new name=overlay render-info='input::decoder,bboxy::simaai_nms_yolov5' labels-file='/data/simaai/applications/YoloV7/share/overlay_new/labels.txt' !\
                    simaaiencoder enc-bitrate=4000 name=encoder !\
                    h264parse !\
                    rtph264pay !\
                    udpsink host=<Ubuntu host IP> port=<port>"

         1. **RTSP Source**:

            - ``rtspsrc location=rtsp://<Ubuntu host IP>:8554/mystream``: Retrieves the RTSP video stream from the Ubuntu host.

         2. **H.264 Stream Handling**:

            - ``rtph264depay wait-for-keyframe=true``: Depayloads the H.264 stream while ensuring it starts with a keyframe.
            - ``h264parse``: Parses the H.264 stream.
            - ``'video/x-h264, parsed=true, stream-format=(string)byte-stream, alignment=(string)au, width=(int)[1,4096], height=(int)[1,4096]'``: Specifies video format constraints.

         3. **Decoding**:

            - ``simaaidecoder sima-allocator-type=2 name='decoder' next-element='CVU'``: Uses SiMa AI's hardware decoder and allocates memory.

         4. **Processing & Object Detection**:

            - ``tee name=source``: Duplicates the video stream for parallel processing.
            - ``simaaiprocesscvu_new name=simaai_preprocess num-buffers=5``: Prepares frames for processing.
            - ``simaaiprocessmla_new name=simaai_process_mla num-buffers=5``: Runs inference using the Machine Learning Accelerator (MLA).
            - ``simaaiprocesscvu_new name=simaai_postprocess num-buffers=5``: Post-processes the model output.
            - ``nmsyolov5_new name=simaai_nms_yolov5 orig-img-width=1280 orig-img-height=720``: Applies Non-Maximum Suppression (NMS) for YOLOv5 object detection.

         5. **Overlaying Bounding Boxes**:

            - ``overlay. source. ! 'video/x-raw'``: Merges the original and processed video streams.
            - ``simaai-overlay2_new name=overlay render-info='input::decoder,bboxy::simaai_nms_yolov5' labels-file='/data/simaai/applications/YoloV7_Eth/share/overlay_new/labels.txt'``: Overlays detected objects on the video, using the provided labels file for object classification.

         6. **Encoding & Transmission**:

            - ``simaaiencoder enc-bitrate=4000 name=encoder``: Encodes the processed video stream.
            - ``h264parse``: Parses the encoded H.264 stream.
            - ``rtph264pay``: Packs the stream into RTP format.
            - ``udpsink host=<Ubuntu host IP> port=<port>``: Sends the processed video stream over UDP to the specified host.

      .. dropdown:: Step 5. Build and Deploy the Pipeline
         :animate: fade-in
         :color: secondary

         .. include:: ../blocks/palette_create_connect_deploy.rst

         Now, you should see the original video overlaid with Yolo bounding boxes on the Ubuntu machine.
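         If the GStreamer window stays blank, a common cause is a mismatch between the ``udpsink port=`` value in ``application.json`` and the port passed to ``udpsrc`` on the host. As a rough troubleshooting sketch (assuming ``tcpdump`` is available on the Ubuntu host), you can confirm that UDP packets from the DevKit are actually arriving on the chosen port:

         .. code-block:: bash

            # Capture a few UDP packets on the chosen port; replace <port> with your value
            user@ubuntu-host-machine:~$ sudo tcpdump -i any -c 10 udp port <port>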