Run Demos
Before building your first pipeline, try one of the canned demos. SiMa provides two distinct demo experiences:
Edgematic 🔥 is a state-of-the-art web-based development platform for Edge AI applications. Users can quickly experience the demo by simply dragging and dropping a prebuilt pipeline with a few mouse clicks.
Run a demo pipeline on the DevKit with the help of Palette, using a companion Ubuntu machine to source the video and render the MLA-accelerated YoloV7 processing results.
Follow the instructions below to explore our demos. This also serves as a great way to verify that your environment is set up correctly.
Note
Currently, Edgematic is in beta and available for free on the Amazon Web Services (AWS) Marketplace. To obtain access, click the link; you will need an AWS account to sign up.
Log into Edgematic and create a new project called demo. On the right-hand side Catalog tab, find the yolo_v5_ethernet application under SiMa and drag it into the Canvas. Then hit the play button in the top right corner of the page.
If you have access to a DevKit, running this demo helps you understand the general workflow. The following diagram illustrates the demo setup: an Ubuntu machine connects to the DevKit over the network and is configured for RTSP (Real-Time Streaming Protocol) video streaming to and from the DevKit, which handles the accelerated ML tasks.
Additionally, you need to install Palette on the host machine to build and deploy the app. Palette can be installed on the same machine running the video streaming processes.

Note
This is a demo for standalone mode. For more information on how to set up the network in standalone mode, refer to this page.
Step 1. Prepare the Ubuntu Host Machine
To run this demo you will need:
An Ubuntu machine (version 22.04 is recommended) connected to the same network as the DevKit.
Install the required GStreamer and ffmpeg dependencies:
user@ubuntu-host-machine:~$sudo apt update && sudo apt -y install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav ffmpeg
Install the Docker engine:
user@ubuntu-host-machine:~$curl -fsSL https://get.docker.com -o get-docker.sh
user@ubuntu-host-machine:~$sudo sh ./get-docker.sh
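Optionally (this check is not part of the original instructions), confirm that the Docker engine works before moving on; the command pulls Docker's hello-world image and prints a confirmation message:
user@ubuntu-host-machine:~$sudo docker run --rm hello-world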
Step 2. Install and Setup Palette
Note
For more information on system requirements and the installation procedure, refer to Software Installation. Make sure your DevKit runs v1.5 firmware or above; click here to find out how to check the firmware version and update it if necessary.
Step 3. Setup the Ubuntu Host Side Video Processes
Note
The Ubuntu machine must have an internet connection to download the public RTSP server Docker container.
We need to set up three processes on the Ubuntu machine: two to source the video and one to render the results coming back from the DevKit.
On the Ubuntu machine, download a test 1280x720 (720p) MP4 video file to be hosted as the RTSP stream. We recommend a video containing people or cars, as these classes are generally supported by the Yolo model in this demo.
On the first terminal on the Ubuntu machine, start the Docker service that forwards the RTSP stream:
user@ubuntu-host-machine:~$docker run --name rtsp_server --rm -e MTX_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server
On the second terminal on the Ubuntu machine, stream the mp4 file to the RTSP service started in the previous step:
user@ubuntu-host-machine:~$ffmpeg -re -nostdin -stream_loop -1 -i <VIDEO_PATH> -c:v copy -f rtsp rtsp://127.0.0.1:8554/mystream
Replace <VIDEO_PATH> with the path to the video.
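Note that -c:v copy passes the video through without re-encoding, so the file must already contain H.264 video. If your test file happens to use a different codec or resolution (an assumption about your source file), you can transcode it first; the output name demo_720p_h264.mp4 below is just an example, and you would then stream that file instead:
user@ubuntu-host-machine:~$ffmpeg -i <VIDEO_PATH> -vf scale=1280:720 -c:v libx264 -preset veryfast -an demo_720p_h264.mp4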
On the third terminal on the Ubuntu machine, launch a GStreamer window to view the pipeline result.
Warning
Be sure to run this command from the Ubuntu Terminal GUI, not an SSH session, as it requires access to the system's graphical user interface to render video.
user@ubuntu-host-machine:~$GST_DEBUG=0 gst-launch-1.0 udpsrc port=<PORT> ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! 'video/x-h264,stream-format=byte-stream,alignment=au' ! avdec_h264 ! autovideoconvert ! fpsdisplaysink sync=0
Replace <PORT> with the open port on which the host will receive the output of the pipeline from the DevKit.
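As an optional sanity check before involving the DevKit, you can probe the local RTSP stream from another terminal while the first two processes are running (ffprobe is installed together with ffmpeg in Step 1); it should report h264 along with the resolution of your test video:
user@ubuntu-host-machine:~$ffprobe -v error -rtsp_transport tcp -show_entries stream=codec_name,width,height rtsp://127.0.0.1:8554/mystream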
Step 4. Edit the Demo App GStreamer Pipeline
You need to edit the sample app configuration file application.json located inside the Palette container. You can use either vim or nano to edit the file.
user@palette-container-id:~$ cd /usr/local/simaai/app_zoo/Gstreamer/YoloV7
user@palette-container-id:/usr/local/simaai/app_zoo/Gstreamer/YoloV7$ vim application.json
The gst element in this file defines the GStreamer launch command. Inside this command, there are three variables that need to be replaced:
<RTSP_SRC>: Replace with rtsp://<ubuntu_host_ip>:8554/mystream
<HOST_IP>: Replace with the Ubuntu host IP address
<PORT>: Replace with the Ubuntu host port on which the video rendering process (the third terminal from the previous step) is listening
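For example, with a hypothetical Ubuntu host at 192.168.1.50 receiving on port 9000, the launch command would start with rtspsrc location=rtsp://192.168.1.50:8554/mystream and end with udpsink host=192.168.1.50 port=9000; <PORT> must match the port used in the udpsrc command from Step 3.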
Explanation of the gst launch command in the application.json file
The gst launch command constructs and executes the multimedia processing and machine learning pipeline that runs on the SiMa MLA. It enables real-time streaming, decoding, processing, and rendering of data streams using various GStreamer plugins. Common use cases include playing media files, capturing video from a camera, streaming RTSP sources, and applying machine learning inference on video frames. By chaining different elements, gst launch allows developers to create flexible and efficient multimedia applications.
"gst": "rtspsrc location=rtsp://<HOST_IP>:8554/mystream !\
rtph264depay wait-for-keyframe=true ! h264parse ! 'video/x-h264, parsed=true, stream-format=(string)byte-stream, alignment=(string)au, width=(int)[1,4096], height=(int)[1,4096]' !\
simaaidecoder sima-allocator-type=2 name='decoder' next-element='CVU' !\
tee name=source ! 'video/x-raw' !\
simaaiprocesscvu_new name=simaai_preprocess num-buffers=5 !\
simaaiprocessmla_new name=simaai_process_mla num-buffers=5 !\
simaaiprocesscvu_new name=simaai_postprocess num-buffers=5 !\
nmsyolov5_new name=simaai_nms_yolov5 orig-img-width=1280 orig-img-height=720 ! \
overlay. source. ! 'video/x-raw' ! \
simaai-overlay2_new name=overlay render-info='input::decoder,bboxy::simaai_nms_yolov5' labels-file='/data/simaai/applications/YoloV7/share/overlay_new/labels.txt' !\
simaaiencoder enc-bitrate=4000 name=encoder !\
h264parse !\
rtph264pay !\
udpsink host=<HOST_IP> port=<PORT>"
RTSP Source:
rtspsrc location=rtsp://<HOST_IP>:8554/mystream: Retrieves the RTSP video stream from the Ubuntu host.
H.264 Stream Handling:
rtph264depay wait-for-keyframe=true: Depayloads the H.264 stream while ensuring it starts with a keyframe.
h264parse: Parses the H.264 stream.
'video/x-h264, parsed=true, stream-format=(string)byte-stream, alignment=(string)au, width=(int)[1,4096], height=(int)[1,4096]': Specifies video format constraints.
Decoding:
simaaidecoder sima-allocator-type=2 name='decoder' next-element='CVU': Uses SiMa AI's hardware decoder and allocates memory.
Processing & Object Detection:
tee name=source: Duplicates the video stream for parallel processing.
simaaiprocesscvu_new name=simaai_preprocess num-buffers=5: Prepares frames for processing.
simaaiprocessmla_new name=simaai_process_mla num-buffers=5: Runs inference using the Machine Learning Accelerator (MLA).
simaaiprocesscvu_new name=simaai_postprocess num-buffers=5: Post-processes the model output.
nmsyolov5_new name=simaai_nms_yolov5 orig-img-width=1280 orig-img-height=720: Applies Non-Maximum Suppression (NMS) for YOLOv5 object detection.
Overlaying Bounding Boxes:
overlay. source. ! 'video/x-raw': Merges the original and processed video streams.
simaai-overlay2_new name=overlay render-info='input::decoder,bboxy::simaai_nms_yolov5' labels-file='/data/simaai/applications/YoloV7_Eth/share/overlay_new/labels.txt': Overlays detected objects on the video, using the provided labels file for object classification.
Encoding & Transmission:
simaaiencoder enc-bitrate=4000 name=encoder: Encodes the processed video stream.
h264parse: Parses the encoded H.264 stream.
rtph264pay: Packs the stream into RTP format.
udpsink host=<HOST_IP> port=<PORT>: Sends the processed video stream over UDP to the specified host.
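To get a feel for how this kind of element chaining works without the SiMa-specific plugins, here is a minimal host-only sketch (not part of the demo; videotestsrc, the loopback address, and port 5000 are illustrative stand-ins for the DevKit side). It encodes a 720p test pattern to H.264, packs it into RTP, and sends it over UDP:
user@ubuntu-host-machine:~$gst-launch-1.0 videotestsrc num-buffers=300 ! video/x-raw,width=1280,height=720 ! x264enc tune=zerolatency bitrate=4000 ! h264parse ! rtph264pay config-interval=1 ! udpsink host=127.0.0.1 port=5000
You can view it with the same udpsrc ... fpsdisplaysink command from Step 3 by setting <PORT> to 5000.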
Step 5. Build and Deploy the Pipeline
From the Palette command line, execute the following commands to connect to the DevKit, build the MPK app package, and deploy and run the app.
user@palette-container-id:{project-folder}$ mpk device connect -t sima@<DevKit_IP_ADDRESS>
user@palette-container-id:{project-folder}$ mpk create -s . -d . --clean
ℹ Compiling a65-apps...
✔ a65-apps compiled successfully.
ℹ Compiling Plugins...
✔ Plugins Compiled successfully.
ℹ Copying Resources...
✔ Resources Copied successfully.
ℹ Building Rpm...
✔ Rpm built successfully.
ℹ Creating mpk file...
✔ Mpk file created successfully at {project-folder}/project.mpk .
user@palette-container-id:{project-folder}$ mpk deploy -f project.mpk
🚀 Sending MPK to <DevKit IP>...
Transfer Progress for project.mpk: 100.00%
🏁 MPK sent successfully!
✔ MPK Deployed! ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
✔ MPK Deployment is successful for project.mpk.
Note
You only need to run mpk device connect once, unless you shut down the DevKit for an extended period of time or restart the Palette container.
Now, you should see the original video overlaid with Yolo bounding boxes on the Ubuntu machine.