sima

Submodules

Classes

SimaBoxRender

A class for rendering bounding boxes on images using the ARM A65 core.

MLSoCSession

A class for interacting with the MLSoC for model inference.

VideoReader

A class for reading video from various sources, including pcie, rtspsrc and filesrc.

VideoWriter

A class for writing a video stream to various destinations, including pcie, rtspsrc and filesrc.

Functions

resize(→ numpy.ndarray)

Resize an image to the specified width and height using the EV74 kernel.

cvtColor(→ numpy.ndarray)

Convert the color space of an image using the EV74's Transform Color Conversion kernel.

set_log_level(level)

Set the log level for the PePPi component.

Package Contents

sima.resize(image, target_width, target_height) → numpy.ndarray

Resize an image to the specified width and height using the EV74 kernel.

Parameters:
  • image (numpy.ndarray) – The input image as a NumPy array.

  • target_width (int) – The desired width of the resized image.

  • target_height (int) – The desired height of the resized image.

Returns:

The resized image as a NumPy array.

Return type:

numpy.ndarray
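
A minimal usage sketch; the zero-filled array below is only a stand-in for a real decoded frame:

    import numpy as np
    import sima

    frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # placeholder 1080p BGR frame
    resized = sima.resize(frame, 640, 480)
    print(resized.shape)   # expected (480, 640, 3), assuming an H x W x C layout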

sima.cvtColor(image, width: int, height: int, color_type: ColorConversionCodes) → numpy.ndarray

Convert the color space of an image using the EV74’s Transform Color Conversion kernel.

This operation is optimized for performance and leverages hardware acceleration for color conversion.

Parameters:
  • image (numpy.ndarray) – The input image as a NumPy array.

  • width (int) – The width of the image.

  • height (int) – The height of the image.

  • color_type (ColorConversionCodes) – The color conversion code indicating the desired color transformation.

Available color conversion options:
  • sima.COLOR_BGRTORGB

  • sima.COLOR_RGBTOBGR

  • sima.COLOR_BGRTOGRAY

  • sima.COLOR_RGBTOGRAY

  • sima.COLOR_IYUVTOBGR

  • sima.COLOR_IYUVTONV12

  • sima.COLOR_NV12TOBGR

  • sima.COLOR_BGRTONV12

  • sima.COLOR_RGBTONV12

  • sima.COLOR_NV12TORGB

Returns:

The image with the converted color space.

Return type:

numpy.ndarray
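
For illustration, a sketch that converts a placeholder BGR frame to RGB and then to grayscale; the width and height arguments are assumed to match the dimensions of the input array:

    import numpy as np
    import sima

    bgr = np.zeros((720, 1280, 3), dtype=np.uint8)   # placeholder BGR frame
    rgb = sima.cvtColor(bgr, 1280, 720, sima.COLOR_BGRTORGB)
    gray = sima.cvtColor(rgb, 1280, 720, sima.COLOR_RGBTOGRAY)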

class sima.SimaBoxRender

A class for rendering bounding boxes on images using the ARM A65 core.

apu_obj = None
classmethod render(image: numpy.ndarray, boxes: spy.defines.BoundingBox, frame_width: int, frame_height: int, labelfile: str) → numpy.ndarray

Renders bounding boxes and labels on the given image.

Parameters:
  • image (np.ndarray) – Input image.

  • boxes (list[BoundingBox]) – List of bounding boxes with attributes (_x, _y, _w, _h, _class_id).

  • frame_width (int) – Width of the frame.

  • frame_height (int) – Height of the frame.

  • labelfile (str) – Path to the label map file (used only for first-time initialization).

Returns:

Image with rendered bounding boxes and labels.

Return type:

np.ndarray
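
A usage sketch, assuming frame is a decoded video frame, boxes is the list of BoundingBox results produced by a detection model (for example via MLSoCSession.run_model), and "labels.txt" is a hypothetical label map path:

    import sima

    # render() is a classmethod; the label file is only read on first use.
    annotated = sima.SimaBoxRender.render(frame, boxes, 1280, 720, "labels.txt")
    # `annotated` is the input image with boxes and class labels drawn on it.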

sima.set_log_level(level: spy.logger.LogLevel)

Set the log level for the PePPi component.

This function configures the verbosity of logging for PePPi, controlling which log messages are recorded. Logs are written to the file located at: /var/log/simaai_peppi.log.

Parameters:

level (LogLevel) – The desired logging level (e.g., DEBUG, INFO, WARNING, ERROR).
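
A usage sketch; the import path follows the spy.logger.LogLevel annotation above, and DEBUG is one of the levels listed there:

    from spy.logger import LogLevel
    import sima

    sima.set_log_level(LogLevel.DEBUG)   # verbose logging, written to /var/log/simaai_peppi.log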

class sima.MLSoCSession(model_file: str, pipeline: str, frame_width: int, frame_height: int, session_name: str = 'model1', ev_preproc=True)

A class for interacting with the MLSoC for model inference.

pipeline
session_name = 'model1'
session_id = '_________Pipeline: Uninferable| Model: model1_________'
ev_preproc = True
parser_obj
model_file
tensor_shapes
create_plugin(plugin_class)
configure(model_external_params) → bool

Configures the EV74 and MLA components with external model parameters.

Parameters:

model_external_params (dict) – A dictionary containing external parameters for configuring the model and related components.
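
A construction-and-configuration sketch; the model package path, pipeline name, and parameter key are placeholders, since the exact contents of model_external_params depend on the deployed model:

    import sima

    session = sima.MLSoCSession(
        "yolov7_mpk.tar.gz",         # placeholder path to the compiled model package
        pipeline="detection_demo",   # placeholder pipeline name
        frame_width=1280,
        frame_height=720,
    )
    session.configure({"confidence_threshold": 0.4})   # illustrative key only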

a65_preprocess(in_frame: numpy.ndarray) → numpy.ndarray
preprocess(in_frame: numpy.ndarray) → numpy.ndarray

Preprocesses the input frame for model inference based on the EV74 configuration.

Parameters:

in_frame (np.ndarray) – The input frame as a NumPy array.

Returns:

The preprocessed frame ready for model inference.

Return type:

np.ndarray
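
Continuing the session sketch above, with frame standing in for a decoded input image:

    pre = session.preprocess(frame)   # EV74 path when ev_preproc=True
    # a65_preprocess(frame) appears to be the CPU (A65) variant; which path applies
    # depends on how the session was created.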

run_model(frame: numpy.ndarray) → numpy.ndarray | List[numpy.ndarray]

Runs the model inference (Preprocess, MLA, and Postprocess) on a given frame.

For the following models, the output is from SimaBoxDecode:

  • centernet

  • yolo

  • detr

  • effdet

Parameters:

frame (np.ndarray) – Input frame as a NumPy array.

Returns:

The output tensor after running the model and applying the appropriate postprocessing.
  • For the models mentioned above, the output is from SimaBoxDecode containing bounding boxes.

  • For other models, the output is from Detess Dequant.

Return type:

np.ndarray or List[np.ndarray]
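
Continuing the sketch, running the full inference path on a single frame; for the detection models listed above the result can be passed directly to SimaBoxRender, otherwise it is the dequantized output tensor(s):

    output = session.run_model(frame)
    # centernet / yolo / detr / effdet -> bounding boxes from SimaBoxDecode
    # other models                     -> Detess Dequant output tensor(s)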

set_log_level(level: spy.logger.LogLevel)
get_configs()
get_inference_resolution()
release()
class sima.VideoReader(source: str = None, frame_width: int = 1280, frame_height: int = 720)

A class for reading video from various sources, including pcie, rtspsrc and filesrc.

read() → Tuple[int, numpy.ndarray]

Reads a frame from the video source.

isOpened() → bool

Check whether the video source has been successfully opened.

This method verifies that the underlying video reader or stream is properly initialized and ready for reading frames.

Returns:

True if the video source is open and accessible; False otherwise.

Return type:

bool

get_cam_resolution() → Tuple[int, int]

Get the frame width and frame height.

release()

Releases the VideoReader resources.

set_loop(val)
property frame_num
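
A read-loop sketch; the source string is a placeholder, and its exact format (file path, RTSP URL, or pcie endpoint) depends on the deployment:

    import sima

    reader = sima.VideoReader("/data/videos/input.mp4", frame_width=1280, frame_height=720)
    while reader.isOpened():
        ret, frame = reader.read()
        if not ret:
            break
        # ... process `frame` ...
    reader.release()
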
class sima.VideoWriter(source: str = None, host_ip: str = None, port: int = None, frame_width: int = None, frame_height: int = None)

A class for writing a video stream to various destinations, including pcie, rtspsrc and filesrc.

write(frame, meta=None)

Writes a frame to the video sink.

release()

Releases the VideoWriter resources.
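
Putting the pieces together, an end-to-end sketch for a detection pipeline; every path, the pipeline name, and the label file are placeholders, and it assumes a model whose run_model output is the SimaBoxDecode box list:

    import sima

    reader = sima.VideoReader("/data/videos/input.mp4", 1280, 720)
    writer = sima.VideoWriter("/data/videos/output.mp4", frame_width=1280, frame_height=720)
    session = sima.MLSoCSession("yolov7_mpk.tar.gz", "detection_demo", 1280, 720)

    while reader.isOpened():
        ret, frame = reader.read()
        if not ret:
            break
        boxes = session.run_model(frame)
        annotated = sima.SimaBoxRender.render(frame, boxes, 1280, 720, "labels.txt")
        writer.write(annotated)

    session.release()
    reader.release()
    writer.release()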