System Architectures
Modern machine learning and inferencing applications demand flexible architectures that can address a variety of deployment scenarios. SiMa.ai’s solutions are designed to adapt to these needs, offering configurations that balance performance, efficiency, and scalability. The cornerstone of these architectures is the MLSoC, which provides the hardware needed to build AI-driven products, whether integrated into a larger system or used as a standalone device. The sections below describe three configurations: a standalone MLSoC, the MLSoC as a PCIe card in a host system, and hardwareless evaluation on the Edgematic platform.
Standalone Architecture
In this architecture, the SiMa MLSoC operates independently as a self-contained device. It is particularly well suited for applications where compactness, efficiency, and minimal power consumption are critical.
Key Use Cases
Edge AI Applications: Deployed at the edge to perform inferencing without relying on a central server or cloud infrastructure. Ideal for applications like smart cameras, industrial IoT devices, or autonomous robots.
Cost-Sensitive Deployments: Reduces the need for additional hardware, making it a cost-effective solution for standalone operations.
Power-Constrained Environments: Optimized for scenarios where energy efficiency is paramount, such as remote monitoring systems powered by batteries or solar panels.
Advantages
Self-Contained: Does not require a host system, simplifying deployment and reducing system complexity.
Energy Efficient: Designed for low power consumption, making it suitable for power-sensitive environments.
Compact and Portable: The small form factor allows it to be easily deployed in space-constrained scenarios.
Typical Data Processing Flow
Data is received directly from network interfaces or sensors.
The SiMa MLSoC is loaded, over the network, with a predefined GStreamer pipeline packaged in MPK format (see the sketch after this list).
The SiMa MLSoC performs inferencing and processes the data locally.
Results are sent to other devices or systems via network connections for further action or visualization.
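The flow above maps naturally onto a GStreamer pipeline running on the device. The sketch below shows what such a pipeline could look like from Python, assuming an RTSP camera as the network source. The element name `sima_infer` and its `config` property are hypothetical placeholders, not real plugin names; the actual elements are defined by the MPK pipeline deployed to the device.

```python
# A minimal sketch of the standalone data flow, assuming an RTSP camera
# as the network source. The element name "sima_infer" and its "config"
# property are hypothetical placeholders; the real elements come from
# the MPK pipeline deployed to the device.
import gi

gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Steps 1, 3, and 4: ingest frames from the network, run inferencing
# locally, and publish results to a downstream system.
pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://camera.local/stream ! decodebin "
    "! videoconvert "
    "! sima_infer config=model.json "  # hypothetical inference element
    "! udpsink host=192.168.1.50 port=5000"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
finally:
    pipeline.set_state(Gst.State.NULL)
```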
PCIe Add-In Card Architecture
In this architecture, the SiMa MLSoC functions as a PCIe card that integrates into a host system. This setup is ideal for applications where the host system (such as a server, desktop, or workstation) requires additional computational resources for inferencing but already has sufficient capabilities for data storage and processing.
Key Use Cases
High-Performance Systems: Suitable for data centers or high-end workstations where large-scale data processing and storage are critical. The host system handles CPU-intensive tasks, while the SiMa MLSoC performs dedicated inferencing operations.
Extended I/O Solutions: The target system may require I/O capabilities that are not available on the SiMa MLSoC itself, such as USB interfaces or other sensor inputs. This architecture lets the MLSoC leverage the host system’s extended I/O resources for applications like video analytics or AI model inference at scale.
Advantages
Enhanced Computational Resources: Augments the host system’s capabilities by offloading inferencing tasks to the dedicated hardware.
Flexibility: Ideal for environments where additional inferencing cards can be added as demand grows.
Typical Data Processing Flow
The host system captures data from sensors, peripherals, storage, or network interfaces.
The SiMa MLSoC is loaded with a predefined GStreamer pipeline in MPK format, along with the adapted AI model (see the sketch after this list).
Data is sent to the SiMa MLSoC via the PCIe interface for inferencing.
Results are processed by the host system or forwarded to downstream systems for further action.
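To make the host-side flow concrete, here is a minimal sketch of the capture, offload, and postprocess loop. The `sima_pcie` module, its `Device` class, and the `submit()`/`wait()` calls are hypothetical stand-ins for the card's real host runtime; only the overall pattern (load the MPK once, then stream frames across PCIe) follows the steps above.

```python
# A host-side sketch of the PCIe offload flow. The "sima_pcie" module,
# its Device class, and the submit()/wait() calls are hypothetical
# stand-ins for the card's real host runtime.
import numpy as np
import sima_pcie  # hypothetical host runtime for the PCIe card


def postprocess(result):
    # Placeholder for host-side handling (thresholding, forwarding, ...).
    return result


def run_inference_loop(frames):
    # Step 2: load the MPK pipeline and adapted model onto the card once.
    device = sima_pcie.Device(index=0)
    device.load_pipeline("pipeline.mpk")

    for frame in frames:
        request_id = device.submit(frame)   # step 3: send over PCIe
        result = device.wait(request_id)    # step 4: collect the result
        yield postprocess(result)


if __name__ == "__main__":
    # Step 1 stand-in: synthetic frames instead of real sensor capture.
    frames = (np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(10))
    for detection in run_inference_loop(frames):
        print(detection)
```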
Hardwareless Architecture (Edgematic)
In this architecture, developers can evaluate and test their pipelines without purchasing or setting up the SiMa DevKit. Instead, the Edgematic platform provides a cloud-based, low-code environment where users can design, deploy, and benchmark pipelines. This architecture is ideal for customers who want to explore SiMa.ai’s capabilities, validate their workflows, and evaluate performance before committing to hardware.
Key Use Cases
Quick Prototyping and Evaluation: Ideal for developers and teams looking to rapidly prototype AI pipelines and evaluate performance without upfront investment in hardware.
Cloud-Based Testing: Enables developers to validate their workflows and model compatibility with SiMa.ai’s infrastructure using the cloud-based Edgematic platform, making it accessible from anywhere.
Advantages
No Hardware Requirements: Allows developers to test and optimize pipelines without purchasing a SiMa DevKit, reducing initial costs.
Seamless Cloud Integration: Provides a fully cloud-hosted environment, accessible via a web interface, for easy pipeline evaluation and benchmarking.
Flexibility and Scalability: Enables iterative testing and tuning of pipelines without the constraints of physical hardware.
Typical Data Processing Flow
Users design and define their GStreamer pipeline using the Edgematic visual interface.
The pipeline and models are uploaded to Edgematic’s cloud-based environment.
Static data is provided through cloud resources for inferencing.
Results are processed and displayed in real time on Edgematic, enabling users to refine and optimize their pipelines (a hypothetical scripted version of this flow is sketched below).
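Edgematic is primarily a visual, low-code environment, so the steps above are normally performed in the browser. Purely for illustration, the sketch below shows how the same upload, run, and poll loop might look if driven over HTTP; the base URL, endpoints, and JSON fields are invented for this example and do not describe a documented Edgematic API.

```python
# A hypothetical scripted version of the Edgematic flow. The base URL,
# endpoints, and JSON fields are invented for illustration and do not
# describe a documented API.
import time

import requests

BASE = "https://edgematic.example.com/api/v1"    # placeholder URL
AUTH = {"Authorization": "Bearer <your-token>"}  # placeholder token

# Step 2: upload the pipeline definition to the cloud environment.
with open("pipeline.json", "rb") as f:
    pipeline = requests.post(
        f"{BASE}/pipelines", headers=AUTH, files={"pipeline": f}
    ).json()

# Step 3: start a run against static data already hosted in the cloud.
run = requests.post(
    f"{BASE}/pipelines/{pipeline['id']}/runs",
    headers=AUTH,
    json={"dataset": "sample-videos"},
).json()

# Step 4: poll until the run completes, then inspect the results.
while True:
    status = requests.get(f"{BASE}/runs/{run['id']}", headers=AUTH).json()
    if status["state"] in ("finished", "failed"):
        print(status.get("metrics"))
        break
    time.sleep(5)
```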
This hardwareless architecture empowers developers with flexibility and accessibility, allowing them to explore and evaluate SiMa.ai’s platform without committing to hardware upfront.