.. _Model Quantization:

Quantize
########

Once the original trained model has been imported into the ``loaded_net`` format using the ``load_model()`` API, it can be quantized using the :py:meth:`afe.apis.loaded_net.LoadedNet.quantize` API.

The SiMa.ai software and silicon currently support 8-bit and 16-bit integer operations on the Machine Learning Accelerator (MLA) and floating-point operations on the Application Processing Unit (APU) and Computer Vision Unit (CVU). In general, pre- and post-processing functions are implemented on the APU and CVU, while model layers such as convolution and pooling run on the MLA. The quantizer partitions the model across compute units automatically, and only the parts that run on the MLA are quantized.

.. dropdown:: Setting the Quantization Configuration Parameters
    :animate: fade-in
    :color: secondary
    :open:

    Quantization configuration in the ModelSDK involves setting the parameters for model quantization and enabling optional features such as channel equalization.

    The :py:meth:`afe.apis.defines.default_quantization` parameters provide a baseline configuration that can be used to quantize a model in the default way. These parameters can also serve as a starting point for creating custom quantization configurations.

    Here is an example of how to apply the default quantization configuration:

    .. code-block:: python

        from afe.apis.defines import default_quantization

        # Quantize the model with default settings
        quant_model = loaded_net.quantize(
            calibration_data=calib_data,
            quantization_config=default_quantization,
            model_name="my_model"
        )

    Additionally, channel equalization is an optional preprocessing step that can be enabled to equalize the distribution of weight tensors across different channels, potentially improving quantization accuracy. This option is configured using :py:meth:`afe.apis.defines.QuantizationParams.with_channel_equalization`.

.. dropdown:: Defining Different Quantization Schemes
    :animate: fade-in
    :color: secondary
    :open:

    Use :py:meth:`afe.apis.defines.quantization_scheme` to create a quantization scheme. Note that only symmetric quantization is supported for weights, and only per-tensor quantization is supported for activations.

    .. code-block:: python

        from afe.apis.defines import quantization_scheme, default_quantization
        import dataclasses

        # Define quantization schemes for weights and activations
        symmetric_per_tensor_8_bits = quantization_scheme(asymmetric=False, per_channel=False, bits=8)
        symmetric_per_channel_8_bits = quantization_scheme(asymmetric=False, per_channel=True, bits=8)
        asymmetric_per_tensor_8_bits = quantization_scheme(asymmetric=True, per_channel=False, bits=8)

        # Set the quantization configuration to the default
        quant_configs = default_quantization

        # Modify the default to change weight quantization
        quant_configs = dataclasses.replace(quant_configs, weight_quantization_scheme=symmetric_per_channel_8_bits)

        # Modify the default to change activation quantization
        quant_configs = dataclasses.replace(quant_configs, activation_quantization_scheme=symmetric_per_tensor_8_bits)

        # Quantize
        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=quant_configs,
                                          model_name="my_model")

    There is also a default ``int16`` quantization config, :py:meth:`afe.apis.defines.int16_quantization`, that often gives better results:

    .. code-block:: python

        from afe.apis.defines import int16_quantization

        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=int16_quantization,
                                          model_name="my_model")
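    The pieces above can also be combined into a fully custom configuration. The following is a minimal sketch that selects 16-bit schemes and enables the channel equalization option described earlier; it assumes that ``with_channel_equalization`` accepts a boolean flag (check the API reference for the exact signature):

    .. code-block:: python

        from afe.apis.defines import quantization_scheme, default_quantization
        import dataclasses

        # 16-bit schemes (weights symmetric, activations per-tensor, as required)
        symmetric_per_channel_16_bits = quantization_scheme(asymmetric=False, per_channel=True, bits=16)
        asymmetric_per_tensor_16_bits = quantization_scheme(asymmetric=True, per_channel=False, bits=16)

        # Start from the defaults and override the schemes
        quant_configs = dataclasses.replace(default_quantization,
                                            weight_quantization_scheme=symmetric_per_channel_16_bits,
                                            activation_quantization_scheme=asymmetric_per_tensor_16_bits)

        # Optionally enable channel equalization (assumed to take a boolean flag)
        quant_configs = quant_configs.with_channel_equalization(True)

        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=quant_configs,
                                          model_name="my_model")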
.. dropdown:: Mixed-precision Quantization
    :animate: fade-in
    :color: secondary
    :open:

    To maximize a compiled model's execution speed while achieving a chosen accuracy, the ModelSDK supports quantizing an ``FP32`` model into mixed ``INT8`` and ``INT16`` precision. To quantize a model with mixed precision, you list the layers/operators that you are targeting for ``INT16`` quantization; by default the ModelSDK quantizes ``FP32`` to ``INT8``, so the remaining layers/operators are quantized to ``INT8``. To add layers to the ``INT16`` quantization scheme, call ``with_custom_quantization_configs`` on ``default_quantization`` with the ``INT16``-targeted layers, and pass the result as the ``quantization_config`` parameter of the quantize API.

    Automatic Mixed-precision Quantization
    ========================================

    In this form of quantization, the software automatically quantizes the model in different configurations, measures the accuracy of each, and returns the best candidate found. This option is best used when the desired accuracy lies between the accuracy achieved with INT8 and the accuracy achieved with INT16.

    The function :py:meth:`afe.apis.loaded_net.LoadedNet.quantize_with_accuracy_feedback` is intended to minimize the number of 16-bit nodes while reaching the target accuracy. Failing to reach the target accuracy indicates an error; however, minor accuracy losses may occur due to the statistical nature of the accuracy calculation.

    .. code-block:: python

        model = loaded_net.quantize_with_accuracy_feedback(
            calibration_data,
            evaluation_dataset,
            quant_configs,
            accuracy_score=accuracy,
            target_accuracy=0.89)

    Manual Mixed-precision Quantization
    =====================================

    Below is example code for the ResNet50 model to add to your script before the ``quantize()`` API call.

    .. code-block:: python

        from afe.core.configs import QuantizationPrecision

        # Quantize the loaded net, setting the quantization precision of some nodes to int16.
        # You can provide a custom quantization config dictionary in the following form:
        #     custom_config_dict = {'node_name': {'quantization_config_field': value}}
        # For example, to set the precision of a certain node to int16, provide the config as follows:
        #     custom_config_dict = {'node_name': {'quantization_precision': QuantizationPrecision.INT_16}}
        # Node names must be taken from the SiMa IR graph. To obtain them, run int8 quantization
        # and examine the .sima.json file in Netron. It is your responsibility to decide which
        # nodes should be quantized with higher precision.
        int16_nodes = [
            "MLA_0/conv2d_add_relu_0",
            "MLA_0/max_pool2d_1",
            "MLA_0/conv2d_add_relu_2",
            "MLA_0/conv2d_add_relu_3",
            "MLA_0/conv2d_add_4",
            "MLA_0/conv2d_add_5"
        ]

        quant_configs = default_quantization.with_custom_quantization_configs(
            {int16_node: {'quantization_precision': QuantizationPrecision.INT_16} for int16_node in int16_nodes}
        )
        sdk_net = loaded_net.quantize(
            calibration_data=calibration_data,
            quantization_config=quant_configs,
            model_name="tut_resnet50"
        )

    You can verify that the layers/operators you added to the ``int16_nodes`` list were quantized to ``INT16`` by executing and saving the ``sdk_net`` output. As shown in the console listing below, open the ``tut_resnet_tensorflow.sima.json`` file and search for any layer from the list; ``input_int16`` will be ``True`` for that layer/operator.
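    The console listing that follows shows the manual check; as an alternative, a short script can list the nodes whose ``input_int16`` flag is set. This is only a sketch: the exact JSON schema is not documented here, so it walks the structure generically, looking for entries that carry a ``name`` and an ``input_int16`` field.

    .. code-block:: python

        import json

        def find_int16_nodes(json_path):
            """Return the names of nodes whose 'input_int16' attribute is true."""
            with open(json_path) as f:
                doc = json.load(f)

            found = []

            def walk(obj, current_name=None):
                # Depth-first walk over dicts/lists; remember the nearest enclosing 'name'.
                if isinstance(obj, dict):
                    name = obj.get("name", current_name)
                    if obj.get("input_int16") is True and name is not None:
                        found.append(name)
                    for value in obj.values():
                        walk(value, name)
                elif isinstance(obj, list):
                    for item in obj:
                        walk(item, current_name)

            walk(doc)
            return found

        print(find_int16_nodes("tut_resnet_tensorflow.sima.json"))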
    .. code-block:: console

        sima-user@docker-image-id:model_zoo/resnet50/sdk_net_manual_mixed$ ls -ltr
        total 100536
        -rw-r--r-- 1 sima-user sima-user    388338 Aug  1 15:18 tut_resnet_tensorflow.sima.json
        -rw-r--r-- 1 sima-user sima-user 102552052 Aug  1 15:18 tut_resnet_tensorflow.sima
        sima-user@docker-image-id:model_zoo/resnet50/sdk_net_manual_mixed$ vi tut_resnet_tensorflow.sima.json

            "class_name": "MLA_0/Conv2DAddActivation_5",
            "name": "MLA_0/conv2d_add_5",
            "inbound_nodes": [
            ...
            ...
            "activ_attrs": null,
            "input_int16": true,
            "msb_left_shift": true,

.. dropdown:: Defining a Different Calibration Method
    :animate: fade-in
    :color: secondary
    :open:

    When configuring the quantization parameters, users can select from the calibration methods listed below. The MSE method has been observed to consistently provide better results and has therefore been set as the default calibration method.

    - :py:meth:`afe.apis.defines.QuantizationParams.with_calibration`
    - :py:meth:`afe.apis.defines.HistogramMSEMethod`
    - :py:meth:`afe.apis.defines.MinMaxMethod`
    - :py:meth:`afe.apis.defines.MovingAverageMinMaxMethod`
    - :py:meth:`afe.apis.defines.HistogramEntropyMethod`
    - :py:meth:`afe.apis.defines.HistogramPercentileMethod`

    The helper function :py:meth:`afe.apis.defines.CalibrationMethod.from_str` can be used to construct a calibration method. For example, the following code changes the calibration method in the default quantization configuration:

    .. code-block:: python

        # Use the MSE calibration method
        quant_configs = default_quantization
        calibration_method = CalibrationMethod.from_str('mse')
        quant_configs = quant_configs.with_calibration(calibration_method)
        sdk_net = loaded_net.quantize(calibration_data=calibration_data,
                                      quantization_config=quant_configs,
                                      model_name="test_model")

    .. code-block:: python

        # Use the percentile calibration method with a custom percentile value and number of bins
        quant_configs = default_quantization
        calibration_method = HistogramPercentileMethod(91.0, 2048)
        quant_configs = quant_configs.with_calibration(calibration_method)
        sdk_net = loaded_net.quantize(calibration_data=calibration_data,
                                      quantization_config=quant_configs,
                                      model_name="test_model")

    .. code-block:: python

        from afe.apis.defines import default_quantization, MinMaxMethod, MovingAverageMinMaxMethod, \
            HistogramMSEMethod, HistogramEntropyMethod, HistogramPercentileMethod
        import dataclasses

        # Set the quantization configuration to the default
        quant_configs = default_quantization

        # Modify the default quantization to use moving-average min-max calibration
        quant_configs = dataclasses.replace(quant_configs, calibration_method=MovingAverageMinMaxMethod())

        # Quantize
        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=quant_configs,
                                          model_name="my_model")
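    The histogram-based methods can also be constructed directly with explicit tuning arguments, as the percentile example above shows. The sketch below passes a bin count to the MSE method; the value is illustrative and mirrors the ``HistogramMSEMethod(1024)`` call used in the cycling template later in this section:

    .. code-block:: python

        from afe.apis.defines import default_quantization, HistogramMSEMethod

        # MSE calibration with an explicit histogram bin count (illustrative value)
        quant_configs = default_quantization.with_calibration(HistogramMSEMethod(1024))

        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=quant_configs,
                                          model_name="my_model")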
.. dropdown:: Defining INT32 Quantization for Final MLA Layer
    :animate: fade-in
    :color: secondary
    :open:

    In addition to mixed int8/int16 precision, ``with_custom_quantization_configs`` can enable int32 output for the final convolution node that runs on the MLA. As with mixed-precision quantization, call ``with_custom_quantization_configs`` on ``default_quantization`` and pass the result as the ``quantization_config`` parameter of the quantize API.

    .. function:: with_custom_quantization_configs(custom_quantization_configs: Dict[NodeName, Dict[str, Any]]) -> QuantizationParams
        :module: afe.apis.defines.QuantizationParams

        Set custom quantization options for selected nodes. ``custom_quantization_configs`` is a dictionary mapping a node name to new values in ``QuantizationConfigs``. This API is used in the following two scenarios (see the examples below):

        #. Enable the int32 output of the last convolution node.
        #. Enable mixed-precision quantization.

        Users must get the node names from the SiMa IR graph. To obtain node names, run int8 quantization of the model, then save it and examine the ``.sima.json`` file in Netron. It is the user's responsibility to decide which nodes should be quantized with higher precision.

        :returns: The quantization configuration with the custom quantization config applied.

    Below is an example of int8 quantization in which the last convolution node is quantized as int32. Assuming ``MLA_1/conv2d_add_84`` is the node name of the last layer of a model:

    .. code-block:: python

        custom_quantization_configs = {
            "MLA_1/conv2d_add_84": {"output_int32": True}
        }
        quant_configs = default_quantization.with_custom_quantization_configs(
            custom_quantization_configs
        )

    Below is an example of mixed-precision quantization where most layers are quantized as int8 and the remaining layers are quantized as int16.

    .. code-block:: python

        int16_nodes = [
            "MLA_0/conv2d_add_relu_0",
            "MLA_0/max_pool2d_1",
            "MLA_0/conv2d_add_relu_2",
            "MLA_0/conv2d_add_relu_3",
            "MLA_0/conv2d_add_4",
            "MLA_0/conv2d_add_5"]

        quant_configs = default_quantization.with_custom_quantization_configs(
            {
                int16_node: {'quantization_precision': QuantizationPrecision.INT_16}
                for int16_node in int16_nodes
            }
        )

.. dropdown:: Overriding Configuration Parameters of Quantization
    :animate: fade-in
    :color: secondary
    :open:

    Use the following APIs to override configuration parameters of quantization.

    - :py:meth:`afe.apis.defines.QuantizationParams.with_activation_quantization`
    - :py:meth:`afe.apis.defines.QuantizationParams.with_weight_quantization`
    - :py:meth:`afe.apis.defines.QuantizationParams.with_unquantized_nodes`
    - :py:meth:`afe.apis.defines.QuantizationParams.with_requantization_mode`
    - :py:meth:`afe.apis.defines.QuantizationParams.with_bias_correction`

    The following is an example of how to construct quantization parameters. It overrides how activations are quantized, sets the requantization mode, and enables bias correction, while leaving other settings at their default values.

    .. code-block:: python

        requantization_mode = RequantizationMode.tflite
        asymmetric_pertensor_int16 = quantization_scheme(asymmetric=True, per_channel=False, bits=16)
        enable_bias_correction = True

        quant_config = default_quantization \
            .with_activation_quantization(asymmetric_pertensor_int16) \
            .with_requantization_mode(requantization_mode) \
            .with_bias_correction(enable_bias_correction)
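    Weight quantization can be overridden in the same way. The sketch below assumes that ``with_weight_quantization`` accepts a quantization scheme, mirroring the ``with_activation_quantization`` call above:

    .. code-block:: python

        # Per-channel symmetric 8-bit weights, asymmetric per-tensor 8-bit activations
        symmetric_perchannel_int8 = quantization_scheme(asymmetric=False, per_channel=True, bits=8)
        asymmetric_pertensor_int8 = quantization_scheme(asymmetric=True, per_channel=False, bits=8)

        quant_config = default_quantization \
            .with_weight_quantization(symmetric_perchannel_int8) \
            .with_activation_quantization(asymmetric_pertensor_int8)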
.. dropdown:: Code Template for Cycling through Quantization Options
    :animate: fade-in
    :color: secondary
    :open:

    This section describes one possible method of running all quantization options for a single model.

    .. code-block:: python

        # Define quantization schemes and calibration methods
        symmetric_per_tensor = quantization_scheme(asymmetric=False, per_channel=False)
        symmetric_per_channel = quantization_scheme(asymmetric=False, per_channel=True)
        asymmetric_per_tensor = quantization_scheme(asymmetric=True, per_channel=False)

        weights_schemes = [symmetric_per_channel, symmetric_per_tensor]
        activation_schemes = [symmetric_per_tensor, asymmetric_per_tensor]

        cal_methods = [MinMaxMethod(),
                       MovingAverageMinMaxMethod(),
                       HistogramMSEMethod(1024),
                       HistogramEntropyMethod(),
                       HistogramPercentileMethod(99.9)]

    Track the best result in a variable:

    .. code-block:: python

        # Set the initial best accuracy to 0
        current_best_accuracy = 0.0

    Cycle through all options, evaluating each time and saving the quantized model if the accuracy improves:

    .. code-block:: python

        quant_configs = default_quantization

        for c in cal_methods:
            quant_configs = dataclasses.replace(quant_configs, calibration_method=c)
            for w in weights_schemes:
                quant_configs = dataclasses.replace(quant_configs, weight_quantization_scheme=w)
                for a in activation_schemes:
                    quant_configs = dataclasses.replace(quant_configs, activation_quantization_scheme=a)

                    # Quantize
                    quant_model = loaded_net.quantize(calibration_data=calib_data,
                                                      quantization_config=quant_configs,
                                                      model_name=model_name)

                    # Iterate over the test data to evaluate the quantized model
                    correct = 0
                    for input_name in model_ip_names:
                        for i in range(0, test_images):
                            inputs = dict()
                            inputs[input_name] = _preprocessing(data[i])

                            # Evaluate the quantized net - argmax post-processing
                            quantized_net_output = np.argmax(quant_model.execute(inputs))
                            if labels[i] == quantized_net_output:
                                correct += 1
                    accuracy = correct / test_images

                    # Save the model if the accuracy improved
                    if accuracy > current_best_accuracy:
                        current_best_accuracy = accuracy
                        quant_model.save(model_name=model_name, output_directory=saved_model_directory)

.. dropdown:: Saving and Loading a Quantized Model
    :animate: fade-in
    :color: secondary
    :open:

    A quantized model can be saved to disk and loaded back as follows:

    .. function:: Model.save(model_name: str, output_directory: str = '', *, log_level: Optional[int] = logging.NOTSET) -> None
        :module: Model

        Saves a Model. This API generates two files: a SiMa.ai file for the saved network and a JSON file that can be displayed using Netron. The SiMa.ai file name is created by appending ``.sima`` to ``model_name``, if it does not already contain the extension ``.sima``. The JSON file name is created by appending ``.json`` to the SiMa.ai file name after the ``.sima`` extension. Both files are put into ``output_directory``, which defaults to the current working directory.

        :param model_name: Required. The name to save the model as.
        :param output_directory: The folder to save the model in. The default is the current working directory.
        :param log_level: The default value is ``logging.NOTSET``. The type of logging to have during the saving of the model.

    .. function:: Model.load(model_name: str, network_directory: str = '', *, log_level: Optional[int] = logging.NOTSET) -> Model

        Loads a Model from a ``.sima`` file. The file name to load is created by appending ``.sima`` to the model name, if it does not already have the ``.sima`` extension. It is loaded from ``network_directory``, which defaults to the current working directory.

        :param model_name: Required. The name of the model to be loaded.
        :param network_directory: The folder to load the model from. The default is the current working directory.
        :param log_level: The default value is ``logging.NOTSET``. The type of logging to have during the loading of the model.
    .. code-block:: python

        quant_model.save(model_name=model_name, output_directory=saved_model_directory)

    This creates two files, ``model_name.sima`` and ``model_name.sima.json``, which are written to the folder specified by ``output_directory``. If no output directory is given, the files are written to the current working directory. The JSON file can be opened for viewing using the third-party Netron tool.
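    A saved model can later be reloaded with ``Model.load``. This is a sketch: it assumes ``Model`` is importable from ``afe.apis.model`` (the module referenced elsewhere in this section) and reuses the names from the save example above.

    .. code-block:: python

        from afe.apis.model import Model  # assumed import path

        # Reload the model saved above from the same directory
        quant_model = Model.load(model_name=model_name, network_directory=saved_model_directory)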
.. dropdown:: TFLite Pre-Quantized Support
    :animate: fade-in
    :color: secondary
    :open:

    The ModelSDK is able to ingest a model that was previously quantized using TFLite. The model is first imported using the ``load_model()`` API and then converted using ``convert_to_sima_quantization()`` to prepare it for compiling. This function converts a loaded pre-quantized model into a Model, using the quantization that was provided in the input model; the model must have been quantized prior to loading. Any floating-point computation in the model remains floating-point and does not run on the MLA.

    .. function:: LoadedNet.convert_to_sima_quantization(*, requantization_mode: RequantizationMode = RequantizationMode.sima, model_name: Optional[str] = None, log_level: Optional[int] = logging.NOTSET) -> Model
        :module: LoadedNet

        :param requantization_mode: Determines which set of arithmetic approximations the ModelSDK uses.
        :param model_name: If given, is used as the name of the returned model.
        :param log_level: Default is ``logging.NOTSET``. Sets the logging level for this API call as described in the Logging section.
        :Exceptions raised: A pre-quantized operator cannot be converted to a SiMa quantized operator.

    This example shows how to load and convert a pre-quantized TFLite model that has a single input named ``input_1`` of shape ``1,224,224,3``:

    .. code-block:: python

        # Imports
        from afe.load.importers.general_importer import ImporterParams, tflite_source
        from afe.apis.loaded_net import load_model
        from afe.ir.tensor_type import ScalarType
        from afe.ir.defines import QuantizationFlavor

        model_path = 'path/to/quantized/tflite/model'

        # Input shapes dictionary - NHWC format
        # Each key,value pair defines an input name and its shape
        input_shapes = {'input_1': (1, 224, 224, 3)}

        # Input types dictionary
        # Each key,value pair defines an input and its type
        input_types = {'input_1': ScalarType.uint8}

        # Importer parameters
        importer_params: ImporterParams = tflite_source(model_path=model_path,
                                                        shape_dict=input_shapes,
                                                        dtype_dict=input_types)

        # Load the model
        loaded_net = load_model(importer_params)

        # Convert the quantized model - the default is QuantizationFlavor.sima
        quant_model = loaded_net.convert_to_sima_quantization()
        # ..or use the TFLite flavor:
        # quant_model = loaded_net.convert_to_sima_quantization(quantization_flavor=QuantizationFlavor.tflite)

        # Save the quantized model
        quant_model.save(model_name=model_name, output_directory=saved_model_directory)

    The ModelSDK generally uses different quantized arithmetic than other frameworks, so conversion introduces differences due to rounding and other approximations. Two main quantization "flavors" are supported:

    ================================= =========================================================================================================================================================
    Quantization Flavor               Description
    ================================= =========================================================================================================================================================
    QuantizationFlavor.sima (default) An efficient form of arithmetic that may sometimes sacrifice accuracy. Higher performance with possible loss of precision.
    QuantizationFlavor.tflite         Performs arithmetic as described by the TFLite documentation, involving multiply, shift, round, and add operations. Lower performance but may be more accurate.
    ================================= =========================================================================================================================================================

    It is recommended to first try the default quantization flavor; if the model's accuracy is not sufficient, use ``QuantizationFlavor.tflite``.

.. dropdown:: Per-Layer Quantization Statistics
    :animate: fade-in
    :color: secondary
    :open:

    Given the FP32 and quantized versions of an ML network, the ModelSDK can report per-layer quantization statistics. The API function that invokes this feature is shown below; it is useful for determining how each layer of the quantized model contributes to the overall error.

    .. function:: Model.analyze_quantization_error(evaluation_data: Iterable[InputValues], error_metric: Metric, *, local_feed: bool, log_level: Optional[int] = logging.NOTSET)
        :module: Model

        :param evaluation_data: A Python Iterable that provides data samples for the error analysis.
        :param error_metric: Defines the error calculation method:

            * **mae**: Mean absolute error
            * **mse**: Mean squared error
            * **psnr**: Peak signal-to-noise ratio

        :param local_feed: Boolean. See the description below.
        :param log_level: Default is ``logging.NOTSET``. Sets the logging level for this API call as described in the Logging section.
        :param simulated_arm: Do not use; leave at the default value.

    .. code-block:: python

        # Load the FP32 model
        loaded_net = load_model(params)

        # Quantize
        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=default_quantization,
                                          model_name=model_name)

        # Run the per-layer analysis
        quant_model.analyze_quantization_error(evaluation_data=test_data,
                                               error_metric='mae',
                                               local_feed=True)

        # Save
        quant_model.save(model_name=model_name, output_directory=saved_model_directory)

    Once the model is saved to disk, the ``_model_name_.sima.json`` file contains the error values for each layer. These can be seen by examining the JSON file with a text editor and searching for the "Layer Statistics" string:

    .. code-block:: json

        "_status": "SIMA_QUANTIZED",
        "Layer Statistics": {
            "metric": "mae",
            "error_value": 3.9165286500938237,
            "__decode_key__": "LayerStats"
        },

    ..or by opening the JSON file with Netron, clicking on the layer in the model diagram, and viewing the 'Layer Statistics' field:

    .. image:: ../../_static/netron_quant_stats.png
        :scale: 100%

    .. note::
        It is not currently possible to run per-layer quantization statistics on a quantized model that has first been saved to disk and then loaded. The analysis must be run on a model that has just been quantized using the ``quantize()`` API.
    - ``local_feed=True``: The floating-point model is fed the evaluation data defined by the ``evaluation_data`` argument. The input of each node in the quantized network is fed the output data from the preceding node in the floating-point network; the output of the quantized node is dequantized and compared to the output of the equivalent node in the floating-point network, and the error value is calculated according to the chosen metric. The displayed error value is therefore the error of that particular node only.

    - ``local_feed=False``: The quantized model and the floating-point model it was generated from are fed the same set of evaluation data, as defined by the ``evaluation_data`` argument. The output of each node of the quantized model is dequantized and compared to the output of the corresponding node in the floating-point model; the ``error_value`` in the JSON file is the cumulative error of the quantized network up to that particular node (calculated according to the chosen metric). The ``error_value`` shown for the last node is the cumulative error of the entire quantized network.

.. dropdown:: Executing Loaded and Quantized Models
    :animate: fade-in
    :color: secondary
    :open:

    With the :py:meth:`afe.apis.loaded_net.LoadedNet.execute` API, the ModelSDK allows quantized models to be executed in software with user-supplied input data so that the results can be evaluated. The quantized model is executed using a software implementation of the quantized operators: the model is executed with a single set of input tensor values and the output tensor values are returned. It does not simulate execution on the processor.

    Evaluating Accuracy of Quantized Models
    ==========================================

    The ModelSDK allows users to evaluate the accuracy of quantized models using the :py:meth:`afe.apis.model.Model.execute` API.

    * (Default method) **Bit-accurate evaluation using NumPy routines in ML kernels** - specify ``fast_mode=False`` in the ``Model.execute()`` API.
    * (Second method) **Bit-approximate evaluation using TensorFlow-optimized code** - specify ``fast_mode=True`` in the ``Model.execute()`` API.

    The following code shows an example of how to use the :py:meth:`afe.apis.model.Model.execute` API. In this example, a NumPy file of pre-processed data samples is fed into the quantized model. The model has a single input, ``input_1``, and a single output:

    .. code-block:: python
        :linenos:
        :caption: Example Code

        # Quantize
        quant_model = loaded_net.quantize(calibration_data=calib_data,
                                          quantization_config=quant_configs,
                                          model_name=model_name)

        # Read a NumPy file of preprocessed data and labels
        dataset_f = np.load('preprocessed_data.npz')
        test_data = dataset_f['x']
        labels = dataset_f['y']

        inputs = dict()
        fp32_correct = 0
        quant_correct = 0

        # Iterate over all test data
        for i in range(test_images):
            # key=input name, value=data sample
            inputs['input_1'] = test_data[i]

            # Execute the FP32 and quantized models - each returns a list of ndarray
            tvm_fp32_model_out = loaded_net.execute(inputs)
            quant_model_out = quant_model.execute(inputs, fast_mode=True)

            # argmax
            fp32_prediction = np.argmax(tvm_fp32_model_out[0])
            quant_prediction = np.argmax(quant_model_out[0])

            # Check the outputs
            if fp32_prediction == labels[i]:
                fp32_correct += 1
            if quant_prediction == labels[i]:
                quant_correct += 1
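    The counters above can then be turned into accuracy figures for a direct comparison between the FP32 and quantized models, for example:

    .. code-block:: python

        # Summarize the comparison (continuation of the example above)
        fp32_accuracy = fp32_correct / test_images
        quant_accuracy = quant_correct / test_images
        print(f"FP32 accuracy: {fp32_accuracy:.4f}  quantized accuracy: {quant_accuracy:.4f}")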
    (Third method) **Bit-accurate evaluation of a quantized model in Accelerator Mode.** This method executes the model on a SiMa development kit, which must be reachable over the network.

    .. function:: Model.execute_in_accelerator_mode(input_data: Iterable[InputValues], devkit: str, *, username: str = cp.DEFAULT_USERNAME, password: str = '', batch_size: int = 1, compress: bool = True, tessellate_parameters: Optional[TessellateParameters] = None, log_level: Optional[int] = logging.NOTSET, l2_caching_mode: L2CachingMode = L2CachingMode.NONE) -> List[np.ndarray]
        :module: Model

        Invoke ``Model.execute_in_accelerator_mode()`` with a list of pre-processed images. The API first compiles the model into an ``.lm`` file and then obtains inference results by running the ``.lm`` file on the development kit.

        :param input_data: The model is executed with a single set of input tensor values and the output tensor values are returned.
        :param devkit: The URL for the development kit.
        :param username: User login credential on the development kit.
        :param password: User login credential on the development kit.
        :param batch_size: Batch size to use in the compiled model.
        :param compress: A flag to enable/disable DRAM data compression in the compiled LM file.
        :param log_level: Sets the logging level for this call as described in the Logging section.
        :param tessellate_parameters: Reserved parameter; leave at the default value.
        :param l2_caching_mode: Reserved parameter; leave at the default value.

    .. code-block:: python
        :linenos:
        :caption: Example Code

        import os
        import cv2
        import numpy as np
        import argparse
        import time
        import pickle as pkl
        from typing import Any

        from afe.apis.defines import default_quantization
        from afe.apis.loaded_net import load_model
        from afe.core.utils import convert_data_generator_to_iterable, length_hinted
        from afe.ir.tensor_type import ScalarType
        from afe.load.importers.general_importer import tensorflow_source
        from afe.apis.release_v1 import DataGenerator
        from afe.apis.error_handling_variables import enable_verbose_error_messages

        """
        Script for quantizing, compiling and running a model in accelerator mode.
""" MODEL_DIR = './UR_pb_resnet50-v1.5_fp32_224_224.pb' DATASET_DIR = './imagenet/val224.pkl' OUTPUT_DIR = './output' DAVINCI_HOSTNAME = "" DAVINCI_USERNAME = "sima" DAVINCI_PASSWORD = "" NUM_SAMPLES = 10 def pre_process_func(image: np.ndarray) -> np.ndarray: from tensorflow.keras.applications.imagenet_utils import preprocess_input img = image.transpose([1, 2, 0]).astype(np.float32) img = preprocess_input(img)[:, :, ::-1] img = cv2.resize(img, (224, 224)) return img def load_pickle(path: str) -> Any: with open(path, 'rb') as f: v = pkl.load(f) return v def _create_calibration_data(): dataset = load_pickle(DATASET_DIR) input_generator = DataGenerator({"input_tensor": dataset['data']}) # Apply the preprocessing function to the dataset input_generator.map({"input_tensor": pre_process_func}) input_iterable = convert_data_generator_to_iterable(input_generator, length_limit=NUM_SAMPLES) return input_iterable def main(model_path: str, hostname: str, username: str, password: str, enable_verbose_message: bool): print("SiMa ModelSDK tutorial example of running ResNet50 model in Accelerator Mode") if enable_verbose_message: enable_verbose_error_messages() os.makedirs(OUTPUT_DIR, exist_ok=True) model_name = os.path.split(model_path)[1] model_prefix = os.path.splitext(model_name)[0] # Models importer parameters input_name = "input_tensor" output_names = ["resnet_model/dense/BiasAdd"] input_shape = (1, 224, 224, 3) input_dtype = ScalarType.float32 input_shapes_dict = {input_name: input_shape} # refer to the SDK User Guide for the specific format importer_params = tensorflow_source(model_path, input_shapes_dict, output_names) loaded_net = load_model(importer_params) # quantization with real training data samples # Note that quantization data is ALWAYS NHWC calibration_data = _create_calibration_data() print("Quantizing the model.") model_sdk_net = loaded_net.quantize(length_hinted(NUM_SAMPLES, calibration_data), default_quantization, model_name=model_prefix, arm_only=False) saved_model_directory = "sdk" model_sdk_net.save(model_name=model_name, output_directory=saved_model_directory) print("Executing quantized model in accelerator mode.") input_data = _create_calibration_data() start_time = time.time() outputs = model_sdk_net.execute_in_accelerator_mode( input_data=length_hinted(NUM_SAMPLES, input_data), devkit=hostname, username=username, password=password) end_time = time.time() print(f"Model is executed in accelerator mode. Elapsed time: {end_time - start_time} [s].") print("End of tutorial example of ResNet50 in Accelerator Mode.") if __name__ == "__main__": parser = argparse.ArgumentParser(description="Script with optional arguments") parser.add_argument('--model_path', type=str, default=MODEL_DIR) parser.add_argument('--hostname', type=str, default=DAVINCI_HOSTNAME) parser.add_argument('--username', type=str, default=DAVINCI_USERNAME) parser.add_argument('--password', type=str, default=DAVINCI_PASSWORD) parser.add_argument('--enable_verbose_message', action='store_true', help="Enable verbose messages") args = parser.parse_args() main(args.model_path, args.hostname, args.username, args.password, args.enable_verbose_message) .. dropdown:: Large Tensor Models :animate: fade-in :color: secondary :open: In Machine Learning there are performance limitations when executing large model sizes on the hardware. When a model exceeds the device's internal memory capacity, it is classified as a large tensor model. SiMa's ModelSDK can also have limitations in handling and compiling large tensor models. 
    The memory management capabilities of SiMa.ai, in both software and hardware, allow models with a large number of parameters to execute by efficiently storing intermediate activations. The MLA compiler optimizes execution by keeping intermediate tensors in L1 and L2 memory; when tensors exceed the available space, they are stored in and fetched from DRAM, which incurs a performance loss, but compilation remains successful.

    The compilation time for large tensor models can range from minutes to hours, depending on the model's complexity. Compilation time is also proportional to the number of layers, and the compiler splits large layers into sub-layers, which increases the overall compilation time.