afe.apis.model

Classes

Model

Module Contents

class afe.apis.model.Model(net: afe.ir.net.AwesomeNet, fp32_net: afe.ir.net.AwesomeNet | None = None)
execute(inputs: afe.apis.defines.InputValues, *, fast_mode: bool = False, use_jax: bool = False, log_level: int | None = logging.NOTSET, keep_layer_outputs: list[afe.ir.defines.NodeName] | str | None = None, output_file_path: str | None = None) List[numpy.ndarray]

Run input data through the quantized model.

Parameters:
  • inputs – Dictionary of placeholder node names (str) to the input data.

  • fast_mode – If True, use a fast implementation of operators. If False, use an implementation that exactly matches execution on the MLA. Has no effect when use_jax is True.

  • use_jax – If True, use JAX implementation of operators.

  • log_level – Logging level.

  • keep_layer_outputs – List of quantized model layer output names that should be saved. Each element of the list must be a valid name among the model's layer outputs. If 'all', all intermediate results are saved.

  • output_file_path – Location where the layer outputs should be saved. If given, the keep_layer_outputs argument must also be provided.

Returns: Outputs of the quantized model.

Also saves the requested intermediate results in the output_file_path location.
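For illustration, a minimal sketch of preparing inputs and calling execute. The placeholder name "input_1" and the input shape are assumptions; the real keys must match the placeholder node names of the loaded network, and the call itself requires the SiMa SDK:

```python
import numpy as np

# Hypothetical placeholder name and shape; the real dictionary keys are
# the placeholder node names of the loaded network.
inputs = {"input_1": np.zeros((1, 224, 224, 3), dtype=np.float32)}

# Running the quantized model (requires the SiMa SDK; `model` is a Model):
# outputs = model.execute(
#     inputs,
#     fast_mode=True,                      # fast operator implementations
#     keep_layer_outputs="all",            # keep every intermediate result
#     output_file_path="./layer_outputs",  # where intermediates are written
# )
```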

save(model_name: str, output_directory: str = '', *, log_level: int | None = logging.NOTSET) None
static load(model_name: str, network_directory: str = '', *, log_level: int | None = logging.NOTSET) Model
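A sketch of the save/load round trip, based on the signatures above; the model name and directory are illustrative, and the calls require the SiMa SDK:

```python
import os

# Illustrative name and directory for the saved model files.
model_name = "quantized_net"
model_dir = os.path.join(".", "models")

# Round trip (requires the SiMa SDK; `model` is an existing Model):
# model.save(model_name, output_directory=model_dir)
# restored = Model.load(model_name, network_directory=model_dir)
```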
compile(output_path: str, batch_size: int = 1, compress: bool = True, log_level: int | None = logging.NOTSET, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE, **kwargs) None

Compile the model and generate MPK JSON file.

Parameters:
  • output_path – Directory path for the generated .lm file.

  • batch_size – The batch size of the input to the model.

  • compress – If True, the .mlc file is compressed before generating the .lm file.

  • log_level – Logging message level.

  • tessellate_parameters – Dictionary defining the tessellation parameters for inputs and outputs of the MLA segments.

  • l2_caching_mode – Specifies the mode of L2 caching in the n2a compiler.
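A sketch of a typical compile call using the parameters documented above; the output directory is illustrative, and the call itself requires the SiMa SDK:

```python
import os

# Illustrative output directory for the compiled artifacts.
output_path = os.path.join(".", "compiled")

# Compilation (requires the SiMa SDK; `model` is a quantized Model):
# model.compile(
#     output_path=output_path,  # directory for the generated .lm file
#     batch_size=1,             # batch size of the model input
#     compress=True,            # compress the .mlc before producing the .lm
# )
```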

static create_auxiliary_network(transforms: List[afe.apis.transform.Transform], input_types: Dict[afe.ir.defines.InputName, afe.ir.tensor_type.TensorType], *, target: sima_utils.common.Platform = gen1_target, log_level: int | None = logging.NOTSET) Model
static compose(nets: List[Model], combined_model_name: str = 'main', log_level: int | None = logging.NOTSET) Model
evaluate(evaluation_data: Iterable[Tuple[afe.apis.defines.InputValues, afe.apis.compilation_job_base.GroundTruth]], criterion: afe.apis.statistic.Statistic[Tuple[List[numpy.ndarray], afe.apis.compilation_job_base.GroundTruth], str], *, fast_mode: bool = False, log_level: int | None = logging.NOTSET) str
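A sketch of the evaluation_data structure expected by evaluate: an iterable of (InputValues, GroundTruth) pairs. The placeholder name, shapes, and labels below are assumptions for illustration; the evaluate call requires the SiMa SDK:

```python
import numpy as np

# Hypothetical placeholder name and ground-truth labels; real values
# depend on the model's inputs and the evaluation dataset.
evaluation_data = [
    ({"input_1": np.zeros((1, 224, 224, 3), dtype=np.float32)}, 0),
    ({"input_1": np.ones((1, 224, 224, 3), dtype=np.float32)}, 1),
]

# Evaluation (requires the SiMa SDK; `criterion` is a Statistic that
# consumes (outputs, ground_truth) pairs and renders a result string):
# result = model.evaluate(evaluation_data, criterion, fast_mode=True)
```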
analyze_quantization_error(evaluation_data: Iterable[afe.apis.defines.InputValues], error_metric: afe.core.graph_analyzer.utils.Metric, *, local_feed: bool, log_level: int | None = logging.NOTSET)
get_performance_metrics(output_kpi_path: str, *, log_level: int | None = logging.NOTSET)
generate_elf_and_reference_files(input_data: Iterable[afe.apis.defines.InputValues], output_dir: str, *, batch_size: int = 1, compress: bool = True, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, log_level: int | None = logging.NOTSET, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE) None
execute_in_accelerator_mode(input_data: Iterable[afe.apis.defines.InputValues], devkit: str, *, username: str = cp.DEFAULT_USERNAME, password: str = '', batch_size: int = 1, compress: bool = True, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, log_level: int | None = logging.NOTSET, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE) List[numpy.ndarray]