afe.apis.model
Classes
Module Contents
- class afe.apis.model.Model(net: afe.ir.net.AwesomeNet, fp32_net: afe.ir.net.AwesomeNet | None = None)
- execute(inputs: afe.apis.defines.InputValues, *, fast_mode: bool = False, use_jax: bool = False, log_level: int | None = logging.NOTSET, keep_layer_outputs: list[afe.ir.defines.NodeName] | str | None = None, output_file_path: str | None = None) → List[numpy.ndarray]
Run input data through the quantized model.
- Parameters:
inputs – Dictionary mapping placeholder node names (str) to the input data.
fast_mode – If True, use a fast implementation of operators. If False, use an implementation that exactly matches execution on the MLA. Has no effect when the use_jax argument is True.
use_jax – If True, use the JAX implementation of operators.
log_level – Logging level.
keep_layer_outputs – List of quantized model layer output names that should be saved. Each element of the list must be a valid name of a model layer output. If "all", all intermediate results are saved.
output_file_path – Location where the layer outputs should be saved. If defined, the keep_layer_outputs argument must also be provided.
- Returns: Outputs of the quantized model. The requested intermediate results are also saved to the output_file_path location.
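A minimal sketch of preparing input data and calling execute; the placeholder name, input shape, and the `model` variable are illustrative assumptions, not part of the documented API:

```python
import numpy as np

# InputValues is a dictionary keyed by placeholder node name. The name
# "input_1" and the shape below are hypothetical; use your model's actual
# input names and tensor types.
inputs = {"input_1": np.zeros((1, 224, 224, 3), dtype=np.float32)}

# Given a Model instance `model` (e.g. obtained via Model.load):
# outputs = model.execute(
#     inputs,
#     fast_mode=True,                  # fast operator implementations
#     keep_layer_outputs="all",        # save every intermediate result
#     output_file_path="layer_dumps",  # where the intermediates are written
# )
```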
- save(model_name: str, output_directory: str = '', *, log_level: int | None = logging.NOTSET) → None
- static load(model_name: str, network_directory: str = '', *, log_level: int | None = logging.NOTSET) → Model
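save and load form a round trip keyed by the model name; a minimal sketch, where the name and directory are assumptions for illustration:

```python
import os

# Hypothetical model name and directory.
model_name = "resnet50_quantized"
model_dir = os.path.join("models", "quantized")

# Persist a quantized Model, then restore it later by the same name:
# model.save(model_name, output_directory=model_dir)
# restored = Model.load(model_name, network_directory=model_dir)
```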
- compile(output_path: str, batch_size: int = 1, compress: bool = True, log_level: int | None = logging.NOTSET, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE, **kwargs) → None
Compile the model and generate the MPK JSON file.
- Parameters:
output_path – Directory path for the generated .lm file.
batch_size – The batch size of the input to the model.
compress – If True, the mlc file is compressed before generating the .lm file.
log_level – Logging message level.
tessellate_parameters – Dictionary defining the tessellation parameters for inputs and outputs of the MLA segments.
l2_caching_mode – Specifies the mode of L2 caching in the n2a compiler.
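A hedged sketch of a compile call; the output path and batch size below are assumptions for illustration, not recommended settings:

```python
# Collect the compile options as keyword arguments.
compile_options = dict(
    output_path="compiled/resnet50",  # directory where the .lm file is written
    batch_size=8,                     # batch size of the model input
    compress=True,                    # compress the mlc file before .lm generation
)

# With a Model instance `model`:
# model.compile(**compile_options)
```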
- static create_auxiliary_network(transforms: List[afe.apis.transform.Transform], input_types: Dict[afe.ir.defines.InputName, afe.ir.tensor_type.TensorType], *, target: sima_utils.common.Platform = gen1_target, log_level: int | None = logging.NOTSET) → Model
- static compose(nets: List[Model], combined_model_name: str = 'main', log_level: int | None = logging.NOTSET) → Model
- evaluate(evaluation_data: Iterable[Tuple[afe.apis.defines.InputValues, afe.apis.compilation_job_base.GroundTruth]], criterion: afe.apis.statistic.Statistic[Tuple[List[numpy.ndarray], afe.apis.compilation_job_base.GroundTruth], str], *, fast_mode: bool = False, log_level: int | None = logging.NOTSET) → str
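evaluate consumes an iterable of (InputValues, GroundTruth) pairs. The sketch below builds such an iterable; the placeholder name, shapes, and labels are assumptions for illustration:

```python
import numpy as np

def make_evaluation_data(samples, labels):
    # Pair each input dictionary with its ground-truth label.
    for sample, label in zip(samples, labels):
        yield {"input_1": sample}, label  # "input_1" is a hypothetical name

samples = [np.zeros((1, 32, 32, 3), dtype=np.float32) for _ in range(4)]
labels = [0, 1, 0, 1]
evaluation_data = list(make_evaluation_data(samples, labels))

# With a Model instance `model` and a Statistic `criterion`:
# result = model.evaluate(evaluation_data, criterion, fast_mode=True)
```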
- analyze_quantization_error(evaluation_data: Iterable[afe.apis.defines.InputValues], error_metric: afe.core.graph_analyzer.utils.Metric, *, local_feed: bool, log_level: int | None = logging.NOTSET)
- get_performance_metrics(output_kpi_path: str, *, log_level: int | None = logging.NOTSET)
- generate_elf_and_reference_files(input_data: Iterable[afe.apis.defines.InputValues], output_dir: str, *, batch_size: int = 1, compress: bool = True, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, log_level: int | None = logging.NOTSET, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE) → None
- execute_in_accelerator_mode(input_data: Iterable[afe.apis.defines.InputValues], devkit: str, *, username: str = cp.DEFAULT_USERNAME, password: str = '', batch_size: int = 1, compress: bool = True, tessellate_parameters: afe.backends.mpk.interface.TessellateParameters | None = None, log_level: int | None = logging.NOTSET, l2_caching_mode: afe.backends.mpk.interface.L2CachingMode = L2CachingMode.NONE) → List[numpy.ndarray]