sima_qat.sima_quantizer

Classes

SimaQuantizer

This quantizer definition uses the XNNPACK implementation for the majority of ops.

Functions

get_sima_quantization_config([is_qat])

Module Contents

sima_qat.sima_quantizer.get_sima_quantization_config(is_qat: bool = False)[source]
class sima_qat.sima_quantizer.SimaQuantizer[source]

This quantizer definition uses the XNNPACK implementation for the majority of ops, because the XNNPACK code simply looks for the appropriate patterns and applies the QuantizationAnnotation attributes accordingly. The quantization rules themselves are defined separately and are specified here using Sima properties.
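
A minimal end-to-end sketch of wiring this quantizer into the PT2E flow; the toy model, input shapes, and the use of torch.export.export_for_training (available in recent PyTorch releases; older ones used capture_pre_autograd_graph) are illustrative assumptions, not part of this module::

    import torch
    from torch.ao.quantization.quantize_pt2e import prepare_qat_pt2e, convert_pt2e
    from sima_qat.sima_quantizer import SimaQuantizer, get_sima_quantization_config

    # Toy model containing a conv-bn-relu pattern the quantizer can annotate.
    model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 3), torch.nn.BatchNorm2d(8), torch.nn.ReLU()
    )
    example_inputs = (torch.randn(1, 3, 32, 32),)

    # Capture the FX graph, then let the quantizer annotate it globally.
    exported = torch.export.export_for_training(model, example_inputs).module()
    quantizer = SimaQuantizer().set_global(get_sima_quantization_config(is_qat=True))

    prepared = prepare_qat_pt2e(exported, quantizer)  # fake-quant inserted here
    # ... run QAT fine-tuning on `prepared` ...
    quantized = convert_pt2e(prepared)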

supported_config_and_operators[source]
STATIC_QAT_ONLY_OPS = ['sima_conv_bn_hardtanh', 'conv_bn_relu', 'conv_bn'][source]
STATIC_OPS = ['linear_relu', 'linear', 'sima_conv_add_or_mul_const', 'sima_conv_hardtanh', 'conv_relu',...[source]
global_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None = None[source]
operator_type_config: Dict[torch._ops.OpOverloadPacket, torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None][source]
module_type_config: Dict[Callable, torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None][source]
module_name_config: Dict[str, torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None][source]
classmethod get_supported_quantization_configs() → List[torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig][source]
classmethod get_supported_operator_for_quantization_config(quantization_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None) → List[torch.ao.quantization.quantizer.xnnpack_quantizer_utils.OperatorPatternType][source]
set_global(quantization_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig) → SimaQuantizer[source]
set_operator_type(operator_type: torch._ops.OpOverloadPacket, quantization_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig) → SimaQuantizer[source]
set_module_type(module_type: Callable, quantization_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig)[source]

Set quantization_config for all submodules of a given type. For example, quantizer.set_module_type(Sub) or quantizer.set_module_type(nn.Linear) will quantize every supported operator/operator pattern inside submodules of that type with the given quantization_config.
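
A short sketch of a per-type override; the particular mix of QAT and PTQ configs is purely illustrative::

    import torch
    from sima_qat.sima_quantizer import SimaQuantizer, get_sima_quantization_config

    quantizer = SimaQuantizer().set_global(get_sima_quantization_config(is_qat=True))
    # All nn.Linear submodules get the PTQ config instead of the global QAT config.
    quantizer.set_module_type(torch.nn.Linear, get_sima_quantization_config(is_qat=False))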

set_module_name(module_name: str, quantization_config: torch.ao.quantization.quantizer.xnnpack_quantizer_utils.QuantizationConfig | None)[source]

Set quantization_config for the submodule with a given name. For example, quantizer.set_module_name("blocks.sub") will quantize every supported operator/operator pattern inside the submodule with that name with the given quantization_config.
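
A sketch of a per-name override; "blocks.sub" is the submodule path from the docstring above and would be a path in your own model::

    from sima_qat.sima_quantizer import SimaQuantizer, get_sima_quantization_config

    quantizer = SimaQuantizer().set_global(get_sima_quantization_config())
    # Only the submodule at path "blocks.sub" is quantized with the QAT config.
    quantizer.set_module_name("blocks.sub", get_sima_quantization_config(is_qat=True))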

transform_for_annotation(model: torch.fx.GraphModule) → torch.fx.GraphModule[source]

Transforms scalar values in the graph into tensor attributes.
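
A hypothetical illustration of the transform: the scalar constant in the toy model below is expected to end up as a tensor attribute on the returned GraphModule, so the annotation pass can attach quantization specs to it (the model and the export call are assumptions for the example)::

    import torch
    from sima_qat.sima_quantizer import SimaQuantizer

    class AddConst(torch.nn.Module):
        def forward(self, x):
            return x + 2.0  # scalar operand

    gm = torch.export.export_for_training(AddConst(), (torch.randn(4),)).module()
    # After the transform, the 2.0 should live as a tensor attribute on `gm`.
    gm = SimaQuantizer().transform_for_annotation(gm)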

annotate(model: torch.fx.GraphModule) → torch.fx.GraphModule[source]

Currently handles only the global quantization spec.
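
In the standard PT2E flow, prepare_qat_pt2e drives these hooks for you; the manual sequence below is only a sketch of the assumed call order, with `model`, `example_inputs`, and `quantizer` as in the class-level example above::

    gm = torch.export.export_for_training(model, example_inputs).module()
    gm = quantizer.transform_for_annotation(gm)  # scalars -> tensor attributes
    gm = quantizer.annotate(gm)                  # attach QuantizationAnnotations
    quantizer.validate(gm)                       # sanity-check the annotated graph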

validate(model: torch.fx.GraphModule) → None[source]
classmethod get_supported_operators() → List[torch.ao.quantization.quantizer.xnnpack_quantizer_utils.OperatorConfig][source]