afe.ir.operations

Attributes

T

AWESOME_ATTRS

QUANT_ATTRS

QuantizationTensorData

TODO:

Classes

AwesomeOperation

An abstract class

PlaceholderOp

An abstract class

ConstantOp

An abstract class

MaxPool2DOp

An abstract class

MaxPool3DOp

An abstract class

AvgPool2DOp

An abstract class

AvgPool3DOp

An abstract class

VarianceOp

An abstract class

MultiplyOp

An abstract class

PadOp

An abstract class

MeanOp

An abstract class

ArgMaxOp

An abstract class

SoftmaxOp

An abstract class

LRNOp

An abstract class

ExtmOp

Extremum op; it can be either a min or a max operation. Attributes contain a boolean that determines which operation is performed.

SumOp

An abstract class

ProdOp

An abstract class

SubtractOp

An abstract class

PowerOp

An abstract class

MaximumOp

An abstract class

MinimumOp

An abstract class

FullOp

An abstract class

TileOp

An abstract class

PReluOp

An abstract class

BroadcastToOp

An abstract class

UDFOp

An abstract class

SqrtOp

An abstract class

RsqrtOp

An abstract class

TanhOp

An abstract class

SigmoidOp

An abstract class

LogOp

An abstract class

Log2Op

An abstract class

Log10Op

An abstract class

ReciprocalOp

An abstract class

EluOp

An abstract class

SoftplusOp

An abstract class

ErfOp

An abstract class

GeluOp

An abstract class

DivideOp

An abstract class

ExpOp

An abstract class

SwishOp

An abstract class

HardSwishOp

An abstract class

UpsamplingOp

An abstract class

ImageResize2DOp

An abstract class

GridSampleOp

An abstract class

TupleOp

TupleOp takes in multiple tensors, returns a tuple

TupleGetItemOp

TupleGetItemOp takes in a tuple, returns a tensor

SqueezeOp

An abstract class

ConcatenateOp

An abstract class

TransposeOp

An abstract class

DepthToSpaceOp

An abstract class

ReshapeOp

An abstract class

ExpandDimsOp

An abstract class

SplitOp

SplitOp takes in one tensor, returns a tuple

TakeOp

An abstract class

StridedSliceOp

An abstract class

BatchFlattenOp

An abstract class

LayoutTransformOp

An abstract class

TessellationTransformOp

An abstract class

DetessellationTransformOp

An abstract class

PackTransformOp

An abstract class

UnpackTransformOp

An abstract class

NormalizationTransformOp

An abstract class

QuantizationTransformOp

An abstract class

DequantizationTransformOp

An abstract class

ResizeTransformOp

An abstract class

ChromaUpsampleTransformOp

An abstract class

YuvRgbConversionTransformOp

An abstract class

BgrRgbConversionTransformOp

An abstract class

SigmoidTransformOp

An abstract class

NmsMaxpoolTransformOp

An abstract class

CastOp

An abstract class

AddActivationOp

An abstract class

ConstantMultiplyAddOp

An add operator fused with multiplication by a scalar constant.

ConvAddActivationOp

An abstract class

TupleConcatenateOp

This composite node reuses ConcatenateOp's run, quantize, and run_quant methods

ExternalOp

An abstract class

QNNQuantizeOp

An abstract class

RequantizeOp

An abstract class

QNNDequantizeOp

An abstract class

QNNMulOp

An abstract class

CustomOp

An abstract class

LeakyReluCompositeOp

An abstract class

ReluOp

An abstract class

ClipOp

An abstract class

BatchMatmulOp

Standard batch matmul operator, where the arguments to the batch matmul operation are outputs of two different nodes.

UnaryBatchMatmulOp

Special case of the batch matmul operator, where both arguments to the batch matmul operation are outputs of the same node.

LayerNormOp

An abstract class

InstanceNormOp

An abstract class

RMSNormOp

An abstract class

SliceConcatOp

This composite node reuses the run infrastructure from StridedSliceOp and ConcatenateOp.

HardSigmoidOp

An abstract class

Functions

make_quantized_pool_attrs(...)

Construct a PoolQuantAttrs, using values from a PoolAttrs and additional values computed during quantization

get_reduction_op_output_shape(attrs)

Get the output shape for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp)

node_type_for_dimension_reduction_operators(attrs, ...)

Get NodeType for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp)

has_any_int8_input(→ bool)

Return True if any of the inputs identified by input_names was quantized with int8 precision.

expand_indices_to_shape_length(...)

Helper function for expanding begin, end and strides to match the shape length.

get_strided_slice_out_shape(...)

Get StridedSliceOp output shape.

get_squeeze_out_shape(→ tuple[int, Ellipsis])

Get SqueezeOp output shape.

get_expand_dims_out_shape(...)

Get ExpandDimsOp output shape.

get_batch_flatten_out_shape(→ tuple[int, Ellipsis])

Get Batch Flatten output shape.

get_pack_input_types(...)

Get pack operator input types.

make_quantization_cast(→ afe.ir.defines.QuantizationCast)

Make a quantization cast for one value.

make_quantization_casts(→ afe.ir.defines.InputsQuantCast)

Create casts for a quantized node's input types by comparing the input data type with the type that the node requires

Module Contents

afe.ir.operations.T
afe.ir.operations.AWESOME_ATTRS
afe.ir.operations.QUANT_ATTRS
afe.ir.operations.QuantizationTensorData

TODO:
  • Merge the quantization in single node and composite node.
    Ex: Use Conv2DOp.quantize in ConvAddActivationOp

  • Merge quantization, run_quant for Conv2D and Conv2DTranspose

  • Create check_attrs function to check attrs and quant_attrs

afe.ir.operations.make_quantized_pool_attrs(attrs: afe.ir.attributes.PoolAttrs, *, pad_value: int, input_int16: bool, requant: afe.ir.attributes.Optional[afe.ir.attributes.BaseRequantization] = None) afe.ir.attributes.PoolQuantAttrs

Construct a PoolQuantAttrs, using values from a PoolAttrs and additional values that were computed during quantization.

afe.ir.operations.get_reduction_op_output_shape(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.ProdAttrs, afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ArgMaxAttrs])

Get the output shape for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp) using attributes from their AwesomeAttributes class.

Parameters:

attrs – AwesomeAttributes class

Returns:

Output shape
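
As a plain-Python illustration of this shape rule (not the library's implementation; the axes and keepdims names below are assumptions mirroring common reduction attributes):

    # Illustrative only: compute the shape left after reducing over `axes`.
    def reduced_shape(input_shape: tuple[int, ...], axes: list[int], keepdims: bool) -> tuple[int, ...]:
        axes = [a % len(input_shape) for a in axes]          # normalize negative axes
        if keepdims:
            return tuple(1 if i in axes else d for i, d in enumerate(input_shape))
        return tuple(d for i, d in enumerate(input_shape) if i not in axes)

    # Example: reducing a (1, 16, 16, 64) tensor over axes [1, 2].
    assert reduced_shape((1, 16, 16, 64), [1, 2], keepdims=False) == (1, 64)
    assert reduced_shape((1, 16, 16, 64), [1, 2], keepdims=True) == (1, 1, 1, 64)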

afe.ir.operations.node_type_for_dimension_reduction_operators(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.ProdAttrs, afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ArgMaxAttrs], input_dtype: afe.ir.attributes.Union[afe.ir.attributes.np.dtype, Type[afe.ir.attributes.np.number]], output_dtype: afe.ir.attributes.Union[afe.ir.attributes.np.dtype, Type[afe.ir.attributes.np.number]])

Get NodeType for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp).

Parameters:
  • attrs – AwesomeAttributes class

  • dtype – Data type

Returns:

NodeType

afe.ir.operations.has_any_int8_input(quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, input_names: afe.ir.attributes.Sequence[afe.ir.defines.InputName]) bool

Return True if any of the inputs identified by input_names was quantized with int8 precision.

afe.ir.operations.expand_indices_to_shape_length(begin: afe.ir.attributes.List[int], end: afe.ir.attributes.List[int], strides: afe.ir.attributes.List[int], axes: afe.ir.attributes.Optional[afe.ir.attributes.List[int]], input_shape: afe.ir.attributes.List[int]) afe.ir.attributes.Tuple[afe.ir.attributes.List[int], afe.ir.attributes.List[int], afe.ir.attributes.List[int]]

Helper function for expanding begin, end and strides to match the shape length.
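
The expansion can be sketched in plain Python as follows; this is illustrative only, and assumes the common convention that dimensions not listed in axes default to the full range with stride 1:

    # Illustrative sketch, not the library's implementation.
    def expand_to_shape_length(begin, end, strides, axes, input_shape):
        if axes is None:
            # Indices already refer to leading dimensions; pad the remainder.
            pad = range(len(begin), len(input_shape))
            return (begin + [0] * len(pad),
                    end + [input_shape[i] for i in pad],
                    strides + [1] * len(pad))
        full_begin = [0] * len(input_shape)
        full_end = list(input_shape)
        full_strides = [1] * len(input_shape)
        for b, e, s, ax in zip(begin, end, strides, axes):
            full_begin[ax], full_end[ax], full_strides[ax] = b, e, s
        return full_begin, full_end, full_strides

    # Example: slicing only axis 2 of a 4-D tensor.
    print(expand_to_shape_length([4], [8], [2], [2], [1, 32, 32, 3]))
    # ([0, 0, 4, 0], [1, 32, 8, 3], [1, 1, 1, 2])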

afe.ir.operations.get_strided_slice_out_shape(attrs: afe.ir.attributes.StridedSliceAttrs) afe.ir.attributes.Tuple[int, Ellipsis]

Get StridedSliceOp output shape.

Parameters:

attrs – StridedSlice attributes class.

Returns:

Output shape.
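
For positive strides the per-dimension size follows the usual ceil((end - begin) / stride) rule; a minimal sketch, assuming begin, end and strides are already expanded to one entry per input dimension:

    import math

    # Illustrative only; positive strides assumed.
    def strided_slice_shape(begin, end, strides):
        return tuple(max(0, math.ceil((e - b) / s)) for b, e, s in zip(begin, end, strides))

    # Example: keep every second element of indices 4..8 along the last axis.
    assert strided_slice_shape([0, 4], [1, 8], [1, 2]) == (1, 2)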

afe.ir.operations.get_squeeze_out_shape(axis: list[int], input_shape: tuple[int, Ellipsis]) tuple[int, Ellipsis]

Get SqueezeOp output shape.

Parameters:
  • axis – Set of axes to remove

  • input_shape – Shape of input tensor

Returns:

Output shape.
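
A minimal plain-Python sketch of this shape rule (illustrative only, not the library's implementation):

    # Drop the axes listed in `axis`; each is expected to have size 1.
    def squeeze_shape(axis: list[int], input_shape: tuple[int, ...]) -> tuple[int, ...]:
        axis = [a % len(input_shape) for a in axis]          # normalize negative axes
        assert all(input_shape[a] == 1 for a in axis), "squeezed axes must have size 1"
        return tuple(d for i, d in enumerate(input_shape) if i not in axis)

    # Example: removing the batch and singleton spatial axes.
    assert squeeze_shape([0, 2], (1, 64, 1, 128)) == (64, 128)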

afe.ir.operations.get_expand_dims_out_shape(attrs: afe.ir.attributes.ExpandDimsAttrs) afe.ir.attributes.Tuple[int, Ellipsis]

Get ExpandDimsOp output shape.

Parameters:

attrs – ExpandDims attributes class.

Returns:

Output shape.

afe.ir.operations.get_batch_flatten_out_shape(attrs: afe.ir.attributes.BatchFlattenAttrs) tuple[int, Ellipsis]

Get Batch Flatten output shape.

Parameters:

attrs – BatchFlattenAttrs attributes class.

Returns:

Output shape.

Return type:

Tuple[int, …]

afe.ir.operations.get_pack_input_types(input_types: afe.ir.attributes.List[afe.ir.tensor_type.TensorType]) afe.ir.attributes.List[afe.ir.tensor_type.TensorType]

Get pack operator input types. If an input tensor has a 4D shape, it will be reshaped to a 2D MLA buffer shape.

afe.ir.operations.make_quantization_cast(provided_type: afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType], wanted_type: afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]) afe.ir.defines.QuantizationCast

Make a quantization cast for one value.

Parameters:
  • provided_type – Type and quantization of the value

  • wanted_type – Type and quantization that it should be cast to

Returns:

Cast

afe.ir.operations.make_quantization_casts(provided_input_types: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]], wanted_input_types: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]) afe.ir.defines.InputsQuantCast

Create casts for a quantized node’s input types by comparing the input data type with the type that the node requires.

Parameters:
  • provided_input_types – Type and quantization of a node’s inputs, after quantization

  • wanted_input_types – Type and quantization that the quantized node requires

Returns:

Casts for the node
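
The idea can be sketched without afe's types: compare each provided input type with the wanted one and record which inputs need a quantize, dequantize, or requantize cast. The (dtype, scale, zero_point) tuples below are hypothetical stand-ins for QuantResultTensorType:

    # Conceptual sketch only; afe builds the actual cast objects itself.
    def plan_input_casts(provided: dict[str, tuple], wanted: dict[str, tuple]) -> dict[str, str]:
        plan = {}
        for name, have in provided.items():
            want = wanted[name]
            if have == want:
                plan[name] = "identity"
            elif have[0] == "float32" and want[0] == "int8":
                plan[name] = "quantize"
            elif have[0] == "int8" and want[0] == "float32":
                plan[name] = "dequantize"
            else:
                plan[name] = "requantize"
        return plan

    # Example: one input already matches, the other must be quantized to int8.
    provided = {"data": ("float32", None, None), "weights": ("int8", 0.02, 0)}
    wanted = {"data": ("int8", 0.05, 0), "weights": ("int8", 0.02, 0)}
    print(plan_input_casts(provided, wanted))   # {'data': 'quantize', 'weights': 'identity'}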

class afe.ir.operations.AwesomeOperation

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.Optional[afe.ir.attributes.List[afe.ir.defines.InputName]]] = []
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = []
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType
Abstractmethod:

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: AWESOME_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any
Abstractmethod:

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod run_quant(quant_attrs: QUANT_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any
Abstractmethod:

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

The default calibration method. Executes the operation in floating point and updates the observer if the operation is associated with one; otherwise, the operation’s quantization parameters will be calculated based on its inputs’ quantization parameters. Updates the min/max values using the outputs and uses the updated min/max to compute the scales and zero points.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

Returns:

Output tensor(s) whose type is dependent on the subclass.
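
The min/max to scale and zero point step mentioned above can be illustrated with the standard asymmetric int8 formula; this is a generic sketch, not afe's observer implementation:

    import numpy as np

    # Generic asymmetric int8 quantization from an observed dynamic range.
    def scale_zero_point_from_range(min_val: float, max_val: float,
                                    qmin: int = -128, qmax: int = 127):
        min_val, max_val = min(min_val, 0.0), max(max_val, 0.0)  # range must include 0
        scale = (max_val - min_val) / (qmax - qmin)
        zero_point = int(round(qmin - min_val / scale)) if scale > 0 else 0
        return scale, zero_point

    # Example: a tensor observed in [-0.5, 1.5] during calibration.
    scale, zero_point = scale_zero_point_from_range(-0.5, 1.5)
    x = np.array([-0.5, 0.0, 1.5], dtype=np.float32)
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)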

classmethod update_input_quant(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: Mapping[afe.ir.defines.InputName, afe.ir.attributes.Optional[afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]])

Record quantization scales of the input tensors.

Parameters:
  • calib_attrs – Calibration results holding dynamic ranges. It will be updated with quantization parameters of the node’s inputs.

  • input_dict – Quantization parameters of the node’s inputs.

classmethod get_observed_distribution(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, and LayoutTransformOp don’t use observed distributions, and those values won’t be passed to any other MLA node, so their observed distributions are set to None.

Parameters:
  • attrs – Operator attributes.

  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

classmethod quantize(attrs: AWESOME_ATTRS, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) QUANT_ATTRS
Abstractmethod:

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes
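
A self-contained illustration of this contract, using only hypothetical stand-in types (none of the names below are afe's real classes), is shown here; a real override would additionally write the chosen output quantization back through the quantizer interface and may adjust the input quantization:

    from dataclasses import dataclass

    # Hypothetical stand-ins; none of these are afe's real classes or methods.
    @dataclass
    class FakeOutputQuant:
        dtype: str
        scale: float
        zero_point: int

    def quantize_generic_op(calibrated_min: float, calibrated_max: float,
                            use_int8: bool) -> FakeOutputQuant:
        # Pick the output quantization from the calibrated range and return the
        # quantized attributes, as the contract above describes.
        if not use_int8:
            return FakeOutputQuant(dtype="bfloat16", scale=1.0, zero_point=0)
        lo, hi = min(calibrated_min, 0.0), max(calibrated_max, 0.0)
        scale = (hi - lo) / 255.0 or 1.0
        zero_point = int(round(-128 - lo / scale))
        return FakeOutputQuant(dtype="int8", scale=scale, zero_point=zero_point)

    # Example: an operator calibrated to the range [-2.0, 6.0].
    print(quantize_generic_op(-2.0, 6.0, use_int8=True))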

classmethod type_check(value: afe.ir.attributes.Any, expected_type: Type[T]) T

Each op expects a more specific type of inputs / AwesomeAttributes, so this function helps with type checking.

Parameters:
  • value – AwesomeAttributes

  • expected_type – a type
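
A minimal sketch of the type-narrowing pattern this helper provides (the real classmethod lives on AwesomeOperation and may raise a different error type):

    from typing import Type, TypeVar

    T = TypeVar("T")

    # Illustrative only: return the value narrowed to the expected type, or raise.
    def type_check(value: object, expected_type: Type[T]) -> T:
        if not isinstance(value, expected_type):
            raise TypeError(f"expected {expected_type.__name__}, got {type(value).__name__}")
        return value

    attrs = type_check({"axis": 1}, dict)   # passes; type_check("oops", dict) would raise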

class afe.ir.operations.PlaceholderOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
placeholder_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
quant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int, int], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PlaceholderAttrs, afe.ir.attributes.PlaceholderQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PlaceholderAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod update_input_quant(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: Mapping[afe.ir.defines.InputName, afe.ir.attributes.Optional[afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]])

Record quantization scales of the input tensors.

Parameters:
  • calib_attrs – Calibration results holding dynamic ranges. It will be updated with quantization scales of the node’s inputs.

  • input_dict – Quantization scales of the node’s inputs.

classmethod quantize(attrs: afe.ir.attributes.PlaceholderAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.PlaceholderQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.PlaceholderQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ConstantOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

constant_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConstantAttrs, afe.ir.attributes.ConstantQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConstantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.ConstantAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

The default calibration method. Executes the operation in floating point and updates the observer if the operation is associated with one; otherwise, the operation’s quantization parameters will be calculated based on its inputs’ quantization parameters. Updates the min/max values using the outputs and uses the updated min/max to compute the scales and zero points.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConstantAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ConstantAttrs, afe.ir.attributes.ConstantQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConstantQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MaxPool2DOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.MaxPool3DOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.AvgPool2DOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.AvgPool3DOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.VarianceOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[list[afe.ir.defines.InputName]]
var_fn
classmethod get_type(attrs: afe.ir.attributes.VarianceAttrs | afe.ir.attributes.VarianceQuantAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.VarianceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.VarianceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.VarianceAttrs | afe.ir.attributes.VarianceQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: QUANT_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MultiplyOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
multiply_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MultiplyAttrs, afe.ir.attributes.MultiplyQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.MultiplyAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.MultiplyAttrs, afe.ir.attributes.MultiplyQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.MultiplyQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.PadOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
pad_fn: Callable[[afe.ir.attributes.PadAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PadAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PadAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MeanOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
mean_fn: Callable[[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.MeanQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReduceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReduceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.MeanQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.MeanQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ArgMaxOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
argmax_fn: Callable[[afe.ir.attributes.ArgMaxAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ArgMaxAttrs, afe.ir.attributes.ArgMaxQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod quantize(attrs: afe.ir.attributes.ArgMaxAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ArgMaxQuantAttrs

Quantize argmax. The quantized operator takes int8 or bfloat16 values and returns int32 values. The int32 values represent an array index, not real numbers, so they do not have quantization scale. No quantization info is saved in attrs, as argmax’s computation is oblivious to quantization.
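
This works because argmax is invariant under an affine quantization with a positive scale; a small illustrative check:

    import numpy as np

    # The index of the maximum is preserved by quantization (illustrative only).
    x = np.array([0.1, 0.7, -0.3, 0.65], dtype=np.float32)
    scale, zero_point = 0.01, -5
    q = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
    assert int(np.argmax(q)) == int(np.argmax(x)) == 1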

classmethod run(attrs: afe.ir.attributes.ArgMaxAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod run_quant(attrs: afe.ir.attributes.ArgMaxQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SoftmaxOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
softmax_fn: Callable[[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['sum_exp']
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.SoftmaxQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SoftmaxAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.SoftmaxAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.SoftmaxQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SoftmaxQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

Softmax calibration method. Executes default calibration to get the results of the Softmax operation in floating point. Additionally, calculates intermediate results and updates the observers for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.
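
As context for the ‘sum_exp’ intermediate observer listed above, here is a plain numpy softmax with the sum of exponentials made explicit (illustrative only, not afe's implementation):

    import numpy as np

    # Softmax with the intermediate "sum of exponentials"; this is the value a
    # 'sum_exp' intermediate observer would track during calibration.
    def softmax_with_intermediate(x: np.ndarray, axis: int = -1):
        shifted = x - np.max(x, axis=axis, keepdims=True)   # numerical stability
        exp = np.exp(shifted)
        sum_exp = np.sum(exp, axis=axis, keepdims=True)     # observed intermediate
        return exp / sum_exp, sum_exp

    probs, sum_exp = softmax_with_intermediate(np.array([[1.0, 2.0, 3.0]]))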

class afe.ir.operations.LRNOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
lrn_fn: Callable[[afe.ir.attributes.LRNAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.LRNAttrs, afe.ir.attributes.LRNQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LRNAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LRNAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.LRNQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.LRNQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ExtmOp

Extremum op; it can be either a min or a max operation. Attributes contain a boolean that determines which operation is performed.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
min_fn: Callable[[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
max_fn: Callable[[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ExtmQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExtmAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ExtmAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ExtmQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ExtmQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SumOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
sum_fn: Callable[[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReduceAttrs, afe.ir.attributes.ReduceQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReduceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReduceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ReduceQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ReduceQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
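
Quantized execution of a reduction such as sum typically accumulates the int8 inputs in a wider integer type and then requantizes the result to the output scale. The sketch below illustrates that idea in plain NumPy; it is not AFE's actual run_quant implementation, and the symmetric-scale convention used here is an assumption:

    import numpy as np

    def run_quant_sum_sketch(x_q: np.ndarray, in_scale: float, out_scale: float,
                             axis, keepdims: bool = False) -> np.ndarray:
        # Accumulate int8 values in int32 to avoid overflow.
        acc = np.sum(x_q.astype(np.int32), axis=axis, keepdims=keepdims)
        # The real value of acc is acc * in_scale; re-express it on the output scale.
        out = np.round(acc * (in_scale / out_scale))
        return np.clip(out, -128, 127).astype(np.int8)

    x_q = np.random.randint(-128, 128, size=(1, 4, 4, 8), dtype=np.int8)
    y_q = run_quant_sum_sketch(x_q, in_scale=0.05, out_scale=0.4, axis=(1, 2))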

class afe.ir.operations.ProdOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
prod_fn: Callable[[afe.ir.attributes.ProdAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ProdAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ProdAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SubtractOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
subtract_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SubtractAttrs, afe.ir.attributes.SubtractQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SubtractAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.SubtractAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.SubtractAttrs, afe.ir.attributes.SubtractQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SubtractQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.PowerOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
power_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PowerAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MaximumOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
maximum_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MaximumAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.MaximumAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MinimumOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
minimum_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MinimumAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.FullOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
full_fn: Callable[[afe.ir.attributes.FullAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod run(attrs: afe.ir.attributes.FullAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.TileOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
tile_fn: Callable[[afe.ir.attributes.TileAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod run(attrs: afe.ir.attributes.TileAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.PReluOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray]
prelu_fn: Callable[[afe.ir.attributes.PReluAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
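
For reference, PReLU itself is the elementwise function sketched below; the per-channel slope is commonly called alpha, though the exact field name in PReluAttrs is not shown here:

    import numpy as np

    def prelu_reference(x: np.ndarray, alpha) -> np.ndarray:
        # Identity for non-negative inputs, alpha-scaled otherwise.
        # alpha may be a scalar or a per-channel (broadcastable) array.
        return np.where(x >= 0, x, alpha * x)

    x = np.array([[-2.0, -0.5, 0.0, 3.0]], dtype=np.float32)
    print(prelu_reference(x, 0.25))   # [[-0.5 -0.125 0. 3.]]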
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PReluAttrs, afe.ir.attributes.PReluQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.PReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, configs: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.PReluAttrs, afe.ir.attributes.PReluQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.PReluQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.BroadcastToOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
broadcast_to_fn
classmethod get_type(attrs: afe.ir.attributes.BroadcastToAttrs | afe.ir.attributes.BroadcastToQuantAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.BroadcastToAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.BroadcastToAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.BroadcastToAttrs | afe.ir.attributes.BroadcastToQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(attrs: afe.ir.attributes.BroadcastToQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.UDFOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
udf_fn: afe.ir.attributes.Optional[Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]] = None
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.UDFAttrs, afe.ir.attributes.UDFQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UDFAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.UDFAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.UDFAttrs, afe.ir.attributes.UDFQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.UDFQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
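
The UDF subclasses documented below (SqrtOp, RsqrtOp, TanhOp, and so on) each declare a udf_fn. The sketch below shows the general pattern with a simplified stand-in base class; binding np.sqrt is a plausible example, not the actual SqrtOp definition:

    import numpy as np
    from typing import Callable, ClassVar, Optional

    class UDFOpSketch:
        # Stand-in for UDFOp: subclasses supply an elementwise NumPy function.
        udf_fn: ClassVar[Optional[Callable[[np.ndarray], np.ndarray]]] = None

        @classmethod
        def run(cls, input_dict: dict) -> np.ndarray:
            # The 'data' input name is assumed for a single-input operator.
            return cls.udf_fn(input_dict["data"])

    class SqrtOpSketch(UDFOpSketch):
        udf_fn = np.sqrt

    y = SqrtOpSketch.run({"data": np.array([1.0, 4.0, 9.0])})   # -> [1. 2. 3.]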

class afe.ir.operations.SqrtOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.RsqrtOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.TanhOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.SigmoidOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.LogOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.Log2Op

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.Log10Op

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.ReciprocalOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.EluOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.SoftplusOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.ErfOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.GeluOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.DivideOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['rhs_reciprocal']
divide_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
reciprocal_op: ReciprocalOp
multiply_op: MultiplyOp
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.DivideAttrs, afe.ir.attributes.DivideQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

DivideOp calibration method. Executes default calibration to get the results of the Divide operation in floating point. Additionally, calculates the intermediate results for reciprocal(rhs) and updates the observer for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.
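
The reciprocal_op/multiply_op attributes and the ‘rhs_reciprocal’ intermediate name suggest that division is decomposed as lhs * reciprocal(rhs), with the reciprocal observed during calibration. A plain NumPy illustration of that decomposition (helper names are illustrative only, not part of the AFE API):

    import numpy as np

    def divide_via_reciprocal(lhs: np.ndarray, rhs: np.ndarray):
        # Intermediate tensor that an observer named 'rhs_reciprocal' would record.
        rhs_reciprocal = np.reciprocal(rhs)
        result = lhs * rhs_reciprocal
        return result, {"rhs_reciprocal": rhs_reciprocal}

    lhs = np.array([2.0, 6.0, 9.0], dtype=np.float32)
    rhs = np.array([4.0, 3.0, 2.0], dtype=np.float32)
    out, intermediates = divide_via_reciprocal(lhs, rhs)   # out == [0.5, 2.0, 4.5]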

classmethod quantize(attrs: afe.ir.attributes.DivideAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.DivideQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.DivideQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ExpOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.SwishOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.HardSwishOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
class afe.ir.operations.UpsamplingOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
upsampling_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.UpsamplingAttrs, afe.ir.attributes.UpsamplingQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UpsamplingAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.UpsamplingAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.UpsamplingAttrs, afe.ir.attributes.UpsamplingQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.UpsamplingQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ImageResize2DOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
image_resize_fn: Callable[[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.ImageResize2DQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ImageResize2DAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ImageResize2DAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.ImageResize2DQuantAttrs]

In the MLA implementation of resize, the output type is the same as the input type and there is no intermediate int32 result. int8 is always used if the integer scaling factor is not in (1, 2, 4).

  input_type   enable_int16   input_quant   resize_kernel   output_type
  int8         True           int8          int8            int8
  int8         False          int8          int8            int8
  int16        False          int8          int8            int8
  int16        True           int16         int16           int16
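
The output-type column of the table reduces to a simple rule, sketched below (function and parameter names are illustrative, not part of the AFE API):

    def resize_output_type(input_type: str, enable_int16: bool) -> str:
        # int16 is used end to end only when the input is int16 and int16 is enabled;
        # every other combination in the table falls back to int8.
        return "int16" if (input_type == "int16" and enable_int16) else "int8"

    assert resize_output_type("int8", True) == "int8"
    assert resize_output_type("int16", False) == "int8"
    assert resize_output_type("int16", True) == "int16"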

classmethod run_quant(quant_attrs: afe.ir.attributes.ImageResize2DQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.GridSampleOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
gridsample_fn: Callable[[afe.ir.attributes.GridSampleAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.GridSampleAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.GridSampleAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.GridSampleAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.GridSampleAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.TupleOp

TupleOp takes in multiple tensors, returns a tuple

input_list = None
tuple_fn: Callable[[afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], tuple]
classmethod get_type(attrs: afe.ir.attributes.TupleAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(_: afe.ir.attributes.TupleAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.Tuple[afe.ir.attributes.np.ndarray, Ellipsis]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
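Conceptually, the floating-point run just packs the incoming tensors into a tuple in input order. A minimal sketch, with made-up input key names:

import numpy as np

# Hypothetical input dictionary; the key names are invented for the example.
input_dict = {"tuple_in_0": np.zeros((1, 8), dtype=np.float32),
              "tuple_in_1": np.ones((1, 8), dtype=np.float32)}
output = tuple(input_dict.values())   # -> (zeros, ones), a tuple of two arrays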

classmethod quantize(attrs: afe.ir.attributes.TupleAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TupleAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.TupleAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Get the observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, and LayoutTransformOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so the observed distribution for those is set to None.

Parameters:
  • attrs – Operator attributes.

  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.TupleGetItemOp

TupleGetItemOp takes in a tuple, returns a tensor

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
tuple_get_item_fn: Callable[[afe.ir.attributes.TupleGetItemAttrs, tuple], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.TupleGetItemAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TupleGetItemAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, tuple], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
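TupleGetItemOp is the complementary access operation: it selects a single element from a tuple produced upstream. A minimal sketch, where the index is assumed to be carried by TupleGetItemAttrs:

import numpy as np

tuple_value = (np.zeros((1, 8)), np.ones((1, 8)))
index = 1                      # assumed to come from TupleGetItemAttrs
output = tuple_value[index]    # -> the array of ones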

classmethod quantize(attrs: afe.ir.attributes.TupleGetItemAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TupleGetItemAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.TupleGetItemAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Get the observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, and LayoutTransformOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so the observed distribution for those is set to None.

Parameters:
  • attrs – Operator attributes.

  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.SqueezeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
squeeze_fn: Callable[[afe.ir.attributes.SqueezeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SqueezeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SqueezeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ConcatenateOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]] = None
concatenate_fn: Callable[[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConcatenateAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConcatenateAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
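In the quantized path, the inputs of a concatenation typically arrive with different scales and zero points, so they first have to be requantized to the common output quantization before being joined along the axis. The sketch below illustrates that idea with generic fixed-point math; it is not the AFE requantize_fn, and the int8 range and per-input (scale, zero_point) pairs are assumptions made for the example.

import numpy as np

def requantize(x_q: np.ndarray, in_scale: float, in_zp: int,
               out_scale: float, out_zp: int) -> np.ndarray:
    # Map quantized values from the input quantization to the output quantization.
    real = (x_q.astype(np.int32) - in_zp) * in_scale
    return np.clip(np.round(real / out_scale) + out_zp, -128, 127).astype(np.int8)

def concat_quant(inputs, quant_params, out_scale, out_zp, axis=-1):
    # Requantize each input to the shared output quantization, then concatenate.
    rescaled = [requantize(x, s, zp, out_scale, out_zp)
                for x, (s, zp) in zip(inputs, quant_params)]
    return np.concatenate(rescaled, axis=axis)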

class afe.ir.operations.TransposeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
transpose_fn: Callable[[afe.ir.attributes.TransposeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TransposeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TransposeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.TransposeAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TransposeAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.ReshapeAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Return the input’s distribution. This operator cannot handle per-channel observers.

class afe.ir.operations.DepthToSpaceOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
depth_to_space_fn: Callable[[afe.ir.attributes.DepthToSpaceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.DepthToSpaceAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DepthToSpaceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
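Depth-to-space rearranges blocks of channels into spatial blocks, increasing height and width by block_size while shrinking the channel count by block_size * block_size. A minimal numpy sketch, assuming NHWC data and one common channel ordering; the actual layout and mode come from DepthToSpaceAttrs:

import numpy as np

def depth_to_space_nhwc(x: np.ndarray, block_size: int) -> np.ndarray:
    n, h, w, c = x.shape
    b = block_size
    # (N, H, W, b*b*C') -> (N, H*b, W*b, C')
    x = x.reshape(n, h, w, b, b, c // (b * b))
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h * b, w * b, c // (b * b))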

classmethod quantize(attrs: afe.ir.attributes.DepthToSpaceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.DepthToSpaceAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.ReshapeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
reshape_fn: Callable[[afe.ir.attributes.ReshapeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.ReshapeAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReshapeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReshapeAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ReshapeAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.ReshapeAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Return the input’s distribution. This operator cannot handle per-channel observers.

class afe.ir.operations.ExpandDimsOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
expand_dims_fn: Callable[[afe.ir.attributes.ReshapeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExpandDimsAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExpandDimsAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SplitOp

SplitOp takes in one tensor, returns a tuple

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
split_fn: Callable[[afe.ir.attributes.SplitAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SplitAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SplitAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.Tuple[afe.ir.attributes.np.ndarray, Ellipsis]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
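The floating-point run partitions a single tensor along one axis and returns the pieces as a tuple. A minimal sketch; the section count and axis stand in for whatever SplitAttrs carries:

import numpy as np

x = np.arange(12, dtype=np.float32).reshape(1, 12)
# Split into three equal parts along axis 1 and return them as a tuple.
output = tuple(np.split(x, 3, axis=1))   # -> three arrays of shape (1, 4)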

class afe.ir.operations.TakeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
take_fn: Callable[[afe.ir.attributes.TakeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TakeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TakeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
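Take gathers slices of the data tensor at the given indices along an axis. A minimal numpy sketch; the ‘data’/‘indices’ roles and the axis value are assumptions for the example:

import numpy as np

data = np.array([[10, 20, 30],
                 [40, 50, 60]])
indices = np.array([2, 0])
output = np.take(data, indices, axis=1)   # -> [[30, 10], [60, 40]]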

class afe.ir.operations.StridedSliceOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
strided_slice_fn: Callable[[afe.ir.attributes.StridedSliceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.StridedSliceAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.StridedSliceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
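A strided slice extracts a sub-tensor described by per-axis begin, end, and stride values. A minimal numpy sketch; the begin/end/strides triples are placeholders for what StridedSliceAttrs carries:

import numpy as np

x = np.arange(24).reshape(4, 6)
begin, end, strides = (1, 0), (4, 6), (2, 3)
# Build one slice object per axis, which is what a strided-slice kernel effectively does.
output = x[tuple(slice(b, e, s) for b, e, s in zip(begin, end, strides))]
# -> [[ 6,  9],
#     [18, 21]]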

classmethod quantize(attrs: afe.ir.attributes.StridedSliceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.StridedSliceAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.BatchFlattenOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
batch_flatten_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.BatchFlattenAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.BatchFlattenAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
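Batch flatten keeps the batch dimension and collapses all remaining dimensions into one. A minimal numpy sketch:

import numpy as np

x = np.zeros((2, 4, 4, 8), dtype=np.float32)
output = x.reshape(x.shape[0], -1)   # -> shape (2, 128)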

classmethod quantize(attrs: afe.ir.attributes.BatchFlattenAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.BatchFlattenAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.BatchFlattenAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

An observer is not used for this operator. Return the input’s distribution if the batch dimension stays the same.

class afe.ir.operations.LayoutTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
layout_transform_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.LayoutTransformAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LayoutTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LayoutTransformAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.AwesomeQuantAttrBase

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.LayoutTransformAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Get the observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, and LayoutTransformOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so the observed distribution for those is set to None.

Parameters:
  • attrs – Operator attributes.

  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.TessellationTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TessellationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.DetessellationTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DetessellationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.PackTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]] = None
classmethod get_type(attrs: afe.ir.attributes.PackTransformAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(_: afe.ir.attributes.PackTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.UnpackTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UnpackTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.NormalizationTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.NormalizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.QuantizationTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QuantizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.DequantizationTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DequantizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ResizeTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ResizeTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ChromaUpsampleTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ChromaUpsampleTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.YuvRgbConversionTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.YuvRgbConversionTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.BgrRgbConversionTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.BgrRgbConversionTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SigmoidTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SigmoidTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.NmsMaxpoolTransformOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.NmsMaxpoolTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.CastOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
cast_fn: Callable[[afe.ir.attributes.CastAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.CastAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.CastAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.AddActivationOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]

The AddActivationOp can only handle the following fused patterns (as sketched below):
  • Add + Relu

  • Add + Clip

add_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.Optional[int]], afe.ir.attributes.np.ndarray]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
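In floating point, the fused operation is an elementwise add followed by the activation. Below is a minimal numpy sketch of the two supported patterns; the clip bounds a_min/a_max are hypothetical stand-ins for whatever fields ClipAttrs actually carries, and this is not the AFE implementation. In the quantized path the same steps are expressed through add_fn, relu_fn/clip_fn, and requantize_fn on integer data.

    import numpy as np

    def add_relu_sketch(lhs: np.ndarray, rhs: np.ndarray) -> np.ndarray:
        # Add + Relu: elementwise sum, clamped below at zero.
        return np.maximum(lhs + rhs, 0.0)

    def add_clip_sketch(lhs: np.ndarray, rhs: np.ndarray,
                        a_min: float, a_max: float) -> np.ndarray:
        # Add + Clip: elementwise sum, clamped to [a_min, a_max].
        return np.clip(lhs + rhs, a_min, a_max)

    x = np.array([-1.5, 0.5, 2.0])
    y = np.array([0.5, 0.5, 4.0])
    print(add_relu_sketch(x, y))            # [0. 1. 6.]
    print(add_clip_sketch(x, y, 0.0, 6.0))  # [0. 1. 6.]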
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.AddActivationAttrs, afe.ir.attributes.AddQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AddActivationAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.AddActivationAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.AddActivationAttrs, afe.ir.attributes.AddQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.AddQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ConstantMultiplyAddOp

An add operator fused with multiplication by a scalar constant. The operator performs the floating-point operation (a*c + b*d), where c and d are scalar constants. After quantization, it behaves like an add operator. The multiplication is incorporated into the add operator’s requantization.

multiply_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
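A minimal floating-point sketch of that computation, where c and d are the scalar constants (after quantization the scaling is folded into requantization rather than computed explicitly):

    import numpy as np

    def constant_multiply_add_sketch(a: np.ndarray, b: np.ndarray,
                                     c: float, d: float) -> np.ndarray:
        # Floating-point semantics: scale each input by its constant, then add.
        return a * c + b * d

    a = np.array([1.0, 2.0])
    b = np.array([3.0, 4.0])
    print(constant_multiply_add_sketch(a, b, c=0.5, d=2.0))  # [6.5 9. ]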
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConstantMultiplyAddAttrs, afe.ir.attributes.AddQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConstantMultiplyAddAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConstantMultiplyAddAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.AddQuantAttrs, afe.ir.attributes.ConstantMultiplyAddAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.ConvAddActivationOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

add_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.Optional[int]], afe.ir.attributes.np.ndarray]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.ConvAddActivationAttrs | afe.ir.attributes.ConvQuantAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConvAddActivationAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConvAddActivationAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ConvAddActivationAttrs | afe.ir.attributes.ConvQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConvQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.ConvAddActivationAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

ConvAddActivation calibration method. Executes default calibration to get results of the ConvAdd operation in floating point. Additionally, updates intermediate observers for tracking mean values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.TupleConcatenateOp

This composite node reuses the ConcatenateOp run, quantize, and run_quant methods.

input_list = None
tuple_fn: Callable[[afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], tuple]
concatenate_op: AwesomeOperation
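A minimal sketch of the composite’s floating-point behaviour: gather the inputs into a tuple (the role of tuple_fn) and concatenate them. The axis argument here is a stand-in for the field carried by the concatenate attributes, not the actual attribute name:

    import numpy as np
    from typing import Sequence

    def tuple_concatenate_sketch(inputs: Sequence[np.ndarray], axis: int = 0) -> np.ndarray:
        # Gather the input tensors into a tuple, then concatenate along `axis`.
        return np.concatenate(tuple(inputs), axis=axis)

    parts = [np.ones((2, 3)), np.zeros((2, 3))]
    print(tuple_concatenate_sketch(parts, axis=1).shape)  # (2, 6)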
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TupleConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.Union[afe.ir.attributes.TupleConcatenateAttrs, afe.ir.attributes.ConcatenateAttrs], input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.TupleConcatenateAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ConcatQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ExternalOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list = None
external_fn: Callable[[afe.ir.attributes.ExternalAttrs, afe.ir.attributes.Dict], afe.ir.attributes.Union[afe.ir.attributes.np.ndarray, tuple]]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExternalAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExternalAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ExternalAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ExternalAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(attrs: afe.ir.attributes.ExternalAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]]

Get the observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, and LayoutTransformOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so their observed distribution is set to None.

Parameters:
  • attrs – Operator attributes.

  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.QNNQuantizeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
quant_fn: Callable[[afe.ir.attributes.QNNQuantizeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
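For reference, quant_fn is expected to implement the usual affine quantization q = round(x / scale) + zero_point. A minimal sketch follows; the int8 output range is an assumption, not a statement of the actual attribute contents:

    import numpy as np

    def qnn_quantize_sketch(data: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
        # Affine quantization: q = clip(round(x / scale) + zero_point, int8 range).
        q = np.round(data / scale) + zero_point
        return np.clip(q, -128, 127).astype(np.int8)

    x = np.array([-1.0, 0.0, 0.5], dtype=np.float32)
    print(qnn_quantize_sketch(x, scale=1 / 128, zero_point=0))  # [-128    0   64]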
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.QNNQuantizeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QNNQuantizeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.RequantizeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.RequantizeAttrs, afe.ir.attributes.RequantizeQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run_quant(quant_attrs: afe.ir.attributes.RequantizeQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.QNNDequantizeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
dequant_fn: Callable[[afe.ir.attributes.QNNDequantizeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
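dequant_fn is the inverse mapping back to floating point; a minimal sketch under the same assumed affine convention:

    import numpy as np

    def qnn_dequantize_sketch(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
        # Affine dequantization: x = (q - zero_point) * scale.
        return (q.astype(np.float32) - zero_point) * scale

    q = np.array([-128, 0, 64], dtype=np.int8)
    print(qnn_dequantize_sketch(q, scale=1 / 128, zero_point=0))  # [-1.   0.   0.5]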
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.QNNDequantizeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QNNDequantizeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.QNNMulOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
mul_fn: Callable[[afe.ir.attributes.AwesomeAttributes, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, float, int, float, int, float, int], afe.ir.attributes.np.ndarray]
classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.CustomOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list = None
custom_op_fn: Callable[[afe.ir.attributes.CustomOpAttrs, afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray]], afe.ir.attributes.np.ndarray]
quant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int, int], afe.ir.attributes.np.ndarray]
dequant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int], afe.ir.attributes.np.ndarray]
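The three callables suggest the usual way a user-defined operator can participate in a quantized graph: dequantize the integer inputs, run the custom function in floating point, and quantize the result back. The sketch below only illustrates that flow under assumed int8 affine quantization; it is not the actual AFE implementation, and the helper names are hypothetical:

    import numpy as np

    def run_custom_quantized_sketch(custom_fn, q_input: np.ndarray,
                                    in_scale: float, in_zp: int,
                                    out_scale: float, out_zp: int) -> np.ndarray:
        # 1. Dequantize the incoming tensor to floating point (dequant_fn's role).
        x = (q_input.astype(np.float32) - in_zp) * in_scale
        # 2. Run the user-provided operator in floating point (custom_op_fn's role).
        y = custom_fn(x)
        # 3. Quantize the result for downstream nodes (quant_fn's role).
        q = np.round(y / out_scale) + out_zp
        return np.clip(q, -128, 127).astype(np.int8)

    q_in = np.array([10, -20, 30], dtype=np.int8)
    print(run_custom_quantized_sketch(np.tanh, q_in, 0.05, 0, 1 / 128, 0))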
classmethod run(attrs: afe.ir.attributes.CustomOpAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Union[afe.ir.attributes.np.ndarray, tuple]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.CustomOpAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.CustomOpQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.CustomOpQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.LeakyReluCompositeOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
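For reference, leaky ReLU passes positive values through unchanged and scales negative values by a small slope (commonly called alpha; the exact field name in LeakyReluAttrs may differ). A minimal floating-point sketch:

    import numpy as np

    def leaky_relu_sketch(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
        # Positive values pass through; negative values are scaled by `alpha`.
        return np.where(x >= 0, x, alpha * x)

    print(leaky_relu_sketch(np.array([-2.0, 0.0, 3.0])))  # [-0.2  0.   3. ]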
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.LeakyReluAttrs, afe.ir.attributes.LeakyReluCompositeQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LeakyReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LeakyReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.LeakyReluCompositeQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.LeakyReluCompositeQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ReluOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReluAttrs, afe.ir.attributes.ReluQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ReluAttrs, afe.ir.attributes.ReluQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ReluQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ClipOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ClipAttrs, afe.ir.attributes.ClipQuantAttrs]) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ClipAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ClipAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(attrs: afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.BatchMatmulOp

Standard batch matmul operator where arguments to batch matmul operation are outputs of two different nodes.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.UnaryBatchMatmulOp

Special case of batch matmul operator where both arguments to batch matmul operation are output of a same node.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
class afe.ir.operations.LayerNormOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
layer_norm_fn: Callable[[afe.ir.attributes.LayerNormAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['var']
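The 'var' entry in intermediate_names corresponds to the variance computed inside the normalization, which calibration tracks separately. A minimal numpy sketch of layer normalization over the last axis, with hypothetical gamma/beta scale-and-shift parameters (not the actual LayerNormAttrs fields):

    import numpy as np

    def layer_norm_sketch(x: np.ndarray, gamma: np.ndarray, beta: np.ndarray,
                          epsilon: float = 1e-5) -> np.ndarray:
        mean = x.mean(axis=-1, keepdims=True)
        var = x.var(axis=-1, keepdims=True)   # the value tracked by the 'var' observer
        # Normalize, then apply the learned scale and shift.
        return (x - mean) / np.sqrt(var + epsilon) * gamma + beta

    x = np.random.randn(2, 8).astype(np.float32)
    y = layer_norm_sketch(x, gamma=np.ones(8), beta=np.zeros(8))
    print(y.mean(axis=-1))  # approximately zero for each row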
classmethod get_type(attrs: afe.ir.attributes.LayerNormAttrs | afe.ir.attributes.LayerNormQuantAttrs) afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LayerNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LayerNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.LayerNormAttrs | afe.ir.attributes.LayerNormQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes
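The exact quantization scheme chosen here is internal to AFE, but the general step of deriving integer quantization parameters from a calibrated floating-point range can be sketched as follows (hypothetical helper, not an AFE API):

    import numpy as np

    def affine_quant_params(min_val, max_val, bits=8):
        # Hypothetical sketch: derive an asymmetric scale/zero-point from a
        # calibrated range, the kind of result a quantize() step produces.
        qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        scale = (max_val - min_val) / (qmax - qmin)
        zero_point = int(round(qmin - min_val / scale))
        return scale, zero_point

    scale, zp = affine_quant_params(-3.1, 4.2)
    x = np.random.uniform(-3.1, 4.2, size=4).astype(np.float32)
    q = np.clip(np.round(x / scale) + zp, -128, 127).astype(np.int8)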

classmethod run_quant(quant_attrs: afe.ir.attributes.LayerNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
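A simple way to reason about what run_quant should approximate is to dequantize the inputs, run the floating-point reference, and requantize the result. The sketch below is a hypothetical sanity check, not the integer kernel AFE actually uses:

    import numpy as np

    def fake_quant_reference(q_x, in_scale, in_zp, out_scale, out_zp, float_fn):
        # Dequantize -> float reference -> requantize. A real run_quant works
        # in integer arithmetic, so its output only approximately matches this.
        x = (q_x.astype(np.float32) - in_zp) * in_scale
        y = float_fn(x)
        return np.clip(np.round(y / out_scale) + out_zp, -128, 127).astype(np.int8)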

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) → afe.ir.attributes.Any

Layer Norm calibration method. Executes default calibration to get results of the LN operation in floating point. Additionally, calculates intermediate results and updates the observers for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.
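The intermediate_names entry 'var' indicates that calibration also observes the intermediate variance. A toy sketch of the idea, with a hypothetical min/max observer dictionary standing in for AFE's observer objects:

    import numpy as np

    observers = {"var": {"min": np.inf, "max": -np.inf}}   # hypothetical observer state

    def calibrate_layer_norm_batch(x, axis=-1, epsilon=1e-5):
        mean = x.mean(axis=axis, keepdims=True)
        var = x.var(axis=axis, keepdims=True)
        # Record the range of the intermediate 'var' value for later quantization.
        observers["var"]["min"] = min(observers["var"]["min"], float(var.min()))
        observers["var"]["max"] = max(observers["var"]["max"], float(var.max()))
        return (x - mean) / np.sqrt(var + epsilon)

    for _ in range(4):   # pretend these are calibration batches
        calibrate_layer_norm_batch(np.random.rand(2, 8).astype(np.float32))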

class afe.ir.operations.InstanceNormOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
instance_norm_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.InstanceNormAttrs | afe.ir.attributes.InstanceNormQuantAttrs) → afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.InstanceNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
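As a floating-point reference for intuition, instance normalization normalizes each (instance, channel) pair over its spatial dimensions; the NCHW layout and parameter names below are assumptions for illustration only:

    import numpy as np

    def instance_norm_reference(x, epsilon=1e-5):
        # Assumed NCHW layout: normalize over the spatial axes (H, W) for each
        # instance and channel independently.
        mean = x.mean(axis=(2, 3), keepdims=True)
        var = x.var(axis=(2, 3), keepdims=True)
        return (x - mean) / np.sqrt(var + epsilon)

    out = instance_norm_reference(np.random.rand(1, 3, 16, 16).astype(np.float32))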

classmethod quantize(attrs: afe.ir.attributes.InstanceNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.InstanceNormAttrs | afe.ir.attributes.InstanceNormQuantAttrs

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.InstanceNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.RMSNormOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
rms_norm_fn: Callable[[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['reduce_mean']
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.RMSNormQuantAttrs]) → afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.RMSNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
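For intuition, a minimal NumPy reference for RMS normalization; the 'reduce_mean' comment marks the intermediate value that the intermediate_names entry refers to, and the parameter names are illustrative rather than the actual RMSNormAttrs fields:

    import numpy as np

    def rms_norm_reference(x, gamma, axis=-1, epsilon=1e-5):
        # Scale by the reciprocal root-mean-square of x along the given axis.
        mean_sq = np.mean(np.square(x), axis=axis, keepdims=True)   # 'reduce_mean' intermediate
        return x / np.sqrt(mean_sq + epsilon) * gamma

    out = rms_norm_reference(np.random.rand(2, 8).astype(np.float32),
                             np.ones(8, np.float32))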

classmethod quantize(attrs: afe.ir.attributes.RMSNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.RMSNormQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.RMSNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.RMSNormAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) → afe.ir.attributes.Any

RMS Norm calibration method. Executes default calibration to get results of the RMSNorm operation in floating point. Additionally, calculates intermediate results and updates the observers for intermediate values.

class afe.ir.operations.SliceConcatOp

This composite node uses infrastructure from StridedSliceOp and ConcatenateOp run.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SliceConcatAttrs, afe.ir.attributes.SliceConcatQuantAttrs]) → afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SliceConcatAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
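A purely illustrative NumPy sketch of the slice-then-concatenate pattern this composite covers (the actual slice parameters come from SliceConcatAttrs):

    import numpy as np

    # Take two strided slices of one input and join them along an axis.
    x = np.arange(24, dtype=np.float32).reshape(2, 12)
    first_half = x[:, 0:6]      # strided slice #1
    second_half = x[:, 6:12]    # strided slice #2
    recombined = np.concatenate([second_half, first_half], axis=1)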

classmethod quantize(attrs: afe.ir.attributes.SliceConcatAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.SliceConcatAttrs, afe.ir.attributes.SliceConcatQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SliceConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.HardSigmoidOp

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]]
hard_sigmoid_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.HardSigmoidAttrs, afe.ir.attributes.UDFQuantAttrs]) → afe.ir.tensor_type.NodeType

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.HardSigmoidAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
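For intuition, a common floating-point definition of hard sigmoid is a clipped linear function; the alpha/beta defaults below follow the usual ONNX-style convention and may differ from the values stored in HardSigmoidAttrs:

    import numpy as np

    def hard_sigmoid_reference(x, alpha=1.0 / 6.0, beta=0.5):
        # Piecewise-linear approximation of sigmoid, clipped to [0, 1].
        return np.clip(alpha * x + beta, 0.0, 1.0)

    out = hard_sigmoid_reference(np.linspace(-4, 4, 9, dtype=np.float32))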

classmethod quantize(attrs: afe.ir.attributes.HardSigmoidAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, configs: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.HardSigmoidAttrs, afe.ir.attributes.UDFQuantAttrs]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.UDFQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
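The quantized attributes here are UDFQuantAttrs, which suggests the quantized operator is evaluated through a user-defined-function mechanism. A common way to realize such an op over int8 inputs is a 256-entry lookup table; the sketch below is a generic illustration of that idea, not AFE's implementation:

    import numpy as np

    def build_int8_lut(fn, in_scale, in_zp, out_scale, out_zp):
        # Tabulate fn over every possible int8 input and requantize the result,
        # so the quantized op reduces to a single table lookup per element.
        q_in = np.arange(-128, 128, dtype=np.int32)
        real = (q_in - in_zp) * in_scale
        q_out = np.round(fn(real) / out_scale) + out_zp
        return np.clip(q_out, -128, 127).astype(np.int8)

    hard_sigmoid = lambda x: np.clip(x / 6.0 + 0.5, 0.0, 1.0)
    lut = build_int8_lut(hard_sigmoid, in_scale=0.05, in_zp=0,
                         out_scale=1.0 / 255.0, out_zp=-128)
    q_x = np.array([-100, 0, 100], dtype=np.int8)
    q_y = lut[q_x.astype(np.int32) + 128]   # shift int8 values into index range [0, 255]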