afe.ir.operations

Attributes

T

AWESOME_ATTRS

QUANT_ATTRS

AVGPOOL_TYPES

AVGPOOL_CLASSES

QuantizationTensorData

TODO:

Classes

AwesomeOperation

An abstract class

PlaceholderOp

An abstract class

ConstantOp

An abstract class

MaxPool2DOp

An abstract class

MaxPool3DOp

An abstract class

AvgPool2DOp

An abstract class

AvgPool3DOp

An abstract class

AdaptiveAvgPool2DOp

An abstract class

VarianceOp

An abstract class

MultiplyOp

An abstract class

PadOp

An abstract class

MeanOp

An abstract class

ArgMaxOp

An abstract class

SoftmaxOp

An abstract class

LRNOp

An abstract class

ExtmOp

Extremum op, can be either min or max operation. Attributes contain a boolean to determine the operation.

SumOp

An abstract class

ProdOp

An abstract class

SubtractOp

An abstract class

PowerOp

An abstract class

MaximumOp

An abstract class

MinimumOp

An abstract class

FullOp

An abstract class

TileOp

An abstract class

PReluOp

An abstract class

BroadcastToOp

An abstract class

UDFOp

An abstract class

SqrtOp

An abstract class

RsqrtOp

An abstract class

TanhOp

An abstract class

SigmoidOp

An abstract class

LogOp

An abstract class

Log2Op

An abstract class

Log10Op

An abstract class

ReciprocalOp

An abstract class

EluOp

An abstract class

SoftplusOp

An abstract class

ErfOp

An abstract class

GeluOp

An abstract class

DivideOp

An abstract class

ExpOp

An abstract class

SwishOp

An abstract class

HardSigmoidOp

An abstract class

HardSwishOp

An abstract class

UpsamplingOp

An abstract class

ImageResize2DOp

An abstract class

GridSampleOp

An abstract class

TupleOp

TupleOp takes in multiple tensors, returns a tuple

TupleGetItemOp

TupleGetItemOp takes in a tuple, returns a tensor

SqueezeOp

An abstract class

ConcatenateOp

An abstract class

TransposeOp

An abstract class

DepthToSpaceOp

An abstract class

ReshapeOp

An abstract class

ExpandDimsOp

An abstract class

SplitOp

SplitOp takes in one tensor, returns a tuple

TakeOp

An abstract class

StridedSliceOp

An abstract class

LayoutTransformOp

An abstract class

TessellationTransformOp

An abstract class

DetessellationTransformOp

An abstract class

PackTransformOp

An abstract class

UnpackTransformOp

An abstract class

NormalizationTransformOp

An abstract class

QuantizationTransformOp

An abstract class

DequantizationTransformOp

An abstract class

ResizeTransformOp

An abstract class

ChromaUpsampleTransformOp

An abstract class

YuvRgbConversionTransformOp

An abstract class

BgrRgbConversionTransformOp

An abstract class

SigmoidTransformOp

An abstract class

NmsMaxpoolTransformOp

An abstract class

CastOp

An abstract class

AddActivationOp

An abstract class

ConstantMultiplyAddOp

An add operator fused with multiplication by a scalar constant.

ConvAddActivationOp

An abstract class

TupleConcatenateOp

This composite node reuses ConcatenateOp's run, quantize, and run_quant methods

ExternalOp

An abstract class

QNNQuantizeOp

An abstract class

RequantizeOp

An abstract class

QNNDequantizeOp

An abstract class

QNNMulOp

An abstract class

CustomOp

An abstract class

LeakyReluCompositeOp

An abstract class

ReluOp

An abstract class

ClipOp

An abstract class

BatchMatmulOp

Standard batch matmul operator where arguments to batch matmul operation are outputs of two different nodes.

UnaryBatchMatmulOp

Special case of the batch matmul operator where both arguments to the batch matmul operation are outputs of the same node.

LayerNormOp

An abstract class

InstanceNormOp

An abstract class

RMSNormOp

An abstract class

SliceConcatOp

This composite node reuses run infrastructure from StridedSliceOp and ConcatenateOp.

Functions

make_quantized_pool_attrs(...)

Construct a PoolQuantAttrs, using values from a PoolAttrs and additional values that were computed during quantization

get_output_shape(attrs)

Get the output shape for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp)

node_type_for_dimension_reduction_operators(attrs, ...)

Get NodeType for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp)

has_any_int8_input(→ bool)

Return True if any of the inputs identified by input_names was quantized with int8 precision.

expand_indices_to_shape_length(...)

Helper function for expanding begin, end and strides to match the shape length.

get_strided_slice_out_shape(...)

Get StridedSliceOp output shape.

get_squeeze_out_shape(→ tuple[int, Ellipsis])

Get SqueezeOp output shape.

get_expand_dims_out_shape(...)

Get ExpandDimsOp output shape.

get_pack_input_types(...)

Get pack operator input types.

make_quantization_cast(→ afe.ir.defines.QuantizationCast)

Make a quantization cast for one value.

make_quantization_casts(→ afe.ir.defines.InputsQuantCast)

Create casts for a quantized node's input types by comparing the input data type with the type that the node requires

Module Contents

afe.ir.operations.T[source]
afe.ir.operations.AWESOME_ATTRS[source]
afe.ir.operations.QUANT_ATTRS[source]
afe.ir.operations.AVGPOOL_TYPES[source]
afe.ir.operations.AVGPOOL_CLASSES[source]
afe.ir.operations.QuantizationTensorData[source]

TODO:
  • Merge the quantization in single node and composite node. Ex: Use Conv2DOp.quantize in ConvAddActivationOp

  • Merge quantization, run_quant for Conv2D and Conv2DTranspose

  • Create check_attrs function to check attrs and quant_attrs

afe.ir.operations.make_quantized_pool_attrs(attrs: afe.ir.attributes.PoolAttrs, *, pad_value: int, input_int16: bool, requant: afe.ir.attributes.Optional[afe.ir.attributes.BaseRequantization] = None) afe.ir.attributes.PoolQuantAttrs[source]

Construct a PoolQuantAttrs, using values from a PoolAttrs and additional values that were computed during quantization.

afe.ir.operations.get_output_shape(attrs: afe.ir.attributes.Union[afe.ir.attributes.SumAttrs, afe.ir.attributes.MeanAttrs, afe.ir.attributes.ProdAttrs, afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ArgMaxAttrs])[source]

Get the output shape for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp) using attributes from their AwesomeAttributes class.

Parameters:

attrs – AwesomeAttributes class

Returns:

Output shape
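The shared shape rule for these reductions can be sketched in plain Python. This is an illustrative sketch only; `axis` and `keepdims` are assumed attribute fields following common framework conventions, not the actual AwesomeAttributes layout:

```python
import numpy as np

def reduction_out_shape(in_shape, axis, keepdims):
    """Output shape of a reduction over the given axes (illustrative sketch)."""
    axes = {a % len(in_shape) for a in axis}  # normalize negative axes
    if keepdims:
        # Reduced dimensions are kept with extent 1.
        return tuple(1 if i in axes else d for i, d in enumerate(in_shape))
    # Reduced dimensions are dropped.
    return tuple(d for i, d in enumerate(in_shape) if i not in axes)

# Agrees with numpy's reduction behavior, e.g. for a sum:
x = np.zeros((2, 3, 4))
assert np.sum(x, axis=(1,), keepdims=False).shape == reduction_out_shape((2, 3, 4), [1], False)
```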

afe.ir.operations.node_type_for_dimension_reduction_operators(attrs: afe.ir.attributes.Union[afe.ir.attributes.SumAttrs, afe.ir.attributes.MeanAttrs, afe.ir.attributes.ProdAttrs, afe.ir.attributes.ExtmAttrs, afe.ir.attributes.ArgMaxAttrs], input_dtype: afe.ir.attributes.Union[afe.ir.attributes.np.dtype, Type[afe.ir.attributes.np.number]], output_dtype: afe.ir.attributes.Union[afe.ir.attributes.np.dtype, Type[afe.ir.attributes.np.number]])[source]

Get NodeType for the dimension-reduction operators (SumOp, MeanOp, ProdOp, ExtmOp & ArgMaxOp).

Parameters:
  • attrs – AwesomeAttributes class

  • input_dtype – Input data type

  • output_dtype – Output data type

Returns:

NodeType

afe.ir.operations.has_any_int8_input(quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, input_names: afe.ir.attributes.Sequence[afe.ir.defines.InputName]) bool[source]

Return True if any of the inputs identified by input_names was quantized with int8 precision.

afe.ir.operations.expand_indices_to_shape_length(begin: afe.ir.attributes.List[int], end: afe.ir.attributes.List[int], strides: afe.ir.attributes.List[int], axes: afe.ir.attributes.Optional[afe.ir.attributes.List[int]], input_shape: afe.ir.attributes.List[int]) afe.ir.attributes.Tuple[afe.ir.attributes.List[int], afe.ir.attributes.List[int], afe.ir.attributes.List[int]][source]

Helper function for expanding begin, end and strides to match the shape length.
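As an illustration of what such an expansion typically does, here is a hedged sketch assuming TVM-style semantics, where `axes`, when present, names the dimensions that begin/end/strides refer to; the real helper may handle defaults and edge cases differently:

```python
def expand_indices(begin, end, strides, axes, input_shape):
    """Expand begin/end/strides to one entry per input dimension (sketch)."""
    n = len(input_shape)
    if axes is None:
        # Pad the tail with full slices so all lists have full rank.
        begin = begin + [0] * (n - len(begin))
        end = end + list(input_shape[len(end):])
        strides = strides + [1] * (n - len(strides))
        return begin, end, strides
    # Scatter the given indices into full-rank lists, defaulting the
    # untouched dimensions to a full slice with stride 1.
    b, e, s = [0] * n, list(input_shape), [1] * n
    for i, ax in enumerate(axes):
        b[ax], e[ax], s[ax] = begin[i], end[i], strides[i]
    return b, e, s
```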

afe.ir.operations.get_strided_slice_out_shape(attrs: afe.ir.attributes.StridedSliceAttrs) afe.ir.attributes.Tuple[int, Ellipsis][source]

Get StridedSliceOp output shape.

Parameters:

attrs – StridedSlice attributes class.

Returns:

Output shape.
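For positive strides, the output extent per dimension is the ceiling of the index range divided by the stride. A minimal sketch (negative strides and other StridedSliceAttrs details are ignored here):

```python
import numpy as np

def strided_slice_out_shape(begin, end, strides):
    """Per-dimension ceil((end - begin) / stride), clamped at zero (sketch)."""
    return tuple(max(0, -(-(e - b) // s)) for b, e, s in zip(begin, end, strides))

# Agrees with numpy basic slicing:
assert np.zeros((4, 5))[0:4:2, 1:5:2].shape == strided_slice_out_shape([0, 1], [4, 5], [2, 2])
```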

afe.ir.operations.get_squeeze_out_shape(axis: list[int], input_shape: tuple[int, Ellipsis]) tuple[int, Ellipsis][source]

Get SqueezeOp output shape.

Parameters:
  • axis – Set of axes to remove

  • input_shape – Shape of input tensor

Returns:

Output shape.
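A minimal sketch of the squeeze shape rule, assuming `axis` lists dimensions of size 1, matching the parameter description above:

```python
def squeeze_out_shape(axis, input_shape):
    """Drop the listed size-1 dimensions from the shape (sketch)."""
    axes = {a % len(input_shape) for a in axis}  # normalize negative axes
    if any(input_shape[a] != 1 for a in axes):
        raise ValueError("can only squeeze dimensions of size 1")
    return tuple(d for i, d in enumerate(input_shape) if i not in axes)
```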

afe.ir.operations.get_expand_dims_out_shape(attrs: afe.ir.attributes.ExpandDimsAttrs) afe.ir.attributes.Tuple[int, Ellipsis][source]

Get ExpandDimsOp output shape.

Parameters:

attrs – ExpandDims attributes class.

Returns:

Output shape.
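A sketch of the usual expand_dims shape rule, assuming the attributes carry an insertion position `axis` and a count `num_newaxis`; these field names follow TVM conventions and are an assumption here:

```python
def expand_dims_out_shape(axis, num_newaxis, input_shape):
    """Insert num_newaxis size-1 dimensions at position axis (sketch)."""
    out = list(input_shape)
    for _ in range(num_newaxis):
        out.insert(axis, 1)  # each insert lands at the same position
    return tuple(out)
```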

afe.ir.operations.get_pack_input_types(input_types: afe.ir.attributes.List[afe.ir.tensor_type.TensorType]) afe.ir.attributes.List[afe.ir.tensor_type.TensorType][source]

Get pack operator input types. If input tensor has 4D shape it will be reshaped to 2D MLA buffer shape.

afe.ir.operations.make_quantization_cast(provided_type: afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType], wanted_type: afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]) afe.ir.defines.QuantizationCast[source]

Make a quantization cast for one value.

Parameters:
  • provided_type – Type and quantization of the value

  • wanted_type – Type and quantization that it should be cast to

Returns:

Cast
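The decision can be illustrated with a toy stand-in for the quantized type; `QType` below is hypothetical and much simpler than the real QuantResultTensorType:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class QType:
    """Hypothetical stand-in for a quantized tensor type."""
    dtype: str
    scale: float
    zero_point: int

def make_cast(provided: QType, wanted: QType):
    """Describe the cast needed to turn provided into wanted (sketch)."""
    if provided == wanted:
        return ("identity",)  # nothing needs to be inserted
    # In the real flow, a quantize/dequantize/requantize node would be inserted.
    return ("requantize", provided, wanted)
```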

afe.ir.operations.make_quantization_casts(provided_input_types: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]], wanted_input_types: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]) afe.ir.defines.InputsQuantCast[source]

Create casts for a quantized node’s input types by comparing the input data type with the type that the node requires.

Parameters:
  • provided_input_types – Type and quantization of a node’s inputs, after quantization

  • wanted_input_types – Type and quantization that the quantized node requires

Returns:

Casts for the node

class afe.ir.operations.AwesomeOperation[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.Optional[afe.ir.attributes.List[afe.ir.defines.InputName]]] = [][source]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = [][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]
Abstractmethod:

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: AWESOME_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]
Abstractmethod:

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod run_quant(quant_attrs: QUANT_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]
Abstractmethod:

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

The default calibration method. Executes the operation in floating point and updates the observer if the operation is associated with one; otherwise, the operation’s quantization parameters will be calculated based on its inputs’ quantization parameters. Updates the min/max values using the outputs and uses the updated min/max to compute the scales and zero points.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod update_input_quant(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: Mapping[afe.ir.defines.InputName, afe.ir.attributes.Optional[afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]])[source]

Record quantization scales of the input tensors.

Parameters:
  • calib_attrs – Calibration results holding dynamic ranges. It will be updated with quantization parameters of the node’s inputs.

  • input_dict – Quantization parameters of the node’s inputs.

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp and ReshapeOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so their observed distribution is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

classmethod quantize(attrs: AWESOME_ATTRS, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) QUANT_ATTRS[source]
Abstractmethod:

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod type_check(value: afe.ir.attributes.Any, expected_type: Type[T]) T[source]

Each op expects a more specific type of inputs / AwesomeAttributes, so this function helps with type checking.

Parameters:
  • value – AwesomeAttributes

  • expected_type – A type
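A minimal sketch of such a helper (the real method may integrate with AFE's error reporting rather than raising directly):

```python
from typing import Type, TypeVar

T = TypeVar("T")

def type_check(value: object, expected_type: Type[T]) -> T:
    """Return value unchanged if it is an instance of expected_type, else raise."""
    if not isinstance(value, expected_type):
        raise TypeError(
            f"expected {expected_type.__name__}, got {type(value).__name__}"
        )
    return value
```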

class afe.ir.operations.PlaceholderOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
placeholder_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
quant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int, int], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PlaceholderAttrs, afe.ir.attributes.PlaceholderQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PlaceholderAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod update_input_quant(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: Mapping[afe.ir.defines.InputName, afe.ir.attributes.Optional[afe.ir.defines.DataValue[afe.ir.attributes.QuantResultTensorType]]])[source]

Record quantization scales of the input tensors.

Parameters:
  • calib_attrs – Calibration results holding dynamic ranges. It will be updated with quantization scales of the node’s inputs.

  • input_dict – Quantization scales of the node’s inputs.

classmethod quantize(attrs: afe.ir.attributes.PlaceholderAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.PlaceholderQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.PlaceholderQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ConstantOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

constant_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConstantAttrs, afe.ir.attributes.ConstantQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConstantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.ConstantAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

The default calibration method. Executes the operation in floating point and updates the observer if the operation is associated with one; otherwise, the operation’s quantization parameters will be calculated based on its inputs’ quantization parameters. Updates the min/max values using the outputs and uses the updated min/max to compute the scales and zero points.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConstantAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ConstantAttrs, afe.ir.attributes.ConstantQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConstantQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MaxPool2DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
class afe.ir.operations.MaxPool3DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
class afe.ir.operations.AvgPool2DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
class afe.ir.operations.AvgPool3DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
class afe.ir.operations.AdaptiveAvgPool2DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
avgpool_fn: Callable[[afe.ir.attributes.AdaptiveAvgPool2DAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.VarianceOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[list[afe.ir.defines.InputName]][source]
var_fn[source]
classmethod get_type(attrs: afe.ir.attributes.VarianceAttrs | afe.ir.attributes.VarianceQuantAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.VarianceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.VarianceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.VarianceAttrs | afe.ir.attributes.VarianceQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: QUANT_ATTRS, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) → afe.ir.attributes.Any[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MultiplyOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
multiply_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MultiplyAttrs, afe.ir.attributes.MultiplyQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.MultiplyAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.MultiplyAttrs, afe.ir.attributes.MultiplyQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.MultiplyQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
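As a rough illustration of what run_quant does (not the actual AFE kernel), quantized elementwise multiplication typically multiplies the zero-point-corrected int8 operands in wider precision and requantizes the product back to int8 with a rounding right shift. The attribute names below (zero points, shift) are assumptions standing in for what a MultiplyQuantAttrs instance would carry:

```python
import numpy as np

def multiply_quant_sketch(lhs, rhs, lhs_zp, rhs_zp, shift, out_zp=0):
    """Hypothetical quantized multiply: int8 inputs -> int8 output."""
    # Work in int32 so the int8*int8 product cannot overflow.
    prod = (lhs.astype(np.int32) - lhs_zp) * (rhs.astype(np.int32) - rhs_zp)
    # Requantize: round-to-nearest via an added half, then arithmetic right shift.
    rounded = (prod + (1 << (shift - 1))) >> shift
    # Re-apply the output zero point and saturate to the int8 range.
    return np.clip(rounded + out_zp, -128, 127).astype(np.int8)

lhs = np.array([10, -20, 64], dtype=np.int8)
rhs = np.array([4, 4, 4], dtype=np.int8)
out = multiply_quant_sketch(lhs, rhs, lhs_zp=0, rhs_zp=0, shift=5)
```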

class afe.ir.operations.PadOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
pad_fn: Callable[[afe.ir.attributes.PadAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PadAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PadAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
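In floating point, padding corresponds to numpy-style constant padding. A minimal sketch with assumed PadAttrs fields (per-axis pad widths and a pad value; the actual attribute layout may differ):

```python
import numpy as np

data = np.arange(6, dtype=np.float32).reshape(2, 3)
pad_width = ((1, 1), (0, 2))   # assumed (before, after) pairs per axis
pad_value = 0.0                # assumed constant fill value

# Each axis grows by before+after: (2, 3) -> (4, 5).
padded = np.pad(data, pad_width, mode="constant", constant_values=pad_value)
```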

class afe.ir.operations.MeanOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
mean_fn: Callable[[afe.ir.attributes.MeanAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MeanAttrs, afe.ir.attributes.MeanQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.MeanAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.MeanAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.MeanQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.MeanQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ArgMaxOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
argmax_fn: Callable[[afe.ir.attributes.ArgMaxAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ArgMaxAttrs, afe.ir.attributes.ArgMaxQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod quantize(attrs: afe.ir.attributes.ArgMaxAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.ArgMaxQuantAttrs[source]

Quantize argmax. The quantized operator takes int8 or bfloat16 values and returns int32 values. The int32 values represent an array index, not real numbers, so they do not have quantization scale. No quantization info is saved in attrs, as argmax’s computation is oblivious to quantization.
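Argmax returns positions rather than values, so it commutes with any monotonically increasing quantization map. The following numpy check illustrates that property (the scale and zero point are made-up example values, not AFE defaults):

```python
import numpy as np

scale, zero_point = 0.05, 3                          # assumed affine quantization
q = np.array([-12, 7, 90, 41], dtype=np.int8)        # quantized int8 input
real = scale * (q.astype(np.float32) - zero_point)   # dequantized real values

# Dequantization is monotonically increasing (scale > 0), so the position
# of the maximum is identical in both domains. The quantized operator can
# therefore run argmax directly on int8 data and emit plain int32 indices
# that carry no quantization scale.
idx_q = np.argmax(q).astype(np.int32)
idx_f = np.argmax(real)
```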

classmethod run(attrs: afe.ir.attributes.ArgMaxAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod run_quant(attrs: afe.ir.attributes.ArgMaxQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SoftmaxOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
softmax_fn: Callable[[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['sum_exp'][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.SoftmaxQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SoftmaxAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
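The floating-point behaviour can be sketched with numpy; the denominator below corresponds to the ‘sum_exp’ intermediate that SoftmaxOp registers for observation (this is an illustrative sketch, not the AFE kernel):

```python
import numpy as np

def softmax_sketch(x, axis=-1):
    # Subtract the running max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    sum_exp = np.sum(e, axis=axis, keepdims=True)  # the 'sum_exp' intermediate
    return e / sum_exp

p = softmax_sketch(np.array([1.0, 2.0, 3.0]))
```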

classmethod quantize(attrs: afe.ir.attributes.SoftmaxAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.SoftmaxAttrs, afe.ir.attributes.SoftmaxQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SoftmaxQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) → afe.ir.attributes.Any[source]

Softmax calibration method. Executes default calibration to get results of Softmax operation in floating point. Additionally, calculate intermediate results and update the observers for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.LRNOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
lrn_fn: Callable[[afe.ir.attributes.LRNAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.LRNAttrs, afe.ir.attributes.LRNQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LRNAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LRNAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.LRNQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.LRNQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ExtmOp[source]

Extremum op, can be either min or max operation. Attributes contain a boolean to determine the operation.
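The min/max selection can be pictured as dispatching on that boolean attribute. A hypothetical sketch (the flag name `is_max` is assumed, not the actual ExtmAttrs field):

```python
import numpy as np

def extm_sketch(x, axis, is_max):
    # Dispatch to the extremum kernel selected by the boolean attribute.
    fn = np.max if is_max else np.min
    return fn(x, axis=axis)

x = np.array([[3.0, 1.0], [2.0, 5.0]])
hi = extm_sketch(x, axis=0, is_max=True)
lo = extm_sketch(x, axis=0, is_max=False)
```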

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
min_fn: Callable[[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
max_fn: Callable[[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExtmAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExtmAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SumOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
sum_fn: Callable[[afe.ir.attributes.SumAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SumAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SumAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ProdOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
prod_fn: Callable[[afe.ir.attributes.ProdAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ProdAttrs, QUANT_ATTRS]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ProdAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SubtractOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
subtract_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SubtractAttrs, afe.ir.attributes.SubtractQuantAttrs]) → afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SubtractAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) → afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.SubtractAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) → afe.ir.attributes.Union[afe.ir.attributes.SubtractAttrs, afe.ir.attributes.SubtractQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SubtractQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
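For intuition, the floating-point run / quantized run_quant pair can be sketched with a toy elementwise subtract. The input keys "lhs"/"rhs", the int8 values, and the single shared scale of 0.1 are assumptions chosen for illustration, not AFE's actual quantization scheme:

```python
import numpy as np

# Toy float-vs-quantized subtract. All names and the scale are
# illustrative assumptions, not the actual AFE internals.
def run_float(input_dict):
    return input_dict["lhs"] - input_dict["rhs"]

def run_quant(input_dict):
    # Widen to int32 before subtracting to avoid int8 overflow.
    return (input_dict["lhs"].astype(np.int32)
            - input_dict["rhs"].astype(np.int32))

x = {"lhs": np.array([3.0, 5.0]), "rhs": np.array([1.0, 2.0])}
xq = {"lhs": np.array([30, 50], dtype=np.int8),
      "rhs": np.array([10, 20], dtype=np.int8)}

float_out = run_float(x)
quant_out = run_quant(xq)
dequant = quant_out * 0.1  # interpret the integer result with the shared scale
```

The dequantized result approximates the floating-point result, which is the basic contract run_quant is expected to satisfy.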

class afe.ir.operations.PowerOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
power_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PowerAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
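power_fn is declared as Callable[[np.ndarray, np.ndarray], np.ndarray]; NumPy's elementwise power is a plausible stand-in for a quick sketch (an assumption, not the confirmed binding):

```python
import numpy as np

# Elementwise power, matching power_fn's two-ndarray signature.
power_fn = np.power

base = np.array([2.0, 3.0])
exponent = np.array([3.0, 2.0])
result = power_fn(base, exponent)  # base ** exponent, elementwise
```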

class afe.ir.operations.MaximumOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
maximum_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MaximumAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.MaximumAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.MinimumOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
minimum_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.MinimumAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
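maximum_fn and minimum_fn share the Callable[[np.ndarray, np.ndarray], np.ndarray] signature; the NumPy elementwise ufuncs are natural stand-ins for this sketch (assumed, not confirmed from the AFE source):

```python
import numpy as np

# Elementwise max/min, matching the two-ndarray callable signatures.
maximum_fn, minimum_fn = np.maximum, np.minimum

a = np.array([1.0, 4.0, -2.0])
b = np.array([3.0, 2.0, -5.0])
hi = maximum_fn(a, b)  # per-element larger value
lo = minimum_fn(a, b)  # per-element smaller value
```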

class afe.ir.operations.FullOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
full_fn: Callable[[afe.ir.attributes.FullAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod run(attrs: afe.ir.attributes.FullAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
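full_fn pairs FullAttrs with a fill value to produce a constant-filled tensor. The sketch below reduces FullAttrs to a tiny stand-in with shape and dtype fields (assumed field names, for illustration only):

```python
import numpy as np

# Stand-in for afe.ir.attributes.FullAttrs; the real class differs.
class FullAttrs:
    def __init__(self, shape, dtype):
        self.shape, self.dtype = shape, dtype

def full_fn(attrs, fill_value):
    # Produce a tensor of the requested shape filled with fill_value.
    return np.full(attrs.shape, fill_value, dtype=attrs.dtype)

out = full_fn(FullAttrs((2, 3), np.float32), np.float32(7.0))
```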

class afe.ir.operations.TileOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
tile_fn: Callable[[afe.ir.attributes.TileAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod run(attrs: afe.ir.attributes.TileAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
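tile_fn pairs TileAttrs with an input tensor; this sketch reduces the attrs to a reps tuple (a hypothetical field, for illustration) and lets np.tile do the work:

```python
import numpy as np

# Repeat the input "reps" times along each axis, as np.tile does.
def tile_fn(reps, data):
    return np.tile(data, reps)

out = tile_fn((2, 1), np.array([[1, 2]]))  # stack two copies vertically
```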

class afe.ir.operations.PReluOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray][source]
prelu_fn: Callable[[afe.ir.attributes.PReluAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.PReluAttrs, afe.ir.attributes.PReluQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.PReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
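In floating point, PRelu keeps positive values and scales negative values by a learned slope: f(x) = x for x > 0, otherwise alpha * x. A minimal sketch with a scalar alpha (per-channel slope handling omitted):

```python
import numpy as np

# PRelu: identity for positive inputs, slope alpha for negative inputs.
def prelu(x, alpha):
    return np.where(x > 0, x, alpha * x)

out = prelu(np.array([-2.0, 0.5, 3.0]), alpha=0.25)
```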

classmethod quantize(attrs: afe.ir.attributes.PReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, configs: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.PReluAttrs, afe.ir.attributes.PReluQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.PReluQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.BroadcastToOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
broadcast_to_fn[source]
classmethod get_type(attrs: afe.ir.attributes.BroadcastToAttrs | afe.ir.attributes.BroadcastToQuantAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.BroadcastToAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
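broadcast_to_fn plausibly follows NumPy broadcasting semantics (an assumption); for example, a (3,) vector expands to (2, 3) by repeating along the new leading axis:

```python
import numpy as np

# Expand a (3,) vector to shape (2, 3) via broadcasting.
out = np.broadcast_to(np.array([1.0, 2.0, 3.0]), (2, 3))
```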

classmethod quantize(attrs: afe.ir.attributes.BroadcastToAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.BroadcastToAttrs | afe.ir.attributes.BroadcastToQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(attrs: afe.ir.attributes.BroadcastToQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.UDFOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
udf_fn: afe.ir.attributes.Optional[Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray]] = None[source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.UDFAttrs, afe.ir.attributes.UDFQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UDFAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.UDFAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.UDFAttrs, afe.ir.attributes.UDFQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.UDFQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SqrtOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
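Each UDF subclass binds udf_fn to a unary elementwise function. The bindings below are plausible stand-ins for SqrtOp, RsqrtOp, TanhOp, and SigmoidOp, written out for intuition rather than taken from the AFE source:

```python
import numpy as np

# Plausible unary udf_fn bindings (assumptions, for illustration).
def sqrt_fn(x):
    return np.sqrt(x)

def rsqrt_fn(x):
    return 1.0 / np.sqrt(x)  # reciprocal square root

def tanh_fn(x):
    return np.tanh(x)

def sigmoid_fn(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([4.0])
```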
class afe.ir.operations.RsqrtOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.TanhOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.SigmoidOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.LogOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.Log2Op[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.Log10Op[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.ReciprocalOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.EluOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.SoftplusOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.ErfOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.GeluOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.DivideOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs
from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the
list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['rhs_reciprocal'][source]
divide_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
reciprocal_op: ReciprocalOp[source]
multiply_op: MultiplyOp[source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.DivideAttrs, afe.ir.attributes.DivideQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

DivideOp calibration method. Executes default calibration to get results of Divide operation in floating point. Additionally, calculate intermediate results for reciprocal(rhs) and update the observer for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.DivideAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.DivideQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.DivideQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
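The class attributes above (reciprocal_op, multiply_op, and the 'rhs_reciprocal' intermediate name) suggest that DivideOp evaluates division as lhs * reciprocal(rhs), observing the reciprocal as an intermediate during calibration. A minimal NumPy sketch of that decomposition; the function name is illustrative, not part of the AFE API:

```python
import numpy as np

def divide_via_reciprocal(lhs: np.ndarray, rhs: np.ndarray) -> np.ndarray:
    # Compute reciprocal(rhs) as a separate intermediate (this is the value
    # that would be observed under the name 'rhs_reciprocal' during
    # calibration), then multiply by lhs.
    rhs_reciprocal = np.reciprocal(rhs.astype(np.float32))
    return lhs.astype(np.float32) * rhs_reciprocal
```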

class afe.ir.operations.ExpOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
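Like the other UDF ops below, ExpOp carries a single elementwise function in udf_fn with signature Callable[[np.ndarray], np.ndarray]. A minimal sketch of such a function, assuming ExpOp simply wraps the elementwise exponential:

```python
import numpy as np

# Hypothetical stand-in for ExpOp.udf_fn: a UDF op applies one elementwise
# function to its single input tensor.
def exp_udf(x: np.ndarray) -> np.ndarray:
    return np.exp(x)
```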
class afe.ir.operations.SwishOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.HardSigmoidOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
class afe.ir.operations.HardSwishOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

udf_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
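The udf_fn entries of HardSigmoidOp and HardSwishOp presumably wrap the usual piecewise-linear definitions. A hedged sketch using the common formulas hard-sigmoid(x) = clip(x/6 + 1/2, 0, 1) and hard-swish(x) = x * hard-sigmoid(x); the exact variant AFE uses (e.g. the alpha/beta constants) is not stated in this reference:

```python
import numpy as np

# Assumed piecewise-linear definitions; constants follow the common
# x/6 + 1/2 convention, which may differ from AFE's actual attributes.
def hard_sigmoid(x: np.ndarray) -> np.ndarray:
    return np.clip(x / 6.0 + 0.5, 0.0, 1.0)

def hard_swish(x: np.ndarray) -> np.ndarray:
    return x * hard_sigmoid(x)
```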
class afe.ir.operations.UpsamplingOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
upsampling_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.UpsamplingAttrs, afe.ir.attributes.UpsamplingQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UpsamplingAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.UpsamplingAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.UpsamplingAttrs, afe.ir.attributes.UpsamplingQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.UpsamplingQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
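upsampling_fn maps one array to another. A minimal nearest-neighbour stand-in, assuming NHWC layout and an integer scale factor; both assumptions are illustrative and not confirmed by this reference:

```python
import numpy as np

# Nearest-neighbour upsampling: each pixel is repeated `scale` times along
# the height and width axes (axes 1 and 2 under the assumed NHWC layout).
def upsample_nearest(x: np.ndarray, scale: int = 2) -> np.ndarray:
    x = np.repeat(x, scale, axis=1)  # height
    x = np.repeat(x, scale, axis=2)  # width
    return x
```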

class afe.ir.operations.ImageResize2DOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
image_resize_fn: Callable[[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.ImageResize2DQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ImageResize2DAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ImageResize2DAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ImageResize2DAttrs, afe.ir.attributes.ImageResize2DQuantAttrs][source]

In the MLA implementation of resize, the output type is the same as the input type. There is no intermediate int32 result. Always use int8 if the integer scaling factor is not in (1, 2, 4).

input_type   enable_int16   input_quant   resize_kernel   output_type
int8         True           int8          int8            int8
int8         False          int8          int8            int8
int16        False          int8          int8            int8
int16        True           int16         int16           int16

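The output-type selection described above reduces to a small rule: the output stays int16 only when the input is int16 and int16 is enabled; otherwise everything runs in int8. A sketch of that rule; the function name and string-typed parameters are illustrative, not part of the AFE API:

```python
# Illustrative lookup matching the quantize() type table for resize:
# int16 output requires both an int16 input and enable_int16=True.
def resize_output_type(input_type: str, enable_int16: bool) -> str:
    if input_type == 'int16' and enable_int16:
        return 'int16'
    return 'int8'
```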
classmethod run_quant(quant_attrs: afe.ir.attributes.ImageResize2DQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.GridSampleOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
gridsample_fn: Callable[[afe.ir.attributes.GridSampleAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.GridSampleAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.GridSampleAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.GridSampleAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.GridSampleAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes
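A minimal nearest-neighbour sketch of the kind of sampling gridsample_fn performs, assuming NHWC data, a grid of normalized (x, y) coordinates in [-1, 1], and align_corners-style index mapping. All of these conventions are assumptions for illustration; AFE's actual layout and interpolation mode are not stated in this reference:

```python
import numpy as np

def grid_sample_nearest(data: np.ndarray, grid: np.ndarray) -> np.ndarray:
    # data: (N, H, W, C); grid: (N, GH, GW, 2) holding normalized (x, y).
    n, h, w, c = data.shape
    gh, gw = grid.shape[1], grid.shape[2]
    out = np.zeros((n, gh, gw, c), dtype=data.dtype)
    for b in range(n):
        for i in range(gh):
            for j in range(gw):
                x, y = grid[b, i, j]
                # Map [-1, 1] to pixel indices (align_corners=True style).
                px = int(round((x + 1) * (w - 1) / 2))
                py = int(round((y + 1) * (h - 1) / 2))
                px = min(max(px, 0), w - 1)
                py = min(max(py, 0), h - 1)
                out[b, i, j] = data[b, py, px]
    return out
```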

class afe.ir.operations.TupleOp[source]

TupleOp takes in multiple tensors, returns a tuple

input_list = None[source]
tuple_fn: Callable[[afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], tuple][source]
classmethod get_type(attrs: afe.ir.attributes.TupleAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(_: afe.ir.attributes.TupleAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.Tuple[afe.ir.attributes.np.ndarray, Ellipsis][source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.TupleAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TupleAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp, and ReshapeOp don’t use observed distributions, and those values won’t be passed to any other MLA node, so their observed distribution is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.TupleGetItemOp[source]

TupleGetItemOp takes in a tuple, returns a tensor

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
tuple_get_item_fn: Callable[[afe.ir.attributes.TupleGetItemAttrs, tuple], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.TupleGetItemAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TupleGetItemAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, tuple], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.TupleGetItemAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TupleGetItemAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp, and ReshapeOp don’t use observed distributions, and those values won’t be passed to any other MLA node, so their observed distribution is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.
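Per their declared signatures, tuple_fn packs a list of tensors into a tuple and tuple_get_item_fn indexes one back out. Minimal stand-ins for both:

```python
import numpy as np

# TupleOp: multiple tensors in, one tuple out.
def make_tuple(tensors):
    return tuple(tensors)

# TupleGetItemOp: one tuple in, one tensor out.
def tuple_get_item(t: tuple, index: int):
    return t[index]
```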

class afe.ir.operations.SqueezeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
squeeze_fn: Callable[[afe.ir.attributes.SqueezeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SqueezeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SqueezeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
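A minimal stand-in for squeeze_fn, taking the axes directly instead of a SqueezeAttrs object for brevity (an illustrative simplification):

```python
import numpy as np

# Drop the listed unit-length axes from the input tensor.
def squeeze(x: np.ndarray, axes: tuple) -> np.ndarray:
    return np.squeeze(x, axis=axes)
```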

class afe.ir.operations.ConcatenateOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]] = None[source]
concatenate_fn: Callable[[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConcatenateAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConcatenateAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
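ConcatenateOp's requantize_fn signature suggests that quantized concatenation requantizes each input to a common output scale before joining. A hedged sketch of that scheme; the rounding and saturation details here are a generic int8 convention, not necessarily AFE's requantize_fn:

```python
import numpy as np

# Requantize each int8 input to the shared output scale, then concatenate.
# Scale handling (multiply by input_scale/output_scale, round, saturate to
# int8) is an assumed generic scheme for illustration.
def concat_quant(tensors, input_scales, output_scale, axis):
    requantized = []
    for t, s in zip(tensors, input_scales):
        corrected = np.rint(t.astype(np.int32) * (s / output_scale))
        requantized.append(np.clip(corrected, -128, 127).astype(np.int8))
    return np.concatenate(requantized, axis=axis)
```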

class afe.ir.operations.TransposeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
transpose_fn: Callable[[afe.ir.attributes.TransposeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TransposeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TransposeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
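As an illustration of the operation's semantics only (not the AFE code path), a transpose can be sketched in plain NumPy; the `axes` tuple here stands in for the permutation carried by `TransposeAttrs`:

```python
import numpy as np

# Hypothetical stand-in for the permutation stored in TransposeAttrs.
axes = (0, 3, 1, 2)  # NHWC -> NCHW

data = np.zeros((1, 8, 8, 3), dtype=np.float32)
out = np.transpose(data, axes)
print(out.shape)  # (1, 3, 8, 8)
```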

classmethod quantize(attrs: afe.ir.attributes.TransposeAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.TransposeAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes
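The protocol described above can be modeled with a minimal sketch. All names below are hypothetical simplifications, not the AFE API: the hook reads the calibrated input quantization, assigns an output quantization, and returns the (possibly modified) attributes. A shape-preserving op such as transpose can simply pass its input quantization through.

```python
from dataclasses import dataclass
from typing import Dict, Optional

# Hypothetical, simplified stand-ins for the AFE calibration structures.
@dataclass
class Quant:
    scale: float
    zero_point: int

@dataclass
class CalibAttrs:
    input_quant: Dict[str, Quant]   # filled in before quantize() runs
    quant: Optional[Quant] = None   # must be assigned by quantize()

def quantize_passthrough(attrs, calib_attrs: CalibAttrs):
    # Reuse the input quantization unchanged as the output quantization;
    # nothing in attrs needs to change for a pure data-movement op.
    calib_attrs.quant = calib_attrs.input_quant["data"]
    return attrs

calib = CalibAttrs(input_quant={"data": Quant(scale=0.05, zero_point=0)})
quantize_passthrough(attrs=None, calib_attrs=calib)
print(calib.quant.scale)  # 0.05
```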

class afe.ir.operations.DepthToSpaceOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
depth_to_space_fn: Callable[[afe.ir.attributes.DepthToSpaceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.DepthToSpaceAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DepthToSpaceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
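For reference, a DCR-style depth-to-space over an NHWC tensor can be sketched in NumPy. This is one common convention, assumed here for illustration; the mode and layout actually encoded in `DepthToSpaceAttrs` may differ.

```python
import numpy as np

def depth_to_space_nhwc(x: np.ndarray, block: int) -> np.ndarray:
    # Rearrange channel blocks into spatial positions (DCR ordering).
    n, h, w, c = x.shape
    c_out = c // (block * block)
    x = x.reshape(n, h, w, block, block, c_out)
    x = x.transpose(0, 1, 3, 2, 4, 5)
    return x.reshape(n, h * block, w * block, c_out)

out = depth_to_space_nhwc(np.zeros((1, 4, 4, 8), dtype=np.float32), 2)
print(out.shape)  # (1, 8, 8, 2)
```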

classmethod quantize(attrs: afe.ir.attributes.DepthToSpaceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.DepthToSpaceAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.ReshapeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
reshape_fn: Callable[[afe.ir.attributes.ReshapeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.ReshapeAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReshapeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReshapeAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ReshapeAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp, and ReshapeOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so the observed distribution for those is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.ExpandDimsOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
expand_dims_fn: Callable[[afe.ir.attributes.ReshapeAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExpandDimsAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExpandDimsAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
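The expand-dims semantics map directly onto NumPy; the `axis` value below is a stand-in for the field carried by `ExpandDimsAttrs`, shown purely for illustration:

```python
import numpy as np

# Insert a new unit axis at the given position.
x = np.zeros((4, 5), dtype=np.float32)
out = np.expand_dims(x, axis=0)
print(out.shape)  # (1, 4, 5)
```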

class afe.ir.operations.SplitOp[source]

SplitOp takes in one tensor, returns a tuple

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
split_fn: Callable[[afe.ir.attributes.SplitAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SplitAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SplitAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.Tuple[afe.ir.attributes.np.ndarray, Ellipsis][source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
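The one-tensor-in, tuple-out shape of SplitOp can be sketched in NumPy; the split count and axis here are illustrative stand-ins for the fields of `SplitAttrs`:

```python
import numpy as np

# One input tensor in, a tuple of output tensors out.
x = np.arange(12, dtype=np.float32).reshape(1, 12)
parts = tuple(np.split(x, 3, axis=1))  # three equal slices along axis 1
print(len(parts), parts[0].shape)  # 3 (1, 4)
```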

class afe.ir.operations.TakeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
take_fn: Callable[[afe.ir.attributes.TakeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TakeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TakeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
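Consistent with `take_fn` accepting a data tensor and an indices tensor, the semantics match NumPy's gather; the axis below is an illustrative assumption, not necessarily the default in `TakeAttrs`:

```python
import numpy as np

# Gather elements of `data` along an axis using an integer index tensor.
data = np.array([[10., 20., 30.], [40., 50., 60.]])
indices = np.array([2, 0])
out = np.take(data, indices, axis=1)
print(out)  # [[30. 10.]
            #  [60. 40.]]
```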

class afe.ir.operations.StridedSliceOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
strided_slice_fn: Callable[[afe.ir.attributes.StridedSliceAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.StridedSliceAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.StridedSliceAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
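A strided slice reduces to per-axis begin/end/stride indexing; the values below are hard-coded stand-ins for the fields carried by `StridedSliceAttrs`:

```python
import numpy as np

# begin=1, end=9, stride=2 along the single axis.
x = np.arange(10, dtype=np.float32)
out = x[1:9:2]
print(out)  # [1. 3. 5. 7.]
```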

classmethod quantize(attrs: afe.ir.attributes.StridedSliceAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.StridedSliceAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.LayoutTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
layout_transform_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.LayoutTransformAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LayoutTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LayoutTransformAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.AwesomeQuantAttrBase[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp, and ReshapeOp don’t use an observed distribution, and those values won’t be passed to any other MLA node, so the observed distribution for those is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.

class afe.ir.operations.TessellationTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.TessellationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.DetessellationTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DetessellationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.PackTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]] = None[source]
classmethod get_type(attrs: afe.ir.attributes.PackTransformAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(_: afe.ir.attributes.PackTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.UnpackTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.UnpackTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.NormalizationTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.NormalizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.QuantizationTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QuantizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.DequantizationTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.DequantizationTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ResizeTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ResizeTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ChromaUpsampleTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ChromaUpsampleTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.YuvRgbConversionTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.YuvRgbConversionTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.BgrRgbConversionTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.BgrRgbConversionTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.SigmoidTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SigmoidTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.NmsMaxpoolTransformOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[AWESOME_ATTRS, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.NmsMaxpoolTransformAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.CastOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
cast_fn: Callable[[afe.ir.attributes.CastAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.CastAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.CastAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.AddActivationOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]

The AddActivationOp can only handle the following fused patterns:

  • Add + Relu

  • Add + Clip
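The Add + Relu case can be sketched in plain numpy as a standalone function (a simplified illustration, not the fused AFE kernel):

```python
import numpy as np

# Simplified sketch of the Add + Relu fusion: an elementwise add whose
# result is immediately clamped at zero.
def add_relu(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    return np.maximum(a + b, 0.0)
```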

add_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.Optional[int]], afe.ir.attributes.np.ndarray][source]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray][source]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.AddActivationAttrs, afe.ir.attributes.AddQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.AddActivationAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.AddActivationAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.AddActivationAttrs, afe.ir.attributes.AddQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes
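As a rough illustration of how output quantization can be computed from calibration, the sketch below derives a scale and zero point from observed min/max values. It assumes asymmetric signed int8 quantization; the function name and signature are hypothetical and not part of the AFE API.

```python
# Hypothetical derivation of quantization parameters from calibration
# statistics. Assumes a nonzero observed range and signed integers.
def quant_params_from_calibration(obs_min: float, obs_max: float,
                                  bits: int = 8) -> tuple:
    # Integer range of a signed <bits>-bit representation.
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    # Ensure zero is exactly representable, as quantization schemes
    # typically require.
    obs_min, obs_max = min(obs_min, 0.0), max(obs_max, 0.0)
    scale = (obs_max - obs_min) / (qmax - qmin)
    zero_point = int(round(qmin - obs_min / scale))
    return scale, zero_point
```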

classmethod run_quant(quant_attrs: afe.ir.attributes.AddQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
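The requantization step that quantized execution relies on can be approximated by a rounding right shift; the following is a simplified sketch assuming a power-of-two scale, not the actual AFE kernel behind requantize_fn.

```python
import numpy as np

# Scale an int32-style accumulator back to int8 with round-to-nearest:
# add half the divisor, arithmetic-shift right, then saturate.
# Assumes right_shift >= 1.
def requantize(acc: np.ndarray, right_shift: int, zero_point: int = 0) -> np.ndarray:
    rounded = (acc + (1 << (right_shift - 1))) >> right_shift
    return np.clip(rounded + zero_point, -128, 127).astype(np.int8)
```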

class afe.ir.operations.ConstantMultiplyAddOp[source]

An add operator fused with multiplication by a scalar constant. The operator performs the floating-point operation (a*c + b*d), where c and d are scalar constants. After quantization, it behaves like an add operator. The multiplication is incorporated into the add operator’s requantization.
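The floating-point behavior described above amounts to the following (an illustrative one-liner, not the AFE implementation):

```python
import numpy as np

# a*c + b*d with scalar constants c and d, as ConstantMultiplyAddOp
# computes in floating point before quantization folds c and d into
# the add operator's requantization.
def constant_multiply_add(a: np.ndarray, b: np.ndarray, c: float, d: float) -> np.ndarray:
    return a * c + b * d
```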

multiply_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ConstantMultiplyAddAttrs, afe.ir.attributes.AddQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConstantMultiplyAddAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConstantMultiplyAddAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.AddQuantAttrs, afe.ir.attributes.ConstantMultiplyAddAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

class afe.ir.operations.ConvAddActivationOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode, for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for the creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

add_fn: Callable[[afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.Optional[int]], afe.ir.attributes.np.ndarray][source]
requantize_fn: Callable[[afe.ir.attributes.np.ndarray, int, afe.ir.attributes.Union[int, afe.ir.attributes.np.ndarray], int, bool, str], afe.ir.attributes.np.ndarray][source]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray][source]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.ConvAddActivationAttrs | afe.ir.attributes.ConvQuantAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ConvAddActivationAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ConvAddActivationAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ConvAddActivationAttrs | afe.ir.attributes.ConvQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConvQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.ConvAddActivationAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

ConvAddActivation calibration method. Executes default calibration to get the results of the ConvAdd operation in floating point. Additionally, updates the intermediate observers that track mean values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.
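The intermediate observers mentioned above track running statistics; a minimal sketch of a mean-tracking observer (hypothetical, not the AFE observer class) might look like:

```python
# Minimal running-mean observer in the spirit of the intermediate
# observers updated during calibration; illustrative only.
class MeanObserver:
    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0

    def update(self, value: float) -> None:
        # Incremental (Welford-style) mean update, stable over many samples.
        self.count += 1
        self.mean += (value - self.mean) / self.count
```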

class afe.ir.operations.TupleConcatenateOp[source]

This composite node reuses the ConcatenateOp run, quantize, and run_quant methods.
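The composite behavior amounts to grouping the inputs with tuple_fn and delegating to concatenation; a plain numpy sketch (illustrative, not the AFE code path):

```python
import numpy as np

# Group a list of arrays into a tuple, then delegate to concatenation,
# mirroring how TupleConcatenateOp composes tuple_fn with ConcatenateOp.
def tuple_concatenate(arrays, axis: int = 0) -> np.ndarray:
    grouped = tuple(arrays)
    return np.concatenate(grouped, axis=axis)
```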

input_list = None[source]
tuple_fn: Callable[[afe.ir.attributes.List[afe.ir.attributes.np.ndarray]], tuple][source]
concatenate_op: AwesomeOperation[source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.TupleConcatenateAttrs, afe.ir.attributes.ConcatQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.Union[afe.ir.attributes.TupleConcatenateAttrs, afe.ir.attributes.ConcatenateAttrs], input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.TupleConcatenateAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ConcatQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (eg. ‘weights’ ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.ExternalOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list = None[source]
external_fn: Callable[[afe.ir.attributes.ExternalAttrs, afe.ir.attributes.Dict], afe.ir.attributes.Union[afe.ir.attributes.np.ndarray, tuple]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ExternalAttrs, afe.ir.attributes.AwesomeQuantAttrBase]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ExternalAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ExternalAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ExternalAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod get_observed_distribution(calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, inputs: afe.ir.attributes.Dict[afe.ir.defines.InputName, QuantizationTensorData]) afe.ir.attributes.Tuple[afe.ir.attributes.Optional[afe.ir.attributes.ObservedDistribution], afe.ir.attributes.Dict[str, afe.ir.attributes.ObservedDistribution]][source]

Get observed distribution and intermediate observed distributions. If a node doesn’t have an observer, values from the previous node are used. ExternalOp, TupleOp, TupleGetItemOp, LayoutTransformOp, and ReshapeOp don’t use observed distributions, and their values won’t be passed to any other MLA node, so their observed distribution is set to None.

Parameters:
  • calib_attrs – Calibration attributes.

  • inputs – Properties of the inputs. It has quantization scales of the input tensors and attributes of the nodes that calculate the inputs.

Returns:

Tuple of observed distribution and dictionary of intermediate observed distributions.
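The observed distributions referred to here are produced by observers that watch tensor values during calibration. As a rough illustration of the core idea (AFE's ObservedDistribution carries more information than a plain min/max; this sketch is only conceptual):

```python
import numpy as np

class MinMaxObserver:
    """Minimal sketch of a calibration observer that tracks the running
    min/max of every tensor it sees. Illustrative only; not AFE's
    ObservedDistribution class."""

    def __init__(self):
        self.min_val = np.inf
        self.max_val = -np.inf

    def update(self, tensor: np.ndarray) -> None:
        # Fold this batch's extrema into the running range.
        self.min_val = min(self.min_val, float(tensor.min()))
        self.max_val = max(self.max_val, float(tensor.max()))

obs = MinMaxObserver()
obs.update(np.array([0.5, -2.0, 1.0]))
obs.update(np.array([3.0]))
```

After the two updates the observer has accumulated the range [-2.0, 3.0], which a later quantization step could turn into a scale and zero point.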

class afe.ir.operations.QNNQuantizeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
quant_fn: Callable[[afe.ir.attributes.QNNQuantizeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.QNNQuantizeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QNNQuantizeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
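For reference, the standard QNN-style affine quantize step that an op like this models can be sketched in numpy (the rounding mode and saturation behavior here are assumptions, not AFE's exact kernel):

```python
import numpy as np

def qnn_quantize(x: np.ndarray, scale: float, zero_point: int,
                 dtype=np.int8) -> np.ndarray:
    """Affine quantization: q = clip(round(x / scale) + zero_point),
    saturated to the target integer type's range."""
    info = np.iinfo(dtype)
    q = np.round(x / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

q = qnn_quantize(np.array([0.0, 0.1, -0.1, 10.0]), scale=0.05, zero_point=0)
```

With scale 0.05 and zero point 0, the value 10.0 would map to 200 but saturates at the int8 maximum of 127.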

class afe.ir.operations.RequantizeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.RequantizeAttrs, afe.ir.attributes.RequantizeQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run_quant(quant_attrs: afe.ir.attributes.RequantizeQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
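Requantization maps integer values from one affine quantization (scale, zero point) to another. A floating-point sketch of the arithmetic (real kernels typically use fixed-point multipliers rather than float division; this helper is illustrative only):

```python
import numpy as np

def requantize(q_in: np.ndarray, in_scale: float, in_zp: int,
               out_scale: float, out_zp: int, dtype=np.int8) -> np.ndarray:
    """Map quantized values from (in_scale, in_zp) to (out_scale, out_zp)."""
    real = (q_in.astype(np.int32) - in_zp) * in_scale   # dequantize to real values
    q = np.round(real / out_scale) + out_zp             # quantize with the new params
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

out = requantize(np.array([10, -10], dtype=np.int8),
                 in_scale=0.1, in_zp=0, out_scale=0.2, out_zp=0)
```

Halving the precision (scale 0.1 to 0.2) halves the integer magnitudes while preserving the represented real values.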

class afe.ir.operations.QNNDequantizeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
dequant_fn: Callable[[afe.ir.attributes.QNNDequantizeAttrs, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.QNNDequantizeAttrs, QUANT_ATTRS]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.QNNDequantizeAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
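The dequantize step is the inverse affine map of quantization; a small numpy sketch (illustrative, not AFE's exact kernel):

```python
import numpy as np

def qnn_dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    """Affine dequantization: x = (q - zero_point) * scale."""
    return (q.astype(np.int32) - zero_point) * scale

x = qnn_dequantize(np.array([2, -2, 127], dtype=np.int8),
                   scale=0.05, zero_point=0)
```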

class afe.ir.operations.QNNMulOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
mul_fn: Callable[[afe.ir.attributes.AwesomeAttributes, afe.ir.attributes.np.ndarray, afe.ir.attributes.np.ndarray, float, int, float, int, float, int], afe.ir.attributes.np.ndarray][source]
classmethod run(attrs: afe.ir.attributes.AwesomeAttributes, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
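A QNN multiply combines the two input quantizations and the output quantization, as the scale/zero-point arguments in the mul_fn signature suggest. An illustrative numpy formulation of the usual QNN scheme (assumed here; not AFE's exact kernel):

```python
import numpy as np

def quant_multiply(qa, a_scale, a_zp, qb, b_scale, b_zp,
                   out_scale, out_zp, dtype=np.int8):
    """Elementwise multiply in the quantized domain: dequantize both
    inputs, multiply, then requantize to the output parameters."""
    real = ((qa.astype(np.int32) - a_zp) * a_scale) * \
           ((qb.astype(np.int32) - b_zp) * b_scale)
    q = np.round(real / out_scale) + out_zp
    info = np.iinfo(dtype)
    return np.clip(q, info.min, info.max).astype(dtype)

out = quant_multiply(np.array([10], dtype=np.int8), 0.1, 0,
                     np.array([20], dtype=np.int8), 0.1, 0,
                     out_scale=0.05, out_zp=0)
```

Here the real product 1.0 × 2.0 = 2.0 requantizes to 40 at an output scale of 0.05.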

class afe.ir.operations.CustomOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list = None[source]
custom_op_fn: Callable[[afe.ir.attributes.CustomOpAttrs, afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray]], afe.ir.attributes.np.ndarray][source]
quant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int, int], afe.ir.attributes.np.ndarray][source]
dequant_fn: Callable[[afe.ir.attributes.np.ndarray, float, int], afe.ir.attributes.np.ndarray][source]
classmethod run(attrs: afe.ir.attributes.CustomOpAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Union[afe.ir.attributes.np.ndarray, tuple][source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.CustomOpAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.CustomOpQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.CustomOpQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.LeakyReluCompositeOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.LeakyReluAttrs, afe.ir.attributes.LeakyReluCompositeQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LeakyReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LeakyReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.LeakyReluCompositeQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.LeakyReluCompositeQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
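For reference, the floating-point LeakyReLU that this composite op implements. The quantized "composite" form decomposes it into operations the backend supports (hence the separate LeakyReluCompositeQuantAttrs); this sketch shows only the underlying math, with alpha as an assumed attribute name:

```python
import numpy as np

def leaky_relu_float(x: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """LeakyReLU reference: x for x >= 0, alpha * x for x < 0."""
    return np.where(x >= 0, x, alpha * x)

y = leaky_relu_float(np.array([-2.0, 0.0, 3.0]))
```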

class afe.ir.operations.ReluOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
relu_fn: Callable[[afe.ir.attributes.np.ndarray, int], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ReluAttrs, afe.ir.attributes.ReluQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ReluAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ReluAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.ReluAttrs, afe.ir.attributes.ReluQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.ReluQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
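In an affine quantization scheme, real zero maps to the zero point, so ReLU on quantized data reduces to a maximum with the zero point (matching the int argument in the relu_fn signature above). A sketch, assuming the output keeps the input's scale and zero point:

```python
import numpy as np

def relu_quant(q: np.ndarray, zero_point: int, dtype=np.int8) -> np.ndarray:
    """ReLU in the quantized domain: clamp everything below the
    zero point, which represents real 0.0."""
    return np.maximum(q, np.asarray(zero_point, dtype=dtype))

out = relu_quant(np.array([-50, -10, 0, 40], dtype=np.int8), zero_point=-10)
```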

class afe.ir.operations.ClipOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
clip_fn: Callable[[afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.ClipAttrs, afe.ir.attributes.ClipQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.ClipAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.ClipAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(attrs: afe.ir.attributes.ClipAttrs | afe.ir.attributes.ClipQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
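Clip on quantized data can be performed by mapping the floating-point bounds into the quantized domain and clipping the integers directly. A sketch with assumed bound names (a_min, a_max) and rounding behavior:

```python
import numpy as np

def clip_quant(q: np.ndarray, a_min: float, a_max: float,
               scale: float, zero_point: int) -> np.ndarray:
    """Quantized clip: convert the real-valued bounds into integer
    bounds under the tensor's (scale, zero_point), then clip."""
    q_min = int(round(a_min / scale)) + zero_point
    q_max = int(round(a_max / scale)) + zero_point
    return np.clip(q, q_min, q_max)

out = clip_quant(np.array([-100, 0, 100], dtype=np.int8),
                 a_min=0.0, a_max=6.0, scale=0.1, zero_point=0)
```

With scale 0.1, the ReLU6-style bounds [0.0, 6.0] become integer bounds [0, 60].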

class afe.ir.operations.BatchMatmulOp[source]

Standard batch matmul operator, where the arguments to the batch matmul operation are the outputs of two different nodes.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
class afe.ir.operations.UnaryBatchMatmulOp[source]

Special case of the batch matmul operator, where both arguments to the batch matmul operation are the output of the same node.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
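The distinction between the two operators can be shown with plain numpy batched matmul: the standard case multiplies outputs of two different nodes, while the unary case feeds one node's output in as both arguments (for example x @ transpose(x), as in attention-style patterns):

```python
import numpy as np

rng_a = np.random.default_rng(0)
rng_b = np.random.default_rng(1)
a = rng_a.normal(size=(2, 3, 4))   # output of node A
b = rng_b.normal(size=(2, 4, 5))   # output of node B

# BatchMatmulOp case: two distinct producer nodes.
standard = np.matmul(a, b)                    # shape (2, 3, 5)

# UnaryBatchMatmulOp case: the same node's output on both sides.
unary = np.matmul(a, a.transpose(0, 2, 1))    # shape (2, 3, 3)
```

Because both operands of the unary form come from the same tensor, each batch of the result x @ xᵀ is symmetric.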
class afe.ir.operations.LayerNormOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict for intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
layer_norm_fn: Callable[[afe.ir.attributes.LayerNormAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['var'][source]
classmethod get_type(attrs: afe.ir.attributes.LayerNormAttrs | afe.ir.attributes.LayerNormQuantAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.LayerNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod quantize(attrs: afe.ir.attributes.LayerNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.LayerNormAttrs | afe.ir.attributes.LayerNormQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • calib_attrs – Calibration results.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.LayerNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: AWESOME_ATTRS, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

Layer Norm calibration method. Executes default calibration to get the results of the LayerNorm operation in floating point. Additionally, calculates intermediate results and updates the observers for intermediate values.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • calib_attrs – AwesomeCalibAttrs associated with operation’s node.

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Parameters controlling how to calibrate.

Returns:

Output tensor(s) whose type is dependent on the subclass.
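The intermediate-value observers mentioned above can be pictured as running min/max trackers that are updated once per calibration batch. A minimal sketch, assuming a simple min/max observer; AFE's actual observer classes may differ:

```python
import numpy as np


class MinMaxObserver:
    """Illustrative running min/max observer (hypothetical, not AFE's class)."""

    def __init__(self) -> None:
        self.min = np.inf
        self.max = -np.inf

    def update(self, x: np.ndarray) -> None:
        # Fold this batch's extrema into the running statistics.
        self.min = min(self.min, float(x.min()))
        self.max = max(self.max, float(x.max()))


# During calibration, each batch's intermediate result updates the observer.
obs = MinMaxObserver()
for batch in (np.array([-0.5, 0.2]), np.array([1.5, 0.0])):
    obs.update(batch)
```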

class afe.ir.operations.InstanceNormOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
instance_norm_fn: Callable[[afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
classmethod get_type(attrs: afe.ir.attributes.InstanceNormAttrs | afe.ir.attributes.InstanceNormQuantAttrs) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.InstanceNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
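For reference, instance normalization normalizes each (batch, channel) slice over its spatial dimensions. A minimal NumPy sketch of the math, assuming NHWC layout and no affine scale/shift; this is illustrative only, not AFE's implementation:

```python
import numpy as np


def instance_norm(x: np.ndarray, epsilon: float = 1e-5) -> np.ndarray:
    """Illustrative NHWC instance normalization (sketch, not AFE's code)."""
    # Reduce over the spatial axes H and W, independently per batch and channel.
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon)


rng = np.random.default_rng(0)
y = instance_norm(rng.random((1, 4, 4, 3)).astype(np.float32))
```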

classmethod quantize(attrs: afe.ir.attributes.InstanceNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.InstanceNormAttrs | afe.ir.attributes.InstanceNormQuantAttrs[source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • quantizer_interface – Interface to the quantization algorithm, providing calibration results and input/output quantization.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.InstanceNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

class afe.ir.operations.RMSNormOp[source]

An abstract class

Stores a list of input key names expected to be passed in by the AwesomeNode for developer reference.

input_list: ClassVar[Optional[List[InputName]]]. Used as a reference when getting inputs from a dictionary. If input_list is None, AFE will skip validating input_list at runtime.

intermediate_names: ClassVar[List[str]]. Used for creation of intermediate observers. If the list is empty, an empty dict of intermediate observers will be created.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
rms_norm_fn: Callable[[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.np.ndarray], afe.ir.attributes.np.ndarray][source]
intermediate_names: ClassVar[afe.ir.attributes.List[str]] = ['reduce_mean'][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.RMSNormQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.RMSNormAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
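For reference, RMS normalization divides each element by the root-mean-square over the last axis. A minimal NumPy sketch; the `reduce_mean` intermediate mirrors the intermediate_names entry above, but the function itself is illustrative only, not AFE's implementation:

```python
import numpy as np


def rms_norm(x: np.ndarray, epsilon: float = 1e-6) -> np.ndarray:
    """Illustrative RMS normalization over the last axis (sketch only)."""
    # Mean of squares over the last axis -- the 'reduce_mean' intermediate.
    reduce_mean = np.mean(np.square(x), axis=-1, keepdims=True)
    return x / np.sqrt(reduce_mean + epsilon)


y = rms_norm(np.array([[3.0, 4.0]]))
```

After normalization, each row has a root-mean-square of (approximately) 1.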

classmethod quantize(attrs: afe.ir.attributes.RMSNormAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.RMSNormAttrs, afe.ir.attributes.RMSNormQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • quantizer_interface – Interface to the quantization algorithm, providing calibration results and input/output quantization.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.RMSNormQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.

classmethod calibrate(attrs: afe.ir.attributes.RMSNormAttrs, calib_attrs: afe.ir.attributes.AwesomeCalibAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.Any], config: afe.core.configs.RunConfigs) afe.ir.attributes.Any[source]

RMS Norm calibration method. Executes default calibration to get the results of the RMSNorm operation in floating point. Additionally, calculates intermediate results and updates the observers for intermediate values.

class afe.ir.operations.SliceConcatOp[source]

This composite node reuses the run infrastructure of StridedSliceOp and ConcatenateOp.

input_list: ClassVar[afe.ir.attributes.List[afe.ir.defines.InputName]][source]
classmethod get_type(attrs: afe.ir.attributes.Union[afe.ir.attributes.SliceConcatAttrs, afe.ir.attributes.SliceConcatQuantAttrs]) afe.ir.tensor_type.NodeType[source]

Get the type of this node given its attributes. The parameter should be a QUANT_ATTRS if that data has been created, or an AWESOME_ATTRIBUTES otherwise.

Parameters:

attrs – Attributes associated with the operator. It is an AWESOME_ATTRIBUTES if quantization has not transformed the node, or a QUANT_ATTRS if it has.

Returns:

The node’s type.

classmethod run(attrs: afe.ir.attributes.SliceConcatAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Executes the operation in floating point.

Parameters:
  • attrs – AwesomeAttributes associated with this operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.
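The composite behavior can be pictured as taking several strided slices of the input and concatenating the pieces along one axis. A minimal NumPy sketch with a hypothetical `slice_concat` helper; AFE's actual attribute layout and slicing semantics may differ:

```python
import numpy as np


def slice_concat(x: np.ndarray, slices, axis: int) -> np.ndarray:
    """Illustrative composite: strided-slice each (begin, end, stride) region
    along `axis`, then concatenate the pieces. Sketch only, not AFE's code."""
    parts = [x[(slice(None),) * axis + (slice(b, e, s),)] for b, e, s in slices]
    return np.concatenate(parts, axis=axis)


# Two adjacent slices that together cover the axis reproduce the input.
x = np.arange(8).reshape(1, 8)
y = slice_concat(x, [(0, 4, 1), (4, 8, 1)], axis=1)
```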

classmethod quantize(attrs: afe.ir.attributes.SliceConcatAttrs, quantizer_interface: afe.ir.quantization_interface.OpQuantInterface, config: afe.core.configs.QuantizationConfigs, error_reporter: afe.ir.defines.NodeReporter) afe.ir.attributes.Union[afe.ir.attributes.SliceConcatAttrs, afe.ir.attributes.SliceConcatQuantAttrs][source]

Compute quantized operator attributes, input quantization, and output quantization from floating-point operator attributes and the result of calibration.

When this function is called, calib_attrs.input_quant has the types and quantization of the input values (after the inputs have been transformed by quantization), and calib_attrs.quant holds a type and quantization of the output, which this function may overwrite. The output quantization is computed based on calibration. The output type should not be used.

This function must assign to calib_attrs.quant the output type and quantization that this operator has after quantization. It may use the default quantization if appropriate.

This function may modify attrs. It should modify attrs if the same attribute class is used for both the floating-point and the quantized operator, which would mean that it’s designed to store any quantization information in attrs.

This function may modify calib_attrs.input_quant to direct quantization to supply different inputs to this operator. The quantization algorithm will insert quantize or dequantize nodes so that the inputs have the type and quantization that were assigned. An exception will be raised if the input can’t be provided by inserting a quantize or dequantize node or leaving the input unchanged.

The quantized operator attributes are returned.

Parameters:
  • attrs – Floating-point operator attributes.

  • quantizer_interface – Interface to the quantization algorithm, providing calibration results and input/output quantization.

  • config – Parameters controlling how to quantize.

  • error_reporter – Node reporter of the node to be quantized.

Returns:

Quantized operator attributes

classmethod run_quant(quant_attrs: afe.ir.attributes.SliceConcatQuantAttrs, input_dict: afe.ir.attributes.Dict[afe.ir.defines.InputName, afe.ir.attributes.np.ndarray], config: afe.core.configs.RunConfigs) afe.ir.attributes.np.ndarray[source]

Execute the operation using quantized arithmetic.

Parameters:
  • quant_attrs – Parameters that define the quantized operation

  • input_dict – Dictionary of names (e.g. ‘weights’, ‘data’) to numpy arrays

  • config – Configuration parameters for how to run the network

Returns:

Output tensor(s) whose type is dependent on the subclass.