afe.ir.defines

Attributes

Classes

Status for AwesomeNode.

An abstract value in a network. The type parameter represents the data type that stands in for a tensor value.

An abstract value associated with a tensor in a network.

An abstract value associated with a tuple in a network.

The position of an A within a DataValue[A]. This is an algebraic data type.

Identifies the single value in a TensorValue.

Identifies a position in a TupleValue.

A set of abstract values associated with a network node's inputs and outputs.

A way of doing quantized arithmetic. Different modes make different arithmetic simplifications embodying different speed-accuracy tradeoffs.

A quantization scale. It represents an encoding of real numbers r as integers q.

A requantization method as defined in ml_kernels. This enum is used to select which type of requantization to use when a network is quantized.

A quantization-related conversion on data. When the algorithm detects that a conversion needs to be inserted in a model graph, it's recorded using this class.

A conversion that does nothing. It represents the case where no conversion is needed.

A quantization cast. It represents a cast of a tensor having the given shape from float32 to int8 or int32.

A quantization cast. It represents a cast of a tensor having the given shape from an integer type to float32.

A quantization cast. It represents a cast of a tensor having the given shape from an int32 type to int16/int8.

A numeric conversion. It represents a conversion from one numeric type to the nearest approximation in another numeric type.

A tuple cast. It applies a cast to each element of the tuple.

A set of quantization casts to apply to a node's inputs. The dict has an entry for each input.

A set of quantization casts to apply to a model. The casts are collected during a traversal of the model, then applied after the traversal is finished.

Layer statistics. For each MLA node, quantization error is calculated and forwarded to the .sima.json file, where it can be viewed in Netron.

A node reporter to display information or warning messages about a node during transformations.

A node reporter to display information or warning messages about a node during transformations.

A bias correction method for convolution.

Functions

Apply a function to each tensor value in a DataValue.

Get all tensor values in a DataValue.

Get a value from a DataValue while expecting that it is a TensorValue.

Get a list of values from a DataValue while expecting that it is a non-nested TupleValue.

Combine all values in a DataValue using the given function.

Transform each tensor value in a DataValue according to the given function, and return the results as a new DataValue.

Apply f to each pair of tensor values at the same positions in x and y, returning the results as a new DataValue.

Convert a list to a DataValue, using heuristics to guess the data structure.

Get the value at the given index.

Module Contents
- class afe.ir.defines.Status[source]
Status for AwesomeNode.

  RELAY: Right after parsing from a TVM Relay IR module
  CALIBRATED: Calibrated
  SIMA_QUANTIZED: SiMa quantized
  BACKEND_IR_LOWERED: After lowering MLA subgraphs to SiMa BackendIR
  BACKEND_IR_COMPILED: After compilation using compile_awesomenet
- class afe.ir.defines.DataValue[source]
An abstract value in a network. The type parameter represents the data type that stands in for a tensor value.
- class afe.ir.defines.TensorValue[source]
An abstract value associated with a tensor in a network.
- class afe.ir.defines.TupleValue[source]
An abstract value associated with a tuple in a network. An abstract value is associated with each element of the tuple.
- afe.ir.defines.foreach_data_value(f: Callable[[_TENSOR], None], v: DataValue[_TENSOR]) → None [source]
Apply a function to each tensor value in a DataValue.
- Parameters:
f – Function to apply
v – DataValue to traverse
- afe.ir.defines.data_value_elements(v: DataValue[_TENSOR]) → List[_TENSOR] [source]
Get all tensor values in a DataValue.
Since the DataValue structure is ignored, this function is only suitable when it doesn't matter where the tensor values are located inside the DataValue.
- afe.ir.defines.get_expected_tensor_value(v: DataValue[_TENSOR]) → _TENSOR [source]
Get a value from a DataValue while expecting that it is a TensorValue.
- afe.ir.defines.get_expected_tuple_values(v: DataValue[_TENSOR]) → List[_TENSOR] [source]
Get a list of values from a DataValue while expecting that it is a non-nested TupleValue.
- afe.ir.defines.reduce_data_value(f: Callable[[_A, _TENSOR], _A], v: DataValue[_TENSOR], initial: _A) → _A [source]
Combine all values in a DataValue using the given function.
- Parameters:
f – Combining function
v – DataValue to traverse
initial – Initial value of result
- Returns:
Combined value
- afe.ir.defines.map_data_value(f: Callable[[_A], _B], v: DataValue[_A]) → DataValue[_B] [source]
Transform each tensor value in a DataValue according to the given function, and return the results as a new DataValue.
- Parameters:
f – Function to apply
v – DataValue to transform
- Returns:
DataValue with all tensor values transformed
- afe.ir.defines.zip_data_value(f: Callable[[_A, _B], _C], x: DataValue[_A], y: DataValue[_B]) → DataValue[_C] [source]
Apply f to each pair of tensor values at the same positions in x and y, which must have the same shape. Return the results as a new DataValue having the same shape as x and y.
- Parameters:
f – Function to apply
x – DataValue to transform
y – DataValue to transform
- Returns:
Transformed data
- afe.ir.defines.reconstruct_data_value(values: List[_TENSOR]) → DataValue[_TENSOR] [source]
Convert a list to a DataValue, using heuristics to guess the data structure. This function is provided for compatibility with existing code that does not keep track of the data structure.
If the list has one item, it's treated as representing a single tensor. If it has many items, it's treated as representing a tuple of tensors.
- Parameters:
values – Values to interpret as a DataValue
- class afe.ir.defines.DataIndex[source]
The position of an A within a DataValue[A]. This is an algebraic data type.
- afe.ir.defines.index_data_value(v: DataValue[_TENSOR], i: DataIndex) → _TENSOR [source]
Get the value at the given index.
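The DataValue/DataIndex machinery above can be sketched as a small algebraic data type. The names below mirror afe.ir.defines, but this standalone implementation (including the nested shape of TupleIndex) is an illustrative assumption, not the library's actual code.

```python
# Hypothetical sketch of the DataValue ADT and its index type.
from dataclasses import dataclass
from typing import Callable, Generic, List, TypeVar, Union

_A = TypeVar("_A")
_B = TypeVar("_B")

@dataclass
class TensorValue(Generic[_A]):
    value: _A  # the single value associated with a tensor

@dataclass
class TupleValue(Generic[_A]):
    elements: List["DataValue[_A]"]  # one abstract value per tuple element

DataValue = Union[TensorValue[_A], TupleValue[_A]]

def map_data_value(f: Callable[[_A], _B], v: "DataValue[_A]") -> "DataValue[_B]":
    # Apply f to every tensor value, preserving the tuple structure.
    if isinstance(v, TensorValue):
        return TensorValue(f(v.value))
    return TupleValue([map_data_value(f, e) for e in v.elements])

@dataclass
class TensorIndex:
    pass  # identifies the single value in a TensorValue

@dataclass
class TupleIndex:
    index: int          # position in a TupleValue
    rest: "DataIndex"   # index into the selected element (assumed nesting)

DataIndex = Union[TensorIndex, TupleIndex]

def index_data_value(v: "DataValue[_A]", i: "DataIndex") -> _A:
    # Follow the index down the structure to a tensor value.
    if isinstance(i, TensorIndex):
        assert isinstance(v, TensorValue)
        return v.value
    return index_data_value(v.elements[i.index], i.rest)

v = TupleValue([TensorValue(3), TensorValue(4)])
doubled = map_data_value(lambda x: 2 * x, v)
# doubled == TupleValue([TensorValue(6), TensorValue(8)])
picked = index_data_value(v, TupleIndex(1, TensorIndex()))
# picked == 4
```

The recursive structure is why foreach_data_value, reduce_data_value, and zip_data_value all traverse values the same way: each is a fold over the same two constructors.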
- class afe.ir.defines.NodeAssociatedValue[source]
A set of abstract values associated with a network node's inputs and outputs.
Input values are held in an ordered dictionary mapping strings to data values. Inputs can be examined positionally or by name.
The output value is a single data value.
- class afe.ir.defines.RequantizationMode[source]
A way of doing quantized arithmetic. Different modes make different arithmetic simplifications embodying different speed-accuracy tradeoffs. It is expected that TFLite-style quantization gives better accuracy while SiMa-style quantization runs faster. The requantization mode only applies to convolution operators.
- class afe.ir.defines.Quantization[source]
A quantization scale. It represents an encoding of real numbers r as integers q where:

  L = -2^(bits-1)                                (integer range lower bound)
  U = 2^(bits-1) - 1                             (integer range upper bound)
  q_unbounded = round((r * scale) + zero_point)  (linear mapping to representable range)
  q = max(L, min(U, q_unbounded))                (clip to range)

Fields min_val and max_val give the range of floating-point values that are represented, for instance the range that was selected by calibration. This range must be representable within the integer range, that is,

  L <= round((min_val * scale) + zero_point) <= round((max_val * scale) + zero_point) <= U

Often it spans the entire range from L to U. It may be smaller if the range was expanded due to constraints on the quantized representation, such as when using symmetric quantization for a numeric range that is not symmetric. If a larger numeric range was clipped when quantizing, min_val and max_val still describe the representable range, not the original range. When a tensor contains only zero, scale is set to 0.0 and min_val = max_val = 0.
The default values represent quantization of the floating-point range [-128, 127] using the integer range [-128, 127].
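As a worked example of the mapping above, the helper below evaluates q for a given r (the function name and parameter values are illustrative, not part of the afe API):

```python
def quantize(r: float, scale: float, zero_point: int, bits: int = 8) -> int:
    # Evaluate the Quantization formulas: linear map, then clip to [L, U].
    lo = -(2 ** (bits - 1))       # L
    hi = 2 ** (bits - 1) - 1      # U
    q_unbounded = round(r * scale + zero_point)
    return max(lo, min(hi, q_unbounded))

# Representing the float range [-1.0, 1.0] symmetrically in int8 with scale = 127:
print(quantize(0.5, 127.0, 0))    # 64   (round(63.5), round-half-to-even)
print(quantize(2.0, 127.0, 0))    # 127  (clipped to U)
print(quantize(-2.0, 127.0, 0))   # -128 (clipped to L)
```

Note that Python's round uses round-half-to-even, so 63.5 maps to 64 here; the rounding convention used by the actual kernels may differ.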
- static representable(scale: float, zero_point: int, bits: int) → Quantization [source]
Create a quantization scale that includes the entire representable integer range. See Quantization for documentation of the parameters. For zero tensors, scale is 0.0 and min_val = max_val = 0.
- Parameters:
scale – Quantization scale.
zero_point – Quantization zero point.
bits – Quantization bits.
- Returns:
Quantization scale constructed from the given parameters.
- class afe.ir.defines.RequantMethod[source]
A requantization method as defined in ml_kernels. This enum is used to select which type of requantization to use when a network is quantized.
- class afe.ir.defines.QuantizationCast[source]
A quantization-related conversion on data. When the algorithm detects that a conversion needs to be inserted in a model graph, it's recorded using this class.
This is an algebraic data type.
- class afe.ir.defines.IdentityCast[source]
A conversion that does nothing. It represents the case where no conversion is needed.
- class afe.ir.defines.QuantCast[source]
A quantization cast. It represents a cast of a tensor having the given shape from float32 to int8 or int32 by computing round(r * scale + zero_point).
- out_type: afe.ir.tensor_type.ScalarType[source]
- class afe.ir.defines.DequantCast[source]
A quantization cast. It represents a cast of a tensor having the given shape from an integer type to float32 by computing (q - zero_point) / scale.
- Parameters:
shape – Shape of tensor to dequantize
scale – Quantization scale
zero_point – Quantization zero point
input_dtype – Input data type. The valid NumPy data types are np.int8, np.int16, and np.int32.
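The QuantCast and DequantCast formulas are inverses of each other up to rounding error. A minimal sketch (the function names are mine; clipping to the integer range is omitted because the sample values already fit in int8):

```python
import numpy as np

def quant_cast(r: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    # QuantCast formula: round(r * scale + zero_point), here to int8.
    return np.round(r * scale + zero_point).astype(np.int8)

def dequant_cast(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    # DequantCast formula: (q - zero_point) / scale, back to float32.
    return (q.astype(np.float32) - zero_point) / scale

r = np.array([-0.5, 0.0, 0.25], dtype=np.float32)
q = quant_cast(r, scale=127.0, zero_point=0)       # [-64, 0, 32]
r2 = dequant_cast(q, scale=127.0, zero_point=0)    # close to r, within 1/scale
```

The round trip recovers r only up to half a quantization step (1 / (2 * scale)), which is the irreducible error these casts introduce.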
- class afe.ir.defines.RequantCast[source]
A quantization cast. It represents a cast of a tensor having the given shape from an int32 type to int16/int8.
- Parameters:
shape – Shape of a tensor
in_scale – Input quantization scale
in_zero_point – Input quantization zero point
out_scale – Output quantization scale
out_zero_point – Output quantization zero point
input_32_bit – If True, the input type is int32. If False, the input type is int16.
output_type – Output data type; can be int16 or int8
requantization_type – Type of requantization to use. If arith_folded is used, then the requantization will use only a shift; the scales and zero points must be related by a power-of-2 factor to minimize rounding error.
- requant_method: RequantMethod[source]
- get_input_quantization() → Quantization [source]
- get_output_quantization() → Quantization [source]
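When the scales are related by a power of two and the zero points are zero, shift-only requantization of the kind described for arith_folded reduces to a rounding right shift. The sketch below is an assumption about the general technique, not the ml_kernels implementation:

```python
def requant_shift(q32: int, shift: int) -> int:
    # Rounding right shift: add half the divisor before shifting, so the
    # result is round(q32 / 2**shift) rather than floor division.
    q = (q32 + (1 << (shift - 1))) >> shift
    # Clip to the int8 output range.
    return max(-128, min(127, q))

# Example: in_scale = 2**16 * out_scale, so requantization is a shift by 16.
print(requant_shift(1 << 22, 16))   # 64
print(requant_shift(1 << 30, 16))   # 127 (clipped)
```

This is why arith_folded requires the scales to be related by a power-of-2 factor: only then can the scale ratio be folded into a single shift without a multiplier.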
- class afe.ir.defines.ConvertCast[source]
A numeric conversion. It represents a conversion from one numeric type to the nearest approximation in another numeric type.
- Parameters:
shape – Shape of a tensor
in_type – Scalar type of input
out_type – Scalar type of output
- out_type: afe.ir.tensor_type.ScalarType[source]
- class afe.ir.defines.TupleCast[source]
A tuple cast. It applies a cast to each element of the tuple.
- elements: List[QuantizationCast][source]
- class afe.ir.defines.InputsQuantCast[source]
A set of quantization casts to apply to a node's inputs. The dict has an entry for each input.
- casts: Dict[InputName, QuantizationCast][source]
- class afe.ir.defines.QuantizationCasts[source]
A set of quantization casts to apply to a model. The casts are collected during a traversal of the model, then applied after the traversal is finished.
Field casts holds the casts to apply to node inputs. If a node does not need casts, it is omitted.
- casts: Dict[NodeName, InputsQuantCast][source]
- insert(node: NodeName, cast: InputsQuantCast)[source]
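The collect-then-apply pattern described for QuantizationCasts can be sketched as follows; the class shapes mirror InputsQuantCast and QuantizationCasts, but this standalone version (with strings standing in for cast objects) is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict

NodeName = str
InputName = str

@dataclass
class InputsQuantCast:
    # One entry per input of the node; strings stand in for cast objects.
    casts: Dict[InputName, str]

@dataclass
class QuantizationCasts:
    # Casts keyed by node name; nodes needing no casts are simply absent.
    casts: Dict[NodeName, InputsQuantCast] = field(default_factory=dict)

    def insert(self, node: NodeName, cast: InputsQuantCast) -> None:
        self.casts[node] = cast

# During traversal, record casts without mutating the graph...
collected = QuantizationCasts()
collected.insert("conv1", InputsQuantCast({"data": "float32 -> int8"}))
# ...then, after the traversal, iterate collected.casts and rewrite the model.
```

Deferring the rewrite keeps the traversal free of graph mutation, so the analysis sees a stable model.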
- class afe.ir.defines.LayerStats[source]
Layer statistics. For each MLA node, quantization error is calculated; that information is then forwarded to the .sima.json file, where it can be viewed in Netron.
- Parameters:
metric – Metric that is used for calculating the error value.
error_value – Error value.
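A per-layer quantization error of the kind LayerStats records compares a layer's float output against its dequantized quantized output. The metric choice below (mean absolute error) and the function name are assumptions for illustration, not necessarily the metric afe uses:

```python
import numpy as np

def layer_error(float_out: np.ndarray, dequant_out: np.ndarray) -> float:
    # Mean absolute error between the reference float output and the
    # dequantized output of the quantized layer (assumed metric).
    return float(np.mean(np.abs(float_out - dequant_out)))

err = layer_error(np.array([0.50, 0.25]), np.array([0.504, 0.252]))
# err is small when quantization preserves the layer's outputs
```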
- class afe.ir.defines.NodeReporter[source]
A node reporter to display information or warning messages about a node during transformations.
- class afe.ir.defines.LogNodeReporter(node_name: NodeName)[source]
A node reporter to display information or warning messages about a node during transformations.
- Parameters:
node_name – Name of the node
- class afe.ir.defines.BiasCorrectionType[source]
A bias correction method for convolution.

  REGULAR: Bias correction using the input mean estimated during calibration
  ITERATIVE: Bias correction using the input mean estimated by executing the quantized model with a set of calibration inputs
  NONE: No bias correction
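The idea behind mean-based bias correction can be sketched as follows. This is a common formulation of the technique (fold the mean output shift caused by weight quantization back into the bias) and is an assumption about what the REGULAR mode computes, not afe's actual code:

```python
import numpy as np

def corrected_bias(bias: np.ndarray, w_float: np.ndarray,
                   w_dequant: np.ndarray, input_mean: np.ndarray) -> np.ndarray:
    # Quantizing the weights shifts the layer's expected output by
    # (w_float - w_dequant) @ E[x]; adding that shift back into the bias
    # re-centers the quantized layer (hypothetical formulation).
    return bias + (w_float - w_dequant) @ input_mean

b = corrected_bias(np.array([0.5]),
                   np.array([[1.0, 2.0]]),    # float weights
                   np.array([[0.9, 2.1]]),    # dequantized quantized weights
                   np.array([2.0, 1.0]))      # input mean from calibration
# b is close to [0.6]
```

Under this reading, REGULAR takes E[x] from calibration statistics, while ITERATIVE re-estimates it by running the quantized model on calibration inputs.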