afe.ir.utils

Attributes

T

R

Functions

afe_warn(→ bool)

HEADER = ''

exclude_axes(→ List[int])

Return the list of axes not present in the given axis list.

get_transpose_indices_according_to_layout_strings(...)

Returns a list of transpose indices derived from layout strings, such that transposing with these indices converts the current layout to the desired layout.

transpose_tensor_according_to_layout_strings(...)

Returns a np.ndarray transposed in the same fashion that the layout moves from the current to the desired layout.

transpose_attr_according_to_layout_strings(→ List[Any])

Transpose and prune the data in the same fashion that the layout moves from the current to the desired layout

insert_according_to_layout_strings(→ Tuple[T])

Transpose and merge elements from source into target.

compare_outputs(→ None)

Compares the outputs of two tensors. Raises an error if the arrays are not element-wise equal within a tolerance.

reverse_transpose(→ Tuple[T, Ellipsis])

Reverse the transposition of a list of elements, given the applied transpose axes.

is_depthwise_conv(→ bool)

Return True if the parameters designate a depthwise convolution.

is_depthwise_conv_with_channel_mul(→ bool)

Return True if the parameters designate a depthwise convolution with channel multiplier.

is_group_conv(→ bool)

Return True if the parameters designate a grouped convolution that is not a depthwise convolution.

transpose_axis_to_the_last(→ numpy.ndarray)

Transpose axis in the data to the last dimension.

with_axis_last(→ numpy.ndarray)

Apply a function to a transposed view of an array and reverse the transposition on the function's result.

convert_transpose_conv2d_to_conv2d_paddings(...)

Convert a transpose conv2d padding to conv2d padding.

set_shape_batch_size(→ Tuple[int, Ellipsis])

Given the Tuple representing the shape of a tensor, return the same shape with a given batch size.

get_input_dict_batch_size(→ int)

Analyzes the input dict and returns the batch size. All inputs should have a matching batch size.

unbatch_input_dict(→ List[Dict[str, numpy.ndarray]])

Create a list of single-sample dictionaries from a dictionary of batched numpy array inputs.

transpose_conv_output_shape(→ Tuple[int])

Calculate the shape of the output tensor of a transposed convolution

create_and_verify_narrowing(...)

split_einsum_equation(→ Tuple[List[str], List[str]])

Separate the input and output parts of an einsum equation, returning lists of tensor spec strings.

mla_supports_einsum_equation_strings(→ bool)

Checks if the Einsum equation can be supported on MLA.

is_mla_supported_einsum_equation(→ bool)

Checks if the einsum equation is supported on MLA for the given data layout.

Module Contents

afe.ir.utils.T[source]
afe.ir.utils.R[source]
afe.ir.utils.afe_warn(msg: str) bool[source]

HEADER = '' OKBLUE = '' OKCYAN = '' OKGREEN = '' WARNING = '' FAIL = '' ENDC = '' BOLD = '' UNDERLINE = ''

afe.ir.utils.exclude_axes(shape_len: int, axis: List[int]) List[int][source]

Return the list of axes not present in the given axis list, for a tensor with shape_len dimensions.

afe.ir.utils.get_transpose_indices_according_to_layout_strings(current_layout: str, desired_layout: str) List[int][source]

Returns a list of transpose indices derived from the layout strings, such that transposing with these indices converts the current layout to the desired layout. E.g., NHWC -> NCHW gives [0, 3, 1, 2].
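
For illustration, a minimal sketch of this mapping (hypothetical helper; the actual implementation may differ):

    # Sketch: for each label in the desired layout, find its position in
    # the current layout. Transposing with these indices converts data
    # from the current layout to the desired layout.
    def transpose_indices(current: str, desired: str) -> list:
        return [current.index(label) for label in desired]

    assert transpose_indices("NHWC", "NCHW") == [0, 3, 1, 2]  # documented example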

afe.ir.utils.transpose_tensor_according_to_layout_strings(input_tensor: numpy.ndarray, current_layout: str, desired_layout: str) numpy.ndarray[source]

Returns a np.ndarray transposed in the same fashion that the layout moves from the current to the desired layout. Examples: HWIO -> OIHW, NHWC -> NCHW, NCDHW -> NDHWC.
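
The equivalent NumPy operation for NHWC -> NCHW (illustrative only):

    import numpy as np

    x = np.zeros((1, 224, 224, 3))        # NHWC
    y = np.transpose(x, (0, 3, 1, 2))     # indices for NHWC -> NCHW
    assert y.shape == (1, 3, 224, 224)    # NCHW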

afe.ir.utils.transpose_attr_according_to_layout_strings(attr: R, current_layout: str, desired_layout: str) List[Any][source]

Transpose and prune the data in the same fashion that the layout moves from the current to the desired layout. E.g., attr = [0, 1, 2, 3], current_layout = 'NHWC', desired_layout = 'WH' => output_list = [2, 1].
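
A sketch of the documented example (hypothetical helper; the real function may differ):

    # Select and reorder attribute entries by matching labels between the
    # two layout strings; labels absent from the desired layout are pruned.
    def transpose_attr(attr, current_layout: str, desired_layout: str):
        return [attr[current_layout.index(label)] for label in desired_layout]

    assert transpose_attr([0, 1, 2, 3], "NHWC", "WH") == [2, 1]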

afe.ir.utils.insert_according_to_layout_strings(target: Tuple[T], source: Sequence[T], target_layout: str, source_layout: str) Tuple[T][source]

Transpose and merge elements from source into target. Wherever a label appears in source_layout and target_layout, the element from source at that index in source_layout is inserted into target at that index in target_layout, overwriting what was there.

For example, insert_according_to_layout_strings((0.5, 0.7, 0.9), (-2.5, -4.5), "ABC", "BD") returns (0.5, -2.5, 0.9). The label 'B' is present in both layouts, and the value at that label's position is copied from source to target.

Parameters:
  • target – Values to combine. A copy of this tuple is created, and some tuple fields are replaced by data from source.

  • source – Values to combine. Items from this tuple are inserted into a copy of target.

  • target_layout – A string with one character for each tuple element in target. The character is a label associated with the tuple element.

  • source_layout – A string with one character for each tuple element in source. The character is a label associated with the tuple element.

Returns:

A new tuple holding the merged data.
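
A sketch reproducing the documented example (hypothetical helper; the real function may differ):

    # Copy each source element to the position of its label in the target
    # layout, overwriting what was there; labels absent from the target
    # layout are ignored.
    def insert_by_layout(target, source, target_layout, source_layout):
        result = list(target)
        for i, label in enumerate(source_layout):
            if label in target_layout:
                result[target_layout.index(label)] = source[i]
        return tuple(result)

    assert insert_by_layout((0.5, 0.7, 0.9), (-2.5, -4.5), "ABC", "BD") == (0.5, -2.5, 0.9)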

afe.ir.utils.compare_outputs(out1: numpy.ndarray, out2: numpy.ndarray, tolerance: float = 0.001) None[source]

Compares the outputs of two tensors. Raises an error if the arrays are not element-wise equal within a tolerance.
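
Hypothetical usage, assuming afe.ir.utils is importable; the tolerance semantics are those documented above:

    import numpy as np
    from afe.ir.utils import compare_outputs

    a = np.array([1.0, 2.0])
    compare_outputs(a, a + 1e-6, tolerance=0.001)  # passes: within tolerance
    # compare_outputs(a, a + 1.0) would raise an error.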

afe.ir.utils.reverse_transpose(original: List[T] | Tuple[T, Ellipsis], transpose_axes: Tuple[int, Ellipsis]) Tuple[T, Ellipsis][source]

Reverse the transposition of a list of elements, given the applied transpose axes.
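
A sketch of the presumed behavior, inverting the permutation with an argsort (the actual implementation may differ):

    import numpy as np

    def reverse_transpose(original, transpose_axes):
        inverse = np.argsort(transpose_axes)       # inverse permutation
        return tuple(original[i] for i in inverse)

    # A shape transposed NHWC -> NCHW with axes (0, 3, 1, 2) is restored:
    assert reverse_transpose((1, 3, 224, 224), (0, 3, 1, 2)) == (1, 224, 224, 3)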

afe.ir.utils.is_depthwise_conv(in_channels: int, out_channels: int, groups: int) bool[source]

Return True if the parameters designate a depthwise convolution. Depthwise convolution is a special case of grouped convolution where the number of groups is equal to the number of input channels. When there is a single input channel, it is ambiguous, and we treat it as a regular convolution.

Parameters:
  • in_channels – Number of input channels.

  • out_channels – Number of output channels.

  • groups – Number of convolution groups.

Returns:

True if the convolution is a depthwise convolution.

afe.ir.utils.is_depthwise_conv_with_channel_mul(in_channels: int, out_channels: int, groups: int) bool[source]

Return True if the parameters designate a depthwise convolution with channel multiplier. Depthwise convolution is a special case of grouped convolution where the number of groups is equal to the number of input channels and the number of output channels is equal to input_channels * channel_multiplier.

Parameters:
  • in_channels – Number of input channels.

  • out_channels – Number of output channels.

  • groups – Number of convolution groups.

Returns:

True if the convolution is a depthwise convolution with channel multiplier.

afe.ir.utils.is_group_conv(in_channels: int, out_channels: int, groups: int) bool[source]

Return True if the parameters designate a grouped convolution that is not a depthwise convolution.

Parameters:
  • in_channels – Number of input channels.

  • out_channels – Number of output channels.

  • groups – Number of convolution groups.

Returns:

True if the convolution is a grouped convolution that is not a depthwise convolution.
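
Taken together, the three predicates classify convolution configurations. A minimal sketch of the documented conditions (the exact checks in afe.ir.utils may differ):

    def is_depthwise_conv(in_channels: int, out_channels: int, groups: int) -> bool:
        # One group per input channel; a single input channel is ambiguous
        # and treated as a regular convolution.
        return groups == in_channels and in_channels > 1

    def is_depthwise_conv_with_channel_mul(in_channels: int, out_channels: int,
                                           groups: int) -> bool:
        # Depthwise with out_channels == in_channels * channel_multiplier
        # (a multiplier greater than one is assumed here).
        return (is_depthwise_conv(in_channels, out_channels, groups)
                and out_channels > in_channels
                and out_channels % in_channels == 0)

    def is_group_conv(in_channels: int, out_channels: int, groups: int) -> bool:
        # Grouped, but not depthwise.
        return groups > 1 and not is_depthwise_conv(in_channels, out_channels, groups)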

afe.ir.utils.transpose_axis_to_the_last(data: numpy.ndarray, axis: int) numpy.ndarray[source]

Transpose axis in the data to the last dimension.

Parameters:
  • data – The np.ndarray to transpose.

  • axis – The axis to move to the last position.

Returns:

The transposed np.ndarray.
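
The equivalent NumPy operation (illustrative only):

    import numpy as np

    x = np.zeros((2, 3, 4))
    y = np.moveaxis(x, 1, -1)     # axis 1 becomes the last axis
    assert y.shape == (2, 4, 3)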

afe.ir.utils.with_axis_last(data: numpy.ndarray, axis: int, f: Callable[[numpy.ndarray], numpy.ndarray]) numpy.ndarray[source]

Apply a function to a transposed view of an array and reverse the transposition on the function’s result. The function must return an array of the same shape as its input.

Parameters:
  • data – Array to transform

  • axis – Index of axis that will be transposed to the last axis

  • f – Function to apply on the transposed array

Returns:

Result of f with the axis transposed back to its original position
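
Hypothetical usage, assuming afe.ir.utils is importable: apply a shape-preserving function along axis 1 by temporarily making it the last axis:

    import numpy as np
    from afe.ir.utils import with_axis_last

    def normalize(a: np.ndarray) -> np.ndarray:
        # Shape-preserving: scale each vector along the last axis.
        return a / a.sum(axis=-1, keepdims=True)

    x = np.random.rand(2, 3, 4)
    y = with_axis_last(x, 1, normalize)
    assert y.shape == x.shape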

afe.ir.utils.convert_transpose_conv2d_to_conv2d_paddings(weight_shape: Tuple[int, int, int, int], weight_layout: str, data_layout: str, padding: afe.ir.defines.AwesomePad, output_padding: afe.ir.defines.AwesomePad | None = None) afe.ir.defines.AwesomePad[source]

Convert a transpose conv2d padding to conv2d padding.

Parameters:
  • weight_shape – Shape of the 4-D weight.

  • weight_layout – Weight layout.

  • data_layout – Data layout.

  • padding – Padding of the transpose conv2d.

  • output_padding – Output padding of the transpose conv2d.

Returns:

The transformed padding for a regular conv2d.

afe.ir.utils.set_shape_batch_size(shape: Tuple[int, Ellipsis], batch_size: int) Tuple[int, Ellipsis][source]

Given the Tuple representing the shape of a tensor, return the Tuple corresponding to the same tensor shape with a given batch size.

Warning: this is a hack until we have a general solution for all dimensions.
  • 1D – Override of the length; not reversible.

  • 2D – Override of the first dimension; not reversible if it is not meant for batch.

  • 4D – Assume batch is the first dimension.

  • Others – No change.

Parameters:
  • shape – Tuple representing the shape of a tensor.

  • batch_size – Integer value representing the batch size value.

Returns:

Tuple corresponding to the input shape with the given batch size.
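
Hypothetical usage, assuming afe.ir.utils is importable; the expected results follow the dimension rules listed above:

    from afe.ir.utils import set_shape_batch_size

    assert set_shape_batch_size((8, 224, 224, 3), 1) == (1, 224, 224, 3)  # 4D: batch first
    assert set_shape_batch_size((10, 20, 30), 1) == (10, 20, 30)          # others: no change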

afe.ir.utils.get_input_dict_batch_size(input_dict: Dict[str, numpy.ndarray]) int[source]

Analyzes the input dict and returns the batch size. All inputs should have a matching batch size.

Parameters:

input_dict – Dictionary of input values whose batch size should be returned.

Returns:

The batch size.

afe.ir.utils.unbatch_input_dict(input_dict: Dict[str, numpy.ndarray], batch_size: int) List[Dict[str, numpy.ndarray]][source]

Create a list of dictionaries from a dictionary of inputs containing numpy arrays whose first dimension is the batch size. The length of the returned list is equal to the batch size, and the size of the first dimension of every array in the returned list is 1.

Parameters:
  • input_dict – Dictionary of input numpy arrays. Each array needs to be in shape (batch_size, H, W, C) or (batch_size, C, H, W) depending on the layout.

  • batch_size – The batch size of the input dictionary. The user must ensure that it corresponds to the size of the first dimension of every array in input_dict.

Returns:

The list of dictionaries of input arrays. The length of the list is batch_size, and the array shapes are (1, H, W, C) or (1, C, H, W) depending on the layout.
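
A sketch of the documented behavior (hypothetical helper; the real function may differ):

    import numpy as np

    # Split each array along the batch dimension into batch_size
    # single-sample dictionaries, keeping a leading dimension of 1.
    def unbatch(input_dict, batch_size):
        return [{name: arr[i:i + 1] for name, arr in input_dict.items()}
                for i in range(batch_size)]

    batch = {"x": np.zeros((4, 224, 224, 3))}
    samples = unbatch(batch, 4)
    assert len(samples) == 4 and samples[0]["x"].shape == (1, 224, 224, 3)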

afe.ir.utils.transpose_conv_output_shape(input_shape: Sequence[int], kernel_shape: Sequence[int], padding: Sequence[Tuple[int, int]], output_padding: Sequence[Tuple[int, int]], stride: Sequence[int], dilation: Sequence[int]) Tuple[int][source]

Calculate the shape of the output tensor of a transposed convolution in the spatial dimensions. All parameter sequences must have the same length as the number of spatial dimensions: two for 2D convolution, three for 3D convolution.

Parameters:
  • input_shape – Shape of the input feature map

  • kernel_shape – Shape of the convolution kernel

  • padding – Padding applied to the input

  • output_padding – Padding applied to the output. Only the second component, that is the padding at the end, is used.

  • stride – Stride of the convolution

  • dilation – Dilation of the convolution

Returns:

Shape of the output feature map in the spatial dimensions
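
Per spatial dimension this presumably follows the standard transposed-convolution size formula; a hedged sketch:

    # Standard transposed-convolution output size for one spatial dimension;
    # per the docstring, only the trailing output padding is used.
    def transpose_conv_dim(in_size, kernel, pad, out_pad, stride, dilation):
        return ((in_size - 1) * stride - (pad[0] + pad[1])
                + dilation * (kernel - 1) + 1 + out_pad[1])

    # E.g. 2x upsampling: stride 2, kernel 4, padding (1, 1):
    assert transpose_conv_dim(16, 4, (1, 1), (0, 0), 2, 1) == 32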

afe.ir.utils.create_and_verify_narrowing(shift: int | numpy.ndarray, round_type: ml_kernels.math_helpers.RoundType, out_dtype: type) ml_kernels.requantization.Narrowing[source]
afe.ir.utils.split_einsum_equation(equation: str) Tuple[List[str], List[str]][source]

Separate the input and output parts of the einsum equation, returning lists of strings representing the spec of each tensor in the equation. An empty list is returned if there is no output spec in the equation. Space characters are removed if present in the equation string.

Parameters:

equation – Einsum equation string, e.g. "nchw,nqhc->nqhw".

Returns:

Tuple containing lists of input and output tensor spec strings.
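
A sketch of the documented parsing (hypothetical helper; the real implementation may differ):

    def split_equation(equation):
        equation = equation.replace(" ", "")        # strip whitespace
        lhs, arrow, rhs = equation.partition("->")
        inputs = lhs.split(",")
        outputs = rhs.split(",") if arrow else []   # no output spec -> []
        return inputs, outputs

    assert split_equation("nchw,nqhc->nqhw") == (["nchw", "nqhc"], ["nqhw"])
    assert split_equation("ij,jk") == (["ij", "jk"], [])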

afe.ir.utils.mla_supports_einsum_equation_strings(input_strings: List[str], output_strings: List[str]) bool[source]

Checks if the Einsum equation can be supported on MLA. For an Einsum equation to be supported on MLA, the number of input tensor specs must be two, the number of output tensor specs must be one, and every spec must contain exactly four letters.

This check is used when deciding whether the Einsum operator's layout will be changed during the ConvertLayout pass. The decision cannot be based on the contents of the equation here, since the MLAChecker (which runs in the later GraphPartitioning pass) could otherwise use the wrong layout when making its partitioning decision for the Einsum operator.

Parameters:
  • input_strings – The list of spec strings defining the input tensors.

  • output_strings – The list of spec strings defining the output tensors.

Returns:

True if the input and output strings meet the requirements to be supported on MLA, otherwise False.

afe.ir.utils.is_mla_supported_einsum_equation(equation: str, data_layout: str) bool[source]

Checks if the einsum equation is supported on MLA for the given data layout.

Supported equations:
  • nchw,nchq->nqhw and nchw,nqhc->nqhw for NCHW data layout

  • nhwc,nhqc->nhwq and nhwc,nhcq->nhwq for NHWC data layout

Note that the naming of the axes and the presence of whitespace are irrelevant to the decision.
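
Hypothetical usage, assuming afe.ir.utils is importable; the equations are the supported forms listed above:

    from afe.ir.utils import is_mla_supported_einsum_equation

    assert is_mla_supported_einsum_equation("nchw,nchq->nqhw", "NCHW")
    assert is_mla_supported_einsum_equation("nhwc,nhqc->nhwq", "NHWC")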