afe.ir.utils
============

.. py:module:: afe.ir.utils


Attributes
----------

.. autoapisummary::

   afe.ir.utils.T
   afe.ir.utils.R


Functions
---------

.. autoapisummary::

   afe.ir.utils.afe_warn
   afe.ir.utils.exclude_axes
   afe.ir.utils.get_transpose_indices_according_to_layout_strings
   afe.ir.utils.transpose_tensor_according_to_layout_strings
   afe.ir.utils.transpose_attr_according_to_layout_strings
   afe.ir.utils.insert_according_to_layout_strings
   afe.ir.utils.compare_outputs
   afe.ir.utils.reverse_transpose
   afe.ir.utils.is_depthwise_conv
   afe.ir.utils.is_depthwise_conv_with_channel_mul
   afe.ir.utils.is_group_conv
   afe.ir.utils.transpose_axis_to_the_last
   afe.ir.utils.with_axis_last
   afe.ir.utils.convert_transpose_conv2d_to_conv2d_paddings
   afe.ir.utils.set_shape_batch_size
   afe.ir.utils.get_input_dict_batch_size
   afe.ir.utils.unbatch_input_dict
   afe.ir.utils.transpose_conv_output_shape
   afe.ir.utils.create_and_verify_narrowing
   afe.ir.utils.split_einsum_equation
   afe.ir.utils.mla_supports_einsum_equation_strings
   afe.ir.utils.is_mla_supported_einsum_equation


Module Contents
---------------

.. py:data:: T

.. py:data:: R

.. py:function:: afe_warn(msg: str) -> bool

   ::

      HEADER = ''
      OKBLUE = ''
      OKCYAN = ''
      OKGREEN = ''
      WARNING = ''
      FAIL = ''
      ENDC = ''
      BOLD = ''
      UNDERLINE = ''

.. py:function:: exclude_axes(shape_len: int, axis: List[int]) -> List[int]

   Return the list of axes of a shape_len-dimensional tensor that are not in the
   given axis list.

.. py:function:: get_transpose_indices_according_to_layout_strings(current_layout: str, desired_layout: str) -> List[int]

   Return a list of indices derived from the layout strings, such that a transpose
   with this list of indices converts a tensor from the current layout to the
   desired layout. E.g., NHWC -> NCHW gives [0, 3, 1, 2].

.. py:function:: transpose_tensor_according_to_layout_strings(input_tensor: numpy.ndarray, current_layout: str, desired_layout: str) -> numpy.ndarray

   Return an np.ndarray transposed in the same fashion that the layout moves from
   the current to the desired layout.

   Examples: HWIO -> OIHW, NHWC -> NCHW, NCDHW -> NDHWC

.. py:function:: transpose_attr_according_to_layout_strings(attr: R, current_layout: str, desired_layout: str) -> List[Any]

   Transpose and prune the data in the same fashion that the layout moves from the
   current to the desired layout. E.g., attr = [0, 1, 2, 3], current_layout = 'NHWC',
   desired_layout = 'WH' => output_list = [2, 1].

.. py:function:: insert_according_to_layout_strings(target: Tuple[T], source: Sequence[T], target_layout: str, source_layout: str) -> Tuple[T]

   Transpose and merge elements from source into target.

   Wherever a label appears in source_layout and target_layout, the element from
   source at that index in source_layout is inserted into target at that index in
   target_layout, overwriting what was there.

   For example, `insert_according_to_layout_strings((0.5, 0.7, 0.9), (-2.5, -4.5), "ABC", "BD")`
   returns (0.5, -2.5, 0.9). The label 'B' is present in both layouts, and the
   value at that label's position is copied from source to target.

   :param target: Values to combine. A copy of this tuple is created, and some
       tuple fields are replaced by data from source.
   :param source: Values to combine. Items from this tuple are inserted into a
       copy of target.
   :param target_layout: A string with one character for each tuple element in
       target. The character is a label associated with the tuple element.
   :param source_layout: A string with one character for each tuple element in
       source. The character is a label associated with the tuple element.
   :return: A new tuple holding the merged data.
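The layout-string helpers above all reduce to the same index computation: for
each label in the desired layout, find its position in the current layout. A
minimal sketch under that reading of the docstrings (the helper name
``layout_transpose_indices`` is illustrative, not this module's API):

.. code-block:: python

   import numpy as np
   from typing import List

   def layout_transpose_indices(current_layout: str, desired_layout: str) -> List[int]:
       # Position of each desired-layout label within the current layout.
       return [current_layout.index(label) for label in desired_layout]

   # NHWC -> NCHW yields [0, 3, 1, 2], matching the example above.
   assert layout_transpose_indices("NHWC", "NCHW") == [0, 3, 1, 2]

   # np.transpose with these indices moves a tensor between the layouts.
   x = np.zeros((1, 8, 8, 3))                                 # NHWC
   y = np.transpose(x, layout_transpose_indices("NHWC", "NCHW"))
   assert y.shape == (1, 3, 8, 8)                             # NCHW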
.. py:function:: compare_outputs(out1: numpy.ndarray, out2: numpy.ndarray, tolerance: float = 0.001) -> None

   Compare two output tensors. Raises an error if the arrays are not element-wise
   equal within a tolerance.

.. py:function:: reverse_transpose(original: Union[List[T], Tuple[T, Ellipsis]], transpose_axes: Tuple[int, Ellipsis]) -> Tuple[T, Ellipsis]

   Reverse the transposition of a list of elements, given the transpose axes that
   were applied.

.. py:function:: is_depthwise_conv(in_channels: int, out_channels: int, groups: int) -> bool

   Return True if the parameters designate a depthwise convolution.

   Depthwise convolution is a special case of grouped convolution where the number
   of groups is equal to the number of input channels. When there is a single
   input channel, it is ambiguous, and we treat it as a regular convolution.

   Parameters
   ----------
   :param in_channels: int. Number of input channels.
   :param out_channels: Number of output channels.
   :param groups: int. Number of convolution groups.

   Return
   ------
   :return: bool. True if the convolution is a depthwise convolution.

.. py:function:: is_depthwise_conv_with_channel_mul(in_channels: int, out_channels: int, groups: int) -> bool

   Return True if the parameters designate a depthwise convolution with a channel
   multiplier.

   Depthwise convolution is a special case of grouped convolution where the number
   of groups is equal to the number of input channels and the number of output
   channels is equal to input_channels * channel_multiplier.

   Parameters
   ----------
   :param in_channels: int. Number of input channels.
   :param out_channels: Number of output channels.
   :param groups: int. Number of convolution groups.

   Return
   ------
   :return: bool. True if the convolution is a depthwise convolution with a
       channel multiplier.

.. py:function:: is_group_conv(in_channels: int, out_channels: int, groups: int) -> bool

   Return True if the parameters designate a grouped convolution that is not a
   depthwise convolution.

   Parameters
   ----------
   :param in_channels: Number of input channels.
   :param out_channels: Number of output channels.
   :param groups: int. Number of convolution groups.

   Return
   ------
   :return: bool. True if the convolution is a group convolution.

.. py:function:: transpose_axis_to_the_last(data: numpy.ndarray, axis: int) -> numpy.ndarray

   Transpose the given axis of the data to the last dimension.

   Parameters
   ----------
   :param data: np.ndarray
   :param axis: int

   Return
   ------
   :return: np.ndarray. Transposed data.

.. py:function:: with_axis_last(data: numpy.ndarray, axis: int, f: Callable[[numpy.ndarray], numpy.ndarray]) -> numpy.ndarray

   Apply a function to a transposed view of an array and reverse the transposition
   on the function's result. The function must return an array of the same shape
   as its input.

   :param data: Array to transform
   :param axis: Index of the axis that will be transposed to the last axis
   :param f: Function to apply on the transposed array
   :return: Result of f with the axis transposed back to its original position

.. py:function:: convert_transpose_conv2d_to_conv2d_paddings(weight_shape: Tuple[int, int, int, int], weight_layout: str, data_layout: str, padding: afe.ir.defines.AwesomePad, output_padding: Optional[afe.ir.defines.AwesomePad] = None) -> afe.ir.defines.AwesomePad

   Convert transpose conv2d padding to conv2d padding.

   Parameters
   ----------
   :param weight_shape: Tuple[int, int, int, int]. Shape of the 4-D weight
   :param weight_layout: str. Weight layout
   :param data_layout: str. Data layout
   :param padding: AwesomePad. Padding of the transpose conv2d
   :param output_padding: AwesomePad. Output padding of the transpose conv2d

   Return
   ------
   :return: AwesomePad. A transformed padding for regular conv2d
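The ``with_axis_last`` helper above describes a common wrap/apply/unwrap pattern.
A minimal sketch of that pattern using ``np.moveaxis`` (the names
``with_axis_last_sketch`` and ``softmax_last`` are illustrative, not this
module's implementation):

.. code-block:: python

   import numpy as np
   from typing import Callable

   def with_axis_last_sketch(data: np.ndarray, axis: int,
                             f: Callable[[np.ndarray], np.ndarray]) -> np.ndarray:
       # Move the chosen axis to the end, apply f, then move it back.
       # f must preserve the shape of its input.
       return np.moveaxis(f(np.moveaxis(data, axis, -1)), -1, axis)

   def softmax_last(a: np.ndarray) -> np.ndarray:
       # Numerically stable softmax over the last axis.
       e = np.exp(a - a.max(axis=-1, keepdims=True))
       return e / e.sum(axis=-1, keepdims=True)

   # Softmax over the channel axis of an NCHW tensor.
   x = np.random.rand(1, 3, 4, 4)
   y = with_axis_last_sketch(x, 1, softmax_last)
   assert y.shape == x.shape
   assert np.allclose(y.sum(axis=1), 1.0)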
.. py:function:: set_shape_batch_size(shape: Tuple[int, Ellipsis], batch_size: int) -> Tuple[int, Ellipsis]

   Given the Tuple representing the shape of a tensor, return the Tuple
   corresponding to the same tensor shape with a given batch size.

   Warning - this is a hack until we have a general solution for all dimensions:

   - 1D: Overriding the length is not reversible.
   - 2D: Overriding the first dimension is not reversible, if it is not meant for batch.
   - 4D: Assume batch is the first dimension.
   - Others: No change.

   :param shape: Tuple representing the shape of a tensor.
   :param batch_size: Integer value representing the batch size value.
   :return: Tuple corresponding to the shape of a tensor with a given batch size
       and input shape.

.. py:function:: get_input_dict_batch_size(input_dict: Dict[str, numpy.ndarray]) -> int

   Analyze the input dict and return its batch size. All inputs should have a
   matching batch size.

   :param input_dict: Dictionary of input values whose batch size should be returned.
   :return: The value of the batch size.

.. py:function:: unbatch_input_dict(input_dict: Dict[str, numpy.ndarray], batch_size: int) -> List[Dict[str, numpy.ndarray]]

   Create a list of dictionaries from a dictionary of inputs containing numpy
   arrays whose first dimension is the batch size. The length of the returned
   list is equal to the batch size, and the size of the first dimension of all
   the arrays in the returned list is equal to 1.

   :param input_dict: Dictionary of input numpy arrays. Each array needs to be in
       shape (batch_size, H, W, C) or (batch_size, C, H, W) depending on the layout.
   :param batch_size: The batch size of the input dictionary. The user needs to
       make sure that it corresponds to the size of the first dimension of all
       arrays in the input_dict.
   :return: The list of dictionaries of input arrays. The length of the list is
       batch_size and the array shapes are (1, H, W, C) or (1, C, H, W) depending
       on the layout.

.. py:function:: transpose_conv_output_shape(input_shape: Sequence[int], kernel_shape: Sequence[int], padding: Sequence[Tuple[int, int]], output_padding: Sequence[Tuple[int, int]], stride: Sequence[int], dilation: Sequence[int]) -> Tuple[int]

   Calculate the shape of the output tensor of a transposed convolution in the
   spatial dimensions.

   All parameter sequences must have the same length as the number of spatial
   dimensions: two for 2D convolution, three for 3D convolution.

   :param input_shape: Shape of the input feature map
   :param kernel_shape: Shape of the convolution kernel
   :param padding: Padding applied to the input
   :param output_padding: Padding applied to the output. Only the second
       component, that is the padding at the end, is used.
   :param stride: Stride of the convolution
   :param dilation: Dilation of the convolution
   :return: Shape of the output feature map in the spatial dimensions

.. py:function:: create_and_verify_narrowing(shift: Union[int, numpy.ndarray], round_type: ml_kernels.math_helpers.RoundType, out_dtype: type) -> ml_kernels.requantization.Narrowing

.. py:function:: split_einsum_equation(equation: str) -> Tuple[List[str], List[str]]

   Separate the input and output parts of an einsum equation, returning lists of
   strings representing the specs of each tensor in the equation. Note that an
   empty list is returned if there is no output spec in the equation. Also, space
   characters are removed if present in the equation string.

   :param equation: Einsum equation string, e.g. "nchw,nqhc->nqhw".
   :return: Tuple containing lists of input and output tensor spec strings.
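A minimal sketch of the splitting behavior described for
``split_einsum_equation`` (the name ``split_einsum_sketch`` is illustrative, not
this module's implementation):

.. code-block:: python

   from typing import List, Tuple

   def split_einsum_sketch(equation: str) -> Tuple[List[str], List[str]]:
       # Drop whitespace, then split into the input and output halves.
       equation = equation.replace(" ", "")
       if "->" in equation:
           inputs, outputs = equation.split("->")
           output_specs = outputs.split(",") if outputs else []
       else:
           inputs, output_specs = equation, []
       return inputs.split(","), output_specs

   assert split_einsum_sketch("nchw, nqhc -> nqhw") == (["nchw", "nqhc"], ["nqhw"])
   # No output spec in the equation yields an empty output list.
   assert split_einsum_sketch("ij,jk") == (["ij", "jk"], [])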
.. py:function:: mla_supports_einsum_equation_strings(input_strings: List[str], output_strings: List[str]) -> bool

   Check whether the einsum equation can be supported on the MLA. For an einsum
   equation to be supported on the MLA, the number of specs for input tensors
   must be equal to two, the number of specs for output tensors must be equal to
   one, and all the specs must contain exactly four letters.

   This check is used when deciding whether the Einsum operator's layout is going
   to be changed during the ConvertLayout pass. We cannot base the decision on
   the contents of the equation here, since the MLAChecker (which runs in the
   later GraphPartitioning pass) could use the wrong layout to come to a
   partitioning decision for the Einsum operator.

   :param input_strings: The list of spec strings defining input tensors.
   :param output_strings: The list of spec strings defining output tensors.
   :return: True if the input and output strings meet the requirements to be
       supported on the MLA, otherwise False.

.. py:function:: is_mla_supported_einsum_equation(equation: str, data_layout: str) -> bool

   Check whether the einsum equation is supported on the MLA for the given data
   layout.

   Supported equations:

   - nchw,nchq->nqhw and nchw,nqhc->nqhw for NCHW data layout
   - nhwc,nhqc->nhwq and nhwc,nhcq->nhwq for NHWC data layout

   Note that the naming of the axes and the presence of whitespace are irrelevant
   to the decision.
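A minimal sketch of the structural check described for
``mla_supports_einsum_equation_strings`` (the name
``mla_supports_strings_sketch`` is illustrative, not this module's API):

.. code-block:: python

   from typing import List

   def mla_supports_strings_sketch(input_strings: List[str],
                                   output_strings: List[str]) -> bool:
       # Exactly two 4-letter input specs and one 4-letter output spec.
       return (len(input_strings) == 2
               and len(output_strings) == 1
               and all(len(spec) == 4 for spec in input_strings + output_strings))

   # "nchw,nqhc->nqhw" passes the structural check ...
   assert mla_supports_strings_sketch(["nchw", "nqhc"], ["nqhw"])
   # ... while a plain matrix multiplication "ij,jk->ik" does not.
   assert not mla_supports_strings_sketch(["ij", "jk"], ["ik"])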