Transformation (QONNX)

finn.transformation.qonnx.convert_qonnx_to_finn

class finn.transformation.qonnx.convert_qonnx_to_finn.ConvertQONNXtoFINN(filter_function=<function default_filter_function_generator.<locals>.filter_function>)

Bases: Transformation

Converts the QONNX dialect to the FINN ONNX dialect. First the weights are converted using the FoldQuantWeights transformation; then the ConvertQuantActToMultiThreshold transformation is used to convert the activations. If incompatibilities are found, a ValueError or RuntimeError is raised.

The optional keyword argument filter_function presents a way to control which Quant and BipolarQuant nodes in the activation path are converted to MultiThreshold nodes. A warning will be emitted when a Quant node is not converted to a MultiThreshold node.

Parameters:

filter_function – Each candidate Quant and BipolarQuant node is first evaluated by this function. If the function returns False, the node is not converted to a MultiThreshold node. The function receives the model and the candidate node as parameters. By default, a filter function is inserted that disables the conversion of Quant nodes with a bit width larger than 8. Defaults to: default_filter_function_generator(max_multithreshold_bit_width=8)

apply(model)
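To illustrate how a transformation like ConvertQONNXtoFINN is driven, here is a minimal pure-Python sketch of the qonnx/FINN transformation protocol: apply() returns a (model, graph_modified) tuple, and the model's transform() helper reapplies the transformation until the graph stops changing. ToyModel and RemoveNones are illustrative stand-ins, not part of the real qonnx/finn API.

```python
class Transformation:
    """Stand-in for qonnx.transformation.base.Transformation."""

    def apply(self, model):
        raise NotImplementedError


class ToyModel:
    """Stand-in for qonnx.core.modelwrapper.ModelWrapper."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def transform(self, transformation):
        model = self
        modified = True
        while modified:  # rerun until a fixed point is reached
            model, modified = transformation.apply(model)
        return model


class RemoveNones(Transformation):
    """Toy transformation: drop one None placeholder per pass."""

    def apply(self, model):
        for i, n in enumerate(model.nodes):
            if n is None:
                del model.nodes[i]
                return model, True  # graph changed, request another pass
        return model, False  # nothing left to do


model = ToyModel(["Quant", None, "MatMul", None])
model = model.transform(RemoveNones())
print(model.nodes)  # → ['Quant', 'MatMul']
```

With the real library, the pattern is the same shape: a ModelWrapper is loaded from an ONNX file and `model = model.transform(ConvertQONNXtoFINN())` is called on it.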

finn.transformation.qonnx.fold_quant_weights

class finn.transformation.qonnx.fold_quant_weights.FoldQuantWeights

Bases: Transformation

Merges Quant nodes that are used as weights into the initializer of the weight tensor.

apply(model)
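Since a weight tensor and its Quant node's parameters are all constants, the quantize-dequantize result can be computed once at transform time and stored as the weight initializer, after which the Quant node is removed. The sketch below shows this folding arithmetic for a single signed value; it follows the general QONNX Quant semantics (scale, zero point, bit-width clipping) but simplifies the rounding mode and is illustrative, not the exact FoldQuantWeights kernel.

```python
def fold_quant_weight(w, scale, zeropoint, bitwidth, signed=True):
    """Quantize-dequantize a constant weight, as weight folding would."""
    if signed:
        qmin, qmax = -(2 ** (bitwidth - 1)), 2 ** (bitwidth - 1) - 1
    else:
        qmin, qmax = 0, 2 ** bitwidth - 1
    q = round(w / scale) + zeropoint
    q = max(qmin, min(qmax, q))      # clip to the representable integer range
    return (q - zeropoint) * scale   # dequantized constant weight


# 4-bit signed, scale 0.5: representable values are -4.0 .. 3.5
print(fold_quant_weight(1.3, 0.5, 0, 4))   # → 1.5
print(fold_quant_weight(10.0, 0.5, 0, 4))  # → 3.5 (clipped to qmax)
```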

finn.transformation.qonnx.infer_quant_avg_pool_2d

class finn.transformation.qonnx.infer_quant_avg_pool_2d.AvgPoolAndTruncToQuantAvgPool

Bases: Transformation

Converts a chain of nodes matching the pattern AveragePool -> Mul (scalar) -> Trunc into the FINN op QuantAvgPool2d.

apply(model)
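The core of such a fusion is matching the three-node chain in the graph. The sketch below shows one way to find AveragePool -> Mul -> Trunc chains wired output-to-input; the Node dataclass and lookup helpers are illustrative stand-ins for the ONNX graph accessors, not the real qonnx API.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """Stand-in for an ONNX NodeProto: op type plus tensor names."""
    op_type: str
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)


def find_avgpool_trunc_chains(nodes):
    """Return (AveragePool, Mul, Trunc) triples wired output-to-input."""
    by_input = {}  # tensor name -> nodes consuming that tensor
    for n in nodes:
        for i in n.inputs:
            by_input.setdefault(i, []).append(n)
    chains = []
    for n in nodes:
        if n.op_type != "AveragePool":
            continue
        for mul in by_input.get(n.outputs[0], []):
            if mul.op_type != "Mul":
                continue
            for trunc in by_input.get(mul.outputs[0], []):
                if trunc.op_type == "Trunc":
                    chains.append((n, mul, trunc))
    return chains


g = [
    Node("AveragePool", ["x"], ["a"]),
    Node("Mul", ["a", "s"], ["m"]),
    Node("Trunc", ["m"], ["y"]),
]
print([tuple(n.op_type for n in c) for c in find_avgpool_trunc_chains(g)])
# → [('AveragePool', 'Mul', 'Trunc')]
```

Once matched, the real transformation replaces each chain with a single QuantAvgPool2d node carrying the pooling and truncation parameters.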

finn.transformation.qonnx.qonnx_activation_handlers

class finn.transformation.qonnx.qonnx_activation_handlers.QuantActBaseHandler(model: ModelWrapper, quant_node, quant_node_index: int)

Bases: ABC

Base class for converting a quantized activation expressed in the QONNX dialect to the FINN ONNX dialect.

Parameters:

model (qonnx.core.modelwrapper.ModelWrapper) – The model on which this handler should operate.

quant_node – The Quant node which a given handler should replace.

quant_node_index (int) – The index of the Quant node in the given model.

calculate_node_parameters()

Calculate all parameters required for replacing the QONNX style activation with a FINN style one.

replace_quant_node()

Replace the given QONNX style activation with a FINN style one.

classmethod valid_predecessor_op_types()

Defines which op types the preceding node is allowed to have for this type of activation.
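The valid_predecessor_op_types() classmethod lets a converter pick the right handler by inspecting the op type of the node feeding a Quant node. The toy classes below mirror that dispatch structure; they are illustrative stand-ins, not the real QuantActBaseHandler subclasses, and the op-type lists are simplified assumptions.

```python
from abc import ABC, abstractmethod


class ToyActHandler(ABC):
    """Stand-in for QuantActBaseHandler's dispatch interface."""

    @classmethod
    @abstractmethod
    def valid_predecessor_op_types(cls):
        ...


class ToyReluHandler(ToyActHandler):
    @classmethod
    def valid_predecessor_op_types(cls):
        return ["Relu"]


class ToyIdentityHandler(ToyActHandler):
    @classmethod
    def valid_predecessor_op_types(cls):
        # quantized HardTanh is handled as a quantized identity;
        # None stands for "no predecessor" (e.g. a graph input)
        return [None, "HardTanh"]


def pick_handler(predecessor_op_type):
    """Select the first handler that accepts the predecessor's op type."""
    for handler in (ToyReluHandler, ToyIdentityHandler):
        if predecessor_op_type in handler.valid_predecessor_op_types():
            return handler
    raise ValueError(f"no handler for predecessor {predecessor_op_type!r}")


print(pick_handler("Relu").__name__)      # → ToyReluHandler
print(pick_handler("HardTanh").__name__)  # → ToyIdentityHandler
```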

class finn.transformation.qonnx.qonnx_activation_handlers.QuantIdentityHandler(model: ModelWrapper, quant_node, quant_node_index: int)

Bases: QuantActBaseHandler

Class for converting a quantized identity operation expressed in the QONNX dialect to the FINN ONNX dialect. This handler also takes care of quantized HardTanh activations, because these are equivalent to quantized identity activations.

classmethod valid_predecessor_op_types()

Defines which op types the preceding node is allowed to have for this type of activation.

class finn.transformation.qonnx.qonnx_activation_handlers.QuantReluHandler(model: ModelWrapper, quant_node, quant_node_index: int)

Bases: QuantActBaseHandler

Class for converting a quantized relu operation expressed in the QONNX dialect to the FINN ONNX dialect.

classmethod valid_predecessor_op_types()

Defines which op types the preceding node is allowed to have for this type of activation.

finn.transformation.qonnx.quant_act_to_multithreshold

class finn.transformation.qonnx.quant_act_to_multithreshold.ConvertQuantActToMultiThreshold(filter_function=<function default_filter_function_generator.<locals>.filter_function>)

Bases: Transformation

Converts Quant nodes in the activation path to MultiThreshold nodes.

The optional keyword argument filter_function presents a way to control which Quant and BipolarQuant nodes in the activation path are converted to MultiThreshold nodes. A warning will be emitted when a Quant node is not converted to a MultiThreshold node.

Parameters:

filter_function – Each candidate Quant and BipolarQuant node is first evaluated by this function. If the function returns False, the node is not converted to a MultiThreshold node. The function receives the model and the candidate node as parameters. By default, a filter function is inserted that disables the conversion of Quant nodes with a bit width larger than 8. Defaults to: default_filter_function_generator(max_multithreshold_bit_width=8)

apply(model)

finn.transformation.qonnx.quant_act_to_multithreshold.default_filter_function_generator(max_multithreshold_bit_width=8)

This function generates the default filter function for the ConvertQuantActToMultiThreshold transformation. By default, the returned function disables the conversion of Quant nodes with a bit width above 8 bits.

This function generator can be used as a template to write custom filter functions.
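A custom filter generator can follow the same closure shape: it captures its settings and returns a filter_function(model, quant_node) -> bool. The bit-width lookup below is simplified to a stub that reads an attribute from a dict; the real default implementation reads the Quant node's bitwidth input tensor from the model, so get_bit_width is an assumed helper, not part of the library.

```python
def bit_width_filter_generator(max_bit_width=8):
    """Generate a filter in the style of default_filter_function_generator."""

    def filter_function(model, quant_node):
        # convert only activations whose bit width fits the budget
        return get_bit_width(model, quant_node) <= max_bit_width

    return filter_function


def get_bit_width(model, quant_node):
    """Stub: a real version would read the node's bitwidth initializer."""
    return quant_node["bitwidth"]


keep = bit_width_filter_generator(max_bit_width=8)
print(keep(None, {"bitwidth": 4}))   # → True
print(keep(None, {"bitwidth": 16}))  # → False
```

The generated function would then be passed as the filter_function keyword argument of ConvertQuantActToMultiThreshold (or ConvertQONNXtoFINN).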