OpenVINO™ Model Conversion API

This Jupyter notebook can be launched only after a local installation.


This notebook shows how to convert a model from its original framework format to OpenVINO Intermediate Representation (IR).


# Required imports. Please execute this cell first.
%pip install -q --extra-index-url https://download.pytorch.org/whl/cpu \
"openvino-dev>=2023.1.0" "requests" "tqdm" "transformers[onnx]>=4.21.1" "torch" "torchvision"
Note: you may need to restart the kernel to use updated packages.

OpenVINO IR format

OpenVINO Intermediate Representation (IR) is OpenVINO's own model format. It is created by converting a model with the model conversion API. The model conversion API translates frequently used deep-learning operations into their corresponding OpenVINO representations and tunes them with the associated weights and biases from the trained model. The resulting IR consists of two files: an .xml file that describes the network topology, and a .bin file that holds the binary weight and bias data.
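Once an IR exists on disk, it can be loaded back with the OpenVINO Runtime. The snippet below is a minimal sketch, not part of the original notebook, and assumes the model/distilbert.xml IR that is generated later in this notebook; read_model picks up the matching .bin file automatically when it sits next to the .xml file.

# Hedged sketch: loading an IR pair (.xml + .bin) back with the OpenVINO Runtime.
# Assumes model/distilbert.xml exists (it is generated later in this notebook).
from openvino.runtime import Core

core = Core()
ir_model = core.read_model("model/distilbert.xml")  # weights are read from model/distilbert.bin
print(ir_model.input(0), ir_model.output(0))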

Preparing an IR with the Python conversion API and the Model Optimizer command-line tool

There are two ways to convert a model from its original framework format to OpenVINO IR: the Python conversion API and the Model Optimizer command-line tool. You can choose whichever is more convenient for you. Given the same set of parameters, both produce identical conversion results. For details, see the model preparation documentation.

# Model Optimizer CLI tool parameters description

! mo --help
usage: main.py [options]

optional arguments:
  -h, --help            show this help message and exit
  --framework FRAMEWORK
                        Name of the framework used to train the input model.

Framework-agnostic parameters:
  --model_name MODEL_NAME, -n MODEL_NAME
                        Model_name parameter passed to the final create_ir
                        transform. This parameter is used to name a network in
                        a generated IR and output .xml/.bin files.
  --output_dir OUTPUT_DIR, -o OUTPUT_DIR
                        Directory that stores the generated IR. By default, it
                        is the directory from where the Model Conversion is
                        launched.
  --freeze_placeholder_with_value FREEZE_PLACEHOLDER_WITH_VALUE
                        Replaces input layer with constant node with provided
                        value, for example: "node_name->True". It will be
                        DEPRECATED in future releases. Use "input" option to
                        specify a value for freezing.
  --static_shape        Enables IR generation for fixed input shape (folding
                        ShapeOf operations and shape-calculating sub-graphs
                        to Constant). Changing model input shape using the
                        OpenVINO Runtime API in runtime may fail for such an
                        IR.
  --use_new_frontend    Force the usage of new Frontend for model conversion
                        into IR. The new Frontend is C++ based and is
                        available for ONNX* and PaddlePaddle* models. Model
                        Conversion API uses new Frontend for ONNX* and
                        PaddlePaddle* by default that means use_new_frontend
                        and use_legacy_frontend options are not specified.
  --use_legacy_frontend
                        Force the usage of legacy Frontend for model
                        conversion into IR. The legacy Frontend is Python
                        based and is available for TensorFlow*, ONNX*, MXNet*,
                        Caffe*, and Kaldi* models.
  --input_model INPUT_MODEL, -m INPUT_MODEL, -w INPUT_MODEL
                        Tensorflow*: a file with a pre-trained model (binary
                        or text .pb file after freezing). Caffe*: a model
                        proto file with model weights.
  --input INPUT         Quoted list of comma-separated input nodes names with
                        shapes, data types, and values for freezing. The order
                        of inputs in converted model is the same as order of
                        specified operation names. The shape and value are
                        specified as comma-separated lists. The data type of
                        input node is specified in braces and can have one of
                        the values: f64 (float64), f32 (float32), f16
                        (float16), i64 (int64), i32 (int32), u8 (uint8),
                        boolean (bool). Data type is optional. If it's not
                        specified explicitly then there are two options: if
                        input node is a parameter, data type is taken from the
                        original node dtype, if input node is not a parameter,
                        data type is set to f32. Example, to set input_1
                        with shape [1,100], and Parameter node sequence_len
                        with scalar input with value 150, and boolean input
                        is_training with False value use the following
                        format:
                        "input_1[1,100],sequence_len->150,is_training->False".
                        Another example, use the following format to set input
                        port 0 of the node node_name1 with the shape [3,4]
                        as an input node and freeze output port 1 of the node
                        "node_name2" with the value [20,15] of the int32 type
                        and shape [2]:
                        "0:node_name1[3,4],node_name2:1[2]{i32}->[20,15]".
  --output OUTPUT       The name of the output operation of the model or list
                        of names. For TensorFlow*, do not add :0 to this
                        name. The order of outputs in converted model is the
                        same as order of specified operation names.
  --input_shape INPUT_SHAPE
                        Input shape(s) that should be fed to an input node(s)
                        of the model. Shape is defined as a comma-separated
                        list of integer numbers enclosed in parentheses or
                        square brackets, for example [1,3,227,227] or
                        (1,227,227,3), where the order of dimensions depends
                        on the framework input layout of the model. For
                        example, [N,C,H,W] is used for ONNX* models and
                        [N,H,W,C] for TensorFlow* models. The shape can
                        contain undefined dimensions (? or -1) and should fit
                        the dimensions defined in the input operation of the
                        graph. Boundaries of undefined dimension can be
                        specified with ellipsis, for example
                        [1,1..10,128,128]. One boundary can be undefined, for
                        example [1,..100] or [1,3,1..,1..]. If there are
                        multiple inputs in the model, --input_shape should
                        contain definition of shape for each input separated
                        by a comma, for example: [1,3,227,227],[2,4] for a
                        model with two inputs with 4D and 2D shapes.
                        Alternatively, specify shapes with the --input option.
  --example_input EXAMPLE_INPUT
                        Sample of model input in original framework. For
                        PyTorch it can be torch.Tensor. For Tensorflow it can
                        be tf.Tensor or numpy.ndarray. For PaddlePaddle it can
                        be Paddle Variable.
  --batch BATCH, -b BATCH
                        Set batch size. It applies to 1D or higher dimension
                        inputs. The default dimension index for the batch is
                        zero. Use a label 'n' in --layout or --source_layout
                        option to set the batch dimension. For example,
                        "x(hwnc)" defines the third dimension to be the batch.
  --mean_values MEAN_VALUES
                        Mean values to be used for the input image per
                        channel. Values to be provided in the (R,G,B) or
                        [R,G,B] format. Can be defined for desired input of
                        the model, for example: "--mean_values
                        data[255,255,255],info[255,255,255]". The exact
                        meaning and order of channels depend on how the
                        original model was trained.
  --scale_values SCALE_VALUES
                        Scale values to be used for the input image per
                        channel. Values are provided in the (R,G,B) or [R,G,B]
                        format. Can be defined for desired input of the model,
                        for example: "--scale_values
                        data[255,255,255],info[255,255,255]". The exact
                        meaning and order of channels depend on how the
                        original model was trained. If both --mean_values and
                        --scale_values are specified, the mean is subtracted
                        first and then scale is applied regardless of the
                        order of options in command line.
  --scale SCALE, -s SCALE
                        All input values coming from original network inputs
                        will be divided by this value. When a list of inputs
                        is overridden by the --input parameter, this scale is
                        not applied for any input that does not match with the
                        original input of the model. If both --mean_values and
                        --scale are specified, the mean is subtracted first
                        and then scale is applied regardless of the order of
                        options in command line.
  --reverse_input_channels [REVERSE_INPUT_CHANNELS]
                        Switch the input channels order from RGB to BGR (or
                        vice versa). Applied to original inputs of the model
                        if and only if a number of channels equals 3. When
                        --mean_values/--scale_values are also specified,
                        reversing of channels will be applied to user's input
                        data first, so that numbers in --mean_values and
                        --scale_values go in the order of channels used in the
                        original model. In other words, if both options are
                        specified, then the data flow in the model looks as
                        following: Parameter -> ReverseInputChannels -> Mean
                        apply-> Scale apply -> the original body of the model.
  --source_layout SOURCE_LAYOUT
                        Layout of the input or output of the model in the
                        framework. Layout can be specified in the short form,
                        e.g. nhwc, or in complex form, e.g. "[n,h,w,c]".
                        Example for many names: "in_name1([n,h,w,c]),in_name2(
                        nc),out_name1(n),out_name2(nc)". Layout can be
                        partially defined, "?" can be used to specify
                        undefined layout for one dimension, "..." can be used
                        to specify undefined layout for multiple dimensions,
                        for example "?c??", "nc...", "n...c", etc.
  --target_layout TARGET_LAYOUT
                        Same as --source_layout, but specifies target layout
                        that will be in the model after processing by
                        ModelOptimizer.
  --layout LAYOUT       Combination of --source_layout and --target_layout.
                        Can't be used with either of them. If model has one
                        input it is sufficient to specify layout of this
                        input, for example --layout nhwc. To specify layouts
                        of many tensors, names must be provided, for example:
                        --layout "name1(nchw),name2(nc)". It is possible to
                        instruct ModelOptimizer to change layout, for example:
                        --layout "name1(nhwc->nchw),name2(cn->nc)". Also "*"
                        in long layout form can be used to fuse dimensions,
                        for example "[n,c,...]->[n*c,...]".
  --compress_to_fp16 [COMPRESS_TO_FP16]
                        If the original model has FP32 weights or biases, they
                        are compressed to FP16. All intermediate data is kept
                        in original precision. Option can be specified alone
                        as "--compress_to_fp16", or explicit True/False values
                        can be set, for example: "--compress_to_fp16=False",
                        or "--compress_to_fp16=True"
  --extensions EXTENSIONS
                        Paths or a comma-separated list of paths to libraries
                        (.so or .dll) with extensions. For the legacy MO path
                        (if --use_legacy_frontend is used), a directory or a
                        comma-separated list of directories with extensions
                        are supported. To disable all extensions including
                        those that are placed at the default location, pass an
                        empty string.
  --transform TRANSFORM
                        Apply additional transformations. Usage: "--transform
                        transformation_name1[args],transformation_name2..."
                        where [args] is key=value pairs separated by
                        semicolon. Examples: "--transform LowLatency2" or "--
                        transform Pruning" or "--transform
                        LowLatency2[use_const_initializer=False]" or "--
                        transform "MakeStateful[param_res_names= {'input_name_
                        1':'output_name_1','input_name_2':'output_name_2'}]"
                        Available transformations: "LowLatency2",
                        "MakeStateful", "Pruning"
  --transformations_config TRANSFORMATIONS_CONFIG
                        Use the configuration file with transformations
                        description. Transformations file can be specified as
                        relative path from the current directory, as absolute
                        path or as a relative path from the mo root directory.
  --silent [SILENT]     Prevent any output messages except those that
                        correspond to log level equals ERROR, that can be set
                        with the following option: --log_level. By default,
                        log level is already ERROR.
  --log_level {CRITICAL,ERROR,WARN,WARNING,INFO,DEBUG,NOTSET}
                        Logger level of logging messages from MO. Expected one
                        of ['CRITICAL', 'ERROR', 'WARN', 'WARNING', 'INFO',
                        'DEBUG', 'NOTSET'].
  --version             Version of Model Optimizer
  --progress [PROGRESS]
                        Enable model conversion progress display.
  --stream_output [STREAM_OUTPUT]
                        Switch model conversion progress display to a
                        multiline mode.
  --share_weights [SHARE_WEIGHTS]
                        Map memory of weights instead of reading files or share
                        memory from input model. Currently, mapping feature is
                        provided only for ONNX models that do not require
                        fallback to the legacy ONNX frontend for the
                        conversion.

TensorFlow*-specific parameters:
  --input_model_is_text [INPUT_MODEL_IS_TEXT]
                        TensorFlow*: treat the input model file as a text
                        protobuf format. If not specified, the Model Optimizer
                        treats it as a binary file by default.
  --input_checkpoint INPUT_CHECKPOINT
                        TensorFlow*: variables file to load.
  --input_meta_graph INPUT_META_GRAPH
                        Tensorflow*: a file with a meta-graph of the model
                        before freezing
  --saved_model_dir SAVED_MODEL_DIR
                        TensorFlow*: directory with a model in SavedModel
                        format of TensorFlow 1.x or 2.x version.
  --saved_model_tags SAVED_MODEL_TAGS
                        Group of tag(s) of the MetaGraphDef to load, in string
                        format, separated by ','. For tag-set contains
                        multiple tags, all tags must be passed in.
  --tensorflow_custom_operations_config_update TENSORFLOW_CUSTOM_OPERATIONS_CONFIG_UPDATE
                        TensorFlow*: update the configuration file with node
                        name patterns with input/output nodes information.
  --tensorflow_object_detection_api_pipeline_config TENSORFLOW_OBJECT_DETECTION_API_PIPELINE_CONFIG
                        TensorFlow*: path to the pipeline configuration file
                        used to generate model created with help of Object
                        Detection API.
  --tensorboard_logdir TENSORBOARD_LOGDIR
                        TensorFlow*: dump the input graph to a given directory
                        that should be used with TensorBoard.
  --tensorflow_custom_layer_libraries TENSORFLOW_CUSTOM_LAYER_LIBRARIES
                        TensorFlow*: comma separated list of shared libraries
                        with TensorFlow* custom operations implementation.

Caffe*-specific parameters:
  --input_proto INPUT_PROTO, -d INPUT_PROTO
                        Deploy-ready prototxt file that contains a topology
                        structure and layer attributes
  --caffe_parser_path CAFFE_PARSER_PATH
                        Path to Python Caffe* parser generated from
                        caffe.proto
  --k K                 Path to CustomLayersMapping.xml to register custom
                        layers
  --disable_omitting_optional [DISABLE_OMITTING_OPTIONAL]
                        Disable omitting optional attributes to be used for
                        custom layers. Use this option if you want to transfer
                        all attributes of a custom layer to IR. Default
                        behavior is to transfer the attributes with default
                        values and the attributes defined by the user to IR.
  --enable_flattening_nested_params [ENABLE_FLATTENING_NESTED_PARAMS]
                        Enable flattening optional params to be used for
                        custom layers. Use this option if you want to transfer
                        attributes of a custom layer to IR with flattened
                        nested parameters. Default behavior is to transfer the
                        attributes without flattening nested parameters.

MXNet-specific parameters:
  --input_symbol INPUT_SYMBOL
                        Symbol file (for example, model-symbol.json) that
                        contains a topology structure and layer attributes
  --nd_prefix_name ND_PREFIX_NAME
                        Prefix name for args.nd and argx.nd files.
  --pretrained_model_name PRETRAINED_MODEL_NAME
                        Name of a pretrained MXNet model without extension and
                        epoch number. This model will be merged with args.nd
                        and argx.nd files
  --save_params_from_nd [SAVE_PARAMS_FROM_ND]
                        Enable saving built parameters file from .nd files
  --legacy_mxnet_model [LEGACY_MXNET_MODEL]
                        Enable MXNet loader to make a model compatible with
                        the latest MXNet version. Use only if your model was
                        trained with MXNet version lower than 1.0.0
  --enable_ssd_gluoncv [ENABLE_SSD_GLUONCV]
                        Enable pattern matchers replacers for converting
                        gluoncv ssd topologies.

Kaldi-specific parameters:
  --counts COUNTS       Path to the counts file
  --remove_output_softmax [REMOVE_OUTPUT_SOFTMAX]
                        Removes the SoftMax layer that is the output layer
  --remove_memory [REMOVE_MEMORY]
                        Removes the Memory layer and use additional inputs
                        outputs instead
# Python conversion API parameters description
from openvino.tools import mo


mo.convert_model(help=True)
Optional parameters:
  --help
                    Print available parameters.
  --framework
                    Name of the framework used to train the input model.

Framework-agnostic parameters:
  --input_model
                    Model object in original framework (PyTorch, Tensorflow) or path to
                    model file.
                    Tensorflow*: a file with a pre-trained model (binary or text .pb file
                    after freezing).
                    Caffe*: a model proto file with model weights

                    Supported formats of input model:

                    PaddlePaddle
                    paddle.hapi.model.Model
                    paddle.fluid.dygraph.layers.Layer
                    paddle.fluid.executor.Executor

                    PyTorch
                    torch.nn.Module
                    torch.jit.ScriptModule
                    torch.jit.ScriptFunction

                    TF
                    tf.compat.v1.Graph
                    tf.compat.v1.GraphDef
                    tf.compat.v1.wrap_function
                    tf.compat.v1.session

                    TF2 / Keras
                    tf.keras.Model
                    tf.keras.layers.Layer
                    tf.function
                    tf.Module
                    tf.train.checkpoint
  --input
                    Input can be set by passing a list of InputCutInfo objects or by a list
                    of tuples. Each tuple can contain optionally input name, input
                    type or input shape. Example: input=("op_name", PartialShape([-1,
                    3, 100, 100]), Type(np.float32)). Alternatively input can be set by
                    a string or list of strings of the following format. Quoted list of comma-separated
                    input nodes names with shapes, data types, and values for freezing.
                    If operation names are specified, the order of inputs in converted
                    model will be the same as order of specified operation names (applicable
                    for TF2, ONNX, MxNet).
                    The shape and value are specified as comma-separated lists. The data
                    type of input node is specified
                    in braces and can have one of the values: f64 (float64), f32 (float32),
                    f16 (float16), i64
                    (int64), i32 (int32), u8 (uint8), boolean (bool). Data type is optional.
                    If it's not specified explicitly then there are two options: if input
                    node is a parameter, data type is taken from the original node dtype,
                    if input node is not a parameter, data type is set to f32. Example, to set
                    input_1 with shape [1,100], and Parameter node sequence_len with
                    scalar input with value 150, and boolean input is_training with
                    False value use the following format: "input_1[1,100],sequence_len->150,is_training->False".
                    Another example, use the following format to set input port 0 of the node
                    node_name1 with the shape [3,4] as an input node and freeze output
                    port 1 of the node node_name2 with the value [20,15] of the int32 type
                    and shape [2]: "0:node_name1[3,4],node_name2:1[2]{i32}->[20,15]".

  --output
                    The name of the output operation of the model or list of names. For TensorFlow*,
                    do not add :0 to this name. The order of outputs in converted model is the
                    same as order of specified operation names.
  --input_shape
                    Input shape(s) that should be fed to an input node(s) of the model. Input
                    shapes can be defined by passing a list of objects of type PartialShape,
                    Shape, [Dimension, ...] or [int, ...] or by a string of the following
                    format. Shape is defined as a comma-separated list of integer numbers
                    enclosed in parentheses or square brackets, for example [1,3,227,227]
                    or (1,227,227,3), where the order of dimensions depends on the framework
                    input layout of the model. For example, [N,C,H,W] is used for ONNX* models
                    and [N,H,W,C] for TensorFlow* models. The shape can contain undefined
                    dimensions (? or -1) and should fit the dimensions defined in the input
                    operation of the graph. Boundaries of undefined dimension can be specified
                    with ellipsis, for example [1,1..10,128,128]. One boundary can be
                    undefined, for example [1,..100] or [1,3,1..,1..]. If there are multiple
                    inputs in the model, --input_shape should contain definition of shape
                    for each input separated by a comma, for example: [1,3,227,227],[2,4]
                    for a model with two inputs with 4D and 2D shapes. Alternatively, specify
                    shapes with the --input option.
  --example_input
                    Sample of model input in original framework.
                    For PyTorch it can be torch.Tensor.
                    For Tensorflow it can be tf.Tensor or numpy.ndarray.
                    For PaddlePaddle it can be Paddle Variable.
  --batch
                    Set batch size. It applies to 1D or higher dimension inputs.
                    The default dimension index for the batch is zero.
                    Use a label 'n' in --layout or --source_layout option to set the batch
                    dimension.
                    For example, "x(hwnc)" defines the third dimension to be the batch.

  --mean_values
                    Mean values to be used for the input image per channel. Mean values can
                    be set by passing a dictionary, where key is input name and value is mean
                    value. For example mean_values={'data':[255,255,255],'info':[255,255,255]}.
                    Or mean values can be set by a string of the following format. Values to
                    be provided in the (R,G,B) or [R,G,B] format. Can be defined for desired
                    input of the model, for example: "--mean_values data[255,255,255],info[255,255,255]".
                    The exact meaning and order of channels depend on how the original model
                    was trained.
  --scale_values
                    Scale values to be used for the input image per channel. Scale values
                    can be set by passing a dictionary, where key is input name and value is
                    scale value. For example scale_values={'data':[255,255,255],'info':[255,255,255]}.
                    Or scale values can be set by a string of the following format. Values
                    are provided in the (R,G,B) or [R,G,B] format. Can be defined for desired
                    input of the model, for example: "--scale_values data[255,255,255],info[255,255,255]".
                    The exact meaning and order of channels depend on how the original model
                    was trained. If both --mean_values and --scale_values are specified,
                    the mean is subtracted first and then scale is applied regardless of
                    the order of options in command line.
  --scale
                    All input values coming from original network inputs will be divided
                    by this value. When a list of inputs is overridden by the --input parameter,
                    this scale is not applied for any input that does not match with the original
                    input of the model. If both --mean_values and --scale  are specified,
                    the mean is subtracted first and then scale is applied regardless of
                    the order of options in command line.
  --reverse_input_channels
                    Switch the input channels order from RGB to BGR (or vice versa). Applied
                    to original inputs of the model if and only if a number of channels equals
                    3. When --mean_values/--scale_values are also specified, reversing
                    of channels will be applied to user's input data first, so that numbers
                    in --mean_values and --scale_values go in the order of channels used
                    in the original model. In other words, if both options are specified,
                    then the data flow in the model looks as following: Parameter -> ReverseInputChannels
                    -> Mean apply-> Scale apply -> the original body of the model.
  --source_layout
                    Layout of the input or output of the model in the framework. Layout can
                    be set by passing a dictionary, where key is input name and value is LayoutMap
                    object. Or layout can be set by string of the following format. Layout
                    can be specified in the short form, e.g. nhwc, or in complex form, e.g.
                    "[n,h,w,c]". Example for many names: "in_name1([n,h,w,c]),in_name2(nc),out_name1(n),out_name2(nc)".
                    Layout can be partially defined, "?" can be used to specify undefined
                    layout for one dimension, "..." can be used to specify undefined layout
                    for multiple dimensions, for example "?c??", "nc...", "n...c", etc.

  --target_layout
                    Same as --source_layout, but specifies target layout that will be in
                    the model after processing by ModelOptimizer.
  --layout
                    Combination of --source_layout and --target_layout. Can't be used
                    with either of them. If model has one input it is sufficient to specify
                    layout of this input, for example --layout nhwc. To specify layouts
                    of many tensors, names must be provided, for example: --layout "name1(nchw),name2(nc)".
                    It is possible to instruct ModelOptimizer to change layout, for example:
                    --layout "name1(nhwc->nchw),name2(cn->nc)".
                    Also "*" in long layout form can be used to fuse dimensions, for example
                    "[n,c,...]->[n*c,...]".
  --compress_to_fp16
                    If the original model has FP32 weights or biases, they are compressed
                    to FP16. All intermediate data is kept in original precision. Option
                    can be specified alone as "--compress_to_fp16", or explicit True/False
                    values can be set, for example: "--compress_to_fp16=False", or "--compress_to_fp16=True"

  --extensions
                    Paths to libraries (.so or .dll) with extensions, comma-separated
                    list of paths, objects derived from BaseExtension class or lists of
                    objects. For the legacy MO path (if --use_legacy_frontend is used),
                    a directory or a comma-separated list of directories with extensions
                    are supported. To disable all extensions including those that are placed
                    at the default location, pass an empty string.
  --transform
                    Apply additional transformations. 'transform' can be set by a list
                    of tuples, where the first element is transform name and the second element
                    is transform parameters. For example: [('LowLatency2', {{'use_const_initializer':
                    False}}), ...]"--transform transformation_name1[args],transformation_name2..."
                    where [args] is key=value pairs separated by semicolon. Examples:
                     "--transform LowLatency2" or
                     "--transform Pruning" or
                     "--transform LowLatency2[use_const_initializer=False]" or
                     "--transform "MakeStateful[param_res_names=
                    {'input_name_1':'output_name_1','input_name_2':'output_name_2'}]""
                    Available transformations: "LowLatency2", "MakeStateful", "Pruning"

  --transformations_config
                    Use the configuration file with transformations description or pass
                    object derived from BaseExtension class. Transformations file can
                    be specified as relative path from the current directory, as absolute
                    path or as relative path from the mo root directory.
  --silent
                    Prevent any output messages except those that correspond to log level
                    equals ERROR, that can be set with the following option: --log_level.
                    By default, log level is already ERROR.
  --log_level
                    Logger level of logging messages from MO.
                    Expected one of ['CRITICAL', 'ERROR', 'WARN', 'WARNING', 'INFO',
                    'DEBUG', 'NOTSET'].
  --version
                    Version of Model Optimizer
  --progress
                    Enable model conversion progress display.
  --stream_output
                    Switch model conversion progress display to a multiline mode.
  --share_weights
                    Map memory of weights instead of reading files or share memory from input
                    model.
                    Currently, mapping feature is provided only for ONNX models
                    that do not require fallback to the legacy ONNX frontend for the conversion.


PaddlePaddle-specific parameters:
  --example_output
                    Sample of model output in original framework. For PaddlePaddle it can
                    be Paddle Variable.

TensorFlow*-specific parameters:
  --input_model_is_text
                    TensorFlow*: treat the input model file as a text protobuf format. If
                    not specified, the Model Optimizer treats it as a binary file by default.

  --input_checkpoint
                    TensorFlow*: variables file to load.
  --input_meta_graph
                    Tensorflow*: a file with a meta-graph of the model before freezing
  --saved_model_dir
                    TensorFlow*: directory with a model in SavedModel format of TensorFlow
                    1.x or 2.x version.
  --saved_model_tags
                    Group of tag(s) of the MetaGraphDef to load, in string format, separated
                    by ','. For tag-set contains multiple tags, all tags must be passed in.

  --tensorflow_custom_operations_config_update
                    TensorFlow*: update the configuration file with node name patterns
                    with input/output nodes information.
  --tensorflow_object_detection_api_pipeline_config
                    TensorFlow*: path to the pipeline configuration file used to generate
                    model created with help of Object Detection API.
  --tensorboard_logdir
                    TensorFlow*: dump the input graph to a given directory that should be
                    used with TensorBoard.
  --tensorflow_custom_layer_libraries
                    TensorFlow*: comma separated list of shared libraries with TensorFlow*
                    custom operations implementation.

MXNet-specific parameters:
  --input_symbol
                    Symbol file (for example, model-symbol.json) that contains a topology
                    structure and layer attributes
  --nd_prefix_name
                    Prefix name for args.nd and argx.nd files.
  --pretrained_model_name
                    Name of a pretrained MXNet model without extension and epoch number.
                    This model will be merged with args.nd and argx.nd files
  --save_params_from_nd
                    Enable saving built parameters file from .nd files
  --legacy_mxnet_model
                    Enable MXNet loader to make a model compatible with the latest MXNet
                    version. Use only if your model was trained with MXNet version lower
                    than 1.0.0
  --enable_ssd_gluoncv
                    Enable pattern matchers replacers for converting gluoncv ssd topologies.


Caffe*-specific parameters:
  --input_proto
                    Deploy-ready prototxt file that contains a topology structure and
                    layer attributes
  --caffe_parser_path
                    Path to Python Caffe* parser generated from caffe.proto
  --k
                    Path to CustomLayersMapping.xml to register custom layers
  --disable_omitting_optional
                    Disable omitting optional attributes to be used for custom layers.
                    Use this option if you want to transfer all attributes of a custom layer
                    to IR. Default behavior is to transfer the attributes with default values
                    and the attributes defined by the user to IR.
  --enable_flattening_nested_params
                    Enable flattening optional params to be used for custom layers. Use
                    this option if you want to transfer attributes of a custom layer to IR
                    with flattened nested parameters. Default behavior is to transfer
                    the attributes without flattening nested parameters.

Kaldi-specific parameters:
  --counts
                    Path to the counts file
  --remove_output_softmax
                    Removes the SoftMax layer that is the output layer
  --remove_memory
                    Removes the Memory layer and use additional inputs outputs instead

Fetching example models

This notebook uses two models for the conversion examples.

from pathlib import Path

# create a directory for models files
MODEL_DIRECTORY_PATH = Path("model")
MODEL_DIRECTORY_PATH.mkdir(exist_ok=True)

Fetch the distilbert NLP model from Hugging Face and export it to ONNX format.

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers.onnx import export, FeaturesManager


ONNX_NLP_MODEL_PATH = MODEL_DIRECTORY_PATH / "distilbert.onnx"

# download model
hf_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)
# initialize tokenizer
tokenizer = AutoTokenizer.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english"
)

# get model onnx config function for output feature format sequence-classification
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(
    hf_model, feature="sequence-classification"
)
# fill onnx config based on pytorch model config
onnx_config = model_onnx_config(hf_model.config)

# export to onnx format
export(
    preprocessor=tokenizer,
    model=hf_model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=ONNX_NLP_MODEL_PATH,
)
2024-02-09 23:08:18.586507: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-02-09 23:08:18.621399: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-02-09 23:08:19.256172: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/.venv/lib/python3.8/site-packages/transformers/models/distilbert/modeling_distilbert.py:246: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
  mask, torch.tensor(torch.finfo(scores.dtype).min)
(['input_ids', 'attention_mask'], ['logits'])

Fetch the ResNet50 CV classification model from Torchvision.

from torchvision.models import resnet50, ResNet50_Weights


# create model object
pytorch_model = resnet50(weights=ResNet50_Weights.DEFAULT)
# switch model from training to inference mode
pytorch_model.eval()
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer2): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer3): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(512, 1024, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (3): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (4): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (5): Bottleneck(
      (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(1024, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (layer4): Sequential(
    (0): Bottleneck(
      (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (downsample): Sequential(
        (0): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
    (2): Bottleneck(
      (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
      (bn3): BatchNorm2d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)

Convert the PyTorch model to ONNX format.

import torch
import warnings


ONNX_CV_MODEL_PATH = MODEL_DIRECTORY_PATH / "resnet.onnx"

if ONNX_CV_MODEL_PATH.exists():
    print(f"ONNX model {ONNX_CV_MODEL_PATH} already exists.")
else:
    with warnings.catch_warnings():
        warnings.filterwarnings("ignore")
        torch.onnx.export(
            model=pytorch_model, args=torch.randn(1, 3, 780, 520), f=ONNX_CV_MODEL_PATH
        )
    print(f"ONNX model exported to {ONNX_CV_MODEL_PATH}")
ONNX model model/resnet.onnx already exists.

Basic conversion

To convert a model to OpenVINO IR, use the following commands:

# Model Optimizer CLI

! mo --input_model model/distilbert.onnx --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
# Python conversion API
from openvino.tools import mo

# mo.convert_model returns an openvino.runtime.Model object
ov_model = mo.convert_model(ONNX_NLP_MODEL_PATH)

# then model can be serialized to *.xml & *.bin files
from openvino.runtime import serialize

serialize(ov_model, xml_path=MODEL_DIRECTORY_PATH / "distilbert.xml")
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
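
The converted model, whether the in-memory ov_model object or the serialized IR, can then be compiled and used for inference with the OpenVINO Runtime. The snippet below is a hedged sketch that is not part of the original notebook: the input names and int64 data type follow the distilbert ONNX export above, and the dummy token values are purely illustrative.

# Hedged sketch: compile the converted model and run a dummy inference (illustrative only)
import numpy as np
from openvino.runtime import Core

core = Core()
compiled_model = core.compile_model(ov_model, device_name="CPU")
dummy_inputs = {
    "input_ids": np.ones((1, 16), dtype=np.int64),       # illustrative token ids
    "attention_mask": np.ones((1, 16), dtype=np.int64),  # attend to all positions
}
logits = compiled_model(dummy_inputs)[compiled_model.output(0)]
print(logits.shape)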

Model conversion parameters

Both the Python conversion API and the Model Optimizer command-line tool provide the following capabilities:

  • Override the original input shapes for model conversion with the input and input_shape parameters. Setting Input Shapes guide.
  • Cut off unwanted parts of a model, such as unsupported operations and training sub-graphs, with the input and output parameters, defining new inputs and outputs of the converted model. Cutting Off Parts of a Model guide.
  • Insert additional input pre-processing sub-graphs into the converted model with the mean_values, scale_values, layout, and other parameters. Embedding Preprocessing Computation guide.
  • Compress model weights (for example, convolution and matrix multiplication weights) to FP16 with the compress_to_fp16 compression parameter. Compressing a Model to FP16 guide.

If the out-of-the-box conversion (specifying only the input_model parameter) does not succeed, it may be necessary to use the parameters mentioned above to override input shapes and cut the model.
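
As an illustration, the pre-processing and compression parameters listed above can be combined in a single call. The snippet below is a hedged sketch that is not part of the original notebook: it converts the ResNet ONNX model exported earlier, and the mean/scale values are the commonly used ImageNet statistics rather than values prescribed by this notebook.

# Hedged sketch: embed mean/scale pre-processing, reverse channels, and compress to FP16 (illustrative values)
from openvino.tools import mo
from openvino.runtime import serialize

ov_cv_model = mo.convert_model(
    ONNX_CV_MODEL_PATH,
    mean_values=[123.675, 116.28, 103.53],  # subtracted from each channel first
    scale_values=[58.395, 57.12, 57.375],   # then each channel is divided by these values
    reverse_input_channels=True,            # swap RGB <-> BGR before mean/scale are applied
    compress_to_fp16=True,                  # store FP32 weights as FP16 in the IR
)
serialize(ov_cv_model, xml_path=MODEL_DIRECTORY_PATH / "resnet_preprocessed.xml")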

Setting input shapes

Model conversion is supported for models with dynamic input shapes that contain undefined dimensions. However, if the shape of the data is not going to change from one inference request to another, it is recommended to set static shapes for the inputs (when all dimensions are fully defined). Doing so at this stage, rather than during inference at runtime, can be beneficial in terms of performance and memory consumption. To set static shapes, the model conversion API provides the input and input_shape parameters.

For more information, refer to the Setting Input Shapes guide.

# Model Optimizer CLI

! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,128],[1,128] --output_dir model

# alternatively
! mo --input_model model/distilbert.onnx --input input_ids[1,128],attention_mask[1,128] --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(
    ONNX_NLP_MODEL_PATH,
    input=["input_ids", "attention_mask"],
    input_shape=[[1, 128], [1, 128]],
)

# alternatively specify input shapes, using the input parameter
ov_model = mo.convert_model(
    ONNX_NLP_MODEL_PATH, input=[("input_ids", [1, 128]), ("attention_mask", [1, 128])]
)
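
As a quick check, the inputs of the resulting openvino.runtime.Model can be inspected to confirm that the static [1, 128] shapes were applied (a minimal sketch using the model converted above):

# Inspect the converted model inputs to verify the overridden shapes
for model_input in ov_model.inputs:
    print(model_input.any_name, model_input.get_partial_shape())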

The input_shape parameter allows overriding original input shapes with ones that are compatible with a given model. Dynamic shapes, i.e. shapes with dynamic dimensions, in the original model can be replaced with static shapes for the converted model, and vice versa. A dynamic dimension can be marked in model conversion API parameters as -1 or ?. For example, launch model conversion for the ONNX Bert model and specify a dynamic sequence length dimension for the inputs:

# Model Optimizer CLI

! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,-1],[1,-1] --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(
    ONNX_NLP_MODEL_PATH,
    input=["input_ids", "attention_mask"],
    input_shape=[[1, -1], [1, -1]],
)

To optimize memory consumption for models with undefined dimensions in runtime, model conversion API provides the capability to define boundaries of dimensions. The boundaries of an undefined dimension can be specified with an ellipsis. For example, launch model conversion for the ONNX Bert model and specify a boundary for the sequence length dimension:

# Model Optimizer CLI

! mo --input_model model/distilbert.onnx --input input_ids,attention_mask --input_shape [1,10..128],[1,10..128] --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(
    ONNX_NLP_MODEL_PATH,
    input=["input_ids", "attention_mask"],
    input_shape=[[1, "10..128"], [1, "10..128"]],
)

Cutting Off Parts of a Model

The following examples show when model cutting is useful or even required:

  • A model has pre- or post-processing parts that cannot be translated to existing OpenVINO operations.

  • A model has a training part that is convenient to keep in the model but is not used during inference.

  • A model is too complex to be converted at once, because it contains many unsupported operations that cannot be easily implemented as custom layers.

  • A problem occurs with model conversion or inference in OpenVINO Runtime. To identify the issue, limit the conversion scope by an iterative search for problematic areas in the model.

  • A single custom layer, or a combination of custom layers, is isolated for debugging purposes.

For more information refer to the Cutting Off Parts of a Model guide.

# Model Optimizer CLI

# cut at the end
! mo --input_model model/distilbert.onnx --output /classifier/Gemm --output_dir model


# cut from the beginning
! mo --input_model model/distilbert.onnx --input /distilbert/embeddings/LayerNorm/Add_1,attention_mask --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/distilbert.bin
# Python conversion API
from openvino.tools import mo


# cut at the end
ov_model = mo.convert_model(ONNX_NLP_MODEL_PATH, output="/classifier/Gemm")

# cut from the beginning
ov_model = mo.convert_model(
    ONNX_NLP_MODEL_PATH,
    input=["/distilbert/embeddings/LayerNorm/Add_1", "attention_mask"],
)

Embedding Preprocessing Computation

Input data for inference can be different from the training dataset and may require additional preprocessing before inference. To accelerate the whole pipeline, including preprocessing and inference, model conversion API provides special parameters such as mean_values, scale_values, reverse_input_channels, and layout. Based on these parameters, model conversion API generates an OpenVINO IR with additionally inserted sub-graphs that perform the defined preprocessing. This preprocessing block can perform mean-scale normalization of input data, reverting data along the channel dimension, and changing the data layout. For more details about preprocessing, refer to the Embedding Preprocessing Computation article.

Specifying Layout

Layout defines the meaning of dimensions in a shape and can be specified for both inputs and outputs. Some preprocessing requires setting input layouts, for example, setting a batch, applying mean or scale values, or reversing input channels (BGR<->RGB). For the layout syntax, check the Layout API overview. To specify the layout, use the layout option followed by the layout value.

The following command specifies the NCHW layout for a Pytorch Resnet50 model that was exported to the ONNX format:

# Model Optimizer CLI

! mo --input_model model/resnet.onnx --layout nchw --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, layout="nchw")

Changing Model Layout

Changing the model layout may be necessary if it differs from the layout of the input data. To change the layout, use either the layout parameter, or source_layout together with target_layout.

# Model Optimizer CLI

! mo --input_model model/resnet.onnx --layout "nchw->nhwc" --output_dir model

# alternatively use source_layout and target_layout parameters
! mo --input_model model/resnet.onnx --source_layout nchw --target_layout nhwc --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, layout="nchw->nhwc")

# alternatively use source_layout and target_layout parameters
ov_model = mo.convert_model(
    ONNX_CV_MODEL_PATH, source_layout="nchw", target_layout="nhwc"
)

Specifying Mean and Scale Values

Model conversion API has the following parameters to specify the values: mean_values, scale_values, and scale. Using these parameters, model conversion API embeds the corresponding preprocessing block for mean-value normalization of the input data, and optimizes this block so that the preprocessing takes a negligible part of the overall inference time.
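
Conceptually, the embedded block normalizes each input value as (x - mean) / scale. The NumPy snippet below is only an illustrative equivalent of that computation; the array shape and values are hypothetical:

# Illustrative NumPy equivalent of the embedded mean/scale preprocessing (NCHW input)
import numpy as np

image = np.random.randint(0, 256, (1, 3, 224, 224)).astype(np.float32)
mean = np.array([123, 117, 104], dtype=np.float32).reshape(1, 3, 1, 1)
normalized = (image - mean) / 255.0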

# Model Optimizer CLI

! mo --input_model model/resnet.onnx --mean_values [123,117,104] --scale 255 --output_dir model

! mo --input_model model/resnet.onnx --mean_values [123,117,104] --scale_values [255,255,255] --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, mean_values=[123, 117, 104], scale=255)

ov_model = mo.convert_model(
    ONNX_CV_MODEL_PATH, mean_values=[123, 117, 104], scale_values=[255, 255, 255]
)

Reversing Input Channels

Sometimes, input images for your application can be of the RGB (or BGR) format, while the model was trained on images of the BGR (or RGB) format, which is the opposite order of color channels. In this case, it is important to preprocess the input images by reverting the color channels before inference.

# Model Optimizer CLI

! mo --input_model model/resnet.onnx --reverse_input_channels --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, reverse_input_channels=True)
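
For reference, the subgraph inserted by reverse_input_channels is equivalent to reversing the channel axis of the input tensor; a minimal NumPy illustration (assuming an NCHW input) is:

# Illustrative NumPy equivalent of reversing input channels (RGB <-> BGR) for NCHW data
import numpy as np

rgb_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
bgr_batch = rgb_batch[:, ::-1, :, :]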

Compressing a Model to FP16

Optionally, all relevant floating-point weights can be compressed to the FP16 data type during model conversion, creating a compressed FP16 model. Such a model occupies about half of the original space in the file system. Although the compression may introduce a drop in accuracy, it is negligible for most models.

# Model Optimizer CLI

! mo --input_model model/resnet.onnx --compress_to_fp16=True --output_dir model
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using tokenizers before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
[ INFO ] Generated IR will be compressed to FP16. If you get lower accuracy, please consider disabling compression explicitly by adding argument --compress_to_fp16=False.
Find more information about compression to FP16 at https://docs.openvino.ai/2023.0/openvino_docs_MO_DG_FP16_Compression.html
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/2023.0/openvino_2_0_transition_guide.html
[ INFO ] MO command line tool is considered as the legacy conversion API as of OpenVINO 2023.2 release. Please use OpenVINO Model Converter (OVC). OVC represents a lightweight alternative of MO and provides simplified model conversion API.
Find more information about transition from MO to OVC at https://docs.openvino.ai/2023.2/openvino_docs_OV_Converter_UG_prepare_model_convert_model_MO_OVC_transition.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.xml
[ SUCCESS ] BIN file: /opt/home/k8sworker/ci-ai/cibuilds/ov-notebook/OVNotebookOps-609/.workspace/scm/ov-notebook/notebooks/121-convert-to-openvino/model/resnet.bin
# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(ONNX_CV_MODEL_PATH, compress_to_fp16=True)
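
To gauge the effect of compression on disk size, both variants can be serialized and their .bin files compared. The sketch below reuses serialize and MODEL_DIRECTORY_PATH from the earlier cells; the resnet_fp32/resnet_fp16 file names are illustrative:

# Serialize FP32 and FP16 variants and compare the weight-file sizes
import os

from openvino.runtime import serialize
from openvino.tools import mo

fp32_model = mo.convert_model(ONNX_CV_MODEL_PATH, compress_to_fp16=False)
serialize(fp32_model, xml_path=MODEL_DIRECTORY_PATH / "resnet_fp32.xml")

fp16_model = mo.convert_model(ONNX_CV_MODEL_PATH, compress_to_fp16=True)
serialize(fp16_model, xml_path=MODEL_DIRECTORY_PATH / "resnet_fp16.xml")

print("FP32 weights:", os.path.getsize(MODEL_DIRECTORY_PATH / "resnet_fp32.bin"), "bytes")
print("FP16 weights:", os.path.getsize(MODEL_DIRECTORY_PATH / "resnet_fp16.bin"), "bytes")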

Convert Models Represented as Python Objects

Python conversion API can pass Python model objects, such as a Pytorch model or TensorFlow Keras model, directly, without saving them into files and without leaving the training environment (Jupyter Notebook or training scripts).
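
The cells below rely on a pytorch_model object defined earlier in the notebook. If you are running them in isolation, any in-memory torch.nn.Module can act as a stand-in, for example a torchvision ResNet-50 (an illustrative sketch, not the model used elsewhere in this notebook):

# Illustrative stand-in for `pytorch_model` (the notebook defines its own model earlier)
from torchvision.models import resnet50

pytorch_model = resnet50(weights=None).eval()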

# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(pytorch_model)
WARNING:tensorflow:Please fix your imports. Module tensorflow.python.training.tracking.base has been moved to tensorflow.python.trackable.base. The old module will be deleted in version 2.11.

convert_model() accepts all parameters available in the MO command-line tool. Parameters can be specified by Python classes or string analogs, similar to the command-line tool.

# Python conversion API
from openvino.tools import mo


ov_model = mo.convert_model(
    pytorch_model,
    input_shape=[1, 3, 100, 100],
    mean_values=[127, 127, 127],
    layout="nchw",
)

ov_model = mo.convert_model(pytorch_model, source_layout="nchw", target_layout="nhwc")

ov_model = mo.convert_model(
    pytorch_model, compress_to_fp16=True, reverse_input_channels=True
)
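
The same conversions can also be expressed with Python classes instead of string analogs. The sketch below assumes that PartialShape and Layout from openvino.runtime are accepted by convert_model for the input_shape and layout parameters:

# Python conversion API (Python classes instead of string analogs)
from openvino.runtime import Layout, PartialShape
from openvino.tools import mo


ov_model = mo.convert_model(
    pytorch_model,
    input_shape=PartialShape([1, 3, 100, 100]),
    mean_values=[127, 127, 127],
    layout=Layout("nchw"),
)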