Class SegmentObjectsFromONNXModel


public class SegmentObjectsFromONNXModel extends SegmentStackIntoObjectsScaleDecode<ai.onnxruntime.OnnxTensor,OnnxModel>
Performs instance-segmentation using the ONNX Runtime and an .onnx model file.
Author:
Owen Feehan
  • Constructor Details

    • SegmentObjectsFromONNXModel

      public SegmentObjectsFromONNXModel()
  • Method Details

    • createModelPool

      public ConcurrentModelPool<OnnxModel> createModelPool(ConcurrencyPlan plan, Logger logger) throws CreateModelFailedException
      Description copied from class: SegmentStackIntoObjectsPooled
      Creates the model pool (to be used by multiple threads).
      Specified by:
      createModelPool in class SegmentStackIntoObjectsPooled<OnnxModel>
      Parameters:
      plan - the number and types of processors available for concurrent execution.
      logger - the logger.
      Returns:
      the newly created model pool.
      Throws:
      CreateModelFailedException - if a model cannot be created.
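      The behaviour of a pool "to be used by multiple threads" can be sketched with a generic blocking pool. This is a simplification, not Anchor's actual ConcurrentModelPool or ConcurrencyPlan API; the class and method names below are hypothetical:

      ```java
      import java.util.concurrent.ArrayBlockingQueue;
      import java.util.concurrent.BlockingQueue;
      import java.util.function.Function;
      import java.util.function.Supplier;

      /** A minimal sketch of a fixed-size model pool shared between threads. */
      public class SimpleModelPool<T> {

          private final BlockingQueue<T> models;

          /** Eagerly creates one model per planned processor. */
          public SimpleModelPool(int numberProcessors, Supplier<T> createModel) {
              this.models = new ArrayBlockingQueue<>(numberProcessors);
              for (int i = 0; i < numberProcessors; i++) {
                  models.add(createModel.get());
              }
          }

          /** Borrows a model, applies an operation, and always returns the model to the pool. */
          public <R> R execute(Function<T, R> operation) {
              try {
                  T model = models.take(); // blocks if all models are in use
                  try {
                      return operation.apply(model);
                  } finally {
                      models.put(model);
                  }
              } catch (InterruptedException e) {
                  Thread.currentThread().interrupt();
                  throw new RuntimeException(e);
              }
          }
      }
      ```

      Bounding the pool to the number of processors means each thread blocks until a model is free, rather than each thread loading its own copy of the model.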
    • deriveInput

      protected ai.onnxruntime.OnnxTensor deriveInput(Stack stack, Optional<double[]> subtractMeans) throws OperationFailedException
      Description copied from class: SegmentStackIntoObjectsScaleDecode
      Derives the input tensor from an image.
      Specified by:
      deriveInput in class SegmentStackIntoObjectsScaleDecode<ai.onnxruntime.OnnxTensor,OnnxModel>
      Parameters:
      stack - the image which is mapped into an input tensor.
      subtractMeans - intensity values to subtract from the voxels of each respective channel before they are added to the tensor.
      Returns:
      the tensor, representing the input image.
      Throws:
      OperationFailedException - if an input tensor cannot be created.
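      The kind of conversion deriveInput performs can be sketched as packing per-channel voxel data into a flat float buffer, subtracting the optional per-channel means along the way. This helper is hypothetical (the real method builds an ai.onnxruntime.OnnxTensor, e.g. via OnnxTensor.createTensor from such a buffer):

      ```java
      import java.nio.FloatBuffer;
      import java.util.Optional;

      /** Sketch: packs per-channel voxel data into a channel-first float buffer. */
      public class DeriveInputSketch {

          /**
           * @param channels voxel intensities, one array of length width*height per channel
           * @param subtractMeans optional mean intensity to subtract, respectively per channel
           */
          public static FloatBuffer packChannels(float[][] channels, Optional<double[]> subtractMeans) {
              int numberChannels = channels.length;
              int voxelsPerChannel = channels[0].length;
              FloatBuffer buffer = FloatBuffer.allocate(numberChannels * voxelsPerChannel);
              for (int c = 0; c < numberChannels; c++) {
                  // Subtract this channel's mean (if provided) from every voxel before packing.
                  float mean = subtractMeans.isPresent() ? (float) subtractMeans.get()[c] : 0.0f;
                  for (int i = 0; i < voxelsPerChannel; i++) {
                      buffer.put(channels[c][i] - mean);
                  }
              }
              buffer.rewind();
              return buffer;
          }
      }
      ```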
    • inputName

      protected Optional<String> inputName()
      Description copied from class: SegmentStackIntoObjectsScaleDecode
      The name of the tensor in the model to which the input-image is mapped.
      Specified by:
      inputName in class SegmentStackIntoObjectsScaleDecode<ai.onnxruntime.OnnxTensor,OnnxModel>
      Returns:
      the name.
    • getModelPath

      public String getModelPath()
      The path to the model file in ONNX form, relative to the models/ directory in the Anchor distribution.

      If readFromResources==true, it is read instead from resources on the class-path.
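      How the two sources could be resolved can be sketched as follows. This is a hypothetical helper, not Anchor's actual resolution code; only the models/ directory name and the readFromResources switch come from the description above:

      ```java
      import java.nio.file.Path;
      import java.nio.file.Paths;

      /** Sketch: describes where the .onnx model would be read from. */
      public class ModelSource {

          /** Mirrors the modelPath / readFromResources pair of properties. */
          public static String describeSource(String modelPath, boolean readFromResources) {
              if (readFromResources) {
                  // Read from the class-path, e.g. via getResourceAsStream(modelPath).
                  return "classpath:" + modelPath;
              }
              // Otherwise resolve relative to the models/ directory of the distribution.
              Path resolved = Paths.get("models").resolve(modelPath);
              return resolved.toString();
          }
      }
      ```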

    • setModelPath

      public void setModelPath(String modelPath)
      The path to the model file in ONNX form, relative to the models/ directory in the Anchor distribution.

      If readFromResources==true, it is read instead from resources on the class-path.

    • isReadFromResources

      public boolean isReadFromResources()
      When true, modelPath is read from Java resources on the class-path, rather than from the file-system.
    • setReadFromResources

      public void setReadFromResources(boolean readFromResources)
      When true, modelPath is read from Java resources on the class-path, rather than from the file-system.
    • getInputName

      public String getInputName()
      The name of the input in the ONNX model.
    • setInputName

      public void setInputName(String inputName)
      The name of the input in the ONNX model.
    • isIncludeBatchDimension

      public boolean isIncludeBatchDimension()
      If true, a 4-dimensional tensor is created (with the first dimension describing a batch-size of 1), instead of the usual 3-dimensional tensor describing channel, height, width.
    • setIncludeBatchDimension

      public void setIncludeBatchDimension(boolean includeBatchDimension)
      If true, a 4-dimensional tensor is created (with the first dimension describing a batch-size of 1), instead of the usual 3-dimensional tensor describing channel, height, width.
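      The effect on the tensor's shape can be sketched as follows (a hypothetical helper; the channel, height, width ordering assumes the non-interleaved layout described below):

      ```java
      /** Sketch: builds the tensor shape with or without a leading batch dimension. */
      public class TensorShape {

          public static long[] shape(long channels, long height, long width, boolean includeBatchDimension) {
              if (includeBatchDimension) {
                  // 4-dimensional shape, with a batch-size of 1 in the first dimension.
                  return new long[] {1, channels, height, width};
              }
              // The usual 3-dimensional shape: channel, height, width.
              return new long[] {channels, height, width};
          }
      }
      ```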
    • isInterleaveChannels

      public boolean isInterleaveChannels()
      If true, the channels are placed in the final dimension of the tensor (after width/height) instead of the first dimension (before width/height).

      Consequently, in terms of raw order in a FloatBuffer, RGB values become interleaved.

    • setInterleaveChannels

      public void setInterleaveChannels(boolean interleaveChannels)
      If true, the channels are placed in the final dimension of the tensor (after width/height) instead of the first dimension (before width/height).

      Consequently, in terms of raw order in a FloatBuffer, RGB values become interleaved.
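      The difference in raw buffer layout can be sketched via the index arithmetic for a single voxel (hypothetical helper names):

      ```java
      /** Sketch: the flat buffer index of a voxel under the two channel layouts. */
      public class ChannelLayout {

          /** Channels before width/height (planar): all of channel 0, then all of channel 1, ... */
          public static int indexPlanar(int channel, int y, int x, int numberChannels, int height, int width) {
              return (channel * height + y) * width + x;
          }

          /** Channels after width/height (interleaved): R,G,B,R,G,B,... in the FloatBuffer. */
          public static int indexInterleaved(int channel, int y, int x, int numberChannels, int height, int width) {
              return (y * width + x) * numberChannels + channel;
          }
      }
      ```

      For an RGB image, the interleaved layout stores the three channel values of each pixel adjacently, whereas the planar layout stores three separate single-channel planes.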