Class DecodeMaskRCNN

java.lang.Object
  org.anchoranalysis.bean.AnchorBean<DecodeInstanceSegmentation<ai.onnxruntime.OnnxTensor>>
    DecodeInstanceSegmentation<ai.onnxruntime.OnnxTensor>
      DecodeMaskRCNN
Decodes the inference output from a Mask-RCNN implementation.

It is designed to work with the accompanying MaskRCNN-10.onnx in resources, which expects an image of size 1088x800 (width x height) and may throw an error if the input size differs from this.

The ONNX file was obtained from this GitHub source, which also describes its inputs and outputs. This issue may also be relevant: it mentions an error message that occurs when an input of a different size is used.
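The fixed input-size requirement above can be expressed as a simple pre-check before inference. The sketch below is illustrative only (the class and method names are hypothetical, not part of this library), assuming the 1088x800 width-by-height expectation described above:

```java
// Hypothetical helper (not part of the anchoranalysis API): checks whether an
// image matches the 1088x800 (width x height) input size that the bundled
// MaskRCNN-10.onnx model expects.
public class MaskRCNNInputCheck {

    static final int EXPECTED_WIDTH = 1088;
    static final int EXPECTED_HEIGHT = 800;

    /** Returns true if the image can be passed to the model without resizing. */
    static boolean matchesExpectedSize(int width, int height) {
        return width == EXPECTED_WIDTH && height == EXPECTED_HEIGHT;
    }
}
```

Note that the order matters: the model expects width first, so a 800x1088 image would fail this check.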
Author:
    Owen Feehan

Constructor Summary

Constructors:
    DecodeMaskRCNN()
Method Summary

List<LabelledWithConfidence<MultiScaleObject>> decode(List<ai.onnxruntime.OnnxTensor> inferenceOutput, ImageInferenceContext context)
    Decodes the output tensors from inference into ObjectMasks with confidence and labels.

expectedOutputs()
    Ordered names of the tensors we are interested in processing, as outputted from inference.

Interpolator getInterpolator()
    The interpolator to use for scaling images.

float getMinConfidence()
    Only proposals outputted from the model with a score greater than or equal to this threshold are considered.

float getMinMaskValue()
    Only voxels with a value greater than or equal to this threshold are considered as part of the mask.

void setInterpolator(Interpolator interpolator)
    The interpolator to use for scaling images.

void setMinConfidence(float minConfidence)
    Only proposals outputted from the model with a score greater than or equal to this threshold are considered.

void setMinMaskValue(float minMaskValue)
    Only voxels with a value greater than or equal to this threshold are considered as part of the mask.

Methods inherited from class org.anchoranalysis.bean.AnchorBean
checkMisconfigured, describeBean, describeChildren, duplicateBean, fields, findFieldsOfClass, getBeanName, getLocalPath, localise, toString
Constructor Details

DecodeMaskRCNN

public DecodeMaskRCNN()

Method Details
expectedOutputs

Description copied from class: DecodeInstanceSegmentation
Ordered names of the tensors we are interested in processing, as outputted from inference.

Specified by:
    expectedOutputs in class DecodeInstanceSegmentation<ai.onnxruntime.OnnxTensor>
Returns:
    the list of names, as above.
decode

public List<LabelledWithConfidence<MultiScaleObject>> decode(List<ai.onnxruntime.OnnxTensor> inferenceOutput, ImageInferenceContext context) throws OperationFailedException

Description copied from class: DecodeInstanceSegmentation
Decodes the output tensors from inference into ObjectMasks with confidence and labels. The created ObjectMasks should match unscaledDimensions in size.

Specified by:
    decode in class DecodeInstanceSegmentation<ai.onnxruntime.OnnxTensor>
Parameters:
    inferenceOutput - the tensors that are the result of the inference.
    context - the context in which the inference is occurring.
Returns:
    a newly created list of objects, with associated confidence and labels, that matches unscaledDimensions in size.
Throws:
    OperationFailedException - if it cannot be decoded successfully.
getMinConfidence

public float getMinConfidence()

Only proposals outputted from the model with a score greater than or equal to this threshold are considered.
setMinConfidence

public void setMinConfidence(float minConfidence)

Only proposals outputted from the model with a score greater than or equal to this threshold are considered.
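The minConfidence threshold acts as a simple filter over proposal scores. A minimal self-contained sketch of that filtering rule (the helper class below is hypothetical, not the library's implementation):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper (not the library's implementation): keeps only proposals
// whose score is greater than or equal to minConfidence, mirroring the
// threshold behaviour described for setMinConfidence.
public class ProposalFilter {

    static List<Float> filterByConfidence(List<Float> scores, float minConfidence) {
        List<Float> kept = new ArrayList<>();
        for (float score : scores) {
            if (score >= minConfidence) { // the boundary value is included
                kept.add(score);
            }
        }
        return kept;
    }
}
```

A proposal scoring exactly minConfidence is kept, since the comparison is greater than or equal to, not strictly greater than.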
getMinMaskValue

public float getMinMaskValue()

Only voxels with a value greater than or equal to this threshold are considered as part of the mask.
setMinMaskValue

public void setMinMaskValue(float minMaskValue)

Only voxels with a value greater than or equal to this threshold are considered as part of the mask.
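Similarly, minMaskValue binarizes the soft mask values produced by the model into mask membership. A minimal sketch of that thresholding rule (the helper class below is hypothetical, not the library's implementation):

```java
// Hypothetical helper (not the library's implementation): marks a voxel as part
// of the mask when its value is greater than or equal to minMaskValue,
// mirroring the threshold behaviour described for setMinMaskValue.
public class MaskBinarizer {

    static boolean[] binarize(float[] voxels, float minMaskValue) {
        boolean[] mask = new boolean[voxels.length];
        for (int i = 0; i < voxels.length; i++) {
            mask[i] = voxels[i] >= minMaskValue;
        }
        return mask;
    }
}
```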
getInterpolator

public Interpolator getInterpolator()

The interpolator to use for scaling images.

setInterpolator

public void setInterpolator(Interpolator interpolator)

The interpolator to use for scaling images.