Package org.anchoranalysis.inference.concurrency
Specifying how many CPUs and GPUs can be allocated for some purpose.
Class Summary

ConcurrencyPlan
    How many allocated CPUs and GPUs can be used concurrently for inference.
ConcurrentModel<T extends InferenceModel>
    An instance of a model that can be used concurrently for inference.
ConcurrentModelException
    Indicates that an error occurred when performing inference from a model concurrently.
ConcurrentModelPool<T extends InferenceModel>
    Keeps concurrent copies of a model to be used by different threads.
CreateModelFailedException
    Thrown when creating a model to be used for inference fails.
CreateModelForPool<T extends InferenceModel>
    Creates a model to use in the pool.
WithPriority<T>
    Wraps an element of type T to ensure priority is given when the flag gpu==true.
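The core idea of the package, a pool that keeps concurrent copies of a model so each thread borrows one for inference, can be sketched generically. The class and method names below (SimpleModelPool, executeOrWait) are hypothetical illustrations of the pattern, not the actual anchoranalysis API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Function;
import java.util.function.Supplier;

/**
 * A minimal sketch of a model pool (hypothetical, not the real API):
 * keeps a fixed number of model copies so that different threads can
 * perform inference concurrently without sharing a single instance.
 */
class SimpleModelPool<T> {
    private final BlockingQueue<T> models;

    /** Creates {@code copies} independent model instances up front. */
    SimpleModelPool(int copies, Supplier<T> createModel) {
        models = new ArrayBlockingQueue<>(copies);
        for (int i = 0; i < copies; i++) {
            models.add(createModel.get());
        }
    }

    /** Borrows a model, runs inference, and always returns the model to the pool. */
    <R> R executeOrWait(Function<T, R> inference) throws InterruptedException {
        T model = models.take(); // blocks until a copy is free
        try {
            return inference.apply(model);
        } finally {
            models.put(model); // make the copy available to other threads again
        }
    }
}
```

With this structure, at most {@code copies} inferences run at the same time; additional threads block in {@code take()} until a copy is returned, which mirrors the role ConcurrentModelPool plays in the package.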