Package | Description |
---|---|
`org.anchoranalysis.inference.concurrency` | Specifying how many CPUs and GPUs can be allocated for some purpose. |
Modifier and Type | Method and Description |
---|---|
`Optional<ConcurrentModel<T>>` | `CreateModelForPool.create(boolean useGPU)` Creates a model. |
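To illustrate the factory shape above, here is a minimal sketch. The `CreateModelForPool` and `ConcurrentModel` definitions below are simplified stand-ins written for this example, not the real classes from `org.anchoranalysis.inference.concurrency`; only the `create(boolean useGPU)` signature returning `Optional<ConcurrentModel<T>>` is taken from the table.

```java
import java.util.Optional;

// Simplified stand-in wrapping a single model instance.
class ConcurrentModel<T> {
    private final T model;
    ConcurrentModel(T model) { this.model = model; }
    T getModel() { return model; }
}

// Simplified stand-in for the factory: creation may fail
// (e.g. no GPU is available), hence the Optional return.
@FunctionalInterface
interface CreateModelForPool<T> {
    Optional<ConcurrentModel<T>> create(boolean useGPU);
}

public class CreateSketch {
    public static void main(String[] args) {
        // Hypothetical factory: succeeds only for CPU models.
        CreateModelForPool<String> factory = useGPU ->
                useGPU ? Optional.empty()
                       : Optional.of(new ConcurrentModel<>("cpu-model"));

        System.out.println(factory.create(true).isPresent());   // false
        System.out.println(factory.create(false)
                .map(ConcurrentModel::getModel)
                .orElse("none"));                               // cpu-model
    }
}
```

Returning `Optional.empty()` rather than throwing lets a pool skip an unavailable backend (such as a missing GPU) and fall back to other models.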
Modifier and Type | Method and Description |
---|---|
`<S> S` | `ConcurrentModelPool.executeOrWait(CheckedFunction<ConcurrentModel<T>,S,ConcurrentModelException> functionToExecute)` Executes the function on the next available model, waiting until one becomes available. |
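The blocking behavior of `executeOrWait` can be sketched with a `BlockingQueue` of models: take a model (blocking until one is free), run the function, and return the model to the queue afterwards. All classes below are simplified stand-ins written for this example under that assumption; only the `executeOrWait` signature comes from the table above, and the real pool in `org.anchoranalysis.inference.concurrency` may differ in detail.

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Simplified stand-ins for the anchoranalysis types used in the signature.
@FunctionalInterface
interface CheckedFunction<T, S, E extends Exception> {
    S apply(T input) throws E;
}

class ConcurrentModelException extends Exception {
    ConcurrentModelException(Throwable cause) { super(cause); }
}

class ConcurrentModel<T> {
    private final T model;
    ConcurrentModel(T model) { this.model = model; }
    T getModel() { return model; }
}

// Sketch of a pool: models circulate through a blocking queue so that
// at most one caller uses a given model at a time.
class ConcurrentModelPool<T> {
    private final BlockingQueue<ConcurrentModel<T>> queue;

    ConcurrentModelPool(List<T> models) {
        queue = new ArrayBlockingQueue<>(models.size());
        for (T m : models) {
            queue.add(new ConcurrentModel<>(m));
        }
    }

    <S> S executeOrWait(
            CheckedFunction<ConcurrentModel<T>, S, ConcurrentModelException> functionToExecute)
            throws ConcurrentModelException {
        ConcurrentModel<T> model;
        try {
            model = queue.take(); // blocks until a model is free
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new ConcurrentModelException(e);
        }
        try {
            return functionToExecute.apply(model);
        } finally {
            queue.add(model); // always return the model to the pool
        }
    }
}

public class PoolSketch {
    public static void main(String[] args) throws ConcurrentModelException {
        ConcurrentModelPool<String> pool =
                new ConcurrentModelPool<>(List.of("model-A"));
        Integer length = pool.executeOrWait(m -> m.getModel().length());
        System.out.println(length); // 7
    }
}
```

The `finally` block is the key design point: the model is returned to the queue even when the function throws, so a failed inference does not permanently shrink the pool.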
Copyright © 2010–2023 Owen Feehan, ETH Zurich, University of Zurich, Hoffmann-La Roche. All rights reserved.