
Calculator

Description

A handler designed to simplify model inference for the user by eliminating the need to manually prescribe a sequence of actions. It is essentially a wrapper around the operations on the data.

Info

In handlers, splitting is performed in one of the following ways, depending on where the data resides:

  • data resides on the disk: it is first split into macrobatches, blocks that fit entirely in GPU memory; each macrobatch is then split into smaller batches, which are fed directly to the model input;
  • data already resides in the GPU: it is split into batches, which are fed directly to the model input.
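The two-level splitting described above can be sketched in plain Python (the function names here are illustrative, not the library's actual API):

```python
def splitIntoBatches(data, batchsize):
    # Cut a sequence into consecutive batches of at most `batchsize` items
    return [data[i:i + batchsize] for i in range(0, len(data), batchsize)]


def splitIntoMacroBatches(data, macroBatchSize, batchsize):
    # Host-resident data: first cut into GPU-sized macrobatches,
    # then cut each macrobatch into model-sized batches
    return [
        splitIntoBatches(macro, batchsize)
        for macro in splitIntoBatches(data, macroBatchSize)
    ]


# 10 samples -> 3 macrobatches of up to 4 items, each split into batches of up to 2
batches = splitIntoMacroBatches(list(range(10)), macroBatchSize=4, batchsize=2)
```

For GPU-resident data only the inner `splitIntoBatches` step is needed, which is exactly the difference between the two cases above.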

Initializing

def __init__(self, mod, onBatchFinish=None, batchsize=128):

Parameters

| Parameter | Allowed types | Description | Default |
| --- | --- | --- | --- |
| mod | Module | Trainable neural network | - |
| onBatchFinish | callable | Function that will be called upon completion of processing a data batch | None |
| batchsize | int | Size of a data batch | 128 |

Explanations

-
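As an illustration of how the onBatchFinish hook might be used, below is a minimal sketch of a progress callback; the exact arguments the handler passes to the callback are an assumption here (check the parent Handler class for the real signature):

```python
# Record each finished batch index, e.g. to drive a progress bar
processed = []


def onBatchFinish(idx):
    # assumed signature: the handler reports the index of the finished batch
    processed.append(idx)


# Stand-in for the handler's internal batch loop
for idx in range(3):
    onBatchFinish(idx)
```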

Methods

All the basic methods of handlers can be found in the documentation for the parent class Handler.

calcFromHost

def calcFromHost(self, data, macroBatchSize=10000, onMacroBatchFinish=None):

Functionality

Wrapper around the handleFromHost() method of the parent class Handler; runs the data through the model (inference). Returns an array of network responses.

Parameters

| Parameter | Allowed types | Description | Default |
| --- | --- | --- | --- |
| data | tensor | Data tensor | - |
| macroBatchSize | int | Size of a macrobatch. The data will be split into macrobatches of size macroBatchSize | 10000 |
| onMacroBatchFinish | callable | Function that will be called after processing each macrobatch | None |

Explanations

-
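A quick sanity check on macroBatchSize: since the data is cut into macrobatches of that size, the number of macrobatches (and, presumably, the number of onMacroBatchFinish calls) for a given sample count can be computed as:

```python
import math


def numMacroBatches(numSamples, macroBatchSize=10000):
    # Number of macrobatches produced when splitting numSamples items
    # into blocks of at most macroBatchSize (last block may be smaller)
    return math.ceil(numSamples / macroBatchSize)


numMacroBatches(25000)  # 25000 samples -> 3 macrobatches with the default size
```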

calc

def calc(self, data, target):

Functionality

Wrapper around the handle() method of the parent class Handler; runs the data through the model (inference). Returns an array of network responses.

Parameters

| Parameter | Allowed types | Description | Default |
| --- | --- | --- | --- |
| data | GPUArray | Data tensor hosted in the GPU | - |

Explanations

-
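Conceptually, calc runs the model over GPU-resident data batch by batch and concatenates the responses. A minimal sketch of that behavior, with NumPy arrays standing in for GPUArray and a trivial callable standing in for the model:

```python
import numpy as np


def runBatched(model, data, batchsize=128):
    # Split data into batches, run the model on each, and
    # concatenate the per-batch responses into one array
    outputs = [model(data[i:i + batchsize]) for i in range(0, len(data), batchsize)]
    return np.concatenate(outputs)


identity = lambda x: x  # trivial stand-in model
result = runBatched(identity, np.arange(10, dtype=np.float32), batchsize=4)
```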

handleBatch

def handleBatch(self, batch, idx, resid, state):

Functionality

Root method of the inference handler. It passes the given batch through the model and writes the results to state.

Parameters

| Parameter | Allowed types | Description | Default |
| --- | --- | --- | --- |
| batch | GPUArray | Data tensor hosted in the GPU | - |
| idx | int | Index number of the data batch | - |
| state | dict | Dictionary containing model predictions | - |

Explanations

-
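The handleBatch contract can be sketched as follows; the state layout (keyed by batch index) and the unused resid argument are assumptions here, not the library's documented behavior:

```python
class TinyCalculator:
    # Minimal stand-in for the inference handler's root method
    def __init__(self, mod):
        self.mod = mod  # any callable acting as the model

    def handleBatch(self, batch, idx, resid, state):
        # Run inference on the batch and record the prediction under its index;
        # resid (residual handling) is omitted in this sketch
        state[idx] = self.mod(batch)


state = {}
calc = TinyCalculator(mod=lambda b: [x * 2 for x in b])
calc.handleBatch([1, 2, 3], idx=0, resid=None, state=state)
# state now maps batch index 0 to the model's responses for that batch
```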