Parent class: Pool1D

Derived classes: -

This module implements the operation of one-dimensional average pooling. For a detailed theoretical description, see Pool1D.

For an input tensor of shape (N, C, L_{in}) and an output tensor of shape (N, C, L_{out}), the operation is performed as follows (considering the i-th element of the batch and the j-th map of the output tensor):

out(N_i, C_j, l) = \frac{1}{k}\sum_{m=0}^{k-1} input(N_i, C_j, stride \times l + m)

where:

N - batch size;
C - number of maps in the tensor;
L - sequence length;
stride - pooling stride;
k - size of the pooling window.
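The formula above can be sketched in plain NumPy (the function name `avg_pool1d` is hypothetical and for illustration only; it is not PuzzleLib code and omits padding):

```python
import numpy as np

def avg_pool1d(x, size=2, stride=2):
    # x has shape (N, C, L_in); no padding in this sketch
    n, c, l_in = x.shape
    l_out = (l_in - size) // stride + 1
    out = np.empty((n, c, l_out), dtype=x.dtype)
    for l in range(l_out):
        # average the window input[..., stride*l : stride*l + size]
        out[..., l] = x[..., stride * l : stride * l + size].mean(axis=-1)
    return out

x = np.arange(10, dtype=np.float32).reshape(1, 1, 10)
print(avg_pool1d(x))  # [[[0.5 2.5 4.5 6.5 8.5]]]
```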


def __init__(self, size=2, stride=2, pad=0, includePad=True, name=None):


Parameter   Possible types   Description                                                          Default
size        int              Size of the pooling window (kernel)                                  2
stride      int              Pooling stride                                                       2
pad         int              Padding of the input maps                                            0
includePad  bool             Flag for including the padding values when computing the average     True
name        str              Layer name                                                           None


pad - a single padding value is applied to all sides of the maps. This module does not support asymmetric padding (adding elements on only one side of the tensor); use Pad1D for that;

includePad - if the pad parameter is nonzero, new elements are added along the edges of the original tensor; when includePad is set, their values are also included when computing the average.
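The effect of includePad can be sketched in NumPy by tracking which positions of the padded tensor are real data (the function `avg_pool1d_pad` below is a hypothetical illustration, not PuzzleLib code):

```python
import numpy as np

def avg_pool1d_pad(x, size, stride, pad, includePad=True):
    # Zero-pad symmetrically along the last axis; mask marks real (non-pad) elements
    padded = np.pad(x, ((0, 0), (0, 0), (pad, pad)))
    mask = np.pad(np.ones_like(x), ((0, 0), (0, 0), (pad, pad)))
    l_out = (padded.shape[-1] - size) // stride + 1
    out = np.empty(x.shape[:2] + (l_out,), dtype=x.dtype)
    for l in range(l_out):
        win = padded[..., stride * l : stride * l + size]
        cnt = mask[..., stride * l : stride * l + size].sum(axis=-1)
        # includePad: divide by the full window size; otherwise by real elements only
        out[..., l] = win.sum(axis=-1) / (size if includePad else cnt)
    return out

x = np.arange(10, dtype=np.float32).reshape(1, 1, 10)
print(avg_pool1d_pad(x, 4, 4, 1, includePad=True))   # [[[0.75 4.5  6.  ]]]
print(avg_pool1d_pad(x, 4, 4, 1, includePad=False))  # [[[1.  4.5 8. ]]]
```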


Basic pooling example

Necessary imports.

>>> import numpy as np
>>> from PuzzleLib.Backend import gpuarray
>>> from PuzzleLib.Modules import AvgPool1D


gpuarray is required to place the tensor on the GPU properly.

Let us set the tensor parameters so that we can clearly demonstrate the operation of the module.

>>> batchsize, maps, insize = 1, 1, 10
>>> data = gpuarray.to_gpu(np.arange(batchsize * maps * insize).reshape((batchsize, maps, insize)).astype(np.float32))
>>> data

[[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]]]

Let us initialize the module with default parameters (size=2, stride=2, pad=0, includePad=True):

>>> pool = AvgPool1D()
>>> pool(data)
[[[0.5 2.5 4.5 6.5 8.5]]]

size parameter

Let us leave all parameters the same except size:

>>> pool = AvgPool1D(size=4)
>>> pool(data)
[[[1.5 3.5 5.5 7.5]]]

stride parameter

Set the stride value to 1:

>>> pool = AvgPool1D(stride=1)
>>> pool(data)
[[[0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5]]]

Now let us change both stride and size:

>>> pool = AvgPool1D(size=4, stride=4)
>>> pool(data)

[[[1.5 5.5]]]

As can be seen, the last two elements of the initial tensor were not included in the calculations, since the remaining subtensor was smaller than the pooling window.
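The number of output elements follows the standard pooling formula L_{out} = floor((L_{in} + 2 \times pad - size) / stride) + 1, which can be checked for the examples above:

```python
# Output length of 1D pooling: floor((L_in + 2*pad - size) / stride) + 1
def out_len(l_in, size, stride, pad=0):
    return (l_in + 2 * pad - size) // stride + 1

print(out_len(10, 4, 4))         # 2 -> the last two input elements are dropped
print(out_len(10, 2, 2))         # 5 -> the default configuration covers all inputs
print(out_len(10, 4, 4, pad=1))  # 3 -> padding recovers the tail elements
```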

pad parameter

To include the last elements from the previous example, let us initialize the padding:

>>> pool = AvgPool1D(size=4, stride=4, pad=1)
>>> pool(data)
[[[0.75 4.5  6.  ]]]
Please note that the padding in the module is always symmetric: one new element was added on each side of the original tensor, so after padding the tensor looks as follows:
[[[0. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]]
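These output values can be verified by hand against the padded tensor, using plain NumPy (independent of PuzzleLib):

```python
import numpy as np

# Reproduce the padded tensor: [0. 0. 1. ... 9. 0.], length 12
padded = np.pad(np.arange(10, dtype=np.float32), 1)

# Windows of size 4 taken with stride 4, averaged with the pad elements included
windows = [padded[i : i + 4] for i in range(0, 9, 4)]
print([float(w.mean()) for w in windows])  # [0.75, 4.5, 6.0]
```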

includePad parameter

In the previous examples, the includePad parameter kept its default value, i.e. the padding elements were included in the calculations. Now let us see what happens if this flag is unset for the last example:

>>> pool = AvgPool1D(size=4, stride=4, pad=1, includePad=False)
>>> pool(data)
[[[1.  4.5 8. ]]]
As you can see, the pooling window accounted for the additional elements, but their values were not included when computing the average of the subtensor.