MaxPool1D¶
Description¶
This module implements one-dimensional max pooling. For a detailed theoretical description, please see Pool1D.
For an input tensor of shape $(N, C, L_{in})$ and an output tensor of shape $(N, C, L_{out})$, the operation is performed as follows (we consider the $i$-th element of the batch and the $j$-th map of the output tensor):

$$out(N_i, C_j, l) = \max_{m = 0, \ldots, k - 1} input(N_i, C_j, stride \cdot l + m)$$

where

- $N$ - size of the batch;
- $C$ - number of maps in the tensor;
- $L$ - sequence size;
- $stride$ - pooling stride;
- $k$ - pooling kernel size.
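The output length is not stated explicitly above; assuming the usual floor convention for pooling layers, it is

$$L_{out} = \left\lfloor \frac{L_{in} + 2 \cdot pad - k}{stride} \right\rfloor + 1$$

For instance, with the defaults size=2, stride=2, pad=0 and the input of length 10 used in the examples below, $L_{out} = \lfloor (10 - 2)/2 \rfloor + 1 = 5$.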
Initializing¶
def __init__(self, size=2, stride=2, pad=0, useMask=False, name=None):
Parameters
| Parameter | Allowed types | Description | Default |
|---|---|---|---|
| size | int | Kernel size | 2 |
| stride | int | Pooling stride | 2 |
| pad | int | Input maps padding | 0 |
| useMask | bool | Whether to keep the tensor of maximum value indices | False |
| name | str | Layer name | None |
Explanations

pad

- only a single padding value can be specified, applied to all sides of the maps. Asymmetric padding (adding extra elements on only one side of the tensor) is not supported by this module; please use Pad1D instead (a NumPy sketch of the idea follows below).
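As an illustration of what such one-sided padding would look like, here is a minimal sketch in plain NumPy (this is not the module's API; inside PuzzleLib, Pad1D is the idiomatic way):

```python
import numpy as np

x = np.arange(10, dtype=np.float32).reshape(1, 1, 10)

# pad one zero on the right side of the sequence axis only
x_right = np.pad(x, pad_width=((0, 0), (0, 0), (0, 1)), mode="constant")
print(x_right)  # [[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]]
```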
Examples¶
Basic pooling example¶
Necessary imports.
import numpy as np
from PuzzleLib.Backend import gpuarray
from PuzzleLib.Modules import MaxPool1D
Info

gpuarray is required to properly place the tensor on the GPU.
Let us set the tensor parameters to clearly demonstrate the operation of the module.
batchsize, maps, insize = 1, 1, 10
data = gpuarray.to_gpu(np.arange(batchsize * maps * insize).reshape((batchsize, maps, insize)).astype(np.float32))
print(data)
[[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]]]
Let us initialize the module with default parameters (size=2, stride=2, pad=0, useMask=False):
pool = MaxPool1D()
print(pool(data))
[[[1. 3. 5. 7. 9.]]]
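To make the windowing explicit, here is a minimal NumPy sketch that mirrors the formula above (an illustration only, not the library's actual implementation, which runs on the GPU):

```python
import numpy as np

def maxpool1d_ref(x, size=2, stride=2, pad=0):
    # zero-pad both sides of the sequence axis, as the module does
    x = np.pad(x, ((0, 0), (0, 0), (pad, pad)), mode="constant")
    outsize = (x.shape[2] - size) // stride + 1
    # take the maximum over each window of length `size`
    return np.stack(
        [x[:, :, i * stride:i * stride + size].max(axis=2) for i in range(outsize)],
        axis=2
    )

x = np.arange(10, dtype=np.float32).reshape(1, 1, 10)
print(maxpool1d_ref(x))  # [[[1. 3. 5. 7. 9.]]]
```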
Size parameter¶
Let us leave all parameters the same except for size:
pool = MaxPool1D(size=4)
print(pool(data))
[[[3. 5. 7. 9.]]]
Stride parameter¶
Let us set the stride value to 1:
pool = MaxPool1D(stride=1)
print(pool(data))
[[[1. 2. 3. 4. 5. 6. 7. 8. 9.]]]
Let us change both stride and size:
pool = MaxPool1D(size=4, stride=4)
print(pool(data))
[[[3. 7.]]]
As can be seen, the last two elements of the initial tensor were not included in the calculation: the remaining subtensor was smaller than the pooling window.
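Indeed, by the output-length formula above, $L_{out} = \lfloor (10 + 0 - 4)/4 \rfloor + 1 = 2$, so only two full windows fit.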
Pad parameter¶
To enable the last elements from the previous example, let us initialize the padding:
pool = MaxPool1D(size=4, stride=4, pad=1)
print(pool(data))
[[[2. 6. 9.]]]
Please note that padding in the module is always symmetric: one new element was added on each side of the sequence, i.e. after padding the tensor would look as follows:
[[[0. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]]
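This padded view is easy to reproduce on the host (assuming data.get() copies the gpuarray back to a NumPy array, as in PyCUDA):

```python
# symmetric padding: one zero on each side of the sequence axis
padded = np.pad(data.get(), ((0, 0), (0, 0), (1, 1)), mode="constant")
print(padded)  # [[[0. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]]
```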
useMask parameter¶
The useMask parameter controls whether the tensor of maximum-value indices is kept. To demonstrate how it operates, let us reinitialize the data tensor:
np.random.seed(123)
data = gpuarray.to_gpu(np.random.randint(low=0, high=9, size=(batchsize, maps, insize)).astype(np.float32))
print(data)
[[[2. 2. 6. 1. 3. 6. 1. 0. 1. 0.]]]
pool = MaxPool1D(useMask=True)
print(pool(data))
[[[2. 6. 6. 1. 1.]]]
print(pool.mask)
[[[[0 2 5 6 8]]]]
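The mask holds, for every output element, the position of the selected maximum along the input sequence axis. A small host-side sanity check (again assuming .get(), and using plain NumPy rather than the library API):

```python
host = data.get()                        # (1, 1, 10) tensor back on the host
idx = pool.mask.get().reshape(1, 1, 5)   # drop the extra unit axis
print(np.take_along_axis(host, idx, axis=2))  # [[[2. 6. 6. 1. 1.]]]
```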
For each element of the batch and each map, the indices are returned separately:
maps = 2
np.random.seed(123)
data = gpuarray.to_gpu(np.random.randint(low=0, high=9, size=(batchsize, maps, insize)).astype(np.float32))
print(data)
[[[2. 2. 6. 1. 3. 6. 1. 0. 1. 0.]
[0. 3. 4. 0. 0. 4. 1. 7. 3. 2.]]]
pool = MaxPool1D(useMask=True)
print(pool(data))
[[[2. 6. 6. 1. 1.]
  [3. 4. 4. 7. 3.]]]
print(pool.mask)
[[[[0 2 5 6 8]]
[[1 2 5 7 8]]]]
print(pool.mask.shape)
(1, 2, 1, 5)
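Note that the mask carries one extra unit axis, giving the shape (batchsize, maps, 1, outsize); the indices in each row are local to the sequence axis of the corresponding map.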