Conv1D

Description

Info

Parent class: ConvND

Derived classes: -

This module performs a one-dimensional convolution operation. For more detailed theoretical information about the convolution operation, see ConvND.

For an input tensor of shape (N, C_{in}, L_{in}), an output tensor of shape (N, C_{out}, L_{out}) and a convolution kernel of size size, the operation is performed as follows (we consider the i-th element of the batch and the j-th map of the output tensor):

out_i(C_{out_j}) = bias(C_{out_j}) + \sum_{k=0}^{C_{in} - 1}weight(C_{out_j}, k) \star input_i(k)

where

N - batch size;
C - number of maps in the tensor;
L - length of the sequence;
bias - bias tensor of the convolution layer, of shape (1, C_{out}, 1, 1);
weight - weights tensor of the convolution layer, of shape (C_{out}, C_{in}, 1, size);
\star - cross correlation operator.
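
To make the formula concrete, here is a minimal NumPy sketch of the same computation (a reference computation for illustration only, not the library implementation; the parameter names follow the layer arguments described below):

import numpy as np

def conv1dReference(inp, weight, bias=None, stride=1, pad=0, dilation=1):
    # inp: (N, C_in, L_in), weight: (C_out, C_in, 1, size), bias: (1, C_out, 1, 1) or None
    N, Cin, Lin = inp.shape
    Cout, _, _, size = weight.shape

    padded = np.pad(inp, ((0, 0), (0, 0), (pad, pad)), mode="constant")
    Lout = (Lin + 2 * pad - dilation * (size - 1) - 1) // stride + 1

    out = np.zeros((N, Cout, Lout), dtype=inp.dtype)
    for i in range(N):
        for j in range(Cout):
            for t in range(Lout):
                start = t * stride
                # window of shape (C_in, size), taken with the dilation step
                window = padded[i, :, start:start + dilation * (size - 1) + 1:dilation]
                out[i, j, t] = np.sum(window * weight[j, :, 0, :])

            if bias is not None:
                out[i, j] += bias[0, j, 0, 0]

    return out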

Initializing

def __init__(self, inmaps, outmaps, size, stride=1, pad=0, dilation=1, wscale=1.0, useBias=True, name=None,
                 initscheme=None, empty=False, groups=1):

Parameters

| Parameter | Allowed types | Description | Default |
|-----------|---------------|-------------|---------|
| inmaps | int | Number of maps in the input tensor | - |
| outmaps | int | Number of maps in the output tensor | - |
| size | int | Convolution kernel size | - |
| stride | int | Convolution stride | 1 |
| pad | int | Map padding | 0 |
| dilation | int | Convolution window dilation | 1 |
| wscale | float | Variance of the random layer weights | 1.0 |
| useBias | bool | Whether to use the bias vector | True |
| initscheme | Union[tuple, str] | Layer weights initialization scheme (see createTensorWithScheme) | None -> ("xavier_uniform", "in") |
| name | str | Layer name | None |
| empty | bool | Whether to skip initialization of the weights and biases | False |
| groups | int | Number of groups the maps are split into for separate processing | 1 |

Explanations

Info

For the above input (N, C_{in}, L_{in}) and output (N, C_{out}, L_{out}) tensors the shapes are related as follows: \begin{equation} L_{out} = \left\lfloor\frac{L_{in} + 2 \cdot pad - dilation \cdot (size - 1) - 1}{stride}\right\rfloor + 1 \end{equation}
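
This relation can be checked with a small helper (a sketch for illustration, not part of the library API):

def convOutLength(length, size, stride=1, pad=0, dilation=1):
    # output length of a 1D convolution along the sequence axis
    return (length + 2 * pad - dilation * (size - 1) - 1) // stride + 1

print(convOutLength(10, size=2))              # 9, the basic example below
print(convOutLength(10, size=3, pad=1))       # 10, the pad example
print(convOutLength(10, size=2, stride=2))    # 5, the stride example
print(convOutLength(10, size=2, dilation=2))  # 8, the dilation example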


pad - only a single padding value can be specified; it is applied to both sides of the maps. Asymmetric padding (adding elements on only one side of the tensor) is not supported by this module; use Pad1D instead;


groups - number of groups into which the set of maps is split in order to be convoluted separately.

By default, each output map interacts with all input maps (groups=1).

If there are two groups, the operation becomes equivalent to two side-by-side convolution layers, each of which sees half of the input maps and produces half of the output maps; the resulting output maps are then concatenated.

If the number of groups matches the number of input maps, each input map is convolved with its own set of \frac{outmaps}{inmaps} kernels.

The values of the inmaps and outmaps parameters must be divisible by the value of the groups parameter.
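
To illustrate the splitting (a minimal NumPy sketch under the assumptions stride=1, pad=0, dilation=1; not the library implementation), a grouped convolution is equivalent to convolving each slice of input maps with its own slice of kernels and concatenating the results along the map axis:

import numpy as np

def groupedConv1d(inp, weight, groups):
    # inp: (N, C_in, L), weight: (C_out, C_in // groups, 1, size)
    N, Cin, L = inp.shape
    Cout, _, _, size = weight.shape
    Lout = L - size + 1

    outs = []
    for g in range(groups):
        x = inp[:, g * (Cin // groups):(g + 1) * (Cin // groups)]      # input maps of group g
        w = weight[g * (Cout // groups):(g + 1) * (Cout // groups)]    # kernels of group g

        out = np.zeros((N, Cout // groups, Lout), dtype=inp.dtype)
        for t in range(Lout):
            out[..., t] = np.einsum("ncs,ocs->no", x[..., t:t + size], w[:, :, 0, :])

        outs.append(out)

    return np.concatenate(outs, axis=1)  # shape (N, C_out, L - size + 1)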

Examples


Basic convolution example


Necessary imports.

import numpy as np
from PuzzleLib.Backend import gpuarray
from PuzzleLib.Modules import Conv1D
from PuzzleLib.Variable import Variable

Info

gpuarray is required to place the tensor on the GPU properly.

Let us set the tensor parameters so that we can clearly demonstrate the operation of the module: we will set the number of input and output maps equal to 1 and 2, respectively.

batchsize, inmaps, l = 1, 1, 10
outmaps = 2

data = gpuarray.to_gpu(np.arange(batchsize * inmaps * l).reshape((batchsize, inmaps, l)).astype(np.float32))
print(data)
[[[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.]]]

Let us set the filter size to 2 and keep the remaining convolution parameters at their defaults (stride=1, pad=0, dilation=1, groups=1). The bias will be explicitly disabled (although by default the bias tensor is zero and does not affect the result):

size = 2
conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=size, useBias=False)

Here a small trick is used to set the weights explicitly. Since there are two output maps and the weights tensor has the shape (C_{out}, C_{in}, 1, size):

def customW(size):
    # first output map: a kernel of ones (a sliding sum over the window)
    w1 = np.ones(shape=(1, 1, 1, size))
    # second output map: a kernel with taps 0, -1, ..., -(size - 1)
    w2 = np.arange(size).reshape((1, 1, 1, size)) * -1
    w = np.vstack([w1, w2]).astype(np.float32)

    return w

w = customW(size)
print(w)
[[[[ 1.  1.]]]

 [[[ 0. -1.]]]]

Let us set the weights of the module:

conv.setVar("W", Variable(gpuarray.to_gpu(w)))

Important

In all the following examples the module weights are assumed to be set with the customW function; for the sake of brevity, this step is omitted from the code samples.

Let us perform the convolution operation on the synthetic tensor. Since no padding was specified, the maps of the output tensor are smaller than those of the input:

print(conv(data))
[[[ 1.  3.  5.  7.  9. 11. 13. 15. 17.]
  [-1. -2. -3. -4. -5. -6. -7. -8. -9.]]]

Size parameter


Let us use the same settings as in the previous example, but set a different filter size:

conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=3, useBias=False)
print(conv(data))
[[[  3.   6.   9.  12.  15.  18.  21.  24.]
  [ -5.  -8. -11. -14. -17. -20. -23. -26.]]]

Pad parameter


We will use the parameters from the previous example, but let us suppose that we want to preserve the shape of the tensor. Given that the filter size is 3 and the convolution stride is 1, preserving the sequence length of 10 requires a padding of 1, i.e. the padded input tensor will look as follows:

[[[0. 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 0.]]]
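
By the shape relation above: L_{out} = \frac{10 + 2 \cdot 1 - 1 \cdot (3 - 1) - 1}{1} + 1 = 10, so the original length is preserved.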

Let us reinitialize the convolution:

conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=3, pad=1, useBias=False)
print(conv(data))
[[[  1.   3.   6.   9.  12.  15.  18.  21.  24.  17.]
  [ -2.  -5.  -8. -11. -14. -17. -20. -23. -26.  -9.]]]

Stride parameter


Let us return to the default settings and the filter of size 2, but change the convolution stride:

conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=2, stride=2, useBias=False)
print(conv(data))
[[[ 1.  5.  9. 13. 17.]
  [-1. -3. -5. -7. -9.]]]

To preserve the shape of the original tensor, we have to set the padding to 2:

conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=2, stride=2, pad=2, useBias=False)
print(conv(data))
[[[ 0.  0.  0.  3.  7. 11. 15.  9.  0.  0.]
  [ 0.  0.  0. -2. -4. -6. -8.  0.  0.  0.]]]

Dilation parameter


Dilation expands the convolution filter by inserting zero elements between the original filter values. For more details on dilation, see the theory in ConvND.
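
With the parameters below (size=2, dilation=2), the effective kernel extent is dilation \cdot (size - 1) + 1 = 3: each output element combines x[i] and x[i + 2], so the output length becomes 10 - 2 = 8.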

conv = Conv1D(inmaps=inmaps, outmaps=outmaps, size=2, stride=1, pad=0, dilation=2, useBias=False)
print(conv(data))
[[[ 2.  4.  6.  8. 10. 12. 14. 16.]
  [-2. -3. -4. -5. -6. -7. -8. -9.]]]

Groups parameter


Printing the tensors in this example would produce very lengthy output, so we omit it.

In this example the weights are not reinitialized with the customW function.

batchsize, inmaps, l = 1, 16, 10
outmaps = 32
groups = 1
conv = Conv1D(inmaps, outmaps, size=2, initscheme="gaussian", groups=groups)
print(conv.W.shape)
(32, 16, 1, 2)

We can see that the result is an ordinary convolution. Let us change the number of groups:

groups = 4
conv = Conv1D(inmaps, outmaps, size=2, initscheme="gaussian", groups=groups)
print(conv.W.shape)
(32, 4, 1, 2)

It may not be obvious from the code above, but the convolution now proceeds as follows: the first \frac{inmaps}{groups}=4 input maps produce \frac{outmaps}{groups}=8 output maps, and the same applies to each of the remaining groups of four.

To get the Depthwise Separable Convolution block:

from PuzzleLib.Containers import Sequential
seq = Sequential()
seq.append(Conv1D(inmaps, inmaps, size=size, initscheme="gaussian", groups=inmaps, name="depthwise"))
seq.append(Conv1D(inmaps, outmaps, size=1, initscheme="gaussian", name="pointwise"))
print(seq["depthwise"].W.shape)

(16, 1, 1, 2)
print(seq["pointwise"].W.shape)
(32, 16, 1, 1)
data = gpuarray.to_gpu(np.random.randn(batchsize, inmaps, l).astype(np.float32))
seq(data)
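
For comparison: the depthwise and pointwise layers above hold 16 \cdot 1 \cdot 2 + 32 \cdot 16 \cdot 1 = 544 weight values, whereas a single Conv1D(inmaps, outmaps, size=2) would hold 32 \cdot 16 \cdot 2 = 1024; this reduction in parameters (and computation) is the point of the depthwise separable decomposition.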