# Sequential

## Description

A container that connects modules sequentially: the output of each module is passed as the input to the next one.

## Initializing

```python
def __init__(self, name=None):
```

### Parameters

| Parameter | Allowed types | Description | Default |
|-----------|---------------|-------------|---------|
| name | str | container name | None |
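A minimal initialization example using the name parameter described above (the "features" value is just an illustration):

```python
from PuzzleLib.Containers import Sequential

seq = Sequential(name="features")
```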

### Explanations

-

## Examples


### Simple sequence

The sequence performs three operations:

  1. Padding of the input tensor with two additional rows at the bottom and two additional columns on the right
  2. Maximum value pooling
  3. Flattening of the tensor

Necessary imports:

```python
import numpy as np
from PuzzleLib.Backend import gpuarray
from PuzzleLib.Containers import Sequential
from PuzzleLib.Modules import Pad2D, MaxPool2D, Flatten
```

!!! info
    gpuarray is required to place tensors on the GPU correctly

Initialization of a sequence and addition of the above operations to it using the append method:

```python
seq = Sequential()

# pad the bottom and the right of the input with two zero rows / columns;
# fillValue=0 matches the zero padding visible in the output below
seq.append(Pad2D(pad=(0, 2, 0, 2), fillValue=0, name="pad"))
seq.append(MaxPool2D(name="pool"))
seq.append(Flatten(name="flatten"))
```

Synthetic tensor:

```python
np.random.seed(123)
data = np.random.randint(0, 127, size=(1, 1, 4, 4)).astype(np.float32)
print(data)
```

```
[[[[126. 109. 126.  66.]
   [ 92.  98. 102.  17.]
   [ 83. 106. 123.  57.]
   [ 86.  97.  96. 113.]]]]
```
!!! important
    For the library to work correctly with four-axis tensors (for example, pictures with NCHW axes, where N is the index of the tensor in the batch, C is the number of channels (maps), H is the height, and W is the width), two conditions must be met:

    * the axis order of the tensor must be (N, C, H, W)
    * the tensor data type must be float32
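For instance, a single image loaded in the common HWC uint8 layout can be brought into the required form as follows (a minimal sketch; the img array here is hypothetical and not part of the original example):

```python
import numpy as np

# hypothetical 32x32 RGB image in HWC order, as returned by a typical loader
img = np.zeros((32, 32, 3), dtype=np.uint8)

# HWC -> CHW, prepend the batch axis, cast to float32
batch = img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)
print(batch.shape, batch.dtype)  # (1, 3, 32, 32) float32
```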

Placement of the initial tensor in the GPU and running it through the sequence:
```python
seq(gpuarray.to_gpu(data))
```

!!! tip
    Any element of the sequence can be accessed either by its name or by its index
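For example (assuming index access returns the same module object as name access, with index 0 being the first appended module):

```python
# the first appended module can be reached by name or by position
print(seq["pad"] is seq[0])  # True
```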

Result:

```python
# 'pad' layer results
print(seq["pad"].data)
```

```
[[[[126. 109. 126.  66.   0.   0.]
   [ 92.  98. 102.  17.   0.   0.]
   [ 83. 106. 123.  57.   0.   0.]
   [ 86.  97.  96. 113.   0.   0.]
   [  0.   0.   0.   0.   0.   0.]
   [  0.   0.   0.   0.   0.   0.]]]]
```

```python
# 'pool' layer results
print(seq["pool"].data)
```

```
[[[[126. 126.   0.]
   [106. 123.   0.]
   [  0.   0.   0.]]]]
```

```python
# 'flatten' layer results
print(seq["flatten"].data)
```

```
[[126. 126.   0. 106. 123.   0.   0.   0.   0.]]
```
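As a cross-check, the same pipeline can be reproduced with plain NumPy (a verification sketch only; it assumes MaxPool2D uses the common 2x2 window with stride 2):

```python
import numpy as np

padded = np.pad(data, ((0, 0), (0, 0), (0, 2), (0, 2)))      # bottom/right zero padding
pooled = padded.reshape(1, 1, 3, 2, 3, 2).max(axis=(3, 5))   # 2x2 max pooling, stride 2
print(pooled.reshape(1, -1))  # [[126. 126.   0. 106. 123.   0.   0.   0.   0.]]
```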

### Cyclic expansion


Suppose you need to repeat the same block of modules in the network several times, where each block consists, for example, of:

  1. Multiplication of the tensor by a constant
  2. ReLU activation

Necessary imports:

```python
from PuzzleLib.Modules import MulAddConst, Activation
```

Function for creating component blocks:

```python
def block(numb):
    blk = Sequential()

    # MulAddConst computes a * x + b elementwise
    blk.append(MulAddConst(a=2, b=0, name="mul_const_{}".format(numb)))
    blk.append(Activation(activation="relu", name="act_{}".format(numb)))

    return blk
```

Extension of a freshly created base sequence with the blocks:

```python
# re-create the sequence so that the first block receives the raw input
seq = Sequential()

for i in range(3):
    seq.extend(block(i))
```

Synthetic tensor:

```python
np.random.seed(123)
data = np.random.randint(-5, 5, size=(1, 1, 4, 4)).astype(np.float32)
print(data)
```

```
[[[[-3. -3.  1. -4.]
   [-2.  4.  1. -4.]
   [-5. -4.  4. -5.]
   [-5.  4. -2. -1.]]]]
```

Placement of the initial tensor in the GPU and running it through the sequence:

```python
seq(gpuarray.to_gpu(data))
```

Result:

```python
# 'mul_const_0' layer results
print(seq["mul_const_0"].data)
```

```
[[[[ -6.  -6.   2.  -8.]
   [ -4.   8.   2.  -8.]
   [-10.  -8.   8. -10.]
   [-10.   8.  -4.  -2.]]]]
```

```python
# 'act_0' layer results
print(seq["act_0"].data)
```

```
[[[[0. 0. 2. 0.]
   [0. 8. 2. 0.]
   [0. 0. 8. 0.]
   [0. 8. 0. 0.]]]]
```
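Since each block doubles the tensor and then applies ReLU, and ReLU is idempotent on non-negative values, the final output after three blocks reduces to 8 * relu(data). A minimal cross-check (assuming the backend array exposes a pycuda-style get() for copying results back to the host):

```python
out = seq(gpuarray.to_gpu(data))

# three (x * 2 -> relu) blocks collapse to 8 * max(data, 0)
expected = 8 * np.maximum(data, 0)
print(np.allclose(out.get(), expected))  # True
```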