Restoration of a nonlinear function by a neural network with activation

Neural network with nonlinearity: theory

As we showed in the previous tutorial, it is impossible to restore a nonlinear function with a neural network that contains no nonlinearity. To fix this, we will use the neuron activation discussed in the Restoration of a linear function by a single neuron tutorial:

Network diagram with one hidden layer and activation
Figure 1. Network diagram with one hidden layer and activation


x_1 .. x_m - network input;
z_1 .. z_n - hidden layer neurons;
h_1 .. h_n - activation of the respective neurons of the hidden layer via the f function;
y - network output;
w_{ij}^{(l)} - weights of the connections between neurons, where l is the layer number, i is the index of the neuron in layer l + 1, and j is the index of the neuron in layer l;
b_i^{(l)} - biases of the respective layers.

We will take the network from the previous tutorial, with three inputs and three neurons in the hidden layer. The only difference is the activation applied after the hidden layer:

y = \sum_{j=1}^{n} w_{1j}^{(2)}h_j + b_1^{(2)} = W^{(2)}h + b^{(2)} = w_{11}^{(2)}h_1 + w_{12}^{(2)}h_2 + w_{13}^{(2)}h_3 + b_{1}^{(2)}

where h_j = f(z_j).

The gradient calculation for a parameter changes accordingly:

\begin{equation} \frac{\partial{J}}{\partial{w_{11}^{(1)}}} = \frac{\partial{J}}{\partial{y}} \frac{\partial{y}}{\partial{h_1}} \frac{\partial{h_1}}{\partial{z_1}} \frac{\partial{z_1}}{\partial{w_{11}^{(1)}}} \end{equation}
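Each factor in this product follows directly from the formulas above: since y = \sum_{j} w_{1j}^{(2)}h_j + b_1^{(2)}, h_1 = f(z_1) and z_1 = \sum_{j} w_{1j}^{(1)}x_j + b_1^{(1)}, we get

\frac{\partial{y}}{\partial{h_1}} = w_{11}^{(2)}, \qquad \frac{\partial{h_1}}{\partial{z_1}} = f'(z_1), \qquad \frac{\partial{z_1}}{\partial{w_{11}^{(1)}}} = x_1

so the whole gradient is \frac{\partial{J}}{\partial{y}} \, w_{11}^{(2)} f'(z_1) x_1.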

We take ReLU, f(x) = max(0, x), as the activation function; its derivative is:

f'(x) = \begin{cases} 0 & \quad \text{if } x < 0 \\ 1 & \quad \text{if } x \geq 0 \end{cases}
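As a quick sanity check, ReLU and its derivative can be written in a few lines of NumPy; the boolean-mask trick below is the same one the Activation class in the next section relies on:

```python
import numpy as np

def relu(x):
    # f(x) = max(0, x), via a boolean mask
    return x * (x > 0)

def relu_grad(x):
    # f'(x) = 0 for x < 0, 1 for x >= 0
    return (x >= 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y = relu(x)       # zeros for the negative inputs, identity otherwise
g = relu_grad(x)  # 0 where the input was negative, 1 elsewhere
```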

Neural network with nonlinearity: code

Start of the script:

import numpy as np

from Utils import show, showSubplots
from NetLinearTest import Net, Linear

X = np.linspace(-3, 3, 1024, dtype=np.float32).reshape(-1, 1)

def func(x):
    from math import sin
    return 2 * sin(x) + 5

f = np.vectorize(func)
Y = f(X)
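Note that np.vectorize is only a convenience wrapper (a Python-level loop). Since the target function is built from a NumPy ufunc, an equivalent fully vectorized form is:

```python
import numpy as np

X = np.linspace(-3, 3, 1024, dtype=np.float32).reshape(-1, 1)
Y = 2 * np.sin(X) + 5  # elementwise, no Python loop

# sin stays within [-1, 1], so the targets lie in [3, 7]
```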

A new class, Activation, is added; it implements the ReLU activation:

class Activation:
    def __init__(self, name="relu"):
        self.name = name

        self.inData = None
        self.outData = None

        self.grad = None


    def forward(self, data):
        self.inData = data
        self.outData = data * (data > 0)

        return self.outData


    def backward(self, grad):
        self.grad = (self.inData > 0) * grad

        return self.grad


    def update(self, lr):
        pass

Network training

We introduce a new trick for the nonlinear function: data shuffling. Previously we formed a batch from values that follow one another in the sequence; now a batch can contain points from different ends of the function's domain:

def trainNet(size, steps=1000, batchsize=10, learnRate=1e-2):
    net = Net()
    net.append(Linear(insize=1, outsize=size, name="layer_1"))
    net.append(Activation(name="relu"))  # the new nonlinearity between the two linear layers
    net.append(Linear(insize=size, outsize=1, name="layer_2"))

    predictedBT = net(X)

    XC = X.copy()
    perm = np.random.permutation(XC.shape[0])
    XC = XC[perm, :]

    for i in range(steps):
        idx = np.random.randint(0, XC.shape[0] - batchsize)
        x = XC[idx:idx + batchsize]
        y = f(x).astype(np.float32)

        net.optimize(x, y, learnRate)

    predictedAT = net(X)

    showSubplots(
        X, Y,
        {
            "y": predictedBT,
            "name": "Net results before training",
            "color": "orange"
        },
        {
            "y": predictedAT,
            "name": "Net results after training",
            "color": "orange"
        }
    )


trainNet(5, steps=1000, batchsize=100)
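Isolated from the training loop, the shuffling trick is a minimal standalone sketch: since the targets are recomputed from the inputs after the permutation, only X needs to be shuffled.

```python
import numpy as np

X = np.linspace(-3, 3, 16, dtype=np.float32).reshape(-1, 1)

# random permutation of the row indices
perm = np.random.permutation(X.shape[0])
XC = X[perm, :]

# the shuffled copy contains exactly the same points, just reordered,
# so a contiguous slice of XC now mixes distant parts of the domain
batch = XC[:4]
```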

Comparison of network results (5 neurons in a hidden layer) before and after the training
Figure 2. Comparison of network results (5 neurons in a hidden layer) before and after the training

As you can see, the network attempts to describe our function, with each neuron in the hidden layer responsible for its own section of the curve. A logical next step is to increase the number of neurons:

trainNet(100, steps=1000, batchsize=100)

Comparison of network results (100 neurons in a hidden layer) before and after the training
Figure 3. Comparison of network results (100 neurons in a hidden layer) before and after the training

This looks better, but still not accurate. We could increase the number of steps to improve the quality, but that would take too long, so it is time to use the more powerful tools of the PuzzleLib library.

Implementing the library tools

Note that this time we train the network not by steps (running a single batch) but by epochs (running through the entire dataset), and we also pick a more advanced optimizer:

def trainNetPL(size, epochs, batchsize=10, learnRate=1e-2):
    from PuzzleLib.Modules import Linear, Activation
    from PuzzleLib.Modules.Activation import relu
    from PuzzleLib.Containers import Sequential
    from PuzzleLib.Optimizers import MomentumSGD
    from PuzzleLib.Cost import MSE
    from PuzzleLib.Handlers import Trainer
    from PuzzleLib.Backend.gpuarray import to_gpu


    net = Sequential()
    net.append(Linear(insize=1, outsize=size, initscheme="gaussian"))
    net.append(Activation(relu))
    net.append(Linear(insize=size, outsize=1, initscheme="gaussian"))

    predictedBT = net(to_gpu(X)).get()

    cost = MSE()
    optimizer = MomentumSGD(learnRate)
    optimizer.setupOn(net, useGlobalState=True)

    trainer = Trainer(net, cost, optimizer, batchsize=batchsize)

    show(X, Y, net(to_gpu(X)).get())

    XC, YC = X.copy(), Y.copy()
    perm = np.random.permutation(XC.shape[0])
    XC = XC[perm, :]
    YC = YC[perm, :]

    for i in range(epochs):
        trainer.trainFromHost(
            XC.astype(np.float32), YC.astype(np.float32), macroBatchSize=1000,
            onMacroBatchFinish=lambda train: print("PL module error: %s" % train.cost.getMeanError())
        )

    predictedAT = net(to_gpu(X)).get()

    showSubplots(
        X, Y,
        {
            "y": predictedBT,
            "name": "PL net results before training",
            "color": "orange"
        },
        {
            "y": predictedAT,
            "name": "PL net results after training",
            "color": "orange"
        }
    )
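To relate epochs back to the steps used earlier: one epoch consumes the whole dataset, so with N samples and batch size B it performs about ceil(N / B) parameter updates. A quick check with our dataset size:

```python
import math

# one epoch = one full pass over the dataset, i.e. ceil(N / B) batches
N, B = 1024, 100  # our dataset size and an example batch size
updates_per_epoch = math.ceil(N / B)
print(updates_per_epoch)  # 11
```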

Comparison of PL network results before and after the training
Figure 4. Comparison of PL network results before and after the training


We can conclude that any deep learning framework is essentially a library for the automatic differentiation of computational graphs; there is no magic involved. We hope that filling in these gaps in the understanding of how neural networks work has made the reader more confident, perhaps even confident enough to take part in developing the PuzzleLib library by writing your own modules.
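To make the "no magic" point concrete, here is a sketch of what such a framework does under the hood: the chain-rule gradient for a 1-3-1 ReLU network like the one in this tutorial, checked against a finite-difference estimate. All weight values below are arbitrary illustration numbers.

```python
import numpy as np

# a tiny 1-3-1 ReLU network with fixed illustrative parameters
W1 = np.array([[0.5], [-1.0], [2.0]]); b1 = np.array([0.1, 0.2, 0.3])
W2 = np.array([[1.0, -0.5, 0.25]]);    b2 = np.array([0.4])

def forward(x, w11):
    W1c = W1.copy(); W1c[0, 0] = w11
    z = W1c @ x + b1          # hidden pre-activation
    h = z * (z > 0)           # ReLU
    return (W2 @ h + b2)[0]   # scalar output y

x = np.array([0.7])

# chain rule: dy/dw11 = (dy/dh1) * (dh1/dz1) * (dz1/dw11)
z1 = W1[0, 0] * x[0] + b1[0]
analytic = W2[0, 0] * float(z1 > 0) * x[0]

# finite-difference estimate of the same derivative
eps = 1e-6
numeric = (forward(x, W1[0, 0] + eps) - forward(x, W1[0, 0] - eps)) / (2 * eps)
print(abs(analytic - numeric) < 1e-4)  # True
```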