# SmoothL1¶

## Description¶

A loss function that combines the MAE and MSE error functions. Unlike MAE, it is differentiable at zero, and it is less sensitive to outliers than MSE.

It is used in regression tasks.

The error function formula is:

S = \frac{1}{N}\sum_{i=1}^{N} \begin{cases} \frac{1}{2}(y_i-y_i^p)^2 & \quad \text{if } |y_i-y_i^p| < 1\\ |y_i-y_i^p| - \frac{1}{2} & \quad \text{if } |y_i-y_i^p| \geq 1 \end{cases}

where

$N$ - number of objects in the sample;
$y_i$ - real value of the $i$-th object;
$y_i^p$ - predicted value for the $i$-th object.
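The formula above can be sketched in plain NumPy (an illustration of the math only, not PuzzleLib's GPU implementation; the averaging over the batch follows the definition of $N$ above):

```python
import numpy as np

def smooth_l1(y, yp):
    # Elementwise Smooth L1: quadratic near zero, linear in the tails
    diff = np.abs(y - yp)
    elementwise = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5)

    # Average over the N objects in the sample (first dimension)
    return elementwise.sum() / y.shape[0]
```

The quadratic branch keeps the loss differentiable at zero, while the linear branch keeps large residuals from dominating the way they do under MSE.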

## Initializing¶

def __init__(self):


Parameters

-

Explanations

-

## Examples¶

Necessary imports:

>>> import numpy as np
>>> from PuzzleLib.Backend import gpuarray
>>> from PuzzleLib.Cost import SmoothL1


Info

`gpuarray` is required to place the tensors on the GPU correctly.

>>> targets = gpuarray.to_gpu(np.random.randn(10, 10).astype(np.float32))
>>> predictions = gpuarray.to_gpu(np.random.randn(10, 10).astype(np.float32))


Important

Please remember that the first dimension of the target and prediction tensors is the batch size.

Initializing the error function:

>>> smooth = SmoothL1()


Calculating the error and the gradient on the batch:

>>> error, grad = smooth(predictions, targets)
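The call returns the averaged error and the gradient of the loss with respect to the predictions. The gradient can be sketched in NumPy as follows (a hedged reference, not PuzzleLib's GPU kernel, and up to the sign convention the library uses):

```python
import numpy as np

def smooth_l1_grad(y, yp):
    # Derivative of the averaged Smooth L1 loss with respect to yp:
    # (yp - y) in the quadratic region, sign(yp - y) in the linear one,
    # divided by the batch size because the loss averages over N objects
    diff = yp - y
    return np.where(np.abs(diff) < 1.0, diff, np.sign(diff)) / y.shape[0]
```

Note that the gradient is bounded (its magnitude never exceeds 1/N per element), which is exactly why Smooth L1 is more robust to outliers than MSE.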