BCE

Description

A loss function that computes binary cross-entropy (BCE), a special case of ordinary cross-entropy.

It is used in classification tasks with two classes.

The error function formula is:

H = -\left(y\cdot \log{p(y)} + (1-y)\cdot \log(1-p(y))\right)

where

y - binary indicator of whether the object belongs to the target class;
p(y) - probability of belonging to the target class, as predicted by the classifier.
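The formula above can be sketched directly in NumPy. This is a plain reference implementation of the definition, not the library's internal code; the function name and the `eps` clipping constant are illustrative choices:

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-7):
    # Clip predicted probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(p, eps, 1.0 - eps)
    # H = -(y * log p + (1 - y) * log(1 - p)), averaged over the batch
    return float(np.mean(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))))
```

For example, a prediction of 0.5 for a positive object gives an error of ln 2 ≈ 0.693.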

Initializing

def __init__(self):

Parameters

-

Explanations

-

Examples


Binary classification


Necessary imports:

import numpy as np
from PuzzleLib.Backend import gpuarray
from PuzzleLib.Cost import BCE

Info

gpuarray is required to place the tensors on the GPU.

Synthetic target and prediction tensors:

scores = gpuarray.to_gpu(np.random.randn(10, 1).astype(np.float32))
labels = gpuarray.to_gpu(np.random.randint(0, 2, size=(10, )).astype(np.float32))

Important

Please remember that the first dimension of the target and prediction tensors is the batch size.

Initializing the error function:

bce = BCE()

Calculating the error and the gradient on the batch:

error, grad = bce(scores, labels)
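The same error and gradient can be reproduced with NumPy for a sanity check. This sketch assumes the module applies a sigmoid to raw scores internally (verify against the implementation); the function name `bce_with_logits` is our own, not part of PuzzleLib:

```python
import numpy as np

def bce_with_logits(scores, labels, eps=1e-7):
    # Sigmoid turns raw scores into probabilities in (0, 1)
    p = 1.0 / (1.0 + np.exp(-scores))
    p = np.clip(p, eps, 1.0 - eps)
    # Mean binary cross-entropy over the batch
    error = float(np.mean(-(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))))
    # Gradient of the mean BCE with respect to the raw scores: (p - y) / N
    grad = (p - labels) / labels.size
    return error, grad
```

For a raw score of 0 and a positive label, the error is ln 2 and the gradient is -0.5.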


Multiclass Classification with Binary Flags


Everything is the same as in the previous example, except for the scores and labels tensors. Suppose you must determine the presence or absence of four attributes of an object. In this case, the tensors are specified as follows:

scores = gpuarray.to_gpu(np.random.randn(10, 4).astype(np.float32))
labels = gpuarray.to_gpu(np.random.randint(0, 2, size=(10, 4)).astype(np.float32))
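In this multilabel setting, each of the four columns is treated as an independent binary attribute, and the loss is averaged over every entry of the batch. A NumPy-only sketch of that computation (assuming raw scores are passed through a sigmoid, as in the previous example; not the library's internal code):

```python
import numpy as np

np.random.seed(0)
scores = np.random.randn(10, 4).astype(np.float32)
labels = np.random.randint(0, 2, size=(10, 4)).astype(np.float32)

# Sigmoid per entry: each column is an independent binary attribute
p = 1.0 / (1.0 + np.exp(-scores))
# Element-wise BCE, then the mean over all batch entries and attributes
per_element = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
error = float(per_element.mean())
```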