In this case, the activation function does not depend on scores of other classes in \(C\) other than \(C_1 = C_i\). So the gradient with respect to each score \(s_i\) in \(s\) will only depend on the loss given by its binary problem.
- Caffe: Sigmoid Cross-Entropy Loss Layer
- Pytorch: BCEWithLogitsLoss
- TensorFlow: sigmoid_cross_entropy.
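As a quick illustration of that per-score independence, here is a minimal PyTorch sketch (batch size, class count, and values are arbitrary) using BCEWithLogitsLoss:

```python
import torch
import torch.nn as nn

# Arbitrary example: 4 samples, C = 3 independent binary problems.
logits = torch.randn(4, 3, requires_grad=True)   # raw scores s
targets = torch.tensor([[1., 0., 1.],
                        [0., 1., 0.],
                        [1., 1., 0.],
                        [0., 0., 1.]])           # groundtruth labels t

criterion = nn.BCEWithLogitsLoss()  # sigmoid + binary cross-entropy, numerically stable
loss = criterion(logits, targets)
loss.backward()

# Each gradient entry depends only on its own score and target:
# d(loss)/d(s_i) = (sigmoid(s_i) - t_i) / numel  (mean reduction)
expected = (torch.sigmoid(logits.detach()) - targets) / logits.numel()
print(torch.allclose(logits.grad, expected))  # True
```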
Focal Loss
Focal Loss was introduced by Lin et al., from Facebook, in this paper. They claim to improve one-stage object detectors using Focal Loss to train a detector they name RetinaNet. Focal loss is a Cross-Entropy Loss that weighs the contribution of each sample to the loss based on the classification error. The idea is that, if a sample is already classified correctly by the CNN, its contribution to the loss decreases. With this strategy, they claim to solve the problem of class imbalance by making the loss implicitly focus on those problematic classes. Moreover, they also weight the contribution of each class to the loss in a more explicit class balancing. They use Sigmoid activations, so Focal loss could also be considered a Binary Cross-Entropy Loss. We define it for each binary problem as:

\(FL = -\sum_{i=1}^{C=2} (1 - s_i)^{\gamma} t_i \log(s_i)\)
Where \((1 - s_i)^{\gamma}\), with the focusing parameter \(\gamma \geq 0\), is a modulating factor to reduce the influence of correctly classified samples in the loss. With \(\gamma = 0\), Focal Loss is equivalent to Binary Cross-Entropy Loss.
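A tiny numeric illustration of this down-weighting (arbitrary scores; binary_focal_loss is a hypothetical helper, not library code):

```python
import numpy as np

def binary_focal_loss(s, t, gamma):
    """Focal loss for one binary problem; s is the sigmoid score, t the label."""
    s_pos = s if t == 1 else 1 - s          # score assigned to the true class
    return -((1 - s_pos) ** gamma) * np.log(s_pos)

for s in (0.9, 0.6, 0.1):                   # a positive sample (t = 1)
    for gamma in (0, 2, 5):
        print(f"s={s}, gamma={gamma}: FL={binary_focal_loss(s, 1, gamma):.4f}")
# With gamma = 0 we recover plain binary cross-entropy; with gamma = 2 the
# well-classified sample (s = 0.9) is down-weighted by (1 - 0.9)^2 = 0.01.
```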
\(FL = \begin{cases} -(1 - s_1)^{\gamma} \log(s_1) & \text{if } t_1 = 1 \\ -(1 - s_2)^{\gamma} \log(s_2) & \text{if } t_1 = 0 \end{cases}\)

Where we have separated the formulation for when the class \(C_i = C_1\) is positive or negative (and therefore, the class \(C_2\) is positive). As before, we have \(s_2 = 1 - s_1\) and \(t_2 = 1 - t_1\).
The gradient gets a bit more complex due to the inclusion of the modulating factor \((1 - s_i)^{\gamma}\) in the loss formulation, but it can be deduced using the Binary Cross-Entropy gradient expression. For a positive \(C_i\) (\(t_i = 1\)):

\(\frac{\partial FL}{\partial s_i} = (1 - f(s_i))^{\gamma} (\gamma f(s_i) \log(f(s_i)) + f(s_i) - 1)\)
Where \(f()\) is the sigmoid function. To get the gradient expression for a negative \(C_i\) (\(t_i = 0\)), we just need to replace \(f(s_i)\) with \((1 - f(s_i))\) in the expression above.
Note that, if the modulating factor \(\gamma = 0\), the loss is equivalent to the CE Loss, and we end up with the same gradient expression.
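A quick numerical sanity check of the positive-class gradient expression, including its reduction to the BCE gradient at \(\gamma = 0\) (a sketch with an arbitrary score and finite-difference step):

```python
import numpy as np

def f(x):                                    # sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def fl_pos(s, gamma):                        # focal loss for a positive C_i (t_i = 1)
    return -((1 - f(s)) ** gamma) * np.log(f(s))

def fl_pos_grad(s, gamma):                   # the analytic gradient expression above
    p = f(s)
    return ((1 - p) ** gamma) * (gamma * p * np.log(p) + p - 1)

s, eps = 0.7, 1e-6
for gamma in (0, 2):
    numeric = (fl_pos(s + eps, gamma) - fl_pos(s - eps, gamma)) / (2 * eps)
    print(np.isclose(fl_pos_grad(s, gamma), numeric))   # True, True
# At gamma = 0 the analytic gradient is f(s) - 1: the BCE gradient for t_i = 1.
```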
Forward pass: Loss computation
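The layer code itself is not reproduced here; the following is a minimal NumPy sketch of such a forward pass, consistent with the description below it. The names scores, target, logprobs, focusing_parameter and class_balances come from that description, while the shapes and dummy inputs are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dummy inputs standing in for the layer bottoms (shapes are assumptions).
batch_size, num_classes = 4, 3
scores = np.random.randn(batch_size, num_classes)               # raw net outputs
target = (np.random.rand(batch_size, num_classes) > 0.5) * 1.0  # binary labels

focusing_parameter = 2                 # gamma, a layer parameter in the net prototxt
class_balances = np.ones(num_classes)  # optional per-class loss weights

probs = sigmoid(scores)
logprobs = np.zeros(batch_size)
for r in range(batch_size):            # each element of the batch
    for c in range(num_classes):       # each independent binary problem
        if target[r, c] == 1:          # positive class C_1
            logprobs[r] += class_balances[c] * ((1 - probs[r, c]) ** focusing_parameter) * np.log(probs[r, c])
        else:                          # negative ("background") class C_2
            logprobs[r] += class_balances[c] * (probs[r, c] ** focusing_parameter) * np.log(1 - probs[r, c])

data_loss = -np.sum(logprobs) / batch_size  # average focal loss over the batch
```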
Where logprobs[r] stores, for each element of the batch, the sum of the binary cross-entropy for each class. The focusing_parameter is \(\gamma\), which is 2 by default and should be defined as a layer parameter in the net prototxt. The class_balances can be used to introduce different loss contributions per class, as they do in the Facebook paper.
Backward pass: Gradients computation
In the specific (and usual) case of Multi-Class classification, the labels are one-hot, so only the positive class \(C_p\) keeps its term in the loss. There is only one element of the target vector \(t\) which is not zero, \(t_i = t_p\). So, discarding the elements of the summation which are zero due to the target labels, we can write:

\(FL = -(1 - s_p)^{\gamma} \log(s_p)\)
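For the backward pass, a matching NumPy sketch (again an assumption-laden reconstruction, not the original layer code): positive entries use the gradient expression given above, and negative entries use the analogous expression obtained by deriving the \(t_i = 0\) case.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Dummy inputs mirroring the forward sketch (names and shapes are assumptions).
batch_size, num_classes = 4, 3
focusing_parameter = 2
scores = np.random.randn(batch_size, num_classes)
target = np.zeros((batch_size, num_classes))
target[np.arange(batch_size), np.random.randint(num_classes, size=batch_size)] = 1  # one-hot

p = sigmoid(scores)
delta = np.zeros_like(scores)        # gradient w.r.t. the raw scores
pos = target == 1
neg = ~pos
# Positive class C_1: the gradient expression from the text.
delta[pos] = ((1 - p[pos]) ** focusing_parameter) * \
             (focusing_parameter * p[pos] * np.log(p[pos]) + p[pos] - 1)
# Negative ("background") class C_2: the t_i = 0 case, derived the same way.
delta[neg] = (p[neg] ** focusing_parameter) * \
             (p[neg] - focusing_parameter * (1 - p[neg]) * np.log(1 - p[neg]))
delta /= batch_size                  # match the batch averaging of the forward pass
```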
This would be the pipeline for each one of the \(C\) classes. We set up \(C\) independent binary classification problems \((C' = 2)\). Then we sum up the loss over the different binary problems: we sum up the gradients of every binary problem to backpropagate, and the losses to monitor the global loss. \(s_1\) and \(t_1\) are the score and the groundtruth label for the class \(C_1\), which is also the class \(C_i\) in \(C\). \(s_2 = 1 - s_1\) and \(t_2 = 1 - t_1\) are the score and the groundtruth label of the class \(C_2\), which is not a “class” in our original problem with \(C\) classes, but a class we create to set up the binary problem with \(C_1 = C_i\). We can understand it as a background class.
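As a wrap-up, a compact sketch of that pipeline using plain binary cross-entropy (illustrative names, arbitrary values): each class \(C_i\) yields one binary problem with \(s_2 = 1 - s_1\) and \(t_2 = 1 - t_1\), and losses and gradients are accumulated over the \(C\) problems.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

C = 3
scores = np.random.randn(C)       # one raw score per class
t = np.array([0.0, 1.0, 0.0])     # groundtruth labels t_1 of each binary problem

total_loss = 0.0
grads = np.zeros(C)
for i in range(C):                # one binary problem (C' = 2) per class
    s1, t1 = sigmoid(scores[i]), t[i]   # class C_1 = C_i
    s2, t2 = 1 - s1, 1 - t1             # the created "background" class C_2
    total_loss += -(t1 * np.log(s1) + t2 * np.log(s2))  # binary cross-entropy
    grads[i] = s1 - t1            # BCE gradient w.r.t. the raw score
# total_loss is monitored as the global loss; grads are backpropagated.
```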