# Margin-Based Losses

Margin-based losses are used when the goal is not only to make the correct prediction, but to make it by a sufficient margin. A margin measures separation: in classification, how much more strongly the model favors the correct class than an incorrect one.

These losses are common in support vector machines, metric learning, ranking systems, face recognition, verification tasks, and some contrastive representation learning methods.

The central idea is simple: a prediction should be counted as good only when the correct output is separated from competing outputs by at least some positive amount.

### Binary Classification Margins

Consider binary classification with labels

$$
y \in \{-1,+1\}.
$$

Let the model output a real-valued score

$$
s = f_\theta(x).
$$

The sign of \(s\) gives the predicted class:

$$
\hat{y} =
\begin{cases}
+1, & s \geq 0, \\
-1, & s < 0.
\end{cases}
$$

The margin is

$$
ys.
$$

If \(ys > 0\), the example is classified correctly. If \(ys < 0\), it is classified incorrectly. Larger positive values of \(ys\) indicate a more decisive correct classification.

But margin-based learning usually asks for more than correctness. It asks for

$$
ys \geq 1.
$$

The value \(1\) is a conventional margin size. Other values can be used, although when the score scale is learnable the model can absorb the difference by rescaling its outputs, so \(1\) is the usual choice.
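A quick numeric check, using the same illustrative values as the hinge loss code below:

```python
import torch

scores = torch.tensor([2.0, 0.3, -0.5, -2.0])
labels = torch.tensor([1.0, 1.0, -1.0, -1.0])

margins = labels * scores
print(margins)          # tensor([2.0000, 0.3000, 0.5000, 2.0000])
print(margins >= 1.0)   # tensor([ True, False, False,  True])
```

All four examples are classified correctly, but only the first and last meet the margin requirement.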

### Hinge Loss

The standard margin-based loss for binary classification is hinge loss:

$$
L = \max(0, 1 - ys).
$$

If \(ys \geq 1\), the loss is zero. The model already classifies the example correctly with sufficient margin.

If \(ys < 1\), the loss is positive. The model either classifies the example incorrectly or correctly but with too little margin.

The hinge loss is piecewise linear:

$$
L =
\begin{cases}
0, & ys \geq 1, \\
1 - ys, & ys < 1.
\end{cases}
$$

This makes the objective focus on examples that are near the decision boundary or on the wrong side of it.

### Hinge Loss in PyTorch

PyTorch does not require a special module to implement basic hinge loss.

```python
import torch

scores = torch.tensor([2.0, 0.3, -0.5, -2.0])
targets = torch.tensor([1.0, 1.0, -1.0, -1.0])

loss = torch.clamp(1.0 - targets * scores, min=0.0).mean()

print(loss)
```

Here `scores` are raw model outputs. The targets use \(-1\) and \(+1\), not \(0\) and \(1\).

A neural network can produce these scores directly:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
y = 2 * y - 1  # convert {0, 1} to {-1, +1}

scores = model(x).squeeze(-1)

loss = torch.clamp(1.0 - y * scores, min=0.0).mean()
```

The model output is a score, not a probability. No sigmoid is needed for hinge loss.

### Gradient Behavior

For one example, hinge loss is

$$
L = \max(0, 1 - ys).
$$

The derivative with respect to the score \(s\) is

$$
\frac{\partial L}{\partial s} =
\begin{cases}
0, & ys \geq 1, \\
-y, & ys < 1.
\end{cases}
$$

This means examples outside the margin do not contribute gradients. Once an example is correctly classified with enough separation, the loss ignores it.

This is different from logistic loss or cross-entropy, which continue to provide small gradients even for correctly classified examples.
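A quick autograd check confirms the zero gradient outside the margin (a minimal sketch with illustrative values):

```python
import torch

scores = torch.tensor([2.0, 0.3, -0.5], requires_grad=True)
targets = torch.tensor([1.0, 1.0, -1.0])

loss = torch.clamp(1.0 - targets * scores, min=0.0).sum()
loss.backward()

# The first example satisfies ys >= 1, so its gradient is zero;
# the others receive gradient -y.
print(scores.grad)  # tensor([ 0., -1.,  1.])
```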

### Squared Hinge Loss

A smooth variant is squared hinge loss:

$$
L = \max(0, 1 - ys)^2.
$$

Compared with ordinary hinge loss, squared hinge loss penalizes margin violations more strongly. Its gradient is also continuous at the margin boundary, rather than jumping from \(-y\) to \(0\) as ordinary hinge loss does.

In PyTorch:

```python
# `scores` and `targets` as in the binary hinge example above
margin_error = torch.clamp(1.0 - targets * scores, min=0.0)
loss = (margin_error ** 2).mean()
```

Squared hinge loss is more sensitive to large violations than ordinary hinge loss. It may produce stronger corrections for badly misclassified examples, but it can also be more affected by outliers.

### Multiclass Margin Loss

For multiclass classification, the model produces one score per class:

$$
s = f_\theta(x) \in \mathbb{R}^{K}.
$$

Let \(c\) be the correct class. A multiclass margin loss requires the correct class score \(s_c\) to exceed every incorrect class score \(s_j\) by at least a margin \(\Delta\):

$$
s_c \geq s_j + \Delta
\quad
\text{for all } j \neq c.
$$

Equivalently,

$$
s_j - s_c + \Delta \leq 0.
$$

The multiclass hinge loss is

$$
L =
\sum_{j \neq c}
\max(0, s_j - s_c + \Delta).
$$

If every incorrect class is sufficiently below the correct class, the loss is zero. If one or more incorrect classes score too close to, or above, the correct class, those classes contribute positive loss.

### Multiclass Margin Loss in PyTorch

The following implementation computes multiclass hinge loss for scores of shape `[B, K]` and integer class labels of shape `[B]`.

```python
import torch

def multiclass_hinge_loss(scores, targets, margin=1.0):
    batch_size = scores.shape[0]

    # Score of the correct class for each example, kept as a column
    correct_scores = scores[torch.arange(batch_size), targets].unsqueeze(1)
    margins = scores - correct_scores + margin

    # Zero out the correct-class entries so they add no loss
    margins[torch.arange(batch_size), targets] = 0.0

    return torch.clamp(margins, min=0.0).sum(dim=1).mean()

scores = torch.randn(32, 10)
targets = torch.randint(0, 10, (32,))

loss = multiclass_hinge_loss(scores, targets)
print(loss)
```

This loss can be used with any neural network that outputs class scores.
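PyTorch also ships a built-in version, `nn.MultiMarginLoss`. Note that it averages the per-class terms over the number of classes rather than summing them, so its values are smaller than those of the function above by a factor of \(K\):

```python
import torch
import torch.nn as nn

loss_fn = nn.MultiMarginLoss(margin=1.0)

scores = torch.randn(32, 10)
targets = torch.randint(0, 10, (32,))

# Averages over classes instead of summing, so this should equal
# multiclass_hinge_loss(scores, targets) / 10 for 10 classes.
loss = loss_fn(scores, targets)
```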

Cross-entropy is usually preferred for modern classification because it gives probabilistic outputs and stable optimization. Multiclass margin losses remain useful when ranking separation matters more than probability calibration.

### Margin Loss for Ranking

Margin losses are also common in ranking. Suppose a system should rank a positive item above a negative item.

Let

$$
s^+ = f_\theta(x, a^+)
$$

be the score of a positive item, and

$$
s^- = f_\theta(x, a^-)
$$

be the score of a negative item.

A pairwise ranking loss is

$$
L = \max(0, m - s^+ + s^-),
$$

where \(m > 0\) is the required margin.

The loss is zero when

$$
s^+ \geq s^- + m.
$$

Thus, the positive item must score at least \(m\) higher than the negative item.

In PyTorch:

```python
import torch

positive_scores = torch.randn(64)
negative_scores = torch.randn(64)

margin = 1.0
loss = torch.clamp(margin - positive_scores + negative_scores, min=0.0).mean()
```
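The same computation is available through the built-in `nn.MarginRankingLoss`, which takes two score tensors and a target of \(+1\) or \(-1\) indicating which input should rank higher:

```python
import torch
import torch.nn as nn

loss_fn = nn.MarginRankingLoss(margin=1.0)

positive_scores = torch.randn(64)
negative_scores = torch.randn(64)
target = torch.ones(64)  # +1: the first input should rank higher

loss = loss_fn(positive_scores, negative_scores, target)
```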

This type of loss is useful in recommendation systems, retrieval systems, and learning-to-rank problems.

### Triplet Margin Loss

Triplet margin loss is widely used in metric learning. Each training example consists of three items:

| Item | Meaning |
|---|---|
| Anchor | Reference example |
| Positive | Similar to the anchor |
| Negative | Dissimilar to the anchor |

The model maps each item to an embedding vector:

$$
a = f_\theta(x_a),
\qquad
p = f_\theta(x_p),
\qquad
n = f_\theta(x_n).
$$

The goal is to make the anchor closer to the positive than to the negative by at least margin \(m\):

$$
d(a,p) + m \leq d(a,n).
$$

The triplet loss is

$$
L =
\max(0, d(a,p) - d(a,n) + m).
$$

Here \(d(\cdot,\cdot)\) is a distance function, often Euclidean distance.

PyTorch provides this loss:

```python
import torch
import torch.nn as nn

loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)

anchor = torch.randn(32, 128)
positive = torch.randn(32, 128)
negative = torch.randn(32, 128)

loss = loss_fn(anchor, positive, negative)
print(loss)
```
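The built-in matches the formula above; a manual version (ignoring the small epsilon the built-in adds inside the distance computation for numerical stability) looks like this:

```python
# Manual computation of the same quantity, using the tensors above
d_ap = torch.norm(anchor - positive, dim=1)
d_an = torch.norm(anchor - negative, dim=1)
manual_loss = torch.clamp(d_ap - d_an + 1.0, min=0.0).mean()
```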

Triplet loss is used in face recognition, image retrieval, speaker verification, and embedding learning.

### Contrastive Margin Loss

Contrastive margin loss uses pairs rather than triplets. Each pair has a label indicating whether the two examples are similar or dissimilar.

Let

$$
z_1 = f_\theta(x_1),
\qquad
z_2 = f_\theta(x_2).
$$

Let

$$
d = \|z_1 - z_2\|_2.
$$

For a similar pair, the loss encourages small distance:

$$
L_{\text{similar}} = d^2.
$$

For a dissimilar pair, the loss encourages distance at least \(m\):

$$
L_{\text{dissimilar}} =
\max(0, m-d)^2.
$$

With label \(y=1\) for similar and \(y=0\) for dissimilar, one common form is

$$
L =
y d^2
+
(1-y)\max(0, m-d)^2.
$$

In PyTorch:

```python
import torch

def contrastive_margin_loss(z1, z2, y, margin=1.0):
    d = torch.norm(z1 - z2, dim=1)
    similar = y * d.pow(2)
    dissimilar = (1 - y) * torch.clamp(margin - d, min=0.0).pow(2)
    return (similar + dissimilar).mean()
```
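A quick usage sketch with random embeddings and pair labels:

```python
z1 = torch.randn(32, 128)
z2 = torch.randn(32, 128)
y = torch.randint(0, 2, (32,)).float()  # 1 = similar, 0 = dissimilar

loss = contrastive_margin_loss(z1, z2, y)
print(loss)
```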

This loss directly shapes the geometry of an embedding space.

### Margin Losses and Embedding Geometry

Margin-based losses are useful when the representation space matters.

For example, in face verification, we may not want only a class prediction. We want an embedding space where images of the same person are close and images of different people are far apart.

The loss creates geometric constraints:

$$
d(\text{same identity}) \text{ small},
$$

and

$$
d(\text{different identity}) \geq m.
$$

This allows the model to generalize to new identities not seen during training. Instead of memorizing a fixed set of classes, the model learns a distance function.

This is also useful in search and retrieval. A query and a relevant document should have nearby embeddings. An irrelevant document should be farther away.

### Hard Negative Mining

Margin losses often depend heavily on the choice of negative examples.

If negatives are too easy, the loss becomes zero and the model receives little training signal. If negatives are too hard, especially early in training, optimization can become unstable.

A hard negative is a negative example that the model currently scores too highly or places too close to the anchor.

Triplet loss, for example, benefits from mining negatives such that

$$
d(a,n)
$$

is small enough to violate the margin.

Common strategies include:

| Strategy | Description |
|---|---|
| Random negatives | Sample negatives uniformly |
| Hard negatives | Use the negatives the model currently scores highest or places closest to the anchor |
| Semi-hard negatives | Use negatives that violate the margin but are still farther from the anchor than the positive |
| In-batch negatives | Treat other examples in the batch as negatives |

Hard negative mining can greatly improve metric learning, but aggressive mining can amplify label noise.
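As a rough illustration of semi-hard selection, the following sketch picks, for each anchor, the closest negative that is farther than the positive but still inside the margin. The candidate layout `[B, M, D]` and the fallback to the closest candidate when no semi-hard negative exists are assumptions made for this example:

```python
import torch

def pick_semi_hard(anchor, positive, candidates, margin=1.0):
    # anchor, positive: [B, D]; candidates: [B, M, D] (assumed layout)
    d_ap = torch.norm(anchor - positive, dim=1, keepdim=True)    # [B, 1]
    d_an = torch.norm(anchor.unsqueeze(1) - candidates, dim=2)   # [B, M]

    # Semi-hard: farther than the positive but violating the margin
    semi_hard = (d_an > d_ap) & (d_an < d_ap + margin)

    # Prefer the closest semi-hard candidate; if none exists for an
    # anchor, fall back to the closest candidate overall.
    ranked = torch.where(semi_hard, d_an, d_an + 1e6)
    idx = ranked.argmin(dim=1)                                   # [B]
    return candidates[torch.arange(candidates.shape[0]), idx]
```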

### Margin Size

The margin is a hyperparameter. A larger margin requires stronger separation. A smaller margin allows closer decision boundaries.

If the margin is too small, embeddings may not separate well. If the margin is too large, many constraints may become impossible to satisfy, which can slow or destabilize training.

For hinge loss, the conventional margin is \(1\). For embedding losses, the margin depends on the distance scale. If embeddings are normalized to unit length, Euclidean distances lie in \([0, 2]\), so the margin must be chosen within that range.

In practice, the margin is tuned using validation performance.

### Margins with Normalized Embeddings

Many modern embedding systems normalize embeddings to unit length:

$$
\|z\|_2 = 1.
$$

Then similarity is often measured by cosine similarity:

$$
\cos(z_1,z_2) = z_1^\top z_2.
$$

Because embeddings lie on the unit sphere, margin constraints become angular or cosine constraints.
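For instance, a hinge can be placed directly on the cosine scale. The sketch below penalizes similar pairs whose cosine similarity falls below a margin; the margin value here is illustrative:

```python
import torch
import torch.nn.functional as F

z1 = F.normalize(torch.randn(32, 128), dim=1)  # unit-length embeddings
z2 = F.normalize(torch.randn(32, 128), dim=1)

cos_sim = (z1 * z2).sum(dim=1)  # cosine similarity, in [-1, 1]

m = 0.5  # illustrative margin on the cosine scale
loss = torch.clamp(m - cos_sim, min=0.0).mean()
```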

This idea appears in face recognition losses such as angular margin softmax methods. These losses modify classification logits so that classes must be separated by angular margins, improving embedding discrimination.

The general principle remains the same: the correct identity should be separated from competing identities by a margin in representation space.

### Margin Losses Versus Cross-Entropy

Cross-entropy and margin losses both train classifiers, but they emphasize different properties.

| Property | Cross-entropy | Margin-based loss |
|---|---|---|
| Output interpretation | Probabilities | Scores or distances |
| Main goal | Likelihood of correct class | Separation from alternatives |
| Gradient behavior | Nonzero for most examples | Zero outside margin |
| Calibration | Usually better | Usually weaker |
| Common use | Classification | Ranking, metric learning, retrieval |
| Negative examples | Implicit through softmax | Often explicitly sampled |

For ordinary classification, cross-entropy is usually the best default. For verification, ranking, retrieval, and embedding learning, margin losses are often more natural.

### Practical Guidelines

Use hinge loss when you want a score-based classifier with a clear separation margin. Use multiclass margin loss when class separation matters more than calibrated probabilities. Use triplet loss or contrastive margin loss when learning embeddings for similarity search, verification, or retrieval.

Check label conventions carefully. Binary hinge loss usually expects targets in \(\{-1,+1\}\), while binary cross-entropy expects targets in \(\{0,1\}\).

Tune the margin. Monitor how many examples violate the margin. If almost none violate it, training may have little signal. If almost all violate it for a long time, the margin may be too large, the model may be underpowered, or negative examples may be too hard.
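A simple diagnostic for the binary case (a sketch; `scores` and `targets` follow the \(\{-1,+1\}\) convention used throughout):

```python
import torch

scores = torch.randn(256)
targets = torch.randint(0, 2, (256,)).float() * 2 - 1

# Fraction of examples inside the margin or misclassified
violation_rate = (targets * scores < 1.0).float().mean()
print(f"margin violation rate: {violation_rate.item():.2%}")
```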

Margin-based losses are best understood as constraints made differentiable. They do not only ask the model to be correct. They ask it to be correct with room to spare.

