
Commit 777705b

Kevin Musgrave committed: Updated docs
1 parent 05c397a commit 777705b

3 files changed: +109 −6 lines changed

Diff for: CONTENTS.md (+2)
@@ -21,12 +21,14 @@
 | [**IntraPairVarianceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#intrapairvarianceloss) | [Deep Metric Learning with Tuplet Margin Loss](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yu_Deep_Metric_Learning_With_Tuplet_Margin_Loss_ICCV_2019_paper.pdf)
 | [**LargeMarginSoftmaxLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#largemarginsoftmaxloss) | [Large-Margin Softmax Loss for Convolutional Neural Networks](https://arxiv.org/pdf/1612.02295.pdf)
 | [**LiftedStructureLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#liftedstructureloss) | [Deep Metric Learning via Lifted Structured Feature Embedding](https://arxiv.org/pdf/1511.06452.pdf)
+| [**ManifoldLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) | [Ensemble Deep Manifold Similarity Learning using Hard Proxies](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aziere_Ensemble_Deep_Manifold_Similarity_Learning_Using_Hard_Proxies_CVPR_2019_paper.pdf)
 | [**MarginLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#marginloss) | [Sampling Matters in Deep Embedding Learning](https://arxiv.org/pdf/1706.07567.pdf)
 | [**MultiSimilarityLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#multisimilarityloss) | [Multi-Similarity Loss with General Pair Weighting for Deep Metric Learning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Multi-Similarity_Loss_With_General_Pair_Weighting_for_Deep_Metric_Learning_CVPR_2019_paper.pdf)
 | [**NCALoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ncaloss) | [Neighbourhood Components Analysis](https://www.cs.toronto.edu/~hinton/absps/nca.pdf)
 | [**NormalizedSoftmaxLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#normalizedsoftmaxloss) | - [NormFace: L2 Hypersphere Embedding for Face Verification](https://arxiv.org/pdf/1704.06369.pdf) <br/> - [Classification is a Strong Baseline for Deep Metric Learning](https://arxiv.org/pdf/1811.12649.pdf)
 | [**NPairsLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#npairsloss) | [Improved Deep Metric Learning with Multi-class N-pair Loss Objective](http://www.nec-labs.com/uploads/images/Department-Images/MediaAnalytics/papers/nips16_npairmetriclearning.pdf)
 | [**NTXentLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss) | - [Representation Learning with Contrastive Predictive Coding](https://arxiv.org/pdf/1807.03748.pdf) <br/> - [Momentum Contrast for Unsupervised Visual Representation Learning](https://arxiv.org/pdf/1911.05722.pdf) <br/> - [A Simple Framework for Contrastive Learning of Visual Representations](https://arxiv.org/abs/2002.05709)
+| [**P2SGradLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) | [P2SGrad: Refined Gradients for Optimizing Deep Face Models](https://arxiv.org/abs/1905.02479)
 | [**PNPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss) | [Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough](https://arxiv.org/pdf/2102.04640.pdf)
 | [**ProxyAnchorLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyanchorloss) | [Proxy Anchor Loss for Deep Metric Learning](https://arxiv.org/pdf/2003.13911.pdf)
 | [**ProxyNCALoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyncaloss) | [No Fuss Distance Metric Learning using Proxies](https://arxiv.org/pdf/1703.07464.pdf)

Diff for: README.md (+9 −5)
@@ -18,13 +18,15 @@
 
 ## News
 
+**June 18**: v2.2.0
+- Added [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) and [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss).
+- Added a `symmetric` flag to [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss).
+- See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.2.0).
+- Thank you [domenicoMuscill0](https://github.com/domenicoMuscill0).
+
 **April 5**: v2.1.0
 - Added [PNPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#pnploss)
-- Thanks to contributor [interestingzhuo](https://github.com/interestingzhuo).
-
-**January 29**: v2.0.0
-- Added SelfSupervisedLoss, plus various API improvements. See the [release notes](https://github.com/KevinMusgrave/pytorch-metric-learning/releases/tag/v2.0.0).
-- Thanks to contributor [cwkeam](https://github.com/cwkeam).
+- Thank you [interestingzhuo](https://github.com/interestingzhuo).
 
 
 ## Documentation
@@ -225,6 +227,7 @@ Thanks to the contributors who made pull requests!
 
 | Contributor | Highlights |
 | -- | -- |
+|[domenicoMuscill0](https://github.com/domenicoMuscill0)| - [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) <br/> - [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) |
 |[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
 |[cwkeam](https://github.com/cwkeam) | - [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss) <br/> - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) <br/> - BaseLossWrapper|
 |[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
@@ -273,6 +276,7 @@ This library contains code that has been adapted and modified from the following
 - https://github.com/ronekko/deep_metric_learning
 - https://github.com/tjddus9597/Proxy-Anchor-CVPR2020
 - http://kaizhao.net/regularface
+- https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts
 
 ### Logo
 Thanks to [Jeff Musgrave](https://www.designgenius.ca/) for designing the logo.

Diff for: docs/losses.md (+98 −1)
@@ -545,6 +545,55 @@ losses.LiftedStructureLoss(neg_margin=1, pos_margin=0, **kwargs):
 * **loss**: The loss per positive pair in the batch. Reduction type is ```"pos_pair"```.
 
 
+## ManifoldLoss
+
+[Ensemble Deep Manifold Similarity Learning using Hard Proxies](https://openaccess.thecvf.com/content_CVPR_2019/papers/Aziere_Ensemble_Deep_Manifold_Similarity_Learning_Using_Hard_Proxies_CVPR_2019_paper.pdf)
+
+```python
+losses.ManifoldLoss(
+    l: int,
+    K: int = 50,
+    lambdaC: float = 1.0,
+    alpha: float = 0.8,
+    margin: float = 5e-4,
+    **kwargs
+)
+```
+
+**Parameters**
+
+- **l**: The embedding size.
+
+- **K**: The number of proxies.
+
+- **lambdaC**: The regularization weight, used in the formula `loss = intrinsic_loss + lambdaC*context_loss`. If `lambdaC=0`, only the intrinsic loss is used. If `lambdaC=np.inf`, only the context loss is used.
+
+- **alpha**: The random walk parameter. Must be in the range `(0,1)`. It controls the amount of similarity between neighboring nodes.
+
+- **margin**: The margin used in the calculation of the loss.
+
+
+Example usage:
+```python
+loss_fn = losses.ManifoldLoss(128)
+
+# use random cluster centers
+loss = loss_fn(embeddings)
+# or specify indices of embeddings to use as cluster centers
+loss = loss_fn(embeddings, indices_tuple=indices)
+```
+
+**Important notes**
+
+`labels`, `ref_emb`, and `ref_labels` are not supported for this loss function.
+
+
+**Default reducer**:
+
+- This loss returns an **already reduced** loss.
+
+
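A more complete, self-contained sketch of the usage above, with made-up tensors; it assumes `indices_tuple` accepts a 1-D tensor of `K` indices into the batch, as in the example:

```python
import torch
from pytorch_metric_learning import losses

batch_size, embedding_size = 32, 128
loss_fn = losses.ManifoldLoss(l=embedding_size, K=10)

embeddings = torch.randn(batch_size, embedding_size, requires_grad=True)

# let the loss pick random cluster centers
loss = loss_fn(embeddings)

# or choose 10 embeddings (matching K above) to act as cluster centers
indices = torch.randperm(batch_size)[:10]
loss = loss_fn(embeddings, indices_tuple=indices)
loss.backward()
```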
 ## MarginLoss
 [Sampling Matters in Deep Embedding Learning](https://arxiv.org/pdf/1706.07567.pdf){target=_blank}
 ```python
@@ -761,6 +810,37 @@ losses.NTXentLoss(temperature=0.07, **kwargs)
 * **loss**: The loss per positive pair in the batch. Reduction type is ```"pos_pair"```.
 
 
+
+## P2SGradLoss
+[P2SGrad: Refined Gradients for Optimizing Deep Face Models](https://arxiv.org/abs/1905.02479)
+```python
+losses.P2SGradLoss(descriptors_dim, num_classes, **kwargs)
+```
+
+**Parameters**
+
+- **descriptors_dim**: The embedding size.
+
+- **num_classes**: The number of classes in your training dataset.
+
+
+Example usage:
+```python
+loss_fn = losses.P2SGradLoss(128, 10)
+loss = loss_fn(embeddings, labels)
+```
+
+**Important notes**
+
+`indices_tuple`, `ref_emb`, and `ref_labels` are not supported for this loss function.
+
+
+**Default reducer**:
+
+- This loss returns an **already reduced** loss.
+
+
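Since P2SGrad learns one weight vector per class, training typically gives the loss its own optimizer, mirroring the `loss_optimizer.step()` pattern used elsewhere in these docs. A sketch, assuming the class weights are exposed through `loss_fn.parameters()`:

```python
import torch
from pytorch_metric_learning import losses

embedding_size, num_classes = 128, 10
loss_fn = losses.P2SGradLoss(descriptors_dim=embedding_size, num_classes=num_classes)

# assumption: the per-class weight vectors are module parameters,
# so they get their own optimizer
loss_optimizer = torch.optim.Adam(loss_fn.parameters(), lr=1e-4)

embeddings = torch.randn(32, embedding_size, requires_grad=True)
labels = torch.randint(0, num_classes, (32,))

loss_optimizer.zero_grad()
loss = loss_fn(embeddings, labels)
loss.backward()
loss_optimizer.step()
```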
 ## PNPLoss
 [Rethinking the Optimization of Average Precision: Only Penalizing Negative Instances before Positive Ones is Enough](https://arxiv.org/pdf/2102.04640.pdf){target=_blank}
 ```python
@@ -849,14 +929,31 @@ loss_optimizer.step()
 
 ## SelfSupervisedLoss
 
-A common use case is to have `embeddings` and `ref_emb` be augmented versions of each other. For most losses, you have to create labels to indicate which `embeddings` correspond with which `ref_emb`. `SelfSupervisedLoss` automates this.
+A common use case is to have `embeddings` and `ref_emb` be augmented versions of each other. For most losses, you have to create labels to indicate which `embeddings` correspond with which `ref_emb`.
+
+`SelfSupervisedLoss` is a wrapper that takes care of this by creating labels internally. It assumes that:
+
+- `ref_emb[i]` is an augmented version of `embeddings[i]`.
+- `ref_emb[i]` is the only augmented version of `embeddings[i]` in the batch.
 
 ```python
+losses.SelfSupervisedLoss(loss, symmetric=True, **kwargs)
+```
+
+**Parameters**:
+
+* **loss**: The loss function to be wrapped.
+* **symmetric**: If `True`, then the embeddings in both `embeddings` and `ref_emb` are used as anchors. If `False`, then only the embeddings in `embeddings` are used as anchors.
+
+Example usage:
+
+```python
 loss_fn = losses.TripletMarginLoss()
 loss_fn = losses.SelfSupervisedLoss(loss_fn)
 loss = loss_fn(embeddings, ref_emb)
 ```
 
+
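A fuller sketch of the two-view setup, with stand-in tensors for the two augmented views, plus the manual labeling the wrapper replaces (roughly equivalent, shown for comparison):

```python
import torch
from pytorch_metric_learning import losses

loss_fn = losses.SelfSupervisedLoss(losses.TripletMarginLoss(), symmetric=True)

# stand-ins for encoder(aug1(batch)) and encoder(aug2(batch))
embeddings = torch.randn(32, 128)
ref_emb = torch.randn(32, 128)

loss = loss_fn(embeddings, ref_emb)

# roughly what the wrapper automates: concatenate the two views
# and give corresponding rows the same label
all_emb = torch.cat([embeddings, ref_emb], dim=0)
labels = torch.arange(len(embeddings)).repeat(2)
manual_loss = losses.TripletMarginLoss()(all_emb, labels)
```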
 ??? "Supported Loss Functions"
     - [AngularLoss](losses.md#angularloss)
     - [CircleLoss](losses.md#circleloss)
