
🎉 0.3.0

Released by @github-actions on 16 Dec 09:49 · commit 8d081b4

Minor release 0.3.0

New features

  1. Tailor now allows you to freeze weights by layer name, e.g. `freeze=['layer1', 'layer2']`, and to attach a customised bottleneck module, e.g. `bottleneck_net=MLP()`, on top of your embedding model #230, #238 (see the sketches after this list).
  2. Finetuner now supports callbacks. Callbacks are triggered during the training process, and several built-in callbacks ship out of the box, such as `WandBLogger`, which logs your training progress to Weights & Biases #231, #237.
  3. Built-in mining strategies, such as hard negative mining, can be plugged into loss functions, e.g. `TripletLoss(miner=TripletEasyHardMiner(neg_strategy='semihard'))` #236.
  4. Learning rate schedulers can now step at batch or epoch level via `scheduler_step` #248.
  5. Multiprocess data loading is now supported with the PyTorch and PaddlePaddle backends via `num_workers` #263.
  6. Built-in `Evaluator` with support for various metrics, such as precision, recall, mAP and nDCG #223, #224.
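
The new Tailor and Tuner options compose in a single `finetuner.fit` call. Below is a minimal sketch with the PyTorch backend: the parameter names come from the items above, but the `to_embedding_model` flag, the loss/miner import paths, the toy model and data, and the hypothetical `MLP` bottleneck module are assumptions, not the definitive 0.3.0 API.

```python
from collections import OrderedDict

import numpy as np
import torch.nn as nn
from jina import Document, DocumentArray  # import path predates the docarray split mentioned below

import finetuner
from finetuner.tuner.pytorch.losses import TripletLoss  # assumed module path
from finetuner.tuner.pytorch.miner import TripletEasyHardMiner  # assumed module path

# A toy embedding model with named layers, so freeze=['layer1', 'layer2'] can resolve.
embed_model = nn.Sequential(OrderedDict([
    ('layer1', nn.Linear(32, 64)),
    ('layer2', nn.Linear(64, 64)),
    ('layer3', nn.Linear(64, 32)),
]))


class MLP(nn.Module):
    """Hypothetical bottleneck module attached on top of the embedding model."""

    def __init__(self, in_features: int = 32, out_features: int = 16):
        super().__init__()
        self.net = nn.Linear(in_features, out_features)

    def forward(self, x):
        return self.net(x)


# Toy labelled training data; finetuner_label is the reserved tag noted in the
# bug-fix section below (#251).
train_docs = DocumentArray(
    Document(blob=np.random.random(32).astype('float32'),
             tags={'finetuner_label': i % 4})
    for i in range(100)
)

tuned_model = finetuner.fit(
    embed_model,
    train_data=train_docs,
    to_embedding_model=True,      # assumed flag to run Tailor before tuning
    freeze=['layer1', 'layer2'],  # freeze weights by layer name
    bottleneck_net=MLP(),         # customised bottleneck on top
    loss=TripletLoss(miner=TripletEasyHardMiner(neg_strategy='semihard')),
    scheduler_step='batch',       # step the LR scheduler per batch (or 'epoch')
    num_workers=4,                # multiprocess data loading
)
```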
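Callbacks are passed to `fit` as a list. A sketch of attaching the built-in `WandBLogger`, reusing the model and data from the previous sketch; the import path is an assumption:

```python
import finetuner
from finetuner.tuner.callback import WandBLogger  # assumed module path

# WandBLogger forwards training progress to Weights & Biases; it assumes you
# have already authenticated via `wandb login`.
finetuner.fit(
    embed_model,
    train_data=train_docs,
    callbacks=[WandBLogger()],
)
```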
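The `Evaluator` computes ranking metrics for a set of query Documents against an index set. A rough sketch only: both the module path and the constructor signature here are assumptions.

```python
from finetuner.tuner.evaluation import Evaluator  # assumed module path

# query_docs / index_docs: DocumentArrays prepared like train_docs above.
evaluator = Evaluator(query_docs, index_docs, embed_model)  # assumed signature
metrics = evaluator.evaluate()
# metrics would then contain entries such as precision, recall, mAP and nDCG.
```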

Bug fixes & Refactoring & Testing

  1. Made the `blob` property writable with the PyTorch backend #244.
  2. The reserved tag used by Finetuner has changed to `finetuner_label` #251.
  3. Code consistency improvements in `embed` and preprocessing #256, #255.
  4. Minor bug fixes including type casting #268, unit/integration test improvements #264, #253, and DocArray import refactoring after splitting docarray out into a separate project #277, #275.

🙇 We'd like to thank all contributors for this new release! In particular,
Tadej Svetina, Wang Bo, George Mastrapas, Gregor von Dulong, Aziz Belaweid, Han Xiao, Mohammad Kalim Akram, Deepankar Mahapatro, Nan Wang, Maximilian Werk, Roshan Jossy, Jina Dev Bot, 🙇