🎉 0.3.0
Minor release 0.3.0
New features
- Tailor now allows you to freeze weights by layer name, e.g. `freeze=['layer1', 'layer2']`, and to attach a customised bottleneck net module, e.g. `bottleneck_net=MLP()`, on top of your embedding model #230, #238 (several of the new options are combined in the sketch after this list).
- Finetuner now supports callbacks. Callbacks are triggered during the model training process, and we've implemented several built-in callbacks such as `WandBLogger`, which logs your training progress to Weights and Biases #231, #237.
- Built-in mining strategies, such as hard negative mining, can be plugged into loss functions, e.g. `TripletLoss(miner=TripletEasyHardMiner(neg_strategy='semihard'))` #236.
- Learning rate schedulers can now be stepped at the `batch` or `epoch` level using `scheduler_step` #248.
- Multi-process data loading is now supported with the PyTorch and PaddlePaddle backends via `num_workers` #263.
- Built-in `Evaluator` with support for different metrics, such as precision, recall, mAP, nDCG, etc. #223, #224 (see the second sketch after this list).
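Below is a minimal sketch of how several of these options could be combined in a single `finetuner.fit` call. It is an illustration, not a verbatim 0.3.0 example: the import paths for `WandBLogger`, `TripletLoss` and `TripletEasyHardMiner`, the use of `docarray` for `Document`/`DocumentArray`, the `make_docs` helper, and the exact placement of `freeze`, `bottleneck_net`, `num_workers`, `scheduler_step` and `callbacks` as `fit` keyword arguments are assumptions based on the names mentioned above; please consult the 0.3.0 documentation for the canonical API.

```python
import numpy as np
import torch
import finetuner
from docarray import Document, DocumentArray  # assumption: docarray is used after the split (#277)

# NOTE: the import paths below are assumptions for illustration only;
# check the 0.3.0 docs for the canonical locations of these classes.
from finetuner.tuner.callback import WandBLogger
from finetuner.tuner.pytorch.losses import TripletLoss
from finetuner.tuner.pytorch.miner import TripletEasyHardMiner


def make_docs(n: int) -> DocumentArray:
    # Hypothetical helper: synthetic labelled data, where each Document
    # carries a blob and the reserved `finetuner_label` tag (renamed in #251).
    return DocumentArray(
        Document(
            blob=np.random.random(28 * 28).astype('float32'),
            tags={'finetuner_label': i % 10},
        )
        for i in range(n)
    )


# A plain PyTorch model used as the general model to be tailored and tuned.
general_model = torch.nn.Sequential(
    torch.nn.Linear(28 * 28, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 64),
)

tuned_model = finetuner.fit(
    general_model,
    train_data=make_docs(256),
    eval_data=make_docs(64),
    epochs=5,
    batch_size=64,
    num_workers=4,                            # multi-process data loading (#263)
    freeze=['0'],                             # freeze the first layer by its name (#230)
    bottleneck_net=torch.nn.Linear(64, 32),   # stand-in for a custom bottleneck module (#238)
    loss=TripletLoss(                         # mining strategy plugged into the loss (#236)
        miner=TripletEasyHardMiner(neg_strategy='semihard'),
    ),
    scheduler_step='batch',                   # step the LR scheduler per batch or per epoch (#248)
    callbacks=[WandBLogger()],                # log training progress to Weights & Biases (#231, #237)
)
```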
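Continuing the sketch above, a hypothetical use of the new `Evaluator`: only the class name and the supported metrics come from the notes above, so the import path, the constructor arguments and the shape of the returned metrics are assumptions.

```python
# Hypothetical sketch of the built-in Evaluator (#223, #224); the import path
# and constructor signature are assumptions, not the documented 0.3.0 API.
from finetuner.tuner.evaluation import Evaluator

evaluator = Evaluator(
    query_data=make_docs(64),    # labelled documents to evaluate
    index_data=make_docs(256),   # labelled documents to match against
    embed_model=tuned_model,
)
metrics = evaluator.evaluate()   # e.g. a dict with precision, recall, mAP, nDCG
print(metrics)
```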
Bug fixes & Refactoring & Testing
- Make the `blob` property writable with the PyTorch backend #244.
- The reserved tag used by Finetuner has changed to `finetuner_label` #251.
- Code consistency improvements in `embed` and `preprocessing` #256, #255.
- Minor bug fixes including type casting #268, unit/integration test improvements #264, #253, and DocArray import refactoring after we split DocArray into a separate project #277, #275.
🙇 We'd like to thank all contributors for this new release! In particular,
Tadej Svetina, Wang Bo, George Mastrapas, Gregor von Dulong, Aziz Belaweid, Han Xiao, Mohammad Kalim Akram, Deepankar Mahapatro, Nan Wang, Maximilian Werk, Roshan Jossy, Jina Dev Bot, 🙇