[ENH] Adding Time Mixingup Contrastive Learning to Self Supervised module #3015
Conversation
LGTM
Adding a state-of-the-art model, TimeMCL [1], to populate the self-supervised module. After this, I will probably add a base model to contain the load/save functions.
The model takes as input two time series, with no information on the labels, and augments a third series by mixing up x1 and x2 as x3 = lambda * x1 + (1 - lambda) * x2. A contrastive learning loss is then used to match the augmented sample x3 back to x1 and x2, while discarding the rest of the series in the batch as negatives. A sketch of the idea is given below.
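For reviewers, here is a minimal, hypothetical sketch of the mixup augmentation and the contrastive matching step in TensorFlow. The function names (`mixup_batch`, `mcl_loss`) and hyperparameter defaults (`alpha`, `tau`) are illustrative, not taken from this PR's code, and the loss is a simplified variant of the one in [1]:

```python
import numpy as np
import tensorflow as tf


def mixup_batch(x1, x2, alpha=0.2):
    # Draw a single mixing coefficient lambda ~ Beta(alpha, alpha)
    # and build the augmented batch x3 = lambda*x1 + (1-lambda)*x2.
    lam = np.random.beta(alpha, alpha)
    x3 = lam * x1 + (1.0 - lam) * x2
    return x3, lam


def mcl_loss(z1, z2, z3, lam, tau=0.5):
    # z1, z2, z3: (batch, dim) embeddings of x1, x2 and the mixed x3.
    # The mixed sample should be lam-similar to its own x1 and
    # (1 - lam)-similar to its own x2; every other series in the
    # batch acts as a negative.
    z1 = tf.math.l2_normalize(z1, axis=1)
    z2 = tf.math.l2_normalize(z2, axis=1)
    z3 = tf.math.l2_normalize(z3, axis=1)
    # Cosine-similarity logits between each mixed sample and every
    # candidate in the batch; the diagonal holds the true pairs.
    logits_31 = tf.matmul(z3, z1, transpose_b=True) / tau  # (B, B)
    logits_32 = tf.matmul(z3, z2, transpose_b=True) / tau  # (B, B)
    labels = tf.range(tf.shape(z3)[0])
    ce = tf.keras.losses.sparse_categorical_crossentropy
    loss_1 = ce(labels, logits_31, from_logits=True)
    loss_2 = ce(labels, logits_32, from_logits=True)
    return tf.reduce_mean(lam * loss_1 + (1.0 - lam) * loss_2)
```

Per training step, the three batches would be encoded with the same shared encoder before calling `mcl_loss`. Note that the loss in [1] gathers negatives from both views in a single denominator, which this simplified version splits into two separate softmaxes, so treat it only as a reading aid.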
The original code is in PyTorch; I adapted it to TensorFlow/Keras.
[1] Wickstrøm, Kristoffer, Michael Kampffmeyer, Karl Øyvind Mikalsen, and Robert Jenssen. "Mixing up contrastive learning: Self-supervised representation learning for time series." Pattern Recognition Letters 155 (2022): 54-61.