Embedding Quality Difference #1161
If you provide a specific issue and a Colab link reproducing it, I can take a look. As it stands, this issue is described too vaguely.
Hi, here is the Colab link comparing Parametric UMAP and standard UMAP for supervised FMNIST.
Hello @timsainb, were you able to look at it?
Thanks for providing the Colab notebook. Note that you are plotting the results on the training data here, not the held-out testing data. This is very important when you consider the difference between Parametric UMAP and UMAP.

Supervised non-parametric UMAP performs an embedding by balancing your distance metric in data space (e.g. Euclidean distance) against distance in categorical (label) space. If you were to set the balance to 100% categorical distance, you would get perfect separation between classes, but it wouldn't practically tell you anything about your data. Parametric UMAP can't do that, because the embedding is parametrically related to the input data through a neural network. Imagine you sampled data for two classes from the same Gaussian distribution: since both classes come from the same distribution, even a supervised neural network won't be able to separate them.
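To make the thought experiment concrete, here is a minimal numpy sketch. The logistic model below is just a hypothetical stand-in for the neural encoder inside Parametric UMAP; the point is only that when the labels carry no information about the inputs, a parametric model trained on them cannot do better than chance on held-out data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "classes" drawn from the *same* 2-D Gaussian: the labels are
# independent of the inputs, so they carry no information about them.
X = rng.normal(size=(2000, 2))
y = (np.arange(2000) % 2).astype(float)  # arbitrary alternating labels

X_train, y_train = X[:1000], y[:1000]
X_test, y_test = X[1000:], y[1000:]

# Minimal parametric classifier (logistic regression via gradient
# descent), standing in for a supervised neural encoder.
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * (X_train.T @ (p - y_train)) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

p_test = 1.0 / (1.0 + np.exp(-(X_test @ w + b)))
acc = np.mean((p_test > 0.5) == y_test.astype(bool))
print(f"held-out accuracy: {acc:.2f}")  # hovers around chance (~0.5)
```

A non-parametric supervised embedding weighted entirely toward label distance could still place these two "classes" in separate clusters, because it is free to move each training point independently; the parametric map cannot, which is why its embeddings look worse on such data but generalize to new points.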
Hello @timsainb,
When using Parametric UMAP for supervised tasks, the quality of the embeddings is significantly worse than the embeddings produced by standard UMAP. I observe this difference across multiple datasets and configurations. What could be the reason, and can it be improved?