Replies: 2 comments 3 replies
-
@orion512 typically I wouldn't spend any time worrying about this if it is within typical floating-point rounding differences. This (https://pytorch.org/docs/stable/notes/randomness.html) is worth a read, and you might want to see if it still happens when you disable benchmarking and use deterministic algorithms.
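For reference, the switches the linked notes describe look roughly like this (a minimal sketch; these are real PyTorch APIs, but the seed value is arbitrary):

```python
import torch

torch.manual_seed(42)                     # fix the RNG state (seed is arbitrary)
torch.use_deterministic_algorithms(True)  # raise an error on nondeterministic ops
torch.backends.cudnn.benchmark = False    # disable cuDNN autotuning (GPU only)
torch.backends.cudnn.deterministic = True
```

Note these settings control run-to-run determinism on one machine; they do not guarantee bit-identical results across different hardware or library builds.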
-
@rwightman As I mentioned above, I was able to get reproducible results (on my Mac laptop) by processing only 15 tensors in a batch. The problem I am facing now is that I get different results locally than on the build machine (Ubuntu) when using GitHub Actions. The difference again is in rounding (below is an example of the first element of the vector).
As mentioned, I only use a CPU, not a GPU, and have also tried setting the below:
Are you aware of anything else I can try to get reproducible results?
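One option (my suggestion, not something stated in the thread): instead of expecting bit-identical outputs across different machines, compare with a tolerance using `torch.allclose`. The tensor values below are made up purely for illustration:

```python
import torch

# Two "first elements" that differ only in trailing decimal places,
# as you would see between two machines with different BLAS builds.
local_result = torch.tensor([0.2367981672, 0.5011234])
ci_result    = torch.tensor([0.2367981674, 0.5011234])

# allclose checks |a - b| <= atol + rtol * |b| elementwise.
print(torch.allclose(local_result, ci_result, rtol=1e-5, atol=1e-8))  # True
```

This keeps the test robust to backend-level rounding while still catching real regressions.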
-
This completely confuses me. Depending on the input to `forward_features`, the resulting tensor changes.
The change isn't large; I believe it is only rounding (maybe related to `grad_fn=SelectBackward0`), yet it is still different.
Below I simply run `forward_features` three times.
Keep in mind the first 10 images are always the same.
Yet when I compare the results to each other, the batch with 30 images produces different tensors.
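For what it's worth, a plain-Python sketch (no torch needed) of the likely cause: floating-point addition is not associative, and a backend that groups the same sums differently for different batch sizes can produce slightly different values for identical inputs.

```python
# The same ten numbers, summed in two different groupings.
vals = [0.1] * 10

left_to_right = sum(vals)                      # ((0.1 + 0.1) + 0.1) + ...
split_in_two = sum(vals[:5]) + sum(vals[5:])   # a different accumulation order

print(left_to_right == split_in_two)           # False
print(abs(left_to_right - split_in_two))       # tiny: pure rounding, not a bug
```

Batched matmuls can change their internal reduction order with batch shape, so a batch of 30 legitimately yields slightly different numbers for the same first 10 images than a batch of 10.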
The output is: