K fold cross validation? #1152
-
I'm a bit confused by the accuracy function in timm.utils.metrics: it takes a parameter topk. Does this specify the number of folds for cross validation?
Replies: 3 comments
-
not a bug
Beta Was this translation helpful? Give feedback.
-
The parameter topk specifies that a prediction counts as correct if the true label is among the top k predictions. For example, if the label is "boat" and the model predicts that the two most likely labels are, in order, ["bird", "boat"], then the prediction is wrong in the sense of normal accuracy -- "bird" is not the same as "boat" -- but correct in the top-2 sense. For ImageNet, it is quite common to report top-5 accuracy alongside "normal" (top-1) accuracy.
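The idea above can be sketched in plain Python. This is an illustrative sketch of the top-k metric, not the actual timm.utils.metrics implementation (which operates on PyTorch tensors):

```python
def topk_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    correct = 0
    for sample_scores, label in zip(scores, labels):
        # Indices of the k classes with the highest scores, best first.
        top_classes = sorted(range(len(sample_scores)),
                             key=lambda i: sample_scores[i], reverse=True)[:k]
        if label in top_classes:
            correct += 1
    return correct / len(labels)

# The "boat" example: class 0 = bird, 1 = boat, 2 = car.
scores = [[0.5, 0.4, 0.1]]  # model ranks bird first, boat second
labels = [1]                # true label is boat
print(topk_accuracy(scores, labels, k=1))  # 0.0 -- wrong under top-1
print(topk_accuracy(scores, labels, k=2))  # 1.0 -- correct under top-2
```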
-
@NouranFadlallah the answer from @TorbenSDJohansen is correct, it's a basic accuracy measure. K-fold cross validation is something you'd have to add on top, as would using different metrics like f-scores, precision/recall, etc. The included train/validation scripts have been focused on ImageNet, where accuracy is the standard metric due to it being a relatively well balanced dataset (for the in1k at least). There will eventually be some other metrics as I expand the focus of the train/val scripts for some more interesting tasks, but in general you want to seek out additional metrics libraries suited to your task. Many should be easy enough to integrate with the scripts here...
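To make the "add it on top" point concrete, here is a minimal sketch of index-based fold splitting that could wrap an existing train/validate loop. kfold_splits is a hand-rolled illustration, not a timm API:

```python
def kfold_splits(n_samples, k=5):
    """Yield (train_indices, val_indices) for each of k folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for fold in range(k):
        start = fold * fold_size
        # The last fold absorbs any remainder samples.
        stop = n_samples if fold == k - 1 else start + fold_size
        val = indices[start:stop]
        train = indices[:start] + indices[stop:]
        yield train, val

for train_idx, val_idx in kfold_splits(10, k=5):
    # Here you would subset your dataset with these indices and run your
    # usual train/validate routine once per fold, then average the metrics.
    print(len(train_idx), len(val_idx))  # 8 2 on every fold
```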