The training code can be downloaded here. Once trained, the model can be used for inference in the main repository; the command used for training is in train.sh (inside the zip archive). Note that because FSC-147 contains examples with very high object counts (> 900), and the RMSE is sensitive to large outlier errors on such images, the RMSE obtained after training can vary significantly between runs. The MAE, on the other hand, is quite stable across runs. We currently handle images with high object counts via adaptive cropping (see here and here). Improving the robustness of this method across training runs is left as future work. The pretrained checkpoint that exactly reproduces the results in the paper is provided in the main README.md, available here.
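To see why the RMSE fluctuates between runs while the MAE stays stable, consider how each metric responds when a single high-count image is badly mispredicted. The sketch below uses entirely hypothetical per-image count errors (the values are illustrative, not from FSC-147): two simulated runs differ only in the error on one outlier image.

```python
import math

def mae(errors):
    """Mean absolute error over per-image count errors."""
    return sum(abs(e) for e in errors) / len(errors)

def rmse(errors):
    """Root mean squared error; squaring amplifies large outliers."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical errors: 99 well-predicted images plus one
# high-count image whose error differs between the two runs.
run_a = [3.0] * 99 + [50.0]    # outlier image missed by 50
run_b = [3.0] * 99 + [500.0]   # same image missed by 500

print(f"run A: MAE={mae(run_a):.2f}  RMSE={rmse(run_a):.2f}")
print(f"run B: MAE={mae(run_b):.2f}  RMSE={rmse(run_b):.2f}")
```

Because the MAE averages absolute errors, the single outlier shifts it only modestly, while the squared term inside the RMSE lets that one image dominate the metric, which is why RMSE varies so much more across otherwise similar training runs.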