The current workflow for identifying regressions and benchmarking packages is not ideal.
Some comments/ideas for improvement:
- `spatialdata` and `napari-spatialdata` both use `asv`, but in different ways: in particular, `spatialdata` requires manual runs, while `napari-spatialdata` has workflows enabled.
- The benchmarking machinery in `napari-spatialdata` is complex and should be simplified (a minimal sketch of what a leaner benchmark could look like is shown after this list).
- We lack a benchmark results page (e.g. https://pv.github.io/numpy-bench/#?sort=3&dir=desc), which makes exploring benchmark results cumbersome.
- It appears that with the current workflow we cannot easily benchmark versions with incompatible installs, which are precisely the scenarios where benchmarks would be most useful: https://github.com/scverse/napari-spatialdata/actions/runs/20647610286/job/59287297836?pr=377
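For reference, a simplified benchmark could be little more than a plain `asv` benchmark class. The sketch below is hypothetical: the fixture and the timed operation are placeholders rather than actual `spatialdata` calls, and it only illustrates the structure `asv` expects (`params`/`param_names`, a `setup` method, and `time_*` methods).

```python
# Hypothetical minimal asv benchmark, assuming a benchmarks/ suite
# configured via asv.conf.json. The data and the timed operation are
# placeholders, not real spatialdata APIs.
import numpy as np


class TimeElementAccess:
    """Time a basic access pattern on a small in-memory object."""

    params = [100, 10_000]
    param_names = ["n_points"]

    def setup(self, n_points):
        # Build a toy coordinate array once per parameter value;
        # asv runs the time_* methods against this fixture.
        self.coords = np.random.default_rng(0).random((n_points, 2))

    def time_bounding_box(self, n_points):
        # Stand-in for a real spatial query.
        self.coords.min(axis=0)
        self.coords.max(axis=0)
```

A suite of this shape can then be exercised on demand with `asv run` or `asv continuous` against selected revisions, independently of any CI workflow.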
Path forward: currently, the focus for `spatialdata` is on specs and APIs, not performance. Therefore, I would consciously place less emphasis on systematic benchmarks for the time being and also disable the benchmarks workflow in `napari-spatialdata` (it could still be run on demand, as in `spatialdata`). When we shift our focus to performance, we should revamp the benchmark suite, enable it universally (not just in `napari-spatialdata`), document it to make onboarding new devs easier, and enhance it for systematic use.