I am not aware of any historical performance data stored anywhere for the OpenCV buildbots. And I doubt performance data from these CI instances would be meaningful: dedicated hardware would be needed (which is costly and time-consuming to maintain), and that is not the job of the buildbot CI. OpenCV's human resources are already limited.
So for all code changes to OpenCV, checking for performance regressions rests with the individual developer.
Yes, it is. See the optimization label for some examples and discussions.
I would also like to add that OpenCV relies on external libraries for "performance critical" functions: Intel IPP for many image processing functions, or cuDNN/OpenVINO/Tengine for deep learning.
I think there are few opportunities for "micro-optimizations", either in contributed new code or in the existing OpenCV code base.
I mean, a large part of the pull requests for performance optimization are "obvious optimizations": e.g. adding a SIMD code path for a specific function, parallelizing an algorithm, adding an OpenCL code path, etc.
Another topic is the conversion of old code from native intrinsics to universal intrinsics / wide universal intrinsics (more information). For a library of this size, it is unmaintainable to have native intrinsics for all the supported architectures (x86, ARM, PowerPC, RISC-V, WebAssembly, ...).
In summary, and in my opinion:
My intention is to see the performance effect of a code fix and compare it historically across all the platforms the pipeline/buildbot covers.
Maybe you can add more information about which function you are targeting and what the optimization is?
Is the expected performance gain significant?
Note: