Is there a good way to read frames from a video file in C++ and process them, such that regardless of the frame processing time we always process the latest available frame in real-time and drop all frames in between?
With such an approach, the total processing time = recorded video length + time to process the last frame, regardless of how long the algorithm takes per frame. Perhaps OpenCV already supports this?
If you capture data with your camera at 30 or 60 fps and want to run an algorithm on the recording with OpenCV, you can read it frame by frame using the VideoCapture class. With that approach every frame is processed, so the total time = average per-frame processing time * number of frames.
This approach is not suitable for evaluating the real-time performance of an algorithm. It is particularly problematic for algorithms that need longer than 1/fps to process a frame, since in a real-time application they would drop some of the frames, which can dramatically alter their accuracy.