Drawbacks of using FFmpeg through OpenCV vs. using the FFmpeg libraries directly
I have been exploring the option of providing video support through OpenCV, with FFmpeg as the backend. One thing I observed is that OpenCV gives very little control over the video-encoding settings it passes to FFmpeg. Is my understanding correct? Apart from programming ease, what are the advantages and drawbacks of using FFmpeg through OpenCV versus calling FFmpeg directly?
OpenCV is a computer-vision library.
To edit or encode video files, you are indeed better off using (lib)ffmpeg or GStreamer directly.
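To illustrate the difference in control, here is a hedged sketch of a direct `ffmpeg` invocation. It generates a short synthetic test clip (so the example is self-contained; the filenames `input.avi` and `output.mp4` are arbitrary) and then re-encodes it with libx264, setting rate control (CRF), the speed/compression trade-off (preset), GOP length, and pixel format — encoder knobs that OpenCV's writer API does not expose:

```shell
# Generate a 2-second synthetic test clip (requires ffmpeg built with lavfi).
ffmpeg -y -f lavfi -i testsrc=duration=2:size=640x480:rate=30 input.avi

# Re-encode with explicit x264 settings:
#   -crf 23        constant-rate-factor quality target
#   -preset medium speed vs. compression trade-off
#   -g 250         GOP (keyframe interval) of 250 frames
#   -pix_fmt       output pixel format for broad player compatibility
ffmpeg -y -i input.avi -c:v libx264 -crf 23 -preset medium -g 250 -pix_fmt yuv420p output.mp4
```

The same level of control is available programmatically through the libav* libraries (libavcodec, libavformat), at the cost of considerably more code.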
I understand that OpenCV is a CV library. We already have OpenCV support, so we were looking at leveraging OpenCV itself for the video support as well. I see that we can pass FPS (frames per second) and quality (0 to 100) parameters to the OpenCV video APIs. I would like to understand whether these are the only parameters we can use to control the video encoding, or whether there are other possibilities.
"I would like to understand if these are the only parameters that we can use to control the video encoding" -- to my knowledge, yes.