First of all, the big difference in your timings might be due to cout, which introduces some cache and buffering effects on grabbing.

After this: on platforms where a CPU tick counter is available, cv::getTickCount() together with cv::getTickFrequency() can accurately measure the execution time of very small code fragments (in the range of hundreds of nanoseconds), but you have to be sure that the duration of the code under test is greater than your clock resolution. This is almost always true when a CPU tick counter is available.

On my Win7/i3, clock resolution = 1/cv::getTickFrequency() ~= 400ns.

To overcome the clock resolution, and to smooth out other effects (cache, registers, ...), a common approach is to average the time over N executions of the code under test: measure the duration of a for loop around it, then divide by N to get the average.
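As a reference, here is a minimal sketch of that averaging pattern (assuming the same OpenCV 2.4 headers used in the rest of this answer; doWork() is just a placeholder for the code under test, and nothing is printed inside the timed loop, to avoid the cout effect mentioned above):

    #include <opencv2/core/core.hpp>
    #include <iostream>

    void doWork() { /* code under test goes here */ }

    int main()
    {
        const int N = 100;                  // number of repetitions to average over
        int64 start = cv::getTickCount();   // tick count before the loop
        for (int i = 0; i < N; i++)
            doWork();
        double totalSec = (cv::getTickCount() - start) / cv::getTickFrequency();
        std::cout << "Average time: " << 1000.0 * totalSec / N << "ms" << std::endl;
        return 0;
    }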
EDIT 2: The code has been refactored to be clearer: #define TEST has been removed, and the tests have been encapsulated in functions. I hope it's better now.

Below are the results using the new code (system: Intel [email protected], Win7/64, OpenCV 2.4.10):
    -----------------------
    TEST MEASUREMENT SYSTEM
        Clock resolution: 405.217ns
        Expected time: 20ms    Measured time: 19.9917ms
    ------------------
    TEST GRABBING FROM CAM
        INFO: try to set cam at: 25fps
        INFO: currently we should grab at: 25fps
        Measuring time needed to grab a frame from camera...
        Expected time: 40ms    Measured time: 49.7196ms~=20fps
        INFO frame size: [640 x 480]
    -------------------------
    TEST GRABBING FROM VIDEO FILE
        INFO: The file has been created at: 20fps
        Measuring time needed to read a frame from video file...
        Expected time: UNAVAILABLE    Measured time: 1.32921ms~=752fps
        INFO frame size: [640 x 480]
Here is the new code:
    // OpenCV and standard headers needed by the code below
    #include <opencv2/opencv.hpp>
    #include <iostream>
    #include <string>

    // Defines a standard _PAUSE(ms) macro for the current platform
    #if __cplusplus >= 201103L // if C++11 is available
    #include <thread>
    #include <chrono>
    #define _PAUSE(ms) (std::this_thread::sleep_for(std::chrono::milliseconds(ms)))
    #elif defined(_WIN32) // if Windows system
    #include <windows.h>
    #define _PAUSE(ms) (Sleep(ms))
    #else // assume this is a Unix system
    #include <unistd.h>
    #define _PAUSE(ms) (usleep(1000 * (ms)))
    #endif
    // Tests the accuracy of our functions
    void TestMeasurementSys()
    {
        int64 start;
        double totalTime = 0, averageTime = 0;
        std::cout << "-----------------------" << std::endl;
        std::cout << "TEST MEASUREMENT SYSTEM" << std::endl;
        double clockResolution = cv::getTickFrequency(); // ticks per second
        std::cout << "\tClock resolution: "
                  << 1000 * 1000 * 1000 / clockResolution << "ns" << std::endl;
        int testTimeMs = 20;
        int count = 0, maxCount = 100;
        for (count = 0; count < maxCount; count++)
        {
            start = cv::getTickCount();
            _PAUSE(testTimeMs);
            totalTime += (cv::getTickCount() - start);
        }
        totalTime /= cv::getTickFrequency(); // seconds
        averageTime = totalTime / count;     // seconds
        std::cout << "\tExpected time: " << testTimeMs << "ms"
                  << "\tMeasured time: " << averageTime * 1000 << "ms" << std::endl;
    }
    /**
     * \brief Measures the time needed to get a frame from a cv::VideoCapture
     * \param [in]cap valid and opened cv::VideoCapture instance
     *
     * \note If we are grabbing from a cam with high fps (>10) and the driver is working fine,
     * some \b simple additional code inside the grab loop (like imshow and waitKey(1))
     * shouldn't introduce a delay, because \c cap>>frame will wait for the driver
     * for the time needed to run at the given fps, so a bit of time spent here
     * doesn't affect the loop duration.
     *
     * In other words, between two consecutive grabs <b>from a camera</b> you have to wait
     * at most 1/fps seconds... so you have time to show the frame in the meantime.
     */
    void TestGrabGeneric(cv::VideoCapture &cap)
    {
        int64 start;
        double totalTime = 0, averageTime = 0;
        cv::Mat frame;
        cap >> frame; // grab one frame outside the measurement to warm up the capture
        int count = 0, maxCount = 100;
        start = cv::getTickCount();
        for (count = 0; count < maxCount; count++)
        {
            cap >> frame;
            if (frame.empty()) // if not successful, break the loop
            {
                std::cout << "\tERROR! an empty frame has been received" << std::endl;
                break;
            }
            // see the note in the function header
            // imshow("Grabbed frame", frame);
            // waitKey(1);
        }
        totalTime = (cv::getTickCount() - start) / cv::getTickFrequency(); // seconds
        averageTime = totalTime / count; // seconds
        std::cout << "\tMeasured time: "
                  << averageTime * 1000 << "ms"
                  << "~=" << cvRound(1 / averageTime) << "fps"
                  << std::endl;
        std::cout << "\tINFO frame size: " << frame.size() << std::endl;
        cap.release();
    }
    /**
     * \brief Measures the time needed to grab a frame from a camera.
     * Tries to set the wanted fps for grabbing and compares it with the measured fps.
     * \note If you can't set \c wantedFps, try to open \c device+CV_CAP_DSHOW
     * or some other CV_CAP_???
     */
    void TestGrabFromCam(int device = 0)
    {
        std::cout << "------------------" << std::endl;
        std::cout << "TEST GRABBING FROM CAM" << std::endl;
        cv::VideoCapture cap;
        double wantedTimeMs, wantedFps;
        double testTimeMs, testFps;
        wantedFps = 25;
        wantedTimeMs = 1000 / wantedFps;
        cap.open(device);
        if (!cap.isOpened())
        {
            std::cout << "\tERROR! Unable to open default camera" << std::endl;
            return;
        }
        // set/get fps
        std::cout << "\tINFO: " << "try to set cam at: " << wantedFps << "fps" << std::endl;
        cap.set(CV_CAP_PROP_FPS, wantedFps);
        testFps = cap.get(CV_CAP_PROP_FPS);
        if (cvRound(testFps) != wantedFps)
        {
            std::cout << "\tWARNING! " << wantedFps << "fps isn't supported by your cam!" << std::endl;
        }
        std::cout << "\tINFO: " << "currently we should grab at: " << testFps << "fps" << std::endl;
        testTimeMs = 1000 / testFps;
        std::cout << "\tMeasuring time needed to grab a frame from camera..." << std::endl;
        std::cout << "\tExpected time: " << testTimeMs << "ms";
        TestGrabGeneric(cap);
    }
    /**
     * \brief Measures time needed to read a frame from video file
     * \param [in]videoFileName an AVI file name used for reading test
     */
    void TestGrabFromVideoFile(const std::string &videoFileName)
    {
        std::cout << "-------------------------" << std::endl;
        std::cout << "TEST GRABBING FROM VIDEO FILE" << std::endl;
        cv::VideoCapture cap;
        double testTimeMs, testFps;
        cap.open(videoFileName);
        if (!cap.isOpened())
        {
            std::cout << "\tERROR! Unable to open " << videoFileName << std::endl;
            return;
        }
        // get fps
        testFps = cap.get(CV_CAP_PROP_FPS);
        std::cout << "\tINFO: " << "The file has been created at: " << testFps << "fps" << std::endl;
        std::cout << "\tMeasuring time needed to read a frame from video file..." << std::endl;
        std::cout << "\tExpected time: UNAVAILABLE";
        TestGrabGeneric(cap);
    }
    int main(int argc, char* argv[])
    {
        TestMeasurementSys();
        TestGrabFromCam();
        TestGrabFromVideoFile("../bin/test_raw.avi");
        std::cout << std::endl << "Press enter to terminate...";
        std::cin.get();
    }
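As the note above TestGrabFromCam says, if the driver ignores the requested fps you can try to open the device through a specific capture backend. Here is a minimal sketch of how main() could be adapted on Windows (CV_CAP_DSHOW selects the DirectShow backend; this is just an illustration of that note, not something I have benchmarked):

    // Variation of main(): open device 0 through the DirectShow backend
    // if the default one ignores CV_CAP_PROP_FPS (see the note above TestGrabFromCam)
    int main(int argc, char* argv[])
    {
        TestMeasurementSys();
        TestGrabFromCam(0 + CV_CAP_DSHOW);   // device 0 via DirectShow
        TestGrabFromVideoFile("../bin/test_raw.avi");
        std::cout << std::endl << "Press enter to terminate...";
        std::cin.get();
        return 0;
    }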
EDIT: clarifying the test results (these were taken with the earlier version of the code).

Results for testing the measurement function with a 10ms pause:

    Clock resolution: 405.217ns
    Total time passed: 999.405ms
    Average time: 9.99405ms // expected 10ms; this must be machine independent

Results grabbing from a webcam @20fps:

    Average time: 49.656ms // expected 50ms; this should be machine independent

Results reading from an AVI file (encoded with DivX):

    Average time: 47.656ms // this will change for different codecs and machines