I have a list of instances that each contain a cv::Mat image and a cv::Mat alpha/mask channel. These are separate Mats because they are generated by different processes.
Before I render (in GL) I need to combine the RGB image and the mask into a single RGBA image. Here is my current function:
void percepUnit::applyAlpha() {

    /* Original split/merge approach:
    std::vector<cv::Mat> channels;
    if (image.rows == mask.rows && image.cols == mask.cols) {
        cv::split(image, channels);      // break image into channels
        channels.push_back(mask);        // append alpha channel
        cv::merge(channels, alphaImage); // combine channels
    }
    */

    // Avoid split/merge: copy the three colour channels and the mask
    // directly into a single 4-channel Mat.
    cv::Mat src[] = {this->image, this->mask};
    int from_to[] = {0,0, 1,1, 2,2, 3,3};
    this->alphaImage = cv::Mat(image.rows, image.cols, CV_8UC4);
    cv::mixChannels(src, 2, &(this->alphaImage), 1, from_to, 4);
}
I just loop through my list of instances and call applyAlpha() on each one before rendering. To work around an OpenCV bug I had to increase the resolution of the Mats to 1280x720, and now this function takes a lot of time (about half the cycles of the whole program).
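For context, the calling code currently looks roughly like this (the container name "units" is just a placeholder for my actual list):

// Illustrative only: "units" stands in for my actual container of percepUnit instances.
for (std::list<percepUnit>::iterator it = units.begin(); it != units.end(); ++it) {
    it->applyAlpha();   // serial: one instance at a time, on one core
}
// ...each alphaImage is then uploaded as a GL texture and rendered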
Is there a way I can speed up this operation?
The list contains a few thousand instances, so it seems some parallelism could help; I have 4 cores and a fast GPU (GeForce 780).
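Would something like the following be a reasonable way to use the 4 cores? It is only a sketch of what I had in mind using cv::parallel_for_; the applyAlphaAll wrapper, the std::list container, and the names are placeholders, and it assumes each percepUnit only touches its own Mats.

#include <list>
#include <vector>
#include <opencv2/core/core.hpp>
// assumes percepUnit (with applyAlpha()) is declared in an included header

// Runs applyAlpha() over a slice of the instances on each worker thread.
class ApplyAlphaBody : public cv::ParallelLoopBody {
public:
    explicit ApplyAlphaBody(std::vector<percepUnit*> &u) : units(u) {}

    void operator()(const cv::Range &range) const {
        for (int i = range.start; i < range.end; ++i)
            units[i]->applyAlpha();   // each instance writes only its own Mats, so no locking
    }

private:
    std::vector<percepUnit*> &units;
};

void applyAlphaAll(std::list<percepUnit> &units) {
    // std::list has no random access, so collect raw pointers first.
    std::vector<percepUnit*> ptrs;
    for (std::list<percepUnit>::iterator it = units.begin(); it != units.end(); ++it)
        ptrs.push_back(&(*it));

    // OpenCV splits the range across its thread pool (TBB/OpenMP, whatever it was built with).
    cv::parallel_for_(cv::Range(0, (int)ptrs.size()), ApplyAlphaBody(ptrs));
}

The pointer vector is only there because cv::parallel_for_ needs an indexable range; if that conversion is itself too costly I could store the instances in a vector in the first place.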