There are two versions of the code: the first is the original Python version from OpenFace, the second is a C++ version I translated myself.
They should produce the same result when using the dlib landmark detector (the same .dat model file is used in both), yet the resulting landmark positions are slightly different.
Here I paste only the simplified, necessary parts.
1. Python version (copied from OpenFace's align-dlib.py and align_dlib.py)
rgb = imgObject.getRGB()

def findLandmarks(self, rgb, bb):
    points = self.predictor(rgb, bb)

def getBGR(self):
    bgr = cv2.imread(self.path)

def getRGB(self):
    bgr = self.getBGR()
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
Then the first two landmark positions are [[ 91. 112.] [ 91. 127.]...
2. Translated C++ version
Mat mat_img = imread(argv[i]);
Mat mat_img_rgb;
cv::cvtColor(mat_img, mat_img_rgb, CV_BGR2RGB);
cv_image<rgb_pixel> cv_img(mat_img_rgb);

shape_predictor sp;  // loaded from the same .dat file as the Python version (deserialize call omitted here)
full_object_detection shape = sp(cv_img, face);
The first two landmark positions are (90 112) (90 126)
I am guessing the reason I get a different result is the conversion from Mat to cv_image, but I have to do this conversion to use the dlib detector. Is this conversion likely the cause of the discrepancy, or is something else going on?
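One way I could test this guess would be to take OpenCV out of the pipeline entirely and let dlib load and decode the image itself. The sketch below is not my current code, just a sanity check under a few assumptions: the predictor .dat path and the image path come in as argv[1] and argv[2], and the face box comes from dlib's frontal_face_detector (in the snippets above the box is produced elsewhere). If this still disagrees with the Python output, the Mat-to-cv_image conversion is probably not the cause.

#include <iostream>
#include <vector>
#include <dlib/image_io.h>
#include <dlib/image_processing.h>
#include <dlib/image_processing/frontal_face_detector.h>

int main(int argc, char** argv)
{
    using namespace dlib;

    // argv[1]: path to the predictor .dat file, argv[2]: path to the test image (assumed layout)
    shape_predictor sp;
    deserialize(argv[1]) >> sp;

    // Let dlib decode the file itself into an RGB image, no cv::Mat involved
    array2d<rgb_pixel> img;
    load_image(img, argv[2]);

    // Get a face bounding box from dlib's own detector for this check
    frontal_face_detector detector = get_frontal_face_detector();
    std::vector<rectangle> faces = detector(img);
    if (faces.empty())
        return 1;

    // Print the first two landmark positions for comparison with the Python output
    full_object_detection shape = sp(img, faces[0]);
    for (unsigned long i = 0; i < 2; ++i)
        std::cout << shape.part(i).x() << " " << shape.part(i).y() << std::endl;
    return 0;
}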