I am preparing data for training an SVM. I use PCA to reduce the dimensionality of the data before feeding it into the SVM, as shown in the code below.
Mat trainData; // Holds the training data; each row is one sample
vector<Mat> histograms; // Row histograms of LBP features
convertVectorToMat(histograms, trainData); // Convert vector of Mat to a single Mat (40 rows, 4096 columns)
PCA pca(trainData, Mat(), PCA::DATA_AS_ROW, classes - 1); // PCA gives (40 rows, classes - 1 = 39 columns)
Mat mean = pca.mean.reshape(1, 1); // PCA mean, kept for projecting test samples later
// Project the data into the PCA feature space
Mat projection = pca.project(trainData);
// Perform LDA on the PCA-projected data
LDA lda(classes - 1);
lda.compute(projection, labels);
Mat_<float> ldaProjected = lda.project(projection); // LDA output is CV_64F; Mat_<float> converts it
normalize(ldaProjected, ldaProjected, 0, 1, NORM_MINMAX, CV_32FC1);
I am passing Mat ldaProjected
to the SVM for training, together with the corresponding labels. My question is: am I doing this right, or should I have passed Mat projection
to the SVM instead? In either case, the SVM gives the same class label for every sample I predict. Kindly advise whether I am preparing my data correctly for training. I intended to use LDA for dimensionality reduction before training a multi-class SVM.