For quick background: I take an AlexNet-like network and pass an image through it. At the FC7 layer, I flatten the activations and concatenate them with a 20-dimensional vector of hand-crafted features. I was able to create a training.prototxt and train the network without issues, and now I am trying to deploy it. Here is the error I get when I run the deploy code:
concat_layer.cpp:38] Check failed: num_axes == bottom[i]->num_axes() (2 vs. 4) All inputs must have the same #axes.
For reference, here is the C++ code. It compiles successfully but fails at runtime with the error above.
#define CPU_ONLY
#include <cstring>
#include <cstdlib>
#include <vector>
#include <string>
#include <iostream>
#include <stdio.h>
#include "caffe/caffe.hpp"
#include "caffe/util/io.hpp"
#include "caffe/blob.hpp"
using namespace caffe;
using namespace std;
int main(int argc, char** argv) {
  // Construct the net in TEST phase from the deploy prototxt;
  // layer shapes are set up and checked during construction.
  boost::shared_ptr<Net<float> > net;
  net.reset(new Net<float>("/path/to/deploy.prototxt", caffe::TEST));
  // Copy the learned weights into the net.
  net->CopyTrainedLayersFrom("/path/to/weights.caffemodel");
  return 0;
}
Lastly, here are the relevant parts of my deploy.prototxt.
name: "Network"
input: "X"
input_dim: 1
input_dim: 1
input_dim: 96
input_dim: 96
input: "XFeat"
input_dim: 1
input_dim: 1
input_dim: 1
input_dim: 20
... Layers applying conv on X here...
# Dropout after norm2
layer {
  name: "drop2"
  type: "Dropout"
  bottom: "norm2"
  top: "drop2"
  dropout_param {
    dropout_ratio: 0.5
  }
}
# Flatten to concatenate with features
layer {
  name: "flatten6"
  type: "Flatten"
  bottom: "drop2"
  top: "flatten6"
}
# Concatenate with XFeat
layer {
  name: "concat6"
  type: "Concat"
  bottom: "flatten6"
  bottom: "XFeat"
  top: "concat6"
  concat_param {
    axis: 1
  }
}
Is there something wrong with what I'm doing? The network trained successfully with this architecture, so I assumed the deploy version should mirror it. I have also tried changing the input_dims for XFeat to [1, 20], but that also fails (albeit at a different spot, with a different message). Could the caffemodel and the deploy.prototxt somehow be declaring conflicting sizes?
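For reference, my two-axis attempt for XFeat looked roughly like this (a sketch from memory, using Caffe's input_shape syntax rather than the legacy input_dim fields, so the exact form I used may have differed slightly):

```protobuf
input: "XFeat"
input_shape {
  dim: 1    # batch size
  dim: 20   # hand-crafted feature vector
}
```

With this variant, XFeat is declared with two axes instead of four, but as noted above, loading still fails for me, just elsewhere.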