@MrEsp, the main goal is that the default usage of TensorFlow should match the default usage of OpenCV. You're right, TensorFlow uses the NHWC data layout by default, but OpenCV uses NCHW. Use blobFromImage to create an NCHW blob from an image and pass it to the imported network.
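For instance, a minimal sketch of that (the image and model file names below are just placeholders, and the preprocessing parameters depend on your model):

import cv2 as cv

img = cv.imread('example.png')  # HWC, BGR image (placeholder file name)
# blobFromImage converts the image to a 4D NCHW blob; scalefactor, size,
# mean and swapRB should be chosen to match the model's training preprocessing.
blob = cv.dnn.blobFromImage(img, scalefactor=1.0, size=(224, 224),
                            mean=(0, 0, 0), swapRB=True, crop=False)
print(blob.shape)  # (1, 3, 224, 224) -- NCHW

net = cv.dnn.readNetFromTensorflow('frozen_graph.pb')  # placeholder model path
net.setInput(blob)
out = net.forward()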
You may find a set of tests generated from TensorFlow. For example, an image -> convolution -> flatten (reshape to a 2-dimensional vector preserving the batch-size dimension) -> fully-connected network:
import tensorflow as tf

inp = tf.placeholder(tf.float32, [1, 2, 3, 4], 'input')  # NHWC: 1x2x3 input with 4 channels
conv = tf.layers.conv2d(inp, filters=5, kernel_size=[1, 1],
                        activation=tf.nn.relu,
                        bias_initializer=tf.random_normal_initializer())
flattened = tf.reshape(conv, [1, -1], 'reshaped')  # flatten to [1, 2*3*5]
biases = tf.Variable(tf.random_normal([10]), name='matmul_biases')
weights = tf.Variable(tf.random_normal([2*3*5, 10]), name='matmul_weights')
mm = tf.matmul(flattened, weights) + biases
save(inp, mm, 'nhwc_reshape_matmul')  # test-generation helper, see the sketch below
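The save call comes from the test-generation script. Roughly, a helper along these lines freezes the graph and stores reference input/output blobs; this is only an illustrative sketch with assumed file-name suffixes, not the actual script:

import numpy as np
import tensorflow as tf

def save(inp, out, name):
    # Evaluate the graph on random data, freeze variables into constants
    # and dump the frozen graph together with reference input/output blobs.
    np_inp = np.random.standard_normal(inp.shape.as_list()).astype(np.float32)
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        np_out = sess.run(out, feed_dict={inp: np_inp})
        graph_def = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), [out.name.split(':')[0]])
    with tf.gfile.FastGFile(name + '_net.pb', 'wb') as f:
        f.write(graph_def.SerializeToString())
    np.save(name + '_in.npy', np_inp)    # NHWC input as TensorFlow sees it
    np.save(name + '_out.npy', np_out)   # reference output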
2) How does readNetFromTensorflow process fully connected and convolutional layers without messing up the ordering?
For common image processing pipelines with convolution layers we can use the TensorFlow nodes' data_format attribute, which indicates the data layout. Depending on the TensorFlow version, the layouts are called NCHW and NHWC or channels_first and channels_last respectively. For this particular example, OpenCV inserts a permutation layer before the reshape, so both the flatten's and the matmul's outputs from OpenCV and TensorFlow are similar. The convolutions' outputs are the same too, but in different layouts.
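One way to check that on the example above is to compare the reference blobs with OpenCV's output; a minimal sketch, assuming the file names produced by the save helper sketched earlier:

import numpy as np
import cv2 as cv

# Reference data produced by the TensorFlow script (assumed file names).
tf_inp = np.load('nhwc_reshape_matmul_in.npy')   # NHWC: [1, 2, 3, 4]
tf_out = np.load('nhwc_reshape_matmul_out.npy')  # [1, 10]

net = cv.dnn.readNetFromTensorflow('nhwc_reshape_matmul_net.pb')
# Feed the same data in NCHW order.
net.setInput(np.ascontiguousarray(tf_inp.transpose(0, 3, 1, 2)))
cv_out = net.forward()                           # [1, 10]

# Because of the permutation inserted before the reshape, the final outputs
# match directly, while an intermediate convolution output would need an
# NCHW -> NHWC transpose to be compared with TensorFlow.
print(np.max(np.abs(cv_out - tf_out)))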
3) Are there any differences in OpenCV 3.4+?
OpenCV evolves quickly, and the newer the version, the more deep learning architectures it can cover. However, all user applications that already worked with older versions should keep working in the same way with the next releases of OpenCV.