Issues with OpenCV 4.2 and TensorFlow 2.0 [Solved] [closed]
I tried to follow this tutorial from the wiki: https://github.com/opencv/opencv/wiki... and to update it for TensorFlow 2.0.
I am using TensorFlow 2.0 and OpenCV 4.2, and I would like to know how to use OpenCV 4.2 with a TensorFlow 2.0 model (a .pb file). I have issues with this script:
import numpy as np
import tensorflow as tf
import cv2 as cv

# Read the graph.
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    print('OK0')
    graph_def = tf.compat.v1.GraphDef()
    print('OK1')
    graph_def.ParseFromString(f.read())
    print('OK2')

with tf.compat.v1.Session() as sess:
    # Restore session
    sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

    # Read and preprocess an image.
    img = cv.imread('img.jpg')
    rows = img.shape[0]
    cols = img.shape[1]
    inp = cv.resize(img, (300, 300))
    inp = inp[:, :, [2, 1, 0]]  # BGR2RGB

    # Run the model
    out = sess.run([sess.graph.get_tensor_by_name('num_detections:0'),
                    sess.graph.get_tensor_by_name('detection_scores:0'),
                    sess.graph.get_tensor_by_name('detection_boxes:0'),
                    sess.graph.get_tensor_by_name('detection_classes:0')],
                   feed_dict={'image_tensor:0': inp.reshape(1, inp.shape[0], inp.shape[1], 3)})

    # Visualize detected bounding boxes.
    num_detections = int(out[0][0])
    for i in range(num_detections):
        classId = int(out[3][0][i])
        score = float(out[1][0][i])
        bbox = [float(v) for v in out[2][0][i]]
        if score > 0.3:
            x = bbox[1] * cols
            y = bbox[0] * rows
            right = bbox[3] * cols
            bottom = bbox[2] * rows
            cv.rectangle(img, (int(x), int(y)), (int(right), int(bottom)), (125, 255, 51), thickness=2)

cv.imshow('TensorFlow MobileNet-SSD', img)
cv.waitKey()
When I try to run this script with different models (the one from the wiki and my own model exported from TensorFlow 2.0, both .pb files), the script exits without any message :/
The same issue occurs when I print f.read().
I think the problem comes from the line below, but without an error message I don't know how to debug it:
graph_def.ParseFromString(f.read())
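For reference, a minimal sketch (assuming the same frozen_inference_graph.pb as in the script) that wraps the parse in a try/except, so a bad file produces an error message instead of a silent exit. Note that a saved_model.pb exported by TensorFlow 2.0 is a SavedModel protobuf, not a frozen GraphDef, so parsing it this way is expected to fail:

import tensorflow as tf
from google.protobuf.message import DecodeError

with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    data = f.read()
print('Read', len(data), 'bytes')  # 0 bytes would point to an empty or wrong file

graph_def = tf.compat.v1.GraphDef()
try:
    graph_def.ParseFromString(data)
    print('Parsed a GraphDef with', len(graph_def.node), 'nodes')
except DecodeError as e:
    # Raised (or only a RuntimeWarning is emitted, depending on the protobuf
    # build) when the bytes are not a serialized GraphDef.
    print('Could not parse the file as a frozen GraphDef:', e)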
Is there a reason why you are explicitly using v1? Have you tried running without compatibility? Another thing you can do is initialize graph_def to None, then after assigning it, print it out (or its type) to see whether an object is even assigned to it.
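A minimal sketch of that suggestion, reusing the file name from the question:

graph_def = None
with tf.io.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())
# If the script reaches this point, printing the type confirms whether a
# GraphDef object was actually assigned.
print(type(graph_def))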
I would like to use the latest version, as this is recommended by my teacher. I tested with version 2.1, and under Linux on the other machine it works; I don't really know why it doesn't work under Windows.
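A quick, generic way to compare the working Linux setup with the failing Windows one is to print the library versions on both machines (nothing project-specific is assumed here):

import tensorflow as tf
import cv2 as cv
import google.protobuf

print('TensorFlow:', tf.__version__)
print('OpenCV:', cv.__version__)
print('protobuf:', google.protobuf.__version__)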
Edit: I tried a complete reinstallation again; it works now, and I only get one warning: test.py:12: RuntimeWarning: Unexpected end-group tag: Not all data was converted graph_def.ParseFromString(std)
Good job, glad you solved it!