How can I use a custom tensorflow model with the CV2 DNN module?
I retrained an object detection model based on Google's Tensorflow object detection API. I exported it as a frozen inference graph. I would like to use it with CV2's DNN module:
```python
cap = cv2.VideoCapture(URL)
cvNet = cv2.dnn.readNetFromTensorflow('graph.pb', 'graph.pbtxt')

while True:
    ret, img = cap.read()
    rows = img.shape[0]
    cols = img.shape[1]
    cvNet.setInput(cv2.dnn.blobFromImage(img, 1.0/127.5, (300, 300),
                                         (127.5, 127.5, 127.5),
                                         swapRB=True, crop=False))
    cvOut = cvNet.forward()
    for detection in cvOut[0, 0, :, :]:
        score = float(detection[2])
        if score > 0.3:
            left = detection[3] * cols
            top = detection[4] * rows
            right = detection[5] * cols
            bottom = detection[6] * rows
            cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)),
                          (23, 230, 210), thickness=2)
    cv2.imshow('img', img)
    if cv2.waitKey(1) == 27:
        exit(0)
```
I get this error:
Const input blob for weights not found in function getConstBlob
From my research, I believe I have to optimize the inference graph, but I can't find any documentation on how to do this.
If anyone could point me in the right direction, it would be very much appreciated.
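For what it's worth, this error usually indicates that graph.pbtxt does not match the frozen graph; for SSD-style models from the Object Detection API, OpenCV ships a helper script (tf_text_graph_ssd.py under samples/dnn) that can generate a matching text graph from the frozen .pb and the pipeline config, which is worth checking for your OpenCV version. Independently, the output-parsing logic can be verified in isolation. A sketch with a fabricated detection blob in OpenCV's SSD output layout (the blob values and frame size below are made up for illustration):

```python
import numpy as np

# Fabricated network output in OpenCV's SSD layout: shape (1, 1, N, 7),
# each row = [batch_id, class_id, score, left, top, right, bottom],
# with box coordinates normalized to [0, 1].
cvOut = np.zeros((1, 1, 2, 7), dtype=np.float32)
cvOut[0, 0, 0] = [0, 1, 0.9, 0.1, 0.2, 0.5, 0.6]   # confident detection
cvOut[0, 0, 1] = [0, 2, 0.1, 0.0, 0.0, 0.1, 0.1]   # below the threshold

rows, cols = 300, 400   # hypothetical frame size (height, width)
boxes = []
for detection in cvOut[0, 0, :, :]:
    score = float(detection[2])
    if score > 0.3:
        # Scale normalized coordinates back to pixel coordinates.
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        boxes.append((int(left), int(top), int(right), int(bottom)))
```

With the blob above, only the first detection survives the 0.3 threshold and `boxes` holds its pixel rectangle.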
See also questions close to this topic
web cam in a webpage using flask and python
I have created a face recognition model using Keras and TensorFlow, and now I am trying to turn it into a web application using Flask and Python. My requirement is that a live webcam feed is displayed on the webpage, and clicking a button should take a picture, save it to a specified directory, and use that picture to recognize the person. If the person is not found in the dataset, a message should be displayed on the webpage saying that an unknown identity was found. I started learning Flask for this, but implementing the requirement has been very difficult for me. Could somebody help me out with this situation?
pandas df - sort on index but exclude first column from sort
I want to sort this df on rows but I want to exclude the first column from the sort so it remains where it is:
```
          Radisson  Marriott    Hilton  Accorhotels       IHG
Category
good job  0.214941   0.40394  0.448931     0.314185  0.375316
```

The desired result:

```
          Radisson    Hilton  Marriott       IHG  Accorhotels
Category
good job  0.214941  0.448931   0.40394  0.375316     0.314185
```
I don't know how to edit my code below to exclude the first column from the sort:

```python
df = df.sort_values(by=[df.index], axis=1, ascending=False)
```
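One way that might work is to split the first column off, sort the remaining columns by the row's values, and re-join them. A sketch using the values from the question (the variable names are mine):

```python
import pandas as pd

# Data matching the question's table.
df = pd.DataFrame(
    {"Radisson": [0.214941], "Marriott": [0.40394], "Hilton": [0.448931],
     "Accorhotels": [0.314185], "IHG": [0.375316]},
    index=["good job"],
)

first = df.iloc[:, [0]]                # column to keep in place
rest = df.iloc[:, 1:]                  # columns to sort
# axis=1 with a row label sorts the columns by that row's values.
rest = rest.sort_values(by="good job", axis=1, ascending=False)
result = pd.concat([first, rest], axis=1)
```

`result` keeps Radisson first and orders the remaining columns by value.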
How do I implement this distribution in my model?
I have an NN model that classifies statements into 3 categories [-1, 0, 1]:
```python
stop = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=15, verbose=1,
                     mode='auto', restore_best_weights=True)

model = Sequential()
model.add(Embedding(num_words, EMBEDDING_DIM, input_length=MAX_SEQUENCE_LENGTH))
model.layers[0].set_weights([embedding_matrix])
model.layers[0].trainable = False
model.add(Dense(300, kernel_regularizer=regularizers.l1(0.000001)))
model.add(Dropout(0.35))
model.add(Dense(200, kernel_regularizer=regularizers.l1(0.000001)))
model.add(Dropout(0.35))
model.add(Dense(100, kernel_regularizer=regularizers.l1(0.000001)))
model.add(LSTM(50, dropout=0.1, recurrent_dropout=0.1))  # kernel_regularizer=regularizers.l1(0.00001)
model.add(Dense(1, kernel_regularizer=regularizers.l1(0.000001)))
model.add(Activation("tanh"))
model.compile(loss="mse", optimizer="rmsprop", metrics=["accuracy"])
```
After training I use model.predict() to "predict" new values and I got this
```python
array([[ 0.82085645],
       [ 0.8304224 ],
       [ 0.8198626 ],
       [ 0.8128166 ],
       [ 0.1621628 ],
       [ 0.74597526],
       [-0.59259003],
       [-0.57140785]], dtype=float32)
```
Ok, here is my first question. How do I interpret this? My first thought was
```python
predictioncm = model.predict(texts_test)
predictioncm1 = []
for p in predictioncm:
    if -0.5 <= p <= 0.5:
        predictioncm1.append(0)
    elif p > 0.5:
        predictioncm1.append(1)
    else:
        predictioncm1.append(-1)
```
But the thing is...when I assign:
```python
predictioncm = model.predict_proba(texts_test)
predictioncm1 = []
for p in predictioncm:
    if 0 <= p <= 0.7999:
        predictioncm1.append(0)
    elif p > 0.7999:
        predictioncm1.append(1)
    else:
        predictioncm1.append(-1)
```
This "fits" perfectly with the outcome that I want (the statements are from a test set, which is why I know the result). So the results are totally skewed towards 1. My second question is: is it possible to assign these thresholds to my primary model so that it produces those values directly, or is there a way to justify this assignment? Or do I have to keep the result from my first thought because it is the "correct" one?
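One note: a single tanh output trained with MSE has no calibrated class boundaries, so any thresholds here are ad hoc; the usual setup for three classes would be a Dense(3) softmax output trained with categorical cross-entropy, where predict returns per-class probabilities directly. If the scalar output is kept, the first thresholding scheme can be written compactly (a sketch; the sample values are taken from the prediction output above):

```python
import numpy as np

# A few of the predicted values from the question.
preds = np.array([0.82085645, 0.8304224, 0.1621628, -0.59259003])

# Map tanh outputs to classes: > 0.5 -> 1, < -0.5 -> -1, otherwise 0.
labels = np.select([preds > 0.5, preds < -0.5], [1, -1], default=0)
```

`np.select` applies the first condition that matches, so values in [-0.5, 0.5] fall through to the default class 0.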
Gather subtensor of padded tensor of variable-sized data in Tensorflow
I have some variable-length sequential data of fixed tensor dimension. Consider a list of tensors s_1, ..., s_b of matrices, with shapes [l_1,m,n], ..., [l_b,m,n]. For example:
```python
s_1 = [ [[1,2],[3,4]], [[5,6],[7,8]] ]
s_2 = [ [[9,10],[11,12]], [[13,14],[15,16]], [[17,18],[19,20]] ]
```
The data, though, is given in a padded form, like the following:
```python
S = [ [[[1,2],[3,4]],   [[5,6],[7,8]],     [[0,0],[0,0]]],
      [[[9,10],[11,12]], [[13,14],[15,16]], [[17,18],[19,20]]] ]
l = [2,3]
```
S is the tensor of padded lists of matrices and l is the 1-D tensor whose i-th entry is the length of the i-th sequence.
Now, I want to extract a tensor given by the concatenation of the list of matrices. The result should be
```python
[ [[1,2],[3,4]], [[5,6],[7,8]], [[9,10],[11,12]], [[13,14],[15,16]], [[17,18],[19,20]] ]
```
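For reference, the intended mask-and-gather can be prototyped in NumPy to check the expected shapes (a sketch of the same operation outside TensorFlow, using the example data above):

```python
import numpy as np

S = np.array([[[[1, 2], [3, 4]],   [[5, 6], [7, 8]],     [[0, 0], [0, 0]]],
              [[[9, 10], [11, 12]], [[13, 14], [15, 16]], [[17, 18], [19, 20]]]])
l = np.array([2, 3])

# NumPy equivalent of tf.sequence_mask(l): True marks valid (unpadded) positions.
mask = np.arange(S.shape[1]) < l[:, None]   # shape (2, 3)

# Boolean indexing concatenates the unpadded matrices, like tf.boolean_mask.
wanted = S[mask]                            # shape (5, 2, 2)
```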
This calculation will be done in a function for the map method of a tf.data.Dataset. What would be the correct approach to do this in TensorFlow? I thought of using tf.sequence_mask but can't quite get the result I want. Some clever use of tf.boolean_mask seems to give the correct result, but I can only get it to work outside of a function passed to map. The following
```python
def _parse_SE(in_example_proto):
    S = ...  # obtain S and l from the record
    l = ...
    return tf.tuple([S, l])

dataset = tf.data.TFRecordDataset("test.txt")
dataset = dataset.map(_parse_SE)
dataset = dataset.padded_batch(BATCH_SIZE, padded_shapes=([None, n, s], [None]))
iterator = dataset.make_initializable_iterator()
[S_bat, l_bat] = iterator.get_next()

# When evaluated, wanted_bat stores the wanted concatenation.
wanted_bat = tf.boolean_mask(S_bat, tf.sequence_mask(l_bat))
```
does work. But if I try to make the modification inside of the mapping function I just get
```python
def _parse_SE(in_example_proto):
    S = ...  # obtain S and l from the record
    l = ...
    wanted = tf.boolean_mask(S, tf.sequence_mask(l))
    return tf.tuple([wanted, l])

dataset = tf.data.TFRecordDataset("test.txt")
dataset = dataset.map(_parse_SE)
dataset = dataset.padded_batch(BATCH_SIZE, padded_shapes=([None, n, s], [None]))
iterator = dataset.make_initializable_iterator()
[wanted_bat, l_bat] = iterator.get_next()
# When evaluated, wanted_bat just contains S_bat of the previous example.
```
Whenever I try to import TensorFlow for GPU, this problem arises. Please help. I have an NVIDIA GTX 1080 GPU.
```
ImportError: Traceback (most recent call last):
  File "C:\Users\Daniel\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
  File "C:\Users\Daniel\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
  File "C:\Users\Daniel\Anaconda3\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
  File "C:\Users\Daniel\Anaconda3\lib\imp.py", line 242, in load_module
    return load_dynamic(name, filename, file)
  File "C:\Users\Daniel\Anaconda3\lib\imp.py", line 342, in load_dynamic
    return _load(spec)
ImportError: DLL load failed: The specified module could not be found.
```
Keras custom layer variables don't have ndim and shape attributes
I am trying to create a custom layer in Keras which has two trainable variables. I pass these two variables to another function, inside which it needs to get the ndim and shape of the variables, but this raises an error. My custom layer code is:
```python
class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create trainable weight variables for this layer.
        assert len(input_shape) >= 3
        input_dim = input_shape[1:]
        print(input_shape)
        self.kernel1 = self.add_weight(shape=(self.output_dim, input_dim),
                                       name='kernel1',
                                       initializer='uniform',
                                       trainable=True)
        print(self.kernel1)
        self.kernel2 = self.add_weight(shape=(self.output_dim, input_dim),
                                       name='kernel2',
                                       initializer='uniform',
                                       trainable=True)
        print(self.kernel2)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        print(x.shape)
        mat1 = np.array(self.kernel1)
        print(K.shape(mat1))
        # print(mat1.ndim)
        mat2 = np.array(self.kernel2)
        print(K.shape(mat2))
        # print(mat2.ndim)
        output1 = Myoperation(x, mat1, 1)
        output = Myoperation(output1, mat2, 2)
        return output

    def compute_output_shape(self, input_shape):
        return (input_shape, self.output_dim)
```
Inside the function Myoperation, it needs to calculate

```python
m2 = list(np.shape(M))  # M is mat1 or mat2
```

The error is:

```
IndexError: list index out of range
```

If we use mat1.shape to check the shape, we get

```
TypeError: Expected binary or unicode string, got
```
OpenCV implementation of YOLO v3 reproduces Exception on a GCP instance
I have successfully implemented object detection from video using the YOLO v3 model with OpenCV 4.0.0. It runs successfully on a local machine, so I wanted to test it on a Google Cloud Platform instance.
I've cloned my project, built OpenCV from source, and launched YOLO v3 object detection. However, this time I caught an exception at the Darknet initialization step:

```python
net = cv2.dnn.readNetFromDarknet(cfg_path, weights_path)
```
Here is also the traceback:
```
Traceback (most recent call last):
  File "/home/username/path_to_app/yolo_object_detection.py", line 21, in run_detection
    net = cv2.dnn.readNetFromDarknet(cfg_path, weights_path)
cv2.error: OpenCV(4.0.0) /home/username/opencv-4.0.0/modules/dnn/src/darknet/darknet_io.cpp:690:
error: (-213:The function/feature is not implemented) Transpose the weights (except for convolutional)
is not implemented in function 'ReadDarknetFromWeightsStream'
```
How can I overcome this exception?
Add transparent logo to 2D 16bit image array python openCV2
I am trying to add a transparent logo to an existing image array. It works, except that I lose the transparency and it is replaced with black. I think this happens when the image is resized. Here is my code:
```python
import cv2
import numpy as np

def add_logo(array):
    logo = cv2.imread("resources/logo/logo.png", 0)
    logo = np.array(logo, dtype=np.uint16)
    logo *= 256
    scale_percent = ...  # value missing in the original post
    width = int(logo.shape[1] * scale_percent / 100)
    height = int(logo.shape[0] * scale_percent / 100)
    dim = (width, height)
    logo = cv2.resize(logo, dim, interpolation=cv2.INTER_AREA)
    # cv2.imwrite('02.png', logo)
    print(array.shape, logo.shape)
    x_offset = y_offset = 50
    array[y_offset:y_offset + logo.shape[0], x_offset:x_offset + logo.shape[1]] = logo
    return array

imgs = get_images_from_xxxx()
edited = add_logo(imgs)
cv2.imwrite('01.png', edited)
```
I also tried this solution, but it does not apply to my context, as it is very important that the original array's shape and data are not changed except for adding the logo.
This is part of the image I'm getting, but I don't need the black background, as the original logo is transparent. (Sorry, I had to crop out the logo!)
And if it helps, here is a print of the array passed to add_logo, it's a numpy array:
```
[[3505 3514 3606 ... 4622 4781    0]
 [3566 3507 3503 ... 4587 4386    0]
 [3522 3503 3453 ... 4584 4434    0]
 ...
 [3435 3428 3428 ... 3721 3779    0]
 [3451 3418 3455 ... 3829 3877    0]
 [   0    0    0 ...    0    0    0]]
```
And the output of print(array.shape, logo.shape):

```
(1721, 912) (378, 304)
```
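One approach that might preserve the transparency (a sketch, not tested against the exact pipeline above): load the PNG with cv2.IMREAD_UNCHANGED so the alpha channel survives, scale the alpha to the 16-bit range, and blend into the region instead of overwriting it. The blending itself is plain NumPy; the blend_region function and the sample values below are mine:

```python
import numpy as np

def blend_region(region, logo, alpha):
    """Alpha-blend a 16-bit single-channel logo into a 16-bit region.

    region, logo: uint16 arrays of the same shape
    alpha: uint16 array, 0 = fully transparent, 65535 = fully opaque
    """
    a = alpha.astype(np.float64) / 65535.0
    out = (1.0 - a) * region + a * logo
    return np.clip(out, 0, 65535).astype(np.uint16)

# Hypothetical 2x2 patch: background around 3500, logo fully opaque on the
# left column and fully transparent on the right column.
region = np.full((2, 2), 3500, dtype=np.uint16)
logo = np.full((2, 2), 60000, dtype=np.uint16)
alpha = np.array([[65535, 0], [65535, 0]], dtype=np.uint16)
blended = blend_region(region, logo, alpha)
```

Transparent pixels keep the original background values instead of turning black, and the array's shape and dtype are unchanged.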
Any ideas are greatly appreciated. :)
Reducing frame-rate with Python OpenCV VideoCapture
I have a Raspberry Pi running Raspbian 9, with OpenCV 4.0.1 installed and a USB webcam attached. The Raspberry is headless; I connect with ssh <user>@<IP> -X. The goal is to get a real-time video stream on my client computer.
The issue is that there is a considerable lag of around 2 seconds. The stream playback is also unsteady: slow, then quick again.

My guess is that SSH just cannot keep up with the camera's default 30 fps. I therefore want to reduce the frame rate, as I could live with a lower frame rate as long as there is no noticeable lag. My own attempts to reduce the frame rate have not worked.

My code, with the parts I tried myself to reduce the frame rate commented out:
```python
import cv2
#import time

cap = cv2.VideoCapture(0)
#cap.set(cv2.CAP_PROP_FPS, 5)

while True:
    ret, frame = cap.read()
    #time.sleep(1)
    #cv2.waitKey(100)
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
What I tried to reduce the frame rate:

I tried cap.set(cv2.CAP_PROP_FPS, 5) (also with 10 or 1). If I then print(cap.get(cv2.CAP_PROP_FPS)), it reports the frame rate I just set, but it has no effect on the playback.

I tried time.sleep(1) in the while loop, but it has no effect on the video.

I tried a second cv2.waitKey(100) in the while loop, as suggested here on Quora: https://qr.ae/TUSPyN , but this also has no effect.
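If the driver ignores CAP_PROP_FPS, one workaround is to keep reading frames (so the capture buffer does not fill up and add lag) but only display every 1/fps seconds. A sketch of the timing helper (the FrameRateLimiter name and the usage comments are mine, not from the question):

```python
import time

class FrameRateLimiter:
    """ready() returns True at most once per 1/fps seconds."""

    def __init__(self, fps):
        self.interval = 1.0 / fps
        self.last = float("-inf")   # first call is always ready

    def ready(self, now=None):
        # 'now' can be injected for testing; defaults to a monotonic clock.
        if now is None:
            now = time.monotonic()
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

# Hypothetical use in the capture loop: read every frame, show only some.
# while True:
#     ret, frame = cap.read()          # always read -> buffer stays fresh
#     if limiter.ready():
#         cv2.imshow('frame', frame)   # display at the reduced rate
```

The key design point is that cap.read() still runs at the camera's native rate; only the display is throttled.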
As pointed out in the comments, time.sleep(1) and cv2.waitKey(1000) should both work, and indeed they did after all. It was necessary to put these at the end of the while loop, after cv2.imshow.
As pointed out in the first comment, it might be better to choose a different setup for streaming media, which is what I am looking at now to get rid of the lag.