Is there any SDK or third-party library that uses a depth module for face recognition?
I want to scan a face using a depth module and then run face recognition on it. I currently use an Intel RealSense D435. Is there any SDK or third-party library that can do this? Free or paid is fine, because I do not want to improve the accuracy myself; that takes a lot of time.
I found FaceCept3D, but it is made for the Kinect. I know the old RealSense SDK had this module, but with SDK 2.0 and later I have to implement it myself with the Point Cloud Library.
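With SDK 2.0 (librealsense2) there is no built-in face module, so the usual route is to grab aligned color and depth frames yourself and feed the color image to a separate face-recognition library, keeping depth for liveness checks or 3D alignment. A minimal capture sketch using the pyrealsense2 wrapper; the `depth_to_meters` helper and the stream settings are illustrative choices, not part of any face API:

```python
def depth_to_meters(raw_value, depth_scale):
    """Convert a raw 16-bit depth reading to meters using the device depth scale."""
    return raw_value * depth_scale


def capture_color_and_depth():
    """Grab one aligned color/depth frame pair from a RealSense D435.

    Requires pyrealsense2 and a connected camera; the pipeline/align calls
    follow the librealsense2 Python examples.
    """
    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    profile = pipeline.start(config)
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
    try:
        align = rs.align(rs.stream.color)  # map depth pixels onto the color image
        frames = align.process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data())
        color = np.asanyarray(frames.get_color_frame().get_data())
        return color, depth, depth_scale
    finally:
        pipeline.stop()


# With a camera attached:
# color, depth, scale = capture_color_and_depth()
# face distance ≈ depth_to_meters(depth[y, x], scale) at a detected face pixel
```

The color frame can then go straight into dlib or face_recognition, while the depth frame gives the metric distance at each detected face pixel.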
See also questions close to this topic
How to find the template matching accuracy
I am doing template matching. What I want to do now is find the accuracy of the template matching.
I have done the template matching, but how do I get the accuracy? I think I have to subtract the matched region from the template image. How do I achieve this?
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt

img = cv.imread('image.jpg', 0)
img1 = img.copy()
template = cv.imread('template.jpg', 0)
w, h = template.shape[::-1]

methods = ['cv.TM_CCOEFF_NORMED', 'cv.TM_CCORR_NORMED']
for meth in methods:
    img = img1.copy()
    method = eval(meth)
    res = cv.matchTemplate(img, template, method)
    min_val, max_val, min_loc, max_loc = cv.minMaxLoc(res)
    top_left = max_loc  # both normalized methods take the maximum as the best match
    bottom_right = (top_left[0] + w, top_left[1] + h)
    cv.rectangle(img, top_left, bottom_right, 255, 2)
    plt.subplot(121)
    plt.imshow(res, cmap='gray')
    plt.title('Matching Result')
    plt.subplot(122)
    plt.imshow(img, cmap='gray')
    plt.title('Detected Point')
    plt.show()
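One way to turn the question into a number is to crop the matched region out of the search image and compare it with the template directly. The normalized score below (mean absolute pixel difference mapped onto [0, 1]) is just one possible metric, not an OpenCV API:

```python
import numpy as np


def match_accuracy(img, template, top_left):
    """Score how well the matched region agrees with the template.

    Crops a template-sized window at top_left = (x, y) from the grayscale
    search image, subtracts it from the template pixel-wise, and returns
    1.0 for a perfect match down to 0.0 for maximal disagreement.
    """
    h, w = template.shape
    x, y = top_left
    region = img[y:y + h, x:x + w].astype(np.float64)
    diff = np.abs(region - template.astype(np.float64))
    return 1.0 - diff.mean() / 255.0


# Example on synthetic data: the template is an exact sub-image, so the score is 1.0.
img = np.arange(100, dtype=np.uint8).reshape(10, 10)
tmpl = img[2:6, 3:8].copy()
print(match_accuracy(img, tmpl, (3, 2)))  # 1.0
```

In the matching loop, `top_left` is `max_loc` for the normalized methods; note also that `res.max()` from `cv.TM_CCOEFF_NORMED` is itself a built-in similarity score in [-1, 1].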
Visualising features learned by InceptionResnetV2
I have trained an InceptionResnetV2 model in Keras to perform a 4-class classification of my images. Now I need to pass a test image through the trained model to see which features the model actually learned. My intention is to visualise up to the last layer, but I started with the first 12 layers, as I am following this tutorial.
My problem starts when I define the inputs and outputs of the model from which I want to extract the features: I get an AttributeError: 'Tensor' object has no attribute '_keras_shape'. I saw a solution here but could not manage to implement it.
Here is my code:
from keras.preprocessing import image
from keras import models
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

img_path = '../../../data/fourclasses/Colo704_PTL/205.jpg'
img = image.load_img(img_path, target_size=(299, 299))
img = image.img_to_array(img)
img = np.expand_dims(img, axis=0)
# img /= 255.
# plt.imshow(img)
# plt.show()
# print(img.shape)

new_model = tf.keras.models.load_model('test_model.model')
# predictions = new_model.predict(img)
# print(predictions)
# label = np.argmax(predictions, axis=1)
# print(label)

# Extracts the outputs of the top 12 layers
layer_outputs = [layer.output for layer in new_model.layers[:12]]
# Creates a model that will return these outputs, given the model input
activation_model = models.Model(inputs=new_model.input, outputs=layer_outputs)
# Returns a list of Numpy arrays: one array per layer activation
activations = activation_model.predict(img)
# first layer activation
first_layer_activation = activations[0]
print(first_layer_activation.shape)
# plotting the fourth channel of the activation of the first layer
plt.matshow(first_layer_activation[0, :, :, 4], cmap='viridis')
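The `_keras_shape` error typically appears when tensors from a `tf.keras` model (loaded with `tf.keras.models.load_model`) are handed to the standalone `keras` package (`models.Model`); the two libraries use different tensor metadata. A hedged sketch of the usual fix, building the activation model entirely with `tf.keras` (the helper names here are illustrative):

```python
def first_n_outputs(layers, n):
    """Collect the output tensors of the first n layers of a model."""
    return [layer.output for layer in layers[:n]]


def build_activation_model(model_path, n_layers=12):
    """Load a saved model and wrap it so predict() returns per-layer activations.

    Uses tf.keras throughout; mixing `keras.models.Model` with a model loaded
    via `tf.keras.models.load_model` is what raises the '_keras_shape' error.
    """
    import tensorflow as tf

    model = tf.keras.models.load_model(model_path)
    outputs = first_n_outputs(model.layers, n_layers)
    return tf.keras.models.Model(inputs=model.input, outputs=outputs)


# activation_model = build_activation_model('test_model.model')
# activations = activation_model.predict(img)  # list: one array per layer
```

The key point is simply that `load_model` and `Model` must come from the same package.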
Assertion failed (scn == 3 || scn == 4) in cvtColor as I try the Lucas-Kanade method. What could be the reason for this?
I am trying the Lucas-Kanade method as given in the example here. But when I run the following code, I get an error.
import numpy as np
import cv2 as cv

cap = cv.VideoCapture('sophia.avi')
# params for ShiTomasi corner detection
feature_params = dict(maxCorners=100, qualityLevel=0.3, minDistance=7, blockSize=7)
# Parameters for lucas kanade optical flow
lk_params = dict(winSize=(15, 15), maxLevel=2,
                 criteria=(cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 0.03))
# Create some random colors
color = np.random.randint(0, 255, (100, 3))
# Take first frame and find corners in it
ret, old_frame = cap.read()
old_gray = cv.cvtColor(old_frame, cv.COLOR_BGR2GRAY)
p0 = cv.goodFeaturesToTrack(old_gray, mask=None, **feature_params)
# Create a mask image for drawing purposes
mask = np.zeros_like(old_frame)
while(1):
    ret, frame = cap.read()
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # calculate optical flow
    p1, st, err = cv.calcOpticalFlowPyrLK(old_gray, frame_gray, p0, None, **lk_params)
    # Select good points
    good_new = p1[st == 1]
    good_old = p0[st == 1]
    # draw the tracks
    for i, (new, old) in enumerate(zip(good_new, good_old)):
        a, b = new.ravel()
        c, d = old.ravel()
        mask = cv.line(mask, (a, b), (c, d), color[i].tolist(), 2)
        frame = cv.circle(frame, (a, b), 5, color[i].tolist(), -1)
    img = cv.add(frame, mask)
    cv.imshow('frame', img)
    k = cv.waitKey(30) & 0xff
    if k == 27:
        break
    # Now update the previous frame and previous points
    old_gray = frame_gray.copy()
    p0 = good_new.reshape(-1, 1, 2)
cv.destroyAllWindows()
cap.release()
The error says:
OpenCV(3.4.1) Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /io/opencv/modules/imgproc/src/color.cpp, line 11147
Traceback (most recent call last):
  File "test.py", line 23, in <module>
    frame_gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
cv2.error: OpenCV(3.4.1) /io/opencv/modules/imgproc/src/color.cpp:11147: error: (-215) scn == 3 || scn == 4 in function cvtColor
What could be the reason for this?
I took 4 PNG images and used ffmpeg to combine them into a video to be tested with the Lucas-Kanade method, as given in the example.
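This assertion usually fires because `cap.read()` returns `ret == False` once the short 4-frame video runs out, so `frame` is `None` (zero channels) by the time it reaches `cvtColor`. A small guard, such as the hypothetical helper below, checked before every conversion avoids the crash:

```python
def is_convertible_bgr(frame):
    """Return True only for frames that cvtColor(BGR2GRAY) will accept:
    a real image array with 3 (BGR) or 4 (BGRA) channels."""
    return (frame is not None
            and getattr(frame, "ndim", 0) == 3
            and frame.shape[2] in (3, 4))


# Inside the capture loop, stop cleanly instead of converting a missing frame:
# ret, frame = cap.read()
# if not ret or not is_convertible_bgr(frame):
#     break
```

With only 4 frames, the loop reaches the end of the video almost immediately, which is why the error appears right away.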
MemoryError: bad allocation when PyCharm shows barely any memory use
So I am following the tutorial at https://www.pyimagesearch.com/2018/06/18/face-recognition-with-opencv-python-and-deep-learning/ in a PyCharm environment. When I run the encode-faces file, it fails with this error.
Traceback (most recent call last):
  File "Encoding_Faces.py", line 29, in <module>
    boxes = face_recognition.face_locations(rgb, model=args["detection_method"])
  File "C:\Users\my name\AppData\Local\Programs\Python\Python36-32\Webcam_Face_Detect\lib\site-packages\face_recognition\api.py", line 116, in face_locations
    return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
  File "C:\Users\my name\AppData\Local\Programs\Python\Python36-32\Webcam_Face_Detect\lib\site-packages\face_recognition\api.py", line 100, in _raw_face_locations
    return cnn_face_detector(img, number_of_times_to_upsample)
MemoryError: bad allocation
But the memory usage shown at the bottom right of the screen is only around 200 of 4096M. I increased the memory limit from 750M, but to no avail. Strangely, the error occurred on the very first photo. My images are around 200 KB each, at 1920 by 1080, 17 images in total. My computer has no GPU, so I am not sure whether that is the problem.
I checked Task Manager as well, and memory usage was about 50% when the program crashed.
My computer is an HP Spectre x360, i5 6th gen, 8 GB RAM. It is 2 years old, if that is important.
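The path in the traceback (`Python36-32`) shows a 32-bit Python, which caps each process at roughly 2 GB of address space regardless of the 8 GB of system RAM; the CNN detector on a 1920x1080 image easily exceeds that, which is consistent with the crash on the first photo. Besides switching to 64-bit Python, shrinking the input helps. A crude, dependency-free stride-based downscale sketch (a real script would normally use `cv2.resize` for better quality):

```python
def downscale_to_max_side(img, max_side=800):
    """Downscale an HxW(xC) image array by integer striding so that its
    longest side is at most max_side pixels. Crude nearest-pixel sampling;
    cv2.resize with interpolation is the usual choice in practice."""
    h, w = img.shape[:2]
    step = -(-max(h, w) // max_side)  # ceiling division
    return img[::step, ::step]
```

A 1920x1080 frame comes out at 640x360, which the CNN detector can handle in a far smaller memory footprint.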
How to recognize faces from a video using dlib
I want to recognize multiple faces from a camera video using dlib on the Android platform, but I have been unable to do this. How can it be done?
How to run Python code on an AWS server via an Android app
I am new to Python and AWS. In my current project I have to develop an Android app using Kivy, with some functions implemented in Python. However, the computation is quite intensive, so I am thinking of offloading it: all the computation work would be done on AWS, and once it finishes, the result would be sent back to the Android app. Can someone tell me how to run my Python code on an AWS server? My project is about face recognition: when the user clicks one button, the app connects to the cloud server; when they click the other button, it uploads a test image to the server, where all the computation algorithms run.
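One common pattern for this is to expose the Python code on the AWS instance behind a small HTTP API (Flask, or even the standard library) and have the Android app POST the image to it. The action names and the dispatch helper below are purely illustrative, not any AWS API:

```python
import json


def handle_request(action, payload):
    """Dispatch an app request to the server-side Python code.

    action:  'connect' or 'recognize' (stand-ins for the app's two buttons;
             the names are made up for this sketch).
    payload: for 'recognize', the uploaded image bytes.
    Returns a JSON string the Android app can parse.
    """
    if action == "connect":
        return json.dumps({"status": "ok"})
    if action == "recognize":
        # Here the real face-recognition algorithm would run on the image bytes.
        size = len(payload or b"")
        return json.dumps({"status": "ok", "received_bytes": size})
    return json.dumps({"status": "error", "reason": "unknown action"})
```

On AWS this handler would typically sit behind Flask/Gunicorn on an EC2 instance (or a Lambda behind API Gateway), and the Kivy app would call it over HTTPS with a standard HTTP client.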
Trouble building the ROS wrapper for the Intel RealSense camera
I am having trouble building the ROS wrapper for the Intel RealSense camera.
This is the error message I am seeing:
IOError: could not find ddynamic_reconfigure among message packages. Does that package have a <depend> on message_generation in its package.xml?
I am following the instructions here. I get the above error when I run this command:
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
I installed the RealSense SDK 2.0 from the Debian package. I verified the SDK install by running
I am working on ROS Kinetic, on Ubuntu 16.04.
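That error means catkin cannot find the ddynamic_reconfigure package in the workspace or the ROS install. A common fix is to install it or clone it next to the wrapper; the apt package name and workspace path below are assumptions for a standard Kinetic setup:

```shell
# Option 1: install the released package (name assumed for Kinetic)
sudo apt-get install ros-kinetic-ddynamic-reconfigure

# Option 2: clone it into the catkin workspace next to realsense-ros
cd ~/catkin_ws/src
git clone https://github.com/pal-robotics/ddynamic_reconfigure.git
cd ~/catkin_ws
catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
```

Either way, ddynamic_reconfigure must be on the workspace path before `catkin_make` runs again.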
Intel RealSense D435i NDVI processing
Can the Intel RealSense D435i be used for NDVI (Normalized Difference Vegetation Index) processing to monitor crop health?
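For reference, NDVI itself is just a per-pixel ratio of near-infrared and red reflectance, NDVI = (NIR - Red) / (NIR + Red); whether the D435i qualifies hinges on treating its IR imagers as an NIR band, which is not a calibrated reflectance measurement. A minimal sketch of the computation, assuming you already have co-registered NIR and red arrays from some source:

```python
import numpy as np


def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), per pixel, in [-1, 1].

    nir, red: co-registered arrays of reflectance or raw intensity.
    The tiny epsilon avoids division by zero on dark pixels.
    """
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + 1e-12)


# Healthy vegetation reflects strongly in NIR and absorbs red,
# so values near +1 indicate dense, healthy canopy.
```

The formula is standard; the open question for the D435i is the sensor side, since its IR stream is driven by an active projector rather than a filtered NIR band.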
How to use the Point Cloud Library in Visual Studio with CMake nowadays?
I need to grab the point cloud data from Intel RealSense cameras and write it to a .ply or .asc file to generate a 3D model later.
PCL has code to do this, but I have tried a number of ways to import the library and none worked well.
I followed this link, but it gave me errors about Boost, even though Boost is already installed.
If anyone knows another way to extract the point cloud data from the depth cameras to generate the 3D model, I would be very thankful.
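On Windows the usual modern route is installing PCL through vcpkg (`vcpkg install pcl:x64-windows`, then pointing CMake at the vcpkg toolchain file) or the PCL all-in-one installer, and letting `find_package` pull in Boost and the other dependencies itself; the Boost errors usually mean PCL was found by hand-written paths instead. A minimal CMakeLists.txt sketch following PCL's own documented pattern (project and file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.10)
project(realsense_to_ply)

# Config-mode find: PCLConfig.cmake locates Boost/Eigen/FLANN for you.
find_package(PCL 1.8 REQUIRED COMPONENTS common io)

include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

add_executable(realsense_to_ply main.cpp)
target_link_libraries(realsense_to_ply ${PCL_LIBRARIES})
```

With this in place, `pcl::io::savePLYFile` from the `io` component can write the grabbed cloud to the .ply file.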