How to interface a Baumer camera or Kinect v2 with OpenCV?
I have a program that works with a webcam in OpenCV, but I am finding it difficult to use OpenCV with a Kinect or Baumer camera. How can I interface a Baumer camera or Kinect v2 with OpenCV?
See also questions close to this topic
cv2.fillPoly strange results
Trying to rasterize a simple polygon using cv2.fillPoly, I get a strange result: the count of non-zero pixels is 10201, but it should be 100*100.
import cv2
import numpy as np

pts = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [0, 0]])
img = np.zeros((256, 256))
vertices = np.array([pts], dtype=np.int32)
mask = cv2.fillPoly(img, vertices, color=255)
print('np.count_nonzero(mask)', np.count_nonzero(mask))
Get info about coloured pixels in a gray image (Python, OpenCV)
I have a small RGB image and I convert it to gray.
import cv2

original = cv2.imread('im/auto5.png')
print(original.shape)    # (27, 30, 3)
print(original[13, 29])  # [254 254 254]
orig_gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
print(orig_gray.shape)   # (27, 30)
Does this array still contain the information about white and black pixels, or is it lost? What do these numbers mean?
In an RGB image a pixel is a colour triple like [254, 254, 254]. But what does the single number per pixel mean in my grayscale image? I want to count the white pixels for my recognition step.
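In a grayscale image each pixel is a single intensity from 0 (black) to 255 (white), so the information needed to count white pixels survives the conversion. A sketch of counting them on a toy array — the cutoff of 250 for "white" is an arbitrary assumption to tune:

```python
import numpy as np

# 8-bit grayscale: 0 = black, 255 = white, values between are greys
gray = np.array([[0, 250, 255],
                 [254, 10, 255]], dtype=np.uint8)

# count pixels at or above the chosen "white" cutoff
white = np.count_nonzero(gray >= 250)
print(white)  # 4
```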
How to extract only the name of the car from an image?
For the image shown below, I want to extract the name of the car, i.e. "brio".
I tried the following code, but it produces no output.
import cv2
import numpy as np
import pytesseract

img = cv2.imread('car.jpg', cv2.IMREAD_COLOR)
pic = img[350:550, 200:1000]
gray = cv2.cvtColor(pic, cv2.COLOR_BGR2GRAY)
cv2.imwrite("../opencv/graypic.jpg", gray)
config = '-l eng --oem 1 --psm 3'
text = pytesseract.image_to_string(gray, config=config)
print(text)
cv2.waitKey(0)
cv2.destroyAllWindows()
Creating a video in real time from Kinect?
I want to make an app for a stand at an expo; it's something like this:
The approach I was considering is to make the original video with transparency where the camera image will be added, compute for every frame the mean position where the alpha is 0 in the original video, then scale the camera image and place it at that point behind the original.
I would like to know if someone has another, better approach.
Kinect v2 with OpenNI 2, displayed with OpenCV 3.1.0
I am a newbie in programming and even more so in computer vision. I have been assigned a university project to detect objects using the Kinect v2 sensor.
I am trying to open the Kinect v2 sensor via OpenCV 3.1.0 with the VideoCapture API, using device.open(CAP_OPENNI2) to open the Kinect RGB stream and retrieve it into an OpenCV Mat.
Before this, I built OpenNI from the source available in Occipital's GitHub repo: https://github.com/occipital/OpenNI2/tree/kinect2/Source/Drivers
It includes a Kinect2 driver, and I successfully built the Kinect2.dll binaries.
I then built OpenCV 3.1.0 from source with the flag "WITH_OPENNI2" and pointed the include and lib paths at the OpenNI build containing the Kinect2 driver.
Although the OpenCV build was successful, I had all the binaries, and I linked them in VS2013 together with the OpenNI binaries, I still couldn't open the Kinect 2 sensor.
Has anybody succeeded in doing so?
Kinect v2 depth resolution
I've been searching for a long time for the specs of the Kinect v2, like those given here for the Kinect v1. I'm especially interested in the depth resolution of the Kinect v2.
I'm also quite surprised that such data is so hard to find for Kinect-like sensors (I found the RealSense depth accuracy in a forum, for example). Why is the resolution so hard to find?
NB: by resolution I mean the smallest variation of the measurand that the sensor can measure.
So if anyone knows where I can find these specs for the Kinect v2, thank you in advance !