Kinect v2 depth resolution
I've been searching for a long time for the specs of the Kinect v2, like the ones given here for the Kinect v1. I'm especially interested in the depth resolution of the Kinect v2.
I'm also quite surprised that such data is so hard to find for Kinect-like sensors (I found the RealSense depth accuracy in a forum, for example). Why is the resolution so hard to find?
NB: by resolution I mean the smallest variation of the measurand that the sensor can measure.
So if anyone knows where I can find these specs for the Kinect v2, thank you in advance!
See also questions close to this topic

Displayed Numerical Precision in R / RStudio
I'm having a problem where the displayed precision of a numerical result sometimes reverts to a rounded-off integer format. For instance, some days, a simple calculation like this generates the expected decimal-precision result:
    > x <- 344.5 - .25
    > x
    [1] 344.25
But then I can come back another day, try again and get:
    > x <- 344.5 - .25
    > x
    [1] 344
I've verified via subtraction of terms that the hidden precision is still there: it may display 344, for instance, but the value 344.25 is still stored in the variable.
I can't figure out what's changing (or how to control it) so that it just stays one way consistently, preferably with some displayed precision to the right of the decimal point.
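In R, the number of significant digits shown is governed by `options(digits = ...)` (default 7), so a session where `digits` was lowered will print 344 while still storing 344.25. As a language-neutral illustration of the displayed-vs-stored distinction, here is a minimal Python sketch (values mirror the example above):

```python
# Displayed precision is a formatting choice; the stored value keeps
# full floating-point precision regardless of how it is printed.
x = 344.5 - 0.25

shown = format(x, ".0f")   # display with zero decimal places
print(shown)               # prints "344"
print(x == 344.25)         # but the stored value is untouched: True
```

In R itself, `options(digits = 7)` restores the default display width without changing any stored values.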

Computing sample variance using sum of squares gives absurd results
Can someone explain what's going on in case 2?
    n <- 20
    x <- 1:n
Case 1
    c( sum((x - mean(x)) ^ 2), sum(x ^ 2) - n * mean(x) ^ 2 )
    #[1] 665 665
Case 2
    a <- 1e+10
    x_new <- x + a
    c( sum((x_new - mean(x_new)) ^ 2), sum(x_new ^ 2) - n * mean(x_new) ^ 2 )
    #[1] 665 0
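What happens in case 2 is catastrophic cancellation: `sum(x^2)` and `n * mean(x)^2` are both around 2e21, so their true difference of 665 is far below the resolution of a double at that magnitude. The same effect can be sketched in plain Python (standard library only; the numbers mirror the R example):

```python
def sum_sq_two_ways(xs):
    """Two-pass (stable) vs. textbook (cancellation-prone) sum of squares."""
    xs = list(xs)
    n = len(xs)
    m = sum(xs) / n
    two_pass = sum((x - m) ** 2 for x in xs)        # subtract the mean first
    textbook = sum(x * x for x in xs) - n * m ** 2  # difference of two huge terms
    return two_pass, textbook

print(sum_sq_two_ways(range(1, 21)))
# (665.0, 665.0) -- both forms agree for small values

print(sum_sq_two_ways([x + 1e10 for x in range(1, 21)]))
# two-pass still gives 665.0; the textbook form loses all the
# significant digits and returns garbage (often exactly 0)
```

R's `var()` centers the data before squaring (the two-pass style), which is why it keeps working after the shift.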

Floatingpoint number system in R
Let us consider the floatingpoint number system with base b, precision p and exponent range from emin to emax. How many different nonzero normalized numbers can we find? What are the smallest and largest normalized positive numbers?
For a p-precision number, the exponent ranges from 0 to 2^(n-p) and is interpreted by subtracting the bias for an n-bit exponent to get an exponent value in the range from emin to emax. So there are 2^(n-p) nonzero normalized numbers. Is that true? How can I find the smallest and the largest one?
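The standard counting argument gives 2·(b−1)·b^(p−1)·(emax−emin+1) nonzero normalized numbers (sign, times b−1 choices for the leading digit, times b^(p−1) trailing significands, times the number of exponents); the 2^(n−p) figure in the question counts only exponent-field patterns. The smallest positive normalized number is b^emin and the largest is (b − b^(1−p))·b^emax. A quick Python check of those two formulas against IEEE 754 doubles (b = 2, p = 53, emin = −1022, emax = 1023):

```python
import sys

b, p, emin, emax = 2, 53, -1022, 1023  # IEEE 754 binary64 parameters

# sign * leading-digit choices * trailing-digit choices * exponent choices
count = 2 * (b - 1) * b ** (p - 1) * (emax - emin + 1)

smallest = float(b) ** emin                             # b^emin
largest = (b - float(b) ** (1 - p)) * float(b) ** emax  # (b - b^(1-p)) * b^emax

print(count)
print(smallest == sys.float_info.min)  # True
print(largest == sys.float_info.max)   # True
```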

Kinect not connecting to PC / only registered as audio device
I am trying to get my Kinect running with my PC, and therefore installed and updated all the drivers and the latest SDK. Now when I connect the Kinect, it is registered, but only as an audio device. When the dialog asking what to do with the device opens, the only option is "Kinect", which keeps "downloading" forever. I have plugged it into the SS USB 3 port and cross-checked every piece of hardware with a replacement. My PC is rather new and running Windows 10, and both Kinects are around 3 years old. What could have gone wrong?

Depth Camera Intrinsics for Kinect v2
I'm trying to get the camera intrinsics for the Kinect v2 using C#. I'm pretty new to Visual Studio, C# and Kinect v2, and the lack of a detailed official tutorial is driving me crazy (if there's one, please let me know).
I know there's a function called GetDepthCameraIntrinsics that returns calibration data, but how do I store that data? (What type does the variable have to be to store it?)

Get Image from gl Video Frame (Java for kinect)
I am trying to use the Kinect 360 with OpenCV in Java, so I use the J4K (Java for Kinect) library. I need to get an Image, or a Mat for OpenCV, from a VideoFrame:

    byte[] img = myKinect.getColorFrame();

This returns a byte array, but I can't convert it using a ByteArrayInputStream because that yields a null image. So I can only use VideoFrame, as in the library's examples. But I need to get an Image or Mat from it, and I don't know how to do that. Help please, or tell me about another Kinect library. link to description of the VideoFrame class
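A likely reason the ByteArrayInputStream route yields null is that the byte array holds raw pixel data rather than an encoded image file (PNG/JPEG), so an image decoder finds no header to parse. The distinction can be sketched in Python (pure illustration with a made-up buffer; the 4-bytes-per-pixel layout is an assumption, and J4K's real format may differ):

```python
def raw_to_rows(buf, width, height, bpp=4):
    """Reinterpret a flat buffer of raw pixels as rows of bpp-byte
    tuples -- no decoding involved, just reshaping."""
    rows = []
    for y in range(height):
        row = []
        for x in range(width):
            i = (y * width + x) * bpp
            row.append(tuple(buf[i:i + bpp]))
        rows.append(row)
    return rows

# A 2x2 "image" at 4 bytes per pixel: reshaping works on raw bytes,
# where a decoder expecting a PNG/JPEG header would return nothing.
buf = bytes(range(16))
print(raw_to_rows(buf, 2, 2)[1][0])  # (8, 9, 10, 11)
```

The equivalent move in Java would be to copy the raw bytes into an image's pixel buffer directly instead of running them through a decoder.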
recursion counter error (depth of the recursion) Java
I've been asked to compute a specific number's 9-degree by finding the depth of the recursion. I've been getting wrong results for some inputs and was wondering why; an example from my main class is below.
For example, let 99999 be the input N. Now sum = 9+9+9+9+9 = 45. Since 45 has more than one digit, again sum = 4+5 = 9. Here we get 9 (one digit), so we stop. We reached 9 in two steps: • 1st step: summing the digits once • 2nd step: summing the digits of that sum again. This continues until we get 9 (a single digit), and the number of steps needed is the desired 9-degree of N. So for the above example, the 9-degree is 2.
The following is my attempt at calculating the depth (the depth is the variable count in my code):
    public static BigInteger divideBy9(BigInteger number) {
        // base case
        if (number.equals(new BigInteger("0"))) {
            return new BigInteger("0");
        }
        // if it's not a single digit
        if (!(number.remainder(new BigInteger("10")).equals(new BigInteger("0")))) {
            if (!(number.equals(new BigInteger("9")))) {
                // if not 9, set counter to zero
                count = 0;
            }
            count++;
            // keep adding numbers until it's a single digit
            return number.remainder(new BigInteger("10"))
                         .add(divideBy9(number.divide(new BigInteger("10"))));
        }
        return new BigInteger("0");
    }
This is my main class:
    // here it should be a 9-degree of 3 but it gives me 2
    System.out.println(divideBy9(new BigInteger("999999999999999999999")));
    System.out.println(count);
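For reference, the step count described above is the additive persistence of the number. An iterative Python sketch of what the recursive Java is trying to track (the function name is mine, not from the assignment):

```python
def nine_degree(n):
    """Count digit-sum passes until a single digit remains
    (the 'additive persistence' of n)."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))  # one full digit-sum pass
        steps += 1
    return steps

print(nine_degree(99999))          # 2   (99999 -> 45 -> 9)
print(nine_degree(int("9" * 21)))  # 3   (-> 189 -> 18 -> 9)
```

Comparing against this, note that a static counter reset *inside* the recursion gets clobbered partway through a digit-sum pass, which is one place to look for the off-by-one.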

Object size relation with depth map
I have the depth map of an image, and I want to place a new object into this image. In other words, I need to scale the object using the depth-map information before placement. Can you recommend any project, paper, or idea related to this?
Thanks.
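On the scaling part: under a simple pinhole-camera assumption, apparent size is inversely proportional to depth, so the depth map directly gives a relative scale factor. A minimal sketch (the names and the pinhole assumption are mine):

```python
def scaled_size(ref_size_px, ref_depth, target_depth):
    """Pinhole model: apparent size scales as 1 / depth.

    ref_size_px  -- object's size in pixels at a known reference depth
    ref_depth    -- depth at which that size was measured
    target_depth -- depth (read from the depth map) at the placement point
    """
    return ref_size_px * ref_depth / target_depth

# An object 100 px tall at 2 m appears 50 px tall when placed at 4 m.
print(scaled_size(100, 2.0, 4.0))  # 50.0
```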

Why am I unable to run coins with a single arg?
We have been given coins.ss, which we are unable to change. We were then given a generic search function that we had to convert to a curried version. Finally, we had to provide four additional functions: two searches and two merges, one each for breadth-first and depth-first. After writing code which I believe functions the way it should, I am unable to run coins with a single arg. I am new to Scheme, so obviously I am doing something wrong. Please note I do not need code, just an explanation of what is happening.
The exact error is:
    > (coin-depth-first 48)
    . . ...2 441/search.ss:67:6: arity mismatch;
     the expected number of arguments does not match the given number
      expected: 3
      given: 1
      arguments...: '(48 ())
Coins.ss
    ;; There are 7 kinds of old British coins
    (define old-british-coins '(120 30 24 12 6 3 1))

    ;; Or, you can do the same for US coins
    (define us-coins '(100 50 25 10 5 1))

    ;; Here, we will do the old British coins
    (define *coins* old-british-coins)

    ;; Is a state the goal state?
    (define goal?
      (lambda (state)
        (zero? (car state))))

    ;; returns children of a state
    (define extend
      (lambda (state visited)
        (let ((coins (applicable-coins state visited *coins*)))
          (map (lambda (coin)
                 (list (- (car state) coin)
                       (append (cadr state) (list coin))))
               coins))))

    ;; find all applicable coins from a state
    (define applicable-coins
      (lambda (state visited coins)
        (cond ((null? coins) '())
              ((<= (car coins) (car state))
               (if (visited? state visited (car coins))
                   (applicable-coins state visited (cdr coins))
                   (cons (car coins)
                         (applicable-coins state visited (cdr coins)))))
              (else (applicable-coins state visited (cdr coins))))))

    ;; see if a state has been visited before
    (define visited?
      (lambda (state visited coin)
        (cond ((null? visited) #f)
              ((= (- (car state) coin) (caar visited)) #t)
              (else (visited? state (cdr visited) coin)))))

    ;; pretty-print a state
    (define pretty-print-path
      (lambda (path)
        (pretty-print-state (car path))))

    (define pretty-print-state
      (lambda (state)
        (let ((change (car state))
              (coins (cadr state))
              (total (apply + (cadr state))))
          (printf "===> Total of ~a paid with ~a, with remainder of ~a <===~%"
                  total coins change))))

    ;; customize the generic depth-first-search for coin problem
    (define coin-depth-first-search
      (depth-first-search extend goal? pretty-print-path))

    ;; instance of a coin problem using depth-first search
    (define coin-depth-first
      (lambda (amount)
        (coin-depth-first-search (list amount '()))))

    ;; customize the generic breadth-first-search for coin problem
    (define coin-breadth-first-search
      (breadth-first-search extend goal? pretty-print-path))

    ;; instance of a coin problem with breadth-first search
    (define coin-breadth-first
      (lambda (amount)
        (coin-breadth-first-search (list amount '()))))
Search.ss
    ;; only the following 2 procedures will be provided
    (provide depth-first-search breadth-first-search)

    ;; generic search algorithm
    (define search
      (lambda (merge-queue)
        (lambda (extend goal? print-path)
          (lambda (init-state)
            (letrec ((search-helper
                      (lambda (queue visited)
                        (cond ((null? queue) #f)
                              ((goal? (caar queue))
                               (begin (print-path (car queue)) (car queue)))
                              (else
                               (let ((successors (extend (caar queue) visited)))
                                 (cond ((null? successors)
                                        (search-helper (cdr queue) visited))
                                       (else
                                        (let ((new-paths (extend-path successors (car queue))))
                                          (search-helper (merge-queue queue new-paths)
                                                         (append successors visited)))))))))))
              (search-helper (list (list init-state))   ; initial queue
                             (list init-state)))))))    ; initial visited

    (define extend-path
      (lambda (successors path)
        (if (null? successors)
            '()
            (cons (cons (car successors) path)
                  (extend-path (cdr successors) path)))))

    ;; merge new extended paths to queue for depth first search
    ;; uncomment and define your merge for depth first search
    (define depth-first-merge
      (lambda (queue new-paths estimate)
        (append new-paths queue)))

    ;; merge new extended paths to queue for breadth first search
    ;; uncomment and define your merge for breadth first search
    (define breadth-first-merge
      (lambda (queue new-paths estimate)
        (append queue new-paths)))

    ;; customize the generic search for depth first search
    ;; uncomment and define your depth-first-search in terms of your
    ;; curried version of search and depth-first-merge
    (define depth-first-search
      (lambda (extend goal? print-path)
        (search (depth-first-merge extend goal? print-path))))

    ;; customize the generic search for breadth first search
    ;; uncomment and define your breadth-first-search in terms of your
    ;; curried version of search and breadth-first-merge
    (define breadth-first-search
      (lambda (extend goal? print-path)
        (search (breadth-first-merge extend goal? print-path))))

    ;;;; DO NOT REMOVE THE FOLLOWING LINE
    )
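Since the question is about how the curried call chain behaves, here is a minimal Python illustration of the shape involved (hypothetical names; this is not the Scheme code above): a curried `search` must be applied in stages, and grouping the arguments differently changes which function receives them, and with what arity.

```python
def search(merge_queue):
    """Stage 1: fix the queue-merging strategy."""
    def with_problem(extend, goal, print_path):
        """Stage 2: fix the three problem-specific procedures."""
        def run(init_state):
            """Stage 3: actually search from an initial state."""
            return (merge_queue.__name__, init_state)
        return run
    return with_problem

def depth_first_merge(queue, new_paths, estimate):
    return new_paths + queue

# Correct staging: strategy first, then the three problem procedures,
# and only then the initial state.
runner = search(depth_first_merge)(None, None, None)
print(runner(48))  # ('depth_first_merge', 48)

# Grouping the call as search(depth_first_merge(a, b, c)) instead passes
# the merge's *result* to search; the function search returns still
# expects 3 arguments but eventually gets called with 1 -- an arity
# mismatch of the kind reported in the error above.
```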

Kinect v2 sensor not initializing when called in a C++ Python extension library
I've been working for a while on creating a Kinect v2 (Xbox One) motion-capture plugin for Blender. In order to do this, I have to build a C++ library using the Kinect SDK and Boost.Python, which I can then call from my Python plugin for Blender.
My problem is that I have made a C++ class which I can access when loading my library in Python, but I can't initialize the sensor: "GetDefaultKinectSensor" always fails.
The thing is, I had first made a really basic and ugly C++ version without Boost.Python and without a class, and with that code I could initialize the sensor (the issue there was that it closed almost instantly because it wasn't a live object). There is no other difference than the fact that I now call the Kinect methods inside a class method, so I really don't understand the issue.
Edit: I work on Windows 10 with Visual Studio 2017, Python 3.5.3 (required to work with Blender), Boost 1.67.0, and the Kinect SDK v2 with the latest drivers (v2.2).
Here's my header file:
    #pragma once

    #include <Kinect.h>
    #define BOOST_PYTHON_STATIC_LIB
    #include <boost/python.hpp>

    using namespace boost::python;

    class CKinectMocap
    {
    public:
        CKinectMocap();
        ~CKinectMocap();
        int getFrame(Joint* joints);
        bool isOpened();
        int open();

    private:
        Joint joints[JointType_Count];

        // Current Kinect
        IKinectSensor* m_pKinectSensor;

        // Body reader
        IBodyFrameReader* m_pBodyFrameReader;

        void close();
    };
And the code:
    #include "CKinectMocap.h"
    #include <iostream>

    using namespace std;

    // Safe release for interfaces
    template<class Interface>
    inline void SafeRelease(Interface*& pInterfaceToRelease)
    {
        if (pInterfaceToRelease != NULL)
        {
            pInterfaceToRelease->Release();
            pInterfaceToRelease = NULL;
        }
    }

    CKinectMocap::CKinectMocap()
    {
    }

    CKinectMocap::~CKinectMocap()
    {
        close();
    }

    int CKinectMocap::getFrame(Joint* joints)
    {
        return 0;
    }

    bool CKinectMocap::isOpened()
    {
        BOOLEAN opened = false;
        if (m_pKinectSensor)
        {
            m_pKinectSensor->get_IsOpen(&opened);
        }
        return opened;
    }

    int CKinectMocap::open()
    {
        HRESULT hr;
        int ret = 0;

        hr = GetDefaultKinectSensor(&m_pKinectSensor);

        if (hr && m_pKinectSensor)
        {
            // Initialize the Kinect and get coordinate mapper and the body reader
            IBodyFrameSource* pBodyFrameSource = NULL;

            hr = m_pKinectSensor->Open();

            if (SUCCEEDED(hr))
            {
                hr = m_pKinectSensor->get_BodyFrameSource(&pBodyFrameSource);
            }

            if (SUCCEEDED(hr))
            {
                hr = pBodyFrameSource->OpenReader(&m_pBodyFrameReader);
                ret = 1;
            }

            SafeRelease(pBodyFrameSource);
        }
        return ret;
    }

    void CKinectMocap::close()
    {
        // done with body frame reader
        SafeRelease(m_pBodyFrameReader);

        // close the Kinect Sensor
        if (m_pKinectSensor)
        {
            m_pKinectSensor->Close();
        }
        SafeRelease(m_pKinectSensor);
    }

    BOOST_PYTHON_MODULE(kinectMocapLib2)
    {
        class_<CKinectMocap>("CKinectMocap")
            .def("isOpened", &CKinectMocap::isOpened)
            .def("open", &CKinectMocap::open)
        ;
    }
Here is the version that's almost working (except that in Blender it kills the connection with the Kinect immediately). Sorry, it is really messy; I made it for quick testing.
Header file:
    #pragma once

    #include <Python.h>
    #include <Kinect.h>

    // Current Kinect
    IKinectSensor* m_pKinectSensor;
    Joint joints[JointType_Count];

    // Body reader
    IBodyFrameReader* m_pBodyFrameReader;

Thanks for your help!
Cpp file:
    #include "kinectMocap.h"

    // Safe release for interfaces
    template<class Interface>
    inline void SafeRelease(Interface*& pInterfaceToRelease)
    {
        if (pInterfaceToRelease != NULL)
        {
            pInterfaceToRelease->Release();
            pInterfaceToRelease = NULL;
        }
    }

    // start kinect captor
    PyObject* StartKinect(PyObject*, PyObject* o)
    {
        HRESULT hr;

        hr = GetDefaultKinectSensor(&m_pKinectSensor);
        if (FAILED(hr))
        {
            return Py_BuildValue("i", 0);
        }

        if (m_pKinectSensor)
        {
            // Initialize the Kinect and get coordinate mapper and the body reader
            IBodyFrameSource* pBodyFrameSource = NULL;

            hr = m_pKinectSensor->Open();

            if (SUCCEEDED(hr))
            {
                hr = m_pKinectSensor->get_BodyFrameSource(&pBodyFrameSource);
            }

            if (SUCCEEDED(hr))
            {
                hr = pBodyFrameSource->OpenReader(&m_pBodyFrameReader);
            }

            SafeRelease(pBodyFrameSource);
        }

        if (!m_pKinectSensor || FAILED(hr))
        {
            return Py_BuildValue("i", 0);
        }

        return Py_BuildValue("s", "{'FINISHED'}");
        //Py_RETURN_NONE;
    }

    // stop kinect captor
    PyObject* StopKinect(PyObject*, PyObject* o)
    {
        // done with body frame reader
        SafeRelease(m_pBodyFrameReader);

        // close the Kinect Sensor
        if (m_pKinectSensor)
        {
            m_pKinectSensor->Close();
        }
        SafeRelease(m_pKinectSensor);

        return Py_BuildValue("s", "{'FINISHED'}");
    }

    // get frame
    PyObject* getFrame(PyObject*, PyObject* o)
    {
        float res1 = 0, res2 = 0, res3 = 0;
        int err = 0;
        PyObject* jointsDict = NULL;
        PyObject* key = NULL;
        PyObject* value = NULL;

        if (!m_pBodyFrameReader)
        {
            Py_RETURN_NONE;
        }

        IBodyFrame* pBodyFrame = NULL;
        HRESULT hr = m_pBodyFrameReader->AcquireLatestFrame(&pBodyFrame);

        if (SUCCEEDED(hr))
        {
            IBody* ppBodies[BODY_COUNT] = { 0 };

            hr = pBodyFrame->GetAndRefreshBodyData(_countof(ppBodies), ppBodies);

            if (SUCCEEDED(hr))
            {
                // only 1 body allowed in this version
                //if (_countof(ppBodies) == 1) {
                for (int i = 0; i < _countof(ppBodies); i++)
                {
                    IBody* pBody = ppBodies[i];
                    if (pBody)
                    {
                        BOOLEAN bTracked = false;
                        hr = pBody->get_IsTracked(&bTracked);

                        if (SUCCEEDED(hr) && bTracked)
                        {
                            hr = pBody->GetJoints(_countof(joints), joints);
                            if (SUCCEEDED(hr))
                            {
                                jointsDict = PyDict_New();
                                for (int j = 0; j < _countof(joints); j++)
                                {
                                    key = Py_BuildValue("i", j);
                                    value = Py_BuildValue("(i,f,f,f)", j,
                                        joints[j].Position.X,
                                        joints[j].Position.Y,
                                        joints[j].Position.Z);
                                    PyDict_SetItem(jointsDict, key, value);
                                }
                            }
                        }
                    }
                }
            }
        }

        SafeRelease(pBodyFrame);
        return jointsDict;
    }

    static PyMethodDef kinectMocapLib_methods[] = {
        // The first property is the name exposed to Python, the second is the
        // C++ function that contains the implementation.
        { "startKinect", (PyCFunction)StartKinect, METH_NOARGS, nullptr },
        { "stopKinect", (PyCFunction)StopKinect, METH_NOARGS, nullptr },
        { "getFrame", (PyCFunction)getFrame, METH_NOARGS, nullptr },

        // Terminate the array with an object containing nulls.
        { nullptr, nullptr, 0, nullptr }
    };

    static PyModuleDef kinectMocapLib_module = {
        PyModuleDef_HEAD_INIT,
        "kinectMocapLib",        // Module name to use with Python import statements
        "Provides access to kinect body tracking functions",  // Module description
        0,
        kinectMocapLib_methods   // Structure that defines the methods of the module
    };

    PyMODINIT_FUNC PyInit_kinectMocapLib()
    {
        return PyModule_Create(&kinectMocapLib_module);
    }

Using Kinect v2 and and Leap Motion together
I am trying to use both a Leap Motion camera and a Kinect v2 together for a project, using pykinect2 and the Leap Motion Python library. I have created separate projects and algorithms for each, and they work perfectly fine. But when I try combining them, I get this error:
> The program '[12456] python.exe' has exited with code 1073741819 (0xc0000005) 'Access violation'.
The framework of my code is shown below:
    class Kinect(object):
        def method1(self):
            .....
        def method2(self):
            ......
        def method3(self):
            if condition1 == true:
                switch_leap()

    class LeapListener(Leap.Listener):
        def method1(self):
            .....
        def method2(self):
            .....
        def method3(self):
            .....

    def switch_leap():
        # Create a sample listener and controller
        listener = LeapListener()
        controller = Leap.Controller()

        # Have the sample listener receive events from the controller
        controller.add_listener(listener)

    __main__ = "Kinect v2 Body Game"
    print("yooooo")
    paikinect = Kinect()
    paikinect.method3()
The program exits with the aforementioned error as soon as the condition to switch to the Leap is met. I am trying to figure this out, but I don't know where to start or where to look. Will this work if I use Unity? Any ideas or suggestions will be greatly appreciated.

Creating a video in real time from Kinect?
I want to make an app to use on a stand at an expo. It's something like this:
The approach I was thinking of is to make the original video with transparency where the camera images will be added, and on every frame calculate the mean position where the alpha is 0 in the original video, then scale the camera images and position them at that point behind the original.
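The per-frame step described above can be sketched without any imaging library: scan the alpha channel of the current frame and average the coordinates of the fully transparent pixels to get the anchor point (a tiny pure-Python sketch; in practice you would vectorize this with OpenCV/NumPy):

```python
def anchor_point(alpha):
    """Mean (x, y) of the pixels whose alpha is 0, given a row-major
    2-D alpha mask; returns None if the frame has no transparent pixels."""
    xs, ys = [], []
    for y, row in enumerate(alpha):
        for x, a in enumerate(row):
            if a == 0:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

mask = [
    [255, 255, 255, 255],
    [255,   0,   0, 255],
    [255,   0,   0, 255],
]
print(anchor_point(mask))  # (1.5, 1.5)
```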
I would like to know if someone knows another, better approach.
Greetings.