Capturing Android depth camera stream (DEPTH16) as an OpenGL ES texture

My goal is to read the Android depth camera stream (DEPTH16) as an OpenGL ES texture so that I can process it in a shader. As the documentation states, in the DEPTH16 format:

Each pixel is 16 bits, representing a depth ranging measurement from a depth camera or similar sensor. The 16-bit sample consists of a confidence value and the actual ranging measurement.

So, in the OpenGL shader I'm trying to extract the depth range value and create a grayscale preview that can be displayed on the screen.
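For reference, the bit layout described above can be unpacked in plain Java. This is a minimal sketch following the DEPTH16 documentation, which puts the range in millimeters in the 13 least significant bits and the confidence in the 3 most significant bits (0 meaning 100% confidence, 1 meaning 0%, 7 approaching 100%):

```java
public class Depth16 {
    // Range in millimeters: the lower 13 bits of the sample.
    public static int depthMillimeters(short depthSample) {
        return depthSample & 0x1FFF;
    }

    // Confidence: the upper 3 bits. Per the docs, 0 means 100% confidence;
    // otherwise the value scales linearly from 0% (1) toward ~100% (7).
    public static float depthConfidence(short depthSample) {
        int c = (depthSample >> 13) & 0x7;
        return c == 0 ? 1.0f : (c - 1) / 7.0f;
    }

    public static void main(String[] args) {
        // Example: confidence bits = 7 (0b111), range = 1500 mm.
        short sample = (short) ((7 << 13) | 1500);
        System.out.println(depthMillimeters(sample)); // prints 1500
        System.out.println(depthConfidence(sample));  // prints ~0.857
    }
}
```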

I'm using the camera2 API. For the regular RGB camera stream I use a SurfaceTexture, create a Surface from it, and add it as a target for the camera capture session. There, the shader accesses the texture through the samplerExternalOES sampler type. However, if I understand correctly, all of this only works for 8-bit multi-channel color images, while in DEPTH16 each pixel is a single 16-bit value.
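For context, the RGB path I describe looks roughly like this (a hedged sketch; variable names are illustrative, and this can only run on a device):

```java
// oesTextureId is a texture name bound to GL_TEXTURE_EXTERNAL_OES
SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
surfaceTexture.setDefaultBufferSize(1920, 1080);
Surface surface = new Surface(surfaceTexture);
captureRequestBuilder.addTarget(surface);

// In the fragment shader (GLSL ES 3.00):
// #extension GL_OES_EGL_image_external_essl3 : require
// uniform samplerExternalOES uCameraTex;
```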

Is there some other OpenGL texture I could use for capturing DEPTH16 and what would be its type in the shader? I'm new to OpenGL and any comments/suggestions would be really appreciated.

1 answer

  • answered 2020-09-24 20:40 Eddy Talvala

There's no direct support like SurfaceTexture for this; you'll need to use an ImageReader and manually copy the depth information from the ImageReader's Image into an OpenGL texture each frame. You'll also likely want to separate the confidence value from the depth measurement at this stage (in principle you could do that in the shader instead, you just have to be careful about the transformations).
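The per-frame copy step might look like this. A hedged sketch: the Image handling is reduced to a plain ShortBuffer so the masking logic stands alone; in real code you'd get the buffer from `image.getPlanes()[0].getBuffer()` and the row stride from `image.getPlanes()[0].getRowStride()` (which is in bytes, so divide by 2 for shorts):

```java
import java.nio.ShortBuffer;

public class DepthCopy {
    /**
     * Strips the 3 confidence bits from each DEPTH16 sample, leaving a
     * tightly packed array of 13-bit range values (millimeters) that can
     * then be uploaded to a GL texture with glTexSubImage2D.
     *
     * rowStrideShorts is the source row stride in shorts; it may be
     * larger than width if the plane's rows are padded.
     */
    public static short[] extractDepth(ShortBuffer src, int width, int height,
                                       int rowStrideShorts) {
        short[] out = new short[width * height];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                short sample = src.get(y * rowStrideShorts + x);
                out[y * width + x] = (short) (sample & 0x1FFF); // drop confidence bits
            }
        }
        return out;
    }
}
```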

    Fortunately, depth maps tend to be low-resolution (~320x240 for example), so this is probably not a big performance problem.

    For the actual GL texture, you'd ideally want a single-channel texture (two-channel if you also want the confidence values included in the DEPTH16 format) that can store at least 13 bits of depth information. In practice this means a 16-bit integer format, or a float format.
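To make the shader-side type concrete: with a single-channel 16-bit unsigned integer texture (uploaded via `glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, data)`), the sampler type in GLSL ES 3.00 would be usampler2D. A hedged sketch of the grayscale-preview fragment shader (uniform and varying names are made up; note that integer textures require GL_NEAREST filtering):

```glsl
#version 300 es
precision mediump float;

uniform highp usampler2D uDepthTex;

in vec2 vTexCoord;
out vec4 outColor;

void main() {
    uint texel = texture(uDepthTex, vTexCoord).r;
    uint rangeMm = texel & 0x1FFFu;       // no-op if confidence bits were already stripped on the CPU
    float gray = float(rangeMm) / 8191.0; // normalize the 13-bit range to [0, 1]
    outColor = vec4(gray, gray, gray, 1.0);
}
```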