We're starting to use DepthKit for some experimental production. Right now it's easiest to distribute using Unity (see Volumetric Video in Unity), but it would be great to have a more open and accessible WebVR toolkit for including volumetric video.
We've taken a first pass at writing an A-Frame component based on DepthKit.js.
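For anyone curious, here's roughly the shape that component takes. This is a simplified sketch, not the finished code: it assumes depthkit.js exposes a global DepthKit constructor taking a render mode plus metadata and video paths (check the library's README for the real signature), and that the constructor returns a THREE.Object3D.

```js
/* First pass at an A-Frame wrapper around depthkit.js -- a sketch, not the
   finished component. The DepthKit constructor call below is our assumption
   about the API (render mode, metadata path, video path); check the
   depthkit.js docs for the real signature. */
AFRAME.registerComponent('depthkit', {
  schema: {
    video: {type: 'string'},  // combined depth+color .webm from Visualize
    meta:  {type: 'string'}   // calibration/metadata JSON from Visualize
  },

  init: function () {
    // Hypothetical constructor; assumed to return a THREE.Object3D.
    this.player = new DepthKit('mesh', this.data.meta, this.data.video);
    // Hang the mesh off this entity so A-Frame manages its transform.
    this.el.setObject3D('depthkit', this.player);
  },

  remove: function () {
    this.el.removeObject3D('depthkit');
  }
});
```

In the scene markup it's then just `<a-entity depthkit="video: capture.webm; meta: capture.json"></a-entity>` (file names are placeholders).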
The first steps involve calibrating the software for our particular lenses. You need a stiff board with a checkerboard pattern.
First, you take a series of 13 photos with your DSLR to help the software quantify the distortion of your color camera's lens.
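For context, what those photos let the software estimate is standard radial lens distortion. We haven't confirmed exactly which model DepthKit fits, but the textbook form is:

```latex
x_d = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad
y_d = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad
r^2 = x^2 + y^2
```

where (x, y) are normalized image coordinates and the k coefficients describe how the lens bends straight lines, which is what the checkerboard shots pin down.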
Next, take a series of snaps with both the DSLR and the depth camera to match up the two sets of optics.
We had a little trouble with our first calibration, so we went back and reshot with brighter exposure, more careful focusing, a larger 11x17 checkerboard, and more consistent lighting, and we started the final set of calibration shots closer to the camera. That second calibration worked fine.
The capture software, which only runs on a Windows machine, records a sequence of .pngs at roughly 24fps. The frames are relatively low-res (512x424) and capture the depth of the scene. Here's a sample frame:
And here's the matching color frame from the DSLR:
Then DepthKit Visualize (which runs on Mac and Windows) can be used to sync the two streams and combine them into a playable textured mesh.
The result can be cropped and adjusted in a number of ways along the X, Y & Z axes, and then exported in different formats for a wide variety of platforms, including a .webm file with paired depth and color.
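To get a feel for what "paired depth and color" means, here's a hedged three.js sketch of the idea: sample the bottom half of each video frame as depth, displace a subdivided plane by it, and texture it with the top half. This is an illustration only; DepthKit's actual export encodes depth more carefully than plain brightness, and ships metadata (camera intrinsics, depth range) that depthkit.js uses to reconstruct the mesh properly. The file name is a placeholder.

```js
/* Sketch of rendering a stacked depth+color video in three.js.
   Assumes color sits in the top half of each frame and depth in the
   bottom half, read here as brightness -- the real DepthKit export
   encodes depth differently, so treat this as the idea, not the format. */
import * as THREE from 'three';

const video = document.createElement('video');
video.src = 'capture.webm';   // placeholder file name
video.loop = true;
video.muted = true;           // muted video can autoplay without a gesture
video.play();

const texture = new THREE.VideoTexture(video);

const material = new THREE.ShaderMaterial({
  uniforms: { map: { value: texture } },
  vertexShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // Sample depth from the bottom half of the frame.
      vec2 depthUv = vec2(uv.x, uv.y * 0.5);
      float depth = texture2D(map, depthUv).r;
      // Push each vertex back along Z by the sampled depth.
      vec3 displaced = position + vec3(0.0, 0.0, -depth);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      // Sample color from the top half of the frame.
      vec2 colorUv = vec2(vUv.x, 0.5 + vUv.y * 0.5);
      gl_FragColor = texture2D(map, colorUv);
    }
  `
});

// A densely subdivided plane gives the depth map vertices to displace.
const mesh = new THREE.Mesh(new THREE.PlaneGeometry(1, 1, 256, 256), material);
```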
Next steps:
Try importing footage into Unity and then HoloLens
Try combining DepthKit.js with A-Frame (aframe.io)
Experiment with other filters and visualizations
Try setting up camera arrays
Make capture system mobile
Try adding a background
There is some weird occlusion going on in DepthKit.js: when you rotate the video counterclockwise, you can see the fan, which is in the foreground, become blocked by pixels from the body, which is in the background.
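Our current guess (untested) is that this is a depth-buffer/draw-order problem in the underlying three.js material rather than bad capture data: if the mesh is drawn with depthWrite off, which is common for transparent materials, fragments land in draw order instead of depth order, and background pixels can stomp the foreground. Something to try, assuming the player exposes its THREE objects (the property paths here are guesses; check the library source):

```js
/* Untested debugging idea for the occlusion artifact. Assumes `player`
   is (or contains) a THREE.Object3D so traverse() reaches the mesh. */
player.traverse(function (obj) {
  if (obj.material) {
    obj.material.depthTest = true;     // respect the depth buffer
    obj.material.depthWrite = true;    // write to it, so nearer fragments win
    obj.material.transparent = false;  // transparency defers to draw order
  }
});
```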
Simple RGB-D Demo with Kinect v1
9/12/16 | 3 hrs
This is a simple screen-captured demo using a Kinect V1 sensor bar and RGB-D Viewer.
This test uses an original Xbox Kinect sensor, which detects the depth of a space in real time using an infrared projector and a low-res infrared camera; it then uses a low-res color camera to color the resulting 3D point cloud.
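The depth-to-point-cloud math behind this is a simple pinhole unprojection. Here's a sketch in plain JavaScript; the intrinsics are ballpark Kinect v1 values from published calibrations, not measured ones, and a real version also has to map depth pixels into the color camera's frame, since the two lenses are offset:

```js
/* Turn a depth pixel into a 3D point. fx/fy/cx/cy are approximate
   Kinect v1 intrinsics -- real code should use calibrated values. */
const fx = 594.2, fy = 591.0;   // focal lengths in pixels (ballpark)
const cx = 339.5, cy = 242.7;   // principal point (ballpark)

function depthPixelToPoint(u, v, depthMeters) {
  return {
    x: (u - cx) * depthMeters / fx,
    y: (v - cy) * depthMeters / fy,
    z: depthMeters
  };
}
```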