I had trouble getting Potree to work with any of our scans. That's partly because the scanner saves files as binary little-endian .PLY (little-endian describes the byte order, not a compression scheme), and partly because Potree expects point clouds converted into its own octree-based file format, which is what lets it render large clouds quickly.
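That conversion is normally done ahead of time with the PotreeConverter command-line tool. A minimal sketch of the step (the output path is a placeholder; check the converter's help output for your version's exact flags):

```
# Convert a .PLY scan into Potree's octree format for fast web rendering.
# "./potree_output" is a placeholder directory name.
PotreeConverter scan.ply -o ./potree_output
```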
Potree still creates a much nicer final image.
See this sample:
The underlying technology behind Potree and several other 3D renderers is a JavaScript library called THREE.js. I've done some work with THREE.js before, so I decided to try to build a point cloud viewer directly in it.
Here's the result:
To make this work, I save the point cloud as a human-readable JSON file, then loop through it to collect the XYZ coordinates and RGB values.
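Here's a minimal sketch of that loop (not my exact code: the file name cloud.json and its { points: [ {x, y, z, r, g, b}, ... ] } shape are placeholders for illustration):

```js
// Build a THREE.Points object from a JSON point cloud and render it.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 1000);
camera.position.z = 2;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

new THREE.FileLoader().load('cloud.json', (text) => {
  const { points } = JSON.parse(text);
  const positions = [];
  const colors = [];
  for (const p of points) {
    positions.push(p.x, p.y, p.z);
    colors.push(p.r / 255, p.g / 255, p.b / 255); // THREE expects 0..1 floats
  }
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
  geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
  const material = new THREE.PointsMaterial({ size: 0.01, vertexColors: true });
  scene.add(new THREE.Points(geometry, material));
});

(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
})();
```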
To do: create side-by-side comparisons of the various display approaches.
A-Frame.io is a free and open-source JS library from Mozilla for creating VR experiences. It's incredibly simple to use. After including the script, available here:
Loading and placing a textured 3D scene is as simple as a few lines of markup like this (a representative snippet with placeholder file names, not my exact scene):
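```html
<!-- Minimal A-Frame scene: load an .OBJ mesh with its .MTL material,
     which in turn points at the .PNG texture. File names are placeholders. -->
<a-scene>
  <a-assets>
    <a-asset-item id="room-obj" src="room.obj"></a-asset-item>
    <a-asset-item id="room-mtl" src="room.mtl"></a-asset-item>
  </a-assets>
  <a-entity obj-model="obj: #room-obj; mtl: #room-mtl"
            position="0 1 -3" rotation="0 45 0"></a-entity>
  <a-sky color="#ECECEC"></a-sky>
</a-scene>
```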
I wanted to use A-Frame.io to share scans from the Matterport Scenes app. Unfortunately, while A-Frame supports 3D content, it only supports textured meshes, not the colored point clouds that Matterport Scenes produces. So I needed to turn a collection of colored dots into a mesh: a 3D polygon surface made of textured triangles. After some research, I discovered a workflow for doing that in MeshLab.
Following this tutorial, with some difficulty (I still need to figure out the Undo function in MeshLab, and missing a step seems to cause a problem that occasionally requires a relaunch), I managed to transform a series of point-cloud scans into meshes, specifically .OBJ files with .PNG textures. Here's a video of one result:
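Once the filter sequence works interactively, it can in principle be replayed in batch with meshlabserver, MeshLab's command-line companion. A hedged sketch (the script name is a placeholder; export the real script via MeshLab's Filters > Show current filter script dialog):

```
# Apply a saved MeshLab filter script (e.g. normals + Poisson reconstruction)
# to a point cloud scan; "poisson_and_texture.mlx" is a placeholder name.
meshlabserver -i scan.ply -o scan.obj -s poisson_and_texture.mlx
```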
I've made two full demos available. THE FIRST SCENE includes two rooms from the White House at Christmas time and our team around a McClatchy board room. THE SECOND SCENE is my apartment around Thanksgiving. Both scenes were scanned in several chunks; in the Thanksgiving scene, I've tried to stitch the chunks together seamlessly.
*Neither the files in these pages nor the loading process is optimized, so be patient while the A-Frame demos load. You may even need to refresh a few times.
Here are two practice scans of the dress Rosa Parks was carrying on the historic day she was arrested for refusing to give up her seat on the bus.
The two scans illustrate a problem with Matterport Scenes. In the image on the left, I tried to move around the dress, and spatial drift occurred: the phone lost its place and began re-recording the same points in different positions. The drift is very pronounced in this image, which was recorded through a reflective surface, but even when scanning plain opaque surfaces there is a loss of clarity if you make multiple passes. The image on the right, by contrast, was produced by sweeping the phone over the dress only once, from a single perspective. The result is a 3D image with less noise and more clarity. However, the depth is limited; the recorded image is essentially a shell.
I started scanning using an app called RTAB-Map (Real-Time Appearance-Based Mapping), but found it buggy at best.
Then I discovered the more polished Matterport Scenes app. The app creates a 3D model using Tango's structured-light sensor, which bounces infrared light off a surface to determine how far a given point is from the lens. It then colors that point using the wide-angle camera and begins building a cloud of these points in 3D. As you move through space and turn the phone, the app progressively adds more points. The resulting file is called a point cloud.
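The geometry behind each point is a standard pinhole-camera unprojection: a depth reading at pixel (u, v) becomes an XYZ position. Here's a sketch of that math (the intrinsics fx, fy, cx, cy are generic placeholders, not values from the Tango API):

```js
// Unproject a depth-sensor pixel (u, v) with a measured depth (in meters)
// into a 3D point, given pinhole-camera intrinsics:
// fx, fy = focal lengths in pixels; cx, cy = principal point.
function unproject(u, v, depth, fx, fy, cx, cy) {
  return {
    x: ((u - cx) / fx) * depth,
    y: ((v - cy) / fy) * depth,
    z: depth,
  };
}
```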
Here's a gif of the first point cloud I made:
File sharing isn't fully supported in the app, but it's possible to export the model as a .PLY (Polygon File Format) file encoded as binary_little_endian 1.0.
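For reference, the header of such an export looks roughly like this (the vertex count and property layout here are illustrative, not copied from an actual file):

```
ply
format binary_little_endian 1.0
element vertex 250000
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
(binary vertex data follows)
```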
After a bit of research, I discovered a free and powerful cross-platform application called MeshLab that lets me open and manipulate the file. The GIF above is a screen recording of MeshLab.