Inspiration

With LiDAR and photogrammetry technology becoming increasingly accessible, we had to ask: why? Why is Apple including LiDAR scanners in its top-end devices? Why are there dozens of 3D reality capture apps? What is the utility of these technologies?

Eager to learn more about reality capture, we committed to building a project involving the technology.

The first thing we noticed was that the existing formats for 3D reality capture have little infrastructure supporting them. While point clouds are great for visualization, in their raw form they are difficult to decipher, distribute, and manipulate.

What it does

As such, we built a one-click solution for 3D printing a photogrammetry or LiDAR scan from a mobile device: a platform where users can upload a 3D capture, have it processed, and send it to a printer, all in one click.

Our goal was to abstract away all the post-processing that existed between a 3D scan and a 3D-printed object.

Additionally, we designed an electromechanical scanning rig that improves the quality, consistency, and autonomy of mobile scans, combating the unreliability of today's handheld scanners.

How we built it

On a Linux instance, we built a web server and interface using Flask, Python, HTML, and CSS, where users can upload their point clouds to be processed and printed.

Unfortunately, raw point clouds from mobile devices are riddled with outliers and noise. To combat this, we shortlisted a small selection of filtering, normal estimation, surface reconstruction, and point cloud segmentation algorithms, including voxel downsampling, Poisson surface reconstruction, and PCA normal estimation. To implement them, we chose Open3D, a Python library.

From there, we wrote bash scripts to control Slic3r, an open-source slicer that converts an .stl mesh into G-code (the instructions for the 3D printer).
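Slic3r exposes a command-line interface, so the same step can be driven from Python; a sketch of a wrapper (the profile .ini path is hypothetical):

```python
import subprocess

def slic3r_cmd(stl_path, gcode_path, config=None):
    # Slic3r CLI shape: slic3r model.stl --output model.gcode [--load profile.ini]
    cmd = ["slic3r", stl_path, "--output", gcode_path]
    if config is not None:
        cmd += ["--load", config]  # printer/filament profile
    return cmd

def slice_to_gcode(stl_path, gcode_path, config=None):
    # Raises CalledProcessError if slicing fails
    subprocess.run(slic3r_cmd(stl_path, gcode_path, config), check=True)
```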

Afterwards, we flashed a Raspberry Pi 3 with OctoPi (a Raspberry Pi OS image that runs OctoPrint) and connected it to a 3D printer, allowing us to send 3D models to print wirelessly.
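OctoPrint exposes a REST API for exactly this hand-off; a minimal upload sketch using `requests` (hostname and API key are placeholders):

```python
import requests

def octoprint_upload_url(host):
    # OctoPrint's file-upload endpoint for the printer's local storage
    return f"http://{host}/api/files/local"

def send_to_printer(gcode_path, host, api_key, start_print=True):
    with open(gcode_path, "rb") as fh:
        resp = requests.post(
            octoprint_upload_url(host),
            headers={"X-Api-Key": api_key},            # key from OctoPrint settings
            files={"file": fh},
            data={"print": str(start_print).lower()},  # start printing on upload
        )
    resp.raise_for_status()
```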

Lastly, we built the scanning rig with 3D-printed parts and an Arduino-controlled stepper motor for the turntable.

Challenges we ran into

The poor quality of mobile 3D scans. Regardless of an app's features or UX/UI, none could capture a half-decent scan. To overcome this, we built a rig to standardize and optimize the scanning conditions (lighting, angles, and smooth movement).

Developing scripts robust enough to complete file conversions, remove noise, and mesh surfaces even with suboptimal inputs; extreme input variance led the scripts to produce undesirable results or crash outright.

Also, we quickly realized that altering even a small set of parameters across a few algorithms produced vastly different results. As such, we began to focus on optimizing parameter values rather than introducing more and more algorithms.

Overall, 3D printing from a mobile scan served as an excellent benchmark for testing the robustness and quality of our post-processing pipeline.

Accomplishments that we're proud of

  1. Keeping it simple, not losing sight of our original goal
  2. Remaining focused, even with appealing distractions (workshops & events at HTN)
  3. Completing the project in a timely manner and having a functional MVP

What we learned

To test more frequently and to make fewer assumptions.

More broadly, we learned that the value proposition of LiDAR is translation. Today, humans serve as the translation layer between the physical and digital worlds; being inherently prone to error and bias, we make information transfer between these realms lossy. We realized that 3D reality capture technologies would let us automate and replace that translation layer while greatly increasing its fidelity.

What's next for RepliCam

  1. Build native mobile apps or partner with the likes of Polycam and Scaniverse to become vertically integrated
  2. Improve our UX / UI
  3. Optimize post-processing algorithms for speed and performance (C++ libraries, Parallel Processing?)
  4. Find and incorporate more robust post-processing algorithms
  5. Add additional features such as merging multiple object scans to fix broken objects
