Inspiration

We want to supercharge prototyping and traditional whiteboarding by combining physical whiteboards with digital elements, making brainstorming more fun and memorable through 3D objects and AI summaries of the multimodal physical-and-digital capture on Specs. Buy a pack of Specs and keep them in your meeting room!

What it does

Bring capabilities normally reserved for online meetings into the real world:

  • Import 3D models and place them around the whiteboard
  • Annotate and draw naturally with real physical markers
  • Save snapshots into history that you can recall later and overlay back into the meeting room
  • Recalling snapshots from previous meetings also recalls digital content

How we built it

Completed Implementation

3D models are currently embedded in-app but could also easily be stored in Supabase Storage to allow for future expansion. A panel lets you browse models and place them in the physical world relative to the whiteboard. When a snapshot is initiated from the hand menu, we upload the image to Supabase Storage and create a history entry in Postgres. This entry includes not only the image but also the loaded models and their placement relative to the whiteboard. When a history snapshot is later loaded, we display the image and reload the same models at their relative locations.
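A minimal sketch of what such a history entry might look like (the names ModelPlacement and buildHistoryEntry, and the field layout, are our illustration rather than the exact schema):

```typescript
// Sketch of the snapshot history record stored in Postgres.
// Field names and types here are illustrative, not the actual schema.

interface ModelPlacement {
  modelId: string;                        // which embedded 3D model was loaded
  position: [number, number, number];     // offset relative to the whiteboard
  rotation: [number, number, number, number]; // orientation as a quaternion
  scale: number;
}

interface HistoryEntry {
  createdAt: string;          // ISO timestamp of the snapshot
  imagePath: string;          // path of the captured image in Supabase Storage
  placements: ModelPlacement[];
}

// Build the row to insert once the image upload has succeeded.
function buildHistoryEntry(
  imagePath: string,
  placements: ModelPlacement[]
): HistoryEntry {
  return {
    createdAt: new Date().toISOString(),
    imagePath,
    placements,
  };
}
```

Storing placements relative to the whiteboard (rather than in world space) is what lets a later meeting overlay the same models even though the headset's world origin has changed.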

Desired But Unimplemented

Originally we wanted to leverage AI to perform OCR on the whiteboard and to describe it. This would give meeting participants "transcripts" and historical data about previous meetings. Gemini Image Understanding can provide a great deal of detail about the whiteboard composition, and since we collect this information over time, we could surface interesting statistics about how a project or plan evolved. Unfortunately, the current Supabase integration was severely limited because local CLI environments cannot be linked, which blocked us from implementing our planned advanced AI features. We also wanted to capture a transcript of the user's presentation and had this partially working, but had to cut the feature during the merge.
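Had we gotten this far, the request to Gemini would have looked roughly like the sketch below. The helper name and prompt text are our own; the JSON shape follows Google's generateContent REST API, which accepts a text part plus an inline base64 image part:

```typescript
// Sketch of a request body for describing a whiteboard snapshot with
// Gemini Image Understanding. Helper name and prompt are illustrative.

interface GeminiPart {
  text?: string;
  inline_data?: { mime_type: string; data: string };
}

function buildGeminiRequest(
  base64Jpeg: string,
  prompt: string
): { contents: { parts: GeminiPart[] }[] } {
  return {
    contents: [
      {
        parts: [
          { text: prompt },
          // Snapshot image sent inline as base64-encoded JPEG
          { inline_data: { mime_type: "image/jpeg", data: base64Jpeg } },
        ],
      },
    ],
  };
}
```

This body would be POSTed to the generateContent endpoint; diffing the returned descriptions across snapshots is what would power the "how did this plan evolve" statistics.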

Challenges we ran into

  1. The camera capture API is currently broken. We had to use a workaround that grabs a single frame of video.
  2. Supabase storage APIs for file upload are incompatible with Snap data types, which blocked us from uploading an image to Supabase using the built-in storage APIs. Working with a Snap engineer, we tried the Blob data type but either could not construct it properly or it was still incompatible. Our workaround was to create an edge function just to handle image upload.
  3. It was not possible to link a locally hosted Supabase CLI environment to a Snap cloud-hosted Supabase environment. Even the special --profile snap command-line parameter did not work. The reported error was: failed to connect to postgres: failed to connect to 'host=db.hawshozwedzwfeecapev.snapcloud.dev user=cli_login_postgres database=postgres': hostname resolving error (lookup db.hawshozwedzwfeecapev.snapcloud.dev: no such host)
  4. Testing edge functions in the Snap-hosted Supabase UI was also broken. When viewing an edge function, clicking the "Test" button and then the "Send Request" button returned Error 400 invalid path. It appears this error may have been fixed overnight.
  5. Constant hangs of the Preview window in Lens Studio, almost always after changing code in Visual Studio and switching back. Not only did this require closing Lens Studio; we also had to terminate the process in Task Manager, because it wouldn't close and kept our project file locked open.
  6. Base64.encodeTextureAsync hard-crashes Lens Studio when the onSuccess callback is another method on the component; if it is an anonymous inner function, it works.
  7. Brittle prefab structure: if you add a button to a prefab, insert a new @input between two existing @input buttons, and drag the new button into it, the new button won't be filled in the scene.
  8. The new Android companion app was broken and required re-pairing the Spectacles every time they powered down. At 4:50 am this morning, a team member downloaded an even newer version that apparently fixes this problem.
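The edge-function workaround from challenge 2 amounts to sending the image as a base64 string in a JSON body and decoding it server-side before handing it to Supabase Storage's upload API. A sketch of just the decode step (the function name and payload shape are our own; atob is available in both browser-style and Deno edge runtimes):

```typescript
// Decode a base64-encoded snapshot payload into raw bytes, so an edge
// function can pass it to supabase.storage.from(bucket).upload(...).
// The { fileName, base64 } request shape is our illustration.

function decodeBase64Image(base64: string): Uint8Array {
  const binary = atob(base64);          // base64 -> binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);    // one byte per character
  }
  return bytes;
}
```

Routing the upload through an edge function this way sidesteps the Snap/Supabase data-type incompatibility entirely, since the Lens only ever sends a string.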

Accomplishments that we're proud of

We're pretty proud that we got any of it working. The parts we did get working required a lot of interaction with engineering to identify workarounds.

What we learned

  • Lens Studio 5.15 should have been released as an alpha, or perhaps a beta. It was certainly not ready for a full public release.
  • The Supabase capability is very cool, but not at all ready for public consumption. Local CLI support is critical for any meaningful development, and being unable to link projects is an immediate blocker.
  • Snap should test the scenarios developers will want before green-lighting a capability like Supabase. We were quite surprised to find there was no sample for uploading images captured by Snap to Supabase, and that the data types weren't even compatible. This process is called "dogfooding," and it clearly was not done here.

What's next for SupAR Board

Going back to the drawing board to revamp the SupAR Board infrastructure once Supabase is more stable with Lens Studio.

What's next for Snap

Updates for Lens Studio! The new cloud features are a great addition, but the tools have some room to mature. We really wanted to do more with edge functions and Supabase, and these roadblocks need to be removed for this project to work properly.

Built With

  • lensstudio
  • supabase