Inspiration
After testing Google Earth VR on a Meta Quest 3 headset, we decided to implement a navigation model similar to the one used in Google Maps, built on the Google Maps 3D API but running in the browser to provide a better user experience.
What it does
MapSphere is an interactive navigation platform designed to enhance user exploration of maps and routes in a dynamic, immersive environment. This project integrates cutting-edge technologies like the Google Maps API, Three.js for 3D visualization, and advanced AI functionality for enhanced user experiences. The system aims to deliver a seamless and engaging way for users to interact with maps, set routes, teleport to new locations, and chat with an AI assistant.
Key Features
Dynamic 3D Navigation:
•Character Movement: users control an animated character across the map with the W, A, S, and D keys, the primary navigation controls; the arrow keys provide an alternative scheme for seamless interaction.
•Camera Control: the camera can be adjusted with mouse movements to explore the surroundings, and the Control key toggles between locking and unlocking the camera for focused or free movement.
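A minimal sketch of how this kind of dual key binding can be handled; the function and variable names here are illustrative, not the project's actual identifiers.

```javascript
// Track which logical directions are currently held down.
const keysPressed = new Set();

// Map both WASD and the arrow keys onto the same four directions.
const KEY_ALIASES = {
  w: "forward", ArrowUp: "forward",
  s: "back",    ArrowDown: "back",
  a: "left",    ArrowLeft: "left",
  d: "right",   ArrowRight: "right",
};

function onKeyDown(event) {
  const dir = KEY_ALIASES[event.key];
  if (dir) keysPressed.add(dir);
}

function onKeyUp(event) {
  const dir = KEY_ALIASES[event.key];
  if (dir) keysPressed.delete(dir);
}

// Resolve the held keys into a per-frame movement vector
// (x: strafe left/right, z: forward/back) for the render loop.
function movementVector(keys) {
  return {
    x: (keys.has("right") ? 1 : 0) - (keys.has("left") ? 1 : 0),
    z: (keys.has("forward") ? 1 : 0) - (keys.has("back") ? 1 : 0),
  };
}

// In the browser, the listeners are wired up once:
// window.addEventListener("keydown", onKeyDown);
// window.addEventListener("keyup", onKeyUp);
```

Keeping opposite keys as `+1`/`-1` terms means pressing W and S together cancels out naturally instead of jittering.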
Google Maps Integration
•WebGL Overlay View: A WebGL overlay combines the power of Google Maps with Three.js for an immersive 3D experience.
•Minimap: a secondary minimap allows users to see their position and track their movement in real-time.
•Advanced Routing: users can set and calculate routes dynamically using the Google Maps Directions API. The system visually displays the routes both on the minimap and in the 3D view.
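The overlay wiring follows the `google.maps.WebGLOverlayView` lifecycle (onAdd, onContextRestored, onDraw). This is a hedged sketch, not the project's exact code: `CHARACTER_ANCHOR` is a placeholder coordinate, and it assumes the global `THREE` build the page loads.

```javascript
// Placeholder anchor; the real project anchors the scene wherever the character is.
const CHARACTER_ANCHOR = { lat: 40.7128, lng: -74.006, altitude: 0 };

function createCharacterOverlay(map) {
  const overlay = new google.maps.WebGLOverlayView();
  let scene, camera, renderer;

  overlay.onAdd = () => {
    // Build the Three.js scene once the overlay attaches to the map.
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera();
    scene.add(new THREE.AmbientLight(0xffffff, 0.9));
  };

  overlay.onContextRestored = ({ gl }) => {
    // Reuse the map's own WebGL context instead of creating a second canvas.
    renderer = new THREE.WebGLRenderer({
      canvas: gl.canvas,
      context: gl,
      ...gl.getContextAttributes(),
    });
    renderer.autoClear = false;
  };

  overlay.onDraw = ({ gl, transformer }) => {
    // The transformer turns lat/lng/altitude into the projection matrix
    // Three.js needs, keeping the scene glued to the map view.
    const matrix = transformer.fromLatLngAltitude(CHARACTER_ANCHOR);
    camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);

    overlay.requestRedraw();
    renderer.render(scene, camera);
    renderer.resetState(); // hand GL state back to the map cleanly
  };

  overlay.setMap(map);
  return overlay;
}
```

The key detail is `renderer.resetState()` after each draw: the map and Three.js share one WebGL context, so leaked GL state corrupts the base map tiles.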
AI Chat Integration:
•AI Assistant: powered by Google Generative AI (the Gemini model), users can interact with an intelligent assistant to ask questions about nearby locations, get guided suggestions for routes or destinations, and enrich the overall exploration experience with contextual information.
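A sketch of what such an assistant call can look like with the official `@google/generative-ai` SDK. The model name, prompt wording, and `nearbyContext` parameter are illustrative assumptions, not the project's actual code.

```javascript
// Ask Gemini a question, seeded with context about where the user is on the map.
async function askAssistant(question, nearbyContext) {
  // Dynamic import keeps this sketch self-contained until it is actually called.
  const { GoogleGenerativeAI } = await import("@google/generative-ai");

  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

  // Give the model map context so answers stay location-aware.
  const prompt =
    `The user is exploring a 3D map near: ${nearbyContext}.\n` +
    `Answer briefly, like a local tour guide.\n` +
    `Question: ${question}`;

  const result = await model.generateContent(prompt);
  return result.response.text();
}
```

Usage would look like `await askAssistant("What is this building?", "Times Square, NYC")`, with the API key supplied via an environment variable rather than hard-coded.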
Teleportation System:
•Place Autocomplete: users can search for destinations using Google's Place Autocomplete feature.
•Instant Teleport: upon selecting a location, the system teleports the character to the chosen spot, providing instant exploration capabilities.
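The teleport flow can be wired up roughly as below with the `google.maps.places.Autocomplete` widget; `moveCharacterTo` is a hypothetical callback standing in for whatever repositions the 3D character.

```javascript
function enableTeleport(map, inputElement, moveCharacterTo) {
  // Restrict the fields we fetch to keep Places API usage (and cost) minimal.
  const autocomplete = new google.maps.places.Autocomplete(inputElement, {
    fields: ["geometry", "name"],
  });

  autocomplete.addListener("place_changed", () => {
    const place = autocomplete.getPlace();
    if (!place.geometry) return; // user hit Enter without picking a suggestion

    const target = place.geometry.location;
    map.setCenter(target);   // jump the map view to the destination…
    moveCharacterTo(target); // …and drop the character at the same spot
  });
}
```

The `!place.geometry` guard matters: Autocomplete fires `place_changed` even for free-text submissions that resolved to no place.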
How we built it
Technologies Used
•Google Maps API libraries (Places, Geometry, Marker, Directions): provide geolocation data, advanced routing, and mapping functionality.
•Three.js: provides a robust framework for 3D graphics and animation, and integrates with Google Maps through a WebGL-based overlay.
•Google Generative AI: used for natural language processing and response generation; enhances map interaction with location-based suggestions and conversational engagement.
•Bootstrap: simplifies the UI/UX design for a clean, responsive interface.
Challenges we ran into
Handling the map's coordinate system.
Loading a 3D model onto the map.
Integrating Three.js with WebGL.
Manipulating models in glTF format.
Keeping the 3D elements (like the character and arrow) aligned with the Google Maps view required synchronization between the map's geographic center and the WebGL scene.
Configuring the Google Places Autocomplete component and integrating it with the Directions API for route calculation was complex.
Rendering 3D models and animations alongside Google Maps was tricky due to differing coordinate systems (geographical latitude/longitude vs. Cartesian coordinates) and resource-heavy rendering in WebGL.
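The coordinate mismatch boils down to converting geographic coordinates into a flat, meter-based space a WebGL scene can use. One common approach, the spherical (Web) Mercator projection, is sketched below as standalone math; it is illustrative, not necessarily the project's exact conversion.

```javascript
const EARTH_RADIUS_M = 6378137; // WGS84 equatorial radius in meters

// Project latitude/longitude (degrees) into Mercator meters.
function latLngToMercator(lat, lng) {
  const x = EARTH_RADIUS_M * (lng * Math.PI / 180);
  const y = EARTH_RADIUS_M *
    Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
  return { x, y };
}

// Positioning a model relative to the map's anchor then becomes a
// simple subtraction in meters, ready to feed into a Cartesian scene.
function offsetFromAnchor(anchor, point) {
  const a = latLngToMercator(anchor.lat, anchor.lng);
  const p = latLngToMercator(point.lat, point.lng);
  return { east: p.x - a.x, north: p.y - a.y };
}
```

For example, `offsetFromAnchor({lat: 0, lng: 0}, {lat: 0, lng: 0.001})` gives roughly 111 meters east, matching the familiar ~111 km per degree of longitude at the equator.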
Accomplishments that we're proud of
Displaying a 3D map and exploring it interactively by steering a small character with the keyboard's directional keys. We think it's a fun way to learn how to get from a point of origin to a destination.
What we learned
● Consuming the Google Maps 3D API: setting up the project on GCP and enabling the API before adding it to the source code.
● Interacting with WebGL models by combining them with Three.js.
● Analyzing the camera positioning system in the browser.
● Integrating the Gemini AI model into the map so the user can learn more about a specific location.
● Loading a 3D model (the character) and placing it on a layer on top of the 3D map.
● Adding and defining labels or markers from GCP for later display on the Google map. We realized we can mark government buildings, parks, main streets, and tourist sites, and that all of this information is presented to the user to help them find the place they are looking for.
Technical Skills and Concepts Learned
1. Integrating Google Maps API with Advanced Features
● Dynamic Map Rendering: we learned to implement and customize Google Maps with features such as vector maps, WebGL overlays, and a minimap, gaining a deeper understanding of how the API works beyond the basics.
● Advanced Marker Implementation: by using google.maps.marker.AdvancedMarkerElement, we improved our ability to render custom markers with visual customization and learned to handle the transition from the legacy Marker to AdvancedMarkerElement.
● Directions and Route Calculation: we implemented Google’s Directions API to calculate and render driving routes dynamically on the map, reinforcing our ability to handle API requests and process responses effectively.
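The Directions request/response cycle described above can be sketched like this; the callback shape follows the standard `DirectionsService` API, while the function name and error handling are our own illustrative choices.

```javascript
function drawRoute(map, origin, destination) {
  const service = new google.maps.DirectionsService();
  const renderer = new google.maps.DirectionsRenderer({ map });

  service.route(
    { origin, destination, travelMode: google.maps.TravelMode.DRIVING },
    (result, status) => {
      if (status === google.maps.DirectionsStatus.OK) {
        renderer.setDirections(result); // draws the route polyline on the map
      } else {
        // e.g. ZERO_RESULTS, OVER_QUERY_LIMIT, NOT_FOUND…
        console.warn("Directions request failed:", status);
      }
    }
  );
}
```

The same `result` object also carries the encoded path, which is what lets the route be mirrored onto the minimap and into the 3D view.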
2. Working with 3D Graphics using Three.js
● 3D Object Loading: using THREE.GLTFLoader, we loaded and animated 3D models (like the astronaut and arrow), which taught us how to handle external assets and integrate them into our scene.
● Animation and Movement: we applied THREE.AnimationMixer to drive character animations dynamically based on user input (e.g., running, idle), which required learning how to synchronize animation clips and manage transitions.
● Camera Positioning: we dynamically adjust the camera’s position and rotation based on user input (WASD and mouse movement), gaining a practical understanding of camera manipulation in 3D space.
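The load-then-animate flow above can be sketched as follows. The clip names ("Run", "Idle") and the model path are placeholders, and the sketch assumes the global `THREE` build with the GLTFLoader attached, matching how the loader is referenced in this writeup.

```javascript
// Pure state logic: pick which animation clip should play right now.
function pickClip(isMoving) {
  return isMoving ? "Run" : "Idle";
}

function loadCharacter(scene, onReady) {
  new THREE.GLTFLoader().load("models/character.glb", (gltf) => {
    scene.add(gltf.scene);
    const mixer = new THREE.AnimationMixer(gltf.scene);

    // Index the clips by name so the input handler can switch between them.
    const actions = {};
    for (const clip of gltf.animations) {
      actions[clip.name] = mixer.clipAction(clip);
    }
    onReady({ mixer, actions });
  });
}

// Per frame, in the render loop:
// mixer.update(deltaSeconds);
// actions[pickClip(isMoving)].play();
```

Splitting the "which clip" decision into a pure function keeps the render loop trivial: it just advances the mixer and plays whatever the current input state demands.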
3. User Input and Interaction Design
● Keyboard and Mouse Controls: we implemented WASD and arrow-key movement, mouse-based camera control, and camera locking/unlocking, which required managing event listeners effectively and mapping user input to precise transformations in 3D space.
● Google Places Autocomplete Integration: we learned to integrate and configure the google.maps.places.Autocomplete component, enabling users to search for destinations efficiently.
4. API Management and Optimization
● API Key Security and Configuration: we managed sensitive keys for the Google Maps and Generative AI APIs using environment variables, keeping the project secure.
● Rate Limiting and Optimization: by understanding potential request limits (e.g., OVER_QUERY_LIMIT errors), we became more aware of API quotas and learned to optimize requests, for example by caching results and reducing redundant calls.
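A minimal sketch of the caching idea: memoize identical requests so a repeated origin/destination pair never spends quota twice. `fetchRoute` stands in for whatever function actually calls the API; this is an illustration of the pattern, not the project's code.

```javascript
// Wrap a promise-returning fetcher with an in-memory cache.
function cachedRequester(fetchRoute) {
  const cache = new Map();
  return function (origin, destination) {
    const key = `${origin}->${destination}`;
    if (!cache.has(key)) {
      // Cache the promise itself so concurrent identical calls
      // share one in-flight request instead of firing duplicates.
      cache.set(key, fetchRoute(origin, destination));
    }
    return cache.get(key);
  };
}
```

In practice a time-to-live or size bound would be added so stale routes eventually refresh, but the core quota saving comes from the single `Map` lookup.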
5. Generative AI Integration
● Gemini AI for Natural Language Processing: we integrated the Google Generative AI service to create a responsive chatbot.
● Custom Prompt Engineering: we shaped the assistant’s responses to be relevant and contextually aware of the user’s map-based activities.
6. Frontend Development with Modern Tools
● Responsive UI with Bootstrap: leveraging Bootstrap, we designed a responsive interface with elements like buttons, input fields, and modals, ensuring a seamless user experience across devices.
● Dynamic HTML Injection: we inject features like the map, minimap, and control buttons into the page dynamically.
7. Debugging and Troubleshooting
● Error Handling in API Requests: we handled API errors such as quota limits, invalid requests, and network issues, sharpening our debugging skills with tools like the browser DevTools.
● WebGL Errors: we resolved issues related to WebGL rendering (e.g., fallback to raster maps), learning how to debug compatibility problems and configure the project for optimal performance.
8. Project Structuring and Best Practices
● Module-Based Architecture: we split the project into modules (helpers, app.js) for maintainability and scalability.
● Environment Variable Usage: we used .env files to manage sensitive configuration, keeping the project portable and secure.
● Version Control with Git: we followed version-control best practices and kept the project organized.
9. Cross-Disciplinary Knowledge
● Combining APIs and 3D Graphics: this project combined data-driven maps with immersive 3D visualization.
● User-Centric Design: we built features that directly improve the user experience, such as intuitive controls, real-time route updates, and conversational AI guidance.
What's next for MapSphere
We want to keep investigating how Google 3D Maps could be integrated with Three.js and displayed in the browser through WebXR, aiming for a more immersive user experience. Because WebXR technology is booming, it would be interesting to give users a phone-based experience with full 3D models. Google Maps is widely used globally, but its rendering is still largely flat. We are aware that displaying 3D models on screen carries computational costs (GPU, CPU, memory), but with the advancement of technology this may become achievable.
Another goal is to adapt the map to the different resolutions of mobile devices.
We also want to incorporate virtual on-screen controls, like those of a video game, so the user can manipulate and control the character more naturally.
Finally, we plan to create a landing page that gives the user instructions at the start.
Built With
- bootstrap
- gcp
- geminiai
- google-maps
- javascript
- three.js
- webgl

