Inspiration
"Wake up, Neo..."
Our project was born from a love for the seminal 1999 sci-fi classic, The Matrix. We wanted to answer a simple question: What if you could sit at that terminal?
We were inspired by the aesthetic of late 90s cyberpunk—the glowing green CRT monitors, the cascading digital rain, and the command-line interfaces that defined the era's view of "hacking." Instead of just watching Neo make the choice, we wanted to build a simulation where the user has to make the choice. We wanted to blend nostalgia with modern web technologies to create an experience that feels like you have truly jacked into the Matrix.
What it does
The One Protocol is an immersive, browser-based terminal simulator that gamifies the plot of The Matrix.
Interactive Storyline: It guides the user through 8 distinct stages of the movie, from the Red/Blue pill choice to the final subway showdown.
Visual Simulation: It renders a procedurally generated "Digital Rain" effect using HTML5 Canvas and mimics a retro CRT monitor with CSS scanlines, flicker, and neon bloom.
Audio Immersion: It utilizes the Web Speech API to deliver robotic, system-level text-to-speech feedback and Tone.js to generate dynamic, sci-fi synth sounds for every keystroke and system event.
Cloud Persistence: It integrates with Firebase Firestore to log "Mission Reports," saving every success and failure to a cloud database, effectively tracking who among our users has the potential to be "The One."
How we built it
We made a conscious decision to stick to Vanilla JavaScript to maintain raw control over the DOM and performance, avoiding the overhead of heavy frameworks for what is essentially a high-performance visual and audio demo.
Core Logic: The game engine is a state machine built in JavaScript. It tracks `currentStage` and parses user input against valid story commands (e.g., `RED`, `DODGE`, `BELIEVE`).
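In sketch form, the stage table maps each stage to the commands it accepts and the stage each command leads to. The stage names, prompts, and the `printLine` helper below are illustrative, not our exact code:

```js
// Hypothetical stage table; the real game walks through 8 stages.
const stages = {
  CHOICE: {
    prompt: 'Take the RED pill or the BLUE pill?',
    commands: { RED: 'TRAINING', BLUE: 'GAME_OVER' },
  },
  TRAINING: {
    prompt: 'Agent inbound. DODGE or FIGHT?',
    commands: { DODGE: 'SUBWAY', FIGHT: 'GAME_OVER' },
  },
  // ...remaining stages up to the subway showdown
};

let currentStage = 'CHOICE';

function handleInput(raw) {
  const command = raw.trim().toUpperCase();
  const next = stages[currentStage].commands[command];
  if (!next) return printLine(`UNKNOWN COMMAND: ${command}`); // printLine = our typewriter output
  currentStage = next;
  printLine(stages[currentStage].prompt);
}
```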
The Digital Rain: We used the HTML5 Canvas API. The rain isn't a video; it's calculated in real time. We implemented an algorithm where drops fall in columns ($x_i$) at randomized speeds ($v_i$), each head position advancing per frame as:

$$ y_{i, t+1} = y_{i, t} + v_i \cdot \Delta t $$

We also implemented the "White Leader" effect, where the first character in a stream glows white before fading to green.
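A minimal sketch of that update, assuming a `<canvas id="rain">` element; the glyph set, font size, and speed range are illustrative:

```js
const canvas = document.getElementById('rain');
const ctx = canvas.getContext('2d');
const fontSize = 16;
ctx.font = `${fontSize}px monospace`;

const glyphs = 'アカサタナ0123456789'.split('');
const columns = Math.floor(canvas.width / fontSize);
// One head position and one randomized speed (v_i) per column.
const heads = Array.from({ length: columns }, () => Math.random() * canvas.height);
const speeds = Array.from({ length: columns }, () => 60 + Math.random() * 240); // px/s

function updateRain(dt) {
  for (let i = 0; i < columns; i++) {
    const glyph = () => glyphs[Math.floor(Math.random() * glyphs.length)];
    const x = i * fontSize;
    ctx.fillStyle = '#0f0';
    ctx.fillText(glyph(), x, heads[i]); // repaint the old head green...
    heads[i] += speeds[i] * dt;         // y_{i,t+1} = y_{i,t} + v_i * Δt
    ctx.fillStyle = '#fff';
    ctx.fillText(glyph(), x, heads[i]); // ...so only the new head glows white
    if (heads[i] > canvas.height) heads[i] = -fontSize; // recycle the drop
  }
}
```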
Audio System: We used Tone.js to synthesize sounds directly in the browser. Instead of loading MP3 files, we generate waveforms (sine/square) in real time. For the voice, we synchronized the `SpeechSynthesis` interface with our custom typewriter effect so the voice speaks exactly as the text appears.

Database: We utilized the Firebase Modular SDK to authenticate users anonymously and write to a Firestore collection, creating a permanent audit trail of simulation attempts.
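A sketch of that write path with the modular SDK; the `missionReports` collection name and report fields are our best-guess illustration, and `firebaseConfig` comes from the Firebase console:

```js
import { initializeApp } from 'firebase/app';
import { getAuth, signInAnonymously } from 'firebase/auth';
import { getFirestore, collection, addDoc, serverTimestamp } from 'firebase/firestore';

const app = initializeApp(firebaseConfig);
const auth = getAuth(app);
const db = getFirestore(app);

async function logMissionReport(stage, outcome) {
  // Reuses the existing anonymous session if one is already active.
  const { user } = await signInAnonymously(auth);
  await addDoc(collection(db, 'missionReports'), {
    uid: user.uid,
    stage,                 // e.g. 'SUBWAY'
    outcome,               // e.g. 'SUCCESS' | 'FAILURE'
    at: serverTimestamp(), // resolved server-side, not on the client clock
  });
}
```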
Challenges we ran into
The biggest hurdle was Synchronization.
The "Race Condition" of Typing: Originally, if a user typed
RESETwhile a message was still printing, the two messages would merge into garbled text. We had to implement a robustclearTimeoutsystem to kill active typing loops before starting new ones.Audio/Visual Sync: Browsers handle Text-to-Speech at different rates. We had to fine-tune the typing speed (roughly 75ms per character) to align with the SpeechSynthesis rate (1.25x speed) so the robotic voice wouldn't finish seconds before the text was readable.
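Both fixes meet in the typewriter routine. A minimal sketch, assuming a `terminal` output element; `printLine` is the same illustrative helper referenced in the state-machine sketch, and the 75 ms and 1.25x values are the ones quoted above:

```js
let typingTimer = null;

function printLine(text, onDone) {
  clearTimeout(typingTimer); // kill any in-flight typing loop first
  speechSynthesis.cancel();  // and stop any still-speaking utterance

  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.25;     // roughly matches 75 ms per character below
  speechSynthesis.speak(utterance);

  let i = 0;
  (function tick() {
    if (i >= text.length) return onDone && onDone();
    terminal.textContent += text[i++];
    typingTimer = setTimeout(tick, 75);
  })();
}
```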
Canvas Performance: Rendering hundreds of falling text streams at 60 FPS can be heavy. We optimized the render loop to draw only the necessary alpha-fading rectangles rather than clearing the whole screen every frame, creating the "trail" effect efficiently.
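Continuing the rain sketch from earlier, the common form of this trick is a translucent fill in place of a full clear; the 0.08 alpha here is a tuning knob, not necessarily our exact value:

```js
let last = performance.now();

function frame(now) {
  const dt = (now - last) / 1000;
  last = now;
  // A translucent black rectangle over the previous frame instead of
  // clearRect(): every old glyph dims a little each pass, forming the trail.
  ctx.fillStyle = 'rgba(0, 0, 0, 0.08)';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  updateRain(dt); // the column update sketched under "How we built it"
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```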
Accomplishments that we're proud of
The Atmosphere: We nailed the "feel." The combination of the CRT CSS flicker, the scanlines, and the audio feedback makes the terminal feel like a physical object.
The Story Engine: Successfully adapting a 2-hour movie into a cohesive, text-based interactive adventure that fits in a single web page.
No External Assets: Aside from the font and libraries, there are no images. Every visual element is code.
What we learned
Browser Audio is Powerful: We learned how to manipulate the `AudioContext` via Tone.js to create sounds from scratch, giving us way more flexibility than just playing audio files (see the sketch after this list).

Canvas vs. DOM: We gained a deeper appreciation for using Canvas for high-frequency animations (the rain) while keeping the UI (the terminal text) in the DOM for accessibility and ease of styling.
The Value of Polish: Small details—like the white head on the rain droplets or the slight pitch randomization on keystrokes—drastically increase the quality of the final product.
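As a flavor of both points, the synthesis from scratch and the keystroke pitch randomization, here is a minimal sketch; the envelope values and the 25-cent detune range are illustrative, not our exact settings:

```js
import * as Tone from 'tone';

// Note: browsers start audio contexts suspended, so Tone.start() must be
// called once from a user gesture (e.g. the first click) before sound plays.
const keySynth = new Tone.Synth({
  oscillator: { type: 'square' },                                      // raw waveform, no samples
  envelope: { attack: 0.001, decay: 0.05, sustain: 0, release: 0.02 }, // short and clicky
}).toDestination();

function keystrokeBlip() {
  // Detune by up to +/- 25 cents so repeated keys never sound identical.
  const hz = 880 * Math.pow(2, (Math.random() * 50 - 25) / 1200);
  keySynth.triggerAttackRelease(hz, '32n');
}
document.addEventListener('keydown', keystrokeBlip);
```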
What's next for Project: Wake Up
Voice Recognition: Using the Web Speech API's SpeechRecognition interface so users can actually speak the commands ("I know Kung Fu") instead of just typing them (see the sketch after this list).
Multiplayer "Agent" Mode: Using Firebase Realtime Database to allow a second user to log in as an "Agent" and type commands that glitch or disrupt the Player's terminal in real-time.
3D Construct: Integrating Three.js to transition from the 2D terminal into a 3D "White Room" construct when the user wins the game.
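A rough sketch of how the recognition hook could feed the existing parser; browser support is uneven, hence the vendor-prefixed fallback, and `handleInput` is the hypothetical parser from the state-machine sketch:

```js
// SpeechRecognition is still prefixed in Chromium-based browsers.
const SR = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognizer = new SR();
recognizer.lang = 'en-US';
recognizer.onresult = (event) => {
  const phrase = event.results[0][0].transcript; // e.g. "I know Kung Fu"
  handleInput(phrase); // same path as typed commands
};
recognizer.start();
```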
Built With
- css3
- firebase
- firebase-authentication
- firebase-firestore
- google-font
- html5
- javascript
- tone.js
- web-speech-api