This repository holds the code for controlling LeLamp. The runtime provides a comprehensive control system for the robotic lamp, including motor control, recording/replay functionality, voice interaction, and testing capabilities.
LeLamp is an open source robot lamp based on Apple's ELEGNT, made by Human Computer Lab.
LeLamp Runtime is a Python-based control system that interfaces with the hardware components of LeLamp including:
- Servo motors for articulated movement
- Audio system (microphone and speaker)
- RGB LED lighting
- Camera system
- Voice interaction capabilities
```
lelamp_runtime/
├── main.py                # Main runtime entry point
├── pyproject.toml         # Project configuration and dependencies
├── lelamp/                # Core package
│   ├── setup_motors.py    # Motor configuration and setup
│   ├── calibrate.py       # Motor calibration utilities
│   ├── list_recordings.py # List all recorded motor movements
│   ├── record.py          # Movement recording functionality
│   ├── replay.py          # Movement replay functionality
│   ├── follower/          # Follower mode functionality
│   ├── leader/            # Leader mode functionality
│   └── test/              # Hardware testing modules
└── uv.lock                # Dependency lock file
```
- UV package manager
- Hardware components properly assembled (see main LeLamp documentation)
- Clone the runtime repository:
```bash
git clone https://github.com/humancomputerlab/lelamp_runtime.git
cd lelamp_runtime
```

- Install UV (if not already installed):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

- Install dependencies:
```bash
# If on your personal computer
uv sync

# If on Raspberry Pi
uv sync --extra hardware
```

Note: For motor setup and control, LeLamp Runtime can run on your computer and you only need to run `uv sync`. For other functionality that connects to the head Pi (LED control, audio, camera), you need to install LeLamp Runtime on that Pi and run `uv sync --extra hardware`.
If you have LFS problems, run the following command:

```bash
GIT_LFS_SKIP_SMUDGE=1 uv sync
```

If your installation process is slow, use the following environment variable:

```bash
export UV_CONCURRENT_DOWNLOADS=1
```

The runtime includes several key dependencies:
- feetech-servo-sdk: For servo motor control
- lerobot: Robotics framework integration
- livekit-agents: Real-time voice interaction
- numpy: Mathematical operations
- sounddevice: Audio input/output
- adafruit-circuitpython-neopixel: RGB LED control (hardware)
- rpi-ws281x: Raspberry Pi LED control (hardware)
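As a quick sanity check of the audio stack, here is a minimal record-and-playback sketch using sounddevice; the sample rate and duration are arbitrary assumptions, so adjust them for your devices:

```python
import sounddevice as sd

SAMPLE_RATE = 16000  # assumed rate; pick one your devices support
DURATION = 2         # seconds of audio to capture

# Record a short clip from the default input device.
clip = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE, channels=1)
sd.wait()  # block until the recording finishes

# Play the clip back through the default output device.
sd.play(clip, samplerate=SAMPLE_RATE)
sd.wait()
```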
Prior to following the instructions here, you should have an overview of how to control LeLamp through this tutorial.
- Find the servo driver port:

This command finds the port your motor driver is connected to.

```bash
uv run lerobot-find-port
```

- Set up motors with unique IDs:

This command sets up each motor of LeLamp with a unique ID.

```bash
uv run -m lelamp.setup_motors --id your_lamp_name --port the_port_found_in_previous_step
```

- Calibrate motors:

This command calibrates your motors.

```bash
sudo uv run -m lelamp.calibrate --id your_lamp_name --port the_port_found_in_previous_step
```

The calibration process will:
- Calibrate both follower and leader modes
- Ensure proper servo positioning and response
- Set baseline positions for accurate movement
The runtime includes comprehensive testing modules to verify all hardware components:
```bash
# Run with sudo for hardware access
sudo uv run -m lelamp.test.test_rgb

uv run -m lelamp.test.test_audio

uv run -m lelamp.test.test_motors --id your_lamp_name --port the_port_found_in_previous_step
```
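If the RGB test misbehaves, driving the LEDs directly can help isolate wiring problems. Here is a minimal sketch using adafruit-circuitpython-neopixel; the data pin and pixel count are assumptions, so substitute the values matching your build (run it with sudo on the Pi):

```python
import time

import board
import neopixel

PIXEL_PIN = board.D18  # assumed data pin; check your wiring
NUM_PIXELS = 16        # assumed pixel count for your strip/ring

# auto_write=False batches updates until show() is called.
pixels = neopixel.NeoPixel(PIXEL_PIN, NUM_PIXELS, brightness=0.2, auto_write=False)

# Cycle red, green, and blue across all pixels.
for color in [(255, 0, 0), (0, 255, 0), (0, 0, 255)]:
    pixels.fill(color)
    pixels.show()
    time.sleep(1)

pixels.fill((0, 0, 0))  # turn everything back off
pixels.show()
```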
One of LeLamp's key features is the ability to record and replay movement sequences.

To record a movement sequence:
```bash
uv run -m lelamp.record --id your_lamp_name --port the_port_found_in_previous_step --name movement_sequence_name
```

This will:
- Put the lamp in recording mode
- Allow you to manually manipulate the lamp
- Save the movement data to a CSV file
To replay a recorded movement:
```bash
uv run -m lelamp.replay --id your_lamp_name --port the_port_found_in_previous_step --name movement_sequence_name
```

The replay system will:
- Load the movement data from the CSV file
- Execute the recorded movements with proper timing
- Reproduce the original motion sequence
To view all recordings for a specific lamp:
```bash
uv run -m lelamp.list_recordings --id your_lamp_name
```

This will display:
- All available recordings for the specified lamp
- File information including row count
- Recording names that can be used for replay
Recorded movements are saved as CSV files with the naming convention:
{sequence_name}.csv
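To give a sense of how a recording is consumed, here is a minimal sketch that reads such a CSV and paces playback by timestamp. The schema (a timestamp column plus one column per joint) and the send_positions helper are hypothetical illustrations; the actual format is whatever lelamp.record writes:

```python
import csv
import time

def send_positions(positions: dict[str, float]) -> None:
    # Hypothetical placeholder: the real runtime drives the servos
    # via lelamp.replay; here we only print the target positions.
    print(positions)

with open("movement_sequence_name.csv", newline="") as f:
    rows = list(csv.DictReader(f))

start = time.monotonic()
for row in rows:
    # Assumed schema: 'timestamp' holds seconds since recording began,
    # and every other column is a joint position.
    t = float(row.pop("timestamp"))
    # Sleep until this frame's timestamp to reproduce the original timing.
    delay = t - (time.monotonic() - start)
    if delay > 0:
        time.sleep(delay)
    send_positions({joint: float(value) for joint, value in row.items()})
```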
If you want to start LeLamp's voice app on boot, create a systemd service file:
```bash
sudo nano /etc/systemd/system/lelamp.service
```

Add this content:

```ini
[Unit]
Description=Lelamp Runtime Service
After=network.target
[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi/lelamp_runtime
ExecStart=/usr/bin/sudo uv run main.py console
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target
```

Then enable and start the service:

```bash
sudo systemctl daemon-reload
sudo systemctl enable lelamp.service
sudo systemctl start lelamp.service
```

For other service controls:

```bash
# Disable from starting on boot
sudo systemctl disable lelamp.service
# Stop the currently running service
sudo systemctl stop lelamp.service
# Check status (should show "disabled" and "inactive")
sudo systemctl status lelamp.service
```

Note: Boot time might vary with each run, and extended usage (>1 hour) can burn out the motors.
Sample apps to test LeLamp's capabilities.
To run a conversational agent on LeLamp, create a `.env` file with the following content in the root of this repository on your Raspberry Pi.

```
OPENAI_API_KEY=
LIVEKIT_URL=
LIVEKIT_API_KEY=
LIVEKIT_API_SECRET=
```

For how to obtain LiveKit secrets, refer to LiveKit's guide. Install the LiveKit CLI, then run the following command:

```bash
lk app env -w
cat .env.local
```

This will automatically create a `.env.local` file for you, which contains all the secrets on the LiveKit side.
For how to obtain OpenAI secrets, you can follow this FAQ.
Then you can run the agent app:

```bash
# Only need to run this once
sudo uv run main.py download-files
# Pick one of the below
# For Discrete Animation Mode
sudo uv run main.py console
# For Smooth Animation Mode
sudo uv run smooth_animation.py console
```

If your lamp's id is not `lelamp`, change the lamp id inside main.py:

```python
async def entrypoint(ctx: agents.JobContext):
    agent = LeLamp(lamp_id="lelamp")  # <- Change the name here
```

This is an open-source project by Human Computer Lab. Contributions are welcome through the GitHub repository.
Maintained by Human Computer Lab.
See CONTRIBUTORS.md for contributors and their roles.
See SPONSORS.md for sponsor thanks and how to support the project.
Check the main LeLamp repository for licensing information.
