PosePAL is a tracker-assisted labeling tool for annotating keypoints in video sequences. It leverages test-time optimization on general-purpose point trackers to efficiently generate temporally dense pose annotations from sparse labels. This tool was originally developed for animal pose labeling, but it is also applicable to any video sequence where keypoints need to be annotated.
PosePAL is based on our paper: Animal Pose Labeling Using General-Purpose Point Trackers, presented at CV4Animals@CVPR 2025.
1. **Load a video**

   Upload your video by clicking the **Choose File** button. Alternatively, you can try the tool using the sample video in the `sample_videos` directory.

2. **Add keypoints**

   Click the **NEW** button, then click anywhere on the video frame to place a keypoint. Repeat this step to add multiple keypoints.

3. **Initial tracking**

   After adding keypoints, click the **TRACK** button to begin tracking. The tool uses a pre-trained general-purpose point tracker (CoTracker3) to generate initial trajectories for each keypoint.
4. **Interactive refinement**

   Refine the tracked keypoints by dragging any inaccurate ones to the correct positions. We recommend providing a correction every 10–20 frames for each keypoint to achieve accurate optimization. After making corrections, switch the tracking method to **Step 2: Optimize the tracker**, then click **TRACK** again to optimize the trajectories based on your input.

   Additional tips:
   - To delete a keypoint, click the 🗑️ icon.
   - To add more keypoints, click **NEW** and repeat the process.
5. **Save annotations**

   Once you're satisfied with the keypoints and their trajectories, export your results by saving the annotations in JSON format.
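The README does not document the exported JSON schema, so here is a hedged sketch of what loading a saved annotation file might look like. The structure and field names (`video`, `num_frames`, `keypoints`) are assumptions for illustration, not PosePAL's actual format:

```python
import json

# Hypothetical annotation structure: one trajectory per keypoint,
# with an (x, y) entry for every frame. This schema is an assumption,
# not PosePAL's documented export format.
annotations = {
    "video": "sample.mp4",
    "num_frames": 3,
    "keypoints": {
        "left_ear": [[120.5, 88.0], [121.0, 89.2], [122.3, 90.1]],
        "tail_tip": [[210.0, 64.5], [209.1, 65.0], [208.7, 66.2]],
    },
}

# Round-trip through JSON, as a saved annotation file would be.
payload = json.dumps(annotations, indent=2)
restored = json.loads(payload)

# Every keypoint should have one (x, y) pair per frame.
for name, track in restored["keypoints"].items():
    assert len(track) == restored["num_frames"]
    print(name, "->", len(track), "frames")
```

Check the actual exported file to see the real schema before building tooling on top of it.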
Here is a video demo of the tool:
gui_demo.mp4
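Under the hood, CoTracker3 consumes each labeled keypoint as a query point `(t, x, y)`: the frame index where the point was placed plus its pixel coordinates, batched into a `(batch, num_points, 3)` array. A minimal sketch of packing clicked keypoints into that query format — the coordinate values are illustrative, and `cotracker` in the final comment stands in for a loaded model, not code from this repo:

```python
import numpy as np

# Keypoints clicked in the GUI: (frame_index, x_pixel, y_pixel).
# These values are illustrative, not taken from a real video.
clicked = [
    (0, 120.5, 88.0),   # keypoint placed on frame 0
    (0, 210.0, 64.5),   # a second keypoint on frame 0
    (15, 98.0, 140.0),  # a keypoint first labeled on frame 15
]

# CoTracker3 expects queries of shape (batch, num_points, 3),
# where each row is (t, x, y).
queries = np.asarray(clicked, dtype=np.float32)[None]  # add batch dim
print(queries.shape)  # (1, 3, 3)

# The tracker is then called roughly as:
#   pred_tracks, pred_visibility = cotracker(video, queries=torch.from_numpy(queries))
# yielding an (x, y) trajectory for every keypoint on every frame.
```

Keypoints do not all need to be queried from frame 0; a point can enter the track at whichever frame it was first labeled.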
- Clone the repository and its dependencies:

  ```bash
  git clone https://github.com/Zhuoyang-Pan/PosePAL.git --recursive
  cd PosePAL
  ```
- Download the pre-trained CoTracker3 checkpoint:

  ```bash
  mkdir -p dependencies/cotracker3/checkpoints
  wget https://huggingface.co/facebook/cotracker3/resolve/main/scaled_offline.pth -O dependencies/cotracker3/checkpoints/scaled_offline.pth
  ```
- Install PyTorch by following the instructions at https://pytorch.org/get-started/locally/, then install the remaining Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```

  If you don't have Node.js installed, install it (we recommend v22.16.0) from https://nodejs.org/en/download/, then install the JavaScript dependencies:

  ```bash
  npm install
  ```
- Run the server:

  ```bash
  uvicorn main:app --reload
  ```
- Run the client:

  ```bash
  npm start
  ```
The labeled DAVIS-Animals dataset and the sampled DeepFly3D dataset used in our paper are available here.
July 22, 2025: Initial release
The tool is in active development, and we are working on improving the user experience and adding more features.
- Examples and detailed documentation for using the tool.
- Post-processing methods (filters) to further refine the keypoints.
- Loading and saving annotations in different formats.
This codebase is released with the following paper.
> Zhuoyang Pan, Boxiao Pan, Guandao Yang, Adam W. Harley, Leonidas Guibas. *Animal Pose Labeling Using General-Purpose Point Trackers*. CV4Animals@CVPR 2025, Oral Presentation.
Please cite our paper if you find this work useful for your research:
```bibtex
@article{pan2025animal,
  title   = {Animal Pose Labeling Using General-Purpose Point Trackers},
  author  = {Pan, Zhuoyang and Pan, Boxiao and Yang, Guandao and Harley, Adam W and Guibas, Leonidas},
  journal = {arXiv preprint arXiv:2506.03868},
  year    = {2025}
}
```
Thanks!