Washes static watermarks off video, slowly.
Currently Mac-only, because this uses a CoreML implementation of LaMa. The inference bits could be switched out for something else, though.
Apache-2.0.
- Contains code derived from https://github.com/mallman/CoreMLaMa (Apache-2.0, @mallman)
- Based on https://github.com/advimman/lama (Apache-2.0)
- use `uv run find_mask.py compute-variance -i video.mp4 -o variance.npy` to get a variance map
- use `uv run find_mask.py generate-mask -i variance.npy --output-png mask.png --threshold=1500 --kernel-size=5` to get a mask – but it might be better to just hand-draw a mask based on the variance debug image
- use `uv run crop_mask.py mask.png -o mask-crop.png -j mask-crop.json` – this crops your mask to its minimal bounding box, because computing the inpaint on full 4K frames is slow
- use `uv run convert_mlama.py -w WIDTH -h HEIGHT`, where `WIDTH` and `HEIGHT` are the mask size from above
- finally, with much patience, run `uv run run_inference.py -m mask-crop.png -c mask-crop.json -i video.mp4` – preferably with `-f temp-frames` if you have the disk space to spare to save JPEG frames
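The variance-map idea in the first step can be sketched in a few lines of NumPy. This is an illustration of the principle, not the actual `find_mask.py` code: a static watermark damps how much its pixels change over time, so per-pixel temporal variance highlights its footprint (the function name here is made up).

```python
import numpy as np

def compute_variance_map(frames):
    """Per-pixel temporal variance over a stack of grayscale frames.

    Illustrative stand-in for what compute-variance presumably does;
    the real script decodes the video itself.
    """
    stack = np.stack(frames).astype(np.float64)  # shape (T, H, W)
    return stack.var(axis=0)                     # shape (H, W)

# Toy demo: one "watermark" pixel that never changes amid noisy scene pixels.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(4, 4)).astype(np.float64) for _ in range(50)]
for f in frames:
    f[0, 0] = 128.0  # static overlay pixel, identical in every frame
var_map = compute_variance_map(frames)
assert var_map[0, 0] == 0.0          # static pixel: zero variance
assert var_map[1:, :].min() > 0.0    # scene pixels: nonzero variance
```

A threshold on this map (like the `--threshold=1500` above, presumably selecting low-variance pixels) then yields the mask.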
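The cropping step can likewise be sketched: find the mask's minimal bounding box and record the offsets needed to paste inpainted patches back into the full frame. This is a hypothetical illustration of what `crop_mask.py` does, and the real JSON schema may differ.

```python
import json
import numpy as np

def crop_to_bbox(mask):
    """Crop a binary mask to its minimal bounding box.

    Returns the cropped mask plus offset metadata (field names are
    assumptions, not the actual crop_mask.py output format).
    """
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    meta = {"x": int(x0), "y": int(y0),
            "width": int(x1 - x0), "height": int(y1 - y0)}
    return mask[y0:y1, x0:x1], meta

mask = np.zeros((2160, 3840), dtype=np.uint8)  # full 4K frame
mask[100:150, 3500:3700] = 255                 # small watermark region
crop, meta = crop_to_bbox(mask)
print(json.dumps(meta))  # {"x": 3500, "y": 100, "width": 200, "height": 50}
assert crop.shape == (50, 200)
```

Running the inpaint model only on this small crop, rather than the full 4K frame, is what makes the later steps tolerably slow instead of intolerably slow.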
On my M2 Max MacBook, this took 7 hours to process a 15-minute video.