Compressing gaussian splats #309
As an FYI

Did some cleanup and put up a doc page for it (
```python
"quats": torch.randn(N, 4),
"opacities": torch.randn(N),
"sh0": torch.randn(N, 1, 3),
"sh1": torch.randn(N, 24, 3),
```

Great to see Self-Organizing-Gaussians being used!

FYI: I was messing around with the code with Claude, and it seems to have inadvertently caught and fixed a bug related to the compression in simple_trainer.py. It looks correct to me at a glance, but since I'm not read into the code, I didn't want to open an issue/PR for an accidental Claude change I haven't looked into, so I figured I'd put it here instead for you to judge yourself.

Hello, sorry to be late to the party. I have used this method, and I must decompress the result so I can convert it to PLY. But unfortunately, the size of the PLY remains the same. The question is: is there any tutorial for getting a compressed PLY with this method?
Yeah, I got this error too. Somehow it does not error in my setup with 1 GPU, but with more than 1 GPU it causes an error. The correct code should be
|
Hi @jefequien, is there any way to load a checkpoint from Nerfstudio and run just the compression step, instead of training from the ground up in gsplat?
|
@Ben-Mack Assuming Nerfstudio uses the same .pt file format as gsplat, it should be simple. Just use simple_trainer.py with `--ckpt`.
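A quick sanity check before handing a checkpoint to the compression step could look like this. This is only a sketch: the `"splats"` key and the field names are assumptions based on gsplat's simple_trainer.py, and a Nerfstudio export is not guaranteed to match them, so inspect your own .pt file first.

```python
import os
import tempfile

import torch

# Fake a simple_trainer-style checkpoint for illustration; a real one
# comes out of a training run. The "splats" key and field names are
# assumptions about the expected layout, not guaranteed.
N = 100
ckpt = {
    "step": 30000,
    "splats": {
        "means": torch.randn(N, 3),
        "scales": torch.randn(N, 3),
        "quats": torch.randn(N, 4),
        "opacities": torch.randn(N),
        "sh0": torch.randn(N, 1, 3),
        "shN": torch.randn(N, 24, 3),
    },
}
path = os.path.join(tempfile.mkdtemp(), "ckpt.pt")
torch.save(ckpt, path)

# If a Nerfstudio export uses different key names, remap them to the
# layout simple_trainer.py expects before passing the file via --ckpt.
loaded = torch.load(path)
print(sorted(loaded["splats"].keys()))
# ['means', 'opacities', 'quats', 'scales', 'sh0', 'shN']
```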
Co-authored-by: Ruilong Li <397653553@qq.com>



Thanks largely to MCMC's improved densification strategy, a simple post-training compression scheme is enough to top the 3D Gaussian Splatting Compression Methods leaderboard.
This PR implements quantization, sorting, and K-means clustering of spherical harmonic coefficients for compression.
Relevant links:
https://github.com/fraunhoferhhi/Self-Organizing-Gaussians
https://aras-p.info/blog/2023/09/27/Making-Gaussian-Splats-more-smaller/
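The K-means codebook idea can be sketched self-containedly: cluster the high-dimensional coefficient rows, then store only the small centroid table plus one integer label per splat. This is a toy Lloyd's iteration in plain NumPy with illustrative shapes and cluster count, not the PR's actual code (which clusters `shN` into far more clusters).

```python
import numpy as np

def kmeans_codebook(x, k, iters=10, seed=0):
    """Tiny Lloyd's k-means: returns (centroids, labels) for rows of x."""
    rng = np.random.default_rng(seed)
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        # Assign each row to its nearest centroid.
        d = ((x[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # Recompute each centroid as its cluster mean (skip empty clusters).
        for c in range(k):
            if (labels == c).any():
                centroids[c] = x[labels == c].mean(0)
    return centroids, labels

# Toy stand-in for flattened SH coefficients: N splats, 45-dim rows.
N, k = 512, 16
shN = np.random.default_rng(1).normal(size=(N, 45)).astype(np.float32)
centroids, labels = kmeans_codebook(shN, k)

# Storing k centroids + N small integer labels instead of N full rows
# is where the compression comes from.
approx = centroids[labels]
print(approx.shape)  # (512, 45)
```

Storage drops from N×45 floats to k×45 floats plus N labels; the reconstruction `centroids[labels]` is the lossy approximation.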
Results
Evaluated on MipNeRF360 dataset.
Note: Using K-means centroids and labels to initialize a codebook sets the high score for a given file size, at the cost of another training run. This is not implemented in the PR.
Data format
This PR quantizes `means` to 16 bits; `opacities`, `quats`, `scales`, and `sh0` to 8 bits; and `shN` to 6 bits. Sorting these fields with PLAS and saving them as PNGs saves a few MBs and generates nice visuals. `means` are quantized in log space. `shN` coefficients are clustered with K-means into 2**16 clusters, and their centroids and labels are saved to a compressed npz file.

Bonsai scene's quantized and sorted `sh0` PNG.