Update 2025-03-12: We have since improved STGG+ and added active learning (STGG+AL). It beats RL methods at generating molecules with complex properties. The molecules we get are much nicer than the ones from the original paper. Molecule synthesizability can be improved simply by adding constraints such as max-ring-size ≤ 6 and removing overly large molecules … Continue reading Any-Property-Conditional Molecule 🧪 Generation with Self-Criticism 👩‍🏫 using Spanning Trees (STGG+)
Fashion repeats itself: Generating tabular data via Diffusion and XGBoost 🌲
Paper / Code Since AlexNet showed the world the power of deep learning, the field of AI has rapidly shifted to focus almost exclusively on deep learning. Some of the main justifications are that 1) neural networks are universal function approximators (UFA, not UFO 🛸), 2) deep learning generally works the best, and 3) it … Continue reading Fashion repeats itself: Generating tabular data via Diffusion and XGBoost 🌲
Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
In this joint work with Vikram Voleti and Christopher Pal, we show that a single diffusion model can solve many video tasks: 1) interpolation, 2) forward/reverse prediction, and 3) unconditional generation through a well-designed masking scheme 🧙‍♀️. See our website, which contains many videos: https://mask-cond-video-diffusion.github.io. The paper can be found here. The code is available … Continue reading Masked Conditional Video Diffusion for Prediction, Generation, and Interpolation
Alternative losses for Relativistic GANs
Further investigation needs to be done, but I suspect some variants of Relativistic average GANs (RaGANs) might be more sensible than the ones I proposed in my paper. If you are using Relativistic GANs, you might also be interested in trying out variant 3, which is the most promising. For simplicity, let's assume we use the … Continue reading Alternative losses for Relativistic GANs
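For context, the baseline the variants build on is the standard relativistic average GAN (RaSGAN) loss, where the critic compares each sample against the average critic output on the opposite batch. A minimal NumPy sketch of that baseline loss (function names and the numerical-stability epsilon are my own, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rasgan_d_loss(c_real, c_fake, eps=1e-12):
    """Relativistic average standard GAN (RaSGAN) discriminator loss.

    c_real, c_fake: raw (pre-sigmoid) critic outputs C(x) on a batch of
    real and fake samples, respectively.
    """
    # real samples should look more realistic than the *average* fake
    real_term = -np.log(sigmoid(c_real - c_fake.mean()) + eps).mean()
    # fake samples should look less realistic than the *average* real
    fake_term = -np.log(1.0 - sigmoid(c_fake - c_real.mean()) + eps).mean()
    return real_term + fake_term

def rasgan_g_loss(c_real, c_fake):
    # the generator loss swaps the roles of real and fake
    return rasgan_d_loss(c_fake, c_real)
```

Note that, unlike a standard GAN, the generator loss here also depends on the critic's outputs on real data, which is what makes the formulation "relativistic".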
