Least Squares Generative Adversarial Networks (LSGAN)

  • TensorFlow implementation of Least Squares Generative Adversarial Networks (ICCV 2017).
  • LSGAN uses a least squares loss function to pull the data generated by the generator toward the real data manifold. Unlike the log loss, which only penalizes samples with the incorrect sign, the least squares loss also penalizes samples that lie far from the decision boundary.
  • The least squares loss makes training more stable because it provides stronger gradients when updating the generator: samples far from the decision boundary still incur a large penalty.
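The least squares objectives above can be sketched in a few lines of NumPy. This is a minimal illustration of the loss formulas, not the repository's code; the discriminator scores below are made-up placeholder values:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """Least squares discriminator loss: push real scores toward 1, fake toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """Least squares generator loss: push fake scores toward 1.

    The penalty grows quadratically with distance from the target, so
    samples far from the decision boundary still yield large gradients,
    unlike the saturating log loss."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)

# Hypothetical raw discriminator scores (linear output, no sigmoid).
d_real = np.array([0.9, 1.1, 0.8])
d_fake = np.array([-0.2, 0.3, 0.1])

print(lsgan_d_loss(d_real, d_fake))
print(lsgan_g_loss(d_fake))
```

Note that the generator loss stays large while the fake scores are far from 1, which is exactly the stabilizing property described above.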

Implementation Details

  • The architectures of the discriminator and generator are exactly the same as described in the paper.
  • The LSGAN model is defined in src/nets/lsgan.py. An example showing how to train and test the model can be found in examples/gans.py.
  • Random input vectors are uniformly sampled within [-1, 1].
  • A linear activation function, instead of a sigmoid, is used for the output of the discriminator, since the least squares loss with a sigmoid saturates easily.
  • When applied to MNIST, dropout with rate 0.5 is used in both the training and testing phases after each convolutional layer of the generator except the output layer, because I found this reduces the noise in the generated images.
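Two of the details above, uniform latent sampling and dropout that stays active at test time, can be sketched as follows. This is a NumPy illustration under my own naming, not the repository's TensorFlow code:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(batch_size, dim=100):
    """Random input vectors uniformly sampled within [-1, 1]."""
    return rng.uniform(-1.0, 1.0, size=(batch_size, dim))

def dropout(x, rate=0.5):
    """Inverted dropout, applied identically in training and testing
    (no is_training switch), as described above for the MNIST generator."""
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)  # rescale so the expected value is preserved

z = sample_latent(4)          # batch of 4 latent vectors of length 100
h = dropout(np.ones(1000))    # surviving units are scaled to 2.0
```

Keeping dropout on at inference means each sampled latent vector can still produce slightly different images across runs; here it is used purely to suppress noise in the outputs.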

Usage

Results

MNIST

  • vector length = 100, image size 28 x 28, generator dropout = 0.5 for both training and testing. Dropout is used for the same reason as in my implementation of DCGAN.

  • The generated images get much better after adding dropout.

| Epoch 1 | Epoch 7 | Epoch 14 | Epoch 21 |
| --- | --- | --- | --- |
| Image | Image | Image | Image |
  • Interpolation between two digits

manifold
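The interpolation above is a straight line in latent space: each intermediate vector is fed through the generator to produce one frame of the morph. A minimal sketch, with hypothetical function names of my own:

```python
import numpy as np

def interpolate(z_a, z_b, steps=8):
    """Linear interpolation between two latent vectors; feeding each
    intermediate vector to the generator yields the morphing sequence."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

# Two random latent codes, sampled the same way as the training inputs.
z_a = np.random.uniform(-1, 1, 100)
z_b = np.random.uniform(-1, 1, 100)
path = interpolate(z_a, z_b)  # shape (8, 100), endpoints are z_a and z_b
```

Each row of `path` would be passed to the generator; the endpoints reproduce the two original digits exactly.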

  • Images generated by uniformly sampling along x = [-1, 1] and y = [-1, 1] when the input vector length is 2.

manifold
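With a 2-D latent space, the manifold plot above comes from evaluating the generator on a uniform grid over [-1, 1] x [-1, 1]. A sketch of how such a grid could be built (my own helper, not the repository's code):

```python
import numpy as np

def latent_grid(n=10):
    """Uniform n x n grid over [-1, 1] x [-1, 1]; each grid point is
    one 2-D latent vector to feed to the generator."""
    xs = np.linspace(-1.0, 1.0, n)
    ys = np.linspace(-1.0, 1.0, n)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

grid = latent_grid(10)  # shape (100, 2)
```

Generating one image per row and tiling them in grid order produces the manifold visualization.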

CelebA

  • vector length = 1024, images rescaled to 64 x 64, no dropout
  • At the beginning, the model generates better and better faces, but it breaks down at around epoch 7. However, with continued training it eventually recovers and generates nice faces again.
| Epoch 1 | Epoch 7 |
| --- | --- |
| Image | Image |

| Epoch 17 | Epoch 27 | Epoch 50 |
| --- | --- | --- |
| Image | Image | Image |
  • More results at epoch 25

finalface

  • Interpolation between two faces

interp1 interp2