
Deep Convolutional Generative Adversarial Networks (DCGAN)

Implementation Details

  • The DCGAN model is defined in /src/nets/dcgan.py. An example showing how to train and test the model is in examples/gans.py.
  • Random input vectors are uniformly sampled from [-1, 1].
  • As mentioned in the paper, batch normalization is applied to all layers of the generator and discriminator except the first and output layers of the discriminator and the last layer of the generator.
  • ReLU is used in the generator except for the output layer, which uses Tanh. LeakyReLU with slope 0.2 is used in the discriminator.
  • All weights are initialized from a zero-centered Normal distribution with standard deviation 0.02. The learning rate is set to 2e-4, and the Adam optimizer with beta1 = 0.5 is used for optimization.
  • When applied to MNIST, dropout with rate 0.5 is used after each convolutional layer of the generator except the output layer, during both training and testing, because I found this reduces noise in the generated images.
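The actual model lives in /src/nets/dcgan.py; as a framework-agnostic sketch, the settings listed above can be written in NumPy like this (the function names here are illustrative, not the repo's API):

```python
import numpy as np

def sample_z(batch_size, z_dim=100, rng=None):
    """Random input vectors, uniformly sampled from [-1, 1]."""
    rng = rng or np.random.default_rng(0)
    return rng.uniform(-1.0, 1.0, size=(batch_size, z_dim))

def init_weights(shape, rng=None):
    """Zero-centered Normal initialization with standard deviation 0.02."""
    rng = rng or np.random.default_rng(0)
    return rng.normal(loc=0.0, scale=0.02, size=shape)

def leaky_relu(x, slope=0.2):
    """Discriminator activation: LeakyReLU with slope 0.2."""
    return np.where(x >= 0, x, slope * x)

def generator_output(x):
    """Generator output activation: tanh, mapping pixels into [-1, 1]."""
    return np.tanh(x)

# Optimizer settings from the list above: Adam with lr = 2e-4, beta1 = 0.5
ADAM_LR, ADAM_BETA1 = 2e-4, 0.5
```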

Usage

Results

MNIST

  • vector length = 100, image size 28 x 28, generator dropout = 0.5 for both training and testing.

  • If dropout is removed, several noisy pixels appear around the digits, as shown below.

(image: noise)

  • The generated images get much better after adding dropout.
| Epoch 1 | Epoch 7 | Epoch 14 | Epoch 21 |
| --- | --- | --- | --- |
| Image | Image | Image | Image |
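The dropout used here is kept active at test time as well as training time, unlike the usual convention of disabling it at inference. A minimal sketch of such a layer, assuming inverted dropout (this is a hypothetical helper, not the repo's code):

```python
import numpy as np

def dropout(x, rate=0.5, rng=None):
    """Inverted dropout, applied during both training and testing.

    Units are zeroed with probability `rate`; survivors are rescaled by
    1 / (1 - rate) so the expected activation is unchanged.
    """
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

Because the layer stays on at test time, each generated sample sees a fresh random mask, which is what suppresses the stray noisy pixels.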
  • Interpolation between two digits

(image: manifold)

  • Images generated by uniformly sampling x in [-1, 1] and y in [-1, 1] when the input vector length is 2.

(image: manifold)
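Both visualizations above boil down to simple operations in latent space: linear interpolation between two latent vectors, and a uniform grid over a 2-D latent space. A NumPy sketch (function names are illustrative):

```python
import numpy as np

def interpolate(z1, z2, steps=8):
    """Linear interpolation between two latent vectors.

    Returns a (steps, z_dim) array whose rows walk from z1 to z2;
    feeding each row to the generator yields the morphing sequence.
    """
    alphas = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - alphas) * z1 + alphas * z2

def latent_grid(n=10, low=-1.0, high=1.0):
    """Uniform grid over a 2-D latent space, one row per (x, y) sample."""
    xs = np.linspace(low, high, n)
    xv, yv = np.meshgrid(xs, xs)
    return np.stack([xv.ravel(), yv.ravel()], axis=1)  # shape (n*n, 2)
```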

CelebA

  • vector length = 100, images are rescaled to 64 x 64, no dropout
| Epoch 1 | Epoch 10 | Epoch 20 | Epoch 25 |
| --- | --- | --- | --- |
| Image | Image | Image | Image |
  • More results at epoch 25

(image: finalface)

  • Interpolation between two faces

(images: interp1, interp2)